pack.deltaCacheSize::
The maximum memory in bytes used for caching deltas in
- linkgit:git-pack-objects[1].
- A value of 0 means no limit. Defaults to 0.
+ linkgit:git-pack-objects[1] before writing them out to a pack.
+ This cache is used to speed up the writing object phase by not
+ having to recompute the final delta result once the best match
+ for all objects is found. However, repacking large repositories on
+ machines that are tight on memory can be badly impacted by this,
+ especially if this cache pushes the system into swapping.
+ A value of 0 means no limit. The smallest size of 1 byte can be
+ used to effectively disable this cache. Defaults to 256 MiB.
pack.deltaCacheLimit::
The maximum size of a delta that is cached in
- linkgit:git-pack-objects[1]. Defaults to 1000.
+ linkgit:git-pack-objects[1]. This cache is used to speed up the
+ writing object phase by not having to recompute the final delta
+ result once the best match for all objects is found. Defaults to 1000.
pack.threads::
Specifies the number of threads to spawn when searching for best
[-l | --files-with-matches] [-L | --files-without-match]
[-z | --null]
[-c | --count] [--all-match]
+ [--max-depth <depth>]
[--color | --no-color]
[-A <post-context>] [-B <pre-context>] [-C <context>]
[-f <file>] [-e] <pattern>
-I::
Don't match the pattern in binary files.
+--max-depth <depth>::
+ For each pathspec given on the command line, descend at most <depth>
+ levels of directories. A negative value means no limit.
+
-w::
--word-regexp::
Match the pattern only at word boundary (either begin at the
-o::
--others::
- Show other files in the output
+ Show other (i.e. untracked) files in the output
-i::
--ignored::
DESCRIPTION
-----------
-Lists commit objects in reverse chronological order starting at the
-given commit(s), taking ancestry relationship into account. This is
-useful to produce human-readable log output.
+List commits that are reachable by following the `parent` links from the
+given commit(s), but exclude commits that are reachable from the one(s)
+given with a '{caret}' in front of them. The output is given in reverse
+chronological order by default.
-Commits which are stated with a preceding '{caret}' cause listing to
-stop at that point. Their parents are implied. Thus the following
-command:
+You can think of this as a set operation. Commits given on the command
+line form a set of commits that are reachable from any of them, and then
+commits reachable from any of the ones given with '{caret}' in front are
+subtracted from that set. The remaining commits are what comes out in the
+command's output. Various other options and path parameters can be used
+to further limit the result.
+
+Thus, the following command:
-----------------------------------------------------------------------
$ git rev-list foo bar ^baz
-----------------------------------------------------------------------
-means "list all the commits which are included in 'foo' and 'bar', but
-not in 'baz'".
+means "list all the commits which are reachable from 'foo' or 'bar', but
+not from 'baz'".
A special notation "'<commit1>'..'<commit2>'" can be used as a
short-hand for "{caret}'<commit1>' '<commit2>'". For example, either of
clear::
Remove all the stashed states. Note that those states will then
- be subject to pruning, and may be difficult or impossible to recover.
+ be subject to pruning, and may be impossible to recover (see
+ 'Examples' below for a possible strategy).
drop [-q|--quiet] [<stash>]::
$ git commit foo -m 'Remaining parts'
----------------------------------------------------------------
+Recovering stashes that were cleared/dropped erroneously::
+
+If you mistakenly drop or clear stashes, they cannot be recovered
+through the normal safety mechanisms. However, you can try the
+following incantation to get a list of stashes that are still in your
+repository, but not reachable any more:
++
+----------------------------------------------------------------
+git fsck --unreachable |
+grep commit | cut -d" " -f3 |
+xargs git log --merges --no-walk --grep=WIP
+----------------------------------------------------------------
+
+
SEE ALSO
--------
linkgit:git-checkout[1],
DESCRIPTION
-----------
-Adds a 'tag' reference in `.git/refs/tags/`
+
+Adds a 'tag' reference in `.git/refs/tags/`. The tag <name> must pass
+linkgit:git-check-ref-format[1], which basically means that control characters,
+space, ~, ^, :, ?, *, [ and \ are prohibited.
Unless `-f` is given, the tag must not yet exist in
`.git/refs/tags/` directory.
Convenience functions that encapsulate a sequence of
start_command() followed by finish_command(). The argument argv
specifies the program and its arguments. The argument opt is zero
- or more of the flags `RUN_COMMAND_NO_STDIN`, `RUN_GIT_CMD`, or
- `RUN_COMMAND_STDOUT_TO_STDERR` that correspond to the members
- .no_stdin, .git_cmd, .stdout_to_stderr of `struct child_process`.
+ or more of the flags `RUN_COMMAND_NO_STDIN`, `RUN_GIT_CMD`,
+ `RUN_COMMAND_STDOUT_TO_STDERR`, or `RUN_SILENT_EXEC_FAILURE`
+ that correspond to the members .no_stdin, .git_cmd,
+ .stdout_to_stderr, .silent_exec_failure of `struct child_process`.
The argument dir corresponds to the member .dir. The argument env
corresponds to the member .env.
+The functions above do the following:
+
+. If a system call failed, errno is set and -1 is returned. A diagnostic
+ is printed.
+
+. If the program was not found, then -1 is returned and errno is set to
+ ENOENT; a diagnostic is printed only if .silent_exec_failure is 0.
+
+. Otherwise, the program is run. If it terminates regularly, its exit
+ code is returned. No diagnostic is printed, even if the exit code is
+ non-zero.
+
+. If the program terminated due to a signal, then the return value is the
+ signal number - 128, i.e. it is negative and so indicates an unusual
+ condition; a diagnostic is printed. This return value can be passed to
+ exit(2), which will report the same code to the parent process that a
+ POSIX shell's $? would report for a program that died from the signal.
+
+
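To make the calling convention concrete, here is a minimal sketch of a
caller that uses `run_command_v_opt` and interprets its return value
according to the rules above. The subcommand name "foo" and the error
messages are made up for the illustration:

------------------------------------------------------------------------
static int run_foo(void)
{
	/* hypothetical subcommand, used only to illustrate the API */
	const char *argv[] = { "foo", NULL };
	int code = run_command_v_opt(argv, RUN_GIT_CMD | RUN_SILENT_EXEC_FAILURE);

	if (code < 0 && errno == ENOENT)
		/* not found; with RUN_SILENT_EXEC_FAILURE nothing was printed */
		return error("'git foo' is not available");
	if (code)
		/* system call failure, non-zero exit, or death by signal */
		return error("'git foo' failed, return value %d", code);
	return 0;	/* regular, successful exit */
}
------------------------------------------------------------------------
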
`start_async`::
Run a function asynchronously. Takes a pointer to a `struct
To specify a new initial working directory for the sub-process,
specify it in the .dir member.
+If the program cannot be found, the functions return -1 and set
+errno to ENOENT. Normally, an error message is printed, but if
+.silent_exec_failure is set to 1, no message is printed for this
+special error condition.
+
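For a caller that sets up `struct child_process` itself, a minimal
sketch of handling this case could look as follows; the subcommand name
"frotz" is made up for the illustration:

------------------------------------------------------------------------
static int run_frotz(void)
{
	const char *argv[] = { "frotz", NULL };
	struct child_process child;
	int code;

	memset(&child, 0, sizeof(child));
	child.argv = argv;
	child.git_cmd = 1;		/* run it as "git frotz" */
	child.no_stdin = 1;
	child.silent_exec_failure = 1;	/* suppress the "cannot run" message */

	code = run_command(&child);	/* start_command() + finish_command() */
	if (code < 0 && errno == ENOENT)
		return error("'git frotz' is not installed");
	return code;
}
------------------------------------------------------------------------
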
* `struct async`
static int longformat;
static int abbrev = DEFAULT_ABBREV;
static int max_candidates = 10;
+static int found_names;
static const char *pattern;
static int always;
memcpy(e->path, path, len);
commit->util = e;
}
+ found_names = 1;
}
static int get_name(const char *path, const unsigned char *sha1, int flag, void *cb_data)
for_each_ref(get_name, NULL);
}
+ if (!found_names)
+ die("cannot describe '%s'", sha1_to_hex(sha1));
+
n = cmit->util;
if (n) {
/*
revs->max_count = 3;
else if (!strcmp(argv[1], "-q"))
options |= DIFF_SILENT_ON_REMOVED;
+ else if (!strcmp(argv[1], "-h"))
+ usage(builtin_diff_usage);
else
return error("invalid option: %s", argv[1]);
argv++; argc--;
return git_color_default_config(var, value, cb);
}
+/*
+ * Return non-zero if max_depth is negative or path has no more than max_depth
+ * slashes.
+ */
+static int accept_subdir(const char *path, int max_depth)
+{
+ if (max_depth < 0)
+ return 1;
+
+ while ((path = strchr(path, '/')) != NULL) {
+ max_depth--;
+ if (max_depth < 0)
+ return 0;
+ path++;
+ }
+ return 1;
+}
+
+/*
+ * Return non-zero if name is a subdirectory of match and is not too deep.
+ */
+static int is_subdir(const char *name, int namelen,
+ const char *match, int matchlen, int max_depth)
+{
+ if (matchlen > namelen || strncmp(name, match, matchlen))
+ return 0;
+
+ if (name[matchlen] == '\0') /* exact match */
+ return 1;
+
+ if (!matchlen || match[matchlen-1] == '/' || name[matchlen] == '/')
+ return accept_subdir(name + matchlen + 1, max_depth);
+
+ return 0;
+}
+
/*
* git grep pathspecs are somewhat different from diff-tree pathspecs;
* pathname wildcards are allowed.
*/
-static int pathspec_matches(const char **paths, const char *name)
+static int pathspec_matches(const char **paths, const char *name, int max_depth)
{
int namelen, i;
if (!paths || !*paths)
- return 1;
+ return accept_subdir(name, max_depth);
namelen = strlen(name);
for (i = 0; paths[i]; i++) {
const char *match = paths[i];
int matchlen = strlen(match);
const char *cp, *meta;
- if (!matchlen ||
- ((matchlen <= namelen) &&
- !strncmp(name, match, matchlen) &&
- (match[matchlen-1] == '/' ||
- name[matchlen] == '\0' || name[matchlen] == '/')))
+ if (is_subdir(name, namelen, match, matchlen, max_depth))
return 1;
if (!fnmatch(match, name, 0))
return 1;
int kept;
if (!S_ISREG(ce->ce_mode))
continue;
- if (!pathspec_matches(paths, ce->name))
+ if (!pathspec_matches(paths, ce->name, opt->max_depth))
continue;
name = ce->name;
if (name[0] == '-') {
struct cache_entry *ce = active_cache[nr];
if (!S_ISREG(ce->ce_mode))
continue;
- if (!pathspec_matches(paths, ce->name))
+ if (!pathspec_matches(paths, ce->name, opt->max_depth))
continue;
/*
* If CE_VALID is on, we assume worktree file and its cache entry
strbuf_addch(&pathbuf, '/');
down = pathbuf.buf + tn_len;
- if (!pathspec_matches(paths, down))
+ if (!pathspec_matches(paths, down, opt->max_depth))
;
else if (S_ISREG(entry.mode))
hit |= grep_sha1(opt, entry.sha1, pathbuf.buf, tn_len);
OPT_SET_INT('I', NULL, &opt.binary,
"don't match patterns in binary files",
GREP_BINARY_NOMATCH),
+ { OPTION_INTEGER, 0, "max-depth", &opt.max_depth, "depth",
+ "descend at most <depth> levels", PARSE_OPT_NONEG,
+ NULL, 1 },
OPT_GROUP(""),
OPT_BIT('E', "extended-regexp", &opt.regflags,
"use extended POSIX regular expressions", REG_EXTENDED),
opt.pathname = 1;
opt.pattern_tail = &opt.pattern_list;
opt.regflags = REG_NEWLINE;
+ opt.max_depth = -1;
strcpy(opt.color_match, GIT_COLOR_RED GIT_COLOR_BOLD);
opt.color = -1;
static const char *fmt_patch_subject_prefix = "PATCH";
static const char *fmt_pretty;
+static const char * const builtin_log_usage =
+ "git log [<options>] [<since>..<until>] [[--] <path>...]\n"
+ " or: git show [options] <object>...";
+
static void cmd_log_init(int argc, const char **argv, const char *prefix,
struct rev_info *rev)
{
rev->show_decorations = 1;
} else if (!strcmp(arg, "--source")) {
rev->show_source = 1;
+ } else if (!strcmp(arg, "-h")) {
+ usage(builtin_log_usage);
} else
die("unrecognized argument: %s", arg);
}
discard_cache();
if (read_cache() < 0)
die("failed to read the cache");
- return -ret;
+ return ret;
}
}
static int pack_compression_seen;
static unsigned long delta_cache_size = 0;
-static unsigned long max_delta_cache_size = 0;
+static unsigned long max_delta_cache_size = 256 * 1024 * 1024;
static unsigned long cache_max_small_delta_size = 1000;
static unsigned long window_memory_limit = 0;
static const char pre_receive_hook[] = "hooks/pre-receive";
static const char post_receive_hook[] = "hooks/post-receive";
-static int run_status(int code, const char *cmd_name)
-{
- switch (code) {
- case 0:
- return 0;
- case -ERR_RUN_COMMAND_FORK:
- return error("fork of %s failed", cmd_name);
- case -ERR_RUN_COMMAND_EXEC:
- return error("execute of %s failed", cmd_name);
- case -ERR_RUN_COMMAND_PIPE:
- return error("pipe failed");
- case -ERR_RUN_COMMAND_WAITPID:
- return error("waitpid failed");
- case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
- return error("waitpid is confused");
- case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
- return error("%s died of signal", cmd_name);
- case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
- return error("%s died strangely", cmd_name);
- default:
- error("%s exited with error code %d", cmd_name, -code);
- return -code;
- }
-}
-
static int run_receive_hook(const char *hook_name)
{
static char buf[sizeof(commands->old_sha1) * 2 + PATH_MAX + 4];
code = start_command(&proc);
if (code)
- return run_status(code, hook_name);
+ return code;
for (cmd = commands; cmd; cmd = cmd->next) {
if (!cmd->error_string) {
size_t n = snprintf(buf, sizeof(buf), "%s %s %s\n",
}
}
close(proc.in);
- return run_status(finish_command(&proc), hook_name);
+ return finish_command(&proc);
}
static int run_update_hook(struct command *cmd)
argv[3] = sha1_to_hex(cmd->new_sha1);
argv[4] = NULL;
- return run_status(run_command_v_opt(argv, RUN_COMMAND_NO_STDIN |
- RUN_COMMAND_STDOUT_TO_STDERR),
- update_hook);
+ return run_command_v_opt(argv, RUN_COMMAND_NO_STDIN |
+ RUN_COMMAND_STDOUT_TO_STDERR);
}
static int is_ref_checked_out(const char *ref)
argv[argc] = NULL;
status = run_command_v_opt(argv, RUN_COMMAND_NO_STDIN
| RUN_COMMAND_STDOUT_TO_STDERR);
- run_status(status, update_post_hook);
}
static void execute_commands(const char *unpacker_error)
code = run_command_v_opt(unpacker, RUN_GIT_CMD);
if (!code)
return NULL;
- run_status(code, unpacker[0]);
return "unpack-objects abnormal exit";
} else {
const char *keeper[7];
ip.git_cmd = 1;
status = start_command(&ip);
if (status) {
- run_status(status, keeper[0]);
return "index-pack fork failed";
}
pack_lockfile = index_pack_lockfile(ip.out);
reprepare_packed_git();
return NULL;
}
- run_status(status, keeper[0]);
return "index-pack abnormal exit";
}
}
static void show_pack_info(struct packed_git *p)
{
- uint32_t nr_objects, i, chain_histogram[MAX_CHAIN+1];
+ uint32_t nr_objects, i;
+ int cnt;
+ unsigned long chain_histogram[MAX_CHAIN+1], baseobjects;
nr_objects = p->num_objects;
memset(chain_histogram, 0, sizeof(chain_histogram));
+ baseobjects = 0;
for (i = 0; i < nr_objects; i++) {
const unsigned char *sha1;
&delta_chain_length,
base_sha1);
printf("%s ", sha1_to_hex(sha1));
- if (!delta_chain_length)
+ if (!delta_chain_length) {
printf("%-6s %lu %lu %"PRIuMAX"\n",
type, size, store_size, (uintmax_t)offset);
+ baseobjects++;
+ }
else {
printf("%-6s %lu %lu %"PRIuMAX" %u %s\n",
type, size, store_size, (uintmax_t)offset,
}
}
- for (i = 0; i <= MAX_CHAIN; i++) {
- if (!chain_histogram[i])
+ if (baseobjects)
+ printf("non delta: %lu object%s\n",
+ baseobjects, baseobjects > 1 ? "s" : "");
+
+ for (cnt = 1; cnt <= MAX_CHAIN; cnt++) {
+ if (!chain_histogram[cnt])
continue;
- printf("chain length = %"PRIu32": %"PRIu32" object%s\n", i,
- chain_histogram[i], chain_histogram[i] > 1 ? "s" : "");
+ printf("chain length = %d: %lu object%s\n", cnt,
+ chain_histogram[cnt],
+ chain_histogram[cnt] > 1 ? "s" : "");
}
if (chain_histogram[0])
- printf("chain length > %d: %"PRIu32" object%s\n", MAX_CHAIN,
- chain_histogram[0], chain_histogram[0] > 1 ? "s" : "");
+ printf("chain length > %d: %lu object%s\n", MAX_CHAIN,
+ chain_histogram[0],
+ chain_histogram[0] > 1 ? "s" : "");
}
static int verify_one_pack(const char *path, int verbose)
extern int index_path(unsigned char *sha1, const char *path, struct stat *st, int write_object);
extern void fill_stat_cache_info(struct cache_entry *ce, struct stat *st);
+/* "careful lstat()" */
+extern int check_path(const char *path, int len, struct stat *st);
+
#define REFRESH_REALLY 0x0001 /* ignore_valid */
#define REFRESH_UNMERGED 0x0002 /* allow unmerged */
#define REFRESH_QUIET 0x0004 /* be quiet about it */
#define S_IROTH 0
#define S_IXOTH 0
-#define WIFEXITED(x) ((unsigned)(x) < 259) /* STILL_ACTIVE */
+#define WIFEXITED(x) 1
+#define WIFSIGNALED(x) 0
#define WEXITSTATUS(x) ((x) & 0xff)
-#define WIFSIGNALED(x) ((unsigned)(x) > 259)
+#define WTERMSIG(x) SIGTERM
#define SIGHUP 1
#define SIGQUIT 3
--extended-regexp --basic-regexp --fixed-strings
--files-with-matches --name-only
--files-without-match
+ --max-depth
--count
--and --or --not --all-match
"
(git-get-string-sha1
(git-call-process-string-display-error "write-tree"))))
-(defun git-commit-tree (buffer tree head)
- "Call git-commit-tree with buffer as input and return the resulting commit SHA1."
+(defun git-commit-tree (buffer tree parent)
+ "Create a commit and possibly update HEAD.
+Create a commit with the message in BUFFER using the tree with hash TREE.
+Use PARENT as the parent of the new commit. If PARENT is the current \"HEAD\",
+update the \"HEAD\" reference to the new commit."
(let ((author-name (git-get-committer-name))
(author-email (git-get-committer-email))
(subject "commit (initial): ")
author-date log-start log-end args coding-system-for-write)
- (when head
+ (when parent
(setq subject "commit: ")
(push "-p" args)
- (push head args))
+ (push parent args))
(with-current-buffer buffer
(goto-char (point-min))
(if
(apply #'git-run-command-region
buffer log-start log-end env
"commit-tree" tree (nreverse args))))))
- (when commit (git-update-ref "HEAD" commit head subject))
+ (when commit (git-update-ref "HEAD" commit parent subject))
commit)))
(defun git-empty-db-p ()
status = finish_command(&child_process);
if (status)
- error("external filter %s failed %d", params->cmd, -status);
+ error("external filter %s failed %d", params->cmd, status);
return (write_err || status);
}
return 0;
}
+/*
+ * This is like 'lstat()', except it refuses to follow symlinks
+ * in the path.
+ */
+int check_path(const char *path, int len, struct stat *st)
+{
+ if (has_symlink_leading_path(path, len)) {
+ errno = ENOENT;
+ return -1;
+ }
+ return lstat(path, st);
+}
+
int checkout_entry(struct cache_entry *ce, const struct checkout *state, char *topath)
{
static char path[PATH_MAX + 1];
strcpy(path + len, ce->name);
len += ce_namelen(ce);
- if (!lstat(path, &st)) {
+ if (!check_path(path, len, &st)) {
unsigned changed = ce_match_stat(ce, &st, CE_MATCH_IGNORE_VALID);
if (!changed)
return 0;
fi
# The tree must be really really clean.
-if ! git update-index --ignore-submodules --refresh; then
- die "cannot rebase: you have unstaged changes"
+if ! git update-index --ignore-submodules --refresh > /dev/null; then
+ echo >&2 "cannot rebase: you have unstaged changes"
+ git diff --name-status -r --ignore-submodules -- >&2
+ exit 1
fi
diff=$(git diff-index --cached --name-status -r --ignore-submodules HEAD --)
case "$diff" in
print STDOUT "\n# $path\n";
my $s = $props->{'svn:ignore'} or return;
$s =~ s/[\r\n]+/\n/g;
+ $s =~ s/^\n+//;
chomp $s;
$s =~ s#^#$path#gm;
print STDOUT "$s\n";
open(GITIGNORE, '>', $ignore)
or fatal("Failed to open `$ignore' for writing: $!");
$s =~ s/[\r\n]+/\n/g;
+ $s =~ s/^\n+//;
chomp $s;
# Prefix all patterns so that the ignore doesn't apply
# to sub-directories.
$repo_id = $Git::SVN::default_repo_id;
}
unless (defined $ref_id && length $ref_id) {
- $_[2] = $ref_id = $Git::SVN::default_ref_id;
+ $_prefix = '' unless defined($_prefix);
+ $_[2] = $ref_id = $_prefix . $Git::SVN::default_ref_id;
}
$_[1] = $repo_id;
my $dir = "$ENV{GIT_DIR}/svn/$ref_id";
* if we fail because the command is not found, it is
* OK to return. Otherwise, we just pass along the status code.
*/
- status = run_command_v_opt(argv, 0);
- if (status != -ERR_RUN_COMMAND_EXEC) {
- if (IS_RUN_COMMAND_ERR(status))
- die("unable to run '%s'", argv[0]);
- exit(-status);
- }
- errno = ENOENT; /* as if we called execvp */
+ status = run_command_v_opt(argv, RUN_SILENT_EXEC_FAILURE);
+ if (status >= 0 || errno != ENOENT)
+ exit(status);
argv[0] = tmp;
int pathname;
int null_following_name;
int color;
+ int max_depth;
int funcname;
char color_match[COLOR_MAXLEN];
const char *color_external;
struct http_pack_request *new_http_pack_request(
struct packed_git *target, const char *base_url)
{
- char *url;
char *filename;
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
end_url_with_slash(&buf, base_url);
strbuf_addf(&buf, "objects/pack/pack-%s.pack",
sha1_to_hex(target->sha1));
- url = strbuf_detach(&buf, NULL);
- preq->url = xstrdup(url);
+ preq->url = strbuf_detach(&buf, NULL);
filename = sha1_pack_name(target->sha1);
snprintf(preq->filename, sizeof(preq->filename), "%s", filename);
preq->slot->local = preq->packfile;
curl_easy_setopt(preq->slot->curl, CURLOPT_FILE, preq->packfile);
curl_easy_setopt(preq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
- curl_easy_setopt(preq->slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(preq->slot->curl, CURLOPT_URL, preq->url);
curl_easy_setopt(preq->slot->curl, CURLOPT_HTTPHEADER,
no_pragma_header);
abort:
free(filename);
+ free(preq->url);
+ free(preq);
return NULL;
}
char *hex = sha1_to_hex(sha1);
char *filename;
char prevfile[PATH_MAX];
- char *url;
int prevlocal;
unsigned char prev_buf[PREV_BUF_SIZE];
ssize_t prev_read = 0;
git_SHA1_Init(&freq->c);
- url = get_remote_object_url(base_url, hex, 0);
- freq->url = xstrdup(url);
+ freq->url = get_remote_object_url(base_url, hex, 0);
/*
* If a previous temp file is present, process what was already
if (prev_posn>0) {
prev_posn = 0;
lseek(freq->localfile, 0, SEEK_SET);
- ftruncate(freq->localfile, 0);
+ if (ftruncate(freq->localfile, 0) < 0) {
+ error("Couldn't truncate temporary file %s for %s: %s",
+ freq->tmpfile, freq->filename, strerror(errno));
+ goto abort;
+ }
}
}
curl_easy_setopt(freq->slot->curl, CURLOPT_FILE, freq);
curl_easy_setopt(freq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
curl_easy_setopt(freq->slot->curl, CURLOPT_ERRORBUFFER, freq->errorstr);
- curl_easy_setopt(freq->slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(freq->slot->curl, CURLOPT_URL, freq->url);
curl_easy_setopt(freq->slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
/*
return freq;
- free(url);
abort:
free(filename);
+ free(freq->url);
free(freq);
return NULL;
}
args[2] = cmd.buf;
status = run_command_v_opt(args, 0);
- if (status < -ERR_RUN_COMMAND_FORK)
- ; /* failure in run-command */
- else
- status = -status;
fd = open(temp[1], O_RDONLY);
if (fd < 0)
goto bad;
{
int need_in, need_out, need_err;
int fdin[2], fdout[2], fderr[2];
+ int failed_errno = failed_errno; /* self-assignment silences gcc's "may be used uninitialized" warning */
/*
* In case of errors we must keep the promise to close FDs
need_in = !cmd->no_stdin && cmd->in < 0;
if (need_in) {
if (pipe(fdin) < 0) {
+ failed_errno = errno;
if (cmd->out > 0)
close(cmd->out);
- return -ERR_RUN_COMMAND_PIPE;
+ goto fail_pipe;
}
cmd->in = fdin[1];
}
&& cmd->out < 0;
if (need_out) {
if (pipe(fdout) < 0) {
+ failed_errno = errno;
if (need_in)
close_pair(fdin);
else if (cmd->in)
close(cmd->in);
- return -ERR_RUN_COMMAND_PIPE;
+ goto fail_pipe;
}
cmd->out = fdout[0];
}
need_err = !cmd->no_stderr && cmd->err < 0;
if (need_err) {
if (pipe(fderr) < 0) {
+ failed_errno = errno;
if (need_in)
close_pair(fdin);
else if (cmd->in)
close_pair(fdout);
else if (cmd->out)
close(cmd->out);
- return -ERR_RUN_COMMAND_PIPE;
+fail_pipe:
+ error("cannot create pipe for %s: %s",
+ cmd->argv[0], strerror(failed_errno));
+ errno = failed_errno;
+ return -1;
}
cmd->err = fderr[0];
}
strerror(errno));
exit(127);
}
+ if (cmd->pid < 0)
+ error("cannot fork() for %s: %s", cmd->argv[0],
+ strerror(failed_errno = errno));
#else
int s0 = -1, s1 = -1, s2 = -1; /* backups of stdin, stdout, stderr */
const char **sargv = cmd->argv;
}
cmd->pid = mingw_spawnvpe(cmd->argv[0], cmd->argv, env);
+ failed_errno = errno;
+ if (cmd->pid < 0 && (!cmd->silent_exec_failure || errno != ENOENT))
+ error("cannot spawn %s: %s", cmd->argv[0], strerror(errno));
if (cmd->env)
free_environ(env);
#endif
if (cmd->pid < 0) {
- int err = errno;
if (need_in)
close_pair(fdin);
else if (cmd->in)
close(cmd->out);
if (need_err)
close_pair(fderr);
- return err == ENOENT ?
- -ERR_RUN_COMMAND_EXEC :
- -ERR_RUN_COMMAND_FORK;
+ errno = failed_errno;
+ return -1;
}
if (need_in)
return 0;
}
-static int wait_or_whine(pid_t pid)
+static int wait_or_whine(pid_t pid, const char *argv0, int silent_exec_failure)
{
- for (;;) {
- int status, code;
- pid_t waiting = waitpid(pid, &status, 0);
-
- if (waiting < 0) {
- if (errno == EINTR)
- continue;
- error("waitpid failed (%s)", strerror(errno));
- return -ERR_RUN_COMMAND_WAITPID;
- }
- if (waiting != pid)
- return -ERR_RUN_COMMAND_WAITPID_WRONG_PID;
- if (WIFSIGNALED(status))
- return -ERR_RUN_COMMAND_WAITPID_SIGNAL;
-
- if (!WIFEXITED(status))
- return -ERR_RUN_COMMAND_WAITPID_NOEXIT;
+ int status, code = -1;
+ pid_t waiting;
+ int failed_errno = 0;
+
+ while ((waiting = waitpid(pid, &status, 0)) < 0 && errno == EINTR)
+ ; /* nothing */
+
+ if (waiting < 0) {
+ failed_errno = errno;
+ error("waitpid for %s failed: %s", argv0, strerror(errno));
+ } else if (waiting != pid) {
+ error("waitpid is confused (%s)", argv0);
+ } else if (WIFSIGNALED(status)) {
+ code = WTERMSIG(status);
+ error("%s died of signal %d", argv0, code);
+ /*
+ * This return value is chosen so that code & 0xff
+ * mimics the exit code that a POSIX shell would report for
+ * a program that died from this signal.
+ */
+ code -= 128;
+ } else if (WIFEXITED(status)) {
code = WEXITSTATUS(status);
- switch (code) {
- case 127:
- return -ERR_RUN_COMMAND_EXEC;
- case 0:
- return 0;
- default:
- return -code;
+ /*
+ * Convert special exit code when execvp failed.
+ */
+ if (code == 127) {
+ code = -1;
+ failed_errno = ENOENT;
+ if (!silent_exec_failure)
+ error("cannot run %s: %s", argv0,
+ strerror(ENOENT));
}
+ } else {
+ error("waitpid is confused (%s)", argv0);
}
+ errno = failed_errno;
+ return code;
}
int finish_command(struct child_process *cmd)
{
- return wait_or_whine(cmd->pid);
+ return wait_or_whine(cmd->pid, cmd->argv[0], cmd->silent_exec_failure);
}
int run_command(struct child_process *cmd)
cmd->no_stdin = opt & RUN_COMMAND_NO_STDIN ? 1 : 0;
cmd->git_cmd = opt & RUN_GIT_CMD ? 1 : 0;
cmd->stdout_to_stderr = opt & RUN_COMMAND_STDOUT_TO_STDERR ? 1 : 0;
+ cmd->silent_exec_failure = opt & RUN_SILENT_EXEC_FAILURE ? 1 : 0;
}
int run_command_v_opt(const char **argv, int opt)
int finish_async(struct async *async)
{
#ifndef __MINGW32__
- int ret = 0;
-
- if (wait_or_whine(async->pid))
- ret = error("waitpid (async) failed");
+ int ret = wait_or_whine(async->pid, "child process", 0);
#else
DWORD ret = 0;
if (WaitForSingleObject(async->tid, INFINITE) != WAIT_OBJECT_0)
hook.env = env;
}
- ret = start_command(&hook);
+ ret = run_command(&hook);
free(argv);
- if (ret) {
- warning("Could not spawn %s", argv[0]);
- return ret;
- }
- ret = finish_command(&hook);
- if (ret == -ERR_RUN_COMMAND_WAITPID_SIGNAL)
- warning("%s exited due to uncaught signal", argv[0]);
-
return ret;
}
#ifndef RUN_COMMAND_H
#define RUN_COMMAND_H
-enum {
- ERR_RUN_COMMAND_FORK = 10000,
- ERR_RUN_COMMAND_EXEC,
- ERR_RUN_COMMAND_PIPE,
- ERR_RUN_COMMAND_WAITPID,
- ERR_RUN_COMMAND_WAITPID_WRONG_PID,
- ERR_RUN_COMMAND_WAITPID_SIGNAL,
- ERR_RUN_COMMAND_WAITPID_NOEXIT,
-};
-#define IS_RUN_COMMAND_ERR(x) (-(x) >= ERR_RUN_COMMAND_FORK)
-
struct child_process {
const char **argv;
pid_t pid;
unsigned no_stdout:1;
unsigned no_stderr:1;
unsigned git_cmd:1; /* if this is to be git sub-command */
+ unsigned silent_exec_failure:1;
unsigned stdout_to_stderr:1;
void (*preexec_cb)(void);
};
#define RUN_COMMAND_NO_STDIN 1
#define RUN_GIT_CMD 2 /*If this is to be git sub-command */
#define RUN_COMMAND_STDOUT_TO_STDERR 4
+#define RUN_SILENT_EXEC_FAILURE 8
int run_command_v_opt(const char **argv, int opt);
/*
longest_path_match(name, len, cache->path, cache->len,
&previous_slash);
match_flags = cache->flags & track_flags & (FL_NOENT|FL_SYMLINK);
+
+ if (!(track_flags & FL_FULLPATH) && match_len == len)
+ match_len = last_slash = previous_slash;
+
if (match_flags && match_len == cache->len)
return match_flags;
/*
# Copyright (c) 2005 Junio C Hamano
#
+-include ../config.mak
+
#GIT_TEST_OPTS=--verbose --debug
SHELL_PATH ?= $(SHELL)
TAR ?= $(TAR)
'
test_expect_success 'init creates a new deep directory' '
+ rm -fr newdir &&
+ git init newdir/a/b/c &&
+ test -d newdir/a/b/c/.git/refs
+'
+
+test_expect_success POSIXPERM 'init creates a new deep directory (umask vs. shared)' '
rm -fr newdir &&
(
# Leading directories should honor umask while
git init --bare --shared=0660 newdir/a/b/c &&
test -d newdir/a/b/c/refs &&
ls -ld newdir/a newdir/a/b > lsab.out &&
- ! grep -v "^drwxrw[sx]r-x" ls.out &&
+ ! grep -v "^drwxrw[sx]r-x" lsab.out &&
ls -ld newdir/a/b/c > lsc.out &&
! grep -v "^drwxrw[sx]---" lsc.out
)
'
. ./test-lib.sh
-. ../lib-rebase.sh
+. "$TEST_DIRECTORY"/lib-rebase.sh
set_fake_editor
'
. ./test-lib.sh
-. ../lib-rebase.sh
+. "$TEST_DIRECTORY"/lib-rebase.sh
set_fake_editor
'
. ./test-lib.sh
-. ../lib-rebase.sh
+. "$TEST_DIRECTORY"/lib-rebase.sh
# Set up branches like this:
# A1---B1---E1---F1---G1
'
test_expect_success 'format-patch from a subdirectory (3)' '
- here="$TEST_DIRECTORY/$test" &&
rm -f 0* &&
filename=$(
rm -rf sub &&
mkdir -p sub/dir &&
cd sub/dir &&
- git format-patch -1 -o "$here"
+ git format-patch -1 -o "$TRASH_DIRECTORY"
) &&
basename=$(expr "$filename" : ".*/\(.*\)") &&
test -f "$basename"
git update-index --assume-unchanged file &&
echo second >file &&
git diff --cached >actual &&
- test_cmp ../t4020/diff.NUL actual
+ test_cmp "$TEST_DIRECTORY"/t4020/diff.NUL actual
'
test_done
D=`pwd`
+test_bundle_object_count () {
+ git verify-pack -v "$1" >verify.out &&
+ test "$2" = $(grep '^[0-9a-f]\{40\} ' verify.out | wc -l)
+}
+
test_expect_success setup '
echo >file original &&
git add file &&
test_must_fail git fetch "$D/bundle1" master:master
'
+
test_expect_success 'bundle 1 has only 3 files ' '
cd "$D" &&
(
cat
) <bundle1 >bundle.pack &&
git index-pack bundle.pack &&
- verify=$(git verify-pack -v bundle.pack) &&
- test 4 = $(echo "$verify" | wc -l)
+ test_bundle_object_count bundle.pack 3
'
test_expect_success 'unbundle 2' '
cat
) <bundle3 >bundle.pack &&
git index-pack bundle.pack &&
- test 4 = $(git verify-pack -v bundle.pack | wc -l)
+ test_bundle_object_count bundle.pack 3
'
test_expect_success 'bundle should be able to create a full history' '
! echo "0032want $(git rev-parse HEAD)
0034shallow $(git rev-parse HEAD^)00000009done
0000" | git upload-pack . > /dev/null 2> output.err &&
- grep "waitpid (async) failed" output.err
+ # pack-objects survived
+ grep "Total.*, reused" output.err &&
+ # but there was an error, which must have been in rev-list
+ grep "bad tree object" output.err
'
test_expect_success 'upload-pack fails due to error in pack-objects enumeration' '
--- /dev/null
+#!/bin/sh
+
+test_description='merging when a directory was replaced with a symlink'
+. ./test-lib.sh
+
+if ! test_have_prereq SYMLINKS
+then
+ say 'Symbolic links not supported, skipping tests.'
+ test_done
+fi
+
+test_expect_success 'create a commit where dir a/b changed to symlink' '
+ mkdir -p a/b/c a/b-2/c &&
+ > a/b/c/d &&
+ > a/b-2/c/d &&
+ > a/x &&
+ git add -A &&
+ git commit -m base &&
+ git tag start &&
+ rm -rf a/b &&
+ ln -s b-2 a/b &&
+ git add -A &&
+ git commit -m "dir to symlink"
+'
+
+test_expect_success 'keep a/b-2/c/d across checkout' '
+ git checkout HEAD^0 &&
+ git reset --hard master &&
+ git rm --cached a/b &&
+ git commit -m "untracked symlink remains" &&
+ git checkout start^0 &&
+ test -f a/b-2/c/d
+'
+
+test_expect_success 'checkout should not have deleted a/b-2/c/d' '
+ git checkout HEAD^0 &&
+ git reset --hard master &&
+ git checkout start^0 &&
+ test -f a/b-2/c/d
+'
+
+test_expect_success 'setup for merge test' '
+ git reset --hard &&
+ test -f a/b-2/c/d &&
+ echo x > a/x &&
+ git add a/x &&
+ git commit -m x &&
+ git tag baseline
+'
+
+test_expect_success 'do not lose a/b-2/c/d in merge (resolve)' '
+ git reset --hard &&
+ git checkout baseline^0 &&
+ git merge -s resolve master &&
+ test -h a/b &&
+ test -f a/b-2/c/d
+'
+
+test_expect_failure 'do not lose a/b-2/c/d in merge (recursive)' '
+ git reset --hard &&
+ git checkout baseline^0 &&
+ git merge -s recursive master &&
+ test -h a/b &&
+ test -f a/b-2/c/d
+'
+
+test_expect_success 'setup a merge where dir a/b-2 changed to symlink' '
+ git reset --hard &&
+ git checkout start^0 &&
+ rm -rf a/b-2 &&
+ ln -s b a/b-2 &&
+ git add -A &&
+ git commit -m "dir a/b-2 to symlink" &&
+ git tag test2
+'
+
+test_expect_failure 'merge should not have conflicts (resolve)' '
+ git reset --hard &&
+ git checkout baseline^0 &&
+ git merge -s resolve test2 &&
+ test -h a/b-2 &&
+ test -f a/b/c/d
+'
+
+test_expect_failure 'merge should not have conflicts (recursive)' '
+ git reset --hard &&
+ git checkout baseline^0 &&
+ git merge -s recursive test2 &&
+ test -h a/b-2 &&
+ test -f a/b/c/d
+'
+
+test_done
echo foo mmap bar_mmap
echo foo_mmap bar mmap baz
} >file &&
+ echo vvv >v &&
echo ww w >w &&
echo x x xx x >x &&
echo y yy >y &&
echo zzz > z &&
mkdir t &&
echo test >t/t &&
- git add file w x y z t/t hello.c &&
+ echo vvv >t/v &&
+ mkdir t/a &&
+ echo vvv >t/a/v &&
+ git add . &&
test_tick &&
git commit -m initial
'
! git grep -c test $H | grep /dev/null
'
+ test_expect_success "grep --max-depth -1 $L" '
+ {
+ echo ${HC}t/a/v:1:vvv
+ echo ${HC}t/v:1:vvv
+ echo ${HC}v:1:vvv
+ } >expected &&
+ git grep --max-depth -1 -n -e vvv $H >actual &&
+ test_cmp expected actual
+ '
+
+ test_expect_success "grep --max-depth 0 $L" '
+ {
+ echo ${HC}v:1:vvv
+ } >expected &&
+ git grep --max-depth 0 -n -e vvv $H >actual &&
+ test_cmp expected actual
+ '
+
+ test_expect_success "grep --max-depth 0 -- '*' $L" '
+ {
+ echo ${HC}t/a/v:1:vvv
+ echo ${HC}t/v:1:vvv
+ echo ${HC}v:1:vvv
+ } >expected &&
+ git grep --max-depth 0 -n -e vvv $H -- "*" >actual &&
+ test_cmp expected actual
+ '
+
+ test_expect_success "grep --max-depth 1 $L" '
+ {
+ echo ${HC}t/v:1:vvv
+ echo ${HC}v:1:vvv
+ } >expected &&
+ git grep --max-depth 1 -n -e vvv $H >actual &&
+ test_cmp expected actual
+ '
+
+ test_expect_success "grep --max-depth 0 -- t $L" '
+ {
+ echo ${HC}t/v:1:vvv
+ } >expected &&
+ git grep --max-depth 0 -n -e vvv $H -- t >actual &&
+ test_cmp expected actual
+ '
+
done
cat >expected <<EOF
touch deeply/nested/directory/.keep &&
svn_cmd add deeply &&
svn_cmd up &&
- svn_cmd propset -R svn:ignore 'no-such-file*' .
+ svn_cmd propset -R svn:ignore '
+no-such-file*
+' .
svn_cmd commit -m 'propset svn:ignore'
cd .. &&
git svn show-ignore > show-ignore.got &&
"
cat >prop.expect <<\EOF
+
no-such-file*
EOF
git config --add svn-remote.svn.fetch "branches/b:refs/remotes/b" &&
for i in tags/0.1 tags/0.2 tags/0.3; do
git config --add svn-remote.svn.fetch \
- $i:refs/remotes/$i || exit 1; done
+ $i:refs/remotes/$i || exit 1; done &&
+ git config --get-all svn-remote.svn.fetch > fetch.out &&
+ grep "^trunk:refs/remotes/trunk$" fetch.out &&
+ grep "^branches/a:refs/remotes/a$" fetch.out &&
+ grep "^branches/b:refs/remotes/b$" fetch.out &&
+ grep "^tags/0\.1:refs/remotes/tags/0\.1$" fetch.out &&
+ grep "^tags/0\.2:refs/remotes/tags/0\.2$" fetch.out &&
+ grep "^tags/0\.3:refs/remotes/tags/0\.3$" fetch.out &&
+ grep "^:refs/${remotes_git_svn}" fetch.out
'
# refs should all be different, but the trees should all be the same:
echo "$svnrepo"$path > "$GIT_DIR"/svn/$ref/info/url ) || exit 1;
done &&
git svn migrate --minimize &&
- test -z "`git config -l |grep -v "^svn-remote\.git-svn\."`" &&
+ test -z "`git config -l | grep "^svn-remote\.git-svn\."`" &&
git config --get-all svn-remote.svn.fetch > fetch.out &&
grep "^trunk:refs/remotes/trunk$" fetch.out &&
grep "^branches/a:refs/remotes/a$" fetch.out &&
grep "^branches/b:refs/remotes/b$" fetch.out &&
grep "^tags/0\.1:refs/remotes/tags/0\.1$" fetch.out &&
grep "^tags/0\.2:refs/remotes/tags/0\.2$" fetch.out &&
- grep "^tags/0\.3:refs/remotes/tags/0\.3$" fetch.out
+ grep "^tags/0\.3:refs/remotes/tags/0\.3$" fetch.out &&
grep "^:refs/${remotes_git_svn}" fetch.out
'
valgrind=t; verbose=t; shift ;;
--tee)
shift ;; # was handled already
+ --root=*)
+ root=$(expr "z$1" : 'z[^=]*=\(.*\)')
+ shift ;;
*)
echo "error: unknown test option '$1'" >&2; exit 1 ;;
esac
# Test repository
test="trash directory.$(basename "$0" .sh)"
-test ! -z "$debug" || remove_trash="$TEST_DIRECTORY/$test"
+test -n "$root" && test="$root/$test"
+case "$test" in
+/*) TRASH_DIRECTORY="$test" ;;
+ *) TRASH_DIRECTORY="$TEST_DIRECTORY/$test" ;;
+esac
+test ! -z "$debug" || remove_trash=$TRASH_DIRECTORY
rm -fr "$test" || {
GIT_EXIT_OK=t
echo >&5 "FATAL: Cannot prepare test area"
{
const char **argv;
int argc;
- int err;
if (flags & TRANSPORT_PUSH_MIRROR)
return error("http transport does not support mirror mode");
while (refspec_nr--)
argv[argc++] = *refspec++;
argv[argc] = NULL;
- err = run_command_v_opt(argv, RUN_GIT_CMD);
- switch (err) {
- case -ERR_RUN_COMMAND_FORK:
- error("unable to fork for %s", argv[0]);
- case -ERR_RUN_COMMAND_EXEC:
- error("unable to exec %s", argv[0]);
- break;
- case -ERR_RUN_COMMAND_WAITPID:
- case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
- case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
- case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
- error("%s died with strange error", argv[0]);
- }
- return !!err;
+ return !!run_command_v_opt(argv, RUN_GIT_CMD);
}
static struct ref *get_refs_via_curl(struct transport *transport, int for_push)