does not trigger if the character before such a carriage-return
is not a whitespace (not enabled by default).
+core.fsyncobjectfiles::
+ This boolean will enable 'fsync()' when writing object files.
++
+This is a total waste of time and effort on a filesystem that orders
+data writes properly, but can be useful for filesystems that do not use
+journalling (traditional UNIX filesystems) or that only journal metadata
+and not file contents (OS X's HFS+, or Linux ext3 with "data=writeback").
+
alias.*::
Command aliases for the linkgit:git[1] command wrapper - e.g.
after defining "alias.last = cat-file commit HEAD", the invocation
relative to the repository root (this was the default for git
prior to v1.5.4).
+status.showUntrackedFiles::
+ By default, linkgit:git-status[1] and linkgit:git-commit[1] show
+ files which are not currently tracked by Git. Directories which
+ contain only untracked files are shown with the directory name
+ only. Showing untracked files means that Git needs to lstat() all
+ the files in the whole repository, which might be slow on some
+ systems. So, this variable controls how the commands display
+ the untracked files. Possible values are:
++
+--
+ - 'no' - Show no untracked files
+ - 'normal' - Shows untracked files and directories
+ - 'all' - Also shows individual files in untracked directories.
+--
++
+If this variable is not specified, it defaults to 'normal'.
+This variable can be overridden with the -u|--untracked-files option
+of linkgit:git-status[1] and linkgit:git-commit[1].
+
tar.umask::
This variable can be used to restrict the permission bits of
tar archive entries. The default is 0002, which turns off the
SYNOPSIS
--------
[verse]
-'git-commit' [-a | --interactive] [-s] [-v] [-u] [--amend]
+'git-commit' [-a | --interactive] [-s] [-v] [-u[<mode>]] [--amend]
[(-c | -C) <commit>] [-F <file> | -m <msg>]
[--allow-empty] [--no-verify] [-e] [--author=<author>]
[--cleanup=<mode>] [--] [[-i | -o ]<file>...]
the last commit without committing changes that have
already been staged.
--u::
---untracked-files::
- Show all untracked files, also those in uninteresting
- directories, in the "Untracked files:" section of commit
- message template. Without this option only its name and
- a trailing slash are displayed for each untracked
- directory.
+-u[<mode>]::
+--untracked-files[=<mode>]::
+ Show untracked files (Default: 'all').
++
+The mode parameter is optional, and is used to specify
+the handling of untracked files. The possible options are:
++
+--
+ - 'no' - Show no untracked files
+ - 'normal' - Shows untracked files and directories
+ - 'all' - Also shows individual files in untracked directories.
+--
++
+See linkgit:git-config[1] for the configuration variable
+used to change the default when the option is not
+specified.
-v::
--verbose::
unsigned, with 'verbatim', they will be silently exported
and with 'warn', they will be exported, but you will see a warning.
+--export-marks=<file>::
+ Dumps the internal marks table to <file> when complete.
+ Marks are written one per line as `:markid SHA-1`. Only marks
+ for revisions are dumped; marks for blobs are ignored.
+ Backends can use this file to validate imports after they
+ have been completed, or to save the marks table across
+ incremental runs. As <file> is only opened and truncated
+ at completion, the same path can also be safely given to
+ \--import-marks.
+
+--import-marks=<file>::
+ Before processing any input, load the marks specified in
+ <file>. The input file must exist, must be readable, and
+ must use the same format as produced by \--export-marks.
++
+Any commits that have already been marked will not be exported again.
+If the backend uses a similar \--import-marks file, this allows for
+incremental bidirectional exporting of the repository by keeping the
+marks the same across runs.
+
EXAMPLES
--------
SYNOPSIS
--------
-'git-update-ref' [-m <reason>] (-d <ref> <oldvalue> | [--no-deref] <ref> <newvalue> [<oldvalue>])
+'git-update-ref' [-m <reason>] (-d <ref> [<oldvalue>] | [--no-deref] <ref> <newvalue> [<oldvalue>])
DESCRIPTION
-----------
Creating an archive
~~~~~~~~~~~~~~~~~~~
+`export-ignore`
+^^^^^^^^^^^^^^^
+
+Files and directories with the attribute `export-ignore` won't be added to
+archive files.
+
`export-subst`
^^^^^^^^^^^^^^
directory to trigger action at certain points. When
`git-init` is run, a handful example hooks are copied in the
`hooks` directory of the new repository, but by default they are
-all disabled. To enable a hook, make it executable with `chmod +x`.
+all disabled. To enable a hook, rename it by removing its `.sample`
+suffix.
This document describes the currently defined hooks.
parse-options API
=================
-Talk about <parse-options.h>
+The parse-options API is used to parse and massage options in git
+and to provide usage help with a consistent look.
-(Pierre)
+Basics
+------
+
+The argument vector `argv[]` usually contains mandatory or optional
+'non-option arguments', e.g. a filename or a branch, and 'options'.
+Options are optional arguments that start with a dash and
+that change the behavior of a command.
+
+* There are basically three types of options:
+ 'boolean' options,
+ options with (mandatory) 'arguments' and
+ options with 'optional arguments'
+ (i.e. a boolean option that can be adjusted).
+
+* There are basically two forms of options:
+ 'Short options' consist of one dash (`-`) and one alphanumeric
+ character.
+ 'Long options' begin with two dashes (`\--`) and some
+ alphanumeric characters.
+
+* Options are case-sensitive.
+ Please define 'lower-case long options' only.
+
+The parse-options API allows:
+
+* 'stuck' and 'separate form' of options with arguments.
+ `-oArg` is the stuck form, `-o Arg` is the separate form.
+ `\--option=Arg` is the stuck form, `\--option Arg` is the separate form.
+
+* Long options may be 'abbreviated', as long as the abbreviation
+ is unambiguous.
+
+* Short options may be bundled, e.g. `-a -b` can be specified as `-ab`.
+
+* Boolean long options can be 'negated' (or 'unset') by prepending
+ `no-`, e.g. `\--no-abbrev` instead of `\--abbrev`.
+
+* Options and non-option arguments can clearly be separated using the `\--`
+ option, e.g. `-a -b \--option \-- \--this-is-a-file` indicates that
+ `\--this-is-a-file` must not be processed as an option.
+
+Steps to parse options
+----------------------
+
+. `#include "parse-options.h"`
+
+. define a NULL-terminated
+ `static const char * const builtin_foo_usage[]` array
+ containing alternative usage strings
+
+. define `builtin_foo_options` array as described below
+ in section 'Data Structure'.
+
+. in `cmd_foo(int argc, const char **argv, const char *prefix)`
+ call
+
+ argc = parse_options(argc, argv, builtin_foo_options, builtin_foo_usage, flags);
++
+`parse_options()` will filter out the processed options of `argv[]` and leave the
+non-option arguments in `argv[]`.
+`argc` is updated appropriately because of the assignment.
++
+Flags are the bitwise-or of:
+
+`PARSE_OPT_KEEP_DASHDASH`::
+ Keep the `\--` that usually separates options from
+ non-option arguments.
+
+`PARSE_OPT_STOP_AT_NON_OPTION`::
+ Usually the whole argument vector is massaged and reordered.
+ Using this flag, processing is stopped at the first non-option
+ argument.
+
+Data Structure
+--------------
+
+The main data structure is an array of the `option` struct,
+say `static struct option builtin_add_options[]`.
+There are some macros to easily define options:
+
+`OPT__ABBREV(&int_var)`::
+ Add `\--abbrev[=<n>]`.
+
+`OPT__DRY_RUN(&int_var)`::
+ Add `-n, \--dry-run`.
+
+`OPT__QUIET(&int_var)`::
+ Add `-q, \--quiet`.
+
+`OPT__VERBOSE(&int_var)`::
+ Add `-v, \--verbose`.
+
+`OPT_GROUP(description)`::
+ Start an option group. `description` is a short string that
+ describes the group or an empty string.
+ Start the description with an upper-case letter.
+
+`OPT_BOOLEAN(short, long, &int_var, description)`::
+ Introduce a boolean option.
+ `int_var` is incremented on each use.
+
+`OPT_BIT(short, long, &int_var, description, mask)`::
+ Introduce a boolean option.
+ If used, `int_var` is bitwise-ored with `mask`.
+
+`OPT_SET_INT(short, long, &int_var, description, integer)`::
+ Introduce a boolean option.
+ If used, set `int_var` to `integer`.
+
+`OPT_SET_PTR(short, long, &ptr_var, description, ptr)`::
+ Introduce a boolean option.
+ If used, set `ptr_var` to `ptr`.
+
+`OPT_STRING(short, long, &str_var, arg_str, description)`::
+ Introduce an option with string argument.
+ The string argument is put into `str_var`.
+
+`OPT_INTEGER(short, long, &int_var, description)`::
+ Introduce an option with integer argument.
+ The integer is put into `int_var`.
+
+`OPT_DATE(short, long, &int_var, description)`::
+ Introduce an option with date argument, see `approxidate()`.
+ The timestamp is put into `int_var`.
+
+`OPT_CALLBACK(short, long, &var, arg_str, description, func_ptr)`::
+ Introduce an option with argument.
+ The argument will be fed into the function given by `func_ptr`
+ and the result will be put into `var`.
+ See 'Option Callbacks' below for a more elaborate description.
+
+`OPT_ARGUMENT(long, description)`::
+ Introduce a long-option argument that will be kept in `argv[]`.
+
+
+The last element of the array must be `OPT_END()`.
+
+If not stated otherwise, interpret the arguments as follows:
+
+* `short` is a character for the short option
+ (e.g. `\'e\'` for `-e`, use `0` to omit),
+
+* `long` is a string for the long option
+ (e.g. `"example"` for `\--example`, use `NULL` to omit),
+
+* `int_var` is an integer variable,
+
+* `str_var` is a string variable (`char *`),
+
+* `arg_str` is the string that is shown as argument
+ (e.g. `"branch"` will result in `<branch>`).
+ If set to `NULL`, three dots (`...`) will be displayed.
+
+* `description` is a short string to describe the effect of the option.
+ It shall begin with a lower-case letter and a full stop (`.`) shall be
+ omitted at the end.
+
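+Putting the pieces together, a minimal sketch of a builtin using these
+macros might look as follows (the particular options and variable names
+are made up for illustration, not taken from an existing command):
+
+	static int verbose;
+	static char *name;
+
+	static const char * const builtin_foo_usage[] = {
+		"git-foo [options] [<file>...]",
+		NULL
+	};
+
+	static struct option builtin_foo_options[] = {
+		OPT__VERBOSE(&verbose),
+		OPT_STRING('n', "name", &name, "name", "use this name"),
+		OPT_END()
+	};
+
+	int cmd_foo(int argc, const char **argv, const char *prefix)
+	{
+		argc = parse_options(argc, argv, builtin_foo_options,
+				     builtin_foo_usage, 0);
+		/* argv[0..argc-1] now holds the remaining non-option arguments */
+		return 0;
+	}
+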
+Option Callbacks
+----------------
+
+The function must be defined in this form:
+
+ int func(const struct option *opt, const char *arg, int unset)
+
+The callback mechanism is as follows:
+
+* Inside `func`, the only interesting member of the structure
+ given by `opt` is the void pointer `opt->value`.
+ `\*opt->value` will be the value that is saved into `var`, if you
+ use `OPT_CALLBACK()`.
+ For example, do `*(unsigned long *)opt->value = 42;` to get 42
+ into an `unsigned long` variable.
+
+* Return value `0` indicates success and non-zero return
+ value will invoke `usage_with_options()` and, thus, die.
+
+* If the user negates the option, `arg` is `NULL` and `unset` is 1.
+
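+For example, a callback that stores its argument as an unsigned long
+could look like this (the names below are made up for illustration):
+
+	static int parse_opt_length(const struct option *opt,
+				    const char *arg, int unset)
+	{
+		if (unset)
+			return -1;
+		*(unsigned long *)opt->value = strtoul(arg, NULL, 10);
+		return 0;
+	}
+
+and would be wired up with something like
+
+	OPT_CALLBACK('l', "length", &length, "n",
+		     "use this length", parse_opt_length),
+
+where `length` is an `unsigned long` variable.
+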
+Sophisticated option parsing
+----------------------------
+
+If you need, for example, option callbacks with optional arguments
+or without arguments at all, or if you need other special cases
+that are not handled by the macros above, you need to specify the
+members of the `option` structure manually.
+
+This is not covered in this document, but well documented
+in `parse-options.h` itself.
+
+Examples
+--------
+
+See `test-parse-options.c` and
+`builtin-add.c`,
+`builtin-clone.c`,
+`builtin-commit.c`,
+`builtin-fetch.c`,
+`builtin-fsck.c`,
+`builtin-rm.c`
+for real-world examples.
LIB_H += mailmap.h
LIB_H += object.h
LIB_H += pack.h
+LIB_H += pack-refs.h
LIB_H += pack-revindex.h
LIB_H += parse-options.h
LIB_H += patch-ids.h
LIB_OBJS += name-hash.o
LIB_OBJS += object.o
LIB_OBJS += pack-check.o
+LIB_OBJS += pack-refs.o
LIB_OBJS += pack-revindex.o
LIB_OBJS += pack-write.o
LIB_OBJS += pager.o
LIB_OBJS += usage.o
LIB_OBJS += utf8.o
LIB_OBJS += walker.o
+LIB_OBJS += wrapper.o
LIB_OBJS += write_or_die.o
LIB_OBJS += ws.o
LIB_OBJS += wt-status.o
strbuf_grow(&path, PATH_MAX);
strbuf_add(&path, base, baselen);
strbuf_addstr(&path, filename);
+ if (is_archive_path_ignored(path.buf + baselen))
+ return 0;
if (S_ISDIR(mode) || S_ISGITLINK(mode)) {
strbuf_addch(&path, '/');
buffer = NULL;
crc = crc32(0, NULL, 0);
path = construct_path(base, baselen, filename, S_ISDIR(mode), &pathlen);
+ if (is_archive_path_ignored(path + baselen))
+ return 0;
if (verbose)
fprintf(stderr, "%s\n", path);
if (pathlen > 0xffff) {
return buffer;
}
+int is_archive_path_ignored(const char *path)
+{
+ static struct git_attr *attr_export_ignore;
+ struct git_attr_check check[1];
+
+ if (!attr_export_ignore)
+ attr_export_ignore = git_attr("export-ignore", 13);
+
+ check[0].attr = attr_export_ignore;
+ if (git_checkattr(path, ARRAY_SIZE(check), check))
+ return 0;
+ return ATTR_TRUE(check[0].value);
+}
extern void *parse_extra_zip_args(int argc, const char **argv);
extern void *sha1_file_to_archive(const char *path, const unsigned char *sha1, unsigned int mode, enum object_type *type, unsigned long *size, const struct commit *commit);
+extern int is_archive_path_ignored(const char *path);
#endif /* ARCHIVE_H */
#include "transport.h"
#include "strbuf.h"
#include "dir.h"
+#include "pack-refs.h"
/*
* Overall FIXMEs:
get_fetch_map(refs, tag_refspec, &tail, 0);
for (r = local_refs; r; r = r->next)
- update_ref(reflog,
- r->peer_ref->name, r->old_sha1, NULL, 0, DIE_ON_ERR);
+ add_extra_ref(r->peer_ref->name, r->old_sha1, 0);
+
+ pack_refs(PACK_REFS_ALL);
+ clear_extra_refs();
+
return local_refs;
}
if (!option_bare) {
junk_work_tree = work_tree;
+ if (safe_create_leading_directories_const(work_tree) < 0)
+ die("could not create leading directories of '%s'",
+ work_tree);
if (mkdir(work_tree, 0755))
die("could not create work tree dir '%s'.", work_tree);
set_git_work_tree(work_tree);
setenv(CONFIG_ENVIRONMENT, xstrdup(mkpath("%s/config", git_dir)), 1);
+ if (safe_create_leading_directories_const(git_dir) < 0)
+ die("could not create leading directories of '%s'", git_dir);
set_git_dir(make_absolute_path(git_dir));
fprintf(stderr, "Initialize %s\n", git_dir);
static char *edit_message, *use_message;
static char *author_name, *author_email, *author_date;
static int all, edit_flag, also, interactive, only, amend, signoff;
-static int quiet, verbose, untracked_files, no_verify, allow_empty;
+static int quiet, verbose, no_verify, allow_empty;
+static char *untracked_files_arg;
/*
* The default commit message cleanup mode will remove the lines
* beginning with # (shell comments) and leading and trailing
OPT_BOOLEAN('o', "only", &only, "commit only specified files"),
OPT_BOOLEAN('n', "no-verify", &no_verify, "bypass pre-commit hook"),
OPT_BOOLEAN(0, "amend", &amend, "amend previous commit"),
- OPT_BOOLEAN('u', "untracked-files", &untracked_files, "show all untracked files"),
+ { OPTION_STRING, 'u', "untracked-files", &untracked_files_arg, "mode", "show untracked files, optional modes: all, normal, no. (Default: all)", PARSE_OPT_OPTARG, NULL, (intptr_t)"all" },
OPT_BOOLEAN(0, "allow-empty", &allow_empty, "ok to record an empty change"),
OPT_STRING(0, "cleanup", &cleanup_arg, "default", "how to strip spaces and #comments from message"),
s.reference = "HEAD^1";
}
s.verbose = verbose;
- s.untracked = untracked_files;
+ s.untracked = (show_untracked_files == SHOW_ALL_UNTRACKED_FILES);
s.index_file = index_file;
s.fp = fp;
s.nowarn = nowarn;
else
die("Invalid cleanup mode %s", cleanup_arg);
+ if (!untracked_files_arg)
+ ; /* default already initialized */
+ else if (!strcmp(untracked_files_arg, "no"))
+ show_untracked_files = SHOW_NO_UNTRACKED_FILES;
+ else if (!strcmp(untracked_files_arg, "normal"))
+ show_untracked_files = SHOW_NORMAL_UNTRACKED_FILES;
+ else if (!strcmp(untracked_files_arg, "all"))
+ show_untracked_files = SHOW_ALL_UNTRACKED_FILES;
+ else
+ die("Invalid untracked files mode '%s'", untracked_files_arg);
+
if (all && argc > 0)
die("Paths with -a does not make sense.");
else if (interactive && argc > 0)
}
/* Since intptr_t is C99, we do not use it here */
-static void mark_object(struct object *object)
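+/*
+ * Marks are stored in the "idnums" decoration as fake pointers:
+ * mark N is encoded as (uint32_t *)NULL + N, so no allocation is
+ * needed per mark and a NULL decoration still means "no mark".
+ */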
+static inline uint32_t *mark_to_ptr(uint32_t mark)
{
- last_idnum++;
- add_decoration(&idnums, object, ((uint32_t *)NULL) + last_idnum);
+ return ((uint32_t *)NULL) + mark;
+}
+
+static inline uint32_t ptr_to_mark(void * mark)
+{
+ return (uint32_t *)mark - (uint32_t *)NULL;
+}
+
+static inline void mark_object(struct object *object, uint32_t mark)
+{
+ add_decoration(&idnums, object, mark_to_ptr(mark));
+}
+
+static inline void mark_next_object(struct object *object)
+{
+ mark_object(object, ++last_idnum);
}
static int get_object_mark(struct object *object)
void *decoration = lookup_decoration(&idnums, object);
if (!decoration)
return 0;
- return (uint32_t *)decoration - (uint32_t *)NULL;
+ return ptr_to_mark(decoration);
}
static void show_progress(void)
if (!buf)
die ("Could not read blob %s", sha1_to_hex(sha1));
- mark_object(object);
+ mark_next_object(object);
printf("blob\nmark :%d\ndata %lu\n", last_idnum, size);
if (size && fwrite(buf, size, 1, stdout) != 1)
for (i = 0; i < diff_queued_diff.nr; i++)
handle_object(diff_queued_diff.queue[i]->two->sha1);
- mark_object(&commit->object);
+ mark_next_object(&commit->object);
if (!is_encoding_utf8(encoding))
reencoded = reencode_string(message, "UTF-8", encoding);
if (!commit->parents)
}
}
+static void export_marks(char *file)
+{
+ unsigned int i;
+ uint32_t mark;
+ struct object_decoration *deco = idnums.hash;
+ FILE *f;
+
+ f = fopen(file, "w");
+ if (!f)
+ error("Unable to open marks file %s for writing", file);
+
+ for (i = 0; i < idnums.size; ++i) {
+ deco++;
+ if (deco && deco->base && deco->base->type == 1) {
+ mark = ptr_to_mark(deco->decoration);
+ fprintf(f, ":%u %s\n", mark, sha1_to_hex(deco->base->sha1));
+ }
+ }
+
+ if (ferror(f) || fclose(f))
+ error("Unable to write marks file %s.", file);
+}
+
+static void import_marks(char * input_file)
+{
+ char line[512];
+ FILE *f = fopen(input_file, "r");
+ if (!f)
+ die("cannot read %s: %s", input_file, strerror(errno));
+
+ while (fgets(line, sizeof(line), f)) {
+ uint32_t mark;
+ char *line_end, *mark_end;
+ unsigned char sha1[20];
+ struct object *object;
+
+ line_end = strchr(line, '\n');
+ if (line[0] != ':' || !line_end)
+ die("corrupt mark line: %s", line);
+ *line_end = 0;
+
+ mark = strtoumax(line + 1, &mark_end, 10);
+ if (!mark || mark_end == line + 1
+ || *mark_end != ' ' || get_sha1(mark_end + 1, sha1))
+ die("corrupt mark line: %s", line);
+
+ object = parse_object(sha1);
+ if (!object)
+ die ("Could not read blob %s", sha1_to_hex(sha1));
+
+ if (object->flags & SHOWN)
+ error("Object %s already has a mark", sha1);
+
+ mark_object(object, mark);
+ if (last_idnum < mark)
+ last_idnum = mark;
+
+ object->flags |= SHOWN;
+ }
+ fclose(f);
+}
+
int cmd_fast_export(int argc, const char **argv, const char *prefix)
{
struct rev_info revs;
struct object_array commits = { 0, 0, NULL };
struct path_list extra_refs = { NULL, 0, 0, 0 };
struct commit *commit;
+ char *export_filename = NULL, *import_filename = NULL;
struct option options[] = {
OPT_INTEGER(0, "progress", &progress,
"show progress after <n> objects"),
OPT_CALLBACK(0, "signed-tags", &signed_tag_mode, "mode",
"select handling of signed tags",
parse_opt_signed_tag_mode),
+ OPT_STRING(0, "export-marks", &export_filename, "FILE",
+ "Dump marks to this file"),
+ OPT_STRING(0, "import-marks", &import_filename, "FILE",
+ "Import marks from this file"),
OPT_END()
};
if (argc > 1)
usage_with_options (fast_export_usage, options);
+ if (import_filename)
+ import_marks(import_filename);
+
get_tags_and_duplicates(&revs.pending, &extra_refs);
if (prepare_revision_walk(&revs))
handle_tags_and_duplicates(&extra_refs);
+ if (export_filename)
+ export_marks(export_filename);
+
return 0;
}
}
}
+ reprepare_packed_git();
return ref_cpy;
}
prepare_packed_git();
for (p = packed_git; p; p = p->next)
/* verify gives error messages itself */
- verify_pack(p, 0);
+ verify_pack(p);
for (p = packed_git; p; p = p->next) {
uint32_t j, num;
return newpath;
}
-static int mkdir_p(const char *path, unsigned long mode)
-{
- /* path points to cache entries, so xstrdup before messing with it */
- char *buf = xstrdup(path);
- int result = safe_create_leading_directories(buf);
- free(buf);
- return result;
-}
-
static void flush_buffer(int fd, const char *buf, unsigned long size)
{
while (size > 0) {
int status;
const char *msg = "failed to create path '%s'%s";
- status = mkdir_p(path, 0777);
+ status = safe_create_leading_directories_const(path);
if (status) {
if (status == -3) {
/* something else exists */
close(fd);
} else if (S_ISLNK(mode)) {
char *lnk = xmemdupz(buf, size);
- mkdir_p(path, 0777);
+ safe_create_leading_directories_const(path);
unlink(path);
symlink(lnk, path);
free(lnk);
stream.total_in == len) ? 0 : -1;
}
-static int check_pack_crc(struct packed_git *p, struct pack_window **w_curs,
- off_t offset, off_t len, unsigned int nr)
-{
- const uint32_t *index_crc;
- uint32_t data_crc = crc32(0, Z_NULL, 0);
-
- do {
- unsigned int avail;
- void *data = use_pack(p, w_curs, offset, &avail);
- if (avail > len)
- avail = len;
- data_crc = crc32(data_crc, data, avail);
- offset += avail;
- len -= avail;
- } while (len);
-
- index_crc = p->index_data;
- index_crc += 2 + 256 + p->num_objects * (20/4) + nr;
-
- return data_crc != ntohl(*index_crc);
-}
-
static void copy_pack_data(struct sha1file *f,
struct packed_git *p,
struct pack_window **w_curs,
sorted_by_offset[i] = objects + i;
qsort(sorted_by_offset, nr_objects, sizeof(*sorted_by_offset), pack_offset_sort);
- init_pack_revindex();
-
for (i = 0; i < nr_objects; i++)
check_object(sorted_by_offset[i]);
-#include "builtin.h"
#include "cache.h"
-#include "refs.h"
-#include "object.h"
-#include "tag.h"
#include "parse-options.h"
-
-struct ref_to_prune {
- struct ref_to_prune *next;
- unsigned char sha1[20];
- char name[FLEX_ARRAY];
-};
-
-#define PACK_REFS_PRUNE 0x0001
-#define PACK_REFS_ALL 0x0002
-
-struct pack_refs_cb_data {
- unsigned int flags;
- struct ref_to_prune *ref_to_prune;
- FILE *refs_file;
-};
-
-static int do_not_prune(int flags)
-{
- /* If it is already packed or if it is a symref,
- * do not prune it.
- */
- return (flags & (REF_ISSYMREF|REF_ISPACKED));
-}
-
-static int handle_one_ref(const char *path, const unsigned char *sha1,
- int flags, void *cb_data)
-{
- struct pack_refs_cb_data *cb = cb_data;
- int is_tag_ref;
-
- /* Do not pack the symbolic refs */
- if ((flags & REF_ISSYMREF))
- return 0;
- is_tag_ref = !prefixcmp(path, "refs/tags/");
-
- /* ALWAYS pack refs that were already packed or are tags */
- if (!(cb->flags & PACK_REFS_ALL) && !is_tag_ref && !(flags & REF_ISPACKED))
- return 0;
-
- fprintf(cb->refs_file, "%s %s\n", sha1_to_hex(sha1), path);
- if (is_tag_ref) {
- struct object *o = parse_object(sha1);
- if (o->type == OBJ_TAG) {
- o = deref_tag(o, path, 0);
- if (o)
- fprintf(cb->refs_file, "^%s\n",
- sha1_to_hex(o->sha1));
- }
- }
-
- if ((cb->flags & PACK_REFS_PRUNE) && !do_not_prune(flags)) {
- int namelen = strlen(path) + 1;
- struct ref_to_prune *n = xcalloc(1, sizeof(*n) + namelen);
- hashcpy(n->sha1, sha1);
- strcpy(n->name, path);
- n->next = cb->ref_to_prune;
- cb->ref_to_prune = n;
- }
- return 0;
-}
-
-/* make sure nobody touched the ref, and unlink */
-static void prune_ref(struct ref_to_prune *r)
-{
- struct ref_lock *lock = lock_ref_sha1(r->name + 5, r->sha1);
-
- if (lock) {
- unlink(git_path("%s", r->name));
- unlock_ref(lock);
- }
-}
-
-static void prune_refs(struct ref_to_prune *r)
-{
- while (r) {
- prune_ref(r);
- r = r->next;
- }
-}
-
-static struct lock_file packed;
-
-static int pack_refs(unsigned int flags)
-{
- int fd;
- struct pack_refs_cb_data cbdata;
-
- memset(&cbdata, 0, sizeof(cbdata));
- cbdata.flags = flags;
-
- fd = hold_lock_file_for_update(&packed, git_path("packed-refs"), 1);
- cbdata.refs_file = fdopen(fd, "w");
- if (!cbdata.refs_file)
- die("unable to create ref-pack file structure (%s)",
- strerror(errno));
-
- /* perhaps other traits later as well */
- fprintf(cbdata.refs_file, "# pack-refs with: peeled \n");
-
- for_each_ref(handle_one_ref, &cbdata);
- if (ferror(cbdata.refs_file))
- die("failed to write ref-pack file");
- if (fflush(cbdata.refs_file) || fsync(fd) || fclose(cbdata.refs_file))
- die("failed to write ref-pack file (%s)", strerror(errno));
- /*
- * Since the lock file was fdopen()'ed and then fclose()'ed above,
- * assign -1 to the lock file descriptor so that commit_lock_file()
- * won't try to close() it.
- */
- packed.fd = -1;
- if (commit_lock_file(&packed) < 0)
- die("unable to overwrite old ref-pack file (%s)", strerror(errno));
- if (cbdata.flags & PACK_REFS_PRUNE)
- prune_refs(cbdata.ref_to_prune);
- return 0;
-}
+#include "pack-refs.h"
static char const * const pack_refs_usage[] = {
"git-pack-refs [options]",
#include "parse-options.h"
static const char * const git_update_ref_usage[] = {
- "git-update-ref [options] -d <refname> <oldval>",
+ "git-update-ref [options] -d <refname> [<oldval>]",
"git-update-ref [options] <refname> <newval> [<oldval>]",
NULL
};
int cmd_update_ref(int argc, const char **argv, const char *prefix)
{
- const char *refname, *value, *oldval, *msg=NULL;
+ const char *refname, *oldval, *msg=NULL;
unsigned char sha1[20], oldsha1[20];
int delete = 0, no_deref = 0;
struct option options[] = {
if (msg && !*msg)
die("Refusing to perform update with empty message.");
- if (argc < 2 || argc > 3)
- usage_with_options(git_update_ref_usage, options);
- refname = argv[0];
- value = argv[1];
- oldval = argv[2];
-
- if (get_sha1(value, sha1))
- die("%s: not a valid SHA1", value);
-
if (delete) {
- if (oldval)
+ if (argc < 1 || argc > 2)
+ usage_with_options(git_update_ref_usage, options);
+ refname = argv[0];
+ oldval = argv[1];
+ } else {
+ const char *value;
+ if (argc < 2 || argc > 3)
usage_with_options(git_update_ref_usage, options);
- return delete_ref(refname, sha1);
+ refname = argv[0];
+ value = argv[1];
+ oldval = argv[2];
+ if (get_sha1(value, sha1))
+ die("%s: not a valid SHA1", value);
}
- hashclr(oldsha1);
+ hashclr(oldsha1); /* all-zero hash in case oldval is the empty string */
if (oldval && *oldval && get_sha1(oldval, oldsha1))
die("%s: not a valid old SHA1", oldval);
- return update_ref(msg, refname, sha1, oldval ? oldsha1 : NULL,
- no_deref ? REF_NODEREF : 0, DIE_ON_ERR);
+ if (delete)
+ return delete_ref(refname, oldval ? oldsha1 : NULL);
+ else
+ return update_ref(msg, refname, sha1, oldval ? oldsha1 : NULL,
+ no_deref ? REF_NODEREF : 0, DIE_ON_ERR);
}
#include "cache.h"
#include "pack.h"
+
+#define MAX_CHAIN 50
+
+static void show_pack_info(struct packed_git *p)
+{
+ uint32_t nr_objects, i, chain_histogram[MAX_CHAIN+1];
+
+ nr_objects = p->num_objects;
+ memset(chain_histogram, 0, sizeof(chain_histogram));
+
+ for (i = 0; i < nr_objects; i++) {
+ const unsigned char *sha1;
+ unsigned char base_sha1[20];
+ const char *type;
+ unsigned long size;
+ unsigned long store_size;
+ off_t offset;
+ unsigned int delta_chain_length;
+
+ sha1 = nth_packed_object_sha1(p, i);
+ if (!sha1)
+ die("internal error pack-check nth-packed-object");
+ offset = nth_packed_object_offset(p, i);
+ type = packed_object_info_detail(p, offset, &size, &store_size,
+ &delta_chain_length,
+ base_sha1);
+ printf("%s ", sha1_to_hex(sha1));
+ if (!delta_chain_length)
+ printf("%-6s %lu %lu %"PRIuMAX"\n",
+ type, size, store_size, (uintmax_t)offset);
+ else {
+ printf("%-6s %lu %lu %"PRIuMAX" %u %s\n",
+ type, size, store_size, (uintmax_t)offset,
+ delta_chain_length, sha1_to_hex(base_sha1));
+ if (delta_chain_length <= MAX_CHAIN)
+ chain_histogram[delta_chain_length]++;
+ else
+ chain_histogram[0]++;
+ }
+ }
+
+ for (i = 0; i <= MAX_CHAIN; i++) {
+ if (!chain_histogram[i])
+ continue;
+ printf("chain length = %d: %d object%s\n", i,
+ chain_histogram[i], chain_histogram[i] > 1 ? "s" : "");
+ }
+ if (chain_histogram[0])
+ printf("chain length > %d: %d object%s\n", MAX_CHAIN,
+ chain_histogram[0], chain_histogram[0] > 1 ? "s" : "");
+}
+
static int verify_one_pack(const char *path, int verbose)
{
char arg[PATH_MAX];
return error("packfile %s not found.", arg);
install_packed_git(pack);
- err = verify_pack(pack, verbose);
+ err = verify_pack(pack);
+
+ if (verbose) {
+ if (err)
+ printf("%s: bad\n", pack->pack_name);
+ else {
+ show_pack_info(pack);
+ printf("%s: ok\n", pack->pack_name);
+ }
+ }
return err;
}
extern int is_inside_work_tree(void);
extern const char *get_git_dir(void);
extern char *get_object_directory(void);
-extern char *get_refs_directory(void);
extern char *get_index_file(void);
extern char *get_graft_file(void);
extern int set_git_dir(const char *path);
extern size_t packed_git_limit;
extern size_t delta_base_cache_limit;
extern int auto_crlf;
+extern int fsync_object_files;
enum safe_crlf {
SAFE_CRLF_FALSE = 0,
int git_config_perm(const char *var, const char *value);
int adjust_shared_perm(const char *path);
int safe_create_leading_directories(char *path);
+int safe_create_leading_directories_const(const char *path);
char *enter_repo(char *path, int strict);
static inline int is_absolute_path(const char *path)
{
}
const char *make_absolute_path(const char *path);
const char *make_nonrelative_path(const char *path);
+const char *make_relative_path(const char *abs, const char *base);
/* Read and unpack a sha1 file into memory, write memory to a sha1 file */
extern int sha1_object_info(const unsigned char *, unsigned long *);
const void *index_data;
size_t index_size;
uint32_t num_objects;
+ uint32_t num_bad_objects;
+ unsigned char *bad_object_sha1;
int index_version;
time_t mtime;
int pack_fd;
extern void unuse_pack(struct pack_window **);
extern struct packed_git *add_packed_git(const char *, int, int);
extern const unsigned char *nth_packed_object_sha1(struct packed_git *, uint32_t);
+extern off_t nth_packed_object_offset(const struct packed_git *, uint32_t);
extern off_t find_pack_entry_one(const unsigned char *, struct packed_git *);
extern void *unpack_entry(struct packed_git *, off_t, enum object_type *, unsigned long *);
extern unsigned long unpack_object_header_gently(const unsigned char *buf, unsigned long len, enum object_type *type, unsigned long *sizep);
extern int git_config_rename_section(const char *, const char *);
extern const char *git_etc_gitconfig(void);
extern int check_repository_format_version(const char *var, const char *value, void *cb);
-extern int git_env_bool(const char *, int);
extern int git_config_system(void);
extern int git_config_global(void);
extern int config_error_nonbool(const char *);
/* bit 0 up to (N-1) are on if the parent has this line (i.e.
* we did not change it).
* bit N is used for "interesting" lines, including context.
+ * bit (N+1) is used for "do not show deletion before this".
*/
unsigned long flag;
unsigned long *p_lno;
{
unsigned long all_mask = (1UL<<num_parent) - 1;
unsigned long mark = (1UL<<num_parent);
+ unsigned long no_pre_delete = (2UL<<num_parent);
unsigned long i;
/* Two groups of interesting lines may have a short gap of
/* Paint a few lines before the first interesting line. */
while (j < i)
- sline[j++].flag |= mark;
+ sline[j++].flag |= mark | no_pre_delete;
again:
/* we know up to i is to be included. where does the
int use_color)
{
unsigned long mark = (1UL<<num_parent);
+ unsigned long no_pre_delete = (2UL<<num_parent);
int i;
unsigned long lno = 0;
const char *c_frag = diff_get_color(use_color, DIFF_FRAGINFO);
int j;
unsigned long p_mask;
sl = &sline[lno++];
- ll = sl->lost_head;
+ ll = (sl->flag & no_pre_delete) ? NULL : sl->lost_head;
while (ll) {
fputs(c_old, stdout);
for (j = 0; j < num_parent; j++) {
return 0;
}
-int git_default_config(const char *var, const char *value, void *dummy)
+static int git_default_core_config(const char *var, const char *value)
{
/* This needs a better name */
if (!strcmp(var, "core.filemode")) {
return 0;
}
+ if (!strcmp(var, "core.pager"))
+ return git_config_string(&pager_program, var, value);
+
+ if (!strcmp(var, "core.editor"))
+ return git_config_string(&editor_program, var, value);
+
+ if (!strcmp(var, "core.excludesfile"))
+ return git_config_string(&excludes_file, var, value);
+
+ if (!strcmp(var, "core.whitespace")) {
+ if (!value)
+ return config_error_nonbool(var);
+ whitespace_rule_cfg = parse_whitespace_rule(value);
+ return 0;
+ }
+
+ if (!strcmp(var, "core.fsyncobjectfiles")) {
+ fsync_object_files = git_config_bool(var, value);
+ return 0;
+ }
+
+ /* Add other config variables here and to Documentation/config.txt. */
+ return 0;
+}
+
+static int git_default_user_config(const char *var, const char *value)
+{
if (!strcmp(var, "user.name")) {
if (!value)
return config_error_nonbool(var);
return 0;
}
+ /* Add other config variables here and to Documentation/config.txt. */
+ return 0;
+}
+
+static int git_default_i18n_config(const char *var, const char *value)
+{
if (!strcmp(var, "i18n.commitencoding"))
return git_config_string(&git_commit_encoding, var, value);
if (!strcmp(var, "i18n.logoutputencoding"))
return git_config_string(&git_log_output_encoding, var, value);
- if (!strcmp(var, "pager.color") || !strcmp(var, "color.pager")) {
- pager_use_color = git_config_bool(var,value);
- return 0;
- }
-
- if (!strcmp(var, "core.pager"))
- return git_config_string(&pager_program, var, value);
-
- if (!strcmp(var, "core.editor"))
- return git_config_string(&editor_program, var, value);
-
- if (!strcmp(var, "core.excludesfile"))
- return git_config_string(&excludes_file, var, value);
+ /* Add other config variables here and to Documentation/config.txt. */
+ return 0;
+}
- if (!strcmp(var, "core.whitespace")) {
- if (!value)
- return config_error_nonbool(var);
- whitespace_rule_cfg = parse_whitespace_rule(value);
- return 0;
- }
+static int git_default_branch_config(const char *var, const char *value)
+{
if (!strcmp(var, "branch.autosetupmerge")) {
if (value && !strcasecmp(value, "always")) {
git_branch_track = BRANCH_TRACK_ALWAYS;
return 0;
}
+int git_default_config(const char *var, const char *value, void *dummy)
+{
+ if (!prefixcmp(var, "core."))
+ return git_default_core_config(var, value);
+
+ if (!prefixcmp(var, "user."))
+ return git_default_user_config(var, value);
+
+ if (!prefixcmp(var, "i18n."))
+ return git_default_i18n_config(var, value);
+
+ if (!prefixcmp(var, "branch."))
+ return git_default_branch_config(var, value);
+
+ if (!strcmp(var, "pager.color") || !strcmp(var, "color.pager")) {
+ pager_use_color = git_config_bool(var,value);
+ return 0;
+ }
+
+ /* Add other config variables here and to Documentation/config.txt. */
+ return 0;
+}
+
int git_config_from_file(config_fn_t fn, const char *filename, void *data)
{
int ret;
return system_wide;
}
-int git_env_bool(const char *k, int def)
+static int git_env_bool(const char *k, int def)
{
const char *v = getenv(k);
return v ? git_config_bool(k, v) : def;
total = add + del;
}
show_name(options->file, prefix, name, len, reset, set);
- fprintf(options->file, "%5d ", added + deleted);
+ fprintf(options->file, "%5d%s", added + deleted,
+ added + deleted ? " " : "");
show_graph(options->file, '+', add, add_c, reset);
show_graph(options->file, '-', del, del_c, reset);
fprintf(options->file, "\n");
int zlib_compression_level = Z_BEST_SPEED;
int core_compression_level;
int core_compression_seen;
+int fsync_object_files;
size_t packed_git_window_size = DEFAULT_PACKED_GIT_WINDOW_SIZE;
size_t packed_git_limit = DEFAULT_PACKED_GIT_LIMIT;
size_t delta_base_cache_limit = 16 * 1024 * 1024;
return git_object_dir;
}
-char *get_refs_directory(void)
-{
- if (!git_refs_dir)
- setup_git_env();
- return git_refs_dir;
-}
-
char *get_index_file(void)
{
if (!git_index_file)
extern void release_pack_memory(size_t, int);
-static inline char* xstrdup(const char *str)
-{
- char *ret = strdup(str);
- if (!ret) {
- release_pack_memory(strlen(str) + 1, -1);
- ret = strdup(str);
- if (!ret)
- die("Out of memory, strdup failed");
- }
- return ret;
-}
-
-static inline void *xmalloc(size_t size)
-{
- void *ret = malloc(size);
- if (!ret && !size)
- ret = malloc(1);
- if (!ret) {
- release_pack_memory(size, -1);
- ret = malloc(size);
- if (!ret && !size)
- ret = malloc(1);
- if (!ret)
- die("Out of memory, malloc failed");
- }
-#ifdef XMALLOC_POISON
- memset(ret, 0xA5, size);
-#endif
- return ret;
-}
-
-/*
- * xmemdupz() allocates (len + 1) bytes of memory, duplicates "len" bytes of
- * "data" to the allocated memory, zero terminates the allocated memory,
- * and returns a pointer to the allocated memory. If the allocation fails,
- * the program dies.
- */
-static inline void *xmemdupz(const void *data, size_t len)
-{
- char *p = xmalloc(len + 1);
- memcpy(p, data, len);
- p[len] = '\0';
- return p;
-}
-
-static inline char *xstrndup(const char *str, size_t len)
-{
- char *p = memchr(str, '\0', len);
- return xmemdupz(str, p ? p - str : len);
-}
-
-static inline void *xrealloc(void *ptr, size_t size)
-{
- void *ret = realloc(ptr, size);
- if (!ret && !size)
- ret = realloc(ptr, 1);
- if (!ret) {
- release_pack_memory(size, -1);
- ret = realloc(ptr, size);
- if (!ret && !size)
- ret = realloc(ptr, 1);
- if (!ret)
- die("Out of memory, realloc failed");
- }
- return ret;
-}
-
-static inline void *xcalloc(size_t nmemb, size_t size)
-{
- void *ret = calloc(nmemb, size);
- if (!ret && (!nmemb || !size))
- ret = calloc(1, 1);
- if (!ret) {
- release_pack_memory(nmemb * size, -1);
- ret = calloc(nmemb, size);
- if (!ret && (!nmemb || !size))
- ret = calloc(1, 1);
- if (!ret)
- die("Out of memory, calloc failed");
- }
- return ret;
-}
-
-static inline void *xmmap(void *start, size_t length,
- int prot, int flags, int fd, off_t offset)
-{
- void *ret = mmap(start, length, prot, flags, fd, offset);
- if (ret == MAP_FAILED) {
- if (!length)
- return NULL;
- release_pack_memory(length, fd);
- ret = mmap(start, length, prot, flags, fd, offset);
- if (ret == MAP_FAILED)
- die("Out of memory? mmap failed: %s", strerror(errno));
- }
- return ret;
-}
-
-/*
- * xread() is the same a read(), but it automatically restarts read()
- * operations with a recoverable error (EAGAIN and EINTR). xread()
- * DOES NOT GUARANTEE that "len" bytes is read even if the data is available.
- */
-static inline ssize_t xread(int fd, void *buf, size_t len)
-{
- ssize_t nr;
- while (1) {
- nr = read(fd, buf, len);
- if ((nr < 0) && (errno == EAGAIN || errno == EINTR))
- continue;
- return nr;
- }
-}
-
-/*
- * xwrite() is the same a write(), but it automatically restarts write()
- * operations with a recoverable error (EAGAIN and EINTR). xwrite() DOES NOT
- * GUARANTEE that "len" bytes is written even if the operation is successful.
- */
-static inline ssize_t xwrite(int fd, const void *buf, size_t len)
-{
- ssize_t nr;
- while (1) {
- nr = write(fd, buf, len);
- if ((nr < 0) && (errno == EAGAIN || errno == EINTR))
- continue;
- return nr;
- }
-}
-
-static inline int xdup(int fd)
-{
- int ret = dup(fd);
- if (ret < 0)
- die("dup failed: %s", strerror(errno));
- return ret;
-}
-
-static inline FILE *xfdopen(int fd, const char *mode)
-{
- FILE *stream = fdopen(fd, mode);
- if (stream == NULL)
- die("Out of memory? fdopen failed: %s", strerror(errno));
- return stream;
-}
-
-static inline int xmkstemp(char *template)
-{
- int fd;
-
- fd = mkstemp(template);
- if (fd < 0)
- die("Unable to create temporary file: %s", strerror(errno));
- return fd;
-}
+extern char *xstrdup(const char *str);
+extern void *xmalloc(size_t size);
+extern void *xmemdupz(const void *data, size_t len);
+extern char *xstrndup(const char *str, size_t len);
+extern void *xrealloc(void *ptr, size_t size);
+extern void *xcalloc(size_t nmemb, size_t size);
+extern void *xmmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
+extern ssize_t xread(int fd, void *buf, size_t len);
+extern ssize_t xwrite(int fd, const void *buf, size_t len);
+extern int xdup(int fd);
+extern FILE *xfdopen(int fd, const char *mode);
+extern int xmkstemp(char *template);
static inline size_t xsize_t(off_t len)
{
my ($log, $ctx) =
command_output_pipe(qw/rev-list --pretty=raw --no-color --reverse/,
$self->refname, '--');
- my $full_url = $self->full_url;
- remove_username($full_url);
+ my $metadata_url = $self->metadata_url;
+ remove_username($metadata_url);
my $svn_uuid = $self->ra_uuid;
my $c;
while (<$log>) {
# if we merged or otherwise started elsewhere, this is
# how we break out of it
if (($uuid ne $svn_uuid) ||
- ($full_url && $url && ($url ne $full_url))) {
+ ($metadata_url && $url && ($url ne $metadata_url))) {
next;
}
our $action = $cgi->param('a');
if (defined $action) {
if ($action =~ m/[^0-9a-zA-Z\.\-_]/) {
- die_error(undef, "Invalid action parameter");
+ die_error(400, "Invalid action parameter");
}
}
($export_ok && !(-e "$projectroot/$project/$export_ok")) ||
($strict_export && !project_in_list($project))) {
undef $project;
- die_error(undef, "No such project");
+ die_error(404, "No such project");
}
}
our $file_name = $cgi->param('f');
if (defined $file_name) {
if (!validate_pathname($file_name)) {
- die_error(undef, "Invalid file parameter");
+ die_error(400, "Invalid file parameter");
}
}
our $file_parent = $cgi->param('fp');
if (defined $file_parent) {
if (!validate_pathname($file_parent)) {
- die_error(undef, "Invalid file parent parameter");
+ die_error(400, "Invalid file parent parameter");
}
}
our $hash = $cgi->param('h');
if (defined $hash) {
if (!validate_refname($hash)) {
- die_error(undef, "Invalid hash parameter");
+ die_error(400, "Invalid hash parameter");
}
}
our $hash_parent = $cgi->param('hp');
if (defined $hash_parent) {
if (!validate_refname($hash_parent)) {
- die_error(undef, "Invalid hash parent parameter");
+ die_error(400, "Invalid hash parent parameter");
}
}
our $hash_base = $cgi->param('hb');
if (defined $hash_base) {
if (!validate_refname($hash_base)) {
- die_error(undef, "Invalid hash base parameter");
+ die_error(400, "Invalid hash base parameter");
}
}
if (defined @extra_options) {
foreach my $opt (@extra_options) {
if (not exists $allowed_options{$opt}) {
- die_error(undef, "Invalid option parameter");
+ die_error(400, "Invalid option parameter");
}
if (not grep(/^$action$/, @{$allowed_options{$opt}})) {
- die_error(undef, "Invalid option parameter for this action");
+ die_error(400, "Invalid option parameter for this action");
}
}
}
our $hash_parent_base = $cgi->param('hpb');
if (defined $hash_parent_base) {
if (!validate_refname($hash_parent_base)) {
- die_error(undef, "Invalid hash parent base parameter");
+ die_error(400, "Invalid hash parent base parameter");
}
}
our $page = $cgi->param('pg');
if (defined $page) {
if ($page =~ m/[^0-9]/) {
- die_error(undef, "Invalid page parameter");
+ die_error(400, "Invalid page parameter");
}
}
our $searchtype = $cgi->param('st');
if (defined $searchtype) {
if ($searchtype =~ m/[^a-z]/) {
- die_error(undef, "Invalid searchtype parameter");
+ die_error(400, "Invalid searchtype parameter");
}
}
our $search_regexp;
if (defined $searchtext) {
if (length($searchtext) < 2) {
- die_error(undef, "At least two characters are required for search parameter");
+ die_error(403, "At least two characters are required for search parameter");
}
$search_regexp = $search_use_regexp ? $searchtext : quotemeta $searchtext;
}
# dispatch
my %actions = (
- "blame" => \&git_blame2,
+ "blame" => \&git_blame,
"blobdiff" => \&git_blobdiff,
"blobdiff_plain" => \&git_blobdiff_plain,
"blob" => \&git_blob,
}
}
if (!defined($actions{$action})) {
- die_error(undef, "Unknown action");
+ die_error(400, "Unknown action");
}
if ($action !~ m/^(opml|project_list|project_index)$/ &&
!$project) {
- die_error(undef, "Project needed");
+ die_error(400, "Project needed");
}
$actions{$action}->();
exit;
$path =~ s,/+$,,;
open my $fd, "-|", git_cmd(), "ls-tree", $base, "--", $path
- or die_error(undef, "Open git-ls-tree failed");
+ or die_error(500, "Open git-ls-tree failed");
my $line = <$fd>;
close $fd or return undef;
"--max-count=1",
$commit_id,
"--",
- or die_error(undef, "Open git-rev-list failed");
+ or die_error(500, "Open git-rev-list failed");
%co = parse_commit_text(<$fd>, 1);
close $fd;
$commit_id,
"--",
($filename ? ($filename) : ())
- or die_error(undef, "Open git-rev-list failed");
+ or die_error(500, "Open git-rev-list failed");
while (my $line = <$fd>) {
my %co = parse_commit_text($line);
push @cos, \%co;
"</html>";
}
+# die_error(<http_status_code>, <error_message>)
+# Example: die_error(404, 'Hash not found')
+# By convention, use the following status codes (as defined in RFC 2616):
+# 400: Invalid or missing CGI parameters, or
+# requested object exists but has wrong type.
+# 403: Requested feature (like "pickaxe" or "snapshot") not enabled on
+# this server or project.
+# 404: Requested object/revision/project doesn't exist.
+# 500: The server isn't configured properly, or
+# an internal error occurred (e.g. failed assertions caused by bugs), or
+# an unknown error occurred (e.g. the git binary died unexpectedly).
sub die_error {
- my $status = shift || "403 Forbidden";
- my $error = shift || "Malformed query, file missing or permission denied";
-
- git_header_html($status);
+ my $status = shift || 500;
+ my $error = shift || "Internal server error";
+
+ my %http_responses = (400 => '400 Bad Request',
+ 403 => '403 Forbidden',
+ 404 => '404 Not Found',
+ 500 => '500 Internal Server Error');
+ git_header_html($http_responses{$status});
print <<EOF;
<div class="page_body">
<br /><br />
# . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
-sub git_project_list_body {
- my ($projlist, $order, $from, $to, $extra, $no_header) = @_;
-
- my ($check_forks) = gitweb_check_feature('forks');
-
+# fills project list info (age, description, owner, forks) for each
+# project in the list, removing invalid projects from returned list
+# NOTE: modifies $projlist, but does not remove entries from it
+sub fill_project_list_info {
+ my ($projlist, $check_forks) = @_;
my @projects;
+
+ PROJECT:
foreach my $pr (@$projlist) {
- my (@aa) = git_get_last_activity($pr->{'path'});
- unless (@aa) {
- next;
+ my (@activity) = git_get_last_activity($pr->{'path'});
+ unless (@activity) {
+ next PROJECT;
}
- ($pr->{'age'}, $pr->{'age_string'}) = @aa;
+ ($pr->{'age'}, $pr->{'age_string'}) = @activity;
if (!defined $pr->{'descr'}) {
my $descr = git_get_project_description($pr->{'path'}) || "";
- $pr->{'descr_long'} = to_utf8($descr);
+ $descr = to_utf8($descr);
+ $pr->{'descr_long'} = $descr;
$pr->{'descr'} = chop_str($descr, $projects_list_description_width, 5);
}
if (!defined $pr->{'owner'}) {
($pname !~ /\/$/) &&
(-d "$projectroot/$pname")) {
$pr->{'forks'} = "-d $projectroot/$pname";
- }
- else {
+ } else {
$pr->{'forks'} = 0;
}
}
push @projects, $pr;
}
+ return @projects;
+}
+
+# print 'sort by' <th> element, either sorting by $key if $name eq $order
+# (changing $list), or generating 'sort by $name' replay link otherwise
+sub print_sort_th {
+ my ($str_sort, $name, $order, $key, $header, $list) = @_;
+ $key ||= $name;
+ $header ||= ucfirst($name);
+
+ if ($order eq $name) {
+ if ($str_sort) {
+ @$list = sort {$a->{$key} cmp $b->{$key}} @$list;
+ } else {
+ @$list = sort {$a->{$key} <=> $b->{$key}} @$list;
+ }
+ print "<th>$header</th>\n";
+ } else {
+ print "<th>" .
+ $cgi->a({-href => href(-replay=>1, order=>$name),
+ -class => "header"}, $header) .
+ "</th>\n";
+ }
+}
+
+sub print_sort_th_str {
+ print_sort_th(1, @_);
+}
+
+sub print_sort_th_num {
+ print_sort_th(0, @_);
+}
+
+sub git_project_list_body {
+ my ($projlist, $order, $from, $to, $extra, $no_header) = @_;
+
+ my ($check_forks) = gitweb_check_feature('forks');
+ my @projects = fill_project_list_info($projlist, $check_forks);
+
$order ||= $default_projects_order;
$from = 0 unless defined $from;
$to = $#projects if (!defined $to || $#projects < $to);
if ($check_forks) {
print "<th></th>\n";
}
- if ($order eq "project") {
- @projects = sort {$a->{'path'} cmp $b->{'path'}} @projects;
- print "<th>Project</th>\n";
- } else {
- print "<th>" .
- $cgi->a({-href => href(project=>undef, order=>'project'),
- -class => "header"}, "Project") .
- "</th>\n";
- }
- if ($order eq "descr") {
- @projects = sort {$a->{'descr'} cmp $b->{'descr'}} @projects;
- print "<th>Description</th>\n";
- } else {
- print "<th>" .
- $cgi->a({-href => href(project=>undef, order=>'descr'),
- -class => "header"}, "Description") .
- "</th>\n";
- }
- if ($order eq "owner") {
- @projects = sort {$a->{'owner'} cmp $b->{'owner'}} @projects;
- print "<th>Owner</th>\n";
- } else {
- print "<th>" .
- $cgi->a({-href => href(project=>undef, order=>'owner'),
- -class => "header"}, "Owner") .
- "</th>\n";
- }
- if ($order eq "age") {
- @projects = sort {$a->{'age'} <=> $b->{'age'}} @projects;
- print "<th>Last Change</th>\n";
- } else {
- print "<th>" .
- $cgi->a({-href => href(project=>undef, order=>'age'),
- -class => "header"}, "Last Change") .
- "</th>\n";
- }
- print "<th></th>\n" .
+ print_sort_th_str('project', $order, 'path',
+ 'Project', \@projects);
+ print_sort_th_str('descr', $order, 'descr_long',
+ 'Description', \@projects);
+ print_sort_th_str('owner', $order, 'owner',
+ 'Owner', \@projects);
+ print_sort_th_num('age', $order, 'age',
+ 'Last Change', \@projects);
+ print "<th></th>\n" . # for links
"</tr>\n";
}
my $alternate = 1;
sub git_project_list {
my $order = $cgi->param('o');
if (defined $order && $order !~ m/none|project|descr|owner|age/) {
- die_error(undef, "Unknown order parameter");
+ die_error(400, "Unknown order parameter");
}
my @list = git_get_projects_list();
if (!@list) {
- die_error(undef, "No projects found");
+ die_error(404, "No projects found");
}
git_header_html();
sub git_forks {
my $order = $cgi->param('o');
if (defined $order && $order !~ m/none|project|descr|owner|age/) {
- die_error(undef, "Unknown order parameter");
+ die_error(400, "Unknown order parameter");
}
my @list = git_get_projects_list($project);
if (!@list) {
- die_error(undef, "No forks found");
+ die_error(404, "No forks found");
}
git_header_html();
my %tag = parse_tag($hash);
if (! %tag) {
- die_error(undef, "Unknown tag object");
+ die_error(404, "Unknown tag object");
}
git_print_header_div('commit', esc_html($tag{'name'}), $hash);
git_footer_html();
}
-sub git_blame2 {
+sub git_blame {
my $fd;
my $ftype;
- my ($have_blame) = gitweb_check_feature('blame');
- if (!$have_blame) {
- die_error('403 Permission denied', "Permission denied");
- }
- die_error('404 Not Found', "File name not defined") if (!$file_name);
+ gitweb_check_feature('blame')
+ or die_error(403, "Blame view not allowed");
+
+ die_error(400, "No file name given") unless $file_name;
$hash_base ||= git_get_head_hash($project);
- die_error(undef, "Couldn't find base commit") unless ($hash_base);
+ die_error(404, "Couldn't find base commit") unless ($hash_base);
my %co = parse_commit($hash_base)
- or die_error(undef, "Reading commit failed");
+ or die_error(404, "Commit not found");
if (!defined $hash) {
$hash = git_get_hash_by_path($hash_base, $file_name, "blob")
- or die_error(undef, "Error looking up file");
+ or die_error(404, "Error looking up file");
}
$ftype = git_get_type($hash);
if ($ftype !~ "blob") {
- die_error('400 Bad Request', "Object is not a blob");
+ die_error(400, "Object is not a blob");
}
open ($fd, "-|", git_cmd(), "blame", '-p', '--',
$file_name, $hash_base)
- or die_error(undef, "Open git-blame failed");
+ or die_error(500, "Open git-blame failed");
git_header_html();
my $formats_nav =
$cgi->a({-href => href(action=>"blob", -replay=>1)},
print "</td>\n";
}
open (my $dd, "-|", git_cmd(), "rev-parse", "$full_rev^")
- or die_error(undef, "Open git-rev-parse failed");
+ or die_error(500, "Open git-rev-parse failed");
my $parent_commit = <$dd>;
close $dd;
chomp($parent_commit);
git_footer_html();
}
-sub git_blame {
- my $fd;
-
- my ($have_blame) = gitweb_check_feature('blame');
- if (!$have_blame) {
- die_error('403 Permission denied', "Permission denied");
- }
- die_error('404 Not Found', "File name not defined") if (!$file_name);
- $hash_base ||= git_get_head_hash($project);
- die_error(undef, "Couldn't find base commit") unless ($hash_base);
- my %co = parse_commit($hash_base)
- or die_error(undef, "Reading commit failed");
- if (!defined $hash) {
- $hash = git_get_hash_by_path($hash_base, $file_name, "blob")
- or die_error(undef, "Error lookup file");
- }
- open ($fd, "-|", git_cmd(), "annotate", '-l', '-t', '-r', $file_name, $hash_base)
- or die_error(undef, "Open git-annotate failed");
- git_header_html();
- my $formats_nav =
- $cgi->a({-href => href(action=>"blob", hash=>$hash, hash_base=>$hash_base, file_name=>$file_name)},
- "blob") .
- " | " .
- $cgi->a({-href => href(action=>"history", hash=>$hash, hash_base=>$hash_base, file_name=>$file_name)},
- "history") .
- " | " .
- $cgi->a({-href => href(action=>"blame", file_name=>$file_name)},
- "HEAD");
- git_print_page_nav('','', $hash_base,$co{'tree'},$hash_base, $formats_nav);
- git_print_header_div('commit', esc_html($co{'title'}), $hash_base);
- git_print_page_path($file_name, 'blob', $hash_base);
- print "<div class=\"page_body\">\n";
- print <<HTML;
-<table class="blame">
- <tr>
- <th>Commit</th>
- <th>Age</th>
- <th>Author</th>
- <th>Line</th>
- <th>Data</th>
- </tr>
-HTML
- my @line_class = (qw(light dark));
- my $line_class_len = scalar (@line_class);
- my $line_class_num = $#line_class;
- while (my $line = <$fd>) {
- my $long_rev;
- my $short_rev;
- my $author;
- my $time;
- my $lineno;
- my $data;
- my $age;
- my $age_str;
- my $age_class;
-
- chomp $line;
- $line_class_num = ($line_class_num + 1) % $line_class_len;
-
- if ($line =~ m/^([0-9a-fA-F]{40})\t\(\s*([^\t]+)\t(\d+) [+-]\d\d\d\d\t(\d+)\)(.*)$/) {
- $long_rev = $1;
- $author = $2;
- $time = $3;
- $lineno = $4;
- $data = $5;
- } else {
- print qq( <tr><td colspan="5" class="error">Unable to parse: $line</td></tr>\n);
- next;
- }
- $short_rev = substr ($long_rev, 0, 8);
- $age = time () - $time;
- $age_str = age_string ($age);
- $age_str =~ s/ /&nbsp;/g;
- $age_class = age_class($age);
- $author = esc_html ($author);
- $author =~ s/ /&nbsp;/g;
-
- $data = untabify($data);
- $data = esc_html ($data);
-
- print <<HTML;
- <tr class="$line_class[$line_class_num]">
- <td class="sha1"><a href="${\href (action=>"commit", hash=>$long_rev)}" class="text">$short_rev..</a></td>
- <td class="$age_class">$age_str</td>
- <td>$author</td>
- <td class="linenr"><a id="$lineno" href="#$lineno" class="linenr">$lineno</a></td>
- <td class="pre">$data</td>
- </tr>
-HTML
- } # while (my $line = <$fd>)
- print "</table>\n\n";
- close $fd
- or print "Reading blob failed.\n";
- print "</div>";
- git_footer_html();
-}
-
sub git_tags {
my $head = git_get_head_hash($project);
git_header_html();
if (defined $file_name) {
my $base = $hash_base || git_get_head_hash($project);
$hash = git_get_hash_by_path($base, $file_name, "blob")
- or die_error(undef, "Error lookup file");
+ or die_error(404, "Cannot find file");
} else {
- die_error(undef, "No file name defined");
+ die_error(400, "No file name defined");
}
} elsif ($hash =~ m/^[0-9a-fA-F]{40}$/) {
# blobs defined by non-textual hash id's can be cached
}
open my $fd, "-|", git_cmd(), "cat-file", "blob", $hash
- or die_error(undef, "Open git-cat-file blob '$hash' failed");
+ or die_error(500, "Open git-cat-file blob '$hash' failed");
# content-type (can include charset)
$type = blob_contenttype($fd, $file_name, $type);
if (defined $file_name) {
my $base = $hash_base || git_get_head_hash($project);
$hash = git_get_hash_by_path($base, $file_name, "blob")
- or die_error(undef, "Error lookup file");
+ or die_error(404, "Cannot find file");
} else {
- die_error(undef, "No file name defined");
+ die_error(400, "No file name defined");
}
} elsif ($hash =~ m/^[0-9a-fA-F]{40}$/) {
# blobs defined by non-textual hash id's can be cached
my ($have_blame) = gitweb_check_feature('blame');
open my $fd, "-|", git_cmd(), "cat-file", "blob", $hash
- or die_error(undef, "Couldn't cat $file_name, $hash");
+ or die_error(500, "Couldn't cat $file_name, $hash");
my $mimetype = blob_mimetype($fd, $file_name);
if ($mimetype !~ m!^(?:text/|image/(?:gif|png|jpeg)$)! && -B $fd) {
close $fd;
}
$/ = "\0";
open my $fd, "-|", git_cmd(), "ls-tree", '-z', $hash
- or die_error(undef, "Open git-ls-tree failed");
+ or die_error(500, "Open git-ls-tree failed");
my @entries = map { chomp; $_ } <$fd>;
- close $fd or die_error(undef, "Reading tree failed");
+ close $fd or die_error(404, "Reading tree failed");
$/ = "\n";
my $refs = git_get_references();
my $format = $cgi->param('sf');
if (!@supported_fmts) {
- die_error('403 Permission denied', "Permission denied");
+ die_error(403, "Snapshots not allowed");
}
# default to first supported snapshot format
$format ||= $supported_fmts[0];
if ($format !~ m/^[a-z0-9]+$/) {
- die_error(undef, "Invalid snapshot format parameter");
+ die_error(400, "Invalid snapshot format parameter");
} elsif (!exists($known_snapshot_formats{$format})) {
- die_error(undef, "Unknown snapshot format");
+ die_error(400, "Unknown snapshot format");
} elsif (!grep($_ eq $format, @supported_fmts)) {
- die_error(undef, "Unsupported snapshot format");
+ die_error(403, "Unsupported snapshot format");
}
if (!defined $hash) {
-status => '200 OK');
open my $fd, "-|", $cmd
- or die_error(undef, "Execute git-archive failed");
+ or die_error(500, "Execute git-archive failed");
binmode STDOUT, ':raw';
print <$fd>;
binmode STDOUT, ':utf8'; # as set at the beginning of gitweb.cgi
sub git_commit {
$hash ||= $hash_base || "HEAD";
- my %co = parse_commit($hash);
- if (!%co) {
- die_error(undef, "Unknown commit object");
- }
+ my %co = parse_commit($hash)
+ or die_error(404, "Unknown commit object");
my %ad = parse_date($co{'author_epoch'}, $co{'author_tz'});
my %cd = parse_date($co{'committer_epoch'}, $co{'committer_tz'});
@diff_opts,
(@$parents <= 1 ? $parent : '-c'),
$hash, "--"
- or die_error(undef, "Open git-diff-tree failed");
+ or die_error(500, "Open git-diff-tree failed");
@difftree = map { chomp; $_ } <$fd>;
- close $fd or die_error(undef, "Reading git-diff-tree failed");
+ close $fd or die_error(404, "Reading git-diff-tree failed");
# non-textual hash id's can be cached
my $expires;
open my $fd, "-|", quote_command(
git_cmd(), 'cat-file', '-t', $object_id) . ' 2> /dev/null'
- or die_error('404 Not Found', "Object does not exist");
+ or die_error(404, "Object does not exist");
$type = <$fd>;
chomp $type;
close $fd
- or die_error('404 Not Found', "Object does not exist");
+ or die_error(404, "Object does not exist");
# - hash_base and file_name
} elsif ($hash_base && defined $file_name) {
$file_name =~ s,/+$,,;
system(git_cmd(), "cat-file", '-e', $hash_base) == 0
- or die_error('404 Not Found', "Base object does not exist");
+ or die_error(404, "Base object does not exist");
	# here errors should not happen
open my $fd, "-|", git_cmd(), "ls-tree", $hash_base, "--", $file_name
- or die_error(undef, "Open git-ls-tree failed");
+ or die_error(500, "Open git-ls-tree failed");
my $line = <$fd>;
close $fd;
#'100644 blob 0fa3f3a66fb6a137f6ec2c19351ed4d807070ffa panic.c'
unless ($line && $line =~ m/^([0-9]+) (.+) ([0-9a-fA-F]{40})\t/) {
- die_error('404 Not Found', "File or directory for given base does not exist");
+ die_error(404, "File or directory for given base does not exist");
}
$type = $2;
$hash = $3;
} else {
- die_error('404 Not Found', "Not enough information to find object");
+ die_error(400, "Not enough information to find object");
}
print $cgi->redirect(-uri => href(action=>$type, -full=>1,
open $fd, "-|", git_cmd(), "diff-tree", '-r', @diff_opts,
$hash_parent_base, $hash_base,
"--", (defined $file_parent ? $file_parent : ()), $file_name
- or die_error(undef, "Open git-diff-tree failed");
+ or die_error(500, "Open git-diff-tree failed");
@difftree = map { chomp; $_ } <$fd>;
close $fd
- or die_error(undef, "Reading git-diff-tree failed");
+ or die_error(404, "Reading git-diff-tree failed");
@difftree
- or die_error('404 Not Found', "Blob diff not found");
+ or die_error(404, "Blob diff not found");
} elsif (defined $hash &&
$hash =~ /[0-9a-fA-F]{40}/) {
# read filtered raw output
open $fd, "-|", git_cmd(), "diff-tree", '-r', @diff_opts,
$hash_parent_base, $hash_base, "--"
- or die_error(undef, "Open git-diff-tree failed");
+ or die_error(500, "Open git-diff-tree failed");
@difftree =
# ':100644 100644 03b21826... 3b93d5e7... M ls-files.c'
# $hash == to_id
grep { /^:[0-7]{6} [0-7]{6} [0-9a-fA-F]{40} $hash/ }
map { chomp; $_ } <$fd>;
close $fd
- or die_error(undef, "Reading git-diff-tree failed");
+ or die_error(404, "Reading git-diff-tree failed");
@difftree
- or die_error('404 Not Found', "Blob diff not found");
+ or die_error(404, "Blob diff not found");
} else {
- die_error('404 Not Found', "Missing one of the blob diff parameters");
+ die_error(400, "Missing one of the blob diff parameters");
}
if (@difftree > 1) {
- die_error('404 Not Found', "Ambiguous blob diff specification");
+ die_error(400, "Ambiguous blob diff specification");
}
%diffinfo = parse_difftree_raw_line($difftree[0]);
'-p', ($format eq 'html' ? "--full-index" : ()),
$hash_parent_base, $hash_base,
"--", (defined $file_parent ? $file_parent : ()), $file_name
- or die_error(undef, "Open git-diff-tree failed");
+ or die_error(500, "Open git-diff-tree failed");
}
# old/legacy style URI
open $fd, "-|", git_cmd(), "diff", @diff_opts,
'-p', ($format eq 'html' ? "--full-index" : ()),
$hash_parent, $hash, "--"
- or die_error(undef, "Open git-diff failed");
+ or die_error(500, "Open git-diff failed");
} else {
- die_error('404 Not Found', "Missing one of the blob diff parameters")
+ die_error(400, "Missing one of the blob diff parameters")
unless %diffinfo;
}
print "X-Git-Url: " . $cgi->self_url() . "\n\n";
} else {
- die_error(undef, "Unknown blobdiff format");
+ die_error(400, "Unknown blobdiff format");
}
# patch
sub git_commitdiff {
my $format = shift || 'html';
$hash ||= $hash_base || "HEAD";
- my %co = parse_commit($hash);
- if (!%co) {
- die_error(undef, "Unknown commit object");
- }
+ my %co = parse_commit($hash)
+ or die_error(404, "Unknown commit object");
# choose format for commitdiff for merge
if (! defined $hash_parent && @{$co{'parents'}} > 1) {
open $fd, "-|", git_cmd(), "diff-tree", '-r', @diff_opts,
"--no-commit-id", "--patch-with-raw", "--full-index",
$hash_parent_param, $hash, "--"
- or die_error(undef, "Open git-diff-tree failed");
+ or die_error(500, "Open git-diff-tree failed");
while (my $line = <$fd>) {
chomp $line;
} elsif ($format eq 'plain') {
open $fd, "-|", git_cmd(), "diff-tree", '-r', @diff_opts,
'-p', $hash_parent_param, $hash, "--"
- or die_error(undef, "Open git-diff-tree failed");
+ or die_error(500, "Open git-diff-tree failed");
} else {
- die_error(undef, "Unknown commitdiff format");
+ die_error(400, "Unknown commitdiff format");
}
# non-textual hash id's can be cached
$page = 0;
}
my $ftype;
- my %co = parse_commit($hash_base);
- if (!%co) {
- die_error(undef, "Unknown commit object");
- }
+ my %co = parse_commit($hash_base)
+ or die_error(404, "Unknown commit object");
my $refs = git_get_references();
my $limit = sprintf("--max-count=%i", (100 * ($page+1)));
my @commitlist = parse_commits($hash_base, 101, (100 * $page),
- $file_name, "--full-history");
- if (!@commitlist) {
- die_error('404 Not Found', "No such file or directory on given branch");
- }
+ $file_name, "--full-history")
+ or die_error(404, "No such file or directory on given branch");
if (!defined $hash && defined $file_name) {
# some commits could have deleted file in question,
$ftype = git_get_type($hash);
}
if (!defined $ftype) {
- die_error(undef, "Unknown type of object");
+ die_error(500, "Unknown type of object");
}
my $paging_nav = '';
}
sub git_search {
- my ($have_search) = gitweb_check_feature('search');
- if (!$have_search) {
- die_error('403 Permission denied', "Permission denied");
- }
+ gitweb_check_feature('search') or die_error(403, "Search is disabled");
if (!defined $searchtext) {
- die_error(undef, "Text field empty");
+ die_error(400, "Text field is empty");
}
if (!defined $hash) {
$hash = git_get_head_hash($project);
}
my %co = parse_commit($hash);
if (!%co) {
- die_error(undef, "Unknown commit object");
+ die_error(404, "Unknown commit object");
}
if (!defined $page) {
$page = 0;
if ($searchtype eq 'pickaxe') {
# pickaxe may take all resources of your box and run for several minutes
# with every query - so decide by yourself how public you make this feature
- my ($have_pickaxe) = gitweb_check_feature('pickaxe');
- if (!$have_pickaxe) {
- die_error('403 Permission denied', "Permission denied");
- }
+ gitweb_check_feature('pickaxe')
+ or die_error(403, "Pickaxe is disabled");
}
if ($searchtype eq 'grep') {
- my ($have_grep) = gitweb_check_feature('grep');
- if (!$have_grep) {
- die_error('403 Permission denied', "Permission denied");
- }
+ gitweb_check_feature('grep')
+ or die_error(403, "Grep is disabled");
}
git_header_html();
# Atom: http://www.atomenabled.org/developers/syndication/
# RSS: http://www.notestips.com/80256B3A007F2692/1/NAMO5P9UPQ
if ($format ne 'rss' && $format ne 'atom') {
- die_error(undef, "Unknown web feed format");
+ die_error(400, "Unknown web feed format");
}
# log/feed of current (HEAD) branch, log of given branch, history of file/directory
lst = &((*lst)->next);
*lst = (*lst)->next;
- if (!verify_pack(target, 0))
+ if (!verify_pack(target))
install_packed_git(target);
else
remote->can_update_info_refs = 0;
lst = &((*lst)->next);
*lst = (*lst)->next;
- if (verify_pack(target, 0))
+ if (verify_pack(target))
return -1;
install_packed_git(target);
struct idx_entry
{
- const unsigned char *sha1;
off_t offset;
+ const unsigned char *sha1;
+ unsigned int nr;
};
static int compare_entries(const void *e1, const void *e2)
return 0;
}
+int check_pack_crc(struct packed_git *p, struct pack_window **w_curs,
+ off_t offset, off_t len, unsigned int nr)
+{
+ const uint32_t *index_crc;
+ uint32_t data_crc = crc32(0, Z_NULL, 0);
+
+ do {
+ unsigned int avail;
+ void *data = use_pack(p, w_curs, offset, &avail);
+ if (avail > len)
+ avail = len;
+ data_crc = crc32(data_crc, data, avail);
+ offset += avail;
+ len -= avail;
+ } while (len);
+
+ index_crc = p->index_data;
+ index_crc += 2 + 256 + p->num_objects * (20/4) + nr;
+
+ return data_crc != ntohl(*index_crc);
+}
+
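The pointer arithmetic in check_pack_crc() encodes the layout of a version-2 pack index: two 32-bit header words (magic and version), a 256-entry fan-out table, the table of 20-byte SHA-1s, and only then the per-object CRC32 table. A minimal standalone sketch of that offset computation, written purely for illustration and not part of the patch:

    #include <stdint.h>

    /* Illustrative only: locate the nr-th CRC32 entry of a version-2 .idx
     * file mapped at index_data, using the same arithmetic as
     * check_pack_crc() above. */
    static const uint32_t *v2_index_crc(const uint32_t *index_data,
                                        uint32_t num_objects, uint32_t nr)
    {
        const uint32_t *p = index_data;
        p += 2;                      /* magic "\377tOc" + version word */
        p += 256;                    /* fan-out table, one word per bucket */
        p += num_objects * (20 / 4); /* SHA-1 table, 5 words per object */
        return p + nr;               /* CRC32 table starts here */
    }

The same layout is exercised later by the index-v2 CRC corruption test, which seeks to 8 + 256*4 + N*20 bytes into the .idx file before clobbering the first CRC entry.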
static int verify_packfile(struct packed_git *p,
struct pack_window **w_curs)
{
* we do not do scan-streaming check on the pack file.
*/
nr_objects = p->num_objects;
- entries = xmalloc(nr_objects * sizeof(*entries));
+ entries = xmalloc((nr_objects + 1) * sizeof(*entries));
+ entries[nr_objects].offset = pack_sig_ofs;
/* first sort entries by pack offset, since unpacking them is more efficient that way */
for (i = 0; i < nr_objects; i++) {
entries[i].sha1 = nth_packed_object_sha1(p, i);
if (!entries[i].sha1)
die("internal error pack-check nth-packed-object");
- entries[i].offset = find_pack_entry_one(entries[i].sha1, p);
- if (!entries[i].offset)
- die("internal error pack-check find-pack-entry-one");
+ entries[i].offset = nth_packed_object_offset(p, i);
+ entries[i].nr = i;
}
qsort(entries, nr_objects, sizeof(*entries), compare_entries);
enum object_type type;
unsigned long size;
+ if (p->index_version > 1) {
+ off_t offset = entries[i].offset;
+ off_t len = entries[i+1].offset - offset;
+ unsigned int nr = entries[i].nr;
+ if (check_pack_crc(p, w_curs, offset, len, nr))
+ err = error("index CRC mismatch for object %s "
+ "from %s at offset %"PRIuMAX"",
+ sha1_to_hex(entries[i].sha1),
+ p->pack_name, (uintmax_t)offset);
+ }
data = unpack_entry(p, entries[i].offset, &type, &size);
if (!data) {
err = error("cannot unpack %s from %s at offset %"PRIuMAX"",
return err;
}
-
-#define MAX_CHAIN 50
-
-static void show_pack_info(struct packed_git *p)
-{
- uint32_t nr_objects, i, chain_histogram[MAX_CHAIN+1];
-
- nr_objects = p->num_objects;
- memset(chain_histogram, 0, sizeof(chain_histogram));
- init_pack_revindex();
-
- for (i = 0; i < nr_objects; i++) {
- const unsigned char *sha1;
- unsigned char base_sha1[20];
- const char *type;
- unsigned long size;
- unsigned long store_size;
- off_t offset;
- unsigned int delta_chain_length;
-
- sha1 = nth_packed_object_sha1(p, i);
- if (!sha1)
- die("internal error pack-check nth-packed-object");
- offset = find_pack_entry_one(sha1, p);
- if (!offset)
- die("internal error pack-check find-pack-entry-one");
-
- type = packed_object_info_detail(p, offset, &size, &store_size,
- &delta_chain_length,
- base_sha1);
- printf("%s ", sha1_to_hex(sha1));
- if (!delta_chain_length)
- printf("%-6s %lu %lu %"PRIuMAX"\n",
- type, size, store_size, (uintmax_t)offset);
- else {
- printf("%-6s %lu %lu %"PRIuMAX" %u %s\n",
- type, size, store_size, (uintmax_t)offset,
- delta_chain_length, sha1_to_hex(base_sha1));
- if (delta_chain_length <= MAX_CHAIN)
- chain_histogram[delta_chain_length]++;
- else
- chain_histogram[0]++;
- }
- }
-
- for (i = 0; i <= MAX_CHAIN; i++) {
- if (!chain_histogram[i])
- continue;
- printf("chain length = %d: %d object%s\n", i,
- chain_histogram[i], chain_histogram[i] > 1 ? "s" : "");
- }
- if (chain_histogram[0])
- printf("chain length > %d: %d object%s\n", MAX_CHAIN,
- chain_histogram[0], chain_histogram[0] > 1 ? "s" : "");
-}
-
-int verify_pack(struct packed_git *p, int verbose)
+int verify_pack(struct packed_git *p)
{
off_t index_size;
const unsigned char *index_base;
err |= verify_packfile(p, &w_curs);
unuse_pack(&w_curs);
- if (verbose) {
- if (err)
- printf("%s: bad\n", p->pack_name);
- else {
- show_pack_info(p);
- printf("%s: ok\n", p->pack_name);
- }
- }
-
return err;
}
--- /dev/null
+#include "cache.h"
+#include "refs.h"
+#include "tag.h"
+#include "pack-refs.h"
+
+struct ref_to_prune {
+ struct ref_to_prune *next;
+ unsigned char sha1[20];
+ char name[FLEX_ARRAY];
+};
+
+struct pack_refs_cb_data {
+ unsigned int flags;
+ struct ref_to_prune *ref_to_prune;
+ FILE *refs_file;
+};
+
+static int do_not_prune(int flags)
+{
+ /* If it is already packed or if it is a symref,
+ * do not prune it.
+ */
+ return (flags & (REF_ISSYMREF|REF_ISPACKED));
+}
+
+static int handle_one_ref(const char *path, const unsigned char *sha1,
+ int flags, void *cb_data)
+{
+ struct pack_refs_cb_data *cb = cb_data;
+ int is_tag_ref;
+
+ /* Do not pack the symbolic refs */
+ if ((flags & REF_ISSYMREF))
+ return 0;
+ is_tag_ref = !prefixcmp(path, "refs/tags/");
+
+ /* ALWAYS pack refs that were already packed or are tags */
+ if (!(cb->flags & PACK_REFS_ALL) && !is_tag_ref && !(flags & REF_ISPACKED))
+ return 0;
+
+ fprintf(cb->refs_file, "%s %s\n", sha1_to_hex(sha1), path);
+ if (is_tag_ref) {
+ struct object *o = parse_object(sha1);
+ if (o->type == OBJ_TAG) {
+ o = deref_tag(o, path, 0);
+ if (o)
+ fprintf(cb->refs_file, "^%s\n",
+ sha1_to_hex(o->sha1));
+ }
+ }
+
+ if ((cb->flags & PACK_REFS_PRUNE) && !do_not_prune(flags)) {
+ int namelen = strlen(path) + 1;
+ struct ref_to_prune *n = xcalloc(1, sizeof(*n) + namelen);
+ hashcpy(n->sha1, sha1);
+ strcpy(n->name, path);
+ n->next = cb->ref_to_prune;
+ cb->ref_to_prune = n;
+ }
+ return 0;
+}
+
+/* make sure nobody touched the ref, and unlink */
+static void prune_ref(struct ref_to_prune *r)
+{
+ struct ref_lock *lock = lock_ref_sha1(r->name + 5, r->sha1);
+
+ if (lock) {
+ unlink(git_path("%s", r->name));
+ unlock_ref(lock);
+ }
+}
+
+static void prune_refs(struct ref_to_prune *r)
+{
+ while (r) {
+ prune_ref(r);
+ r = r->next;
+ }
+}
+
+static struct lock_file packed;
+
+int pack_refs(unsigned int flags)
+{
+ int fd;
+ struct pack_refs_cb_data cbdata;
+
+ memset(&cbdata, 0, sizeof(cbdata));
+ cbdata.flags = flags;
+
+ fd = hold_lock_file_for_update(&packed, git_path("packed-refs"), 1);
+ cbdata.refs_file = fdopen(fd, "w");
+ if (!cbdata.refs_file)
+ die("unable to create ref-pack file structure (%s)",
+ strerror(errno));
+
+ /* perhaps other traits later as well */
+ fprintf(cbdata.refs_file, "# pack-refs with: peeled \n");
+
+ for_each_ref(handle_one_ref, &cbdata);
+ if (ferror(cbdata.refs_file))
+ die("failed to write ref-pack file");
+ if (fflush(cbdata.refs_file) || fsync(fd) || fclose(cbdata.refs_file))
+ die("failed to write ref-pack file (%s)", strerror(errno));
+ /*
+ * Since the lock file was fdopen()'ed and then fclose()'ed above,
+ * assign -1 to the lock file descriptor so that commit_lock_file()
+ * won't try to close() it.
+ */
+ packed.fd = -1;
+ if (commit_lock_file(&packed) < 0)
+ die("unable to overwrite old ref-pack file (%s)", strerror(errno));
+ if (cbdata.flags & PACK_REFS_PRUNE)
+ prune_refs(cbdata.ref_to_prune);
+ return 0;
+}
--- /dev/null
+#ifndef PACK_REFS_H
+#define PACK_REFS_H
+
+/*
+ * Flags for controlling behaviour of pack_refs()
+ * PACK_REFS_PRUNE: Prune loose refs after packing
+ * PACK_REFS_ALL: Pack _all_ refs, not just tags and already packed refs
+ */
+#define PACK_REFS_PRUNE 0x0001
+#define PACK_REFS_ALL 0x0002
+
+/*
+ * Write a packed-refs file for the current repository.
+ * flags: Combination of the above PACK_REFS_* flags.
+ */
+int pack_refs(unsigned int flags);
+
+#endif /* PACK_REFS_H */
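As a usage sketch only (the actual callers are not part of this hunk, and the function name below is made up for illustration), a command that wants the equivalent of packing every ref and then dropping the loose copies would combine the two flags:

    #include "pack-refs.h"

    /* Hypothetical caller: pack all refs, then prune the loose ones.
     * Illustrative sketch, not the real builtin. */
    int pack_and_prune_all_refs(void)
    {
        return pack_refs(PACK_REFS_ALL | PACK_REFS_PRUNE);
    }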
return -1 - i;
}
-void init_pack_revindex(void)
+static void init_pack_revindex(void)
{
int num;
struct packed_git *p;
struct pack_revindex *rix;
struct revindex_entry *revindex;
+ if (!pack_revindex_hashsz)
+ init_pack_revindex();
num = pack_revindex_ix(p);
if (num < 0)
- die("internal error: pack revindex uninitialized");
+ die("internal error: pack revindex fubar");
rix = &pack_revindex[num];
if (!rix->revindex)
unsigned int nr;
};
-void init_pack_revindex(void);
struct revindex_entry *find_pack_revindex(struct packed_git *p, off_t ofs);
#endif
};
extern char *write_idx_file(char *index_name, struct pack_idx_entry **objects, int nr_objects, unsigned char *sha1);
-
-extern int verify_pack(struct packed_git *, int);
+extern int check_pack_crc(struct packed_git *p, struct pack_window **w_curs, off_t offset, off_t len, unsigned int nr);
+extern int verify_pack(struct packed_git *);
extern void fixup_pack_header_footer(int, unsigned char *, const char *, uint32_t);
extern char *index_pack_lockfile(int fd);
fprintf(stderr, "usage: %s\n", *usagestr++);
while (*usagestr && **usagestr)
fprintf(stderr, " or: %s\n", *usagestr++);
- while (*usagestr)
- fprintf(stderr, " %s\n", *usagestr++);
+ while (*usagestr) {
+ fprintf(stderr, "%s%s\n",
+ **usagestr ? " " : "",
+ *usagestr);
+ usagestr++;
+ }
if (opts->type != OPTION_GROUP)
fputc('\n', stderr);
break;
case OPTION_INTEGER:
if (opts->flags & PARSE_OPT_OPTARG)
- pos += fprintf(stderr, "[<n>]");
+ if (opts->long_name)
+ pos += fprintf(stderr, "[=<n>]");
+ else
+ pos += fprintf(stderr, "[<n>]");
else
pos += fprintf(stderr, " <n>");
break;
case OPTION_STRING:
if (opts->argh) {
if (opts->flags & PARSE_OPT_OPTARG)
- pos += fprintf(stderr, " [<%s>]", opts->argh);
+ if (opts->long_name)
+ pos += fprintf(stderr, "[=<%s>]", opts->argh);
+ else
+ pos += fprintf(stderr, "[<%s>]", opts->argh);
else
pos += fprintf(stderr, " <%s>", opts->argh);
} else {
if (opts->flags & PARSE_OPT_OPTARG)
- pos += fprintf(stderr, " [...]");
+ if (opts->long_name)
+ pos += fprintf(stderr, "[=...]");
+ else
+ pos += fprintf(stderr, "[...]");
else
pos += fprintf(stderr, " ...");
}
/* We allow "recursive" symbolic links. Only within reason, though. */
#define MAXDEPTH 5
+const char *make_relative_path(const char *abs, const char *base)
+{
+ static char buf[PATH_MAX + 1];
+ int baselen;
+ if (!base)
+ return abs;
+ baselen = strlen(base);
+ if (prefixcmp(abs, base))
+ return abs;
+ if (abs[baselen] == '/')
+ baselen++;
+ else if (base[baselen - 1] != '/')
+ return abs;
+ strcpy(buf, abs + baselen);
+ return buf;
+}
+
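A few illustrative calls showing what the new helper returns; the paths are made up, and the wrapper function name is hypothetical (it only assumes the helper above is linked in):

    #include <assert.h>
    #include <string.h>

    extern const char *make_relative_path(const char *abs, const char *base);

    static void make_relative_path_examples(void)
    {
        /* base prefixes abs: the prefix and its trailing '/' are stripped */
        assert(!strcmp(make_relative_path("/work/repo/.git", "/work/repo"),
                       ".git"));
        /* base does not prefix abs: abs is returned untouched */
        assert(!strcmp(make_relative_path("/work/repo/.git", "/elsewhere"),
                       "/work/repo/.git"));
        /* no base given: abs is returned untouched */
        assert(!strcmp(make_relative_path("/work/repo/.git", NULL),
                       "/work/repo/.git"));
    }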
const char *make_absolute_path(const char *path)
{
static char bufs[2][PATH_MAX + 1], *buf = bufs[0], *next_buf = bufs[1];
return 0;
}
+static int is_empty_blob_sha1(const unsigned char *sha1)
+{
+ static const unsigned char empty_blob_sha1[20] = {
+ 0xe6,0x9d,0xe2,0x9b,0xb2,0xd1,0xd6,0x43,0x4b,0x8b,
+ 0x29,0xae,0x77,0x5a,0xd8,0xc2,0xe4,0x8c,0x53,0x91
+ };
+
+ return !hashcmp(sha1, empty_blob_sha1);
+}
+
static int ce_match_stat_basic(struct cache_entry *ce, struct stat *st)
{
unsigned int changed = 0;
if (ce->ce_size != (unsigned int) st->st_size)
changed |= DATA_CHANGED;
+ /* Racily smudged entry? */
+ if (!ce->ce_size) {
+ if (!is_empty_blob_sha1(ce->sha1))
+ changed |= DATA_CHANGED;
+ }
+
return changed;
}
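The hard-coded constant in is_empty_blob_sha1() is the SHA-1 of the empty blob, e69de29bb2d1d6434b8b29ae775ad8c2e48c5391. The new "racily smudged" check covers index entries whose recorded size is 0: such an entry can only really be unchanged if it actually records the empty blob, otherwise it must be reported as DATA_CHANGED. A sketch restating that rule in isolation (illustrative helper, not the git code path):

    #include <string.h>

    /* Illustrative restatement of the racy-smudge rule added above. */
    static int zero_size_entry_is_suspect(unsigned int ce_size,
                                          const unsigned char *ce_sha1,
                                          const unsigned char *empty_blob_sha1)
    {
        return ce_size == 0 && memcmp(ce_sha1, empty_blob_sha1, 20) != 0;
    }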
work_tree = get_git_work_tree();
git_dir = get_git_dir();
if (!is_absolute_path(git_dir))
- set_git_dir(make_absolute_path(git_dir));
+ git_dir = make_absolute_path(git_dir);
if (!work_tree || chdir(work_tree))
die("This operation must be run in a work tree");
+ set_git_dir(make_relative_path(git_dir, work_tree));
initialized = 1;
}
return 0;
}
+int safe_create_leading_directories_const(const char *path)
+{
+ /* path points to cache entries, so xstrdup before messing with it */
+ char *buf = xstrdup(path);
+ int result = safe_create_leading_directories(buf);
+ free(buf);
+ return result;
+}
+
char *sha1_to_hex(const unsigned char *sha1)
{
static int bufno;
return win->base + offset;
}
+static struct packed_git *alloc_packed_git(int extra)
+{
+ struct packed_git *p = xmalloc(sizeof(*p) + extra);
+ memset(p, 0, sizeof(*p));
+ p->pack_fd = -1;
+ return p;
+}
+
struct packed_git *add_packed_git(const char *path, int path_len, int local)
{
struct stat st;
- struct packed_git *p = xmalloc(sizeof(*p) + path_len + 2);
+ struct packed_git *p = alloc_packed_git(path_len + 2);
/*
* Make sure a corresponding .pack file exists and that
* the index looks sane.
*/
path_len -= strlen(".idx");
- if (path_len < 1)
+ if (path_len < 1) {
+ free(p);
return NULL;
+ }
memcpy(p->pack_name, path, path_len);
strcpy(p->pack_name + path_len, ".pack");
if (stat(p->pack_name, &st) || !S_ISREG(st.st_mode)) {
/* ok, it looks sane as far as we can check without
* actually mapping the pack file.
*/
- p->index_version = 0;
- p->index_data = NULL;
- p->index_size = 0;
- p->num_objects = 0;
p->pack_size = st.st_size;
- p->next = NULL;
- p->windows = NULL;
- p->pack_fd = -1;
p->pack_local = local;
p->mtime = st.st_mtime;
if (path_len < 40 || get_sha1_hex(path + path_len - 40, p->sha1))
{
const char *idx_path = sha1_pack_index_name(sha1);
const char *path = sha1_pack_name(sha1);
- struct packed_git *p = xmalloc(sizeof(*p) + strlen(path) + 2);
+ struct packed_git *p = alloc_packed_git(strlen(path) + 1);
+ strcpy(p->pack_name, path);
+ hashcpy(p->sha1, sha1);
if (check_packed_git_idx(idx_path, p)) {
free(p);
return NULL;
}
- strcpy(p->pack_name, path);
- p->pack_size = 0;
- p->next = NULL;
- p->windows = NULL;
- p->pack_fd = -1;
- hashcpy(p->sha1, sha1);
return p;
}
prepare_packed_git();
}
+static void mark_bad_packed_object(struct packed_git *p,
+ const unsigned char *sha1)
+{
+ unsigned i;
+ for (i = 0; i < p->num_bad_objects; i++)
+ if (!hashcmp(sha1, p->bad_object_sha1 + 20 * i))
+ return;
+ p->bad_object_sha1 = xrealloc(p->bad_object_sha1, 20 * (p->num_bad_objects + 1));
+ hashcpy(p->bad_object_sha1 + 20 * p->num_bad_objects, sha1);
+ p->num_bad_objects++;
+}
+
int check_sha1_signature(const unsigned char *sha1, void *map, unsigned long size, const char *type)
{
unsigned char real_sha1[20];
while (c & 128) {
base_offset += 1;
if (!base_offset || MSB(base_offset, 7))
- die("offset value overflow for delta base object");
+ return 0; /* overflow */
c = base_info[used++];
base_offset = (base_offset << 7) + (c & 127);
}
base_offset = delta_obj_offset - base_offset;
if (base_offset >= delta_obj_offset)
- die("delta base offset out of bound");
+ return 0; /* out of bound */
*curpos += used;
} else if (type == OBJ_REF_DELTA) {
/* The base entry _must_ be in the same pack */
base_offset = find_pack_entry_one(base_info, p);
- if (!base_offset)
- die("failed to find delta-pack base object %s",
- sha1_to_hex(base_info));
*curpos += 20;
} else
die("I am totally screwed");
return typename(type);
case OBJ_OFS_DELTA:
obj_offset = get_delta_base(p, &w_curs, &curpos, type, obj_offset);
+ if (!obj_offset)
+ die("pack %s contains bad delta base reference of type %s",
+ p->pack_name, typename(type));
if (*delta_chain_length == 0) {
revidx = find_pack_revindex(p, obj_offset);
hashcpy(base_sha1, nth_packed_object_sha1(p, revidx->nr));
off_t base_offset;
base_offset = get_delta_base(p, w_curs, &curpos, *type, obj_offset);
+ if (!base_offset) {
+ error("failed to validate delta base reference "
+ "at offset %"PRIuMAX" from %s",
+ (uintmax_t)curpos, p->pack_name);
+ return NULL;
+ }
base = cache_or_unpack_entry(p, base_offset, &base_size, type, 0);
- if (!base)
- die("failed to read delta base object"
- " at %"PRIuMAX" from %s",
- (uintmax_t)base_offset, p->pack_name);
+ if (!base) {
+ /*
+ * We're probably in deep shit, but let's try to fetch
+ * the required base anyway from another pack or loose.
+ * This is costly but should happen only in the presence
+ * of a corrupted pack, and is better than failing outright.
+ */
+ struct revindex_entry *revidx = find_pack_revindex(p, base_offset);
+ const unsigned char *base_sha1 =
+ nth_packed_object_sha1(p, revidx->nr);
+ error("failed to read delta base object %s"
+ " at offset %"PRIuMAX" from %s",
+ sha1_to_hex(base_sha1), (uintmax_t)base_offset,
+ p->pack_name);
+ mark_bad_packed_object(p, base_sha1);
+ base = read_sha1_file(base_sha1, type, &base_size);
+ if (!base)
+ return NULL;
+ }
delta_data = unpack_compressed_entry(p, w_curs, curpos, delta_size);
- if (!delta_data)
- die("failed to unpack compressed delta"
- " at %"PRIuMAX" from %s",
- (uintmax_t)curpos, p->pack_name);
+ if (!delta_data) {
+ error("failed to unpack compressed delta "
+ "at offset %"PRIuMAX" from %s",
+ (uintmax_t)curpos, p->pack_name);
+ free(base);
+ return NULL;
+ }
result = patch_delta(base, base_size,
delta_data, delta_size,
sizep);
data = unpack_compressed_entry(p, &w_curs, curpos, *sizep);
break;
default:
- die("unknown object type %i in %s", *type, p->pack_name);
+ data = NULL;
+ error("unknown object type %i at offset %"PRIuMAX" in %s",
+ *type, (uintmax_t)obj_offset, p->pack_name);
}
unuse_pack(&w_curs);
return data;
}
}
-static off_t nth_packed_object_offset(const struct packed_git *p, uint32_t n)
+off_t nth_packed_object_offset(const struct packed_git *p, uint32_t n)
{
const unsigned char *index = p->index_data;
index += 4 * 256;
goto next;
}
+ if (p->num_bad_objects) {
+ unsigned i;
+ for (i = 0; i < p->num_bad_objects; i++)
+ if (!hashcmp(sha1, p->bad_object_sha1 + 20 * i))
+ goto next;
+ }
+
offset = find_pack_entry_one(sha1, p);
if (offset) {
/*
enum object_type *type, unsigned long *size)
{
struct pack_entry e;
+ void *data;
if (!find_pack_entry(sha1, &e, NULL))
return NULL;
- else
- return cache_or_unpack_entry(e.p, e.offset, size, type, 1);
+ data = cache_or_unpack_entry(e.p, e.offset, size, type, 1);
+ if (!data) {
+ /*
+ * We're probably in deep shit, but let's try to fetch
+ * the required object anyway from another pack or loose.
+ * This should happen only in the presence of a corrupted
+ * pack, and is better than failing outright.
+ */
+ error("failed to read object %s at offset %"PRIuMAX" from %s",
+ sha1_to_hex(sha1), (uintmax_t)e.offset, e.p->pack_name);
+ mark_bad_packed_object(e.p, sha1);
+ data = read_sha1_file(sha1, type, size);
+ }
+ return data;
}
/*
/* Finalize a file on disk, and close it. */
static void close_sha1_file(int fd)
{
- /* For safe-mode, we could fsync_or_die(fd, "sha1 file") here */
+ if (fsync_object_files)
+ fsync_or_die(fd, "sha1 file");
fchmod(fd, 0444);
if (close(fd) != 0)
die("unable to write sha1 file");
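close_sha1_file() now flushes the object file to disk before finalizing it whenever the fsync_object_files flag is set. The fsync_or_die() helper itself is not part of this hunk; one plausible shape of such a helper, shown purely as a sketch (the name and exit code here are illustrative, not the git source):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Sketch only: flush fd to stable storage or abort with a message. */
    static void fsync_or_die_sketch(int fd, const char *msg)
    {
        if (fsync(fd) < 0) {
            fprintf(stderr, "fatal: %s: fsync error (%s)\n",
                    msg, strerror(errno));
            exit(128);
        }
    }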
fd = mkstemp(buffer);
if (fd < 0 && dirlen) {
/* Make sure the directory exists */
+ memcpy(buffer, filename, dirlen);
buffer[dirlen-1] = 0;
if (mkdir(buffer, 0777) || adjust_shared_perm(buffer))
return -1;
-t[0-9][0-9][0-9][0-9]-*.sh -whitespace
t[0-9][0-9][0-9][0-9]/* -whitespace
T = $(wildcard t[0-9][0-9][0-9][0-9]-*.sh)
TSVN = $(wildcard t91[0-9][0-9]-*.sh)
-all: $(T) clean
+all: pre-clean $(T) aggregate-results clean
$(T):
@echo "*** $@ ***"; GIT_CONFIG=.git/config '$(SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
+pre-clean:
+ $(RM) -r test-results
+
clean:
- $(RM) -r 'trash directory'
+ $(RM) -r 'trash directory' test-results
+
+aggregate-results:
+ ./aggregate-results.sh test-results/t*-*
# we can test NO_OPTIMIZE_COMMITS independently of LC_ALL
full-svn-test:
$(MAKE) $(TSVN) GIT_SVN_NO_OPTIMIZE_COMMITS=1 LC_ALL=C
$(MAKE) $(TSVN) GIT_SVN_NO_OPTIMIZE_COMMITS=0 LC_ALL=en_US.UTF-8
-.PHONY: $(T) clean
+.PHONY: pre-clean $(T) aggregate-results clean
.NOTPARALLEL:
--- /dev/null
+#!/bin/sh
+
+fixed=0
+success=0
+failed=0
+broken=0
+total=0
+
+for file
+do
+ while read type value
+ do
+ case $type in
+ '')
+ continue ;;
+ fixed)
+ fixed=$(($fixed + $value)) ;;
+ success)
+ success=$(($success + $value)) ;;
+ failed)
+ failed=$(($failed + $value)) ;;
+ broken)
+ broken=$(( $broken + $value)) ;;
+ total)
+ total=$(( $total + $value)) ;;
+ esac
+ done <"$file"
+done
+
+printf "%-8s%d\n" fixed $fixed
+printf "%-8s%d\n" success $success
+printf "%-8s%d\n" failed $failed
+printf "%-8s%d\n" broken $broken
+printf "%-8s%d\n" total $total
usage: test-parse-options <options>
-b, --boolean get a boolean
+ -4, --or4 bitwise-or boolean with ...0100
+
-i, --integer <n> get a integer
-j <n> get a integer, too
+ --set23 set integer to 23
+ -t <time> get timestamp of <time>
+ -L, --length <str> get length of <str>
-string options
+String options
-s, --string <string>
get a string
--string2 <str> get another string
--st <st> get another string (pervert ordering)
-o <str> get another string
+ --default-string set string to default
-magic arguments
+Magic arguments
--quux means --quux
+Standard options
+ --abbrev[=<n>] use <n> digits to display SHA-1s
+ -v, --verbose be verbose
+ -n, --dry-run dry run
+ -q, --quiet be quiet
+
EOF
test_expect_success 'test help' '
- ! test-parse-options -h > output 2> output.err &&
+ test_must_fail test-parse-options -h > output 2> output.err &&
test ! -s output &&
test_cmp expect.err output.err
'
boolean: 2
integer: 1729
string: 123
+abbrev: 7
+verbose: 2
+quiet: no
+dry run: yes
EOF
test_expect_success 'short options' '
- test-parse-options -s123 -b -i 1729 -b > output 2> output.err &&
+ test-parse-options -s123 -b -i 1729 -b -vv -n > output 2> output.err &&
test_cmp expect output &&
test ! -s output.err
'
+
cat > expect << EOF
boolean: 2
integer: 1729
string: 321
+abbrev: 10
+verbose: 2
+quiet: no
+dry run: no
EOF
test_expect_success 'long options' '
test-parse-options --boolean --integer 1729 --boolean --string2=321 \
+ --verbose --verbose --no-dry-run --abbrev=10 \
> output 2> output.err &&
test ! -s output.err &&
test_cmp expect output
boolean: 1
integer: 13
string: 123
+abbrev: 7
+verbose: 0
+quiet: no
+dry run: no
arg 00: a1
arg 01: b1
arg 02: --boolean
boolean: 0
integer: 2
string: (not set)
+abbrev: 7
+verbose: 0
+quiet: no
+dry run: no
EOF
test_expect_success 'unambiguously abbreviated option' '
boolean: 0
integer: 0
string: 123
+abbrev: 7
+verbose: 0
+quiet: no
+dry run: no
EOF
test_expect_success 'non ambiguous option (after two options it abbreviates)' '
test_cmp expect output
'
-cat > expect.err << EOF
+cat > typo.err << EOF
error: did you mean \`--boolean\` (with two dashes ?)
EOF
test_expect_success 'detect possible typos' '
- ! test-parse-options -boolean > output 2> output.err &&
+ test_must_fail test-parse-options -boolean > output 2> output.err &&
test ! -s output &&
- test_cmp expect.err output.err
+ test_cmp typo.err output.err
'
cat > expect <<EOF
boolean: 0
integer: 0
string: (not set)
+abbrev: 7
+verbose: 0
+quiet: no
+dry run: no
arg 00: --quux
EOF
test_cmp expect output
'
+cat > expect <<EOF
+boolean: 0
+integer: 1
+string: default
+abbrev: 7
+verbose: 0
+quiet: yes
+dry run: no
+arg 00: foo
+EOF
+
+test_expect_success 'OPT_DATE() and OPT_SET_PTR() work' '
+ test-parse-options -t "1970-01-01 00:00:01 +0000" --default-string \
+ foo -q > output 2> output.err &&
+ test ! -s output.err &&
+ test_cmp expect output
+'
+
+cat > expect <<EOF
+Callback: "four", 0
+boolean: 5
+integer: 4
+string: (not set)
+abbrev: 7
+verbose: 0
+quiet: no
+dry run: no
+EOF
+
+test_expect_success 'OPT_CALLBACK() and OPT_BIT() work' '
+ test-parse-options --length=four -b -4 > output 2> output.err &&
+ test ! -s output.err &&
+ test_cmp expect output
+'
+
+cat > expect <<EOF
+Callback: "not set", 1
+EOF
+
+test_expect_success 'OPT_CALLBACK() and callback errors work' '
+ test_must_fail test-parse-options --no-length > output 2> output.err &&
+ test_cmp expect output &&
+ test_cmp expect.err output.err
+'
+
+cat > expect <<EOF
+boolean: 1
+integer: 23
+string: (not set)
+abbrev: 7
+verbose: 0
+quiet: no
+dry run: no
+EOF
+
+test_expect_success 'OPT_BIT() and OPT_SET_INT() work' '
+ test-parse-options --set23 -bbbbb --no-or4 > output 2> output.err &&
+ test ! -s output.err &&
+ test_cmp expect output
+'
+
+# --or4
+# --no-or4
+
test_done
'
rm -f .git/$m
+test_expect_success "delete $m without oldvalue verification" "
+ git update-ref $m $A &&
+ test $A = \$(cat .git/$m) &&
+ git update-ref -d $m &&
+ ! test -f .git/$m
+"
+rm -f .git/$m
+
test_expect_success \
"fail to create $n" \
"touch .git/$n_dir
cat > expect.err <<EOF
usage: some-command [options] <args>...
-
+
some-command does foo and bar!
-h, --help show the help
--bar ... some cool option --bar with an argument
An option group Header
- -C [...] option C with an optional argument
+ -C[...] option C with an optional argument
Extras
--extra1 line above used to cause a segfault but no longer does
############################################################
# 17. disallow missing tag timestamp
-cat >tag.sig <<EOF
+tr '_' ' ' >tag.sig <<EOF
object $head
type commit
tag mytag
-tagger T A Gger <tagger@example.com>
+tagger T A Gger <tagger@example.com>__
EOF
echo 4 > other-file &&
git add other-file &&
echo 5 > other-file &&
- test_must_fail git stash apply
+ test_must_fail git stash apply
'
test_expect_success 'apply stashed changes' '
sed -e "/^$/q" patch2 > hdrs2 &&
grep "^To: R. E. Cipient <rcipient@example.com>$" hdrs2 &&
grep "^Cc: S. E. Cipient <scipient@example.com>$" hdrs2
-
+
'
test_expect_success 'extra headers without newlines' '
sed -e "/^$/q" patch3 > hdrs3 &&
grep "^To: R. E. Cipient <rcipient@example.com>$" hdrs3 &&
grep "^Cc: S. E. Cipient <scipient@example.com>$" hdrs3
-
+
'
test_expect_success 'extra headers with multiple To:s' '
git checkout side &&
git format-patch --cover-letter --thread -o patches/ master &&
FIRST_MID=$(grep "Message-Id:" patches/0000-* | sed "s/^[^<]*\(<[^>]*>\).*$/\1/") &&
- for i in patches/0001-* patches/0002-* patches/0003-*
+ for i in patches/0001-* patches/0002-* patches/0003-*
do
grep "References: $FIRST_MID" $i &&
grep "In-Reply-To: $FIRST_MID" $i || break
git update-index x
-cat << EOF > x
+tr '_' ' ' << EOF > x
whitespace at beginning
whitespace change
white space in the middle
-whitespace at end
+whitespace at end__
unchanged line
CR at end
EOF
-tr 'Q' '\015' << EOF > expect
+tr 'Q_' '\015 ' << EOF > expect
diff --git a/x b/x
index d99af23..8b32fb5 100644
--- a/x
+ whitespace at beginning
+whitespace change
+white space in the middle
-+whitespace at end
++whitespace at end__
unchanged line
-CR at endQ
+CR at end
'
cat >expect <<\EOF
- pathname.1 => "Rpathname\twith HT.0" | 0
- pathname.3 => "Rpathname\nwith LF.0" | 0
- "pathname\twith HT.3" => "Rpathname\nwith LF.1" | 0
- pathname.2 => Rpathname with SP.0 | 0
- "pathname\twith HT.2" => Rpathname with SP.1 | 0
- pathname.0 => Rpathname.0 | 0
- "pathname\twith HT.0" => Rpathname.1 | 0
+ pathname.1 => "Rpathname\twith HT.0" | 0
+ pathname.3 => "Rpathname\nwith LF.0" | 0
+ "pathname\twith HT.3" => "Rpathname\nwith LF.1" | 0
+ pathname.2 => Rpathname with SP.0 | 0
+ "pathname\twith HT.2" => Rpathname with SP.1 | 0
+ pathname.0 => Rpathname.0 | 0
+ "pathname\twith HT.0" => Rpathname.1 | 0
7 files changed, 0 insertions(+), 0 deletions(-)
EOF
test_expect_success 'git diff --stat -M HEAD' '
'
. ./test-lib.sh
-# setup
-
-cat > patch1.patch <<\EOF
-diff --git a/main.c b/main.c
-new file mode 100644
---- /dev/null
-+++ b/main.c
-@@ -0,0 +1,23 @@
-+#include <stdio.h>
-+
-+int func(int num);
-+void print_int(int num);
-+
-+int main() {
-+ int i;
-+
-+ for (i = 0; i < 10; i++) {
-+ print_int(func(i));
-+ }
-+
-+ return 0;
-+}
-+
-+int func(int num) {
-+ return num * num;
-+}
-+
-+void print_int(int num) {
-+ printf("%d", num);
-+}
-+
-EOF
-cat > patch2.patch <<\EOF
-diff --git a/main.c b/main.c
---- a/main.c
-+++ b/main.c
-@@ -1,7 +1,9 @@
-+#include <stdlib.h>
- #include <stdio.h>
-
- int func(int num);
- void print_int(int num);
-+void print_ln();
-
- int main() {
- int i;
-@@ -10,6 +12,8 @@
- print_int(func(i));
- }
-
-+ print_ln();
-+
- return 0;
- }
-
-@@ -21,3 +25,7 @@
- printf("%d", num);
- }
-
-+void print_ln() {
-+ printf("\n");
-+}
-+
-EOF
-cat > patch3.patch <<\EOF
-diff --git a/main.c b/main.c
---- a/main.c
-+++ b/main.c
-@@ -1,9 +1,7 @@
--#include <stdlib.h>
- #include <stdio.h>
-
- int func(int num);
- void print_int(int num);
--void print_ln();
-
- int main() {
- int i;
-@@ -12,8 +10,6 @@
- print_int(func(i));
- }
-
-- print_ln();
--
- return 0;
- }
-
-@@ -25,7 +21,3 @@
- printf("%d", num);
- }
-
--void print_ln() {
-- printf("\n");
--}
--
-EOF
-cat > patch4.patch <<\EOF
-diff --git a/main.c b/main.c
---- a/main.c
-+++ b/main.c
-@@ -1,13 +1,14 @@
- #include <stdio.h>
-
- int func(int num);
--void print_int(int num);
-+int func2(int num);
-
- int main() {
- int i;
-
- for (i = 0; i < 10; i++) {
-- print_int(func(i));
-+ printf("%d", func(i));
-+ printf("%d", func3(i));
- }
-
- return 0;
-@@ -17,7 +18,7 @@
- return num * num;
- }
-
--void print_int(int num) {
-- printf("%d", num);
-+int func2(int num) {
-+ return num * num * num;
- }
-
-EOF
+cp ../t4109/patch1.patch .
+cp ../t4109/patch2.patch .
+cp ../t4109/patch3.patch .
+cp ../t4109/patch4.patch .
test_expect_success "S = git apply (1)" \
'git apply patch1.patch patch2.patch'
--- /dev/null
+diff --git a/main.c b/main.c
+new file mode 100644
+--- /dev/null
++++ b/main.c
+@@ -0,0 +1,23 @@
++#include <stdio.h>
++
++int func(int num);
++void print_int(int num);
++
++int main() {
++ int i;
++
++ for (i = 0; i < 10; i++) {
++ print_int(func(i));
++ }
++
++ return 0;
++}
++
++int func(int num) {
++ return num * num;
++}
++
++void print_int(int num) {
++ printf("%d", num);
++}
++
--- /dev/null
+diff --git a/main.c b/main.c
+--- a/main.c
++++ b/main.c
+@@ -1,7 +1,9 @@
++#include <stdlib.h>
+ #include <stdio.h>
+
+ int func(int num);
+ void print_int(int num);
++void print_ln();
+
+ int main() {
+ int i;
+@@ -10,6 +12,8 @@
+ print_int(func(i));
+ }
+
++ print_ln();
++
+ return 0;
+ }
+
+@@ -21,3 +25,7 @@
+ printf("%d", num);
+ }
+
++void print_ln() {
++ printf("\n");
++}
++
--- /dev/null
+diff --git a/main.c b/main.c
+--- a/main.c
++++ b/main.c
+@@ -1,9 +1,7 @@
+-#include <stdlib.h>
+ #include <stdio.h>
+
+ int func(int num);
+ void print_int(int num);
+-void print_ln();
+
+ int main() {
+ int i;
+@@ -12,8 +10,6 @@
+ print_int(func(i));
+ }
+
+- print_ln();
+-
+ return 0;
+ }
+
+@@ -25,7 +21,3 @@
+ printf("%d", num);
+ }
+
+-void print_ln() {
+- printf("\n");
+-}
+-
--- /dev/null
+diff --git a/main.c b/main.c
+--- a/main.c
++++ b/main.c
+@@ -1,13 +1,14 @@
+ #include <stdio.h>
+
+ int func(int num);
+-void print_int(int num);
++int func2(int num);
+
+ int main() {
+ int i;
+
+ for (i = 0; i < 10; i++) {
+- print_int(func(i));
++ printf("%d", func(i));
++ printf("%d", func3(i));
+ }
+
+ return 0;
+@@ -17,7 +18,7 @@
+ return num * num;
+ }
+
+-void print_int(int num) {
+- printf("%d", num);
++int func2(int num) {
++ return num * num * num;
+ }
+
'
# Also handcraft GNU diff output; note this has trailing whitespace.
-cat >gpatch.file <<\EOF &&
+tr '_' ' ' >gpatch.file <<\EOF &&
--- file1 2007-02-21 01:04:24.000000000 -0800
+++ file1+ 2007-02-21 01:07:44.000000000 -0800
@@ -1 +1 @@
-A
-+B
++B_
EOF
sed -e 's|file1|sub/&|' gpatch.file >gpatch-sub.file &&
GIT_AUTHOR_NAME="Another Thor"
GIT_AUTHOR_EMAIL="a.thor@example.com"
-GIT_COMMITTER_NAME="Co M Miter"
+GIT_COMMITTER_NAME="Co M Miter"
GIT_COMMITTER_EMAIL="c.miter@example.com"
export GIT_AUTHOR_NAME GIT_AUTHOR_EMAIL GIT_COMMITTER_NAME GIT_COMMITTER_EMAIL
echo text >file_with_long_path) &&
(cd a && find .) | sort >a.lst'
+test_expect_success \
+ 'add ignored file' \
+ 'echo ignore me >a/ignored &&
+ echo ignored export-ignore >.gitattributes'
+
test_expect_success \
'add files to repository' \
'find a -type f | xargs git update-index --add &&
git update-ref HEAD $(TZ=GMT GIT_COMMITTER_DATE="2005-05-27 22:00:00" \
git commit-tree $treeid </dev/null)'
+test_expect_success \
+ 'remove ignored file' \
+ 'rm a/ignored'
+
test_expect_success \
'git archive' \
'git archive HEAD >b.tar'
'[index v2] 5) pack-objects refuses to reuse corrupted data' \
'! git pack-objects test-5 <obj-list'
+test_expect_success \
+ '[index v2] 6) verify-pack detects CRC mismatch' \
+ 'rm -f .git/objects/pack/* &&
+ git-index-pack --index-version=2 --stdin < "test-1-${pack1}.pack" &&
+ git verify-pack ".git/objects/pack/pack-${pack1}.pack" &&
+ chmod +w ".git/objects/pack/pack-${pack1}.idx" &&
+ dd if=/dev/zero of=".git/objects/pack/pack-${pack1}.idx" conv=notrunc \
+ bs=1 count=4 seek=$((8 + 256 * 4 + `wc -l <obj-list` * 20 + 0)) &&
+ ( while read obj
+ do git cat-file -p $obj >/dev/null || exit 1
+ done <obj-list ) &&
+ err=$(! git verify-pack ".git/objects/pack/pack-${pack1}.pack" 2>&1) &&
+ echo "$err" | grep "CRC mismatch"'
+
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2008 Nicolas Pitre
+#
+
+test_description='resilience to pack corruptions with redundant objects'
+. ./test-lib.sh
+
+# Note: the test objects are created with knowledge of their pack encoding
+# to ensure good code path coverage, and to facilitate direct alteration
+# later on. The assumed characteristics are:
+#
+# 1) blob_2 is a delta with blob_1 for base and blob_3 is a delta with blob_2
+# for base, such that blob_3 delta depth is 2;
+#
+# 2) the bulk of object data is uncompressible so the text part remains
+# visible;
+#
+# 3) object header is always 2 bytes.
+
+create_test_files() {
+ test-genrandom "foo" 2000 > file_1 &&
+ test-genrandom "foo" 1800 > file_2 &&
+ test-genrandom "foo" 1800 > file_3 &&
+ echo " base " >> file_1 &&
+ echo " delta1 " >> file_2 &&
+ echo " delta delta2 " >> file_3 &&
+ test-genrandom "bar" 150 >> file_2 &&
+ test-genrandom "baz" 100 >> file_3
+}
+
+create_new_pack() {
+ rm -rf .git &&
+ git init &&
+ blob_1=`git hash-object -t blob -w file_1` &&
+ blob_2=`git hash-object -t blob -w file_2` &&
+ blob_3=`git hash-object -t blob -w file_3` &&
+ pack=`printf "$blob_1\n$blob_2\n$blob_3\n" |
+ git pack-objects $@ .git/objects/pack/pack` &&
+ pack=".git/objects/pack/pack-${pack}" &&
+ git verify-pack -v ${pack}.pack
+}
+
+do_corrupt_object() {
+ ofs=`git show-index < ${pack}.idx | grep $1 | cut -f1 -d" "` &&
+ ofs=$(($ofs + $2)) &&
+ chmod +w ${pack}.pack &&
+ dd if=/dev/zero of=${pack}.pack count=1 bs=1 conv=notrunc seek=$ofs &&
+ test_must_fail git verify-pack ${pack}.pack
+}
+
+test_expect_success \
+ 'initial setup validation' \
+ 'create_test_files &&
+ create_new_pack &&
+ git prune-packed &&
+ git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ 'create corruption in header of first object' \
+ 'do_corrupt_object $blob_1 0 &&
+ test_must_fail git cat-file blob $blob_1 > /dev/null &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ test_must_fail git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... but having a loose copy allows for full recovery' \
+ 'mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_1 &&
+ mv tmp ${pack}.idx &&
+ git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... and loose copy of first delta allows for partial recovery' \
+ 'git prune-packed &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_2 &&
+ mv tmp ${pack}.idx &&
+ test_must_fail git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ 'create corruption in data of first object' \
+ 'create_new_pack &&
+ git prune-packed &&
+ chmod +w ${pack}.pack &&
+ perl -i -pe "s/ base /abcdef/" ${pack}.pack &&
+ test_must_fail git cat-file blob $blob_1 > /dev/null &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ test_must_fail git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... but having a loose copy allows for full recovery' \
+ 'mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_1 &&
+ mv tmp ${pack}.idx &&
+ git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... and loose copy of second object allows for partial recovery' \
+ 'git prune-packed &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_2 &&
+ mv tmp ${pack}.idx &&
+ test_must_fail git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ 'create corruption in header of first delta' \
+ 'create_new_pack &&
+ git prune-packed &&
+ do_corrupt_object $blob_2 0 &&
+ git cat-file blob $blob_1 > /dev/null &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ test_must_fail git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... but having a loose copy allows for full recovery' \
+ 'mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_2 &&
+ mv tmp ${pack}.idx &&
+ git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ 'create corruption in data of first delta' \
+ 'create_new_pack &&
+ git prune-packed &&
+ chmod +w ${pack}.pack &&
+ perl -i -pe "s/ delta1 /abcdefgh/" ${pack}.pack &&
+ git cat-file blob $blob_1 > /dev/null &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ test_must_fail git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... but having a loose copy allows for full recovery' \
+ 'mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_2 &&
+ mv tmp ${pack}.idx &&
+ git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ 'corruption in delta base reference of first delta (OBJ_REF_DELTA)' \
+ 'create_new_pack &&
+ git prune-packed &&
+ do_corrupt_object $blob_2 2 &&
+ git cat-file blob $blob_1 > /dev/null &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ test_must_fail git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... but having a loose copy allows for full recovery' \
+ 'mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_2 &&
+ mv tmp ${pack}.idx &&
+ git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ 'corruption in delta base reference of first delta (OBJ_OFS_DELTA)' \
+ 'create_new_pack --delta-base-offset &&
+ git prune-packed &&
+ do_corrupt_object $blob_2 2 &&
+ git cat-file blob $blob_1 > /dev/null &&
+ test_must_fail git cat-file blob $blob_2 > /dev/null &&
+ test_must_fail git cat-file blob $blob_3 > /dev/null'
+
+test_expect_success \
+ '... and a redundant pack allows for full recovery too' \
+ 'mv ${pack}.idx tmp &&
+ git hash-object -t blob -w file_1 &&
+ git hash-object -t blob -w file_2 &&
+ printf "$blob_1\n$blob_2\n" | git pack-objects .git/objects/pack/pack &&
+ git prune-packed &&
+ mv tmp ${pack}.idx &&
+ git cat-file blob $blob_1 > /dev/null &&
+ git cat-file blob $blob_2 > /dev/null &&
+ git cat-file blob $blob_3 > /dev/null'
+
+test_done
set x $cmd; shift
git symbolic-ref HEAD refs/heads/$1 ; shift
rm -f .git/FETCH_HEAD
- rm -f .git/refs/heads/*
- rm -f .git/refs/remotes/rem/*
- rm -f .git/refs/tags/*
+ git for-each-ref \
+ refs/heads refs/remotes/rem refs/tags |
+ while read val type refname
+ do
+ git update-ref -d "$refname" "$val"
+ done
git fetch "$@" >/dev/null
cat .git/FETCH_HEAD
} >"$actual_f" &&
cd - &&
mv test_repo.git $HTTPD_DOCUMENT_ROOT_PATH
'
-
+
test_expect_success 'clone remote repository' '
cd "$ROOT_PATH" &&
git clone $HTTPD_URL/test_repo.git test_repo_clone
'
+test_expect_success 'clone respects GIT_WORK_TREE' '
+
+ GIT_WORK_TREE=worktree git clone src bare &&
+ test -f bare/config &&
+ test -f worktree/file
+
+'
+
+test_expect_success 'clone creates intermediate directories' '
+
+ git clone src long/path/to/dst &&
+ test -f long/path/to/dst/file
+
+'
+
+test_expect_success 'clone creates intermediate directories for bare repo' '
+
+ git clone --bare src long/path/to/bare/dst &&
+ test -f long/path/to/bare/dst/config
+
+'
+
test_done
'
+cat >expect <<EOF
+# On branch master
+# Changes to be committed:
+# (use "git reset HEAD <file>..." to unstage)
+#
+# new file: dir2/added
+#
+# Changed but not updated:
+# (use "git add <file>..." to update what will be committed)
+#
+# modified: dir1/modified
+#
+# Untracked files not listed (use -u option to show untracked files)
+EOF
+test_expect_success 'status -uno' '
+ mkdir dir3 &&
+ : > dir3/untracked1 &&
+ : > dir3/untracked2 &&
+ git status -uno >output &&
+ test_cmp expect output
+'
+
+test_expect_success 'status (status.showUntrackedFiles no)' '
+ git config status.showuntrackedfiles no
+ git status >output &&
+ test_cmp expect output
+'
+
+cat >expect <<EOF
+# On branch master
+# Changes to be committed:
+# (use "git reset HEAD <file>..." to unstage)
+#
+# new file: dir2/added
+#
+# Changed but not updated:
+# (use "git add <file>..." to update what will be committed)
+#
+# modified: dir1/modified
+#
+# Untracked files:
+# (use "git add <file>..." to include in what will be committed)
+#
+# dir1/untracked
+# dir2/modified
+# dir2/untracked
+# dir3/
+# expect
+# output
+# untracked
+EOF
+test_expect_success 'status -unormal' '
+ git status -unormal >output &&
+ test_cmp expect output
+'
+
+test_expect_success 'status (status.showUntrackedFiles normal)' '
+ git config status.showuntrackedfiles normal
+ git status >output &&
+ test_cmp expect output
+'
+
+cat >expect <<EOF
+# On branch master
+# Changes to be committed:
+# (use "git reset HEAD <file>..." to unstage)
+#
+# new file: dir2/added
+#
+# Changed but not updated:
+# (use "git add <file>..." to update what will be committed)
+#
+# modified: dir1/modified
+#
+# Untracked files:
+# (use "git add <file>..." to include in what will be committed)
+#
+# dir1/untracked
+# dir2/modified
+# dir2/untracked
+# dir3/untracked1
+# dir3/untracked2
+# expect
+# output
+# untracked
+EOF
+test_expect_success 'status -uall' '
+ git status -uall >output &&
+ test_cmp expect output
+'
+test_expect_success 'status (status.showUntrackedFiles all)' '
+ git config status.showuntrackedfiles all
+ git status >output &&
+ rm -rf dir3 &&
+ git config --unset status.showuntrackedfiles &&
+ test_cmp expect output
+'
+
cat > expect << \EOF
# On branch master
# Changes to be committed:
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2008 Jan Krüger
+#
+
+test_description='git-svn respects rewriteRoot during rebuild'
+
+. ./lib-git-svn.sh
+
+mkdir import
+cd import
+ touch foo
+ svn import -m 'import for git-svn' . "$svnrepo" >/dev/null
+cd ..
+rm -rf import
+
+test_expect_success 'init, fetch and checkout repository' '
+ git svn init --rewrite-root=http://invalid.invalid/ "$svnrepo" &&
+ git svn fetch
+ git checkout -b mybranch remotes/git-svn
+ '
+
+test_expect_success 'remove rev_map' '
+ rm "$GIT_SVN_DIR"/.rev_map.*
+ '
+
+test_expect_success 'rebuild rev_map' '
+ git svn rebase >/dev/null
+ '
+
+test_done
+
git fast-import &&
git cat-file commit i18n | grep "Áéí óú")
+'
+test_expect_success 'import/export-marks' '
+
+ git checkout -b marks master &&
+ git fast-export --export-marks=tmp-marks HEAD &&
+ test -s tmp-marks &&
+ test $(wc -l < tmp-marks) -eq 3 &&
+ test $(
+ git fast-export --import-marks=tmp-marks\
+ --export-marks=tmp-marks HEAD |
+ grep ^commit |
+ wc -l) \
+ -eq 0 &&
+ echo change > file &&
+ git commit -m "last commit" file &&
+ test $(
+ git fast-export --import-marks=tmp-marks \
+ --export-marks=tmp-marks HEAD |
+ grep ^commit\ |
+ wc -l) \
+ -eq 1 &&
+ test $(wc -l < tmp-marks) -eq 4
+
'
cat > signed-tag-import << EOF
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2008 Lea Wiemann
+#
+
+test_description='perl interface (Git.pm)'
+. ./test-lib.sh
+
+# set up test repository
+
+test_expect_success \
+ 'set up test repository' \
+ 'echo "test file 1" > file1 &&
+ echo "test file 2" > file2 &&
+ mkdir directory1 &&
+ echo "in directory1" >> directory1/file &&
+ mkdir directory2 &&
+ echo "in directory2" >> directory2/file &&
+ git add . &&
+ git commit -m "first commit" &&
+
+ echo "changed file 1" > file1 &&
+ git commit -a -m "second commit" &&
+
+ git-config --add color.test.slot1 green &&
+ git-config --add test.string value &&
+ git-config --add test.dupstring value1 &&
+ git-config --add test.dupstring value2 &&
+ git-config --add test.booltrue true &&
+ git-config --add test.boolfalse no &&
+ git-config --add test.boolother other &&
+ git-config --add test.int 2k
+ '
+
+test_external_without_stderr \
+ 'Perl API' \
+ perl ../t9700/test.pl
+
+test_done
--- /dev/null
+#!/usr/bin/perl
+use lib (split(/:/, $ENV{GITPERLLIB}));
+
+use 5.006002;
+use warnings;
+use strict;
+
+use Test::More qw(no_plan);
+
+use Cwd;
+use File::Basename;
+use File::Temp;
+
+BEGIN { use_ok('Git') }
+
+# set up
+our $repo_dir = "trash directory";
+our $abs_repo_dir = Cwd->cwd;
+die "this must be run by calling the t/t97* shell script(s)\n"
+ if basename(Cwd->cwd) ne $repo_dir;
+ok(our $r = Git->repository(Directory => "."), "open repository");
+
+# config
+is($r->config("test.string"), "value", "config scalar: string");
+is_deeply([$r->config("test.dupstring")], ["value1", "value2"],
+ "config array: string");
+is($r->config("test.nonexistent"), undef, "config scalar: nonexistent");
+is_deeply([$r->config("test.nonexistent")], [], "config array: nonexistent");
+is($r->config_int("test.int"), 2048, "config_int: integer");
+is($r->config_int("test.nonexistent"), undef, "config_int: nonexistent");
+ok($r->config_bool("test.booltrue"), "config_bool: true");
+ok(!$r->config_bool("test.boolfalse"), "config_bool: false");
+our $ansi_green = "\x1b[32m";
+is($r->get_color("color.test.slot1", "red"), $ansi_green, "get_color");
+# Cannot test $r->get_colorbool("color.foo")) because we do not
+# control whether our STDOUT is a terminal.
+
+# Failure cases for config:
+# Save and restore STDERR; we will probably extract this into a
+# "dies_ok" method and possibly move the STDERR handling to Git.pm.
+open our $tmpstderr, ">&", STDERR or die "cannot save STDERR"; close STDERR;
+eval { $r->config("test.dupstring") };
+ok($@, "config: duplicate entry in scalar context fails");
+eval { $r->config_bool("test.boolother") };
+ok($@, "config_bool: non-boolean values fail");
+open STDERR, ">&", $tmpstderr or die "cannot restore STDERR";
+
+# ident
+like($r->ident("aUthor"), qr/^A U Thor <author\@example.com> [0-9]+ \+0000$/,
+ "ident scalar: author (type)");
+like($r->ident("cOmmitter"), qr/^C O Mitter <committer\@example.com> [0-9]+ \+0000$/,
+ "ident scalar: committer (type)");
+is($r->ident("invalid"), "invalid", "ident scalar: invalid ident string (no parsing)");
+my ($name, $email, $time_tz) = $r->ident('author');
+is_deeply([$name, $email], ["A U Thor", "author\@example.com"],
+ "ident array: author");
+like($time_tz, qr/[0-9]+ \+0000/, "ident array: author");
+is_deeply([$r->ident("Name <email> 123 +0000")], ["Name", "email", "123 +0000"],
+ "ident array: ident string");
+is_deeply([$r->ident("invalid")], [], "ident array: invalid ident string");
+
+# ident_person
+is($r->ident_person("aUthor"), "A U Thor <author\@example.com>",
+ "ident_person: author (type)");
+is($r->ident_person("Name <email> 123 +0000"), "Name <email>",
+ "ident_person: ident string");
+is($r->ident_person("Name", "email", "123 +0000"), "Name <email>",
+ "ident_person: array");
+
+# objects and hashes
+ok(our $file1hash = $r->command_oneline('rev-parse', "HEAD:file1"), "(get file hash)");
+our $tmpfile = File::Temp->new;
+is($r->cat_blob($file1hash, $tmpfile), 15, "cat_blob: size");
+our $blobcontents;
+{ local $/; seek $tmpfile, 0, 0; $blobcontents = <$tmpfile>; }
+is($blobcontents, "changed file 1\n", "cat_blob: data");
+seek $tmpfile, 0, 0;
+is(Git::hash_object("blob", $tmpfile), $file1hash, "hash_object: roundtrip");
+$tmpfile = File::Temp->new();
+print $tmpfile my $test_text = "test blob, to be inserted\n";
+like(our $newhash = $r->hash_and_insert_object($tmpfile), qr/[0-9a-fA-F]{40}/,
+ "hash_and_insert_object: returns hash");
+$tmpfile = File::Temp->new;
+is($r->cat_blob($newhash, $tmpfile), length $test_text, "cat_blob: roundtrip size");
+{ local $/; seek $tmpfile, 0, 0; $blobcontents = <$tmpfile>; }
+is($blobcontents, $test_text, "cat_blob: roundtrip data");
+
+# paths
+is($r->repo_path, "./.git", "repo_path");
+is($r->wc_path, $abs_repo_dir . "/", "wc_path");
+is($r->wc_subdir, "", "wc_subdir initial");
+$r->wc_chdir("directory1");
+is($r->wc_subdir, "directory1", "wc_subdir after wc_chdir");
+TODO: {
+ local $TODO = "commands do not work after wc_chdir";
+ # Failure output is active even in non-verbose mode and thus
+ # annoying. Hence we skip these tests as long as they fail.
+ todo_skip 'config after wc_chdir', 1;
+ is($r->config("color.string"), "value", "config after wc_chdir");
+}
test_count=0
test_fixed=0
test_broken=0
+test_success=0
die () {
echo >&5 "FATAL: Unexpected exit with code $?"
test_ok_ () {
test_count=$(expr "$test_count" + 1)
+ test_success=$(expr "$test_success" + 1)
say_color "" " ok $test_count: $@"
}
echo >&3 ""
}
+# test_external runs external test scripts that provide continuous
+# test output about their progress, and succeeds/fails on
+# zero/non-zero exit code. It outputs the test output on stdout even
+# in non-verbose mode, and announces the external script with "* run
+# <n>: ..." before running it. When providing relative paths, keep in
+# mind that all scripts run in "trash directory".
+# Usage: test_external description command arguments...
+# Example: test_external 'Perl API' perl ../path/to/test.pl
+test_external () {
+ test "$#" -eq 3 ||
+ error >&5 "bug in the test script: not 3 parameters to test_external"
+ descr="$1"
+ shift
+ if ! test_skip "$descr" "$@"
+ then
+ # Announce the script to reduce confusion about the
+ # test output that follows.
+ say_color "" " run $(expr "$test_count" + 1): $descr ($*)"
+ # Run command; redirect its stderr to &4 as in
+ # test_run_, but keep its stdout on our stdout even in
+ # non-verbose mode.
+ "$@" 2>&4
+ if [ "$?" = 0 ]
+ then
+ test_ok_ "$descr"
+ else
+ test_failure_ "$descr" "$@"
+ fi
+ fi
+}
+
+# Like test_external, but in addition tests that the command generated
+# no output on stderr.
+test_external_without_stderr () {
+ # The temporary file has no (and must have no) security
+ # implications.
+ tmp="$TMPDIR"; if [ -z "$tmp" ]; then tmp=/tmp; fi
+ stderr="$tmp/git-external-stderr.$$.tmp"
+ test_external "$@" 4> "$stderr"
+ [ -f "$stderr" ] || error "Internal error: $stderr disappeared."
+ descr="no stderr: $1"
+ shift
+ say >&3 "expecting no stderr from previous command"
+ if [ ! -s "$stderr" ]; then
+ rm "$stderr"
+ test_ok_ "$descr"
+ else
+ if [ "$verbose" = t ]; then
+ output=`echo; echo Stderr is:; cat "$stderr"`
+ else
+ output=
+ fi
+ # rm first in case test_failure exits.
+ rm "$stderr"
+ test_failure_ "$descr" "$@" "$output"
+ fi
+}
+
# This is not among top-level (test_expect_success | test_expect_failure)
# but is a prefix that can be used in the test script, like:
#
test_done () {
trap - exit
+ test_results_dir="$TEST_DIRECTORY/test-results"
+ mkdir -p "$test_results_dir"
+ test_results_path="$test_results_dir/${0%-*}-$$"
+
+ echo "total $test_count" >> $test_results_path
+ echo "success $test_success" >> $test_results_path
+ echo "fixed $test_fixed" >> $test_results_path
+ echo "broken $test_broken" >> $test_results_path
+ echo "failed $test_failure" >> $test_results_path
+ echo "" >> $test_results_path
if test "$test_fixed" != 0
then
# Test the binaries we have just built. The tests are kept in
# t/ subdirectory and are run in 'trash directory' subdirectory.
-PATH=$(pwd)/..:$PATH
+TEST_DIRECTORY=$(pwd)
+PATH=$TEST_DIRECTORY/..:$PATH
GIT_EXEC_PATH=$(pwd)/..
GIT_TEMPLATE_DIR=$(pwd)/../templates/blt
unset GIT_CONFIG
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script to check the commit log message taken by
-# applypatch from an e-mail message.
-#
-# The hook should exit with non-zero status after issuing an
-# appropriate message if it wants to stop the commit. The hook is
-# allowed to edit the commit message file.
-#
-# To enable this hook, make this file executable.
-
-. git-sh-setup
-test -x "$GIT_DIR/hooks/commit-msg" &&
- exec "$GIT_DIR/hooks/commit-msg" ${1+"$@"}
-:
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to check the commit log message taken by
+# applypatch from an e-mail message.
+#
+# The hook should exit with non-zero status after issuing an
+# appropriate message if it wants to stop the commit. The hook is
+# allowed to edit the commit message file.
+#
+# To enable this hook, rename this file to "applypatch-msg".
+
+. git-sh-setup
+test -x "$GIT_DIR/hooks/commit-msg" &&
+ exec "$GIT_DIR/hooks/commit-msg" ${1+"$@"}
+:
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script to check the commit log message.
-# Called by git-commit with one argument, the name of the file
-# that has the commit message. The hook should exit with non-zero
-# status after issuing an appropriate message if it wants to stop the
-# commit. The hook is allowed to edit the commit message file.
-#
-# To enable this hook, make this file executable.
-
-# Uncomment the below to add a Signed-off-by line to the message.
-# Doing this in a hook is a bad idea in general, but the prepare-commit-msg
-# hook is more suited to it.
-#
-# SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
-# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"
-
-# This example catches duplicate Signed-off-by lines.
-
-test "" = "$(grep '^Signed-off-by: ' "$1" |
- sort | uniq -c | sed -e '/^[ ]*1[ ]/d')" || {
- echo >&2 Duplicate Signed-off-by lines.
- exit 1
-}
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to check the commit log message.
+# Called by git-commit with one argument, the name of the file
+# that has the commit message. The hook should exit with non-zero
+# status after issuing an appropriate message if it wants to stop the
+# commit. The hook is allowed to edit the commit message file.
+#
+# To enable this hook, rename this file to "commit-msg".
+
+# Uncomment the below to add a Signed-off-by line to the message.
+# Doing this in a hook is a bad idea in general, but the prepare-commit-msg
+# hook is more suited to it.
+#
+# SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
+# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"
+
+# This example catches duplicate Signed-off-by lines.
+
+test "" = "$(grep '^Signed-off-by: ' "$1" |
+ sort | uniq -c | sed -e '/^[ ]*1[ ]/d')" || {
+ echo >&2 Duplicate Signed-off-by lines.
+ exit 1
+}
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script that is called after a successful
-# commit is made.
-#
-# To enable this hook, make this file executable.
-
-: Nothing
--- /dev/null
+#!/bin/sh
+#
+# An example hook script that is called after a successful
+# commit is made.
+#
+# To enable this hook, rename this file to "post-commit".
+
+: Nothing
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script for the post-receive event
-#
-# This script is run after receive-pack has accepted a pack and the
-# repository has been updated. It is passed arguments in through stdin
-# in the form
-# <oldrev> <newrev> <refname>
-# For example:
-# aa453216d1b3e49e7f6f98441fa56946ddcd6a20 68f7abf4e6f922807889f52bc043ecd31b79f814 refs/heads/master
-#
-# see contrib/hooks/ for an sample, or uncomment the next line (on debian)
-#
-
-
-#. /usr/share/doc/git-core/contrib/hooks/post-receive-email
--- /dev/null
+#!/bin/sh
+#
+# An example hook script for the "post-receive" event.
+#
+# The "post-receive" script is run after receive-pack has accepted a pack
+# and the repository has been updated. It is passed arguments in through
+# stdin in the form
+# <oldrev> <newrev> <refname>
+# For example:
+# aa453216d1b3e49e7f6f98441fa56946ddcd6a20 68f7abf4e6f922807889f52bc043ecd31b79f814 refs/heads/master
+#
+# see contrib/hooks/ for a sample, or uncomment the next line and
+# rename the file to "post-receive".
+
+#. /usr/share/doc/git-core/contrib/hooks/post-receive-email
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script to prepare a packed repository for use over
-# dumb transports.
-#
-# To enable this hook, make this file executable by "chmod +x post-update".
-
-exec git-update-server-info
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to prepare a packed repository for use over
+# dumb transports.
+#
+# To enable this hook, rename this file to "post-update".
+
+exec git-update-server-info
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script to verify what is about to be committed
-# by applypatch from an e-mail message.
-#
-# The hook should exit with non-zero status after issuing an
-# appropriate message if it wants to stop the commit.
-#
-# To enable this hook, make this file executable.
-
-. git-sh-setup
-test -x "$GIT_DIR/hooks/pre-commit" &&
- exec "$GIT_DIR/hooks/pre-commit" ${1+"$@"}
-:
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to verify what is about to be committed
+# by applypatch from an e-mail message.
+#
+# The hook should exit with non-zero status after issuing an
+# appropriate message if it wants to stop the commit.
+#
+# To enable this hook, rename this file to "pre-applypatch".
+
+. git-sh-setup
+test -x "$GIT_DIR/hooks/pre-commit" &&
+ exec "$GIT_DIR/hooks/pre-commit" ${1+"$@"}
+:
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script to verify what is about to be committed.
-# Called by git-commit with no arguments. The hook should
-# exit with non-zero status after issuing an appropriate message if
-# it wants to stop the commit.
-#
-# To enable this hook, make this file executable.
-
-# This is slightly modified from Andrew Morton's Perfect Patch.
-# Lines you introduce should not have trailing whitespace.
-# Also check for an indentation that has SP before a TAB.
-
-if git-rev-parse --verify HEAD 2>/dev/null
-then
- git-diff-index -p -M --cached HEAD --
-else
- # NEEDSWORK: we should produce a diff with an empty tree here
- # if we want to do the same verification for the initial import.
- :
-fi |
-perl -e '
- my $found_bad = 0;
- my $filename;
- my $reported_filename = "";
- my $lineno;
- sub bad_line {
- my ($why, $line) = @_;
- if (!$found_bad) {
- print STDERR "*\n";
- print STDERR "* You have some suspicious patch lines:\n";
- print STDERR "*\n";
- $found_bad = 1;
- }
- if ($reported_filename ne $filename) {
- print STDERR "* In $filename\n";
- $reported_filename = $filename;
- }
- print STDERR "* $why (line $lineno)\n";
- print STDERR "$filename:$lineno:$line\n";
- }
- while (<>) {
- if (m|^diff --git a/(.*) b/\1$|) {
- $filename = $1;
- next;
- }
- if (/^@@ -\S+ \+(\d+)/) {
- $lineno = $1 - 1;
- next;
- }
- if (/^ /) {
- $lineno++;
- next;
- }
- if (s/^\+//) {
- $lineno++;
- chomp;
- if (/\s$/) {
- bad_line("trailing whitespace", $_);
- }
- if (/^\s* \t/) {
- bad_line("indent SP followed by a TAB", $_);
- }
- if (/^([<>])\1{6} |^={7}$/) {
- bad_line("unresolved merge conflict", $_);
- }
- }
- }
- exit($found_bad);
-'
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to verify what is about to be committed.
+# Called by git-commit with no arguments. The hook should
+# exit with non-zero status after issuing an appropriate message if
+# it wants to stop the commit.
+#
+# To enable this hook, rename this file to "pre-commit".
+
+# This is slightly modified from Andrew Morton's Perfect Patch.
+# Lines you introduce should not have trailing whitespace.
+# Also check for an indentation that has SP before a TAB.
+
+if git-rev-parse --verify HEAD 2>/dev/null
+then
+ git-diff-index -p -M --cached HEAD --
+else
+ # NEEDSWORK: we should produce a diff with an empty tree here
+ # if we want to do the same verification for the initial import.
+ :
+fi |
+perl -e '
+ my $found_bad = 0;
+ my $filename;
+ my $reported_filename = "";
+ my $lineno;
+ sub bad_line {
+ my ($why, $line) = @_;
+ if (!$found_bad) {
+ print STDERR "*\n";
+ print STDERR "* You have some suspicious patch lines:\n";
+ print STDERR "*\n";
+ $found_bad = 1;
+ }
+ if ($reported_filename ne $filename) {
+ print STDERR "* In $filename\n";
+ $reported_filename = $filename;
+ }
+ print STDERR "* $why (line $lineno)\n";
+ print STDERR "$filename:$lineno:$line\n";
+ }
+ while (<>) {
+ if (m|^diff --git a/(.*) b/\1$|) {
+ $filename = $1;
+ next;
+ }
+ if (/^@@ -\S+ \+(\d+)/) {
+ $lineno = $1 - 1;
+ next;
+ }
+ if (/^ /) {
+ $lineno++;
+ next;
+ }
+ if (s/^\+//) {
+ $lineno++;
+ chomp;
+ if (/\s$/) {
+ bad_line("trailing whitespace", $_);
+ }
+ if (/^\s* \t/) {
+ bad_line("indent SP followed by a TAB", $_);
+ }
+ if (/^([<>])\1{6} |^={7}$/) {
+ bad_line("unresolved merge conflict", $_);
+ }
+ }
+ }
+ exit($found_bad);
+'
+++ /dev/null
-#!/bin/sh
-#
-# Copyright (c) 2006 Junio C Hamano
-#
-
-publish=next
-basebranch="$1"
-if test "$#" = 2
-then
- topic="refs/heads/$2"
-else
- topic=`git symbolic-ref HEAD`
-fi
-
-case "$basebranch,$topic" in
-master,refs/heads/??/*)
- ;;
-*)
- exit 0 ;# we do not interrupt others.
- ;;
-esac
-
-# Now we are dealing with a topic branch being rebased
-# on top of master. Is it OK to rebase it?
-
-# Is topic fully merged to master?
-not_in_master=`git-rev-list --pretty=oneline ^master "$topic"`
-if test -z "$not_in_master"
-then
- echo >&2 "$topic is fully merged to master; better remove it."
- exit 1 ;# we could allow it, but there is no point.
-fi
-
-# Is topic ever merged to next? If so you should not be rebasing it.
-only_next_1=`git-rev-list ^master "^$topic" ${publish} | sort`
-only_next_2=`git-rev-list ^master ${publish} | sort`
-if test "$only_next_1" = "$only_next_2"
-then
- not_in_topic=`git-rev-list "^$topic" master`
- if test -z "$not_in_topic"
- then
- echo >&2 "$topic is already up-to-date with master"
- exit 1 ;# we could allow it, but there is no point.
- else
- exit 0
- fi
-else
- not_in_next=`git-rev-list --pretty=oneline ^${publish} "$topic"`
- perl -e '
- my $topic = $ARGV[0];
- my $msg = "* $topic has commits already merged to public branch:\n";
- my (%not_in_next) = map {
- /^([0-9a-f]+) /;
- ($1 => 1);
- } split(/\n/, $ARGV[1]);
- for my $elem (map {
- /^([0-9a-f]+) (.*)$/;
- [$1 => $2];
- } split(/\n/, $ARGV[2])) {
- if (!exists $not_in_next{$elem->[0]}) {
- if ($msg) {
- print STDERR $msg;
- undef $msg;
- }
- print STDERR " $elem->[1]\n";
- }
- }
- ' "$topic" "$not_in_next" "$not_in_master"
- exit 1
-fi
-
-exit 0
-
-################################################################
-
-This sample hook safeguards topic branches that have been
-published from being rewound.
-
-The workflow assumed here is:
-
- * Once a topic branch forks from "master", "master" is never
- merged into it again (either directly or indirectly).
-
- * Once a topic branch is fully cooked and merged into "master",
- it is deleted. If you need to build on top of it to correct
- earlier mistakes, a new topic branch is created by forking at
- the tip of the "master". This is not strictly necessary, but
- it makes it easier to keep your history simple.
-
- * Whenever you need to test or publish your changes to topic
- branches, merge them into "next" branch.
-
-The script, being an example, hardcodes the publish branch name
-to be "next", but it is trivial to make it configurable via
-$GIT_DIR/config mechanism.
-
-With this workflow, you would want to know:
-
-(1) ... if a topic branch has ever been merged to "next". Young
- topic branches can have stupid mistakes you would rather
- clean up before publishing, and things that have not been
- merged into other branches can be easily rebased without
- affecting other people. But once it is published, you would
- not want to rewind it.
-
-(2) ... if a topic branch has been fully merged to "master".
- Then you can delete it. More importantly, you should not
- build on top of it -- other people may already want to
- change things related to the topic as patches against your
- "master", so if you need further changes, it is better to
- fork the topic (perhaps with the same name) afresh from the
- tip of "master".
-
-Let's look at this example:
-
- o---o---o---o---o---o---o---o---o---o "next"
- / / / /
- / a---a---b A / /
- / / / /
- / / c---c---c---c B /
- / / / \ /
- / / / b---b C \ /
- / / / / \ /
- ---o---o---o---o---o---o---o---o---o---o---o "master"
-
-
-A, B and C are topic branches.
-
- * A has one fix since it was merged up to "next".
-
- * B has finished. It has been fully merged up to "master" and "next",
- and is ready to be deleted.
-
- * C has not merged to "next" at all.
-
-We would want to allow C to be rebased, refuse A, and encourage
-B to be deleted.
-
-To compute (1):
-
- git-rev-list ^master ^topic next
- git-rev-list ^master next
-
- if these match, topic has not merged in next at all.
-
-To compute (2):
-
- git-rev-list master..topic
-
- if this is empty, it is fully merged to "master".
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006, 2008 Junio C Hamano
+#
+# The "pre-rebase" hook is run just before "git-rebase" starts doing
+# its job, and can prevent the command from running by exiting with
+# non-zero status.
+#
+# The hook is called with the following parameters:
+#
+# $1 -- the upstream the series was forked from.
+# $2 -- the branch being rebased (or empty when rebasing the current branch).
+#
+# This sample shows how to prevent topic branches that are already
+# merged to 'next' branch from getting rebased, because allowing it
+# would result in rebasing already published history.
+
+publish=next
+basebranch="$1"
+if test "$#" = 2
+then
+ topic="refs/heads/$2"
+else
+ topic=`git symbolic-ref HEAD` ||
+ exit 0 ;# we do not interrupt rebasing detached HEAD
+fi
+
+case "$topic" in
+refs/heads/??/*)
+ ;;
+*)
+ exit 0 ;# we do not interrupt others.
+ ;;
+esac
+
+# Now we are dealing with a topic branch being rebased
+# on top of master. Is it OK to rebase it?
+
+# Does the topic really exist?
+git show-ref -q "$topic" || {
+ echo >&2 "No such branch $topic"
+ exit 1
+}
+
+# Is topic fully merged to master?
+not_in_master=`git-rev-list --pretty=oneline ^master "$topic"`
+if test -z "$not_in_master"
+then
+ echo >&2 "$topic is fully merged to master; better remove it."
+ exit 1 ;# we could allow it, but there is no point.
+fi
+
+# Is topic ever merged to next? If so you should not be rebasing it.
+only_next_1=`git-rev-list ^master "^$topic" ${publish} | sort`
+only_next_2=`git-rev-list ^master ${publish} | sort`
+if test "$only_next_1" = "$only_next_2"
+then
+ not_in_topic=`git-rev-list "^$topic" master`
+ if test -z "$not_in_topic"
+ then
+ echo >&2 "$topic is already up-to-date with master"
+ exit 1 ;# we could allow it, but there is no point.
+ else
+ exit 0
+ fi
+else
+ not_in_next=`git-rev-list --pretty=oneline ^${publish} "$topic"`
+ perl -e '
+ my $topic = $ARGV[0];
+ my $msg = "* $topic has commits already merged to public branch:\n";
+ my (%not_in_next) = map {
+ /^([0-9a-f]+) /;
+ ($1 => 1);
+ } split(/\n/, $ARGV[1]);
+ for my $elem (map {
+ /^([0-9a-f]+) (.*)$/;
+ [$1 => $2];
+ } split(/\n/, $ARGV[2])) {
+ if (!exists $not_in_next{$elem->[0]}) {
+ if ($msg) {
+ print STDERR $msg;
+ undef $msg;
+ }
+ print STDERR " $elem->[1]\n";
+ }
+ }
+ ' "$topic" "$not_in_next" "$not_in_master"
+ exit 1
+fi
+
+exit 0
+
+################################################################
+
+This sample hook safeguards topic branches that have been
+published from being rewound.
+
+The workflow assumed here is:
+
+ * Once a topic branch forks from "master", "master" is never
+ merged into it again (either directly or indirectly).
+
+ * Once a topic branch is fully cooked and merged into "master",
+ it is deleted. If you need to build on top of it to correct
+ earlier mistakes, a new topic branch is created by forking at
+ the tip of the "master". This is not strictly necessary, but
+ it makes it easier to keep your history simple.
+
+ * Whenever you need to test or publish your changes to topic
+ branches, merge them into "next" branch.
+
+The script, being an example, hardcodes the publish branch name
+to be "next", but it is trivial to make it configurable via
+$GIT_DIR/config mechanism.
+
+With this workflow, you would want to know:
+
+(1) ... if a topic branch has ever been merged to "next". Young
+ topic branches can have stupid mistakes you would rather
+ clean up before publishing, and things that have not been
+ merged into other branches can be easily rebased without
+ affecting other people. But once it is published, you would
+ not want to rewind it.
+
+(2) ... if a topic branch has been fully merged to "master".
+ Then you can delete it. More importantly, you should not
+ build on top of it -- other people may already want to
+ change things related to the topic as patches against your
+ "master", so if you need further changes, it is better to
+ fork the topic (perhaps with the same name) afresh from the
+ tip of "master".
+
+Let's look at this example:
+
+ o---o---o---o---o---o---o---o---o---o "next"
+ / / / /
+ / a---a---b A / /
+ / / / /
+ / / c---c---c---c B /
+ / / / \ /
+ / / / b---b C \ /
+ / / / / \ /
+ ---o---o---o---o---o---o---o---o---o---o---o "master"
+
+
+A, B and C are topic branches.
+
+ * A has one fix since it was merged up to "next".
+
+ * B has finished. It has been fully merged up to "master" and "next",
+ and is ready to be deleted.
+
+ * C has not merged to "next" at all.
+
+We would want to allow C to be rebased, refuse A, and encourage
+B to be deleted.
+
+To compute (1):
+
+ git-rev-list ^master ^topic next
+ git-rev-list ^master next
+
+ if these match, topic has not merged in next at all.
+
+To compute (2):
+
+ git-rev-list master..topic
+
+ if this is empty, it is fully merged to "master".
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script to prepare the commit log message.
-# Called by git-commit with the name of the file that has the
-# commit message, followed by the description of the commit
-# message's source. The hook's purpose is to edit the commit
-# message file. If the hook fails with a non-zero status,
-# the commit is aborted.
-#
-# To enable this hook, make this file executable.
-
-# This hook includes three examples. The first comments out the
-# "Conflicts:" part of a merge commit.
-#
-# The second includes the output of "git diff --name-status -r"
-# into the message, just before the "git status" output. It is
-# commented because it doesn't cope with --amend or with squashed
-# commits.
-#
-# The third example adds a Signed-off-by line to the message, that can
-# still be edited. This is rarely a good idea.
-
-case "$2,$3" in
- merge,)
- perl -i -ne 's/^/# /, s/^# #/#/ if /^Conflicts/ .. /#/; print' "$1" ;;
-
-# ,|template,)
-# perl -i -pe '
-# print "\n" . `git diff --cached --name-status -r`
-# if /^#/ && $first++ == 0' "$1" ;;
-
- *) ;;
-esac
-
-# SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
-# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to prepare the commit log message.
+# Called by git-commit with the name of the file that has the
+# commit message, followed by the description of the commit
+# message's source. The hook's purpose is to edit the commit
+# message file. If the hook fails with a non-zero status,
+# the commit is aborted.
+#
+# To enable this hook, rename this file to "prepare-commit-msg".
+
+# This hook includes three examples. The first comments out the
+# "Conflicts:" part of a merge commit.
+#
+# The second includes the output of "git diff --name-status -r"
+# into the message, just before the "git status" output. It is
+# commented because it doesn't cope with --amend or with squashed
+# commits.
+#
+# The third example adds a Signed-off-by line to the message, which can
+# still be edited. This is rarely a good idea.
+
+case "$2,$3" in
+ merge,)
+ perl -i -ne 's/^/# /, s/^# #/#/ if /^Conflicts/ .. /#/; print' "$1" ;;
+
+# ,|template,)
+# perl -i -pe '
+# print "\n" . `git diff --cached --name-status -r`
+# if /^#/ && $first++ == 0' "$1" ;;
+
+ *) ;;
+esac
+
+# SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
+# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"
+++ /dev/null
-#!/bin/sh
-#
-# An example hook script to blocks unannotated tags from entering.
-# Called by git-receive-pack with arguments: refname sha1-old sha1-new
-#
-# To enable this hook, make this file executable by "chmod +x update".
-#
-# Config
-# ------
-# hooks.allowunannotated
-# This boolean sets whether unannotated tags will be allowed into the
-# repository. By default they won't be.
-# hooks.allowdeletetag
-# This boolean sets whether deleting tags will be allowed in the
-# repository. By default they won't be.
-# hooks.allowdeletebranch
-# This boolean sets whether deleting branches will be allowed in the
-# repository. By default they won't be.
-#
-
-# --- Command line
-refname="$1"
-oldrev="$2"
-newrev="$3"
-
-# --- Safety check
-if [ -z "$GIT_DIR" ]; then
- echo "Don't run this script from the command line." >&2
- echo " (if you want, you could supply GIT_DIR then run" >&2
- echo " $0 <ref> <oldrev> <newrev>)" >&2
- exit 1
-fi
-
-if [ -z "$refname" -o -z "$oldrev" -o -z "$newrev" ]; then
- echo "Usage: $0 <ref> <oldrev> <newrev>" >&2
- exit 1
-fi
-
-# --- Config
-allowunannotated=$(git config --bool hooks.allowunannotated)
-allowdeletebranch=$(git config --bool hooks.allowdeletebranch)
-allowdeletetag=$(git config --bool hooks.allowdeletetag)
-
-# check for no description
-projectdesc=$(sed -e '1q' "$GIT_DIR/description")
-if [ -z "$projectdesc" -o "$projectdesc" = "Unnamed repository; edit this file to name it for gitweb." ]; then
- echo "*** Project description file hasn't been set" >&2
- exit 1
-fi
-
-# --- Check types
-# if $newrev is 0000...0000, it's a commit to delete a ref.
-if [ "$newrev" = "0000000000000000000000000000000000000000" ]; then
- newrev_type=delete
-else
- newrev_type=$(git-cat-file -t $newrev)
-fi
-
-case "$refname","$newrev_type" in
- refs/tags/*,commit)
- # un-annotated tag
- short_refname=${refname##refs/tags/}
- if [ "$allowunannotated" != "true" ]; then
- echo "*** The un-annotated tag, $short_refname, is not allowed in this repository" >&2
- echo "*** Use 'git tag [ -a | -s ]' for tags you want to propagate." >&2
- exit 1
- fi
- ;;
- refs/tags/*,delete)
- # delete tag
- if [ "$allowdeletetag" != "true" ]; then
- echo "*** Deleting a tag is not allowed in this repository" >&2
- exit 1
- fi
- ;;
- refs/tags/*,tag)
- # annotated tag
- ;;
- refs/heads/*,commit)
- # branch
- ;;
- refs/heads/*,delete)
- # delete branch
- if [ "$allowdeletebranch" != "true" ]; then
- echo "*** Deleting a branch is not allowed in this repository" >&2
- exit 1
- fi
- ;;
- refs/remotes/*,commit)
- # tracking branch
- ;;
- refs/remotes/*,delete)
- # delete tracking branch
- if [ "$allowdeletebranch" != "true" ]; then
- echo "*** Deleting a tracking branch is not allowed in this repository" >&2
- exit 1
- fi
- ;;
- *)
- # Anything else (is there anything else?)
- echo "*** Update hook: unknown type of update to ref $refname of type $newrev_type" >&2
- exit 1
- ;;
-esac
-
-# --- Finished
-exit 0
--- /dev/null
+#!/bin/sh
+#
+# An example hook script to block unannotated tags from entering.
+# Called by git-receive-pack with arguments: refname sha1-old sha1-new
+#
+# To enable this hook, rename this file to "update".
+#
+# Config
+# ------
+# hooks.allowunannotated
+# This boolean sets whether unannotated tags will be allowed into the
+# repository. By default they won't be.
+# hooks.allowdeletetag
+# This boolean sets whether deleting tags will be allowed in the
+# repository. By default they won't be.
+# hooks.allowdeletebranch
+# This boolean sets whether deleting branches will be allowed in the
+# repository. By default they won't be.
+#
+
+# --- Command line
+refname="$1"
+oldrev="$2"
+newrev="$3"
+
+# --- Safety check
+if [ -z "$GIT_DIR" ]; then
+ echo "Don't run this script from the command line." >&2
+ echo " (if you want, you could supply GIT_DIR then run" >&2
+ echo " $0 <ref> <oldrev> <newrev>)" >&2
+ exit 1
+fi
+
+if [ -z "$refname" -o -z "$oldrev" -o -z "$newrev" ]; then
+ echo "Usage: $0 <ref> <oldrev> <newrev>" >&2
+ exit 1
+fi
+
+# --- Config
+allowunannotated=$(git config --bool hooks.allowunannotated)
+allowdeletebranch=$(git config --bool hooks.allowdeletebranch)
+allowdeletetag=$(git config --bool hooks.allowdeletetag)
+
+# check for no description
+projectdesc=$(sed -e '1q' "$GIT_DIR/description")
+if [ -z "$projectdesc" -o "$projectdesc" = "Unnamed repository; edit this file to name it for gitweb." ]; then
+ echo "*** Project description file hasn't been set" >&2
+ exit 1
+fi
+
+# --- Check types
+# if $newrev is 0000...0000, it's a commit to delete a ref.
+if [ "$newrev" = "0000000000000000000000000000000000000000" ]; then
+ newrev_type=delete
+else
+ newrev_type=$(git-cat-file -t $newrev)
+fi
+
+case "$refname","$newrev_type" in
+ refs/tags/*,commit)
+ # un-annotated tag
+ short_refname=${refname##refs/tags/}
+ if [ "$allowunannotated" != "true" ]; then
+ echo "*** The un-annotated tag, $short_refname, is not allowed in this repository" >&2
+ echo "*** Use 'git tag [ -a | -s ]' for tags you want to propagate." >&2
+ exit 1
+ fi
+ ;;
+ refs/tags/*,delete)
+ # delete tag
+ if [ "$allowdeletetag" != "true" ]; then
+ echo "*** Deleting a tag is not allowed in this repository" >&2
+ exit 1
+ fi
+ ;;
+ refs/tags/*,tag)
+ # annotated tag
+ ;;
+ refs/heads/*,commit)
+ # branch
+ ;;
+ refs/heads/*,delete)
+ # delete branch
+ if [ "$allowdeletebranch" != "true" ]; then
+ echo "*** Deleting a branch is not allowed in this repository" >&2
+ exit 1
+ fi
+ ;;
+ refs/remotes/*,commit)
+ # tracking branch
+ ;;
+ refs/remotes/*,delete)
+ # delete tracking branch
+ if [ "$allowdeletebranch" != "true" ]; then
+ echo "*** Deleting a tracking branch is not allowed in this repository" >&2
+ exit 1
+ fi
+ ;;
+ *)
+ # Anything else (is there anything else?)
+ echo "*** Update hook: unknown type of update to ref $refname of type $newrev_type" >&2
+ exit 1
+ ;;
+esac
+
+# --- Finished
+exit 0
#include "parse-options.h"
static int boolean = 0;
-static int integer = 0;
+static unsigned long integer = 0;
+static int abbrev = 7;
+static int verbose = 0, dry_run = 0, quiet = 0;
static char *string = NULL;
+int length_callback(const struct option *opt, const char *arg, int unset)
+{
+ printf("Callback: \"%s\", %d\n",
+ (arg ? arg : "not set"), unset);
+ if (unset)
+ return 1; /* do not support unset */
+
+ *(unsigned long *)opt->value = strlen(arg);
+ return 0;
+}
+
int main(int argc, const char **argv)
{
const char *usage[] = {
};
struct option options[] = {
OPT_BOOLEAN('b', "boolean", &boolean, "get a boolean"),
+ OPT_BIT('4', "or4", &boolean,
+ "bitwise-or boolean with ...0100", 4),
+ OPT_GROUP(""),
OPT_INTEGER('i', "integer", &integer, "get a integer"),
OPT_INTEGER('j', NULL, &integer, "get a integer, too"),
- OPT_GROUP("string options"),
+ OPT_SET_INT(0, "set23", &integer, "set integer to 23", 23),
+ OPT_DATE('t', NULL, &integer, "get timestamp of <time>"),
+ OPT_CALLBACK('L', "length", &integer, "str",
+ "get length of <str>", length_callback),
+ OPT_GROUP("String options"),
OPT_STRING('s', "string", &string, "string", "get a string"),
OPT_STRING(0, "string2", &string, "str", "get another string"),
OPT_STRING(0, "st", &string, "st", "get another string (pervert ordering)"),
OPT_STRING('o', NULL, &string, "str", "get another string"),
- OPT_GROUP("magic arguments"),
+ OPT_SET_PTR(0, "default-string", &string,
+ "set string to default", (unsigned long)"default"),
+ OPT_GROUP("Magic arguments"),
OPT_ARGUMENT("quux", "means --quux"),
+ OPT_GROUP("Standard options"),
+ OPT__ABBREV(&abbrev),
+ OPT__VERBOSE(&verbose),
+ OPT__DRY_RUN(&dry_run),
+ OPT__QUIET(&quiet),
OPT_END(),
};
int i;
argc = parse_options(argc, argv, options, usage, 0);
printf("boolean: %d\n", boolean);
- printf("integer: %d\n", integer);
+ printf("integer: %lu\n", integer);
printf("string: %s\n", string ? string : "(not set)");
+ printf("abbrev: %d\n", abbrev);
+ printf("verbose: %d\n", verbose);
+ printf("quiet: %s\n", quiet ? "yes" : "no");
+ printf("dry run: %s\n", dry_run ? "yes" : "no");
for (i = 0; i < argc; i++)
printf("arg %02d: %s\n", i, argv[i]);
--- /dev/null
+/*
+ * Various trivial helper wrappers around standard functions
+ */
+#include "cache.h"
+
+char *xstrdup(const char *str)
+{
+ char *ret = strdup(str);
+ if (!ret) {
+ release_pack_memory(strlen(str) + 1, -1);
+ ret = strdup(str);
+ if (!ret)
+ die("Out of memory, strdup failed");
+ }
+ return ret;
+}
+
+void *xmalloc(size_t size)
+{
+ void *ret = malloc(size);
+ if (!ret && !size)
+ ret = malloc(1);
+ if (!ret) {
+ release_pack_memory(size, -1);
+ ret = malloc(size);
+ if (!ret && !size)
+ ret = malloc(1);
+ if (!ret)
+ die("Out of memory, malloc failed");
+ }
+#ifdef XMALLOC_POISON
+ memset(ret, 0xA5, size);
+#endif
+ return ret;
+}
+
+/*
+ * xmemdupz() allocates (len + 1) bytes of memory, duplicates "len" bytes of
+ * "data" to the allocated memory, zero terminates the allocated memory,
+ * and returns a pointer to the allocated memory. If the allocation fails,
+ * the program dies.
+ */
+void *xmemdupz(const void *data, size_t len)
+{
+ char *p = xmalloc(len + 1);
+ memcpy(p, data, len);
+ p[len] = '\0';
+ return p;
+}
+
+char *xstrndup(const char *str, size_t len)
+{
+ char *p = memchr(str, '\0', len);
+ return xmemdupz(str, p ? p - str : len);
+}
+
+void *xrealloc(void *ptr, size_t size)
+{
+ void *ret = realloc(ptr, size);
+ if (!ret && !size)
+ ret = realloc(ptr, 1);
+ if (!ret) {
+ release_pack_memory(size, -1);
+ ret = realloc(ptr, size);
+ if (!ret && !size)
+ ret = realloc(ptr, 1);
+ if (!ret)
+ die("Out of memory, realloc failed");
+ }
+ return ret;
+}
+
+void *xcalloc(size_t nmemb, size_t size)
+{
+ void *ret = calloc(nmemb, size);
+ if (!ret && (!nmemb || !size))
+ ret = calloc(1, 1);
+ if (!ret) {
+ release_pack_memory(nmemb * size, -1);
+ ret = calloc(nmemb, size);
+ if (!ret && (!nmemb || !size))
+ ret = calloc(1, 1);
+ if (!ret)
+ die("Out of memory, calloc failed");
+ }
+ return ret;
+}
+
+void *xmmap(void *start, size_t length,
+ int prot, int flags, int fd, off_t offset)
+{
+ void *ret = mmap(start, length, prot, flags, fd, offset);
+ if (ret == MAP_FAILED) {
+ if (!length)
+ return NULL;
+ release_pack_memory(length, fd);
+ ret = mmap(start, length, prot, flags, fd, offset);
+ if (ret == MAP_FAILED)
+ die("Out of memory? mmap failed: %s", strerror(errno));
+ }
+ return ret;
+}
+
+/*
+ * xread() is the same as read(), but it automatically restarts read()
+ * operations after a recoverable error (EAGAIN and EINTR). xread()
+ * DOES NOT GUARANTEE that "len" bytes are read even if the data is available.
+ */
+ssize_t xread(int fd, void *buf, size_t len)
+{
+ ssize_t nr;
+ while (1) {
+ nr = read(fd, buf, len);
+ if ((nr < 0) && (errno == EAGAIN || errno == EINTR))
+ continue;
+ return nr;
+ }
+}
+
+/*
+ * xwrite() is the same as write(), but it automatically restarts write()
+ * operations after a recoverable error (EAGAIN and EINTR). xwrite() DOES NOT
+ * GUARANTEE that "len" bytes are written even if the operation is successful.
+ */
+ssize_t xwrite(int fd, const void *buf, size_t len)
+{
+ ssize_t nr;
+ while (1) {
+ nr = write(fd, buf, len);
+ if ((nr < 0) && (errno == EAGAIN || errno == EINTR))
+ continue;
+ return nr;
+ }
+}
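Because xread() and xwrite() deliberately pass short counts through to the caller, code that must transfer an entire buffer still has to loop. The sketch below is illustrative only and not part of the patch; the helper name is hypothetical, and it relies solely on the xwrite() defined above plus errno from the usual headers.

/*
 * Sketch only -- not part of the patch.  The helper name is
 * hypothetical; it builds on the xwrite() above to push out a whole
 * buffer despite short writes.
 */
static ssize_t write_whole_buffer(int fd, const void *buf, size_t len)
{
	const char *p = buf;
	size_t remaining = len;

	while (remaining) {
		ssize_t written = xwrite(fd, p, remaining);
		if (written < 0)
			return -1;	/* errno set by write() */
		if (!written) {
			/* no progress; report it as an error */
			errno = ENOSPC;
			return -1;
		}
		p += written;
		remaining -= written;
	}
	return len;
}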
+
+int xdup(int fd)
+{
+ int ret = dup(fd);
+ if (ret < 0)
+ die("dup failed: %s", strerror(errno));
+ return ret;
+}
+
+FILE *xfdopen(int fd, const char *mode)
+{
+ FILE *stream = fdopen(fd, mode);
+ if (stream == NULL)
+ die("Out of memory? fdopen failed: %s", strerror(errno));
+ return stream;
+}
+
+int xmkstemp(char *template)
+{
+ int fd;
+
+ fd = mkstemp(template);
+ if (fd < 0)
+ die("Unable to create temporary file: %s", strerror(errno));
+ return fd;
+}
"use \"git add/rm <file>...\" to update what will be committed";
static const char use_add_to_include_msg[] =
"use \"git add <file>...\" to include in what will be committed";
+enum untracked_status_type show_untracked_files = SHOW_NORMAL_UNTRACKED_FILES;
static int parse_status_slot(const char *var, int offset)
{
wt_status_print_changed(s);
if (wt_status_submodule_summary)
wt_status_print_submodule_summary(s);
- wt_status_print_untracked(s);
+ if (show_untracked_files)
+ wt_status_print_untracked(s);
+ else if (s->commitable)
+ fprintf(s->fp, "# Untracked files not listed (use -u option to show untracked files)\n");
if (s->verbose && !s->is_initial)
wt_status_print_verbose(s);
printf("nothing added to commit but untracked files present (use \"git add\" to track)\n");
else if (s->is_initial)
printf("nothing to commit (create/copy files and use \"git add\" to track)\n");
+ else if (!show_untracked_files)
+ printf("nothing to commit (use -u to show untracked files)\n");
else
printf("nothing to commit (working directory clean)\n");
}
wt_status_relative_paths = git_config_bool(k, v);
return 0;
}
+ if (!strcmp(k, "status.showuntrackedfiles")) {
+ if (!v)
+ return config_error_nonbool(v);
+ else if (!strcmp(v, "no"))
+ show_untracked_files = SHOW_NO_UNTRACKED_FILES;
+ else if (!strcmp(v, "normal"))
+ show_untracked_files = SHOW_NORMAL_UNTRACKED_FILES;
+ else if (!strcmp(v, "all"))
+ show_untracked_files = SHOW_ALL_UNTRACKED_FILES;
+ else
+ return error("Invalid untracked files mode '%s'", v);
+ return 0;
+ }
return git_color_default_config(k, v, cb);
}
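The -u/--untracked-files option documented earlier accepts the same three keywords as status.showUntrackedFiles, so the command-line parser can reuse the enum set by the configuration code above. The following is a minimal sketch rather than the patch's actual option handling; the function name is hypothetical, and the no-argument case follows the documented default of 'all'.

/*
 * Sketch only -- not part of the patch.  The function name is
 * hypothetical; it shows how a -u/--untracked-files argument could be
 * mapped onto the enum that the configuration code above sets.
 * Giving -u without a mode is documented to mean 'all'.
 */
static void handle_untracked_files_arg(const char *arg)
{
	if (!arg)
		show_untracked_files = SHOW_ALL_UNTRACKED_FILES;
	else if (!strcmp(arg, "no"))
		show_untracked_files = SHOW_NO_UNTRACKED_FILES;
	else if (!strcmp(arg, "normal"))
		show_untracked_files = SHOW_NORMAL_UNTRACKED_FILES;
	else if (!strcmp(arg, "all"))
		show_untracked_files = SHOW_ALL_UNTRACKED_FILES;
	else
		die("Invalid untracked files mode '%s'", arg);
}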
WT_STATUS_NOBRANCH,
};
+enum untracked_status_type {
+ SHOW_NO_UNTRACKED_FILES,
+ SHOW_NORMAL_UNTRACKED_FILES,
+ SHOW_ALL_UNTRACKED_FILES
+};
+extern enum untracked_status_type show_untracked_files;
+
struct wt_status {
int is_initial;
char *branch;