x = 1;
}
- is frowned upon. A gray area is when the statement extends
- over a few lines, and/or you have a lengthy comment atop of
- it. Also, like in the Linux kernel, if there is a long list
- of "else if" statements, it can make sense to add braces to
- single line blocks.
+ is frowned upon. But there are a few exceptions:
+
+ - When the statement extends over a few lines (e.g., a while loop
+ with an embedded conditional, or a comment). E.g.:
+
+ while (foo) {
+ if (x)
+ one();
+ else
+ two();
+ }
+
+ if (foo) {
+ /*
+ * This one requires some explanation,
+ * so we're better off with braces to make
+ * it obvious that the indentation is correct.
+ */
+ doit();
+ }
+
+ - When there are multiple arms to a conditional and some of them
+ require braces, enclose even a single line block in braces for
+ consistency. E.g.:
+
+ if (foo) {
+ doit();
+ } else {
+ one();
+ two();
+ three();
+ }
- We try to avoid assignments in the condition of an "if" statement.
* A recent update to "git p4" was not usable for older p4 but it
could be made to work with minimum changes. Do so.
+ * "git diff" learned diff.interHunkContext configuration variable
+ that gives the default value for its --inter-hunk-context option.
+
+ * The prereleaseSuffix feature of version comparison that is used in
+ "git tag -l" did not correctly when two or more prereleases for the
+ same release were present (e.g. when 2.0, 2.0-beta1, and 2.0-beta2
+ are there and the code needs to compare 2.0-beta1 and 2.0-beta2).
+
Performance, Internal Implementation, Development Support etc.
* Retire long unused/unmaintained gitview from the contrib/ area.
(merge 3120925c25 sb/remove-gitview later to maint).
+ * Tighten a test to avoid mistaking an extended ERE regexp engine for
+ a PRE regexp engine.
+ (merge 7675c7bd01 jk/grep-e-could-be-extended-beyond-posix later to maint).
+
* Other minor doc, test and build updates and code cleanups.
(merge f2627d9b19 sb/submodule-config-cleanup later to maint).
(merge 384f1a167b sb/unpack-trees-cleanup later to maint).
+ (merge 3f05402ac0 ad/bisect-terms later to maint).
+ (merge 874444b704 rh/diff-orderfile-doc later to maint).
+ (merge c68d2d7c2b ws/request-pull-code-cleanup later to maint).
This option is passed unchanged to gpg's --local-user parameter,
so you may specify a key using any method that gpg supports.
-versionsort.prereleaseSuffix::
- When version sort is used in linkgit:git-tag[1], prerelease
- tags (e.g. "1.0-rc1") may appear after the main release
- "1.0". By specifying the suffix "-rc" in this variable,
- "1.0-rc1" will appear before "1.0".
-+
-This variable can be specified multiple times, once per suffix. The
-order of suffixes in the config file determines the sorting order
-(e.g. if "-pre" appears before "-rc" in the config file then 1.0-preXX
-is sorted before 1.0-rcXX). The sorting order between different
-suffixes is undefined if they are in multiple config files.
+versionsort.prereleaseSuffix (deprecated)::
+ Deprecated alias for `versionsort.suffix`. Ignored if
+ `versionsort.suffix` is set.
+
+versionsort.suffix::
+ Even when version sort is used in linkgit:git-tag[1], tagnames
+ with the same base version but different suffixes are still sorted
+ lexicographically, resulting e.g. in prerelease tags appearing
+ after the main release (e.g. "1.0-rc1" after "1.0"). This
+ variable can be specified to determine the sorting order of tags
+ with different suffixes.
++
+By specifying a single suffix in this variable, any tagname containing
+that suffix will appear before the corresponding main release. E.g. if
+the variable is set to "-rc", then all "1.0-rcX" tags will appear before
+"1.0". If specified multiple times, once per suffix, then the order of
+suffixes in the configuration will determine the sorting order of tagnames
+with those suffixes. E.g. if "-pre" appears before "-rc" in the
+configuration, then all "1.0-preX" tags will be listed before any
+"1.0-rcX" tags. The placement of the main release tag relative to tags
+with various suffixes can be determined by specifying the empty suffix
+among those other suffixes. E.g. if the suffixes "-rc", "", "-ck" and
+"-bfs" appear in the configuration in this order, then all "v4.8-rcX" tags
+are listed first, followed by "v4.8", then "v4.8-ckX" and finally
+"v4.8-bfsX".
++
+If more than one suffix matches the same tagname, then that tagname will
+be sorted according to the suffix which starts at the earliest position in
+the tagname. If more than one of the matching suffixes starts at that
+earliest position, then that tagname will be sorted according to the
+longest of those suffixes.
+The sorting order between different suffixes is undefined if they are
+in multiple config files.
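For illustration, a configuration producing the ordering described above
might look like the following sketch (the suffixes simply mirror the
example in the preceding paragraph):

	# example suffixes only; order determines sort order
	[versionsort]
		suffix = -rc
		suffix = ""
		suffix = -ck
		suffix = -bfs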
web.browser::
Specify a web browser that may be used by some commands.
Generate diffs with <n> lines of context instead of the default
of 3. This value is overridden by the -U option.
+diff.interHunkContext::
+ Show the context between diff hunks, up to the specified number
+ of lines, thereby fusing the hunks that are close to each other.
+ This value serves as the default for the `--inter-hunk-context`
+ command line option.
+
diff.external::
If this config variable is set, diff generation is not
performed using the internal diff machinery, but using the
If set, 'git diff' does not show any source or destination prefix.
diff.orderFile::
- File indicating how to order files within a diff, using
- one shell glob pattern per line.
- Can be overridden by the '-O' option to linkgit:git-diff[1].
+ File indicating how to order files within a diff.
+ See the '-O' option to linkgit:git-diff[1] for details.
+ If `diff.orderFile` is a relative pathname, it is treated as
+ relative to the top of the working tree.
diff.renameLimit::
The number of files to consider when performing the copy/rename
endif::git-format-patch[]
-O<orderfile>::
- Output the patch in the order specified in the
- <orderfile>, which has one shell glob pattern per line.
+ Control the order in which files appear in the output.
This overrides the `diff.orderFile` configuration variable
(see linkgit:git-config[1]). To cancel `diff.orderFile`,
use `-O/dev/null`.
++
+The output order is determined by the order of glob patterns in
+<orderfile>.
+All files with pathnames that match the first pattern are output
+first, all files with pathnames that match the second pattern (but not
+the first) are output next, and so on.
+All files with pathnames that do not match any pattern are output
+last, as if there was an implicit match-all pattern at the end of the
+file.
+If multiple pathnames have the same rank (they match the same pattern
+but no earlier patterns), their output order relative to each other is
+the normal order.
++
+<orderfile> is parsed as follows:
++
+--
+ - Blank lines are ignored, so they can be used as separators for
+ readability.
+
+ - Lines starting with a hash ("`#`") are ignored, so they can be used
+ for comments. Add a backslash ("`\`") to the beginning of the
+ pattern if it starts with a hash.
+
+ - Each other line contains a single pattern.
+--
++
+Patterns have the same syntax and semantics as patterns used for
+fnmatch(3) without the FNM_PATHNAME flag, except a pathname also
+matches a pattern if removing any number of the final pathname
+components matches the pattern. For example, the pattern "`foo*bar`"
+matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`".
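Tying these rules together, a hypothetical <orderfile> (the path patterns
below are only examples) might read:

	# documentation changes first
	Documentation/*

	# then headers, then the remaining sources
	*.h
	*.c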
ifndef::git-format-patch[]
-R::
--inter-hunk-context=<lines>::
Show the context between diff hunks, up to the specified number
of lines, thereby fusing hunks that are close to each other.
+ Defaults to `diff.interHunkContext` or 0 if the config option
+ is unset.
-W::
--function-context::
git bisect start [--term-{old,good}=<term> --term-{new,bad}=<term>]
[--no-checkout] [<bad> [<good>...]] [--] [<paths>...]
- git bisect (bad|new) [<rev>]
- git bisect (good|old) [<rev>...]
+ git bisect (bad|new|<term-new>) [<rev>]
+ git bisect (good|old|<term-old>) [<rev>...]
git bisect terms [--term-good | --term-bad]
git bisect skip [(<rev>|<range>)...]
git bisect reset [<commit>]
'git tag' [-n[<num>]] -l [--contains <commit>] [--points-at <object>]
[--column[=<options>] | --no-column] [--create-reflog] [--sort=<key>]
[--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]
-'git tag' -v <tagname>...
+'git tag' -v [--format=<format>] <tagname>...
DESCRIPTION
-----------
multiple times, in which case the last key becomes the primary
key. Also supports "version:refname" or "v:refname" (tag
names are treated as versions). The "version:refname" sort
- order can also be affected by the
- "versionsort.prereleaseSuffix" configuration variable.
+ order can also be affected by the "versionsort.suffix"
+ configuration variable.
The keys supported are the same as those in `git for-each-ref`.
Sort order defaults to the value configured for the `tag.sort`
variable if it exists, or lexicographic order otherwise. See
SYNOPSIS
--------
[verse]
-'git verify-tag' <tag>...
+'git verify-tag' [--format=<format>] <tag>...
DESCRIPTION
-----------
`void *hashmap_iter_next(struct hashmap_iter *iter)`::
`void *hashmap_iter_first(struct hashmap *map, struct hashmap_iter *iter)`::
- Used to iterate over all entries of a hashmap.
+ Used to iterate over all entries of a hashmap. Note that it is
+ not safe to add entries to or remove entries from the hashmap
+ while iterating.
+
`hashmap_iter_init` initializes a `hashmap_iter` structure.
+
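As a rough sketch of the intended usage (`map`, `struct my_entry` and
`use()` are placeholders, not part of the API):

	struct hashmap_iter iter;
	struct my_entry *e;	/* my_entry, map and use() are placeholders */

	/* do not add or remove entries while this loop runs */
	hashmap_iter_init(&map, &iter);
	while ((e = hashmap_iter_next(&iter)))
		use(e);

	/* hashmap_iter_first() combines the init and the first next */
	for (e = hashmap_iter_first(&map, &iter); e;
	     e = hashmap_iter_next(&iter))
		use(e);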
+++ /dev/null
-in-core index API
-=================
-
-Talk about <read-cache.c> and <cache-tree.c>, things like:
-
-* cache -> the_index macros
-* read_index()
-* write_index()
-* ie_match_stat() and ie_modified(); how they are different and when to
- use which.
-* index_name_pos()
-* remove_index_entry_at()
-* remove_file_from_index()
-* add_file_to_index()
-* add_index_entry()
-* refresh_index()
-* discard_index()
-* cache_tree_invalidate_path()
-* cache_tree_update()
-
-(JC, Linus)
git_config(get_remote_group, &g);
if (list->nr == prev_nr) {
struct remote *remote = remote_get(name);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 0))
return 0;
string_list_append(list, remote->name);
}
return 0;
}
-static int fsck_sha1(const unsigned char *sha1)
-{
- struct object *obj = parse_object(sha1);
- if (!obj) {
- errors_found |= ERROR_OBJECT;
- return error("%s: object corrupt or missing",
- sha1_to_hex(sha1));
- }
- obj->flags |= HAS_OBJ;
- return fsck_obj(obj);
-}
-
static int fsck_obj_buffer(const unsigned char *sha1, enum object_type type,
unsigned long size, void *buffer, int *eaten)
{
}
}
+static struct object *parse_loose_object(const unsigned char *sha1,
+ const char *path)
+{
+ struct object *obj;
+ void *contents;
+ enum object_type type;
+ unsigned long size;
+ int eaten;
+
+ if (read_loose_object(path, sha1, &type, &size, &contents) < 0)
+ return NULL;
+
+ if (!contents && type != OBJ_BLOB)
+ die("BUG: read_loose_object streamed a non-blob");
+
+ obj = parse_object_buffer(sha1, type, size, contents, &eaten);
+
+ if (!eaten)
+ free(contents);
+ return obj;
+}
+
static int fsck_loose(const unsigned char *sha1, const char *path, void *data)
{
- if (fsck_sha1(sha1))
+ struct object *obj = parse_loose_object(sha1, path);
+
+ if (!obj) {
+ errors_found |= ERROR_OBJECT;
+ error("%s: object corrupt or missing: %s",
+ sha1_to_hex(sha1), path);
+ return 0; /* keep checking other objects */
+ }
+
+ obj->flags = HAS_OBJ;
+ if (fsck_obj(obj))
errors_found |= ERROR_OBJECT;
return 0;
}
flags |= TRANSPORT_RECURSE_SUBMODULES_CHECK;
else if (recurse_submodules == RECURSE_SUBMODULES_ON_DEMAND)
flags |= TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND;
+ else if (recurse_submodules == RECURSE_SUBMODULES_ONLY)
+ flags |= TRANSPORT_RECURSE_SUBMODULES_ONLY;
if (tags)
add_refspec("refs/tags/*");
url = argv[1];
remote = remote_get(name);
- if (remote_is_configured(remote))
+ if (remote_is_configured(remote, 1))
die(_("remote %s already exists."), name);
strbuf_addf(&buf2, "refs/heads/test:refs/remotes/%s/test", name);
rename.remote_branches = &remote_branches;
oldremote = remote_get(rename.old);
- if (!remote_is_configured(oldremote))
+ if (!remote_is_configured(oldremote, 1))
die(_("No such remote: %s"), rename.old);
if (!strcmp(rename.old, rename.new) && oldremote->origin != REMOTE_CONFIG)
return migrate_file(oldremote);
newremote = remote_get(rename.new);
- if (remote_is_configured(newremote))
+ if (remote_is_configured(newremote, 1))
die(_("remote %s already exists."), rename.new);
strbuf_addf(&buf, "refs/heads/test:refs/remotes/%s/test", rename.new);
usage_with_options(builtin_remote_rm_usage, options);
remote = remote_get(argv[1]);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote: %s"), argv[1]);
known_remotes.to_delete = remote;
strbuf_addf(&key, "remote.%s.fetch", remotename);
remote = remote_get(remotename);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote '%s'"), remotename);
if (!add_mode && remove_all_fetch_refspecs(remotename, key.buf)) {
remotename = argv[0];
remote = remote_get(remotename);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote '%s'"), remotename);
url_nr = 0;
oldurl = newurl;
remote = remote_get(remotename);
- if (!remote_is_configured(remote))
+ if (!remote_is_configured(remote, 1))
die(_("No such remote '%s'"), remotename);
if (push_mode) {
/* Only loads from .gitmodules, no overlay with .git/config */
gitmodules_config();
- if (prefix) {
- strbuf_addf(&sb, "%s%s", prefix, path);
+ if (prefix && get_super_prefix())
+ die("BUG: cannot have prefix and superprefix");
+ else if (prefix)
+ displaypath = xstrdup(relative_path(path, prefix, &sb));
+ else if (get_super_prefix()) {
+ strbuf_addf(&sb, "%s%s", get_super_prefix(), path);
displaypath = strbuf_detach(&sb, NULL);
} else
displaypath = xstrdup(path);
int i;
struct option module_init_options[] = {
- OPT_STRING(0, "prefix", &prefix,
- N_("path"),
- N_("alternative anchor for relative paths")),
OPT__QUIET(&quiet, N_("Suppress output for initializing a submodule")),
OPT_END()
};
{"relative-path", resolve_relative_path, 0},
{"resolve-relative-url", resolve_relative_url, 0},
{"resolve-relative-url-test", resolve_relative_url_test, 0},
- {"init", module_init, 0},
+ {"init", module_init, SUPPORT_SUPER_PREFIX},
{"remote-branch", resolve_remote_submodule_branch, 0},
{"absorb-git-dirs", absorb_git_dirs, SUPPORT_SUPER_PREFIX},
};
N_("git tag -d <tagname>..."),
N_("git tag -l [-n[<num>]] [--contains <commit>] [--points-at <object>]"
"\n\t\t[--format=<format>] [--[no-]merged [<commit>]] [<pattern>...]"),
- N_("git tag -v <tagname>..."),
+ N_("git tag -v [--format=<format>] <tagname>..."),
NULL
};
}
typedef int (*each_tag_name_fn)(const char *name, const char *ref,
- const unsigned char *sha1);
+ const unsigned char *sha1, const void *cb_data);
-static int for_each_tag_name(const char **argv, each_tag_name_fn fn)
+static int for_each_tag_name(const char **argv, each_tag_name_fn fn,
+ const void *cb_data)
{
const char **p;
char ref[PATH_MAX];
had_error = 1;
continue;
}
- if (fn(*p, ref, sha1))
+ if (fn(*p, ref, sha1, cb_data))
had_error = 1;
}
return had_error;
}
static int delete_tag(const char *name, const char *ref,
- const unsigned char *sha1)
+ const unsigned char *sha1, const void *cb_data)
{
if (delete_ref(ref, sha1, 0))
return 1;
}
static int verify_tag(const char *name, const char *ref,
- const unsigned char *sha1)
+ const unsigned char *sha1, const void *cb_data)
{
- return gpg_verify_tag(sha1, name, GPG_VERIFY_VERBOSE);
+ int flags;
+ const char *fmt_pretty = cb_data;
+ flags = GPG_VERIFY_VERBOSE;
+
+ if (fmt_pretty)
+ flags = GPG_VERIFY_OMIT_STATUS;
+
+ if (gpg_verify_tag(sha1, name, flags))
+ return -1;
+
+ if (fmt_pretty)
+ pretty_print_ref(name, sha1, fmt_pretty);
+
+ return 0;
}
static int do_sign(struct strbuf *buffer)
if (filter.merge_commit)
die(_("--merged and --no-merged option are only allowed with -l"));
if (cmdmode == 'd')
- return for_each_tag_name(argv, delete_tag);
- if (cmdmode == 'v')
- return for_each_tag_name(argv, verify_tag);
+ return for_each_tag_name(argv, delete_tag, NULL);
+ if (cmdmode == 'v') {
+ if (format)
+ verify_ref_format(format);
+ return for_each_tag_name(argv, verify_tag, format);
+ }
if (msg.given || msgfile) {
if (msg.given && msgfile)
#include <signal.h>
#include "parse-options.h"
#include "gpg-interface.h"
+#include "ref-filter.h"
static const char * const verify_tag_usage[] = {
- N_("git verify-tag [-v | --verbose] <tag>..."),
+ N_("git verify-tag [-v | --verbose] [--format=<format>] <tag>..."),
NULL
};
{
int i = 1, verbose = 0, had_error = 0;
unsigned flags = 0;
+ char *fmt_pretty = NULL;
const struct option verify_tag_options[] = {
OPT__VERBOSE(&verbose, N_("print tag contents")),
OPT_BIT(0, "raw", &flags, N_("print raw gpg status output"), GPG_VERIFY_RAW),
+ OPT_STRING(0, "format", &fmt_pretty, N_("format"), N_("format to use for the output")),
OPT_END()
};
if (verbose)
flags |= GPG_VERIFY_VERBOSE;
+ if (fmt_pretty) {
+ verify_ref_format(fmt_pretty);
+ flags |= GPG_VERIFY_OMIT_STATUS;
+ }
+
while (i < argc) {
unsigned char sha1[20];
const char *name = argv[i++];
- if (get_sha1(name, sha1))
+ if (get_sha1(name, sha1)) {
had_error = !!error("tag '%s' not found.", name);
- else if (gpg_verify_tag(sha1, name, flags))
+ continue;
+ }
+
+ if (gpg_verify_tag(sha1, name, flags)) {
had_error = 1;
+ continue;
+ }
+
+ if (fmt_pretty)
+ pretty_print_ref(name, sha1, fmt_pretty);
}
return had_error;
}
extern int index_dir_exists(struct index_state *istate, const char *name, int namelen);
extern void adjust_dirname_case(struct index_state *istate, char *name);
extern struct cache_entry *index_file_exists(struct index_state *istate, const char *name, int namelen, int igncase);
+
+/*
+ * Searches for an entry defined by name and namelen in the given index.
+ * If the return value is non-negative, it is the position of an
+ * exact match. If the return value is negative, the negated value minus 1
+ * is the position where the entry would be inserted.
+ * Example: The current index consists of these files and its stages:
+ *
+ * b#0, d#0, f#1, f#3
+ *
+ * index_name_pos(&index, "a", 1) -> -1
+ * index_name_pos(&index, "b", 1) -> 0
+ * index_name_pos(&index, "c", 1) -> -2
+ * index_name_pos(&index, "d", 1) -> 1
+ * index_name_pos(&index, "e", 1) -> -3
+ * index_name_pos(&index, "f", 1) -> -3
+ * index_name_pos(&index, "g", 1) -> -5
+ */
extern int index_name_pos(const struct index_state *, const char *name, int namelen);
+
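A minimal sketch of how a caller typically interprets the result
(assuming the usual `the_index` instance; `name` and `ce` are
placeholders):

	/* name and ce are placeholders for this sketch */
	int pos = index_name_pos(&the_index, name, strlen(name));
	struct cache_entry *ce = NULL;

	if (pos >= 0)
		ce = the_index.cache[pos];	/* exact match */
	else
		pos = -pos - 1;		/* where the entry would be inserted */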
#define ADD_CACHE_OK_TO_ADD 1 /* Ok to add */
#define ADD_CACHE_OK_TO_REPLACE 2 /* Ok to replace file/directory */
#define ADD_CACHE_SKIP_DFCHECK 4 /* Ok to skip DF conflict checks */
#define ADD_CACHE_KEEP_CACHE_TREE 32 /* Do not invalidate cache-tree */
extern int add_index_entry(struct index_state *, struct cache_entry *ce, int option);
extern void rename_index_entry_at(struct index_state *, int pos, const char *new_name);
+
+/* Remove entry, return true if there are more entries to go. */
extern int remove_index_entry_at(struct index_state *, int pos);
+
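For example, a sketch of a loop that drops entries matching some
hypothetical predicate `should_drop()`:

	int pos = 0;

	while (pos < the_index.cache_nr) {
		/* should_drop() is a placeholder predicate */
		if (!should_drop(the_index.cache[pos])) {
			pos++;
			continue;
		}
		if (!remove_index_entry_at(&the_index, pos))
			break;	/* that was the last entry */
	}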
extern void remove_marked_cache_entries(struct index_state *istate);
extern int remove_file_from_index(struct index_state *, const char *path);
#define ADD_CACHE_VERBOSE 1
#define ADD_CACHE_IGNORE_ERRORS 4
#define ADD_CACHE_IGNORE_REMOVAL 8
#define ADD_CACHE_INTENT 16
+/*
+ * These two are used to add the contents of the file at path
+ * to the index, marking the working tree up-to-date by storing
+ * the cached stat info in the resulting cache entry. A caller
+ * that has already run lstat(2) on the path can call
+ * add_to_index(), and all others can call add_file_to_index();
+ * the latter will do necessary lstat(2) internally before
+ * calling the former.
+ */
extern int add_to_index(struct index_state *, const char *path, struct stat *, int flags);
extern int add_file_to_index(struct index_state *, const char *path, int flags);
+
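A small sketch of the distinction (error handling abbreviated; `path` and
`other_path` are placeholders):

	struct stat st;

	/* caller already has the lstat(2) result: */
	if (lstat(path, &st) < 0)
		return error_errno("unable to stat '%s'", path);
	if (add_to_index(&the_index, path, &st, 0))
		return error("unable to add '%s'", path);

	/* otherwise the simpler helper runs lstat(2) itself: */
	if (add_file_to_index(&the_index, other_path, 0))
		return error("unable to add '%s'", other_path);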
extern struct cache_entry *make_cache_entry(unsigned int mode, const unsigned char *sha1, const char *path, int stage, unsigned int refresh_options);
extern int chmod_index_entry(struct index_state *, struct cache_entry *ce, char flip);
extern int ce_same_name(const struct cache_entry *a, const struct cache_entry *b);
extern void set_object_name_for_intent_to_add_entry(struct cache_entry *ce);
extern int index_name_is_other(const struct index_state *, const char *, int);
-extern void *read_blob_data_from_index(struct index_state *, const char *, unsigned long *);
+extern void *read_blob_data_from_index(const struct index_state *, const char *, unsigned long *);
/* do stat comparison even if CE_VALID is true */
#define CE_MATCH_IGNORE_VALID 01
extern int has_sha1_pack(const unsigned char *sha1);
+/*
+ * Open the loose object at path, check its sha1, and return the contents,
+ * type, and size. If the object is a blob, then "contents" may return NULL,
+ * to allow streaming of large blobs.
+ *
+ * Returns 0 on success, negative on error (details may be written to stderr).
+ */
+int read_loose_object(const char *path,
+ const unsigned char *expected_sha1,
+ enum object_type *type,
+ unsigned long *size,
+ void **contents);
+
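The fsck change earlier in this series shows a caller; in general the
calling convention is roughly as follows (a sketch only; `path` and
`expected_sha1` are placeholders):

	void *contents;
	enum object_type type;
	unsigned long size;

	if (read_loose_object(path, expected_sha1, &type, &size, &contents) < 0)
		return -1;	/* an error was already reported on stderr */

	if (!contents && type == OBJ_BLOB)
		;	/* large blob: verified by streaming, nothing to free */
	else
		free(contents);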
/*
* Return true iff we have an object named sha1, whether local or in
* an alternate object database, and whether packed or loose. This
* It is because of this implicit close() that we created the
* copy of the original.
*
- * Note that the OS can recycle HANDLE (numbers) just like it
- * recycles fd (numbers), so we must update the cached value
- * of "console". You can use GetFileType() to see that
- * handle and _get_osfhandle(fd) may have the same number
- * value, but they refer to different actual files now.
+ * Note that we need to update the cached console handle to the
+ * duplicated one because the dup2() call will implicitly close
+ * the original one.
*
* Note that dup2() when given target := {0,1,2} will also
* call SetStdHandle(), so we don't need to worry about that.
*/
- dup2(new_fd, fd);
if (console == handle)
console = duplicate;
- handle = INVALID_HANDLE_VALUE;
+ dup2(new_fd, fd);
/* Close the temp fd. This explicitly closes "new_handle"
* (because it has been associated with it).
+++ /dev/null
-#include "cache.h"
-#include "blob.h"
-#include "commit.h"
-#include "tree.h"
-
-struct entry {
- unsigned char old_sha1[20];
- unsigned char new_sha1[20];
- int converted;
-};
-
-#define MAXOBJECTS (1000000)
-
-static struct entry *convert[MAXOBJECTS];
-static int nr_convert;
-
-static struct entry * convert_entry(unsigned char *sha1);
-
-static struct entry *insert_new(unsigned char *sha1, int pos)
-{
- struct entry *new = xcalloc(1, sizeof(struct entry));
- hashcpy(new->old_sha1, sha1);
- memmove(convert + pos + 1, convert + pos, (nr_convert - pos) * sizeof(struct entry *));
- convert[pos] = new;
- nr_convert++;
- if (nr_convert == MAXOBJECTS)
- die("you're kidding me - hit maximum object limit");
- return new;
-}
-
-static struct entry *lookup_entry(unsigned char *sha1)
-{
- int low = 0, high = nr_convert;
-
- while (low < high) {
- int next = (low + high) / 2;
- struct entry *n = convert[next];
- int cmp = hashcmp(sha1, n->old_sha1);
- if (!cmp)
- return n;
- if (cmp < 0) {
- high = next;
- continue;
- }
- low = next+1;
- }
- return insert_new(sha1, low);
-}
-
-static void convert_binary_sha1(void *buffer)
-{
- struct entry *entry = convert_entry(buffer);
- hashcpy(buffer, entry->new_sha1);
-}
-
-static void convert_ascii_sha1(void *buffer)
-{
- unsigned char sha1[20];
- struct entry *entry;
-
- if (get_sha1_hex(buffer, sha1))
- die("expected sha1, got '%s'", (char *) buffer);
- entry = convert_entry(sha1);
- memcpy(buffer, sha1_to_hex(entry->new_sha1), 40);
-}
-
-static unsigned int convert_mode(unsigned int mode)
-{
- unsigned int newmode;
-
- newmode = mode & S_IFMT;
- if (S_ISREG(mode))
- newmode |= (mode & 0100) ? 0755 : 0644;
- return newmode;
-}
-
-static int write_subdirectory(void *buffer, unsigned long size, const char *base, int baselen, unsigned char *result_sha1)
-{
- char *new = xmalloc(size);
- unsigned long newlen = 0;
- unsigned long used;
-
- used = 0;
- while (size) {
- int len = 21 + strlen(buffer);
- char *path = strchr(buffer, ' ');
- unsigned char *sha1;
- unsigned int mode;
- char *slash, *origpath;
-
- if (!path || strtoul_ui(buffer, 8, &mode))
- die("bad tree conversion");
- mode = convert_mode(mode);
- path++;
- if (memcmp(path, base, baselen))
- break;
- origpath = path;
- path += baselen;
- slash = strchr(path, '/');
- if (!slash) {
- newlen += sprintf(new + newlen, "%o %s", mode, path);
- new[newlen++] = '\0';
- hashcpy((unsigned char *)new + newlen, (unsigned char *) buffer + len - 20);
- newlen += 20;
-
- used += len;
- size -= len;
- buffer = (char *) buffer + len;
- continue;
- }
-
- newlen += sprintf(new + newlen, "%o %.*s", S_IFDIR, (int)(slash - path), path);
- new[newlen++] = 0;
- sha1 = (unsigned char *)(new + newlen);
- newlen += 20;
-
- len = write_subdirectory(buffer, size, origpath, slash-origpath+1, sha1);
-
- used += len;
- size -= len;
- buffer = (char *) buffer + len;
- }
-
- write_sha1_file(new, newlen, tree_type, result_sha1);
- free(new);
- return used;
-}
-
-static void convert_tree(void *buffer, unsigned long size, unsigned char *result_sha1)
-{
- void *orig_buffer = buffer;
- unsigned long orig_size = size;
-
- while (size) {
- size_t len = 1+strlen(buffer);
-
- convert_binary_sha1((char *) buffer + len);
-
- len += 20;
- if (len > size)
- die("corrupt tree object");
- size -= len;
- buffer = (char *) buffer + len;
- }
-
- write_subdirectory(orig_buffer, orig_size, "", 0, result_sha1);
-}
-
-static unsigned long parse_oldstyle_date(const char *buf)
-{
- char c, *p;
- char buffer[100];
- struct tm tm;
- const char *formats[] = {
- "%c",
- "%a %b %d %T",
- "%Z",
- "%Y",
- " %Y",
- NULL
- };
- /* We only ever did two timezones in the bad old format .. */
- const char *timezones[] = {
- "PDT", "PST", "CEST", NULL
- };
- const char **fmt = formats;
-
- p = buffer;
- while (isspace(c = *buf))
- buf++;
- while ((c = *buf++) != '\n')
- *p++ = c;
- *p++ = 0;
- buf = buffer;
- memset(&tm, 0, sizeof(tm));
- do {
- const char *next = strptime(buf, *fmt, &tm);
- if (next) {
- if (!*next)
- return mktime(&tm);
- buf = next;
- } else {
- const char **p = timezones;
- while (isspace(*buf))
- buf++;
- while (*p) {
- if (!memcmp(buf, *p, strlen(*p))) {
- buf += strlen(*p);
- break;
- }
- p++;
- }
- }
- fmt++;
- } while (*buf && *fmt);
- printf("left: %s\n", buf);
- return mktime(&tm);
-}
-
-static int convert_date_line(char *dst, void **buf, unsigned long *sp)
-{
- unsigned long size = *sp;
- char *line = *buf;
- char *next = strchr(line, '\n');
- char *date = strchr(line, '>');
- int len;
-
- if (!next || !date)
- die("missing or bad author/committer line %s", line);
- next++; date += 2;
-
- *buf = next;
- *sp = size - (next - line);
-
- len = date - line;
- memcpy(dst, line, len);
- dst += len;
-
- /* Is it already in new format? */
- if (isdigit(*date)) {
- int datelen = next - date;
- memcpy(dst, date, datelen);
- return len + datelen;
- }
-
- /*
- * Hacky hacky: one of the sparse old-style commits does not have
- * any date at all, but we can fake it by using the committer date.
- */
- if (*date == '\n' && strchr(next, '>'))
- date = strchr(next, '>')+2;
-
- return len + sprintf(dst, "%lu -0700\n", parse_oldstyle_date(date));
-}
-
-static void convert_date(void *buffer, unsigned long size, unsigned char *result_sha1)
-{
- char *new = xmalloc(size + 100);
- unsigned long newlen = 0;
-
- /* "tree <sha1>\n" */
- memcpy(new + newlen, buffer, 46);
- newlen += 46;
- buffer = (char *) buffer + 46;
- size -= 46;
-
- /* "parent <sha1>\n" */
- while (!memcmp(buffer, "parent ", 7)) {
- memcpy(new + newlen, buffer, 48);
- newlen += 48;
- buffer = (char *) buffer + 48;
- size -= 48;
- }
-
- /* "author xyz <xyz> date" */
- newlen += convert_date_line(new + newlen, &buffer, &size);
- /* "committer xyz <xyz> date" */
- newlen += convert_date_line(new + newlen, &buffer, &size);
-
- /* Rest */
- memcpy(new + newlen, buffer, size);
- newlen += size;
-
- write_sha1_file(new, newlen, commit_type, result_sha1);
- free(new);
-}
-
-static void convert_commit(void *buffer, unsigned long size, unsigned char *result_sha1)
-{
- void *orig_buffer = buffer;
- unsigned long orig_size = size;
-
- if (memcmp(buffer, "tree ", 5))
- die("Bad commit '%s'", (char *) buffer);
- convert_ascii_sha1((char *) buffer + 5);
- buffer = (char *) buffer + 46; /* "tree " + "hex sha1" + "\n" */
- while (!memcmp(buffer, "parent ", 7)) {
- convert_ascii_sha1((char *) buffer + 7);
- buffer = (char *) buffer + 48;
- }
- convert_date(orig_buffer, orig_size, result_sha1);
-}
-
-static struct entry * convert_entry(unsigned char *sha1)
-{
- struct entry *entry = lookup_entry(sha1);
- enum object_type type;
- void *buffer, *data;
- unsigned long size;
-
- if (entry->converted)
- return entry;
- data = read_sha1_file(sha1, &type, &size);
- if (!data)
- die("unable to read object %s", sha1_to_hex(sha1));
-
- buffer = xmalloc(size);
- memcpy(buffer, data, size);
-
- if (type == OBJ_BLOB) {
- write_sha1_file(buffer, size, blob_type, entry->new_sha1);
- } else if (type == OBJ_TREE)
- convert_tree(buffer, size, entry->new_sha1);
- else if (type == OBJ_COMMIT)
- convert_commit(buffer, size, entry->new_sha1);
- else
- die("unknown object type %d in %s", type, sha1_to_hex(sha1));
- entry->converted = 1;
- free(buffer);
- free(data);
- return entry;
-}
-
-int main(int argc, char **argv)
-{
- unsigned char sha1[20];
- struct entry *entry;
-
- setup_git_directory();
-
- if (argc != 2)
- usage("git-convert-objects <sha1>");
- if (get_sha1(argv[1], sha1))
- die("Not a valid object name %s", argv[1]);
-
- entry = convert_entry(sha1);
- printf("new sha1: %s\n", sha1_to_hex(entry->new_sha1));
- return 0;
-}
+++ /dev/null
-git-convert-objects(1)
-======================
-
-NAME
-----
-git-convert-objects - Converts old-style git repository
-
-
-SYNOPSIS
---------
-[verse]
-'git-convert-objects'
-
-DESCRIPTION
------------
-Converts old-style git repository to the latest format
-
-
-Author
-------
-Written by Linus Torvalds <torvalds@osdl.org>
-
-Documentation
---------------
-Documentation by David Greaves, Junio C Hamano and the git-list <git@vger.kernel.org>.
-
-GIT
----
-Part of the linkgit:git[7] suite
static int diff_suppress_blank_empty;
static int diff_use_color_default = -1;
static int diff_context_default = 3;
+static int diff_interhunk_context_default;
static const char *diff_word_regex_cfg;
static const char *external_diff_cmd_cfg;
static const char *diff_order_file_cfg;
return -1;
return 0;
}
+ if (!strcmp(var, "diff.interhunkcontext")) {
+ diff_interhunk_context_default = git_config_int(var, value);
+ if (diff_interhunk_context_default < 0)
+ return -1;
+ return 0;
+ }
if (!strcmp(var, "diff.renames")) {
diff_detect_rename_default = git_config_rename(var, value);
return 0;
options->rename_limit = -1;
options->dirstat_permille = diff_dirstat_permille_default;
options->context = diff_context_default;
+ options->interhunkcontext = diff_interhunk_context_default;
options->ws_error_highlight = ws_error_highlight_default;
DIFF_OPT_SET(options, RENAME_EMPTY);
/* Returns the highest-priority location to look for git programs. */
const char *git_exec_path(void)
{
- const char *env;
+ static char *cached_exec_path;
if (argv_exec_path)
return argv_exec_path;
- env = getenv(EXEC_PATH_ENVIRONMENT);
- if (env && *env) {
- return env;
+ if (!cached_exec_path) {
+ const char *env = getenv(EXEC_PATH_ENVIRONMENT);
+ if (env && *env)
+ cached_exec_path = xstrdup(env);
+ else
+ cached_exec_path = system_path(GIT_EXEC_PATH);
}
-
- return system_path(GIT_EXEC_PATH);
+ return cached_exec_path;
}
static void add_path(struct strbuf *out, const char *path)
# This file is licensed under the GPL v2, or a later version
# at the discretion of Linus Torvalds.
-USAGE='<start> <url> [<end>]'
-LONG_USAGE='Summarizes the changes between two commits to the standard output,
-and includes the given URL in the generated summary.'
SUBDIRECTORY_OK='Yes'
OPTIONS_KEEPDASHDASH=
OPTIONS_STUCKLONG=
or: $dashless [--quiet] update [--init] [--remote] [-N|--no-fetch] [-f|--force] [--checkout|--merge|--rebase] [--[no-]recommend-shallow] [--reference <repository>] [--recursive] [--] [<path>...]
or: $dashless [--quiet] summary [--cached|--files] [--summary-limit <n>] [commit] [--] [<path>...]
or: $dashless [--quiet] foreach [--recursive] <command>
- or: $dashless [--quiet] sync [--recursive] [--] [<path>...]"
+ or: $dashless [--quiet] sync [--recursive] [--] [<path>...]
+ or: $dashless [--quiet] absorbgitdirs [--] [<path>...]"
OPTIONS_SPEC=
SUBDIRECTORY_OK=Yes
. git-sh-setup
shift
done
- git ${wt_prefix:+-C "$wt_prefix"} submodule--helper init ${GIT_QUIET:+--quiet} ${prefix:+--prefix "$prefix"} "$@"
+ git ${wt_prefix:+-C "$wt_prefix"} ${prefix:+--super-prefix "$prefix"} submodule--helper init ${GIT_QUIET:+--quiet} "$@"
}
#
#ifndef GPG_INTERFACE_H
#define GPG_INTERFACE_H
-#define GPG_VERIFY_VERBOSE 1
-#define GPG_VERIFY_RAW 2
+#define GPG_VERIFY_VERBOSE 1
+#define GPG_VERIFY_RAW 2
+#define GPG_VERIFY_OMIT_STATUS 4
struct signature_check {
char *payload;
return index_name_stage_pos(istate, name, namelen, 0);
}
-/* Remove entry, return true if there are more entries to go.. */
int remove_index_entry_at(struct index_state *istate, int pos)
{
struct cache_entry *ce = istate->cache[pos];
return 1;
}
-void *read_blob_data_from_index(struct index_state *istate, const char *path, unsigned long *size)
+void *read_blob_data_from_index(const struct index_state *istate,
+ const char *path, unsigned long *size)
{
int pos, len;
unsigned long sz;
return ref;
}
-static int filter_ref_kind(struct ref_filter *filter, const char *refname)
+static int ref_kind_from_refname(const char *refname)
{
unsigned int i;
{ "refs/tags/", FILTER_REFS_TAGS}
};
- if (filter->kind == FILTER_REFS_BRANCHES ||
- filter->kind == FILTER_REFS_REMOTES ||
- filter->kind == FILTER_REFS_TAGS)
- return filter->kind;
- else if (!strcmp(refname, "HEAD"))
+ if (!strcmp(refname, "HEAD"))
return FILTER_REFS_DETACHED_HEAD;
for (i = 0; i < ARRAY_SIZE(ref_kind); i++) {
return FILTER_REFS_OTHERS;
}
+static int filter_ref_kind(struct ref_filter *filter, const char *refname)
+{
+ if (filter->kind == FILTER_REFS_BRANCHES ||
+ filter->kind == FILTER_REFS_REMOTES ||
+ filter->kind == FILTER_REFS_TAGS)
+ return filter->kind;
+ return ref_kind_from_refname(refname);
+}
+
/*
* A call-back given to for_each_ref(). Filter refs and keep them for
* later object processing.
putchar('\n');
}
+void pretty_print_ref(const char *name, const unsigned char *sha1,
+ const char *format)
+{
+ struct ref_array_item *ref_item;
+ ref_item = new_ref_array_item(name, sha1, 0);
+ ref_item->kind = ref_kind_from_refname(name);
+ show_ref_array_item(ref_item, format, 0);
+ free_array_item(ref_item);
+}
+
/* If no sorting option is given, use refname to sort as default */
struct ref_sorting *ref_default_sorting(void)
{
/* Function to parse --merged and --no-merged options */
int parse_opt_merge_filter(const struct option *opt, const char *arg, int unset);
+/*
+ * Print a single ref, outside of any ref-filter. Note that the
+ * name must be a fully qualified refname.
+ */
+void pretty_print_ref(const char *name, const unsigned char *sha1,
+ const char *format);
+
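A minimal sketch of a call (the refname and format string are only
examples):

	unsigned char sha1[20];

	/* "refs/tags/v1.0" and the format string are example values */
	if (!get_sha1("refs/tags/v1.0", sha1))
		pretty_print_ref("refs/tags/v1.0", sha1,
				 "%(refname:short) %(objectname:short)");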
#endif /* REF_FILTER_H */
if (!f)
return;
+ remote->configured_in_repo = 1;
remote->origin = REMOTE_REMOTES;
while (strbuf_getline(&buf, f) != EOF) {
const char *v;
return;
}
+ remote->configured_in_repo = 1;
remote->origin = REMOTE_BRANCHES;
/*
}
remote = make_remote(name, namelen);
remote->origin = REMOTE_CONFIG;
+ if (current_config_scope() == CONFIG_SCOPE_REPO)
+ remote->configured_in_repo = 1;
if (!strcmp(subkey, "mirror"))
remote->mirror = git_config_bool(key, value);
else if (!strcmp(subkey, "skipdefaultupdate"))
return remote_get_1(name, pushremote_for_branch);
}
-int remote_is_configured(struct remote *remote)
+int remote_is_configured(struct remote *remote, int in_repo)
{
- return remote && remote->origin;
+ if (!remote)
+ return 0;
+ if (in_repo)
+ return remote->configured_in_repo;
+ return !!remote->origin;
}
int for_each_remote(each_remote_fn fn, void *priv)
struct hashmap_entry ent; /* must be first */
const char *name;
- int origin;
+ int origin, configured_in_repo;
const char *foreign_vcs;
struct remote *remote_get(const char *name);
struct remote *pushremote_get(const char *name);
-int remote_is_configured(struct remote *remote);
+int remote_is_configured(struct remote *remote, int in_repo);
typedef int each_remote_fn(struct remote *remote, void *priv);
int for_each_remote(each_remote_fn fn, void *priv);
#include "argv-array.h"
#include "quote.h"
#include "trailer.h"
+#include "log-tree.h"
+#include "wt-status.h"
#define GIT_REFLOG_ACTION "GIT_REFLOG_ACTION"
static GIT_PATH_FUNC(git_path_head_file, "sequencer/head")
static GIT_PATH_FUNC(git_path_abort_safety_file, "sequencer/abort-safety")
+static GIT_PATH_FUNC(rebase_path, "rebase-merge")
+/*
+ * The file containing rebase commands, comments, and empty lines.
+ * This file is created by "git rebase -i" then edited by the user. As
+ * the lines are processed, they are removed from the front of this
+ * file and written to the tail of 'done'.
+ */
+static GIT_PATH_FUNC(rebase_path_todo, "rebase-merge/git-rebase-todo")
+/*
+ * The rebase command lines that have already been processed. A line
+ * is moved here when it is first handled, before any associated user
+ * actions.
+ */
+static GIT_PATH_FUNC(rebase_path_done, "rebase-merge/done")
+/*
+ * The file to keep track of how many commands were already processed (e.g.
+ * for the prompt).
+ */
+static GIT_PATH_FUNC(rebase_path_msgnum, "rebase-merge/msgnum");
+/*
+ * The file to keep track of how many commands are to be processed in total
+ * (e.g. for the prompt).
+ */
+static GIT_PATH_FUNC(rebase_path_msgtotal, "rebase-merge/end");
+/*
+ * The commit message that is planned to be used for any changes that
+ * need to be committed following a user interaction.
+ */
+static GIT_PATH_FUNC(rebase_path_message, "rebase-merge/message")
+/*
+ * The file into which is accumulated the suggested commit message for
+ * squash/fixup commands. When the first of a series of squash/fixups
+ * is seen, the file is created and the commit message from the
+ * previous commit and from the first squash/fixup commit are written
+ * to it. The commit message for each subsequent squash/fixup commit
+ * is appended to the file as it is processed.
+ *
+ * The first line of the file is of the form
+ * # This is a combination of $count commits.
+ * where $count is the number of commits whose messages have been
+ * written to the file so far (including the initial "pick" commit).
+ * Each time that a commit message is processed, this line is read and
+ * updated. It is deleted just before the combined commit is made.
+ */
+static GIT_PATH_FUNC(rebase_path_squash_msg, "rebase-merge/message-squash")
+/*
+ * If the current series of squash/fixups has not yet included a squash
+ * command, then this file exists and holds the commit message of the
+ * original "pick" commit. (If the series ends without a "squash"
+ * command, then this can be used as the commit message of the combined
+ * commit without opening the editor.)
+ */
+static GIT_PATH_FUNC(rebase_path_fixup_msg, "rebase-merge/message-fixup")
/*
* A script to set the GIT_AUTHOR_NAME, GIT_AUTHOR_EMAIL, and
* GIT_AUTHOR_DATE that will be used for the commit that is currently
* being rebased.
*/
static GIT_PATH_FUNC(rebase_path_author_script, "rebase-merge/author-script")
+/*
+ * When an "edit" rebase command is being processed, the SHA1 of the
+ * commit to be edited is recorded in this file. When "git rebase
+ * --continue" is executed, if there are any staged changes then they
+ * will be amended to the HEAD commit, but only provided the HEAD
+ * commit is still the commit to be edited. When any other rebase
+ * command is processed, this file is deleted.
+ */
+static GIT_PATH_FUNC(rebase_path_amend, "rebase-merge/amend")
+/*
+ * When we stop at a given patch via the "edit" command, this file contains
+ * the abbreviated commit name of the corresponding patch.
+ */
+static GIT_PATH_FUNC(rebase_path_stopped_sha, "rebase-merge/stopped-sha")
+/*
+ * For the post-rewrite hook, we make a list of rewritten commits and
+ * their new sha1s. The rewritten-pending list keeps the sha1s of
+ * commits that have been processed, but not committed yet,
+ * e.g. because they are waiting for a 'squash' command.
+ */
+static GIT_PATH_FUNC(rebase_path_rewritten_list, "rebase-merge/rewritten-list")
+static GIT_PATH_FUNC(rebase_path_rewritten_pending,
+ "rebase-merge/rewritten-pending")
/*
* The following files are written by git-rebase just after parsing the
* command-line (and are only consumed, not modified, by the sequencer).
*/
static GIT_PATH_FUNC(rebase_path_gpg_sign_opt, "rebase-merge/gpg_sign_opt")
+static GIT_PATH_FUNC(rebase_path_orig_head, "rebase-merge/orig-head")
+static GIT_PATH_FUNC(rebase_path_verbose, "rebase-merge/verbose")
+static GIT_PATH_FUNC(rebase_path_head_name, "rebase-merge/head-name")
+static GIT_PATH_FUNC(rebase_path_onto, "rebase-merge/onto")
+static GIT_PATH_FUNC(rebase_path_autostash, "rebase-merge/autostash")
+static GIT_PATH_FUNC(rebase_path_strategy, "rebase-merge/strategy")
+static GIT_PATH_FUNC(rebase_path_strategy_opts, "rebase-merge/strategy_opts")
-/* We will introduce the 'interactive rebase' mode later */
static inline int is_rebase_i(const struct replay_opts *opts)
{
- return 0;
+ return opts->action == REPLAY_INTERACTIVE_REBASE;
}
static const char *get_dir(const struct replay_opts *opts)
{
+ if (is_rebase_i(opts))
+ return rebase_path();
return git_path_seq_dir();
}
static const char *get_todo_path(const struct replay_opts *opts)
{
+ if (is_rebase_i(opts))
+ return rebase_path_todo();
return git_path_todo_file();
}
static const char *action_name(const struct replay_opts *opts)
{
- return opts->action == REPLAY_REVERT ? N_("revert") : N_("cherry-pick");
+ switch (opts->action) {
+ case REPLAY_REVERT:
+ return N_("revert");
+ case REPLAY_PICK:
+ return N_("cherry-pick");
+ case REPLAY_INTERACTIVE_REBASE:
+ return N_("rebase -i");
+ }
+ die(_("Unknown action: %d"), opts->action);
}
struct commit_message {
o.ancestor = base ? base_label : "(empty tree)";
o.branch1 = "HEAD";
o.branch2 = next ? next_label : "(empty tree)";
+ if (is_rebase_i(opts))
+ o.buffer_output = 2;
head_tree = parse_tree_indirect(head);
next_tree = next ? next->tree : empty_tree();
clean = merge_trees(&o,
head_tree,
next_tree, base_tree, &result);
+ if (is_rebase_i(opts) && clean <= 0)
+ fputs(o.obuf.buf, stdout);
strbuf_release(&o.obuf);
if (clean < 0)
return clean;
if (active_cache_changed &&
write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
- /* TRANSLATORS: %s will be "revert" or "cherry-pick" */
+ /* TRANSLATORS: %s will be "revert", "cherry-pick" or
+ * "rebase -i".
+ */
return error(_("%s: Unable to write new index file"),
_(action_name(opts)));
rollback_lock_file(&index_lock);
return !hashcmp(active_cache_tree->sha1, head_commit->tree->object.oid.hash);
}
+static int write_author_script(const char *message)
+{
+ struct strbuf buf = STRBUF_INIT;
+ const char *eol;
+ int res;
+
+ for (;;)
+ if (!*message || starts_with(message, "\n")) {
+missing_author:
+ /* Missing 'author' line? */
+ unlink(rebase_path_author_script());
+ return 0;
+ } else if (skip_prefix(message, "author ", &message))
+ break;
+ else if ((eol = strchr(message, '\n')))
+ message = eol + 1;
+ else
+ goto missing_author;
+
+ strbuf_addstr(&buf, "GIT_AUTHOR_NAME='");
+ while (*message && *message != '\n' && *message != '\r')
+ if (skip_prefix(message, " <", &message))
+ break;
+ else if (*message != '\'')
+ strbuf_addch(&buf, *(message++));
+ else
+ strbuf_addf(&buf, "'\\\\%c'", *(message++));
+ strbuf_addstr(&buf, "'\nGIT_AUTHOR_EMAIL='");
+ while (*message && *message != '\n' && *message != '\r')
+ if (skip_prefix(message, "> ", &message))
+ break;
+ else if (*message != '\'')
+ strbuf_addch(&buf, *(message++));
+ else
+ strbuf_addf(&buf, "'\\\\%c'", *(message++));
+ strbuf_addstr(&buf, "'\nGIT_AUTHOR_DATE='@");
+ while (*message && *message != '\n' && *message != '\r')
+ if (*message != '\'')
+ strbuf_addch(&buf, *(message++));
+ else
+ strbuf_addf(&buf, "'\\\\%c'", *(message++));
+ res = write_message(buf.buf, buf.len, rebase_path_author_script(), 1);
+ strbuf_release(&buf);
+ return res;
+}
+
/*
- * Read the author-script file into an environment block, ready for use in
- * run_command(), that can be free()d afterwards.
+ * Read a list of environment variable assignments (such as the author-script
+ * file) into an environment block. Returns -1 on error, 0 otherwise.
*/
-static char **read_author_script(void)
+static int read_env_script(struct argv_array *env)
{
struct strbuf script = STRBUF_INIT;
int i, count = 0;
- char *p, *p2, **env;
- size_t env_size;
+ char *p, *p2;
if (strbuf_read_file(&script, rebase_path_author_script(), 256) <= 0)
- return NULL;
+ return -1;
for (p = script.buf; *p; p++)
if (skip_prefix(p, "'\\\\''", (const char **)&p2))
count++;
}
- env_size = (count + 1) * sizeof(*env);
- strbuf_grow(&script, env_size);
- memmove(script.buf + env_size, script.buf, script.len);
- p = script.buf + env_size;
- env = (char **)strbuf_detach(&script, NULL);
-
- for (i = 0; i < count; i++) {
- env[i] = p;
+ for (i = 0, p = script.buf; i < count; i++) {
+ argv_array_push(env, p);
p += strlen(p) + 1;
}
- env[count] = NULL;
- return env;
+ return 0;
}
static const char staged_changes_advice[] =
int allow_empty, int edit, int amend,
int cleanup_commit_message)
{
- char **env = NULL;
- struct argv_array array;
- int rc;
+ struct child_process cmd = CHILD_PROCESS_INIT;
const char *value;
+ cmd.git_cmd = 1;
+
if (is_rebase_i(opts)) {
- env = read_author_script();
- if (!env) {
+ if (!edit) {
+ cmd.stdout_to_stderr = 1;
+ cmd.err = -1;
+ }
+
+ if (read_env_script(&cmd.env_array)) {
const char *gpg_opt = gpg_sign_opt_quoted(opts);
return error(_(staged_changes_advice),
}
}
- argv_array_init(&array);
- argv_array_push(&array, "commit");
- argv_array_push(&array, "-n");
+ argv_array_push(&cmd.args, "commit");
+ argv_array_push(&cmd.args, "-n");
if (amend)
- argv_array_push(&array, "--amend");
+ argv_array_push(&cmd.args, "--amend");
if (opts->gpg_sign)
- argv_array_pushf(&array, "-S%s", opts->gpg_sign);
+ argv_array_pushf(&cmd.args, "-S%s", opts->gpg_sign);
if (opts->signoff)
- argv_array_push(&array, "-s");
+ argv_array_push(&cmd.args, "-s");
if (defmsg)
- argv_array_pushl(&array, "-F", defmsg, NULL);
+ argv_array_pushl(&cmd.args, "-F", defmsg, NULL);
if (cleanup_commit_message)
- argv_array_push(&array, "--cleanup=strip");
+ argv_array_push(&cmd.args, "--cleanup=strip");
if (edit)
- argv_array_push(&array, "-e");
+ argv_array_push(&cmd.args, "-e");
else if (!cleanup_commit_message &&
!opts->signoff && !opts->record_origin &&
git_config_get_value("commit.cleanup", &value))
- argv_array_push(&array, "--cleanup=verbatim");
+ argv_array_push(&cmd.args, "--cleanup=verbatim");
if (allow_empty)
- argv_array_push(&array, "--allow-empty");
+ argv_array_push(&cmd.args, "--allow-empty");
if (opts->allow_empty_message)
- argv_array_push(&array, "--allow-empty-message");
+ argv_array_push(&cmd.args, "--allow-empty-message");
- rc = run_command_v_opt_cd_env(array.argv, RUN_GIT_CMD, NULL,
- (const char *const *)env);
- argv_array_clear(&array);
- free(env);
+ if (cmd.err == -1) {
+ /* hide stderr on success */
+ struct strbuf buf = STRBUF_INIT;
+ int rc = pipe_command(&cmd,
+ NULL, 0,
+ /* stdout is already redirected */
+ NULL, 0,
+ &buf, 0);
+ if (rc)
+ fputs(buf.buf, stderr);
+ strbuf_release(&buf);
+ return rc;
+ }
- return rc;
+ return run_command(&cmd);
}
static int is_original_commit_empty(struct commit *commit)
return 1;
}
+/*
+ * Note that ordering matters in this enum. Not only must it match the mapping
+ * below, it is also divided into several sections that matter. When adding
+ * new commands, make sure you add it in the right section.
+ */
enum todo_command {
+ /* commands that handle commits */
TODO_PICK = 0,
- TODO_REVERT
+ TODO_REVERT,
+ TODO_EDIT,
+ TODO_REWORD,
+ TODO_FIXUP,
+ TODO_SQUASH,
+ /* commands that do something else than handling a single commit */
+ TODO_EXEC,
+ /* commands that do nothing but are counted for reporting progress */
+ TODO_NOOP,
+ TODO_DROP,
+ /* comments (not counted for reporting progress) */
+ TODO_COMMENT
};
-static const char *todo_command_strings[] = {
- "pick",
- "revert"
+static struct {
+ char c;
+ const char *str;
+} todo_command_info[] = {
+ { 'p', "pick" },
+ { 0, "revert" },
+ { 'e', "edit" },
+ { 'r', "reword" },
+ { 'f', "fixup" },
+ { 's', "squash" },
+ { 'x', "exec" },
+ { 0, "noop" },
+ { 'd', "drop" },
+ { 0, NULL }
};
static const char *command_to_string(const enum todo_command command)
{
- if ((size_t)command < ARRAY_SIZE(todo_command_strings))
- return todo_command_strings[command];
+ if (command < TODO_COMMENT)
+ return todo_command_info[command].str;
die("Unknown command: %d", command);
}
+static int is_noop(const enum todo_command command)
+{
+ return TODO_NOOP <= command;
+}
+
+static int is_fixup(enum todo_command command)
+{
+ return command == TODO_FIXUP || command == TODO_SQUASH;
+}
+
+static int update_squash_messages(enum todo_command command,
+ struct commit *commit, struct replay_opts *opts)
+{
+ struct strbuf buf = STRBUF_INIT;
+ int count, res;
+ const char *message, *body;
+
+ if (file_exists(rebase_path_squash_msg())) {
+ struct strbuf header = STRBUF_INIT;
+ char *eol, *p;
+
+ if (strbuf_read_file(&buf, rebase_path_squash_msg(), 2048) <= 0)
+ return error(_("could not read '%s'"),
+ rebase_path_squash_msg());
+
+ p = buf.buf + 1;
+ eol = strchrnul(buf.buf, '\n');
+ if (buf.buf[0] != comment_line_char ||
+ (p += strcspn(p, "0123456789\n")) == eol)
+ return error(_("unexpected 1st line of squash message:"
+ "\n\n\t%.*s"),
+ (int)(eol - buf.buf), buf.buf);
+ count = strtol(p, NULL, 10);
+
+ if (count < 1)
+ return error(_("invalid 1st line of squash message:\n"
+ "\n\t%.*s"),
+ (int)(eol - buf.buf), buf.buf);
+
+ strbuf_addf(&header, "%c ", comment_line_char);
+ strbuf_addf(&header,
+ _("This is a combination of %d commits."), ++count);
+ strbuf_splice(&buf, 0, eol - buf.buf, header.buf, header.len);
+ strbuf_release(&header);
+ } else {
+ unsigned char head[20];
+ struct commit *head_commit;
+ const char *head_message, *body;
+
+ if (get_sha1("HEAD", head))
+ return error(_("need a HEAD to fixup"));
+ if (!(head_commit = lookup_commit_reference(head)))
+ return error(_("could not read HEAD"));
+ if (!(head_message = get_commit_buffer(head_commit, NULL)))
+ return error(_("could not read HEAD's commit message"));
+
+ find_commit_subject(head_message, &body);
+ if (write_message(body, strlen(body),
+ rebase_path_fixup_msg(), 0)) {
+ unuse_commit_buffer(head_commit, head_message);
+ return error(_("cannot write '%s'"),
+ rebase_path_fixup_msg());
+ }
+
+ count = 2;
+ strbuf_addf(&buf, "%c ", comment_line_char);
+ strbuf_addf(&buf, _("This is a combination of %d commits."),
+ count);
+ strbuf_addf(&buf, "\n%c ", comment_line_char);
+ strbuf_addstr(&buf, _("This is the 1st commit message:"));
+ strbuf_addstr(&buf, "\n\n");
+ strbuf_addstr(&buf, body);
+
+ unuse_commit_buffer(head_commit, head_message);
+ }
+
+ if (!(message = get_commit_buffer(commit, NULL)))
+ return error(_("could not read commit message of %s"),
+ oid_to_hex(&commit->object.oid));
+ find_commit_subject(message, &body);
+
+ if (command == TODO_SQUASH) {
+ unlink(rebase_path_fixup_msg());
+ strbuf_addf(&buf, "\n%c ", comment_line_char);
+ strbuf_addf(&buf, _("This is the commit message #%d:"), count);
+ strbuf_addstr(&buf, "\n\n");
+ strbuf_addstr(&buf, body);
+ } else if (command == TODO_FIXUP) {
+ strbuf_addf(&buf, "\n%c ", comment_line_char);
+ strbuf_addf(&buf, _("The commit message #%d will be skipped:"),
+ count);
+ strbuf_addstr(&buf, "\n\n");
+ strbuf_add_commented_lines(&buf, body, strlen(body));
+ } else
+ return error(_("unknown command: %d"), command);
+ unuse_commit_buffer(commit, message);
+
+ res = write_message(buf.buf, buf.len, rebase_path_squash_msg(), 0);
+ strbuf_release(&buf);
+ return res;
+}
+
+static void flush_rewritten_pending(void) {
+ struct strbuf buf = STRBUF_INIT;
+ unsigned char newsha1[20];
+ FILE *out;
+
+ if (strbuf_read_file(&buf, rebase_path_rewritten_pending(), 82) > 0 &&
+ !get_sha1("HEAD", newsha1) &&
+ (out = fopen(rebase_path_rewritten_list(), "a"))) {
+ char *bol = buf.buf, *eol;
+
+ while (*bol) {
+ eol = strchrnul(bol, '\n');
+ fprintf(out, "%.*s %s\n", (int)(eol - bol),
+ bol, sha1_to_hex(newsha1));
+ if (!*eol)
+ break;
+ bol = eol + 1;
+ }
+ fclose(out);
+ unlink(rebase_path_rewritten_pending());
+ }
+ strbuf_release(&buf);
+}
+
+static void record_in_rewritten(struct object_id *oid,
+ enum todo_command next_command) {
+ FILE *out = fopen(rebase_path_rewritten_pending(), "a");
+
+ if (!out)
+ return;
+
+ fprintf(out, "%s\n", oid_to_hex(oid));
+ fclose(out);
+
+ if (!is_fixup(next_command))
+ flush_rewritten_pending();
+}
static int do_pick_commit(enum todo_command command, struct commit *commit,
- struct replay_opts *opts)
+ struct replay_opts *opts, int final_fixup)
{
+ int edit = opts->edit, cleanup_commit_message = 0;
+ const char *msg_file = edit ? NULL : git_path_merge_msg();
unsigned char head[20];
struct commit *base, *next, *parent;
const char *base_label, *next_label;
struct commit_message msg = { NULL, NULL, NULL, NULL };
struct strbuf msgbuf = STRBUF_INIT;
- int res, unborn = 0, allow;
+ int res, unborn = 0, amend = 0, allow = 0;
if (opts->no_commit) {
/*
}
discard_cache();
- if (!commit->parents) {
+ if (!commit->parents)
parent = NULL;
- }
else if (commit->parents->next) {
/* Reverting or cherry-picking a merge commit */
int cnt;
else
parent = commit->parents->item;
- if (opts->allow_ff &&
- ((parent && !hashcmp(parent->object.oid.hash, head)) ||
- (!parent && unborn)))
- return fast_forward_to(commit->object.oid.hash, head, unborn, opts);
+ if (get_message(commit, &msg) != 0)
+ return error(_("cannot get commit message for %s"),
+ oid_to_hex(&commit->object.oid));
+ if (opts->allow_ff && !is_fixup(command) &&
+ ((parent && !hashcmp(parent->object.oid.hash, head)) ||
+ (!parent && unborn))) {
+ if (is_rebase_i(opts))
+ write_author_script(msg.message);
+ res = fast_forward_to(commit->object.oid.hash, head, unborn,
+ opts);
+ if (res || command != TODO_REWORD)
+ goto leave;
+ edit = amend = 1;
+ msg_file = NULL;
+ goto fast_forward_edit;
+ }
if (parent && parse_commit(parent) < 0)
/* TRANSLATORS: The first %s will be a "todo" command like
"revert" or "pick", the second %s a SHA1. */
command_to_string(command),
oid_to_hex(&parent->object.oid));
- if (get_message(commit, &msg) != 0)
- return error(_("cannot get commit message for %s"),
- oid_to_hex(&commit->object.oid));
-
/*
* "commit" is an existing commit. We would want to apply
* the difference it introduces since its first parent "prev"
next = commit;
next_label = msg.label;
- /*
- * Append the commit log message to msgbuf; it starts
- * after the tree, parent, author, committer
- * information followed by "\n\n".
- */
- p = strstr(msg.message, "\n\n");
- if (p)
- strbuf_addstr(&msgbuf, skip_blank_lines(p + 2));
+ /* Append the commit log message to msgbuf. */
+ if (find_commit_subject(msg.message, &p))
+ strbuf_addstr(&msgbuf, p);
if (opts->record_origin) {
if (!has_conforming_footer(&msgbuf, NULL, 0))
}
}
- if (!opts->strategy || !strcmp(opts->strategy, "recursive") || command == TODO_REVERT) {
+ if (command == TODO_REWORD)
+ edit = 1;
+ else if (is_fixup(command)) {
+ if (update_squash_messages(command, commit, opts))
+ return -1;
+ amend = 1;
+ if (!final_fixup)
+ msg_file = rebase_path_squash_msg();
+ else if (file_exists(rebase_path_fixup_msg())) {
+ cleanup_commit_message = 1;
+ msg_file = rebase_path_fixup_msg();
+ } else {
+ const char *dest = git_path("SQUASH_MSG");
+ unlink(dest);
+ if (copy_file(dest, rebase_path_squash_msg(), 0666))
+ return error(_("could not rename '%s' to '%s'"),
+ rebase_path_squash_msg(), dest);
+ unlink(git_path("MERGE_MSG"));
+ msg_file = dest;
+ edit = 1;
+ }
+ }
+
+ if (is_rebase_i(opts) && write_author_script(msg.message) < 0)
+ res = -1;
+ else if (!opts->strategy || !strcmp(opts->strategy, "recursive") || command == TODO_REVERT) {
res = do_recursive_merge(base, next, base_label, next_label,
head, &msgbuf, opts);
if (res < 0)
goto leave;
}
if (!opts->no_commit)
- res = run_git_commit(opts->edit ? NULL : git_path_merge_msg(),
- opts, allow, opts->edit, 0, 0);
+fast_forward_edit:
+ res = run_git_commit(msg_file, opts, allow, edit, amend,
+ cleanup_commit_message);
+
+ if (!res && final_fixup) {
+ unlink(rebase_path_fixup_msg());
+ unlink(rebase_path_squash_msg());
+ }
leave:
free_message(commit, &msg);
struct strbuf buf;
struct todo_item *items;
int nr, alloc, current;
+ int done_nr, total_nr;
};
#define TODO_LIST_INIT { STRBUF_INIT }
/* left-trim */
bol += strspn(bol, " \t");
- for (i = 0; i < ARRAY_SIZE(todo_command_strings); i++)
- if (skip_prefix(bol, todo_command_strings[i], &bol)) {
+ if (bol == eol || *bol == '\r' || *bol == comment_line_char) {
+ item->command = TODO_COMMENT;
+ item->commit = NULL;
+ item->arg = bol;
+ item->arg_len = eol - bol;
+ return 0;
+ }
+
+ for (i = 0; i < TODO_COMMENT; i++)
+ if (skip_prefix(bol, todo_command_info[i].str, &bol)) {
+ item->command = i;
+ break;
+ } else if (bol[1] == ' ' && *bol == todo_command_info[i].c) {
+ bol++;
item->command = i;
break;
}
- if (i >= ARRAY_SIZE(todo_command_strings))
+ if (i >= TODO_COMMENT)
return -1;
+ if (item->command == TODO_NOOP) {
+ item->commit = NULL;
+ item->arg = bol;
+ item->arg_len = eol - bol;
+ return 0;
+ }
+
/* Eat up extra spaces/ tabs before object name */
padding = strspn(bol, " \t");
if (!padding)
return -1;
bol += padding;
+ if (item->command == TODO_EXEC) {
+ item->arg = bol;
+ item->arg_len = (int)(eol - bol);
+ return 0;
+ }
+
end_of_object_name = (char *) bol + strcspn(bol, " \t\n");
saved = *end_of_object_name;
*end_of_object_name = '\0';
{
struct todo_item *item;
char *p = buf, *next_p;
- int i, res = 0;
+ int i, res = 0, fixup_okay = file_exists(rebase_path_done());
for (i = 1; *p; i++, p = next_p) {
char *eol = strchrnul(p, '\n');
if (parse_insn_line(item, p, eol)) {
res = error(_("invalid line %d: %.*s"),
i, (int)(eol - p), p);
- item->command = -1;
+ item->command = TODO_NOOP;
}
+
+ if (fixup_okay)
+ ; /* do nothing */
+ else if (is_fixup(item->command))
+ return error(_("cannot '%s' without a previous commit"),
+ command_to_string(item->command));
+ else if (!is_noop(item->command))
+ fixup_okay = 1;
}
- if (!todo_list->nr)
- return error(_("no commits parsed."));
+
return res;
}
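For reference, the format parsed above is the interactive-rebase todo list: a full command name or (per todo_command_info[i].c) a single-letter abbreviation followed by a space, an object name for the pick-like commands, and free-form text for "exec"; empty lines and lines starting with comment_line_char become TODO_COMMENT. An illustrative sample, assuming the conventional '#' comment character and the usual abbreviations (e.g. "f" for "fixup"):

    pick deadbee Add the feature
    f fa1afe1 fixup! Add the feature
    exec make test
    # this comment and the empty line below are kept as TODO_COMMENT

    noop

Note that parse_insn_buffer() above rejects a fixup/squash before any non-noop command unless the "done" file already exists, since there would be no previous commit to fold the change into.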
+static int count_commands(struct todo_list *todo_list)
+{
+ int count = 0, i;
+
+ for (i = 0; i < todo_list->nr; i++)
+ if (todo_list->items[i].command != TODO_COMMENT)
+ count++;
+
+ return count;
+}
+
static int read_populate_todo(struct todo_list *todo_list,
struct replay_opts *opts)
{
close(fd);
res = parse_insn_buffer(todo_list->buf.buf, todo_list);
- if (res)
+ if (res) {
+ if (is_rebase_i(opts))
+ return error(_("please fix this using "
+ "'git rebase --edit-todo'."));
return error(_("unusable instruction sheet: '%s'"), todo_file);
+ }
+
+ if (!todo_list->nr &&
+ (!is_rebase_i(opts) || !file_exists(rebase_path_done())))
+ return error(_("no commits parsed."));
if (!is_rebase_i(opts)) {
enum todo_command valid =
return error(_("cannot revert during a cherry-pick."));
}
+ if (is_rebase_i(opts)) {
+ struct todo_list done = TODO_LIST_INIT;
+ FILE *f = fopen(rebase_path_msgtotal(), "w");
+
+ if (strbuf_read_file(&done.buf, rebase_path_done(), 0) > 0 &&
+ !parse_insn_buffer(done.buf.buf, &done))
+ todo_list->done_nr = count_commands(&done);
+ else
+ todo_list->done_nr = 0;
+
+ todo_list->total_nr = todo_list->done_nr
+ + count_commands(todo_list);
+ todo_list_release(&done);
+
+ if (f) {
+ fprintf(f, "%d\n", todo_list->total_nr);
+ fclose(f);
+ }
+ }
+
return 0;
}
return 0;
}
+static void read_strategy_opts(struct replay_opts *opts, struct strbuf *buf)
+{
+ int i;
+
+ strbuf_reset(buf);
+ if (!read_oneliner(buf, rebase_path_strategy(), 0))
+ return;
+ opts->strategy = strbuf_detach(buf, NULL);
+ if (!read_oneliner(buf, rebase_path_strategy_opts(), 0))
+ return;
+
+ opts->xopts_nr = split_cmdline(buf->buf, (const char ***)&opts->xopts);
+ for (i = 0; i < opts->xopts_nr; i++) {
+ const char *arg = opts->xopts[i];
+
+ skip_prefix(arg, "--", &arg);
+ opts->xopts[i] = xstrdup(arg);
+ }
+}
+
static int read_populate_opts(struct replay_opts *opts)
{
if (is_rebase_i(opts)) {
opts->gpg_sign = xstrdup(buf.buf + 2);
}
}
+
+ if (file_exists(rebase_path_verbose()))
+ opts->verbose = 1;
+
+ read_strategy_opts(opts, &buf);
strbuf_release(&buf);
return 0;
{
enum todo_command command = opts->action == REPLAY_PICK ?
TODO_PICK : TODO_REVERT;
- const char *command_string = todo_command_strings[command];
+ const char *command_string = todo_command_info[command].str;
struct commit *commit;
if (prepare_revs(opts))
error(_("a cherry-pick or revert is already in progress"));
advise(_("try \"git cherry-pick (--continue | --quit | --abort)\""));
return -1;
- }
- else if (mkdir(git_path_seq_dir(), 0777) < 0)
+ } else if (mkdir(git_path_seq_dir(), 0777) < 0)
return error_errno(_("could not create sequencer directory '%s'"),
git_path_seq_dir());
return 0;
const char *todo_path = get_todo_path(opts);
int next = todo_list->current, offset, fd;
+ /*
+ * rebase -i writes "git-rebase-todo" without the currently executing
+ * command, appending it to "done" instead.
+ */
+ if (is_rebase_i(opts))
+ next++;
+
fd = hold_lock_file_for_update(&todo_lock, todo_path, 0);
if (fd < 0)
return error_errno(_("could not lock '%s'"), todo_path);
return error_errno(_("could not write to '%s'"), todo_path);
if (commit_lock_file(&todo_lock) < 0)
return error(_("failed to finalize '%s'."), todo_path);
+
+ if (is_rebase_i(opts)) {
+ const char *done_path = rebase_path_done();
+ int fd = open(done_path, O_CREAT | O_WRONLY | O_APPEND, 0666);
+ int prev_offset = !next ? 0 :
+ todo_list->items[next - 1].offset_in_buf;
+
+ if (fd >= 0 && offset > prev_offset &&
+ write_in_full(fd, todo_list->buf.buf + prev_offset,
+ offset - prev_offset) < 0) {
+ close(fd);
+ return error_errno(_("could not write to '%s'"),
+ done_path);
+ }
+ if (fd >= 0)
+ close(fd);
+ }
return 0;
}
return res;
}
+static int make_patch(struct commit *commit, struct replay_opts *opts)
+{
+ struct strbuf buf = STRBUF_INIT;
+ struct rev_info log_tree_opt;
+ const char *subject, *p;
+ int res = 0;
+
+ p = short_commit_name(commit);
+ if (write_message(p, strlen(p), rebase_path_stopped_sha(), 1) < 0)
+ return -1;
+
+ strbuf_addf(&buf, "%s/patch", get_dir(opts));
+ memset(&log_tree_opt, 0, sizeof(log_tree_opt));
+ init_revisions(&log_tree_opt, NULL);
+ log_tree_opt.abbrev = 0;
+ log_tree_opt.diff = 1;
+ log_tree_opt.diffopt.output_format = DIFF_FORMAT_PATCH;
+ log_tree_opt.disable_stdin = 1;
+ log_tree_opt.no_commit_id = 1;
+ log_tree_opt.diffopt.file = fopen(buf.buf, "w");
+ log_tree_opt.diffopt.use_color = GIT_COLOR_NEVER;
+ if (!log_tree_opt.diffopt.file)
+ res |= error_errno(_("could not open '%s'"), buf.buf);
+ else {
+ res |= log_tree_commit(&log_tree_opt, commit);
+ fclose(log_tree_opt.diffopt.file);
+ }
+ strbuf_reset(&buf);
+
+ strbuf_addf(&buf, "%s/message", get_dir(opts));
+ if (!file_exists(buf.buf)) {
+ const char *commit_buffer = get_commit_buffer(commit, NULL);
+ find_commit_subject(commit_buffer, &subject);
+ res |= write_message(subject, strlen(subject), buf.buf, 1);
+ unuse_commit_buffer(commit, commit_buffer);
+ }
+ strbuf_release(&buf);
+
+ return res;
+}
+
+static int intend_to_amend(void)
+{
+ unsigned char head[20];
+ char *p;
+
+ if (get_sha1("HEAD", head))
+ return error(_("cannot read HEAD"));
+
+ p = sha1_to_hex(head);
+ return write_message(p, strlen(p), rebase_path_amend(), 1);
+}
+
+static int error_with_patch(struct commit *commit,
+ const char *subject, int subject_len,
+ struct replay_opts *opts, int exit_code, int to_amend)
+{
+ if (make_patch(commit, opts))
+ return -1;
+
+ if (to_amend) {
+ if (intend_to_amend())
+ return -1;
+
+ fprintf(stderr, "You can amend the commit now, with\n"
+ "\n"
+ " git commit --amend %s\n"
+ "\n"
+ "Once you are satisfied with your changes, run\n"
+ "\n"
+ " git rebase --continue\n", gpg_sign_opt_quoted(opts));
+ } else if (exit_code)
+ fprintf(stderr, "Could not apply %s... %.*s\n",
+ short_commit_name(commit), subject_len, subject);
+
+ return exit_code;
+}
+
+static int error_failed_squash(struct commit *commit,
+ struct replay_opts *opts, int subject_len, const char *subject)
+{
+ if (rename(rebase_path_squash_msg(), rebase_path_message()))
+ return error(_("could not rename '%s' to '%s'"),
+ rebase_path_squash_msg(), rebase_path_message());
+ unlink(rebase_path_fixup_msg());
+ unlink(git_path("MERGE_MSG"));
+ if (copy_file(git_path("MERGE_MSG"), rebase_path_message(), 0666))
+ return error(_("could not copy '%s' to '%s'"),
+ rebase_path_message(), git_path("MERGE_MSG"));
+ return error_with_patch(commit, subject, subject_len, opts, 1, 0);
+}
+
+static int do_exec(const char *command_line)
+{
+ const char *child_argv[] = { NULL, NULL };
+ int dirty, status;
+
+ fprintf(stderr, "Executing: %s\n", command_line);
+ child_argv[0] = command_line;
+ status = run_command_v_opt(child_argv, RUN_USING_SHELL);
+
+ /* force re-reading of the cache */
+ if (discard_cache() < 0 || read_cache() < 0)
+ return error(_("could not read index"));
+
+ dirty = require_clean_work_tree("rebase", NULL, 1, 1);
+
+ if (status) {
+ warning(_("execution failed: %s\n%s"
+ "You can fix the problem, and then run\n"
+ "\n"
+ " git rebase --continue\n"
+ "\n"),
+ command_line,
+ dirty ? N_("and made changes to the index and/or the "
+ "working tree\n") : "");
+ if (status == 127)
+ /* command not found */
+ status = 1;
+ } else if (dirty) {
+ warning(_("execution succeeded: %s\nbut "
+ "left changes to the index and/or the working tree\n"
+ "Commit or stash your changes, and then run\n"
+ "\n"
+ " git rebase --continue\n"
+ "\n"), command_line);
+ status = 1;
+ }
+
+ return status;
+}
+
+static int is_final_fixup(struct todo_list *todo_list)
+{
+ int i = todo_list->current;
+
+ if (!is_fixup(todo_list->items[i].command))
+ return 0;
+
+ while (++i < todo_list->nr)
+ if (is_fixup(todo_list->items[i].command))
+ return 0;
+ else if (!is_noop(todo_list->items[i].command))
+ break;
+ return 1;
+}
+
+static enum todo_command peek_command(struct todo_list *todo_list, int offset)
+{
+ int i;
+
+ for (i = todo_list->current + offset; i < todo_list->nr; i++)
+ if (!is_noop(todo_list->items[i].command))
+ return todo_list->items[i].command;
+
+ return -1;
+}
+
+static int apply_autostash(struct replay_opts *opts)
+{
+ struct strbuf stash_sha1 = STRBUF_INIT;
+ struct child_process child = CHILD_PROCESS_INIT;
+ int ret = 0;
+
+ if (!read_oneliner(&stash_sha1, rebase_path_autostash(), 1)) {
+ strbuf_release(&stash_sha1);
+ return 0;
+ }
+ strbuf_trim(&stash_sha1);
+
+ child.git_cmd = 1;
+ argv_array_push(&child.args, "stash");
+ argv_array_push(&child.args, "apply");
+ argv_array_push(&child.args, stash_sha1.buf);
+ if (!run_command(&child))
+ printf(_("Applied autostash."));
+ else {
+ struct child_process store = CHILD_PROCESS_INIT;
+
+ store.git_cmd = 1;
+ argv_array_push(&store.args, "stash");
+ argv_array_push(&store.args, "store");
+ argv_array_push(&store.args, "-m");
+ argv_array_push(&store.args, "autostash");
+ argv_array_push(&store.args, "-q");
+ argv_array_push(&store.args, stash_sha1.buf);
+ if (run_command(&store))
+ ret = error(_("cannot store %s"), stash_sha1.buf);
+ else
+ printf(_("Applying autostash resulted in conflicts.\n"
+ "Your changes are safe in the stash.\n"
+ "You can run \"git stash pop\" or"
+ " \"git stash drop\" at any time.\n"));
+ }
+
+ strbuf_release(&stash_sha1);
+ return ret;
+}
+
+static const char *reflog_message(struct replay_opts *opts,
+ const char *sub_action, const char *fmt, ...)
+{
+ va_list ap;
+ static struct strbuf buf = STRBUF_INIT;
+
+ va_start(ap, fmt);
+ strbuf_reset(&buf);
+ strbuf_addstr(&buf, action_name(opts));
+ if (sub_action)
+ strbuf_addf(&buf, " (%s)", sub_action);
+ if (fmt) {
+ strbuf_addstr(&buf, ": ");
+ strbuf_vaddf(&buf, fmt, ap);
+ }
+ va_end(ap);
+
+ return buf.buf;
+}
+
static int pick_commits(struct todo_list *todo_list, struct replay_opts *opts)
{
- int res;
+ int res = 0;
setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
if (opts->allow_ff)
struct todo_item *item = todo_list->items + todo_list->current;
if (save_todo(todo_list, opts))
return -1;
- res = do_pick_commit(item->command, item->commit, opts);
+ if (is_rebase_i(opts)) {
+ if (item->command != TODO_COMMENT) {
+ FILE *f = fopen(rebase_path_msgnum(), "w");
+
+ todo_list->done_nr++;
+
+ if (f) {
+ fprintf(f, "%d\n", todo_list->done_nr);
+ fclose(f);
+ }
+ fprintf(stderr, "Rebasing (%d/%d)%s",
+ todo_list->done_nr,
+ todo_list->total_nr,
+ opts->verbose ? "\n" : "\r");
+ }
+ unlink(rebase_path_message());
+ unlink(rebase_path_author_script());
+ unlink(rebase_path_stopped_sha());
+ unlink(rebase_path_amend());
+ }
+ if (item->command <= TODO_SQUASH) {
+ if (is_rebase_i(opts))
+ setenv("GIT_REFLOG_ACTION", reflog_message(opts,
+ command_to_string(item->command), NULL),
+ 1);
+ res = do_pick_commit(item->command, item->commit,
+ opts, is_final_fixup(todo_list));
+ if (is_rebase_i(opts) && res < 0) {
+ /* Reschedule */
+ todo_list->current--;
+ if (save_todo(todo_list, opts))
+ return -1;
+ }
+ if (item->command == TODO_EDIT) {
+ struct commit *commit = item->commit;
+ if (!res)
+ warning(_("stopped at %s... %.*s"),
+ short_commit_name(commit),
+ item->arg_len, item->arg);
+ return error_with_patch(commit,
+ item->arg, item->arg_len, opts, res,
+ !res);
+ }
+ if (is_rebase_i(opts) && !res)
+ record_in_rewritten(&item->commit->object.oid,
+ peek_command(todo_list, 1));
+ if (res && is_fixup(item->command)) {
+ if (res == 1)
+ intend_to_amend();
+ return error_failed_squash(item->commit, opts,
+ item->arg_len, item->arg);
+ } else if (res && is_rebase_i(opts))
+ return res | error_with_patch(item->commit,
+ item->arg, item->arg_len, opts, res,
+ item->command == TODO_REWORD);
+ } else if (item->command == TODO_EXEC) {
+ char *end_of_arg = (char *)(item->arg + item->arg_len);
+ int saved = *end_of_arg;
+
+ *end_of_arg = '\0';
+ res = do_exec(item->arg);
+ *end_of_arg = saved;
+ } else if (!is_noop(item->command))
+ return error(_("unknown command %d"), item->command);
+
todo_list->current++;
if (res)
return res;
}
+ if (is_rebase_i(opts)) {
+ struct strbuf head_ref = STRBUF_INIT, buf = STRBUF_INIT;
+ struct stat st;
+
+ /* Stopped in the middle, as planned? */
+ if (todo_list->current < todo_list->nr)
+ return 0;
+
+ if (read_oneliner(&head_ref, rebase_path_head_name(), 0) &&
+ starts_with(head_ref.buf, "refs/")) {
+ const char *msg;
+ unsigned char head[20], orig[20];
+ int res;
+
+ if (get_sha1("HEAD", head)) {
+ res = error(_("cannot read HEAD"));
+cleanup_head_ref:
+ strbuf_release(&head_ref);
+ strbuf_release(&buf);
+ return res;
+ }
+ if (!read_oneliner(&buf, rebase_path_orig_head(), 0) ||
+ get_sha1_hex(buf.buf, orig)) {
+ res = error(_("could not read orig-head"));
+ goto cleanup_head_ref;
+ }
+ if (!read_oneliner(&buf, rebase_path_onto(), 0)) {
+ res = error(_("could not read 'onto'"));
+ goto cleanup_head_ref;
+ }
+ msg = reflog_message(opts, "finish", "%s onto %s",
+ head_ref.buf, buf.buf);
+ if (update_ref(msg, head_ref.buf, head, orig,
+ REF_NODEREF, UPDATE_REFS_MSG_ON_ERR)) {
+ res = error(_("could not update %s"),
+ head_ref.buf);
+ goto cleanup_head_ref;
+ }
+ msg = reflog_message(opts, "finish", "returning to %s",
+ head_ref.buf);
+ if (create_symref("HEAD", head_ref.buf, msg)) {
+ res = error(_("could not update HEAD to %s"),
+ head_ref.buf);
+ goto cleanup_head_ref;
+ }
+ strbuf_reset(&buf);
+ }
+
+ if (opts->verbose) {
+ struct rev_info log_tree_opt;
+ struct object_id orig, head;
+
+ memset(&log_tree_opt, 0, sizeof(log_tree_opt));
+ init_revisions(&log_tree_opt, NULL);
+ log_tree_opt.diff = 1;
+ log_tree_opt.diffopt.output_format =
+ DIFF_FORMAT_DIFFSTAT;
+ log_tree_opt.disable_stdin = 1;
+
+ if (read_oneliner(&buf, rebase_path_orig_head(), 0) &&
+ !get_sha1(buf.buf, orig.hash) &&
+ !get_sha1("HEAD", head.hash)) {
+ diff_tree_sha1(orig.hash, head.hash,
+ "", &log_tree_opt.diffopt);
+ log_tree_diff_flush(&log_tree_opt);
+ }
+ }
+ flush_rewritten_pending();
+ if (!stat(rebase_path_rewritten_list(), &st) &&
+ st.st_size > 0) {
+ struct child_process child = CHILD_PROCESS_INIT;
+ const char *post_rewrite_hook =
+ find_hook("post-rewrite");
+
+ child.in = open(rebase_path_rewritten_list(), O_RDONLY);
+ child.git_cmd = 1;
+ argv_array_push(&child.args, "notes");
+ argv_array_push(&child.args, "copy");
+ argv_array_push(&child.args, "--for-rewrite=rebase");
+ /* we don't care if this copying failed */
+ run_command(&child);
+
+ if (post_rewrite_hook) {
+ struct child_process hook = CHILD_PROCESS_INIT;
+
+ hook.in = open(rebase_path_rewritten_list(),
+ O_RDONLY);
+ hook.stdout_to_stderr = 1;
+ argv_array_push(&hook.args, post_rewrite_hook);
+ argv_array_push(&hook.args, "rebase");
+ /* we don't care if this hook failed */
+ run_command(&hook);
+ }
+ }
+ apply_autostash(opts);
+
+ fprintf(stderr, "Successfully rebased and updated %s.\n",
+ head_ref.buf);
+
+ strbuf_release(&buf);
+ strbuf_release(&head_ref);
+ }
+
/*
* Sequence of picks finished successfully; cleanup by
* removing the .git/sequencer directory
return run_command_v_opt(argv, RUN_GIT_CMD);
}
+static int commit_staged_changes(struct replay_opts *opts)
+{
+ int amend = 0;
+
+ if (has_unstaged_changes(1))
+ return error(_("cannot rebase: You have unstaged changes."));
+ if (!has_uncommitted_changes(0)) {
+ const char *cherry_pick_head = git_path("CHERRY_PICK_HEAD");
+
+ if (file_exists(cherry_pick_head) && unlink(cherry_pick_head))
+ return error(_("could not remove CHERRY_PICK_HEAD"));
+ return 0;
+ }
+
+ if (file_exists(rebase_path_amend())) {
+ struct strbuf rev = STRBUF_INIT;
+ unsigned char head[20], to_amend[20];
+
+ if (get_sha1("HEAD", head))
+ return error(_("cannot amend non-existing commit"));
+ if (!read_oneliner(&rev, rebase_path_amend(), 0))
+ return error(_("invalid file: '%s'"), rebase_path_amend());
+ if (get_sha1_hex(rev.buf, to_amend))
+ return error(_("invalid contents: '%s'"),
+ rebase_path_amend());
+ if (hashcmp(head, to_amend))
+ return error(_("\nYou have uncommitted changes in your "
+ "working tree. Please, commit them\n"
+ "first and then run 'git rebase "
+ "--continue' again."));
+
+ strbuf_release(&rev);
+ amend = 1;
+ }
+
+ if (run_git_commit(rebase_path_message(), opts, 1, 1, amend, 0))
+ return error(_("could not commit staged changes."));
+ unlink(rebase_path_amend());
+ return 0;
+}
+
int sequencer_continue(struct replay_opts *opts)
{
struct todo_list todo_list = TODO_LIST_INIT;
if (read_and_refresh_cache(opts))
return -1;
- if (!file_exists(get_todo_path(opts)))
+ if (is_rebase_i(opts)) {
+ if (commit_staged_changes(opts))
+ return -1;
+ } else if (!file_exists(get_todo_path(opts)))
return continue_single_pick();
if (read_populate_opts(opts))
return -1;
if ((res = read_populate_todo(&todo_list, opts)))
goto release_todo_list;
- /* Verify that the conflict has been resolved */
- if (file_exists(git_path_cherry_pick_head()) ||
- file_exists(git_path_revert_head())) {
- res = continue_single_pick();
- if (res)
+ if (!is_rebase_i(opts)) {
+ /* Verify that the conflict has been resolved */
+ if (file_exists(git_path_cherry_pick_head()) ||
+ file_exists(git_path_revert_head())) {
+ res = continue_single_pick();
+ if (res)
+ goto release_todo_list;
+ }
+ if (index_differs_from("HEAD", 0, 0)) {
+ res = error_dirty_index(opts);
goto release_todo_list;
+ }
+ todo_list.current++;
+ } else if (file_exists(rebase_path_stopped_sha())) {
+ struct strbuf buf = STRBUF_INIT;
+ struct object_id oid;
+
+ if (read_oneliner(&buf, rebase_path_stopped_sha(), 1) &&
+ !get_sha1_committish(buf.buf, oid.hash))
+ record_in_rewritten(&oid, peek_command(&todo_list, 0));
+ strbuf_release(&buf);
}
- if (index_differs_from("HEAD", 0, 0)) {
- res = error_dirty_index(opts);
- goto release_todo_list;
- }
- todo_list.current++;
+
res = pick_commits(&todo_list, opts);
release_todo_list:
todo_list_release(&todo_list);
{
setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
return do_pick_commit(opts->action == REPLAY_PICK ?
- TODO_PICK : TODO_REVERT, cmit, opts);
+ TODO_PICK : TODO_REVERT, cmit, opts, 0);
}
int sequencer_pick_revisions(struct replay_opts *opts)
enum replay_action {
REPLAY_REVERT,
- REPLAY_PICK
+ REPLAY_PICK,
+ REPLAY_INTERACTIVE_REBASE
};
struct replay_opts {
int allow_empty;
int allow_empty_message;
int keep_redundant_commits;
+ int verbose;
int mainline;
return fd;
}
-static int stat_sha1_file(const unsigned char *sha1, struct stat *st)
+/*
+ * Find "sha1" as a loose object in the local repository or in an alternate.
+ * Returns 0 on success, negative on failure.
+ *
+ * The "path" out-parameter will give the path of the object we found (if any).
+ * Note that it may point to static storage and is only valid until another
+ * call to sha1_file_name(), etc.
+ */
+static int stat_sha1_file(const unsigned char *sha1, struct stat *st,
+ const char **path)
{
struct alternate_object_database *alt;
- if (!lstat(sha1_file_name(sha1), st))
+ *path = sha1_file_name(sha1);
+ if (!lstat(*path, st))
return 0;
prepare_alt_odb();
errno = ENOENT;
for (alt = alt_odb_list; alt; alt = alt->next) {
- const char *path = alt_sha1_path(alt, sha1);
- if (!lstat(path, st))
+ *path = alt_sha1_path(alt, sha1);
+ if (!lstat(*path, st))
return 0;
}
return -1;
}
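The "path" out-parameter caveat above matters to callers: it may point to the static buffer used by sha1_file_name() or alt_sha1_path(), so it is only valid until the next lookup. A minimal sketch of the safe pattern, illustrative only (the xstrdup() step is an assumption, not part of this patch):

    const char *path;
    struct stat st;
    char *saved;

    if (stat_sha1_file(sha1, &st, &path) < 0)
        return -1;
    /* copy before any further sha1_file_name()/alt_sha1_path() call */
    saved = xstrdup(path);

Handing the path back is also what lets a caller report the exact file that was examined, as the "loose object ... is corrupt" message below does.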
-static int open_sha1_file(const unsigned char *sha1)
+/*
+ * Like stat_sha1_file(), but actually open the object and return the
+ * descriptor. See the caveats on the "path" parameter above.
+ */
+static int open_sha1_file(const unsigned char *sha1, const char **path)
{
int fd;
struct alternate_object_database *alt;
int most_interesting_errno;
- fd = git_open(sha1_file_name(sha1));
+ *path = sha1_file_name(sha1);
+ fd = git_open(*path);
if (fd >= 0)
return fd;
most_interesting_errno = errno;
prepare_alt_odb();
for (alt = alt_odb_list; alt; alt = alt->next) {
- const char *path = alt_sha1_path(alt, sha1);
- fd = git_open(path);
+ *path = alt_sha1_path(alt, sha1);
+ fd = git_open(*path);
if (fd >= 0)
return fd;
if (most_interesting_errno == ENOENT)
return -1;
}
-void *map_sha1_file(const unsigned char *sha1, unsigned long *size)
+/*
+ * Map the loose object at "path" if it is not NULL, or the path found by
+ * searching for a loose object named "sha1".
+ */
+static void *map_sha1_file_1(const char *path,
+ const unsigned char *sha1,
+ unsigned long *size)
{
void *map;
int fd;
- fd = open_sha1_file(sha1);
+ if (path)
+ fd = git_open(path);
+ else
+ fd = open_sha1_file(sha1, &path);
map = NULL;
if (fd >= 0) {
struct stat st;
*size = xsize_t(st.st_size);
if (!*size) {
/* mmap() is forbidden on empty files */
- error("object file %s is empty", sha1_file_name(sha1));
+ error("object file %s is empty", path);
return NULL;
}
map = xmmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
return map;
}
+void *map_sha1_file(const unsigned char *sha1, unsigned long *size)
+{
+ return map_sha1_file_1(NULL, sha1, size);
+}
+
unsigned long unpack_object_header_buffer(const unsigned char *buf,
unsigned long len, enum object_type *type, unsigned long *sizep)
{
void clear_delta_base_cache(void)
{
- struct hashmap_iter iter;
- struct delta_base_cache_entry *entry;
- for (entry = hashmap_iter_first(&delta_base_cache, &iter);
- entry;
- entry = hashmap_iter_next(&iter)) {
+ struct list_head *lru, *tmp;
+ list_for_each_safe(lru, tmp, &delta_base_cache_lru) {
+ struct delta_base_cache_entry *entry =
+ list_entry(lru, struct delta_base_cache_entry, lru);
release_delta_base_cache(entry);
}
}
* object even exists.
*/
if (!oi->typep && !oi->typename && !oi->sizep) {
+ const char *path;
struct stat st;
- if (stat_sha1_file(sha1, &st) < 0)
+ if (stat_sha1_file(sha1, &st, &path) < 0)
return -1;
if (oi->disk_sizep)
*oi->disk_sizep = st.st_size;
{
void *data;
const struct packed_git *p;
+ const char *path;
+ struct stat st;
const unsigned char *repl = lookup_replace_object_extended(sha1, flag);
errno = 0;
die("replacement %s not found for %s",
sha1_to_hex(repl), sha1_to_hex(sha1));
- if (has_loose_object(repl)) {
- const char *path = sha1_file_name(sha1);
-
+ if (!stat_sha1_file(repl, &st, &path))
die("loose object %s (stored in %s) is corrupt",
sha1_to_hex(repl), path);
- }
if ((p = has_packed_and_bad(repl)) != NULL)
die("packed object %s (stored in %s) is corrupt",
}
return r ? r : pack_errors;
}
+
+static int check_stream_sha1(git_zstream *stream,
+ const char *hdr,
+ unsigned long size,
+ const char *path,
+ const unsigned char *expected_sha1)
+{
+ git_SHA_CTX c;
+ unsigned char real_sha1[GIT_SHA1_RAWSZ];
+ unsigned char buf[4096];
+ unsigned long total_read;
+ int status = Z_OK;
+
+ git_SHA1_Init(&c);
+ git_SHA1_Update(&c, hdr, stream->total_out);
+
+ /*
+ * We already read some bytes into hdr, but the ones up to the NUL
+ * do not count against the object's content size.
+ */
+ total_read = stream->total_out - strlen(hdr) - 1;
+
+ /*
+ * This size comparison must be "<=" to read the final zlib packets;
+ * see the comment in unpack_sha1_rest for details.
+ */
+ while (total_read <= size &&
+ (status == Z_OK || status == Z_BUF_ERROR)) {
+ stream->next_out = buf;
+ stream->avail_out = sizeof(buf);
+ if (size - total_read < stream->avail_out)
+ stream->avail_out = size - total_read;
+ status = git_inflate(stream, Z_FINISH);
+ git_SHA1_Update(&c, buf, stream->next_out - buf);
+ total_read += stream->next_out - buf;
+ }
+ git_inflate_end(stream);
+
+ if (status != Z_STREAM_END) {
+ error("corrupt loose object '%s'", sha1_to_hex(expected_sha1));
+ return -1;
+ }
+ if (stream->avail_in) {
+ error("garbage at end of loose object '%s'",
+ sha1_to_hex(expected_sha1));
+ return -1;
+ }
+
+ git_SHA1_Final(real_sha1, &c);
+ if (hashcmp(expected_sha1, real_sha1)) {
+ error("sha1 mismatch for %s (expected %s)", path,
+ sha1_to_hex(expected_sha1));
+ return -1;
+ }
+
+ return 0;
+}
+
+int read_loose_object(const char *path,
+ const unsigned char *expected_sha1,
+ enum object_type *type,
+ unsigned long *size,
+ void **contents)
+{
+ int ret = -1;
+ int fd = -1;
+ void *map = NULL;
+ unsigned long mapsize;
+ git_zstream stream;
+ char hdr[32];
+
+ *contents = NULL;
+
+ map = map_sha1_file_1(path, NULL, &mapsize);
+ if (!map) {
+ error_errno("unable to mmap %s", path);
+ goto out;
+ }
+
+ if (unpack_sha1_header(&stream, map, mapsize, hdr, sizeof(hdr)) < 0) {
+ error("unable to unpack header of %s", path);
+ goto out;
+ }
+
+ *type = parse_sha1_header(hdr, size);
+ if (*type < 0) {
+ error("unable to parse header of %s", path);
+ git_inflate_end(&stream);
+ goto out;
+ }
+
+ if (*type == OBJ_BLOB) {
+ if (check_stream_sha1(&stream, hdr, *size, path, expected_sha1) < 0)
+ goto out;
+ } else {
+ *contents = unpack_sha1_rest(&stream, hdr, *size, expected_sha1);
+ if (!*contents) {
+ error("unable to unpack contents of %s", path);
+ git_inflate_end(&stream);
+ goto out;
+ }
+ if (check_sha1_signature(expected_sha1, *contents,
+ *size, typename(*type))) {
+ error("sha1 mismatch for %s (expected %s)", path,
+ sha1_to_hex(expected_sha1));
+ free(*contents);
+ goto out;
+ }
+ }
+
+ ret = 0; /* everything checks out */
+
+out:
+ if (map)
+ munmap(map, mapsize);
+ if (fd >= 0)
+ close(fd);
+ return ret;
+}
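A minimal usage sketch for the new helper; the caller shown (an fsck-style verification step) and its error handling are assumptions for illustration, only the read_loose_object() signature comes from the patch:

    enum object_type type;
    unsigned long size;
    void *contents;

    if (read_loose_object(path, expected_sha1, &type, &size, &contents) < 0)
        return error("%s: corrupt or unparsable loose object", path);
    if (contents) {
        /* non-blob: the whole object was inflated into memory */
        free(contents);
    }
    /* blobs are streamed and hashed in 4kB chunks; contents stays NULL */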
return RECURSE_SUBMODULES_ON_DEMAND;
else if (!strcmp(arg, "check"))
return RECURSE_SUBMODULES_CHECK;
+ else if (!strcmp(arg, "only"))
+ return RECURSE_SUBMODULES_ONLY;
else if (die_on_error)
die("bad %s argument: %s", opt, arg);
else
struct sha1_array;
enum {
+ RECURSE_SUBMODULES_ONLY = -5,
RECURSE_SUBMODULES_CHECK = -4,
RECURSE_SUBMODULES_ERROR = -3,
RECURSE_SUBMODULES_NONE = -2,
git checkout -b "replace_sub1_with_directory" "add_sub1" &&
git submodule update &&
- (
- cd sub1 &&
- git checkout modifications
- ) &&
+ git -C sub1 checkout modifications &&
git rm --cached sub1 &&
rm sub1/.git* &&
git config -f .gitmodules --remove-section "submodule.sub1" &&
test_expect_success 'setup: helpers for corruption tests' '
sha1_file() {
- echo "$*" | sed "s#..#.git/objects/&/#"
+ remainder=${1#??} &&
+ firsttwo=${1%$remainder} &&
+ echo ".git/objects/$firsttwo/$remainder"
} &&
remove_object() {
- file=$(sha1_file "$*") &&
- test -e "$file" &&
- rm -f "$file"
+ rm "$(sha1_file "$1")"
}
'
)
'
-remove_loose_object () {
- sha1="$(git rev-parse "$1")" &&
- remainder=${sha1#??} &&
- firsttwo=${sha1%$remainder} &&
- rm .git/objects/$firsttwo/$remainder
-}
-
test_expect_success 'fsck --name-objects' '
rm -rf name-objects &&
git init name-objects &&
test_commit julius caesar.t &&
test_commit augustus &&
test_commit caesar &&
- remove_loose_object $(git rev-parse julius:caesar.t) &&
+ remove_object $(git rev-parse julius:caesar.t) &&
test_must_fail git fsck --name-objects >out &&
tree=$(git rev-parse --verify julius:) &&
grep "$tree (\(refs/heads/master\|HEAD\)@{[0-9]*}:" out
)
'
+test_expect_success 'alternate objects are correctly blamed' '
+ test_when_finished "rm -rf alt.git .git/objects/info/alternates" &&
+ git init --bare alt.git &&
+ echo "../../alt.git/objects" >.git/objects/info/alternates &&
+ mkdir alt.git/objects/12 &&
+ >alt.git/objects/12/34567890123456789012345678901234567890 &&
+ test_must_fail git fsck >out 2>&1 &&
+ grep alt.git out
+'
+
+test_expect_success 'fsck errors in packed objects' '
+ git cat-file commit HEAD >basis &&
+ sed "s/</one/" basis >one &&
+ sed "s/</foo/" basis >two &&
+ one=$(git hash-object -t commit -w one) &&
+ two=$(git hash-object -t commit -w two) &&
+ pack=$(
+ {
+ echo $one &&
+ echo $two
+ } | git pack-objects .git/objects/pack/pack
+ ) &&
+ test_when_finished "rm -f .git/objects/pack/pack-$pack.*" &&
+ remove_object $one &&
+ remove_object $two &&
+ test_must_fail git fsck 2>out &&
+ grep "error in commit $one.* - bad name" out &&
+ grep "error in commit $two.* - bad name" out &&
+ ! grep corrupt out
+'
+
+test_expect_success 'fsck finds problems in duplicate loose objects' '
+ rm -rf broken-duplicate &&
+ git init broken-duplicate &&
+ (
+ cd broken-duplicate &&
+ test_commit duplicate &&
+ # no "-d" here, so we end up with duplicates
+ git repack &&
+ # now corrupt the loose copy
+ file=$(sha1_file "$(git rev-parse HEAD)") &&
+ rm "$file" &&
+ echo broken >"$file" &&
+ test_must_fail git fsck
+ )
+'
+
+test_expect_success 'fsck detects trailing loose garbage (commit)' '
+ git cat-file commit HEAD >basis &&
+ echo bump-commit-sha1 >>basis &&
+ commit=$(git hash-object -w -t commit basis) &&
+ file=$(sha1_file $commit) &&
+ test_when_finished "remove_object $commit" &&
+ chmod +w "$file" &&
+ echo garbage >>"$file" &&
+ test_must_fail git fsck 2>out &&
+ test_i18ngrep "garbage.*$commit" out
+'
+
+test_expect_success 'fsck detects trailing loose garbage (blob)' '
+ blob=$(echo trailing | git hash-object -w --stdin) &&
+ file=$(sha1_file $blob) &&
+ test_when_finished "remove_object $blob" &&
+ chmod +w "$file" &&
+ echo garbage >>"$file" &&
+ test_must_fail git fsck 2>out &&
+ test_i18ngrep "garbage.*$blob" out
+'
+
test_done
git show HEAD | grep "^Author: Twerp Snog"
'
+test_expect_success 'retain authorship w/ conflicts' '
+ git reset --hard twerp &&
+ test_commit a conflict a conflict-a &&
+ git reset --hard twerp &&
+ GIT_AUTHOR_NAME=AttributeMe \
+ test_commit b conflict b conflict-b &&
+ set_fake_editor &&
+ test_must_fail git rebase -i conflict-a &&
+ echo resolved >conflict &&
+ git add conflict &&
+ git rebase --continue &&
+ test $(git rev-parse conflict-a^0) = $(git rev-parse HEAD^) &&
+ git show >out &&
+ grep AttributeMe out
+'
+
test_expect_success 'squash' '
git reset --hard twerp &&
echo B > file7 &&
}
t() {
+ use_config=
+ git config --unset diff.interHunkContext
+
case $# in
4) hunks=$4; cmd="diff -U$3";;
5) hunks=$5; cmd="diff -U$3 --inter-hunk-context=$4";;
+ 6) hunks=$5; cmd="diff -U$3"; git config diff.interHunkContext $4; use_config="(diff.interHunkContext=$4) ";;
esac
- label="$cmd, $1 common $2"
+ label="$use_config$cmd, $1 common $2"
file=f$1
expected=expected.$file.$3.$hunks
t 9 lines 3 2 2
t 9 lines 3 3 1
+# use diff.interHunkContext?
+t 1 line 0 0 2 config
+t 1 line 0 1 1 config
+t 1 line 0 2 1 config
+t 9 lines 3 3 1 config
+t 2 lines 0 0 2 config
+t 2 lines 0 1 2 config
+t 2 lines 0 2 1 config
+t 3 lines 1 0 2 config
+t 3 lines 1 1 1 config
+t 3 lines 1 2 1 config
+t 9 lines 3 2 2 config
+t 9 lines 3 3 1 config
+
+test_expect_success 'diff.interHunkContext invalid' '
+ git config diff.interHunkContext asdf &&
+ test_must_fail git diff &&
+ git config diff.interHunkContext -1 &&
+ test_must_fail git diff
+'
+
test_done
)
'
+test_expect_success 'rename succeeds with existing remote.<target>.prune' '
+ git clone one four.four &&
+ test_when_finished git config --global --unset remote.upstream.prune &&
+ git config --global remote.upstream.prune true &&
+ git -C four.four remote rename origin upstream
+'
+
cat >remotes_origin <<EOF
URL: $(pwd)/one
Push: refs/heads/master:refs/heads/upstream
test_cmp expected_submodule actual_submodule
'
+test_expect_success 'push --dry-run does not recursively update submodules' '
+ git -C work push --dry-run --recurse-submodules=only ../pub.git master &&
+
+ git -C submodule.git rev-parse master >actual_submodule &&
+ git -C pub.git rev-parse master >actual_pub &&
+ test_cmp expected_pub actual_pub &&
+ test_cmp expected_submodule actual_submodule
+'
+
+test_expect_success 'push only unpushed submodules recursively' '
+ git -C work/gar/bage rev-parse master >expected_submodule &&
+ git -C pub.git rev-parse master >expected_pub &&
+
+ git -C work push --recurse-submodules=only ../pub.git master &&
+
+ git -C submodule.git rev-parse master >actual_submodule &&
+ git -C pub.git rev-parse master >actual_pub &&
+ test_cmp expected_submodule actual_submodule &&
+ test_cmp expected_pub actual_pub
+'
+
test_done
tag_exists mytag'
test_expect_success '--force is moot with a non-existing tag name' '
+ test_when_finished git tag -d newtag forcetag &&
git tag newtag >expect &&
git tag --force forcetag >actual &&
test_cmp expect actual
'
-git tag -d newtag forcetag
# deleting tags:
'
test_expect_success 'listing tags in column with column.*' '
- git config column.tag row &&
- git config column.ui dense &&
+ test_config column.tag row &&
+ test_config column.ui dense &&
COLUMNS=40 git tag -l >actual &&
- git config --unset column.ui &&
- git config --unset column.tag &&
cat >expected <<\EOF &&
a1 aa1 cba t210 t211
v0.2.1 v1.0 v1.0.1 v1.1.3
'
test_expect_success 'listing tags -n in column with column.ui ignored' '
- git config column.ui "row dense" &&
+ test_config column.ui "row dense" &&
COLUMNS=40 git tag -l -n >actual &&
- git config --unset column.ui &&
cat >expected <<\EOF &&
a1 Foo
aa1 Foo
test_must_fail git tag -v forged-tag
'
+test_expect_success 'verifying a proper tag with --format passes and formats accordingly' '
+ cat >expect <<-\EOF &&
+ tagname : signed-tag
+ EOF
+ git tag -v --format="tagname : %(tag)" "signed-tag" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verifying a forged tag with --format fails and formats accordingly' '
+ cat >expect <<-\EOF &&
+ tagname : forged-tag
+ EOF
+ test_must_fail git tag -v --format="tagname : %(tag)" "forged-tag" >actual &&
+ test_cmp expect actual
+'
+
# blank and empty messages for signed tags:
get_tag_header empty-signed-tag $commit commit $time >expect
'
# try to sign with bad user.signingkey
-git config user.signingkey BobTheMouse
test_expect_success GPG \
'git tag -s fails if gpg is misconfigured (bad key)' \
- 'test_must_fail git tag -s -m tail tag-gpg-failure'
-git config --unset user.signingkey
+ 'test_config user.signingkey BobTheMouse &&
+ test_must_fail git tag -s -m tail tag-gpg-failure'
# try to produce invalid signature
test_expect_success GPG \
'
test_expect_success 'configured lexical sort' '
- git config tag.sort "v:refname" &&
+ test_config tag.sort "v:refname" &&
git tag -l "foo*" >actual &&
cat >expect <<-\EOF &&
foo1.3
'
test_expect_success 'option override configured sort' '
+ test_config tag.sort "v:refname" &&
git tag -l --sort=-refname "foo*" >actual &&
cat >expect <<-\EOF &&
foo1.6
'
test_expect_success 'invalid sort parameter in configuration' '
- git config tag.sort "v:notvalid" &&
+ test_config tag.sort "v:notvalid" &&
test_must_fail git tag -l "foo*"
'
test_expect_success 'version sort with prerelease reordering' '
- git config --unset tag.sort &&
- git config versionsort.prereleaseSuffix -rc &&
+ test_config versionsort.prereleaseSuffix -rc &&
git tag foo1.6-rc1 &&
git tag foo1.6-rc2 &&
git tag -l --sort=version:refname "foo*" >actual &&
'
test_expect_success 'reverse version sort with prerelease reordering' '
+ test_config versionsort.prereleaseSuffix -rc &&
git tag -l --sort=-version:refname "foo*" >actual &&
cat >expect <<-\EOF &&
foo1.10
test_cmp expect actual
'
+test_expect_success 'version sort with prerelease reordering and common leading character' '
+ test_config versionsort.prereleaseSuffix -before &&
+ git tag foo1.7-before1 &&
+ git tag foo1.7 &&
+ git tag foo1.7-after1 &&
+ git tag -l --sort=version:refname "foo1.7*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.7-before1
+ foo1.7
+ foo1.7-after1
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with prerelease reordering, multiple suffixes and common leading character' '
+ test_config versionsort.prereleaseSuffix -before &&
+ git config --add versionsort.prereleaseSuffix -after &&
+ git tag -l --sort=version:refname "foo1.7*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.7-before1
+ foo1.7-after1
+ foo1.7
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with prerelease reordering, multiple suffixes match the same tag' '
+ test_config versionsort.prereleaseSuffix -bar &&
+ git config --add versionsort.prereleaseSuffix -foo-baz &&
+ git config --add versionsort.prereleaseSuffix -foo-bar &&
+ git tag foo1.8-foo-bar &&
+ git tag foo1.8-foo-baz &&
+ git tag foo1.8 &&
+ git tag -l --sort=version:refname "foo1.8*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.8-foo-baz
+ foo1.8-foo-bar
+ foo1.8
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with prerelease reordering, multiple suffixes match starting at the same position' '
+ test_config versionsort.prereleaseSuffix -pre &&
+ git config --add versionsort.prereleaseSuffix -prerelease &&
+ git tag foo1.9-pre1 &&
+ git tag foo1.9-pre2 &&
+ git tag foo1.9-prerelease1 &&
+ git tag -l --sort=version:refname "foo1.9*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.9-pre1
+ foo1.9-pre2
+ foo1.9-prerelease1
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with general suffix reordering' '
+ test_config versionsort.suffix -alpha &&
+ git config --add versionsort.suffix -beta &&
+ git config --add versionsort.suffix "" &&
+ git config --add versionsort.suffix -gamma &&
+ git config --add versionsort.suffix -delta &&
+ git tag foo1.10-alpha &&
+ git tag foo1.10-beta &&
+ git tag foo1.10-gamma &&
+ git tag foo1.10-delta &&
+ git tag foo1.10-unlisted-suffix &&
+ git tag -l --sort=version:refname "foo1.10*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.10-alpha
+ foo1.10-beta
+ foo1.10
+ foo1.10-unlisted-suffix
+ foo1.10-gamma
+ foo1.10-delta
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'versionsort.suffix overrides versionsort.prereleaseSuffix' '
+ test_config versionsort.suffix -before &&
+ test_config versionsort.prereleaseSuffix -after &&
+ git tag -l --sort=version:refname "foo1.7*" >actual &&
+ cat >expect <<-\EOF &&
+ foo1.7-before1
+ foo1.7
+ foo1.7-after1
+ EOF
+ test_cmp expect actual
+'
+
+test_expect_success 'version sort with very long prerelease suffix' '
+ test_config versionsort.prereleaseSuffix -very-looooooooooooooooooooooooong-prerelease-suffix &&
+ git tag -l --sort=version:refname
+'
+
run_with_limited_stack () {
(ulimit -s 128 && "$@")
}
test_expect_success '--format should list tags as per format given' '
cat >expect <<-\EOF &&
- refname : refs/tags/foo1.10
- refname : refs/tags/foo1.3
- refname : refs/tags/foo1.6
- refname : refs/tags/foo1.6-rc1
- refname : refs/tags/foo1.6-rc2
+ refname : refs/tags/v1.0
+ refname : refs/tags/v1.0.1
+ refname : refs/tags/v1.1.3
EOF
- git tag -l --format="refname : %(refname)" "foo*" >actual &&
+ git tag -l --format="refname : %(refname)" "v1*" >actual &&
test_cmp expect actual
'
test_cmp expect.stderr actual.stderr
'
+test_expect_success 'verifying tag with --format' '
+ cat >expect <<-\EOF &&
+ tagname : fourth-signed
+ EOF
+ git verify-tag --format="tagname : %(tag)" "fourth-signed" >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'verifying a forged tag with --format fails and formats accordingly' '
+ cat >expect <<-\EOF &&
+ tagname : 7th forged-signed
+ EOF
+ test_must_fail git verify-tag --format="tagname : %(tag)" $(cat forged1.tag) >actual-forged &&
+ test_cmp expect actual-forged
+'
+
test_done
test_i18ncmp expect2 actual2
'
+cat <<EOF >expect2
+Submodule 'foo/sub' ($pwd/withsubs/../rebasing) registered for path 'sub'
+EOF
+
+test_expect_success 'submodule update --init from and of subdirectory' '
+ git init withsubs &&
+ (cd withsubs &&
+ mkdir foo &&
+ git submodule add "$(pwd)/../rebasing" foo/sub &&
+ (cd foo &&
+ git submodule deinit -f sub &&
+ git submodule update --init sub 2>../../actual2
+ )
+ ) &&
+ test_i18ncmp expect2 actual2
+'
+
apos="'";
test_expect_success 'submodule update does not fetch already present commits' '
(cd submodule &&
"" submodule \
>actual &&
test_cmp expect_local_path actual &&
- git config submodule.a.url $old_a &&
- git config submodule.submodule.url $old_submodule &&
+ git config submodule.a.url "$old_a" &&
+ git config submodule.submodule.url "$old_submodule" &&
git config --unset submodule.a.path c
)
'
+cat >super/expect_url <<EOF
+Submodule url: '../submodule' for path 'b'
+Submodule url: 'git@somewhere.else.net:submodule.git' for path 'submodule'
+EOF
+
+test_expect_success 'reading of local configuration for uninitialized submodules' '
+ (
+ cd super &&
+ git submodule deinit -f b &&
+ old_submodule=$(git config submodule.submodule.url) &&
+ git config submodule.submodule.url git@somewhere.else.net:submodule.git &&
+ test-submodule-config --url \
+ "" b \
+ "" submodule \
+ >actual &&
+ test_cmp expect_url actual &&
+ git config submodule.submodule.url "$old_submodule" &&
+ git submodule init b
+ )
+'
+
cat >super/expect_fetchrecurse_die.err <<EOF
fatal: bad submodule.submodule.fetchrecursesubmodules argument: blabla
EOF
echo "a+bc"
echo "abc"
} >ab &&
+ {
+ echo d &&
+ echo 0
+ } >d0 &&
echo vvv >v &&
echo ww w >w &&
echo x x xx x >x &&
'
test_expect_success 'grep -G -F -P -E pattern' '
- >empty &&
- test_must_fail git grep -G -F -P -E "a\x{2b}b\x{2a}c" ab >actual &&
- test_cmp empty actual
+ echo "d0:d" >expected &&
+ git grep -G -F -P -E "[\d]" d0 >actual &&
+ test_cmp expected actual
'
test_expect_success 'grep pattern with grep.patternType=fixed, =basic, =perl, =extended' '
- >empty &&
- test_must_fail git \
+ echo "d0:d" >expected &&
+ git \
-c grep.patterntype=fixed \
-c grep.patterntype=basic \
-c grep.patterntype=perl \
-c grep.patterntype=extended \
- grep "a\x{2b}b\x{2a}c" ab >actual &&
- test_cmp empty actual
+ grep "[\d]" d0 >actual &&
+ test_cmp expected actual
'
test_expect_success LIBPCRE 'grep -G -F -E -P pattern' '
- echo "ab:a+b*c" >expected &&
- git grep -G -F -E -P "a\x{2b}b\x{2a}c" ab >actual &&
+ echo "d0:0" >expected &&
+ git grep -G -F -E -P "[\d]" d0 >actual &&
test_cmp expected actual
'
test_expect_success LIBPCRE 'grep pattern with grep.patternType=fixed, =basic, =extended, =perl' '
- echo "ab:a+b*c" >expected &&
+ echo "d0:0" >expected &&
git \
-c grep.patterntype=fixed \
-c grep.patterntype=basic \
-c grep.patterntype=extended \
-c grep.patterntype=perl \
- grep "a\x{2b}b\x{2a}c" ab >actual &&
+ grep "[\d]" d0 >actual &&
test_cmp expected actual
'
#include "commit.h"
#include "tree.h"
#include "blob.h"
+#include "gpg-interface.h"
const char *tag_type = "tag";
ret = check_signature(buf, payload_size, buf + payload_size,
size - payload_size, &sigc);
- print_signature_buffer(&sigc, flags);
+
+ if (!(flags & GPG_VERIFY_OMIT_STATUS))
+ print_signature_buffer(&sigc, flags);
signature_check_clear(&sigc);
return ret;
if (run_pre_push_hook(transport, remote_refs))
return -1;
- if ((flags & TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND) && !is_bare_repository()) {
+ if ((flags & (TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND |
+ TRANSPORT_RECURSE_SUBMODULES_ONLY)) &&
+ !is_bare_repository()) {
struct ref *ref = remote_refs;
struct sha1_array commits = SHA1_ARRAY_INIT;
}
if (((flags & TRANSPORT_RECURSE_SUBMODULES_CHECK) ||
- ((flags & TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND) &&
+ ((flags & (TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND |
+ TRANSPORT_RECURSE_SUBMODULES_ONLY)) &&
!pretend)) && !is_bare_repository()) {
struct ref *ref = remote_refs;
struct string_list needs_pushing = STRING_LIST_INIT_DUP;
sha1_array_clear(&commits);
}
- push_ret = transport->push_refs(transport, remote_refs, flags);
+ if (!(flags & TRANSPORT_RECURSE_SUBMODULES_ONLY))
+ push_ret = transport->push_refs(transport, remote_refs, flags);
+ else
+ push_ret = 0;
err = push_had_errors(remote_refs);
ret = push_ret | err;
if (flags & TRANSPORT_PUSH_SET_UPSTREAM)
set_upstreams(transport, remote_refs, pretend);
- if (!(flags & TRANSPORT_PUSH_DRY_RUN)) {
+ if (!(flags & (TRANSPORT_PUSH_DRY_RUN |
+ TRANSPORT_RECURSE_SUBMODULES_ONLY))) {
struct ref *ref;
for (ref = remote_refs; ref; ref = ref->next)
transport_update_tracking_ref(transport->remote, ref, verbose);
enum transport_family family;
};
-#define TRANSPORT_PUSH_ALL 1
-#define TRANSPORT_PUSH_FORCE 2
-#define TRANSPORT_PUSH_DRY_RUN 4
-#define TRANSPORT_PUSH_MIRROR 8
-#define TRANSPORT_PUSH_PORCELAIN 16
-#define TRANSPORT_PUSH_SET_UPSTREAM 32
-#define TRANSPORT_RECURSE_SUBMODULES_CHECK 64
-#define TRANSPORT_PUSH_PRUNE 128
-#define TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND 256
-#define TRANSPORT_PUSH_NO_HOOK 512
-#define TRANSPORT_PUSH_FOLLOW_TAGS 1024
-#define TRANSPORT_PUSH_CERT_ALWAYS 2048
-#define TRANSPORT_PUSH_CERT_IF_ASKED 4096
-#define TRANSPORT_PUSH_ATOMIC 8192
-#define TRANSPORT_PUSH_OPTIONS 16384
+#define TRANSPORT_PUSH_ALL (1<<0)
+#define TRANSPORT_PUSH_FORCE (1<<1)
+#define TRANSPORT_PUSH_DRY_RUN (1<<2)
+#define TRANSPORT_PUSH_MIRROR (1<<3)
+#define TRANSPORT_PUSH_PORCELAIN (1<<4)
+#define TRANSPORT_PUSH_SET_UPSTREAM (1<<5)
+#define TRANSPORT_RECURSE_SUBMODULES_CHECK (1<<6)
+#define TRANSPORT_PUSH_PRUNE (1<<7)
+#define TRANSPORT_RECURSE_SUBMODULES_ON_DEMAND (1<<8)
+#define TRANSPORT_PUSH_NO_HOOK (1<<9)
+#define TRANSPORT_PUSH_FOLLOW_TAGS (1<<10)
+#define TRANSPORT_PUSH_CERT_ALWAYS (1<<11)
+#define TRANSPORT_PUSH_CERT_IF_ASKED (1<<12)
+#define TRANSPORT_PUSH_ATOMIC (1<<13)
+#define TRANSPORT_PUSH_OPTIONS (1<<14)
+#define TRANSPORT_RECURSE_SUBMODULES_ONLY (1<<15)
extern int transport_summary_width(const struct ref *refs);
#include "cache.h"
static FILE *error_handle;
-static int tweaked_error_buffering;
void vreportf(const char *prefix, const char *err, va_list params)
{
+ char msg[4096];
FILE *fh = error_handle ? error_handle : stderr;
+ char *p;
- fflush(fh);
- if (!tweaked_error_buffering) {
- setvbuf(fh, NULL, _IOLBF, 0);
- tweaked_error_buffering = 1;
+ vsnprintf(msg, sizeof(msg), err, params);
+ for (p = msg; *p; p++) {
+ if (iscntrl(*p) && *p != '\t' && *p != '\n')
+ *p = '?';
}
-
- fputs(prefix, fh);
- vfprintf(fh, err, params);
- fputc('\n', fh);
+ fprintf(fh, "%s%s\n", prefix, msg);
}
static NORETURN void usage_builtin(const char *err, va_list params)
void set_error_handle(FILE *fh)
{
error_handle = fh;
- tweaked_error_buffering = 0;
}
void NORETURN usagef(const char *err, ...)
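With the rewrite above, vreportf() formats into a fixed 4kB buffer (so overly long messages are truncated) and masks control characters other than TAB and LF, so e.g. a "\r" embedded in an attacker-supplied ref name can no longer overwrite the "error: " prefix on the terminal. A standalone sketch of the same scrubbing, for illustration only and not part of the patch:

    #include <ctype.h>
    #include <stdio.h>

    static void scrub(char *msg)
    {
        char *p;

        for (p = msg; *p; p++)
            if (iscntrl((unsigned char)*p) && *p != '\t' && *p != '\n')
                *p = '?';   /* CR, ESC, etc. become '?' */
    }

    int main(void)
    {
        char msg[] = "refs/heads/evil\rref";
        scrub(msg);
        printf("error: %s\n", msg);   /* prints "error: refs/heads/evil?ref" */
        return 0;
    }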
static const struct string_list *prereleases;
static int initialized;
+struct suffix_match {
+ int conf_pos;
+ int start;
+ int len;
+};
+
+static void find_better_matching_suffix(const char *tagname, const char *suffix,
+ int suffix_len, int start, int conf_pos,
+ struct suffix_match *match)
+{
+ /*
+ * A better match either starts earlier or starts at the same offset
+ * but is longer.
+ */
+ int end = match->len < suffix_len ? match->start : match->start-1;
+ int i;
+ for (i = start; i <= end; i++)
+ if (starts_with(tagname + i, suffix)) {
+ match->conf_pos = conf_pos;
+ match->start = i;
+ match->len = suffix_len;
+ break;
+ }
+}
+
/*
- * p1 and p2 point to the first different character in two strings. If
- * either p1 or p2 starts with a prerelease suffix, it will be forced
- * to be on top.
- *
- * If both p1 and p2 start with (different) suffix, the order is
- * determined by config file.
+ * off is the offset of the first different character in the two strings
+ * s1 and s2. If either s1 or s2 contains a prerelease suffix that contains
+ * that offset, or a suffix that ends right before that offset, then that
+ * string will be forced to be on top.
*
- * Note that we don't have to deal with the situation when both p1 and
- * p2 start with the same suffix because the common part is already
- * consumed by the caller.
+ * If both s1 and s2 contain a (different) suffix around that position,
+ * their order is determined by the order of those two suffixes in the
+ * configuration.
+ * If either string contains more than one different suffix around that
+ * position, then that string is sorted according to the contained
+ * suffix which starts at the earliest offset in that string.
+ * If more than one such suffix starts at that earliest offset, then
+ * that string is sorted according to the longest of those suffixes.
*
* Return non-zero if *diff contains the return value for versioncmp()
*/
-static int swap_prereleases(const void *p1_,
- const void *p2_,
+static int swap_prereleases(const char *s1,
+ const char *s2,
+ int off,
int *diff)
{
- const char *p1 = p1_;
- const char *p2 = p2_;
- int i, i1 = -1, i2 = -1;
+ int i;
+ struct suffix_match match1 = { -1, off, -1 };
+ struct suffix_match match2 = { -1, off, -1 };
for (i = 0; i < prereleases->nr; i++) {
const char *suffix = prereleases->items[i].string;
- if (i1 == -1 && starts_with(p1, suffix))
- i1 = i;
- if (i2 == -1 && starts_with(p2, suffix))
- i2 = i;
+ int start, suffix_len = strlen(suffix);
+ if (suffix_len < off)
+ start = off - suffix_len;
+ else
+ start = 0;
+ find_better_matching_suffix(s1, suffix, suffix_len, start,
+ i, &match1);
+ find_better_matching_suffix(s2, suffix, suffix_len, start,
+ i, &match2);
}
- if (i1 == -1 && i2 == -1)
+ if (match1.conf_pos == -1 && match2.conf_pos == -1)
return 0;
- if (i1 >= 0 && i2 >= 0)
- *diff = i1 - i2;
- else if (i1 >= 0)
+ if (match1.conf_pos == match2.conf_pos)
+ /*
+ * Found the same suffix in both, e.g. "-rc" in "v1.0-rcX"
+ * and "v1.0-rcY": the caller should decide based on "X"
+ * and "Y".
+ */
+ return 0;
+
+ if (match1.conf_pos >= 0 && match2.conf_pos >= 0)
+ *diff = match1.conf_pos - match2.conf_pos;
+ else if (match1.conf_pos >= 0)
*diff = -1;
- else /* if (i2 >= 0) */
+ else /* if (match2.conf_pos >= 0) */
*diff = 1;
return 1;
}
}
if (!initialized) {
+ const struct string_list *deprecated_prereleases;
initialized = 1;
- prereleases = git_config_get_value_multi("versionsort.prereleasesuffix");
+ prereleases = git_config_get_value_multi("versionsort.suffix");
+ deprecated_prereleases = git_config_get_value_multi("versionsort.prereleasesuffix");
+ if (prereleases) {
+ if (deprecated_prereleases)
+ warning("ignoring versionsort.prereleasesuffix because versionsort.suffix is set");
+ } else
+ prereleases = deprecated_prereleases;
}
- if (prereleases && swap_prereleases(p1 - 1, p2 - 1, &diff))
+ if (prereleases && swap_prereleases(s1, s2, (const char *) p1 - s1 - 1,
+ &diff))
return diff;
state = result_type[state * 3 + (((c2 == '0') + (isdigit (c2) != 0)))];
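To make the "earliest offset, then longest suffix" rule above concrete, here is a worked trace of the 'multiple suffixes match the same tag' test earlier in this series, with the suffixes configured in the order "-bar", "-foo-baz", "-foo-bar":

    comparing "foo1.8-foo-bar" and "foo1.8-foo-baz": first difference at offset 13
    in "foo1.8-foo-bar":  "-bar" matches at offset 10, but "-foo-bar" matches at
                          offset 6; the earlier start wins, so conf_pos = 2
    in "foo1.8-foo-baz":  "-foo-baz" matches at offset 6, so conf_pos = 1
    *diff = 2 - 1 > 0, so "foo1.8-foo-baz" sorts before "foo1.8-foo-bar",
    matching that test's expected output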