The following Thunderbird extensions are needed:
AboutConfig 0.5
http://aboutconfig.mozdev.org/
- External Editor 0.5.4
- http://extensionroom.mozdev.org/more-info/exteditor
+ External Editor 0.7.2
+ http://globs.org/articles.php?lng=en&pg=8
1) Prepare the patch as a text file using your method of choice.
The first two lines indicate that it is showing the two branches,
with the first line of the commit log message from their
top-of-the-tree commits; you are currently on the `master` branch
-(notice the asterisk `*` character), and the first column for
+(notice the asterisk `\*` character), and the first column for
the later output lines is used to show commits contained in the
`master` branch, and the second column for the `mybranch`
branch. Three commits are shown along with their log messages.
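As a rough sketch of the layout being described (hypothetical commit messages; the real output may differ in detail), it looks something like this:

    $ git show-branch master mybranch
    * [master] Some fun
     ! [mybranch] Some work
    --
     + [mybranch] Some work
    *  [master] Some fun
    *+ [master^] Initial commit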
--stat::
Generate a diffstat instead of a patch.
+--summary::
+ Output a condensed summary of extended header information
+ such as creations, renames and mode changes.
+
--patch-with-stat::
Generate patch and prepend its diffstat.
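For instance, combining `--stat` with `--summary` (file names and counts below are hypothetical) might produce:

    $ git diff --stat --summary HEAD^ HEAD
     Documentation/git-grep.txt |   40 ++++++++++++++++----------
     builtin-grep.c             |   25 +++++++++++++
     2 files changed, 57 insertions(+), 8 deletions(-)
     create mode 100644 builtin-grep.c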
A simple wrapper for git-update-index to add files to the index,
for people used to doing "cvs add".
+It only adds non-ignored files; to add ignored files, use
+"git update-index --add".
OPTIONS
-------
<file>...::
- Files to add to the index.
+ Files to add to the index (see gitlink:git-ls-files[1]).
-n::
Don't actually add the file(s), just show if they exist.
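A minimal sketch of the difference (hypothetical paths, assuming `foo.c` is a new untracked file and `*.o` is listed in `.gitignore`):

    $ git add -n .                    # lists foo.c; tmp.o is skipped because it is ignored
    $ git add foo.c                   # actually registers foo.c in the index
    $ git update-index --add tmp.o    # explicitly add the ignored file anyway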
See Also
--------
gitlink:git-rm[1]
+gitlink:git-ls-files[1]
Author
------
SYNOPSIS
--------
[verse]
-'git-clean' [-d] [-n] [-q] [-x | -X]
+'git-clean' [-d] [-n] [-q] [-x | -X] [--] <paths>...
DESCRIPTION
-----------
from files that are not under version control. If the '-x' option is
specified, ignored files are removed as well, making it possible to
remove all build products.
+When optional `<paths>...` arguments are given, the paths
+affected are further limited to those that match them.
+
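For example (hypothetical layout):

    $ git clean -n -d -x              # dry run: show untracked files, directories and ignored files
    $ git clean -x -- Documentation/  # really remove untracked and ignored files, but only under Documentation/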
OPTIONS
-------
SYNOPSIS
--------
-'git-cvsexportcommmit' [-h] [-v] [-c] [-p] [PARENTCOMMIT] COMMITID
+'git-cvsexportcommit' [-h] [-v] [-c] [-p] [-f] [-m msgprefix] [PARENTCOMMIT] COMMITID
DESCRIPTION
Be pedantic (paranoid) when applying patches. Invokes patch with
--fuzz=0
+-f::
+ Force the merge even if the files are not up to date.
+
+-m::
+ Prepend the commit message with the provided prefix.
+ Useful for patch series and the like.
+
-v::
Verbose.
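A sketch of how the new options fit together (the message prefix is arbitrary, and the command is typically run from inside the corresponding CVS checkout):

    $ git cvsexportcommit -v -c -f -m "PATCHSET 2: " HEAD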
SYNOPSIS
--------
-'git-grep' [<option>...] [-e] <pattern> [--] [<path>...]
+[verse]
+'git-grep' [--cached]
+ [-a | --text] [-I] [-i | --ignore-case] [-w | --word-regexp]
+ [-v | --invert-match]
+ [-E | --extended-regexp] [-G | --basic-regexp] [-F | --fixed-strings]
+ [-n] [-l | --files-with-matches] [-L | --files-without-match]
+ [-c | --count]
+ [-A <post-context>] [-B <pre-context>] [-C <context>]
+ [-f <file>] [-e <pattern>]
+ [<tree>...]
+ [--] [<path>...]
DESCRIPTION
-----------
-Searches list of files `git-ls-files` produces for lines
-containing a match to the given pattern.
+Look for specified patterns in the working tree files, blobs
+registered in the index file, or given tree objects.
OPTIONS
-------
-`--`::
- Signals the end of options; the rest of the parameters
- are <path> limiters.
+--cached::
+ Instead of searching in the working tree files, check
+	the blobs registered in the index file.
+
+-a | --text::
+ Process binary files as if they were text.
+
+-i | --ignore-case::
+ Ignore case differences between the patterns and the
+ files.
+
+-w | --word-regexp::
+	Match the pattern only at a word boundary: the match must
+	either begin at the beginning of a line or be preceded by a
+	non-word character, and must end at the end of a line or be
+	followed by a non-word character.
+
+-v | --invert-match::
+ Select non-matching lines.
+
+-E | --extended-regexp | -G | --basic-regexp::
+ Use POSIX extended/basic regexp for patterns. Default
+ is to use basic regexp.
-<option>...::
- Either an option to pass to `grep` or `git-ls-files`.
-+
-The following are the specific `git-ls-files` options
-that may be given: `-o`, `--cached`, `--deleted`, `--others`,
-`--killed`, `--ignored`, `--modified`, `--exclude=\*`,
-`--exclude-from=\*`, and `--exclude-per-directory=\*`.
-+
-All other options will be passed to `grep`.
+-n::
+ Prefix the line number to matching lines.
-<pattern>::
- The pattern to look for. The first non option is taken
- as the pattern; if your pattern begins with a dash, use
- `-e <pattern>`.
+-l | --files-with-matches | -L | --files-without-match::
+ Instead of showing every matched line, show only the
+ names of files that contain (or do not contain) matches.
-<path>...::
- Optional paths to limit the set of files to be searched;
- passed to `git-ls-files`.
+-c | --count::
+ Instead of showing every matched line, show the number of
+ lines that match.
+
+-[ABC] <context>::
+ Show `context` trailing (`A` -- after), or leading (`B`
+ -- before), or both (`C` -- context) lines, and place a
+	line containing `--` between contiguous groups of
+ matches.
+
+-f <file>::
+ Read patterns from <file>, one per line.
+
+`<tree>...`::
+ Search blobs in the trees for specified patterns.
+
+`--`::
+ Signals the end of options; the rest of the parameters
+ are <path> limiters.
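A few invocations exercising the forms above (patterns, paths and the `v1.0` tag are hypothetical):

    $ git grep -n -e 'xmalloc(' -- '*.c'               # search the working tree, show line numbers
    $ git grep --cached -w free                        # search blobs registered in the index
    $ git grep -c -e TODO v1.0 HEAD -- Documentation/  # count matches in two trees, limited to a path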
Author
------
-Written by Linus Torvalds <torvalds@osdl.org>
+Originally written by Linus Torvalds <torvalds@osdl.org>, later
+revamped by Junio C Hamano.
+
Documentation
--------------
SYNOPSIS
--------
-'git-merge-base' <commit> <commit>
+'git-merge-base' [--all] <commit> <commit>
DESCRIPTION
-----------
-"git-merge-base" finds as good a common ancestor as possible. Given a
-selection of equally good common ancestors it should not be relied on
-to decide in any particular way.
+
+"git-merge-base" finds as good a common ancestor as possible between
+the two commits. That is, given two commits A and B, 'git-merge-base A
+B' will output a commit which is reachable from both A and B through
+the parent relationship.
+
+Given a selection of equally good common ancestors it should not be
+relied on to decide in any particular way.
The "git-merge-base" algorithm is still in flux - use the source...
+OPTIONS
+-------
+--all::
+ Output all common ancestors for the two commits instead of
+ just one.
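For example (branch names hypothetical):

    $ git merge-base --all master next                 # every "best" common ancestor of the two heads
    $ git diff $(git merge-base master topic) topic    # what happened on topic since it forked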
Author
------
--- /dev/null
+git-quiltimport(1)
+==================
+
+NAME
+----
+git-quiltimport - Applies a quilt patchset onto the current branch
+
+
+SYNOPSIS
+--------
+[verse]
+'git-quiltimport' [--dry-run] [--author <author>] [--patches <dir>]
+
+
+DESCRIPTION
+-----------
+Applies a quilt patchset onto the current git branch, preserving
+the patch boundaries, patch order, and patch descriptions present
+in the quilt patchset.
+
+For each patch, the code attempts to extract the author from the
+patch description. If that fails, it falls back to the author
+specified with --author. If the --author flag was not given,
+the patch description is displayed and the user is asked to
+interactively enter the author of the patch.
+
+If a subject is not found in the patch description the patch name is
+preserved as the one-line subject in the git description.
+
+OPTIONS
+-------
+--dry-run::
+ Walk through the patches in the series and warn
+ if we cannot find all of the necessary information to commit
+ a patch. At the time of this writing only missing author
+ information is warned about.
+
+--author Author Name <Author Email>::
+ The author name and email address to use when no author
+ information can be found in the patch description.
+
+--patches <dir>::
+ The directory to find the quilt patches and the
+ quilt series file.
+
+	The default for the patch directory is the value of
+	the $QUILT_PATCHES environment variable, or "patches"
+	if it is not set.
+
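A typical run might look like this (directory and author are hypothetical):

    $ git quiltimport --dry-run --patches ../quilt/patches
    $ git quiltimport --patches ../quilt/patches \
          --author "A U Thor <author@example.com>"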
+Author
+------
+Written by Eric Biederman <ebiederm@lnxi.com>
+
+Documentation
+--------------
+Documentation by Eric Biederman <ebiederm@lnxi.com>
+
+GIT
+---
+Part of the gitlink:git[7] suite
+
--------
'git-rebase' [--onto <newbase>] <upstream> [<branch>]
-'git-rebase' --continue
-
-'git-rebase' --abort
+'git-rebase' --continue | --skip | --abort
DESCRIPTION
-----------
It is possible that a merge failure will prevent this process from being
completely automatic. You will have to resolve any such merge failure
-and run `git rebase --continue`. If you can not resolve the merge
-failure, running `git rebase --abort` will restore the original <branch>
-and remove the working files found in the .dotest directory.
+and run `git rebase --continue`. Another option is to bypass the commit
+that caused the merge failure with `git rebase --skip`. To restore the
+original <branch> and remove the .dotest working files, use the command
+`git rebase --abort` instead.
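A sketch of the three ways out of a stopped rebase (file name hypothetical):

    $ git rebase master topic
    # ... rebase stops with a conflict in frotz.c ...
    # edit frotz.c to resolve the conflict, then:
    $ git update-index frotz.c
    $ git rebase --continue            # or: git rebase --skip, or: git rebase --abort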
Note that if <branch> is not specified on the command line, the currently
checked out branch is used.
actually the section and the key separated by a dot, and the value will be
escaped.
-If you want to set/unset an option which can occur on multiple lines, you
-should provide a POSIX regex for the value. If you want to handle the lines
-*not* matching the regex, just prepend a single exclamation mark in front
-(see EXAMPLES).
+If you want to set/unset an option which can occur on multiple
+lines, a POSIX regexp `value_regex` needs to be given. Only the
+existing values that match the regexp are updated or unset. If
+you want to handle the lines that do *not* match the regex, just
+prepend a single exclamation mark in front (see EXAMPLES).
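A small sketch, using the multi-valued `core.gitproxy` variable purely as an illustration:

    # replace only the value(s) whose current contents match the regexp
    $ git repo-config core.gitproxy '"ssh" for kernel.org' 'for kernel.org$'
    # act on the value(s) that do NOT match the regexp
    $ git repo-config --unset-all core.gitproxy '!for kernel.org$'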
The type specifier can be either '--int' or '--bool', which will make
'git-repo-config' ensure that the variable(s) are of the given type and
--bisect::
Limit output to the one commit object which is roughly halfway
between the included and excluded commits. Thus, if 'git-rev-list
- --bisect foo ^bar ^baz' outputs 'midpoint', the output
- of 'git-rev-list foo ^midpoint' and 'git-rev-list midpoint
- ^bar ^baz' would be of roughly the same length. Finding the change
+ --bisect foo {caret}bar {caret}baz' outputs 'midpoint', the output
+ of 'git-rev-list foo {caret}midpoint' and 'git-rev-list midpoint
+ {caret}bar {caret}baz' would be of roughly the same length.
+ Finding the change
which introduces a regression is thus reduced to a binary search:
repeatedly generate and test new 'midpoint's until the commit chain
is of length one.
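A minimal sketch of that loop (the build/test step is hypothetical):

    $ mid=$(git rev-list --bisect foo ^bar ^baz)
    # test $mid, e.g.: git checkout -b bisect-test $mid && make test
    $ git rev-list --bisect foo ^$mid              # next midpoint if $mid turned out to be good
    $ git rev-list --bisect $mid ^bar ^baz         # next midpoint if $mid turned out to be bad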
--all::
Show all refs found in `$GIT_DIR/refs`.
+--branches::
+ Show branch refs found in `$GIT_DIR/refs/heads`.
+
+--tags::
+ Show tag refs found in `$GIT_DIR/refs/tags`.
+
+--remotes::
+	Show remote refs found in `$GIT_DIR/refs/remotes`.
+
--show-prefix::
When the command is invoked from a subdirectory, show the
path of the current directory relative to the top-level
[--cacheinfo <mode> <object> <file>]\*
[--chmod=(+|-)x]
[--assume-unchanged | --no-assume-unchanged]
- [--really-refresh] [--unresolve]
+ [--really-refresh] [--unresolve] [--again]
[--info-only] [--index-info]
[-z] [--stdin]
[--verbose]
filesystem that has very slow lstat(2) system call
(e.g. cifs).
+--again::
+ Runs `git-update-index` itself on the paths whose index
+ entries are different from those from the `HEAD` commit.
+
--unresolve::
Restores the 'unmerged' or 'needs updating' state of a
file during a merge if it was cleared by accident.
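The `--again` option above is handy when preparing a commit in several passes; a minimal sketch (hypothetical paths):

    $ git update-index frotz.c nitfol.c   # register the first round of edits
    # ... edit frotz.c some more ...
    $ git update-index --again            # re-run on every path whose index entry already differs from HEAD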
for h in *.html *.txt howto/*.txt howto/*.html
do
- diff -u -I'Last updated [0-9][0-9]-[A-Z][a-z][a-z]-' "$T/$h" "$h" || {
+ if test -f "$T/$h" &&
+ diff -u -I'Last updated [0-9][0-9]-[A-Z][a-z][a-z]-' "$T/$h" "$h"
+ then
+ :; # up to date
+ else
echo >&2 "# install $h $T/$h"
rm -f "$T/$h"
mkdir -p `dirname "$T/$h"`
cp "$h" "$T/$h"
- }
+ fi
done
strip_leading=`echo "$T/" | sed -e 's|.|.|g'`
for th in "$T"/*.html "$T"/*.txt "$T"/howto/*.txt "$T"/howto/*.html
git-tag.sh git-verify-tag.sh \
git-applymbox.sh git-applypatch.sh git-am.sh \
git-merge.sh git-merge-stupid.sh git-merge-octopus.sh \
- git-merge-resolve.sh git-merge-ours.sh git-grep.sh \
- git-lost-found.sh
+ git-merge-resolve.sh git-merge-ours.sh \
+ git-lost-found.sh git-quiltimport.sh
SCRIPT_PERL = \
git-archimport.perl git-cvsimport.perl git-relink.perl \
git-shortlog.perl git-fmt-merge-msg.perl git-rerere.perl \
git-annotate.perl git-cvsserver.perl \
- git-svnimport.perl git-mv.perl git-cvsexportcommit.perl
+ git-svnimport.perl git-mv.perl git-cvsexportcommit.perl \
+ git-send-email.perl
SCRIPT_PYTHON = \
git-merge-recursive.py
git-convert-objects$X git-diff-files$X \
git-diff-index$X git-diff-stages$X \
git-diff-tree$X git-fetch-pack$X git-fsck-objects$X \
- git-hash-object$X git-index-pack$X git-init-db$X git-local-fetch$X \
+ git-hash-object$X git-index-pack$X git-local-fetch$X \
git-ls-files$X git-ls-tree$X git-mailinfo$X git-merge-base$X \
git-merge-index$X git-mktag$X git-mktree$X git-pack-objects$X git-patch-id$X \
git-peek-remote$X git-prune-packed$X git-read-tree$X \
- git-receive-pack$X git-rev-list$X git-rev-parse$X \
+ git-receive-pack$X git-rev-parse$X \
git-send-pack$X git-show-branch$X git-shell$X \
git-show-index$X git-ssh-fetch$X \
git-ssh-upload$X git-tar-tree$X git-unpack-file$X \
git-unpack-objects$X git-update-index$X git-update-server-info$X \
git-upload-pack$X git-verify-pack$X git-write-tree$X \
- git-update-ref$X git-symbolic-ref$X git-check-ref-format$X \
+ git-update-ref$X git-symbolic-ref$X \
git-name-rev$X git-pack-redundant$X git-repo-config$X git-var$X \
git-describe$X git-merge-tree$X git-blame$X git-imap-send$X
BUILT_INS = git-log$X git-whatchanged$X git-show$X \
- git-count-objects$X git-diff$X git-push$X
+ git-count-objects$X git-diff$X git-push$X \
+ git-grep$X git-rev-list$X git-check-ref-format$X \
+ git-init-db$X
# what 'all' will build and 'install' will install, in gitexecdir
ALL_PROGRAMS = $(PROGRAMS) $(SIMPLE_PROGRAMS) $(SCRIPTS)
diffcore-delta.o log-tree.o
LIB_OBJS = \
- blob.o commit.o connect.o csum-file.o \
+ blob.o commit.o connect.o csum-file.o base85.o \
date.o diff-delta.o entry.o exec_cmd.o ident.o index.o \
object.o pack-check.o patch-delta.o path.o pkt-line.o \
quote.o read-cache.o refs.o run-command.o \
$(DIFF_OBJS)
BUILTIN_OBJS = \
- builtin-log.o builtin-help.o builtin-count.o builtin-diff.o builtin-push.o
+ builtin-log.o builtin-help.o builtin-count.o builtin-diff.o builtin-push.o \
+ builtin-grep.o builtin-rev-list.o builtin-check-ref-format.o \
+ builtin-init-db.o
GITLIBS = $(LIB_FILE) $(XDIFF_LIB)
LIBS = $(GITLIBS) -lz
ALL_LDFLAGS += -L/usr/local/lib
endif
ifeq ($(uname_S),NetBSD)
- NEEDS_LIBICONV = YesPlease
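+	# expr prints the length of the match, so the test below is true only
+	# when the NetBSD release ($(uname_R)) begins with "0." or "1."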
+ ifeq ($(shell expr "$(uname_R)" : '[01]\.'),2)
+ NEEDS_LIBICONV = YesPlease
+ endif
ALL_CFLAGS += -I/usr/pkg/include
ALL_LDFLAGS += -L/usr/pkg/lib -Wl,-rpath,/usr/pkg/lib
endif
endif
endif
-ifdef WITH_SEND_EMAIL
- SCRIPT_PERL += git-send-email.perl
-endif
-
ifndef NO_CURL
ifdef CURLDIR
# This is still problematic -- gcc does not always want -R.
GIT_PYTHON_DIR_SQ = $(subst ','\'',$(GIT_PYTHON_DIR))
ALL_CFLAGS += -DSHA1_HEADER='$(SHA1_HEADER_SQ)' $(COMPAT_CFLAGS)
+ALL_CFLAGS += -DDEFAULT_GIT_TEMPLATE_DIR='"$(template_dir_SQ)"'
LIB_OBJS += $(COMPAT_OBJS)
export prefix TAR INSTALL DESTDIR SHELL_PATH template_dir
### Build rules
$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(LIBS) $(CURL_LIBCURL) $(EXPAT_LIBEXPAT)
-init-db.o: init-db.c
- $(CC) -c $(ALL_CFLAGS) \
- -DDEFAULT_GIT_TEMPLATE_DIR='"$(template_dir_SQ)"' $*.c
-
$(LIB_OBJS) $(BUILTIN_OBJS): $(LIB_H)
$(patsubst git-%$X,%.o,$(PROGRAMS)): $(GITLIBS)
$(DIFF_OBJS): diffcore.h
$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) test-date.c date.o ctype.o
test-delta$X: test-delta.c diff-delta.o patch-delta.o
- $(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $^ -lz
+ $(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $^
check:
for i in *.c; do sparse $(ALL_CFLAGS) $(SPARSE_FLAGS) $$i || exit; done
rpm: dist
$(RPMBUILD) -ta $(GIT_TARNAME).tar.gz
+htmldocs = git-htmldocs-$(GIT_VERSION)
+manpages = git-manpages-$(GIT_VERSION)
+dist-doc:
+ rm -fr .doc-tmp-dir
+ mkdir .doc-tmp-dir
+ $(MAKE) -C Documentation WEBDOC_DEST=../.doc-tmp-dir install-webdoc
+ cd .doc-tmp-dir && $(TAR) cf ../$(htmldocs).tar .
+ gzip -n -9 -f $(htmldocs).tar
+ :
+ rm -fr .doc-tmp-dir
+ mkdir .doc-tmp-dir .doc-tmp-dir/man1 .doc-tmp-dir/man7
+ $(MAKE) -C Documentation DESTDIR=. \
+ man1=../.doc-tmp-dir/man1 \
+ man7=../.doc-tmp-dir/man7 \
+ install
+ cd .doc-tmp-dir && $(TAR) cf ../$(manpages).tar .
+ gzip -n -9 -f $(manpages).tar
+ rm -fr .doc-tmp-dir
+
### Cleaning rules
clean:
$(LIB_FILE) $(XDIFF_LIB)
rm -f $(ALL_PROGRAMS) $(BUILT_INS) git$X
rm -f *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h TAGS tags
- rm -rf $(GIT_TARNAME)
+ rm -rf $(GIT_TARNAME) .doc-tmp-dir
rm -f $(GIT_TARNAME).tar.gz git-core_$(GIT_VERSION)-*.tar.gz
+ rm -f $(htmldocs).tar $(manpages).tar
$(MAKE) -C Documentation/ clean
$(MAKE) -C templates clean
$(MAKE) -C t/ clean
#include "cache.h"
#include "quote.h"
#include "blob.h"
+#include "delta.h"
// --check turns on checking that the working tree matches the
// files that are being modified, but doesn't apply the patch
// --stat does just a diffstat, and doesn't actually apply
// --numstat does numeric diffstat, and doesn't actually apply
// --index-info shows the old and new index info for paths if available.
+// --index updates the cache as well.
+// --cached updates only the cache without ever touching the working tree.
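+// For example, "git-apply --cached fix.patch" (a hypothetical patch file)
+// stages the result only in the index, while "--index" updates both the
+// index and the working tree.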
//
static const char *prefix;
static int prefix_length = -1;
+static int newfd = -1;
static int p_value = 1;
static int allow_binary_replacement = 0;
static int check_index = 0;
static int write_index = 0;
+static int cached = 0;
static int diffstat = 0;
static int numstat = 0;
static int summary = 0;
static int line_termination = '\n';
static unsigned long p_context = -1;
static const char apply_usage[] =
-"git-apply [--stat] [--numstat] [--summary] [--check] [--index] [--apply] [--no-add] [--index-info] [--allow-binary-replacement] [-z] [-pNUM] [-CNUM] [--whitespace=<nowarn|warn|error|error-all|strip>] <patch>...";
+"git-apply [--stat] [--numstat] [--summary] [--check] [--index] [--cached] [--apply] [--no-add] [--index-info] [--allow-binary-replacement] [-z] [-pNUM] [-CNUM] [--whitespace=<nowarn|warn|error|error-all|strip>] <patch>...";
static enum whitespace_eol {
nowarn_whitespace,
char *new_name, *old_name, *def_name;
unsigned int old_mode, new_mode;
int is_rename, is_copy, is_new, is_delete, is_binary;
+#define BINARY_DELTA_DEFLATED 1
+#define BINARY_LITERAL_DEFLATED 2
+ unsigned long deflate_origlen;
int lines_added, lines_deleted;
int score;
struct fragment *fragments;
patch->old_mode != patch->new_mode);
}
+static int parse_binary(char *buffer, unsigned long size, struct patch *patch)
+{
+ /* We have read "GIT binary patch\n"; what follows is a line
+ * that says the patch method (currently, either "deflated
+ * literal" or "deflated delta") and the length of data before
+ * deflating; a sequence of 'length-byte' followed by base-85
+ * encoded data follows.
+ *
+ * Each 5-byte sequence of base-85 encodes up to 4 bytes,
+ * and we would limit the patch line to 66 characters,
+ * so one line can fit up to 13 groups that would decode
+ * to 52 bytes max. The length byte 'A'-'Z' corresponds
+ * to 1-26 bytes, and 'a'-'z' corresponds to 27-52 bytes.
+ * The end of binary is signalled with an empty line.
+ */
+ int llen, used;
+ struct fragment *fragment;
+ char *data = NULL;
+
+ patch->fragments = fragment = xcalloc(1, sizeof(*fragment));
+
+ /* Grab the type of patch */
+ llen = linelen(buffer, size);
+ used = llen;
+ linenr++;
+
+ if (!strncmp(buffer, "delta ", 6)) {
+ patch->is_binary = BINARY_DELTA_DEFLATED;
+ patch->deflate_origlen = strtoul(buffer + 6, NULL, 10);
+ }
+ else if (!strncmp(buffer, "literal ", 8)) {
+ patch->is_binary = BINARY_LITERAL_DEFLATED;
+ patch->deflate_origlen = strtoul(buffer + 8, NULL, 10);
+ }
+ else
+ return error("unrecognized binary patch at line %d: %.*s",
+ linenr-1, llen-1, buffer);
+ buffer += llen;
+ while (1) {
+ int byte_length, max_byte_length, newsize;
+ llen = linelen(buffer, size);
+ used += llen;
+ linenr++;
+ if (llen == 1)
+ break;
+ /* Minimum line is "A00000\n" which is 7-byte long,
+ * and the line length must be multiple of 5 plus 2.
+ */
+ if ((llen < 7) || (llen-2) % 5)
+ goto corrupt;
+ max_byte_length = (llen - 2) / 5 * 4;
+ byte_length = *buffer;
+ if ('A' <= byte_length && byte_length <= 'Z')
+ byte_length = byte_length - 'A' + 1;
+ else if ('a' <= byte_length && byte_length <= 'z')
+ byte_length = byte_length - 'a' + 27;
+ else
+ goto corrupt;
+ /* if the input length was not multiple of 4, we would
+ * have filler at the end but the filler should never
+ * exceed 3 bytes
+ */
+ if (max_byte_length < byte_length ||
+ byte_length <= max_byte_length - 4)
+ goto corrupt;
+ newsize = fragment->size + byte_length;
+ data = xrealloc(data, newsize);
+ if (decode_85(data + fragment->size,
+ buffer + 1,
+ byte_length))
+ goto corrupt;
+ fragment->size = newsize;
+ buffer += llen;
+ size -= llen;
+ }
+ fragment->patch = data;
+ return used;
+ corrupt:
+ return error("corrupt binary patch at line %d: %.*s",
+ linenr-1, llen-1, buffer);
+}
+
static int parse_chunk(char *buffer, unsigned long size, struct patch *patch)
{
int hdrsize, patchsize;
"Files ",
NULL,
};
+ static const char git_binary[] = "GIT binary patch\n";
int i;
int hd = hdrsize + offset;
unsigned long llen = linelen(buffer + hd, size - hd);
- if (!memcmp(" differ\n", buffer + hd + llen - 8, 8))
+ if (llen == sizeof(git_binary) - 1 &&
+ !memcmp(git_binary, buffer + hd, llen)) {
+ int used;
+ linenr++;
+ used = parse_binary(buffer + hd + llen,
+ size - hd - llen, patch);
+ if (used)
+ patchsize = used + llen;
+ else
+ patchsize = 0;
+ }
+ else if (!memcmp(" differ\n", buffer + hd + llen - 8, 8)) {
for (i = 0; binhdr[i]; i++) {
int len = strlen(binhdr[i]);
if (len < size - hd &&
!memcmp(binhdr[i], buffer + hd, len)) {
+ linenr++;
patch->is_binary = 1;
+ patchsize = llen;
break;
}
}
+ }
/* Empty patch cannot be applied if:
* - it is a binary patch and we do not do binary_replace, or
return offset;
}
-static int apply_fragments(struct buffer_desc *desc, struct patch *patch)
+static char *inflate_it(const void *data, unsigned long size,
+ unsigned long inflated_size)
+{
+ z_stream stream;
+ void *out;
+ int st;
+
+ memset(&stream, 0, sizeof(stream));
+
+ stream.next_in = (unsigned char *)data;
+ stream.avail_in = size;
+ stream.next_out = out = xmalloc(inflated_size);
+ stream.avail_out = inflated_size;
+ inflateInit(&stream);
+ st = inflate(&stream, Z_FINISH);
+ if ((st != Z_STREAM_END) || stream.total_out != inflated_size) {
+ free(out);
+ return NULL;
+ }
+ return out;
+}
+
+static int apply_binary_fragment(struct buffer_desc *desc, struct patch *patch)
+{
+ unsigned long dst_size;
+ struct fragment *fragment = patch->fragments;
+ void *data;
+ void *result;
+
+ data = inflate_it(fragment->patch, fragment->size,
+ patch->deflate_origlen);
+ if (!data)
+ return error("corrupt patch data");
+ switch (patch->is_binary) {
+ case BINARY_DELTA_DEFLATED:
+ result = patch_delta(desc->buffer, desc->size,
+ data,
+ patch->deflate_origlen,
+ &dst_size);
+ free(desc->buffer);
+ desc->buffer = result;
+ free(data);
+ break;
+ case BINARY_LITERAL_DEFLATED:
+ free(desc->buffer);
+ desc->buffer = data;
+ dst_size = patch->deflate_origlen;
+ break;
+ }
+ if (!desc->buffer)
+ return -1;
+ desc->size = desc->alloc = dst_size;
+ return 0;
+}
+
+static int apply_binary(struct buffer_desc *desc, struct patch *patch)
{
- struct fragment *frag = patch->fragments;
const char *name = patch->old_name ? patch->old_name : patch->new_name;
+ unsigned char sha1[20];
+ unsigned char hdr[50];
+ int hdrlen;
- if (patch->is_binary) {
- unsigned char sha1[20];
+ if (!allow_binary_replacement)
+ return error("cannot apply binary patch to '%s' "
+ "without --allow-binary-replacement",
+ name);
- if (!allow_binary_replacement)
- return error("cannot apply binary patch to '%s' "
- "without --allow-binary-replacement",
- name);
+ /* For safety, we require patch index line to contain
+ * full 40-byte textual SHA1 for old and new, at least for now.
+ */
+ if (strlen(patch->old_sha1_prefix) != 40 ||
+ strlen(patch->new_sha1_prefix) != 40 ||
+ get_sha1_hex(patch->old_sha1_prefix, sha1) ||
+ get_sha1_hex(patch->new_sha1_prefix, sha1))
+ return error("cannot apply binary patch to '%s' "
+ "without full index line", name);
- /* For safety, we require patch index line to contain
- * full 40-byte textual SHA1 for old and new, at least for now.
+ if (patch->old_name) {
+ /* See if the old one matches what the patch
+ * applies to.
*/
- if (strlen(patch->old_sha1_prefix) != 40 ||
- strlen(patch->new_sha1_prefix) != 40 ||
- get_sha1_hex(patch->old_sha1_prefix, sha1) ||
- get_sha1_hex(patch->new_sha1_prefix, sha1))
- return error("cannot apply binary patch to '%s' "
- "without full index line", name);
-
- if (patch->old_name) {
- unsigned char hdr[50];
- int hdrlen;
-
- /* See if the old one matches what the patch
- * applies to.
- */
- write_sha1_file_prepare(desc->buffer, desc->size,
- blob_type, sha1, hdr, &hdrlen);
- if (strcmp(sha1_to_hex(sha1), patch->old_sha1_prefix))
- return error("the patch applies to '%s' (%s), "
- "which does not match the "
- "current contents.",
- name, sha1_to_hex(sha1));
- }
- else {
- /* Otherwise, the old one must be empty. */
- if (desc->size)
- return error("the patch applies to an empty "
- "'%s' but it is not empty", name);
- }
+ write_sha1_file_prepare(desc->buffer, desc->size,
+ blob_type, sha1, hdr, &hdrlen);
+ if (strcmp(sha1_to_hex(sha1), patch->old_sha1_prefix))
+ return error("the patch applies to '%s' (%s), "
+ "which does not match the "
+ "current contents.",
+ name, sha1_to_hex(sha1));
+ }
+ else {
+ /* Otherwise, the old one must be empty. */
+ if (desc->size)
+ return error("the patch applies to an empty "
+ "'%s' but it is not empty", name);
+ }
+
+ get_sha1_hex(patch->new_sha1_prefix, sha1);
+ if (!memcmp(sha1, null_sha1, 20)) {
+ free(desc->buffer);
+ desc->alloc = desc->size = 0;
+ desc->buffer = NULL;
+ return 0; /* deletion patch */
+ }
+
+ if (has_sha1_file(sha1)) {
+ /* We already have the postimage */
+ char type[10];
+ unsigned long size;
- /* For now, we do not record post-image data in the patch,
- * and require the object already present in the recipient's
- * object database.
+ free(desc->buffer);
+ desc->buffer = read_sha1_file(sha1, type, &size);
+ if (!desc->buffer)
+ return error("the necessary postimage %s for "
+ "'%s' cannot be read",
+ patch->new_sha1_prefix, name);
+ desc->alloc = desc->size = size;
+ }
+ else {
+ /* We have verified desc matches the preimage;
+ * apply the patch data to it, which is stored
+ * in the patch->fragments->{patch,size}.
*/
- if (desc->buffer) {
- free(desc->buffer);
- desc->alloc = desc->size = 0;
- }
- get_sha1_hex(patch->new_sha1_prefix, sha1);
-
- if (memcmp(sha1, null_sha1, 20)) {
- char type[10];
- unsigned long size;
-
- desc->buffer = read_sha1_file(sha1, type, &size);
- if (!desc->buffer)
- return error("the necessary postimage %s for "
- "'%s' does not exist",
- patch->new_sha1_prefix, name);
- desc->alloc = desc->size = size;
- }
+ if (apply_binary_fragment(desc, patch))
+ return error("binary patch does not apply to '%s'",
+ name);
- return 0;
+ /* verify that the result matches */
+ write_sha1_file_prepare(desc->buffer, desc->size, blob_type,
+ sha1, hdr, &hdrlen);
+ if (strcmp(sha1_to_hex(sha1), patch->new_sha1_prefix))
+ return error("binary patch to '%s' creates incorrect result", name);
}
+ return 0;
+}
+
+static int apply_fragments(struct buffer_desc *desc, struct patch *patch)
+{
+ struct fragment *frag = patch->fragments;
+ const char *name = patch->old_name ? patch->old_name : patch->new_name;
+
+ if (patch->is_binary)
+ return apply_binary(desc, patch);
+
while (frag) {
if (apply_one_fragment(desc, frag) < 0)
return error("patch failed: %s:%ld",
return 0;
}
-static int apply_data(struct patch *patch, struct stat *st)
+static int apply_data(struct patch *patch, struct stat *st, struct cache_entry *ce)
{
char *buf;
unsigned long size, alloc;
size = 0;
alloc = 0;
buf = NULL;
- if (patch->old_name) {
+ if (cached) {
+ if (ce) {
+ char type[20];
+ buf = read_sha1_file(ce->sha1, type, &size);
+ if (!buf)
+ return error("read of %s failed",
+ patch->old_name);
+ alloc = size;
+ }
+ }
+ else if (patch->old_name) {
size = st->st_size;
alloc = size + 8192;
buf = xmalloc(alloc);
const char *old_name = patch->old_name;
const char *new_name = patch->new_name;
const char *name = old_name ? old_name : new_name;
+ struct cache_entry *ce = NULL;
if (old_name) {
- int changed;
- int stat_ret = lstat(old_name, &st);
+ int changed = 0;
+ int stat_ret = 0;
+ unsigned st_mode = 0;
+ if (!cached)
+ stat_ret = lstat(old_name, &st);
if (check_index) {
int pos = cache_name_pos(old_name, strlen(old_name));
if (pos < 0)
return error("%s: does not exist in index",
old_name);
+ ce = active_cache[pos];
if (stat_ret < 0) {
struct checkout costate;
if (errno != ENOENT)
costate.quiet = 0;
costate.not_new = 0;
costate.refresh_cache = 1;
- if (checkout_entry(active_cache[pos],
+ if (checkout_entry(ce,
&costate,
NULL) ||
lstat(old_name, &st))
return -1;
}
-
- changed = ce_match_stat(active_cache[pos], &st, 1);
+ if (!cached)
+ changed = ce_match_stat(ce, &st, 1);
if (changed)
return error("%s: does not match index",
old_name);
+ if (cached)
+ st_mode = ntohl(ce->ce_mode);
}
else if (stat_ret < 0)
return error("%s: %s", old_name, strerror(errno));
+ if (!cached)
+ st_mode = ntohl(create_ce_mode(st.st_mode));
+
if (patch->is_new < 0)
patch->is_new = 0;
- st.st_mode = ntohl(create_ce_mode(st.st_mode));
if (!patch->old_mode)
- patch->old_mode = st.st_mode;
- if ((st.st_mode ^ patch->old_mode) & S_IFMT)
+ patch->old_mode = st_mode;
+ if ((st_mode ^ patch->old_mode) & S_IFMT)
return error("%s: wrong type", old_name);
- if (st.st_mode != patch->old_mode)
+ if (st_mode != patch->old_mode)
fprintf(stderr, "warning: %s has type %o, expected %o\n",
- old_name, st.st_mode, patch->old_mode);
+ old_name, st_mode, patch->old_mode);
}
if (new_name && (patch->is_new | patch->is_rename | patch->is_copy)) {
if (check_index && cache_name_pos(new_name, strlen(new_name)) >= 0)
return error("%s: already exists in index", new_name);
- if (!lstat(new_name, &st))
- return error("%s: already exists in working directory", new_name);
- if (errno != ENOENT)
- return error("%s: %s", new_name, strerror(errno));
+ if (!cached) {
+ if (!lstat(new_name, &st))
+ return error("%s: already exists in working directory", new_name);
+ if (errno != ENOENT)
+ return error("%s: %s", new_name, strerror(errno));
+ }
if (!patch->new_mode) {
if (patch->is_new)
patch->new_mode = S_IFREG | 0644;
return error("new mode (%o) of %s does not match old mode (%o)%s%s",
patch->new_mode, new_name, patch->old_mode,
same ? "" : " of ", same ? "" : old_name);
- }
+ }
- if (apply_data(patch, &st) < 0)
+ if (apply_data(patch, &st, ce) < 0)
return error("%s: patch does not apply", name);
return 0;
}
{
for ( ; patch; patch = patch->next) {
const char *name;
- name = patch->old_name ? patch->old_name : patch->new_name;
+ name = patch->new_name ? patch->new_name : patch->old_name;
printf("%d\t%d\t", patch->lines_added, patch->lines_deleted);
if (line_termination && quote_c_style(name, NULL, NULL, 0))
quote_c_style(name, NULL, stdout, 0);
if (remove_file_from_cache(patch->old_name) < 0)
die("unable to remove %s from index", patch->old_name);
}
- unlink(patch->old_name);
+ if (!cached)
+ unlink(patch->old_name);
}
static void add_index_file(const char *path, unsigned mode, void *buf, unsigned long size)
memcpy(ce->name, path, namelen);
ce->ce_mode = create_ce_mode(mode);
ce->ce_flags = htons(namelen);
- if (lstat(path, &st) < 0)
- die("unable to stat newly created file %s", path);
- fill_stat_cache_info(ce, &st);
+ if (!cached) {
+ if (lstat(path, &st) < 0)
+ die("unable to stat newly created file %s", path);
+ fill_stat_cache_info(ce, &st);
+ }
if (write_sha1_file(buf, size, blob_type, ce->sha1) < 0)
die("unable to create backing store for newly created file %s", path);
if (add_cache_entry(ce, ADD_CACHE_OK_TO_ADD) < 0)
*/
static void create_one_file(char *path, unsigned mode, const char *buf, unsigned long size)
{
+ if (cached)
+ return;
if (!try_create_file(path, mode, buf, size))
return;
static int apply_patch(int fd, const char *filename)
{
- int newfd;
unsigned long offset, size;
char *buffer = read_patch_file(fd, &size);
struct patch *list = NULL, **listp = &list;
size -= nr;
}
- newfd = -1;
if (whitespace_error && (new_whitespace == error_on_whitespace))
apply = 0;
write_index = check_index && apply;
- if (write_index)
+ if (write_index && newfd < 0)
newfd = hold_index_file_for_update(&cache_file, get_index_file());
if (check_index) {
if (read_cache() < 0)
if (apply)
write_out_results(list, skipped_patch);
- if (write_index) {
- if (write_cache(newfd, active_cache, active_nr) ||
- commit_index_file(&cache_file))
- die("Unable to write new cachefile");
- }
-
if (show_index_info)
show_index_list(list);
diffstat = 1;
continue;
}
- if (!strcmp(arg, "--allow-binary-replacement")) {
+ if (!strcmp(arg, "--allow-binary-replacement") ||
+ !strcmp(arg, "--binary")) {
allow_binary_replacement = 1;
continue;
}
check_index = 1;
continue;
}
+ if (!strcmp(arg, "--cached")) {
+ check_index = 1;
+ cached = 1;
+ continue;
+ }
if (!strcmp(arg, "--apply")) {
apply = 1;
continue;
whitespace_error == 1 ? "" : "s",
whitespace_error == 1 ? "s" : "");
}
+
+ if (write_index) {
+ if (write_cache(newfd, active_cache, active_nr) ||
+ commit_index_file(&cache_file))
+ die("Unable to write new cachefile");
+ }
+
return 0;
}
--- /dev/null
+#include "cache.h"
+
+#undef DEBUG_85
+
+#ifdef DEBUG_85
+#define say(a) fprintf(stderr, a)
+#define say1(a,b) fprintf(stderr, a, b)
+#define say2(a,b,c) fprintf(stderr, a, b, c)
+#else
+#define say(a) do {} while(0)
+#define say1(a,b) do {} while(0)
+#define say2(a,b,c) do {} while(0)
+#endif
+
+static const char en85[] = {
+ '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
+ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J',
+ 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T',
+ 'U', 'V', 'W', 'X', 'Y', 'Z',
+ 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',
+ 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
+ 'u', 'v', 'w', 'x', 'y', 'z',
+ '!', '#', '$', '%', '&', '(', ')', '*', '+', '-',
+ ';', '<', '=', '>', '?', '@', '^', '_', '`', '{',
+ '|', '}', '~'
+};
+
+static char de85[256];
+static void prep_base85(void)
+{
+ int i;
+ if (de85['Z'])
+ return;
+ for (i = 0; i < ARRAY_SIZE(en85); i++) {
+ int ch = en85[i];
+ de85[ch] = i + 1;
+ }
+}
+
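+/* Decode base-85 text from "buffer" into "len" bytes of binary data at "dst";
+ * every five input characters decode to (up to) four output bytes.
+ */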
+int decode_85(char *dst, char *buffer, int len)
+{
+ prep_base85();
+
+ say2("decode 85 <%.*s>", len/4*5, buffer);
+ while (len) {
+ unsigned acc = 0;
+ int de, cnt = 4;
+ unsigned char ch;
+ do {
+ ch = *buffer++;
+ de = de85[ch];
+ if (--de < 0)
+ return error("invalid base85 alphabet %c", ch);
+ acc = acc * 85 + de;
+ } while (--cnt);
+ ch = *buffer++;
+ de = de85[ch];
+ if (--de < 0)
+ return error("invalid base85 alphabet %c", ch);
+ /*
+ * Detect overflow. The largest
+ * 5-letter possible is "|NsC0" to
+ * encode 0xffffffff, and "|NsC" gives
+ * 0x03030303 at this point (i.e.
+ * 0xffffffff = 0x03030303 * 85).
+ */
+ if (0x03030303 < acc ||
+ 0xffffffff - de < (acc *= 85))
+ error("invalid base85 sequence %.5s", buffer-5);
+ acc += de;
+ say1(" %08x", acc);
+
+ cnt = (len < 4) ? len : 4;
+ len -= cnt;
+ do {
+ acc = (acc << 8) | (acc >> 24);
+ *dst++ = acc;
+ } while (--cnt);
+ }
+ say("\n");
+
+ return 0;
+}
+
+void encode_85(char *buf, unsigned char *data, int bytes)
+{
+ prep_base85();
+
+ say("encode 85");
+ while (bytes) {
+ unsigned acc = 0;
+ int cnt;
+ for (cnt = 24; cnt >= 0; cnt -= 8) {
+ int ch = *data++;
+ acc |= ch << cnt;
+ if (--bytes == 0)
+ break;
+ }
+ say1(" %08x", acc);
+ for (cnt = 4; cnt >= 0; cnt--) {
+ int val = acc % 85;
+ acc /= 85;
+ buf[cnt] = en85[val];
+ }
+ buf += 5;
+ }
+ say("\n");
+
+ *buf = 0;
+}
+
+#ifdef DEBUG_85
+int main(int ac, char **av)
+{
+ char buf[1024];
+
+ if (!strcmp(av[1], "-e")) {
+ int len = strlen(av[2]);
+ encode_85(buf, av[2], len);
+ if (len <= 26) len = len + 'A' - 1;
+ else len = len + 'a' - 26 + 1;
+ printf("encoded: %c%s\n", len, buf);
+ return 0;
+ }
+ if (!strcmp(av[1], "-d")) {
+ int len = *av[2];
+ if ('A' <= len && len <= 'Z') len = len - 'A' + 1;
+ else len = len - 'a' + 26 + 1;
+ decode_85(buf, av[2]+1, len);
+ printf("decoded: %.*s\n", len, buf);
+ return 0;
+ }
+ if (!strcmp(av[1], "-t")) {
+ char t[4] = { -1,-1,-1,-1 };
+ encode_85(buf, t, 4);
+ printf("encoded: D%s\n", buf);
+ return 0;
+ }
+}
+#endif
--- /dev/null
+/*
+ * GIT - The information manager from hell
+ */
+
+#include "cache.h"
+#include "refs.h"
+#include "builtin.h"
+
+int cmd_check_ref_format(int argc, const char **argv, char **envp)
+{
+ if (argc != 2)
+ usage("git check-ref-format refname");
+ return !!check_ref_format(argv[1]);
+}
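The double negation turns the library's negative error codes into a plain 0/1 exit status, so scripts can test a refname directly; a minimal sketch (refnames are hypothetical):

    $ git check-ref-format "heads/topic/frotz" && echo ok
    ok
    $ git check-ref-format "heads/bad name" || echo rejected
    rejected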
if (opt->reverse_diff) {
unsigned tmp;
- const
- const unsigned char *tmp_u;
+ const unsigned char *tmp_u;
const char *tmp_c;
tmp = old_mode; old_mode = new_mode; new_mode = tmp;
tmp_u = old_sha1; old_sha1 = new_sha1; new_sha1 = tmp_u;
stuff_change(&revs->diffopt,
canon_mode(st.st_mode), canon_mode(st.st_mode),
blob[0].sha1, null_sha1,
- blob[0].name, path);
+ path, path);
diffcore_std(&revs->diffopt);
diff_flush(&revs->diffopt);
return 0;
int argc, const char **argv,
struct blobinfo *blob)
{
- /* Blobs */
+ /* Blobs: the arguments are reversed when setup_revisions()
+ * picked them up.
+ */
unsigned mode = canon_mode(S_IFREG | 0644);
while (1 < argc) {
}
stuff_change(&revs->diffopt,
mode, mode,
- blob[0].sha1, blob[1].sha1,
- blob[1].name, blob[1].name);
+ blob[1].sha1, blob[0].sha1,
+ blob[0].name, blob[0].name);
diffcore_std(&revs->diffopt);
diff_flush(&revs->diffopt);
return 0;
--- /dev/null
+/*
+ * Builtin "git grep"
+ *
+ * Copyright (c) 2006 Junio C Hamano
+ */
+#include "cache.h"
+#include "blob.h"
+#include "tree.h"
+#include "commit.h"
+#include "tag.h"
+#include "tree-walk.h"
+#include "builtin.h"
+#include <regex.h>
+#include <fnmatch.h>
+#include <sys/wait.h>
+
+/*
+ * git grep pathspecs are somewhat different from diff-tree pathspecs;
+ * pathname wildcards are allowed.
+ */
+static int pathspec_matches(const char **paths, const char *name)
+{
+ int namelen, i;
+ if (!paths || !*paths)
+ return 1;
+ namelen = strlen(name);
+ for (i = 0; paths[i]; i++) {
+ const char *match = paths[i];
+ int matchlen = strlen(match);
+ const char *cp, *meta;
+
+ if ((matchlen <= namelen) &&
+ !strncmp(name, match, matchlen) &&
+ (match[matchlen-1] == '/' ||
+ name[matchlen] == '\0' || name[matchlen] == '/'))
+ return 1;
+ if (!fnmatch(match, name, 0))
+ return 1;
+ if (name[namelen-1] != '/')
+ continue;
+
+ /* We are being asked if the directory ("name") is worth
+ * descending into.
+ *
+ * Find the longest leading directory name that does
+ * not have metacharacter in the pathspec; the name
+ * we are looking at must overlap with that directory.
+ */
+ for (cp = match, meta = NULL; cp - match < matchlen; cp++) {
+ char ch = *cp;
+ if (ch == '*' || ch == '[' || ch == '?') {
+ meta = cp;
+ break;
+ }
+ }
+ if (!meta)
+ meta = cp; /* fully literal */
+
+ if (namelen <= meta - match) {
+ /* Looking at "Documentation/" and
+ * the pattern says "Documentation/howto/", or
+ * "Documentation/diff*.txt". The name we
+ * have should match prefix.
+ */
+ if (!memcmp(match, name, namelen))
+ return 1;
+ continue;
+ }
+
+ if (meta - match < namelen) {
+ /* Looking at "Documentation/howto/" and
+ * the pattern says "Documentation/h*";
+ * match up to "Do.../h"; this avoids descending
+ * into "Documentation/technical/".
+ */
+ if (!memcmp(match, name, meta - match))
+ return 1;
+ continue;
+ }
+ }
+ return 0;
+}
+
+struct grep_pat {
+ struct grep_pat *next;
+ const char *origin;
+ int no;
+ const char *pattern;
+ regex_t regexp;
+};
+
+struct grep_opt {
+ struct grep_pat *pattern_list;
+ struct grep_pat **pattern_tail;
+ regex_t regexp;
+ unsigned linenum:1;
+ unsigned invert:1;
+ unsigned name_only:1;
+ unsigned unmatch_name_only:1;
+ unsigned count:1;
+ unsigned word_regexp:1;
+ unsigned fixed:1;
+#define GREP_BINARY_DEFAULT 0
+#define GREP_BINARY_NOMATCH 1
+#define GREP_BINARY_TEXT 2
+ unsigned binary:2;
+ int regflags;
+ unsigned pre_context;
+ unsigned post_context;
+};
+
+static void add_pattern(struct grep_opt *opt, const char *pat,
+ const char *origin, int no)
+{
+ struct grep_pat *p = xcalloc(1, sizeof(*p));
+ p->pattern = pat;
+ p->origin = origin;
+ p->no = no;
+ *opt->pattern_tail = p;
+ opt->pattern_tail = &p->next;
+ p->next = NULL;
+}
+
+static void compile_patterns(struct grep_opt *opt)
+{
+ struct grep_pat *p;
+ for (p = opt->pattern_list; p; p = p->next) {
+ int err = regcomp(&p->regexp, p->pattern, opt->regflags);
+ if (err) {
+ char errbuf[1024];
+ char where[1024];
+ if (p->no)
+ sprintf(where, "In '%s' at %d, ",
+ p->origin, p->no);
+ else if (p->origin)
+ sprintf(where, "%s, ", p->origin);
+ else
+ where[0] = 0;
+ regerror(err, &p->regexp, errbuf, 1024);
+ regfree(&p->regexp);
+ die("%s'%s': %s", where, p->pattern, errbuf);
+ }
+ }
+}
+
+static char *end_of_line(char *cp, unsigned long *left)
+{
+ unsigned long l = *left;
+ while (l && *cp != '\n') {
+ l--;
+ cp++;
+ }
+ *left = l;
+ return cp;
+}
+
+static int word_char(char ch)
+{
+ return isalnum(ch) || ch == '_';
+}
+
+static void show_line(struct grep_opt *opt, const char *bol, const char *eol,
+ const char *name, unsigned lno, char sign)
+{
+ printf("%s%c", name, sign);
+ if (opt->linenum)
+ printf("%d%c", lno, sign);
+ printf("%.*s\n", (int)(eol-bol), bol);
+}
+
+/*
+ * NEEDSWORK: share code with diff.c
+ */
+#define FIRST_FEW_BYTES 8000
+static int buffer_is_binary(const char *ptr, unsigned long size)
+{
+ if (FIRST_FEW_BYTES < size)
+ size = FIRST_FEW_BYTES;
+ if (memchr(ptr, 0, size))
+ return 1;
+ return 0;
+}
+
+static int fixmatch(const char *pattern, char *line, regmatch_t *match)
+{
+ char *hit = strstr(line, pattern);
+ if (!hit) {
+ match->rm_so = match->rm_eo = -1;
+ return REG_NOMATCH;
+ }
+ else {
+ match->rm_so = hit - line;
+ match->rm_eo = match->rm_so + strlen(pattern);
+ return 0;
+ }
+}
+
+static int grep_buffer(struct grep_opt *opt, const char *name,
+ char *buf, unsigned long size)
+{
+ char *bol = buf;
+ unsigned long left = size;
+ unsigned lno = 1;
+ struct pre_context_line {
+ char *bol;
+ char *eol;
+ } *prev = NULL, *pcl;
+ unsigned last_hit = 0;
+ unsigned last_shown = 0;
+ int binary_match_only = 0;
+ const char *hunk_mark = "";
+ unsigned count = 0;
+
+ if (buffer_is_binary(buf, size)) {
+ switch (opt->binary) {
+ case GREP_BINARY_DEFAULT:
+ binary_match_only = 1;
+ break;
+ case GREP_BINARY_NOMATCH:
+ return 0; /* Assume unmatch */
+ break;
+ default:
+ break;
+ }
+ }
+
+ if (opt->pre_context)
+ prev = xcalloc(opt->pre_context, sizeof(*prev));
+ if (opt->pre_context || opt->post_context)
+ hunk_mark = "--\n";
+
+ while (left) {
+ regmatch_t pmatch[10];
+ char *eol, ch;
+ int hit = 0;
+ struct grep_pat *p;
+
+ eol = end_of_line(bol, &left);
+ ch = *eol;
+ *eol = 0;
+
+ for (p = opt->pattern_list; p; p = p->next) {
+ if (!opt->fixed) {
+ regex_t *exp = &p->regexp;
+ hit = !regexec(exp, bol, ARRAY_SIZE(pmatch),
+ pmatch, 0);
+ }
+ else {
+ hit = !fixmatch(p->pattern, bol, pmatch);
+ }
+
+ if (hit && opt->word_regexp) {
+ /* Match beginning must be either
+ * beginning of the line, or at word
+ * boundary (i.e. the last char must
+ * not be alnum or underscore).
+ */
+ if ((pmatch[0].rm_so < 0) ||
+ (eol - bol) <= pmatch[0].rm_so ||
+ (pmatch[0].rm_eo < 0) ||
+ (eol - bol) < pmatch[0].rm_eo)
+ die("regexp returned nonsense");
+ if (pmatch[0].rm_so != 0 &&
+ word_char(bol[pmatch[0].rm_so-1]))
+ hit = 0;
+ if (pmatch[0].rm_eo != (eol-bol) &&
+ word_char(bol[pmatch[0].rm_eo]))
+ hit = 0;
+ }
+ if (hit)
+ break;
+ }
+ /* "grep -v -e foo -e bla" should list lines
+ * that do not have either, so inversion should
+ * be done outside.
+ */
+ if (opt->invert)
+ hit = !hit;
+ if (opt->unmatch_name_only) {
+ if (hit)
+ return 0;
+ goto next_line;
+ }
+ if (hit) {
+ count++;
+ if (binary_match_only) {
+ printf("Binary file %s matches\n", name);
+ return 1;
+ }
+ if (opt->name_only) {
+ printf("%s\n", name);
+ return 1;
+ }
+ /* Hit at this line. If we haven't shown the
+ * pre-context lines, we would need to show them.
+			 * When asked to do "count", this still shows
+ * the context which is nonsense, but the user
+ * deserves to get that ;-).
+ */
+ if (opt->pre_context) {
+ unsigned from;
+ if (opt->pre_context < lno)
+ from = lno - opt->pre_context;
+ else
+ from = 1;
+ if (from <= last_shown)
+ from = last_shown + 1;
+ if (last_shown && from != last_shown + 1)
+ printf(hunk_mark);
+ while (from < lno) {
+ pcl = &prev[lno-from-1];
+ show_line(opt, pcl->bol, pcl->eol,
+ name, from, '-');
+ from++;
+ }
+ last_shown = lno-1;
+ }
+ if (last_shown && lno != last_shown + 1)
+ printf(hunk_mark);
+ if (!opt->count)
+ show_line(opt, bol, eol, name, lno, ':');
+ last_shown = last_hit = lno;
+ }
+ else if (last_hit &&
+ lno <= last_hit + opt->post_context) {
+ /* If the last hit is within the post context,
+ * we need to show this line.
+ */
+ if (last_shown && lno != last_shown + 1)
+ printf(hunk_mark);
+ show_line(opt, bol, eol, name, lno, '-');
+ last_shown = lno;
+ }
+ if (opt->pre_context) {
+ memmove(prev+1, prev,
+ (opt->pre_context-1) * sizeof(*prev));
+ prev->bol = bol;
+ prev->eol = eol;
+ }
+
+ next_line:
+ *eol = ch;
+ bol = eol + 1;
+ if (!left)
+ break;
+ left--;
+ lno++;
+ }
+
+ if (opt->unmatch_name_only) {
+ /* We did not see any hit, so we want to show this */
+ printf("%s\n", name);
+ return 1;
+ }
+
+ /* NEEDSWORK:
+ * The real "grep -c foo *.c" gives many "bar.c:0" lines,
+ * which feels mostly useless but sometimes useful. Maybe
+ * make it another option? For now suppress them.
+ */
+ if (opt->count && count)
+ printf("%s:%u\n", name, count);
+ return !!last_hit;
+}
+
+static int grep_sha1(struct grep_opt *opt, const unsigned char *sha1, const char *name)
+{
+ unsigned long size;
+ char *data;
+ char type[20];
+ int hit;
+ data = read_sha1_file(sha1, type, &size);
+ if (!data) {
+ error("'%s': unable to read %s", name, sha1_to_hex(sha1));
+ return 0;
+ }
+ hit = grep_buffer(opt, name, data, size);
+ free(data);
+ return hit;
+}
+
+static int grep_file(struct grep_opt *opt, const char *filename)
+{
+ struct stat st;
+ int i;
+ char *data;
+ if (lstat(filename, &st) < 0) {
+ err_ret:
+ if (errno != ENOENT)
+ error("'%s': %s", filename, strerror(errno));
+ return 0;
+ }
+ if (!st.st_size)
+ return 0; /* empty file -- no grep hit */
+ if (!S_ISREG(st.st_mode))
+ return 0;
+ i = open(filename, O_RDONLY);
+ if (i < 0)
+ goto err_ret;
+ data = xmalloc(st.st_size + 1);
+ if (st.st_size != xread(i, data, st.st_size)) {
+ error("'%s': short read %s", filename, strerror(errno));
+ close(i);
+ free(data);
+ return 0;
+ }
+ close(i);
+ i = grep_buffer(opt, filename, data, st.st_size);
+ free(data);
+ return i;
+}
+
+static int exec_grep(int argc, const char **argv)
+{
+ pid_t pid;
+ int status;
+
+ argv[argc] = NULL;
+ pid = fork();
+ if (pid < 0)
+ return pid;
+ if (!pid) {
+ execvp("grep", (char **) argv);
+ exit(255);
+ }
+ while (waitpid(pid, &status, 0) < 0) {
+ if (errno == EINTR)
+ continue;
+ return -1;
+ }
+ if (WIFEXITED(status)) {
+ if (!WEXITSTATUS(status))
+ return 1;
+ return 0;
+ }
+ return -1;
+}
+
+#define MAXARGS 1000
+#define ARGBUF 4096
+#define push_arg(a) do { \
+ if (nr < MAXARGS) argv[nr++] = (a); \
+ else die("maximum number of args exceeded"); \
+ } while (0)
+
+static int external_grep(struct grep_opt *opt, const char **paths, int cached)
+{
+ int i, nr, argc, hit, len;
+ const char *argv[MAXARGS+1];
+ char randarg[ARGBUF];
+ char *argptr = randarg;
+ struct grep_pat *p;
+
+ len = nr = 0;
+ push_arg("grep");
+ if (opt->fixed)
+ push_arg("-F");
+ if (opt->linenum)
+ push_arg("-n");
+ if (opt->regflags & REG_EXTENDED)
+ push_arg("-E");
+ if (opt->word_regexp)
+ push_arg("-w");
+ if (opt->name_only)
+ push_arg("-l");
+ if (opt->unmatch_name_only)
+ push_arg("-L");
+ if (opt->count)
+ push_arg("-c");
+ if (opt->post_context || opt->pre_context) {
+ if (opt->post_context != opt->pre_context) {
+ if (opt->pre_context) {
+ push_arg("-B");
+ len += snprintf(argptr, sizeof(randarg)-len,
+ "%u", opt->pre_context);
+ if (sizeof(randarg) <= len)
+ die("maximum length of args exceeded");
+ push_arg(argptr);
+ argptr += len;
+ }
+ if (opt->post_context) {
+ push_arg("-A");
+ len += snprintf(argptr, sizeof(randarg)-len,
+ "%u", opt->post_context);
+ if (sizeof(randarg) <= len)
+ die("maximum length of args exceeded");
+ push_arg(argptr);
+ argptr += len;
+ }
+ }
+ else {
+ push_arg("-C");
+ len += snprintf(argptr, sizeof(randarg)-len,
+ "%u", opt->post_context);
+ if (sizeof(randarg) <= len)
+ die("maximum length of args exceeded");
+ push_arg(argptr);
+ argptr += len;
+ }
+ }
+ for (p = opt->pattern_list; p; p = p->next) {
+ push_arg("-e");
+ push_arg(p->pattern);
+ }
+
+ /*
+ * To make sure we get the header printed out when we want it,
+ * add /dev/null to the paths to grep. This is unnecessary
+ * (and wrong) with "-l" or "-L", which always print out the
+ * name anyway.
+ *
+ * GNU grep has "-H", but this is portable.
+ */
+ if (!opt->name_only && !opt->unmatch_name_only)
+ push_arg("/dev/null");
+
+ hit = 0;
+ argc = nr;
+ for (i = 0; i < active_nr; i++) {
+ struct cache_entry *ce = active_cache[i];
+ const char *name;
+ if (ce_stage(ce) || !S_ISREG(ntohl(ce->ce_mode)))
+ continue;
+ if (!pathspec_matches(paths, ce->name))
+ continue;
+ name = ce->name;
+ if (name[0] == '-') {
+ int len = ce_namelen(ce);
+ name = xmalloc(len + 3);
+ memcpy(name, "./", 2);
+ memcpy(name + 2, ce->name, len + 1);
+ }
+ argv[argc++] = name;
+ if (argc < MAXARGS)
+ continue;
+ hit += exec_grep(argc, argv);
+ argc = nr;
+ }
+ if (argc > nr)
+ hit += exec_grep(argc, argv);
+ return 0;
+}
+
+static int grep_cache(struct grep_opt *opt, const char **paths, int cached)
+{
+ int hit = 0;
+ int nr;
+ read_cache();
+
+#ifdef __unix__
+ /*
+ * Use the external "grep" command for the case where
+ * we grep through the checked-out files. It tends to
+ * be a lot more optimized
+ */
+ if (!cached) {
+ hit = external_grep(opt, paths, cached);
+ if (hit >= 0)
+ return hit;
+ }
+#endif
+
+ for (nr = 0; nr < active_nr; nr++) {
+ struct cache_entry *ce = active_cache[nr];
+ if (ce_stage(ce) || !S_ISREG(ntohl(ce->ce_mode)))
+ continue;
+ if (!pathspec_matches(paths, ce->name))
+ continue;
+ if (cached)
+ hit |= grep_sha1(opt, ce->sha1, ce->name);
+ else
+ hit |= grep_file(opt, ce->name);
+ }
+ return hit;
+}
+
+static int grep_tree(struct grep_opt *opt, const char **paths,
+ struct tree_desc *tree,
+ const char *tree_name, const char *base)
+{
+ unsigned mode;
+ int len;
+ int hit = 0;
+ const char *path;
+ const unsigned char *sha1;
+ char *down;
+ char *path_buf = xmalloc(PATH_MAX + strlen(tree_name) + 100);
+
+ if (tree_name[0]) {
+ int offset = sprintf(path_buf, "%s:", tree_name);
+ down = path_buf + offset;
+ strcat(down, base);
+ }
+ else {
+ down = path_buf;
+ strcpy(down, base);
+ }
+ len = strlen(path_buf);
+
+ while (tree->size) {
+ int pathlen;
+ sha1 = tree_entry_extract(tree, &path, &mode);
+ pathlen = strlen(path);
+ strcpy(path_buf + len, path);
+
+ if (S_ISDIR(mode))
+ /* Match "abc/" against pathspec to
+ * decide if we want to descend into "abc"
+ * directory.
+ */
+ strcpy(path_buf + len + pathlen, "/");
+
+ if (!pathspec_matches(paths, down))
+ ;
+ else if (S_ISREG(mode))
+ hit |= grep_sha1(opt, sha1, path_buf);
+ else if (S_ISDIR(mode)) {
+ char type[20];
+ struct tree_desc sub;
+ void *data;
+ data = read_sha1_file(sha1, type, &sub.size);
+ if (!data)
+ die("unable to read tree (%s)",
+ sha1_to_hex(sha1));
+ sub.buf = data;
+ hit |= grep_tree(opt, paths, &sub, tree_name, down);
+ free(data);
+ }
+ update_tree_entry(tree);
+ }
+ return hit;
+}
+
+static int grep_object(struct grep_opt *opt, const char **paths,
+ struct object *obj, const char *name)
+{
+ if (!strcmp(obj->type, blob_type))
+ return grep_sha1(opt, obj->sha1, name);
+ if (!strcmp(obj->type, commit_type) ||
+ !strcmp(obj->type, tree_type)) {
+ struct tree_desc tree;
+ void *data;
+ int hit;
+ data = read_object_with_reference(obj->sha1, tree_type,
+ &tree.size, NULL);
+ if (!data)
+ die("unable to read tree (%s)", sha1_to_hex(obj->sha1));
+ tree.buf = data;
+ hit = grep_tree(opt, paths, &tree, name, "");
+ free(data);
+ return hit;
+ }
+ die("unable to grep from object of type %s", obj->type);
+}
+
+static const char builtin_grep_usage[] =
+"git-grep <option>* <rev>* [-e] <pattern> [<path>...]";
+
+int cmd_grep(int argc, const char **argv, char **envp)
+{
+ int hit = 0;
+ int cached = 0;
+ int seen_dashdash = 0;
+ struct grep_opt opt;
+ struct object_list *list, **tail, *object_list = NULL;
+ const char *prefix = setup_git_directory();
+ const char **paths = NULL;
+ int i;
+
+ memset(&opt, 0, sizeof(opt));
+ opt.pattern_tail = &opt.pattern_list;
+ opt.regflags = REG_NEWLINE;
+
+ /*
+ * If there is no -- then the paths must exist in the working
+ * tree. If there is no explicit pattern specified with -e or
+ * -f, we take the first unrecognized non option to be the
+ * pattern, but then what follows it must be zero or more
+ * valid refs up to the -- (if exists), and then existing
+ * paths. If there is an explicit pattern, then the first
+	 * unrecognized non option is the beginning of the refs list
+ * that continues up to the -- (if exists), and then paths.
+ */
+
+ tail = &object_list;
+ while (1 < argc) {
+ const char *arg = argv[1];
+ argc--; argv++;
+ if (!strcmp("--cached", arg)) {
+ cached = 1;
+ continue;
+ }
+ if (!strcmp("-a", arg) ||
+ !strcmp("--text", arg)) {
+ opt.binary = GREP_BINARY_TEXT;
+ continue;
+ }
+ if (!strcmp("-i", arg) ||
+ !strcmp("--ignore-case", arg)) {
+ opt.regflags |= REG_ICASE;
+ continue;
+ }
+ if (!strcmp("-I", arg)) {
+ opt.binary = GREP_BINARY_NOMATCH;
+ continue;
+ }
+ if (!strcmp("-v", arg) ||
+ !strcmp("--invert-match", arg)) {
+ opt.invert = 1;
+ continue;
+ }
+ if (!strcmp("-E", arg) ||
+ !strcmp("--extended-regexp", arg)) {
+ opt.regflags |= REG_EXTENDED;
+ continue;
+ }
+ if (!strcmp("-F", arg) ||
+ !strcmp("--fixed-strings", arg)) {
+ opt.fixed = 1;
+ continue;
+ }
+ if (!strcmp("-G", arg) ||
+ !strcmp("--basic-regexp", arg)) {
+ opt.regflags &= ~REG_EXTENDED;
+ continue;
+ }
+ if (!strcmp("-n", arg)) {
+ opt.linenum = 1;
+ continue;
+ }
+ if (!strcmp("-H", arg)) {
+ /* We always show the pathname, so this
+ * is a noop.
+ */
+ continue;
+ }
+ if (!strcmp("-l", arg) ||
+ !strcmp("--files-with-matches", arg)) {
+ opt.name_only = 1;
+ continue;
+ }
+ if (!strcmp("-L", arg) ||
+ !strcmp("--files-without-match", arg)) {
+ opt.unmatch_name_only = 1;
+ continue;
+ }
+ if (!strcmp("-c", arg) ||
+ !strcmp("--count", arg)) {
+ opt.count = 1;
+ continue;
+ }
+ if (!strcmp("-w", arg) ||
+ !strcmp("--word-regexp", arg)) {
+ opt.word_regexp = 1;
+ continue;
+ }
+ if (!strncmp("-A", arg, 2) ||
+ !strncmp("-B", arg, 2) ||
+ !strncmp("-C", arg, 2) ||
+ (arg[0] == '-' && '1' <= arg[1] && arg[1] <= '9')) {
+ unsigned num;
+ const char *scan;
+ switch (arg[1]) {
+ case 'A': case 'B': case 'C':
+ if (!arg[2]) {
+ if (argc <= 1)
+ usage(builtin_grep_usage);
+ scan = *++argv;
+ argc--;
+ }
+ else
+ scan = arg + 2;
+ break;
+ default:
+ scan = arg + 1;
+ break;
+ }
+ if (sscanf(scan, "%u", &num) != 1)
+ usage(builtin_grep_usage);
+ switch (arg[1]) {
+ case 'A':
+ opt.post_context = num;
+ break;
+ default:
+ case 'C':
+ opt.post_context = num;
+ case 'B':
+ opt.pre_context = num;
+ break;
+ }
+ continue;
+ }
+ if (!strcmp("-f", arg)) {
+ FILE *patterns;
+ int lno = 0;
+ char buf[1024];
+ if (argc <= 1)
+ usage(builtin_grep_usage);
+ patterns = fopen(argv[1], "r");
+ if (!patterns)
+ die("'%s': %s", argv[1], strerror(errno));
+ while (fgets(buf, sizeof(buf), patterns)) {
+ int len = strlen(buf);
+ if (buf[len-1] == '\n')
+ buf[len-1] = 0;
+ /* ignore empty line like grep does */
+ if (!buf[0])
+ continue;
+ add_pattern(&opt, strdup(buf), argv[1], ++lno);
+ }
+ fclose(patterns);
+ argv++;
+ argc--;
+ continue;
+ }
+ if (!strcmp("-e", arg)) {
+ if (1 < argc) {
+ add_pattern(&opt, argv[1], "-e option", 0);
+ argv++;
+ argc--;
+ continue;
+ }
+ usage(builtin_grep_usage);
+ }
+ if (!strcmp("--", arg))
+ break;
+ if (*arg == '-')
+ usage(builtin_grep_usage);
+
+ /* First unrecognized non-option token */
+ if (!opt.pattern_list) {
+ add_pattern(&opt, arg, "command line", 0);
+ break;
+ }
+ else {
+ /* We are looking at the first path or rev;
+ * it is found at argv[1] after leaving the
+ * loop.
+ */
+ argc++; argv--;
+ break;
+ }
+ }
+
+ if (!opt.pattern_list)
+ die("no pattern given.");
+ if ((opt.regflags != REG_NEWLINE) && opt.fixed)
+ die("cannot mix --fixed-strings and regexp");
+ if (!opt.fixed)
+ compile_patterns(&opt);
+
+ /* Check revs and then paths */
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+ unsigned char sha1[20];
+ /* Is it a rev? */
+ if (!get_sha1(arg, sha1)) {
+ struct object *object = parse_object(sha1);
+ struct object_list *elem;
+ if (!object)
+ die("bad object %s", arg);
+ elem = object_list_insert(object, tail);
+ elem->name = arg;
+ tail = &elem->next;
+ continue;
+ }
+ if (!strcmp(arg, "--")) {
+ i++;
+ seen_dashdash = 1;
+ }
+ break;
+ }
+
+ /* The rest are paths */
+ if (!seen_dashdash) {
+ int j;
+ for (j = i; j < argc; j++)
+ verify_filename(prefix, argv[j]);
+ }
+
+ if (i < argc)
+ paths = get_pathspec(prefix, argv + i);
+ else if (prefix) {
+ paths = xcalloc(2, sizeof(const char *));
+ paths[0] = prefix;
+ paths[1] = NULL;
+ }
+
+ if (!object_list)
+ return !grep_cache(&opt, paths, cached);
+
+ if (cached)
+ die("both --cached and trees are given.");
+
+ for (list = object_list; list; list = list->next) {
+ struct object *real_obj;
+ real_obj = deref_tag(list->item, NULL, 0);
+ if (grep_object(&opt, paths, real_obj, list->name))
+ hit = 1;
+ }
+ return !hit;
+}
--- /dev/null
+/*
+ * GIT - The information manager from hell
+ *
+ * Copyright (C) Linus Torvalds, 2005
+ */
+#include "cache.h"
+#include "builtin.h"
+
+#ifndef DEFAULT_GIT_TEMPLATE_DIR
+#define DEFAULT_GIT_TEMPLATE_DIR "/usr/share/git-core/templates/"
+#endif
+
+static void safe_create_dir(const char *dir, int share)
+{
+ if (mkdir(dir, 0777) < 0) {
+ if (errno != EEXIST) {
+ perror(dir);
+ exit(1);
+ }
+ }
+ else if (share && adjust_shared_perm(dir))
+ die("Could not make %s writable by group\n", dir);
+}
+
+static int copy_file(const char *dst, const char *src, int mode)
+{
+ int fdi, fdo, status;
+
+ mode = (mode & 0111) ? 0777 : 0666;
+ if ((fdi = open(src, O_RDONLY)) < 0)
+ return fdi;
+ if ((fdo = open(dst, O_WRONLY | O_CREAT | O_EXCL, mode)) < 0) {
+ close(fdi);
+ return fdo;
+ }
+ status = copy_fd(fdi, fdo);
+ close(fdo);
+
+ if (!status && adjust_shared_perm(dst))
+ return -1;
+
+ return status;
+}
+
+static void copy_templates_1(char *path, int baselen,
+ char *template, int template_baselen,
+ DIR *dir)
+{
+ struct dirent *de;
+
+ /* Note: if ".git/hooks" file exists in the repository being
+ * re-initialized, /etc/core-git/templates/hooks/update would
+ * cause git-init-db to fail here. I think this is sane but
+ * it means that the set of templates we ship by default, along
+ * with the way the namespace under .git/ is organized, should
+ * be really carefully chosen.
+ */
+ safe_create_dir(path, 1);
+ while ((de = readdir(dir)) != NULL) {
+ struct stat st_git, st_template;
+ int namelen;
+ int exists = 0;
+
+ if (de->d_name[0] == '.')
+ continue;
+ namelen = strlen(de->d_name);
+ if ((PATH_MAX <= baselen + namelen) ||
+ (PATH_MAX <= template_baselen + namelen))
+ die("insanely long template name %s", de->d_name);
+ memcpy(path + baselen, de->d_name, namelen+1);
+ memcpy(template + template_baselen, de->d_name, namelen+1);
+ if (lstat(path, &st_git)) {
+ if (errno != ENOENT)
+ die("cannot stat %s", path);
+ }
+ else
+ exists = 1;
+
+ if (lstat(template, &st_template))
+ die("cannot stat template %s", template);
+
+ if (S_ISDIR(st_template.st_mode)) {
+ DIR *subdir = opendir(template);
+ int baselen_sub = baselen + namelen;
+ int template_baselen_sub = template_baselen + namelen;
+ if (!subdir)
+ die("cannot opendir %s", template);
+ path[baselen_sub++] =
+ template[template_baselen_sub++] = '/';
+ path[baselen_sub] =
+ template[template_baselen_sub] = 0;
+ copy_templates_1(path, baselen_sub,
+ template, template_baselen_sub,
+ subdir);
+ closedir(subdir);
+ }
+ else if (exists)
+ continue;
+ else if (S_ISLNK(st_template.st_mode)) {
+ char lnk[256];
+ int len;
+ len = readlink(template, lnk, sizeof(lnk));
+ if (len < 0)
+ die("cannot readlink %s", template);
+ if (sizeof(lnk) <= len)
+ die("insanely long symlink %s", template);
+ lnk[len] = 0;
+ if (symlink(lnk, path))
+ die("cannot symlink %s %s", lnk, path);
+ }
+ else if (S_ISREG(st_template.st_mode)) {
+ if (copy_file(path, template, st_template.st_mode))
+ die("cannot copy %s to %s", template, path);
+ }
+ else
+ error("ignoring template %s", template);
+ }
+}
+
+static void copy_templates(const char *git_dir, int len, const char *template_dir)
+{
+ char path[PATH_MAX];
+ char template_path[PATH_MAX];
+ int template_len;
+ DIR *dir;
+
+ if (!template_dir)
+ template_dir = DEFAULT_GIT_TEMPLATE_DIR;
+ strcpy(template_path, template_dir);
+ template_len = strlen(template_path);
+ if (template_path[template_len-1] != '/') {
+ template_path[template_len++] = '/';
+ template_path[template_len] = 0;
+ }
+ dir = opendir(template_path);
+ if (!dir) {
+ fprintf(stderr, "warning: templates not found %s\n",
+ template_dir);
+ return;
+ }
+
+ /* Make sure that template is from the correct vintage */
+ strcpy(template_path + template_len, "config");
+ repository_format_version = 0;
+ git_config_from_file(check_repository_format_version,
+ template_path);
+ template_path[template_len] = 0;
+
+ if (repository_format_version &&
+ repository_format_version != GIT_REPO_VERSION) {
+ fprintf(stderr, "warning: not copying templates of "
+ "a wrong format version %d from '%s'\n",
+ repository_format_version,
+ template_dir);
+ closedir(dir);
+ return;
+ }
+
+ memcpy(path, git_dir, len);
+ path[len] = 0;
+ copy_templates_1(path, len,
+ template_path, template_len,
+ dir);
+ closedir(dir);
+}
+
+static void create_default_files(const char *git_dir, const char *template_path)
+{
+ unsigned len = strlen(git_dir);
+ static char path[PATH_MAX];
+ unsigned char sha1[20];
+ struct stat st1;
+ char repo_version_string[10];
+
+ if (len > sizeof(path)-50)
+ die("insane git directory %s", git_dir);
+ memcpy(path, git_dir, len);
+
+ if (len && path[len-1] != '/')
+ path[len++] = '/';
+
+ /*
+ * Create .git/refs/{heads,tags}
+ */
+ strcpy(path + len, "refs");
+ safe_create_dir(path, 1);
+ strcpy(path + len, "refs/heads");
+ safe_create_dir(path, 1);
+ strcpy(path + len, "refs/tags");
+ safe_create_dir(path, 1);
+
+ /* First copy the templates -- we might have the default
+ * config file there, in which case we would want to read
+ * from it after installing.
+ */
+ path[len] = 0;
+ copy_templates(path, len, template_path);
+
+ git_config(git_default_config);
+
+ /*
+ * Create the default symlink from ".git/HEAD" to the "master"
+ * branch, if it does not exist yet.
+ */
+ strcpy(path + len, "HEAD");
+ if (read_ref(path, sha1) < 0) {
+ if (create_symref(path, "refs/heads/master") < 0)
+ exit(1);
+ }
+
+	/* This forces creation of a new config file */
+ sprintf(repo_version_string, "%d", GIT_REPO_VERSION);
+ git_config_set("core.repositoryformatversion", repo_version_string);
+
+ path[len] = 0;
+ strcpy(path + len, "config");
+
+ /* Check filemode trustability */
+ if (!lstat(path, &st1)) {
+ struct stat st2;
+ int filemode = (!chmod(path, st1.st_mode ^ S_IXUSR) &&
+ !lstat(path, &st2) &&
+ st1.st_mode != st2.st_mode);
+ git_config_set("core.filemode",
+ filemode ? "true" : "false");
+ }
+}
+
+static const char init_db_usage[] =
+"git-init-db [--template=<template-directory>] [--shared]";
+
+/*
+ * If you want to, you can share the DB area with any number of branches.
+ * That has advantages: you can save space by sharing all the SHA1 objects.
+ * On the other hand, it might just make lookup slower and messier. You
+ * be the judge. The default case is to have one DB per managed directory.
+ */
+int cmd_init_db(int argc, const char **argv, char **envp)
+{
+ const char *git_dir;
+ const char *sha1_dir;
+ const char *template_dir = NULL;
+ char *path;
+ int len, i;
+
+ for (i = 1; i < argc; i++, argv++) {
+ const char *arg = argv[1];
+ if (!strncmp(arg, "--template=", 11))
+ template_dir = arg+11;
+ else if (!strcmp(arg, "--shared"))
+ shared_repository = 1;
+ else
+ die(init_db_usage);
+ }
+
+ /*
+ * Set up the default .git directory contents
+ */
+ git_dir = getenv(GIT_DIR_ENVIRONMENT);
+ if (!git_dir) {
+ git_dir = DEFAULT_GIT_DIR_ENVIRONMENT;
+ fprintf(stderr, "defaulting to local storage area\n");
+ }
+ safe_create_dir(git_dir, 0);
+
+ /* Check to see if the repository version is right.
+	 * Note that a newly created repository does not have a
+	 * config file, so this will not fail. What we are catching
+	 * is an attempt to reinitialize a new repository with an old tool.
+ */
+ check_repository_format();
+
+ create_default_files(git_dir, template_dir);
+
+ /*
+ * And set up the object store.
+ */
+ sha1_dir = get_object_directory();
+ len = strlen(sha1_dir);
+ path = xmalloc(len + 40);
+ memcpy(path, sha1_dir, len);
+
+ safe_create_dir(sha1_dir, 1);
+ strcpy(path+len, "/pack");
+ safe_create_dir(path, 1);
+ strcpy(path+len, "/info");
+ safe_create_dir(path, 1);
+
+ if (shared_repository)
+ git_config_set("core.sharedRepository", "true");
+
+ return 0;
+}
rev->commit_format = CMIT_FMT_DEFAULT;
rev->verbose_header = 1;
argc = setup_revisions(argc, argv, rev, "HEAD");
+ if (rev->always_show_header) {
+ if (rev->diffopt.pickaxe || rev->diffopt.filter) {
+ rev->always_show_header = 0;
+ if (rev->diffopt.output_format == DIFF_FORMAT_RAW)
+ rev->diffopt.output_format = DIFF_FORMAT_NO_OUTPUT;
+ }
+ }
if (argc > 1)
die("unrecognized argument: %s", argv[1]);
--- /dev/null
+#include "cache.h"
+#include "refs.h"
+#include "tag.h"
+#include "commit.h"
+#include "tree.h"
+#include "blob.h"
+#include "tree-walk.h"
+#include "diff.h"
+#include "revision.h"
+#include "builtin.h"
+
+/* bits #0-15 in revision.h */
+
+#define COUNTED (1u<<16)
+
+static const char rev_list_usage[] =
+"git-rev-list [OPTION] <commit-id>... [ -- paths... ]\n"
+" limiting output:\n"
+" --max-count=nr\n"
+" --max-age=epoch\n"
+" --min-age=epoch\n"
+" --sparse\n"
+" --no-merges\n"
+" --remove-empty\n"
+" --all\n"
+" ordering output:\n"
+" --topo-order\n"
+" --date-order\n"
+" formatting output:\n"
+" --parents\n"
+" --objects | --objects-edge\n"
+" --unpacked\n"
+" --header | --pretty\n"
+" --abbrev=nr | --no-abbrev\n"
+" --abbrev-commit\n"
+" special purpose:\n"
+" --bisect"
+;
+
+static struct rev_info revs;
+
+static int bisect_list = 0;
+static int show_timestamp = 0;
+static int hdr_termination = 0;
+static const char *header_prefix;
+
+static void show_commit(struct commit *commit)
+{
+ if (show_timestamp)
+ printf("%lu ", commit->date);
+ if (header_prefix)
+ fputs(header_prefix, stdout);
+ if (commit->object.flags & BOUNDARY)
+ putchar('-');
+ if (revs.abbrev_commit && revs.abbrev)
+ fputs(find_unique_abbrev(commit->object.sha1, revs.abbrev),
+ stdout);
+ else
+ fputs(sha1_to_hex(commit->object.sha1), stdout);
+ if (revs.parents) {
+ struct commit_list *parents = commit->parents;
+ while (parents) {
+ struct object *o = &(parents->item->object);
+ parents = parents->next;
+ if (o->flags & TMP_MARK)
+ continue;
+ printf(" %s", sha1_to_hex(o->sha1));
+ o->flags |= TMP_MARK;
+ }
+		/* TMP_MARK is a general-purpose flag that can
+		 * be used locally, but the user should clean
+		 * things up when done with it.
+ */
+ for (parents = commit->parents;
+ parents;
+ parents = parents->next)
+ parents->item->object.flags &= ~TMP_MARK;
+ }
+ if (revs.commit_format == CMIT_FMT_ONELINE)
+ putchar(' ');
+ else
+ putchar('\n');
+
+ if (revs.verbose_header) {
+ static char pretty_header[16384];
+ pretty_print_commit(revs.commit_format, commit, ~0,
+ pretty_header, sizeof(pretty_header),
+ revs.abbrev, NULL);
+ printf("%s%c", pretty_header, hdr_termination);
+ }
+ fflush(stdout);
+}
+
+static struct object_list **process_blob(struct blob *blob,
+ struct object_list **p,
+ struct name_path *path,
+ const char *name)
+{
+ struct object *obj = &blob->object;
+
+ if (!revs.blob_objects)
+ return p;
+ if (obj->flags & (UNINTERESTING | SEEN))
+ return p;
+ obj->flags |= SEEN;
+ return add_object(obj, p, path, name);
+}
+
+static struct object_list **process_tree(struct tree *tree,
+ struct object_list **p,
+ struct name_path *path,
+ const char *name)
+{
+ struct object *obj = &tree->object;
+ struct tree_entry_list *entry;
+ struct name_path me;
+
+ if (!revs.tree_objects)
+ return p;
+ if (obj->flags & (UNINTERESTING | SEEN))
+ return p;
+ if (parse_tree(tree) < 0)
+ die("bad tree object %s", sha1_to_hex(obj->sha1));
+ obj->flags |= SEEN;
+ p = add_object(obj, p, path, name);
+ me.up = path;
+ me.elem = name;
+ me.elem_len = strlen(name);
+ entry = tree->entries;
+ tree->entries = NULL;
+ while (entry) {
+ struct tree_entry_list *next = entry->next;
+ if (entry->directory)
+ p = process_tree(entry->item.tree, p, &me, entry->name);
+ else
+ p = process_blob(entry->item.blob, p, &me, entry->name);
+ free(entry);
+ entry = next;
+ }
+ return p;
+}
+
+static void show_commit_list(struct rev_info *revs)
+{
+ struct commit *commit;
+ struct object_list *objects = NULL, **p = &objects, *pending;
+
+ while ((commit = get_revision(revs)) != NULL) {
+ p = process_tree(commit->tree, p, NULL, "");
+ show_commit(commit);
+ }
+ for (pending = revs->pending_objects; pending; pending = pending->next) {
+ struct object *obj = pending->item;
+ const char *name = pending->name;
+ if (obj->flags & (UNINTERESTING | SEEN))
+ continue;
+ if (obj->type == tag_type) {
+ obj->flags |= SEEN;
+ p = add_object(obj, p, NULL, name);
+ continue;
+ }
+ if (obj->type == tree_type) {
+ p = process_tree((struct tree *)obj, p, NULL, name);
+ continue;
+ }
+ if (obj->type == blob_type) {
+ p = process_blob((struct blob *)obj, p, NULL, name);
+ continue;
+ }
+ die("unknown pending object %s (%s)", sha1_to_hex(obj->sha1), name);
+ }
+ while (objects) {
+ /* An object with name "foo\n0000000..." can be used to
+ * confuse downstream git-pack-objects very badly.
+ */
+ const char *ep = strchr(objects->name, '\n');
+ if (ep) {
+ printf("%s %.*s\n", sha1_to_hex(objects->item->sha1),
+ (int) (ep - objects->name),
+ objects->name);
+ }
+ else
+ printf("%s %s\n", sha1_to_hex(objects->item->sha1), objects->name);
+ objects = objects->next;
+ }
+}
+
+/*
+ * This is a truly stupid algorithm, but it's only
+ * used for bisection, and we just don't care enough.
+ *
+ * We care just barely enough to avoid recursing for
+ * non-merge entries.
+ */
+static int count_distance(struct commit_list *entry)
+{
+ int nr = 0;
+
+ while (entry) {
+ struct commit *commit = entry->item;
+ struct commit_list *p;
+
+ if (commit->object.flags & (UNINTERESTING | COUNTED))
+ break;
+ if (!revs.prune_fn || (commit->object.flags & TREECHANGE))
+ nr++;
+ commit->object.flags |= COUNTED;
+ p = commit->parents;
+ entry = p;
+ if (p) {
+ p = p->next;
+ while (p) {
+ nr += count_distance(p);
+ p = p->next;
+ }
+ }
+ }
+
+ return nr;
+}
+
+static void clear_distance(struct commit_list *list)
+{
+ while (list) {
+ struct commit *commit = list->item;
+ commit->object.flags &= ~COUNTED;
+ list = list->next;
+ }
+}
+
+static struct commit_list *find_bisection(struct commit_list *list)
+{
+ int nr, closest;
+ struct commit_list *p, *best;
+
+ nr = 0;
+ p = list;
+ while (p) {
+ if (!revs.prune_fn || (p->item->object.flags & TREECHANGE))
+ nr++;
+ p = p->next;
+ }
+ closest = 0;
+ best = list;
+
+ for (p = list; p; p = p->next) {
+ int distance;
+
+ if (revs.prune_fn && !(p->item->object.flags & TREECHANGE))
+ continue;
+
+ distance = count_distance(p);
+ clear_distance(list);
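+		/* the best bisection point maximizes the smaller of the two
+		 * sides it would split the history into
+		 */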
+ if (nr - distance < distance)
+ distance = nr - distance;
+ if (distance > closest) {
+ best = p;
+ closest = distance;
+ }
+ }
+ if (best)
+ best->next = NULL;
+ return best;
+}
+
+static void mark_edge_parents_uninteresting(struct commit *commit)
+{
+ struct commit_list *parents;
+
+ for (parents = commit->parents; parents; parents = parents->next) {
+ struct commit *parent = parents->item;
+ if (!(parent->object.flags & UNINTERESTING))
+ continue;
+ mark_tree_uninteresting(parent->tree);
+ if (revs.edge_hint && !(parent->object.flags & SHOWN)) {
+ parent->object.flags |= SHOWN;
+ printf("-%s\n", sha1_to_hex(parent->object.sha1));
+ }
+ }
+}
+
+static void mark_edges_uninteresting(struct commit_list *list)
+{
+ for ( ; list; list = list->next) {
+ struct commit *commit = list->item;
+
+ if (commit->object.flags & UNINTERESTING) {
+ mark_tree_uninteresting(commit->tree);
+ continue;
+ }
+ mark_edge_parents_uninteresting(commit);
+ }
+}
+
+int cmd_rev_list(int argc, const char **argv, char **envp)
+{
+ struct commit_list *list;
+ int i;
+
+ init_revisions(&revs);
+ revs.abbrev = 0;
+ revs.commit_format = CMIT_FMT_UNSPECIFIED;
+ argc = setup_revisions(argc, argv, &revs, NULL);
+
+ for (i = 1 ; i < argc; i++) {
+ const char *arg = argv[i];
+
+ if (!strcmp(arg, "--header")) {
+ revs.verbose_header = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--timestamp")) {
+ show_timestamp = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--bisect")) {
+ bisect_list = 1;
+ continue;
+ }
+ usage(rev_list_usage);
+
+ }
+ if (revs.commit_format != CMIT_FMT_UNSPECIFIED) {
+ /* The command line has a --pretty */
+ hdr_termination = '\n';
+ if (revs.commit_format == CMIT_FMT_ONELINE)
+ header_prefix = "";
+ else
+ header_prefix = "commit ";
+ }
+ else if (revs.verbose_header)
+ /* Only --header was specified */
+ revs.commit_format = CMIT_FMT_RAW;
+
+ list = revs.commits;
+
+ if ((!list &&
+ (!(revs.tag_objects||revs.tree_objects||revs.blob_objects) &&
+ !revs.pending_objects)) ||
+ revs.diff)
+ usage(rev_list_usage);
+
+ save_commit_buffer = revs.verbose_header;
+ track_object_refs = 0;
+ if (bisect_list)
+ revs.limited = 1;
+
+ prepare_revision_walk(&revs);
+ if (revs.tree_objects)
+ mark_edges_uninteresting(revs.commits);
+
+ if (bisect_list)
+ revs.commits = find_bisection(revs.commits);
+
+ show_commit_list(&revs);
+
+ return 0;
+}
extern int cmd_count_objects(int argc, const char **argv, char **envp);
extern int cmd_push(int argc, const char **argv, char **envp);
+extern int cmd_grep(int argc, const char **argv, char **envp);
+extern int cmd_rev_list(int argc, const char **argv, char **envp);
+extern int cmd_check_ref_format(int argc, const char **argv, char **envp);
+extern int cmd_init_db(int argc, const char **argv, char **envp);
#endif
extern int index_path(unsigned char *sha1, const char *path, struct stat *st, int write_object);
extern void fill_stat_cache_info(struct cache_entry *ce, struct stat *st);
+#define REFRESH_REALLY 0x0001 /* ignore_valid */
+#define REFRESH_UNMERGED 0x0002 /* allow unmerged */
+#define REFRESH_QUIET 0x0004 /* be quiet about it */
+#define REFRESH_IGNORE_MISSING 0x0008 /* ignore non-existent */
+extern int refresh_cache(unsigned int flags);
+
struct cache_file {
struct cache_file *next;
char lockfile[PATH_MAX];
/* pager.c */
extern void setup_pager(void);
+/* base85 */
+int decode_85(char *dst, char *line, int linelen);
+void encode_85(char *buf, unsigned char *data, int bytes);
+
#endif /* CACHE_H */
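As a quick reference for the base85 helpers declared above, the following round-trip sketch shows the intended calling convention. This is editorial illustration, not part of any patch; it assumes encode_85() NUL-terminates its output, that decode_85() takes the number of decoded bytes and returns non-zero on bad input (which is how the binary-patch reader uses it), and that memcmp()/die() are available as usual:

	unsigned char data[] = "0123456789abcdef";	/* 17 bytes including NUL */
	char encoded[32];	/* roughly 5 output characters per 4 input bytes */
	unsigned char decoded[sizeof(data)];

	encode_85(encoded, data, sizeof(data));
	if (decode_85((char *)decoded, encoded, sizeof(decoded)) ||
	    memcmp(data, decoded, sizeof(data)))
		die("base85 round-trip failed");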
setup_git_directory();
git_config(git_default_config);
- if (argc != 3 || get_sha1(argv[2], sha1))
+ if (argc != 3)
usage("git-cat-file [-t|-s|-e|-p|<type>] <sha1>");
+ if (get_sha1(argv[2], sha1))
+ die("Not a valid object name %s", argv[2]);
opt = 0;
if ( argv[1][0] == '-' ) {
return !has_sha1_file(sha1);
case 'p':
- if (get_sha1(argv[2], sha1) ||
- sha1_object_info(sha1, type, NULL))
+ if (sha1_object_info(sha1, type, NULL))
die("Not a valid object name %s", argv[2]);
/* custom pretty-print here */
+++ /dev/null
-/*
- * GIT - The information manager from hell
- */
-
-#include "cache.h"
-#include "refs.h"
-
-#include <stdio.h>
-
-int main(int ac, char **av)
-{
- if (ac != 2)
- usage("git-check-ref-format refname");
- if (check_ref_format(av[1]))
- exit(1);
- return 0;
-}
die("git-checkout-index: don't mix '--stdin' and explicit filenames");
p = prefix_path(prefix, prefix_length, arg);
checkout_file(p);
- if (p != arg)
+ if (p < arg || p > arg + strlen(arg))
free((char*)p);
}
path_name = buf.buf;
p = prefix_path(prefix, prefix_length, path_name);
checkout_file(p);
- if (p != path_name)
+ if (p < path_name || p > path_name + strlen(path_name))
free((char *)p);
if (path_name != buf.buf)
free(path_name);
int abbrev = opt->full_index ? 40 : DEFAULT_ABBREV;
mmfile_t result_file;
+ context = opt->context;
/* Read the result of merge first */
if (!working_tree_file)
result = grab_blob(elem->sha1, &result_size);
git_config(git_default_config);
- if (argc < 2 || get_sha1(argv[1], tree_sha1) < 0)
+ if (argc < 2)
usage(commit_tree_usage);
+ if (get_sha1(argv[1], tree_sha1))
+ die("Not a valid object name %s", argv[1]);
check_valid(tree_sha1, tree_type);
for (i = 2; i < argc; i += 2) {
char *a, *b;
a = argv[i]; b = argv[i+1];
- if (!b || strcmp(a, "-p") || get_sha1(b, parent_sha1[parents]))
+ if (!b || strcmp(a, "-p"))
usage(commit_tree_usage);
+ if (get_sha1(b, parent_sha1[parents]))
+ die("Not a valid object name %s", b);
check_valid(parent_sha1[parents], commit_type);
if (new_parent(parents))
parents++;
const char *commit_type = "commit";
+struct cmt_fmt_map {
+ const char *n;
+ size_t cmp_len;
+ enum cmit_fmt v;
+} cmt_fmts[] = {
+ { "raw", 1, CMIT_FMT_RAW },
+ { "medium", 1, CMIT_FMT_MEDIUM },
+ { "short", 1, CMIT_FMT_SHORT },
+ { "email", 1, CMIT_FMT_EMAIL },
+ { "full", 5, CMIT_FMT_FULL },
+ { "fuller", 5, CMIT_FMT_FULLER },
+ { "oneline", 1, CMIT_FMT_ONELINE },
+};
+
enum cmit_fmt get_commit_format(const char *arg)
{
- if (!*arg)
+ int i;
+
+ if (!arg || !*arg)
return CMIT_FMT_DEFAULT;
- if (!strcmp(arg, "=raw"))
- return CMIT_FMT_RAW;
- if (!strcmp(arg, "=medium"))
- return CMIT_FMT_MEDIUM;
- if (!strcmp(arg, "=short"))
- return CMIT_FMT_SHORT;
- if (!strcmp(arg, "=full"))
- return CMIT_FMT_FULL;
- if (!strcmp(arg, "=fuller"))
- return CMIT_FMT_FULLER;
- if (!strcmp(arg, "=email"))
- return CMIT_FMT_EMAIL;
- if (!strcmp(arg, "=oneline"))
- return CMIT_FMT_ONELINE;
- die("invalid --pretty format");
+ if (*arg == '=')
+ arg++;
+ for (i = 0; i < ARRAY_SIZE(cmt_fmts); i++) {
+ if (!strncmp(arg, cmt_fmts[i].n, cmt_fmts[i].cmp_len))
+ return cmt_fmts[i].v;
+ }
+
+ die("invalid --pretty format: %s", arg);
}
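With the table-driven lookup any unambiguous prefix is accepted: a cmp_len of 5 keeps "full" and "fuller" apart, while a single letter is enough for the rest. A few illustrative calls (assert() is used purely for the example and is not in the patch):

	assert(get_commit_format(NULL) == CMIT_FMT_DEFAULT);
	assert(get_commit_format("=oneline") == CMIT_FMT_ONELINE);
	assert(get_commit_format("=o") == CMIT_FMT_ONELINE);	/* one letter suffices */
	assert(get_commit_format("=full") == CMIT_FMT_FULL);
	assert(get_commit_format("=fulle") == CMIT_FMT_FULLER);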
static struct commit *check_commit(struct object *obj,
return fn(name, value);
}
+static int get_extended_base_var(char *name, int baselen, int c)
+{
+ do {
+ if (c == '\n')
+ return -1;
+ c = get_next_char();
+ } while (isspace(c));
+
+ /* We require the format to be '[base "extension"]' */
+ if (c != '"')
+ return -1;
+ name[baselen++] = '.';
+
+ for (;;) {
+ int c = get_next_char();
+ if (c == '\n')
+ return -1;
+ if (c == '"')
+ break;
+ if (c == '\\') {
+ c = get_next_char();
+ if (c == '\n')
+ return -1;
+ }
+ name[baselen++] = c;
+ if (baselen > MAXNAME / 2)
+ return -1;
+ }
+
+ /* Final ']' */
+ if (get_next_char() != ']')
+ return -1;
+ return baselen;
+}
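The syntax accepted here is the two-part section header; the names and URL below are made up for illustration:

	[remote "example.com"]
		url = git://example.com/pub/repo.git

The quoted part is taken verbatim (a backslash escapes the following character) and is joined to the section name with a '.', so the key above is looked up as remote.example.com.url; store_write_section() further down writes the same quoted form back out.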
+
static int get_base_var(char *name)
{
int baselen = 0;
return -1;
if (c == ']')
return baselen;
+ if (isspace(c))
+ return get_extended_base_var(name, baselen, c);
if (!isalnum(c) && c != '.')
return -1;
if (baselen > MAXNAME / 2)
store.offset[store.seen] = ftell(config_file);
store.state = KEY_SEEN;
store.seen++;
- } else if (strrchr(key, '.') - key == store.baselen &&
+ } else {
+ if (strrchr(key, '.') - key == store.baselen &&
!strncmp(key, store.key, store.baselen)) {
store.state = SECTION_SEEN;
store.offset[store.seen] = ftell(config_file);
+ }
}
}
return 0;
static void store_write_section(int fd, const char* key)
{
+ const char *dot = strchr(key, '.');
+ int len1 = store.baselen, len2 = -1;
+
+ if (dot) {
+ int dotlen = dot - key;
+ if (dotlen < len1) {
+ len2 = len1 - dotlen - 1;
+ len1 = dotlen;
+ }
+ }
+
write(fd, "[", 1);
- write(fd, key, store.baselen);
+ write(fd, key, len1);
+ if (len2 >= 0) {
+ write(fd, " \"", 2);
+ while (--len2 >= 0) {
+ unsigned char c = *++dot;
+ if (c == '"')
+ write(fd, "\\", 1);
+ write(fd, &c, 1);
+ }
+ write(fd, "\"", 1);
+ }
write(fd, "]\n", 2);
}
int git_config_set_multivar(const char* key, const char* value,
const char* value_regex, int multi_replace)
{
- int i;
- int fd, in_fd;
+ int i, dot;
+ int fd = -1, in_fd;
int ret;
char* config_filename = strdup(git_path("config"));
char* lock_file = strdup(git_path("config.lock"));
* Validate the key and while at it, lower case it for matching.
*/
store.key = (char*)malloc(strlen(key)+1);
- for (i = 0; key[i]; i++)
- if (i != store.baselen &&
- ((!isalnum(key[i]) && key[i] != '.') ||
- (i == store.baselen+1 && !isalpha(key[i])))) {
- fprintf(stderr, "invalid key: %s\n", key);
- free(store.key);
- ret = 1;
- goto out_free;
- } else
- store.key[i] = tolower(key[i]);
+ dot = 0;
+ for (i = 0; key[i]; i++) {
+ unsigned char c = key[i];
+ if (c == '.')
+ dot = 1;
+ /* Leave the extended basename untouched.. */
+ if (!dot || i > store.baselen) {
+ if (!isalnum(c) || (i == store.baselen+1 && !isalpha(c))) {
+ fprintf(stderr, "invalid key: %s\n", key);
+ free(store.key);
+ ret = 1;
+ goto out_free;
+ }
+ c = tolower(c);
+ }
+ store.key[i] = c;
+ }
store.key[i] = 0;
/*
if ( ENOENT != errno ) {
error("opening %s: %s", config_filename,
strerror(errno));
- close(fd);
- unlink(lock_file);
ret = 3; /* same as "invalid config file" */
goto out_free;
}
/* if nothing to unset, error out */
if (value == NULL) {
- close(fd);
- unlink(lock_file);
ret = 5;
goto out_free;
}
/* if nothing to unset, or too many matches, error out */
if ((store.seen == 0 && value == NULL) ||
(store.seen > 1 && multi_replace == 0)) {
- close(fd);
- unlink(lock_file);
ret = 5;
goto out_free;
}
unlink(config_filename);
}
- close(fd);
-
if (rename(lock_file, config_filename) < 0) {
fprintf(stderr, "Could not rename the lock file?\n");
ret = 4;
ret = 0;
out_free:
+ if (0 <= fd)
+ close(fd);
if (config_filename)
free(config_filename);
- if (lock_file)
+ if (lock_file) {
+ unlink(lock_file);
free(lock_file);
+ }
return ret;
}
--- /dev/null
+#!/bin/sh
+
+# Use this tool to rewrite your .git/remotes/ files into the config.
+
+. git-sh-setup
+
+if [ -d "$GIT_DIR"/remotes ]; then
+ echo "Rewriting $GIT_DIR/remotes" >&2
+ error=0
+ # rewrite into config
+ {
+ cd "$GIT_DIR"/remotes
+ ls | while read f; do
+ name=$(echo -n "$f" | tr -c "A-Za-z0-9" ".")
+ sed -n \
+ -e "s/^URL: \(.*\)$/remote.$name.url \1 ./p" \
+ -e "s/^Pull: \(.*\)$/remote.$name.fetch \1 ^$ /p" \
+ -e "s/^Push: \(.*\)$/remote.$name.push \1 ^$ /p" \
+ < "$f"
+ done
+ echo done
+ } | while read key value regex; do
+ case $key in
+ done)
+ if [ $error = 0 ]; then
+ mv "$GIT_DIR"/remotes "$GIT_DIR"/remotes.old
+ fi ;;
+ *)
+ echo "git-repo-config $key "$value" $regex"
+ git-repo-config $key "$value" $regex || error=1 ;;
+ esac
+ done
+fi
+
+
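To illustrate the rewrite (the remote name and URL are made up), a file $GIT_DIR/remotes/origin containing

	URL: git://example.com/pub/repo.git
	Pull: refs/heads/master:refs/heads/origin

is replayed as git-repo-config calls that record remote.origin.url and remote.origin.fetch in the config file; only when every file converts without error is the old directory moved aside to $GIT_DIR/remotes.old.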
setup_git_directory();
- if (argc != 2 || get_sha1(argv[1], sha1))
+ if (argc != 2)
usage("git-convert-objects <sha1>");
+ if (get_sha1(argv[1], sha1))
+ die("Not a valid object name %s", argv[1]);
entry = convert_entry(sha1);
printf("new sha1: %s\n", sha1_to_hex(entry->new_sha1));
#ifndef DELTA_H
#define DELTA_H
-/* handling of delta buffers */
-extern void *diff_delta(const void *from_buf, unsigned long from_size,
- const void *to_buf, unsigned long to_size,
- unsigned long *delta_size, unsigned long max_size);
-extern void *patch_delta(void *src_buf, unsigned long src_size,
+/* opaque object for delta index */
+struct delta_index;
+
+/*
+ * create_delta_index: compute index data from given buffer
+ *
+ * This returns a pointer to a struct delta_index that should be passed to
+ * subsequent create_delta() calls, or to free_delta_index(). A NULL pointer
+ * is returned on failure. The given buffer must not be freed nor altered
+ * before free_delta_index() is called. The returned pointer must be freed
+ * using free_delta_index().
+ */
+extern struct delta_index *
+create_delta_index(const void *buf, unsigned long bufsize);
+
+/*
+ * free_delta_index: free the index created by create_delta_index()
+ *
+ * Given pointer must be what create_delta_index() returned, or NULL.
+ */
+extern void free_delta_index(struct delta_index *index);
+
+/*
+ * create_delta: create a delta from given index for the given buffer
+ *
+ * This function may be called multiple times with different buffers using
+ * the same delta_index pointer. If max_delta_size is non-zero and the
+ * resulting delta is to be larger than max_delta_size then NULL is returned.
+ * On success, a non-NULL pointer to the buffer with the delta data is
+ * returned and *delta_size is updated with its size. The returned buffer
+ * must be freed by the caller.
+ */
+extern void *
+create_delta(const struct delta_index *index,
+ const void *buf, unsigned long bufsize,
+ unsigned long *delta_size, unsigned long max_delta_size);
+
+/*
+ * diff_delta: create a delta from source buffer to target buffer
+ *
+ * If max_delta_size is non-zero and the resulting delta is to be larger
+ * than max_delta_size then NULL is returned. On success, a non-NULL
+ * pointer to the buffer with the delta data is returned and *delta_size is
+ * updated with its size. The returned buffer must be freed by the caller.
+ */
+static inline void *
+diff_delta(const void *src_buf, unsigned long src_bufsize,
+ const void *trg_buf, unsigned long trg_bufsize,
+ unsigned long *delta_size, unsigned long max_delta_size)
+{
+ struct delta_index *index = create_delta_index(src_buf, src_bufsize);
+ if (index) {
+ void *delta = create_delta(index, trg_buf, trg_bufsize,
+ delta_size, max_delta_size);
+ free_delta_index(index);
+ return delta;
+ }
+ return NULL;
+}
+
+/*
+ * patch_delta: recreate target buffer given source buffer and delta data
+ *
+ * On success, a non-NULL pointer to the target buffer is returned and
+ * *trg_bufsize is updated with its size. On failure a NULL pointer is
+ * returned. The returned buffer must be freed by the caller.
+ */
+extern void *patch_delta(const void *src_buf, unsigned long src_size,
const void *delta_buf, unsigned long delta_size,
unsigned long *dst_size);
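To make the contract above concrete, a round-trip check could look like the sketch below (editorial illustration only; the helper name and buffers are invented, and memcmp()/free() are assumed to be available):

	static int delta_roundtrip_ok(const void *src, unsigned long src_size,
				      const void *trg, unsigned long trg_size)
	{
		unsigned long delta_size, out_size;
		void *delta, *out;
		int ok;

		delta = diff_delta(src, src_size, trg, trg_size, &delta_size, 0);
		if (!delta)
			return 0;	/* no delta could be computed */
		out = patch_delta(src, src_size, delta, delta_size, &out_size);
		free(delta);
		if (!out)
			return 0;
		/* the reconstructed target must match the original byte for byte */
		ok = (out_size == trg_size && !memcmp(out, trg, trg_size));
		free(out);
		return ok;
	}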
/*
* This must be called twice on the delta data buffer, first to get the
- * expected reference buffer size, and again to get the result buffer size.
+ * expected source buffer size, and again to get the target buffer size.
*/
static inline unsigned long get_delta_hdr_size(const unsigned char **datap,
const unsigned char *top)
static int initialized = 0;
struct commit_name *n;
- if (get_sha1(arg, sha1) < 0)
- usage(describe_usage);
+ if (get_sha1(arg, sha1))
+ die("Not a valid object name %s", arg);
cmit = lookup_commit_reference(sha1);
if (!cmit)
- usage(describe_usage);
+ die("%s is not a valid '%s' object", arg, commit_type);
if (!initialized) {
initialized = 1;
#include <stdlib.h>
#include <string.h>
-#include <zlib.h>
#include "delta.h"
-/* block size: min = 16, max = 64k, power of 2 */
-#define BLK_SIZE 16
-
-#define MIN(a, b) ((a) < (b) ? (a) : (b))
+/* maximum hash entry list for the same hash bucket */
+#define HASH_LIMIT 64
+
+#define RABIN_SHIFT 23
+#define RABIN_WINDOW 16
+
+static const unsigned int T[256] = {
+ 0x00000000, 0xab59b4d1, 0x56b369a2, 0xfdeadd73, 0x063f6795, 0xad66d344,
+ 0x508c0e37, 0xfbd5bae6, 0x0c7ecf2a, 0xa7277bfb, 0x5acda688, 0xf1941259,
+ 0x0a41a8bf, 0xa1181c6e, 0x5cf2c11d, 0xf7ab75cc, 0x18fd9e54, 0xb3a42a85,
+ 0x4e4ef7f6, 0xe5174327, 0x1ec2f9c1, 0xb59b4d10, 0x48719063, 0xe32824b2,
+ 0x1483517e, 0xbfdae5af, 0x423038dc, 0xe9698c0d, 0x12bc36eb, 0xb9e5823a,
+ 0x440f5f49, 0xef56eb98, 0x31fb3ca8, 0x9aa28879, 0x6748550a, 0xcc11e1db,
+ 0x37c45b3d, 0x9c9defec, 0x6177329f, 0xca2e864e, 0x3d85f382, 0x96dc4753,
+ 0x6b369a20, 0xc06f2ef1, 0x3bba9417, 0x90e320c6, 0x6d09fdb5, 0xc6504964,
+ 0x2906a2fc, 0x825f162d, 0x7fb5cb5e, 0xd4ec7f8f, 0x2f39c569, 0x846071b8,
+ 0x798aaccb, 0xd2d3181a, 0x25786dd6, 0x8e21d907, 0x73cb0474, 0xd892b0a5,
+ 0x23470a43, 0x881ebe92, 0x75f463e1, 0xdeadd730, 0x63f67950, 0xc8afcd81,
+ 0x354510f2, 0x9e1ca423, 0x65c91ec5, 0xce90aa14, 0x337a7767, 0x9823c3b6,
+ 0x6f88b67a, 0xc4d102ab, 0x393bdfd8, 0x92626b09, 0x69b7d1ef, 0xc2ee653e,
+ 0x3f04b84d, 0x945d0c9c, 0x7b0be704, 0xd05253d5, 0x2db88ea6, 0x86e13a77,
+ 0x7d348091, 0xd66d3440, 0x2b87e933, 0x80de5de2, 0x7775282e, 0xdc2c9cff,
+ 0x21c6418c, 0x8a9ff55d, 0x714a4fbb, 0xda13fb6a, 0x27f92619, 0x8ca092c8,
+ 0x520d45f8, 0xf954f129, 0x04be2c5a, 0xafe7988b, 0x5432226d, 0xff6b96bc,
+ 0x02814bcf, 0xa9d8ff1e, 0x5e738ad2, 0xf52a3e03, 0x08c0e370, 0xa39957a1,
+ 0x584ced47, 0xf3155996, 0x0eff84e5, 0xa5a63034, 0x4af0dbac, 0xe1a96f7d,
+ 0x1c43b20e, 0xb71a06df, 0x4ccfbc39, 0xe79608e8, 0x1a7cd59b, 0xb125614a,
+ 0x468e1486, 0xedd7a057, 0x103d7d24, 0xbb64c9f5, 0x40b17313, 0xebe8c7c2,
+ 0x16021ab1, 0xbd5bae60, 0x6cb54671, 0xc7ecf2a0, 0x3a062fd3, 0x915f9b02,
+ 0x6a8a21e4, 0xc1d39535, 0x3c394846, 0x9760fc97, 0x60cb895b, 0xcb923d8a,
+ 0x3678e0f9, 0x9d215428, 0x66f4eece, 0xcdad5a1f, 0x3047876c, 0x9b1e33bd,
+ 0x7448d825, 0xdf116cf4, 0x22fbb187, 0x89a20556, 0x7277bfb0, 0xd92e0b61,
+ 0x24c4d612, 0x8f9d62c3, 0x7836170f, 0xd36fa3de, 0x2e857ead, 0x85dcca7c,
+ 0x7e09709a, 0xd550c44b, 0x28ba1938, 0x83e3ade9, 0x5d4e7ad9, 0xf617ce08,
+ 0x0bfd137b, 0xa0a4a7aa, 0x5b711d4c, 0xf028a99d, 0x0dc274ee, 0xa69bc03f,
+ 0x5130b5f3, 0xfa690122, 0x0783dc51, 0xacda6880, 0x570fd266, 0xfc5666b7,
+ 0x01bcbbc4, 0xaae50f15, 0x45b3e48d, 0xeeea505c, 0x13008d2f, 0xb85939fe,
+ 0x438c8318, 0xe8d537c9, 0x153feaba, 0xbe665e6b, 0x49cd2ba7, 0xe2949f76,
+ 0x1f7e4205, 0xb427f6d4, 0x4ff24c32, 0xe4abf8e3, 0x19412590, 0xb2189141,
+ 0x0f433f21, 0xa41a8bf0, 0x59f05683, 0xf2a9e252, 0x097c58b4, 0xa225ec65,
+ 0x5fcf3116, 0xf49685c7, 0x033df00b, 0xa86444da, 0x558e99a9, 0xfed72d78,
+ 0x0502979e, 0xae5b234f, 0x53b1fe3c, 0xf8e84aed, 0x17bea175, 0xbce715a4,
+ 0x410dc8d7, 0xea547c06, 0x1181c6e0, 0xbad87231, 0x4732af42, 0xec6b1b93,
+ 0x1bc06e5f, 0xb099da8e, 0x4d7307fd, 0xe62ab32c, 0x1dff09ca, 0xb6a6bd1b,
+ 0x4b4c6068, 0xe015d4b9, 0x3eb80389, 0x95e1b758, 0x680b6a2b, 0xc352defa,
+ 0x3887641c, 0x93ded0cd, 0x6e340dbe, 0xc56db96f, 0x32c6cca3, 0x999f7872,
+ 0x6475a501, 0xcf2c11d0, 0x34f9ab36, 0x9fa01fe7, 0x624ac294, 0xc9137645,
+ 0x26459ddd, 0x8d1c290c, 0x70f6f47f, 0xdbaf40ae, 0x207afa48, 0x8b234e99,
+ 0x76c993ea, 0xdd90273b, 0x2a3b52f7, 0x8162e626, 0x7c883b55, 0xd7d18f84,
+ 0x2c043562, 0x875d81b3, 0x7ab75cc0, 0xd1eee811
+};
-#define GR_PRIME 0x9e370001
-#define HASH(v, shift) (((unsigned int)(v) * GR_PRIME) >> (shift))
+static const unsigned int U[256] = {
+ 0x00000000, 0x7eb5200d, 0x5633f4cb, 0x2886d4c6, 0x073e5d47, 0x798b7d4a,
+ 0x510da98c, 0x2fb88981, 0x0e7cba8e, 0x70c99a83, 0x584f4e45, 0x26fa6e48,
+ 0x0942e7c9, 0x77f7c7c4, 0x5f711302, 0x21c4330f, 0x1cf9751c, 0x624c5511,
+ 0x4aca81d7, 0x347fa1da, 0x1bc7285b, 0x65720856, 0x4df4dc90, 0x3341fc9d,
+ 0x1285cf92, 0x6c30ef9f, 0x44b63b59, 0x3a031b54, 0x15bb92d5, 0x6b0eb2d8,
+ 0x4388661e, 0x3d3d4613, 0x39f2ea38, 0x4747ca35, 0x6fc11ef3, 0x11743efe,
+ 0x3eccb77f, 0x40799772, 0x68ff43b4, 0x164a63b9, 0x378e50b6, 0x493b70bb,
+ 0x61bda47d, 0x1f088470, 0x30b00df1, 0x4e052dfc, 0x6683f93a, 0x1836d937,
+ 0x250b9f24, 0x5bbebf29, 0x73386bef, 0x0d8d4be2, 0x2235c263, 0x5c80e26e,
+ 0x740636a8, 0x0ab316a5, 0x2b7725aa, 0x55c205a7, 0x7d44d161, 0x03f1f16c,
+ 0x2c4978ed, 0x52fc58e0, 0x7a7a8c26, 0x04cfac2b, 0x73e5d470, 0x0d50f47d,
+ 0x25d620bb, 0x5b6300b6, 0x74db8937, 0x0a6ea93a, 0x22e87dfc, 0x5c5d5df1,
+ 0x7d996efe, 0x032c4ef3, 0x2baa9a35, 0x551fba38, 0x7aa733b9, 0x041213b4,
+ 0x2c94c772, 0x5221e77f, 0x6f1ca16c, 0x11a98161, 0x392f55a7, 0x479a75aa,
+ 0x6822fc2b, 0x1697dc26, 0x3e1108e0, 0x40a428ed, 0x61601be2, 0x1fd53bef,
+ 0x3753ef29, 0x49e6cf24, 0x665e46a5, 0x18eb66a8, 0x306db26e, 0x4ed89263,
+ 0x4a173e48, 0x34a21e45, 0x1c24ca83, 0x6291ea8e, 0x4d29630f, 0x339c4302,
+ 0x1b1a97c4, 0x65afb7c9, 0x446b84c6, 0x3adea4cb, 0x1258700d, 0x6ced5000,
+ 0x4355d981, 0x3de0f98c, 0x15662d4a, 0x6bd30d47, 0x56ee4b54, 0x285b6b59,
+ 0x00ddbf9f, 0x7e689f92, 0x51d01613, 0x2f65361e, 0x07e3e2d8, 0x7956c2d5,
+ 0x5892f1da, 0x2627d1d7, 0x0ea10511, 0x7014251c, 0x5facac9d, 0x21198c90,
+ 0x099f5856, 0x772a785b, 0x4c921c31, 0x32273c3c, 0x1aa1e8fa, 0x6414c8f7,
+ 0x4bac4176, 0x3519617b, 0x1d9fb5bd, 0x632a95b0, 0x42eea6bf, 0x3c5b86b2,
+ 0x14dd5274, 0x6a687279, 0x45d0fbf8, 0x3b65dbf5, 0x13e30f33, 0x6d562f3e,
+ 0x506b692d, 0x2ede4920, 0x06589de6, 0x78edbdeb, 0x5755346a, 0x29e01467,
+ 0x0166c0a1, 0x7fd3e0ac, 0x5e17d3a3, 0x20a2f3ae, 0x08242768, 0x76910765,
+ 0x59298ee4, 0x279caee9, 0x0f1a7a2f, 0x71af5a22, 0x7560f609, 0x0bd5d604,
+ 0x235302c2, 0x5de622cf, 0x725eab4e, 0x0ceb8b43, 0x246d5f85, 0x5ad87f88,
+ 0x7b1c4c87, 0x05a96c8a, 0x2d2fb84c, 0x539a9841, 0x7c2211c0, 0x029731cd,
+ 0x2a11e50b, 0x54a4c506, 0x69998315, 0x172ca318, 0x3faa77de, 0x411f57d3,
+ 0x6ea7de52, 0x1012fe5f, 0x38942a99, 0x46210a94, 0x67e5399b, 0x19501996,
+ 0x31d6cd50, 0x4f63ed5d, 0x60db64dc, 0x1e6e44d1, 0x36e89017, 0x485db01a,
+ 0x3f77c841, 0x41c2e84c, 0x69443c8a, 0x17f11c87, 0x38499506, 0x46fcb50b,
+ 0x6e7a61cd, 0x10cf41c0, 0x310b72cf, 0x4fbe52c2, 0x67388604, 0x198da609,
+ 0x36352f88, 0x48800f85, 0x6006db43, 0x1eb3fb4e, 0x238ebd5d, 0x5d3b9d50,
+ 0x75bd4996, 0x0b08699b, 0x24b0e01a, 0x5a05c017, 0x728314d1, 0x0c3634dc,
+ 0x2df207d3, 0x534727de, 0x7bc1f318, 0x0574d315, 0x2acc5a94, 0x54797a99,
+ 0x7cffae5f, 0x024a8e52, 0x06852279, 0x78300274, 0x50b6d6b2, 0x2e03f6bf,
+ 0x01bb7f3e, 0x7f0e5f33, 0x57888bf5, 0x293dabf8, 0x08f998f7, 0x764cb8fa,
+ 0x5eca6c3c, 0x207f4c31, 0x0fc7c5b0, 0x7172e5bd, 0x59f4317b, 0x27411176,
+ 0x1a7c5765, 0x64c97768, 0x4c4fa3ae, 0x32fa83a3, 0x1d420a22, 0x63f72a2f,
+ 0x4b71fee9, 0x35c4dee4, 0x1400edeb, 0x6ab5cde6, 0x42331920, 0x3c86392d,
+ 0x133eb0ac, 0x6d8b90a1, 0x450d4467, 0x3bb8646a
+};
-struct index {
+struct index_entry {
const unsigned char *ptr;
unsigned int val;
- struct index *next;
+ struct index_entry *next;
+};
+
+struct delta_index {
+ const void *src_buf;
+ unsigned long src_size;
+ unsigned int hash_mask;
+ struct index_entry *hash[0];
};
-static struct index ** delta_index(const unsigned char *buf,
- unsigned long bufsize,
- unsigned long trg_bufsize,
- unsigned int *hash_shift)
+struct delta_index * create_delta_index(const void *buf, unsigned long bufsize)
{
- unsigned int i, hsize, hshift, hlimit, entries, *hash_count;
- const unsigned char *data;
- struct index *entry, **hash;
+ unsigned int i, hsize, hmask, entries, prev_val, *hash_count;
+ const unsigned char *data, *buffer = buf;
+ struct delta_index *index;
+ struct index_entry *entry, **hash;
void *mem;
+ unsigned long memsize;
+
+ if (!buf || !bufsize)
+ return NULL;
- /* determine index hash size */
- entries = bufsize / BLK_SIZE;
+ /* Determine index hash size. Note that indexing skips the
+ first byte to allow for optimizing the rabin polynomial
+ initialization in create_delta(). */
+ entries = (bufsize - 1) / RABIN_WINDOW;
hsize = entries / 4;
for (i = 4; (1 << i) < hsize && i < 31; i++);
hsize = 1 << i;
- hshift = 32 - i;
- *hash_shift = hshift;
+ hmask = hsize - 1;
/* allocate lookup index */
- mem = malloc(hsize * sizeof(*hash) + entries * sizeof(*entry));
+ memsize = sizeof(*index) +
+ sizeof(*hash) * hsize +
+ sizeof(*entry) * entries;
+ mem = malloc(memsize);
if (!mem)
return NULL;
+ index = mem;
+ mem = index + 1;
hash = mem;
- entry = mem + hsize * sizeof(*hash);
+ mem = hash + hsize;
+ entry = mem;
+
+ index->src_buf = buf;
+ index->src_size = bufsize;
+ index->hash_mask = hmask;
memset(hash, 0, hsize * sizeof(*hash));
/* allocate an array to count hash entries */
hash_count = calloc(hsize, sizeof(*hash_count));
if (!hash_count) {
- free(hash);
+ free(index);
return NULL;
}
/* then populate the index */
- data = buf + entries * BLK_SIZE - BLK_SIZE;
- while (data >= buf) {
- unsigned int val = adler32(0, data, BLK_SIZE);
- i = HASH(val, hshift);
- entry->ptr = data;
- entry->val = val;
- entry->next = hash[i];
- hash[i] = entry++;
- hash_count[i]++;
- data -= BLK_SIZE;
- }
+ prev_val = ~0;
+ for (data = buffer + entries * RABIN_WINDOW - RABIN_WINDOW;
+ data >= buffer;
+ data -= RABIN_WINDOW) {
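+		/* hash the window of RABIN_WINDOW bytes ending at data[RABIN_WINDOW] */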
+ unsigned int val = 0;
+ for (i = 1; i <= RABIN_WINDOW; i++)
+ val = ((val << 8) | data[i]) ^ T[val >> RABIN_SHIFT];
+ if (val == prev_val) {
+ /* keep the lowest of consecutive identical blocks */
+ entry[-1].ptr = data + RABIN_WINDOW;
+ } else {
+ prev_val = val;
+ i = val & hmask;
+ entry->ptr = data + RABIN_WINDOW;
+ entry->val = val;
+ entry->next = hash[i];
+ hash[i] = entry++;
+ hash_count[i]++;
+ }
+ }
/*
* Determine a limit on the number of entries in the same hash
* bucket that would bring us to O(m*n) computing costs (m and n
* corresponding to reference and target buffer sizes).
*
- * The more the target buffer is large, the more it is important to
- * have small entry lists for each hash buckets. With such a limit
- * the cost is bounded to something more like O(m+n).
- */
- hlimit = (1 << 26) / trg_bufsize;
- if (hlimit < 4*BLK_SIZE)
- hlimit = 4*BLK_SIZE;
-
- /*
- * Now make sure none of the hash buckets has more entries than
+ * Make sure none of the hash buckets has more entries than
* we're willing to test. Otherwise we cull the entry list
	 * uniformly to still preserve a good distribution across
* the reference buffer.
*/
for (i = 0; i < hsize; i++) {
- if (hash_count[i] < hlimit)
+ if (hash_count[i] < HASH_LIMIT)
continue;
entry = hash[i];
do {
- struct index *keep = entry;
- int skip = hash_count[i] / hlimit / 2;
+ struct index_entry *keep = entry;
+ int skip = hash_count[i] / HASH_LIMIT / 2;
do {
entry = entry->next;
} while(--skip && entry);
}
free(hash_count);
- return hash;
+ return index;
}
-/* provide the size of the copy opcode given the block offset and size */
-#define COPYOP_SIZE(o, s) \
- (!!(o & 0xff) + !!(o & 0xff00) + !!(o & 0xff0000) + !!(o & 0xff000000) + \
- !!(s & 0xff) + !!(s & 0xff00) + 1)
+void free_delta_index(struct delta_index *index)
+{
+ free(index);
+}
-/* the maximum size for any opcode */
-#define MAX_OP_SIZE COPYOP_SIZE(0xffffffff, 0xffffffff)
+/*
+ * The maximum size for any opcode sequence, including the initial header
+ * plus rabin window plus biggest copy.
+ */
+#define MAX_OP_SIZE (5 + 5 + 1 + RABIN_WINDOW + 7)
-void *diff_delta(const void *from_buf, unsigned long from_size,
- const void *to_buf, unsigned long to_size,
- unsigned long *delta_size,
- unsigned long max_size)
+void *
+create_delta(const struct delta_index *index,
+ const void *trg_buf, unsigned long trg_size,
+ unsigned long *delta_size, unsigned long max_size)
{
- unsigned int i, outpos, outsize, hash_shift;
+ unsigned int i, outpos, outsize, val;
int inscnt;
const unsigned char *ref_data, *ref_top, *data, *top;
unsigned char *out;
- struct index *entry, **hash;
- if (!from_size || !to_size)
- return NULL;
- hash = delta_index(from_buf, from_size, to_size, &hash_shift);
- if (!hash)
+ if (!trg_buf || !trg_size)
return NULL;
outpos = 0;
if (max_size && outsize >= max_size)
outsize = max_size + MAX_OP_SIZE + 1;
out = malloc(outsize);
- if (!out) {
- free(hash);
+ if (!out)
return NULL;
- }
-
- ref_data = from_buf;
- ref_top = from_buf + from_size;
- data = to_buf;
- top = to_buf + to_size;
/* store reference buffer size */
- out[outpos++] = from_size;
- from_size >>= 7;
- while (from_size) {
- out[outpos - 1] |= 0x80;
- out[outpos++] = from_size;
- from_size >>= 7;
+ i = index->src_size;
+ while (i >= 0x80) {
+ out[outpos++] = i | 0x80;
+ i >>= 7;
}
+ out[outpos++] = i;
/* store target buffer size */
- out[outpos++] = to_size;
- to_size >>= 7;
- while (to_size) {
- out[outpos - 1] |= 0x80;
- out[outpos++] = to_size;
- to_size >>= 7;
+ i = trg_size;
+ while (i >= 0x80) {
+ out[outpos++] = i | 0x80;
+ i >>= 7;
}
-
- inscnt = 0;
+ out[outpos++] = i;
+
+ ref_data = index->src_buf;
+ ref_top = ref_data + index->src_size;
+ data = trg_buf;
+ top = trg_buf + trg_size;
+
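+	/* reserve one byte to hold the count of the first literal-insert run */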
+ outpos++;
+ val = 0;
+ for (i = 0; i < RABIN_WINDOW && data < top; i++, data++) {
+ out[outpos++] = *data;
+ val = ((val << 8) | *data) ^ T[val >> RABIN_SHIFT];
+ }
+ inscnt = i;
while (data < top) {
unsigned int moff = 0, msize = 0;
- if (data + BLK_SIZE <= top) {
- unsigned int val = adler32(0, data, BLK_SIZE);
- i = HASH(val, hash_shift);
- for (entry = hash[i]; entry; entry = entry->next) {
- const unsigned char *ref = entry->ptr;
- const unsigned char *src = data;
- unsigned int ref_size = ref_top - ref;
- if (entry->val != val)
- continue;
- if (ref_size > top - src)
- ref_size = top - src;
- if (ref_size > 0x10000)
- ref_size = 0x10000;
- if (ref_size <= msize)
- break;
- while (ref_size-- && *src++ == *ref)
- ref++;
- if (msize < ref - entry->ptr) {
- /* this is our best match so far */
- msize = ref - entry->ptr;
- moff = entry->ptr - ref_data;
- }
+ struct index_entry *entry;
+ val ^= U[data[-RABIN_WINDOW]];
+ val = ((val << 8) | *data) ^ T[val >> RABIN_SHIFT];
+ i = val & index->hash_mask;
+ for (entry = index->hash[i]; entry; entry = entry->next) {
+ const unsigned char *ref = entry->ptr;
+ const unsigned char *src = data;
+ unsigned int ref_size = ref_top - ref;
+ if (entry->val != val)
+ continue;
+ if (ref_size > top - src)
+ ref_size = top - src;
+ if (ref_size > 0x10000)
+ ref_size = 0x10000;
+ if (ref_size <= msize)
+ break;
+ while (ref_size-- && *src++ == *ref)
+ ref++;
+ if (msize < ref - entry->ptr) {
+ /* this is our best match so far */
+ msize = ref - entry->ptr;
+ moff = entry->ptr - ref_data;
}
}
- if (!msize || msize < COPYOP_SIZE(moff, msize)) {
+ if (msize < 4) {
if (!inscnt)
outpos++;
out[outpos++] = *data++;
} else {
unsigned char *op;
+ if (msize >= RABIN_WINDOW) {
+ const unsigned char *sk;
+ sk = data + msize - RABIN_WINDOW;
+ val = 0;
+ for (i = 0; i < RABIN_WINDOW; i++)
+ val = ((val << 8) | *sk++) ^ T[val >> RABIN_SHIFT];
+ } else {
+ const unsigned char *sk = data + 1;
+ for (i = 1; i < msize; i++) {
+ val ^= U[sk[-RABIN_WINDOW]];
+ val = ((val << 8) | *sk++) ^ T[val >> RABIN_SHIFT];
+ }
+ }
+
if (inscnt) {
while (moff && ref_data[moff-1] == data[-1]) {
if (msize == 0x10000)
if (max_size && outsize >= max_size)
outsize = max_size + MAX_OP_SIZE + 1;
if (max_size && outpos > max_size)
- out = NULL;
- else
- out = realloc(out, outsize);
+ break;
+ out = realloc(out, outsize);
if (!out) {
free(tmp);
- free(hash);
return NULL;
}
}
if (inscnt)
out[outpos - inscnt - 1] = inscnt;
- free(hash);
+ if (max_size && outpos > max_size) {
+ free(out);
+ return NULL;
+ }
+
*delta_size = outpos;
return out;
}
opt->diffopt.setup |= (DIFF_SETUP_USE_SIZE_CACHE |
DIFF_SETUP_USE_CACHE);
while (fgets(line, sizeof(line), stdin))
- diff_tree_stdin(line);
+ if (line[0] == '\n')
+ fflush(stdout);
+ else
+ diff_tree_stdin(line);
return 0;
}
#include "quote.h"
#include "diff.h"
#include "diffcore.h"
+#include "delta.h"
#include "xdiff-interface.h"
static int use_size_cache;
* name-a => name-b
*/
if (pfx_length + sfx_length) {
+ int a_midlen = len_a - pfx_length - sfx_length;
+ int b_midlen = len_b - pfx_length - sfx_length;
+ if (a_midlen < 0) a_midlen = 0;
+ if (b_midlen < 0) b_midlen = 0;
+
name = xmalloc(len_a + len_b - pfx_length - sfx_length + 7);
sprintf(name, "%.*s{%.*s => %.*s}%s",
pfx_length, a,
- len_a - pfx_length - sfx_length, a + pfx_length,
- len_b - pfx_length - sfx_length, b + pfx_length,
+ a_midlen, a + pfx_length,
+ b_midlen, b + pfx_length,
a + len_a - sfx_length);
}
else {
static void show_stats(struct diffstat_t* data)
{
- char *prefix = "";
int i, len, add, del, total, adds = 0, dels = 0;
int max, max_change = 0, max_len = 0;
int total_files = data->nr;
}
for (i = 0; i < data->nr; i++) {
+ char *prefix = "";
char *name = data->files[i]->name;
int added = data->files[i]->added;
int deleted = data->files[i]->deleted;
total_files, adds, dels);
}
+struct checkdiff_t {
+ struct xdiff_emit_state xm;
+ const char *filename;
+ int lineno;
+};
+
+static void checkdiff_consume(void *priv, char *line, unsigned long len)
+{
+ struct checkdiff_t *data = priv;
+
+ if (line[0] == '+') {
+ int i, spaces = 0;
+
+ data->lineno++;
+
+ /* check space before tab */
+ for (i = 1; i < len && (line[i] == ' ' || line[i] == '\t'); i++)
+ if (line[i] == ' ')
+ spaces++;
+ if (line[i - 1] == '\t' && spaces)
+ printf("%s:%d: space before tab:%.*s\n",
+ data->filename, data->lineno, (int)len, line);
+
+ /* check white space at line end */
+ if (line[len - 1] == '\n')
+ len--;
+ if (isspace(line[len - 1]))
+ printf("%s:%d: white space at end: %.*s\n",
+ data->filename, data->lineno, (int)len, line);
+ } else if (line[0] == ' ')
+ data->lineno++;
+ else if (line[0] == '@') {
+ char *plus = strchr(line, '+');
+ if (plus)
+			data->lineno = strtol(plus, NULL, 10);
+ else
+ die("invalid diff");
+ }
+}
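Each complaint this produces comes out on its own line; the file name, line number and content below are placeholders:

	<file>:<lineno>: space before tab:<offending added line>
	<file>:<lineno>: white space at end: <offending added line>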
+
+static unsigned char *deflate_it(char *data,
+ unsigned long size,
+ unsigned long *result_size)
+{
+ int bound;
+ unsigned char *deflated;
+ z_stream stream;
+
+ memset(&stream, 0, sizeof(stream));
+ deflateInit(&stream, Z_BEST_COMPRESSION);
+ bound = deflateBound(&stream, size);
+ deflated = xmalloc(bound);
+ stream.next_out = deflated;
+ stream.avail_out = bound;
+
+ stream.next_in = (unsigned char *)data;
+ stream.avail_in = size;
+ while (deflate(&stream, Z_FINISH) == Z_OK)
+ ; /* nothing */
+ deflateEnd(&stream);
+ *result_size = stream.total_out;
+ return deflated;
+}
+
+static void emit_binary_diff(mmfile_t *one, mmfile_t *two)
+{
+ void *cp;
+ void *delta;
+ void *deflated;
+ void *data;
+ unsigned long orig_size;
+ unsigned long delta_size;
+ unsigned long deflate_size;
+ unsigned long data_size;
+
+ printf("GIT binary patch\n");
+	/* We could emit either the deflated delta or just the
+	 * deflated postimage ("two"), whichever is smaller.
+ */
+ delta = NULL;
+ deflated = deflate_it(two->ptr, two->size, &deflate_size);
+ if (one->size && two->size) {
+ delta = diff_delta(one->ptr, one->size,
+ two->ptr, two->size,
+ &delta_size, deflate_size);
+ if (delta) {
+ void *to_free = delta;
+ orig_size = delta_size;
+ delta = deflate_it(delta, delta_size, &delta_size);
+ free(to_free);
+ }
+ }
+
+ if (delta && delta_size < deflate_size) {
+ printf("delta %lu\n", orig_size);
+ free(deflated);
+ data = delta;
+ data_size = delta_size;
+ }
+ else {
+ printf("literal %lu\n", two->size);
+ free(delta);
+ data = deflated;
+ data_size = deflate_size;
+ }
+
+ /* emit data encoded in base85 */
+ cp = data;
+ while (data_size) {
+ int bytes = (52 < data_size) ? 52 : data_size;
+ char line[70];
+ data_size -= bytes;
+ if (bytes <= 26)
+ line[0] = bytes + 'A' - 1;
+ else
+ line[0] = bytes - 26 + 'a' - 1;
+ encode_85(line + 1, cp, bytes);
+ cp += bytes;
+ puts(line);
+ }
+ printf("\n");
+ free(data);
+}
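For reference, the emitted frame has the shape sketched below (sizes and payload are placeholders). When the deflated delta against the preimage turns out smaller than the deflated literal, the second line reads "delta <uncompressed delta size>" instead; every payload line starts with one length character ('A'..'Z' for 1..26 decoded bytes, 'a'..'z' for 27..52) followed by base85 text, and a blank line closes the hunk:

	GIT binary patch
	literal <uncompressed size of the new content>
	<len><base85 payload, at most 52 bytes per line>
	...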
+
#define FIRST_FEW_BYTES 8000
static int mmfile_is_binary(mmfile_t *mf)
{
struct diff_filespec *one,
struct diff_filespec *two,
const char *xfrm_msg,
+ struct diff_options *o,
int complete_rewrite)
{
mmfile_t mf1, mf2;
if (fill_mmfile(&mf1, one) < 0 || fill_mmfile(&mf2, two) < 0)
die("unable to read files to diff");
- if (mmfile_is_binary(&mf1) || mmfile_is_binary(&mf2))
- printf("Binary files %s and %s differ\n", lbl[0], lbl[1]);
+ if (mmfile_is_binary(&mf1) || mmfile_is_binary(&mf2)) {
+ /* Quite common confusing case */
+ if (mf1.size == mf2.size &&
+ !memcmp(mf1.ptr, mf2.ptr, mf1.size))
+ goto free_ab_and_return;
+ if (o->binary)
+ emit_binary_diff(&mf1, &mf2);
+ else
+ printf("Binary files %s and %s differ\n",
+ lbl[0], lbl[1]);
+ }
else {
/* Crazy xdl interfaces.. */
const char *diffopts = getenv("GIT_DIFF_OPTS");
ecbdata.label_path = lbl;
xpp.flags = XDF_NEED_MINIMAL;
- xecfg.ctxlen = 3;
+ xecfg.ctxlen = o->context;
xecfg.flags = XDL_EMIT_FUNCNAMES;
if (!diffopts)
;
}
}
+static void builtin_checkdiff(const char *name_a, const char *name_b,
+ struct diff_filespec *one,
+ struct diff_filespec *two)
+{
+ mmfile_t mf1, mf2;
+ struct checkdiff_t data;
+
+ if (!two)
+ return;
+
+ memset(&data, 0, sizeof(data));
+ data.xm.consume = checkdiff_consume;
+ data.filename = name_b ? name_b : name_a;
+ data.lineno = 0;
+
+ if (fill_mmfile(&mf1, one) < 0 || fill_mmfile(&mf2, two) < 0)
+ die("unable to read files to diff");
+
+ if (mmfile_is_binary(&mf2))
+ return;
+ else {
+ /* Crazy xdl interfaces.. */
+ xpparam_t xpp;
+ xdemitconf_t xecfg;
+ xdemitcb_t ecb;
+
+ xpp.flags = XDF_NEED_MINIMAL;
+ xecfg.ctxlen = 0;
+ xecfg.flags = 0;
+ ecb.outf = xdiff_outf;
+ ecb.priv = &data;
+ xdl_diff(&mf1, &mf2, &xpp, &xecfg, &ecb);
+ }
+}
+
struct diff_filespec *alloc_filespec(const char *path)
{
int namelen = strlen(path);
struct diff_filespec *one,
struct diff_filespec *two,
const char *xfrm_msg,
+ struct diff_options *o,
int complete_rewrite)
{
if (pgm) {
}
if (one && two)
builtin_diff(name, other ? other : name,
- one, two, xfrm_msg, complete_rewrite);
+ one, two, xfrm_msg, o, complete_rewrite);
else
printf("* Unmerged path %s\n", name);
}
if (DIFF_PAIR_UNMERGED(p)) {
/* unmerged */
- run_diff_cmd(pgm, p->one->path, NULL, NULL, NULL, NULL, 0);
+ run_diff_cmd(pgm, p->one->path, NULL, NULL, NULL, NULL, o, 0);
return;
}
* needs to be split into deletion and creation.
*/
struct diff_filespec *null = alloc_filespec(two->path);
- run_diff_cmd(NULL, name, other, one, null, xfrm_msg, 0);
+ run_diff_cmd(NULL, name, other, one, null, xfrm_msg, o, 0);
free(null);
null = alloc_filespec(one->path);
- run_diff_cmd(NULL, name, other, null, two, xfrm_msg, 0);
+ run_diff_cmd(NULL, name, other, null, two, xfrm_msg, o, 0);
free(null);
}
else
- run_diff_cmd(pgm, name, other, one, two, xfrm_msg,
+ run_diff_cmd(pgm, name, other, one, two, xfrm_msg, o,
complete_rewrite);
free(name_munged);
builtin_diffstat(name, other, p->one, p->two, diffstat, complete_rewrite);
}
+static void run_checkdiff(struct diff_filepair *p, struct diff_options *o)
+{
+ const char *name;
+ const char *other;
+
+ if (DIFF_PAIR_UNMERGED(p)) {
+ /* unmerged */
+ return;
+ }
+
+ name = p->one->path;
+ other = (strcmp(name, p->two->path) ? p->two->path : NULL);
+
+ diff_fill_sha1_info(p->one);
+ diff_fill_sha1_info(p->two);
+
+ builtin_checkdiff(name, other, p->one, p->two);
+}
+
void diff_setup(struct diff_options *options)
{
memset(options, 0, sizeof(*options));
options->line_termination = '\n';
options->break_opt = -1;
options->rename_limit = -1;
+ options->context = 3;
options->change = diff_change;
options->add_remove = diff_addremove;
* recursive bits for other formats here.
*/
if ((options->output_format == DIFF_FORMAT_PATCH) ||
- (options->output_format == DIFF_FORMAT_DIFFSTAT))
+ (options->output_format == DIFF_FORMAT_DIFFSTAT) ||
+ (options->output_format == DIFF_FORMAT_CHECKDIFF))
options->recursive = 1;
if (options->detect_rename && options->rename_limit < 0)
return 0;
}
+int opt_arg(const char *arg, int arg_short, const char *arg_long, int *val)
+{
+ char c, *eq;
+ int len;
+
+ if (*arg != '-')
+ return 0;
+ c = *++arg;
+ if (!c)
+ return 0;
+ if (c == arg_short) {
+ c = *++arg;
+ if (!c)
+ return 1;
+ if (val && isdigit(c)) {
+ char *end;
+ int n = strtoul(arg, &end, 10);
+ if (*end)
+ return 0;
+ *val = n;
+ return 1;
+ }
+ return 0;
+ }
+ if (c != '-')
+ return 0;
+ arg++;
+ eq = strchr(arg, '=');
+ if (eq)
+ len = eq - arg;
+ else
+ len = strlen(arg);
+ if (!len || strncmp(arg, arg_long, len))
+ return 0;
+ if (eq) {
+ int n;
+ char *end;
+ if (!isdigit(*++eq))
+ return 0;
+ n = strtoul(eq, &end, 10);
+ if (*end)
+ return 0;
+ *val = n;
+ }
+ return 1;
+}
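The helper accepts both the stuck short form and the long form with '=', and the long name may be abbreviated to any prefix. A few illustrative calls (not from the patch):

	int ctx = 3;

	opt_arg("-U",          'U', "unified", &ctx);	/* returns 1, ctx stays 3 */
	opt_arg("-U5",         'U', "unified", &ctx);	/* returns 1, ctx = 5 */
	opt_arg("--unified=8", 'U', "unified", &ctx);	/* returns 1, ctx = 8 */
	opt_arg("--uni=9",     'U', "unified", &ctx);	/* prefix match, ctx = 9 */
	opt_arg("--stat",      'U', "unified", &ctx);	/* returns 0: not this option */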
+
int diff_opt_parse(struct diff_options *options, const char **av, int ac)
{
const char *arg = av[0];
if (!strcmp(arg, "-p") || !strcmp(arg, "-u"))
options->output_format = DIFF_FORMAT_PATCH;
+ else if (opt_arg(arg, 'U', "unified", &options->context))
+ options->output_format = DIFF_FORMAT_PATCH;
else if (!strcmp(arg, "--patch-with-raw")) {
options->output_format = DIFF_FORMAT_PATCH;
options->with_raw = 1;
}
else if (!strcmp(arg, "--stat"))
options->output_format = DIFF_FORMAT_DIFFSTAT;
+ else if (!strcmp(arg, "--check"))
+ options->output_format = DIFF_FORMAT_CHECKDIFF;
+ else if (!strcmp(arg, "--summary"))
+ options->summary = 1;
else if (!strcmp(arg, "--patch-with-stat")) {
options->output_format = DIFF_FORMAT_PATCH;
options->with_stat = 1;
options->rename_limit = strtoul(arg+2, NULL, 10);
else if (!strcmp(arg, "--full-index"))
options->full_index = 1;
+ else if (!strcmp(arg, "--binary")) {
+ options->output_format = DIFF_FORMAT_PATCH;
+ options->full_index = options->binary = 1;
+ }
else if (!strcmp(arg, "--name-only"))
options->output_format = DIFF_FORMAT_NAME;
else if (!strcmp(arg, "--name-status"))
run_diffstat(p, o, diffstat);
}
+static void diff_flush_checkdiff(struct diff_filepair *p,
+ struct diff_options *o)
+{
+ if (diff_unmodified_pair(p))
+ return;
+
+ if ((DIFF_FILE_VALID(p->one) && S_ISDIR(p->one->mode)) ||
+ (DIFF_FILE_VALID(p->two) && S_ISDIR(p->two->mode)))
+ return; /* no tree diffs in patch format */
+
+ run_checkdiff(p, o);
+}
+
int diff_queue_is_empty(void)
{
struct diff_queue_struct *q = &diff_queued_diff;
case DIFF_FORMAT_DIFFSTAT:
diff_flush_stat(p, options, diffstat);
break;
+ case DIFF_FORMAT_CHECKDIFF:
+ diff_flush_checkdiff(p, options);
+ break;
case DIFF_FORMAT_PATCH:
diff_flush_patch(p, options);
break;
}
}
+static void show_file_mode_name(const char *newdelete, struct diff_filespec *fs)
+{
+ if (fs->mode)
+ printf(" %s mode %06o %s\n", newdelete, fs->mode, fs->path);
+ else
+ printf(" %s %s\n", newdelete, fs->path);
+}
+
+
+static void show_mode_change(struct diff_filepair *p, int show_name)
+{
+ if (p->one->mode && p->two->mode && p->one->mode != p->two->mode) {
+ if (show_name)
+ printf(" mode change %06o => %06o %s\n",
+ p->one->mode, p->two->mode, p->two->path);
+ else
+ printf(" mode change %06o => %06o\n",
+ p->one->mode, p->two->mode);
+ }
+}
+
+static void show_rename_copy(const char *renamecopy, struct diff_filepair *p)
+{
+ const char *old, *new;
+
+ /* Find common prefix */
+ old = p->one->path;
+ new = p->two->path;
+ while (1) {
+ const char *slash_old, *slash_new;
+ slash_old = strchr(old, '/');
+ slash_new = strchr(new, '/');
+ if (!slash_old ||
+ !slash_new ||
+ slash_old - old != slash_new - new ||
+ memcmp(old, new, slash_new - new))
+ break;
+ old = slash_old + 1;
+ new = slash_new + 1;
+ }
+ /* p->one->path thru old is the common prefix, and old and new
+ * through the end of names are renames
+ */
+ if (old != p->one->path)
+ printf(" %s %.*s{%s => %s} (%d%%)\n", renamecopy,
+ (int)(old - p->one->path), p->one->path,
+ old, new, (int)(0.5 + p->score * 100.0/MAX_SCORE));
+ else
+ printf(" %s %s => %s (%d%%)\n", renamecopy,
+ p->one->path, p->two->path,
+ (int)(0.5 + p->score * 100.0/MAX_SCORE));
+ show_mode_change(p, 0);
+}
+
+static void diff_summary(struct diff_filepair *p)
+{
+ switch(p->status) {
+ case DIFF_STATUS_DELETED:
+ show_file_mode_name("delete", p->one);
+ break;
+ case DIFF_STATUS_ADDED:
+ show_file_mode_name("create", p->two);
+ break;
+ case DIFF_STATUS_COPIED:
+ show_rename_copy("copy", p);
+ break;
+ case DIFF_STATUS_RENAMED:
+ show_rename_copy("rename", p);
+ break;
+ default:
+ if (p->score) {
+ printf(" rewrite %s (%d%%)\n", p->two->path,
+ (int)(0.5 + p->score * 100.0/MAX_SCORE));
+ show_mode_change(p, 0);
+ } else show_mode_change(p, 1);
+ break;
+ }
+}
+
void diff_flush(struct diff_options *options)
{
struct diff_queue_struct *q = &diff_queued_diff;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
flush_one_pair(p, diff_output_format, options, diffstat);
- diff_free_filepair(p);
}
if (diffstat) {
free(diffstat);
}
+ for (i = 0; i < q->nr; i++) {
+ if (options->summary)
+ diff_summary(q->queue[i]);
+ diff_free_filepair(q->queue[i]);
+ }
+
free(q->queue);
q->queue = NULL;
q->nr = q->alloc = 0;
with_raw:1,
with_stat:1,
tree_in_recursive:1,
+ binary:1,
full_index:1,
silent_on_remove:1,
- find_copies_harder:1;
+ find_copies_harder:1,
+ summary:1;
+ int context;
int break_opt;
int detect_rename;
int line_termination;
#define DIFF_FORMAT_NAME 4
#define DIFF_FORMAT_NAME_STATUS 5
#define DIFF_FORMAT_DIFFSTAT 6
+#define DIFF_FORMAT_CHECKDIFF 7
extern void diff_flush(struct diff_options*);
}
stop_here_user_resolve () {
+ if [ -n "$resolvemsg" ]; then
+ echo "$resolvemsg"
+ stop_here $1
+ fi
cmdline=$(basename $0)
if test '' != "$interactive"
then
GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
git-write-tree >"$dotest/patch-merge-base+" &&
# index has the base tree now.
- (
- cd "$dotest/patch-merge-tmp-dir" &&
- GIT_INDEX_FILE="../patch-merge-tmp-index" \
- GIT_OBJECT_DIRECTORY="$O_OBJECT" \
- git-apply $binary --index <../patch
- )
+ GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
+ git-apply $binary --cached <"$dotest/patch"
then
echo Using index info to reconstruct a base tree...
mv "$dotest/patch-merge-base+" "$dotest/patch-merge-base"
mv "$dotest/patch-merge-tmp-index" "$dotest/patch-merge-index"
- else
- # Otherwise, try nearby trees that can be used to apply the
- # patch.
- (
- N=10
-
- # Hoping the patch is against our recent commits...
- git-rev-list --max-count=$N HEAD
-
- # or hoping the patch is against known tags...
- git-ls-remote --tags .
- ) |
- while read base junk
- do
- # See if we have it as a tree...
- git-cat-file tree "$base" >/dev/null 2>&1 || continue
-
- rm -fr "$dotest"/patch-merge-* &&
- mkdir "$dotest/patch-merge-tmp-dir" || break
- (
- cd "$dotest/patch-merge-tmp-dir" &&
- GIT_INDEX_FILE=../patch-merge-tmp-index &&
- GIT_OBJECT_DIRECTORY="$O_OBJECT" &&
- export GIT_INDEX_FILE GIT_OBJECT_DIRECTORY &&
- git-read-tree "$base" &&
- git-apply $binary --index &&
- mv ../patch-merge-tmp-index ../patch-merge-index &&
- echo "$base" >../patch-merge-base
- ) <"$dotest/patch" 2>/dev/null && break
- done
fi
test -f "$dotest/patch-merge-index" &&
}
prec=4
-dotest=.dotest sign= utf8= keep= skip= interactive= resolved= binary= ws=
+dotest=.dotest sign= utf8= keep= skip= interactive= resolved= binary= ws= resolvemsg=
while case "$#" in 0) break;; esac
do
--whitespace=*)
ws=$1; shift ;;
+ --resolvemsg=*)
+ resolvemsg=$(echo "$1" | sed -e "s/^--resolvemsg=//"); shift ;;
+
--)
shift; break ;;
-*)
else
# Make sure we are not given --skip nor --resolved
test ",$skip,$resolved," = ,,, ||
- die "we are not resuming."
+ die "Resolve operation not in progress, we are not resuming."
# Start afresh.
mkdir -p "$dotest" || exit
case "$#" in
0)
- git-rev-parse --symbolic --all |
- sed -ne 's|^refs/heads/||p' |
+ git-rev-parse --symbolic --branches |
sort |
while read ref
do
work=`git write-tree` &&
git read-tree --reset $new &&
git checkout-index -f -u -q -a &&
- git read-tree -m -u $old $new $work || exit
+ git read-tree -m -u --aggressive $old $new $work || exit
if result=`git write-tree 2>/dev/null`
then
# Copyright (c) 2005-2006 Pavel Roskin
#
-USAGE="[-d] [-n] [-q] [-x | -X]"
+USAGE="[-d] [-n] [-q] [-x | -X] [--] <paths>..."
LONG_USAGE='Clean untracked files from the working directory
-d remove directories as well
-n don'\''t remove anything, just show what would be done
-q be quiet, only report errors
-x remove ignored files as well
- -X remove only ignored files as well'
+ -X remove only ignored files
+When optional <paths>... arguments are given, the paths
+affected are further limited to those that match them.'
SUBDIRECTORY_OK=Yes
. git-sh-setup
-X)
ignoredonly=1
;;
- *)
+ --)
+ shift
+ break
+ ;;
+ -*)
usage
+ ;;
+ *)
+ break
esac
shift
done
fi
fi
-git-ls-files --others --directory $excl ${excl_info:+"$excl_info"} |
+git-ls-files --others --directory $excl ${excl_info:+"$excl_info"} -- "$@" |
while read -r file; do
if [ -d "$file" -a ! -L "$file" ]; then
if [ -z "$cleandir" ]; then
;;
yes)
mkdir -p "$GIT_DIR/objects/info"
- {
- test -f "$repo/objects/info/alternates" &&
- cat "$repo/objects/info/alternates";
- echo "$repo/objects"
- } >"$GIT_DIR/objects/info/alternates"
+ echo "$repo/objects" >> "$GIT_DIR/objects/info/alternates"
;;
esac
git-ls-remote "$repo" >"$GIT_DIR/CLONE_HEAD"
exit 1
;;
esac
+ git-var GIT_AUTHOR_IDENT > /dev/null || die
+ git-var GIT_COMMITTER_IDENT > /dev/null || die
${VISUAL:-${EDITOR:-vi}} "$GIT_DIR/COMMIT_EDITMSG"
;;
esac
die "GIT_DIR is not defined or is unreadable";
}
-our ($opt_h, $opt_p, $opt_v, $opt_c );
+our ($opt_h, $opt_p, $opt_v, $opt_c, $opt_f, $opt_m );
-getopts('hpvc');
+getopts('hpvcfm:');
$opt_h && usage();
$opt_v && print "Applying to CVS commit $commit from parent $parent\n";
# grab the commit message
-`git-cat-file commit $commit | sed -e '1,/^\$/d' > .msg`;
+open(MSG, ">.msg") or die "Cannot open .msg for writing";
+print MSG $opt_m;
+close MSG;
+
+`git-cat-file commit $commit | sed -e '1,/^\$/d' >> .msg`;
$? && die "Error extracting the commit message";
my (@afiles, @dfiles, @mfiles);
my @files = safe_pipe_capture('git-diff-tree', '-r', $parent, $commit);
-print @files;
+#print @files;
$? && die "Error in git-diff-tree";
foreach my $f (@files) {
chomp $f;
if (@status > 1) { warn 'Strange! cvs status returned more than one line?'};
unless ($status[0] =~ m/Status: Unknown$/) {
$dirty = 1;
- warn "File $f is already known in your CVS checkout!\n";
+ warn "File $f is already known in your CVS checkout -- perhaps it has been added by another user. Or this may indicate that it exists on a different branch. If this is the case, use -f to force the merge.\n";
}
}
foreach my $f (@mfiles, @dfiles) {
}
}
if ($dirty) {
- die "Exiting: your CVS tree is not clean for this merge.";
+ if ($opt_f) { warn "The tree is not clean -- forced merge\n";
+ $dirty = 0;
+ } else {
+ die "Exiting: your CVS tree is not clean for this merge.";
+ }
}
###
}
sub usage {
print STDERR <<END;
-Usage: GIT_DIR=/path/to/.git ${\basename $0} [-h] [-p] [-v] [-c] [ parent ] commit
+Usage: GIT_DIR=/path/to/.git ${\basename $0} [-h] [-p] [-v] [-c] [-f] [-m msgprefix] [ parent ] commit
END
exit(1);
}
return $res;
} elsif($line =~ s/^E //) {
# print STDERR "S: $line\n";
- } elsif($line =~ /^Remove-entry /i) {
+ } elsif($line =~ /^(Remove-entry|Removed) /i) {
$line = $self->readline(); # filename
$line = $self->readline(); # OK
chomp $line;
{
my ( $cmd, $data ) = @_;
$log->debug("req_Globaloption : $data");
-
- # TODO : is this data useful ???
+ $state->{globaloptions}{$data} = 1;
}
# Valid-responses request-list \n
$state->{localdir} = $data;
$state->{repository} = $repository;
- $state->{directory} = $repository;
- $state->{directory} =~ s/^$state->{CVSROOT}\///;
- $state->{module} = $1 if ($state->{directory} =~ s/^(.*?)(\/|$)//);
+ $state->{path} = $repository;
+ $state->{path} =~ s/^$state->{CVSROOT}\///;
+ $state->{module} = $1 if ($state->{path} =~ s/^(.*?)(\/|$)//);
+ $state->{path} .= "/" if ( $state->{path} =~ /\S/ );
+
+ $state->{directory} = $state->{localdir};
+ $state->{directory} = "" if ( $state->{directory} eq "." );
$state->{directory} .= "/" if ( $state->{directory} =~ /\S/ );
- $log->debug("req_Directory : localdir=$data repository=$repository directory=$state->{directory} module=$state->{module}");
+ if ( not defined($state->{prependdir}) and $state->{localdir} eq "." and $state->{path} =~ /\S/ )
+ {
+ $log->info("Setting prepend to '$state->{path}'");
+ $state->{prependdir} = $state->{path};
+ foreach my $entry ( keys %{$state->{entries}} )
+ {
+ $state->{entries}{$state->{prependdir} . $entry} = $state->{entries}{$entry};
+ delete $state->{entries}{$entry};
+ }
+ }
+
+ if ( defined ( $state->{prependdir} ) )
+ {
+ $log->debug("Prepending '$state->{prependdir}' to state|directory");
+ $state->{directory} = $state->{prependdir} . $state->{directory}
+ }
+ $log->debug("req_Directory : localdir=$data repository=$repository path=$state->{path} directory=$state->{directory} module=$state->{module}");
}
# Entry entry-line \n
{
my ( $cmd, $data ) = @_;
- $log->debug("req_Entry : $data");
+ #$log->debug("req_Entry : $data");
my @data = split(/\//, $data);
options => $data[4],
tag_or_date => $data[5],
};
+
+ $log->info("Received entry line '$data' => '" . $state->{directory} . $data[1] . "'");
+}
+
+# Questionable filename \n
+# Response expected: no. Additional data: no. Tell the server to check
+# whether filename should be ignored, and if not, next time the server
+# sends responses, send (in a M response) `?' followed by the directory and
+# filename. filename must not contain `/'; it needs to be a file in the
+# directory named by the most recent Directory request.
+sub req_Questionable
+{
+ my ( $cmd, $data ) = @_;
+
+ $log->debug("req_Questionable : $data");
+ $state->{entries}{$state->{directory}.$data}{questionable} = 1;
}
# add \n
next;
}
-
- my ( $filepart, $dirpart ) = filenamesplit($filename);
+ my ( $filepart, $dirpart ) = filenamesplit($filename, 1);
print "E cvs add: scheduling file `$filename' for addition\n";
}
- my ( $filepart, $dirpart ) = filenamesplit($filename);
+ my ( $filepart, $dirpart ) = filenamesplit($filename, 1);
print "E cvs remove: scheduling `$filename' for removal\n";
#$log->debug("req_Unchanged : $data");
}
-# Questionable filename \n
-# Response expected: no. Additional data: no.
-# Tell the server to check whether filename should be ignored,
-# and if not, next time the server sends responses, send (in
-# a M response) `?' followed by the directory and filename.
-# filename must not contain `/'; it needs to be a file in the
-# directory named by the most recent Directory request.
-sub req_Questionable
-{
- my ( $cmd, $data ) = @_;
-
- $state->{entries}{$state->{directory}.$data}{questionable} = 1;
-
- #$log->debug("req_Questionable : $data");
-}
-
# Argument text \n
# Response expected: no. Save argument for use in a subsequent command.
# Arguments accumulate until an argument-using command is given, at which
$updater->update();
- # if no files were specified, we need to work out what files we should be providing status on ...
- argsfromdir($updater) if ( scalar ( @{$state->{args}} ) == 0 );
+ argsfromdir($updater);
#$log->debug("update state : " . Dumper($state));
{
$filename = filecleanup($filename);
+ $log->debug("Processing file $filename");
+
# if we have a -C we should pretend we never saw modified stuff
if ( exists ( $state->{opt}{C} ) )
{
if ( $meta->{filehash} eq "deleted" )
{
- my ( $filepart, $dirpart ) = filenamesplit($filename);
+ my ( $filepart, $dirpart ) = filenamesplit($filename,1);
$log->info("Removing '$filename' from working copy (no longer in the repo)");
print "E cvs update: `$filename' is no longer in the repository\n";
- print "Removed $dirpart\n";
- print "$filepart\n";
+ # Don't want to actually _DO_ the update if -n specified
+ unless ( $state->{globaloptions}{-n} ) {
+ print "Removed $dirpart\n";
+ print "$filepart\n";
+ }
}
elsif ( not defined ( $state->{entries}{$filename}{modified_hash} )
or $state->{entries}{$filename}{modified_hash} eq $oldmeta->{filehash} )
print "MT newline\n";
print "MT -updated\n";
- my ( $filepart, $dirpart ) = filenamesplit($filename);
- $dirpart =~ s/^$state->{directory}//;
-
- if ( defined ( $wrev ) )
- {
- # instruct client we're sending a file to put in this path as a replacement
- print "Update-existing $dirpart\n";
- $log->debug("Updating existing file 'Update-existing $dirpart'");
- } else {
- # instruct client we're sending a file to put in this path as a new file
- print "Created $dirpart\n";
- $log->debug("Creating new file 'Created $dirpart'");
- }
- print $state->{CVSROOT} . "/$state->{module}/$filename\n";
-
- # this is an "entries" line
- $log->debug("/$filepart/1.$meta->{revision}///");
- print "/$filepart/1.$meta->{revision}///\n";
-
- # permissions
- $log->debug("SEND : u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}");
- print "u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}\n";
-
- # transmit file
- transmitfile($meta->{filehash});
+ my ( $filepart, $dirpart ) = filenamesplit($filename,1);
+
+ # Don't want to actually _DO_ the update if -n specified
+ unless ( $state->{globaloptions}{-n} )
+ {
+ if ( defined ( $wrev ) )
+ {
+ # instruct client we're sending a file to put in this path as a replacement
+ print "Update-existing $dirpart\n";
+ $log->debug("Updating existing file 'Update-existing $dirpart'");
+ } else {
+ # instruct client we're sending a file to put in this path as a new file
+ print "Clear-static-directory $dirpart\n";
+ print $state->{CVSROOT} . "/$state->{module}/$dirpart\n";
+ print "Clear-sticky $dirpart\n";
+ print $state->{CVSROOT} . "/$state->{module}/$dirpart\n";
+
+ $log->debug("Creating new file 'Created $dirpart'");
+ print "Created $dirpart\n";
+ }
+ print $state->{CVSROOT} . "/$state->{module}/$filename\n";
+
+ # this is an "entries" line
+ $log->debug("/$filepart/1.$meta->{revision}///");
+ print "/$filepart/1.$meta->{revision}///\n";
+
+ # permissions
+ $log->debug("SEND : u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}");
+ print "u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}\n";
+
+ # transmit file
+ transmitfile($meta->{filehash});
+ }
} else {
$log->info("Updating '$filename'");
- my ( $filepart, $dirpart ) = filenamesplit($meta->{name});
+ my ( $filepart, $dirpart ) = filenamesplit($meta->{name},1);
my $dir = tempdir( DIR => $TEMP_DIR, CLEANUP => 1 ) . "/";
$log->info("Merged successfully");
print "M M $filename\n";
$log->debug("Update-existing $dirpart");
- print "Update-existing $dirpart\n";
- $log->debug($state->{CVSROOT} . "/$state->{module}/$filename");
- print $state->{CVSROOT} . "/$state->{module}/$filename\n";
- $log->debug("/$filepart/1.$meta->{revision}///");
- print "/$filepart/1.$meta->{revision}///\n";
+
+ # Don't want to actually _DO_ the update if -n specified
+ unless ( $state->{globaloptions}{-n} )
+ {
+ print "Update-existing $dirpart\n";
+ $log->debug($state->{CVSROOT} . "/$state->{module}/$filename");
+ print $state->{CVSROOT} . "/$state->{module}/$filename\n";
+ $log->debug("/$filepart/1.$meta->{revision}///");
+ print "/$filepart/1.$meta->{revision}///\n";
+ }
}
elsif ( $return == 1 )
{
$log->info("Merged with conflicts");
print "M C $filename\n";
- print "Update-existing $dirpart\n";
- print $state->{CVSROOT} . "/$state->{module}/$filename\n";
- print "/$filepart/1.$meta->{revision}/+//\n";
+
+ # Don't want to actually _DO_ the update if -n specified
+ unless ( $state->{globaloptions}{-n} )
+ {
+ print "Update-existing $dirpart\n";
+ print $state->{CVSROOT} . "/$state->{module}/$filename\n";
+ print "/$filepart/1.$meta->{revision}/+//\n";
+ }
}
else
{
next;
}
- # permissions
- $log->debug("SEND : u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}");
- print "u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}\n";
-
- # transmit file, format is single integer on a line by itself (file
- # size) followed by the file contents
- # TODO : we should copy files in blocks
- my $data = `cat $file_local`;
- $log->debug("File size : " . length($data));
- print length($data) . "\n";
- print $data;
+ # Don't want to actually _DO_ the update if -n specified
+ unless ( $state->{globaloptions}{-n} )
+ {
+ # permissions
+ $log->debug("SEND : u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}");
+ print "u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}\n";
+
+ # transmit file, format is single integer on a line by itself (file
+ # size) followed by the file contents
+ # TODO : we should copy files in blocks
+ my $data = `cat $file_local`;
+ $log->debug("File size : " . length($data));
+ print length($data) . "\n";
+ print $data;
+ }
chdir "/";
}
if ( -e $state->{CVSROOT} . "/index" )
{
+ $log->warn("file 'index' already exists in the git repository");
print "error 1 Index already exists in git repo\n";
exit;
}
my $lockfile = "$state->{CVSROOT}/refs/heads/$state->{module}.lock";
unless ( sysopen(LOCKFILE,$lockfile,O_EXCL|O_CREAT|O_WRONLY) )
{
+ $log->warn("lockfile '$lockfile' already exists, please try again");
print "error 1 Lock file '$lockfile' already exists, please try again\n";
exit;
}
# foreach file specified on the commandline ...
foreach my $filename ( @{$state->{args}} )
{
+ my $committedfile = $filename;
$filename = filecleanup($filename);
next unless ( exists $state->{entries}{$filename}{modified_filename} or not $state->{entries}{$filename}{unchanged} );
exit;
}
- push @committedfiles, $filename;
+ push @committedfiles, $committedfile;
$log->info("Committing $filename");
system("mkdir","-p",$dirpart) unless ( -d $dirpart );
my $meta = $updater->getmeta($filename);
- my ( $filepart, $dirpart ) = filenamesplit($filename);
+ my ( $filepart, $dirpart ) = filenamesplit($filename, 1);
$log->debug("Checked-in $dirpart : $filename");
$updater->update();
# if no files were specified, we need to work out what files we should be providing status on ...
- argsfromdir($updater) if ( scalar ( @{$state->{args}} ) == 0 );
+ argsfromdir($updater);
# foreach file specified on the commandline ...
foreach my $filename ( @{$state->{args}} )
$updater->update();
# if no files were specified, we need to work out what files we should be providing status on ...
- argsfromdir($updater) if ( scalar ( @{$state->{args}} ) == 0 );
+ argsfromdir($updater);
# foreach file specified on the commandline ...
foreach my $filename ( @{$state->{args}} )
$updater->update();
# if no files were specified, we need to work out what files we should be providing status on ...
- argsfromdir($updater) if ( scalar ( @{$state->{args}} ) == 0 );
+ argsfromdir($updater);
# foreach file specified on the commandline ...
foreach my $filename ( @{$state->{args}} )
$updater->update();
# if no files were specified, we need to work out what files we should be providing annotate on ...
- argsfromdir($updater) if ( scalar ( @{$state->{args}} ) == 0 );
+ argsfromdir($updater);
# we'll need a temporary checkout dir
my $tmpdir = tempdir ( DIR => $TEMP_DIR );
{
my $updater = shift;
- $state->{args} = [];
+ $state->{args} = [] if ( scalar(@{$state->{args}}) == 1 and $state->{args}[0] eq "." );
+
+ return if ( scalar ( @{$state->{args}} ) > 1 );
- foreach my $file ( @{$updater->gethead} )
+ if ( scalar(@{$state->{args}}) == 1 )
{
- next if ( $file->{filehash} eq "deleted" and not defined ( $state->{entries}{$file->{name}} ) );
- next unless ( $file->{name} =~ s/^$state->{directory}// );
- push @{$state->{args}}, $file->{name};
+ my $arg = $state->{args}[0];
+ $arg .= $state->{prependdir} if ( defined ( $state->{prependdir} ) );
+
+ $log->info("Only one arg specified, checking for directory expansion on '$arg'");
+
+ foreach my $file ( @{$updater->gethead} )
+ {
+ next if ( $file->{filehash} eq "deleted" and not defined ( $state->{entries}{$file->{name}} ) );
+ next unless ( $file->{name} =~ /^$arg\// or $file->{name} eq $arg );
+ push @{$state->{args}}, $file->{name};
+ }
+
+ shift @{$state->{args}} if ( scalar(@{$state->{args}}) > 1 );
+ } else {
+		$log->info("No args specified, populating file list automatically");
+
+ $state->{args} = [];
+
+ foreach my $file ( @{$updater->gethead} )
+ {
+ next if ( $file->{filehash} eq "deleted" and not defined ( $state->{entries}{$file->{name}} ) );
+ next unless ( $file->{name} =~ s/^$state->{prependdir}// );
+ push @{$state->{args}}, $file->{name};
+ }
}
}
sub filenamesplit
{
my $filename = shift;
+ my $fixforlocaldir = shift;
my ( $filepart, $dirpart ) = ( $filename, "." );
( $filepart, $dirpart ) = ( $2, $1 ) if ( $filename =~ /(.*)\/(.*)/ );
$dirpart .= "/";
+ if ( $fixforlocaldir )
+ {
+ $dirpart =~ s/^$state->{prependdir}//;
+ }
+
return ( $filepart, $dirpart );
}
}
$filename =~ s/^\.\///g;
- $filename = $state->{directory} . $filename;
-
+ $filename = $state->{prependdir} . $filename;
return $filename;
}
close FH or die "close $commsg pipe";
' "$keep_subject" "$num" "$signoff" "$headers" "$mimemagic" $commsg
- git-diff-tree -p $diff_opts "$commit" | git-apply --stat --summary
+ git-diff-tree -p --stat --summary $diff_opts "$commit"
echo
case "$mimemagic" in
'');;
+++ /dev/null
-#!/bin/sh
-#
-# Copyright (c) Linus Torvalds, 2005
-#
-
-USAGE='[<option>...] [-e] <pattern> [<path>...]'
-SUBDIRECTORY_OK='Yes'
-. git-sh-setup
-
-got_pattern () {
- if [ -z "$no_more_patterns" ]
- then
- pattern="$1" no_more_patterns=yes
- else
- die "git-grep: do not specify more than one pattern"
- fi
-}
-
-no_more_patterns=
-pattern=
-flags=()
-git_flags=()
-while : ; do
- case "$1" in
- -o|--cached|--deleted|--others|--killed|\
- --ignored|--modified|--exclude=*|\
- --exclude-from=*|\--exclude-per-directory=*)
- git_flags=("${git_flags[@]}" "$1")
- ;;
- -e)
- got_pattern "$2"
- shift
- ;;
- -A|-B|-C|-D|-d|-f|-m)
- flags=("${flags[@]}" "$1" "$2")
- shift
- ;;
- --)
- # The rest are git-ls-files paths
- shift
- break
- ;;
- -*)
- flags=("${flags[@]}" "$1")
- ;;
- *)
- if [ -z "$no_more_patterns" ]
- then
- got_pattern "$1"
- shift
- fi
- [ "$1" = -- ] && shift
- break
- ;;
- esac
- shift
-done
-[ "$pattern" ] || {
- usage
-}
-git-ls-files -z "${git_flags[@]}" -- "$@" |
- xargs -0 grep "${flags[@]}" -e "$pattern" --
case "$no_summary" in
'')
- git-diff-tree -p -M "$head" "$1" |
- git-apply --stat --summary
+ git-diff-tree -p --stat --summary -M "$head" "$1"
;;
esac
}
# Not so fast. This could be the partial URL shorthand...
token=$(expr "z$1" : 'z\([^/]*\)/')
remainder=$(expr "z$1" : 'z[^/]*/\(.*\)')
- if test -f "$GIT_DIR/branches/$token"
+ if test "$(git-repo-config --get "remote.$token.url")"
+ then
+ echo config-partial
+ elif test -f "$GIT_DIR/branches/$token"
then
echo branches-partial
else
fi
;;
*)
- if test -f "$GIT_DIR/remotes/$1"
+ if test "$(git-repo-config --get "remote.$1.url")"
+ then
+ echo config
+ elif test -f "$GIT_DIR/remotes/$1"
then
echo remotes
elif test -f "$GIT_DIR/branches/$1"
case "$data_source" in
'')
echo "$1" ;;
+ config-partial)
+ token=$(expr "z$1" : 'z\([^/]*\)/')
+ remainder=$(expr "z$1" : 'z[^/]*/\(.*\)')
+ url=$(git-repo-config --get "remote.$token.url")
+ echo "$url/$remainder"
+ ;;
+ config)
+ git-repo-config --get "remote.$1.url"
+ ;;
remotes)
sed -ne '/^URL: */{
s///p
get_remote_default_refs_for_push () {
data_source=$(get_data_source "$1")
case "$data_source" in
- '' | branches | branches-partial)
+ '' | config-partial | branches | branches-partial)
;; # no default push mapping, just send matching refs.
+ config)
+ git-repo-config --get-all "remote.$1.push" ;;
remotes)
sed -ne '/^Push: */{
s///p
get_remote_default_refs_for_fetch () {
data_source=$(get_data_source "$1")
case "$data_source" in
- '' | branches-partial)
+ '' | config-partial | branches-partial)
echo "HEAD:" ;;
+ config)
+ canon_refs_list_for_fetch \
+ $(git-repo-config --get-all "remote.$1.fetch") ;;
branches)
remote_branch=$(sed -ne '/#/s/.*#//p' "$GIT_DIR/branches/$1")
case "$remote_branch" in '') remote_branch=master ;; esac
--- /dev/null
+#!/bin/sh
+USAGE='--dry-run --author <author> --patches </path/to/quilt/patch/directory>'
+SUBDIRECTORY_OK=Yes
+. git-sh-setup
+
+dry_run=""
+quilt_author=""
+while case "$#" in 0) break;; esac
+do
+ case "$1" in
+ --au=*|--aut=*|--auth=*|--autho=*|--author=*)
+	quilt_author=$(expr "$1" : '-[^=]*=\(.*\)')
+ shift
+ ;;
+
+ --au|--aut|--auth|--autho|--author)
+ case "$#" in 1) usage ;; esac
+ shift
+ quilt_author="$1"
+ shift
+ ;;
+
+ --dry-run)
+ shift
+ dry_run=1
+ ;;
+
+ --pa=*|--pat=*|--patc=*|--patch=*|--patche=*|--patches=*)
+	QUILT_PATCHES=$(expr "$1" : '-[^=]*=\(.*\)')
+ shift
+ ;;
+
+ --pa|--pat|--patc|--patch|--patche|--patches)
+ case "$#" in 1) usage ;; esac
+ shift
+ QUILT_PATCHES="$1"
+ shift
+ ;;
+
+ *)
+ break
+ ;;
+ esac
+done
+
+# Quilt Author
+if [ -n "$quilt_author" ] ; then
+ quilt_author_name=$(expr "z$quilt_author" : 'z\(.*[^ ]\) *<.*') &&
+ quilt_author_email=$(expr "z$quilt_author" : '.*<\([^>]*\)') &&
+ test '' != "$quilt_author_name" &&
+ test '' != "$quilt_author_email" ||
+ die "malformatted --author parameter"
+fi
+
+# Quilt patch directory
+: ${QUILT_PATCHES:=patches}
+if ! [ -d "$QUILT_PATCHES" ] ; then
+ echo "The \"$QUILT_PATCHES\" directory does not exist."
+ exit 1
+fi
+
+# Temporary directories
+tmp_dir=.dotest
+tmp_msg="$tmp_dir/msg"
+tmp_patch="$tmp_dir/patch"
+tmp_info="$tmp_dir/info"
+
+
+# Find the initial commit
+commit=$(git-rev-parse HEAD)
+
+mkdir $tmp_dir || exit 2
+for patch_name in $(cat "$QUILT_PATCHES/series" | grep -v '^#'); do
+ echo $patch_name
+ (cat $QUILT_PATCHES/$patch_name | git-mailinfo "$tmp_msg" "$tmp_patch" > "$tmp_info") || exit 3
+
+ # Parse the author information
+ export GIT_AUTHOR_NAME=$(sed -ne 's/Author: //p' "$tmp_info")
+ export GIT_AUTHOR_EMAIL=$(sed -ne 's/Email: //p' "$tmp_info")
+ while test -z "$GIT_AUTHOR_EMAIL" && test -z "$GIT_AUTHOR_NAME" ; do
+ if [ -n "$quilt_author" ] ; then
+ GIT_AUTHOR_NAME="$quilt_author_name";
+ GIT_AUTHOR_EMAIL="$quilt_author_email";
+ elif [ -n "$dry_run" ]; then
+ echo "No author found in $patch_name" >&2;
+ GIT_AUTHOR_NAME="dry-run-not-found";
+ GIT_AUTHOR_EMAIL="dry-run-not-found";
+ else
+ echo "No author found in $patch_name" >&2;
+ echo "---"
+ cat $tmp_msg
+ echo -n "Author: ";
+ read patch_author
+
+ echo "$patch_author"
+
+ patch_author_name=$(expr "z$patch_author" : 'z\(.*[^ ]\) *<.*') &&
+ patch_author_email=$(expr "z$patch_author" : '.*<\([^>]*\)') &&
+ test '' != "$patch_author_name" &&
+ test '' != "$patch_author_email" &&
+ GIT_AUTHOR_NAME="$patch_author_name" &&
+ GIT_AUTHOR_EMAIL="$patch_author_email"
+ fi
+ done
+ export GIT_AUTHOR_DATE=$(sed -ne 's/Date: //p' "$tmp_info")
+ export SUBJECT=$(sed -ne 's/Subject: //p' "$tmp_info")
+ if [ -z "$SUBJECT" ] ; then
+ SUBJECT=$(echo $patch_name | sed -e 's/.patch$//')
+ fi
+
+ if [ -z "$dry_run" ] ; then
+ git-apply --index -C1 "$tmp_patch" &&
+ tree=$(git-write-tree) &&
+ commit=$((echo "$SUBJECT"; echo; cat "$tmp_msg") | git-commit-tree $tree -p $commit) &&
+ git-update-ref HEAD $commit || exit 4
+ fi
+done
+rm -rf $tmp_dir || exit 5
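For reference, the new script would typically be driven from the top of a work tree along these lines (the patch directory and author string are illustrative):

    $ git-quiltimport --dry-run --patches /path/to/patches
    $ git-quiltimport --author 'A U Thor <author@example.com>' --patches /path/to/patches

The --dry-run pass only reports patches whose author cannot be determined; the second invocation imports the series, falling back to the given --author whenever a patch carries no attribution.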
It is possible that a merge failure will prevent this process from being
completely automatic. You will have to resolve any such merge failure
-and run git-rebase --continue. If you can not resolve the merge failure,
-running git-rebase --abort will restore the original <branch> and remove
-the working files found in the .dotest directory.
+and run git rebase --continue. Another option is to bypass the commit
+that caused the merge failure with git rebase --skip. To restore the
+original <branch> and remove the .dotest working files, use the command
+git rebase --abort instead.
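A sketch of the flow described above, with "foo.c" standing in for whichever path actually conflicted:

    $ git rebase master topic
    ... resolve the conflict in foo.c by hand ...
    $ git update-index foo.c
    $ git rebase --continue

At the same point, "git rebase --skip" or "git rebase --abort" can be used instead, as described above.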
Note that if <branch> is not specified on the command line, the
currently checked out branch is used. You must be in the top
'
. git-sh-setup
+RESOLVEMSG="
+When you have resolved this problem run \"git rebase --continue\".
+If you would prefer to skip this patch, instead run \"git rebase --skip\".
+To restore the original branch and stop rebasing run \"git rebase --abort\".
+"
unset newbase
while case "$#" in 0) break ;; esac
do
exit 1
;;
esac
- git am --resolved --3way
+ git am --resolved --3way --resolvemsg="$RESOLVEMSG"
+ exit
+ ;;
+ --skip)
+ git am -3 --skip --resolvemsg="$RESOLVEMSG"
exit
;;
--abort)
fi
git-format-patch -k --stdout --full-index "$upstream" ORIG_HEAD |
-git am --binary -3 -k
+git am --binary -3 -k --resolvemsg="$RESOLVEMSG"
+
exit 1
if [ -z "$name" ]; then
echo Nothing new to pack.
- exit 0
-fi
-echo "Pack pack-$name created."
+else
+ echo "Pack pack-$name created."
-mkdir -p "$PACKDIR" || exit
+ mkdir -p "$PACKDIR" || exit
-mv .tmp-pack-$name.pack "$PACKDIR/pack-$name.pack" &&
-mv .tmp-pack-$name.idx "$PACKDIR/pack-$name.idx" ||
-exit
+ mv .tmp-pack-$name.pack "$PACKDIR/pack-$name.pack" &&
+ mv .tmp-pack-$name.idx "$PACKDIR/pack-$name.idx" ||
+ exit
+fi
if test "$remove_redundant" = t
then
echo
git log $baserev..$headrev | git-shortlog ;
-git diff $baserev..$headrev | git-apply --stat --summary
+git diff --stat --summary $baserev..$headrev
tmp=${GIT_DIR}/reset.$$
trap 'rm -f $tmp-*' 0 1 2 3 15
+update=
reset_type=--mixed
case "$1" in
--mixed | --soft | --hard)
# behind before a hard reset, so that we can remove them.
if test "$reset_type" = "--hard"
then
- {
- git-ls-files --stage -z
- git-rev-parse --verify HEAD 2>/dev/null &&
- git-ls-tree -r -z HEAD
- } | perl -e '
- use strict;
- my %seen;
- $/ = "\0";
- while (<>) {
- chomp;
- my ($info, $path) = split(/\t/, $_);
- next if ($info =~ / tree /);
- if (!$seen{$path}) {
- $seen{$path} = 1;
- print "$path\0";
- }
- }
- ' >$tmp-exists
+ update=-u
fi
# Soft reset does not touch the index file nor the working tree
die "Cannot do a soft reset in the middle of a merge."
fi
else
- git-read-tree --reset "$rev" || exit
+ git-read-tree --reset $update "$rev" || exit
fi
# Any resets update HEAD to the head being switched to.
case "$reset_type" in
--hard )
- # Hard reset matches the working tree to that of the tree
- # being switched to.
- git-checkout-index -f -u -q -a
- git-ls-files --cached -z |
- perl -e '
- use strict;
- my (%keep, $fh);
- $/ = "\0";
- while (<STDIN>) {
- chomp;
- $keep{$_} = 1;
- }
- open $fh, "<", $ARGV[0]
- or die "cannot open $ARGV[0]";
- while (<$fh>) {
- chomp;
- if (! exists $keep{$_}) {
- # it is ok if this fails -- it may already
- # have been culled by checkout-index.
- unlink $_;
- while (s|/[^/]*$||) {
- rmdir($_) or last;
- }
- }
- }
- ' $tmp-exists
- ;;
+ ;; # Nothing else to do
--soft )
;; # Nothing else to do
--mixed )
# $prev and $commit on top of us (when cherry-picking or replaying).
echo >&2 "First trying simple merge strategy to $me."
-git-read-tree -m -u $base $head $next &&
+git-read-tree -m -u --aggressive $base $head $next &&
result=$(git-write-tree 2>/dev/null) || {
echo >&2 "Simple $me fails; trying Automatic $me."
git-merge-index -o git-merge-one-file -a || {
my (@to,@cc,@initial_cc,$initial_reply_to,$initial_subject,@files,$from,$compose,$time);
# Behavior modification variables
-my ($chain_reply_to, $smtp_server, $quiet, $suppress_from, $no_signed_off_cc) = (1, "localhost", 0, 0, 0);
+my ($chain_reply_to, $quiet, $suppress_from, $no_signed_off_cc) = (1, 0, 0, 0);
+my $smtp_server;
# Example reply to:
#$initial_reply_to = ''; #<20050203173208.GA23964@foobar.com>';
my ($author) = gitvar_ident('GIT_AUTHOR_IDENT');
my ($committer) = gitvar_ident('GIT_COMMITTER_IDENT');
+my %aliases;
+chomp(my @alias_files = `git-repo-config --get-all sendemail.aliasesfile`);
+chomp(my $aliasfiletype = `git-repo-config sendemail.aliasfiletype`);
+my %parse_alias = (
+ # multiline formats can be supported in the future
+ mutt => sub { my $fh = shift; while (<$fh>) {
+ if (/^alias\s+(\S+)\s+(.*)$/) {
+ my ($alias, $addr) = ($1, $2);
+ $addr =~ s/#.*$//; # mutt allows # comments
+ # commas delimit multiple addresses
+ $aliases{$alias} = [ split(/\s*,\s*/, $addr) ];
+ }}},
+ mailrc => sub { my $fh = shift; while (<$fh>) {
+ if (/^alias\s+(\S+)\s+(.*)$/) {
+ # spaces delimit multiple addresses
+ $aliases{$1} = [ split(/\s+/, $2) ];
+ }}},
+ pine => sub { my $fh = shift; while (<$fh>) {
+ if (/^(\S+)\s+(.*)$/) {
+ $aliases{$1} = [ split(/\s*,\s*/, $2) ];
+ }}},
+ gnus => sub { my $fh = shift; while (<$fh>) {
+ if (/\(define-mail-alias\s+"(\S+?)"\s+"(\S+?)"\)/) {
+ $aliases{$1} = [ $2 ];
+ }}}
+);
+
+if (@alias_files && defined $parse_alias{$aliasfiletype}) {
+ foreach my $file (@alias_files) {
+ open my $fh, '<', $file or die "opening $file: $!\n";
+ $parse_alias{$aliasfiletype}->($fh);
+ close $fh;
+ }
+}
+
my $prompting = 0;
if (!defined $from) {
$from = $author || $committer;
$prompting++;
}
+sub expand_aliases {
+ my @cur = @_;
+ my @last;
+ do {
+ @last = @cur;
+ @cur = map { $aliases{$_} ? @{$aliases{$_}} : $_ } @last;
+ } while (join(',',@cur) ne join(',',@last));
+ return @cur;
+}
+
+@to = expand_aliases(@to);
+@initial_cc = expand_aliases(@initial_cc);
+
if (!defined $initial_subject && $compose) {
do {
$_ = $term->readline("What subject should the emails start with? ",
$initial_reply_to =~ s/(^\s+|\s+$)//g;
}
-if (!defined $smtp_server) {
- $smtp_server = "localhost";
+if (!$smtp_server) {
+ foreach (qw( /usr/sbin/sendmail /usr/lib/sendmail )) {
+ if (-x $_) {
+ $smtp_server = $_;
+ last;
+ }
+ }
+ $smtp_server ||= 'localhost'; # could be 127.0.0.1, too... *shrug*
}
if ($compose) {
sub extract_valid_address {
my $address = shift;
+
+ # check for a local address:
+ return $address if ($address =~ /^([\w\-]+)$/);
+
if ($have_email_valid) {
return Email::Valid->address($address);
} else {
";
$header .= "In-Reply-To: $reply_to\n" if $reply_to;
- $smtp ||= Net::SMTP->new( $smtp_server );
- $smtp->mail( $from ) or die $smtp->message;
- $smtp->to( @recipients ) or die $smtp->message;
- $smtp->data or die $smtp->message;
- $smtp->datasend("$header\n$message") or die $smtp->message;
- $smtp->dataend() or die $smtp->message;
- $smtp->ok or die "Failed to send $subject\n".$smtp->message;
-
+ if ($smtp_server =~ m#^/#) {
+ my $pid = open my $sm, '|-';
+ defined $pid or die $!;
+ if (!$pid) {
+ exec($smtp_server,'-i',@recipients) or die $!;
+ }
+ print $sm "$header\n$message";
+ close $sm or die $?;
+ } else {
+ $smtp ||= Net::SMTP->new( $smtp_server );
+ $smtp->mail( $from ) or die $smtp->message;
+ $smtp->to( @recipients ) or die $smtp->message;
+ $smtp->data or die $smtp->message;
+ $smtp->datasend("$header\n$message") or die $smtp->message;
+ $smtp->dataend() or die $smtp->message;
+ $smtp->ok or die "Failed to send $subject\n".$smtp->message;
+ }
if ($quiet) {
printf "Sent %s\n", $subject;
} else {
- print "OK. Log says:
-Date: $date
-Server: $smtp_server Port: 25
-From: $from
-Subject: $subject
-Cc: $cc
-To: $to
-
-Result: ", $smtp->code, ' ', ($smtp->message =~ /\n([^\n]+\n)$/s), "\n";
+ print "OK. Log says:\nDate: $date\n";
+ if ($smtp) {
+ print "Server: $smtp_server\n";
+ } else {
+ print "Sendmail: $smtp_server\n";
+ }
+ print "From: $from\nSubject: $subject\nCc: $cc\nTo: $to\n\n";
+ if ($smtp) {
+ print "Result: ", $smtp->code, ' ',
+ ($smtp->message =~ /\n([^\n]+\n)$/s), "\n";
+ } else {
+ print "Result: OK\n";
+ }
}
}
my @emails;
foreach my $entry (@_) {
- my $clean = extract_valid_address($entry);
- next if $seen{$clean}++;
- push @emails, $entry;
+ if (my $clean = extract_valid_address($entry)) {
+ $seen{$clean} ||= 0;
+ next if $seen{$clean}++;
+ push @emails, $entry;
+ } else {
+ print STDERR "W: unable to extract a valid address",
+ " from: $entry\n";
+ }
}
return @emails;
}
force=1
;;
-l)
- cd "$GIT_DIR/refs" &&
case "$#" in
1)
- find tags -type f -print ;;
- *)
- shift
- find tags -type f -print | grep "$@" ;;
+ set x . ;;
esac
+ shift
+ git rev-parse --symbolic --tags | sort | grep "$@"
exit $?
;;
-m)
{ "fmt-patch", cmd_format_patch },
{ "count-objects", cmd_count_objects },
{ "diff", cmd_diff },
+ { "grep", cmd_grep },
+ { "rev-list", cmd_rev_list },
+ { "init-db", cmd_init_db },
+ { "check-ref-format", cmd_check_ref_format }
};
int i;
%setup -q
%build
-make %{_smp_mflags} CFLAGS="$RPM_OPT_FLAGS" WITH_OWN_SUBPROCESS_PY=YesPlease WITH_SEND_EMAIL=1 \
+make %{_smp_mflags} CFLAGS="$RPM_OPT_FLAGS" WITH_OWN_SUBPROCESS_PY=YesPlease \
prefix=%{_prefix} all %{!?_without_docs: doc}
%install
rm -rf $RPM_BUILD_ROOT
-make %{_smp_mflags} DESTDIR=$RPM_BUILD_ROOT WITH_OWN_SUBPROCESS_PY=YesPlease WITH_SEND_EMAIL=1 \
+make %{_smp_mflags} DESTDIR=$RPM_BUILD_ROOT WITH_OWN_SUBPROCESS_PY=YesPlease \
prefix=%{_prefix} mandir=%{_mandir} \
install %{!?_without_docs: install-doc}
+++ /dev/null
-/*
- * GIT - The information manager from hell
- *
- * Copyright (C) Linus Torvalds, 2005
- */
-#include "cache.h"
-
-#ifndef DEFAULT_GIT_TEMPLATE_DIR
-#define DEFAULT_GIT_TEMPLATE_DIR "/usr/share/git-core/templates/"
-#endif
-
-static void safe_create_dir(const char *dir, int share)
-{
- if (mkdir(dir, 0777) < 0) {
- if (errno != EEXIST) {
- perror(dir);
- exit(1);
- }
- }
- else if (share && adjust_shared_perm(dir))
- die("Could not make %s writable by group\n", dir);
-}
-
-static int copy_file(const char *dst, const char *src, int mode)
-{
- int fdi, fdo, status;
-
- mode = (mode & 0111) ? 0777 : 0666;
- if ((fdi = open(src, O_RDONLY)) < 0)
- return fdi;
- if ((fdo = open(dst, O_WRONLY | O_CREAT | O_EXCL, mode)) < 0) {
- close(fdi);
- return fdo;
- }
- status = copy_fd(fdi, fdo);
- close(fdo);
-
- if (!status && adjust_shared_perm(dst))
- return -1;
-
- return status;
-}
-
-static void copy_templates_1(char *path, int baselen,
- char *template, int template_baselen,
- DIR *dir)
-{
- struct dirent *de;
-
- /* Note: if ".git/hooks" file exists in the repository being
- * re-initialized, /etc/core-git/templates/hooks/update would
- * cause git-init-db to fail here. I think this is sane but
- * it means that the set of templates we ship by default, along
- * with the way the namespace under .git/ is organized, should
- * be really carefully chosen.
- */
- safe_create_dir(path, 1);
- while ((de = readdir(dir)) != NULL) {
- struct stat st_git, st_template;
- int namelen;
- int exists = 0;
-
- if (de->d_name[0] == '.')
- continue;
- namelen = strlen(de->d_name);
- if ((PATH_MAX <= baselen + namelen) ||
- (PATH_MAX <= template_baselen + namelen))
- die("insanely long template name %s", de->d_name);
- memcpy(path + baselen, de->d_name, namelen+1);
- memcpy(template + template_baselen, de->d_name, namelen+1);
- if (lstat(path, &st_git)) {
- if (errno != ENOENT)
- die("cannot stat %s", path);
- }
- else
- exists = 1;
-
- if (lstat(template, &st_template))
- die("cannot stat template %s", template);
-
- if (S_ISDIR(st_template.st_mode)) {
- DIR *subdir = opendir(template);
- int baselen_sub = baselen + namelen;
- int template_baselen_sub = template_baselen + namelen;
- if (!subdir)
- die("cannot opendir %s", template);
- path[baselen_sub++] =
- template[template_baselen_sub++] = '/';
- path[baselen_sub] =
- template[template_baselen_sub] = 0;
- copy_templates_1(path, baselen_sub,
- template, template_baselen_sub,
- subdir);
- closedir(subdir);
- }
- else if (exists)
- continue;
- else if (S_ISLNK(st_template.st_mode)) {
- char lnk[256];
- int len;
- len = readlink(template, lnk, sizeof(lnk));
- if (len < 0)
- die("cannot readlink %s", template);
- if (sizeof(lnk) <= len)
- die("insanely long symlink %s", template);
- lnk[len] = 0;
- if (symlink(lnk, path))
- die("cannot symlink %s %s", lnk, path);
- }
- else if (S_ISREG(st_template.st_mode)) {
- if (copy_file(path, template, st_template.st_mode))
- die("cannot copy %s to %s", template, path);
- }
- else
- error("ignoring template %s", template);
- }
-}
-
-static void copy_templates(const char *git_dir, int len, char *template_dir)
-{
- char path[PATH_MAX];
- char template_path[PATH_MAX];
- int template_len;
- DIR *dir;
-
- if (!template_dir)
- template_dir = DEFAULT_GIT_TEMPLATE_DIR;
- strcpy(template_path, template_dir);
- template_len = strlen(template_path);
- if (template_path[template_len-1] != '/') {
- template_path[template_len++] = '/';
- template_path[template_len] = 0;
- }
- dir = opendir(template_path);
- if (!dir) {
- fprintf(stderr, "warning: templates not found %s\n",
- template_dir);
- return;
- }
-
- /* Make sure that template is from the correct vintage */
- strcpy(template_path + template_len, "config");
- repository_format_version = 0;
- git_config_from_file(check_repository_format_version,
- template_path);
- template_path[template_len] = 0;
-
- if (repository_format_version &&
- repository_format_version != GIT_REPO_VERSION) {
- fprintf(stderr, "warning: not copying templates of "
- "a wrong format version %d from '%s'\n",
- repository_format_version,
- template_dir);
- closedir(dir);
- return;
- }
-
- memcpy(path, git_dir, len);
- path[len] = 0;
- copy_templates_1(path, len,
- template_path, template_len,
- dir);
- closedir(dir);
-}
-
-static void create_default_files(const char *git_dir, char *template_path)
-{
- unsigned len = strlen(git_dir);
- static char path[PATH_MAX];
- unsigned char sha1[20];
- struct stat st1;
- char repo_version_string[10];
-
- if (len > sizeof(path)-50)
- die("insane git directory %s", git_dir);
- memcpy(path, git_dir, len);
-
- if (len && path[len-1] != '/')
- path[len++] = '/';
-
- /*
- * Create .git/refs/{heads,tags}
- */
- strcpy(path + len, "refs");
- safe_create_dir(path, 1);
- strcpy(path + len, "refs/heads");
- safe_create_dir(path, 1);
- strcpy(path + len, "refs/tags");
- safe_create_dir(path, 1);
-
- /* First copy the templates -- we might have the default
- * config file there, in which case we would want to read
- * from it after installing.
- */
- path[len] = 0;
- copy_templates(path, len, template_path);
-
- git_config(git_default_config);
-
- /*
- * Create the default symlink from ".git/HEAD" to the "master"
- * branch, if it does not exist yet.
- */
- strcpy(path + len, "HEAD");
- if (read_ref(path, sha1) < 0) {
- if (create_symref(path, "refs/heads/master") < 0)
- exit(1);
- }
-
- /* This forces creation of new config file */
- sprintf(repo_version_string, "%d", GIT_REPO_VERSION);
- git_config_set("core.repositoryformatversion", repo_version_string);
-
- path[len] = 0;
- strcpy(path + len, "config");
-
- /* Check filemode trustability */
- if (!lstat(path, &st1)) {
- struct stat st2;
- int filemode = (!chmod(path, st1.st_mode ^ S_IXUSR) &&
- !lstat(path, &st2) &&
- st1.st_mode != st2.st_mode);
- git_config_set("core.filemode",
- filemode ? "true" : "false");
- }
-}
-
-static const char init_db_usage[] =
-"git-init-db [--template=<template-directory>] [--shared]";
-
-/*
- * If you want to, you can share the DB area with any number of branches.
- * That has advantages: you can save space by sharing all the SHA1 objects.
- * On the other hand, it might just make lookup slower and messier. You
- * be the judge. The default case is to have one DB per managed directory.
- */
-int main(int argc, char **argv)
-{
- const char *git_dir;
- const char *sha1_dir;
- char *path, *template_dir = NULL;
- int len, i;
-
- for (i = 1; i < argc; i++, argv++) {
- char *arg = argv[1];
- if (!strncmp(arg, "--template=", 11))
- template_dir = arg+11;
- else if (!strcmp(arg, "--shared"))
- shared_repository = 1;
- else
- die(init_db_usage);
- }
-
- /*
- * Set up the default .git directory contents
- */
- git_dir = getenv(GIT_DIR_ENVIRONMENT);
- if (!git_dir) {
- git_dir = DEFAULT_GIT_DIR_ENVIRONMENT;
- fprintf(stderr, "defaulting to local storage area\n");
- }
- safe_create_dir(git_dir, 0);
-
- /* Check to see if the repository version is right.
- * Note that a newly created repository does not have
- * config file, so this will not fail. What we are catching
- * is an attempt to reinitialize new repository with an old tool.
- */
- check_repository_format();
-
- create_default_files(git_dir, template_dir);
-
- /*
- * And set up the object store.
- */
- sha1_dir = get_object_directory();
- len = strlen(sha1_dir);
- path = xmalloc(len + 40);
- memcpy(path, sha1_dir, len);
-
- safe_create_dir(sha1_dir, 1);
- strcpy(path+len, "/pack");
- safe_create_dir(path, 1);
- strcpy(path+len, "/info");
- safe_create_dir(path, 1);
-
- if (shared_repository)
- git_config_set("core.sharedRepository", "true");
-
- return 0;
-}
if (argc < 2)
usage(ls_tree_usage);
- if (get_sha1(argv[1], sha1) < 0)
- usage(ls_tree_usage);
+ if (get_sha1(argv[1], sha1))
+ die("Not a valid object name %s", argv[1]);
pathspec = get_pathspec(prefix, argv + 2);
tree = parse_tree_indirect(sha1);
* commit B.
*
*
- * Another pathological example how this thing can fail to mark an ancestor
- * of a merge base as UNINTERESTING without the postprocessing phase.
+ * Another pathological example of how this thing used to fail to mark an
+ * ancestor of a merge base as UNINTERESTING before we introduced the
+ * postprocessing phase (mark_reachable_commits).
*
* 2
* H
* D7 2 3 7 7 3 2 1 2
* E7 2 3 7 7 7 2 1 2
*
- * and we end up showing E as an interesting merge base.
+ * and we ended up showing E as an interesting merge base.
+ * The postprocessing phase re-injects C and continues traversal
+ * to contaminate D and E.
*/
static int show_all = 0;
usage(merge_base_usage);
argc--; argv++;
}
- if (argc != 3 ||
- get_sha1(argv[1], rev1key) ||
- get_sha1(argv[2], rev2key))
+ if (argc != 3)
usage(merge_base_usage);
+ if (get_sha1(argv[1], rev1key))
+ die("Not a valid object name %s", argv[1]);
+ if (get_sha1(argv[2], rev2key))
+ die("Not a valid object name %s", argv[2]);
rev1 = lookup_commit_reference(rev1key);
rev2 = lookup_commit_reference(rev2key);
if (!rev1 || !rev2)
unsigned char sha1[20];
void *buf;
- if (get_sha1(rev, sha1) < 0)
+ if (get_sha1(rev, sha1))
die("unknown rev %s", rev);
buf = fill_tree_descriptor(desc, sha1);
if (!buf)
rix->revindex = xmalloc(sizeof(unsigned long) * (num_ent + 1));
for (i = 0; i < num_ent; i++) {
- long hl = *((long *)(index + 24 * i));
+ unsigned int hl = *((unsigned int *)(index + 24 * i));
rix->revindex[i] = ntohl(hl);
}
/* This knows the pack format -- the 20-byte trailer
struct unpacked {
struct object_entry *entry;
void *data;
+ struct delta_index *index;
};
/*
* more importantly, the bigger file is likely the more recent
* one.
*/
-static int try_delta(struct unpacked *cur, struct unpacked *old, unsigned max_depth)
+static int try_delta(struct unpacked *trg, struct unpacked *src,
+ struct delta_index *src_index, unsigned max_depth)
{
- struct object_entry *cur_entry = cur->entry;
- struct object_entry *old_entry = old->entry;
- unsigned long size, oldsize, delta_size, sizediff;
- long max_size;
+ struct object_entry *trg_entry = trg->entry;
+ struct object_entry *src_entry = src->entry;
+ unsigned long size, src_size, delta_size, sizediff, max_size;
void *delta_buf;
/* Don't bother doing diffs between different types */
- if (cur_entry->type != old_entry->type)
+ if (trg_entry->type != src_entry->type)
return -1;
/* We do not compute delta to *create* objects we are not
* going to pack.
*/
- if (cur_entry->preferred_base)
+ if (trg_entry->preferred_base)
return -1;
- /* If the current object is at pack edge, take the depth the
+ /*
+ * If the current object is at pack edge, take the depth the
* objects that depend on the current object into account --
* otherwise they would become too deep.
*/
- if (cur_entry->delta_child) {
- if (max_depth <= cur_entry->delta_limit)
+ if (trg_entry->delta_child) {
+ if (max_depth <= trg_entry->delta_limit)
return 0;
- max_depth -= cur_entry->delta_limit;
+ max_depth -= trg_entry->delta_limit;
}
-
- if (old_entry->depth >= max_depth)
+ if (src_entry->depth >= max_depth)
return 0;
- /*
- * NOTE!
- *
- * We always delta from the bigger to the smaller, since that's
- * more space-efficient (deletes don't have to say _what_ they
- * delete).
- */
- size = cur_entry->size;
- max_size = size / 2 - 20;
- if (cur_entry->delta)
- max_size = cur_entry->delta_size-1;
- oldsize = old_entry->size;
- sizediff = oldsize < size ? size - oldsize : 0;
+ /* Now some size filtering heuristics. */
+ size = trg_entry->size;
+ max_size = size/2 - 20;
+ max_size = max_size * (max_depth - src_entry->depth) / max_depth;
+ if (max_size == 0)
+ return 0;
+ if (trg_entry->delta && trg_entry->delta_size <= max_size)
+ max_size = trg_entry->delta_size-1;
+ src_size = src_entry->size;
+ sizediff = src_size < size ? size - src_size : 0;
if (sizediff >= max_size)
return 0;
- delta_buf = diff_delta(old->data, oldsize,
- cur->data, size, &delta_size, max_size);
+
+ delta_buf = create_delta(src_index, trg->data, size, &delta_size, max_size);
if (!delta_buf)
return 0;
- cur_entry->delta = old_entry;
- cur_entry->delta_size = delta_size;
- cur_entry->depth = old_entry->depth + 1;
+
+ trg_entry->delta = src_entry;
+ trg_entry->delta_size = delta_size;
+ trg_entry->depth = src_entry->depth + 1;
free(delta_buf);
- return 0;
+ return 1;
}
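With hypothetical numbers, the new size cap behaves as follows: for a 1000-byte target object, a max_depth of 10 and a delta source already at depth 4, max_size is (1000/2 - 20) * (10 - 4) / 10 = 288 bytes, and create_delta() only returns a delta if it fits within that budget. Sources deeper in a chain are therefore allowed progressively smaller deltas, and the max_size == 0 check drops combinations where the allowance rounds down to nothing.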
static void progress_interval(int signum)
if (entry->size < 50)
continue;
-
+ free_delta_index(n->index);
+ n->index = NULL;
free(n->data);
n->entry = entry;
n->data = read_sha1_file(entry->sha1, type, &size);
if (size != entry->size)
- die("object %s inconsistent object length (%lu vs %lu)", sha1_to_hex(entry->sha1), size, entry->size);
+ die("object %s inconsistent object length (%lu vs %lu)",
+ sha1_to_hex(entry->sha1), size, entry->size);
j = window;
while (--j > 0) {
m = array + other_idx;
if (!m->entry)
break;
- if (try_delta(n, m, depth) < 0)
+ if (try_delta(n, m, m->index, depth) < 0)
break;
}
-#if 0
/* if we made n a delta, and if n is already at max
* depth, leaving it in the window is pointless. we
* should evict it first.
- * ... in theory only; somehow this makes things worse.
*/
if (entry->delta && depth <= entry->depth)
continue;
-#endif
+
+ n->index = create_delta_index(n->data, size);
+ if (!n->index)
+ die("out of memory");
+
idx++;
if (idx >= window)
idx = 0;
if (progress)
fputc('\n', stderr);
- for (i = 0; i < window; ++i)
+ for (i = 0; i < window; ++i) {
+ free_delta_index(array[i].index);
free(array[i].data);
+ }
free(array);
}
#include <string.h>
#include "delta.h"
-void *patch_delta(void *src_buf, unsigned long src_size,
+void *patch_delta(const void *src_buf, unsigned long src_size,
const void *delta_buf, unsigned long delta_size,
unsigned long *dst_size)
{
return 0;
}
+/* Three functions to allow overloaded pointer return; see linux/err.h */
+static inline void *ERR_PTR(long error)
+{
+ return (void *) error;
+}
+
+static inline long PTR_ERR(const void *ptr)
+{
+ return (long) ptr;
+}
+
+static inline long IS_ERR(const void *ptr)
+{
+ return (unsigned long)ptr > (unsigned long)-1000L;
+}
+
+/*
+ * "refresh" does not calculate a new sha1 file or bring the
+ * cache up-to-date for mode/content changes. But what it
+ * _does_ do is to "re-match" the stat information of a file
+ * with the cache, so that you can refresh the cache for a
+ * file that hasn't been changed but where the stat entry is
+ * out of date.
+ *
+ * For example, you'd want to do this after doing a "git-read-tree",
+ * to link up the stat cache details with the proper files.
+ */
+static struct cache_entry *refresh_entry(struct cache_entry *ce, int really)
+{
+ struct stat st;
+ struct cache_entry *updated;
+ int changed, size;
+
+ if (lstat(ce->name, &st) < 0)
+ return ERR_PTR(-errno);
+
+ changed = ce_match_stat(ce, &st, really);
+ if (!changed) {
+ if (really && assume_unchanged &&
+ !(ce->ce_flags & htons(CE_VALID)))
+ ; /* mark this one VALID again */
+ else
+ return NULL;
+ }
+
+ if (ce_modified(ce, &st, really))
+ return ERR_PTR(-EINVAL);
+
+ size = ce_size(ce);
+ updated = xmalloc(size);
+ memcpy(updated, ce, size);
+ fill_stat_cache_info(updated, &st);
+
+ /* In this case, if really is not set, we should leave
+ * CE_VALID bit alone. Otherwise, paths marked with
+ * --no-assume-unchanged (i.e. things to be edited) will
+ * reacquire CE_VALID bit automatically, which is not
+ * really what we want.
+ */
+ if (!really && assume_unchanged && !(ce->ce_flags & htons(CE_VALID)))
+ updated->ce_flags &= ~htons(CE_VALID);
+
+ return updated;
+}
+
+int refresh_cache(unsigned int flags)
+{
+ int i;
+ int has_errors = 0;
+ int really = (flags & REFRESH_REALLY) != 0;
+ int allow_unmerged = (flags & REFRESH_UNMERGED) != 0;
+ int quiet = (flags & REFRESH_QUIET) != 0;
+ int not_new = (flags & REFRESH_IGNORE_MISSING) != 0;
+
+ for (i = 0; i < active_nr; i++) {
+ struct cache_entry *ce, *new;
+ ce = active_cache[i];
+ if (ce_stage(ce)) {
+ while ((i < active_nr) &&
+ ! strcmp(active_cache[i]->name, ce->name))
+ i++;
+ i--;
+ if (allow_unmerged)
+ continue;
+ printf("%s: needs merge\n", ce->name);
+ has_errors = 1;
+ continue;
+ }
+
+ new = refresh_entry(ce, really);
+ if (!new)
+ continue;
+ if (IS_ERR(new)) {
+ if (not_new && PTR_ERR(new) == -ENOENT)
+ continue;
+ if (really && PTR_ERR(new) == -EINVAL) {
+ /* If we are doing --really-refresh that
+ * means the index is not valid anymore.
+ */
+ ce->ce_flags &= ~htons(CE_VALID);
+ active_cache_changed = 1;
+ }
+ if (quiet)
+ continue;
+ printf("%s: needs update\n", ce->name);
+ has_errors = 1;
+ continue;
+ }
+ active_cache_changed = 1;
+ /* You can NOT just free active_cache[i] here, since it
+ * might not be necessarily malloc()ed but can also come
+ * from mmap(). */
+ active_cache[i] = new;
+ }
+ return has_errors;
+}
+
static int verify_hdr(struct cache_header *hdr, unsigned long size)
{
SHA_CTX c;
active_nr = ntohl(hdr->hdr_entries);
active_alloc = alloc_nr(active_nr);
- active_cache = calloc(active_alloc, sizeof(struct cache_entry *));
+ active_cache = xcalloc(active_alloc, sizeof(struct cache_entry *));
offset = sizeof(*hdr);
for (i = 0; i < active_nr; i++) {
#include <sys/time.h>
#include <signal.h>
+static int reset = 0;
static int merge = 0;
static int update = 0;
static int index_only = 0;
{
struct stat st;
- if (index_only)
+ if (index_only || reset)
return;
if (!lstat(ce->name, &st)) {
return;
errno = 0;
}
+ if (reset) {
+ ce->ce_flags |= htons(CE_UPDATE);
+ return;
+ }
if (errno == ENOENT)
return;
die("Entry '%s' not uptodate. Cannot merge.", ce->name);
}
+/*
+ * We do not want to remove or overwrite a working tree file that
+ * is not tracked.
+ */
+static void verify_absent(const char *path, const char *action)
+{
+ struct stat st;
+
+ if (index_only || reset || !update)
+ return;
+ if (!lstat(path, &st))
+ die("Untracked working tree file '%s' "
+ "would be %s by merge.", path, action);
+}
+
static int merged_entry(struct cache_entry *merge, struct cache_entry *old)
{
merge->ce_flags |= htons(CE_UPDATE);
verify_uptodate(old);
}
}
+ else
+ verify_absent(merge->name, "overwritten");
+
merge->ce_flags &= ~htons(CE_STAGEMASK);
add_cache_entry(merge, ADD_CACHE_OK_TO_ADD);
return 1;
{
if (old)
verify_uptodate(old);
+ else
+ verify_absent(ce->name, "removed");
ce->ce_mode = 0;
add_cache_entry(ce, ADD_CACHE_OK_TO_ADD);
return 1;
int count;
int head_match = 0;
int remote_match = 0;
+ const char *path = NULL;
int df_conflict_head = 0;
int df_conflict_remote = 0;
for (i = 1; i < head_idx; i++) {
if (!stages[i])
any_anc_missing = 1;
- else
+ else {
+ if (!path)
+ path = stages[i]->name;
no_anc_exists = 0;
+ }
}
index = stages[0];
remote = NULL;
}
+ if (!path && index)
+ path = index->name;
+ if (!path && head)
+ path = head->name;
+ if (!path && remote)
+ path = remote->name;
+
/* First, if there's a #16 situation, note that to prevent #13
- * and #14.
+ * and #14.
*/
if (!same(remote, head)) {
for (i = 1; i < head_idx; i++) {
(remote_deleted && head && head_match)) {
if (index)
return deleted_entry(index, index);
+ else if (path)
+ verify_absent(path, "removed");
return 0;
}
/*
if (index) {
verify_uptodate(index);
}
+ else if (path)
+ verify_absent(path, "overwritten");
nontrivial_merge = 1;
merge_size);
if (!a)
- return 0;
+ return deleted_entry(old, old);
if (old && same(old, a)) {
+ if (reset) {
+ struct stat st;
+ if (lstat(old->name, &st) ||
+ ce_match_stat(old, &st, 1))
+ old->ce_flags |= htons(CE_UPDATE);
+ }
return keep_entry(old);
}
- return merged_entry(a, NULL);
+ return merged_entry(a, old);
}
static int read_cache_unmerged(void)
int main(int argc, char **argv)
{
- int i, newfd, reset, stage = 0;
+ int i, newfd, stage = 0;
unsigned char sha1[20];
merge_fn_t fn = NULL;
if (1 < index_only + update)
usage(read_tree_usage);
- if (get_sha1(arg, sha1) < 0)
- usage(read_tree_usage);
+ if (get_sha1(arg, sha1))
+ die("Not a valid object name %s", arg);
if (list_tree(sha1) < 0)
die("failed to unpack tree object %s", arg);
stage++;
return -1;
}
-static int do_for_each_ref(const char *base, int (*fn)(const char *path, const unsigned char *sha1))
+static int do_for_each_ref(const char *base, int (*fn)(const char *path, const unsigned char *sha1), int trim)
{
int retval = 0;
DIR *dir = opendir(git_path("%s", base));
if (stat(git_path("%s", path), &st) < 0)
continue;
if (S_ISDIR(st.st_mode)) {
- retval = do_for_each_ref(path, fn);
+ retval = do_for_each_ref(path, fn, trim);
if (retval)
break;
continue;
"commit object!", path);
continue;
}
- retval = fn(path, sha1);
+ retval = fn(path + trim, sha1);
if (retval)
break;
}
int for_each_ref(int (*fn)(const char *path, const unsigned char *sha1))
{
- return do_for_each_ref("refs", fn);
+ return do_for_each_ref("refs", fn, 0);
+}
+
+int for_each_tag_ref(int (*fn)(const char *path, const unsigned char *sha1))
+{
+ return do_for_each_ref("refs/tags", fn, 10);
+}
+
+int for_each_branch_ref(int (*fn)(const char *path, const unsigned char *sha1))
+{
+ return do_for_each_ref("refs/heads", fn, 11);
+}
+
+int for_each_remote_ref(int (*fn)(const char *path, const unsigned char *sha1))
+{
+ return do_for_each_ref("refs/remotes", fn, 13);
}
static char *ref_file_name(const char *ref)
int get_ref_sha1(const char *ref, unsigned char *sha1)
{
- const char *filename;
-
if (check_ref_format(ref))
return -1;
- filename = git_path("refs/%s", ref);
- return read_ref(filename, sha1);
+ return read_ref(git_path("refs/%s", ref), sha1);
}
static int lock_ref_file(const char *filename, const char *lock_filename,
*/
extern int head_ref(int (*fn)(const char *path, const unsigned char *sha1));
extern int for_each_ref(int (*fn)(const char *path, const unsigned char *sha1));
+extern int for_each_tag_ref(int (*fn)(const char *path, const unsigned char *sha1));
+extern int for_each_branch_ref(int (*fn)(const char *path, const unsigned char *sha1));
+extern int for_each_remote_ref(int (*fn)(const char *path, const unsigned char *sha1));
/** Reads the refs file specified into sha1 **/
extern int get_ref_sha1(const char *ref, unsigned char *sha1);
static int get_value(const char* key_, const char* regex_)
{
- int i;
+ char *tl;
- key = malloc(strlen(key_)+1);
- for (i = 0; key_[i]; i++)
- key[i] = tolower(key_[i]);
- key[i] = 0;
+ key = strdup(key_);
+ for (tl=key+strlen(key)-1; tl >= key && *tl != '.'; --tl)
+ *tl = tolower(*tl);
+ for (tl=key; *tl && *tl != '.'; ++tl)
+ *tl = tolower(*tl);
if (use_key_regexp) {
key_regexp = (regex_t*)malloc(sizeof(regex_t));
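
Only the first and the last dot-separated components of the key are
lowercased by the loops above, so the section and variable names match
case-insensitively while the subsection name stays case-sensitive.
A hedged sketch, reusing the key from the repo-config test below:

    $ git-repo-config Version.1.2.3eX.Alpha beta
    $ git-repo-config version.1.2.3ex.alpha   ;# wrong subsection case: not found
    $ git-repo-config version.1.2.3eX.alpha
    beta
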
+++ /dev/null
-#include "cache.h"
-#include "refs.h"
-#include "tag.h"
-#include "commit.h"
-#include "tree.h"
-#include "blob.h"
-#include "tree-walk.h"
-#include "diff.h"
-#include "revision.h"
-
-/* bits #0-15 in revision.h */
-
-#define COUNTED (1u<<16)
-
-static const char rev_list_usage[] =
-"git-rev-list [OPTION] <commit-id>... [ -- paths... ]\n"
-" limiting output:\n"
-" --max-count=nr\n"
-" --max-age=epoch\n"
-" --min-age=epoch\n"
-" --sparse\n"
-" --no-merges\n"
-" --remove-empty\n"
-" --all\n"
-" ordering output:\n"
-" --topo-order\n"
-" --date-order\n"
-" formatting output:\n"
-" --parents\n"
-" --objects | --objects-edge\n"
-" --unpacked\n"
-" --header | --pretty\n"
-" --abbrev=nr | --no-abbrev\n"
-" --abbrev-commit\n"
-" special purpose:\n"
-" --bisect"
-;
-
-struct rev_info revs;
-
-static int bisect_list = 0;
-static int show_timestamp = 0;
-static int hdr_termination = 0;
-static const char *header_prefix;
-
-static void show_commit(struct commit *commit)
-{
- if (show_timestamp)
- printf("%lu ", commit->date);
- if (header_prefix)
- fputs(header_prefix, stdout);
- if (commit->object.flags & BOUNDARY)
- putchar('-');
- if (revs.abbrev_commit && revs.abbrev)
- fputs(find_unique_abbrev(commit->object.sha1, revs.abbrev),
- stdout);
- else
- fputs(sha1_to_hex(commit->object.sha1), stdout);
- if (revs.parents) {
- struct commit_list *parents = commit->parents;
- while (parents) {
- struct object *o = &(parents->item->object);
- parents = parents->next;
- if (o->flags & TMP_MARK)
- continue;
- printf(" %s", sha1_to_hex(o->sha1));
- o->flags |= TMP_MARK;
- }
- /* TMP_MARK is a general purpose flag that can
- * be used locally, but the user should clean
- * things up after it is done with them.
- */
- for (parents = commit->parents;
- parents;
- parents = parents->next)
- parents->item->object.flags &= ~TMP_MARK;
- }
- if (revs.commit_format == CMIT_FMT_ONELINE)
- putchar(' ');
- else
- putchar('\n');
-
- if (revs.verbose_header) {
- static char pretty_header[16384];
- pretty_print_commit(revs.commit_format, commit, ~0,
- pretty_header, sizeof(pretty_header),
- revs.abbrev, NULL);
- printf("%s%c", pretty_header, hdr_termination);
- }
- fflush(stdout);
-}
-
-static struct object_list **process_blob(struct blob *blob,
- struct object_list **p,
- struct name_path *path,
- const char *name)
-{
- struct object *obj = &blob->object;
-
- if (!revs.blob_objects)
- return p;
- if (obj->flags & (UNINTERESTING | SEEN))
- return p;
- obj->flags |= SEEN;
- return add_object(obj, p, path, name);
-}
-
-static struct object_list **process_tree(struct tree *tree,
- struct object_list **p,
- struct name_path *path,
- const char *name)
-{
- struct object *obj = &tree->object;
- struct tree_entry_list *entry;
- struct name_path me;
-
- if (!revs.tree_objects)
- return p;
- if (obj->flags & (UNINTERESTING | SEEN))
- return p;
- if (parse_tree(tree) < 0)
- die("bad tree object %s", sha1_to_hex(obj->sha1));
- obj->flags |= SEEN;
- p = add_object(obj, p, path, name);
- me.up = path;
- me.elem = name;
- me.elem_len = strlen(name);
- entry = tree->entries;
- tree->entries = NULL;
- while (entry) {
- struct tree_entry_list *next = entry->next;
- if (entry->directory)
- p = process_tree(entry->item.tree, p, &me, entry->name);
- else
- p = process_blob(entry->item.blob, p, &me, entry->name);
- free(entry);
- entry = next;
- }
- return p;
-}
-
-static void show_commit_list(struct rev_info *revs)
-{
- struct commit *commit;
- struct object_list *objects = NULL, **p = &objects, *pending;
-
- while ((commit = get_revision(revs)) != NULL) {
- p = process_tree(commit->tree, p, NULL, "");
- show_commit(commit);
- }
- for (pending = revs->pending_objects; pending; pending = pending->next) {
- struct object *obj = pending->item;
- const char *name = pending->name;
- if (obj->flags & (UNINTERESTING | SEEN))
- continue;
- if (obj->type == tag_type) {
- obj->flags |= SEEN;
- p = add_object(obj, p, NULL, name);
- continue;
- }
- if (obj->type == tree_type) {
- p = process_tree((struct tree *)obj, p, NULL, name);
- continue;
- }
- if (obj->type == blob_type) {
- p = process_blob((struct blob *)obj, p, NULL, name);
- continue;
- }
- die("unknown pending object %s (%s)", sha1_to_hex(obj->sha1), name);
- }
- while (objects) {
- /* An object with name "foo\n0000000..." can be used to
- * confuse downstream git-pack-objects very badly.
- */
- const char *ep = strchr(objects->name, '\n');
- if (ep) {
- printf("%s %.*s\n", sha1_to_hex(objects->item->sha1),
- (int) (ep - objects->name),
- objects->name);
- }
- else
- printf("%s %s\n", sha1_to_hex(objects->item->sha1), objects->name);
- objects = objects->next;
- }
-}
-
-/*
- * This is a truly stupid algorithm, but it's only
- * used for bisection, and we just don't care enough.
- *
- * We care just barely enough to avoid recursing for
- * non-merge entries.
- */
-static int count_distance(struct commit_list *entry)
-{
- int nr = 0;
-
- while (entry) {
- struct commit *commit = entry->item;
- struct commit_list *p;
-
- if (commit->object.flags & (UNINTERESTING | COUNTED))
- break;
- if (!revs.prune_fn || (commit->object.flags & TREECHANGE))
- nr++;
- commit->object.flags |= COUNTED;
- p = commit->parents;
- entry = p;
- if (p) {
- p = p->next;
- while (p) {
- nr += count_distance(p);
- p = p->next;
- }
- }
- }
-
- return nr;
-}
-
-static void clear_distance(struct commit_list *list)
-{
- while (list) {
- struct commit *commit = list->item;
- commit->object.flags &= ~COUNTED;
- list = list->next;
- }
-}
-
-static struct commit_list *find_bisection(struct commit_list *list)
-{
- int nr, closest;
- struct commit_list *p, *best;
-
- nr = 0;
- p = list;
- while (p) {
- if (!revs.prune_fn || (p->item->object.flags & TREECHANGE))
- nr++;
- p = p->next;
- }
- closest = 0;
- best = list;
-
- for (p = list; p; p = p->next) {
- int distance;
-
- if (revs.prune_fn && !(p->item->object.flags & TREECHANGE))
- continue;
-
- distance = count_distance(p);
- clear_distance(list);
- if (nr - distance < distance)
- distance = nr - distance;
- if (distance > closest) {
- best = p;
- closest = distance;
- }
- }
- if (best)
- best->next = NULL;
- return best;
-}
-
-static void mark_edge_parents_uninteresting(struct commit *commit)
-{
- struct commit_list *parents;
-
- for (parents = commit->parents; parents; parents = parents->next) {
- struct commit *parent = parents->item;
- if (!(parent->object.flags & UNINTERESTING))
- continue;
- mark_tree_uninteresting(parent->tree);
- if (revs.edge_hint && !(parent->object.flags & SHOWN)) {
- parent->object.flags |= SHOWN;
- printf("-%s\n", sha1_to_hex(parent->object.sha1));
- }
- }
-}
-
-static void mark_edges_uninteresting(struct commit_list *list)
-{
- for ( ; list; list = list->next) {
- struct commit *commit = list->item;
-
- if (commit->object.flags & UNINTERESTING) {
- mark_tree_uninteresting(commit->tree);
- continue;
- }
- mark_edge_parents_uninteresting(commit);
- }
-}
-
-int main(int argc, const char **argv)
-{
- struct commit_list *list;
- int i;
-
- init_revisions(&revs);
- revs.abbrev = 0;
- revs.commit_format = CMIT_FMT_UNSPECIFIED;
- argc = setup_revisions(argc, argv, &revs, NULL);
-
- for (i = 1 ; i < argc; i++) {
- const char *arg = argv[i];
-
- if (!strcmp(arg, "--header")) {
- revs.verbose_header = 1;
- continue;
- }
- if (!strcmp(arg, "--timestamp")) {
- show_timestamp = 1;
- continue;
- }
- if (!strcmp(arg, "--bisect")) {
- bisect_list = 1;
- continue;
- }
- usage(rev_list_usage);
-
- }
- if (revs.commit_format != CMIT_FMT_UNSPECIFIED) {
- /* The command line has a --pretty */
- hdr_termination = '\n';
- if (revs.commit_format == CMIT_FMT_ONELINE)
- header_prefix = "";
- else
- header_prefix = "commit ";
- }
- else if (revs.verbose_header)
- /* Only --header was specified */
- revs.commit_format = CMIT_FMT_RAW;
-
- list = revs.commits;
-
- if ((!list &&
- (!(revs.tag_objects||revs.tree_objects||revs.blob_objects) &&
- !revs.pending_objects)) ||
- revs.diff)
- usage(rev_list_usage);
-
- save_commit_buffer = revs.verbose_header;
- track_object_refs = 0;
- if (bisect_list)
- revs.limited = 1;
-
- prepare_revision_walk(&revs);
- if (revs.tree_objects)
- mark_edges_uninteresting(revs.commits);
-
- if (bisect_list)
- revs.commits = find_bisection(revs.commits);
-
- show_commit_list(&revs);
-
- return 0;
-}
"--all",
"--bisect",
"--dense",
+ "--branches",
"--header",
"--max-age=",
"--max-count=",
"--objects-edge",
"--parents",
"--pretty",
+ "--remotes",
"--sparse",
+ "--tags",
"--topo-order",
"--date-order",
"--unpacked",
int i, as_is = 0, verify = 0;
unsigned char sha1[20];
const char *prefix = setup_git_directory();
-
+
git_config(git_default_config);
for (i = 1; i < argc; i++) {
for_each_ref(show_reference);
continue;
}
+ if (!strcmp(arg, "--branches")) {
+ for_each_branch_ref(show_reference);
+ continue;
+ }
+ if (!strcmp(arg, "--tags")) {
+ for_each_tag_ref(show_reference);
+ continue;
+ }
+ if (!strcmp(arg, "--remotes")) {
+ for_each_remote_ref(show_reference);
+ continue;
+ }
if (!strcmp(arg, "--show-prefix")) {
if (prefix)
puts(prefix);
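
The three new options make rev-parse dump every ref of the corresponding
kind. Because for_each_tag_ref(), for_each_branch_ref() and
for_each_remote_ref() trim "refs/tags/", "refs/heads/" and "refs/remotes/"
(10, 11 and 13 bytes including the slash), their callbacks only ever see
the short ref name. Usage mirrors `--all`:

    $ git-rev-parse --branches   ;# every ref under refs/heads
    $ git-rev-parse --tags       ;# every ref under refs/tags
    $ git-rev-parse --remotes    ;# every ref under refs/remotes
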
local_flags = UNINTERESTING;
arg++;
}
- if (get_sha1(arg, sha1) < 0) {
+ if (get_sha1(arg, sha1)) {
int j;
if (seen_dashdash || local_flags)
if (def && !revs->pending_objects) {
unsigned char sha1[20];
struct object *object;
- if (get_sha1(def, sha1) < 0)
+ if (get_sha1(def, sha1))
die("bad default revision '%s'", def);
object = get_reference(revs, def, sha1, 0);
add_pending_object(revs, object, def);
struct alternate_object_database *alt_odb_list;
static struct alternate_object_database **alt_odb_tail;
+static void read_info_alternates(const char * alternates, int depth);
+
/*
* Prepare alternate object database registry.
*
* SHA1, an extra slash for the first level indirection, and the
* terminating NUL.
*/
-static void link_alt_odb_entries(const char *alt, const char *ep, int sep,
- const char *relative_base)
+static int link_alt_odb_entry(const char * entry, int len, const char * relative_base, int depth)
{
- const char *cp, *last;
- struct alternate_object_database *ent;
+ struct stat st;
const char *objdir = get_object_directory();
+ struct alternate_object_database *ent;
+ struct alternate_object_database *alt;
+ /* 43 = 40-byte + 2 '/' + terminating NUL */
+ int pfxlen = len;
+ int entlen = pfxlen + 43;
int base_len = -1;
+ if (*entry != '/' && relative_base) {
+ /* Relative alt-odb */
+ if (base_len < 0)
+ base_len = strlen(relative_base) + 1;
+ entlen += base_len;
+ pfxlen += base_len;
+ }
+ ent = xmalloc(sizeof(*ent) + entlen);
+
+ if (*entry != '/' && relative_base) {
+ memcpy(ent->base, relative_base, base_len - 1);
+ ent->base[base_len - 1] = '/';
+ memcpy(ent->base + base_len, entry, len);
+ }
+ else
+ memcpy(ent->base, entry, pfxlen);
+
+ ent->name = ent->base + pfxlen + 1;
+ ent->base[pfxlen + 3] = '/';
+ ent->base[pfxlen] = ent->base[entlen-1] = 0;
+
+ /* Detect cases where alternate disappeared */
+ if (stat(ent->base, &st) || !S_ISDIR(st.st_mode)) {
+ error("object directory %s does not exist; "
+ "check .git/objects/info/alternates.",
+ ent->base);
+ free(ent);
+ return -1;
+ }
+
+ /* Prevent the common mistake of listing the same
+ * thing twice, or object directory itself.
+ */
+ for (alt = alt_odb_list; alt; alt = alt->next) {
+ if (!memcmp(ent->base, alt->base, pfxlen)) {
+ free(ent);
+ return -1;
+ }
+ }
+ if (!memcmp(ent->base, objdir, pfxlen)) {
+ free(ent);
+ return -1;
+ }
+
+ /* add the alternate entry */
+ *alt_odb_tail = ent;
+ alt_odb_tail = &(ent->next);
+ ent->next = NULL;
+
+ /* recursively add alternates */
+ read_info_alternates(ent->base, depth + 1);
+
+ ent->base[pfxlen] = '/';
+
+ return 0;
+}
+
+static void link_alt_odb_entries(const char *alt, const char *ep, int sep,
+ const char *relative_base, int depth)
+{
+ const char *cp, *last;
+
+ if (depth > 5) {
+ error("%s: ignoring alternate object stores, nesting too deep.",
+ relative_base);
+ return;
+ }
+
last = alt;
while (last < ep) {
cp = last;
last = cp + 1;
continue;
}
- for ( ; cp < ep && *cp != sep; cp++)
- ;
+ while (cp < ep && *cp != sep)
+ cp++;
if (last != cp) {
- struct stat st;
- struct alternate_object_database *alt;
- /* 43 = 40-byte + 2 '/' + terminating NUL */
- int pfxlen = cp - last;
- int entlen = pfxlen + 43;
-
- if (*last != '/' && relative_base) {
- /* Relative alt-odb */
- if (base_len < 0)
- base_len = strlen(relative_base) + 1;
- entlen += base_len;
- pfxlen += base_len;
- }
- ent = xmalloc(sizeof(*ent) + entlen);
-
- if (*last != '/' && relative_base) {
- memcpy(ent->base, relative_base, base_len - 1);
- ent->base[base_len - 1] = '/';
- memcpy(ent->base + base_len,
- last, cp - last);
- }
- else
- memcpy(ent->base, last, pfxlen);
-
- ent->name = ent->base + pfxlen + 1;
- ent->base[pfxlen + 3] = '/';
- ent->base[pfxlen] = ent->base[entlen-1] = 0;
-
- /* Detect cases where alternate disappeared */
- if (stat(ent->base, &st) || !S_ISDIR(st.st_mode)) {
- error("object directory %s does not exist; "
- "check .git/objects/info/alternates.",
- ent->base);
- goto bad;
- }
- ent->base[pfxlen] = '/';
-
- /* Prevent the common mistake of listing the same
- * thing twice, or object directory itself.
- */
- for (alt = alt_odb_list; alt; alt = alt->next)
- if (!memcmp(ent->base, alt->base, pfxlen))
- goto bad;
- if (!memcmp(ent->base, objdir, pfxlen)) {
- bad:
- free(ent);
- }
- else {
- *alt_odb_tail = ent;
- alt_odb_tail = &(ent->next);
- ent->next = NULL;
+ if ((*last != '/') && depth) {
+ error("%s: ignoring relative alternate object store %s",
+ relative_base, last);
+ } else {
+ link_alt_odb_entry(last, cp - last,
+ relative_base, depth);
}
}
while (cp < ep && *cp == sep)
}
}
-void prepare_alt_odb(void)
+static void read_info_alternates(const char * relative_base, int depth)
{
- char path[PATH_MAX];
char *map;
- int fd;
struct stat st;
- char *alt;
-
- alt = getenv(ALTERNATE_DB_ENVIRONMENT);
- if (!alt) alt = "";
-
- if (alt_odb_tail)
- return;
- alt_odb_tail = &alt_odb_list;
- link_alt_odb_entries(alt, alt + strlen(alt), ':', NULL);
+ char path[PATH_MAX];
+ int fd;
- sprintf(path, "%s/info/alternates", get_object_directory());
+ sprintf(path, "%s/info/alternates", relative_base);
fd = open(path, O_RDONLY);
if (fd < 0)
return;
if (map == MAP_FAILED)
return;
- link_alt_odb_entries(map, map + st.st_size, '\n',
- get_object_directory());
+ link_alt_odb_entries(map, map + st.st_size, '\n', relative_base, depth);
+
munmap(map, st.st_size);
}
+void prepare_alt_odb(void)
+{
+ char *alt;
+
+ alt = getenv(ALTERNATE_DB_ENVIRONMENT);
+ if (!alt) alt = "";
+
+ if (alt_odb_tail)
+ return;
+ alt_odb_tail = &alt_odb_list;
+ link_alt_odb_entries(alt, alt + strlen(alt), ':', NULL, 0);
+
+ read_info_alternates(get_object_directory(), 0);
+}
+
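
read_info_alternates() above makes alternates transitive: each object store
named in objects/info/alternates is scanned for its own alternates file,
nesting deeper than five levels is ignored with a warning, and entries that
duplicate an earlier one, name the object directory itself, or no longer
exist are dropped. Relative entries are resolved against the object
directory listing them and are honoured only in the repository's own file,
not in nested ones. A sketch of populating the file by hand (paths are
illustrative, cf. the transitive info/alternates test below):

    $ echo '/srv/pool.git/objects' >>.git/objects/info/alternates
    $ echo '../../../B/.git/objects' >>.git/objects/info/alternates
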
static char *find_sha1_file(const unsigned char *sha1, struct stat *st)
{
char *name = sha1_file_name(sha1);
int mi = (lo + hi) / 2;
int cmp = memcmp(index + 24 * mi + 4, sha1, 20);
if (!cmp) {
- e->offset = ntohl(*((int*)(index + 24 * mi)));
+ e->offset = ntohl(*((unsigned int *)(index + 24 * mi)));
memcpy(e->sha1, sha1, 20);
e->p = p;
return 1;
{
int ret;
unsigned unused;
+ int namelen = strlen(name);
+ const char *cp;
prepare_alt_odb();
- ret = get_sha1_1(name, strlen(name), sha1);
- if (ret < 0) {
- const char *cp = strchr(name, ':');
- if (cp) {
- unsigned char tree_sha1[20];
- if (!get_sha1_1(name, cp-name, tree_sha1))
- return get_tree_entry(tree_sha1, cp+1, sha1,
- &unused);
+ ret = get_sha1_1(name, namelen, sha1);
+ if (!ret)
+ return ret;
+ /* sha1:path --> object name of path in ent sha1
+ * :path -> object name of path in index
+ * :[0-3]:path -> object name of path in index at stage
+ */
+ if (name[0] == ':') {
+ int stage = 0;
+ struct cache_entry *ce;
+ int pos;
+ if (namelen < 3 ||
+ name[2] != ':' ||
+ name[1] < '0' || '3' < name[1])
+ cp = name + 1;
+ else {
+ stage = name[1] - '0';
+ cp = name + 3;
}
+ namelen = namelen - (cp - name);
+ if (!active_cache)
+ read_cache();
+ if (active_nr < 0)
+ return -1;
+ pos = cache_name_pos(cp, namelen);
+ if (pos < 0)
+ pos = -pos - 1;
+ while (pos < active_nr) {
+ ce = active_cache[pos];
+ if (ce_namelen(ce) != namelen ||
+ memcmp(ce->name, cp, namelen))
+ break;
+ if (ce_stage(ce) == stage) {
+ memcpy(sha1, ce->sha1, 20);
+ return 0;
+ }
+ pos++;
+ }
+ return -1;
+ }
+ cp = strchr(name, ':');
+ if (cp) {
+ unsigned char tree_sha1[20];
+ if (!get_sha1_1(name, cp-name, tree_sha1))
+ return get_tree_entry(tree_sha1, cp+1, sha1,
+ &unused);
}
return ret;
}
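
Any command that resolves names through get_sha1(), such as git-rev-parse,
can now use the extended syntax described in the comment above. The paths
below are purely illustrative:

    $ git-rev-parse HEAD:Makefile   ;# blob at that path in HEAD's tree
    $ git-rev-parse :Makefile       ;# blob recorded for the path in the index
    $ git-rev-parse :2:Makefile     ;# stage 2 ("ours") entry of an unmerged path
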
commit_id = argv[arg];
url = argv[arg + 1];
if (get_sha1(commit_id, sha1))
- usage(ssh_push_usage);
+ die("Not a valid object name %s", commit_id);
memcpy(hex, sha1_to_hex(sha1), sizeof(hex));
argv[arg] = hex;
echo nitfol >nitfol &&
echo bozbar >bozbar &&
echo rezrov >rezrov &&
- echo yomin >yomin &&
git-update-index --add nitfol bozbar rezrov &&
treeH=`git-write-tree` &&
echo treeH $treeH &&
test_expect_success \
'1, 2, 3 - no carry forward' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
git-read-tree -m -u $treeH $treeM &&
git-ls-files --stage >1-3.out &&
cmp M.out 1-3.out &&
check_cache_at frotz clean &&
check_cache_at nitfol clean'
-echo '+100644 X 0 yomin' >expected
-
test_expect_success \
'4 - carry forward local addition.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
+ echo "+100644 X 0 yomin" >expected &&
+ echo yomin >yomin &&
git-update-index --add yomin &&
git-read-tree -m -u $treeH $treeM &&
git-ls-files --stage >4.out || return 1
test_expect_success \
'5 - carry forward local addition.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
+ git-read-tree -m -u $treeH &&
echo yomin >yomin &&
git-update-index --add yomin &&
echo yomin yomin >yomin &&
test_expect_success \
'6 - local addition already has the same.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
+ echo frotz >frotz &&
git-update-index --add frotz &&
git-read-tree -m -u $treeH $treeM &&
git-ls-files --stage >6.out &&
test_expect_success \
'7 - local addition already has the same.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo frotz >frotz &&
git-update-index --add frotz &&
echo frotz frotz >frotz &&
test_expect_success \
'8 - conflicting addition.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo frotz frotz >frotz &&
git-update-index --add frotz &&
if git-read-tree -m -u $treeH $treeM; then false; else :; fi'
test_expect_success \
'9 - conflicting addition.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo frotz frotz >frotz &&
git-update-index --add frotz &&
echo frotz >frotz &&
test_expect_success \
'10 - path removed.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo rezrov >rezrov &&
git-update-index --add rezrov &&
git-read-tree -m -u $treeH $treeM &&
test_expect_success \
'11 - dirty path removed.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo rezrov >rezrov &&
git-update-index --add rezrov &&
echo rezrov rezrov >rezrov &&
test_expect_success \
'12 - unmatching local changes being removed.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo rezrov rezrov >rezrov &&
git-update-index --add rezrov &&
if git-read-tree -m -u $treeH $treeM; then false; else :; fi'
test_expect_success \
'13 - unmatching local changes being removed.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo rezrov rezrov >rezrov &&
git-update-index --add rezrov &&
echo rezrov >rezrov &&
test_expect_success \
'14 - unchanged in two heads.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo nitfol nitfol >nitfol &&
git-update-index --add nitfol &&
git-read-tree -m -u $treeH $treeM &&
test_expect_success \
'15 - unchanged in two heads.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo nitfol nitfol >nitfol &&
git-update-index --add nitfol &&
echo nitfol nitfol nitfol >nitfol &&
test_expect_success \
'16 - conflicting local change.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo bozbar bozbar >bozbar &&
git-update-index --add bozbar &&
if git-read-tree -m -u $treeH $treeM; then false; else :; fi'
test_expect_success \
'17 - conflicting local change.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo bozbar bozbar >bozbar &&
git-update-index --add bozbar &&
echo bozbar bozbar bozbar >bozbar &&
test_expect_success \
'18 - local change already having a good result.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo gnusto >bozbar &&
git-update-index --add bozbar &&
git-read-tree -m -u $treeH $treeM &&
test_expect_success \
'19 - local change already having a good result, further modified.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo gnusto >bozbar &&
git-update-index --add bozbar &&
echo gnusto gnusto >bozbar &&
test_expect_success \
'20 - no local change, use new tree.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo bozbar >bozbar &&
git-update-index --add bozbar &&
git-read-tree -m -u $treeH $treeM &&
test_expect_success \
'21 - no local change, dirty cache.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index nitfol bozbar rezrov frotz &&
+ git-read-tree --reset -u $treeH &&
echo bozbar >bozbar &&
git-update-index --add bozbar &&
echo gnusto gnusto >bozbar &&
# Also make sure we did not break DF vs DF/DF case.
test_expect_success \
'DF vs DF/DF case setup.' \
- 'rm -f .git/index &&
+ 'rm -f .git/index &&
echo DF >DF &&
git-update-index --add DF &&
treeDF=`git-write-tree` &&
test_expect_success 'correct key' 'git-repo-config 123456.a123 987'
test_expect_success 'hierarchical section' \
- 'git-repo-config 1.2.3.alpha beta'
+ 'git-repo-config Version.1.2.3eX.Alpha beta'
cat > expect << EOF
[beta] ; silly comment # another comment
NoNewLine = wow2 for me
[123456]
a123 = 987
-[1.2.3]
- alpha = beta
+[Version "1.2.3eX"]
+ Alpha = beta
EOF
test_expect_success 'hierarchical section value' 'cmp .git/config expect'
beta.noindent=sillyValue
nextsection.nonewline=wow2 for me
123456.a123=987
-1.2.3.alpha=beta
+version.1.2.3eX.alpha=beta
EOF
test_expect_success 'working --list' \
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Junio C Hamano
+#
+
+test_description='git-update-index --again test.
+'
+
+. ./test-lib.sh
+
+test_expect_success 'update-index --add' \
+ 'echo hello world >file1 &&
+ echo goodbye people >file2 &&
+ git-update-index --add file1 file2 &&
+ git-ls-files -s >current &&
+ cmp current - <<\EOF
+100644 3b18e512dba79e4c8300dd08aeb37f8e728b8dad 0 file1
+100644 9db8893856a8a02eaa73470054b7c1c5a7c82e47 0 file2
+EOF'
+
+test_expect_success 'update-index --again' \
+ 'rm -f file1 &&
+ echo hello everybody >file2 &&
+ if git-update-index --again
+ then
+ echo should have refused to remove file1
+ exit 1
+ else
+ echo happy - failed as expected
+ fi &&
+ git-ls-files -s >current &&
+ cmp current - <<\EOF
+100644 3b18e512dba79e4c8300dd08aeb37f8e728b8dad 0 file1
+100644 9db8893856a8a02eaa73470054b7c1c5a7c82e47 0 file2
+EOF'
+
+test_expect_success 'update-index --remove --again' \
+ 'git-update-index --remove --again &&
+ git-ls-files -s >current &&
+ cmp current - <<\EOF
+100644 0f1ae1422c2bf43f117d3dbd715c988a9ed2103f 0 file2
+EOF'
+
+test_expect_success 'first commit' 'git-commit -m initial'
+
+test_expect_success 'update-index again' \
+ 'mkdir -p dir1 &&
+ echo hello world >dir1/file3 &&
+ echo goodbye people >file2 &&
+ git-update-index --add file2 dir1/file3 &&
+ echo hello everybody >file2 &&
+ echo happy >dir1/file3 &&
+ git-update-index --again &&
+ git-ls-files -s >current &&
+ cmp current - <<\EOF
+100644 53ab446c3f4e42ce9bb728a0ccb283a101be4979 0 dir1/file3
+100644 0f1ae1422c2bf43f117d3dbd715c988a9ed2103f 0 file2
+EOF'
+
+test_expect_success 'update-index --update from subdir' \
+ 'echo not so happy >file2 &&
+ cd dir1 &&
+ cat ../file2 >file3 &&
+ git-update-index --again &&
+ cd .. &&
+ git-ls-files -s >current &&
+ cmp current - <<\EOF
+100644 d7fb3f695f06c759dbf3ab00046e7cc2da22d10f 0 dir1/file3
+100644 0f1ae1422c2bf43f117d3dbd715c988a9ed2103f 0 file2
+EOF'
+
+test_expect_success 'update-index --update with pathspec' \
+ 'echo very happy >file2 &&
+ cat file2 >dir1/file3 &&
+ git-update-index --again dir1/ &&
+ git-ls-files -s >current &&
+ cmp current - <<\EOF
+100644 594fb5bb1759d90998e2bf2a38261ae8e243c760 0 dir1/file3
+100644 0f1ae1422c2bf43f117d3dbd715c988a9ed2103f 0 file2
+EOF'
+
+test_done
git-commit -m "Add C." &&
git-checkout -f master &&
+ rm -f B C &&
echo Third >> A &&
git-update-index A &&
'rm -fr Z [A-Z][A-Z] &&
git-read-tree $tree_A &&
git-checkout-index -f -a &&
- git-read-tree -m $tree_O || return 1
+ git-read-tree --reset $tree_O || return 1
git-update-index --refresh >/dev/null ;# this can exit non-zero
git-diff-files >.test-a &&
cmp_diff_files_output .test-a .test-recursive-OA'
'rm -fr Z [A-Z][A-Z] &&
git-read-tree $tree_B &&
git-checkout-index -f -a &&
- git-read-tree -m $tree_O || return 1
+ git-read-tree --reset $tree_O || return 1
git-update-index --refresh >/dev/null ;# this can exit non-zero
git-diff-files >.test-a &&
cmp_diff_files_output .test-a .test-recursive-OB'
'rm -fr Z [A-Z][A-Z] &&
git-read-tree $tree_B &&
git-checkout-index -f -a &&
- git-read-tree -m $tree_A || return 1
+ git-read-tree --reset $tree_A || return 1
git-update-index --refresh >/dev/null ;# this can exit non-zero
git-diff-files >.test-a &&
cmp_diff_files_output .test-a .test-recursive-AB'
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Junio C Hamano
+#
+
+test_description='Binary diff and apply
+'
+
+. ./test-lib.sh
+
+test_expect_success 'prepare repository' \
+ 'echo AIT >a && echo BIT >b && echo CIT >c && echo DIT >d &&
+ git-update-index --add a b c d &&
+ echo git >a &&
+ cat ../test4012.png >b &&
+ echo git >c &&
+ cat b b >d'
+
+test_expect_success 'diff without --binary' \
+ 'git-diff | git-apply --stat --summary >current &&
+ cmp current - <<\EOF
+ a | 2 +-
+ b | Bin
+ c | 2 +-
+ d | Bin
+ 4 files changed, 2 insertions(+), 2 deletions(-)
+EOF'
+
+test_expect_success 'diff with --binary' \
+ 'git-diff --binary | git-apply --stat --summary >current &&
+ cmp current - <<\EOF
+ a | 2 +-
+ b | Bin
+ c | 2 +-
+ d | Bin
+ 4 files changed, 2 insertions(+), 2 deletions(-)
+EOF'
+
+# apply needs to be able to skip the binary material correctly
+# in order to report the line number of a corrupt patch.
+test_expect_success 'apply detecting corrupt patch correctly' \
+ 'git-diff | sed -e 's/-CIT/xCIT/' >broken &&
+ if git-apply --stat --summary broken 2>detected
+ then
+ echo unhappy - should have detected an error
+ (exit 1)
+ else
+ echo happy
+ fi &&
+ detected=`cat detected` &&
+ detected=`expr "$detected" : "fatal.*at line \\([0-9]*\\)\$"` &&
+ detected=`sed -ne "${detected}p" broken` &&
+ test "$detected" = xCIT'
+
+test_expect_success 'apply detecting corrupt patch correctly' \
+ 'git-diff --binary | sed -e 's/-CIT/xCIT/' >broken &&
+ if git-apply --stat --summary broken 2>detected
+ then
+ echo unhappy - should have detected an error
+ (exit 1)
+ else
+ echo happy
+ fi &&
+ detected=`cat detected` &&
+ detected=`expr "$detected" : "fatal.*at line \\([0-9]*\\)\$"` &&
+ detected=`sed -ne "${detected}p" broken` &&
+ test "$detected" = xCIT'
+
+test_expect_success 'initial commit' 'git-commit -a -m initial'
+
+# Try removal (b), modification (d), and creation (e).
+test_expect_success 'diff-index with --binary' \
+ 'echo AIT >a && mv b e && echo CIT >c && cat e >d &&
+ git-update-index --add --remove a b c d e &&
+ tree0=`git-write-tree` &&
+ git-diff --cached --binary >current &&
+ git-apply --stat --summary current'
+
+test_expect_success 'apply binary patch' \
+ 'git-reset --hard &&
+ git-apply --binary --index <current &&
+ tree1=`git-write-tree` &&
+ test "$tree1" = "$tree0"'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (C) 2006 Martin Waitz <tali@admingilde.org>
+#
+
+test_description='test clone --reference'
+. ./test-lib.sh
+
+base_dir=`pwd`
+
+test_expect_success 'preparing first repository' \
+'test_create_repo A && cd A &&
+echo first > file1 &&
+git add file1 &&
+git commit -m initial'
+
+cd "$base_dir"
+
+test_expect_success 'preparing second repository' \
+'git clone A B && cd B &&
+echo second > file2 &&
+git add file2 &&
+git commit -m addition &&
+git repack -a -d &&
+git prune'
+
+cd "$base_dir"
+
+test_expect_success 'cloning with reference' \
+'git clone -l -s --reference B A C'
+
+cd "$base_dir"
+
+test_expect_success 'existence of info/alternates' \
+'test `wc -l <C/.git/objects/info/alternates` = 2'
+
+cd "$base_dir"
+
+test_expect_success 'pulling from reference' \
+'cd C &&
+git pull ../B'
+
+cd "$base_dir"
+
+test_expect_success 'that reference gets used' \
+'cd C &&
+echo "0 objects, 0 kilobytes" > expected &&
+git count-objects > current &&
+diff expected current'
+
+cd "$base_dir"
+
+test_expect_success 'updating origin' \
+'cd A &&
+echo third > file3 &&
+git add file3 &&
+git commit -m update &&
+git repack -a -d &&
+git prune'
+
+cd "$base_dir"
+
+test_expect_success 'pulling changes from origin' \
+'cd C &&
+git pull origin'
+
+cd "$base_dir"
+
+# the 2 local objects are commit and tree from the merge
+test_expect_success 'that alternate to origin gets used' \
+'cd C &&
+echo "2 objects" > expected &&
+git count-objects | cut -d, -f1 > current &&
+diff expected current'
+
+cd "$base_dir"
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (C) 2006 Martin Waitz <tali@admingilde.org>
+#
+
+test_description='test transitive info/alternate entries'
+. ./test-lib.sh
+
+# test that a file is not reachable in the current repository
+# but that it is after creating an info/alternates entry
+reachable_via() {
+ alternate="$1"
+ file="$2"
+ if git cat-file -e "HEAD:$file"; then return 1; fi
+ echo "$alternate" >> .git/objects/info/alternate
+ git cat-file -e "HEAD:$file"
+}
+
+test_valid_repo() {
+ git fsck-objects --full > fsck.log &&
+ test `wc -l < fsck.log` = 0
+}
+
+base_dir=`pwd`
+
+test_expect_success 'preparing first repository' \
+'test_create_repo A && cd A &&
+echo "Hello World" > file1 &&
+git add file1 &&
+git commit -m "Initial commit" file1 &&
+git repack -a -d &&
+git prune'
+
+cd "$base_dir"
+
+test_expect_success 'preparing second repository' \
+'git clone -l -s A B && cd B &&
+echo "foo bar" > file2 &&
+git add file2 &&
+git commit -m "next commit" file2 &&
+git repack -a -d -l &&
+git prune'
+
+cd "$base_dir"
+
+test_expect_success 'preparing third repository' \
+'git clone -l -s B C && cd C &&
+echo "Goodbye, cruel world" > file3 &&
+git add file3 &&
+git commit -m "one more" file3 &&
+git repack -a -d -l &&
+git prune'
+
+cd "$base_dir"
+
+test_expect_failure 'creating too deep nesting' \
+'git clone -l -s C D &&
+git clone -l -s D E &&
+git clone -l -s E F &&
+git clone -l -s F G &&
+test_valid_repo'
+
+cd "$base_dir"
+
+test_expect_success 'validity of third repository' \
+'cd C &&
+test_valid_repo'
+
+cd "$base_dir"
+
+test_expect_success 'validity of fourth repository' \
+'cd D &&
+test_valid_repo'
+
+cd "$base_dir"
+
+test_expect_success 'breaking of loops' \
+"echo '$base_dir/B/.git/objects' >> '$base_dir'/A/.git/objects/info/alternates&&
+cd C &&
+test_valid_repo"
+
+cd "$base_dir"
+
+test_expect_failure 'that info/alternates is necessary' \
+'cd C &&
+rm .git/objects/info/alternates &&
+test_valid_repo'
+
+cd "$base_dir"
+
+test_expect_success 'that relative alternate is possible for current dir' \
+'cd C &&
+echo "../../../B/.git/objects" > .git/objects/info/alternates &&
+test_valid_repo'
+
+cd "$base_dir"
+
+test_expect_failure 'that relative alternate is only possible for current dir' \
+'cd D &&
+test_valid_repo'
+
+cd "$base_dir"
+
+test_done
+
test_expect_success 'pull renaming branch into another renaming one' \
'
+ rm -f B
git reset --hard
git checkout red
git pull . white && {
strbuf_append_string(&current_path, "/");
/* FALLTHROUGH */
case 2:
- if (get_sha1(argv[1], sha1) < 0)
- usage(tar_tree_usage);
+ if (get_sha1(argv[1], sha1))
+ die("Not a valid object name %s", argv[1]);
break;
default:
usage(tar_tree_usage);
{
unsigned char sha1[20];
- if (argc != 2 || get_sha1(argv[1], sha1))
+ if (argc != 2)
usage("git-unpack-file <sha1>");
+ if (get_sha1(argv[1], sha1))
+ die("Not a valid object name %s", argv[1]);
setup_git_directory();
git_config(git_default_config);
static int allow_add;
static int allow_remove;
static int allow_replace;
-static int allow_unmerged; /* --refresh needing merge is not error */
-static int not_new; /* --refresh not having working tree files is not error */
-static int quiet; /* --refresh needing update is not error */
static int info_only;
static int force_remove;
static int verbose;
#define MARK_VALID 1
#define UNMARK_VALID 2
-
-/* Three functions to allow overloaded pointer return; see linux/err.h */
-static inline void *ERR_PTR(long error)
-{
- return (void *) error;
-}
-
-static inline long PTR_ERR(const void *ptr)
-{
- return (long) ptr;
-}
-
-static inline long IS_ERR(const void *ptr)
-{
- return (unsigned long)ptr > (unsigned long)-1000L;
-}
-
static void report(const char *fmt, ...)
{
va_list vp;
return 0;
}
-/*
- * "refresh" does not calculate a new sha1 file or bring the
- * cache up-to-date for mode/content changes. But what it
- * _does_ do is to "re-match" the stat information of a file
- * with the cache, so that you can refresh the cache for a
- * file that hasn't been changed but where the stat entry is
- * out of date.
- *
- * For example, you'd want to do this after doing a "git-read-tree",
- * to link up the stat cache details with the proper files.
- */
-static struct cache_entry *refresh_entry(struct cache_entry *ce, int really)
-{
- struct stat st;
- struct cache_entry *updated;
- int changed, size;
-
- if (lstat(ce->name, &st) < 0)
- return ERR_PTR(-errno);
-
- changed = ce_match_stat(ce, &st, really);
- if (!changed) {
- if (really && assume_unchanged &&
- !(ce->ce_flags & htons(CE_VALID)))
- ; /* mark this one VALID again */
- else
- return NULL;
- }
-
- if (ce_modified(ce, &st, really))
- return ERR_PTR(-EINVAL);
-
- size = ce_size(ce);
- updated = xmalloc(size);
- memcpy(updated, ce, size);
- fill_stat_cache_info(updated, &st);
-
- /* In this case, if really is not set, we should leave
- * CE_VALID bit alone. Otherwise, paths marked with
- * --no-assume-unchanged (i.e. things to be edited) will
- * reacquire CE_VALID bit automatically, which is not
- * really what we want.
- */
- if (!really && assume_unchanged && !(ce->ce_flags & htons(CE_VALID)))
- updated->ce_flags &= ~htons(CE_VALID);
-
- return updated;
-}
-
-static int refresh_cache(int really)
-{
- int i;
- int has_errors = 0;
-
- for (i = 0; i < active_nr; i++) {
- struct cache_entry *ce, *new;
- ce = active_cache[i];
- if (ce_stage(ce)) {
- while ((i < active_nr) &&
- ! strcmp(active_cache[i]->name, ce->name))
- i++;
- i--;
- if (allow_unmerged)
- continue;
- printf("%s: needs merge\n", ce->name);
- has_errors = 1;
- continue;
- }
-
- new = refresh_entry(ce, really);
- if (!new)
- continue;
- if (IS_ERR(new)) {
- if (not_new && PTR_ERR(new) == -ENOENT)
- continue;
- if (really && PTR_ERR(new) == -EINVAL) {
- /* If we are doing --really-refresh that
- * means the index is not valid anymore.
- */
- ce->ce_flags &= ~htons(CE_VALID);
- active_cache_changed = 1;
- }
- if (quiet)
- continue;
- printf("%s: needs update\n", ce->name);
- has_errors = 1;
- continue;
- }
- active_cache_changed = 1;
- /* You can NOT just free active_cache[i] here, since it
- * might not be necessarily malloc()ed but can also come
- * from mmap(). */
- active_cache[i] = new;
- }
- return has_errors;
-}
-
/*
* We fundamentally don't like some paths: we don't want
* dot or dot-dot anywhere, and for obvious reasons don't
die("Unable to process file %s", path);
report("add '%s'", path);
free_return:
- if (p != path)
+ if (p < path || p > path + strlen(path))
free((char*)p);
}
}
static const char update_index_usage[] =
-"git-update-index [-q] [--add] [--replace] [--remove] [--unmerged] [--refresh] [--really-refresh] [--cacheinfo] [--chmod=(+|-)x] [--assume-unchanged] [--info-only] [--force-remove] [--stdin] [--index-info] [--unresolve] [--ignore-missing] [-z] [--verbose] [--] <file>...";
+"git-update-index [-q] [--add] [--replace] [--remove] [--unmerged] [--refresh] [--really-refresh] [--cacheinfo] [--chmod=(+|-)x] [--assume-unchanged] [--info-only] [--force-remove] [--stdin] [--index-info] [--unresolve] [--again] [--ignore-missing] [-z] [--verbose] [--] <file>...";
static unsigned char head_sha1[20];
static unsigned char merge_head_sha1[20];
struct cache_entry *ce;
if (get_tree_entry(ent, path, sha1, &mode)) {
- error("%s: not in %s branch.", path, which);
+ if (which)
+ error("%s: not in %s branch.", path, which);
return NULL;
}
if (mode == S_IFDIR) {
- error("%s: not a blob in %s branch.", path, which);
+ if (which)
+ error("%s: not a blob in %s branch.", path, which);
return NULL;
}
size = cache_entry_size(namelen);
const char *arg = av[i];
const char *p = prefix_path(prefix, prefix_length, arg);
err |= unresolve_one(p);
- if (p != arg)
+ if (p < arg || p > arg + strlen(arg))
free((char*)p);
}
return err;
}
+static int do_reupdate(int ac, const char **av,
+ const char *prefix, int prefix_length)
+{
+ /* Read HEAD and run update-index on paths that are
+ * merged and already different between index and HEAD.
+ */
+ int pos;
+ int has_head = 1;
+ const char **pathspec = get_pathspec(prefix, av + 1);
+
+ if (read_ref(git_path("HEAD"), head_sha1))
+ /* If there is no HEAD, that means it is an initial
+ * commit. Update everything in the index.
+ */
+ has_head = 0;
+ redo:
+ for (pos = 0; pos < active_nr; pos++) {
+ struct cache_entry *ce = active_cache[pos];
+ struct cache_entry *old = NULL;
+ int save_nr;
+
+ if (ce_stage(ce) || !ce_path_match(ce, pathspec))
+ continue;
+ if (has_head)
+ old = read_one_ent(NULL, head_sha1,
+ ce->name, ce_namelen(ce), 0);
+ if (old && ce->ce_mode == old->ce_mode &&
+ !memcmp(ce->sha1, old->sha1, 20)) {
+ free(old);
+ continue; /* unchanged */
+ }
+ /* Be careful. The working tree may not have the
+ * path anymore, in which case, under 'allow_remove',
+ * or worse yet 'allow_replace', active_nr may decrease.
+ */
+ save_nr = active_nr;
+ update_one(ce->name + prefix_length, prefix, prefix_length);
+ if (save_nr != active_nr)
+ goto redo;
+ }
+ return 0;
+}
+
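
do_reupdate() walks the index and re-runs the normal update_one() machinery
on every merged path whose entry already differs from HEAD, so `--again`
effectively repeats the previous git-update-index after further edits.
A minimal sketch (file names are illustrative; the '--again' test script
earlier in this section exercises the details):

    $ git-update-index --add file1 file2
    $ echo more >>file2
    $ git-update-index --again   ;# re-adds both paths, picking up the later edit to file2

Note that `--again` alone refuses to drop a path whose file has gone from
the working tree; combine it with `--remove` to allow that.
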
int main(int argc, const char **argv)
{
int i, newfd, entries, has_errors = 0, line_termination = '\n';
const char *prefix = setup_git_directory();
int prefix_length = prefix ? strlen(prefix) : 0;
char set_executable_bit = 0;
+ unsigned int refresh_flags = 0;
git_config(git_default_config);
continue;
}
if (!strcmp(path, "-q")) {
- quiet = 1;
+ refresh_flags |= REFRESH_QUIET;
continue;
}
if (!strcmp(path, "--add")) {
continue;
}
if (!strcmp(path, "--unmerged")) {
- allow_unmerged = 1;
+ refresh_flags |= REFRESH_UNMERGED;
continue;
}
if (!strcmp(path, "--refresh")) {
- has_errors |= refresh_cache(0);
+ has_errors |= refresh_cache(refresh_flags);
continue;
}
if (!strcmp(path, "--really-refresh")) {
- has_errors |= refresh_cache(1);
+ has_errors |= refresh_cache(REFRESH_REALLY | refresh_flags);
continue;
}
if (!strcmp(path, "--cacheinfo")) {
active_cache_changed = 0;
goto finish;
}
+ if (!strcmp(path, "--again")) {
+ has_errors = do_reupdate(argc - i, argv + i,
+ prefix, prefix_length);
+ if (has_errors)
+ active_cache_changed = 0;
+ goto finish;
+ }
if (!strcmp(path, "--ignore-missing")) {
- not_new = 1;
+ refresh_flags |= REFRESH_IGNORE_MISSING;
continue;
}
if (!strcmp(path, "--verbose")) {
update_one(p, NULL, 0);
if (set_executable_bit)
chmod_path(set_executable_bit, p);
- if (p != path_name)
+ if (p < path_name || p > path_name + strlen(path_name))
free((char*) p);
if (path_name != buf.buf)
free(path_name);
refname = argv[1];
value = argv[2];
oldval = argv[3];
- if (get_sha1(value, sha1) < 0)
+ if (get_sha1(value, sha1))
die("%s: not a valid SHA1", value);
memset(oldsha1, 0, 20);
- if (oldval && get_sha1(oldval, oldsha1) < 0)
+ if (oldval && get_sha1(oldval, oldsha1))
die("%s: not a valid old SHA1", oldval);
path = resolve_ref(git_path("%s", refname), currsha1, !!oldval);