/git-remote-ftps
/git-remote-fd
/git-remote-ext
+/git-remote-testgit
/git-remote-testpy
/git-remote-testsvn
/git-repack
Foreign interface
* remote-hg and remote-bzr helpers (in contrib/ since v1.8.2) have
- been updated; especially, the latter has been accelerated to help
- Emacs folks, whose primary SCM seems to be stagnating.
+ been updated; in particular, the latter has been worked on at an
+ accelerated schedule (read: we might not have merged it into this
+ release if we were following the usual "cook sufficiently in next
+ before unleashing it to the world" workflow) in order to help Emacs
+ folks, whose primary SCM seems to be stagnating.
UI, Workflows & Features
of erroneous inputs was suboptimal and has been improved.
* When the interactive access to git-shell is not enabled, it issues
- a message meant to help the system administrator to enable it.
- An explicit way to help the end users who connect to the service by
- issuing custom messages to refuse such an access has been added.
+ a message meant to help the system administrator enable it. An
+ explicit way has been added to issue custom messages when refusing
+ such access over the network, to help the end users who connect to
+ the service expecting an interactive shell.
* In addition to the case where the user edits the log message with
the "e)dit" option of "am -i", replace the "Applying: this patch"
* "git status" suggests users to look into using --untracked=no option
when it takes too long.
- * "git status" shows a bit more information during a
- rebase/bisect session.
+ * "git status" shows a bit more information during a rebase/bisect
+ session.
* "git fetch" learned to fetch a commit at the tip of an unadvertised
ref by specifying a raw object name from the command line when the
* Various subcommands of "git remote" simply ignored extraneous
command line arguments instead of diagnosing them as errors.
- (merge b17dd3f tr/remote-tighten-commandline-parsing later to maint).
* When receive-pack detects an error in the pack header it received in
order to decide which of unpack-objects or index-pack to run, it
buffer around as human readable object names. This was not a huge
problem but was exposed by a new change that uses these names in
error output.
- (merge 70d26c6 tr/copy-revisions-from-stdin later to maint).
* Smart-capable HTTP servers were not restricted via the
GIT_NAMESPACE mechanism when talking with commit-walking clients,
* Fix a 1.8.1.x regression that stopped matching "dir" (without a
trailing slash) to a directory "dir".
- (merge efa5f82 jc/directory-attrs-regression-fix later to maint-1.8.1).
* "git apply --whitespace=fix" was not prepared to see a line getting
longer after fixing whitespaces (e.g. tab-in-indent aka Python).
- (merge 329b26e jc/apply-ws-fix-tab-in-indent later to maint-1.8.1).
* The prompt string generator (in contrib/completion/) did not notice
   when we are in the middle of a "git revert" session.
--- /dev/null
+Git v1.8.4 Release Notes
+========================
+
+Updates since v1.8.3
+--------------------
+
+Foreign interface
+
+ * The remote transport helper has been updated to report errors
+ better and to better maintain the ref hierarchy it uses to keep
+ track of its own state.
+
+
+UI, Workflows & Features
+
+ * "check-ignore" (new feature since 1.8.2) has been updated to work
+ more like "check-attr" over bidi-pipes.
+
+ * We used the approxidate() parser for "--expire=<timestamp>" options
+ of various commands, but it is better to treat --expire=all and
+ --expire=now specially, rather than just substituting the current
+ timestamp.
+ "git gc" and "git reflog" have been updated with a new parsing
+ function for expiry dates.
+
+
+Performance, Internal Implementation, etc.
+
+ * Object lookup logic, when the object hashtable starts to become
+ crowded, has been optimized.
+
+ * The TEST_OUTPUT_DIRECTORY setting was handled somewhat
+ inconsistently between the test framework and t/Makefile, and the
+ logic to summarize the results looked in the wrong place.
+
+ * Many warnings from the sparse source checker in the compat/ area
+ have been squelched.
+
+ * The code for reading and updating the packed-refs file has been
+ updated, correcting corner-case bugs.
+
+
+Also contains various documentation updates and code clean-ups.
+
+
+Fixes since v1.8.3
+------------------
+
+Unless otherwise noted, all the fixes since v1.8.3 in the maintenance
+track are contained in this release (see their release notes for
+details).
+
+ * When $HOME is misconfigured to point at an unreadable directory, we
+ used to complain and die. Loosen the check.
+ (merge 4698c8f jn/config-ignore-inaccessible later to maint).
+
+ * "git subtree" (in contrib/) had one codepath with loose error
+ checks to lose data at the remote side.
+ (merge 3212d56 jk/subtree-do-not-push-if-split-fails later to maint).
+
+ * "git fetch" into a shallow repository from a repository that does
+ not know about the shallow boundary commits (e.g. a different fork
+ from the repository the current shallow repository was cloned from)
+ did not work correctly.
+ (merge 71d5f93 mh/fetch-into-shallow later to maint).
+
+ * "git checkout foo" DWIMs the intended "upstream" and turns it into
+ "git checkout -t -b foo remotes/origin/foo". This codepath has been
+ updated to correctly take existing remote definitions into account.
+ (merge 229177a jh/checkout-auto-tracking later to maint).
--ignore-submodules[=<when>]::
Ignore changes to submodules in the diff generation. <when> can be
- either "none", "untracked", "dirty" or "all", which is the default
+ either "none", "untracked", "dirty" or "all", which is the default.
Using "none" will consider the submodule modified when it either contains
untracked or modified files or its HEAD differs from the commit recorded
in the superproject and can be used to override any settings of the
'set';; when the attribute is defined as true.
<value>;; when a value has been assigned to the attribute.
+Buffering happens as documented under the `GIT_FLUSH` option in
+linkgit:git[1]. The caller is responsible for avoiding deadlocks
+caused by overfilling an input buffer or reading from an empty output
+buffer.
+
EXAMPLES
--------
below). If `--stdin` is also given, input paths are separated
with a NUL character instead of a linefeed character.
+-n, --non-matching::
+ Show given paths which don't match any pattern. This only
+ makes sense when `--verbose` is enabled; otherwise it would
+ not be possible to distinguish between paths which match a
+ pattern and those which don't.
+
OUTPUT
------
<source> <NULL> <linenum> <NULL> <pattern> <NULL> <pathname> <NULL>
+If `-n` or `--non-matching` are specified, non-matching pathnames will
+also be output, in which case all fields in each output record except
+for <pathname> will be empty. This can be useful when running
+non-interactively, so that files can be incrementally streamed to
+STDIN of a long-running check-ignore process, and for each of these
+files, STDOUT will indicate whether that file matched a pattern or
+not. (Without this option, it would be impossible to tell whether the
+absence of output for a given file meant that it didn't match any
+pattern, or that the output hadn't been generated yet.)
+
+Buffering happens as documented under the `GIT_FLUSH` option in
+linkgit:git[1]. The caller is responsible for avoiding deadlocks
+caused by overfilling an input buffer or reading from an empty output
+buffer.
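
For illustration, a possible session; the `.gitignore` contents and the
paths are hypothetical, and a TAB separates the pattern fields from the
pathname (assume a top-level `.gitignore` whose first line is `*.o`):

------------------------------------------------------------------------
$ git check-ignore --verbose --non-matching bar.o foo.txt
.gitignore:1:*.o	bar.o
::	foo.txt
------------------------------------------------------------------------

For `foo.txt`, which matches no pattern, all fields before the pathname
are empty, as described above.
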
EXIT STATUS
-----------
"--track" in linkgit:git-branch[1] for details.
+
If no '-b' option is given, the name of the new branch will be
-derived from the remote-tracking branch. If "remotes/" or "refs/remotes/"
-is prefixed it is stripped away, and then the part up to the
-next slash (which would be the nickname of the remote) is removed.
+derived from the remote-tracking branch, by looking at the local part of
+the refspec configured for the corresponding remote, and then stripping
+the initial part up to the "*".
This would tell us to use "hack" as the local branch when branching
off of "origin/hack" (or "remotes/origin/hack", or even
"refs/remotes/origin/hack"). If the given name has no slash, or the above
NAME
----
-git-diff-index - Compares content and mode of blobs between the index and repository
+git-diff-index - Compare a tree to the working tree or index
SYNOPSIS
DESCRIPTION
-----------
-Compares the content and mode of the blobs found via a tree
-object with the content of the current index and, optionally
-ignoring the stat state of the file on disk. When paths are
-specified, compares only those named paths. Otherwise all
-entries in the index are compared.
+Compares the content and mode of the blobs found in a tree object
+with the corresponding tracked files in the working tree, or with the
+corresponding paths in the index. When <path> arguments are present,
+compares only paths matching those patterns. Otherwise all tracked
+files are compared.
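
A minimal illustration of the two comparison modes described above
(output elided):

------------------------------------------------------------------------
$ git diff-index HEAD            # HEAD vs. tracked files in the working tree
$ git diff-index --cached HEAD   # HEAD vs. the index
------------------------------------------------------------------------
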
OPTIONS
-------
--prune=<date>::
Prune loose objects older than date (default is 2 weeks ago,
- overridable by the config variable `gc.pruneExpire`). This
- option is on by default.
+ overridable by the config variable `gc.pruneExpire`).
+ --prune=all prunes loose objects regardless of their age.
+ --prune is on by default.
--no-prune::
Do not prune any loose objects.
--expire=<time>::
Entries older than this time are pruned. Without the
option it is taken from configuration `gc.reflogExpire`,
- which in turn defaults to 90 days.
+ which in turn defaults to 90 days. --expire=all prunes
+ entries regardless of their age; --expire=never turns off
+ pruning of reachable entries (but see --expire-unreachable).
--expire-unreachable=<time>::
Entries older than this time and not reachable from
the current tip of the branch are pruned. Without the
option it is taken from configuration
`gc.reflogExpireUnreachable`, which in turn defaults to
- 30 days.
+ 30 days. --expire-unreachable=all prunes unreachable
+ entries regardless of their age; --expire-unreachable=never
+ turns off early pruning of unreachable entries (but see
+ --expire).
--all::
Instead of listing <refs> explicitly, prune all refs.
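
A few concrete invocations of the special values described above,
shown only as a sketch (they operate on the current repository):

------------------------------------------------------------------------
$ git reflog expire --expire=never --all   # keep reachable entries forever
$ git reflog expire --expire=all --all     # discard every entry, any age
$ git gc --prune=all                       # likewise, drop loose objects
                                           # regardless of their age
------------------------------------------------------------------------
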
Create a tag by using the tags_subdir instead of the branches_subdir
specified during git svn init.
--d;;
---destination;;
+-d<path>;;
+--destination=<path>;;
+
If more than one --branches (or --tags) option was given to the 'init'
or 'clone' command, you must provide the location of the branch (or
- tag) you wish to create in the SVN repository. The value of this
- option must match one of the paths specified by a --branches (or
- --tags) option. You can see these paths with the commands
+ tag) you wish to create in the SVN repository. <path> specifies which
+ path to use to create the branch or tag and should match the pattern
+ on the left-hand side of one of the configured branches or tags
+ refspecs. You can see these refspecs with the commands
+
git config --get-all svn-remote.<name>.branches
git config --get-all svn-remote.<name>.tags
git config --get-all svn-remote.<name>.commiturl
+
+--parents;;
+ Create parent folders. This parameter is equivalent to the parameter
+ --parents on svn cp commands and is useful for non-standard repository
+ layouts.
+
'tag'::
Create a tag in the SVN repository. This is a shorthand for
'branch -t'.
tags = tags/{1.0,2.0}/src:refs/remotes/tags/*
------------------------------------------------------------------------
+Multiple fetch, branches, and tags keys are supported:
+
+------------------------------------------------------------------------
+[svn-remote "messy-repo"]
+ url = http://server.org/svn
+ fetch = trunk/project-a:refs/remotes/project-a/trunk
+ fetch = branches/demos/june-project-a-demo:refs/remotes/project-a/demos/june-demo
+ branches = branches/server/*:refs/remotes/project-a/branches/*
+ branches = branches/demos/2011/*:refs/remotes/project-a/2011-demos/*
+ tags = tags/server/*:refs/remotes/project-a/tags/*
+------------------------------------------------------------------------
+
+Creating a branch in such a configuration requires disambiguating which
+location to use with the -d or --destination flag:
+
+------------------------------------------------------------------------
+$ git svn branch -d branches/server release-2-3-0
+------------------------------------------------------------------------
+
Note that git-svn keeps track of the highest revision in which a branch
or tag has appeared. If the subset of branches or tags is changed after
fetching, then .git/svn/.metadata must be manually edited to remove (or
branch of the `git.git` repository.
Documentation for older releases are available here:
+* link:v1.8.3/git.html[documentation for release 1.8.3]
+
+* release notes for
+ link:RelNotes/1.8.3.txt[1.8.3].
+
* link:v1.8.2.3/git.html[documentation for release 1.8.2.3]
* release notes for
- link:RelNotes/1.8.2.3.txt[1.8.2.3].
- link:RelNotes/1.8.2.2.txt[1.8.2.2].
- link:RelNotes/1.8.2.1.txt[1.8.2.1].
+ link:RelNotes/1.8.2.3.txt[1.8.2.3],
+ link:RelNotes/1.8.2.2.txt[1.8.2.2],
+ link:RelNotes/1.8.2.1.txt[1.8.2.1],
link:RelNotes/1.8.2.txt[1.8.2].
* link:v1.8.1.6/git.html[documentation for release 1.8.1.6]
'GIT_FLUSH'::
If this environment variable is set to "1", then commands such
as 'git blame' (in incremental mode), 'git rev-list', 'git log',
- and 'git whatchanged' will force a flush of the output stream
- after each commit-oriented record have been flushed. If this
+ 'git check-attr', 'git check-ignore', and 'git whatchanged' will
+ force a flush of the output stream after each record has been
+ flushed. If this
variable is set to "0", the output of these commands will be done
using completely buffered I/O. If this environment variable is
not set, Git will choose buffered or record-oriented flushing
carried out.
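
A minimal sketch of the record-oriented flushing described under
'GIT_FLUSH' above, so a downstream reader sees each record as soon as
it is produced (the processing step is only a placeholder):

------------------------------------------------------------------------
$ GIT_FLUSH=1 git rev-list HEAD |
  while read commit
  do
	: process "$commit" as soon as it arrives
  done
------------------------------------------------------------------------
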
'refspec' <refspec>::
- This modifies the 'import' capability, allowing the produced
- fast-import stream to modify refs in a private namespace
- instead of writing to refs/heads or refs/remotes directly.
+ For remote helpers that implement 'import' or 'export', this capability
+ allows the refs to be constrained to a private namespace, instead of
+ writing to refs/heads or refs/remotes directly.
It is recommended that all importers providing the 'import'
- capability use this.
+ capability use this. It's mandatory for 'export'.
+
A helper advertising the capability
`refspec refs/heads/*:refs/svn/origin/branches/*`
This capability can be advertised multiple times. The first
applicable refspec takes precedence. The left-hand of refspecs
advertised with this capability must cover all refs reported by
-the list command. If a helper does not need a specific 'refspec'
-capability then it should advertise `refspec *:*`.
+the list command. If no 'refspec' capability is advertised,
+there is an implied `refspec *:*`.
'bidi-import'::
This modifies the 'import' capability.
<<def_ref,ref>> and local ref.
[[def_remote_tracking_branch]]remote-tracking branch::
- A regular Git <<def_branch,branch>> that is used to follow changes from
- another <<def_repository,repository>>. A remote-tracking
- branch should not contain direct modifications or have local commits
- made to it. A remote-tracking branch can usually be
- identified as the right-hand-side <<def_ref,ref>> in a Pull:
- <<def_refspec,refspec>>.
+ A <<def_ref,ref>> that is used to follow changes from another
+ <<def_repository,repository>>. It typically looks like
+ 'refs/remotes/foo/bar' (indicating that it tracks a branch named
+ 'bar' in a remote named 'foo'), and matches the right-hand-side of
+ a configured fetch <<def_refspec,refspec>>. A remote-tracking
+ branch should not contain direct modifications or have local
+ commits made to it.
[[def_repository]]repository::
A collection of <<def_ref,refs>> together with an
inspect and further tweak the merge result before committing.
--edit::
+-e::
--no-edit::
Invoke an editor before committing successful mechanical merge to
further edit the auto-generated merge message, so that the user
can explain and justify the merge. The `--no-edit` option can be
used to accept the auto-generated message (this is generally
- discouraged). The `--edit` option is still useful if you are
+ discouraged). The `--edit` (or `-e`) option is still useful if you are
giving a draft message with the `-m` option from the command line
and want to edit it in the editor.
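
A minimal sketch of that last use case (branch name and message are
hypothetical):

------------------------------------------------------------------------
$ git merge -m "Merge topic 'frotz' (draft rationale)" -e frotz
------------------------------------------------------------------------

The editor then opens with the draft message, ready to be refined
before the merge commit is recorded.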
+
+
* `tag <tag>` means the same as `refs/tags/<tag>:refs/tags/<tag>`;
it requests fetching everything up to the given tag.
-* A parameter <ref> without a colon is equivalent to
- <ref>: when pulling/fetching, so it merges <ref> into the current
- branch without storing the remote branch anywhere locally
+ifndef::git-pull[]
+* A parameter <ref> without a colon fetches that ref into FETCH_HEAD,
+endif::git-pull[]
+ifdef::git-pull[]
+* A parameter <ref> without a colon merges <ref> into the current
+ branch,
+endif::git-pull[]
+ and updates the remote-tracking branches (if any).
* Boolean long options can be 'negated' (or 'unset') by prepending
`no-`, e.g. `--no-abbrev` instead of `--abbrev`. Conversely,
options that begin with `no-` can be 'negated' by removing it.
+ Other long options can be unset (e.g., set string to NULL, set
+ integer to 0) by prepending `no-`.
* Options and non-option arguments can clearly be separated using the `--`
option, e.g. `-a -b --option -- --this-is-a-file` indicates that
Introduce an option with date argument, see `approxidate()`.
The timestamp is put into `int_var`.
+`OPT_EXPIRY_DATE(short, long, &int_var, description)`::
+ Introduce an option with expiry date argument, see `parse_expiry_date()`.
+ The timestamp is put into `int_var`.
+
`OPT_CALLBACK(short, long, &var, arg_str, description, func_ptr)`::
Introduce an option with argument.
The argument will be fed into the function given by `func_ptr`
The client MUST write all obj-ids which it only has shallow copies
of (meaning that it does not have the parents of a commit) as
'shallow' lines so that the server is aware of the limitations of
-the client's history. Clients MUST NOT mention an obj-id which
-it does not know exists on the server.
+the client's history.
The client now sends the maximum commit history depth it wants for
this transaction, which is the number of commits it wants from the
- {startsb}user@{endsb}host.xz:path/to/repo.git/
+This syntax is only recognized if there are no slashes before the
+first colon. This helps differentiate a local path that contains a
+colon. For example, the local path `foo:bar` could be specified as an
+absolute path or as `./foo:bar` to avoid being misinterpreted as an
+ssh url.
+
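A hedged illustration of the distinction (directory names are
hypothetical):

------------------------------------------------------------------------
$ git clone foo:bar     # no slash before the colon: scp-like ssh syntax,
                        # i.e. host "foo", path "bar"
$ git clone ./foo:bar   # slash before the colon: the local directory
                        # named "foo:bar"
------------------------------------------------------------------------
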
The ssh and git protocols additionally support ~username expansion:
- ssh://{startsb}user@{endsb}host.xz{startsb}:port{endsb}/~{startsb}user{endsb}/path/to/repo.git/
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v1.8.3-rc3
+DEF_VER=v1.8.3.GIT
LF='
'
# Define NO_MSGFMT_EXTENDED_OPTIONS if your implementation of msgfmt
# doesn't support GNU extensions like --check and --statistics
#
+# Define NEEDS_CLIPPED_WRITE if your write(2) cannot write more than
+# INT_MAX bytes at once (e.g. MacOS X).
+#
# Define HAVE_PATHS_H if you have paths.h and want to use the default PATH
# it specifies.
#
# specify your own (or DarwinPort's) include directories and
# library directories by defining CFLAGS and LDFLAGS appropriately.
#
+# Define NO_APPLE_COMMON_CRYPTO if you are building on Darwin/Mac OS X
+# and do not want to use Apple's CommonCrypto library. This allows you
+# to provide your own OpenSSL library, for example from MacPorts.
+#
# Define BLK_SHA1 environment variable to make use of the bundled
# optimized C SHA1 routine.
#
SCRIPT_SH += git-pull.sh
SCRIPT_SH += git-quiltimport.sh
SCRIPT_SH += git-rebase.sh
+SCRIPT_SH += git-remote-testgit.sh
SCRIPT_SH += git-repack.sh
SCRIPT_SH += git-request-pull.sh
SCRIPT_SH += git-stash.sh
LIB_H += notes-merge.h
LIB_H += notes.h
LIB_H += object.h
-LIB_H += pack-refs.h
LIB_H += pack-revindex.h
LIB_H += pack.h
LIB_H += parse-options.h
LIB_OBJS += notes-merge.o
LIB_OBJS += object.o
LIB_OBJS += pack-check.o
-LIB_OBJS += pack-refs.o
LIB_OBJS += pack-revindex.o
LIB_OBJS += pack-write.o
LIB_OBJS += pager.o
BASIC_LDFLAGS += -L/opt/local/lib
endif
endif
+ ifndef NO_APPLE_COMMON_CRYPTO
+ APPLE_COMMON_CRYPTO = YesPlease
+ COMPAT_CFLAGS += -DAPPLE_COMMON_CRYPTO
+ endif
+ NO_REGEX = YesPlease
PTHREAD_LIBS =
endif
SHA1_HEADER = "ppc/sha1.h"
LIB_OBJS += ppc/sha1.o ppc/sha1ppc.o
LIB_H += ppc/sha1.h
+else
+ifdef APPLE_COMMON_CRYPTO
+ COMPAT_CFLAGS += -DCOMMON_DIGEST_FOR_OPENSSL
+ SHA1_HEADER = <CommonCrypto/CommonDigest.h>
else
SHA1_HEADER = <openssl/sha.h>
EXTLIBS += $(LIB_4_CRYPTO)
endif
endif
+endif
+
ifdef NO_PERL_MAKEMAKER
export NO_PERL_MAKEMAKER
endif
MSGFMT += --check --statistics
endif
+ifdef NEEDS_CLIPPED_WRITE
+ BASIC_CFLAGS += -DNEEDS_CLIPPED_WRITE
+ COMPAT_OBJS += compat/clipped-write.o
+endif
+
ifneq (,$(XDL_FAST_HASH))
BASIC_CFLAGS += -DXDL_FAST_HASH
endif
ifdef USE_NED_ALLOCATOR
compat/nedmalloc/nedmalloc.sp compat/nedmalloc/nedmalloc.o: EXTRA_CPPFLAGS = \
-DNDEBUG -DOVERRIDE_STRDUP -DREPLACE_SYSTEM_ALLOCATOR
+compat/nedmalloc/nedmalloc.sp: SPARSE_FLAGS += -Wno-non-pointer-null
endif
git-%$X: %.o GIT-LDFLAGS $(GITLIBS)
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@
@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@
+ifdef TEST_OUTPUT_DIRECTORY
+ @echo TEST_OUTPUT_DIRECTORY=\''$(subst ','\'',$(subst ','\'',$(TEST_OUTPUT_DIRECTORY)))'\' >>$@
+endif
ifdef GIT_TEST_OPTS
@echo GIT_TEST_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_OPTS)))'\' >>$@
endif
$(RM) $(addsuffix *.gcda,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
$(RM) $(addsuffix *.gcno,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
-clean: profile-clean
+clean: profile-clean coverage-clean
$(RM) *.o block-sha1/*.o ppc/*.o compat/*.o compat/*/*.o xdiff/*.o vcs-svn/*.o \
builtin/*.o $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) git$X
### Test suite coverage testing
#
-.PHONY: coverage coverage-clean coverage-build coverage-report
+.PHONY: coverage coverage-clean coverage-compile coverage-test coverage-report
+.PHONY: coverage-clean-results
coverage:
- $(MAKE) coverage-build
- $(MAKE) coverage-report
+ $(MAKE) coverage-test
+ $(MAKE) coverage-untested-functions
object_dirs := $(sort $(dir $(OBJECTS)))
-coverage-clean:
+coverage-clean-results:
$(RM) $(addsuffix *.gcov,$(object_dirs))
$(RM) $(addsuffix *.gcda,$(object_dirs))
- $(RM) $(addsuffix *.gcno,$(object_dirs))
$(RM) coverage-untested-functions
$(RM) -r cover_db/
$(RM) -r cover_db_html/
+coverage-clean: coverage-clean-results
+ $(RM) $(addsuffix *.gcno,$(object_dirs))
+
COVERAGE_CFLAGS = $(CFLAGS) -O0 -ftest-coverage -fprofile-arcs
COVERAGE_LDFLAGS = $(CFLAGS) -O0 -lgcov
GCOVFLAGS = --preserve-paths --branch-probabilities --all-blocks
-coverage-build: coverage-clean
+coverage-compile:
$(MAKE) CFLAGS="$(COVERAGE_CFLAGS)" LDFLAGS="$(COVERAGE_LDFLAGS)" all
+
+coverage-test: coverage-clean-results coverage-compile
$(MAKE) CFLAGS="$(COVERAGE_CFLAGS)" LDFLAGS="$(COVERAGE_LDFLAGS)" \
- -j1 test
+ DEFAULT_TEST_TARGET=test -j1 test
coverage-report:
$(QUIET_GCOV)for dir in $(object_dirs); do \
-Documentation/RelNotes/1.8.3.txt
\ No newline at end of file
+Documentation/RelNotes/1.8.4.txt
\ No newline at end of file
return 1;
}
+static int check_tracking_branch(struct remote *remote, void *cb_data)
+{
+ char *tracking_branch = cb_data;
+ struct refspec query;
+ memset(&query, 0, sizeof(struct refspec));
+ query.dst = tracking_branch;
+ return !(remote_find_tracking(remote, &query) ||
+ prefixcmp(query.src, "refs/heads/"));
+}
+
+static int validate_remote_tracking_branch(char *ref)
+{
+ return !for_each_remote(check_tracking_branch, ref);
+}
+
static const char upstream_not_branch[] =
N_("Cannot setup tracking information; starting point '%s' is not a branch.");
static const char upstream_missing[] =
case 1:
/* Unique completion -- good, only if it is a real branch */
if (prefixcmp(real_ref, "refs/heads/") &&
- prefixcmp(real_ref, "refs/remotes/")) {
+ validate_remote_tracking_branch(real_ref)) {
if (explicit_tracking)
die(_(upstream_not_branch), start_name);
else
#include "pathspec.h"
#include "parse-options.h"
-static int quiet, verbose, stdin_paths;
+static int quiet, verbose, stdin_paths, show_non_matching;
static const char * const check_ignore_usage[] = {
"git check-ignore [options] pathname...",
"git check-ignore [options] --stdin < <list-of-paths>",
N_("read file names from stdin")),
OPT_BOOLEAN('z', NULL, &null_term_line,
N_("input paths are terminated by a null character")),
+ OPT_BOOLEAN('n', "non-matching", &show_non_matching,
+ N_("show non-matching input paths")),
OPT_END()
};
static void output_exclude(const char *path, struct exclude *exclude)
{
- char *bang = exclude->flags & EXC_FLAG_NEGATIVE ? "!" : "";
- char *slash = exclude->flags & EXC_FLAG_MUSTBEDIR ? "/" : "";
+ char *bang = (exclude && exclude->flags & EXC_FLAG_NEGATIVE) ? "!" : "";
+ char *slash = (exclude && exclude->flags & EXC_FLAG_MUSTBEDIR) ? "/" : "";
if (!null_term_line) {
if (!verbose) {
write_name_quoted(path, stdout, '\n');
} else {
- quote_c_style(exclude->el->src, NULL, stdout, 0);
- printf(":%d:%s%s%s\t",
- exclude->srcpos,
- bang, exclude->pattern, slash);
+ if (exclude) {
+ quote_c_style(exclude->el->src, NULL, stdout, 0);
+ printf(":%d:%s%s%s\t",
+ exclude->srcpos,
+ bang, exclude->pattern, slash);
+ }
+ else {
+ printf("::\t");
+ }
quote_c_style(path, NULL, stdout, 0);
fputc('\n', stdout);
}
if (!verbose) {
printf("%s%c", path, '\0');
} else {
- printf("%s%c%d%c%s%s%s%c%s%c",
- exclude->el->src, '\0',
- exclude->srcpos, '\0',
- bang, exclude->pattern, slash, '\0',
- path, '\0');
+ if (exclude)
+ printf("%s%c%d%c%s%s%s%c%s%c",
+ exclude->el->src, '\0',
+ exclude->srcpos, '\0',
+ bang, exclude->pattern, slash, '\0',
+ path, '\0');
+ else
+ printf("%c%c%c%s%c", '\0', '\0', '\0', path, '\0');
}
}
}
-static int check_ignore(const char *prefix, const char **pathspec)
+static int check_ignore(struct dir_struct *dir,
+ const char *prefix, const char **pathspec)
{
- struct dir_struct dir;
const char *path, *full_path;
char *seen;
int num_ignored = 0, dtype = DT_UNKNOWN, i;
struct exclude *exclude;
- /* read_cache() is only necessary so we can watch out for submodules. */
- if (read_cache() < 0)
- die(_("index file corrupt"));
-
- memset(&dir, 0, sizeof(dir));
- setup_standard_excludes(&dir);
-
if (!pathspec || !*pathspec) {
if (!quiet)
fprintf(stderr, "no pathspec given.\n");
? strlen(prefix) : 0, path);
full_path = check_path_for_gitlink(full_path);
die_if_path_beyond_symlink(full_path, prefix);
+ exclude = NULL;
if (!seen[i]) {
- exclude = last_exclude_matching(&dir, full_path, &dtype);
- if (exclude) {
- if (!quiet)
- output_exclude(path, exclude);
- num_ignored++;
- }
+ exclude = last_exclude_matching(dir, full_path, &dtype);
}
+ if (!quiet && (exclude || show_non_matching))
+ output_exclude(path, exclude);
+ if (exclude)
+ num_ignored++;
}
free(seen);
- clear_directory(&dir);
return num_ignored;
}
-static int check_ignore_stdin_paths(const char *prefix)
+static int check_ignore_stdin_paths(struct dir_struct *dir, const char *prefix)
{
struct strbuf buf, nbuf;
- char **pathspec = NULL;
- size_t nr = 0, alloc = 0;
+ char *pathspec[2] = { NULL, NULL };
int line_termination = null_term_line ? 0 : '\n';
- int num_ignored;
+ int num_ignored = 0;
strbuf_init(&buf, 0);
strbuf_init(&nbuf, 0);
die("line is badly quoted");
strbuf_swap(&buf, &nbuf);
}
- ALLOC_GROW(pathspec, nr + 1, alloc);
- pathspec[nr] = xcalloc(strlen(buf.buf) + 1, sizeof(*buf.buf));
- strcpy(pathspec[nr++], buf.buf);
+ pathspec[0] = buf.buf;
+ num_ignored += check_ignore(dir, prefix, (const char **)pathspec);
+ maybe_flush_or_die(stdout, "check-ignore to stdout");
}
- ALLOC_GROW(pathspec, nr + 1, alloc);
- pathspec[nr] = NULL;
- num_ignored = check_ignore(prefix, (const char **)pathspec);
- maybe_flush_or_die(stdout, "attribute to stdout");
strbuf_release(&buf);
strbuf_release(&nbuf);
- free(pathspec);
return num_ignored;
}
int cmd_check_ignore(int argc, const char **argv, const char *prefix)
{
int num_ignored;
+ struct dir_struct dir;
git_config(git_default_config, NULL);
if (verbose)
die(_("cannot have both --quiet and --verbose"));
}
+ if (show_non_matching && !verbose)
+ die(_("--non-matching is only valid with --verbose"));
+
+ /* read_cache() is only necessary so we can watch out for submodules. */
+ if (read_cache() < 0)
+ die(_("index file corrupt"));
+
+ memset(&dir, 0, sizeof(dir));
+ setup_standard_excludes(&dir);
if (stdin_paths) {
- num_ignored = check_ignore_stdin_paths(prefix);
+ num_ignored = check_ignore_stdin_paths(&dir, prefix);
} else {
- num_ignored = check_ignore(prefix, argv);
+ num_ignored = check_ignore(&dir, prefix, argv);
maybe_flush_or_die(stdout, "ignore to stdout");
}
+ clear_directory(&dir);
+
return !num_ignored;
}
}
struct tracking_name_data {
- const char *name;
- char *remote;
+ /* const */ char *src_ref;
+ char *dst_ref;
+ unsigned char *dst_sha1;
int unique;
};
-static int check_tracking_name(const char *refname, const unsigned char *sha1,
- int flags, void *cb_data)
+static int check_tracking_name(struct remote *remote, void *cb_data)
{
struct tracking_name_data *cb = cb_data;
- const char *slash;
-
- if (prefixcmp(refname, "refs/remotes/"))
- return 0;
- slash = strchr(refname + 13, '/');
- if (!slash || strcmp(slash + 1, cb->name))
+ struct refspec query;
+ memset(&query, 0, sizeof(struct refspec));
+ query.src = cb->src_ref;
+ if (remote_find_tracking(remote, &query) ||
+ get_sha1(query.dst, cb->dst_sha1))
return 0;
- if (cb->remote) {
+ if (cb->dst_ref) {
cb->unique = 0;
return 0;
}
- cb->remote = xstrdup(refname);
+ cb->dst_ref = xstrdup(query.dst);
return 0;
}
-static const char *unique_tracking_name(const char *name)
+static const char *unique_tracking_name(const char *name, unsigned char *sha1)
{
- struct tracking_name_data cb_data = { NULL, NULL, 1 };
- cb_data.name = name;
- for_each_ref(check_tracking_name, &cb_data);
+ struct tracking_name_data cb_data = { NULL, NULL, NULL, 1 };
+ char src_ref[PATH_MAX];
+ snprintf(src_ref, PATH_MAX, "refs/heads/%s", name);
+ cb_data.src_ref = src_ref;
+ cb_data.dst_sha1 = sha1;
+ for_each_remote(check_tracking_name, &cb_data);
if (cb_data.unique)
- return cb_data.remote;
- free(cb_data.remote);
+ return cb_data.dst_ref;
+ free(cb_data.dst_ref);
return NULL;
}
if (dwim_new_local_branch_ok &&
!check_filename(NULL, arg) &&
argc == 1) {
- const char *remote = unique_tracking_name(arg);
- if (!remote || get_sha1(remote, rev))
+ const char *remote = unique_tracking_name(arg, rev);
+ if (!remote)
return argcount;
*new_branch = arg;
arg = remote;
#include "transport.h"
#include "strbuf.h"
#include "dir.h"
-#include "pack-refs.h"
#include "sigchain.h"
#include "branch.h"
#include "remote.h"
is_local = option_local != 0 && path && !is_bundle;
if (is_local && option_depth)
warning(_("--depth is ignored in local clones; use file:// instead."));
+ if (option_local > 0 && !is_local)
+ warning(_("--local is ignored"));
if (argc == 2)
dir = xstrdup(argv[1]);
*/
die("$HOME not set");
- if (access_or_warn(user_config, R_OK) &&
- xdg_config && !access_or_warn(xdg_config, R_OK))
+ if (access_or_warn(user_config, R_OK, 0) &&
+ xdg_config && !access_or_warn(xdg_config, R_OK, 0))
given_config_file = xdg_config;
else
given_config_file = user_config;
char *line_end, *mark_end;
unsigned char sha1[20];
struct object *object;
+ struct commit *commit;
+ enum object_type type;
line_end = strchr(line, '\n');
if (line[0] != ':' || !line_end)
mark = strtoumax(line + 1, &mark_end, 10);
if (!mark || mark_end == line + 1
- || *mark_end != ' ' || get_sha1(mark_end + 1, sha1))
+ || *mark_end != ' ' || get_sha1_hex(mark_end + 1, sha1))
die("corrupt mark line: %s", line);
if (last_idnum < mark)
last_idnum = mark;
- object = parse_object(sha1);
- if (!object)
+ type = sha1_object_info(sha1, NULL);
+ if (type < 0)
+ die("object not found: %s", sha1_to_hex(sha1));
+
+ if (type != OBJ_COMMIT)
+ /* only commits */
continue;
+ commit = lookup_commit(sha1);
+ if (!commit)
+ die("not a commit? can't happen: %s", sha1_to_hex(sha1));
+
+ object = &commit->object;
+
if (object->flags & SHOWN)
error("Object %s already has a mark", sha1_to_hex(sha1));
- if (object->type != OBJ_COMMIT)
- /* only commits */
- continue;
-
mark_object(object, mark);
object->flags |= SHOWN;
for (rm = *head; rm; rm = rm->next) {
if (branch_merge_matches(branch, i, rm->name)) {
- rm->merge = 1;
+ rm->fetch_head_status = FETCH_HEAD_MERGE;
break;
}
}
refspec.src = branch->merge[i]->src;
get_fetch_map(remote_refs, &refspec, tail, 1);
for (rm = *old_tail; rm; rm = rm->next)
- rm->merge = 1;
+ rm->fetch_head_status = FETCH_HEAD_MERGE;
}
}
const struct ref *remote_refs = transport_get_remote_refs(transport);
if (ref_count || tags == TAGS_SET) {
+ struct ref **old_tail;
+
for (i = 0; i < ref_count; i++) {
get_fetch_map(remote_refs, &refs[i], &tail, 0);
if (refs[i].dst && refs[i].dst[0])
}
/* Merge everything on the command line, but not --tags */
for (rm = ref_map; rm; rm = rm->next)
- rm->merge = 1;
+ rm->fetch_head_status = FETCH_HEAD_MERGE;
if (tags == TAGS_SET)
get_fetch_map(remote_refs, tag_refspec, &tail, 0);
+
+ /*
+ * For any refs that we happen to be fetching via command-line
+ * arguments, take the opportunity to update their configured
+ * counterparts. However, we do not want to mention these
+ * entries in FETCH_HEAD at all, as they would simply be
+ * duplicates of existing entries.
+ */
+ old_tail = tail;
+ for (i = 0; i < transport->remote->fetch_refspec_nr; i++)
+ get_fetch_map(ref_map, &transport->remote->fetch[i],
+ &tail, 1);
+ for (rm = *old_tail; rm; rm = rm->next)
+ rm->fetch_head_status = FETCH_HEAD_IGNORE;
} else {
/* Use the defaults */
struct remote *remote = transport->remote;
*autotags = 1;
if (!i && !has_merge && ref_map &&
!remote->fetch[0].pattern)
- ref_map->merge = 1;
+ ref_map->fetch_head_status = FETCH_HEAD_MERGE;
}
/*
* if the remote we're fetching from is the same
ref_map = get_remote_ref(remote_refs, "HEAD");
if (!ref_map)
die(_("Couldn't find remote ref HEAD"));
- ref_map->merge = 1;
+ ref_map->fetch_head_status = FETCH_HEAD_MERGE;
tail = &ref_map->next;
}
}
const char *what, *kind;
struct ref *rm;
char *url, *filename = dry_run ? "/dev/null" : git_path("FETCH_HEAD");
- int want_merge;
+ int want_status;
fp = fopen(filename, "a");
if (!fp)
}
/*
- * The first pass writes objects to be merged and then the
- * second pass writes the rest, in order to allow using
- * FETCH_HEAD as a refname to refer to the ref to be merged.
+ * We do a pass for each fetch_head_status type in their enum order, so
+ * merged entries are written before not-for-merge. That lets readers
+ * use FETCH_HEAD as a refname to refer to the ref to be merged.
*/
- for (want_merge = 1; 0 <= want_merge; want_merge--) {
+ for (want_status = FETCH_HEAD_MERGE;
+ want_status <= FETCH_HEAD_IGNORE;
+ want_status++) {
for (rm = ref_map; rm; rm = rm->next) {
struct ref *ref = NULL;
+ const char *merge_status_marker = "";
commit = lookup_commit_reference_gently(rm->old_sha1, 1);
if (!commit)
- rm->merge = 0;
+ rm->fetch_head_status = FETCH_HEAD_NOT_FOR_MERGE;
- if (rm->merge != want_merge)
+ if (rm->fetch_head_status != want_status)
continue;
if (rm->peer_ref) {
strbuf_addf(¬e, "%s ", kind);
strbuf_addf(¬e, "'%s' of ", what);
}
- fprintf(fp, "%s\t%s\t%s",
- sha1_to_hex(rm->old_sha1),
- rm->merge ? "" : "not-for-merge",
- note.buf);
- for (i = 0; i < url_len; ++i)
- if ('\n' == url[i])
- fputs("\\n", fp);
- else
- fputc(url[i], fp);
- fputc('\n', fp);
+ switch (rm->fetch_head_status) {
+ case FETCH_HEAD_NOT_FOR_MERGE:
+ merge_status_marker = "not-for-merge";
+ /* fall-through */
+ case FETCH_HEAD_MERGE:
+ fprintf(fp, "%s\t%s\t%s",
+ sha1_to_hex(rm->old_sha1),
+ merge_status_marker,
+ note.buf);
+ for (i = 0; i < url_len; ++i)
+ if ('\n' == url[i])
+ fputs("\\n", fp);
+ else
+ fputc(url[i], fp);
+ fputc('\n', fp);
+ break;
+ default:
+ /* do not write anything to FETCH_HEAD */
+ break;
+ }
strbuf_reset(¬e);
if (ref) {
for (i = 0; i < argc; i++) {
struct commit *commit = get_merge_parent(argv[i]);
if (!commit)
- die(_("%s - not something we can merge"), argv[i]);
+ help_unknown_ref(argv[i], "merge",
+ "not something we can merge");
remotes = &commit_list_insert(commit, remotes)->next;
}
*remotes = NULL;
#include "builtin.h"
#include "parse-options.h"
-#include "pack-refs.h"
+#include "refs.h"
static char const * const pack_refs_usage[] = {
N_("git pack-refs [options]"),
OPT__DRY_RUN(&show_only, N_("do not remove, show only")),
OPT__VERBOSE(&verbose, N_("report pruned objects")),
OPT_BOOL(0, "progress", &show_progress, N_("show progress")),
- OPT_DATE(0, "expire", &expire,
- N_("expire objects older than <time>")),
+ OPT_EXPIRY_DATE(0, "expire", &expire,
+ N_("expire objects older than <time>")),
OPT_END()
};
char *s;
{
if (!value)
return config_error_nonbool(var);
- if (!strcmp(value, "never") || !strcmp(value, "false")) {
- *expire = 0;
- return 0;
- }
- *expire = approxidate(value);
+ if (parse_expiry_date(value, expire))
+ return error(_("%s' for '%s' is not a valid timestamp"),
+ value, var);
return 0;
}
if (!strcmp(arg, "--dry-run") || !strcmp(arg, "-n"))
cb.dry_run = 1;
else if (!prefixcmp(arg, "--expire=")) {
- cb.expire_total = approxidate(arg + 9);
+ if (parse_expiry_date(arg + 9, &cb.expire_total))
+ die(_("'%s' is not a valid timestamp"), arg);
explicit_expiry |= EXPIRE_TOTAL;
}
else if (!prefixcmp(arg, "--expire-unreachable=")) {
- cb.expire_unreachable = approxidate(arg + 21);
+ if (parse_expiry_date(arg + 21, &cb.expire_unreachable))
+ die(_("'%s' is not a valid timestamp"), arg);
explicit_expiry |= EXPIRE_UNREACH;
}
else if (!strcmp(arg, "--stale-fix"))
struct strbuf *timebuf);
int parse_date(const char *date, char *buf, int bufsize);
int parse_date_basic(const char *date, unsigned long *timestamp, int *offset);
+int parse_expiry_date(const char *date, unsigned long *timestamp);
void datestamp(char *buf, int bufsize);
#define approxidate(s) approxidate_careful((s), NULL)
unsigned long approxidate_careful(const char *, int *);
unsigned int
force:1,
forced_update:1,
- merge:1,
deletion:1,
matched:1;
+
+ /*
+ * Order is important here, as we write to FETCH_HEAD
+ * in numeric order. And the default NOT_FOR_MERGE
+ * should be 0, so that xcalloc'd structures get it
+ * by default.
+ */
+ enum {
+ FETCH_HEAD_MERGE = -1,
+ FETCH_HEAD_NOT_FOR_MERGE = 0,
+ FETCH_HEAD_IGNORE = 1
+ } fetch_head_status;
+
enum {
REF_STATUS_NONE = 0,
REF_STATUS_OK,
unsigned long k;
/* Paint a few lines before the first interesting line. */
- while (j < i)
- sline[j++].flag |= mark | no_pre_delete;
+ while (j < i) {
+ if (!(sline[j].flag & mark))
+ sline[j].flag |= no_pre_delete;
+ sline[j++].flag |= mark;
+ }
again:
/* we know up to i is to be included. where does the
--- /dev/null
+#include "../git-compat-util.h"
+#undef write
+
+/*
+ * Version of write that will write at most INT_MAX bytes.
+ * Works around an xnu bug on Mac OS X.
+ */
+ssize_t clipped_write(int fildes, const void *buf, size_t nbyte)
+{
+ if (nbyte > INT_MAX)
+ nbyte = INT_MAX;
+ return write(fildes, buf, nbyte);
+}
# define _GNU_SOURCE 1
#endif
+#include <stddef.h>
#include <errno.h>
#include <fnmatch.h>
#include <ctype.h>
whose names are inconsistent. */
# if !defined _LIBC && !defined getenv
-extern char *getenv ();
+extern char *getenv (const char *name);
# endif
# ifndef errno
struct pinfo_t *next;
pid_t pid;
HANDLE proc;
-} pinfo_t;
-struct pinfo_t *pinfo = NULL;
+};
+static struct pinfo_t *pinfo = NULL;
CRITICAL_SECTION pinfo_cs;
static pid_t mingw_spawnve_fd(const char *cmd, const char **argv, char **env,
else
sin->sin_addr.s_addr = INADDR_LOOPBACK;
ai->ai_addr = (struct sockaddr *)sin;
- ai->ai_next = 0;
+ ai->ai_next = NULL;
return 0;
}
char **make_augmented_environ(const char *const *vars);
void free_environ(char **env);
+/*
+ * A critical section used in the implementation of the spawn
+ * functions (mingw_spawnv[p]e()) and waitpid(). Initialised in
+ * the replacement main() macro below.
+ */
+extern CRITICAL_SECTION pinfo_cs;
+
/*
* A replacement of main() that ensures that argv[0] has a path
* and that default fmode and std(in|out|err) are in binary mode
*/
#define main(c,v) dummy_decl_mingw_main(); \
-static int mingw_main(); \
-int main(int argc, const char **argv) \
+static int mingw_main(c,v); \
+int main(int argc, char **argv) \
{ \
extern CRITICAL_SECTION pinfo_cs; \
_fmode = _O_BINARY; \
#define DLMALLOC_VERSION 20804
#endif /* DLMALLOC_VERSION */
+#if defined(linux)
+#define _GNU_SOURCE 1
+#endif
+
#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
static MLOCK_T malloc_global_mutex = { 0, 0, 0};
-static FORCEINLINE long win32_getcurrentthreadid() {
+static FORCEINLINE long win32_getcurrentthreadid(void) {
#ifdef _MSC_VER
#if defined(_M_IX86)
long *threadstruct=(long *)__readfsdword(0x18);
#endif
int nedmallopt(int parno, int value) THROWSPEC { return nedpmallopt(0, parno, value); }
int nedmalloc_trim(size_t pad) THROWSPEC { return nedpmalloc_trim(0, pad); }
-void nedmalloc_stats() THROWSPEC { nedpmalloc_stats(0); }
-size_t nedmalloc_footprint() THROWSPEC { return nedpmalloc_footprint(0); }
+void nedmalloc_stats(void) THROWSPEC { nedpmalloc_stats(0); }
+size_t nedmalloc_footprint(void) THROWSPEC { return nedpmalloc_footprint(0); }
void **nedindependent_calloc(size_t elemsno, size_t elemsize, void **chunks) THROWSPEC { return nedpindependent_calloc(0, elemsno, elemsize, chunks); }
void **nedindependent_comalloc(size_t elems, size_t *sizes, void **chunks) THROWSPEC { return nedpindependent_comalloc(0, elems, sizes, chunks); }
{
/* It's a socket. */
WSAEnumNetworkEvents ((SOCKET) h, NULL, &ev);
- WSAEventSelect ((SOCKET) h, 0, 0);
+ WSAEventSelect ((SOCKET) h, NULL, 0);
/* If we're lucky, WSAEnumNetworkEvents already provided a way
to distinguish FD_READ and FD_ACCEPT; this saves a recv later. */
}
/* Update the state_log if we need */
-re_dfastate_t *
+static re_dfastate_t *
internal_function
merge_state_with_log (reg_errcode_t *err, re_match_context_t *mctx,
re_dfastate_t *next_state)
mctx->state_log[cur_idx] = next_state;
mctx->state_log_top = cur_idx;
}
- else if (mctx->state_log[cur_idx] == 0)
+ else if (mctx->state_log[cur_idx] == NULL)
{
mctx->state_log[cur_idx] = next_state;
}
/* Skip bytes in the input that correspond to part of a
multi-byte match, then look in the log for a state
from which to restart matching. */
-re_dfastate_t *
+static re_dfastate_t *
internal_function
find_recover_state (reg_errcode_t *err, re_match_context_t *mctx)
{
void gitunsetenv (const char *name)
{
- extern char **environ;
int src, dst;
size_t nmln;
pthread_t pthread_self(void)
{
- pthread_t t = { 0 };
+ pthread_t t = { NULL };
t.tid = GetCurrentThreadId();
return t;
}
if (!(flags & MAP_PRIVATE))
die("Invalid usage of mmap when built with USE_WIN32_MMAP");
- hmap = CreateFileMapping((HANDLE)_get_osfhandle(fd), 0, PAGE_WRITECOPY,
- 0, 0, 0);
+ hmap = CreateFileMapping((HANDLE)_get_osfhandle(fd), NULL,
+ PAGE_WRITECOPY, 0, 0, NULL);
if (!hmap)
return MAP_FAILED;
path = buf.buf;
}
- if (!access_or_die(path, R_OK)) {
+ if (!access_or_die(path, R_OK, 0)) {
if (++inc->depth > MAX_INCLUDE_DEPTH)
die(include_depth_advice, MAX_INCLUDE_DEPTH, path,
cf && cf->name ? cf->name : "the command line");
home_config_paths(&user_config, &xdg_config, "config");
- if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK)) {
+ if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK, 0)) {
ret += git_config_from_file(fn, git_etc_gitconfig(),
data);
found += 1;
}
- if (xdg_config && !access_or_die(xdg_config, R_OK)) {
+ if (xdg_config && !access_or_die(xdg_config, R_OK, ACCESS_EACCES_OK)) {
ret += git_config_from_file(fn, xdg_config, data);
found += 1;
}
- if (user_config && !access_or_die(user_config, R_OK)) {
+ if (user_config && !access_or_die(user_config, R_OK, ACCESS_EACCES_OK)) {
ret += git_config_from_file(fn, user_config, data);
found += 1;
}
- if (repo_config && !access_or_die(repo_config, R_OK)) {
+ if (repo_config && !access_or_die(repo_config, R_OK, 0)) {
ret += git_config_from_file(fn, repo_config, data);
found += 1;
}
NO_MEMMEM = YesPlease
USE_ST_TIMESPEC = YesPlease
HAVE_DEV_TTY = YesPlease
+ NEEDS_CLIPPED_WRITE = YesPlease
COMPAT_OBJS += compat/precompose_utf8.o
BASIC_CFLAGS += -DPRECOMPOSE_UNICODE
endif
path = strchr(end, c);
if (path && !has_dos_drive_prefix(end)) {
if (c == ':') {
- protocol = PROTO_SSH;
- *path++ = '\0';
+ if (path < strchrnul(host, '/')) {
+ protocol = PROTO_SSH;
+ *path++ = '\0';
+ } else /* '/' in the host part, assume local path */
+ path = end;
}
} else
path = end;
# since tilde expansion is not applied.
# This means that COMPREPLY will be empty and Bash default
# completion will be used.
- COMPREPLY=($(compgen -P "${2-}" -W "$1" -- "${3-$cur}"))
+ __gitcompadd "$1" "${2-}" "${3-$cur}" ""
- # Tell Bash that compspec generates filenames.
- compopt -o filenames 2>/dev/null
+ # use a hack to enable file mode in bash < 4
+ compopt -o filenames +o nospace 2>/dev/null ||
+ compgen -f /non-existing-dir/ > /dev/null
}
-__git_index_file_list_filter_compat ()
-{
- local path
-
- while read -r path; do
- case "$path" in
- ?*/*) echo "${path%%/*}/" ;;
- *) echo "$path" ;;
- esac
- done
-}
-
-__git_index_file_list_filter_bash ()
-{
- local path
-
- while read -r path; do
- case "$path" in
- ?*/*)
- # XXX if we append a slash to directory names when using
- # `compopt -o filenames`, Bash will append another slash.
- # This is pretty stupid, and this the reason why we have to
- # define a compatible version for this function.
- echo "${path%%/*}" ;;
- *)
- echo "$path" ;;
- esac
- done
-}
-
-# Process path list returned by "ls-files" and "diff-index --name-only"
-# commands, in order to list only file names relative to a specified
-# directory, and append a slash to directory names.
-__git_index_file_list_filter ()
-{
- # Default to Bash >= 4.x
- __git_index_file_list_filter_bash
-}
-
-# Execute git ls-files, returning paths relative to the directory
-# specified in the first argument, and using the options specified in
-# the second argument.
+# Execute 'git ls-files', unless the --committable option is specified, in
+# which case it runs 'git diff-index' to find out the files that can be
+# committed. It returns paths relative to the directory specified in the
+# first argument, using the options specified in the second argument.
__git_ls_files_helper ()
{
(
test -n "${CDPATH+set}" && unset CDPATH
- # NOTE: $2 is not quoted in order to support multiple options
- cd "$1" && git ls-files --exclude-standard $2
+ cd "$1"
+ if [ "$2" == "--committable" ]; then
+ git diff-index --name-only --relative HEAD
+ else
+ # NOTE: $2 is not quoted in order to support multiple options
+ git ls-files --exclude-standard $2
+ fi
) 2>/dev/null
}
-# Execute git diff-index, returning paths relative to the directory
-# specified in the first argument, and using the tree object id
-# specified in the second argument.
-__git_diff_index_helper ()
-{
- (
- test -n "${CDPATH+set}" && unset CDPATH
- cd "$1" && git diff-index --name-only --relative "$2"
- ) 2>/dev/null
-}
-
# __git_index_files accepts 1 or 2 arguments:
# 1: Options to pass to ls-files (required).
-# Supported options are --cached, --modified, --deleted, --others,
-# and --directory.
# 2: A directory path (optional).
# If provided, only files within the specified directory are listed.
# Sub directories are never recursed. Path must have a trailing
# slash.
__git_index_files ()
{
- local dir="$(__gitdir)" root="${2-.}"
+ local dir="$(__gitdir)" root="${2-.}" file
if [ -d "$dir" ]; then
- __git_ls_files_helper "$root" "$1" | __git_index_file_list_filter |
- sort | uniq
- fi
-}
-
-# __git_diff_index_files accepts 1 or 2 arguments:
-# 1) The id of a tree object.
-# 2) A directory path (optional).
-# If provided, only files within the specified directory are listed.
-# Sub directories are never recursed. Path must have a trailing
-# slash.
-__git_diff_index_files ()
-{
- local dir="$(__gitdir)" root="${2-.}"
-
- if [ -d "$dir" ]; then
- __git_diff_index_helper "$root" "$1" | __git_index_file_list_filter |
- sort | uniq
+ __git_ls_files_helper "$root" "$1" |
+ while read -r file; do
+ case "$file" in
+ ?*/*) echo "${file%%/*}" ;;
+ *) echo "$file" ;;
+ esac
+ done | sort | uniq
fi
}
}
-# __git_complete_index_file requires 1 argument: the options to pass to
-# ls-file
+# __git_complete_index_file requires 1 argument:
+# 1: the options to pass to ls-files
+#
+# The exception is --committable, which finds the files appropriate for
+# commit.
__git_complete_index_file ()
{
- local pfx cur_="$cur"
+ local pfx="" cur_="$cur"
case "$cur_" in
?*/*)
pfx="${cur_%/*}"
cur_="${cur_##*/}"
pfx="${pfx}/"
-
- __gitcomp_file "$(__git_index_files "$1" "$pfx")" "$pfx" "$cur_"
- ;;
- *)
- __gitcomp_file "$(__git_index_files "$1")" "" "$cur_"
;;
esac
-}
-
-# __git_complete_diff_index_file requires 1 argument: the id of a tree
-# object
-__git_complete_diff_index_file ()
-{
- local pfx cur_="$cur"
- case "$cur_" in
- ?*/*)
- pfx="${cur_%/*}"
- cur_="${cur_##*/}"
- pfx="${pfx}/"
-
- __gitcomp_file "$(__git_diff_index_files "$1" "$pfx")" "$pfx" "$cur_"
- ;;
- *)
- __gitcomp_file "$(__git_diff_index_files "$1")" "" "$cur_"
- ;;
- esac
+ __gitcomp_file "$(__git_index_files "$1" "$pfx")" "$pfx" "$cur_"
}
__git_complete_file ()
esac
if git rev-parse --verify --quiet HEAD >/dev/null; then
- __git_complete_diff_index_file "HEAD"
+ __git_complete_index_file "--committable"
else
# This is the first commit
__git_complete_index_file "--cached"
local remote="${prev#remote.}"
remote="${remote%.fetch}"
if [ -z "$cur" ]; then
- __gitcompadd "refs/heads/" "" "" ""
+ __gitcomp_nl "refs/heads/" "" "" ""
return
fi
__gitcomp_nl "$(__git_refs_remotes "$remote")"
--*=*|*.) ;;
*) c="$c " ;;
esac
- array[$#array+1]="$c"
+ array+=("$c")
done
compset -P '*[=:]'
compadd -Q -S '' -p "${2-}" -a -- array && _ret=0
compadd -Q -p "${2-}" -f -- ${=1} && _ret=0
}
- __git_zsh_helper ()
- {
- emulate -L ksh
- local cur cword prev
- cur=${words[CURRENT-1]}
- prev=${words[CURRENT-2]}
- let cword=CURRENT-1
- __${service}_main
- }
-
_git ()
{
- emulate -L zsh
- local _ret=1
- __git_zsh_helper
- let _ret && _default -S '' && _ret=0
+ local _ret=1 cur cword prev
+ cur=${words[CURRENT]}
+ prev=${words[CURRENT-1]}
+ let cword=CURRENT-1
+ emulate ksh -c __${service}_main
+ let _ret && _default && _ret=0
return _ret
}
compdef _git git gitk
return
-elif [[ -n ${BASH_VERSION-} ]]; then
- if ((${BASH_VERSINFO[0]} < 4)); then
- # compopt is not supported
- __git_index_file_list_filter ()
- {
- __git_index_file_list_filter_compat
- }
- fi
fi
__git_func_wrap ()
#
# Copyright (c) 2012-2013 Felipe Contreras <felipe.contreras@gmail.com>
#
-# You need git's bash completion script installed somewhere, by default on the
-# same directory as this script.
+# You need git's bash completion script installed somewhere; by default it
+# would be the location bash-completion uses.
#
-# If your script is on ~/.git-completion.sh instead, you can configure it on
-# your ~/.zshrc:
+# If your script is somewhere else, you can configure it on your ~/.zshrc:
#
# zstyle ':completion:*:*:git:*' script ~/.git-completion.sh
#
-# The recommended way to install this script is to copy to
-# '~/.zsh/completion/_git', and then add the following to your ~/.zshrc file:
+# The recommended way to install this script is to copy it to '~/.zsh/_git',
+# and then add the following to your ~/.zshrc file:
#
-# fpath=(~/.zsh/completion $fpath)
+# fpath=(~/.zsh $fpath)
complete ()
{
zstyle ':completion:*:*:git:*' tag-order 'common-commands'
zstyle -s ":completion:*:*:git:*" script script
-test -z "$script" && script="$(dirname ${funcsourcetrace[1]%:*})"/git-completion.bash
+if [ -z "$script" ]; then
+ local -a locations
+ local e
+ locations=(
+ '/etc/bash_completion.d/git' # fedora, old debian
+ '/usr/share/bash-completion/completions/git' # arch, ubuntu, new debian
+ '/usr/share/bash-completion/git' # gentoo
+ $(dirname ${funcsourcetrace[1]%:*})/git-completion.bash
+ )
+ for e in $locations; do
+ test -f $e && script="$e" && break
+ done
+fi
ZSH_VERSION='' . "$script"
__gitcomp ()
import atexit
import urlparse, hashlib
-#
-# If you want to switch to hg-git compatibility mode:
-# git config --global remote-hg.hg-git-compat true
#
# If you are not in hg-git-compat mode and want to disable the tracking of
# named branches:
# git config --global remote-hg.force-push false
#
# If you want the equivalent of hg's clone/pull--insecure option:
-# git config remote-hg.insecure true
+# git config --global remote-hg.insecure true
+#
+# If you want to switch to hg-git compatibility mode:
+# git config --global remote-hg.hg-git-compat true
#
# git:
# Sensible defaults for git.
marks_path = os.path.join(dirname, 'marks-hg')
marks = Marks(marks_path)
+ if sys.platform == 'win32':
+ import msvcrt
+ msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
+
parser = Parser(repo)
for line in parser:
if parser.check('capabilities'):
repository=$1
refspec=$2
echo "git push using: " $repository $refspec
- git push $repository $(git subtree split --prefix=$prefix):refs/heads/$refspec
+ localrev=$(git subtree split --prefix="$prefix") || die
+ git push $repository $localrev:refs/heads/$refspec
else
die "'$dir' must already exist. Try 'git subtree add'."
fi
return c->username && c->password;
}
-int main(int argc, const char **argv)
+int main(int argc, char **argv)
{
const char * const usage[] = {
"git credential-store [options] <action>",
umask(077);
- argc = parse_options(argc, argv, NULL, options, usage, 0);
+ argc = parse_options(argc, (const char **)argv, NULL, options, usage, 0);
if (argc != 1)
usage_with_options(usage, options);
op = argv[0];
return 0; /* success */
}
+int parse_expiry_date(const char *date, unsigned long *timestamp)
+{
+ int errors = 0;
+
+ if (!strcmp(date, "never") || !strcmp(date, "false"))
+ *timestamp = 0;
+ else if (!strcmp(date, "all") || !strcmp(date, "now"))
+ /*
+ * We take over "now" here, which usually translates
+ * to the current timestamp. This is because the user
+ * really means to expire everything she has done in
+ * the past, and by definition reflogs are the record
+ * of the past, and there is nothing from the future
+ * to be kept.
+ */
+ *timestamp = ULONG_MAX;
+ else
+ *timestamp = approxidate_careful(date, &errors);
+
+ return errors;
+}
+
int parse_date(const char *date, char *result, int maxlen)
{
unsigned long timestamp;
home_config_paths(NULL, &xdg_path, "ignore");
excludes_file = xdg_path;
}
- if (!access_or_warn(path, R_OK))
+ if (!access_or_warn(path, R_OK, 0))
add_excludes_from_file(dir, path);
- if (excludes_file && !access_or_warn(excludes_file, R_OK))
+ if (excludes_file && !access_or_warn(excludes_file, R_OK, 0))
add_excludes_from_file(dir, excludes_file);
}
static FILE *pack_edges;
static unsigned int show_stats = 1;
static int global_argc;
-static const char **global_argv;
+static char **global_argv;
/* Memory pools */
static size_t mem_pool_alloc = 2*1024*1024 - sizeof(struct mem_pool);
*end = 0;
mark = strtoumax(line + 1, &end, 10);
if (!mark || end == line + 1
- || *end != ' ' || get_sha1(end + 1, sha1))
+ || *end != ' ' || get_sha1_hex(end + 1, sha1))
die("corrupt mark line: %s", line);
e = find_object(sha1);
if (!e) {
read_marks();
}
-int main(int argc, const char **argv)
+int main(int argc, char **argv)
{
unsigned int i;
#define probe_utf8_pathname_composition(a,b)
#endif
+#ifdef NEEDS_CLIPPED_WRITE
+ssize_t clipped_write(int fildes, const void *buf, size_t nbyte);
+#define write(x,y,z) clipped_write((x),(y),(z))
+#endif
+
#ifdef MKDIR_WO_TRAILING_SLASH
#define mkdir(a,b) compat_mkdir_wo_trailing_slash((a),(b))
extern int compat_mkdir_wo_trailing_slash(const char*, mode_t);
* Call access(2), but warn for any error except "missing file"
* (ENOENT or ENOTDIR).
*/
-int access_or_warn(const char *path, int mode);
-int access_or_die(const char *path, int mode);
+#define ACCESS_EACCES_OK (1U << 0)
+int access_or_warn(const char *path, int mode, unsigned flag);
+int access_or_die(const char *path, int mode, unsigned flag);
/* Warn on an inaccessible file that ought to be accessible */
void warn_on_inaccessible(const char *path);
+++ /dev/null
-#!/usr/bin/env bash
-# Copyright (c) 2012 Felipe Contreras
-
-alias=$1
-url=$2
-
-dir="$GIT_DIR/testgit/$alias"
-prefix="refs/testgit/$alias"
-
-default_refspec="refs/heads/*:${prefix}/heads/*"
-
-refspec="${GIT_REMOTE_TESTGIT_REFSPEC-$default_refspec}"
-
-test -z "$refspec" && prefix="refs"
-
-export GIT_DIR="$url/.git"
-
-mkdir -p "$dir"
-
-if test -z "$GIT_REMOTE_TESTGIT_NO_MARKS"
-then
- gitmarks="$dir/git.marks"
- testgitmarks="$dir/testgit.marks"
- test -e "$gitmarks" || >"$gitmarks"
- test -e "$testgitmarks" || >"$testgitmarks"
- testgitmarks_args=( "--"{import,export}"-marks=$testgitmarks" )
-fi
-
-while read line
-do
- case $line in
- capabilities)
- echo 'import'
- echo 'export'
- test -n "$refspec" && echo "refspec $refspec"
- if test -n "$gitmarks"
- then
- echo "*import-marks $gitmarks"
- echo "*export-marks $gitmarks"
- fi
- test -n "$GIT_REMOTE_TESTGIT_SIGNED_TAGS" && echo "signed-tags"
- echo
- ;;
- list)
- git for-each-ref --format='? %(refname)' 'refs/heads/'
- head=$(git symbolic-ref HEAD)
- echo "@$head HEAD"
- echo
- ;;
- import*)
- # read all import lines
- while true
- do
- ref="${line#* }"
- refs="$refs $ref"
- read line
- test "${line%% *}" != "import" && break
- done
-
- if test -n "$gitmarks"
- then
- echo "feature import-marks=$gitmarks"
- echo "feature export-marks=$gitmarks"
- fi
- echo "feature done"
- git fast-export "${testgitmarks_args[@]}" $refs |
- sed -e "s#refs/heads/#${prefix}/heads/#g"
- echo "done"
- ;;
- export)
- before=$(git for-each-ref --format='%(refname) %(objectname)')
-
- git fast-import "${testgitmarks_args[@]}" --quiet
-
- after=$(git for-each-ref --format='%(refname) %(objectname)')
-
- # figure out which refs were updated
- join -e 0 -o '0 1.2 2.2' -a 2 <(echo "$before") <(echo "$after") |
- while read ref a b
- do
- test $a == $b && continue
- echo "ok $ref"
- done
-
- echo
- ;;
- '')
- exit
- ;;
- esac
-done
--- /dev/null
+#!/bin/sh
+# Copyright (c) 2012 Felipe Contreras
+
+alias=$1
+url=$2
+
+dir="$GIT_DIR/testgit/$alias"
+prefix="refs/testgit/$alias"
+
+default_refspec="refs/heads/*:${prefix}/heads/*"
+
+refspec="${GIT_REMOTE_TESTGIT_REFSPEC-$default_refspec}"
+
+test -z "$refspec" && prefix="refs"
+
+export GIT_DIR="$url/.git"
+
+mkdir -p "$dir"
+
+if test -z "$GIT_REMOTE_TESTGIT_NO_MARKS"
+then
+ gitmarks="$dir/git.marks"
+ testgitmarks="$dir/testgit.marks"
+ test -e "$gitmarks" || >"$gitmarks"
+ test -e "$testgitmarks" || >"$testgitmarks"
+fi
+
+while read line
+do
+ case $line in
+ capabilities)
+ echo 'import'
+ echo 'export'
+ test -n "$refspec" && echo "refspec $refspec"
+ if test -n "$gitmarks"
+ then
+ echo "*import-marks $gitmarks"
+ echo "*export-marks $gitmarks"
+ fi
+ test -n "$GIT_REMOTE_TESTGIT_SIGNED_TAGS" && echo "signed-tags"
+ echo
+ ;;
+ list)
+ git for-each-ref --format='? %(refname)' 'refs/heads/'
+ head=$(git symbolic-ref HEAD)
+ echo "@$head HEAD"
+ echo
+ ;;
+ import*)
+ # read all import lines
+ while true
+ do
+ ref="${line#* }"
+ refs="$refs $ref"
+ read line
+ test "${line%% *}" != "import" && break
+ done
+
+ if test -n "$gitmarks"
+ then
+ echo "feature import-marks=$gitmarks"
+ echo "feature export-marks=$gitmarks"
+ fi
+
+ if test -n "$GIT_REMOTE_TESTGIT_FAILURE"
+ then
+ echo "feature done"
+ exit 1
+ fi
+
+ echo "feature done"
+ git fast-export \
+ ${testgitmarks:+"--import-marks=$testgitmarks"} \
+ ${testgitmarks:+"--export-marks=$testgitmarks"} \
+ $refs |
+ sed -e "s#refs/heads/#${prefix}/heads/#g"
+ echo "done"
+ ;;
+ export)
+ if test -n "$GIT_REMOTE_TESTGIT_FAILURE"
+ then
+ # consume input so fast-export doesn't get SIGPIPE;
+ # git would also notice that case, but we want
+ # to make sure we are exercising the later
+ # error checks
+ while read line; do
+ test "done" = "$line" && break
+ done
+ exit 1
+ fi
+
+ before=$(git for-each-ref --format=' %(refname) %(objectname) ')
+
+ git fast-import \
+ ${testgitmarks:+"--import-marks=$testgitmarks"} \
+ ${testgitmarks:+"--export-marks=$testgitmarks"} \
+ --quiet
+
+ # figure out which refs were updated
+ git for-each-ref --format='%(refname) %(objectname)' |
+ while read ref a
+ do
+ case "$before" in
+ *" $ref $a "*)
+ continue ;; # unchanged
+ esac
+ if test -z "$GIT_REMOTE_TESTGIT_PUSH_ERROR"
+ then
+ echo "ok $ref"
+ else
+ echo "error $ref $GIT_REMOTE_TESTGIT_PUSH_ERROR"
+ fi
+ done
+
+ echo
+ ;;
+ '')
+ exit
+ ;;
+ esac
+done
$_template, $_shared,
$_version, $_fetch_all, $_no_rebase, $_fetch_parent,
$_before, $_after,
- $_merge, $_strategy, $_preserve_merges, $_dry_run, $_local,
+ $_merge, $_strategy, $_preserve_merges, $_dry_run, $_parents, $_local,
$_prefix, $_no_checkout, $_url, $_verbose,
$_commit_url, $_tag, $_merge_info, $_interactive);
{ 'message|m=s' => \$_message,
'destination|d=s' => \$_branch_dest,
'dry-run|n' => \$_dry_run,
+ 'parents' => \$_parents,
'tag|t' => \$_tag,
'username=s' => \$Git::SVN::Prompt::_username,
'commit-url=s' => \$_commit_url } ],
{ 'message|m=s' => \$_message,
'destination|d=s' => \$_branch_dest,
'dry-run|n' => \$_dry_run,
+ 'parents' => \$_parents,
'username=s' => \$Git::SVN::Prompt::_username,
'commit-url=s' => \$_commit_url } ],
'set-tree' => [ \&cmd_set_tree,
$ctx->ls($dst, 'HEAD', 0);
} and die "branch ${branch_name} already exists\n";
+ if ($_parents) {
+ mk_parent_dirs($ctx, $dst);
+ }
+
print "Copying ${src} at r${rev} to ${dst}...\n";
$ctx->copy($src, $rev, $dst)
unless $_dry_run;
$gs->fetch_all;
}
+sub mk_parent_dirs {
+ my ($ctx, $parent) = @_;
+ $parent =~ s{/[^/]*$}{};
+
+ if (!eval{$ctx->ls($parent, 'HEAD', 0)}) {
+ mk_parent_dirs($ctx, $parent);
+ print "Creating parent folder ${parent} ...\n";
+ $ctx->mkdir($parent) unless $_dry_run;
+ }
+}
+
sub cmd_find_rev {
my $revision_or_hash = shift or die "SVN or git revision required ",
"as a command-line argument\n";
}
-int main(int argc, const char **argv)
+int main(int argc, char **av)
{
+ const char **argv = (const char **) av;
const char *cmd;
startup_info = &git_startup_info;
#include "string-list.h"
#include "column.h"
#include "version.h"
+#include "refs.h"
void add_cmdname(struct cmdnames *cmds, const char *name, int len)
{
printf("git version %s\n", git_version_string);
return 0;
}
+
+struct similar_ref_cb {
+ const char *base_ref;
+ struct string_list *similar_refs;
+};
+
+static int append_similar_ref(const char *refname, const unsigned char *sha1,
+ int flags, void *cb_data)
+{
+ struct similar_ref_cb *cb = (struct similar_ref_cb *)(cb_data);
+ char *branch = strrchr(refname, '/') + 1;
+ /* A remote branch of the same name is deemed similar */
+ if (!prefixcmp(refname, "refs/remotes/") &&
+ !strcmp(branch, cb->base_ref))
+ string_list_append(cb->similar_refs,
+ refname + strlen("refs/remotes/"));
+ return 0;
+}
+
+static struct string_list guess_refs(const char *ref)
+{
+ struct similar_ref_cb ref_cb;
+ struct string_list similar_refs = STRING_LIST_INIT_NODUP;
+
+ ref_cb.base_ref = ref;
+ ref_cb.similar_refs = &similar_refs;
+ for_each_ref(append_similar_ref, &ref_cb);
+ return similar_refs;
+}
+
+void help_unknown_ref(const char *ref, const char *cmd, const char *error)
+{
+ int i;
+ struct string_list suggested_refs = guess_refs(ref);
+
+ fprintf_ln(stderr, _("%s: %s - %s"), cmd, ref, error);
+
+ if (suggested_refs.nr > 0) {
+ fprintf_ln(stderr,
+ Q_("\nDid you mean this?",
+ "\nDid you mean one of these?",
+ suggested_refs.nr));
+ for (i = 0; i < suggested_refs.nr; i++)
+ fprintf(stderr, "\t%s\n", suggested_refs.items[i].string);
+ }
+
+ string_list_clear(&suggested_refs, 0);
+ exit(1);
+}
extern int is_in_cmdlist(struct cmdnames *cmds, const char *name);
extern void list_commands(unsigned int colopts, struct cmdnames *main_cmds, struct cmdnames *other_cmds);
+/*
+ * Call this when it is suspected that the user mistyped a ref given
+ * to the command: it reports the error, suggests similar existing
+ * refs, and exits.
+ */
+extern void help_unknown_ref(const char *ref, const char *cmd, const char *error);
#endif /* HELP_H */
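As a usage illustration (the call site and helper name below are hypothetical, not part of this series), a command that takes a ref-ish argument could route its failure path through the new helper so the user gets spelling suggestions:

	static void die_unless_resolvable(const char *refname)	/* hypothetical */
	{
		unsigned char sha1[20];

		if (get_sha1(refname, sha1))
			help_unknown_ref(refname, "merge",
					 "not something we can merge");
		/* help_unknown_ref() prints suggestions and exits(1); otherwise fall through */
	}
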
#ifdef NO_OPENSSL
typedef void *SSL;
#else
+#ifdef APPLE_COMMON_CRYPTO
+#include <CommonCrypto/CommonHMAC.h>
+#define HMAC_CTX CCHmacContext
+#define HMAC_Init(hmac, key, len, algo) CCHmacInit(hmac, algo, key, len)
+#define HMAC_Update CCHmacUpdate
+#define HMAC_Final(hmac, hash, ptr) CCHmacFinal(hmac, hash)
+#define HMAC_CTX_cleanup(ignore)
+#define EVP_md5() kCCHmacAlgMD5
+#else
#include <openssl/evp.h>
#include <openssl/hmac.h>
+#endif
#include <openssl/x509v3.h>
#endif
struct object *lookup_object(const unsigned char *sha1)
{
- unsigned int i;
+ unsigned int i, first;
struct object *obj;
if (!obj_hash)
return NULL;
- i = hashtable_index(sha1);
+ first = i = hashtable_index(sha1);
while ((obj = obj_hash[i]) != NULL) {
if (!hashcmp(sha1, obj->sha1))
break;
if (i == obj_hash_size)
i = 0;
}
+ if (obj && i != first) {
+ /*
+ * Move object to where we started to look for it so
+ * that we do not need to walk the hash table the next
+ * time we look for it.
+ */
+ struct object *tmp = obj_hash[i];
+ obj_hash[i] = obj_hash[first];
+ obj_hash[first] = tmp;
+ }
return obj;
}
+++ /dev/null
-#include "cache.h"
-#include "refs.h"
-#include "tag.h"
-#include "pack-refs.h"
-
-struct ref_to_prune {
- struct ref_to_prune *next;
- unsigned char sha1[20];
- char name[FLEX_ARRAY];
-};
-
-struct pack_refs_cb_data {
- unsigned int flags;
- struct ref_to_prune *ref_to_prune;
- FILE *refs_file;
-};
-
-static int do_not_prune(int flags)
-{
- /* If it is already packed or if it is a symref,
- * do not prune it.
- */
- return (flags & (REF_ISSYMREF|REF_ISPACKED));
-}
-
-static int handle_one_ref(const char *path, const unsigned char *sha1,
- int flags, void *cb_data)
-{
- struct pack_refs_cb_data *cb = cb_data;
- struct object *o;
- int is_tag_ref;
-
- /* Do not pack the symbolic refs */
- if ((flags & REF_ISSYMREF))
- return 0;
- is_tag_ref = !prefixcmp(path, "refs/tags/");
-
- /* ALWAYS pack refs that were already packed or are tags */
- if (!(cb->flags & PACK_REFS_ALL) && !is_tag_ref && !(flags & REF_ISPACKED))
- return 0;
-
- fprintf(cb->refs_file, "%s %s\n", sha1_to_hex(sha1), path);
-
- o = parse_object_or_die(sha1, path);
- if (o->type == OBJ_TAG) {
- o = deref_tag(o, path, 0);
- if (o)
- fprintf(cb->refs_file, "^%s\n",
- sha1_to_hex(o->sha1));
- }
-
- if ((cb->flags & PACK_REFS_PRUNE) && !do_not_prune(flags)) {
- int namelen = strlen(path) + 1;
- struct ref_to_prune *n = xcalloc(1, sizeof(*n) + namelen);
- hashcpy(n->sha1, sha1);
- strcpy(n->name, path);
- n->next = cb->ref_to_prune;
- cb->ref_to_prune = n;
- }
- return 0;
-}
-
-/*
- * Remove empty parents, but spare refs/ and immediate subdirs.
- * Note: munges *name.
- */
-static void try_remove_empty_parents(char *name)
-{
- char *p, *q;
- int i;
- p = name;
- for (i = 0; i < 2; i++) { /* refs/{heads,tags,...}/ */
- while (*p && *p != '/')
- p++;
- /* tolerate duplicate slashes; see check_refname_format() */
- while (*p == '/')
- p++;
- }
- for (q = p; *q; q++)
- ;
- while (1) {
- while (q > p && *q != '/')
- q--;
- while (q > p && *(q-1) == '/')
- q--;
- if (q == p)
- break;
- *q = '\0';
- if (rmdir(git_path("%s", name)))
- break;
- }
-}
-
-/* make sure nobody touched the ref, and unlink */
-static void prune_ref(struct ref_to_prune *r)
-{
- struct ref_lock *lock = lock_ref_sha1(r->name + 5, r->sha1);
-
- if (lock) {
- unlink_or_warn(git_path("%s", r->name));
- unlock_ref(lock);
- try_remove_empty_parents(r->name);
- }
-}
-
-static void prune_refs(struct ref_to_prune *r)
-{
- while (r) {
- prune_ref(r);
- r = r->next;
- }
-}
-
-static struct lock_file packed;
-
-int pack_refs(unsigned int flags)
-{
- int fd;
- struct pack_refs_cb_data cbdata;
-
- memset(&cbdata, 0, sizeof(cbdata));
- cbdata.flags = flags;
-
- fd = hold_lock_file_for_update(&packed, git_path("packed-refs"),
- LOCK_DIE_ON_ERROR);
- cbdata.refs_file = fdopen(fd, "w");
- if (!cbdata.refs_file)
- die_errno("unable to create ref-pack file structure");
-
- /* perhaps other traits later as well */
- fprintf(cbdata.refs_file, "# pack-refs with: peeled fully-peeled \n");
-
- for_each_ref(handle_one_ref, &cbdata);
- if (ferror(cbdata.refs_file))
- die("failed to write ref-pack file");
- if (fflush(cbdata.refs_file) || fsync(fd) || fclose(cbdata.refs_file))
- die_errno("failed to write ref-pack file");
- /*
- * Since the lock file was fdopen()'ed and then fclose()'ed above,
- * assign -1 to the lock file descriptor so that commit_lock_file()
- * won't try to close() it.
- */
- packed.fd = -1;
- if (commit_lock_file(&packed) < 0)
- die_errno("unable to overwrite old ref-pack file");
- prune_refs(cbdata.ref_to_prune);
- return 0;
-}
+++ /dev/null
-#ifndef PACK_REFS_H
-#define PACK_REFS_H
-
-/*
- * Flags for controlling behaviour of pack_refs()
- * PACK_REFS_PRUNE: Prune loose refs after packing
- * PACK_REFS_ALL: Pack _all_ refs, not just tags and already packed refs
- */
-#define PACK_REFS_PRUNE 0x0001
-#define PACK_REFS_ALL 0x0002
-
-/*
- * Write a packed-refs file for the current repository.
- * flags: Combination of the above PACK_REFS_* flags.
- */
-int pack_refs(unsigned int flags);
-
-#endif /* PACK_REFS_H */
return 0;
}
+int parse_opt_expiry_date_cb(const struct option *opt, const char *arg,
+ int unset)
+{
+ return parse_expiry_date(arg, (unsigned long *)opt->value);
+}
+
int parse_opt_color_flag_cb(const struct option *opt, const char *arg,
int unset)
{
#define OPT_DATE(s, l, v, h) \
{ OPTION_CALLBACK, (s), (l), (v), N_("time"),(h), 0, \
parse_opt_approxidate_cb }
+#define OPT_EXPIRY_DATE(s, l, v, h) \
+ { OPTION_CALLBACK, (s), (l), (v), N_("expiry date"),(h), 0, \
+ parse_opt_expiry_date_cb }
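A hypothetical option-table entry wiring the new macro to a command that accepts --expire=<time> might read as follows (variable name and help string invented for illustration):

	static unsigned long expire_limit;	/* --expire=now or --expire=all stores ULONG_MAX here */

	static const struct option builtin_options[] = {
		OPT_EXPIRY_DATE(0, "expire", &expire_limit,
				N_("expire entries older than <time>")),
		OPT_END(),
	};
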
#define OPT_CALLBACK(s, l, v, a, h, f) \
{ OPTION_CALLBACK, (s), (l), (v), (a), (h), 0, (f) }
#define OPT_NUMBER_CALLBACK(v, h, f) \
/*----- some often used options -----*/
extern int parse_opt_abbrev_cb(const struct option *, const char *, int);
extern int parse_opt_approxidate_cb(const struct option *, const char *, int);
+extern int parse_opt_expiry_date_cb(const struct option *, const char *, int);
extern int parse_opt_color_flag_cb(const struct option *, const char *, int);
extern int parse_opt_verbosity_cb(const struct option *, const char *, int);
extern int parse_opt_with_commit(const struct option *, const char *, int);
* (ref_entry->flag & REF_DIR) is zero.
*/
struct ref_value {
+ /*
+ * The name of the object to which this reference resolves
+ * (which may be a tag object). If REF_ISBROKEN, this is
+ * null. If REF_ISSYMREF, then this is the name of the object
+ * referred to by the last reference in the symlink chain.
+ */
unsigned char sha1[20];
+
+ /*
+ * If REF_KNOWS_PEELED, then this field holds the peeled value
+ * of this reference, or null if the reference is known not to
+ * be peelable. See the documentation for peel_ref() for an
+ * exact definition of "peelable".
+ */
unsigned char peeled[20];
};
struct ref_entry **entries;
};
-/* ISSYMREF=0x01, ISPACKED=0x02, and ISBROKEN=0x04 are public interfaces */
+/*
+ * Bit values for ref_entry::flag. REF_ISSYMREF=0x01,
+ * REF_ISPACKED=0x02, and REF_ISBROKEN=0x04 are public values; see
+ * refs.h.
+ */
+
+/*
+ * The field ref_entry->u.value.peeled of this value entry contains
+ * the correct peeled value for the reference, which might be
+ * null_sha1 if the reference is not a tag or if it is broken.
+ */
#define REF_KNOWS_PEELED 0x08
/* ref_entry represents a directory of references */
}
/*
- * Return the entry with the given refname from the ref_dir
- * (non-recursively), sorting dir if necessary. Return NULL if no
- * such entry is found. dir must already be complete.
+ * Return the index of the entry with the given refname from the
+ * ref_dir (non-recursively), sorting dir if necessary. Return -1 if
+ * no such entry is found. dir must already be complete.
*/
-static struct ref_entry *search_ref_dir(struct ref_dir *dir,
- const char *refname, size_t len)
+static int search_ref_dir(struct ref_dir *dir, const char *refname, size_t len)
{
struct ref_entry **r;
struct string_slice key;
if (refname == NULL || !dir->nr)
- return NULL;
+ return -1;
sort_ref_dir(dir);
key.len = len;
ref_entry_cmp_sslice);
if (r == NULL)
- return NULL;
+ return -1;
- return *r;
+ return r - dir->entries;
}
/*
const char *subdirname, size_t len,
int mkdir)
{
- struct ref_entry *entry = search_ref_dir(dir, subdirname, len);
- if (!entry) {
+ int entry_index = search_ref_dir(dir, subdirname, len);
+ struct ref_entry *entry;
+ if (entry_index == -1) {
if (!mkdir)
return NULL;
/*
*/
entry = create_dir_entry(dir->ref_cache, subdirname, len, 0);
add_entry_to_dir(dir, entry);
+ } else {
+ entry = dir->entries[entry_index];
}
return get_ref_dir(entry);
}
*/
static struct ref_entry *find_ref(struct ref_dir *dir, const char *refname)
{
+ int entry_index;
struct ref_entry *entry;
dir = find_containing_dir(dir, refname, 0);
if (!dir)
return NULL;
- entry = search_ref_dir(dir, refname, strlen(refname));
- return (entry && !(entry->flag & REF_DIR)) ? entry : NULL;
+ entry_index = search_ref_dir(dir, refname, strlen(refname));
+ if (entry_index == -1)
+ return NULL;
+ entry = dir->entries[entry_index];
+ return (entry->flag & REF_DIR) ? NULL : entry;
+}
+
+/*
+ * Remove the entry with the given name from dir, recursing into
+ * subdirectories as necessary. If refname is the name of a directory
+ * (i.e., ends with '/'), then remove the directory and its contents.
+ * If the removal was successful, return the number of entries
+ * remaining in the directory entry that contained the deleted entry.
+ * If the name was not found, return -1. Please note that this
+ * function only deletes the entry from the cache; it does not delete
+ * it from the filesystem or ensure that other cache entries (which
+ * might be symbolic references to the removed entry) are updated.
+ * Nor does it remove any containing dir entries that might be made
+ * empty by the removal. dir must represent the top-level directory
+ * and must already be complete.
+ */
+static int remove_entry(struct ref_dir *dir, const char *refname)
+{
+ int refname_len = strlen(refname);
+ int entry_index;
+ struct ref_entry *entry;
+ int is_dir = refname[refname_len - 1] == '/';
+ if (is_dir) {
+ /*
+ * refname represents a reference directory. Remove
+ * the trailing slash; otherwise we will get the
+ * directory *representing* refname rather than the
+ * one *containing* it.
+ */
+ char *dirname = xmemdupz(refname, refname_len - 1);
+ dir = find_containing_dir(dir, dirname, 0);
+ free(dirname);
+ } else {
+ dir = find_containing_dir(dir, refname, 0);
+ }
+ if (!dir)
+ return -1;
+ entry_index = search_ref_dir(dir, refname, refname_len);
+ if (entry_index == -1)
+ return -1;
+ entry = dir->entries[entry_index];
+
+ memmove(&dir->entries[entry_index],
+ &dir->entries[entry_index + 1],
+ (dir->nr - entry_index - 1) * sizeof(*dir->entries)
+ );
+ dir->nr--;
+ if (dir->sorted > entry_index)
+ dir->sorted--;
+ free_ref_entry(entry);
+ return dir->nr;
}
/*
dir->sorted = dir->nr = i;
}
-#define DO_FOR_EACH_INCLUDE_BROKEN 01
+/* Include broken references in a do_for_each_ref*() iteration: */
+#define DO_FOR_EACH_INCLUDE_BROKEN 0x01
+
+/*
+ * Return true iff the reference described by entry can be resolved to
+ * an object in the database. Emit a warning if the referred-to
+ * object does not exist.
+ */
+static int ref_resolves_to_object(struct ref_entry *entry)
+{
+ if (entry->flag & REF_ISBROKEN)
+ return 0;
+ if (!has_sha1_file(entry->u.value.sha1)) {
+ error("%s does not point to a valid object!", entry->name);
+ return 0;
+ }
+ return 1;
+}
+/*
+ * current_ref is a performance hack: when iterating over references
+ * using the for_each_ref*() functions, current_ref is set to the
+ * current reference's entry before calling the callback function. If
+ * the callback function calls peel_ref(), then peel_ref() first
+ * checks whether the reference to be peeled is the current reference
+ * (it usually is) and if so, returns that reference's peeled version
+ * if it is available. This avoids a refname lookup in a common case.
+ */
static struct ref_entry *current_ref;
-static int do_one_ref(const char *base, each_ref_fn fn, int trim,
- int flags, void *cb_data, struct ref_entry *entry)
+typedef int each_ref_entry_fn(struct ref_entry *entry, void *cb_data);
+
+struct ref_entry_cb {
+ const char *base;
+ int trim;
+ int flags;
+ each_ref_fn *fn;
+ void *cb_data;
+};
+
+/*
+ * Handle one reference in a do_for_each_ref*()-style iteration,
+ * calling an each_ref_fn for each entry.
+ */
+static int do_one_ref(struct ref_entry *entry, void *cb_data)
{
+ struct ref_entry_cb *data = cb_data;
int retval;
- if (prefixcmp(entry->name, base))
+ if (prefixcmp(entry->name, data->base))
+ return 0;
+
+ if (!(data->flags & DO_FOR_EACH_INCLUDE_BROKEN) &&
+ !ref_resolves_to_object(entry))
return 0;
- if (!(flags & DO_FOR_EACH_INCLUDE_BROKEN)) {
- if (entry->flag & REF_ISBROKEN)
- return 0; /* ignore broken refs e.g. dangling symref */
- if (!has_sha1_file(entry->u.value.sha1)) {
- error("%s does not point to a valid object!", entry->name);
- return 0;
- }
- }
current_ref = entry;
- retval = fn(entry->name + trim, entry->u.value.sha1, entry->flag, cb_data);
+ retval = data->fn(entry->name + data->trim, entry->u.value.sha1,
+ entry->flag, data->cb_data);
current_ref = NULL;
return retval;
}
* Call fn for each reference in dir that has index in the range
* offset <= index < dir->nr. Recurse into subdirectories that are in
* that index range, sorting them before iterating. This function
- * does not sort dir itself; it should be sorted beforehand.
+ * does not sort dir itself; it should be sorted beforehand. fn is
+ * called for all references, including broken ones.
*/
-static int do_for_each_ref_in_dir(struct ref_dir *dir, int offset,
- const char *base,
- each_ref_fn fn, int trim, int flags, void *cb_data)
+static int do_for_each_entry_in_dir(struct ref_dir *dir, int offset,
+ each_ref_entry_fn fn, void *cb_data)
{
int i;
assert(dir->sorted == dir->nr);
if (entry->flag & REF_DIR) {
struct ref_dir *subdir = get_ref_dir(entry);
sort_ref_dir(subdir);
- retval = do_for_each_ref_in_dir(subdir, 0,
- base, fn, trim, flags, cb_data);
+ retval = do_for_each_entry_in_dir(subdir, 0, fn, cb_data);
} else {
- retval = do_one_ref(base, fn, trim, flags, cb_data, entry);
+ retval = fn(entry, cb_data);
}
if (retval)
return retval;
* by refname. Recurse into subdirectories. If a value entry appears
* in both dir1 and dir2, then only process the version that is in
* dir2. The input dirs must already be sorted, but subdirs will be
- * sorted as needed.
+ * sorted as needed. fn is called for all references, including
+ * broken ones.
*/
-static int do_for_each_ref_in_dirs(struct ref_dir *dir1,
- struct ref_dir *dir2,
- const char *base, each_ref_fn fn, int trim,
- int flags, void *cb_data)
+static int do_for_each_entry_in_dirs(struct ref_dir *dir1,
+ struct ref_dir *dir2,
+ each_ref_entry_fn fn, void *cb_data)
{
int retval;
int i1 = 0, i2 = 0;
struct ref_entry *e1, *e2;
int cmp;
if (i1 == dir1->nr) {
- return do_for_each_ref_in_dir(dir2, i2,
- base, fn, trim, flags, cb_data);
+ return do_for_each_entry_in_dir(dir2, i2, fn, cb_data);
}
if (i2 == dir2->nr) {
- return do_for_each_ref_in_dir(dir1, i1,
- base, fn, trim, flags, cb_data);
+ return do_for_each_entry_in_dir(dir1, i1, fn, cb_data);
}
e1 = dir1->entries[i1];
e2 = dir2->entries[i2];
struct ref_dir *subdir2 = get_ref_dir(e2);
sort_ref_dir(subdir1);
sort_ref_dir(subdir2);
- retval = do_for_each_ref_in_dirs(
- subdir1, subdir2,
- base, fn, trim, flags, cb_data);
+ retval = do_for_each_entry_in_dirs(
+ subdir1, subdir2, fn, cb_data);
i1++;
i2++;
} else if (!(e1->flag & REF_DIR) && !(e2->flag & REF_DIR)) {
/* Both are references; ignore the one from dir1. */
- retval = do_one_ref(base, fn, trim, flags, cb_data, e2);
+ retval = fn(e2, cb_data);
i1++;
i2++;
} else {
if (e->flag & REF_DIR) {
struct ref_dir *subdir = get_ref_dir(e);
sort_ref_dir(subdir);
- retval = do_for_each_ref_in_dir(
- subdir, 0,
- base, fn, trim, flags, cb_data);
+ retval = do_for_each_entry_in_dir(
+ subdir, 0, fn, cb_data);
} else {
- retval = do_one_ref(base, fn, trim, flags, cb_data, e);
+ retval = fn(e, cb_data);
}
}
if (retval)
return retval;
}
- if (i1 < dir1->nr)
- return do_for_each_ref_in_dir(dir1, i1,
- base, fn, trim, flags, cb_data);
- if (i2 < dir2->nr)
- return do_for_each_ref_in_dir(dir2, i2,
- base, fn, trim, flags, cb_data);
- return 0;
}
/*
const char *conflicting_refname;
};
-static int name_conflict_fn(const char *existingrefname, const unsigned char *sha1,
- int flags, void *cb_data)
+static int name_conflict_fn(struct ref_entry *entry, void *cb_data)
{
struct name_conflict_cb *data = (struct name_conflict_cb *)cb_data;
- if (data->oldrefname && !strcmp(data->oldrefname, existingrefname))
+ if (data->oldrefname && !strcmp(data->oldrefname, entry->name))
return 0;
- if (names_conflict(data->refname, existingrefname)) {
- data->conflicting_refname = existingrefname;
+ if (names_conflict(data->refname, entry->name)) {
+ data->conflicting_refname = entry->name;
return 1;
}
return 0;
/*
* Return true iff a reference named refname could be created without
- * conflicting with the name of an existing reference in array. If
+ * conflicting with the name of an existing reference in dir. If
* oldrefname is non-NULL, ignore potential conflicts with oldrefname
* (e.g., because oldrefname is scheduled for deletion in the same
* operation).
data.conflicting_refname = NULL;
sort_ref_dir(dir);
- if (do_for_each_ref_in_dir(dir, 0, "", name_conflict_fn,
- 0, DO_FOR_EACH_INCLUDE_BROKEN,
- &data)) {
+ if (do_for_each_entry_in_dir(dir, 0, name_conflict_fn, &data)) {
error("'%s' exists; cannot create '%s'",
data.conflicting_refname, refname);
return 0;
struct ref_cache *next;
struct ref_entry *loose;
struct ref_entry *packed;
- /* The submodule name, or "" for the main repo. */
- char name[FLEX_ARRAY];
-} *ref_cache;
+ /*
+ * The submodule name, or "" for the main repo. We allocate
+ * length 1 rather than FLEX_ARRAY so that the main ref_cache
+ * is initialized correctly.
+ */
+ char name[1];
+} ref_cache, *submodule_ref_caches;
static void clear_packed_ref_cache(struct ref_cache *refs)
{
*/
static struct ref_cache *get_ref_cache(const char *submodule)
{
- struct ref_cache *refs = ref_cache;
- if (!submodule)
- submodule = "";
- while (refs) {
+ struct ref_cache *refs;
+
+ if (!submodule || !*submodule)
+ return &ref_cache;
+
+ for (refs = submodule_ref_caches; refs; refs = refs->next)
if (!strcmp(submodule, refs->name))
return refs;
- refs = refs->next;
- }
refs = create_ref_cache(submodule);
- refs->next = ref_cache;
- ref_cache = refs;
+ refs->next = submodule_ref_caches;
+ submodule_ref_caches = refs;
return refs;
}
clear_loose_ref_cache(refs);
}
+/* The length of a peeled reference line in packed-refs, including EOL: */
+#define PEELED_LINE_LENGTH 42
+
+/*
+ * The packed-refs header line that we write out. Perhaps other
+ * traits will be added later. The trailing space is required.
+ */
+static const char PACKED_REFS_HEADER[] =
+ "# pack-refs with: peeled fully-peeled \n";
+
/*
* Parse one line from a packed-refs file. Write the SHA1 to sha1.
* Return a pointer to the refname within the line (null-terminated),
}
if (last &&
refline[0] == '^' &&
- strlen(refline) == 42 &&
- refline[41] == '\n' &&
+ strlen(refline) == PEELED_LINE_LENGTH &&
+ refline[PEELED_LINE_LENGTH - 1] == '\n' &&
!get_sha1_hex(refline + 1, sha1)) {
hashcpy(last->u.value.peeled, sha1);
/*
void add_packed_ref(const char *refname, const unsigned char *sha1)
{
- add_ref(get_packed_refs(get_ref_cache(NULL)),
- create_ref_entry(refname, sha1, REF_ISPACKED, 1));
+ add_ref(get_packed_refs(&ref_cache),
+ create_ref_entry(refname, sha1, REF_ISPACKED, 1));
}
/*
}
/*
- * Try to read ref from the packed references. On success, set sha1
- * and return 0; otherwise, return -1.
+ * Return the ref_entry for the given refname from the packed
+ * references. If it does not exist, return NULL.
*/
-static int get_packed_ref(const char *refname, unsigned char *sha1)
+static struct ref_entry *get_packed_ref(const char *refname)
{
- struct ref_dir *packed = get_packed_refs(get_ref_cache(NULL));
- struct ref_entry *entry = find_ref(packed, refname);
- if (entry) {
- hashcpy(sha1, entry->u.value.sha1);
- return 0;
- }
- return -1;
+ return find_ref(get_packed_refs(&ref_cache), refname);
}
const char *resolve_ref_unsafe(const char *refname, unsigned char *sha1, int reading, int *flag)
git_snpath(path, sizeof(path), "%s", refname);
if (lstat(path, &st) < 0) {
+ struct ref_entry *entry;
+
if (errno != ENOENT)
return NULL;
/*
* The loose reference file does not exist;
* check for a packed reference.
*/
- if (!get_packed_ref(refname, sha1)) {
+ entry = get_packed_ref(refname);
+ if (entry) {
+ hashcpy(sha1, entry->u.value.sha1);
if (flag)
*flag |= REF_ISPACKED;
return refname;
return filter->fn(refname, sha1, flags, filter->cb_data);
}
+enum peel_status {
+ /* object was peeled successfully: */
+ PEEL_PEELED = 0,
+
+ /*
+ * object cannot be peeled because the named object (or an
+	 * object referred to by a tag in the peel chain) does not
+ * exist.
+ */
+ PEEL_INVALID = -1,
+
+ /* object cannot be peeled because it is not a tag: */
+ PEEL_NON_TAG = -2,
+
+ /* ref_entry contains no peeled value because it is a symref: */
+ PEEL_IS_SYMREF = -3,
+
+ /*
+ * ref_entry cannot be peeled because it is broken (i.e., the
+ * symbolic reference cannot even be resolved to an object
+ * name):
+ */
+ PEEL_BROKEN = -4
+};
+
+/*
+ * Peel the named object; i.e., if the object is a tag, resolve the
+ * tag recursively until a non-tag is found. If successful, store the
+ * result to sha1 and return PEEL_PEELED. If the object is not a tag
+ * or is not valid, return PEEL_NON_TAG or PEEL_INVALID, respectively,
+ * and leave sha1 unchanged.
+ */
+static enum peel_status peel_object(const unsigned char *name, unsigned char *sha1)
+{
+ struct object *o = lookup_unknown_object(name);
+
+ if (o->type == OBJ_NONE) {
+ int type = sha1_object_info(name, NULL);
+ if (type < 0)
+ return PEEL_INVALID;
+ o->type = type;
+ }
+
+ if (o->type != OBJ_TAG)
+ return PEEL_NON_TAG;
+
+ o = deref_tag_noverify(o);
+ if (!o)
+ return PEEL_INVALID;
+
+ hashcpy(sha1, o->sha1);
+ return PEEL_PEELED;
+}
+
+/*
+ * Peel the entry (if possible) and return its new peel_status. If
+ * repeel is true, re-peel the entry even if there is an old peeled
+ * value that is already stored in it.
+ *
+ * It is OK to call this function with a packed reference entry that
+ * might be stale and might even refer to an object that has since
+ * been garbage-collected. In such a case, if the entry has
+ * REF_KNOWS_PEELED then leave the status unchanged and return
+ * PEEL_PEELED or PEEL_NON_TAG; otherwise, return PEEL_INVALID.
+ */
+static enum peel_status peel_entry(struct ref_entry *entry, int repeel)
+{
+ enum peel_status status;
+
+ if (entry->flag & REF_KNOWS_PEELED) {
+ if (repeel) {
+ entry->flag &= ~REF_KNOWS_PEELED;
+ hashclr(entry->u.value.peeled);
+ } else {
+ return is_null_sha1(entry->u.value.peeled) ?
+ PEEL_NON_TAG : PEEL_PEELED;
+ }
+ }
+ if (entry->flag & REF_ISBROKEN)
+ return PEEL_BROKEN;
+ if (entry->flag & REF_ISSYMREF)
+ return PEEL_IS_SYMREF;
+
+ status = peel_object(entry->u.value.sha1, entry->u.value.peeled);
+ if (status == PEEL_PEELED || status == PEEL_NON_TAG)
+ entry->flag |= REF_KNOWS_PEELED;
+ return status;
+}
+
int peel_ref(const char *refname, unsigned char *sha1)
{
int flag;
unsigned char base[20];
- struct object *o;
if (current_ref && (current_ref->name == refname
- || !strcmp(current_ref->name, refname))) {
- if (current_ref->flag & REF_KNOWS_PEELED) {
- if (is_null_sha1(current_ref->u.value.peeled))
- return -1;
- hashcpy(sha1, current_ref->u.value.peeled);
- return 0;
- }
- hashcpy(base, current_ref->u.value.sha1);
- goto fallback;
+ || !strcmp(current_ref->name, refname))) {
+ if (peel_entry(current_ref, 0))
+ return -1;
+ hashcpy(sha1, current_ref->u.value.peeled);
+ return 0;
}
if (read_ref_full(refname, base, 1, &flag))
return -1;
- if ((flag & REF_ISPACKED)) {
- struct ref_dir *dir = get_packed_refs(get_ref_cache(NULL));
- struct ref_entry *r = find_ref(dir, refname);
-
- if (r != NULL && r->flag & REF_KNOWS_PEELED) {
+ /*
+ * If the reference is packed, read its ref_entry from the
+ * cache in the hope that we already know its peeled value.
+ * We only try this optimization on packed references because
+ * (a) forcing the filling of the loose reference cache could
+ * be expensive and (b) loose references anyway usually do not
+ * have REF_KNOWS_PEELED.
+ */
+ if (flag & REF_ISPACKED) {
+ struct ref_entry *r = get_packed_ref(refname);
+ if (r) {
+ if (peel_entry(r, 0))
+ return -1;
hashcpy(sha1, r->u.value.peeled);
return 0;
}
}
-fallback:
- o = lookup_unknown_object(base);
- if (o->type == OBJ_NONE) {
- int type = sha1_object_info(base, NULL);
- if (type < 0)
- return -1;
- o->type = type;
- }
-
- if (o->type == OBJ_TAG) {
- o = deref_tag_noverify(o);
- if (o) {
- hashcpy(sha1, o->sha1);
- return 0;
- }
- }
- return -1;
+ return peel_object(base, sha1);
}
struct warn_if_dangling_data {
for_each_rawref(warn_if_dangling_symref, &data);
}
-static int do_for_each_ref(const char *submodule, const char *base, each_ref_fn fn,
- int trim, int flags, void *cb_data)
+/*
+ * Call fn for each reference in the specified ref_cache, omitting
+ * references not in the containing_dir of base. fn is called for all
+ * references, including broken ones. If fn ever returns a non-zero
+ * value, stop the iteration and return that value; otherwise, return
+ * 0.
+ */
+static int do_for_each_entry(struct ref_cache *refs, const char *base,
+ each_ref_entry_fn fn, void *cb_data)
{
- struct ref_cache *refs = get_ref_cache(submodule);
struct ref_dir *packed_dir = get_packed_refs(refs);
struct ref_dir *loose_dir = get_loose_refs(refs);
int retval = 0;
if (packed_dir && loose_dir) {
sort_ref_dir(packed_dir);
sort_ref_dir(loose_dir);
- retval = do_for_each_ref_in_dirs(
- packed_dir, loose_dir,
- base, fn, trim, flags, cb_data);
+ retval = do_for_each_entry_in_dirs(
+ packed_dir, loose_dir, fn, cb_data);
} else if (packed_dir) {
sort_ref_dir(packed_dir);
- retval = do_for_each_ref_in_dir(
- packed_dir, 0,
- base, fn, trim, flags, cb_data);
+ retval = do_for_each_entry_in_dir(
+ packed_dir, 0, fn, cb_data);
} else if (loose_dir) {
sort_ref_dir(loose_dir);
- retval = do_for_each_ref_in_dir(
- loose_dir, 0,
- base, fn, trim, flags, cb_data);
+ retval = do_for_each_entry_in_dir(
+ loose_dir, 0, fn, cb_data);
}
return retval;
}
+/*
+ * Call fn for each reference in the specified ref_cache for which the
+ * refname begins with base. If trim is non-zero, then trim that many
+ * characters off the beginning of each refname before passing the
+ * refname to fn. flags can be DO_FOR_EACH_INCLUDE_BROKEN to include
+ * broken references in the iteration. If fn ever returns a non-zero
+ * value, stop the iteration and return that value; otherwise, return
+ * 0.
+ */
+static int do_for_each_ref(struct ref_cache *refs, const char *base,
+ each_ref_fn fn, int trim, int flags, void *cb_data)
+{
+ struct ref_entry_cb data;
+ data.base = base;
+ data.trim = trim;
+ data.flags = flags;
+ data.fn = fn;
+ data.cb_data = cb_data;
+
+ return do_for_each_entry(refs, base, do_one_ref, &data);
+}
+
static int do_head_ref(const char *submodule, each_ref_fn fn, void *cb_data)
{
unsigned char sha1[20];
int for_each_ref(each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, "", fn, 0, 0, cb_data);
+ return do_for_each_ref(&ref_cache, "", fn, 0, 0, cb_data);
}
int for_each_ref_submodule(const char *submodule, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(submodule, "", fn, 0, 0, cb_data);
+ return do_for_each_ref(get_ref_cache(submodule), "", fn, 0, 0, cb_data);
}
int for_each_ref_in(const char *prefix, each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, prefix, fn, strlen(prefix), 0, cb_data);
+ return do_for_each_ref(&ref_cache, prefix, fn, strlen(prefix), 0, cb_data);
}
int for_each_ref_in_submodule(const char *submodule, const char *prefix,
each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(submodule, prefix, fn, strlen(prefix), 0, cb_data);
+ return do_for_each_ref(get_ref_cache(submodule), prefix, fn, strlen(prefix), 0, cb_data);
}
int for_each_tag_ref(each_ref_fn fn, void *cb_data)
int for_each_replace_ref(each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, "refs/replace/", fn, 13, 0, cb_data);
+ return do_for_each_ref(&ref_cache, "refs/replace/", fn, 13, 0, cb_data);
}
int head_ref_namespaced(each_ref_fn fn, void *cb_data)
struct strbuf buf = STRBUF_INIT;
int ret;
strbuf_addf(&buf, "%srefs/", get_git_namespace());
- ret = do_for_each_ref(NULL, buf.buf, fn, 0, 0, cb_data);
+ ret = do_for_each_ref(&ref_cache, buf.buf, fn, 0, 0, cb_data);
strbuf_release(&buf);
return ret;
}
int for_each_rawref(each_ref_fn fn, void *cb_data)
{
- return do_for_each_ref(NULL, "", fn, 0,
+ return do_for_each_ref(&ref_cache, "", fn, 0,
DO_FOR_EACH_INCLUDE_BROKEN, cb_data);
}
* name is a proper prefix of our refname.
*/
if (missing &&
- !is_refname_available(refname, NULL, get_packed_refs(get_ref_cache(NULL)))) {
+ !is_refname_available(refname, NULL, get_packed_refs(&ref_cache))) {
last_errno = ENOTDIR;
goto error_return;
}
return lock_ref_sha1_basic(refname, old_sha1, flags, NULL);
}
-struct repack_without_ref_sb {
- const char *refname;
- int fd;
-};
-
-static int repack_without_ref_fn(const char *refname, const unsigned char *sha1,
- int flags, void *cb_data)
+/*
+ * Write an entry to the packed-refs file for the specified refname.
+ * If peeled is non-NULL, write it as the entry's peeled value.
+ */
+static void write_packed_entry(int fd, char *refname, unsigned char *sha1,
+ unsigned char *peeled)
{
- struct repack_without_ref_sb *data = cb_data;
char line[PATH_MAX + 100];
int len;
- if (!strcmp(data->refname, refname))
- return 0;
len = snprintf(line, sizeof(line), "%s %s\n",
sha1_to_hex(sha1), refname);
/* this should not happen but just being defensive */
if (len > sizeof(line))
die("too long a refname '%s'", refname);
- write_or_die(data->fd, line, len);
+ write_or_die(fd, line, len);
+
+ if (peeled) {
+ if (snprintf(line, sizeof(line), "^%s\n",
+ sha1_to_hex(peeled)) != PEELED_LINE_LENGTH)
+ die("internal error");
+ write_or_die(fd, line, PEELED_LINE_LENGTH);
+ }
+}
+
+struct ref_to_prune {
+ struct ref_to_prune *next;
+ unsigned char sha1[20];
+ char name[FLEX_ARRAY];
+};
+
+struct pack_refs_cb_data {
+ unsigned int flags;
+ struct ref_to_prune *ref_to_prune;
+ int fd;
+};
+
+static int pack_one_ref(struct ref_entry *entry, void *cb_data)
+{
+ struct pack_refs_cb_data *cb = cb_data;
+ enum peel_status peel_status;
+ int is_tag_ref = !prefixcmp(entry->name, "refs/tags/");
+
+ /* ALWAYS pack refs that were already packed or are tags */
+ if (!(cb->flags & PACK_REFS_ALL) && !is_tag_ref &&
+ !(entry->flag & REF_ISPACKED))
+ return 0;
+
+ /* Do not pack symbolic or broken refs: */
+ if ((entry->flag & REF_ISSYMREF) || !ref_resolves_to_object(entry))
+ return 0;
+
+ peel_status = peel_entry(entry, 1);
+ if (peel_status != PEEL_PEELED && peel_status != PEEL_NON_TAG)
+ die("internal error peeling reference %s (%s)",
+ entry->name, sha1_to_hex(entry->u.value.sha1));
+ write_packed_entry(cb->fd, entry->name, entry->u.value.sha1,
+ peel_status == PEEL_PEELED ?
+ entry->u.value.peeled : NULL);
+
+ /* If the ref was already packed, there is no need to prune it. */
+ if ((cb->flags & PACK_REFS_PRUNE) && !(entry->flag & REF_ISPACKED)) {
+ int namelen = strlen(entry->name) + 1;
+ struct ref_to_prune *n = xcalloc(1, sizeof(*n) + namelen);
+ hashcpy(n->sha1, entry->u.value.sha1);
+ strcpy(n->name, entry->name);
+ n->next = cb->ref_to_prune;
+ cb->ref_to_prune = n;
+ }
return 0;
}
+/*
+ * Remove empty parents, but spare refs/ and immediate subdirs.
+ * Note: munges *name.
+ */
+static void try_remove_empty_parents(char *name)
+{
+ char *p, *q;
+ int i;
+ p = name;
+ for (i = 0; i < 2; i++) { /* refs/{heads,tags,...}/ */
+ while (*p && *p != '/')
+ p++;
+ /* tolerate duplicate slashes; see check_refname_format() */
+ while (*p == '/')
+ p++;
+ }
+ for (q = p; *q; q++)
+ ;
+ while (1) {
+ while (q > p && *q != '/')
+ q--;
+ while (q > p && *(q-1) == '/')
+ q--;
+ if (q == p)
+ break;
+ *q = '\0';
+ if (rmdir(git_path("%s", name)))
+ break;
+ }
+}
+
+/* make sure nobody touched the ref, and unlink */
+static void prune_ref(struct ref_to_prune *r)
+{
+ struct ref_lock *lock = lock_ref_sha1(r->name + 5, r->sha1);
+
+ if (lock) {
+ unlink_or_warn(git_path("%s", r->name));
+ unlock_ref(lock);
+ try_remove_empty_parents(r->name);
+ }
+}
+
+static void prune_refs(struct ref_to_prune *r)
+{
+ while (r) {
+ prune_ref(r);
+ r = r->next;
+ }
+}
+
static struct lock_file packlock;
-static int repack_without_ref(const char *refname)
+int pack_refs(unsigned int flags)
{
- struct repack_without_ref_sb data;
- struct ref_cache *refs = get_ref_cache(NULL);
- struct ref_dir *packed = get_packed_refs(refs);
- if (find_ref(packed, refname) == NULL)
+ struct pack_refs_cb_data cbdata;
+
+ memset(&cbdata, 0, sizeof(cbdata));
+ cbdata.flags = flags;
+
+ cbdata.fd = hold_lock_file_for_update(&packlock, git_path("packed-refs"),
+ LOCK_DIE_ON_ERROR);
+
+ write_or_die(cbdata.fd, PACKED_REFS_HEADER, strlen(PACKED_REFS_HEADER));
+
+ do_for_each_entry(&ref_cache, "", pack_one_ref, &cbdata);
+ if (commit_lock_file(&packlock) < 0)
+ die_errno("unable to overwrite old ref-pack file");
+ prune_refs(cbdata.ref_to_prune);
+ return 0;
+}
+
+static int repack_ref_fn(struct ref_entry *entry, void *cb_data)
+{
+ int *fd = cb_data;
+ enum peel_status peel_status;
+
+ if (entry->flag & REF_ISBROKEN) {
+ /* This shouldn't happen to packed refs. */
+ error("%s is broken!", entry->name);
return 0;
- data.refname = refname;
- data.fd = hold_lock_file_for_update(&packlock, git_path("packed-refs"), 0);
- if (data.fd < 0) {
+ }
+ if (!has_sha1_file(entry->u.value.sha1)) {
+ unsigned char sha1[20];
+ int flags;
+
+ if (read_ref_full(entry->name, sha1, 0, &flags))
+ /* We should at least have found the packed ref. */
+ die("Internal error");
+ if ((flags & REF_ISSYMREF) || !(flags & REF_ISPACKED))
+ /*
+ * This packed reference is overridden by a
+ * loose reference, so it is OK that its value
+ * is no longer valid; for example, it might
+ * refer to an object that has been garbage
+ * collected. For this purpose we don't even
+ * care whether the loose reference itself is
+ * invalid, broken, symbolic, etc. Silently
+ * omit the packed reference from the output.
+ */
+ return 0;
+ /*
+ * There is no overriding loose reference, so the fact
+ * that this reference doesn't refer to a valid object
+ * indicates some kind of repository corruption.
+ * Report the problem, then omit the reference from
+ * the output.
+ */
+ error("%s does not point to a valid object!", entry->name);
+ return 0;
+ }
+
+ peel_status = peel_entry(entry, 0);
+ write_packed_entry(*fd, entry->name, entry->u.value.sha1,
+ peel_status == PEEL_PEELED ?
+ entry->u.value.peeled : NULL);
+
+ return 0;
+}
+
+static int repack_without_ref(const char *refname)
+{
+ int fd;
+ struct ref_dir *packed;
+
+ if (!get_packed_ref(refname))
+ return 0; /* refname does not exist in packed refs */
+
+ fd = hold_lock_file_for_update(&packlock, git_path("packed-refs"), 0);
+ if (fd < 0) {
unable_to_lock_error(git_path("packed-refs"), errno);
return error("cannot delete '%s' from packed refs", refname);
}
- clear_packed_ref_cache(refs);
- packed = get_packed_refs(refs);
- do_for_each_ref_in_dir(packed, 0, "", repack_without_ref_fn, 0, 0, &data);
+ clear_packed_ref_cache(&ref_cache);
+ packed = get_packed_refs(&ref_cache);
+ /* Remove refname from the cache. */
+ if (remove_entry(packed, refname) == -1) {
+ /*
+ * The packed entry disappeared while we were
+ * acquiring the lock.
+ */
+ rollback_lock_file(&packlock);
+ return 0;
+ }
+ write_or_die(fd, PACKED_REFS_HEADER, strlen(PACKED_REFS_HEADER));
+ do_for_each_entry_in_dir(packed, 0, repack_ref_fn, &fd);
return commit_lock_file(&packlock);
}
ret |= repack_without_ref(lock->ref_name);
unlink_or_warn(git_path("logs/%s", lock->ref_name));
- invalidate_ref_cache(NULL);
+ clear_loose_ref_cache(&ref_cache);
unlock_ref(lock);
return ret;
}
struct stat loginfo;
int log = !lstat(git_path("logs/%s", oldrefname), &loginfo);
const char *symref = NULL;
- struct ref_cache *refs = get_ref_cache(NULL);
if (log && S_ISLNK(loginfo.st_mode))
return error("reflog for %s is a symlink", oldrefname);
if (!symref)
return error("refname %s not found", oldrefname);
- if (!is_refname_available(newrefname, oldrefname, get_packed_refs(refs)))
+ if (!is_refname_available(newrefname, oldrefname, get_packed_refs(&ref_cache)))
return 1;
- if (!is_refname_available(newrefname, oldrefname, get_loose_refs(refs)))
+ if (!is_refname_available(newrefname, oldrefname, get_loose_refs(&ref_cache)))
return 1;
if (log && rename(git_path("logs/%s", oldrefname), git_path(TMP_RENAMED_LOG)))
unlock_ref(lock);
return -1;
}
- clear_loose_ref_cache(get_ref_cache(NULL));
+ clear_loose_ref_cache(&ref_cache);
if (log_ref_write(lock->ref_name, lock->old_sha1, sha1, logmsg) < 0 ||
(strcmp(lock->ref_name, lock->orig_ref_name) &&
log_ref_write(lock->orig_ref_name, lock->old_sha1, sha1, logmsg) < 0)) {
int force_write;
};
+/*
+ * Bit values set in the flags argument passed to each_ref_fn():
+ */
+
+/* Reference is a symbolic reference. */
#define REF_ISSYMREF 0x01
+
+/* Reference is a packed reference. */
#define REF_ISPACKED 0x02
+
+/*
+ * Reference cannot be resolved to an object name: dangling symbolic
+ * reference (directly or indirectly), corrupt reference file, or
+ * symbolic reference refers to ill-formatted reference name.
+ */
#define REF_ISBROKEN 0x04
/*
*/
extern void add_packed_ref(const char *refname, const unsigned char *sha1);
+/*
+ * Flags for controlling behaviour of pack_refs()
+ * PACK_REFS_PRUNE: Prune loose refs after packing
+ * PACK_REFS_ALL: Pack _all_ refs, not just tags and already packed refs
+ */
+#define PACK_REFS_PRUNE 0x0001
+#define PACK_REFS_ALL 0x0002
+
+/*
+ * Write a packed-refs file for the current repository.
+ * flags: Combination of the above PACK_REFS_* flags.
+ */
+int pack_refs(unsigned int flags);
+
extern int ref_exists(const char *);
+/*
+ * If refname is a non-symbolic reference that refers to a tag object,
+ * and the tag can be (recursively) dereferenced to a non-tag object,
+ * store the SHA1 of the referred-to object to sha1 and return 0. If
+ * any of these conditions are not met, return a non-zero value.
+ * Symbolic references are considered unpeelable, even if they
+ * ultimately resolve to a peelable tag.
+ */
extern int peel_ref(const char *refname, unsigned char *sha1);
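A small hypothetical sketch of the calling convention (the helper name is made up; any tag ref name could be passed in):

	static void show_peeled(const char *refname)	/* hypothetical */
	{
		unsigned char peeled[20];

		if (!peel_ref(refname, peeled))
			printf("%s peels to %s\n", refname, sha1_to_hex(peeled));
		else
			printf("%s is not a peelable tag ref\n", refname);
	}
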
/** Locks a "refs/" ref returning the lock on success and NULL on failure. **/
return 0;
}
-int main(int argc, const char **argv)
+int main(int argc, char **argv)
{
struct strbuf buf = STRBUF_INIT, url_sb = STRBUF_INIT,
private_ref_sb = STRBUF_INIT, marksfilename_sb = STRBUF_INIT,
info->nr++;
}
+static void add_rev_cmdline_list(struct rev_info *revs,
+ struct commit_list *commit_list,
+ int whence,
+ unsigned flags)
+{
+ while (commit_list) {
+ struct object *object = &commit_list->item->object;
+ add_rev_cmdline(revs, object, sha1_to_hex(object->sha1),
+ whence, flags);
+ commit_list = commit_list->next;
+ }
+}
+
struct all_refs_cb {
int all_flags;
int warned_bad_reflog;
add_pending_object(revs, &head->object, "HEAD");
add_pending_object(revs, &other->object, "MERGE_HEAD");
bases = get_merge_bases(head, other, 1);
+ add_rev_cmdline_list(revs, bases, REV_CMD_MERGE_BASE, UNINTERESTING);
add_pending_commit_list(revs, bases, UNINTERESTING);
free_commit_list(bases);
head->object.flags |= SYMMETRIC_LEFT;
if (symmetric) {
exclude = get_merge_bases(a, b, 1);
+ add_rev_cmdline_list(revs, exclude,
+ REV_CMD_MERGE_BASE,
+ flags_exclude);
add_pending_commit_list(revs, exclude,
flags_exclude);
free_commit_list(exclude);
REV_CMD_PARENTS_ONLY,
REV_CMD_LEFT,
REV_CMD_RIGHT,
+ REV_CMD_MERGE_BASE,
REV_CMD_REV
} whence;
unsigned flags;
int strbuf_branchname(struct strbuf *sb, const char *name)
{
int len = strlen(name);
- if (interpret_branch_name(name, sb) == len)
+ int used = interpret_branch_name(name, sb);
+
+ if (used == len)
return 0;
- strbuf_add(sb, name, len);
+ if (used < 0)
+ used = 0;
+ strbuf_add(sb, name + used, len - used);
return len;
}
DEFAULT_TEST_TARGET ?= test
TEST_LINT ?= test-lint-duplicates test-lint-executable
+ifdef TEST_OUTPUT_DIRECTORY
+TEST_RESULTS_DIRECTORY = $(TEST_OUTPUT_DIRECTORY)/test-results
+else
+TEST_RESULTS_DIRECTORY = test-results
+endif
+
# Shell quote;
SHELL_PATH_SQ = $(subst ','\'',$(SHELL_PATH))
PERL_PATH_SQ = $(subst ','\'',$(PERL_PATH))
+TEST_RESULTS_DIRECTORY_SQ = $(subst ','\'',$(TEST_RESULTS_DIRECTORY))
T = $(sort $(wildcard t[0-9][0-9][0-9][0-9]-*.sh))
TSVN = $(sort $(wildcard t91[0-9][0-9]-*.sh))
@echo "*** $@ ***"; GIT_CONFIG=.git/config '$(SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
pre-clean:
- $(RM) -r test-results
+ $(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
clean-except-prove-cache:
- $(RM) -r 'trash directory'.* test-results
+ $(RM) -r 'trash directory'.* '$(TEST_RESULTS_DIRECTORY_SQ)'
$(RM) -r valgrind/bin
clean: clean-except-prove-cache
$(MAKE) clean
aggregate-results:
- for f in test-results/t*-*.counts; do \
+ for f in '$(TEST_RESULTS_DIRECTORY_SQ)'/t*-*.counts; do \
echo "$$f"; \
done | '$(SHELL_PATH_SQ)' ./aggregate-results.sh
init_vars &&
rm -f "$HOME/stdout" "$HOME/stderr" "$HOME/cmd" &&
- echo git $global_args check-ignore $quiet_opt $verbose_opt $args \
+ echo git $global_args check-ignore $quiet_opt $verbose_opt $non_matching_opt $args \
>"$HOME/cmd" &&
+ echo "$expect_code" >"$HOME/expected-exit-code" &&
test_expect_code "$expect_code" \
- git $global_args check-ignore $quiet_opt $verbose_opt $args \
+ git $global_args check-ignore $quiet_opt $verbose_opt $non_matching_opt $args \
>"$HOME/stdout" 2>"$HOME/stderr" &&
test_cmp "$HOME/expected-stdout" "$HOME/stdout" &&
stderr_empty_on_success "$expect_code"
}
-# Runs the same code with 3 different levels of output verbosity,
+# Runs the same code with 4 different levels of output verbosity:
+#
+# 1. with -q / --quiet
+# 2. with default verbosity
+# 3. with -v / --verbose
+# 4. with -v / --verbose, *and* -n / --non-matching
+#
# expecting success each time. Takes advantage of the fact that
# check-ignore --verbose output is the same as normal output except
# for the extra first column.
# Arguments:
# - (optional) prereqs for this test, e.g. 'SYMLINKS'
# - test name
-# - output to expect from -v / --verbose mode
+# - output to expect from the fourth verbosity mode (the output
+# from the other verbosity modes is automatically inferred
+# from this value)
# - code to run (should invoke test_check_ignore)
test_expect_success_multi () {
prereq=
prereq=$1
shift
fi
- testname="$1" expect_verbose="$2" code="$3"
+ testname="$1" expect_all="$2" code="$3"
+ expect_verbose=$( echo "$expect_all" | grep -v '^:: ' )
expect=$( echo "$expect_verbose" | sed -e 's/.* //' )
test_expect_success $prereq "$testname" '
eval "$code"
'
- for quiet_opt in '-q' '--quiet'
- do
- test_expect_success $prereq "$testname${quiet_opt:+ with $quiet_opt}" "
+ # --quiet is only valid when a single pattern is passed
+ if test $( echo "$expect_all" | wc -l ) = 1
+ then
+ for quiet_opt in '-q' '--quiet'
+ do
+ test_expect_success $prereq "$testname${quiet_opt:+ with $quiet_opt}" "
expect '' &&
$code
"
- done
- quiet_opt=
+ done
+ quiet_opt=
+ fi
for verbose_opt in '-v' '--verbose'
do
- test_expect_success $prereq "$testname${verbose_opt:+ with $verbose_opt}" "
- expect '$expect_verbose' &&
- $code
- "
+ for non_matching_opt in '' ' -n' ' --non-matching'
+ do
+ if test -n "$non_matching_opt"
+ then
+ my_expect="$expect_all"
+ else
+ my_expect="$expect_verbose"
+ fi
+
+ test_code="
+ expect '$my_expect' &&
+ $code
+ "
+ opts="$verbose_opt$non_matching_opt"
+ test_expect_success $prereq "$testname${opts:+ with $opts}" "$test_code"
+ done
done
verbose_opt=
+ non_matching_opt=
}
test_expect_success 'setup' '
#
# test invalid inputs
-test_expect_success_multi '. corner-case' '' '
+test_expect_success_multi '. corner-case' ':: .' '
test_check_ignore . 1
'
test_expect_success_multi '--stdin with empty STDIN' '' '
test_check_ignore "--stdin" 1 </dev/null &&
- if test -n "$quiet_opt"; then
- test_stderr ""
- else
- test_stderr "no pathspec given."
- fi
+ test_stderr ""
'
test_expect_success '-q with multiple args' '
where="in subdir $subdir"
fi
- test_expect_success_multi "non-existent file $where not ignored" '' "
- test_check_ignore '${subdir}non-existent' 1
- "
+ test_expect_success_multi "non-existent file $where not ignored" \
+ ":: ${subdir}non-existent" \
+ "test_check_ignore '${subdir}non-existent' 1"
test_expect_success_multi "non-existent file $where ignored" \
- ".gitignore:1:one ${subdir}one" "
- test_check_ignore '${subdir}one'
- "
+ ".gitignore:1:one ${subdir}one" \
+ "test_check_ignore '${subdir}one'"
- test_expect_success_multi "existing untracked file $where not ignored" '' "
- test_check_ignore '${subdir}not-ignored' 1
- "
+ test_expect_success_multi "existing untracked file $where not ignored" \
+ ":: ${subdir}not-ignored" \
+ "test_check_ignore '${subdir}not-ignored' 1"
- test_expect_success_multi "existing tracked file $where not ignored" '' "
- test_check_ignore '${subdir}ignored-but-in-index' 1
- "
+ test_expect_success_multi "existing tracked file $where not ignored" \
+ ":: ${subdir}ignored-but-in-index" \
+ "test_check_ignore '${subdir}ignored-but-in-index' 1"
test_expect_success_multi "existing untracked file $where ignored" \
- ".gitignore:2:ignored-* ${subdir}ignored-and-untracked" "
- test_check_ignore '${subdir}ignored-and-untracked'
- "
+ ".gitignore:2:ignored-* ${subdir}ignored-and-untracked" \
+ "test_check_ignore '${subdir}ignored-and-untracked'"
+
+ test_expect_success_multi "mix of file types $where" \
+":: ${subdir}non-existent
+.gitignore:1:one ${subdir}one
+:: ${subdir}not-ignored
+:: ${subdir}ignored-but-in-index
+.gitignore:2:ignored-* ${subdir}ignored-and-untracked" \
+ "test_check_ignore '
+ ${subdir}non-existent
+ ${subdir}one
+ ${subdir}not-ignored
+ ${subdir}ignored-but-in-index
+ ${subdir}ignored-and-untracked'
+ "
done
# Having established the above, from now on we mostly test against
#
# test handling of symlinks
-test_expect_success_multi SYMLINKS 'symlink' '' '
+test_expect_success_multi SYMLINKS 'symlink' ':: a/symlink' '
test_check_ignore "a/symlink" 1
'
globaltwo
b/globaltwo
../b/globaltwo
+ c/not-ignored
EOF
-cat <<-\EOF >expected-default
- ../one
- one
- b/on
- b/one
- b/one one
- b/one two
- "b/one\"three"
- b/two
- b/twooo
- ../globaltwo
- globaltwo
- b/globaltwo
- ../b/globaltwo
-EOF
-cat <<-EOF >expected-verbose
+# N.B. we deliberately end STDIN with a non-matching pattern in order
+# to test that the exit code indicates that one or more of the
+# provided paths is ignored - in other words, that it represents an
+# aggregation of all the results, not just the final result.
+
+cat <<-EOF >expected-all
.gitignore:1:one ../one
+ :: ../not-ignored
.gitignore:1:one one
+ :: not-ignored
a/b/.gitignore:8:!on* b/on
a/b/.gitignore:8:!on* b/one
a/b/.gitignore:8:!on* b/one one
a/b/.gitignore:8:!on* b/one two
a/b/.gitignore:8:!on* "b/one\"three"
a/b/.gitignore:9:!two b/two
+ :: b/not-ignored
a/.gitignore:1:two* b/twooo
$global_excludes:2:!globaltwo ../globaltwo
$global_excludes:2:!globaltwo globaltwo
$global_excludes:2:!globaltwo b/globaltwo
$global_excludes:2:!globaltwo ../b/globaltwo
+ :: c/not-ignored
EOF
+grep -v '^:: ' expected-all >expected-verbose
+sed -e 's/.* //' expected-verbose >expected-default
sed -e 's/^"//' -e 's/\\//' -e 's/"$//' stdin | \
tr "\n" "\0" >stdin0
)
'
+test_expect_success '--stdin from subdirectory with -v -n' '
+ expect_from_stdin <expected-all &&
+ (
+ cd a &&
+ test_check_ignore "--stdin -v -n" <../stdin
+ )
+'
+
for opts in '--stdin -z' '-z --stdin'
do
test_expect_success "$opts from subdirectory" '
'
done
+test_expect_success PIPE 'streaming support for --stdin' '
+ mkfifo in out &&
+ (git check-ignore -n -v --stdin <in >out &) &&
+
+ # We cannot just "echo >in" because check-ignore would get EOF
+ # after echo exited; instead we open the descriptor in our
+ # shell, and then echo to the fd. We make sure to close it at
+ # the end, so that the subprocess does get EOF and dies
+ # properly.
+ exec 9>in &&
+ test_when_finished "exec 9>&-" &&
+ echo >&9 one &&
+ read response <out &&
+ echo "$response" | grep "^\.gitignore:1:one one" &&
+ echo >&9 two &&
+ read response <out &&
+ echo "$response" | grep "^:: two"
+'
test_done
test_commit B &&
git checkout A &&
test_commit C &&
+ test_commit D &&
git branch -f master B &&
git branch -f other &&
git checkout other &&
git cat-file commit HEAD | grep "Merge branch '\''other'\''"
'
-test_expect_success 'merge @{-1} when there is not enough switches yet' '
+test_expect_success 'merge @{-1}~1' '
+ git checkout master &&
+ git reset --hard B &&
+ git checkout other &&
+ git checkout master &&
+ git merge @{-1}~1 &&
+ git cat-file commit HEAD >actual &&
+ grep "Merge branch '\''other'\''" actual
+'
+
+test_expect_success 'merge @{-100} before checking out that many branches yet' '
git reflog expire --expire=now &&
git checkout -f master &&
git reset --hard B &&
git branch -f other C &&
git checkout other &&
git checkout master &&
- test_must_fail git merge @{-12}
+ test_must_fail git merge @{-100}
'
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='checkout <branch>
+
+Ensures that checkout on an unborn branch does what the user expects'
+
+. ./test-lib.sh
+
+# Is the current branch "refs/heads/$1"?
+test_branch () {
+ printf "%s\n" "refs/heads/$1" >expect.HEAD &&
+ git symbolic-ref HEAD >actual.HEAD &&
+ test_cmp expect.HEAD actual.HEAD
+}
+
+# Is branch "refs/heads/$1" set to pull from "$2/$3"?
+test_branch_upstream () {
+ printf "%s\n" "$2" "refs/heads/$3" >expect.upstream &&
+ {
+ git config "branch.$1.remote" &&
+ git config "branch.$1.merge"
+ } >actual.upstream &&
+ test_cmp expect.upstream actual.upstream
+}
+
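+# "git checkout <branch>" with no existing local <branch> falls back to
+# creating one from a same-named remote-tracking branch, but only when
+# exactly one remote has such a branch; the tests below exercise the
+# unique-match case, the ambiguous case, and --no-guess.
+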
+test_expect_success 'setup' '
+ test_commit my_master &&
+ git init repo_a &&
+ (
+ cd repo_a &&
+ test_commit a_master &&
+ git checkout -b foo &&
+ test_commit a_foo &&
+ git checkout -b bar &&
+ test_commit a_bar
+ ) &&
+ git init repo_b &&
+ (
+ cd repo_b &&
+ test_commit b_master &&
+ git checkout -b foo &&
+ test_commit b_foo &&
+ git checkout -b baz &&
+ test_commit b_baz
+ ) &&
+ git remote add repo_a repo_a &&
+ git remote add repo_b repo_b &&
+ git config remote.repo_b.fetch \
+ "+refs/heads/*:refs/remotes/other_b/*" &&
+ git fetch --all
+'
+
+test_expect_success 'checkout of non-existing branch fails' '
+ git checkout -B master &&
+ test_might_fail git branch -D xyzzy &&
+
+ test_must_fail git checkout xyzzy &&
+ test_must_fail git rev-parse --verify refs/heads/xyzzy &&
+ test_branch master
+'
+
+test_expect_success 'checkout of branch from multiple remotes fails #1' '
+ git checkout -B master &&
+ test_might_fail git branch -D foo &&
+
+ test_must_fail git checkout foo &&
+ test_must_fail git rev-parse --verify refs/heads/foo &&
+ test_branch master
+'
+
+test_expect_success 'checkout of branch from a single remote succeeds #1' '
+ git checkout -B master &&
+ test_might_fail git branch -D bar &&
+
+ git checkout bar &&
+ test_branch bar &&
+ test_cmp_rev remotes/repo_a/bar HEAD &&
+ test_branch_upstream bar repo_a bar
+'
+
+test_expect_success 'checkout of branch from a single remote succeeds #2' '
+ git checkout -B master &&
+ test_might_fail git branch -D baz &&
+
+ git checkout baz &&
+ test_branch baz &&
+ test_cmp_rev remotes/other_b/baz HEAD &&
+ test_branch_upstream baz repo_b baz
+'
+
+test_expect_success '--no-guess suppresses branch auto-vivification' '
+ git checkout -B master &&
+ test_might_fail git branch -D bar &&
+
+ test_must_fail git checkout --no-guess bar &&
+ test_must_fail git rev-parse --verify refs/heads/bar &&
+ test_branch master
+'
+
+test_expect_success 'setup more remotes with unconventional refspecs' '
+ git checkout -B master &&
+ git init repo_c &&
+ (
+ cd repo_c &&
+ test_commit c_master &&
+ git checkout -b bar &&
+		test_commit c_bar &&
+ git checkout -b spam &&
+ test_commit c_spam
+ ) &&
+ git init repo_d &&
+ (
+ cd repo_d &&
+ test_commit d_master &&
+ git checkout -b baz &&
+		test_commit f_baz &&
+ git checkout -b eggs &&
+ test_commit c_eggs
+ ) &&
+ git remote add repo_c repo_c &&
+ git config remote.repo_c.fetch \
+ "+refs/heads/*:refs/remotes/extra_dir/repo_c/extra_dir/*" &&
+ git remote add repo_d repo_d &&
+ git config remote.repo_d.fetch \
+ "+refs/heads/*:refs/repo_d/*" &&
+ git fetch --all
+'
+
+test_expect_success 'checkout of branch from multiple remotes fails #2' '
+ git checkout -B master &&
+ test_might_fail git branch -D bar &&
+
+ test_must_fail git checkout bar &&
+ test_must_fail git rev-parse --verify refs/heads/bar &&
+ test_branch master
+'
+
+test_expect_success 'checkout of branch from multiple remotes fails #3' '
+ git checkout -B master &&
+ test_might_fail git branch -D baz &&
+
+ test_must_fail git checkout baz &&
+ test_must_fail git rev-parse --verify refs/heads/baz &&
+ test_branch master
+'
+
+test_expect_success 'checkout of branch from a single remote succeeds #3' '
+ git checkout -B master &&
+ test_might_fail git branch -D spam &&
+
+ git checkout spam &&
+ test_branch spam &&
+ test_cmp_rev refs/remotes/extra_dir/repo_c/extra_dir/spam HEAD &&
+ test_branch_upstream spam repo_c spam
+'
+
+test_expect_success 'checkout of branch from a single remote succeeds #4' '
+ git checkout -B master &&
+ test_might_fail git branch -D eggs &&
+
+ git checkout eggs &&
+ test_branch eggs &&
+ test_cmp_rev refs/repo_d/eggs HEAD &&
+ test_branch_upstream eggs repo_d eggs
+'
+
+test_done
test $(git config branch.my4.merge) = refs/heads/master
'
-test_expect_success 'test tracking setup (non-wildcard, not matching)' '
+test_expect_success 'tracking setup fails on non-matching refspec' '
git config remote.local.url . &&
git config remote.local.fetch refs/heads/s:refs/remotes/local/s &&
(git show-ref -q refs/remotes/local/master || git fetch local) &&
- git branch --track my5 local/master &&
- ! test "$(git config branch.my5.remote)" = local &&
- ! test "$(git config branch.my5.merge)" = refs/heads/master
+ test_must_fail git branch --track my5 local/master &&
+ test_must_fail git config branch.my5.remote &&
+ test_must_fail git config branch.my5.merge
'
test_expect_success 'test tracking setup via config' '
test_cmp all-of-them again
'
+test_expect_success 'explicit pack-refs with dangling packed reference' '
+ git commit --allow-empty -m "soon to be garbage-collected" &&
+ git pack-refs --all &&
+ git reset --hard HEAD^ &&
+ git reflog expire --expire=all --all &&
+ git prune --expire=all &&
+ git pack-refs --all 2>result &&
+ test_cmp /dev/null result
+'
+
+test_expect_success 'delete ref with dangling packed version' '
+ git checkout -b lamb &&
+ git commit --allow-empty -m "future garbage" &&
+ git pack-refs --all &&
+ git reset --hard HEAD^ &&
+ git checkout master &&
+ git reflog expire --expire=all --all &&
+ git prune --expire=all &&
+ git branch -d lamb 2>result &&
+ test_cmp /dev/null result
+'
+
+test_expect_success 'delete ref while another dangling packed ref exists' '
+ git branch lamb &&
+ git commit --allow-empty -m "future garbage" &&
+ git pack-refs --all &&
+ git reset --hard HEAD^ &&
+ git reflog expire --expire=all --all &&
+ git prune --expire=all &&
+ git branch -d lamb 2>result &&
+ test_cmp /dev/null result
+'
+
test_done
test_cmp expect actual
'
+test_expect_success 'peeled refs survive deletion of packed ref' '
+ git pack-refs --all &&
+ cp .git/packed-refs fully-peeled &&
+ git branch yadda &&
+ git pack-refs --all &&
+ git branch -d yadda &&
+ test_cmp fully-peeled .git/packed-refs
+'
+
test_done
compare_diff_patch expected actual
'
+# Test for a bug reported at
+# http://thread.gmane.org/gmane.comp.version-control.git/224410
+# where deleted lines were missing from combined diff output when they
+# occurred exactly before the context lines of a later change.
+test_expect_success 'combine diff missing delete bug' '
+ git commit -m initial --allow-empty &&
+ cat <<-\EOF >test &&
+ 1
+ 2
+ 3
+ 4
+ EOF
+ git add test &&
+ git commit -a -m side1 &&
+ git checkout -B side1 &&
+ git checkout HEAD^ &&
+ cat <<-\EOF >test &&
+ 0
+ 1
+ 2
+ 3
+ 4modified
+ EOF
+ git add test &&
+ git commit -m side2 &&
+ git branch -f side2 &&
+ test_must_fail git merge --no-commit side1 &&
+ cat <<-\EOF >test &&
+ 1
+ 2
+ 3
+ 4modified
+ EOF
+ git add test &&
+ git commit -a -m merge &&
+ git diff-tree -c -p HEAD >actual.tmp &&
+ sed -e "1,/^@@@/d" < actual.tmp >actual &&
+ tr -d Q <<-\EOF >expected &&
+ - 0
+ 1
+ 2
+ 3
+ -4
+ +4modified
+ EOF
+ compare_diff_patch expected actual
+'
+
test_done
SUBSTFORMAT=%H%n
+test_lazy_prereq TAR_NEEDS_PAX_FALLBACK '
+ (
+ mkdir pax &&
+ cd pax &&
+ "$TAR" xf "$TEST_DIRECTORY"/t5000/pax.tar &&
+ test -f PaxHeaders.1791/file
+ )
+'
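+
+# Some tar implementations do not understand pax extended headers and
+# instead extract each one as a regular file named "PaxHeaders.<pid>/<name>".
+# When TAR_NEEDS_PAX_FALLBACK is set, check_tar below compensates by
+# moving each extracted data file to the path recorded in its pax header.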
+
+get_pax_header() {
+ file=$1
+ header=$2=
+
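+	# Pax extended header data consists of records of the form
+	# "<len> <key>=<value>\n", where <len> is the length of the whole
+	# record including the length field itself, the separating space
+	# and the trailing newline.  Print the value of the requested key.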
+ while read len rest
+ do
+ if test "$len" = $(echo "$len $rest" | wc -c)
+ then
+ case "$rest" in
+ $header*)
+ echo "${rest#$header}"
+ ;;
+ esac
+ fi
+ done <"$file"
+}
+
+check_tar() {
+ tarfile=$1.tar
+ listfile=$1.lst
+ dir=$1
+ dir_with_prefix=$dir/$2
+
+ test_expect_success ' extract tar archive' '
+ (mkdir $dir && cd $dir && "$TAR" xf -) <$tarfile
+ '
+
+ test_expect_success TAR_NEEDS_PAX_FALLBACK ' interpret pax headers' '
+ (
+ cd $dir &&
+ for header in *.paxheader
+ do
+ data=${header%.paxheader}.data &&
+ if test -h $data -o -e $data
+ then
+ path=$(get_pax_header $header path) &&
+ if test -n "$path"
+ then
+ mv "$data" "$path"
+ fi
+ fi
+ done
+ )
+ '
+
+ test_expect_success ' validate filenames' '
+ (cd ${dir_with_prefix}a && find .) | sort >$listfile &&
+ test_cmp a.lst $listfile
+ '
+
+ test_expect_success ' validate file contents' '
+ diff -r a ${dir_with_prefix}a
+ '
+}
+
test_expect_success \
'populate workdir' \
- 'mkdir a b c &&
+ 'mkdir a &&
echo simple textfile >a/a &&
+ ten=0123456789 && hundred=$ten$ten$ten$ten$ten$ten$ten$ten$ten$ten &&
+ echo long filename >a/four$hundred &&
mkdir a/bin &&
cp /bin/sh a/bin &&
printf "A\$Format:%s\$O" "$SUBSTFORMAT" >a/substfile1 &&
git update-ref HEAD $(TZ=GMT GIT_COMMITTER_DATE="2005-05-27 22:00:00" \
git commit-tree $treeid </dev/null)'
+test_expect_success 'setup export-subst' '
+ echo "substfile?" export-subst >>.git/info/attributes &&
+ git log --max-count=1 "--pretty=format:A${SUBSTFORMAT}O" HEAD \
+ >a/substfile1
+'
+
test_expect_success \
'create bare clone' \
'git clone --bare . bare.git &&
'git archive' \
'git archive HEAD >b.tar'
-test_expect_success \
- 'git tar-tree' \
- 'git tar-tree HEAD >b2.tar'
+check_tar b
-test_expect_success \
- 'git archive vs. git tar-tree' \
- 'test_cmp b.tar b2.tar'
+test_expect_success 'git archive --prefix=prefix/' '
+ git archive --prefix=prefix/ HEAD >with_prefix.tar
+'
+
+check_tar with_prefix prefix/
+
+test_expect_success 'git-archive --prefix=olde-' '
+ git archive --prefix=olde- HEAD >with_olde-prefix.tar
+'
+
+check_tar with_olde-prefix olde-
test_expect_success 'git archive on large files' '
test_config core.bigfilethreshold 1 &&
'git get-tar-commit-id <b.tar >b.commitid &&
test_cmp .git/$(git symbolic-ref HEAD) b.commitid'
-test_expect_success \
- 'extract tar archive' \
- '(cd b && "$TAR" xf -) <b.tar'
-
-test_expect_success \
- 'validate filenames' \
- '(cd b/a && find .) | sort >b.lst &&
- test_cmp a.lst b.lst'
-
-test_expect_success \
- 'validate file contents' \
- 'diff -r a b/a'
-
-test_expect_success \
- 'git tar-tree with prefix' \
- 'git tar-tree HEAD prefix >c.tar'
-
-test_expect_success \
- 'extract tar archive with prefix' \
- '(cd c && "$TAR" xf -) <c.tar'
-
-test_expect_success \
- 'validate filenames with prefix' \
- '(cd c/prefix/a && find .) | sort >c.lst &&
- test_cmp a.lst c.lst'
-
-test_expect_success \
- 'validate file contents with prefix' \
- 'diff -r a c/prefix/a'
-
-test_expect_success \
- 'create archives with substfiles' \
- 'cp .git/info/attributes .git/info/attributes.before &&
- echo "substfile?" export-subst >>.git/info/attributes &&
- git archive HEAD >f.tar &&
- git archive --prefix=prefix/ HEAD >g.tar &&
- mv .git/info/attributes.before .git/info/attributes'
-
-test_expect_success \
- 'extract substfiles' \
- '(mkdir f && cd f && "$TAR" xf -) <f.tar'
-
-test_expect_success \
- 'validate substfile contents' \
- 'git log --max-count=1 "--pretty=format:A${SUBSTFORMAT}O" HEAD \
- >f/a/substfile1.expected &&
- test_cmp f/a/substfile1.expected f/a/substfile1 &&
- test_cmp a/substfile2 f/a/substfile2
+test_expect_success 'git tar-tree' '
+ git tar-tree HEAD >tar-tree.tar &&
+ test_cmp b.tar tar-tree.tar
'
-test_expect_success \
- 'extract substfiles from archive with prefix' \
- '(mkdir g && cd g && "$TAR" xf -) <g.tar'
-
-test_expect_success \
- 'validate substfile contents from archive with prefix' \
- 'git log --max-count=1 "--pretty=format:A${SUBSTFORMAT}O" HEAD \
- >g/prefix/a/substfile1.expected &&
- test_cmp g/prefix/a/substfile1.expected g/prefix/a/substfile1 &&
- test_cmp a/substfile2 g/prefix/a/substfile2
+test_expect_success 'git tar-tree with prefix' '
+ git tar-tree HEAD prefix >tar-tree_with_prefix.tar &&
+ test_cmp with_prefix.tar tar-tree_with_prefix.tar
'
test_expect_success 'git archive with --output, override inferred format' '
test_must_fail git archive --remote=. $sha1 >remote.tar
'
-test_expect_success 'git-archive --prefix=olde-' '
- git archive --prefix=olde- >h.tar HEAD &&
- (
- mkdir h &&
- cd h &&
- "$TAR" xf - <../h.tar
- ) &&
- test -d h/olde-a &&
- test -d h/olde-a/bin &&
- test -f h/olde-a/bin/sh
-'
-
test_expect_success 'setup tar filters' '
git config tar.tar.foo.command "tr ab ba" &&
git config tar.bar.command "tr ab ba" &&
test_expect_success \
'populate workdir' \
- 'mkdir a b c &&
+ 'mkdir a &&
echo simple textfile >a/a &&
mkdir a/bin &&
cp /bin/sh a/bin &&
test_cmp expect actual
}
+
+# bsdtar/libarchive versions before 3.1.3 consider a tar file with a
+# global pax header that is not followed by a file record as corrupt.
+if "$TAR" tf "$TEST_DIRECTORY"/t5004/empty-with-pax-header.tar >/dev/null 2>&1
+then
+ test_set_prereq HEADER_ONLY_TAR_OK
+fi
+
+test_expect_success HEADER_ONLY_TAR_OK 'tar archive of commit with empty tree' '
+ git archive --format=tar HEAD >empty-with-pax-header.tar &&
+ make_dir extract &&
+ "$TAR" xf empty-with-pax-header.tar -C extract &&
+ check_dir extract
+'
+
test_expect_success 'tar archive of empty tree is empty' '
git archive --format=tar HEAD: >empty.tar &&
perl -e "print \"\\0\" x 10240" >10knuls.tar &&
test_cmp count8.expected count8.actual
'
+test_expect_success 'fetch in shallow repo with unreachable shallow objects' '
+ (
+ git clone --bare --branch B --single-branch "file://$(pwd)/." no-reflog &&
+ git clone --depth 1 "file://$(pwd)/no-reflog" shallow9 &&
+ cd no-reflog &&
+ git tag -d TAGB1 TAGB2 &&
+ git update-ref refs/heads/B B~~ &&
+ git gc --prune=now &&
+ cd ../shallow9 &&
+ git fetch origin &&
+ git fsck --no-dangling
+ )
+'
+
test_expect_success 'setup tests for the --stdin parameter' '
for head in C D E F
do
'
-test_expect_success 'explicit fetch should not update tracking' '
+test_expect_success 'mark initial state of origin/master' '
+ (
+ cd three &&
+ git tag base-origin-master refs/remotes/origin/master
+ )
+'
+
+test_expect_success 'explicit fetch should update tracking' '
cd "$D" &&
git branch -f side &&
(
cd three &&
+ git update-ref refs/remotes/origin/master base-origin-master &&
o=$(git rev-parse --verify refs/remotes/origin/master) &&
git fetch origin master &&
n=$(git rev-parse --verify refs/remotes/origin/master) &&
- test "$o" = "$n" &&
+ test "$o" != "$n" &&
test_must_fail git rev-parse --verify refs/remotes/origin/side
)
'
-test_expect_success 'explicit pull should not update tracking' '
+test_expect_success 'explicit pull should update tracking' '
cd "$D" &&
git branch -f side &&
(
cd three &&
+ git update-ref refs/remotes/origin/master base-origin-master &&
o=$(git rev-parse --verify refs/remotes/origin/master) &&
git pull origin master &&
n=$(git rev-parse --verify refs/remotes/origin/master) &&
- test "$o" = "$n" &&
+ test "$o" != "$n" &&
test_must_fail git rev-parse --verify refs/remotes/origin/side
)
'
git branch -f side &&
(
cd three &&
+ git update-ref refs/remotes/origin/master base-origin-master &&
o=$(git rev-parse --verify refs/remotes/origin/master) &&
git fetch origin &&
n=$(git rev-parse --verify refs/remotes/origin/master) &&
)
'
+test_expect_success 'non-matching refspecs do not confuse tracking update' '
+ cd "$D" &&
+ git update-ref refs/odd/location HEAD &&
+ (
+ cd three &&
+ git update-ref refs/remotes/origin/master base-origin-master &&
+ git config --add remote.origin.fetch \
+ refs/odd/location:refs/remotes/origin/odd &&
+ o=$(git rev-parse --verify refs/remotes/origin/master) &&
+ git fetch origin master &&
+ n=$(git rev-parse --verify refs/remotes/origin/master) &&
+ test "$o" != "$n" &&
+ test_must_fail git rev-parse --verify refs/remotes/origin/odd
+ )
+'
+
test_expect_success 'pushing nonexistent branch by mistake should not segv' '
cd "$D" &&
# now assign tags to all the dangling commits we created above
tag=$("$PERL_PATH" -e "print \"bla\" x 30") &&
- sed -e "s/^:\(.\+\) \(.\+\)$/\2 refs\/tags\/$tag-\1/" <marks >>packed-refs
+ sed -e "s|^:\([^ ]*\) \(.*\)$|\2 refs/tags/$tag-\1|" <marks >>packed-refs
)
'
test_expect_success EXPENSIVE 'clone the 50,000 tag repo to check OS command line overflow' '
git clone $HTTPD_URL/smart/repo.git too-many-refs 2>err &&
- test_line_count = 0 err
+ test_line_count = 0 err &&
+ (
+ cd too-many-refs &&
+ test $(git for-each-ref refs/tags | wc -l) = 50000
+ )
'
stop_httpd
test_cmp fetch.expected fetch.actual
'
+test_expect_success NOT_MINGW,NOT_CYGWIN 'clone local path foo:bar' '
+ cp -R src "foo:bar" &&
+ git clone "./foo:bar" foobar
+'
+
test_done
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-gpg.sh
-if ! type "${BASH-bash}" >/dev/null 2>&1; then
- skip_all='skipping remote-testgit tests, bash not available'
- test_done
-fi
-
compare_refs() {
git --git-dir="$1/.git" rev-parse --verify $2 >expect &&
git --git-dir="$3/.git" rev-parse --verify $4 >actual &&
test_expect_success 'cloning without refspec' '
GIT_REMOTE_TESTGIT_REFSPEC="" \
- git clone "testgit::${PWD}/server" local2 &&
+ git clone "testgit::${PWD}/server" local2 2>error &&
+ grep "This remote helper should implement refspec capability" error &&
compare_refs local2 HEAD server HEAD
'
test_expect_success 'pulling without refspecs' '
(cd local2 &&
git reset --hard &&
- GIT_REMOTE_TESTGIT_REFSPEC="" git pull) &&
+ GIT_REMOTE_TESTGIT_REFSPEC="" git pull 2>../error) &&
+ grep "This remote helper should implement refspec capability" error &&
compare_refs local2 HEAD server HEAD
'
-test_expect_failure 'pushing without refspecs' '
+test_expect_success 'pushing without refspecs' '
test_when_finished "(cd local2 && git reset --hard origin)" &&
(cd local2 &&
echo content >>file &&
git commit -a -m ten &&
- GIT_REMOTE_TESTGIT_REFSPEC="" git push) &&
- compare_refs local2 HEAD server HEAD
-'
-
-test_expect_success 'pulling with straight refspec' '
- (cd local2 &&
- GIT_REMOTE_TESTGIT_REFSPEC="*:*" git pull) &&
- compare_refs local2 HEAD server HEAD
-'
-
-test_expect_failure 'pushing with straight refspec' '
- test_when_finished "(cd local2 && git reset --hard origin)" &&
- (cd local2 &&
- echo content >>file &&
- git commit -a -m eleven &&
- GIT_REMOTE_TESTGIT_REFSPEC="*:*" git push) &&
- compare_refs local2 HEAD server HEAD
+ GIT_REMOTE_TESTGIT_REFSPEC="" &&
+ export GIT_REMOTE_TESTGIT_REFSPEC &&
+ test_must_fail git push 2>../error) &&
+ grep "remote-helper doesn.t support push; refspec needed" error
'
test_expect_success 'pulling without marks' '
compare_refs local signed-tag-2 server signed-tag-2
'
+test_expect_success 'push update refs' '
+ (cd local &&
+ git checkout -b update master &&
+ echo update >>file &&
+ git commit -a -m update &&
+ git push origin update &&
+ git rev-parse --verify remotes/origin/update >expect &&
+ git rev-parse --verify testgit/origin/heads/update >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'push update refs failure' '
+ (cd local &&
+ git checkout update &&
+ echo "update fail" >>file &&
+ git commit -a -m "update fail" &&
+ git rev-parse --verify testgit/origin/heads/update >expect &&
+ GIT_REMOTE_TESTGIT_PUSH_ERROR="non-fast forward" &&
+ export GIT_REMOTE_TESTGIT_PUSH_ERROR &&
+ test_expect_code 1 git push origin update &&
+ git rev-parse --verify testgit/origin/heads/update >actual &&
+ test_cmp expect actual
+ )
+'
+
+test_expect_success 'proper failure checks for fetching' '
+ (GIT_REMOTE_TESTGIT_FAILURE=1 &&
+ export GIT_REMOTE_TESTGIT_FAILURE &&
+ cd local &&
+ test_must_fail git fetch 2> error &&
+ cat error &&
+ grep -q "Error while running fast-import" error
+ )
+'
+
+test_expect_success 'proper failure checks for pushing' '
+ (GIT_REMOTE_TESTGIT_FAILURE=1 &&
+ export GIT_REMOTE_TESTGIT_FAILURE &&
+ cd local &&
+ test_must_fail git push --all 2> error &&
+ cat error &&
+ grep -q "Reading from helper .git-remote-testgit. failed" error
+ )
+'
+
test_expect_success 'push messages' '
(cd local &&
git checkout -b new_branch master &&
#
# D..M -- M.t == M
# --ancestry-path D..M -- M.t == M
+#
+# F...I == F G H I
+# --ancestry-path F...I == F H I
. ./test-lib.sh
test_cmp expect actual
'
-test_expect_success 'rev-list --ancestry-patch D..M -- M.t' '
+test_expect_success 'rev-list --ancestry-path D..M -- M.t' '
echo M >expect &&
git rev-list --ancestry-path --format=%s D..M -- M.t |
sed -e "/^commit /d" >actual &&
test_cmp expect actual
'
+test_expect_success 'rev-list F...I' '
+ for c in F G H I; do echo $c; done >expect &&
+ git rev-list --format=%s F...I |
+ sed -e "/^commit /d" |
+ sort >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'rev-list --ancestry-path F...I' '
+ for c in F H I; do echo $c; done >expect &&
+ git rev-list --ancestry-path --format=%s F...I |
+ sed -e "/^commit /d" |
+ sort >actual &&
+ test_cmp expect actual
+'
+
# b---bc
# / \ /
# a X
test_expect_success \
'checkout with --track fakes a sensible -b <name>' '
+ git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*" &&
git update-ref refs/remotes/origin/koala/bear renamer &&
git checkout --track origin/koala/bear &&
test_expect_success 'setup git mirror and merge' '
git svn init "$svnrepo" -t tags -T trunk -b branches &&
git svn fetch &&
- git checkout --track -b svn remotes/trunk &&
+ git checkout -b svn remotes/trunk &&
git checkout -b merge &&
echo new file > new_file &&
git add new_file &&
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2013 Tobias Schulte
+#
+
+test_description='git svn branch for subproject clones'
+. ./lib-git-svn.sh
+
+test_expect_success 'initialize svnrepo' '
+ mkdir import &&
+ (
+ cd import &&
+ mkdir -p trunk/project branches tags &&
+ (
+ cd trunk/project &&
+ echo foo > foo
+ ) &&
+ svn_cmd import -m "import for git-svn" . "$svnrepo" >/dev/null
+ ) &&
+ rm -rf import &&
+ svn_cmd co "$svnrepo"/trunk/project trunk/project &&
+ (
+ cd trunk/project &&
+ echo bar >> foo &&
+ svn_cmd ci -m "updated trunk"
+ ) &&
+ rm -rf trunk
+'
+
+test_expect_success 'import into git' '
+ git svn init --trunk=trunk/project --branches=branches/*/project \
+ --tags=tags/*/project "$svnrepo" &&
+ git svn fetch &&
+ git checkout remotes/trunk
+'
+
+test_expect_success 'git svn branch tests' '
+ test_must_fail git svn branch a &&
+ git svn branch --parents a &&
+ test_must_fail git svn branch -t tag1 &&
+ git svn branch --parents -t tag1 &&
+ test_must_fail git svn branch --tag tag2 &&
+ git svn branch --parents --tag tag2 &&
+ test_must_fail git svn tag tag3 &&
+ git svn tag --parents tag3
+'
+
+test_done
test_completion "git send-email ma" "master "
'
+test_expect_success 'complete files' '
+ git init tmp && cd tmp &&
+ test_when_finished "cd .. && rm -rf tmp" &&
+
+ echo "expected" > .gitignore &&
+ echo "out" >> .gitignore &&
+
+ git add .gitignore &&
+ test_completion "git commit " ".gitignore" &&
+
+ git commit -m ignore &&
+
+ touch new &&
+ test_completion "git add " "new" &&
+
+ git add new &&
+ git commit -a -m new &&
+ test_completion "git add " "" &&
+
+ git mv new modified &&
+ echo modify > modified &&
+ test_completion "git add " "modified" &&
+
+ touch untracked &&
+
+ : TODO .gitignore should not be here &&
+ test_completion "git rm " <<-\EOF &&
+ .gitignore
+ modified
+ EOF
+
+ test_completion "git clean " "untracked" &&
+
+ : TODO .gitignore should not be here &&
+ test_completion "git mv " <<-\EOF &&
+ .gitignore
+ modified
+ EOF
+
+ mkdir dir &&
+ touch dir/file-in-dir &&
+ git add dir/file-in-dir &&
+ git commit -m dir &&
+
+ mkdir untracked-dir &&
+
+ : TODO .gitignore should not be here &&
+ test_completion "git mv modified " <<-\EOF &&
+ .gitignore
+ dir
+ modified
+ untracked
+ untracked-dir
+ EOF
+
+ test_completion "git commit " "modified" &&
+
+ : TODO .gitignore should not be here &&
+ test_completion "git ls-files " <<-\EOF
+ .gitignore
+ dir
+ modified
+ EOF
+
+ touch momified &&
+ test_completion "git add mom" "momified"
+'
+
+test_expect_failure 'complete with tilde expansion' '
+ git init tmp && cd tmp &&
+ test_when_finished "cd .. && rm -rf tmp" &&
+
+ touch ~/tmp/file &&
+
+ test_completion "git add ~/tmp/" "~/tmp/file"
+'
+
test_done
# do not redirect again
;;
*' --tee '*|*' --va'*)
- mkdir -p test-results
- BASE=test-results/$(basename "$0" .sh)
+ mkdir -p "$TEST_OUTPUT_DIRECTORY/test-results"
+ BASE="$TEST_OUTPUT_DIRECTORY/test-results/$(basename "$0" .sh)"
(GIT_TEST_TEE_STARTED=done ${SHELL_PATH} "$0" "$@" 2>&1;
echo $? > $BASE.exit) | tee $BASE.out
test "$(cat $BASE.exit)" = 0
#!/bin/sh
-out_prefix=$(dirname "$0")/../test-results/valgrind.out
+# Get TEST_OUTPUT_DIRECTORY from GIT-BUILD-OPTIONS if it's there...
+. "$(dirname "$0")/../../GIT-BUILD-OPTIONS"
+# ... otherwise set it to the default value.
+: ${TEST_OUTPUT_DIRECTORY=$(dirname "$0")/..}
+
output=
count=0
total_count=0
finish_output
}
-for test_script in "$(dirname "$0")"/../test-results/*.out
+for test_script in "$TEST_OUTPUT_DIRECTORY"/test-results/*.out
do
handle_one $test_script
done
return 1;
}
-int main(int argc, const char *argv[])
+int main(int argc, char *argv[])
{
static int verbose;
#include "cache.h"
-int main(int argc, const char **argv)
+int main(int argc, char **argv)
{
struct cache_header hdr;
int version;
return strcmp(x->text, y->text);
}
-int main(int argc, const char **argv)
+int main(int argc, char **argv)
{
struct line *line, *p = NULL, *lines = NULL;
struct strbuf sb = STRBUF_INIT;
return 0;
}
-int main(int argc, const char **argv)
+int main(int argc, char **argv)
{
const char *prefix = "prefix/";
const char *usage[] = {
};
int i;
- argc = parse_options(argc, argv, prefix, options, usage, 0);
+ argc = parse_options(argc, (const char **)argv, prefix, options, usage, 0);
printf("boolean: %d\n", boolean);
printf("integer: %u\n", integer);
#include "cache.h"
#include "run-command.h"
-int main(int argc, const char **argv)
+int main(int argc, char **argv)
{
struct child_process cp;
int nogit = 0;
}
memset(&cp, 0, sizeof(cp));
cp.git_cmd = 1;
- cp.argv = argv + 1;
+ cp.argv = (const char **)argv + 1;
return run_command(&cp);
}
#include "thread-utils.h"
#include "sigchain.h"
#include "argv-array.h"
+#include "refs.h"
static int debug;
die_errno("Full write to remote helper failed");
}
-static int recvline_fh(FILE *helper, struct strbuf *buffer)
+static int recvline_fh(FILE *helper, struct strbuf *buffer, const char *name)
{
strbuf_reset(buffer);
if (debug)
if (strbuf_getline(buffer, helper, '\n') == EOF) {
if (debug)
fprintf(stderr, "Debug: Remote helper quit.\n");
- exit(128);
+ die("Reading from helper 'git-remote-%s' failed", name);
}
if (debug)
static int recvline(struct helper_data *helper, struct strbuf *buffer)
{
- return recvline_fh(helper->out, buffer);
+ return recvline_fh(helper->out, buffer, helper->name);
}
static void xchgline(struct helper_data *helper, struct strbuf *buffer)
for (i = 0; i < refspec_nr; i++)
free((char *)refspecs[i]);
free(refspecs);
+ } else if (data->import || data->bidi_import || data->export) {
+ warning("This remote helper should implement refspec capability.");
}
strbuf_release(&buf);
if (debug)
* were fetching.
*
* (If no "refspec" capability was specified, for historical
- * reasons we default to *:*.)
+ * reasons we default to the equivalent of *:*.)
*
* Store the result in to_fetch[i].old_sha1. Callers such
* as "git fetch" can use the value to write feedback to the
goto exit;
sendline(data, &cmdbuf);
- recvline_fh(input, &cmdbuf);
+ recvline_fh(input, &cmdbuf, name);
if (!strcmp(cmdbuf.buf, "")) {
data->no_disconnect_req = 1;
if (debug)
return -1;
}
-static void push_update_ref_status(struct strbuf *buf,
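+/*
+ * Return zero when the helper reported a successful update for this ref
+ * (so the caller may go on to mirror it into the private ref namespace),
+ * and non-zero when the status line should simply be skipped.
+ */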
+static int push_update_ref_status(struct strbuf *buf,
struct ref **ref,
struct ref *remote_refs)
{
*ref = find_ref_by_name(remote_refs, refname);
if (!*ref) {
warning("helper reported unexpected status of %s", refname);
- return;
+ return 1;
}
if ((*ref)->status != REF_STATUS_NONE) {
* status reported by the remote helper if the latter is 'no match'.
*/
if (status == REF_STATUS_NONE)
- return;
+ return 1;
}
(*ref)->status = status;
(*ref)->remote_status = msg;
+ return !(status == REF_STATUS_OK);
}
static void push_update_refs_status(struct helper_data *data,
struct strbuf buf = STRBUF_INIT;
struct ref *ref = remote_refs;
for (;;) {
+ char *private;
+
recvline(data, &buf);
if (!buf.len)
break;
- push_update_ref_status(&buf, &ref, remote_refs);
+ if (push_update_ref_status(&buf, &ref, remote_refs))
+ continue;
+
+ if (!data->refspecs)
+ continue;
+
+ /* propagate back the update to the remote namespace */
+ private = apply_refspecs(data->refspecs, data->refspec_nr, ref->name);
+ if (!private)
+ continue;
+ update_ref("update by helper", private, ref->new_sha1, NULL, 0, 0);
+ free(private);
}
strbuf_release(&buf);
}
struct string_list revlist_args = STRING_LIST_INIT_NODUP;
struct strbuf buf = STRBUF_INIT;
+ if (!data->refspecs)
+ die("remote-helper doesn't support push; refspec needed");
+
helper = get_helper(transport);
write_constant(helper->in, "export\n");
char *private;
unsigned char sha1[20];
- if (!data->refspecs)
- continue;
+ if (ref->deletion)
+ die("remote-helpers do not support ref deletion");
+
private = apply_refspecs(data->refspecs, data->refspec_nr, ref->name);
if (private && !get_sha1(private, sha1)) {
strbuf_addf(&buf, "^%s", private);
}
free(private);
- if (ref->deletion) {
- die("remote-helpers do not support ref deletion");
- }
-
if (ref->peer_ref)
string_list_append(&revlist_args, ref->peer_ref->name);
-
}
if (get_exporter(transport, &exporter, &revlist_args))
die("invalid shallow line: %s", line);
object = parse_object(sha1);
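+	/*
+	 * The object named on the client's "shallow" line may not exist
+	 * on our side at all; skip such entries instead of dying.
+	 */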
if (!object)
- die("did not find object for %s", line);
+ continue;
if (object->type != OBJ_COMMIT)
die("invalid shallow object %s", sha1_to_hex(sha1));
if (!(object->flags & CLIENT_SHALLOW)) {
warning(_("unable to access '%s': %s"), path, strerror(errno));
}
-int access_or_warn(const char *path, int mode)
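+/*
+ * "Not there" errors (ENOENT, ENOTDIR) are always tolerable; EACCES is
+ * tolerated only when the caller passes ACCESS_EACCES_OK, i.e. when an
+ * unreadable path should be treated the same as a missing one.
+ */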
+static int access_error_is_ok(int err, unsigned flag)
+{
+ return err == ENOENT || err == ENOTDIR ||
+ ((flag & ACCESS_EACCES_OK) && err == EACCES);
+}
+
+int access_or_warn(const char *path, int mode, unsigned flag)
{
int ret = access(path, mode);
- if (ret && errno != ENOENT && errno != ENOTDIR)
+ if (ret && !access_error_is_ok(errno, flag))
warn_on_inaccessible(path);
return ret;
}
-int access_or_die(const char *path, int mode)
+int access_or_die(const char *path, int mode, unsigned flag)
{
int ret = access(path, mode);
- if (ret && errno != ENOENT && errno != ENOTDIR)
+ if (ret && !access_error_is_ok(errno, flag))
die_errno(_("unable to access '%s'"), path);
return ret;
}