/config.mak.autogen
/config.mak.append
/configure
+/.vscode/
/tags
/TAGS
/cscope*
ancestor discovery during the "git fetch" transaction.
(merge 42cc7485a2 jt/fetch-negotiator-skipping later to maint).
+ * A new configuration variable core.usereplacerefs has been added,
+ primarily to help server installations that want to ignore the
+ replace mechanism altogether.
+
+ * Teach "git tag -s" etc. a few configuration variables (gpg.format
+ that can be set to "openpgp" or "x509", and gpg.<format>.program
+ that is used to specify what program to use to deal with the format)
+ to allow x.509 certs with CMS via "gpgsm" to be used instead of
+ openpgp via "gnupg".
+
+ * Many more strings are prepared for l10n.
+
+ * "git p4 submit" learns to ask its own pre-submit hook if it should
+ continue with submitting.
+
Performance, Internal Implementation, Development Support etc.
* The singleton commit-graph in-core instance is made per in-core
repository instance.
+ * "make DEVELOPER=1 DEVOPTS=pedantic" allows developers to compile
+ with -pedantic option, which may catch more problematic program
+ constructs and potential bugs.
+
+ * Preparatory code to later add json output for telemetry data has
+ been added.
+
+ * Update the way we use Coccinelle to find out-of-style code that
+   needs to be modernised.
+
+ * It is too easy to misuse system API functions such as strcat();
+ these selected functions are now forbidden in this codebase and
+ will cause a compilation failure.
+
+ * Add a script (in contrib/) to help users of VSCode work better with
+ our codebase.
+
+ * The Travis CI scripts were taught to ship back the test data from
+ failed tests.
+ (merge aea8879a6a sg/travis-retrieve-trash-upon-failure later to maint).
+
Fixes since v2.18
-----------------
* The lazy clone support had a few places where missing but promised
objects were not correctly tolerated, which have been fixed.
+ * One of the "diff --color-moved" modes, "dimmed_zebra", which was
+   named in an unusual way, has been deprecated and replaced by
+   "dimmed-zebra".
+ (merge e3f2f5f9cd es/diff-color-moved-fix later to maint).
+
+ * The wire-protocol v2 relies on the client to send "ref prefixes" to
+ limit the bandwidth spent on the initial ref advertisement. "git
+ clone" when learned to speak v2 forgot to do so, which has been
+ corrected.
+ (merge 402c47d939 bw/clone-ref-prefixes later to maint).
+
+ * "git diff --histogram" had a bad memory usage pattern, which has
+ been rearranged to reduce the peak usage.
+ (merge 79cb2ebb92 sb/histogram-less-memory later to maint).
+
+ * Code clean-up to use size_t/ssize_t when they are the right type.
+ (merge 7726d360b5 jk/size-t later to maint).
+
+ * The wire-protocol v2 relies on the client to send "ref prefixes" to
+ limit the bandwidth spent on the initial ref advertisement. "git
+   fetch $remote branch:branch", which asks for tags that point into
+   the history leading to "branch" to be followed automatically, sent
+   too narrow a prefix and broke the tag following, which has been
+   fixed.
+ (merge 2b554353a5 jt/tag-following-with-proto-v2-fix later to maint).
+
+ * When the sparse checkout feature is in use, "git cherry-pick" and
+ other mergy operations lost the skip_worktree bit when a path that
+   is excluded from checkout required a content-level merge whose
+   result is the same as the HEAD version, without materializing the
+ merge result in the working tree, which made the path appear as
+ deleted. This has been corrected by preserving the skip_worktree
+ bit (and not materializing the file in the working tree).
+ (merge 2b75fb601c en/merge-recursive-skip-fix later to maint).
+
* Code cleanup, docfix, build fix, etc.
(merge aee9be2ebe sg/update-ref-stdin-cleanup later to maint).
(merge 037714252f jc/clean-after-sanity-tests later to maint).
(merge 6aaded5509 tb/config-default later to maint).
(merge 022d2ac1f3 sb/blame-color later to maint).
(merge 5a06a20e0c bp/test-drop-caches-for-windows later to maint).
+ (merge dd61cc1c2e jk/ui-color-always-to-auto later to maint).
+ (merge 1e83b9bfdd sb/trailers-docfix later to maint).
+ (merge ab29f1b329 sg/fast-import-dump-refs-on-checkpoint-fix later to maint).
+ (merge 6a8ad880f0 jn/subtree-test-fixes later to maint).
+ (merge ffbd51cc60 nd/pack-objects-threading-doc later to maint).
+ (merge e9dac7be60 es/mw-to-git-chain-fix later to maint).
+ (merge fe583c6c7a rs/remote-mv-leakfix later to maint).
required. Default is false. See linkgit:git-commit-graph[1]
for details.
+core.useReplaceRefs::
+ If set to `false`, behave as if the `--no-replace-objects`
+ option was given on the command line. See linkgit:git[1] and
+ linkgit:git-replace[1] for more information.
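++
+For example (a sketch), a server operator could disable the replace
+mechanism host-wide with:
++
+----
+git config --system core.useReplaceRefs false
+----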
+
core.sparseCheckout::
Enable "sparse checkout" feature. See section "Sparse checkout" in
linkgit:git-read-tree[1] for more information.
fetch.fsckObjects::
If it is set to true, git-fetch-pack will check all fetched
- objects. It will abort in the case of a malformed object or a
- broken link. The result of an abort are only dangling objects.
- Defaults to false. If not set, the value of `transfer.fsckObjects`
- is used instead.
+ objects. See `transfer.fsckObjects` for what's
+ checked. Defaults to false. If not set, the value of
+ `transfer.fsckObjects` is used instead.
+
+fetch.fsck.<msg-id>::
+ Acts like `fsck.<msg-id>`, but is used by
+ linkgit:git-fetch-pack[1] instead of linkgit:git-fsck[1]. See
+ the `fsck.<msg-id>` documentation for details.
+
+fetch.fsck.skipList::
+ Acts like `fsck.skipList`, but is used by
+ linkgit:git-fetch-pack[1] instead of linkgit:git-fsck[1]. See
+ the `fsck.skipList` documentation for details.
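++
+A minimal sketch of how these might look in a configuration file (the
+msg-id and path are illustrative):
++
+----
+[fetch "fsck"]
+	missingEmail = ignore
+	skipList = /path/to/fsck-skip-list
+----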
fetch.unpackLimit::
If the number of objects fetched over the Git native
sent when negotiating the contents of the packfile to be sent by the
server. Set to "skipping" to use an algorithm that skips commits in an
effort to converge faster, but may result in a larger-than-necessary
- packfile; any other value instructs Git to use the default algorithm
+	packfile; the default is "default", which instructs Git to use the default algorithm
that never skips commits (unless the server has acknowledged it or one
of its descendants).
+ Unknown values will cause 'git fetch' to error out.
++
+See also the `--negotiation-tip` option for linkgit:git-fetch[1].
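++
+For example (a sketch), to try the "skipping" algorithm for a single
+fetch without changing the configuration permanently:
++
+----
+git -c fetch.negotiationAlgorithm=skipping fetch origin
+----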
format.attach::
Enable multipart/mixed attachments as the default for
linkgit:gitattributes[5] for details.
fsck.<msg-id>::
- Allows overriding the message type (error, warn or ignore) of a
- specific message ID such as `missingEmail`.
-+
-For convenience, fsck prefixes the error/warning with the message ID,
-e.g. "missingEmail: invalid author/committer line - missing email" means
-that setting `fsck.missingEmail = ignore` will hide that issue.
-+
-This feature is intended to support working with legacy repositories
-which cannot be repaired without disruptive changes.
+ During fsck git may find issues with legacy data which
+ wouldn't be generated by current versions of git, and which
+ wouldn't be sent over the wire if `transfer.fsckObjects` was
+ set. This feature is intended to support working with legacy
+ repositories containing such data.
++
+Setting `fsck.<msg-id>` will be picked up by linkgit:git-fsck[1], but
+to accept pushes of such data set `receive.fsck.<msg-id>` instead, or
+to clone or fetch it set `fetch.fsck.<msg-id>`.
++
+The rest of the documentation discusses `fsck.*` for brevity, but the
+same applies for the corresponding `receive.fsck.*` and
+`fetch.fsck.*` variables.
++
+Unlike variables like `color.ui` and `core.editor`, the
+`receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>` variables will not
+fall back on the `fsck.<msg-id>` configuration if they aren't set. To
+uniformly configure the same fsck settings in different circumstances,
+all three of them must be set to the same values.
++
+When `fsck.<msg-id>` is set, errors can be switched to warnings and
+vice versa by configuring the `fsck.<msg-id>` setting where the
+`<msg-id>` is the fsck message ID and the value is one of `error`,
+`warn` or `ignore`. For convenience, fsck prefixes the error/warning
+with the message ID, e.g. "missingEmail: invalid author/committer line
+- missing email" means that setting `fsck.missingEmail = ignore` will
+hide that issue.
++
+In general, it is better to enumerate existing objects with problems
+with `fsck.skipList`, instead of listing the kinds of breakages these
+problematic objects share to be ignored, as doing the latter will
+allow new instances of the same breakages to go unnoticed.
++
+Setting an unknown `fsck.<msg-id>` value will cause fsck to die, but
+doing the same for `receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>`
+will only cause git to warn.
fsck.skipList::
The path to a sorted list of object names (i.e. one SHA-1 per
should be accepted despite early commits containing errors that
can be safely ignored such as invalid committer email addresses.
Note: corrupt objects cannot be skipped with this setting.
++
+Like `fsck.<msg-id>` this variable has corresponding
+`receive.fsck.skipList` and `fetch.fsck.skipList` variants.
++
+Unlike variables like `color.ui` and `core.editor`, the
+`receive.fsck.skipList` and `fetch.fsck.skipList` variables will not
+fall back on the `fsck.skipList` configuration if they aren't set. To
+uniformly configure the same fsck settings in different circumstances,
+all three of them must be set to the same values.
gc.aggressiveDepth::
The depth parameter used in the delta compression
signed, and the program is expected to send the result to its
standard output.
+gpg.format::
+ Specifies which key format to use when signing with `--gpg-sign`.
+ Default is "openpgp" and another possible value is "x509".
+
+gpg.<format>.program::
+ Use this to customize the program used for the signing format you
+	chose (see `gpg.program` and `gpg.format`). `gpg.program` can still
+ be used as a legacy synonym for `gpg.openpgp.program`. The default
+ value for `gpg.x509.program` is "gpgsm".
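++
+For example (a sketch), to sign with x.509 via the default "gpgsm":
++
+----
+[gpg]
+	format = x509
+----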
+
gui.commitMsgWidth::
Defines how wide the commit message window is in the
linkgit:git-gui[1]. "75" is the default.
receive.fsckObjects::
If it is set to true, git-receive-pack will check all received
- objects. It will abort in the case of a malformed object or a
- broken link. The result of an abort are only dangling objects.
- Defaults to false. If not set, the value of `transfer.fsckObjects`
- is used instead.
+ objects. See `transfer.fsckObjects` for what's checked.
+ Defaults to false. If not set, the value of
+ `transfer.fsckObjects` is used instead.
receive.fsck.<msg-id>::
- When `receive.fsckObjects` is set to true, errors can be switched
- to warnings and vice versa by configuring the `receive.fsck.<msg-id>`
- setting where the `<msg-id>` is the fsck message ID and the value
- is one of `error`, `warn` or `ignore`. For convenience, fsck prefixes
- the error/warning with the message ID, e.g. "missingEmail: invalid
- author/committer line - missing email" means that setting
- `receive.fsck.missingEmail = ignore` will hide that issue.
-+
-This feature is intended to support working with legacy repositories
-which would not pass pushing when `receive.fsckObjects = true`, allowing
-the host to accept repositories with certain known issues but still catch
-other issues.
+ Acts like `fsck.<msg-id>`, but is used by
+ linkgit:git-receive-pack[1] instead of
+ linkgit:git-fsck[1]. See the `fsck.<msg-id>` documentation for
+ details.
receive.fsck.skipList::
- The path to a sorted list of object names (i.e. one SHA-1 per
- line) that are known to be broken in a non-fatal way and should
- be ignored. This feature is useful when an established project
- should be accepted despite early commits containing errors that
- can be safely ignored such as invalid committer email addresses.
- Note: corrupt objects cannot be skipped with this setting.
+ Acts like `fsck.skipList`, but is used by
+ linkgit:git-receive-pack[1] instead of
+ linkgit:git-fsck[1]. See the `fsck.skipList` documentation for
+ details.
receive.keepAlive::
After receiving the pack from the client, `receive-pack` may
When `fetch.fsckObjects` or `receive.fsckObjects` are
not set, the value of this variable is used instead.
Defaults to false.
++
+When set, the fetch or receive will abort in the case of a malformed
+object or a link to a nonexistent object. In addition, various other
+issues are checked for, including legacy issues (see `fsck.<msg-id>`),
+and potential security issues like the existence of a `.GIT` directory
+or a malicious `.gitmodules` file (see the release notes for v2.2.1
+and v2.17.1 for details). Other sanity and security checks may be
+added in future releases.
++
+On the receiving side, failing fsckObjects will make those objects
+unreachable; see "QUARANTINE ENVIRONMENT" in
+linkgit:git-receive-pack[1]. On the fetch side, malformed objects will
+instead be left unreferenced in the repository.
++
+Due to the non-quarantine nature of the `fetch.fsckObjects`
+implementation, it cannot be relied upon to leave the object store
+clean like `receive.fsckObjects` can.
++
+As objects are unpacked they're written to the object store, so there
+can be cases where malicious objects get introduced even though the
+"fetch" failed, only to have a subsequent "fetch" succeed because only
+new incoming objects are checked, not those that have already been
+written to the object store. That difference in behavior should not be
+relied upon. In the future, such objects may be quarantined for
+"fetch" as well.
++
+For now, the paranoid need to find some way to emulate the quarantine
+environment if they'd like the same protection as "push". E.g. in the
+case of an internal mirror, do the mirroring in two steps: one to
+fetch the untrusted objects, and then a second "push" (which will use
+the quarantine) to another internal repo, and have internal clients
+consume this pushed-to repository; or embargo internal fetches and
+only allow them once a full "fsck" has run (and no new fetches have
+happened in the meantime).
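++
+A minimal sketch of that two-step mirroring (the repository path and
+remote names are illustrative):
++
+----
+# Step 1: fetch the untrusted objects into a scratch repository;
+# fetch.fsckObjects catches what it can, without quarantine.
+git -C /srv/scratch.git fetch --prune untrusted '+refs/*:refs/*'
+
+# Step 2: push everything to the internal repository, where
+# receive.fsckObjects and the quarantine keep the object store clean.
+git -C /srv/scratch.git push --mirror internal
+----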
transfer.hideRefs::
String(s) `receive-pack` and `upload-pack` use to decide which
are painted using either the 'color.diff.{old,new}Moved' color or
'color.diff.{old,new}MovedAlternative'. The change between
the two colors indicates that a new block was detected.
-dimmed_zebra::
+dimmed-zebra::
Similar to 'zebra', but additional dimming of uninteresting parts
of moved code is performed. The bordering lines of two adjacent
blocks are considered interesting, the rest is uninteresting.
+ `dimmed_zebra` is a deprecated synonym.
--
--color-moved-ws=<modes>::
The argument to this option may be a glob on ref names, a ref, or the (possibly
abbreviated) SHA-1 of a commit. Specifying a glob is equivalent to specifying
this option multiple times, one for each matching ref name.
++
+See also the `fetch.negotiationAlgorithm` configuration variable
+documented in linkgit:git-config[1].
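++
+For example (a sketch), to base negotiation only on the local
+"master" branch when fetching from "origin":
++
+----
+git fetch --negotiation-tip=master origin
+----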
ifndef::git-pull[]
--dry-run::
`xx`; for example `%00` interpolates to `\0` (NUL),
`%09` to `\t` (TAB) and `%0a` to `\n` (LF).
---color[=<when>]:
+--color[=<when>]::
Respect any colors specified in the `--format` option. The
`<when>` field must be one of `always`, `never`, or `auto` (if
`<when>` is absent, behave as if `always` was given).
Specify where all new trailers will be added. A setting
provided with '--where' overrides all configuration variables
and applies to all '--trailer' options until the next occurrence of
- '--where' or '--no-where'.
+ '--where' or '--no-where'. Possible values are `after`, `before`,
+ `end` or `start`.
--if-exists <action>::
--no-if-exists::
least one trailer with the same <token> in the message. A setting
provided with '--if-exists' overrides all configuration variables
and applies to all '--trailer' options until the next occurrence of
- '--if-exists' or '--no-if-exists'.
+ '--if-exists' or '--no-if-exists'. Possible actions are `addIfDifferent`,
+ `addIfDifferentNeighbor`, `add`, `replace` and `doNothing`.
--if-missing <action>::
--no-if-missing::
trailer with the same <token> in the message. A setting
provided with '--if-missing' overrides all configuration variables
and applies to all '--trailer' options until the next occurrence of
- '--if-missing' or '--no-if-missing'.
+ '--if-missing' or '--no-if-missing'. Possible actions are `doNothing`
+ or `add`.
--only-trailers::
Output only the trailers, not any other parts of the input.
been submitted. Implies --disable-rebase. Can also be set with
git-p4.disableP4Sync. Sync with origin/master still goes ahead if possible.
+Hook for submit
+~~~~~~~~~~~~~~~
+The `p4-pre-submit` hook is executed if it exists and is executable.
+The hook takes no parameters and reads nothing from standard input.
+Exiting with a non-zero status from this script prevents `git-p4
+submit` from launching.
+
+One usage scenario is to run unit tests in the hook.
+
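+A minimal sketch of such a hook (the test command is illustrative):
+
+----
+#!/bin/sh
+# .git/hooks/p4-pre-submit
+# The hook's exit status is that of its last command, so a failing
+# test suite blocks the submit.
+exec make test
+----
+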
Rebase options
~~~~~~~~~~~~~~
These options can be used to modify 'git p4 rebase' behavior.
variable if it exists, or lexicographic order otherwise. See
linkgit:git-config[1].
---color[=<when>]:
+--color[=<when>]::
Respect any colors specified in the `--format` option. The
`<when>` field must be one of `always`, `never`, or `auto` (if
`<when>` is absent, behave as if `always` was given).
hook to limit its search. On error, it will fall back to verifying
all files and folders.
+p4-pre-submit
+~~~~~~~~~~~~~
+
+This hook is invoked by `git-p4 submit`. It takes no parameters and
+reads nothing from standard input. Exiting with a non-zero status from
+this script prevents `git-p4 submit` from launching. Run `git-p4
+submit --help` for details.
+
GIT
---
Part of the linkgit:git[1] suite
request_end
request_end = "0000" / "done"
- want_list = PKT-LINE(want NUL cap_list LF)
+ want_list = PKT-LINE(want SP cap_list LF)
*(want_pkt)
want_pkt = PKT-LINE(want LF)
want = "want" SP id
- cap_list = *(SP capability) SP
+ cap_list = capability *(SP capability)
have_list = *PKT-LINE("have" SP id LF)
Servers that receive any such Extra Parameters MUST ignore all
unrecognized keys. Currently, the only Extra Parameter recognized is
-"version=1".
+"version" with a value of '1' or '2'. See protocol-v2.txt for more
+information on protocol version 2.
Git Transport
-------------
# The DEVELOPER mode enables -Wextra with a few exceptions. By
# setting this flag the exceptions are removed, and all of
# -Wextra is used.
+#
+# pedantic:
+#
+# Enable -pedantic compilation. This also disables
+# USE_PARENS_AROUND_GETTEXT_N to produce only relevant warnings.
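+#
+# Example usage (illustrative): make DEVELOPER=1 DEVOPTS=pedantic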
GIT-VERSION-FILE: FORCE
@$(SHELL_PATH) ./GIT-VERSION-GEN
export TCL_PATH TCLTK_PATH
SPARSE_FLAGS =
-SPATCH_FLAGS = --all-includes
+SPATCH_FLAGS = --all-includes --patch .
TEST_BUILTINS_OBJS += test-genrandom.o
TEST_BUILTINS_OBJS += test-hashmap.o
TEST_BUILTINS_OBJS += test-index-version.o
+TEST_BUILTINS_OBJS += test-json-writer.o
TEST_BUILTINS_OBJS += test-lazy-init-name-hash.o
TEST_BUILTINS_OBJS += test-match-trees.o
TEST_BUILTINS_OBJS += test-mergesort.o
LIB_OBJS += help.o
LIB_OBJS += hex.o
LIB_OBJS += ident.o
+LIB_OBJS += json-writer.o
LIB_OBJS += kwset.o
LIB_OBJS += levenshtein.o
LIB_OBJS += line-log.o
fi
C_SOURCES = $(patsubst %.o,%.c,$(C_OBJ))
-%.cocci.patch: %.cocci $(C_SOURCES)
+ifdef DC_SHA1_SUBMODULE
+COCCI_SOURCES = $(filter-out sha1collisiondetection/%,$(C_SOURCES))
+else
+COCCI_SOURCES = $(filter-out sha1dc/%,$(C_SOURCES))
+endif
+
+%.cocci.patch: %.cocci $(COCCI_SOURCES)
@echo ' ' SPATCH $<; \
ret=0; \
- for f in $(C_SOURCES); do \
+ for f in $(COCCI_SOURCES); do \
$(SPATCH) --sp-file $< $$f $(SPATCH_FLAGS) || \
{ ret=$$?; break; }; \
done >$@+ 2>$@.log; \
then \
echo ' ' SPATCH result: $@; \
fi
-coccicheck: $(patsubst %.cocci,%.cocci.patch,$(wildcard contrib/coccinelle/*.cocci))
+coccicheck: $(addsuffix .patch,$(wildcard contrib/coccinelle/*.cocci))
+
+.PHONY: coccicheck
### Installation rules
$(RM) $(addsuffix *.gcda,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
$(RM) $(addsuffix *.gcno,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
-clean: profile-clean coverage-clean
+cocciclean:
+ $(RM) contrib/coccinelle/*.cocci.patch*
+
+clean: profile-clean coverage-clean cocciclean
$(RM) *.res
$(RM) $(OBJECTS)
$(RM) $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
$(RM) -r $(GIT_TARNAME) .doc-tmp-dir
$(RM) $(GIT_TARNAME).tar.gz git-core_$(GIT_VERSION)-*.tar.gz
$(RM) $(htmldocs).tar.gz $(manpages).tar.gz
- $(RM) contrib/coccinelle/*.cocci.patch*
$(MAKE) -C Documentation/ clean
ifndef NO_PERL
$(MAKE) -C gitweb clean
$(RM) GIT-USER-AGENT GIT-PREFIX
$(RM) GIT-SCRIPT-DEFINES GIT-PERL-DEFINES GIT-PERL-HEADER GIT-PYTHON-VARS
-.PHONY: all install profile-clean clean strip
+.PHONY: all install profile-clean cocciclean clean strip
.PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell
.PHONY: FORCE cscope
st = open_istream(oid, &type, &sz, NULL);
if (!st)
- return error("cannot stream blob %s", oid_to_hex(oid));
+ return error(_("cannot stream blob %s"), oid_to_hex(oid));
for (;;) {
readlen = read_istream(st, buf, sizeof(buf));
if (readlen <= 0)
*header.typeflag = TYPEFLAG_REG;
mode = (mode | ((mode & 0100) ? 0777 : 0666)) & ~tar_umask;
} else {
- return error("unsupported file mode: 0%o (SHA1: %s)",
+ return error(_("unsupported file mode: 0%o (SHA1: %s)"),
mode, oid_to_hex(oid));
}
if (pathlen > sizeof(header.name)) {
enum object_type type;
buffer = object_file_to_archive(args, path, oid, old_mode, &type, &size);
if (!buffer)
- return error("cannot read %s", oid_to_hex(oid));
+ return error(_("cannot read %s"), oid_to_hex(oid));
} else {
buffer = NULL;
size = 0;
filter.in = -1;
if (start_command(&filter) < 0)
- die_errno("unable to start '%s' filter", argv[0]);
+ die_errno(_("unable to start '%s' filter"), argv[0]);
close(1);
if (dup2(filter.in, 1) < 0)
- die_errno("unable to redirect descriptor");
+ die_errno(_("unable to redirect descriptor"));
close(filter.in);
r = write_tar_archive(ar, args);
close(1);
if (finish_command(&filter) != 0)
- die("'%s' filter reported error", argv[0]);
+ die(_("'%s' filter reported error"), argv[0]);
strbuf_release(&cmd);
return r;
if (is_utf8(path))
flags |= ZIP_UTF8;
else
- warning("Path is not valid UTF-8: %s", path);
+ warning(_("path is not valid UTF-8: %s"), path);
}
if (pathlen > 0xffff) {
- return error("path too long (%d chars, SHA1: %s): %s",
+ return error(_("path too long (%d chars, SHA1: %s): %s"),
(int)pathlen, oid_to_hex(oid), path);
}
size > big_file_threshold) {
stream = open_istream(oid, &type, &size, NULL);
if (!stream)
- return error("cannot stream blob %s",
+ return error(_("cannot stream blob %s"),
oid_to_hex(oid));
flags |= ZIP_STREAM;
out = buffer = NULL;
buffer = object_file_to_archive(args, path, oid, mode,
&type, &size);
if (!buffer)
- return error("cannot read %s",
+ return error(_("cannot read %s"),
oid_to_hex(oid));
crc = crc32(crc, buffer, size);
is_binary = entry_is_binary(path_without_prefix,
}
compressed_size = (method == 0) ? size : 0;
} else {
- return error("unsupported file mode: 0%o (SHA1: %s)", mode,
+ return error(_("unsupported file mode: 0%o (SHA1: %s)"), mode,
oid_to_hex(oid));
}
zstream.avail_in = readlen;
result = git_deflate(&zstream, 0);
if (result != Z_OK)
- die("deflate error (%d)", result);
+ die(_("deflate error (%d)"), result);
out_len = zstream.next_out - compressed;
if (out_len > 0) {
struct tm *t;
if (date_overflows(*timestamp))
- die("timestamp too large for this system: %"PRItime,
+ die(_("timestamp too large for this system: %"PRItime),
*timestamp);
time = (time_t)*timestamp;
t = localtime(&time);
--- /dev/null
+#ifndef BANNED_H
+#define BANNED_H
+
+/*
+ * This header lists functions that have been banned from our code base,
+ * because they're too easy to misuse (and even if used correctly,
+ * complicate audits). Including this header turns them into compile-time
+ * errors.
+ */
+
+#define BANNED(func) sorry_##func##_is_a_banned_function
+
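+/*
+ * For example (illustrative): after the redefinitions below, a call
+ * such as strcpy(buf, src) expands to the undeclared identifier
+ * sorry_strcpy_is_a_banned_function, so the compile error names the
+ * banned function.
+ */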
+#undef strcpy
+#define strcpy(x,y) BANNED(strcpy)
+#undef strcat
+#define strcat(x,y) BANNED(strcat)
+#undef strncpy
+#define strncpy(x,y,n) BANNED(strncpy)
+
+#undef sprintf
+#undef vsprintf
+#ifdef HAVE_VARIADIC_MACROS
+#define sprintf(...) BANNED(sprintf)
+#define vsprintf(...) BANNED(vsprintf)
+#else
+#define sprintf(buf,fmt,arg) BANNED(sprintf)
+#define vsprintf(buf,fmt,arg) BANNED(vsprintf)
+#endif
+
+#endif /* BANNED_H */
OPT_BOOL( 0 , "refresh", &refresh_only, N_("don't add, only refresh the index")),
OPT_BOOL( 0 , "ignore-errors", &ignore_add_errors, N_("just skip files which cannot be added because of errors")),
OPT_BOOL( 0 , "ignore-missing", &ignore_missing, N_("check if - even missing - files are ignored in dry run")),
- OPT_STRING( 0 , "chmod", &chmod_arg, N_("(+/-)x"), N_("override the executable bit of the listed files")),
+ OPT_STRING(0, "chmod", &chmod_arg, "(+|-)x",
+ N_("override the executable bit of the listed files")),
OPT_HIDDEN_BOOL(0, "warn-embedded-repo", &warn_on_embedded_repo,
N_("warn when adding an embedded repository")),
OPT_END(),
}
if (next == EXPECT_COLOR)
- die (_("must end with a color"));
+ die(_("must end with a color"));
colorfield[colorfield_nr].hop = TIME_MAX;
string_list_clear(&l, 0);
if (opts.track != BRANCH_TRACK_UNSPECIFIED && !opts.new_branch) {
const char *argv0 = argv[0];
if (!argc || !strcmp(argv0, "--"))
- die (_("--track needs a branch name"));
+ die(_("--track needs a branch name"));
skip_prefix(argv0, "refs/", &argv0);
skip_prefix(argv0, "remotes/", &argv0);
argv0 = strchr(argv0, '/');
if (!argv0 || !argv0[1])
- die (_("Missing branch name; try -b"));
+ die(_("missing branch name; try -b"));
opts.new_branch = argv0 + 1;
}
int err = 0, complete_refs_before_fetch = 1;
int submodule_progress;
- struct refspec_item refspec;
+ struct refspec rs = REFSPEC_INIT_FETCH;
+ struct argv_array ref_prefixes = ARGV_ARRAY_INIT;
fetch_if_missing = 0;
if (option_required_reference.nr || option_optional_reference.nr)
setup_reference();
- refspec_item_init_or_die(&refspec, value.buf, REFSPEC_FETCH);
+ refspec_append(&rs, value.buf);
strbuf_reset(&value);
if (transport->smart_options && !deepen && !filter_options.choice)
transport->smart_options->check_self_contained_and_connected = 1;
- refs = transport_get_remote_refs(transport, NULL);
+
+ argv_array_push(&ref_prefixes, "HEAD");
+ refspec_ref_prefixes(&rs, &ref_prefixes);
+ if (option_branch)
+ expand_ref_prefix(&ref_prefixes, option_branch);
+ if (!option_no_tags)
+ argv_array_push(&ref_prefixes, "refs/tags/");
+
+ refs = transport_get_remote_refs(transport, &ref_prefixes);
if (refs) {
- mapped_refs = wanted_peer_refs(refs, &refspec);
+ mapped_refs = wanted_peer_refs(refs, &rs.items[0]);
/*
* transport_get_remote_refs() may return refs with null sha-1
* in mapped_refs (see struct transport->get_refs_list
}
if (!is_local && !complete_refs_before_fetch)
- transport_fetch_refs(transport, mapped_refs, NULL);
+ transport_fetch_refs(transport, mapped_refs);
remote_head = find_ref_by_name(refs, "HEAD");
remote_head_points_at =
if (is_local)
clone_local(path, git_dir);
else if (refs && complete_refs_before_fetch)
- transport_fetch_refs(transport, mapped_refs, NULL);
+ transport_fetch_refs(transport, mapped_refs);
update_remote_refs(refs, mapped_refs, remote_head_points_at,
branch_top.buf, reflog_msg.buf, transport,
strbuf_release(&value);
junk_mode = JUNK_LEAVE_ALL;
- refspec_item_clear(&refspec);
+ refspec_clear(&rs);
+ argv_array_clear(&ref_prefixes);
return err;
}
unlink(git_path_squash_msg(the_repository));
if (commit_index_files())
- die (_("Repository has been updated, but unable to write\n"
- "new_index file. Check that disk is not full and quota is\n"
- "not exceeded, and then \"git reset HEAD\" to recover."));
+ die(_("repository has been updated, but unable to write\n"
+ "new_index file. Check that disk is not full and quota is\n"
+ "not exceeded, and then \"git reset HEAD\" to recover."));
rerere(0);
run_command_v_opt(argv_gc_auto, RUN_GIT_CMD);
* --int' and '--type=bool
* --type=int'.
*/
- error("only one type at a time.");
+ error(_("only one type at a time"));
usage_builtin_config();
}
*to_type = new_type;
static void check_argc(int argc, int min, int max) {
if (argc >= min && argc <= max)
return;
- error("wrong number of arguments");
+ if (min == max)
+ error(_("wrong number of arguments, should be %d"), min);
+ else
+ error(_("wrong number of arguments, should be from %d to %d"),
+ min, max);
usage_builtin_config();
}
key_regexp = (regex_t*)xmalloc(sizeof(regex_t));
if (regcomp(key_regexp, key, REG_EXTENDED)) {
- error("invalid key pattern: %s", key_);
+ error(_("invalid key pattern: %s"), key_);
FREE_AND_NULL(key_regexp);
ret = CONFIG_INVALID_PATTERN;
goto free_strings;
regexp = (regex_t*)xmalloc(sizeof(regex_t));
if (regcomp(regexp, regex_, REG_EXTENDED)) {
- error("invalid pattern: %s", regex_);
+ error(_("invalid pattern: %s"), regex_);
FREE_AND_NULL(regexp);
ret = CONFIG_INVALID_PATTERN;
goto free_strings;
if (type == TYPE_COLOR) {
char v[COLOR_MAXLEN];
if (git_config_color(v, key, value))
- die("cannot parse color '%s'", value);
+ die(_("cannot parse color '%s'"), value);
/*
* The contents of `v` now contain an ANSI escape
static void check_write(void)
{
if (!given_config_source.file && !startup_info->have_repository)
- die("not in a git directory");
+ die(_("not in a git directory"));
if (given_config_source.use_stdin)
- die("writing to stdin is not supported");
+ die(_("writing to stdin is not supported"));
if (given_config_source.blob)
- die("writing config blobs is not supported");
+ die(_("writing config blobs is not supported"));
}
struct urlmatch_current_candidate_value {
if (use_global_config + use_system_config + use_local_config +
!!given_config_source.file + !!given_config_source.blob > 1) {
- error("only one config file at a time.");
+ error(_("only one config file at a time"));
usage_builtin_config();
}
* location; error out even if XDG_CONFIG_HOME
* is set and points at a sane location.
*/
- die("$HOME not set");
+ die(_("$HOME not set"));
if (access_or_warn(user_config, R_OK, 0) &&
xdg_config && !access_or_warn(xdg_config, R_OK, 0)) {
}
if ((actions & (ACTION_GET_COLOR|ACTION_GET_COLORBOOL)) && type) {
- error("--get-color and variable type are incoherent");
+ error(_("--get-color and variable type are incoherent"));
usage_builtin_config();
}
if (HAS_MULTI_BITS(actions)) {
- error("only one action at a time.");
+ error(_("only one action at a time"));
usage_builtin_config();
}
if (actions == 0)
}
if (omit_values &&
!(actions == ACTION_LIST || actions == ACTION_GET_REGEXP)) {
- error("--name-only is only applicable to --list or --get-regexp");
+ error(_("--name-only is only applicable to --list or --get-regexp"));
usage_builtin_config();
}
if (show_origin && !(actions &
(ACTION_GET|ACTION_GET_ALL|ACTION_GET_REGEXP|ACTION_LIST))) {
- error("--show-origin is only applicable to --get, --get-all, "
- "--get-regexp, and --list.");
+ error(_("--show-origin is only applicable to --get, --get-all, "
+ "--get-regexp, and --list"));
usage_builtin_config();
}
if (default_value && !(actions & ACTION_GET)) {
- error("--default is only applicable to --get");
+ error(_("--default is only applicable to --get"));
usage_builtin_config();
}
&given_config_source,
&config_options) < 0) {
if (given_config_source.file)
- die_errno("unable to read config file '%s'",
+ die_errno(_("unable to read config file '%s'"),
given_config_source.file);
else
- die("error processing config file(s)");
+ die(_("error processing config file(s)"));
}
}
else if (actions == ACTION_EDIT) {
check_argc(argc, 0, 0);
if (!given_config_source.file && nongit)
- die("not in a git directory");
+ die(_("not in a git directory"));
if (given_config_source.use_stdin)
- die("editing stdin is not supported");
+ die(_("editing stdin is not supported"));
if (given_config_source.blob)
- die("editing blobs is not supported");
+ die(_("editing blobs is not supported"));
git_config(git_default_config, NULL);
config_file = given_config_source.file ?
xstrdup(given_config_source.file) :
if (ret < 0)
return ret;
if (ret == 0)
- die("No such section!");
+ die(_("no such section: %s"), argv[0]);
}
else if (actions == ACTION_REMOVE_SECTION) {
int ret;
if (ret < 0)
return ret;
if (ret == 0)
- die("No such section!");
+ die(_("no such section: %s"), argv[0]);
}
else if (actions == ACTION_GET_COLOR) {
check_argc(argc, 1, 2);
1, PARSE_OPT_NONEG | PARSE_OPT_HIDDEN),
OPT_BOOL(0, "symlinks", &symlinks,
N_("use symlinks in dir-diff mode")),
- OPT_STRING('t', "tool", &difftool_cmd, N_("<tool>"),
+ OPT_STRING('t', "tool", &difftool_cmd, N_("tool"),
N_("use the specified diff tool")),
OPT_BOOL(0, "tool-help", &tool_help,
N_("print a list of diff tools that may be used with "
OPT_BOOL(0, "trust-exit-code", &trust_exit_code,
N_("make 'git-difftool' exit when an invoked diff "
"tool returns a non - zero exit code")),
- OPT_STRING('x', "extcmd", &extcmd, N_("<command>"),
+ OPT_STRING('x', "extcmd", &extcmd, N_("command"),
N_("specify a custom command for viewing diffs")),
OPT_END()
};
} else {
buf = read_object_file(oid, &type, &size);
if (!buf)
- die ("Could not read blob %s", oid_to_hex(oid));
+ die("could not read blob %s", oid_to_hex(oid));
if (check_object_signature(oid, buf, size, type_name(type)) < 0)
die("sha1 mismatch in blob %s", oid_to_hex(oid));
object = parse_object_buffer(the_repository, oid, type,
printf("blob\nmark :%"PRIu32"\ndata %lu\n", last_idnum, size);
if (size && fwrite(buf, size, 1, stdout) != 1)
- die_errno ("Could not write blob '%s'", oid_to_hex(oid));
+ die_errno("could not write blob '%s'", oid_to_hex(oid));
printf("\n");
show_progress();
commit_buffer = get_commit_buffer(commit, NULL);
author = strstr(commit_buffer, "\nauthor ");
if (!author)
- die ("Could not find author in commit %s",
- oid_to_hex(&commit->object.oid));
+ die("could not find author in commit %s",
+ oid_to_hex(&commit->object.oid));
author++;
author_end = strchrnul(author, '\n');
committer = strstr(author_end, "\ncommitter ");
if (!committer)
- die ("Could not find committer in commit %s",
- oid_to_hex(&commit->object.oid));
+ die("could not find committer in commit %s",
+ oid_to_hex(&commit->object.oid));
committer++;
committer_end = strchrnul(committer, '\n');
message = strstr(committer_end, "\n\n");
buf = read_object_file(&tag->object.oid, &type, &size);
if (!buf)
- die ("Could not read tag %s", oid_to_hex(&tag->object.oid));
+ die("could not read tag %s", oid_to_hex(&tag->object.oid));
message = memmem(buf, size, "\n\n", 2);
if (message) {
message += 2;
if (signature)
switch(signed_tag_mode) {
case ABORT:
- die ("Encountered signed tag %s; use "
- "--signed-tags=<mode> to handle it.",
- oid_to_hex(&tag->object.oid));
+ die("encountered signed tag %s; use "
+ "--signed-tags=<mode> to handle it",
+ oid_to_hex(&tag->object.oid));
case WARN:
- warning ("Exporting signed tag %s",
- oid_to_hex(&tag->object.oid));
+ warning("exporting signed tag %s",
+ oid_to_hex(&tag->object.oid));
/* fallthru */
case VERBATIM:
break;
case WARN_STRIP:
- warning ("Stripping signature from tag %s",
- oid_to_hex(&tag->object.oid));
+ warning("stripping signature from tag %s",
+ oid_to_hex(&tag->object.oid));
/* fallthru */
case STRIP:
message_size = signature + 1 - message;
if (!tagged_mark) {
switch(tag_of_filtered_mode) {
case ABORT:
- die ("Tag %s tags unexported object; use "
- "--tag-of-filtered-object=<mode> to handle it.",
- oid_to_hex(&tag->object.oid));
+ die("tag %s tags unexported object; use "
+ "--tag-of-filtered-object=<mode> to handle it",
+ oid_to_hex(&tag->object.oid));
case DROP:
/* Ignore this tag altogether */
free(buf);
return;
case REWRITE:
if (tagged->type != OBJ_COMMIT) {
- die ("Tag %s tags unexported %s!",
- oid_to_hex(&tag->object.oid),
- type_name(tagged->type));
+ die("tag %s tags unexported %s!",
+ oid_to_hex(&tag->object.oid),
+ type_name(tagged->type));
}
p = (struct commit *)tagged;
for (;;) {
if (!(p->object.flags & TREESAME))
break;
if (!p->parents)
- die ("Can't find replacement commit for tag %s\n",
+ die("can't find replacement commit for tag %s",
oid_to_hex(&tag->object.oid));
p = p->parents->item;
}
return check_connected(iterate_ref_map, &rm, &opt);
}
-static int fetch_refs(struct transport *transport, struct ref *ref_map,
- struct ref **updated_remote_refs)
+static int fetch_refs(struct transport *transport, struct ref *ref_map)
{
int ret = quickfetch(ref_map);
if (ret)
- ret = transport_fetch_refs(transport, ref_map,
- updated_remote_refs);
+ ret = transport_fetch_refs(transport, ref_map);
if (!ret)
/*
* Keep the new pack's ".keep" file around to allow the caller
transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, NULL);
transport_set_option(transport, TRANS_OPT_DEPTH, "0");
transport_set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, NULL);
- if (!fetch_refs(transport, ref_map, NULL))
+ if (!fetch_refs(transport, ref_map))
consume_refs(transport, ref_map);
if (gsecondary) {
int autotags = (transport->remote->fetch_tags == 1);
int retcode = 0;
const struct ref *remote_refs;
- struct ref *updated_remote_refs = NULL;
struct argv_array ref_prefixes = ARGV_ARRAY_INIT;
if (tags == TAGS_DEFAULT) {
refspec_ref_prefixes(&transport->remote->fetch, &ref_prefixes);
if (ref_prefixes.argc &&
- (tags == TAGS_SET || (tags == TAGS_DEFAULT && !rs->nr))) {
+	    (tags == TAGS_SET || tags == TAGS_DEFAULT)) {
argv_array_push(&ref_prefixes, "refs/tags/");
}
transport->url);
}
}
-
- if (fetch_refs(transport, ref_map, &updated_remote_refs)) {
- free_refs(ref_map);
- retcode = 1;
- goto cleanup;
- }
- if (updated_remote_refs) {
- /*
- * Regenerate ref_map using the updated remote refs. This is
- * to account for additional information which may be provided
- * by the transport (e.g. shallow info).
- */
- free_refs(ref_map);
- ref_map = get_ref_map(transport->remote, updated_remote_refs, rs,
- tags, &autotags);
- free_refs(updated_remote_refs);
- }
- if (consume_refs(transport, ref_map)) {
+ if (fetch_refs(transport, ref_map) || consume_refs(transport, ref_map)) {
free_refs(ref_map);
retcode = 1;
goto cleanup;
i++;
p[len] = 0;
if (handle_line(p, &merge_parents))
- die ("Error in line %d: %.*s", i, len, p);
+ die("error in line %d: %.*s", i, len, p);
}
if (opts->add_title && srcs.nr)
fetch_if_missing = 0;
errors_found = 0;
- check_replace_refs = 0;
+ read_replace_refs = 0;
argc = parse_options(argc, argv, prefix, fsck_opts, fsck_usage, 0);
}
if (repo_read_index(repo) < 0)
- die("index file corrupt");
+ die(_("index file corrupt"));
for (nr = 0; nr < repo->index->cache_nr; nr++) {
const struct cache_entry *ce = repo->index->cache[nr];
}
if (!opt.pattern_list)
- die(_("no pattern given."));
+ die(_("no pattern given"));
/* --only-matching has no effect with --invert. */
if (opt.invert)
}
if (recurse_submodules && (!use_index || untracked))
- die(_("option not supported with --recurse-submodules."));
+ die(_("option not supported with --recurse-submodules"));
if (!show_in_pager && !opt.status_only)
setup_pager();
if (!use_index && (untracked || cached))
- die(_("--cached or --untracked cannot be used with --no-index."));
+ die(_("--cached or --untracked cannot be used with --no-index"));
if (!use_index || untracked) {
int use_exclude = (opt_exclude < 0) ? use_index : !!opt_exclude;
hit = grep_directory(&opt, &pathspec, use_exclude, use_index);
} else if (0 <= opt_exclude) {
- die(_("--[no-]exclude-standard cannot be used for tracked contents."));
+ die(_("--[no-]exclude-standard cannot be used for tracked contents"));
} else if (!list.nr) {
if (!cached)
setup_work_tree();
hit = grep_cache(&opt, the_repository, &pathspec, cached);
} else {
if (cached)
- die(_("both --cached and trees are given."));
+ die(_("both --cached and trees are given"));
hit = grep_objects(&opt, &pathspec, &list);
}
if (argc == 2 && !strcmp(argv[1], "-h"))
usage(index_pack_usage);
- check_replace_refs = 0;
+ read_replace_refs = 0;
fsck_options.walk = mark_link;
reset_pack_idx_option(&opts);
continue;
else if (S_ISLNK(st_template.st_mode)) {
struct strbuf lnk = STRBUF_INIT;
- if (strbuf_readlink(&lnk, template_path->buf, 0) < 0)
+ if (strbuf_readlink(&lnk, template_path->buf,
+ st_template.st_size) < 0)
die_errno(_("cannot readlink '%s'"), template_path->buf);
if (symlink(lnk.buf, path->buf))
die_errno(_("cannot symlink '%s' '%s'"),
numbered = 0;
if (numbered && keep_subject)
- die (_("-n and -k are mutually exclusive."));
+ die(_("-n and -k are mutually exclusive"));
if (keep_subject && subject_prefix)
- die (_("--subject-prefix/--rfc and -k are mutually exclusive."));
+ die(_("--subject-prefix/--rfc and -k are mutually exclusive"));
rev.preserve_subject = keep_subject;
argc = setup_revisions(argc, argv, &rev, &s_r_opt);
if (argc > 1)
- die (_("unrecognized argument: %s"), argv[1]);
+ die(_("unrecognized argument: %s"), argv[1]);
if (rev.diffopt.output_format & DIFF_FORMAT_NAME)
die(_("--name-only does not make sense"));
exit(128);
if (write_locked_index(&the_index, &lock,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
- die (_("unable to write %s"), get_index_file());
+ die(_("unable to write %s"), get_index_file());
return clean ? 0 : 1;
} else {
return try_merge_command(strategy, xopts_nr, xopts,
buf = read_object_file(&entry->idx.oid, &type, &size);
if (!buf)
- die("unable to read %s", oid_to_hex(&entry->idx.oid));
+ die(_("unable to read %s"), oid_to_hex(&entry->idx.oid));
base_buf = read_object_file(&DELTA(entry)->idx.oid, &type,
&base_size);
if (!base_buf)
oid_to_hex(&DELTA(entry)->idx.oid));
delta_buf = diff_delta(base_buf, base_size,
buf, size, &delta_size, 0);
+ /*
+	 * We successfully computed this delta once but dropped it for
+ * memory reasons. Something is very wrong if this time we
+ * recompute and create a different delta.
+ */
if (!delta_buf || delta_size != DELTA_SIZE(entry))
- die("delta size changed");
+ BUG("delta size changed");
free(buf);
free(base_buf);
return delta_buf;
datalen = revidx[1].offset - offset;
if (!pack_to_stdout && p->index_version > 1 &&
check_pack_crc(p, &w_curs, offset, datalen, revidx->nr)) {
- error("bad packed object CRC for %s",
+ error(_("bad packed object CRC for %s"),
oid_to_hex(&entry->idx.oid));
unuse_pack(&w_curs);
return write_no_reuse_object(f, entry, limit, usable_delta);
if (!pack_to_stdout && p->index_version == 1 &&
check_pack_inflate(p, &w_curs, offset, datalen, entry_size)) {
- error("corrupt packed object for %s",
+ error(_("corrupt packed object for %s"),
oid_to_hex(&entry->idx.oid));
unuse_pack(&w_curs);
return write_no_reuse_object(f, entry, limit, usable_delta);
*/
recursing = (e->idx.offset == 1);
if (recursing) {
- warning("recursive delta detected for object %s",
+ warning(_("recursive delta detected for object %s"),
oid_to_hex(&e->idx.oid));
return WRITE_ONE_RECURSIVE;
} else if (e->idx.offset || e->preferred_base) {
/* make sure off_t is sufficiently large not to wrap */
if (signed_add_overflows(*offset, size))
- die("pack too large for current definition of off_t");
+ die(_("pack too large for current definition of off_t"));
*offset += size;
return WRITE_ONE_WRITTEN;
}
}
if (wo_end != to_pack.nr_objects)
- die("ordered %u objects, expected %"PRIu32, wo_end, to_pack.nr_objects);
+ die(_("ordered %u objects, expected %"PRIu32),
+ wo_end, to_pack.nr_objects);
return wo;
}
int fd;
if (!is_pack_valid(reuse_packfile))
- die("packfile is invalid: %s", reuse_packfile->pack_name);
+ die(_("packfile is invalid: %s"), reuse_packfile->pack_name);
fd = git_open(reuse_packfile->pack_name);
if (fd < 0)
- die_errno("unable to open packfile for reuse: %s",
+ die_errno(_("unable to open packfile for reuse: %s"),
reuse_packfile->pack_name);
if (lseek(fd, sizeof(struct pack_header), SEEK_SET) == -1)
- die_errno("unable to seek in reused packfile");
+ die_errno(_("unable to seek in reused packfile"));
if (reuse_packfile_offset < 0)
reuse_packfile_offset = reuse_packfile->pack_size - the_hash_algo->rawsz;
int read_pack = xread(fd, buffer, sizeof(buffer));
if (read_pack <= 0)
- die_errno("unable to read from reused packfile");
+ die_errno(_("unable to read from reused packfile"));
if (read_pack > to_write)
read_pack = to_write;
* to preserve this property.
*/
if (stat(pack_tmp_name, &st) < 0) {
- warning_errno("failed to stat %s", pack_tmp_name);
+ warning_errno(_("failed to stat %s"), pack_tmp_name);
} else if (!last_mtime) {
last_mtime = st.st_mtime;
} else {
utb.actime = st.st_atime;
utb.modtime = --last_mtime;
if (utime(pack_tmp_name, &utb) < 0)
- warning_errno("failed utime() on %s", pack_tmp_name);
+ warning_errno(_("failed utime() on %s"), pack_tmp_name);
}
strbuf_addf(&tmpname, "%s-", base_name);
free(write_order);
stop_progress(&progress_state);
if (written != nr_result)
- die("wrote %"PRIu32" objects while expecting %"PRIu32,
- written, nr_result);
+ die(_("wrote %"PRIu32" objects while expecting %"PRIu32),
+ written, nr_result);
}
static int no_try_delta(const char *path)
while (c & 128) {
ofs += 1;
if (!ofs || MSB(ofs, 7)) {
- error("delta base offset overflow in pack for %s",
+ error(_("delta base offset overflow in pack for %s"),
oid_to_hex(&entry->idx.oid));
goto give_up;
}
}
ofs = entry->in_pack_offset - ofs;
if (ofs <= 0 || ofs >= entry->in_pack_offset) {
- error("delta base offset out of bound for %s",
+ error(_("delta base offset out of bound for %s"),
oid_to_hex(&entry->idx.oid));
goto give_up;
}
#ifndef NO_PTHREADS
+/* Protect access to object database */
static pthread_mutex_t read_mutex;
#define read_lock() pthread_mutex_lock(&read_mutex)
#define read_unlock() pthread_mutex_unlock(&read_mutex)
+/* Protect delta_cache_size */
static pthread_mutex_t cache_mutex;
#define cache_lock() pthread_mutex_lock(&cache_mutex)
#define cache_unlock() pthread_mutex_unlock(&cache_mutex)
+/*
+ * Protect object list partitioning (e.g. struct thread_param) and
+ * progress_state
+ */
static pthread_mutex_t progress_mutex;
#define progress_lock() pthread_mutex_lock(&progress_mutex)
#define progress_unlock() pthread_mutex_unlock(&progress_mutex)
+/*
+ * Access to struct object_entry is unprotected since each thread owns
+ * a portion of the main object list. Just don't access object entries
+ * ahead in the list because they can be stolen and would need
+ * progress_mutex for protection.
+ */
#else
#define read_lock() (void)0
trg->data = read_object_file(&trg_entry->idx.oid, &type, &sz);
read_unlock();
if (!trg->data)
- die("object %s cannot be read",
+ die(_("object %s cannot be read"),
oid_to_hex(&trg_entry->idx.oid));
if (sz != trg_size)
- die("object %s inconsistent object length (%lu vs %lu)",
+ die(_("object %s inconsistent object length (%lu vs %lu)"),
oid_to_hex(&trg_entry->idx.oid), sz,
trg_size);
*mem_usage += sz;
if (src_entry->preferred_base) {
static int warned = 0;
if (!warned++)
- warning("object %s cannot be read",
+ warning(_("object %s cannot be read"),
oid_to_hex(&src_entry->idx.oid));
/*
* Those objects are not included in the
*/
return 0;
}
- die("object %s cannot be read",
+ die(_("object %s cannot be read"),
oid_to_hex(&src_entry->idx.oid));
}
if (sz != src_size)
- die("object %s inconsistent object length (%lu vs %lu)",
+ die(_("object %s inconsistent object length (%lu vs %lu)"),
oid_to_hex(&src_entry->idx.oid), sz,
src_size);
*mem_usage += sz;
if (!src->index) {
static int warned = 0;
if (!warned++)
- warning("suboptimal pack - out of memory");
+ warning(_("suboptimal pack - out of memory"));
return 0;
}
*mem_usage += sizeof_delta_index(src->index);
static try_to_free_t old_try_to_free_routine;
/*
+ * The main object list is split into smaller lists, each is handed to
+ * one worker.
+ *
* The main thread waits on the condition that (at least) one of the workers
* has stopped working (which is indicated in the .working member of
* struct thread_params).
+ *
* When a work thread has completed its work, it sets .working to 0 and
* signals the main thread and waits on the condition that .data_ready
* becomes 1.
+ *
+ * The main thread steals half of the work from the worker that has
+ * the most work left and hands it to the idle worker.
*/
struct thread_params {
return;
}
if (progress > pack_to_stdout)
- fprintf(stderr, "Delta compression using up to %d threads.\n",
- delta_search_threads);
+ fprintf_ln(stderr, _("Delta compression using up to %d threads"),
+ delta_search_threads);
p = xcalloc(delta_search_threads, sizeof(*p));
/* Partition the work amongst work threads. */
ret = pthread_create(&p[i].thread, NULL,
threaded_find_deltas, &p[i]);
if (ret)
- die("unable to create thread: %s", strerror(ret));
+ die(_("unable to create thread: %s"), strerror(ret));
active_threads++;
}
tag = lookup_tag(the_repository, oid);
while (1) {
if (!tag || parse_tag(tag) || !tag->tagged)
- die("unable to pack objects reachable from tag %s",
+ die(_("unable to pack objects reachable from tag %s"),
oid_to_hex(oid));
add_object_entry(&tag->object.oid, OBJ_TAG, NULL, 0);
if (!entry->preferred_base) {
nr_deltas++;
if (oe_type(entry) < 0)
- die("unable to get type of object %s",
+ die(_("unable to get type of object %s"),
oid_to_hex(&entry->idx.oid));
} else {
if (oe_type(entry) < 0) {
ll_find_deltas(delta_list, n, window+1, depth, &nr_done);
stop_progress(&progress_state);
if (nr_done != nr_deltas)
- die("inconsistency with delta count");
+ die(_("inconsistency with delta count"));
}
free(delta_list);
}
if (!strcmp(k, "pack.threads")) {
delta_search_threads = git_config_int(k, v);
if (delta_search_threads < 0)
- die("invalid number of threads specified (%d)",
+ die(_("invalid number of threads specified (%d)"),
delta_search_threads);
#ifdef NO_PTHREADS
if (delta_search_threads != 1) {
- warning("no threads support, ignoring %s", k);
+ warning(_("no threads support, ignoring %s"), k);
delta_search_threads = 0;
}
#endif
if (!strcmp(k, "pack.indexversion")) {
pack_idx_opts.version = git_config_int(k, v);
if (pack_idx_opts.version > 2)
- die("bad pack.indexversion=%"PRIu32,
+ die(_("bad pack.indexversion=%"PRIu32),
pack_idx_opts.version);
return 0;
}
if (feof(stdin))
break;
if (!ferror(stdin))
- die("fgets returned NULL, not EOF, not error!");
+ die("BUG: fgets returned NULL, not EOF, not error!");
if (errno != EINTR)
die_errno("fgets");
clearerr(stdin);
}
if (line[0] == '-') {
if (get_oid_hex(line+1, &oid))
- die("expected edge object ID, got garbage:\n %s",
+ die(_("expected edge object ID, got garbage:\n %s"),
line);
add_preferred_base(&oid);
continue;
}
if (parse_oid_hex(line, &oid, &p))
- die("expected object ID, got garbage:\n %s", line);
+ die(_("expected object ID, got garbage:\n %s"), line);
add_preferred_base_object(p + 1);
add_object_entry(&oid, OBJ_NONE, p + 1, 0);
if (!p->pack_local || p->pack_keep || p->pack_keep_in_core)
continue;
if (open_pack_index(p))
- die("cannot open pack index");
+ die(_("cannot open pack index"));
ALLOC_GROW(in_pack.array,
in_pack.nr + p->num_objects,
enum object_type type = oid_object_info(the_repository, oid, NULL);
if (type < 0) {
- warning("loose object at %s could not be examined", path);
+ warning(_("loose object at %s could not be examined"), path);
return 0;
}
continue;
if (open_pack_index(p))
- die("cannot open pack index");
+ die(_("cannot open pack index"));
for (i = 0; i < p->num_objects; i++) {
nth_packed_object_oid(&oid, p, i);
!has_sha1_pack_kept_or_nonlocal(&oid) &&
!loosened_object_can_be_discarded(&oid, p->mtime))
if (force_object_loose(&oid, p->mtime))
- die("unable to force loose object");
+ die(_("unable to force loose object"));
}
}
}
use_bitmap_index = 0;
continue;
}
- die("not a rev '%s'", line);
+ die(_("not a rev '%s'"), line);
}
if (handle_revision_arg(line, &revs, flags, REVARG_CANNOT_BE_FILENAME))
- die("bad revision '%s'", line);
+ die(_("bad revision '%s'"), line);
}
if (use_bitmap_index && !get_object_list_from_bitmap(&revs))
return;
if (prepare_revision_walk(&revs))
- die("revision walk setup failed");
+ die(_("revision walk setup failed"));
mark_edges_uninteresting(&revs, show_edge);
if (!fn_show_object)
revs.ignore_missing_links = 1;
if (add_unseen_recent_objects_to_traversal(&revs,
unpack_unreachable_expiration))
- die("unable to add recent objects");
+ die(_("unable to add recent objects"));
if (prepare_revision_walk(&revs))
- die("revision walk setup failed");
+ die(_("revision walk setup failed"));
traverse_commit_list(&revs, record_recent_commit,
record_recent_object, NULL);
}
OPT_BOOL(0, "all-progress-implied",
&all_progress_implied,
N_("similar to --all-progress when progress meter is shown")),
- { OPTION_CALLBACK, 0, "index-version", NULL, N_("version[,offset]"),
+ { OPTION_CALLBACK, 0, "index-version", NULL, N_("<version>[,<offset>]"),
N_("write the pack index file in the specified idx format version"),
0, option_parse_index_version },
OPT_MAGNITUDE(0, "max-pack-size", &pack_size_limit,
if (DFS_NUM_STATES > (1 << OE_DFS_STATE_BITS))
BUG("too many dfs states, increase OE_DFS_STATE_BITS");
- check_replace_refs = 0;
+ read_replace_refs = 0;
reset_pack_idx_option(&pack_idx_opts);
git_config(git_pack_config, NULL);
if (pack_compression_level == -1)
pack_compression_level = Z_DEFAULT_COMPRESSION;
else if (pack_compression_level < 0 || pack_compression_level > Z_BEST_COMPRESSION)
- die("bad pack compression level %d", pack_compression_level);
+ die(_("bad pack compression level %d"), pack_compression_level);
if (!delta_search_threads) /* --threads=0 means autodetect */
delta_search_threads = online_cpus();
#ifdef NO_PTHREADS
if (delta_search_threads != 1)
- warning("no threads support, ignoring --threads");
+ warning(_("no threads support, ignoring --threads"));
#endif
if (!pack_to_stdout && !pack_size_limit)
pack_size_limit = pack_size_limit_cfg;
if (pack_to_stdout && pack_size_limit)
- die("--max-pack-size cannot be used to build a pack for transfer.");
+ die(_("--max-pack-size cannot be used to build a pack for transfer"));
if (pack_size_limit && pack_size_limit < 1024*1024) {
- warning("minimum pack size limit is 1 MiB");
+ warning(_("minimum pack size limit is 1 MiB"));
pack_size_limit = 1024*1024;
}
if (!pack_to_stdout && thin)
- die("--thin cannot be used to build an indexable pack.");
+ die(_("--thin cannot be used to build an indexable pack"));
if (keep_unreachable && unpack_unreachable)
- die("--keep-unreachable and --unpack-unreachable are incompatible.");
+ die(_("--keep-unreachable and --unpack-unreachable are incompatible"));
if (!rev_list_all || !rev_list_reflog || !rev_list_index)
unpack_unreachable_expiration = 0;
if (filter_options.choice) {
if (!pack_to_stdout)
- die("cannot use --filter without --stdout.");
+ die(_("cannot use --filter without --stdout"));
use_bitmap_index = 0;
}
prepare_pack(window, depth);
write_pack_file();
if (progress)
- fprintf(stderr, "Total %"PRIu32" (delta %"PRIu32"),"
- " reused %"PRIu32" (delta %"PRIu32")\n",
- written, written_delta, reused, reused_delta);
+ fprintf_ln(stderr,
+ _("Total %"PRIu32" (delta %"PRIu32"),"
+ " reused %"PRIu32" (delta %"PRIu32")"),
+ written, written_delta, reused, reused_delta);
return 0;
}
expire = TIME_MAX;
save_commit_buffer = 0;
- check_replace_refs = 0;
+ read_replace_refs = 0;
ref_paranoia = 1;
init_revisions(&revs, prefix);
OPT_BIT( 0, "porcelain", &flags, N_("machine-readable output"), TRANSPORT_PUSH_PORCELAIN),
OPT_BIT('f', "force", &flags, N_("force updates"), TRANSPORT_PUSH_FORCE),
{ OPTION_CALLBACK,
- 0, CAS_OPT_NAME, &cas, N_("refname>:<expect"),
+ 0, CAS_OPT_NAME, &cas, N_("<refname>:<expect>"),
N_("require old value of ref to be at this value"),
- PARSE_OPT_OPTARG, parseopt_push_cas_option },
+ PARSE_OPT_OPTARG | PARSE_OPT_LITERAL_ARGHELP, parseopt_push_cas_option },
{ OPTION_CALLBACK, 0, "recurse-submodules", &recurse_submodules, "check|on-demand|no",
N_("control recursive pushing of submodules"),
PARSE_OPT_OPTARG, option_parse_recurse_submodules },
N_("same as -m, but discard unmerged entries")),
{ OPTION_STRING, 0, "prefix", &opts.prefix, N_("<subdirectory>/"),
N_("read the tree into the index under <subdirectory>/"),
- PARSE_OPT_NONEG | PARSE_OPT_LITERAL_ARGHELP },
+ PARSE_OPT_NONEG },
OPT_BOOL('u', NULL, &opts.update,
N_("update working tree with merge result")),
{ OPTION_CALLBACK, 0, "exclude-per-directory", &opts,
strbuf_addf(&buf, "refs/remotes/%s/", rename->old_name);
if (starts_with(refname, buf.buf)) {
- item = string_list_append(rename->remote_branches, xstrdup(refname));
+ item = string_list_append(rename->remote_branches, refname);
symref = resolve_ref_unsafe(refname, RESOLVE_REF_READING,
NULL, &flag);
if (symref && (flag & REF_ISSYMREF))
struct remote *oldremote, *newremote;
struct strbuf buf = STRBUF_INIT, buf2 = STRBUF_INIT, buf3 = STRBUF_INIT,
old_remote_context = STRBUF_INIT;
- struct string_list remote_branches = STRING_LIST_INIT_NODUP;
+ struct string_list remote_branches = STRING_LIST_INIT_DUP;
struct rename_info rename;
int i, refspec_updated = 0;
if (create_symref(buf.buf, buf2.buf, buf3.buf))
die(_("creating '%s' failed"), buf.buf);
}
+ string_list_clear(&remote_branches, 1);
return 0;
}
enum object_type obj_type, repl_type;
if (get_oid(refname, &object))
- return error("Failed to resolve '%s' as a valid ref.", refname);
+ return error(_("failed to resolve '%s' as a valid ref"), refname);
obj_type = oid_object_info(the_repository, &object,
NULL);
else if (!strcmp(format, "long"))
data.format = REPLACE_FORMAT_LONG;
else
- return error("invalid replace format '%s'\n"
- "valid formats are 'short', 'medium' and 'long'\n",
+ return error(_("invalid replace format '%s'\n"
+ "valid formats are 'short', 'medium' and 'long'"),
format);
for_each_replace_ref(the_repository, show_reference, (void *)&data);
for (p = argv; *p; p++) {
if (get_oid(*p, &oid)) {
- error("Failed to resolve '%s' as a valid ref.", *p);
+ error("failed to resolve '%s' as a valid ref", *p);
had_error = 1;
continue;
}
full_hex = ref.buf + base_len;
if (read_ref(ref.buf, &oid)) {
- error("replace ref '%s' not found.", full_hex);
+ error(_("replace ref '%s' not found"), full_hex);
had_error = 1;
continue;
}
{
if (delete_ref(NULL, ref, oid, 0))
return 1;
- printf("Deleted replace ref '%s'\n", name);
+ printf_ln(_("Deleted replace ref '%s'"), name);
return 0;
}
strbuf_reset(ref);
strbuf_addf(ref, "%s%s", git_replace_ref_base, oid_to_hex(object));
if (check_refname_format(ref->buf, 0))
- return error("'%s' is not a valid ref name.", ref->buf);
+ return error(_("'%s' is not a valid ref name"), ref->buf);
if (read_ref(ref->buf, prev))
oidclr(prev);
else if (!force)
- return error("replace ref '%s' already exists", ref->buf);
+ return error(_("replace ref '%s' already exists"), ref->buf);
return 0;
}
obj_type = oid_object_info(the_repository, object, NULL);
repl_type = oid_object_info(the_repository, repl, NULL);
if (!force && obj_type != repl_type)
- return error("Objects must be of the same type.\n"
- "'%s' points to a replaced object of type '%s'\n"
- "while '%s' points to a replacement object of "
- "type '%s'.",
+ return error(_("Objects must be of the same type.\n"
+ "'%s' points to a replaced object of type '%s'\n"
+ "while '%s' points to a replacement object of "
+ "type '%s'."),
object_ref, type_name(obj_type),
replace_ref, type_name(repl_type));
struct object_id object, repl;
if (get_oid(object_ref, &object))
- return error("Failed to resolve '%s' as a valid ref.",
+ return error(_("failed to resolve '%s' as a valid ref"),
object_ref);
if (get_oid(replace_ref, &repl))
- return error("Failed to resolve '%s' as a valid ref.",
+ return error(_("failed to resolve '%s' as a valid ref"),
replace_ref);
return replace_object_oid(object_ref, &object, replace_ref, &repl, force);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0666);
if (fd < 0)
- return error_errno("unable to open %s for writing", filename);
+ return error_errno(_("unable to open %s for writing"), filename);
argv_array_push(&cmd.args, "--no-replace-objects");
argv_array_push(&cmd.args, "cat-file");
cmd.out = fd;
if (run_command(&cmd))
- return error("cat-file reported failure");
+ return error(_("cat-file reported failure"));
return 0;
}
fd = open(filename, O_RDONLY);
if (fd < 0)
- return error_errno("unable to open %s for reading", filename);
+ return error_errno(_("unable to open %s for reading"), filename);
if (!raw && type == OBJ_TREE) {
const char *argv[] = { "mktree", NULL };
if (start_command(&cmd)) {
close(fd);
- return error("unable to spawn mktree");
+ return error(_("unable to spawn mktree"));
}
if (strbuf_read(&result, cmd.out, 41) < 0) {
- error_errno("unable to read from mktree");
+ error_errno(_("unable to read from mktree"));
close(fd);
close(cmd.out);
return -1;
if (finish_command(&cmd)) {
strbuf_release(&result);
- return error("mktree reported failure");
+ return error(_("mktree reported failure"));
}
if (get_oid_hex(result.buf, oid) < 0) {
strbuf_release(&result);
- return error("mktree did not return an object name");
+ return error(_("mktree did not return an object name"));
}
strbuf_release(&result);
int flags = HASH_FORMAT_CHECK | HASH_WRITE_OBJECT;
if (fstat(fd, &st) < 0) {
- error_errno("unable to fstat %s", filename);
+ error_errno(_("unable to fstat %s"), filename);
close(fd);
return -1;
}
if (index_fd(oid, fd, &st, type, NULL, flags) < 0)
- return error("unable to write object to database");
+ return error(_("unable to write object to database"));
/* index_fd close()s fd for us */
}
struct strbuf ref = STRBUF_INIT;
if (get_oid(object_ref, &old_oid) < 0)
- return error("Not a valid object name: '%s'", object_ref);
+ return error(_("not a valid object name: '%s'"), object_ref);
type = oid_object_info(the_repository, &old_oid, NULL);
if (type < 0)
- return error("unable to get object type for %s",
+ return error(_("unable to get object type for %s"),
oid_to_hex(&old_oid));
if (check_ref_valid(&old_oid, &prev, &ref, force)) {
}
if (launch_editor(tmpfile, NULL, NULL) < 0) {
free(tmpfile);
- return error("editing object file failed");
+ return error(_("editing object file failed"));
}
if (import_object(&new_oid, type, raw, tmpfile)) {
free(tmpfile);
free(tmpfile);
if (!oidcmp(&old_oid, &new_oid))
- return error("new object is the same as the old one: '%s'", oid_to_hex(&old_oid));
+ return error(_("new object is the same as the old one: '%s'"), oid_to_hex(&old_oid));
return replace_object_oid(object_ref, &old_oid, "replacement", &new_oid, force);
}
struct object_id oid;
if (get_oid(argv[i], &oid) < 0) {
strbuf_release(&new_parents);
- return error(_("Not a valid object name: '%s'"),
+ return error(_("not a valid object name: '%s'"),
argv[i]);
}
if (!lookup_commit_reference(the_repository, &oid)) {
for (i = 1; i < mergetag_data->argc; i++) {
struct object_id oid;
if (get_oid(mergetag_data->argv[i], &oid) < 0)
- return error(_("Not a valid object name: '%s'"),
+ return error(_("not a valid object name: '%s'"),
mergetag_data->argv[i]);
if (!oidcmp(&tag->tagged->oid, &oid))
return 0; /* found */
unsigned long size;
if (get_oid(old_ref, &old_oid) < 0)
- return error(_("Not a valid object name: '%s'"), old_ref);
+ return error(_("not a valid object name: '%s'"), old_ref);
commit = lookup_commit_reference(the_repository, &old_oid);
if (!commit)
return error(_("could not parse %s"), old_ref);
}
if (remove_signature(&buf)) {
- warning(_("the original commit '%s' has a gpg signature."), old_ref);
+ warning(_("the original commit '%s' has a gpg signature"), old_ref);
warning(_("the signature will be removed in the replacement commit!"));
}
if (!oidcmp(&old_oid, &new_oid)) {
if (gentle) {
- warning("graft for '%s' unnecessary", oid_to_hex(&old_oid));
+ warning(_("graft for '%s' unnecessary"), oid_to_hex(&old_oid));
return 0;
}
- return error("new commit is the same as the old one: '%s'", oid_to_hex(&old_oid));
+ return error(_("new commit is the same as the old one: '%s'"), oid_to_hex(&old_oid));
}
return replace_object_oid(old_ref, &old_oid, "replacement", &new_oid, force);
OPT_END()
};
- check_replace_refs = 0;
+ read_replace_refs = 0;
git_config(git_default_config, NULL);
argc = parse_options(argc, argv, prefix, options, git_replace_usage, 0);
cmdmode = argc ? MODE_REPLACE : MODE_LIST;
if (format && cmdmode != MODE_LIST)
- usage_msg_opt("--format cannot be used when not listing",
+ usage_msg_opt(_("--format cannot be used when not listing"),
git_replace_usage, options);
if (force &&
cmdmode != MODE_EDIT &&
cmdmode != MODE_GRAFT &&
cmdmode != MODE_CONVERT_GRAFT_FILE)
- usage_msg_opt("-f only makes sense when writing a replacement",
+ usage_msg_opt(_("-f only makes sense when writing a replacement"),
git_replace_usage, options);
if (raw && cmdmode != MODE_EDIT)
- usage_msg_opt("--raw only makes sense with --edit",
+ usage_msg_opt(_("--raw only makes sense with --edit"),
git_replace_usage, options);
switch (cmdmode) {
case MODE_DELETE:
if (argc < 1)
- usage_msg_opt("-d needs at least one argument",
+ usage_msg_opt(_("-d needs at least one argument"),
git_replace_usage, options);
return for_each_replace_name(argv, delete_replace_ref);
case MODE_REPLACE:
if (argc != 2)
- usage_msg_opt("bad number of arguments",
+ usage_msg_opt(_("bad number of arguments"),
git_replace_usage, options);
return replace_object(argv[0], argv[1], force);
case MODE_EDIT:
if (argc != 1)
- usage_msg_opt("-e needs exactly one argument",
+ usage_msg_opt(_("-e needs exactly one argument"),
git_replace_usage, options);
return edit_and_replace(argv[0], force, raw);
case MODE_GRAFT:
if (argc < 1)
- usage_msg_opt("-g needs at least one argument",
+ usage_msg_opt(_("-g needs at least one argument"),
git_replace_usage, options);
return create_graft(argc, argv, force, 0);
case MODE_CONVERT_GRAFT_FILE:
if (argc != 0)
- usage_msg_opt("--convert-graft-file takes no argument",
+ usage_msg_opt(_("--convert-graft-file takes no argument"),
git_replace_usage, options);
return !!convert_graft_file(force);
case MODE_LIST:
if (argc > 1)
- usage_msg_opt("only one pattern can be given with -l",
+ usage_msg_opt(_("only one pattern can be given with -l"),
git_replace_usage, options);
return list_replace_refs(argv[0], format);
list.entry[list.nr].is_submodule = S_ISGITLINK(ce->ce_mode);
if (list.entry[list.nr++].is_submodule &&
!is_staging_gitmodules_ok(&the_index))
- die (_("Please stage your changes to .gitmodules or stash them to proceed"));
+ die(_("please stage your changes to .gitmodules or stash them to proceed"));
}
if (pathspec.nr) {
OPT_BOOL(0, "stdin", &from_stdin, N_("read refs from stdin")),
OPT_BOOL(0, "helper-status", &helper_status, N_("print status from remote helper")),
{ OPTION_CALLBACK,
- 0, CAS_OPT_NAME, &cas, N_("refname>:<expect"),
+ 0, CAS_OPT_NAME, &cas, N_("<refname>:<expect>"),
N_("require old value of ref to be at this value"),
PARSE_OPT_OPTARG, parseopt_push_cas_option },
OPT_END()
N_("Suppress commit descriptions, only provides commit count")),
OPT_BOOL('e', "email", &log.email,
N_("Show the email address of each author")),
- { OPTION_CALLBACK, 'w', NULL, &log, N_("w[,i1[,i2]]"),
- N_("Linewrap output"), PARSE_OPT_OPTARG, &parse_wrap_args },
+ { OPTION_CALLBACK, 'w', NULL, &log, N_("<w>[,<i1>[,<i2>]]"),
+ N_("Linewrap output"), PARSE_OPT_OPTARG,
+ &parse_wrap_args },
OPT_END(),
};
{ OPTION_CALLBACK, 'g', "reflog", &reflog_base, N_("<n>[,<base>]"),
N_("show <n> most recent ref-log entries starting at "
"base"),
- PARSE_OPT_OPTARG | PARSE_OPT_LITERAL_ARGHELP,
+ PARSE_OPT_OPTARG,
parse_reflog_param },
OPT_END()
};
int i;
struct object_id oid;
- check_replace_refs = 0;
+ read_replace_refs = 0;
git_config(git_default_config, NULL);
PARSE_OPT_NOARG | /* disallow --cacheinfo=<mode> form */
PARSE_OPT_NONEG | PARSE_OPT_LITERAL_ARGHELP,
(parse_opt_cb *) cacheinfo_callback},
- {OPTION_CALLBACK, 0, "chmod", &set_executable_bit, N_("(+/-)x"),
+ {OPTION_CALLBACK, 0, "chmod", &set_executable_bit, "(+|-)x",
N_("override the executable bit of the listed files"),
- PARSE_OPT_NONEG | PARSE_OPT_LITERAL_ARGHELP,
+ PARSE_OPT_NONEG,
chmod_callback},
{OPTION_SET_INT, 0, "assume-unchanged", &mark_valid_only, NULL,
N_("mark files as \"not changing\""),
};
packet_trace_identity("upload-pack");
- check_replace_refs = 0;
+ read_replace_refs = 0;
argc = parse_options(argc, argv, NULL, options, upload_pack_usage, 0);
struct option write_tree_options[] = {
OPT_BIT(0, "missing-ok", &flags, N_("allow missing objects"),
WRITE_TREE_MISSING_OK),
- { OPTION_STRING, 0, "prefix", &prefix, N_("<prefix>/"),
- N_("write tree object for a subdirectory <prefix>") ,
- PARSE_OPT_LITERAL_ARGHELP },
+ OPT_STRING(0, "prefix", &prefix, N_("<prefix>/"),
+ N_("write tree object for a subdirectory <prefix>")),
{ OPTION_BIT, 0, "ignore-cache-tree", &flags, NULL,
N_("only useful for debugging"),
PARSE_OPT_HIDDEN | PARSE_OPT_NOARG, NULL,
* Do replace refs need to be checked this run? This variable is
* initialized to true unless --no-replace-object is used or
* $GIT_NO_REPLACE_OBJECTS is set, but is set to false by some
- * commands that do not want replace references to be active. As an
- * optimization it is also set to false if replace references have
- * been sought but there were none.
+ * commands that do not want replace references to be active.
*/
-extern int check_replace_refs;
+extern int read_replace_refs;
extern char *git_replace_ref_base;
extern int fsync_object_files;
extern struct object *peel_to_type(const char *name, int namelen,
struct object *o, enum object_type);
+enum date_mode_type {
+ DATE_NORMAL = 0,
+ DATE_RELATIVE,
+ DATE_SHORT,
+ DATE_ISO8601,
+ DATE_ISO8601_STRICT,
+ DATE_RFC2822,
+ DATE_STRFTIME,
+ DATE_RAW,
+ DATE_UNIX
+};
+
struct date_mode {
- enum date_mode_type {
- DATE_NORMAL = 0,
- DATE_RELATIVE,
- DATE_SHORT,
- DATE_ISO8601,
- DATE_ISO8601_STRICT,
- DATE_RFC2822,
- DATE_STRFTIME,
- DATE_RAW,
- DATE_UNIX
- } type;
+ enum date_mode_type type;
const char *strftime_fmt;
int local;
};
export DEVELOPER=1
export DEFAULT_TEST_TARGET=prove
export GIT_PROVE_OPTS="--timer --jobs 3 --state=failed,slow,save"
-export GIT_TEST_OPTS="--verbose-log -x"
+export GIT_TEST_OPTS="--verbose-log -x --immediate"
export GIT_TEST_CLONE_2GB=YesPlease
if [ "$jobname" = linux-gcc ]; then
export CC=gcc-8
# Tracing executed commands would produce too much noise in the loop below.
set +x
-if ! ls t/test-results/*.exit >/dev/null 2>/dev/null
+cd t/
+
+if ! ls test-results/*.exit >/dev/null 2>/dev/null
then
echo "Build job failed before the tests could have been run"
exit
fi
-for TEST_EXIT in t/test-results/*.exit
+case "$jobname" in
+osx-clang|osx-gcc)
+ # base64 in OSX doesn't wrap its output at 76 columns by
+ # default, but prints a single, very long line.
+ base64_opts="-b 76"
+ ;;
+esac
+
+combined_trash_size=0
+for TEST_EXIT in test-results/*.exit
do
if [ "$(cat "$TEST_EXIT")" != "0" ]
then
echo "$(tput setaf 1)${TEST_OUT}...$(tput sgr0)"
echo "------------------------------------------------------------------------"
cat "${TEST_OUT}"
+
+ test_name="${TEST_EXIT%.exit}"
+ test_name="${test_name##*/}"
+ trash_dir="trash directory.$test_name"
+ trash_tgz_b64="trash.$test_name.base64"
+ if [ -d "$trash_dir" ]
+ then
+ tar czp "$trash_dir" |base64 $base64_opts >"$trash_tgz_b64"
+
+ trash_size=$(wc -c <"$trash_tgz_b64")
+ if [ $trash_size -gt 1048576 ]
+ then
+ # larger than 1MB
+ echo "$(tput setaf 1)Didn't include the trash directory of '$test_name' in the trace log, it's too big$(tput sgr0)"
+ continue
+ fi
+
+ new_combined_trash_size=$(($combined_trash_size + $trash_size))
+ if [ $new_combined_trash_size -gt 1048576 ]
+ then
+ echo "$(tput setaf 1)Didn't include the trash directory of '$test_name' in the trace log, there is plenty of trash in there already.$(tput sgr0)"
+ continue
+ fi
+ combined_trash_size=$new_combined_trash_size
+
+ # DO NOT modify these two 'echo'-ed strings below
+ # without updating 'ci/util/extract-trash-dirs.sh'
+ # as well.
+ echo "$(tput setaf 1)Start of trash directory of '$test_name':$(tput sgr0)"
+ cat "$trash_tgz_b64"
+ echo "$(tput setaf 1)End of trash directory of '$test_name'$(tput sgr0)"
+ fi
fi
done
+
+if [ $combined_trash_size -gt 0 ]
+then
+ echo "------------------------------------------------------------------------"
+ echo "Trash directories embedded in this log can be extracted by running:"
+ echo
+ echo " curl https://api.travis-ci.org/v3/job/$TRAVIS_JOB_ID/log.txt |./ci/util/extract-trash-dirs.sh"
+fi
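
(The embedding above is a plain tar|base64 round-trip. A minimal sketch
of the same idea outside the CI specifics, with an illustrative test
name, would be:

    tar czp "trash directory.t0000-basic" | base64 >trash.b64
    sed -e 's/\r*$//' trash.b64 | base64 -d | tar xzp

The sed step strips the CRs that raw Travis logs accumulate, matching
what ci/util/extract-trash-dirs.sh does below.)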
. ${0%/*}/lib-travisci.sh
-make coccicheck
+make --jobs=2 coccicheck
+
+set +x
+
+fail=
+for cocci_patch in contrib/coccinelle/*.patch
+do
+ if test -s "$cocci_patch"
+ then
+ echo "$(tput setaf 1)Coccinelle suggests the following changes in '$cocci_patch':$(tput sgr0)"
+ cat "$cocci_patch"
+ fail=UnfortunatelyYes
+ fi
+done
+
+if test -n "$fail"
+then
+ echo "$(tput setaf 1)error: Coccinelle suggested some changes$(tput sgr0)"
+ exit 1
+fi
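
(Running the same check locally amounts to, roughly:

    make -j2 coccicheck
    cat contrib/coccinelle/*.patch

A non-empty *.patch file means Coccinelle suggested a change.)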
save_good_tree
--- /dev/null
+#!/bin/sh
+
+error () {
+ echo >&2 "error: $@"
+ exit 1
+}
+
+find_embedded_trash () {
+ while read -r line
+ do
+ case "$line" in
+ *Start\ of\ trash\ directory\ of\ \'t[0-9][0-9][0-9][0-9]-*\':*)
+ test_name="${line#*\'}"
+ test_name="${test_name%\'*}"
+
+ return 0
+ esac
+ done
+
+ return 1
+}
+
+extract_embedded_trash () {
+ while read -r line
+ do
+ case "$line" in
+ *End\ of\ trash\ directory\ of\ \'$test_name\'*)
+ return
+ ;;
+ *)
+ printf '%s\n' "$line"
+ ;;
+ esac
+ done
+
+ error "unexpected end of input"
+}
+
+# Raw logs from Linux build jobs have CRLF line endings, while OSX
+# build jobs mostly have CRCRLF, except an odd line every now and
+# then that has CRCRCRLF. 'base64 -d' from 'coreutils' doesn't like
+# CRs and complains about "invalid input", so remove all CRs at the
+# end of lines.
+sed -e 's/\r*$//' | \
+while find_embedded_trash
+do
+ echo "Extracting trash directory of '$test_name'"
+
+ extract_embedded_trash |base64 -d |tar xzp
+done
static int want_auto[3] = { -1, -1, -1 };
+ if (fd < 1 || fd >= ARRAY_SIZE(want_auto))
+ BUG("file descriptor out of range: %d", fd);
+
if (var < 0)
var = git_use_color_default;
if (graph_size < GRAPH_MIN_SIZE) {
close(fd);
- die("graph file %s is too small", graph_file);
+ die(_("graph file %s is too small"), graph_file);
}
graph_map = xmmap(NULL, graph_size, PROT_READ, MAP_PRIVATE, fd, 0);
data = (const unsigned char *)graph_map;
graph_signature = get_be32(data);
if (graph_signature != GRAPH_SIGNATURE) {
- error("graph signature %X does not match signature %X",
+ error(_("graph signature %X does not match signature %X"),
graph_signature, GRAPH_SIGNATURE);
goto cleanup_fail;
}
graph_version = *(unsigned char*)(data + 4);
if (graph_version != GRAPH_VERSION) {
- error("graph version %X does not match version %X",
+ error(_("graph version %X does not match version %X"),
graph_version, GRAPH_VERSION);
goto cleanup_fail;
}
hash_version = *(unsigned char*)(data + 5);
if (hash_version != GRAPH_OID_VERSION) {
- error("hash version %X does not match version %X",
+ error(_("hash version %X does not match version %X"),
hash_version, GRAPH_OID_VERSION);
goto cleanup_fail;
}
chunk_lookup += GRAPH_CHUNKLOOKUP_WIDTH;
if (chunk_offset > graph_size - GIT_MAX_RAWSZ) {
- error("improper chunk offset %08x%08x", (uint32_t)(chunk_offset >> 32),
+ error(_("improper chunk offset %08x%08x"), (uint32_t)(chunk_offset >> 32),
(uint32_t)chunk_offset);
goto cleanup_fail;
}
}
if (chunk_repeated) {
- error("chunk id %08x appears multiple times", chunk_id);
+ error(_("chunk id %08x appears multiple times"), chunk_id);
goto cleanup_fail;
}
hashcpy(oid.hash, g->chunk_oid_lookup + g->hash_len * pos);
c = lookup_commit(the_repository, &oid);
if (!c)
- die("could not find commit %s", oid_to_hex(&oid));
+ die(_("could not find commit %s"), oid_to_hex(&oid));
c->graph_pos = pos;
return &commit_list_insert(c, pptr)->next;
}
oi.typep = &type;
if (packed_object_info(the_repository, pack, offset, &oi) < 0)
- die("unable to get type of object %s", oid_to_hex(oid));
+ die(_("unable to get type of object %s"), oid_to_hex(oid));
if (type != OBJ_COMMIT)
return 0;
strbuf_addstr(&packname, pack_indexes->items[i].string);
p = add_packed_git(packname.buf, packname.len, 1);
if (!p)
- die("error adding pack %s", packname.buf);
+ die(_("error adding pack %s"), packname.buf);
if (open_pack_index(p))
- die("error opening index for %s", packname.buf);
+ die(_("error opening index for %s"), packname.buf);
for_each_object_in_pack(p, add_packed_commits, &oids);
close_pack(p);
}
}
#define MAX_INCLUDE_DEPTH 10
-static const char include_depth_advice[] =
+static const char include_depth_advice[] = N_(
"exceeded maximum include depth (%d) while including\n"
" %s\n"
"from\n"
" %s\n"
-"Do you have circular includes?";
+"Do you have circular includes?");
static int handle_path_include(const char *path, struct config_include_data *inc)
{
int ret = 0;
expanded = expand_user_path(path, 0);
if (!expanded)
- return error("could not expand include path '%s'", path);
+ return error(_("could not expand include path '%s'"), path);
path = expanded;
/*
char *slash;
if (!cf || !cf->path)
- return error("relative config includes must come from files");
+ return error(_("relative config includes must come from files"));
slash = find_last_dir_sep(cf->path);
if (slash)
if (!access_or_die(path, R_OK, 0)) {
if (++inc->depth > MAX_INCLUDE_DEPTH)
- die(include_depth_advice, MAX_INCLUDE_DEPTH, path,
+ die(_(include_depth_advice), MAX_INCLUDE_DEPTH, path,
!cf ? "<unknown>" :
cf->name ? cf->name :
"the command line");
if (last_dot == NULL || last_dot == key) {
if (!quiet)
- error("key does not contain a section: %s", key);
+ error(_("key does not contain a section: %s"), key);
return -CONFIG_NO_SECTION_OR_NAME;
}
if (!last_dot[1]) {
if (!quiet)
- error("key does not contain variable name: %s", key);
+ error(_("key does not contain variable name: %s"), key);
return -CONFIG_NO_SECTION_OR_NAME;
}
if (!iskeychar(c) ||
(i == baselen + 1 && !isalpha(c))) {
if (!quiet)
- error("invalid key: %s", key);
+ error(_("invalid key: %s"), key);
goto out_free_ret_1;
}
c = tolower(c);
} else if (c == '\n') {
if (!quiet)
- error("invalid key (newline): %s", key);
+ error(_("invalid key (newline): %s"), key);
goto out_free_ret_1;
}
if (store_key)
pair = strbuf_split_str(text, '=', 2);
if (!pair[0])
- return error("bogus config parameter: %s", text);
+ return error(_("bogus config parameter: %s"), text);
if (pair[0]->len && pair[0]->buf[pair[0]->len - 1] == '=') {
strbuf_setlen(pair[0], pair[0]->len - 1);
strbuf_trim(pair[0]);
if (!pair[0]->len) {
strbuf_list_free(pair);
- return error("bogus config parameter: %s", text);
+ return error(_("bogus config parameter: %s"), text);
}
if (git_config_parse_key(pair[0]->buf, &canonical_name, NULL)) {
envw = xstrdup(env);
if (sq_dequote_to_argv(envw, &argv, &nr, &alloc) < 0) {
- ret = error("bogus format in " CONFIG_DATA_ENVIRONMENT);
+ ret = error(_("bogus format in %s"), CONFIG_DATA_ENVIRONMENT);
goto out;
}
else {
int abbrev = git_config_int(var, value);
if (abbrev < minimum_abbrev || abbrev > 40)
- return error("abbrev length out of range: %d", abbrev);
+ return error(_("abbrev length out of range: %d"), abbrev);
default_abbrev = abbrev;
}
return 0;
comment_line_char = value[0];
auto_comment_line_char = 0;
} else
- return error("core.commentChar should only be one character");
+ return error(_("core.commentChar should only be one character"));
return 0;
}
var, value);
}
+ if (!strcmp(var, "core.usereplacerefs")) {
+ read_replace_refs = git_config_bool(var, value);
+ return 0;
+ }
+
/* Add other config variables here and to Documentation/config.txt. */
return 0;
}
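
(The new core.usereplacerefs knob sits alongside the pre-existing
per-invocation switches; for example, each of these disables the
replace mechanism:

    git --no-replace-objects log
    GIT_NO_REPLACE_OBJECTS=1 git log
    git config core.usereplacerefs false   # persistent, per repository
)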
else if (!strcmp(value, "always"))
autorebase = AUTOREBASE_ALWAYS;
else
- return error("malformed value for %s", var);
+ return error(_("malformed value for %s"), var);
return 0;
}
else if (!strcmp(value, "current"))
push_default = PUSH_DEFAULT_CURRENT;
else {
- error("malformed value for %s: %s", var, value);
- return error("Must be one of nothing, matching, simple, "
- "upstream or current.");
+ error(_("malformed value for %s: %s"), var, value);
+ return error(_("must be one of nothing, matching, simple, "
+ "upstream or current"));
}
return 0;
}
buf = read_object_file(oid, &type, &size);
if (!buf)
- return error("unable to load config blob object '%s'", name);
+ return error(_("unable to load config blob object '%s'"), name);
if (type != OBJ_BLOB) {
free(buf);
- return error("reference '%s' does not point to a blob", name);
+ return error(_("reference '%s' does not point to a blob"), name);
}
ret = git_config_from_mem(fn, CONFIG_ORIGIN_BLOB, name, buf, size,
struct object_id oid;
if (get_oid(name, &oid) < 0)
- return error("unable to resolve config blob '%s'", name);
+ return error(_("unable to resolve config blob '%s'"), name);
return git_config_from_blob_oid(fn, name, &oid, data);
}
{
const char *v = getenv(k);
if (v && !git_parse_ulong(v, &val))
- die("failed to parse %s", k);
+ die(_("failed to parse %s"), k);
return val;
}
if (type == CONFIG_EVENT_SECTION) {
if (cf->var.len < 2 || cf->var.buf[cf->var.len - 1] != '.')
- return error("invalid section name '%s'", cf->var.buf);
+ return error(_("invalid section name '%s'"), cf->var.buf);
/* Is this the section we were looking for? */
store->is_keys_section =
static int write_error(const char *filename)
{
- error("failed to write new configuration file %s", filename);
+ error(_("failed to write new configuration file %s"), filename);
/* Same error code as "failed to rename". */
return 4;
*/
fd = hold_lock_file_for_update(&lock, config_filename, 0);
if (fd < 0) {
- error_errno("could not lock config file %s", config_filename);
+ error_errno(_("could not lock config file %s"), config_filename);
ret = CONFIG_NO_LOCK;
goto out_free;
}
in_fd = open(config_filename, O_RDONLY);
if ( in_fd < 0 ) {
if ( ENOENT != errno ) {
- error_errno("opening %s", config_filename);
+ error_errno(_("opening %s"), config_filename);
ret = CONFIG_INVALID_FILE; /* same as "invalid config file" */
goto out_free;
}
store.value_regex = (regex_t*)xmalloc(sizeof(regex_t));
if (regcomp(store.value_regex, value_regex,
REG_EXTENDED)) {
- error("invalid pattern: %s", value_regex);
+ error(_("invalid pattern: %s"), value_regex);
FREE_AND_NULL(store.value_regex);
ret = CONFIG_INVALID_PATTERN;
goto out_free;
if (git_config_from_file_with_options(store_aux,
config_filename,
&store, &opts)) {
- error("invalid config file %s", config_filename);
+ error(_("invalid config file %s"), config_filename);
ret = CONFIG_INVALID_FILE;
goto out_free;
}
if (contents == MAP_FAILED) {
if (errno == ENODEV && S_ISDIR(st.st_mode))
errno = EISDIR;
- error_errno("unable to mmap '%s'", config_filename);
+ error_errno(_("unable to mmap '%s'"), config_filename);
ret = CONFIG_INVALID_FILE;
contents = NULL;
goto out_free;
in_fd = -1;
if (chmod(get_lock_file_path(&lock), st.st_mode & 07777) < 0) {
- error_errno("chmod on %s failed", get_lock_file_path(&lock));
+ error_errno(_("chmod on %s failed"), get_lock_file_path(&lock));
ret = CONFIG_NO_WRITE;
goto out_free;
}
}
if (commit_lock_file(&lock) < 0) {
- error_errno("could not write config file %s", config_filename);
+ error_errno(_("could not write config file %s"), config_filename);
ret = CONFIG_NO_WRITE;
goto out_free;
}
memset(&store, 0, sizeof(store));
if (new_name && !section_name_is_ok(new_name)) {
- ret = error("invalid section name: %s", new_name);
+ ret = error(_("invalid section name: %s"), new_name);
goto out_no_rollback;
}
out_fd = hold_lock_file_for_update(&lock, config_filename, 0);
if (out_fd < 0) {
- ret = error("could not lock config file %s", config_filename);
+ ret = error(_("could not lock config file %s"), config_filename);
goto out;
}
}
if (chmod(get_lock_file_path(&lock), st.st_mode & 07777) < 0) {
- ret = error_errno("chmod on %s failed",
+ ret = error_errno(_("chmod on %s failed"),
get_lock_file_path(&lock));
goto out;
}
config_file = NULL;
commit_and_out:
if (commit_lock_file(&lock) < 0)
- ret = error_errno("could not write config file %s",
+ ret = error_errno(_("could not write config file %s"),
config_filename);
out:
if (config_file)
#undef config_error_nonbool
int config_error_nonbool(const char *var)
{
- return error("missing value for '%s'", var);
+ return error(_("missing value for '%s'"), var);
}
int parse_config_key(const char *var,
ifeq ($(filter no-error,$(DEVOPTS)),)
CFLAGS += -Werror
endif
+ifneq ($(filter pedantic,$(DEVOPTS)),)
+CFLAGS += -pedantic
+# don't warn for each N_ use
+CFLAGS += -DUSE_PARENS_AROUND_GETTEXT_N=0
+endif
CFLAGS += -Wdeclaration-after-statement
CFLAGS += -Wno-format-zero-length
CFLAGS += -Wold-style-definition
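
(With the above, a pedantic developer build is requested as:

    make DEVELOPER=1 DEVOPTS=pedantic
)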
COMPAT_OBJS += compat/mingw.o compat/winansi.o \
compat/win32/pthread.o compat/win32/syslog.o \
compat/win32/dirent.o
- BASIC_CFLAGS += -DPROTECT_NTFS_DEFAULT=1
+ BASIC_CFLAGS += -DWIN32 -DPROTECT_NTFS_DEFAULT=1
EXTLIBS += -lws2_32
GITLIBS += git.res
PTHREAD_LIBS =
* response does not necessarily mean an ACL problem, though.
*/
if (unexpected)
- die(_("The remote end hung up upon initial contact"));
+ die(_("the remote end hung up upon initial contact"));
else
die(_("Could not read from remote repository.\n\n"
"Please make sure you have the correct access rights\n"
}
if (die_on_error)
- die("server doesn't support '%s'", c);
+ die(_("server doesn't support '%s'"), c);
return 0;
}
}
if (die_on_error)
- die("server doesn't support feature '%s'", feature);
+ die(_("server doesn't support feature '%s'"), feature);
return 0;
}
argv_array_push(&server_capabilities_v2, reader->line);
if (reader->status != PACKET_READ_FLUSH)
- die("expected flush after capabilities");
+ die(_("expected flush after capabilities"));
}
enum protocol_version discover_version(struct packet_reader *reader)
static void check_no_capabilities(const char *line, int len)
{
if (strlen(line) != len)
- warning("Ignoring capabilities after first line '%s'",
+ warning(_("ignoring capabilities after first line '%s'"),
line + strlen(line));
}
if (extra_have && !strcmp(name, ".have")) {
oid_array_append(extra_have, &old_oid);
} else if (!strcmp(name, "capabilities^{}")) {
- die("protocol error: unexpected capabilities^{}");
+ die(_("protocol error: unexpected capabilities^{}"));
} else if (check_ref(name, flags)) {
struct ref *ref = alloc_ref(name);
oidcpy(&ref->old_oid, &old_oid);
return 0;
if (get_oid_hex(arg, &old_oid))
- die("protocol error: expected shallow sha-1, got '%s'", arg);
+ die(_("protocol error: expected shallow sha-1, got '%s'"), arg);
if (!shallow_points)
- die("repository on the other end cannot be shallow");
+ die(_("repository on the other end cannot be shallow"));
oid_array_append(shallow_points, &old_oid);
check_no_capabilities(line, len);
return 1;
case PACKET_READ_NORMAL:
len = reader->pktlen;
if (len > 4 && skip_prefix(reader->line, "ERR ", &arg))
- die("remote error: %s", arg);
+ die(_("remote error: %s"), arg);
break;
case PACKET_READ_FLUSH:
state = EXPECTING_DONE;
break;
case PACKET_READ_DELIM:
- die("invalid packet");
+ die(_("invalid packet"));
}
switch (state) {
case EXPECTING_SHALLOW:
if (process_shallow(reader->line, len, shallow_points))
break;
- die("protocol error: unexpected '%s'", reader->line);
+ die(_("protocol error: unexpected '%s'"), reader->line);
case EXPECTING_DONE:
break;
}
/* Process response from server */
while (packet_reader_read(reader) == PACKET_READ_NORMAL) {
if (!process_ref_v2(reader->line, &list))
- die("invalid ls-refs response: %s", reader->line);
+ die(_("invalid ls-refs response: %s"), reader->line);
}
if (reader->status != PACKET_READ_FLUSH)
- die("expected flush after ref listing");
+ die(_("expected flush after ref listing"));
return list;
}
return PROTO_SSH;
if (!strcmp(name, "file"))
return PROTO_FILE;
- die("I don't handle protocol '%s'", name);
+ die(_("protocol '%s' is not supported"), name);
}
static char *host_end(char **hoststart, int removebrackets)
int ka = 1;
if (setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &ka, sizeof(ka)) < 0)
- fprintf(stderr, "unable to set SO_KEEPALIVE on socket: %s\n",
- strerror(errno));
+ error_errno(_("unable to set SO_KEEPALIVE on socket"));
}
#ifndef NO_IPV6
hints.ai_protocol = IPPROTO_TCP;
if (flags & CONNECT_VERBOSE)
- fprintf(stderr, "Looking up %s ... ", host);
+ fprintf(stderr, _("Looking up %s ... "), host);
gai = getaddrinfo(host, port, &hints, &ai);
if (gai)
- die("Unable to look up %s (port %s) (%s)", host, port, gai_strerror(gai));
+ die(_("unable to look up %s (port %s) (%s)"), host, port, gai_strerror(gai));
if (flags & CONNECT_VERBOSE)
- fprintf(stderr, "done.\nConnecting to %s (port %s) ... ", host, port);
+ /* TRANSLATORS: this is the end of "Looking up %s ... " */
+ fprintf(stderr, _("done.\nConnecting to %s (port %s) ... "), host, port);
for (ai0 = ai; ai; ai = ai->ai_next, cnt++) {
sockfd = socket(ai->ai_family,
freeaddrinfo(ai0);
if (sockfd < 0)
- die("unable to connect to %s:\n%s", host, error_message.buf);
+ die(_("unable to connect to %s:\n%s"), host, error_message.buf);
enable_keepalive(sockfd);
if (flags & CONNECT_VERBOSE)
- fprintf(stderr, "done.\n");
+ /* TRANSLATORS: this is the end of "Connecting to %s (port %s) ... " */
+ fprintf_ln(stderr, _("done."));
strbuf_release(&error_message);
get_host_and_port(&host, &port);
if (flags & CONNECT_VERBOSE)
- fprintf(stderr, "Looking up %s ... ", host);
+ fprintf(stderr, _("Looking up %s ... "), host);
he = gethostbyname(host);
if (!he)
- die("Unable to look up %s (%s)", host, hstrerror(h_errno));
+ die(_("unable to look up %s (%s)"), host, hstrerror(h_errno));
nport = strtoul(port, &ep, 10);
if ( ep == port || *ep ) {
/* Not numeric */
struct servent *se = getservbyname(port,"tcp");
if ( !se )
- die("Unknown port %s", port);
+ die(_("unknown port %s"), port);
nport = se->s_port;
}
if (flags & CONNECT_VERBOSE)
- fprintf(stderr, "done.\nConnecting to %s (port %s) ... ", host, port);
+ /* TRANSLATORS: this is the end of "Looking up %s ... " */
+ fprintf(stderr, _("done.\nConnecting to %s (port %s) ... "), host, port);
for (cnt = 0, ap = he->h_addr_list; *ap; ap++, cnt++) {
memset(&sa, 0, sizeof sa);
}
if (sockfd < 0)
- die("unable to connect to %s:\n%s", host, error_message.buf);
+ die(_("unable to connect to %s:\n%s"), host, error_message.buf);
enable_keepalive(sockfd);
if (flags & CONNECT_VERBOSE)
- fprintf(stderr, "done.\n");
+ /* TRANSLATORS: this is the end of "Connecting to %s (port %s) ... " */
+ fprintf_ln(stderr, _("done."));
return sockfd;
}
get_host_and_port(&host, &port);
if (looks_like_command_line_option(host))
- die("strange hostname '%s' blocked", host);
+ die(_("strange hostname '%s' blocked"), host);
if (looks_like_command_line_option(port))
- die("strange port '%s' blocked", port);
+ die(_("strange port '%s' blocked"), port);
proxy = xmalloc(sizeof(*proxy));
child_process_init(proxy);
proxy->in = -1;
proxy->out = -1;
if (start_command(proxy))
- die("cannot start proxy %s", git_proxy_command);
+ die(_("cannot start proxy %s"), git_proxy_command);
fd[0] = proxy->out; /* read from proxy stdout */
fd[1] = proxy->in; /* write to proxy stdin */
return proxy;
path = strchr(end, separator);
if (!path || !*path)
- die("No path specified. See 'man git-pull' for valid url syntax");
+ die(_("no path specified; see 'git help pull' for valid url syntax"));
/*
* null-terminate hostname and point path to ~ for URL's like this:
case VARIANT_AUTO:
BUG("VARIANT_AUTO passed to push_ssh_options");
case VARIANT_SIMPLE:
- die("ssh variant 'simple' does not support -4");
+ die(_("ssh variant 'simple' does not support -4"));
case VARIANT_SSH:
case VARIANT_PLINK:
case VARIANT_PUTTY:
case VARIANT_AUTO:
BUG("VARIANT_AUTO passed to push_ssh_options");
case VARIANT_SIMPLE:
- die("ssh variant 'simple' does not support -6");
+ die(_("ssh variant 'simple' does not support -6"));
case VARIANT_SSH:
case VARIANT_PLINK:
case VARIANT_PUTTY:
case VARIANT_AUTO:
BUG("VARIANT_AUTO passed to push_ssh_options");
case VARIANT_SIMPLE:
- die("ssh variant 'simple' does not support setting port");
+ die(_("ssh variant 'simple' does not support setting port"));
case VARIANT_SSH:
argv_array_push(args, "-p");
break;
enum ssh_variant variant;
if (looks_like_command_line_option(ssh_host))
- die("strange hostname '%s' blocked", ssh_host);
+ die(_("strange hostname '%s' blocked"), ssh_host);
ssh = get_ssh_command();
if (ssh) {
child_process_init(conn);
if (looks_like_command_line_option(path))
- die("strange pathname '%s' blocked", path);
+ die(_("strange pathname '%s' blocked"), path);
strbuf_addstr(&cmd, prog);
strbuf_addch(&cmd, ' ');
argv_array_push(&conn->args, cmd.buf);
if (start_command(conn))
- die("unable to fork");
+ die(_("unable to fork"));
fd[0] = conn->out; /* read from child's stdout */
fd[1] = conn->in; /* write to child's stdin */
wiki_editpage Notconsidered "this page will not appear on local" false &&
wiki_editpage Othercategory "this page will not appear on local" false -c=Cattwo &&
wiki_editpage Tobeedited "this page have been modified" true -c=Catone &&
- wiki_delete_page Tobedeleted
+ wiki_delete_page Tobedeleted &&
git clone -c remote.origin.categories="Catone" \
mediawiki::'"$WIKI_URL"' mw_dir_14 &&
wiki_getallpage ref_page_14 Catone &&
git fetch .. subproj-br &&
git merge FETCH_HEAD &&
- chks="sub1
-sub2
-sub3
-sub4" &&
- chks_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chks
-TXT
-) &&
- chkms="main-sub1
-main-sub2
-main-sub3
-main-sub4" &&
- chkms_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chkms
-TXT
-) &&
-
- subfiles=$(git ls-files) &&
- check_equal "$subfiles" "$chkms
-$chks"
+ test_write_lines main-sub1 main-sub2 main-sub3 main-sub4 \
+ sub1 sub2 sub3 sub4 >expect &&
+ git ls-files >actual &&
+ test_cmp expect actual
)
'
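
(test_write_lines, used throughout these rewritten tests, is the stock
test-lib helper that prints one argument per line; it is essentially:

    test_write_lines () {
        printf "%s\n" "$@"
    }
)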
git fetch .. subproj-br &&
git merge FETCH_HEAD &&
- chks="sub1
-sub2
-sub3
-sub4" &&
- chks_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chks
-TXT
-) &&
- chkms="main-sub1
-main-sub2
-main-sub3
-main-sub4" &&
- chkms_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chkms
-TXT
-) &&
- allchanges=$(git log --name-only --pretty=format:"" | sort | sed "/^$/d") &&
- check_equal "$allchanges" "$chkms
-$chks"
+ test_write_lines main-sub1 main-sub2 main-sub3 main-sub4 \
+ sub1 sub2 sub3 sub4 >expect &&
+ git log --name-only --pretty=format:"" >log &&
+ sort <log | sed "/^\$/ d" >actual &&
+ test_cmp expect actual
)
'
cd "$subtree_test_count" &&
git subtree pull --prefix="sub dir" ./"sub proj" master &&
- chkm="main1
-main2" &&
- chks="sub1
-sub2
-sub3
-sub4" &&
- chks_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chks
-TXT
-) &&
- chkms="main-sub1
-main-sub2
-main-sub3
-main-sub4" &&
- chkms_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chkms
-TXT
-) &&
- mainfiles=$(git ls-files) &&
- check_equal "$mainfiles" "$chkm
-$chkms_sub
-$chks_sub"
-)
+ test_write_lines main1 main2 >chkm &&
+ test_write_lines main-sub1 main-sub2 main-sub3 main-sub4 >chkms &&
+ sed "s,^,sub dir/," chkms >chkms_sub &&
+ test_write_lines sub1 sub2 sub3 sub4 >chks &&
+ sed "s,^,sub dir/," chks >chks_sub &&
+
+ cat chkm chkms_sub chks_sub >expect &&
+ git ls-files >actual &&
+ test_cmp expect actual
+ )
'
next_test
test_create_commit "$subtree_test_count/sub proj" sub1 &&
(
cd "$subtree_test_count" &&
- git config log.date relative
+ git config log.date relative &&
git fetch ./"sub proj" master &&
git subtree add --prefix="sub dir" FETCH_HEAD
) &&
cd "$subtree_test_count" &&
git subtree pull --prefix="sub dir" ./"sub proj" master &&
- chkm="main1
-main2" &&
- chks="sub1
-sub2
-sub3
-sub4" &&
- chks_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chks
-TXT
-) &&
- chkms="main-sub1
-main-sub2
-main-sub3
-main-sub4" &&
- chkms_sub=$(cat <<TXT | sed '\''s,^,sub dir/,'\''
-$chkms
-TXT
-) &&
+ test_write_lines main1 main2 >chkm &&
+ test_write_lines sub1 sub2 sub3 sub4 >chks &&
+ test_write_lines main-sub1 main-sub2 main-sub3 main-sub4 >chkms &&
+ sed "s,^,sub dir/," chkms >chkms_sub &&
# main-sub?? and /"sub dir"/main-sub?? both change, because those are the
# changes that were split into their own history. And "sub dir"/sub?? never
# change, since they were *only* changed in the subtree branch.
- allchanges=$(git log --name-only --pretty=format:"" | sort | sed "/^$/d") &&
- expected=''"$(cat <<TXT | sort
-$chkms
-$chkm
-$chks
-$chkms_sub
-TXT
-)"'' &&
- check_equal "$allchanges" "$expected"
+ git log --name-only --pretty=format:"" >log &&
+ sort <log >sorted-log &&
+ sed "/^$/ d" sorted-log >actual &&
+
+ cat chkms chkm chks chkms_sub >expect-unsorted &&
+ sort expect-unsorted >expect &&
+ test_cmp expect actual
)
'
--- /dev/null
+init.sh whitespace=-indent-with-non-tab
--- /dev/null
+Configuration for VS Code
+=========================
+
+[VS Code](https://code.visualstudio.com/) is a lightweight but powerful source
+code editor which runs on your desktop and is available for
+[Windows](https://code.visualstudio.com/docs/setup/windows),
+[macOS](https://code.visualstudio.com/docs/setup/mac) and
+[Linux](https://code.visualstudio.com/docs/setup/linux). Among other languages,
+it has [support for C/C++ via an extension](https://github.com/Microsoft/vscode-cpptools).
+
+To start developing Git with VS Code, simply run the Unix shell script called
+`init.sh` in this directory, which creates the configuration files in
+`.vscode/` that VS Code consumes. `init.sh` needs access to `make` and `gcc`,
+so run the script in a Git SDK shell if you are using Windows.
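
(A typical first run, assuming the `code` command is on PATH:

    cd git
    contrib/vscode/init.sh
    code .
)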
--- /dev/null
+#!/bin/sh
+
+die () {
+ echo "$*" >&2
+ exit 1
+}
+
+cd "$(dirname "$0")"/../.. ||
+die "Could not cd to top-level directory"
+
+mkdir -p .vscode ||
+die "Could not create .vscode/"
+
+# General settings
+
+cat >.vscode/settings.json.new <<\EOF ||
+{
+ "C_Cpp.intelliSenseEngine": "Default",
+ "C_Cpp.intelliSenseEngineFallback": "Disabled",
+ "[git-commit]": {
+ "editor.wordWrap": "wordWrapColumn",
+ "editor.wordWrapColumn": 72
+ },
+ "[c]": {
+ "editor.detectIndentation": false,
+ "editor.insertSpaces": false,
+ "editor.tabSize": 8,
+ "editor.wordWrap": "wordWrapColumn",
+ "editor.wordWrapColumn": 80,
+ "files.trimTrailingWhitespace": true
+ },
+ "files.associations": {
+ "*.h": "c",
+ "*.c": "c"
+ },
+ "cSpell.ignorePaths": [
+ ],
+ "cSpell.words": [
+ "DATAW",
+ "DBCACHED",
+ "DFCHECK",
+ "DTYPE",
+ "Hamano",
+ "HCAST",
+ "HEXSZ",
+ "HKEY",
+ "HKLM",
+ "IFGITLINK",
+ "IFINVALID",
+ "ISBROKEN",
+ "ISGITLINK",
+ "ISSYMREF",
+ "Junio",
+ "LPDWORD",
+ "LPPROC",
+ "LPWSTR",
+ "MSVCRT",
+ "NOARG",
+ "NOCOMPLETE",
+ "NOINHERIT",
+ "RENORMALIZE",
+ "STARTF",
+ "STARTUPINFOEXW",
+ "Schindelin",
+ "UCRT",
+ "YESNO",
+ "argcp",
+ "beginthreadex",
+ "committish",
+ "contentp",
+ "cpath",
+ "cpidx",
+ "ctim",
+ "dequote",
+ "envw",
+ "ewah",
+ "fdata",
+ "fherr",
+ "fhin",
+ "fhout",
+ "fragp",
+ "fsmonitor",
+ "hnsec",
+ "idents",
+ "includeif",
+ "interpr",
+ "iprog",
+ "isexe",
+ "iskeychar",
+ "kompare",
+ "mksnpath",
+ "mktag",
+ "mktree",
+ "mmblob",
+ "mmbuffer",
+ "mmfile",
+ "noenv",
+ "nparents",
+ "ntpath",
+ "ondisk",
+ "ooid",
+ "oplen",
+ "osdl",
+ "pnew",
+ "pold",
+ "ppinfo",
+ "pushf",
+ "pushv",
+ "rawsz",
+ "rebasing",
+ "reencode",
+ "repo",
+ "rerere",
+ "scld",
+ "sharedrepo",
+ "spawnv",
+ "spawnve",
+ "spawnvpe",
+ "strdup'ing",
+ "submodule",
+ "submodules",
+ "topath",
+ "topo",
+ "tpatch",
+ "unexecutable",
+ "unhide",
+ "unkc",
+ "unkv",
+ "unmark",
+ "unmatch",
+ "unsets",
+ "unshown",
+ "untracked",
+ "untrackedcache",
+ "unuse",
+ "upos",
+ "uval",
+ "vreportf",
+ "wargs",
+ "wargv",
+ "wbuffer",
+ "wcmd",
+ "wcsnicmp",
+ "wcstoutfdup",
+ "wdeltaenv",
+ "wdir",
+ "wenv",
+ "wenvblk",
+ "wenvcmp",
+ "wenviron",
+ "wenvpos",
+ "wenvsz",
+ "wfile",
+ "wfilename",
+ "wfopen",
+ "wfreopen",
+ "wfullpath",
+ "which'll",
+ "wlink",
+ "wmain",
+ "wmkdir",
+ "wmktemp",
+ "wnewpath",
+ "wotype",
+ "wpath",
+ "wpathname",
+ "wpgmptr",
+ "wpnew",
+ "wpointer",
+ "wpold",
+ "wpos",
+ "wputenv",
+ "wrmdir",
+ "wship",
+ "wtarget",
+ "wtemplate",
+ "wunlink",
+ "xcalloc",
+ "xgetcwd",
+ "xmallocz",
+ "xmemdupz",
+ "xmmap",
+ "xopts",
+ "xrealloc",
+ "xsnprintf",
+ "xutftowcs",
+ "xutftowcsn",
+ "xwcstoutf"
+ ],
+ "cSpell.ignoreRegExpList": [
+ "\\\"(DIRC|FSMN|REUC|UNTR)\\\"",
+ "\\\\u[0-9a-fA-Fx]{4}\\b",
+ "\\b(filfre|frotz|xyzzy)\\b",
+ "\\bCMIT_FMT_DEFAULT\\b",
+ "\\bde-munge\\b",
+ "\\bGET_OID_DISAMBIGUATORS\\b",
+ "\\bHASH_RENORMALIZE\\b",
+ "\\bTREESAMEness\\b",
+ "\\bUSE_STDEV\\b",
+ "\\Wchar *\\*\\W*utfs\\W",
+ "cURL's",
+ "nedmalloc'ed",
+ "ntifs\\.h",
+ ],
+}
+EOF
+die "Could not write settings.json"
+
+# Infer some setup-specific locations/names
+
+GCCPATH="$(which gcc)"
+GDBPATH="$(which gdb)"
+MAKECOMMAND="make -j5 DEVELOPER=1"
+OSNAME=
+X=
+case "$(uname -s)" in
+MINGW*)
+ GCCPATH="$(cygpath -am "$GCCPATH")"
+ GDBPATH="$(cygpath -am "$GDBPATH")"
+ MAKE_BASH="$(cygpath -am /git-cmd.exe) --command=usr\\\\bin\\\\bash.exe"
+ MAKECOMMAND="$MAKE_BASH -lc \\\"$MAKECOMMAND\\\""
+ OSNAME=Win32
+ X=.exe
+ ;;
+Linux)
+ OSNAME=Linux
+ ;;
+Darwin)
+ OSNAME=macOS
+ ;;
+esac
+
+# Default build task
+
+cat >.vscode/tasks.json.new <<EOF ||
+{
+ // See https://go.microsoft.com/fwlink/?LinkId=733558
+ // for the documentation about the tasks.json format
+ "version": "2.0.0",
+ "tasks": [
+ {
+ "label": "make",
+ "type": "shell",
+ "command": "$MAKECOMMAND",
+ "group": {
+ "kind": "build",
+ "isDefault": true
+ }
+ }
+ ]
+}
+EOF
+die "Could not install default build task"
+
+# Debugger settings
+
+cat >.vscode/launch.json.new <<EOF ||
+{
+ // Use IntelliSense to learn about possible attributes.
+ // Hover to view descriptions of existing attributes.
+ // For more information, visit:
+ // https://go.microsoft.com/fwlink/?linkid=830387
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "(gdb) Launch",
+ "type": "cppdbg",
+ "request": "launch",
+ "program": "\${workspaceFolder}/git$X",
+ "args": [],
+ "stopAtEntry": false,
+ "cwd": "\${workspaceFolder}",
+ "environment": [],
+ "externalConsole": true,
+ "MIMode": "gdb",
+ "miDebuggerPath": "$GDBPATH",
+ "setupCommands": [
+ {
+ "description": "Enable pretty-printing for gdb",
+ "text": "-enable-pretty-printing",
+ "ignoreFailures": true
+ }
+ ]
+ }
+ ]
+}
+EOF
+die "Could not write launch configuration"
+
+# C/C++ extension settings
+
+make -f - OSNAME=$OSNAME GCCPATH="$GCCPATH" vscode-init \
+ >.vscode/c_cpp_properties.json <<\EOF ||
+include Makefile
+
+vscode-init:
+ @mkdir -p .vscode && \
+ incs= && defs= && \
+ for e in $(ALL_CFLAGS) \
+ '-DGIT_EXEC_PATH="$(gitexecdir_SQ)"' \
+ '-DGIT_LOCALE_PATH="$(localedir_relative_SQ)"' \
+ '-DBINDIR="$(bindir_relative_SQ)"' \
+ '-DFALLBACK_RUNTIME_PREFIX="$(prefix_SQ)"' \
+ '-DDEFAULT_GIT_TEMPLATE_DIR="$(template_dir_SQ)"' \
+ '-DETC_GITCONFIG="$(ETC_GITCONFIG_SQ)"' \
+ '-DETC_GITATTRIBUTES="$(ETC_GITATTRIBUTES_SQ)"' \
+ '-DGIT_LOCALE_PATH="$(localedir_relative_SQ)"' \
+ '-DCURL_DISABLE_TYPECHECK', \
+ '-DGIT_HTML_PATH="$(htmldir_relative_SQ)"' \
+ '-DGIT_MAN_PATH="$(mandir_relative_SQ)"' \
+ '-DGIT_INFO_PATH="$(infodir_relative_SQ)"'; do \
+ case "$$e" in \
+ -I.) \
+ incs="$$(printf '% 16s"$${workspaceRoot}",\n%s' \
+ "" "$$incs")" \
+ ;; \
+ -I/*) \
+ incs="$$(printf '% 16s"%s",\n%s' \
+ "" "$${e#-I}" "$$incs")" \
+ ;; \
+ -I*) \
+ incs="$$(printf '% 16s"$${workspaceRoot}/%s",\n%s' \
+ "" "$${e#-I}" "$$incs")" \
+ ;; \
+ -D*) \
+ defs="$$(printf '% 16s"%s",\n%s' \
+ "" "$$(echo "$${e#-D}" | sed 's/"/\\&/g')" \
+ "$$defs")" \
+ ;; \
+ esac; \
+ done && \
+ echo '{' && \
+ echo ' "configurations": [' && \
+ echo ' {' && \
+ echo ' "name": "$(OSNAME)",' && \
+ echo ' "intelliSenseMode": "clang-x64",' && \
+ echo ' "includePath": [' && \
+ echo "$$incs" | sort | sed '$$s/,$$//' && \
+ echo ' ],' && \
+ echo ' "defines": [' && \
+ echo "$$defs" | sort | sed '$$s/,$$//' && \
+ echo ' ],' && \
+ echo ' "browse": {' && \
+ echo ' "limitSymbolsToIncludedHeaders": true,' && \
+ echo ' "databaseFilename": "",' && \
+ echo ' "path": [' && \
+ echo ' "$${workspaceRoot}"' && \
+ echo ' ]' && \
+ echo ' },' && \
+ echo ' "cStandard": "c11",' && \
+ echo ' "cppStandard": "c++17",' && \
+ echo ' "compilerPath": "$(GCCPATH)"' && \
+ echo ' }' && \
+ echo ' ],' && \
+ echo ' "version": 4' && \
+ echo '}'
+EOF
+die "Could not write settings for the C/C++ extension"
+
+for file in .vscode/settings.json .vscode/tasks.json .vscode/launch.json
+do
+ if test -f $file
+ then
+ if git diff --no-index --quiet --exit-code $file $file.new
+ then
+ rm $file.new
+ else
+ printf "The file $file.new has these changes:\n\n"
+ git --no-pager diff --no-index $file $file.new
+ printf "\n\nMaybe \`mv $file.new $file\`?\n\n"
+ fi
+ else
+ mv $file.new $file
+ fi
+done
/* fall through */
return text_eol_is_crlf() ? EOL_CRLF : EOL_LF;
}
- warning("Illegal crlf_action %d\n", (int)crlf_action);
+ warning(_("illegal crlf_action %d"), (int)crlf_action);
return core_eol;
}
* CRLFs would not be restored by checkout
*/
if (conv_flags & CONV_EOL_RNDTRP_DIE)
- die(_("CRLF would be replaced by LF in %s."), path);
+ die(_("CRLF would be replaced by LF in %s"), path);
else if (conv_flags & CONV_EOL_RNDTRP_WARN)
warning(_("CRLF will be replaced by LF in %s.\n"
"The file will have its original line"
- " endings in your working directory."), path);
+ " endings in your working directory"), path);
} else if (old_stats->lonelf && !new_stats->lonelf ) {
/*
* CRLFs would be added by checkout
else if (conv_flags & CONV_EOL_RNDTRP_WARN)
warning(_("LF will be replaced by CRLF in %s.\n"
"The file will have its original line"
- " endings in your working directory."), path);
+ " endings in your working directory"), path);
}
}
struct strbuf *buf, const char *enc, int conv_flags)
{
char *dst;
- int dst_len;
+ size_t dst_len;
int die_on_error = conv_flags & CONV_WRITE_OBJECT;
/*
*/
if (die_on_error && check_roundtrip(enc)) {
char *re_src;
- int re_src_len;
+ size_t re_src_len;
re_src = reencode_string_len(dst, dst_len,
enc, default_encoding,
struct strbuf *buf, const char *enc)
{
char *dst;
- int dst_len;
+ size_t dst_len;
/*
* No encoding is specified or there is nothing to encode.
dst = reencode_string_len(src, src_len, enc, default_encoding,
&dst_len);
if (!dst) {
- error("failed to encode '%s' from %s to %s",
- path, default_encoding, enc);
+ error(_("failed to encode '%s' from %s to %s"),
+ path, default_encoding, enc);
return 0;
}
if (start_command(&child_process)) {
strbuf_release(&cmd);
- return error("cannot fork to run external filter '%s'", params->cmd);
+ return error(_("cannot fork to run external filter '%s'"),
+ params->cmd);
}
sigchain_push(SIGPIPE, SIG_IGN);
if (close(child_process.in))
write_err = 1;
if (write_err)
- error("cannot feed the input to external filter '%s'", params->cmd);
+ error(_("cannot feed the input to external filter '%s'"),
+ params->cmd);
sigchain_pop(SIGPIPE);
status = finish_command(&child_process);
if (status)
- error("external filter '%s' failed %d", params->cmd, status);
+ error(_("external filter '%s' failed %d"), params->cmd, status);
strbuf_release(&cmd);
return (write_err || status);
return 0; /* error was already reported */
if (strbuf_read(&nbuf, async.out, len) < 0) {
- err = error("read from external filter '%s' failed", cmd);
+ err = error(_("read from external filter '%s' failed"), cmd);
}
if (close(async.out)) {
- err = error("read from external filter '%s' failed", cmd);
+ err = error(_("read from external filter '%s' failed"), cmd);
}
if (finish_async(&async)) {
- err = error("external filter '%s' failed", cmd);
+ err = error(_("external filter '%s' failed"), cmd);
}
if (!err) {
* Something went wrong with the protocol filter.
* Force shutdown and restart if another blob requires filtering.
*/
- error("external filter '%s' failed", entry->subprocess.cmd);
+ error(_("external filter '%s' failed"), entry->subprocess.cmd);
subprocess_stop(&subprocess_map, &entry->subprocess);
free(entry);
}
else if (wanted_capability & CAP_SMUDGE)
filter_type = "smudge";
else
- die("unexpected filter type");
+ die(_("unexpected filter type"));
sigchain_push(SIGPIPE, SIG_IGN);
err = strlen(path) > LARGE_PACKET_DATA_MAX - strlen("pathname=\n");
if (err) {
- error("path name too long for external filter");
+ error(_("path name too long for external filter"));
goto done;
}
assert(subprocess_map_initialized);
entry = (struct cmd2process *)subprocess_find_entry(&subprocess_map, cmd);
if (!entry) {
- error("external filter '%s' is not available anymore although "
- "not all paths have been filtered", cmd);
+ error(_("external filter '%s' is not available anymore although "
+ "not all paths have been filtered"), cmd);
return 0;
}
process = &entry->subprocess.process;
ret |= apply_filter(path, src, len, -1, dst, ca.drv, CAP_CLEAN, NULL);
if (!ret && ca.drv && ca.drv->required)
- die("%s: clean filter '%s' failed", path, ca.drv->name);
+ die(_("%s: clean filter '%s' failed"), path, ca.drv->name);
if (ret && dst) {
src = dst->buf;
assert(ca.drv->clean || ca.drv->process);
if (!apply_filter(path, NULL, 0, fd, dst, ca.drv, CAP_CLEAN, NULL))
- die("%s: clean filter '%s' failed", path, ca.drv->name);
+ die(_("%s: clean filter '%s' failed"), path, ca.drv->name);
encode_to_git(path, dst->buf, dst->len, dst, ca.working_tree_encoding, conv_flags);
crlf_to_git(istate, path, dst->buf, dst->len, dst, ca.crlf_action, conv_flags);
ret_filter = apply_filter(
path, src, len, -1, dst, ca.drv, CAP_SMUDGE, dco);
if (!ret_filter && ca.drv && ca.drv->required)
- die("%s: smudge filter %s failed", path, ca.drv->name);
+ die(_("%s: smudge filter %s failed"), path, ca.drv->name);
return ret | ret_filter;
}
return COLOR_MOVED_ZEBRA;
else if (!strcmp(arg, "default"))
return COLOR_MOVED_DEFAULT;
+ else if (!strcmp(arg, "dimmed-zebra"))
+ return COLOR_MOVED_ZEBRA_DIM;
else if (!strcmp(arg, "dimmed_zebra"))
return COLOR_MOVED_ZEBRA_DIM;
else
- return error(_("color moved setting must be one of 'no', 'default', 'blocks', 'zebra', 'dimmed_zebra', 'plain'"));
+ return error(_("color moved setting must be one of 'no', 'default', 'blocks', 'zebra', 'dimmed-zebra', 'plain'"));
}
static int parse_color_moved_ws(const char *arg)
if (regcomp(ecbdata->diff_words->word_regex,
o->word_regex,
REG_EXTENDED | REG_NEWLINE))
- die ("Invalid regular expression: %s",
- o->word_regex);
+ die("invalid regular expression: %s",
+ o->word_regex);
}
for (i = 0; i < ARRAY_SIZE(diff_words_styles); i++) {
if (o->word_diff == diff_words_styles[i].type) {
if (found_dup)
continue;
- error("pathspec '%s' did not match any file(s) known to git.",
+ error(_("pathspec '%s' did not match any file(s) known to git"),
pathspec->items[num].original);
errors++;
}
dir->unmanaged_exclude_files++;
el = add_exclude_list(dir, EXC_FILE, fname);
if (add_excludes(fname, "", 0, el, NULL, oid_stat) < 0)
- die("cannot use %s as an exclude file", fname);
+ die(_("cannot use %s as an exclude file"), fname);
}
void add_excludes_from_file(struct dir_struct *dir, const char *fname)
return NULL;
if (!ident_in_untracked(dir->untracked)) {
- warning(_("Untracked cache is disabled on this system or location."));
+ warning(_("untracked cache is disabled on this system or location"));
return NULL;
}
return;
if (repo_read_index(&subrepo) < 0)
- die("index file corrupt in repo %s", subrepo.gitdir);
+ die(_("index file corrupt in repo %s"), subrepo.gitdir);
for (i = 0; i < subrepo.index->cache_nr; i++) {
const struct cache_entry *ce = subrepo.index->cache[i];
const char *askpass_program;
const char *excludes_file;
enum auto_crlf auto_crlf = AUTO_CRLF_FALSE;
-int check_replace_refs = 1; /* NEEDSWORK: rename to read_replace_refs */
+int read_replace_refs = 1;
char *git_replace_ref_base;
enum eol core_eol = EOL_UNSET;
int global_conv_flags_eol = CONV_EOL_RNDTRP_WARN;
strbuf_addf(&buf, "refs/namespaces/%s", (*c)->buf);
strbuf_list_free(components);
if (check_refname_format(buf.buf, 0))
- die("bad git namespace path \"%s\"", raw_namespace);
+ die(_("bad git namespace path \"%s\""), raw_namespace);
strbuf_addch(&buf, '/');
return strbuf_detach(&buf, NULL);
}
argv_array_clear(&to_free);
if (getenv(NO_REPLACE_OBJECTS_ENVIRONMENT))
- check_replace_refs = 0;
+ read_replace_refs = 0;
replace_ref_base = getenv(GIT_REPLACE_REF_BASE_ENVIRONMENT);
free(git_replace_ref_base);
git_replace_ref_base = xstrdup(replace_ref_base ? replace_ref_base
static void set_git_dir_1(const char *path)
{
if (setenv(GIT_DIR_ENVIRONMENT, path, 1))
- die("could not set GIT_DIR to '%s'", path);
+ die(_("could not set GIT_DIR to '%s'"), path);
setup_git_env(path);
}
}
va_end(param);
if (MAX_ARGS <= argc)
- return error("too many args to run %s", cmd);
+ return error(_("too many args to run %s"), cmd);
argv[argc] = NULL;
return execv_git_cmd(argv);
void fetch_negotiator_init(struct fetch_negotiator *negotiator,
const char *algorithm)
{
- if (algorithm && !strcmp(algorithm, "skipping")) {
- skipping_negotiator_init(negotiator);
- return;
+ if (algorithm) {
+ if (!strcmp(algorithm, "skipping")) {
+ skipping_negotiator_init(negotiator);
+ return;
+ } else if (!strcmp(algorithm, "default")) {
+ /* Fall through to default initialization */
+ } else {
+ die("unknown fetch negotiation algorithm '%s'", algorithm);
+ }
}
default_negotiator_init(negotiator);
}
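
(The algorithm is selected with the fetch.negotiationAlgorithm
configuration; anything other than "skipping" or "default" now errors
out instead of being silently ignored:

    git -c fetch.negotiationAlgorithm=skipping fetch origin
    git -c fetch.negotiationAlgorithm=typo fetch origin   # now dies
)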
transport_set_option(transport, TRANS_OPT_FROM_PROMISOR, "1");
transport_set_option(transport, TRANS_OPT_NO_DEPENDENTS, "1");
- transport_fetch_refs(transport, ref, NULL);
+ transport_fetch_refs(transport, ref);
fetch_if_missing = original_fetch_if_missing;
}
#include "object-store.h"
#include "connected.h"
#include "fetch-negotiator.h"
+#include "fsck.h"
static int transfer_unpack_limit = -1;
static int fetch_unpack_limit = -1;
static struct lock_file shallow_lock;
static const char *alternate_shallow_file;
static char *negotiation_algorithm;
+static struct strbuf fsck_msg_types = STRBUF_INIT;
/* Remember to update object flag allocation in object.h */
#define COMPLETE (1U << 0)
*/
argv_array_push(&cmd.args, "--fsck-objects");
else
- argv_array_push(&cmd.args, "--strict");
+ argv_array_pushf(&cmd.args, "--strict%s",
+ fsck_msg_types.buf);
}
cmd.in = demux.out;
int ret;
if (packet_reader_peek(reader) != PACKET_READ_NORMAL)
- die("error reading section header '%s'", section);
+ die(_("error reading section header '%s'"), section);
ret = !strcmp(reader->line, section);
if (!peek) {
if (!ret)
- die("expected '%s', received '%s'",
+ die(_("expected '%s', received '%s'"),
section, reader->line);
packet_reader_read(reader);
}
continue;
}
- die("unexpected acknowledgment line: '%s'", reader->line);
+ die(_("unexpected acknowledgment line: '%s'"), reader->line);
}
if (reader->status != PACKET_READ_FLUSH &&
reader->status != PACKET_READ_DELIM)
- die("error processing acks: %d", reader->status);
+ die(_("error processing acks: %d"), reader->status);
/* return 0 if no common, 1 if there are common, or 2 if ready */
return received_ready ? 2 : (received_ack ? 1 : 0);
if (reader->status != PACKET_READ_FLUSH &&
reader->status != PACKET_READ_DELIM)
- die("error processing shallow info: %d", reader->status);
+ die(_("error processing shallow info: %d"), reader->status);
setup_alternate_shallow(&shallow_lock, &alternate_shallow_file, NULL);
args->deepen = 1;
}
-static void receive_wanted_refs(struct packet_reader *reader, struct ref *refs)
+static void receive_wanted_refs(struct packet_reader *reader,
+ struct ref **sought, int nr_sought)
{
process_section_header(reader, "wanted-refs", 0);
while (packet_reader_read(reader) == PACKET_READ_NORMAL) {
struct object_id oid;
const char *end;
- struct ref *r = NULL;
+ int i;
if (parse_oid_hex(reader->line, &oid, &end) || *end++ != ' ')
- die("expected wanted-ref, got '%s'", reader->line);
+ die(_("expected wanted-ref, got '%s'"), reader->line);
- for (r = refs; r; r = r->next) {
- if (!strcmp(end, r->name)) {
- oidcpy(&r->old_oid, &oid);
+ for (i = 0; i < nr_sought; i++) {
+ if (!strcmp(end, sought[i]->name)) {
+ oidcpy(&sought[i]->old_oid, &oid);
break;
}
}
- if (!r)
- die("unexpected wanted-ref: '%s'", reader->line);
+ if (i == nr_sought)
+ die(_("unexpected wanted-ref: '%s'"), reader->line);
}
if (reader->status != PACKET_READ_DELIM)
- die("error processing wanted refs: %d", reader->status);
+ die(_("error processing wanted refs: %d"), reader->status);
}
enum fetch_state {
receive_shallow_info(args, &reader);
if (process_section_header(&reader, "wanted-refs", 1))
- receive_wanted_refs(&reader, ref);
+ receive_wanted_refs(&reader, sought, nr_sought);
/* get the pack */
process_section_header(&reader, "packfile", 0);
return ref;
}
+static int fetch_pack_config_cb(const char *var, const char *value, void *cb)
+{
+ if (!strcmp(var, "fetch.fsck.skiplist")) {
+ const char *path;
+
+ if (git_config_pathname(&path, var, value))
+ return 1;
+ strbuf_addf(&fsck_msg_types, "%cskiplist=%s",
+ fsck_msg_types.len ? ',' : '=', path);
+ free((char *)path);
+ return 0;
+ }
+
+ if (skip_prefix(var, "fetch.fsck.", &var)) {
+ if (is_valid_msg_type(var, value))
+ strbuf_addf(&fsck_msg_types, "%c%s=%s",
+ fsck_msg_types.len ? ',' : '=', var, value);
+ else
+ warning("Skipping unknown msg id '%s'", var);
+ return 0;
+ }
+
+ return git_default_config(var, value, cb);
+}
+
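A stand-alone sketch of how the callback above accumulates the combined
fsck option (the config keys and values here are illustrative): the
first entry is joined with '=', later ones with ',', so index-pack ends
up invoked with a single flag such as
"--strict=skiplist=/path,badTag=ignore".

	#include <stdio.h>

	int main(void)
	{
		/* illustrative stand-ins for fetch.fsck.* config entries */
		const char *entries[][2] = {
			{ "skiplist", "/path" },
			{ "badTag", "ignore" },
		};
		char types[128] = "";
		size_t i, len = 0;

		for (i = 0; i < 2; i++)
			len += snprintf(types + len, sizeof(types) - len,
					"%c%s=%s", len ? ',' : '=',
					entries[i][0], entries[i][1]);
		/* prints "--strict=skiplist=/path,badTag=ignore" */
		printf("--strict%s\n", types);
		return 0;
	}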
static void fetch_pack_config(void)
{
git_config_get_int("fetch.unpacklimit", &fetch_unpack_limit);
git_config_get_string("fetch.negotiationalgorithm",
&negotiation_algorithm);
- git_config(git_default_config, NULL);
+ git_config(fetch_pack_config_cb, NULL);
}
static void fetch_pack_setup(void)
}
static void update_shallow(struct fetch_pack_args *args,
- struct ref *refs,
+ struct ref **sought, int nr_sought,
struct shallow_info *si)
{
struct oid_array ref = OID_ARRAY_INIT;
int *status;
int i;
- struct ref *r;
if (args->deepen && alternate_shallow_file) {
if (*alternate_shallow_file == '\0') { /* --unshallow */
remove_nonexistent_theirs_shallow(si);
if (!si->nr_ours && !si->nr_theirs)
return;
- for (r = refs; r; r = r->next)
- oid_array_append(&ref, &r->old_oid);
+ for (i = 0; i < nr_sought; i++)
+ oid_array_append(&ref, &sought[i]->old_oid);
si->ref = &ref;
if (args->update_shallow) {
* remote is also shallow, check what ref is safe to update
* without updating .git/shallow
*/
- status = xcalloc(ref.nr, sizeof(*status));
+ status = xcalloc(nr_sought, sizeof(*status));
assign_shallow_commits_to_refs(si, NULL, status);
if (si->nr_ours || si->nr_theirs) {
- for (r = refs, i = 0; r; r = r->next, i++)
+ for (i = 0; i < nr_sought; i++)
if (status[i])
- r->status = REF_STATUS_REJECT_SHALLOW;
+ sought[i]->status = REF_STATUS_REJECT_SHALLOW;
}
free(status);
oid_array_clear(&ref);
args->connectivity_checked = 1;
}
- update_shallow(args, ref_cpy, &si);
+ update_shallow(args, sought, nr_sought, &si);
cleanup:
clear_shallow_info(&si);
return ref_cpy;
#define UNLEAK(var) do {} while (0)
#endif
+/*
+ * This include must come after system headers, since it introduces macros that
+ * replace system names.
+ */
+#include "banned.h"
+
#endif
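The new banned.h works by redefining each forbidden function as a macro
that expands to an undeclared identifier, so any use fails to compile
with a self-explanatory name in the error message. A minimal
compile-time sketch of that mechanism (macro shape modeled on banned.h;
treat the details as illustrative):

	#include <stdio.h>

	#define BANNED(func) sorry_##func##_is_a_banned_function

	#undef strcat
	#define strcat(x, y) BANNED(strcat)

	int main(void)
	{
		char buf[32] = "hello";
		/*
		 * strcat(buf, ", world") would now fail to compile with
		 * "error: 'sorry_strcat_is_a_banned_function' undeclared"
		 */
		printf("%s\n", buf);
		return 0;
	}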
optparse.make_option("--disable-p4sync", dest="disable_p4sync", action="store_true",
help="Skip Perforce sync of p4/master after submit or shelve"),
]
- self.description = "Submit changes from git to the perforce depot."
+ self.description = """Submit changes from git to the perforce depot.\n
+ The `p4-pre-submit` hook is executed if it exists and is executable.
+ The hook takes no parameters and nothing from standard input. Exiting with
+ non-zero status from this script prevents `git-p4 submit` from launching.
+
+ One usage scenario is to run unit tests in the hook."""
+
self.usage += " [name of git branch to submit into perforce depot]"
self.origin = ""
self.detectRenames = False
sys.exit("number of commits (%d) must match number of shelved changelist (%d)" %
(len(commits), num_shelves))
+ hooks_path = gitConfig("core.hooksPath")
+ if not hooks_path:
+ hooks_path = os.path.join(os.environ.get("GIT_DIR", ".git"), "hooks")
+
+ hook_file = os.path.join(hooks_path, "p4-pre-submit")
+ if os.path.isfile(hook_file) and os.access(hook_file, os.X_OK) and subprocess.call([hook_file]) != 0:
+ sys.exit(1)
+
#
# Apply the commits, one at a time. On failure, ask if should
# continue to try the rest of the patches, or quit.
if (envchanged)
*envchanged = 1;
} else if (!strcmp(cmd, "--no-replace-objects")) {
- check_replace_refs = 0;
+ read_replace_refs = 0;
setenv(NO_REPLACE_OBJECTS_ENVIRONMENT, "1", 1);
if (envchanged)
*envchanged = 1;
#include "tempfile.h"
static char *configured_signing_key;
-static const char *gpg_program = "gpg";
+struct gpg_format {
+ const char *name;
+ const char *program;
+ const char **verify_args;
+ const char **sigs;
+};
+
+static const char *openpgp_verify_args[] = {
+ "--keyid-format=long",
+ NULL
+};
+static const char *openpgp_sigs[] = {
+ "-----BEGIN PGP SIGNATURE-----",
+ "-----BEGIN PGP MESSAGE-----",
+ NULL
+};
+
+static const char *x509_verify_args[] = {
+ NULL
+};
+static const char *x509_sigs[] = {
+ "-----BEGIN SIGNED MESSAGE-----",
+ NULL
+};
-#define PGP_SIGNATURE "-----BEGIN PGP SIGNATURE-----"
-#define PGP_MESSAGE "-----BEGIN PGP MESSAGE-----"
+static struct gpg_format gpg_format[] = {
+ { .name = "openpgp", .program = "gpg",
+ .verify_args = openpgp_verify_args,
+ .sigs = openpgp_sigs
+ },
+ { .name = "x509", .program = "gpgsm",
+ .verify_args = x509_verify_args,
+ .sigs = x509_sigs
+ },
+};
+
+static struct gpg_format *use_format = &gpg_format[0];
+
+static struct gpg_format *get_format_by_name(const char *str)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(gpg_format); i++)
+ if (!strcmp(gpg_format[i].name, str))
+ return gpg_format + i;
+ return NULL;
+}
+
+static struct gpg_format *get_format_by_sig(const char *sig)
+{
+ int i, j;
+
+ for (i = 0; i < ARRAY_SIZE(gpg_format); i++)
+ for (j = 0; gpg_format[i].sigs[j]; j++)
+ if (starts_with(sig, gpg_format[i].sigs[j]))
+ return gpg_format + i;
+ return NULL;
+}
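A stand-alone sketch of the table-driven dispatch above (the marker
strings are copied from the sigs arrays; everything else is
illustrative): a payload beginning with the x509 marker selects gpgsm,
while a PGP marker would select gpg.

	#include <stdio.h>
	#include <string.h>

	/* simplified stand-in for struct gpg_format */
	struct fmt { const char *name, *program, *marker; };

	static const struct fmt formats[] = {
		{ "openpgp", "gpg", "-----BEGIN PGP SIGNATURE-----" },
		{ "x509", "gpgsm", "-----BEGIN SIGNED MESSAGE-----" },
	};

	static const struct fmt *by_sig(const char *sig)
	{
		size_t i;

		for (i = 0; i < sizeof(formats) / sizeof(*formats); i++)
			if (!strncmp(sig, formats[i].marker,
				     strlen(formats[i].marker)))
				return &formats[i];
		return NULL;
	}

	int main(void)
	{
		const struct fmt *f =
			by_sig("-----BEGIN SIGNED MESSAGE-----\n...");

		printf("%s\n", f ? f->program : "(unknown)"); /* gpgsm */
		return 0;
	}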
void signature_check_clear(struct signature_check *sigc)
{
sigc->result = sigcheck_gpg_status[i].result;
/* The trust messages are not followed by key/signer information */
if (sigc->result != 'U') {
- sigc->key = xmemdupz(found, 16);
+ next = strchrnul(found, ' ');
+ sigc->key = xmemdupz(found, next - found);
/* The ERRSIG message is not followed by signer information */
- if (sigc-> result != 'E') {
- found += 17;
+ if (*next && sigc->result != 'E') {
+ found = next + 1;
next = strchrnul(found, '\n');
sigc->signer = xmemdupz(found, next - found);
}
fputs(output, stderr);
}
-static int is_gpg_start(const char *line)
-{
- return starts_with(line, PGP_SIGNATURE) ||
- starts_with(line, PGP_MESSAGE);
-}
-
size_t parse_signature(const char *buf, size_t size)
{
size_t len = 0;
while (len < size) {
const char *eol;
- if (is_gpg_start(buf + len))
+ if (get_format_by_sig(buf + len))
match = len;
eol = memchr(buf + len, '\n', size - len);
int git_gpg_config(const char *var, const char *value, void *cb)
{
+ struct gpg_format *fmt = NULL;
+ const char *fmtname = NULL;
+
if (!strcmp(var, "user.signingkey")) {
if (!value)
return config_error_nonbool(var);
return 0;
}
- if (!strcmp(var, "gpg.program")) {
+ if (!strcmp(var, "gpg.format")) {
if (!value)
return config_error_nonbool(var);
- gpg_program = xstrdup(value);
+ fmt = get_format_by_name(value);
+ if (!fmt)
+ return error("unsupported value for %s: %s",
+ var, value);
+ use_format = fmt;
return 0;
}
+ if (!strcmp(var, "gpg.program") || !strcmp(var, "gpg.openpgp.program"))
+ fmtname = "openpgp";
+
+ if (!strcmp(var, "gpg.x509.program"))
+ fmtname = "x509";
+
+ if (fmtname) {
+ fmt = get_format_by_name(fmtname);
+ return git_config_string(&fmt->program, var, value);
+ }
+
return 0;
}
struct strbuf gpg_status = STRBUF_INIT;
argv_array_pushl(&gpg.args,
- gpg_program,
+ use_format->program,
"--status-fd=2",
"-bsau", signing_key,
NULL);
struct strbuf *gpg_output, struct strbuf *gpg_status)
{
struct child_process gpg = CHILD_PROCESS_INIT;
+ struct gpg_format *fmt;
struct tempfile *temp;
int ret;
struct strbuf buf = STRBUF_INIT;
return -1;
}
+ fmt = get_format_by_sig(signature);
+ if (!fmt)
+ BUG("bad signature '%s'", signature);
+
+ argv_array_push(&gpg.args, fmt->program);
+ argv_array_pushv(&gpg.args, fmt->verify_args);
argv_array_pushl(&gpg.args,
- gpg_program,
"--status-fd=1",
- "--keyid-format=long",
"--verify", temp->filename.buf, "-",
NULL);
--- /dev/null
+#include "cache.h"
+#include "json-writer.h"
+
+void jw_init(struct json_writer *jw)
+{
+ strbuf_init(&jw->json, 0);
+ strbuf_init(&jw->open_stack, 0);
+ jw->need_comma = 0;
+ jw->pretty = 0;
+}
+
+void jw_release(struct json_writer *jw)
+{
+ strbuf_release(&jw->json);
+ strbuf_release(&jw->open_stack);
+}
+
+/*
+ * Append JSON-quoted version of the given string to 'out'.
+ */
+static void append_quoted_string(struct strbuf *out, const char *in)
+{
+ unsigned char c;
+
+ strbuf_addch(out, '"');
+ while ((c = *in++) != '\0') {
+ if (c == '"')
+ strbuf_addstr(out, "\\\"");
+ else if (c == '\\')
+ strbuf_addstr(out, "\\\\");
+ else if (c == '\n')
+ strbuf_addstr(out, "\\n");
+ else if (c == '\r')
+ strbuf_addstr(out, "\\r");
+ else if (c == '\t')
+ strbuf_addstr(out, "\\t");
+ else if (c == '\f')
+ strbuf_addstr(out, "\\f");
+ else if (c == '\b')
+ strbuf_addstr(out, "\\b");
+ else if (c < 0x20)
+ strbuf_addf(out, "\\u%04x", c);
+ else
+ strbuf_addch(out, c);
+ }
+ strbuf_addch(out, '"');
+}
+
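A quick stand-alone check of the escaping rules implemented above (a
sketch, not the helper itself; the sample input is made up): the
shorthand escapes cover the common control characters, and anything
else below 0x20 falls back to \u00XX.

	#include <stdio.h>

	int main(void)
	{
		const char *in = "tab:\t quote:\" bell:\x07";
		unsigned char c;

		putchar('"');
		while ((c = *in++) != '\0') {
			switch (c) {
			case '"': printf("\\\""); break;
			case '\\': printf("\\\\"); break;
			case '\n': printf("\\n"); break;
			case '\t': printf("\\t"); break;
			default:
				if (c < 0x20)
					printf("\\u%04x", c);
				else
					putchar(c);
			}
		}
		/* prints "tab:\t quote:\" bell:\u0007" */
		puts("\"");
		return 0;
	}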
+static void indent_pretty(struct json_writer *jw)
+{
+ int k;
+
+ for (k = 0; k < jw->open_stack.len; k++)
+ strbuf_addstr(&jw->json, " ");
+}
+
+/*
+ * Begin an object or array (either top-level or nested within the currently
+ * open object or array).
+ */
+static void begin(struct json_writer *jw, char ch_open, int pretty)
+{
+ jw->pretty = pretty;
+
+ strbuf_addch(&jw->json, ch_open);
+
+ strbuf_addch(&jw->open_stack, ch_open);
+ jw->need_comma = 0;
+}
+
+/*
+ * Assert that the top of the open-stack is an object.
+ */
+static void assert_in_object(const struct json_writer *jw, const char *key)
+{
+ if (!jw->open_stack.len)
+ BUG("json-writer: object: missing jw_object_begin(): '%s'", key);
+ if (jw->open_stack.buf[jw->open_stack.len - 1] != '{')
+ BUG("json-writer: object: not in object: '%s'", key);
+}
+
+/*
+ * Assert that the top of the open-stack is an array.
+ */
+static void assert_in_array(const struct json_writer *jw)
+{
+ if (!jw->open_stack.len)
+ BUG("json-writer: array: missing jw_array_begin()");
+ if (jw->open_stack.buf[jw->open_stack.len - 1] != '[')
+ BUG("json-writer: array: not in array");
+}
+
+/*
+ * Add comma if we have already seen a member at this level.
+ */
+static void maybe_add_comma(struct json_writer *jw)
+{
+ if (jw->need_comma)
+ strbuf_addch(&jw->json, ',');
+ else
+ jw->need_comma = 1;
+}
+
+static void fmt_double(struct json_writer *jw, int precision,
+ double value)
+{
+ if (precision < 0) {
+ strbuf_addf(&jw->json, "%f", value);
+ } else {
+ struct strbuf fmt = STRBUF_INIT;
+ strbuf_addf(&fmt, "%%.%df", precision);
+ strbuf_addf(&jw->json, fmt.buf, value);
+ strbuf_release(&fmt);
+ }
+}
+
+static void object_common(struct json_writer *jw, const char *key)
+{
+ assert_in_object(jw, key);
+ maybe_add_comma(jw);
+
+ if (jw->pretty) {
+ strbuf_addch(&jw->json, '\n');
+ indent_pretty(jw);
+ }
+
+ append_quoted_string(&jw->json, key);
+ strbuf_addch(&jw->json, ':');
+ if (jw->pretty)
+ strbuf_addch(&jw->json, ' ');
+}
+
+static void array_common(struct json_writer *jw)
+{
+ assert_in_array(jw);
+ maybe_add_comma(jw);
+
+ if (jw->pretty) {
+ strbuf_addch(&jw->json, '\n');
+ indent_pretty(jw);
+ }
+}
+
+/*
+ * Assert that the given JSON object or JSON array has been properly
+ * terminated. (Has closing bracket.)
+ */
+static void assert_is_terminated(const struct json_writer *jw)
+{
+ if (jw->open_stack.len)
+ BUG("json-writer: object: missing jw_end(): '%s'",
+ jw->json.buf);
+}
+
+void jw_object_begin(struct json_writer *jw, int pretty)
+{
+ begin(jw, '{', pretty);
+}
+
+void jw_object_string(struct json_writer *jw, const char *key, const char *value)
+{
+ object_common(jw, key);
+ append_quoted_string(&jw->json, value);
+}
+
+void jw_object_intmax(struct json_writer *jw, const char *key, intmax_t value)
+{
+ object_common(jw, key);
+ strbuf_addf(&jw->json, "%"PRIdMAX, value);
+}
+
+void jw_object_double(struct json_writer *jw, const char *key, int precision,
+ double value)
+{
+ object_common(jw, key);
+ fmt_double(jw, precision, value);
+}
+
+void jw_object_true(struct json_writer *jw, const char *key)
+{
+ object_common(jw, key);
+ strbuf_addstr(&jw->json, "true");
+}
+
+void jw_object_false(struct json_writer *jw, const char *key)
+{
+ object_common(jw, key);
+ strbuf_addstr(&jw->json, "false");
+}
+
+void jw_object_bool(struct json_writer *jw, const char *key, int value)
+{
+ if (value)
+ jw_object_true(jw, key);
+ else
+ jw_object_false(jw, key);
+}
+
+void jw_object_null(struct json_writer *jw, const char *key)
+{
+ object_common(jw, key);
+ strbuf_addstr(&jw->json, "null");
+}
+
+static void increase_indent(struct strbuf *sb,
+ const struct json_writer *jw,
+ int indent)
+{
+ int k;
+
+ strbuf_reset(sb);
+ for (k = 0; k < jw->json.len; k++) {
+ char ch = jw->json.buf[k];
+ strbuf_addch(sb, ch);
+ if (ch == '\n')
+ strbuf_addchars(sb, ' ', indent);
+ }
+}
+
+static void kill_indent(struct strbuf *sb,
+ const struct json_writer *jw)
+{
+ int k;
+ int eat_it = 0;
+
+ strbuf_reset(sb);
+ for (k = 0; k < jw->json.len; k++) {
+ char ch = jw->json.buf[k];
+ if (eat_it && ch == ' ')
+ continue;
+ if (ch == '\n') {
+ eat_it = 1;
+ continue;
+ }
+ eat_it = 0;
+ strbuf_addch(sb, ch);
+ }
+}
+
+static void append_sub_jw(struct json_writer *jw,
+ const struct json_writer *value)
+{
+ /*
+ * If both are pretty, increase the indentation of the sub_jw
+ * to better fit under the super.
+ *
+ * If the super is pretty, but the sub_jw is compact, leave the
+ * sub_jw compact. (We don't want to parse and rebuild the sub_jw
+ * for this debug-ish feature.)
+ *
+ * If the super is compact, and the sub_jw is pretty, convert
+ * the sub_jw to compact.
+ *
+ * If both are compact, keep the sub_jw compact.
+ */
+ if (jw->pretty && jw->open_stack.len && value->pretty) {
+ struct strbuf sb = STRBUF_INIT;
+ increase_indent(&sb, value, jw->open_stack.len * 2);
+ strbuf_addbuf(&jw->json, &sb);
+ strbuf_release(&sb);
+ return;
+ }
+ if (!jw->pretty && value->pretty) {
+ struct strbuf sb = STRBUF_INIT;
+ kill_indent(&sb, value);
+ strbuf_addbuf(&jw->json, &sb);
+ strbuf_release(&sb);
+ return;
+ }
+
+ strbuf_addbuf(&jw->json, &value->json);
+}
+
+/*
+ * Append existing (properly terminated) JSON sub-data (object or array)
+ * as-is onto the given JSON data.
+ */
+void jw_object_sub_jw(struct json_writer *jw, const char *key,
+ const struct json_writer *value)
+{
+ assert_is_terminated(value);
+
+ object_common(jw, key);
+ append_sub_jw(jw, value);
+}
+
+void jw_object_inline_begin_object(struct json_writer *jw, const char *key)
+{
+ object_common(jw, key);
+
+ jw_object_begin(jw, jw->pretty);
+}
+
+void jw_object_inline_begin_array(struct json_writer *jw, const char *key)
+{
+ object_common(jw, key);
+
+ jw_array_begin(jw, jw->pretty);
+}
+
+void jw_array_begin(struct json_writer *jw, int pretty)
+{
+ begin(jw, '[', pretty);
+}
+
+void jw_array_string(struct json_writer *jw, const char *value)
+{
+ array_common(jw);
+ append_quoted_string(&jw->json, value);
+}
+
+void jw_array_intmax(struct json_writer *jw, intmax_t value)
+{
+ array_common(jw);
+ strbuf_addf(&jw->json, "%"PRIdMAX, value);
+}
+
+void jw_array_double(struct json_writer *jw, int precision, double value)
+{
+ array_common(jw);
+ fmt_double(jw, precision, value);
+}
+
+void jw_array_true(struct json_writer *jw)
+{
+ array_common(jw);
+ strbuf_addstr(&jw->json, "true");
+}
+
+void jw_array_false(struct json_writer *jw)
+{
+ array_common(jw);
+ strbuf_addstr(&jw->json, "false");
+}
+
+void jw_array_bool(struct json_writer *jw, int value)
+{
+ if (value)
+ jw_array_true(jw);
+ else
+ jw_array_false(jw);
+}
+
+void jw_array_null(struct json_writer *jw)
+{
+ array_common(jw);
+ strbuf_addstr(&jw->json, "null");
+}
+
+void jw_array_sub_jw(struct json_writer *jw, const struct json_writer *value)
+{
+ assert_is_terminated(value);
+
+ array_common(jw);
+ append_sub_jw(jw, value);
+}
+
+void jw_array_argc_argv(struct json_writer *jw, int argc, const char **argv)
+{
+ int k;
+
+ for (k = 0; k < argc; k++)
+ jw_array_string(jw, argv[k]);
+}
+
+void jw_array_argv(struct json_writer *jw, const char **argv)
+{
+ while (*argv)
+ jw_array_string(jw, *argv++);
+}
+
+void jw_array_inline_begin_object(struct json_writer *jw)
+{
+ array_common(jw);
+
+ jw_object_begin(jw, jw->pretty);
+}
+
+void jw_array_inline_begin_array(struct json_writer *jw)
+{
+ array_common(jw);
+
+ jw_array_begin(jw, jw->pretty);
+}
+
+int jw_is_terminated(const struct json_writer *jw)
+{
+ return !jw->open_stack.len;
+}
+
+void jw_end(struct json_writer *jw)
+{
+ char ch_open;
+ int len;
+
+ if (!jw->open_stack.len)
+ BUG("json-writer: too many jw_end(): '%s'", jw->json.buf);
+
+ len = jw->open_stack.len - 1;
+ ch_open = jw->open_stack.buf[len];
+
+ strbuf_setlen(&jw->open_stack, len);
+ jw->need_comma = 1;
+
+ if (jw->pretty) {
+ strbuf_addch(&jw->json, '\n');
+ indent_pretty(jw);
+ }
+
+ if (ch_open == '{')
+ strbuf_addch(&jw->json, '}');
+ else
+ strbuf_addch(&jw->json, ']');
+}
--- /dev/null
+#ifndef JSON_WRITER_H
+#define JSON_WRITER_H
+
+/*
+ * JSON data structures are defined at:
+ * [1] http://www.ietf.org/rfc/rfc7159.txt
+ * [2] http://json.org/
+ *
+ * The JSON-writer API allows one to build JSON data structures using a
+ * simple wrapper around a "struct strbuf" buffer. It is intended as a
+ * simple API to build output strings; it is not intended to be a general
+ * object model for JSON data. In particular, it does not re-order keys
+ * in an object (dictionary), it does not de-dup keys in an object, and
+ * it does not allow lookup or parsing of JSON data.
+ *
+ * All string values (both keys and string r-values) are properly quoted
+ * and escaped if they contain special characters.
+ *
+ * These routines create compact JSON data (with no unnecessary whitespace,
+ * newlines, or indenting). If you get an unexpected response, verify
+ * that you're not expecting a pretty JSON string.
+ *
+ * Both "JSON objects" (aka sets of k/v pairs) and "JSON array" can be
+ * constructed using a 'begin append* end' model.
+ *
+ * Nested objects and arrays can either be constructed bottom up (by
+ * creating sub object/arrays first and appending them to the super
+ * object/array) -or- by building them inline in one pass. This is a
+ * personal style and/or data shape choice.
+ *
+ * See t/helper/test-json-writer.c for various usage examples.
+ *
+ * LIMITATIONS:
+ * ============
+ *
+ * The JSON specification [1,2] defines string values as Unicode data,
+ * typically UTF-8 encoded. The current json-writer API does not
+ * enforce this and will write any string as received. However, it will
+ * properly quote and backslash-escape them as necessary. It is up to
+ * the caller to UTF-8 encode their strings *before* passing them to this
+ * API. This layer should not have to try to guess the encoding or locale
+ * of the given strings.
+ */
+
+struct json_writer {
+ /*
+ * Buffer of the in-progress JSON currently being composed.
+ */
+ struct strbuf json;
+
+ /*
+ * Simple stack of the currently open array and object forms.
+ * This is a string of '{' and '[' characters indicating the
+ * currently unterminated forms. This is used to ensure the
+ * properly closing character is used when popping a level and
+ * to know when the JSON is completely closed.
+ */
+ struct strbuf open_stack;
+
+ unsigned int need_comma:1;
+ unsigned int pretty:1;
+};
+
+#define JSON_WRITER_INIT { STRBUF_INIT, STRBUF_INIT, 0, 0 }
+
+void jw_init(struct json_writer *jw);
+void jw_release(struct json_writer *jw);
+
+void jw_object_begin(struct json_writer *jw, int pretty);
+void jw_array_begin(struct json_writer *jw, int pretty);
+
+void jw_object_string(struct json_writer *jw, const char *key,
+ const char *value);
+void jw_object_intmax(struct json_writer *jw, const char *key, intmax_t value);
+void jw_object_double(struct json_writer *jw, const char *key, int precision,
+ double value);
+void jw_object_true(struct json_writer *jw, const char *key);
+void jw_object_false(struct json_writer *jw, const char *key);
+void jw_object_bool(struct json_writer *jw, const char *key, int value);
+void jw_object_null(struct json_writer *jw, const char *key);
+void jw_object_sub_jw(struct json_writer *jw, const char *key,
+ const struct json_writer *value);
+
+void jw_object_inline_begin_object(struct json_writer *jw, const char *key);
+void jw_object_inline_begin_array(struct json_writer *jw, const char *key);
+
+void jw_array_string(struct json_writer *jw, const char *value);
+void jw_array_intmax(struct json_writer *jw, intmax_t value);
+void jw_array_double(struct json_writer *jw, int precision, double value);
+void jw_array_true(struct json_writer *jw);
+void jw_array_false(struct json_writer *jw);
+void jw_array_bool(struct json_writer *jw, int value);
+void jw_array_null(struct json_writer *jw);
+void jw_array_sub_jw(struct json_writer *jw, const struct json_writer *value);
+void jw_array_argc_argv(struct json_writer *jw, int argc, const char **argv);
+void jw_array_argv(struct json_writer *jw, const char **argv);
+
+void jw_array_inline_begin_object(struct json_writer *jw);
+void jw_array_inline_begin_array(struct json_writer *jw);
+
+int jw_is_terminated(const struct json_writer *jw);
+void jw_end(struct json_writer *jw);
+
+#endif /* JSON_WRITER_H */
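A minimal usage sketch of the API declared above, in the spirit of the
examples in t/helper/test-json-writer.c (assumes the usual codebase
headers are already included): build a compact object with a nested
inline array and print the result.

	static void show_example(void)
	{
		struct json_writer jw = JSON_WRITER_INIT;

		jw_object_begin(&jw, 0);	/* 0 = compact, 1 = pretty */
		jw_object_string(&jw, "name", "git");
		jw_object_inline_begin_array(&jw, "ids");
		jw_array_intmax(&jw, 1);
		jw_array_intmax(&jw, 2);
		jw_end(&jw);			/* close the "ids" array */
		jw_end(&jw);			/* close the object */

		printf("%s\n", jw.json.buf);	/* {"name":"git","ids":[1,2]} */
		jw_release(&jw);
	}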
if (starts_with(refname, git_replace_ref_base)) {
struct object_id original_oid;
- if (!check_replace_refs)
+ if (!read_replace_refs)
return 0;
if (get_oid_hex(refname + strlen(git_replace_ref_base),
&original_oid)) {
int score = 0;
for (;;) {
- struct name_entry e1, e2;
- int got_entry_from_one = tree_entry(&one, &e1);
- int got_entry_from_two = tree_entry(&two, &e2);
int cmp;
- if (got_entry_from_one && got_entry_from_two)
- cmp = base_name_entries_compare(&e1, &e2);
- else if (got_entry_from_one)
+ if (one.size && two.size)
+ cmp = base_name_entries_compare(&one.entry, &two.entry);
+ else if (one.size)
/* two lacks this entry */
cmp = -1;
- else if (got_entry_from_two)
+ else if (two.size)
/* two has more entries */
cmp = 1;
else
break;
- if (cmp < 0)
+ if (cmp < 0) {
/* path1 does not appear in two */
- score += score_missing(e1.mode, e1.path);
- else if (cmp > 0)
+ score += score_missing(one.entry.mode, one.entry.path);
+ update_tree_entry(&one);
+ } else if (cmp > 0) {
/* path2 does not appear in one */
- score += score_missing(e2.mode, e2.path);
- else if (oidcmp(e1.oid, e2.oid))
- /* they are different */
- score += score_differs(e1.mode, e2.mode, e1.path);
- else
- /* same subtree or blob */
- score += score_matches(e1.mode, e2.mode, e1.path);
+ score += score_missing(two.entry.mode, two.entry.path);
+ update_tree_entry(&two);
+ } else {
+ /* path appears in both */
+ if (oidcmp(one.entry.oid, two.entry.oid)) {
+ /* they are different */
+ score += score_differs(one.entry.mode,
+ two.entry.mode,
+ one.entry.path);
+ } else {
+ /* same subtree or blob */
+ score += score_matches(one.entry.mode,
+ two.entry.mode,
+ one.entry.path);
+ }
+ update_tree_entry(&one);
+ update_tree_entry(&two);
+ }
}
free(one_buf);
free(two_buf);
if (mfi.clean &&
was_tracked_and_matches(o, path, &mfi.oid, mfi.mode) &&
!df_conflict_remains) {
+ int pos;
+ struct cache_entry *ce;
+
output(o, 3, _("Skipped %s (merged same as existing)"), path);
if (add_cacheinfo(o, mfi.mode, &mfi.oid, path,
0, (!o->call_depth && !is_dirty), 0))
return -1;
+ /*
+ * However, add_cacheinfo() will delete the old cache entry
+ * and add a new one. We need to copy over any skip_worktree
+ * flag to avoid making the file appear as if it were
+ * deleted by the user.
+ */
+ pos = index_name_pos(&o->orig_index, path, strlen(path));
+ ce = o->orig_index.cache[pos];
+ if (ce_skip_worktree(ce)) {
+ pos = index_name_pos(&the_index, path, strlen(path));
+ ce = the_index.cache[pos];
+ ce->ce_flags |= CE_SKIP_WORKTREE;
+ }
return mfi.clean;
}
if (gentle)
return -1;
- die("invalid object type \"%s\"", str);
+ die(_("invalid object type \"%s\""), str);
}
/*
}
else {
if (!quiet)
- error("object %s is a %s, not a %s",
+ error(_("object %s is a %s, not a %s"),
oid_to_hex(&obj->oid),
type_name(obj->type), type_name(type));
return NULL;
obj = &tag->object;
}
} else {
- warning("object %s has unknown type id %d", oid_to_hex(oid), type);
+ warning(_("object %s has unknown type id %d"), oid_to_hex(oid), type);
obj = NULL;
}
return obj;
(!obj && has_object_file(oid) &&
oid_object_info(r, oid, NULL) == OBJ_BLOB)) {
if (check_object_signature(repl, NULL, 0, NULL) < 0) {
- error("sha1 mismatch %s", oid_to_hex(oid));
+ error(_("sha1 mismatch %s"), oid_to_hex(oid));
return NULL;
}
parse_blob_buffer(lookup_blob(r, oid), NULL, 0);
if (buffer) {
if (check_object_signature(repl, buffer, size, type_name(type)) < 0) {
free(buffer);
- error("sha1 mismatch %s", oid_to_hex(repl));
+ error(_("sha1 mismatch %s"), oid_to_hex(repl));
return NULL;
}
#ifndef PACKFILE_H
#define PACKFILE_H
+#include "cache.h"
#include "oidset.h"
/* in object-store.h */
struct packed_git;
struct object_info;
-enum object_type;
/*
* Generate the filename to be used for a pack file with checksum "sha1" and
static int usage_argh(const struct option *opts, FILE *outfile)
{
const char *s;
- int literal = (opts->flags & PARSE_OPT_LITERAL_ARGHELP) || !opts->argh;
+ int literal = (opts->flags & PARSE_OPT_LITERAL_ARGHELP) ||
+ !opts->argh || !!strpbrk(opts->argh, "()<>[]|");
if (opts->flags & PARSE_OPT_OPTARG)
if (opts->long_name)
s = literal ? "[=%s]" : "[=<%s>]";
{
packet_trace("0000", 4, 1);
if (write_in_full(fd, "0000", 4) < 0)
- return error("flush packet write failed");
+ return error(_("flush packet write failed"));
return 0;
}
n = out->len - orig_len;
if (n > LARGE_PACKET_MAX)
- die("protocol error: impossibly long line");
+ die(_("protocol error: impossibly long line"));
set_packet_header(&out->buf[orig_len], n);
packet_trace(out->buf + orig_len + 4, n - 4, 1);
if (write_in_full(fd, buf.buf, buf.len) < 0) {
if (!gently) {
check_pipe(errno);
- die_errno("packet write with format failed");
+ die_errno(_("packet write with format failed"));
}
- return error("packet write with format failed");
+ return error(_("packet write with format failed"));
}
return 0;
size_t packet_size;
if (size > sizeof(packet_write_buffer) - 4)
- return error("packet write failed - data exceeds max packet size");
+ return error(_("packet write failed - data exceeds max packet size"));
packet_trace(buf, size, 1);
packet_size = size + 4;
set_packet_header(packet_write_buffer, packet_size);
memcpy(packet_write_buffer + 4, buf, size);
if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
- return error("packet write failed");
+ return error(_("packet write failed"));
return 0;
}
void packet_write(int fd_out, const char *buf, size_t size)
{
if (packet_write_gently(fd_out, buf, size))
- die_errno("packet write failed");
+ die_errno(_("packet write failed"));
}
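A stand-alone sketch of pkt-line framing as written by the functions
above: the 4-byte header holds the hex-encoded length of the whole
packet, header included, so a 6-byte payload yields the header "000a".

	#include <stdio.h>
	#include <string.h>

	/* same nibble-to-hex scheme as set_packet_header() */
	static void set_header(char *buf, unsigned size)
	{
		static const char hex[] = "0123456789abcdef";

		buf[0] = hex[(size >> 12) & 0xf];
		buf[1] = hex[(size >> 8) & 0xf];
		buf[2] = hex[(size >> 4) & 0xf];
		buf[3] = hex[size & 0xf];
	}

	int main(void)
	{
		const char *payload = "hello\n";
		char pkt[64];
		unsigned n = 4 + (unsigned)strlen(payload);

		set_header(pkt, n);
		memcpy(pkt + 4, payload, strlen(payload));
		pkt[n] = '\0';
		fputs(pkt, stdout);	/* writes "000ahello\n" */
		return 0;
	}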
void packet_buf_write(struct strbuf *buf, const char *fmt, ...)
n = buf->len - orig_len;
if (n > LARGE_PACKET_MAX)
- die("protocol error: impossibly long line");
+ die(_("protocol error: impossibly long line"));
set_packet_header(&buf->buf[orig_len], n);
packet_trace(data, len, 1);
} else {
ret = read_in_full(fd, dst, size);
if (ret < 0)
- die_errno("read error");
+ die_errno(_("read error"));
}
/* And complain if we didn't get enough bytes to satisfy the read. */
if (options & PACKET_READ_GENTLE_ON_EOF)
return -1;
- die("The remote end hung up unexpectedly");
+ die(_("the remote end hung up unexpectedly"));
}
return ret;
len = packet_length(linelen);
if (len < 0) {
- die("protocol error: bad line length character: %.4s", linelen);
+ die(_("protocol error: bad line length character: %.4s"), linelen);
} else if (!len) {
packet_trace("0000", 4, 0);
*pktlen = 0;
*pktlen = 0;
return PACKET_READ_DELIM;
} else if (len < 4) {
- die("protocol error: bad line length %d", len);
+ die(_("protocol error: bad line length %d"), len);
}
len -= 4;
if ((unsigned)len >= size)
- die("protocol error: bad line length %d", len);
+ die(_("protocol error: bad line length %d"), len);
if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
*pktlen = -1;
}
if (output_enc) {
- int outsz;
+ size_t outsz;
char *out = reencode_string_len(sb->buf, sb->len,
output_enc, utf8, &outsz);
if (out)
enum selector_type selector = SELECTOR_NONE;
if (commit->object.flags & UNINTERESTING)
- die ("Cannot walk reflogs for %s", name);
+ die("cannot walk reflogs for %s", name);
branch = xstrdup(name);
if (at && at[1] == '{') {
free(branch);
branch = resolve_refdup("HEAD", 0, NULL, NULL);
if (!branch)
- die ("No current branch");
+ die("no current branch");
}
reflogs = read_complete_reflog(branch);
if (flags & REF_ISBROKEN)
return 0;
if (!has_sha1_file(oid->hash)) {
- error("%s does not point to a valid object!", refname);
+ error(_("%s does not point to a valid object!"), refname);
return 0;
}
return 1;
NULL
};
+#define NUM_REV_PARSE_RULES (ARRAY_SIZE(ref_rev_parse_rules) - 1)
+
+/*
+ * Is it possible that the caller meant full_name with abbrev_name?
+ * If so return a non-zero value to signal "yes"; the magnitude of
+ * the returned value gives the precedence used for disambiguation.
+ *
+ * If abbrev_name cannot mean full_name, return 0.
+ */
int refname_match(const char *abbrev_name, const char *full_name)
{
const char **p;
const int abbrev_name_len = strlen(abbrev_name);
+ const int num_rules = NUM_REV_PARSE_RULES;
- for (p = ref_rev_parse_rules; *p; p++) {
- if (!strcmp(full_name, mkpath(*p, abbrev_name_len, abbrev_name))) {
- return 1;
- }
- }
+ for (p = ref_rev_parse_rules; *p; p++)
+ if (!strcmp(full_name, mkpath(*p, abbrev_name_len, abbrev_name)))
+ return &ref_rev_parse_rules[num_rules] - p;
return 0;
}
if (!warn_ambiguous_refs)
break;
} else if ((flag & REF_ISSYMREF) && strcmp(fullref.buf, "HEAD")) {
- warning("ignoring dangling symref %s.", fullref.buf);
+ warning(_("ignoring dangling symref %s"), fullref.buf);
} else if ((flag & REF_ISBROKEN) && strchr(fullref.buf, '/')) {
- warning("ignoring broken ref %s.", fullref.buf);
+ warning(_("ignoring broken ref %s"), fullref.buf);
}
}
strbuf_release(&fullref);
fd = hold_lock_file_for_update_timeout(&lock, filename, 0,
get_files_ref_lock_timeout_ms());
if (fd < 0) {
- strbuf_addf(err, "could not open '%s' for writing: %s",
+ strbuf_addf(err, _("could not open '%s' for writing: %s"),
filename, strerror(errno));
goto done;
}
if (read_ref(pseudoref, &actual_old_oid)) {
if (!is_null_oid(old_oid)) {
- strbuf_addf(err, "could not read ref '%s'",
+ strbuf_addf(err, _("could not read ref '%s'"),
pseudoref);
rollback_lock_file(&lock);
goto done;
}
} else if (is_null_oid(old_oid)) {
- strbuf_addf(err, "ref '%s' already exists",
+ strbuf_addf(err, _("ref '%s' already exists"),
pseudoref);
rollback_lock_file(&lock);
goto done;
} else if (oidcmp(&actual_old_oid, old_oid)) {
- strbuf_addf(err, "unexpected object ID when writing '%s'",
+ strbuf_addf(err, _("unexpected object ID when writing '%s'"),
pseudoref);
rollback_lock_file(&lock);
goto done;
}
if (write_in_full(fd, buf.buf, buf.len) < 0) {
- strbuf_addf(err, "could not write to '%s'", filename);
+ strbuf_addf(err, _("could not write to '%s'"), filename);
rollback_lock_file(&lock);
goto done;
}
return -1;
}
if (read_ref(pseudoref, &actual_old_oid))
- die("could not read ref '%s'", pseudoref);
+ die(_("could not read ref '%s'"), pseudoref);
if (oidcmp(&actual_old_oid, old_oid)) {
- error("unexpected object ID when deleting '%s'",
+ error(_("unexpected object ID when deleting '%s'"),
pseudoref);
rollback_lock_file(&lock);
return -1;
if (!is_null_oid(&cb->ooid)) {
oidcpy(cb->oid, noid);
if (oidcmp(&cb->ooid, noid))
- warning("Log for ref %s has gap after %s.",
+ warning(_("log for ref %s has gap after %s"),
cb->refname, show_date(cb->date, cb->tz, DATE_MODE(RFC2822)));
}
else if (cb->date == cb->at_time)
oidcpy(cb->oid, noid);
else if (oidcmp(noid, cb->oid))
- warning("Log for ref %s unexpectedly ended on %s.",
+ warning(_("log for ref %s unexpectedly ended on %s"),
cb->refname, show_date(cb->date, cb->tz,
DATE_MODE(RFC2822)));
oidcpy(&cb->ooid, ooid);
if (flags & GET_OID_QUIETLY)
exit(128);
else
- die("Log for %s is empty.", refname);
+ die(_("log for %s is empty"), refname);
}
if (cb.found_it)
return 0;
if ((new_oid && !is_null_oid(new_oid)) ?
check_refname_format(refname, REFNAME_ALLOW_ONELEVEL) :
!refname_is_safe(refname)) {
- strbuf_addf(err, "refusing to update ref with bad name '%s'",
+ strbuf_addf(err, _("refusing to update ref with bad name '%s'"),
refname);
return -1;
}
}
}
if (ret) {
- const char *str = "update_ref failed for ref '%s': %s";
+ const char *str = _("update_ref failed for ref '%s': %s");
switch (onerr) {
case UPDATE_REFS_MSG_ON_ERR:
if (!cmp) {
strbuf_addf(err,
- "multiple updates for ref '%s' not allowed.",
+ _("multiple updates for ref '%s' not allowed"),
refnames->items[i].string);
return 1;
} else if (cmp > 0) {
continue;
if (!refs_read_raw_ref(refs, dirname.buf, &oid, &referent, &type)) {
- strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ strbuf_addf(err, _("'%s' exists; cannot create '%s'"),
dirname.buf, refname);
goto cleanup;
}
if (extras && string_list_has_string(extras, dirname.buf)) {
- strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ strbuf_addf(err, _("cannot process '%s' and '%s' at the same time"),
refname, dirname.buf);
goto cleanup;
}
string_list_has_string(skip, iter->refname))
continue;
- strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ strbuf_addf(err, _("'%s' exists; cannot create '%s'"),
iter->refname, refname);
ref_iterator_abort(iter);
goto cleanup;
extra_refname = find_descendant_ref(dirname.buf, extras, skip);
if (extra_refname)
- strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ strbuf_addf(err, _("cannot process '%s' and '%s' at the same time"),
refname, extra_refname);
else
ret = 0;
/* Follow "normalized" - ie "refs/.." symlinks by hand */
if (S_ISLNK(st.st_mode)) {
strbuf_reset(&sb_contents);
- if (strbuf_readlink(&sb_contents, path, 0) < 0) {
+ if (strbuf_readlink(&sb_contents, path, st.st_size) < 0) {
if (errno == ENOENT || errno == EINVAL)
/* inconsistent with lstat; retry */
goto stat_ref;
int fetch)
{
if (!refspec_item_init(item, refspec, fetch))
- die("Invalid refspec '%s'", refspec);
+ die(_("invalid refspec '%s'"), refspec);
}
void refspec_item_clear(struct refspec_item *item)
static const struct ref *find_ref_by_name_abbrev(const struct ref *refs, const char *name)
{
const struct ref *ref;
+ const struct ref *best_match = NULL;
+ int best_score = 0;
+
for (ref = refs; ref; ref = ref->next) {
- if (refname_match(name, ref->name))
- return ref;
+ int score = refname_match(name, ref->name);
+
+ if (best_score < score) {
+ best_match = ref;
+ best_score = score;
+ }
}
- return NULL;
+ return best_match;
}
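A stand-alone sketch of why the score returned by refname_match()
matters here (the rule list is shortened and illustrative; the real
ref_rev_parse_rules has more entries): earlier, more exact rules return
a higher value, so when a remote advertises both refs/tags/v1 and
refs/heads/v1, asking for "v1" now resolves to the tag, matching the
local "git rev-parse" disambiguation order instead of whichever ref
happened to come first in the advertisement.

	#include <stdio.h>
	#include <string.h>

	static const char *rules[] = {
		"%s", "refs/%s", "refs/tags/%s", "refs/heads/%s", NULL,
	};

	static int match_score(const char *abbrev, const char *full)
	{
		char buf[256];
		int i, n;

		for (n = 0; rules[n]; n++)
			;	/* count the rules */
		for (i = 0; rules[i]; i++) {
			snprintf(buf, sizeof(buf), rules[i], abbrev);
			if (!strcmp(buf, full))
				return n - i;	/* earlier rule, higher score */
		}
		return 0;
	}

	int main(void)
	{
		printf("%d\n", match_score("v1", "refs/tags/v1"));	/* 2 */
		printf("%d\n", match_score("v1", "refs/heads/v1"));	/* 1 */
		return 0;
	}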
struct ref *get_remote_ref(const struct ref *remote_refs, const char *name)
if (get_oid_hex(hash, &repl_obj->original.oid)) {
free(repl_obj);
- warning("bad replace ref name: %s", refname);
+ warning(_("bad replace ref name: %s"), refname);
return 0;
}
/* Register new object */
if (oidmap_put(the_repository->objects->replace_map, repl_obj))
- die("duplicate replace ref: %s", refname);
+ die(_("duplicate replace ref: %s"), refname);
return 0;
}
* replacement object's name (replaced recursively, if necessary).
* The return value is either oid or a pointer to a
* permanently-allocated value. This function always respects replace
- * references, regardless of the value of check_replace_refs.
+ * references, regardless of the value of read_replace_refs.
*/
const struct object_id *do_lookup_replace_object(struct repository *r,
const struct object_id *oid)
return cur;
cur = &repl_obj->replacement;
}
- die("replace depth too high for object %s", oid_to_hex(oid));
+ die(_("replace depth too high for object %s"), oid_to_hex(oid));
}
static inline const struct object_id *lookup_replace_object(struct repository *r,
const struct object_id *oid)
{
- if (!check_replace_refs ||
+ if (!read_replace_refs ||
(r->objects->replace_map &&
r->objects->replace_map->map.tablesize == 0))
return oid;
case REPLAY_INTERACTIVE_REBASE:
return N_("rebase -i");
}
- die(_("Unknown action: %d"), opts->action);
+ die(_("unknown action: %d"), opts->action);
}
struct commit_message {
strbuf_addch(&buf, *(message++));
else
strbuf_addf(&buf, "'\\\\%c'", *(message++));
+ strbuf_addch(&buf, '\'');
res = write_message(buf.buf, buf.len, rebase_path_author_script(), 1);
strbuf_release(&buf);
return res;
const char *keys[] = {
"GIT_AUTHOR_NAME=", "GIT_AUTHOR_EMAIL=", "GIT_AUTHOR_DATE="
};
- char *in, *out, *eol;
- int i = 0, len;
+ struct strbuf out = STRBUF_INIT;
+ char *in, *eol;
+ const char *val[3];
+ int i = 0;
if (strbuf_read_file(buf, rebase_path_author_script(), 256) <= 0)
return NULL;
/* dequote values and construct ident line in-place */
- for (in = out = buf->buf; i < 3 && in - buf->buf < buf->len; i++) {
+ for (in = buf->buf; i < 3 && in - buf->buf < buf->len; i++) {
if (!skip_prefix(in, keys[i], (const char **)&in)) {
- warning("could not parse '%s' (looking for '%s'",
+ warning(_("could not parse '%s' (looking for '%s'"),
rebase_path_author_script(), keys[i]);
return NULL;
}
eol = strchrnul(in, '\n');
*eol = '\0';
- sq_dequote(in);
- len = strlen(in);
-
- if (i > 0) /* separate values by spaces */
- *(out++) = ' ';
- if (i == 1) /* email needs to be surrounded by <...> */
- *(out++) = '<';
- memmove(out, in, len);
- out += len;
- if (i == 1) /* email needs to be surrounded by <...> */
- *(out++) = '>';
+ if (!sq_dequote(in)) {
+ warning(_("bad quoting on %s value in '%s'"),
+ keys[i], rebase_path_author_script());
+ return NULL;
+ }
+ val[i] = in;
in = eol + 1;
}
if (i < 3) {
- warning("could not parse '%s' (looking for '%s')",
+ warning(_("could not parse '%s' (looking for '%s')"),
rebase_path_author_script(), keys[i]);
return NULL;
}
- buf->len = out - buf->buf;
+ /* validate date since fmt_ident() will die() on bad value */
+ if (parse_date(val[2], &out)) {
+ warning(_("invalid date format '%s' in '%s'"),
+ val[2], rebase_path_author_script());
+ strbuf_release(&out);
+ return NULL;
+ }
+
+ strbuf_reset(&out);
+ strbuf_addstr(&out, fmt_ident(val[0], val[1], val[2], 0));
+ strbuf_swap(buf, &out);
+ strbuf_release(&out);
return buf->buf;
}
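A stand-alone sketch of the author-script file the rewritten parser
consumes (the values are illustrative): three KEY='sq-quoted value'
lines that, after dequoting and date validation, are handed to
fmt_ident(). This demo only strips the outer quotes; the real
sq_dequote() also unescapes embedded quote sequences.

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char script[] =
			"GIT_AUTHOR_NAME='A U Thor'\n"
			"GIT_AUTHOR_EMAIL='author@example.com'\n"
			"GIT_AUTHOR_DATE='@1234567890 +0000'\n";
		char *val[3];
		char *line = script;
		int i;

		for (i = 0; i < 3; i++) {
			char *eol = strchr(line, '\n');

			*eol = '\0';
			val[i] = strchr(line, '\'') + 1;	/* skip KEY=' */
			eol[-1] = '\0';				/* drop closing ' */
			line = eol + 1;
		}
		/* a real fmt_ident() call would also normalize the date */
		printf("%s <%s> %s\n", val[0], val[1], val[2]);
		return 0;
	}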
{
if (command < TODO_COMMENT)
return todo_command_info[command].str;
- die("Unknown command: %d", command);
+ die(_("unknown command: %d"), command);
}
static char command_to_char(const enum todo_command command)
if (intend_to_amend())
return -1;
- fprintf(stderr, "You can amend the commit now, with\n"
- "\n"
- " git commit --amend %s\n"
- "\n"
- "Once you are satisfied with your changes, run\n"
- "\n"
- " git rebase --continue\n", gpg_sign_opt_quoted(opts));
+ fprintf(stderr,
+ _("You can amend the commit now, with\n"
+ "\n"
+ " git commit --amend %s\n"
+ "\n"
+ "Once you are satisfied with your changes, run\n"
+ "\n"
+ " git rebase --continue\n"),
+ gpg_sign_opt_quoted(opts));
} else if (exit_code)
- fprintf(stderr, "Could not apply %s... %.*s\n",
+ fprintf_ln(stderr, _("Could not apply %s... %.*s"),
short_commit_name(commit), subject_len, subject);
return exit_code;
struct object_id head_oid;
if (len == 1 && *name == '#')
- return error("Illegal label name: '%.*s'", len, name);
+ return error(_("illegal label name: '%.*s'"), len, name);
strbuf_addf(&ref_name, "refs/rewritten/%.*s", len, name);
strbuf_addf(&msg, "rebase -i (label) '%.*s'", len, name);
static void git_hash_unknown_init(git_hash_ctx *ctx)
{
- die("trying to init unknown hash");
+ BUG("trying to init unknown hash");
}
static void git_hash_unknown_update(git_hash_ctx *ctx, const void *data, size_t len)
{
- die("trying to update unknown hash");
+ BUG("trying to update unknown hash");
}
static void git_hash_unknown_final(unsigned char *hash, git_hash_ctx *ctx)
{
- die("trying to finalize unknown hash");
+ BUG("trying to finalize unknown hash");
}
const struct git_hash_algo hash_algos[GIT_HASH_NALGOS] = {
/* Detect cases where alternate disappeared */
if (!is_directory(path->buf)) {
- error("object directory %s does not exist; "
- "check .git/objects/info/alternates.",
+ error(_("object directory %s does not exist; "
+ "check .git/objects/info/alternates"),
path->buf);
return 0;
}
strbuf_addstr(&pathbuf, entry);
if (strbuf_normalize_path(&pathbuf) < 0 && relative_base) {
- error("unable to normalize alternate object path: %s",
+ error(_("unable to normalize alternate object path: %s"),
pathbuf.buf);
strbuf_release(&pathbuf);
return -1;
return;
if (depth > 5) {
- error("%s: ignoring alternate object stores, nesting too deep.",
+ error(_("%s: ignoring alternate object stores, nesting too deep"),
relative_base);
return;
}
strbuf_add_absolute_path(&objdirbuf, r->objects->objectdir);
if (strbuf_normalize_path(&objdirbuf) < 0)
- die("unable to normalize object directory: %s",
+ die(_("unable to normalize object directory: %s"),
objdirbuf.buf);
while (*alt) {
hold_lock_file_for_update(&lock, alts, LOCK_DIE_ON_ERROR);
out = fdopen_lock_file(&lock, "w");
if (!out)
- die_errno("unable to fdopen alternates lockfile");
+ die_errno(_("unable to fdopen alternates lockfile"));
in = fopen(alts, "r");
if (in) {
fclose(in);
}
else if (errno != ENOENT)
- die_errno("unable to read alternates file");
+ die_errno(_("unable to read alternates file"));
if (found) {
rollback_lock_file(&lock);
} else {
fprintf_or_die(out, "%s\n", reference);
if (commit_lock_file(&lock))
- die_errno("unable to move new alternates file into place");
+ die_errno(_("unable to move new alternates file into place"));
if (the_repository->objects->alt_odb_tail)
link_alt_odb_entries(the_repository, reference,
'\n', NULL, 0);
limit = SIZE_MAX;
}
if (length > limit)
- die("attempting to mmap %"PRIuMAX" over limit %"PRIuMAX,
+ die(_("attempting to mmap %"PRIuMAX" over limit %"PRIuMAX),
(uintmax_t)length, (uintmax_t)limit);
}
{
void *ret = xmmap_gently(start, length, prot, flags, fd, offset);
if (ret == MAP_FAILED)
- die_errno("mmap failed");
+ die_errno(_("mmap failed"));
return ret;
}
*size = xsize_t(st.st_size);
if (!*size) {
/* mmap() is forbidden on empty files */
- error("object file %s is empty", path);
+ error(_("object file %s is empty"), path);
return NULL;
}
map = xmmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
}
if (status < 0)
- error("corrupt loose object '%s'", sha1_to_hex(sha1));
+ error(_("corrupt loose object '%s'"), sha1_to_hex(sha1));
else if (stream->avail_in)
- error("garbage at end of loose object '%s'",
+ error(_("garbage at end of loose object '%s'"),
sha1_to_hex(sha1));
free(buf);
return NULL;
if ((flags & OBJECT_INFO_ALLOW_UNKNOWN_TYPE) && (type < 0))
type = 0;
else if (type < 0)
- die("invalid object type");
+ die(_("invalid object type"));
if (oi->typep)
*oi->typep = type;
*oi->disk_sizep = mapsize;
if ((flags & OBJECT_INFO_ALLOW_UNKNOWN_TYPE)) {
if (unpack_sha1_header_to_strbuf(&stream, map, mapsize, hdr, sizeof(hdr), &hdrbuf) < 0)
- status = error("unable to unpack %s header with --allow-unknown-type",
+ status = error(_("unable to unpack %s header with --allow-unknown-type"),
sha1_to_hex(sha1));
} else if (unpack_sha1_header(&stream, map, mapsize, hdr, sizeof(hdr)) < 0)
- status = error("unable to unpack %s header",
+ status = error(_("unable to unpack %s header"),
sha1_to_hex(sha1));
if (status < 0)
; /* Do nothing */
else if (hdrbuf.len) {
if ((status = parse_sha1_header_extended(hdrbuf.buf, oi, flags)) < 0)
- status = error("unable to parse %s header with --allow-unknown-type",
+ status = error(_("unable to parse %s header with --allow-unknown-type"),
sha1_to_hex(sha1));
} else if ((status = parse_sha1_header_extended(hdr, oi, flags)) < 0)
- status = error("unable to parse %s header", sha1_to_hex(sha1));
+ status = error(_("unable to parse %s header"), sha1_to_hex(sha1));
if (status >= 0 && oi->contentp) {
*oi->contentp = unpack_sha1_rest(&stream, hdr,
return data;
if (errno && errno != ENOENT)
- die_errno("failed to read object %s", oid_to_hex(oid));
+ die_errno(_("failed to read object %s"), oid_to_hex(oid));
/* die if we replaced an object with one that does not exist */
if (repl != oid)
- die("replacement %s not found for %s",
+ die(_("replacement %s not found for %s"),
oid_to_hex(repl), oid_to_hex(oid));
if (!stat_sha1_file(the_repository, repl->hash, &st, &path))
- die("loose object %s (stored in %s) is corrupt",
+ die(_("loose object %s (stored in %s) is corrupt"),
oid_to_hex(repl), path);
if ((p = has_packed_and_bad(repl->hash)) != NULL)
- die("packed object %s (stored in %s) is corrupt",
+ die(_("packed object %s (stored in %s) is corrupt"),
oid_to_hex(repl), p->pack_name);
return NULL;
unlink_or_warn(tmpfile);
if (ret) {
if (ret != EEXIST) {
- return error_errno("unable to write sha1 filename %s", filename);
+ return error_errno(_("unable to write sha1 filename %s"), filename);
}
/* FIXME!!! Collision check here ? */
}
out:
if (adjust_shared_perm(filename))
- return error("unable to set permission to '%s'", filename);
+ return error(_("unable to set permission to '%s'"), filename);
return 0;
}
static int write_buffer(int fd, const void *buf, size_t len)
{
if (write_in_full(fd, buf, len) < 0)
- return error_errno("file write error");
+ return error_errno(_("file write error"));
return 0;
}
if (fsync_object_files)
fsync_or_die(fd, "sha1 file");
if (close(fd) != 0)
- die_errno("error when closing sha1 file");
+ die_errno(_("error when closing sha1 file"));
}
/* Size of directory component, including the ending '/' */
fd = create_tmpfile(&tmp_file, filename.buf);
if (fd < 0) {
if (errno == EACCES)
- return error("insufficient permission for adding an object to repository database %s", get_object_directory());
+ return error(_("insufficient permission for adding an object to repository database %s"), get_object_directory());
else
- return error_errno("unable to create temporary file");
+ return error_errno(_("unable to create temporary file"));
}
/* Set it up */
ret = git_deflate(&stream, Z_FINISH);
the_hash_algo->update_fn(&c, in0, stream.next_in - in0);
if (write_buffer(fd, compressed, stream.next_out - compressed) < 0)
- die("unable to write sha1 file");
+ die(_("unable to write sha1 file"));
stream.next_out = compressed;
stream.avail_out = sizeof(compressed);
} while (ret == Z_OK);
if (ret != Z_STREAM_END)
- die("unable to deflate new object %s (%d)", oid_to_hex(oid),
+ die(_("unable to deflate new object %s (%d)"), oid_to_hex(oid),
ret);
ret = git_deflate_end_gently(&stream);
if (ret != Z_OK)
- die("deflateEnd on object %s failed (%d)", oid_to_hex(oid),
+ die(_("deflateEnd on object %s failed (%d)"), oid_to_hex(oid),
ret);
the_hash_algo->final_fn(parano_oid.hash, &c);
if (oidcmp(oid, ¶no_oid) != 0)
- die("confused by unstable object source data for %s",
+ die(_("confused by unstable object source data for %s"),
oid_to_hex(oid));
close_sha1_file(fd);
utb.actime = mtime;
utb.modtime = mtime;
if (utime(tmp_file.buf, &utb) < 0)
- warning_errno("failed utime() on %s", tmp_file.buf);
+ warning_errno(_("failed utime() on %s"), tmp_file.buf);
}
return finalize_object_file(tmp_file.buf, filename.buf);
return 0;
buf = read_object(oid->hash, &type, &len);
if (!buf)
- return error("cannot read sha1_file for %s", oid_to_hex(oid));
+ return error(_("cannot read sha1_file for %s"), oid_to_hex(oid));
hdrlen = xsnprintf(hdr, sizeof(hdr), "%s %lu", type_name(type), len) + 1;
ret = write_loose_object(oid, hdr, hdrlen, buf, len, mtime);
free(buf);
struct commit c;
memset(&c, 0, sizeof(c));
if (parse_commit_buffer(the_repository, &c, buf, size, 0))
- die("corrupt commit");
+ die(_("corrupt commit"));
}
static void check_tag(const void *buf, size_t size)
struct tag t;
memset(&t, 0, sizeof(t));
if (parse_tag_buffer(the_repository, &t, buf, size))
- die("corrupt tag");
+ die(_("corrupt tag"));
}
static int index_mem(struct object_id *oid, void *buf, size_t size,
char *buf = xmalloc(size);
ssize_t read_result = read_in_full(fd, buf, size);
if (read_result < 0)
- ret = error_errno("read error while indexing %s",
+ ret = error_errno(_("read error while indexing %s"),
path ? path : "<unknown>");
else if (read_result != size)
- ret = error("short read while indexing %s",
+ ret = error(_("short read while indexing %s"),
path ? path : "<unknown>");
else
ret = index_mem(oid, buf, size, type, path, flags);
if (fd < 0)
return error_errno("open(\"%s\")", path);
if (index_fd(oid, fd, st, OBJ_BLOB, path, flags) < 0)
- return error("%s: failed to insert into database",
+ return error(_("%s: failed to insert into database"),
path);
break;
case S_IFLNK:
if (!(flags & HASH_WRITE_OBJECT))
hash_object_file(sb.buf, sb.len, blob_type, oid);
else if (write_object_file(sb.buf, sb.len, blob_type, oid))
- rc = error("%s: failed to insert into database", path);
+ rc = error(_("%s: failed to insert into database"), path);
strbuf_release(&sb);
break;
case S_IFDIR:
return resolve_gitlink_ref(path, "HEAD", oid);
default:
- return error("%s: unsupported file type", path);
+ return error(_("%s: unsupported file type"), path);
}
return rc;
}
{
enum object_type type = oid_object_info(the_repository, oid, NULL);
if (type < 0)
- die("%s is not a valid object", oid_to_hex(oid));
+ die(_("%s is not a valid object"), oid_to_hex(oid));
if (type != expect)
- die("%s is not a valid '%s' object", oid_to_hex(oid),
+ die(_("%s is not a valid '%s' object"), oid_to_hex(oid),
type_name(expect));
}
dir = opendir(path->buf);
if (!dir) {
if (errno != ENOENT)
- r = error_errno("unable to open %s", path->buf);
+ r = error_errno(_("unable to open %s"), path->buf);
strbuf_setlen(path, origlen);
return r;
}
git_inflate_end(stream);
if (status != Z_STREAM_END) {
- error("corrupt loose object '%s'", sha1_to_hex(expected_sha1));
+ error(_("corrupt loose object '%s'"), sha1_to_hex(expected_sha1));
return -1;
}
if (stream->avail_in) {
- error("garbage at end of loose object '%s'",
+ error(_("garbage at end of loose object '%s'"),
sha1_to_hex(expected_sha1));
return -1;
}
the_hash_algo->final_fn(real_sha1, &c);
if (hashcmp(expected_sha1, real_sha1)) {
- error("sha1 mismatch for %s (expected %s)", path,
+ error(_("sha1 mismatch for %s (expected %s)"), path,
sha1_to_hex(expected_sha1));
return -1;
}
map = map_sha1_file_1(the_repository, path, NULL, &mapsize);
if (!map) {
- error_errno("unable to mmap %s", path);
+ error_errno(_("unable to mmap %s"), path);
goto out;
}
if (unpack_sha1_header(&stream, map, mapsize, hdr, sizeof(hdr)) < 0) {
- error("unable to unpack header of %s", path);
+ error(_("unable to unpack header of %s"), path);
goto out;
}
*type = parse_sha1_header(hdr, size);
if (*type < 0) {
- error("unable to parse header of %s", path);
+ error(_("unable to parse header of %s"), path);
git_inflate_end(&stream);
goto out;
}
} else {
*contents = unpack_sha1_rest(&stream, hdr, *size, expected_oid->hash);
if (!*contents) {
- error("unable to unpack contents of %s", path);
+ error(_("unable to unpack contents of %s"), path);
git_inflate_end(&stream);
goto out;
}
if (check_object_signature(expected_oid, *contents,
*size, type_name(*type))) {
- error("sha1 mismatch for %s (expected %s)", path,
+ error(_("sha1 mismatch for %s (expected %s)"), path,
oid_to_hex(expected_oid));
free(*contents);
goto out;
-Subproject commit 19d97bf5af05312267c2e874ee6bcf584d9e9681
+Subproject commit 232357eb2ea0397388254a4b188333a227bf5b10
#define SHA1DC_BIGENDIAN
/* Not under GCC-alike or glibc or *BSD or newlib or <processor whitelist> */
+#elif (defined(_AIX))
+
+/*
+ * Defines Big Endian on a whitelist of OSs that are known to be Big
+ * Endian-only. See
+ * https://public-inbox.org/git/93056823-2740-d072-1ebd-46b440b33d7e@felt.demon.nl/
+ */
+#define SHA1DC_BIGENDIAN
+
+/* Not under GCC-alike or glibc or *BSD or newlib or <processor whitelist> or <os whitelist> */
#elif defined(SHA1DC_ON_INTEL_LIKE_PROCESSOR)
/*
* As a last resort before we do anything else we're not 100% sure
* about below, we blacklist specific processors here. We could add
* more, see e.g. https://wiki.debian.org/ArchitectureSpecificsMemo
*/
-#else /* Not under GCC-alike or glibc or *BSD or newlib or <processor whitelist> or <processor blacklist> */
+#else /* Not under GCC-alike or glibc or *BSD or newlib or <processor whitelist> or <os whitelist> or <processor blacklist> */
/* We do nothing more here for now */
/*#error "Uncomment this to see if you fall through all the detection"*/
int strbuf_reencode(struct strbuf *sb, const char *from, const char *to)
{
char *out;
- int len;
+ size_t len;
if (same_encoding(from, to))
return 0;
int strbuf_cmp(const struct strbuf *a, const struct strbuf *b)
{
- int len = a->len < b->len ? a->len: b->len;
+ size_t len = a->len < b->len ? a->len : b->len;
int cmp = memcmp(a->buf, b->buf, len);
if (cmp)
return cmp;
void strbuf_addbuf_percentquote(struct strbuf *dst, const struct strbuf *src)
{
- int i, len = src->len;
+ size_t i, len = src->len;
for (i = 0; i < len; i++) {
if (src->buf[i] == '%')
hint = 32;
while (hint < STRBUF_MAXLINK) {
- int len;
+ ssize_t len;
strbuf_grow(sb, hint);
len = readlink(path, sb->buf, hint);
{
if (bytes > 1 << 30) {
strbuf_addf(buf, "%u.%2.2u GiB",
- (int)(bytes >> 30),
- (int)(bytes & ((1 << 30) - 1)) / 10737419);
+ (unsigned)(bytes >> 30),
+ (unsigned)(bytes & ((1 << 30) - 1)) / 10737419);
} else if (bytes > 1 << 20) {
- int x = bytes + 5243; /* for rounding */
+ unsigned x = bytes + 5243; /* for rounding */
strbuf_addf(buf, "%u.%2.2u MiB",
x >> 20, ((x & ((1 << 20) - 1)) * 100) >> 20);
} else if (bytes > 1 << 10) {
- int x = bytes + 5; /* for rounding */
+ unsigned x = bytes + 5; /* for rounding */
strbuf_addf(buf, "%u.%2.2u KiB",
x >> 10, ((x & ((1 << 10) - 1)) * 100) >> 10);
} else {
- strbuf_addf(buf, "%u bytes", (int)bytes);
+ strbuf_addf(buf, "%u bytes", (unsigned)bytes);
}
}
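The magic constants above are worth a second look; a quick sketch
verifying them: 10737419 is 2^30/100 rounded (one hundredth of a GiB,
the divisor that turns the remainder into two decimal digits), while
5243 is 2^20/200 rounded (half of one hundredth of a MiB, added so the
later truncation rounds to the nearest hundredth; the 5 added at KiB
scale plays the same role, since 2^10/200 is about 5).

	#include <stdio.h>

	int main(void)
	{
		printf("%.2f\n", (1 << 30) / 100.0);	/* 10737418.24 */
		printf("%.2f\n", (1 << 20) / 200.0);	/* 5242.88 */
		return 0;
	}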
*/
void strbuf_stripspace(struct strbuf *sb, int skip_comments)
{
- int empties = 0;
+ size_t empties = 0;
size_t i, j, len, newlen;
char *eol;
--- /dev/null
+#include "test-tool.h"
+#include "cache.h"
+#include "json-writer.h"
+
+static const char *expect_obj1 = "{\"a\":\"abc\",\"b\":42,\"c\":true}";
+static const char *expect_obj2 = "{\"a\":-1,\"b\":2147483647,\"c\":0}";
+static const char *expect_obj3 = "{\"a\":0,\"b\":4294967295,\"c\":9223372036854775807}";
+static const char *expect_obj4 = "{\"t\":true,\"f\":false,\"n\":null}";
+static const char *expect_obj5 = "{\"abc\\tdef\":\"abc\\\\def\"}";
+static const char *expect_obj6 = "{\"a\":3.14}";
+
+static const char *pretty_obj1 = ("{\n"
+ " \"a\": \"abc\",\n"
+ " \"b\": 42,\n"
+ " \"c\": true\n"
+ "}");
+static const char *pretty_obj2 = ("{\n"
+ " \"a\": -1,\n"
+ " \"b\": 2147483647,\n"
+ " \"c\": 0\n"
+ "}");
+static const char *pretty_obj3 = ("{\n"
+ " \"a\": 0,\n"
+ " \"b\": 4294967295,\n"
+ " \"c\": 9223372036854775807\n"
+ "}");
+static const char *pretty_obj4 = ("{\n"
+ " \"t\": true,\n"
+ " \"f\": false,\n"
+ " \"n\": null\n"
+ "}");
+
+static struct json_writer obj1 = JSON_WRITER_INIT;
+static struct json_writer obj2 = JSON_WRITER_INIT;
+static struct json_writer obj3 = JSON_WRITER_INIT;
+static struct json_writer obj4 = JSON_WRITER_INIT;
+static struct json_writer obj5 = JSON_WRITER_INIT;
+static struct json_writer obj6 = JSON_WRITER_INIT;
+
+static void make_obj1(int pretty)
+{
+ jw_object_begin(&obj1, pretty);
+ {
+ jw_object_string(&obj1, "a", "abc");
+ jw_object_intmax(&obj1, "b", 42);
+ jw_object_true(&obj1, "c");
+ }
+ jw_end(&obj1);
+}
+
+static void make_obj2(int pretty)
+{
+ jw_object_begin(&obj2, pretty);
+ {
+ jw_object_intmax(&obj2, "a", -1);
+ jw_object_intmax(&obj2, "b", 0x7fffffff);
+ jw_object_intmax(&obj2, "c", 0);
+ }
+ jw_end(&obj2);
+}
+
+static void make_obj3(int pretty)
+{
+ jw_object_begin(&obj3, pretty);
+ {
+ jw_object_intmax(&obj3, "a", 0);
+ jw_object_intmax(&obj3, "b", 0xffffffff);
+ jw_object_intmax(&obj3, "c", 0x7fffffffffffffffULL);
+ }
+ jw_end(&obj3);
+}
+
+static void make_obj4(int pretty)
+{
+ jw_object_begin(&obj4, pretty);
+ {
+ jw_object_true(&obj4, "t");
+ jw_object_false(&obj4, "f");
+ jw_object_null(&obj4, "n");
+ }
+ jw_end(&obj4);
+}
+
+static void make_obj5(int pretty)
+{
+ jw_object_begin(&obj5, pretty);
+ {
+ jw_object_string(&obj5, "abc" "\x09" "def", "abc" "\\" "def");
+ }
+ jw_end(&obj5);
+}
+
+static void make_obj6(int pretty)
+{
+ jw_object_begin(&obj6, pretty);
+ {
+ jw_object_double(&obj6, "a", 2, 3.14159);
+ }
+ jw_end(&obj6);
+}
+
+static const char *expect_arr1 = "[\"abc\",42,true]";
+static const char *expect_arr2 = "[-1,2147483647,0]";
+static const char *expect_arr3 = "[0,4294967295,9223372036854775807]";
+static const char *expect_arr4 = "[true,false,null]";
+
+static const char *pretty_arr1 = ("[\n"
+ " \"abc\",\n"
+ " 42,\n"
+ " true\n"
+ "]");
+static const char *pretty_arr2 = ("[\n"
+ " -1,\n"
+ " 2147483647,\n"
+ " 0\n"
+ "]");
+static const char *pretty_arr3 = ("[\n"
+ " 0,\n"
+ " 4294967295,\n"
+ " 9223372036854775807\n"
+ "]");
+static const char *pretty_arr4 = ("[\n"
+ " true,\n"
+ " false,\n"
+ " null\n"
+ "]");
+
+static struct json_writer arr1 = JSON_WRITER_INIT;
+static struct json_writer arr2 = JSON_WRITER_INIT;
+static struct json_writer arr3 = JSON_WRITER_INIT;
+static struct json_writer arr4 = JSON_WRITER_INIT;
+
+static void make_arr1(int pretty)
+{
+ jw_array_begin(&arr1, pretty);
+ {
+ jw_array_string(&arr1, "abc");
+ jw_array_intmax(&arr1, 42);
+ jw_array_true(&arr1);
+ }
+ jw_end(&arr1);
+}
+
+static void make_arr2(int pretty)
+{
+ jw_array_begin(&arr2, pretty);
+ {
+ jw_array_intmax(&arr2, -1);
+ jw_array_intmax(&arr2, 0x7fffffff);
+ jw_array_intmax(&arr2, 0);
+ }
+ jw_end(&arr2);
+}
+
+static void make_arr3(int pretty)
+{
+ jw_array_begin(&arr3, pretty);
+ {
+ jw_array_intmax(&arr3, 0);
+ jw_array_intmax(&arr3, 0xffffffff);
+ jw_array_intmax(&arr3, 0x7fffffffffffffffULL);
+ }
+ jw_end(&arr3);
+}
+
+static void make_arr4(int pretty)
+{
+ jw_array_begin(&arr4, pretty);
+ {
+ jw_array_true(&arr4);
+ jw_array_false(&arr4);
+ jw_array_null(&arr4);
+ }
+ jw_end(&arr4);
+}
+
+static char *expect_nest1 =
+ "{\"obj1\":{\"a\":\"abc\",\"b\":42,\"c\":true},\"arr1\":[\"abc\",42,true]}";
+
+static struct json_writer nest1 = JSON_WRITER_INIT;
+
+static void make_nest1(int pretty)
+{
+ jw_object_begin(&nest1, pretty);
+ {
+ jw_object_sub_jw(&nest1, "obj1", &obj1);
+ jw_object_sub_jw(&nest1, "arr1", &arr1);
+ }
+ jw_end(&nest1);
+}
+
+static char *expect_inline1 =
+ "{\"obj1\":{\"a\":\"abc\",\"b\":42,\"c\":true},\"arr1\":[\"abc\",42,true]}";
+
+static char *pretty_inline1 =
+ ("{\n"
+ " \"obj1\": {\n"
+ " \"a\": \"abc\",\n"
+ " \"b\": 42,\n"
+ " \"c\": true\n"
+ " },\n"
+ " \"arr1\": [\n"
+ " \"abc\",\n"
+ " 42,\n"
+ " true\n"
+ " ]\n"
+ "}");
+
+static struct json_writer inline1 = JSON_WRITER_INIT;
+
+static void make_inline1(int pretty)
+{
+ jw_object_begin(&inline1, pretty);
+ {
+ jw_object_inline_begin_object(&inline1, "obj1");
+ {
+ jw_object_string(&inline1, "a", "abc");
+ jw_object_intmax(&inline1, "b", 42);
+ jw_object_true(&inline1, "c");
+ }
+ jw_end(&inline1);
+ jw_object_inline_begin_array(&inline1, "arr1");
+ {
+ jw_array_string(&inline1, "abc");
+ jw_array_intmax(&inline1, 42);
+ jw_array_true(&inline1);
+ }
+ jw_end(&inline1);
+ }
+ jw_end(&inline1);
+}
+
+static char *expect_inline2 =
+ "[[1,2],[3,4],{\"a\":\"abc\"}]";
+
+static char *pretty_inline2 =
+ ("[\n"
+ " [\n"
+ " 1,\n"
+ " 2\n"
+ " ],\n"
+ " [\n"
+ " 3,\n"
+ " 4\n"
+ " ],\n"
+ " {\n"
+ " \"a\": \"abc\"\n"
+ " }\n"
+ "]");
+
+static struct json_writer inline2 = JSON_WRITER_INIT;
+
+static void make_inline2(int pretty)
+{
+ jw_array_begin(&inline2, pretty);
+ {
+ jw_array_inline_begin_array(&inline2);
+ {
+ jw_array_intmax(&inline2, 1);
+ jw_array_intmax(&inline2, 2);
+ }
+ jw_end(&inline2);
+ jw_array_inline_begin_array(&inline2);
+ {
+ jw_array_intmax(&inline2, 3);
+ jw_array_intmax(&inline2, 4);
+ }
+ jw_end(&inline2);
+ jw_array_inline_begin_object(&inline2);
+ {
+ jw_object_string(&inline2, "a", "abc");
+ }
+ jw_end(&inline2);
+ }
+ jw_end(&inline2);
+}
+
+/*
+ * When super is compact, we expect subs to be compacted (even if originally
+ * pretty).
+ */
+static const char *expect_mixed1 =
+ ("{\"obj1\":{\"a\":\"abc\",\"b\":42,\"c\":true},"
+ "\"arr1\":[\"abc\",42,true]}");
+
+/*
+ * When super is pretty, a compact sub (obj1) is kept compact and a pretty
+ * sub (arr1) is re-indented.
+ */
+static const char *pretty_mixed1 =
+ ("{\n"
+ " \"obj1\": {\"a\":\"abc\",\"b\":42,\"c\":true},\n"
+ " \"arr1\": [\n"
+ " \"abc\",\n"
+ " 42,\n"
+ " true\n"
+ " ]\n"
+ "}");
+
+static struct json_writer mixed1 = JSON_WRITER_INIT;
+
+static void make_mixed1(int pretty)
+{
+ jw_init(&obj1);
+ jw_init(&arr1);
+
+ make_obj1(0); /* obj1 is compact */
+ make_arr1(1); /* arr1 is pretty */
+
+ jw_object_begin(&mixed1, pretty);
+ {
+ jw_object_sub_jw(&mixed1, "obj1", &obj1);
+ jw_object_sub_jw(&mixed1, "arr1", &arr1);
+ }
+ jw_end(&mixed1);
+}
+
+static void cmp(const char *test, const struct json_writer *jw, const char *exp)
+{
+ if (!strcmp(jw->json.buf, exp))
+ return;
+
+ printf("error[%s]: observed '%s' expected '%s'\n",
+ test, jw->json.buf, exp);
+ exit(1);
+}
+
+#define t(v) do { make_##v(0); cmp(#v, &v, expect_##v); } while (0)
+#define p(v) do { make_##v(1); cmp(#v, &v, pretty_##v); } while (0)
+
+/*
+ * Run some basic regression tests with some known patterns.
+ * These tests also demonstrate how to use the jw_ API.
+ */
+static int unit_tests(void)
+{
+ /* compact (canonical) forms */
+ t(obj1);
+ t(obj2);
+ t(obj3);
+ t(obj4);
+ t(obj5);
+ t(obj6);
+
+ t(arr1);
+ t(arr2);
+ t(arr3);
+ t(arr4);
+
+ t(nest1);
+
+ t(inline1);
+ t(inline2);
+
+ jw_init(&obj1);
+ jw_init(&obj2);
+ jw_init(&obj3);
+ jw_init(&obj4);
+
+ jw_init(&arr1);
+ jw_init(&arr2);
+ jw_init(&arr3);
+ jw_init(&arr4);
+
+ jw_init(&inline1);
+ jw_init(&inline2);
+
+ /* pretty forms */
+ p(obj1);
+ p(obj2);
+ p(obj3);
+ p(obj4);
+
+ p(arr1);
+ p(arr2);
+ p(arr3);
+ p(arr4);
+
+ p(inline1);
+ p(inline2);
+
+ /* mixed forms */
+ t(mixed1);
+ jw_init(&mixed1);
+ p(mixed1);
+
+ return 0;
+}
+
+static void get_s(int line_nr, char **s_in)
+{
+ *s_in = strtok(NULL, " ");
+ if (!*s_in)
+ die("line[%d]: expected: <s>", line_nr);
+}
+
+static void get_i(int line_nr, intmax_t *s_in)
+{
+ char *s;
+ char *endptr;
+
+ get_s(line_nr, &s);
+
+ errno = 0;
+ *s_in = strtol(s, &endptr, 10);
+ if (*endptr || errno == ERANGE)
+ die("line[%d]: invalid integer value", line_nr);
+}
+
+static void get_d(int line_nr, double *s_in)
+{
+ char *s;
+ char *endptr;
+
+ get_s(line_nr, &s);
+
+ errno = 0;
+ *s_in = strtod(s, &endptr);
+ if (*endptr || errno == ERANGE)
+ die("line[%d]: invalid float value", line_nr);
+}
+
+static int pretty;
+
+#define MAX_LINE_LENGTH (64 * 1024)
+
+static char *get_trimmed_line(char *buf, int buf_size)
+{
+ int len;
+
+ if (!fgets(buf, buf_size, stdin))
+ return NULL;
+
+ len = strlen(buf);
+ while (len > 0) {
+ char c = buf[len - 1];
+ if (c == '\n' || c == '\r' || c == ' ' || c == '\t')
+ buf[--len] = 0;
+ else
+ break;
+ }
+
+ while (*buf == ' ' || *buf == '\t')
+ buf++;
+
+ return buf;
+}
+
+static int scripted(void)
+{
+ struct json_writer jw = JSON_WRITER_INIT;
+ char buf[MAX_LINE_LENGTH];
+ char *line;
+ int line_nr = 0;
+
+ line = get_trimmed_line(buf, MAX_LINE_LENGTH);
+ if (!line)
+ return 0;
+
+ if (!strcmp(line, "object"))
+ jw_object_begin(&jw, pretty);
+ else if (!strcmp(line, "array"))
+ jw_array_begin(&jw, pretty);
+ else
+ die("expected first line to be 'object' or 'array'");
+
+ while ((line = get_trimmed_line(buf, MAX_LINE_LENGTH)) != NULL) {
+ char *verb;
+ char *key;
+ char *s_value;
+ intmax_t i_value;
+ double d_value;
+
+ line_nr++;
+
+ verb = strtok(line, " ");
+
+ if (!strcmp(verb, "end")) {
+ jw_end(&jw);
+ }
+ else if (!strcmp(verb, "object-string")) {
+ get_s(line_nr, &key);
+ get_s(line_nr, &s_value);
+ jw_object_string(&jw, key, s_value);
+ }
+ else if (!strcmp(verb, "object-int")) {
+ get_s(line_nr, &key);
+ get_i(line_nr, &i_value);
+ jw_object_intmax(&jw, key, i_value);
+ }
+ else if (!strcmp(verb, "object-double")) {
+ get_s(line_nr, &key);
+ get_i(line_nr, &i_value);
+ get_d(line_nr, &d_value);
+ jw_object_double(&jw, key, i_value, d_value);
+ }
+ else if (!strcmp(verb, "object-true")) {
+ get_s(line_nr, &key);
+ jw_object_true(&jw, key);
+ }
+ else if (!strcmp(verb, "object-false")) {
+ get_s(line_nr, &key);
+ jw_object_false(&jw, key);
+ }
+ else if (!strcmp(verb, "object-null")) {
+ get_s(line_nr, &key);
+ jw_object_null(&jw, key);
+ }
+ else if (!strcmp(verb, "object-object")) {
+ get_s(line_nr, &key);
+ jw_object_inline_begin_object(&jw, key);
+ }
+ else if (!strcmp(verb, "object-array")) {
+ get_s(line_nr, &key);
+ jw_object_inline_begin_array(&jw, key);
+ }
+ else if (!strcmp(verb, "array-string")) {
+ get_s(line_nr, &s_value);
+ jw_array_string(&jw, s_value);
+ }
+ else if (!strcmp(verb, "array-int")) {
+ get_i(line_nr, &i_value);
+ jw_array_intmax(&jw, i_value);
+ }
+ else if (!strcmp(verb, "array-double")) {
+ get_i(line_nr, &i_value);
+ get_d(line_nr, &d_value);
+ jw_array_double(&jw, i_value, d_value);
+ }
+ else if (!strcmp(verb, "array-true"))
+ jw_array_true(&jw);
+ else if (!strcmp(verb, "array-false"))
+ jw_array_false(&jw);
+ else if (!strcmp(verb, "array-null"))
+ jw_array_null(&jw);
+ else if (!strcmp(verb, "array-object"))
+ jw_array_inline_begin_object(&jw);
+ else if (!strcmp(verb, "array-array"))
+ jw_array_inline_begin_array(&jw);
+ else
+ die("unrecognized token: '%s'", verb);
+ }
+
+ if (!jw_is_terminated(&jw))
+ die("json not terminated: '%s'", jw.json.buf);
+
+ printf("%s\n", jw.json.buf);
+
+ strbuf_release(&jw.json);
+ return 0;
+}
+
+int cmd__json_writer(int argc, const char **argv)
+{
+ argc--; /* skip over "json-writer" arg */
+ argv++;
+
+ if (argc > 0 && argv[0][0] == '-') {
+ if (!strcmp(argv[0], "-u") || !strcmp(argv[0], "--unit"))
+ return unit_tests();
+
+ if (!strcmp(argv[0], "-p") || !strcmp(argv[0], "--pretty"))
+ pretty = 1;
+ }
+
+ return scripted();
+}
{ "genrandom", cmd__genrandom },
{ "hashmap", cmd__hashmap },
{ "index-version", cmd__index_version },
+ { "json-writer", cmd__json_writer },
{ "lazy-init-name-hash", cmd__lazy_init_name_hash },
{ "match-trees", cmd__match_trees },
{ "mergesort", cmd__mergesort },
int cmd__genrandom(int argc, const char **argv);
int cmd__hashmap(int argc, const char **argv);
int cmd__index_version(int argc, const char **argv);
+int cmd__json_writer(int argc, const char **argv);
int cmd__lazy_init_name_hash(int argc, const char **argv);
int cmd__match_trees(int argc, const char **argv);
int cmd__mergesort(int argc, const char **argv);
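For quick experimentation outside the test suite, the scripted mode of the
helper reads a small line-oriented language on stdin, one verb per line, and
prints the resulting JSON document. A minimal sketch, assuming a built source
tree with t/helper/test-tool available:

    cat >input <<\EOF
    object
    object-string name world
    object-int count 3
    end
    EOF
    test-tool json-writer <input     # prints {"name":"world","count":3}
    test-tool json-writer -p <input  # same document, pretty-printed

Because the parser splits tokens with strtok() on single spaces, scripted
string keys and values cannot themselves contain whitespace.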
"$TEST_DIRECTORY"/lib-gpg/ownertrust &&
gpg --homedir "${GNUPGHOME}" </dev/null >/dev/null 2>&1 \
--sign -u committer@example.com &&
- test_set_prereq GPG
+ test_set_prereq GPG &&
+ # Available key info:
+ # * see t/lib-gpg/gpgsm-gen-key.in
+ # To generate a new certificate (no passphrase):
+ # gpgsm --homedir /tmp/gpghome/ \
+ # -o /tmp/gpgsm.crt.user \
+ # --generate-key \
+ # --batch t/lib-gpg/gpgsm-gen-key.in
+ # To import certificate:
+ # gpgsm --homedir /tmp/gpghome/ \
+ # --import /tmp/gpgsm.crt.user
+ # To export into a .p12 we can later import:
+ # gpgsm --homedir /tmp/gpghome/ \
+ # -o t/lib-gpg/gpgsm_cert.p12 \
+ # --export-secret-key-p12 "committer@example.com"
+ echo | gpgsm --homedir "${GNUPGHOME}" 2>/dev/null \
+ --passphrase-fd 0 --pinentry-mode loopback \
+ --import "$TEST_DIRECTORY"/lib-gpg/gpgsm_cert.p12 &&
+ gpgsm --homedir "${GNUPGHOME}" 2>/dev/null -K \
+ | grep fingerprint: | cut -d" " -f4 | tr -d '\n' > \
+ "${GNUPGHOME}/trustlist.txt" &&
+ echo " S relax" >>"${GNUPGHOME}/trustlist.txt" &&
+ (gpgconf --kill gpg-agent >/dev/null 2>&1 || : ) &&
+ echo hello | gpgsm --homedir "${GNUPGHOME}" >/dev/null \
+ -u committer@example.com -o /dev/null --sign - 2>&1 &&
+ test_set_prereq GPGSM
;;
esac
fi
--- /dev/null
+Key-Type: RSA
+Key-Length: 2048
+Key-Usage: sign
+Serial: random
+Name-DN: CN=C O Mitter, O=Example, SN=C O, GN=Mitter
+Name-Email: committer@example.com
+Not-Before: 1970-01-01 00:00:00
+Not-After: 3000-01-01 00:00:00
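The template above backs the GPGSM prerequisite; the end-user configuration it
ultimately exercises is the new gpg.format/gpg.<format>.program pair. A sketch
with illustrative values (gpgsm is the built-in default program for the x509
format, and gpgsm looks keys up by email address):

    git config gpg.format x509
    git config gpg.x509.program gpgsm
    git config user.signingkey committer@example.com
    git tag -s -m "my signed tag" v1.0   # now signed via gpgsm/CMS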
--- /dev/null
+#!/bin/sh
+
+test_description='test json-writer JSON generation'
+. ./test-lib.sh
+
+test_expect_success 'unit test of json-writer routines' '
+ test-tool json-writer -u
+'
+
+test_expect_success 'trivial object' '
+ cat >expect <<-\EOF &&
+ {}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'trivial array' '
+ cat >expect <<-\EOF &&
+ []
+ EOF
+ cat >input <<-\EOF &&
+ array
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'simple object' '
+ cat >expect <<-\EOF &&
+ {"a":"abc","b":42,"c":3.14,"d":true,"e":false,"f":null}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-int b 42
+ object-double c 2 3.140
+ object-true d
+ object-false e
+ object-null f
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'simple array' '
+ cat >expect <<-\EOF &&
+ ["abc",42,3.14,true,false,null]
+ EOF
+ cat >input <<-\EOF &&
+ array
+ array-string abc
+ array-int 42
+ array-double 2 3.140
+ array-true
+ array-false
+ array-null
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'escape quoting string' '
+ cat >expect <<-\EOF &&
+ {"a":"abc\\def"}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc\def
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'escape quoting string 2' '
+ cat >expect <<-\EOF &&
+ {"a":"abc\"def"}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc"def
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'nested inline object' '
+ cat >expect <<-\EOF &&
+ {"a":"abc","b":42,"sub1":{"c":3.14,"d":true,"sub2":{"e":false,"f":null}}}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-int b 42
+ object-object sub1
+ object-double c 2 3.140
+ object-true d
+ object-object sub2
+ object-false e
+ object-null f
+ end
+ end
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'nested inline array' '
+ cat >expect <<-\EOF &&
+ ["abc",42,[3.14,true,[false,null]]]
+ EOF
+ cat >input <<-\EOF &&
+ array
+ array-string abc
+ array-int 42
+ array-array
+ array-double 2 3.140
+ array-true
+ array-array
+ array-false
+ array-null
+ end
+ end
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'nested inline object and array' '
+ cat >expect <<-\EOF &&
+ {"a":"abc","b":42,"sub1":{"c":3.14,"d":true,"sub2":[false,null]}}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-int b 42
+ object-object sub1
+ object-double c 2 3.140
+ object-true d
+ object-array sub2
+ array-false
+ array-null
+ end
+ end
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'nested inline object and array 2' '
+ cat >expect <<-\EOF &&
+ {"a":"abc","b":42,"sub1":{"c":3.14,"d":true,"sub2":[false,{"g":0,"h":1},null]}}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-int b 42
+ object-object sub1
+ object-double c 2 3.140
+ object-true d
+ object-array sub2
+ array-false
+ array-object
+ object-int g 0
+ object-int h 1
+ end
+ array-null
+ end
+ end
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'pretty nested inline object and array 2' '
+ sed -e "s/^|//" >expect <<-\EOF &&
+ |{
+ | "a": "abc",
+ | "b": 42,
+ | "sub1": {
+ | "c": 3.14,
+ | "d": true,
+ | "sub2": [
+ | false,
+ | {
+ | "g": 0,
+ | "h": 1
+ | },
+ | null
+ | ]
+ | }
+ |}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-int b 42
+ object-object sub1
+ object-double c 2 3.140
+ object-true d
+ object-array sub2
+ array-false
+ array-object
+ object-int g 0
+ object-int h 1
+ end
+ array-null
+ end
+ end
+ end
+ EOF
+ test-tool json-writer -p <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'inline object with no members' '
+ cat >expect <<-\EOF &&
+ {"a":"abc","empty":{},"b":42}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-object empty
+ end
+ object-int b 42
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'inline array with no members' '
+ cat >expect <<-\EOF &&
+ {"a":"abc","empty":[],"b":42}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-array empty
+ end
+ object-int b 42
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'larger empty example' '
+ cat >expect <<-\EOF &&
+ {"a":"abc","empty":[{},{},{},[],{}],"b":42}
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-array empty
+ array-object
+ end
+ array-object
+ end
+ array-object
+ end
+ array-array
+ end
+ array-object
+ end
+ end
+ object-int b 42
+ end
+ EOF
+ test-tool json-writer <input >actual &&
+ test_cmp expect actual
+'
+
+test_lazy_prereq PERLJSON '
+ perl -MJSON -e "exit 0"
+'
+
+# As a sanity check, ask Perl to parse our generated JSON and recursively
+# dump the resulting data in sorted order. Confirm that the result
+# matches our expectations.
+test_expect_success PERLJSON 'parse JSON using Perl' '
+ cat >expect <<-\EOF &&
+ row[0].a abc
+ row[0].b 42
+ row[0].sub1 hash
+ row[0].sub1.c 3.14
+ row[0].sub1.d 1
+ row[0].sub1.sub2 array
+ row[0].sub1.sub2[0] 0
+ row[0].sub1.sub2[1] hash
+ row[0].sub1.sub2[1].g 0
+ row[0].sub1.sub2[1].h 1
+ row[0].sub1.sub2[2] null
+ EOF
+ cat >input <<-\EOF &&
+ object
+ object-string a abc
+ object-int b 42
+ object-object sub1
+ object-double c 2 3.140
+ object-true d
+ object-array sub2
+ array-false
+ array-object
+ object-int g 0
+ object-int h 1
+ end
+ array-null
+ end
+ end
+ end
+ EOF
+ test-tool json-writer <input >output.json &&
+ perl "$TEST_DIRECTORY"/t0019/parse_json.perl <output.json >actual &&
+ test_cmp expect actual
+'
+
+test_done
--- /dev/null
+#!/usr/bin/perl
+use strict;
+use warnings;
+use JSON;
+
+sub dump_array {
+ my ($label_in, $ary_ref) = @_;
+ my @ary = @$ary_ref;
+
+ for ( my $i = 0; $i <= $#{ $ary_ref }; $i++ )
+ {
+ my $label = "$label_in\[$i\]";
+ dump_item($label, $ary[$i]);
+ }
+}
+
+sub dump_hash {
+ my ($label_in, $obj_ref) = @_;
+ my %obj = %$obj_ref;
+
+ foreach my $k (sort keys %obj) {
+ my $label = (length($label_in) > 0) ? "$label_in.$k" : "$k";
+ my $value = $obj{$k};
+
+ dump_item($label, $value);
+ }
+}
+
+sub dump_item {
+ my ($label_in, $value) = @_;
+ if (ref($value) eq 'ARRAY') {
+ print "$label_in array\n";
+ dump_array($label_in, $value);
+ } elsif (ref($value) eq 'HASH') {
+ print "$label_in hash\n";
+ dump_hash($label_in, $value);
+ } elsif (defined $value) {
+ print "$label_in $value\n";
+ } else {
+ print "$label_in null\n";
+ }
+}
+
+my $row = 0;
+while (<>) {
+ my $data = decode_json( $_ );
+ my $label = "row[$row]";
+
+ dump_hash($label, $data);
+ $row++;
+}
+
git checkout --quiet --no-progress . 2>git-stderr.log &&
grep "smudge write error at" git-stderr.log &&
- grep "error: external filter" git-stderr.log &&
+ test_i18ngrep "error: external filter" git-stderr.log &&
cat >expected.log <<-EOF &&
START
cycle
EOF
test_must_fail git config --get-all test.value 2>stderr &&
- grep "exceeded maximum include depth" stderr
+ test_i18ngrep "exceeded maximum include depth" stderr
'
test_done
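The grep-to-test_i18ngrep conversions here and in the surrounding tests are
about GETTEXT_POISON builds, in which translated strings are deliberately
scrambled; test_i18ngrep insists on the match only when the output is known to
be untranslated and degrades to success under poison. Schematically:

    grep "exceeded maximum include depth" stderr           # breaks under poison
    test_i18ngrep "exceeded maximum include depth" stderr  # poison-safe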
test_expect_success 'error on modifying repo config without repo' '
nongit test_must_fail git config a.b c 2>err &&
- grep "not in a git directory" err
+ test_i18ngrep "not in a git directory" err
'
cmdline_config="'foo.bar=from-cmdline'"
test_when_finished "rm -f o e" &&
git rev-parse --verify "master@{2005-05-26 23:33:01}" >o 2>e &&
test $B = $(cat o) &&
- test "warning: Log for ref $m has gap after $gd." = "$(cat e)"
+ test_i18ngrep -F "warning: log for ref $m has gap after $gd" e
'
test_expect_success 'Query "master@{2005-05-26 23:38:00}" (middle of history)' '
test_when_finished "rm -f o e" &&
test_when_finished "rm -f o e" &&
git rev-parse --verify "master@{2005-05-28}" >o 2>e &&
test $D = $(cat o) &&
- test "warning: Log for ref $m unexpectedly ended on $ld." = "$(cat e)"
+ test_i18ngrep -F "warning: log for ref $m unexpectedly ended on $ld" e
'
rm -f .git/$m .git/logs/$m expect
test_expect_success 'given old value for missing pseudoref, do not create' '
test_must_fail git update-ref PSEUDOREF $A $B 2>err &&
test_path_is_missing .git/PSEUDOREF &&
- grep "could not read ref" err
+ test_i18ngrep "could not read ref" err
'
test_expect_success 'create pseudoref' '
test_expect_success 'do not overwrite pseudoref with wrong old value' '
test_must_fail git update-ref PSEUDOREF $D $E 2>err &&
test $C = $(cat .git/PSEUDOREF) &&
- grep "unexpected object ID" err
+ test_i18ngrep "unexpected object ID" err
'
test_expect_success 'delete pseudoref' '
git update-ref PSEUDOREF $A &&
test_must_fail git update-ref -d PSEUDOREF $B 2>err &&
test $A = $(cat .git/PSEUDOREF) &&
- grep "unexpected object ID" err
+ test_i18ngrep "unexpected object ID" err
'
test_expect_success 'delete pseudoref with correct old value' '
test_when_finished git update-ref -d PSEUDOREF &&
test_must_fail git update-ref PSEUDOREF $B $Z 2>err &&
test $A = $(cat .git/PSEUDOREF) &&
- grep "already exists" err
+ test_i18ngrep "already exists" err
'
# Test --stdin
create $a $m
EOF
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: multiple updates for ref '"'"'$a'"'"' not allowed." err
+ test_i18ngrep "fatal: multiple updates for ref '"'"'$a'"'"' not allowed" err
'
test_expect_success 'stdin create ref works' '
test_expect_success 'stdin -z fails with duplicate refs' '
printf $F "create $a" "$m" "create $b" "$m" "create $a" "$m" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: multiple updates for ref '"'"'$a'"'"' not allowed." err
+ test_i18ngrep "fatal: multiple updates for ref '"'"'$a'"'"' not allowed" err
'
test_expect_success 'stdin -z create ref works' '
update HEAD $B
EOF
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: multiple updates for '\''HEAD'\'' (including one via its referent .refs/heads/target1.) are not allowed" err &&
+ test_i18ngrep "fatal: multiple updates for '\''HEAD'\'' (including one via its referent .refs/heads/target1.) are not allowed" err &&
echo "refs/heads/target1" >expect &&
git symbolic-ref HEAD >actual &&
test_cmp expect actual &&
update refs/heads/symref2 $B
EOF
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: multiple updates for '\''refs/heads/target2'\'' (including one via symref .refs/heads/symref2.) are not allowed" err &&
+ test_i18ngrep "fatal: multiple updates for '\''refs/heads/target2'\'' (including one via symref .refs/heads/symref2.) are not allowed" err &&
echo "refs/heads/target2" >expect &&
git symbolic-ref refs/heads/symref2 >actual &&
test_cmp expect actual &&
fi &&
printf "create $prefix/%s $C\n" $create >input &&
test_must_fail git update-ref --stdin <input 2>output.err &&
- grep -F "$error" output.err &&
+ test_i18ngrep -F "$error" output.err &&
git for-each-ref $prefix >actual &&
test_cmp unchanged actual
}
printf "%s\n" "delete $delname" "create $addname $D"
fi >commands &&
test_must_fail git update-ref --stdin <commands 2>output.err &&
- test_cmp expected-err output.err &&
+ test_i18ncmp expected-err output.err &&
printf "%s\n" "$C $delref" >expected-refs &&
git for-each-ref --format="%(objectname) %(refname)" $prefix/r >actual-refs &&
test_cmp expected-refs actual-refs
test_might_fail git rev-list --verify-objects refs/heads/bogus >/dev/null 2>out &&
cat out &&
- grep -q "error: sha1 mismatch 63ffffffffffffffffffffffffffffffffffffff" out
+ test_i18ngrep -q "error: sha1 mismatch 63ffffffffffffffffffffffffffffffffffffff" out
'
test_expect_success 'force fsck to ignore double author' '
}
status_uno_is_clean () {
- >status.expect &&
git status -uno --porcelain >status.actual &&
- test_cmp status.expect status.actual
+ test_must_be_empty status.actual
}
test_expect_success 'setup' '
git fetch repo_upstream2 &&
test_must_fail git worktree add ../foo foo &&
git -c checkout.defaultRemote=repo_upstream worktree add ../foo foo &&
- >status.expect &&
git status -uno --porcelain >status.actual &&
- test_cmp status.expect status.actual
+ test_must_be_empty status.actual
) &&
(
cd foo &&
cd top/sub &&
for f in ../y*
do
- echo "error: pathspec $sq$f$sq did not match any file(s) known to git."
+ echo "error: pathspec $sq$f$sq did not match any file(s) known to git"
done >expect.err &&
echo "Did you forget to ${sq}git add${sq}?" >>expect.err &&
ls ../x* >expect.out &&
test_must_fail git ls-files -c --error-unmatch ../[xy]* >actual.out 2>actual.err &&
test_cmp expect.out actual.out &&
- test_cmp expect.err actual.err
+ test_i18ncmp expect.err actual.err
)
'
cd top/sub &&
for f in ../x*
do
- echo "error: pathspec $sq$f$sq did not match any file(s) known to git."
+ echo "error: pathspec $sq$f$sq did not match any file(s) known to git"
done >expect.err &&
echo "Did you forget to ${sq}git add${sq}?" >>expect.err &&
ls ../y* >expect.out &&
test_must_fail git ls-files -o --error-unmatch ../[xy]* >actual.out 2>actual.err &&
test_cmp expect.out actual.out &&
- test_cmp expect.err actual.err
+ test_i18ncmp expect.err actual.err
)
'
test_expect_success 'existing directory reports concrete ref' '
test_must_fail git branch foo 2>stderr &&
- grep refs/heads/foo/bar/baz stderr
+ test_i18ngrep refs/heads/foo/bar/baz stderr
'
test_expect_success 'notice d/f conflict with existing ref' '
test "$(git rev-parse refs/notes/y)" = "$(git rev-parse NOTES_MERGE_PARTIAL^1)" &&
test "$(git rev-parse refs/notes/m)" != "$(git rev-parse NOTES_MERGE_PARTIAL^1)" &&
# Mention refs/notes/m, and its current and expected value in output
- grep -q "refs/notes/m" output &&
- grep -q "$(git rev-parse refs/notes/m)" output &&
- grep -q "$(git rev-parse NOTES_MERGE_PARTIAL^1)" output &&
+ test_i18ngrep -q "refs/notes/m" output &&
+ test_i18ngrep -q "$(git rev-parse refs/notes/m)" output &&
+ test_i18ngrep -q "$(git rev-parse NOTES_MERGE_PARTIAL^1)" output &&
# Verify that other notes refs has not changed (w, x, y and z)
verify_notes w &&
verify_notes x &&
test_might_fail git branch -D $1 &&
test_might_fail git rebase --abort
" &&
- git checkout -b $1 master
+ git checkout -b $1 ${2:-master}
}
test_expect_success 'drop' '
test_i18ngrep "$SQ-S\"S I Gner\"$SQ" err
'
+test_expect_success 'valid author header after --root swap' '
+ rebase_setup_and_clean author-header no-conflict-branch &&
+ set_fake_editor &&
+ FAKE_LINES="2 1" git rebase -i --root &&
+ git cat-file commit HEAD^ >out &&
+ grep "^author ..*> [0-9][0-9]* [-+][0-9][0-9][0-9][0-9]$" out
+'
+
test_done
test_cmp expect actual
'
+test_expect_success 'cherry-pick preserves sparse-checkout' '
+ pristine_detach initial &&
+ test_config core.sparseCheckout true &&
+ test_when_finished "
+ echo \"/*\" >.git/info/sparse-checkout
+ git read-tree --reset -u HEAD
+ rm .git/info/sparse-checkout" &&
+ echo /unrelated >.git/info/sparse-checkout &&
+ git read-tree --reset -u HEAD &&
+ test_must_fail git cherry-pick -Xours picked >actual &&
+ test_i18ngrep ! "Changes not staged for commit:" actual
+'
+
test_done
'
-test_expect_success 'detect permutations inside moved code -- dimmed_zebra' '
+test_expect_success 'detect permutations inside moved code -- dimmed-zebra' '
# reuse setup from test before!
test_config color.diff.oldMoved "magenta" &&
test_config color.diff.newMoved "cyan" &&
test_config color.diff.newMovedDimmed "normal cyan" &&
test_config color.diff.oldMovedAlternativeDimmed "normal blue" &&
test_config color.diff.newMovedAlternativeDimmed "normal yellow" &&
- git diff HEAD --no-renames --color-moved=dimmed_zebra --color >actual.raw &&
+ git diff HEAD --no-renames --color-moved=dimmed-zebra --color >actual.raw &&
grep -v "index" actual.raw | test_decode_color >actual &&
cat <<-\EOF >expected &&
<BOLD>diff --git a/lines.txt b/lines.txt<RESET>
git commit -S -m signed_commit
'
+test_expect_success GPGSM 'setup signed branch x509' '
+ test_when_finished "git reset --hard && git checkout master" &&
+ git checkout -b signed-x509 master &&
+ echo foo >foo &&
+ git add foo &&
+ test_config gpg.format x509 &&
+ test_config user.signingkey $GIT_COMMITTER_EMAIL &&
+ git commit -S -m signed_commit
+'
+
test_expect_success GPG 'log --graph --show-signature' '
git log --graph --show-signature -n1 signed >actual &&
grep "^| gpg: Signature made" actual &&
grep "^| gpg: Good signature" actual
'
+test_expect_success GPGSM 'log --graph --show-signature x509' '
+ git log --graph --show-signature -n1 signed-x509 >actual &&
+ grep "^| gpgsm: Signature made" actual &&
+ grep "^| gpgsm: Good signature" actual
+'
+
test_expect_success GPG 'log --graph --show-signature for merged tag' '
test_when_finished "git reset --hard && git checkout master" &&
git checkout -b plain master &&
grep "^| | gpg: Good signature" actual
'
+test_expect_success GPGSM 'log --graph --show-signature for merged tag x509' '
+ test_when_finished "git reset --hard && git checkout master" &&
+ test_config gpg.format x509 &&
+ test_config user.signingkey $GIT_COMMITTER_EMAIL &&
+ git checkout -b plain-x509 master &&
+ echo aaa >bar &&
+ git add bar &&
+ git commit -m bar_commit &&
+ git checkout -b tagged-x509 master &&
+ echo bbb >baz &&
+ git add baz &&
+ git commit -m baz_commit &&
+ git tag -s -m signed_tag_msg signed_tag_x509 &&
+ git checkout plain-x509 &&
+ git merge --no-ff -m msg signed_tag_x509 &&
+ git log --graph --show-signature -n1 plain-x509 >actual &&
+ grep "^|\\\ merged tag" actual &&
+ grep "^| | gpgsm: Signature made" actual &&
+ grep "^| | gpgsm: Good signature" actual
+'
+
test_expect_success GPG '--no-show-signature overrides --show-signature' '
git log -1 --show-signature --no-show-signature signed >actual &&
! grep "^gpg:" actual
git fetch --depth=1 --progress 2>actual &&
# This should fetch only the empty commit, no tree or
# blob objects
- grep "remote: Total 1" actual
+ test_i18ngrep "remote: Total 1" actual
)
'
test_description='fetch/receive strict mode'
. ./test-lib.sh
-test_expect_success setup '
+test_expect_success 'setup and inject "corrupt or missing" object' '
echo hello >greetings &&
git add greetings &&
git commit -m greetings &&
S=$(git rev-parse :greetings | sed -e "s|^..|&/|") &&
X=$(echo bye | git hash-object -w --stdin | sed -e "s|^..|&/|") &&
+ echo $S >S &&
+ echo $X >X &&
+ cp .git/objects/$S .git/objects/$S.back &&
mv -f .git/objects/$X .git/objects/$S &&
test_must_fail git fsck
test_cmp exp act
'
+test_expect_success 'repair the "corrupt or missing" object' '
+ mv -f .git/objects/$(cat S) .git/objects/$(cat X) &&
+ mv .git/objects/$(cat S).back .git/objects/$(cat S) &&
+ rm -rf .git/objects/$(cat X) &&
+ git fsck
+'
+
cat >bogus-commit <<EOF
tree $EMPTY_TREE
author Bugs Bunny 1234567890 +0000
This commit object intentionally broken
EOF
+test_expect_success 'fsck with invalid or bogus skipList input' '
+ git -c fsck.skipList=/dev/null -c fsck.missingEmail=ignore fsck &&
+ test_must_fail git -c fsck.skipList=does-not-exist -c fsck.missingEmail=ignore fsck 2>err &&
+ test_i18ngrep "Could not open skip list: does-not-exist" err &&
+ test_must_fail git -c fsck.skipList=.git/config -c fsck.missingEmail=ignore fsck 2>err &&
+ test_i18ngrep "Invalid SHA-1: \[core\]" err
+'
+
test_expect_success 'push with receive.fsck.skipList' '
commit="$(git hash-object -t commit -w --stdin <bogus-commit)" &&
git push . $commit:refs/heads/bogus &&
git init dst &&
git --git-dir=dst/.git config receive.fsckObjects true &&
test_must_fail git push --porcelain dst bogus &&
- git --git-dir=dst/.git config receive.fsck.skipList SKIP &&
echo $commit >dst/.git/SKIP &&
+
+ # receive.fsck.* does not fall back on fsck.*
+ git --git-dir=dst/.git config fsck.skipList SKIP &&
+ test_must_fail git push --porcelain dst bogus &&
+
+ # Invalid and/or bogus skipList input
+ git --git-dir=dst/.git config receive.fsck.skipList /dev/null &&
+ test_must_fail git push --porcelain dst bogus &&
+ git --git-dir=dst/.git config receive.fsck.skipList does-not-exist &&
+ test_must_fail git push --porcelain dst bogus 2>err &&
+ test_i18ngrep "Could not open skip list: does-not-exist" err &&
+ git --git-dir=dst/.git config receive.fsck.skipList config &&
+ test_must_fail git push --porcelain dst bogus 2>err &&
+ test_i18ngrep "Invalid SHA-1: \[core\]" err &&
+
+ git --git-dir=dst/.git config receive.fsck.skipList SKIP &&
git push --porcelain dst bogus
'
+test_expect_success 'fetch with fetch.fsck.skipList' '
+ commit="$(git hash-object -t commit -w --stdin <bogus-commit)" &&
+ refspec=refs/heads/bogus:refs/heads/bogus &&
+ git push . $commit:refs/heads/bogus &&
+ rm -rf dst &&
+ git init dst &&
+ git --git-dir=dst/.git config fetch.fsckObjects true &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec &&
+ git --git-dir=dst/.git config fetch.fsck.skipList /dev/null &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec &&
+ echo $commit >dst/.git/SKIP &&
+
+ # fetch.fsck.* does not fall back on fsck.*
+ git --git-dir=dst/.git config fsck.skipList dst/.git/SKIP &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec &&
+
+ # Invalid and/or bogus skipList input
+ git --git-dir=dst/.git config fetch.fsck.skipList /dev/null &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec &&
+ git --git-dir=dst/.git config fetch.fsck.skipList does-not-exist &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec 2>err &&
+ test_i18ngrep "Could not open skip list: does-not-exist" err &&
+ git --git-dir=dst/.git config fetch.fsck.skipList dst/.git/config &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec 2>err &&
+ test_i18ngrep "Invalid SHA-1: \[core\]" err &&
+
+ git --git-dir=dst/.git config fetch.fsck.skipList dst/.git/SKIP &&
+ git --git-dir=dst/.git fetch "file://$(pwd)" $refspec
+'
+
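The receive- and fetch-side tests above and below pin down that the three
skip-list (and severity) configuration scopes are independent and deliberately
do not fall back on one another. A sketch of the distinction:

    git config fsck.skipList .git/SKIP          # consulted by "git fsck" itself
    git config fetch.fsck.skipList .git/SKIP    # consulted when fetch.fsckObjects is set
    git config receive.fsck.skipList .git/SKIP  # consulted when receive.fsckObjects is set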
+test_expect_success 'fsck.<unknownmsg-id> dies' '
+ test_must_fail git -c fsck.whatEver=ignore fsck 2>err &&
+ test_i18ngrep "Unhandled message id: whatever" err
+'
+
test_expect_success 'push with receive.fsck.missingEmail=warn' '
commit="$(git hash-object -t commit -w --stdin <bogus-commit)" &&
git push . $commit:refs/heads/bogus &&
git init dst &&
git --git-dir=dst/.git config receive.fsckobjects true &&
test_must_fail git push --porcelain dst bogus &&
+
+ # receive.fsck.<msg-id> does not fall back on fsck.<msg-id>
+ git --git-dir=dst/.git config fsck.missingEmail warn &&
+ test_must_fail git push --porcelain dst bogus &&
+
+ # receive.fsck.<unknownmsg-id> warns
+ git --git-dir=dst/.git config \
+ receive.fsck.whatEver error &&
+
git --git-dir=dst/.git config \
receive.fsck.missingEmail warn &&
git push --porcelain dst bogus >act 2>&1 &&
grep "missingEmail" act &&
+ test_i18ngrep "Skipping unknown msg id.*whatever" act &&
git --git-dir=dst/.git branch -D bogus &&
git --git-dir=dst/.git config --add \
receive.fsck.missingEmail ignore &&
- git --git-dir=dst/.git config --add \
- receive.fsck.badDate warn &&
git push --porcelain dst bogus >act 2>&1 &&
! grep "missingEmail" act
'
+test_expect_success 'fetch with fetch.fsck.missingEmail=warn' '
+ commit="$(git hash-object -t commit -w --stdin <bogus-commit)" &&
+ refspec=refs/heads/bogus:refs/heads/bogus &&
+ git push . $commit:refs/heads/bogus &&
+ rm -rf dst &&
+ git init dst &&
+ git --git-dir=dst/.git config fetch.fsckobjects true &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec &&
+
+ # fetch.fsck.<msg-id> does not fall back on fsck.<msg-id>
+ git --git-dir=dst/.git config fsck.missingEmail warn &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" $refspec &&
+
+ # fetch.fsck.<unknownmsg-id> warns
+ git --git-dir=dst/.git config \
+ fetch.fsck.whatEver error &&
+
+ git --git-dir=dst/.git config \
+ fetch.fsck.missingEmail warn &&
+ git --git-dir=dst/.git fetch "file://$(pwd)" $refspec >act 2>&1 &&
+ grep "missingEmail" act &&
+ test_i18ngrep "Skipping unknown msg id.*whatever" act &&
+ rm -rf dst &&
+ git init dst &&
+ git --git-dir=dst/.git config fetch.fsckobjects true &&
+ git --git-dir=dst/.git config \
+ fetch.fsck.missingEmail ignore &&
+ git --git-dir=dst/.git fetch "file://$(pwd)" $refspec >act 2>&1 &&
+ ! grep "missingEmail" act
+'
+
test_expect_success \
'receive.fsck.unterminatedHeader=warn triggers error' '
rm -rf dst &&
grep "Cannot demote unterminatedheader" act
'
+test_expect_success \
+ 'fetch.fsck.unterminatedHeader=warn triggers error' '
+ rm -rf dst &&
+ git init dst &&
+ git --git-dir=dst/.git config fetch.fsckobjects true &&
+ git --git-dir=dst/.git config \
+ fetch.fsck.unterminatedheader warn &&
+ test_must_fail git --git-dir=dst/.git fetch "file://$(pwd)" HEAD >act 2>&1 &&
+ grep "Cannot demote unterminatedheader" act
+'
+
test_done
cd eight &&
test_must_fail git branch nomore origin
) 2>err &&
- grep "dangling symref" err
+ test_i18ngrep "dangling symref" err
'
test_expect_success 'show empty remote' '
)
'
+test_expect_success 'LHS of refspec follows ref disambiguation rules' '
+ mkdir lhs-ambiguous &&
+ (
+ cd lhs-ambiguous &&
+ git init server &&
+ test_commit -C server unwanted &&
+ test_commit -C server wanted &&
+
+ git init client &&
+
+ # Check a name coming after "refs" alphabetically ...
+ git -C server update-ref refs/heads/s wanted &&
+ git -C server update-ref refs/heads/refs/heads/s unwanted &&
+ git -C client fetch ../server +refs/heads/s:refs/heads/checkthis &&
+ git -C server rev-parse wanted >expect &&
+ git -C client rev-parse checkthis >actual &&
+ test_cmp expect actual &&
+
+ # ... and one before.
+ git -C server update-ref refs/heads/q wanted &&
+ git -C server update-ref refs/heads/refs/heads/q unwanted &&
+ git -C client fetch ../server +refs/heads/q:refs/heads/checkthis &&
+ git -C server rev-parse wanted >expect &&
+ git -C client rev-parse checkthis >actual &&
+ test_cmp expect actual &&
+
+ # Tags are preferred over branches like refs/{heads,tags}/*
+ git -C server update-ref refs/tags/t wanted &&
+ git -C server update-ref refs/heads/t unwanted &&
+ git -C client fetch ../server +t:refs/heads/checkthis &&
+ git -C server rev-parse wanted >expect &&
+ git -C client rev-parse checkthis >actual &&
+ test_cmp expect actual
+ )
+'
+
# configured prune tests
set_config_tristate () {
EOF
- unset GIT_COMMITTER_EMAIL &&
- git config user.email hasnokey@nowhere.com &&
- test_must_fail git push --signed dst noop ff +noff &&
- git config user.signingkey committer@example.com &&
+ test_config user.email hasnokey@nowhere.com &&
+ (
+ sane_unset GIT_COMMITTER_EMAIL &&
+ test_must_fail git push --signed dst noop ff +noff
+ ) &&
+ test_config user.signingkey $GIT_COMMITTER_EMAIL &&
git push --signed dst noop ff +noff &&
(
test_cmp expect dst/push-cert-status
'
+test_expect_success GPGSM 'fail without key and heed user.signingkey x509' '
+ test_config gpg.format x509 &&
+ prepare_dst &&
+ mkdir -p dst/.git/hooks &&
+ git -C dst config receive.certnonceseed sekrit &&
+ write_script dst/.git/hooks/post-receive <<-\EOF &&
+ # discard the update list
+ cat >/dev/null
+ # record the push certificate
+ if test -n "${GIT_PUSH_CERT-}"
+ then
+ git cat-file blob $GIT_PUSH_CERT >../push-cert
+ fi &&
+
+ cat >../push-cert-status <<E_O_F
+ SIGNER=${GIT_PUSH_CERT_SIGNER-nobody}
+ KEY=${GIT_PUSH_CERT_KEY-nokey}
+ STATUS=${GIT_PUSH_CERT_STATUS-nostatus}
+ NONCE_STATUS=${GIT_PUSH_CERT_NONCE_STATUS-nononcestatus}
+ NONCE=${GIT_PUSH_CERT_NONCE-nononce}
+ E_O_F
+
+ EOF
+
+ test_config user.email hasnokey@nowhere.com &&
+ test_config user.signingkey "" &&
+ (
+ sane_unset GIT_COMMITTER_EMAIL &&
+ test_must_fail git push --signed dst noop ff +noff
+ ) &&
+ test_config user.signingkey $GIT_COMMITTER_EMAIL &&
+ git push --signed dst noop ff +noff &&
+
+ (
+ cat <<-\EOF &&
+ SIGNER=/CN=C O Mitter/O=Example/SN=C O/GN=Mitter
+ KEY=
+ STATUS=G
+ NONCE_STATUS=OK
+ EOF
+ sed -n -e "s/^nonce /NONCE=/p" -e "/^$/q" dst/push-cert
+ ) >expect.in &&
+ key=$(cut -d" " -f1 <"${GNUPGHOME}/trustlist.txt" | tr -d ":") &&
+ sed -e "s/^KEY=/KEY=${key}/" expect.in >expect &&
+
+ noop=$(git rev-parse noop) &&
+ ff=$(git rev-parse ff) &&
+ noff=$(git rev-parse noff) &&
+ grep "$noop $ff refs/heads/ff" dst/push-cert &&
+ grep "$noop $noff refs/heads/noff" dst/push-cert &&
+ test_cmp expect dst/push-cert-status
+'
+
test_done
submodule update sub
'
+test_expect_success 'using fetch command in remote-curl updates refs' '
+ SERVER="$HTTPD_DOCUMENT_ROOT_PATH/twobranch" &&
+ rm -rf "$SERVER" client &&
+
+ git init "$SERVER" &&
+ test_commit -C "$SERVER" foo &&
+ git -C "$SERVER" update-ref refs/heads/anotherbranch foo &&
+
+ git clone $HTTPD_URL/smart/twobranch client &&
+
+ test_commit -C "$SERVER" bar &&
+ git -C client -c protocol.version=0 fetch &&
+
+ git -C "$SERVER" rev-parse master >expect &&
+ git -C client rev-parse origin/master >actual &&
+ test_cmp expect actual
+'
+
test_expect_success 'GIT_REDACT_COOKIES redacts cookies' '
rm -rf clone &&
echo "Set-Cookie: Foo=1" >cookies &&
have_not_sent c6 c4 c3
'
+test_expect_success 'unknown fetch.negotiationAlgorithm values error out' '
+ rm -rf server client trace &&
+ git init server &&
+ test_commit -C server to_fetch &&
+
+ git init client &&
+ test_commit -C client on_client &&
+ git -C client checkout on_client &&
+
+ test_config -C client fetch.negotiationAlgorithm invalid &&
+ test_must_fail git -C client fetch "$(pwd)/server" 2>err &&
+ test_i18ngrep "unknown fetch negotiation algorithm" err &&
+
+ # Explicit "default" value
+ test_config -C client fetch.negotiationAlgorithm default &&
+ git -C client -c fetch.negotiationAlgorithm=default fetch "$(pwd)/server" &&
+
+ # Implementation detail: If there is nothing to fetch, we will not error out
+ test_config -C client fetch.negotiationAlgorithm invalid &&
+ git -C client fetch "$(pwd)/server" 2>err &&
+ test_i18ngrep ! "unknown fetch negotiation algorithm" err
+'
+
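For context, the values this knob understands at this point are "default" (the
traditional negotiation) and "skipping" (which skips over exponentially larger
stretches of commits to cut down negotiation rounds); anything else dies as
tested above, but only once negotiation is actually needed:

    git config fetch.negotiationAlgorithm skipping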
test_expect_success 'when two skips collide, favor the larger one' '
rm -rf server client trace &&
git init server &&
start_git_daemon
check_verbose_connect () {
- grep -F "Looking up 127.0.0.1 ..." stderr &&
- grep -F "Connecting to 127.0.0.1 (port " stderr &&
- grep -F "done." stderr
+ test_i18ngrep -F "Looking up 127.0.0.1 ..." stderr &&
+ test_i18ngrep -F "Connecting to 127.0.0.1 (port " stderr &&
+ test_i18ngrep -F "done." stderr
}
test_expect_success 'setup repository' '
test_cmp expect actual &&
# Server responded using protocol v2
- grep "clone< version 2" log
+ grep "clone< version 2" log &&
+
+ # Client sent ref-prefixes to filter the ref-advertisement
+ grep "ref-prefix HEAD" log &&
+ grep "ref-prefix refs/heads/" log &&
+ grep "ref-prefix refs/tags/" log
'
test_expect_success 'fetch with file:// using protocol v2' '
test_when_finished "rm -f log" &&
test_commit -C file_parent three &&
+ git -C file_parent branch unwanted-branch three &&
GIT_TRACE_PACKET="$(pwd)/log" git -C file_child -c protocol.version=2 \
fetch origin master &&
git -C file_parent log -1 --format=%s >expect &&
test_cmp expect actual &&
- ! grep "refs/tags/one" log &&
- ! grep "refs/tags/two" log &&
- ! grep "refs/tags/three" log
+ grep "refs/heads/master" log &&
+ ! grep "refs/heads/unwanted-branch" log
'
test_expect_success 'server-options are sent when fetching' '
grep "ref-prefix refs/tags/" log
'
+test_expect_success 'fetch supports various ways of have lines' '
+ rm -rf server client trace &&
+ git init server &&
+ test_commit -C server dwim &&
+ TREE=$(git -C server rev-parse HEAD^{tree}) &&
+ git -C server tag exact \
+ $(git -C server commit-tree -m a "$TREE") &&
+ git -C server tag dwim-unwanted \
+ $(git -C server commit-tree -m b "$TREE") &&
+ git -C server tag exact-unwanted \
+ $(git -C server commit-tree -m c "$TREE") &&
+ git -C server tag prefix1 \
+ $(git -C server commit-tree -m d "$TREE") &&
+ git -C server tag prefix2 \
+ $(git -C server commit-tree -m e "$TREE") &&
+ git -C server tag fetch-by-sha1 \
+ $(git -C server commit-tree -m f "$TREE") &&
+ git -C server tag completely-unrelated \
+ $(git -C server commit-tree -m g "$TREE") &&
+
+ git init client &&
+ GIT_TRACE_PACKET="$(pwd)/trace" git -C client -c protocol.version=2 \
+ fetch "file://$(pwd)/server" \
+ dwim \
+ refs/tags/exact \
+ refs/tags/prefix*:refs/tags/prefix* \
+ "$(git -C server rev-parse fetch-by-sha1)" &&
+
+ # Ensure that the appropriate prefixes are sent (using a sample)
+ grep "fetch> ref-prefix dwim" trace &&
+ grep "fetch> ref-prefix refs/heads/dwim" trace &&
+ grep "fetch> ref-prefix refs/tags/prefix" trace &&
+
+ # Ensure that the correct objects are returned
+ git -C client cat-file -e $(git -C server rev-parse dwim) &&
+ git -C client cat-file -e $(git -C server rev-parse exact) &&
+ git -C client cat-file -e $(git -C server rev-parse prefix1) &&
+ git -C client cat-file -e $(git -C server rev-parse prefix2) &&
+ git -C client cat-file -e $(git -C server rev-parse fetch-by-sha1) &&
+ test_must_fail git -C client cat-file -e \
+ $(git -C server rev-parse dwim-unwanted) &&
+ test_must_fail git -C client cat-file -e \
+ $(git -C server rev-parse exact-unwanted) &&
+ test_must_fail git -C client cat-file -e \
+ $(git -C server rev-parse completely-unrelated)
+'
+
+test_expect_success 'fetch supports include-tag and tag following' '
+ rm -rf server client trace &&
+ git init server &&
+
+ test_commit -C server to_fetch &&
+ git -C server tag -a annotated_tag -m message &&
+
+ git init client &&
+ GIT_TRACE_PACKET="$(pwd)/trace" git -C client -c protocol.version=2 \
+ fetch "$(pwd)/server" to_fetch:to_fetch &&
+
+ grep "fetch> ref-prefix to_fetch" trace &&
+ grep "fetch> ref-prefix refs/tags/" trace &&
+ grep "fetch> include-tag" trace &&
+
+ git -C client cat-file -e $(git -C client rev-parse annotated_tag)
+'
+
# Test protocol v2 with 'http://' transport
#
. "$TEST_DIRECTORY"/lib-httpd.sh
test_expect_success 'cloning without refspec' '
GIT_REMOTE_TESTGIT_REFSPEC="" \
git clone "testgit::${PWD}/server" local2 2>error &&
- grep "This remote helper should implement refspec capability" error &&
+ test_i18ngrep "this remote helper should implement refspec capability" error &&
compare_refs local2 HEAD server HEAD
'
(cd local2 &&
git reset --hard &&
GIT_REMOTE_TESTGIT_REFSPEC="" git pull 2>../error) &&
- grep "This remote helper should implement refspec capability" error &&
+ test_i18ngrep "this remote helper should implement refspec capability" error &&
compare_refs local2 HEAD server HEAD
'
GIT_REMOTE_TESTGIT_REFSPEC="" &&
export GIT_REMOTE_TESTGIT_REFSPEC &&
test_must_fail git push 2>../error) &&
- grep "remote-helper doesn.t support push; refspec needed" error
+ test_i18ngrep "remote-helper doesn.t support push; refspec needed" error
'
test_expect_success 'pulling without marks' '
(cd local &&
test_must_fail env GIT_REMOTE_TESTGIT_FAILURE=1 git fetch 2>error &&
cat error &&
- grep -q "Error while running fast-import" error
+ test_i18ngrep -q "error while running fast-import" error
)
'
'
+test_expect_success 'setup branch sub' '
+ git checkout --orphan sub &&
+ git rm -rf . &&
+ test_commit foo
+'
+
+test_expect_success 'setup branch main' '
+ git checkout -b main master &&
+ git merge -s ours --no-commit --allow-unrelated-histories sub &&
+ git read-tree --prefix=dir/ -u sub &&
+ git commit -m "initial merge of sub into main" &&
+ test_path_is_file dir/foo.t &&
+ test_path_is_file hello
+'
+
+test_expect_success 'update branch sub' '
+ git checkout sub &&
+ test_commit bar
+'
+
+test_expect_success 'update branch main' '
+ git checkout main &&
+ git merge -s subtree sub -m "second merge of sub into main" &&
+ test_path_is_file dir/bar.t &&
+ test_path_is_file dir/foo.t &&
+ test_path_is_file hello
+'
+
test_expect_success 'setup' '
mkdir git-gui &&
cd git-gui &&
GIT_NO_REPLACE_OBJECTS=1 git show $HASH2 | grep "A U Thor"
'
+test_expect_success 'test core.usereplacerefs config option' '
+ test_config core.usereplacerefs false &&
+ git cat-file commit $HASH2 | grep "author A U Thor" &&
+ git show $HASH2 | grep "A U Thor"
+'
+
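The new knob is the configuration equivalent of running every command with
GIT_NO_REPLACE_OBJECTS=1, as the preceding test pair demonstrates; a server
operator can set it once instead of exporting the environment variable for
each service:

    git config --system core.usereplacerefs false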
cat >tag.sig <<EOF
object $HASH2
type commit
'test_config gpg.program echo &&
test_must_fail git tag -s -m tail tag-gpg-failure'
+# try to sign with bad user.signingkey
+test_expect_success GPGSM \
+ 'git tag -s fails if gpgsm is misconfigured (bad key)' \
+ 'test_config user.signingkey BobTheMouse &&
+ test_config gpg.format x509 &&
+ test_must_fail git tag -s -m tail tag-gpg-failure'
+
+# try to produce invalid signature
+test_expect_success GPGSM \
+ 'git tag -s fails if gpgsm is misconfigured (bad signature format)' \
+ 'test_config gpg.x509.program echo &&
+ test_config gpg.format x509 &&
+ test_must_fail git tag -s -m tail tag-gpg-failure'
# try to verify without gpg:
git tag -uB7227189 -m eighth eighth-signed-alt
'
+test_expect_success GPGSM 'create signed tags x509' '
+ test_config gpg.format x509 &&
+ test_config user.signingkey $GIT_COMMITTER_EMAIL &&
+ echo 9 >file && test_tick && git commit -a -m "ninth gpgsm-signed" &&
+ git tag -s -m ninth ninth-signed-x509
+'
+
test_expect_success GPG 'verify and show signatures' '
(
for tag in initial second merge fourth-signed sixth-signed seventh-signed
)
'
+test_expect_success GPGSM 'verify and show signatures x509' '
+ git verify-tag ninth-signed-x509 2>actual &&
+ grep "Good signature from" actual &&
+ ! grep "BAD signature from" actual &&
+ echo ninth-signed-x509 OK
+'
+
test_expect_success GPG 'detect fudged signature' '
git cat-file tag seventh-signed >raw &&
sed -e "/^tag / s/seventh/7th forged/" raw >forged1 &&
)
'
+test_expect_success GPGSM 'verify signatures with --raw x509' '
+ git verify-tag --raw ninth-signed-x509 2>actual &&
+ grep "GOODSIG" actual &&
+ ! grep "BADSIG" actual &&
+ echo ninth-signed-x509 OK
+'
+
test_expect_success GPG 'verify multiple tags' '
tags="fourth-signed sixth-signed seventh-signed" &&
for i in $tags
test_cmp expect.stderr actual.stderr
'
+test_expect_success GPGSM 'verify multiple tags x509' '
+ tags="seventh-signed nineth-signed-x509" &&
+ for i in $tags
+ do
+ git verify-tag -v --raw $i || return 1
+ done >expect.stdout 2>expect.stderr.1 &&
+ grep "^.GNUPG:." <expect.stderr.1 >expect.stderr &&
+ git verify-tag -v --raw $tags >actual.stdout 2>actual.stderr.1 &&
+ grep "^.GNUPG:." <actual.stderr.1 >actual.stderr &&
+ test_cmp expect.stdout actual.stdout &&
+ test_cmp expect.stderr actual.stderr
+'
+
test_expect_success GPG 'verifying tag with --format' '
cat >expect <<-\EOF &&
tagname : fourth-signed
mkdir ../other_worktree &&
cp -R done dthree dtwo four three ../other_worktree &&
GIT_WORK_TREE=../other_worktree git status 2>../err &&
- echo "warning: Untracked cache is disabled on this system or location." >../expect &&
+ echo "warning: untracked cache is disabled on this system or location" >../expect &&
test_i18ncmp ../expect ../err
'
test_failure_with_unknown_submodule () {
test_must_fail git submodule $1 no-such-submodule 2>output.err &&
- grep "^error: .*no-such-submodule" output.err
+ test_i18ngrep "^error: .*no-such-submodule" output.err
}
test_expect_success 'init should fail with unknown submodule' '
EOF
cat <<EOF >expect2
+Cloning into '$pwd/recursivesuper/super/merging'...
+Cloning into '$pwd/recursivesuper/super/none'...
+Cloning into '$pwd/recursivesuper/super/rebasing'...
+Cloning into '$pwd/recursivesuper/super/submodule'...
Submodule 'merging' ($pwd/merging) registered for path '../super/merging'
Submodule 'none' ($pwd/none) registered for path '../super/none'
Submodule 'rebasing' ($pwd/rebasing) registered for path '../super/rebasing'
Submodule 'submodule' ($pwd/submodule) registered for path '../super/submodule'
-Cloning into '$pwd/recursivesuper/super/merging'...
done.
-Cloning into '$pwd/recursivesuper/super/none'...
done.
-Cloning into '$pwd/recursivesuper/super/rebasing'...
done.
-Cloning into '$pwd/recursivesuper/super/submodule'...
done.
EOF
git submodule update --init --recursive ../super >../../actual 2>../../actual2
) &&
test_i18ncmp expect actual &&
- test_i18ncmp expect2 actual2
+ sort actual2 >actual2.sorted &&
+ test_i18ncmp expect2 actual2.sorted
'
cat <<EOF >expect2
grep "gpg: Good signature" actual
'
+test_expect_success GPG 'check config gpg.format values' '
+ test_config gpg.format openpgp &&
+ git commit -S --amend -m "success" &&
+ test_config gpg.format OpEnPgP &&
+ test_must_fail git commit -S --amend -m "fail"
+'
+
test_done
echo $! >V.pid
# We don't mind if fast-import has already died by the time the test
# ends.
- test_when_finished "exec 8>&-; exec 9>&-; kill $(cat V.pid) || true"
+ test_when_finished "
+ exec 8>&-; exec 9>&-;
+ kill $(cat V.pid) && wait $(cat V.pid)
+ true"
# Start in the background to ensure we adhere strictly to (blocking)
# pipes writing sequence. We want to assume that the write below could
)
'
+# Test the following scenarios:
+# - Without ".git/hooks/p4-pre-submit" , submit should continue
+# - With the hook returning 0, submit should continue
+# - With the hook returning 1, submit should abort
+test_expect_success 'run hook p4-pre-submit before submit' '
+ test_when_finished cleanup_git &&
+ git p4 clone --dest="$git" //depot &&
+ (
+ cd "$git" &&
+ echo "hello world" >hello.txt &&
+ git add hello.txt &&
+ git commit -m "add hello.txt" &&
+ git config git-p4.skipSubmitEdit true &&
+ git p4 submit --dry-run >out &&
+ grep "Would apply" out &&
+ mkdir -p .git/hooks &&
+ write_script .git/hooks/p4-pre-submit <<-\EOF &&
+ exit 0
+ EOF
+ git p4 submit --dry-run >out &&
+ grep "Would apply" out &&
+ write_script .git/hooks/p4-pre-submit <<-\EOF &&
+ exit 1
+ EOF
+ test_must_fail git p4 submit --dry-run >errs 2>&1 &&
+ ! grep "Would apply" errs
+ )
+'
+
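A realistic p4-pre-submit hook would typically gate the submission on some
local check; a sketch (the lint target is illustrative):

    cat >.git/hooks/p4-pre-submit <<\EOF
    #!/bin/sh
    # Runs before "git p4 submit" sends anything to Perforce;
    # any non-zero exit status aborts the submit.
    make lint
    EOF
    chmod +x .git/hooks/p4-pre-submit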
test_expect_success 'submit from detached head' '
test_when_finished cleanup_git &&
git p4 clone --dest="$git" //depot &&
if (debug)
fprintf(stderr, "Debug: Remote helper: -> %s", buffer->buf);
if (write_in_full(helper->helper->in, buffer->buf, buffer->len) < 0)
- die_errno("Full write to remote helper failed");
+ die_errno(_("full write to remote helper failed"));
}
static int recvline_fh(FILE *helper, struct strbuf *buffer)
if (debug)
fprintf(stderr, "Debug: Remote helper: -> %s", str);
if (write_in_full(fd, str, strlen(str)) < 0)
- die_errno("Full write to remote helper failed");
+ die_errno(_("full write to remote helper failed"));
}
static const char *remove_ext_force(const char *url)
code = start_command(helper);
if (code < 0 && errno == ENOENT)
- die("Unable to find remote helper for '%s'", data->name);
+ die(_("unable to find remote helper for '%s'"), data->name);
else if (code != 0)
exit(code);
*/
duped = dup(helper->out);
if (duped < 0)
- die_errno("Can't dup helper output fd");
+ die_errno(_("can't dup helper output fd"));
data->out = xfdopen(duped, "r");
write_constant(helper->in, "capabilities\n");
} else if (starts_with(capname, "no-private-update")) {
data->no_private_update = 1;
} else if (mandatory) {
- die("Unknown mandatory capability %s. This remote "
- "helper probably needs newer version of Git.",
+ die(_("unknown mandatory capability %s; this remote "
+ "helper probably needs newer version of Git"),
capname);
}
}
if (!data->rs.nr && (data->import || data->bidi_import || data->export)) {
- warning("This remote helper should implement refspec capability.");
+ warning(_("this remote helper should implement refspec capability"));
}
strbuf_release(&buf);
if (debug)
else if (!strcmp(buf->buf, "unsupported"))
ret = 1;
else {
- warning("%s unexpectedly said: '%s'", data->name, buf->buf);
+ warning(_("%s unexpectedly said: '%s'"), data->name, buf->buf);
ret = 1;
}
return ret;
if (starts_with(buf.buf, "lock ")) {
const char *name = buf.buf + 5;
if (transport->pack_lockfile)
- warning("%s also locked %s", data->name, name);
+ warning(_("%s also locked %s"), data->name, name);
else
transport->pack_lockfile = xstrdup(name);
}
else if (!buf.len)
break;
else
- warning("%s unexpectedly said: '%s'", data->name, buf.buf);
+ warning(_("%s unexpectedly said: '%s'"), data->name, buf.buf);
}
strbuf_release(&buf);
return 0;
get_helper(transport);
if (get_importer(transport, &fastimport))
- die("Couldn't run fast-import");
+ die(_("couldn't run fast-import"));
for (i = 0; i < nr_heads; i++) {
posn = to_fetch[i];
*/
if (finish_command(&fastimport))
- die("Error while running fast-import");
+ die(_("error while running fast-import"));
/*
* The fast-import stream of a remote helper that advertises
private = xstrdup(name);
if (private) {
if (read_ref(private, &posn->old_oid) < 0)
- die("Could not read ref %s", private);
+ die(_("could not read ref %s"), private);
free(private);
}
}
*/
duped = dup(helper->out);
if (duped < 0)
- die_errno("Can't dup helper output fd");
+ die_errno(_("can't dup helper output fd"));
input = xfdopen(duped, "r");
setvbuf(input, NULL, _IONBF, 0);
fprintf(stderr, "Debug: Falling back to dumb "
"transport.\n");
} else {
- die("Unknown response to connect: %s",
- cmdbuf->buf);
+			die(_("unknown response to connect: %s"),
+			    cmdbuf->buf);
}
fclose(input);
if (strcmp(name, exec)) {
int r = set_helper_option(transport, "servpath", exec);
if (r > 0)
- warning("Setting remote service path not supported by protocol.");
+ warning(_("setting remote service path not supported by protocol"));
else if (r < 0)
- warning("Invalid remote service path.");
+ warning(_("invalid remote service path"));
}
if (data->connect) {
/* Get_helper so connect is inited. */
get_helper(transport);
if (!data->connect)
- die("Operation not supported by protocol.");
+ die(_("operation not supported by protocol"));
if (!process_connect_service(transport, name, exec))
- die("Can't connect to subservice %s.", name);
+ die(_("can't connect to subservice %s"), name);
fd[0] = data->helper->out;
fd[1] = data->helper->in;
}
static int fetch(struct transport *transport,
- int nr_heads, struct ref **to_fetch,
- struct ref **fetched_refs)
+ int nr_heads, struct ref **to_fetch)
{
struct helper_data *data = transport->data;
int i, count;
if (process_connect(transport, 0)) {
do_take_over(transport);
- return transport->vtable->fetch(transport, nr_heads, to_fetch,
- fetched_refs);
+ return transport->vtable->fetch(transport, nr_heads, to_fetch);
}
count = 0;
status = REF_STATUS_REMOTE_REJECT;
refname = buf->buf + 6;
} else
- die("expected ok/error, helper said '%s'", buf->buf);
+ die(_("expected ok/error, helper said '%s'"), buf->buf);
msg = strchr(refname, ' ');
if (msg) {
if (!*ref)
*ref = find_ref_by_name(remote_refs, refname);
if (!*ref) {
- warning("helper reported unexpected status of %s", refname);
+ warning(_("helper reported unexpected status of %s"), refname);
return 1;
}
{
if (flags & TRANSPORT_PUSH_DRY_RUN) {
if (set_helper_option(transport, "dry-run", "true") != 0)
- die("helper %s does not support dry-run", name);
+ die(_("helper %s does not support dry-run"), name);
} else if (flags & TRANSPORT_PUSH_CERT_ALWAYS) {
if (set_helper_option(transport, TRANS_OPT_PUSH_CERT, "true") != 0)
- die("helper %s does not support --signed", name);
+ die(_("helper %s does not support --signed"), name);
} else if (flags & TRANSPORT_PUSH_CERT_IF_ASKED) {
if (set_helper_option(transport, TRANS_OPT_PUSH_CERT, "if-asked") != 0)
- die("helper %s does not support --signed=if-asked", name);
+ die(_("helper %s does not support --signed=if-asked"), name);
}
if (flags & TRANSPORT_PUSH_OPTIONS) {
struct string_list_item *item;
for_each_string_list_item(item, transport->push_options)
if (set_helper_option(transport, "push-option", item->string) != 0)
- die("helper %s does not support 'push-option'", name);
+ die(_("helper %s does not support 'push-option'"), name);
}
}
struct strbuf buf = STRBUF_INIT;
if (!data->rs.nr)
- die("remote-helper doesn't support push; refspec needed");
+ die(_("remote-helper doesn't support push; refspec needed"));
set_common_push_options(transport, data->name, flags);
if (flags & TRANSPORT_PUSH_FORCE) {
if (set_helper_option(transport, "force", "true") != 0)
- warning("helper %s does not support 'force'", data->name);
+ warning(_("helper %s does not support 'force'"), data->name);
}
helper = get_helper(transport);
}
if (get_exporter(transport, &exporter, &revlist_args))
- die("Couldn't run fast-export");
+ die(_("couldn't run fast-export"));
string_list_clear(&revlist_args, 1);
if (finish_command(&exporter))
- die("Error while running fast-export");
+ die(_("error while running fast-export"));
if (push_update_refs_status(data, remote_refs, flags))
return 1;
}
if (!remote_refs) {
- fprintf(stderr, "No refs in common and none specified; doing nothing.\n"
- "Perhaps you should specify a branch such as 'master'.\n");
+ fprintf(stderr,
+ _("No refs in common and none specified; doing nothing.\n"
+ "Perhaps you should specify a branch such as 'master'.\n"));
return 0;
}
eov = strchr(buf.buf, ' ');
if (!eov)
- die("Malformed response in ref list: %s", buf.buf);
+ die(_("malformed response in ref list: %s"), buf.buf);
eon = strchr(eov + 1, ' ');
*eov = '\0';
if (eon)
if (has_attribute(eon + 1, "unchanged")) {
(*tail)->status |= REF_STATUS_UPTODATE;
if (read_ref((*tail)->name, &(*tail)->old_oid) < 0)
- die(_("Could not read ref %s"),
+ die(_("could not read ref %s"),
(*tail)->name);
}
}
bytes = read(t->src, t->buf + t->bufuse, BUFFERSIZE - t->bufuse);
if (bytes < 0 && errno != EWOULDBLOCK && errno != EAGAIN &&
errno != EINTR) {
- error_errno("read(%s) failed", t->src_name);
+ error_errno(_("read(%s) failed"), t->src_name);
return -1;
} else if (bytes == 0) {
transfer_debug("%s EOF (with %i bytes in buffer)",
transfer_debug("%s is writable", t->dest_name);
bytes = xwrite(t->dest, t->buf, t->bufuse);
if (bytes < 0 && errno != EWOULDBLOCK) {
- error_errno("write(%s) failed", t->dest_name);
+ error_errno(_("write(%s) failed"), t->dest_name);
return -1;
} else if (bytes > 0) {
t->bufuse -= bytes;
void *tret;
err = pthread_join(thread, &tret);
if (!tret) {
- error("%s thread failed", name);
+ error(_("%s thread failed"), name);
return 1;
}
if (err) {
- error("%s thread failed to join: %s", name, strerror(err));
+ error(_("%s thread failed to join: %s"), name, strerror(err));
return 1;
}
return 0;
	err = pthread_create(&gtp_thread, NULL, udt_copy_task_routine,
&s->gtp);
if (err)
- die("Can't start thread for copying data: %s", strerror(err));
+ die(_("can't start thread for copying data: %s"), strerror(err));
err = pthread_create(&ptg_thread, NULL, udt_copy_task_routine,
&s->ptg);
if (err)
- die("Can't start thread for copying data: %s", strerror(err));
+ die(_("can't start thread for copying data: %s"), strerror(err));
ret |= tloop_join(gtp_thread, "Git to program copy");
ret |= tloop_join(ptg_thread, "Program to git copy");
{
int tret;
if (waitpid(pid, &tret, 0) < 0) {
- error_errno("%s process failed to wait", name);
+ error_errno(_("%s process failed to wait"), name);
return 1;
}
if (!WIFEXITED(tret) || WEXITSTATUS(tret)) {
- error("%s process failed", name);
+ error(_("%s process failed"), name);
return 1;
}
return 0;
/* Fork thread #1: git to program. */
pid1 = fork();
if (pid1 < 0)
- die_errno("Can't start thread for copying data");
+ die_errno(_("can't start thread for copying data"));
else if (pid1 == 0) {
udt_kill_transfer(&s->ptg);
exit(udt_copy_task_routine(&s->gtp) ? 0 : 1);
/* Fork thread #2: program to git. */
pid2 = fork();
if (pid2 < 0)
- die_errno("Can't start thread for copying data");
+ die_errno(_("can't start thread for copying data"));
else if (pid2 == 0) {
udt_kill_transfer(&s->gtp);
exit(udt_copy_task_routine(&s->ptg) ? 0 : 1);
* Fetch the objects for the given refs. Note that this gets
* an array, and should ignore the list structure.
*
- * The transport *may* provide, in fetched_refs, the list of refs that
- * it fetched. If the transport knows anything about the fetched refs
- * that the caller does not know (for example, shallow status), it
- * should provide that list of refs and include that information in the
- * list.
- *
* If the transport did not get hashes for refs in
* get_refs_list(), it should set the old_sha1 fields in the
* provided refs now.
**/
- int (*fetch)(struct transport *transport, int refs_nr, struct ref **refs,
- struct ref **fetched_refs);
+ int (*fetch)(struct transport *transport, int refs_nr, struct ref **refs);
/**
* Push the objects and refs. Send the necessary objects, and
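With the fetched_refs out-parameter gone, every backend implements the
same three-argument callback.  A minimal sketch of what a backend now
has to provide (the stub types and the dummy_fetch name are written for
this note only; the real declarations live in transport-internal.h):

	/* Illustrative stubs; git's struct ref and struct transport
	 * carry many more fields than shown here. */
	struct ref { struct ref *next; };
	struct transport { void *data; };

	/* Fetch the objects for to_fetch[0..nr_heads-1], setting any
	 * ref hashes that get_refs_list() could not provide, and
	 * return 0 on success. */
	static int dummy_fetch(struct transport *transport,
			       int nr_heads, struct ref **to_fetch)
	{
		return 0;
	}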
close(data->fd);
data->fd = read_bundle_header(transport->url, &data->header);
if (data->fd < 0)
- die ("Could not read bundle '%s'.", transport->url);
+ die(_("could not read bundle '%s'"), transport->url);
for (i = 0; i < data->header.references.nr; i++) {
struct ref_list_entry *e = data->header.references.list + i;
struct ref *ref = alloc_ref(e->name);
}
static int fetch_refs_from_bundle(struct transport *transport,
- int nr_heads, struct ref **to_fetch,
- struct ref **fetched_refs)
+ int nr_heads, struct ref **to_fetch)
{
struct bundle_transport_data *data = transport->data;
return unbundle(&data->header, data->fd,
}
static int fetch_refs_via_pack(struct transport *transport,
- int nr_heads, struct ref **to_fetch,
- struct ref **fetched_refs)
+ int nr_heads, struct ref **to_fetch)
{
int ret = 0;
struct git_transport_data *data = transport->data;
if (report_unmatched_refs(to_fetch, nr_heads))
ret = -1;
- if (fetched_refs)
- *fetched_refs = refs;
- else
- free_refs(refs);
-
free_refs(refs_tmp);
+ free_refs(refs);
free(dest);
return ret;
}
switch (data->version) {
case protocol_v2:
- die("support for protocol v2 not implemented yet");
+ die(_("support for protocol v2 not implemented yet"));
break;
case protocol_v1:
case protocol_v0:
else if (!strcasecmp(value, "user"))
return PROTOCOL_ALLOW_USER_ONLY;
- die("unknown value for config '%s': %s", key, value);
+ die(_("unknown value for config '%s': %s"), key, value);
}
static enum protocol_allow_config get_protocol_config(const char *type)
void transport_check_allowed(const char *type)
{
if (!is_transport_allowed(type, -1))
- die("transport '%s' not allowed", type);
+ die(_("transport '%s' not allowed"), type);
}
static struct transport_vtable bundle_vtable = {
ret->progress = isatty(2);
if (!remote)
- die("No remote provided to transport_get()");
+ BUG("No remote provided to transport_get()");
ret->got_remote_refs = 0;
ret->remote = remote;
if (helper) {
transport_helper_init(ret, helper);
} else if (starts_with(url, "rsync:")) {
- die("git-over-rsync is no longer supported");
+ die(_("git-over-rsync is no longer supported"));
} else if (url_is_local_not_ssh(url) && is_file(url) && is_bundle(url, 1)) {
struct bundle_transport_data *data = xcalloc(1, sizeof(*data));
transport_check_allowed("file");
transport->push_options,
pretend)) {
oid_array_clear(&commits);
- die("Failed to push all needed submodules!");
+ die(_("failed to push all needed submodules"));
}
oid_array_clear(&commits);
}
return transport->remote_refs;
}
-int transport_fetch_refs(struct transport *transport, struct ref *refs,
- struct ref **fetched_refs)
+int transport_fetch_refs(struct transport *transport, struct ref *refs)
{
int rc;
int nr_heads = 0, nr_alloc = 0, nr_refs = 0;
struct ref **heads = NULL;
- struct ref *nop_head = NULL, **nop_tail = &nop_head;
struct ref *rm;
for (rm = refs; rm; rm = rm->next) {
nr_refs++;
if (rm->peer_ref &&
!is_null_oid(&rm->old_oid) &&
- !oidcmp(&rm->peer_ref->old_oid, &rm->old_oid)) {
- /*
- * These need to be reported as fetched, but we don't
- * actually need to fetch them.
- */
- if (fetched_refs) {
- struct ref *nop_ref = copy_ref(rm);
- *nop_tail = nop_ref;
- nop_tail = &nop_ref->next;
- }
+ !oidcmp(&rm->peer_ref->old_oid, &rm->old_oid))
continue;
- }
ALLOC_GROW(heads, nr_heads + 1, nr_alloc);
heads[nr_heads++] = rm;
}
- rc = transport->vtable->fetch(transport, nr_heads, heads, fetched_refs);
- if (fetched_refs && nop_head) {
- *nop_tail = *fetched_refs;
- *fetched_refs = nop_head;
- }
+ rc = transport->vtable->fetch(transport, nr_heads, heads);
free(heads);
return rc;
if (transport->vtable->connect)
return transport->vtable->connect(transport, name, exec, fd);
else
- die("Operation not supported by protocol");
+ die(_("operation not supported by protocol"));
}
int transport_disconnect(struct transport *transport)
if (get_oid_hex(line.buf, &oid) ||
line.buf[GIT_SHA1_HEXSZ] != ' ') {
- warning("invalid line while parsing alternate refs: %s",
+ warning(_("invalid line while parsing alternate refs: %s"),
line.buf);
break;
}
const struct ref *transport_get_remote_refs(struct transport *transport,
const struct argv_array *ref_prefixes);
-int transport_fetch_refs(struct transport *transport, struct ref *refs,
- struct ref **fetched_refs);
+int transport_fetch_refs(struct transport *transport, struct ref *refs);
void transport_unlock_pack(struct transport *transport);
int transport_disconnect(struct transport *transport);
char *transport_anonymize_url(const char *url);
#else
typedef char * iconv_ibp;
#endif
-char *reencode_string_iconv(const char *in, size_t insz, iconv_t conv, int *outsz_p)
+char *reencode_string_iconv(const char *in, size_t insz, iconv_t conv, size_t *outsz_p)
{
size_t outsz, outalloc;
char *out, *outpos;
iconv_ibp cp;
outsz = insz;
- outalloc = outsz + 1; /* for terminating NUL */
+ outalloc = st_add(outsz, 1); /* for terminating NUL */
out = xmalloc(outalloc);
outpos = out;
cp = (iconv_ibp)in;
* converting the rest.
*/
sofar = outpos - out;
- outalloc = sofar + insz * 2 + 32;
+ outalloc = st_add3(sofar, st_mult(insz, 2), 32);
out = xrealloc(out, outalloc);
outpos = out + sofar;
outsz = outalloc - sofar - 1;
return name;
}
-char *reencode_string_len(const char *in, int insz,
+char *reencode_string_len(const char *in, size_t insz,
const char *out_encoding, const char *in_encoding,
- int *outsz)
+ size_t *outsz)
{
iconv_t conv;
char *out;
#ifndef NO_ICONV
char *reencode_string_iconv(const char *in, size_t insz,
- iconv_t conv, int *outsz);
-char *reencode_string_len(const char *in, int insz,
+ iconv_t conv, size_t *outsz);
+char *reencode_string_len(const char *in, size_t insz,
const char *out_encoding,
const char *in_encoding,
- int *outsz);
+ size_t *outsz);
#else
-static inline char *reencode_string_len(const char *a, int b,
- const char *c, const char *d, int *e)
+static inline char *reencode_string_len(const char *a, size_t b,
+ const char *c, const char *d, size_t *e)
{ if (e) *e = 0; return NULL; }
#endif
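The reencode hunks above do two things at once: widen the byte counts
from int to size_t, and route the arithmetic through
st_add()/st_mult()/st_add3() so that a computation such as
"sofar + insz * 2 + 32" dies cleanly on overflow instead of wrapping
into an undersized buffer.  A self-contained sketch of that
overflow-checked style (checked_add/checked_mult are stand-ins written
for this note, not the real macros from git-compat-util.h):

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Die loudly if a size_t addition would wrap. */
	static size_t checked_add(size_t a, size_t b)
	{
		if (SIZE_MAX - a < b) {
			fprintf(stderr, "fatal: size_t overflow\n");
			exit(128);
		}
		return a + b;
	}

	/* Die loudly if a size_t multiplication would wrap. */
	static size_t checked_mult(size_t a, size_t b)
	{
		if (b && a > SIZE_MAX / b) {
			fprintf(stderr, "fatal: size_t overflow\n");
			exit(128);
		}
		return a * b;
	}

	int main(void)
	{
		size_t sofar = 16, insz = 1000;
		/* outalloc = sofar + insz * 2 + 32, as in the hunk above */
		size_t outalloc = checked_add(checked_add(sofar,
					checked_mult(insz, 2)), 32);
		printf("outalloc = %zu\n", outalloc);
		return 0;
	}

With a 32-bit size_t, an insz near 2^31 would make the unchecked
version allocate a tiny buffer and then write past it; the checked
version terminates instead.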
return b_next;
}
-static int find_lcs(struct histindex *index, struct region *lcs,
- int line1, int count1, int line2, int count2) {
- int b_ptr;
-
- if (scanA(index, line1, count1))
- return -1;
-
- index->cnt = index->max_chain_length + 1;
-
- for (b_ptr = line2; b_ptr <= LINE_END(2); )
- b_ptr = try_lcs(index, lcs, b_ptr, line1, count1, line2, count2);
-
- return index->has_common && index->max_chain_length < index->cnt;
-}
-
-static int fall_back_to_classic_diff(struct histindex *index,
+static int fall_back_to_classic_diff(xpparam_t const *xpp, xdfenv_t *env,
int line1, int count1, int line2, int count2)
{
- xpparam_t xpp;
- xpp.flags = index->xpp->flags & ~XDF_DIFF_ALGORITHM_MASK;
+ xpparam_t xpparam;
+ xpparam.flags = xpp->flags & ~XDF_DIFF_ALGORITHM_MASK;
- return xdl_fall_back_diff(index->env, &xpp,
+ return xdl_fall_back_diff(env, &xpparam,
line1, count1, line2, count2);
}
-static int histogram_diff(xpparam_t const *xpp, xdfenv_t *env,
- int line1, int count1, int line2, int count2)
+static inline void free_index(struct histindex *index)
{
- struct histindex index;
- struct region lcs;
- int sz;
- int result = -1;
-
- if (count1 <= 0 && count2 <= 0)
- return 0;
-
- if (LINE_END(1) >= MAX_PTR)
- return -1;
+ xdl_free(index->records);
+ xdl_free(index->line_map);
+ xdl_free(index->next_ptrs);
+ xdl_cha_free(&index->rcha);
+}
- if (!count1) {
- while(count2--)
- env->xdf2.rchg[line2++ - 1] = 1;
- return 0;
- } else if (!count2) {
- while(count1--)
- env->xdf1.rchg[line1++ - 1] = 1;
- return 0;
- }
+static int find_lcs(xpparam_t const *xpp, xdfenv_t *env,
+ struct region *lcs,
+ int line1, int count1, int line2, int count2)
+{
+ int b_ptr;
+ int sz, ret = -1;
+ struct histindex index;
memset(&index, 0, sizeof(index));
index.ptr_shift = line1;
index.max_chain_length = 64;
+ if (scanA(&index, line1, count1))
+ goto cleanup;
+
+ index.cnt = index.max_chain_length + 1;
+
+ for (b_ptr = line2; b_ptr <= LINE_END(2); )
+ b_ptr = try_lcs(&index, lcs, b_ptr, line1, count1, line2, count2);
+
+ if (index.has_common && index.max_chain_length < index.cnt)
+ ret = 1;
+ else
+ ret = 0;
+
+cleanup:
+ free_index(&index);
+ return ret;
+}
+
+static int histogram_diff(xpparam_t const *xpp, xdfenv_t *env,
+ int line1, int count1, int line2, int count2)
+{
+ struct region lcs;
+ int lcs_found;
+ int result;
+redo:
+ result = -1;
+
+ if (count1 <= 0 && count2 <= 0)
+ return 0;
+
+ if (LINE_END(1) >= MAX_PTR)
+ return -1;
+
+ if (!count1) {
+ while(count2--)
+ env->xdf2.rchg[line2++ - 1] = 1;
+ return 0;
+ } else if (!count2) {
+ while(count1--)
+ env->xdf1.rchg[line1++ - 1] = 1;
+ return 0;
+ }
+
memset(&lcs, 0, sizeof(lcs));
- if (find_lcs(&index, &lcs, line1, count1, line2, count2))
- result = fall_back_to_classic_diff(&index, line1, count1, line2, count2);
+ lcs_found = find_lcs(xpp, env, &lcs, line1, count1, line2, count2);
+ if (lcs_found < 0)
+ goto out;
+ else if (lcs_found)
+ result = fall_back_to_classic_diff(xpp, env, line1, count1, line2, count2);
else {
if (lcs.begin1 == 0 && lcs.begin2 == 0) {
while (count1--)
line1, lcs.begin1 - line1,
line2, lcs.begin2 - line2);
if (result)
- goto cleanup;
- result = histogram_diff(xpp, env,
- lcs.end1 + 1, LINE_END(1) - lcs.end1,
- lcs.end2 + 1, LINE_END(2) - lcs.end2);
- if (result)
- goto cleanup;
+ goto out;
+ /*
+ * result = histogram_diff(xpp, env,
+ * lcs.end1 + 1, LINE_END(1) - lcs.end1,
+ * lcs.end2 + 1, LINE_END(2) - lcs.end2);
+	 * but let's optimize the tail recursion ourselves:
+ */
+ count1 = LINE_END(1) - lcs.end1;
+ line1 = lcs.end1 + 1;
+ count2 = LINE_END(2) - lcs.end2;
+ line2 = lcs.end2 + 1;
+ goto redo;
}
}
-
-cleanup:
- xdl_free(index.records);
- xdl_free(index.line_map);
- xdl_free(index.next_ptrs);
- xdl_cha_free(&index.rcha);
-
+out:
return result;
}
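The "redo" label above is manual tail-call elimination: histogram_diff()
used to end by recursing on the second half of the range, which both
consumed C stack and, before the free_index() refactoring, kept one
histindex alive per recursion level.  Rewriting the tail call as a
parameter rebind plus a jump bounds both.  The same transformation in
miniature (sum_tail/sum_loop are illustrative, not from the patch):

	#include <stdio.h>

	/* Tail-recursive form: the recursive call is the last action. */
	static long sum_tail(long n, long acc)
	{
		if (n == 0)
			return acc;
		return sum_tail(n - 1, acc + n);
	}

	/* Hand-eliminated form, mirroring "goto redo" above: rebind
	 * the parameters, then jump back to the top of the body. */
	static long sum_loop(long n, long acc)
	{
	redo:
		if (n == 0)
			return acc;
		acc += n;
		n -= 1;
		goto redo;
	}

	int main(void)
	{
		printf("%ld %ld\n", sum_tail(10, 0), sum_loop(10, 0));
		return 0;
	}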