judgement call, the decision based more on real world
constraints people face than what the paper standard says.
+Make your code readable and sensible, and don't try to be clever.
As for more concrete guidelines, just imitate the existing code
(this is a good guideline, no matter which project you are
- Use Git's gettext wrappers to make the user interface
translatable. See "Marking strings for translation" in po/README.
+For Perl programs:
+
+ - Most of the C guidelines above apply.
+
+ - We try to support Perl 5.8 and later ("use Perl 5.008").
+
+ - use strict and use warnings are strongly preferred.
+
+ - Don't overuse statement modifiers unless using them makes the
+ result easier to follow.
+
+ ... do something ...
+ do_this() unless (condition);
+ ... do something else ...
+
+ is more readable than:
+
+ ... do something ...
+ unless (condition) {
+ 	do_this();
+ }
+ ... do something else ...
+
+ *only* when the condition is so rare that do_this() will almost
+ always be called.
+
+ - We try to avoid assignments inside "if ()" conditions.
+
+ - Learn and use Git.pm if you need that functionality.
+
+ - For Emacs, it's useful to put the following in
+ GIT_CHECKOUT/.dir-locals.el, assuming you use cperl-mode:
+
+ ;; note the first part is useful for C editing, too
+ ((nil . ((indent-tabs-mode . t)
+ (tab-width . 8)
+ (fill-column . 80)))
+ (cperl-mode . ((cperl-indent-level . 8)
+ (cperl-extra-newline-before-brace . nil)
+ (cperl-merge-trailing-else . t))))
+
For Python scripts:
- We follow PEP-8 (http://www.python.org/dev/peps/pep-0008/).
* "git commit" can be told to use --cleanup=whitespace by setting the
configuration variable commit.cleanup to 'whitespace'.
+ * "git diff" and other Porcelain commands can be told to use a
+ non-standard algorithm by setting diff.algorithm configuration
+ variable.
+
* "git fetch --mirror" and fetch that uses other forms of refspec
with wildcard used to attempt to update a symbolic ref that matches
the wildcard on the receiving end, which made little sense (the
rewrite the names and email addresses of people using the mailmap
mechanism.
+ * "git log --cc --graph" now shows the combined diff output with the
+ ancestry graph.
+
+ * "git log --grep=<pattern>" honors i18n.logoutputencoding to look
+ for the pattern after fixing the log message to the specified
+ encoding.
+
* "git mergetool" and "git difftool" learned to list the available
tool backends in a more consistent manner.
you do not have any commits in your history, but it now gives you
an empty index (to match the non-existent commit you are not even on).
+ * "git status" says what branch is being bisected or rebased when
+ able, not just "bisecting" or "rebasing".
+
* "git submodule" started learning a new mode to integrate with the
tip of the remote branch (as opposed to integrating with the commit
recorded in the superproject's gitlink).
+ * "git upload-pack" which implements the service "ls-remote" and
+ "fetch" talk to can be told to hide ref hierarchies the server
+ side internally uses (and that clients have no business learning
+ about) with transfer.hiderefs configuration.
+
Foreign Interface
these implementations can reliably update. This can be used to
avoid excessive revalidation of contents.
+ * Some platforms ship with an old version of expat where xmlparse.h
+ needs to be included instead of expat.h; the build procedure has
+ been taught about this.
+
+ * "make clean" on platforms that cannot compute header dependencies
+ on the fly did not work with implementations of "rm" that do not
+ like an empty argument list.
Also contains minor documentation updates and code clean-ups.
failed to remove the real location of the $GIT_DIR it created.
This was most visible when interrupting a submodule update.
+ * "git cvsimport" mishandled timestamps at DST boundary.
+ (merge 48c9162 bw/get-tz-offset-perl later to maint).
+
* We used to have an arbitrary limit of 32 inputs for combined diff,
resulting in an incorrect number of leading colons shown in the
"--raw --cc" output.
* Command line completion code was inadvertently made incompatible with
older versions of bash by using a newer array notation.
+ * "git push" was taught to refuse updating the branch that is
+ currently checked out long time ago, but the user manual was left
+ stale.
+ (merge d9be248 wk/man-deny-current-branch-is-default-these-days later to maint).
+
* Some shells do not behave correctly when IFS is unset; work around
this by explicitly setting it to the default value.
even if that push is forced. This configuration variable is
set when initializing a shared repository.
+receive.hiderefs::
+ String(s) `receive-pack` uses to decide which refs to omit
+ from its initial advertisement. Use more than one
+ definition to specify multiple prefix strings. A ref that
+ is under one of the hierarchies listed in the value of this
+ variable is excluded, and is hidden when responding to `git
+ push`; an attempt to update or delete a hidden ref by
+ `git push` is rejected.
+
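+For example, to keep a (hypothetical) `refs/internal/` hierarchy out of
+the advertisement and refuse pushes into it, one could run this on the
+server (the variable is multi-valued, hence `--add`):
+
+------------
+$ git config --add receive.hiderefs refs/internal
+------------
+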
receive.updateserverinfo::
If set to true, git-receive-pack will run git-update-server-info
after receiving data from git-push and updating refs.
not set, the value of this variable is used instead.
Defaults to false.
+transfer.hiderefs::
+ This variable can be used to set both `receive.hiderefs`
+ and `uploadpack.hiderefs` to the same values at the same
+ time. See the entries for those variables.
+
transfer.unpackLimit::
When `fetch.unpackLimit` or `receive.unpackLimit` are
not set, the value of this variable is used instead.
The default value is 100.
+uploadpack.hiderefs::
+ String(s) `upload-pack` uses to decide which refs to omit
+ from its initial advertisement. Use more than one
+ definition to specify multiple prefix strings. A ref that
+ is under one of the hierarchies listed in the value of this
+ variable is excluded, and is hidden from `git ls-remote`,
+ `git fetch`, etc. An attempt to fetch a hidden ref by `git
+ fetch` will fail.
+
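+A matching sketch for the fetch side, again assuming a hypothetical
+`refs/internal/` hierarchy:
+
+------------
+$ git config --add uploadpack.hiderefs refs/internal
+$ git ls-remote .     # refs/internal/* are no longer advertised
+------------
+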
url.<base>.insteadOf::
Any URL that starts with this value will be rewritten to
start, instead, with <base>. In cases where some site serves a
that a corresponding difftool.<tool>.cmd variable is defined.
include::mergetools-diff.txt[]
+
+diff.algorithm::
+ Choose a diff algorithm. The variants are as follows:
++
+--
+`default`, `myers`;;
+ The basic greedy diff algorithm. Currently, this is the default.
+`minimal`;;
+ Spend extra time to make sure the smallest possible diff is
+ produced.
+`patience`;;
+ Use "patience diff" algorithm when generating patches.
+`histogram`;;
+ This algorithm extends the patience algorithm to "support
+ low-occurrence common elements".
+--
++
--histogram::
Generate a diff using the "histogram diff" algorithm.
+--diff-algorithm={patience|minimal|histogram|myers}::
+ Choose a diff algorithm. The variants are as follows:
++
+--
+`default`, `myers`;;
+ The basic greedy diff algorithm. Currently, this is the default.
+`minimal`;;
+ Spend extra time to make sure the smallest possible diff is
+ produced.
+`patience`;;
+ Use "patience diff" algorithm when generating patches.
+`histogram`;;
+ This algorithm extends the patience algorithm to "support
+ low-occurrence common elements".
+--
++
+For instance, if you configured the diff.algorithm variable to a
+non-default value and want to use the default one, then you
+have to use the `--diff-algorithm=default` option.
+
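+For illustration, a sketch of that interplay:
+
+------------
+$ git config diff.algorithm patience    # configure a non-default algorithm
+$ git diff                              # uses patience
+$ git diff --diff-algorithm=default     # override: use Myers for this run
+------------
+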
--stat[=<width>[,<name-width>[,<count>]]]::
Generate a diffstat. By default, as much space as necessary
will be used for the filename part, and the rest for the graph
If no <pathspec> is given, the current version of Git defaults to
"."; in other words, update all tracked files in the current directory
and its subdirectories. This default will change in a future version
-of Git, hence the form without <filepattern> should not be used.
+of Git, hence the form without <pathspec> should not be used.
-A::
--all::
~~~~~~~~~~~~
After a bisect session, to clean up the bisection state and return to
-the original HEAD, issue the following command:
+the original HEAD (i.e., to quit bisecting), issue the following command:
------------------------------------------------
$ git bisect reset
------------
$ git bisect start HEAD v1.2 -- # HEAD is bad, v1.2 is good
$ git bisect run make # "make" builds the app
+$ git bisect reset # quit the bisect session
------------
* Automatically bisect a test failure between origin and HEAD:
------------
$ git bisect start HEAD origin -- # HEAD is bad, origin is good
$ git bisect run make test # "make test" builds and tests
+$ git bisect reset # quit the bisect session
------------
* Automatically bisect a broken test case:
~/check_test_case.sh # does the test case pass?
$ git bisect start HEAD HEAD~10 -- # culprit is among the last 10
$ git bisect run ~/test.sh
+$ git bisect reset # quit the bisect session
------------
+
Here we use a "test.sh" custom script. In this script, if "make"
------------
$ git bisect start HEAD HEAD~10 -- # culprit is among the last 10
$ git bisect run sh -c "make || exit 125; ~/check_test_case.sh"
+$ git bisect reset # quit the bisect session
------------
+
This shows that you can do without a run script if you write the test
rm -f tmp.$$
test $rc = 0'
+$ git bisect reset # quit the bisect session
------------
+
In this case, when 'git bisect run' finishes, bisect/bad will refer to a commit that
Note that the target of a "push" is normally a
<<def_bare_repository,bare>> repository. You can also push to a
-repository that has a checked-out working tree, but the working tree
-will not be updated by the push. This may lead to unexpected results if
-the branch you push to is the currently checked-out branch!
+repository that has a checked-out working tree, but a push to update the
+currently checked-out branch is denied by default to prevent confusion.
+See the description of the receive.denyCurrentBranch option
+in linkgit:git-config[1] for details.
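+
+For illustration, a sketch of choosing that behaviour explicitly on the
+remote repository (`warn` lets such a push through with a warning;
+`refuse`, the default, rejects it):
+
+------------
+$ git config receive.denyCurrentBranch warn
+------------
+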
As with `git fetch`, you may also set up configuration options to
save typing; so, for example, after
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v1.8.1.GIT
+DEF_VER=v1.8.2-rc0
LF='
'
# Define EXPATDIR=/foo/bar if your expat header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
#
+# Define EXPAT_NEEDS_XMLPARSE_H if you have an old version of expat (e.g.,
+# 1.1 or 1.2) that provides xmlparse.h instead of expat.h.
+#
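+# (For example, one might pass "make EXPAT_NEEDS_XMLPARSE_H=YesPlease" on
+# such a platform, or set it in config.mak as is usual for these knobs.)
+#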
# Define NO_GETTEXT if you don't want Git output to be translated.
# A translated Git requires GNU libintl or another gettext implementation,
# plus libintl-perl at runtime.
SCRIPT_PYTHON += git-remote-testpy.py
SCRIPT_PYTHON += git-p4.py
-SCRIPTS = $(patsubst %.sh,%,$(SCRIPT_SH)) \
- $(patsubst %.perl,%,$(SCRIPT_PERL)) \
- $(patsubst %.py,%,$(SCRIPT_PYTHON)) \
+# Generated files for scripts
+SCRIPT_SH_GEN = $(patsubst %.sh,%,$(SCRIPT_SH))
+SCRIPT_PERL_GEN = $(patsubst %.perl,%,$(SCRIPT_PERL))
+SCRIPT_PYTHON_GEN = $(patsubst %.py,%,$(SCRIPT_PYTHON))
+
+# Individual rules to allow e.g.
+# "make -C ../.. SCRIPT_PERL=contrib/foo/bar.perl build-perl-script"
+# from subdirectories like contrib/*/
+.PHONY: build-perl-script build-sh-script build-python-script
+build-perl-script: $(SCRIPT_PERL_GEN)
+build-sh-script: $(SCRIPT_SH_GEN)
+build-python-script: $(SCRIPT_PYTHON_GEN)
+
+.PHONY: install-perl-script install-sh-script install-python-script
+install-sh-script: $(SCRIPT_SH_GEN)
+ $(INSTALL) $(SCRIPT_SH_GEN) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
+install-perl-script: $(SCRIPT_PERL_GEN)
+ $(INSTALL) $(SCRIPT_PERL_GEN) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
+install-python-script: $(SCRIPT_PYTHON_GEN)
+ $(INSTALL) $(SCRIPT_PYTHON_GEN) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
+
+.PHONY: clean-perl-script clean-sh-script clean-python-script
+clean-sh-script:
+ $(RM) $(SCRIPT_SH_GEN)
+clean-perl-script:
+ $(RM) $(SCRIPT_PERL_GEN)
+clean-python-script:
+ $(RM) $(SCRIPT_PYTHON_GEN)
+
+SCRIPTS = $(SCRIPT_SH_GEN) \
+ $(SCRIPT_PERL_GEN) \
+ $(SCRIPT_PYTHON_GEN) \
git-instaweb
ETAGS_TARGET = TAGS
else
EXPAT_LIBEXPAT = -lexpat
endif
+ ifdef EXPAT_NEEDS_XMLPARSE_H
+ BASIC_CFLAGS += -DEXPAT_NEEDS_XMLPARSE_H
+ endif
endif
endif
builtin/*.o $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) git$X
$(RM) $(TEST_PROGRAMS)
- $(RM) -r bin-wrappers
- $(RM) -r $(dep_dirs)
+ $(RM) -r bin-wrappers $(dep_dirs)
$(RM) -r po/build/
$(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h $(ETAGS_TARGET) tags cscope*
$(RM) -r $(GIT_TARNAME) .doc-tmp-dir
return 0;
}
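+
+/*
+ * Extract the preimage object name recorded in a gitlink patch: take it
+ * from the "-Subproject commit <sha1>" line of its only hunk when that
+ * looks sane, falling back to a full object name on the index line.
+ * Returns 0 and fills sha1[] on success, non-zero otherwise.
+ */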
+static int preimage_sha1_in_gitlink_patch(struct patch *p, unsigned char sha1[20])
+{
+ /*
+ * A usable gitlink patch has only one fragment (hunk) that looks like:
+ * @@ -1 +1 @@
+ * -Subproject commit <old sha1>
+ * +Subproject commit <new sha1>
+ * or
+ * @@ -1 +0,0 @@
+ * -Subproject commit <old sha1>
+ * for a removal patch.
+ */
+ struct fragment *hunk = p->fragments;
+ static const char heading[] = "-Subproject commit ";
+ char *preimage;
+
+ if (/* does the patch have only one hunk? */
+ hunk && !hunk->next &&
+ /* is its preimage one line? */
+ hunk->oldpos == 1 && hunk->oldlines == 1 &&
+ /* does preimage begin with the heading? */
+ (preimage = memchr(hunk->patch, '\n', hunk->size)) != NULL &&
+ !prefixcmp(++preimage, heading) &&
+ /* does it record full SHA-1? */
+ !get_sha1_hex(preimage + sizeof(heading) - 1, sha1) &&
+ preimage[sizeof(heading) + 40 - 1] == '\n' &&
+ /* does the abbreviated name on the index line agree with it? */
+ !prefixcmp(preimage + sizeof(heading) - 1, p->old_sha1_prefix))
+ return 0; /* it all looks fine */
+
+ /* we may have full object name on the index line */
+ return get_sha1_hex(p->old_sha1_prefix, sha1);
+}
+
/* Build an index that contains just the files needed for a 3way merge */
static void build_fake_ancestor(struct patch *list, const char *filename)
{
continue;
if (S_ISGITLINK(patch->old_mode)) {
- if (get_sha1_hex(patch->old_sha1_prefix, sha1))
- die("submoule change for %s without full index name",
+ if (!preimage_sha1_in_gitlink_patch(patch, sha1))
+ ; /* ok, the textual part looks sane */
+ else
+ die("sha1 information is lacking or useless for submoule %s",
name);
} else if (!get_sha1_blob(patch->old_sha1_prefix, sha1)) {
; /* ok */
static int receive_pack_config(const char *var, const char *value, void *cb)
{
+ int status = parse_hide_refs_config(var, value, "receive");
+
+ if (status)
+ return status;
+
if (strcmp(var, "receive.denydeletes") == 0) {
deny_deletes = git_config_bool(var, value);
return 0;
static void show_ref(const char *path, const unsigned char *sha1)
{
+ if (ref_is_hidden(path))
+ return;
+
if (sent_capabilities)
packet_write(1, "%s %s\n", sha1_to_hex(sha1), path);
else
return -1; /* end of list */
}
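+
+/*
+ * Mark any command that tries to update or delete a hidden ref as
+ * failed up front, so it is rejected before the pre-receive hook and
+ * the actual ref updates run.
+ */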
+static void reject_updates_to_hidden(struct command *commands)
+{
+ struct command *cmd;
+
+ for (cmd = commands; cmd; cmd = cmd->next) {
+ if (cmd->error_string || !ref_is_hidden(cmd->ref_name))
+ continue;
+ if (is_null_sha1(cmd->new_sha1))
+ cmd->error_string = "deny deleting a hidden ref";
+ else
+ cmd->error_string = "deny updating a hidden ref";
+ }
+}
+
static void execute_commands(struct command *commands, const char *unpacker_error)
{
struct command *cmd;
0, &cmd))
set_connectivity_errors(commands);
+ reject_updates_to_hidden(commands);
+
if (run_receive_hook(commands, "pre-receive", 0)) {
for (cmd = commands; cmd; cmd = cmd->next) {
if (!cmd->error_string)
saw_cr_at_eol ? "\r" : "");
}
-static void dump_sline(struct sline *sline, unsigned long cnt, int num_parent,
+static void dump_sline(struct sline *sline, const char *line_prefix,
+ unsigned long cnt, int num_parent,
int use_color, int result_deleted)
{
unsigned long mark = (1UL<<num_parent);
rlines -= null_context;
}
- fputs(c_frag, stdout);
+ printf("%s%s", line_prefix, c_frag);
for (i = 0; i <= num_parent; i++) putchar(combine_marker);
for (i = 0; i < num_parent; i++)
show_parent_lno(sline, lno, hunk_end, i, null_context);
struct sline *sl = &sline[lno++];
ll = (sl->flag & no_pre_delete) ? NULL : sl->lost_head;
while (ll) {
- fputs(c_old, stdout);
+ printf("%s%s", line_prefix, c_old);
for (j = 0; j < num_parent; j++) {
if (ll->parent_map & (1UL<<j))
putchar('-');
if (cnt < lno)
break;
p_mask = 1;
+ fputs(line_prefix, stdout);
if (!(sl->flag & (mark-1))) {
/*
* This sline was here to hang the
static void dump_quoted_path(const char *head,
const char *prefix,
const char *path,
+ const char *line_prefix,
const char *c_meta, const char *c_reset)
{
static struct strbuf buf = STRBUF_INIT;
strbuf_reset(&buf);
+ strbuf_addstr(&buf, line_prefix);
strbuf_addstr(&buf, c_meta);
strbuf_addstr(&buf, head);
quote_two_c_style(&buf, prefix, path, 0);
int num_parent,
int dense,
struct rev_info *rev,
+ const char *line_prefix,
int mode_differs,
int show_file_header)
{
show_log(rev);
dump_quoted_path(dense ? "diff --cc " : "diff --combined ",
- "", elem->path, c_meta, c_reset);
- printf("%sindex ", c_meta);
+ "", elem->path, line_prefix, c_meta, c_reset);
+ printf("%s%sindex ", line_prefix, c_meta);
for (i = 0; i < num_parent; i++) {
abb = find_unique_abbrev(elem->parent[i].sha1,
abbrev);
DIFF_STATUS_ADDED)
added = 0;
if (added)
- printf("%snew file mode %06o",
- c_meta, elem->mode);
+ printf("%s%snew file mode %06o",
+ line_prefix, c_meta, elem->mode);
else {
if (deleted)
- printf("%sdeleted file ", c_meta);
+ printf("%s%sdeleted file ",
+ line_prefix, c_meta);
printf("mode ");
for (i = 0; i < num_parent; i++) {
printf("%s%06o", i ? "," : "",
if (added)
dump_quoted_path("--- ", "", "/dev/null",
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
else
dump_quoted_path("--- ", a_prefix, elem->path,
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
if (deleted)
dump_quoted_path("+++ ", "", "/dev/null",
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
else
dump_quoted_path("+++ ", b_prefix, elem->path,
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
}
static void show_patch_diff(struct combine_diff_path *elem, int num_parent,
struct userdiff_driver *userdiff;
struct userdiff_driver *textconv = NULL;
int is_binary;
+ const char *line_prefix = diff_line_prefix(opt);
context = opt->context;
userdiff = userdiff_find_by_path(elem->path);
}
if (is_binary) {
show_combined_header(elem, num_parent, dense, rev,
- mode_differs, 0);
+ line_prefix, mode_differs, 0);
printf("Binary files differ\n");
free(result);
return;
if (show_hunks || mode_differs || working_tree_file) {
show_combined_header(elem, num_parent, dense, rev,
- mode_differs, 1);
- dump_sline(sline, cnt, num_parent,
+ line_prefix, mode_differs, 1);
+ dump_sline(sline, line_prefix, cnt, num_parent,
opt->use_color, result_deleted);
}
free(result);
{
struct diff_options *opt = &rev->diffopt;
int line_termination, inter_name_termination, i;
+ const char *line_prefix = diff_line_prefix(opt);
line_termination = opt->line_termination;
inter_name_termination = '\t';
if (rev->loginfo && !rev->no_commit_id)
show_log(rev);
+
if (opt->output_format & DIFF_FORMAT_RAW) {
+ printf("%s", line_prefix);
+
/* As many colons as there are parents */
for (i = 0; i < num_parent; i++)
putchar(':');
struct rev_info *rev)
{
struct diff_options *opt = &rev->diffopt;
+
if (!p->len)
return;
if (opt->output_format & (DIFF_FORMAT_RAW |
if (show_log_first && i == 0) {
show_log(rev);
+
if (rev->verbose_header && opt->output_format)
- putchar(opt->line_termination);
+ printf("%s%c", diff_line_prefix(opt),
+ opt->line_termination);
}
diff_flush(&diffopts);
}
if (opt->output_format & DIFF_FORMAT_PATCH) {
if (needsep)
- putchar(opt->line_termination);
+ printf("%s%c", diff_line_prefix(opt),
+ opt->line_termination);
for (p = paths; p; p = p->next) {
if (p->len)
show_patch_diff(p, num_parent, dense,
endif
ifeq ($(uname_S),QNX)
COMPAT_CFLAGS += -DSA_RESTART=0
+ EXPAT_NEEDS_XMLPARSE_H = YesPlease
HAVE_STRINGS_H = YesPlease
NEEDS_SOCKET = YesPlease
NO_FNMATCH_CASEFOLD = YesPlease
__gitcomp_nl "$(__git_refs)"
}
+__git_diff_algorithms="myers minimal patience histogram"
+
__git_diff_common_options="--stat --numstat --shortstat --summary
--patch-with-stat --name-only --name-status --color
--no-color --color-words --no-renames --check
--no-ext-diff
--no-prefix --src-prefix= --dst-prefix=
--inter-hunk-context=
- --patience
+ --patience --histogram --minimal
--raw
--dirstat --dirstat= --dirstat-by-file
--dirstat-by-file= --cumulative
+ --diff-algorithm=
"
_git_diff ()
__git_has_doubledash && return
case "$cur" in
+ --diff-algorithm=*)
+ __gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
+ return
+ ;;
--*)
__gitcomp "--cached --staged --pickaxe-all --pickaxe-regex
--base --ours --theirs --no-index
while [ $c -gt 1 ]; do
word="${words[c]}"
case "$word" in
- --global|--system|--file=*)
+ --system|--global|--local|--file=*)
config_file="$word"
break
;;
case "$cur" in
--*)
__gitcomp "
- --global --system --file=
+ --system --global --local --file=
--list --replace-all
--get --get-all --get-regexp
--add --unset --unset-all
diff.suppressBlankEmpty
diff.tool
diff.wordRegex
+ diff.algorithm
difftool.
difftool.prompt
fetch.recurseSubmodules
" "" "${cur#*=}"
return
;;
+ --diff-algorithm=*)
+ __gitcomp "$__git_diff_algorithms" "" "${cur##--diff-algorithm=}"
+ return
+ ;;
--*)
__gitcomp "--pretty= --format= --abbrev-commit --oneline
$__git_diff_common_options
#
# If you would like to see if there are untracked files, then you can set
# GIT_PS1_SHOWUNTRACKEDFILES to a nonempty value. If there are untracked
-# files, then a '%' will be shown next to the branch name.
+# files, then a '%' will be shown next to the branch name. You can
+# configure this per-repository with the bash.showUntrackedFiles
+# variable, which defaults to true once GIT_PS1_SHOWUNTRACKEDFILES is
+# enabled.
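+# (For example, in a large repository where the untracked-files check is
+# slow, one could run "git config bash.showUntrackedFiles false" there to
+# turn the indicator off just for that repository.)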
#
# If you would like to see the difference between HEAD and its upstream,
# set GIT_PS1_SHOWUPSTREAM="auto". A "<" indicates you are behind, ">"
fi
if [ -n "${GIT_PS1_SHOWUNTRACKEDFILES-}" ]; then
- if [ -n "$(git ls-files --others --exclude-standard)" ]; then
- u="%"
+ if [ "$(git config --bool bash.showUntrackedFiles)" != "false" ]; then
+ if [ -n "$(git ls-files --others --exclude-standard)" ]; then
+ u="%"
+ fi
fi
fi
--- /dev/null
+git-remote-mediawiki
#
-# Copyright (C) 2012
-# Charles Roussel <charles.roussel@ensimag.imag.fr>
-# Simon Cathebras <simon.cathebras@ensimag.imag.fr>
-# Julien Khayat <julien.khayat@ensimag.imag.fr>
-# Guillaume Sasdy <guillaume.sasdy@ensimag.imag.fr>
-# Simon Perrat <simon.perrat@ensimag.imag.fr>
+# Copyright (C) 2013
+# Matthieu Moy <Matthieu.Moy@imag.fr>
#
## Build git-remote-mediawiki
--include ../../config.mak.autogen
--include ../../config.mak
+SCRIPT_PERL=git-remote-mediawiki.perl
+GIT_ROOT_DIR=../..
+HERE=contrib/mw-to-git/
-ifndef PERL_PATH
- PERL_PATH = /usr/bin/perl
-endif
-ifndef gitexecdir
- gitexecdir = $(shell git --exec-path)
-endif
+SCRIPT_PERL_FULL=$(patsubst %,$(HERE)/%,$(SCRIPT_PERL))
-PERL_PATH_SQ = $(subst ','\'',$(PERL_PATH))
-gitexecdir_SQ = $(subst ','\'',$(gitexecdir))
-SCRIPT = git-remote-mediawiki
+all: build
-.PHONY: install help doc test clean
-
-help:
- @echo 'This is the help target of the Makefile. Current configuration:'
- @echo ' gitexecdir = $(gitexecdir_SQ)'
- @echo ' PERL_PATH = $(PERL_PATH_SQ)'
- @echo 'Run "$(MAKE) install" to install $(SCRIPT) in gitexecdir'
- @echo 'Run "$(MAKE) test" to run the testsuite'
-
-install:
- sed -e '1s|#!.*/perl|#!$(PERL_PATH_SQ)|' $(SCRIPT) \
- > '$(gitexecdir_SQ)/$(SCRIPT)'
- chmod +x '$(gitexecdir)/$(SCRIPT)'
-
-doc:
- @echo 'Sorry, "make doc" is not implemented yet for $(SCRIPT)'
-
-test:
- $(MAKE) -C t/ test
-
-clean:
- $(RM) '$(gitexecdir)/$(SCRIPT)'
- $(MAKE) -C t/ clean
+build install clean:
+ $(MAKE) -C $(GIT_ROOT_DIR) SCRIPT_PERL=$(SCRIPT_PERL_FULL) \
+ $@-perl-script
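+
+# A sketch of typical use, from the top of a git.git checkout:
+#
+#     make -C contrib/mw-to-git build
+#     make -C contrib/mw-to-git install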
+++ /dev/null
-#! /usr/bin/perl
-
-# Copyright (C) 2011
-# Jérémie Nikaes <jeremie.nikaes@ensimag.imag.fr>
-# Arnaud Lacurie <arnaud.lacurie@ensimag.imag.fr>
-# Claire Fousse <claire.fousse@ensimag.imag.fr>
-# David Amouyal <david.amouyal@ensimag.imag.fr>
-# Matthieu Moy <matthieu.moy@grenoble-inp.fr>
-# License: GPL v2 or later
-
-# Gateway between Git and MediaWiki.
-# Documentation & bugtracker: https://github.com/moy/Git-Mediawiki/
-
-use strict;
-use MediaWiki::API;
-use DateTime::Format::ISO8601;
-
-# By default, use UTF-8 to communicate with Git and the user
-binmode STDERR, ":utf8";
-binmode STDOUT, ":utf8";
-
-use URI::Escape;
-use IPC::Open2;
-
-use warnings;
-
-# Mediawiki filenames can contain forward slashes. This variable decides by which pattern they should be replaced
-use constant SLASH_REPLACEMENT => "%2F";
-
-# It's not always possible to delete pages (may require some
-# priviledges). Deleted pages are replaced with this content.
-use constant DELETED_CONTENT => "[[Category:Deleted]]\n";
-
-# It's not possible to create empty pages. New empty files in Git are
-# sent with this content instead.
-use constant EMPTY_CONTENT => "<!-- empty page -->\n";
-
-# used to reflect file creation or deletion in diff.
-use constant NULL_SHA1 => "0000000000000000000000000000000000000000";
-
-# Used on Git's side to reflect empty edit messages on the wiki
-use constant EMPTY_MESSAGE => '*Empty MediaWiki Message*';
-
-my $remotename = $ARGV[0];
-my $url = $ARGV[1];
-
-# Accept both space-separated and multiple keys in config file.
-# Spaces should be written as _ anyway because we'll use chomp.
-my @tracked_pages = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".pages"));
-chomp(@tracked_pages);
-
-# Just like @tracked_pages, but for MediaWiki categories.
-my @tracked_categories = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".categories"));
-chomp(@tracked_categories);
-
-# Import media files on pull
-my $import_media = run_git("config --get --bool remote.". $remotename .".mediaimport");
-chomp($import_media);
-$import_media = ($import_media eq "true");
-
-# Export media files on push
-my $export_media = run_git("config --get --bool remote.". $remotename .".mediaexport");
-chomp($export_media);
-$export_media = !($export_media eq "false");
-
-my $wiki_login = run_git("config --get remote.". $remotename .".mwLogin");
-# Note: mwPassword is discourraged. Use the credential system instead.
-my $wiki_passwd = run_git("config --get remote.". $remotename .".mwPassword");
-my $wiki_domain = run_git("config --get remote.". $remotename .".mwDomain");
-chomp($wiki_login);
-chomp($wiki_passwd);
-chomp($wiki_domain);
-
-# Import only last revisions (both for clone and fetch)
-my $shallow_import = run_git("config --get --bool remote.". $remotename .".shallow");
-chomp($shallow_import);
-$shallow_import = ($shallow_import eq "true");
-
-# Fetch (clone and pull) by revisions instead of by pages. This behavior
-# is more efficient when we have a wiki with lots of pages and we fetch
-# the revisions quite often so that they concern only few pages.
-# Possible values:
-# - by_rev: perform one query per new revision on the remote wiki
-# - by_page: query each tracked page for new revision
-my $fetch_strategy = run_git("config --get remote.$remotename.fetchStrategy");
-unless ($fetch_strategy) {
- $fetch_strategy = run_git("config --get mediawiki.fetchStrategy");
-}
-chomp($fetch_strategy);
-unless ($fetch_strategy) {
- $fetch_strategy = "by_page";
-}
-
-# Dumb push: don't update notes and mediawiki ref to reflect the last push.
-#
-# Configurable with mediawiki.dumbPush, or per-remote with
-# remote.<remotename>.dumbPush.
-#
-# This means the user will have to re-import the just-pushed
-# revisions. On the other hand, this means that the Git revisions
-# corresponding to MediaWiki revisions are all imported from the wiki,
-# regardless of whether they were initially created in Git or from the
-# web interface, hence all users will get the same history (i.e. if
-# the push from Git to MediaWiki loses some information, everybody
-# will get the history with information lost). If the import is
-# deterministic, this means everybody gets the same sha1 for each
-# MediaWiki revision.
-my $dumb_push = run_git("config --get --bool remote.$remotename.dumbPush");
-unless ($dumb_push) {
- $dumb_push = run_git("config --get --bool mediawiki.dumbPush");
-}
-chomp($dumb_push);
-$dumb_push = ($dumb_push eq "true");
-
-my $wiki_name = $url;
-$wiki_name =~ s/[^\/]*:\/\///;
-# If URL is like http://user:password@example.com/, we clearly don't
-# want the password in $wiki_name. While we're there, also remove user
-# and '@' sign, to avoid author like MWUser@HTTPUser@host.com
-$wiki_name =~ s/^.*@//;
-
-# Commands parser
-my $entry;
-my @cmd;
-while (<STDIN>) {
- chomp;
- @cmd = split(/ /);
- if (defined($cmd[0])) {
- # Line not blank
- if ($cmd[0] eq "capabilities") {
- die("Too many arguments for capabilities") unless (!defined($cmd[1]));
- mw_capabilities();
- } elsif ($cmd[0] eq "list") {
- die("Too many arguments for list") unless (!defined($cmd[2]));
- mw_list($cmd[1]);
- } elsif ($cmd[0] eq "import") {
- die("Invalid arguments for import") unless ($cmd[1] ne "" && !defined($cmd[2]));
- mw_import($cmd[1]);
- } elsif ($cmd[0] eq "option") {
- die("Too many arguments for option") unless ($cmd[1] ne "" && $cmd[2] ne "" && !defined($cmd[3]));
- mw_option($cmd[1],$cmd[2]);
- } elsif ($cmd[0] eq "push") {
- mw_push($cmd[1]);
- } else {
- print STDERR "Unknown command. Aborting...\n";
- last;
- }
- } else {
- # blank line: we should terminate
- last;
- }
-
- BEGIN { $| = 1 } # flush STDOUT, to make sure the previous
- # command is fully processed.
-}
-
-########################## Functions ##############################
-
-## credential API management (generic functions)
-
-sub credential_read {
- my %credential;
- my $reader = shift;
- my $op = shift;
- while (<$reader>) {
- my ($key, $value) = /([^=]*)=(.*)/;
- if (not defined $key) {
- die "ERROR receiving response from git credential $op:\n$_\n";
- }
- $credential{$key} = $value;
- }
- return %credential;
-}
-
-sub credential_write {
- my $credential = shift;
- my $writer = shift;
- # url overwrites other fields, so it must come first
- print $writer "url=$credential->{url}\n" if exists $credential->{url};
- while (my ($key, $value) = each(%$credential) ) {
- if (length $value && $key ne 'url') {
- print $writer "$key=$value\n";
- }
- }
-}
-
-sub credential_run {
- my $op = shift;
- my $credential = shift;
- my $pid = open2(my $reader, my $writer, "git credential $op");
- credential_write($credential, $writer);
- print $writer "\n";
- close($writer);
-
- if ($op eq "fill") {
- %$credential = credential_read($reader, $op);
- } else {
- if (<$reader>) {
- die "ERROR while running git credential $op:\n$_";
- }
- }
- close($reader);
- waitpid($pid, 0);
- my $child_exit_status = $? >> 8;
- if ($child_exit_status != 0) {
- die "'git credential $op' failed with code $child_exit_status.";
- }
-}
-
-# MediaWiki API instance, created lazily.
-my $mediawiki;
-
-sub mw_connect_maybe {
- if ($mediawiki) {
- return;
- }
- $mediawiki = MediaWiki::API->new;
- $mediawiki->{config}->{api_url} = "$url/api.php";
- if ($wiki_login) {
- my %credential = (url => $url);
- $credential{username} = $wiki_login;
- $credential{password} = $wiki_passwd;
- credential_run("fill", \%credential);
- my $request = {lgname => $credential{username},
- lgpassword => $credential{password},
- lgdomain => $wiki_domain};
- if ($mediawiki->login($request)) {
- credential_run("approve", \%credential);
- print STDERR "Logged in mediawiki user \"$credential{username}\".\n";
- } else {
- print STDERR "Failed to log in mediawiki user \"$credential{username}\" on $url\n";
- print STDERR " (error " .
- $mediawiki->{error}->{code} . ': ' .
- $mediawiki->{error}->{details} . ")\n";
- credential_run("reject", \%credential);
- exit 1;
- }
- }
-}
-
-## Functions for listing pages on the remote wiki
-sub get_mw_tracked_pages {
- my $pages = shift;
- get_mw_page_list(\@tracked_pages, $pages);
-}
-
-sub get_mw_page_list {
- my $page_list = shift;
- my $pages = shift;
- my @some_pages = @$page_list;
- while (@some_pages) {
- my $last = 50;
- if ($#some_pages < $last) {
- $last = $#some_pages;
- }
- my @slice = @some_pages[0..$last];
- get_mw_first_pages(\@slice, $pages);
- @some_pages = @some_pages[51..$#some_pages];
- }
-}
-
-sub get_mw_tracked_categories {
- my $pages = shift;
- foreach my $category (@tracked_categories) {
- if (index($category, ':') < 0) {
- # Mediawiki requires the Category
- # prefix, but let's not force the user
- # to specify it.
- $category = "Category:" . $category;
- }
- my $mw_pages = $mediawiki->list( {
- action => 'query',
- list => 'categorymembers',
- cmtitle => $category,
- cmlimit => 'max' } )
- || die $mediawiki->{error}->{code} . ': '
- . $mediawiki->{error}->{details};
- foreach my $page (@{$mw_pages}) {
- $pages->{$page->{title}} = $page;
- }
- }
-}
-
-sub get_mw_all_pages {
- my $pages = shift;
- # No user-provided list, get the list of pages from the API.
- my $mw_pages = $mediawiki->list({
- action => 'query',
- list => 'allpages',
- aplimit => 'max'
- });
- if (!defined($mw_pages)) {
- print STDERR "fatal: could not get the list of wiki pages.\n";
- print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
- print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
- exit 1;
- }
- foreach my $page (@{$mw_pages}) {
- $pages->{$page->{title}} = $page;
- }
-}
-
-# queries the wiki for a set of pages. Meant to be used within a loop
-# querying the wiki for slices of page list.
-sub get_mw_first_pages {
- my $some_pages = shift;
- my @some_pages = @{$some_pages};
-
- my $pages = shift;
-
- # pattern 'page1|page2|...' required by the API
- my $titles = join('|', @some_pages);
-
- my $mw_pages = $mediawiki->api({
- action => 'query',
- titles => $titles,
- });
- if (!defined($mw_pages)) {
- print STDERR "fatal: could not query the list of wiki pages.\n";
- print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
- print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
- exit 1;
- }
- while (my ($id, $page) = each(%{$mw_pages->{query}->{pages}})) {
- if ($id < 0) {
- print STDERR "Warning: page $page->{title} not found on wiki\n";
- } else {
- $pages->{$page->{title}} = $page;
- }
- }
-}
-
-# Get the list of pages to be fetched according to configuration.
-sub get_mw_pages {
- mw_connect_maybe();
-
- print STDERR "Listing pages on remote wiki...\n";
-
- my %pages; # hash on page titles to avoid duplicates
- my $user_defined;
- if (@tracked_pages) {
- $user_defined = 1;
- # The user provided a list of pages titles, but we
- # still need to query the API to get the page IDs.
- get_mw_tracked_pages(\%pages);
- }
- if (@tracked_categories) {
- $user_defined = 1;
- get_mw_tracked_categories(\%pages);
- }
- if (!$user_defined) {
- get_mw_all_pages(\%pages);
- }
- if ($import_media) {
- print STDERR "Getting media files for selected pages...\n";
- if ($user_defined) {
- get_linked_mediafiles(\%pages);
- } else {
- get_all_mediafiles(\%pages);
- }
- }
- print STDERR (scalar keys %pages) . " pages found.\n";
- return %pages;
-}
-
-# usage: $out = run_git("command args");
-# $out = run_git("command args", "raw"); # don't interpret output as UTF-8.
-sub run_git {
- my $args = shift;
- my $encoding = (shift || "encoding(UTF-8)");
- open(my $git, "-|:$encoding", "git " . $args);
- my $res = do { local $/; <$git> };
- close($git);
-
- return $res;
-}
-
-
-sub get_all_mediafiles {
- my $pages = shift;
- # Attach list of all pages for media files from the API,
- # they are in a different namespace, only one namespace
- # can be queried at the same moment
- my $mw_pages = $mediawiki->list({
- action => 'query',
- list => 'allpages',
- apnamespace => get_mw_namespace_id("File"),
- aplimit => 'max'
- });
- if (!defined($mw_pages)) {
- print STDERR "fatal: could not get the list of pages for media files.\n";
- print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
- print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
- exit 1;
- }
- foreach my $page (@{$mw_pages}) {
- $pages->{$page->{title}} = $page;
- }
-}
-
-sub get_linked_mediafiles {
- my $pages = shift;
- my @titles = map $_->{title}, values(%{$pages});
-
- # The query is split in small batches because of the MW API limit of
- # the number of links to be returned (500 links max).
- my $batch = 10;
- while (@titles) {
- if ($#titles < $batch) {
- $batch = $#titles;
- }
- my @slice = @titles[0..$batch];
-
- # pattern 'page1|page2|...' required by the API
- my $mw_titles = join('|', @slice);
-
- # Media files could be included or linked from
- # a page, get all related
- my $query = {
- action => 'query',
- prop => 'links|images',
- titles => $mw_titles,
- plnamespace => get_mw_namespace_id("File"),
- pllimit => 'max'
- };
- my $result = $mediawiki->api($query);
-
- while (my ($id, $page) = each(%{$result->{query}->{pages}})) {
- my @media_titles;
- if (defined($page->{links})) {
- my @link_titles = map $_->{title}, @{$page->{links}};
- push(@media_titles, @link_titles);
- }
- if (defined($page->{images})) {
- my @image_titles = map $_->{title}, @{$page->{images}};
- push(@media_titles, @image_titles);
- }
- if (@media_titles) {
- get_mw_page_list(\@media_titles, $pages);
- }
- }
-
- @titles = @titles[($batch+1)..$#titles];
- }
-}
-
-sub get_mw_mediafile_for_page_revision {
- # Name of the file on Wiki, with the prefix.
- my $filename = shift;
- my $timestamp = shift;
- my %mediafile;
-
- # Search if on a media file with given timestamp exists on
- # MediaWiki. In that case download the file.
- my $query = {
- action => 'query',
- prop => 'imageinfo',
- titles => "File:" . $filename,
- iistart => $timestamp,
- iiend => $timestamp,
- iiprop => 'timestamp|archivename|url',
- iilimit => 1
- };
- my $result = $mediawiki->api($query);
-
- my ($fileid, $file) = each( %{$result->{query}->{pages}} );
- # If not defined it means there is no revision of the file for
- # given timestamp.
- if (defined($file->{imageinfo})) {
- $mediafile{title} = $filename;
-
- my $fileinfo = pop(@{$file->{imageinfo}});
- $mediafile{timestamp} = $fileinfo->{timestamp};
- # Mediawiki::API's download function doesn't support https URLs
- # and can't download old versions of files.
- print STDERR "\tDownloading file $mediafile{title}, version $mediafile{timestamp}\n";
- $mediafile{content} = download_mw_mediafile($fileinfo->{url});
- }
- return %mediafile;
-}
-
-sub download_mw_mediafile {
- my $url = shift;
-
- my $response = $mediawiki->{ua}->get($url);
- if ($response->code == 200) {
- return $response->decoded_content;
- } else {
- print STDERR "Error downloading mediafile from :\n";
- print STDERR "URL: $url\n";
- print STDERR "Server response: " . $response->code . " " . $response->message . "\n";
- exit 1;
- }
-}
-
-sub get_last_local_revision {
- # Get note regarding last mediawiki revision
- my $note = run_git("notes --ref=$remotename/mediawiki show refs/mediawiki/$remotename/master 2>/dev/null");
- my @note_info = split(/ /, $note);
-
- my $lastrevision_number;
- if (!(defined($note_info[0]) && $note_info[0] eq "mediawiki_revision:")) {
- print STDERR "No previous mediawiki revision found";
- $lastrevision_number = 0;
- } else {
- # Notes are formatted : mediawiki_revision: #number
- $lastrevision_number = $note_info[1];
- chomp($lastrevision_number);
- print STDERR "Last local mediawiki revision found is $lastrevision_number";
- }
- return $lastrevision_number;
-}
-
-# Remember the timestamp corresponding to a revision id.
-my %basetimestamps;
-
-# Get the last remote revision without taking in account which pages are
-# tracked or not. This function makes a single request to the wiki thus
-# avoid a loop onto all tracked pages. This is useful for the fetch-by-rev
-# option.
-sub get_last_global_remote_rev {
- mw_connect_maybe();
-
- my $query = {
- action => 'query',
- list => 'recentchanges',
- prop => 'revisions',
- rclimit => '1',
- rcdir => 'older',
- };
- my $result = $mediawiki->api($query);
- return $result->{query}->{recentchanges}[0]->{revid};
-}
-
-# Get the last remote revision concerning the tracked pages and the tracked
-# categories.
-sub get_last_remote_revision {
- mw_connect_maybe();
-
- my %pages_hash = get_mw_pages();
- my @pages = values(%pages_hash);
-
- my $max_rev_num = 0;
-
- print STDERR "Getting last revision id on tracked pages...\n";
-
- foreach my $page (@pages) {
- my $id = $page->{pageid};
-
- my $query = {
- action => 'query',
- prop => 'revisions',
- rvprop => 'ids|timestamp',
- pageids => $id,
- };
-
- my $result = $mediawiki->api($query);
-
- my $lastrev = pop(@{$result->{query}->{pages}->{$id}->{revisions}});
-
- $basetimestamps{$lastrev->{revid}} = $lastrev->{timestamp};
-
- $max_rev_num = ($lastrev->{revid} > $max_rev_num ? $lastrev->{revid} : $max_rev_num);
- }
-
- print STDERR "Last remote revision found is $max_rev_num.\n";
- return $max_rev_num;
-}
-
-# Clean content before sending it to MediaWiki
-sub mediawiki_clean {
- my $string = shift;
- my $page_created = shift;
- # Mediawiki does not allow blank space at the end of a page and ends with a single \n.
- # This function right trims a string and adds a \n at the end to follow this rule
- $string =~ s/\s+$//;
- if ($string eq "" && $page_created) {
- # Creating empty pages is forbidden.
- $string = EMPTY_CONTENT;
- }
- return $string."\n";
-}
-
-# Filter applied on MediaWiki data before adding them to Git
-sub mediawiki_smudge {
- my $string = shift;
- if ($string eq EMPTY_CONTENT) {
- $string = "";
- }
- # This \n is important. This is due to mediawiki's way to handle end of files.
- return $string."\n";
-}
-
-sub mediawiki_clean_filename {
- my $filename = shift;
- $filename =~ s/@{[SLASH_REPLACEMENT]}/\//g;
- # [, ], |, {, and } are forbidden by MediaWiki, even URL-encoded.
- # Do a variant of URL-encoding, i.e. looks like URL-encoding,
- # but with _ added to prevent MediaWiki from thinking this is
- # an actual special character.
- $filename =~ s/[\[\]\{\}\|]/sprintf("_%%_%x", ord($&))/ge;
- # If we use the uri escape before
- # we should unescape here, before anything
-
- return $filename;
-}
-
-sub mediawiki_smudge_filename {
- my $filename = shift;
- $filename =~ s/\//@{[SLASH_REPLACEMENT]}/g;
- $filename =~ s/ /_/g;
- # Decode forbidden characters encoded in mediawiki_clean_filename
- $filename =~ s/_%_([0-9a-fA-F][0-9a-fA-F])/sprintf("%c", hex($1))/ge;
- return $filename;
-}
-
-sub literal_data {
- my ($content) = @_;
- print STDOUT "data ", bytes::length($content), "\n", $content;
-}
-
-sub literal_data_raw {
- # Output possibly binary content.
- my ($content) = @_;
- # Avoid confusion between size in bytes and in characters
- utf8::downgrade($content);
- binmode STDOUT, ":raw";
- print STDOUT "data ", bytes::length($content), "\n", $content;
- binmode STDOUT, ":utf8";
-}
-
-sub mw_capabilities {
- # Revisions are imported to the private namespace
- # refs/mediawiki/$remotename/ by the helper and fetched into
- # refs/remotes/$remotename later by fetch.
- print STDOUT "refspec refs/heads/*:refs/mediawiki/$remotename/*\n";
- print STDOUT "import\n";
- print STDOUT "list\n";
- print STDOUT "push\n";
- print STDOUT "\n";
-}
-
-sub mw_list {
- # MediaWiki do not have branches, we consider one branch arbitrarily
- # called master, and HEAD pointing to it.
- print STDOUT "? refs/heads/master\n";
- print STDOUT "\@refs/heads/master HEAD\n";
- print STDOUT "\n";
-}
-
-sub mw_option {
- print STDERR "remote-helper command 'option $_[0]' not yet implemented\n";
- print STDOUT "unsupported\n";
-}
-
-sub fetch_mw_revisions_for_page {
- my $page = shift;
- my $id = shift;
- my $fetch_from = shift;
- my @page_revs = ();
- my $query = {
- action => 'query',
- prop => 'revisions',
- rvprop => 'ids',
- rvdir => 'newer',
- rvstartid => $fetch_from,
- rvlimit => 500,
- pageids => $id,
- };
-
- my $revnum = 0;
- # Get 500 revisions at a time due to the mediawiki api limit
- while (1) {
- my $result = $mediawiki->api($query);
-
- # Parse each of those 500 revisions
- foreach my $revision (@{$result->{query}->{pages}->{$id}->{revisions}}) {
- my $page_rev_ids;
- $page_rev_ids->{pageid} = $page->{pageid};
- $page_rev_ids->{revid} = $revision->{revid};
- push(@page_revs, $page_rev_ids);
- $revnum++;
- }
- last unless $result->{'query-continue'};
- $query->{rvstartid} = $result->{'query-continue'}->{revisions}->{rvstartid};
- }
- if ($shallow_import && @page_revs) {
- print STDERR " Found 1 revision (shallow import).\n";
- @page_revs = sort {$b->{revid} <=> $a->{revid}} (@page_revs);
- return $page_revs[0];
- }
- print STDERR " Found ", $revnum, " revision(s).\n";
- return @page_revs;
-}
-
-sub fetch_mw_revisions {
- my $pages = shift; my @pages = @{$pages};
- my $fetch_from = shift;
-
- my @revisions = ();
- my $n = 1;
- foreach my $page (@pages) {
- my $id = $page->{pageid};
-
- print STDERR "page $n/", scalar(@pages), ": ". $page->{title} ."\n";
- $n++;
- my @page_revs = fetch_mw_revisions_for_page($page, $id, $fetch_from);
- @revisions = (@page_revs, @revisions);
- }
-
- return ($n, @revisions);
-}
-
-sub fe_escape_path {
- my $path = shift;
- $path =~ s/\\/\\\\/g;
- $path =~ s/"/\\"/g;
- $path =~ s/\n/\\n/g;
- return '"' . $path . '"';
-}
-
-sub import_file_revision {
- my $commit = shift;
- my %commit = %{$commit};
- my $full_import = shift;
- my $n = shift;
- my $mediafile = shift;
- my %mediafile;
- if ($mediafile) {
- %mediafile = %{$mediafile};
- }
-
- my $title = $commit{title};
- my $comment = $commit{comment};
- my $content = $commit{content};
- my $author = $commit{author};
- my $date = $commit{date};
-
- print STDOUT "commit refs/mediawiki/$remotename/master\n";
- print STDOUT "mark :$n\n";
- print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
- literal_data($comment);
-
- # If it's not a clone, we need to know where to start from
- if (!$full_import && $n == 1) {
- print STDOUT "from refs/mediawiki/$remotename/master^0\n";
- }
- if ($content ne DELETED_CONTENT) {
- print STDOUT "M 644 inline " .
- fe_escape_path($title . ".mw") . "\n";
- literal_data($content);
- if (%mediafile) {
- print STDOUT "M 644 inline "
- . fe_escape_path($mediafile{title}) . "\n";
- literal_data_raw($mediafile{content});
- }
- print STDOUT "\n\n";
- } else {
- print STDOUT "D " . fe_escape_path($title . ".mw") . "\n";
- }
-
- # mediawiki revision number in the git note
- if ($full_import && $n == 1) {
- print STDOUT "reset refs/notes/$remotename/mediawiki\n";
- }
- print STDOUT "commit refs/notes/$remotename/mediawiki\n";
- print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
- literal_data("Note added by git-mediawiki during import");
- if (!$full_import && $n == 1) {
- print STDOUT "from refs/notes/$remotename/mediawiki^0\n";
- }
- print STDOUT "N inline :$n\n";
- literal_data("mediawiki_revision: " . $commit{mw_revision});
- print STDOUT "\n\n";
-}
-
-# parse a sequence of
-# <cmd> <arg1>
-# <cmd> <arg2>
-# \n
-# (like batch sequence of import and sequence of push statements)
-sub get_more_refs {
- my $cmd = shift;
- my @refs;
- while (1) {
- my $line = <STDIN>;
- if ($line =~ m/^$cmd (.*)$/) {
- push(@refs, $1);
- } elsif ($line eq "\n") {
- return @refs;
- } else {
- die("Invalid command in a '$cmd' batch: ". $_);
- }
- }
-}
-
-sub mw_import {
- # multiple import commands can follow each other.
- my @refs = (shift, get_more_refs("import"));
- foreach my $ref (@refs) {
- mw_import_ref($ref);
- }
- print STDOUT "done\n";
-}
-
-sub mw_import_ref {
- my $ref = shift;
- # The remote helper will call "import HEAD" and
- # "import refs/heads/master".
- # Since HEAD is a symbolic ref to master (by convention,
- # followed by the output of the command "list" that we gave),
- # we don't need to do anything in this case.
- if ($ref eq "HEAD") {
- return;
- }
-
- mw_connect_maybe();
-
- print STDERR "Searching revisions...\n";
- my $last_local = get_last_local_revision();
- my $fetch_from = $last_local + 1;
- if ($fetch_from == 1) {
- print STDERR ", fetching from beginning.\n";
- } else {
- print STDERR ", fetching from here.\n";
- }
-
- my $n = 0;
- if ($fetch_strategy eq "by_rev") {
- print STDERR "Fetching & writing export data by revs...\n";
- $n = mw_import_ref_by_revs($fetch_from);
- } elsif ($fetch_strategy eq "by_page") {
- print STDERR "Fetching & writing export data by pages...\n";
- $n = mw_import_ref_by_pages($fetch_from);
- } else {
- print STDERR "fatal: invalid fetch strategy \"$fetch_strategy\".\n";
- print STDERR "Check your configuration variables remote.$remotename.fetchStrategy and mediawiki.fetchStrategy\n";
- exit 1;
- }
-
- if ($fetch_from == 1 && $n == 0) {
- print STDERR "You appear to have cloned an empty MediaWiki.\n";
- # Something has to be done remote-helper side. If nothing is done, an error is
- # thrown saying that HEAD is refering to unknown object 0000000000000000000
- # and the clone fails.
- }
-}
-
-sub mw_import_ref_by_pages {
-
- my $fetch_from = shift;
- my %pages_hash = get_mw_pages();
- my @pages = values(%pages_hash);
-
- my ($n, @revisions) = fetch_mw_revisions(\@pages, $fetch_from);
-
- @revisions = sort {$a->{revid} <=> $b->{revid}} @revisions;
- my @revision_ids = map $_->{revid}, @revisions;
-
- return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
-}
-
-sub mw_import_ref_by_revs {
-
- my $fetch_from = shift;
- my %pages_hash = get_mw_pages();
-
- my $last_remote = get_last_global_remote_rev();
- my @revision_ids = $fetch_from..$last_remote;
- return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
-}
-
-# Import revisions given in second argument (array of integers).
-# Only pages appearing in the third argument (hash indexed by page titles)
-# will be imported.
-sub mw_import_revids {
- my $fetch_from = shift;
- my $revision_ids = shift;
- my $pages = shift;
-
- my $n = 0;
- my $n_actual = 0;
- my $last_timestamp = 0; # Placeholer in case $rev->timestamp is undefined
-
- foreach my $pagerevid (@$revision_ids) {
- # Count page even if we skip it, since we display
- # $n/$total and $total includes skipped pages.
- $n++;
-
- # fetch the content of the pages
- my $query = {
- action => 'query',
- prop => 'revisions',
- rvprop => 'content|timestamp|comment|user|ids',
- revids => $pagerevid,
- };
-
- my $result = $mediawiki->api($query);
-
- if (!$result) {
- die "Failed to retrieve modified page for revision $pagerevid";
- }
-
- if (defined($result->{query}->{badrevids}->{$pagerevid})) {
- # The revision id does not exist on the remote wiki.
- next;
- }
-
- if (!defined($result->{query}->{pages})) {
- die "Invalid revision $pagerevid.";
- }
-
- my @result_pages = values(%{$result->{query}->{pages}});
- my $result_page = $result_pages[0];
- my $rev = $result_pages[0]->{revisions}->[0];
-
- my $page_title = $result_page->{title};
-
- if (!exists($pages->{$page_title})) {
- print STDERR "$n/", scalar(@$revision_ids),
- ": Skipping revision #$rev->{revid} of $page_title\n";
- next;
- }
-
- $n_actual++;
-
- my %commit;
- $commit{author} = $rev->{user} || 'Anonymous';
- $commit{comment} = $rev->{comment} || EMPTY_MESSAGE;
- $commit{title} = mediawiki_smudge_filename($page_title);
- $commit{mw_revision} = $rev->{revid};
- $commit{content} = mediawiki_smudge($rev->{'*'});
-
- if (!defined($rev->{timestamp})) {
- $last_timestamp++;
- } else {
- $last_timestamp = $rev->{timestamp};
- }
- $commit{date} = DateTime::Format::ISO8601->parse_datetime($last_timestamp);
-
- # Differentiates classic pages and media files.
- my ($namespace, $filename) = $page_title =~ /^([^:]*):(.*)$/;
- my %mediafile;
- if ($namespace) {
- my $id = get_mw_namespace_id($namespace);
- if ($id && $id == get_mw_namespace_id("File")) {
- %mediafile = get_mw_mediafile_for_page_revision($filename, $rev->{timestamp});
- }
- }
- # If this is a revision of the media page for new version
- # of a file do one common commit for both file and media page.
- # Else do commit only for that page.
- print STDERR "$n/", scalar(@$revision_ids), ": Revision #$rev->{revid} of $commit{title}\n";
- import_file_revision(\%commit, ($fetch_from == 1), $n_actual, \%mediafile);
- }
-
- return $n_actual;
-}
-
-sub error_non_fast_forward {
- my $advice = run_git("config --bool advice.pushNonFastForward");
- chomp($advice);
- if ($advice ne "false") {
- # Native git-push would show this after the summary.
- # We can't ask it to display it cleanly, so print it
- # ourselves before.
- print STDERR "To prevent you from losing history, non-fast-forward updates were rejected\n";
- print STDERR "Merge the remote changes (e.g. 'git pull') before pushing again. See the\n";
- print STDERR "'Note about fast-forwards' section of 'git push --help' for details.\n";
- }
- print STDOUT "error $_[0] \"non-fast-forward\"\n";
- return 0;
-}
-
-sub mw_upload_file {
- my $complete_file_name = shift;
- my $new_sha1 = shift;
- my $extension = shift;
- my $file_deleted = shift;
- my $summary = shift;
- my $newrevid;
- my $path = "File:" . $complete_file_name;
- my %hashFiles = get_allowed_file_extensions();
- if (!exists($hashFiles{$extension})) {
- print STDERR "$complete_file_name is not a permitted file on this wiki.\n";
- print STDERR "Check the configuration of file uploads in your mediawiki.\n";
- return $newrevid;
- }
- # Deleting and uploading a file requires a priviledged user
- if ($file_deleted) {
- mw_connect_maybe();
- my $query = {
- action => 'delete',
- title => $path,
- reason => $summary
- };
- if (!$mediawiki->edit($query)) {
- print STDERR "Failed to delete file on remote wiki\n";
- print STDERR "Check your permissions on the remote site. Error code:\n";
- print STDERR $mediawiki->{error}->{code} . ':' . $mediawiki->{error}->{details};
- exit 1;
- }
- } else {
- # Don't let perl try to interpret file content as UTF-8 => use "raw"
- my $content = run_git("cat-file blob $new_sha1", "raw");
- if ($content ne "") {
- mw_connect_maybe();
- $mediawiki->{config}->{upload_url} =
- "$url/index.php/Special:Upload";
- $mediawiki->edit({
- action => 'upload',
- filename => $complete_file_name,
- comment => $summary,
- file => [undef,
- $complete_file_name,
- Content => $content],
- ignorewarnings => 1,
- }, {
- skip_encoding => 1
- } ) || die $mediawiki->{error}->{code} . ':'
- . $mediawiki->{error}->{details};
- my $last_file_page = $mediawiki->get_page({title => $path});
- $newrevid = $last_file_page->{revid};
- print STDERR "Pushed file: $new_sha1 - $complete_file_name.\n";
- } else {
- print STDERR "Empty file $complete_file_name not pushed.\n";
- }
- }
- return $newrevid;
-}
-
-sub mw_push_file {
- my $diff_info = shift;
- # $diff_info contains a string in this format:
- # 100644 100644 <sha1_of_blob_before_commit> <sha1_of_blob_now> <status>
- my @diff_info_split = split(/[ \t]/, $diff_info);
-
- # Filename, including .mw extension
- my $complete_file_name = shift;
- # Commit message
- my $summary = shift;
- # MediaWiki revision number. Keep the previous one by default,
- # in case there's no edit to perform.
- my $oldrevid = shift;
- my $newrevid;
-
- if ($summary eq EMPTY_MESSAGE) {
- $summary = '';
- }
-
- my $new_sha1 = $diff_info_split[3];
- my $old_sha1 = $diff_info_split[2];
- my $page_created = ($old_sha1 eq NULL_SHA1);
- my $page_deleted = ($new_sha1 eq NULL_SHA1);
- $complete_file_name = mediawiki_clean_filename($complete_file_name);
-
- my ($title, $extension) = $complete_file_name =~ /^(.*)\.([^\.]*)$/;
- if (!defined($extension)) {
- $extension = "";
- }
- if ($extension eq "mw") {
- my $ns = get_mw_namespace_id_for_page($complete_file_name);
- if ($ns && $ns == get_mw_namespace_id("File") && (!$export_media)) {
- print STDERR "Ignoring media file related page: $complete_file_name\n";
- return ($oldrevid, "ok");
- }
- my $file_content;
- if ($page_deleted) {
- # Deleting a page usually requires
- # special priviledges. A common
- # convention is to replace the page
- # with this content instead:
- $file_content = DELETED_CONTENT;
- } else {
- $file_content = run_git("cat-file blob $new_sha1");
- }
-
- mw_connect_maybe();
-
- my $result = $mediawiki->edit( {
- action => 'edit',
- summary => $summary,
- title => $title,
- basetimestamp => $basetimestamps{$oldrevid},
- text => mediawiki_clean($file_content, $page_created),
- }, {
- skip_encoding => 1 # Helps with names with accentuated characters
- });
- if (!$result) {
- if ($mediawiki->{error}->{code} == 3) {
- # edit conflicts, considered as non-fast-forward
- print STDERR 'Warning: Error ' .
- $mediawiki->{error}->{code} .
- ' from mediwiki: ' . $mediawiki->{error}->{details} .
- ".\n";
- return ($oldrevid, "non-fast-forward");
- } else {
- # Other errors. Shouldn't happen => just die()
- die 'Fatal: Error ' .
- $mediawiki->{error}->{code} .
- ' from mediwiki: ' . $mediawiki->{error}->{details};
- }
- }
- $newrevid = $result->{edit}->{newrevid};
- print STDERR "Pushed file: $new_sha1 - $title\n";
- } elsif ($export_media) {
- $newrevid = mw_upload_file($complete_file_name, $new_sha1,
- $extension, $page_deleted,
- $summary);
- } else {
- print STDERR "Ignoring media file $title\n";
- }
- $newrevid = ($newrevid or $oldrevid);
- return ($newrevid, "ok");
-}
-
-sub mw_push {
- # multiple push statements can follow each other
- my @refsspecs = (shift, get_more_refs("push"));
- my $pushed;
- for my $refspec (@refsspecs) {
- my ($force, $local, $remote) = $refspec =~ /^(\+)?([^:]*):([^:]*)$/
- or die("Invalid refspec for push. Expected <src>:<dst> or +<src>:<dst>");
- if ($force) {
- print STDERR "Warning: forced push not allowed on a MediaWiki.\n";
- }
- if ($local eq "") {
- print STDERR "Cannot delete remote branch on a MediaWiki\n";
- print STDOUT "error $remote cannot delete\n";
- next;
- }
- if ($remote ne "refs/heads/master") {
- print STDERR "Only push to the branch 'master' is supported on a MediaWiki\n";
- print STDOUT "error $remote only master allowed\n";
- next;
- }
- if (mw_push_revision($local, $remote)) {
- $pushed = 1;
- }
- }
-
- # Notify Git that the push is done
- print STDOUT "\n";
-
- if ($pushed && $dumb_push) {
- print STDERR "Just pushed some revisions to MediaWiki.\n";
- print STDERR "The pushed revisions now have to be re-imported, and your current branch\n";
- print STDERR "needs to be updated with these re-imported commits. You can do this with\n";
- print STDERR "\n";
- print STDERR " git pull --rebase\n";
- print STDERR "\n";
- }
-}
-
-sub mw_push_revision {
- my $local = shift;
- my $remote = shift; # actually, this has to be "refs/heads/master" at this point.
- my $last_local_revid = get_last_local_revision();
- print STDERR ".\n"; # Finish sentence started by get_last_local_revision()
- my $last_remote_revid = get_last_remote_revision();
- my $mw_revision = $last_remote_revid;
-
- # Get sha1 of commit pointed by local HEAD
- my $HEAD_sha1 = run_git("rev-parse $local 2>/dev/null"); chomp($HEAD_sha1);
- # Get sha1 of commit pointed by remotes/$remotename/master
- my $remoteorigin_sha1 = run_git("rev-parse refs/remotes/$remotename/master 2>/dev/null");
- chomp($remoteorigin_sha1);
-
- if ($last_local_revid > 0 &&
- $last_local_revid < $last_remote_revid) {
- return error_non_fast_forward($remote);
- }
-
- if ($HEAD_sha1 eq $remoteorigin_sha1) {
- # nothing to push
- return 0;
- }
-
- # Get every commit in between HEAD and refs/remotes/origin/master,
- # including HEAD and refs/remotes/origin/master
- my @commit_pairs = ();
- if ($last_local_revid > 0) {
- my $parsed_sha1 = $remoteorigin_sha1;
- # Find a path from last MediaWiki commit to pushed commit
- print STDERR "Computing path from local to remote ...\n";
- my @local_ancestry = split(/\n/, run_git("rev-list --boundary --parents $local ^$parsed_sha1"));
- my %local_ancestry;
- foreach my $line (@local_ancestry) {
- if (my ($child, $parents) = $line =~ m/^-?([a-f0-9]+) ([a-f0-9 ]+)/) {
- foreach my $parent (split(' ', $parents)) {
- $local_ancestry{$parent} = $child;
- }
- } elsif (!$line =~ m/^([a-f0-9]+)/) {
- die "Unexpected output from git rev-list: $line";
- }
- }
- while ($parsed_sha1 ne $HEAD_sha1) {
- my $child = $local_ancestry{$parsed_sha1};
- if (!$child) {
- printf STDERR "Cannot find a path in history from remote commit to last commit\n";
- return error_non_fast_forward($remote);
- }
- push(@commit_pairs, [$parsed_sha1, $child]);
- $parsed_sha1 = $child;
- }
- } else {
- # No remote mediawiki revision. Export the whole
- # history (linearized with --first-parent)
- print STDERR "Warning: no common ancestor, pushing complete history\n";
- my $history = run_git("rev-list --first-parent --children $local");
- my @history = split('\n', $history);
- @history = @history[1..$#history];
- foreach my $line (reverse @history) {
- my @commit_info_split = split(/ |\n/, $line);
- push(@commit_pairs, \@commit_info_split);
- }
- }
-
- foreach my $commit_info_split (@commit_pairs) {
- my $sha1_child = @{$commit_info_split}[0];
- my $sha1_commit = @{$commit_info_split}[1];
- my $diff_infos = run_git("diff-tree -r --raw -z $sha1_child $sha1_commit");
- # TODO: we could detect rename, and encode them with a #redirect on the wiki.
- # TODO: for now, it's just a delete+add
- my @diff_info_list = split(/\0/, $diff_infos);
- # Keep the subject line of the commit message as mediawiki comment for the revision
- my $commit_msg = run_git("log --no-walk --format=\"%s\" $sha1_commit");
- chomp($commit_msg);
- # Push every blob
- while (@diff_info_list) {
- my $status;
- # git diff-tree -z gives an output like
- # <metadata>\0<filename1>\0
- # <metadata>\0<filename2>\0
- # and we've split on \0.
- my $info = shift(@diff_info_list);
- my $file = shift(@diff_info_list);
- ($mw_revision, $status) = mw_push_file($info, $file, $commit_msg, $mw_revision);
- if ($status eq "non-fast-forward") {
- # we may already have sent part of the
- # commit to MediaWiki, but it's too
- # late to cancel it. Stop the push in
- # the middle, but still give an
- # accurate error message.
- return error_non_fast_forward($remote);
- }
- if ($status ne "ok") {
- die("Unknown error from mw_push_file()");
- }
- }
- unless ($dumb_push) {
- run_git("notes --ref=$remotename/mediawiki add -f -m \"mediawiki_revision: $mw_revision\" $sha1_commit");
- run_git("update-ref -m \"Git-MediaWiki push\" refs/mediawiki/$remotename/master $sha1_commit $sha1_child");
- }
- }
-
- print STDOUT "ok $remote\n";
- return 1;
-}
-
-sub get_allowed_file_extensions {
- mw_connect_maybe();
-
- my $query = {
- action => 'query',
- meta => 'siteinfo',
- siprop => 'fileextensions'
- };
- my $result = $mediawiki->api($query);
- my @file_extensions= map $_->{ext},@{$result->{query}->{fileextensions}};
- my %hashFile = map {$_ => 1}@file_extensions;
-
- return %hashFile;
-}
-
-# In memory cache for MediaWiki namespace ids.
-my %namespace_id;
-
-# Namespaces whose id is cached in the configuration file
-# (to avoid duplicates)
-my %cached_mw_namespace_id;
-
-# Return MediaWiki id for a canonical namespace name.
-# Ex.: "File", "Project".
-sub get_mw_namespace_id {
- mw_connect_maybe();
- my $name = shift;
-
- if (!exists $namespace_id{$name}) {
- # Look at configuration file, if the record for that namespace is
- # already cached. Namespaces are stored in form:
- # "Name_of_namespace:Id_namespace", ex.: "File:6".
- my @temp = split(/[\n]/, run_git("config --get-all remote."
- . $remotename .".namespaceCache"));
- chomp(@temp);
- foreach my $ns (@temp) {
- my ($n, $id) = split(/:/, $ns);
- if ($id eq 'notANameSpace') {
- $namespace_id{$n} = {is_namespace => 0};
- } else {
- $namespace_id{$n} = {is_namespace => 1, id => $id};
- }
- $cached_mw_namespace_id{$n} = 1;
- }
- }
-
- if (!exists $namespace_id{$name}) {
- print STDERR "Namespace $name not found in cache, querying the wiki ...\n";
- # NS not found => get namespace id from MW and store it in
- # configuration file.
- my $query = {
- action => 'query',
- meta => 'siteinfo',
- siprop => 'namespaces'
- };
- my $result = $mediawiki->api($query);
-
- while (my ($id, $ns) = each(%{$result->{query}->{namespaces}})) {
- if (defined($ns->{id}) && defined($ns->{canonical})) {
- $namespace_id{$ns->{canonical}} = {is_namespace => 1, id => $ns->{id}};
- if ($ns->{'*'}) {
- # alias (e.g. french Fichier: as alias for canonical File:)
- $namespace_id{$ns->{'*'}} = {is_namespace => 1, id => $ns->{id}};
- }
- }
- }
- }
-
- my $ns = $namespace_id{$name};
- my $id;
-
- unless (defined $ns) {
- print STDERR "No such namespace $name on MediaWiki.\n";
- $ns = {is_namespace => 0};
- $namespace_id{$name} = $ns;
- }
-
- if ($ns->{is_namespace}) {
- $id = $ns->{id};
- }
-
-	# Store "notANameSpace" as a special value for nonexistent namespaces
- my $store_id = ($id || 'notANameSpace');
-
-	# Store explicitly requested namespaces on disk
- if (!exists $cached_mw_namespace_id{$name}) {
- run_git("config --add remote.". $remotename
- .".namespaceCache \"". $name .":". $store_id ."\"");
- $cached_mw_namespace_id{$name} = 1;
- }
- return $id;
-}
-
-sub get_mw_namespace_id_for_page {
- if (my ($namespace) = $_[0] =~ /^([^:]*):/) {
- return get_mw_namespace_id($namespace);
- } else {
- return;
- }
-}
--- /dev/null
+#! /usr/bin/perl
+
+# Copyright (C) 2011
+# Jérémie Nikaes <jeremie.nikaes@ensimag.imag.fr>
+# Arnaud Lacurie <arnaud.lacurie@ensimag.imag.fr>
+# Claire Fousse <claire.fousse@ensimag.imag.fr>
+# David Amouyal <david.amouyal@ensimag.imag.fr>
+# Matthieu Moy <matthieu.moy@grenoble-inp.fr>
+# License: GPL v2 or later
+
+# Gateway between Git and MediaWiki.
+# Documentation & bugtracker: https://github.com/moy/Git-Mediawiki/
+
+use strict;
+use MediaWiki::API;
+use DateTime::Format::ISO8601;
+
+# By default, use UTF-8 to communicate with Git and the user
+binmode STDERR, ":utf8";
+binmode STDOUT, ":utf8";
+
+use URI::Escape;
+use IPC::Open2;
+
+use warnings;
+
+# MediaWiki filenames can contain forward slashes. This constant decides
+# which string replaces them on the Git side.
+use constant SLASH_REPLACEMENT => "%2F";
+
+# It's not always possible to delete pages (may require some
+# privileges). Deleted pages are replaced with this content.
+use constant DELETED_CONTENT => "[[Category:Deleted]]\n";
+
+# It's not possible to create empty pages. New empty files in Git are
+# sent with this content instead.
+use constant EMPTY_CONTENT => "<!-- empty page -->\n";
+
+# Used to reflect file creation or deletion in a diff.
+use constant NULL_SHA1 => "0000000000000000000000000000000000000000";
+
+# Used on Git's side to reflect empty edit messages on the wiki
+use constant EMPTY_MESSAGE => '*Empty MediaWiki Message*';
+
+my $remotename = $ARGV[0];
+my $url = $ARGV[1];
+
+# Accept both a space-separated list and multiple config keys. Spaces
+# in page titles should be written as _ anyway, since the values are
+# split on spaces.
+my @tracked_pages = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".pages"));
+chomp(@tracked_pages);
+
+# Just like @tracked_pages, but for MediaWiki categories.
+my @tracked_categories = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".categories"));
+chomp(@tracked_categories);
+
+# Import media files on pull
+my $import_media = run_git("config --get --bool remote.". $remotename .".mediaimport");
+chomp($import_media);
+$import_media = ($import_media eq "true");
+
+# Export media files on push
+my $export_media = run_git("config --get --bool remote.". $remotename .".mediaexport");
+chomp($export_media);
+$export_media = !($export_media eq "false");
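
Note the asymmetric defaults above: remote.<name>.mediaimport is off
unless explicitly set to true, while remote.<name>.mediaexport is on
unless explicitly set to false. A sketch of how one might toggle them
(the remote name "wiki" is hypothetical):

	git config remote.wiki.mediaimport true
	git config remote.wiki.mediaexport false
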
+
+my $wiki_login = run_git("config --get remote.". $remotename .".mwLogin");
+# Note: mwPassword is discouraged. Use the credential system instead.
+my $wiki_passwd = run_git("config --get remote.". $remotename .".mwPassword");
+my $wiki_domain = run_git("config --get remote.". $remotename .".mwDomain");
+chomp($wiki_login);
+chomp($wiki_passwd);
+chomp($wiki_domain);
+
+# Import only last revisions (both for clone and fetch)
+my $shallow_import = run_git("config --get --bool remote.". $remotename .".shallow");
+chomp($shallow_import);
+$shallow_import = ($shallow_import eq "true");
+
+# Fetch (clone and pull) by revisions instead of by pages. This behavior
+# is more efficient when we have a wiki with lots of pages and we fetch
+# the revisions often enough that they concern only a few pages.
+# Possible values:
+# - by_rev: perform one query per new revision on the remote wiki
+# - by_page: query each tracked page for new revision
+my $fetch_strategy = run_git("config --get remote.$remotename.fetchStrategy");
+unless ($fetch_strategy) {
+ $fetch_strategy = run_git("config --get mediawiki.fetchStrategy");
+}
+chomp($fetch_strategy);
+unless ($fetch_strategy) {
+ $fetch_strategy = "by_page";
+}
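
For example, the revision-based strategy can be selected for a single
remote or for all MediaWiki remotes; the per-remote key wins, per the
lookup order above (the remote name "wiki" is hypothetical):

	git config remote.wiki.fetchStrategy by_rev
	git config mediawiki.fetchStrategy by_rev
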
+
+# Dumb push: don't update notes and mediawiki ref to reflect the last push.
+#
+# Configurable with mediawiki.dumbPush, or per-remote with
+# remote.<remotename>.dumbPush.
+#
+# This means the user will have to re-import the just-pushed
+# revisions. On the other hand, this means that the Git revisions
+# corresponding to MediaWiki revisions are all imported from the wiki,
+# regardless of whether they were initially created in Git or from the
+# web interface, hence all users will get the same history (i.e. if
+# the push from Git to MediaWiki loses some information, everybody
+# will get the history with information lost). If the import is
+# deterministic, this means everybody gets the same sha1 for each
+# MediaWiki revision.
+my $dumb_push = run_git("config --get --bool remote.$remotename.dumbPush");
+unless ($dumb_push) {
+ $dumb_push = run_git("config --get --bool mediawiki.dumbPush");
+}
+chomp($dumb_push);
+$dumb_push = ($dumb_push eq "true");
+
+my $wiki_name = $url;
+$wiki_name =~ s/[^\/]*:\/\///;
+# If URL is like http://user:password@example.com/, we clearly don't
+# want the password in $wiki_name. While we're there, also remove user
+# and the '@' sign, to avoid authors like MWUser@HTTPUser@host.com
+$wiki_name =~ s/^.*@//;
+
+# Command parser
+my $entry;
+my @cmd;
+while (<STDIN>) {
+ chomp;
+ @cmd = split(/ /);
+ if (defined($cmd[0])) {
+ # Line not blank
+ if ($cmd[0] eq "capabilities") {
+			die("Too many arguments for capabilities") if (defined($cmd[1]));
+ mw_capabilities();
+ } elsif ($cmd[0] eq "list") {
+			die("Too many arguments for list") if (defined($cmd[2]));
+ mw_list($cmd[1]);
+ } elsif ($cmd[0] eq "import") {
+ die("Invalid arguments for import") unless ($cmd[1] ne "" && !defined($cmd[2]));
+ mw_import($cmd[1]);
+ } elsif ($cmd[0] eq "option") {
+ die("Too many arguments for option") unless ($cmd[1] ne "" && $cmd[2] ne "" && !defined($cmd[3]));
+ mw_option($cmd[1],$cmd[2]);
+ } elsif ($cmd[0] eq "push") {
+ mw_push($cmd[1]);
+ } else {
+ print STDERR "Unknown command. Aborting...\n";
+ last;
+ }
+ } else {
+ # blank line: we should terminate
+ last;
+ }
+
+ BEGIN { $| = 1 } # flush STDOUT, to make sure the previous
+ # command is fully processed.
+}
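
The loop above speaks Git's remote-helper protocol: Git writes one
command per line to the helper's standard input, and a blank line ends
the session. As an illustrative sketch (not verbatim), a fetch might
drive the helper with:

	capabilities
	list
	import refs/heads/master
	<blank line>
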
+
+########################## Functions ##############################
+
+## credential API management (generic functions)
+
+sub credential_read {
+ my %credential;
+ my $reader = shift;
+ my $op = shift;
+ while (<$reader>) {
+ my ($key, $value) = /([^=]*)=(.*)/;
+ if (not defined $key) {
+ die "ERROR receiving response from git credential $op:\n$_\n";
+ }
+ $credential{$key} = $value;
+ }
+ return %credential;
+}
+
+sub credential_write {
+ my $credential = shift;
+ my $writer = shift;
+ # url overwrites other fields, so it must come first
+ print $writer "url=$credential->{url}\n" if exists $credential->{url};
+ while (my ($key, $value) = each(%$credential) ) {
+ if (length $value && $key ne 'url') {
+ print $writer "$key=$value\n";
+ }
+ }
+}
+
+sub credential_run {
+ my $op = shift;
+ my $credential = shift;
+ my $pid = open2(my $reader, my $writer, "git credential $op");
+ credential_write($credential, $writer);
+ print $writer "\n";
+ close($writer);
+
+ if ($op eq "fill") {
+ %$credential = credential_read($reader, $op);
+ } else {
+		if (my $line = <$reader>) {
+			die "ERROR while running git credential $op:\n$line";
+ }
+ }
+ close($reader);
+ waitpid($pid, 0);
+ my $child_exit_status = $? >> 8;
+ if ($child_exit_status != 0) {
+ die "'git credential $op' failed with code $child_exit_status.";
+ }
+}
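
To make the call sequence of these helpers concrete, here is a minimal
sketch of the fill/approve cycle that mw_connect_maybe() below performs
for real (the URL is hypothetical):

	my %credential = (url => "http://wiki.example.com/");
	credential_run("fill", \%credential);    # ask git-credential for username/password
	# ... attempt the login with $credential{username} and $credential{password} ...
	credential_run("approve", \%credential); # on success; "reject" on failure
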
+
+# MediaWiki API instance, created lazily.
+my $mediawiki;
+
+sub mw_connect_maybe {
+ if ($mediawiki) {
+ return;
+ }
+ $mediawiki = MediaWiki::API->new;
+ $mediawiki->{config}->{api_url} = "$url/api.php";
+ if ($wiki_login) {
+ my %credential = (url => $url);
+ $credential{username} = $wiki_login;
+ $credential{password} = $wiki_passwd;
+ credential_run("fill", \%credential);
+ my $request = {lgname => $credential{username},
+ lgpassword => $credential{password},
+ lgdomain => $wiki_domain};
+ if ($mediawiki->login($request)) {
+ credential_run("approve", \%credential);
+ print STDERR "Logged in mediawiki user \"$credential{username}\".\n";
+ } else {
+ print STDERR "Failed to log in mediawiki user \"$credential{username}\" on $url\n";
+ print STDERR " (error " .
+ $mediawiki->{error}->{code} . ': ' .
+ $mediawiki->{error}->{details} . ")\n";
+ credential_run("reject", \%credential);
+ exit 1;
+ }
+ }
+}
+
+## Functions for listing pages on the remote wiki
+sub get_mw_tracked_pages {
+ my $pages = shift;
+ get_mw_page_list(\@tracked_pages, $pages);
+}
+
+sub get_mw_page_list {
+ my $page_list = shift;
+ my $pages = shift;
+ my @some_pages = @$page_list;
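+	# The API accepts several titles joined with '|', so query
+	# the wiki in slices of roughly 50 pages at a time instead
+	# of all at once.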
+ while (@some_pages) {
+ my $last = 50;
+ if ($#some_pages < $last) {
+ $last = $#some_pages;
+ }
+ my @slice = @some_pages[0..$last];
+ get_mw_first_pages(\@slice, $pages);
+ @some_pages = @some_pages[51..$#some_pages];
+ }
+}
+
+sub get_mw_tracked_categories {
+ my $pages = shift;
+ foreach my $category (@tracked_categories) {
+ if (index($category, ':') < 0) {
+ # Mediawiki requires the Category
+ # prefix, but let's not force the user
+ # to specify it.
+ $category = "Category:" . $category;
+ }
+ my $mw_pages = $mediawiki->list( {
+ action => 'query',
+ list => 'categorymembers',
+ cmtitle => $category,
+ cmlimit => 'max' } )
+ || die $mediawiki->{error}->{code} . ': '
+ . $mediawiki->{error}->{details};
+ foreach my $page (@{$mw_pages}) {
+ $pages->{$page->{title}} = $page;
+ }
+ }
+}
+
+sub get_mw_all_pages {
+ my $pages = shift;
+ # No user-provided list, get the list of pages from the API.
+ my $mw_pages = $mediawiki->list({
+ action => 'query',
+ list => 'allpages',
+ aplimit => 'max'
+ });
+ if (!defined($mw_pages)) {
+ print STDERR "fatal: could not get the list of wiki pages.\n";
+ print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
+ print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
+ exit 1;
+ }
+ foreach my $page (@{$mw_pages}) {
+ $pages->{$page->{title}} = $page;
+ }
+}
+
+# Queries the wiki for a set of pages. Meant to be used within a loop
+# querying the wiki for slices of the page list.
+sub get_mw_first_pages {
+ my $some_pages = shift;
+ my @some_pages = @{$some_pages};
+
+ my $pages = shift;
+
+ # pattern 'page1|page2|...' required by the API
+ my $titles = join('|', @some_pages);
+
+ my $mw_pages = $mediawiki->api({
+ action => 'query',
+ titles => $titles,
+ });
+ if (!defined($mw_pages)) {
+ print STDERR "fatal: could not query the list of wiki pages.\n";
+ print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
+ print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
+ exit 1;
+ }
+ while (my ($id, $page) = each(%{$mw_pages->{query}->{pages}})) {
+ if ($id < 0) {
+ print STDERR "Warning: page $page->{title} not found on wiki\n";
+ } else {
+ $pages->{$page->{title}} = $page;
+ }
+ }
+}
+
+# Get the list of pages to be fetched according to configuration.
+sub get_mw_pages {
+ mw_connect_maybe();
+
+ print STDERR "Listing pages on remote wiki...\n";
+
+ my %pages; # hash on page titles to avoid duplicates
+ my $user_defined;
+ if (@tracked_pages) {
+ $user_defined = 1;
+		# The user provided a list of page titles, but we
+ # still need to query the API to get the page IDs.
+ get_mw_tracked_pages(\%pages);
+ }
+ if (@tracked_categories) {
+ $user_defined = 1;
+ get_mw_tracked_categories(\%pages);
+ }
+ if (!$user_defined) {
+ get_mw_all_pages(\%pages);
+ }
+ if ($import_media) {
+ print STDERR "Getting media files for selected pages...\n";
+ if ($user_defined) {
+ get_linked_mediafiles(\%pages);
+ } else {
+ get_all_mediafiles(\%pages);
+ }
+ }
+ print STDERR (scalar keys %pages) . " pages found.\n";
+ return %pages;
+}
+
+# usage: $out = run_git("command args");
+# $out = run_git("command args", "raw"); # don't interpret output as UTF-8.
+sub run_git {
+ my $args = shift;
+ my $encoding = (shift || "encoding(UTF-8)");
+ open(my $git, "-|:$encoding", "git " . $args);
+ my $res = do { local $/; <$git> };
+ close($git);
+
+ return $res;
+}
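
Two illustrative calls matching the usage comment above ($sha1 stands
for any blob name):

	my $head = run_git("rev-parse HEAD");              # output decoded as UTF-8
	my $blob = run_git("cat-file blob $sha1", "raw");  # bytes returned untouched
	# (callers usually chomp the result)
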
+
+
+sub get_all_mediafiles {
+ my $pages = shift;
+	# Fetch the list of all media file pages from the API.
+	# They live in a different namespace, and only one
+	# namespace can be queried at a time.
+ my $mw_pages = $mediawiki->list({
+ action => 'query',
+ list => 'allpages',
+ apnamespace => get_mw_namespace_id("File"),
+ aplimit => 'max'
+ });
+ if (!defined($mw_pages)) {
+ print STDERR "fatal: could not get the list of pages for media files.\n";
+ print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
+ print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
+ exit 1;
+ }
+ foreach my $page (@{$mw_pages}) {
+ $pages->{$page->{title}} = $page;
+ }
+}
+
+sub get_linked_mediafiles {
+ my $pages = shift;
+ my @titles = map $_->{title}, values(%{$pages});
+
+	# The query is split into small batches because of the MW API
+	# limit on the number of links returned (500 links max).
+ my $batch = 10;
+ while (@titles) {
+ if ($#titles < $batch) {
+ $batch = $#titles;
+ }
+ my @slice = @titles[0..$batch];
+
+ # pattern 'page1|page2|...' required by the API
+ my $mw_titles = join('|', @slice);
+
+		# Media files can be included in or linked from
+		# a page; get all of them.
+ my $query = {
+ action => 'query',
+ prop => 'links|images',
+ titles => $mw_titles,
+ plnamespace => get_mw_namespace_id("File"),
+ pllimit => 'max'
+ };
+ my $result = $mediawiki->api($query);
+
+ while (my ($id, $page) = each(%{$result->{query}->{pages}})) {
+ my @media_titles;
+ if (defined($page->{links})) {
+ my @link_titles = map $_->{title}, @{$page->{links}};
+ push(@media_titles, @link_titles);
+ }
+ if (defined($page->{images})) {
+ my @image_titles = map $_->{title}, @{$page->{images}};
+ push(@media_titles, @image_titles);
+ }
+ if (@media_titles) {
+ get_mw_page_list(\@media_titles, $pages);
+ }
+ }
+
+ @titles = @titles[($batch+1)..$#titles];
+ }
+}
+
+sub get_mw_mediafile_for_page_revision {
+ # Name of the file on Wiki, with the prefix.
+ my $filename = shift;
+ my $timestamp = shift;
+ my %mediafile;
+
+	# Check whether a media file with the given timestamp exists
+	# on MediaWiki. If so, download it.
+ my $query = {
+ action => 'query',
+ prop => 'imageinfo',
+ titles => "File:" . $filename,
+ iistart => $timestamp,
+ iiend => $timestamp,
+ iiprop => 'timestamp|archivename|url',
+ iilimit => 1
+ };
+ my $result = $mediawiki->api($query);
+
+ my ($fileid, $file) = each( %{$result->{query}->{pages}} );
+	# If not defined, it means there is no revision of the file for
+	# the given timestamp.
+ if (defined($file->{imageinfo})) {
+ $mediafile{title} = $filename;
+
+ my $fileinfo = pop(@{$file->{imageinfo}});
+ $mediafile{timestamp} = $fileinfo->{timestamp};
+ # Mediawiki::API's download function doesn't support https URLs
+ # and can't download old versions of files.
+ print STDERR "\tDownloading file $mediafile{title}, version $mediafile{timestamp}\n";
+ $mediafile{content} = download_mw_mediafile($fileinfo->{url});
+ }
+ return %mediafile;
+}
+
+sub download_mw_mediafile {
+ my $url = shift;
+
+ my $response = $mediawiki->{ua}->get($url);
+ if ($response->code == 200) {
+ return $response->decoded_content;
+ } else {
+		print STDERR "Error downloading media file from:\n";
+ print STDERR "URL: $url\n";
+ print STDERR "Server response: " . $response->code . " " . $response->message . "\n";
+ exit 1;
+ }
+}
+
+sub get_last_local_revision {
+ # Get note regarding last mediawiki revision
+ my $note = run_git("notes --ref=$remotename/mediawiki show refs/mediawiki/$remotename/master 2>/dev/null");
+ my @note_info = split(/ /, $note);
+
+ my $lastrevision_number;
+ if (!(defined($note_info[0]) && $note_info[0] eq "mediawiki_revision:")) {
+ print STDERR "No previous mediawiki revision found";
+ $lastrevision_number = 0;
+ } else {
+		# Notes are formatted as: mediawiki_revision: #number
+ $lastrevision_number = $note_info[1];
+ chomp($lastrevision_number);
+ print STDERR "Last local mediawiki revision found is $lastrevision_number";
+ }
+ return $lastrevision_number;
+}
+
+# Remember the timestamp corresponding to a revision id.
+my %basetimestamps;
+
+# Get the last remote revision without taking into account which pages
+# are tracked. This function makes a single request to the wiki, thus
+# avoiding a loop over all tracked pages. This is useful for the
+# fetch-by-rev option.
+sub get_last_global_remote_rev {
+ mw_connect_maybe();
+
+ my $query = {
+ action => 'query',
+ list => 'recentchanges',
+ prop => 'revisions',
+ rclimit => '1',
+ rcdir => 'older',
+ };
+ my $result = $mediawiki->api($query);
+ return $result->{query}->{recentchanges}[0]->{revid};
+}
+
+# Get the last remote revision concerning the tracked pages and the tracked
+# categories.
+sub get_last_remote_revision {
+ mw_connect_maybe();
+
+ my %pages_hash = get_mw_pages();
+ my @pages = values(%pages_hash);
+
+ my $max_rev_num = 0;
+
+ print STDERR "Getting last revision id on tracked pages...\n";
+
+ foreach my $page (@pages) {
+ my $id = $page->{pageid};
+
+ my $query = {
+ action => 'query',
+ prop => 'revisions',
+ rvprop => 'ids|timestamp',
+ pageids => $id,
+ };
+
+ my $result = $mediawiki->api($query);
+
+ my $lastrev = pop(@{$result->{query}->{pages}->{$id}->{revisions}});
+
+ $basetimestamps{$lastrev->{revid}} = $lastrev->{timestamp};
+
+ $max_rev_num = ($lastrev->{revid} > $max_rev_num ? $lastrev->{revid} : $max_rev_num);
+ }
+
+ print STDERR "Last remote revision found is $max_rev_num.\n";
+ return $max_rev_num;
+}
+
+# Clean content before sending it to MediaWiki
+sub mediawiki_clean {
+ my $string = shift;
+ my $page_created = shift;
+	# MediaWiki does not allow trailing whitespace at the end of a page;
+	# every page ends with a single \n. This function right-trims the
+	# string and appends a \n to follow that rule.
+ $string =~ s/\s+$//;
+ if ($string eq "" && $page_created) {
+ # Creating empty pages is forbidden.
+ $string = EMPTY_CONTENT;
+ }
+ return $string."\n";
+}
+
+# Filter applied on MediaWiki data before adding them to Git
+sub mediawiki_smudge {
+ my $string = shift;
+ if ($string eq EMPTY_CONTENT) {
+ $string = "";
+ }
+	# This \n is important, due to MediaWiki's way of handling the end of files.
+ return $string."\n";
+}
+
+sub mediawiki_clean_filename {
+ my $filename = shift;
+ $filename =~ s/@{[SLASH_REPLACEMENT]}/\//g;
+ # [, ], |, {, and } are forbidden by MediaWiki, even URL-encoded.
+ # Do a variant of URL-encoding, i.e. looks like URL-encoding,
+ # but with _ added to prevent MediaWiki from thinking this is
+ # an actual special character.
+ $filename =~ s/[\[\]\{\}\|]/sprintf("_%%_%x", ord($&))/ge;
+ # If we use the uri escape before
+ # we should unescape here, before anything
+
+ return $filename;
+}
+
+sub mediawiki_smudge_filename {
+ my $filename = shift;
+ $filename =~ s/\//@{[SLASH_REPLACEMENT]}/g;
+ $filename =~ s/ /_/g;
+ # Decode forbidden characters encoded in mediawiki_clean_filename
+ $filename =~ s/_%_([0-9a-fA-F][0-9a-fA-F])/sprintf("%c", hex($1))/ge;
+ return $filename;
+}
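
A sketch of the mapping with hypothetical names (note that '_' survives
the round-trip, since MediaWiki treats it as a space in titles):

	mediawiki_smudge_filename("Foo/Bar baz");   # -> "Foo%2FBar_baz" (Git-side name)
	mediawiki_clean_filename("Foo%2FBar_baz");  # -> "Foo/Bar_baz"   (wiki title)
	mediawiki_clean_filename("a|b");            # -> "a_%_7cb"       ('|' is forbidden)
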
+
+sub literal_data {
+ my ($content) = @_;
+ print STDOUT "data ", bytes::length($content), "\n", $content;
+}
+
+sub literal_data_raw {
+ # Output possibly binary content.
+ my ($content) = @_;
+ # Avoid confusion between size in bytes and in characters
+ utf8::downgrade($content);
+ binmode STDOUT, ":raw";
+ print STDOUT "data ", bytes::length($content), "\n", $content;
+ binmode STDOUT, ":utf8";
+}
+
+sub mw_capabilities {
+ # Revisions are imported to the private namespace
+ # refs/mediawiki/$remotename/ by the helper and fetched into
+ # refs/remotes/$remotename later by fetch.
+ print STDOUT "refspec refs/heads/*:refs/mediawiki/$remotename/*\n";
+ print STDOUT "import\n";
+ print STDOUT "list\n";
+ print STDOUT "push\n";
+ print STDOUT "\n";
+}
+
+sub mw_list {
+	# MediaWiki does not have branches; we expose a single branch,
+	# arbitrarily called master, with HEAD pointing to it.
+ print STDOUT "? refs/heads/master\n";
+ print STDOUT "\@refs/heads/master HEAD\n";
+ print STDOUT "\n";
+}
+
+sub mw_option {
+ print STDERR "remote-helper command 'option $_[0]' not yet implemented\n";
+ print STDOUT "unsupported\n";
+}
+
+sub fetch_mw_revisions_for_page {
+ my $page = shift;
+ my $id = shift;
+ my $fetch_from = shift;
+ my @page_revs = ();
+ my $query = {
+ action => 'query',
+ prop => 'revisions',
+ rvprop => 'ids',
+ rvdir => 'newer',
+ rvstartid => $fetch_from,
+ rvlimit => 500,
+ pageids => $id,
+ };
+
+ my $revnum = 0;
+	# Get 500 revisions at a time due to the MediaWiki API limit
+ while (1) {
+ my $result = $mediawiki->api($query);
+
+ # Parse each of those 500 revisions
+ foreach my $revision (@{$result->{query}->{pages}->{$id}->{revisions}}) {
+ my $page_rev_ids;
+ $page_rev_ids->{pageid} = $page->{pageid};
+ $page_rev_ids->{revid} = $revision->{revid};
+ push(@page_revs, $page_rev_ids);
+ $revnum++;
+ }
+ last unless $result->{'query-continue'};
+ $query->{rvstartid} = $result->{'query-continue'}->{revisions}->{rvstartid};
+ }
+ if ($shallow_import && @page_revs) {
+ print STDERR " Found 1 revision (shallow import).\n";
+ @page_revs = sort {$b->{revid} <=> $a->{revid}} (@page_revs);
+ return $page_revs[0];
+ }
+ print STDERR " Found ", $revnum, " revision(s).\n";
+ return @page_revs;
+}
+
+sub fetch_mw_revisions {
+ my $pages = shift; my @pages = @{$pages};
+ my $fetch_from = shift;
+
+ my @revisions = ();
+ my $n = 1;
+ foreach my $page (@pages) {
+ my $id = $page->{pageid};
+
+ print STDERR "page $n/", scalar(@pages), ": ". $page->{title} ."\n";
+ $n++;
+ my @page_revs = fetch_mw_revisions_for_page($page, $id, $fetch_from);
+ @revisions = (@page_revs, @revisions);
+ }
+
+ return ($n, @revisions);
+}
+
+sub fe_escape_path {
+ my $path = shift;
+ $path =~ s/\\/\\\\/g;
+ $path =~ s/"/\\"/g;
+ $path =~ s/\n/\\n/g;
+ return '"' . $path . '"';
+}
+
+sub import_file_revision {
+ my $commit = shift;
+ my %commit = %{$commit};
+ my $full_import = shift;
+ my $n = shift;
+ my $mediafile = shift;
+ my %mediafile;
+ if ($mediafile) {
+ %mediafile = %{$mediafile};
+ }
+
+ my $title = $commit{title};
+ my $comment = $commit{comment};
+ my $content = $commit{content};
+ my $author = $commit{author};
+ my $date = $commit{date};
+
+ print STDOUT "commit refs/mediawiki/$remotename/master\n";
+ print STDOUT "mark :$n\n";
+ print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
+ literal_data($comment);
+
+ # If it's not a clone, we need to know where to start from
+ if (!$full_import && $n == 1) {
+ print STDOUT "from refs/mediawiki/$remotename/master^0\n";
+ }
+ if ($content ne DELETED_CONTENT) {
+ print STDOUT "M 644 inline " .
+ fe_escape_path($title . ".mw") . "\n";
+ literal_data($content);
+ if (%mediafile) {
+ print STDOUT "M 644 inline "
+ . fe_escape_path($mediafile{title}) . "\n";
+ literal_data_raw($mediafile{content});
+ }
+ print STDOUT "\n\n";
+ } else {
+ print STDOUT "D " . fe_escape_path($title . ".mw") . "\n";
+ }
+
+ # mediawiki revision number in the git note
+ if ($full_import && $n == 1) {
+ print STDOUT "reset refs/notes/$remotename/mediawiki\n";
+ }
+ print STDOUT "commit refs/notes/$remotename/mediawiki\n";
+ print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
+ literal_data("Note added by git-mediawiki during import");
+ if (!$full_import && $n == 1) {
+ print STDOUT "from refs/notes/$remotename/mediawiki^0\n";
+ }
+ print STDOUT "N inline :$n\n";
+ literal_data("mediawiki_revision: " . $commit{mw_revision});
+ print STDOUT "\n\n";
+}
+
+# Parse a sequence of
+# <cmd> <arg1>
+# <cmd> <arg2>
+# \n
+# (like a batch of import or push statements)
+sub get_more_refs {
+ my $cmd = shift;
+ my @refs;
+ while (1) {
+ my $line = <STDIN>;
+ if ($line =~ m/^$cmd (.*)$/) {
+ push(@refs, $1);
+ } elsif ($line eq "\n") {
+ return @refs;
+ } else {
+			die("Invalid command in a '$cmd' batch: " . $line);
+ }
+ }
+}
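
For instance, a push of one branch arrives as a short batch: the first
line is consumed by the command loop, get_more_refs("push") gathers any
further ones, and a blank line ends the batch (an illustrative sketch):

	push refs/heads/master:refs/heads/master
	<blank line>
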
+
+sub mw_import {
+ # multiple import commands can follow each other.
+ my @refs = (shift, get_more_refs("import"));
+ foreach my $ref (@refs) {
+ mw_import_ref($ref);
+ }
+ print STDOUT "done\n";
+}
+
+sub mw_import_ref {
+ my $ref = shift;
+ # The remote helper will call "import HEAD" and
+ # "import refs/heads/master".
+ # Since HEAD is a symbolic ref to master (by convention,
+	# and according to the output of the "list" command we gave),
+ # we don't need to do anything in this case.
+ if ($ref eq "HEAD") {
+ return;
+ }
+
+ mw_connect_maybe();
+
+ print STDERR "Searching revisions...\n";
+ my $last_local = get_last_local_revision();
+ my $fetch_from = $last_local + 1;
+ if ($fetch_from == 1) {
+ print STDERR ", fetching from beginning.\n";
+ } else {
+ print STDERR ", fetching from here.\n";
+ }
+
+ my $n = 0;
+ if ($fetch_strategy eq "by_rev") {
+ print STDERR "Fetching & writing export data by revs...\n";
+ $n = mw_import_ref_by_revs($fetch_from);
+ } elsif ($fetch_strategy eq "by_page") {
+ print STDERR "Fetching & writing export data by pages...\n";
+ $n = mw_import_ref_by_pages($fetch_from);
+ } else {
+ print STDERR "fatal: invalid fetch strategy \"$fetch_strategy\".\n";
+ print STDERR "Check your configuration variables remote.$remotename.fetchStrategy and mediawiki.fetchStrategy\n";
+ exit 1;
+ }
+
+ if ($fetch_from == 1 && $n == 0) {
+ print STDERR "You appear to have cloned an empty MediaWiki.\n";
+ # Something has to be done remote-helper side. If nothing is done, an error is
+		# thrown saying that HEAD is referring to unknown object 0000000000000000000
+ # and the clone fails.
+ }
+}
+
+sub mw_import_ref_by_pages {
+
+ my $fetch_from = shift;
+ my %pages_hash = get_mw_pages();
+ my @pages = values(%pages_hash);
+
+ my ($n, @revisions) = fetch_mw_revisions(\@pages, $fetch_from);
+
+ @revisions = sort {$a->{revid} <=> $b->{revid}} @revisions;
+ my @revision_ids = map $_->{revid}, @revisions;
+
+ return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
+}
+
+sub mw_import_ref_by_revs {
+
+ my $fetch_from = shift;
+ my %pages_hash = get_mw_pages();
+
+ my $last_remote = get_last_global_remote_rev();
+ my @revision_ids = $fetch_from..$last_remote;
+ return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
+}
+
+# Import revisions given in second argument (array of integers).
+# Only pages appearing in the third argument (hash indexed by page titles)
+# will be imported.
+sub mw_import_revids {
+ my $fetch_from = shift;
+ my $revision_ids = shift;
+ my $pages = shift;
+
+ my $n = 0;
+ my $n_actual = 0;
+	my $last_timestamp = 0; # Placeholder in case $rev->timestamp is undefined
+
+ foreach my $pagerevid (@$revision_ids) {
+ # Count page even if we skip it, since we display
+ # $n/$total and $total includes skipped pages.
+ $n++;
+
+ # fetch the content of the pages
+ my $query = {
+ action => 'query',
+ prop => 'revisions',
+ rvprop => 'content|timestamp|comment|user|ids',
+ revids => $pagerevid,
+ };
+
+ my $result = $mediawiki->api($query);
+
+ if (!$result) {
+ die "Failed to retrieve modified page for revision $pagerevid";
+ }
+
+ if (defined($result->{query}->{badrevids}->{$pagerevid})) {
+ # The revision id does not exist on the remote wiki.
+ next;
+ }
+
+ if (!defined($result->{query}->{pages})) {
+ die "Invalid revision $pagerevid.";
+ }
+
+ my @result_pages = values(%{$result->{query}->{pages}});
+ my $result_page = $result_pages[0];
+ my $rev = $result_pages[0]->{revisions}->[0];
+
+ my $page_title = $result_page->{title};
+
+ if (!exists($pages->{$page_title})) {
+ print STDERR "$n/", scalar(@$revision_ids),
+ ": Skipping revision #$rev->{revid} of $page_title\n";
+ next;
+ }
+
+ $n_actual++;
+
+ my %commit;
+ $commit{author} = $rev->{user} || 'Anonymous';
+ $commit{comment} = $rev->{comment} || EMPTY_MESSAGE;
+ $commit{title} = mediawiki_smudge_filename($page_title);
+ $commit{mw_revision} = $rev->{revid};
+ $commit{content} = mediawiki_smudge($rev->{'*'});
+
+ if (!defined($rev->{timestamp})) {
+ $last_timestamp++;
+ } else {
+ $last_timestamp = $rev->{timestamp};
+ }
+ $commit{date} = DateTime::Format::ISO8601->parse_datetime($last_timestamp);
+
+		# Differentiate between regular pages and media files.
+ my ($namespace, $filename) = $page_title =~ /^([^:]*):(.*)$/;
+ my %mediafile;
+ if ($namespace) {
+ my $id = get_mw_namespace_id($namespace);
+ if ($id && $id == get_mw_namespace_id("File")) {
+ %mediafile = get_mw_mediafile_for_page_revision($filename, $rev->{timestamp});
+ }
+ }
+		# If this is a revision of the media page for a new version
+		# of a file, do one common commit for both the file and the
+		# media page. Otherwise, commit only the page.
+ print STDERR "$n/", scalar(@$revision_ids), ": Revision #$rev->{revid} of $commit{title}\n";
+ import_file_revision(\%commit, ($fetch_from == 1), $n_actual, \%mediafile);
+ }
+
+ return $n_actual;
+}
+
+sub error_non_fast_forward {
+ my $advice = run_git("config --bool advice.pushNonFastForward");
+ chomp($advice);
+ if ($advice ne "false") {
+ # Native git-push would show this after the summary.
+ # We can't ask it to display it cleanly, so print it
+ # ourselves before.
+ print STDERR "To prevent you from losing history, non-fast-forward updates were rejected\n";
+ print STDERR "Merge the remote changes (e.g. 'git pull') before pushing again. See the\n";
+ print STDERR "'Note about fast-forwards' section of 'git push --help' for details.\n";
+ }
+ print STDOUT "error $_[0] \"non-fast-forward\"\n";
+ return 0;
+}
+
+sub mw_upload_file {
+ my $complete_file_name = shift;
+ my $new_sha1 = shift;
+ my $extension = shift;
+ my $file_deleted = shift;
+ my $summary = shift;
+ my $newrevid;
+ my $path = "File:" . $complete_file_name;
+ my %hashFiles = get_allowed_file_extensions();
+ if (!exists($hashFiles{$extension})) {
+ print STDERR "$complete_file_name is not a permitted file on this wiki.\n";
+ print STDERR "Check the configuration of file uploads in your mediawiki.\n";
+ return $newrevid;
+ }
+	# Deleting and uploading a file requires a privileged user
+ if ($file_deleted) {
+ mw_connect_maybe();
+ my $query = {
+ action => 'delete',
+ title => $path,
+ reason => $summary
+ };
+ if (!$mediawiki->edit($query)) {
+ print STDERR "Failed to delete file on remote wiki\n";
+ print STDERR "Check your permissions on the remote site. Error code:\n";
+ print STDERR $mediawiki->{error}->{code} . ':' . $mediawiki->{error}->{details};
+ exit 1;
+ }
+ } else {
+ # Don't let perl try to interpret file content as UTF-8 => use "raw"
+ my $content = run_git("cat-file blob $new_sha1", "raw");
+ if ($content ne "") {
+ mw_connect_maybe();
+ $mediawiki->{config}->{upload_url} =
+ "$url/index.php/Special:Upload";
+ $mediawiki->edit({
+ action => 'upload',
+ filename => $complete_file_name,
+ comment => $summary,
+ file => [undef,
+ $complete_file_name,
+ Content => $content],
+ ignorewarnings => 1,
+ }, {
+ skip_encoding => 1
+ } ) || die $mediawiki->{error}->{code} . ':'
+ . $mediawiki->{error}->{details};
+ my $last_file_page = $mediawiki->get_page({title => $path});
+ $newrevid = $last_file_page->{revid};
+ print STDERR "Pushed file: $new_sha1 - $complete_file_name.\n";
+ } else {
+ print STDERR "Empty file $complete_file_name not pushed.\n";
+ }
+ }
+ return $newrevid;
+}
+
+sub mw_push_file {
+ my $diff_info = shift;
+ # $diff_info contains a string in this format:
+ # 100644 100644 <sha1_of_blob_before_commit> <sha1_of_blob_now> <status>
+ my @diff_info_split = split(/[ \t]/, $diff_info);
+
+ # Filename, including .mw extension
+ my $complete_file_name = shift;
+ # Commit message
+ my $summary = shift;
+ # MediaWiki revision number. Keep the previous one by default,
+ # in case there's no edit to perform.
+ my $oldrevid = shift;
+ my $newrevid;
+
+ if ($summary eq EMPTY_MESSAGE) {
+ $summary = '';
+ }
+
+ my $new_sha1 = $diff_info_split[3];
+ my $old_sha1 = $diff_info_split[2];
+ my $page_created = ($old_sha1 eq NULL_SHA1);
+ my $page_deleted = ($new_sha1 eq NULL_SHA1);
+ $complete_file_name = mediawiki_clean_filename($complete_file_name);
+
+ my ($title, $extension) = $complete_file_name =~ /^(.*)\.([^\.]*)$/;
+ if (!defined($extension)) {
+ $extension = "";
+ }
+ if ($extension eq "mw") {
+ my $ns = get_mw_namespace_id_for_page($complete_file_name);
+ if ($ns && $ns == get_mw_namespace_id("File") && (!$export_media)) {
+ print STDERR "Ignoring media file related page: $complete_file_name\n";
+ return ($oldrevid, "ok");
+ }
+ my $file_content;
+ if ($page_deleted) {
+ # Deleting a page usually requires
+			# special privileges. A common
+ # convention is to replace the page
+ # with this content instead:
+ $file_content = DELETED_CONTENT;
+ } else {
+ $file_content = run_git("cat-file blob $new_sha1");
+ }
+
+ mw_connect_maybe();
+
+ my $result = $mediawiki->edit( {
+ action => 'edit',
+ summary => $summary,
+ title => $title,
+ basetimestamp => $basetimestamps{$oldrevid},
+ text => mediawiki_clean($file_content, $page_created),
+ }, {
+		skip_encoding => 1 # Helps with names with accented characters
+ });
+ if (!$result) {
+ if ($mediawiki->{error}->{code} == 3) {
+ # edit conflicts, considered as non-fast-forward
+ print STDERR 'Warning: Error ' .
+ $mediawiki->{error}->{code} .
+				' from mediawiki: ' . $mediawiki->{error}->{details} .
+ ".\n";
+ return ($oldrevid, "non-fast-forward");
+ } else {
+ # Other errors. Shouldn't happen => just die()
+ die 'Fatal: Error ' .
+ $mediawiki->{error}->{code} .
+				' from mediawiki: ' . $mediawiki->{error}->{details};
+ }
+ }
+ $newrevid = $result->{edit}->{newrevid};
+ print STDERR "Pushed file: $new_sha1 - $title\n";
+ } elsif ($export_media) {
+ $newrevid = mw_upload_file($complete_file_name, $new_sha1,
+ $extension, $page_deleted,
+ $summary);
+ } else {
+ print STDERR "Ignoring media file $title\n";
+ }
+ $newrevid = ($newrevid or $oldrevid);
+ return ($newrevid, "ok");
+}
+
+sub mw_push {
+ # multiple push statements can follow each other
+ my @refsspecs = (shift, get_more_refs("push"));
+ my $pushed;
+ for my $refspec (@refsspecs) {
+ my ($force, $local, $remote) = $refspec =~ /^(\+)?([^:]*):([^:]*)$/
+ or die("Invalid refspec for push. Expected <src>:<dst> or +<src>:<dst>");
+ if ($force) {
+ print STDERR "Warning: forced push not allowed on a MediaWiki.\n";
+ }
+ if ($local eq "") {
+ print STDERR "Cannot delete remote branch on a MediaWiki\n";
+ print STDOUT "error $remote cannot delete\n";
+ next;
+ }
+ if ($remote ne "refs/heads/master") {
+ print STDERR "Only push to the branch 'master' is supported on a MediaWiki\n";
+ print STDOUT "error $remote only master allowed\n";
+ next;
+ }
+ if (mw_push_revision($local, $remote)) {
+ $pushed = 1;
+ }
+ }
+
+ # Notify Git that the push is done
+ print STDOUT "\n";
+
+ if ($pushed && $dumb_push) {
+ print STDERR "Just pushed some revisions to MediaWiki.\n";
+ print STDERR "The pushed revisions now have to be re-imported, and your current branch\n";
+ print STDERR "needs to be updated with these re-imported commits. You can do this with\n";
+ print STDERR "\n";
+ print STDERR " git pull --rebase\n";
+ print STDERR "\n";
+ }
+}
+
+sub mw_push_revision {
+ my $local = shift;
+ my $remote = shift; # actually, this has to be "refs/heads/master" at this point.
+ my $last_local_revid = get_last_local_revision();
+ print STDERR ".\n"; # Finish sentence started by get_last_local_revision()
+ my $last_remote_revid = get_last_remote_revision();
+ my $mw_revision = $last_remote_revid;
+
+ # Get sha1 of commit pointed by local HEAD
+ my $HEAD_sha1 = run_git("rev-parse $local 2>/dev/null"); chomp($HEAD_sha1);
+ # Get sha1 of commit pointed by remotes/$remotename/master
+ my $remoteorigin_sha1 = run_git("rev-parse refs/remotes/$remotename/master 2>/dev/null");
+ chomp($remoteorigin_sha1);
+
+ if ($last_local_revid > 0 &&
+ $last_local_revid < $last_remote_revid) {
+ return error_non_fast_forward($remote);
+ }
+
+ if ($HEAD_sha1 eq $remoteorigin_sha1) {
+ # nothing to push
+ return 0;
+ }
+
+ # Get every commit in between HEAD and refs/remotes/origin/master,
+ # including HEAD and refs/remotes/origin/master
+ my @commit_pairs = ();
+ if ($last_local_revid > 0) {
+ my $parsed_sha1 = $remoteorigin_sha1;
+ # Find a path from last MediaWiki commit to pushed commit
+ print STDERR "Computing path from local to remote ...\n";
+ my @local_ancestry = split(/\n/, run_git("rev-list --boundary --parents $local ^$parsed_sha1"));
+ my %local_ancestry;
+ foreach my $line (@local_ancestry) {
+ if (my ($child, $parents) = $line =~ m/^-?([a-f0-9]+) ([a-f0-9 ]+)/) {
+ foreach my $parent (split(' ', $parents)) {
+ $local_ancestry{$parent} = $child;
+ }
+		} elsif ($line !~ m/^([a-f0-9]+)/) {
+ die "Unexpected output from git rev-list: $line";
+ }
+ }
+ while ($parsed_sha1 ne $HEAD_sha1) {
+ my $child = $local_ancestry{$parsed_sha1};
+ if (!$child) {
+				print STDERR "Cannot find a path in history from remote commit to last commit\n";
+ return error_non_fast_forward($remote);
+ }
+ push(@commit_pairs, [$parsed_sha1, $child]);
+ $parsed_sha1 = $child;
+ }
+ } else {
+ # No remote mediawiki revision. Export the whole
+ # history (linearized with --first-parent)
+ print STDERR "Warning: no common ancestor, pushing complete history\n";
+ my $history = run_git("rev-list --first-parent --children $local");
+ my @history = split('\n', $history);
+ @history = @history[1..$#history];
+ foreach my $line (reverse @history) {
+ my @commit_info_split = split(/ |\n/, $line);
+ push(@commit_pairs, \@commit_info_split);
+ }
+ }
+
+ foreach my $commit_info_split (@commit_pairs) {
+ my $sha1_child = @{$commit_info_split}[0];
+ my $sha1_commit = @{$commit_info_split}[1];
+ my $diff_infos = run_git("diff-tree -r --raw -z $sha1_child $sha1_commit");
+		# TODO: we could detect renames, and encode them with a #redirect on the wiki.
+ # TODO: for now, it's just a delete+add
+ my @diff_info_list = split(/\0/, $diff_infos);
+ # Keep the subject line of the commit message as mediawiki comment for the revision
+ my $commit_msg = run_git("log --no-walk --format=\"%s\" $sha1_commit");
+ chomp($commit_msg);
+ # Push every blob
+ while (@diff_info_list) {
+ my $status;
+ # git diff-tree -z gives an output like
+ # <metadata>\0<filename1>\0
+ # <metadata>\0<filename2>\0
+ # and we've split on \0.
+ my $info = shift(@diff_info_list);
+ my $file = shift(@diff_info_list);
+ ($mw_revision, $status) = mw_push_file($info, $file, $commit_msg, $mw_revision);
+ if ($status eq "non-fast-forward") {
+ # we may already have sent part of the
+ # commit to MediaWiki, but it's too
+ # late to cancel it. Stop the push in
+ # the middle, but still give an
+ # accurate error message.
+ return error_non_fast_forward($remote);
+ }
+ if ($status ne "ok") {
+ die("Unknown error from mw_push_file()");
+ }
+ }
+ unless ($dumb_push) {
+ run_git("notes --ref=$remotename/mediawiki add -f -m \"mediawiki_revision: $mw_revision\" $sha1_commit");
+ run_git("update-ref -m \"Git-MediaWiki push\" refs/mediawiki/$remotename/master $sha1_commit $sha1_child");
+ }
+ }
+
+ print STDOUT "ok $remote\n";
+ return 1;
+}
+
+sub get_allowed_file_extensions {
+ mw_connect_maybe();
+
+ my $query = {
+ action => 'query',
+ meta => 'siteinfo',
+ siprop => 'fileextensions'
+ };
+ my $result = $mediawiki->api($query);
+	my @file_extensions = map { $_->{ext} } @{$result->{query}->{fileextensions}};
+	my %hashFile = map { $_ => 1 } @file_extensions;
+
+ return %hashFile;
+}
+
+# In memory cache for MediaWiki namespace ids.
+my %namespace_id;
+
+# Namespaces whose id is cached in the configuration file
+# (to avoid duplicates)
+my %cached_mw_namespace_id;
+
+# Return MediaWiki id for a canonical namespace name.
+# Ex.: "File", "Project".
+sub get_mw_namespace_id {
+ mw_connect_maybe();
+ my $name = shift;
+
+ if (!exists $namespace_id{$name}) {
+		# Look in the configuration file to see whether the record for
+		# that namespace is already cached. Namespaces are stored in the
+		# form "Name_of_namespace:Id_namespace", e.g. "File:6".
+ my @temp = split(/[\n]/, run_git("config --get-all remote."
+ . $remotename .".namespaceCache"));
+ chomp(@temp);
+ foreach my $ns (@temp) {
+ my ($n, $id) = split(/:/, $ns);
+ if ($id eq 'notANameSpace') {
+ $namespace_id{$n} = {is_namespace => 0};
+ } else {
+ $namespace_id{$n} = {is_namespace => 1, id => $id};
+ }
+ $cached_mw_namespace_id{$n} = 1;
+ }
+ }
+
+ if (!exists $namespace_id{$name}) {
+ print STDERR "Namespace $name not found in cache, querying the wiki ...\n";
+ # NS not found => get namespace id from MW and store it in
+ # configuration file.
+ my $query = {
+ action => 'query',
+ meta => 'siteinfo',
+ siprop => 'namespaces'
+ };
+ my $result = $mediawiki->api($query);
+
+ while (my ($id, $ns) = each(%{$result->{query}->{namespaces}})) {
+ if (defined($ns->{id}) && defined($ns->{canonical})) {
+ $namespace_id{$ns->{canonical}} = {is_namespace => 1, id => $ns->{id}};
+ if ($ns->{'*'}) {
+ # alias (e.g. french Fichier: as alias for canonical File:)
+ $namespace_id{$ns->{'*'}} = {is_namespace => 1, id => $ns->{id}};
+ }
+ }
+ }
+ }
+
+ my $ns = $namespace_id{$name};
+ my $id;
+
+ unless (defined $ns) {
+ print STDERR "No such namespace $name on MediaWiki.\n";
+ $ns = {is_namespace => 0};
+ $namespace_id{$name} = $ns;
+ }
+
+ if ($ns->{is_namespace}) {
+ $id = $ns->{id};
+ }
+
+	# Store "notANameSpace" as a special value for nonexistent namespaces
+ my $store_id = ($id || 'notANameSpace');
+
+	# Store explicitly requested namespaces on disk
+ if (!exists $cached_mw_namespace_id{$name}) {
+ run_git("config --add remote.". $remotename
+ .".namespaceCache \"". $name .":". $store_id ."\"");
+ $cached_mw_namespace_id{$name} = 1;
+ }
+ return $id;
+}
+
+sub get_mw_namespace_id_for_page {
+ if (my ($namespace) = $_[0] =~ /^([^:]*):/) {
+ return get_mw_namespace_id($namespace);
+ } else {
+ return;
+ }
+}
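
After a few runs, the on-disk cache could contain entries like the
following ("File:6" is the example from the comment above; "Talk:1" is
MediaWiki's conventional Talk namespace, and "Foo" a hypothetical page
prefix that turned out not to be a namespace):

	$ git config --get-all remote.origin.namespaceCache
	File:6
	Talk:1
	Foo:notANameSpace
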
doc: $(GIT_SUBTREE_DOC)
install: $(GIT_SUBTREE)
- $(INSTALL) -m 755 $(GIT_SUBTREE) $(libexecdir)
+ $(INSTALL) -m 755 $(GIT_SUBTREE) $(DESTDIR)$(libexecdir)
install-doc: install-man
install-man: $(GIT_SUBTREE_DOC)
- $(INSTALL) -m 644 $^ $(man1dir)
+ $(INSTALL) -d -m 755 $(DESTDIR)$(man1dir)
+ $(INSTALL) -m 644 $^ $(DESTDIR)$(man1dir)
$(GIT_SUBTREE_DOC): $(GIT_SUBTREE_XML)
xmlto -m $(MANPAGE_NORMAL_XSL) man $^
fi
OPTS_SPEC="\
git subtree add --prefix=<prefix> <commit>
+git subtree add --prefix=<prefix> <repository> <commit>
git subtree merge --prefix=<prefix> <commit>
git subtree pull --prefix=<prefix> <repository> <refspec...>
git subtree push --prefix=<prefix> <repository> <refspec...>
# We're going to set some environment vars here, so
# do it in a subshell to get rid of them safely later
debug copy_commit "{$1}" "{$2}" "{$3}"
- git log -1 --pretty=format:'%an%n%ae%n%ad%n%cn%n%ce%n%cd%n%s%n%n%b' "$1" |
+ git log -1 --pretty=format:'%an%n%ae%n%ad%n%cn%n%ce%n%cd%n%B' "$1" |
(
read GIT_AUTHOR_NAME
read GIT_AUTHOR_EMAIL
ensure_clean
if [ $# -eq 1 ]; then
- "cmd_add_commit" "$@"
+ git rev-parse -q --verify "$1^{commit}" >/dev/null ||
+ die "'$1' does not refer to a commit"
+
+ "cmd_add_commit" "$@"
elif [ $# -eq 2 ]; then
- "cmd_add_repository" "$@"
+ # Technically we could accept a refspec here but we're
+ # just going to turn around and add FETCH_HEAD under the
+ # specified directory. Allowing a refspec might be
+ # misleading because we won't do anything with any other
+ # branches fetched via the refspec.
+ git rev-parse -q --verify "$2^{commit}" >/dev/null ||
+ die "'$2' does not refer to a commit"
+
+ "cmd_add_repository" "$@"
else
say "error: parameters were '$@'"
- die "Provide either a refspec or a repository and refspec."
+ die "Provide either a commit or a repository and commit."
fi
}
SYNOPSIS
--------
[verse]
-'git subtree' add -P <prefix> <commit>
+'git subtree' add -P <prefix> <refspec>
+'git subtree' add -P <prefix> <repository> <refspec>
'git subtree' pull -P <prefix> <repository> <refspec...>
'git subtree' push -P <prefix> <repository> <refspec...>
'git subtree' merge -P <prefix> <commit>
git log --pretty=format:%s -1
}
-# 1
test_expect_success 'init subproj' '
test_create_repo subproj
'
# To the subproject!
cd subproj
-# 2
test_expect_success 'add sub1' '
create sub1 &&
git commit -m "sub1" &&
git branch -m master subproj
'
-# 3
+# Save this hash for testing later.
+
+subdir_hash=$(git rev-parse HEAD)
+
test_expect_success 'add sub2' '
create sub2 &&
git commit -m "sub2" &&
git branch sub2
'
-# 4
test_expect_success 'add sub3' '
create sub3 &&
git commit -m "sub3" &&
# Back to mainline
cd ..
-# 5
test_expect_success 'add main4' '
create main4 &&
git commit -m "main4" &&
git branch subdir
'
-# 6
test_expect_success 'fetch subproj history' '
git fetch ./subproj sub1 &&
git branch sub1 FETCH_HEAD
'
-# 7
test_expect_success 'no subtree exists in main tree' '
test_must_fail git subtree merge --prefix=subdir sub1
'
-# 8
test_expect_success 'no pull from non-existent subtree' '
test_must_fail git subtree pull --prefix=subdir ./subproj sub1
'
-# 9
test_expect_success 'check if --message works for add' '
git subtree add --prefix=subdir --message="Added subproject" sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject" &&
undo
'
-# 10
test_expect_success 'check if --message works as -m and --prefix as -P' '
git subtree add -P subdir -m "Added subproject using git subtree" sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject using git subtree" &&
undo
'
-# 11
test_expect_success 'check if --message works with squash too' '
git subtree add -P subdir -m "Added subproject with squash" --squash sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject with squash" &&
undo
'
-# 12
test_expect_success 'add subproj to mainline' '
git subtree add --prefix=subdir/ FETCH_HEAD &&
check_equal ''"$(last_commit_message)"'' "Add '"'subdir/'"' from commit '"'"'''"$(git rev-parse sub1)"'''"'"'"
'
-# 13
# this shouldn't actually do anything, since FETCH_HEAD is already a parent
test_expect_success 'merge fetched subproj' '
git merge -m "merge -s -ours" -s ours FETCH_HEAD
'
-# 14
test_expect_success 'add main-sub5' '
create subdir/main-sub5 &&
git commit -m "main-sub5"
'
-# 15
test_expect_success 'add main6' '
create main6 &&
git commit -m "main6 boring"
'
-# 16
test_expect_success 'add main-sub7' '
create subdir/main-sub7 &&
git commit -m "main-sub7"
'
-# 17
test_expect_success 'fetch new subproj history' '
git fetch ./subproj sub2 &&
git branch sub2 FETCH_HEAD
'
-# 18
test_expect_success 'check if --message works for merge' '
git subtree merge --prefix=subdir -m "Merged changes from subproject" sub2 &&
check_equal ''"$(last_commit_message)"'' "Merged changes from subproject" &&
undo
'
-# 19
test_expect_success 'check if --message for merge works with squash too' '
git subtree merge --prefix subdir -m "Merged changes from subproject using squash" --squash sub2 &&
check_equal ''"$(last_commit_message)"'' "Merged changes from subproject using squash" &&
undo
'
-# 20
test_expect_success 'merge new subproj history into subdir' '
git subtree merge --prefix=subdir FETCH_HEAD &&
git branch pre-split &&
check_equal ''"$(last_commit_message)"'' "Merge commit '"'"'"$(git rev-parse sub2)"'"'"' into mainline"
'
-# 21
test_expect_success 'Check that prefix argument is required for split' '
echo "You must provide the --prefix option." > expected &&
test_must_fail git subtree split > actual 2>&1 &&
rm -f expected actual
'
-# 22
test_expect_success 'Check that the <prefix> exists for a split' '
echo "'"'"'non-existent-directory'"'"'" does not exist\; use "'"'"'git subtree add'"'"'" > expected &&
test_must_fail git subtree split --prefix=non-existent-directory > actual 2>&1 &&
# rm -f expected actual
'
-# 23
test_expect_success 'check if --message works for split+rejoin' '
spl1=''"$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
git branch spl1 "$spl1" &&
undo
'
-# 24
test_expect_success 'check split with --branch' '
- spl1=$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin) &&
- undo &&
- git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --branch splitbr1 &&
- check_equal ''"$(git rev-parse splitbr1)"'' "$spl1"
+ spl1=$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin) &&
+ undo &&
+ git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --branch splitbr1 &&
+ check_equal ''"$(git rev-parse splitbr1)"'' "$spl1"
+'
+
+test_expect_success 'check hash of split' '
+ spl1=$(git subtree split --prefix subdir) &&
+ undo &&
+ git subtree split --prefix subdir --branch splitbr1test &&
+	check_equal ''"$(git rev-parse splitbr1test)"'' "$spl1" &&
+ git checkout splitbr1test &&
+ new_hash=$(git rev-parse HEAD~2) &&
+ git checkout mainline &&
+ check_equal ''"$new_hash"'' "$subdir_hash"
'
-# 25
test_expect_success 'check split with --branch for an existing branch' '
spl1=''"$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
undo &&
check_equal ''"$(git rev-parse splitbr2)"'' "$spl1"
'
-# 26
test_expect_success 'check split with --branch for an incompatible branch' '
test_must_fail git subtree split --prefix subdir --onto FETCH_HEAD --branch subdir
'
-
-# 27
test_expect_success 'check split+rejoin' '
spl1=''"$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
undo &&
check_equal ''"$(last_commit_message)"'' "Split '"'"'subdir/'"'"' into commit '"'"'"$spl1"'"'"'"
'
-# 28
test_expect_success 'add main-sub8' '
create subdir/main-sub8 &&
git commit -m "main-sub8"
# To the subproject!
cd ./subproj
-# 29
test_expect_success 'merge split into subproj' '
git fetch .. spl1 &&
git branch spl1 FETCH_HEAD &&
git merge FETCH_HEAD
'
-# 30
test_expect_success 'add sub9' '
create sub9 &&
git commit -m "sub9"
# Back to mainline
cd ..
-# 31
test_expect_success 'split for sub8' '
	split2=''"$(git subtree split --annotate='"'*'"' --prefix subdir/ --rejoin)"'' &&
git branch split2 "$split2"
'
-# 32
test_expect_success 'add main-sub10' '
create subdir/main-sub10 &&
git commit -m "main-sub10"
'
-# 33
test_expect_success 'split for sub10' '
spl3=''"$(git subtree split --annotate='"'*'"' --prefix subdir --rejoin)"'' &&
git branch spl3 "$spl3"
# To the subproject!
cd ./subproj
-# 34
test_expect_success 'merge split into subproj' '
git fetch .. spl3 &&
git branch spl3 FETCH_HEAD &&
chks="sub1 sub2 sub3 sub9"
chks_sub=$(echo $chks | multiline | sed 's,^,subdir/,' | fixnl)
-# 35
test_expect_success 'make sure exactly the right set of files ends up in the subproj' '
subfiles=''"$(git ls-files | fixnl)"'' &&
check_equal "$subfiles" "$chkms $chks"
'
-# 36
test_expect_success 'make sure the subproj history *only* contains commits that affect the subdir' '
allchanges=''"$(git log --name-only --pretty=format:'"''"' | sort | fixnl)"'' &&
check_equal "$allchanges" "$chkms $chks"
# Back to mainline
cd ..
-# 37
test_expect_success 'pull from subproj' '
git fetch ./subproj subproj-merge-spl3 &&
git branch subproj-merge-spl3 FETCH_HEAD &&
git subtree pull --prefix=subdir ./subproj subproj-merge-spl3
'
-# 38
test_expect_success 'make sure exactly the right set of files ends up in the mainline' '
mainfiles=''"$(git ls-files | fixnl)"'' &&
check_equal "$mainfiles" "$chkm $chkms_sub $chks_sub"
'
-# 39
test_expect_success 'make sure each filename changed exactly once in the entire history' '
# main-sub?? and /subdir/main-sub?? both change, because those are the
# changes that were split into their own history. And subdir/sub?? never
check_equal "$allchanges" ''"$(echo $chkms $chkm $chks $chkms_sub | multiline | sort | fixnl)"''
'
-# 40
test_expect_success 'make sure the --rejoin commits never make it into subproj' '
check_equal ''"$(git log --pretty=format:'"'%s'"' HEAD^2 | grep -i split)"'' ""
'
-# 41
test_expect_success 'make sure no "git subtree" tagged commits make it into subproj' '
# They are meaningless to subproj since one side of the merge refers to the mainline
check_equal ''"$(git log --pretty=format:'"'%s%n%b'"' HEAD^2 | grep "git-subtree.*:")"'' ""
mkdir test2
cd test2
-# 42
test_expect_success 'init main' '
test_create_repo main
'
cd main
-# 43
test_expect_success 'add main1' '
create main1 &&
git commit -m "main1"
cd ..
-# 44
test_expect_success 'init sub' '
test_create_repo sub
'
cd sub
-# 45
test_expect_success 'add sub2' '
create sub2 &&
git commit -m "sub2"
# check if split can find proper base without --onto
-# 46
test_expect_success 'add sub as subdir in main' '
git fetch ../sub master &&
git branch sub2 FETCH_HEAD &&
cd ../sub
-# 47
test_expect_success 'add sub3' '
create sub3 &&
git commit -m "sub3"
cd ../main
-# 48
test_expect_success 'merge from sub' '
git fetch ../sub master &&
git branch sub3 FETCH_HEAD &&
git subtree merge --prefix subdir sub3
'
-# 49
test_expect_success 'add main-sub4' '
create subdir/main-sub4 &&
git commit -m "main-sub4"
'
-# 50
test_expect_success 'split for main-sub4 without --onto' '
git subtree split --prefix subdir --branch mainsub4
'
# have been sub3, but it was not, because its cache was not set to
# itself)
-# 51
test_expect_success 'check that the commit parent is sub3' '
check_equal ''"$(git log --pretty=format:%P -1 mainsub4)"'' ''"$(git rev-parse sub3)"''
'
-# 52
test_expect_success 'add main-sub5' '
mkdir subdir2 &&
create subdir2/main-sub5 &&
git commit -m "main-sub5"
'
-# 53
test_expect_success 'split for main-sub5 without --onto' '
# also test that we still can split out an entirely new subtree
# if the parent of the first commit in the tree is not empty,
echo "$commit $all"
}
-# 54
test_expect_success 'verify one file change per commit' '
x= &&
list=''"$(git log --pretty=format:'"'commit: %H'"' | joincommits)"'' &&
static int diff_stat_graph_width;
static int diff_dirstat_permille_default = 30;
static struct diff_options default_diff_options;
+static long diff_algorithm;
static char diff_colors[][COLOR_MAXLEN] = {
GIT_COLOR_RESET,
return git_config_bool(var,value) ? DIFF_DETECT_RENAME : 0;
}
+long parse_algorithm_value(const char *value)
+{
+ if (!value)
+ return -1;
+ else if (!strcasecmp(value, "myers") || !strcasecmp(value, "default"))
+ return 0;
+ else if (!strcasecmp(value, "minimal"))
+ return XDF_NEED_MINIMAL;
+ else if (!strcasecmp(value, "patience"))
+ return XDF_PATIENCE_DIFF;
+ else if (!strcasecmp(value, "histogram"))
+ return XDF_HISTOGRAM_DIFF;
+ return -1;
+}
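
A minimal sketch of how a caller consumes the returned value, mirroring the
config and option handlers later in this series; apply_algorithm() is a
hypothetical wrapper, not part of the patch:

	static void apply_algorithm(long *xdl_opts, const char *name)
	{
		long value = parse_algorithm_value(name);
		if (value < 0)
			die("unknown diff algorithm: %s", name);
		/* clear any previously selected algorithm, then set ours */
		*xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
		*xdl_opts |= value;
	}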
+
/*
* These are to give UI layer defaults.
* The core-level commands such as git-diff-files should
return 0;
}
+ if (!strcmp(var, "diff.algorithm")) {
+ diff_algorithm = parse_algorithm_value(value);
+ if (diff_algorithm < 0)
+ return -1;
+ return 0;
+ }
+
if (git_color_config(var, value, cb) < 0)
return -1;
int nofirst;
FILE *file = o->file;
- if (o->output_prefix) {
- struct strbuf *msg = NULL;
- msg = o->output_prefix(o, o->output_prefix_data);
- assert(msg);
- fwrite(msg->buf, msg->len, 1, file);
- }
+ fputs(diff_line_prefix(o), file);
if (len == 0) {
has_trailing_newline = (first == '\n');
char *data_one, *data_two;
size_t size_one, size_two;
struct emit_callback ecbdata;
- char *line_prefix = "";
- struct strbuf *msgbuf;
-
- if (o && o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ const char *line_prefix = diff_line_prefix(o);
if (diff_mnemonic_prefix && DIFF_OPT_TST(o, REVERSE_DIFF)) {
a_prefix = o->b_prefix;
int minus_first, minus_len, plus_first, plus_len;
const char *minus_begin, *minus_end, *plus_begin, *plus_end;
struct diff_options *opt = diff_words->opt;
- struct strbuf *msgbuf;
- char *line_prefix = "";
+ const char *line_prefix;
if (line[0] != '@' || parse_hunk_header(line, len,
&minus_first, &minus_len, &plus_first, &plus_len))
return;
assert(opt);
- if (opt->output_prefix) {
- msgbuf = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ line_prefix = diff_line_prefix(opt);
/* POSIX requires that first be decremented by one if len == 0... */
if (minus_len) {
struct diff_words_style *style = diff_words->style;
struct diff_options *opt = diff_words->opt;
- struct strbuf *msgbuf;
- char *line_prefix = "";
+ const char *line_prefix;
assert(opt);
- if (opt->output_prefix) {
- msgbuf = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ line_prefix = diff_line_prefix(opt);
/* special case: only removal */
if (!diff_words->plus.text.size) {
return "";
}
+const char *diff_line_prefix(struct diff_options *opt)
+{
+ struct strbuf *msgbuf;
+ if (!opt->output_prefix)
+ return "";
+
+ msgbuf = opt->output_prefix(opt, opt->output_prefix_data);
+ return msgbuf->buf;
+}
+
static unsigned long sane_truncate_line(struct emit_callback *ecb, char *line, unsigned long len)
{
const char *cp;
const char *plain = diff_get_color(ecbdata->color_diff, DIFF_PLAIN);
const char *reset = diff_get_color(ecbdata->color_diff, DIFF_RESET);
struct diff_options *o = ecbdata->opt;
- char *line_prefix = "";
- struct strbuf *msgbuf;
-
- if (o && o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ const char *line_prefix = diff_line_prefix(o);
if (ecbdata->header) {
fprintf(ecbdata->opt->file, "%s", ecbdata->header->buf);
const char *reset, *add_c, *del_c;
const char *line_prefix = "";
int extra_shown = 0;
- struct strbuf *msg = NULL;
if (data->nr == 0)
return;
- if (options->output_prefix) {
- msg = options->output_prefix(options, options->output_prefix_data);
- line_prefix = msg->buf;
- }
-
+ line_prefix = diff_line_prefix(options);
count = options->stat_count ? options->stat_count : data->nr;
reset = diff_get_color_opt(options, DIFF_RESET);
dels += deleted;
}
}
- if (options->output_prefix) {
- struct strbuf *msg = NULL;
- msg = options->output_prefix(options,
- options->output_prefix_data);
- fprintf(options->file, "%s", msg->buf);
- }
+ fprintf(options->file, "%s", diff_line_prefix(options));
print_stat_summary(options->file, total_files, adds, dels);
}
for (i = 0; i < data->nr; i++) {
struct diffstat_file *file = data->files[i];
- if (options->output_prefix) {
- struct strbuf *msg = NULL;
- msg = options->output_prefix(options,
- options->output_prefix_data);
- fprintf(options->file, "%s", msg->buf);
- }
+ fprintf(options->file, "%s", diff_line_prefix(options));
if (file->is_binary)
fprintf(options->file, "-\t-\t");
{
unsigned long this_dir = 0;
unsigned int sources = 0;
- const char *line_prefix = "";
- struct strbuf *msg = NULL;
-
- if (opt->output_prefix) {
- msg = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = msg->buf;
- }
+ const char *line_prefix = diff_line_prefix(opt);
while (dir->nr) {
struct dirstat_file *f = dir->files;
const char *reset = diff_get_color(data->o->use_color, DIFF_RESET);
const char *set = diff_get_color(data->o->use_color, DIFF_FILE_NEW);
char *err;
- char *line_prefix = "";
- struct strbuf *msgbuf;
+ const char *line_prefix;
assert(data->o);
- if (data->o->output_prefix) {
- msgbuf = data->o->output_prefix(data->o,
- data->o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ line_prefix = diff_line_prefix(data->o);
if (line[0] == '+') {
unsigned bad;
return deflated;
}
-static void emit_binary_diff_body(FILE *file, mmfile_t *one, mmfile_t *two, char *prefix)
+static void emit_binary_diff_body(FILE *file, mmfile_t *one, mmfile_t *two,
+ const char *prefix)
{
void *cp;
void *delta;
free(data);
}
-static void emit_binary_diff(FILE *file, mmfile_t *one, mmfile_t *two, char *prefix)
+static void emit_binary_diff(FILE *file, mmfile_t *one, mmfile_t *two,
+ const char *prefix)
{
fprintf(file, "%sGIT binary patch\n", prefix);
emit_binary_diff_body(file, one, two, prefix);
struct userdiff_driver *textconv_one = NULL;
struct userdiff_driver *textconv_two = NULL;
struct strbuf header = STRBUF_INIT;
- struct strbuf *msgbuf;
- char *line_prefix = "";
-
- if (o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ const char *line_prefix = diff_line_prefix(o);
if (DIFF_OPT_TST(o, SUBMODULE_LOG) &&
(!one->mode || S_ISGITLINK(one->mode)) &&
{
const char *set = diff_get_color(use_color, DIFF_METAINFO);
const char *reset = diff_get_color(use_color, DIFF_RESET);
- struct strbuf *msgbuf;
- char *line_prefix = "";
+ const char *line_prefix = diff_line_prefix(o);
*must_show_header = 1;
- if (o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
strbuf_init(msg, PATH_MAX * 2 + 300);
switch (p->status) {
case DIFF_STATUS_COPIED:
options->add_remove = diff_addremove;
options->use_color = diff_use_color_default;
options->detect_rename = diff_detect_rename_default;
+ options->xdl_opts |= diff_algorithm;
if (diff_no_prefix) {
options->a_prefix = options->b_prefix = "";
options->xdl_opts = DIFF_WITH_ALG(options, PATIENCE_DIFF);
else if (!strcmp(arg, "--histogram"))
options->xdl_opts = DIFF_WITH_ALG(options, HISTOGRAM_DIFF);
+ else if (!prefixcmp(arg, "--diff-algorithm=")) {
+ long value = parse_algorithm_value(arg+17);
+ if (value < 0)
+ return error("option diff-algorithm accepts \"myers\", "
+ "\"minimal\", \"patience\" and \"histogram\"");
+ /* clear out previous settings */
+ DIFF_XDL_CLR(options, NEED_MINIMAL);
+ options->xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
+ options->xdl_opts |= value;
+ }
/* flags options */
else if (!strcmp(arg, "--binary")) {
{
int line_termination = opt->line_termination;
int inter_name_termination = line_termination ? '\t' : '\0';
- if (opt->output_prefix) {
- struct strbuf *msg = NULL;
- msg = opt->output_prefix(opt, opt->output_prefix_data);
- fprintf(opt->file, "%s", msg->buf);
- }
+ fprintf(opt->file, "%s", diff_line_prefix(opt));
if (!(opt->output_format & DIFF_FORMAT_NAME_STATUS)) {
fprintf(opt->file, ":%06o %06o %s ", p->one->mode, p->two->mode,
diff_unique_abbrev(p->one->sha1, opt->abbrev));
static void diff_summary(struct diff_options *opt, struct diff_filepair *p)
{
FILE *file = opt->file;
- char *line_prefix = "";
-
- if (opt->output_prefix) {
- struct strbuf *buf = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = buf->buf;
- }
+ const char *line_prefix = diff_line_prefix(opt);
switch(p->status) {
case DIFF_STATUS_DELETED:
if (output_format & DIFF_FORMAT_PATCH) {
if (separator) {
- if (options->output_prefix) {
- struct strbuf *msg = NULL;
- msg = options->output_prefix(options,
- options->output_prefix_data);
- fwrite(msg->buf, msg->len, 1, stdout);
- }
- putc(options->line_termination, options->file);
+ fprintf(options->file, "%s%c",
+ diff_line_prefix(options),
+ options->line_termination);
if (options->stat_sep) {
/* attach patch instead of inline */
fputs(options->stat_sep, options->file);
diff_get_color((o)->use_color, ix)
+const char *diff_line_prefix(struct diff_options *);
+
+
extern const char mime_boundary_leader[];
extern void diff_tree_setup_paths(const char **paths, struct diff_options *);
extern int parse_rename_score(const char **cp_p);
+extern long parse_algorithm_value(const char *value);
+
extern int print_stat_summary(FILE *fp, int files,
int insertions, int deletions);
extern void setup_diff_pager(struct diff_options *);
/*
* Let callers be aware of the constant return value; this can help
- * gcc with -Wuninitialized analysis. We have to restrict this trick to
- * gcc, though, because of the variadic macro and the magic ## comma pasting
- * behavior. But since we're only trying to help gcc, anyway, it's OK; other
- * compilers will fall back to using the function as usual.
+ * gcc with -Wuninitialized analysis. We restrict this trick to gcc, though,
+ * because some compilers may not support variadic macros. Since we're only
+ * trying to help gcc, anyway, it's OK; other compilers will fall back to
+ * using the function as usual.
*/
#if defined(__GNUC__) && ! defined(__clang__)
-#define error(fmt, ...) (error((fmt), ##__VA_ARGS__), -1)
+#define error(...) (error(__VA_ARGS__), -1)
#endif
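
At a call site the macro turns the call into a comma expression whose value
is the constant -1; a hedged illustration (report_bad_fd() is made up for
the example):

	static int report_bad_fd(int fd)
	{
		if (fd < 0)
			/* expands to (error(...), -1), visibly the constant -1 */
			return error("bad file descriptor: %d", fd);
		return 0;
	}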
extern void set_die_routine(NORETURN_PTR void (*routine)(const char *err, va_list params));
use IO::Pipe;
use POSIX qw(strftime tzset dup2 ENOENT);
use IPC::Open2;
+use Git qw(get_tz_offset);
$SIG{'PIPE'}="IGNORE";
set_timezone('UTC');
}
set_timezone($author_tz);
- my $commit_date = strftime("%s %z", localtime($date));
+ # $date is in the seconds since epoch format
+ my $tz_offset = get_tz_offset($date);
+ my $commit_date = "$date $tz_offset";
set_timezone('UTC');
$ENV{GIT_AUTHOR_NAME} = $author_name;
$ENV{GIT_AUTHOR_EMAIL} = $author_email;
# the user with the real $MERGED name before launching $merge_tool.
if should_prompt
then
- printf "\nViewing: '$MERGED'\n"
+ printf "\nViewing: '%s'\n" "$MERGED"
if use_ext_cmd
then
printf "Launch '%s' [Y/n]: " \
fi
printf "Merging:\n"
-printf "$files\n"
+printf "%s\n" "$files"
IFS='
'
GIT_CHERRY_PICK_HELP="$resolvemsg"
export GIT_CHERRY_PICK_HELP
+comment_char=$(git config --get core.commentchar 2>/dev/null | cut -c1)
+: ${comment_char:=#}
+
warn () {
printf '%s\n' "$*" >&2
}
sed -e 1q < "$todo" >> "$done"
sed -e 1d < "$todo" >> "$todo".new
mv -f "$todo".new "$todo"
- new_count=$(sane_grep -c '^[^#]' < "$done")
- total=$(($new_count+$(sane_grep -c '^[^#]' < "$todo")))
+ new_count=$(git stripspace --strip-comments <"$done" | wc -l)
+ total=$(($new_count + $(git stripspace --strip-comments <"$todo" | wc -l)))
if test "$last_count" != "$new_count"
then
last_count=$new_count
}
append_todo_help () {
- cat >> "$todo" << EOF
-#
-# Commands:
-# p, pick = use commit
-# r, reword = use commit, but edit the commit message
-# e, edit = use commit, but stop for amending
-# s, squash = use commit, but meld into previous commit
-# f, fixup = like "squash", but discard this commit's log message
-# x, exec = run command (the rest of the line) using shell
-#
-# These lines can be re-ordered; they are executed from top to bottom.
-#
-# If you remove a line here THAT COMMIT WILL BE LOST.
+ git stripspace --comment-lines >>"$todo" <<\EOF
+
+Commands:
+ p, pick = use commit
+ r, reword = use commit, but edit the commit message
+ e, edit = use commit, but stop for amending
+ s, squash = use commit, but meld into previous commit
+ f, fixup = like "squash", but discard this commit's log message
+ x, exec = run command (the rest of the line) using shell
+
+These lines can be re-ordered; they are executed from top to bottom.
+
+If you remove a line here THAT COMMIT WILL BE LOST.
EOF
}
}
has_action () {
- sane_grep '^[^#]' "$1" >/dev/null
+ test -n "$(git stripspace --strip-comments <"$1")"
}
is_empty_commit() {
if test -f "$squash_msg"; then
mv "$squash_msg" "$squash_msg".bak || exit
count=$(($(sed -n \
- -e "1s/^# This is a combination of \(.*\) commits\./\1/p" \
+ -e "1s/^. This is a combination of \(.*\) commits\./\1/p" \
-e "q" < "$squash_msg".bak)+1))
{
- echo "# This is a combination of $count commits."
+ printf '%s\n' "$comment_char This is a combination of $count commits."
sed -e 1d -e '2,/^./{
/^$/d
}' <"$squash_msg".bak
commit_message HEAD > "$fixup_msg" || die "Cannot write $fixup_msg"
count=2
{
- echo "# This is a combination of 2 commits."
- echo "# The first commit's message is:"
+ printf '%s\n' "$comment_char This is a combination of 2 commits."
+ printf '%s\n' "$comment_char The first commit's message is:"
echo
cat "$fixup_msg"
} >"$squash_msg"
squash)
rm -f "$fixup_msg"
echo
- echo "# This is the $(nth_string $count) commit message:"
+ printf '%s\n' "$comment_char This is the $(nth_string $count) commit message:"
echo
commit_message $2
;;
fixup)
echo
- echo "# The $(nth_string $count) commit message will be skipped:"
+ printf '%s\n' "$comment_char The $(nth_string $count) commit message will be skipped:"
echo
- commit_message $2 | sed -e 's/^/# /'
+ # Change the space after the comment character to TAB:
+	commit_message $2 | git stripspace --comment-lines | sed -e 's/ /	/'
;;
esac >>"$squash_msg"
}
peek_next_command () {
- sed -n -e "/^#/d" -e '/^$/d' -e "s/ .*//p" -e "q" < "$todo"
+ git stripspace --strip-comments <"$todo" | sed -n -e 's/ .*//p' -e q
}
# A squash/fixup has failed. Prepare the long version of the squash
rm -f "$msg" "$author_script" "$amend" || exit
read -r command sha1 rest < "$todo"
case "$command" in
- '#'*|''|noop)
+ "$comment_char"*|''|noop)
mark_action_done
;;
pick|p)
do_rest
;;
edit-todo)
- sed -e '/^#/d' < "$todo" > "$todo".new
+ git stripspace --strip-comments <"$todo" >"$todo".new
mv -f "$todo".new "$todo"
append_todo_help
- cat >> "$todo" << EOF
-#
-# You are editing the todo file of an ongoing interactive rebase.
-# To continue rebase after editing, run:
-# git rebase --continue
-#
+ git stripspace --comment-lines >>"$todo" <<\EOF
+
+You are editing the todo file of an ongoing interactive rebase.
+To continue rebase after editing, run:
+ git rebase --continue
+
EOF
git_sequence_editor "$todo" ||
if test -z "$keep_empty" && is_empty_commit $shortsha1 && ! is_merge_commit $shortsha1
then
- comment_out="# "
+ comment_out="$comment_char "
else
comment_out=
fi
test -n "$autosquash" && rearrange_squash "$todo"
test -n "$cmd" && add_exec_commands "$todo"
-cat >> "$todo" << EOF
+cat >>"$todo" <<EOF
-# Rebase $shortrevisions onto $shortonto
+$comment_char Rebase $shortrevisions onto $shortonto
EOF
append_todo_help
-cat >> "$todo" << EOF
-#
-# However, if you remove everything, the rebase will be aborted.
-#
+git stripspace --comment-lines >>"$todo" <<\EOF
+
+However, if you remove everything, the rebase will be aborted.
+
EOF
if test -z "$keep_empty"
then
- echo "# Note that empty commits are commented out" >>"$todo"
+ printf '%s\n' "$comment_char Note that empty commits are commented out" >>"$todo"
fi
if (!graph)
return;
+ /*
+ * When showing a diff of a merge against each of its parents, we
+ * are called once for each parent without graph_update having been
+ * called. In this case, simply output a single padding line.
+ */
+ if (graph_is_commit_finished(graph)) {
+ graph_show_padding(graph);
+ shown_commit_line = 1;
+ }
+
while (!shown_commit_line && !graph_is_commit_finished(graph)) {
shown_commit_line = graph_next_line(graph, &msgbuf);
fwrite(msgbuf.buf, sizeof(char), msgbuf.len, stdout);
#include "list-objects.h"
#include "sigchain.h"
+#ifdef EXPAT_NEEDS_XMLPARSE_H
+#include <xmlparse.h>
+#else
#include <expat.h>
+#endif
static const char http_push_usage[] =
"git http-push [--all] [--dry-run] [--force] [--verbose] <remote> [<head>...]\n";
o->xdl_opts = DIFF_WITH_ALG(o, PATIENCE_DIFF);
else if (!strcmp(s, "histogram"))
o->xdl_opts = DIFF_WITH_ALG(o, HISTOGRAM_DIFF);
+	else if (!prefixcmp(s, "diff-algorithm=")) {
+ long value = parse_algorithm_value(s+15);
+ if (value < 0)
+ return -1;
+ /* clear out previous settings */
+ DIFF_XDL_CLR(o, NEED_MINIMAL);
+ o->xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
+ o->xdl_opts |= value;
+ }
else if (!strcmp(s, "ignore-space-change"))
o->xdl_opts |= XDF_IGNORE_WHITESPACE_CHANGE;
else if (!strcmp(s, "ignore-all-space"))
empty_file="${TMPDIR:-/tmp}/git-difftool-p4merge-empty-file.$$"
>"$empty_file"
- printf "$empty_file"
+ printf "%s" "$empty_file"
}
#include "cache.h"
#include "commit.h"
#include "color.h"
+#include "utf8.h"
static int parse_options_usage(struct parse_opt_ctx_t *ctx,
const char * const *usagestr,
default: /* PARSE_OPT_UNKNOWN */
if (ctx.argv[0][1] == '-') {
error("unknown option `%s'", ctx.argv[0] + 2);
- } else {
+ } else if (isascii(*ctx.opt)) {
error("unknown switch `%c'", *ctx.opt);
+ } else {
+ error("unknown non-ascii option in string: `%s'",
+ ctx.argv[0]);
}
usage_with_options(usagestr, options);
}
s = literal ? "[%s]" : "[<%s>]";
else
s = literal ? " %s" : " <%s>";
- return fprintf(outfile, s, opts->argh ? _(opts->argh) : _("..."));
+ return utf8_fprintf(outfile, s, opts->argh ? _(opts->argh) : _("..."));
}
#define USAGE_OPTS_WIDTH 24
if (opts->long_name)
pos += fprintf(outfile, "--%s", opts->long_name);
if (opts->type == OPTION_NUMBER)
- pos += fprintf(outfile, "-NUM");
+ pos += utf8_fprintf(outfile, _("-NUM"));
if ((opts->flags & PARSE_OPT_LITERAL_ARGHELP) ||
!(opts->flags & PARSE_OPT_NOARG))
command_bidi_pipe command_close_bidi_pipe
version exec_path html_path hash_object git_cmd_try
remote_refs prompt
+ get_tz_offset
temp_acquire temp_release temp_reset temp_path);
use Cwd qw(abs_path cwd);
use IPC::Open2 qw(open2);
use Fcntl qw(SEEK_SET SEEK_CUR);
+use Time::Local qw(timegm);
}
sub html_path { command_oneline('--html-path') }
+
+=item get_tz_offset ( TIME )
+
+Return the time zone offset from GMT in the form +/-HHMM where HH is
+the number of hours from GMT and MM is the number of minutes. This is
+the equivalent of what strftime("%z", ...) would provide on a GNU
+platform.
+
+If TIME is not supplied, the current local time is used.
+
+=cut
+
+sub get_tz_offset {
+	# some systems don't handle or mishandle %z, so be creative.
+ my $t = shift || time;
+ my $gm = timegm(localtime($t));
+ my $sign = qw( + + - )[ $gm <=> $t ];
+ return sprintf("%s%02d%02d", $sign, (gmtime(abs($t - $gm)))[2,1]);
+}
+
+
=item prompt ( PROMPT , ISPASSWORD )
Query user C<PROMPT> and return answer from user.
use File::Path qw/mkpath/;
use File::Copy qw/copy/;
use IPC::Open3;
-use Time::Local;
use Memoize; # core since 5.8.0, Jul 2002
use Memoize::Storable;
use POSIX qw(:signal_h);
command_noisy
command_output_pipe
command_close_pipe
+ get_tz_offset
);
use Git::SVN::Utils qw(
fatal
\@out;
}
-sub get_tz {
- # some systmes don't handle or mishandle %z, so be creative.
- my $t = shift || time;
- my $gm = timelocal(gmtime($t));
- my $sign = qw( + + - )[ $t <=> $gm ];
- return sprintf("%s%02d%02d", $sign, (gmtime(abs($t - $gm)))[2,1]);
-}
-
# parse_svn_date(DATE)
# --------------------
# Given a date (in UTC) from Subversion, return a string in the format
delete $ENV{TZ};
}
- my $our_TZ = get_tz();
+ my $our_TZ = get_tz_offset();
# This converts $epoch_in_UTC into our local timezone.
my ($sec, $min, $hour, $mday, $mon, $year,
use strict;
use warnings;
use Git::SVN::Utils qw(fatal);
-use Git qw(command command_oneline command_output_pipe command_close_pipe);
+use Git qw(command
+ command_oneline
+ command_output_pipe
+ command_close_pipe
+ get_tz_offset);
use POSIX qw/strftime/;
use constant commit_log_separator => ('-' x 72) . "\n";
use vars qw/$TZ $limit $color $pager $non_recursive $verbose $oneline
sub format_svn_date {
my $t = shift || time;
require Git::SVN;
- my $gmoff = Git::SVN::get_tz($t);
+ my $gmoff = get_tz_offset($t);
return strftime("%Y-%m-%d %H:%M:%S $gmoff (%a, %d %b %Y)", localtime($t));
}
#include "object.h"
#include "tag.h"
#include "dir.h"
+#include "string-list.h"
/*
* Make sure "ref" is something reasonable to have under ".git/refs/";
free(short_name);
return xstrdup(refname);
}
+
+static struct string_list *hide_refs;
+
+int parse_hide_refs_config(const char *var, const char *value, const char *section)
+{
+ if (!strcmp("transfer.hiderefs", var) ||
+ /* NEEDSWORK: use parse_config_key() once both are merged */
+ (!prefixcmp(var, section) && var[strlen(section)] == '.' &&
+ !strcmp(var + strlen(section), ".hiderefs"))) {
+ char *ref;
+ int len;
+
+ if (!value)
+ return config_error_nonbool(var);
+ ref = xstrdup(value);
+ len = strlen(ref);
+ while (len && ref[len - 1] == '/')
+ ref[--len] = '\0';
+ if (!hide_refs) {
+ hide_refs = xcalloc(1, sizeof(*hide_refs));
+ hide_refs->strdup_strings = 1;
+ }
+ string_list_append(hide_refs, ref);
+ }
+ return 0;
+}
+
+int ref_is_hidden(const char *refname)
+{
+ struct string_list_item *item;
+
+ if (!hide_refs)
+ return 0;
+ for_each_string_list_item(item, hide_refs) {
+ int len;
+ if (prefixcmp(refname, item->string))
+ continue;
+ len = strlen(item->string);
+ if (!refname[len] || refname[len] == '/')
+ return 1;
+ }
+ return 0;
+}
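
Note that a hidden prefix matches only on a full path component boundary; a
hedged illustration with made-up ref names, assuming transfer.hiderefs is
set to "refs/hidden":

	ref_is_hidden("refs/hidden");      /* 1: exact match */
	ref_is_hidden("refs/hidden/one");  /* 1: '/' follows the prefix */
	ref_is_hidden("refs/hiddenmost");  /* 0: prefix, but mid-component */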
const unsigned char *sha1, const unsigned char *oldval,
int flags, enum action_on_err onerr);
+extern int parse_hide_refs_config(const char *var, const char *value, const char *);
+extern int ref_is_hidden(const char *);
+
#endif /* REFS_H */
static int commit_match(struct commit *commit, struct rev_info *opt)
{
int retval;
+ const char *encoding;
+ char *message;
struct strbuf buf = STRBUF_INIT;
+
if (!opt->grep_filter.pattern_list && !opt->grep_filter.header_list)
return 1;
strbuf_addch(&buf, '\n');
}
+ /*
+ * We grep in the user's output encoding, under the assumption that it
+ * is the encoding they are most likely to write their grep pattern
+ * for. In addition, it means we will match the "notes" encoding below,
+ * so we will not end up with a buffer that has two different encodings
+ * in it.
+ */
+ encoding = get_log_output_encoding();
+ message = logmsg_reencode(commit, encoding);
+
/* Copy the commit to temporary if we are using "fake" headers */
if (buf.len)
- strbuf_addstr(&buf, commit->buffer);
+ strbuf_addstr(&buf, message);
if (opt->grep_filter.header_list && opt->mailmap) {
if (!buf.len)
- strbuf_addstr(&buf, commit->buffer);
+ strbuf_addstr(&buf, message);
commit_rewrite_person(&buf, "\nauthor ", opt->mailmap);
commit_rewrite_person(&buf, "\ncommitter ", opt->mailmap);
/* Append "fake" message parts as needed */
if (opt->show_notes) {
if (!buf.len)
- strbuf_addstr(&buf, commit->buffer);
- format_display_notes(commit->object.sha1, &buf,
- get_log_output_encoding(), 1);
+ strbuf_addstr(&buf, message);
+ format_display_notes(commit->object.sha1, &buf, encoding, 1);
}
- /* Find either in the commit object, or in the temporary */
+ /* Find either in the original commit message, or in the temporary */
if (buf.len)
retval = grep_buffer(&opt->grep_filter, buf.buf, buf.len);
else
retval = grep_buffer(&opt->grep_filter,
- commit->buffer, strlen(commit->buffer));
+ message, strlen(message));
strbuf_release(&buf);
+ logmsg_free(message, commit);
return retval;
}
test L = $(git cat-file commit HEAD | sed -ne \$p)
'
+test_expect_success 'rebase -i respects core.commentchar' '
+ git reset --hard &&
+ git checkout E^0 &&
+ git config core.commentchar "\\" &&
+ test_when_finished "git config --unset core.commentchar" &&
+ write_script remove-all-but-first.sh <<-\EOF &&
+ sed -e "2,\$s/^/\\\\/" "$1" >"$1.tmp" &&
+ mv "$1.tmp" "$1"
+ EOF
+ test_set_editor "$(pwd)/remove-all-but-first.sh" &&
+ git rebase -i B &&
+ test B = $(git cat-file commit HEAD^ | sed -ne \$p)
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test log with i18n features'
+. ./test-lib.sh
+
+# two forms of é
+utf8_e=$(printf '\303\251')
+latin1_e=$(printf '\351')
+
+test_expect_success 'create commits in different encodings' '
+ test_tick &&
+ cat >msg <<-EOF &&
+ utf8
+
+ t${utf8_e}st
+ EOF
+ git add msg &&
+ git -c i18n.commitencoding=utf8 commit -F msg &&
+ cat >msg <<-EOF &&
+ latin1
+
+ t${latin1_e}st
+ EOF
+ git add msg &&
+ git -c i18n.commitencoding=ISO-8859-1 commit -F msg
+'
+
+test_expect_success 'log --grep searches in log output encoding (utf8)' '
+ cat >expect <<-\EOF &&
+ latin1
+ utf8
+ EOF
+ git log --encoding=utf8 --format=%s --grep=$utf8_e >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'log --grep searches in log output encoding (latin1)' '
+ cat >expect <<-\EOF &&
+ latin1
+ utf8
+ EOF
+ git log --encoding=ISO-8859-1 --format=%s --grep=$latin1_e >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'log --grep does not find non-reencoded values (utf8)' '
+ >expect &&
+ git log --encoding=utf8 --format=%s --grep=$latin1_e >actual &&
+ test_cmp expect actual
+'
+
+test_expect_success 'log --grep does not find non-reencoded values (latin1)' '
+ >expect &&
+ git log --encoding=ISO-8859-1 --format=%s --grep=$utf8_e >actual &&
+ test_cmp expect actual
+'
+
+test_done
test_cmp expect actual
'
+for configsection in transfer uploadpack
+do
+ test_expect_success "Hide some refs with $configsection.hiderefs" '
+ test_config $configsection.hiderefs refs/tags &&
+ git ls-remote . >actual &&
+ test_unconfig $configsection.hiderefs &&
+ git ls-remote . |
+ sed -e "/ refs\/tags\//d" >expect &&
+ test_cmp expect actual
+ '
+done
+
test_done
! check_push_result $the_first_commit tmp/foo tmp/bar
'
+for configsection in transfer receive
+do
+ test_expect_success "push to update a ref hidden by $configsection.hiderefs" '
+ mk_test heads/master hidden/one hidden/two hidden/three &&
+ (
+ cd testrepo &&
+ git config $configsection.hiderefs refs/hidden
+ ) &&
+
+ # push to unhidden ref succeeds normally
+ git push testrepo master:refs/heads/master &&
+ check_push_result $the_commit heads/master &&
+
+ # push to update a hidden ref should fail
+ test_must_fail git push testrepo master:refs/hidden/one &&
+ check_push_result $the_first_commit hidden/one &&
+
+ # push to delete a hidden ref should fail
+ test_must_fail git push testrepo :refs/hidden/two &&
+ check_push_result $the_first_commit hidden/two &&
+
+ # idempotent push to update a hidden ref should fail
+ test_must_fail git push testrepo $the_first_commit:refs/hidden/three &&
+ check_push_result $the_first_commit hidden/three
+ '
+done
+
test_done
test_expect_success 'status when rebase in progress before resolving conflicts' '
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD^^) &&
test_must_fail git rebase HEAD^ --onto HEAD^^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_conflicts'\'' on '\''$ONTO'\''.
# (fix conflicts and then run "git rebase --continue")
# (use "git rebase --skip" to skip this patch)
# (use "git rebase --abort" to check out the original branch)
test_expect_success 'status when rebase in progress before rebase --continue' '
git reset --hard rebase_conflicts &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD^^) &&
test_must_fail git rebase HEAD^ --onto HEAD^^ &&
echo three >main.txt &&
git add main.txt &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_conflicts'\'' on '\''$ONTO'\''.
# (all conflicts fixed: run "git rebase --continue")
#
# Changes to be committed:
test_expect_success 'status during rebase -i when conflicts unresolved' '
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short rebase_i_conflicts) &&
test_must_fail git rebase -i rebase_i_conflicts &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_i_conflicts_second'\'' on '\''$ONTO'\''.
# (fix conflicts and then run "git rebase --continue")
# (use "git rebase --skip" to skip this patch)
# (use "git rebase --abort" to check out the original branch)
test_expect_success 'status during rebase -i after resolving conflicts' '
git reset --hard rebase_i_conflicts_second &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short rebase_i_conflicts) &&
test_must_fail git rebase -i rebase_i_conflicts &&
git add main.txt &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_i_conflicts_second'\'' on '\''$ONTO'\''.
# (all conflicts fixed: run "git rebase --continue")
#
# Changes to be committed:
FAKE_LINES="1 edit 2" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~2) &&
git rebase -i HEAD~2 &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''rebase_i_edit'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''split_commit'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="1 2 edit 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "foo" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''amend_last'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git rebase --continue &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git rebase --continue &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git rebase --continue &&
git commit --amend -m "foo" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "a" &&
git rebase --continue &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "b" &&
git rebase --continue &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "c" &&
git rebase --continue &&
git commit --amend -m "d" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
git add main.txt &&
git commit -m "e" &&
git rebase --continue &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
git add main.txt &&
git commit --amend -m "f" &&
git rebase --continue &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
git add main.txt &&
git commit --amend -m "g" &&
git rebase --continue &&
git commit --amend -m "h" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
git bisect good one_bisect &&
cat >expected <<-\EOF &&
# Not currently on any branch.
- # You are currently bisecting.
+ # You are currently bisecting branch '\''bisect'\''.
# (use "git bisect reset" to get back to the original branch)
#
nothing to commit (use -u to show untracked files)
test_commit two_statushints main.txt two &&
test_commit three_statushints main.txt three &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD^^) &&
test_must_fail git rebase HEAD^ --onto HEAD^^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''statushints_disabled'\'' on '\''$ONTO'\''.
#
# Unmerged paths:
# both modified: main.txt
test_cmp expected "$actual"
'
-test_expect_success 'prompt - dirty status indicator - disabled by config' '
+test_expect_success 'prompt - dirty status indicator - shell variable unset with config disabled' '
+ printf " (master)" > expected &&
+ echo "dirty" > file &&
+ test_when_finished "git reset --hard" &&
+ test_config bash.showDirtyState false &&
+ (
+ sane_unset GIT_PS1_SHOWDIRTYSTATE &&
+ __git_ps1 > "$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success 'prompt - dirty status indicator - shell variable unset with config enabled' '
+ printf " (master)" > expected &&
+ echo "dirty" > file &&
+ test_when_finished "git reset --hard" &&
+ test_config bash.showDirtyState true &&
+ (
+ sane_unset GIT_PS1_SHOWDIRTYSTATE &&
+ __git_ps1 > "$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success 'prompt - dirty status indicator - shell variable set with config disabled' '
printf " (master)" > expected &&
echo "dirty" > file &&
test_when_finished "git reset --hard" &&
test_cmp expected "$actual"
'
+test_expect_success 'prompt - dirty status indicator - shell variable set with config enabled' '
+ printf " (master *)" > expected &&
+ echo "dirty" > file &&
+ test_when_finished "git reset --hard" &&
+ test_config bash.showDirtyState true &&
+ (
+ GIT_PS1_SHOWDIRTYSTATE=y &&
+ __git_ps1 > "$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
test_expect_success 'prompt - dirty status indicator - not shown inside .git directory' '
printf " (GIT_DIR!)" > expected &&
echo "dirty" > file &&
test_cmp expected "$actual"
'
+test_expect_success 'prompt - untracked files status indicator - shell variable unset with config disabled' '
+ printf " (master)" > expected &&
+ test_config bash.showUntrackedFiles false &&
+ (
+ sane_unset GIT_PS1_SHOWUNTRACKEDFILES &&
+ __git_ps1 > "$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success 'prompt - untracked files status indicator - shell variable unset with config enabled' '
+ printf " (master)" > expected &&
+ test_config bash.showUntrackedFiles true &&
+ (
+ sane_unset GIT_PS1_SHOWUNTRACKEDFILES &&
+ __git_ps1 > "$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success 'prompt - untracked files status indicator - shell variable set with config disabled' '
+ printf " (master)" > expected &&
+ test_config bash.showUntrackedFiles false &&
+ (
+ GIT_PS1_SHOWUNTRACKEDFILES=y &&
+ __git_ps1 > "$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
+test_expect_success 'prompt - untracked files status indicator - shell variable set with config enabled' '
+ printf " (master %%)" > expected &&
+ test_config bash.showUntrackedFiles true &&
+ (
+ GIT_PS1_SHOWUNTRACKEDFILES=y &&
+ __git_ps1 > "$actual"
+ ) &&
+ test_cmp expected "$actual"
+'
+
test_expect_success 'prompt - untracked files status indicator - not shown inside .git directory' '
printf " (GIT_DIR!)" > expected &&
(
#include "run-command.h"
#include "sigchain.h"
#include "version.h"
+#include "string-list.h"
static const char upload_pack_usage[] = "git upload-pack [--strict] [--timeout=<n>] <dir>";
static unsigned long oldest_have;
-static int multi_ack, nr_our_refs;
+static int multi_ack;
static int no_done;
static int use_thin_pack, use_ofs_delta, use_include_tag;
static int no_progress, daemon_mode;
{
struct async rev_list;
struct child_process pack_objects;
- int create_full_pack = (nr_our_refs == want_obj.nr && !have_obj.nr);
char data[8193], progress[128];
char abort_msg[] = "aborting due to possible repository "
"corruption on the remote side.";
argv[arg++] = "pack-objects";
if (!shallow_nr) {
argv[arg++] = "--revs";
- if (create_full_pack)
- argv[arg++] = "--all";
- else if (use_thin_pack)
+ if (use_thin_pack)
argv[arg++] = "--thin";
}
}
else {
FILE *pipe_fd = xfdopen(pack_objects.in, "w");
- if (!create_full_pack) {
- int i;
- for (i = 0; i < want_obj.nr; i++)
- fprintf(pipe_fd, "%s\n", sha1_to_hex(want_obj.objects[i].item->sha1));
- fprintf(pipe_fd, "--not\n");
- for (i = 0; i < have_obj.nr; i++)
- fprintf(pipe_fd, "%s\n", sha1_to_hex(have_obj.objects[i].item->sha1));
- }
+ int i;
+ for (i = 0; i < want_obj.nr; i++)
+ fprintf(pipe_fd, "%s\n",
+ sha1_to_hex(want_obj.objects[i].item->sha1));
+ fprintf(pipe_fd, "--not\n");
+ for (i = 0; i < have_obj.nr; i++)
+ fprintf(pipe_fd, "%s\n",
+ sha1_to_hex(have_obj.objects[i].item->sha1));
fprintf(pipe_fd, "\n");
fflush(pipe_fd);
fclose(pipe_fd);
free(shallows.objects);
}
+/* return non-zero if the ref is hidden, otherwise 0 */
+static int mark_our_ref(const char *refname, const unsigned char *sha1, int flag, void *cb_data)
+{
+ struct object *o = lookup_unknown_object(sha1);
+
+ if (ref_is_hidden(refname))
+ return 1;
+ if (!o)
+		die("git upload-pack: cannot find object %s", sha1_to_hex(sha1));
+ o->flags |= OUR_REF;
+ return 0;
+}
+
static int send_ref(const char *refname, const unsigned char *sha1, int flag, void *cb_data)
{
static const char *capabilities = "multi_ack thin-pack side-band"
" side-band-64k ofs-delta shallow no-progress"
" include-tag multi_ack_detailed";
- struct object *o = lookup_unknown_object(sha1);
const char *refname_nons = strip_namespace(refname);
unsigned char peeled[20];
+ if (mark_our_ref(refname, sha1, flag, cb_data))
+ return 0;
+
if (capabilities)
packet_write(1, "%s %s%c%s%s agent=%s\n",
sha1_to_hex(sha1), refname_nons,
else
packet_write(1, "%s %s\n", sha1_to_hex(sha1), refname_nons);
capabilities = NULL;
- if (!(o->flags & OUR_REF)) {
- o->flags |= OUR_REF;
- nr_our_refs++;
- }
if (!peel_ref(refname, peeled))
packet_write(1, "%s %s^{}\n", sha1_to_hex(peeled), refname_nons);
return 0;
}
-static int mark_our_ref(const char *refname, const unsigned char *sha1, int flag, void *cb_data)
-{
- struct object *o = parse_object(sha1);
- if (!o)
- die("git upload-pack: cannot find object %s:", sha1_to_hex(sha1));
- if (!(o->flags & OUR_REF)) {
- o->flags |= OUR_REF;
- nr_our_refs++;
- }
- return 0;
-}
-
static void upload_pack(void)
{
if (advertise_refs || !stateless_rpc) {
}
}
+static int upload_pack_config(const char *var, const char *value, void *unused)
+{
+ return parse_hide_refs_config(var, value, "uploadpack");
+}
+
int main(int argc, char **argv)
{
char *dir;
die("'%s' does not appear to be a git repository", dir);
if (is_repository_shallow())
die("attempt to fetch/clone from a shallow repository");
+ git_config(upload_pack_config, NULL);
if (getenv("GIT_DEBUG_SEND_PACK"))
debug_fd = atoi(getenv("GIT_DEBUG_SEND_PACK"));
upload_pack();
return !strcasecmp(src, dst);
}
+/*
+ * Wrapper for fprintf and returns the total number of columns required
+ * for the printed string, assuming that the string is utf8.
+ */
+int utf8_fprintf(FILE *stream, const char *format, ...)
+{
+ struct strbuf buf = STRBUF_INIT;
+ va_list arg;
+ int columns;
+
+ va_start(arg, format);
+ strbuf_vaddf(&buf, format, arg);
+ va_end(arg);
+
+ columns = fputs(buf.buf, stream);
+ if (0 <= columns) /* keep the error from the I/O */
+ columns = utf8_strwidth(buf.buf);
+ strbuf_release(&buf);
+ return columns;
+}
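
The returned count is display columns rather than bytes, which is what the
option-help layout relies on; a hedged illustration:

	/*
	 * "\xc3\xa9" is U+00E9 ("é"): two bytes, one column.
	 * fprintf() would return 2 here; utf8_fprintf() returns 1.
	 */
	int cols = utf8_fprintf(stdout, "%s", "\xc3\xa9");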
+
/*
* Given a buffer and its encoding, return it re-encoded
* with iconv. If the conversion fails, returns NULL.
int is_utf8(const char *text);
int is_encoding_utf8(const char *name);
int same_encoding(const char *, const char *);
+int utf8_fprintf(FILE *, const char *, ...);
void strbuf_add_wrapped_text(struct strbuf *buf,
const char *text, int indent, int indent2, int width);
struct stat st;
if (has_unmerged(s)) {
- status_printf_ln(s, color, _("You are currently rebasing."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently rebasing."));
if (advice_status_hints) {
status_printf_ln(s, color,
_(" (fix conflicts and then run \"git rebase --continue\")"));
_(" (use \"git rebase --abort\" to check out the original branch)"));
}
} else if (state->rebase_in_progress || !stat(git_path("MERGE_MSG"), &st)) {
- status_printf_ln(s, color, _("You are currently rebasing."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently rebasing."));
if (advice_status_hints)
status_printf_ln(s, color,
_(" (all conflicts fixed: run \"git rebase --continue\")"));
} else if (split_commit_in_progress(s)) {
- status_printf_ln(s, color, _("You are currently splitting a commit during a rebase."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently splitting a commit while rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently splitting a commit during a rebase."));
if (advice_status_hints)
status_printf_ln(s, color,
_(" (Once your working directory is clean, run \"git rebase --continue\")"));
} else {
- status_printf_ln(s, color, _("You are currently editing a commit during a rebase."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently editing a commit while rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently editing a commit during a rebase."));
if (advice_status_hints && !s->amend) {
status_printf_ln(s, color,
_(" (use \"git commit --amend\" to amend the current commit)"));
struct wt_status_state *state,
const char *color)
{
- status_printf_ln(s, color, _("You are currently bisecting."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently bisecting branch '%s'."),
+ state->branch);
+ else
+ status_printf_ln(s, color,
+ _("You are currently bisecting."));
if (advice_status_hints)
status_printf_ln(s, color,
_(" (use \"git bisect reset\" to get back to the original branch)"));
wt_status_print_trailer(s);
}
+/*
+ * Extract branch information from rebase/bisect
+ */
+static void read_and_strip_branch(struct strbuf *sb,
+ const char **branch,
+ const char *path)
+{
+ unsigned char sha1[20];
+
+ strbuf_reset(sb);
+ if (strbuf_read_file(sb, git_path("%s", path), 0) <= 0)
+ return;
+
+ while (sb->len && sb->buf[sb->len - 1] == '\n')
+ strbuf_setlen(sb, sb->len - 1);
+ if (!sb->len)
+ return;
+ if (!prefixcmp(sb->buf, "refs/heads/"))
+ *branch = sb->buf + strlen("refs/heads/");
+ else if (!prefixcmp(sb->buf, "refs/"))
+ *branch = sb->buf;
+ else if (!get_sha1_hex(sb->buf, sha1)) {
+ const char *abbrev;
+ abbrev = find_unique_abbrev(sha1, DEFAULT_ABBREV);
+ strbuf_reset(sb);
+ strbuf_addstr(sb, abbrev);
+ *branch = sb->buf;
+ } else if (!strcmp(sb->buf, "detached HEAD")) /* rebase */
+ ;
+ else /* bisect */
+ *branch = sb->buf;
+}
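
For reference, the mapping this helper implements; the sample inputs follow
the files git rebase and git bisect write, and the values are made up:

	/*
	 * "refs/heads/topic" -> *branch = "topic"
	 * "refs/bisect/bad"  -> *branch = "refs/bisect/bad"
	 * <40-hex object>    -> *branch = abbreviated object name
	 * "detached HEAD"    -> *branch left untouched (rebase case)
	 */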
+
static void wt_status_print_state(struct wt_status *s)
{
const char *state_color = color(WT_STATUS_HEADER, s);
+ struct strbuf branch = STRBUF_INIT;
+ struct strbuf onto = STRBUF_INIT;
struct wt_status_state state;
struct stat st;
state.am_empty_patch = 1;
} else {
state.rebase_in_progress = 1;
+ read_and_strip_branch(&branch, &state.branch,
+ "rebase-apply/head-name");
+ read_and_strip_branch(&onto, &state.onto,
+ "rebase-apply/onto");
}
} else if (!stat(git_path("rebase-merge"), &st)) {
if (!stat(git_path("rebase-merge/interactive"), &st))
state.rebase_interactive_in_progress = 1;
else
state.rebase_in_progress = 1;
+ read_and_strip_branch(&branch, &state.branch,
+ "rebase-merge/head-name");
+ read_and_strip_branch(&onto, &state.onto,
+ "rebase-merge/onto");
} else if (!stat(git_path("CHERRY_PICK_HEAD"), &st)) {
state.cherry_pick_in_progress = 1;
}
- if (!stat(git_path("BISECT_LOG"), &st))
+ if (!stat(git_path("BISECT_LOG"), &st)) {
state.bisect_in_progress = 1;
+ read_and_strip_branch(&branch, &state.branch,
+ "BISECT_START");
+ }
if (state.merge_in_progress)
show_merge_in_progress(s, &state, state_color);
show_cherry_pick_in_progress(s, &state, state_color);
if (state.bisect_in_progress)
show_bisect_in_progress(s, &state, state_color);
+ strbuf_release(&branch);
+ strbuf_release(&onto);
}
void wt_status_print(struct wt_status *s)
int rebase_interactive_in_progress;
int cherry_pick_in_progress;
int bisect_in_progress;
+ const char *branch;
+ const char *onto;
};
void wt_status_prepare(struct wt_status *s);