judgement call, the decision based more on real world
constraints people face than what the paper standard says.
+Make your code readable and sensible, and don't try to be clever.
As for more concrete guidelines, just imitate the existing code
(this is a good guideline, no matter which project you are
- Use Git's gettext wrappers to make the user interface
translatable. See "Marking strings for translation" in po/README.
+For Perl programs:
+
+ - Most of the C guidelines above apply.
+
+ - We try to support Perl 5.8 and later ("use Perl 5.008").
+
+ - use strict and use warnings are strongly preferred.
+
+ - Don't overuse statement modifiers unless using them makes the
+ result easier to follow.
+
+ ... do something ...
+ do_this() unless (condition);
+ ... do something else ...
+
+ is more readable than:
+
+ ... do something ...
+ unless (condition) {
+ do_this();
+ }
+ ... do something else ...
+
+ *only* when the condition is so rare that do_this() will be almost
+ always called.
+
+ - We try to avoid assignments inside "if ()" conditions.
+
+ - Learn and use Git.pm if you need that functionality (see the sketch below).
+
+ - For Emacs, it's useful to put the following in
+ GIT_CHECKOUT/.dir-locals.el, assuming you use cperl-mode:
+
+ ;; note the first part is useful for C editing, too
+ ((nil . ((indent-tabs-mode . t)
+ (tab-width . 8)
+ (fill-column . 80)))
+ (cperl-mode . ((cperl-indent-level . 8)
+ (cperl-extra-newline-before-brace . nil)
+ (cperl-merge-trailing-else . t))))
+
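A minimal sketch pulling several of these guidelines together (an
illustration only, assuming the Git.pm module shipped with Git is
available and the script runs inside a work tree with HEAD on a branch):

        #!/usr/bin/perl
        use 5.008;
        use strict;
        use warnings;
        use Git;

        my $repo = Git->repository();

        # avoid an assignment inside "if ()": compute first, then test
        my $branch = $repo->command_oneline('symbolic-ref', '--short', 'HEAD');
        if ($branch eq 'master') {
                print "on master\n";
        }

        # in list context, command() returns one chomped line per element
        print "$_\n" for $repo->command('ls-files');
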
For Python scripts:
- We follow PEP-8 (http://www.python.org/dev/peps/pep-0008/).
rewrite the names and email addresses of people using the mailmap
mechanism.
+ * "git log --cc --graph" now shows the combined diff output with the
+ ancestry graph.
+
* "git mergetool" and "git difftool" learned to list the available
tool backends in a more consistent manner.
you do not have any commits in your history, but it now gives you
an empty index (to match the non-existent commit you are not even on).
+ * "git status" says what branch is being bisected or rebased when
+ able, not just "bisecting" or "rebasing".
+
* "git submodule" started learning a new mode to integrate with the
tip of the remote branch (as opposed to integrating with the commit
recorded in the superproject's gitlink).
failed to remove the real location of the $GIT_DIR it created.
This was most visible when interrupting a submodule update.
+ * "git cvsimport" mishandled timestamps at DST boundary.
+ (merge 48c9162 bw/get-tz-offset-perl later to maint).
+
* We used to have an arbitrary 32 limit for combined diff input,
resulting in an incorrect number of leading colons shown in the
"--raw --cc" output.
If no <pathspec> is given, the current version of Git defaults to
"."; in other words, update all tracked files in the current directory
and its subdirectories. This default will change in a future version
-of Git, hence the form without <filepattern> should not be used.
+of Git, hence the form without <pathspec> should not be used.
-A::
--all::
~~~~~~~~~~~~
After a bisect session, to clean up the bisection state and return to
-the original HEAD, issue the following command:
+the original HEAD (i.e., to quit bisecting), issue the following command:
------------------------------------------------
$ git bisect reset
------------
$ git bisect start HEAD v1.2 -- # HEAD is bad, v1.2 is good
$ git bisect run make # "make" builds the app
+$ git bisect reset # quit the bisect session
------------
* Automatically bisect a test failure between origin and HEAD:
------------
$ git bisect start HEAD origin -- # HEAD is bad, origin is good
$ git bisect run make test # "make test" builds and tests
+$ git bisect reset # quit the bisect session
------------
* Automatically bisect a broken test case:
~/check_test_case.sh # does the test case pass?
$ git bisect start HEAD HEAD~10 -- # culprit is among the last 10
$ git bisect run ~/test.sh
+$ git bisect reset # quit the bisect session
------------
+
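A run script can be any executable: exiting with 0 marks the current
commit as good, 125 asks "git bisect run" to skip it (for example when
it cannot be built), and any other code between 1 and 127 marks it as
bad. A minimal sketch of such a "test.sh", written here in Perl and
assuming "make" builds the project and "~/check_test_case.sh" runs the
failing test:

------------
#!/usr/bin/perl
use strict;
use warnings;

# a commit that does not even build cannot be tested; ask bisect to skip it
exit 125 if system('make') != 0;

# exit 0 (good) when the test passes, 1 (bad) when it fails
exit(system("$ENV{HOME}/check_test_case.sh") == 0 ? 0 : 1);
------------
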
Here we use a "test.sh" custom script. In this script, if "make"
------------
$ git bisect start HEAD HEAD~10 -- # culprit is among the last 10
$ git bisect run sh -c "make || exit 125; ~/check_test_case.sh"
+$ git bisect reset # quit the bisect session
------------
+
This shows that you can do without a run script if you write the test
rm -f tmp.$$
test $rc = 0'
+$ git bisect reset # quit the bisect session
------------
+
In this case, when 'git bisect run' finishes, bisect/bad will refer to a commit that
# Define EXPATDIR=/foo/bar if your expat header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
#
+# Define EXPAT_NEEDS_XMLPARSE_H if you have an old version of expat (e.g.,
+# 1.1 or 1.2) that provides xmlparse.h instead of expat.h.
+#
# Define NO_GETTEXT if you don't want Git output to be translated.
# A translated Git requires GNU libintl or another gettext implementation,
# plus libintl-perl at runtime.
SCRIPT_PYTHON += git-remote-testpy.py
SCRIPT_PYTHON += git-p4.py
-SCRIPTS = $(patsubst %.sh,%,$(SCRIPT_SH)) \
- $(patsubst %.perl,%,$(SCRIPT_PERL)) \
- $(patsubst %.py,%,$(SCRIPT_PYTHON)) \
+# Generated files for scripts
+SCRIPT_SH_GEN = $(patsubst %.sh,%,$(SCRIPT_SH))
+SCRIPT_PERL_GEN = $(patsubst %.perl,%,$(SCRIPT_PERL))
+SCRIPT_PYTHON_GEN = $(patsubst %.py,%,$(SCRIPT_PYTHON))
+
+# Individual rules to allow e.g.
+# "make -C ../.. SCRIPT_PERL=contrib/foo/bar.perl build-perl-script"
+# from subdirectories like contrib/*/
+.PHONY: build-perl-script build-sh-script build-python-script
+build-perl-script: $(SCRIPT_PERL_GEN)
+build-sh-script: $(SCRIPT_SH_GEN)
+build-python-script: $(SCRIPT_PYTHON_GEN)
+
+.PHONY: install-perl-script install-sh-script install-python-script
+install-sh-script: $(SCRIPT_SH_GEN)
+ $(INSTALL) $(SCRIPT_SH_GEN) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
+install-perl-script: $(SCRIPT_PERL_GEN)
+ $(INSTALL) $(SCRIPT_PERL_GEN) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
+install-python-script: $(SCRIPT_PYTHON_GEN)
+ $(INSTALL) $(SCRIPT_PYTHON_GEN) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
+
+.PHONY: clean-perl-script clean-sh-script clean-python-script
+clean-sh-script:
+ $(RM) $(SCRIPT_SH_GEN)
+clean-perl-script:
+ $(RM) $(SCRIPT_PERL_GEN)
+clean-python-script:
+ $(RM) $(SCRIPT_PYTHON_GEN)
+
+SCRIPTS = $(SCRIPT_SH_GEN) \
+ $(SCRIPT_PERL_GEN) \
+ $(SCRIPT_PYTHON_GEN) \
git-instaweb
ETAGS_TARGET = TAGS
else
EXPAT_LIBEXPAT = -lexpat
endif
+ ifdef EXPAT_NEEDS_XMLPARSE_H
+ BASIC_CFLAGS += -DEXPAT_NEEDS_XMLPARSE_H
+ endif
endif
endif
return 0;
}
+static int preimage_sha1_in_gitlink_patch(struct patch *p, unsigned char sha1[20])
+{
+ /*
+ * A usable gitlink patch has only one fragment (hunk) that looks like:
+ * @@ -1 +1 @@
+ * -Subproject commit <old sha1>
+ * +Subproject commit <new sha1>
+ * or
+ * @@ -1 +0,0 @@
+ * -Subproject commit <old sha1>
+ * for a removal patch.
+ */
+ struct fragment *hunk = p->fragments;
+ static const char heading[] = "-Subproject commit ";
+ char *preimage;
+
+ if (/* does the patch have only one hunk? */
+ hunk && !hunk->next &&
+ /* is its preimage one line? */
+ hunk->oldpos == 1 && hunk->oldlines == 1 &&
+ /* does preimage begin with the heading? */
+ (preimage = memchr(hunk->patch, '\n', hunk->size)) != NULL &&
+ !prefixcmp(++preimage, heading) &&
+ /* does it record full SHA-1? */
+ !get_sha1_hex(preimage + sizeof(heading) - 1, sha1) &&
+ preimage[sizeof(heading) + 40 - 1] == '\n' &&
+ /* does the abbreviated name on the index line agree with it? */
+ !prefixcmp(preimage + sizeof(heading) - 1, p->old_sha1_prefix))
+ return 0; /* it all looks fine */
+
+ /* we may have full object name on the index line */
+ return get_sha1_hex(p->old_sha1_prefix, sha1);
+}
+
/* Build an index that contains just the files needed for a 3way merge */
static void build_fake_ancestor(struct patch *list, const char *filename)
{
continue;
if (S_ISGITLINK(patch->old_mode)) {
- if (get_sha1_hex(patch->old_sha1_prefix, sha1))
- die("submoule change for %s without full index name",
+ if (!preimage_sha1_in_gitlink_patch(patch, sha1))
+ ; /* ok, the textual part looks sane */
+ else
+ die("sha1 information is lacking or useless for submoule %s",
name);
} else if (!get_sha1_blob(patch->old_sha1_prefix, sha1)) {
; /* ok */
saw_cr_at_eol ? "\r" : "");
}
-static void dump_sline(struct sline *sline, unsigned long cnt, int num_parent,
+static void dump_sline(struct sline *sline, const char *line_prefix,
+ unsigned long cnt, int num_parent,
int use_color, int result_deleted)
{
unsigned long mark = (1UL<<num_parent);
rlines -= null_context;
}
- fputs(c_frag, stdout);
+ printf("%s%s", line_prefix, c_frag);
for (i = 0; i <= num_parent; i++) putchar(combine_marker);
for (i = 0; i < num_parent; i++)
show_parent_lno(sline, lno, hunk_end, i, null_context);
struct sline *sl = &sline[lno++];
ll = (sl->flag & no_pre_delete) ? NULL : sl->lost_head;
while (ll) {
- fputs(c_old, stdout);
+ printf("%s%s", line_prefix, c_old);
for (j = 0; j < num_parent; j++) {
if (ll->parent_map & (1UL<<j))
putchar('-');
if (cnt < lno)
break;
p_mask = 1;
+ fputs(line_prefix, stdout);
if (!(sl->flag & (mark-1))) {
/*
* This sline was here to hang the
static void dump_quoted_path(const char *head,
const char *prefix,
const char *path,
+ const char *line_prefix,
const char *c_meta, const char *c_reset)
{
static struct strbuf buf = STRBUF_INIT;
strbuf_reset(&buf);
+ strbuf_addstr(&buf, line_prefix);
strbuf_addstr(&buf, c_meta);
strbuf_addstr(&buf, head);
quote_two_c_style(&buf, prefix, path, 0);
int num_parent,
int dense,
struct rev_info *rev,
+ const char *line_prefix,
int mode_differs,
int show_file_header)
{
show_log(rev);
dump_quoted_path(dense ? "diff --cc " : "diff --combined ",
- "", elem->path, c_meta, c_reset);
- printf("%sindex ", c_meta);
+ "", elem->path, line_prefix, c_meta, c_reset);
+ printf("%s%sindex ", line_prefix, c_meta);
for (i = 0; i < num_parent; i++) {
abb = find_unique_abbrev(elem->parent[i].sha1,
abbrev);
DIFF_STATUS_ADDED)
added = 0;
if (added)
- printf("%snew file mode %06o",
- c_meta, elem->mode);
+ printf("%s%snew file mode %06o",
+ line_prefix, c_meta, elem->mode);
else {
if (deleted)
- printf("%sdeleted file ", c_meta);
+ printf("%s%sdeleted file ",
+ line_prefix, c_meta);
printf("mode ");
for (i = 0; i < num_parent; i++) {
printf("%s%06o", i ? "," : "",
if (added)
dump_quoted_path("--- ", "", "/dev/null",
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
else
dump_quoted_path("--- ", a_prefix, elem->path,
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
if (deleted)
dump_quoted_path("+++ ", "", "/dev/null",
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
else
dump_quoted_path("+++ ", b_prefix, elem->path,
- c_meta, c_reset);
+ line_prefix, c_meta, c_reset);
}
static void show_patch_diff(struct combine_diff_path *elem, int num_parent,
struct userdiff_driver *userdiff;
struct userdiff_driver *textconv = NULL;
int is_binary;
+ const char *line_prefix = diff_line_prefix(opt);
context = opt->context;
userdiff = userdiff_find_by_path(elem->path);
}
if (is_binary) {
show_combined_header(elem, num_parent, dense, rev,
- mode_differs, 0);
+ line_prefix, mode_differs, 0);
printf("Binary files differ\n");
free(result);
return;
if (show_hunks || mode_differs || working_tree_file) {
show_combined_header(elem, num_parent, dense, rev,
- mode_differs, 1);
- dump_sline(sline, cnt, num_parent,
+ line_prefix, mode_differs, 1);
+ dump_sline(sline, line_prefix, cnt, num_parent,
opt->use_color, result_deleted);
}
free(result);
{
struct diff_options *opt = &rev->diffopt;
int line_termination, inter_name_termination, i;
+ const char *line_prefix = diff_line_prefix(opt);
line_termination = opt->line_termination;
inter_name_termination = '\t';
if (rev->loginfo && !rev->no_commit_id)
show_log(rev);
+
if (opt->output_format & DIFF_FORMAT_RAW) {
+ printf("%s", line_prefix);
+
/* As many colons as there are parents */
for (i = 0; i < num_parent; i++)
putchar(':');
struct rev_info *rev)
{
struct diff_options *opt = &rev->diffopt;
+
if (!p->len)
return;
if (opt->output_format & (DIFF_FORMAT_RAW |
if (show_log_first && i == 0) {
show_log(rev);
+
if (rev->verbose_header && opt->output_format)
- putchar(opt->line_termination);
+ printf("%s%c", diff_line_prefix(opt),
+ opt->line_termination);
}
diff_flush(&diffopts);
}
if (opt->output_format & DIFF_FORMAT_PATCH) {
if (needsep)
- putchar(opt->line_termination);
+ printf("%s%c", diff_line_prefix(opt),
+ opt->line_termination);
for (p = paths; p; p = p->next) {
if (p->len)
show_patch_diff(p, num_parent, dense,
endif
ifeq ($(uname_S),QNX)
COMPAT_CFLAGS += -DSA_RESTART=0
+ EXPAT_NEEDS_XMLPARSE_H = YesPlease
HAVE_STRINGS_H = YesPlease
NEEDS_SOCKET = YesPlease
NO_FNMATCH_CASEFOLD = YesPlease
while [ $c -gt 1 ]; do
word="${words[c]}"
case "$word" in
- --global|--system|--file=*)
+ --system|--global|--local|--file=*)
config_file="$word"
break
;;
case "$cur" in
--*)
__gitcomp "
- --global --system --file=
+ --system --global --local --file=
--list --replace-all
--get --get-all --get-regexp
--add --unset --unset-all
--- /dev/null
+git-remote-mediawiki
#
-# Copyright (C) 2012
-# Charles Roussel <charles.roussel@ensimag.imag.fr>
-# Simon Cathebras <simon.cathebras@ensimag.imag.fr>
-# Julien Khayat <julien.khayat@ensimag.imag.fr>
-# Guillaume Sasdy <guillaume.sasdy@ensimag.imag.fr>
-# Simon Perrat <simon.perrat@ensimag.imag.fr>
+# Copyright (C) 2013
+# Matthieu Moy <Matthieu.Moy@imag.fr>
#
## Build git-remote-mediawiki
--include ../../config.mak.autogen
--include ../../config.mak
+SCRIPT_PERL=git-remote-mediawiki.perl
+GIT_ROOT_DIR=../..
+HERE=contrib/mw-to-git/
-ifndef PERL_PATH
- PERL_PATH = /usr/bin/perl
-endif
-ifndef gitexecdir
- gitexecdir = $(shell git --exec-path)
-endif
+SCRIPT_PERL_FULL=$(patsubst %,$(HERE)/%,$(SCRIPT_PERL))
-PERL_PATH_SQ = $(subst ','\'',$(PERL_PATH))
-gitexecdir_SQ = $(subst ','\'',$(gitexecdir))
-SCRIPT = git-remote-mediawiki
+all: build
-.PHONY: install help doc test clean
-
-help:
- @echo 'This is the help target of the Makefile. Current configuration:'
- @echo ' gitexecdir = $(gitexecdir_SQ)'
- @echo ' PERL_PATH = $(PERL_PATH_SQ)'
- @echo 'Run "$(MAKE) install" to install $(SCRIPT) in gitexecdir'
- @echo 'Run "$(MAKE) test" to run the testsuite'
-
-install:
- sed -e '1s|#!.*/perl|#!$(PERL_PATH_SQ)|' $(SCRIPT) \
- > '$(gitexecdir_SQ)/$(SCRIPT)'
- chmod +x '$(gitexecdir)/$(SCRIPT)'
-
-doc:
- @echo 'Sorry, "make doc" is not implemented yet for $(SCRIPT)'
-
-test:
- $(MAKE) -C t/ test
-
-clean:
- $(RM) '$(gitexecdir)/$(SCRIPT)'
- $(MAKE) -C t/ clean
+build install clean:
+ $(MAKE) -C $(GIT_ROOT_DIR) SCRIPT_PERL=$(SCRIPT_PERL_FULL) \
+ $@-perl-script
+++ /dev/null
-#! /usr/bin/perl
-
-# Copyright (C) 2011
-# Jérémie Nikaes <jeremie.nikaes@ensimag.imag.fr>
-# Arnaud Lacurie <arnaud.lacurie@ensimag.imag.fr>
-# Claire Fousse <claire.fousse@ensimag.imag.fr>
-# David Amouyal <david.amouyal@ensimag.imag.fr>
-# Matthieu Moy <matthieu.moy@grenoble-inp.fr>
-# License: GPL v2 or later
-
-# Gateway between Git and MediaWiki.
-# Documentation & bugtracker: https://github.com/moy/Git-Mediawiki/
-
-use strict;
-use MediaWiki::API;
-use DateTime::Format::ISO8601;
-
-# By default, use UTF-8 to communicate with Git and the user
-binmode STDERR, ":utf8";
-binmode STDOUT, ":utf8";
-
-use URI::Escape;
-use IPC::Open2;
-
-use warnings;
-
-# Mediawiki filenames can contain forward slashes. This variable decides by which pattern they should be replaced
-use constant SLASH_REPLACEMENT => "%2F";
-
-# It's not always possible to delete pages (may require some
-# priviledges). Deleted pages are replaced with this content.
-use constant DELETED_CONTENT => "[[Category:Deleted]]\n";
-
-# It's not possible to create empty pages. New empty files in Git are
-# sent with this content instead.
-use constant EMPTY_CONTENT => "<!-- empty page -->\n";
-
-# used to reflect file creation or deletion in diff.
-use constant NULL_SHA1 => "0000000000000000000000000000000000000000";
-
-# Used on Git's side to reflect empty edit messages on the wiki
-use constant EMPTY_MESSAGE => '*Empty MediaWiki Message*';
-
-my $remotename = $ARGV[0];
-my $url = $ARGV[1];
-
-# Accept both space-separated and multiple keys in config file.
-# Spaces should be written as _ anyway because we'll use chomp.
-my @tracked_pages = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".pages"));
-chomp(@tracked_pages);
-
-# Just like @tracked_pages, but for MediaWiki categories.
-my @tracked_categories = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".categories"));
-chomp(@tracked_categories);
-
-# Import media files on pull
-my $import_media = run_git("config --get --bool remote.". $remotename .".mediaimport");
-chomp($import_media);
-$import_media = ($import_media eq "true");
-
-# Export media files on push
-my $export_media = run_git("config --get --bool remote.". $remotename .".mediaexport");
-chomp($export_media);
-$export_media = !($export_media eq "false");
-
-my $wiki_login = run_git("config --get remote.". $remotename .".mwLogin");
-# Note: mwPassword is discourraged. Use the credential system instead.
-my $wiki_passwd = run_git("config --get remote.". $remotename .".mwPassword");
-my $wiki_domain = run_git("config --get remote.". $remotename .".mwDomain");
-chomp($wiki_login);
-chomp($wiki_passwd);
-chomp($wiki_domain);
-
-# Import only last revisions (both for clone and fetch)
-my $shallow_import = run_git("config --get --bool remote.". $remotename .".shallow");
-chomp($shallow_import);
-$shallow_import = ($shallow_import eq "true");
-
-# Fetch (clone and pull) by revisions instead of by pages. This behavior
-# is more efficient when we have a wiki with lots of pages and we fetch
-# the revisions quite often so that they concern only few pages.
-# Possible values:
-# - by_rev: perform one query per new revision on the remote wiki
-# - by_page: query each tracked page for new revision
-my $fetch_strategy = run_git("config --get remote.$remotename.fetchStrategy");
-unless ($fetch_strategy) {
- $fetch_strategy = run_git("config --get mediawiki.fetchStrategy");
-}
-chomp($fetch_strategy);
-unless ($fetch_strategy) {
- $fetch_strategy = "by_page";
-}
-
-# Dumb push: don't update notes and mediawiki ref to reflect the last push.
-#
-# Configurable with mediawiki.dumbPush, or per-remote with
-# remote.<remotename>.dumbPush.
-#
-# This means the user will have to re-import the just-pushed
-# revisions. On the other hand, this means that the Git revisions
-# corresponding to MediaWiki revisions are all imported from the wiki,
-# regardless of whether they were initially created in Git or from the
-# web interface, hence all users will get the same history (i.e. if
-# the push from Git to MediaWiki loses some information, everybody
-# will get the history with information lost). If the import is
-# deterministic, this means everybody gets the same sha1 for each
-# MediaWiki revision.
-my $dumb_push = run_git("config --get --bool remote.$remotename.dumbPush");
-unless ($dumb_push) {
- $dumb_push = run_git("config --get --bool mediawiki.dumbPush");
-}
-chomp($dumb_push);
-$dumb_push = ($dumb_push eq "true");
-
-my $wiki_name = $url;
-$wiki_name =~ s/[^\/]*:\/\///;
-# If URL is like http://user:password@example.com/, we clearly don't
-# want the password in $wiki_name. While we're there, also remove user
-# and '@' sign, to avoid author like MWUser@HTTPUser@host.com
-$wiki_name =~ s/^.*@//;
-
-# Commands parser
-my $entry;
-my @cmd;
-while (<STDIN>) {
- chomp;
- @cmd = split(/ /);
- if (defined($cmd[0])) {
- # Line not blank
- if ($cmd[0] eq "capabilities") {
- die("Too many arguments for capabilities") unless (!defined($cmd[1]));
- mw_capabilities();
- } elsif ($cmd[0] eq "list") {
- die("Too many arguments for list") unless (!defined($cmd[2]));
- mw_list($cmd[1]);
- } elsif ($cmd[0] eq "import") {
- die("Invalid arguments for import") unless ($cmd[1] ne "" && !defined($cmd[2]));
- mw_import($cmd[1]);
- } elsif ($cmd[0] eq "option") {
- die("Too many arguments for option") unless ($cmd[1] ne "" && $cmd[2] ne "" && !defined($cmd[3]));
- mw_option($cmd[1],$cmd[2]);
- } elsif ($cmd[0] eq "push") {
- mw_push($cmd[1]);
- } else {
- print STDERR "Unknown command. Aborting...\n";
- last;
- }
- } else {
- # blank line: we should terminate
- last;
- }
-
- BEGIN { $| = 1 } # flush STDOUT, to make sure the previous
- # command is fully processed.
-}
-
-########################## Functions ##############################
-
-## credential API management (generic functions)
-
-sub credential_read {
- my %credential;
- my $reader = shift;
- my $op = shift;
- while (<$reader>) {
- my ($key, $value) = /([^=]*)=(.*)/;
- if (not defined $key) {
- die "ERROR receiving response from git credential $op:\n$_\n";
- }
- $credential{$key} = $value;
- }
- return %credential;
-}
-
-sub credential_write {
- my $credential = shift;
- my $writer = shift;
- # url overwrites other fields, so it must come first
- print $writer "url=$credential->{url}\n" if exists $credential->{url};
- while (my ($key, $value) = each(%$credential) ) {
- if (length $value && $key ne 'url') {
- print $writer "$key=$value\n";
- }
- }
-}
-
-sub credential_run {
- my $op = shift;
- my $credential = shift;
- my $pid = open2(my $reader, my $writer, "git credential $op");
- credential_write($credential, $writer);
- print $writer "\n";
- close($writer);
-
- if ($op eq "fill") {
- %$credential = credential_read($reader, $op);
- } else {
- if (<$reader>) {
- die "ERROR while running git credential $op:\n$_";
- }
- }
- close($reader);
- waitpid($pid, 0);
- my $child_exit_status = $? >> 8;
- if ($child_exit_status != 0) {
- die "'git credential $op' failed with code $child_exit_status.";
- }
-}
-
-# MediaWiki API instance, created lazily.
-my $mediawiki;
-
-sub mw_connect_maybe {
- if ($mediawiki) {
- return;
- }
- $mediawiki = MediaWiki::API->new;
- $mediawiki->{config}->{api_url} = "$url/api.php";
- if ($wiki_login) {
- my %credential = (url => $url);
- $credential{username} = $wiki_login;
- $credential{password} = $wiki_passwd;
- credential_run("fill", \%credential);
- my $request = {lgname => $credential{username},
- lgpassword => $credential{password},
- lgdomain => $wiki_domain};
- if ($mediawiki->login($request)) {
- credential_run("approve", \%credential);
- print STDERR "Logged in mediawiki user \"$credential{username}\".\n";
- } else {
- print STDERR "Failed to log in mediawiki user \"$credential{username}\" on $url\n";
- print STDERR " (error " .
- $mediawiki->{error}->{code} . ': ' .
- $mediawiki->{error}->{details} . ")\n";
- credential_run("reject", \%credential);
- exit 1;
- }
- }
-}
-
-## Functions for listing pages on the remote wiki
-sub get_mw_tracked_pages {
- my $pages = shift;
- get_mw_page_list(\@tracked_pages, $pages);
-}
-
-sub get_mw_page_list {
- my $page_list = shift;
- my $pages = shift;
- my @some_pages = @$page_list;
- while (@some_pages) {
- my $last = 50;
- if ($#some_pages < $last) {
- $last = $#some_pages;
- }
- my @slice = @some_pages[0..$last];
- get_mw_first_pages(\@slice, $pages);
- @some_pages = @some_pages[51..$#some_pages];
- }
-}
-
-sub get_mw_tracked_categories {
- my $pages = shift;
- foreach my $category (@tracked_categories) {
- if (index($category, ':') < 0) {
- # Mediawiki requires the Category
- # prefix, but let's not force the user
- # to specify it.
- $category = "Category:" . $category;
- }
- my $mw_pages = $mediawiki->list( {
- action => 'query',
- list => 'categorymembers',
- cmtitle => $category,
- cmlimit => 'max' } )
- || die $mediawiki->{error}->{code} . ': '
- . $mediawiki->{error}->{details};
- foreach my $page (@{$mw_pages}) {
- $pages->{$page->{title}} = $page;
- }
- }
-}
-
-sub get_mw_all_pages {
- my $pages = shift;
- # No user-provided list, get the list of pages from the API.
- my $mw_pages = $mediawiki->list({
- action => 'query',
- list => 'allpages',
- aplimit => 'max'
- });
- if (!defined($mw_pages)) {
- print STDERR "fatal: could not get the list of wiki pages.\n";
- print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
- print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
- exit 1;
- }
- foreach my $page (@{$mw_pages}) {
- $pages->{$page->{title}} = $page;
- }
-}
-
-# queries the wiki for a set of pages. Meant to be used within a loop
-# querying the wiki for slices of page list.
-sub get_mw_first_pages {
- my $some_pages = shift;
- my @some_pages = @{$some_pages};
-
- my $pages = shift;
-
- # pattern 'page1|page2|...' required by the API
- my $titles = join('|', @some_pages);
-
- my $mw_pages = $mediawiki->api({
- action => 'query',
- titles => $titles,
- });
- if (!defined($mw_pages)) {
- print STDERR "fatal: could not query the list of wiki pages.\n";
- print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
- print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
- exit 1;
- }
- while (my ($id, $page) = each(%{$mw_pages->{query}->{pages}})) {
- if ($id < 0) {
- print STDERR "Warning: page $page->{title} not found on wiki\n";
- } else {
- $pages->{$page->{title}} = $page;
- }
- }
-}
-
-# Get the list of pages to be fetched according to configuration.
-sub get_mw_pages {
- mw_connect_maybe();
-
- print STDERR "Listing pages on remote wiki...\n";
-
- my %pages; # hash on page titles to avoid duplicates
- my $user_defined;
- if (@tracked_pages) {
- $user_defined = 1;
- # The user provided a list of pages titles, but we
- # still need to query the API to get the page IDs.
- get_mw_tracked_pages(\%pages);
- }
- if (@tracked_categories) {
- $user_defined = 1;
- get_mw_tracked_categories(\%pages);
- }
- if (!$user_defined) {
- get_mw_all_pages(\%pages);
- }
- if ($import_media) {
- print STDERR "Getting media files for selected pages...\n";
- if ($user_defined) {
- get_linked_mediafiles(\%pages);
- } else {
- get_all_mediafiles(\%pages);
- }
- }
- print STDERR (scalar keys %pages) . " pages found.\n";
- return %pages;
-}
-
-# usage: $out = run_git("command args");
-# $out = run_git("command args", "raw"); # don't interpret output as UTF-8.
-sub run_git {
- my $args = shift;
- my $encoding = (shift || "encoding(UTF-8)");
- open(my $git, "-|:$encoding", "git " . $args);
- my $res = do { local $/; <$git> };
- close($git);
-
- return $res;
-}
-
-
-sub get_all_mediafiles {
- my $pages = shift;
- # Attach list of all pages for media files from the API,
- # they are in a different namespace, only one namespace
- # can be queried at the same moment
- my $mw_pages = $mediawiki->list({
- action => 'query',
- list => 'allpages',
- apnamespace => get_mw_namespace_id("File"),
- aplimit => 'max'
- });
- if (!defined($mw_pages)) {
- print STDERR "fatal: could not get the list of pages for media files.\n";
- print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
- print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
- exit 1;
- }
- foreach my $page (@{$mw_pages}) {
- $pages->{$page->{title}} = $page;
- }
-}
-
-sub get_linked_mediafiles {
- my $pages = shift;
- my @titles = map $_->{title}, values(%{$pages});
-
- # The query is split in small batches because of the MW API limit of
- # the number of links to be returned (500 links max).
- my $batch = 10;
- while (@titles) {
- if ($#titles < $batch) {
- $batch = $#titles;
- }
- my @slice = @titles[0..$batch];
-
- # pattern 'page1|page2|...' required by the API
- my $mw_titles = join('|', @slice);
-
- # Media files could be included or linked from
- # a page, get all related
- my $query = {
- action => 'query',
- prop => 'links|images',
- titles => $mw_titles,
- plnamespace => get_mw_namespace_id("File"),
- pllimit => 'max'
- };
- my $result = $mediawiki->api($query);
-
- while (my ($id, $page) = each(%{$result->{query}->{pages}})) {
- my @media_titles;
- if (defined($page->{links})) {
- my @link_titles = map $_->{title}, @{$page->{links}};
- push(@media_titles, @link_titles);
- }
- if (defined($page->{images})) {
- my @image_titles = map $_->{title}, @{$page->{images}};
- push(@media_titles, @image_titles);
- }
- if (@media_titles) {
- get_mw_page_list(\@media_titles, $pages);
- }
- }
-
- @titles = @titles[($batch+1)..$#titles];
- }
-}
-
-sub get_mw_mediafile_for_page_revision {
- # Name of the file on Wiki, with the prefix.
- my $filename = shift;
- my $timestamp = shift;
- my %mediafile;
-
- # Search if on a media file with given timestamp exists on
- # MediaWiki. In that case download the file.
- my $query = {
- action => 'query',
- prop => 'imageinfo',
- titles => "File:" . $filename,
- iistart => $timestamp,
- iiend => $timestamp,
- iiprop => 'timestamp|archivename|url',
- iilimit => 1
- };
- my $result = $mediawiki->api($query);
-
- my ($fileid, $file) = each( %{$result->{query}->{pages}} );
- # If not defined it means there is no revision of the file for
- # given timestamp.
- if (defined($file->{imageinfo})) {
- $mediafile{title} = $filename;
-
- my $fileinfo = pop(@{$file->{imageinfo}});
- $mediafile{timestamp} = $fileinfo->{timestamp};
- # Mediawiki::API's download function doesn't support https URLs
- # and can't download old versions of files.
- print STDERR "\tDownloading file $mediafile{title}, version $mediafile{timestamp}\n";
- $mediafile{content} = download_mw_mediafile($fileinfo->{url});
- }
- return %mediafile;
-}
-
-sub download_mw_mediafile {
- my $url = shift;
-
- my $response = $mediawiki->{ua}->get($url);
- if ($response->code == 200) {
- return $response->decoded_content;
- } else {
- print STDERR "Error downloading mediafile from :\n";
- print STDERR "URL: $url\n";
- print STDERR "Server response: " . $response->code . " " . $response->message . "\n";
- exit 1;
- }
-}
-
-sub get_last_local_revision {
- # Get note regarding last mediawiki revision
- my $note = run_git("notes --ref=$remotename/mediawiki show refs/mediawiki/$remotename/master 2>/dev/null");
- my @note_info = split(/ /, $note);
-
- my $lastrevision_number;
- if (!(defined($note_info[0]) && $note_info[0] eq "mediawiki_revision:")) {
- print STDERR "No previous mediawiki revision found";
- $lastrevision_number = 0;
- } else {
- # Notes are formatted : mediawiki_revision: #number
- $lastrevision_number = $note_info[1];
- chomp($lastrevision_number);
- print STDERR "Last local mediawiki revision found is $lastrevision_number";
- }
- return $lastrevision_number;
-}
-
-# Remember the timestamp corresponding to a revision id.
-my %basetimestamps;
-
-# Get the last remote revision without taking in account which pages are
-# tracked or not. This function makes a single request to the wiki thus
-# avoid a loop onto all tracked pages. This is useful for the fetch-by-rev
-# option.
-sub get_last_global_remote_rev {
- mw_connect_maybe();
-
- my $query = {
- action => 'query',
- list => 'recentchanges',
- prop => 'revisions',
- rclimit => '1',
- rcdir => 'older',
- };
- my $result = $mediawiki->api($query);
- return $result->{query}->{recentchanges}[0]->{revid};
-}
-
-# Get the last remote revision concerning the tracked pages and the tracked
-# categories.
-sub get_last_remote_revision {
- mw_connect_maybe();
-
- my %pages_hash = get_mw_pages();
- my @pages = values(%pages_hash);
-
- my $max_rev_num = 0;
-
- print STDERR "Getting last revision id on tracked pages...\n";
-
- foreach my $page (@pages) {
- my $id = $page->{pageid};
-
- my $query = {
- action => 'query',
- prop => 'revisions',
- rvprop => 'ids|timestamp',
- pageids => $id,
- };
-
- my $result = $mediawiki->api($query);
-
- my $lastrev = pop(@{$result->{query}->{pages}->{$id}->{revisions}});
-
- $basetimestamps{$lastrev->{revid}} = $lastrev->{timestamp};
-
- $max_rev_num = ($lastrev->{revid} > $max_rev_num ? $lastrev->{revid} : $max_rev_num);
- }
-
- print STDERR "Last remote revision found is $max_rev_num.\n";
- return $max_rev_num;
-}
-
-# Clean content before sending it to MediaWiki
-sub mediawiki_clean {
- my $string = shift;
- my $page_created = shift;
- # Mediawiki does not allow blank space at the end of a page and ends with a single \n.
- # This function right trims a string and adds a \n at the end to follow this rule
- $string =~ s/\s+$//;
- if ($string eq "" && $page_created) {
- # Creating empty pages is forbidden.
- $string = EMPTY_CONTENT;
- }
- return $string."\n";
-}
-
-# Filter applied on MediaWiki data before adding them to Git
-sub mediawiki_smudge {
- my $string = shift;
- if ($string eq EMPTY_CONTENT) {
- $string = "";
- }
- # This \n is important. This is due to mediawiki's way to handle end of files.
- return $string."\n";
-}
-
-sub mediawiki_clean_filename {
- my $filename = shift;
- $filename =~ s/@{[SLASH_REPLACEMENT]}/\//g;
- # [, ], |, {, and } are forbidden by MediaWiki, even URL-encoded.
- # Do a variant of URL-encoding, i.e. looks like URL-encoding,
- # but with _ added to prevent MediaWiki from thinking this is
- # an actual special character.
- $filename =~ s/[\[\]\{\}\|]/sprintf("_%%_%x", ord($&))/ge;
- # If we use the uri escape before
- # we should unescape here, before anything
-
- return $filename;
-}
-
-sub mediawiki_smudge_filename {
- my $filename = shift;
- $filename =~ s/\//@{[SLASH_REPLACEMENT]}/g;
- $filename =~ s/ /_/g;
- # Decode forbidden characters encoded in mediawiki_clean_filename
- $filename =~ s/_%_([0-9a-fA-F][0-9a-fA-F])/sprintf("%c", hex($1))/ge;
- return $filename;
-}
-
-sub literal_data {
- my ($content) = @_;
- print STDOUT "data ", bytes::length($content), "\n", $content;
-}
-
-sub literal_data_raw {
- # Output possibly binary content.
- my ($content) = @_;
- # Avoid confusion between size in bytes and in characters
- utf8::downgrade($content);
- binmode STDOUT, ":raw";
- print STDOUT "data ", bytes::length($content), "\n", $content;
- binmode STDOUT, ":utf8";
-}
-
-sub mw_capabilities {
- # Revisions are imported to the private namespace
- # refs/mediawiki/$remotename/ by the helper and fetched into
- # refs/remotes/$remotename later by fetch.
- print STDOUT "refspec refs/heads/*:refs/mediawiki/$remotename/*\n";
- print STDOUT "import\n";
- print STDOUT "list\n";
- print STDOUT "push\n";
- print STDOUT "\n";
-}
-
-sub mw_list {
- # MediaWiki do not have branches, we consider one branch arbitrarily
- # called master, and HEAD pointing to it.
- print STDOUT "? refs/heads/master\n";
- print STDOUT "\@refs/heads/master HEAD\n";
- print STDOUT "\n";
-}
-
-sub mw_option {
- print STDERR "remote-helper command 'option $_[0]' not yet implemented\n";
- print STDOUT "unsupported\n";
-}
-
-sub fetch_mw_revisions_for_page {
- my $page = shift;
- my $id = shift;
- my $fetch_from = shift;
- my @page_revs = ();
- my $query = {
- action => 'query',
- prop => 'revisions',
- rvprop => 'ids',
- rvdir => 'newer',
- rvstartid => $fetch_from,
- rvlimit => 500,
- pageids => $id,
- };
-
- my $revnum = 0;
- # Get 500 revisions at a time due to the mediawiki api limit
- while (1) {
- my $result = $mediawiki->api($query);
-
- # Parse each of those 500 revisions
- foreach my $revision (@{$result->{query}->{pages}->{$id}->{revisions}}) {
- my $page_rev_ids;
- $page_rev_ids->{pageid} = $page->{pageid};
- $page_rev_ids->{revid} = $revision->{revid};
- push(@page_revs, $page_rev_ids);
- $revnum++;
- }
- last unless $result->{'query-continue'};
- $query->{rvstartid} = $result->{'query-continue'}->{revisions}->{rvstartid};
- }
- if ($shallow_import && @page_revs) {
- print STDERR " Found 1 revision (shallow import).\n";
- @page_revs = sort {$b->{revid} <=> $a->{revid}} (@page_revs);
- return $page_revs[0];
- }
- print STDERR " Found ", $revnum, " revision(s).\n";
- return @page_revs;
-}
-
-sub fetch_mw_revisions {
- my $pages = shift; my @pages = @{$pages};
- my $fetch_from = shift;
-
- my @revisions = ();
- my $n = 1;
- foreach my $page (@pages) {
- my $id = $page->{pageid};
-
- print STDERR "page $n/", scalar(@pages), ": ". $page->{title} ."\n";
- $n++;
- my @page_revs = fetch_mw_revisions_for_page($page, $id, $fetch_from);
- @revisions = (@page_revs, @revisions);
- }
-
- return ($n, @revisions);
-}
-
-sub fe_escape_path {
- my $path = shift;
- $path =~ s/\\/\\\\/g;
- $path =~ s/"/\\"/g;
- $path =~ s/\n/\\n/g;
- return '"' . $path . '"';
-}
-
-sub import_file_revision {
- my $commit = shift;
- my %commit = %{$commit};
- my $full_import = shift;
- my $n = shift;
- my $mediafile = shift;
- my %mediafile;
- if ($mediafile) {
- %mediafile = %{$mediafile};
- }
-
- my $title = $commit{title};
- my $comment = $commit{comment};
- my $content = $commit{content};
- my $author = $commit{author};
- my $date = $commit{date};
-
- print STDOUT "commit refs/mediawiki/$remotename/master\n";
- print STDOUT "mark :$n\n";
- print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
- literal_data($comment);
-
- # If it's not a clone, we need to know where to start from
- if (!$full_import && $n == 1) {
- print STDOUT "from refs/mediawiki/$remotename/master^0\n";
- }
- if ($content ne DELETED_CONTENT) {
- print STDOUT "M 644 inline " .
- fe_escape_path($title . ".mw") . "\n";
- literal_data($content);
- if (%mediafile) {
- print STDOUT "M 644 inline "
- . fe_escape_path($mediafile{title}) . "\n";
- literal_data_raw($mediafile{content});
- }
- print STDOUT "\n\n";
- } else {
- print STDOUT "D " . fe_escape_path($title . ".mw") . "\n";
- }
-
- # mediawiki revision number in the git note
- if ($full_import && $n == 1) {
- print STDOUT "reset refs/notes/$remotename/mediawiki\n";
- }
- print STDOUT "commit refs/notes/$remotename/mediawiki\n";
- print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
- literal_data("Note added by git-mediawiki during import");
- if (!$full_import && $n == 1) {
- print STDOUT "from refs/notes/$remotename/mediawiki^0\n";
- }
- print STDOUT "N inline :$n\n";
- literal_data("mediawiki_revision: " . $commit{mw_revision});
- print STDOUT "\n\n";
-}
-
-# parse a sequence of
-# <cmd> <arg1>
-# <cmd> <arg2>
-# \n
-# (like batch sequence of import and sequence of push statements)
-sub get_more_refs {
- my $cmd = shift;
- my @refs;
- while (1) {
- my $line = <STDIN>;
- if ($line =~ m/^$cmd (.*)$/) {
- push(@refs, $1);
- } elsif ($line eq "\n") {
- return @refs;
- } else {
- die("Invalid command in a '$cmd' batch: ". $_);
- }
- }
-}
-
-sub mw_import {
- # multiple import commands can follow each other.
- my @refs = (shift, get_more_refs("import"));
- foreach my $ref (@refs) {
- mw_import_ref($ref);
- }
- print STDOUT "done\n";
-}
-
-sub mw_import_ref {
- my $ref = shift;
- # The remote helper will call "import HEAD" and
- # "import refs/heads/master".
- # Since HEAD is a symbolic ref to master (by convention,
- # followed by the output of the command "list" that we gave),
- # we don't need to do anything in this case.
- if ($ref eq "HEAD") {
- return;
- }
-
- mw_connect_maybe();
-
- print STDERR "Searching revisions...\n";
- my $last_local = get_last_local_revision();
- my $fetch_from = $last_local + 1;
- if ($fetch_from == 1) {
- print STDERR ", fetching from beginning.\n";
- } else {
- print STDERR ", fetching from here.\n";
- }
-
- my $n = 0;
- if ($fetch_strategy eq "by_rev") {
- print STDERR "Fetching & writing export data by revs...\n";
- $n = mw_import_ref_by_revs($fetch_from);
- } elsif ($fetch_strategy eq "by_page") {
- print STDERR "Fetching & writing export data by pages...\n";
- $n = mw_import_ref_by_pages($fetch_from);
- } else {
- print STDERR "fatal: invalid fetch strategy \"$fetch_strategy\".\n";
- print STDERR "Check your configuration variables remote.$remotename.fetchStrategy and mediawiki.fetchStrategy\n";
- exit 1;
- }
-
- if ($fetch_from == 1 && $n == 0) {
- print STDERR "You appear to have cloned an empty MediaWiki.\n";
- # Something has to be done remote-helper side. If nothing is done, an error is
- # thrown saying that HEAD is refering to unknown object 0000000000000000000
- # and the clone fails.
- }
-}
-
-sub mw_import_ref_by_pages {
-
- my $fetch_from = shift;
- my %pages_hash = get_mw_pages();
- my @pages = values(%pages_hash);
-
- my ($n, @revisions) = fetch_mw_revisions(\@pages, $fetch_from);
-
- @revisions = sort {$a->{revid} <=> $b->{revid}} @revisions;
- my @revision_ids = map $_->{revid}, @revisions;
-
- return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
-}
-
-sub mw_import_ref_by_revs {
-
- my $fetch_from = shift;
- my %pages_hash = get_mw_pages();
-
- my $last_remote = get_last_global_remote_rev();
- my @revision_ids = $fetch_from..$last_remote;
- return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
-}
-
-# Import revisions given in second argument (array of integers).
-# Only pages appearing in the third argument (hash indexed by page titles)
-# will be imported.
-sub mw_import_revids {
- my $fetch_from = shift;
- my $revision_ids = shift;
- my $pages = shift;
-
- my $n = 0;
- my $n_actual = 0;
- my $last_timestamp = 0; # Placeholer in case $rev->timestamp is undefined
-
- foreach my $pagerevid (@$revision_ids) {
- # Count page even if we skip it, since we display
- # $n/$total and $total includes skipped pages.
- $n++;
-
- # fetch the content of the pages
- my $query = {
- action => 'query',
- prop => 'revisions',
- rvprop => 'content|timestamp|comment|user|ids',
- revids => $pagerevid,
- };
-
- my $result = $mediawiki->api($query);
-
- if (!$result) {
- die "Failed to retrieve modified page for revision $pagerevid";
- }
-
- if (defined($result->{query}->{badrevids}->{$pagerevid})) {
- # The revision id does not exist on the remote wiki.
- next;
- }
-
- if (!defined($result->{query}->{pages})) {
- die "Invalid revision $pagerevid.";
- }
-
- my @result_pages = values(%{$result->{query}->{pages}});
- my $result_page = $result_pages[0];
- my $rev = $result_pages[0]->{revisions}->[0];
-
- my $page_title = $result_page->{title};
-
- if (!exists($pages->{$page_title})) {
- print STDERR "$n/", scalar(@$revision_ids),
- ": Skipping revision #$rev->{revid} of $page_title\n";
- next;
- }
-
- $n_actual++;
-
- my %commit;
- $commit{author} = $rev->{user} || 'Anonymous';
- $commit{comment} = $rev->{comment} || EMPTY_MESSAGE;
- $commit{title} = mediawiki_smudge_filename($page_title);
- $commit{mw_revision} = $rev->{revid};
- $commit{content} = mediawiki_smudge($rev->{'*'});
-
- if (!defined($rev->{timestamp})) {
- $last_timestamp++;
- } else {
- $last_timestamp = $rev->{timestamp};
- }
- $commit{date} = DateTime::Format::ISO8601->parse_datetime($last_timestamp);
-
- # Differentiates classic pages and media files.
- my ($namespace, $filename) = $page_title =~ /^([^:]*):(.*)$/;
- my %mediafile;
- if ($namespace) {
- my $id = get_mw_namespace_id($namespace);
- if ($id && $id == get_mw_namespace_id("File")) {
- %mediafile = get_mw_mediafile_for_page_revision($filename, $rev->{timestamp});
- }
- }
- # If this is a revision of the media page for new version
- # of a file do one common commit for both file and media page.
- # Else do commit only for that page.
- print STDERR "$n/", scalar(@$revision_ids), ": Revision #$rev->{revid} of $commit{title}\n";
- import_file_revision(\%commit, ($fetch_from == 1), $n_actual, \%mediafile);
- }
-
- return $n_actual;
-}
-
-sub error_non_fast_forward {
- my $advice = run_git("config --bool advice.pushNonFastForward");
- chomp($advice);
- if ($advice ne "false") {
- # Native git-push would show this after the summary.
- # We can't ask it to display it cleanly, so print it
- # ourselves before.
- print STDERR "To prevent you from losing history, non-fast-forward updates were rejected\n";
- print STDERR "Merge the remote changes (e.g. 'git pull') before pushing again. See the\n";
- print STDERR "'Note about fast-forwards' section of 'git push --help' for details.\n";
- }
- print STDOUT "error $_[0] \"non-fast-forward\"\n";
- return 0;
-}
-
-sub mw_upload_file {
- my $complete_file_name = shift;
- my $new_sha1 = shift;
- my $extension = shift;
- my $file_deleted = shift;
- my $summary = shift;
- my $newrevid;
- my $path = "File:" . $complete_file_name;
- my %hashFiles = get_allowed_file_extensions();
- if (!exists($hashFiles{$extension})) {
- print STDERR "$complete_file_name is not a permitted file on this wiki.\n";
- print STDERR "Check the configuration of file uploads in your mediawiki.\n";
- return $newrevid;
- }
- # Deleting and uploading a file requires a priviledged user
- if ($file_deleted) {
- mw_connect_maybe();
- my $query = {
- action => 'delete',
- title => $path,
- reason => $summary
- };
- if (!$mediawiki->edit($query)) {
- print STDERR "Failed to delete file on remote wiki\n";
- print STDERR "Check your permissions on the remote site. Error code:\n";
- print STDERR $mediawiki->{error}->{code} . ':' . $mediawiki->{error}->{details};
- exit 1;
- }
- } else {
- # Don't let perl try to interpret file content as UTF-8 => use "raw"
- my $content = run_git("cat-file blob $new_sha1", "raw");
- if ($content ne "") {
- mw_connect_maybe();
- $mediawiki->{config}->{upload_url} =
- "$url/index.php/Special:Upload";
- $mediawiki->edit({
- action => 'upload',
- filename => $complete_file_name,
- comment => $summary,
- file => [undef,
- $complete_file_name,
- Content => $content],
- ignorewarnings => 1,
- }, {
- skip_encoding => 1
- } ) || die $mediawiki->{error}->{code} . ':'
- . $mediawiki->{error}->{details};
- my $last_file_page = $mediawiki->get_page({title => $path});
- $newrevid = $last_file_page->{revid};
- print STDERR "Pushed file: $new_sha1 - $complete_file_name.\n";
- } else {
- print STDERR "Empty file $complete_file_name not pushed.\n";
- }
- }
- return $newrevid;
-}
-
-sub mw_push_file {
- my $diff_info = shift;
- # $diff_info contains a string in this format:
- # 100644 100644 <sha1_of_blob_before_commit> <sha1_of_blob_now> <status>
- my @diff_info_split = split(/[ \t]/, $diff_info);
-
- # Filename, including .mw extension
- my $complete_file_name = shift;
- # Commit message
- my $summary = shift;
- # MediaWiki revision number. Keep the previous one by default,
- # in case there's no edit to perform.
- my $oldrevid = shift;
- my $newrevid;
-
- if ($summary eq EMPTY_MESSAGE) {
- $summary = '';
- }
-
- my $new_sha1 = $diff_info_split[3];
- my $old_sha1 = $diff_info_split[2];
- my $page_created = ($old_sha1 eq NULL_SHA1);
- my $page_deleted = ($new_sha1 eq NULL_SHA1);
- $complete_file_name = mediawiki_clean_filename($complete_file_name);
-
- my ($title, $extension) = $complete_file_name =~ /^(.*)\.([^\.]*)$/;
- if (!defined($extension)) {
- $extension = "";
- }
- if ($extension eq "mw") {
- my $ns = get_mw_namespace_id_for_page($complete_file_name);
- if ($ns && $ns == get_mw_namespace_id("File") && (!$export_media)) {
- print STDERR "Ignoring media file related page: $complete_file_name\n";
- return ($oldrevid, "ok");
- }
- my $file_content;
- if ($page_deleted) {
- # Deleting a page usually requires
- # special priviledges. A common
- # convention is to replace the page
- # with this content instead:
- $file_content = DELETED_CONTENT;
- } else {
- $file_content = run_git("cat-file blob $new_sha1");
- }
-
- mw_connect_maybe();
-
- my $result = $mediawiki->edit( {
- action => 'edit',
- summary => $summary,
- title => $title,
- basetimestamp => $basetimestamps{$oldrevid},
- text => mediawiki_clean($file_content, $page_created),
- }, {
- skip_encoding => 1 # Helps with names with accentuated characters
- });
- if (!$result) {
- if ($mediawiki->{error}->{code} == 3) {
- # edit conflicts, considered as non-fast-forward
- print STDERR 'Warning: Error ' .
- $mediawiki->{error}->{code} .
- ' from mediwiki: ' . $mediawiki->{error}->{details} .
- ".\n";
- return ($oldrevid, "non-fast-forward");
- } else {
- # Other errors. Shouldn't happen => just die()
- die 'Fatal: Error ' .
- $mediawiki->{error}->{code} .
- ' from mediwiki: ' . $mediawiki->{error}->{details};
- }
- }
- $newrevid = $result->{edit}->{newrevid};
- print STDERR "Pushed file: $new_sha1 - $title\n";
- } elsif ($export_media) {
- $newrevid = mw_upload_file($complete_file_name, $new_sha1,
- $extension, $page_deleted,
- $summary);
- } else {
- print STDERR "Ignoring media file $title\n";
- }
- $newrevid = ($newrevid or $oldrevid);
- return ($newrevid, "ok");
-}
-
-sub mw_push {
- # multiple push statements can follow each other
- my @refsspecs = (shift, get_more_refs("push"));
- my $pushed;
- for my $refspec (@refsspecs) {
- my ($force, $local, $remote) = $refspec =~ /^(\+)?([^:]*):([^:]*)$/
- or die("Invalid refspec for push. Expected <src>:<dst> or +<src>:<dst>");
- if ($force) {
- print STDERR "Warning: forced push not allowed on a MediaWiki.\n";
- }
- if ($local eq "") {
- print STDERR "Cannot delete remote branch on a MediaWiki\n";
- print STDOUT "error $remote cannot delete\n";
- next;
- }
- if ($remote ne "refs/heads/master") {
- print STDERR "Only push to the branch 'master' is supported on a MediaWiki\n";
- print STDOUT "error $remote only master allowed\n";
- next;
- }
- if (mw_push_revision($local, $remote)) {
- $pushed = 1;
- }
- }
-
- # Notify Git that the push is done
- print STDOUT "\n";
-
- if ($pushed && $dumb_push) {
- print STDERR "Just pushed some revisions to MediaWiki.\n";
- print STDERR "The pushed revisions now have to be re-imported, and your current branch\n";
- print STDERR "needs to be updated with these re-imported commits. You can do this with\n";
- print STDERR "\n";
- print STDERR " git pull --rebase\n";
- print STDERR "\n";
- }
-}
-
-sub mw_push_revision {
- my $local = shift;
- my $remote = shift; # actually, this has to be "refs/heads/master" at this point.
- my $last_local_revid = get_last_local_revision();
- print STDERR ".\n"; # Finish sentence started by get_last_local_revision()
- my $last_remote_revid = get_last_remote_revision();
- my $mw_revision = $last_remote_revid;
-
- # Get sha1 of commit pointed by local HEAD
- my $HEAD_sha1 = run_git("rev-parse $local 2>/dev/null"); chomp($HEAD_sha1);
- # Get sha1 of commit pointed by remotes/$remotename/master
- my $remoteorigin_sha1 = run_git("rev-parse refs/remotes/$remotename/master 2>/dev/null");
- chomp($remoteorigin_sha1);
-
- if ($last_local_revid > 0 &&
- $last_local_revid < $last_remote_revid) {
- return error_non_fast_forward($remote);
- }
-
- if ($HEAD_sha1 eq $remoteorigin_sha1) {
- # nothing to push
- return 0;
- }
-
- # Get every commit in between HEAD and refs/remotes/origin/master,
- # including HEAD and refs/remotes/origin/master
- my @commit_pairs = ();
- if ($last_local_revid > 0) {
- my $parsed_sha1 = $remoteorigin_sha1;
- # Find a path from last MediaWiki commit to pushed commit
- print STDERR "Computing path from local to remote ...\n";
- my @local_ancestry = split(/\n/, run_git("rev-list --boundary --parents $local ^$parsed_sha1"));
- my %local_ancestry;
- foreach my $line (@local_ancestry) {
- if (my ($child, $parents) = $line =~ m/^-?([a-f0-9]+) ([a-f0-9 ]+)/) {
- foreach my $parent (split(' ', $parents)) {
- $local_ancestry{$parent} = $child;
- }
- } elsif (!$line =~ m/^([a-f0-9]+)/) {
- die "Unexpected output from git rev-list: $line";
- }
- }
- while ($parsed_sha1 ne $HEAD_sha1) {
- my $child = $local_ancestry{$parsed_sha1};
- if (!$child) {
- printf STDERR "Cannot find a path in history from remote commit to last commit\n";
- return error_non_fast_forward($remote);
- }
- push(@commit_pairs, [$parsed_sha1, $child]);
- $parsed_sha1 = $child;
- }
- } else {
- # No remote mediawiki revision. Export the whole
- # history (linearized with --first-parent)
- print STDERR "Warning: no common ancestor, pushing complete history\n";
- my $history = run_git("rev-list --first-parent --children $local");
- my @history = split('\n', $history);
- @history = @history[1..$#history];
- foreach my $line (reverse @history) {
- my @commit_info_split = split(/ |\n/, $line);
- push(@commit_pairs, \@commit_info_split);
- }
- }
-
- foreach my $commit_info_split (@commit_pairs) {
- my $sha1_child = @{$commit_info_split}[0];
- my $sha1_commit = @{$commit_info_split}[1];
- my $diff_infos = run_git("diff-tree -r --raw -z $sha1_child $sha1_commit");
- # TODO: we could detect rename, and encode them with a #redirect on the wiki.
- # TODO: for now, it's just a delete+add
- my @diff_info_list = split(/\0/, $diff_infos);
- # Keep the subject line of the commit message as mediawiki comment for the revision
- my $commit_msg = run_git("log --no-walk --format=\"%s\" $sha1_commit");
- chomp($commit_msg);
- # Push every blob
- while (@diff_info_list) {
- my $status;
- # git diff-tree -z gives an output like
- # <metadata>\0<filename1>\0
- # <metadata>\0<filename2>\0
- # and we've split on \0.
- my $info = shift(@diff_info_list);
- my $file = shift(@diff_info_list);
- ($mw_revision, $status) = mw_push_file($info, $file, $commit_msg, $mw_revision);
- if ($status eq "non-fast-forward") {
- # we may already have sent part of the
- # commit to MediaWiki, but it's too
- # late to cancel it. Stop the push in
- # the middle, but still give an
- # accurate error message.
- return error_non_fast_forward($remote);
- }
- if ($status ne "ok") {
- die("Unknown error from mw_push_file()");
- }
- }
- unless ($dumb_push) {
- run_git("notes --ref=$remotename/mediawiki add -f -m \"mediawiki_revision: $mw_revision\" $sha1_commit");
- run_git("update-ref -m \"Git-MediaWiki push\" refs/mediawiki/$remotename/master $sha1_commit $sha1_child");
- }
- }
-
- print STDOUT "ok $remote\n";
- return 1;
-}
-
-sub get_allowed_file_extensions {
- mw_connect_maybe();
-
- my $query = {
- action => 'query',
- meta => 'siteinfo',
- siprop => 'fileextensions'
- };
- my $result = $mediawiki->api($query);
- my @file_extensions= map $_->{ext},@{$result->{query}->{fileextensions}};
- my %hashFile = map {$_ => 1}@file_extensions;
-
- return %hashFile;
-}
-
-# In memory cache for MediaWiki namespace ids.
-my %namespace_id;
-
-# Namespaces whose id is cached in the configuration file
-# (to avoid duplicates)
-my %cached_mw_namespace_id;
-
-# Return MediaWiki id for a canonical namespace name.
-# Ex.: "File", "Project".
-sub get_mw_namespace_id {
- mw_connect_maybe();
- my $name = shift;
-
- if (!exists $namespace_id{$name}) {
- # Look at configuration file, if the record for that namespace is
- # already cached. Namespaces are stored in form:
- # "Name_of_namespace:Id_namespace", ex.: "File:6".
- my @temp = split(/[\n]/, run_git("config --get-all remote."
- . $remotename .".namespaceCache"));
- chomp(@temp);
- foreach my $ns (@temp) {
- my ($n, $id) = split(/:/, $ns);
- if ($id eq 'notANameSpace') {
- $namespace_id{$n} = {is_namespace => 0};
- } else {
- $namespace_id{$n} = {is_namespace => 1, id => $id};
- }
- $cached_mw_namespace_id{$n} = 1;
- }
- }
-
- if (!exists $namespace_id{$name}) {
- print STDERR "Namespace $name not found in cache, querying the wiki ...\n";
- # NS not found => get namespace id from MW and store it in
- # configuration file.
- my $query = {
- action => 'query',
- meta => 'siteinfo',
- siprop => 'namespaces'
- };
- my $result = $mediawiki->api($query);
-
- while (my ($id, $ns) = each(%{$result->{query}->{namespaces}})) {
- if (defined($ns->{id}) && defined($ns->{canonical})) {
- $namespace_id{$ns->{canonical}} = {is_namespace => 1, id => $ns->{id}};
- if ($ns->{'*'}) {
- # alias (e.g. french Fichier: as alias for canonical File:)
- $namespace_id{$ns->{'*'}} = {is_namespace => 1, id => $ns->{id}};
- }
- }
- }
- }
-
- my $ns = $namespace_id{$name};
- my $id;
-
- unless (defined $ns) {
- print STDERR "No such namespace $name on MediaWiki.\n";
- $ns = {is_namespace => 0};
- $namespace_id{$name} = $ns;
- }
-
- if ($ns->{is_namespace}) {
- $id = $ns->{id};
- }
-
- # Store "notANameSpace" as special value for inexisting namespaces
- my $store_id = ($id || 'notANameSpace');
-
- # Store explicitely requested namespaces on disk
- if (!exists $cached_mw_namespace_id{$name}) {
- run_git("config --add remote.". $remotename
- .".namespaceCache \"". $name .":". $store_id ."\"");
- $cached_mw_namespace_id{$name} = 1;
- }
- return $id;
-}
-
-sub get_mw_namespace_id_for_page {
- if (my ($namespace) = $_[0] =~ /^([^:]*):/) {
- return get_mw_namespace_id($namespace);
- } else {
- return;
- }
-}
--- /dev/null
+#! /usr/bin/perl
+
+# Copyright (C) 2011
+# Jérémie Nikaes <jeremie.nikaes@ensimag.imag.fr>
+# Arnaud Lacurie <arnaud.lacurie@ensimag.imag.fr>
+# Claire Fousse <claire.fousse@ensimag.imag.fr>
+# David Amouyal <david.amouyal@ensimag.imag.fr>
+# Matthieu Moy <matthieu.moy@grenoble-inp.fr>
+# License: GPL v2 or later
+
+# Gateway between Git and MediaWiki.
+# Documentation & bugtracker: https://github.com/moy/Git-Mediawiki/
+
+use strict;
+use MediaWiki::API;
+use DateTime::Format::ISO8601;
+
+# By default, use UTF-8 to communicate with Git and the user
+binmode STDERR, ":utf8";
+binmode STDOUT, ":utf8";
+
+use URI::Escape;
+use IPC::Open2;
+
+use warnings;
+
+# Mediawiki filenames can contain forward slashes. This variable decides by which pattern they should be replaced
+use constant SLASH_REPLACEMENT => "%2F";
+
+# It's not always possible to delete pages (may require some
+# priviledges). Deleted pages are replaced with this content.
+use constant DELETED_CONTENT => "[[Category:Deleted]]\n";
+
+# It's not possible to create empty pages. New empty files in Git are
+# sent with this content instead.
+use constant EMPTY_CONTENT => "<!-- empty page -->\n";
+
+# Used to reflect file creation or deletion in a diff.
+use constant NULL_SHA1 => "0000000000000000000000000000000000000000";
+
+# Used on Git's side to reflect empty edit messages on the wiki
+use constant EMPTY_MESSAGE => '*Empty MediaWiki Message*';
+
+my $remotename = $ARGV[0];
+my $url = $ARGV[1];
+
+# Accept both a single space-separated list and multiple config keys.
+# Spaces in page titles must be written as _ anyway, since space is
+# used as the separator here.
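+#
+# For instance, tracked pages could be configured like this (the remote
+# name "origin" and the page titles are only illustrative):
+#
+#   git config remote.origin.pages 'Main_Page Project_News'
+#   git config --add remote.origin.pages 'Release_Notes'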
+my @tracked_pages = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".pages"));
+chomp(@tracked_pages);
+
+# Just like @tracked_pages, but for MediaWiki categories.
+my @tracked_categories = split(/[ \n]/, run_git("config --get-all remote.". $remotename .".categories"));
+chomp(@tracked_categories);
+
+# Import media files on pull
+my $import_media = run_git("config --get --bool remote.". $remotename .".mediaimport");
+chomp($import_media);
+$import_media = ($import_media eq "true");
+
+# Export media files on push
+my $export_media = run_git("config --get --bool remote.". $remotename .".mediaexport");
+chomp($export_media);
+$export_media = !($export_media eq "false");
+
+my $wiki_login = run_git("config --get remote.". $remotename .".mwLogin");
+# Note: mwPassword is discouraged. Use the credential system instead.
+my $wiki_passwd = run_git("config --get remote.". $remotename .".mwPassword");
+my $wiki_domain = run_git("config --get remote.". $remotename .".mwDomain");
+chomp($wiki_login);
+chomp($wiki_passwd);
+chomp($wiki_domain);
+
+# Import only last revisions (both for clone and fetch)
+my $shallow_import = run_git("config --get --bool remote.". $remotename .".shallow");
+chomp($shallow_import);
+$shallow_import = ($shallow_import eq "true");
+
+# Fetch (clone and pull) by revisions instead of by pages. This behavior
+# is more efficient when the wiki has many pages and we fetch revisions
+# often enough that each fetch concerns only a few pages.
+# Possible values:
+# - by_rev: perform one query per new revision on the remote wiki
+# - by_page: query each tracked page for new revisions
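+#
+# For example, to fetch by revision (the remote name "origin" is only
+# illustrative):
+#   git config remote.origin.fetchStrategy by_rev
+# or, for all MediaWiki remotes:
+#   git config mediawiki.fetchStrategy by_rev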
+my $fetch_strategy = run_git("config --get remote.$remotename.fetchStrategy");
+unless ($fetch_strategy) {
+ $fetch_strategy = run_git("config --get mediawiki.fetchStrategy");
+}
+chomp($fetch_strategy);
+unless ($fetch_strategy) {
+ $fetch_strategy = "by_page";
+}
+
+# Dumb push: don't update notes and mediawiki ref to reflect the last push.
+#
+# Configurable with mediawiki.dumbPush, or per-remote with
+# remote.<remotename>.dumbPush.
+#
+# This means the user will have to re-import the just-pushed
+# revisions. On the other hand, this means that the Git revisions
+# corresponding to MediaWiki revisions are all imported from the wiki,
+# regardless of whether they were initially created in Git or from the
+# web interface, hence all users will get the same history (i.e. if
+# the push from Git to MediaWiki loses some information, everybody
+# will get the history with information lost). If the import is
+# deterministic, this means everybody gets the same sha1 for each
+# MediaWiki revision.
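+#
+# For example (the remote name "origin" is only illustrative):
+#   git config remote.origin.dumbPush true
+# or, for all MediaWiki remotes:
+#   git config mediawiki.dumbPush true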
+my $dumb_push = run_git("config --get --bool remote.$remotename.dumbPush");
+unless ($dumb_push) {
+ $dumb_push = run_git("config --get --bool mediawiki.dumbPush");
+}
+chomp($dumb_push);
+$dumb_push = ($dumb_push eq "true");
+
+my $wiki_name = $url;
+$wiki_name =~ s/[^\/]*:\/\///;
+# If the URL is like http://user:password@example.com/, we clearly don't
+# want the password in $wiki_name. While we're there, also remove the user
+# and the '@' sign, to avoid authors like MWUser@HTTPUser@host.com
+$wiki_name =~ s/^.*@//;
+
+# Commands parser
+my $entry;
+my @cmd;
+while (<STDIN>) {
+ chomp;
+ @cmd = split(/ /);
+ if (defined($cmd[0])) {
+ # Line not blank
+ if ($cmd[0] eq "capabilities") {
+ die("Too many arguments for capabilities") unless (!defined($cmd[1]));
+ mw_capabilities();
+ } elsif ($cmd[0] eq "list") {
+ die("Too many arguments for list") unless (!defined($cmd[2]));
+ mw_list($cmd[1]);
+ } elsif ($cmd[0] eq "import") {
+ die("Invalid arguments for import") unless ($cmd[1] ne "" && !defined($cmd[2]));
+ mw_import($cmd[1]);
+ } elsif ($cmd[0] eq "option") {
+ die("Too many arguments for option") unless ($cmd[1] ne "" && $cmd[2] ne "" && !defined($cmd[3]));
+ mw_option($cmd[1],$cmd[2]);
+ } elsif ($cmd[0] eq "push") {
+ mw_push($cmd[1]);
+ } else {
+ print STDERR "Unknown command. Aborting...\n";
+ last;
+ }
+ } else {
+ # blank line: we should terminate
+ last;
+ }
+
+ BEGIN { $| = 1 } # flush STDOUT, to make sure the previous
+ # command is fully processed.
+}
+
+########################## Functions ##############################
+
+## credential API management (generic functions)
+
+sub credential_read {
+ my %credential;
+ my $reader = shift;
+ my $op = shift;
+ while (<$reader>) {
+ my ($key, $value) = /([^=]*)=(.*)/;
+ if (not defined $key) {
+ die "ERROR receiving response from git credential $op:\n$_\n";
+ }
+ $credential{$key} = $value;
+ }
+ return %credential;
+}
+
+sub credential_write {
+ my $credential = shift;
+ my $writer = shift;
+ # url overwrites other fields, so it must come first
+ print $writer "url=$credential->{url}\n" if exists $credential->{url};
+ while (my ($key, $value) = each(%$credential) ) {
+ if (length $value && $key ne 'url') {
+ print $writer "$key=$value\n";
+ }
+ }
+}
+
+sub credential_run {
+ my $op = shift;
+ my $credential = shift;
+ my $pid = open2(my $reader, my $writer, "git credential $op");
+ credential_write($credential, $writer);
+ print $writer "\n";
+ close($writer);
+
+ if ($op eq "fill") {
+ %$credential = credential_read($reader, $op);
+ } else {
+ if (<$reader>) {
+ die "ERROR while running git credential $op:\n$_";
+ }
+ }
+ close($reader);
+ waitpid($pid, 0);
+ my $child_exit_status = $? >> 8;
+ if ($child_exit_status != 0) {
+ die "'git credential $op' failed with code $child_exit_status.";
+ }
+}
+
+# MediaWiki API instance, created lazily.
+my $mediawiki;
+
+sub mw_connect_maybe {
+ if ($mediawiki) {
+ return;
+ }
+ $mediawiki = MediaWiki::API->new;
+ $mediawiki->{config}->{api_url} = "$url/api.php";
+ if ($wiki_login) {
+ my %credential = (url => $url);
+ $credential{username} = $wiki_login;
+ $credential{password} = $wiki_passwd;
+ credential_run("fill", \%credential);
+ my $request = {lgname => $credential{username},
+ lgpassword => $credential{password},
+ lgdomain => $wiki_domain};
+ if ($mediawiki->login($request)) {
+ credential_run("approve", \%credential);
+ print STDERR "Logged in mediawiki user \"$credential{username}\".\n";
+ } else {
+ print STDERR "Failed to log in mediawiki user \"$credential{username}\" on $url\n";
+ print STDERR " (error " .
+ $mediawiki->{error}->{code} . ': ' .
+ $mediawiki->{error}->{details} . ")\n";
+ credential_run("reject", \%credential);
+ exit 1;
+ }
+ }
+}
+
+## Functions for listing pages on the remote wiki
+sub get_mw_tracked_pages {
+ my $pages = shift;
+ get_mw_page_list(\@tracked_pages, $pages);
+}
+
+sub get_mw_page_list {
+ my $page_list = shift;
+ my $pages = shift;
+ my @some_pages = @$page_list;
+ while (@some_pages) {
+ my $last = 50;
+ if ($#some_pages < $last) {
+ $last = $#some_pages;
+ }
+ my @slice = @some_pages[0..$last];
+ get_mw_first_pages(\@slice, $pages);
+ @some_pages = @some_pages[51..$#some_pages];
+ }
+}
+
+sub get_mw_tracked_categories {
+ my $pages = shift;
+ foreach my $category (@tracked_categories) {
+ if (index($category, ':') < 0) {
+ # Mediawiki requires the Category
+ # prefix, but let's not force the user
+ # to specify it.
+ $category = "Category:" . $category;
+ }
+ my $mw_pages = $mediawiki->list( {
+ action => 'query',
+ list => 'categorymembers',
+ cmtitle => $category,
+ cmlimit => 'max' } )
+ || die $mediawiki->{error}->{code} . ': '
+ . $mediawiki->{error}->{details};
+ foreach my $page (@{$mw_pages}) {
+ $pages->{$page->{title}} = $page;
+ }
+ }
+}
+
+sub get_mw_all_pages {
+ my $pages = shift;
+ # No user-provided list, get the list of pages from the API.
+ my $mw_pages = $mediawiki->list({
+ action => 'query',
+ list => 'allpages',
+ aplimit => 'max'
+ });
+ if (!defined($mw_pages)) {
+ print STDERR "fatal: could not get the list of wiki pages.\n";
+ print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
+ print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
+ exit 1;
+ }
+ foreach my $page (@{$mw_pages}) {
+ $pages->{$page->{title}} = $page;
+ }
+}
+
+# Queries the wiki for a set of pages. Meant to be used within a loop
+# querying the wiki for slices of the page list.
+sub get_mw_first_pages {
+ my $some_pages = shift;
+ my @some_pages = @{$some_pages};
+
+ my $pages = shift;
+
+ # pattern 'page1|page2|...' required by the API
+ my $titles = join('|', @some_pages);
+
+ my $mw_pages = $mediawiki->api({
+ action => 'query',
+ titles => $titles,
+ });
+ if (!defined($mw_pages)) {
+ print STDERR "fatal: could not query the list of wiki pages.\n";
+ print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
+ print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
+ exit 1;
+ }
+ while (my ($id, $page) = each(%{$mw_pages->{query}->{pages}})) {
+ if ($id < 0) {
+ print STDERR "Warning: page $page->{title} not found on wiki\n";
+ } else {
+ $pages->{$page->{title}} = $page;
+ }
+ }
+}
+
+# Get the list of pages to be fetched according to configuration.
+sub get_mw_pages {
+ mw_connect_maybe();
+
+ print STDERR "Listing pages on remote wiki...\n";
+
+ my %pages; # hash on page titles to avoid duplicates
+ my $user_defined;
+ if (@tracked_pages) {
+ $user_defined = 1;
+		# The user provided a list of page titles, but we
+ # still need to query the API to get the page IDs.
+ get_mw_tracked_pages(\%pages);
+ }
+ if (@tracked_categories) {
+ $user_defined = 1;
+ get_mw_tracked_categories(\%pages);
+ }
+ if (!$user_defined) {
+ get_mw_all_pages(\%pages);
+ }
+ if ($import_media) {
+ print STDERR "Getting media files for selected pages...\n";
+ if ($user_defined) {
+ get_linked_mediafiles(\%pages);
+ } else {
+ get_all_mediafiles(\%pages);
+ }
+ }
+ print STDERR (scalar keys %pages) . " pages found.\n";
+ return %pages;
+}
+
+# usage: $out = run_git("command args");
+# $out = run_git("command args", "raw"); # don't interpret output as UTF-8.
+sub run_git {
+ my $args = shift;
+ my $encoding = (shift || "encoding(UTF-8)");
+ open(my $git, "-|:$encoding", "git " . $args);
+ my $res = do { local $/; <$git> };
+ close($git);
+
+ return $res;
+}
+
+
+sub get_all_mediafiles {
+ my $pages = shift;
+	# Get the list of pages for media files from the API;
+	# they are in a different namespace, and only one namespace
+	# can be queried at a time.
+ my $mw_pages = $mediawiki->list({
+ action => 'query',
+ list => 'allpages',
+ apnamespace => get_mw_namespace_id("File"),
+ aplimit => 'max'
+ });
+ if (!defined($mw_pages)) {
+ print STDERR "fatal: could not get the list of pages for media files.\n";
+ print STDERR "fatal: '$url' does not appear to be a mediawiki\n";
+ print STDERR "fatal: make sure '$url/api.php' is a valid page.\n";
+ exit 1;
+ }
+ foreach my $page (@{$mw_pages}) {
+ $pages->{$page->{title}} = $page;
+ }
+}
+
+sub get_linked_mediafiles {
+ my $pages = shift;
+ my @titles = map $_->{title}, values(%{$pages});
+
+	# The query is split into small batches because of the MW API limit on
+	# the number of links to be returned (500 links max).
+ my $batch = 10;
+ while (@titles) {
+ if ($#titles < $batch) {
+ $batch = $#titles;
+ }
+ my @slice = @titles[0..$batch];
+
+ # pattern 'page1|page2|...' required by the API
+ my $mw_titles = join('|', @slice);
+
+		# Media files can be included in or linked from
+		# a page; get all of them.
+ my $query = {
+ action => 'query',
+ prop => 'links|images',
+ titles => $mw_titles,
+ plnamespace => get_mw_namespace_id("File"),
+ pllimit => 'max'
+ };
+ my $result = $mediawiki->api($query);
+
+ while (my ($id, $page) = each(%{$result->{query}->{pages}})) {
+ my @media_titles;
+ if (defined($page->{links})) {
+ my @link_titles = map $_->{title}, @{$page->{links}};
+ push(@media_titles, @link_titles);
+ }
+ if (defined($page->{images})) {
+ my @image_titles = map $_->{title}, @{$page->{images}};
+ push(@media_titles, @image_titles);
+ }
+ if (@media_titles) {
+ get_mw_page_list(\@media_titles, $pages);
+ }
+ }
+
+ @titles = @titles[($batch+1)..$#titles];
+ }
+}
+
+sub get_mw_mediafile_for_page_revision {
+ # Name of the file on Wiki, with the prefix.
+ my $filename = shift;
+ my $timestamp = shift;
+ my %mediafile;
+
+	# Check whether a media file with the given timestamp exists on
+	# MediaWiki. If so, download the file.
+ my $query = {
+ action => 'query',
+ prop => 'imageinfo',
+ titles => "File:" . $filename,
+ iistart => $timestamp,
+ iiend => $timestamp,
+ iiprop => 'timestamp|archivename|url',
+ iilimit => 1
+ };
+ my $result = $mediawiki->api($query);
+
+ my ($fileid, $file) = each( %{$result->{query}->{pages}} );
+	# If not defined, it means there is no revision of the file for
+	# the given timestamp.
+ if (defined($file->{imageinfo})) {
+ $mediafile{title} = $filename;
+
+ my $fileinfo = pop(@{$file->{imageinfo}});
+ $mediafile{timestamp} = $fileinfo->{timestamp};
+ # Mediawiki::API's download function doesn't support https URLs
+ # and can't download old versions of files.
+ print STDERR "\tDownloading file $mediafile{title}, version $mediafile{timestamp}\n";
+ $mediafile{content} = download_mw_mediafile($fileinfo->{url});
+ }
+ return %mediafile;
+}
+
+sub download_mw_mediafile {
+ my $url = shift;
+
+ my $response = $mediawiki->{ua}->get($url);
+ if ($response->code == 200) {
+ return $response->decoded_content;
+ } else {
+		print STDERR "Error downloading mediafile from:\n";
+ print STDERR "URL: $url\n";
+ print STDERR "Server response: " . $response->code . " " . $response->message . "\n";
+ exit 1;
+ }
+}
+
+sub get_last_local_revision {
+ # Get note regarding last mediawiki revision
+ my $note = run_git("notes --ref=$remotename/mediawiki show refs/mediawiki/$remotename/master 2>/dev/null");
+ my @note_info = split(/ /, $note);
+
+ my $lastrevision_number;
+ if (!(defined($note_info[0]) && $note_info[0] eq "mediawiki_revision:")) {
+ print STDERR "No previous mediawiki revision found";
+ $lastrevision_number = 0;
+ } else {
+		# Notes are formatted: mediawiki_revision: #number
+ $lastrevision_number = $note_info[1];
+ chomp($lastrevision_number);
+ print STDERR "Last local mediawiki revision found is $lastrevision_number";
+ }
+ return $lastrevision_number;
+}
+
+# Remember the timestamp corresponding to a revision id.
+my %basetimestamps;
+
+# Get the last remote revision without taking into account which pages are
+# tracked or not. This function makes a single request to the wiki, thus
+# avoiding a loop over all tracked pages. This is useful for the fetch-by-rev
+# option.
+sub get_last_global_remote_rev {
+ mw_connect_maybe();
+
+ my $query = {
+ action => 'query',
+ list => 'recentchanges',
+ prop => 'revisions',
+ rclimit => '1',
+ rcdir => 'older',
+ };
+ my $result = $mediawiki->api($query);
+ return $result->{query}->{recentchanges}[0]->{revid};
+}
+
+# Get the last remote revision concerning the tracked pages and the tracked
+# categories.
+sub get_last_remote_revision {
+ mw_connect_maybe();
+
+ my %pages_hash = get_mw_pages();
+ my @pages = values(%pages_hash);
+
+ my $max_rev_num = 0;
+
+ print STDERR "Getting last revision id on tracked pages...\n";
+
+ foreach my $page (@pages) {
+ my $id = $page->{pageid};
+
+ my $query = {
+ action => 'query',
+ prop => 'revisions',
+ rvprop => 'ids|timestamp',
+ pageids => $id,
+ };
+
+ my $result = $mediawiki->api($query);
+
+ my $lastrev = pop(@{$result->{query}->{pages}->{$id}->{revisions}});
+
+ $basetimestamps{$lastrev->{revid}} = $lastrev->{timestamp};
+
+ $max_rev_num = ($lastrev->{revid} > $max_rev_num ? $lastrev->{revid} : $max_rev_num);
+ }
+
+ print STDERR "Last remote revision found is $max_rev_num.\n";
+ return $max_rev_num;
+}
+
+# Clean content before sending it to MediaWiki
+sub mediawiki_clean {
+ my $string = shift;
+ my $page_created = shift;
+	# MediaWiki does not allow blank space at the end of a page; pages end with a single \n.
+	# This function right-trims the string and appends a \n to follow this rule.
+ $string =~ s/\s+$//;
+ if ($string eq "" && $page_created) {
+ # Creating empty pages is forbidden.
+ $string = EMPTY_CONTENT;
+ }
+ return $string."\n";
+}
+
+# Filter applied on MediaWiki data before adding them to Git
+sub mediawiki_smudge {
+ my $string = shift;
+ if ($string eq EMPTY_CONTENT) {
+ $string = "";
+ }
+	# This \n is important, due to the way MediaWiki handles the end of files.
+ return $string."\n";
+}
+
+sub mediawiki_clean_filename {
+ my $filename = shift;
+ $filename =~ s/@{[SLASH_REPLACEMENT]}/\//g;
+	# [, ], |, {, and } are forbidden by MediaWiki, even URL-encoded.
+	# Use a variant of URL-encoding, i.e. one that looks like URL-encoding
+	# but with _ added, to prevent MediaWiki from interpreting it as
+	# an actual special character.
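+	# For example, a Git file named "Foo|Bar.mw" (an illustrative name only)
+	# would be sent to the wiki as "Foo_%_7cBar.mw" (0x7c is the code of '|');
+	# mediawiki_smudge_filename below decodes it back.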
+ $filename =~ s/[\[\]\{\}\|]/sprintf("_%%_%x", ord($&))/ge;
+	# If we used URI escaping earlier,
+	# we should unescape here, before anything else.
+
+ return $filename;
+}
+
+sub mediawiki_smudge_filename {
+ my $filename = shift;
+ $filename =~ s/\//@{[SLASH_REPLACEMENT]}/g;
+ $filename =~ s/ /_/g;
+ # Decode forbidden characters encoded in mediawiki_clean_filename
+ $filename =~ s/_%_([0-9a-fA-F][0-9a-fA-F])/sprintf("%c", hex($1))/ge;
+ return $filename;
+}
+
+sub literal_data {
+ my ($content) = @_;
+ print STDOUT "data ", bytes::length($content), "\n", $content;
+}
+
+sub literal_data_raw {
+ # Output possibly binary content.
+ my ($content) = @_;
+ # Avoid confusion between size in bytes and in characters
+ utf8::downgrade($content);
+ binmode STDOUT, ":raw";
+ print STDOUT "data ", bytes::length($content), "\n", $content;
+ binmode STDOUT, ":utf8";
+}
+
+sub mw_capabilities {
+ # Revisions are imported to the private namespace
+ # refs/mediawiki/$remotename/ by the helper and fetched into
+ # refs/remotes/$remotename later by fetch.
+ print STDOUT "refspec refs/heads/*:refs/mediawiki/$remotename/*\n";
+ print STDOUT "import\n";
+ print STDOUT "list\n";
+ print STDOUT "push\n";
+ print STDOUT "\n";
+}
+
+sub mw_list {
+	# MediaWiki does not have branches; we consider one branch arbitrarily
+	# called master, with HEAD pointing to it.
+ print STDOUT "? refs/heads/master\n";
+ print STDOUT "\@refs/heads/master HEAD\n";
+ print STDOUT "\n";
+}
+
+sub mw_option {
+ print STDERR "remote-helper command 'option $_[0]' not yet implemented\n";
+ print STDOUT "unsupported\n";
+}
+
+sub fetch_mw_revisions_for_page {
+ my $page = shift;
+ my $id = shift;
+ my $fetch_from = shift;
+ my @page_revs = ();
+ my $query = {
+ action => 'query',
+ prop => 'revisions',
+ rvprop => 'ids',
+ rvdir => 'newer',
+ rvstartid => $fetch_from,
+ rvlimit => 500,
+ pageids => $id,
+ };
+
+ my $revnum = 0;
+ # Get 500 revisions at a time due to the mediawiki api limit
+ while (1) {
+ my $result = $mediawiki->api($query);
+
+ # Parse each of those 500 revisions
+ foreach my $revision (@{$result->{query}->{pages}->{$id}->{revisions}}) {
+ my $page_rev_ids;
+ $page_rev_ids->{pageid} = $page->{pageid};
+ $page_rev_ids->{revid} = $revision->{revid};
+ push(@page_revs, $page_rev_ids);
+ $revnum++;
+ }
+ last unless $result->{'query-continue'};
+ $query->{rvstartid} = $result->{'query-continue'}->{revisions}->{rvstartid};
+ }
+ if ($shallow_import && @page_revs) {
+ print STDERR " Found 1 revision (shallow import).\n";
+ @page_revs = sort {$b->{revid} <=> $a->{revid}} (@page_revs);
+ return $page_revs[0];
+ }
+ print STDERR " Found ", $revnum, " revision(s).\n";
+ return @page_revs;
+}
+
+sub fetch_mw_revisions {
+ my $pages = shift; my @pages = @{$pages};
+ my $fetch_from = shift;
+
+ my @revisions = ();
+ my $n = 1;
+ foreach my $page (@pages) {
+ my $id = $page->{pageid};
+
+ print STDERR "page $n/", scalar(@pages), ": ". $page->{title} ."\n";
+ $n++;
+ my @page_revs = fetch_mw_revisions_for_page($page, $id, $fetch_from);
+ @revisions = (@page_revs, @revisions);
+ }
+
+ return ($n, @revisions);
+}
+
+sub fe_escape_path {
+ my $path = shift;
+ $path =~ s/\\/\\\\/g;
+ $path =~ s/"/\\"/g;
+ $path =~ s/\n/\\n/g;
+ return '"' . $path . '"';
+}
+
+sub import_file_revision {
+ my $commit = shift;
+ my %commit = %{$commit};
+ my $full_import = shift;
+ my $n = shift;
+ my $mediafile = shift;
+ my %mediafile;
+ if ($mediafile) {
+ %mediafile = %{$mediafile};
+ }
+
+ my $title = $commit{title};
+ my $comment = $commit{comment};
+ my $content = $commit{content};
+ my $author = $commit{author};
+ my $date = $commit{date};
+
+ print STDOUT "commit refs/mediawiki/$remotename/master\n";
+ print STDOUT "mark :$n\n";
+ print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
+ literal_data($comment);
+
+ # If it's not a clone, we need to know where to start from
+ if (!$full_import && $n == 1) {
+ print STDOUT "from refs/mediawiki/$remotename/master^0\n";
+ }
+ if ($content ne DELETED_CONTENT) {
+ print STDOUT "M 644 inline " .
+ fe_escape_path($title . ".mw") . "\n";
+ literal_data($content);
+ if (%mediafile) {
+ print STDOUT "M 644 inline "
+ . fe_escape_path($mediafile{title}) . "\n";
+ literal_data_raw($mediafile{content});
+ }
+ print STDOUT "\n\n";
+ } else {
+ print STDOUT "D " . fe_escape_path($title . ".mw") . "\n";
+ }
+
+ # mediawiki revision number in the git note
+ if ($full_import && $n == 1) {
+ print STDOUT "reset refs/notes/$remotename/mediawiki\n";
+ }
+ print STDOUT "commit refs/notes/$remotename/mediawiki\n";
+ print STDOUT "committer $author <$author\@$wiki_name> ", $date->epoch, " +0000\n";
+ literal_data("Note added by git-mediawiki during import");
+ if (!$full_import && $n == 1) {
+ print STDOUT "from refs/notes/$remotename/mediawiki^0\n";
+ }
+ print STDOUT "N inline :$n\n";
+ literal_data("mediawiki_revision: " . $commit{mw_revision});
+ print STDOUT "\n\n";
+}
+
+# Parse a sequence of
+# <cmd> <arg1>
+# <cmd> <arg2>
+# \n
+# (like a batch sequence of import commands, or a sequence of push statements)
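+#
+# For example, an "import" batch read from stdin might look like this
+# (the refs are only illustrative):
+#
+#   import refs/heads/master
+#   import HEAD
+#   <empty line>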
+sub get_more_refs {
+ my $cmd = shift;
+ my @refs;
+ while (1) {
+ my $line = <STDIN>;
+ if ($line =~ m/^$cmd (.*)$/) {
+ push(@refs, $1);
+ } elsif ($line eq "\n") {
+ return @refs;
+ } else {
+			die("Invalid command in a '$cmd' batch: ". $line);
+ }
+ }
+}
+
+sub mw_import {
+ # multiple import commands can follow each other.
+ my @refs = (shift, get_more_refs("import"));
+ foreach my $ref (@refs) {
+ mw_import_ref($ref);
+ }
+ print STDOUT "done\n";
+}
+
+sub mw_import_ref {
+ my $ref = shift;
+	# The remote helper will call "import HEAD" and
+	# "import refs/heads/master".
+	# Since HEAD is a symbolic ref pointing to master (a convention
+	# followed by the output of the "list" command we implement),
+	# we don't need to do anything in this case.
+ if ($ref eq "HEAD") {
+ return;
+ }
+
+ mw_connect_maybe();
+
+ print STDERR "Searching revisions...\n";
+ my $last_local = get_last_local_revision();
+ my $fetch_from = $last_local + 1;
+ if ($fetch_from == 1) {
+ print STDERR ", fetching from beginning.\n";
+ } else {
+ print STDERR ", fetching from here.\n";
+ }
+
+ my $n = 0;
+ if ($fetch_strategy eq "by_rev") {
+ print STDERR "Fetching & writing export data by revs...\n";
+ $n = mw_import_ref_by_revs($fetch_from);
+ } elsif ($fetch_strategy eq "by_page") {
+ print STDERR "Fetching & writing export data by pages...\n";
+ $n = mw_import_ref_by_pages($fetch_from);
+ } else {
+ print STDERR "fatal: invalid fetch strategy \"$fetch_strategy\".\n";
+ print STDERR "Check your configuration variables remote.$remotename.fetchStrategy and mediawiki.fetchStrategy\n";
+ exit 1;
+ }
+
+ if ($fetch_from == 1 && $n == 0) {
+ print STDERR "You appear to have cloned an empty MediaWiki.\n";
+		# Something has to be done remote-helper side. If nothing is done, an error is
+		# thrown saying that HEAD is referring to unknown object 0000000000000000000
+		# and the clone fails.
+ }
+}
+
+sub mw_import_ref_by_pages {
+
+ my $fetch_from = shift;
+ my %pages_hash = get_mw_pages();
+ my @pages = values(%pages_hash);
+
+ my ($n, @revisions) = fetch_mw_revisions(\@pages, $fetch_from);
+
+ @revisions = sort {$a->{revid} <=> $b->{revid}} @revisions;
+ my @revision_ids = map $_->{revid}, @revisions;
+
+ return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
+}
+
+sub mw_import_ref_by_revs {
+
+ my $fetch_from = shift;
+ my %pages_hash = get_mw_pages();
+
+ my $last_remote = get_last_global_remote_rev();
+ my @revision_ids = $fetch_from..$last_remote;
+ return mw_import_revids($fetch_from, \@revision_ids, \%pages_hash);
+}
+
+# Import the revisions given in the second argument (an array of integers).
+# Only pages appearing in the third argument (a hash indexed by page titles)
+# will be imported.
+sub mw_import_revids {
+ my $fetch_from = shift;
+ my $revision_ids = shift;
+ my $pages = shift;
+
+ my $n = 0;
+ my $n_actual = 0;
+	my $last_timestamp = 0; # Placeholder in case $rev->timestamp is undefined
+
+ foreach my $pagerevid (@$revision_ids) {
+ # Count page even if we skip it, since we display
+ # $n/$total and $total includes skipped pages.
+ $n++;
+
+ # fetch the content of the pages
+ my $query = {
+ action => 'query',
+ prop => 'revisions',
+ rvprop => 'content|timestamp|comment|user|ids',
+ revids => $pagerevid,
+ };
+
+ my $result = $mediawiki->api($query);
+
+ if (!$result) {
+ die "Failed to retrieve modified page for revision $pagerevid";
+ }
+
+ if (defined($result->{query}->{badrevids}->{$pagerevid})) {
+ # The revision id does not exist on the remote wiki.
+ next;
+ }
+
+ if (!defined($result->{query}->{pages})) {
+ die "Invalid revision $pagerevid.";
+ }
+
+ my @result_pages = values(%{$result->{query}->{pages}});
+ my $result_page = $result_pages[0];
+ my $rev = $result_pages[0]->{revisions}->[0];
+
+ my $page_title = $result_page->{title};
+
+ if (!exists($pages->{$page_title})) {
+ print STDERR "$n/", scalar(@$revision_ids),
+ ": Skipping revision #$rev->{revid} of $page_title\n";
+ next;
+ }
+
+ $n_actual++;
+
+ my %commit;
+ $commit{author} = $rev->{user} || 'Anonymous';
+ $commit{comment} = $rev->{comment} || EMPTY_MESSAGE;
+ $commit{title} = mediawiki_smudge_filename($page_title);
+ $commit{mw_revision} = $rev->{revid};
+ $commit{content} = mediawiki_smudge($rev->{'*'});
+
+ if (!defined($rev->{timestamp})) {
+ $last_timestamp++;
+ } else {
+ $last_timestamp = $rev->{timestamp};
+ }
+ $commit{date} = DateTime::Format::ISO8601->parse_datetime($last_timestamp);
+
+		# Differentiate between classic pages and media files.
+ my ($namespace, $filename) = $page_title =~ /^([^:]*):(.*)$/;
+ my %mediafile;
+ if ($namespace) {
+ my $id = get_mw_namespace_id($namespace);
+ if ($id && $id == get_mw_namespace_id("File")) {
+ %mediafile = get_mw_mediafile_for_page_revision($filename, $rev->{timestamp});
+ }
+ }
+		# If this is a revision of the media page for a new version
+		# of a file, do one common commit for both the file and the media page.
+		# Otherwise, commit only that page.
+ print STDERR "$n/", scalar(@$revision_ids), ": Revision #$rev->{revid} of $commit{title}\n";
+ import_file_revision(\%commit, ($fetch_from == 1), $n_actual, \%mediafile);
+ }
+
+ return $n_actual;
+}
+
+sub error_non_fast_forward {
+ my $advice = run_git("config --bool advice.pushNonFastForward");
+ chomp($advice);
+ if ($advice ne "false") {
+ # Native git-push would show this after the summary.
+ # We can't ask it to display it cleanly, so print it
+ # ourselves before.
+ print STDERR "To prevent you from losing history, non-fast-forward updates were rejected\n";
+ print STDERR "Merge the remote changes (e.g. 'git pull') before pushing again. See the\n";
+ print STDERR "'Note about fast-forwards' section of 'git push --help' for details.\n";
+ }
+ print STDOUT "error $_[0] \"non-fast-forward\"\n";
+ return 0;
+}
+
+sub mw_upload_file {
+ my $complete_file_name = shift;
+ my $new_sha1 = shift;
+ my $extension = shift;
+ my $file_deleted = shift;
+ my $summary = shift;
+ my $newrevid;
+ my $path = "File:" . $complete_file_name;
+ my %hashFiles = get_allowed_file_extensions();
+ if (!exists($hashFiles{$extension})) {
+ print STDERR "$complete_file_name is not a permitted file on this wiki.\n";
+ print STDERR "Check the configuration of file uploads in your mediawiki.\n";
+ return $newrevid;
+ }
+	# Deleting and uploading a file requires a privileged user
+ if ($file_deleted) {
+ mw_connect_maybe();
+ my $query = {
+ action => 'delete',
+ title => $path,
+ reason => $summary
+ };
+ if (!$mediawiki->edit($query)) {
+ print STDERR "Failed to delete file on remote wiki\n";
+ print STDERR "Check your permissions on the remote site. Error code:\n";
+ print STDERR $mediawiki->{error}->{code} . ':' . $mediawiki->{error}->{details};
+ exit 1;
+ }
+ } else {
+ # Don't let perl try to interpret file content as UTF-8 => use "raw"
+ my $content = run_git("cat-file blob $new_sha1", "raw");
+ if ($content ne "") {
+ mw_connect_maybe();
+ $mediawiki->{config}->{upload_url} =
+ "$url/index.php/Special:Upload";
+ $mediawiki->edit({
+ action => 'upload',
+ filename => $complete_file_name,
+ comment => $summary,
+ file => [undef,
+ $complete_file_name,
+ Content => $content],
+ ignorewarnings => 1,
+ }, {
+ skip_encoding => 1
+ } ) || die $mediawiki->{error}->{code} . ':'
+ . $mediawiki->{error}->{details};
+ my $last_file_page = $mediawiki->get_page({title => $path});
+ $newrevid = $last_file_page->{revid};
+ print STDERR "Pushed file: $new_sha1 - $complete_file_name.\n";
+ } else {
+ print STDERR "Empty file $complete_file_name not pushed.\n";
+ }
+ }
+ return $newrevid;
+}
+
+sub mw_push_file {
+ my $diff_info = shift;
+ # $diff_info contains a string in this format:
+ # 100644 100644 <sha1_of_blob_before_commit> <sha1_of_blob_now> <status>
+ my @diff_info_split = split(/[ \t]/, $diff_info);
+
+ # Filename, including .mw extension
+ my $complete_file_name = shift;
+ # Commit message
+ my $summary = shift;
+ # MediaWiki revision number. Keep the previous one by default,
+ # in case there's no edit to perform.
+ my $oldrevid = shift;
+ my $newrevid;
+
+ if ($summary eq EMPTY_MESSAGE) {
+ $summary = '';
+ }
+
+ my $new_sha1 = $diff_info_split[3];
+ my $old_sha1 = $diff_info_split[2];
+ my $page_created = ($old_sha1 eq NULL_SHA1);
+ my $page_deleted = ($new_sha1 eq NULL_SHA1);
+ $complete_file_name = mediawiki_clean_filename($complete_file_name);
+
+ my ($title, $extension) = $complete_file_name =~ /^(.*)\.([^\.]*)$/;
+ if (!defined($extension)) {
+ $extension = "";
+ }
+ if ($extension eq "mw") {
+ my $ns = get_mw_namespace_id_for_page($complete_file_name);
+ if ($ns && $ns == get_mw_namespace_id("File") && (!$export_media)) {
+ print STDERR "Ignoring media file related page: $complete_file_name\n";
+ return ($oldrevid, "ok");
+ }
+ my $file_content;
+ if ($page_deleted) {
+			# Deleting a page usually requires
+			# special privileges. A common
+ # convention is to replace the page
+ # with this content instead:
+ $file_content = DELETED_CONTENT;
+ } else {
+ $file_content = run_git("cat-file blob $new_sha1");
+ }
+
+ mw_connect_maybe();
+
+ my $result = $mediawiki->edit( {
+ action => 'edit',
+ summary => $summary,
+ title => $title,
+ basetimestamp => $basetimestamps{$oldrevid},
+ text => mediawiki_clean($file_content, $page_created),
+ }, {
+		skip_encoding => 1 # Helps with names containing accented characters
+ });
+ if (!$result) {
+ if ($mediawiki->{error}->{code} == 3) {
+ # edit conflicts, considered as non-fast-forward
+ print STDERR 'Warning: Error ' .
+ $mediawiki->{error}->{code} .
+					' from mediawiki: ' . $mediawiki->{error}->{details} .
+ ".\n";
+ return ($oldrevid, "non-fast-forward");
+ } else {
+ # Other errors. Shouldn't happen => just die()
+ die 'Fatal: Error ' .
+ $mediawiki->{error}->{code} .
+				' from mediawiki: ' . $mediawiki->{error}->{details};
+ }
+ }
+ $newrevid = $result->{edit}->{newrevid};
+ print STDERR "Pushed file: $new_sha1 - $title\n";
+ } elsif ($export_media) {
+ $newrevid = mw_upload_file($complete_file_name, $new_sha1,
+ $extension, $page_deleted,
+ $summary);
+ } else {
+ print STDERR "Ignoring media file $title\n";
+ }
+ $newrevid = ($newrevid or $oldrevid);
+ return ($newrevid, "ok");
+}
+
+sub mw_push {
+ # multiple push statements can follow each other
+ my @refsspecs = (shift, get_more_refs("push"));
+ my $pushed;
+ for my $refspec (@refsspecs) {
+ my ($force, $local, $remote) = $refspec =~ /^(\+)?([^:]*):([^:]*)$/
+ or die("Invalid refspec for push. Expected <src>:<dst> or +<src>:<dst>");
+ if ($force) {
+ print STDERR "Warning: forced push not allowed on a MediaWiki.\n";
+ }
+ if ($local eq "") {
+ print STDERR "Cannot delete remote branch on a MediaWiki\n";
+ print STDOUT "error $remote cannot delete\n";
+ next;
+ }
+ if ($remote ne "refs/heads/master") {
+ print STDERR "Only push to the branch 'master' is supported on a MediaWiki\n";
+ print STDOUT "error $remote only master allowed\n";
+ next;
+ }
+ if (mw_push_revision($local, $remote)) {
+ $pushed = 1;
+ }
+ }
+
+ # Notify Git that the push is done
+ print STDOUT "\n";
+
+ if ($pushed && $dumb_push) {
+ print STDERR "Just pushed some revisions to MediaWiki.\n";
+ print STDERR "The pushed revisions now have to be re-imported, and your current branch\n";
+ print STDERR "needs to be updated with these re-imported commits. You can do this with\n";
+ print STDERR "\n";
+ print STDERR " git pull --rebase\n";
+ print STDERR "\n";
+ }
+}
+
+sub mw_push_revision {
+ my $local = shift;
+ my $remote = shift; # actually, this has to be "refs/heads/master" at this point.
+ my $last_local_revid = get_last_local_revision();
+ print STDERR ".\n"; # Finish sentence started by get_last_local_revision()
+ my $last_remote_revid = get_last_remote_revision();
+ my $mw_revision = $last_remote_revid;
+
+ # Get sha1 of commit pointed by local HEAD
+ my $HEAD_sha1 = run_git("rev-parse $local 2>/dev/null"); chomp($HEAD_sha1);
+ # Get sha1 of commit pointed by remotes/$remotename/master
+ my $remoteorigin_sha1 = run_git("rev-parse refs/remotes/$remotename/master 2>/dev/null");
+ chomp($remoteorigin_sha1);
+
+ if ($last_local_revid > 0 &&
+ $last_local_revid < $last_remote_revid) {
+ return error_non_fast_forward($remote);
+ }
+
+ if ($HEAD_sha1 eq $remoteorigin_sha1) {
+ # nothing to push
+ return 0;
+ }
+
+ # Get every commit in between HEAD and refs/remotes/origin/master,
+ # including HEAD and refs/remotes/origin/master
+ my @commit_pairs = ();
+ if ($last_local_revid > 0) {
+ my $parsed_sha1 = $remoteorigin_sha1;
+ # Find a path from last MediaWiki commit to pushed commit
+ print STDERR "Computing path from local to remote ...\n";
+ my @local_ancestry = split(/\n/, run_git("rev-list --boundary --parents $local ^$parsed_sha1"));
+ my %local_ancestry;
+ foreach my $line (@local_ancestry) {
+ if (my ($child, $parents) = $line =~ m/^-?([a-f0-9]+) ([a-f0-9 ]+)/) {
+ foreach my $parent (split(' ', $parents)) {
+ $local_ancestry{$parent} = $child;
+ }
+			} elsif ($line !~ m/^([a-f0-9]+)/) {
+ die "Unexpected output from git rev-list: $line";
+ }
+ }
+ while ($parsed_sha1 ne $HEAD_sha1) {
+ my $child = $local_ancestry{$parsed_sha1};
+ if (!$child) {
+ printf STDERR "Cannot find a path in history from remote commit to last commit\n";
+ return error_non_fast_forward($remote);
+ }
+ push(@commit_pairs, [$parsed_sha1, $child]);
+ $parsed_sha1 = $child;
+ }
+ } else {
+ # No remote mediawiki revision. Export the whole
+ # history (linearized with --first-parent)
+ print STDERR "Warning: no common ancestor, pushing complete history\n";
+ my $history = run_git("rev-list --first-parent --children $local");
+ my @history = split('\n', $history);
+ @history = @history[1..$#history];
+ foreach my $line (reverse @history) {
+ my @commit_info_split = split(/ |\n/, $line);
+ push(@commit_pairs, \@commit_info_split);
+ }
+ }
+
+ foreach my $commit_info_split (@commit_pairs) {
+ my $sha1_child = @{$commit_info_split}[0];
+ my $sha1_commit = @{$commit_info_split}[1];
+ my $diff_infos = run_git("diff-tree -r --raw -z $sha1_child $sha1_commit");
+		# TODO: we could detect renames and encode them with a #redirect on the wiki.
+		# TODO: for now, it's just a delete+add
+ my @diff_info_list = split(/\0/, $diff_infos);
+ # Keep the subject line of the commit message as mediawiki comment for the revision
+ my $commit_msg = run_git("log --no-walk --format=\"%s\" $sha1_commit");
+ chomp($commit_msg);
+ # Push every blob
+ while (@diff_info_list) {
+ my $status;
+ # git diff-tree -z gives an output like
+ # <metadata>\0<filename1>\0
+ # <metadata>\0<filename2>\0
+ # and we've split on \0.
+ my $info = shift(@diff_info_list);
+ my $file = shift(@diff_info_list);
+ ($mw_revision, $status) = mw_push_file($info, $file, $commit_msg, $mw_revision);
+ if ($status eq "non-fast-forward") {
+ # we may already have sent part of the
+ # commit to MediaWiki, but it's too
+ # late to cancel it. Stop the push in
+ # the middle, but still give an
+ # accurate error message.
+ return error_non_fast_forward($remote);
+ }
+ if ($status ne "ok") {
+ die("Unknown error from mw_push_file()");
+ }
+ }
+ unless ($dumb_push) {
+ run_git("notes --ref=$remotename/mediawiki add -f -m \"mediawiki_revision: $mw_revision\" $sha1_commit");
+ run_git("update-ref -m \"Git-MediaWiki push\" refs/mediawiki/$remotename/master $sha1_commit $sha1_child");
+ }
+ }
+
+ print STDOUT "ok $remote\n";
+ return 1;
+}
+
+sub get_allowed_file_extensions {
+ mw_connect_maybe();
+
+ my $query = {
+ action => 'query',
+ meta => 'siteinfo',
+ siprop => 'fileextensions'
+ };
+ my $result = $mediawiki->api($query);
+	my @file_extensions = map $_->{ext}, @{$result->{query}->{fileextensions}};
+	my %hashFile = map {$_ => 1} @file_extensions;
+
+ return %hashFile;
+}
+
+# In-memory cache for MediaWiki namespace ids.
+my %namespace_id;
+
+# Namespaces whose id is cached in the configuration file
+# (to avoid duplicates)
+my %cached_mw_namespace_id;
+
+# Return MediaWiki id for a canonical namespace name.
+# Ex.: "File", "Project".
+sub get_mw_namespace_id {
+ mw_connect_maybe();
+ my $name = shift;
+
+ if (!exists $namespace_id{$name}) {
+		# Look at the configuration file to see if the record for that
+		# namespace is already cached. Namespaces are stored in the form
+		# "Name_of_namespace:Id_namespace", e.g. "File:6".
+ my @temp = split(/[\n]/, run_git("config --get-all remote."
+ . $remotename .".namespaceCache"));
+ chomp(@temp);
+ foreach my $ns (@temp) {
+ my ($n, $id) = split(/:/, $ns);
+ if ($id eq 'notANameSpace') {
+ $namespace_id{$n} = {is_namespace => 0};
+ } else {
+ $namespace_id{$n} = {is_namespace => 1, id => $id};
+ }
+ $cached_mw_namespace_id{$n} = 1;
+ }
+ }
+
+ if (!exists $namespace_id{$name}) {
+ print STDERR "Namespace $name not found in cache, querying the wiki ...\n";
+ # NS not found => get namespace id from MW and store it in
+ # configuration file.
+ my $query = {
+ action => 'query',
+ meta => 'siteinfo',
+ siprop => 'namespaces'
+ };
+ my $result = $mediawiki->api($query);
+
+ while (my ($id, $ns) = each(%{$result->{query}->{namespaces}})) {
+ if (defined($ns->{id}) && defined($ns->{canonical})) {
+ $namespace_id{$ns->{canonical}} = {is_namespace => 1, id => $ns->{id}};
+ if ($ns->{'*'}) {
+ # alias (e.g. french Fichier: as alias for canonical File:)
+ $namespace_id{$ns->{'*'}} = {is_namespace => 1, id => $ns->{id}};
+ }
+ }
+ }
+ }
+
+ my $ns = $namespace_id{$name};
+ my $id;
+
+ unless (defined $ns) {
+ print STDERR "No such namespace $name on MediaWiki.\n";
+ $ns = {is_namespace => 0};
+ $namespace_id{$name} = $ns;
+ }
+
+ if ($ns->{is_namespace}) {
+ $id = $ns->{id};
+ }
+
+	# Store "notANameSpace" as a special value for non-existent namespaces
+ my $store_id = ($id || 'notANameSpace');
+
+	# Store explicitly requested namespaces on disk
+ if (!exists $cached_mw_namespace_id{$name}) {
+ run_git("config --add remote.". $remotename
+ .".namespaceCache \"". $name .":". $store_id ."\"");
+ $cached_mw_namespace_id{$name} = 1;
+ }
+ return $id;
+}
+
+sub get_mw_namespace_id_for_page {
+ if (my ($namespace) = $_[0] =~ /^([^:]*):/) {
+ return get_mw_namespace_id($namespace);
+ } else {
+ return;
+ }
+}
doc: $(GIT_SUBTREE_DOC)
install: $(GIT_SUBTREE)
- $(INSTALL) -m 755 $(GIT_SUBTREE) $(libexecdir)
+ $(INSTALL) -m 755 $(GIT_SUBTREE) $(DESTDIR)$(libexecdir)
install-doc: install-man
install-man: $(GIT_SUBTREE_DOC)
- $(INSTALL) -m 644 $^ $(man1dir)
+ $(INSTALL) -d -m 755 $(DESTDIR)$(man1dir)
+ $(INSTALL) -m 644 $^ $(DESTDIR)$(man1dir)
$(GIT_SUBTREE_DOC): $(GIT_SUBTREE_XML)
xmlto -m $(MANPAGE_NORMAL_XSL) man $^
fi
OPTS_SPEC="\
git subtree add --prefix=<prefix> <commit>
+git subtree add --prefix=<prefix> <repository> <commit>
git subtree merge --prefix=<prefix> <commit>
git subtree pull --prefix=<prefix> <repository> <refspec...>
git subtree push --prefix=<prefix> <repository> <refspec...>
# We're going to set some environment vars here, so
# do it in a subshell to get rid of them safely later
debug copy_commit "{$1}" "{$2}" "{$3}"
- git log -1 --pretty=format:'%an%n%ae%n%ad%n%cn%n%ce%n%cd%n%s%n%n%b' "$1" |
+ git log -1 --pretty=format:'%an%n%ae%n%ad%n%cn%n%ce%n%cd%n%B' "$1" |
(
read GIT_AUTHOR_NAME
read GIT_AUTHOR_EMAIL
ensure_clean
if [ $# -eq 1 ]; then
- "cmd_add_commit" "$@"
+ git rev-parse -q --verify "$1^{commit}" >/dev/null ||
+ die "'$1' does not refer to a commit"
+
+ "cmd_add_commit" "$@"
elif [ $# -eq 2 ]; then
- "cmd_add_repository" "$@"
+ # Technically we could accept a refspec here but we're
+ # just going to turn around and add FETCH_HEAD under the
+ # specified directory. Allowing a refspec might be
+ # misleading because we won't do anything with any other
+ # branches fetched via the refspec.
+ git rev-parse -q --verify "$2^{commit}" >/dev/null ||
+ die "'$2' does not refer to a commit"
+
+ "cmd_add_repository" "$@"
else
say "error: parameters were '$@'"
- die "Provide either a refspec or a repository and refspec."
+ die "Provide either a commit or a repository and commit."
fi
}
SYNOPSIS
--------
[verse]
-'git subtree' add -P <prefix> <commit>
+'git subtree' add -P <prefix> <refspec>
+'git subtree' add -P <prefix> <repository> <refspec>
'git subtree' pull -P <prefix> <repository> <refspec...>
'git subtree' push -P <prefix> <repository> <refspec...>
'git subtree' merge -P <prefix> <commit>
git log --pretty=format:%s -1
}
-# 1
test_expect_success 'init subproj' '
test_create_repo subproj
'
# To the subproject!
cd subproj
-# 2
test_expect_success 'add sub1' '
create sub1 &&
git commit -m "sub1" &&
git branch -m master subproj
'
-# 3
+# Save this hash for testing later.
+
+subdir_hash=$(git rev-parse HEAD)
+
test_expect_success 'add sub2' '
create sub2 &&
git commit -m "sub2" &&
git branch sub2
'
-# 4
test_expect_success 'add sub3' '
create sub3 &&
git commit -m "sub3" &&
# Back to mainline
cd ..
-# 5
test_expect_success 'add main4' '
create main4 &&
git commit -m "main4" &&
git branch subdir
'
-# 6
test_expect_success 'fetch subproj history' '
git fetch ./subproj sub1 &&
git branch sub1 FETCH_HEAD
'
-# 7
test_expect_success 'no subtree exists in main tree' '
test_must_fail git subtree merge --prefix=subdir sub1
'
-# 8
test_expect_success 'no pull from non-existant subtree' '
test_must_fail git subtree pull --prefix=subdir ./subproj sub1
'
-# 9
test_expect_success 'check if --message works for add' '
git subtree add --prefix=subdir --message="Added subproject" sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject" &&
undo
'
-# 10
test_expect_success 'check if --message works as -m and --prefix as -P' '
git subtree add -P subdir -m "Added subproject using git subtree" sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject using git subtree" &&
undo
'
-# 11
test_expect_success 'check if --message works with squash too' '
git subtree add -P subdir -m "Added subproject with squash" --squash sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject with squash" &&
undo
'
-# 12
test_expect_success 'add subproj to mainline' '
git subtree add --prefix=subdir/ FETCH_HEAD &&
check_equal ''"$(last_commit_message)"'' "Add '"'subdir/'"' from commit '"'"'''"$(git rev-parse sub1)"'''"'"'"
'
-# 13
# this shouldn't actually do anything, since FETCH_HEAD is already a parent
test_expect_success 'merge fetched subproj' '
git merge -m "merge -s -ours" -s ours FETCH_HEAD
'
-# 14
test_expect_success 'add main-sub5' '
create subdir/main-sub5 &&
git commit -m "main-sub5"
'
-# 15
test_expect_success 'add main6' '
create main6 &&
git commit -m "main6 boring"
'
-# 16
test_expect_success 'add main-sub7' '
create subdir/main-sub7 &&
git commit -m "main-sub7"
'
-# 17
test_expect_success 'fetch new subproj history' '
git fetch ./subproj sub2 &&
git branch sub2 FETCH_HEAD
'
-# 18
test_expect_success 'check if --message works for merge' '
git subtree merge --prefix=subdir -m "Merged changes from subproject" sub2 &&
check_equal ''"$(last_commit_message)"'' "Merged changes from subproject" &&
undo
'
-# 19
test_expect_success 'check if --message for merge works with squash too' '
git subtree merge --prefix subdir -m "Merged changes from subproject using squash" --squash sub2 &&
check_equal ''"$(last_commit_message)"'' "Merged changes from subproject using squash" &&
undo
'
-# 20
test_expect_success 'merge new subproj history into subdir' '
git subtree merge --prefix=subdir FETCH_HEAD &&
git branch pre-split &&
check_equal ''"$(last_commit_message)"'' "Merge commit '"'"'"$(git rev-parse sub2)"'"'"' into mainline"
'
-# 21
test_expect_success 'Check that prefix argument is required for split' '
echo "You must provide the --prefix option." > expected &&
test_must_fail git subtree split > actual 2>&1 &&
rm -f expected actual
'
-# 22
test_expect_success 'Check that the <prefix> exists for a split' '
echo "'"'"'non-existent-directory'"'"'" does not exist\; use "'"'"'git subtree add'"'"'" > expected &&
test_must_fail git subtree split --prefix=non-existent-directory > actual 2>&1 &&
# rm -f expected actual
'
-# 23
test_expect_success 'check if --message works for split+rejoin' '
spl1=''"$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
git branch spl1 "$spl1" &&
undo
'
-# 24
test_expect_success 'check split with --branch' '
- spl1=$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin) &&
- undo &&
- git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --branch splitbr1 &&
- check_equal ''"$(git rev-parse splitbr1)"'' "$spl1"
+ spl1=$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin) &&
+ undo &&
+ git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --branch splitbr1 &&
+ check_equal ''"$(git rev-parse splitbr1)"'' "$spl1"
+'
+
+test_expect_success 'check hash of split' '
+ spl1=$(git subtree split --prefix subdir) &&
+ undo &&
+ git subtree split --prefix subdir --branch splitbr1test &&
+	check_equal ''"$(git rev-parse splitbr1test)"'' "$spl1" &&
+ git checkout splitbr1test &&
+ new_hash=$(git rev-parse HEAD~2) &&
+ git checkout mainline &&
+ check_equal ''"$new_hash"'' "$subdir_hash"
'
-# 25
test_expect_success 'check split with --branch for an existing branch' '
spl1=''"$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
undo &&
check_equal ''"$(git rev-parse splitbr2)"'' "$spl1"
'
-# 26
test_expect_success 'check split with --branch for an incompatible branch' '
test_must_fail git subtree split --prefix subdir --onto FETCH_HEAD --branch subdir
'
-
-# 27
test_expect_success 'check split+rejoin' '
spl1=''"$(git subtree split --annotate='"'*'"' --prefix subdir --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
undo &&
check_equal ''"$(last_commit_message)"'' "Split '"'"'subdir/'"'"' into commit '"'"'"$spl1"'"'"'"
'
-# 28
test_expect_success 'add main-sub8' '
create subdir/main-sub8 &&
git commit -m "main-sub8"
# To the subproject!
cd ./subproj
-# 29
test_expect_success 'merge split into subproj' '
git fetch .. spl1 &&
git branch spl1 FETCH_HEAD &&
git merge FETCH_HEAD
'
-# 30
test_expect_success 'add sub9' '
create sub9 &&
git commit -m "sub9"
# Back to mainline
cd ..
-# 31
test_expect_success 'split for sub8' '
split2=''"$(git subtree split --annotate='"'*'"' --prefix subdir/ --rejoin)"''
git branch split2 "$split2"
'
-# 32
test_expect_success 'add main-sub10' '
create subdir/main-sub10 &&
git commit -m "main-sub10"
'
-# 33
test_expect_success 'split for sub10' '
spl3=''"$(git subtree split --annotate='"'*'"' --prefix subdir --rejoin)"'' &&
git branch spl3 "$spl3"
# To the subproject!
cd ./subproj
-# 34
test_expect_success 'merge split into subproj' '
git fetch .. spl3 &&
git branch spl3 FETCH_HEAD &&
chks="sub1 sub2 sub3 sub9"
chks_sub=$(echo $chks | multiline | sed 's,^,subdir/,' | fixnl)
-# 35
test_expect_success 'make sure exactly the right set of files ends up in the subproj' '
subfiles=''"$(git ls-files | fixnl)"'' &&
check_equal "$subfiles" "$chkms $chks"
'
-# 36
test_expect_success 'make sure the subproj history *only* contains commits that affect the subdir' '
allchanges=''"$(git log --name-only --pretty=format:'"''"' | sort | fixnl)"'' &&
check_equal "$allchanges" "$chkms $chks"
# Back to mainline
cd ..
-# 37
test_expect_success 'pull from subproj' '
git fetch ./subproj subproj-merge-spl3 &&
git branch subproj-merge-spl3 FETCH_HEAD &&
git subtree pull --prefix=subdir ./subproj subproj-merge-spl3
'
-# 38
test_expect_success 'make sure exactly the right set of files ends up in the mainline' '
mainfiles=''"$(git ls-files | fixnl)"'' &&
check_equal "$mainfiles" "$chkm $chkms_sub $chks_sub"
'
-# 39
test_expect_success 'make sure each filename changed exactly once in the entire history' '
# main-sub?? and /subdir/main-sub?? both change, because those are the
# changes that were split into their own history. And subdir/sub?? never
check_equal "$allchanges" ''"$(echo $chkms $chkm $chks $chkms_sub | multiline | sort | fixnl)"''
'
-# 40
test_expect_success 'make sure the --rejoin commits never make it into subproj' '
check_equal ''"$(git log --pretty=format:'"'%s'"' HEAD^2 | grep -i split)"'' ""
'
-# 41
test_expect_success 'make sure no "git subtree" tagged commits make it into subproj' '
# They are meaningless to subproj since one side of the merge refers to the mainline
check_equal ''"$(git log --pretty=format:'"'%s%n%b'"' HEAD^2 | grep "git-subtree.*:")"'' ""
mkdir test2
cd test2
-# 42
test_expect_success 'init main' '
test_create_repo main
'
cd main
-# 43
test_expect_success 'add main1' '
create main1 &&
git commit -m "main1"
cd ..
-# 44
test_expect_success 'init sub' '
test_create_repo sub
'
cd sub
-# 45
test_expect_success 'add sub2' '
create sub2 &&
git commit -m "sub2"
# check if split can find proper base without --onto
-# 46
test_expect_success 'add sub as subdir in main' '
git fetch ../sub master &&
git branch sub2 FETCH_HEAD &&
cd ../sub
-# 47
test_expect_success 'add sub3' '
create sub3 &&
git commit -m "sub3"
cd ../main
-# 48
test_expect_success 'merge from sub' '
git fetch ../sub master &&
git branch sub3 FETCH_HEAD &&
git subtree merge --prefix subdir sub3
'
-# 49
test_expect_success 'add main-sub4' '
create subdir/main-sub4 &&
git commit -m "main-sub4"
'
-# 50
test_expect_success 'split for main-sub4 without --onto' '
git subtree split --prefix subdir --branch mainsub4
'
# have been sub3, but it was not, because its cache was not set to
# itself)
-# 51
test_expect_success 'check that the commit parent is sub3' '
check_equal ''"$(git log --pretty=format:%P -1 mainsub4)"'' ''"$(git rev-parse sub3)"''
'
-# 52
test_expect_success 'add main-sub5' '
mkdir subdir2 &&
create subdir2/main-sub5 &&
git commit -m "main-sub5"
'
-# 53
test_expect_success 'split for main-sub5 without --onto' '
# also test that we still can split out an entirely new subtree
# if the parent of the first commit in the tree is not empty,
echo "$commit $all"
}
-# 54
test_expect_success 'verify one file change per commit' '
x= &&
list=''"$(git log --pretty=format:'"'commit: %H'"' | joincommits)"'' &&
int nofirst;
FILE *file = o->file;
- if (o->output_prefix) {
- struct strbuf *msg = NULL;
- msg = o->output_prefix(o, o->output_prefix_data);
- assert(msg);
- fwrite(msg->buf, msg->len, 1, file);
- }
+ fputs(diff_line_prefix(o), file);
if (len == 0) {
has_trailing_newline = (first == '\n');
char *data_one, *data_two;
size_t size_one, size_two;
struct emit_callback ecbdata;
- char *line_prefix = "";
- struct strbuf *msgbuf;
-
- if (o && o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ const char *line_prefix = diff_line_prefix(o);
if (diff_mnemonic_prefix && DIFF_OPT_TST(o, REVERSE_DIFF)) {
a_prefix = o->b_prefix;
int minus_first, minus_len, plus_first, plus_len;
const char *minus_begin, *minus_end, *plus_begin, *plus_end;
struct diff_options *opt = diff_words->opt;
- struct strbuf *msgbuf;
- char *line_prefix = "";
+ const char *line_prefix;
if (line[0] != '@' || parse_hunk_header(line, len,
&minus_first, &minus_len, &plus_first, &plus_len))
return;
assert(opt);
- if (opt->output_prefix) {
- msgbuf = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ line_prefix = diff_line_prefix(opt);
/* POSIX requires that first be decremented by one if len == 0... */
if (minus_len) {
struct diff_words_style *style = diff_words->style;
struct diff_options *opt = diff_words->opt;
- struct strbuf *msgbuf;
- char *line_prefix = "";
+ const char *line_prefix;
assert(opt);
- if (opt->output_prefix) {
- msgbuf = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ line_prefix = diff_line_prefix(opt);
/* special case: only removal */
if (!diff_words->plus.text.size) {
return "";
}
+const char *diff_line_prefix(struct diff_options *opt)
+{
+ struct strbuf *msgbuf;
+ if (!opt->output_prefix)
+ return "";
+
+ msgbuf = opt->output_prefix(opt, opt->output_prefix_data);
+ return msgbuf->buf;
+}
+
static unsigned long sane_truncate_line(struct emit_callback *ecb, char *line, unsigned long len)
{
const char *cp;
const char *plain = diff_get_color(ecbdata->color_diff, DIFF_PLAIN);
const char *reset = diff_get_color(ecbdata->color_diff, DIFF_RESET);
struct diff_options *o = ecbdata->opt;
- char *line_prefix = "";
- struct strbuf *msgbuf;
-
- if (o && o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ const char *line_prefix = diff_line_prefix(o);
if (ecbdata->header) {
fprintf(ecbdata->opt->file, "%s", ecbdata->header->buf);
const char *reset, *add_c, *del_c;
const char *line_prefix = "";
int extra_shown = 0;
- struct strbuf *msg = NULL;
if (data->nr == 0)
return;
- if (options->output_prefix) {
- msg = options->output_prefix(options, options->output_prefix_data);
- line_prefix = msg->buf;
- }
-
+ line_prefix = diff_line_prefix(options);
count = options->stat_count ? options->stat_count : data->nr;
reset = diff_get_color_opt(options, DIFF_RESET);
dels += deleted;
}
}
- if (options->output_prefix) {
- struct strbuf *msg = NULL;
- msg = options->output_prefix(options,
- options->output_prefix_data);
- fprintf(options->file, "%s", msg->buf);
- }
+ fprintf(options->file, "%s", diff_line_prefix(options));
print_stat_summary(options->file, total_files, adds, dels);
}
for (i = 0; i < data->nr; i++) {
struct diffstat_file *file = data->files[i];
- if (options->output_prefix) {
- struct strbuf *msg = NULL;
- msg = options->output_prefix(options,
- options->output_prefix_data);
- fprintf(options->file, "%s", msg->buf);
- }
+ fprintf(options->file, "%s", diff_line_prefix(options));
if (file->is_binary)
fprintf(options->file, "-\t-\t");
{
unsigned long this_dir = 0;
unsigned int sources = 0;
- const char *line_prefix = "";
- struct strbuf *msg = NULL;
-
- if (opt->output_prefix) {
- msg = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = msg->buf;
- }
+ const char *line_prefix = diff_line_prefix(opt);
while (dir->nr) {
struct dirstat_file *f = dir->files;
const char *reset = diff_get_color(data->o->use_color, DIFF_RESET);
const char *set = diff_get_color(data->o->use_color, DIFF_FILE_NEW);
char *err;
- char *line_prefix = "";
- struct strbuf *msgbuf;
+ const char *line_prefix;
assert(data->o);
- if (data->o->output_prefix) {
- msgbuf = data->o->output_prefix(data->o,
- data->o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ line_prefix = diff_line_prefix(data->o);
if (line[0] == '+') {
unsigned bad;
return deflated;
}
-static void emit_binary_diff_body(FILE *file, mmfile_t *one, mmfile_t *two, char *prefix)
+static void emit_binary_diff_body(FILE *file, mmfile_t *one, mmfile_t *two,
+ const char *prefix)
{
void *cp;
void *delta;
free(data);
}
-static void emit_binary_diff(FILE *file, mmfile_t *one, mmfile_t *two, char *prefix)
+static void emit_binary_diff(FILE *file, mmfile_t *one, mmfile_t *two,
+ const char *prefix)
{
fprintf(file, "%sGIT binary patch\n", prefix);
emit_binary_diff_body(file, one, two, prefix);
struct userdiff_driver *textconv_one = NULL;
struct userdiff_driver *textconv_two = NULL;
struct strbuf header = STRBUF_INIT;
- struct strbuf *msgbuf;
- char *line_prefix = "";
-
- if (o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
+ const char *line_prefix = diff_line_prefix(o);
if (DIFF_OPT_TST(o, SUBMODULE_LOG) &&
(!one->mode || S_ISGITLINK(one->mode)) &&
{
const char *set = diff_get_color(use_color, DIFF_METAINFO);
const char *reset = diff_get_color(use_color, DIFF_RESET);
- struct strbuf *msgbuf;
- char *line_prefix = "";
+ const char *line_prefix = diff_line_prefix(o);
*must_show_header = 1;
- if (o->output_prefix) {
- msgbuf = o->output_prefix(o, o->output_prefix_data);
- line_prefix = msgbuf->buf;
- }
strbuf_init(msg, PATH_MAX * 2 + 300);
switch (p->status) {
case DIFF_STATUS_COPIED:
{
int line_termination = opt->line_termination;
int inter_name_termination = line_termination ? '\t' : '\0';
- if (opt->output_prefix) {
- struct strbuf *msg = NULL;
- msg = opt->output_prefix(opt, opt->output_prefix_data);
- fprintf(opt->file, "%s", msg->buf);
- }
+ fprintf(opt->file, "%s", diff_line_prefix(opt));
if (!(opt->output_format & DIFF_FORMAT_NAME_STATUS)) {
fprintf(opt->file, ":%06o %06o %s ", p->one->mode, p->two->mode,
diff_unique_abbrev(p->one->sha1, opt->abbrev));
static void diff_summary(struct diff_options *opt, struct diff_filepair *p)
{
FILE *file = opt->file;
- char *line_prefix = "";
-
- if (opt->output_prefix) {
- struct strbuf *buf = opt->output_prefix(opt, opt->output_prefix_data);
- line_prefix = buf->buf;
- }
+ const char *line_prefix = diff_line_prefix(opt);
switch(p->status) {
case DIFF_STATUS_DELETED:
if (output_format & DIFF_FORMAT_PATCH) {
if (separator) {
- if (options->output_prefix) {
- struct strbuf *msg = NULL;
- msg = options->output_prefix(options,
- options->output_prefix_data);
- fwrite(msg->buf, msg->len, 1, stdout);
- }
- putc(options->line_termination, options->file);
+ fprintf(options->file, "%s%c",
+ diff_line_prefix(options),
+ options->line_termination);
if (options->stat_sep) {
/* attach patch instead of inline */
fputs(options->stat_sep, options->file);
diff_get_color((o)->use_color, ix)
+const char *diff_line_prefix(struct diff_options *);
+
+
extern const char mime_boundary_leader[];
extern void diff_tree_setup_paths(const char **paths, struct diff_options *);
/*
* Let callers be aware of the constant return value; this can help
- * gcc with -Wuninitialized analysis. We have to restrict this trick to
- * gcc, though, because of the variadic macro and the magic ## comma pasting
- * behavior. But since we're only trying to help gcc, anyway, it's OK; other
- * compilers will fall back to using the function as usual.
+ * gcc with -Wuninitialized analysis. We restrict this trick to gcc, though,
+ * because some compilers may not support variadic macros. Since we're only
+ * trying to help gcc, anyway, it's OK; other compilers will fall back to
+ * using the function as usual.
*/
#if defined(__GNUC__) && ! defined(__clang__)
-#define error(fmt, ...) (error((fmt), ##__VA_ARGS__), -1)
+#define error(...) (error(__VA_ARGS__), -1)
#endif
extern void set_die_routine(NORETURN_PTR void (*routine)(const char *err, va_list params));
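
For illustration, a sketch (hypothetical functions, assuming gcc inlines
the callee) of the kind of caller the constant return value helps: once
the error path visibly returns -1, gcc can see that the early return in
the caller covers exactly the path on which *value was never assigned,
so it does not raise a false -Wuninitialized warning.

    static int read_setting(int have_it, int *value)
    {
            if (!have_it)
                    return error("setting is missing");
            *value = 42;
            return 0;
    }

    static int use_setting(int have_it)
    {
            int value;

            if (read_setting(have_it, &value) < 0)
                    return -1;
            return value; /* provably initialized when reached */
    }
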
use IO::Pipe;
use POSIX qw(strftime tzset dup2 ENOENT);
use IPC::Open2;
+use Git qw(get_tz_offset);
$SIG{'PIPE'}="IGNORE";
set_timezone('UTC');
}
set_timezone($author_tz);
- my $commit_date = strftime("%s %z", localtime($date));
+ # $date is in the seconds since epoch format
+ my $tz_offset = get_tz_offset($date);
+ my $commit_date = "$date $tz_offset";
set_timezone('UTC');
$ENV{GIT_AUTHOR_NAME} = $author_name;
$ENV{GIT_AUTHOR_EMAIL} = $author_email;
# the user with the real $MERGED name before launching $merge_tool.
if should_prompt
then
- printf "\nViewing: '$MERGED'\n"
+ printf "\nViewing: '%s'\n" "$MERGED"
if use_ext_cmd
then
printf "Launch '%s' [Y/n]: " \
fi
printf "Merging:\n"
-printf "$files\n"
+printf "%s\n" "$files"
IFS='
'
if (!graph)
return;
+ /*
+ * When showing a diff of a merge against each of its parents, we
+ * are called once for each parent without graph_update having been
+ * called. In this case, simply output a single padding line.
+ */
+ if (graph_is_commit_finished(graph)) {
+ graph_show_padding(graph);
+ shown_commit_line = 1;
+ }
+
while (!shown_commit_line && !graph_is_commit_finished(graph)) {
shown_commit_line = graph_next_line(graph, &msgbuf);
fwrite(msgbuf.buf, sizeof(char), msgbuf.len, stdout);
#include "list-objects.h"
#include "sigchain.h"
+#ifdef EXPAT_NEEDS_XMLPARSE_H
+#include <xmlparse.h>
+#else
#include <expat.h>
+#endif
static const char http_push_usage[] =
"git http-push [--all] [--dry-run] [--force] [--verbose] <remote> [<head>...]\n";
empty_file="${TMPDIR:-/tmp}/git-difftool-p4merge-empty-file.$$"
>"$empty_file"
- printf "$empty_file"
+ printf "%s" "$empty_file"
}
#include "cache.h"
#include "commit.h"
#include "color.h"
+#include "utf8.h"
static int parse_options_usage(struct parse_opt_ctx_t *ctx,
const char * const *usagestr,
default: /* PARSE_OPT_UNKNOWN */
if (ctx.argv[0][1] == '-') {
error("unknown option `%s'", ctx.argv[0] + 2);
- } else {
+ } else if (isascii(*ctx.opt)) {
error("unknown switch `%c'", *ctx.opt);
+ } else {
+ error("unknown non-ascii option in string: `%s'",
+ ctx.argv[0]);
}
usage_with_options(usagestr, options);
}
s = literal ? "[%s]" : "[<%s>]";
else
s = literal ? " %s" : " <%s>";
- return fprintf(outfile, s, opts->argh ? _(opts->argh) : _("..."));
+ return utf8_fprintf(outfile, s, opts->argh ? _(opts->argh) : _("..."));
}
#define USAGE_OPTS_WIDTH 24
if (opts->long_name)
pos += fprintf(outfile, "--%s", opts->long_name);
if (opts->type == OPTION_NUMBER)
- pos += fprintf(outfile, "-NUM");
+ pos += utf8_fprintf(outfile, _("-NUM"));
if ((opts->flags & PARSE_OPT_LITERAL_ARGHELP) ||
!(opts->flags & PARSE_OPT_NOARG))
command_bidi_pipe command_close_bidi_pipe
version exec_path html_path hash_object git_cmd_try
remote_refs prompt
+ get_tz_offset
temp_acquire temp_release temp_reset temp_path);
use Cwd qw(abs_path cwd);
use IPC::Open2 qw(open2);
use Fcntl qw(SEEK_SET SEEK_CUR);
+use Time::Local qw(timegm);
}
sub html_path { command_oneline('--html-path') }
+
+=item get_tz_offset ( TIME )
+
+Return the time zone offset from GMT in the form +/-HHMM where HH is
+the number of hours from GMT and MM is the number of minutes. This is
+the equivalent of what strftime("%z", ...) would provide on a GNU
+platform.
+
+If TIME is not supplied, the current local time is used.
+
+=cut
+
+sub get_tz_offset {
+ # some systems don't handle or mishandle %z, so be creative.
+ my $t = shift || time;
+ my $gm = timegm(localtime($t));
+ my $sign = qw( + + - )[ $gm <=> $t ];
+ return sprintf("%s%02d%02d", $sign, (gmtime(abs($t - $gm)))[2,1]);
+}
+
+
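
For illustration, the same trick rendered as a standalone C sketch
(hypothetical code, not part of the patch; timegm() is a GNU/BSD
extension rather than ISO C): re-interpret the local broken-down time
as if it were UTC and take the difference from the original epoch value.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void tz_offset(char *buf, size_t len, time_t t)
    {
            struct tm local = *localtime(&t);       /* local wall clock */
            long diff = (long)difftime(timegm(&local), t);
            long mins = labs(diff) / 60;

            snprintf(buf, len, "%c%02ld%02ld",
                     diff < 0 ? '-' : '+', mins / 60, mins % 60);
    }
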
=item prompt ( PROMPT , ISPASSWORD )
Query user C<PROMPT> and return answer from user.
use File::Path qw/mkpath/;
use File::Copy qw/copy/;
use IPC::Open3;
-use Time::Local;
use Memoize; # core since 5.8.0, Jul 2002
use Memoize::Storable;
use POSIX qw(:signal_h);
command_noisy
command_output_pipe
command_close_pipe
+ get_tz_offset
);
use Git::SVN::Utils qw(
fatal
\@out;
}
-sub get_tz {
- # some systmes don't handle or mishandle %z, so be creative.
- my $t = shift || time;
- my $gm = timelocal(gmtime($t));
- my $sign = qw( + + - )[ $t <=> $gm ];
- return sprintf("%s%02d%02d", $sign, (gmtime(abs($t - $gm)))[2,1]);
-}
-
# parse_svn_date(DATE)
# --------------------
# Given a date (in UTC) from Subversion, return a string in the format
delete $ENV{TZ};
}
- my $our_TZ = get_tz();
+ my $our_TZ = get_tz_offset();
# This converts $epoch_in_UTC into our local timezone.
my ($sec, $min, $hour, $mday, $mon, $year,
use strict;
use warnings;
use Git::SVN::Utils qw(fatal);
-use Git qw(command command_oneline command_output_pipe command_close_pipe);
+use Git qw(command
+ command_oneline
+ command_output_pipe
+ command_close_pipe
+ get_tz_offset);
use POSIX qw/strftime/;
use constant commit_log_separator => ('-' x 72) . "\n";
use vars qw/$TZ $limit $color $pager $non_recursive $verbose $oneline
sub format_svn_date {
my $t = shift || time;
require Git::SVN;
- my $gmoff = Git::SVN::get_tz($t);
+ my $gmoff = get_tz_offset($t);
return strftime("%Y-%m-%d %H:%M:%S $gmoff (%a, %d %b %Y)", localtime($t));
}
test_expect_success 'status when rebase in progress before resolving conflicts' '
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD^^) &&
test_must_fail git rebase HEAD^ --onto HEAD^^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_conflicts'\'' on '\''$ONTO'\''.
# (fix conflicts and then run "git rebase --continue")
# (use "git rebase --skip" to skip this patch)
# (use "git rebase --abort" to check out the original branch)
test_expect_success 'status when rebase in progress before rebase --continue' '
git reset --hard rebase_conflicts &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD^^) &&
test_must_fail git rebase HEAD^ --onto HEAD^^ &&
echo three >main.txt &&
git add main.txt &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_conflicts'\'' on '\''$ONTO'\''.
# (all conflicts fixed: run "git rebase --continue")
#
# Changes to be committed:
test_expect_success 'status during rebase -i when conflicts unresolved' '
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short rebase_i_conflicts) &&
test_must_fail git rebase -i rebase_i_conflicts &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_i_conflicts_second'\'' on '\''$ONTO'\''.
# (fix conflicts and then run "git rebase --continue")
# (use "git rebase --skip" to skip this patch)
# (use "git rebase --abort" to check out the original branch)
test_expect_success 'status during rebase -i after resolving conflicts' '
git reset --hard rebase_i_conflicts_second &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short rebase_i_conflicts) &&
test_must_fail git rebase -i rebase_i_conflicts &&
git add main.txt &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''rebase_i_conflicts_second'\'' on '\''$ONTO'\''.
# (all conflicts fixed: run "git rebase --continue")
#
# Changes to be committed:
FAKE_LINES="1 edit 2" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~2) &&
git rebase -i HEAD~2 &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''rebase_i_edit'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''split_commit'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="1 2 edit 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "foo" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''amend_last'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git rebase --continue &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git rebase --continue &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git rebase --continue &&
git commit --amend -m "foo" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "a" &&
git rebase --continue &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "b" &&
git rebase --continue &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git commit --amend -m "c" &&
git rebase --continue &&
git commit --amend -m "d" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
git add main.txt &&
git commit -m "e" &&
git rebase --continue &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
git add main.txt &&
git commit --amend -m "f" &&
git rebase --continue &&
git reset HEAD^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently splitting a commit during a rebase.
+ # You are currently splitting a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (Once your working directory is clean, run "git rebase --continue")
#
# Changes not staged for commit:
FAKE_LINES="edit 1 edit 2 3" &&
export FAKE_LINES &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD~3) &&
git rebase -i HEAD~3 &&
git reset HEAD^ &&
git add main.txt &&
git commit --amend -m "g" &&
git rebase --continue &&
git commit --amend -m "h" &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently editing a commit during a rebase.
+ # You are currently editing a commit while rebasing branch '\''several_edits'\'' on '\''$ONTO'\''.
# (use "git commit --amend" to amend the current commit)
# (use "git rebase --continue" once you are satisfied with your changes)
#
git bisect good one_bisect &&
cat >expected <<-\EOF &&
# Not currently on any branch.
- # You are currently bisecting.
+ # You are currently bisecting branch '\''bisect'\''.
# (use "git bisect reset" to get back to the original branch)
#
nothing to commit (use -u to show untracked files)
test_commit two_statushints main.txt two &&
test_commit three_statushints main.txt three &&
test_when_finished "git rebase --abort" &&
+ ONTO=$(git rev-parse --short HEAD^^) &&
test_must_fail git rebase HEAD^ --onto HEAD^^ &&
- cat >expected <<-\EOF &&
+ cat >expected <<-EOF &&
# Not currently on any branch.
- # You are currently rebasing.
+ # You are currently rebasing branch '\''statushints_disabled'\'' on '\''$ONTO'\''.
#
# Unmerged paths:
# both modified: main.txt
return !strcasecmp(src, dst);
}
+/*
+ * A wrapper around fprintf() that returns the total number of display
+ * columns required for the printed string, assuming that the string is utf8.
+ */
+int utf8_fprintf(FILE *stream, const char *format, ...)
+{
+ struct strbuf buf = STRBUF_INIT;
+ va_list arg;
+ int columns;
+
+ va_start(arg, format);
+ strbuf_vaddf(&buf, format, arg);
+ va_end(arg);
+
+ columns = fputs(buf.buf, stream);
+ if (0 <= columns) /* keep the error from the I/O */
+ columns = utf8_strwidth(buf.buf);
+ strbuf_release(&buf);
+ return columns;
+}
+
/*
* Given a buffer and its encoding, return it re-encoded
* with iconv. If the conversion fails, returns NULL.
int is_utf8(const char *text);
int is_encoding_utf8(const char *name);
int same_encoding(const char *, const char *);
+int utf8_fprintf(FILE *, const char *, ...);
void strbuf_add_wrapped_text(struct strbuf *buf,
const char *text, int indent, int indent2, int width);
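
For illustration, a hypothetical caller (not part of the patch) showing
why the helper returns a column count rather than a byte count: padding
computed from the return value stays aligned even when the printed label
is a multi-byte translation.

    int pos = utf8_fprintf(stderr, "    --%s", "héllo");

    if (pos >= 0)
            while (pos++ < 26)
                    fputc(' ', stderr);
    fputs("option help stays column-aligned\n", stderr);
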
struct stat st;
if (has_unmerged(s)) {
- status_printf_ln(s, color, _("You are currently rebasing."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently rebasing."));
if (advice_status_hints) {
status_printf_ln(s, color,
_(" (fix conflicts and then run \"git rebase --continue\")"));
_(" (use \"git rebase --abort\" to check out the original branch)"));
}
} else if (state->rebase_in_progress || !stat(git_path("MERGE_MSG"), &st)) {
- status_printf_ln(s, color, _("You are currently rebasing."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently rebasing."));
if (advice_status_hints)
status_printf_ln(s, color,
_(" (all conflicts fixed: run \"git rebase --continue\")"));
} else if (split_commit_in_progress(s)) {
- status_printf_ln(s, color, _("You are currently splitting a commit during a rebase."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently splitting a commit while rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently splitting a commit during a rebase."));
if (advice_status_hints)
status_printf_ln(s, color,
_(" (Once your working directory is clean, run \"git rebase --continue\")"));
} else {
- status_printf_ln(s, color, _("You are currently editing a commit during a rebase."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently editing a commit while rebasing branch '%s' on '%s'."),
+ state->branch,
+ state->onto);
+ else
+ status_printf_ln(s, color,
+ _("You are currently editing a commit during a rebase."));
if (advice_status_hints && !s->amend) {
status_printf_ln(s, color,
_(" (use \"git commit --amend\" to amend the current commit)"));
struct wt_status_state *state,
const char *color)
{
- status_printf_ln(s, color, _("You are currently bisecting."));
+ if (state->branch)
+ status_printf_ln(s, color,
+ _("You are currently bisecting branch '%s'."),
+ state->branch);
+ else
+ status_printf_ln(s, color,
+ _("You are currently bisecting."));
if (advice_status_hints)
status_printf_ln(s, color,
_(" (use \"git bisect reset\" to get back to the original branch)"));
wt_status_print_trailer(s);
}
+/*
+ * Extract branch information from rebase/bisect
+ */
+static void read_and_strip_branch(struct strbuf *sb,
+ const char **branch,
+ const char *path)
+{
+ unsigned char sha1[20];
+
+ strbuf_reset(sb);
+ if (strbuf_read_file(sb, git_path("%s", path), 0) <= 0)
+ return;
+
+ while (sb->len && sb->buf[sb->len - 1] == '\n')
+ strbuf_setlen(sb, sb->len - 1);
+ if (!sb->len)
+ return;
+ if (!prefixcmp(sb->buf, "refs/heads/"))
+ *branch = sb->buf + strlen("refs/heads/");
+ else if (!prefixcmp(sb->buf, "refs/"))
+ *branch = sb->buf;
+ else if (!get_sha1_hex(sb->buf, sha1)) {
+ const char *abbrev;
+ abbrev = find_unique_abbrev(sha1, DEFAULT_ABBREV);
+ strbuf_reset(sb);
+ strbuf_addstr(sb, abbrev);
+ *branch = sb->buf;
+ } else if (!strcmp(sb->buf, "detached HEAD")) /* rebase */
+ ;
+ else /* bisect */
+ *branch = sb->buf;
+}
+
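
For illustration, a minimal sketch of a hypothetical caller (not part of
the patch); note that the returned pointer aliases the strbuf, so the
buffer must stay allocated for as long as the extracted name is used.

    struct strbuf sb = STRBUF_INIT;
    const char *branch = NULL;

    read_and_strip_branch(&sb, &branch, "rebase-merge/head-name");
    if (branch)
            printf("rebasing %s\n", branch);
    strbuf_release(&sb);
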
static void wt_status_print_state(struct wt_status *s)
{
const char *state_color = color(WT_STATUS_HEADER, s);
+ struct strbuf branch = STRBUF_INIT;
+ struct strbuf onto = STRBUF_INIT;
struct wt_status_state state;
struct stat st;
state.am_empty_patch = 1;
} else {
state.rebase_in_progress = 1;
+ read_and_strip_branch(&branch, &state.branch,
+ "rebase-apply/head-name");
+ read_and_strip_branch(&onto, &state.onto,
+ "rebase-apply/onto");
}
} else if (!stat(git_path("rebase-merge"), &st)) {
if (!stat(git_path("rebase-merge/interactive"), &st))
state.rebase_interactive_in_progress = 1;
else
state.rebase_in_progress = 1;
+ read_and_strip_branch(&branch, &state.branch,
+ "rebase-merge/head-name");
+ read_and_strip_branch(&onto, &state.onto,
+ "rebase-merge/onto");
} else if (!stat(git_path("CHERRY_PICK_HEAD"), &st)) {
state.cherry_pick_in_progress = 1;
}
- if (!stat(git_path("BISECT_LOG"), &st))
+ if (!stat(git_path("BISECT_LOG"), &st)) {
state.bisect_in_progress = 1;
+ read_and_strip_branch(&branch, &state.branch,
+ "BISECT_START");
+ }
if (state.merge_in_progress)
show_merge_in_progress(s, &state, state_color);
show_cherry_pick_in_progress(s, &state, state_color);
if (state.bisect_in_progress)
show_bisect_in_progress(s, &state, state_color);
+ strbuf_release(&branch);
+ strbuf_release(&onto);
}
void wt_status_print(struct wt_status *s)
int rebase_interactive_in_progress;
int cherry_pick_in_progress;
int bisect_in_progress;
+ const char *branch;
+ const char *onto;
};
void wt_status_prepare(struct wt_status *s);