git-stripspace
git-submodule
git-svn
-git-svnimport
git-symbolic-ref
git-tag
git-tar-tree
Santi Béjar <sbejar@gmail.com>
Sean Estabrooks <seanlkml@sympatico.ca>
Shawn O. Pearce <spearce@spearce.org>
+Steven Grimm <koreth@midwinter.com>
Theodore Ts'o <tytso@mit.edu>
Tony Luck <tony.luck@intel.com>
Uwe Kleine-König <Uwe_Zeisberger@digi.com>
Fixes since v1.5.3.4
--------------------
+ * Comes with git-gui 0.8.4.
+
* "git-config" silently ignored options after --list; now it will
error out with a usage message.
* "git-add -i" did not handle single line hunks correctly.
- * "git-rebase -i" failed if external diff drivers were used for one
- or more files in a commit. It now avoids calling the external
- diff drivers.
+ * "git-rebase -i" and "git-stash apply" failed if external diff
+ drivers were used for one or more files in a commit. They now
+ avoid calling the external diff drivers.
* "git-log --follow" did not work unless diff generation (e.g. -p)
was also requested.
+ * "git-log --follow -B" did not work at all. Fixed.
+
+ * "git-log -M -B" did not correctly handle cases of very large files
+ being renamed and replaced by very small files in the same commit.
+
* "git-log" printed extra newlines between commits when a diff
was generated internally (e.g. -S or --follow) but not displayed.
* "git-push" error message is more helpful when pushing to a
repository with no matching refs and none specified.
+ * "git-push" now respects + (force push) on wildcard refspecs,
+ matching the behavior of git-fetch.
+
* "git-filter-branch" now updates the working directory when it
has finished filtering the current branch.
* "git-instaweb" no longer fails on Mac OS X.
+ * "git-cvsexportcommit" didn't always create new parent directories
+ before trying to create new child directories. Fixed.
+
+ * "git-fetch" printed a scary (but bogus) error message while
+ fetching a tag that pointed to a tree or blob. The error did
+ not impact correctness, only user perception. The bogus error
+ is no longer printed.
+
+ * "git-ls-files --ignored" did not properly descend into non-ignored
+ directories that themselves contained ignored files if d_type
+ was not supported by the filesystem. This bug impacted systems
+ such as AFS. Fixed.
+
+ * Git segfaulted when reading an invalid .gitattributes file. Fixed.
+
 * post-receive-email example hook was fixed for
non-fast-forward updates.
* "make clean" no longer deletes the configure script that ships
with the git tarball, making multiple architecture builds easier.
+
+ * "git-remote show origin" spewed a warning message from Perl
+ when no remote is defined for the current branch via
+ branch.<name>.remote configuration settings.
+
+ * Building with NO_PERL_MAKEMAKER excessively rebuilt contents
+ of perl/ subdirectory by rewriting perl.mak.
+
+ * http.sslVerify configuration settings were not used in scripted
+ Porcelains.
+
+ * "git-add" leaked a bit of memory while scanning for files to add.
+
+ * A few workarounds to squelch false warnings from recent gcc have
+ been added.
+
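+For illustration, a forced (+) wildcard push refspec, as mentioned
+above, might be configured like this (the remote and ref layout are
+hypothetical):
+
+    $ git config remote.origin.push '+refs/heads/*:refs/heads/*'
+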
+--
+exec >/var/tmp/1
+O=v1.5.3.4-55-gf120ae2
+echo O=`git describe refs/heads/maint`
+git shortlog --no-merges $O..refs/heads/maint
Updates since v1.5.3
--------------------
+ * Comes with much improved gitk.
+
* git-reset is now built-in.
* git-send-email can optionally talk over ssmtp and use SMTP-AUTH.
* git-archive can optionally substitute keywords in files marked with
export-subst attribute.
+ * git-for-each-ref learned %(xxxdate:<dateformat>) syntax to
+   show the various date fields in different formats; see the
+   example after this list.
+
+ * git-gc --auto is a low-impact way to automatically run a
+ variant of git-repack that does not lose unreferenced objects
+ (read: safer than the usual one) after the user accumulates
+ too many loose objects.
+
+ * git-push has been rewritten in C.
+
+ * git-push learned --dry-run option to show what would happen
+ if a push is run.
+
+ * git-remote learned "rm" subcommand.
+
+ * git-rebase --interactive mode can now work on detached HEAD.
+
+ * git-cvsserver can be run via git-shell.
+
+ * git-am and git-rebase are far less verbose.
+
+ * git-pull learned to pass --[no-]ff option to underlying git-merge.
+
* Various Perforce importer updates.
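
For illustration, the new for-each-ref date formats mentioned above
might be used like this (format string chosen arbitrarily):

    $ git for-each-ref --format='%(refname) %(committerdate:relative)' refs/heads
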
Fixes since v1.5.3
--
exec >/var/tmp/1
-O=v1.5.3.2-99-ge4b2890
+O=v1.5.3.4-450-g952a9e5
echo O=`git describe refs/heads/master`
git shortlog --no-merges $O..refs/heads/master ^refs/heads/maint
-
git-stripspace purehelpers
git-submodule mainporcelain
git-svn foreignscminterface
-git-svnimport foreignscminterface
git-symbolic-ref plumbingmanipulators
git-tag mainporcelain
git-tar-tree plumbinginterrogators
If this option is not given, `git fetch` defaults to remote "origin".
branch.<name>.merge::
- When in branch <name>, it tells `git fetch` the default refspec to
- be marked for merging in FETCH_HEAD. The value has exactly to match
- a remote part of one of the refspecs which are fetched from the remote
- given by "branch.<name>.remote".
+ When in branch <name>, it tells `git fetch` the default
+ refspec to be marked for merging in FETCH_HEAD. The value is
+ handled like the remote part of a refspec, and must match a
+ ref which is fetched from the remote given by
+ "branch.<name>.remote".
The merge information is used by `git pull` (which first calls
`git fetch`) to look up the default branch for merging. Without
this option, `git pull` defaults to merging the first refspec fetched.
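
For example, a hypothetical topic branch that tracks the "next" branch
of its remote could be configured like this:

------------
$ git config branch.mytopic.remote origin
$ git config branch.mytopic.merge refs/heads/next
------------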
[NOTE]
Most likely, you are not directly using the core
-git Plumbing commands, but using Porcelain like Cogito on top
-of it. Cogito works a bit differently and you usually do not
-have to run `git-update-index` yourself for changed files (you
-do tell underlying git about additions and removals via
-`cg-add` and `cg-rm` commands). Just before you make a commit
-with `cg-commit`, Cogito figures out which files you modified,
-and runs `git-update-index` on them for you.
+git Plumbing commands, but using Porcelain such as `git-add`, `git-rm`
+and `git-commit`.
Tagging a version
and in fact a lot of the common git command combinations can be scripted
with the `git xyz` interfaces. You can learn things by just looking
-at what the various git scripts do. For example, `git reset` is the
-above two lines implemented in `git-reset`, but some things like
+at what the various git scripts do. For example, `git reset` used to be
+the above two lines implemented in `git-reset`, but some things like
`git status` and `git commit` are slightly more complex scripts around
the basic git commands.
$ git branch
------------
-which is nothing more than a simple script around `ls .git/refs/heads`.
-There will be asterisk in front of the branch you are currently on.
+which used to be nothing more than a simple script around `ls .git/refs/heads`.
+There will be an asterisk in front of the branch you are currently on.
Sometimes you may wish to create a new branch _without_ actually
checking it out and switching to it. If so, just use the command
`master` branch, and the second column for the `mybranch`
branch. Three commits are shown along with their log messages.
All of them have non-blank characters in the first column (`*`
-shows an ordinary commit on the current branch, `.` is a merge commit), which
+shows an ordinary commit on the current branch, `-` is a merge commit), which
means they are now part of the `master` branch. Only the "Some
work" commit has the plus `+` character in the second column,
because `mybranch` has not been merged to incorporate these
There are (confusingly enough) `git-ssh-fetch` and `git-ssh-upload`
programs, which are 'commit walkers'; they outlived their
usefulness when git Native and SSH transports were introduced,
-and not used by `git pull` or `git push` scripts.
+and are not used by `git pull` or `git push` scripts.
Once you fetch from the remote repository, you `merge` that
with your current branch.
The command writes the commit object name of the common ancestor
to the standard output, so we captured its output to a variable,
-because we will be using it in the next step. BTW, the common
+because we will be using it in the next step. By the way, the common
ancestor commit is the "New day." commit in this case. You can
tell it by:
convenient to organize your project with an informal hierarchy
of developers. Linux kernel development is run this way. There
is a nice illustration (page 17, "Merges to Mainline") in
-link:http://www.xenotime.net/linux/mentor/linux-mentoring-2006.pdf
-[Randy Dunlap's presentation].
+link:http://www.xenotime.net/linux/mentor/linux-mentoring-2006.pdf[Randy Dunlap's presentation].
It should be stressed that this hierarchy is purely *informal*.
There is nothing fundamental in git that enforces the "chain of
message prior to committing.
-x::
- Cause the command to append which commit was
- cherry-picked after the original commit message when
- making a commit. Do not use this option if you are
- cherry-picking from your private branch because the
- information is useless to the recipient. If on the
+ When recording the commit, append to the original commit
+ message a note that indicates which commit this change
+ was cherry-picked from. Append the note only for cherry
+ picks without conflicts. Do not use this option if
+ you are cherry-picking from your private branch because
+ the information is useless to the recipient. If on the
other hand you are cherry-picking between two publicly
visible branches (e.g. backporting a fix to a
maintenance branch for an older release from a
Users are encouraged to run this task on a regular basis within
each repository to maintain good disk space utilization and good
-operating performance.
+operating performance. Some git commands may automatically run
+`git-gc`; see the `--auto` flag below for details.
OPTIONS
-------
few hundred changesets or so.
--auto::
- With this option, `git gc` checks if there are too many
- loose objects in the repository and runs
- gitlink:git-repack[1] with `-d -l` option to pack them.
- The threshold for loose objects is set with `gc.auto` configuration
- variable, and can be disabled by setting it to 0. Some
- Porcelain commands use this after they perform operation
- that could create many loose objects automatically.
- Additionally, when there are too many packs are present,
- they are consolidated into one larger pack by running
- the `git-repack` command with `-A` option. The
- threshold for number of packs is set with
- `gc.autopacklimit` configuration variable.
+ With this option, `git gc` checks whether any housekeeping is
+ required; if not, it exits without performing any work.
+ Some git commands run `git gc --auto` after performing
+ operations that could create many loose objects.
++
+Housekeeping is required if there are too many loose objects or
+too many packs in the repository. If the number of loose objects
+exceeds the value of the `gc.auto` configuration variable, then
+all loose objects are combined into a single pack using
+`git-repack -d -l`. Setting the value of `gc.auto` to 0
+disables automatic packing of loose objects.
++
+If the number of packs exceeds the value of `gc.autopacklimit`,
+then existing packs (except those marked with a `.keep` file)
+are consolidated into a single pack by using the `-A` option of
+`git-repack`. Setting `gc.autopacklimit` to 0 disables
+automatic consolidation of packs.
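+
+For example, the thresholds might be tuned like this (values are
+illustrative only):
+
+------------
+$ git config gc.auto 2000
+$ git config gc.autopacklimit 10
+------------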
Configuration
-------------
SYNOPSIS
--------
-'git-http-push' [--all] [--force] [--verbose] <url> <ref> [<ref>...]
+'git-http-push' [--all] [--dry-run] [--force] [--verbose] <url> <ref> [<ref>...]
DESCRIPTION
-----------
the remote repository can lose commits; use it with
care.
+--dry-run::
+ Do everything except actually send the updates.
+
--verbose::
Report the list of objects being walked locally and the
list of objects successfully sent to the remote repository.
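
For instance, to preview what would be sent without updating the remote
repository (the URL is a placeholder):

------------
$ git-http-push --dry-run --verbose http://example.com/project.git master
------------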
Format of the file(s) specified in sendemail.aliasesfile. Must be
one of 'mutt', 'mailrc', 'pine', or 'gnus'.
+sendemail.to::
+ Email address (or alias) to always send to.
+
sendemail.cccmd::
Command to execute to generate per patch file specific "Cc:"s.
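
A minimal configuration using these keys might look like this (the
address and command are placeholders):

------------
$ git config sendemail.to git@example.org
$ git config sendemail.cccmd ./cc-script.sh
------------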
+++ /dev/null
-git-svnimport(1)
-================
-v0.1, July 2005
-
-NAME
-----
-git-svnimport - Import a SVN repository into git
-
-
-SYNOPSIS
---------
-[verse]
-'git-svnimport' [ -o <branch-for-HEAD> ] [ -h ] [ -v ] [ -d | -D ]
- [ -C <GIT_repository> ] [ -i ] [ -u ] [-l limit_rev]
- [ -b branch_subdir ] [ -T trunk_subdir ] [ -t tag_subdir ]
- [ -s start_chg ] [ -m ] [ -r ] [ -M regex ]
- [ -I <ignorefile_name> ] [ -A <author_file> ]
- [ -R <repack_each_revs>] [ -P <path_from_trunk> ]
- <SVN_repository_URL> [ <path> ]
-
-
-DESCRIPTION
------------
-Imports a SVN repository into git. It will either create a new
-repository, or incrementally import into an existing one.
-
-SVN access is done by the SVN::Perl module.
-
-git-svnimport assumes that SVN repositories are organized into one
-"trunk" directory where the main development happens, "branches/FOO"
-directories for branches, and "/tags/FOO" directories for tags.
-Other subdirectories are ignored.
-
-git-svnimport creates a file ".git/svn2git", which is required for
-incremental SVN imports.
-
-OPTIONS
--------
--C <target-dir>::
- The GIT repository to import to. If the directory doesn't
- exist, it will be created. Default is the current directory.
-
--s <start_rev>::
- Start importing at this SVN change number. The default is 1.
-+
-When importing incrementally, you might need to edit the .git/svn2git file.
-
--i::
- Import-only: don't perform a checkout after importing. This option
- ensures the working directory and index remain untouched and will
- not create them if they do not exist.
-
--T <trunk_subdir>::
- Name the SVN trunk. Default "trunk".
-
--t <tag_subdir>::
- Name the SVN subdirectory for tags. Default "tags".
-
--b <branch_subdir>::
- Name the SVN subdirectory for branches. Default "branches".
-
--o <branch-for-HEAD>::
- The 'trunk' branch from SVN is imported to the 'origin' branch within
- the git repository. Use this option if you want to import into a
- different branch.
-
--r::
- Prepend 'rX: ' to commit messages, where X is the imported
- subversion revision.
-
--u::
- Replace underscores in tag names with periods.
-
--I <ignorefile_name>::
- Import the svn:ignore directory property to files with this
- name in each directory. (The Subversion and GIT ignore
- syntaxes are similar enough that using the Subversion patterns
- directly with "-I .gitignore" will almost always just work.)
-
--A <author_file>::
- Read a file with lines on the form
-+
-------
- username = User's Full Name <email@addr.es>
-
-------
-+
-and use "User's Full Name <email@addr.es>" as the GIT
-author and committer for Subversion commits made by
-"username". If encountering a commit made by a user not in the
-list, abort.
-+
-For convenience, this data is saved to $GIT_DIR/svn-authors
-each time the -A option is provided, and read from that same
-file each time git-svnimport is run with an existing GIT
-repository without -A.
-
--m::
- Attempt to detect merges based on the commit message. This option
- will enable default regexes that try to capture the name source
- branch name from the commit message.
-
--M <regex>::
- Attempt to detect merges based on the commit message with a custom
- regex. It can be used with -m to also see the default regexes.
- You must escape forward slashes.
-
--l <max_rev>::
- Specify a maximum revision number to pull.
-+
-Formerly, this option controlled how many revisions to pull,
-due to SVN memory leaks. (These have been worked around.)
-
--R <repack_each_revs>::
- Specify how often git repository should be repacked.
-+
-The default value is 1000. git-svnimport will do import in chunks of 1000
-revisions, after each chunk git repository will be repacked. To disable
-this behavior specify some big value here which is mote than number of
-revisions to import.
-
--P <path_from_trunk>::
- Partial import of the SVN tree.
-+
-By default, the whole tree on the SVN trunk (/trunk) is imported.
-'-P my/proj' will import starting only from '/trunk/my/proj'.
-This option is useful when you want to import one project from a
-svn repo which hosts multiple projects under the same trunk.
-
--v::
- Verbosity: let 'svnimport' report what it is doing.
-
--d::
- Use direct HTTP requests if possible. The "<path>" argument is used
- only for retrieving the SVN logs; the path to the contents is
- included in the SVN log.
-
--D::
- Use direct HTTP requests if possible. The "<path>" argument is used
- for retrieving the logs, as well as for the contents.
-+
-There's no safe way to automatically find out which of these options to
-use, so you need to try both. Usually, the one that's wrong will die
-with a 40x error pretty quickly.
-
-<SVN_repository_URL>::
- The URL of the SVN module you want to import. For local
- repositories, use "file:///absolute/path".
-+
-If you're using the "-d" or "-D" option, this is the URL of the SVN
-repository itself; it usually ends in "/svn".
-
-<path>::
- The path to the module you want to check out.
-
--h::
- Print a short usage message and exit.
-
-OUTPUT
-------
-If '-v' is specified, the script reports what it is doing.
-
-Otherwise, success is indicated the Unix way, i.e. by simply exiting with
-a zero exit status.
-
-Author
-------
-Written by Matthias Urlichs <smurf@smurf.noris.de>, with help from
-various participants of the git-list <git@vger.kernel.org>.
-
-Based on a cvs2git script by the same author.
-
-Documentation
---------------
-Documentation by Matthias Urlichs <smurf@smurf.noris.de>.
-
-GIT
----
-Part of the gitlink:git[7] suite
providing generally smoother user experience than the "raw" Core GIT
itself and indeed many other version control systems.
+ Cogito is no longer maintained as most of its functionality
+ is now in core GIT.
+
- *pg* (http://www.spearce.org/category/projects/scm/pg/)
- *StGit* (http://www.procode.org/stgit/)
Stacked GIT provides a quilt-like patch management functionality in the
- GIT environment. You can easily manage your patches in the scope of GIT
+ GIT environment. You can easily manage your patches in the scope of GIT
until they get merged upstream.
link:RelNotes-1.5.3.4.txt[1.5.3.4],
link:RelNotes-1.5.3.3.txt[1.5.3.3],
link:RelNotes-1.5.3.2.txt[1.5.3.2],
- link:RelNotes-1.5.3.1.txt[1.5.3.1].
+ link:RelNotes-1.5.3.1.txt[1.5.3.1],
+ link:RelNotes-1.5.3.txt[1.5.3].
* release notes for
link:RelNotes-1.5.2.5.txt[1.5.2.5],
The "--" is necessary to avoid confusion with the *branch* named
'gitk'
-gitk --max-count=100 --all -- Makefile::
+gitk --max-count=100 --all \-- Makefile::
Show at most 100 changes made to the file 'Makefile'. Instead of only
looking for changes in the current branch look in all branches.
#
# Define NO_SETENV if you don't have setenv in the C library.
#
+# Define NO_MKDTEMP if you don't have mkdtemp in the C library.
+#
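+# (Such knobs are typically passed on the make command line; a
+# hypothetical invocation would be "make NO_MKDTEMP=YesPlease".)
+#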
# Define NO_SYMLINK_HEAD if you never want .git/HEAD to be a symbolic link.
# Enable it on Windows. By default, symrefs are still used.
#
SCRIPT_SH = \
git-bisect.sh git-checkout.sh \
git-clean.sh git-clone.sh git-commit.sh \
- git-fetch.sh \
git-ls-remote.sh \
git-merge-one-file.sh git-mergetool.sh git-parse-remote.sh \
git-pull.sh git-rebase.sh git-rebase--interactive.sh \
SCRIPT_PERL = \
git-add--interactive.perl \
git-archimport.perl git-cvsimport.perl git-relink.perl \
- git-cvsserver.perl git-remote.perl \
- git-svnimport.perl git-cvsexportcommit.perl \
+ git-cvsserver.perl git-remote.perl git-cvsexportcommit.perl \
git-send-email.perl git-svn.perl
SCRIPTS = $(patsubst %.sh,%,$(SCRIPT_SH)) \
# ... and all the rest that could be moved out of bindir to gitexecdir
PROGRAMS = \
git-fetch-pack$X \
- git-hash-object$X git-index-pack$X git-local-fetch$X \
+ git-hash-object$X git-index-pack$X \
git-fast-import$X \
git-daemon$X \
git-merge-index$X git-mktag$X git-mktree$X git-patch-id$X \
git-peek-remote$X git-receive-pack$X \
git-send-pack$X git-shell$X \
- git-show-index$X git-ssh-fetch$X \
- git-ssh-upload$X git-unpack-file$X \
+ git-show-index$X \
+ git-unpack-file$X \
git-update-server-info$X \
git-upload-pack$X \
git-pack-redundant$X git-var$X \
OTHER_PROGRAMS += gitk-wish
endif
-# Backward compatibility -- to be removed after 1.0
-PROGRAMS += git-ssh-pull$X git-ssh-push$X
-
# Set paths to tools early so that they can be used for version tests.
ifndef SHELL_PATH
SHELL_PATH = /bin/sh
run-command.h strbuf.h tag.h tree.h git-compat-util.h revision.h \
tree-walk.h log-tree.h dir.h path-list.h unpack-trees.h builtin.h \
utf8.h reflog-walk.h patch-ids.h attr.h decorate.h progress.h \
- mailmap.h remote.h
+ mailmap.h remote.h transport.h
DIFF_OBJS = \
diff.o diff-lib.o diffcore-break.o diffcore-order.o \
write_or_die.o trace.o list-objects.o grep.o match-trees.o \
alloc.o merge-file.o path-list.o help.o unpack-trees.o $(DIFF_OBJS) \
color.o wt-status.o archive-zip.o archive-tar.o shallow.o utf8.o \
- convert.o attr.o decorate.o progress.o mailmap.o symlinks.o remote.o
+ convert.o attr.o decorate.o progress.o mailmap.o symlinks.o remote.o \
+ transport.o bundle.o walker.o
BUILTIN_OBJS = \
builtin-add.o \
builtin-diff-files.o \
builtin-diff-index.o \
builtin-diff-tree.o \
+ builtin-fetch.o \
+ builtin-fetch-pack.o \
builtin-fetch--tool.o \
builtin-fmt-merge-msg.o \
builtin-for-each-ref.o \
NEEDS_LIBICONV = YesPlease
NO_UNSETENV = YesPlease
NO_SETENV = YesPlease
+ NO_MKDTEMP = YesPlease
NO_C99_FORMAT = YesPlease
NO_STRTOUMAX = YesPlease
endif
ifeq ($(uname_R),5.9)
NO_UNSETENV = YesPlease
NO_SETENV = YesPlease
+ NO_MKDTEMP = YesPlease
NO_C99_FORMAT = YesPlease
NO_STRTOUMAX = YesPlease
endif
CC_LD_DYNPATH = -R
endif
-ifndef NO_CURL
+ifdef NO_CURL
+ BASIC_CFLAGS += -DNO_CURL
+else
ifdef CURLDIR
# Try "-Wl,-rpath=$(CURLDIR)/$(lib)" in such a case.
BASIC_CFLAGS += -I$(CURLDIR)/include
else
CURL_LIBCURL = -lcurl
endif
- PROGRAMS += git-http-fetch$X
+ BUILTIN_OBJS += builtin-http-fetch.o
+ EXTLIBS += $(CURL_LIBCURL)
+ LIB_OBJS += http.o http-walker.o
curl_check := $(shell (echo 070908; curl-config --vernum) | sort -r | sed -ne 2p)
ifeq "$(curl_check)" "070908"
ifndef NO_EXPAT
COMPAT_CFLAGS += -DNO_SETENV
COMPAT_OBJS += compat/setenv.o
endif
+ifdef NO_MKDTEMP
+ COMPAT_CFLAGS += -DNO_MKDTEMP
+ COMPAT_OBJS += compat/mkdtemp.o
+endif
ifdef NO_UNSETENV
COMPAT_CFLAGS += -DNO_UNSETENV
COMPAT_OBJS += compat/unsetenv.o
$(patsubst %.perl,%,$(SCRIPT_PERL)): perl/perl.mak
-perl/perl.mak: GIT-CFLAGS
+perl/perl.mak: GIT-CFLAGS perl/Makefile perl/Makefile.PL
$(QUIET_SUBDIR0)perl $(QUIET_SUBDIR1) PERL_PATH='$(PERL_PATH_SQ)' prefix='$(prefix_SQ)' $(@F)
$(patsubst %.perl,%,$(SCRIPT_PERL)): % : %.perl
$(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) -DGIT_USER_AGENT='"git/$(GIT_VERSION)"' $<
ifdef NO_EXPAT
-http-fetch.o: http-fetch.c http.h GIT-CFLAGS
+http-walker.o: http-walker.c http.h GIT-CFLAGS
$(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) -DNO_EXPAT $<
endif
git-%$X: %.o $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) $(LIBS)
-ssh-pull.o: ssh-fetch.c
-ssh-push.o: ssh-upload.c
-git-local-fetch$X: fetch.o
-git-ssh-fetch$X: rsh.o fetch.o
-git-ssh-upload$X: rsh.o
-git-ssh-pull$X: rsh.o fetch.o
-git-ssh-push$X: rsh.o
-
git-imap-send$X: imap-send.o $(LIB_FILE)
-http.o http-fetch.o http-push.o: http.h
-git-http-fetch$X: fetch.o http.o http-fetch.o $(GITLIBS)
- $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
- $(LIBS) $(CURL_LIBCURL) $(EXPAT_LIBEXPAT)
+http.o http-walker.o http-push.o: http.h
git-http-push$X: revision.o http.o http-push.o $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(LIBS) $(CURL_LIBCURL) $(EXPAT_LIBEXPAT)
-$(LIB_OBJS) $(BUILTIN_OBJS) fetch.o: $(LIB_H)
+$(LIB_OBJS) $(BUILTIN_OBJS): $(LIB_H)
$(patsubst git-%$X,%.o,$(PROGRAMS)): $(LIB_H) $(wildcard */*.h)
$(DIFF_OBJS): diffcore.h
$(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $(XDIFF_OBJS)
-perl/Makefile: perl/Git.pm perl/Makefile.PL GIT-CFLAGS
- (cd perl && $(PERL_PATH) Makefile.PL \
- PREFIX='$(prefix_SQ)')
-
doc:
$(MAKE) -C Documentation all
git-merge-octopus | git-merge-ours | git-merge-recursive | \
git-merge-resolve | git-merge-stupid | \
git-add--interactive | git-fsck-objects | git-init-db | \
- git-repo-config | git-fetch--tool | \
- git-ssh-pull | git-ssh-push ) continue ;; \
+ git-repo-config | git-fetch--tool ) continue ;; \
esac ; \
test -f "Documentation/$$v.txt" || \
echo "no doc: $$v"; \
num_attr = 0;
cp = name + namelen;
cp = cp + strspn(cp, blank);
- while (*cp)
+ while (*cp) {
cp = parse_attr(src, lineno, cp, &num_attr, res);
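+			/* parse_attr reports a malformed line by returning NULL */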
+ if (!cp)
+ return NULL;
+ }
if (pass)
break;
res = xcalloc(1,
die("pathspec '%s' did not match any files",
pathspec[i]);
}
+ free(seen);
}
static void fill_directory(struct dir_struct *dir, const char **pathspec,
if (!seen[i])
die("pathspec '%s' did not match any files", pathspec[i]);
}
+ free(seen);
}
static int git_add_config(const char *var, const char *value)
unsigned int is_rename:1;
struct fragment *fragments;
char *result;
- unsigned long resultsize;
+ size_t resultsize;
char old_sha1_prefix[41];
char new_sha1_prefix[41];
struct patch *next;
buffer = read_sha1_file(sha1, type, sizep);
if (buffer && S_ISREG(mode)) {
struct strbuf buf;
+ size_t size = 0;
strbuf_init(&buf, 0);
strbuf_attach(&buf, buffer, *sizep, *sizep + 1);
convert_to_working_tree(path, buf.buf, buf.len, &buf);
convert_to_archive(path, buf.buf, buf.len, &buf, commit);
- buffer = strbuf_detach(&buf, sizep);
+ buffer = strbuf_detach(&buf, &size);
+ *sizep = size;
}
return buffer;
#include "builtin.h"
#include "cache.h"
-#include "object.h"
-#include "commit.h"
-#include "diff.h"
-#include "revision.h"
-#include "list-objects.h"
-#include "run-command.h"
+#include "bundle.h"
/*
* Basic handler for bundle files to connect repositories via sneakernet.
static const char *bundle_usage="git-bundle (create <bundle> <git-rev-list args> | verify <bundle> | list-heads <bundle> [refname]... | unbundle <bundle> [refname]... )";
-static const char bundle_signature[] = "# v2 git bundle\n";
-
-struct ref_list {
- unsigned int nr, alloc;
- struct ref_list_entry {
- unsigned char sha1[20];
- char *name;
- } *list;
-};
-
-static void add_to_ref_list(const unsigned char *sha1, const char *name,
- struct ref_list *list)
-{
- if (list->nr + 1 >= list->alloc) {
- list->alloc = alloc_nr(list->nr + 1);
- list->list = xrealloc(list->list,
- list->alloc * sizeof(list->list[0]));
- }
- memcpy(list->list[list->nr].sha1, sha1, 20);
- list->list[list->nr].name = xstrdup(name);
- list->nr++;
-}
-
-struct bundle_header {
- struct ref_list prerequisites;
- struct ref_list references;
-};
-
-/* returns an fd */
-static int read_header(const char *path, struct bundle_header *header) {
- char buffer[1024];
- int fd;
- long fpos;
- FILE *ffd = fopen(path, "rb");
-
- if (!ffd)
- return error("could not open '%s'", path);
- if (!fgets(buffer, sizeof(buffer), ffd) ||
- strcmp(buffer, bundle_signature)) {
- fclose(ffd);
- return error("'%s' does not look like a v2 bundle file", path);
- }
- while (fgets(buffer, sizeof(buffer), ffd)
- && buffer[0] != '\n') {
- int is_prereq = buffer[0] == '-';
- int offset = is_prereq ? 1 : 0;
- int len = strlen(buffer);
- unsigned char sha1[20];
- struct ref_list *list = is_prereq ? &header->prerequisites
- : &header->references;
- char delim;
-
- if (buffer[len - 1] == '\n')
- buffer[len - 1] = '\0';
- if (get_sha1_hex(buffer + offset, sha1)) {
- warning("unrecognized header: %s", buffer);
- continue;
- }
- delim = buffer[40 + offset];
- if (!isspace(delim) && (delim != '\0' || !is_prereq))
- die ("invalid header: %s", buffer);
- add_to_ref_list(sha1, isspace(delim) ?
- buffer + 41 + offset : "", list);
- }
- fpos = ftell(ffd);
- fclose(ffd);
- fd = open(path, O_RDONLY);
- if (fd < 0)
- return error("could not open '%s'", path);
- lseek(fd, fpos, SEEK_SET);
- return fd;
-}
-
-static int list_refs(struct ref_list *r, int argc, const char **argv)
-{
- int i;
-
- for (i = 0; i < r->nr; i++) {
- if (argc > 1) {
- int j;
- for (j = 1; j < argc; j++)
- if (!strcmp(r->list[i].name, argv[j]))
- break;
- if (j == argc)
- continue;
- }
- printf("%s %s\n", sha1_to_hex(r->list[i].sha1),
- r->list[i].name);
- }
- return 0;
-}
-
-#define PREREQ_MARK (1u<<16)
-
-static int verify_bundle(struct bundle_header *header, int verbose)
-{
- /*
- * Do fast check, then if any prereqs are missing then go line by line
- * to be verbose about the errors
- */
- struct ref_list *p = &header->prerequisites;
- struct rev_info revs;
- const char *argv[] = {NULL, "--all"};
- struct object_array refs;
- struct commit *commit;
- int i, ret = 0, req_nr;
- const char *message = "Repository lacks these prerequisite commits:";
-
- init_revisions(&revs, NULL);
- for (i = 0; i < p->nr; i++) {
- struct ref_list_entry *e = p->list + i;
- struct object *o = parse_object(e->sha1);
- if (o) {
- o->flags |= PREREQ_MARK;
- add_pending_object(&revs, o, e->name);
- continue;
- }
- if (++ret == 1)
- error(message);
- error("%s %s", sha1_to_hex(e->sha1), e->name);
- }
- if (revs.pending.nr != p->nr)
- return ret;
- req_nr = revs.pending.nr;
- setup_revisions(2, argv, &revs, NULL);
-
- memset(&refs, 0, sizeof(struct object_array));
- for (i = 0; i < revs.pending.nr; i++) {
- struct object_array_entry *e = revs.pending.objects + i;
- add_object_array(e->item, e->name, &refs);
- }
-
- prepare_revision_walk(&revs);
-
- i = req_nr;
- while (i && (commit = get_revision(&revs)))
- if (commit->object.flags & PREREQ_MARK)
- i--;
-
- for (i = 0; i < req_nr; i++)
- if (!(refs.objects[i].item->flags & SHOWN)) {
- if (++ret == 1)
- error(message);
- error("%s %s", sha1_to_hex(refs.objects[i].item->sha1),
- refs.objects[i].name);
- }
-
- for (i = 0; i < refs.nr; i++)
- clear_commit_marks((struct commit *)refs.objects[i].item, -1);
-
- if (verbose) {
- struct ref_list *r;
-
- r = &header->references;
- printf("The bundle contains %d ref%s\n",
- r->nr, (1 < r->nr) ? "s" : "");
- list_refs(r, 0, NULL);
- r = &header->prerequisites;
- printf("The bundle requires these %d ref%s\n",
- r->nr, (1 < r->nr) ? "s" : "");
- list_refs(r, 0, NULL);
- }
- return ret;
-}
-
-static int list_heads(struct bundle_header *header, int argc, const char **argv)
-{
- return list_refs(&header->references, argc, argv);
-}
-
-static int create_bundle(struct bundle_header *header, const char *path,
- int argc, const char **argv)
-{
- static struct lock_file lock;
- int bundle_fd = -1;
- int bundle_to_stdout;
- const char **argv_boundary = xmalloc((argc + 4) * sizeof(const char *));
- const char **argv_pack = xmalloc(5 * sizeof(const char *));
- int i, ref_count = 0;
- char buffer[1024];
- struct rev_info revs;
- struct child_process rls;
- FILE *rls_fout;
-
- bundle_to_stdout = !strcmp(path, "-");
- if (bundle_to_stdout)
- bundle_fd = 1;
- else
- bundle_fd = hold_lock_file_for_update(&lock, path, 1);
-
- /* write signature */
- write_or_die(bundle_fd, bundle_signature, strlen(bundle_signature));
-
- /* init revs to list objects for pack-objects later */
- save_commit_buffer = 0;
- init_revisions(&revs, NULL);
-
- /* write prerequisites */
- memcpy(argv_boundary + 3, argv + 1, argc * sizeof(const char *));
- argv_boundary[0] = "rev-list";
- argv_boundary[1] = "--boundary";
- argv_boundary[2] = "--pretty=oneline";
- argv_boundary[argc + 2] = NULL;
- memset(&rls, 0, sizeof(rls));
- rls.argv = argv_boundary;
- rls.out = -1;
- rls.git_cmd = 1;
- if (start_command(&rls))
- return -1;
- rls_fout = fdopen(rls.out, "r");
- while (fgets(buffer, sizeof(buffer), rls_fout)) {
- unsigned char sha1[20];
- if (buffer[0] == '-') {
- write_or_die(bundle_fd, buffer, strlen(buffer));
- if (!get_sha1_hex(buffer + 1, sha1)) {
- struct object *object = parse_object(sha1);
- object->flags |= UNINTERESTING;
- add_pending_object(&revs, object, buffer);
- }
- } else if (!get_sha1_hex(buffer, sha1)) {
- struct object *object = parse_object(sha1);
- object->flags |= SHOWN;
- }
- }
- fclose(rls_fout);
- if (finish_command(&rls))
- return error("rev-list died");
-
- /* write references */
- argc = setup_revisions(argc, argv, &revs, NULL);
- if (argc > 1)
- return error("unrecognized argument: %s'", argv[1]);
-
- for (i = 0; i < revs.pending.nr; i++) {
- struct object_array_entry *e = revs.pending.objects + i;
- unsigned char sha1[20];
- char *ref;
-
- if (e->item->flags & UNINTERESTING)
- continue;
- if (dwim_ref(e->name, strlen(e->name), sha1, &ref) != 1)
- continue;
- /*
- * Make sure the refs we wrote out is correct; --max-count and
- * other limiting options could have prevented all the tips
- * from getting output.
- *
- * Non commit objects such as tags and blobs do not have
- * this issue as they are not affected by those extra
- * constraints.
- */
- if (!(e->item->flags & SHOWN) && e->item->type == OBJ_COMMIT) {
- warning("ref '%s' is excluded by the rev-list options",
- e->name);
- free(ref);
- continue;
- }
- /*
- * If you run "git bundle create bndl v1.0..v2.0", the
- * name of the positive ref is "v2.0" but that is the
- * commit that is referenced by the tag, and not the tag
- * itself.
- */
- if (hashcmp(sha1, e->item->sha1)) {
- /*
- * Is this the positive end of a range expressed
- * in terms of a tag (e.g. v2.0 from the range
- * "v1.0..v2.0")?
- */
- struct commit *one = lookup_commit_reference(sha1);
- struct object *obj;
-
- if (e->item == &(one->object)) {
- /*
- * Need to include e->name as an
- * independent ref to the pack-objects
- * input, so that the tag is included
- * in the output; otherwise we would
- * end up triggering "empty bundle"
- * error.
- */
- obj = parse_object(sha1);
- obj->flags |= SHOWN;
- add_pending_object(&revs, obj, e->name);
- }
- free(ref);
- continue;
- }
-
- ref_count++;
- write_or_die(bundle_fd, sha1_to_hex(e->item->sha1), 40);
- write_or_die(bundle_fd, " ", 1);
- write_or_die(bundle_fd, ref, strlen(ref));
- write_or_die(bundle_fd, "\n", 1);
- free(ref);
- }
- if (!ref_count)
- die ("Refusing to create empty bundle.");
-
- /* end header */
- write_or_die(bundle_fd, "\n", 1);
-
- /* write pack */
- argv_pack[0] = "pack-objects";
- argv_pack[1] = "--all-progress";
- argv_pack[2] = "--stdout";
- argv_pack[3] = "--thin";
- argv_pack[4] = NULL;
- memset(&rls, 0, sizeof(rls));
- rls.argv = argv_pack;
- rls.in = -1;
- rls.out = bundle_fd;
- rls.git_cmd = 1;
- if (start_command(&rls))
- return error("Could not spawn pack-objects");
- for (i = 0; i < revs.pending.nr; i++) {
- struct object *object = revs.pending.objects[i].item;
- if (object->flags & UNINTERESTING)
- write(rls.in, "^", 1);
- write(rls.in, sha1_to_hex(object->sha1), 40);
- write(rls.in, "\n", 1);
- }
- if (finish_command(&rls))
- return error ("pack-objects died");
- close(bundle_fd);
- if (!bundle_to_stdout)
- commit_lock_file(&lock);
- return 0;
-}
-
-static int unbundle(struct bundle_header *header, int bundle_fd,
- int argc, const char **argv)
-{
- const char *argv_index_pack[] = {"index-pack",
- "--fix-thin", "--stdin", NULL};
- struct child_process ip;
-
- if (verify_bundle(header, 0))
- return -1;
- memset(&ip, 0, sizeof(ip));
- ip.argv = argv_index_pack;
- ip.in = bundle_fd;
- ip.no_stdout = 1;
- ip.git_cmd = 1;
- if (run_command(&ip))
- return error("index-pack died");
- return list_heads(header, argc, argv);
-}
-
int cmd_bundle(int argc, const char **argv, const char *prefix)
{
struct bundle_header header;
}
memset(&header, 0, sizeof(header));
- if (strcmp(cmd, "create") &&
- (bundle_fd = read_header(bundle_file, &header)) < 0)
+ if (strcmp(cmd, "create") && (bundle_fd =
+ read_bundle_header(bundle_file, &header)) < 0)
return 1;
if (!strcmp(cmd, "verify")) {
}
if (!strcmp(cmd, "list-heads")) {
close(bundle_fd);
- return !!list_heads(&header, argc, argv);
+ return !!list_bundle_refs(&header, argc, argv);
}
if (!strcmp(cmd, "create")) {
if (nongit)
} else if (!strcmp(cmd, "unbundle")) {
if (nongit)
die("Need a repository to unbundle.");
- return !!unbundle(&header, bundle_fd, argc, argv);
+ return !!unbundle(&header, bundle_fd) ||
+ list_bundle_refs(&header, argc, argv);
} else
usage(bundle_usage);
}
unsigned char *oldval)
{
char msg[1024];
- char *rla = getenv("GIT_REFLOG_ACTION");
+ const char *rla = getenv("GIT_REFLOG_ACTION");
if (!rla)
rla = "(reflog update)";
}
if (get_sha1(name, sha1_old)) {
- char *msg;
+ const char *msg;
just_store:
/* new ref */
if (!strncmp(name, "refs/tags/", 10))
if (get_sha1(head, sha1))
return error("Not a valid object name: %s", head);
- commit = lookup_commit_reference(sha1);
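+	/* the ref may point at a tree or blob; do not complain here */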
+ commit = lookup_commit_reference_gently(sha1, 1);
if (!commit)
not_for_merge = 1;
--- /dev/null
+#include "cache.h"
+#include "refs.h"
+#include "pkt-line.h"
+#include "commit.h"
+#include "tag.h"
+#include "exec_cmd.h"
+#include "pack.h"
+#include "sideband.h"
+#include "fetch-pack.h"
+
+static int transfer_unpack_limit = -1;
+static int fetch_unpack_limit = -1;
+static int unpack_limit = 100;
+static struct fetch_pack_args args = {
+ /* .uploadpack = */ "git-upload-pack",
+};
+
+static const char fetch_pack_usage[] =
+"git-fetch-pack [--all] [--quiet|-q] [--keep|-k] [--thin] [--upload-pack=<git-upload-pack>] [--depth=<n>] [--no-progress] [-v] [<host>:]<directory> [<refs>...]";
+
+#define COMPLETE	(1U << 0)	/* object is known to be reachable locally */
+#define COMMON		(1U << 1)	/* both ends are known to have the object */
+#define COMMON_REF	(1U << 2)	/* remote ref tip that is already common */
+#define SEEN		(1U << 3)	/* commit has been pushed onto rev_list */
+#define POPPED		(1U << 4)	/* commit has been popped off rev_list */
+
+/*
+ * After sending this many "have"s, if we do not get any new ACK, we
+ * give up traversing our history.
+ */
+#define MAX_IN_VAIN 256
+
+static struct commit_list *rev_list;
+static int non_common_revs, multi_ack, use_thin_pack, use_sideband;
+
+static void rev_list_push(struct commit *commit, int mark)
+{
+ if (!(commit->object.flags & mark)) {
+ commit->object.flags |= mark;
+
+ if (!(commit->object.parsed))
+ parse_commit(commit);
+
+ insert_by_date(commit, &rev_list);
+
+ if (!(commit->object.flags & COMMON))
+ non_common_revs++;
+ }
+}
+
+static int rev_list_insert_ref(const char *path, const unsigned char *sha1, int flag, void *cb_data)
+{
+ struct object *o = deref_tag(parse_object(sha1), path, 0);
+
+ if (o && o->type == OBJ_COMMIT)
+ rev_list_push((struct commit *)o, SEEN);
+
+ return 0;
+}
+
+/*
+ This function marks a rev and its ancestors as common.
+ In some cases, it is desirable to mark only the ancestors (for example
+ when only the server does not yet know that they are common).
+*/
+
+static void mark_common(struct commit *commit,
+ int ancestors_only, int dont_parse)
+{
+ if (commit != NULL && !(commit->object.flags & COMMON)) {
+ struct object *o = (struct object *)commit;
+
+ if (!ancestors_only)
+ o->flags |= COMMON;
+
+ if (!(o->flags & SEEN))
+ rev_list_push(commit, SEEN);
+ else {
+ struct commit_list *parents;
+
+ if (!ancestors_only && !(o->flags & POPPED))
+ non_common_revs--;
+ if (!o->parsed && !dont_parse)
+ parse_commit(commit);
+
+ for (parents = commit->parents;
+ parents;
+ parents = parents->next)
+ mark_common(parents->item, 0, dont_parse);
+ }
+ }
+}
+
+/*
+ Get the next rev to send, ignoring the common.
+*/
+
+static const unsigned char* get_rev(void)
+{
+ struct commit *commit = NULL;
+
+ while (commit == NULL) {
+ unsigned int mark;
+ struct commit_list* parents;
+
+ if (rev_list == NULL || non_common_revs == 0)
+ return NULL;
+
+ commit = rev_list->item;
+ if (!(commit->object.parsed))
+ parse_commit(commit);
+ commit->object.flags |= POPPED;
+ if (!(commit->object.flags & COMMON))
+ non_common_revs--;
+
+ parents = commit->parents;
+
+ if (commit->object.flags & COMMON) {
+ /* do not send "have", and ignore ancestors */
+ commit = NULL;
+ mark = COMMON | SEEN;
+ } else if (commit->object.flags & COMMON_REF)
+ /* send "have", and ignore ancestors */
+ mark = COMMON | SEEN;
+ else
+ /* send "have", also for its ancestors */
+ mark = SEEN;
+
+ while (parents) {
+ if (!(parents->item->object.flags & SEEN))
+ rev_list_push(parents->item, mark);
+ if (mark & COMMON)
+ mark_common(parents->item, 1, 0);
+ parents = parents->next;
+ }
+
+ rev_list = rev_list->next;
+ }
+
+ return commit->object.sha1;
+}
+
+static int find_common(int fd[2], unsigned char *result_sha1,
+ struct ref *refs)
+{
+ int fetching;
+ int count = 0, flushes = 0, retval;
+ const unsigned char *sha1;
+ unsigned in_vain = 0;
+ int got_continue = 0;
+
+ for_each_ref(rev_list_insert_ref, NULL);
+
+ fetching = 0;
+ for ( ; refs ; refs = refs->next) {
+ unsigned char *remote = refs->old_sha1;
+ struct object *o;
+
+ /*
+ * If that object is complete (i.e. it is an ancestor of a
+ * local ref), we tell them we have it but do not have to
+ * tell them about its ancestors, which they already know
+ * about.
+ *
+ * We use lookup_object here because we are only
+ * interested in the case we *know* the object is
+ * reachable and we have already scanned it.
+ */
+ if (((o = lookup_object(remote)) != NULL) &&
+ (o->flags & COMPLETE)) {
+ continue;
+ }
+
+ if (!fetching)
+ packet_write(fd[1], "want %s%s%s%s%s%s%s\n",
+ sha1_to_hex(remote),
+ (multi_ack ? " multi_ack" : ""),
+ (use_sideband == 2 ? " side-band-64k" : ""),
+ (use_sideband == 1 ? " side-band" : ""),
+ (use_thin_pack ? " thin-pack" : ""),
+ (args.no_progress ? " no-progress" : ""),
+ " ofs-delta");
+ else
+ packet_write(fd[1], "want %s\n", sha1_to_hex(remote));
+ fetching++;
+ }
+ if (is_repository_shallow())
+ write_shallow_commits(fd[1], 1);
+ if (args.depth > 0)
+ packet_write(fd[1], "deepen %d", args.depth);
+ packet_flush(fd[1]);
+ if (!fetching)
+ return 1;
+
+ if (args.depth > 0) {
+ char line[1024];
+ unsigned char sha1[20];
+ int len;
+
+ while ((len = packet_read_line(fd[0], line, sizeof(line)))) {
+ if (!prefixcmp(line, "shallow ")) {
+ if (get_sha1_hex(line + 8, sha1))
+ die("invalid shallow line: %s", line);
+ register_shallow(sha1);
+ continue;
+ }
+ if (!prefixcmp(line, "unshallow ")) {
+ if (get_sha1_hex(line + 10, sha1))
+ die("invalid unshallow line: %s", line);
+ if (!lookup_object(sha1))
+ die("object not found: %s", line);
+ /* make sure that it is parsed as shallow */
+ parse_object(sha1);
+ if (unregister_shallow(sha1))
+ die("no shallow found: %s", line);
+ continue;
+ }
+ die("expected shallow/unshallow, got %s", line);
+ }
+ }
+
+ flushes = 0;
+ retval = -1;
+ while ((sha1 = get_rev())) {
+ packet_write(fd[1], "have %s\n", sha1_to_hex(sha1));
+ if (args.verbose)
+ fprintf(stderr, "have %s\n", sha1_to_hex(sha1));
+ in_vain++;
+ if (!(31 & ++count)) {
+ int ack;
+
+ packet_flush(fd[1]);
+ flushes++;
+
+ /*
+ * We keep one window "ahead" of the other side, and
+ * will wait for an ACK only on the next one
+ */
+ if (count == 32)
+ continue;
+
+ do {
+ ack = get_ack(fd[0], result_sha1);
+ if (args.verbose && ack)
+ fprintf(stderr, "got ack %d %s\n", ack,
+ sha1_to_hex(result_sha1));
+ if (ack == 1) {
+ flushes = 0;
+ multi_ack = 0;
+ retval = 0;
+ goto done;
+ } else if (ack == 2) {
+ struct commit *commit =
+ lookup_commit(result_sha1);
+ mark_common(commit, 0, 1);
+ retval = 0;
+ in_vain = 0;
+ got_continue = 1;
+ }
+ } while (ack);
+ flushes--;
+ if (got_continue && MAX_IN_VAIN < in_vain) {
+ if (args.verbose)
+ fprintf(stderr, "giving up\n");
+ break; /* give up */
+ }
+ }
+ }
+done:
+ packet_write(fd[1], "done\n");
+ if (args.verbose)
+ fprintf(stderr, "done\n");
+ if (retval != 0) {
+ multi_ack = 0;
+ flushes++;
+ }
+ while (flushes || multi_ack) {
+ int ack = get_ack(fd[0], result_sha1);
+ if (ack) {
+ if (args.verbose)
+ fprintf(stderr, "got ack (%d) %s\n", ack,
+ sha1_to_hex(result_sha1));
+ if (ack == 1)
+ return 0;
+ multi_ack = 1;
+ continue;
+ }
+ flushes--;
+ }
+ return retval;
+}
+
+static struct commit_list *complete;
+
+static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
+{
+ struct object *o = parse_object(sha1);
+
+ while (o && o->type == OBJ_TAG) {
+ struct tag *t = (struct tag *) o;
+ if (!t->tagged)
+ break; /* broken repository */
+ o->flags |= COMPLETE;
+ o = parse_object(t->tagged->sha1);
+ }
+ if (o && o->type == OBJ_COMMIT) {
+ struct commit *commit = (struct commit *)o;
+ commit->object.flags |= COMPLETE;
+ insert_by_date(commit, &complete);
+ }
+ return 0;
+}
+
+static void mark_recent_complete_commits(unsigned long cutoff)
+{
+ while (complete && cutoff <= complete->item->date) {
+ if (args.verbose)
+ fprintf(stderr, "Marking %s as complete\n",
+ sha1_to_hex(complete->item->object.sha1));
+ pop_most_recent_commit(&complete, COMPLETE);
+ }
+}
+
+static void filter_refs(struct ref **refs, int nr_match, char **match)
+{
+ struct ref **return_refs;
+ struct ref *newlist = NULL;
+ struct ref **newtail = &newlist;
+ struct ref *ref, *next;
+ struct ref *fastarray[32];
+
+ if (nr_match && !args.fetch_all) {
+ if (ARRAY_SIZE(fastarray) < nr_match)
+ return_refs = xcalloc(nr_match, sizeof(struct ref *));
+ else {
+ return_refs = fastarray;
+ memset(return_refs, 0, sizeof(struct ref *) * nr_match);
+ }
+ }
+ else
+ return_refs = NULL;
+
+ for (ref = *refs; ref; ref = next) {
+ next = ref->next;
+ if (!memcmp(ref->name, "refs/", 5) &&
+ check_ref_format(ref->name + 5))
+ ; /* trash */
+ else if (args.fetch_all &&
+ (!args.depth || prefixcmp(ref->name, "refs/tags/") )) {
+ *newtail = ref;
+ ref->next = NULL;
+ newtail = &ref->next;
+ continue;
+ }
+ else {
+ int order = path_match(ref->name, nr_match, match);
+ if (order) {
+ return_refs[order-1] = ref;
+ continue; /* we will link it later */
+ }
+ }
+ free(ref);
+ }
+
+ if (!args.fetch_all) {
+ int i;
+ for (i = 0; i < nr_match; i++) {
+ ref = return_refs[i];
+ if (ref) {
+ *newtail = ref;
+ ref->next = NULL;
+ newtail = &ref->next;
+ }
+ }
+ if (return_refs != fastarray)
+ free(return_refs);
+ }
+ *refs = newlist;
+}
+
+static int everything_local(struct ref **refs, int nr_match, char **match)
+{
+ struct ref *ref;
+ int retval;
+ unsigned long cutoff = 0;
+
+ track_object_refs = 0;
+ save_commit_buffer = 0;
+
+ for (ref = *refs; ref; ref = ref->next) {
+ struct object *o;
+
+ o = parse_object(ref->old_sha1);
+ if (!o)
+ continue;
+
+ /* We already have it -- which may mean that we were
+ * in sync with the other side at some time after
+ * that (it is OK if we guess wrong here).
+ */
+ if (o->type == OBJ_COMMIT) {
+ struct commit *commit = (struct commit *)o;
+ if (!cutoff || cutoff < commit->date)
+ cutoff = commit->date;
+ }
+ }
+
+ if (!args.depth) {
+ for_each_ref(mark_complete, NULL);
+ if (cutoff)
+ mark_recent_complete_commits(cutoff);
+ }
+
+ /*
+ * Mark all complete remote refs as common refs.
+ * Don't mark them common yet; the server has to be told so first.
+ */
+ for (ref = *refs; ref; ref = ref->next) {
+ struct object *o = deref_tag(lookup_object(ref->old_sha1),
+ NULL, 0);
+
+ if (!o || o->type != OBJ_COMMIT || !(o->flags & COMPLETE))
+ continue;
+
+ if (!(o->flags & SEEN)) {
+ rev_list_push((struct commit *)o, COMMON_REF | SEEN);
+
+ mark_common((struct commit *)o, 1, 1);
+ }
+ }
+
+ filter_refs(refs, nr_match, match);
+
+ for (retval = 1, ref = *refs; ref ; ref = ref->next) {
+ const unsigned char *remote = ref->old_sha1;
+ struct object *o;
+
+ o = lookup_object(remote);
+ if (!o || !(o->flags & COMPLETE)) {
+ retval = 0;
+ if (!args.verbose)
+ continue;
+ fprintf(stderr,
+ "want %s (%s)\n", sha1_to_hex(remote),
+ ref->name);
+ continue;
+ }
+
+		hashcpy(ref->new_sha1, remote);
+ if (!args.verbose)
+ continue;
+ fprintf(stderr,
+ "already have %s (%s)\n", sha1_to_hex(remote),
+ ref->name);
+ }
+ return retval;
+}
+
+static pid_t setup_sideband(int fd[2], int xd[2])
+{
+ pid_t side_pid;
+
+ if (!use_sideband) {
+ fd[0] = xd[0];
+ fd[1] = xd[1];
+ return 0;
+ }
+ /* xd[] is talking with upload-pack; subprocess reads from
+ * xd[0], spits out band#2 to stderr, and feeds us band#1
+ * through our fd[0].
+ */
+ if (pipe(fd) < 0)
+ die("fetch-pack: unable to set up pipe");
+ side_pid = fork();
+ if (side_pid < 0)
+ die("fetch-pack: unable to fork off sideband demultiplexer");
+ if (!side_pid) {
+ /* subprocess */
+ close(fd[0]);
+ if (xd[0] != xd[1])
+ close(xd[1]);
+ if (recv_sideband("fetch-pack", xd[0], fd[1], 2))
+ exit(1);
+ exit(0);
+ }
+ close(xd[0]);
+ close(fd[1]);
+ fd[1] = xd[1];
+ return side_pid;
+}
+
+static int get_pack(int xd[2], char **pack_lockfile)
+{
+ int status;
+ pid_t pid, side_pid;
+ int fd[2];
+ const char *argv[20];
+ char keep_arg[256];
+ char hdr_arg[256];
+ const char **av;
+ int do_keep = args.keep_pack;
+ int keep_pipe[2];
+
+ side_pid = setup_sideband(fd, xd);
+
+ av = argv;
+ *hdr_arg = 0;
+ if (!args.keep_pack && unpack_limit) {
+ struct pack_header header;
+
+ if (read_pack_header(fd[0], &header))
+ die("protocol error: bad pack header");
+ snprintf(hdr_arg, sizeof(hdr_arg), "--pack_header=%u,%u",
+ ntohl(header.hdr_version), ntohl(header.hdr_entries));
+ if (ntohl(header.hdr_entries) < unpack_limit)
+ do_keep = 0;
+ else
+ do_keep = 1;
+ }
+
+ if (do_keep) {
+ if (pack_lockfile && pipe(keep_pipe))
+ die("fetch-pack: pipe setup failure: %s", strerror(errno));
+ *av++ = "index-pack";
+ *av++ = "--stdin";
+ if (!args.quiet && !args.no_progress)
+ *av++ = "-v";
+ if (args.use_thin_pack)
+ *av++ = "--fix-thin";
+ if (args.lock_pack || unpack_limit) {
+ int s = sprintf(keep_arg,
+ "--keep=fetch-pack %d on ", getpid());
+ if (gethostname(keep_arg + s, sizeof(keep_arg) - s))
+ strcpy(keep_arg + s, "localhost");
+ *av++ = keep_arg;
+ }
+ }
+ else {
+ *av++ = "unpack-objects";
+ if (args.quiet)
+ *av++ = "-q";
+ }
+ if (*hdr_arg)
+ *av++ = hdr_arg;
+ *av++ = NULL;
+
+ pid = fork();
+ if (pid < 0)
+ die("fetch-pack: unable to fork off %s", argv[0]);
+ if (!pid) {
+ dup2(fd[0], 0);
+ if (do_keep && pack_lockfile) {
+ dup2(keep_pipe[1], 1);
+ close(keep_pipe[0]);
+ close(keep_pipe[1]);
+ }
+ close(fd[0]);
+ close(fd[1]);
+ execv_git_cmd(argv);
+ die("%s exec failed", argv[0]);
+ }
+ close(fd[0]);
+ close(fd[1]);
+ if (do_keep && pack_lockfile) {
+ close(keep_pipe[1]);
+ *pack_lockfile = index_pack_lockfile(keep_pipe[0]);
+ close(keep_pipe[0]);
+ }
+ while (waitpid(pid, &status, 0) < 0) {
+ if (errno != EINTR)
+ die("waiting for %s: %s", argv[0], strerror(errno));
+ }
+ if (WIFEXITED(status)) {
+ int code = WEXITSTATUS(status);
+ if (code)
+ die("%s died with error code %d", argv[0], code);
+ return 0;
+ }
+ if (WIFSIGNALED(status)) {
+ int sig = WTERMSIG(status);
+ die("%s died of signal %d", argv[0], sig);
+ }
+ die("%s died of unnatural causes %d", argv[0], status);
+}
+
+static struct ref *do_fetch_pack(int fd[2],
+ int nr_match,
+ char **match,
+ char **pack_lockfile)
+{
+ struct ref *ref;
+ unsigned char sha1[20];
+
+ get_remote_heads(fd[0], &ref, 0, NULL, 0);
+ if (is_repository_shallow() && !server_supports("shallow"))
+ die("Server does not support shallow clients");
+ if (server_supports("multi_ack")) {
+ if (args.verbose)
+ fprintf(stderr, "Server supports multi_ack\n");
+ multi_ack = 1;
+ }
+ if (server_supports("side-band-64k")) {
+ if (args.verbose)
+ fprintf(stderr, "Server supports side-band-64k\n");
+ use_sideband = 2;
+ }
+ else if (server_supports("side-band")) {
+ if (args.verbose)
+ fprintf(stderr, "Server supports side-band\n");
+ use_sideband = 1;
+ }
+ if (!ref) {
+ packet_flush(fd[1]);
+ die("no matching remote head");
+ }
+ if (everything_local(&ref, nr_match, match)) {
+ packet_flush(fd[1]);
+ goto all_done;
+ }
+ if (find_common(fd, sha1, ref) < 0)
+ if (!args.keep_pack)
+ /* When cloning, it is not unusual to have
+ * no common commit.
+ */
+ fprintf(stderr, "warning: no common commits\n");
+
+ if (get_pack(fd, pack_lockfile))
+ die("git-fetch-pack: fetch failed.");
+
+ all_done:
+ return ref;
+}
+
+static int remove_duplicates(int nr_heads, char **heads)
+{
+ int src, dst;
+
+ for (src = dst = 0; src < nr_heads; src++) {
+ /* If heads[src] is different from any of
+ * heads[0..dst], push it in.
+ */
+ int i;
+ for (i = 0; i < dst; i++) {
+ if (!strcmp(heads[i], heads[src]))
+ break;
+ }
+ if (i < dst)
+ continue;
+ if (src != dst)
+ heads[dst] = heads[src];
+ dst++;
+ }
+ return dst;
+}
+
+static int fetch_pack_config(const char *var, const char *value)
+{
+ if (strcmp(var, "fetch.unpacklimit") == 0) {
+ fetch_unpack_limit = git_config_int(var, value);
+ return 0;
+ }
+
+ if (strcmp(var, "transfer.unpacklimit") == 0) {
+ transfer_unpack_limit = git_config_int(var, value);
+ return 0;
+ }
+
+ return git_default_config(var, value);
+}
+
+static struct lock_file lock;
+
+static void fetch_pack_setup(void)
+{
+ static int did_setup;
+ if (did_setup)
+ return;
+ git_config(fetch_pack_config);
+ if (0 <= transfer_unpack_limit)
+ unpack_limit = transfer_unpack_limit;
+ else if (0 <= fetch_unpack_limit)
+ unpack_limit = fetch_unpack_limit;
+ did_setup = 1;
+}
+
+int cmd_fetch_pack(int argc, const char **argv, const char *prefix)
+{
+ int i, ret, nr_heads;
+ struct ref *ref;
+ char *dest = NULL, **heads;
+
+ nr_heads = 0;
+ heads = NULL;
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+
+ if (*arg == '-') {
+ if (!prefixcmp(arg, "--upload-pack=")) {
+ args.uploadpack = arg + 14;
+ continue;
+ }
+ if (!prefixcmp(arg, "--exec=")) {
+ args.uploadpack = arg + 7;
+ continue;
+ }
+ if (!strcmp("--quiet", arg) || !strcmp("-q", arg)) {
+ args.quiet = 1;
+ continue;
+ }
+ if (!strcmp("--keep", arg) || !strcmp("-k", arg)) {
+ args.lock_pack = args.keep_pack;
+ args.keep_pack = 1;
+ continue;
+ }
+ if (!strcmp("--thin", arg)) {
+ args.use_thin_pack = 1;
+ continue;
+ }
+ if (!strcmp("--all", arg)) {
+ args.fetch_all = 1;
+ continue;
+ }
+ if (!strcmp("-v", arg)) {
+ args.verbose = 1;
+ continue;
+ }
+ if (!prefixcmp(arg, "--depth=")) {
+ args.depth = strtol(arg + 8, NULL, 0);
+ continue;
+ }
+ if (!strcmp("--no-progress", arg)) {
+ args.no_progress = 1;
+ continue;
+ }
+ usage(fetch_pack_usage);
+ }
+ dest = (char *)arg;
+ heads = (char **)(argv + i + 1);
+ nr_heads = argc - i - 1;
+ break;
+ }
+ if (!dest)
+ usage(fetch_pack_usage);
+
+ ref = fetch_pack(&args, dest, nr_heads, heads, NULL);
+ ret = !ref;
+
+ while (ref) {
+ printf("%s %s\n",
+ sha1_to_hex(ref->old_sha1), ref->name);
+ ref = ref->next;
+ }
+
+ return ret;
+}
+
+struct ref *fetch_pack(struct fetch_pack_args *my_args,
+ const char *dest,
+ int nr_heads,
+ char **heads,
+ char **pack_lockfile)
+{
+ int i, ret;
+ int fd[2];
+ pid_t pid;
+ struct ref *ref;
+ struct stat st;
+
+ fetch_pack_setup();
+ memcpy(&args, my_args, sizeof(args));
+ if (args.depth > 0) {
+ if (stat(git_path("shallow"), &st))
+ st.st_mtime = 0;
+ }
+
+ pid = git_connect(fd, (char *)dest, args.uploadpack,
+ args.verbose ? CONNECT_VERBOSE : 0);
+ if (pid < 0)
+ return NULL;
+ if (heads && nr_heads)
+ nr_heads = remove_duplicates(nr_heads, heads);
+ ref = do_fetch_pack(fd, nr_heads, heads, pack_lockfile);
+ close(fd[0]);
+ close(fd[1]);
+ ret = finish_connect(pid);
+
+ if (!ret && nr_heads) {
+ /* If the heads to pull were given, we should have
+ * consumed all of them by matching the remote.
+ * Otherwise, 'git-fetch remote no-such-ref' would
+ * silently succeed without issuing an error.
+ */
+ for (i = 0; i < nr_heads; i++)
+ if (heads[i] && heads[i][0]) {
+ error("no such remote ref %s", heads[i]);
+ ret = 1;
+ }
+ }
+
+ if (!ret && args.depth > 0) {
+ struct cache_time mtime;
+ char *shallow = git_path("shallow");
+ int fd;
+
+ mtime.sec = st.st_mtime;
+#ifdef USE_NSEC
+ mtime.usec = st.st_mtim.usec;
+#endif
+ if (stat(shallow, &st)) {
+ if (mtime.sec)
+ die("shallow file was removed during fetch");
+ } else if (st.st_mtime != mtime.sec
+#ifdef USE_NSEC
+ || st.st_mtim.usec != mtime.usec
+#endif
+ )
+ die("shallow file was changed during fetch");
+
+ fd = hold_lock_file_for_update(&lock, shallow, 1);
+ if (!write_shallow_commits(fd, 0)) {
+ unlink(shallow);
+ rollback_lock_file(&lock);
+ } else {
+ close(fd);
+ commit_lock_file(&lock);
+ }
+ }
+
+ if (ret)
+ ref = NULL;
+
+ return ref;
+}
--- /dev/null
+/*
+ * "git fetch"
+ */
+#include "cache.h"
+#include "refs.h"
+#include "commit.h"
+#include "builtin.h"
+#include "path-list.h"
+#include "remote.h"
+#include "transport.h"
+
+static const char fetch_usage[] = "git-fetch [-a | --append] [--upload-pack <upload-pack>] [-f | --force] [--no-tags] [-t | --tags] [-k | --keep] [-u | --update-head-ok] [--depth <depth>] [-v | --verbose] [<repository> <refspec>...]";
+
+static int append, force, tags, no_tags, update_head_ok, verbose, quiet;
+static char *default_rla = NULL;
+static struct transport *transport;
+
+static void unlock_pack(void)
+{
+ if (transport)
+ transport_unlock_pack(transport);
+}
+
+static void unlock_pack_on_signal(int signo)
+{
+ unlock_pack();
+ signal(SIGINT, SIG_DFL);
+ raise(signo);
+}
+
+static void add_merge_config(struct ref **head,
+ struct ref *remote_refs,
+ struct branch *branch,
+ struct ref ***tail)
+{
+ int i;
+
+ for (i = 0; i < branch->merge_nr; i++) {
+ struct ref *rm, **old_tail = *tail;
+ struct refspec refspec;
+
+ for (rm = *head; rm; rm = rm->next) {
+ if (branch_merge_matches(branch, i, rm->name)) {
+ rm->merge = 1;
+ break;
+ }
+ }
+ if (rm)
+ continue;
+
+ /*
+ * Not fetched to a tracking branch? We need to fetch
+ * it anyway to allow this branch's "branch.$name.merge"
+ * to be honored by git-pull, but we do not have to
+ * fail if branch.$name.merge is misconfigured to point
+ * at a nonexisting branch. If we were indeed called by
+ * git-pull, it will notice the misconfiguration because
+ * there is no entry in the resulting FETCH_HEAD marked
+ * for merging.
+ */
+ refspec.src = branch->merge[i]->src;
+ refspec.dst = NULL;
+ refspec.pattern = 0;
+ refspec.force = 0;
+ get_fetch_map(remote_refs, &refspec, tail, 1);
+ for (rm = *old_tail; rm; rm = rm->next)
+ rm->merge = 1;
+ }
+}
+
+static struct ref *get_ref_map(struct transport *transport,
+ struct refspec *refs, int ref_count, int tags,
+ int *autotags)
+{
+ int i;
+ struct ref *rm;
+ struct ref *ref_map = NULL;
+ struct ref **tail = &ref_map;
+
+ struct ref *remote_refs = transport_get_remote_refs(transport);
+
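+	/*
+	 * Refspecs given on the command line (or --tags) take precedence
+	 * over the configured defaults handled in the else branch below.
+	 */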
+ if (ref_count || tags) {
+ for (i = 0; i < ref_count; i++) {
+ get_fetch_map(remote_refs, &refs[i], &tail, 0);
+ if (refs[i].dst && refs[i].dst[0])
+ *autotags = 1;
+ }
+ /* Merge everything on the command line, but not --tags */
+ for (rm = ref_map; rm; rm = rm->next)
+ rm->merge = 1;
+ if (tags) {
+ struct refspec refspec;
+ refspec.src = "refs/tags/";
+ refspec.dst = "refs/tags/";
+ refspec.pattern = 1;
+ refspec.force = 0;
+ get_fetch_map(remote_refs, &refspec, &tail, 0);
+ }
+ } else {
+ /* Use the defaults */
+ struct remote *remote = transport->remote;
+ struct branch *branch = branch_get(NULL);
+ int has_merge = branch_has_merge_config(branch);
+ if (remote && (remote->fetch_refspec_nr || has_merge)) {
+ for (i = 0; i < remote->fetch_refspec_nr; i++) {
+ get_fetch_map(remote_refs, &remote->fetch[i], &tail, 0);
+ if (remote->fetch[i].dst &&
+ remote->fetch[i].dst[0])
+ *autotags = 1;
+ if (!i && !has_merge && ref_map &&
+ !remote->fetch[0].pattern)
+ ref_map->merge = 1;
+ }
+ /*
+ * if the remote we're fetching from is the same
+ * as given in branch.<name>.remote, we add the
+ * ref given in branch.<name>.merge, too.
+ */
+ if (has_merge &&
+ !strcmp(branch->remote_name, remote->name))
+ add_merge_config(&ref_map, remote_refs, branch, &tail);
+ } else {
+ ref_map = get_remote_ref(remote_refs, "HEAD");
+ if (!ref_map)
+ die("Couldn't find remote ref HEAD");
+ ref_map->merge = 1;
+ }
+ }
+ ref_remove_duplicates(ref_map);
+
+ return ref_map;
+}
+
+static void show_new(enum object_type type, unsigned char *sha1_new)
+{
+ fprintf(stderr, " %s: %s\n", typename(type),
+ find_unique_abbrev(sha1_new, DEFAULT_ABBREV));
+}
+
+static int s_update_ref(const char *action,
+ struct ref *ref,
+ int check_old)
+{
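+	/*
+	 * Lock the ref, optionally verifying that it still has the value
+	 * we fetched against, and write the new sha1 with a reflog entry.
+	 */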
+ char msg[1024];
+ char *rla = getenv("GIT_REFLOG_ACTION");
+ static struct ref_lock *lock;
+
+ if (!rla)
+ rla = default_rla;
+ snprintf(msg, sizeof(msg), "%s: %s", rla, action);
+ lock = lock_any_ref_for_update(ref->name,
+ check_old ? ref->old_sha1 : NULL, 0);
+ if (!lock)
+ return 1;
+ if (write_ref_sha1(lock, ref->new_sha1, msg) < 0)
+ return 1;
+ return 0;
+}
+
+static int update_local_ref(struct ref *ref,
+ const char *note,
+ int verbose)
+{
+ char oldh[41], newh[41];
+ struct commit *current = NULL, *updated;
+ enum object_type type;
+ struct branch *current_branch = branch_get(NULL);
+
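+	/* Make sure the object we were asked to store actually arrived. */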
+ type = sha1_object_info(ref->new_sha1, NULL);
+ if (type < 0)
+ die("object %s not found", sha1_to_hex(ref->new_sha1));
+
+ if (!*ref->name) {
+ /* Not storing */
+ if (verbose) {
+ fprintf(stderr, "* fetched %s\n", note);
+ show_new(type, ref->new_sha1);
+ }
+ return 0;
+ }
+
+ if (!hashcmp(ref->old_sha1, ref->new_sha1)) {
+ if (verbose) {
+ fprintf(stderr, "* %s: same as %s\n",
+ ref->name, note);
+ show_new(type, ref->new_sha1);
+ }
+ return 0;
+ }
+
+ if (current_branch &&
+ !strcmp(ref->name, current_branch->name) &&
+ !(update_head_ok || is_bare_repository()) &&
+ !is_null_sha1(ref->old_sha1)) {
+ /*
+ * If this is the head, and it's not okay to update
+ * the head, and the old value of the head isn't empty...
+ */
+ fprintf(stderr,
+ " * %s: Cannot fetch into the current branch.\n",
+ ref->name);
+ return 1;
+ }
+
+ if (!is_null_sha1(ref->old_sha1) &&
+ !prefixcmp(ref->name, "refs/tags/")) {
+ fprintf(stderr, "* %s: updating with %s\n",
+ ref->name, note);
+ show_new(type, ref->new_sha1);
+ return s_update_ref("updating tag", ref, 0);
+ }
+
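+	/*
+	 * If either end of the update is not a commit, we cannot test
+	 * for fast-forwardness; store the new value without checking.
+	 */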
+ current = lookup_commit_reference_gently(ref->old_sha1, 1);
+ updated = lookup_commit_reference_gently(ref->new_sha1, 1);
+ if (!current || !updated) {
+ char *msg;
+		if (!prefixcmp(ref->name, "refs/tags/"))
+ msg = "storing tag";
+ else
+ msg = "storing head";
+ fprintf(stderr, "* %s: storing %s\n",
+ ref->name, note);
+ show_new(type, ref->new_sha1);
+ return s_update_ref(msg, ref, 0);
+ }
+
+ strcpy(oldh, find_unique_abbrev(current->object.sha1, DEFAULT_ABBREV));
+ strcpy(newh, find_unique_abbrev(ref->new_sha1, DEFAULT_ABBREV));
+
+ if (in_merge_bases(current, &updated, 1)) {
+ fprintf(stderr, "* %s: fast forward to %s\n",
+ ref->name, note);
+ fprintf(stderr, " old..new: %s..%s\n", oldh, newh);
+ return s_update_ref("fast forward", ref, 1);
+ }
+ if (!force && !ref->force) {
+ fprintf(stderr,
+ "* %s: not updating to non-fast forward %s\n",
+ ref->name, note);
+ fprintf(stderr,
+ " old...new: %s...%s\n", oldh, newh);
+ return 1;
+ }
+ fprintf(stderr,
+ "* %s: forcing update to non-fast forward %s\n",
+ ref->name, note);
+ fprintf(stderr, " old...new: %s...%s\n", oldh, newh);
+ return s_update_ref("forced-update", ref, 1);
+}
+
+static void store_updated_refs(const char *url, struct ref *ref_map)
+{
+ FILE *fp;
+ struct commit *commit;
+ int url_len, i, note_len;
+ char note[1024];
+ const char *what, *kind;
+ struct ref *rm;
+
+ fp = fopen(git_path("FETCH_HEAD"), "a");
+ for (rm = ref_map; rm; rm = rm->next) {
+ struct ref *ref = NULL;
+
+ if (rm->peer_ref) {
+ ref = xcalloc(1, sizeof(*ref) + strlen(rm->peer_ref->name) + 1);
+ strcpy(ref->name, rm->peer_ref->name);
+ hashcpy(ref->old_sha1, rm->peer_ref->old_sha1);
+ hashcpy(ref->new_sha1, rm->old_sha1);
+ ref->force = rm->peer_ref->force;
+ }
+
+ commit = lookup_commit_reference_gently(rm->old_sha1, 1);
+ if (!commit)
+ rm->merge = 0;
+
+ if (!strcmp(rm->name, "HEAD")) {
+ kind = "";
+ what = "";
+ }
+ else if (!prefixcmp(rm->name, "refs/heads/")) {
+ kind = "branch";
+ what = rm->name + 11;
+ }
+ else if (!prefixcmp(rm->name, "refs/tags/")) {
+ kind = "tag";
+ what = rm->name + 10;
+ }
+ else if (!prefixcmp(rm->name, "refs/remotes/")) {
+ kind = "remote branch";
+ what = rm->name + 13;
+ }
+ else {
+ kind = "";
+ what = rm->name;
+ }
+
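+		/*
+		 * For the FETCH_HEAD note, strip trailing slashes and a
+		 * trailing ".git" from the URL.
+		 */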
+ url_len = strlen(url);
+		for (i = url_len - 1; 0 <= i && url[i] == '/'; i--)
+ ;
+ url_len = i + 1;
+ if (4 < i && !strncmp(".git", url + i - 3, 4))
+ url_len = i - 3;
+
+ note_len = 0;
+ if (*what) {
+ if (*kind)
+ note_len += sprintf(note + note_len, "%s ",
+ kind);
+ note_len += sprintf(note + note_len, "'%s' of ", what);
+ }
+ note_len += sprintf(note + note_len, "%.*s", url_len, url);
+ fprintf(fp, "%s\t%s\t%s\n",
+ sha1_to_hex(commit ? commit->object.sha1 :
+ rm->old_sha1),
+ rm->merge ? "" : "not-for-merge",
+ note);
+
+ if (ref)
+ update_local_ref(ref, note, verbose);
+ }
+ fclose(fp);
+}
+
+static int fetch_refs(struct transport *transport, struct ref *ref_map)
+{
+ int ret = transport_fetch_refs(transport, ref_map);
+ if (!ret)
+ store_updated_refs(transport->url, ref_map);
+ transport_unlock_pack(transport);
+ return ret;
+}
+
+static int add_existing(const char *refname, const unsigned char *sha1,
+ int flag, void *cbdata)
+{
+ struct path_list *list = (struct path_list *)cbdata;
+ path_list_insert(refname, list);
+ return 0;
+}
+
+static struct ref *find_non_local_tags(struct transport *transport,
+ struct ref *fetch_map)
+{
+ static struct path_list existing_refs = { NULL, 0, 0, 0 };
+ struct path_list new_refs = { NULL, 0, 0, 1 };
+ char *ref_name;
+ int ref_name_len;
+ unsigned char *ref_sha1;
+ struct ref *tag_ref;
+ struct ref *rm = NULL;
+ struct ref *ref_map = NULL;
+ struct ref **tail = &ref_map;
+ struct ref *ref;
+
+ for_each_ref(add_existing, &existing_refs);
+ for (ref = transport_get_remote_refs(transport); ref; ref = ref->next) {
+ if (prefixcmp(ref->name, "refs/tags"))
+ continue;
+
+ ref_name = xstrdup(ref->name);
+ ref_name_len = strlen(ref_name);
+ ref_sha1 = ref->old_sha1;
+
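+		/*
+		 * "refs/tags/X^{}" entries give the object a tag points at;
+		 * strip the suffix and use the sha1 of the tag itself.
+		 */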
+ if (!strcmp(ref_name + ref_name_len - 3, "^{}")) {
+ ref_name[ref_name_len - 3] = 0;
+ tag_ref = transport_get_remote_refs(transport);
+ while (tag_ref) {
+ if (!strcmp(tag_ref->name, ref_name)) {
+ ref_sha1 = tag_ref->old_sha1;
+ break;
+ }
+ tag_ref = tag_ref->next;
+ }
+ }
+
+ if (!path_list_has_path(&existing_refs, ref_name) &&
+ !path_list_has_path(&new_refs, ref_name) &&
+ lookup_object(ref->old_sha1)) {
+ fprintf(stderr, "Auto-following %s\n",
+ ref_name);
+
+ path_list_insert(ref_name, &new_refs);
+
+ rm = alloc_ref(strlen(ref_name) + 1);
+ strcpy(rm->name, ref_name);
+ rm->peer_ref = alloc_ref(strlen(ref_name) + 1);
+ strcpy(rm->peer_ref->name, ref_name);
+ hashcpy(rm->old_sha1, ref_sha1);
+
+ *tail = rm;
+ tail = &rm->next;
+ }
+ free(ref_name);
+ }
+
+ return ref_map;
+}
+
+static int do_fetch(struct transport *transport,
+ struct refspec *refs, int ref_count)
+{
+ struct ref *ref_map, *fetch_map;
+ struct ref *rm;
+ int autotags = (transport->remote->fetch_tags == 1);
+ if (transport->remote->fetch_tags == 2 && !no_tags)
+ tags = 1;
+ if (transport->remote->fetch_tags == -1)
+ no_tags = 1;
+
+ if (!transport->get_refs_list || !transport->fetch)
+ die("Don't know how to fetch from %s", transport->url);
+
+ /* if not appending, truncate FETCH_HEAD */
+ if (!append)
+ fclose(fopen(git_path("FETCH_HEAD"), "w"));
+
+ ref_map = get_ref_map(transport, refs, ref_count, tags, &autotags);
+
+ for (rm = ref_map; rm; rm = rm->next) {
+ if (rm->peer_ref)
+ read_ref(rm->peer_ref->name, rm->peer_ref->old_sha1);
+ }
+
+ if (fetch_refs(transport, ref_map)) {
+ free_refs(ref_map);
+ return 1;
+ }
+
+ fetch_map = ref_map;
+
+	/*
+	 * If neither --no-tags nor --tags was specified, do automated
+	 * tag following.
+	 */
+ if (!(tags || no_tags) && autotags) {
+ ref_map = find_non_local_tags(transport, fetch_map);
+ if (ref_map) {
+ transport_set_option(transport, TRANS_OPT_DEPTH, "0");
+ fetch_refs(transport, ref_map);
+ }
+ free_refs(ref_map);
+ }
+
+ free_refs(fetch_map);
+
+ return 0;
+}
+
+static void set_option(const char *name, const char *value)
+{
+ int r = transport_set_option(transport, name, value);
+ if (r < 0)
+		die("Option \"%s\" value \"%s\" is not valid for %s",
+ name, value, transport->url);
+ if (r > 0)
+		warning("Option \"%s\" is ignored for %s",
+ name, transport->url);
+}
+
+int cmd_fetch(int argc, const char **argv, const char *prefix)
+{
+ struct remote *remote;
+ int i, j, rla_offset;
+ static const char **refs = NULL;
+ int ref_nr = 0;
+ int cmd_len = 0;
+ const char *depth = NULL, *upload_pack = NULL;
+ int keep = 0;
+
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+ cmd_len += strlen(arg);
+
+ if (arg[0] != '-')
+ break;
+ if (!strcmp(arg, "--append") || !strcmp(arg, "-a")) {
+ append = 1;
+ continue;
+ }
+ if (!prefixcmp(arg, "--upload-pack=")) {
+ upload_pack = arg + 14;
+ continue;
+ }
+ if (!strcmp(arg, "--upload-pack")) {
+ i++;
+ if (i == argc)
+ usage(fetch_usage);
+ upload_pack = argv[i];
+ continue;
+ }
+ if (!strcmp(arg, "--force") || !strcmp(arg, "-f")) {
+ force = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--no-tags")) {
+ no_tags = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--tags") || !strcmp(arg, "-t")) {
+ tags = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--keep") || !strcmp(arg, "-k")) {
+ keep = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--update-head-ok") || !strcmp(arg, "-u")) {
+ update_head_ok = 1;
+ continue;
+ }
+ if (!prefixcmp(arg, "--depth=")) {
+ depth = arg + 8;
+ continue;
+ }
+ if (!strcmp(arg, "--depth")) {
+ i++;
+ if (i == argc)
+ usage(fetch_usage);
+ depth = argv[i];
+ continue;
+ }
+ if (!strcmp(arg, "--quiet")) {
+ quiet = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--verbose") || !strcmp(arg, "-v")) {
+ verbose++;
+ continue;
+ }
+ usage(fetch_usage);
+ }
+
+ for (j = i; j < argc; j++)
+ cmd_len += strlen(argv[j]);
+
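+	/*
+	 * Build the default reflog message, "fetch <args>", used when
+	 * GIT_REFLOG_ACTION is not set in the environment.
+	 */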
+ default_rla = xmalloc(cmd_len + 5 + argc + 1);
+ sprintf(default_rla, "fetch");
+ rla_offset = strlen(default_rla);
+ for (j = 1; j < argc; j++) {
+ sprintf(default_rla + rla_offset, " %s", argv[j]);
+ rla_offset += strlen(argv[j]) + 1;
+ }
+
+ if (i == argc)
+ remote = remote_get(NULL);
+ else
+ remote = remote_get(argv[i++]);
+
+ transport = transport_get(remote, remote->url[0]);
+ if (verbose >= 2)
+ transport->verbose = 1;
+ if (quiet)
+ transport->verbose = -1;
+ if (upload_pack)
+ set_option(TRANS_OPT_UPLOADPACK, upload_pack);
+ if (keep)
+ set_option(TRANS_OPT_KEEP, "yes");
+ if (depth)
+ set_option(TRANS_OPT_DEPTH, depth);
+
+ if (!transport->url)
+ die("Where do you want to fetch from today?");
+
+ if (i < argc) {
+ int j = 0;
+ refs = xcalloc(argc - i + 1, sizeof(const char *));
+ while (i < argc) {
+			if (!strcmp(argv[i], "tag")) {
+				char *ref;
+				i++;
+				if (i >= argc)
+					die("tag shorthand without <tag>");
+ ref = xmalloc(strlen(argv[i]) * 2 + 22);
+ strcpy(ref, "refs/tags/");
+ strcat(ref, argv[i]);
+ strcat(ref, ":refs/tags/");
+ strcat(ref, argv[i]);
+ refs[j++] = ref;
+ } else
+ refs[j++] = argv[i];
+ i++;
+ }
+ refs[j] = NULL;
+ ref_nr = j;
+ }
+
+ signal(SIGINT, unlock_pack_on_signal);
+ atexit(unlock_pack);
+ return do_fetch(transport, parse_ref_spec(ref_nr, refs), ref_nr);
+}
if (!need_to_gc())
return 0;
fprintf(stderr, "Packing your repository for optimum "
- "performance. If you would rather run\n"
- "\"git gc\" by hand, run \"git config gc.auto 0\" "
- "to disable automatic cleanup.\n");
+ "performance. You may also\n"
+ "run \"git gc\" manually. See "
+ "\"git help gc\" for more information.\n");
} else {
/*
* Use safer (for shared repos) "-A" option to
--- /dev/null
+#include "cache.h"
+#include "walker.h"
+
+int cmd_http_fetch(int argc, const char **argv, const char *prefix)
+{
+ struct walker *walker;
+ int commits_on_stdin = 0;
+ int commits;
+ const char **write_ref = NULL;
+ char **commit_id;
+ const char *url;
+ int arg = 1;
+ int rc = 0;
+ int get_tree = 0;
+ int get_history = 0;
+ int get_all = 0;
+ int get_verbosely = 0;
+ int get_recover = 0;
+
+ git_config(git_default_config);
+
+ while (arg < argc && argv[arg][0] == '-') {
+ if (argv[arg][1] == 't') {
+ get_tree = 1;
+ } else if (argv[arg][1] == 'c') {
+ get_history = 1;
+ } else if (argv[arg][1] == 'a') {
+ get_all = 1;
+ get_tree = 1;
+ get_history = 1;
+ } else if (argv[arg][1] == 'v') {
+ get_verbosely = 1;
+ } else if (argv[arg][1] == 'w') {
+ write_ref = &argv[arg + 1];
+ arg++;
+ } else if (!strcmp(argv[arg], "--recover")) {
+ get_recover = 1;
+ } else if (!strcmp(argv[arg], "--stdin")) {
+ commits_on_stdin = 1;
+ }
+ arg++;
+ }
+ if (argc < arg + 2 - commits_on_stdin) {
+ usage("git-http-fetch [-c] [-t] [-a] [-v] [--recover] [-w ref] [--stdin] commit-id url");
+ return 1;
+ }
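+	/*
+	 * With --stdin, commit ids (and optional refs to write) are read
+	 * from standard input instead of the command line.
+	 */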
+ if (commits_on_stdin) {
+ commits = walker_targets_stdin(&commit_id, &write_ref);
+ } else {
+ commit_id = (char **) &argv[arg++];
+ commits = 1;
+ }
+ url = argv[arg];
+
+ walker = get_http_walker(url);
+ walker->get_tree = get_tree;
+ walker->get_history = get_history;
+ walker->get_all = get_all;
+ walker->get_verbosely = get_verbosely;
+ walker->get_recover = get_recover;
+
+ rc = walker_fetch(walker, commits, commit_id, write_ref, url);
+
+ if (commits_on_stdin)
+ walker_targets_free(commits, commit_id, write_ref);
+
+ if (walker->corrupt_object_found) {
+ fprintf(stderr,
+"Some loose object were found to be corrupt, but they might be just\n"
+"a false '404 Not Found' error message sent with incorrect HTTP\n"
+"status code. Suggest running git-fsck.\n");
+ }
+
+ walker_free(walker);
+
+ return rc;
+}
}
static void decode_header(char *it, unsigned itsize);
-static char *header[MAX_HDR_PARSED] = {
+static const char *header[MAX_HDR_PARSED] = {
"From","Subject","Date",
};
#include "run-command.h"
#include "builtin.h"
#include "remote.h"
+#include "transport.h"
static const char push_usage[] = "git-push [--all] [--dry-run] [--tags] [--receive-pack=<git-receive-pack>] [--repo=all] [-f | --force] [-v] [<repository> <refspec>...]";
-static int all, dry_run, force, thin, verbose;
+static int thin, verbose;
static const char *receivepack;
static const char **refspec;
}
}
-static int do_push(const char *repo)
+static int do_push(const char *repo, int flags)
{
int i, errs;
- int common_argc;
- const char **argv;
- int argc;
struct remote *remote = remote_get(repo);
if (!remote)
die("bad repository '%s'", repo);
- if (remote->receivepack) {
- char *rp = xmalloc(strlen(remote->receivepack) + 16);
- sprintf(rp, "--receive-pack=%s", remote->receivepack);
- receivepack = rp;
- }
- if (!refspec && !all && remote->push_refspec_nr) {
+ if (!refspec
+ && !(flags & TRANSPORT_PUSH_ALL)
+ && remote->push_refspec_nr) {
refspec = remote->push_refspec;
refspec_nr = remote->push_refspec_nr;
}
-
- argv = xmalloc((refspec_nr + 10) * sizeof(char *));
- argv[0] = "dummy-send-pack";
- argc = 1;
- if (all)
- argv[argc++] = "--all";
- if (dry_run)
- argv[argc++] = "--dry-run";
- if (force)
- argv[argc++] = "--force";
- if (receivepack)
- argv[argc++] = receivepack;
- common_argc = argc;
-
errs = 0;
- for (i = 0; i < remote->uri_nr; i++) {
+ for (i = 0; i < remote->url_nr; i++) {
+ struct transport *transport =
+ transport_get(remote, remote->url[i]);
int err;
- int dest_argc = common_argc;
- int dest_refspec_nr = refspec_nr;
- const char **dest_refspec = refspec;
- const char *dest = remote->uri[i];
- const char *sender = "send-pack";
- if (!prefixcmp(dest, "http://") ||
- !prefixcmp(dest, "https://"))
- sender = "http-push";
- else {
- char *rem = xmalloc(strlen(remote->name) + 10);
- sprintf(rem, "--remote=%s", remote->name);
- argv[dest_argc++] = rem;
- if (thin)
- argv[dest_argc++] = "--thin";
- }
- argv[0] = sender;
- argv[dest_argc++] = dest;
- while (dest_refspec_nr--)
- argv[dest_argc++] = *dest_refspec++;
- argv[dest_argc] = NULL;
+ if (receivepack)
+ transport_set_option(transport,
+ TRANS_OPT_RECEIVEPACK, receivepack);
+ if (thin)
+ transport_set_option(transport, TRANS_OPT_THIN, "yes");
+
if (verbose)
- fprintf(stderr, "Pushing to %s\n", dest);
- err = run_command_v_opt(argv, RUN_GIT_CMD);
+ fprintf(stderr, "Pushing to %s\n", remote->url[i]);
+ err = transport_push(transport, refspec_nr, refspec, flags);
+ err |= transport_disconnect(transport);
+
if (!err)
continue;
- error("failed to push to '%s'", remote->uri[i]);
- switch (err) {
- case -ERR_RUN_COMMAND_FORK:
- error("unable to fork for %s", sender);
- case -ERR_RUN_COMMAND_EXEC:
- error("unable to exec %s", sender);
- break;
- case -ERR_RUN_COMMAND_WAITPID:
- case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
- case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
- case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
- error("%s died with strange error", sender);
- }
+ error("failed to push to '%s'", remote->url[i]);
errs++;
}
return !!errs;
int cmd_push(int argc, const char **argv, const char *prefix)
{
int i;
+ int flags = 0;
const char *repo = NULL; /* default repository */
for (i = 1; i < argc; i++) {
continue;
}
if (!strcmp(arg, "--all")) {
- all = 1;
+ flags |= TRANSPORT_PUSH_ALL;
continue;
}
if (!strcmp(arg, "--dry-run")) {
- dry_run = 1;
+ flags |= TRANSPORT_PUSH_DRY_RUN;
continue;
}
if (!strcmp(arg, "--tags")) {
continue;
}
if (!strcmp(arg, "--force") || !strcmp(arg, "-f")) {
- force = 1;
+ flags |= TRANSPORT_PUSH_FORCE;
continue;
}
if (!strcmp(arg, "--thin")) {
continue;
}
if (!prefixcmp(arg, "--receive-pack=")) {
- receivepack = arg;
+ receivepack = arg + 15;
continue;
}
if (!prefixcmp(arg, "--exec=")) {
- receivepack = arg;
+ receivepack = arg + 7;
continue;
}
usage(push_usage);
}
set_refspecs(argv + i, argc - i);
- if (all && refspec)
+ if ((flags & TRANSPORT_PUSH_ALL) && refspec)
usage(push_usage);
- return do_push(repo);
+ return do_push(repo, flags);
}
}
enum reset_type { MIXED, SOFT, HARD, NONE };
-static char *reset_type_names[] = { "mixed", "soft", "hard", NULL };
+static const char *reset_type_names[] = { "mixed", "soft", "hard", NULL };
int cmd_reset(int argc, const char **argv, const char *prefix)
{
extern int cmd_diff_index(int argc, const char **argv, const char *prefix);
extern int cmd_diff(int argc, const char **argv, const char *prefix);
extern int cmd_diff_tree(int argc, const char **argv, const char *prefix);
+extern int cmd_fetch(int argc, const char **argv, const char *prefix);
+extern int cmd_fetch_pack(int argc, const char **argv, const char *prefix);
extern int cmd_fetch__tool(int argc, const char **argv, const char *prefix);
extern int cmd_fmt_merge_msg(int argc, const char **argv, const char *prefix);
extern int cmd_for_each_ref(int argc, const char **argv, const char *prefix);
extern int cmd_get_tar_commit_id(int argc, const char **argv, const char *prefix);
extern int cmd_grep(int argc, const char **argv, const char *prefix);
extern int cmd_help(int argc, const char **argv, const char *prefix);
+extern int cmd_http_fetch(int argc, const char **argv, const char *prefix);
extern int cmd_init_db(int argc, const char **argv, const char *prefix);
extern int cmd_log(int argc, const char **argv, const char *prefix);
extern int cmd_log_reflog(int argc, const char **argv, const char *prefix);
--- /dev/null
+#include "cache.h"
+#include "bundle.h"
+#include "object.h"
+#include "commit.h"
+#include "diff.h"
+#include "revision.h"
+#include "list-objects.h"
+#include "run-command.h"
+
+static const char bundle_signature[] = "# v2 git bundle\n";
+
+static void add_to_ref_list(const unsigned char *sha1, const char *name,
+ struct ref_list *list)
+{
+ if (list->nr + 1 >= list->alloc) {
+ list->alloc = alloc_nr(list->nr + 1);
+ list->list = xrealloc(list->list,
+ list->alloc * sizeof(list->list[0]));
+ }
+ memcpy(list->list[list->nr].sha1, sha1, 20);
+ list->list[list->nr].name = xstrdup(name);
+ list->nr++;
+}
+
+/* returns an fd */
+int read_bundle_header(const char *path, struct bundle_header *header)
+{
+ char buffer[1024];
+ int fd;
+ long fpos;
+ FILE *ffd = fopen(path, "rb");
+
+ if (!ffd)
+ return error("could not open '%s'", path);
+ if (!fgets(buffer, sizeof(buffer), ffd) ||
+ strcmp(buffer, bundle_signature)) {
+ fclose(ffd);
+ return error("'%s' does not look like a v2 bundle file", path);
+ }
+ while (fgets(buffer, sizeof(buffer), ffd)
+ && buffer[0] != '\n') {
+ int is_prereq = buffer[0] == '-';
+ int offset = is_prereq ? 1 : 0;
+ int len = strlen(buffer);
+ unsigned char sha1[20];
+ struct ref_list *list = is_prereq ? &header->prerequisites
+ : &header->references;
+ char delim;
+
+ if (buffer[len - 1] == '\n')
+ buffer[len - 1] = '\0';
+ if (get_sha1_hex(buffer + offset, sha1)) {
+ warning("unrecognized header: %s", buffer);
+ continue;
+ }
+ delim = buffer[40 + offset];
+ if (!isspace(delim) && (delim != '\0' || !is_prereq))
+			die("invalid header: %s", buffer);
+ add_to_ref_list(sha1, isspace(delim) ?
+ buffer + 41 + offset : "", list);
+ }
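+	/* Reopen as a raw fd, positioned where the header ended. */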
+ fpos = ftell(ffd);
+ fclose(ffd);
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return error("could not open '%s'", path);
+ lseek(fd, fpos, SEEK_SET);
+ return fd;
+}
+
+static int list_refs(struct ref_list *r, int argc, const char **argv)
+{
+ int i;
+
+ for (i = 0; i < r->nr; i++) {
+ if (argc > 1) {
+ int j;
+ for (j = 1; j < argc; j++)
+ if (!strcmp(r->list[i].name, argv[j]))
+ break;
+ if (j == argc)
+ continue;
+ }
+ printf("%s %s\n", sha1_to_hex(r->list[i].sha1),
+ r->list[i].name);
+ }
+ return 0;
+}
+
+#define PREREQ_MARK (1u<<16)
+
+int verify_bundle(struct bundle_header *header, int verbose)
+{
+	/*
+	 * Do the fast check first; if any prerequisites are missing, go
+	 * through them again one by one to report the errors verbosely.
+	 */
+ struct ref_list *p = &header->prerequisites;
+ struct rev_info revs;
+ const char *argv[] = {NULL, "--all"};
+ struct object_array refs;
+ struct commit *commit;
+ int i, ret = 0, req_nr;
+ const char *message = "Repository lacks these prerequisite commits:";
+
+ init_revisions(&revs, NULL);
+ for (i = 0; i < p->nr; i++) {
+ struct ref_list_entry *e = p->list + i;
+ struct object *o = parse_object(e->sha1);
+ if (o) {
+ o->flags |= PREREQ_MARK;
+ add_pending_object(&revs, o, e->name);
+ continue;
+ }
+ if (++ret == 1)
+ error(message);
+ error("%s %s", sha1_to_hex(e->sha1), e->name);
+ }
+ if (revs.pending.nr != p->nr)
+ return ret;
+ req_nr = revs.pending.nr;
+ setup_revisions(2, argv, &revs, NULL);
+
+ memset(&refs, 0, sizeof(struct object_array));
+ for (i = 0; i < revs.pending.nr; i++) {
+ struct object_array_entry *e = revs.pending.objects + i;
+ add_object_array(e->item, e->name, &refs);
+ }
+
+ prepare_revision_walk(&revs);
+
+ i = req_nr;
+ while (i && (commit = get_revision(&revs)))
+ if (commit->object.flags & PREREQ_MARK)
+ i--;
+
+ for (i = 0; i < req_nr; i++)
+ if (!(refs.objects[i].item->flags & SHOWN)) {
+ if (++ret == 1)
+ error(message);
+ error("%s %s", sha1_to_hex(refs.objects[i].item->sha1),
+ refs.objects[i].name);
+ }
+
+ for (i = 0; i < refs.nr; i++)
+ clear_commit_marks((struct commit *)refs.objects[i].item, -1);
+
+ if (verbose) {
+ struct ref_list *r;
+
+ r = &header->references;
+ printf("The bundle contains %d ref%s\n",
+			r->nr, (r->nr == 1) ? "" : "s");
+ list_refs(r, 0, NULL);
+ r = &header->prerequisites;
+ printf("The bundle requires these %d ref%s\n",
+			r->nr, (r->nr == 1) ? "" : "s");
+ list_refs(r, 0, NULL);
+ }
+ return ret;
+}
+
+int list_bundle_refs(struct bundle_header *header, int argc, const char **argv)
+{
+ return list_refs(&header->references, argc, argv);
+}
+
+int create_bundle(struct bundle_header *header, const char *path,
+ int argc, const char **argv)
+{
+ static struct lock_file lock;
+ int bundle_fd = -1;
+ int bundle_to_stdout;
+ const char **argv_boundary = xmalloc((argc + 4) * sizeof(const char *));
+ const char **argv_pack = xmalloc(5 * sizeof(const char *));
+ int i, ref_count = 0;
+ char buffer[1024];
+ struct rev_info revs;
+ struct child_process rls;
+ FILE *rls_fout;
+
+ bundle_to_stdout = !strcmp(path, "-");
+ if (bundle_to_stdout)
+ bundle_fd = 1;
+ else
+ bundle_fd = hold_lock_file_for_update(&lock, path, 1);
+
+ /* write signature */
+ write_or_die(bundle_fd, bundle_signature, strlen(bundle_signature));
+
+ /* init revs to list objects for pack-objects later */
+ save_commit_buffer = 0;
+ init_revisions(&revs, NULL);
+
+ /* write prerequisites */
+ memcpy(argv_boundary + 3, argv + 1, argc * sizeof(const char *));
+ argv_boundary[0] = "rev-list";
+ argv_boundary[1] = "--boundary";
+ argv_boundary[2] = "--pretty=oneline";
+ argv_boundary[argc + 2] = NULL;
+ memset(&rls, 0, sizeof(rls));
+ rls.argv = argv_boundary;
+ rls.out = -1;
+ rls.git_cmd = 1;
+ if (start_command(&rls))
+ return -1;
+ rls_fout = fdopen(rls.out, "r");
+ while (fgets(buffer, sizeof(buffer), rls_fout)) {
+ unsigned char sha1[20];
+ if (buffer[0] == '-') {
+ write_or_die(bundle_fd, buffer, strlen(buffer));
+ if (!get_sha1_hex(buffer + 1, sha1)) {
+ struct object *object = parse_object(sha1);
+ object->flags |= UNINTERESTING;
+ add_pending_object(&revs, object, buffer);
+ }
+ } else if (!get_sha1_hex(buffer, sha1)) {
+ struct object *object = parse_object(sha1);
+ object->flags |= SHOWN;
+ }
+ }
+ fclose(rls_fout);
+ if (finish_command(&rls))
+ return error("rev-list died");
+
+ /* write references */
+ argc = setup_revisions(argc, argv, &revs, NULL);
+ if (argc > 1)
+		return error("unrecognized argument: '%s'", argv[1]);
+
+ for (i = 0; i < revs.pending.nr; i++) {
+ struct object_array_entry *e = revs.pending.objects + i;
+ unsigned char sha1[20];
+ char *ref;
+
+ if (e->item->flags & UNINTERESTING)
+ continue;
+ if (dwim_ref(e->name, strlen(e->name), sha1, &ref) != 1)
+ continue;
+		/*
+		 * Make sure the refs we write out are correct; --max-count
+		 * and other limiting options could have prevented all the
+		 * tips from getting output.
+		 *
+		 * Non-commit objects such as tags and blobs do not have
+		 * this issue as they are not affected by those extra
+		 * constraints.
+		 */
+ if (!(e->item->flags & SHOWN) && e->item->type == OBJ_COMMIT) {
+ warning("ref '%s' is excluded by the rev-list options",
+ e->name);
+ free(ref);
+ continue;
+ }
+ /*
+ * If you run "git bundle create bndl v1.0..v2.0", the
+ * name of the positive ref is "v2.0" but that is the
+ * commit that is referenced by the tag, and not the tag
+ * itself.
+ */
+ if (hashcmp(sha1, e->item->sha1)) {
+ /*
+ * Is this the positive end of a range expressed
+ * in terms of a tag (e.g. v2.0 from the range
+ * "v1.0..v2.0")?
+ */
+ struct commit *one = lookup_commit_reference(sha1);
+ struct object *obj;
+
+ if (e->item == &(one->object)) {
+ /*
+ * Need to include e->name as an
+ * independent ref to the pack-objects
+ * input, so that the tag is included
+ * in the output; otherwise we would
+ * end up triggering "empty bundle"
+ * error.
+ */
+ obj = parse_object(sha1);
+ obj->flags |= SHOWN;
+ add_pending_object(&revs, obj, e->name);
+ }
+ free(ref);
+ continue;
+ }
+
+ ref_count++;
+ write_or_die(bundle_fd, sha1_to_hex(e->item->sha1), 40);
+ write_or_die(bundle_fd, " ", 1);
+ write_or_die(bundle_fd, ref, strlen(ref));
+ write_or_die(bundle_fd, "\n", 1);
+ free(ref);
+ }
+ if (!ref_count)
+		die("Refusing to create empty bundle.");
+
+ /* end header */
+ write_or_die(bundle_fd, "\n", 1);
+
+ /* write pack */
+ argv_pack[0] = "pack-objects";
+ argv_pack[1] = "--all-progress";
+ argv_pack[2] = "--stdout";
+ argv_pack[3] = "--thin";
+ argv_pack[4] = NULL;
+ memset(&rls, 0, sizeof(rls));
+ rls.argv = argv_pack;
+ rls.in = -1;
+ rls.out = bundle_fd;
+ rls.git_cmd = 1;
+ if (start_command(&rls))
+ return error("Could not spawn pack-objects");
+ for (i = 0; i < revs.pending.nr; i++) {
+ struct object *object = revs.pending.objects[i].item;
+ if (object->flags & UNINTERESTING)
+ write(rls.in, "^", 1);
+ write(rls.in, sha1_to_hex(object->sha1), 40);
+ write(rls.in, "\n", 1);
+ }
+ if (finish_command(&rls))
+		return error("pack-objects died");
+ close(bundle_fd);
+ if (!bundle_to_stdout)
+ commit_lock_file(&lock);
+ return 0;
+}
+
+int unbundle(struct bundle_header *header, int bundle_fd)
+{
+ const char *argv_index_pack[] = {"index-pack",
+ "--fix-thin", "--stdin", NULL};
+ struct child_process ip;
+
+ if (verify_bundle(header, 0))
+ return -1;
+ memset(&ip, 0, sizeof(ip));
+ ip.argv = argv_index_pack;
+ ip.in = bundle_fd;
+ ip.no_stdout = 1;
+ ip.git_cmd = 1;
+ if (run_command(&ip))
+ return error("index-pack died");
+ return 0;
+}
--- /dev/null
+#ifndef BUNDLE_H
+#define BUNDLE_H
+
+struct ref_list {
+ unsigned int nr, alloc;
+ struct ref_list_entry {
+ unsigned char sha1[20];
+ char *name;
+ } *list;
+};
+
+struct bundle_header {
+ struct ref_list prerequisites;
+ struct ref_list references;
+};
+
+int read_bundle_header(const char *path, struct bundle_header *header);
+int create_bundle(struct bundle_header *header, const char *path,
+ int argc, const char **argv);
+int verify_bundle(struct bundle_header *header, int verbose);
+int unbundle(struct bundle_header *header, int bundle_fd);
+int list_bundle_refs(struct bundle_header *header,
+ int argc, const char **argv);
+
+#endif
unsigned char old_sha1[20];
unsigned char new_sha1[20];
unsigned char force;
+ unsigned char merge;
struct ref *peer_ref; /* when renaming */
char name[FLEX_ARRAY]; /* more */
};
--- /dev/null
+#include "../git-compat-util.h"
+
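+/*
+ * Minimal mkdtemp() replacement for platforms that lack it:
+ * mktemp() picks a unique name and mkdir() creates the directory.
+ */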
+char *gitmkdtemp(char *template)
+{
+	if (!*mktemp(template) || mkdir(template, 0700))
+ return NULL;
+ return template;
+}
continue;
if (nr_match && !path_match(name, nr_match, match))
continue;
- ref = alloc_ref(len - 40);
+ ref = alloc_ref(name_len + 1);
hashcpy(ref->old_sha1, old_sha1);
- memcpy(ref->name, buffer + 41, len - 40);
+ memcpy(ref->name, buffer + 41, name_len + 1);
*list = ref;
list = &ref->next;
}
ssh-*) : transport;;
stripspace) : plumbing;;
svn) : import export;;
- svnimport) : import;;
symbolic-ref) : plumbing;;
tar-tree) : deprecated;;
unpack-file) : plumbing;;
(with-current-buffer buffer (erase-buffer))
(dolist (info files) (git-set-fileinfo-state info 'uptodate))
(git-call-process-env nil nil "rerere")
+ (git-call-process-env nil nil "gc" "--auto")
(git-refresh-files)
(git-refresh-ewoc-hf git-status)
(message "Committed %s." commit)
"Mark all files."
(interactive)
(unless git-status (error "Not in git-status buffer."))
- (ewoc-map (lambda (info) (setf (git-fileinfo->marked info) t) t) git-status)
+ (ewoc-map (lambda (info) (unless (git-fileinfo->marked info)
+ (setf (git-fileinfo->marked info) t))) git-status)
; move back to goal column after invalidate
(when goal-column (move-to-column goal-column)))
"Unmark all files."
(interactive)
(unless git-status (error "Not in git-status buffer."))
- (ewoc-map (lambda (info) (setf (git-fileinfo->marked info) nil) t) git-status)
+ (ewoc-map (lambda (info) (when (git-fileinfo->marked info)
+ (setf (git-fileinfo->marked info) nil)
+ t)) git-status)
; move back to goal column after invalidate
(when goal-column (move-to-column goal-column)))
(when modified
(apply #'git-call-process-env nil nil "checkout" "HEAD" modified))
(git-update-status-files (append added modified) 'uptodate)
- (git-success-message "Reverted" files))))
+ (git-success-message "Reverted" (git-get-filenames files)))))
(defun git-resolve-file ()
"Resolve conflicts in marked file(s)."
"Update the corresponding git-status buffer when a file is saved.
Meant to be used in `after-save-hook'."
(let* ((file (expand-file-name buffer-file-name))
- (dir (condition-case nil (git-get-top-dir (file-name-directory file))))
+ (dir (condition-case nil (git-get-top-dir (file-name-directory file)) (error nil)))
(buffer (and dir (git-find-status-buffer dir))))
(when buffer
(with-current-buffer buffer
--- /dev/null
+#!/bin/sh
+#
+
+USAGE='<fetch-options> <repository> <refspec>...'
+SUBDIRECTORY_OK=Yes
+. git-sh-setup
+set_reflog_action "fetch $*"
+cd_to_toplevel ;# probably unnecessary...
+
+. git-parse-remote
+_x40='[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
+_x40="$_x40$_x40$_x40$_x40$_x40$_x40$_x40$_x40"
+
+LF='
+'
+IFS="$LF"
+
+no_tags=
+tags=
+append=
+force=
+verbose=
+update_head_ok=
+exec=
+keep=
+shallow_depth=
+no_progress=
+test -t 1 || no_progress=--no-progress
+quiet=
+while test $# != 0
+do
+ case "$1" in
+ -a|--a|--ap|--app|--appe|--appen|--append)
+ append=t
+ ;;
+ --upl|--uplo|--uploa|--upload|--upload-|--upload-p|\
+ --upload-pa|--upload-pac|--upload-pack)
+ shift
+ exec="--upload-pack=$1"
+ ;;
+ --upl=*|--uplo=*|--uploa=*|--upload=*|\
+ --upload-=*|--upload-p=*|--upload-pa=*|--upload-pac=*|--upload-pack=*)
+ exec=--upload-pack=$(expr "z$1" : 'z-[^=]*=\(.*\)')
+ shift
+ ;;
+ -f|--f|--fo|--for|--forc|--force)
+ force=t
+ ;;
+ -t|--t|--ta|--tag|--tags)
+ tags=t
+ ;;
+ -n|--n|--no|--no-|--no-t|--no-ta|--no-tag|--no-tags)
+ no_tags=t
+ ;;
+ -u|--u|--up|--upd|--upda|--updat|--update|--update-|--update-h|\
+ --update-he|--update-hea|--update-head|--update-head-|\
+ --update-head-o|--update-head-ok)
+ update_head_ok=t
+ ;;
+ -q|--q|--qu|--qui|--quie|--quiet)
+ quiet=--quiet
+ ;;
+ -v|--verbose)
+ verbose="$verbose"Yes
+ ;;
+ -k|--k|--ke|--kee|--keep)
+ keep='-k -k'
+ ;;
+ --depth=*)
+ shallow_depth="--depth=`expr "z$1" : 'z-[^=]*=\(.*\)'`"
+ ;;
+ --depth)
+ shift
+ shallow_depth="--depth=$1"
+ ;;
+ -*)
+ usage
+ ;;
+ *)
+ break
+ ;;
+ esac
+ shift
+done
+
+case "$#" in
+0)
+ origin=$(get_default_remote)
+ test -n "$(get_remote_url ${origin})" ||
+ die "Where do you want to fetch from today?"
+ set x $origin ; shift ;;
+esac
+
+if test -z "$exec"
+then
+ # No command line override and we have configuration for the remote.
+ exec="--upload-pack=$(get_uploadpack $1)"
+fi
+
+remote_nick="$1"
+remote=$(get_remote_url "$@")
+refs=
+rref=
+rsync_slurped_objects=
+
+if test "" = "$append"
+then
+ : >"$GIT_DIR/FETCH_HEAD"
+fi
+
+# Global that is reused later
+ls_remote_result=$(git ls-remote $exec "$remote") ||
+ die "Cannot get the repository state from $remote"
+
+append_fetch_head () {
+ flags=
+ test -n "$verbose" && flags="$flags$LF-v"
+ test -n "$force$single_force" && flags="$flags$LF-f"
+ GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
+ git fetch--tool $flags append-fetch-head "$@"
+}
+
+# updating the current HEAD with git-fetch in a bare
+# repository is always fine.
+if test -z "$update_head_ok" && test $(is_bare_repository) = false
+then
+ orig_head=$(git rev-parse --verify HEAD 2>/dev/null)
+fi
+
+# Allow --notags from remote.$1.tagopt
+case "$tags$no_tags" in
+'')
+ case "$(git config --get "remote.$1.tagopt")" in
+ --no-tags)
+ no_tags=t ;;
+ esac
+esac
+
+# If --tags (and later --heads or --all) is specified, then we are
+# not talking about defaults stored in Pull: line of remotes or
+# branches file, and just fetch those and refspecs explicitly given.
+# Otherwise we do what we always did.
+
+reflist=$(get_remote_refs_for_fetch "$@")
+if test "$tags"
+then
+ taglist=`IFS=' ' &&
+ echo "$ls_remote_result" |
+ git show-ref --exclude-existing=refs/tags/ |
+ while read sha1 name
+ do
+ echo ".${name}:${name}"
+ done` || exit
+ if test "$#" -gt 1
+ then
+ # remote URL plus explicit refspecs; we need to merge them.
+ reflist="$reflist$LF$taglist"
+ else
+ # No explicit refspecs; fetch tags only.
+ reflist=$taglist
+ fi
+fi
+
+fetch_all_at_once () {
+
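+	# parse-reflist prints shell assignments; eval them to set up
+	# $refs and $rref for the rest of this function.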
+ eval=$(echo "$1" | git fetch--tool parse-reflist "-")
+ eval "$eval"
+
+ ( : subshell because we muck with IFS
+ IFS=" $LF"
+ (
+ if test "$remote" = . ; then
+ git show-ref $rref || echo failed "$remote"
+ elif test -f "$remote" ; then
+ test -n "$shallow_depth" &&
+ die "shallow clone with bundle is not supported"
+ git bundle unbundle "$remote" $rref ||
+ echo failed "$remote"
+ else
+ if test -d "$remote" &&
+
+ # The remote might be our alternate. With
+ # this optimization we will bypass fetch-pack
+ # altogether, which means we cannot be doing
+ # the shallow stuff at all.
+ test ! -f "$GIT_DIR/shallow" &&
+ test -z "$shallow_depth" &&
+
+ # See if all of what we are going to fetch are
+ # connected to our repository's tips, in which
+ # case we do not have to do any fetch.
+ theirs=$(echo "$ls_remote_result" | \
+ git fetch--tool -s pick-rref "$rref" "-") &&
+
+ # This will barf when $theirs reach an object that
+ # we do not have in our repository. Otherwise,
+ # we already have everything the fetch would bring in.
+ git rev-list --objects $theirs --not --all \
+ >/dev/null 2>/dev/null
+ then
+ echo "$ls_remote_result" | \
+ git fetch--tool pick-rref "$rref" "-"
+ else
+ flags=
+ case $verbose in
+ YesYes*)
+ flags="-v"
+ ;;
+ esac
+ git-fetch-pack --thin $exec $keep $shallow_depth \
+ $quiet $no_progress $flags "$remote" $rref ||
+ echo failed "$remote"
+ fi
+ fi
+ ) |
+ (
+ flags=
+ test -n "$verbose" && flags="$flags -v"
+ test -n "$force" && flags="$flags -f"
+ GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
+ git fetch--tool $flags native-store \
+ "$remote" "$remote_nick" "$refs"
+ )
+ ) || exit
+
+}
+
+fetch_per_ref () {
+ reflist="$1"
+ refs=
+ rref=
+
+ for ref in $reflist
+ do
+ refs="$refs$LF$ref"
+
+	# These are paths relative to $GIT_DIR, typically starting with
+	# refs/, but may also be HEAD.
+ if expr "z$ref" : 'z\.' >/dev/null
+ then
+ not_for_merge=t
+ ref=$(expr "z$ref" : 'z\.\(.*\)')
+ else
+ not_for_merge=
+ fi
+ if expr "z$ref" : 'z+' >/dev/null
+ then
+ single_force=t
+ ref=$(expr "z$ref" : 'z+\(.*\)')
+ else
+ single_force=
+ fi
+ remote_name=$(expr "z$ref" : 'z\([^:]*\):')
+ local_name=$(expr "z$ref" : 'z[^:]*:\(.*\)')
+
+ rref="$rref$LF$remote_name"
+
+ # There are transports that can fetch only one head at a time...
+ case "$remote" in
+ http://* | https://* | ftp://*)
+ test -n "$shallow_depth" &&
+ die "shallow clone with http not supported"
+ proto=`expr "$remote" : '\([^:]*\):'`
+ if [ -n "$GIT_SSL_NO_VERIFY" ]; then
+ curl_extra_args="-k"
+ fi
+ if [ -n "$GIT_CURL_FTP_NO_EPSV" -o \
+ "`git config --bool http.noEPSV`" = true ]; then
+ noepsv_opt="--disable-epsv"
+ fi
+
+ # Find $remote_name from ls-remote output.
+ head=$(echo "$ls_remote_result" | \
+ git fetch--tool -s pick-rref "$remote_name" "-")
+ expr "z$head" : "z$_x40\$" >/dev/null ||
+ die "No such ref $remote_name at $remote"
+ echo >&2 "Fetching $remote_name from $remote using $proto"
+ case "$quiet" in '') v=-v ;; *) v= ;; esac
+ git-http-fetch $v -a "$head" "$remote" || exit
+ ;;
+ rsync://*)
+ test -n "$shallow_depth" &&
+ die "shallow clone with rsync not supported"
+ TMP_HEAD="$GIT_DIR/TMP_HEAD"
+ rsync -L -q "$remote/$remote_name" "$TMP_HEAD" || exit 1
+ head=$(git rev-parse --verify TMP_HEAD)
+ rm -f "$TMP_HEAD"
+ case "$quiet" in '') v=-v ;; *) v= ;; esac
+ test "$rsync_slurped_objects" || {
+ rsync -a $v --ignore-existing --exclude info \
+ "$remote/objects/" "$GIT_OBJECT_DIRECTORY/" || exit
+
+ # Look at objects/info/alternates for rsync -- http will
+ # support it natively and git native ones will do it on
+ # the remote end. Not having that file is not a crime.
+ rsync -q "$remote/objects/info/alternates" \
+ "$GIT_DIR/TMP_ALT" 2>/dev/null ||
+ rm -f "$GIT_DIR/TMP_ALT"
+ if test -f "$GIT_DIR/TMP_ALT"
+ then
+ resolve_alternates "$remote" <"$GIT_DIR/TMP_ALT" |
+ while read alt
+ do
+ case "$alt" in 'bad alternate: '*) die "$alt";; esac
+ echo >&2 "Getting alternate: $alt"
+ rsync -av --ignore-existing --exclude info \
+ "$alt" "$GIT_OBJECT_DIRECTORY/" || exit
+ done
+ rm -f "$GIT_DIR/TMP_ALT"
+ fi
+ rsync_slurped_objects=t
+ }
+ ;;
+ esac
+
+ append_fetch_head "$head" "$remote" \
+ "$remote_name" "$remote_nick" "$local_name" "$not_for_merge" || exit
+
+ done
+
+}
+
+fetch_main () {
+ case "$remote" in
+ http://* | https://* | ftp://* | rsync://* )
+ fetch_per_ref "$@"
+ ;;
+ *)
+ fetch_all_at_once "$@"
+ ;;
+ esac
+}
+
+fetch_main "$reflist" || exit
+
+# automated tag following
+case "$no_tags$tags" in
+'')
+ case "$reflist" in
+ *:refs/*)
+ # effective only when we are following remote branch
+ # using local tracking branch.
+ taglist=$(IFS=' ' &&
+ echo "$ls_remote_result" |
+ git show-ref --exclude-existing=refs/tags/ |
+ while read sha1 name
+ do
+ git cat-file -t "$sha1" >/dev/null 2>&1 || continue
+ echo >&2 "Auto-following $name"
+ echo ".${name}:${name}"
+ done)
+ esac
+ case "$taglist" in
+ '') ;;
+ ?*)
+ # do not deepen a shallow tree when following tags
+ shallow_depth=
+ fetch_main "$taglist" || exit ;;
+ esac
+esac
+
+# If the original head was empty (i.e. no "master" yet), or
+# if we were told not to worry, we do not have to check.
+case "$orig_head" in
+'')
+ ;;
+?*)
+ curr_head=$(git rev-parse --verify HEAD 2>/dev/null)
+ if test "$curr_head" != "$orig_head"
+ then
+ git update-ref \
+ -m "$GIT_REFLOG_ACTION: Undoing incorrectly fetched HEAD." \
+ HEAD "$orig_head"
+ die "Cannot fetch into the current branch."
+ fi
+ ;;
+esac
--- /dev/null
+#!/usr/bin/perl -w
+
+# This tool is copyright (c) 2005, Matthias Urlichs.
+# It is released under the GNU General Public License, version 2.
+#
+# The basic idea is to pull and analyze SVN changes.
+#
+# Checking out the files is done by a single long-running SVN connection.
+#
+# The head revision is on branch "origin" by default.
+# You can change that with the '-o' option.
+
+use strict;
+use warnings;
+use Getopt::Std;
+use File::Copy;
+use File::Spec;
+use File::Temp qw(tempfile);
+use File::Path qw(mkpath);
+use File::Basename qw(basename dirname);
+use Time::Local;
+use IO::Pipe;
+use POSIX qw(strftime dup2);
+use IPC::Open2;
+use SVN::Core;
+use SVN::Ra;
+
+die "Need SVN:Core 1.2.1 or better" if $SVN::Core::VERSION lt "1.2.1";
+
+$SIG{'PIPE'}="IGNORE";
+$ENV{'TZ'}="UTC";
+
+our($opt_h,$opt_o,$opt_v,$opt_u,$opt_C,$opt_i,$opt_m,$opt_M,$opt_t,$opt_T,
+ $opt_b,$opt_r,$opt_I,$opt_A,$opt_s,$opt_l,$opt_d,$opt_D,$opt_S,$opt_F,
+ $opt_P,$opt_R);
+
+sub usage() {
+ print STDERR <<END;
+Usage: ${\basename $0} # fetch/update GIT from SVN
+ [-o branch-for-HEAD] [-h] [-v] [-l max_rev] [-R repack_each_revs]
+ [-C GIT_repository] [-t tagname] [-T trunkname] [-b branchname]
+ [-d|-D] [-i] [-u] [-r] [-I ignorefilename] [-s start_chg]
+ [-m] [-M regex] [-A author_file] [-S] [-F] [-P project_name] [SVN_URL]
+END
+ exit(1);
+}
+
+getopts("A:b:C:dDFhiI:l:mM:o:rs:t:T:SP:R:uv") or usage();
+usage if $opt_h;
+
+my $tag_name = $opt_t || "tags";
+my $trunk_name = defined $opt_T ? $opt_T : "trunk";
+my $branch_name = $opt_b || "branches";
+my $project_name = $opt_P || "";
+$project_name = "/" . $project_name if ($project_name);
+my $repack_after = $opt_R || 1000;
+my $root_pool = SVN::Pool->new_default;
+
+@ARGV == 1 or @ARGV == 2 or usage();
+
+$opt_o ||= "origin";
+$opt_s ||= 1;
+my $git_tree = $opt_C;
+$git_tree ||= ".";
+
+my $svn_url = $ARGV[0];
+my $svn_dir = $ARGV[1];
+
+our @mergerx = ();
+if ($opt_m) {
+ my $branch_esc = quotemeta ($branch_name);
+ my $trunk_esc = quotemeta ($trunk_name);
+ @mergerx =
+ (
+ qr!\b(?:merg(?:ed?|ing))\b.*?\b((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
+ qr!\b(?:from|of)\W+((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
+ qr!\b(?:from|of)\W+(?:the )?([\w\.\-]+)[-\s]branch\b!i
+ );
+}
+if ($opt_M) {
+ unshift (@mergerx, qr/$opt_M/);
+}
+
+# Absolutize filename now, since we will have chdir'ed by the time we
+# get around to opening it.
+$opt_A = File::Spec->rel2abs($opt_A) if $opt_A;
+
+our %users = ();
+our $users_file = undef;
+sub read_users($) {
+ $users_file = File::Spec->rel2abs(@_);
+ die "Cannot open $users_file\n" unless -f $users_file;
+	open(my $authors, '<', $users_file) or die "Cannot open $users_file: $!\n";
+ while(<$authors>) {
+ chomp;
+ next unless /^(\S+?)\s*=\s*(.+?)\s*<(.+)>\s*$/;
+ (my $user,my $name,my $email) = ($1,$2,$3);
+ $users{$user} = [$name,$email];
+ }
+ close($authors);
+}
+
+select(STDERR); $|=1; select(STDOUT);
+
+
+package SVNconn;
+# Basic SVN connection.
+# We're only interested in connecting and downloading, so ...
+
+use File::Spec;
+use File::Temp qw(tempfile);
+use POSIX qw(strftime dup2);
+use Fcntl qw(SEEK_SET);
+
+sub new {
+ my($what,$repo) = @_;
+ $what=ref($what) if ref($what);
+
+ my $self = {};
+ $self->{'buffer'} = "";
+ bless($self,$what);
+
+ $repo =~ s#/+$##;
+ $self->{'fullrep'} = $repo;
+ $self->conn();
+
+ return $self;
+}
+
+sub conn {
+ my $self = shift;
+ my $repo = $self->{'fullrep'};
+ my $auth = SVN::Core::auth_open ([SVN::Client::get_simple_provider,
+ SVN::Client::get_ssl_server_trust_file_provider,
+ SVN::Client::get_username_provider]);
+ my $s = SVN::Ra->new(url => $repo, auth => $auth, pool => $root_pool);
+ die "SVN connection to $repo: $!\n" unless defined $s;
+ $self->{'svn'} = $s;
+ $self->{'repo'} = $repo;
+ $self->{'maxrev'} = $s->get_latest_revnum();
+}
+
+sub file {
+ my($self,$path,$rev) = @_;
+
+ my ($fh, $name) = tempfile('gitsvn.XXXXXX',
+ DIR => File::Spec->tmpdir(), UNLINK => 1);
+
+ print "... $rev $path ...\n" if $opt_v;
+ my (undef, $properties);
+ $path =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ eval { (undef, $properties)
+ = $self->{'svn'}->get_file($path,$rev,$fh); };
+ if($@) {
+ return undef if $@ =~ /Attempted to get checksum/;
+ die $@;
+ }
+ my $mode;
+ if (exists $properties->{'svn:executable'}) {
+ $mode = '100755';
+ } elsif (exists $properties->{'svn:special'}) {
+ my ($special_content, $filesize);
+ $filesize = tell $fh;
+ seek $fh, 0, SEEK_SET;
+ read $fh, $special_content, $filesize;
+ if ($special_content =~ s/^link //) {
+ $mode = '120000';
+ seek $fh, 0, SEEK_SET;
+ truncate $fh, 0;
+ print $fh $special_content;
+ } else {
+ die "unexpected svn:special file encountered";
+ }
+ } else {
+ $mode = '100644';
+ }
+ close ($fh);
+
+ return ($name, $mode);
+}
+
+sub ignore {
+ my($self,$path,$rev) = @_;
+
+ print "... $rev $path ...\n" if $opt_v;
+ $path =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ my (undef,undef,$properties)
+ = $self->{'svn'}->get_dir($path,$rev,undef);
+ if (exists $properties->{'svn:ignore'}) {
+ my ($fh, $name) = tempfile('gitsvn.XXXXXX',
+ DIR => File::Spec->tmpdir(),
+ UNLINK => 1);
+ print $fh $properties->{'svn:ignore'};
+ close($fh);
+ return $name;
+ } else {
+ return undef;
+ }
+}
+
+sub dir_list {
+ my($self,$path,$rev) = @_;
+ $path =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ my ($dirents,undef,$properties)
+ = $self->{'svn'}->get_dir($path,$rev,undef);
+ return $dirents;
+}
+
+package main;
+use URI;
+
+our $svn = $svn_url;
+$svn .= "/$svn_dir" if defined $svn_dir;
+my $svn2 = SVNconn->new($svn);
+$svn = SVNconn->new($svn);
+
+my $lwp_ua;
+if($opt_d or $opt_D) {
+ $svn_url = URI->new($svn_url)->canonical;
+ if($opt_D) {
+ $svn_dir =~ s#/*$#/#;
+ } else {
+ $svn_dir = "";
+ }
+ if ($svn_url->scheme eq "http") {
+ use LWP::UserAgent;
+ $lwp_ua = LWP::UserAgent->new(keep_alive => 1, requests_redirectable => []);
+ } else {
+ print STDERR "Warning: not HTTP; turning off direct file access\n";
+ $opt_d=0;
+ }
+}
+
+sub pdate($) {
+ my($d) = @_;
+ $d =~ m#(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)#
+ or die "Unparseable date: $d\n";
+ my $y=$1; $y-=1900 if $y>1900;
+ return timegm($6||0,$5,$4,$3,$2-1,$y);
+}
+
+sub getwd() {
+ my $pwd = `pwd`;
+ chomp $pwd;
+ return $pwd;
+}
+
+
+sub get_headref($$) {
+ my $name = shift;
+ my $git_dir = shift;
+ my $sha;
+
+ if (open(C,"$git_dir/refs/heads/$name")) {
+ chomp($sha = <C>);
+ close(C);
+ length($sha) == 40
+ or die "Cannot get head id for $name ($sha): $!\n";
+ }
+ return $sha;
+}
+
+
+-d $git_tree
+ or mkdir($git_tree,0777)
+ or die "Could not create $git_tree: $!";
+chdir($git_tree);
+
+my $orig_branch = "";
+my $forward_master = 0;
+my %branches;
+
+my $git_dir = $ENV{"GIT_DIR"} || ".git";
+$git_dir = getwd()."/".$git_dir unless $git_dir =~ m#^/#;
+$ENV{"GIT_DIR"} = $git_dir;
+my $orig_git_index;
+$orig_git_index = $ENV{GIT_INDEX_FILE} if exists $ENV{GIT_INDEX_FILE};
+my ($git_ih, $git_index) = tempfile('gitXXXXXX', SUFFIX => '.idx',
+ DIR => File::Spec->tmpdir());
+close ($git_ih);
+$ENV{GIT_INDEX_FILE} = $git_index;
+my $maxnum = 0;
+my $last_rev = "";
+my $last_branch;
+my $current_rev = $opt_s || 1;
+unless(-d $git_dir) {
+ system("git-init");
+ die "Cannot init the GIT db at $git_tree: $?\n" if $?;
+ system("git-read-tree");
+ die "Cannot init an empty tree: $?\n" if $?;
+
+ $last_branch = $opt_o;
+ $orig_branch = "";
+} else {
+ -f "$git_dir/refs/heads/$opt_o"
+ or die "Branch '$opt_o' does not exist.\n".
+ "Either use the correct '-o branch' option,\n".
+ "or import to a new repository.\n";
+
+ -f "$git_dir/svn2git"
+ or die "'$git_dir/svn2git' does not exist.\n".
+ "You need that file for incremental imports.\n";
+ open(F, "git-symbolic-ref HEAD |") or
+ die "Cannot run git-symbolic-ref: $!\n";
+ chomp ($last_branch = <F>);
+ $last_branch = basename($last_branch);
+ close(F);
+ unless($last_branch) {
+ warn "Cannot read the last branch name: $! -- assuming 'master'\n";
+ $last_branch = "master";
+ }
+ $orig_branch = $last_branch;
+ $last_rev = get_headref($orig_branch, $git_dir);
+ if (-f "$git_dir/SVN2GIT_HEAD") {
+ die <<EOM;
+SVN2GIT_HEAD exists.
+Make sure your working directory corresponds to HEAD and remove SVN2GIT_HEAD.
+You may need to run
+
+ git-read-tree -m -u SVN2GIT_HEAD HEAD
+EOM
+ }
+ system('cp', "$git_dir/HEAD", "$git_dir/SVN2GIT_HEAD");
+
+ $forward_master =
+ $opt_o ne 'master' && -f "$git_dir/refs/heads/master" &&
+ system('cmp', '-s', "$git_dir/refs/heads/master",
+ "$git_dir/refs/heads/$opt_o") == 0;
+
+ # populate index
+ system('git-read-tree', $last_rev);
+ die "read-tree failed: $?\n" if $?;
+
+ # Get the last import timestamps
+ open my $B,"<", "$git_dir/svn2git";
+ while(<$B>) {
+ chomp;
+ my($num,$branch,$ref) = split;
+ $branches{$branch}{$num} = $ref;
+ $branches{$branch}{"LAST"} = $ref;
+ $current_rev = $num+1 if $current_rev <= $num;
+ }
+ close($B);
+}
+-d $git_dir
+ or die "Could not create git subdir ($git_dir).\n";
+
+my $default_authors = "$git_dir/svn-authors";
+if ($opt_A) {
+ read_users($opt_A);
+ copy($opt_A,$default_authors) or die "Copy failed: $!";
+} else {
+ read_users($default_authors) if -f $default_authors;
+}
+
+open BRANCHES,">>", "$git_dir/svn2git";
+
+sub node_kind($$) {
+ my ($svnpath, $revision) = @_;
+ $svnpath =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ my $kind = $svn->{'svn'}->check_path($svnpath,$revision);
+ return $kind;
+}
+
+sub get_file($$$) {
+ my($svnpath,$rev,$path) = @_;
+
+ # now get it
+ my ($name,$mode);
+ if($opt_d) {
+ my($req,$res);
+
+ # /svn/!svn/bc/2/django/trunk/django-docs/build.py
+ my $url=$svn_url->clone();
+ $url->path($url->path."/!svn/bc/$rev/$svn_dir$svnpath");
+ print "... $path...\n" if $opt_v;
+ $req = HTTP::Request->new(GET => $url);
+ $res = $lwp_ua->request($req);
+ if ($res->is_success) {
+ my $fh;
+ ($fh, $name) = tempfile('gitsvn.XXXXXX',
+ DIR => File::Spec->tmpdir(), UNLINK => 1);
+ print $fh $res->content;
+ close($fh) or die "Could not write $name: $!\n";
+ } else {
+ return undef if $res->code == 301; # directory?
+ die $res->status_line." at $url\n";
+ }
+ $mode = '0644'; # can't obtain mode via direct http request?
+ } else {
+ ($name,$mode) = $svn->file("$svnpath",$rev);
+ return undef unless defined $name;
+ }
+
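+	# Hash the temporary file into the object database and collect the sha1.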
+ my $pid = open(my $F, '-|');
+ die $! unless defined $pid;
+ if (!$pid) {
+ exec("git-hash-object", "-w", $name)
+ or die "Cannot create object: $!\n";
+ }
+ my $sha = <$F>;
+ chomp $sha;
+ close $F;
+ unlink $name;
+ return [$mode, $sha, $path];
+}
+
+sub get_ignore($$$$$) {
+ my($new,$old,$rev,$path,$svnpath) = @_;
+
+ return unless $opt_I;
+ my $name = $svn->ignore("$svnpath",$rev);
+ if ($path eq '/') {
+ $path = $opt_I;
+ } else {
+ $path = File::Spec->catfile($path,$opt_I);
+ }
+ if (defined $name) {
+ my $pid = open(my $F, '-|');
+ die $! unless defined $pid;
+ if (!$pid) {
+ exec("git-hash-object", "-w", $name)
+ or die "Cannot create object: $!\n";
+ }
+ my $sha = <$F>;
+ chomp $sha;
+ close $F;
+ unlink $name;
+ push(@$new,['0644',$sha,$path]);
+ } elsif (defined $old) {
+ push(@$old,$path);
+ }
+}
+
+sub project_path($$)
+{
+ my ($path, $project) = @_;
+
+	$path = "/".$path unless ($path =~ m#^/#);
+ return $1 if ($path =~ m#^$project\/(.*)$#);
+
+ $path =~ s#\.#\\\.#g;
+ $path =~ s#\+#\\\+#g;
+ return "/" if ($project =~ m#^$path.*$#);
+
+ return undef;
+}
+
+sub split_path($$) {
+ my($rev,$path) = @_;
+ my $branch;
+
+ if($path =~ s#^/\Q$tag_name\E/([^/]+)/?##) {
+ $branch = "/$1";
+ } elsif($path =~ s#^/\Q$trunk_name\E/?##) {
+ $branch = "/";
+ } elsif($path =~ s#^/\Q$branch_name\E/([^/]+)/?##) {
+ $branch = $1;
+ } else {
+ my %no_error = (
+ "/" => 1,
+ "/$tag_name" => 1,
+ "/$branch_name" => 1
+ );
+ print STDERR "$rev: Unrecognized path: $path\n" unless (defined $no_error{$path});
+ return ()
+ }
+ if ($path eq "") {
+ $path = "/";
+ } elsif ($project_name) {
+ $path = project_path($path, $project_name);
+ }
+ return ($branch,$path);
+}
+
+sub branch_rev($$) {
+
+ my ($srcbranch,$uptorev) = @_;
+
+ my $bbranches = $branches{$srcbranch};
+ my @revs = reverse sort { ($a eq 'LAST' ? 0 : $a) <=> ($b eq 'LAST' ? 0 : $b) } keys %$bbranches;
+ my $therev;
+ foreach my $arev(@revs) {
+ next if ($arev eq 'LAST');
+ if ($arev <= $uptorev) {
+ $therev = $arev;
+ last;
+ }
+ }
+ return $therev;
+}
+
+sub expand_svndir($$$);
+
+sub expand_svndir($$$)
+{
+ my ($svnpath, $rev, $path) = @_;
+ my @list;
+ get_ignore(\@list, undef, $rev, $path, $svnpath);
+ my $dirents = $svn->dir_list($svnpath, $rev);
+ foreach my $p(keys %$dirents) {
+ my $kind = node_kind($svnpath.'/'.$p, $rev);
+ if ($kind eq $SVN::Node::file) {
+ my $f = get_file($svnpath.'/'.$p, $rev, $path.'/'.$p);
+ push(@list, $f) if $f;
+ } elsif ($kind eq $SVN::Node::dir) {
+ push(@list,
+ expand_svndir($svnpath.'/'.$p, $rev, $path.'/'.$p));
+ }
+ }
+ return @list;
+}
+
+sub copy_path($$$$$$$$) {
+ # Somebody copied a whole subdirectory.
+ # We need to find the index entries from the old version which the
+ # SVN log entry points to, and add them to the new place.
+
+ my($newrev,$newbranch,$path,$oldpath,$rev,$node_kind,$new,$parents) = @_;
+
+ my($srcbranch,$srcpath) = split_path($rev,$oldpath);
+ unless(defined $srcbranch && defined $srcpath) {
+ print "Path not found when copying from $oldpath @ $rev.\n".
+ "Will try to copy from original SVN location...\n"
+ if $opt_v;
+ push (@$new, expand_svndir($oldpath, $rev, $path));
+ return;
+ }
+ my $therev = branch_rev($srcbranch, $rev);
+ my $gitrev = $branches{$srcbranch}{$therev};
+ unless($gitrev) {
+ print STDERR "$newrev:$newbranch: could not find $oldpath \@ $rev\n";
+ return;
+ }
+ if ($srcbranch ne $newbranch) {
+ push(@$parents, $branches{$srcbranch}{'LAST'});
+ }
+ print "$newrev:$newbranch:$path: copying from $srcbranch:$srcpath @ $rev\n" if $opt_v;
+ if ($node_kind eq $SVN::Node::dir) {
+ $srcpath =~ s#/*$#/#;
+ }
+
+ my $pid = open my $f,'-|';
+ die $! unless defined $pid;
+ if (!$pid) {
+ exec("git-ls-tree","-r","-z",$gitrev,$srcpath)
+ or die $!;
+ }
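+	# Output of "git-ls-tree -z" is NUL-delimited; read one record at a time.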
+ local $/ = "\0";
+ while(<$f>) {
+ chomp;
+ my($m,$p) = split(/\t/,$_,2);
+ my($mode,$type,$sha1) = split(/ /,$m);
+ next if $type ne "blob";
+ if ($node_kind eq $SVN::Node::dir) {
+ $p = $path . substr($p,length($srcpath)-1);
+ } else {
+ $p = $path;
+ }
+ push(@$new,[$mode,$sha1,$p]);
+ }
+ close($f) or
+ print STDERR "$newrev:$newbranch: could not list files in $oldpath \@ $rev\n";
+}
+
+sub commit {
+ my($branch, $changed_paths, $revision, $author, $date, $message) = @_;
+ my($committer_name,$committer_email,$dest);
+ my($author_name,$author_email);
+ my(@old,@new,@parents);
+
+ if (not defined $author or $author eq "") {
+ $committer_name = $committer_email = "unknown";
+ } elsif (defined $users_file) {
+ die "User $author is not listed in $users_file\n"
+ unless exists $users{$author};
+ ($committer_name,$committer_email) = @{$users{$author}};
+ } elsif ($author =~ /^(.*?)\s+<(.*)>$/) {
+ ($committer_name, $committer_email) = ($1, $2);
+ } else {
+ $author =~ s/^<(.*)>$/$1/;
+ $committer_name = $committer_email = $author;
+ }
+
+ if ($opt_F && $message =~ /From:\s+(.*?)\s+<(.*)>\s*\n/) {
+ ($author_name, $author_email) = ($1, $2);
+ print "Author from From: $1 <$2>\n" if ($opt_v);;
+ } elsif ($opt_S && $message =~ /Signed-off-by:\s+(.*?)\s+<(.*)>\s*\n/) {
+ ($author_name, $author_email) = ($1, $2);
+ print "Author from Signed-off-by: $1 <$2>\n" if ($opt_v);;
+ } else {
+ $author_name = $committer_name;
+ $author_email = $committer_email;
+ }
+
+ $date = pdate($date);
+
+ my $tag;
+ my $parent;
+ if($branch eq "/") { # trunk
+ $parent = $opt_o;
+ } elsif($branch =~ m#^/(.+)#) { # tag
+ $tag = 1;
+ $parent = $1;
+ } else { # "normal" branch
+ # nothing to do
+ $parent = $branch;
+ }
+ $dest = $parent;
+
+ my $prev = $changed_paths->{"/"};
+ if($prev and $prev->[0] eq "A") {
+ delete $changed_paths->{"/"};
+ my $oldpath = $prev->[1];
+ my $rev;
+ if(defined $oldpath) {
+ my $p;
+ ($parent,$p) = split_path($revision,$oldpath);
+ if(defined $parent) {
+ if($parent eq "/") {
+ $parent = $opt_o;
+ } else {
+ $parent =~ s#^/##; # if it's a tag
+ }
+ }
+ } else {
+ $parent = undef;
+ }
+ }
+
+ my $rev;
+ if($revision > $opt_s and defined $parent) {
+ open(H,'-|',"git-rev-parse","--verify",$parent);
+ $rev = <H>;
+ close(H) or do {
+ print STDERR "$revision: cannot find commit '$parent'!\n";
+ return;
+ };
+ chop $rev;
+ if(length($rev) != 40) {
+ print STDERR "$revision: cannot find commit '$parent'!\n";
+ return;
+ }
+ $rev = $branches{($parent eq $opt_o) ? "/" : $parent}{"LAST"};
+ if($revision != $opt_s and not $rev) {
+ print STDERR "$revision: do not know ancestor for '$parent'!\n";
+ return;
+ }
+ } else {
+ $rev = undef;
+ }
+
+# if($prev and $prev->[0] eq "A") {
+# if(not $tag) {
+# unless(open(H,"> $git_dir/refs/heads/$branch")) {
+# print STDERR "$revision: Could not create branch $branch: $!\n";
+# $state=11;
+# next;
+# }
+# print H "$rev\n"
+# or die "Could not write branch $branch: $!";
+# close(H)
+# or die "Could not write branch $branch: $!";
+# }
+# }
+ if(not defined $rev) {
+ unlink($git_index);
+ } elsif ($rev ne $last_rev) {
+ print "Switching from $last_rev to $rev ($branch)\n" if $opt_v;
+ system("git-read-tree", $rev);
+ die "read-tree failed for $rev: $?\n" if $?;
+ $last_rev = $rev;
+ }
+
+ push (@parents, $rev) if defined $rev;
+
+ my $cid;
+ if($tag and not %$changed_paths) {
+ $cid = $rev;
+ } else {
+ my @paths = sort keys %$changed_paths;
+ foreach my $path(@paths) {
+ my $action = $changed_paths->{$path};
+
+ if ($action->[0] eq "R") {
+ # refer to a file/tree in an earlier commit
+ push(@old,$path); # remove any old stuff
+ }
+ if(($action->[0] eq "A") || ($action->[0] eq "R")) {
+ my $node_kind = node_kind($action->[3], $revision);
+ if ($node_kind eq $SVN::Node::file) {
+ my $f = get_file($action->[3],
+ $revision, $path);
+ if ($f) {
+ push(@new,$f) if $f;
+ } else {
+ my $opath = $action->[3];
+ print STDERR "$revision: $branch: could not fetch '$opath'\n";
+ }
+ } elsif ($node_kind eq $SVN::Node::dir) {
+ if($action->[1]) {
+ copy_path($revision, $branch,
+ $path, $action->[1],
+ $action->[2], $node_kind,
+ \@new, \@parents);
+ } else {
+ get_ignore(\@new, \@old, $revision,
+ $path, $action->[3]);
+ }
+ }
+ } elsif ($action->[0] eq "D") {
+ push(@old,$path);
+ } elsif ($action->[0] eq "M") {
+ my $node_kind = node_kind($action->[3], $revision);
+ if ($node_kind eq $SVN::Node::file) {
+ my $f = get_file($action->[3],
+ $revision, $path);
+ push(@new,$f) if $f;
+ } elsif ($node_kind eq $SVN::Node::dir) {
+ get_ignore(\@new, \@old, $revision,
+ $path, $action->[3]);
+ }
+ } else {
+ die "$revision: unknown action '".$action->[0]."' for $path\n";
+ }
+ }
+
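+ # Expand the paths with git-ls-files and feed them to
+ # git-update-index in small batches, keeping each command
+ # line well below the operating system's argument-length limits.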
+ while(@old) {
+ my @o1;
+ if(@old > 55) {
+ @o1 = splice(@old,0,50);
+ } else {
+ @o1 = @old;
+ @old = ();
+ }
+ my $pid = open my $F, "-|";
+ die "$!" unless defined $pid;
+ if (!$pid) {
+ exec("git-ls-files", "-z", @o1) or die $!;
+ }
+ @o1 = ();
+ local $/ = "\0";
+ while(<$F>) {
+ chomp;
+ push(@o1,$_);
+ }
+ close($F);
+
+ while(@o1) {
+ my @o2;
+ if(@o1 > 55) {
+ @o2 = splice(@o1,0,50);
+ } else {
+ @o2 = @o1;
+ @o1 = ();
+ }
+ system("git-update-index","--force-remove","--",@o2);
+ die "Cannot remove files: $?\n" if $?;
+ }
+ }
+ while(@new) {
+ my @n2;
+ if(@new > 12) {
+ @n2 = splice(@new,0,10);
+ } else {
+ @n2 = @new;
+ @new = ();
+ }
+ system("git-update-index","--add",
+ (map { ('--cacheinfo', @$_) } @n2));
+ die "Cannot add files: $?\n" if $?;
+ }
+
+ my $pid = open(C,"-|");
+ die "Cannot fork: $!" unless defined $pid;
+ unless($pid) {
+ exec("git-write-tree");
+ die "Cannot exec git-write-tree: $!\n";
+ }
+ chomp(my $tree = <C>);
+ length($tree) == 40
+ or die "Cannot get tree id ($tree): $!\n";
+ close(C)
+ or die "Error running git-write-tree: $?\n";
+ print "Tree ID $tree\n" if $opt_v;
+
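+ # Feed the log message to git-commit-tree on its stdin and read
+ # the resulting commit id back from its stdout.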
+ my $pr = IO::Pipe->new() or die "Cannot open pipe: $!\n";
+ my $pw = IO::Pipe->new() or die "Cannot open pipe: $!\n";
+ $pid = fork();
+ die "Fork: $!\n" unless defined $pid;
+ unless($pid) {
+ $pr->writer();
+ $pw->reader();
+ open(OUT,">&STDOUT");
+ dup2($pw->fileno(),0);
+ dup2($pr->fileno(),1);
+ $pr->close();
+ $pw->close();
+
+ my @par = ();
+
+ # loose detection of merges
+ # based on the commit msg
+ foreach my $rx (@mergerx) {
+ if ($message =~ $rx) {
+ my $mparent = $1;
+ if ($mparent eq 'HEAD') { $mparent = $opt_o };
+ if ( -e "$git_dir/refs/heads/$mparent") {
+ $mparent = get_headref($mparent, $git_dir);
+ push (@parents, $mparent);
+ print OUT "Merge parent branch: $mparent\n" if $opt_v;
+ }
+ }
+ }
+ my %seen_parents = ();
+ my @unique_parents = grep { ! $seen_parents{$_} ++ } @parents;
+ foreach my $bparent (@unique_parents) {
+ push @par, '-p', $bparent;
+ print OUT "Merge parent branch: $bparent\n" if $opt_v;
+ }
+
+ exec("env",
+ "GIT_AUTHOR_NAME=$author_name",
+ "GIT_AUTHOR_EMAIL=$author_email",
+ "GIT_AUTHOR_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
+ "GIT_COMMITTER_NAME=$committer_name",
+ "GIT_COMMITTER_EMAIL=$committer_email",
+ "GIT_COMMITTER_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
+ "git-commit-tree", $tree,@par);
+ die "Cannot exec git-commit-tree: $!\n";
+ }
+ $pw->writer();
+ $pr->reader();
+
+ $message =~ s/[\s\n]+\z//;
+ $message = "r$revision: $message" if $opt_r;
+
+ print $pw "$message\n"
+ or die "Error writing to git-commit-tree: $!\n";
+ $pw->close();
+
+ print "Committed change $revision:$branch ".strftime("%Y-%m-%d %H:%M:%S",gmtime($date)).")\n" if $opt_v;
+ chomp($cid = <$pr>);
+ length($cid) == 40
+ or die "Cannot get commit id ($cid): $!\n";
+ print "Commit ID $cid\n" if $opt_v;
+ $pr->close();
+
+ waitpid($pid,0);
+ die "Error running git-commit-tree: $?\n" if $?;
+ }
+
+ if (not defined $cid) {
+ $cid = $branches{"/"}{"LAST"};
+ }
+
+ if(not defined $dest) {
+ print "... no known parent\n" if $opt_v;
+ } elsif(not $tag) {
+ print "Writing to refs/heads/$dest\n" if $opt_v;
+ open(C,">$git_dir/refs/heads/$dest") and
+ print C ("$cid\n") and
+ close(C)
+ or die "Cannot write branch $dest for update: $!\n";
+ }
+
+ if ($tag) {
+ $last_rev = "-" if %$changed_paths;
+ # the tag was 'complex', i.e. did not refer to a "real" revision
+
+ $dest =~ tr/_/\./ if $opt_u;
+
+ system('git-tag', '-f', $dest, $cid) == 0
+ or die "Cannot create tag $dest: $?\n";
+
+ print "Created tag '$dest' on '$branch'\n" if $opt_v;
+ }
+ $branches{$branch}{"LAST"} = $cid;
+ $branches{$branch}{$revision} = $cid;
+ $last_rev = $cid;
+ print BRANCHES "$revision $branch $cid\n";
+ print "DONE: $revision $dest $cid\n" if $opt_v;
+}
+
+sub commit_all {
+ # Recursive use of the SVN connection does not work
+ local $svn = $svn2;
+
+ my ($changed_paths, $revision, $author, $date, $message) = @_;
+ my %p;
+ while(my($path,$action) = each %$changed_paths) {
+ $p{$path} = [ $action->action,$action->copyfrom_path, $action->copyfrom_rev, $path ];
+ }
+ $changed_paths = \%p;
+
+ my %done;
+ my @col;
+ my $pref;
+ my $branch;
+
+ while(my($path,$action) = each %$changed_paths) {
+ ($branch,$path) = split_path($revision,$path);
+ next if not defined $branch;
+ next if not defined $path;
+ $done{$branch}{$path} = $action;
+ }
+ while(($branch,$changed_paths) = each %done) {
+ commit($branch, $changed_paths, $revision, $author, $date, $message);
+ }
+}
+
+$opt_l = $svn->{'maxrev'} if not defined $opt_l or $opt_l > $svn->{'maxrev'};
+
+if ($opt_l < $current_rev) {
+ print "Up to date: no new revisions to fetch!\n" if $opt_v;
+ unlink("$git_dir/SVN2GIT_HEAD");
+ exit;
+}
+
+print "Processing from $current_rev to $opt_l ...\n" if $opt_v;
+
+my $from_rev;
+my $to_rev = $current_rev - 1;
+
+my $subpool = SVN::Pool::new_default_sub;
+while ($to_rev < $opt_l) {
+ $subpool->clear;
+ $from_rev = $to_rev + 1;
+ $to_rev = $from_rev + $repack_after;
+ $to_rev = $opt_l if $opt_l < $to_rev;
+ print "Fetching from $from_rev to $to_rev ...\n" if $opt_v;
+ $svn->{'svn'}->get_log("/",$from_rev,$to_rev,0,1,1,\&commit_all);
+ my $pid = fork();
+ die "Fork: $!\n" unless defined $pid;
+ unless($pid) {
+ exec("git-repack", "-d")
+ or die "Cannot repack: $!\n";
+ }
+ waitpid($pid, 0);
+}
+
+
+unlink($git_index);
+
+if (defined $orig_git_index) {
+ $ENV{GIT_INDEX_FILE} = $orig_git_index;
+} else {
+ delete $ENV{GIT_INDEX_FILE};
+}
+
+# Now switch back to the branch we were in before all of this happened
+if($orig_branch) {
+ print "DONE\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
+ system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
+ if $forward_master;
+ unless ($opt_i) {
+ system('git-read-tree', '-m', '-u', 'SVN2GIT_HEAD', 'HEAD');
+ die "read-tree failed: $?\n" if $?;
+ }
+} else {
+ $orig_branch = "master";
+ print "DONE; creating $orig_branch branch\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
+ system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
+ unless -f "$git_dir/refs/heads/master";
+ system('git-update-ref', 'HEAD', "$orig_branch");
+ unless ($opt_i) {
+ system('git checkout');
+ die "checkout failed: $?\n" if $?;
+ }
+}
+unlink("$git_dir/SVN2GIT_HEAD");
+close(BRANCHES);
--- /dev/null
+git-svnimport(1)
+================
+v0.1, July 2005
+
+NAME
+----
+git-svnimport - Import an SVN repository into git
+
+
+SYNOPSIS
+--------
+[verse]
+'git-svnimport' [ -o <branch-for-HEAD> ] [ -h ] [ -v ] [ -d | -D ]
+ [ -C <GIT_repository> ] [ -i ] [ -u ] [ -l limit_rev ]
+ [ -b branch_subdir ] [ -T trunk_subdir ] [ -t tag_subdir ]
+ [ -s start_chg ] [ -m ] [ -r ] [ -M regex ]
+ [ -I <ignorefile_name> ] [ -A <author_file> ]
+ [ -R <repack_each_revs>] [ -P <path_from_trunk> ]
+ <SVN_repository_URL> [ <path> ]
+
+
+DESCRIPTION
+-----------
+Imports an SVN repository into git. It either creates a new
+repository or incrementally imports into an existing one.
+
+SVN access is done by the SVN::Perl module.
+
+git-svnimport assumes that SVN repositories are organized into one
+"trunk" directory where the main development happens, "branches/FOO"
+directories for branches, and "tags/FOO" directories for tags.
+Other subdirectories are ignored.
+
+git-svnimport creates a file ".git/svn2git", which is required for
+incremental SVN imports.
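+
+A first import from a hypothetical repository (the URL and target
+directory below are placeholders) could look like this; re-running
+the same command later resumes the import incrementally:
+
+------
+	git-svnimport -C project.git -v svn://svn.example.com/svn
+------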
+
+OPTIONS
+-------
+-C <target-dir>::
+ The GIT repository to import to. If the directory doesn't
+ exist, it will be created. Default is the current directory.
+
+-s <start_rev>::
+ Start importing at this SVN change number. The default is 1.
++
+When importing incrementally, you might need to edit the .git/svn2git file.
+
+-i::
+ Import-only: don't perform a checkout after importing. This option
+ ensures the working directory and index remain untouched and will
+ not create them if they do not exist.
+
+-T <trunk_subdir>::
+ Name the SVN trunk. Default "trunk".
+
+-t <tag_subdir>::
+ Name the SVN subdirectory for tags. Default "tags".
+
+-b <branch_subdir>::
+ Name the SVN subdirectory for branches. Default "branches".
+
+-o <branch-for-HEAD>::
+ The 'trunk' branch from SVN is imported to the 'origin' branch within
+ the git repository. Use this option if you want to import into a
+ different branch.
+
+-r::
+ Prepend 'rX: ' to commit messages, where X is the imported
+ subversion revision.
+
+-u::
+ Replace underscores in tag names with periods.
+
+-I <ignorefile_name>::
+ Import the svn:ignore directory property to files with this
+ name in each directory. (The Subversion and GIT ignore
+ syntaxes are similar enough that using the Subversion patterns
+ directly with "-I .gitignore" will almost always just work.)
+
+-A <author_file>::
+ Read a file with lines in the form
++
+------
+ username = User's Full Name <email@addr.es>
+
+------
++
+and use "User's Full Name <email@addr.es>" as the GIT
+author and committer for Subversion commits made by
+"username". If encountering a commit made by a user not in the
+list, abort.
++
+For convenience, this data is saved to $GIT_DIR/svn-authors
+each time the -A option is provided, and read from that same
+file each time git-svnimport is run with an existing GIT
+repository without -A.
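++
+A hypothetical invocation mapping SVN usernames through an
+"authors.txt" file in the format shown above:
++
+------
+	git-svnimport -A authors.txt svn://svn.example.com/svn
+------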
+
+-m::
+ Attempt to detect merges based on the commit message. This option
+ will enable default regexes that try to capture the source
+ branch name from the commit message.
+
+-M <regex>::
+ Attempt to detect merges based on the commit message with a custom
+ regex. It can be combined with -m to apply the default regexes as well.
+ You must escape forward slashes.
+
+-l <max_rev>::
+ Specify a maximum revision number to pull.
++
+Formerly, this option controlled how many revisions to pull,
+due to SVN memory leaks. (These have been worked around.)
+
+-R <repack_each_revs>::
+ Specify how often the git repository should be repacked.
++
+The default value is 1000. git-svnimport imports in chunks of this
+many revisions, repacking the git repository after each chunk. To
+disable this behavior, specify a value larger than the number of
+revisions to import.
+
+-P <path_from_trunk>::
+ Partial import of the SVN tree.
++
+By default, the whole tree on the SVN trunk (/trunk) is imported.
+'-P my/proj' will import starting only from '/trunk/my/proj'.
+This option is useful when you want to import one project from an
+SVN repository which hosts multiple projects under the same trunk,
+for instance (hypothetical layout):
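++
+------
+	git-svnimport -P my/proj svn://svn.example.com/svn
+------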
+
+-v::
+ Verbosity: let 'svnimport' report what it is doing.
+
+-d::
+ Use direct HTTP requests if possible. The "<path>" argument is used
+ only for retrieving the SVN logs; the path to the contents is
+ included in the SVN log.
+
+-D::
+ Use direct HTTP requests if possible. The "<path>" argument is used
+ for retrieving the logs, as well as for the contents.
++
+There's no safe way to automatically find out which of these options to
+use, so you need to try both. Usually, the one that's wrong will die
+with a 40x error pretty quickly.
+
+<SVN_repository_URL>::
+ The URL of the SVN module you want to import. For local
+ repositories, use "file:///absolute/path".
++
+If you're using the "-d" or "-D" option, this is the URL of the SVN
+repository itself; it usually ends in "/svn".
+
+<path>::
+ The path to the module you want to check out.
+
+-h::
+ Print a short usage message and exit.
+
+OUTPUT
+------
+If '-v' is specified, the script reports what it is doing.
+
+Otherwise, success is indicated the Unix way, i.e. by simply exiting with
+a zero exit status.
+
+Author
+------
+Written by Matthias Urlichs <smurf@smurf.noris.de>, with help from
+various participants of the git-list <git@vger.kernel.org>.
+
+Based on a cvs2git script by the same author.
+
+Documentation
+-------------
+Documentation by Matthias Urlichs <smurf@smurf.noris.de>.
+
+GIT
+---
+Part of the gitlink:git[7] suite
optparse.make_option("--dry-run", action="store_true"),
optparse.make_option("--direct", dest="directSubmit", action="store_true"),
optparse.make_option("--trust-me-like-a-fool", dest="trustMeLikeAFool", action="store_true"),
+ optparse.make_option("-M", dest="detectRename", action="store_true"),
]
self.description = "Submit changes from git to the perforce depot."
self.usage += " [name of git branch to submit into perforce depot]"
self.origin = ""
self.directSubmit = False
self.trustMeLikeAFool = False
+ self.detectRename = False
self.verbose = False
self.isWindows = (platform.system() == "Windows")
diff = self.diffStatus
else:
print "Applying %s" % (read_pipe("git log --max-count=1 --pretty=oneline %s" % id))
- diff = read_pipe_lines("git diff-tree -r --name-status \"%s^\" \"%s\"" % (id, id))
+ diffOpts = ("", "-M")[self.detectRename]
+ diff = read_pipe_lines("git diff-tree -r --name-status %s \"%s^\" \"%s\"" % (diffOpts, id, id))
filesToAdd = set()
filesToDelete = set()
editedFiles = set()
filesToDelete.add(path)
if path in filesToAdd:
filesToAdd.remove(path)
+ elif modifier == "R":
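+ # A rename: replicate it in Perforce with "p4 integrate", open
+ # the destination for edit, and delete the local copy so that
+ # applying the patch can recreate its contents.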
+ src, dest = line.strip().split("\t")[1:3]
+ system("p4 integrate -Dt \"%s\" \"%s\"" % (src, dest))
+ system("p4 edit \"%s\"" % (dest))
+ os.unlink(dest)
+ editedFiles.add(dest)
+ filesToDelete.add(src)
else:
die("unknown modifier %s for %s" % (modifier, path))
"and with .rej files / [w]rite the patch to a file (patch.txt) ")
if response == "s":
print "Skipping! Good luck with the next patches..."
+ for f in editedFiles:
+ system("p4 revert \"%s\"" % f)
+ for f in filesToAdd:
+ system("rm \"%s\"" % f)
return
elif response == "a":
os.system(applyPatchCmd)
#define HOST_NAME_MAX 256
#endif
+#ifndef NI_MAXSERV
+#define NI_MAXSERV 32
+#endif
+
static int log_syslog;
static int verbose;
static int reuseaddr;
static int populate_from_stdin(struct diff_filespec *s)
{
struct strbuf buf;
+ size_t size = 0;
strbuf_init(&buf, 0);
if (strbuf_read(&buf, 0, 0) < 0)
strerror(errno));
s->should_munmap = 0;
- s->data = strbuf_detach(&buf, &s->size);
+ s->data = strbuf_detach(&buf, &size);
+ s->size = size;
s->should_free = 1;
return 0;
}
*/
strbuf_init(&buf, 0);
if (convert_to_git(s->path, s->data, s->size, &buf)) {
+ size_t size = 0;
munmap(s->data, s->size);
s->should_munmap = 0;
- s->data = strbuf_detach(&buf, &s->size);
+ s->data = strbuf_detach(&buf, &size);
+ s->size = size;
s->should_free = 1;
}
}
* The value we return is 1 if we want the pair to be broken,
* or 0 if we do not.
*/
- unsigned long delta_size, base_size, src_copied, literal_added,
- src_removed;
+ unsigned long delta_size, base_size, max_size;
+ unsigned long src_copied, literal_added, src_removed;
*merge_score_p = 0; /* assume no deletion --- "do not break"
* is the default.
return 0; /* error but caught downstream */
base_size = ((src->size < dst->size) ? src->size : dst->size);
- if (base_size < MINIMUM_BREAK_SIZE)
+ max_size = ((src->size > dst->size) ? src->size : dst->size);
+ if (max_size < MINIMUM_BREAK_SIZE)
return 0; /* we do not break too small filepair */
if (diffcore_count_changes(src, dst,
* less than the minimum, after rename/copy runs.
*/
*merge_score_p = (int)(src_removed * MAX_SCORE / src->size);
+ if (*merge_score_p > break_score)
+ return 1;
/* Extent of damage, which counts both inserts and
* deletes.
*/
delta_size = src_removed + literal_added;
- if (delta_size * MAX_SCORE / base_size < break_score)
+ if (delta_size * MAX_SCORE / max_size < break_score)
return 0;
/* If you removed a lot without adding new material, that is
return retval;
}
+static int no_wildcard(const char *string)
+{
+ return string[strcspn(string, "*?[{")] == '\0';
+}
+
void add_exclude(const char *string, const char *base,
int baselen, struct exclude_list *which)
{
struct exclude *x = xmalloc(sizeof (*x));
+ x->to_exclude = 1;
+ if (*string == '!') {
+ x->to_exclude = 0;
+ string++;
+ }
x->pattern = string;
+ x->patternlen = strlen(string);
x->base = base;
x->baselen = baselen;
+ x->flags = 0;
+ if (!strchr(string, '/'))
+ x->flags |= EXC_FLAG_NODIR;
+ if (no_wildcard(string))
+ x->flags |= EXC_FLAG_NOWILDCARD;
+ if (*string == '*' && no_wildcard(string+1))
+ x->flags |= EXC_FLAG_ENDSWITH;
if (which->nr == which->alloc) {
which->alloc = alloc_nr(which->alloc);
which->excludes = xrealloc(which->excludes,
* Return 1 for exclude, 0 for include and -1 for undecided.
*/
static int excluded_1(const char *pathname,
- int pathlen,
+ int pathlen, const char *basename,
struct exclude_list *el)
{
int i;
for (i = el->nr - 1; 0 <= i; i--) {
struct exclude *x = el->excludes[i];
const char *exclude = x->pattern;
- int to_exclude = 1;
+ int to_exclude = x->to_exclude;
- if (*exclude == '!') {
- to_exclude = 0;
- exclude++;
- }
-
- if (!strchr(exclude, '/')) {
+ if (x->flags & EXC_FLAG_NODIR) {
/* match basename */
- const char *basename = strrchr(pathname, '/');
- basename = (basename) ? basename+1 : pathname;
- if (fnmatch(exclude, basename, 0) == 0)
- return to_exclude;
+ if (x->flags & EXC_FLAG_NOWILDCARD) {
+ if (!strcmp(exclude, basename))
+ return to_exclude;
+ } else if (x->flags & EXC_FLAG_ENDSWITH) {
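+ /* "*literal" pattern: does the path end with the literal part? */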
+ if (x->patternlen - 1 <= pathlen &&
+ !strcmp(exclude + 1, pathname + pathlen - x->patternlen + 1))
+ return to_exclude;
+ } else {
+ if (fnmatch(exclude, basename, 0) == 0)
+ return to_exclude;
+ }
}
else {
/* match with FNM_PATHNAME:
strncmp(pathname, x->base, baselen))
continue;
- if (fnmatch(exclude, pathname+baselen,
- FNM_PATHNAME) == 0)
- return to_exclude;
+ if (x->flags & EXC_FLAG_NOWILDCARD) {
+ if (!strcmp(exclude, pathname + baselen))
+ return to_exclude;
+ } else {
+ if (fnmatch(exclude, pathname+baselen,
+ FNM_PATHNAME) == 0)
+ return to_exclude;
+ }
}
}
}
{
int pathlen = strlen(pathname);
int st;
+ const char *basename = strrchr(pathname, '/');
+ basename = (basename) ? basename+1 : pathname;
for (st = EXC_CMDL; st <= EXC_FILE; st++) {
- switch (excluded_1(pathname, pathlen, &dir->exclude_list[st])) {
+ switch (excluded_1(pathname, pathlen, basename, &dir->exclude_list[st])) {
case 0:
return 0;
case 1:
return 0;
}
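+/*
+ * Not every filesystem fills in d_type; readdir() may report
+ * DT_UNKNOWN (AFS is one example).  Fall back to lstat() to
+ * classify the entry in that case.
+ */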
+static int get_dtype(struct dirent *de, const char *path)
+{
+ int dtype = DTYPE(de);
+ struct stat st;
+
+ if (dtype != DT_UNKNOWN)
+ return dtype;
+ if (lstat(path, &st))
+ return dtype;
+ if (S_ISREG(st.st_mode))
+ return DT_REG;
+ if (S_ISDIR(st.st_mode))
+ return DT_DIR;
+ if (S_ISLNK(st.st_mode))
+ return DT_LNK;
+ return dtype;
+}
+
/*
* Read a directory tree. We currently ignore anything but
* directories, regular files and symlinks. That's because git
exclude_stk = push_exclude_per_directory(dir, base, baselen);
while ((de = readdir(fdir)) != NULL) {
- int len;
+ int len, dtype;
int exclude;
if ((de->d_name[0] == '.') &&
if (exclude && dir->collect_ignored
&& in_pathspec(fullname, baselen + len, simplify))
dir_add_ignored(dir, fullname, baselen + len);
- if (exclude != dir->show_ignored) {
- if (!dir->show_ignored || DTYPE(de) != DT_DIR) {
+
+ /*
+ * Excluded? If we don't explicitly want to show
+ * ignored files, ignore it
+ */
+ if (exclude && !dir->show_ignored)
+ continue;
+
+ dtype = get_dtype(de, fullname);
+
+ /*
+ * Do we want to see just the ignored files?
+ * We still need to recurse into directories,
+ * even if we don't ignore them, since the
+ * directory may contain files that we do..
+ */
+ if (!exclude && dir->show_ignored) {
+ if (dtype != DT_DIR)
continue;
- }
}
- switch (DTYPE(de)) {
- struct stat st;
+ switch (dtype) {
default:
continue;
- case DT_UNKNOWN:
- if (lstat(fullname, &st))
- continue;
- if (S_ISREG(st.st_mode) || S_ISLNK(st.st_mode))
- break;
- if (!S_ISDIR(st.st_mode))
- continue;
- /* fallthrough */
case DT_DIR:
memcpy(fullname + baselen + len, "/", 2);
len++;
char buffer[PATH_MAX];
return get_relative_cwd(buffer, sizeof(buffer), dir) != NULL;
}
+
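+/*
+ * Recursively remove the directory named in 'path'.  When 'only_empty'
+ * is set, no regular file is deleted and the call succeeds only if the
+ * tree is (recursively) empty.  'path' is used as scratch space but is
+ * restored to its original contents before returning.
+ */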
+int remove_dir_recursively(struct strbuf *path, int only_empty)
+{
+ DIR *dir = opendir(path->buf);
+ struct dirent *e;
+ int ret = 0, original_len = path->len, len;
+
+ if (!dir)
+ return -1;
+ if (path->buf[original_len - 1] != '/')
+ strbuf_addch(path, '/');
+
+ len = path->len;
+ while ((e = readdir(dir)) != NULL) {
+ struct stat st;
+ if ((e->d_name[0] == '.') &&
+ ((e->d_name[1] == 0) ||
+ ((e->d_name[1] == '.') && e->d_name[2] == 0)))
+ continue; /* "." and ".." */
+
+ strbuf_setlen(path, len);
+ strbuf_addstr(path, e->d_name);
+ if (lstat(path->buf, &st))
+ ; /* fall thru */
+ else if (S_ISDIR(st.st_mode)) {
+ if (!remove_dir_recursively(path, only_empty))
+ continue; /* happy */
+ } else if (!only_empty && !unlink(path->buf))
+ continue; /* happy, too */
+
+ /* path too long, stat fails, or non-directory still exists */
+ ret = -1;
+ break;
+ }
+ closedir(dir);
+
+ strbuf_setlen(path, original_len);
+ if (!ret)
+ ret = rmdir(path->buf);
+ return ret;
+}
char name[FLEX_ARRAY]; /* more */
};
+#define EXC_FLAG_NODIR 1
+#define EXC_FLAG_NOWILDCARD 2
+#define EXC_FLAG_ENDSWITH 4
+
struct exclude_list {
int nr;
int alloc;
struct exclude {
const char *pattern;
+ int patternlen;
const char *base;
int baselen;
+ int to_exclude;
+ int flags;
} **excludes;
};
extern char *get_relative_cwd(char *buffer, int size, const char *dir);
extern int is_inside_dir(const char *dir);
+extern int remove_dir_recursively(struct strbuf *path, int only_empty);
+
#endif
*/
strbuf_init(&buf, 0);
if (convert_to_working_tree(ce->name, new, size, &buf)) {
+ size_t newsize = 0;
free(new);
- new = strbuf_detach(&buf, &size);
+ new = strbuf_detach(&buf, &newsize);
+ size = newsize;
}
if (to_tempfile) {
char *term = xstrdup(command_buf.buf + 5 + 2);
size_t term_len = command_buf.len - 5 - 2;
+ strbuf_detach(&command_buf, NULL);
for (;;) {
if (strbuf_getline(&command_buf, stdin, '\n') == EOF)
die("EOF in data (terminator '%s' not found)", term);
} else if (oe) {
if (oe->type != OBJ_BLOB)
die("Not a blob (actually a %s): %s",
- command_buf.buf, typename(oe->type));
+ typename(oe->type), command_buf.buf);
} else {
enum object_type type = sha1_object_info(sha1, NULL);
if (type < 0)
+++ /dev/null
-#include "cache.h"
-#include "refs.h"
-#include "pkt-line.h"
-#include "commit.h"
-#include "tag.h"
-#include "exec_cmd.h"
-#include "pack.h"
-#include "sideband.h"
-
-static int keep_pack;
-static int transfer_unpack_limit = -1;
-static int fetch_unpack_limit = -1;
-static int unpack_limit = 100;
-static int quiet;
-static int verbose;
-static int fetch_all;
-static int depth;
-static int no_progress;
-static const char fetch_pack_usage[] =
-"git-fetch-pack [--all] [--quiet|-q] [--keep|-k] [--thin] [--upload-pack=<git-upload-pack>] [--depth=<n>] [--no-progress] [-v] [<host>:]<directory> [<refs>...]";
-static const char *uploadpack = "git-upload-pack";
-
-#define COMPLETE (1U << 0)
-#define COMMON (1U << 1)
-#define COMMON_REF (1U << 2)
-#define SEEN (1U << 3)
-#define POPPED (1U << 4)
-
-/*
- * After sending this many "have"s if we do not get any new ACK , we
- * give up traversing our history.
- */
-#define MAX_IN_VAIN 256
-
-static struct commit_list *rev_list;
-static int non_common_revs, multi_ack, use_thin_pack, use_sideband;
-
-static void rev_list_push(struct commit *commit, int mark)
-{
- if (!(commit->object.flags & mark)) {
- commit->object.flags |= mark;
-
- if (!(commit->object.parsed))
- parse_commit(commit);
-
- insert_by_date(commit, &rev_list);
-
- if (!(commit->object.flags & COMMON))
- non_common_revs++;
- }
-}
-
-static int rev_list_insert_ref(const char *path, const unsigned char *sha1, int flag, void *cb_data)
-{
- struct object *o = deref_tag(parse_object(sha1), path, 0);
-
- if (o && o->type == OBJ_COMMIT)
- rev_list_push((struct commit *)o, SEEN);
-
- return 0;
-}
-
-/*
- This function marks a rev and its ancestors as common.
- In some cases, it is desirable to mark only the ancestors (for example
- when only the server does not yet know that they are common).
-*/
-
-static void mark_common(struct commit *commit,
- int ancestors_only, int dont_parse)
-{
- if (commit != NULL && !(commit->object.flags & COMMON)) {
- struct object *o = (struct object *)commit;
-
- if (!ancestors_only)
- o->flags |= COMMON;
-
- if (!(o->flags & SEEN))
- rev_list_push(commit, SEEN);
- else {
- struct commit_list *parents;
-
- if (!ancestors_only && !(o->flags & POPPED))
- non_common_revs--;
- if (!o->parsed && !dont_parse)
- parse_commit(commit);
-
- for (parents = commit->parents;
- parents;
- parents = parents->next)
- mark_common(parents->item, 0, dont_parse);
- }
- }
-}
-
-/*
- Get the next rev to send, ignoring the common.
-*/
-
-static const unsigned char* get_rev(void)
-{
- struct commit *commit = NULL;
-
- while (commit == NULL) {
- unsigned int mark;
- struct commit_list* parents;
-
- if (rev_list == NULL || non_common_revs == 0)
- return NULL;
-
- commit = rev_list->item;
- if (!(commit->object.parsed))
- parse_commit(commit);
- commit->object.flags |= POPPED;
- if (!(commit->object.flags & COMMON))
- non_common_revs--;
-
- parents = commit->parents;
-
- if (commit->object.flags & COMMON) {
- /* do not send "have", and ignore ancestors */
- commit = NULL;
- mark = COMMON | SEEN;
- } else if (commit->object.flags & COMMON_REF)
- /* send "have", and ignore ancestors */
- mark = COMMON | SEEN;
- else
- /* send "have", also for its ancestors */
- mark = SEEN;
-
- while (parents) {
- if (!(parents->item->object.flags & SEEN))
- rev_list_push(parents->item, mark);
- if (mark & COMMON)
- mark_common(parents->item, 1, 0);
- parents = parents->next;
- }
-
- rev_list = rev_list->next;
- }
-
- return commit->object.sha1;
-}
-
-static int find_common(int fd[2], unsigned char *result_sha1,
- struct ref *refs)
-{
- int fetching;
- int count = 0, flushes = 0, retval;
- const unsigned char *sha1;
- unsigned in_vain = 0;
- int got_continue = 0;
-
- for_each_ref(rev_list_insert_ref, NULL);
-
- fetching = 0;
- for ( ; refs ; refs = refs->next) {
- unsigned char *remote = refs->old_sha1;
- struct object *o;
-
- /*
- * If that object is complete (i.e. it is an ancestor of a
- * local ref), we tell them we have it but do not have to
- * tell them about its ancestors, which they already know
- * about.
- *
- * We use lookup_object here because we are only
- * interested in the case we *know* the object is
- * reachable and we have already scanned it.
- */
- if (((o = lookup_object(remote)) != NULL) &&
- (o->flags & COMPLETE)) {
- continue;
- }
-
- if (!fetching)
- packet_write(fd[1], "want %s%s%s%s%s%s%s\n",
- sha1_to_hex(remote),
- (multi_ack ? " multi_ack" : ""),
- (use_sideband == 2 ? " side-band-64k" : ""),
- (use_sideband == 1 ? " side-band" : ""),
- (use_thin_pack ? " thin-pack" : ""),
- (no_progress ? " no-progress" : ""),
- " ofs-delta");
- else
- packet_write(fd[1], "want %s\n", sha1_to_hex(remote));
- fetching++;
- }
- if (is_repository_shallow())
- write_shallow_commits(fd[1], 1);
- if (depth > 0)
- packet_write(fd[1], "deepen %d", depth);
- packet_flush(fd[1]);
- if (!fetching)
- return 1;
-
- if (depth > 0) {
- char line[1024];
- unsigned char sha1[20];
- int len;
-
- while ((len = packet_read_line(fd[0], line, sizeof(line)))) {
- if (!prefixcmp(line, "shallow ")) {
- if (get_sha1_hex(line + 8, sha1))
- die("invalid shallow line: %s", line);
- register_shallow(sha1);
- continue;
- }
- if (!prefixcmp(line, "unshallow ")) {
- if (get_sha1_hex(line + 10, sha1))
- die("invalid unshallow line: %s", line);
- if (!lookup_object(sha1))
- die("object not found: %s", line);
- /* make sure that it is parsed as shallow */
- parse_object(sha1);
- if (unregister_shallow(sha1))
- die("no shallow found: %s", line);
- continue;
- }
- die("expected shallow/unshallow, got %s", line);
- }
- }
-
- flushes = 0;
- retval = -1;
- while ((sha1 = get_rev())) {
- packet_write(fd[1], "have %s\n", sha1_to_hex(sha1));
- if (verbose)
- fprintf(stderr, "have %s\n", sha1_to_hex(sha1));
- in_vain++;
- if (!(31 & ++count)) {
- int ack;
-
- packet_flush(fd[1]);
- flushes++;
-
- /*
- * We keep one window "ahead" of the other side, and
- * will wait for an ACK only on the next one
- */
- if (count == 32)
- continue;
-
- do {
- ack = get_ack(fd[0], result_sha1);
- if (verbose && ack)
- fprintf(stderr, "got ack %d %s\n", ack,
- sha1_to_hex(result_sha1));
- if (ack == 1) {
- flushes = 0;
- multi_ack = 0;
- retval = 0;
- goto done;
- } else if (ack == 2) {
- struct commit *commit =
- lookup_commit(result_sha1);
- mark_common(commit, 0, 1);
- retval = 0;
- in_vain = 0;
- got_continue = 1;
- }
- } while (ack);
- flushes--;
- if (got_continue && MAX_IN_VAIN < in_vain) {
- if (verbose)
- fprintf(stderr, "giving up\n");
- break; /* give up */
- }
- }
- }
-done:
- packet_write(fd[1], "done\n");
- if (verbose)
- fprintf(stderr, "done\n");
- if (retval != 0) {
- multi_ack = 0;
- flushes++;
- }
- while (flushes || multi_ack) {
- int ack = get_ack(fd[0], result_sha1);
- if (ack) {
- if (verbose)
- fprintf(stderr, "got ack (%d) %s\n", ack,
- sha1_to_hex(result_sha1));
- if (ack == 1)
- return 0;
- multi_ack = 1;
- continue;
- }
- flushes--;
- }
- return retval;
-}
-
-static struct commit_list *complete;
-
-static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
-{
- struct object *o = parse_object(sha1);
-
- while (o && o->type == OBJ_TAG) {
- struct tag *t = (struct tag *) o;
- if (!t->tagged)
- break; /* broken repository */
- o->flags |= COMPLETE;
- o = parse_object(t->tagged->sha1);
- }
- if (o && o->type == OBJ_COMMIT) {
- struct commit *commit = (struct commit *)o;
- commit->object.flags |= COMPLETE;
- insert_by_date(commit, &complete);
- }
- return 0;
-}
-
-static void mark_recent_complete_commits(unsigned long cutoff)
-{
- while (complete && cutoff <= complete->item->date) {
- if (verbose)
- fprintf(stderr, "Marking %s as complete\n",
- sha1_to_hex(complete->item->object.sha1));
- pop_most_recent_commit(&complete, COMPLETE);
- }
-}
-
-static void filter_refs(struct ref **refs, int nr_match, char **match)
-{
- struct ref **return_refs;
- struct ref *newlist = NULL;
- struct ref **newtail = &newlist;
- struct ref *ref, *next;
- struct ref *fastarray[32];
-
- if (nr_match && !fetch_all) {
- if (ARRAY_SIZE(fastarray) < nr_match)
- return_refs = xcalloc(nr_match, sizeof(struct ref *));
- else {
- return_refs = fastarray;
- memset(return_refs, 0, sizeof(struct ref *) * nr_match);
- }
- }
- else
- return_refs = NULL;
-
- for (ref = *refs; ref; ref = next) {
- next = ref->next;
- if (!memcmp(ref->name, "refs/", 5) &&
- check_ref_format(ref->name + 5))
- ; /* trash */
- else if (fetch_all &&
- (!depth || prefixcmp(ref->name, "refs/tags/") )) {
- *newtail = ref;
- ref->next = NULL;
- newtail = &ref->next;
- continue;
- }
- else {
- int order = path_match(ref->name, nr_match, match);
- if (order) {
- return_refs[order-1] = ref;
- continue; /* we will link it later */
- }
- }
- free(ref);
- }
-
- if (!fetch_all) {
- int i;
- for (i = 0; i < nr_match; i++) {
- ref = return_refs[i];
- if (ref) {
- *newtail = ref;
- ref->next = NULL;
- newtail = &ref->next;
- }
- }
- if (return_refs != fastarray)
- free(return_refs);
- }
- *refs = newlist;
-}
-
-static int everything_local(struct ref **refs, int nr_match, char **match)
-{
- struct ref *ref;
- int retval;
- unsigned long cutoff = 0;
-
- track_object_refs = 0;
- save_commit_buffer = 0;
-
- for (ref = *refs; ref; ref = ref->next) {
- struct object *o;
-
- o = parse_object(ref->old_sha1);
- if (!o)
- continue;
-
- /* We already have it -- which may mean that we were
- * in sync with the other side at some time after
- * that (it is OK if we guess wrong here).
- */
- if (o->type == OBJ_COMMIT) {
- struct commit *commit = (struct commit *)o;
- if (!cutoff || cutoff < commit->date)
- cutoff = commit->date;
- }
- }
-
- if (!depth) {
- for_each_ref(mark_complete, NULL);
- if (cutoff)
- mark_recent_complete_commits(cutoff);
- }
-
- /*
- * Mark all complete remote refs as common refs.
- * Don't mark them common yet; the server has to be told so first.
- */
- for (ref = *refs; ref; ref = ref->next) {
- struct object *o = deref_tag(lookup_object(ref->old_sha1),
- NULL, 0);
-
- if (!o || o->type != OBJ_COMMIT || !(o->flags & COMPLETE))
- continue;
-
- if (!(o->flags & SEEN)) {
- rev_list_push((struct commit *)o, COMMON_REF | SEEN);
-
- mark_common((struct commit *)o, 1, 1);
- }
- }
-
- filter_refs(refs, nr_match, match);
-
- for (retval = 1, ref = *refs; ref ; ref = ref->next) {
- const unsigned char *remote = ref->old_sha1;
- unsigned char local[20];
- struct object *o;
-
- o = lookup_object(remote);
- if (!o || !(o->flags & COMPLETE)) {
- retval = 0;
- if (!verbose)
- continue;
- fprintf(stderr,
- "want %s (%s)\n", sha1_to_hex(remote),
- ref->name);
- continue;
- }
-
- hashcpy(ref->new_sha1, local);
- if (!verbose)
- continue;
- fprintf(stderr,
- "already have %s (%s)\n", sha1_to_hex(remote),
- ref->name);
- }
- return retval;
-}
-
-static pid_t setup_sideband(int fd[2], int xd[2])
-{
- pid_t side_pid;
-
- if (!use_sideband) {
- fd[0] = xd[0];
- fd[1] = xd[1];
- return 0;
- }
- /* xd[] is talking with upload-pack; subprocess reads from
- * xd[0], spits out band#2 to stderr, and feeds us band#1
- * through our fd[0].
- */
- if (pipe(fd) < 0)
- die("fetch-pack: unable to set up pipe");
- side_pid = fork();
- if (side_pid < 0)
- die("fetch-pack: unable to fork off sideband demultiplexer");
- if (!side_pid) {
- /* subprocess */
- close(fd[0]);
- if (xd[0] != xd[1])
- close(xd[1]);
- if (recv_sideband("fetch-pack", xd[0], fd[1], 2))
- exit(1);
- exit(0);
- }
- close(xd[0]);
- close(fd[1]);
- fd[1] = xd[1];
- return side_pid;
-}
-
-static int get_pack(int xd[2])
-{
- int status;
- pid_t pid, side_pid;
- int fd[2];
- const char *argv[20];
- char keep_arg[256];
- char hdr_arg[256];
- const char **av;
- int do_keep = keep_pack;
-
- side_pid = setup_sideband(fd, xd);
-
- av = argv;
- *hdr_arg = 0;
- if (unpack_limit) {
- struct pack_header header;
-
- if (read_pack_header(fd[0], &header))
- die("protocol error: bad pack header");
- snprintf(hdr_arg, sizeof(hdr_arg), "--pack_header=%u,%u",
- ntohl(header.hdr_version), ntohl(header.hdr_entries));
- if (ntohl(header.hdr_entries) < unpack_limit)
- do_keep = 0;
- else
- do_keep = 1;
- }
-
- if (do_keep) {
- *av++ = "index-pack";
- *av++ = "--stdin";
- if (!quiet && !no_progress)
- *av++ = "-v";
- if (use_thin_pack)
- *av++ = "--fix-thin";
- if (keep_pack > 1 || unpack_limit) {
- int s = sprintf(keep_arg,
- "--keep=fetch-pack %d on ", getpid());
- if (gethostname(keep_arg + s, sizeof(keep_arg) - s))
- strcpy(keep_arg + s, "localhost");
- *av++ = keep_arg;
- }
- }
- else {
- *av++ = "unpack-objects";
- if (quiet)
- *av++ = "-q";
- }
- if (*hdr_arg)
- *av++ = hdr_arg;
- *av++ = NULL;
-
- pid = fork();
- if (pid < 0)
- die("fetch-pack: unable to fork off %s", argv[0]);
- if (!pid) {
- dup2(fd[0], 0);
- close(fd[0]);
- close(fd[1]);
- execv_git_cmd(argv);
- die("%s exec failed", argv[0]);
- }
- close(fd[0]);
- close(fd[1]);
- while (waitpid(pid, &status, 0) < 0) {
- if (errno != EINTR)
- die("waiting for %s: %s", argv[0], strerror(errno));
- }
- if (WIFEXITED(status)) {
- int code = WEXITSTATUS(status);
- if (code)
- die("%s died with error code %d", argv[0], code);
- return 0;
- }
- if (WIFSIGNALED(status)) {
- int sig = WTERMSIG(status);
- die("%s died of signal %d", argv[0], sig);
- }
- die("%s died of unnatural causes %d", argv[0], status);
-}
-
-static int fetch_pack(int fd[2], int nr_match, char **match)
-{
- struct ref *ref;
- unsigned char sha1[20];
-
- get_remote_heads(fd[0], &ref, 0, NULL, 0);
- if (is_repository_shallow() && !server_supports("shallow"))
- die("Server does not support shallow clients");
- if (server_supports("multi_ack")) {
- if (verbose)
- fprintf(stderr, "Server supports multi_ack\n");
- multi_ack = 1;
- }
- if (server_supports("side-band-64k")) {
- if (verbose)
- fprintf(stderr, "Server supports side-band-64k\n");
- use_sideband = 2;
- }
- else if (server_supports("side-band")) {
- if (verbose)
- fprintf(stderr, "Server supports side-band\n");
- use_sideband = 1;
- }
- if (!ref) {
- packet_flush(fd[1]);
- die("no matching remote head");
- }
- if (everything_local(&ref, nr_match, match)) {
- packet_flush(fd[1]);
- goto all_done;
- }
- if (find_common(fd, sha1, ref) < 0)
- if (keep_pack != 1)
- /* When cloning, it is not unusual to have
- * no common commit.
- */
- fprintf(stderr, "warning: no common commits\n");
-
- if (get_pack(fd))
- die("git-fetch-pack: fetch failed.");
-
- all_done:
- while (ref) {
- printf("%s %s\n",
- sha1_to_hex(ref->old_sha1), ref->name);
- ref = ref->next;
- }
- return 0;
-}
-
-static int remove_duplicates(int nr_heads, char **heads)
-{
- int src, dst;
-
- for (src = dst = 0; src < nr_heads; src++) {
- /* If heads[src] is different from any of
- * heads[0..dst], push it in.
- */
- int i;
- for (i = 0; i < dst; i++) {
- if (!strcmp(heads[i], heads[src]))
- break;
- }
- if (i < dst)
- continue;
- if (src != dst)
- heads[dst] = heads[src];
- dst++;
- }
- heads[dst] = 0;
- return dst;
-}
-
-static int fetch_pack_config(const char *var, const char *value)
-{
- if (strcmp(var, "fetch.unpacklimit") == 0) {
- fetch_unpack_limit = git_config_int(var, value);
- return 0;
- }
-
- if (strcmp(var, "transfer.unpacklimit") == 0) {
- transfer_unpack_limit = git_config_int(var, value);
- return 0;
- }
-
- return git_default_config(var, value);
-}
-
-static struct lock_file lock;
-
-int main(int argc, char **argv)
-{
- int i, ret, nr_heads;
- char *dest = NULL, **heads;
- int fd[2];
- pid_t pid;
- struct stat st;
-
- setup_git_directory();
- git_config(fetch_pack_config);
-
- if (0 <= transfer_unpack_limit)
- unpack_limit = transfer_unpack_limit;
- else if (0 <= fetch_unpack_limit)
- unpack_limit = fetch_unpack_limit;
-
- nr_heads = 0;
- heads = NULL;
- for (i = 1; i < argc; i++) {
- char *arg = argv[i];
-
- if (*arg == '-') {
- if (!prefixcmp(arg, "--upload-pack=")) {
- uploadpack = arg + 14;
- continue;
- }
- if (!prefixcmp(arg, "--exec=")) {
- uploadpack = arg + 7;
- continue;
- }
- if (!strcmp("--quiet", arg) || !strcmp("-q", arg)) {
- quiet = 1;
- continue;
- }
- if (!strcmp("--keep", arg) || !strcmp("-k", arg)) {
- keep_pack++;
- unpack_limit = 0;
- continue;
- }
- if (!strcmp("--thin", arg)) {
- use_thin_pack = 1;
- continue;
- }
- if (!strcmp("--all", arg)) {
- fetch_all = 1;
- continue;
- }
- if (!strcmp("-v", arg)) {
- verbose = 1;
- continue;
- }
- if (!prefixcmp(arg, "--depth=")) {
- depth = strtol(arg + 8, NULL, 0);
- if (stat(git_path("shallow"), &st))
- st.st_mtime = 0;
- continue;
- }
- if (!strcmp("--no-progress", arg)) {
- no_progress = 1;
- continue;
- }
- usage(fetch_pack_usage);
- }
- dest = arg;
- heads = argv + i + 1;
- nr_heads = argc - i - 1;
- break;
- }
- if (!dest)
- usage(fetch_pack_usage);
- pid = git_connect(fd, dest, uploadpack, verbose ? CONNECT_VERBOSE : 0);
- if (pid < 0)
- return 1;
- if (heads && nr_heads)
- nr_heads = remove_duplicates(nr_heads, heads);
- ret = fetch_pack(fd, nr_heads, heads);
- close(fd[0]);
- close(fd[1]);
- ret |= finish_connect(pid);
-
- if (!ret && nr_heads) {
- /* If the heads to pull were given, we should have
- * consumed all of them by matching the remote.
- * Otherwise, 'git-fetch remote no-such-ref' would
- * silently succeed without issuing an error.
- */
- for (i = 0; i < nr_heads; i++)
- if (heads[i] && heads[i][0]) {
- error("no such remote ref %s", heads[i]);
- ret = 1;
- }
- }
-
- if (!ret && depth > 0) {
- struct cache_time mtime;
- char *shallow = git_path("shallow");
- int fd;
-
- mtime.sec = st.st_mtime;
-#ifdef USE_NSEC
- mtime.usec = st.st_mtim.usec;
-#endif
- if (stat(shallow, &st)) {
- if (mtime.sec)
- die("shallow file was removed during fetch");
- } else if (st.st_mtime != mtime.sec
-#ifdef USE_NSEC
- || st.st_mtim.usec != mtime.usec
-#endif
- )
- die("shallow file was changed during fetch");
-
- fd = hold_lock_file_for_update(&lock, shallow, 1);
- if (!write_shallow_commits(fd, 0)) {
- unlink(shallow);
- rollback_lock_file(&lock);
- } else {
- close(fd);
- commit_lock_file(&lock);
- }
- }
-
- return !!ret;
-}
--- /dev/null
+#ifndef FETCH_PACK_H
+#define FETCH_PACK_H
+
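+/* Options controlling a fetch-pack conversation with the remote side. */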
+struct fetch_pack_args
+{
+ const char *uploadpack;
+ int unpacklimit;
+ int depth;
+ unsigned quiet:1,
+ keep_pack:1,
+ lock_pack:1,
+ use_thin_pack:1,
+ fetch_all:1,
+ verbose:1,
+ no_progress:1;
+};
+
+struct ref *fetch_pack(struct fetch_pack_args *args,
+ const char *dest,
+ int nr_heads,
+ char **heads,
+ char **pack_lockfile);
+
+#endif
+++ /dev/null
-#include "cache.h"
-#include "fetch.h"
-#include "commit.h"
-#include "tree.h"
-#include "tree-walk.h"
-#include "tag.h"
-#include "blob.h"
-#include "refs.h"
-
-int get_tree = 0;
-int get_history = 0;
-int get_all = 0;
-int get_verbosely = 0;
-int get_recover = 0;
-static unsigned char current_commit_sha1[20];
-
-void pull_say(const char *fmt, const char *hex)
-{
- if (get_verbosely)
- fprintf(stderr, fmt, hex);
-}
-
-static void report_missing(const struct object *obj)
-{
- char missing_hex[41];
- strcpy(missing_hex, sha1_to_hex(obj->sha1));;
- fprintf(stderr, "Cannot obtain needed %s %s\n",
- obj->type ? typename(obj->type): "object", missing_hex);
- if (!is_null_sha1(current_commit_sha1))
- fprintf(stderr, "while processing commit %s.\n",
- sha1_to_hex(current_commit_sha1));
-}
-
-static int process(struct object *obj);
-
-static int process_tree(struct tree *tree)
-{
- struct tree_desc desc;
- struct name_entry entry;
-
- if (parse_tree(tree))
- return -1;
-
- init_tree_desc(&desc, tree->buffer, tree->size);
- while (tree_entry(&desc, &entry)) {
- struct object *obj = NULL;
-
- /* submodule commits are not stored in the superproject */
- if (S_ISGITLINK(entry.mode))
- continue;
- if (S_ISDIR(entry.mode)) {
- struct tree *tree = lookup_tree(entry.sha1);
- if (tree)
- obj = &tree->object;
- }
- else {
- struct blob *blob = lookup_blob(entry.sha1);
- if (blob)
- obj = &blob->object;
- }
- if (!obj || process(obj))
- return -1;
- }
- free(tree->buffer);
- tree->buffer = NULL;
- tree->size = 0;
- return 0;
-}
-
-#define COMPLETE (1U << 0)
-#define SEEN (1U << 1)
-#define TO_SCAN (1U << 2)
-
-static struct commit_list *complete = NULL;
-
-static int process_commit(struct commit *commit)
-{
- if (parse_commit(commit))
- return -1;
-
- while (complete && complete->item->date >= commit->date) {
- pop_most_recent_commit(&complete, COMPLETE);
- }
-
- if (commit->object.flags & COMPLETE)
- return 0;
-
- hashcpy(current_commit_sha1, commit->object.sha1);
-
- pull_say("walk %s\n", sha1_to_hex(commit->object.sha1));
-
- if (get_tree) {
- if (process(&commit->tree->object))
- return -1;
- if (!get_all)
- get_tree = 0;
- }
- if (get_history) {
- struct commit_list *parents = commit->parents;
- for (; parents; parents = parents->next) {
- if (process(&parents->item->object))
- return -1;
- }
- }
- return 0;
-}
-
-static int process_tag(struct tag *tag)
-{
- if (parse_tag(tag))
- return -1;
- return process(tag->tagged);
-}
-
-static struct object_list *process_queue = NULL;
-static struct object_list **process_queue_end = &process_queue;
-
-static int process_object(struct object *obj)
-{
- if (obj->type == OBJ_COMMIT) {
- if (process_commit((struct commit *)obj))
- return -1;
- return 0;
- }
- if (obj->type == OBJ_TREE) {
- if (process_tree((struct tree *)obj))
- return -1;
- return 0;
- }
- if (obj->type == OBJ_BLOB) {
- return 0;
- }
- if (obj->type == OBJ_TAG) {
- if (process_tag((struct tag *)obj))
- return -1;
- return 0;
- }
- return error("Unable to determine requirements "
- "of type %s for %s",
- typename(obj->type), sha1_to_hex(obj->sha1));
-}
-
-static int process(struct object *obj)
-{
- if (obj->flags & SEEN)
- return 0;
- obj->flags |= SEEN;
-
- if (has_sha1_file(obj->sha1)) {
- /* We already have it, so we should scan it now. */
- obj->flags |= TO_SCAN;
- }
- else {
- if (obj->flags & COMPLETE)
- return 0;
- prefetch(obj->sha1);
- }
-
- object_list_insert(obj, process_queue_end);
- process_queue_end = &(*process_queue_end)->next;
- return 0;
-}
-
-static int loop(void)
-{
- struct object_list *elem;
-
- while (process_queue) {
- struct object *obj = process_queue->item;
- elem = process_queue;
- process_queue = elem->next;
- free(elem);
- if (!process_queue)
- process_queue_end = &process_queue;
-
- /* If we are not scanning this object, we placed it in
- * the queue because we needed to fetch it first.
- */
- if (! (obj->flags & TO_SCAN)) {
- if (fetch(obj->sha1)) {
- report_missing(obj);
- return -1;
- }
- }
- if (!obj->type)
- parse_object(obj->sha1);
- if (process_object(obj))
- return -1;
- }
- return 0;
-}
-
-static int interpret_target(char *target, unsigned char *sha1)
-{
- if (!get_sha1_hex(target, sha1))
- return 0;
- if (!check_ref_format(target)) {
- if (!fetch_ref(target, sha1)) {
- return 0;
- }
- }
- return -1;
-}
-
-static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
-{
- struct commit *commit = lookup_commit_reference_gently(sha1, 1);
- if (commit) {
- commit->object.flags |= COMPLETE;
- insert_by_date(commit, &complete);
- }
- return 0;
-}
-
-int pull_targets_stdin(char ***target, const char ***write_ref)
-{
- int targets = 0, targets_alloc = 0;
- struct strbuf buf;
- *target = NULL; *write_ref = NULL;
- strbuf_init(&buf, 0);
- while (1) {
- char *rf_one = NULL;
- char *tg_one;
-
- if (strbuf_getline(&buf, stdin, '\n') == EOF)
- break;
- tg_one = buf.buf;
- rf_one = strchr(tg_one, '\t');
- if (rf_one)
- *rf_one++ = 0;
-
- if (targets >= targets_alloc) {
- targets_alloc = targets_alloc ? targets_alloc * 2 : 64;
- *target = xrealloc(*target, targets_alloc * sizeof(**target));
- *write_ref = xrealloc(*write_ref, targets_alloc * sizeof(**write_ref));
- }
- (*target)[targets] = xstrdup(tg_one);
- (*write_ref)[targets] = rf_one ? xstrdup(rf_one) : NULL;
- targets++;
- }
- strbuf_release(&buf);
- return targets;
-}
-
-void pull_targets_free(int targets, char **target, const char **write_ref)
-{
- while (targets--) {
- free(target[targets]);
- if (write_ref && write_ref[targets])
- free((char *) write_ref[targets]);
- }
-}
-
-int pull(int targets, char **target, const char **write_ref,
- const char *write_ref_log_details)
-{
- struct ref_lock **lock = xcalloc(targets, sizeof(struct ref_lock *));
- unsigned char *sha1 = xmalloc(targets * 20);
- char *msg;
- int ret;
- int i;
-
- save_commit_buffer = 0;
- track_object_refs = 0;
-
- for (i = 0; i < targets; i++) {
- if (!write_ref || !write_ref[i])
- continue;
-
- lock[i] = lock_ref_sha1(write_ref[i], NULL);
- if (!lock[i]) {
- error("Can't lock ref %s", write_ref[i]);
- goto unlock_and_fail;
- }
- }
-
- if (!get_recover)
- for_each_ref(mark_complete, NULL);
-
- for (i = 0; i < targets; i++) {
- if (interpret_target(target[i], &sha1[20 * i])) {
- error("Could not interpret %s as something to pull", target[i]);
- goto unlock_and_fail;
- }
- if (process(lookup_unknown_object(&sha1[20 * i])))
- goto unlock_and_fail;
- }
-
- if (loop())
- goto unlock_and_fail;
-
- if (write_ref_log_details) {
- msg = xmalloc(strlen(write_ref_log_details) + 12);
- sprintf(msg, "fetch from %s", write_ref_log_details);
- } else {
- msg = NULL;
- }
- for (i = 0; i < targets; i++) {
- if (!write_ref || !write_ref[i])
- continue;
- ret = write_ref_sha1(lock[i], &sha1[20 * i], msg ? msg : "fetch (unknown)");
- lock[i] = NULL;
- if (ret)
- goto unlock_and_fail;
- }
- free(msg);
-
- return 0;
-
-
-unlock_and_fail:
- for (i = 0; i < targets; i++)
- if (lock[i])
- unlock_ref(lock[i]);
- return -1;
-}
+++ /dev/null
-#ifndef PULL_H
-#define PULL_H
-
-/*
- * Fetch object given SHA1 from the remote, and store it locally under
- * GIT_OBJECT_DIRECTORY. Return 0 on success, -1 on failure. To be
- * provided by the particular implementation.
- */
-extern int fetch(unsigned char *sha1);
-
-/*
- * Fetch the specified object and store it locally; fetch() will be
- * called later to determine success. To be provided by the particular
- * implementation.
- */
-extern void prefetch(unsigned char *sha1);
-
-/*
- * Fetch ref (relative to $GIT_DIR/refs) from the remote, and store
- * the 20-byte SHA1 in sha1. Return 0 on success, -1 on failure. To
- * be provided by the particular implementation.
- */
-extern int fetch_ref(char *ref, unsigned char *sha1);
-
-/* Set to fetch the target tree. */
-extern int get_tree;
-
-/* Set to fetch the commit history. */
-extern int get_history;
-
-/* Set to fetch the trees in the commit history. */
-extern int get_all;
-
-/* Set to be verbose */
-extern int get_verbosely;
-
-/* Set to check on all reachable objects. */
-extern int get_recover;
-
-/* Report what we got under get_verbosely */
-extern void pull_say(const char *, const char *);
-
-/* Load pull targets from stdin */
-extern int pull_targets_stdin(char ***target, const char ***write_ref);
-
-/* Free up loaded targets */
-extern void pull_targets_free(int targets, char **target, const char **write_ref);
-
-/* If write_ref is set, the ref filename to write the target value to. */
-/* If write_ref_log_details is set, additional text will appear in the ref log. */
-extern int pull(int targets, char **target, const char **write_ref,
- const char *write_ref_log_details);
-
-#endif /* PULL_H */
) 2>/dev/null
}
-if [ -n "$GIT_SSL_NO_VERIFY" ]; then
+if [ -n "$GIT_SSL_NO_VERIFY" -o \
+ "`git config --bool http.sslVerify`" = false ]; then
curl_extra_args="-k"
fi
extern int gitsetenv(const char *, const char *, int);
#endif
+#ifdef NO_MKDTEMP
+#define mkdtemp gitmkdtemp
+extern char *gitmkdtemp(char *);
+#endif
+
#ifdef NO_UNSETENV
#define unsetenv gitunsetenv
extern void gitunsetenv(const char *);
print "Patch applied successfully. Adding new files and directories to CVS\n";
my $dirtypatch = 0;
+
+#
+# We have to add the directories in order, otherwise we will have
+# problems when we try to add the sub-directory of a directory we
+# have not added yet.
+#
+# Luckily this is easy to deal with by sorting the directories and
+# dealing with the shortest ones first.
+#
+@dirs = sort { length $a <=> length $b} @dirs;
+
foreach my $d (@dirs) {
if (system(@cvs,'add',$d)) {
$dirtypatch = 1;
+++ /dev/null
-#!/bin/sh
-#
-
-USAGE='<fetch-options> <repository> <refspec>...'
-SUBDIRECTORY_OK=Yes
-. git-sh-setup
-set_reflog_action "fetch $*"
-cd_to_toplevel ;# probably unnecessary...
-
-. git-parse-remote
-_x40='[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
-_x40="$_x40$_x40$_x40$_x40$_x40$_x40$_x40$_x40"
-
-LF='
-'
-IFS="$LF"
-
-no_tags=
-tags=
-append=
-force=
-verbose=
-update_head_ok=
-exec=
-keep=
-shallow_depth=
-no_progress=
-test -t 1 || no_progress=--no-progress
-quiet=
-while test $# != 0
-do
- case "$1" in
- -a|--a|--ap|--app|--appe|--appen|--append)
- append=t
- ;;
- --upl|--uplo|--uploa|--upload|--upload-|--upload-p|\
- --upload-pa|--upload-pac|--upload-pack)
- shift
- exec="--upload-pack=$1"
- ;;
- --upl=*|--uplo=*|--uploa=*|--upload=*|\
- --upload-=*|--upload-p=*|--upload-pa=*|--upload-pac=*|--upload-pack=*)
- exec=--upload-pack=$(expr "z$1" : 'z-[^=]*=\(.*\)')
- shift
- ;;
- -f|--f|--fo|--for|--forc|--force)
- force=t
- ;;
- -t|--t|--ta|--tag|--tags)
- tags=t
- ;;
- -n|--n|--no|--no-|--no-t|--no-ta|--no-tag|--no-tags)
- no_tags=t
- ;;
- -u|--u|--up|--upd|--upda|--updat|--update|--update-|--update-h|\
- --update-he|--update-hea|--update-head|--update-head-|\
- --update-head-o|--update-head-ok)
- update_head_ok=t
- ;;
- -q|--q|--qu|--qui|--quie|--quiet)
- quiet=--quiet
- ;;
- -v|--verbose)
- verbose="$verbose"Yes
- ;;
- -k|--k|--ke|--kee|--keep)
- keep='-k -k'
- ;;
- --depth=*)
- shallow_depth="--depth=`expr "z$1" : 'z-[^=]*=\(.*\)'`"
- ;;
- --depth)
- shift
- shallow_depth="--depth=$1"
- ;;
- -*)
- usage
- ;;
- *)
- break
- ;;
- esac
- shift
-done
-
-case "$#" in
-0)
- origin=$(get_default_remote)
- test -n "$(get_remote_url ${origin})" ||
- die "Where do you want to fetch from today?"
- set x $origin ; shift ;;
-esac
-
-if test -z "$exec"
-then
- # No command line override and we have configuration for the remote.
- exec="--upload-pack=$(get_uploadpack $1)"
-fi
-
-remote_nick="$1"
-remote=$(get_remote_url "$@")
-refs=
-rref=
-rsync_slurped_objects=
-
-if test "" = "$append"
-then
- : >"$GIT_DIR/FETCH_HEAD"
-fi
-
-# Global that is reused later
-ls_remote_result=$(git ls-remote $exec "$remote") ||
- die "Cannot get the repository state from $remote"
-
-append_fetch_head () {
- flags=
- test -n "$verbose" && flags="$flags$LF-v"
- test -n "$force$single_force" && flags="$flags$LF-f"
- GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
- git fetch--tool $flags append-fetch-head "$@"
-}
-
-# updating the current HEAD with git-fetch in a bare
-# repository is always fine.
-if test -z "$update_head_ok" && test $(is_bare_repository) = false
-then
- orig_head=$(git rev-parse --verify HEAD 2>/dev/null)
-fi
-
-# Allow --notags from remote.$1.tagopt
-case "$tags$no_tags" in
-'')
- case "$(git config --get "remote.$1.tagopt")" in
- --no-tags)
- no_tags=t ;;
- esac
-esac
-
-# If --tags (and later --heads or --all) is specified, then we are
-# not talking about defaults stored in Pull: line of remotes or
-# branches file, and just fetch those and refspecs explicitly given.
-# Otherwise we do what we always did.
-
-reflist=$(get_remote_refs_for_fetch "$@")
-if test "$tags"
-then
- taglist=`IFS=' ' &&
- echo "$ls_remote_result" |
- git show-ref --exclude-existing=refs/tags/ |
- while read sha1 name
- do
- echo ".${name}:${name}"
- done` || exit
- if test "$#" -gt 1
- then
- # remote URL plus explicit refspecs; we need to merge them.
- reflist="$reflist$LF$taglist"
- else
- # No explicit refspecs; fetch tags only.
- reflist=$taglist
- fi
-fi
-
-fetch_all_at_once () {
-
- eval=$(echo "$1" | git fetch--tool parse-reflist "-")
- eval "$eval"
-
- ( : subshell because we muck with IFS
- IFS=" $LF"
- (
- if test "$remote" = . ; then
- git show-ref $rref || echo failed "$remote"
- elif test -f "$remote" ; then
- test -n "$shallow_depth" &&
- die "shallow clone with bundle is not supported"
- git bundle unbundle "$remote" $rref ||
- echo failed "$remote"
- else
- if test -d "$remote" &&
-
- # The remote might be our alternate. With
- # this optimization we will bypass fetch-pack
- # altogether, which means we cannot be doing
- # the shallow stuff at all.
- test ! -f "$GIT_DIR/shallow" &&
- test -z "$shallow_depth" &&
-
- # See if all of what we are going to fetch are
- # connected to our repository's tips, in which
- # case we do not have to do any fetch.
- theirs=$(echo "$ls_remote_result" | \
- git fetch--tool -s pick-rref "$rref" "-") &&
-
- # This will barf when $theirs reach an object that
- # we do not have in our repository. Otherwise,
- # we already have everything the fetch would bring in.
- git rev-list --objects $theirs --not --all \
- >/dev/null 2>/dev/null
- then
- echo "$ls_remote_result" | \
- git fetch--tool pick-rref "$rref" "-"
- else
- flags=
- case $verbose in
- YesYes*)
- flags="-v"
- ;;
- esac
- git-fetch-pack --thin $exec $keep $shallow_depth \
- $quiet $no_progress $flags "$remote" $rref ||
- echo failed "$remote"
- fi
- fi
- ) |
- (
- flags=
- test -n "$verbose" && flags="$flags -v"
- test -n "$force" && flags="$flags -f"
- GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
- git fetch--tool $flags native-store \
- "$remote" "$remote_nick" "$refs"
- )
- ) || exit
-
-}
-
-fetch_per_ref () {
- reflist="$1"
- refs=
- rref=
-
- for ref in $reflist
- do
- refs="$refs$LF$ref"
-
- # These are relative path from $GIT_DIR, typically starting at refs/
- # but may be HEAD
- if expr "z$ref" : 'z\.' >/dev/null
- then
- not_for_merge=t
- ref=$(expr "z$ref" : 'z\.\(.*\)')
- else
- not_for_merge=
- fi
- if expr "z$ref" : 'z+' >/dev/null
- then
- single_force=t
- ref=$(expr "z$ref" : 'z+\(.*\)')
- else
- single_force=
- fi
- remote_name=$(expr "z$ref" : 'z\([^:]*\):')
- local_name=$(expr "z$ref" : 'z[^:]*:\(.*\)')
-
- rref="$rref$LF$remote_name"
-
- # There are transports that can fetch only one head at a time...
- case "$remote" in
- http://* | https://* | ftp://*)
- test -n "$shallow_depth" &&
- die "shallow clone with http not supported"
- proto=`expr "$remote" : '\([^:]*\):'`
- if [ -n "$GIT_SSL_NO_VERIFY" ]; then
- curl_extra_args="-k"
- fi
- if [ -n "$GIT_CURL_FTP_NO_EPSV" -o \
- "`git config --bool http.noEPSV`" = true ]; then
- noepsv_opt="--disable-epsv"
- fi
-
- # Find $remote_name from ls-remote output.
- head=$(echo "$ls_remote_result" | \
- git fetch--tool -s pick-rref "$remote_name" "-")
- expr "z$head" : "z$_x40\$" >/dev/null ||
- die "No such ref $remote_name at $remote"
- echo >&2 "Fetching $remote_name from $remote using $proto"
- case "$quiet" in '') v=-v ;; *) v= ;; esac
- git-http-fetch $v -a "$head" "$remote" || exit
- ;;
- rsync://*)
- test -n "$shallow_depth" &&
- die "shallow clone with rsync not supported"
- TMP_HEAD="$GIT_DIR/TMP_HEAD"
- rsync -L -q "$remote/$remote_name" "$TMP_HEAD" || exit 1
- head=$(git rev-parse --verify TMP_HEAD)
- rm -f "$TMP_HEAD"
- case "$quiet" in '') v=-v ;; *) v= ;; esac
- test "$rsync_slurped_objects" || {
- rsync -a $v --ignore-existing --exclude info \
- "$remote/objects/" "$GIT_OBJECT_DIRECTORY/" || exit
-
- # Look at objects/info/alternates for rsync -- http will
- # support it natively and git native ones will do it on
- # the remote end. Not having that file is not a crime.
- rsync -q "$remote/objects/info/alternates" \
- "$GIT_DIR/TMP_ALT" 2>/dev/null ||
- rm -f "$GIT_DIR/TMP_ALT"
- if test -f "$GIT_DIR/TMP_ALT"
- then
- resolve_alternates "$remote" <"$GIT_DIR/TMP_ALT" |
- while read alt
- do
- case "$alt" in 'bad alternate: '*) die "$alt";; esac
- echo >&2 "Getting alternate: $alt"
- rsync -av --ignore-existing --exclude info \
- "$alt" "$GIT_OBJECT_DIRECTORY/" || exit
- done
- rm -f "$GIT_DIR/TMP_ALT"
- fi
- rsync_slurped_objects=t
- }
- ;;
- esac
-
- append_fetch_head "$head" "$remote" \
- "$remote_name" "$remote_nick" "$local_name" "$not_for_merge" || exit
-
- done
-
-}
-
-fetch_main () {
- case "$remote" in
- http://* | https://* | ftp://* | rsync://* )
- fetch_per_ref "$@"
- ;;
- *)
- fetch_all_at_once "$@"
- ;;
- esac
-}
-
-fetch_main "$reflist" || exit
-
-# automated tag following
-case "$no_tags$tags" in
-'')
- case "$reflist" in
- *:refs/*)
- # effective only when we are following remote branch
- # using local tracking branch.
- taglist=$(IFS=' ' &&
- echo "$ls_remote_result" |
- git show-ref --exclude-existing=refs/tags/ |
- while read sha1 name
- do
- git cat-file -t "$sha1" >/dev/null 2>&1 || continue
- echo >&2 "Auto-following $name"
- echo ".${name}:${name}"
- done)
- esac
- case "$taglist" in
- '') ;;
- ?*)
- # do not deepen a shallow tree when following tags
- shallow_depth=
- fetch_main "$taglist" || exit ;;
- esac
-esac
-
-# If the original head was empty (i.e. no "master" yet), or
-# if we were told not to worry, we do not have to check.
-case "$orig_head" in
-'')
- ;;
-?*)
- curr_head=$(git rev-parse --verify HEAD 2>/dev/null)
- if test "$curr_head" != "$orig_head"
- then
- git update-ref \
- -m "$GIT_REFLOG_ACTION: Undoing incorrectly fetched HEAD." \
- HEAD "$orig_head"
- die "Cannot fetch into the current branch."
- fi
- ;;
-esac
global env _search_exe _search_path
if {$_search_path eq {}} {
- if {[is_Cygwin]} {
+ if {[is_Cygwin] && [regexp {^(/|\.:)} $env(PATH)]} {
set _search_path [split [exec cygpath \
--windows \
--path \
set _git [_which git]
if {$_git eq {}} {
catch {wm withdraw .}
- error_popup "Cannot find git in PATH."
+ tk_messageBox \
+ -icon error \
+ -type ok \
+ -title [mc "git-gui: fatal error"] \
+ -message [mc "Cannot find git in PATH."]
exit 1
}
regsub {\.[0-9]+\.g[0-9a-f]+$} $_git_version {} _git_version
regsub {\.rc[0-9]+$} $_git_version {} _git_version
regsub {\.GIT$} $_git_version {} _git_version
+regsub {\.[a-zA-Z]+\.[0-9]+$} $_git_version {} _git_version
if {![regexp {^[1-9]+(\.[0-9]+)+$} $_git_version]} {
catch {wm withdraw .}
}
}
+if {[is_Cygwin]} {
+ set is_git_info_link {}
+ set is_git_info_exclude {}
+ proc have_info_exclude {} {
+ global is_git_info_link is_git_info_exclude
+
+ if {$is_git_info_link eq {}} {
+ set is_git_info_link [file isfile [gitdir info.lnk]]
+ }
+
+ if {$is_git_info_link} {
+ if {$is_git_info_exclude eq {}} {
+ if {[catch {exec test -f [gitdir info exclude]}]} {
+ set is_git_info_exclude 0
+ } else {
+ set is_git_info_exclude 1
+ }
+ }
+ return $is_git_info_exclude
+ } else {
+ return [file readable [gitdir info exclude]]
+ }
+ }
+} else {
+ proc have_info_exclude {} {
+ return [file readable [gitdir info exclude]]
+ }
+}
+
proc rescan_stage2 {fd after} {
global rescan_active buf_rdi buf_rdf buf_rlo
}
set ls_others [list --exclude-per-directory=.gitignore]
- set info_exclude [gitdir info exclude]
- if {[file readable $info_exclude]} {
- lappend ls_others "--exclude-from=$info_exclude"
+ if {[have_info_exclude]} {
+ lappend ls_others "--exclude-from=[gitdir info exclude]"
}
set user_exclude [get_config core.excludesfile]
if {$user_exclude ne {} && [file readable $user_exclude]} {
}
proc ui_status {msg} {
- $::main_status show $msg
+ global main_status
+ if {[info exists main_status]} {
+ $main_status show $msg
+ }
}
proc ui_ready {{test {}}} {
- $::main_status show {Ready.} $test
+ global main_status
+ if {[info exists main_status]} {
+ $main_status show [mc "Ready."] $test
+ }
}
proc escape_path {path} {
if {! [file exists $exe]} {
error_popup "Unable to start gitk:\n\n$exe does not exist"
} else {
+ global env
+
+ if {[info exists env(GIT_DIR)]} {
+ set old_GIT_DIR $env(GIT_DIR)
+ } else {
+ set old_GIT_DIR {}
+ }
+
+ set pwd [pwd]
+ cd [file dirname [gitdir]]
+ set env(GIT_DIR) [file tail [gitdir]]
+
eval exec $cmd $revs &
+
+ if {$old_GIT_DIR eq {}} {
+ unset env(GIT_DIR)
+ } else {
+ set env(GIT_DIR) $old_GIT_DIR
+ }
+ cd $pwd
+
ui_status $::starting_gitk_msg
after 10000 {
ui_ready $starting_gitk_msg
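
The hunk above brackets the gitk invocation with a save-and-restore of
GIT_DIR and the working directory, so the child process sees the right
repository while git-gui itself is left undisturbed. The same pattern,
sketched as a Python context manager (illustrative only; the repository
path in the usage comment is an assumption):

    import os
    from contextlib import contextmanager

    @contextmanager
    def temp_env_and_cwd(name, value, workdir):
        # Temporarily set one environment variable and chdir, then
        # restore both, mirroring the Tcl code around "eval exec".
        old = os.environ.get(name)
        oldcwd = os.getcwd()
        os.environ[name] = value
        os.chdir(workdir)
        try:
            yield
        finally:
            os.chdir(oldcwd)
            if old is None:
                del os.environ[name]
            else:
                os.environ[name] = old

    # Hypothetical usage:
    # with temp_env_and_cwd("GIT_DIR", ".git", "/path/to/worktree"):
    #     os.system("gitk &")
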
set font [lindex $option 1]
if {[catch {
foreach {cn cv} $repo_config(gui.$name) {
- font configure $font $cn $cv
+ font configure $font $cn $cv -weight normal
}
} err]} {
error_popup "Invalid font specified in gui.$name:\n\n$err"
global repo_config
gets $fd_wt tree_id
- if {$tree_id eq {} || [catch {close $fd_wt} err]} {
+ if {[catch {close $fd_wt} err]} {
error_popup "write-tree failed:\n\n$err"
ui_status {Commit failed.}
unlock_index
} else {
$w.m.t delete $console_cr end
$w.m.t insert end "\n"
- $w.m.t insert end [string range $buf $c $cr]
+ $w.m.t insert end [string range $buf $c [expr {$cr - 1}]]
set c $cr
incr c
}
set prior [string range $meter 0 $r]
set meter [string range $meter [expr {$r + 1}] end]
- if {[regexp "\\((\\d+)/(\\d+)\\)\\s+done\r\$" $prior _j a b]} {
+ set p "\\((\\d+)/(\\d+)\\)"
+ if {[regexp ":\\s*\\d+% $p\(?:, done.\\s*\n|\\s*\r)\$" $prior _j a b]} {
+ update $this $a $b
+ } elseif {[regexp "$p\\s+done\r\$" $prior _j a b]} {
update $this $a $b
}
}
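
The rewritten matcher accepts both the old "(n/m) done" trailer and
progress lines of the form "label:  NN% (n/m)". A rough Python
rendering of the two regular expressions (the sample lines are made
up):

    import re

    # "$p" above: a counter of the form "(n/m)".
    p = r"\((\d+)/(\d+)\)"
    # Newer style: "label:  NN% (n/m)" ending in ", done.\n" or "\r".
    new_style = re.compile(r":\s*\d+% " + p + r"(?:, done.\s*\n|\s*\r)$")
    # Older style: "(n/m) done\r".
    old_style = re.compile(p + r"\s+done\r$")

    for line in ["Compressing objects:  42% (123/292)\r",
                 "(123/292) done\r"]:
        m = new_style.search(line) or old_style.search(line)
        if m:
            print("progress:", m.group(1), "of", m.group(2))
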
case "$peek_repo" in
http://* | https://* | ftp://* )
- if [ -n "$GIT_SSL_NO_VERIFY" ]; then
- curl_extra_args="-k"
- fi
+ if [ -n "$GIT_SSL_NO_VERIFY" -o \
+ "`git config --bool http.sslVerify`" = false ]; then
+ curl_extra_args="-k"
+ fi
if [ -n "$GIT_CURL_FTP_NO_EPSV" -o \
"`git config --bool http.noEPSV`" = true ]; then
curl_extra_args="${curl_extra_args} --disable-epsv"
#
# Fetch one or more remote refs and merge it/them into the current HEAD.
-USAGE='[-n | --no-summary] [--no-commit] [-s strategy]... [<fetch-options>] <repo> <head>...'
+USAGE='[-n | --no-summary] [--[no-]commit] [--[no-]squash] [--[no-]ff] [-s strategy]... [<fetch-options>] <repo> <head>...'
LONG_USAGE='Fetch one or more remote refs and merge it/them into the current HEAD.'
SUBDIRECTORY_OK=Yes
. git-sh-setup
test -z "$(git ls-files -u)" ||
die "You are in the middle of a conflicted merge."
-strategy_args= no_summary= no_commit= squash=
+strategy_args= no_summary= no_commit= squash= no_ff=
while :
do
case "$1" in
;;
--no-c|--no-co|--no-com|--no-comm|--no-commi|--no-commit)
no_commit=--no-commit ;;
+ --c|--co|--com|--comm|--commi|--commit)
+ no_commit=--commit ;;
--sq|--squ|--squa|--squas|--squash)
squash=--squash ;;
+ --no-sq|--no-squ|--no-squa|--no-squas|--no-squash)
+ squash=--no-squash ;;
+ --ff)
+ no_ff=--ff ;;
+ --no-ff)
+ no_ff=--no-ff ;;
-s=*|--s=*|--st=*|--str=*|--stra=*|--strat=*|--strate=*|\
--strateg=*|--strategy=*|\
-s|--s|--st|--str|--stra|--strat|--strate|--strateg|--strategy)
fi
merge_name=$(git fmt-merge-msg <"$GIT_DIR/FETCH_HEAD") || exit
-exec git-merge $no_summary $no_commit $squash $strategy_args \
+exec git-merge $no_summary $no_commit $squash $no_ff $strategy_args \
"$merge_name" HEAD $merge_head
print "* remote $name\n";
print " URL: $info->{'URL'}\n";
for my $branchname (sort keys %$branch) {
- next if ($branch->{$branchname}{'REMOTE'} ne $name);
+ next unless (defined $branch->{$branchname}{'REMOTE'} &&
+ $branch->{$branchname}{'REMOTE'} eq $name);
my @merged = map {
s|^refs/heads/||;
$_;
"smtpserverport" => \$smtp_server_port,
"smtpuser" => \$smtp_authuser,
"smtppass" => \$smtp_authpass,
+ "to" => \@to,
"cccmd" => \$cc_cmd,
"aliasfiletype" => \$aliasfiletype,
"bcc" => \@bcclist,
unstashed_index_tree=
if test -n "$unstash_index" && test "$b_tree" != "$i_tree"
then
- git diff --binary $s^2^..$s^2 | git apply --cached
+ git diff-tree --binary $s^2^..$s^2 | git apply --cached
test $? -ne 0 &&
die 'Conflicts in index. Try without --index.'
unstashed_index_tree=$(git-write-tree) ||
git read-tree "$unstashed_index_tree"
else
a="$TMP-added" &&
- git diff --cached --name-only --diff-filter=A $c_tree >"$a" &&
+ git diff-index --cached --name-only --diff-filter=A $c_tree >"$a" &&
git read-tree --reset $c_tree &&
git update-index --add --stdin <"$a" ||
die "Cannot unstage modified files"
+++ /dev/null
-#!/usr/bin/perl -w
-
-# This tool is copyright (c) 2005, Matthias Urlichs.
-# It is released under the Gnu Public License, version 2.
-#
-# The basic idea is to pull and analyze SVN changes.
-#
-# Checking out the files is done by a single long-running SVN connection.
-#
-# The head revision is on branch "origin" by default.
-# You can change that with the '-o' option.
-
-use strict;
-use warnings;
-use Getopt::Std;
-use File::Copy;
-use File::Spec;
-use File::Temp qw(tempfile);
-use File::Path qw(mkpath);
-use File::Basename qw(basename dirname);
-use Time::Local;
-use IO::Pipe;
-use POSIX qw(strftime dup2);
-use IPC::Open2;
-use SVN::Core;
-use SVN::Ra;
-
-die "Need SVN:Core 1.2.1 or better" if $SVN::Core::VERSION lt "1.2.1";
-
-$SIG{'PIPE'}="IGNORE";
-$ENV{'TZ'}="UTC";
-
-our($opt_h,$opt_o,$opt_v,$opt_u,$opt_C,$opt_i,$opt_m,$opt_M,$opt_t,$opt_T,
- $opt_b,$opt_r,$opt_I,$opt_A,$opt_s,$opt_l,$opt_d,$opt_D,$opt_S,$opt_F,
- $opt_P,$opt_R);
-
-sub usage() {
- print STDERR <<END;
-Usage: ${\basename $0} # fetch/update GIT from SVN
- [-o branch-for-HEAD] [-h] [-v] [-l max_rev] [-R repack_each_revs]
- [-C GIT_repository] [-t tagname] [-T trunkname] [-b branchname]
- [-d|-D] [-i] [-u] [-r] [-I ignorefilename] [-s start_chg]
- [-m] [-M regex] [-A author_file] [-S] [-F] [-P project_name] [SVN_URL]
-END
- exit(1);
-}
-
-getopts("A:b:C:dDFhiI:l:mM:o:rs:t:T:SP:R:uv") or usage();
-usage if $opt_h;
-
-my $tag_name = $opt_t || "tags";
-my $trunk_name = defined $opt_T ? $opt_T : "trunk";
-my $branch_name = $opt_b || "branches";
-my $project_name = $opt_P || "";
-$project_name = "/" . $project_name if ($project_name);
-my $repack_after = $opt_R || 1000;
-my $root_pool = SVN::Pool->new_default;
-
-@ARGV == 1 or @ARGV == 2 or usage();
-
-$opt_o ||= "origin";
-$opt_s ||= 1;
-my $git_tree = $opt_C;
-$git_tree ||= ".";
-
-my $svn_url = $ARGV[0];
-my $svn_dir = $ARGV[1];
-
-our @mergerx = ();
-if ($opt_m) {
- my $branch_esc = quotemeta ($branch_name);
- my $trunk_esc = quotemeta ($trunk_name);
- @mergerx =
- (
- qr!\b(?:merg(?:ed?|ing))\b.*?\b((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
- qr!\b(?:from|of)\W+((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
- qr!\b(?:from|of)\W+(?:the )?([\w\.\-]+)[-\s]branch\b!i
- );
-}
-if ($opt_M) {
- unshift (@mergerx, qr/$opt_M/);
-}
-
-# Absolutize filename now, since we will have chdir'ed by the time we
-# get around to opening it.
-$opt_A = File::Spec->rel2abs($opt_A) if $opt_A;
-
-our %users = ();
-our $users_file = undef;
-sub read_users($) {
- $users_file = File::Spec->rel2abs(@_);
- die "Cannot open $users_file\n" unless -f $users_file;
- open(my $authors,$users_file);
- while(<$authors>) {
- chomp;
- next unless /^(\S+?)\s*=\s*(.+?)\s*<(.+)>\s*$/;
- (my $user,my $name,my $email) = ($1,$2,$3);
- $users{$user} = [$name,$email];
- }
- close($authors);
-}
-
-select(STDERR); $|=1; select(STDOUT);
-
-
-package SVNconn;
-# Basic SVN connection.
-# We're only interested in connecting and downloading, so ...
-
-use File::Spec;
-use File::Temp qw(tempfile);
-use POSIX qw(strftime dup2);
-use Fcntl qw(SEEK_SET);
-
-sub new {
- my($what,$repo) = @_;
- $what=ref($what) if ref($what);
-
- my $self = {};
- $self->{'buffer'} = "";
- bless($self,$what);
-
- $repo =~ s#/+$##;
- $self->{'fullrep'} = $repo;
- $self->conn();
-
- return $self;
-}
-
-sub conn {
- my $self = shift;
- my $repo = $self->{'fullrep'};
- my $auth = SVN::Core::auth_open ([SVN::Client::get_simple_provider,
- SVN::Client::get_ssl_server_trust_file_provider,
- SVN::Client::get_username_provider]);
- my $s = SVN::Ra->new(url => $repo, auth => $auth, pool => $root_pool);
- die "SVN connection to $repo: $!\n" unless defined $s;
- $self->{'svn'} = $s;
- $self->{'repo'} = $repo;
- $self->{'maxrev'} = $s->get_latest_revnum();
-}
-
-sub file {
- my($self,$path,$rev) = @_;
-
- my ($fh, $name) = tempfile('gitsvn.XXXXXX',
- DIR => File::Spec->tmpdir(), UNLINK => 1);
-
- print "... $rev $path ...\n" if $opt_v;
- my (undef, $properties);
- $path =~ s#^/*##;
- my $subpool = SVN::Pool::new_default_sub;
- eval { (undef, $properties)
- = $self->{'svn'}->get_file($path,$rev,$fh); };
- if($@) {
- return undef if $@ =~ /Attempted to get checksum/;
- die $@;
- }
- my $mode;
- if (exists $properties->{'svn:executable'}) {
- $mode = '100755';
- } elsif (exists $properties->{'svn:special'}) {
- my ($special_content, $filesize);
- $filesize = tell $fh;
- seek $fh, 0, SEEK_SET;
- read $fh, $special_content, $filesize;
- if ($special_content =~ s/^link //) {
- $mode = '120000';
- seek $fh, 0, SEEK_SET;
- truncate $fh, 0;
- print $fh $special_content;
- } else {
- die "unexpected svn:special file encountered";
- }
- } else {
- $mode = '100644';
- }
- close ($fh);
-
- return ($name, $mode);
-}
-
-sub ignore {
- my($self,$path,$rev) = @_;
-
- print "... $rev $path ...\n" if $opt_v;
- $path =~ s#^/*##;
- my $subpool = SVN::Pool::new_default_sub;
- my (undef,undef,$properties)
- = $self->{'svn'}->get_dir($path,$rev,undef);
- if (exists $properties->{'svn:ignore'}) {
- my ($fh, $name) = tempfile('gitsvn.XXXXXX',
- DIR => File::Spec->tmpdir(),
- UNLINK => 1);
- print $fh $properties->{'svn:ignore'};
- close($fh);
- return $name;
- } else {
- return undef;
- }
-}
-
-sub dir_list {
- my($self,$path,$rev) = @_;
- $path =~ s#^/*##;
- my $subpool = SVN::Pool::new_default_sub;
- my ($dirents,undef,$properties)
- = $self->{'svn'}->get_dir($path,$rev,undef);
- return $dirents;
-}
-
-package main;
-use URI;
-
-our $svn = $svn_url;
-$svn .= "/$svn_dir" if defined $svn_dir;
-my $svn2 = SVNconn->new($svn);
-$svn = SVNconn->new($svn);
-
-my $lwp_ua;
-if($opt_d or $opt_D) {
- $svn_url = URI->new($svn_url)->canonical;
- if($opt_D) {
- $svn_dir =~ s#/*$#/#;
- } else {
- $svn_dir = "";
- }
- if ($svn_url->scheme eq "http") {
- use LWP::UserAgent;
- $lwp_ua = LWP::UserAgent->new(keep_alive => 1, requests_redirectable => []);
- } else {
- print STDERR "Warning: not HTTP; turning off direct file access\n";
- $opt_d=0;
- }
-}
-
-sub pdate($) {
- my($d) = @_;
- $d =~ m#(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)#
- or die "Unparseable date: $d\n";
- my $y=$1; $y-=1900 if $y>1900;
- return timegm($6||0,$5,$4,$3,$2-1,$y);
-}
-
-sub getwd() {
- my $pwd = `pwd`;
- chomp $pwd;
- return $pwd;
-}
-
-
-sub get_headref($$) {
- my $name = shift;
- my $git_dir = shift;
- my $sha;
-
- if (open(C,"$git_dir/refs/heads/$name")) {
- chomp($sha = <C>);
- close(C);
- length($sha) == 40
- or die "Cannot get head id for $name ($sha): $!\n";
- }
- return $sha;
-}
-
-
--d $git_tree
- or mkdir($git_tree,0777)
- or die "Could not create $git_tree: $!";
-chdir($git_tree);
-
-my $orig_branch = "";
-my $forward_master = 0;
-my %branches;
-
-my $git_dir = $ENV{"GIT_DIR"} || ".git";
-$git_dir = getwd()."/".$git_dir unless $git_dir =~ m#^/#;
-$ENV{"GIT_DIR"} = $git_dir;
-my $orig_git_index;
-$orig_git_index = $ENV{GIT_INDEX_FILE} if exists $ENV{GIT_INDEX_FILE};
-my ($git_ih, $git_index) = tempfile('gitXXXXXX', SUFFIX => '.idx',
- DIR => File::Spec->tmpdir());
-close ($git_ih);
-$ENV{GIT_INDEX_FILE} = $git_index;
-my $maxnum = 0;
-my $last_rev = "";
-my $last_branch;
-my $current_rev = $opt_s || 1;
-unless(-d $git_dir) {
- system("git-init");
- die "Cannot init the GIT db at $git_tree: $?\n" if $?;
- system("git-read-tree");
- die "Cannot init an empty tree: $?\n" if $?;
-
- $last_branch = $opt_o;
- $orig_branch = "";
-} else {
- -f "$git_dir/refs/heads/$opt_o"
- or die "Branch '$opt_o' does not exist.\n".
- "Either use the correct '-o branch' option,\n".
- "or import to a new repository.\n";
-
- -f "$git_dir/svn2git"
- or die "'$git_dir/svn2git' does not exist.\n".
- "You need that file for incremental imports.\n";
- open(F, "git-symbolic-ref HEAD |") or
- die "Cannot run git-symbolic-ref: $!\n";
- chomp ($last_branch = <F>);
- $last_branch = basename($last_branch);
- close(F);
- unless($last_branch) {
- warn "Cannot read the last branch name: $! -- assuming 'master'\n";
- $last_branch = "master";
- }
- $orig_branch = $last_branch;
- $last_rev = get_headref($orig_branch, $git_dir);
- if (-f "$git_dir/SVN2GIT_HEAD") {
- die <<EOM;
-SVN2GIT_HEAD exists.
-Make sure your working directory corresponds to HEAD and remove SVN2GIT_HEAD.
-You may need to run
-
- git-read-tree -m -u SVN2GIT_HEAD HEAD
-EOM
- }
- system('cp', "$git_dir/HEAD", "$git_dir/SVN2GIT_HEAD");
-
- $forward_master =
- $opt_o ne 'master' && -f "$git_dir/refs/heads/master" &&
- system('cmp', '-s', "$git_dir/refs/heads/master",
- "$git_dir/refs/heads/$opt_o") == 0;
-
- # populate index
- system('git-read-tree', $last_rev);
- die "read-tree failed: $?\n" if $?;
-
- # Get the last import timestamps
- open my $B,"<", "$git_dir/svn2git";
- while(<$B>) {
- chomp;
- my($num,$branch,$ref) = split;
- $branches{$branch}{$num} = $ref;
- $branches{$branch}{"LAST"} = $ref;
- $current_rev = $num+1 if $current_rev <= $num;
- }
- close($B);
-}
--d $git_dir
- or die "Could not create git subdir ($git_dir).\n";
-
-my $default_authors = "$git_dir/svn-authors";
-if ($opt_A) {
- read_users($opt_A);
- copy($opt_A,$default_authors) or die "Copy failed: $!";
-} else {
- read_users($default_authors) if -f $default_authors;
-}
-
-open BRANCHES,">>", "$git_dir/svn2git";
-
-sub node_kind($$) {
- my ($svnpath, $revision) = @_;
- $svnpath =~ s#^/*##;
- my $subpool = SVN::Pool::new_default_sub;
- my $kind = $svn->{'svn'}->check_path($svnpath,$revision);
- return $kind;
-}
-
-sub get_file($$$) {
- my($svnpath,$rev,$path) = @_;
-
- # now get it
- my ($name,$mode);
- if($opt_d) {
- my($req,$res);
-
- # /svn/!svn/bc/2/django/trunk/django-docs/build.py
- my $url=$svn_url->clone();
- $url->path($url->path."/!svn/bc/$rev/$svn_dir$svnpath");
- print "... $path...\n" if $opt_v;
- $req = HTTP::Request->new(GET => $url);
- $res = $lwp_ua->request($req);
- if ($res->is_success) {
- my $fh;
- ($fh, $name) = tempfile('gitsvn.XXXXXX',
- DIR => File::Spec->tmpdir(), UNLINK => 1);
- print $fh $res->content;
- close($fh) or die "Could not write $name: $!\n";
- } else {
- return undef if $res->code == 301; # directory?
- die $res->status_line." at $url\n";
- }
- $mode = '0644'; # can't obtain mode via direct http request?
- } else {
- ($name,$mode) = $svn->file("$svnpath",$rev);
- return undef unless defined $name;
- }
-
- my $pid = open(my $F, '-|');
- die $! unless defined $pid;
- if (!$pid) {
- exec("git-hash-object", "-w", $name)
- or die "Cannot create object: $!\n";
- }
- my $sha = <$F>;
- chomp $sha;
- close $F;
- unlink $name;
- return [$mode, $sha, $path];
-}
-
-sub get_ignore($$$$$) {
- my($new,$old,$rev,$path,$svnpath) = @_;
-
- return unless $opt_I;
- my $name = $svn->ignore("$svnpath",$rev);
- if ($path eq '/') {
- $path = $opt_I;
- } else {
- $path = File::Spec->catfile($path,$opt_I);
- }
- if (defined $name) {
- my $pid = open(my $F, '-|');
- die $! unless defined $pid;
- if (!$pid) {
- exec("git-hash-object", "-w", $name)
- or die "Cannot create object: $!\n";
- }
- my $sha = <$F>;
- chomp $sha;
- close $F;
- unlink $name;
- push(@$new,['0644',$sha,$path]);
- } elsif (defined $old) {
- push(@$old,$path);
- }
-}
-
-sub project_path($$)
-{
- my ($path, $project) = @_;
-
- $path = "/".$path unless ($path =~ m#^\/#) ;
- return $1 if ($path =~ m#^$project\/(.*)$#);
-
- $path =~ s#\.#\\\.#g;
- $path =~ s#\+#\\\+#g;
- return "/" if ($project =~ m#^$path.*$#);
-
- return undef;
-}
-
-sub split_path($$) {
- my($rev,$path) = @_;
- my $branch;
-
- if($path =~ s#^/\Q$tag_name\E/([^/]+)/?##) {
- $branch = "/$1";
- } elsif($path =~ s#^/\Q$trunk_name\E/?##) {
- $branch = "/";
- } elsif($path =~ s#^/\Q$branch_name\E/([^/]+)/?##) {
- $branch = $1;
- } else {
- my %no_error = (
- "/" => 1,
- "/$tag_name" => 1,
- "/$branch_name" => 1
- );
- print STDERR "$rev: Unrecognized path: $path\n" unless (defined $no_error{$path});
- return ()
- }
- if ($path eq "") {
- $path = "/";
- } elsif ($project_name) {
- $path = project_path($path, $project_name);
- }
- return ($branch,$path);
-}
-
-sub branch_rev($$) {
-
- my ($srcbranch,$uptorev) = @_;
-
- my $bbranches = $branches{$srcbranch};
- my @revs = reverse sort { ($a eq 'LAST' ? 0 : $a) <=> ($b eq 'LAST' ? 0 : $b) } keys %$bbranches;
- my $therev;
- foreach my $arev(@revs) {
- next if ($arev eq 'LAST');
- if ($arev <= $uptorev) {
- $therev = $arev;
- last;
- }
- }
- return $therev;
-}
-
-sub expand_svndir($$$);
-
-sub expand_svndir($$$)
-{
- my ($svnpath, $rev, $path) = @_;
- my @list;
- get_ignore(\@list, undef, $rev, $path, $svnpath);
- my $dirents = $svn->dir_list($svnpath, $rev);
- foreach my $p(keys %$dirents) {
- my $kind = node_kind($svnpath.'/'.$p, $rev);
- if ($kind eq $SVN::Node::file) {
- my $f = get_file($svnpath.'/'.$p, $rev, $path.'/'.$p);
- push(@list, $f) if $f;
- } elsif ($kind eq $SVN::Node::dir) {
- push(@list,
- expand_svndir($svnpath.'/'.$p, $rev, $path.'/'.$p));
- }
- }
- return @list;
-}
-
-sub copy_path($$$$$$$$) {
- # Somebody copied a whole subdirectory.
- # We need to find the index entries from the old version which the
- # SVN log entry points to, and add them to the new place.
-
- my($newrev,$newbranch,$path,$oldpath,$rev,$node_kind,$new,$parents) = @_;
-
- my($srcbranch,$srcpath) = split_path($rev,$oldpath);
- unless(defined $srcbranch && defined $srcpath) {
- print "Path not found when copying from $oldpath @ $rev.\n".
- "Will try to copy from original SVN location...\n"
- if $opt_v;
- push (@$new, expand_svndir($oldpath, $rev, $path));
- return;
- }
- my $therev = branch_rev($srcbranch, $rev);
- my $gitrev = $branches{$srcbranch}{$therev};
- unless($gitrev) {
- print STDERR "$newrev:$newbranch: could not find $oldpath \@ $rev\n";
- return;
- }
- if ($srcbranch ne $newbranch) {
- push(@$parents, $branches{$srcbranch}{'LAST'});
- }
- print "$newrev:$newbranch:$path: copying from $srcbranch:$srcpath @ $rev\n" if $opt_v;
- if ($node_kind eq $SVN::Node::dir) {
- $srcpath =~ s#/*$#/#;
- }
-
- my $pid = open my $f,'-|';
- die $! unless defined $pid;
- if (!$pid) {
- exec("git-ls-tree","-r","-z",$gitrev,$srcpath)
- or die $!;
- }
- local $/ = "\0";
- while(<$f>) {
- chomp;
- my($m,$p) = split(/\t/,$_,2);
- my($mode,$type,$sha1) = split(/ /,$m);
- next if $type ne "blob";
- if ($node_kind eq $SVN::Node::dir) {
- $p = $path . substr($p,length($srcpath)-1);
- } else {
- $p = $path;
- }
- push(@$new,[$mode,$sha1,$p]);
- }
- close($f) or
- print STDERR "$newrev:$newbranch: could not list files in $oldpath \@ $rev\n";
-}
-
-sub commit {
- my($branch, $changed_paths, $revision, $author, $date, $message) = @_;
- my($committer_name,$committer_email,$dest);
- my($author_name,$author_email);
- my(@old,@new,@parents);
-
- if (not defined $author or $author eq "") {
- $committer_name = $committer_email = "unknown";
- } elsif (defined $users_file) {
- die "User $author is not listed in $users_file\n"
- unless exists $users{$author};
- ($committer_name,$committer_email) = @{$users{$author}};
- } elsif ($author =~ /^(.*?)\s+<(.*)>$/) {
- ($committer_name, $committer_email) = ($1, $2);
- } else {
- $author =~ s/^<(.*)>$/$1/;
- $committer_name = $committer_email = $author;
- }
-
- if ($opt_F && $message =~ /From:\s+(.*?)\s+<(.*)>\s*\n/) {
- ($author_name, $author_email) = ($1, $2);
- print "Author from From: $1 <$2>\n" if ($opt_v);;
- } elsif ($opt_S && $message =~ /Signed-off-by:\s+(.*?)\s+<(.*)>\s*\n/) {
- ($author_name, $author_email) = ($1, $2);
- print "Author from Signed-off-by: $1 <$2>\n" if ($opt_v);;
- } else {
- $author_name = $committer_name;
- $author_email = $committer_email;
- }
-
- $date = pdate($date);
-
- my $tag;
- my $parent;
- if($branch eq "/") { # trunk
- $parent = $opt_o;
- } elsif($branch =~ m#^/(.+)#) { # tag
- $tag = 1;
- $parent = $1;
- } else { # "normal" branch
- # nothing to do
- $parent = $branch;
- }
- $dest = $parent;
-
- my $prev = $changed_paths->{"/"};
- if($prev and $prev->[0] eq "A") {
- delete $changed_paths->{"/"};
- my $oldpath = $prev->[1];
- my $rev;
- if(defined $oldpath) {
- my $p;
- ($parent,$p) = split_path($revision,$oldpath);
- if(defined $parent) {
- if($parent eq "/") {
- $parent = $opt_o;
- } else {
- $parent =~ s#^/##; # if it's a tag
- }
- }
- } else {
- $parent = undef;
- }
- }
-
- my $rev;
- if($revision > $opt_s and defined $parent) {
- open(H,'-|',"git-rev-parse","--verify",$parent);
- $rev = <H>;
- close(H) or do {
- print STDERR "$revision: cannot find commit '$parent'!\n";
- return;
- };
- chop $rev;
- if(length($rev) != 40) {
- print STDERR "$revision: cannot find commit '$parent'!\n";
- return;
- }
- $rev = $branches{($parent eq $opt_o) ? "/" : $parent}{"LAST"};
- if($revision != $opt_s and not $rev) {
- print STDERR "$revision: do not know ancestor for '$parent'!\n";
- return;
- }
- } else {
- $rev = undef;
- }
-
-# if($prev and $prev->[0] eq "A") {
-# if(not $tag) {
-# unless(open(H,"> $git_dir/refs/heads/$branch")) {
-# print STDERR "$revision: Could not create branch $branch: $!\n";
-# $state=11;
-# next;
-# }
-# print H "$rev\n"
-# or die "Could not write branch $branch: $!";
-# close(H)
-# or die "Could not write branch $branch: $!";
-# }
-# }
- if(not defined $rev) {
- unlink($git_index);
- } elsif ($rev ne $last_rev) {
- print "Switching from $last_rev to $rev ($branch)\n" if $opt_v;
- system("git-read-tree", $rev);
- die "read-tree failed for $rev: $?\n" if $?;
- $last_rev = $rev;
- }
-
- push (@parents, $rev) if defined $rev;
-
- my $cid;
- if($tag and not %$changed_paths) {
- $cid = $rev;
- } else {
- my @paths = sort keys %$changed_paths;
- foreach my $path(@paths) {
- my $action = $changed_paths->{$path};
-
- if ($action->[0] eq "R") {
- # refer to a file/tree in an earlier commit
- push(@old,$path); # remove any old stuff
- }
- if(($action->[0] eq "A") || ($action->[0] eq "R")) {
- my $node_kind = node_kind($action->[3], $revision);
- if ($node_kind eq $SVN::Node::file) {
- my $f = get_file($action->[3],
- $revision, $path);
- if ($f) {
- push(@new,$f) if $f;
- } else {
- my $opath = $action->[3];
- print STDERR "$revision: $branch: could not fetch '$opath'\n";
- }
- } elsif ($node_kind eq $SVN::Node::dir) {
- if($action->[1]) {
- copy_path($revision, $branch,
- $path, $action->[1],
- $action->[2], $node_kind,
- \@new, \@parents);
- } else {
- get_ignore(\@new, \@old, $revision,
- $path, $action->[3]);
- }
- }
- } elsif ($action->[0] eq "D") {
- push(@old,$path);
- } elsif ($action->[0] eq "M") {
- my $node_kind = node_kind($action->[3], $revision);
- if ($node_kind eq $SVN::Node::file) {
- my $f = get_file($action->[3],
- $revision, $path);
- push(@new,$f) if $f;
- } elsif ($node_kind eq $SVN::Node::dir) {
- get_ignore(\@new, \@old, $revision,
- $path, $action->[3]);
- }
- } else {
- die "$revision: unknown action '".$action->[0]."' for $path\n";
- }
- }
-
- while(@old) {
- my @o1;
- if(@old > 55) {
- @o1 = splice(@old,0,50);
- } else {
- @o1 = @old;
- @old = ();
- }
- my $pid = open my $F, "-|";
- die "$!" unless defined $pid;
- if (!$pid) {
- exec("git-ls-files", "-z", @o1) or die $!;
- }
- @o1 = ();
- local $/ = "\0";
- while(<$F>) {
- chomp;
- push(@o1,$_);
- }
- close($F);
-
- while(@o1) {
- my @o2;
- if(@o1 > 55) {
- @o2 = splice(@o1,0,50);
- } else {
- @o2 = @o1;
- @o1 = ();
- }
- system("git-update-index","--force-remove","--",@o2);
- die "Cannot remove files: $?\n" if $?;
- }
- }
- while(@new) {
- my @n2;
- if(@new > 12) {
- @n2 = splice(@new,0,10);
- } else {
- @n2 = @new;
- @new = ();
- }
- system("git-update-index","--add",
- (map { ('--cacheinfo', @$_) } @n2));
- die "Cannot add files: $?\n" if $?;
- }
-
- my $pid = open(C,"-|");
- die "Cannot fork: $!" unless defined $pid;
- unless($pid) {
- exec("git-write-tree");
- die "Cannot exec git-write-tree: $!\n";
- }
- chomp(my $tree = <C>);
- length($tree) == 40
- or die "Cannot get tree id ($tree): $!\n";
- close(C)
- or die "Error running git-write-tree: $?\n";
- print "Tree ID $tree\n" if $opt_v;
-
- my $pr = IO::Pipe->new() or die "Cannot open pipe: $!\n";
- my $pw = IO::Pipe->new() or die "Cannot open pipe: $!\n";
- $pid = fork();
- die "Fork: $!\n" unless defined $pid;
- unless($pid) {
- $pr->writer();
- $pw->reader();
- open(OUT,">&STDOUT");
- dup2($pw->fileno(),0);
- dup2($pr->fileno(),1);
- $pr->close();
- $pw->close();
-
- my @par = ();
-
- # loose detection of merges
- # based on the commit msg
- foreach my $rx (@mergerx) {
- if ($message =~ $rx) {
- my $mparent = $1;
- if ($mparent eq 'HEAD') { $mparent = $opt_o };
- if ( -e "$git_dir/refs/heads/$mparent") {
- $mparent = get_headref($mparent, $git_dir);
- push (@parents, $mparent);
- print OUT "Merge parent branch: $mparent\n" if $opt_v;
- }
- }
- }
- my %seen_parents = ();
- my @unique_parents = grep { ! $seen_parents{$_} ++ } @parents;
- foreach my $bparent (@unique_parents) {
- push @par, '-p', $bparent;
- print OUT "Merge parent branch: $bparent\n" if $opt_v;
- }
-
- exec("env",
- "GIT_AUTHOR_NAME=$author_name",
- "GIT_AUTHOR_EMAIL=$author_email",
- "GIT_AUTHOR_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
- "GIT_COMMITTER_NAME=$committer_name",
- "GIT_COMMITTER_EMAIL=$committer_email",
- "GIT_COMMITTER_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
- "git-commit-tree", $tree,@par);
- die "Cannot exec git-commit-tree: $!\n";
- }
- $pw->writer();
- $pr->reader();
-
- $message =~ s/[\s\n]+\z//;
- $message = "r$revision: $message" if $opt_r;
-
- print $pw "$message\n"
- or die "Error writing to git-commit-tree: $!\n";
- $pw->close();
-
- print "Committed change $revision:$branch ".strftime("%Y-%m-%d %H:%M:%S",gmtime($date)).")\n" if $opt_v;
- chomp($cid = <$pr>);
- length($cid) == 40
- or die "Cannot get commit id ($cid): $!\n";
- print "Commit ID $cid\n" if $opt_v;
- $pr->close();
-
- waitpid($pid,0);
- die "Error running git-commit-tree: $?\n" if $?;
- }
-
- if (not defined $cid) {
- $cid = $branches{"/"}{"LAST"};
- }
-
- if(not defined $dest) {
- print "... no known parent\n" if $opt_v;
- } elsif(not $tag) {
- print "Writing to refs/heads/$dest\n" if $opt_v;
- open(C,">$git_dir/refs/heads/$dest") and
- print C ("$cid\n") and
- close(C)
- or die "Cannot write branch $dest for update: $!\n";
- }
-
- if ($tag) {
- $last_rev = "-" if %$changed_paths;
- # the tag was 'complex', i.e. did not refer to a "real" revision
-
- $dest =~ tr/_/\./ if $opt_u;
-
- system('git-tag', '-f', $dest, $cid) == 0
- or die "Cannot create tag $dest: $!\n";
-
- print "Created tag '$dest' on '$branch'\n" if $opt_v;
- }
- $branches{$branch}{"LAST"} = $cid;
- $branches{$branch}{$revision} = $cid;
- $last_rev = $cid;
- print BRANCHES "$revision $branch $cid\n";
- print "DONE: $revision $dest $cid\n" if $opt_v;
-}
-
-sub commit_all {
- # Recursive use of the SVN connection does not work
- local $svn = $svn2;
-
- my ($changed_paths, $revision, $author, $date, $message) = @_;
- my %p;
- while(my($path,$action) = each %$changed_paths) {
- $p{$path} = [ $action->action,$action->copyfrom_path, $action->copyfrom_rev, $path ];
- }
- $changed_paths = \%p;
-
- my %done;
- my @col;
- my $pref;
- my $branch;
-
- while(my($path,$action) = each %$changed_paths) {
- ($branch,$path) = split_path($revision,$path);
- next if not defined $branch;
- next if not defined $path;
- $done{$branch}{$path} = $action;
- }
- while(($branch,$changed_paths) = each %done) {
- commit($branch, $changed_paths, $revision, $author, $date, $message);
- }
-}
-
-$opt_l = $svn->{'maxrev'} if not defined $opt_l or $opt_l > $svn->{'maxrev'};
-
-if ($opt_l < $current_rev) {
- print "Up to date: no new revisions to fetch!\n" if $opt_v;
- unlink("$git_dir/SVN2GIT_HEAD");
- exit;
-}
-
-print "Processing from $current_rev to $opt_l ...\n" if $opt_v;
-
-my $from_rev;
-my $to_rev = $current_rev - 1;
-
-my $subpool = SVN::Pool::new_default_sub;
-while ($to_rev < $opt_l) {
- $subpool->clear;
- $from_rev = $to_rev + 1;
- $to_rev = $from_rev + $repack_after;
- $to_rev = $opt_l if $opt_l < $to_rev;
- print "Fetching from $from_rev to $to_rev ...\n" if $opt_v;
- $svn->{'svn'}->get_log("/",$from_rev,$to_rev,0,1,1,\&commit_all);
- my $pid = fork();
- die "Fork: $!\n" unless defined $pid;
- unless($pid) {
- exec("git-repack", "-d")
- or die "Cannot repack: $!\n";
- }
- waitpid($pid, 0);
-}
-
-
-unlink($git_index);
-
-if (defined $orig_git_index) {
- $ENV{GIT_INDEX_FILE} = $orig_git_index;
-} else {
- delete $ENV{GIT_INDEX_FILE};
-}
-
-# Now switch back to the branch we were in before all of this happened
-if($orig_branch) {
- print "DONE\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
- system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
- if $forward_master;
- unless ($opt_i) {
- system('git-read-tree', '-m', '-u', 'SVN2GIT_HEAD', 'HEAD');
- die "read-tree failed: $?\n" if $?;
- }
-} else {
- $orig_branch = "master";
- print "DONE; creating $orig_branch branch\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
- system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
- unless -f "$git_dir/refs/heads/master";
- system('git-update-ref', 'HEAD', "$orig_branch");
- unless ($opt_i) {
- system('git checkout');
- die "checkout failed: $?\n" if $?;
- }
-}
-unlink("$git_dir/SVN2GIT_HEAD");
-close(BRANCHES);
{ "diff-files", cmd_diff_files },
{ "diff-index", cmd_diff_index, RUN_SETUP },
{ "diff-tree", cmd_diff_tree, RUN_SETUP },
+ { "fetch", cmd_fetch, RUN_SETUP },
+ { "fetch-pack", cmd_fetch_pack, RUN_SETUP },
{ "fetch--tool", cmd_fetch__tool, RUN_SETUP },
{ "fmt-merge-msg", cmd_fmt_merge_msg, RUN_SETUP },
{ "for-each-ref", cmd_for_each_ref, RUN_SETUP },
{ "get-tar-commit-id", cmd_get_tar_commit_id },
{ "grep", cmd_grep, RUN_SETUP | USE_PAGER },
{ "help", cmd_help },
+#ifndef NO_CURL
+ { "http-fetch", cmd_http_fetch, RUN_SETUP },
+#endif
{ "init", cmd_init_db },
{ "init-db", cmd_init_db },
{ "log", cmd_log, RUN_SETUP | USE_PAGER },
/*
* Take the basename of argv[0] as the command
* name, and the dirname as the default exec_path
- * if it's an absolute path and we don't have
- * anything better.
+ * if we don't have anything better.
*/
if (slash) {
*slash++ = 0;
if (*cmd == '/')
exec_path = cmd;
+ else
+ exec_path = xstrdup(make_absolute_path(cmd));
cmd = slash;
}
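
With this change a relative invocation such as "./git" also seeds the
default exec path: the dirname of argv[0] is made absolute instead of
being used only when it already starts with "/". The logic, sketched
in Python (illustrative only):

    import os
    import sys

    def default_exec_path(argv0):
        # Split argv[0]: the dirname becomes the candidate exec path,
        # the basename becomes the command name.
        d, cmd = os.path.split(argv0)
        if not d:
            return None, cmd   # bare "git": nothing better known
        # Previously only an absolute dirname was used; now a relative
        # one (e.g. "./git") is absolutized first.
        return os.path.abspath(d), cmd

    print(default_exec_path(sys.argv[0]))
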
proc start_rev_list {view} {
global startmsecs
global commfd leftover tclencoding datemode
- global viewargs viewfiles commitidx
- global lookingforhead showlocalchanges
+ global viewargs viewfiles commitidx viewcomplete vnextroot
+ global showlocalchanges commitinterest mainheadid
+ global progressdirn progresscoords proglastnc curview
set startmsecs [clock clicks -milliseconds]
set commitidx($view) 0
+ set viewcomplete($view) 0
+ set vnextroot($view) 0
set order "--topo-order"
if {$datemode} {
set order "--date-order"
}
if {[catch {
- set fd [open [concat | git log -z --pretty=raw $order --parents \
+ set fd [open [concat | git log --no-color -z --pretty=raw $order --parents \
--boundary $viewargs($view) "--" $viewfiles($view)] r]
} err]} {
error_popup "Error executing git rev-list: $err"
}
set commfd($view) $fd
set leftover($view) {}
- set lookingforhead $showlocalchanges
+ if {$showlocalchanges} {
+ lappend commitinterest($mainheadid) {dodiffindex}
+ }
fconfigure $fd -blocking 0 -translation lf -eofchar {}
if {$tclencoding != {}} {
fconfigure $fd -encoding $tclencoding
}
filerun $fd [list getcommitlines $fd $view]
- nowbusy $view
+ nowbusy $view "Reading"
+ if {$view == $curview} {
+ set progressdirn 1
+ set progresscoords {0 0}
+ set proglastnc 0
+ }
}
proc stop_rev_list {} {
}
proc getcommits {} {
- global phase canv mainfont curview
+ global phase canv curview
set phase getcommits
initlayout
show_status "Reading commits..."
}
+# This makes a string representation of a positive integer which
+# sorts as a string in numerical order
+proc strrep {n} {
+ if {$n < 16} {
+ return [format "%x" $n]
+ } elseif {$n < 256} {
+ return [format "x%.2x" $n]
+ } elseif {$n < 65536} {
+ return [format "y%.4x" $n]
+ }
+ return [format "z%.8x" $n]
+}
+
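
The width tags make ordinary string comparison agree with numeric
comparison: every one-digit token sorts before any "x"-prefixed token,
which sorts before any "y"-prefixed token, and so on, while tokens in
the same class are fixed-width hex. A quick Python check of that
property (illustrative, not part of the patch):

    def strrep(n):
        # Same encoding as the Tcl proc above.
        if n < 16:
            return format(n, "x")
        elif n < 256:
            return "x%.2x" % n
        elif n < 65536:
            return "y%.4x" % n
        return "z%.8x" % n

    nums = [3, 15, 16, 255, 256, 65535, 65536, 10**9]
    assert [strrep(n) for n in sorted(nums)] == sorted(map(strrep, nums))
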
proc getcommitlines {fd view} {
- global commitlisted
+ global commitlisted commitinterest
global leftover commfd
- global displayorder commitidx commitrow commitdata
+ global displayorder commitidx viewcomplete commitrow commitdata
global parentlist children curview hlview
global vparentlist vdisporder vcmitlisted
+ global ordertok vnextroot idpending
set stuff [read $fd 500000]
# git log doesn't terminate the last commit with a null...
if {![eof $fd]} {
return 1
}
- global viewname
+ # Check if we have seen any ids listed as parents that haven't
+ # appeared in the list
+ foreach vid [array names idpending "$view,*"] {
+ # should only get here if git log is buggy
+ set id [lindex [split $vid ","] 1]
+ set commitrow($vid) $commitidx($view)
+ incr commitidx($view)
+ if {$view == $curview} {
+ lappend parentlist {}
+ lappend displayorder $id
+ lappend commitlisted 0
+ } else {
+ lappend vparentlist($view) {}
+ lappend vdisporder($view) $id
+ lappend vcmitlisted($view) 0
+ }
+ }
+ set viewcomplete($view) 1
+ global viewname progresscoords
unset commfd($view)
notbusy $view
+ set progresscoords {0 0}
+ adjustprogress
# set it blocking so we wait for the process to terminate
fconfigure $fd -blocking 1
if {[catch {close $fd} err]} {
exit 1
}
set id [lindex $ids 0]
+ if {![info exists ordertok($view,$id)]} {
+ set otok "o[strrep $vnextroot($view)]"
+ incr vnextroot($view)
+ set ordertok($view,$id) $otok
+ } else {
+ set otok $ordertok($view,$id)
+ unset idpending($view,$id)
+ }
if {$listed} {
set olds [lrange $ids 1 end]
- set i 0
- foreach p $olds {
- if {$i == 0 || [lsearch -exact $olds $p] >= $i} {
- lappend children($view,$p) $id
+ if {[llength $olds] == 1} {
+ set p [lindex $olds 0]
+ lappend children($view,$p) $id
+ if {![info exists ordertok($view,$p)]} {
+ set ordertok($view,$p) $ordertok($view,$id)
+ set idpending($view,$p) 1
+ }
+ } else {
+ set i 0
+ foreach p $olds {
+ if {$i == 0 || [lsearch -exact $olds $p] >= $i} {
+ lappend children($view,$p) $id
+ }
+ if {![info exists ordertok($view,$p)]} {
+ set ordertok($view,$p) "$otok[strrep $i]"
+ set idpending($view,$p) 1
+ }
+ incr i
}
- incr i
}
} else {
set olds {}
lappend vdisporder($view) $id
lappend vcmitlisted($view) $listed
}
+ if {[info exists commitinterest($id)]} {
+ foreach script $commitinterest($id) {
+ eval [string map [list "%I" $id] $script]
+ }
+ unset commitinterest($id)
+ }
set gotsome 1
}
if {$gotsome} {
run chewcommits $view
+ if {$view == $curview} {
+ # update progress bar
+ global progressdirn progresscoords proglastnc
+ set inc [expr {($commitidx($view) - $proglastnc) * 0.0002}]
+ set proglastnc $commitidx($view)
+ set l [lindex $progresscoords 0]
+ set r [lindex $progresscoords 1]
+ if {$progressdirn} {
+ set r [expr {$r + $inc}]
+ if {$r >= 1.0} {
+ set r 1.0
+ set progressdirn 0
+ }
+ if {$r > 0.2} {
+ set l [expr {$r - 0.2}]
+ }
+ } else {
+ set l [expr {$l - $inc}]
+ if {$l <= 0.0} {
+ set l 0.0
+ set progressdirn 1
+ }
+ set r [expr {$l + 0.2}]
+ }
+ set progresscoords [list $l $r]
+ adjustprogress
+ }
}
return 2
}
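
Since git log announces no total up front, the bar cannot show a true
percentage; instead the code sweeps a window 0.2 wide back and forth,
nudging it by 0.0002 per commit read and reversing at either edge. A
minimal Python sketch of the bounce (illustrative only):

    def bounce(l, r, moving_right, inc, width=0.2):
        # Advance a fixed-width window [l, r] inside [0, 1],
        # reversing direction at the edges, as the Tcl code does.
        if moving_right:
            r += inc
            if r >= 1.0:
                r = 1.0
                moving_right = False
            if r > width:
                l = r - width
        else:
            l -= inc
            if l <= 0.0:
                l = 0.0
                moving_right = True
            r = l + width
        return l, r, moving_right

    l, r, d = 0.0, 0.0, True
    for _ in range(12):
        l, r, d = bounce(l, r, d, 0.15)
        print("%.2f %.2f" % (l, r))
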
proc chewcommits {view} {
- global curview hlview commfd
+ global curview hlview viewcomplete
global selectedline pending_select
- set more 0
if {$view == $curview} {
- set allread [expr {![info exists commfd($view)]}]
- set tlimit [expr {[clock clicks -milliseconds] + 50}]
- set more [layoutmore $tlimit $allread]
- if {$allread && !$more} {
+ layoutmore
+ if {$viewcomplete($view)} {
global displayorder commitidx phase
global numcommits startmsecs
if {[info exists hlview] && $view == $hlview} {
vhighlightmore
}
- return $more
+ return 0
}
proc readcommit {id} {
}
proc updatecommits {} {
- global viewdata curview phase displayorder
+ global viewdata curview phase displayorder ordertok idpending
global children commitrow selectedline thickerline showneartags
if {$phase ne {}} {
foreach id $displayorder {
catch {unset children($n,$id)}
catch {unset commitrow($n,$id)}
+ catch {unset ordertok($n,$id)}
+ }
+ foreach vid [array names idpending "$n,*"] {
+ unset idpending($vid)
}
set curview -1
catch {unset selectedline}
proc makewindow {} {
global canv canv2 canv3 linespc charspc ctext cflist
- global textfont mainfont uifont tabstop
+ global tabstop
global findtype findtypemenu findloc findstring fstring geometry
global entries sha1entry sha1string sha1but
global diffcontextstring diffcontext
global highlight_files gdttype
global searchstring sstring
global bgcolor fgcolor bglist fglist diffcolors selectbgcolor
- global headctxmenu
+ global headctxmenu progresscanv progressitem progresscoords statusw
+ global fprogitem fprogcoord lastprogupdate progupdatepending
+ global rprogitem rprogcoord
+ global have_tk85
menu .bar
.bar add cascade -label "File" -menu .bar.file
- .bar configure -font $uifont
+ .bar configure -font uifont
menu .bar.file
.bar.file add command -label "Update" -command updatecommits
.bar.file add command -label "Reread references" -command rereadrefs
.bar.file add command -label "List references" -command showrefs
.bar.file add command -label "Quit" -command doquit
- .bar.file configure -font $uifont
+ .bar.file configure -font uifont
menu .bar.edit
.bar add cascade -label "Edit" -menu .bar.edit
.bar.edit add command -label "Preferences" -command doprefs
- .bar.edit configure -font $uifont
+ .bar.edit configure -font uifont
- menu .bar.view -font $uifont
+ menu .bar.view -font uifont
.bar add cascade -label "View" -menu .bar.view
.bar.view add command -label "New view..." -command {newview 0}
.bar.view add command -label "Edit view..." -command editview \
.bar add cascade -label "Help" -menu .bar.help
.bar.help add command -label "About gitk" -command about
.bar.help add command -label "Key bindings" -command keys
- .bar.help configure -font $uifont
+ .bar.help configure -font uifont
. configure -menu .bar
# the gui has upper and lower half, parts of a paned window.
set entries $sha1entry
set sha1but .tf.bar.sha1label
button $sha1but -text "SHA1 ID: " -state disabled -relief flat \
- -command gotocommit -width 8 -font $uifont
+ -command gotocommit -width 8 -font uifont
$sha1but conf -disabledforeground [$sha1but cget -foreground]
pack .tf.bar.sha1label -side left
- entry $sha1entry -width 40 -font $textfont -textvariable sha1string
+ entry $sha1entry -width 40 -font textfont -textvariable sha1string
trace add variable sha1string write sha1change
pack $sha1entry -side left -pady 2
-state disabled -width 26
pack .tf.bar.rightbut -side left -fill y
- button .tf.bar.findbut -text "Find" -command dofind -font $uifont
- pack .tf.bar.findbut -side left
+ # Status label and progress bar
+ set statusw .tf.bar.status
+ label $statusw -width 15 -relief sunken -font uifont
+ pack $statusw -side left -padx 5
+ set h [expr {[font metrics uifont -linespace] + 2}]
+ set progresscanv .tf.bar.progress
+ canvas $progresscanv -relief sunken -height $h -borderwidth 2
+ set progressitem [$progresscanv create rect -1 0 0 $h -fill green]
+ set fprogitem [$progresscanv create rect -1 0 0 $h -fill yellow]
+ set rprogitem [$progresscanv create rect -1 0 0 $h -fill red]
+ pack $progresscanv -side right -expand 1 -fill x
+ set progresscoords {0 0}
+ set fprogcoord 0
+ set rprogcoord 0
+ bind $progresscanv <Configure> adjustprogress
+ set lastprogupdate [clock clicks -milliseconds]
+ set progupdatepending 0
+
+ # build up the bottom bar of upper window
+ label .tf.lbar.flabel -text "Find " -font uifont
+ button .tf.lbar.fnext -text "next" -command {dofind 1 1} -font uifont
+ button .tf.lbar.fprev -text "prev" -command {dofind -1 1} -font uifont
+ label .tf.lbar.flab2 -text " commit " -font uifont
+ pack .tf.lbar.flabel .tf.lbar.fnext .tf.lbar.fprev .tf.lbar.flab2 \
+ -side left -fill y
+ set gdttype "containing:"
+ set gm [tk_optionMenu .tf.lbar.gdttype gdttype \
+ "containing:" \
+ "touching paths:" \
+ "adding/removing string:"]
+ trace add variable gdttype write gdttype_change
+ $gm conf -font uifont
+ .tf.lbar.gdttype conf -font uifont
+ pack .tf.lbar.gdttype -side left -fill y
+
set findstring {}
- set fstring .tf.bar.findstring
+ set fstring .tf.lbar.findstring
lappend entries $fstring
- entry $fstring -width 30 -font $textfont -textvariable findstring
+ entry $fstring -width 30 -font textfont -textvariable findstring
trace add variable findstring write find_change
- pack $fstring -side left -expand 1 -fill x -in .tf.bar
set findtype Exact
- set findtypemenu [tk_optionMenu .tf.bar.findtype \
+ set findtypemenu [tk_optionMenu .tf.lbar.findtype \
findtype Exact IgnCase Regexp]
- trace add variable findtype write find_change
- .tf.bar.findtype configure -font $uifont
- .tf.bar.findtype.menu configure -font $uifont
+ trace add variable findtype write findcom_change
+ .tf.lbar.findtype configure -font uifont
+ .tf.lbar.findtype.menu configure -font uifont
set findloc "All fields"
- tk_optionMenu .tf.bar.findloc findloc "All fields" Headline \
+ tk_optionMenu .tf.lbar.findloc findloc "All fields" Headline \
Comments Author Committer
trace add variable findloc write find_change
- .tf.bar.findloc configure -font $uifont
- .tf.bar.findloc.menu configure -font $uifont
- pack .tf.bar.findloc -side right
- pack .tf.bar.findtype -side right
-
- # build up the bottom bar of upper window
- label .tf.lbar.flabel -text "Highlight: Commits " \
- -font $uifont
- pack .tf.lbar.flabel -side left -fill y
- set gdttype "touching paths:"
- set gm [tk_optionMenu .tf.lbar.gdttype gdttype "touching paths:" \
- "adding/removing string:"]
- trace add variable gdttype write hfiles_change
- $gm conf -font $uifont
- .tf.lbar.gdttype conf -font $uifont
- pack .tf.lbar.gdttype -side left -fill y
- entry .tf.lbar.fent -width 25 -font $textfont \
- -textvariable highlight_files
- trace add variable highlight_files write hfiles_change
- lappend entries .tf.lbar.fent
- pack .tf.lbar.fent -side left -fill x -expand 1
- label .tf.lbar.vlabel -text " OR in view" -font $uifont
- pack .tf.lbar.vlabel -side left -fill y
- global viewhlmenu selectedhlview
- set viewhlmenu [tk_optionMenu .tf.lbar.vhl selectedhlview None]
- $viewhlmenu entryconf None -command delvhighlight
- $viewhlmenu conf -font $uifont
- .tf.lbar.vhl conf -font $uifont
- pack .tf.lbar.vhl -side left -fill y
- label .tf.lbar.rlabel -text " OR " -font $uifont
- pack .tf.lbar.rlabel -side left -fill y
- global highlight_related
- set m [tk_optionMenu .tf.lbar.relm highlight_related None \
- "Descendent" "Not descendent" "Ancestor" "Not ancestor"]
- $m conf -font $uifont
- .tf.lbar.relm conf -font $uifont
- trace add variable highlight_related write vrel_change
- pack .tf.lbar.relm -side left -fill y
+ .tf.lbar.findloc configure -font uifont
+ .tf.lbar.findloc.menu configure -font uifont
+ pack .tf.lbar.findloc -side right
+ pack .tf.lbar.findtype -side right
+ pack $fstring -side left -expand 1 -fill x
# Finish putting the upper half of the viewer together
pack .tf.lbar -in .tf -side bottom -fill x
frame .bleft.mid
button .bleft.top.search -text "Search" -command dosearch \
- -font $uifont
+ -font uifont
pack .bleft.top.search -side left -padx 5
set sstring .bleft.top.sstring
- entry $sstring -width 20 -font $textfont -textvariable searchstring
+ entry $sstring -width 20 -font textfont -textvariable searchstring
lappend entries $sstring
trace add variable searchstring write incrsearch
pack $sstring -side left -expand 1 -fill x
- radiobutton .bleft.mid.diff -text "Diff" \
+ radiobutton .bleft.mid.diff -text "Diff" -font uifont \
-command changediffdisp -variable diffelide -value {0 0}
- radiobutton .bleft.mid.old -text "Old version" \
+ radiobutton .bleft.mid.old -text "Old version" -font uifont \
-command changediffdisp -variable diffelide -value {0 1}
- radiobutton .bleft.mid.new -text "New version" \
+ radiobutton .bleft.mid.new -text "New version" -font uifont \
-command changediffdisp -variable diffelide -value {1 0}
label .bleft.mid.labeldiffcontext -text " Lines of context: " \
- -font $uifont
+ -font uifont
pack .bleft.mid.diff .bleft.mid.old .bleft.mid.new -side left
- spinbox .bleft.mid.diffcontext -width 5 -font $textfont \
+ spinbox .bleft.mid.diffcontext -width 5 -font textfont \
-from 1 -increment 1 -to 10000000 \
-validate all -validatecommand "diffcontextvalidate %P" \
-textvariable diffcontextstring
pack .bleft.mid.labeldiffcontext .bleft.mid.diffcontext -side left
set ctext .bleft.ctext
text $ctext -background $bgcolor -foreground $fgcolor \
- -tabs "[expr {$tabstop * $charspc}]" \
- -state disabled -font $textfont \
+ -state disabled -font textfont \
-yscrollcommand scrolltext -wrap none
+ if {$have_tk85} {
+ $ctext conf -tabstyle wordprocessor
+ }
scrollbar .bleft.sb -command "$ctext yview"
pack .bleft.top -side top -fill x
pack .bleft.mid -side top -fill x
lappend fglist $ctext
$ctext tag conf comment -wrap $wrapcomment
- $ctext tag conf filesep -font [concat $textfont bold] -back "#aaaaaa"
+ $ctext tag conf filesep -font textfontbold -back "#aaaaaa"
$ctext tag conf hunksep -fore [lindex $diffcolors 2]
$ctext tag conf d0 -fore [lindex $diffcolors 0]
$ctext tag conf d1 -fore [lindex $diffcolors 1]
$ctext tag conf m15 -fore "#ff70b0"
$ctext tag conf mmax -fore darkgrey
set mergemax 16
- $ctext tag conf mresult -font [concat $textfont bold]
- $ctext tag conf msep -font [concat $textfont bold]
+ $ctext tag conf mresult -font textfontbold
+ $ctext tag conf msep -font textfontbold
$ctext tag conf found -back yellow
.pwbottom add .bleft
frame .bright.mode
radiobutton .bright.mode.patch -text "Patch" \
-command reselectline -variable cmitmode -value "patch"
- .bright.mode.patch configure -font $uifont
+ .bright.mode.patch configure -font uifont
radiobutton .bright.mode.tree -text "Tree" \
-command reselectline -variable cmitmode -value "tree"
- .bright.mode.tree configure -font $uifont
+ .bright.mode.tree configure -font uifont
grid .bright.mode.patch .bright.mode.tree -sticky ew
pack .bright.mode -side top -fill x
set cflist .bright.cfiles
- set indent [font measure $mainfont "nn"]
+ set indent [font measure mainfont "nn"]
text $cflist \
-selectbackground $selectbgcolor \
-background $bgcolor -foreground $fgcolor \
- -font $mainfont \
+ -font mainfont \
-tabs [list $indent [expr {2 * $indent}]] \
-yscrollcommand ".bright.sb set" \
-cursor [. cget -cursor] \
pack $cflist -side left -fill both -expand 1
$cflist tag configure highlight \
-background [$cflist cget -selectbackground]
- $cflist tag configure bold -font [concat $mainfont bold]
+ $cflist tag configure bold -font mainfontbold
.pwbottom add .bright
.ctop add .pwbottom
} else {
bindall <ButtonRelease-4> "allcanvs yview scroll -5 units"
bindall <ButtonRelease-5> "allcanvs yview scroll 5 units"
+ if {[tk windowingsystem] eq "aqua"} {
+ bindall <MouseWheel> {
+ set delta [expr {- (%D)}]
+ allcanvs yview scroll $delta units
+ }
+ }
}
bindall <2> "canvscan mark %W %x %y"
bindall <B2-Motion> "canvscan dragto %W %x %y"
bindkey <End> sellastline
bind . <Key-Up> "selnextline -1"
bind . <Key-Down> "selnextline 1"
- bind . <Shift-Key-Up> "next_highlight -1"
- bind . <Shift-Key-Down> "next_highlight 1"
+ bind . <Shift-Key-Up> "dofind -1 0"
+ bind . <Shift-Key-Down> "dofind 1 0"
bindkey <Key-Right> "goforw"
bindkey <Key-Left> "goback"
bind . <Key-Prior> "selnextpage -1"
bindkey b "$ctext yview scroll -1 pages"
bindkey d "$ctext yview scroll 18 units"
bindkey u "$ctext yview scroll -18 units"
- bindkey / {findnext 1}
- bindkey <Key-Return> {findnext 0}
- bindkey ? findprev
+ bindkey / {dofind 1 1}
+ bindkey <Key-Return> {dofind 1 1}
+ bindkey ? {dofind -1 1}
bindkey f nextfile
bindkey <F5> updatecommits
bind . <$M1B-q> doquit
- bind . <$M1B-f> dofind
- bind . <$M1B-g> {findnext 0}
+ bind . <$M1B-f> {dofind 1 1}
+ bind . <$M1B-g> {dofind 1 0}
bind . <$M1B-r> dosearchback
bind . <$M1B-s> dosearch
bind . <$M1B-equal> {incrfont 1}
bind . <$M1B-KP_Subtract> {incrfont -1}
wm protocol . WM_DELETE_WINDOW doquit
bind . <Button-1> "click %W"
- bind $fstring <Key-Return> dofind
+ bind $fstring <Key-Return> {dofind 1 1}
bind $sha1entry <Key-Return> gotocommit
bind $sha1entry <<PasteSelection>> clearsha1
bind $cflist <1> {sel_flist %W %x %y; break}
focus .
}
+# Adjust the progress bar for a change in requested extent or canvas size
+proc adjustprogress {} {
+ global progresscanv progressitem progresscoords
+ global fprogitem fprogcoord lastprogupdate progupdatepending
+ global rprogitem rprogcoord
+
+ set w [expr {[winfo width $progresscanv] - 4}]
+ set x0 [expr {$w * [lindex $progresscoords 0]}]
+ set x1 [expr {$w * [lindex $progresscoords 1]}]
+ set h [winfo height $progresscanv]
+ $progresscanv coords $progressitem $x0 0 $x1 $h
+ $progresscanv coords $fprogitem 0 0 [expr {$w * $fprogcoord}] $h
+ $progresscanv coords $rprogitem 0 0 [expr {$w * $rprogcoord}] $h
+ set now [clock clicks -milliseconds]
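+    # limit full UI updates to one per 100ms; if one is not due yet,
+    # schedule doprogupdate to do it when the interval has elapsed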
+ if {$now >= $lastprogupdate + 100} {
+ set progupdatepending 0
+ update
+ } elseif {!$progupdatepending} {
+ set progupdatepending 1
+ after [expr {$lastprogupdate + 100 - $now}] doprogupdate
+ }
+}
+
+proc doprogupdate {} {
+ global lastprogupdate progupdatepending
+
+ if {$progupdatepending} {
+ set progupdatepending 0
+ set lastprogupdate [clock clicks -milliseconds]
+ update
+ }
+}
+
proc savestuff {w} {
- global canv canv2 canv3 ctext cflist mainfont textfont uifont tabstop
+ global canv canv2 canv3 mainfont textfont uifont tabstop
global stuffsaved findmergefiles maxgraphpct
global maxwidth showneartags showlocalchanges
global viewname viewfiles viewargs viewperm nextviewnum
- global cmitmode wrapcomment datetimeformat
+ global cmitmode wrapcomment datetimeformat limitdiffs
global colors bgcolor fgcolor diffcolors diffcontext selectbgcolor
if {$stuffsaved} return
puts $f [list set showneartags $showneartags]
puts $f [list set showlocalchanges $showlocalchanges]
puts $f [list set datetimeformat $datetimeformat]
+ puts $f [list set limitdiffs $limitdiffs]
puts $f [list set bgcolor $bgcolor]
puts $f [list set fgcolor $fgcolor]
puts $f [list set colors $colors]
Use and redistribute under the terms of the GNU General Public License} \
-justify center -aspect 400 -border 2 -bg white -relief groove
pack $w.m -side top -fill x -padx 2 -pady 2
- $w.m configure -font $uifont
+ $w.m configure -font uifont
button $w.ok -text Close -command "destroy $w" -default active
pack $w.ok -side bottom
- $w.ok configure -font $uifont
+ $w.ok configure -font uifont
bind $w <Visibility> "focus $w.ok"
bind $w <Key-Escape> "destroy $w"
bind $w <Key-Return> "destroy $w"
<$M1T-Down> Scroll commit list down one line
<$M1T-PageUp> Scroll commit list up one page
<$M1T-PageDown> Scroll commit list down one page
-<Shift-Up> Move to previous highlighted line
-<Shift-Down> Move to next highlighted line
+<Shift-Up> Find backwards (upwards, later commits)
+<Shift-Down> Find forwards (downwards, earlier commits)
<Delete>, b Scroll diff view up one page
<Backspace> Scroll diff view up one page
<Space> Scroll diff view down one page
" \
-justify left -bg white -border 2 -relief groove
pack $w.m -side top -fill both -padx 2 -pady 2
- $w.m configure -font $uifont
+ $w.m configure -font uifont
button $w.ok -text Close -command "destroy $w" -default active
pack $w.ok -side bottom
- $w.ok configure -font $uifont
+ $w.ok configure -font uifont
bind $w <Visibility> "focus $w.ok"
bind $w <Key-Escape> "destroy $w"
bind $w <Key-Return> "destroy $w"
global ctext cflist cmitmode flist_menu flist_menu_file
global treediffs diffids
+ stopfinding
set l [lindex [split [$w index "@$x,$y"] "."] 0]
if {$l <= 1} return
if {$cmitmode eq "tree"} {
}
proc flist_hl {only} {
- global flist_menu_file highlight_files
+ global flist_menu_file findstring gdttype
set x [shellquote $flist_menu_file]
- if {$only || $highlight_files eq {}} {
- set highlight_files $x
+ if {$only || $findstring eq {} || $gdttype ne "touching paths:"} {
+ set findstring $x
} else {
- append highlight_files " " $x
+ append findstring " " $x
}
+ set gdttype "touching paths:"
}
# Functions for adding and removing shell-type quoting
toplevel $top
wm title $top $title
- label $top.nl -text "Name" -font $uifont
- entry $top.name -width 20 -textvariable newviewname($n) -font $uifont
+ label $top.nl -text "Name" -font uifont
+ entry $top.name -width 20 -textvariable newviewname($n) -font uifont
grid $top.nl $top.name -sticky w -pady 5
checkbutton $top.perm -text "Remember this view" -variable newviewperm($n) \
- -font $uifont
+ -font uifont
grid $top.perm - -pady 5 -sticky w
- message $top.al -aspect 1000 -font $uifont \
+ message $top.al -aspect 1000 -font uifont \
-text "Commits to include (arguments to git rev-list):"
grid $top.al - -sticky w -pady 5
entry $top.args -width 50 -textvariable newviewargs($n) \
- -background white -font $uifont
+ -background white -font uifont
grid $top.args - -sticky ew -padx 5
- message $top.l -aspect 1000 -font $uifont \
+ message $top.l -aspect 1000 -font uifont \
-text "Enter files and directories to include, one per line:"
grid $top.l - -sticky w
- text $top.t -width 40 -height 10 -background white -font $uifont
+ text $top.t -width 40 -height 10 -background white -font uifont
if {[info exists viewfiles($n)]} {
foreach f $viewfiles($n) {
$top.t insert end $f
grid $top.t - -sticky ew -padx 5
frame $top.buts
button $top.buts.ok -text "OK" -command [list newviewok $top $n] \
- -font $uifont
+ -font uifont
button $top.buts.can -text "Cancel" -command [list destroy $top] \
- -font $uifont
+ -font uifont
grid $top.buts.ok $top.buts.can
grid columnconfigure $top.buts 0 -weight 1 -uniform a
grid columnconfigure $top.buts 1 -weight 1 -uniform a
}
proc allviewmenus {n op args} {
- global viewhlmenu
+ # global viewhlmenu
doviewmenu .bar.view 5 [list showview $n] $op $args
- doviewmenu $viewhlmenu 1 [list addvhighlight $n] $op $args
+ # doviewmenu $viewhlmenu 1 [list addvhighlight $n] $op $args
}
proc newviewok {top n} {
set viewname($n) $newviewname($n)
doviewmenu .bar.view 5 [list showview $n] \
entryconf [list -label $viewname($n)]
- doviewmenu $viewhlmenu 1 [list addvhighlight $n] \
- entryconf [list -label $viewname($n) -value $viewname($n)]
+ # doviewmenu $viewhlmenu 1 [list addvhighlight $n] \
+ # entryconf [list -label $viewname($n) -value $viewname($n)]
}
if {$files ne $viewfiles($n) || $newargs ne $viewargs($n)} {
set viewfiles($n) $files
.bar.view add radiobutton -label $viewname($n) \
-command [list showview $n] -variable selectedview -value $n
- $viewhlmenu add radiobutton -label $viewname($n) \
- -command [list addvhighlight $n] -variable selectedhlview
+ #$viewhlmenu add radiobutton -label $viewname($n) \
+ # -command [list addvhighlight $n] -variable selectedhlview
}
proc flatten {var} {
proc showview {n} {
global curview viewdata viewfiles
- global displayorder parentlist rowidlist rowoffsets
+ global displayorder parentlist rowidlist rowisopt rowfinal
global colormap rowtextx commitrow nextcolor canvxmax
- global numcommits rowrangelist commitlisted idrowranges rowchk
+ global numcommits commitlisted
global selectedline currentid canv canvy0
global treediffs
global pending_select phase
- global commitidx rowlaidout rowoptim
+ global commitidx
global commfd
global selectedview selectfirst
global vparentlist vdisporder vcmitlisted
- global hlview selectedhlview
+ global hlview selectedhlview commitinterest
if {$n == $curview} return
set selid {}
set vparentlist($curview) $parentlist
set vdisporder($curview) $displayorder
set vcmitlisted($curview) $commitlisted
- if {$phase ne {}} {
- set viewdata($curview) \
- [list $phase $rowidlist $rowoffsets $rowrangelist \
- [flatten idrowranges] [flatten idinlist] \
- $rowlaidout $rowoptim $numcommits]
- } elseif {![info exists viewdata($curview)]
- || [lindex $viewdata($curview) 0] ne {}} {
+ if {$phase ne {} ||
+ ![info exists viewdata($curview)] ||
+ [lindex $viewdata($curview) 0] ne {}} {
set viewdata($curview) \
- [list {} $rowidlist $rowoffsets $rowrangelist]
+ [list $phase $rowidlist $rowisopt $rowfinal]
}
}
catch {unset treediffs}
unset hlview
set selectedhlview None
}
+ catch {unset commitinterest}
set curview $n
set selectedview $n
.bar.view entryconf Edit* -state [expr {$n == 0? "disabled": "normal"}]
.bar.view entryconf Delete* -state [expr {$n == 0? "disabled": "normal"}]
+ run refill_reflist
if {![info exists viewdata($n)]} {
if {$selid ne {}} {
set pending_select $selid
set parentlist $vparentlist($n)
set commitlisted $vcmitlisted($n)
set rowidlist [lindex $v 1]
- set rowoffsets [lindex $v 2]
- set rowrangelist [lindex $v 3]
- if {$phase eq {}} {
- set numcommits [llength $displayorder]
- catch {unset idrowranges}
- } else {
- unflatten idrowranges [lindex $v 4]
- unflatten idinlist [lindex $v 5]
- set rowlaidout [lindex $v 6]
- set rowoptim [lindex $v 7]
- set numcommits [lindex $v 8]
- catch {unset rowchk}
- }
+ set rowisopt [lindex $v 2]
+ set rowfinal [lindex $v 3]
+ set numcommits $commitidx($n)
catch {unset colormap}
catch {unset rowtextx}
} elseif {$numcommits == 0} {
show_status "No commits selected"
}
- run refill_reflist
}
# Stuff relating to the highlighting facility
}
proc unbolden {} {
- global mainfont boldrows
+ global boldrows
set stillbold {}
foreach row $boldrows {
if {![ishighlighted $row]} {
- bolden $row $mainfont
+ bolden $row mainfont
} else {
lappend stillbold $row
}
}
set hlview $n
if {$n != $curview && ![info exists viewdata($n)]} {
- set viewdata($n) [list getcommits {{}} {{}} {} {} {} 0 0 0 {}]
+ set viewdata($n) [list getcommits {{}} 0 0 0]
set vparentlist($n) {}
set vdisporder($n) {}
set vcmitlisted($n) {}
proc vhighlightmore {} {
global hlview vhl_done commitidx vhighlights
- global displayorder vdisporder curview mainfont
+ global displayorder vdisporder curview
- set font [concat $mainfont bold]
set max $commitidx($hlview)
if {$hlview == $curview} {
set disp $displayorder
set row $commitrow($curview,$id)
if {$r0 <= $row && $row <= $r1} {
if {![highlighted $row]} {
- bolden $row $font
+ bolden $row mainfontbold
}
set vhighlights($row) 1
}
}
proc askvhighlight {row id} {
- global hlview vhighlights commitrow iddrawn mainfont
+ global hlview vhighlights commitrow iddrawn
if {[info exists commitrow($hlview,$id)]} {
if {[info exists iddrawn($id)] && ![ishighlighted $row]} {
- bolden $row [concat $mainfont bold]
+ bolden $row mainfontbold
}
set vhighlights($row) 1
} else {
}
}
-proc hfiles_change {name ix op} {
+proc hfiles_change {} {
global highlight_files filehighlight fhighlights fh_serial
- global mainfont highlight_paths
+ global highlight_paths gdttype
if {[info exists filehighlight]} {
# delete previous highlights
}
}
+proc gdttype_change {name ix op} {
+ global gdttype highlight_files findstring findpattern
+
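+    # What findstring means depends on gdttype: for "containing:" it is
+    # matched against the commit text; for the path and string modes it
+    # is handed to the file highlighting machinery instead.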
+ stopfinding
+ if {$findstring ne {}} {
+ if {$gdttype eq "containing:"} {
+ if {$highlight_files ne {}} {
+ set highlight_files {}
+ hfiles_change
+ }
+ findcom_change
+ } else {
+ if {$findpattern ne {}} {
+ set findpattern {}
+ findcom_change
+ }
+ set highlight_files $findstring
+ hfiles_change
+ }
+ drawvisible
+ }
+ # enable/disable findtype/findloc menus too
+}
+
+proc find_change {name ix op} {
+ global gdttype findstring highlight_files
+
+ stopfinding
+ if {$gdttype eq "containing:"} {
+ findcom_change
+ } else {
+ if {$highlight_files ne $findstring} {
+ set highlight_files $findstring
+ hfiles_change
+ }
+ }
+ drawvisible
+}
+
+proc findcom_change args {
+ global nhighlights boldnamerows
+ global findpattern findtype findstring gdttype
+
+ stopfinding
+ # delete previous highlights, if any
+ foreach row $boldnamerows {
+ bolden_name $row mainfont
+ }
+ set boldnamerows {}
+ catch {unset nhighlights}
+ unbolden
+ unmarkmatches
+ if {$gdttype ne "containing:" || $findstring eq {}} {
+ set findpattern {}
+ } elseif {$findtype eq "Regexp"} {
+ set findpattern $findstring
+ } else {
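+	# escape glob metacharacters in the search string and wrap it
+	# in *...* so [string match] does a literal substring search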
+ set e [string map {"*" "\\*" "?" "\\?" "\[" "\\\[" "\\" "\\\\"} \
+ $findstring]
+ set findpattern "*$e*"
+ }
+}
+
proc makepatterns {l} {
set ret {}
foreach e $l {
set highlight_paths [makepatterns $paths]
highlight_filelist
set gdtargs [concat -- $paths]
- } else {
+ } elseif {$gdttype eq "adding/removing string:"} {
set gdtargs [list "-S$highlight_files"]
+ } else {
+ # must be "containing:", i.e. we're searching commit info
+ return
}
set cmd [concat | git diff-tree -r -s --stdin $gdtargs]
set filehighlight [open $cmd r+]
}
proc readfhighlight {} {
- global filehighlight fhighlights commitrow curview mainfont iddrawn
- global fhl_list
+ global filehighlight fhighlights commitrow curview iddrawn
+ global fhl_list find_dirn
if {![info exists filehighlight]} {
return 0
if {![info exists commitrow($curview,$line)]} continue
set row $commitrow($curview,$line)
if {[info exists iddrawn($line)] && ![ishighlighted $row]} {
- bolden $row [concat $mainfont bold]
+ bolden $row mainfontbold
}
set fhighlights($row) 1
}
unset filehighlight
return 0
}
- next_hlcont
- return 1
-}
-
-proc find_change {name ix op} {
- global nhighlights mainfont boldnamerows
- global findstring findpattern findtype
-
- # delete previous highlights, if any
- foreach row $boldnamerows {
- bolden_name $row $mainfont
- }
- set boldnamerows {}
- catch {unset nhighlights}
- unbolden
- unmarkmatches
- if {$findtype ne "Regexp"} {
- set e [string map {"*" "\\*" "?" "\\?" "\[" "\\\[" "\\" "\\\\"} \
- $findstring]
- set findpattern "*$e*"
+ if {[info exists find_dirn]} {
+ run findmore
}
- drawvisible
+ return 1
}
proc doesmatch {f} {
- global findtype findstring findpattern
+ global findtype findpattern
if {$findtype eq "Regexp"} {
- return [regexp $findstring $f]
+ return [regexp $findpattern $f]
} elseif {$findtype eq "IgnCase"} {
return [string match -nocase $findpattern $f]
} else {
}
proc askfindhighlight {row id} {
- global nhighlights commitinfo iddrawn mainfont
+ global nhighlights commitinfo iddrawn
global findloc
global markingmatches
}
}
if {$isbold && [info exists iddrawn($id)]} {
- set f [concat $mainfont bold]
if {![ishighlighted $row]} {
- bolden $row $f
+ bolden $row mainfontbold
if {$isbold > 1} {
- bolden_name $row $f
+ bolden_name $row mainfontbold
}
}
if {$markingmatches} {
}
proc askrelhighlight {row id} {
- global descendent highlight_related iddrawn mainfont rhighlights
+ global descendent highlight_related iddrawn rhighlights
global selectedline ancestor
if {![info exists selectedline]} return
}
if {[info exists iddrawn($id)]} {
if {$isbold && ![ishighlighted $row]} {
- bolden $row [concat $mainfont bold]
+ bolden $row mainfontbold
}
}
set rhighlights($row) $isbold
}
-proc next_hlcont {} {
- global fhl_row fhl_dirn displayorder numcommits
- global vhighlights fhighlights nhighlights rhighlights
- global hlview filehighlight findstring highlight_related
-
- if {![info exists fhl_dirn] || $fhl_dirn == 0} return
- set row $fhl_row
- while {1} {
- if {$row < 0 || $row >= $numcommits} {
- bell
- set fhl_dirn 0
- return
- }
- set id [lindex $displayorder $row]
- if {[info exists hlview]} {
- if {![info exists vhighlights($row)]} {
- askvhighlight $row $id
- }
- if {$vhighlights($row) > 0} break
- }
- if {$findstring ne {}} {
- if {![info exists nhighlights($row)]} {
- askfindhighlight $row $id
- }
- if {$nhighlights($row) > 0} break
- }
- if {$highlight_related ne "None"} {
- if {![info exists rhighlights($row)]} {
- askrelhighlight $row $id
- }
- if {$rhighlights($row) > 0} break
- }
- if {[info exists filehighlight]} {
- if {![info exists fhighlights($row)]} {
- # ask for a few more while we're at it...
- set r $row
- for {set n 0} {$n < 100} {incr n} {
- if {![info exists fhighlights($r)]} {
- askfilehighlight $r [lindex $displayorder $r]
- }
- incr r $fhl_dirn
- if {$r < 0 || $r >= $numcommits} break
- }
- flushhighlights
- }
- if {$fhighlights($row) < 0} {
- set fhl_row $row
- return
- }
- if {$fhighlights($row) > 0} break
- }
- incr row $fhl_dirn
- }
- set fhl_dirn 0
- selectline $row 1
-}
-
-proc next_highlight {dirn} {
- global selectedline fhl_row fhl_dirn
- global hlview filehighlight findstring highlight_related
-
- if {![info exists selectedline]} return
- if {!([info exists hlview] || $findstring ne {} ||
- $highlight_related ne "None" || [info exists filehighlight])} return
- set fhl_row [expr {$selectedline + $dirn}]
- set fhl_dirn $dirn
- next_hlcont
-}
-
-proc cancel_next_highlight {} {
- global fhl_dirn
-
- set fhl_dirn 0
-}
-
# Graph layout functions
proc shortids {ids} {
return $res
}
-proc incrange {l x o} {
- set n [llength $l]
- while {$x < $n} {
- set e [lindex $l $x]
- if {$e ne {}} {
- lset l $x [expr {$e + $o}]
- }
- incr x
- }
- return $l
-}
-
proc ntimes {n o} {
set ret {}
- for {} {$n > 0} {incr n -1} {
- lappend ret $o
- }
- return $ret
-}
-
-proc usedinrange {id l1 l2} {
- global children commitrow curview
-
- if {[info exists commitrow($curview,$id)]} {
- set r $commitrow($curview,$id)
- if {$l1 <= $r && $r <= $l2} {
- return [expr {$r - $l1 + 1}]
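+    # build the n-fold repetition of $o by repeated doubling rather
+    # than appending one element at a time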
+ set o [list $o]
+ for {set mask 1} {$mask <= $n} {incr mask $mask} {
+ if {($n & $mask) != 0} {
+ set ret [concat $ret $o]
}
+ set o [concat $o $o]
}
- set kids $children($curview,$id)
- foreach c $kids {
- set r $commitrow($curview,$c)
- if {$l1 <= $r && $r <= $l2} {
- return [expr {$r - $l1 + 1}]
- }
- }
- return 0
+ return $ret
}
-proc sanity {row {full 0}} {
- global rowidlist rowoffsets
+# Work out where id should go in idlist so that order-token
+# values increase from left to right
+proc idcol {idlist id {i 0}} {
+ global ordertok curview
- set col -1
- set ids [lindex $rowidlist $row]
- foreach id $ids {
- incr col
- if {$id eq {}} continue
- if {$col < [llength $ids] - 1 &&
- [lsearch -exact -start [expr {$col+1}] $ids $id] >= 0} {
- puts "oops: [shortids $id] repeated in row $row col $col: {[shortids [lindex $rowidlist $row]]}"
- }
- set o [lindex $rowoffsets $row $col]
- set y $row
- set x $col
- while {$o ne {}} {
- incr y -1
- incr x $o
- if {[lindex $rowidlist $y $x] != $id} {
- puts "oops: rowoffsets wrong at row [expr {$y+1}] col [expr {$x-$o}]"
- puts " id=[shortids $id] check started at row $row"
- for {set i $row} {$i >= $y} {incr i -1} {
- puts " row $i ids={[shortids [lindex $rowidlist $i]]} offs={[lindex $rowoffsets $i]}"
- }
- break
- }
- if {!$full} break
- set o [lindex $rowoffsets $y $x]
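+    # scan outward from the hint $i for the position at which $id's
+    # order token keeps $idlist sorted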
+ set t $ordertok($curview,$id)
+ if {$i >= [llength $idlist] ||
+ $t < $ordertok($curview,[lindex $idlist $i])} {
+ if {$i > [llength $idlist]} {
+ set i [llength $idlist]
}
- }
-}
-
-proc makeuparrow {oid x y z} {
- global rowidlist rowoffsets uparrowlen idrowranges displayorder
-
- for {set i 1} {$i < $uparrowlen && $y > 1} {incr i} {
- incr y -1
- incr x $z
- set off0 [lindex $rowoffsets $y]
- for {set x0 $x} {1} {incr x0} {
- if {$x0 >= [llength $off0]} {
- set x0 [llength [lindex $rowoffsets [expr {$y-1}]]]
- break
- }
- set z [lindex $off0 $x0]
- if {$z ne {}} {
- incr x0 $z
- break
- }
+ while {[incr i -1] >= 0 &&
+ $t < $ordertok($curview,[lindex $idlist $i])} {}
+ incr i
+ } else {
+ if {$t > $ordertok($curview,[lindex $idlist $i])} {
+ while {[incr i] < [llength $idlist] &&
+ $t >= $ordertok($curview,[lindex $idlist $i])} {}
}
- set z [expr {$x0 - $x}]
- lset rowidlist $y [linsert [lindex $rowidlist $y] $x $oid]
- lset rowoffsets $y [linsert [lindex $rowoffsets $y] $x $z]
}
- set tmp [lreplace [lindex $rowoffsets $y] $x $x {}]
- lset rowoffsets $y [incrange $tmp [expr {$x+1}] -1]
- lappend idrowranges($oid) [lindex $displayorder $y]
+ return $i
}
proc initlayout {} {
- global rowidlist rowoffsets displayorder commitlisted
- global rowlaidout rowoptim
- global idinlist rowchk rowrangelist idrowranges
+ global rowidlist rowisopt rowfinal displayorder commitlisted
global numcommits canvxmax canv
global nextcolor
global parentlist
set displayorder {}
set commitlisted {}
set parentlist {}
- set rowrangelist {}
set nextcolor 0
- set rowidlist {{}}
- set rowoffsets {{}}
- catch {unset idinlist}
- catch {unset rowchk}
- set rowlaidout 0
- set rowoptim 0
+ set rowidlist {}
+ set rowisopt {}
+ set rowfinal {}
set canvxmax [$canv cget -width]
catch {unset colormap}
catch {unset rowtextx}
- catch {unset idrowranges}
set selectfirst 1
}
return [list $r0 $r1]
}
-proc layoutmore {tmax allread} {
- global rowlaidout rowoptim commitidx numcommits optim_delay
- global uparrowlen curview rowidlist idinlist
+proc layoutmore {} {
+ global commitidx viewcomplete numcommits
+ global uparrowlen downarrowlen mingaplen curview
- set showlast 0
- set showdelay $optim_delay
- set optdelay [expr {$uparrowlen + 1}]
- while {1} {
- if {$rowoptim - $showdelay > $numcommits} {
- showstuff [expr {$rowoptim - $showdelay}] $showlast
- } elseif {$rowlaidout - $optdelay > $rowoptim} {
- set nr [expr {$rowlaidout - $optdelay - $rowoptim}]
- if {$nr > 100} {
- set nr 100
- }
- optimize_rows $rowoptim 0 [expr {$rowoptim + $nr}]
- incr rowoptim $nr
- } elseif {$commitidx($curview) > $rowlaidout} {
- set nr [expr {$commitidx($curview) - $rowlaidout}]
- # may need to increase this threshold if uparrowlen or
- # mingaplen are increased...
- if {$nr > 150} {
- set nr 150
- }
- set row $rowlaidout
- set rowlaidout [layoutrows $row [expr {$row + $nr}] $allread]
- if {$rowlaidout == $row} {
- return 0
- }
- } elseif {$allread} {
- set optdelay 0
- set nrows $commitidx($curview)
- if {[lindex $rowidlist $nrows] ne {} ||
- [array names idinlist] ne {}} {
- layouttail
- set rowlaidout $commitidx($curview)
- } elseif {$rowoptim == $nrows} {
- set showdelay 0
- set showlast 1
- if {$numcommits == $nrows} {
- return 0
- }
- }
- } else {
- return 0
- }
- if {$tmax ne {} && [clock clicks -milliseconds] >= $tmax} {
- return 1
- }
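+    # graph layout is now done lazily from drawcommits, so all we need
+    # to do here is make any newly-arrived commits visible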
+ set show $commitidx($curview)
+ if {$show > $numcommits || $viewcomplete($curview)} {
+ showstuff $show $viewcomplete($curview)
}
}
proc showstuff {canshow last} {
global numcommits commitrow pending_select selectedline curview
- global lookingforhead mainheadid displayorder selectfirst
+ global mainheadid displayorder selectfirst
global lastscrollset commitinterest
if {$numcommits == 0} {
set phase "incrdraw"
allcanvs delete all
}
- for {set l $numcommits} {$l < $canshow} {incr l} {
- set id [lindex $displayorder $l]
- if {[info exists commitinterest($id)]} {
- foreach script $commitinterest($id) {
- eval [string map [list "%I" $id] $script]
- }
- unset commitinterest($id)
- }
- }
set r0 $numcommits
set prev $numcommits
set numcommits $canshow
set selectfirst 0
}
}
- if {$lookingforhead && [info exists commitrow($curview,$mainheadid)]
- && ($last || $commitrow($curview,$mainheadid) < $numcommits - 1)} {
- set lookingforhead 0
- dodiffindex
- }
}
proc doshowlocalchanges {} {
- global lookingforhead curview mainheadid phase commitrow
+ global curview mainheadid phase commitrow
if {[info exists commitrow($curview,$mainheadid)] &&
($phase eq {} || $commitrow($curview,$mainheadid) < $numcommits - 1)} {
dodiffindex
} elseif {$phase ne {}} {
- set lookingforhead 1
+ lappend commitinterest($mainheadid) {}
}
}
proc dohidelocalchanges {} {
- global lookingforhead localfrow localirow lserial
+ global localfrow localirow lserial
- set lookingforhead 0
if {$localfrow >= 0} {
removerow $localfrow
set localfrow -1
# spawn off a process to do git diff-index --cached HEAD
proc dodiffindex {} {
- global localirow localfrow lserial
+ global localirow localfrow lserial showlocalchanges
+ if {!$showlocalchanges} return
incr lserial
set localfrow -1
set localirow -1
return 0
}
-proc layoutrows {row endrow last} {
- global rowidlist rowoffsets displayorder
- global uparrowlen downarrowlen maxwidth mingaplen
- global children parentlist
- global idrowranges
- global commitidx curview
- global idinlist rowchk rowrangelist
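+# Return the first row below $row where $id is needed again (the row
+# of its next child, or $id's own row), or -1 if not yet known.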
+proc nextuse {id row} {
+ global commitrow curview children
- set idlist [lindex $rowidlist $row]
- set offs [lindex $rowoffsets $row]
- while {$row < $endrow} {
- set id [lindex $displayorder $row]
- set nev [expr {[llength $idlist] - $maxwidth + 1}]
- foreach p [lindex $parentlist $row] {
- if {![info exists idinlist($p)] || !$idinlist($p)} {
- incr nev
- }
- }
- if {$nev > 0} {
- if {!$last &&
- $row + $uparrowlen + $mingaplen >= $commitidx($curview)} break
- for {set x [llength $idlist]} {[incr x -1] >= 0} {} {
- set i [lindex $idlist $x]
- if {![info exists rowchk($i)] || $row >= $rowchk($i)} {
- set r [usedinrange $i [expr {$row - $downarrowlen}] \
- [expr {$row + $uparrowlen + $mingaplen}]]
- if {$r == 0} {
- set idlist [lreplace $idlist $x $x]
- set offs [lreplace $offs $x $x]
- set offs [incrange $offs $x 1]
- set idinlist($i) 0
- set rm1 [expr {$row - 1}]
- lappend idrowranges($i) [lindex $displayorder $rm1]
- if {[incr nev -1] <= 0} break
- continue
- }
- set rowchk($i) [expr {$row + $r}]
- }
+ if {[info exists children($curview,$id)]} {
+ foreach kid $children($curview,$id) {
+ if {![info exists commitrow($curview,$kid)]} {
+ return -1
+ }
+ if {$commitrow($curview,$kid) > $row} {
+ return $commitrow($curview,$kid)
}
- lset rowidlist $row $idlist
- lset rowoffsets $row $offs
}
- set oldolds {}
- set newolds {}
- foreach p [lindex $parentlist $row] {
- if {![info exists idinlist($p)]} {
- lappend newolds $p
- } elseif {!$idinlist($p)} {
- lappend oldolds $p
+ }
+ if {[info exists commitrow($curview,$id)]} {
+ return $commitrow($curview,$id)
+ }
+ return -1
+}
+
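+# Return the row of the last child of $id that appears above $row,
+# or -1 if there is none.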
+proc prevuse {id row} {
+ global commitrow curview children
+
+ set ret -1
+ if {[info exists children($curview,$id)]} {
+ foreach kid $children($curview,$id) {
+ if {![info exists commitrow($curview,$kid)]} break
+ if {$commitrow($curview,$kid) < $row} {
+ set ret $commitrow($curview,$kid)
}
- set idinlist($p) 1
}
- set col [lsearch -exact $idlist $id]
- if {$col < 0} {
- set col [llength $idlist]
- lappend idlist $id
- lset rowidlist $row $idlist
- set z {}
- if {$children($curview,$id) ne {}} {
- set z [expr {[llength [lindex $rowidlist [expr {$row-1}]]] - $col}]
- unset idinlist($id)
- }
- lappend offs $z
- lset rowoffsets $row $offs
- if {$z ne {}} {
- makeuparrow $id $col $row $z
+ }
+ return $ret
+}
+
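+# Compute from scratch the list of ids whose lines cross $row: the
+# commit on the row itself plus any nearby parents whose lines pass
+# through it, ordered by their order tokens.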
+proc make_idlist {row} {
+ global displayorder parentlist uparrowlen downarrowlen mingaplen
+ global commitidx curview ordertok children commitrow
+
+ set r [expr {$row - $mingaplen - $downarrowlen - 1}]
+ if {$r < 0} {
+ set r 0
+ }
+ set ra [expr {$row - $downarrowlen}]
+ if {$ra < 0} {
+ set ra 0
+ }
+ set rb [expr {$row + $uparrowlen}]
+ if {$rb > $commitidx($curview)} {
+ set rb $commitidx($curview)
+ }
+ set ids {}
+ for {} {$r < $ra} {incr r} {
+ set nextid [lindex $displayorder [expr {$r + 1}]]
+ foreach p [lindex $parentlist $r] {
+ if {$p eq $nextid} continue
+ set rn [nextuse $p $r]
+ if {$rn >= $row &&
+ $rn <= $r + $downarrowlen + $mingaplen + $uparrowlen} {
+ lappend ids [list $ordertok($curview,$p) $p]
}
- } else {
- unset idinlist($id)
- }
- set ranges {}
- if {[info exists idrowranges($id)]} {
- set ranges $idrowranges($id)
- lappend ranges $id
- unset idrowranges($id)
- }
- lappend rowrangelist $ranges
- incr row
- set offs [ntimes [llength $idlist] 0]
- set l [llength $newolds]
- set idlist [eval lreplace \$idlist $col $col $newolds]
- set o 0
- if {$l != 1} {
- set offs [lrange $offs 0 [expr {$col - 1}]]
- foreach x $newolds {
- lappend offs {}
- incr o -1
- }
- incr o
- set tmp [expr {[llength $idlist] - [llength $offs]}]
- if {$tmp > 0} {
- set offs [concat $offs [ntimes $tmp $o]]
+ }
+ }
+ for {} {$r < $row} {incr r} {
+ set nextid [lindex $displayorder [expr {$r + 1}]]
+ foreach p [lindex $parentlist $r] {
+ if {$p eq $nextid} continue
+ set rn [nextuse $p $r]
+ if {$rn < 0 || $rn >= $row} {
+ lappend ids [list $ordertok($curview,$p) $p]
}
- } else {
- lset offs $col {}
}
- foreach i $newolds {
- set idrowranges($i) $id
+ }
+ set id [lindex $displayorder $row]
+ lappend ids [list $ordertok($curview,$id) $id]
+ while {$r < $rb} {
+ foreach p [lindex $parentlist $r] {
+ set firstkid [lindex $children($curview,$p) 0]
+ if {$commitrow($curview,$firstkid) < $row} {
+ lappend ids [list $ordertok($curview,$p) $p]
+ }
}
- incr col $l
- foreach oid $oldolds {
- set idlist [linsert $idlist $col $oid]
- set offs [linsert $offs $col $o]
- makeuparrow $oid $col $row $o
- incr col
+ incr r
+ set id [lindex $displayorder $r]
+ if {$id ne {}} {
+ set firstkid [lindex $children($curview,$id) 0]
+ if {$firstkid ne {} && $commitrow($curview,$firstkid) < $row} {
+ lappend ids [list $ordertok($curview,$id) $id]
+ }
}
- lappend rowidlist $idlist
- lappend rowoffsets $offs
}
- return $row
+ set idlist {}
+ foreach idx [lsort -unique $ids] {
+ lappend idlist [lindex $idx 1]
+ }
+ return $idlist
+}
+
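+# Two rows are considered equal if they contain the same ids after
+# the empty (padding) entries are ignored.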
+proc rowsequal {a b} {
+ while {[set i [lsearch -exact $a {}]] >= 0} {
+ set a [lreplace $a $i $i]
+ }
+ while {[set i [lsearch -exact $b {}]] >= 0} {
+ set b [lreplace $b $i $i]
+ }
+ return [expr {$a eq $b}]
}
-proc addextraid {id row} {
- global displayorder commitrow commitinfo
- global commitidx commitlisted
- global parentlist children curview
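+# Insert $id into the rows between its previous use and $row, either
+# all the way back or just far enough to draw an up-arrow, so that
+# the line joining it to row $rend can be drawn.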
+proc makeupline {id row rend col} {
+ global rowidlist uparrowlen downarrowlen mingaplen
- incr commitidx($curview)
- lappend displayorder $id
- lappend commitlisted 0
- lappend parentlist {}
- set commitrow($curview,$id) $row
- readcommit $id
- if {![info exists commitinfo($id)]} {
- set commitinfo($id) {"No commit information available"}
+ for {set r $rend} {1} {set r $rstart} {
+ set rstart [prevuse $id $r]
+ if {$rstart < 0} return
+ if {$rstart < $row} break
}
- if {![info exists children($curview,$id)]} {
- set children($curview,$id) {}
+ if {$rstart + $uparrowlen + $mingaplen + $downarrowlen < $rend} {
+ set rstart [expr {$rend - $uparrowlen - 1}]
+ }
+ for {set r $rstart} {[incr r] <= $row} {} {
+ set idlist [lindex $rowidlist $r]
+ if {$idlist ne {} && [lsearch -exact $idlist $id] < 0} {
+ set col [idcol $idlist $id $col]
+ lset rowidlist $r [linsert $idlist $col $id]
+ changedrow $r
+ }
}
}
-proc layouttail {} {
- global rowidlist rowoffsets idinlist commitidx curview
- global idrowranges rowrangelist
+proc layoutrows {row endrow} {
+ global rowidlist rowisopt rowfinal displayorder
+ global uparrowlen downarrowlen maxwidth mingaplen
+ global children parentlist
+ global commitidx viewcomplete curview commitrow
- set row $commitidx($curview)
- set idlist [lindex $rowidlist $row]
- while {$idlist ne {}} {
- set col [expr {[llength $idlist] - 1}]
- set id [lindex $idlist $col]
- addextraid $id $row
- catch {unset idinlist($id)}
- lappend idrowranges($id) $id
- lappend rowrangelist $idrowranges($id)
- unset idrowranges($id)
- incr row
- set offs [ntimes $col 0]
- set idlist [lreplace $idlist $col $col]
- lappend rowidlist $idlist
- lappend rowoffsets $offs
- }
-
- foreach id [array names idinlist] {
- unset idinlist($id)
- addextraid $id $row
- lset rowidlist $row [list $id]
- lset rowoffsets $row 0
- makeuparrow $id 0 $row 0
- lappend idrowranges($id) $id
- lappend rowrangelist $idrowranges($id)
- unset idrowranges($id)
- incr row
- lappend rowidlist {}
- lappend rowoffsets {}
+ set idlist {}
+ if {$row > 0} {
+ set rm1 [expr {$row - 1}]
+ foreach id [lindex $rowidlist $rm1] {
+ if {$id ne {}} {
+ lappend idlist $id
+ }
+ }
+ set final [lindex $rowfinal $rm1]
+ }
+ for {} {$row < $endrow} {incr row} {
+ set rm1 [expr {$row - 1}]
+ if {$rm1 < 0 || $idlist eq {}} {
+ set idlist [make_idlist $row]
+ set final 1
+ } else {
+ set id [lindex $displayorder $rm1]
+ set col [lsearch -exact $idlist $id]
+ set idlist [lreplace $idlist $col $col]
+ foreach p [lindex $parentlist $rm1] {
+ if {[lsearch -exact $idlist $p] < 0} {
+ set col [idcol $idlist $p $col]
+ set idlist [linsert $idlist $col $p]
+ # if not the first child, we have to insert a line going up
+ if {$id ne [lindex $children($curview,$p) 0]} {
+ makeupline $p $rm1 $row $col
+ }
+ }
+ }
+ set id [lindex $displayorder $row]
+ if {$row > $downarrowlen} {
+ set termrow [expr {$row - $downarrowlen - 1}]
+ foreach p [lindex $parentlist $termrow] {
+ set i [lsearch -exact $idlist $p]
+ if {$i < 0} continue
+ set nr [nextuse $p $termrow]
+ if {$nr < 0 || $nr >= $row + $mingaplen + $uparrowlen} {
+ set idlist [lreplace $idlist $i $i]
+ }
+ }
+ }
+ set col [lsearch -exact $idlist $id]
+ if {$col < 0} {
+ set col [idcol $idlist $id]
+ set idlist [linsert $idlist $col $id]
+ if {$children($curview,$id) ne {}} {
+ makeupline $id $rm1 $row $col
+ }
+ }
+ set r [expr {$row + $uparrowlen - 1}]
+ if {$r < $commitidx($curview)} {
+ set x $col
+ foreach p [lindex $parentlist $r] {
+ if {[lsearch -exact $idlist $p] >= 0} continue
+ set fk [lindex $children($curview,$p) 0]
+ if {$commitrow($curview,$fk) < $row} {
+ set x [idcol $idlist $p $x]
+ set idlist [linsert $idlist $x $p]
+ }
+ }
+ if {[incr r] < $commitidx($curview)} {
+ set p [lindex $displayorder $r]
+ if {[lsearch -exact $idlist $p] < 0} {
+ set fk [lindex $children($curview,$p) 0]
+ if {$fk ne {} && $commitrow($curview,$fk) < $row} {
+ set x [idcol $idlist $p $x]
+ set idlist [linsert $idlist $x $p]
+ }
+ }
+ }
+ }
+ }
+ if {$final && !$viewcomplete($curview) &&
+ $row + $uparrowlen + $mingaplen + $downarrowlen
+ >= $commitidx($curview)} {
+ set final 0
+ }
+ set l [llength $rowidlist]
+ if {$row == $l} {
+ lappend rowidlist $idlist
+ lappend rowisopt 0
+ lappend rowfinal $final
+ } elseif {$row < $l} {
+ if {![rowsequal $idlist [lindex $rowidlist $row]]} {
+ lset rowidlist $row $idlist
+ changedrow $row
+ }
+ lset rowfinal $row $final
+ } else {
+ set pad [ntimes [expr {$row - $l}] {}]
+ set rowidlist [concat $rowidlist $pad]
+ lappend rowidlist $idlist
+ set rowfinal [concat $rowfinal $pad]
+ lappend rowfinal $final
+ set rowisopt [concat $rowisopt [ntimes [expr {$row - $l + 1}] 0]]
+ }
+ }
+ return $row
+}
+
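+# Mark a row as changed: it and the following two rows are no longer
+# considered optimized, and a full redisplay is needed if its commit
+# has already been drawn.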
+proc changedrow {row} {
+ global displayorder iddrawn rowisopt need_redisplay
+
+ set l [llength $rowisopt]
+ if {$row < $l} {
+ lset rowisopt $row 0
+ if {$row + 1 < $l} {
+ lset rowisopt [expr {$row + 1}] 0
+ if {$row + 2 < $l} {
+ lset rowisopt [expr {$row + 2}] 0
+ }
+ }
+ }
+ set id [lindex $displayorder $row]
+ if {[info exists iddrawn($id)]} {
+ set need_redisplay 1
}
}
proc insert_pad {row col npad} {
- global rowidlist rowoffsets
+ global rowidlist
set pad [ntimes $npad {}]
- lset rowidlist $row [eval linsert [list [lindex $rowidlist $row]] $col $pad]
- set tmp [eval linsert [list [lindex $rowoffsets $row]] $col $pad]
- lset rowoffsets $row [incrange $tmp [expr {$col + $npad}] [expr {-$npad}]]
+ set idlist [lindex $rowidlist $row]
+ set bef [lrange $idlist 0 [expr {$col - 1}]]
+ set aft [lrange $idlist $col end]
+ set i [lsearch -exact $aft {}]
+ if {$i > 0} {
+ set aft [lreplace $aft $i $i]
+ }
+ lset rowidlist $row [concat $bef $pad $aft]
+ changedrow $row
}
proc optimize_rows {row col endrow} {
- global rowidlist rowoffsets displayorder
+ global rowidlist rowisopt displayorder curview children
- for {} {$row < $endrow} {incr row} {
- set idlist [lindex $rowidlist $row]
- set offs [lindex $rowoffsets $row]
+ if {$row < 1} {
+ set row 1
+ }
+ for {} {$row < $endrow} {incr row; set col 0} {
+ if {[lindex $rowisopt $row]} continue
set haspad 0
- for {} {$col < [llength $offs]} {incr col} {
- if {[lindex $idlist $col] eq {}} {
+ set y0 [expr {$row - 1}]
+ set ym [expr {$row - 2}]
+ set idlist [lindex $rowidlist $row]
+ set previdlist [lindex $rowidlist $y0]
+ if {$idlist eq {} || $previdlist eq {}} continue
+ if {$ym >= 0} {
+ set pprevidlist [lindex $rowidlist $ym]
+ if {$pprevidlist eq {}} continue
+ } else {
+ set pprevidlist {}
+ }
+ set x0 -1
+ set xm -1
+ for {} {$col < [llength $idlist]} {incr col} {
+ set id [lindex $idlist $col]
+ if {[lindex $previdlist $col] eq $id} continue
+ if {$id eq {}} {
set haspad 1
continue
}
- set z [lindex $offs $col]
- if {$z eq {}} continue
+ set x0 [lsearch -exact $previdlist $id]
+ if {$x0 < 0} continue
+ set z [expr {$x0 - $col}]
set isarrow 0
- set x0 [expr {$col + $z}]
- set y0 [expr {$row - 1}]
- set z0 [lindex $rowoffsets $y0 $x0]
+ set z0 {}
+ if {$ym >= 0} {
+ set xm [lsearch -exact $pprevidlist $id]
+ if {$xm >= 0} {
+ set z0 [expr {$xm - $x0}]
+ }
+ }
if {$z0 eq {}} {
- set id [lindex $idlist $col]
- set ranges [rowranges $id]
- if {$ranges ne {} && $y0 > [lindex $ranges 0]} {
+ # if row y0 is the first child of $id then it's not an arrow
+ if {[lindex $children($curview,$id) 0] ne
+ [lindex $displayorder $y0]} {
set isarrow 1
}
}
+ if {!$isarrow && $id ne [lindex $displayorder $row] &&
+ [lsearch -exact [lindex $rowidlist [expr {$row+1}]] $id] < 0} {
+ set isarrow 1
+ }
# Looking at lines from this row to the previous row,
# make them go straight up if they end in an arrow on
# the previous row; otherwise make them go straight up
# Line currently goes left too much;
# insert pads in the previous row, then optimize it
set npad [expr {-1 - $z + $isarrow}]
- set offs [incrange $offs $col $npad]
insert_pad $y0 $x0 $npad
if {$y0 > 0} {
optimize_rows $y0 $x0 $row
}
- set z [lindex $offs $col]
- set x0 [expr {$col + $z}]
- set z0 [lindex $rowoffsets $y0 $x0]
+ set previdlist [lindex $rowidlist $y0]
+ set x0 [lsearch -exact $previdlist $id]
+ set z [expr {$x0 - $col}]
+ if {$z0 ne {}} {
+ set pprevidlist [lindex $rowidlist $ym]
+ set xm [lsearch -exact $pprevidlist $id]
+ set z0 [expr {$xm - $x0}]
+ }
} elseif {$z > 1 || ($z > 0 && $isarrow)} {
# Line currently goes right too much;
- # insert pads in this line and adjust the next's rowoffsets
+ # insert pads in this line
set npad [expr {$z - 1 + $isarrow}]
- set y1 [expr {$row + 1}]
- set offs2 [lindex $rowoffsets $y1]
- set x1 -1
- foreach z $offs2 {
- incr x1
- if {$z eq {} || $x1 + $z < $col} continue
- if {$x1 + $z > $col} {
- incr npad
- }
- lset rowoffsets $y1 [incrange $offs2 $x1 $npad]
- break
- }
- set pad [ntimes $npad {}]
- set idlist [eval linsert \$idlist $col $pad]
- set tmp [eval linsert \$offs $col $pad]
+ insert_pad $row $col $npad
+ set idlist [lindex $rowidlist $row]
incr col $npad
- set offs [incrange $tmp $col [expr {-$npad}]]
- set z [lindex $offs $col]
+ set z [expr {$x0 - $col}]
set haspad 1
}
- if {$z0 eq {} && !$isarrow} {
+ if {$z0 eq {} && !$isarrow && $ym >= 0} {
# this line links to its first child on row $row-2
- set rm2 [expr {$row - 2}]
- set id [lindex $displayorder $rm2]
- set xc [lsearch -exact [lindex $rowidlist $rm2] $id]
+ set id [lindex $displayorder $ym]
+ set xc [lsearch -exact $pprevidlist $id]
if {$xc >= 0} {
set z0 [expr {$xc - $x0}]
}
# avoid lines jigging left then immediately right
if {$z0 ne {} && $z < 0 && $z0 > 0} {
insert_pad $y0 $x0 1
- set offs [incrange $offs $col 1]
- optimize_rows $y0 [expr {$x0 + 1}] $row
+ incr x0
+ optimize_rows $y0 $x0 $row
+ set previdlist [lindex $rowidlist $y0]
}
}
if {!$haspad} {
- set o {}
# Find the first column that doesn't have a line going right
for {set col [llength $idlist]} {[incr col -1] >= 0} {} {
- set o [lindex $offs $col]
- if {$o eq {}} {
+ set id [lindex $idlist $col]
+ if {$id eq {}} break
+ set x0 [lsearch -exact $previdlist $id]
+ if {$x0 < 0} {
# check if this is the link to the first child
- set id [lindex $idlist $col]
- set ranges [rowranges $id]
- if {$ranges ne {} && $row == [lindex $ranges 0]} {
+ set kid [lindex $displayorder $y0]
+ if {[lindex $children($curview,$id) 0] eq $kid} {
# it is, work out offset to child
- set y0 [expr {$row - 1}]
- set id [lindex $displayorder $y0]
- set x0 [lsearch -exact [lindex $rowidlist $y0] $id]
- if {$x0 >= 0} {
- set o [expr {$x0 - $col}]
- }
+ set x0 [lsearch -exact $previdlist $kid]
}
}
- if {$o eq {} || $o <= 0} break
+ if {$x0 <= $col} break
}
# Insert a pad at that column as long as it has a line and
- # isn't the last column, and adjust the next row' offsets
- if {$o ne {} && [incr col] < [llength $idlist]} {
- set y1 [expr {$row + 1}]
- set offs2 [lindex $rowoffsets $y1]
- set x1 -1
- foreach z $offs2 {
- incr x1
- if {$z eq {} || $x1 + $z < $col} continue
- lset rowoffsets $y1 [incrange $offs2 $x1 1]
- break
- }
+ # isn't the last column
+ if {$x0 >= 0 && [incr col] < [llength $idlist]} {
set idlist [linsert $idlist $col {}]
- set tmp [linsert $offs $col {}]
- incr col
- set offs [incrange $tmp $col -1]
+ lset rowidlist $row $idlist
+ changedrow $row
}
}
- lset rowidlist $row $idlist
- lset rowoffsets $row $offs
- set col 0
}
}
}
proc rowranges {id} {
- global phase idrowranges commitrow rowlaidout rowrangelist curview
-
- set ranges {}
- if {$phase eq {} ||
- ([info exists commitrow($curview,$id)]
- && $commitrow($curview,$id) < $rowlaidout)} {
- set ranges [lindex $rowrangelist $commitrow($curview,$id)]
- } elseif {[info exists idrowranges($id)]} {
- set ranges $idrowranges($id)
- }
- set linenos {}
- foreach rid $ranges {
- lappend linenos $commitrow($curview,$rid)
- }
- if {$linenos ne {}} {
- lset linenos 0 [expr {[lindex $linenos 0] + 1}]
- }
- return $linenos
-}
-
-# work around tk8.4 refusal to draw arrows on diagonal segments
-proc adjarrowhigh {coords} {
- global linespc
-
- set x0 [lindex $coords 0]
- set x1 [lindex $coords 2]
- if {$x0 != $x1} {
- set y0 [lindex $coords 1]
- set y1 [lindex $coords 3]
- if {$y0 - $y1 <= 2 * $linespc && $x1 == [lindex $coords 4]} {
- # we have a nearby vertical segment, just trim off the diag bit
- set coords [lrange $coords 2 end]
+ global commitrow curview children uparrowlen downarrowlen
+ global rowidlist
+
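+    # The line for $id runs from just below its first child down to
+    # $id's own row; where arrows break it, each contiguous piece is
+    # reported as a separate start/end pair.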
+ set kids $children($curview,$id)
+ if {$kids eq {}} {
+ return {}
+ }
+ set ret {}
+ lappend kids $id
+ foreach child $kids {
+ if {![info exists commitrow($curview,$child)]} break
+ set row $commitrow($curview,$child)
+ if {![info exists prev]} {
+ lappend ret [expr {$row + 1}]
} else {
- set slope [expr {($x0 - $x1) / ($y0 - $y1)}]
- set xi [expr {$x0 - $slope * $linespc / 2}]
- set yi [expr {$y0 - $linespc / 2}]
- set coords [lreplace $coords 0 1 $xi $y0 $xi $yi]
+ if {$row <= $prevrow} {
+ puts "oops children out of order [shortids $id] $row < [shortids $prev] $prevrow"
+ }
+ # see if the line extends the whole way from prevrow to row
+ if {$row > $prevrow + $uparrowlen + $downarrowlen &&
+ [lsearch -exact [lindex $rowidlist \
+ [expr {int(($row + $prevrow) / 2)}]] $id] < 0} {
+ # it doesn't, see where it ends
+ set r [expr {$prevrow + $downarrowlen}]
+ if {[lsearch -exact [lindex $rowidlist $r] $id] < 0} {
+ while {[incr r -1] > $prevrow &&
+ [lsearch -exact [lindex $rowidlist $r] $id] < 0} {}
+ } else {
+ while {[incr r] <= $row &&
+ [lsearch -exact [lindex $rowidlist $r] $id] >= 0} {}
+ incr r -1
+ }
+ lappend ret $r
+ # see where it starts up again
+ set r [expr {$row - $uparrowlen}]
+ if {[lsearch -exact [lindex $rowidlist $r] $id] < 0} {
+ while {[incr r] < $row &&
+ [lsearch -exact [lindex $rowidlist $r] $id] < 0} {}
+ } else {
+ while {[incr r -1] >= $prevrow &&
+ [lsearch -exact [lindex $rowidlist $r] $id] >= 0} {}
+ incr r
+ }
+ lappend ret $r
+ }
+ }
+ if {$child eq $id} {
+ lappend ret $row
}
+ set prev $id
+ set prevrow $row
}
- return $coords
+ return $ret
}
proc drawlineseg {id row endrow arrowlow} {
global rowidlist displayorder iddrawn linesegs
- global canv colormap linespc curview maxlinelen
+ global canv colormap linespc curview maxlinelen parentlist
set cols [list [lsearch -exact [lindex $rowidlist $row] $id]]
set le [expr {$row + 1}]
set itl [lindex $lines [expr {$i-1}] 2]
set al [$canv itemcget $itl -arrow]
set arrowlow [expr {$al eq "last" || $al eq "both"}]
- } elseif {$arrowlow &&
- [lsearch -exact [lindex $rowidlist [expr {$row-1}]] $id] >= 0} {
- set arrowlow 0
+ } elseif {$arrowlow} {
+ if {[lsearch -exact [lindex $rowidlist [expr {$row-1}]] $id] >= 0 ||
+ [lsearch -exact [lindex $parentlist [expr {$row-1}]] $id] >= 0} {
+ set arrowlow 0
+ }
}
set arrow [lindex {none first last both} [expr {$arrowhigh + 2*$arrowlow}]]
for {set y $le} {[incr y -1] > $row} {} {
set xc [lsearch -exact [lindex $rowidlist $row] $ch]
if {$xc < 0} {
puts "oops: drawlineseg: child $ch not on row $row"
- } else {
- if {$xc < $x - 1} {
+ } elseif {$xc != $x} {
+ if {($arrowhigh && $le == $row + 1) || $dir == 0} {
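+		# the child is in a different column, so draw a short
+		# diagonal segment half a line-space high towards it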
+ set d [expr {int(0.5 * $linespc)}]
+ set x1 [xc $row $x]
+ if {$xc < $x} {
+ set x2 [expr {$x1 - $d}]
+ } else {
+ set x2 [expr {$x1 + $d}]
+ }
+ set y2 [yc $row]
+ set y1 [expr {$y2 + $d}]
+ lappend coords $x1 $y1 $x2 $y2
+ } elseif {$xc < $x - 1} {
lappend coords [xc $row [expr {$x-1}]] [yc $row]
} elseif {$xc > $x + 1} {
lappend coords [xc $row [expr {$x+1}]] [yc $row]
} else {
set xn [xc $row $xp]
set yn [yc $row]
- # work around tk8.4 refusal to draw arrows on diagonal segments
- if {$arrowlow && $xn != [lindex $coords end-1]} {
- if {[llength $coords] < 4 ||
- [lindex $coords end-3] != [lindex $coords end-1] ||
- [lindex $coords end] - $yn > 2 * $linespc} {
- set xn [xc $row [expr {$xp - 0.5 * $dir}]]
- set yo [yc [expr {$row + 0.5}]]
- lappend coords $xn $yo $xn $yn
- }
- } else {
- lappend coords $xn $yn
- }
+ lappend coords $xn $yn
}
if {!$joinhigh} {
- if {$arrowhigh} {
- set coords [adjarrowhigh $coords]
- }
assigncolor $id
set t [$canv create line $coords -width [linewidth $id] \
-fill $colormap($id) -tags lines.$id -arrow $arrow]
set coords [concat $coords $clow]
if {!$joinhigh} {
lset lines [expr {$i-1}] 1 $le
- if {$arrowhigh} {
- set coords [adjarrowhigh $coords]
- }
} else {
# coalesce two pieces
$canv delete $ith
proc drawparentlinks {id row} {
global rowidlist canv colormap curview parentlist
- global idpos
+ global idpos linespc
set rowids [lindex $rowidlist $row]
set col [lsearch -exact $rowids $id]
set x [xc $row $col]
set y [yc $row]
set y2 [yc $row2]
+ set d [expr {int(0.5 * $linespc)}]
+ set ymid [expr {$y + $d}]
set ids [lindex $rowidlist $row2]
# rmx = right-most X coord used
set rmx 0
if {$x2 > $rmx} {
set rmx $x2
}
- if {[lsearch -exact $rowids $p] < 0} {
+ set j [lsearch -exact $rowids $p]
+ if {$j < 0} {
# drawlineseg will do this one for us
continue
}
assigncolor $p
# should handle duplicated parents here...
set coords [list $x $y]
- if {$i < $col - 1} {
- lappend coords [xc $row [expr {$i + 1}]] $y
- } elseif {$i > $col + 1} {
- lappend coords [xc $row [expr {$i - 1}]] $y
+ if {$i != $col} {
+ # if attaching to a vertical segment, draw a smaller
+ # slant for visual distinctness
+ if {$i == $j} {
+ if {$i < $col} {
+ lappend coords [expr {$x2 + $d}] $y $x2 $ymid
+ } else {
+ lappend coords [expr {$x2 - $d}] $y $x2 $ymid
+ }
+ } elseif {$i < $col && $i < $j} {
+ # segment slants towards us already
+ lappend coords [xc $row $j] $y
+ } else {
+ if {$i < $col - 1} {
+ lappend coords [expr {$x2 + $linespc}] $y
+ } elseif {$i > $col + 1} {
+ lappend coords [expr {$x2 - $linespc}] $y
+ }
+ lappend coords $x2 $y2
+ }
+ } else {
+ lappend coords $x2 $y2
}
- lappend coords $x2 $y2
set t [$canv create line $coords -width [linewidth $p] \
-fill $colormap($p) -tags lines.$p]
$canv lower $t
global linespc canv canv2 canv3 canvy0 fgcolor curview
global commitlisted commitinfo rowidlist parentlist
global rowtextx idpos idtags idheads idotherrefs
- global linehtag linentag linedtag
- global mainfont canvxmax boldrows boldnamerows fgcolor nullid nullid2
+ global linehtag linentag linedtag selectedline
+ global canvxmax boldrows boldnamerows fgcolor nullid nullid2
# listed is 0 for boundary, 1 for normal, 2 for left, 3 for right
set listed [lindex $commitlisted $row]
set name [lindex $commitinfo($id) 1]
set date [lindex $commitinfo($id) 2]
set date [formatdate $date]
- set font $mainfont
- set nfont $mainfont
+ set font mainfont
+ set nfont mainfont
set isbold [ishighlighted $row]
if {$isbold > 0} {
lappend boldrows $row
- lappend font bold
+ set font mainfontbold
if {$isbold > 1} {
lappend boldnamerows $row
- lappend nfont bold
+ set nfont mainfontbold
}
}
set linehtag($row) [$canv create text $xt $y -anchor w -fill $fgcolor \
set linentag($row) [$canv2 create text 3 $y -anchor w -fill $fgcolor \
-text $name -font $nfont -tags text]
set linedtag($row) [$canv3 create text 3 $y -anchor w -fill $fgcolor \
- -text $date -font $mainfont -tags text]
- set xr [expr {$xt + [font measure $mainfont $headline]}]
+ -text $date -font mainfont -tags text]
+ if {[info exists selectedline] && $selectedline == $row} {
+ make_secsel $row
+ }
+ set xr [expr {$xt + [font measure $font $headline]}]
if {$xr > $canvxmax} {
set canvxmax $xr
setcanvscroll
}
proc drawcmitrow {row} {
- global displayorder rowidlist
+ global displayorder rowidlist nrows_drawn
global iddrawn markingmatches
global commitinfo parentlist numcommits
- global filehighlight fhighlights findstring nhighlights
+ global filehighlight fhighlights findpattern nhighlights
global hlview vhighlights
global highlight_related rhighlights
if {[info exists filehighlight] && ![info exists fhighlights($row)]} {
askfilehighlight $row $id
}
- if {$findstring ne {} && ![info exists nhighlights($row)]} {
+ if {$findpattern ne {} && ![info exists nhighlights($row)]} {
askfindhighlight $row $id
}
if {$highlight_related ne "None" && ![info exists rhighlights($row)]} {
assigncolor $id
drawcmittext $id $row $col
set iddrawn($id) 1
+ incr nrows_drawn
}
if {$markingmatches} {
markrowmatches $row $id
}
proc drawcommits {row {endrow {}}} {
- global numcommits iddrawn displayorder curview
- global parentlist rowidlist
+ global numcommits iddrawn displayorder curview need_redisplay
+ global parentlist rowidlist rowfinal uparrowlen downarrowlen nrows_drawn
if {$row < 0} {
set row 0
set endrow [expr {$numcommits - 1}]
}
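+    # lay out and optimize the rows we are about to draw, plus a few
+    # rows either side for the up/down arrow heuristics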
+ set rl1 [expr {$row - $downarrowlen - 3}]
+ if {$rl1 < 0} {
+ set rl1 0
+ }
+ set ro1 [expr {$row - 3}]
+ if {$ro1 < 0} {
+ set ro1 0
+ }
+ set r2 [expr {$endrow + $uparrowlen + 3}]
+ if {$r2 > $numcommits} {
+ set r2 $numcommits
+ }
+ for {set r $rl1} {$r < $r2} {incr r} {
+ if {[lindex $rowidlist $r] ne {} && [lindex $rowfinal $r]} {
+ if {$rl1 < $r} {
+ layoutrows $rl1 $r
+ }
+ set rl1 [expr {$r + 1}]
+ }
+ }
+ if {$rl1 < $r} {
+ layoutrows $rl1 $r
+ }
+ optimize_rows $ro1 0 $r2
+ if {$need_redisplay || $nrows_drawn > 2000} {
+ clear_display
+ drawvisible
+ }
+
# make the lines join to already-drawn rows either side
set r [expr {$row - 1}]
if {$r < 0 || ![info exists iddrawn([lindex $displayorder $r])]} {
drawcmitrow $r
if {$r == $er} break
set nextid [lindex $displayorder [expr {$r + 1}]]
- if {$wasdrawn && [info exists iddrawn($nextid)]} {
- catch {unset prevlines}
- continue
- }
+ if {$wasdrawn && [info exists iddrawn($nextid)]} continue
drawparentlinks $id $r
- if {[info exists lineends($r)]} {
- foreach lid $lineends($r) {
- unset prevlines($lid)
- }
- }
set rowids [lindex $rowidlist $r]
foreach lid $rowids {
if {$lid eq {}} continue
+ if {[info exists lineend($lid)] && $lineend($lid) > $r} continue
if {$lid eq $id} {
# see if this is the first child of any of its parents
foreach p [lindex $parentlist $r] {
if {[lsearch -exact $rowids $p] < 0} {
# make this line extend up to the child
- set le [drawlineseg $p $r $er 0]
- lappend lineends($le) $p
- set prevlines($p) 1
+ set lineend($p) [drawlineseg $p $r $er 0]
}
}
- } elseif {![info exists prevlines($lid)]} {
- set le [drawlineseg $lid $r $er 1]
- lappend lineends($le) $lid
- set prevlines($lid) 1
+ } else {
+ set lineend($lid) [drawlineseg $lid $r $er 1]
}
}
}
}
proc clear_display {} {
- global iddrawn linesegs
+ global iddrawn linesegs need_redisplay nrows_drawn
global vhighlights fhighlights nhighlights rhighlights
allcanvs delete all
catch {unset fhighlights}
catch {unset nhighlights}
catch {unset rhighlights}
+ set need_redisplay 0
+ set nrows_drawn 0
}
proc findcrossings {id} {
- global rowidlist parentlist numcommits rowoffsets displayorder
+ global rowidlist parentlist numcommits displayorder
set cross {}
set ccross {}
set e [expr {$numcommits - 1}]
}
if {$e <= $s} continue
- set x [lsearch -exact [lindex $rowidlist $e] $id]
- if {$x < 0} {
- puts "findcrossings: oops, no [shortids $id] in row $e"
- continue
- }
for {set row $e} {[incr row -1] >= $s} {} {
+ set x [lsearch -exact [lindex $rowidlist $row] $id]
+ if {$x < 0} break
set olds [lindex $parentlist $row]
set kid [lindex $displayorder $row]
set kidx [lsearch -exact [lindex $rowidlist $row] $kid]
}
}
}
- set inc [lindex $rowoffsets $row $x]
- if {$inc eq {}} break
- incr x $inc
}
}
return [concat $ccross {{}} $cross]
proc drawtags {id x xt y1} {
global idtags idheads idotherrefs mainhead
global linespc lthickness
- global canv mainfont commitrow rowtextx curview fgcolor bgcolor
+ global canv commitrow rowtextx curview fgcolor bgcolor
set marks {}
set ntags 0
foreach tag $marks {
incr i
if {$i >= $ntags && $i < $ntags + $nheads && $tag eq $mainhead} {
- set wid [font measure [concat $mainfont bold] $tag]
+ set wid [font measure mainfontbold $tag]
} else {
- set wid [font measure $mainfont $tag]
+ set wid [font measure mainfont $tag]
}
lappend xvals $xt
lappend wvals $wid
foreach tag $marks x $xvals wid $wvals {
set xl [expr {$x + $delta}]
set xr [expr {$x + $delta + $wid + $lthickness}]
- set font $mainfont
+ set font mainfont
if {[incr ntags -1] >= 0} {
# draw a tag
set t [$canv create polygon $x [expr {$yt + $delta}] $xl $yt \
if {[incr nheads -1] >= 0} {
set col green
if {$tag eq $mainhead} {
- lappend font bold
+ set font mainfontbold
}
} else {
set col "#ddddff"
$canv create polygon $x $yt $xr $yt $xr $yb $x $yb \
-width 1 -outline black -fill $col -tags tag.$id
if {[regexp {^(remotes/.*/|remotes/)} $tag match remoteprefix]} {
- set rwid [font measure $mainfont $remoteprefix]
+ set rwid [font measure mainfont $remoteprefix]
set xi [expr {$x + 1}]
set yti [expr {$yt + 1}]
set xri [expr {$x + $rwid}]
}
proc show_status {msg} {
- global canv mainfont fgcolor
+ global canv fgcolor
clear_display
- $canv create text 3 3 -anchor nw -text $msg -font $mainfont \
+ $canv create text 3 3 -anchor nw -text $msg -font mainfont \
-tags text -fill $fgcolor
}
# on that row and below will move down one row.
proc insertrow {row newcmit} {
global displayorder parentlist commitlisted children
- global commitrow curview rowidlist rowoffsets numcommits
- global rowrangelist rowlaidout rowoptim numcommits
- global selectedline rowchk commitidx
+ global commitrow curview rowidlist rowisopt rowfinal numcommits
+ global numcommits
+ global selectedline commitidx ordertok
if {$row >= $numcommits} {
puts "oops, inserting new row $row but only have $numcommits rows"
set commitrow($curview,$id) $r
}
incr commitidx($curview)
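+    # the new commit inherits its parent's order token so that idcol
+    # will place it in the same column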
+ set ordertok($curview,$newcmit) $ordertok($curview,$p)
- set idlist [lindex $rowidlist $row]
- set offs [lindex $rowoffsets $row]
- set newoffs {}
- foreach x $idlist {
- if {$x eq {} || ($x eq $p && [llength $kids] == 1)} {
- lappend newoffs {}
- } else {
- lappend newoffs 0
- }
- }
- if {[llength $kids] == 1} {
- set col [lsearch -exact $idlist $p]
- lset idlist $col $newcmit
- } else {
- set col [llength $idlist]
- lappend idlist $newcmit
- lappend offs {}
- lset rowoffsets $row $offs
- }
- set rowidlist [linsert $rowidlist $row $idlist]
- set rowoffsets [linsert $rowoffsets [expr {$row+1}] $newoffs]
-
- set rowrangelist [linsert $rowrangelist $row {}]
- if {[llength $kids] > 1} {
- set rp1 [expr {$row + 1}]
- set ranges [lindex $rowrangelist $rp1]
- if {$ranges eq {}} {
- set ranges [list $newcmit $p]
- } elseif {[lindex $ranges end-1] eq $p} {
- lset ranges end-1 $newcmit
+ if {$row < [llength $rowidlist]} {
+ set idlist [lindex $rowidlist $row]
+ if {$idlist ne {}} {
+ if {[llength $kids] == 1} {
+ set col [lsearch -exact $idlist $p]
+ lset idlist $col $newcmit
+ } else {
+ set col [llength $idlist]
+ lappend idlist $newcmit
+ }
}
- lset rowrangelist $rp1 $ranges
+ set rowidlist [linsert $rowidlist $row $idlist]
+ set rowisopt [linsert $rowisopt $row 0]
+ set rowfinal [linsert $rowfinal $row [lindex $rowfinal $row]]
}
- catch {unset rowchk}
-
- incr rowlaidout
- incr rowoptim
incr numcommits
if {[info exists selectedline] && $selectedline >= $row} {
# Remove a commit that was inserted with insertrow on row $row.
proc removerow {row} {
global displayorder parentlist commitlisted children
- global commitrow curview rowidlist rowoffsets numcommits
- global rowrangelist idrowranges rowlaidout rowoptim numcommits
- global linesegends selectedline rowchk commitidx
+ global commitrow curview rowidlist rowisopt rowfinal numcommits
+ global numcommits
+ global linesegends selectedline commitidx
if {$row >= $numcommits} {
puts "oops, removing row $row but only have $numcommits rows"
}
incr commitidx($curview) -1
- set rowidlist [lreplace $rowidlist $row $row]
- set rowoffsets [lreplace $rowoffsets $rp1 $rp1]
- if {$kids ne {}} {
- set offs [lindex $rowoffsets $row]
- set offs [lreplace $offs end end]
- lset rowoffsets $row $offs
- }
-
- set rowrangelist [lreplace $rowrangelist $row $row]
- if {[llength $kids] > 0} {
- set ranges [lindex $rowrangelist $row]
- if {[lindex $ranges end-1] eq $id} {
- set ranges [lreplace $ranges end-1 end]
- lset rowrangelist $row $ranges
- }
+ if {$row < [llength $rowidlist]} {
+ set rowidlist [lreplace $rowidlist $row $row]
+ set rowisopt [lreplace $rowisopt $row $row]
+ set rowfinal [lreplace $rowfinal $row $row]
}
- catch {unset rowchk}
-
- incr rowlaidout -1
- incr rowoptim -1
incr numcommits -1
if {[info exists selectedline] && $selectedline > $row} {
set curtextcursor $c
}
-proc nowbusy {what} {
- global isbusy
+proc nowbusy {what {name {}}} {
+ global isbusy busyname statusw
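+    # an optional name is displayed in the status area until notbusy
+    # is called for the same thing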
if {[array names isbusy] eq {}} {
. config -cursor watch
settextcursor watch
}
set isbusy($what) 1
+ set busyname($what) $name
+ if {$name ne {}} {
+ $statusw conf -text $name
+ }
}
proc notbusy {what} {
- global isbusy maincursor textcursor
+ global isbusy maincursor textcursor busyname statusw
- catch {unset isbusy($what)}
+ catch {
+ unset isbusy($what)
+ if {$busyname($what) ne {} &&
+ [$statusw cget -text] eq $busyname($what)} {
+ $statusw conf -text {}
+ }
+ }
if {[array names isbusy] eq {}} {
. config -cursor $maincursor
settextcursor $textcursor
return $matches
}
-proc dofind {{rev 0}} {
+proc dofind {{dirn 1} {wrap 1}} {
global findstring findstartline findcurline selectedline numcommits
+ global gdttype filehighlight fh_serial find_dirn findallowwrap
- unmarkmatches
- cancel_next_highlight
+ if {[info exists find_dirn]} {
+ if {$find_dirn == $dirn} return
+ stopfinding
+ }
focus .
if {$findstring eq {} || $numcommits == 0} return
if {![info exists selectedline]} {
- set findstartline [lindex [visiblerows] $rev]
+ set findstartline [lindex [visiblerows] [expr {$dirn < 0}]]
} else {
set findstartline $selectedline
}
set findcurline $findstartline
- nowbusy finding
- if {!$rev} {
- run findmore
- } else {
- if {$findcurline == 0} {
- set findcurline $numcommits
- }
- incr findcurline -1
- run findmorerev
+ nowbusy finding "Searching"
+ if {$gdttype ne "containing:" && ![info exists filehighlight]} {
+ after cancel do_file_hl $fh_serial
+ do_file_hl $fh_serial
}
+ set find_dirn $dirn
+ set findallowwrap $wrap
+ run findmore
}
-proc findnext {restart} {
- global findcurline
- if {![info exists findcurline]} {
- if {$restart} {
- dofind
- } else {
- bell
- }
- } else {
- run findmore
- nowbusy finding
- }
-}
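+# Stop an in-progress search: forget the search direction and current
+# line, clear the busy indicator and reset the search progress bar.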
+proc stopfinding {} {
+ global find_dirn findcurline fprogcoord
-proc findprev {} {
- global findcurline
- if {![info exists findcurline]} {
- dofind 1
- } else {
- run findmorerev
- nowbusy finding
+ if {[info exists find_dirn]} {
+ unset find_dirn
+ unset findcurline
+ notbusy finding
+ set fprogcoord 0
+ adjustprogress
}
}
proc findmore {} {
- global commitdata commitinfo numcommits findstring findpattern findloc
+ global commitdata commitinfo numcommits findpattern findloc
global findstartline findcurline displayorder
+ global find_dirn gdttype fhighlights fprogcoord
+ global findallowwrap
- set fldtypes {Headline Author Date Committer CDate Comments}
- set l [expr {$findcurline + 1}]
- if {$l >= $numcommits} {
- set l 0
- }
- if {$l <= $findstartline} {
- set lim [expr {$findstartline + 1}]
- } else {
- set lim $numcommits
- }
- if {$lim - $l > 500} {
- set lim [expr {$l + 500}]
- }
- set last 0
- for {} {$l < $lim} {incr l} {
- set id [lindex $displayorder $l]
- # shouldn't happen unless git log doesn't give all the commits...
- if {![info exists commitdata($id)]} continue
- if {![doesmatch $commitdata($id)]} continue
- if {![info exists commitinfo($id)]} {
- getcommit $id
- }
- set info $commitinfo($id)
- foreach f $info ty $fldtypes {
- if {($findloc eq "All fields" || $findloc eq $ty) &&
- [doesmatch $f]} {
- findselectline $l
- notbusy finding
- return 0
- }
- }
- }
- if {$l == $findstartline + 1} {
- bell
- unset findcurline
- notbusy finding
+ if {![info exists find_dirn]} {
return 0
}
- set findcurline [expr {$l - 1}]
- return 1
-}
-
-proc findmorerev {} {
- global commitdata commitinfo numcommits findstring findpattern findloc
- global findstartline findcurline displayorder
-
set fldtypes {Headline Author Date Committer CDate Comments}
set l $findcurline
- if {$l == 0} {
- set l $numcommits
- }
- incr l -1
- if {$l >= $findstartline} {
- set lim [expr {$findstartline - 1}]
+ set moretodo 0
+ if {$find_dirn > 0} {
+ incr l
+ if {$l >= $numcommits} {
+ set l 0
+ }
+ if {$l <= $findstartline} {
+ set lim [expr {$findstartline + 1}]
+ } else {
+ set lim $numcommits
+ set moretodo $findallowwrap
+ }
} else {
- set lim -1
- }
- if {$l - $lim > 500} {
- set lim [expr {$l - 500}]
- }
- set last 0
- for {} {$l > $lim} {incr l -1} {
- set id [lindex $displayorder $l]
- if {![doesmatch $commitdata($id)]} continue
- if {![info exists commitinfo($id)]} {
- getcommit $id
+ if {$l == 0} {
+ set l $numcommits
+ }
+ incr l -1
+ if {$l >= $findstartline} {
+ set lim [expr {$findstartline - 1}]
+ } else {
+ set lim -1
+ set moretodo $findallowwrap
+ }
+ }
+ set n [expr {($lim - $l) * $find_dirn}]
+ if {$n > 500} {
+ set n 500
+ set moretodo 1
+ }
+ set found 0
+ set domore 1
+ if {$gdttype eq "containing:"} {
+ for {} {$n > 0} {incr n -1; incr l $find_dirn} {
+ set id [lindex $displayorder $l]
+ # shouldn't happen unless git log doesn't give all the commits...
+ if {![info exists commitdata($id)]} continue
+ if {![doesmatch $commitdata($id)]} continue
+ if {![info exists commitinfo($id)]} {
+ getcommit $id
+ }
+ set info $commitinfo($id)
+ foreach f $info ty $fldtypes {
+ if {($findloc eq "All fields" || $findloc eq $ty) &&
+ [doesmatch $f]} {
+ set found 1
+ break
+ }
+ }
+ if {$found} break
}
- set info $commitinfo($id)
- foreach f $info ty $fldtypes {
- if {($findloc eq "All fields" || $findloc eq $ty) &&
- [doesmatch $f]} {
- findselectline $l
- notbusy finding
- return 0
+ } else {
+ for {} {$n > 0} {incr n -1; incr l $find_dirn} {
+ set id [lindex $displayorder $l]
+ if {![info exists fhighlights($l)]} {
+ askfilehighlight $l $id
+ if {$domore} {
+ set domore 0
+ set findcurline [expr {$l - $find_dirn}]
+ }
+ } elseif {$fhighlights($l)} {
+ set found $domore
+ break
}
}
}
- if {$l == -1} {
- bell
+ if {$found || ($domore && !$moretodo)} {
unset findcurline
+ unset find_dirn
notbusy finding
+ set fprogcoord 0
+ adjustprogress
+ if {$found} {
+ findselectline $l
+ } else {
+ bell
+ }
return 0
}
- set findcurline [expr {$l + 1}]
- return 1
+ if {!$domore} {
+ flushhighlights
+ } else {
+ set findcurline [expr {$l - $find_dirn}]
+ }
+ set n [expr {($findcurline - $findstartline) * $find_dirn - 1}]
+ if {$n < 0} {
+ incr n $numcommits
+ }
+ set fprogcoord [expr {$n * 1.0 / $numcommits}]
+ adjustprogress
+ return $domore
}
proc findselectline {l} {
- global findloc commentend ctext findcurline markingmatches
+ global findloc commentend ctext findcurline markingmatches gdttype
set markingmatches 1
set findcurline $l
}
proc unmarkmatches {} {
- global findids markingmatches findcurline
+ global markingmatches
allcanvs delete matches
- catch {unset findids}
set markingmatches 0
- catch {unset findcurline}
+ stopfinding
}
proc selcanvline {w x y} {
# append some text to the ctext widget, and make any SHA1 ID
# that we know about be a clickable link.
proc appendwithlinks {text tags} {
- global ctext commitrow linknum curview
+ global ctext commitrow linknum curview pendinglinks
set start [$ctext index "end - 1c"]
$ctext insert end $text $tags
set s [lindex $l 0]
set e [lindex $l 1]
set linkid [string range $text $s $e]
- if {![info exists commitrow($curview,$linkid)]} continue
incr e
- $ctext tag add link "$start + $s c" "$start + $e c"
+ $ctext tag delete link$linknum
$ctext tag add link$linknum "$start + $s c" "$start + $e c"
- $ctext tag bind link$linknum <1> \
- [list selectline $commitrow($curview,$linkid) 1]
+ setlink $linkid link$linknum
incr linknum
}
- $ctext tag conf link -foreground blue -underline 1
- $ctext tag bind link <Enter> { %W configure -cursor hand2 }
- $ctext tag bind link <Leave> { %W configure -cursor $curtextcursor }
+}
+
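+# Make text tag $lk a clickable link to commit $id.  If the commit
+# isn't known yet, remember the tag in pendinglinks and register
+# makelink to activate it once the commit turns up.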
+proc setlink {id lk} {
+ global curview commitrow ctext pendinglinks commitinterest
+
+ if {[info exists commitrow($curview,$id)]} {
+ $ctext tag conf $lk -foreground blue -underline 1
+ $ctext tag bind $lk <1> [list selectline $commitrow($curview,$id) 1]
+ $ctext tag bind $lk <Enter> {linkcursor %W 1}
+ $ctext tag bind $lk <Leave> {linkcursor %W -1}
+ } else {
+ lappend pendinglinks($id) $lk
+ lappend commitinterest($id) {makelink %I}
+ }
+}
+
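+# Called via commitinterest when commit $id appears; activates any
+# links that were waiting for it.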
+proc makelink {id} {
+ global pendinglinks
+
+ if {![info exists pendinglinks($id)]} return
+ foreach lk $pendinglinks($id) {
+ setlink $id $lk
+ }
+ unset pendinglinks($id)
+}
+
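+# Reference-count Enter/Leave events so that overlapping link tags
+# don't reset the mouse cursor prematurely.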
+proc linkcursor {w inc} {
+ global linkentercount curtextcursor
+
+ if {[incr linkentercount $inc] > 0} {
+ $w configure -cursor hand2
+ } else {
+ $w configure -cursor $curtextcursor
+ if {$linkentercount < 0} {
+ set linkentercount 0
+ }
+ }
}
proc viewnextline {dir} {
$ctext tag delete $lk
$ctext insert $pos $sep
$ctext insert $pos [lindex $ti 0] $lk
- if {[info exists commitrow($curview,$id)]} {
- $ctext tag conf $lk -foreground blue
- $ctext tag bind $lk <1> \
- [list selectline $commitrow($curview,$id) 1]
- $ctext tag conf $lk -underline 1
- $ctext tag bind $lk <Enter> { %W configure -cursor hand2 }
- $ctext tag bind $lk <Leave> \
- { %W configure -cursor $curtextcursor }
- }
+ setlink $id $lk
set sep ", "
}
}
}
}
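+# Highlight row $l as the current selection by drawing a filled
+# rectangle behind its text items on all three canvases.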
+proc make_secsel {l} {
+ global linehtag linentag linedtag canv canv2 canv3
+
+ if {![info exists linehtag($l)]} return
+ $canv delete secsel
+ set t [eval $canv create rect [$canv bbox $linehtag($l)] -outline {{}} \
+ -tags secsel -fill [$canv cget -selectbackground]]
+ $canv lower $t
+ $canv2 delete secsel
+ set t [eval $canv2 create rect [$canv2 bbox $linentag($l)] -outline {{}} \
+ -tags secsel -fill [$canv2 cget -selectbackground]]
+ $canv2 lower $t
+ $canv3 delete secsel
+ set t [eval $canv3 create rect [$canv3 bbox $linedtag($l)] -outline {{}} \
+ -tags secsel -fill [$canv3 cget -selectbackground]]
+ $canv3 lower $t
+}
+
proc selectline {l isnew} {
- global canv canv2 canv3 ctext commitinfo selectedline
- global displayorder linehtag linentag linedtag
+ global canv ctext commitinfo selectedline
+ global displayorder
global canvy0 linespc parentlist children curview
global currentid sha1entry
global commentend idtags linknum
catch {unset pending_select}
$canv delete hover
normalline
- cancel_next_highlight
unsel_reflist
+ stopfinding
if {$l < 0 || $l >= $numcommits} return
set y [expr {$canvy0 + $l * $linespc}]
set ymax [lindex [$canv cget -scrollregion] 3]
drawvisible
}
- if {![info exists linehtag($l)]} return
- $canv delete secsel
- set t [eval $canv create rect [$canv bbox $linehtag($l)] -outline {{}} \
- -tags secsel -fill [$canv cget -selectbackground]]
- $canv lower $t
- $canv2 delete secsel
- set t [eval $canv2 create rect [$canv2 bbox $linentag($l)] -outline {{}} \
- -tags secsel -fill [$canv2 cget -selectbackground]]
- $canv2 lower $t
- $canv3 delete secsel
- set t [eval $canv3 create rect [$canv3 bbox $linedtag($l)] -outline {{}} \
- -tags secsel -fill [$canv3 cget -selectbackground]]
- $canv3 lower $t
+ make_secsel $l
if {$isnew} {
addtohistory [list selectline $l 0]
catch {unset currentid}
allcanvs delete secsel
rhighlight_none
- cancel_next_highlight
}
proc reselectline {} {
$ctext insert end "$f\n" filesep
$ctext config -state disabled
$ctext yview $commentend
+ settabs 0
}
proc getblobline {bf id} {
}
proc mergediff {id l} {
- global diffmergeid diffopts mdifffd
+ global diffmergeid mdifffd
global diffids
global parentlist
+ global limitdiffs viewfiles curview
set diffmergeid $id
set diffids $id
-    # this doesn't seem to actually affect anything...
-    set env(GIT_DIFF_OPTS) $diffopts
set cmd [concat | git diff-tree --no-commit-id --cc $id]
+ if {$limitdiffs && $viewfiles($curview) ne {}} {
+ set cmd [concat $cmd -- $viewfiles($curview)]
+ }
if {[catch {set mdf [open $cmd r]} err]} {
error_popup "Error getting merge diffs: $err"
return
fconfigure $mdf -blocking 0
set mdifffd($id) $mdf
set np [llength [lindex $parentlist $l]]
+ settabs $np
filerun $mdf [list getmergediffline $mdf $id $np]
}
proc startdiff {ids} {
global treediffs diffids treepending diffmergeid nullid nullid2
+ settabs 1
set diffids $ids
catch {unset diffmergeid}
if {![info exists treediffs($ids)] ||
}
}
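+# Return 1 if $name is selected by $filter: an entry ending in "/"
+# matches anything beneath that directory, while any other entry
+# matches the path itself or anything below it.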
+proc path_filter {filter name} {
+ foreach p $filter {
+ set l [string length $p]
+ if {[string index $p end] eq "/"} {
+ if {[string compare -length $l $p $name] == 0} {
+ return 1
+ }
+ } else {
+ if {[string compare -length $l $p $name] == 0 &&
+ ([string length $name] == $l ||
+ [string index $name $l] eq "/")} {
+ return 1
+ }
+ }
+ }
+ return 0
+}
+
proc addtocflist {ids} {
- global treediffs cflist
+ global treediffs
+
add_flist $treediffs($ids)
getblobdiffs $ids
}
proc gettreediffline {gdtf ids} {
global treediff treediffs treepending diffids diffmergeid
- global cmitmode
+ global cmitmode viewfiles curview limitdiffs
set nr 0
while {[incr nr] <= 1000 && [gets $gdtf line] >= 0} {
return [expr {$nr >= 1000? 2: 1}]
}
close $gdtf
- set treediffs($ids) $treediff
+ if {$limitdiffs && $viewfiles($curview) ne {}} {
+ set flist {}
+ foreach f $treediff {
+ if {[path_filter $viewfiles($curview) $f]} {
+ lappend flist $f
+ }
+ }
+ set treediffs($ids) $flist
+ } else {
+ set treediffs($ids) $treediff
+ }
unset treepending
if {$cmitmode eq "tree"} {
gettree $diffids
}
proc getblobdiffs {ids} {
- global diffopts blobdifffd diffids env
+ global blobdifffd diffids env
global diffinhdr treediffs
global diffcontext
+ global limitdiffs viewfiles curview
- set env(GIT_DIFF_OPTS) $diffopts
- if {[catch {set bdf [open [diffcmd $ids "-p -C --no-commit-id -U$diffcontext"] r]} err]} {
+ set cmd [diffcmd $ids "-p -C --no-commit-id -U$diffcontext"]
+ if {$limitdiffs && $viewfiles($curview) ne {}} {
+ set cmd [concat $cmd -- $viewfiles($curview)]
+ }
+ if {[catch {set bdf [open $cmd r]} err]} {
puts "error getting diffs: $err"
return
}
set diffinhdr 0
} elseif {$diffinhdr} {
- if {![string compare -length 12 "rename from " $line] ||
- ![string compare -length 10 "copy from " $line]} {
+ if {![string compare -length 12 "rename from " $line]} {
set fname [string range $line [expr 6 + [string first " from " $line] ] end]
if {[string index $fname 0] eq "\""} {
set fname [lindex $fname 0]
proc clear_ctext {{first 1.0}} {
global ctext smarktop smarkbot
+ global pendinglinks
set l [lindex [split $first .] 0]
if {![info exists smarktop] || [$ctext compare $first < $smarktop.0]} {
set smarkbot $l
}
$ctext delete $first end
+ if {$first eq "1.0"} {
+ catch {unset pendinglinks}
+ }
+}
+
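+# Adjust the tab stops of the details pane.  A non-empty $firstab
+# gives the width of the diff prefix column (one character per parent
+# for a merge diff); it is only remembered when running under Tk 8.5.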
+proc settabs {{firstab {}}} {
+ global firsttabstop tabstop ctext have_tk85
+
+ if {$firstab ne {} && $have_tk85} {
+ set firsttabstop $firstab
+ }
+ set w [font measure textfont "0"]
+ if {$firsttabstop != 0} {
+ $ctext conf -tabs [list [expr {($firsttabstop + $tabstop) * $w}] \
+ [expr {($firsttabstop + 2 * $tabstop) * $w}]]
+ } elseif {$have_tk85 || $tabstop != 8} {
+ $ctext conf -tabs [expr {$tabstop * $w}]
+ } else {
+ $ctext conf -tabs {}
+ }
}
proc incrsearch {name ix op} {
}
proc setcoords {} {
- global linespc charspc canvx0 canvy0 mainfont
+ global linespc charspc canvx0 canvy0
global xspc1 xspc2 lthickness
- set linespc [font metrics $mainfont -linespace]
- set charspc [font measure $mainfont "m"]
+ set linespc [font metrics mainfont -linespace]
+ set charspc [font measure mainfont "m"]
set canvy0 [expr {int(3 + 0.5 * $linespc)}]
set canvx0 [expr {int(3 + 0.5 * $linespc)}]
set lthickness [expr {int($linespc / 9) + 1}]
}
}
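+# Parse a font specification {family size ?normal|bold? ?roman|italic?}
+# into the fontattr array; a negative size is in pixels and is
+# converted to points.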
+proc parsefont {f n} {
+ global fontattr
+
+ set fontattr($f,family) [lindex $n 0]
+ set s [lindex $n 1]
+ if {$s eq {} || $s == 0} {
+ set s 10
+ } elseif {$s < 0} {
+ set s [expr {int(-$s / [winfo fpixels . 1p] + 0.5)}]
+ }
+ set fontattr($f,size) $s
+ set fontattr($f,weight) normal
+ set fontattr($f,slant) roman
+ foreach style [lrange $n 2 end] {
+ switch -- $style {
+ "normal" -
+ "bold" {set fontattr($f,weight) $style}
+ "roman" -
+ "italic" {set fontattr($f,slant) $style}
+ }
+ }
+}
+
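+# Return the -family/-size/-weight/-slant options of font $f,
+# forcing the weight to bold when $isbold is set.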
+proc fontflags {f {isbold 0}} {
+ global fontattr
+
+ return [list -family $fontattr($f,family) -size $fontattr($f,size) \
+ -weight [expr {$isbold? "bold": $fontattr($f,weight)}] \
+ -slant $fontattr($f,slant)]
+}
+
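+# Rebuild the list form of font $f (family, size and any bold/italic
+# flags) from the fontattr array.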
+proc fontname {f} {
+ global fontattr
+
+ set n [list $fontattr($f,family) $fontattr($f,size)]
+ if {$fontattr($f,weight) eq "bold"} {
+ lappend n "bold"
+ }
+ if {$fontattr($f,slant) eq "italic"} {
+ lappend n "italic"
+ }
+ return $n
+}
+
proc incrfont {inc} {
global mainfont textfont ctext canv phase cflist showrefstop
- global charspc tabstop
- global stopped entries
+ global stopped entries fontattr
+
unmarkmatches
- set mainfont [lreplace $mainfont 1 1 [expr {[lindex $mainfont 1] + $inc}]]
- set textfont [lreplace $textfont 1 1 [expr {[lindex $textfont 1] + $inc}]]
+ set s $fontattr(mainfont,size)
+ incr s $inc
+ if {$s < 1} {
+ set s 1
+ }
+ set fontattr(mainfont,size) $s
+ font config mainfont -size $s
+ font config mainfontbold -size $s
+ set mainfont [fontname mainfont]
+ set s $fontattr(textfont,size)
+ incr s $inc
+ if {$s < 1} {
+ set s 1
+ }
+ set fontattr(textfont,size) $s
+ font config textfont -size $s
+ font config textfontbold -size $s
+ set textfont [fontname textfont]
setcoords
- $ctext conf -font $textfont -tabs "[expr {$tabstop * $charspc}]"
- $cflist conf -font $textfont
- $ctext tag conf filesep -font [concat $textfont bold]
- foreach e $entries {
- $e conf -font $mainfont
- }
- if {$phase eq "getcommits"} {
- $canv itemconf textitems -font $mainfont
- }
- if {[info exists showrefstop] && [winfo exists $showrefstop]} {
- $showrefstop.list conf -font $mainfont
- }
+ settabs
redisplay
}
proc linehover {} {
global hoverx hovery hoverid hovertimer
global canv linespc lthickness
- global commitinfo mainfont
+ global commitinfo
set text [lindex $commitinfo($hoverid) 0]
set ymax [lindex [$canv cget -scrollregion] 3]
set y [expr {$hovery + $yfrac * $ymax - $linespc / 2}]
set x0 [expr {$x - 2 * $lthickness}]
set y0 [expr {$y - 2 * $lthickness}]
- set x1 [expr {$x + [font measure $mainfont $text] + 2 * $lthickness}]
+ set x1 [expr {$x + [font measure mainfont $text] + 2 * $lthickness}]
set y1 [expr {$y + $linespc + 2 * $lthickness}]
set t [$canv create rectangle $x0 $y0 $x1 $y1 \
-fill \#ffff80 -outline black -width 1 -tags hover]
$canv raise $t
set t [$canv create text $x $y -anchor nw -text $text -tags hover \
- -font $mainfont]
+ -font mainfont]
$canv raise $t
}
}
proc lineclick {x y id isnew} {
- global ctext commitinfo children canv thickerline curview
+ global ctext commitinfo children canv thickerline curview commitrow
if {![info exists commitinfo($id)] && ![getcommit $id]} return
unmarkmatches
# fill the details pane with info about this line
$ctext conf -state normal
clear_ctext
- $ctext tag conf link -foreground blue -underline 1
- $ctext tag bind link <Enter> { %W configure -cursor hand2 }
- $ctext tag bind link <Leave> { %W configure -cursor $curtextcursor }
+ settabs 0
$ctext insert end "Parent:\t"
- $ctext insert end $id [list link link0]
- $ctext tag bind link0 <1> [list selbyid $id]
+ $ctext insert end $id link0
+ setlink $id link0
set info $commitinfo($id)
$ctext insert end "\n\t[lindex $info 0]\n"
$ctext insert end "\tAuthor:\t[lindex $info 1]\n"
if {![info exists commitinfo($child)] && ![getcommit $child]} continue
set info $commitinfo($child)
$ctext insert end "\n\t"
- $ctext insert end $child [list link link$i]
- $ctext tag bind link$i <1> [list selbyid $child]
+ $ctext insert end $child link$i
+ setlink $child link$i
$ctext insert end "\n\t[lindex $info 0]"
$ctext insert end "\n\tAuthor:\t[lindex $info 1]"
set date [formatdate [lindex $info 2]]
global rowctxmenu commitrow selectedline rowmenuid curview
global nullid nullid2 fakerowmenu mainhead
+ stopfinding
set rowmenuid $id
if {![info exists selectedline]
|| $commitrow($curview,$id) eq $selectedline} {
clear_ctext
init_flist "Top"
$ctext insert end "From "
- $ctext tag conf link -foreground blue -underline 1
- $ctext tag bind link <Enter> { %W configure -cursor hand2 }
- $ctext tag bind link <Leave> { %W configure -cursor $curtextcursor }
- $ctext tag bind link0 <1> [list selbyid $oldid]
- $ctext insert end $oldid [list link link0]
+ $ctext insert end $oldid link0
+ setlink $oldid link0
$ctext insert end "\n "
$ctext insert end [lindex $commitinfo($oldid) 0]
$ctext insert end "\n\nTo "
- $ctext tag bind link1 <1> [list selbyid $newid]
- $ctext insert end $newid [list link link1]
+ $ctext insert end $newid link1
+ setlink $newid link1
$ctext insert end "\n "
$ctext insert end [lindex $commitinfo($newid) 0]
$ctext insert end "\n"
set newid [$patchtop.tosha1 get]
set fname [$patchtop.fname get]
set cmd [diffcmd [list $oldid $newid] -p]
+ # trim off the initial "|"
+ set cmd [lrange $cmd 1 end]
lappend cmd >$fname &
if {[catch {eval exec $cmd} err]} {
error_popup "Error creating patch: $err"
proc redrawtags {id} {
global canv linehtag commitrow idpos selectedline curview
- global mainfont canvxmax iddrawn
+ global canvxmax iddrawn
if {![info exists commitrow($curview,$id)]} return
if {![info exists iddrawn($id)]} return
set xt [eval drawtags $id $idpos($id)]
$canv coords $linehtag($commitrow($curview,$id)) $xt [lindex $idpos($id) 2]
set text [$canv itemcget $linehtag($commitrow($curview,$id)) -text]
- set xr [expr {$xt + [font measure $mainfont $text]}]
+ set xr [expr {$xt + [font measure mainfont $text]}]
if {$xr > $canvxmax} {
set canvxmax $xr
setcanvscroll
included in branch $mainhead -- really re-apply it?"]
if {!$ok} return
}
- nowbusy cherrypick
+ nowbusy cherrypick "Cherry-picking"
update
# Unfortunately git-cherry-pick writes stuff to stderr even when
# no error occurs, and exec takes that as an indication of error...
proc resethead {} {
global mainheadid mainhead rowmenuid confirm_ok resettype
- global showlocalchanges
set confirm_ok 0
set w ".confirmreset"
error_popup $err
} else {
dohidelocalchanges
- set w ".resetprogress"
- filerun $fd [list readresetstat $fd $w]
- toplevel $w
- wm transient $w
- wm title $w "Reset progress"
- message $w.m -text "Reset in progress, please wait..." \
- -justify center -aspect 1000
- pack $w.m -side top -fill x -padx 20 -pady 5
- canvas $w.c -width 150 -height 20 -bg white
- $w.c create rect 0 0 0 20 -fill green -tags rect
- pack $w.c -side top -fill x -padx 20 -pady 5 -expand 1
- nowbusy reset
+ filerun $fd [list readresetstat $fd]
+ nowbusy reset "Resetting"
}
}
-proc readresetstat {fd w} {
- global mainhead mainheadid showlocalchanges
+proc readresetstat {fd} {
+ global mainhead mainheadid showlocalchanges rprogcoord
if {[gets $fd line] >= 0} {
if {[regexp {([0-9]+)% \(([0-9]+)/([0-9]+)\)} $line match p m n]} {
- set x [expr {($m * 150) / $n}]
- $w.c coords rect 0 0 $x 20
+ set rprogcoord [expr {1.0 * $m / $n}]
+ adjustprogress
}
return 1
}
- destroy $w
+ set rprogcoord 0
+ adjustprogress
notbusy reset
if {[catch {close $fd} err]} {
error_popup $err
proc headmenu {x y id head} {
global headmenuid headmenuhead headctxmenu mainhead
+ stopfinding
set headmenuid $id
set headmenuhead $head
set state normal
# check the tree is clean first??
set oldmainhead $mainhead
- nowbusy checkout
+ nowbusy checkout "Checking out"
update
dohidelocalchanges
if {[catch {
# Display a list of tags and heads
proc showrefs {} {
- global showrefstop bgcolor fgcolor selectbgcolor mainfont
- global bglist fglist uifont reflistfilter reflist maincursor
+ global showrefstop bgcolor fgcolor selectbgcolor
+ global bglist fglist reflistfilter reflist maincursor
set top .showrefs
set showrefstop $top
toplevel $top
wm title $top "Tags and heads: [file tail [pwd]]"
text $top.list -background $bgcolor -foreground $fgcolor \
- -selectbackground $selectbgcolor -font $mainfont \
+ -selectbackground $selectbgcolor -font mainfont \
-xscrollcommand "$top.xsb set" -yscrollcommand "$top.ysb set" \
-width 30 -height 20 -cursor $maincursor \
-spacing1 1 -spacing3 1 -state disabled
grid $top.list $top.ysb -sticky nsew
grid $top.xsb x -sticky ew
frame $top.f
- label $top.f.l -text "Filter: " -font $uifont
- entry $top.f.e -width 20 -textvariable reflistfilter -font $uifont
+ label $top.f.l -text "Filter: " -font uifont
+ entry $top.f.e -width 20 -textvariable reflistfilter -font uifont
set reflistfilter "*"
trace add variable reflistfilter write reflistfilter_change
pack $top.f.e -side right -fill x -expand 1
pack $top.f.l -side left
grid $top.f - -sticky ew -pady 2
button $top.close -command [list destroy $top] -text "Close" \
- -font $uifont
+ -font uifont
grid $top.close -
grid columnconfigure $top 0 -weight 1
grid rowconfigure $top 0 -weight 1
# Stuff for finding nearby tags
proc getallcommits {} {
- global allcommits allids nbmp nextarc seeds
+ global allcommits nextarc seeds allccache allcwait cachedarcs allcupdate
+ global idheads idtags idotherrefs allparents tagobjid
if {![info exists allcommits]} {
- set allids {}
- set nbmp 0
set nextarc 0
set allcommits 0
set seeds {}
+ set allcwait 0
+ set cachedarcs 0
+ set allccache [file join [gitdir] "gitk.cache"]
+ if {![catch {
+ set f [open $allccache r]
+ set allcwait 1
+ getcache $f
+ }]} return
}
- set cmd [concat | git rev-list --all --parents]
- foreach id $seeds {
- lappend cmd "^$id"
+ if {$allcwait} {
+ return
+ }
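+    # When updating an existing cache, list only the refs we don't
+    # already know about, excluding everything reachable from the seeds.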
+ set cmd [list | git rev-list --parents]
+ set allcupdate [expr {$seeds ne {}}]
+ if {!$allcupdate} {
+ set ids "--all"
+ } else {
+ set refs [concat [array names idheads] [array names idtags] \
+ [array names idotherrefs]]
+ set ids {}
+ set tagobjs {}
+ foreach name [array names tagobjid] {
+ lappend tagobjs $tagobjid($name)
+ }
+ foreach id [lsort -unique $refs] {
+ if {![info exists allparents($id)] &&
+ [lsearch -exact $tagobjs $id] < 0} {
+ lappend ids $id
+ }
+ }
+ if {$ids ne {}} {
+ foreach id $seeds {
+ lappend ids "^$id"
+ }
+ }
+ }
+ if {$ids ne {}} {
+ set fd [open [concat $cmd $ids] r]
+ fconfigure $fd -blocking 0
+ incr allcommits
+ nowbusy allcommits
+ filerun $fd [list getallclines $fd]
+ } else {
+ dispneartags 0
}
- set fd [open $cmd r]
- fconfigure $fd -blocking 0
- incr allcommits
- nowbusy allcommits
- filerun $fd [list getallclines $fd]
}
# Since most commits have 1 parent and 1 child, we group strings of
# coming from descendents, and "outgoing" means going towards ancestors.
proc getallclines {fd} {
- global allids allparents allchildren idtags idheads nextarc nbmp
+ global allparents allchildren idtags idheads nextarc
global arcnos arcids arctags arcout arcend arcstart archeads growing
- global seeds allcommits
-
+ global seeds allcommits cachedarcs allcupdate
+
set nid 0
while {[incr nid] <= 1000 && [gets $fd line] >= 0} {
set id [lindex $line 0]
# seen it already
continue
}
- lappend allids $id
+ set cachedarcs 0
set olds [lrange $line 1 end]
set allparents($id) $olds
if {![info exists allchildren($id)]} {
continue
}
}
- incr nbmp
foreach a $arcnos($id) {
lappend arcids($a) $id
set arcend($a) $id
if {![eof $fd]} {
return [expr {$nid >= 1000? 2: 1}]
}
- close $fd
+ set cacheok 1
+ if {[catch {
+ fconfigure $fd -blocking 1
+ close $fd
+ } err]} {
+ # got an error reading the list of commits
+ # if we were updating, try rereading the whole thing again
+ if {$allcupdate} {
+ incr allcommits -1
+ dropcache $err
+ return
+ }
+ error_popup "Error reading commit topology information;\
+ branch and preceding/following tag information\
+ will be incomplete.\n($err)"
+ set cacheok 0
+ }
if {[incr allcommits -1] == 0} {
notbusy allcommits
+ if {$cacheok} {
+ run savecache
+ }
}
dispneartags 0
return 0
}
proc splitarc {p} {
- global arcnos arcids nextarc nbmp arctags archeads idtags idheads
+ global arcnos arcids nextarc arctags archeads idtags idheads
global arcstart arcend arcout allparents growing
set a $arcnos($p)
set growing($na) 1
unset growing($a)
}
- incr nbmp
foreach id $tail {
if {[llength $arcnos($id)] == 1} {
# Update things for a new commit added that is a child of one
# existing commit. Used when cherry-picking.
proc addnewchild {id p} {
- global allids allparents allchildren idtags nextarc nbmp
+ global allparents allchildren idtags nextarc
global arcnos arcids arctags arcout arcend arcstart archeads growing
global seeds allcommits
- if {![info exists allcommits]} return
- lappend allids $id
+ if {![info exists allcommits] || ![info exists arcnos($p)]} return
set allparents($id) [list $p]
set allchildren($id) {}
set arcnos($id) {}
lappend seeds $id
- incr nbmp
lappend allchildren($p) $id
set a [incr nextarc]
set arcstart($a) $id
set arcout($id) [list $a]
}
+# This implements a cache for the topology information.
+# The cache saves, for each arc, the start and end of the arc,
+# the ids on the arc, and the outgoing arcs from the end.
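+# The cache file starts with a "1 <number-of-arcs>" header, has one
+# "<start> <end-or-empty> {id ...}" line per arc (see writecache), and
+# ends with a "1" trailer line; readcache consumes it 500 arcs per run
+# queue pass so the UI stays responsive while a large cache loads.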
+proc readcache {f} {
+ global arcnos arcids arcout arcstart arcend arctags archeads nextarc
+ global idtags idheads allparents cachedarcs possible_seeds seeds growing
+ global allcwait
+
+ set a $nextarc
+ set lim $cachedarcs
+ if {$lim - $a > 500} {
+ set lim [expr {$a + 500}]
+ }
+ if {[catch {
+ if {$a == $lim} {
+ # finish reading the cache and setting up arctags, etc.
+ set line [gets $f]
+ if {$line ne "1"} {error "bad final version"}
+ close $f
+ foreach id [array names idtags] {
+ if {[info exists arcnos($id)] && [llength $arcnos($id)] == 1 &&
+ [llength $allparents($id)] == 1} {
+ set a [lindex $arcnos($id) 0]
+ if {$arctags($a) eq {}} {
+ recalcarc $a
+ }
+ }
+ }
+ foreach id [array names idheads] {
+ if {[info exists arcnos($id)] && [llength $arcnos($id)] == 1 &&
+ [llength $allparents($id)] == 1} {
+ set a [lindex $arcnos($id) 0]
+ if {$archeads($a) eq {}} {
+ recalcarc $a
+ }
+ }
+ }
+ foreach id [lsort -unique $possible_seeds] {
+ if {$arcnos($id) eq {}} {
+ lappend seeds $id
+ }
+ }
+ set allcwait 0
+ } else {
+ while {[incr a] <= $lim} {
+ set line [gets $f]
+ if {[llength $line] != 3} {error "bad line"}
+ set s [lindex $line 0]
+ set arcstart($a) $s
+ lappend arcout($s) $a
+ if {![info exists arcnos($s)]} {
+ lappend possible_seeds $s
+ set arcnos($s) {}
+ }
+ set e [lindex $line 1]
+ if {$e eq {}} {
+ set growing($a) 1
+ } else {
+ set arcend($a) $e
+ if {![info exists arcout($e)]} {
+ set arcout($e) {}
+ }
+ }
+ set arcids($a) [lindex $line 2]
+ foreach id $arcids($a) {
+ lappend allparents($s) $id
+ set s $id
+ lappend arcnos($id) $a
+ }
+ if {![info exists allparents($s)]} {
+ set allparents($s) {}
+ }
+ set arctags($a) {}
+ set archeads($a) {}
+ }
+ set nextarc [expr {$a - 1}]
+ }
+ } err]} {
+ dropcache $err
+ return 0
+ }
+ if {!$allcwait} {
+ getallcommits
+ }
+ return $allcwait
+}
+
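+# Validate the cache file's header line and schedule readcache to
+# slurp the rest in incrementally.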
+proc getcache {f} {
+ global nextarc cachedarcs possible_seeds
+
+ if {[catch {
+ set line [gets $f]
+ if {[llength $line] != 2 || [lindex $line 0] ne "1"} {error "bad version"}
+ # make sure it's an integer
+ set cachedarcs [expr {int([lindex $line 1])}]
+ if {$cachedarcs < 0} {error "bad number of arcs"}
+ set nextarc 0
+ set possible_seeds {}
+ run readcache $f
+ } err]} {
+ dropcache $err
+ }
+ return 0
+}
+
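+# Throw away all in-memory topology data after a read failure and
+# fall back to reading the full commit list again.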
+proc dropcache {err} {
+ global allcwait nextarc cachedarcs seeds
+
+ #puts "dropping cache ($err)"
+ foreach v {arcnos arcout arcids arcstart arcend growing \
+ arctags archeads allparents allchildren} {
+ global $v
+ catch {unset $v}
+ }
+ set allcwait 0
+ set nextarc 0
+ set cachedarcs 0
+ set seeds {}
+ getallcommits
+}
+
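+# Write out up to 1000 arcs per call, returning 1 while more remain
+# so the run queue reschedules us; the file is deleted on any error.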
+proc writecache {f} {
+ global cachearc cachedarcs allccache
+ global arcstart arcend arcnos arcids arcout
+
+ set a $cachearc
+ set lim $cachedarcs
+ if {$lim - $a > 1000} {
+ set lim [expr {$a + 1000}]
+ }
+ if {[catch {
+ while {[incr a] <= $lim} {
+ if {[info exists arcend($a)]} {
+ puts $f [list $arcstart($a) $arcend($a) $arcids($a)]
+ } else {
+ puts $f [list $arcstart($a) {} $arcids($a)]
+ }
+ }
+ } err]} {
+ catch {close $f}
+ catch {file delete $allccache}
+ #puts "writing cache failed ($err)"
+ return 0
+ }
+ set cachearc [expr {$a - 1}]
+ if {$a > $cachedarcs} {
+ puts $f "1"
+ close $f
+ return 0
+ }
+ return 1
+}
+
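+# Rewrite the cache file if any new arcs have appeared since it was
+# last saved.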
+proc savecache {} {
+ global nextarc cachedarcs cachearc allccache
+
+ if {$nextarc == $cachedarcs} return
+ set cachearc 0
+ set cachedarcs $nextarc
+ catch {
+ set f [open $allccache w]
+ puts $f [list 1 $cachedarcs]
+ run writecache $f
+ }
+}
+
# Returns 1 if a is an ancestor of b, -1 if b is an ancestor of a,
# or 0 if neither is true.
proc anc_or_desc {a b} {
}
$ctext conf -state normal
clear_ctext
+ settabs 0
set linknum 0
if {![info exists tagcontents($tag)]} {
catch {
destroy .
}
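+# Add a row to the preferences window showing font $font's family,
+# with a button that opens the font chooser for it.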
+proc mkfontdisp {font top which} {
+ global fontattr fontpref $font
+
+ set fontpref($font) [set $font]
+ button $top.${font}but -text $which -font optionfont \
+ -command [list choosefont $font $which]
+ label $top.$font -relief flat -font $font \
+ -text $fontattr($font,family) -justify left
+ grid x $top.${font}but $top.$font -sticky w
+}
+
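+# Pop up (or raise) the font chooser dialog, seeded with the current
+# attributes of $font.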
+proc choosefont {font which} {
+ global fontparam fontlist fonttop fontattr
+
+ set fontparam(which) $which
+ set fontparam(font) $font
+ set fontparam(family) [font actual $font -family]
+ set fontparam(size) $fontattr($font,size)
+ set fontparam(weight) $fontattr($font,weight)
+ set fontparam(slant) $fontattr($font,slant)
+ set top .gitkfont
+ set fonttop $top
+ if {![winfo exists $top]} {
+ font create sample
+ eval font config sample [font actual $font]
+ toplevel $top
+ wm title $top "Gitk font chooser"
+ label $top.l -textvariable fontparam(which) -font uifont
+ pack $top.l -side top
+ set fontlist [lsort [font families]]
+ frame $top.f
+ listbox $top.f.fam -listvariable fontlist \
+ -yscrollcommand [list $top.f.sb set]
+ bind $top.f.fam <<ListboxSelect>> selfontfam
+ scrollbar $top.f.sb -command [list $top.f.fam yview]
+ pack $top.f.sb -side right -fill y
+ pack $top.f.fam -side left -fill both -expand 1
+ pack $top.f -side top -fill both -expand 1
+ frame $top.g
+ spinbox $top.g.size -from 4 -to 40 -width 4 \
+ -textvariable fontparam(size) \
+ -validatecommand {string is integer -strict %s}
+ checkbutton $top.g.bold -padx 5 \
+ -font {{Times New Roman} 12 bold} -text "B" -indicatoron 0 \
+ -variable fontparam(weight) -onvalue bold -offvalue normal
+ checkbutton $top.g.ital -padx 5 \
+ -font {{Times New Roman} 12 italic} -text "I" -indicatoron 0 \
+ -variable fontparam(slant) -onvalue italic -offvalue roman
+ pack $top.g.size $top.g.bold $top.g.ital -side left
+ pack $top.g -side top
+ canvas $top.c -width 150 -height 50 -border 2 -relief sunk \
+ -background white
+ $top.c create text 100 25 -anchor center -text $which -font sample \
+ -fill black -tags text
+ bind $top.c <Configure> [list centertext $top.c]
+ pack $top.c -side top -fill x
+ frame $top.buts
+ button $top.buts.ok -text "OK" -command fontok -default active \
+ -font uifont
+ button $top.buts.can -text "Cancel" -command fontcan -default normal \
+ -font uifont
+ grid $top.buts.ok $top.buts.can
+ grid columnconfigure $top.buts 0 -weight 1 -uniform a
+ grid columnconfigure $top.buts 1 -weight 1 -uniform a
+ pack $top.buts -side bottom -fill x
+ trace add variable fontparam write chg_fontparam
+ } else {
+ raise $top
+ $top.c itemconf text -text $which
+ }
+ set i [lsearch -exact $fontlist $fontparam(family)]
+ if {$i >= 0} {
+ $top.f.fam selection set $i
+ $top.f.fam see $i
+ }
+}
+
+proc centertext {w} {
+ $w coords text [expr {[winfo width $w] / 2}] [expr {[winfo height $w] / 2}]
+}
+
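+# OK button handler: record the chosen family, size and style in
+# fontpref and update the preferences window display.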
+proc fontok {} {
+ global fontparam fontpref prefstop
+
+ set f $fontparam(font)
+ set fontpref($f) [list $fontparam(family) $fontparam(size)]
+ if {$fontparam(weight) eq "bold"} {
+ lappend fontpref($f) "bold"
+ }
+ if {$fontparam(slant) eq "italic"} {
+ lappend fontpref($f) "italic"
+ }
+ set w $prefstop.$f
+ $w conf -text $fontparam(family) -font $fontpref($f)
+
+ fontcan
+}
+
+proc fontcan {} {
+ global fonttop fontparam
+
+ if {[info exists fonttop]} {
+ catch {destroy $fonttop}
+ catch {font delete sample}
+ unset fonttop
+ unset fontparam
+ }
+}
+
+proc selfontfam {} {
+ global fonttop fontparam
+
+ set i [$fonttop.f.fam curselection]
+ if {$i ne {}} {
+ set fontparam(family) [$fonttop.f.fam get $i]
+ }
+}
+
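+# Variable trace on fontparam: apply a changed parameter to the
+# sample text immediately.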
+proc chg_fontparam {v sub op} {
+ global fontparam
+
+ font config sample -$sub $fontparam($sub)
+}
+
proc doprefs {} {
- global maxwidth maxgraphpct diffopts
+ global maxwidth maxgraphpct
global oldprefs prefstop showneartags showlocalchanges
global bgcolor fgcolor ctext diffcolors selectbgcolor
- global uifont tabstop
+ global uifont tabstop limitdiffs
set top .gitkprefs
set prefstop $top
raise $top
return
}
- foreach v {maxwidth maxgraphpct diffopts showneartags showlocalchanges} {
+ foreach v {maxwidth maxgraphpct showneartags showlocalchanges \
+ limitdiffs tabstop} {
set oldprefs($v) [set $v]
}
toplevel $top
wm title $top "Gitk preferences"
label $top.ldisp -text "Commit list display options"
- $top.ldisp configure -font $uifont
+ $top.ldisp configure -font uifont
grid $top.ldisp - -sticky w -pady 10
label $top.spacer -text " "
label $top.maxwidthl -text "Maximum graph width (lines)" \
grid x $top.showlocal -sticky w
label $top.ddisp -text "Diff display options"
- $top.ddisp configure -font $uifont
+ $top.ddisp configure -font uifont
grid $top.ddisp - -sticky w -pady 10
- label $top.diffoptl -text "Options for diff program" \
- -font optionfont
- entry $top.diffopt -width 20 -textvariable diffopts
- grid x $top.diffoptl $top.diffopt -sticky w
+ label $top.tabstopl -text "Tab spacing" -font optionfont
+ spinbox $top.tabstop -from 1 -to 20 -width 4 -textvariable tabstop
+ grid x $top.tabstopl $top.tabstop -sticky w
frame $top.ntag
label $top.ntag.l -text "Display nearby tags" -font optionfont
checkbutton $top.ntag.b -variable showneartags
pack $top.ntag.b $top.ntag.l -side left
grid x $top.ntag -sticky w
- label $top.tabstopl -text "tabstop" -font optionfont
- spinbox $top.tabstop -from 1 -to 20 -width 4 -textvariable tabstop
- grid x $top.tabstopl $top.tabstop -sticky w
+ frame $top.ldiff
+ label $top.ldiff.l -text "Limit diffs to listed paths" -font optionfont
+ checkbutton $top.ldiff.b -variable limitdiffs
+ pack $top.ldiff.b $top.ldiff.l -side left
+ grid x $top.ldiff -sticky w
label $top.cdisp -text "Colors: press to choose"
- $top.cdisp configure -font $uifont
+ $top.cdisp configure -font uifont
grid $top.cdisp - -sticky w -pady 10
label $top.bg -padx 40 -relief sunk -background $bgcolor
button $top.bgbut -text "Background" -font optionfont \
-command [list choosecolor selectbgcolor 0 $top.selbgsep background setselbg]
grid x $top.selbgbut $top.selbgsep -sticky w
+ label $top.cfont -text "Fonts: press to choose"
+ $top.cfont configure -font uifont
+ grid $top.cfont - -sticky w -pady 10
+ mkfontdisp mainfont $top "Main font"
+ mkfontdisp textfont $top "Diff display font"
+ mkfontdisp uifont $top "User interface font"
+
frame $top.buts
button $top.buts.ok -text "OK" -command prefsok -default active
- $top.buts.ok configure -font $uifont
+ $top.buts.ok configure -font uifont
button $top.buts.can -text "Cancel" -command prefscan -default normal
- $top.buts.can configure -font $uifont
+ $top.buts.can configure -font uifont
grid $top.buts.ok $top.buts.can
grid columnconfigure $top.buts 0 -weight 1 -uniform a
grid columnconfigure $top.buts 1 -weight 1 -uniform a
}
proc prefscan {} {
- global maxwidth maxgraphpct diffopts
- global oldprefs prefstop showneartags showlocalchanges
+ global oldprefs prefstop
- foreach v {maxwidth maxgraphpct diffopts showneartags showlocalchanges} {
+ foreach v {maxwidth maxgraphpct showneartags showlocalchanges \
+ limitdiffs tabstop} {
+ global $v
set $v $oldprefs($v)
}
catch {destroy $prefstop}
unset prefstop
+ fontcan
}
proc prefsok {} {
global maxwidth maxgraphpct
global oldprefs prefstop showneartags showlocalchanges
- global charspc ctext tabstop
+ global fontpref mainfont textfont uifont
+ global limitdiffs treediffs
catch {destroy $prefstop}
unset prefstop
- $ctext configure -tabs "[expr {$tabstop * $charspc}]"
+ fontcan
+ set fontchanged 0
+ if {$mainfont ne $fontpref(mainfont)} {
+ set mainfont $fontpref(mainfont)
+ parsefont mainfont $mainfont
+ eval font configure mainfont [fontflags mainfont]
+ eval font configure mainfontbold [fontflags mainfont 1]
+ setcoords
+ set fontchanged 1
+ }
+ if {$textfont ne $fontpref(textfont)} {
+ set textfont $fontpref(textfont)
+ parsefont textfont $textfont
+ eval font configure textfont [fontflags textfont]
+ eval font configure textfontbold [fontflags textfont 1]
+ }
+ if {$uifont ne $fontpref(uifont)} {
+ set uifont $fontpref(uifont)
+ parsefont uifont $uifont
+ eval font configure uifont [fontflags uifont]
+ }
+ settabs
if {$showlocalchanges != $oldprefs(showlocalchanges)} {
if {$showlocalchanges} {
doshowlocalchanges
dohidelocalchanges
}
}
- if {$maxwidth != $oldprefs(maxwidth)
+ if {$limitdiffs != $oldprefs(limitdiffs)} {
+ # treediffs elements are limited by path
+ catch {unset treediffs}
+ }
+ if {$fontchanged || $maxwidth != $oldprefs(maxwidth)
|| $maxgraphpct != $oldprefs(maxgraphpct)} {
redisplay
- } elseif {$showneartags != $oldprefs(showneartags)} {
+ } elseif {$showneartags != $oldprefs(showneartags) ||
+ $limitdiffs != $oldprefs(limitdiffs)} {
reselectline
}
}
return {}
}
+# First check that Tcl/Tk is recent enough
+if {[catch {package require Tk 8.4} err]} {
+ show_error {} . "Sorry, gitk cannot run with this version of Tcl/Tk.\n\
+ Gitk requires at least Tcl/Tk 8.4."
+ exit 1
+}
+
# defaults...
set datemode 0
-set diffopts "-U 5 -p"
set wrcomcmd "git diff-tree --stdin -p --pretty"
set gitencoding {}
set maxwidth 16
set revlistorder 0
set fastdate 0
-set uparrowlen 7
-set downarrowlen 7
-set mingaplen 30
+set uparrowlen 5
+set downarrowlen 5
+set mingaplen 100
set cmitmode "patch"
set wrapcomment "none"
set showneartags 1
set maxrefs 20
set maxlinelen 200
set showlocalchanges 1
+set limitdiffs 1
set datetimeformat "%Y-%m-%d %H:%M:%S"
set colors {green red blue magenta darkgrey brown orange}
font create optionfont -family sans-serif -size -12
+parsefont mainfont $mainfont
+eval font create mainfont [fontflags mainfont]
+eval font create mainfontbold [fontflags mainfont 1]
+
+parsefont textfont $textfont
+eval font create textfont [fontflags textfont]
+eval font create textfontbold [fontflags textfont 1]
+
+parsefont uifont $uifont
+eval font create uifont [fontflags uifont]
+
# check that we can find a .git directory somewhere...
if {[catch {set gitdir [gitdir]}]} {
show_error {} . "Cannot find a git repository here."
exit 1
}
+set mergeonly 0
set revtreeargs {}
set cmdline_files {}
set i 0
switch -- $arg {
"" { }
"-d" { set datemode 1 }
+ "--merge" {
+ set mergeonly 1
+ lappend revtreeargs $arg
+ }
"--" {
set cmdline_files [lrange $argv [expr {$i + 1}] end]
break
}
}
+if {$mergeonly} {
+ # find the list of unmerged files
+ set mlist {}
+ set nr_unmerged 0
+ if {[catch {
+ set fd [open "| git ls-files -u" r]
+ } err]} {
+ show_error {} . "Couldn't get list of unmerged files: $err"
+ exit 1
+ }
+ while {[gets $fd line] >= 0} {
+ set i [string first "\t" $line]
+ if {$i < 0} continue
+ set fname [string range $line [expr {$i+1}] end]
+ if {[lsearch -exact $mlist $fname] >= 0} continue
+ incr nr_unmerged
+ if {$cmdline_files eq {} || [path_filter $cmdline_files $fname]} {
+ lappend mlist $fname
+ }
+ }
+ catch {close $fd}
+ if {$mlist eq {}} {
+ if {$nr_unmerged == 0} {
+ show_error {} . "No files selected: --merge specified but\
+ no files are unmerged."
+ } else {
+ show_error {} . "No files selected: --merge specified but\
+ no unmerged files are within file limit."
+ }
+ exit 1
+ }
+ set cmdline_files $mlist
+}
+
set nullid "0000000000000000000000000000000000000000"
set nullid2 "0000000000000000000000000000000000000001"
+set have_tk85 [expr {[package vcompare $tk_version "8.5"] >= 0}]
set runq {}
set history {}
set fh_serial 0
set nhl_names {}
set highlight_paths {}
+set findpattern {}
set searchdirn -forwards
set boldrows {}
set boldnamerows {}
set diffelide {0 0}
set markingmatches 0
-
-set optim_delay 16
+set linkentercount 0
+set need_redisplay 0
+set nrows_drawn 0
+set firsttabstop 0
set nextviewnum 1
set curview 0
set selectedview 0
set selectedhlview None
+set highlight_related None
+set highlight_files {}
set viewfiles(0) {}
set viewperm(0) 0
set viewargs(0) {}
set stopped 0
set stuffsaved 0
set patchnum 0
-set lookingforhead 0
set localirow -1
set localfrow -1
set lserial 0
void help_unknown_cmd(const char *cmd)
{
- printf("git: '%s' is not a git-command\n\n", cmd);
- list_common_cmds_help();
+ fprintf(stderr, "git: '%s' is not a git-command. See 'git --help'.\n", cmd);
exit(1);
}
+++ /dev/null
-#include "cache.h"
-#include "commit.h"
-#include "pack.h"
-#include "fetch.h"
-#include "http.h"
-
-#define PREV_BUF_SIZE 4096
-#define RANGE_HEADER_SIZE 30
-
-static int commits_on_stdin;
-
-static int got_alternates = -1;
-static int corrupt_object_found;
-
-static struct curl_slist *no_pragma_header;
-
-struct alt_base
-{
- char *base;
- int got_indices;
- struct packed_git *packs;
- struct alt_base *next;
-};
-
-static struct alt_base *alt;
-
-enum object_request_state {
- WAITING,
- ABORTED,
- ACTIVE,
- COMPLETE,
-};
-
-struct object_request
-{
- unsigned char sha1[20];
- struct alt_base *repo;
- char *url;
- char filename[PATH_MAX];
- char tmpfile[PATH_MAX];
- int local;
- enum object_request_state state;
- CURLcode curl_result;
- char errorstr[CURL_ERROR_SIZE];
- long http_code;
- unsigned char real_sha1[20];
- SHA_CTX c;
- z_stream stream;
- int zret;
- int rename;
- struct active_request_slot *slot;
- struct object_request *next;
-};
-
-struct alternates_request {
- const char *base;
- char *url;
- struct buffer *buffer;
- struct active_request_slot *slot;
- int http_specific;
-};
-
-static struct object_request *object_queue_head;
-
-static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
- void *data)
-{
- unsigned char expn[4096];
- size_t size = eltsize * nmemb;
- int posn = 0;
- struct object_request *obj_req = (struct object_request *)data;
- do {
- ssize_t retval = xwrite(obj_req->local,
- (char *) ptr + posn, size - posn);
- if (retval < 0)
- return posn;
- posn += retval;
- } while (posn < size);
-
- obj_req->stream.avail_in = size;
- obj_req->stream.next_in = ptr;
- do {
- obj_req->stream.next_out = expn;
- obj_req->stream.avail_out = sizeof(expn);
- obj_req->zret = inflate(&obj_req->stream, Z_SYNC_FLUSH);
- SHA1_Update(&obj_req->c, expn,
- sizeof(expn) - obj_req->stream.avail_out);
- } while (obj_req->stream.avail_in && obj_req->zret == Z_OK);
- data_received++;
- return size;
-}
-
-static int missing__target(int code, int result)
-{
- return /* file:// URL -- do we ever use one??? */
- (result == CURLE_FILE_COULDNT_READ_FILE) ||
- /* http:// and https:// URL */
- (code == 404 && result == CURLE_HTTP_RETURNED_ERROR) ||
- /* ftp:// URL */
- (code == 550 && result == CURLE_FTP_COULDNT_RETR_FILE)
- ;
-}
-
-#define missing_target(a) missing__target((a)->http_code, (a)->curl_result)
-
-static void fetch_alternates(const char *base);
-
-static void process_object_response(void *callback_data);
-
-static void start_object_request(struct object_request *obj_req)
-{
- char *hex = sha1_to_hex(obj_req->sha1);
- char prevfile[PATH_MAX];
- char *url;
- char *posn;
- int prevlocal;
- unsigned char prev_buf[PREV_BUF_SIZE];
- ssize_t prev_read = 0;
- long prev_posn = 0;
- char range[RANGE_HEADER_SIZE];
- struct curl_slist *range_header = NULL;
- struct active_request_slot *slot;
-
- snprintf(prevfile, sizeof(prevfile), "%s.prev", obj_req->filename);
- unlink(prevfile);
- rename(obj_req->tmpfile, prevfile);
- unlink(obj_req->tmpfile);
-
- if (obj_req->local != -1)
- error("fd leakage in start: %d", obj_req->local);
- obj_req->local = open(obj_req->tmpfile,
- O_WRONLY | O_CREAT | O_EXCL, 0666);
- /* This could have failed due to the "lazy directory creation";
- * try to mkdir the last path component.
- */
- if (obj_req->local < 0 && errno == ENOENT) {
- char *dir = strrchr(obj_req->tmpfile, '/');
- if (dir) {
- *dir = 0;
- mkdir(obj_req->tmpfile, 0777);
- *dir = '/';
- }
- obj_req->local = open(obj_req->tmpfile,
- O_WRONLY | O_CREAT | O_EXCL, 0666);
- }
-
- if (obj_req->local < 0) {
- obj_req->state = ABORTED;
- error("Couldn't create temporary file %s for %s: %s",
- obj_req->tmpfile, obj_req->filename, strerror(errno));
- return;
- }
-
- memset(&obj_req->stream, 0, sizeof(obj_req->stream));
-
- inflateInit(&obj_req->stream);
-
- SHA1_Init(&obj_req->c);
-
- url = xmalloc(strlen(obj_req->repo->base) + 51);
- obj_req->url = xmalloc(strlen(obj_req->repo->base) + 51);
- strcpy(url, obj_req->repo->base);
- posn = url + strlen(obj_req->repo->base);
- strcpy(posn, "/objects/");
- posn += 9;
- memcpy(posn, hex, 2);
- posn += 2;
- *(posn++) = '/';
- strcpy(posn, hex + 2);
- strcpy(obj_req->url, url);
-
- /* If a previous temp file is present, process what was already
- fetched. */
- prevlocal = open(prevfile, O_RDONLY);
- if (prevlocal != -1) {
- do {
- prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
- if (prev_read>0) {
- if (fwrite_sha1_file(prev_buf,
- 1,
- prev_read,
- obj_req) == prev_read) {
- prev_posn += prev_read;
- } else {
- prev_read = -1;
- }
- }
- } while (prev_read > 0);
- close(prevlocal);
- }
- unlink(prevfile);
-
- /* Reset inflate/SHA1 if there was an error reading the previous temp
- file; also rewind to the beginning of the local file. */
- if (prev_read == -1) {
- memset(&obj_req->stream, 0, sizeof(obj_req->stream));
- inflateInit(&obj_req->stream);
- SHA1_Init(&obj_req->c);
- if (prev_posn>0) {
- prev_posn = 0;
- lseek(obj_req->local, 0, SEEK_SET);
- ftruncate(obj_req->local, 0);
- }
- }
-
- slot = get_active_slot();
- slot->callback_func = process_object_response;
- slot->callback_data = obj_req;
- obj_req->slot = slot;
-
- curl_easy_setopt(slot->curl, CURLOPT_FILE, obj_req);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
- curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, obj_req->errorstr);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
-
- /* If we have successfully processed data from a previous fetch
- attempt, only fetch the data we don't already have. */
- if (prev_posn>0) {
- if (get_verbosely)
- fprintf(stderr,
- "Resuming fetch of object %s at byte %ld\n",
- hex, prev_posn);
- sprintf(range, "Range: bytes=%ld-", prev_posn);
- range_header = curl_slist_append(range_header, range);
- curl_easy_setopt(slot->curl,
- CURLOPT_HTTPHEADER, range_header);
- }
-
- /* Try to get the request started, abort the request on error */
- obj_req->state = ACTIVE;
- if (!start_active_slot(slot)) {
- obj_req->state = ABORTED;
- obj_req->slot = NULL;
- close(obj_req->local); obj_req->local = -1;
- free(obj_req->url);
- return;
- }
-}
-
-static void finish_object_request(struct object_request *obj_req)
-{
- struct stat st;
-
- fchmod(obj_req->local, 0444);
- close(obj_req->local); obj_req->local = -1;
-
- if (obj_req->http_code == 416) {
- fprintf(stderr, "Warning: requested range invalid; we may already have all the data.\n");
- } else if (obj_req->curl_result != CURLE_OK) {
- if (stat(obj_req->tmpfile, &st) == 0)
- if (st.st_size == 0)
- unlink(obj_req->tmpfile);
- return;
- }
-
- inflateEnd(&obj_req->stream);
- SHA1_Final(obj_req->real_sha1, &obj_req->c);
- if (obj_req->zret != Z_STREAM_END) {
- unlink(obj_req->tmpfile);
- return;
- }
- if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
- unlink(obj_req->tmpfile);
- return;
- }
- obj_req->rename =
- move_temp_to_file(obj_req->tmpfile, obj_req->filename);
-
- if (obj_req->rename == 0)
- pull_say("got %s\n", sha1_to_hex(obj_req->sha1));
-}
-
-static void process_object_response(void *callback_data)
-{
- struct object_request *obj_req =
- (struct object_request *)callback_data;
-
- obj_req->curl_result = obj_req->slot->curl_result;
- obj_req->http_code = obj_req->slot->http_code;
- obj_req->slot = NULL;
- obj_req->state = COMPLETE;
-
- /* Use alternates if necessary */
- if (missing_target(obj_req)) {
- fetch_alternates(alt->base);
- if (obj_req->repo->next != NULL) {
- obj_req->repo =
- obj_req->repo->next;
- close(obj_req->local);
- obj_req->local = -1;
- start_object_request(obj_req);
- return;
- }
- }
-
- finish_object_request(obj_req);
-}
-
-static void release_object_request(struct object_request *obj_req)
-{
- struct object_request *entry = object_queue_head;
-
- if (obj_req->local != -1)
- error("fd leakage in release: %d", obj_req->local);
- if (obj_req == object_queue_head) {
- object_queue_head = obj_req->next;
- } else {
- while (entry->next != NULL && entry->next != obj_req)
- entry = entry->next;
- if (entry->next == obj_req)
- entry->next = entry->next->next;
- }
-
- free(obj_req->url);
- free(obj_req);
-}
-
-#ifdef USE_CURL_MULTI
-void fill_active_slots(void)
-{
- struct object_request *obj_req = object_queue_head;
- struct active_request_slot *slot = active_queue_head;
- int num_transfers;
-
- while (active_requests < max_requests && obj_req != NULL) {
- if (obj_req->state == WAITING) {
- if (has_sha1_file(obj_req->sha1))
- obj_req->state = COMPLETE;
- else
- start_object_request(obj_req);
- curl_multi_perform(curlm, &num_transfers);
- }
- obj_req = obj_req->next;
- }
-
- while (slot != NULL) {
- if (!slot->in_use && slot->curl != NULL) {
- curl_easy_cleanup(slot->curl);
- slot->curl = NULL;
- }
- slot = slot->next;
- }
-}
-#endif
-
-void prefetch(unsigned char *sha1)
-{
- struct object_request *newreq;
- struct object_request *tail;
- char *filename = sha1_file_name(sha1);
-
- newreq = xmalloc(sizeof(*newreq));
- hashcpy(newreq->sha1, sha1);
- newreq->repo = alt;
- newreq->url = NULL;
- newreq->local = -1;
- newreq->state = WAITING;
- snprintf(newreq->filename, sizeof(newreq->filename), "%s", filename);
- snprintf(newreq->tmpfile, sizeof(newreq->tmpfile),
- "%s.temp", filename);
- newreq->slot = NULL;
- newreq->next = NULL;
-
- if (object_queue_head == NULL) {
- object_queue_head = newreq;
- } else {
- tail = object_queue_head;
- while (tail->next != NULL) {
- tail = tail->next;
- }
- tail->next = newreq;
- }
-
-#ifdef USE_CURL_MULTI
- fill_active_slots();
- step_active_slots();
-#endif
-}
-
-static int fetch_index(struct alt_base *repo, unsigned char *sha1)
-{
- char *hex = sha1_to_hex(sha1);
- char *filename;
- char *url;
- char tmpfile[PATH_MAX];
- long prev_posn = 0;
- char range[RANGE_HEADER_SIZE];
- struct curl_slist *range_header = NULL;
-
- FILE *indexfile;
- struct active_request_slot *slot;
- struct slot_results results;
-
- if (has_pack_index(sha1))
- return 0;
-
- if (get_verbosely)
- fprintf(stderr, "Getting index for pack %s\n", hex);
-
- url = xmalloc(strlen(repo->base) + 64);
- sprintf(url, "%s/objects/pack/pack-%s.idx", repo->base, hex);
-
- filename = sha1_pack_index_name(sha1);
- snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
- indexfile = fopen(tmpfile, "a");
- if (!indexfile)
- return error("Unable to open local file %s for pack index",
- filename);
-
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, indexfile);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
- slot->local = indexfile;
-
- /* If there is data present from a previous transfer attempt,
- resume where it left off */
- prev_posn = ftell(indexfile);
- if (prev_posn>0) {
- if (get_verbosely)
- fprintf(stderr,
- "Resuming fetch of index for pack %s at byte %ld\n",
- hex, prev_posn);
- sprintf(range, "Range: bytes=%ld-", prev_posn);
- range_header = curl_slist_append(range_header, range);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
- }
-
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK) {
- fclose(indexfile);
- return error("Unable to get pack index %s\n%s", url,
- curl_errorstr);
- }
- } else {
- fclose(indexfile);
- return error("Unable to start request");
- }
-
- fclose(indexfile);
-
- return move_temp_to_file(tmpfile, filename);
-}
-
-static int setup_index(struct alt_base *repo, unsigned char *sha1)
-{
- struct packed_git *new_pack;
- if (has_pack_file(sha1))
- return 0; /* don't list this as something we can get */
-
- if (fetch_index(repo, sha1))
- return -1;
-
- new_pack = parse_pack_index(sha1);
- new_pack->next = repo->packs;
- repo->packs = new_pack;
- return 0;
-}
-
-static void process_alternates_response(void *callback_data)
-{
- struct alternates_request *alt_req =
- (struct alternates_request *)callback_data;
- struct active_request_slot *slot = alt_req->slot;
- struct alt_base *tail = alt;
- const char *base = alt_req->base;
- static const char null_byte = '\0';
- char *data;
- int i = 0;
-
- if (alt_req->http_specific) {
- if (slot->curl_result != CURLE_OK ||
- !alt_req->buffer->posn) {
-
- /* Try reusing the slot to get non-http alternates */
- alt_req->http_specific = 0;
- sprintf(alt_req->url, "%s/objects/info/alternates",
- base);
- curl_easy_setopt(slot->curl, CURLOPT_URL,
- alt_req->url);
- active_requests++;
- slot->in_use = 1;
- if (slot->finished != NULL)
- (*slot->finished) = 0;
- if (!start_active_slot(slot)) {
- got_alternates = -1;
- slot->in_use = 0;
- if (slot->finished != NULL)
- (*slot->finished) = 1;
- }
- return;
- }
- } else if (slot->curl_result != CURLE_OK) {
- if (!missing_target(slot)) {
- got_alternates = -1;
- return;
- }
- }
-
- fwrite_buffer(&null_byte, 1, 1, alt_req->buffer);
- alt_req->buffer->posn--;
- data = alt_req->buffer->buffer;
-
- while (i < alt_req->buffer->posn) {
- int posn = i;
- while (posn < alt_req->buffer->posn && data[posn] != '\n')
- posn++;
- if (data[posn] == '\n') {
- int okay = 0;
- int serverlen = 0;
- struct alt_base *newalt;
- char *target = NULL;
- if (data[i] == '/') {
- /* This counts
- * http://git.host/pub/scm/linux.git/
- * -----------here^
- * so memcpy(dst, base, serverlen) will
- * copy up to "...git.host".
- */
- const char *colon_ss = strstr(base,"://");
- if (colon_ss) {
- serverlen = (strchr(colon_ss + 3, '/')
- - base);
- okay = 1;
- }
- } else if (!memcmp(data + i, "../", 3)) {
-				/* Relative URL; chop the corresponding
-				 * number of subpaths from base (and ../
-				 * from data), and concatenate the result.
-				 *
-				 * The code first drops ../ from data, and
-				 * then drops one ../ from data and one path
-				 * from base. IOW, one more ../ is dropped
-				 * from data than paths are dropped from base.
- *
- * This is not wrong. The alternate in
- * http://git.host/pub/scm/linux.git/
- * to borrow from
- * http://git.host/pub/scm/linus.git/
- * is ../../linus.git/objects/. You need
- * two ../../ to borrow from your direct
- * neighbour.
- */
- i += 3;
- serverlen = strlen(base);
- while (i + 2 < posn &&
- !memcmp(data + i, "../", 3)) {
- do {
- serverlen--;
- } while (serverlen &&
- base[serverlen - 1] != '/');
- i += 3;
- }
- /* If the server got removed, give up. */
- okay = strchr(base, ':') - base + 3 <
- serverlen;
- } else if (alt_req->http_specific) {
- char *colon = strchr(data + i, ':');
- char *slash = strchr(data + i, '/');
- if (colon && slash && colon < data + posn &&
- slash < data + posn && colon < slash) {
- okay = 1;
- }
- }
- /* skip "objects\n" at end */
- if (okay) {
- target = xmalloc(serverlen + posn - i - 6);
- memcpy(target, base, serverlen);
- memcpy(target + serverlen, data + i,
- posn - i - 7);
- target[serverlen + posn - i - 7] = 0;
- if (get_verbosely)
- fprintf(stderr,
- "Also look at %s\n", target);
- newalt = xmalloc(sizeof(*newalt));
- newalt->next = NULL;
- newalt->base = target;
- newalt->got_indices = 0;
- newalt->packs = NULL;
-
- while (tail->next != NULL)
- tail = tail->next;
- tail->next = newalt;
- }
- }
- i = posn + 1;
- }
-
- got_alternates = 1;
-}
-
-static void fetch_alternates(const char *base)
-{
- struct buffer buffer;
- char *url;
- char *data;
- struct active_request_slot *slot;
- struct alternates_request alt_req;
-
- /* If another request has already started fetching alternates,
- wait for them to arrive and return to processing this request's
- curl message */
-#ifdef USE_CURL_MULTI
- while (got_alternates == 0) {
- step_active_slots();
- }
-#endif
-
- /* Nothing to do if they've already been fetched */
- if (got_alternates == 1)
- return;
-
- /* Start the fetch */
- got_alternates = 0;
-
- data = xmalloc(4096);
- buffer.size = 4096;
- buffer.posn = 0;
- buffer.buffer = data;
-
- if (get_verbosely)
- fprintf(stderr, "Getting alternates list for %s\n", base);
-
- url = xmalloc(strlen(base) + 31);
- sprintf(url, "%s/objects/info/http-alternates", base);
-
- /* Use a callback to process the result, since another request
- may fail and need to have alternates loaded before continuing */
- slot = get_active_slot();
- slot->callback_func = process_alternates_response;
- slot->callback_data = &alt_req;
-
- curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
-
- alt_req.base = base;
- alt_req.url = url;
- alt_req.buffer = &buffer;
- alt_req.http_specific = 1;
- alt_req.slot = slot;
-
- if (start_active_slot(slot))
- run_active_slot(slot);
- else
- got_alternates = -1;
-
- free(data);
- free(url);
-}
-
-static int fetch_indices(struct alt_base *repo)
-{
- unsigned char sha1[20];
- char *url;
- struct buffer buffer;
- char *data;
- int i = 0;
-
- struct active_request_slot *slot;
- struct slot_results results;
-
- if (repo->got_indices)
- return 0;
-
- data = xmalloc(4096);
- buffer.size = 4096;
- buffer.posn = 0;
- buffer.buffer = data;
-
- if (get_verbosely)
- fprintf(stderr, "Getting pack list for %s\n", repo->base);
-
- url = xmalloc(strlen(repo->base) + 21);
- sprintf(url, "%s/objects/info/packs", repo->base);
-
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK) {
- if (missing_target(&results)) {
- repo->got_indices = 1;
- free(buffer.buffer);
- return 0;
- } else {
- repo->got_indices = 0;
- free(buffer.buffer);
- return error("%s", curl_errorstr);
- }
- }
- } else {
- repo->got_indices = 0;
- free(buffer.buffer);
- return error("Unable to start request");
- }
-
- data = buffer.buffer;
- while (i < buffer.posn) {
- switch (data[i]) {
- case 'P':
- i++;
- if (i + 52 <= buffer.posn &&
- !prefixcmp(data + i, " pack-") &&
- !prefixcmp(data + i + 46, ".pack\n")) {
- get_sha1_hex(data + i + 6, sha1);
- setup_index(repo, sha1);
- i += 51;
- break;
- }
- default:
- while (i < buffer.posn && data[i] != '\n')
- i++;
- }
- i++;
- }
-
- free(buffer.buffer);
- repo->got_indices = 1;
- return 0;
-}
-
-static int fetch_pack(struct alt_base *repo, unsigned char *sha1)
-{
- char *url;
- struct packed_git *target;
- struct packed_git **lst;
- FILE *packfile;
- char *filename;
- char tmpfile[PATH_MAX];
- int ret;
- long prev_posn = 0;
- char range[RANGE_HEADER_SIZE];
- struct curl_slist *range_header = NULL;
-
- struct active_request_slot *slot;
- struct slot_results results;
-
- if (fetch_indices(repo))
- return -1;
- target = find_sha1_pack(sha1, repo->packs);
- if (!target)
- return -1;
-
- if (get_verbosely) {
- fprintf(stderr, "Getting pack %s\n",
- sha1_to_hex(target->sha1));
- fprintf(stderr, " which contains %s\n",
- sha1_to_hex(sha1));
- }
-
- url = xmalloc(strlen(repo->base) + 65);
- sprintf(url, "%s/objects/pack/pack-%s.pack",
- repo->base, sha1_to_hex(target->sha1));
-
- filename = sha1_pack_name(target->sha1);
- snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
- packfile = fopen(tmpfile, "a");
- if (!packfile)
- return error("Unable to open local file %s for pack",
- filename);
-
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, packfile);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
- slot->local = packfile;
-
- /* If there is data present from a previous transfer attempt,
- resume where it left off */
- prev_posn = ftell(packfile);
- if (prev_posn>0) {
- if (get_verbosely)
- fprintf(stderr,
- "Resuming fetch of pack %s at byte %ld\n",
- sha1_to_hex(target->sha1), prev_posn);
- sprintf(range, "Range: bytes=%ld-", prev_posn);
- range_header = curl_slist_append(range_header, range);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
- }
-
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK) {
- fclose(packfile);
- return error("Unable to get pack file %s\n%s", url,
- curl_errorstr);
- }
- } else {
- fclose(packfile);
- return error("Unable to start request");
- }
-
- target->pack_size = ftell(packfile);
- fclose(packfile);
-
- ret = move_temp_to_file(tmpfile, filename);
- if (ret)
- return ret;
-
- lst = &repo->packs;
- while (*lst != target)
- lst = &((*lst)->next);
- *lst = (*lst)->next;
-
- if (verify_pack(target, 0))
- return -1;
- install_packed_git(target);
-
- return 0;
-}
-
-static void abort_object_request(struct object_request *obj_req)
-{
- if (obj_req->local >= 0) {
- close(obj_req->local);
- obj_req->local = -1;
- }
- unlink(obj_req->tmpfile);
- if (obj_req->slot) {
- release_active_slot(obj_req->slot);
- obj_req->slot = NULL;
- }
- release_object_request(obj_req);
-}
-
-static int fetch_object(struct alt_base *repo, unsigned char *sha1)
-{
- char *hex = sha1_to_hex(sha1);
- int ret = 0;
- struct object_request *obj_req = object_queue_head;
-
- while (obj_req != NULL && hashcmp(obj_req->sha1, sha1))
- obj_req = obj_req->next;
- if (obj_req == NULL)
- return error("Couldn't find request for %s in the queue", hex);
-
- if (has_sha1_file(obj_req->sha1)) {
- abort_object_request(obj_req);
- return 0;
- }
-
-#ifdef USE_CURL_MULTI
- while (obj_req->state == WAITING) {
- step_active_slots();
- }
-#else
- start_object_request(obj_req);
-#endif
-
- while (obj_req->state == ACTIVE) {
- run_active_slot(obj_req->slot);
- }
- if (obj_req->local != -1) {
- close(obj_req->local); obj_req->local = -1;
- }
-
- if (obj_req->state == ABORTED) {
- ret = error("Request for %s aborted", hex);
- } else if (obj_req->curl_result != CURLE_OK &&
- obj_req->http_code != 416) {
- if (missing_target(obj_req))
- ret = -1; /* Be silent, it is probably in a pack. */
- else
- ret = error("%s (curl_result = %d, http_code = %ld, sha1 = %s)",
- obj_req->errorstr, obj_req->curl_result,
- obj_req->http_code, hex);
- } else if (obj_req->zret != Z_STREAM_END) {
- corrupt_object_found++;
- ret = error("File %s (%s) corrupt", hex, obj_req->url);
- } else if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
- ret = error("File %s has bad hash", hex);
- } else if (obj_req->rename < 0) {
- ret = error("unable to write sha1 filename %s",
- obj_req->filename);
- }
-
- release_object_request(obj_req);
- return ret;
-}
-
-int fetch(unsigned char *sha1)
-{
- struct alt_base *altbase = alt;
-
- if (!fetch_object(altbase, sha1))
- return 0;
- while (altbase) {
- if (!fetch_pack(altbase, sha1))
- return 0;
- fetch_alternates(alt->base);
- altbase = altbase->next;
- }
- return error("Unable to find %s under %s", sha1_to_hex(sha1),
- alt->base);
-}
-
-static inline int needs_quote(int ch)
-{
- if (((ch >= 'A') && (ch <= 'Z'))
- || ((ch >= 'a') && (ch <= 'z'))
- || ((ch >= '0') && (ch <= '9'))
- || (ch == '/')
- || (ch == '-')
- || (ch == '.'))
- return 0;
- return 1;
-}
-
-static inline int hex(int v)
-{
- if (v < 10) return '0' + v;
- else return 'A' + v - 10;
-}
-
-static char *quote_ref_url(const char *base, const char *ref)
-{
- const char *cp;
- char *dp, *qref;
- int len, baselen, ch;
-
- baselen = strlen(base);
- len = baselen + 7; /* "/refs/" + NUL */
- for (cp = ref; (ch = *cp) != 0; cp++, len++)
- if (needs_quote(ch))
- len += 2; /* extra two hex plus replacement % */
- qref = xmalloc(len);
- memcpy(qref, base, baselen);
- memcpy(qref + baselen, "/refs/", 6);
- for (cp = ref, dp = qref + baselen + 6; (ch = *cp) != 0; cp++) {
- if (needs_quote(ch)) {
- *dp++ = '%';
- *dp++ = hex((ch >> 4) & 0xF);
- *dp++ = hex(ch & 0xF);
- }
- else
- *dp++ = ch;
- }
- *dp = 0;
-
- return qref;
-}
-
-int fetch_ref(char *ref, unsigned char *sha1)
-{
- char *url;
- char hex[42];
- struct buffer buffer;
- const char *base = alt->base;
- struct active_request_slot *slot;
- struct slot_results results;
- buffer.size = 41;
- buffer.posn = 0;
- buffer.buffer = hex;
- hex[41] = '\0';
-
- url = quote_ref_url(base, ref);
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK)
- return error("Couldn't get %s for %s\n%s",
- url, ref, curl_errorstr);
- } else {
- return error("Unable to start request");
- }
-
- hex[40] = '\0';
- get_sha1_hex(hex, sha1);
- return 0;
-}
-
-int main(int argc, const char **argv)
-{
- int commits;
- const char **write_ref = NULL;
- char **commit_id;
- const char *url;
- char *s;
- int arg = 1;
- int rc = 0;
-
- setup_git_directory();
- git_config(git_default_config);
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 't') {
- get_tree = 1;
- } else if (argv[arg][1] == 'c') {
- get_history = 1;
- } else if (argv[arg][1] == 'a') {
- get_all = 1;
- get_tree = 1;
- get_history = 1;
- } else if (argv[arg][1] == 'v') {
- get_verbosely = 1;
- } else if (argv[arg][1] == 'w') {
- write_ref = &argv[arg + 1];
- arg++;
- } else if (!strcmp(argv[arg], "--recover")) {
- get_recover = 1;
- } else if (!strcmp(argv[arg], "--stdin")) {
- commits_on_stdin = 1;
- }
- arg++;
- }
- if (argc < arg + 2 - commits_on_stdin) {
- usage("git-http-fetch [-c] [-t] [-a] [-v] [--recover] [-w ref] [--stdin] commit-id url");
- return 1;
- }
- if (commits_on_stdin) {
- commits = pull_targets_stdin(&commit_id, &write_ref);
- } else {
- commit_id = (char **) &argv[arg++];
- commits = 1;
- }
- url = argv[arg];
-
- http_init();
-
- no_pragma_header = curl_slist_append(no_pragma_header, "Pragma:");
-
- alt = xmalloc(sizeof(*alt));
- alt->base = xmalloc(strlen(url) + 1);
- strcpy(alt->base, url);
- for (s = alt->base + strlen(alt->base) - 1; *s == '/'; --s)
- *s = 0;
- alt->got_indices = 0;
- alt->packs = NULL;
- alt->next = NULL;
-
- if (pull(commits, commit_id, write_ref, url))
- rc = 1;
-
- http_cleanup();
-
- curl_slist_free_all(no_pragma_header);
-
- if (commits_on_stdin)
- pull_targets_free(commits, commit_id, write_ref);
-
- if (corrupt_object_found) {
- fprintf(stderr,
-"Some loose object were found to be corrupt, but they might be just\n"
-"a false '404 Not Found' error message sent with incorrect HTTP\n"
-"status code. Suggest running git-fsck.\n");
- }
- return rc;
-}
#include "cache.h"
#include "commit.h"
#include "pack.h"
-#include "fetch.h"
#include "tag.h"
#include "blob.h"
#include "http.h"
#include <expat.h>
static const char http_push_usage[] =
-"git-http-push [--all] [--force] [--verbose] <remote> [<head>...]\n";
+"git-http-push [--all] [--dry-run] [--force] [--verbose] <remote> [<head>...]\n";
#ifndef XML_STATUS_OK
enum XML_Status {
static int push_verbosely;
static int push_all;
static int force_all;
+static int dry_run;
static struct object_list *objects;
}
#ifdef USE_CURL_MULTI
-void fill_active_slots(void)
+static int fill_active_slot(void *unused)
{
struct transfer_request *request = request_queue_head;
- struct transfer_request *next;
- struct active_request_slot *slot = active_queue_head;
- int num_transfers;
if (aborted)
- return;
+ return 0;
- while (active_requests < max_requests && request != NULL) {
- next = request->next;
+ for (request = request_queue_head; request; request = request->next) {
if (request->state == NEED_FETCH) {
start_fetch_loose(request);
+ return 1;
} else if (pushing && request->state == NEED_PUSH) {
if (remote_dir_exists[request->obj->sha1[0]] == 1) {
start_put(request);
} else {
start_mkcol(request);
}
- curl_multi_perform(curlm, &num_transfers);
- }
- request = next;
- }
-
- while (slot != NULL) {
- if (!slot->in_use && slot->curl != NULL) {
- curl_easy_cleanup(slot->curl);
- slot->curl = NULL;
+ return 1;
}
- slot = slot->next;
}
+ return 0;
}
#endif
force_all = 1;
continue;
}
+ if (!strcmp(arg, "--dry-run")) {
+ dry_run = 1;
+ continue;
+ }
if (!strcmp(arg, "--verbose")) {
push_verbosely = 1;
continue;
if (strcmp(ref->name, ref->peer_ref->name))
fprintf(stderr, " using '%s'", ref->peer_ref->name);
fprintf(stderr, "\n from %s\n to %s\n", old_hex, new_hex);
-
+ if (dry_run)
+ continue;
/* Lock remote branch ref */
ref_lock = lock_remote(ref->name, LOCK_TIME);
objects_to_send);
#ifdef USE_CURL_MULTI
fill_active_slots();
+ add_fill_function(NULL, fill_active_slot);
#endif
finish_all_active_slots();
if (remote->has_info_refs && new_refs) {
if (info_ref_lock && remote->can_update_info_refs) {
fprintf(stderr, "Updating remote server info\n");
- update_remote_info_refs(info_ref_lock);
+ if (!dry_run)
+ update_remote_info_refs(info_ref_lock);
} else {
fprintf(stderr, "Unable to update server info\n");
}
--- /dev/null
+#include "cache.h"
+#include "commit.h"
+#include "pack.h"
+#include "walker.h"
+#include "http.h"
+
+#define PREV_BUF_SIZE 4096
+#define RANGE_HEADER_SIZE 30
+
+struct alt_base
+{
+ char *base;
+ int got_indices;
+ struct packed_git *packs;
+ struct alt_base *next;
+};
+
+enum object_request_state {
+ WAITING,
+ ABORTED,
+ ACTIVE,
+ COMPLETE,
+};
+
+struct object_request
+{
+ struct walker *walker;
+ unsigned char sha1[20];
+ struct alt_base *repo;
+ char *url;
+ char filename[PATH_MAX];
+ char tmpfile[PATH_MAX];
+ int local;
+ enum object_request_state state;
+ CURLcode curl_result;
+ char errorstr[CURL_ERROR_SIZE];
+ long http_code;
+ unsigned char real_sha1[20];
+ SHA_CTX c;
+ z_stream stream;
+ int zret;
+ int rename;
+ struct active_request_slot *slot;
+ struct object_request *next;
+};
+
+struct alternates_request {
+ struct walker *walker;
+ const char *base;
+ char *url;
+ struct buffer *buffer;
+ struct active_request_slot *slot;
+ int http_specific;
+};
+
+struct walker_data {
+ const char *url;
+ int got_alternates;
+ struct alt_base *alt;
+ struct curl_slist *no_pragma_header;
+};
+
+static struct object_request *object_queue_head;
+
+static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
+ void *data)
+{
+ unsigned char expn[4096];
+ size_t size = eltsize * nmemb;
+ int posn = 0;
+ struct object_request *obj_req = (struct object_request *)data;
+ do {
+ ssize_t retval = xwrite(obj_req->local,
+ (char *) ptr + posn, size - posn);
+ if (retval < 0)
+ return posn;
+ posn += retval;
+ } while (posn < size);
+
+ obj_req->stream.avail_in = size;
+ obj_req->stream.next_in = ptr;
+ do {
+ obj_req->stream.next_out = expn;
+ obj_req->stream.avail_out = sizeof(expn);
+ obj_req->zret = inflate(&obj_req->stream, Z_SYNC_FLUSH);
+ SHA1_Update(&obj_req->c, expn,
+ sizeof(expn) - obj_req->stream.avail_out);
+ } while (obj_req->stream.avail_in && obj_req->zret == Z_OK);
+ data_received++;
+ return size;
+}
+
+static int missing__target(int code, int result)
+{
+ return /* file:// URL -- do we ever use one??? */
+ (result == CURLE_FILE_COULDNT_READ_FILE) ||
+ /* http:// and https:// URL */
+ (code == 404 && result == CURLE_HTTP_RETURNED_ERROR) ||
+ /* ftp:// URL */
+ (code == 550 && result == CURLE_FTP_COULDNT_RETR_FILE)
+ ;
+}
+
+#define missing_target(a) missing__target((a)->http_code, (a)->curl_result)
+
+static void fetch_alternates(struct walker *walker, const char *base);
+
+static void process_object_response(void *callback_data);
+
+static void start_object_request(struct walker *walker,
+ struct object_request *obj_req)
+{
+ char *hex = sha1_to_hex(obj_req->sha1);
+ char prevfile[PATH_MAX];
+ char *url;
+ char *posn;
+ int prevlocal;
+ unsigned char prev_buf[PREV_BUF_SIZE];
+ ssize_t prev_read = 0;
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+ struct active_request_slot *slot;
+ struct walker_data *data = walker->data;
+
+ snprintf(prevfile, sizeof(prevfile), "%s.prev", obj_req->filename);
+ unlink(prevfile);
+ rename(obj_req->tmpfile, prevfile);
+ unlink(obj_req->tmpfile);
+
+ if (obj_req->local != -1)
+ error("fd leakage in start: %d", obj_req->local);
+ obj_req->local = open(obj_req->tmpfile,
+ O_WRONLY | O_CREAT | O_EXCL, 0666);
+ /* This could have failed due to the "lazy directory creation";
+ * try to mkdir the last path component.
+ */
+ if (obj_req->local < 0 && errno == ENOENT) {
+ char *dir = strrchr(obj_req->tmpfile, '/');
+ if (dir) {
+ *dir = 0;
+ mkdir(obj_req->tmpfile, 0777);
+ *dir = '/';
+ }
+ obj_req->local = open(obj_req->tmpfile,
+ O_WRONLY | O_CREAT | O_EXCL, 0666);
+ }
+
+ if (obj_req->local < 0) {
+ obj_req->state = ABORTED;
+ error("Couldn't create temporary file %s for %s: %s",
+ obj_req->tmpfile, obj_req->filename, strerror(errno));
+ return;
+ }
+
+ memset(&obj_req->stream, 0, sizeof(obj_req->stream));
+
+ inflateInit(&obj_req->stream);
+
+ SHA1_Init(&obj_req->c);
+
+ url = xmalloc(strlen(obj_req->repo->base) + 51);
+ obj_req->url = xmalloc(strlen(obj_req->repo->base) + 51);
+ strcpy(url, obj_req->repo->base);
+ posn = url + strlen(obj_req->repo->base);
+ strcpy(posn, "/objects/");
+ posn += 9;
+ memcpy(posn, hex, 2);
+ posn += 2;
+ *(posn++) = '/';
+ strcpy(posn, hex + 2);
+ strcpy(obj_req->url, url);
+
+ /* If a previous temp file is present, process what was already
+ fetched. */
+ prevlocal = open(prevfile, O_RDONLY);
+ if (prevlocal != -1) {
+ do {
+ prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
+ if (prev_read>0) {
+ if (fwrite_sha1_file(prev_buf,
+ 1,
+ prev_read,
+ obj_req) == prev_read) {
+ prev_posn += prev_read;
+ } else {
+ prev_read = -1;
+ }
+ }
+ } while (prev_read > 0);
+ close(prevlocal);
+ }
+ unlink(prevfile);
+
+ /* Reset inflate/SHA1 if there was an error reading the previous temp
+ file; also rewind to the beginning of the local file. */
+ if (prev_read == -1) {
+ memset(&obj_req->stream, 0, sizeof(obj_req->stream));
+ inflateInit(&obj_req->stream);
+ SHA1_Init(&obj_req->c);
+ if (prev_posn>0) {
+ prev_posn = 0;
+ lseek(obj_req->local, 0, SEEK_SET);
+ ftruncate(obj_req->local, 0);
+ }
+ }
+
+ slot = get_active_slot();
+ slot->callback_func = process_object_response;
+ slot->callback_data = obj_req;
+ obj_req->slot = slot;
+
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, obj_req);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
+ curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, obj_req->errorstr);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
+
+ /* If we have successfully processed data from a previous fetch
+ attempt, only fetch the data we don't already have. */
+ if (prev_posn>0) {
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of object %s at byte %ld\n",
+ hex, prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl,
+ CURLOPT_HTTPHEADER, range_header);
+ }
+
+ /* Try to get the request started, abort the request on error */
+ obj_req->state = ACTIVE;
+ if (!start_active_slot(slot)) {
+ obj_req->state = ABORTED;
+ obj_req->slot = NULL;
+ close(obj_req->local); obj_req->local = -1;
+ free(obj_req->url);
+ return;
+ }
+}
+
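(The resume idiom above -- open in append mode, measure what is already on
disk, then send a "Range: bytes=N-" header -- is plain libcurl usage. A
minimal standalone sketch, assuming only libcurl and a FILE* sink;
resume_download() is a hypothetical helper, not git code. The walker itself
additionally replays the partial file through inflate/SHA1 first, so the
zlib stream and hash state match the bytes already on disk:

    #include <stdio.h>
    #include <curl/curl.h>

    static int resume_download(CURL *curl, const char *url, const char *path)
    {
            struct curl_slist *headers = NULL;
            char range[64];
            long have;
            CURLcode rc;
            FILE *out = fopen(path, "a");

            if (!out)
                    return -1;
            fseek(out, 0, SEEK_END);
            have = ftell(out);      /* bytes left by a previous attempt */

            curl_easy_setopt(curl, CURLOPT_URL, url);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
            if (have > 0) {
                    /* ask the server for the remainder only */
                    snprintf(range, sizeof(range), "Range: bytes=%ld-", have);
                    headers = curl_slist_append(headers, range);
                    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
            }
            rc = curl_easy_perform(curl);
            curl_slist_free_all(headers);
            fclose(out);
            return rc == CURLE_OK ? 0 : -1;
    }
)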
+static void finish_object_request(struct object_request *obj_req)
+{
+ struct stat st;
+
+ fchmod(obj_req->local, 0444);
+ close(obj_req->local); obj_req->local = -1;
+
+ if (obj_req->http_code == 416) {
+ fprintf(stderr, "Warning: requested range invalid; we may already have all the data.\n");
+ } else if (obj_req->curl_result != CURLE_OK) {
+ if (stat(obj_req->tmpfile, &st) == 0)
+ if (st.st_size == 0)
+ unlink(obj_req->tmpfile);
+ return;
+ }
+
+ inflateEnd(&obj_req->stream);
+ SHA1_Final(obj_req->real_sha1, &obj_req->c);
+ if (obj_req->zret != Z_STREAM_END) {
+ unlink(obj_req->tmpfile);
+ return;
+ }
+ if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
+ unlink(obj_req->tmpfile);
+ return;
+ }
+ obj_req->rename =
+ move_temp_to_file(obj_req->tmpfile, obj_req->filename);
+
+ if (obj_req->rename == 0)
+ walker_say(obj_req->walker, "got %s\n", sha1_to_hex(obj_req->sha1));
+}
+
+static void process_object_response(void *callback_data)
+{
+ struct object_request *obj_req =
+ (struct object_request *)callback_data;
+ struct walker *walker = obj_req->walker;
+ struct walker_data *data = walker->data;
+ struct alt_base *alt = data->alt;
+
+ obj_req->curl_result = obj_req->slot->curl_result;
+ obj_req->http_code = obj_req->slot->http_code;
+ obj_req->slot = NULL;
+ obj_req->state = COMPLETE;
+
+ /* Use alternates if necessary */
+ if (missing_target(obj_req)) {
+ fetch_alternates(walker, alt->base);
+ if (obj_req->repo->next != NULL) {
+ obj_req->repo =
+ obj_req->repo->next;
+ close(obj_req->local);
+ obj_req->local = -1;
+ start_object_request(walker, obj_req);
+ return;
+ }
+ }
+
+ finish_object_request(obj_req);
+}
+
+static void release_object_request(struct object_request *obj_req)
+{
+ struct object_request *entry = object_queue_head;
+
+ if (obj_req->local != -1)
+ error("fd leakage in release: %d", obj_req->local);
+ if (obj_req == object_queue_head) {
+ object_queue_head = obj_req->next;
+ } else {
+ while (entry->next != NULL && entry->next != obj_req)
+ entry = entry->next;
+ if (entry->next == obj_req)
+ entry->next = entry->next->next;
+ }
+
+ free(obj_req->url);
+ free(obj_req);
+}
+
+#ifdef USE_CURL_MULTI
+static int fill_active_slot(struct walker *walker)
+{
+ struct object_request *obj_req;
+
+ for (obj_req = object_queue_head; obj_req; obj_req = obj_req->next) {
+ if (obj_req->state == WAITING) {
+ if (has_sha1_file(obj_req->sha1))
+ obj_req->state = COMPLETE;
+ else {
+ start_object_request(walker, obj_req);
+ return 1;
+ }
+ }
+ }
+ return 0;
+}
+#endif
+
+static void prefetch(struct walker *walker, unsigned char *sha1)
+{
+ struct object_request *newreq;
+ struct object_request *tail;
+ struct walker_data *data = walker->data;
+ char *filename = sha1_file_name(sha1);
+
+ newreq = xmalloc(sizeof(*newreq));
+ newreq->walker = walker;
+ hashcpy(newreq->sha1, sha1);
+ newreq->repo = data->alt;
+ newreq->url = NULL;
+ newreq->local = -1;
+ newreq->state = WAITING;
+ snprintf(newreq->filename, sizeof(newreq->filename), "%s", filename);
+ snprintf(newreq->tmpfile, sizeof(newreq->tmpfile),
+ "%s.temp", filename);
+ newreq->slot = NULL;
+ newreq->next = NULL;
+
+ if (object_queue_head == NULL) {
+ object_queue_head = newreq;
+ } else {
+ tail = object_queue_head;
+ while (tail->next != NULL) {
+ tail = tail->next;
+ }
+ tail->next = newreq;
+ }
+
+#ifdef USE_CURL_MULTI
+ fill_active_slots();
+ step_active_slots();
+#endif
+}
+
+static int fetch_index(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ char *hex = sha1_to_hex(sha1);
+ char *filename;
+ char *url;
+ char tmpfile[PATH_MAX];
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+ struct walker_data *data = walker->data;
+
+ FILE *indexfile;
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ if (has_pack_index(sha1))
+ return 0;
+
+ if (walker->get_verbosely)
+ fprintf(stderr, "Getting index for pack %s\n", hex);
+
+ url = xmalloc(strlen(repo->base) + 64);
+ sprintf(url, "%s/objects/pack/pack-%s.idx", repo->base, hex);
+
+ filename = sha1_pack_index_name(sha1);
+ snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
+ indexfile = fopen(tmpfile, "a");
+ if (!indexfile)
+ return error("Unable to open local file %s for pack index",
+ filename);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, indexfile);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
+ slot->local = indexfile;
+
+ /* If there is data present from a previous transfer attempt,
+ resume where it left off */
+ prev_posn = ftell(indexfile);
+ if (prev_posn>0) {
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of index for pack %s at byte %ld\n",
+ hex, prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
+ }
+
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ fclose(indexfile);
+ return error("Unable to get pack index %s\n%s", url,
+ curl_errorstr);
+ }
+ } else {
+ fclose(indexfile);
+ return error("Unable to start request");
+ }
+
+ fclose(indexfile);
+
+ return move_temp_to_file(tmpfile, filename);
+}
+
+static int setup_index(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ struct packed_git *new_pack;
+ if (has_pack_file(sha1))
+ return 0; /* don't list this as something we can get */
+
+ if (fetch_index(walker, repo, sha1))
+ return -1;
+
+ new_pack = parse_pack_index(sha1);
+ new_pack->next = repo->packs;
+ repo->packs = new_pack;
+ return 0;
+}
+
+static void process_alternates_response(void *callback_data)
+{
+ struct alternates_request *alt_req =
+ (struct alternates_request *)callback_data;
+ struct walker *walker = alt_req->walker;
+ struct walker_data *cdata = walker->data;
+ struct active_request_slot *slot = alt_req->slot;
+ struct alt_base *tail = cdata->alt;
+ const char *base = alt_req->base;
+ static const char null_byte = '\0';
+ char *data;
+ int i = 0;
+
+ if (alt_req->http_specific) {
+ if (slot->curl_result != CURLE_OK ||
+ !alt_req->buffer->posn) {
+
+ /* Try reusing the slot to get non-http alternates */
+ alt_req->http_specific = 0;
+ sprintf(alt_req->url, "%s/objects/info/alternates",
+ base);
+ curl_easy_setopt(slot->curl, CURLOPT_URL,
+ alt_req->url);
+ active_requests++;
+ slot->in_use = 1;
+ if (slot->finished != NULL)
+ (*slot->finished) = 0;
+ if (!start_active_slot(slot)) {
+ cdata->got_alternates = -1;
+ slot->in_use = 0;
+ if (slot->finished != NULL)
+ (*slot->finished) = 1;
+ }
+ return;
+ }
+ } else if (slot->curl_result != CURLE_OK) {
+ if (!missing_target(slot)) {
+ cdata->got_alternates = -1;
+ return;
+ }
+ }
+
+ fwrite_buffer(&null_byte, 1, 1, alt_req->buffer);
+ alt_req->buffer->posn--;
+ data = alt_req->buffer->buffer;
+
+ while (i < alt_req->buffer->posn) {
+ int posn = i;
+ while (posn < alt_req->buffer->posn && data[posn] != '\n')
+ posn++;
+ if (data[posn] == '\n') {
+ int okay = 0;
+ int serverlen = 0;
+ struct alt_base *newalt;
+ char *target = NULL;
+ if (data[i] == '/') {
+ /* This counts
+ * http://git.host/pub/scm/linux.git/
+ * -----------here^
+ * so memcpy(dst, base, serverlen) will
+ * copy up to "...git.host".
+ */
+ const char *colon_ss = strstr(base,"://");
+ if (colon_ss) {
+ serverlen = (strchr(colon_ss + 3, '/')
+ - base);
+ okay = 1;
+ }
+ } else if (!memcmp(data + i, "../", 3)) {
+				/* Relative URL; chop the corresponding
+				 * number of subpaths from base (and ../
+				 * from data), and concatenate the result.
+				 *
+				 * The code first drops ../ from data, and
+				 * then drops one ../ from data and one path
+				 * from base. IOW, one more ../ is dropped
+				 * from data than paths are dropped from base.
+ *
+ * This is not wrong. The alternate in
+ * http://git.host/pub/scm/linux.git/
+ * to borrow from
+ * http://git.host/pub/scm/linus.git/
+ * is ../../linus.git/objects/. You need
+ * two ../../ to borrow from your direct
+ * neighbour.
+ */
+ i += 3;
+ serverlen = strlen(base);
+ while (i + 2 < posn &&
+ !memcmp(data + i, "../", 3)) {
+ do {
+ serverlen--;
+ } while (serverlen &&
+ base[serverlen - 1] != '/');
+ i += 3;
+ }
+ /* If the server got removed, give up. */
+ okay = strchr(base, ':') - base + 3 <
+ serverlen;
+ } else if (alt_req->http_specific) {
+ char *colon = strchr(data + i, ':');
+ char *slash = strchr(data + i, '/');
+ if (colon && slash && colon < data + posn &&
+ slash < data + posn && colon < slash) {
+ okay = 1;
+ }
+ }
+ /* skip "objects\n" at end */
+ if (okay) {
+ target = xmalloc(serverlen + posn - i - 6);
+ memcpy(target, base, serverlen);
+ memcpy(target + serverlen, data + i,
+ posn - i - 7);
+ target[serverlen + posn - i - 7] = 0;
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Also look at %s\n", target);
+ newalt = xmalloc(sizeof(*newalt));
+ newalt->next = NULL;
+ newalt->base = target;
+ newalt->got_indices = 0;
+ newalt->packs = NULL;
+
+ while (tail->next != NULL)
+ tail = tail->next;
+ tail->next = newalt;
+ }
+ }
+ i = posn + 1;
+ }
+
+ cdata->got_alternates = 1;
+}
+
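(The "../" arithmetic in process_alternates_response() is easiest to see on
the worked example from the comment above. A standalone sketch;
resolve_alternate() is hypothetical, and it ignores both the trailing
"objects" trimming and the sanity check against chopping past the server
name that the real code performs:

    #include <stdio.h>
    #include <string.h>

    static void resolve_alternate(const char *base, const char *rel,
                                  char *out, size_t outsz)
    {
            size_t serverlen = strlen(base);

            /* the first "../" is dropped without shortening base ... */
            if (!strncmp(rel, "../", 3))
                    rel += 3;
            /* ... each further "../" chops one path component off base */
            while (!strncmp(rel, "../", 3)) {
                    rel += 3;
                    do {
                            serverlen--;
                    } while (serverlen && base[serverlen - 1] != '/');
            }
            snprintf(out, outsz, "%.*s%s", (int)serverlen, base, rel);
    }

    int main(void)
    {
            char url[256];

            resolve_alternate("http://git.host/pub/scm/linux.git",
                              "../../linus.git/objects", url, sizeof(url));
            /* prints http://git.host/pub/scm/linus.git/objects */
            printf("%s\n", url);
            return 0;
    }
)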
+static void fetch_alternates(struct walker *walker, const char *base)
+{
+ struct buffer buffer;
+ char *url;
+ char *data;
+ struct active_request_slot *slot;
+ struct alternates_request alt_req;
+ struct walker_data *cdata = walker->data;
+
+ /* If another request has already started fetching alternates,
+ wait for them to arrive and return to processing this request's
+ curl message */
+#ifdef USE_CURL_MULTI
+ while (cdata->got_alternates == 0) {
+ step_active_slots();
+ }
+#endif
+
+ /* Nothing to do if they've already been fetched */
+ if (cdata->got_alternates == 1)
+ return;
+
+ /* Start the fetch */
+ cdata->got_alternates = 0;
+
+ data = xmalloc(4096);
+ buffer.size = 4096;
+ buffer.posn = 0;
+ buffer.buffer = data;
+
+ if (walker->get_verbosely)
+ fprintf(stderr, "Getting alternates list for %s\n", base);
+
+ url = xmalloc(strlen(base) + 31);
+ sprintf(url, "%s/objects/info/http-alternates", base);
+
+ /* Use a callback to process the result, since another request
+ may fail and need to have alternates loaded before continuing */
+ slot = get_active_slot();
+ slot->callback_func = process_alternates_response;
+ alt_req.walker = walker;
+ slot->callback_data = &alt_req;
+
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+
+ alt_req.base = base;
+ alt_req.url = url;
+ alt_req.buffer = &buffer;
+ alt_req.http_specific = 1;
+ alt_req.slot = slot;
+
+ if (start_active_slot(slot))
+ run_active_slot(slot);
+ else
+ cdata->got_alternates = -1;
+
+ free(data);
+ free(url);
+}
+
+static int fetch_indices(struct walker *walker, struct alt_base *repo)
+{
+ unsigned char sha1[20];
+ char *url;
+ struct buffer buffer;
+ char *data;
+ int i = 0;
+
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ if (repo->got_indices)
+ return 0;
+
+ data = xmalloc(4096);
+ buffer.size = 4096;
+ buffer.posn = 0;
+ buffer.buffer = data;
+
+ if (walker->get_verbosely)
+ fprintf(stderr, "Getting pack list for %s\n", repo->base);
+
+ url = xmalloc(strlen(repo->base) + 21);
+ sprintf(url, "%s/objects/info/packs", repo->base);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ if (missing_target(&results)) {
+ repo->got_indices = 1;
+ free(buffer.buffer);
+ return 0;
+ } else {
+ repo->got_indices = 0;
+ free(buffer.buffer);
+ return error("%s", curl_errorstr);
+ }
+ }
+ } else {
+ repo->got_indices = 0;
+ free(buffer.buffer);
+ return error("Unable to start request");
+ }
+
+ data = buffer.buffer;
+ while (i < buffer.posn) {
+ switch (data[i]) {
+ case 'P':
+ i++;
+ if (i + 52 <= buffer.posn &&
+ !prefixcmp(data + i, " pack-") &&
+ !prefixcmp(data + i + 46, ".pack\n")) {
+ get_sha1_hex(data + i + 6, sha1);
+ setup_index(walker, repo, sha1);
+ i += 51;
+ break;
+ }
+ default:
+ while (i < buffer.posn && data[i] != '\n')
+ i++;
+ }
+ i++;
+ }
+
+ free(buffer.buffer);
+ repo->got_indices = 1;
+ return 0;
+}
+
+static int fetch_pack(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ char *url;
+ struct packed_git *target;
+ struct packed_git **lst;
+ FILE *packfile;
+ char *filename;
+ char tmpfile[PATH_MAX];
+ int ret;
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+ struct walker_data *data = walker->data;
+
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ if (fetch_indices(walker, repo))
+ return -1;
+ target = find_sha1_pack(sha1, repo->packs);
+ if (!target)
+ return -1;
+
+ if (walker->get_verbosely) {
+ fprintf(stderr, "Getting pack %s\n",
+ sha1_to_hex(target->sha1));
+ fprintf(stderr, " which contains %s\n",
+ sha1_to_hex(sha1));
+ }
+
+ url = xmalloc(strlen(repo->base) + 65);
+ sprintf(url, "%s/objects/pack/pack-%s.pack",
+ repo->base, sha1_to_hex(target->sha1));
+
+ filename = sha1_pack_name(target->sha1);
+ snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
+ packfile = fopen(tmpfile, "a");
+ if (!packfile)
+ return error("Unable to open local file %s for pack",
+ filename);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, packfile);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
+ slot->local = packfile;
+
+ /* If there is data present from a previous transfer attempt,
+ resume where it left off */
+ prev_posn = ftell(packfile);
+ if (prev_posn>0) {
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of pack %s at byte %ld\n",
+ sha1_to_hex(target->sha1), prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
+ }
+
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ fclose(packfile);
+ return error("Unable to get pack file %s\n%s", url,
+ curl_errorstr);
+ }
+ } else {
+ fclose(packfile);
+ return error("Unable to start request");
+ }
+
+ target->pack_size = ftell(packfile);
+ fclose(packfile);
+
+ ret = move_temp_to_file(tmpfile, filename);
+ if (ret)
+ return ret;
+
+ lst = &repo->packs;
+ while (*lst != target)
+ lst = &((*lst)->next);
+ *lst = (*lst)->next;
+
+ if (verify_pack(target, 0))
+ return -1;
+ install_packed_git(target);
+
+ return 0;
+}
+
+static void abort_object_request(struct object_request *obj_req)
+{
+ if (obj_req->local >= 0) {
+ close(obj_req->local);
+ obj_req->local = -1;
+ }
+ unlink(obj_req->tmpfile);
+ if (obj_req->slot) {
+ release_active_slot(obj_req->slot);
+ obj_req->slot = NULL;
+ }
+ release_object_request(obj_req);
+}
+
+static int fetch_object(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ char *hex = sha1_to_hex(sha1);
+ int ret = 0;
+ struct object_request *obj_req = object_queue_head;
+
+ while (obj_req != NULL && hashcmp(obj_req->sha1, sha1))
+ obj_req = obj_req->next;
+ if (obj_req == NULL)
+ return error("Couldn't find request for %s in the queue", hex);
+
+ if (has_sha1_file(obj_req->sha1)) {
+ abort_object_request(obj_req);
+ return 0;
+ }
+
+#ifdef USE_CURL_MULTI
+ while (obj_req->state == WAITING) {
+ step_active_slots();
+ }
+#else
+ start_object_request(walker, obj_req);
+#endif
+
+ while (obj_req->state == ACTIVE) {
+ run_active_slot(obj_req->slot);
+ }
+ if (obj_req->local != -1) {
+ close(obj_req->local); obj_req->local = -1;
+ }
+
+ if (obj_req->state == ABORTED) {
+ ret = error("Request for %s aborted", hex);
+ } else if (obj_req->curl_result != CURLE_OK &&
+ obj_req->http_code != 416) {
+ if (missing_target(obj_req))
+ ret = -1; /* Be silent, it is probably in a pack. */
+ else
+ ret = error("%s (curl_result = %d, http_code = %ld, sha1 = %s)",
+ obj_req->errorstr, obj_req->curl_result,
+ obj_req->http_code, hex);
+ } else if (obj_req->zret != Z_STREAM_END) {
+ walker->corrupt_object_found++;
+ ret = error("File %s (%s) corrupt", hex, obj_req->url);
+ } else if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
+ ret = error("File %s has bad hash", hex);
+ } else if (obj_req->rename < 0) {
+ ret = error("unable to write sha1 filename %s",
+ obj_req->filename);
+ }
+
+ release_object_request(obj_req);
+ return ret;
+}
+
+static int fetch(struct walker *walker, unsigned char *sha1)
+{
+ struct walker_data *data = walker->data;
+ struct alt_base *altbase = data->alt;
+
+ if (!fetch_object(walker, altbase, sha1))
+ return 0;
+ while (altbase) {
+ if (!fetch_pack(walker, altbase, sha1))
+ return 0;
+ fetch_alternates(walker, data->alt->base);
+ altbase = altbase->next;
+ }
+ return error("Unable to find %s under %s", sha1_to_hex(sha1),
+ data->alt->base);
+}
+
+static inline int needs_quote(int ch)
+{
+ if (((ch >= 'A') && (ch <= 'Z'))
+ || ((ch >= 'a') && (ch <= 'z'))
+ || ((ch >= '0') && (ch <= '9'))
+ || (ch == '/')
+ || (ch == '-')
+ || (ch == '.'))
+ return 0;
+ return 1;
+}
+
+static inline int hex(int v)
+{
+ if (v < 10) return '0' + v;
+ else return 'A' + v - 10;
+}
+
+static char *quote_ref_url(const char *base, const char *ref)
+{
+ const char *cp;
+ char *dp, *qref;
+ int len, baselen, ch;
+
+ baselen = strlen(base);
+ len = baselen + 7; /* "/refs/" + NUL */
+ for (cp = ref; (ch = *cp) != 0; cp++, len++)
+ if (needs_quote(ch))
+ len += 2; /* extra two hex plus replacement % */
+ qref = xmalloc(len);
+ memcpy(qref, base, baselen);
+ memcpy(qref + baselen, "/refs/", 6);
+ for (cp = ref, dp = qref + baselen + 6; (ch = *cp) != 0; cp++) {
+ if (needs_quote(ch)) {
+ *dp++ = '%';
+ *dp++ = hex((ch >> 4) & 0xF);
+ *dp++ = hex(ch & 0xF);
+ }
+ else
+ *dp++ = ch;
+ }
+ *dp = 0;
+
+ return qref;
+}
+
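(quote_ref_url() percent-encodes anything outside [A-Za-z0-9/.-] so a ref
name can be spliced safely into a URL. A hypothetical driver, assuming
stdio.h/stdlib.h are included and the helpers above are in scope:

    int main(void)
    {
            char *url = quote_ref_url("http://git.host/repo.git",
                                      "heads/topic branch");
            puts(url); /* http://git.host/repo.git/refs/heads/topic%20branch */
            free(url);
            return 0;
    }
)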
+static int fetch_ref(struct walker *walker, char *ref, unsigned char *sha1)
+{
+ char *url;
+ char hex[42];
+ struct buffer buffer;
+ struct walker_data *data = walker->data;
+ const char *base = data->alt->base;
+ struct active_request_slot *slot;
+ struct slot_results results;
+ buffer.size = 41;
+ buffer.posn = 0;
+ buffer.buffer = hex;
+ hex[41] = '\0';
+
+ url = quote_ref_url(base, ref);
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK)
+ return error("Couldn't get %s for %s\n%s",
+ url, ref, curl_errorstr);
+ } else {
+ return error("Unable to start request");
+ }
+
+ hex[40] = '\0';
+ get_sha1_hex(hex, sha1);
+ return 0;
+}
+
+static void cleanup(struct walker *walker)
+{
+ struct walker_data *data = walker->data;
+ http_cleanup();
+
+ curl_slist_free_all(data->no_pragma_header);
+}
+
+struct walker *get_http_walker(const char *url)
+{
+ char *s;
+ struct walker_data *data = xmalloc(sizeof(struct walker_data));
+ struct walker *walker = xmalloc(sizeof(struct walker));
+
+ http_init();
+
+ data->no_pragma_header = curl_slist_append(NULL, "Pragma:");
+
+ data->alt = xmalloc(sizeof(*data->alt));
+ data->alt->base = xmalloc(strlen(url) + 1);
+ strcpy(data->alt->base, url);
+ for (s = data->alt->base + strlen(data->alt->base) - 1; *s == '/'; --s)
+ *s = 0;
+
+ data->alt->got_indices = 0;
+ data->alt->packs = NULL;
+ data->alt->next = NULL;
+ data->got_alternates = -1;
+
+ walker->corrupt_object_found = 0;
+ walker->fetch = fetch;
+ walker->fetch_ref = fetch_ref;
+ walker->prefetch = prefetch;
+ walker->cleanup = cleanup;
+ walker->data = data;
+
+#ifdef USE_CURL_MULTI
+ add_fill_function(walker, (int (*)(void *)) fill_active_slot);
+#endif
+
+ return walker;
+}
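(get_http_walker() packages what used to be git-http-fetch's globals behind
the walker interface. A caller would do roughly the following -- a sketch,
assuming walker.h declares the fetch/fetch_ref/prefetch/cleanup function
pointers and the get_verbosely flag exactly as they are used above:

    int fetch_one(const char *url, char *ref)
    {
            unsigned char sha1[20];
            struct walker *walker = get_http_walker(url);
            int ret = -1;

            walker->get_verbosely = 1;      /* flag read by the walker above */
            if (!walker->fetch_ref(walker, ref, sha1)) {
                    walker->prefetch(walker, sha1);     /* queue the object */
                    ret = walker->fetch(walker, sha1);  /* loose, then pack */
            }
            walker->cleanup(walker);
            return ret;
    }
)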
#endif
while (slot != NULL) {
+ struct active_request_slot *next = slot->next;
#ifdef USE_CURL_MULTI
if (slot->in_use) {
curl_easy_getinfo(slot->curl,
#endif
if (slot->curl != NULL)
curl_easy_cleanup(slot->curl);
- slot = slot->next;
+ free(slot);
+ slot = next;
}
+ active_queue_head = NULL;
#ifndef NO_CURL_EASY_DUPHANDLE
curl_easy_cleanup(curl_default);
curl_global_cleanup();
curl_slist_free_all(pragma_header);
- pragma_header = NULL;
+ pragma_header = NULL;
}
struct active_request_slot *get_active_slot(void)
{
#ifdef USE_CURL_MULTI
CURLMcode curlm_result = curl_multi_add_handle(curlm, slot->curl);
+ int num_transfers;
if (curlm_result != CURLM_OK &&
curlm_result != CURLM_CALL_MULTI_PERFORM) {
slot->in_use = 0;
return 0;
}
+
+ /*
+ * We know there must be something to do, since we just added
+ * something.
+ */
+ curl_multi_perform(curlm, &num_transfers);
#endif
return 1;
}
#ifdef USE_CURL_MULTI
+struct fill_chain {
+ void *data;
+ int (*fill)(void *);
+ struct fill_chain *next;
+};
+
+static struct fill_chain *fill_cfg = NULL;
+
+void add_fill_function(void *data, int (*fill)(void *))
+{
+	struct fill_chain *new = xmalloc(sizeof(*new));
+ struct fill_chain **linkp = &fill_cfg;
+ new->data = data;
+ new->fill = fill;
+ new->next = NULL;
+ while (*linkp)
+ linkp = &(*linkp)->next;
+ *linkp = new;
+}
+
+void fill_active_slots(void)
+{
+ struct active_request_slot *slot = active_queue_head;
+
+ while (active_requests < max_requests) {
+ struct fill_chain *fill;
+ for (fill = fill_cfg; fill; fill = fill->next)
+ if (fill->fill(fill->data))
+ break;
+
+ if (!fill)
+ break;
+ }
+
+ while (slot != NULL) {
+ if (!slot->in_use && slot->curl != NULL) {
+ curl_easy_cleanup(slot->curl);
+ slot->curl = NULL;
+ }
+ slot = slot->next;
+ }
+}
+
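(fill_active_slots() no longer knows about any particular request queue:
each user registers a callback that starts at most one request per call and
returns non-zero if it did, and the loop keeps cycling the chain until
max_requests slots are busy or no callback can start anything. A sketch of
a client; the job queue here is a hypothetical stand-in for whatever the
caller tracks:

    struct job { struct job *next; int started; };
    static struct job *job_queue;

    static int fill_one_job(void *data)
    {
            struct job *j;

            for (j = job_queue; j; j = j->next)
                    if (!j->started) {
                            j->started = 1; /* a real client starts a slot */
                            return 1;       /* started one request */
                    }
            return 0;                       /* nothing left to start */
    }

    /* at setup time: add_fill_function(NULL, fill_one_job); */
)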
void step_active_slots(void)
{
int num_transfers;
#ifdef USE_CURL_MULTI
extern void fill_active_slots(void);
+extern void add_fill_function(void *data, int (*fill)(void *));
extern void step_active_slots(void);
#endif
extern int data_received;
extern int active_requests;
-#ifdef USE_CURL_MULTI
-extern int max_requests;
-extern CURLM *curlm;
-#endif
#ifndef NO_CURL_EASY_DUPHANDLE
extern CURL *curl_default;
#endif
extern struct curl_slist *pragma_header;
extern struct curl_slist *no_range_header;
-extern struct active_request_slot *active_queue_head;
-
#endif /* HTTP_H */
+++ /dev/null
-/*
- * Copyright (C) 2005 Junio C Hamano
- */
-#include "cache.h"
-#include "commit.h"
-#include "fetch.h"
-
-static int use_link;
-static int use_symlink;
-static int use_filecopy = 1;
-static int commits_on_stdin;
-
-static const char *path; /* "Remote" git repository */
-
-void prefetch(unsigned char *sha1)
-{
-}
-
-static struct packed_git *packs;
-
-static void setup_index(unsigned char *sha1)
-{
- struct packed_git *new_pack;
- char filename[PATH_MAX];
- strcpy(filename, path);
- strcat(filename, "/objects/pack/pack-");
- strcat(filename, sha1_to_hex(sha1));
- strcat(filename, ".idx");
- new_pack = parse_pack_index_file(sha1, filename);
- new_pack->next = packs;
- packs = new_pack;
-}
-
-static int setup_indices(void)
-{
- DIR *dir;
- struct dirent *de;
- char filename[PATH_MAX];
- unsigned char sha1[20];
- sprintf(filename, "%s/objects/pack/", path);
- dir = opendir(filename);
- if (!dir)
- return -1;
- while ((de = readdir(dir)) != NULL) {
- int namelen = strlen(de->d_name);
- if (namelen != 50 ||
- !has_extension(de->d_name, ".pack"))
- continue;
- get_sha1_hex(de->d_name + 5, sha1);
- setup_index(sha1);
- }
- closedir(dir);
- return 0;
-}
-
-static int copy_file(const char *source, char *dest, const char *hex,
- int warn_if_not_exists)
-{
- safe_create_leading_directories(dest);
- if (use_link) {
- if (!link(source, dest)) {
- pull_say("link %s\n", hex);
- return 0;
- }
- /* If we got ENOENT there is no point continuing. */
- if (errno == ENOENT) {
- if (!warn_if_not_exists)
- return -1;
- return error("does not exist %s", source);
- }
- }
- if (use_symlink) {
- struct stat st;
- if (stat(source, &st)) {
- if (!warn_if_not_exists && errno == ENOENT)
- return -1;
- return error("cannot stat %s: %s", source,
- strerror(errno));
- }
- if (!symlink(source, dest)) {
- pull_say("symlink %s\n", hex);
- return 0;
- }
- }
- if (use_filecopy) {
- int ifd, ofd, status = 0;
-
- ifd = open(source, O_RDONLY);
- if (ifd < 0) {
- if (!warn_if_not_exists && errno == ENOENT)
- return -1;
- return error("cannot open %s", source);
- }
- ofd = open(dest, O_WRONLY | O_CREAT | O_EXCL, 0666);
- if (ofd < 0) {
- close(ifd);
- return error("cannot open %s", dest);
- }
- status = copy_fd(ifd, ofd);
- close(ofd);
- if (status)
- return error("cannot write %s", dest);
- pull_say("copy %s\n", hex);
- return 0;
- }
- return error("failed to copy %s with given copy methods.", hex);
-}
-
-static int fetch_pack(const unsigned char *sha1)
-{
- struct packed_git *target;
- char filename[PATH_MAX];
- if (setup_indices())
- return -1;
- target = find_sha1_pack(sha1, packs);
- if (!target)
- return error("Couldn't find %s: not separate or in any pack",
- sha1_to_hex(sha1));
- if (get_verbosely) {
- fprintf(stderr, "Getting pack %s\n",
- sha1_to_hex(target->sha1));
- fprintf(stderr, " which contains %s\n",
- sha1_to_hex(sha1));
- }
- sprintf(filename, "%s/objects/pack/pack-%s.pack",
- path, sha1_to_hex(target->sha1));
- copy_file(filename, sha1_pack_name(target->sha1),
- sha1_to_hex(target->sha1), 1);
- sprintf(filename, "%s/objects/pack/pack-%s.idx",
- path, sha1_to_hex(target->sha1));
- copy_file(filename, sha1_pack_index_name(target->sha1),
- sha1_to_hex(target->sha1), 1);
- install_packed_git(target);
- return 0;
-}
-
-static int fetch_file(const unsigned char *sha1)
-{
- static int object_name_start = -1;
- static char filename[PATH_MAX];
- char *hex = sha1_to_hex(sha1);
- char *dest_filename = sha1_file_name(sha1);
-
- if (object_name_start < 0) {
- strcpy(filename, path); /* e.g. git.git */
- strcat(filename, "/objects/");
- object_name_start = strlen(filename);
- }
- filename[object_name_start+0] = hex[0];
- filename[object_name_start+1] = hex[1];
- filename[object_name_start+2] = '/';
- strcpy(filename + object_name_start + 3, hex + 2);
- return copy_file(filename, dest_filename, hex, 0);
-}
-
-int fetch(unsigned char *sha1)
-{
- if (has_sha1_file(sha1))
- return 0;
- else
- return fetch_file(sha1) && fetch_pack(sha1);
-}
-
-int fetch_ref(char *ref, unsigned char *sha1)
-{
- static int ref_name_start = -1;
- static char filename[PATH_MAX];
- static char hex[41];
- int ifd;
-
- if (ref_name_start < 0) {
- sprintf(filename, "%s/refs/", path);
- ref_name_start = strlen(filename);
- }
- strcpy(filename + ref_name_start, ref);
- ifd = open(filename, O_RDONLY);
- if (ifd < 0) {
- close(ifd);
- return error("cannot open %s", filename);
- }
- if (read_in_full(ifd, hex, 40) != 40 || get_sha1_hex(hex, sha1)) {
- close(ifd);
- return error("cannot read from %s", filename);
- }
- close(ifd);
- pull_say("ref %s\n", sha1_to_hex(sha1));
- return 0;
-}
-
-static const char local_pull_usage[] =
-"git-local-fetch [-c] [-t] [-a] [-v] [-w filename] [--recover] [-l] [-s] [-n] [--stdin] commit-id path";
-
-/*
- * By default we only use file copy.
- * If -l is specified, a hard link is attempted.
- * If -s is specified, then a symlink is attempted.
- * If -n is _not_ specified, then a regular file-to-file copy is done.
- */
-int main(int argc, const char **argv)
-{
- int commits;
- const char **write_ref = NULL;
- char **commit_id;
- int arg = 1;
-
- setup_git_directory();
- git_config(git_default_config);
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 't')
- get_tree = 1;
- else if (argv[arg][1] == 'c')
- get_history = 1;
- else if (argv[arg][1] == 'a') {
- get_all = 1;
- get_tree = 1;
- get_history = 1;
- }
- else if (argv[arg][1] == 'l')
- use_link = 1;
- else if (argv[arg][1] == 's')
- use_symlink = 1;
- else if (argv[arg][1] == 'n')
- use_filecopy = 0;
- else if (argv[arg][1] == 'v')
- get_verbosely = 1;
- else if (argv[arg][1] == 'w')
- write_ref = &argv[++arg];
- else if (!strcmp(argv[arg], "--recover"))
- get_recover = 1;
- else if (!strcmp(argv[arg], "--stdin"))
- commits_on_stdin = 1;
- else
- usage(local_pull_usage);
- arg++;
- }
- if (argc < arg + 2 - commits_on_stdin)
- usage(local_pull_usage);
- if (commits_on_stdin) {
- commits = pull_targets_stdin(&commit_id, &write_ref);
- } else {
- commit_id = (char **) &argv[arg++];
- commits = 1;
- }
- path = argv[arg];
-
- if (pull(commits, commit_id, write_ref, path))
- return 1;
-
- if (commits_on_stdin)
- pull_targets_free(commits, commit_id, write_ref);
-
- return 0;
-}
const unsigned char *hash2,
int *best_score,
char **best_match,
- char *base,
+ const char *base,
int recurse_limit)
{
struct tree_desc one;
{
struct commit_list *iter;
struct commit *merged_common_ancestors;
- struct tree *mrtree;
+ struct tree *mrtree = mrtree;
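+	/* self-assignment works around a bogus gcc
+	 * "may be used uninitialized" warning */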
int clean;
if (show(4)) {
SHA1_Final(pack_file_sha1, &c);
write_or_die(pack_fd, pack_file_sha1, 20);
}
+
+char *index_pack_lockfile(int ip_out)
+{
+ int len, s;
+ char packname[46];
+
+ /*
+	 * The first thing we expect from index-pack's output
+ * is "pack\t%40s\n" or "keep\t%40s\n" (46 bytes) where
+ * %40s is the newly created pack SHA1 name. In the "keep"
+ * case, we need it to remove the corresponding .keep file
+ * later on. If we don't get that then tough luck with it.
+ */
+ for (len = 0;
+ len < 46 && (s = xread(ip_out, packname+len, 46-len)) > 0;
+ len += s);
+ if (len == 46 && packname[45] == '\n' &&
+ memcmp(packname, "keep\t", 5) == 0) {
+ char path[PATH_MAX];
+ packname[45] = 0;
+ snprintf(path, sizeof(path), "%s/pack/pack-%s.keep",
+ get_object_directory(), packname + 5);
+ return xstrdup(path);
+ }
+ return NULL;
+}
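(The function reads exactly 46 bytes and only acts on the "keep" form. The
parse can be illustrated on canned input -- a hypothetical test, not git
code:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char line[] =
                    "keep\t0123456789abcdef0123456789abcdef01234567\n";

            if (strlen(line) == 46 && line[45] == '\n' &&
                !memcmp(line, "keep\t", 5))
                    printf(".../pack/pack-%.40s.keep\n", line + 5);
            return 0;
    }
)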
extern int verify_pack(struct packed_git *, int);
extern void fixup_pack_header_footer(int, unsigned char *, const char *, uint32_t);
+extern char *index_pack_lockfile(int fd);
#define PH_ERROR_EOF (-1)
#define PH_ERROR_PACK_SIGNATURE (-2)
struct ref_lock *lock;
if (!prefixcmp(name, "refs/") && check_ref_format(name + 5)) {
- error("refusing to create funny ref '%s' locally", name);
+ error("refusing to create funny ref '%s' remotely", name);
return "funny refname";
}
}
} else {
const char *keeper[6];
- int s, len, status;
+ int s, status;
char keep_arg[256];
- char packname[46];
struct child_process ip;
s = sprintf(keep_arg, "--keep=receive-pack %i on ", getpid());
ip.git_cmd = 1;
if (start_command(&ip))
return "index-pack fork failed";
-
- /*
-	 * The first thing we expect from index-pack's output
- * is "pack\t%40s\n" or "keep\t%40s\n" (46 bytes) where
- * %40s is the newly created pack SHA1 name. In the "keep"
- * case, we need it to remove the corresponding .keep file
- * later on. If we don't get that then tough luck with it.
- */
- for (len = 0;
- len < 46 && (s = xread(ip.out, packname+len, 46-len)) > 0;
- len += s);
- if (len == 46 && packname[45] == '\n' &&
- memcmp(packname, "keep\t", 5) == 0) {
- char path[PATH_MAX];
- packname[45] = 0;
- snprintf(path, sizeof(path), "%s/pack/pack-%s.keep",
- get_object_directory(), packname + 5);
- pack_lockfile = xstrdup(path);
- }
-
+ pack_lockfile = index_pack_lockfile(ip.out);
status = finish_command(&ip);
if (!status) {
reprepare_packed_git();
#include "refs.h"
#include "object.h"
#include "tag.h"
+#include "dir.h"
/* ISSYMREF=01 and ISPACKED=02 are public interfaces */
#define REF_KNOWS_PEELED 04
return lock;
}
-static int remove_empty_dir_recursive(char *path, int len)
-{
- DIR *dir = opendir(path);
- struct dirent *e;
- int ret = 0;
-
- if (!dir)
- return -1;
- if (path[len-1] != '/')
- path[len++] = '/';
- while ((e = readdir(dir)) != NULL) {
- struct stat st;
- int namlen;
- if ((e->d_name[0] == '.') &&
- ((e->d_name[1] == 0) ||
- ((e->d_name[1] == '.') && e->d_name[2] == 0)))
- continue; /* "." and ".." */
-
- namlen = strlen(e->d_name);
- if ((len + namlen < PATH_MAX) &&
- strcpy(path + len, e->d_name) &&
- !lstat(path, &st) &&
- S_ISDIR(st.st_mode) &&
- !remove_empty_dir_recursive(path, len + namlen))
- continue; /* happy */
-
- /* path too long, stat fails, or non-directory still exists */
- ret = -1;
- break;
- }
- closedir(dir);
- if (!ret) {
- path[len] = 0;
- ret = rmdir(path);
- }
- return ret;
-}
-
-static int remove_empty_directories(char *file)
+static int remove_empty_directories(const char *file)
{
/* we want to create a file but there is a directory there;
* if that is an empty directory (or a directory that contains
* only empty directories), remove them.
*/
- char path[PATH_MAX];
- int len = strlen(file);
+ struct strbuf path;
+ int result;
- if (len >= PATH_MAX) /* path too long ;-) */
- return -1;
- strcpy(path, file);
- return remove_empty_dir_recursive(path, len);
+ strbuf_init(&path, 20);
+ strbuf_addstr(&path, file);
+
+ result = remove_dir_recursively(&path, 1);
+
+ strbuf_release(&path);
+
+ return result;
}
static int is_refname_available(const char *ref, const char *oldref,
static struct remote **remotes;
static int allocated_remotes;
+static struct branch **branches;
+static int allocated_branches;
+
+static struct branch *current_branch;
+static const char *default_remote_name;
+
#define BUF_SIZE (2048)
static char buffer[BUF_SIZE];
remote->fetch_refspec_nr = nr;
}
-static void add_uri(struct remote *remote, const char *uri)
+static void add_url(struct remote *remote, const char *url)
{
- int nr = remote->uri_nr + 1;
- remote->uri =
- xrealloc(remote->uri, nr * sizeof(char *));
- remote->uri[nr-1] = uri;
- remote->uri_nr = nr;
+ int nr = remote->url_nr + 1;
+ remote->url =
+ xrealloc(remote->url, nr * sizeof(char *));
+ remote->url[nr-1] = url;
+ remote->url_nr = nr;
}
static struct remote *make_remote(const char *name, int len)
return remotes[empty];
}
+static void add_merge(struct branch *branch, const char *name)
+{
+ int nr = branch->merge_nr + 1;
+ branch->merge_name =
+ xrealloc(branch->merge_name, nr * sizeof(char *));
+ branch->merge_name[nr-1] = name;
+ branch->merge_nr = nr;
+}
+
+static struct branch *make_branch(const char *name, int len)
+{
+ int i, empty = -1;
+ char *refname;
+
+ for (i = 0; i < allocated_branches; i++) {
+ if (!branches[i]) {
+ if (empty < 0)
+ empty = i;
+ } else {
+ if (len ? (!strncmp(name, branches[i]->name, len) &&
+ !branches[i]->name[len]) :
+ !strcmp(name, branches[i]->name))
+ return branches[i];
+ }
+ }
+
+ if (empty < 0) {
+ empty = allocated_branches;
+ allocated_branches += allocated_branches ? allocated_branches : 1;
+ branches = xrealloc(branches,
+ sizeof(*branches) * allocated_branches);
+ memset(branches + empty, 0,
+ (allocated_branches - empty) * sizeof(*branches));
+ }
+ branches[empty] = xcalloc(1, sizeof(struct branch));
+ if (len)
+ branches[empty]->name = xstrndup(name, len);
+ else
+ branches[empty]->name = xstrdup(name);
+	refname = xmalloc(strlen(name) + strlen("refs/heads/") + 1);
+ strcpy(refname, "refs/heads/");
+ strcpy(refname + strlen("refs/heads/"),
+ branches[empty]->name);
+ branches[empty]->refname = refname;
+
+ return branches[empty];
+}
+
static void read_remotes_file(struct remote *remote)
{
FILE *f = fopen(git_path("remotes/%s", remote->name), "r");
switch (value_list) {
case 0:
- add_uri(remote, xstrdup(s));
+ add_url(remote, xstrdup(s));
break;
case 1:
add_push_refspec(remote, xstrdup(s));
static void read_branches_file(struct remote *remote)
{
const char *slash = strchr(remote->name, '/');
+ char *frag;
+ char *branch;
int n = slash ? slash - remote->name : 1000;
FILE *f = fopen(git_path("branches/%.*s", n, remote->name), "r");
char *s, *p;
strcpy(p, s);
if (slash)
strcat(p, slash);
- add_uri(remote, p);
+ frag = strchr(p, '#');
+ if (frag) {
+ *(frag++) = '\0';
+ branch = xmalloc(strlen(frag) + 12);
+ strcpy(branch, "refs/heads/");
+ strcat(branch, frag);
+ } else {
+ branch = "refs/heads/master";
+ }
+ add_url(remote, p);
+ add_fetch_refspec(remote, branch);
+ remote->fetch_tags = 1; /* always auto-follow */
}
-static char *default_remote_name = NULL;
-static const char *current_branch = NULL;
-static int current_branch_len = 0;
-
static int handle_config(const char *key, const char *value)
{
const char *name;
const char *subkey;
struct remote *remote;
- if (!prefixcmp(key, "branch.") && current_branch &&
- !strncmp(key + 7, current_branch, current_branch_len) &&
- !strcmp(key + 7 + current_branch_len, ".remote")) {
- free(default_remote_name);
- default_remote_name = xstrdup(value);
+ struct branch *branch;
+ if (!prefixcmp(key, "branch.")) {
+ name = key + 7;
+ subkey = strrchr(name, '.');
+ branch = make_branch(name, subkey - name);
+ if (!subkey)
+ return 0;
+ if (!value)
+ return 0;
+ if (!strcmp(subkey, ".remote")) {
+ branch->remote_name = xstrdup(value);
+ if (branch == current_branch)
+ default_remote_name = branch->remote_name;
+ } else if (!strcmp(subkey, ".merge"))
+ add_merge(branch, xstrdup(value));
+ return 0;
}
if (prefixcmp(key, "remote."))
return 0;
return 0; /* ignore unknown booleans */
}
if (!strcmp(subkey, ".url")) {
- add_uri(remote, xstrdup(value));
+ add_url(remote, xstrdup(value));
} else if (!strcmp(subkey, ".push")) {
add_push_refspec(remote, xstrdup(value));
} else if (!strcmp(subkey, ".fetch")) {
remote->receivepack = xstrdup(value);
else
error("more than one receivepack given, using the first");
+ } else if (!strcmp(subkey, ".uploadpack")) {
+ if (!remote->uploadpack)
+ remote->uploadpack = xstrdup(value);
+ else
+ error("more than one uploadpack given, using the first");
+ } else if (!strcmp(subkey, ".tagopt")) {
+ if (!strcmp(value, "--no-tags"))
+ remote->fetch_tags = -1;
}
return 0;
}
head_ref = resolve_ref("HEAD", sha1, 0, &flag);
if (head_ref && (flag & REF_ISSYMREF) &&
!prefixcmp(head_ref, "refs/heads/")) {
- current_branch = head_ref + strlen("refs/heads/");
- current_branch_len = strlen(current_branch);
+ current_branch =
+ make_branch(head_ref + strlen("refs/heads/"), 0);
}
git_config(handle_config);
}
-static struct refspec *parse_ref_spec(int nr_refspec, const char **refspec)
+struct refspec *parse_ref_spec(int nr_refspec, const char **refspec)
{
int i;
struct refspec *rs = xcalloc(sizeof(*rs), nr_refspec);
name = default_remote_name;
ret = make_remote(name, 0);
if (name[0] != '/') {
- if (!ret->uri)
+ if (!ret->url)
read_remotes_file(ret);
- if (!ret->uri)
+ if (!ret->url)
read_branches_file(ret);
}
- if (!ret->uri)
- add_uri(ret, name);
- if (!ret->uri)
+ if (!ret->url)
+ add_url(ret, name);
+ if (!ret->url)
return NULL;
ret->fetch = parse_ref_spec(ret->fetch_refspec_nr, ret->fetch_refspec);
ret->push = parse_ref_spec(ret->push_refspec_nr, ret->push_refspec);
return result;
}
-int remote_has_uri(struct remote *remote, const char *uri)
+void ref_remove_duplicates(struct ref *ref_map)
+{
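+	/*
+	 * For each ref, drop any later entry that would update the same
+	 * local ref (peer_ref); two different remote refs feeding one
+	 * local tracking ref is an error.
+	 */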
+ struct ref **posn;
+ struct ref *next;
+ for (; ref_map; ref_map = ref_map->next) {
+ if (!ref_map->peer_ref)
+ continue;
+ posn = &ref_map->next;
+ while (*posn) {
+ if ((*posn)->peer_ref &&
+ !strcmp((*posn)->peer_ref->name,
+ ref_map->peer_ref->name)) {
+ if (strcmp((*posn)->name, ref_map->name))
+ die("%s tracks both %s and %s",
+ ref_map->peer_ref->name,
+ (*posn)->name, ref_map->name);
+ next = (*posn)->next;
+ free((*posn)->peer_ref);
+ free(*posn);
+ *posn = next;
+ } else {
+ posn = &(*posn)->next;
+ }
+ }
+ }
+}
+
+int remote_has_url(struct remote *remote, const char *url)
{
int i;
- for (i = 0; i < remote->uri_nr; i++) {
- if (!strcmp(remote->uri[i], uri))
+ for (i = 0; i < remote->url_nr; i++) {
+ if (!strcmp(remote->url[i], url))
return 1;
}
return 0;
}
+/*
+ * Returns true if, under the matching rules for fetching, name is the
+ * same as the given full name.
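+ *
+ * A few assumed examples, following the rules below:
+ *   "master"       matches "refs/heads/master"
+ *   "heads/master" matches "refs/heads/master"
+ *   "v1.0"         does not match "refs/tags/v1.0" ("tags/v1.0" would)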
+ */
+static int ref_matches_abbrev(const char *name, const char *full)
+{
+ if (!prefixcmp(name, "refs/") || !strcmp(name, "HEAD"))
+ return !strcmp(name, full);
+ if (prefixcmp(full, "refs/"))
+ return 0;
+ if (!prefixcmp(name, "heads/") ||
+ !prefixcmp(name, "tags/") ||
+ !prefixcmp(name, "remotes/"))
+ return !strcmp(name, full + 5);
+ if (prefixcmp(full + 5, "heads/"))
+ return 0;
+ return !strcmp(full + 11, name);
+}
+
int remote_find_tracking(struct remote *remote, struct refspec *refspec)
{
int find_src = refspec->src == NULL;
int i;
if (find_src) {
- if (refspec->dst == NULL)
+ if (!refspec->dst)
return error("find_tracking: need either src or dst");
needle = refspec->dst;
result = &refspec->src;
return ret;
}
+static struct ref *copy_ref(struct ref *ref)
+{
+ struct ref *ret = xmalloc(sizeof(struct ref) + strlen(ref->name) + 1);
+ memcpy(ret, ref, sizeof(struct ref) + strlen(ref->name) + 1);
+ ret->next = NULL;
+ return ret;
+}
+
void free_refs(struct ref *ref)
{
struct ref *next;
* way to delete 'other' ref at the remote end.
*/
matched_src = try_explicit_object_name(rs->src);
- if (matched_src)
- break;
- error("src refspec %s does not match any.",
- rs->src);
+ if (!matched_src)
+ error("src refspec %s does not match any.", rs->src);
break;
default:
matched_src = NULL;
- error("src refspec %s matches more than one.",
- rs->src);
+ error("src refspec %s matches more than one.", rs->src);
break;
}
if (!matched_src)
errs = 1;
- if (dst_value == NULL)
+ if (!dst_value) {
+ if (!matched_src)
+ return errs;
dst_value = matched_src->name;
+ }
switch (count_refspec_match(dst_value, dst, &matched_dst)) {
case 1:
dst_value);
break;
}
- if (errs || matched_dst == NULL)
+ if (errs || !matched_dst)
return 1;
if (matched_dst->peer_ref) {
errs = 1;
hashcpy(dst_peer->new_sha1, src->new_sha1);
}
dst_peer->peer_ref = src;
+ if (pat)
+ dst_peer->force = pat->force;
free_name:
free(dst_name);
}
return 0;
}
+
+struct branch *branch_get(const char *name)
+{
+ struct branch *ret;
+
+ read_config();
+ if (!name || !*name || !strcmp(name, "HEAD"))
+ ret = current_branch;
+ else
+ ret = make_branch(name, 0);
+ if (ret && ret->remote_name) {
+ ret->remote = remote_get(ret->remote_name);
+ if (ret->merge_nr) {
+ int i;
+			ret->merge = xcalloc(ret->merge_nr,
+					     sizeof(*ret->merge));
+ for (i = 0; i < ret->merge_nr; i++) {
+ ret->merge[i] = xcalloc(1, sizeof(**ret->merge));
+ ret->merge[i]->src = xstrdup(ret->merge_name[i]);
+ remote_find_tracking(ret->remote,
+ ret->merge[i]);
+ }
+ }
+ }
+ return ret;
+}
+
+int branch_has_merge_config(struct branch *branch)
+{
+ return branch && !!branch->merge;
+}
+
+int branch_merge_matches(struct branch *branch,
+ int i,
+ const char *refname)
+{
+ if (!branch || i < 0 || i >= branch->merge_nr)
+ return 0;
+ return ref_matches_abbrev(branch->merge[i]->src, refname);
+}
+
+static struct ref *get_expanded_map(struct ref *remote_refs,
+ const struct refspec *refspec)
+{
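+	/*
+	 * Expand a wildcard (pattern) refspec against every remote ref
+	 * that shares the src prefix, e.g. mapping refs/heads/X under
+	 * src to refs/remotes/origin/X under dst for a typical remote.
+	 */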
+ struct ref *ref;
+ struct ref *ret = NULL;
+ struct ref **tail = &ret;
+
+ int remote_prefix_len = strlen(refspec->src);
+ int local_prefix_len = strlen(refspec->dst);
+
+ for (ref = remote_refs; ref; ref = ref->next) {
+ if (strchr(ref->name, '^'))
+ continue; /* a dereference item */
+ if (!prefixcmp(ref->name, refspec->src)) {
+ char *match;
+ struct ref *cpy = copy_ref(ref);
+ match = ref->name + remote_prefix_len;
+
+ cpy->peer_ref = alloc_ref(local_prefix_len +
+ strlen(match) + 1);
+ sprintf(cpy->peer_ref->name, "%s%s",
+ refspec->dst, match);
+ if (refspec->force)
+ cpy->peer_ref->force = 1;
+ *tail = cpy;
+ tail = &cpy->next;
+ }
+ }
+
+ return ret;
+}
+
+static struct ref *find_ref_by_name_abbrev(struct ref *refs, const char *name)
+{
+ struct ref *ref;
+ for (ref = refs; ref; ref = ref->next) {
+ if (ref_matches_abbrev(name, ref->name))
+ return ref;
+ }
+ return NULL;
+}
+
+struct ref *get_remote_ref(struct ref *remote_refs, const char *name)
+{
+ struct ref *ref = find_ref_by_name_abbrev(remote_refs, name);
+
+ if (!ref)
+ return NULL;
+
+ return copy_ref(ref);
+}
+
+static struct ref *get_local_ref(const char *name)
+{
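+	/* Qualify an abbreviated name into a full refname; bare names
+	 * default to refs/heads/. */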
+ struct ref *ret;
+ if (!name)
+ return NULL;
+
+ if (!prefixcmp(name, "refs/")) {
+ ret = alloc_ref(strlen(name) + 1);
+ strcpy(ret->name, name);
+ return ret;
+ }
+
+ if (!prefixcmp(name, "heads/") ||
+ !prefixcmp(name, "tags/") ||
+ !prefixcmp(name, "remotes/")) {
+ ret = alloc_ref(strlen(name) + 6);
+ sprintf(ret->name, "refs/%s", name);
+ return ret;
+ }
+
+ ret = alloc_ref(strlen(name) + 12);
+ sprintf(ret->name, "refs/heads/%s", name);
+ return ret;
+}
+
+int get_fetch_map(struct ref *remote_refs,
+ const struct refspec *refspec,
+ struct ref ***tail,
+ int missing_ok)
+{
+ struct ref *ref_map, *rm;
+
+ if (refspec->pattern) {
+ ref_map = get_expanded_map(remote_refs, refspec);
+ } else {
+ const char *name = refspec->src[0] ? refspec->src : "HEAD";
+
+ ref_map = get_remote_ref(remote_refs, name);
+ if (!missing_ok && !ref_map)
+ die("Couldn't find remote ref %s", name);
+ if (ref_map) {
+ ref_map->peer_ref = get_local_ref(refspec->dst);
+ if (ref_map->peer_ref && refspec->force)
+ ref_map->peer_ref->force = 1;
+ }
+ }
+
+ for (rm = ref_map; rm; rm = rm->next) {
+ if (rm->peer_ref && check_ref_format(rm->peer_ref->name + 5))
+ die("* refusing to create funny ref '%s' locally",
+ rm->peer_ref->name);
+ }
+
+ if (ref_map)
+ tail_link_ref(ref_map, tail);
+
+ return 0;
+}
struct remote {
const char *name;
- const char **uri;
- int uri_nr;
+ const char **url;
+ int url_nr;
const char **push_refspec;
struct refspec *push;
struct refspec *fetch;
int fetch_refspec_nr;
+ /*
+ * -1 to never fetch tags
+ * 0 to auto-follow tags on heuristic (default)
+ * 1 to always auto-follow tags
+ * 2 to always fetch tags
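+	 * ("remote.<name>.tagopt --no-tags" sets -1; remotes read from a
+	 * branches/ file always get 1)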
+ */
+ int fetch_tags;
+
const char *receivepack;
+ const char *uploadpack;
};
struct remote *remote_get(const char *name);
typedef int each_remote_fn(struct remote *remote, void *priv);
int for_each_remote(each_remote_fn fn, void *priv);
-int remote_has_uri(struct remote *remote, const char *uri);
+int remote_has_url(struct remote *remote, const char *url);
struct refspec {
unsigned force : 1;
*/
void free_refs(struct ref *ref);
+/*
+ * Removes and frees any duplicate refs in the map.
+ */
+void ref_remove_duplicates(struct ref *ref_map);
+
+struct refspec *parse_ref_spec(int nr_refspec, const char **refspec);
+
int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
int nr_refspec, char **refspec, int all);
+/*
+ * Given a list of the remote refs and the specification of things to
+ * fetch, makes a (separate) list of the refs to fetch and the local
+ * refs to store into.
+ *
+ * *tail is the pointer to the tail pointer of the list of results
+ * beforehand, and will be set to the tail pointer of the list of
+ * results afterward.
+ *
+ * missing_ok is usually false, but when we are adding branch.$name.merge
+ * it is OK if the branch no longer exists at the remote.
+ */
+int get_fetch_map(struct ref *remote_refs, const struct refspec *refspec,
+ struct ref ***tail, int missing_ok);
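+
+/*
+ * A hypothetical caller accumulating the maps of several refspecs:
+ *
+ *	struct ref *ref_map = NULL, **tail = &ref_map;
+ *	for (i = 0; i < refspec_nr; i++)
+ *		get_fetch_map(remote_refs, &refspec[i], &tail, 0);
+ *
+ * leaves ref_map pointing at the combined list, with tail ready for
+ * further appends.
+ */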
+
+struct ref *get_remote_ref(struct ref *remote_refs, const char *name);
+
/*
* For the given remote, reads the refspec's src and sets the other fields.
*/
int remote_find_tracking(struct remote *remote, struct refspec *refspec);
+struct branch {
+ const char *name;
+ const char *refname;
+
+ const char *remote_name;
+ struct remote *remote;
+
+ const char **merge_name;
+ struct refspec **merge;
+ int merge_nr;
+};
+
+struct branch *branch_get(const char *name);
+
+int branch_has_merge_config(struct branch *branch);
+int branch_merge_matches(struct branch *, int n, const char *);
+
#endif
+++ /dev/null
-#include "cache.h"
-#include "rsh.h"
-#include "quote.h"
-
-#define COMMAND_SIZE 4096
-
-int setup_connection(int *fd_in, int *fd_out, const char *remote_prog,
- char *url, int rmt_argc, char **rmt_argv)
-{
- char *host;
- char *path;
- int sv[2];
- int i;
- pid_t pid;
- struct strbuf cmd;
-
- if (!strcmp(url, "-")) {
- *fd_in = 0;
- *fd_out = 1;
- return 0;
- }
-
- host = strstr(url, "//");
- if (host) {
- host += 2;
- path = strchr(host, '/');
- } else {
- host = url;
- path = strchr(host, ':');
- if (path)
- *(path++) = '\0';
- }
- if (!path) {
- return error("Bad URL: %s", url);
- }
-
- /* $GIT_RSH <host> "env GIT_DIR=<path> <remote_prog> <args...>" */
- strbuf_init(&cmd, COMMAND_SIZE);
- strbuf_addstr(&cmd, "env ");
- strbuf_addstr(&cmd, GIT_DIR_ENVIRONMENT "=");
- sq_quote_buf(&cmd, path);
- strbuf_addch(&cmd, ' ');
- sq_quote_buf(&cmd, remote_prog);
-
- for (i = 0 ; i < rmt_argc ; i++) {
- strbuf_addch(&cmd, ' ');
- sq_quote_buf(&cmd, rmt_argv[i]);
- }
-
- strbuf_addstr(&cmd, " -");
-
- if (cmd.len >= COMMAND_SIZE)
- return error("Command line too long");
-
- if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
- return error("Couldn't create socket");
-
- pid = fork();
- if (pid < 0)
- return error("Couldn't fork");
- if (!pid) {
- const char *ssh, *ssh_basename;
- ssh = getenv("GIT_SSH");
- if (!ssh) ssh = "ssh";
- ssh_basename = strrchr(ssh, '/');
- if (!ssh_basename)
- ssh_basename = ssh;
- else
- ssh_basename++;
- close(sv[1]);
- dup2(sv[0], 0);
- dup2(sv[0], 1);
- execlp(ssh, ssh_basename, host, cmd.buf, NULL);
- }
- close(sv[0]);
- *fd_in = sv[1];
- *fd_out = sv[1];
- return 0;
-}
+++ /dev/null
-#ifndef RSH_H
-#define RSH_H
-
-int setup_connection(int *fd_in, int *fd_out, const char *remote_prog,
- char *url, int rmt_argc, char **rmt_argv);
-
-#endif
if (remote_name) {
remote = remote_get(remote_name);
- if (!remote_has_uri(remote, dest)) {
+ if (!remote_has_url(remote, dest)) {
die("Destination %s is not a uri for %s",
dest, remote_name);
}
munmap(idx_map, idx_size);
return error("wrong index v2 file size in %s", path);
}
- if (idx_size != min_size) {
- /* make sure we can deal with large pack offsets */
- off_t x = 0x7fffffffUL, y = 0xffffffffUL;
- if (x > (x + 1) || y > (y + 1)) {
- munmap(idx_map, idx_size);
- return error("pack too large for current definition of off_t in %s", path);
- }
+ if (idx_size != min_size &&
+ /*
+	 * make sure we can deal with large pack offsets:
+	 * a 31-bit signed offset won't be enough, and
+	 * neither will a 32-bit unsigned one.
+ */
+ (sizeof(off_t) <= 4)) {
+ munmap(idx_map, idx_size);
+ return error("pack too large for current definition of off_t in %s", path);
}
}
ntohl(off64[1]);
off64_nr++;
}
- printf("%llu %s (%08x)\n", (unsigned long long) offset,
+ printf("%" PRIuMAX " %s (%08x)\n", (uintmax_t) offset,
sha1_to_hex(entries[i].sha1),
ntohl(entries[i].crc));
}
+++ /dev/null
-#ifndef COUNTERPART_ENV_NAME
-#define COUNTERPART_ENV_NAME "GIT_SSH_UPLOAD"
-#endif
-#ifndef COUNTERPART_PROGRAM_NAME
-#define COUNTERPART_PROGRAM_NAME "git-ssh-upload"
-#endif
-#ifndef MY_PROGRAM_NAME
-#define MY_PROGRAM_NAME "git-ssh-fetch"
-#endif
-
-#include "cache.h"
-#include "commit.h"
-#include "rsh.h"
-#include "fetch.h"
-#include "refs.h"
-
-static int fd_in;
-static int fd_out;
-
-static unsigned char remote_version;
-static unsigned char local_version = 1;
-
-static int prefetches;
-
-static struct object_list *in_transit;
-static struct object_list **end_of_transit = &in_transit;
-
-void prefetch(unsigned char *sha1)
-{
- char type = 'o';
- struct object_list *node;
- if (prefetches > 100) {
- fetch(in_transit->item->sha1);
- }
- node = xmalloc(sizeof(struct object_list));
- node->next = NULL;
- node->item = lookup_unknown_object(sha1);
- *end_of_transit = node;
- end_of_transit = &node->next;
- /* XXX: what if these writes fail? */
- write_in_full(fd_out, &type, 1);
- write_in_full(fd_out, sha1, 20);
- prefetches++;
-}
-
-static char conn_buf[4096];
-static size_t conn_buf_posn;
-
-int fetch(unsigned char *sha1)
-{
- int ret;
- signed char remote;
- struct object_list *temp;
-
- if (hashcmp(sha1, in_transit->item->sha1)) {
- /* we must have already fetched it to clean the queue */
- return has_sha1_file(sha1) ? 0 : -1;
- }
- prefetches--;
- temp = in_transit;
- in_transit = in_transit->next;
- if (!in_transit)
- end_of_transit = &in_transit;
- free(temp);
-
- if (conn_buf_posn) {
- remote = conn_buf[0];
- memmove(conn_buf, conn_buf + 1, --conn_buf_posn);
- } else {
- if (xread(fd_in, &remote, 1) < 1)
- return -1;
- }
- /* fprintf(stderr, "Got %d\n", remote); */
- if (remote < 0)
- return remote;
- ret = write_sha1_from_fd(sha1, fd_in, conn_buf, 4096, &conn_buf_posn);
- if (!ret)
- pull_say("got %s\n", sha1_to_hex(sha1));
- return ret;
-}
-
-static int get_version(void)
-{
- char type = 'v';
- if (write_in_full(fd_out, &type, 1) != 1 ||
- write_in_full(fd_out, &local_version, 1)) {
- return error("Couldn't request version from remote end");
- }
- if (xread(fd_in, &remote_version, 1) < 1) {
- return error("Couldn't read version from remote end");
- }
- return 0;
-}
-
-int fetch_ref(char *ref, unsigned char *sha1)
-{
- signed char remote;
- char type = 'r';
- int length = strlen(ref) + 1;
- if (write_in_full(fd_out, &type, 1) != 1 ||
- write_in_full(fd_out, ref, length) != length)
- return -1;
-
- if (read_in_full(fd_in, &remote, 1) != 1)
- return -1;
- if (remote < 0)
- return remote;
- if (read_in_full(fd_in, sha1, 20) != 20)
- return -1;
- return 0;
-}
-
-static const char ssh_fetch_usage[] =
- MY_PROGRAM_NAME
- " [-c] [-t] [-a] [-v] [--recover] [-w ref] commit-id url";
-int main(int argc, char **argv)
-{
- const char *write_ref = NULL;
- char *commit_id;
- char *url;
- int arg = 1;
- const char *prog;
-
- prog = getenv("GIT_SSH_PUSH");
- if (!prog) prog = "git-ssh-upload";
-
- setup_git_directory();
- git_config(git_default_config);
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 't') {
- get_tree = 1;
- } else if (argv[arg][1] == 'c') {
- get_history = 1;
- } else if (argv[arg][1] == 'a') {
- get_all = 1;
- get_tree = 1;
- get_history = 1;
- } else if (argv[arg][1] == 'v') {
- get_verbosely = 1;
- } else if (argv[arg][1] == 'w') {
- write_ref = argv[arg + 1];
- arg++;
- } else if (!strcmp(argv[arg], "--recover")) {
- get_recover = 1;
- }
- arg++;
- }
- if (argc < arg + 2) {
- usage(ssh_fetch_usage);
- return 1;
- }
- commit_id = argv[arg];
- url = argv[arg + 1];
-
- if (setup_connection(&fd_in, &fd_out, prog, url, arg, argv + 1))
- return 1;
-
- if (get_version())
- return 1;
-
- if (pull(1, &commit_id, &write_ref, url))
- return 1;
-
- return 0;
-}
+++ /dev/null
-#define COUNTERPART_ENV_NAME "GIT_SSH_PUSH"
-#define COUNTERPART_PROGRAM_NAME "git-ssh-push"
-#define MY_PROGRAM_NAME "git-ssh-pull"
-#include "ssh-fetch.c"
+++ /dev/null
-#define COUNTERPART_ENV_NAME "GIT_SSH_PULL"
-#define COUNTERPART_PROGRAM_NAME "git-ssh-pull"
-#define MY_PROGRAM_NAME "git-ssh-push"
-#include "ssh-upload.c"
+++ /dev/null
-#ifndef COUNTERPART_ENV_NAME
-#define COUNTERPART_ENV_NAME "GIT_SSH_FETCH"
-#endif
-#ifndef COUNTERPART_PROGRAM_NAME
-#define COUNTERPART_PROGRAM_NAME "git-ssh-fetch"
-#endif
-#ifndef MY_PROGRAM_NAME
-#define MY_PROGRAM_NAME "git-ssh-upload"
-#endif
-
-#include "cache.h"
-#include "rsh.h"
-#include "refs.h"
-
-static unsigned char local_version = 1;
-static unsigned char remote_version;
-
-static int verbose;
-
-static int serve_object(int fd_in, int fd_out) {
- ssize_t size;
- unsigned char sha1[20];
- signed char remote;
-
- size = read_in_full(fd_in, sha1, 20);
- if (size < 0) {
- perror("git-ssh-upload: read ");
- return -1;
- }
- if (!size)
- return -1;
-
- if (verbose)
- fprintf(stderr, "Serving %s\n", sha1_to_hex(sha1));
-
- remote = 0;
-
- if (!has_sha1_file(sha1)) {
- fprintf(stderr, "git-ssh-upload: could not find %s\n",
- sha1_to_hex(sha1));
- remote = -1;
- }
-
- if (write_in_full(fd_out, &remote, 1) != 1)
- return 0;
-
- if (remote < 0)
- return 0;
-
- return write_sha1_to_fd(fd_out, sha1);
-}
-
-static int serve_version(int fd_in, int fd_out)
-{
- if (xread(fd_in, &remote_version, 1) < 1)
- return -1;
- write_in_full(fd_out, &local_version, 1);
- return 0;
-}
-
-static int serve_ref(int fd_in, int fd_out)
-{
- char ref[PATH_MAX];
- unsigned char sha1[20];
- int posn = 0;
- signed char remote = 0;
- do {
- if (posn >= PATH_MAX || xread(fd_in, ref + posn, 1) < 1)
- return -1;
- posn++;
- } while (ref[posn - 1]);
-
- if (verbose)
- fprintf(stderr, "Serving %s\n", ref);
-
- if (get_ref_sha1(ref, sha1))
- remote = -1;
- if (write_in_full(fd_out, &remote, 1) != 1)
- return 0;
- if (remote)
- return 0;
- write_in_full(fd_out, sha1, 20);
- return 0;
-}
-
-
-static void service(int fd_in, int fd_out) {
- char type;
- ssize_t retval;
- do {
- retval = xread(fd_in, &type, 1);
- if (retval < 1) {
- if (retval < 0)
- perror("git-ssh-upload: read ");
- return;
- }
- if (type == 'v' && serve_version(fd_in, fd_out))
- return;
- if (type == 'o' && serve_object(fd_in, fd_out))
- return;
- if (type == 'r' && serve_ref(fd_in, fd_out))
- return;
- } while (1);
-}
-
-static const char ssh_push_usage[] =
- MY_PROGRAM_NAME " [-c] [-t] [-a] [-w ref] commit-id url";
-
-int main(int argc, char **argv)
-{
- int arg = 1;
- char *commit_id;
- char *url;
- int fd_in, fd_out;
- const char *prog;
- unsigned char sha1[20];
- char hex[41];
-
- prog = getenv(COUNTERPART_ENV_NAME);
- if (!prog) prog = COUNTERPART_PROGRAM_NAME;
-
- setup_git_directory();
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 'w')
- arg++;
- arg++;
- }
- if (argc < arg + 2)
- usage(ssh_push_usage);
- commit_id = argv[arg];
- url = argv[arg + 1];
- if (get_sha1(commit_id, sha1))
- die("Not a valid object name %s", commit_id);
- memcpy(hex, sha1_to_hex(sha1), sizeof(hex));
- argv[arg] = hex;
-
- if (setup_connection(&fd_in, &fd_out, prog, url, arg, argv + 1))
- return 1;
-
- service(fd_in, fd_out);
- return 0;
-}
}
'
+test_expect_success 'invalid .gitattributes (must not crash)' '
+
+ echo "three +crlf" >>.gitattributes &&
+ git diff
+
+'
+
test_done
git-branch -a >branches && ! grep -q origin/master branches
'
+rewound_push_setup() {
+ rm -rf parent child &&
+ mkdir parent && cd parent &&
+ git-init && echo one >file && git-add file && git-commit -m one &&
+ echo two >file && git-commit -a -m two &&
+ cd .. &&
+ git-clone parent child && cd child && git-reset --hard HEAD^
+}
+
+rewound_push_succeeded() {
+ cmp ../parent/.git/refs/heads/master .git/refs/heads/master
+}
+
+rewound_push_failed() {
+ if rewound_push_succeeded
+ then
+ false
+ else
+ true
+ fi
+}
+
+test_expect_success \
+ 'pushing explicit refspecs respects forcing' '
+ rewound_push_setup &&
+ if git-send-pack ../parent/.git refs/heads/master:refs/heads/master
+ then
+ false
+ else
+ true
+ fi && rewound_push_failed &&
+ git-send-pack ../parent/.git +refs/heads/master:refs/heads/master &&
+ rewound_push_succeeded
+'
+
+test_expect_success \
+ 'pushing wildcard refspecs respects forcing' '
+ rewound_push_setup &&
+	if git-send-pack ../parent/.git "refs/heads/*:refs/heads/*"
+ then
+ false
+ else
+ true
+ fi && rewound_push_failed &&
+	git-send-pack ../parent/.git "+refs/heads/*:refs/heads/*" &&
+ rewound_push_succeeded
+'
+
test_done
cut -f -2 .git/FETCH_HEAD >actual &&
diff expected actual'
+test_expect_success 'fetch tags when there are no tags' '
+
+ cd "$D" &&
+
+ mkdir notags &&
+ cd notags &&
+ git init &&
+
+ git fetch -t ..
+
+'
+
test_expect_success 'fetch following tags' '
cd "$D" &&
'
+test "$TEST_RSYNC" && {
+test_expect_success 'fetch via rsync' '
+ git pack-refs &&
+ mkdir rsynced &&
+ cd rsynced &&
+ git init &&
+ git fetch rsync://127.0.0.1$(pwd)/../.git master:refs/heads/master &&
+ git gc --prune &&
+ test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
+ git fsck --full
+'
+
+test_expect_success 'push via rsync' '
+ mkdir ../rsynced2 &&
+ (cd ../rsynced2 &&
+ git init) &&
+ git push rsync://127.0.0.1$(pwd)/../rsynced2/.git master &&
+ cd ../rsynced2 &&
+ git gc --prune &&
+ test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
+ git fsck --full
+'
+
+test_expect_success 'push all refs via rsync' '
+ cd .. &&
+ mkdir rsynced3 &&
+ (cd rsynced3 &&
+ git init) &&
+ git push --all rsync://127.0.0.1$(pwd)/rsynced3/.git &&
+ cd rsynced3 &&
+ test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
+ git fsck --full
+'
+}
+
+test_expect_success 'fetch with a non-applying branch.<name>.merge' '
+ git config branch.master.remote yeti &&
+ git config branch.master.merge refs/heads/bigfoot &&
+ git config remote.blub.url one &&
+ git config remote.blub.fetch "refs/heads/*:refs/remotes/one/*" &&
+ git fetch blub
+'
+
test_done
git config branch.br-$remote-merge.merge refs/heads/three &&
git config branch.br-$remote-octopus.remote $remote &&
git config branch.br-$remote-octopus.merge refs/heads/one &&
- git config --add branch.br-$remote-octopus.merge two &&
- git config --add branch.br-$remote-octopus.merge remotes/rem/three
+ git config --add branch.br-$remote-octopus.merge two
done
'
# br-branches-default-merge
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-default-merge branches-default
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-default-octopus
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-default-octopus branches-default
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-merge
-8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge branch 'one' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-merge branches-one
-8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge branch 'one' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-octopus
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-octopus branches-one
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
test_expect_success 'pulling from reference' \
'cd C &&
-git pull ../B'
+git pull ../B master'
cd "$base_dir"
cd "$base_dir"
test_expect_success 'pulling from reference' \
-'cd D && git pull ../B'
+'cd D && git pull ../B master'
cd "$base_dir"
. ./test-lib.sh
+OLD_TERM="$TERM"
+
for i in GIT_EDITOR core_editor EDITOR VISUAL vi
do
cat >e-$i.sh <<-EOF
'
done
+TERM="$OLD_TERM"
+
test_done
# '
# . ./test-lib.sh
-error () {
- echo "* error: $*"
- trap - exit
- exit 1
-}
-
-say () {
- echo "* $*"
-}
+[ "x$TERM" != "xdumb" ] &&
+ tput bold >/dev/null 2>&1 &&
+ tput setaf 1 >/dev/null 2>&1 &&
+ tput sgr0 >/dev/null 2>&1 &&
+ color=t
test "${test_description}" != "" ||
error "Test script did not set test_description."
exit 0 ;;
-v|--v|--ve|--ver|--verb|--verbo|--verbos|--verbose)
verbose=t; shift ;;
+ -q|--q|--qu|--qui|--quie|--quiet)
+ quiet=t; shift ;;
+ --no-color)
+ color=; shift ;;
--no-python)
# noop now...
shift ;;
esac
done
+if test -n "$color"; then
+ say_color () {
+ case "$1" in
+ error) tput bold; tput setaf 1;; # bold red
+ skip) tput bold; tput setaf 2;; # bold green
+ pass) tput setaf 2;; # green
+ info) tput setaf 3;; # brown
+ *) test -n "$quiet" && return;;
+ esac
+ shift
+ echo "* $*"
+ tput sgr0
+ }
+else
+	say_color () {
+ test -z "$1" && test -n "$quiet" && return
+ shift
+ echo "* $*"
+ }
+fi
+
+error () {
+ say_color error "error: $*"
+ trap - exit
+ exit 1
+}
+
+say () {
+ say_color info "$*"
+}
+
exec 5>&1
if test "$verbose" = "t"
then
test_ok_ () {
test_count=$(expr "$test_count" + 1)
- say " ok $test_count: $@"
+ say_color "" " ok $test_count: $@"
}
test_failure_ () {
test_count=$(expr "$test_count" + 1)
test_failure=$(expr "$test_failure" + 1);
- say "FAIL $test_count: $1"
+ say_color error "FAIL $test_count: $1"
shift
echo "$@" | sed -e 's/^/ /'
test "$immediate" = "" || { trap - exit; exit 1; }
done
case "$to_skip" in
t)
- say >&3 "skipping test: $@"
+ say_color skip >&3 "skipping test: $@"
test_count=$(expr "$test_count" + 1)
- say "skip $test_count: $1"
+ say_color skip "skip $test_count: $1"
: true
;;
*)
# The Makefile provided will clean this test area so
# we will leave things as they are.
- say "passed all $test_count test(s)"
+ say_color pass "passed all $test_count test(s)"
exit 0 ;;
*)
- say "failed $test_failure among $test_count test(s)"
+ say_color error "failed $test_failure among $test_count test(s)"
exit 1 ;;
esac
done
case "$to_skip" in
t)
- say >&3 "skipping test $this_test altogether"
- say "skip all tests in $this_test"
+ say_color skip >&3 "skipping test $this_test altogether"
+ say_color skip "skip all tests in $this_test"
test_done
esac
done
if (/\s$/) {
bad_line("trailing whitespace", $_);
}
- if (/^\s* /) {
+ if (/^\s* \t/) {
bad_line("indent SP followed by a TAB", $_);
}
if (/^(?:[<>=]){7}/) {
--- /dev/null
+#include "cache.h"
+#include "transport.h"
+#include "run-command.h"
+#ifndef NO_CURL
+#include "http.h"
+#endif
+#include "pkt-line.h"
+#include "fetch-pack.h"
+#include "walker.h"
+#include "bundle.h"
+#include "dir.h"
+#include "refs.h"
+
+/* rsync support */
+
+/*
+ * We copy packed-refs and refs/ into a temporary directory, then read
+ * the loose refs recursively (sorting whenever possible), and then
+ * insert those packed refs that are not yet in the list (not
+ * validating, but assuming that the file is sorted).
+ *
+ * Refactoring this out of refs.c appears too cumbersome.
+ */
+
+static int str_cmp(const void *a, const void *b)
+{
+ const char *s1 = a;
+ const char *s2 = b;
+
+ return strcmp(s1, s2);
+}
+
+/* path->buf + name_offset is expected to point to "refs/" */
+
+static int read_loose_refs(struct strbuf *path, int name_offset,
+ struct ref **tail)
+{
+ DIR *dir = opendir(path->buf);
+ struct dirent *de;
+ struct {
+ char **entries;
+ int nr, alloc;
+ } list;
+ int i, pathlen;
+
+ if (!dir)
+ return -1;
+
+	memset(&list, 0, sizeof(list));
+
+ while ((de = readdir(dir))) {
+ if (de->d_name[0] == '.' && (de->d_name[1] == '\0' ||
+ (de->d_name[1] == '.' &&
+ de->d_name[2] == '\0')))
+ continue;
+ ALLOC_GROW(list.entries, list.nr + 1, list.alloc);
+ list.entries[list.nr++] = xstrdup(de->d_name);
+ }
+ closedir(dir);
+
+ /* sort the list */
+
+ qsort(list.entries, list.nr, sizeof(char *), str_cmp);
+
+ pathlen = path->len;
+ strbuf_addch(path, '/');
+
+ for (i = 0; i < list.nr; i++, strbuf_setlen(path, pathlen + 1)) {
+ strbuf_addstr(path, list.entries[i]);
+ if (read_loose_refs(path, name_offset, tail)) {
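+			/* not a directory: read it as a loose ref file */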
+ int fd = open(path->buf, O_RDONLY);
+ char buffer[40];
+ struct ref *next;
+
+ if (fd < 0)
+ continue;
+ next = alloc_ref(path->len - name_offset + 1);
+ if (read_in_full(fd, buffer, 40) != 40 ||
+ get_sha1_hex(buffer, next->old_sha1)) {
+ close(fd);
+ free(next);
+ continue;
+ }
+ close(fd);
+ strcpy(next->name, path->buf + name_offset);
+ (*tail)->next = next;
+ *tail = next;
+ }
+ }
+ strbuf_setlen(path, pathlen);
+
+ for (i = 0; i < list.nr; i++)
+ free(list.entries[i]);
+ free(list.entries);
+
+ return 0;
+}
+
+/* insert the packed refs for which no loose refs were found */
+
+static void insert_packed_refs(const char *packed_refs, struct ref **list)
+{
+ FILE *f = fopen(packed_refs, "r");
+ static char buffer[PATH_MAX];
+
+ if (!f)
+ return;
+
+ for (;;) {
+ int cmp, len;
+
+ if (!fgets(buffer, sizeof(buffer), f)) {
+ fclose(f);
+ return;
+ }
+
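+		/* lines not starting with a hex digit (the header and
+		   the "^" peel lines) are skipped */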
+ if (hexval(buffer[0]) > 0xf)
+ continue;
+ len = strlen(buffer);
+ if (buffer[len - 1] == '\n')
+ buffer[--len] = '\0';
+ if (len < 41)
+ continue;
+ while ((*list)->next &&
+ (cmp = strcmp(buffer + 41,
+ (*list)->next->name)) > 0)
+ list = &(*list)->next;
+ if (!(*list)->next || cmp < 0) {
+ struct ref *next = alloc_ref(len - 40);
+ buffer[40] = '\0';
+ if (get_sha1_hex(buffer, next->old_sha1)) {
+ warning ("invalid SHA-1: %s", buffer);
+ free(next);
+ continue;
+ }
+ strcpy(next->name, buffer + 41);
+ next->next = (*list)->next;
+ (*list)->next = next;
+ list = &(*list)->next;
+ }
+ }
+}
+
+static struct ref *get_refs_via_rsync(const struct transport *transport)
+{
+ struct strbuf buf = STRBUF_INIT, temp_dir = STRBUF_INIT;
+ struct ref dummy, *tail = &dummy;
+ struct child_process rsync;
+ const char *args[5];
+ int temp_dir_len;
+
+ /* copy the refs to the temporary directory */
+
+ strbuf_addstr(&temp_dir, git_path("rsync-refs-XXXXXX"));
+ if (!mkdtemp(temp_dir.buf))
+ die ("Could not make temporary directory");
+ temp_dir_len = temp_dir.len;
+
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addstr(&buf, "/refs");
+
+ memset(&rsync, 0, sizeof(rsync));
+ rsync.argv = args;
+ rsync.stdout_to_stderr = 1;
+ args[0] = "rsync";
+ args[1] = (transport->verbose > 0) ? "-rv" : "-r";
+ args[2] = buf.buf;
+ args[3] = temp_dir.buf;
+ args[4] = NULL;
+
+ if (run_command(&rsync))
+ die ("Could not run rsync to get refs");
+
+ strbuf_reset(&buf);
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addstr(&buf, "/packed-refs");
+
+ args[2] = buf.buf;
+
+ if (run_command(&rsync))
+ die ("Could not run rsync to get refs");
+
+ /* read the copied refs */
+
+ strbuf_addstr(&temp_dir, "/refs");
+ read_loose_refs(&temp_dir, temp_dir_len + 1, &tail);
+ strbuf_setlen(&temp_dir, temp_dir_len);
+
+ tail = &dummy;
+ strbuf_addstr(&temp_dir, "/packed-refs");
+ insert_packed_refs(temp_dir.buf, &tail);
+ strbuf_setlen(&temp_dir, temp_dir_len);
+
+ if (remove_dir_recursively(&temp_dir, 0))
+ warning ("Error removing temporary directory %s.",
+ temp_dir.buf);
+
+ strbuf_release(&buf);
+ strbuf_release(&temp_dir);
+
+ return dummy.next;
+}
+
+static int fetch_objs_via_rsync(struct transport *transport,
+ int nr_objs, struct ref **to_fetch)
+{
+ struct strbuf buf = STRBUF_INIT;
+ struct child_process rsync;
+ const char *args[8];
+ int result;
+
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addstr(&buf, "/objects/");
+
+ memset(&rsync, 0, sizeof(rsync));
+ rsync.argv = args;
+ rsync.stdout_to_stderr = 1;
+ args[0] = "rsync";
+ args[1] = (transport->verbose > 0) ? "-rv" : "-r";
+ args[2] = "--ignore-existing";
+ args[3] = "--exclude";
+ args[4] = "info";
+ args[5] = buf.buf;
+ args[6] = get_object_directory();
+ args[7] = NULL;
+
+ /* NEEDSWORK: handle one level of alternates */
+ result = run_command(&rsync);
+
+ strbuf_release(&buf);
+
+ return result;
+}
+
+static int write_one_ref(const char *name, const unsigned char *sha1,
+ int flags, void *data)
+{
+ struct strbuf *buf = data;
+ int len = buf->len;
+ FILE *f;
+
+ /* when called via for_each_ref(), flags is non-zero */
+ if (flags && prefixcmp(name, "refs/heads/") &&
+ prefixcmp(name, "refs/tags/"))
+ return 0;
+
+ strbuf_addstr(buf, name);
+ if (safe_create_leading_directories(buf->buf) ||
+ !(f = fopen(buf->buf, "w")) ||
+ fprintf(f, "%s\n", sha1_to_hex(sha1)) < 0 ||
+ fclose(f))
+ return error("problems writing temporary file %s", buf->buf);
+ strbuf_setlen(buf, len);
+ return 0;
+}
+
+static int write_refs_to_temp_dir(struct strbuf *temp_dir,
+ int refspec_nr, const char **refspec)
+{
+ int i;
+
+ for (i = 0; i < refspec_nr; i++) {
+ unsigned char sha1[20];
+ char *ref;
+
+ if (dwim_ref(refspec[i], strlen(refspec[i]), sha1, &ref) != 1)
+ return error("Could not get ref %s", refspec[i]);
+
+ if (write_one_ref(ref, sha1, 0, temp_dir)) {
+ free(ref);
+ return -1;
+ }
+ free(ref);
+ }
+ return 0;
+}
+
+static int rsync_transport_push(struct transport *transport,
+ int refspec_nr, const char **refspec, int flags)
+{
+ struct strbuf buf = STRBUF_INIT, temp_dir = STRBUF_INIT;
+ int result = 0, i;
+ struct child_process rsync;
+ const char *args[10];
+
+ /* first push the objects */
+
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addch(&buf, '/');
+
+ memset(&rsync, 0, sizeof(rsync));
+ rsync.argv = args;
+ rsync.stdout_to_stderr = 1;
+ i = 0;
+ args[i++] = "rsync";
+ args[i++] = "-a";
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ args[i++] = "--dry-run";
+ if (transport->verbose > 0)
+ args[i++] = "-v";
+ args[i++] = "--ignore-existing";
+ args[i++] = "--exclude";
+ args[i++] = "info";
+ args[i++] = get_object_directory();
+ args[i++] = buf.buf;
+ args[i++] = NULL;
+
+ if (run_command(&rsync))
+ return error("Could not push objects to %s", transport->url);
+
+ /* copy the refs to the temporary directory; they could be packed. */
+
+ strbuf_addstr(&temp_dir, git_path("rsync-refs-XXXXXX"));
+ if (!mkdtemp(temp_dir.buf))
+ die ("Could not make temporary directory");
+ strbuf_addch(&temp_dir, '/');
+
+ if (flags & TRANSPORT_PUSH_ALL) {
+ if (for_each_ref(write_one_ref, &temp_dir))
+ return -1;
+ } else if (write_refs_to_temp_dir(&temp_dir, refspec_nr, refspec))
+ return -1;
+
+ i = 2;
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ args[i++] = "--dry-run";
+ if (!(flags & TRANSPORT_PUSH_FORCE))
+ args[i++] = "--ignore-existing";
+ args[i++] = temp_dir.buf;
+ args[i++] = transport->url;
+ args[i++] = NULL;
+ if (run_command(&rsync))
+ result = error("Could not push to %s", transport->url);
+
+ if (remove_dir_recursively(&temp_dir, 0))
+ warning ("Could not remove temporary directory %s.",
+ temp_dir.buf);
+
+ strbuf_release(&buf);
+ strbuf_release(&temp_dir);
+
+ return result;
+}
+
+/* Generic functions for using commit walkers */
+
+static int fetch_objs_via_walker(struct transport *transport,
+ int nr_objs, struct ref **to_fetch)
+{
+ char *dest = xstrdup(transport->url);
+ struct walker *walker = transport->data;
+ char **objs = xmalloc(nr_objs * sizeof(*objs));
+ int i;
+
+ walker->get_all = 1;
+ walker->get_tree = 1;
+ walker->get_history = 1;
+ walker->get_verbosely = transport->verbose >= 0;
+ walker->get_recover = 0;
+
+ for (i = 0; i < nr_objs; i++)
+ objs[i] = xstrdup(sha1_to_hex(to_fetch[i]->old_sha1));
+
+ if (walker_fetch(walker, nr_objs, objs, NULL, NULL))
+ die("Fetch failed.");
+
+ for (i = 0; i < nr_objs; i++)
+ free(objs[i]);
+ free(objs);
+ free(dest);
+ return 0;
+}
+
+static int disconnect_walker(struct transport *transport)
+{
+ struct walker *walker = transport->data;
+ if (walker)
+ walker_free(walker);
+ return 0;
+}
+
+#ifndef NO_CURL
+static int curl_transport_push(struct transport *transport,
+	int refspec_nr, const char **refspec, int flags)
+{
+ const char **argv;
+ int argc;
+ int err;
+
+ argv = xmalloc((refspec_nr + 11) * sizeof(char *));
+ argv[0] = "http-push";
+ argc = 1;
+ if (flags & TRANSPORT_PUSH_ALL)
+ argv[argc++] = "--all";
+ if (flags & TRANSPORT_PUSH_FORCE)
+ argv[argc++] = "--force";
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ argv[argc++] = "--dry-run";
+ argv[argc++] = transport->url;
+ while (refspec_nr--)
+ argv[argc++] = *refspec++;
+ argv[argc] = NULL;
+ err = run_command_v_opt(argv, RUN_GIT_CMD);
+ switch (err) {
+ case -ERR_RUN_COMMAND_FORK:
+ error("unable to fork for %s", argv[0]);
+ case -ERR_RUN_COMMAND_EXEC:
+ error("unable to exec %s", argv[0]);
+ break;
+ case -ERR_RUN_COMMAND_WAITPID:
+ case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
+ case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
+ case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
+ error("%s died with strange error", argv[0]);
+ }
+ return !!err;
+}
+
+static int missing__target(int code, int result)
+{
+ return /* file:// URL -- do we ever use one??? */
+ (result == CURLE_FILE_COULDNT_READ_FILE) ||
+ /* http:// and https:// URL */
+ (code == 404 && result == CURLE_HTTP_RETURNED_ERROR) ||
+ /* ftp:// URL */
+ (code == 550 && result == CURLE_FTP_COULDNT_RETR_FILE)
+ ;
+}
+
+#define missing_target(a) missing__target((a)->http_code, (a)->curl_result)
+
+static struct ref *get_refs_via_curl(const struct transport *transport)
+{
+ struct buffer buffer;
+ char *data, *start, *mid;
+ char *ref_name;
+ char *refs_url;
+ int i = 0;
+
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ struct ref *refs = NULL;
+ struct ref *ref = NULL;
+ struct ref *last_ref = NULL;
+
+ data = xmalloc(4096);
+ buffer.size = 4096;
+ buffer.posn = 0;
+ buffer.buffer = data;
+
+ refs_url = xmalloc(strlen(transport->url) + 11);
+ sprintf(refs_url, "%s/info/refs", transport->url);
+
+ http_init();
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, refs_url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ if (missing_target(&results)) {
+ free(buffer.buffer);
+ return NULL;
+ } else {
+ free(buffer.buffer);
+ error("%s", curl_errorstr);
+ return NULL;
+ }
+ }
+ } else {
+ free(buffer.buffer);
+ error("Unable to start request");
+ return NULL;
+ }
+
+ http_cleanup();
+
+ data = buffer.buffer;
+ start = NULL;
+ mid = data;
+ while (i < buffer.posn) {
+ if (!start)
+ start = &data[i];
+ if (data[i] == '\t')
+ mid = &data[i];
+ if (data[i] == '\n') {
+ data[i] = 0;
+ ref_name = mid + 1;
+ ref = xmalloc(sizeof(struct ref) +
+ strlen(ref_name) + 1);
+ memset(ref, 0, sizeof(struct ref));
+ strcpy(ref->name, ref_name);
+ get_sha1_hex(start, ref->old_sha1);
+ if (!refs)
+ refs = ref;
+ if (last_ref)
+ last_ref->next = ref;
+ last_ref = ref;
+ start = NULL;
+ }
+ i++;
+ }
+
+ free(buffer.buffer);
+
+ return refs;
+}
+
+static int fetch_objs_via_curl(struct transport *transport,
+ int nr_objs, struct ref **to_fetch)
+{
+ if (!transport->data)
+ transport->data = get_http_walker(transport->url);
+ return fetch_objs_via_walker(transport, nr_objs, to_fetch);
+}
+
+#endif
+
+struct bundle_transport_data {
+ int fd;
+ struct bundle_header header;
+};
+
+static struct ref *get_refs_from_bundle(const struct transport *transport)
+{
+ struct bundle_transport_data *data = transport->data;
+ struct ref *result = NULL;
+ int i;
+
+ if (data->fd > 0)
+ close(data->fd);
+ data->fd = read_bundle_header(transport->url, &data->header);
+ if (data->fd < 0)
+ die ("Could not read bundle '%s'.", transport->url);
+ for (i = 0; i < data->header.references.nr; i++) {
+ struct ref_list_entry *e = data->header.references.list + i;
+ struct ref *ref = alloc_ref(strlen(e->name) + 1);
+ hashcpy(ref->old_sha1, e->sha1);
+ strcpy(ref->name, e->name);
+ ref->next = result;
+ result = ref;
+ }
+ return result;
+}
+
+static int fetch_refs_from_bundle(struct transport *transport,
+ int nr_heads, struct ref **to_fetch)
+{
+ struct bundle_transport_data *data = transport->data;
+ return unbundle(&data->header, data->fd);
+}
+
+static int close_bundle(struct transport *transport)
+{
+ struct bundle_transport_data *data = transport->data;
+ if (data->fd > 0)
+ close(data->fd);
+ free(data);
+ return 0;
+}
+
+struct git_transport_data {
+ unsigned thin : 1;
+ unsigned keep : 1;
+ int depth;
+ const char *uploadpack;
+ const char *receivepack;
+};
+
+static int set_git_option(struct transport *connection,
+ const char *name, const char *value)
+{
+ struct git_transport_data *data = connection->data;
+ if (!strcmp(name, TRANS_OPT_UPLOADPACK)) {
+ data->uploadpack = value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_RECEIVEPACK)) {
+ data->receivepack = value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_THIN)) {
+ data->thin = !!value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_KEEP)) {
+ data->keep = !!value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEPTH)) {
+ if (!value)
+ data->depth = 0;
+ else
+ data->depth = atoi(value);
+ return 0;
+ }
+ return 1;
+}
+
+static struct ref *get_refs_via_connect(const struct transport *transport)
+{
+ struct git_transport_data *data = transport->data;
+ struct ref *refs;
+ int fd[2];
+ pid_t pid;
+ char *dest = xstrdup(transport->url);
+
+ pid = git_connect(fd, dest, data->uploadpack, 0);
+
+ if (pid < 0)
+ die("Failed to connect to \"%s\"", transport->url);
+
+ get_remote_heads(fd[0], &refs, 0, NULL, 0);
+ packet_flush(fd[1]);
+
+ finish_connect(pid);
+
+ free(dest);
+
+ return refs;
+}
+
+static int fetch_refs_via_pack(struct transport *transport,
+ int nr_heads, struct ref **to_fetch)
+{
+ struct git_transport_data *data = transport->data;
+ char **heads = xmalloc(nr_heads * sizeof(*heads));
+ char **origh = xmalloc(nr_heads * sizeof(*origh));
+ struct ref *refs;
+ char *dest = xstrdup(transport->url);
+ struct fetch_pack_args args;
+ int i;
+
+ memset(&args, 0, sizeof(args));
+ args.uploadpack = data->uploadpack;
+ args.keep_pack = data->keep;
+ args.lock_pack = 1;
+ args.use_thin_pack = data->thin;
+ args.verbose = transport->verbose > 0;
+ args.depth = data->depth;
+
+ for (i = 0; i < nr_heads; i++)
+ origh[i] = heads[i] = xstrdup(to_fetch[i]->name);
+ refs = fetch_pack(&args, dest, nr_heads, heads, &transport->pack_lockfile);
+
+ for (i = 0; i < nr_heads; i++)
+ free(origh[i]);
+ free(origh);
+ free(heads);
+ free_refs(refs);
+ free(dest);
+ return 0;
+}
+
+static int git_transport_push(struct transport *transport,
+	int refspec_nr, const char **refspec, int flags)
+{
+ struct git_transport_data *data = transport->data;
+ const char **argv;
+ char *rem;
+ int argc;
+ int err;
+
+ argv = xmalloc((refspec_nr + 11) * sizeof(char *));
+ argv[0] = "send-pack";
+ argc = 1;
+ if (flags & TRANSPORT_PUSH_ALL)
+ argv[argc++] = "--all";
+ if (flags & TRANSPORT_PUSH_FORCE)
+ argv[argc++] = "--force";
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ argv[argc++] = "--dry-run";
+ if (data->receivepack) {
+ char *rp = xmalloc(strlen(data->receivepack) + 16);
+ sprintf(rp, "--receive-pack=%s", data->receivepack);
+ argv[argc++] = rp;
+ }
+ if (data->thin)
+ argv[argc++] = "--thin";
+ rem = xmalloc(strlen(transport->remote->name) + 10);
+ sprintf(rem, "--remote=%s", transport->remote->name);
+ argv[argc++] = rem;
+ argv[argc++] = transport->url;
+ while (refspec_nr--)
+ argv[argc++] = *refspec++;
+ argv[argc] = NULL;
+ err = run_command_v_opt(argv, RUN_GIT_CMD);
+ switch (err) {
+ case -ERR_RUN_COMMAND_FORK:
+ error("unable to fork for %s", argv[0]);
+ case -ERR_RUN_COMMAND_EXEC:
+ error("unable to exec %s", argv[0]);
+ break;
+ case -ERR_RUN_COMMAND_WAITPID:
+ case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
+ case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
+ case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
+ error("%s died with strange error", argv[0]);
+ }
+ return !!err;
+}
+
+static int disconnect_git(struct transport *transport)
+{
+ free(transport->data);
+ return 0;
+}
+
+static int is_local(const char *url)
+{
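+	/*
+	 * scp-style "host:path" has its ':' before any '/'; everything
+	 * else (no colon at all, or a '/' first as in "./odd:name") is
+	 * treated as a local path.
+	 */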
+ const char *colon = strchr(url, ':');
+ const char *slash = strchr(url, '/');
+ return !colon || (slash && slash < colon);
+}
+
+static int is_file(const char *url)
+{
+ struct stat buf;
+ if (stat(url, &buf))
+ return 0;
+ return S_ISREG(buf.st_mode);
+}
+
+struct transport *transport_get(struct remote *remote, const char *url)
+{
+ struct transport *ret = xcalloc(1, sizeof(*ret));
+
+ ret->remote = remote;
+ ret->url = url;
+
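+	/*
+	 * Dispatch on the URL: rsync://, the curl-backed http/https/ftp
+	 * family, a local file (assumed to be a bundle), or the native
+	 * git protocol as the fallback.
+	 */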
+ if (!prefixcmp(url, "rsync://")) {
+ ret->get_refs_list = get_refs_via_rsync;
+ ret->fetch = fetch_objs_via_rsync;
+ ret->push = rsync_transport_push;
+
+ } else if (!prefixcmp(url, "http://")
+ || !prefixcmp(url, "https://")
+ || !prefixcmp(url, "ftp://")) {
+#ifdef NO_CURL
+ error("git was compiled without libcurl support.");
+#else
+ ret->get_refs_list = get_refs_via_curl;
+ ret->fetch = fetch_objs_via_curl;
+ ret->push = curl_transport_push;
+#endif
+ ret->disconnect = disconnect_walker;
+
+ } else if (is_local(url) && is_file(url)) {
+ struct bundle_transport_data *data = xcalloc(1, sizeof(*data));
+ ret->data = data;
+ ret->get_refs_list = get_refs_from_bundle;
+ ret->fetch = fetch_refs_from_bundle;
+ ret->disconnect = close_bundle;
+
+ } else {
+ struct git_transport_data *data = xcalloc(1, sizeof(*data));
+ ret->data = data;
+ ret->set_option = set_git_option;
+ ret->get_refs_list = get_refs_via_connect;
+ ret->fetch = fetch_refs_via_pack;
+ ret->push = git_transport_push;
+ ret->disconnect = disconnect_git;
+
+ data->thin = 1;
+ data->uploadpack = "git-upload-pack";
+ if (remote && remote->uploadpack)
+ data->uploadpack = remote->uploadpack;
+ data->receivepack = "git-receive-pack";
+ if (remote && remote->receivepack)
+ data->receivepack = remote->receivepack;
+ }
+
+ return ret;
+}
+
+int transport_set_option(struct transport *transport,
+ const char *name, const char *value)
+{
+ if (transport->set_option)
+ return transport->set_option(transport, name, value);
+ return 1;
+}
+
+int transport_push(struct transport *transport,
+ int refspec_nr, const char **refspec, int flags)
+{
+ if (!transport->push)
+ return 1;
+ return transport->push(transport, refspec_nr, refspec, flags);
+}
+
+struct ref *transport_get_remote_refs(struct transport *transport)
+{
+ if (!transport->remote_refs)
+ transport->remote_refs = transport->get_refs_list(transport);
+ return transport->remote_refs;
+}
+
+int transport_fetch_refs(struct transport *transport, struct ref *refs)
+{
+ int rc;
+ int nr_heads = 0, nr_alloc = 0;
+ struct ref **heads = NULL;
+ struct ref *rm;
+
+ for (rm = refs; rm; rm = rm->next) {
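+		/* skip refs whose local peer already has the same object */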
+ if (rm->peer_ref &&
+ !hashcmp(rm->peer_ref->old_sha1, rm->old_sha1))
+ continue;
+ ALLOC_GROW(heads, nr_heads + 1, nr_alloc);
+ heads[nr_heads++] = rm;
+ }
+
+ rc = transport->fetch(transport, nr_heads, heads);
+ free(heads);
+ return rc;
+}
+
+void transport_unlock_pack(struct transport *transport)
+{
+ if (transport->pack_lockfile) {
+ unlink(transport->pack_lockfile);
+ free(transport->pack_lockfile);
+ transport->pack_lockfile = NULL;
+ }
+}
+
+int transport_disconnect(struct transport *transport)
+{
+ int ret = 0;
+ if (transport->disconnect)
+ ret = transport->disconnect(transport);
+ free(transport);
+ return ret;
+}
--- /dev/null
+#ifndef TRANSPORT_H
+#define TRANSPORT_H
+
+#include "cache.h"
+#include "remote.h"
+
+struct transport {
+ struct remote *remote;
+ const char *url;
+ void *data;
+ struct ref *remote_refs;
+
+ /**
+ * Returns 0 if successful, positive if the option is not
+ * recognized or is inapplicable, and negative if the option
+ * is applicable but the value is invalid.
+ **/
+ int (*set_option)(struct transport *connection, const char *name,
+ const char *value);
+
+ struct ref *(*get_refs_list)(const struct transport *transport);
+ int (*fetch)(struct transport *transport, int refs_nr, struct ref **refs);
+ int (*push)(struct transport *connection, int refspec_nr, const char **refspec, int flags);
+
+ int (*disconnect)(struct transport *connection);
+ char *pack_lockfile;
+ signed verbose : 2;
+};
+
+#define TRANSPORT_PUSH_ALL 1
+#define TRANSPORT_PUSH_FORCE 2
+#define TRANSPORT_PUSH_DRY_RUN 4
+
+/* Returns a transport suitable for the url */
+struct transport *transport_get(struct remote *, const char *);
+
+/* Transport options which apply to git:// and scp-style URLs */
+
+/* The program to use on the remote side to send a pack */
+#define TRANS_OPT_UPLOADPACK "uploadpack"
+
+/* The program to use on the remote side to receive a pack */
+#define TRANS_OPT_RECEIVEPACK "receivepack"
+
+/* Transfer the data as a thin pack if not null */
+#define TRANS_OPT_THIN "thin"
+
+/* Keep the pack that was transferred if not null */
+#define TRANS_OPT_KEEP "keep"
+
+/* Limit the depth of the fetch if not null */
+#define TRANS_OPT_DEPTH "depth"
+
+/**
+ * Returns 0 if the option was used, non-zero otherwise; the caller is
+ * expected to report an unused option to the user.
+ **/
+int transport_set_option(struct transport *transport, const char *name,
+ const char *value);
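+
+/*
+ * A hypothetical caller:
+ *
+ *	struct transport *t = transport_get(remote, url);
+ *	if (transport_set_option(t, TRANS_OPT_DEPTH, "1"))
+ *		warning("shallow fetch is not supported by this transport");
+ */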
+
+int transport_push(struct transport *connection,
+ int refspec_nr, const char **refspec, int flags);
+
+struct ref *transport_get_remote_refs(struct transport *transport);
+
+int transport_fetch_refs(struct transport *transport, struct ref *refs);
+void transport_unlock_pack(struct transport *transport);
+int transport_disconnect(struct transport *transport);
+
+#endif
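
A backend plugs in by filling the vtable. Note that get_refs_list and
fetch are called unconditionally by the wrappers in transport.c, so
they must be provided, while set_option, push and disconnect may be
left NULL. A minimal sketch, using purely hypothetical dummy_* names:

    static struct ref *dummy_get_refs_list(const struct transport *transport)
    {
        return NULL;    /* a real backend would list the remote refs */
    }

    static int dummy_fetch(struct transport *transport,
            int refs_nr, struct ref **refs)
    {
        return 0;       /* a real backend would download the objects */
    }

    struct transport *dummy_transport_get(struct remote *remote, const char *url)
    {
        struct transport *ret = xcalloc(1, sizeof(*ret));
        ret->remote = remote;
        ret->url = url;
        ret->get_refs_list = dummy_get_refs_list;
        ret->fetch = dummy_fetch;
        return ret;
    }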
diff_opts.detect_rename = DIFF_DETECT_RENAME;
diff_opts.output_format = DIFF_FORMAT_NO_OUTPUT;
diff_opts.single_follow = opt->paths[0];
+ diff_opts.break_opt = opt->break_opt;
paths[0] = NULL;
diff_tree_setup_paths(paths, &diff_opts);
if (diff_setup_done(&diff_opts) < 0)
--- /dev/null
+#include "cache.h"
+#include "walker.h"
+#include "commit.h"
+#include "tree.h"
+#include "tree-walk.h"
+#include "tag.h"
+#include "blob.h"
+#include "refs.h"
+
+static unsigned char current_commit_sha1[20];
+
+void walker_say(struct walker *walker, const char *fmt, const char *hex)
+{
+ if (walker->get_verbosely)
+ fprintf(stderr, fmt, hex);
+}
+
+static void report_missing(const struct object *obj)
+{
+ char missing_hex[41];
+ strcpy(missing_hex, sha1_to_hex(obj->sha1));
+ fprintf(stderr, "Cannot obtain needed %s %s\n",
+ obj->type ? typename(obj->type) : "object", missing_hex);
+ if (!is_null_sha1(current_commit_sha1))
+ fprintf(stderr, "while processing commit %s.\n",
+ sha1_to_hex(current_commit_sha1));
+}
+
+static int process(struct walker *walker, struct object *obj);
+
+static int process_tree(struct walker *walker, struct tree *tree)
+{
+ struct tree_desc desc;
+ struct name_entry entry;
+
+ if (parse_tree(tree))
+ return -1;
+
+ init_tree_desc(&desc, tree->buffer, tree->size);
+ while (tree_entry(&desc, &entry)) {
+ struct object *obj = NULL;
+
+ /* submodule commits are not stored in the superproject */
+ if (S_ISGITLINK(entry.mode))
+ continue;
+ if (S_ISDIR(entry.mode)) {
+ struct tree *tree = lookup_tree(entry.sha1);
+ if (tree)
+ obj = &tree->object;
+ }
+ else {
+ struct blob *blob = lookup_blob(entry.sha1);
+ if (blob)
+ obj = &blob->object;
+ }
+ if (!obj || process(walker, obj))
+ return -1;
+ }
+ free(tree->buffer);
+ tree->buffer = NULL;
+ tree->size = 0;
+ return 0;
+}
+
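+/*
+ * Walker object flags: SEEN guards against queuing an object twice,
+ * TO_SCAN marks objects we already have locally and only need to
+ * scan, and COMPLETE marks commits whose entire history is known to
+ * be present (see mark_complete() and process_commit() below).
+ */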
+#define COMPLETE (1U << 0)
+#define SEEN (1U << 1)
+#define TO_SCAN (1U << 2)
+
+static struct commit_list *complete = NULL;
+
+static int process_commit(struct walker *walker, struct commit *commit)
+{
+ if (parse_commit(commit))
+ return -1;
+
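+	/*
+	 * The 'complete' list is sorted by date. Popping every entry
+	 * at least as recent as this commit propagates the COMPLETE
+	 * flag down their ancestry, so the check below can stop the
+	 * walk once it reaches history we already have.
+	 */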
+ while (complete && complete->item->date >= commit->date) {
+ pop_most_recent_commit(&complete, COMPLETE);
+ }
+
+ if (commit->object.flags & COMPLETE)
+ return 0;
+
+ hashcpy(current_commit_sha1, commit->object.sha1);
+
+ walker_say(walker, "walk %s\n", sha1_to_hex(commit->object.sha1));
+
+ if (walker->get_tree) {
+ if (process(walker, &commit->tree->object))
+ return -1;
+ if (!walker->get_all)
+ walker->get_tree = 0;
+ }
+ if (walker->get_history) {
+ struct commit_list *parents = commit->parents;
+ for (; parents; parents = parents->next) {
+ if (process(walker, &parents->item->object))
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int process_tag(struct walker *walker, struct tag *tag)
+{
+ if (parse_tag(tag))
+ return -1;
+ return process(walker, tag->tagged);
+}
+
+static struct object_list *process_queue = NULL;
+static struct object_list **process_queue_end = &process_queue;
+
+static int process_object(struct walker *walker, struct object *obj)
+{
+ if (obj->type == OBJ_COMMIT) {
+ if (process_commit(walker, (struct commit *)obj))
+ return -1;
+ return 0;
+ }
+ if (obj->type == OBJ_TREE) {
+ if (process_tree(walker, (struct tree *)obj))
+ return -1;
+ return 0;
+ }
+ if (obj->type == OBJ_BLOB) {
+ return 0;
+ }
+ if (obj->type == OBJ_TAG) {
+ if (process_tag(walker, (struct tag *)obj))
+ return -1;
+ return 0;
+ }
+ return error("Unable to determine requirements "
+ "of type %s for %s",
+ typename(obj->type), sha1_to_hex(obj->sha1));
+}
+
+static int process(struct walker *walker, struct object *obj)
+{
+ if (obj->flags & SEEN)
+ return 0;
+ obj->flags |= SEEN;
+
+ if (has_sha1_file(obj->sha1)) {
+ /* We already have it, so we should scan it now. */
+ obj->flags |= TO_SCAN;
+ }
+ else {
+ if (obj->flags & COMPLETE)
+ return 0;
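+		/*
+		 * We do not have the object yet; start the download
+		 * now and let loop() pick it up with walker->fetch().
+		 */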
+ walker->prefetch(walker, obj->sha1);
+ }
+
+ object_list_insert(obj, process_queue_end);
+ process_queue_end = &(*process_queue_end)->next;
+ return 0;
+}
+
+static int loop(struct walker *walker)
+{
+ struct object_list *elem;
+
+ while (process_queue) {
+ struct object *obj = process_queue->item;
+ elem = process_queue;
+ process_queue = elem->next;
+ free(elem);
+ if (!process_queue)
+ process_queue_end = &process_queue;
+
+ /* If we are not scanning this object, we placed it in
+ * the queue because we needed to fetch it first.
+ */
+ if (!(obj->flags & TO_SCAN)) {
+ if (walker->fetch(walker, obj->sha1)) {
+ report_missing(obj);
+ return -1;
+ }
+ }
+ if (!obj->type)
+ parse_object(obj->sha1);
+ if (process_object(walker, obj))
+ return -1;
+ }
+ return 0;
+}
+
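+/*
+ * A target is either a raw object name in hex, or a ref name that
+ * the walker's fetch_ref() callback can resolve on the remote side.
+ */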
+static int interpret_target(struct walker *walker, char *target, unsigned char *sha1)
+{
+ if (!get_sha1_hex(target, sha1))
+ return 0;
+ if (!check_ref_format(target)) {
+ if (!walker->fetch_ref(walker, target, sha1)) {
+ return 0;
+ }
+ }
+ return -1;
+}
+
+static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
+{
+ struct commit *commit = lookup_commit_reference_gently(sha1, 1);
+ if (commit) {
+ commit->object.flags |= COMPLETE;
+ insert_by_date(commit, &complete);
+ }
+ return 0;
+}
+
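+/*
+ * Read targets from stdin, one per line, in the form "<target>" or
+ * "<target>\t<write_ref>", growing the two output arrays in
+ * parallel; returns the number of targets read.
+ */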
+int walker_targets_stdin(char ***target, const char ***write_ref)
+{
+ int targets = 0, targets_alloc = 0;
+ struct strbuf buf;
+ *target = NULL;
+ *write_ref = NULL;
+ strbuf_init(&buf, 0);
+ while (1) {
+ char *rf_one = NULL;
+ char *tg_one;
+
+ if (strbuf_getline(&buf, stdin, '\n') == EOF)
+ break;
+ tg_one = buf.buf;
+ rf_one = strchr(tg_one, '\t');
+ if (rf_one)
+ *rf_one++ = 0;
+
+ if (targets >= targets_alloc) {
+ targets_alloc = targets_alloc ? targets_alloc * 2 : 64;
+ *target = xrealloc(*target, targets_alloc * sizeof(**target));
+ *write_ref = xrealloc(*write_ref, targets_alloc * sizeof(**write_ref));
+ }
+ (*target)[targets] = xstrdup(tg_one);
+ (*write_ref)[targets] = rf_one ? xstrdup(rf_one) : NULL;
+ targets++;
+ }
+ strbuf_release(&buf);
+ return targets;
+}
+
+void walker_targets_free(int targets, char **target, const char **write_ref)
+{
+ while (targets--) {
+ free(target[targets]);
+ if (write_ref && write_ref[targets])
+ free((char *) write_ref[targets]);
+ }
+}
+
+int walker_fetch(struct walker *walker, int targets, char **target,
+ const char **write_ref, const char *write_ref_log_details)
+{
+ struct ref_lock **lock = xcalloc(targets, sizeof(struct ref_lock *));
+ unsigned char *sha1 = xmalloc(targets * 20);
+ char *msg;
+ int ret;
+ int i;
+
+ save_commit_buffer = 0;
+ track_object_refs = 0;
+
+ for (i = 0; i < targets; i++) {
+ if (!write_ref || !write_ref[i])
+ continue;
+
+ lock[i] = lock_ref_sha1(write_ref[i], NULL);
+ if (!lock[i]) {
+ error("Can't lock ref %s", write_ref[i]);
+ goto unlock_and_fail;
+ }
+ }
+
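+	/*
+	 * Unless recovery from a corrupt repository was asked for,
+	 * seed the 'complete' list from every existing ref, so the
+	 * walk can stop at history that is already present.
+	 */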
+ if (!walker->get_recover)
+ for_each_ref(mark_complete, NULL);
+
+ for (i = 0; i < targets; i++) {
+ if (interpret_target(walker, target[i], &sha1[20 * i])) {
+ error("Could not interpret %s as something to pull", target[i]);
+ goto unlock_and_fail;
+ }
+ if (process(walker, lookup_unknown_object(&sha1[20 * i])))
+ goto unlock_and_fail;
+ }
+
+ if (loop(walker))
+ goto unlock_and_fail;
+
+ if (write_ref_log_details) {
+ msg = xmalloc(strlen(write_ref_log_details) + 12);
+ sprintf(msg, "fetch from %s", write_ref_log_details);
+ } else {
+ msg = NULL;
+ }
+ for (i = 0; i < targets; i++) {
+ if (!write_ref || !write_ref[i])
+ continue;
+ ret = write_ref_sha1(lock[i], &sha1[20 * i], msg ? msg : "fetch (unknown)");
+ lock[i] = NULL;
+ if (ret)
+ goto unlock_and_fail;
+ }
+ free(msg);
+
+ return 0;
+
+unlock_and_fail:
+ for (i = 0; i < targets; i++)
+ if (lock[i])
+ unlock_ref(lock[i]);
+
+ return -1;
+}
+
+void walker_free(struct walker *walker)
+{
+ walker->cleanup(walker);
+ free(walker);
+}
--- /dev/null
+#ifndef WALKER_H
+#define WALKER_H
+
+struct walker {
+ void *data;
+ int (*fetch_ref)(struct walker *, char *ref, unsigned char *sha1);
+ void (*prefetch)(struct walker *, unsigned char *sha1);
+ int (*fetch)(struct walker *, unsigned char *sha1);
+ void (*cleanup)(struct walker *);
+ int get_tree;
+ int get_history;
+ int get_all;
+ int get_verbosely;
+ int get_recover;
+
+ int corrupt_object_found;
+};
+
+/* Report progress on stderr when get_verbosely is set */
+void walker_say(struct walker *walker, const char *, const char *);
+
+/* Load pull targets from stdin */
+int walker_targets_stdin(char ***target, const char ***write_ref);
+
+/* Free up loaded targets */
+void walker_targets_free(int targets, char **target, const char **write_ref);
+
+/*
+ * Fetch the given targets; if write_ref[i] is set, it names the ref
+ * to write that target's value to, and write_ref_log_details, if set,
+ * adds detail to the reflog message.
+ */
+int walker_fetch(struct walker *walker, int targets, char **target,
+ const char **write_ref, const char *write_ref_log_details);
+
+void walker_free(struct walker *walker);
+
+struct walker *get_http_walker(const char *url);
+
+#endif /* WALKER_H */
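
Putting the walker API together, a command such as git-http-fetch
would drive it roughly as follows. This is a hedged sketch: fetch_one
and its particular flag settings are illustrative, not taken from
this patch.

    int fetch_one(const char *url, char *target, const char *write_ref)
    {
        struct walker *walker = get_http_walker(url);
        int ret;

        walker->get_tree = 1;
        walker->get_history = 1;
        walker->get_verbosely = 1;
        ret = walker_fetch(walker, 1, &target, &write_ref, url);
        walker_free(walker);
        return ret;
    }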