git-commit
git-commit-tree
git-config
-git-convert-objects
git-count-objects
git-cvsexportcommit
git-cvsimport
git-stripspace
git-submodule
git-svn
-git-svnimport
git-symbolic-ref
git-tag
git-tar-tree
config.mak.autogen
config.mak.append
configure
+tags
+TAGS
+cscope*
Santi Béjar <sbejar@gmail.com>
Sean Estabrooks <seanlkml@sympatico.ca>
Shawn O. Pearce <spearce@spearce.org>
+Steven Grimm <koreth@midwinter.com>
Theodore Ts'o <tytso@mit.edu>
Tony Luck <tony.luck@intel.com>
Uwe Kleine-König <Uwe_Zeisberger@digi.com>
--- /dev/null
+GIT v1.5.3.2 Release Notes
+==========================
+
+Fixes since v1.5.3.1
+--------------------
+
+ * git-push sent thin packs by default, which was not good for
+ the public distribution server (no point in saving transfer
+ while pushing; no point in making the resulting pack less
+ optimal).
+
+ * git-svn sometimes terminated with "Malformed network data" when
+ talking over svn:// protocol.
+
+ * git-send-email re-issued the same message-id about 10% of the
+ time if you fired off 30 messages within a single second.
+
+ * git-stash was not terminating the log message of commits it
+ internally creates with LF.
+
+ * git-apply failed to check the size of the patch hunk when its
+ beginning part matched the remainder of the preimage exactly,
+ even though the preimage recorded in the hunk was much larger
+ (therefore the patch should not have applied), leading to a
+ segfault.
+
+ * "git rm foo && git commit foo" complained that 'foo' needs to
+ be added first, instead of committing the removal, which was a
+ nonsense.
+
+ * git grep -c said "/dev/null: 0".
+
+ * git-add -u failed to recognize a blob whose type changed
+ between the index and the work tree.
+
+ * The limit on rename detection has been tightened considerably to
+ reduce performance problems with huge changes.
+
+ * cvsimport and svnimport barfed when the input tried to move
+ a tag.
+
+ * "git apply -pN" did not chop the right number of directories.
+
+ * "git svnimport" did not like SVN tags with funny characters in them.
+
+ * git-gui 0.8.3, with assorted fixes, including:
+
+ - font-chooser on X11 was unusable with large number of fonts;
+ - a diff that contained a deleted symlink made it barf;
+ - an untracked symbolic link to a directory made it fart;
+ - a file with % in its name made it vomit;
+
+
+Documentation updates
+---------------------
+
+The user manual has been somewhat restructured. I think the new
+organization is much easier to read.
--- /dev/null
+GIT v1.5.3.3 Release Notes
+==========================
+
+Fixes since v1.5.3.2
+--------------------
+
+ * git-quiltimport did not like it when a patch described in the
+ series file did not exist.
+
+ * p4 importer missed the executable bit in some cases.
+
+ * The default shell on some FreeBSD systems did not execute the
+ argument parsing code correctly and made git unusable.
+
+ * git-svn incorrectly spawned a pager even when the user
+ explicitly asked not to.
+
+ * sample post-receive hook overquoted the envelope sender
+ value.
+
+ * git-am got confused when the patch contained a change that was
+ only about type and not contents.
+
+ * git-mergetool did not show our and their version of the
+ conflicted file when started from a subdirectory of the
+ project.
+
+ * git-mergetool did not pass correct options when invoking diff3.
+
+ * git-log sometimes invoked underlying "diff" machinery
+ unnecessarily.
--- /dev/null
+GIT v1.5.3.4 Release Notes
+==========================
+
+Fixes since v1.5.3.3
+--------------------
+
+ * A change to "git-ls-files" in v1.5.3.3, introduced to better support
+ partial commits of removals, had a bug that caused a segfault; it was
+ diagnosed and fixed by Keith and Carl.
+
+ * Performance improvements for rename detection have been backported
+ from the 'master' branch.
+
+ * "git-for-each-ref --format='%(numparent)'" was not working
+ correctly at all, and --format='%(parent)' was not working for
+ merge commits.
+
+ * Sample "post-receive-hook" incorrectly sent out push
+ notification e-mails with "From: " set to the committer of the
+ commit that happened to be at the tip of the pushed branch,
+ rather than the person who pushed.
+
+ * "git-remote" did not exit non-zero status upon error.
+
+ * "git-add -i" did not respond very well to EOF from tty nor
+ bogus input.
+
+ * "git-rebase -i" squash subcommand incorrectly made the
+ author of later commit the author of resulting commit,
+ instead of taking from the first one in the squashed series.
+
+ * "git-stash apply --index" was not documented.
+
+ * autoconfiguration learned that the "ar" command is found as "gas" on
+ some systems.
--- /dev/null
+GIT v1.5.3.5 Release Notes
+==========================
+
+Fixes since v1.5.3.4
+--------------------
+
+ * Comes with git-gui 0.8.4.
+
+ * "git-config" silently ignored options after --list; now it will
+ error out with a usage message.
+
+ * "git-config --file" failed if the argument used a relative path
+ as it changed directories before opening the file.
+
+ * "git-config --file" now displays a proper error message if it
+ cannot read the file specified on the command line.
+
+ * "git-config", "git-diff", "git-apply" failed if run from a
+ subdirectory with relative GIT_DIR and GIT_WORK_TREE set.
+
+ * "git-blame" crashed if run during a merge conflict.
+
+ * "git-add -i" did not handle single line hunks correctly.
+
+ * "git-rebase -i" and "git-stash apply" failed if external diff
+ drivers were used for one or more files in a commit. They now
+ avoid calling the external diff drivers.
+
+ * "git-log --follow" did not work unless diff generation (e.g. -p)
+ was also requested.
+
+ * "git-log --follow -B" did not work at all. Fixed.
+
+ * "git-log -M -B" did not correctly handle cases of very large files
+ being renamed and replaced by very small files in the same commit.
+
+ * "git-log" printed extra newlines between commits when a diff
+ was generated internally (e.g. -S or --follow) but not displayed.
+
+ * "git-push" error message is more helpful when pushing to a
+ repository with no matching refs and none specified.
+
+ * "git-push" now respects + (force push) on wildcard refspecs,
+ matching the behavior of git-fetch.
+
+ * "git-filter-branch" now updates the working directory when it
+ has finished filtering the current branch.
+
+ * "git-instaweb" no longer fails on Mac OS X.
+
+ * "git-cvsexportcommit" didn't always create new parent directories
+ before trying to create new child directories. Fixed.
+
+ * "git-fetch" printed a scary (but bogus) error message while
+ fetching a tag that pointed to a tree or blob. The error did
+ not impact correctness, only user perception. The bogus error
+ is no longer printed.
+
+ * "git-ls-files --ignored" did not properly descend into non-ignored
+ directories that themselves contained ignored files if d_type
+ was not supported by the filesystem. This bug impacted systems
+ such as AFS. Fixed.
+
+ * Git segfaulted when reading an invalid .gitattributes file. Fixed.
+
+ * The post-receive-email example hook was fixed for
+ non-fast-forward updates.
+
+ * Documentation updates for supported (but previously undocumented)
+ options of "git-archive" and "git-reflog".
+
+ * "make clean" no longer deletes the configure script that ships
+ with the git tarball, making multiple architecture builds easier.
+
+ * "git-remote show origin" spewed a warning message from Perl
+ when no remote is defined for the current branch via
+ branch.<name>.remote configuration settings.
+
+ * Building with NO_PERL_MAKEMAKER excessively rebuilt contents
+ of perl/ subdirectory by rewriting perl.mak.
+
+ * http.sslVerify configuration settings were not used in scripted
+ Porcelains.
+
+ * "git-add" leaked a bit of memory while scanning for files to add.
+
+ * A few workarounds to squelch false warnings from recent gcc have
+ been added.
+
+ * "git-send-pack $remote frotz" segfaulted when there is nothing
+ named 'frotz' on the local end.
+
+ * "git-rebase -interactive" did not handle its "--strategy" option
+ properly.
Updates since v1.5.3
--------------------
+
+ * Comes with much improved gitk.
+
+ * git-reset is now built-in.
+
+ * git-send-email can optionally talk over ssmtp and use SMTP-AUTH.
+
+ * git-rebase learned --whitespace option.
+
+ * git-remote knows --mirror mode.
+
+ * git-merge can call the "post-merge" hook.
+
+ * git-pack-objects can optionally run deltification with multiple threads.
+
+ * git-archive can optionally substitute keywords in files marked with
+ export-subst attribute.
+
+ * git-for-each-ref learned %(xxxdate:<dateformat>) syntax to
+ show the various date fields in different formats.
+
+ * git-gc --auto is a low-impact way to automatically run a
+ variant of git-repack that does not lose unreferenced objects
+ (read: safer than the usual one) after the user accumulates
+ too many loose objects.
+
+ * git-push has been rewritten in C.
+
+ * git-push learned --dry-run option to show what would happen
+ if a push is run.
+
+ * git-remote learned "rm" subcommand.
+
+ * git-rebase --interactive mode can now work on detached HEAD.
+
+ * git-cvsserver can be run via git-shell.
+
+ * git-am and git-rebase are far less verbose.
+
+ * git-pull learned to pass --[no-]ff option to underlying git-merge.
+
+ * Various Perforce importer updates.
Fixes since v1.5.3
------------------
All of the fixes in v1.5.3 maintenance series are included in
this release, unless otherwise noted.
+--
+exec >/var/tmp/1
+O=v1.5.3.4-450-g952a9e5
+echo O=`git describe refs/heads/master`
+git shortlog --no-merges $O..refs/heads/master ^refs/heads/maint
git-commit mainporcelain
git-commit-tree plumbingmanipulators
git-config ancillarymanipulators
-git-convert-objects ancillarymanipulators
git-count-objects ancillaryinterrogators
git-cvsexportcommit foreignscminterface
git-cvsimport foreignscminterface
git-stripspace purehelpers
git-submodule mainporcelain
git-svn foreignscminterface
-git-svnimport foreignscminterface
git-symbolic-ref plumbingmanipulators
git-tag mainporcelain
git-tar-tree plumbinginterrogators
Set the path to the working tree. The value will not be
used in combination with repositories found automatically in
a .git directory (i.e. $GIT_DIR is not set).
- This can be overriden by the GIT_WORK_TREE environment
+ This can be overridden by the GIT_WORK_TREE environment
variable and the '--work-tree' command line option.
core.logAllRefUpdates::
If this option is not given, `git fetch` defaults to remote "origin".
branch.<name>.merge::
- When in branch <name>, it tells `git fetch` the default refspec to
- be marked for merging in FETCH_HEAD. The value has exactly to match
- a remote part of one of the refspecs which are fetched from the remote
- given by "branch.<name>.remote".
+ When in branch <name>, it tells `git fetch` the default
+ refspec to be marked for merging in FETCH_HEAD. The value is
+ handled like the remote part of a refspec, and must match a
+ ref which is fetched from the remote given by
+ "branch.<name>.remote".
The merge information is used by `git pull` (which at first calls
`git fetch`) to lookup the default branch for merging. Without
this option, `git pull` defaults to merge the first refspec fetched.
branch.<name>.merge to the desired branch, and use the special setting
`.` (a period) for branch.<name>.remote.
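+
+For example, a setup that merges from a local branch could look like
+this in the configuration file (the branch names are illustrative):
+
+------------
+[branch "devel"]
+	remote = .
+	merge = refs/heads/master
+------------
+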
+branch.<name>.mergeoptions::
+ Sets default options for merging into branch <name>. The syntax and
+ supported options are the same as those of gitlink:git-merge[1], but
+ option values containing whitespace characters are currently not
+ supported.
+
clean.requireForce::
A boolean to make git-clean do nothing unless given -f or -n. Defaults
to false.
algorithm used by 'git gc --aggressive'. This defaults
to 10.
+gc.auto::
+ When there are approximately more than this many loose
+ objects in the repository, `git gc --auto` will pack them.
+ Some Porcelain commands use this command to perform a
+ light-weight garbage collection from time to time. Setting
+ this to 0 disables it.
+
+gc.autopacklimit::
+ When there are more than this many packs that are not
+ marked with a `*.keep` file in the repository, `git gc
+ --auto` consolidates them into one larger pack. Setting
+ this to 0 disables this.
+
gc.packrefs::
`git gc` does not run `git pack-refs` in a bare repository by
default so that older dumb-transport clients can still fetch
merge.tool::
Controls which merge resolution program is used by
- gitlink:git-mergetool[l]. Valid values are: "kdiff3", "tkdiff",
+ gitlink:git-mergetool[1]. Valid values are: "kdiff3", "tkdiff",
"meld", "xxdiff", "emerge", "vimdiff", "gvimdiff", and "opendiff".
merge.verbosity::
message if conflicts were detected. Level 1 outputs only
conflicts, 2 outputs conflicts and file changes. Level 5 and
above outputs debugging information. The default is level 2.
- Can be overriden by 'GIT_MERGE_VERBOSITY' environment variable.
+ Can be overridden by 'GIT_MERGE_VERBOSITY' environment variable.
merge.<driver>.name::
Defines a human readable name for a custom low-level
[NOTE]
Most likely, you are not directly using the core
-git Plumbing commands, but using Porcelain like Cogito on top
-of it. Cogito works a bit differently and you usually do not
-have to run `git-update-index` yourself for changed files (you
-do tell underlying git about additions and removals via
-`cg-add` and `cg-rm` commands). Just before you make a commit
-with `cg-commit`, Cogito figures out which files you modified,
-and runs `git-update-index` on them for you.
+git Plumbing commands, but using Porcelain such as `git-add`, `git-rm`
+and `git-commit`.
Tagging a version
and in fact a lot of the common git command combinations can be scripted
with the `git xyz` interfaces. You can learn things by just looking
-at what the various git scripts do. For example, `git reset` is the
-above two lines implemented in `git-reset`, but some things like
+at what the various git scripts do. For example, `git reset` used to be
+the above two lines implemented in `git-reset`, but some things like
`git status` and `git commit` are slightly more complex scripts around
the basic git commands.
$ git branch
------------
-which is nothing more than a simple script around `ls .git/refs/heads`.
-There will be asterisk in front of the branch you are currently on.
+which used to be nothing more than a simple script around `ls .git/refs/heads`.
+There will be an asterisk in front of the branch you are currently on.
Sometimes you may wish to create a new branch _without_ actually
checking it out and switching to it. If so, just use the command
to resolve and what the merge is all about:
------------
-$ git merge "Merge work in mybranch" HEAD mybranch
+$ git merge -m "Merge work in mybranch" mybranch
------------
where the first argument is going to be used as the commit message if
`master` branch, and the second column for the `mybranch`
branch. Three commits are shown along with their log messages.
All of them have non-blank characters in the first column (`*`
-shows an ordinary commit on the current branch, `.` is a merge commit), which
+shows an ordinary commit on the current branch, `-` is a merge commit), which
means they are now part of the `master` branch. Only the "Some
work" commit has the plus `+` character in the second column,
because `mybranch` has not been merged to incorporate these
------------
$ git checkout mybranch
-$ git merge "Merge upstream changes." HEAD master
+$ git merge -m "Merge upstream changes." master
------------
This outputs something like this (the actual commit object names
There are (confusingly enough) `git-ssh-fetch` and `git-ssh-upload`
programs, which are 'commit walkers'; they outlived their
usefulness when git Native and SSH transports were introduced,
-and not used by `git pull` or `git push` scripts.
+and are not used by `git pull` or `git push` scripts.
Once you fetch from the remote repository, you `merge` that
with your current branch.
The command writes the commit object name of the common ancestor
to the standard output, so we captured its output to a variable,
-because we will be using it in the next step. BTW, the common
+because we will be using it in the next step. By the way, the common
ancestor commit is the "New day." commit in this case. You can
tell it by:
convenient to organize your project with an informal hierarchy
of developers. Linux kernel development is run this way. There
is a nice illustration (page 17, "Merges to Mainline") in
-link:http://tinyurl.com/a2jdg[Randy Dunlap's presentation].
+link:http://www.xenotime.net/linux/mentor/linux-mentoring-2006.pdf[Randy Dunlap's presentation].
It should be stressed that this hierarchy is purely *informal*.
There is nothing fundamental in git that enforces the "chain of
'commit-fix' next, like this:
------------
-$ git merge 'Merge fix in diff-fix' master diff-fix
-$ git merge 'Merge fix in commit-fix' master commit-fix
+$ git merge -m 'Merge fix in diff-fix' diff-fix
+$ git merge -m 'Merge fix in commit-fix' commit-fix
------------
Which would result in:
--ext-diff::
Allow an external diff helper to be executed. If you set an
- external diff driver with gitlink:gitattributes(5), you need
- to use this option with gitlink:git-log(1) and friends.
+ external diff driver with gitlink:gitattributes[5], you need
+ to use this option with gitlink:git-log[1] and friends.
--no-ext-diff::
Disallow external diff drivers.
--------
[verse]
'git-apply' [--stat] [--numstat] [--summary] [--check] [--index]
- [--apply] [--no-add] [--index-info] [-R | --reverse]
+ [--apply] [--no-add] [--build-fake-ancestor <file>] [-R | --reverse]
[--allow-binary-replacement | --binary] [--reject] [-z]
[-pNUM] [-CNUM] [--inaccurate-eof] [--cached]
[--whitespace=<nowarn|warn|error|error-all|strip>]
cached data, apply the patch, and store the result in the index,
without using the working tree. This implies '--index'.
---index-info::
+--build-fake-ancestor <file>::
Newer git-diff output has embedded 'index information'
for each blob to help identify the original version that
the patch applies to. When this flag is given, and if
- the original version of the blob is available locally,
- outputs information about them to the standard output.
+ the original versions of the blobs are available locally,
+ builds a temporary index containing those blobs.
++
+When a pure mode change is encountered (which has no index information),
+the information is read from the current index instead.
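++
+For example (a sketch; the file names are illustrative), a caller can
+build the temporary index and then materialize the fake ancestor tree
+from it:
++
+------------
+$ git apply --build-fake-ancestor tmp-index fix.patch
+$ GIT_INDEX_FILE=tmp-index git write-tree
+------------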
-R, --reverse::
Apply the patch in reverse.
--------
[verse]
'git-archive' --format=<fmt> [--list] [--prefix=<prefix>/] [<extra>]
- [--remote=<repo>] <tree-ish> [path...]
+ [--remote=<repo> [--exec=<git-upload-archive>]] <tree-ish>
+ [path...]
DESCRIPTION
-----------
Instead of making a tar archive from local repository,
retrieve a tar archive from a remote repository.
+--exec=<git-upload-archive>::
+ Used with --remote to specify the path to the
+ git-upload-archive executable on the remote side.
+
<tree-ish>::
The tree or commit to produce an archive for.
on the subcommand:
git bisect start [<bad> [<good>...]] [--] [<paths>...]
- git bisect bad <rev>
- git bisect good <rev>
+ git bisect bad [<rev>]
+ git bisect good [<rev>...]
+ git bisect skip [<rev>...]
git bisect reset [<branch>]
git bisect visualize
git bisect replay <logfile>
Then compile and test the one you chose to try. After that, tell
bisect what the result was as usual.
+Bisect skip
+~~~~~~~~~~~
+
+Instead of choosing a nearby commit by yourself, you may just want git
+to do it for you using:
+
+------------
+$ git bisect skip # Current version cannot be tested
+------------
+
+But computing the commit to test may be slower afterwards, and git may
+eventually not be able to tell which is the first bad commit among a bad
+commit and one or more "skip"ped commits.
+
Cutting down bisection by giving more parameters to bisect start
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
------------
Note that the "run" script (`my_script` in the above example) should
-exit with code 0 in case the current source code is good and with a
-code between 1 and 127 (included) in case the current source code is
-bad.
+exit with code 0 in case the current source code is good. Exit with a
+code between 1 and 127 (inclusive), except 125, if the current
+source code is bad.
Any other exit code will abort the automatic bisect process. (A
program that does "exit(-1)" leaves $? = 255, see exit(3) manual page,
the value is chopped with "& 0377".)
+The special exit code 125 should be used when the current source code
+cannot be tested. If the "run" script exits with this code, the current
+revision will be skipped, see `git bisect skip` above.
+
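+A minimal sketch of such a "run" script, assuming a make-based build
+and a hypothetical test command, could look like this:
+
+------------
+#!/bin/sh
+make || exit 125		# cannot build this revision; skip it
+./run-tests.sh			# exits 0 if good, 1..127 (not 125) if bad
+------------
+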
You may often find that during bisect you want to have near-constant
tweaks (e.g., s/#define DEBUG 0/#define DEBUG 1/ in a header file, or
"revision that does not have this commit needs this patch applied to
If no <start-point> is given, the branch will be created with a head
equal to that of the currently checked out branch.
+Note that this will create the new branch, but it will not switch the
+working tree to it; use "git checkout <newbranch>" to switch to the
+new branch.
+
When a local branch is started off a remote branch, git can setup the
branch so that gitlink:git-pull[1] will appropriately merge from that
remote branch. If this behavior is desired, it is possible to make it
--no-abbrev::
Display the full sha1s in output listing rather than abbreviating them.
+--track::
+ Set up configuration so that git-pull will automatically
+ retrieve data from the remote branch. Use this if you always
+ pull from the same remote branch into the new branch, or if you
+ don't want to use "git pull <repository> <refspec>" explicitly. Set the
+ branch.autosetupmerge configuration variable to true if you
+ want git-checkout and git-branch to always behave as if
+ '--track' were given.
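++
+For example (the branch names are illustrative):
++
+------------
+$ git branch --track mywork origin/master
+------------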
+
+--no-track::
+ When -b is given and a branch is created off a remote branch,
+ set up configuration so that git-pull will not retrieve data
+ from the remote branch, ignoring the branch.autosetupmerge
+ configuration variable.
+
<branchname>::
The name of the branch to create or delete.
The new branch name must pass all checks defined by
and move it afterwards to help build the bundle.
in R1 on A:
+
+------------
$ git-bundle create mybundle master ^lastR2bundle
$ git tag -f lastR2bundle master
+------------
(move mybundle from A to B by some mechanism)
in R2 on B:
+
+------------
$ git-bundle verify mybundle
$ git-fetch mybundle refspec
+------------
where refspec is refInBundle:localRef
You can first sneakernet the bundle file to ~/tmp/file.bdl and
then these commands:
+------------
$ git ls-remote bundle
$ git fetch bundle
$ git pull bundle
+------------
would treat it as if it is talking with a remote side over the
network.
--track::
When -b is given and a branch is created off a remote branch,
set up configuration so that git-pull will automatically
- retrieve data from the remote branch. Set the
+ retrieve data from the remote branch. Use this if you always
+ pull from the same remote branch into the new branch, or if you
+ don't want to use "git pull <repository> <refspec>" explicitly. Set the
branch.autosetupmerge configuration variable to true if you
want git-checkout and git-branch to always behave as if
'--track' were given.
message prior committing.
-x::
- Cause the command to append which commit was
- cherry-picked after the original commit message when
- making a commit. Do not use this option if you are
- cherry-picking from your private branch because the
- information is useless to the recipient. If on the
+ When recording the commit, append to the original commit
+ message a note that indicates which commit this change
+ was cherry-picked from. Append the note only for cherry
+ picks without conflicts. Do not use this option if
+ you are cherry-picking from your private branch because
+ the information is useless to the recipient. If on the
other hand you are cherry-picking between two publicly
visible branches (e.g. backporting a fix to a
maintenance branch for an older release from a
+++ /dev/null
-git-convert-objects(1)
-======================
-
-NAME
-----
-git-convert-objects - Converts old-style git repository
-
-
-SYNOPSIS
---------
-'git-convert-objects'
-
-DESCRIPTION
------------
-Converts old-style git repository to the latest format
-
-
-Author
-------
-Written by Linus Torvalds <torvalds@osdl.org>
-
-Documentation
---------------
-Documentation by David Greaves, Junio C Hamano and the git-list <git@vger.kernel.org>.
-
-GIT
----
-Part of the gitlink:git[7] suite
$ export GIT_DIR=~/project/.git
$ cd ~/project_cvs_checkout
$ git-cvsexportcommit -v <commit-sha1>
-$ cvs commit -F .mgs <files>
+$ cvs commit -F .msg <files>
------------
Merge pending patches into CVS automatically -- only if you really know what you are doing::
+
<1> Changes between the tips of the topic and the master branches.
<2> Same as above.
-<3> Changes that occured on the master branch since when the topic
+<3> Changes that occurred on the master branch since when the topic
branch was started off it.
Limiting the diff output::
git filter-branch --index-filter 'git update-index --remove filename' HEAD
--------------------------------------------------------------------------
-Now, you will get the rewritten history saved in the branch 'newbranch'
-(your current branch is left untouched).
+Now, you will get the rewritten history saved in HEAD.
To set a commit (which typically is at the tip of another
history) to be the parent of the current initial commit, in
the object referred by the ref does not cause an error. It
returns an empty string instead.
+As a special case for the date-type fields, you may specify a format for
+the date by adding one of `:default`, `:relative`, `:short`, `:local`,
+`:iso8601` or `:rfc2822` to the end of the fieldname; e.g.
+`%(taggerdate:relative)`.
+
EXAMPLES
--------
SYNOPSIS
--------
-'git-gc' [--prune] [--aggressive]
+'git-gc' [--prune] [--aggressive] [--auto]
DESCRIPTION
-----------
Users are encouraged to run this task on a regular basis within
each repository to maintain good disk space utilization and good
-operating performance.
+operating performance. Some git commands may automatically run
+`git-gc`; see the `--auto` flag below for details.
OPTIONS
-------
persistent, so this option only needs to be used occasionally; every
few hundred changesets or so.
+--auto::
+ With this option, `git gc` checks whether any housekeeping is
+ required; if not, it exits without performing any work.
+ Some git commands run `git gc --auto` after performing
+ operations that could create many loose objects.
++
+Housekeeping is required if there are too many loose objects or
+too many packs in the repository. If the number of loose objects
+exceeds the value of the `gc.auto` configuration variable, then
+all loose objects are combined into a single pack using
+`git-repack -d -l`. Setting the value of `gc.auto` to 0
+disables automatic packing of loose objects.
++
+If the number of packs exceeds the value of `gc.autopacklimit`,
+then existing packs (except those marked with a `.keep` file)
+are consolidated into a single pack by using the `-A` option of
+`git-repack`. Setting `gc.autopacklimit` to 0 disables
+automatic consolidation of packs.
+
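+For example, to make automatic packing trigger sooner and to disable
+automatic consolidation of packs (the values are illustrative):
+
+------------
+$ git config gc.auto 100
+$ git config gc.autopacklimit 0
+------------
+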
Configuration
-------------
SYNOPSIS
--------
-'git-http-push' [--all] [--force] [--verbose] <url> <ref> [<ref>...]
+'git-http-push' [--all] [--dry-run] [--force] [--verbose] <url> <ref> [<ref>...]
DESCRIPTION
-----------
the remote repository can lose commits; use it with
care.
+--dry-run::
+ Do everything except actually send the updates.
+
--verbose::
Report the list of objects being walked locally and the
list of objects successfully sent to the remote repository.
a default name determined from the pack content. If
<pack-file> is not specified consider using --keep to
prevent a race condition between this process and
- gitlink::git-repack[1] .
+ gitlink:git-repack[1].
--fix-thin::
It is possible for gitlink:git-pack-objects[1] to build
The HTTP daemon command-line that will be executed.
Command-line options may be specified here, and the
configuration file will be added at the end of the command-line.
- Currently, lighttpd and apache2 are the only supported servers.
+ Currently lighttpd, apache2 and webrick are supported.
(Default: lighttpd)
-m|--module-path::
Author
------
-Written by Junio C Hamano 濱野 純 <junkio@cox.net>
+Written by Junio C Hamano <gitster@pobox.com>
Documentation
--------------
processes them in turn only stopping if merge returns a non-zero exit
code.
-Typically this is run with the a script calling git's imitation of
+Typically this is run with a script calling git's imitation of
the merge command from the RCS package.
A sample script called "git-merge-one-file" is included in the
[verse]
'git-merge' [-n] [--summary] [--no-commit] [--squash] [-s <strategy>]...
[-m <msg>] <remote> <remote>...
+'git-merge' <msg> HEAD <remote>...
DESCRIPTION
-----------
This is the top-level interface to the merge machinery
which drives multiple merge strategy scripts.
+The second syntax (<msg> `HEAD` <remote>) is supported for
+historical reasons. Do not use it from the command line or in
+new scripts. It is the same as `git merge -m <msg> <remote>`.
+
OPTIONS
-------
include::merge-options.txt[]
-<msg>::
+-m <msg>::
The commit message to be used for the merge commit (in case
it is created). The `git-fmt-merge-msg` script can be used
to give a good default for automated `git-merge` invocations.
-<head>::
- Our branch head commit. This has to be `HEAD`, so new
- syntax does not require it
-
<remote>::
Other branch head merged into our branch. You need at
least one <remote>. Specifying more than one <remote>
message if conflicts were detected. Level 1 outputs only
conflicts, 2 outputs conflicts and file changes. Level 5 and
above outputs debugging information. The default is level 2.
- Can be overriden by 'GIT_MERGE_VERBOSITY' environment variable.
+ Can be overridden by 'GIT_MERGE_VERBOSITY' environment variable.
+branch.<name>.mergeoptions::
+ Sets default options for merging into branch <name>. The syntax and
+ supported options are the same as those of git-merge, but option values
+ containing whitespace characters are currently not supported.
HOW MERGE WORKS
---------------
designed to be unpackable without having anything else, but for
random access, accompanied with the pack index file (.idx).
+Placing both in the pack/ subdirectory of $GIT_OBJECT_DIRECTORY (or
+any of the directories on $GIT_ALTERNATE_OBJECT_DIRECTORIES)
+enables git to read from such an archive.
+
'git-unpack-objects' command can read the packed archive and
expand the objects contained in the pack into "one-file
one-object" format; this is typically done by the smart-pull
commands when a pack is created on-the-fly for efficient network
transport by their peers.
-Placing both in the pack/ subdirectory of $GIT_OBJECT_DIRECTORY (or
-any of the directories on $GIT_ALTERNATE_OBJECT_DIRECTORIES)
-enables git to read from such an archive.
-
In a packed archive, an object is either stored as a compressed
whole, or as a difference from some other object. The latter is
often called a delta.
DESCRIPTION
-----------
-This program search the `$GIT_OBJECT_DIR` for all objects that currently
+This program searches the `$GIT_OBJECT_DIR` for all objects that currently
exist in a pack file as well as the independent object directories.
All such extra objects are removed.
SYNOPSIS
--------
[verse]
-'git-push' [--all] [--tags] [--receive-pack=<git-receive-pack>]
+'git-push' [--all] [--dry-run] [--tags] [--receive-pack=<git-receive-pack>]
[--repo=all] [-f | --force] [-v] [<repository> <refspec>...]
DESCRIPTION
Instead of naming each ref to push, specifies that all
refs under `$GIT_DIR/refs/heads/` be pushed.
+\--dry-run::
+ Do everything except actually send the updates.
+
\--tags::
All refs under `$GIT_DIR/refs/tags` are pushed, in
addition to refspecs explicitly listed on the command
`git reset --hard <upstream>` (or <newbase>).
The commits that were previously saved into the temporary area are
-then reapplied to the current branch, one by one, in order.
+then reapplied to the current branch, one by one, in order. Note that
+any commits in HEAD which introduce the same textual changes as a commit
+in HEAD..<upstream> are omitted (i.e., a patch already accepted upstream
+with a different commit message or timestamp will be skipped).
It is possible that a merge failure will prevent this process from being
completely automatic. You will have to resolve any such merge failure
The latter form is just a short-hand of `git checkout topic`
followed by `git rebase master`.
+If the upstream branch already contains a change you have made (e.g.,
+because you mailed a patch which was applied upstream), then that commit
+will be skipped. For example, running `git-rebase master` on the
+following history (in which A' and A introduce the same set of changes,
+but have different committer information):
+
+------------
+ A---B---C topic
+ /
+ D---E---A'---F master
+------------
+
+will result in:
+
+------------
+ B'---C' topic
+ /
+ D---E---A'---F master
+------------
+
Here is how you would transplant a topic branch based on one
branch to another, to pretend that you forked the topic branch
from the latter branch, using `rebase --onto`.
If you want to fold two or more commits into one, replace the command
"pick" with "squash" for the second and subsequent commit. If the
commits had different authors, it will attribute the squashed commit to
-the author of the last commit.
+the author of the first commit.
In both cases, or when a "pick" does not succeed (because of merge
errors), the loop will stop to let you fix things, and you can continue
depending on the subcommand:
[verse]
-git reflog expire [--dry-run] [--stale-fix]
+git reflog expire [--dry-run] [--stale-fix] [--verbose]
[--expire=<time>] [--expire-unreachable=<time>] [--all] <refs>...
git reflog [show] [log-options]
--all::
Instead of listing <refs> explicitly, prune all refs.
+--verbose::
+ Print extra information on screen.
+
Author
------
Written by Junio C Hamano <junkio@cox.net>
[verse]
'git-remote'
'git-remote' add [-t <branch>] [-m <branch>] [-f] [--mirror] <name> <url>
+'git-remote' rm <name>
'git-remote' show <name>
'git-remote' prune <name>
'git-remote' update [group]
in the 'refs/remotes/' namespace, but in 'refs/heads/'. This option
only makes sense in bare repositories.
+'rm'::
+
+Remove the remote named <name>. All remote tracking branches and
+configuration settings for the remote are removed.
+
'show'::
Gives some information about the remote <name>.
[ \--pretty | \--header ]
[ \--bisect ]
[ \--bisect-vars ]
+ [ \--bisect-all ]
[ \--merge ]
[ \--reverse ]
[ \--walk-reflogs ]
turns out to be bad to `bisect_bad`, and the number of commits
we are bisecting right now to `bisect_all`.
+--bisect-all::
+
+This outputs all the commit objects between the included and excluded
+commits, ordered by their distance to the included and excluded
+commits. The farthest from them is displayed first. (This is the only
+one displayed by `--bisect`.)
+
+This is useful because it makes it easy to choose a good commit to
+test when you want to avoid testing some of them for some reason (they
+may not compile, for example).
+
+This option can be used along with `--bisect-vars`; in that case,
+after all the sorted commit objects, the same text is shown as if
+`--bisect-vars` had been used alone.
+
--
Commit Ordering
`/usr/lib/sendmail` if such program is available, or
`localhost` otherwise.
+--smtp-server-port::
+ Specifies a port different from the default port (SMTP
+ servers typically listen on port 25 for SMTP and port
+ 465 for SSMTP).
+
--smtp-user, --smtp-pass::
Username and password for SMTP-AUTH. Defaults are the values of
the configuration values 'sendemail.smtpuser' and
Format of the file(s) specified in sendemail.aliasesfile. Must be
one of 'mutt', 'mailrc', 'pine', or 'gnus'.
+sendemail.to::
+ Email address (or alias) to always send to.
+
sendemail.cccmd::
Command to execute to generate per patch file specific "Cc:"s.
SYNOPSIS
--------
-'git-send-pack' [--all] [--force] [--receive-pack=<git-receive-pack>] [--verbose] [--thin] [<host>:]<directory> [<ref>...]
+'git-send-pack' [--all] [--dry-run] [--force] [--receive-pack=<git-receive-pack>] [--verbose] [--thin] [<host>:]<directory> [<ref>...]
DESCRIPTION
-----------
Instead of explicitly specifying which refs to update,
update all heads that locally exist.
+\--dry-run::
+ Do everything except actually send the updates.
+
\--force::
Usually, the command refuses to update a remote ref that
is not an ancestor of the local ref used to overwrite it.
show [<stash>]::
- Show the changes recorded in the stash as a diff between the the
+ Show the changes recorded in the stash as a diff between the
stashed state and its original parent. When no `<stash>` is given,
shows the latest one. By default, the command shows the diffstat, but
it will accept any format known to `git-diff` (e.g., `git-stash show
-p stash@\{1}` to view the second most recent stash in patch form).
-apply [<stash>]::
+apply [--index] [<stash>]::
Restore the changes recorded in the stash on top of the current
+ working tree state. When no `<stash>` is given, applies the latest one.
+
This operation can fail with conflicts; you need to resolve them
by hand in the working tree.
++
+If the `--index` option is used, then it tries to reinstate not only the
+working tree's changes, but also the index's. However, this can fail
+when you have conflicts (which are stored in the index, where you
+therefore can no longer apply the changes as they were originally).
clear::
Remove all the stashed states. Note that those states will then
repository is cloned at the specified path, added to the
changeset and registered in .gitmodules. If no path is
specified, the path is deduced from the repository specification.
+ If the repository url begins with ./ or ../, it is stored as
+ given but resolved as a relative path from the main project's
+ url when cloning.
status::
Show the status of the submodules. This will print the SHA-1 of the
BASIC EXAMPLES
--------------
-Tracking and contributing to a the trunk of a Subversion-managed project:
+Tracking and contributing to the trunk of a Subversion-managed project:
------------------------------------------------------------------------
# Clone a repo (like git clone):
+++ /dev/null
-git-svnimport(1)
-================
-v0.1, July 2005
-
-NAME
-----
-git-svnimport - Import a SVN repository into git
-
-
-SYNOPSIS
---------
-[verse]
-'git-svnimport' [ -o <branch-for-HEAD> ] [ -h ] [ -v ] [ -d | -D ]
- [ -C <GIT_repository> ] [ -i ] [ -u ] [-l limit_rev]
- [ -b branch_subdir ] [ -T trunk_subdir ] [ -t tag_subdir ]
- [ -s start_chg ] [ -m ] [ -r ] [ -M regex ]
- [ -I <ignorefile_name> ] [ -A <author_file> ]
- [ -R <repack_each_revs>] [ -P <path_from_trunk> ]
- <SVN_repository_URL> [ <path> ]
-
-
-DESCRIPTION
------------
-Imports a SVN repository into git. It will either create a new
-repository, or incrementally import into an existing one.
-
-SVN access is done by the SVN::Perl module.
-
-git-svnimport assumes that SVN repositories are organized into one
-"trunk" directory where the main development happens, "branches/FOO"
-directories for branches, and "/tags/FOO" directories for tags.
-Other subdirectories are ignored.
-
-git-svnimport creates a file ".git/svn2git", which is required for
-incremental SVN imports.
-
-OPTIONS
--------
--C <target-dir>::
- The GIT repository to import to. If the directory doesn't
- exist, it will be created. Default is the current directory.
-
--s <start_rev>::
- Start importing at this SVN change number. The default is 1.
-+
-When importing incrementally, you might need to edit the .git/svn2git file.
-
--i::
- Import-only: don't perform a checkout after importing. This option
- ensures the working directory and index remain untouched and will
- not create them if they do not exist.
-
--T <trunk_subdir>::
- Name the SVN trunk. Default "trunk".
-
--t <tag_subdir>::
- Name the SVN subdirectory for tags. Default "tags".
-
--b <branch_subdir>::
- Name the SVN subdirectory for branches. Default "branches".
-
--o <branch-for-HEAD>::
- The 'trunk' branch from SVN is imported to the 'origin' branch within
- the git repository. Use this option if you want to import into a
- different branch.
-
--r::
- Prepend 'rX: ' to commit messages, where X is the imported
- subversion revision.
-
--u::
- Replace underscores in tag names with periods.
-
--I <ignorefile_name>::
- Import the svn:ignore directory property to files with this
- name in each directory. (The Subversion and GIT ignore
- syntaxes are similar enough that using the Subversion patterns
- directly with "-I .gitignore" will almost always just work.)
-
--A <author_file>::
- Read a file with lines on the form
-+
-------
- username = User's Full Name <email@addr.es>
-
-------
-+
-and use "User's Full Name <email@addr.es>" as the GIT
-author and committer for Subversion commits made by
-"username". If encountering a commit made by a user not in the
-list, abort.
-+
-For convenience, this data is saved to $GIT_DIR/svn-authors
-each time the -A option is provided, and read from that same
-file each time git-svnimport is run with an existing GIT
-repository without -A.
-
--m::
- Attempt to detect merges based on the commit message. This option
- will enable default regexes that try to capture the name source
- branch name from the commit message.
-
--M <regex>::
- Attempt to detect merges based on the commit message with a custom
- regex. It can be used with -m to also see the default regexes.
- You must escape forward slashes.
-
--l <max_rev>::
- Specify a maximum revision number to pull.
-+
-Formerly, this option controlled how many revisions to pull,
-due to SVN memory leaks. (These have been worked around.)
-
--R <repack_each_revs>::
- Specify how often git repository should be repacked.
-+
-The default value is 1000. git-svnimport will do import in chunks of 1000
-revisions, after each chunk git repository will be repacked. To disable
-this behavior specify some big value here which is mote than number of
-revisions to import.
-
--P <path_from_trunk>::
- Partial import of the SVN tree.
-+
-By default, the whole tree on the SVN trunk (/trunk) is imported.
-'-P my/proj' will import starting only from '/trunk/my/proj'.
-This option is useful when you want to import one project from a
-svn repo which hosts multiple projects under the same trunk.
-
--v::
- Verbosity: let 'svnimport' report what it is doing.
-
--d::
- Use direct HTTP requests if possible. The "<path>" argument is used
- only for retrieving the SVN logs; the path to the contents is
- included in the SVN log.
-
--D::
- Use direct HTTP requests if possible. The "<path>" argument is used
- for retrieving the logs, as well as for the contents.
-+
-There's no safe way to automatically find out which of these options to
-use, so you need to try both. Usually, the one that's wrong will die
-with a 40x error pretty quickly.
-
-<SVN_repository_URL>::
- The URL of the SVN module you want to import. For local
- repositories, use "file:///absolute/path".
-+
-If you're using the "-d" or "-D" option, this is the URL of the SVN
-repository itself; it usually ends in "/svn".
-
-<path>::
- The path to the module you want to check out.
-
--h::
- Print a short usage message and exit.
-
-OUTPUT
-------
-If '-v' is specified, the script reports what it is doing.
-
-Otherwise, success is indicated the Unix way, i.e. by simply exiting with
-a zero exit status.
-
-Author
-------
-Written by Matthias Urlichs <smurf@smurf.noris.de>, with help from
-various participants of the git-list <git@vger.kernel.org>.
-
-Based on a cvs2git script by the same author.
-
-Documentation
---------------
-Documentation by Matthias Urlichs <smurf@smurf.noris.de>.
-
-GIT
----
-Part of the gitlink:git[7] suite
others have already seen the old one. So just use "git tag -f"
again, as if you hadn't already published the old one.
-However, Git does *not* (and it should not)change tags behind
+However, Git does *not* (and it should not) change tags behind
users' backs. So if somebody already got the old tag, doing a "git
pull" on your tree shouldn't just make them overwrite the old
one.
follow such tags is a good thing.
+On Backdating Tags
+~~~~~~~~~~~~~~~~~~
+
+If you have imported some changes from another VCS and would like
+to add tags for major releases of your work, it is useful to be able
+to specify the date to embed inside of the tag object. The date in
+the tag object affects, for example, the ordering of tags in the
+gitweb interface.
+
+To set the date used in future tag objects, set the environment
+variable GIT_AUTHOR_DATE to the desired date and time. The
+date and time can be specified in a number of ways; the most common
+is "YYYY-MM-DD HH:MM".
+
+An example follows.
+
+------------
+$ GIT_AUTHOR_DATE="2006-10-02 10:31" git tag -s v1.0.1
+------------
+
+
Author
------
Written by Linus Torvalds <torvalds@osdl.org>,
providing generally smoother user experience than the "raw" Core GIT
itself and indeed many other version control systems.
+ Cogito is no longer maintained as most of its functionality
+ is now in core GIT.
+
- *pg* (http://www.spearce.org/category/projects/scm/pg/)
- *StGit* (http://www.procode.org/stgit/)
Stacked GIT provides a quilt-like patch management functionality in the
- GIT environment. You can easily manage your patches in the scope of GIT
+ GIT environment. You can easily manage your patches in the scope of GIT
until they get merged upstream.
* link:v1.5.3/git.html[documentation for release 1.5.3]
* release notes for
- link:RelNotes-1.5.3.1.txt[1.5.3.1].
+ link:RelNotes-1.5.3.5.txt[1.5.3.5],
+ link:RelNotes-1.5.3.4.txt[1.5.3.4],
+ link:RelNotes-1.5.3.3.txt[1.5.3.3],
+ link:RelNotes-1.5.3.2.txt[1.5.3.2],
+ link:RelNotes-1.5.3.1.txt[1.5.3.1],
+ link:RelNotes-1.5.3.txt[1.5.3].
* release notes for
link:RelNotes-1.5.2.5.txt[1.5.2.5],
File/Directory Structure
------------------------
-Please see link:repository-layout.html[repository layout] document.
+Please see the link:repository-layout.html[repository layout] document.
Read link:hooks.html[hooks] for more details about each hook.
Terminology
-----------
-Please see link:glossary.html[glossary] document.
+Please see the link:glossary.html[glossary] document.
Environment Variables
with `$Id$` upon check-in.
-Interaction between checkin/checkout attributes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-In the check-in codepath, the worktree file is first converted
-with `ident` (if specified), and then with `crlf` (again, if
-specified and applicable).
-
-In the check-out codepath, the blob content is first converted
-with `crlf`, and then `ident`.
-
-
`filter`
^^^^^^^^
The content filtering is done to massage the content into a
shape that is more convenient for the platform, filesystem, and
the user to use. The keyword here is "more convenient" and not
-"turning something unusable into usable". In other words, it is
-"hanging yourself because we gave you a long rope" if your
-project uses filtering mechanism in such a way that it makes
-your project unusable unless the checkout is done with a
-specific filter in effect.
+"turning something unusable into usable". In other words, the
+intent is that if someone unsets the filter driver definition,
+or does not have the appropriate filter program, the project
+should still be usable.
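+
+For example, a filter driver could be declared like this in the
+configuration (a sketch; the `indent` program is an assumption about
+what is installed on the user's system):
+
+------------
+[filter "indent"]
+	clean = indent
+	smudge = cat
+------------
+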
Interaction between checkin/checkout attributes
* Patterns read from a `.gitignore` file in the same directory
as the path, or in any parent directory, with patterns in the
- higher level files (up to the root) being overriden by those in
+ higher level files (up to the root) being overridden by those in
lower level files down to the directory containing the file.
These patterns match relative to the location of the
`.gitignore` file. A project normally includes such
The "--" is necessary to avoid confusion with the *branch* named
'gitk'
-gitk --max-count=100 --all -- Makefile::
+gitk --max-count=100 --all \-- Makefile::
Show at most 100 changes made to the file 'Makefile'. Instead of only
looking for changes in the current branch look in all branches.
[[def_cherry-picking]]cherry-picking::
In <<def_SCM,SCM>> jargon, "cherry pick" means to choose a subset of
changes out of a series of changes (typically commits) and record them
- as a new series of changes on top of different codebase. In GIT, this is
- performed by "git cherry-pick" command to extract the change introduced
+ as a new series of changes on top of a different codebase. In GIT, this is
+ performed by the "git cherry-pick" command to extract the change introduced
by an existing <<def_commit,commit>> and to record it based on the tip
of the current <<def_branch,branch>> as a new commit.
[[def_pickaxe]]pickaxe::
The term <<def_pickaxe,pickaxe>> refers to an option to the diffcore
routines that help select changes that add or delete a given text
- string. With the --pickaxe-all option, it can be used to view the full
+ string. With the `--pickaxe-all` option, it can be used to view the full
<<def_changeset,changeset>> that introduced or removed, say, a
particular line of text. See gitlink:git-diff[1].
[[def_push]]push::
Pushing a <<def_branch,branch>> means to get the branch's
<<def_head_ref,head ref>> from a remote <<def_repository,repository>>,
- find out if it is an ancestor to the branch's local
- head ref is a direct, and in that case, putting all
+ find out if it is a direct ancestor to the branch's local
+ head ref, and in that case, putting all
objects, which are <<def_reachable,reachable>> from the local
head ref, and which are missing from the remote
repository, into the remote
it as my origin branch head". And `git push
$URL refs/heads/master:refs/heads/to-upstream` means "publish my
master branch head as to-upstream branch at $URL". See also
- gitlink:git-push[1]
+ gitlink:git-push[1].
[[def_repository]]repository::
A collection of <<def_ref,refs>> together with an
This hook is meant primarily for notification, and cannot affect
the outcome of `git-commit`.
+post-checkout
+-------------
+
+This hook is invoked when a `git-checkout` is run after having updated the
+worktree. The hook is given three parameters: the ref of the previous HEAD,
+the ref of the new HEAD (which may or may not have changed), and a flag
+indicating whether the checkout was a branch checkout (changing branches,
+flag=1) or a file checkout (retrieving a file from the index, flag=0).
+This hook cannot affect the outcome of `git-checkout`.
+
+This hook can be used to perform repository validity checks, auto-display
+differences from the previous HEAD if different, or set working dir metadata
+properties.
+
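+A minimal sketch of such a hook (showing a diffstat on branch switches
+is merely one illustration of what a hook could do):
+
+------------
+#!/bin/sh
+# .git/hooks/post-checkout
+prev=$1 new=$2 branch_checkout=$3
+if test "$branch_checkout" = 1 && test "$prev" != "$new"
+then
+	git diff --stat "$prev" "$new"
+fi
+------------
+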
+post-merge
+----------
+
+This hook is invoked by `git-merge`, which happens when a `git pull`
+is done on a local repository. The hook takes a single parameter, a status
+flag specifying whether or not the merge being done was a squash merge.
+This hook cannot affect the outcome of `git-merge`.
+
+This hook can be used in conjunction with a corresponding pre-commit hook to
+save and restore any form of metadata associated with the working tree
+(e.g. permissions/ownership, ACLs, etc.). See contrib/hooks/setgitperms.perl
+for an example of how to do this.
+
[[pre-receive]]
pre-receive
-----------
not autocommit, to give the user a chance to inspect and
further tweak the merge result before committing.
+--commit::
+ Perform the merge and commit the result. This option can
+ be used to override --no-commit.
+
--squash::
Produce the working tree and index state as if a real
merge happened, but do not actually make a commit or
top of the current branch whose effect is the same as
merging another branch (or more in case of an octopus).
+--no-squash::
+ Perform the merge and commit the result. This option can
+ be used to override --squash.
+
+--no-ff::
+ Generate a merge commit even if the merge resolved as a
+ fast-forward.
+
+--ff::
+ Do not generate a merge commit if the merge resolved as
+ a fast-forward, only update the branch pointer. This is
+ the default behavior of git-merge.
+
-s <strategy>, \--strategy=<strategy>::
Use the given merge strategy; can be supplied more than
once to specify them in the order they should be tried.
The full name is occasionally useful if, for example, there ever
exists a tag and a branch with the same name.
+(Newly created refs are actually stored in the .git/refs directory,
+under the path given by their name. However, for efficiency reasons
+they may also be packed together in a single file; see
+gitlink:git-pack-refs[1]).
+
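+For example, you can pack the refs yourself and inspect the result:
+
+-------------------------------------------------
+$ git pack-refs --all
+$ cat .git/packed-refs
+-------------------------------------------------
+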
As another useful shortcut, the "HEAD" of a repository can be referred
to just using the name of that repository. So, for example, "origin"
is usually a shortcut for the HEAD branch in the repository "origin".
commit. You can find out with this:
-------------------------------------------------
-$ git log --raw --abbrev=40 --pretty=oneline -- filename |
+$ git log --raw --abbrev=40 --pretty=oneline |
grep -B 1 `git hash-object filename`
-------------------------------------------------
-------------------------
On large repositories, git depends on compression to keep the history
-information from taking up to much space on disk or in memory.
+information from taking up too much space on disk or in memory.
This compression is not performed automatically. Therefore you
should occasionally run gitlink:git-gc[1]:
Dangling objects are not a problem. At worst they may take up a little
extra disk space. They can sometimes provide a last-resort method for
recovering lost work--see <<dangling-objects>> for details. However, if
-you wish, you can remove them with gitlink:git-prune[1] or the --prune
+you wish, you can remove them with gitlink:git-prune[1] or the `--prune`
option to gitlink:git-gc[1]:
-------------------------------------------------
Reflogs
^^^^^^^
-Say you modify a branch with gitlink:git-reset[1] --hard, and then
+Say you modify a branch with `gitlink:git-reset[1] --hard`, and then
realize that the branch was the only reference you had to that point in
history.
More generally, a branch that is created from a remote branch will pull
by default from that branch. See the descriptions of the
branch.<name>.remote and branch.<name>.merge options in
-gitlink:git-config[1], and the discussion of the --track option in
+gitlink:git-config[1], and the discussion of the `--track` option in
gitlink:git-checkout[1], to learn how to control these defaults.
In addition to saving you keystrokes, "git pull" also helps you by
$ git pull /path/to/other/repository
-------------------------------------------------
-or an ssh url:
+or an ssh URL:
-------------------------------------------------
$ git clone ssh://yourhost/~you/repository
This is the preferred method.
If someone else administers the server, they should tell you what
-directory to put the repository in, and what git:// url it will appear
+directory to put the repository in, and what git:// URL it will appear
at. You can then skip to the section
"<<pushing-changes-to-a-public-repository,Pushing changes to a public
repository>>", below.
gitlink:git-update-server-info[1], and the documentation
link:hooks.html[Hooks used by git].)
-Advertise the url of proj.git. Anybody else should then be able to
-clone or pull from that url, for example with a command line like:
+Advertise the URL of proj.git. Anybody else should then be able to
+clone or pull from that URL, for example with a command line like:
-------------------------------------------------
$ git clone http://yourserver.com/~you/proj.git
a <<fast-forwards,fast forward>>. Normally this is a sign of
something wrong. However, if you are sure you know what you're
doing, you may force git-push to perform the update anyway by
-proceeding the branch name by a plus sign:
+preceding the branch name by a plus sign:
-------------------------------------------------
$ git push ssh://yourserver.com/~you/proj.git +master
$ git branch --track release origin/master
-------------------------------------------------
-These can be easily kept up to date using gitlink:git-pull[1]
+These can be easily kept up to date using gitlink:git-pull[1].
-------------------------------------------------
$ git checkout test && git pull
$ git log linux..branchname | git-shortlog
-------------------------------------------------
-To see whether it has already been merged into the test or release branches
+To see whether it has already been merged into the test or release branches,
use:
-------------------------------------------------
$ git log release..branchname
-------------------------------------------------
-(If this branch has not yet been merged you will see some log entries.
+(If this branch has not yet been merged, you will see some log entries.
If it has been merged, then there will be no output.)
Once a patch completes the great cycle (moving from test to release,
then pulled by Linus, and finally coming back into your local
-"origin/master" branch) the branch for this change is no longer needed.
+"origin/master" branch), the branch for this change is no longer needed.
You detect this when the output from:
-------------------------------------------------
git checkout $1 && git pull . origin
;;
origin)
- before=$(cat .git/refs/remotes/origin/master)
+ before=$(git rev-parse refs/remotes/origin/master)
git fetch origin
- after=$(cat .git/refs/remotes/origin/master)
+ after=$(git rev-parse refs/remotes/origin/master)
if [ $before != $after ]
then
git log $before..$after | git shortlog
exit 1
}
-if [ ! -f .git/refs/heads/"$1" ]
-then
+git show-ref -q --verify -- refs/heads/"$1" || {
echo "Can't see branch <$1>" 1>&2
usage
-fi
+}
case "$2" in
test|release)
git log test..release
fi
-for branch in `ls .git/refs/heads`
+for branch in `git show-ref --heads | sed 's|^.*/||'`
do
if [ $branch = test -o $branch = release ]
then
and git will continue applying the rest of the patches.
-At any point you may use the --abort option to abort this process and
+At any point you may use the `--abort` option to abort this process and
return mywork to the state it had before you started the rebase:
-------------------------------------------------
$ gitk origin..mywork &
-------------------------------------------------
-And browse through the list of patches in the mywork branch using gitk,
+and browse through the list of patches in the mywork branch using gitk,
applying them (possibly in a different order) to mywork-new using
-cherry-pick, and possibly modifying them as you go using commit --amend.
+cherry-pick, and possibly modifying them as you go using `commit --amend`.
The gitlink:git-gui[1] command may also help as it allows you to
individually select diff hunks for inclusion in the index (by
right-clicking on the diff hunk and choosing "Stage Hunk for Commit").
- Git can quickly determine whether two objects are identical or not,
just by comparing names.
-- Since object names are computed the same way in ever repository, the
+- Since object names are computed the same way in every repository, the
same content stored in two repositories will always be stored under
the same name.
- Git can detect errors when it reads an object, by checking that the
"blob" objects into a directory structure. In addition, a tree object
can refer to other tree objects, thus creating a directory hierarchy.
- A <<def_commit_object,"commit" object>> ties such directory hierarchies
- together into a <<def_DAG,directed acyclic graph>> of revisions - each
+ together into a <<def_DAG,directed acyclic graph>> of revisions--each
commit contains the object name of exactly one tree designating the
directory hierarchy at the time of the commit. In addition, a commit
refers to "parent" commit objects that describe the history of how we
identical object names.
(Note: in the presence of submodules, trees may also have commits as
-entries. See gitlink:git-submodule[1] and gitlink:gitmodules.txt[1]
-for partial documentation.)
+entries. See <<submodules>> for documentation.)
Note that the files all have mode 644 or 755: git actually only pays
attention to the executable bit.
See the gitlink:git-tag[1] command to learn how to create and verify tag
objects. (Note that gitlink:git-tag[1] can also be used to create
"lightweight tags", which are not tag objects at all, but just simple
-references in .git/refs/tags/).
+references whose names begin with "refs/tags/").
[[pack-files]]
How git stores objects efficiently: pack files
example, a "dangling blob" may arise because you did a "git add" of a
file, but then, before you actually committed it and made it part of the
bigger picture, you changed something else in that file and committed
-that *updated* thing - the old state that you added originally ends up
+that *updated* thing--the old state that you added originally ends up
not being pointed to by any commit or tree, so it's now a dangling blob
object.
Generally, dangling objects aren't anything to worry about. They can
even be very useful: if you screw something up, the dangling objects can
be how you recover your old tree (say, you did a rebase, and realized
-that you really didn't want to - you can look at what dangling objects
+that you really didn't want to--you can look at what dangling objects
you have, and decide to reset your head to some old dangling state).
For commits, you can just use:
------------------------------------------------
and they'll be gone. But you should only run "git prune" on a quiescent
-repository - it's kind of like doing a filesystem fsck recovery: you
+repository--it's kind of like doing a filesystem fsck recovery: you
don't want to do that while the filesystem is mounted.
-(The same is true of "git-fsck" itself, btw - but since
+(The same is true of "git-fsck" itself, btw, but since
git-fsck never actually *changes* the repository, it just reports
on what it found, git-fsck itself is never "dangerous" to run.
Running it while somebody is actually changing the repository can cause
If you blow the index away entirely, you generally haven't lost any
information as long as you have the name of the tree that it described.
+[[submodules]]
+Submodules
+==========
+
+Large projects are often composed of smaller, self-contained modules. For
+example, an embedded Linux distribution's source tree would include every
+piece of software in the distribution with some local modifications; a movie
+player might need to build against a specific, known-working version of a
+decompression library; several independent programs might all share the same
+build scripts.
+
+With centralized revision control systems this is often accomplished by
+including every module in one single repository. Developers can check out
+all modules or only the modules they need to work with. They can even modify
+files across several modules in a single commit while moving things around
+or updating APIs and translations.
+
+Git does not allow partial checkouts, so duplicating this approach in Git
+would force developers to keep a local copy of modules they are not
+interested in touching. Commits in an enormous checkout would be slower
+than you'd expect as Git would have to scan every directory for changes.
+If modules have a lot of local history, clones would take forever.
+
+On the plus side, distributed revision control systems integrate much
+better with external sources. In a centralized model, a single arbitrary
+snapshot of the external project is exported from its own revision control
+and then imported into the local revision control on a vendor branch. All
+the history is hidden. With distributed revision control you can clone the
+entire external history and much more easily follow development and re-merge
+local changes.
+
+Git's submodule support allows a repository to contain, as a subdirectory, a
+checkout of an external project. Submodules maintain their own identity;
+the submodule support just stores the submodule repository location and
+commit ID, so other developers who clone the containing project
+("superproject") can easily clone all the submodules at the same revision.
+Partial checkouts of the superproject are possible: you can tell Git to
+clone none, some or all of the submodules.
+
+The gitlink:git-submodule[1] command is available since Git 1.5.3. Users
+with Git 1.5.2 can look up the submodule commits in the repository and
+manually check them out; earlier versions won't recognize the submodules at
+all.
+
+To see how submodule support works, create four example repositories
+that can be used later as submodules:
+
+-------------------------------------------------
+$ mkdir ~/git
+$ cd ~/git
+$ for i in a b c d
+do
+ mkdir $i
+ cd $i
+ git init
+ echo "module $i" > $i.txt
+ git add $i.txt
+ git commit -m "Initial commit, submodule $i"
+ cd ..
+done
+-------------------------------------------------
+
+Now create the superproject and add all the submodules:
+
+-------------------------------------------------
+$ mkdir super
+$ cd super
+$ git init
+$ for i in a b c d
+do
+ git submodule add ~/git/$i
+done
+-------------------------------------------------
+
+NOTE: Do not use local URLs here if you plan to publish your superproject!
+
+See what files `git submodule` created:
+
+-------------------------------------------------
+$ ls -a
+. .. .git .gitmodules a b c d
+-------------------------------------------------
+
+The `git submodule add` command does a couple of things:
+
+- It clones the submodule under the current directory and by default checks out
+ the master branch.
+- It adds the submodule's clone path to the gitlink:gitmodules[5] file and
+ adds this file to the index, ready to be committed.
+- It adds the submodule's current commit ID to the index, ready to be
+ committed.
+
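+For example, the `.gitmodules` file created above will contain entries like
+these (illustrative; the URL records wherever you created the repositories):
+
+-------------------------------------------------
+$ cat .gitmodules
+[submodule "a"]
+	path = a
+	url = /home/you/git/a
+-------------------------------------------------
+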
+Commit the superproject:
+
+-------------------------------------------------
+$ git commit -m "Add submodules a, b, c and d."
+-------------------------------------------------
+
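+The superproject does not record the submodule's contents; it records a
+special "commit" entry (a gitlink) per submodule. For example (the object
+name shown is illustrative):
+
+-------------------------------------------------
+$ git ls-tree HEAD a
+160000 commit d266b9873ad50488163457f025db7cdd9683d88b	a
+-------------------------------------------------
+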
+Now clone the superproject:
+
+-------------------------------------------------
+$ cd ..
+$ git clone super cloned
+$ cd cloned
+-------------------------------------------------
+
+The submodule directories are there, but they're empty:
+
+-------------------------------------------------
+$ ls -a a
+. ..
+$ git submodule status
+-d266b9873ad50488163457f025db7cdd9683d88b a
+-e81d457da15309b4fef4249aba9b50187999670d b
+-c1536a972b9affea0f16e0680ba87332dc059146 c
+-d96249ff5d57de5de093e6baff9e0aafa5276a74 d
+-------------------------------------------------
+
+NOTE: The commit object names shown above would be different for you, but they
+should match the HEAD commit object names of your repositories. You can check
+this by running `git ls-remote ../a`.
+
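+For example (illustrative output):
+
+-------------------------------------------------
+$ git ls-remote ../a
+d266b9873ad50488163457f025db7cdd9683d88b	HEAD
+d266b9873ad50488163457f025db7cdd9683d88b	refs/heads/master
+-------------------------------------------------
+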
+Pulling down the submodules is a two-step process. First run `git submodule
+init` to add the submodule repository URLs to `.git/config`:
+
+-------------------------------------------------
+$ git submodule init
+-------------------------------------------------
+
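+After this step, the submodule URLs are recorded in the superproject's
+configuration (illustrative output; the paths depend on where you created
+the repositories):
+
+-------------------------------------------------
+$ git config -l | grep submodule
+submodule.a.url=/home/you/git/a
+submodule.b.url=/home/you/git/b
+submodule.c.url=/home/you/git/c
+submodule.d.url=/home/you/git/d
+-------------------------------------------------
+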
+Now use `git submodule update` to clone the repositories and check out the
+commits specified in the superproject:
+
+-------------------------------------------------
+$ git submodule update
+$ cd a
+$ ls -a
+. .. .git a.txt
+-------------------------------------------------
+
+One major difference between `git submodule update` and `git submodule add` is
+that `git submodule update` checks out a specific commit, rather than the tip
+of a branch. It's like checking out a tag: the head is detached, so you're not
+working on a branch.
+
+-------------------------------------------------
+$ git branch
+* (no branch)
+ master
+-------------------------------------------------
+
+If you want to make a change within a submodule while you have a detached
+head, you should first create or check out a branch, make your changes,
+publish the change within the submodule, and then update the superproject to
+reference the new commit:
+
+-------------------------------------------------
+$ git checkout master
+-------------------------------------------------
+
+or
+
+-------------------------------------------------
+$ git checkout -b fix-up
+-------------------------------------------------
+
+then
+
+-------------------------------------------------
+$ echo "adding a line again" >> a.txt
+$ git commit -a -m "Updated the submodule from within the superproject."
+$ git push
+$ cd ..
+$ git diff
+diff --git a/a b/a
+index d266b98..261dfac 160000
+--- a/a
++++ b/a
+@@ -1 +1 @@
+-Subproject commit d266b9873ad50488163457f025db7cdd9683d88b
++Subproject commit 261dfac35cb99d380eb966e102c1197139f7fa24
+$ git add a
+$ git commit -m "Updated submodule a."
+$ git push
+-------------------------------------------------
+
+You have to run `git submodule update` after `git pull` if you want to update
+submodules, too.
+
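+For example, a routine update of a superproject clone is:
+
+-------------------------------------------------
+$ git pull
+$ git submodule update
+-------------------------------------------------
+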
+Pitfalls with submodules
+------------------------
+
+Always publish the submodule change before publishing the change to the
+superproject that references it. If you forget to publish the submodule
+change, the superproject will reference a commit that others cannot fetch,
+and their `git submodule update` will fail:
+
+-------------------------------------------------
+$ cd ~/git/super/a
+$ echo i added another line to this file >> a.txt
+$ git commit -a -m "doing it wrong this time"
+$ cd ..
+$ git add a
+$ git commit -m "Updated submodule a again."
+$ git push
+$ cd ~/git/cloned
+$ git pull
+$ git submodule update
+error: pathspec '261dfac35cb99d380eb966e102c1197139f7fa24' did not match any file(s) known to git.
+Did you forget to 'git add'?
+Unable to checkout '261dfac35cb99d380eb966e102c1197139f7fa24' in submodule path 'a'
+-------------------------------------------------
+
+You should also not rewind branches in a submodule beyond commits that were
+ever recorded in any superproject.
+
+It's not safe to run `git submodule update` if you've made and committed
+changes within a submodule without checking out a branch first. They will be
+silently overwritten:
+
+-------------------------------------------------
+$ cat a.txt
+module a
+$ echo line added from private2 >> a.txt
+$ git commit -a -m "line added inside private2"
+$ cd ..
+$ git submodule update
+Submodule path 'a': checked out 'd266b9873ad50488163457f025db7cdd9683d88b'
+$ cd a
+$ cat a.txt
+module a
+-------------------------------------------------
+
+NOTE: The changes are still visible in the submodule's reflog.
+
+This is not the case if you did not commit your changes.
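+
+A sketch, assuming the submodule checkout is otherwise in sync with the
+superproject: the uncommitted edit survives `git submodule update`:
+
+-------------------------------------------------
+$ echo "uncommitted change" >> a.txt
+$ cd ..
+$ git submodule update
+$ cat a/a.txt
+module a
+uncommitted change
+-------------------------------------------------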
+
[[low-level-operations]]
Low-level git operations
========================
------------
High-level operations such as gitlink:git-commit[1],
-gitlink:git-checkout[1] and git-reset[1] work by moving data between the
-working tree, the index, and the object database. Git provides
-low-level operations which perform each of these steps individually.
+gitlink:git-checkout[1] and gitlink:git-reset[1] work by moving data
+between the working tree, the index, and the object database. Git
+provides low-level operations which perform each of these steps
+individually.
Generally, all "git" operations work on the index file. Some operations
work *purely* on the index file (showing the current state of the
$ git write-tree
-------------------------------------------------
-that doesn't come with any options - it will just write out the
+that doesn't come with any options--it will just write out the
current index into the set of tree objects that describe that state,
and it will return the name of the resulting top-level tree. You can
use that tree to re-generate the index at any time by going in the
~~~~~~~~~~~~~~~~~~~~~~~~
You read a "tree" file from the object database, and use that to
-populate (and overwrite - don't do this if your index contains any
+populate (and overwrite--don't do this if your index contains any
unsaved state that you might want to restore later!) your current
index. Normal operation is just
To commit a tree you have instantiated with "git-write-tree", you'd
create a "commit" object that refers to that tree and the history
-behind it - most notably the "parent" commits that preceded it in
+behind it--most notably the "parent" commits that preceded it in
history.
Normally a "commit" has one parent: the previous state of the tree
tree, aka the common tree, and the two "result" trees, aka the branches
you want to merge), you do a "merge" read into the index. This will
complain if it has to throw away your old index contents, so you should
-make sure that you've committed those - in fact you would normally
+make sure that you've committed those--in fact you would normally
always do a merge against your last commit (which should thus match what
you have in your current index anyway).
---------------------------------
Sadly, many merges aren't trivial. If there are files that have
-been added.moved or removed, or if both branches have modified the
+been added, moved or removed, or if both branches have modified the
same file, you will be left with an index tree that contains "merge
entries" in it. Such an index tree can 'NOT' be written out to a tree
object, and you will have to resolve any such merge clashes using
- `get_sha1()` returns 0 on _success_. This might surprise some new
Git hackers, but there is a long tradition in UNIX to return different
- negative numbers in case of different errors -- and 0 on success.
+ negative numbers in case of different errors--and 0 on success.
- the variable `sha1` in the function signature of `get_sha1()` is `unsigned
char \*`, but is actually expected to be a pointer to `unsigned
$ git branch -d new # delete branch "new"
-----------------------------------------------
-Instead of basing new branch on current HEAD (the default), use:
+Instead of basing a new branch on current HEAD (the default), use:
-----------------------------------------------
$ git branch new test # branch named "test"
Alternates, clone --reference, etc.
git unpack-objects -r for recovery
-
-submodules
- "perl" and POSIX-compliant shells are needed to use most of
the barebone Porcelainish scripts.
+ - "cpio" is used by git-merge for saving and restoring the index,
+ and by git-clone when doing a local (possibly hardlinked) clone.
+
 - Some platform specific issues are dealt with by Makefile rules,
but depending on your specific installation, you may not
have all the libraries/tools needed, or you may have
#
# Define NO_SETENV if you don't have setenv in the C library.
#
+# Define NO_MKDTEMP if you don't have mkdtemp in the C library.
+#
# Define NO_SYMLINK_HEAD if you never want .git/HEAD to be a symbolic link.
# Enable it on Windows. By default, symrefs are still used.
#
GITWEB_HOME_LINK_STR = projects
GITWEB_SITENAME =
GITWEB_PROJECTROOT = /pub/git
+GITWEB_PROJECT_MAXDEPTH = 2007
GITWEB_EXPORT_OK =
GITWEB_STRICT_EXPORT =
GITWEB_BASE_URL =
SCRIPT_SH = \
git-bisect.sh git-checkout.sh \
git-clean.sh git-clone.sh git-commit.sh \
- git-fetch.sh \
git-ls-remote.sh \
git-merge-one-file.sh git-mergetool.sh git-parse-remote.sh \
git-pull.sh git-rebase.sh git-rebase--interactive.sh \
SCRIPT_PERL = \
git-add--interactive.perl \
git-archimport.perl git-cvsimport.perl git-relink.perl \
- git-cvsserver.perl git-remote.perl \
- git-svnimport.perl git-cvsexportcommit.perl \
+ git-cvsserver.perl git-remote.perl git-cvsexportcommit.perl \
git-send-email.perl git-svn.perl
SCRIPTS = $(patsubst %.sh,%,$(SCRIPT_SH)) \
# ... and all the rest that could be moved out of bindir to gitexecdir
PROGRAMS = \
- git-convert-objects$X git-fetch-pack$X \
- git-hash-object$X git-index-pack$X git-local-fetch$X \
+ git-fetch-pack$X \
+ git-hash-object$X git-index-pack$X \
git-fast-import$X \
git-daemon$X \
git-merge-index$X git-mktag$X git-mktree$X git-patch-id$X \
git-peek-remote$X git-receive-pack$X \
git-send-pack$X git-shell$X \
- git-show-index$X git-ssh-fetch$X \
- git-ssh-upload$X git-unpack-file$X \
+ git-show-index$X \
+ git-unpack-file$X \
git-update-server-info$X \
git-upload-pack$X \
git-pack-redundant$X git-var$X \
OTHER_PROGRAMS += gitk-wish
endif
-# Backward compatibility -- to be removed after 1.0
-PROGRAMS += git-ssh-pull$X git-ssh-push$X
-
# Set paths to tools early so that they can be used for version tests.
ifndef SHELL_PATH
SHELL_PATH = /bin/sh
run-command.h strbuf.h tag.h tree.h git-compat-util.h revision.h \
tree-walk.h log-tree.h dir.h path-list.h unpack-trees.h builtin.h \
utf8.h reflog-walk.h patch-ids.h attr.h decorate.h progress.h \
- mailmap.h remote.h
+ mailmap.h remote.h transport.h diffcore.h hash.h
DIFF_OBJS = \
diff.o diff-lib.o diffcore-break.o diffcore-order.o \
LIB_OBJS = \
blob.o commit.o connect.o csum-file.o cache-tree.o base85.o \
date.o diff-delta.o entry.o exec_cmd.o ident.o \
- interpolate.o \
+ interpolate.o hash.o \
lockfile.o \
patch-ids.o \
object.o pack-check.o pack-write.o patch-delta.o path.o pkt-line.o \
write_or_die.o trace.o list-objects.o grep.o match-trees.o \
alloc.o merge-file.o path-list.o help.o unpack-trees.o $(DIFF_OBJS) \
color.o wt-status.o archive-zip.o archive-tar.o shallow.o utf8.o \
- convert.o attr.o decorate.o progress.o mailmap.o symlinks.o remote.o
+ convert.o attr.o decorate.o progress.o mailmap.o symlinks.o remote.o \
+ transport.o bundle.o walker.o
BUILTIN_OBJS = \
builtin-add.o \
builtin-diff-files.o \
builtin-diff-index.o \
builtin-diff-tree.o \
+ builtin-fetch.o \
+ builtin-fetch-pack.o \
builtin-fetch--tool.o \
builtin-fmt-merge-msg.o \
builtin-for-each-ref.o \
NEEDS_LIBICONV = YesPlease
NO_UNSETENV = YesPlease
NO_SETENV = YesPlease
+ NO_MKDTEMP = YesPlease
NO_C99_FORMAT = YesPlease
NO_STRTOUMAX = YesPlease
endif
ifeq ($(uname_R),5.9)
NO_UNSETENV = YesPlease
NO_SETENV = YesPlease
+ NO_MKDTEMP = YesPlease
NO_C99_FORMAT = YesPlease
NO_STRTOUMAX = YesPlease
endif
CC_LD_DYNPATH = -R
endif
-ifndef NO_CURL
+ifdef NO_CURL
+ BASIC_CFLAGS += -DNO_CURL
+else
ifdef CURLDIR
# Try "-Wl,-rpath=$(CURLDIR)/$(lib)" in such a case.
BASIC_CFLAGS += -I$(CURLDIR)/include
else
CURL_LIBCURL = -lcurl
endif
- PROGRAMS += git-http-fetch$X
+ BUILTIN_OBJS += builtin-http-fetch.o
+ EXTLIBS += $(CURL_LIBCURL)
+ LIB_OBJS += http.o http-walker.o
curl_check := $(shell (echo 070908; curl-config --vernum) | sort -r | sed -ne 2p)
ifeq "$(curl_check)" "070908"
ifndef NO_EXPAT
COMPAT_CFLAGS += -DNO_SETENV
COMPAT_OBJS += compat/setenv.o
endif
+ifdef NO_MKDTEMP
+ COMPAT_CFLAGS += -DNO_MKDTEMP
+ COMPAT_OBJS += compat/mkdtemp.o
+endif
ifdef NO_UNSETENV
COMPAT_CFLAGS += -DNO_UNSETENV
COMPAT_OBJS += compat/unsetenv.o
$(patsubst %.perl,%,$(SCRIPT_PERL)): perl/perl.mak
-perl/perl.mak: GIT-CFLAGS
+perl/perl.mak: GIT-CFLAGS perl/Makefile perl/Makefile.PL
$(QUIET_SUBDIR0)perl $(QUIET_SUBDIR1) PERL_PATH='$(PERL_PATH_SQ)' prefix='$(prefix_SQ)' $(@F)
$(patsubst %.perl,%,$(SCRIPT_PERL)): % : %.perl
$(QUIET_GEN)$(RM) $@ $@+ && \
- INSTLIBDIR=`$(MAKE) -C perl -s --no-print-directory instlibdir` && \
+ INSTLIBDIR=`MAKEFLAGS= $(MAKE) -C perl -s --no-print-directory instlibdir` && \
sed -e '1{' \
-e ' s|#!.*perl|#!$(PERL_PATH_SQ)|' \
-e ' h' \
-e 's|++GITWEB_HOME_LINK_STR++|$(GITWEB_HOME_LINK_STR)|g' \
-e 's|++GITWEB_SITENAME++|$(GITWEB_SITENAME)|g' \
-e 's|++GITWEB_PROJECTROOT++|$(GITWEB_PROJECTROOT)|g' \
+ -e 's|"++GITWEB_PROJECT_MAXDEPTH++"|$(GITWEB_PROJECT_MAXDEPTH)|g' \
-e 's|++GITWEB_EXPORT_OK++|$(GITWEB_EXPORT_OK)|g' \
-e 's|++GITWEB_STRICT_EXPORT++|$(GITWEB_STRICT_EXPORT)|g' \
-e 's|++GITWEB_BASE_URL++|$(GITWEB_BASE_URL)|g' \
$(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) -DGIT_USER_AGENT='"git/$(GIT_VERSION)"' $<
ifdef NO_EXPAT
-http-fetch.o: http-fetch.c http.h GIT-CFLAGS
+http-walker.o: http-walker.c http.h GIT-CFLAGS
$(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) -DNO_EXPAT $<
endif
git-%$X: %.o $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) $(LIBS)
-ssh-pull.o: ssh-fetch.c
-ssh-push.o: ssh-upload.c
-git-local-fetch$X: fetch.o
-git-ssh-fetch$X: rsh.o fetch.o
-git-ssh-upload$X: rsh.o
-git-ssh-pull$X: rsh.o fetch.o
-git-ssh-push$X: rsh.o
-
git-imap-send$X: imap-send.o $(LIB_FILE)
-http.o http-fetch.o http-push.o: http.h
-git-http-fetch$X: fetch.o http.o http-fetch.o $(GITLIBS)
- $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
- $(LIBS) $(CURL_LIBCURL) $(EXPAT_LIBEXPAT)
+http.o http-walker.o http-push.o: http.h
git-http-push$X: revision.o http.o http-push.o $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(LIBS) $(CURL_LIBCURL) $(EXPAT_LIBEXPAT)
-$(LIB_OBJS) $(BUILTIN_OBJS) fetch.o: $(LIB_H)
+$(LIB_OBJS) $(BUILTIN_OBJS): $(LIB_H)
$(patsubst git-%$X,%.o,$(PROGRAMS)): $(LIB_H) $(wildcard */*.h)
-$(DIFF_OBJS): diffcore.h
$(LIB_FILE): $(LIB_OBJS)
$(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $(LIB_OBJS)
$(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $(XDIFF_OBJS)
-perl/Makefile: perl/Git.pm perl/Makefile.PL GIT-CFLAGS
- (cd perl && $(PERL_PATH) Makefile.PL \
- PREFIX='$(prefix_SQ)')
-
doc:
$(MAKE) -C Documentation all
$(RM) tags
$(FIND) . -name '*.[hcS]' -print | xargs ctags -a
+cscope:
+ $(RM) cscope*
+ $(FIND) . -name '*.[hcS]' -print | xargs cscope -b
+
### Detect prefix changes
TRACK_CFLAGS = $(subst ','\'',$(ALL_CFLAGS)):\
$(bindir_SQ):$(gitexecdir_SQ):$(template_dir_SQ):$(prefix_SQ)
### Cleaning rules
+distclean: clean
+ $(RM) configure
+
clean:
$(RM) *.o mozilla-sha1/*.o arm/*.o ppc/*.o compat/*.o xdiff/*.o \
$(LIB_FILE) $(XDIFF_LIB)
$(RM) $(ALL_PROGRAMS) $(BUILT_INS) git$X
$(RM) $(TEST_PROGRAMS)
- $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h TAGS tags
+ $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h TAGS tags cscope*
$(RM) -r autom4te.cache
- $(RM) configure config.log config.mak.autogen config.mak.append config.status config.cache
+ $(RM) config.log config.mak.autogen config.mak.append config.status config.cache
$(RM) -r $(GIT_TARNAME) .doc-tmp-dir
$(RM) $(GIT_TARNAME).tar.gz git-core_$(GIT_VERSION)-*.tar.gz
$(RM) $(htmldocs).tar.gz $(manpages).tar.gz
$(RM) GIT-VERSION-FILE GIT-CFLAGS GIT-GUI-VARS
.PHONY: all install clean strip
-.PHONY: .FORCE-GIT-VERSION-FILE TAGS tags .FORCE-GIT-CFLAGS
+.PHONY: .FORCE-GIT-VERSION-FILE TAGS tags cscope .FORCE-GIT-CFLAGS
### Check documentation
#
git-merge-octopus | git-merge-ours | git-merge-recursive | \
git-merge-resolve | git-merge-stupid | \
git-add--interactive | git-fsck-objects | git-init-db | \
- git-repo-config | git-fetch--tool | \
- git-ssh-pull | git-ssh-push ) continue ;; \
+ git-repo-config | git-fetch--tool ) continue ;; \
esac ; \
test -f "Documentation/$$v.txt" || \
echo "no doc: $$v"; \
num_attr = 0;
cp = name + namelen;
cp = cp + strspn(cp, blank);
- while (*cp)
+ while (*cp) {
cp = parse_attr(src, lineno, cp, &num_attr, res);
+ if (!cp)
+ return NULL;
+ }
if (pass)
break;
res = xcalloc(1,
die("pathspec '%s' did not match any files",
pathspec[i]);
}
+ free(seen);
}
static void fill_directory(struct dir_struct *dir, const char **pathspec,
if (!seen[i])
die("pathspec '%s' did not match any files", pathspec[i]);
}
+ free(seen);
}
static int git_add_config(const char *var, const char *value)
static int apply_with_reject;
static int apply_verbosely;
static int no_add;
-static int show_index_info;
+static const char *fake_ancestor;
static int line_termination = '\n';
static unsigned long p_context = ULONG_MAX;
static const char apply_usage[] =
unsigned int is_rename:1;
struct fragment *fragments;
char *result;
- unsigned long resultsize;
+ size_t resultsize;
char old_sha1_prefix[41];
char new_sha1_prefix[41];
struct patch *next;
#define CHUNKSIZE (8192)
#define SLOP (16)
-static void *read_patch_file(int fd, unsigned long *sizep)
+static void read_patch_file(struct strbuf *sb, int fd)
{
- struct strbuf buf;
-
- strbuf_init(&buf, 0);
- if (strbuf_read(&buf, fd, 0) < 0)
+ if (strbuf_read(sb, fd, 0) < 0)
die("git-apply: read returned %s", strerror(errno));
- *sizep = buf.len;
/*
* Make sure that we have some slop in the buffer
* so that we can do speculative "memcmp" etc, and
* see to it that it is NUL-filled.
*/
- strbuf_grow(&buf, SLOP);
- memset(buf.buf + buf.len, 0, SLOP);
- return strbuf_detach(&buf);
+ strbuf_grow(sb, SLOP);
+ memset(sb->buf + sb->len, 0, SLOP);
}
static unsigned long linelen(const char *buffer, unsigned long size)
*/
strbuf_remove(&name, 0, cp - name.buf);
free(def);
- return name.buf;
+ return strbuf_detach(&name, NULL);
}
}
strbuf_release(&name);
if (strcmp(cp + 1, first.buf))
goto free_and_fail1;
strbuf_release(&sp);
- return first.buf;
+ return strbuf_detach(&first, NULL);
}
/* unquoted second */
if (line + llen - cp != first.len + 1 ||
memcmp(first.buf, cp, first.len))
goto free_and_fail1;
- return first.buf;
+ return strbuf_detach(&first, NULL);
free_and_fail1:
strbuf_release(&first);
isspace(name[len])) {
/* Good */
strbuf_remove(&sp, 0, np - sp.buf);
- return sp.buf;
+ return strbuf_detach(&sp, NULL);
}
free_and_fail2:
static int read_old_data(struct stat *st, const char *path, struct strbuf *buf)
{
- int fd;
-
switch (st->st_mode & S_IFMT) {
case S_IFLNK:
strbuf_grow(buf, st->st_size);
strbuf_setlen(buf, st->st_size);
return 0;
case S_IFREG:
- fd = open(path, O_RDONLY);
- if (fd < 0)
- return error("unable to open %s", path);
- if (strbuf_read(buf, fd, st->st_size) < 0) {
- close(fd);
- return -1;
- }
- close(fd);
+ if (strbuf_read_file(buf, path, st->st_size) != st->st_size)
+ return error("unable to open or read %s", path);
convert_to_git(path, buf->buf, buf->len, buf);
return 0;
default:
if (apply_fragments(&buf, patch) < 0)
return -1; /* note with --reject this succeeds. */
- patch->result = buf.buf;
- patch->resultsize = buf.len;
+ patch->result = strbuf_detach(&buf, &patch->resultsize);
if (0 < patch->is_delete && patch->resultsize)
return error("removal patch leaves file contents");
return 0;
}
-static void show_index_list(struct patch *list)
+/* Build an index that contains just the files needed for a 3-way merge */
+static void build_fake_ancestor(struct patch *list, const char *filename)
{
struct patch *patch;
+ struct index_state result = { 0 };
+ int fd;
/* Once we start supporting the reverse patch, it may be
* worth showing the new sha1 prefix, but until then...
for (patch = list; patch; patch = patch->next) {
const unsigned char *sha1_ptr;
unsigned char sha1[20];
+ struct cache_entry *ce;
const char *name;
name = patch->old_name ? patch->old_name : patch->new_name;
if (0 < patch->is_new)
- sha1_ptr = null_sha1;
+ continue;
else if (get_sha1(patch->old_sha1_prefix, sha1))
/* git diff has no index line for mode/type changes */
if (!patch->lines_added && !patch->lines_deleted) {
else
sha1_ptr = sha1;
- printf("%06o %s ",patch->old_mode, sha1_to_hex(sha1_ptr));
- if (line_termination && quote_c_style(name, NULL, NULL, 0))
- quote_c_style(name, NULL, stdout, 0);
- else
- fputs(name, stdout);
- putchar(line_termination);
+ ce = make_cache_entry(patch->old_mode, sha1_ptr, name, 0, 0);
+ if (add_index_entry(&result, ce, ADD_CACHE_OK_TO_ADD))
+ die ("Could not add %s to temporary index", name);
}
+
+ fd = open(filename, O_WRONLY | O_CREAT, 0666);
+ if (fd < 0 || write_index(&result, fd) || close(fd))
+ die ("Could not write temporary index to %s", filename);
+
+ discard_index(&result);
}
static void stat_patch_list(struct patch *patch)
static int apply_patch(int fd, const char *filename, int inaccurate_eof)
{
- unsigned long offset, size;
- char *buffer = read_patch_file(fd, &size);
+ size_t offset;
+ struct strbuf buf;
struct patch *list = NULL, **listp = &list;
int skipped_patch = 0;
+ strbuf_init(&buf, 0);
patch_input_file = filename;
- if (!buffer)
- return -1;
+ read_patch_file(&buf, fd);
offset = 0;
- while (size > 0) {
+ while (offset < buf.len) {
struct patch *patch;
int nr;
patch = xcalloc(1, sizeof(*patch));
patch->inaccurate_eof = inaccurate_eof;
- nr = parse_chunk(buffer + offset, size, patch);
+ nr = parse_chunk(buf.buf + offset, buf.len - offset, patch);
if (nr < 0)
break;
if (apply_in_reverse)
skipped_patch++;
}
offset += nr;
- size -= nr;
}
if (whitespace_error && (new_whitespace == error_on_whitespace))
if (apply && write_out_results(list, skipped_patch))
exit(1);
- if (show_index_info)
- show_index_list(list);
+ if (fake_ancestor)
+ build_fake_ancestor(list, fake_ancestor);
if (diffstat)
stat_patch_list(list);
if (summary)
summary_patch_list(list);
- free(buffer);
+ strbuf_release(&buf);
return 0;
}
apply = 1;
continue;
}
- if (!strcmp(arg, "--index-info")) {
+ if (!strcmp(arg, "--build-fake-ancestor")) {
apply = 0;
- show_index_info = 1;
+ if (++i >= argc)
+ die ("need a filename");
+ fake_ancestor = argv[i];
continue;
}
if (!strcmp(arg, "-z")) {
struct strbuf fmt;
if (src == buf->buf)
- to_free = strbuf_detach(buf);
+ to_free = strbuf_detach(buf, NULL);
strbuf_init(&fmt, 0);
for (;;) {
const char *b, *c;
buffer = read_sha1_file(sha1, type, sizep);
if (buffer && S_ISREG(mode)) {
struct strbuf buf;
+ size_t size = 0;
strbuf_init(&buf, 0);
strbuf_attach(&buf, buffer, *sizep, *sizep + 1);
convert_to_working_tree(path, buf.buf, buf.len, &buf);
convert_to_archive(path, buf.buf, buf.len, &buf, commit);
- *sizep = buf.len;
- buffer = buf.buf;
+ buffer = strbuf_detach(&buf, &size);
+ *sizep = size;
}
return buffer;
unsigned char head_sha1[20];
struct strbuf buf;
const char *ident;
- int fd;
time_t now;
int size, len;
struct cache_entry *ce;
mode = canon_mode(st.st_mode);
switch (st.st_mode & S_IFMT) {
case S_IFREG:
- fd = open(read_from, O_RDONLY);
- if (fd < 0)
- die("cannot open %s", read_from);
- if (strbuf_read(&buf, fd, 0) != xsize_t(st.st_size))
- die("cannot read %s", read_from);
+ if (strbuf_read_file(&buf, read_from, st.st_size) != st.st_size)
+ die("cannot open or read %s", read_from);
break;
case S_IFLNK:
if (readlink(read_from, buf.buf, buf.alloc) != fin_size)
if (strbuf_read(&buf, 0, 0) < 0)
die("read error %s from stdin", strerror(errno));
}
+ convert_to_git(path, buf.buf, buf.len, &buf);
origin->file.ptr = buf.buf;
origin->file.size = buf.len;
pretend_sha1_file(buf.buf, buf.len, OBJ_BLOB, origin->blob_sha1);
#include "builtin.h"
#include "cache.h"
-#include "object.h"
-#include "commit.h"
-#include "diff.h"
-#include "revision.h"
-#include "list-objects.h"
-#include "run-command.h"
+#include "bundle.h"
/*
* Basic handler for bundle files to connect repositories via sneakernet.
static const char *bundle_usage="git-bundle (create <bundle> <git-rev-list args> | verify <bundle> | list-heads <bundle> [refname]... | unbundle <bundle> [refname]... )";
-static const char bundle_signature[] = "# v2 git bundle\n";
-
-struct ref_list {
- unsigned int nr, alloc;
- struct ref_list_entry {
- unsigned char sha1[20];
- char *name;
- } *list;
-};
-
-static void add_to_ref_list(const unsigned char *sha1, const char *name,
- struct ref_list *list)
-{
- if (list->nr + 1 >= list->alloc) {
- list->alloc = alloc_nr(list->nr + 1);
- list->list = xrealloc(list->list,
- list->alloc * sizeof(list->list[0]));
- }
- memcpy(list->list[list->nr].sha1, sha1, 20);
- list->list[list->nr].name = xstrdup(name);
- list->nr++;
-}
-
-struct bundle_header {
- struct ref_list prerequisites;
- struct ref_list references;
-};
-
-/* returns an fd */
-static int read_header(const char *path, struct bundle_header *header) {
- char buffer[1024];
- int fd;
- long fpos;
- FILE *ffd = fopen(path, "rb");
-
- if (!ffd)
- return error("could not open '%s'", path);
- if (!fgets(buffer, sizeof(buffer), ffd) ||
- strcmp(buffer, bundle_signature)) {
- fclose(ffd);
- return error("'%s' does not look like a v2 bundle file", path);
- }
- while (fgets(buffer, sizeof(buffer), ffd)
- && buffer[0] != '\n') {
- int is_prereq = buffer[0] == '-';
- int offset = is_prereq ? 1 : 0;
- int len = strlen(buffer);
- unsigned char sha1[20];
- struct ref_list *list = is_prereq ? &header->prerequisites
- : &header->references;
- char delim;
-
- if (buffer[len - 1] == '\n')
- buffer[len - 1] = '\0';
- if (get_sha1_hex(buffer + offset, sha1)) {
- warning("unrecognized header: %s", buffer);
- continue;
- }
- delim = buffer[40 + offset];
- if (!isspace(delim) && (delim != '\0' || !is_prereq))
- die ("invalid header: %s", buffer);
- add_to_ref_list(sha1, isspace(delim) ?
- buffer + 41 + offset : "", list);
- }
- fpos = ftell(ffd);
- fclose(ffd);
- fd = open(path, O_RDONLY);
- if (fd < 0)
- return error("could not open '%s'", path);
- lseek(fd, fpos, SEEK_SET);
- return fd;
-}
-
-static int list_refs(struct ref_list *r, int argc, const char **argv)
-{
- int i;
-
- for (i = 0; i < r->nr; i++) {
- if (argc > 1) {
- int j;
- for (j = 1; j < argc; j++)
- if (!strcmp(r->list[i].name, argv[j]))
- break;
- if (j == argc)
- continue;
- }
- printf("%s %s\n", sha1_to_hex(r->list[i].sha1),
- r->list[i].name);
- }
- return 0;
-}
-
-#define PREREQ_MARK (1u<<16)
-
-static int verify_bundle(struct bundle_header *header, int verbose)
-{
- /*
- * Do fast check, then if any prereqs are missing then go line by line
- * to be verbose about the errors
- */
- struct ref_list *p = &header->prerequisites;
- struct rev_info revs;
- const char *argv[] = {NULL, "--all"};
- struct object_array refs;
- struct commit *commit;
- int i, ret = 0, req_nr;
- const char *message = "Repository lacks these prerequisite commits:";
-
- init_revisions(&revs, NULL);
- for (i = 0; i < p->nr; i++) {
- struct ref_list_entry *e = p->list + i;
- struct object *o = parse_object(e->sha1);
- if (o) {
- o->flags |= PREREQ_MARK;
- add_pending_object(&revs, o, e->name);
- continue;
- }
- if (++ret == 1)
- error(message);
- error("%s %s", sha1_to_hex(e->sha1), e->name);
- }
- if (revs.pending.nr != p->nr)
- return ret;
- req_nr = revs.pending.nr;
- setup_revisions(2, argv, &revs, NULL);
-
- memset(&refs, 0, sizeof(struct object_array));
- for (i = 0; i < revs.pending.nr; i++) {
- struct object_array_entry *e = revs.pending.objects + i;
- add_object_array(e->item, e->name, &refs);
- }
-
- prepare_revision_walk(&revs);
-
- i = req_nr;
- while (i && (commit = get_revision(&revs)))
- if (commit->object.flags & PREREQ_MARK)
- i--;
-
- for (i = 0; i < req_nr; i++)
- if (!(refs.objects[i].item->flags & SHOWN)) {
- if (++ret == 1)
- error(message);
- error("%s %s", sha1_to_hex(refs.objects[i].item->sha1),
- refs.objects[i].name);
- }
-
- for (i = 0; i < refs.nr; i++)
- clear_commit_marks((struct commit *)refs.objects[i].item, -1);
-
- if (verbose) {
- struct ref_list *r;
-
- r = &header->references;
- printf("The bundle contains %d ref%s\n",
- r->nr, (1 < r->nr) ? "s" : "");
- list_refs(r, 0, NULL);
- r = &header->prerequisites;
- printf("The bundle requires these %d ref%s\n",
- r->nr, (1 < r->nr) ? "s" : "");
- list_refs(r, 0, NULL);
- }
- return ret;
-}
-
-static int list_heads(struct bundle_header *header, int argc, const char **argv)
-{
- return list_refs(&header->references, argc, argv);
-}
-
-static int create_bundle(struct bundle_header *header, const char *path,
- int argc, const char **argv)
-{
- static struct lock_file lock;
- int bundle_fd = -1;
- int bundle_to_stdout;
- const char **argv_boundary = xmalloc((argc + 4) * sizeof(const char *));
- const char **argv_pack = xmalloc(5 * sizeof(const char *));
- int i, ref_count = 0;
- char buffer[1024];
- struct rev_info revs;
- struct child_process rls;
- FILE *rls_fout;
-
- bundle_to_stdout = !strcmp(path, "-");
- if (bundle_to_stdout)
- bundle_fd = 1;
- else
- bundle_fd = hold_lock_file_for_update(&lock, path, 1);
-
- /* write signature */
- write_or_die(bundle_fd, bundle_signature, strlen(bundle_signature));
-
- /* init revs to list objects for pack-objects later */
- save_commit_buffer = 0;
- init_revisions(&revs, NULL);
-
- /* write prerequisites */
- memcpy(argv_boundary + 3, argv + 1, argc * sizeof(const char *));
- argv_boundary[0] = "rev-list";
- argv_boundary[1] = "--boundary";
- argv_boundary[2] = "--pretty=oneline";
- argv_boundary[argc + 2] = NULL;
- memset(&rls, 0, sizeof(rls));
- rls.argv = argv_boundary;
- rls.out = -1;
- rls.git_cmd = 1;
- if (start_command(&rls))
- return -1;
- rls_fout = fdopen(rls.out, "r");
- while (fgets(buffer, sizeof(buffer), rls_fout)) {
- unsigned char sha1[20];
- if (buffer[0] == '-') {
- write_or_die(bundle_fd, buffer, strlen(buffer));
- if (!get_sha1_hex(buffer + 1, sha1)) {
- struct object *object = parse_object(sha1);
- object->flags |= UNINTERESTING;
- add_pending_object(&revs, object, buffer);
- }
- } else if (!get_sha1_hex(buffer, sha1)) {
- struct object *object = parse_object(sha1);
- object->flags |= SHOWN;
- }
- }
- fclose(rls_fout);
- if (finish_command(&rls))
- return error("rev-list died");
-
- /* write references */
- argc = setup_revisions(argc, argv, &revs, NULL);
- if (argc > 1)
- return error("unrecognized argument: %s'", argv[1]);
-
- for (i = 0; i < revs.pending.nr; i++) {
- struct object_array_entry *e = revs.pending.objects + i;
- unsigned char sha1[20];
- char *ref;
-
- if (e->item->flags & UNINTERESTING)
- continue;
- if (dwim_ref(e->name, strlen(e->name), sha1, &ref) != 1)
- continue;
- /*
- * Make sure the refs we wrote out is correct; --max-count and
- * other limiting options could have prevented all the tips
- * from getting output.
- *
- * Non commit objects such as tags and blobs do not have
- * this issue as they are not affected by those extra
- * constraints.
- */
- if (!(e->item->flags & SHOWN) && e->item->type == OBJ_COMMIT) {
- warning("ref '%s' is excluded by the rev-list options",
- e->name);
- free(ref);
- continue;
- }
- /*
- * If you run "git bundle create bndl v1.0..v2.0", the
- * name of the positive ref is "v2.0" but that is the
- * commit that is referenced by the tag, and not the tag
- * itself.
- */
- if (hashcmp(sha1, e->item->sha1)) {
- /*
- * Is this the positive end of a range expressed
- * in terms of a tag (e.g. v2.0 from the range
- * "v1.0..v2.0")?
- */
- struct commit *one = lookup_commit_reference(sha1);
- struct object *obj;
-
- if (e->item == &(one->object)) {
- /*
- * Need to include e->name as an
- * independent ref to the pack-objects
- * input, so that the tag is included
- * in the output; otherwise we would
- * end up triggering "empty bundle"
- * error.
- */
- obj = parse_object(sha1);
- obj->flags |= SHOWN;
- add_pending_object(&revs, obj, e->name);
- }
- free(ref);
- continue;
- }
-
- ref_count++;
- write_or_die(bundle_fd, sha1_to_hex(e->item->sha1), 40);
- write_or_die(bundle_fd, " ", 1);
- write_or_die(bundle_fd, ref, strlen(ref));
- write_or_die(bundle_fd, "\n", 1);
- free(ref);
- }
- if (!ref_count)
- die ("Refusing to create empty bundle.");
-
- /* end header */
- write_or_die(bundle_fd, "\n", 1);
-
- /* write pack */
- argv_pack[0] = "pack-objects";
- argv_pack[1] = "--all-progress";
- argv_pack[2] = "--stdout";
- argv_pack[3] = "--thin";
- argv_pack[4] = NULL;
- memset(&rls, 0, sizeof(rls));
- rls.argv = argv_pack;
- rls.in = -1;
- rls.out = bundle_fd;
- rls.git_cmd = 1;
- if (start_command(&rls))
- return error("Could not spawn pack-objects");
- for (i = 0; i < revs.pending.nr; i++) {
- struct object *object = revs.pending.objects[i].item;
- if (object->flags & UNINTERESTING)
- write(rls.in, "^", 1);
- write(rls.in, sha1_to_hex(object->sha1), 40);
- write(rls.in, "\n", 1);
- }
- if (finish_command(&rls))
- return error ("pack-objects died");
- close(bundle_fd);
- if (!bundle_to_stdout)
- commit_lock_file(&lock);
- return 0;
-}
-
-static int unbundle(struct bundle_header *header, int bundle_fd,
- int argc, const char **argv)
-{
- const char *argv_index_pack[] = {"index-pack",
- "--fix-thin", "--stdin", NULL};
- struct child_process ip;
-
- if (verify_bundle(header, 0))
- return -1;
- memset(&ip, 0, sizeof(ip));
- ip.argv = argv_index_pack;
- ip.in = bundle_fd;
- ip.no_stdout = 1;
- ip.git_cmd = 1;
- if (run_command(&ip))
- return error("index-pack died");
- return list_heads(header, argc, argv);
-}
-
int cmd_bundle(int argc, const char **argv, const char *prefix)
{
struct bundle_header header;
}
memset(&header, 0, sizeof(header));
- if (strcmp(cmd, "create") &&
- (bundle_fd = read_header(bundle_file, &header)) < 0)
+ if (strcmp(cmd, "create") && (bundle_fd =
+ read_bundle_header(bundle_file, &header)) < 0)
return 1;
if (!strcmp(cmd, "verify")) {
}
if (!strcmp(cmd, "list-heads")) {
close(bundle_fd);
- return !!list_heads(&header, argc, argv);
+ return !!list_bundle_refs(&header, argc, argv);
}
if (!strcmp(cmd, "create")) {
if (nongit)
} else if (!strcmp(cmd, "unbundle")) {
if (nongit)
die("Need a repository to unbundle.");
- return !!unbundle(&header, bundle_fd, argc, argv);
+ return !!unbundle(&header, bundle_fd) ||
+ list_bundle_refs(&header, argc, argv);
} else
usage(bundle_usage);
}
{
int nongit = 0;
char* value;
- setup_git_directory_gently(&nongit);
+ const char *file = setup_git_directory_gently(&nongit);
while (1 < argc) {
if (!strcmp(argv[1], "--int"))
type = T_INT;
else if (!strcmp(argv[1], "--bool"))
type = T_BOOL;
- else if (!strcmp(argv[1], "--list") || !strcmp(argv[1], "-l"))
- return git_config(show_all_config);
+ else if (!strcmp(argv[1], "--list") || !strcmp(argv[1], "-l")) {
+ if (argc != 2)
+ usage(git_config_set_usage);
+ if (git_config(show_all_config) < 0 && file && errno)
+ die("unable to read config file %s: %s", file,
+ strerror(errno));
+ return 0;
+ }
else if (!strcmp(argv[1], "--global")) {
char *home = getenv("HOME");
if (home) {
else if (!strcmp(argv[1], "--file") || !strcmp(argv[1], "-f")) {
if (argc < 3)
usage(git_config_set_usage);
- setenv(CONFIG_ENVIRONMENT, argv[2], 1);
+ if (!is_absolute_path(argv[2]) && file)
+ file = prefix_filename(file, strlen(file),
+ argv[2]);
+ else
+ file = argv[2];
+ setenv(CONFIG_ENVIRONMENT, file, 1);
argc--;
argv++;
}
if (strbuf_read(&buf, 0, 1024) < 0) {
die("error reading standard input: %s", strerror(errno));
}
- return strbuf_detach(&buf);
+ return strbuf_detach(&buf, NULL);
}
static void show_new(enum object_type type, unsigned char *sha1_new)
unsigned char *oldval)
{
char msg[1024];
- char *rla = getenv("GIT_REFLOG_ACTION");
+ const char *rla = getenv("GIT_REFLOG_ACTION");
if (!rla)
rla = "(reflog update)";
}
if (get_sha1(name, sha1_old)) {
- char *msg;
+ const char *msg;
just_store:
/* new ref */
if (!strncmp(name, "refs/tags/", 10))
if (get_sha1(head, sha1))
return error("Not a valid object name: %s", head);
- commit = lookup_commit_reference(sha1);
+ commit = lookup_commit_reference_gently(sha1, 1);
if (!commit)
not_for_merge = 1;
--- /dev/null
+#include "cache.h"
+#include "refs.h"
+#include "pkt-line.h"
+#include "commit.h"
+#include "tag.h"
+#include "exec_cmd.h"
+#include "pack.h"
+#include "sideband.h"
+#include "fetch-pack.h"
+
+static int transfer_unpack_limit = -1;
+static int fetch_unpack_limit = -1;
+static int unpack_limit = 100;
+static struct fetch_pack_args args = {
+ /* .uploadpack = */ "git-upload-pack",
+};
+
+static const char fetch_pack_usage[] =
+"git-fetch-pack [--all] [--quiet|-q] [--keep|-k] [--thin] [--upload-pack=<git-upload-pack>] [--depth=<n>] [--no-progress] [-v] [<host>:]<directory> [<refs>...]";
+
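+/* Object flags used while computing what we have and negotiating with the server. */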
+#define COMPLETE (1U << 0)
+#define COMMON (1U << 1)
+#define COMMON_REF (1U << 2)
+#define SEEN (1U << 3)
+#define POPPED (1U << 4)
+
+/*
+ * After sending this many "have"s, if we do not get any new ACK, we
+ * give up traversing our history.
+ */
+#define MAX_IN_VAIN 256
+
+static struct commit_list *rev_list;
+static int non_common_revs, multi_ack, use_thin_pack, use_sideband;
+
+static void rev_list_push(struct commit *commit, int mark)
+{
+ if (!(commit->object.flags & mark)) {
+ commit->object.flags |= mark;
+
+ if (!(commit->object.parsed))
+ parse_commit(commit);
+
+ insert_by_date(commit, &rev_list);
+
+ if (!(commit->object.flags & COMMON))
+ non_common_revs++;
+ }
+}
+
+static int rev_list_insert_ref(const char *path, const unsigned char *sha1, int flag, void *cb_data)
+{
+ struct object *o = deref_tag(parse_object(sha1), path, 0);
+
+ if (o && o->type == OBJ_COMMIT)
+ rev_list_push((struct commit *)o, SEEN);
+
+ return 0;
+}
+
+/*
+ * This function marks a rev and its ancestors as common.
+ * In some cases, it is desirable to mark only the ancestors (for
+ * example, when it is only the server that does not yet know that
+ * they are common).
+ */
+
+static void mark_common(struct commit *commit,
+ int ancestors_only, int dont_parse)
+{
+ if (commit != NULL && !(commit->object.flags & COMMON)) {
+ struct object *o = (struct object *)commit;
+
+ if (!ancestors_only)
+ o->flags |= COMMON;
+
+ if (!(o->flags & SEEN))
+ rev_list_push(commit, SEEN);
+ else {
+ struct commit_list *parents;
+
+ if (!ancestors_only && !(o->flags & POPPED))
+ non_common_revs--;
+ if (!o->parsed && !dont_parse)
+ parse_commit(commit);
+
+ for (parents = commit->parents;
+ parents;
+ parents = parents->next)
+ mark_common(parents->item, 0, dont_parse);
+ }
+ }
+}
+
+/*
+ Get the next rev to send, ignoring the common.
+*/
+
+static const unsigned char* get_rev(void)
+{
+ struct commit *commit = NULL;
+
+ while (commit == NULL) {
+ unsigned int mark;
+ struct commit_list* parents;
+
+ if (rev_list == NULL || non_common_revs == 0)
+ return NULL;
+
+ commit = rev_list->item;
+ if (!(commit->object.parsed))
+ parse_commit(commit);
+ commit->object.flags |= POPPED;
+ if (!(commit->object.flags & COMMON))
+ non_common_revs--;
+
+ parents = commit->parents;
+
+ if (commit->object.flags & COMMON) {
+ /* do not send "have", and ignore ancestors */
+ commit = NULL;
+ mark = COMMON | SEEN;
+ } else if (commit->object.flags & COMMON_REF)
+ /* send "have", and ignore ancestors */
+ mark = COMMON | SEEN;
+ else
+ /* send "have", also for its ancestors */
+ mark = SEEN;
+
+ while (parents) {
+ if (!(parents->item->object.flags & SEEN))
+ rev_list_push(parents->item, mark);
+ if (mark & COMMON)
+ mark_common(parents->item, 1, 0);
+ parents = parents->next;
+ }
+
+ rev_list = rev_list->next;
+ }
+
+ return commit->object.sha1;
+}
+
+static int find_common(int fd[2], unsigned char *result_sha1,
+ struct ref *refs)
+{
+ int fetching;
+ int count = 0, flushes = 0, retval;
+ const unsigned char *sha1;
+ unsigned in_vain = 0;
+ int got_continue = 0;
+
+ for_each_ref(rev_list_insert_ref, NULL);
+
+ fetching = 0;
+ for ( ; refs ; refs = refs->next) {
+ unsigned char *remote = refs->old_sha1;
+ struct object *o;
+
+ /*
+ * If that object is complete (i.e. it is an ancestor of a
+ * local ref), we tell them we have it but do not have to
+ * tell them about its ancestors, which they already know
+ * about.
+ *
+ * We use lookup_object here because we are only
+ * interested in the case we *know* the object is
+ * reachable and we have already scanned it.
+ */
+ if (((o = lookup_object(remote)) != NULL) &&
+ (o->flags & COMPLETE)) {
+ continue;
+ }
+
+ if (!fetching)
+ packet_write(fd[1], "want %s%s%s%s%s%s%s\n",
+ sha1_to_hex(remote),
+ (multi_ack ? " multi_ack" : ""),
+ (use_sideband == 2 ? " side-band-64k" : ""),
+ (use_sideband == 1 ? " side-band" : ""),
+ (use_thin_pack ? " thin-pack" : ""),
+ (args.no_progress ? " no-progress" : ""),
+ " ofs-delta");
+ else
+ packet_write(fd[1], "want %s\n", sha1_to_hex(remote));
+ fetching++;
+ }
+ if (is_repository_shallow())
+ write_shallow_commits(fd[1], 1);
+ if (args.depth > 0)
+ packet_write(fd[1], "deepen %d", args.depth);
+ packet_flush(fd[1]);
+ if (!fetching)
+ return 1;
+
+ if (args.depth > 0) {
+ char line[1024];
+ unsigned char sha1[20];
+ int len;
+
+ while ((len = packet_read_line(fd[0], line, sizeof(line)))) {
+ if (!prefixcmp(line, "shallow ")) {
+ if (get_sha1_hex(line + 8, sha1))
+ die("invalid shallow line: %s", line);
+ register_shallow(sha1);
+ continue;
+ }
+ if (!prefixcmp(line, "unshallow ")) {
+ if (get_sha1_hex(line + 10, sha1))
+ die("invalid unshallow line: %s", line);
+ if (!lookup_object(sha1))
+ die("object not found: %s", line);
+ /* make sure that it is parsed as shallow */
+ parse_object(sha1);
+ if (unregister_shallow(sha1))
+ die("no shallow found: %s", line);
+ continue;
+ }
+ die("expected shallow/unshallow, got %s", line);
+ }
+ }
+
+ flushes = 0;
+ retval = -1;
+ while ((sha1 = get_rev())) {
+ packet_write(fd[1], "have %s\n", sha1_to_hex(sha1));
+ if (args.verbose)
+ fprintf(stderr, "have %s\n", sha1_to_hex(sha1));
+ in_vain++;
+ if (!(31 & ++count)) {
+ int ack;
+
+ packet_flush(fd[1]);
+ flushes++;
+
+ /*
+ * We keep one window "ahead" of the other side, and
+ * will wait for an ACK only on the next one
+ */
+ if (count == 32)
+ continue;
+
+ do {
+ ack = get_ack(fd[0], result_sha1);
+ if (args.verbose && ack)
+ fprintf(stderr, "got ack %d %s\n", ack,
+ sha1_to_hex(result_sha1));
+ if (ack == 1) {
+ flushes = 0;
+ multi_ack = 0;
+ retval = 0;
+ goto done;
+ } else if (ack == 2) {
+ struct commit *commit =
+ lookup_commit(result_sha1);
+ mark_common(commit, 0, 1);
+ retval = 0;
+ in_vain = 0;
+ got_continue = 1;
+ }
+ } while (ack);
+ flushes--;
+ if (got_continue && MAX_IN_VAIN < in_vain) {
+ if (args.verbose)
+ fprintf(stderr, "giving up\n");
+ break; /* give up */
+ }
+ }
+ }
+done:
+ packet_write(fd[1], "done\n");
+ if (args.verbose)
+ fprintf(stderr, "done\n");
+ if (retval != 0) {
+ multi_ack = 0;
+ flushes++;
+ }
+ while (flushes || multi_ack) {
+ int ack = get_ack(fd[0], result_sha1);
+ if (ack) {
+ if (args.verbose)
+ fprintf(stderr, "got ack (%d) %s\n", ack,
+ sha1_to_hex(result_sha1));
+ if (ack == 1)
+ return 0;
+ multi_ack = 1;
+ continue;
+ }
+ flushes--;
+ }
+ return retval;
+}
+
+static struct commit_list *complete;
+
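+/* for_each_ref() callback: mark local refs (peeling tags) COMPLETE and collect their commits. */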
+static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
+{
+ struct object *o = parse_object(sha1);
+
+ while (o && o->type == OBJ_TAG) {
+ struct tag *t = (struct tag *) o;
+ if (!t->tagged)
+ break; /* broken repository */
+ o->flags |= COMPLETE;
+ o = parse_object(t->tagged->sha1);
+ }
+ if (o && o->type == OBJ_COMMIT) {
+ struct commit *commit = (struct commit *)o;
+ commit->object.flags |= COMPLETE;
+ insert_by_date(commit, &complete);
+ }
+ return 0;
+}
+
+static void mark_recent_complete_commits(unsigned long cutoff)
+{
+ while (complete && cutoff <= complete->item->date) {
+ if (args.verbose)
+ fprintf(stderr, "Marking %s as complete\n",
+ sha1_to_hex(complete->item->object.sha1));
+ pop_most_recent_commit(&complete, COMPLETE);
+ }
+}
+
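+/* Keep only the refs that match the requested names (or all of them with --all). */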
+static void filter_refs(struct ref **refs, int nr_match, char **match)
+{
+ struct ref **return_refs;
+ struct ref *newlist = NULL;
+ struct ref **newtail = &newlist;
+ struct ref *ref, *next;
+ struct ref *fastarray[32];
+
+ if (nr_match && !args.fetch_all) {
+ if (ARRAY_SIZE(fastarray) < nr_match)
+ return_refs = xcalloc(nr_match, sizeof(struct ref *));
+ else {
+ return_refs = fastarray;
+ memset(return_refs, 0, sizeof(struct ref *) * nr_match);
+ }
+ }
+ else
+ return_refs = NULL;
+
+ for (ref = *refs; ref; ref = next) {
+ next = ref->next;
+ if (!memcmp(ref->name, "refs/", 5) &&
+ check_ref_format(ref->name + 5))
+ ; /* trash */
+ else if (args.fetch_all &&
+ (!args.depth || prefixcmp(ref->name, "refs/tags/") )) {
+ *newtail = ref;
+ ref->next = NULL;
+ newtail = &ref->next;
+ continue;
+ }
+ else {
+ int order = path_match(ref->name, nr_match, match);
+ if (order) {
+ return_refs[order-1] = ref;
+ continue; /* we will link it later */
+ }
+ }
+ free(ref);
+ }
+
+ if (!args.fetch_all) {
+ int i;
+ for (i = 0; i < nr_match; i++) {
+ ref = return_refs[i];
+ if (ref) {
+ *newtail = ref;
+ ref->next = NULL;
+ newtail = &ref->next;
+ }
+ }
+ if (return_refs != fastarray)
+ free(return_refs);
+ }
+ *refs = newlist;
+}
+
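+/* Return true when every ref we are asking for is already complete locally. */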
+static int everything_local(struct ref **refs, int nr_match, char **match)
+{
+ struct ref *ref;
+ int retval;
+ unsigned long cutoff = 0;
+
+ track_object_refs = 0;
+ save_commit_buffer = 0;
+
+ for (ref = *refs; ref; ref = ref->next) {
+ struct object *o;
+
+ o = parse_object(ref->old_sha1);
+ if (!o)
+ continue;
+
+ /* We already have it -- which may mean that we were
+ * in sync with the other side at some time after
+ * that (it is OK if we guess wrong here).
+ */
+ if (o->type == OBJ_COMMIT) {
+ struct commit *commit = (struct commit *)o;
+ if (!cutoff || cutoff < commit->date)
+ cutoff = commit->date;
+ }
+ }
+
+ if (!args.depth) {
+ for_each_ref(mark_complete, NULL);
+ if (cutoff)
+ mark_recent_complete_commits(cutoff);
+ }
+
+ /*
+ * Mark all complete remote refs as common refs.
+ * Don't mark them common yet; the server has to be told so first.
+ */
+ for (ref = *refs; ref; ref = ref->next) {
+ struct object *o = deref_tag(lookup_object(ref->old_sha1),
+ NULL, 0);
+
+ if (!o || o->type != OBJ_COMMIT || !(o->flags & COMPLETE))
+ continue;
+
+ if (!(o->flags & SEEN)) {
+ rev_list_push((struct commit *)o, COMMON_REF | SEEN);
+
+ mark_common((struct commit *)o, 1, 1);
+ }
+ }
+
+ filter_refs(refs, nr_match, match);
+
+ for (retval = 1, ref = *refs; ref ; ref = ref->next) {
+ const unsigned char *remote = ref->old_sha1;
+ struct object *o;
+
+ o = lookup_object(remote);
+ if (!o || !(o->flags & COMPLETE)) {
+ retval = 0;
+ if (!args.verbose)
+ continue;
+ fprintf(stderr,
+ "want %s (%s)\n", sha1_to_hex(remote),
+ ref->name);
+ continue;
+ }
+
+		hashcpy(ref->new_sha1, remote);
+ if (!args.verbose)
+ continue;
+ fprintf(stderr,
+ "already have %s (%s)\n", sha1_to_hex(remote),
+ ref->name);
+ }
+ return retval;
+}
+
+static pid_t setup_sideband(int fd[2], int xd[2])
+{
+ pid_t side_pid;
+
+ if (!use_sideband) {
+ fd[0] = xd[0];
+ fd[1] = xd[1];
+ return 0;
+ }
+ /* xd[] is talking with upload-pack; subprocess reads from
+ * xd[0], spits out band#2 to stderr, and feeds us band#1
+ * through our fd[0].
+ */
+ if (pipe(fd) < 0)
+ die("fetch-pack: unable to set up pipe");
+ side_pid = fork();
+ if (side_pid < 0)
+ die("fetch-pack: unable to fork off sideband demultiplexer");
+ if (!side_pid) {
+ /* subprocess */
+ close(fd[0]);
+ if (xd[0] != xd[1])
+ close(xd[1]);
+ if (recv_sideband("fetch-pack", xd[0], fd[1], 2))
+ exit(1);
+ exit(0);
+ }
+ close(xd[0]);
+ close(fd[1]);
+ fd[1] = xd[1];
+ return side_pid;
+}
+
+static int get_pack(int xd[2], char **pack_lockfile)
+{
+ int status;
+ pid_t pid, side_pid;
+ int fd[2];
+ const char *argv[20];
+ char keep_arg[256];
+ char hdr_arg[256];
+ const char **av;
+ int do_keep = args.keep_pack;
+ int keep_pipe[2];
+
+ side_pid = setup_sideband(fd, xd);
+
+ av = argv;
+ *hdr_arg = 0;
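+ /*
+  * Peek at the pack header to decide whether to keep the pack
+  * or explode it into loose objects; the header bytes we have
+  * already consumed are passed to the child via --pack_header=.
+  */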
+ if (!args.keep_pack && unpack_limit) {
+ struct pack_header header;
+
+ if (read_pack_header(fd[0], &header))
+ die("protocol error: bad pack header");
+ snprintf(hdr_arg, sizeof(hdr_arg), "--pack_header=%u,%u",
+ ntohl(header.hdr_version), ntohl(header.hdr_entries));
+ if (ntohl(header.hdr_entries) < unpack_limit)
+ do_keep = 0;
+ else
+ do_keep = 1;
+ }
+
+ if (do_keep) {
+ if (pack_lockfile && pipe(keep_pipe))
+ die("fetch-pack: pipe setup failure: %s", strerror(errno));
+ *av++ = "index-pack";
+ *av++ = "--stdin";
+ if (!args.quiet && !args.no_progress)
+ *av++ = "-v";
+ if (args.use_thin_pack)
+ *av++ = "--fix-thin";
+ if (args.lock_pack || unpack_limit) {
+ int s = sprintf(keep_arg,
+ "--keep=fetch-pack %d on ", getpid());
+ if (gethostname(keep_arg + s, sizeof(keep_arg) - s))
+ strcpy(keep_arg + s, "localhost");
+ *av++ = keep_arg;
+ }
+ }
+ else {
+ *av++ = "unpack-objects";
+ if (args.quiet)
+ *av++ = "-q";
+ }
+ if (*hdr_arg)
+ *av++ = hdr_arg;
+ *av++ = NULL;
+
+ pid = fork();
+ if (pid < 0)
+ die("fetch-pack: unable to fork off %s", argv[0]);
+ if (!pid) {
+ dup2(fd[0], 0);
+ if (do_keep && pack_lockfile) {
+ dup2(keep_pipe[1], 1);
+ close(keep_pipe[0]);
+ close(keep_pipe[1]);
+ }
+ close(fd[0]);
+ close(fd[1]);
+ execv_git_cmd(argv);
+ die("%s exec failed", argv[0]);
+ }
+ close(fd[0]);
+ close(fd[1]);
+ if (do_keep && pack_lockfile) {
+ close(keep_pipe[1]);
+ *pack_lockfile = index_pack_lockfile(keep_pipe[0]);
+ close(keep_pipe[0]);
+ }
+ while (waitpid(pid, &status, 0) < 0) {
+ if (errno != EINTR)
+ die("waiting for %s: %s", argv[0], strerror(errno));
+ }
+ if (WIFEXITED(status)) {
+ int code = WEXITSTATUS(status);
+ if (code)
+ die("%s died with error code %d", argv[0], code);
+ return 0;
+ }
+ if (WIFSIGNALED(status)) {
+ int sig = WTERMSIG(status);
+ die("%s died of signal %d", argv[0], sig);
+ }
+ die("%s died of unnatural causes %d", argv[0], status);
+}
+
+static struct ref *do_fetch_pack(int fd[2],
+ int nr_match,
+ char **match,
+ char **pack_lockfile)
+{
+ struct ref *ref;
+ unsigned char sha1[20];
+
+ get_remote_heads(fd[0], &ref, 0, NULL, 0);
+ if (is_repository_shallow() && !server_supports("shallow"))
+ die("Server does not support shallow clients");
+ if (server_supports("multi_ack")) {
+ if (args.verbose)
+ fprintf(stderr, "Server supports multi_ack\n");
+ multi_ack = 1;
+ }
+ if (server_supports("side-band-64k")) {
+ if (args.verbose)
+ fprintf(stderr, "Server supports side-band-64k\n");
+ use_sideband = 2;
+ }
+ else if (server_supports("side-band")) {
+ if (args.verbose)
+ fprintf(stderr, "Server supports side-band\n");
+ use_sideband = 1;
+ }
+ if (!ref) {
+ packet_flush(fd[1]);
+ die("no matching remote head");
+ }
+ if (everything_local(&ref, nr_match, match)) {
+ packet_flush(fd[1]);
+ goto all_done;
+ }
+ if (find_common(fd, sha1, ref) < 0)
+ if (!args.keep_pack)
+ /* When cloning, it is not unusual to have
+ * no common commit.
+ */
+ fprintf(stderr, "warning: no common commits\n");
+
+ if (get_pack(fd, pack_lockfile))
+ die("git-fetch-pack: fetch failed.");
+
+ all_done:
+ return ref;
+}
+
+static int remove_duplicates(int nr_heads, char **heads)
+{
+ int src, dst;
+
+ for (src = dst = 0; src < nr_heads; src++) {
+ /* If heads[src] is different from any of
+ * heads[0..dst], push it in.
+ */
+ int i;
+ for (i = 0; i < dst; i++) {
+ if (!strcmp(heads[i], heads[src]))
+ break;
+ }
+ if (i < dst)
+ continue;
+ if (src != dst)
+ heads[dst] = heads[src];
+ dst++;
+ }
+ return dst;
+}
+
+static int fetch_pack_config(const char *var, const char *value)
+{
+ if (strcmp(var, "fetch.unpacklimit") == 0) {
+ fetch_unpack_limit = git_config_int(var, value);
+ return 0;
+ }
+
+ if (strcmp(var, "transfer.unpacklimit") == 0) {
+ transfer_unpack_limit = git_config_int(var, value);
+ return 0;
+ }
+
+ return git_default_config(var, value);
+}
+
+static struct lock_file lock;
+
+static void fetch_pack_setup(void)
+{
+ static int did_setup;
+ if (did_setup)
+ return;
+ git_config(fetch_pack_config);
+ if (0 <= transfer_unpack_limit)
+ unpack_limit = transfer_unpack_limit;
+ else if (0 <= fetch_unpack_limit)
+ unpack_limit = fetch_unpack_limit;
+ did_setup = 1;
+}
+
+int cmd_fetch_pack(int argc, const char **argv, const char *prefix)
+{
+ int i, ret, nr_heads;
+ struct ref *ref;
+ char *dest = NULL, **heads;
+
+ nr_heads = 0;
+ heads = NULL;
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+
+ if (*arg == '-') {
+ if (!prefixcmp(arg, "--upload-pack=")) {
+ args.uploadpack = arg + 14;
+ continue;
+ }
+ if (!prefixcmp(arg, "--exec=")) {
+ args.uploadpack = arg + 7;
+ continue;
+ }
+ if (!strcmp("--quiet", arg) || !strcmp("-q", arg)) {
+ args.quiet = 1;
+ continue;
+ }
+ if (!strcmp("--keep", arg) || !strcmp("-k", arg)) {
+ args.lock_pack = args.keep_pack;
+ args.keep_pack = 1;
+ continue;
+ }
+ if (!strcmp("--thin", arg)) {
+ args.use_thin_pack = 1;
+ continue;
+ }
+ if (!strcmp("--all", arg)) {
+ args.fetch_all = 1;
+ continue;
+ }
+ if (!strcmp("-v", arg)) {
+ args.verbose = 1;
+ continue;
+ }
+ if (!prefixcmp(arg, "--depth=")) {
+ args.depth = strtol(arg + 8, NULL, 0);
+ continue;
+ }
+ if (!strcmp("--no-progress", arg)) {
+ args.no_progress = 1;
+ continue;
+ }
+ usage(fetch_pack_usage);
+ }
+ dest = (char *)arg;
+ heads = (char **)(argv + i + 1);
+ nr_heads = argc - i - 1;
+ break;
+ }
+ if (!dest)
+ usage(fetch_pack_usage);
+
+ ref = fetch_pack(&args, dest, nr_heads, heads, NULL);
+ ret = !ref;
+
+ while (ref) {
+ printf("%s %s\n",
+ sha1_to_hex(ref->old_sha1), ref->name);
+ ref = ref->next;
+ }
+
+ return ret;
+}
+
+struct ref *fetch_pack(struct fetch_pack_args *my_args,
+ const char *dest,
+ int nr_heads,
+ char **heads,
+ char **pack_lockfile)
+{
+ int i, ret;
+ int fd[2];
+ pid_t pid;
+ struct ref *ref;
+ struct stat st;
+
+ fetch_pack_setup();
+ memcpy(&args, my_args, sizeof(args));
+ if (args.depth > 0) {
+ if (stat(git_path("shallow"), &st))
+ st.st_mtime = 0;
+ }
+
+ pid = git_connect(fd, (char *)dest, args.uploadpack,
+ args.verbose ? CONNECT_VERBOSE : 0);
+ if (pid < 0)
+ return NULL;
+ if (heads && nr_heads)
+ nr_heads = remove_duplicates(nr_heads, heads);
+ ref = do_fetch_pack(fd, nr_heads, heads, pack_lockfile);
+ close(fd[0]);
+ close(fd[1]);
+ ret = finish_connect(pid);
+
+ if (!ret && nr_heads) {
+ /* If the heads to pull were given, we should have
+ * consumed all of them by matching the remote.
+ * Otherwise, 'git-fetch remote no-such-ref' would
+ * silently succeed without issuing an error.
+ */
+ for (i = 0; i < nr_heads; i++)
+ if (heads[i] && heads[i][0]) {
+ error("no such remote ref %s", heads[i]);
+ ret = 1;
+ }
+ }
+
+ if (!ret && args.depth > 0) {
+ struct cache_time mtime;
+ char *shallow = git_path("shallow");
+ int fd;
+
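+ /*
+  * Compare the pre-fetch mtime of $GIT_DIR/shallow with its
+  * current one, to detect concurrent modification during the
+  * transfer.
+  */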
+ mtime.sec = st.st_mtime;
+#ifdef USE_NSEC
+ mtime.nsec = st.st_mtim.tv_nsec;
+#endif
+ if (stat(shallow, &st)) {
+ if (mtime.sec)
+ die("shallow file was removed during fetch");
+ } else if (st.st_mtime != mtime.sec
+#ifdef USE_NSEC
+ || st.st_mtim.tv_nsec != mtime.nsec
+#endif
+ )
+ die("shallow file was changed during fetch");
+
+ fd = hold_lock_file_for_update(&lock, shallow, 1);
+ if (!write_shallow_commits(fd, 0)) {
+ unlink(shallow);
+ rollback_lock_file(&lock);
+ } else {
+ close(fd);
+ commit_lock_file(&lock);
+ }
+ }
+
+ if (ret)
+ ref = NULL;
+
+ return ref;
+}
--- /dev/null
+/*
+ * "git fetch"
+ */
+#include "cache.h"
+#include "refs.h"
+#include "commit.h"
+#include "builtin.h"
+#include "path-list.h"
+#include "remote.h"
+#include "transport.h"
+
+static const char fetch_usage[] = "git-fetch [-a | --append] [--upload-pack <upload-pack>] [-f | --force] [--no-tags] [-t | --tags] [-k | --keep] [-u | --update-head-ok] [--depth <depth>] [-v | --verbose] [<repository> <refspec>...]";
+
+static int append, force, tags, no_tags, update_head_ok, verbose, quiet;
+static char *default_rla = NULL;
+static struct transport *transport;
+
+static void unlock_pack(void)
+{
+ if (transport)
+ transport_unlock_pack(transport);
+}
+
+static void unlock_pack_on_signal(int signo)
+{
+ unlock_pack();
+ signal(SIGINT, SIG_DFL);
+ raise(signo);
+}
+
+static void add_merge_config(struct ref **head,
+ struct ref *remote_refs,
+ struct branch *branch,
+ struct ref ***tail)
+{
+ int i;
+
+ for (i = 0; i < branch->merge_nr; i++) {
+ struct ref *rm, **old_tail = *tail;
+ struct refspec refspec;
+
+ for (rm = *head; rm; rm = rm->next) {
+ if (branch_merge_matches(branch, i, rm->name)) {
+ rm->merge = 1;
+ break;
+ }
+ }
+ if (rm)
+ continue;
+
+ /*
+ * Not fetched to a tracking branch? We need to fetch
+ * it anyway to allow this branch's "branch.$name.merge"
+ * to be honored by git-pull, but we do not have to
+ * fail if branch.$name.merge is misconfigured to point
+ * at a nonexisting branch. If we were indeed called by
+ * git-pull, it will notice the misconfiguration because
+ * there is no entry in the resulting FETCH_HEAD marked
+ * for merging.
+ */
+ refspec.src = branch->merge[i]->src;
+ refspec.dst = NULL;
+ refspec.pattern = 0;
+ refspec.force = 0;
+ get_fetch_map(remote_refs, &refspec, tail, 1);
+ for (rm = *old_tail; rm; rm = rm->next)
+ rm->merge = 1;
+ }
+}
+
+static struct ref *get_ref_map(struct transport *transport,
+ struct refspec *refs, int ref_count, int tags,
+ int *autotags)
+{
+ int i;
+ struct ref *rm;
+ struct ref *ref_map = NULL;
+ struct ref **tail = &ref_map;
+
+ struct ref *remote_refs = transport_get_remote_refs(transport);
+
+ if (ref_count || tags) {
+ for (i = 0; i < ref_count; i++) {
+ get_fetch_map(remote_refs, &refs[i], &tail, 0);
+ if (refs[i].dst && refs[i].dst[0])
+ *autotags = 1;
+ }
+ /* Merge everything on the command line, but not --tags */
+ for (rm = ref_map; rm; rm = rm->next)
+ rm->merge = 1;
+ if (tags) {
+ struct refspec refspec;
+ refspec.src = "refs/tags/";
+ refspec.dst = "refs/tags/";
+ refspec.pattern = 1;
+ refspec.force = 0;
+ get_fetch_map(remote_refs, &refspec, &tail, 0);
+ }
+ } else {
+ /* Use the defaults */
+ struct remote *remote = transport->remote;
+ struct branch *branch = branch_get(NULL);
+ int has_merge = branch_has_merge_config(branch);
+ if (remote && (remote->fetch_refspec_nr || has_merge)) {
+ for (i = 0; i < remote->fetch_refspec_nr; i++) {
+ get_fetch_map(remote_refs, &remote->fetch[i], &tail, 0);
+ if (remote->fetch[i].dst &&
+ remote->fetch[i].dst[0])
+ *autotags = 1;
+ if (!i && !has_merge && ref_map &&
+ !remote->fetch[0].pattern)
+ ref_map->merge = 1;
+ }
+ /*
+ * if the remote we're fetching from is the same
+ * as given in branch.<name>.remote, we add the
+ * ref given in branch.<name>.merge, too.
+ */
+ if (has_merge &&
+ !strcmp(branch->remote_name, remote->name))
+ add_merge_config(&ref_map, remote_refs, branch, &tail);
+ } else {
+ ref_map = get_remote_ref(remote_refs, "HEAD");
+ if (!ref_map)
+ die("Couldn't find remote ref HEAD");
+ ref_map->merge = 1;
+ }
+ }
+ ref_remove_duplicates(ref_map);
+
+ return ref_map;
+}
+
+static void show_new(enum object_type type, unsigned char *sha1_new)
+{
+ fprintf(stderr, " %s: %s\n", typename(type),
+ find_unique_abbrev(sha1_new, DEFAULT_ABBREV));
+}
+
+static int s_update_ref(const char *action,
+ struct ref *ref,
+ int check_old)
+{
+ char msg[1024];
+ char *rla = getenv("GIT_REFLOG_ACTION");
+ static struct ref_lock *lock;
+
+ if (!rla)
+ rla = default_rla;
+ snprintf(msg, sizeof(msg), "%s: %s", rla, action);
+ lock = lock_any_ref_for_update(ref->name,
+ check_old ? ref->old_sha1 : NULL, 0);
+ if (!lock)
+ return 1;
+ if (write_ref_sha1(lock, ref->new_sha1, msg) < 0)
+ return 1;
+ return 0;
+}
+
+static int update_local_ref(struct ref *ref,
+ const char *note,
+ int verbose)
+{
+ char oldh[41], newh[41];
+ struct commit *current = NULL, *updated;
+ enum object_type type;
+ struct branch *current_branch = branch_get(NULL);
+
+ type = sha1_object_info(ref->new_sha1, NULL);
+ if (type < 0)
+ die("object %s not found", sha1_to_hex(ref->new_sha1));
+
+ if (!*ref->name) {
+ /* Not storing */
+ if (verbose) {
+ fprintf(stderr, "* fetched %s\n", note);
+ show_new(type, ref->new_sha1);
+ }
+ return 0;
+ }
+
+ if (!hashcmp(ref->old_sha1, ref->new_sha1)) {
+ if (verbose) {
+ fprintf(stderr, "* %s: same as %s\n",
+ ref->name, note);
+ show_new(type, ref->new_sha1);
+ }
+ return 0;
+ }
+
+ if (current_branch &&
+ !strcmp(ref->name, current_branch->name) &&
+ !(update_head_ok || is_bare_repository()) &&
+ !is_null_sha1(ref->old_sha1)) {
+ /*
+ * If this is the head, and it's not okay to update
+ * the head, and the old value of the head isn't empty...
+ */
+ fprintf(stderr,
+ " * %s: Cannot fetch into the current branch.\n",
+ ref->name);
+ return 1;
+ }
+
+ if (!is_null_sha1(ref->old_sha1) &&
+ !prefixcmp(ref->name, "refs/tags/")) {
+ fprintf(stderr, "* %s: updating with %s\n",
+ ref->name, note);
+ show_new(type, ref->new_sha1);
+ return s_update_ref("updating tag", ref, 0);
+ }
+
+ current = lookup_commit_reference_gently(ref->old_sha1, 1);
+ updated = lookup_commit_reference_gently(ref->new_sha1, 1);
+ if (!current || !updated) {
+ char *msg;
+ if (!strncmp(ref->name, "refs/tags/", 10))
+ msg = "storing tag";
+ else
+ msg = "storing head";
+ fprintf(stderr, "* %s: storing %s\n",
+ ref->name, note);
+ show_new(type, ref->new_sha1);
+ return s_update_ref(msg, ref, 0);
+ }
+
+ strcpy(oldh, find_unique_abbrev(current->object.sha1, DEFAULT_ABBREV));
+ strcpy(newh, find_unique_abbrev(ref->new_sha1, DEFAULT_ABBREV));
+
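+ /* the old commit is an ancestor of the new one: a fast forward */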
+ if (in_merge_bases(current, &updated, 1)) {
+ fprintf(stderr, "* %s: fast forward to %s\n",
+ ref->name, note);
+ fprintf(stderr, " old..new: %s..%s\n", oldh, newh);
+ return s_update_ref("fast forward", ref, 1);
+ }
+ if (!force && !ref->force) {
+ fprintf(stderr,
+ "* %s: not updating to non-fast forward %s\n",
+ ref->name, note);
+ fprintf(stderr,
+ " old...new: %s...%s\n", oldh, newh);
+ return 1;
+ }
+ fprintf(stderr,
+ "* %s: forcing update to non-fast forward %s\n",
+ ref->name, note);
+ fprintf(stderr, " old...new: %s...%s\n", oldh, newh);
+ return s_update_ref("forced-update", ref, 1);
+}
+
+static void store_updated_refs(const char *url, struct ref *ref_map)
+{
+ FILE *fp;
+ struct commit *commit;
+ int url_len, i, note_len;
+ char note[1024];
+ const char *what, *kind;
+ struct ref *rm;
+
+ fp = fopen(git_path("FETCH_HEAD"), "a");
+ for (rm = ref_map; rm; rm = rm->next) {
+ struct ref *ref = NULL;
+
+ if (rm->peer_ref) {
+ ref = xcalloc(1, sizeof(*ref) + strlen(rm->peer_ref->name) + 1);
+ strcpy(ref->name, rm->peer_ref->name);
+ hashcpy(ref->old_sha1, rm->peer_ref->old_sha1);
+ hashcpy(ref->new_sha1, rm->old_sha1);
+ ref->force = rm->peer_ref->force;
+ }
+
+ commit = lookup_commit_reference_gently(rm->old_sha1, 1);
+ if (!commit)
+ rm->merge = 0;
+
+ if (!strcmp(rm->name, "HEAD")) {
+ kind = "";
+ what = "";
+ }
+ else if (!prefixcmp(rm->name, "refs/heads/")) {
+ kind = "branch";
+ what = rm->name + 11;
+ }
+ else if (!prefixcmp(rm->name, "refs/tags/")) {
+ kind = "tag";
+ what = rm->name + 10;
+ }
+ else if (!prefixcmp(rm->name, "refs/remotes/")) {
+ kind = "remote branch";
+ what = rm->name + 13;
+ }
+ else {
+ kind = "";
+ what = rm->name;
+ }
+
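+ /*
+  * Trim trailing slashes and a ".git" suffix off the URL for
+  * the note recorded in FETCH_HEAD.
+  */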
+ url_len = strlen(url);
+ for (i = url_len - 1; 0 <= i && url[i] == '/'; i--)
+ ;
+ url_len = i + 1;
+ if (4 < i && !strncmp(".git", url + i - 3, 4))
+ url_len = i - 3;
+
+ note_len = 0;
+ if (*what) {
+ if (*kind)
+ note_len += sprintf(note + note_len, "%s ",
+ kind);
+ note_len += sprintf(note + note_len, "'%s' of ", what);
+ }
+ note_len += sprintf(note + note_len, "%.*s", url_len, url);
+ fprintf(fp, "%s\t%s\t%s\n",
+ sha1_to_hex(commit ? commit->object.sha1 :
+ rm->old_sha1),
+ rm->merge ? "" : "not-for-merge",
+ note);
+
+ if (ref)
+ update_local_ref(ref, note, verbose);
+ }
+ fclose(fp);
+}
+
+static int fetch_refs(struct transport *transport, struct ref *ref_map)
+{
+ int ret = transport_fetch_refs(transport, ref_map);
+ if (!ret)
+ store_updated_refs(transport->url, ref_map);
+ transport_unlock_pack(transport);
+ return ret;
+}
+
+static int add_existing(const char *refname, const unsigned char *sha1,
+ int flag, void *cbdata)
+{
+ struct path_list *list = (struct path_list *)cbdata;
+ path_list_insert(refname, list);
+ return 0;
+}
+
+static struct ref *find_non_local_tags(struct transport *transport,
+ struct ref *fetch_map)
+{
+ static struct path_list existing_refs = { NULL, 0, 0, 0 };
+ struct path_list new_refs = { NULL, 0, 0, 1 };
+ char *ref_name;
+ int ref_name_len;
+ unsigned char *ref_sha1;
+ struct ref *tag_ref;
+ struct ref *rm = NULL;
+ struct ref *ref_map = NULL;
+ struct ref **tail = &ref_map;
+ struct ref *ref;
+
+ for_each_ref(add_existing, &existing_refs);
+ for (ref = transport_get_remote_refs(transport); ref; ref = ref->next) {
+ if (prefixcmp(ref->name, "refs/tags"))
+ continue;
+
+ ref_name = xstrdup(ref->name);
+ ref_name_len = strlen(ref_name);
+ ref_sha1 = ref->old_sha1;
+
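+ /*
+  * An entry ending in "^{}" is the peeled (dereferenced) tag;
+  * look up the tag ref itself so we record the tag object,
+  * not the commit it points at.
+  */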
+ if (!strcmp(ref_name + ref_name_len - 3, "^{}")) {
+ ref_name[ref_name_len - 3] = 0;
+ tag_ref = transport_get_remote_refs(transport);
+ while (tag_ref) {
+ if (!strcmp(tag_ref->name, ref_name)) {
+ ref_sha1 = tag_ref->old_sha1;
+ break;
+ }
+ tag_ref = tag_ref->next;
+ }
+ }
+
+ if (!path_list_has_path(&existing_refs, ref_name) &&
+ !path_list_has_path(&new_refs, ref_name) &&
+ lookup_object(ref->old_sha1)) {
+ fprintf(stderr, "Auto-following %s\n",
+ ref_name);
+
+ path_list_insert(ref_name, &new_refs);
+
+ rm = alloc_ref(strlen(ref_name) + 1);
+ strcpy(rm->name, ref_name);
+ rm->peer_ref = alloc_ref(strlen(ref_name) + 1);
+ strcpy(rm->peer_ref->name, ref_name);
+ hashcpy(rm->old_sha1, ref_sha1);
+
+ *tail = rm;
+ tail = &rm->next;
+ }
+ free(ref_name);
+ }
+
+ return ref_map;
+}
+
+static int do_fetch(struct transport *transport,
+ struct refspec *refs, int ref_count)
+{
+ struct ref *ref_map, *fetch_map;
+ struct ref *rm;
+ int autotags = (transport->remote->fetch_tags == 1);
+ if (transport->remote->fetch_tags == 2 && !no_tags)
+ tags = 1;
+ if (transport->remote->fetch_tags == -1)
+ no_tags = 1;
+
+ if (!transport->get_refs_list || !transport->fetch)
+ die("Don't know how to fetch from %s", transport->url);
+
+ /* if not appending, truncate FETCH_HEAD */
+ if (!append)
+ fclose(fopen(git_path("FETCH_HEAD"), "w"));
+
+ ref_map = get_ref_map(transport, refs, ref_count, tags, &autotags);
+
+ for (rm = ref_map; rm; rm = rm->next) {
+ if (rm->peer_ref)
+ read_ref(rm->peer_ref->name, rm->peer_ref->old_sha1);
+ }
+
+ if (fetch_refs(transport, ref_map)) {
+ free_refs(ref_map);
+ return 1;
+ }
+
+ fetch_map = ref_map;
+
+ /* if neither --no-tags nor --tags was specified, do automated tag
+ * following ... */
+ if (!(tags || no_tags) && autotags) {
+ ref_map = find_non_local_tags(transport, fetch_map);
+ if (ref_map) {
+ transport_set_option(transport, TRANS_OPT_DEPTH, "0");
+ fetch_refs(transport, ref_map);
+ }
+ free_refs(ref_map);
+ }
+
+ free_refs(fetch_map);
+
+ return 0;
+}
+
+static void set_option(const char *name, const char *value)
+{
+ int r = transport_set_option(transport, name, value);
+ if (r < 0)
+ die("Option \"%s\" value \"%s\" is not valid for %s\n",
+ name, value, transport->url);
+ if (r > 0)
+ warning("Option \"%s\" is ignored for %s\n",
+ name, transport->url);
+}
+
+int cmd_fetch(int argc, const char **argv, const char *prefix)
+{
+ struct remote *remote;
+ int i, j, rla_offset;
+ static const char **refs = NULL;
+ int ref_nr = 0;
+ int cmd_len = 0;
+ const char *depth = NULL, *upload_pack = NULL;
+ int keep = 0;
+
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+ cmd_len += strlen(arg);
+
+ if (arg[0] != '-')
+ break;
+ if (!strcmp(arg, "--append") || !strcmp(arg, "-a")) {
+ append = 1;
+ continue;
+ }
+ if (!prefixcmp(arg, "--upload-pack=")) {
+ upload_pack = arg + 14;
+ continue;
+ }
+ if (!strcmp(arg, "--upload-pack")) {
+ i++;
+ if (i == argc)
+ usage(fetch_usage);
+ upload_pack = argv[i];
+ continue;
+ }
+ if (!strcmp(arg, "--force") || !strcmp(arg, "-f")) {
+ force = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--no-tags")) {
+ no_tags = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--tags") || !strcmp(arg, "-t")) {
+ tags = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--keep") || !strcmp(arg, "-k")) {
+ keep = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--update-head-ok") || !strcmp(arg, "-u")) {
+ update_head_ok = 1;
+ continue;
+ }
+ if (!prefixcmp(arg, "--depth=")) {
+ depth = arg + 8;
+ continue;
+ }
+ if (!strcmp(arg, "--depth")) {
+ i++;
+ if (i == argc)
+ usage(fetch_usage);
+ depth = argv[i];
+ continue;
+ }
+ if (!strcmp(arg, "--quiet")) {
+ quiet = 1;
+ continue;
+ }
+ if (!strcmp(arg, "--verbose") || !strcmp(arg, "-v")) {
+ verbose++;
+ continue;
+ }
+ usage(fetch_usage);
+ }
+
+ for (j = i; j < argc; j++)
+ cmd_len += strlen(argv[j]);
+
+ default_rla = xmalloc(cmd_len + 5 + argc + 1);
+ sprintf(default_rla, "fetch");
+ rla_offset = strlen(default_rla);
+ for (j = 1; j < argc; j++) {
+ sprintf(default_rla + rla_offset, " %s", argv[j]);
+ rla_offset += strlen(argv[j]) + 1;
+ }
+
+ if (i == argc)
+ remote = remote_get(NULL);
+ else
+ remote = remote_get(argv[i++]);
+
+ transport = transport_get(remote, remote->url[0]);
+ if (verbose >= 2)
+ transport->verbose = 1;
+ if (quiet)
+ transport->verbose = -1;
+ if (upload_pack)
+ set_option(TRANS_OPT_UPLOADPACK, upload_pack);
+ if (keep)
+ set_option(TRANS_OPT_KEEP, "yes");
+ if (depth)
+ set_option(TRANS_OPT_DEPTH, depth);
+
+ if (!transport->url)
+ die("Where do you want to fetch from today?");
+
+ if (i < argc) {
+ int j = 0;
+ refs = xcalloc(argc - i + 1, sizeof(const char *));
+ while (i < argc) {
+ if (!strcmp(argv[i], "tag")) {
+ char *ref;
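+ /* "tag <name>" is sugar for "refs/tags/<name>:refs/tags/<name>" */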
+ i++;
+ ref = xmalloc(strlen(argv[i]) * 2 + 22);
+ strcpy(ref, "refs/tags/");
+ strcat(ref, argv[i]);
+ strcat(ref, ":refs/tags/");
+ strcat(ref, argv[i]);
+ refs[j++] = ref;
+ } else
+ refs[j++] = argv[i];
+ i++;
+ }
+ refs[j] = NULL;
+ ref_nr = j;
+ }
+
+ signal(SIGINT, unlock_pack_on_signal);
+ atexit(unlock_pack);
+ return do_fetch(transport, parse_ref_spec(ref_nr, refs), ref_nr);
+}
{ "objectsize", FIELD_ULONG },
{ "objectname" },
{ "tree" },
- { "parent" }, /* NEEDSWORK: how to address 2nd and later parents? */
+ { "parent" },
{ "numparent", FIELD_ULONG },
{ "object" },
{ "type" },
/* Is the atom a valid one? */
for (i = 0; i < ARRAY_SIZE(valid_atom); i++) {
int len = strlen(valid_atom[i].name);
- if (len == ep - sp && !memcmp(valid_atom[i].name, sp, len))
+ /*
+ * If the atom name has a colon, strip it and everything after
+ * it off - it specifies the format for this entry, and
+ * shouldn't be used for checking against the valid_atom
+ * table.
+ */
+ const char *formatp = strchr(sp, ':');
+ if (!formatp || ep < formatp)
+ formatp = ep;
+ if (len == formatp - sp && !memcmp(valid_atom[i].name, sp, len))
break;
}
}
if (!strcmp(name, "numparent")) {
char *s = xmalloc(40);
+ v->ul = num_parents(commit);
sprintf(s, "%lu", v->ul);
v->s = s;
- v->ul = num_parents(commit);
}
else if (!strcmp(name, "parent")) {
int num = num_parents(commit);
int i;
struct commit_list *parents;
- char *s = xmalloc(42 * num);
+ char *s = xmalloc(41 * num + 1);
v->s = s;
for (i = 0, parents = commit->parents;
parents;
- parents = parents->next, i = i + 42) {
+ parents = parents->next, i = i + 41) {
struct commit *parent = parents->item;
strcpy(s+i, sha1_to_hex(parent->object.sha1));
if (parents->next)
s[i+40] = ' ';
}
+ if (!i)
+ *s = '\0';
}
}
}
return xmemdupz(email, eoemail + 1 - email);
}
-static void grab_date(const char *buf, struct atom_value *v)
+static void grab_date(const char *buf, struct atom_value *v, const char *atomname)
{
const char *eoemail = strstr(buf, "> ");
char *zone;
unsigned long timestamp;
long tz;
+ enum date_mode date_mode = DATE_NORMAL;
+ const char *formatp;
+
+ /*
+ * We got here because atomname ends in "date" or "date<something>";
+ * it's not possible that <something> is not ":<format>" because
+ * parse_atom() wouldn't have allowed it, so we can assume that no
+ * ":" means no format is specified, and use the default.
+ */
+ formatp = strchr(atomname, ':');
+ if (formatp != NULL) {
+ formatp++;
+ date_mode = parse_date_format(formatp);
+ }
if (!eoemail)
goto bad;
tz = strtol(zone, NULL, 10);
if ((tz == LONG_MIN || tz == LONG_MAX) && errno == ERANGE)
goto bad;
- v->s = xstrdup(show_date(timestamp, tz, 0));
+ v->s = xstrdup(show_date(timestamp, tz, date_mode));
v->ul = timestamp;
return;
bad:
if (name[wholen] != 0 &&
strcmp(name + wholen, "name") &&
strcmp(name + wholen, "email") &&
- strcmp(name + wholen, "date"))
+ prefixcmp(name + wholen, "date"))
continue;
if (!wholine)
wholine = find_wholine(who, wholen, buf, sz);
v->s = copy_name(wholine);
else if (!strcmp(name + wholen, "email"))
v->s = copy_email(wholine);
- else if (!strcmp(name + wholen, "date"))
- grab_date(wholine, v);
+ else if (!prefixcmp(name + wholen, "date"))
+ grab_date(wholine, v, name);
}
/* For a tag or a commit object, if "creator" or "creatordate" is
if (deref)
name++;
- if (!strcmp(name, "creatordate"))
- grab_date(wholine, v);
+ if (!prefixcmp(name, "creatordate"))
+ grab_date(wholine, v, name);
else if (!strcmp(name, "creator"))
v->s = copy_line(wholine);
}
static int pack_refs = 1;
static int aggressive_window = -1;
+static int gc_auto_threshold = 6700;
+static int gc_auto_pack_limit = 20;
#define MAX_ADD 10
static const char *argv_pack_refs[] = {"pack-refs", "--all", "--prune", NULL};
static const char *argv_reflog[] = {"reflog", "expire", "--all", NULL};
-static const char *argv_repack[MAX_ADD] = {"repack", "-a", "-d", "-l", NULL};
+static const char *argv_repack[MAX_ADD] = {"repack", "-d", "-l", NULL};
static const char *argv_prune[] = {"prune", NULL};
static const char *argv_rerere[] = {"rerere", "gc", NULL};
aggressive_window = git_config_int(var, value);
return 0;
}
+ if (!strcmp(var, "gc.auto")) {
+ gc_auto_threshold = git_config_int(var, value);
+ return 0;
+ }
+ if (!strcmp(var, "gc.autopacklimit")) {
+ gc_auto_pack_limit = git_config_int(var, value);
+ return 0;
+ }
return git_default_config(var, value);
}
cmd[i] = NULL;
}
+static int too_many_loose_objects(void)
+{
+ /*
+ * Quickly check if a "gc" is needed, by estimating how
+ * many loose objects there are. Because SHA-1 is evenly
+ * distributed, we can check only one and get a reasonable
+ * estimate.
+ */
+ char path[PATH_MAX];
+ const char *objdir = get_object_directory();
+ DIR *dir;
+ struct dirent *ent;
+ int auto_threshold;
+ int num_loose = 0;
+ int needed = 0;
+
+ if (gc_auto_threshold <= 0)
+ return 0;
+
+ if (sizeof(path) <= snprintf(path, sizeof(path), "%s/17", objdir)) {
+ warning("insanely long object directory %.*s", 50, objdir);
+ return 0;
+ }
+ dir = opendir(path);
+ if (!dir)
+ return 0;
+
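+ /*
+  * We scan a single fan-out directory (objects/17), i.e. about
+  * 1/256 of all loose objects, so scale the threshold down
+  * accordingly, rounding up.
+  */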
+ auto_threshold = (gc_auto_threshold + 255) / 256;
+ while ((ent = readdir(dir)) != NULL) {
+ if (strspn(ent->d_name, "0123456789abcdef") != 38 ||
+ ent->d_name[38] != '\0')
+ continue;
+ if (++num_loose > auto_threshold) {
+ needed = 1;
+ break;
+ }
+ }
+ closedir(dir);
+ return needed;
+}
+
+static int too_many_packs(void)
+{
+ struct packed_git *p;
+ int cnt;
+
+ if (gc_auto_pack_limit <= 0)
+ return 0;
+
+ prepare_packed_git();
+ for (cnt = 0, p = packed_git; p; p = p->next) {
+ char path[PATH_MAX];
+ size_t len;
+ int keep;
+
+ if (!p->pack_local)
+ continue;
+ len = strlen(p->pack_name);
+ if (PATH_MAX <= len + 1)
+ continue; /* oops, give up */
+ memcpy(path, p->pack_name, len-5);
+ memcpy(path + len - 5, ".keep", 6);
+ keep = !access(path, F_OK); /* pack has a ".keep" marker */
+ if (keep)
+ continue;
+ /*
+ * Perhaps check the size of the pack and count only
+ * very small ones here?
+ */
+ cnt++;
+ }
+ return gc_auto_pack_limit <= cnt;
+}
+
+static int need_to_gc(void)
+{
+ /*
+ * Setting gc.auto and gc.autopacklimit to 0 or negative can
+ * disable the automatic gc.
+ */
+ if (gc_auto_threshold <= 0 && gc_auto_pack_limit <= 0)
+ return 0;
+
+ /*
+ * If there are too many loose objects, but not too many
+ * packs, we run "repack -d -l". If there are too many packs,
+ * we run "repack -A -d -l". Otherwise we tell the caller
+ * there is no need.
+ */
+ if (too_many_packs())
+ append_option(argv_repack, "-A", MAX_ADD);
+ else if (!too_many_loose_objects())
+ return 0;
+ return 1;
+}
+
int cmd_gc(int argc, const char **argv, const char *prefix)
{
int i;
int prune = 0;
+ int auto_gc = 0;
char buf[80];
git_config(gc_config);
}
continue;
}
- /* perhaps other parameters later... */
+ if (!strcmp(arg, "--auto")) {
+ auto_gc = 1;
+ continue;
+ }
break;
}
if (i != argc)
usage(builtin_gc_usage);
+ if (auto_gc) {
+ /*
+ * Auto-gc should be as non-intrusive as possible.
+ */
+ prune = 0;
+ if (!need_to_gc())
+ return 0;
+ fprintf(stderr, "Packing your repository for optimum "
+ "performance. You may also\n"
+ "run \"git gc\" manually. See "
+ "\"git help gc\" for more information.\n");
+ } else {
+ /*
+ * Use safer (for shared repos) "-A" option to
+ * repack when not pruning. Auto-gc makes its
+ * own decision.
+ */
+ if (prune)
+ append_option(argv_repack, "-a", MAX_ADD);
+ else
+ append_option(argv_repack, "-A", MAX_ADD);
+ }
+
if (pack_refs && run_command_v_opt(argv_pack_refs, RUN_GIT_CMD))
return error(FAILED_RUN, argv_pack_refs[0]);
if (run_command_v_opt(argv_rerere, RUN_GIT_CMD))
return error(FAILED_RUN, argv_rerere[0]);
+ if (auto_gc && too_many_loose_objects())
+ warning("There are too many unreachable loose objects; "
+ "run 'git prune' to remove them.");
+
return 0;
}
--- /dev/null
+#include "cache.h"
+#include "walker.h"
+
+int cmd_http_fetch(int argc, const char **argv, const char *prefix)
+{
+ struct walker *walker;
+ int commits_on_stdin = 0;
+ int commits;
+ const char **write_ref = NULL;
+ char **commit_id;
+ const char *url;
+ int arg = 1;
+ int rc = 0;
+ int get_tree = 0;
+ int get_history = 0;
+ int get_all = 0;
+ int get_verbosely = 0;
+ int get_recover = 0;
+
+ git_config(git_default_config);
+
+ while (arg < argc && argv[arg][0] == '-') {
+ if (argv[arg][1] == 't') {
+ get_tree = 1;
+ } else if (argv[arg][1] == 'c') {
+ get_history = 1;
+ } else if (argv[arg][1] == 'a') {
+ get_all = 1;
+ get_tree = 1;
+ get_history = 1;
+ } else if (argv[arg][1] == 'v') {
+ get_verbosely = 1;
+ } else if (argv[arg][1] == 'w') {
+ write_ref = &argv[arg + 1];
+ arg++;
+ } else if (!strcmp(argv[arg], "--recover")) {
+ get_recover = 1;
+ } else if (!strcmp(argv[arg], "--stdin")) {
+ commits_on_stdin = 1;
+ }
+ arg++;
+ }
+ if (argc < arg + 2 - commits_on_stdin) {
+ usage("git-http-fetch [-c] [-t] [-a] [-v] [--recover] [-w ref] [--stdin] commit-id url");
+ return 1;
+ }
+ if (commits_on_stdin) {
+ commits = walker_targets_stdin(&commit_id, &write_ref);
+ } else {
+ commit_id = (char **) &argv[arg++];
+ commits = 1;
+ }
+ url = argv[arg];
+
+ walker = get_http_walker(url);
+ walker->get_tree = get_tree;
+ walker->get_history = get_history;
+ walker->get_all = get_all;
+ walker->get_verbosely = get_verbosely;
+ walker->get_recover = get_recover;
+
+ rc = walker_fetch(walker, commits, commit_id, write_ref, url);
+
+ if (commits_on_stdin)
+ walker_targets_free(commits, commit_id, write_ref);
+
+ if (walker->corrupt_object_found) {
+ fprintf(stderr,
+"Some loose object were found to be corrupt, but they might be just\n"
+"a false '404 Not Found' error message sent with incorrect HTTP\n"
+"status code. Suggest running git-fsck.\n");
+ }
+
+ walker_free(walker);
+
+ return rc;
+}
if (pos < 0)
pos = -pos-1;
- active_cache += pos;
+ memmove(active_cache, active_cache + pos,
+ (active_nr - pos) * sizeof(struct cache_entry *));
active_nr -= pos;
first = 0;
last = active_nr;
}
static void decode_header(char *it, unsigned itsize);
-static char *header[MAX_HDR_PARSED] = {
+static const char *header[MAX_HDR_PARSED] = {
"From","Subject","Date",
};
[--window=N] [--window-memory=N] [--depth=N] \n\
[--no-reuse-delta] [--no-reuse-object] [--delta-base-offset] \n\
[--threads=N] [--non-empty] [--revs [--unpacked | --all]*] [--reflog] \n\
- [--stdout | base-name] [<ref-list | <object-list]";
+ [--stdout | base-name] [--keep-unreachable] [<ref-list | <object-list]";
struct object_entry {
struct pack_idx_entry idx;
static uint32_t nr_objects, nr_alloc, nr_result, nr_written;
static int non_empty;
-static int no_reuse_delta, no_reuse_object;
+static int no_reuse_delta, no_reuse_object, keep_unreachable;
static int local;
static int incremental;
static int allow_ofs_delta;
}
}
+#define OBJECT_ADDED (1u<<20)
+
static void show_commit(struct commit *commit)
{
add_object_entry(commit->object.sha1, OBJ_COMMIT, NULL, 0);
+ commit->object.flags |= OBJECT_ADDED;
}
static void show_object(struct object_array_entry *p)
{
add_preferred_base_object(p->name);
add_object_entry(p->item->sha1, p->item->type, p->name, 0);
+ p->item->flags |= OBJECT_ADDED;
}
static void show_edge(struct commit *commit)
add_preferred_base(commit->object.sha1);
}
+struct in_pack_object {
+ off_t offset;
+ struct object *object;
+};
+
+struct in_pack {
+ int alloc;
+ int nr;
+ struct in_pack_object *array;
+};
+
+static void mark_in_pack_object(struct object *object, struct packed_git *p, struct in_pack *in_pack)
+{
+ in_pack->array[in_pack->nr].offset = find_pack_entry_one(object->sha1, p);
+ in_pack->array[in_pack->nr].object = object;
+ in_pack->nr++;
+}
+
+/*
+ * Compare the objects in the offset order, in order to emulate the
+ * "git-rev-list --objects" output that produced the pack originally.
+ */
+static int ofscmp(const void *a_, const void *b_)
+{
+ struct in_pack_object *a = (struct in_pack_object *)a_;
+ struct in_pack_object *b = (struct in_pack_object *)b_;
+
+ if (a->offset < b->offset)
+ return -1;
+ else if (a->offset > b->offset)
+ return 1;
+ else
+ return hashcmp(a->object->sha1, b->object->sha1);
+}
+
+static void add_objects_in_unpacked_packs(struct rev_info *revs)
+{
+ struct packed_git *p;
+ struct in_pack in_pack;
+ uint32_t i;
+
+ memset(&in_pack, 0, sizeof(in_pack));
+
+ for (p = packed_git; p; p = p->next) {
+ const unsigned char *sha1;
+ struct object *o;
+
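+ /*
+  * Consider only the packs that the caller named with
+  * --unpacked=<pack>; objects in any other pack are not
+  * candidates for keep-unreachable.
+  */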
+ for (i = 0; i < revs->num_ignore_packed; i++) {
+ if (matches_pack_name(p, revs->ignore_packed[i]))
+ break;
+ }
+ if (revs->num_ignore_packed <= i)
+ continue;
+ if (open_pack_index(p))
+ die("cannot open pack index");
+
+ ALLOC_GROW(in_pack.array,
+ in_pack.nr + p->num_objects,
+ in_pack.alloc);
+
+ for (i = 0; i < p->num_objects; i++) {
+ sha1 = nth_packed_object_sha1(p, i);
+ o = lookup_unknown_object(sha1);
+ if (!(o->flags & OBJECT_ADDED))
+ mark_in_pack_object(o, p, &in_pack);
+ o->flags |= OBJECT_ADDED;
+ }
+ }
+
+ if (in_pack.nr) {
+ qsort(in_pack.array, in_pack.nr, sizeof(in_pack.array[0]),
+ ofscmp);
+ for (i = 0; i < in_pack.nr; i++) {
+ struct object *o = in_pack.array[i].object;
+ add_object_entry(o->sha1, o->type, "", 0);
+ }
+ }
+ free(in_pack.array);
+}
+
static void get_object_list(int ac, const char **av)
{
struct rev_info revs;
prepare_revision_walk(&revs);
mark_edges_uninteresting(revs.commits, &revs, show_edge);
traverse_commit_list(&revs, show_commit, show_object);
+
+ if (keep_unreachable)
+ add_objects_in_unpacked_packs(&revs);
}
static int adjust_perm(const char *path, mode_t mode)
use_internal_rev_list = 1;
continue;
}
+ if (!strcmp("--keep-unreachable", arg)) {
+ keep_unreachable = 1;
+ continue;
+ }
if (!strcmp("--unpacked", arg) ||
!prefixcmp(arg, "--unpacked=") ||
!strcmp("--reflog", arg) ||
#include "run-command.h"
#include "builtin.h"
#include "remote.h"
+#include "transport.h"
-static const char push_usage[] = "git-push [--all] [--tags] [--receive-pack=<git-receive-pack>] [--repo=all] [-f | --force] [-v] [<repository> <refspec>...]";
+static const char push_usage[] = "git-push [--all] [--dry-run] [--tags] [--receive-pack=<git-receive-pack>] [--repo=all] [-f | --force] [-v] [<repository> <refspec>...]";
-static int all, force, thin, verbose;
+static int thin, verbose;
static const char *receivepack;
static const char **refspec;
}
}
-static int do_push(const char *repo)
+static int do_push(const char *repo, int flags)
{
int i, errs;
- int common_argc;
- const char **argv;
- int argc;
struct remote *remote = remote_get(repo);
if (!remote)
die("bad repository '%s'", repo);
- if (remote->receivepack) {
- char *rp = xmalloc(strlen(remote->receivepack) + 16);
- sprintf(rp, "--receive-pack=%s", remote->receivepack);
- receivepack = rp;
- }
- if (!refspec && !all && remote->push_refspec_nr) {
+ if (!refspec
+ && !(flags & TRANSPORT_PUSH_ALL)
+ && remote->push_refspec_nr) {
refspec = remote->push_refspec;
refspec_nr = remote->push_refspec_nr;
}
-
- argv = xmalloc((refspec_nr + 10) * sizeof(char *));
- argv[0] = "dummy-send-pack";
- argc = 1;
- if (all)
- argv[argc++] = "--all";
- if (force)
- argv[argc++] = "--force";
- if (receivepack)
- argv[argc++] = receivepack;
- common_argc = argc;
-
errs = 0;
- for (i = 0; i < remote->uri_nr; i++) {
+ for (i = 0; i < remote->url_nr; i++) {
+ struct transport *transport =
+ transport_get(remote, remote->url[i]);
int err;
- int dest_argc = common_argc;
- int dest_refspec_nr = refspec_nr;
- const char **dest_refspec = refspec;
- const char *dest = remote->uri[i];
- const char *sender = "send-pack";
- if (!prefixcmp(dest, "http://") ||
- !prefixcmp(dest, "https://"))
- sender = "http-push";
- else {
- char *rem = xmalloc(strlen(remote->name) + 10);
- sprintf(rem, "--remote=%s", remote->name);
- argv[dest_argc++] = rem;
- if (thin)
- argv[dest_argc++] = "--thin";
- }
- argv[0] = sender;
- argv[dest_argc++] = dest;
- while (dest_refspec_nr--)
- argv[dest_argc++] = *dest_refspec++;
- argv[dest_argc] = NULL;
+ if (receivepack)
+ transport_set_option(transport,
+ TRANS_OPT_RECEIVEPACK, receivepack);
+ if (thin)
+ transport_set_option(transport, TRANS_OPT_THIN, "yes");
+
if (verbose)
- fprintf(stderr, "Pushing to %s\n", dest);
- err = run_command_v_opt(argv, RUN_GIT_CMD);
+ fprintf(stderr, "Pushing to %s\n", remote->url[i]);
+ err = transport_push(transport, refspec_nr, refspec, flags);
+ err |= transport_disconnect(transport);
+
if (!err)
continue;
- error("failed to push to '%s'", remote->uri[i]);
- switch (err) {
- case -ERR_RUN_COMMAND_FORK:
- error("unable to fork for %s", sender);
- case -ERR_RUN_COMMAND_EXEC:
- error("unable to exec %s", sender);
- break;
- case -ERR_RUN_COMMAND_WAITPID:
- case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
- case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
- case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
- error("%s died with strange error", sender);
- }
+ error("failed to push to '%s'", remote->url[i]);
errs++;
}
return !!errs;
int cmd_push(int argc, const char **argv, const char *prefix)
{
int i;
+ int flags = 0;
const char *repo = NULL; /* default repository */
for (i = 1; i < argc; i++) {
continue;
}
if (!strcmp(arg, "--all")) {
- all = 1;
+ flags |= TRANSPORT_PUSH_ALL;
+ continue;
+ }
+ if (!strcmp(arg, "--dry-run")) {
+ flags |= TRANSPORT_PUSH_DRY_RUN;
continue;
}
if (!strcmp(arg, "--tags")) {
continue;
}
if (!strcmp(arg, "--force") || !strcmp(arg, "-f")) {
- force = 1;
+ flags |= TRANSPORT_PUSH_FORCE;
continue;
}
if (!strcmp(arg, "--thin")) {
continue;
}
if (!prefixcmp(arg, "--receive-pack=")) {
- receivepack = arg;
+ receivepack = arg + 15;
continue;
}
if (!prefixcmp(arg, "--exec=")) {
- receivepack = arg;
+ receivepack = arg + 7;
continue;
}
usage(push_usage);
}
set_refspecs(argv + i, argc - i);
- if (all && refspec)
+ if ((flags & TRANSPORT_PUSH_ALL) && refspec)
usage(push_usage);
- return do_push(repo);
+ return do_push(repo, flags);
}
}
enum reset_type { MIXED, SOFT, HARD, NONE };
-static char *reset_type_names[] = { "mixed", "soft", "hard", NULL };
+static const char *reset_type_names[] = { "mixed", "soft", "hard", NULL };
int cmd_reset(int argc, const char **argv, const char *prefix)
{
#include "revision.h"
#include "list-objects.h"
#include "builtin.h"
+#include "log-tree.h"
/* bits #0-15 in revision.h */
" --left-right\n"
" special purpose:\n"
" --bisect\n"
-" --bisect-vars"
+" --bisect-vars\n"
+" --bisect-all"
;
static struct rev_info revs;
parents = parents->next;
}
}
+ show_decorations(commit);
if (revs.commit_format == CMIT_FMT_ONELINE)
putchar(' ');
else
strbuf_init(&buf, 0);
pretty_print_commit(revs.commit_format, commit,
&buf, revs.abbrev, NULL, NULL, revs.date_mode);
- printf("%s%c", buf.buf, hdr_termination);
+ if (buf.len)
+ printf("%s%c", buf.buf, hdr_termination);
strbuf_release(&buf);
}
maybe_flush_or_die(stdout, "stdout");
return best;
}
+struct commit_dist {
+ struct commit *commit;
+ int distance;
+};
+
+static int compare_commit_dist(const void *a_, const void *b_)
+{
+ struct commit_dist *a, *b;
+
+ a = (struct commit_dist *)a_;
+ b = (struct commit_dist *)b_;
+ if (a->distance != b->distance)
+ return b->distance - a->distance; /* desc sort */
+ return hashcmp(a->commit->object.sha1, b->commit->object.sha1);
+}
+
+static struct commit_list *best_bisection_sorted(struct commit_list *list, int nr)
+{
+ struct commit_list *p;
+ struct commit_dist *array = xcalloc(nr, sizeof(*array));
+ int cnt, i;
+
+ for (p = list, cnt = 0; p; p = p->next) {
+ int distance;
+ unsigned flags = p->item->object.flags;
+
+ if (revs.prune_fn && !(flags & TREECHANGE))
+ continue;
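+ /*
+  * The bisect "distance" of a commit is how evenly it splits
+  * the remaining set: min(weight, nr - weight).
+  */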
+ distance = weight(p);
+ if (nr - distance < distance)
+ distance = nr - distance;
+ array[cnt].commit = p->item;
+ array[cnt].distance = distance;
+ cnt++;
+ }
+ qsort(array, cnt, sizeof(*array), compare_commit_dist);
+ for (p = list, i = 0; i < cnt; i++) {
+ struct name_decoration *r = xmalloc(sizeof(*r) + 100);
+ struct object *obj = &(array[i].commit->object);
+
+ sprintf(r->name, "dist=%d", array[i].distance);
+ r->next = add_decoration(&name_decoration, obj, r);
+ p->item = array[i].commit;
+ p = p->next;
+ }
+ if (p)
+ p->next = NULL;
+ free(array);
+ return list;
+}
+
/*
* zero or positive weight is the number of interesting commits it can
* reach, including itself. Especially, weight = 0 means it does not
* or positive distance.
*/
static struct commit_list *do_find_bisection(struct commit_list *list,
- int nr, int *weights)
+ int nr, int *weights,
+ int find_all)
{
int n, counted;
struct commit_list *p;
clear_distance(list);
/* Does it happen to be at exactly half-way? */
- if (halfway(p, nr))
+ if (!find_all && halfway(p, nr))
return p;
counted++;
}
weight_set(p, weight(q));
/* Does it happen to be at exactly half-way? */
- if (halfway(p, nr))
+ if (!find_all && halfway(p, nr))
return p;
}
}
show_list("bisection 2 counted all", counted, nr, list);
- /* Then find the best one */
- return best_bisection(list, nr);
+ if (!find_all)
+ return best_bisection(list, nr);
+ else
+ return best_bisection_sorted(list, nr);
}
static struct commit_list *find_bisection(struct commit_list *list,
- int *reaches, int *all)
+ int *reaches, int *all,
+ int find_all)
{
int nr, on_list;
struct commit_list *p, *best, *next, *last;
weights = xcalloc(on_list, sizeof(*weights));
/* Do the real work of finding bisection commit. */
- best = do_find_bisection(list, nr, weights);
-
- if (best)
- best->next = NULL;
-
- *reaches = weight(best);
+ best = do_find_bisection(list, nr, weights, find_all);
+ if (best) {
+ if (!find_all)
+ best->next = NULL;
+ *reaches = weight(best);
+ }
free(weights);
-
return best;
}
int i;
int read_from_stdin = 0;
int bisect_show_vars = 0;
+ int bisect_find_all = 0;
git_config(git_default_config);
init_revisions(&revs, prefix);
bisect_list = 1;
continue;
}
+ if (!strcmp(arg, "--bisect-all")) {
+ bisect_list = 1;
+ bisect_find_all = 1;
+ continue;
+ }
if (!strcmp(arg, "--bisect-vars")) {
bisect_list = 1;
bisect_show_vars = 1;
if (bisect_list) {
int reaches = reaches, all = all;
- revs.commits = find_bisection(revs.commits, &reaches, &all);
+ revs.commits = find_bisection(revs.commits, &reaches, &all,
+ bisect_find_all);
if (bisect_show_vars) {
int cnt;
+ char hex[41];
if (!revs.commits)
return 1;
/*
* A bisect set of size N has (N-1) commits further
* to test, as we already know one bad one.
*/
- cnt = all-reaches;
+ cnt = all - reaches;
if (cnt < reaches)
cnt = reaches;
+ strcpy(hex, sha1_to_hex(revs.commits->item->object.sha1));
+
+ if (bisect_find_all) {
+ traverse_commit_list(&revs, show_commit, show_object);
+ printf("------\n");
+ }
+
printf("bisect_rev=%s\n"
"bisect_nr=%d\n"
"bisect_good=%d\n"
"bisect_bad=%d\n"
"bisect_all=%d\n",
- sha1_to_hex(revs.commits->item->object.sha1),
+ hex,
cnt - 1,
all - reaches - 1,
reaches - 1,
die ("Error wrapping up %s", defmsg);
fprintf(stderr, "Automatic %s failed. "
"After resolving the conflicts,\n"
- "mark the corrected paths with 'git-add <paths>'\n"
+ "mark the corrected paths with 'git add <paths>' "
"and commit the result.\n", me);
if (action == CHERRY_PICK) {
fprintf(stderr, "When commiting, use the option "
if (run_command(&child))
die("There was a problem with the editor %s.", editor);
- if (strbuf_read_file(buffer, path) < 0)
+ if (strbuf_read_file(buffer, path, 0) < 0)
die("could not read message file '%s': %s",
path, strerror(errno));
}
continue;
}
if (!strcmp(arg, "-F")) {
- int fd;
-
annotate = 1;
i++;
if (i == argc)
if (message)
die("only one -F or -m option is allowed.");
- if (!strcmp(argv[i], "-"))
- fd = 0;
- else {
- fd = open(argv[i], O_RDONLY);
- if (fd < 0)
- die("could not open '%s': %s",
+ if (!strcmp(argv[i], "-")) {
+ if (strbuf_read(&buf, 0, 1024) < 0)
+ die("cannot read %s", argv[i]);
+ } else {
+ if (strbuf_read_file(&buf, argv[i], 1024) < 0)
+ die("could not open or read '%s': %s",
argv[i], strerror(errno));
}
- if (strbuf_read(&buf, fd, 1024) < 0) {
- die("cannot read %s", argv[i]);
- }
message = 1;
continue;
}
die("git-update-index: unable to update %s",
path_name);
}
- if (path_name != ptr)
- free(path_name);
continue;
bad_line:
extern int cmd_diff_index(int argc, const char **argv, const char *prefix);
extern int cmd_diff(int argc, const char **argv, const char *prefix);
extern int cmd_diff_tree(int argc, const char **argv, const char *prefix);
+extern int cmd_fetch(int argc, const char **argv, const char *prefix);
+extern int cmd_fetch_pack(int argc, const char **argv, const char *prefix);
extern int cmd_fetch__tool(int argc, const char **argv, const char *prefix);
extern int cmd_fmt_merge_msg(int argc, const char **argv, const char *prefix);
extern int cmd_for_each_ref(int argc, const char **argv, const char *prefix);
extern int cmd_get_tar_commit_id(int argc, const char **argv, const char *prefix);
extern int cmd_grep(int argc, const char **argv, const char *prefix);
extern int cmd_help(int argc, const char **argv, const char *prefix);
+extern int cmd_http_fetch(int argc, const char **argv, const char *prefix);
extern int cmd_init_db(int argc, const char **argv, const char *prefix);
extern int cmd_log(int argc, const char **argv, const char *prefix);
extern int cmd_log_reflog(int argc, const char **argv, const char *prefix);
--- /dev/null
+#include "cache.h"
+#include "bundle.h"
+#include "object.h"
+#include "commit.h"
+#include "diff.h"
+#include "revision.h"
+#include "list-objects.h"
+#include "run-command.h"
+
+static const char bundle_signature[] = "# v2 git bundle\n";
+
+static void add_to_ref_list(const unsigned char *sha1, const char *name,
+ struct ref_list *list)
+{
+ if (list->nr + 1 >= list->alloc) {
+ list->alloc = alloc_nr(list->nr + 1);
+ list->list = xrealloc(list->list,
+ list->alloc * sizeof(list->list[0]));
+ }
+ memcpy(list->list[list->nr].sha1, sha1, 20);
+ list->list[list->nr].name = xstrdup(name);
+ list->nr++;
+}
+
+/* returns an fd */
+int read_bundle_header(const char *path, struct bundle_header *header) {
+ char buffer[1024];
+ int fd;
+ long fpos;
+ FILE *ffd = fopen(path, "rb");
+
+ if (!ffd)
+ return error("could not open '%s'", path);
+ if (!fgets(buffer, sizeof(buffer), ffd) ||
+ strcmp(buffer, bundle_signature)) {
+ fclose(ffd);
+ return error("'%s' does not look like a v2 bundle file", path);
+ }
+ while (fgets(buffer, sizeof(buffer), ffd)
+ && buffer[0] != '\n') {
+ int is_prereq = buffer[0] == '-';
+ int offset = is_prereq ? 1 : 0;
+ int len = strlen(buffer);
+ unsigned char sha1[20];
+ struct ref_list *list = is_prereq ? &header->prerequisites
+ : &header->references;
+ char delim;
+
+ if (buffer[len - 1] == '\n')
+ buffer[len - 1] = '\0';
+ if (get_sha1_hex(buffer + offset, sha1)) {
+ warning("unrecognized header: %s", buffer);
+ continue;
+ }
+ delim = buffer[40 + offset];
+ if (!isspace(delim) && (delim != '\0' || !is_prereq))
+ die ("invalid header: %s", buffer);
+ add_to_ref_list(sha1, isspace(delim) ?
+ buffer + 41 + offset : "", list);
+ }
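+ /*
+  * Reopen the bundle as a plain fd and seek past the header,
+  * so the caller can hand the remaining pack data straight to
+  * index-pack.
+  */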
+ fpos = ftell(ffd);
+ fclose(ffd);
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return error("could not open '%s'", path);
+ lseek(fd, fpos, SEEK_SET);
+ return fd;
+}
+
+static int list_refs(struct ref_list *r, int argc, const char **argv)
+{
+ int i;
+
+ for (i = 0; i < r->nr; i++) {
+ if (argc > 1) {
+ int j;
+ for (j = 1; j < argc; j++)
+ if (!strcmp(r->list[i].name, argv[j]))
+ break;
+ if (j == argc)
+ continue;
+ }
+ printf("%s %s\n", sha1_to_hex(r->list[i].sha1),
+ r->list[i].name);
+ }
+ return 0;
+}
+
+#define PREREQ_MARK (1u<<16)
+
+int verify_bundle(struct bundle_header *header, int verbose)
+{
+ /*
+ * Do the fast check first; if any prerequisites are missing, go
+ * object by object to report the errors verbosely
+ */
+ struct ref_list *p = &header->prerequisites;
+ struct rev_info revs;
+ const char *argv[] = {NULL, "--all"};
+ struct object_array refs;
+ struct commit *commit;
+ int i, ret = 0, req_nr;
+ const char *message = "Repository lacks these prerequisite commits:";
+
+ init_revisions(&revs, NULL);
+ for (i = 0; i < p->nr; i++) {
+ struct ref_list_entry *e = p->list + i;
+ struct object *o = parse_object(e->sha1);
+ if (o) {
+ o->flags |= PREREQ_MARK;
+ add_pending_object(&revs, o, e->name);
+ continue;
+ }
+ if (++ret == 1)
+ error(message);
+ error("%s %s", sha1_to_hex(e->sha1), e->name);
+ }
+ if (revs.pending.nr != p->nr)
+ return ret;
+ req_nr = revs.pending.nr;
+ setup_revisions(2, argv, &revs, NULL);
+
+ memset(&refs, 0, sizeof(struct object_array));
+ for (i = 0; i < revs.pending.nr; i++) {
+ struct object_array_entry *e = revs.pending.objects + i;
+ add_object_array(e->item, e->name, &refs);
+ }
+
+ prepare_revision_walk(&revs);
+
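+ /*
+  * Consume the walk until every prerequisite commit has been
+  * seen; any prerequisite whose SHOWN flag is still unset
+  * afterwards was not reached and is reported below.
+  */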
+ i = req_nr;
+ while (i && (commit = get_revision(&revs)))
+ if (commit->object.flags & PREREQ_MARK)
+ i--;
+
+ for (i = 0; i < req_nr; i++)
+ if (!(refs.objects[i].item->flags & SHOWN)) {
+ if (++ret == 1)
+ error(message);
+ error("%s %s", sha1_to_hex(refs.objects[i].item->sha1),
+ refs.objects[i].name);
+ }
+
+ for (i = 0; i < refs.nr; i++)
+ clear_commit_marks((struct commit *)refs.objects[i].item, -1);
+
+ if (verbose) {
+ struct ref_list *r;
+
+ r = &header->references;
+ printf("The bundle contains %d ref%s\n",
+ r->nr, (1 < r->nr) ? "s" : "");
+ list_refs(r, 0, NULL);
+ r = &header->prerequisites;
+ printf("The bundle requires these %d ref%s\n",
+ r->nr, (1 < r->nr) ? "s" : "");
+ list_refs(r, 0, NULL);
+ }
+ return ret;
+}
+
+int list_bundle_refs(struct bundle_header *header, int argc, const char **argv)
+{
+ return list_refs(&header->references, argc, argv);
+}
+
+int create_bundle(struct bundle_header *header, const char *path,
+ int argc, const char **argv)
+{
+ static struct lock_file lock;
+ int bundle_fd = -1;
+ int bundle_to_stdout;
+ const char **argv_boundary = xmalloc((argc + 4) * sizeof(const char *));
+ const char **argv_pack = xmalloc(5 * sizeof(const char *));
+ int i, ref_count = 0;
+ char buffer[1024];
+ struct rev_info revs;
+ struct child_process rls;
+ FILE *rls_fout;
+
+ bundle_to_stdout = !strcmp(path, "-");
+ if (bundle_to_stdout)
+ bundle_fd = 1;
+ else
+ bundle_fd = hold_lock_file_for_update(&lock, path, 1);
+
+ /* write signature */
+ write_or_die(bundle_fd, bundle_signature, strlen(bundle_signature));
+
+ /* init revs to list objects for pack-objects later */
+ save_commit_buffer = 0;
+ init_revisions(&revs, NULL);
+
+ /* write prerequisites */
+ memcpy(argv_boundary + 3, argv + 1, argc * sizeof(const char *));
+ argv_boundary[0] = "rev-list";
+ argv_boundary[1] = "--boundary";
+ argv_boundary[2] = "--pretty=oneline";
+ argv_boundary[argc + 2] = NULL;
+ memset(&rls, 0, sizeof(rls));
+ rls.argv = argv_boundary;
+ rls.out = -1;
+ rls.git_cmd = 1;
+ if (start_command(&rls))
+ return -1;
+ rls_fout = fdopen(rls.out, "r");
+ while (fgets(buffer, sizeof(buffer), rls_fout)) {
+ unsigned char sha1[20];
+ if (buffer[0] == '-') {
+ write_or_die(bundle_fd, buffer, strlen(buffer));
+ if (!get_sha1_hex(buffer + 1, sha1)) {
+ struct object *object = parse_object(sha1);
+ object->flags |= UNINTERESTING;
+ add_pending_object(&revs, object, buffer);
+ }
+ } else if (!get_sha1_hex(buffer, sha1)) {
+ struct object *object = parse_object(sha1);
+ object->flags |= SHOWN;
+ }
+ }
+ fclose(rls_fout);
+ if (finish_command(&rls))
+ return error("rev-list died");
+
+ /* write references */
+ argc = setup_revisions(argc, argv, &revs, NULL);
+ if (argc > 1)
+ return error("unrecognized argument: %s'", argv[1]);
+
+ for (i = 0; i < revs.pending.nr; i++) {
+ struct object_array_entry *e = revs.pending.objects + i;
+ unsigned char sha1[20];
+ char *ref;
+
+ if (e->item->flags & UNINTERESTING)
+ continue;
+ if (dwim_ref(e->name, strlen(e->name), sha1, &ref) != 1)
+ continue;
+ /*
+ * Make sure the refs we write out are correct; --max-count and
+ * other limiting options could have prevented all the tips
+ * from getting output.
+ *
+ * Non commit objects such as tags and blobs do not have
+ * this issue as they are not affected by those extra
+ * constraints.
+ */
+ if (!(e->item->flags & SHOWN) && e->item->type == OBJ_COMMIT) {
+ warning("ref '%s' is excluded by the rev-list options",
+ e->name);
+ free(ref);
+ continue;
+ }
+ /*
+ * If you run "git bundle create bndl v1.0..v2.0", the
+ * name of the positive ref is "v2.0" but that is the
+ * commit that is referenced by the tag, and not the tag
+ * itself.
+ */
+ if (hashcmp(sha1, e->item->sha1)) {
+ /*
+ * Is this the positive end of a range expressed
+ * in terms of a tag (e.g. v2.0 from the range
+ * "v1.0..v2.0")?
+ */
+ struct commit *one = lookup_commit_reference(sha1);
+ struct object *obj;
+
+ if (e->item == &(one->object)) {
+ /*
+ * Need to include e->name as an
+ * independent ref to the pack-objects
+ * input, so that the tag is included
+ * in the output; otherwise we would
+ * end up triggering "empty bundle"
+ * error.
+ */
+ obj = parse_object(sha1);
+ obj->flags |= SHOWN;
+ add_pending_object(&revs, obj, e->name);
+ }
+ free(ref);
+ continue;
+ }
+
+ ref_count++;
+ write_or_die(bundle_fd, sha1_to_hex(e->item->sha1), 40);
+ write_or_die(bundle_fd, " ", 1);
+ write_or_die(bundle_fd, ref, strlen(ref));
+ write_or_die(bundle_fd, "\n", 1);
+ free(ref);
+ }
+ if (!ref_count)
+ die ("Refusing to create empty bundle.");
+
+ /* end header */
+ write_or_die(bundle_fd, "\n", 1);
+
+ /* write pack */
+ argv_pack[0] = "pack-objects";
+ argv_pack[1] = "--all-progress";
+ argv_pack[2] = "--stdout";
+ argv_pack[3] = "--thin";
+ argv_pack[4] = NULL;
+ memset(&rls, 0, sizeof(rls));
+ rls.argv = argv_pack;
+ rls.in = -1;
+ rls.out = bundle_fd;
+ rls.git_cmd = 1;
+ if (start_command(&rls))
+ return error("Could not spawn pack-objects");
+ for (i = 0; i < revs.pending.nr; i++) {
+ struct object *object = revs.pending.objects[i].item;
+ if (object->flags & UNINTERESTING)
+ write(rls.in, "^", 1);
+ write(rls.in, sha1_to_hex(object->sha1), 40);
+ write(rls.in, "\n", 1);
+ }
+ if (finish_command(&rls))
+ return error ("pack-objects died");
+ close(bundle_fd);
+ if (!bundle_to_stdout)
+ commit_lock_file(&lock);
+ return 0;
+}
+
+int unbundle(struct bundle_header *header, int bundle_fd)
+{
+ const char *argv_index_pack[] = {"index-pack",
+ "--fix-thin", "--stdin", NULL};
+ struct child_process ip;
+
+ if (verify_bundle(header, 0))
+ return -1;
+ memset(&ip, 0, sizeof(ip));
+ ip.argv = argv_index_pack;
+ ip.in = bundle_fd;
+ ip.no_stdout = 1;
+ ip.git_cmd = 1;
+ if (run_command(&ip))
+ return error("index-pack died");
+ return 0;
+}
--- /dev/null
+#ifndef BUNDLE_H
+#define BUNDLE_H
+
+struct ref_list {
+ unsigned int nr, alloc;
+ struct ref_list_entry {
+ unsigned char sha1[20];
+ char *name;
+ } *list;
+};
+
+struct bundle_header {
+ struct ref_list prerequisites;
+ struct ref_list references;
+};
+
+int read_bundle_header(const char *path, struct bundle_header *header);
+int create_bundle(struct bundle_header *header, const char *path,
+ int argc, const char **argv);
+int verify_bundle(struct bundle_header *header, int verbose);
+int unbundle(struct bundle_header *header, int bundle_fd);
+int list_bundle_refs(struct bundle_header *header,
+ int argc, const char **argv);
+
+#endif
int parse_date(const char *date, char *buf, int bufsize);
void datestamp(char *buf, int bufsize);
unsigned long approxidate(const char *);
+enum date_mode parse_date_format(const char *format);
extern const char *git_author_info(int);
extern const char *git_committer_info(int);
unsigned char old_sha1[20];
unsigned char new_sha1[20];
unsigned char force;
+ unsigned char merge;
struct ref *peer_ref; /* when renaming */
char name[FLEX_ARRAY]; /* more */
};
extern unsigned long unpack_object_header_gently(const unsigned char *buf, unsigned long len, enum object_type *type, unsigned long *sizep);
extern unsigned long get_size_from_delta(struct packed_git *, struct pack_window **, off_t);
extern const char *packed_object_info_detail(struct packed_git *, off_t, unsigned long *, unsigned long *, unsigned int *, unsigned char *);
+extern int matches_pack_name(struct packed_git *p, const char *name);
/* Dumb servers support */
extern int update_server_info(int);
void clear_commit_marks(struct commit *commit, unsigned int mark)
{
- struct commit_list *parents;
+ while (commit) {
+ struct commit_list *parents;
- commit->object.flags &= ~mark;
- parents = commit->parents;
- while (parents) {
- struct commit *parent = parents->item;
+ if (!(mark & commit->object.flags))
+ return;
- /* Have we already cleared this? */
- if (mark & parent->object.flags)
- clear_commit_marks(parent, mark);
- parents = parents->next;
+ commit->object.flags &= ~mark;
+
+ parents = commit->parents;
+ if (!parents)
+ return;
+
+ while ((parents = parents->next))
+ clear_commit_marks(parents->item, mark);
+
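+ /* iterate down the first-parent chain instead of recursing on it */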
+ commit = commit->parents->item;
}
}
len - strlen("encoding \n"),
encoding, strlen(encoding));
}
- return tmp.buf;
+ return strbuf_detach(&tmp, NULL);
}
static char *logmsg_reencode(const struct commit *commit,
}
if (msg[i])
table[IBODY].value = xstrdup(msg + i);
- for (i = 0; i < ARRAY_SIZE(table); i++)
- if (!table[i].value)
- interp_set_entry(table, i, "<unknown>");
len = interpolate(sb->buf + sb->len, strbuf_avail(sb),
format, table, ARRAY_SIZE(table));
--- /dev/null
+#include "../git-compat-util.h"
+
+char *gitmkdtemp(char *template)
+{
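+ /*
+ * Some mktemp() implementations return NULL on failure, others
+ * return the template with its first byte cleared; in the latter
+ * case the subsequent mkdir("") fails, so both failure modes
+ * end up returning NULL.
+ */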
+ if (!mktemp(template) || mkdir(template, 0700))
+ return NULL;
+ return template;
+}
#
AC_PROG_CC([cc gcc])
#AC_PROG_INSTALL # needs install-sh or install.sh in sources
-AC_CHECK_TOOL(AR, ar, :)
+AC_CHECK_TOOLS(AR, [gar ar], :)
AC_CHECK_PROGS(TAR, [gtar tar])
# TCLTK_PATH will be set to some value if we want Tcl/Tk
# or will be empty otherwise.
continue;
if (nr_match && !path_match(name, nr_match, match))
continue;
- ref = alloc_ref(len - 40);
+ ref = alloc_ref(name_len + 1);
hashcpy(ref->old_sha1, old_sha1);
- memcpy(ref->name, buffer + 41, len - 40);
+ memcpy(ref->name, buffer + 41, name_len + 1);
*list = ref;
list = &ref->next;
}
check-attr) : plumbing;;
check-ref-format) : plumbing;;
commit-tree) : plumbing;;
- convert-objects) : plumbing;;
cvsexportcommit) : export;;
cvsimport) : import;;
cvsserver) : daemon;;
ssh-*) : transport;;
stripspace) : plumbing;;
svn) : import export;;
- svnimport) : import;;
symbolic-ref) : plumbing;;
tar-tree) : deprecated;;
unpack-file) : plumbing;;
--- /dev/null
+#include "cache.h"
+#include "blob.h"
+#include "commit.h"
+#include "tree.h"
+
+struct entry {
+ unsigned char old_sha1[20];
+ unsigned char new_sha1[20];
+ int converted;
+};
+
+#define MAXOBJECTS (1000000)
+
+static struct entry *convert[MAXOBJECTS];
+static int nr_convert;
+
+static struct entry * convert_entry(unsigned char *sha1);
+
+static struct entry *insert_new(unsigned char *sha1, int pos)
+{
+ struct entry *new = xcalloc(1, sizeof(struct entry));
+ hashcpy(new->old_sha1, sha1);
+ memmove(convert + pos + 1, convert + pos, (nr_convert - pos) * sizeof(struct entry *));
+ convert[pos] = new;
+ nr_convert++;
+ if (nr_convert == MAXOBJECTS)
+ die("you're kidding me - hit maximum object limit");
+ return new;
+}
+
+static struct entry *lookup_entry(unsigned char *sha1)
+{
+ int low = 0, high = nr_convert;
+
+ while (low < high) {
+ int next = (low + high) / 2;
+ struct entry *n = convert[next];
+ int cmp = hashcmp(sha1, n->old_sha1);
+ if (!cmp)
+ return n;
+ if (cmp < 0) {
+ high = next;
+ continue;
+ }
+ low = next+1;
+ }
+ return insert_new(sha1, low);
+}
+
+static void convert_binary_sha1(void *buffer)
+{
+ struct entry *entry = convert_entry(buffer);
+ hashcpy(buffer, entry->new_sha1);
+}
+
+static void convert_ascii_sha1(void *buffer)
+{
+ unsigned char sha1[20];
+ struct entry *entry;
+
+ if (get_sha1_hex(buffer, sha1))
+ die("expected sha1, got '%s'", (char*) buffer);
+ entry = convert_entry(sha1);
+ memcpy(buffer, sha1_to_hex(entry->new_sha1), 40);
+}
+
+static unsigned int convert_mode(unsigned int mode)
+{
+ unsigned int newmode;
+
+ newmode = mode & S_IFMT;
+ if (S_ISREG(mode))
+ newmode |= (mode & 0100) ? 0755 : 0644;
+ return newmode;
+}
+
+static int write_subdirectory(void *buffer, unsigned long size, const char *base, int baselen, unsigned char *result_sha1)
+{
+ char *new = xmalloc(size);
+ unsigned long newlen = 0;
+ unsigned long used;
+
+ used = 0;
+ while (size) {
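+ /* each entry is "<octal mode> <path>\0" followed by a 20-byte binary sha1 */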
+ int len = 21 + strlen(buffer);
+ char *path = strchr(buffer, ' ');
+ unsigned char *sha1;
+ unsigned int mode;
+ char *slash, *origpath;
+
+ if (!path || strtoul_ui(buffer, 8, &mode))
+ die("bad tree conversion");
+ mode = convert_mode(mode);
+ path++;
+ if (memcmp(path, base, baselen))
+ break;
+ origpath = path;
+ path += baselen;
+ slash = strchr(path, '/');
+ if (!slash) {
+ newlen += sprintf(new + newlen, "%o %s", mode, path);
+ new[newlen++] = '\0';
+ hashcpy((unsigned char*)new + newlen, (unsigned char *) buffer + len - 20);
+ newlen += 20;
+
+ used += len;
+ size -= len;
+ buffer = (char *) buffer + len;
+ continue;
+ }
+
+ newlen += sprintf(new + newlen, "%o %.*s", S_IFDIR, (int)(slash - path), path);
+ new[newlen++] = 0;
+ sha1 = (unsigned char *)(new + newlen);
+ newlen += 20;
+
+ len = write_subdirectory(buffer, size, origpath, slash-origpath+1, sha1);
+
+ used += len;
+ size -= len;
+ buffer = (char *) buffer + len;
+ }
+
+ write_sha1_file(new, newlen, tree_type, result_sha1);
+ free(new);
+ return used;
+}
+
+static void convert_tree(void *buffer, unsigned long size, unsigned char *result_sha1)
+{
+ void *orig_buffer = buffer;
+ unsigned long orig_size = size;
+
+ while (size) {
+ size_t len = 1+strlen(buffer);
+
+ convert_binary_sha1((char *) buffer + len);
+
+ len += 20;
+ if (len > size)
+ die("corrupt tree object");
+ size -= len;
+ buffer = (char *) buffer + len;
+ }
+
+ write_subdirectory(orig_buffer, orig_size, "", 0, result_sha1);
+}
+
+static unsigned long parse_oldstyle_date(const char *buf)
+{
+ char c, *p;
+ char buffer[100];
+ struct tm tm;
+ const char *formats[] = {
+ "%c",
+ "%a %b %d %T",
+ "%Z",
+ "%Y",
+ " %Y",
+ NULL
+ };
+ /* We only ever did three timezones in the bad old format .. */
+ const char *timezones[] = {
+ "PDT", "PST", "CEST", NULL
+ };
+ const char **fmt = formats;
+
+ p = buffer;
+ while (isspace(c = *buf))
+ buf++;
+ while ((c = *buf++) != '\n')
+ *p++ = c;
+ *p++ = 0;
+ buf = buffer;
+ memset(&tm, 0, sizeof(tm));
+ do {
+ const char *next = strptime(buf, *fmt, &tm);
+ if (next) {
+ if (!*next)
+ return mktime(&tm);
+ buf = next;
+ } else {
+ const char **p = timezones;
+ while (isspace(*buf))
+ buf++;
+ while (*p) {
+ if (!memcmp(buf, *p, strlen(*p))) {
+ buf += strlen(*p);
+ break;
+ }
+ p++;
+ }
+ }
+ fmt++;
+ } while (*buf && *fmt);
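+ /* report whatever part of the date string we could not parse */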
+ printf("left: %s\n", buf);
+ return mktime(&tm);
+}
+
+static int convert_date_line(char *dst, void **buf, unsigned long *sp)
+{
+ unsigned long size = *sp;
+ char *line = *buf;
+ char *next = strchr(line, '\n');
+ char *date = strchr(line, '>');
+ int len;
+
+ if (!next || !date)
+ die("missing or bad author/committer line %s", line);
+ next++; date += 2;
+
+ *buf = next;
+ *sp = size - (next - line);
+
+ len = date - line;
+ memcpy(dst, line, len);
+ dst += len;
+
+ /* Is it already in new format? */
+ if (isdigit(*date)) {
+ int datelen = next - date;
+ memcpy(dst, date, datelen);
+ return len + datelen;
+ }
+
+ /*
+ * Hacky hacky: one of the sparse old-style commits does not have
+ * any date at all, but we can fake it by using the committer date.
+ */
+ if (*date == '\n' && strchr(next, '>'))
+ date = strchr(next, '>')+2;
+
+ return len + sprintf(dst, "%lu -0700\n", parse_oldstyle_date(date));
+}
+
+static void convert_date(void *buffer, unsigned long size, unsigned char *result_sha1)
+{
+ char *new = xmalloc(size + 100);
+ unsigned long newlen = 0;
+
+ /* "tree <sha1>\n" */
+ memcpy(new + newlen, buffer, 46);
+ newlen += 46;
+ buffer = (char *) buffer + 46;
+ size -= 46;
+
+ /* "parent <sha1>\n" */
+ while (!memcmp(buffer, "parent ", 7)) {
+ memcpy(new + newlen, buffer, 48);
+ newlen += 48;
+ buffer = (char *) buffer + 48;
+ size -= 48;
+ }
+
+ /* "author xyz <xyz> date" */
+ newlen += convert_date_line(new + newlen, &buffer, &size);
+ /* "committer xyz <xyz> date" */
+ newlen += convert_date_line(new + newlen, &buffer, &size);
+
+ /* Rest */
+ memcpy(new + newlen, buffer, size);
+ newlen += size;
+
+ write_sha1_file(new, newlen, commit_type, result_sha1);
+ free(new);
+}
+
+static void convert_commit(void *buffer, unsigned long size, unsigned char *result_sha1)
+{
+ void *orig_buffer = buffer;
+ unsigned long orig_size = size;
+
+ if (memcmp(buffer, "tree ", 5))
+ die("Bad commit '%s'", (char*) buffer);
+ convert_ascii_sha1((char *) buffer + 5);
+ buffer = (char *) buffer + 46; /* "tree " + "hex sha1" + "\n" */
+ while (!memcmp(buffer, "parent ", 7)) {
+ convert_ascii_sha1((char *) buffer + 7);
+ buffer = (char *) buffer + 48;
+ }
+ convert_date(orig_buffer, orig_size, result_sha1);
+}
+
+static struct entry * convert_entry(unsigned char *sha1)
+{
+ struct entry *entry = lookup_entry(sha1);
+ enum object_type type;
+ void *buffer, *data;
+ unsigned long size;
+
+ if (entry->converted)
+ return entry;
+ data = read_sha1_file(sha1, &type, &size);
+ if (!data)
+ die("unable to read object %s", sha1_to_hex(sha1));
+
+ buffer = xmalloc(size);
+ memcpy(buffer, data, size);
+
+ if (type == OBJ_BLOB) {
+ write_sha1_file(buffer, size, blob_type, entry->new_sha1);
+ } else if (type == OBJ_TREE)
+ convert_tree(buffer, size, entry->new_sha1);
+ else if (type == OBJ_COMMIT)
+ convert_commit(buffer, size, entry->new_sha1);
+ else
+ die("unknown object type %d in %s", type, sha1_to_hex(sha1));
+ entry->converted = 1;
+ free(buffer);
+ free(data);
+ return entry;
+}
+
+int main(int argc, char **argv)
+{
+ unsigned char sha1[20];
+ struct entry *entry;
+
+ setup_git_directory();
+
+ if (argc != 2)
+ usage("git-convert-objects <sha1>");
+ if (get_sha1(argv[1], sha1))
+ die("Not a valid object name %s", argv[1]);
+
+ entry = convert_entry(sha1);
+ printf("new sha1: %s\n", sha1_to_hex(entry->new_sha1));
+ return 0;
+}
--- /dev/null
+git-convert-objects(1)
+======================
+
+NAME
+----
+git-convert-objects - Converts an old-style git repository
+
+
+SYNOPSIS
+--------
+'git-convert-objects'
+
+DESCRIPTION
+-----------
+Converts an old-style git repository to the latest format.
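+
+For example, converting the history leading up to the current HEAD
+should be as simple as:
+
+------
+$ git-convert-objects HEAD
+------
+
+The object name of the converted commit is printed on completion.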
+
+
+Author
+------
+Written by Linus Torvalds <torvalds@osdl.org>
+
+Documentation
+-------------
+Documentation by David Greaves, Junio C Hamano and the git-list <git@vger.kernel.org>.
+
+GIT
+---
+Part of the gitlink:git[7] suite
;; TODO
;; - portability to XEmacs
;; - better handling of subprocess errors
-;; - hook into file save (after-save-hook)
;; - diff against other branch
;; - renaming files from the status buffer
;; - creating tags
(message "Running git %s...done" (car args))
buffer))
-(defun git-run-command (buffer env &rest args)
- (message "Running git %s..." (car args))
- (apply #'git-call-process-env buffer env args)
- (message "Running git %s...done" (car args)))
-
(defun git-run-command-region (buffer start end env &rest args)
"Run a git command with specified buffer region as input."
- (message "Running git %s..." (car args))
(unless (eq 0 (if env
(git-run-process-region
buffer start end "env"
(append (git-get-env-strings env) (list "git") args))
(git-run-process-region
buffer start end "git" args)))
- (error "Failed to run \"git %s\":\n%s" (mapconcat (lambda (x) x) args " ") (buffer-string)))
- (message "Running git %s...done" (car args)))
+ (error "Failed to run \"git %s\":\n%s" (mapconcat (lambda (x) x) args " ") (buffer-string))))
(defun git-run-hook (hook env &rest args)
"Run a git hook and display its output if any."
"\"")
name))
+(defun git-success-message (text files)
+ "Print a success message after having handled FILES."
+ (let ((n (length files)))
+ (if (equal n 1)
+ (message "%s %s" text (car files))
+ (message "%s %d files" text n))))
+
(defun git-get-top-dir (dir)
"Retrieve the top-level directory of a git tree."
(let ((cdup (with-output-to-string
(sort-lines nil (point-min) (point-max))
(save-buffer))
(when created
- (git-run-command nil nil "update-index" "--add" "--" (file-relative-name ignore-name)))
+ (git-call-process-env nil nil "update-index" "--add" "--" (file-relative-name ignore-name)))
(git-update-status-files (list (file-relative-name ignore-name)) 'unknown)))
; propertize definition for XEmacs, stolen from erc-compat
"Remove everything from the status list."
(ewoc-filter status (lambda (info) nil)))
-(defun git-set-files-state (files state)
- "Set the state of a list of files."
- (dolist (info files)
- (unless (eq (git-fileinfo->state info) state)
- (setf (git-fileinfo->state info) state)
- (setf (git-fileinfo->rename-state info) nil)
- (setf (git-fileinfo->orig-name info) nil)
- (setf (git-fileinfo->needs-refresh info) t))))
-
-(defun git-set-filenames-state (status files state)
- "Set the state of a list of named files."
+(defun git-set-fileinfo-state (info state)
+ "Set the state of a file info."
+ (unless (eq (git-fileinfo->state info) state)
+ (setf (git-fileinfo->state info) state
+ (git-fileinfo->old-perm info) 0
+ (git-fileinfo->new-perm info) 0
+ (git-fileinfo->rename-state info) nil
+ (git-fileinfo->orig-name info) nil
+ (git-fileinfo->needs-refresh info) t)))
+
+(defun git-status-filenames-map (status func files &rest args)
+ "Apply FUNC to the status files names in the FILES list."
(when files
(setq files (sort files #'string-lessp))
(let ((file (pop files))
(node (ewoc-nth status 0)))
(while (and file node)
(let ((info (ewoc-data node)))
- (cond ((string-lessp (git-fileinfo->name info) file)
- (setq node (ewoc-next status node)))
- ((string-equal (git-fileinfo->name info) file)
- (unless (eq (git-fileinfo->state info) state)
- (setf (git-fileinfo->state info) state)
- (setf (git-fileinfo->rename-state info) nil)
- (setf (git-fileinfo->orig-name info) nil)
- (setf (git-fileinfo->needs-refresh info) t))
- (setq file (pop files)))
- (t (setq file (pop files)))))))
+ (if (string-lessp (git-fileinfo->name info) file)
+ (setq node (ewoc-next status node))
+ (if (string-equal (git-fileinfo->name info) file)
+ (apply func info args))
+ (setq file (pop files))))))))
+
+(defun git-set-filenames-state (status files state)
+ "Set the state of a list of named files."
+ (when files
+ (git-status-filenames-map status #'git-set-fileinfo-state files state)
(unless state ;; delete files whose state has been set to nil
(ewoc-filter status (lambda (info) (git-fileinfo->state info))))))
Return the list of files that haven't been handled."
(let (infolist)
(with-temp-buffer
- (apply #'git-run-command t nil "diff-index" "-z" "-M" "HEAD" "--" files)
+ (apply #'git-call-process-env t nil "diff-index" "-z" "-M" "HEAD" "--" files)
(goto-char (point-min))
(while (re-search-forward
":\\([0-7]\\{6\\}\\) \\([0-7]\\{6\\}\\) [0-9a-f]\\{40\\} [0-9a-f]\\{40\\} \\(\\([ADMU]\\)\0\\([^\0]+\\)\\|\\([CR]\\)[0-9]*\0\\([^\0]+\\)\0\\([^\0]+\\)\\)\0"
Return the list of files that haven't been handled."
(let (infolist)
(with-temp-buffer
- (apply #'git-run-command t nil "ls-files" "-z" (append options (list "--") files))
+ (apply #'git-call-process-env t nil "ls-files" "-z" (append options (list "--") files))
(goto-char (point-min))
(while (re-search-forward "\\([^\0]*\\)\0" nil t 1)
(let ((name (match-string 1)))
(defun git-run-ls-unmerged (status files)
"Run git-ls-files -u on FILES and parse the results into STATUS."
(with-temp-buffer
- (apply #'git-run-command t nil "ls-files" "-z" "-u" "--" files)
+ (apply #'git-call-process-env t nil "ls-files" "-z" "-u" "--" files)
(goto-char (point-min))
(let (unmerged-files)
(while (re-search-forward "[0-7]\\{6\\} [0-9a-f]\\{40\\} [123]\t\\([^\0]+\\)\0" nil t)
('deleted (push info deleted))
('modified (push info modified))))
(when added
- (apply #'git-run-command nil env "update-index" "--add" "--" (git-get-filenames added)))
+ (apply #'git-call-process-env nil env "update-index" "--add" "--" (git-get-filenames added)))
(when deleted
- (apply #'git-run-command nil env "update-index" "--remove" "--" (git-get-filenames deleted)))
+ (apply #'git-call-process-env nil env "update-index" "--remove" "--" (git-get-filenames deleted)))
(when modified
- (apply #'git-run-command nil env "update-index" "--" (git-get-filenames modified)))))
+ (apply #'git-call-process-env nil env "update-index" "--" (git-get-filenames modified)))))
(defun git-run-pre-commit-hook ()
"Run the pre-commit hook if any."
head-tree (git-rev-parse "HEAD^{tree}")))
(if files
(progn
+ (message "Running git commit...")
(git-read-tree head-tree index-file)
(git-update-index nil files) ;update both the default index
(git-update-index index-file files) ;and the temporary one
(condition-case nil (delete-file ".git/MERGE_HEAD") (error nil))
(condition-case nil (delete-file ".git/MERGE_MSG") (error nil))
(with-current-buffer buffer (erase-buffer))
- (git-set-files-state files 'uptodate)
- (git-run-command nil nil "rerere")
+ (dolist (info files) (git-set-fileinfo-state info 'uptodate))
+ (git-call-process-env nil nil "rerere")
+ (git-call-process-env nil nil "gc" "--auto")
(git-refresh-files)
(git-refresh-ewoc-hf git-status)
(message "Committed %s." commit)
"Mark all files."
(interactive)
(unless git-status (error "Not in git-status buffer."))
- (ewoc-map (lambda (info) (setf (git-fileinfo->marked info) t) t) git-status)
+ (ewoc-map (lambda (info) (unless (git-fileinfo->marked info)
+ (setf (git-fileinfo->marked info) t))) git-status)
; move back to goal column after invalidate
(when goal-column (move-to-column goal-column)))
"Unmark all files."
(interactive)
(unless git-status (error "Not in git-status buffer."))
- (ewoc-map (lambda (info) (setf (git-fileinfo->marked info) nil) t) git-status)
+ (ewoc-map (lambda (info) (when (git-fileinfo->marked info)
+ (setf (git-fileinfo->marked info) nil)
+ t)) git-status)
; move back to goal column after invalidate
(when goal-column (move-to-column goal-column)))
(let ((files (git-get-filenames (git-marked-files-state 'unknown 'ignored))))
(unless files
(push (file-relative-name (read-file-name "File to add: " nil nil t)) files))
- (apply #'git-run-command nil nil "update-index" "--add" "--" files)
- (git-update-status-files files 'uptodate)))
+ (apply #'git-call-process-env nil nil "update-index" "--add" "--" files)
+ (git-update-status-files files 'uptodate)
+ (git-success-message "Added" files)))
(defun git-ignore-file ()
"Add marked file(s) to the ignore list."
(unless files
(push (file-relative-name (read-file-name "File to ignore: " nil nil t)) files))
(dolist (f files) (git-append-to-ignore f))
- (git-update-status-files files 'ignored)))
+ (git-update-status-files files 'ignored)
+ (git-success-message "Ignored" files)))
(defun git-remove-file ()
"Remove the marked file(s)."
(progn
(dolist (name files)
(when (file-exists-p name) (delete-file name)))
- (apply #'git-run-command nil nil "update-index" "--remove" "--" files)
- (git-update-status-files files nil))
+ (apply #'git-call-process-env nil nil "update-index" "--remove" "--" files)
+ (git-update-status-files files nil)
+ (git-success-message "Removed" files))
(message "Aborting"))))
(defun git-revert-file ()
('unmerged (push (git-fileinfo->name info) modified))
('modified (push (git-fileinfo->name info) modified))))
(when added
- (apply #'git-run-command nil nil "update-index" "--force-remove" "--" added))
+ (apply #'git-call-process-env nil nil "update-index" "--force-remove" "--" added))
(when modified
- (apply #'git-run-command nil nil "checkout" "HEAD" modified))
- (git-update-status-files (append added modified) 'uptodate))))
+ (apply #'git-call-process-env nil nil "checkout" "HEAD" modified))
+ (git-update-status-files (append added modified) 'uptodate)
+ (git-success-message "Reverted" (git-get-filenames files)))))
(defun git-resolve-file ()
"Resolve conflicts in marked file(s)."
(interactive)
(let ((files (git-get-filenames (git-marked-files-state 'unmerged))))
(when files
- (apply #'git-run-command nil nil "update-index" "--" files)
- (git-update-status-files files 'uptodate))))
+ (apply #'git-call-process-env nil nil "update-index" "--" files)
+ (git-update-status-files files 'uptodate)
+ (git-success-message "Resolved" files))))
(defun git-remove-handled ()
"Remove handled files from the status list."
(interactive)
(if (setq git-show-ignored (not git-show-ignored))
(progn
+ (message "Inserting ignored files...")
(git-run-ls-files-with-excludes git-status nil 'ignored "-o" "-i")
(git-refresh-files)
- (git-refresh-ewoc-hf git-status))
+ (git-refresh-ewoc-hf git-status)
+ (message "Inserting ignored files...done"))
(git-remove-handled)))
(defun git-toggle-show-unknown ()
(interactive)
(if (setq git-show-unknown (not git-show-unknown))
(progn
+ (message "Inserting unknown files...")
(git-run-ls-files-with-excludes git-status nil 'unknown "-o")
(git-refresh-files)
- (git-refresh-ewoc-hf git-status))
+ (git-refresh-ewoc-hf git-status)
+ (message "Inserting unknown files...done"))
(git-remove-handled)))
(defun git-setup-diff-buffer (buffer)
(interactive)
(let* ((status git-status)
(pos (ewoc-locate status))
+ (marked-files (git-get-filenames (ewoc-collect status (lambda (info) (git-fileinfo->marked info)))))
(cur-name (and pos (git-fileinfo->name (ewoc-data pos)))))
(unless status (error "Not in git-status buffer."))
- (git-run-command nil nil "update-index" "--refresh")
+ (message "Refreshing git status...")
+ (git-call-process-env nil nil "update-index" "--refresh")
(git-clear-status status)
(git-update-status-files nil)
+ ; restore file marks
+ (when marked-files
+ (git-status-filenames-map status
+ (lambda (info)
+ (setf (git-fileinfo->marked info) t)
+ (setf (git-fileinfo->needs-refresh info) t))
+ marked-files)
+ (git-refresh-files))
; move point to the current file name if any
+ (message "Refreshing git status...done")
(let ((node (and cur-name (git-find-status-file status cur-name))))
(when node (ewoc-goto-node status node)))))
(cd dir)
(git-status-mode)
(git-refresh-status)
- (goto-char (point-min)))
+ (goto-char (point-min))
+ (add-hook 'after-save-hook 'git-update-saved-file))
(message "%s is not a git working tree." dir)))
+(defun git-update-saved-file ()
+ "Update the corresponding git-status buffer when a file is saved.
+Meant to be used in `after-save-hook'."
+ (let* ((file (expand-file-name buffer-file-name))
+ (dir (condition-case nil (git-get-top-dir (file-name-directory file)) (error nil)))
+ (buffer (and dir (git-find-status-buffer dir))))
+ (when buffer
+ (with-current-buffer buffer
+ (let ((filename (file-relative-name file dir)))
+ ; skip files located inside the .git directory
+ (unless (string-match "^\\.git/" filename)
+ (git-call-process-env nil nil "add" "--refresh" "--" filename)
+ (git-update-status-files (list filename) 'uptodate)))))))
+
(defun git-help ()
"Display help for Git mode."
(interactive)
--- /dev/null
+#!/bin/sh
+#
+
+USAGE='<fetch-options> <repository> <refspec>...'
+SUBDIRECTORY_OK=Yes
+. git-sh-setup
+set_reflog_action "fetch $*"
+cd_to_toplevel ;# probably unnecessary...
+
+. git-parse-remote
+_x40='[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
+_x40="$_x40$_x40$_x40$_x40$_x40$_x40$_x40$_x40"
+
+LF='
+'
+IFS="$LF"
+
+no_tags=
+tags=
+append=
+force=
+verbose=
+update_head_ok=
+exec=
+keep=
+shallow_depth=
+no_progress=
+test -t 1 || no_progress=--no-progress
+quiet=
+while test $# != 0
+do
+ case "$1" in
+ -a|--a|--ap|--app|--appe|--appen|--append)
+ append=t
+ ;;
+ --upl|--uplo|--uploa|--upload|--upload-|--upload-p|\
+ --upload-pa|--upload-pac|--upload-pack)
+ shift
+ exec="--upload-pack=$1"
+ ;;
+ --upl=*|--uplo=*|--uploa=*|--upload=*|\
+ --upload-=*|--upload-p=*|--upload-pa=*|--upload-pac=*|--upload-pack=*)
+ exec=--upload-pack=$(expr "z$1" : 'z-[^=]*=\(.*\)')
+ ;;
+ -f|--f|--fo|--for|--forc|--force)
+ force=t
+ ;;
+ -t|--t|--ta|--tag|--tags)
+ tags=t
+ ;;
+ -n|--n|--no|--no-|--no-t|--no-ta|--no-tag|--no-tags)
+ no_tags=t
+ ;;
+ -u|--u|--up|--upd|--upda|--updat|--update|--update-|--update-h|\
+ --update-he|--update-hea|--update-head|--update-head-|\
+ --update-head-o|--update-head-ok)
+ update_head_ok=t
+ ;;
+ -q|--q|--qu|--qui|--quie|--quiet)
+ quiet=--quiet
+ ;;
+ -v|--verbose)
+ verbose="$verbose"Yes
+ ;;
+ -k|--k|--ke|--kee|--keep)
+ keep='-k -k'
+ ;;
+ --depth=*)
+ shallow_depth="--depth=`expr "z$1" : 'z-[^=]*=\(.*\)'`"
+ ;;
+ --depth)
+ shift
+ shallow_depth="--depth=$1"
+ ;;
+ -*)
+ usage
+ ;;
+ *)
+ break
+ ;;
+ esac
+ shift
+done
+
+case "$#" in
+0)
+ origin=$(get_default_remote)
+ test -n "$(get_remote_url ${origin})" ||
+ die "Where do you want to fetch from today?"
+ set x $origin ; shift ;;
+esac
+
+if test -z "$exec"
+then
+ # No command line override and we have configuration for the remote.
+ exec="--upload-pack=$(get_uploadpack $1)"
+fi
+
+remote_nick="$1"
+remote=$(get_remote_url "$@")
+refs=
+rref=
+rsync_slurped_objects=
+
+if test "" = "$append"
+then
+ : >"$GIT_DIR/FETCH_HEAD"
+fi
+
+# Global that is reused later
+ls_remote_result=$(git ls-remote $exec "$remote") ||
+ die "Cannot get the repository state from $remote"
+
+append_fetch_head () {
+ flags=
+ test -n "$verbose" && flags="$flags$LF-v"
+ test -n "$force$single_force" && flags="$flags$LF-f"
+ GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
+ git fetch--tool $flags append-fetch-head "$@"
+}
+
+# Updating the current HEAD with git-fetch in a bare
+# repository is always fine.
+if test -z "$update_head_ok" && test $(is_bare_repository) = false
+then
+ orig_head=$(git rev-parse --verify HEAD 2>/dev/null)
+fi
+
+# Allow --no-tags from remote.$1.tagopt
+case "$tags$no_tags" in
+'')
+ case "$(git config --get "remote.$1.tagopt")" in
+ --no-tags)
+ no_tags=t ;;
+ esac
+esac
+
+# If --tags (and later --heads or --all) is specified, then we are
+# not talking about defaults stored in Pull: line of remotes or
+# branches file, and just fetch those and refspecs explicitly given.
+# Otherwise we do what we always did.
+
+reflist=$(get_remote_refs_for_fetch "$@")
+if test "$tags"
+then
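+ # Build a ".<name>:<name>" refspec for each remote tag we do not
+ # have yet; the leading dot marks the ref as not-for-merge.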
+ taglist=`IFS=' ' &&
+ echo "$ls_remote_result" |
+ git show-ref --exclude-existing=refs/tags/ |
+ while read sha1 name
+ do
+ echo ".${name}:${name}"
+ done` || exit
+ if test "$#" -gt 1
+ then
+ # remote URL plus explicit refspecs; we need to merge them.
+ reflist="$reflist$LF$taglist"
+ else
+ # No explicit refspecs; fetch tags only.
+ reflist=$taglist
+ fi
+fi
+
+fetch_all_at_once () {
+
+ eval=$(echo "$1" | git fetch--tool parse-reflist "-")
+ eval "$eval"
+
+ ( : subshell because we muck with IFS
+ IFS=" $LF"
+ (
+ if test "$remote" = . ; then
+ git show-ref $rref || echo failed "$remote"
+ elif test -f "$remote" ; then
+ test -n "$shallow_depth" &&
+ die "shallow clone with bundle is not supported"
+ git bundle unbundle "$remote" $rref ||
+ echo failed "$remote"
+ else
+ if test -d "$remote" &&
+
+ # The remote might be our alternate. With
+ # this optimization we will bypass fetch-pack
+ # altogether, which means we cannot be doing
+ # the shallow stuff at all.
+ test ! -f "$GIT_DIR/shallow" &&
+ test -z "$shallow_depth" &&
+
+ # See if all of what we are going to fetch are
+ # connected to our repository's tips, in which
+ # case we do not have to do any fetch.
+ theirs=$(echo "$ls_remote_result" | \
+ git fetch--tool -s pick-rref "$rref" "-") &&
+
+ # This will barf when $theirs reach an object that
+ # we do not have in our repository. Otherwise,
+ # we already have everything the fetch would bring in.
+ git rev-list --objects $theirs --not --all \
+ >/dev/null 2>/dev/null
+ then
+ echo "$ls_remote_result" | \
+ git fetch--tool pick-rref "$rref" "-"
+ else
+ flags=
+ case $verbose in
+ YesYes*)
+ flags="-v"
+ ;;
+ esac
+ git-fetch-pack --thin $exec $keep $shallow_depth \
+ $quiet $no_progress $flags "$remote" $rref ||
+ echo failed "$remote"
+ fi
+ fi
+ ) |
+ (
+ flags=
+ test -n "$verbose" && flags="$flags -v"
+ test -n "$force" && flags="$flags -f"
+ GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
+ git fetch--tool $flags native-store \
+ "$remote" "$remote_nick" "$refs"
+ )
+ ) || exit
+
+}
+
+fetch_per_ref () {
+ reflist="$1"
+ refs=
+ rref=
+
+ for ref in $reflist
+ do
+ refs="$refs$LF$ref"
+
+ # These are relative paths from $GIT_DIR, typically starting at refs/
+ # but may be HEAD
+ if expr "z$ref" : 'z\.' >/dev/null
+ then
+ not_for_merge=t
+ ref=$(expr "z$ref" : 'z\.\(.*\)')
+ else
+ not_for_merge=
+ fi
+ if expr "z$ref" : 'z+' >/dev/null
+ then
+ single_force=t
+ ref=$(expr "z$ref" : 'z+\(.*\)')
+ else
+ single_force=
+ fi
+ remote_name=$(expr "z$ref" : 'z\([^:]*\):')
+ local_name=$(expr "z$ref" : 'z[^:]*:\(.*\)')
+
+ rref="$rref$LF$remote_name"
+
+ # There are transports that can fetch only one head at a time...
+ case "$remote" in
+ http://* | https://* | ftp://*)
+ test -n "$shallow_depth" &&
+ die "shallow clone with http not supported"
+ proto=`expr "$remote" : '\([^:]*\):'`
+ if [ -n "$GIT_SSL_NO_VERIFY" ]; then
+ curl_extra_args="-k"
+ fi
+ if [ -n "$GIT_CURL_FTP_NO_EPSV" -o \
+ "`git config --bool http.noEPSV`" = true ]; then
+ noepsv_opt="--disable-epsv"
+ fi
+
+ # Find $remote_name from ls-remote output.
+ head=$(echo "$ls_remote_result" | \
+ git fetch--tool -s pick-rref "$remote_name" "-")
+ expr "z$head" : "z$_x40\$" >/dev/null ||
+ die "No such ref $remote_name at $remote"
+ echo >&2 "Fetching $remote_name from $remote using $proto"
+ case "$quiet" in '') v=-v ;; *) v= ;; esac
+ git-http-fetch $v -a "$head" "$remote" || exit
+ ;;
+ rsync://*)
+ test -n "$shallow_depth" &&
+ die "shallow clone with rsync not supported"
+ TMP_HEAD="$GIT_DIR/TMP_HEAD"
+ rsync -L -q "$remote/$remote_name" "$TMP_HEAD" || exit 1
+ head=$(git rev-parse --verify TMP_HEAD)
+ rm -f "$TMP_HEAD"
+ case "$quiet" in '') v=-v ;; *) v= ;; esac
+ test "$rsync_slurped_objects" || {
+ rsync -a $v --ignore-existing --exclude info \
+ "$remote/objects/" "$GIT_OBJECT_DIRECTORY/" || exit
+
+ # Look at objects/info/alternates for rsync -- http will
+ # support it natively and git native ones will do it on
+ # the remote end. Not having that file is not a crime.
+ rsync -q "$remote/objects/info/alternates" \
+ "$GIT_DIR/TMP_ALT" 2>/dev/null ||
+ rm -f "$GIT_DIR/TMP_ALT"
+ if test -f "$GIT_DIR/TMP_ALT"
+ then
+ resolve_alternates "$remote" <"$GIT_DIR/TMP_ALT" |
+ while read alt
+ do
+ case "$alt" in 'bad alternate: '*) die "$alt";; esac
+ echo >&2 "Getting alternate: $alt"
+ rsync -av --ignore-existing --exclude info \
+ "$alt" "$GIT_OBJECT_DIRECTORY/" || exit
+ done
+ rm -f "$GIT_DIR/TMP_ALT"
+ fi
+ rsync_slurped_objects=t
+ }
+ ;;
+ esac
+
+ append_fetch_head "$head" "$remote" \
+ "$remote_name" "$remote_nick" "$local_name" "$not_for_merge" || exit
+
+ done
+
+}
+
+fetch_main () {
+ case "$remote" in
+ http://* | https://* | ftp://* | rsync://* )
+ fetch_per_ref "$@"
+ ;;
+ *)
+ fetch_all_at_once "$@"
+ ;;
+ esac
+}
+
+fetch_main "$reflist" || exit
+
+# automated tag following
+case "$no_tags$tags" in
+'')
+ case "$reflist" in
+ *:refs/*)
+ # effective only when we are following remote branch
+ # using local tracking branch.
+ taglist=$(IFS=' ' &&
+ echo "$ls_remote_result" |
+ git show-ref --exclude-existing=refs/tags/ |
+ while read sha1 name
+ do
+ git cat-file -t "$sha1" >/dev/null 2>&1 || continue
+ echo >&2 "Auto-following $name"
+ echo ".${name}:${name}"
+ done)
+ esac
+ case "$taglist" in
+ '') ;;
+ ?*)
+ # do not deepen a shallow tree when following tags
+ shallow_depth=
+ fetch_main "$taglist" || exit ;;
+ esac
+esac
+
+# If the original head was empty (i.e. no "master" yet), or
+# if we were told not to worry, we do not have to check.
+case "$orig_head" in
+'')
+ ;;
+?*)
+ curr_head=$(git rev-parse --verify HEAD 2>/dev/null)
+ if test "$curr_head" != "$orig_head"
+ then
+ git update-ref \
+ -m "$GIT_REFLOG_ACTION: Undoing incorrectly fetched HEAD." \
+ HEAD "$orig_head"
+ die "Cannot fetch into the current branch."
+ fi
+ ;;
+esac
. git-sh-setup
no_prune=:
-while case $# in 0) break ;; esac
+while test $# != 0
do
case "$1" in
--prune)
update= reset_type=--mixed
unset rev
-while case $# in 0) break ;; esac
+while test $# != 0
do
case "$1" in
--mixed | --soft | --hard)
--- /dev/null
+#!/usr/bin/perl -w
+
+# This tool is copyright (c) 2005, Matthias Urlichs.
+# It is released under the Gnu Public License, version 2.
+#
+# The basic idea is to pull and analyze SVN changes.
+#
+# Checking out the files is done by a single long-running SVN connection.
+#
+# The head revision is on branch "origin" by default.
+# You can change that with the '-o' option.
+
+use strict;
+use warnings;
+use Getopt::Std;
+use File::Copy;
+use File::Spec;
+use File::Temp qw(tempfile);
+use File::Path qw(mkpath);
+use File::Basename qw(basename dirname);
+use Time::Local;
+use IO::Pipe;
+use POSIX qw(strftime dup2);
+use IPC::Open2;
+use SVN::Core;
+use SVN::Ra;
+
+die "Need SVN:Core 1.2.1 or better" if $SVN::Core::VERSION lt "1.2.1";
+
+$SIG{'PIPE'}="IGNORE";
+$ENV{'TZ'}="UTC";
+
+our($opt_h,$opt_o,$opt_v,$opt_u,$opt_C,$opt_i,$opt_m,$opt_M,$opt_t,$opt_T,
+ $opt_b,$opt_r,$opt_I,$opt_A,$opt_s,$opt_l,$opt_d,$opt_D,$opt_S,$opt_F,
+ $opt_P,$opt_R);
+
+sub usage() {
+ print STDERR <<END;
+Usage: ${\basename $0} # fetch/update GIT from SVN
+ [-o branch-for-HEAD] [-h] [-v] [-l max_rev] [-R repack_each_revs]
+ [-C GIT_repository] [-t tagname] [-T trunkname] [-b branchname]
+ [-d|-D] [-i] [-u] [-r] [-I ignorefilename] [-s start_chg]
+ [-m] [-M regex] [-A author_file] [-S] [-F] [-P project_name] [SVN_URL]
+END
+ exit(1);
+}
+
+getopts("A:b:C:dDFhiI:l:mM:o:rs:t:T:SP:R:uv") or usage();
+usage if $opt_h;
+
+my $tag_name = $opt_t || "tags";
+my $trunk_name = defined $opt_T ? $opt_T : "trunk";
+my $branch_name = $opt_b || "branches";
+my $project_name = $opt_P || "";
+$project_name = "/" . $project_name if ($project_name);
+my $repack_after = $opt_R || 1000;
+my $root_pool = SVN::Pool->new_default;
+
+@ARGV == 1 or @ARGV == 2 or usage();
+
+$opt_o ||= "origin";
+$opt_s ||= 1;
+my $git_tree = $opt_C;
+$git_tree ||= ".";
+
+my $svn_url = $ARGV[0];
+my $svn_dir = $ARGV[1];
+
+our @mergerx = ();
+if ($opt_m) {
+ my $branch_esc = quotemeta ($branch_name);
+ my $trunk_esc = quotemeta ($trunk_name);
+ @mergerx =
+ (
+ qr!\b(?:merg(?:ed?|ing))\b.*?\b((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
+ qr!\b(?:from|of)\W+((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
+ qr!\b(?:from|of)\W+(?:the )?([\w\.\-]+)[-\s]branch\b!i
+ );
+}
+if ($opt_M) {
+ unshift (@mergerx, qr/$opt_M/);
+}
+
+# Absolutize filename now, since we will have chdir'ed by the time we
+# get around to opening it.
+$opt_A = File::Spec->rel2abs($opt_A) if $opt_A;
+
+our %users = ();
+our $users_file = undef;
+sub read_users($) {
+ $users_file = File::Spec->rel2abs(@_);
+ die "Cannot open $users_file\n" unless -f $users_file;
+ open(my $authors,$users_file);
+ while(<$authors>) {
+ chomp;
+ next unless /^(\S+?)\s*=\s*(.+?)\s*<(.+)>\s*$/;
+ (my $user,my $name,my $email) = ($1,$2,$3);
+ $users{$user} = [$name,$email];
+ }
+ close($authors);
+}
+
+select(STDERR); $|=1; select(STDOUT);
+
+
+package SVNconn;
+# Basic SVN connection.
+# We're only interested in connecting and downloading, so ...
+
+use File::Spec;
+use File::Temp qw(tempfile);
+use POSIX qw(strftime dup2);
+use Fcntl qw(SEEK_SET);
+
+sub new {
+ my($what,$repo) = @_;
+ $what=ref($what) if ref($what);
+
+ my $self = {};
+ $self->{'buffer'} = "";
+ bless($self,$what);
+
+ $repo =~ s#/+$##;
+ $self->{'fullrep'} = $repo;
+ $self->conn();
+
+ return $self;
+}
+
+sub conn {
+ my $self = shift;
+ my $repo = $self->{'fullrep'};
+ my $auth = SVN::Core::auth_open ([SVN::Client::get_simple_provider,
+ SVN::Client::get_ssl_server_trust_file_provider,
+ SVN::Client::get_username_provider]);
+ my $s = SVN::Ra->new(url => $repo, auth => $auth, pool => $root_pool);
+ die "SVN connection to $repo: $!\n" unless defined $s;
+ $self->{'svn'} = $s;
+ $self->{'repo'} = $repo;
+ $self->{'maxrev'} = $s->get_latest_revnum();
+}
+
+sub file {
+ my($self,$path,$rev) = @_;
+
+ my ($fh, $name) = tempfile('gitsvn.XXXXXX',
+ DIR => File::Spec->tmpdir(), UNLINK => 1);
+
+ print "... $rev $path ...\n" if $opt_v;
+ my (undef, $properties);
+ $path =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ eval { (undef, $properties)
+ = $self->{'svn'}->get_file($path,$rev,$fh); };
+ if($@) {
+ return undef if $@ =~ /Attempted to get checksum/;
+ die $@;
+ }
+ my $mode;
+ if (exists $properties->{'svn:executable'}) {
+ $mode = '100755';
+ } elsif (exists $properties->{'svn:special'}) {
+ my ($special_content, $filesize);
+ $filesize = tell $fh;
+ seek $fh, 0, SEEK_SET;
+ read $fh, $special_content, $filesize;
+ if ($special_content =~ s/^link //) {
+ $mode = '120000';
+ seek $fh, 0, SEEK_SET;
+ truncate $fh, 0;
+ print $fh $special_content;
+ } else {
+ die "unexpected svn:special file encountered";
+ }
+ } else {
+ $mode = '100644';
+ }
+ close ($fh);
+
+ return ($name, $mode);
+}
+
+sub ignore {
+ my($self,$path,$rev) = @_;
+
+ print "... $rev $path ...\n" if $opt_v;
+ $path =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ my (undef,undef,$properties)
+ = $self->{'svn'}->get_dir($path,$rev,undef);
+ if (exists $properties->{'svn:ignore'}) {
+ my ($fh, $name) = tempfile('gitsvn.XXXXXX',
+ DIR => File::Spec->tmpdir(),
+ UNLINK => 1);
+ print $fh $properties->{'svn:ignore'};
+ close($fh);
+ return $name;
+ } else {
+ return undef;
+ }
+}
+
+sub dir_list {
+ my($self,$path,$rev) = @_;
+ $path =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ my ($dirents,undef,$properties)
+ = $self->{'svn'}->get_dir($path,$rev,undef);
+ return $dirents;
+}
+
+package main;
+use URI;
+
+our $svn = $svn_url;
+$svn .= "/$svn_dir" if defined $svn_dir;
+my $svn2 = SVNconn->new($svn);
+$svn = SVNconn->new($svn);
+
+my $lwp_ua;
+if($opt_d or $opt_D) {
+ $svn_url = URI->new($svn_url)->canonical;
+ if($opt_D) {
+ $svn_dir =~ s#/*$#/#;
+ } else {
+ $svn_dir = "";
+ }
+ if ($svn_url->scheme eq "http") {
+ use LWP::UserAgent;
+ $lwp_ua = LWP::UserAgent->new(keep_alive => 1, requests_redirectable => []);
+ } else {
+ print STDERR "Warning: not HTTP; turning off direct file access\n";
+ $opt_d=0;
+ }
+}
+
+sub pdate($) {
+ my($d) = @_;
+ $d =~ m#(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)#
+ or die "Unparseable date: $d\n";
+ my $y=$1; $y-=1900 if $y>1900;
+ return timegm($6||0,$5,$4,$3,$2-1,$y);
+}
+
+sub getwd() {
+ my $pwd = `pwd`;
+ chomp $pwd;
+ return $pwd;
+}
+
+
+sub get_headref($$) {
+ my $name = shift;
+ my $git_dir = shift;
+ my $sha;
+
+ if (open(C,"$git_dir/refs/heads/$name")) {
+ chomp($sha = <C>);
+ close(C);
+ length($sha) == 40
+ or die "Cannot get head id for $name ($sha): $!\n";
+ }
+ return $sha;
+}
+
+
+-d $git_tree
+ or mkdir($git_tree,0777)
+ or die "Could not create $git_tree: $!";
+chdir($git_tree);
+
+my $orig_branch = "";
+my $forward_master = 0;
+my %branches;
+
+my $git_dir = $ENV{"GIT_DIR"} || ".git";
+$git_dir = getwd()."/".$git_dir unless $git_dir =~ m#^/#;
+$ENV{"GIT_DIR"} = $git_dir;
+my $orig_git_index;
+$orig_git_index = $ENV{GIT_INDEX_FILE} if exists $ENV{GIT_INDEX_FILE};
+my ($git_ih, $git_index) = tempfile('gitXXXXXX', SUFFIX => '.idx',
+ DIR => File::Spec->tmpdir());
+close ($git_ih);
+$ENV{GIT_INDEX_FILE} = $git_index;
+my $maxnum = 0;
+my $last_rev = "";
+my $last_branch;
+my $current_rev = $opt_s || 1;
+unless(-d $git_dir) {
+ system("git-init");
+ die "Cannot init the GIT db at $git_tree: $?\n" if $?;
+ system("git-read-tree");
+ die "Cannot init an empty tree: $?\n" if $?;
+
+ $last_branch = $opt_o;
+ $orig_branch = "";
+} else {
+ -f "$git_dir/refs/heads/$opt_o"
+ or die "Branch '$opt_o' does not exist.\n".
+ "Either use the correct '-o branch' option,\n".
+ "or import to a new repository.\n";
+
+ -f "$git_dir/svn2git"
+ or die "'$git_dir/svn2git' does not exist.\n".
+ "You need that file for incremental imports.\n";
+ open(F, "git-symbolic-ref HEAD |") or
+ die "Cannot run git-symbolic-ref: $!\n";
+ chomp ($last_branch = <F>);
+ $last_branch = basename($last_branch);
+ close(F);
+ unless($last_branch) {
+ warn "Cannot read the last branch name: $! -- assuming 'master'\n";
+ $last_branch = "master";
+ }
+ $orig_branch = $last_branch;
+ $last_rev = get_headref($orig_branch, $git_dir);
+ if (-f "$git_dir/SVN2GIT_HEAD") {
+ die <<EOM;
+SVN2GIT_HEAD exists.
+Make sure your working directory corresponds to HEAD and remove SVN2GIT_HEAD.
+You may need to run
+
+ git-read-tree -m -u SVN2GIT_HEAD HEAD
+EOM
+ }
+ system('cp', "$git_dir/HEAD", "$git_dir/SVN2GIT_HEAD");
+
+ $forward_master =
+ $opt_o ne 'master' && -f "$git_dir/refs/heads/master" &&
+ system('cmp', '-s', "$git_dir/refs/heads/master",
+ "$git_dir/refs/heads/$opt_o") == 0;
+
+ # populate index
+ system('git-read-tree', $last_rev);
+ die "read-tree failed: $?\n" if $?;
+
+ # Get the last import timestamps
+ open my $B,"<", "$git_dir/svn2git";
+ while(<$B>) {
+ chomp;
+ my($num,$branch,$ref) = split;
+ $branches{$branch}{$num} = $ref;
+ $branches{$branch}{"LAST"} = $ref;
+ $current_rev = $num+1 if $current_rev <= $num;
+ }
+ close($B);
+}
+-d $git_dir
+ or die "Could not create git subdir ($git_dir).\n";
+
+my $default_authors = "$git_dir/svn-authors";
+if ($opt_A) {
+ read_users($opt_A);
+ copy($opt_A,$default_authors) or die "Copy failed: $!";
+} else {
+ read_users($default_authors) if -f $default_authors;
+}
+
+open BRANCHES,">>", "$git_dir/svn2git";
+
+sub node_kind($$) {
+ my ($svnpath, $revision) = @_;
+ $svnpath =~ s#^/*##;
+ my $subpool = SVN::Pool::new_default_sub;
+ my $kind = $svn->{'svn'}->check_path($svnpath,$revision);
+ return $kind;
+}
+
+sub get_file($$$) {
+ my($svnpath,$rev,$path) = @_;
+
+ # now get it
+ my ($name,$mode);
+ if($opt_d) {
+ my($req,$res);
+
+ # /svn/!svn/bc/2/django/trunk/django-docs/build.py
+ my $url=$svn_url->clone();
+ $url->path($url->path."/!svn/bc/$rev/$svn_dir$svnpath");
+ print "... $path...\n" if $opt_v;
+ $req = HTTP::Request->new(GET => $url);
+ $res = $lwp_ua->request($req);
+ if ($res->is_success) {
+ my $fh;
+ ($fh, $name) = tempfile('gitsvn.XXXXXX',
+ DIR => File::Spec->tmpdir(), UNLINK => 1);
+ print $fh $res->content;
+ close($fh) or die "Could not write $name: $!\n";
+ } else {
+ return undef if $res->code == 301; # directory?
+ die $res->status_line." at $url\n";
+ }
+ $mode = '0644'; # can't obtain mode via direct http request?
+ } else {
+ ($name,$mode) = $svn->file("$svnpath",$rev);
+ return undef unless defined $name;
+ }
+
+ my $pid = open(my $F, '-|');
+ die $! unless defined $pid;
+ if (!$pid) {
+ exec("git-hash-object", "-w", $name)
+ or die "Cannot create object: $!\n";
+ }
+ my $sha = <$F>;
+ chomp $sha;
+ close $F;
+ unlink $name;
+ return [$mode, $sha, $path];
+}
+
+sub get_ignore($$$$$) {
+ my($new,$old,$rev,$path,$svnpath) = @_;
+
+ return unless $opt_I;
+ my $name = $svn->ignore("$svnpath",$rev);
+ if ($path eq '/') {
+ $path = $opt_I;
+ } else {
+ $path = File::Spec->catfile($path,$opt_I);
+ }
+ if (defined $name) {
+ my $pid = open(my $F, '-|');
+ die $! unless defined $pid;
+ if (!$pid) {
+ exec("git-hash-object", "-w", $name)
+ or die "Cannot create object: $!\n";
+ }
+ my $sha = <$F>;
+ chomp $sha;
+ close $F;
+ unlink $name;
+ push(@$new,['0644',$sha,$path]);
+ } elsif (defined $old) {
+ push(@$old,$path);
+ }
+}
+
+sub project_path($$)
+{
+ my ($path, $project) = @_;
+
+ $path = "/".$path unless ($path =~ m#^\/#) ;
+ return $1 if ($path =~ m#^$project\/(.*)$#);
+
+ $path =~ s#\.#\\\.#g;
+ $path =~ s#\+#\\\+#g;
+ return "/" if ($project =~ m#^$path.*$#);
+
+ return undef;
+}
+
+sub split_path($$) {
+ my($rev,$path) = @_;
+ my $branch;
+
+ if($path =~ s#^/\Q$tag_name\E/([^/]+)/?##) {
+ $branch = "/$1";
+ } elsif($path =~ s#^/\Q$trunk_name\E/?##) {
+ $branch = "/";
+ } elsif($path =~ s#^/\Q$branch_name\E/([^/]+)/?##) {
+ $branch = $1;
+ } else {
+ my %no_error = (
+ "/" => 1,
+ "/$tag_name" => 1,
+ "/$branch_name" => 1
+ );
+ print STDERR "$rev: Unrecognized path: $path\n" unless (defined $no_error{$path});
+ return ()
+ }
+ if ($path eq "") {
+ $path = "/";
+ } elsif ($project_name) {
+ $path = project_path($path, $project_name);
+ }
+ return ($branch,$path);
+}
+
+sub branch_rev($$) {
+
+ my ($srcbranch,$uptorev) = @_;
+
+ my $bbranches = $branches{$srcbranch};
+ my @revs = reverse sort { ($a eq 'LAST' ? 0 : $a) <=> ($b eq 'LAST' ? 0 : $b) } keys %$bbranches;
+ my $therev;
+ foreach my $arev(@revs) {
+ next if ($arev eq 'LAST');
+ if ($arev <= $uptorev) {
+ $therev = $arev;
+ last;
+ }
+ }
+ return $therev;
+}
+
+sub expand_svndir($$$);
+
+sub expand_svndir($$$)
+{
+ my ($svnpath, $rev, $path) = @_;
+ my @list;
+ get_ignore(\@list, undef, $rev, $path, $svnpath);
+ my $dirents = $svn->dir_list($svnpath, $rev);
+ foreach my $p(keys %$dirents) {
+ my $kind = node_kind($svnpath.'/'.$p, $rev);
+ if ($kind eq $SVN::Node::file) {
+ my $f = get_file($svnpath.'/'.$p, $rev, $path.'/'.$p);
+ push(@list, $f) if $f;
+ } elsif ($kind eq $SVN::Node::dir) {
+ push(@list,
+ expand_svndir($svnpath.'/'.$p, $rev, $path.'/'.$p));
+ }
+ }
+ return @list;
+}
+
+sub copy_path($$$$$$$$) {
+ # Somebody copied a whole subdirectory.
+ # We need to find the index entries from the old version which the
+ # SVN log entry points to, and add them to the new place.
+
+ my($newrev,$newbranch,$path,$oldpath,$rev,$node_kind,$new,$parents) = @_;
+
+ my($srcbranch,$srcpath) = split_path($rev,$oldpath);
+ unless(defined $srcbranch && defined $srcpath) {
+ print "Path not found when copying from $oldpath @ $rev.\n".
+ "Will try to copy from original SVN location...\n"
+ if $opt_v;
+ push (@$new, expand_svndir($oldpath, $rev, $path));
+ return;
+ }
+ my $therev = branch_rev($srcbranch, $rev);
+ my $gitrev = $branches{$srcbranch}{$therev};
+ unless($gitrev) {
+ print STDERR "$newrev:$newbranch: could not find $oldpath \@ $rev\n";
+ return;
+ }
+ if ($srcbranch ne $newbranch) {
+ push(@$parents, $branches{$srcbranch}{'LAST'});
+ }
+ print "$newrev:$newbranch:$path: copying from $srcbranch:$srcpath @ $rev\n" if $opt_v;
+ if ($node_kind eq $SVN::Node::dir) {
+ $srcpath =~ s#/*$#/#;
+ }
+
+ my $pid = open my $f,'-|';
+ die $! unless defined $pid;
+ if (!$pid) {
+ exec("git-ls-tree","-r","-z",$gitrev,$srcpath)
+ or die $!;
+ }
+ local $/ = "\0";
+ while(<$f>) {
+ chomp;
+ my($m,$p) = split(/\t/,$_,2);
+ my($mode,$type,$sha1) = split(/ /,$m);
+ next if $type ne "blob";
+ if ($node_kind eq $SVN::Node::dir) {
+ $p = $path . substr($p,length($srcpath)-1);
+ } else {
+ $p = $path;
+ }
+ push(@$new,[$mode,$sha1,$p]);
+ }
+ close($f) or
+ print STDERR "$newrev:$newbranch: could not list files in $oldpath \@ $rev\n";
+}
+
+sub commit {
+ my($branch, $changed_paths, $revision, $author, $date, $message) = @_;
+ my($committer_name,$committer_email,$dest);
+ my($author_name,$author_email);
+ my(@old,@new,@parents);
+
+ if (not defined $author or $author eq "") {
+ $committer_name = $committer_email = "unknown";
+ } elsif (defined $users_file) {
+ die "User $author is not listed in $users_file\n"
+ unless exists $users{$author};
+ ($committer_name,$committer_email) = @{$users{$author}};
+ } elsif ($author =~ /^(.*?)\s+<(.*)>$/) {
+ ($committer_name, $committer_email) = ($1, $2);
+ } else {
+ $author =~ s/^<(.*)>$/$1/;
+ $committer_name = $committer_email = $author;
+ }
+
+ if ($opt_F && $message =~ /From:\s+(.*?)\s+<(.*)>\s*\n/) {
+ ($author_name, $author_email) = ($1, $2);
+ print "Author from From: $1 <$2>\n" if ($opt_v);;
+ } elsif ($opt_S && $message =~ /Signed-off-by:\s+(.*?)\s+<(.*)>\s*\n/) {
+ ($author_name, $author_email) = ($1, $2);
+ print "Author from Signed-off-by: $1 <$2>\n" if ($opt_v);;
+ } else {
+ $author_name = $committer_name;
+ $author_email = $committer_email;
+ }
+
+ $date = pdate($date);
+
+ my $tag;
+ my $parent;
+ if($branch eq "/") { # trunk
+ $parent = $opt_o;
+ } elsif($branch =~ m#^/(.+)#) { # tag
+ $tag = 1;
+ $parent = $1;
+ } else { # "normal" branch
+ # nothing to do
+ $parent = $branch;
+ }
+ $dest = $parent;
+
+ my $prev = $changed_paths->{"/"};
+ if($prev and $prev->[0] eq "A") {
+ delete $changed_paths->{"/"};
+ my $oldpath = $prev->[1];
+ my $rev;
+ if(defined $oldpath) {
+ my $p;
+ ($parent,$p) = split_path($revision,$oldpath);
+ if(defined $parent) {
+ if($parent eq "/") {
+ $parent = $opt_o;
+ } else {
+ $parent =~ s#^/##; # if it's a tag
+ }
+ }
+ } else {
+ $parent = undef;
+ }
+ }
+
+ my $rev;
+ if($revision > $opt_s and defined $parent) {
+ open(H,'-|',"git-rev-parse","--verify",$parent);
+ $rev = <H>;
+ close(H) or do {
+ print STDERR "$revision: cannot find commit '$parent'!\n";
+ return;
+ };
+ chop $rev;
+ if(length($rev) != 40) {
+ print STDERR "$revision: cannot find commit '$parent'!\n";
+ return;
+ }
+ $rev = $branches{($parent eq $opt_o) ? "/" : $parent}{"LAST"};
+ if($revision != $opt_s and not $rev) {
+ print STDERR "$revision: do not know ancestor for '$parent'!\n";
+ return;
+ }
+ } else {
+ $rev = undef;
+ }
+
+# if($prev and $prev->[0] eq "A") {
+# if(not $tag) {
+# unless(open(H,"> $git_dir/refs/heads/$branch")) {
+# print STDERR "$revision: Could not create branch $branch: $!\n";
+# $state=11;
+# next;
+# }
+# print H "$rev\n"
+# or die "Could not write branch $branch: $!";
+# close(H)
+# or die "Could not write branch $branch: $!";
+# }
+# }
+ if(not defined $rev) {
+ unlink($git_index);
+ } elsif ($rev ne $last_rev) {
+ print "Switching from $last_rev to $rev ($branch)\n" if $opt_v;
+ system("git-read-tree", $rev);
+ die "read-tree failed for $rev: $?\n" if $?;
+ $last_rev = $rev;
+ }
+
+ push (@parents, $rev) if defined $rev;
+
+ my $cid;
+ if($tag and not %$changed_paths) {
+ $cid = $rev;
+ } else {
+ my @paths = sort keys %$changed_paths;
+ foreach my $path(@paths) {
+ my $action = $changed_paths->{$path};
+
+ if ($action->[0] eq "R") {
+ # refer to a file/tree in an earlier commit
+ push(@old,$path); # remove any old stuff
+ }
+ if(($action->[0] eq "A") || ($action->[0] eq "R")) {
+ my $node_kind = node_kind($action->[3], $revision);
+ if ($node_kind eq $SVN::Node::file) {
+ my $f = get_file($action->[3],
+ $revision, $path);
+ if ($f) {
+ push(@new,$f) if $f;
+ } else {
+ my $opath = $action->[3];
+ print STDERR "$revision: $branch: could not fetch '$opath'\n";
+ }
+ } elsif ($node_kind eq $SVN::Node::dir) {
+ if($action->[1]) {
+ copy_path($revision, $branch,
+ $path, $action->[1],
+ $action->[2], $node_kind,
+ \@new, \@parents);
+ } else {
+ get_ignore(\@new, \@old, $revision,
+ $path, $action->[3]);
+ }
+ }
+ } elsif ($action->[0] eq "D") {
+ push(@old,$path);
+ } elsif ($action->[0] eq "M") {
+ my $node_kind = node_kind($action->[3], $revision);
+ if ($node_kind eq $SVN::Node::file) {
+ my $f = get_file($action->[3],
+ $revision, $path);
+ push(@new,$f) if $f;
+ } elsif ($node_kind eq $SVN::Node::dir) {
+ get_ignore(\@new, \@old, $revision,
+ $path, $action->[3]);
+ }
+ } else {
+ die "$revision: unknown action '".$action->[0]."' for $path\n";
+ }
+ }
+
+ while(@old) {
+ my @o1;
+ if(@old > 55) {
+ @o1 = splice(@old,0,50);
+ } else {
+ @o1 = @old;
+ @old = ();
+ }
+ my $pid = open my $F, "-|";
+ die "$!" unless defined $pid;
+ if (!$pid) {
+ exec("git-ls-files", "-z", @o1) or die $!;
+ }
+ @o1 = ();
+ local $/ = "\0";
+ while(<$F>) {
+ chomp;
+ push(@o1,$_);
+ }
+ close($F);
+
+ while(@o1) {
+ my @o2;
+ if(@o1 > 55) {
+ @o2 = splice(@o1,0,50);
+ } else {
+ @o2 = @o1;
+ @o1 = ();
+ }
+ system("git-update-index","--force-remove","--",@o2);
+ die "Cannot remove files: $?\n" if $?;
+ }
+ }
+ while(@new) {
+ my @n2;
+ if(@new > 12) {
+ @n2 = splice(@new,0,10);
+ } else {
+ @n2 = @new;
+ @new = ();
+ }
+ system("git-update-index","--add",
+ (map { ('--cacheinfo', @$_) } @n2));
+ die "Cannot add files: $?\n" if $?;
+ }
+
+ my $pid = open(C,"-|");
+ die "Cannot fork: $!" unless defined $pid;
+ unless($pid) {
+ exec("git-write-tree");
+ die "Cannot exec git-write-tree: $!\n";
+ }
+ chomp(my $tree = <C>);
+ length($tree) == 40
+ or die "Cannot get tree id ($tree): $!\n";
+ close(C)
+ or die "Error running git-write-tree: $?\n";
+ print "Tree ID $tree\n" if $opt_v;
+
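+ # Talk to git-commit-tree over two pipes: the child reads the log
+ # message from $pw and we read the new commit id back from $pr.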
+ my $pr = IO::Pipe->new() or die "Cannot open pipe: $!\n";
+ my $pw = IO::Pipe->new() or die "Cannot open pipe: $!\n";
+ $pid = fork();
+ die "Fork: $!\n" unless defined $pid;
+ unless($pid) {
+ $pr->writer();
+ $pw->reader();
+ open(OUT,">&STDOUT");
+ dup2($pw->fileno(),0);
+ dup2($pr->fileno(),1);
+ $pr->close();
+ $pw->close();
+
+ my @par = ();
+
+ # loose detection of merges
+ # based on the commit msg
+ foreach my $rx (@mergerx) {
+ if ($message =~ $rx) {
+ my $mparent = $1;
+ if ($mparent eq 'HEAD') { $mparent = $opt_o };
+ if ( -e "$git_dir/refs/heads/$mparent") {
+ $mparent = get_headref($mparent, $git_dir);
+ push (@parents, $mparent);
+ print OUT "Merge parent branch: $mparent\n" if $opt_v;
+ }
+ }
+ }
+ my %seen_parents = ();
+ my @unique_parents = grep { ! $seen_parents{$_} ++ } @parents;
+ foreach my $bparent (@unique_parents) {
+ push @par, '-p', $bparent;
+ print OUT "Merge parent branch: $bparent\n" if $opt_v;
+ }
+
+ exec("env",
+ "GIT_AUTHOR_NAME=$author_name",
+ "GIT_AUTHOR_EMAIL=$author_email",
+ "GIT_AUTHOR_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
+ "GIT_COMMITTER_NAME=$committer_name",
+ "GIT_COMMITTER_EMAIL=$committer_email",
+ "GIT_COMMITTER_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
+ "git-commit-tree", $tree,@par);
+ die "Cannot exec git-commit-tree: $!\n";
+ }
+ $pw->writer();
+ $pr->reader();
+
+ $message =~ s/[\s\n]+\z//;
+ $message = "r$revision: $message" if $opt_r;
+
+ print $pw "$message\n"
+ or die "Error writing to git-commit-tree: $!\n";
+ $pw->close();
+
+ print "Committed change $revision:$branch ".strftime("%Y-%m-%d %H:%M:%S",gmtime($date)).")\n" if $opt_v;
+ chomp($cid = <$pr>);
+ length($cid) == 40
+ or die "Cannot get commit id ($cid): $!\n";
+ print "Commit ID $cid\n" if $opt_v;
+ $pr->close();
+
+ waitpid($pid,0);
+ die "Error running git-commit-tree: $?\n" if $?;
+ }
+
+ if (not defined $cid) {
+ $cid = $branches{"/"}{"LAST"};
+ }
+
+ if(not defined $dest) {
+ print "... no known parent\n" if $opt_v;
+ } elsif(not $tag) {
+ print "Writing to refs/heads/$dest\n" if $opt_v;
+ open(C,">$git_dir/refs/heads/$dest") and
+ print C ("$cid\n") and
+ close(C)
+ or die "Cannot write branch $dest for update: $!\n";
+ }
+
+ if ($tag) {
+ $last_rev = "-" if %$changed_paths;
+ # the tag was 'complex', i.e. did not refer to a "real" revision
+
+ $dest =~ tr/_/\./ if $opt_u;
+
+ system('git-tag', '-f', $dest, $cid) == 0
+ or die "Cannot create tag $dest: $!\n";
+
+ print "Created tag '$dest' on '$branch'\n" if $opt_v;
+ }
+ $branches{$branch}{"LAST"} = $cid;
+ $branches{$branch}{$revision} = $cid;
+ $last_rev = $cid;
+ print BRANCHES "$revision $branch $cid\n";
+ print "DONE: $revision $dest $cid\n" if $opt_v;
+}
+
+sub commit_all {
+ # Recursive use of the SVN connection does not work
+ local $svn = $svn2;
+
+ my ($changed_paths, $revision, $author, $date, $message) = @_;
+ my %p;
+ while(my($path,$action) = each %$changed_paths) {
+ $p{$path} = [ $action->action,$action->copyfrom_path, $action->copyfrom_rev, $path ];
+ }
+ $changed_paths = \%p;
+
+ my %done;
+ my @col;
+ my $pref;
+ my $branch;
+
+ while(my($path,$action) = each %$changed_paths) {
+ ($branch,$path) = split_path($revision,$path);
+ next if not defined $branch;
+ next if not defined $path;
+ $done{$branch}{$path} = $action;
+ }
+ while(($branch,$changed_paths) = each %done) {
+ commit($branch, $changed_paths, $revision, $author, $date, $message);
+ }
+}
+
+$opt_l = $svn->{'maxrev'} if not defined $opt_l or $opt_l > $svn->{'maxrev'};
+
+if ($opt_l < $current_rev) {
+ print "Up to date: no new revisions to fetch!\n" if $opt_v;
+ unlink("$git_dir/SVN2GIT_HEAD");
+ exit;
+}
+
+print "Processing from $current_rev to $opt_l ...\n" if $opt_v;
+
+my $from_rev;
+my $to_rev = $current_rev - 1;
+
+my $subpool = SVN::Pool::new_default_sub;
+while ($to_rev < $opt_l) {
+ $subpool->clear;
+ $from_rev = $to_rev + 1;
+ $to_rev = $from_rev + $repack_after;
+ $to_rev = $opt_l if $opt_l < $to_rev;
+ print "Fetching from $from_rev to $to_rev ...\n" if $opt_v;
+ $svn->{'svn'}->get_log("/",$from_rev,$to_rev,0,1,1,\&commit_all);
+ my $pid = fork();
+ die "Fork: $!\n" unless defined $pid;
+ unless($pid) {
+ exec("git-repack", "-d")
+ or die "Cannot repack: $!\n";
+ }
+ waitpid($pid, 0);
+}
+
+
+unlink($git_index);
+
+if (defined $orig_git_index) {
+ $ENV{GIT_INDEX_FILE} = $orig_git_index;
+} else {
+ delete $ENV{GIT_INDEX_FILE};
+}
+
+# Now switch back to the branch we were in before all of this happened
+if($orig_branch) {
+ print "DONE\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
+ system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
+ if $forward_master;
+ unless ($opt_i) {
+ system('git-read-tree', '-m', '-u', 'SVN2GIT_HEAD', 'HEAD');
+ die "read-tree failed: $?\n" if $?;
+ }
+} else {
+ $orig_branch = "master";
+ print "DONE; creating $orig_branch branch\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
+ system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
+ unless -f "$git_dir/refs/heads/master";
+ system('git-update-ref', 'HEAD', "$orig_branch");
+ unless ($opt_i) {
+ system('git checkout');
+ die "checkout failed: $?\n" if $?;
+ }
+}
+unlink("$git_dir/SVN2GIT_HEAD");
+close(BRANCHES);
--- /dev/null
+git-svnimport(1)
+================
+v0.1, July 2005
+
+NAME
+----
+git-svnimport - Import an SVN repository into git
+
+
+SYNOPSIS
+--------
+[verse]
+'git-svnimport' [ -o <branch-for-HEAD> ] [ -h ] [ -v ] [ -d | -D ]
+ [ -C <GIT_repository> ] [ -i ] [ -u ] [ -l limit_rev ]
+ [ -b branch_subdir ] [ -T trunk_subdir ] [ -t tag_subdir ]
+ [ -s start_chg ] [ -m ] [ -r ] [ -M regex ]
+ [ -I <ignorefile_name> ] [ -A <author_file> ]
+ [ -R <repack_each_revs> ] [ -P <path_from_trunk> ]
+ <SVN_repository_URL> [ <path> ]
+
+
+DESCRIPTION
+-----------
+Imports an SVN repository into git. It will either create a new
+repository, or incrementally import into an existing one.
+
+SVN access is done by the SVN::Perl module.
+
+git-svnimport assumes that SVN repositories are organized into one
+"trunk" directory where the main development happens, "branches/FOO"
+directories for branches, and "tags/FOO" directories for tags.
+Other subdirectories are ignored.
+
+git-svnimport creates a file ".git/svn2git", which is required for
+incremental SVN imports.
+
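+For example, a first import of a repository using the standard
+trunk/branches/tags layout might look like this (the URL and target
+directory are placeholders):
+
+------
+	git-svnimport -v -C myproject http://svn.example.com/repo
+------
+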
+OPTIONS
+-------
+-C <target-dir>::
+ The GIT repository to import to. If the directory doesn't
+ exist, it will be created. Default is the current directory.
+
+-s <start_rev>::
+ Start importing at this SVN change number. The default is 1.
++
+When importing incrementally, you might need to edit the .git/svn2git file.
+
+-i::
+ Import-only: don't perform a checkout after importing. This option
+ ensures the working directory and index remain untouched and will
+ not create them if they do not exist.
+
+-T <trunk_subdir>::
+ Name the SVN trunk. Default "trunk".
+
+-t <tag_subdir>::
+ Name the SVN subdirectory for tags. Default "tags".
+
+-b <branch_subdir>::
+ Name the SVN subdirectory for branches. Default "branches".
+
+-o <branch-for-HEAD>::
+ The 'trunk' branch from SVN is imported to the 'origin' branch within
+ the git repository. Use this option if you want to import into a
+ different branch.
+
+-r::
+ Prepend 'rX: ' to commit messages, where X is the imported
+ subversion revision.
+
+-u::
+ Replace underscores in tag names with periods.
+
+-I <ignorefile_name>::
+ Import the svn:ignore directory property to files with this
+ name in each directory. (The Subversion and GIT ignore
+ syntaxes are similar enough that using the Subversion patterns
+ directly with "-I .gitignore" will almost always just work.)
+
+-A <author_file>::
+ Read a file with lines of the form
++
+------
+ username = User's Full Name <email@addr.es>
+
+------
++
+and use "User's Full Name <email@addr.es>" as the GIT
+author and committer for Subversion commits made by
+"username". If encountering a commit made by a user not in the
+list, abort.
++
+For convenience, this data is saved to $GIT_DIR/svn-authors
+each time the -A option is provided, and read from that same
+file each time git-svnimport is run with an existing GIT
+repository without -A.
+
+-m::
+ Attempt to detect merges based on the commit message. This option
+ will enable default regexes that try to capture the source
+ branch name from the commit message.
+
+-M <regex>::
+ Attempt to detect merges based on the commit message with a custom
+ regex. It can be used with -m to enable the default regexes as well.
+ You must escape forward slashes.
+
+-l <max_rev>::
+ Specify a maximum revision number to pull.
++
+Formerly, this option controlled how many revisions to pull,
+due to SVN memory leaks. (These have been worked around.)
+
+-R <repack_each_revs>::
+ Specify how often the git repository should be repacked.
++
+The default value is 1000. git-svnimport will import in chunks of 1000
+revisions, and after each chunk the git repository will be repacked. To
+disable this behavior, specify a value larger than the number of
+revisions to import.
+
+-P <path_from_trunk>::
+ Partial import of the SVN tree.
++
+By default, the whole tree on the SVN trunk (/trunk) is imported.
+'-P my/proj' will import starting only from '/trunk/my/proj'.
+This option is useful when you want to import one project from an
+SVN repository which hosts multiple projects under the same trunk.
+
+-v::
+ Verbosity: let 'svnimport' report what it is doing.
+
+-d::
+ Use direct HTTP requests if possible. The "<path>" argument is used
+ only for retrieving the SVN logs; the path to the contents is
+ included in the SVN log.
+
+-D::
+ Use direct HTTP requests if possible. The "<path>" argument is used
+ for retrieving the logs, as well as for the contents.
++
+There's no safe way to automatically find out which of these options to
+use, so you need to try both. Usually, the one that's wrong will die
+with a 40x error pretty quickly.
+
+<SVN_repository_URL>::
+ The URL of the SVN module you want to import. For local
+ repositories, use "file:///absolute/path".
++
+If you're using the "-d" or "-D" option, this is the URL of the SVN
+repository itself; it usually ends in "/svn".
+
+<path>::
+ The path to the module you want to check out.
+
+-h::
+ Print a short usage message and exit.
+
+OUTPUT
+------
+If '-v' is specified, the script reports what it is doing.
+
+Otherwise, success is indicated the Unix way, i.e. by simply exiting with
+a zero exit status.
+
+Author
+------
+Written by Matthias Urlichs <smurf@smurf.noris.de>, with help from
+various participants of the git-list <git@vger.kernel.org>.
+
+Based on a cvs2git script by the same author.
+
+Documentation
+-------------
+Documentation by Matthias Urlichs <smurf@smurf.noris.de>.
+
+GIT
+---
+Part of the gitlink:git[7] suite
list=
verify=
LINES=0
-while case "$#" in 0) break ;; esac
+while test $# != 0
do
case "$1" in
-a)
. git-sh-setup
verbose=
-while case $# in 0) break;; esac
+while test $# != 0
do
case "$1" in
-v|--v|--ve|--ver|--verb|--verbo|--verbos|--verbose)
if os.system(cmd) != 0:
die("command failed: %s" % cmd)
+def isP4Exec(kind):
+ """Determine if a Perforce 'kind' should have execute permission
+
+ 'p4 help filetypes' gives a list of the types. A type is executable
+ if it starts with 'x', optionally preceded by one of a few letters
+ ('c', 'k' or 'u'), or if an 'x' appears after a plus sign."""
+ return (re.search(r"(^[cku]?x)|\+.*x", kind) != None)
+
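+# Illustration only: given the regex above, kinds such as "xtext", "cx"
+# and "text+x" are treated as executable, while "text" and "ktext" are not.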
def p4CmdList(cmd, stdin=None, stdin_mode='w+b'):
cmd = "p4 -G %s" % cmd
if verbose:
optparse.make_option("--dry-run", action="store_true"),
optparse.make_option("--direct", dest="directSubmit", action="store_true"),
optparse.make_option("--trust-me-like-a-fool", dest="trustMeLikeAFool", action="store_true"),
+ optparse.make_option("-M", dest="detectRename", action="store_true"),
]
self.description = "Submit changes from git to the perforce depot."
self.usage += " [name of git branch to submit into perforce depot]"
self.origin = ""
self.directSubmit = False
self.trustMeLikeAFool = False
+ self.detectRename = False
self.verbose = False
self.isWindows = (platform.system() == "Windows")
diff = self.diffStatus
else:
print "Applying %s" % (read_pipe("git log --max-count=1 --pretty=oneline %s" % id))
- diff = read_pipe_lines("git diff-tree -r --name-status \"%s^\" \"%s\"" % (id, id))
+ diffOpts = ("", "-M")[self.detectRename]
+ diff = read_pipe_lines("git diff-tree -r --name-status %s \"%s^\" \"%s\"" % (diffOpts, id, id))
filesToAdd = set()
filesToDelete = set()
editedFiles = set()
filesToDelete.add(path)
if path in filesToAdd:
filesToAdd.remove(path)
+ elif modifier == "R":
+ src, dest = line.strip().split("\t")[1:3]
+ system("p4 integrate -Dt \"%s\" \"%s\"" % (src, dest))
+ system("p4 edit \"%s\"" % (dest))
+ os.unlink(dest)
+ editedFiles.add(dest)
+ filesToDelete.add(src)
else:
die("unknown modifier %s for %s" % (modifier, path))
"and with .rej files / [w]rite the patch to a file (patch.txt) ")
if response == "s":
print "Skipping! Good luck with the next patches..."
+ for f in editedFiles:
+ system("p4 revert \"%s\"" % f);
+ for f in filesToAdd:
+ system("rm %s" %f)
return
elif response == "a":
os.system(applyPatchCmd)
data = file['data']
mode = "644"
- if file["type"].startswith("x"):
+ if isP4Exec(file["type"]):
mode = "755"
elif file["type"] == "symlink":
mode = "120000"
commands = {
"debug" : P4Debug,
"submit" : P4Submit,
+ "commit" : P4Submit,
"sync" : P4Sync,
"rebase" : P4Rebase,
"clone" : P4Clone,
import string
import fcntl
+try:
+ import gtksourceview2
+ have_gtksourceview2 = True
+except ImportError:
+ have_gtksourceview2 = False
+
try:
import gtksourceview
have_gtksourceview = True
except ImportError:
have_gtksourceview = False
- print "Running without gtksourceview module"
+
+if not have_gtksourceview2 and not have_gtksourceview:
+ print "Running without gtksourceview2 or gtksourceview module"
re_ident = re.compile('(author|committer) (?P<ident>.*) (?P<epoch>\d+) (?P<tz>[+-]\d{4})')
return time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(secs))
+def get_source_buffer_and_view():
+ if have_gtksourceview2:
+ buffer = gtksourceview2.Buffer()
+ slm = gtksourceview2.LanguageManager()
+ gsl = slm.get_language("diff")
+ buffer.set_highlight_syntax(True)
+ buffer.set_language(gsl)
+ view = gtksourceview2.View(buffer)
+ elif have_gtksourceview:
+ buffer = gtksourceview.SourceBuffer()
+ slm = gtksourceview.SourceLanguagesManager()
+ gsl = slm.get_language_from_mime_type("text/x-patch")
+ buffer.set_highlight(True)
+ buffer.set_language(gsl)
+ view = gtksourceview.SourceView(buffer)
+ else:
+ buffer = gtk.TextBuffer()
+ view = gtk.TextView(buffer)
+ return (buffer, view)
+
class CellRendererGraph(gtk.GenericCellRenderer):
"""Cell renderer for directed graph.
hpan.pack1(scrollwin, True, True)
scrollwin.show()
- if have_gtksourceview:
- self.buffer = gtksourceview.SourceBuffer()
- slm = gtksourceview.SourceLanguagesManager()
- gsl = slm.get_language_from_mime_type("text/x-patch")
- self.buffer.set_highlight(True)
- self.buffer.set_language(gsl)
- sourceview = gtksourceview.SourceView(self.buffer)
- else:
- self.buffer = gtk.TextBuffer()
- sourceview = gtk.TextView(self.buffer)
-
+ (self.buffer, sourceview) = get_source_buffer_and_view()
sourceview.set_editable(False)
sourceview.modify_font(pango.FontDescription("Monospace"))
vbox.pack_start(scrollwin, expand=True, fill=True)
scrollwin.show()
- if have_gtksourceview:
- self.message_buffer = gtksourceview.SourceBuffer()
- slm = gtksourceview.SourceLanguagesManager()
- gsl = slm.get_language_from_mime_type("text/x-patch")
- self.message_buffer.set_highlight(True)
- self.message_buffer.set_language(gsl)
- sourceview = gtksourceview.SourceView(self.message_buffer)
- else:
- self.message_buffer = gtk.TextBuffer()
- sourceview = gtk.TextView(self.message_buffer)
+ (self.message_buffer, sourceview) = get_source_buffer_and_view()
sourceview.set_editable(False)
sourceview.modify_font(pango.FontDescription("Monospace"))
hgchildren = {}
# Current branch for each hg revision
hgbranch = {}
+# Number of new changesets converted from hg
+hgnewcsets = 0
#------------------------------------------------------------------------------
options:
-s, --gitstate=FILE: name of the state to be saved/read
for incrementals
+ -n, --nrepack=INT: number of changesets that will trigger
+ a repack (default=0, -1 to deactivate)
required:
hgprj: name of the HG project to import (directory)
#------------------------------------------------------------------------------
state = ''
+opt_nrepack = 0
try:
- opts, args = getopt.getopt(sys.argv[1:], 's:t:', ['gitstate=', 'tempdir='])
+ opts, args = getopt.getopt(sys.argv[1:], 's:t:n:', ['gitstate=', 'tempdir=', 'nrepack='])
for o, a in opts:
if o in ('-s', '--gitstate'):
state = a
state = os.path.abspath(state)
-
+ if o in ('-n', '--nrepack'):
+ opt_nrepack = int(a)
if len(args) != 1:
raise Exception('params')
except:
# incremental, already seen
if hgvers.has_key(str(cset)):
continue
+ hgnewcsets += 1
# get info
prnts = os.popen('hg log -r %d | grep ^parent: | cut -f 2 -d :' % cset).readlines()
print 'record', cset, '->', vvv
hgvers[str(cset)] = vvv
-os.system('git-repack -a -d')
+if hgnewcsets >= opt_nrepack and opt_nrepack != -1:
+ os.system('git-repack -a -d')
# write the state for incrementals
if state:
# Check if we've got anyone to send to
if [ -z "$recipients" ]; then
- echo >&2 "*** hooks.recipients is not set so no email will be sent"
+ case "$refname_type" in
+ "annotated tag")
+ config_name="hooks.announcelist"
+ ;;
+ *)
+ config_name="hooks.mailinglist"
+ ;;
+ esac
+ echo >&2 "*** $config_name is not set so no email will be sent"
echo >&2 "*** for $refname update $oldrev->$newrev"
exit 0
fi
# --- Email (all stdout will be the email)
# Generate header
cat <<-EOF
- From: $committer
To: $recipients
Subject: ${EMAILPREFIX}$projectdesc $refname_type, $short_refname, ${change_type}d. $describe
X-Git-Refname: $refname
echo " via $rev ($revtype)"
done
- if [ -z "$fastforward" ]; then
+ if [ "$fast_forward" ]; then
echo " from $oldrev ($oldrev_type)"
else
# 1. Existing revisions were removed. In this case newrev is a
echo $LOGEND
}
+send_mail()
+{
+ if [ -n "$envelopesender" ]; then
+ /usr/sbin/sendmail -t -f "$envelopesender"
+ else
+ /usr/sbin/sendmail -t
+ fi
+}
+
# ---------------------------- main()
# --- Constants
# resend an email; they could redirect the output to sendmail themselves
PAGER= generate_email $2 $3 $1
else
- if [ -n "$envelopesender" ]; then
- envelopesender="-f '$envelopesender'"
- fi
-
while read oldrev newrev refname
do
- generate_email $oldrev $newrev $refname |
- /usr/sbin/sendmail -t $envelopesender
+ generate_email $oldrev $newrev $refname | send_mail
done
fi
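
The recipient lists the hook consults are ordinary config values; a
minimal setup might look like this (addresses are placeholders, and
the hooks.envelopesender key is assumed from the hook's variable
naming):

	git config hooks.mailinglist "commits@example.com"
	git config hooks.announcelist "announce@example.com"
	git config hooks.envelopesender "git@example.com"
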
--- /dev/null
+#!/usr/bin/perl
+#
+# Copyright (c) 2006 Josh England
+#
+# This script can be used to save/restore full permissions and ownership data
+# within a git working tree.
+#
+# To save permissions/ownership data, place this script in your .git/hooks
+# directory and enable a `pre-commit` hook with the following lines:
+# #!/bin/sh
+# SUBDIRECTORY_OK=1 . git-sh-setup
+# $GIT_DIR/hooks/setgitperms.perl -r
+#
+# To restore permissions/ownership data, place this script in your .git/hooks
+# directory and enable a `post-merge` and `post-checkout` hook with the
+# following lines:
+# #!/bin/sh
+# SUBDIRECTORY_OK=1 . git-sh-setup
+# $GIT_DIR/hooks/setgitperms.perl -w
+#
+use strict;
+use Getopt::Long;
+use File::Find;
+use File::Basename;
+
+my $usage =
+"Usage: setgitperms.perl [OPTION]... <--read|--write>
+This program uses a file `.gitmeta` to store/restore permissions and uid/gid
+info for all files/dirs tracked by git in the repository.
+
+---------------------------------Read Mode-------------------------------------
+-r, --read Reads perms/etc from working dir into a .gitmeta file
+-s, --stdout Output to stdout instead of .gitmeta
+-d, --diff Show unified diff of perms file (XOR with --stdout)
+
+---------------------------------Write Mode------------------------------------
+-w, --write Modify perms/etc in working dir to match the .gitmeta file
+-v, --verbose Be verbose
+
+\n";
+
+my ($stdout, $showdiff, $verbose, $read_mode, $write_mode);
+
+if ((@ARGV < 1) || !GetOptions(
+ "stdout", \$stdout,
+ "diff", \$showdiff,
+ "read", \$read_mode,
+ "write", \$write_mode,
+ "verbose", \$verbose,
+ )) { die $usage; }
+die $usage unless ($read_mode xor $write_mode);
+
+my $topdir = `git-rev-parse --show-cdup` or die "\n"; chomp $topdir;
+my $gitdir = $topdir . '.git';
+my $gitmeta = $topdir . '.gitmeta';
+
+if ($write_mode) {
+ # Update the working dir permissions/ownership based on data from .gitmeta
+ open (IN, "<$gitmeta") or die "Could not open $gitmeta for reading: $!\n";
+ while (defined ($_ = <IN>)) {
+ chomp;
+ if (/^(.*) mode=(\S+)\s+uid=(\d+)\s+gid=(\d+)/) {
+ # Compare recorded perms to actual perms in the working dir
+ my ($path, $mode, $uid, $gid) = ($1, $2, $3, $4);
+ my $fullpath = $topdir . $path;
+ my (undef,undef,$wmode,undef,$wuid,$wgid) = lstat($fullpath);
+ $wmode = sprintf "%04o", $wmode & 07777;
+ if ($mode ne $wmode) {
+ $verbose && print "Updating permissions on $path: old=$wmode, new=$mode\n";
+ chmod oct($mode), $fullpath;
+ }
+ if ($uid != $wuid || $gid != $wgid) {
+ if ($verbose) {
+ # Print out user/group names instead of uid/gid
+ my $pwname = getpwuid($uid);
+ my $grpname = getgrgid($gid);
+ my $wpwname = getpwuid($wuid);
+ my $wgrpname = getgrgid($wgid);
+ $pwname = $uid if !defined $pwname;
+ $grpname = $gid if !defined $grpname;
+ $wpwname = $wuid if !defined $wpwname;
+ $wgrpname = $wgid if !defined $wgrpname;
+
+ print "Updating uid/gid on $path: old=$wpwname/$wgrpname, new=$pwname/$grpname\n";
+ }
+ chown $uid, $gid, $fullpath;
+ }
+ }
+ else {
+ warn "Invalid input format in $gitmeta:\n\t$_\n";
+ }
+ }
+ close IN;
+}
+elsif ($read_mode) {
+ # Handle merge conflicts in the .gitmeta file
+ if (-e "$gitdir/MERGE_MSG") {
+ if (`grep ====== $gitmeta`) {
+ # Conflict not resolved -- abort the commit
+ print "PERMISSIONS/OWNERSHIP CONFLICT\n";
+ print " Resolve the conflict in the $gitmeta file and then run\n";
+ print " `.git/hooks/setgitperms.perl --write` to reconcile.\n";
+ exit 1;
+ }
+ elsif (`grep $gitmeta $gitdir/MERGE_MSG`) {
+ # A conflict in .gitmeta has been manually resolved. Verify that
+ # the working dir perms matches the current .gitmeta perms for
+ # each file/dir that conflicted.
+ # This is here because a `setgitperms.perl --write` was not
+ # performed due to a merge conflict, so permissions/ownership
+ # may not be consistent with the manually merged .gitmeta file.
+ my @conflict_diff = `git show \$(cat $gitdir/MERGE_HEAD)`;
+ my @conflict_files;
+ my $metadiff = 0;
+
+ # Build a list of files that conflicted from the .gitmeta diff
+ foreach my $line (@conflict_diff) {
+ if ($line =~ m|^diff --git a/$gitmeta b/$gitmeta|) {
+ $metadiff = 1;
+ }
+ elsif ($line =~ /^diff --git/) {
+ $metadiff = 0;
+ }
+ elsif ($metadiff && $line =~ /^\+(.*) mode=/) {
+ push @conflict_files, $1;
+ }
+ }
+
+ # Verify that each conflict file now has permissions consistent
+ # with the .gitmeta file
+ foreach my $file (@conflict_files) {
+ my $absfile = $topdir . $file;
+ my $gm_entry = `grep "^$file mode=" $gitmeta`;
+ if ($gm_entry =~ /mode=(\d+) uid=(\d+) gid=(\d+)/) {
+ my ($gm_mode, $gm_uid, $gm_gid) = ($1, $2, $3);
+ my (undef,undef,$mode,undef,$uid,$gid) = lstat("$absfile");
+ $mode = sprintf("%04o", $mode & 07777);
+ if (($gm_mode ne $mode) || ($gm_uid != $uid)
+ || ($gm_gid != $gid)) {
+ print "PERMISSIONS/OWNERSHIP CONFLICT\n";
+ print " Mismatch found for file: $file\n";
+ print " Run `.git/hooks/setgitperms.perl --write` to reconcile.\n";
+ exit 1;
+ }
+ }
+ else {
+ print "Warning! Permissions/ownership no longer being tracked for file: $file\n";
+ }
+ }
+ }
+ }
+
+ # No merge conflicts -- write out perms/ownership data to .gitmeta file
+ unless ($stdout) {
+ open (OUT, ">$gitmeta.tmp") or die "Could not open $gitmeta.tmp for writing: $!\n";
+ }
+
+ my @files = `git-ls-files`;
+ my %dirs;
+
+ foreach my $path (@files) {
+ chomp $path;
+ # We have to manually add stats for parent directories
+ my $parent = dirname($path);
+ while (!exists $dirs{$parent}) {
+ $dirs{$parent} = 1;
+ next if $parent eq '.';
+ printstats($parent);
+ $parent = dirname($parent);
+ }
+ # Now the git-tracked file
+ printstats($path);
+ }
+
+ # diff the temporary metadata file to see if anything has changed
+ # If no metadata has changed, don't overwrite the real file
+ # This is just so `git commit -a` doesn't try to commit a bogus update
+ unless ($stdout) {
+ if (! -e $gitmeta) {
+ rename "$gitmeta.tmp", $gitmeta;
+ }
+ else {
+ my $diff = `diff -U 0 $gitmeta $gitmeta.tmp`;
+ if ($diff ne '') {
+ rename "$gitmeta.tmp", $gitmeta;
+ }
+ else {
+ unlink "$gitmeta.tmp";
+ }
+ if ($showdiff) {
+ print $diff;
+ }
+ }
+ close OUT;
+ }
+ # Make sure the .gitmeta file is tracked
+ system("git add $gitmeta");
+}
+
+
+sub printstats {
+ my $path = $_[0];
+ $path =~ s/@/\@/g;
+ my (undef,undef,$mode,undef,$uid,$gid) = lstat($path);
+ $path =~ s/%/\%/g;
+ if ($stdout) {
+ print $path;
+ printf " mode=%04o uid=$uid gid=$gid\n", $mode & 07777;
+ }
+ else {
+ print OUT $path;
+ printf OUT " mode=%04o uid=$uid gid=$gid\n", $mode & 07777;
+ }
+}
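
For reference, the entries printstats() writes and the --write branch
parses are one line per path, for example (values hypothetical):

	src/main.c mode=0644 uid=1000 gid=1000
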
+++ /dev/null
-#include "cache.h"
-#include "blob.h"
-#include "commit.h"
-#include "tree.h"
-
-struct entry {
- unsigned char old_sha1[20];
- unsigned char new_sha1[20];
- int converted;
-};
-
-#define MAXOBJECTS (1000000)
-
-static struct entry *convert[MAXOBJECTS];
-static int nr_convert;
-
-static struct entry * convert_entry(unsigned char *sha1);
-
-static struct entry *insert_new(unsigned char *sha1, int pos)
-{
- struct entry *new = xcalloc(1, sizeof(struct entry));
- hashcpy(new->old_sha1, sha1);
- memmove(convert + pos + 1, convert + pos, (nr_convert - pos) * sizeof(struct entry *));
- convert[pos] = new;
- nr_convert++;
- if (nr_convert == MAXOBJECTS)
- die("you're kidding me - hit maximum object limit");
- return new;
-}
-
-static struct entry *lookup_entry(unsigned char *sha1)
-{
- int low = 0, high = nr_convert;
-
- while (low < high) {
- int next = (low + high) / 2;
- struct entry *n = convert[next];
- int cmp = hashcmp(sha1, n->old_sha1);
- if (!cmp)
- return n;
- if (cmp < 0) {
- high = next;
- continue;
- }
- low = next+1;
- }
- return insert_new(sha1, low);
-}
-
-static void convert_binary_sha1(void *buffer)
-{
- struct entry *entry = convert_entry(buffer);
- hashcpy(buffer, entry->new_sha1);
-}
-
-static void convert_ascii_sha1(void *buffer)
-{
- unsigned char sha1[20];
- struct entry *entry;
-
- if (get_sha1_hex(buffer, sha1))
- die("expected sha1, got '%s'", (char*) buffer);
- entry = convert_entry(sha1);
- memcpy(buffer, sha1_to_hex(entry->new_sha1), 40);
-}
-
-static unsigned int convert_mode(unsigned int mode)
-{
- unsigned int newmode;
-
- newmode = mode & S_IFMT;
- if (S_ISREG(mode))
- newmode |= (mode & 0100) ? 0755 : 0644;
- return newmode;
-}
-
-static int write_subdirectory(void *buffer, unsigned long size, const char *base, int baselen, unsigned char *result_sha1)
-{
- char *new = xmalloc(size);
- unsigned long newlen = 0;
- unsigned long used;
-
- used = 0;
- while (size) {
- int len = 21 + strlen(buffer);
- char *path = strchr(buffer, ' ');
- unsigned char *sha1;
- unsigned int mode;
- char *slash, *origpath;
-
- if (!path || strtoul_ui(buffer, 8, &mode))
- die("bad tree conversion");
- mode = convert_mode(mode);
- path++;
- if (memcmp(path, base, baselen))
- break;
- origpath = path;
- path += baselen;
- slash = strchr(path, '/');
- if (!slash) {
- newlen += sprintf(new + newlen, "%o %s", mode, path);
- new[newlen++] = '\0';
- hashcpy((unsigned char*)new + newlen, (unsigned char *) buffer + len - 20);
- newlen += 20;
-
- used += len;
- size -= len;
- buffer = (char *) buffer + len;
- continue;
- }
-
- newlen += sprintf(new + newlen, "%o %.*s", S_IFDIR, (int)(slash - path), path);
- new[newlen++] = 0;
- sha1 = (unsigned char *)(new + newlen);
- newlen += 20;
-
- len = write_subdirectory(buffer, size, origpath, slash-origpath+1, sha1);
-
- used += len;
- size -= len;
- buffer = (char *) buffer + len;
- }
-
- write_sha1_file(new, newlen, tree_type, result_sha1);
- free(new);
- return used;
-}
-
-static void convert_tree(void *buffer, unsigned long size, unsigned char *result_sha1)
-{
- void *orig_buffer = buffer;
- unsigned long orig_size = size;
-
- while (size) {
- size_t len = 1+strlen(buffer);
-
- convert_binary_sha1((char *) buffer + len);
-
- len += 20;
- if (len > size)
- die("corrupt tree object");
- size -= len;
- buffer = (char *) buffer + len;
- }
-
- write_subdirectory(orig_buffer, orig_size, "", 0, result_sha1);
-}
-
-static unsigned long parse_oldstyle_date(const char *buf)
-{
- char c, *p;
- char buffer[100];
- struct tm tm;
- const char *formats[] = {
- "%c",
- "%a %b %d %T",
- "%Z",
- "%Y",
- " %Y",
- NULL
- };
- /* We only ever did two timezones in the bad old format .. */
- const char *timezones[] = {
- "PDT", "PST", "CEST", NULL
- };
- const char **fmt = formats;
-
- p = buffer;
- while (isspace(c = *buf))
- buf++;
- while ((c = *buf++) != '\n')
- *p++ = c;
- *p++ = 0;
- buf = buffer;
- memset(&tm, 0, sizeof(tm));
- do {
- const char *next = strptime(buf, *fmt, &tm);
- if (next) {
- if (!*next)
- return mktime(&tm);
- buf = next;
- } else {
- const char **p = timezones;
- while (isspace(*buf))
- buf++;
- while (*p) {
- if (!memcmp(buf, *p, strlen(*p))) {
- buf += strlen(*p);
- break;
- }
- p++;
- }
- }
- fmt++;
- } while (*buf && *fmt);
- printf("left: %s\n", buf);
- return mktime(&tm);
-}
-
-static int convert_date_line(char *dst, void **buf, unsigned long *sp)
-{
- unsigned long size = *sp;
- char *line = *buf;
- char *next = strchr(line, '\n');
- char *date = strchr(line, '>');
- int len;
-
- if (!next || !date)
- die("missing or bad author/committer line %s", line);
- next++; date += 2;
-
- *buf = next;
- *sp = size - (next - line);
-
- len = date - line;
- memcpy(dst, line, len);
- dst += len;
-
- /* Is it already in new format? */
- if (isdigit(*date)) {
- int datelen = next - date;
- memcpy(dst, date, datelen);
- return len + datelen;
- }
-
- /*
- * Hacky hacky: one of the sparse old-style commits does not have
- * any date at all, but we can fake it by using the committer date.
- */
- if (*date == '\n' && strchr(next, '>'))
- date = strchr(next, '>')+2;
-
- return len + sprintf(dst, "%lu -0700\n", parse_oldstyle_date(date));
-}
-
-static void convert_date(void *buffer, unsigned long size, unsigned char *result_sha1)
-{
- char *new = xmalloc(size + 100);
- unsigned long newlen = 0;
-
- /* "tree <sha1>\n" */
- memcpy(new + newlen, buffer, 46);
- newlen += 46;
- buffer = (char *) buffer + 46;
- size -= 46;
-
- /* "parent <sha1>\n" */
- while (!memcmp(buffer, "parent ", 7)) {
- memcpy(new + newlen, buffer, 48);
- newlen += 48;
- buffer = (char *) buffer + 48;
- size -= 48;
- }
-
- /* "author xyz <xyz> date" */
- newlen += convert_date_line(new + newlen, &buffer, &size);
- /* "committer xyz <xyz> date" */
- newlen += convert_date_line(new + newlen, &buffer, &size);
-
- /* Rest */
- memcpy(new + newlen, buffer, size);
- newlen += size;
-
- write_sha1_file(new, newlen, commit_type, result_sha1);
- free(new);
-}
-
-static void convert_commit(void *buffer, unsigned long size, unsigned char *result_sha1)
-{
- void *orig_buffer = buffer;
- unsigned long orig_size = size;
-
- if (memcmp(buffer, "tree ", 5))
- die("Bad commit '%s'", (char*) buffer);
- convert_ascii_sha1((char *) buffer + 5);
- buffer = (char *) buffer + 46; /* "tree " + "hex sha1" + "\n" */
- while (!memcmp(buffer, "parent ", 7)) {
- convert_ascii_sha1((char *) buffer + 7);
- buffer = (char *) buffer + 48;
- }
- convert_date(orig_buffer, orig_size, result_sha1);
-}
-
-static struct entry * convert_entry(unsigned char *sha1)
-{
- struct entry *entry = lookup_entry(sha1);
- enum object_type type;
- void *buffer, *data;
- unsigned long size;
-
- if (entry->converted)
- return entry;
- data = read_sha1_file(sha1, &type, &size);
- if (!data)
- die("unable to read object %s", sha1_to_hex(sha1));
-
- buffer = xmalloc(size);
- memcpy(buffer, data, size);
-
- if (type == OBJ_BLOB) {
- write_sha1_file(buffer, size, blob_type, entry->new_sha1);
- } else if (type == OBJ_TREE)
- convert_tree(buffer, size, entry->new_sha1);
- else if (type == OBJ_COMMIT)
- convert_commit(buffer, size, entry->new_sha1);
- else
- die("unknown object type %d in %s", type, sha1_to_hex(sha1));
- entry->converted = 1;
- free(buffer);
- free(data);
- return entry;
-}
-
-int main(int argc, char **argv)
-{
- unsigned char sha1[20];
- struct entry *entry;
-
- setup_git_directory();
-
- if (argc != 2)
- usage("git-convert-objects <sha1>");
- if (get_sha1(argv[1], sha1))
- die("Not a valid object name %s", argv[1]);
-
- entry = convert_entry(sha1);
- printf("new sha1: %s\n", sha1_to_hex(entry->new_sha1));
- return 0;
-}
return 0;
}
- strbuf_grow(buf, len);
+ /* only grow if not in place */
+ if (strbuf_avail(buf) + buf->len < len)
+ strbuf_grow(buf, len - buf->len);
dst = buf->buf;
if (action == CRLF_GUESS) {
/*
/* are we "faking" in place editing ? */
if (src == buf->buf)
- to_free = strbuf_detach(buf);
+ to_free = strbuf_detach(buf, NULL);
strbuf_grow(buf, len + stats.lf - stats.crlf);
for (;;) {
ret = 0;
}
if (close(pipe_feed[0])) {
- ret = error("read from external filter %s failed", cmd);
+ error("read from external filter %s failed", cmd);
ret = 0;
}
status = finish_command(&child_process);
if (status) {
- ret = error("external filter %s failed %d", cmd, -status);
+ error("external filter %s failed %d", cmd, -status);
ret = 0;
}
if (ret) {
- *dst = nbuf;
- } else {
- strbuf_release(&nbuf);
+ strbuf_swap(dst, &nbuf);
}
+ strbuf_release(&nbuf);
return ret;
}
if (!ident || !count_ident(src, len))
return 0;
- strbuf_grow(buf, len);
+ /* only grow if not in place */
+ if (strbuf_avail(buf) + buf->len < len)
+ strbuf_grow(buf, len - buf->len);
dst = buf->buf;
for (;;) {
dollar = memchr(src, '$', len);
/* are we "faking" in place editing ? */
if (src == buf->buf)
- to_free = strbuf_detach(buf);
+ to_free = strbuf_detach(buf, NULL);
hash_sha1_file(src, len, "blob", sha1);
strbuf_grow(buf, len + cnt * 43);
#define HOST_NAME_MAX 256
#endif
+#ifndef NI_MAXSERV
+#define NI_MAXSERV 32
+#endif
+
static int log_syslog;
static int verbose;
static int reuseaddr;
return date_string(then, offset, result, maxlen);
}
+enum date_mode parse_date_format(const char *format)
+{
+ if (!strcmp(format, "relative"))
+ return DATE_RELATIVE;
+ else if (!strcmp(format, "iso8601") ||
+ !strcmp(format, "iso"))
+ return DATE_ISO8601;
+ else if (!strcmp(format, "rfc2822") ||
+ !strcmp(format, "rfc"))
+ return DATE_RFC2822;
+ else if (!strcmp(format, "short"))
+ return DATE_SHORT;
+ else if (!strcmp(format, "local"))
+ return DATE_LOCAL;
+ else if (!strcmp(format, "default"))
+ return DATE_NORMAL;
+ else
+ die("unknown date format %s", format);
+}
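+/*
+ * Illustration only: these format names presumably back a
+ * user-visible date option, e.g. "git log --date=relative"
+ * or "git log --date=iso".
+ */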
+
void datestamp(char *buf, int bufsize)
{
time_t now;
strbuf_addstr(&res, one);
strbuf_addstr(&res, two);
}
- return res.buf;
+ return strbuf_detach(&res, NULL);
}
static const char *external_diff(void)
quote_c_style(a, &name, NULL, 0);
strbuf_addstr(&name, " => ");
quote_c_style(b, &name, NULL, 0);
- return name.buf;
+ return strbuf_detach(&name, NULL);
}
/* Find common prefix */
strbuf_addch(&name, '}');
strbuf_add(&name, a + len_a - sfx_length, sfx_length);
}
- return name.buf;
+ return strbuf_detach(&name, NULL);
}
struct diffstat_t {
strbuf_init(&buf, 0);
if (quote_c_style(file->name, &buf, NULL, 0)) {
free(file->name);
- file->name = buf.buf;
+ file->name = strbuf_detach(&buf, NULL);
} else {
strbuf_release(&buf);
}
memset(spec, 0, sizeof(*spec));
spec->path = (char *)(spec + 1);
memcpy(spec->path, path, namelen+1);
+ spec->count = 1;
return spec;
}
+void free_filespec(struct diff_filespec *spec)
+{
+ if (!--spec->count) {
+ diff_free_filespec_data(spec);
+ free(spec);
+ }
+}
+
void fill_filespec(struct diff_filespec *spec, const unsigned char *sha1,
unsigned short mode)
{
static int populate_from_stdin(struct diff_filespec *s)
{
struct strbuf buf;
+ size_t size = 0;
strbuf_init(&buf, 0);
if (strbuf_read(&buf, 0, 0) < 0)
strerror(errno));
s->should_munmap = 0;
- s->size = buf.len;
- s->data = strbuf_detach(&buf);
+ s->data = strbuf_detach(&buf, &size);
+ s->size = size;
s->should_free = 1;
return 0;
}
*/
strbuf_init(&buf, 0);
if (convert_to_git(s->path, s->data, s->size, &buf)) {
+ size_t size = 0;
munmap(s->data, s->size);
s->should_munmap = 0;
- s->data = buf.buf;
- s->size = buf.len;
+ s->data = strbuf_detach(&buf, &size);
+ s->size = size;
s->should_free = 1;
}
}
return 0;
}
-void diff_free_filespec_data(struct diff_filespec *s)
+void diff_free_filespec_blob(struct diff_filespec *s)
{
if (s->should_free)
free(s->data);
s->should_free = s->should_munmap = 0;
s->data = NULL;
}
+}
+
+void diff_free_filespec_data(struct diff_filespec *s)
+{
+ diff_free_filespec_blob(s);
free(s->cnt_data);
s->cnt_data = NULL;
}
void diff_free_filepair(struct diff_filepair *p)
{
- diff_free_filespec_data(p->one);
- diff_free_filespec_data(p->two);
- free(p->one);
- free(p->two);
+ free_filespec(p->one);
+ free_filespec(p->two);
free(p);
}
{
diff_debug_filespec(p->one, i, "one");
diff_debug_filespec(p->two, i, "two");
- fprintf(stderr, "score %d, status %c stays %d broken %d\n",
+ fprintf(stderr, "score %d, status %c rename_used %d broken %d\n",
p->score, p->status ? p->status : '?',
- p->source_stays, p->broken_pair);
+ p->one->rename_used, p->broken_pair);
}
void diff_debug_queue(const char *msg, struct diff_queue_struct *q)
static void diff_resolve_rename_copy(void)
{
- int i, j;
- struct diff_filepair *p, *pp;
+ int i;
+ struct diff_filepair *p;
struct diff_queue_struct *q = &diff_queued_diff;
diff_debug_queue("resolve-rename-copy", q);
* either in-place edit or rename/copy edit.
*/
else if (DIFF_PAIR_RENAME(p)) {
- if (p->source_stays) {
- p->status = DIFF_STATUS_COPIED;
- continue;
- }
- /* See if there is some other filepair that
- * copies from the same source as us. If so
- * we are a copy. Otherwise we are either a
- * copy if the path stays, or a rename if it
- * does not, but we already handled "stays" case.
+ /*
+ * A rename might have re-connected a broken
+ * pair up, causing the pathnames to be the
+ * same again. If so, that's not a rename at
+ * all, just a modification..
+ *
+ * Otherwise, see if this source was used for
+ * multiple renames, in which case we decrement
+ * the count, and call it a copy.
*/
- for (j = i + 1; j < q->nr; j++) {
- pp = q->queue[j];
- if (strcmp(pp->one->path, p->one->path))
- continue; /* not us */
- if (!DIFF_PAIR_RENAME(pp))
- continue; /* not a rename/copy */
- /* pp is a rename/copy from the same source */
+ if (!strcmp(p->one->path, p->two->path))
+ p->status = DIFF_STATUS_MODIFIED;
+ else if (--p->one->rename_used > 0)
p->status = DIFF_STATUS_COPIED;
- break;
- }
- if (!p->status)
+ else
p->status = DIFF_STATUS_RENAMED;
}
else if (hashcmp(p->one->sha1, p->two->sha1) ||
* The value we return is 1 if we want the pair to be broken,
* or 0 if we do not.
*/
- unsigned long delta_size, base_size, src_copied, literal_added,
- src_removed;
+ unsigned long delta_size, base_size, max_size;
+ unsigned long src_copied, literal_added, src_removed;
*merge_score_p = 0; /* assume no deletion --- "do not break"
* is the default.
return 0; /* error but caught downstream */
base_size = ((src->size < dst->size) ? src->size : dst->size);
- if (base_size < MINIMUM_BREAK_SIZE)
+ max_size = ((src->size > dst->size) ? src->size : dst->size);
+ if (max_size < MINIMUM_BREAK_SIZE)
return 0; /* we do not break too small filepair */
if (diffcore_count_changes(src, dst,
* less than the minimum, after rename/copy runs.
*/
*merge_score_p = (int)(src_removed * MAX_SCORE / src->size);
+ if (*merge_score_p > break_score)
+ return 1;
/* Extent of damage, which counts both inserts and
* deletes.
*/
delta_size = src_removed + literal_added;
- if (delta_size * MAX_SCORE / base_size < break_score)
+ if (delta_size * MAX_SCORE / max_size < break_score)
return 0;
/* If you removed a lot without adding new material, that is
struct spanhash data[FLEX_ARRAY];
};
-static struct spanhash *spanhash_find(struct spanhash_top *top,
- unsigned int hashval)
-{
- int sz = 1 << top->alloc_log2;
- int bucket = hashval & (sz - 1);
- while (1) {
- struct spanhash *h = &(top->data[bucket++]);
- if (!h->cnt)
- return NULL;
- if (h->hashval == hashval)
- return h;
- if (sz <= bucket)
- bucket = 0;
- }
-}
-
static struct spanhash_top *spanhash_rehash(struct spanhash_top *orig)
{
struct spanhash_top *new;
}
}
+static int spanhash_cmp(const void *a_, const void *b_)
+{
+ const struct spanhash *a = a_;
+ const struct spanhash *b = b_;
+
+ /* A count of zero compares at the end.. */
+ if (!a->cnt)
+ return !b->cnt ? 0 : 1;
+ if (!b->cnt)
+ return -1;
+ return a->hashval < b->hashval ? -1 :
+ a->hashval > b->hashval ? 1 : 0;
+}
+
static struct spanhash_top *hash_chars(struct diff_filespec *one)
{
int i, n;
n = 0;
accum1 = accum2 = 0;
}
+ qsort(hash->data,
+ 1ul << hash->alloc_log2,
+ sizeof(hash->data[0]),
+ spanhash_cmp);
return hash;
}
unsigned long *src_copied,
unsigned long *literal_added)
{
- int i, ssz;
+ struct spanhash *s, *d;
struct spanhash_top *src_count, *dst_count;
unsigned long sc, la;
}
sc = la = 0;
- ssz = 1 << src_count->alloc_log2;
- for (i = 0; i < ssz; i++) {
- struct spanhash *s = &(src_count->data[i]);
- struct spanhash *d;
+ s = src_count->data;
+ d = dst_count->data;
+ for (;;) {
unsigned dst_cnt, src_cnt;
if (!s->cnt)
- continue;
+ break; /* we checked all in src */
+ while (d->cnt) {
+ if (d->hashval >= s->hashval)
+ break;
+ d++;
+ }
src_cnt = s->cnt;
- d = spanhash_find(dst_count, s->hashval);
- dst_cnt = d ? d->cnt : 0;
+ dst_cnt = d->hashval == s->hashval ? d->cnt : 0;
if (src_cnt < dst_cnt) {
la += dst_cnt - src_cnt;
sc += src_cnt;
}
else
sc += dst_cnt;
+ s++;
}
if (!src_count_p)
#include "cache.h"
#include "diff.h"
#include "diffcore.h"
+#include "hash.h"
/* Table of rename/copy destinations */
static struct diff_rename_src {
struct diff_filespec *one;
unsigned short score; /* to remember the break score */
- unsigned src_path_left : 1;
} *rename_src;
static int rename_src_nr, rename_src_alloc;
static struct diff_rename_src *register_rename_src(struct diff_filespec *one,
- int src_path_left,
unsigned short score)
{
int first, last;
(rename_src_nr - first - 1) * sizeof(*rename_src));
rename_src[first].one = one;
rename_src[first].score = score;
- rename_src[first].src_path_left = src_path_left;
return &(rename_src[first]);
}
-static int is_exact_match(struct diff_filespec *src,
- struct diff_filespec *dst,
- int contents_too)
-{
- if (src->sha1_valid && dst->sha1_valid &&
- !hashcmp(src->sha1, dst->sha1))
- return 1;
- if (!contents_too)
- return 0;
- if (diff_populate_filespec(src, 1) || diff_populate_filespec(dst, 1))
- return 0;
- if (src->size != dst->size)
- return 0;
- if (src->sha1_valid && dst->sha1_valid)
- return !hashcmp(src->sha1, dst->sha1);
- if (diff_populate_filespec(src, 0) || diff_populate_filespec(dst, 0))
- return 0;
- if (src->size == dst->size &&
- !memcmp(src->data, dst->data, src->size))
- return 1;
- return 0;
-}
-
static int basename_same(struct diff_filespec *src, struct diff_filespec *dst)
{
int src_len = strlen(src->path), dst_len = strlen(dst->path);
if (!S_ISREG(src->mode) || !S_ISREG(dst->mode))
return 0;
+ /*
+ * Need to check that source and destination sizes are
+ * filled in before comparing them.
+ *
+ * If we already have "cnt_data" filled in, we know it's
+ * all good (avoid checking the size for zero, as that
+ * is a possible size - we really should have a flag to
+ * say whether the size is valid or not!)
+ */
+ if (!src->cnt_data && diff_populate_filespec(src, 0))
+ return 0;
+ if (!dst->cnt_data && diff_populate_filespec(dst, 0))
+ return 0;
+
max_size = ((src->size > dst->size) ? src->size : dst->size);
base_size = ((src->size < dst->size) ? src->size : dst->size);
delta_size = max_size - base_size;
if (base_size * (MAX_SCORE-minimum_score) < delta_size * MAX_SCORE)
return 0;
- if (diff_populate_filespec(src, 0) || diff_populate_filespec(dst, 0))
- return 0; /* error but caught downstream */
-
-
delta_limit = (unsigned long)
(base_size * (MAX_SCORE-minimum_score) / MAX_SCORE);
if (diffcore_count_changes(src, dst,
static void record_rename_pair(int dst_index, int src_index, int score)
{
- struct diff_filespec *one, *two, *src, *dst;
+ struct diff_filespec *src, *dst;
struct diff_filepair *dp;
if (rename_dst[dst_index].pair)
die("internal error: dst already matched.");
src = rename_src[src_index].one;
- one = alloc_filespec(src->path);
- fill_filespec(one, src->sha1, src->mode);
+ src->rename_used++;
+ src->count++;
dst = rename_dst[dst_index].two;
- two = alloc_filespec(dst->path);
- fill_filespec(two, dst->sha1, dst->mode);
+ dst->count++;
- dp = diff_queue(NULL, one, two);
+ dp = diff_queue(NULL, src, dst);
dp->renamed_pair = 1;
if (!strcmp(src->path, dst->path))
dp->score = rename_src[src_index].score;
else
dp->score = score;
- dp->source_stays = rename_src[src_index].src_path_left;
rename_dst[dst_index].pair = dp;
}
return b->score - a->score;
}
-static int compute_stays(struct diff_queue_struct *q,
- struct diff_filespec *one)
+struct file_similarity {
+ int src_dst, index;
+ struct diff_filespec *filespec;
+ struct file_similarity *next;
+};
+
+static int find_identical_files(struct file_similarity *src,
+ struct file_similarity *dst)
{
- int i;
- for (i = 0; i < q->nr; i++) {
- struct diff_filepair *p = q->queue[i];
- if (strcmp(one->path, p->two->path))
- continue;
- if (DIFF_PAIR_RENAME(p)) {
- return 0; /* something else is renamed into this */
+ int renames = 0;
+
+ /*
+ * Walk over all the destinations ...
+ */
+ do {
+ struct diff_filespec *one = dst->filespec;
+ struct file_similarity *p, *best;
+ int i = 100;
+
+ /*
+ * .. to find the best source match
+ */
+ best = NULL;
+ for (p = src; p; p = p->next) {
+ struct diff_filespec *two = p->filespec;
+
+ /* False hash collision? */
+ if (hashcmp(one->sha1, two->sha1))
+ continue;
+ /* Non-regular files? If so, the modes must match! */
+ if (!S_ISREG(one->mode) || !S_ISREG(two->mode)) {
+ if (one->mode != two->mode)
+ continue;
+ }
+ best = p;
+ if (basename_same(one, two))
+ break;
+
+ /* Too many identical alternatives? Pick one */
+ if (!--i)
+ break;
+ }
+ if (best) {
+ record_rename_pair(dst->index, best->index, MAX_SCORE);
+ renames++;
}
+ } while ((dst = dst->next) != NULL);
+ return renames;
+}
+
+static void free_similarity_list(struct file_similarity *p)
+{
+ while (p) {
+ struct file_similarity *entry = p;
+ p = p->next;
+ free(entry);
}
- return 1;
+}
+
+static int find_same_files(void *ptr)
+{
+ int ret;
+ struct file_similarity *p = ptr;
+ struct file_similarity *src = NULL, *dst = NULL;
+
+ /* Split the hash list up into sources and destinations */
+ do {
+ struct file_similarity *entry = p;
+ p = p->next;
+ if (entry->src_dst < 0) {
+ entry->next = src;
+ src = entry;
+ } else {
+ entry->next = dst;
+ dst = entry;
+ }
+ } while (p);
+
+ /*
+ * If we have both sources *and* destinations, see if
+ * we can match them up
+ */
+ ret = (src && dst) ? find_identical_files(src, dst) : 0;
+
+ /* Free the hashes and return the number of renames found */
+ free_similarity_list(src);
+ free_similarity_list(dst);
+ return ret;
+}
+
+static unsigned int hash_filespec(struct diff_filespec *filespec)
+{
+ unsigned int hash;
+ if (!filespec->sha1_valid) {
+ if (diff_populate_filespec(filespec, 0))
+ return 0;
+ hash_sha1_file(filespec->data, filespec->size, "blob", filespec->sha1);
+ }
+ memcpy(&hash, filespec->sha1, sizeof(hash));
+ return hash;
+}
+
+static void insert_file_table(struct hash_table *table, int src_dst, int index, struct diff_filespec *filespec)
+{
+ void **pos;
+ unsigned int hash;
+ struct file_similarity *entry = xmalloc(sizeof(*entry));
+
+ entry->src_dst = src_dst;
+ entry->index = index;
+ entry->filespec = filespec;
+ entry->next = NULL;
+
+ hash = hash_filespec(filespec);
+ pos = insert_hash(hash, entry, table);
+
+ /* We already had an entry there? */
+ if (pos) {
+ entry->next = *pos;
+ *pos = entry;
+ }
+}
+
+/*
+ * Find exact renames first.
+ *
+ * The first round matches up the up-to-date entries,
+ * and then during the second round we try to match
+ * cache-dirty entries as well.
+ */
+static int find_exact_renames(void)
+{
+ int i;
+ struct hash_table file_table;
+
+ init_hash(&file_table);
+ for (i = 0; i < rename_src_nr; i++)
+ insert_file_table(&file_table, -1, i, rename_src[i].one);
+
+ for (i = 0; i < rename_dst_nr; i++)
+ insert_file_table(&file_table, 1, i, rename_dst[i].two);
+
+ /* Find the renames */
+ i = for_each_hash(&file_table, find_same_files);
+
+ /* .. and free the hash data structure */
+ free_hash(&file_table);
+
+ return i;
}
void diffcore_rename(struct diff_options *options)
struct diff_queue_struct *q = &diff_queued_diff;
struct diff_queue_struct outq;
struct diff_score *mx;
- int i, j, rename_count, contents_too;
+ int i, j, rename_count;
int num_create, num_src, dst_cnt;
if (!minimum_score)
minimum_score = DEFAULT_RENAME_SCORE;
- rename_count = 0;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
locate_rename_dst(p->two, 1);
}
else if (!DIFF_FILE_VALID(p->two)) {
- /* If the source is a broken "delete", and
+ /*
+ * If the source is a broken "delete", and
* they did not really want to get broken,
* that means the source actually stays.
+ * So we increment the "rename_used" score
+ * by one, to indicate ourselves as a user
+ */
+ if (p->broken_pair && !p->score)
+ p->one->rename_used++;
+ register_rename_src(p->one, p->score);
+ }
+ else if (detect_rename == DIFF_DETECT_COPY) {
+ /*
+ * Increment the "rename_used" score by
+ * one, to indicate ourselves as a user.
*/
- int stays = (p->broken_pair && !p->score);
- register_rename_src(p->one, stays, p->score);
+ p->one->rename_used++;
+ register_rename_src(p->one, p->score);
}
- else if (detect_rename == DIFF_DETECT_COPY)
- register_rename_src(p->one, 1, p->score);
}
if (rename_dst_nr == 0 || rename_src_nr == 0)
goto cleanup; /* nothing to do */
+ /*
+ * We really want to cull the candidates list early
+ * with cheap tests in order to avoid doing deltas.
+ */
+ rename_count = find_exact_renames();
+
+ /* Did we only want exact renames? */
+ if (minimum_score == MAX_SCORE)
+ goto cleanup;
+
+ /*
+ * Calculate how many renames are left (but all the source
+ * files still remain as options for rename/copies!)
+ */
+ num_create = (rename_dst_nr - rename_count);
+ num_src = rename_src_nr;
+
+ /* All done? */
+ if (!num_create)
+ goto cleanup;
+
/*
* This basically does a test for the rename matrix not
* growing larger than a "rename_limit" square matrix, ie:
*
- * rename_dst_nr * rename_src_nr > rename_limit * rename_limit
+ * num_create * num_src > rename_limit * rename_limit
*
* but handles the potential overflow case specially (and we
* assume at least 32-bit integers)
*/
if (rename_limit <= 0 || rename_limit > 32767)
rename_limit = 32767;
- if (rename_dst_nr > rename_limit && rename_src_nr > rename_limit)
+ if (num_create > rename_limit && num_src > rename_limit)
goto cleanup;
- if (rename_dst_nr * rename_src_nr > rename_limit * rename_limit)
+ if (num_create * num_src > rename_limit * rename_limit)
goto cleanup;
- /* We really want to cull the candidates list early
- * with cheap tests in order to avoid doing deltas.
- * The first round matches up the up-to-date entries,
- * and then during the second round we try to match
- * cache-dirty entries as well.
- */
- for (contents_too = 0; contents_too < 2; contents_too++) {
- for (i = 0; i < rename_dst_nr; i++) {
- struct diff_filespec *two = rename_dst[i].two;
- if (rename_dst[i].pair)
- continue; /* dealt with an earlier round */
- for (j = 0; j < rename_src_nr; j++) {
- int k;
- struct diff_filespec *one = rename_src[j].one;
- if (!is_exact_match(one, two, contents_too))
- continue;
-
- /* see if there is a basename match, too */
- for (k = j; k < rename_src_nr; k++) {
- one = rename_src[k].one;
- if (basename_same(one, two) &&
- is_exact_match(one, two,
- contents_too)) {
- j = k;
- break;
- }
- }
-
- record_rename_pair(i, j, (int)MAX_SCORE);
- rename_count++;
- break; /* we are done with this entry */
- }
- }
- }
-
- /* Have we run out the created file pool? If so we can avoid
- * doing the delta matrix altogether.
- */
- if (rename_count == rename_dst_nr)
- goto cleanup;
-
- if (minimum_score == MAX_SCORE)
- goto cleanup;
-
- num_create = (rename_dst_nr - rename_count);
- num_src = rename_src_nr;
mx = xmalloc(sizeof(*mx) * num_create * num_src);
for (dst_cnt = i = 0; i < rename_dst_nr; i++) {
int base = dst_cnt * num_src;
m->score = estimate_similarity(one, two,
minimum_score);
m->name_score = basename_same(one, two);
- diff_free_filespec_data(one);
+ diff_free_filespec_blob(one);
}
/* We do not need the text anymore */
- diff_free_filespec_data(two);
+ diff_free_filespec_blob(two);
dst_cnt++;
}
/* cost matrix sorted by most to least similar pair */
pair_to_free = p;
}
else {
- for (j = 0; j < rename_dst_nr; j++) {
- if (!rename_dst[j].pair)
- continue;
- if (strcmp(rename_dst[j].pair->
- one->path,
- p->one->path))
- continue;
- break;
- }
- if (j < rename_dst_nr)
+ if (p->one->rename_used)
/* this path remains */
pair_to_free = p;
}
*q = outq;
diff_debug_queue("done collapsing", q);
- /* We need to see which rename source really stays here;
- * earlier we only checked if the path is left in the result,
- * but even if a path remains in the result, if that is coming
- * from copying something else on top of it, then the original
- * source is lost and does not stay.
- */
- for (i = 0; i < q->nr; i++) {
- struct diff_filepair *p = q->queue[i];
- if (DIFF_PAIR_RENAME(p) && p->source_stays) {
- /* If one appears as the target of a rename-copy,
- * then mark p->source_stays = 0; otherwise
- * leave it as is.
- */
- p->source_stays = compute_stays(q, p->one);
- }
- }
-
- for (i = 0; i < rename_dst_nr; i++) {
- diff_free_filespec_data(rename_dst[i].two);
- free(rename_dst[i].two);
- }
+ for (i = 0; i < rename_dst_nr; i++)
+ free_filespec(rename_dst[i].two);
free(rename_dst);
rename_dst = NULL;
void *cnt_data;
const char *funcname_pattern_ident;
unsigned long size;
+ int count; /* Reference count */
int xfrm_flags; /* for use by the xfrm */
+ int rename_used; /* Count of rename users */
unsigned short mode; /* file mode */
unsigned sha1_valid : 1; /* if true, use sha1 and trust mode;
* if false, use the name and read from
};
extern struct diff_filespec *alloc_filespec(const char *);
+extern void free_filespec(struct diff_filespec *);
extern void fill_filespec(struct diff_filespec *, const unsigned char *,
unsigned short);
extern int diff_populate_filespec(struct diff_filespec *, int);
extern void diff_free_filespec_data(struct diff_filespec *);
+extern void diff_free_filespec_blob(struct diff_filespec *);
extern int diff_filespec_is_binary(struct diff_filespec *);
struct diff_filepair {
struct diff_filespec *two;
unsigned short int score;
char status; /* M C R N D U (see Documentation/diff-format.txt) */
- unsigned source_stays : 1; /* all of R/C are copies */
unsigned broken_pair : 1;
unsigned renamed_pair : 1;
unsigned is_unmerged : 1;
return retval;
}
+static int no_wildcard(const char *string)
+{
+ return string[strcspn(string, "*?[{")] == '\0';
+}
+
void add_exclude(const char *string, const char *base,
int baselen, struct exclude_list *which)
{
struct exclude *x = xmalloc(sizeof (*x));
+ x->to_exclude = 1;
+ if (*string == '!') {
+ x->to_exclude = 0;
+ string++;
+ }
x->pattern = string;
+ x->patternlen = strlen(string);
x->base = base;
x->baselen = baselen;
+ x->flags = 0;
+ if (!strchr(string, '/'))
+ x->flags |= EXC_FLAG_NODIR;
+ if (no_wildcard(string))
+ x->flags |= EXC_FLAG_NOWILDCARD;
+ if (*string == '*' && no_wildcard(string+1))
+ x->flags |= EXC_FLAG_ENDSWITH;
if (which->nr == which->alloc) {
which->alloc = alloc_nr(which->alloc);
which->excludes = xrealloc(which->excludes,
* Return 1 for exclude, 0 for include and -1 for undecided.
*/
static int excluded_1(const char *pathname,
- int pathlen,
+ int pathlen, const char *basename,
struct exclude_list *el)
{
int i;
for (i = el->nr - 1; 0 <= i; i--) {
struct exclude *x = el->excludes[i];
const char *exclude = x->pattern;
- int to_exclude = 1;
+ int to_exclude = x->to_exclude;
- if (*exclude == '!') {
- to_exclude = 0;
- exclude++;
- }
-
- if (!strchr(exclude, '/')) {
+ if (x->flags & EXC_FLAG_NODIR) {
/* match basename */
- const char *basename = strrchr(pathname, '/');
- basename = (basename) ? basename+1 : pathname;
- if (fnmatch(exclude, basename, 0) == 0)
- return to_exclude;
+ if (x->flags & EXC_FLAG_NOWILDCARD) {
+ if (!strcmp(exclude, basename))
+ return to_exclude;
+ } else if (x->flags & EXC_FLAG_ENDSWITH) {
+ if (x->patternlen - 1 <= pathlen &&
+ !strcmp(exclude + 1, pathname + pathlen - x->patternlen + 1))
+ return to_exclude;
+ } else {
+ if (fnmatch(exclude, basename, 0) == 0)
+ return to_exclude;
+ }
}
else {
/* match with FNM_PATHNAME:
strncmp(pathname, x->base, baselen))
continue;
- if (fnmatch(exclude, pathname+baselen,
- FNM_PATHNAME) == 0)
- return to_exclude;
+ if (x->flags & EXC_FLAG_NOWILDCARD) {
+ if (!strcmp(exclude, pathname + baselen))
+ return to_exclude;
+ } else {
+ if (fnmatch(exclude, pathname+baselen,
+ FNM_PATHNAME) == 0)
+ return to_exclude;
+ }
}
}
}
{
int pathlen = strlen(pathname);
int st;
+ const char *basename = strrchr(pathname, '/');
+ basename = (basename) ? basename+1 : pathname;
for (st = EXC_CMDL; st <= EXC_FILE; st++) {
- switch (excluded_1(pathname, pathlen, &dir->exclude_list[st])) {
+ switch (excluded_1(pathname, pathlen, basename, &dir->exclude_list[st])) {
case 0:
return 0;
case 1:
return 0;
}
+static int get_dtype(struct dirent *de, const char *path)
+{
+ int dtype = DTYPE(de);
+ struct stat st;
+
+ if (dtype != DT_UNKNOWN)
+ return dtype;
+ if (lstat(path, &st))
+ return dtype;
+ if (S_ISREG(st.st_mode))
+ return DT_REG;
+ if (S_ISDIR(st.st_mode))
+ return DT_DIR;
+ if (S_ISLNK(st.st_mode))
+ return DT_LNK;
+ return dtype;
+}
+
/*
* Read a directory tree. We currently ignore anything but
* directories, regular files and symlinks. That's because git
exclude_stk = push_exclude_per_directory(dir, base, baselen);
while ((de = readdir(fdir)) != NULL) {
- int len;
+ int len, dtype;
int exclude;
if ((de->d_name[0] == '.') &&
if (exclude && dir->collect_ignored
&& in_pathspec(fullname, baselen + len, simplify))
dir_add_ignored(dir, fullname, baselen + len);
- if (exclude != dir->show_ignored) {
- if (!dir->show_ignored || DTYPE(de) != DT_DIR) {
+
+ /*
+ * Excluded? If we don't explicitly want to show
+ * ignored files, ignore it
+ */
+ if (exclude && !dir->show_ignored)
+ continue;
+
+ dtype = get_dtype(de, fullname);
+
+ /*
+ * Do we want to see just the ignored files?
+ * We still need to recurse into directories,
+ * even if we don't ignore them, since the
+ * directory may contain files that we do..
+ */
+ if (!exclude && dir->show_ignored) {
+ if (dtype != DT_DIR)
continue;
- }
}
- switch (DTYPE(de)) {
- struct stat st;
+ switch (dtype) {
default:
continue;
- case DT_UNKNOWN:
- if (lstat(fullname, &st))
- continue;
- if (S_ISREG(st.st_mode) || S_ISLNK(st.st_mode))
- break;
- if (!S_ISDIR(st.st_mode))
- continue;
- /* fallthrough */
case DT_DIR:
memcpy(fullname + baselen + len, "/", 2);
len++;
char buffer[PATH_MAX];
return get_relative_cwd(buffer, sizeof(buffer), dir) != NULL;
}
+
+int remove_dir_recursively(struct strbuf *path, int only_empty)
+{
+ DIR *dir = opendir(path->buf);
+ struct dirent *e;
+ int ret = 0, original_len = path->len, len;
+
+ if (!dir)
+ return -1;
+ if (path->buf[original_len - 1] != '/')
+ strbuf_addch(path, '/');
+
+ len = path->len;
+ while ((e = readdir(dir)) != NULL) {
+ struct stat st;
+ if ((e->d_name[0] == '.') &&
+ ((e->d_name[1] == 0) ||
+ ((e->d_name[1] == '.') && e->d_name[2] == 0)))
+ continue; /* "." and ".." */
+
+ strbuf_setlen(path, len);
+ strbuf_addstr(path, e->d_name);
+ if (lstat(path->buf, &st))
+ ; /* fall thru */
+ else if (S_ISDIR(st.st_mode)) {
+ if (!remove_dir_recursively(path, only_empty))
+ continue; /* happy */
+ } else if (!only_empty && !unlink(path->buf))
+ continue; /* happy, too */
+
+ /* path too long, stat fails, or non-directory still exists */
+ ret = -1;
+ break;
+ }
+ closedir(dir);
+
+ strbuf_setlen(path, original_len);
+ if (!ret)
+ ret = rmdir(path->buf);
+ return ret;
+}
char name[FLEX_ARRAY]; /* more */
};
+#define EXC_FLAG_NODIR 1
+#define EXC_FLAG_NOWILDCARD 2
+#define EXC_FLAG_ENDSWITH 4
+
struct exclude_list {
int nr;
int alloc;
struct exclude {
const char *pattern;
+ int patternlen;
const char *base;
int baselen;
+ int to_exclude;
+ int flags;
} **excludes;
};
extern char *get_relative_cwd(char *buffer, int size, const char *dir);
extern int is_inside_dir(const char *dir);
+extern int remove_dir_recursively(struct strbuf *path, int only_empty);
+
#endif
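
To illustrate how the flags above classify ignore patterns (the
patterns themselves are hypothetical):

	foo.c      no '/' and no wildcard: EXC_FLAG_NODIR|EXC_FLAG_NOWILDCARD;
	           matched with a plain strcmp() against the basename
	*.o        a leading '*' with no other wildcard adds EXC_FLAG_ENDSWITH;
	           matched by comparing the tail of the pathname
	doc/*.txt  contains '/', so no EXC_FLAG_NODIR; matched against the
	           full path with fnmatch(FNM_PATHNAME)
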
*/
strbuf_init(&buf, 0);
if (convert_to_working_tree(ce->name, new, size, &buf)) {
+ size_t newsize = 0;
free(new);
- new = buf.buf;
- size = buf.len;
+ new = strbuf_detach(&buf, &newsize);
+ size = newsize;
}
if (to_tempfile) {
} else {
struct recent_command *rc;
- strbuf_detach(&command_buf);
+ strbuf_detach(&command_buf, NULL);
stdin_eof = strbuf_getline(&command_buf, stdin, '\n');
if (stdin_eof)
return EOF;
char *term = xstrdup(command_buf.buf + 5 + 2);
size_t term_len = command_buf.len - 5 - 2;
+ strbuf_detach(&command_buf, NULL);
for (;;) {
if (strbuf_getline(&command_buf, stdin, '\n') == EOF)
die("EOF in data (terminator '%s' not found)", term);
} else if (oe) {
if (oe->type != OBJ_BLOB)
die("Not a blob (actually a %s): %s",
- command_buf.buf, typename(oe->type));
+ typename(oe->type), command_buf.buf);
} else {
enum object_type type = sha1_object_info(sha1, NULL);
if (type < 0)
+++ /dev/null
-#include "cache.h"
-#include "refs.h"
-#include "pkt-line.h"
-#include "commit.h"
-#include "tag.h"
-#include "exec_cmd.h"
-#include "pack.h"
-#include "sideband.h"
-
-static int keep_pack;
-static int transfer_unpack_limit = -1;
-static int fetch_unpack_limit = -1;
-static int unpack_limit = 100;
-static int quiet;
-static int verbose;
-static int fetch_all;
-static int depth;
-static int no_progress;
-static const char fetch_pack_usage[] =
-"git-fetch-pack [--all] [--quiet|-q] [--keep|-k] [--thin] [--upload-pack=<git-upload-pack>] [--depth=<n>] [--no-progress] [-v] [<host>:]<directory> [<refs>...]";
-static const char *uploadpack = "git-upload-pack";
-
-#define COMPLETE (1U << 0)
-#define COMMON (1U << 1)
-#define COMMON_REF (1U << 2)
-#define SEEN (1U << 3)
-#define POPPED (1U << 4)
-
-/*
- * After sending this many "have"s if we do not get any new ACK , we
- * give up traversing our history.
- */
-#define MAX_IN_VAIN 256
-
-static struct commit_list *rev_list;
-static int non_common_revs, multi_ack, use_thin_pack, use_sideband;
-
-static void rev_list_push(struct commit *commit, int mark)
-{
- if (!(commit->object.flags & mark)) {
- commit->object.flags |= mark;
-
- if (!(commit->object.parsed))
- parse_commit(commit);
-
- insert_by_date(commit, &rev_list);
-
- if (!(commit->object.flags & COMMON))
- non_common_revs++;
- }
-}
-
-static int rev_list_insert_ref(const char *path, const unsigned char *sha1, int flag, void *cb_data)
-{
- struct object *o = deref_tag(parse_object(sha1), path, 0);
-
- if (o && o->type == OBJ_COMMIT)
- rev_list_push((struct commit *)o, SEEN);
-
- return 0;
-}
-
-/*
- This function marks a rev and its ancestors as common.
- In some cases, it is desirable to mark only the ancestors (for example
- when only the server does not yet know that they are common).
-*/
-
-static void mark_common(struct commit *commit,
- int ancestors_only, int dont_parse)
-{
- if (commit != NULL && !(commit->object.flags & COMMON)) {
- struct object *o = (struct object *)commit;
-
- if (!ancestors_only)
- o->flags |= COMMON;
-
- if (!(o->flags & SEEN))
- rev_list_push(commit, SEEN);
- else {
- struct commit_list *parents;
-
- if (!ancestors_only && !(o->flags & POPPED))
- non_common_revs--;
- if (!o->parsed && !dont_parse)
- parse_commit(commit);
-
- for (parents = commit->parents;
- parents;
- parents = parents->next)
- mark_common(parents->item, 0, dont_parse);
- }
- }
-}
-
-/*
- Get the next rev to send, ignoring the common.
-*/
-
-static const unsigned char* get_rev(void)
-{
- struct commit *commit = NULL;
-
- while (commit == NULL) {
- unsigned int mark;
- struct commit_list* parents;
-
- if (rev_list == NULL || non_common_revs == 0)
- return NULL;
-
- commit = rev_list->item;
- if (!(commit->object.parsed))
- parse_commit(commit);
- commit->object.flags |= POPPED;
- if (!(commit->object.flags & COMMON))
- non_common_revs--;
-
- parents = commit->parents;
-
- if (commit->object.flags & COMMON) {
- /* do not send "have", and ignore ancestors */
- commit = NULL;
- mark = COMMON | SEEN;
- } else if (commit->object.flags & COMMON_REF)
- /* send "have", and ignore ancestors */
- mark = COMMON | SEEN;
- else
- /* send "have", also for its ancestors */
- mark = SEEN;
-
- while (parents) {
- if (!(parents->item->object.flags & SEEN))
- rev_list_push(parents->item, mark);
- if (mark & COMMON)
- mark_common(parents->item, 1, 0);
- parents = parents->next;
- }
-
- rev_list = rev_list->next;
- }
-
- return commit->object.sha1;
-}
-
-static int find_common(int fd[2], unsigned char *result_sha1,
- struct ref *refs)
-{
- int fetching;
- int count = 0, flushes = 0, retval;
- const unsigned char *sha1;
- unsigned in_vain = 0;
- int got_continue = 0;
-
- for_each_ref(rev_list_insert_ref, NULL);
-
- fetching = 0;
- for ( ; refs ; refs = refs->next) {
- unsigned char *remote = refs->old_sha1;
- struct object *o;
-
- /*
- * If that object is complete (i.e. it is an ancestor of a
- * local ref), we tell them we have it but do not have to
- * tell them about its ancestors, which they already know
- * about.
- *
- * We use lookup_object here because we are only
- * interested in the case we *know* the object is
- * reachable and we have already scanned it.
- */
- if (((o = lookup_object(remote)) != NULL) &&
- (o->flags & COMPLETE)) {
- continue;
- }
-
- if (!fetching)
- packet_write(fd[1], "want %s%s%s%s%s%s%s\n",
- sha1_to_hex(remote),
- (multi_ack ? " multi_ack" : ""),
- (use_sideband == 2 ? " side-band-64k" : ""),
- (use_sideband == 1 ? " side-band" : ""),
- (use_thin_pack ? " thin-pack" : ""),
- (no_progress ? " no-progress" : ""),
- " ofs-delta");
- else
- packet_write(fd[1], "want %s\n", sha1_to_hex(remote));
- fetching++;
- }
- if (is_repository_shallow())
- write_shallow_commits(fd[1], 1);
- if (depth > 0)
- packet_write(fd[1], "deepen %d", depth);
- packet_flush(fd[1]);
- if (!fetching)
- return 1;
-
- if (depth > 0) {
- char line[1024];
- unsigned char sha1[20];
- int len;
-
- while ((len = packet_read_line(fd[0], line, sizeof(line)))) {
- if (!prefixcmp(line, "shallow ")) {
- if (get_sha1_hex(line + 8, sha1))
- die("invalid shallow line: %s", line);
- register_shallow(sha1);
- continue;
- }
- if (!prefixcmp(line, "unshallow ")) {
- if (get_sha1_hex(line + 10, sha1))
- die("invalid unshallow line: %s", line);
- if (!lookup_object(sha1))
- die("object not found: %s", line);
- /* make sure that it is parsed as shallow */
- parse_object(sha1);
- if (unregister_shallow(sha1))
- die("no shallow found: %s", line);
- continue;
- }
- die("expected shallow/unshallow, got %s", line);
- }
- }
-
- flushes = 0;
- retval = -1;
- while ((sha1 = get_rev())) {
- packet_write(fd[1], "have %s\n", sha1_to_hex(sha1));
- if (verbose)
- fprintf(stderr, "have %s\n", sha1_to_hex(sha1));
- in_vain++;
- if (!(31 & ++count)) {
- int ack;
-
- packet_flush(fd[1]);
- flushes++;
-
- /*
- * We keep one window "ahead" of the other side, and
- * will wait for an ACK only on the next one
- */
- if (count == 32)
- continue;
-
- do {
- ack = get_ack(fd[0], result_sha1);
- if (verbose && ack)
- fprintf(stderr, "got ack %d %s\n", ack,
- sha1_to_hex(result_sha1));
- if (ack == 1) {
- flushes = 0;
- multi_ack = 0;
- retval = 0;
- goto done;
- } else if (ack == 2) {
- struct commit *commit =
- lookup_commit(result_sha1);
- mark_common(commit, 0, 1);
- retval = 0;
- in_vain = 0;
- got_continue = 1;
- }
- } while (ack);
- flushes--;
- if (got_continue && MAX_IN_VAIN < in_vain) {
- if (verbose)
- fprintf(stderr, "giving up\n");
- break; /* give up */
- }
- }
- }
-done:
- packet_write(fd[1], "done\n");
- if (verbose)
- fprintf(stderr, "done\n");
- if (retval != 0) {
- multi_ack = 0;
- flushes++;
- }
- while (flushes || multi_ack) {
- int ack = get_ack(fd[0], result_sha1);
- if (ack) {
- if (verbose)
- fprintf(stderr, "got ack (%d) %s\n", ack,
- sha1_to_hex(result_sha1));
- if (ack == 1)
- return 0;
- multi_ack = 1;
- continue;
- }
- flushes--;
- }
- return retval;
-}
-
-static struct commit_list *complete;
-
-static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
-{
- struct object *o = parse_object(sha1);
-
- while (o && o->type == OBJ_TAG) {
- struct tag *t = (struct tag *) o;
- if (!t->tagged)
- break; /* broken repository */
- o->flags |= COMPLETE;
- o = parse_object(t->tagged->sha1);
- }
- if (o && o->type == OBJ_COMMIT) {
- struct commit *commit = (struct commit *)o;
- commit->object.flags |= COMPLETE;
- insert_by_date(commit, &complete);
- }
- return 0;
-}
-
-static void mark_recent_complete_commits(unsigned long cutoff)
-{
- while (complete && cutoff <= complete->item->date) {
- if (verbose)
- fprintf(stderr, "Marking %s as complete\n",
- sha1_to_hex(complete->item->object.sha1));
- pop_most_recent_commit(&complete, COMPLETE);
- }
-}
-
-static void filter_refs(struct ref **refs, int nr_match, char **match)
-{
- struct ref **return_refs;
- struct ref *newlist = NULL;
- struct ref **newtail = &newlist;
- struct ref *ref, *next;
- struct ref *fastarray[32];
-
- if (nr_match && !fetch_all) {
- if (ARRAY_SIZE(fastarray) < nr_match)
- return_refs = xcalloc(nr_match, sizeof(struct ref *));
- else {
- return_refs = fastarray;
- memset(return_refs, 0, sizeof(struct ref *) * nr_match);
- }
- }
- else
- return_refs = NULL;
-
- for (ref = *refs; ref; ref = next) {
- next = ref->next;
- if (!memcmp(ref->name, "refs/", 5) &&
- check_ref_format(ref->name + 5))
- ; /* trash */
- else if (fetch_all &&
- (!depth || prefixcmp(ref->name, "refs/tags/") )) {
- *newtail = ref;
- ref->next = NULL;
- newtail = &ref->next;
- continue;
- }
- else {
- int order = path_match(ref->name, nr_match, match);
- if (order) {
- return_refs[order-1] = ref;
- continue; /* we will link it later */
- }
- }
- free(ref);
- }
-
- if (!fetch_all) {
- int i;
- for (i = 0; i < nr_match; i++) {
- ref = return_refs[i];
- if (ref) {
- *newtail = ref;
- ref->next = NULL;
- newtail = &ref->next;
- }
- }
- if (return_refs != fastarray)
- free(return_refs);
- }
- *refs = newlist;
-}
-
-static int everything_local(struct ref **refs, int nr_match, char **match)
-{
- struct ref *ref;
- int retval;
- unsigned long cutoff = 0;
-
- track_object_refs = 0;
- save_commit_buffer = 0;
-
- for (ref = *refs; ref; ref = ref->next) {
- struct object *o;
-
- o = parse_object(ref->old_sha1);
- if (!o)
- continue;
-
- /* We already have it -- which may mean that we were
- * in sync with the other side at some time after
- * that (it is OK if we guess wrong here).
- */
- if (o->type == OBJ_COMMIT) {
- struct commit *commit = (struct commit *)o;
- if (!cutoff || cutoff < commit->date)
- cutoff = commit->date;
- }
- }
-
- if (!depth) {
- for_each_ref(mark_complete, NULL);
- if (cutoff)
- mark_recent_complete_commits(cutoff);
- }
-
- /*
- * Mark all complete remote refs as common refs.
- * Don't mark them common yet; the server has to be told so first.
- */
- for (ref = *refs; ref; ref = ref->next) {
- struct object *o = deref_tag(lookup_object(ref->old_sha1),
- NULL, 0);
-
- if (!o || o->type != OBJ_COMMIT || !(o->flags & COMPLETE))
- continue;
-
- if (!(o->flags & SEEN)) {
- rev_list_push((struct commit *)o, COMMON_REF | SEEN);
-
- mark_common((struct commit *)o, 1, 1);
- }
- }
-
- filter_refs(refs, nr_match, match);
-
- for (retval = 1, ref = *refs; ref ; ref = ref->next) {
- const unsigned char *remote = ref->old_sha1;
- unsigned char local[20];
- struct object *o;
-
- o = lookup_object(remote);
- if (!o || !(o->flags & COMPLETE)) {
- retval = 0;
- if (!verbose)
- continue;
- fprintf(stderr,
- "want %s (%s)\n", sha1_to_hex(remote),
- ref->name);
- continue;
- }
-
- hashcpy(ref->new_sha1, local);
- if (!verbose)
- continue;
- fprintf(stderr,
- "already have %s (%s)\n", sha1_to_hex(remote),
- ref->name);
- }
- return retval;
-}
-
-static pid_t setup_sideband(int fd[2], int xd[2])
-{
- pid_t side_pid;
-
- if (!use_sideband) {
- fd[0] = xd[0];
- fd[1] = xd[1];
- return 0;
- }
- /* xd[] is talking with upload-pack; subprocess reads from
- * xd[0], spits out band#2 to stderr, and feeds us band#1
- * through our fd[0].
- */
- if (pipe(fd) < 0)
- die("fetch-pack: unable to set up pipe");
- side_pid = fork();
- if (side_pid < 0)
- die("fetch-pack: unable to fork off sideband demultiplexer");
- if (!side_pid) {
- /* subprocess */
- close(fd[0]);
- if (xd[0] != xd[1])
- close(xd[1]);
- if (recv_sideband("fetch-pack", xd[0], fd[1], 2))
- exit(1);
- exit(0);
- }
- close(xd[0]);
- close(fd[1]);
- fd[1] = xd[1];
- return side_pid;
-}
-
-static int get_pack(int xd[2])
-{
- int status;
- pid_t pid, side_pid;
- int fd[2];
- const char *argv[20];
- char keep_arg[256];
- char hdr_arg[256];
- const char **av;
- int do_keep = keep_pack;
-
- side_pid = setup_sideband(fd, xd);
-
- av = argv;
- *hdr_arg = 0;
- if (unpack_limit) {
- struct pack_header header;
-
- if (read_pack_header(fd[0], &header))
- die("protocol error: bad pack header");
- snprintf(hdr_arg, sizeof(hdr_arg), "--pack_header=%u,%u",
- ntohl(header.hdr_version), ntohl(header.hdr_entries));
- if (ntohl(header.hdr_entries) < unpack_limit)
- do_keep = 0;
- else
- do_keep = 1;
- }
-
- if (do_keep) {
- *av++ = "index-pack";
- *av++ = "--stdin";
- if (!quiet && !no_progress)
- *av++ = "-v";
- if (use_thin_pack)
- *av++ = "--fix-thin";
- if (keep_pack > 1 || unpack_limit) {
- int s = sprintf(keep_arg,
- "--keep=fetch-pack %d on ", getpid());
- if (gethostname(keep_arg + s, sizeof(keep_arg) - s))
- strcpy(keep_arg + s, "localhost");
- *av++ = keep_arg;
- }
- }
- else {
- *av++ = "unpack-objects";
- if (quiet)
- *av++ = "-q";
- }
- if (*hdr_arg)
- *av++ = hdr_arg;
- *av++ = NULL;
-
- pid = fork();
- if (pid < 0)
- die("fetch-pack: unable to fork off %s", argv[0]);
- if (!pid) {
- dup2(fd[0], 0);
- close(fd[0]);
- close(fd[1]);
- execv_git_cmd(argv);
- die("%s exec failed", argv[0]);
- }
- close(fd[0]);
- close(fd[1]);
- while (waitpid(pid, &status, 0) < 0) {
- if (errno != EINTR)
- die("waiting for %s: %s", argv[0], strerror(errno));
- }
- if (WIFEXITED(status)) {
- int code = WEXITSTATUS(status);
- if (code)
- die("%s died with error code %d", argv[0], code);
- return 0;
- }
- if (WIFSIGNALED(status)) {
- int sig = WTERMSIG(status);
- die("%s died of signal %d", argv[0], sig);
- }
- die("%s died of unnatural causes %d", argv[0], status);
-}
-
-static int fetch_pack(int fd[2], int nr_match, char **match)
-{
- struct ref *ref;
- unsigned char sha1[20];
-
- get_remote_heads(fd[0], &ref, 0, NULL, 0);
- if (is_repository_shallow() && !server_supports("shallow"))
- die("Server does not support shallow clients");
- if (server_supports("multi_ack")) {
- if (verbose)
- fprintf(stderr, "Server supports multi_ack\n");
- multi_ack = 1;
- }
- if (server_supports("side-band-64k")) {
- if (verbose)
- fprintf(stderr, "Server supports side-band-64k\n");
- use_sideband = 2;
- }
- else if (server_supports("side-band")) {
- if (verbose)
- fprintf(stderr, "Server supports side-band\n");
- use_sideband = 1;
- }
- if (!ref) {
- packet_flush(fd[1]);
- die("no matching remote head");
- }
- if (everything_local(&ref, nr_match, match)) {
- packet_flush(fd[1]);
- goto all_done;
- }
- if (find_common(fd, sha1, ref) < 0)
- if (keep_pack != 1)
- /* When cloning, it is not unusual to have
- * no common commit.
- */
- fprintf(stderr, "warning: no common commits\n");
-
- if (get_pack(fd))
- die("git-fetch-pack: fetch failed.");
-
- all_done:
- while (ref) {
- printf("%s %s\n",
- sha1_to_hex(ref->old_sha1), ref->name);
- ref = ref->next;
- }
- return 0;
-}
-
-static int remove_duplicates(int nr_heads, char **heads)
-{
- int src, dst;
-
- for (src = dst = 0; src < nr_heads; src++) {
- /* If heads[src] is different from any of
- * heads[0..dst], push it in.
- */
- int i;
- for (i = 0; i < dst; i++) {
- if (!strcmp(heads[i], heads[src]))
- break;
- }
- if (i < dst)
- continue;
- if (src != dst)
- heads[dst] = heads[src];
- dst++;
- }
- heads[dst] = 0;
- return dst;
-}
-
-static int fetch_pack_config(const char *var, const char *value)
-{
- if (strcmp(var, "fetch.unpacklimit") == 0) {
- fetch_unpack_limit = git_config_int(var, value);
- return 0;
- }
-
- if (strcmp(var, "transfer.unpacklimit") == 0) {
- transfer_unpack_limit = git_config_int(var, value);
- return 0;
- }
-
- return git_default_config(var, value);
-}
-
-static struct lock_file lock;
-
-int main(int argc, char **argv)
-{
- int i, ret, nr_heads;
- char *dest = NULL, **heads;
- int fd[2];
- pid_t pid;
- struct stat st;
-
- setup_git_directory();
- git_config(fetch_pack_config);
-
- if (0 <= transfer_unpack_limit)
- unpack_limit = transfer_unpack_limit;
- else if (0 <= fetch_unpack_limit)
- unpack_limit = fetch_unpack_limit;
-
- nr_heads = 0;
- heads = NULL;
- for (i = 1; i < argc; i++) {
- char *arg = argv[i];
-
- if (*arg == '-') {
- if (!prefixcmp(arg, "--upload-pack=")) {
- uploadpack = arg + 14;
- continue;
- }
- if (!prefixcmp(arg, "--exec=")) {
- uploadpack = arg + 7;
- continue;
- }
- if (!strcmp("--quiet", arg) || !strcmp("-q", arg)) {
- quiet = 1;
- continue;
- }
- if (!strcmp("--keep", arg) || !strcmp("-k", arg)) {
- keep_pack++;
- unpack_limit = 0;
- continue;
- }
- if (!strcmp("--thin", arg)) {
- use_thin_pack = 1;
- continue;
- }
- if (!strcmp("--all", arg)) {
- fetch_all = 1;
- continue;
- }
- if (!strcmp("-v", arg)) {
- verbose = 1;
- continue;
- }
- if (!prefixcmp(arg, "--depth=")) {
- depth = strtol(arg + 8, NULL, 0);
- if (stat(git_path("shallow"), &st))
- st.st_mtime = 0;
- continue;
- }
- if (!strcmp("--no-progress", arg)) {
- no_progress = 1;
- continue;
- }
- usage(fetch_pack_usage);
- }
- dest = arg;
- heads = argv + i + 1;
- nr_heads = argc - i - 1;
- break;
- }
- if (!dest)
- usage(fetch_pack_usage);
- pid = git_connect(fd, dest, uploadpack, verbose ? CONNECT_VERBOSE : 0);
- if (pid < 0)
- return 1;
- if (heads && nr_heads)
- nr_heads = remove_duplicates(nr_heads, heads);
- ret = fetch_pack(fd, nr_heads, heads);
- close(fd[0]);
- close(fd[1]);
- ret |= finish_connect(pid);
-
- if (!ret && nr_heads) {
- /* If the heads to pull were given, we should have
- * consumed all of them by matching the remote.
- * Otherwise, 'git-fetch remote no-such-ref' would
- * silently succeed without issuing an error.
- */
- for (i = 0; i < nr_heads; i++)
- if (heads[i] && heads[i][0]) {
- error("no such remote ref %s", heads[i]);
- ret = 1;
- }
- }
-
- if (!ret && depth > 0) {
- struct cache_time mtime;
- char *shallow = git_path("shallow");
- int fd;
-
- mtime.sec = st.st_mtime;
-#ifdef USE_NSEC
- mtime.usec = st.st_mtim.usec;
-#endif
- if (stat(shallow, &st)) {
- if (mtime.sec)
- die("shallow file was removed during fetch");
- } else if (st.st_mtime != mtime.sec
-#ifdef USE_NSEC
- || st.st_mtim.usec != mtime.usec
-#endif
- )
- die("shallow file was changed during fetch");
-
- fd = hold_lock_file_for_update(&lock, shallow, 1);
- if (!write_shallow_commits(fd, 0)) {
- unlink(shallow);
- rollback_lock_file(&lock);
- } else {
- close(fd);
- commit_lock_file(&lock);
- }
- }
-
- return !!ret;
-}
--- /dev/null
+#ifndef FETCH_PACK_H
+#define FETCH_PACK_H
+
+struct fetch_pack_args
+{
+ const char *uploadpack;
+ int unpacklimit;
+ int depth;
+ unsigned quiet:1,
+ keep_pack:1,
+ lock_pack:1,
+ use_thin_pack:1,
+ fetch_all:1,
+ verbose:1,
+ no_progress:1;
+};
+
+struct ref *fetch_pack(struct fetch_pack_args *args,
+ const char *dest,
+ int nr_heads,
+ char **heads,
+ char **pack_lockfile);
+
+#endif
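The new header turns git-fetch-pack from a standalone main() into a callable API. A hypothetical caller is sketched below; the remote URL is made up, and the failure convention of the returned ref list is an assumption:

	struct fetch_pack_args args;
	struct ref *fetched;

	memset(&args, 0, sizeof(args));
	args.uploadpack = "git-upload-pack";
	args.quiet = 1;
	fetched = fetch_pack(&args, "host.example.com:/repo.git",
			     0, NULL, NULL);	/* no explicit heads, no lockfile */
	if (!fetched)
		die("fetch failed");		/* assumed: NULL means failure */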
+++ /dev/null
-#include "cache.h"
-#include "fetch.h"
-#include "commit.h"
-#include "tree.h"
-#include "tree-walk.h"
-#include "tag.h"
-#include "blob.h"
-#include "refs.h"
-
-int get_tree = 0;
-int get_history = 0;
-int get_all = 0;
-int get_verbosely = 0;
-int get_recover = 0;
-static unsigned char current_commit_sha1[20];
-
-void pull_say(const char *fmt, const char *hex)
-{
- if (get_verbosely)
- fprintf(stderr, fmt, hex);
-}
-
-static void report_missing(const struct object *obj)
-{
- char missing_hex[41];
- strcpy(missing_hex, sha1_to_hex(obj->sha1));;
- fprintf(stderr, "Cannot obtain needed %s %s\n",
- obj->type ? typename(obj->type): "object", missing_hex);
- if (!is_null_sha1(current_commit_sha1))
- fprintf(stderr, "while processing commit %s.\n",
- sha1_to_hex(current_commit_sha1));
-}
-
-static int process(struct object *obj);
-
-static int process_tree(struct tree *tree)
-{
- struct tree_desc desc;
- struct name_entry entry;
-
- if (parse_tree(tree))
- return -1;
-
- init_tree_desc(&desc, tree->buffer, tree->size);
- while (tree_entry(&desc, &entry)) {
- struct object *obj = NULL;
-
- /* submodule commits are not stored in the superproject */
- if (S_ISGITLINK(entry.mode))
- continue;
- if (S_ISDIR(entry.mode)) {
- struct tree *tree = lookup_tree(entry.sha1);
- if (tree)
- obj = &tree->object;
- }
- else {
- struct blob *blob = lookup_blob(entry.sha1);
- if (blob)
- obj = &blob->object;
- }
- if (!obj || process(obj))
- return -1;
- }
- free(tree->buffer);
- tree->buffer = NULL;
- tree->size = 0;
- return 0;
-}
-
-#define COMPLETE (1U << 0)
-#define SEEN (1U << 1)
-#define TO_SCAN (1U << 2)
-
-static struct commit_list *complete = NULL;
-
-static int process_commit(struct commit *commit)
-{
- if (parse_commit(commit))
- return -1;
-
- while (complete && complete->item->date >= commit->date) {
- pop_most_recent_commit(&complete, COMPLETE);
- }
-
- if (commit->object.flags & COMPLETE)
- return 0;
-
- hashcpy(current_commit_sha1, commit->object.sha1);
-
- pull_say("walk %s\n", sha1_to_hex(commit->object.sha1));
-
- if (get_tree) {
- if (process(&commit->tree->object))
- return -1;
- if (!get_all)
- get_tree = 0;
- }
- if (get_history) {
- struct commit_list *parents = commit->parents;
- for (; parents; parents = parents->next) {
- if (process(&parents->item->object))
- return -1;
- }
- }
- return 0;
-}
-
-static int process_tag(struct tag *tag)
-{
- if (parse_tag(tag))
- return -1;
- return process(tag->tagged);
-}
-
-static struct object_list *process_queue = NULL;
-static struct object_list **process_queue_end = &process_queue;
-
-static int process_object(struct object *obj)
-{
- if (obj->type == OBJ_COMMIT) {
- if (process_commit((struct commit *)obj))
- return -1;
- return 0;
- }
- if (obj->type == OBJ_TREE) {
- if (process_tree((struct tree *)obj))
- return -1;
- return 0;
- }
- if (obj->type == OBJ_BLOB) {
- return 0;
- }
- if (obj->type == OBJ_TAG) {
- if (process_tag((struct tag *)obj))
- return -1;
- return 0;
- }
- return error("Unable to determine requirements "
- "of type %s for %s",
- typename(obj->type), sha1_to_hex(obj->sha1));
-}
-
-static int process(struct object *obj)
-{
- if (obj->flags & SEEN)
- return 0;
- obj->flags |= SEEN;
-
- if (has_sha1_file(obj->sha1)) {
- /* We already have it, so we should scan it now. */
- obj->flags |= TO_SCAN;
- }
- else {
- if (obj->flags & COMPLETE)
- return 0;
- prefetch(obj->sha1);
- }
-
- object_list_insert(obj, process_queue_end);
- process_queue_end = &(*process_queue_end)->next;
- return 0;
-}
-
-static int loop(void)
-{
- struct object_list *elem;
-
- while (process_queue) {
- struct object *obj = process_queue->item;
- elem = process_queue;
- process_queue = elem->next;
- free(elem);
- if (!process_queue)
- process_queue_end = &process_queue;
-
- /* If we are not scanning this object, we placed it in
- * the queue because we needed to fetch it first.
- */
- if (! (obj->flags & TO_SCAN)) {
- if (fetch(obj->sha1)) {
- report_missing(obj);
- return -1;
- }
- }
- if (!obj->type)
- parse_object(obj->sha1);
- if (process_object(obj))
- return -1;
- }
- return 0;
-}
-
-static int interpret_target(char *target, unsigned char *sha1)
-{
- if (!get_sha1_hex(target, sha1))
- return 0;
- if (!check_ref_format(target)) {
- if (!fetch_ref(target, sha1)) {
- return 0;
- }
- }
- return -1;
-}
-
-static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
-{
- struct commit *commit = lookup_commit_reference_gently(sha1, 1);
- if (commit) {
- commit->object.flags |= COMPLETE;
- insert_by_date(commit, &complete);
- }
- return 0;
-}
-
-int pull_targets_stdin(char ***target, const char ***write_ref)
-{
- int targets = 0, targets_alloc = 0;
- struct strbuf buf;
- *target = NULL; *write_ref = NULL;
- strbuf_init(&buf, 0);
- while (1) {
- char *rf_one = NULL;
- char *tg_one;
-
- if (strbuf_getline(&buf, stdin, '\n') == EOF)
- break;
- tg_one = buf.buf;
- rf_one = strchr(tg_one, '\t');
- if (rf_one)
- *rf_one++ = 0;
-
- if (targets >= targets_alloc) {
- targets_alloc = targets_alloc ? targets_alloc * 2 : 64;
- *target = xrealloc(*target, targets_alloc * sizeof(**target));
- *write_ref = xrealloc(*write_ref, targets_alloc * sizeof(**write_ref));
- }
- (*target)[targets] = xstrdup(tg_one);
- (*write_ref)[targets] = rf_one ? xstrdup(rf_one) : NULL;
- targets++;
- }
- strbuf_release(&buf);
- return targets;
-}
-
-void pull_targets_free(int targets, char **target, const char **write_ref)
-{
- while (targets--) {
- free(target[targets]);
- if (write_ref && write_ref[targets])
- free((char *) write_ref[targets]);
- }
-}
-
-int pull(int targets, char **target, const char **write_ref,
- const char *write_ref_log_details)
-{
- struct ref_lock **lock = xcalloc(targets, sizeof(struct ref_lock *));
- unsigned char *sha1 = xmalloc(targets * 20);
- char *msg;
- int ret;
- int i;
-
- save_commit_buffer = 0;
- track_object_refs = 0;
-
- for (i = 0; i < targets; i++) {
- if (!write_ref || !write_ref[i])
- continue;
-
- lock[i] = lock_ref_sha1(write_ref[i], NULL);
- if (!lock[i]) {
- error("Can't lock ref %s", write_ref[i]);
- goto unlock_and_fail;
- }
- }
-
- if (!get_recover)
- for_each_ref(mark_complete, NULL);
-
- for (i = 0; i < targets; i++) {
- if (interpret_target(target[i], &sha1[20 * i])) {
- error("Could not interpret %s as something to pull", target[i]);
- goto unlock_and_fail;
- }
- if (process(lookup_unknown_object(&sha1[20 * i])))
- goto unlock_and_fail;
- }
-
- if (loop())
- goto unlock_and_fail;
-
- if (write_ref_log_details) {
- msg = xmalloc(strlen(write_ref_log_details) + 12);
- sprintf(msg, "fetch from %s", write_ref_log_details);
- } else {
- msg = NULL;
- }
- for (i = 0; i < targets; i++) {
- if (!write_ref || !write_ref[i])
- continue;
- ret = write_ref_sha1(lock[i], &sha1[20 * i], msg ? msg : "fetch (unknown)");
- lock[i] = NULL;
- if (ret)
- goto unlock_and_fail;
- }
- free(msg);
-
- return 0;
-
-
-unlock_and_fail:
- for (i = 0; i < targets; i++)
- if (lock[i])
- unlock_ref(lock[i]);
- return -1;
-}
+++ /dev/null
-#ifndef PULL_H
-#define PULL_H
-
-/*
- * Fetch object given SHA1 from the remote, and store it locally under
- * GIT_OBJECT_DIRECTORY. Return 0 on success, -1 on failure. To be
- * provided by the particular implementation.
- */
-extern int fetch(unsigned char *sha1);
-
-/*
- * Fetch the specified object and store it locally; fetch() will be
- * called later to determine success. To be provided by the particular
- * implementation.
- */
-extern void prefetch(unsigned char *sha1);
-
-/*
- * Fetch ref (relative to $GIT_DIR/refs) from the remote, and store
- * the 20-byte SHA1 in sha1. Return 0 on success, -1 on failure. To
- * be provided by the particular implementation.
- */
-extern int fetch_ref(char *ref, unsigned char *sha1);
-
-/* Set to fetch the target tree. */
-extern int get_tree;
-
-/* Set to fetch the commit history. */
-extern int get_history;
-
-/* Set to fetch the trees in the commit history. */
-extern int get_all;
-
-/* Set to be verbose */
-extern int get_verbosely;
-
-/* Set to check on all reachable objects. */
-extern int get_recover;
-
-/* Report what we got under get_verbosely */
-extern void pull_say(const char *, const char *);
-
-/* Load pull targets from stdin */
-extern int pull_targets_stdin(char ***target, const char ***write_ref);
-
-/* Free up loaded targets */
-extern void pull_targets_free(int targets, char **target, const char **write_ref);
-
-/* If write_ref is set, the ref filename to write the target value to. */
-/* If write_ref_log_details is set, additional text will appear in the ref log. */
-extern int pull(int targets, char **target, const char **write_ref,
- const char *write_ref_log_details);
-
-#endif /* PULL_H */
print ">> ";
}
my $line = <STDIN>;
- last if (!$line);
+ if (!$line) {
+ print "\n";
+ $opts->{ON_EOF}->() if $opts->{ON_EOF};
+ last;
+ }
chomp $line;
- my $donesomething = 0;
+ last if $line eq '';
for my $choice (split(/[\s,]+/, $line)) {
my $choose = 1;
my ($bottom, $top);
next TOPLOOP;
}
for ($i = $bottom-1; $i <= $top-1; $i++) {
- next if (@stuff <= $i);
+ next if (@stuff <= $i || $i < 0);
$chosen[$i] = $choose;
- $donesomething++;
}
}
- last if (!$donesomething || $opts->{IMMEDIATE});
+ last if ($opts->{IMMEDIATE});
}
for ($i = 0; $i < @stuff; $i++) {
if ($chosen[$i]) {
sub parse_hunk_header {
my ($line) = @_;
my ($o_ofs, $o_cnt, $n_ofs, $n_cnt) =
- $line =~ /^@@ -(\d+)(?:,(\d+)) \+(\d+)(?:,(\d+)) @@/;
+ $line =~ /^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@/;
+ $o_cnt = 1 unless defined $o_cnt;
+ $n_cnt = 1 unless defined $n_cnt;
return ($o_ofs, $o_cnt, $n_ofs, $n_cnt);
}
# it can be split, but we would need to take care of
# overlaps later.
- my ($o_ofs, $o_cnt, $n_ofs, $n_cnt) = parse_hunk_header($text->[0]);
+ my ($o_ofs, undef, $n_ofs) = parse_hunk_header($text->[0]);
my $hunk_start = 1;
- my $next_hunk_start;
OUTER:
while (1) {
for my $hunk (@split) {
$o_ofs = $hunk->{OLD};
$n_ofs = $hunk->{NEW};
- $o_cnt = $hunk->{OCNT};
- $n_cnt = $hunk->{NCNT};
+ my $o_cnt = $hunk->{OCNT};
+ my $n_cnt = $hunk->{NCNT};
my $head = ("@@ -$o_ofs" .
(($o_cnt != 1) ? ",$o_cnt" : '') .
sub find_last_o_ctx {
my ($it) = @_;
my $text = $it->{TEXT};
- my ($o_ofs, $o_cnt, $n_ofs, $n_cnt) = parse_hunk_header($text->[0]);
+ my ($o_ofs, $o_cnt) = parse_hunk_header($text->[0]);
my $i = @{$text};
my $last_o_ctx = $o_ofs + $o_cnt;
while (0 < --$i) {
for (grep { $_->{USE} } @in) {
my $text = $_->{TEXT};
- my ($o_ofs, $o_cnt, $n_ofs, $n_cnt) =
- parse_hunk_header($text->[0]);
+ my ($o_ofs) = parse_hunk_header($text->[0]);
if (defined $last_o_ctx &&
$o_ofs <= $last_o_ctx) {
merge_hunk($out[-1], $_);
@hunk = coalesce_overlapping_hunks(@hunk);
- my ($o_lofs, $n_lofs) = (0, 0);
+ my $n_lofs = 0;
my @result = ();
for (@hunk) {
my $text = $_->{TEXT};
parse_hunk_header($text->[0]);
if (!$_->{USE}) {
- if (!defined $o_cnt) { $o_cnt = 1; }
- if (!defined $n_cnt) { $n_cnt = 1; }
-
# We would have added ($n_cnt - $o_cnt) lines
# to the postimage if we were to use this hunk,
# but we didn't. So the line number that the next
if ($n_lofs) {
$n_ofs += $n_lofs;
$text->[0] = ("@@ -$o_ofs" .
- ((defined $o_cnt)
+ (($o_cnt != 1)
? ",$o_cnt" : '') .
" +$n_ofs" .
- ((defined $n_cnt)
+ (($n_cnt != 1)
? ",$n_cnt" : '') .
" @@\n");
}
SINGLETON => 1,
LIST_FLAT => 4,
HEADER => '*** Commands ***',
+ ON_EOF => \&quit_cmd,
IMMEDIATE => 1 }, @cmd);
if ($it) {
eval {
}
}
-my @z;
-
refresh();
status_cmd();
main_loop();
mkdir "$dotest/patch-merge-tmp-dir"
# First see if the patch records the index info that we can use.
- git apply -z --index-info "$dotest/patch" \
- >"$dotest/patch-merge-index-info" &&
- GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
- git update-index -z --index-info <"$dotest/patch-merge-index-info" &&
+ git apply --build-fake-ancestor "$dotest/patch-merge-tmp-index" \
+ "$dotest/patch" &&
GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
git write-tree >"$dotest/patch-merge-base+" ||
cannot_fallback "Repository lacks necessary blobs to fall back on 3-way merge."
resolvemsg= resume=
git_apply_opt=
-while case "$#" in 0) break;; esac
+while test $# != 0
do
case "$1" in
-d=*|--d=*|--do=*|--dot=*|--dote=*|--dotes=*|--dotest=*)
stop_here $this
fi
- echo
printf 'Applying %s\n' "$SUBJECT"
- echo
case "$resolved" in
'')
fi
tree=$(git write-tree) &&
- echo Wrote tree $tree &&
parent=$(git rev-parse --verify HEAD) &&
commit=$(git commit-tree $tree -p $parent <"$dotest/final-commit") &&
- echo Committed: $commit &&
git update-ref -m "$GIT_REFLOG_ACTION: $SUBJECT" HEAD $commit $parent ||
stop_here $this
"$GIT_DIR"/hooks/post-applypatch
fi
+ git gc --auto
+
go_next
done
#!/bin/sh
-USAGE='[start|bad|good|next|reset|visualize|replay|log|run]'
+USAGE='[start|bad|good|skip|next|reset|visualize|replay|log|run]'
LONG_USAGE='git bisect start [<bad> [<good>...]] [--] [<pathspec>...]
reset bisect state and start bisection.
git bisect bad [<rev>]
mark <rev> a known-bad revision.
git bisect good [<rev>...]
mark <rev>... known-good revisions.
+git bisect skip [<rev>...]
+ mark <rev>... untestable revisions.
git bisect next
find next bisection to test and check it out.
git bisect reset [<branch>]
branch=`cat "$GIT_DIR/head-name"`
else
branch=master
- fi
+ fi
git checkout $branch || exit
;;
refs/heads/*)
arg="$1"
case "$arg" in
--)
- shift
+ shift
break
;;
*)
- rev=$(git rev-parse --verify "$arg^{commit}" 2>/dev/null) || {
+ rev=$(git rev-parse --verify "$arg^{commit}" 2>/dev/null) || {
test $has_double_dash -eq 1 &&
die "'$arg' does not appear to be a valid revision"
break
}
- if [ $bad_seen -eq 0 ]; then
- bad_seen=1
- bisect_write_bad "$rev"
- else
- bisect_write_good "$rev"
- fi
- shift
+ case $bad_seen in
+ 0) state='bad' ; bad_seen=1 ;;
+ *) state='good' ;;
+ esac
+ bisect_write "$state" "$rev" 'nolog'
+ shift
;;
esac
- done
+ done
sq "$@" >"$GIT_DIR/BISECT_NAMES"
echo "git-bisect start$orig_args" >>"$GIT_DIR/BISECT_LOG"
bisect_auto_next
}
-bisect_bad() {
- bisect_autostart
- case "$#" in
- 0)
- rev=$(git rev-parse --verify HEAD) ;;
- 1)
- rev=$(git rev-parse --verify "$1^{commit}") ;;
- *)
- usage ;;
- esac || exit
- bisect_write_bad "$rev"
- echo "git-bisect bad $rev" >>"$GIT_DIR/BISECT_LOG"
- bisect_auto_next
-}
-
-bisect_write_bad() {
- rev="$1"
- echo "$rev" >"$GIT_DIR/refs/bisect/bad"
- echo "# bad: "$(git show-branch $rev) >>"$GIT_DIR/BISECT_LOG"
+bisect_write() {
+ state="$1"
+ rev="$2"
+ nolog="$3"
+ case "$state" in
+ bad) tag="$state" ;;
+ good|skip) tag="$state"-"$rev" ;;
+ *) die "Bad bisect_write argument: $state" ;;
+ esac
+ echo "$rev" >"$GIT_DIR/refs/bisect/$tag"
+ echo "# $state: "$(git show-branch $rev) >>"$GIT_DIR/BISECT_LOG"
+ test -z "$nolog" && echo "git-bisect $state $rev" >>"$GIT_DIR/BISECT_LOG"
}
-bisect_good() {
+bisect_state() {
bisect_autostart
- case "$#" in
- 0) revs=$(git rev-parse --verify HEAD) || exit ;;
- *) revs=$(git rev-parse --revs-only --no-flags "$@") &&
- test '' != "$revs" || die "Bad rev input: $@" ;;
+ state=$1
+ case "$#,$state" in
+ 0,*)
+ die "Please call 'bisect_state' with at least one argument." ;;
+ 1,bad|1,good|1,skip)
+ rev=$(git rev-parse --verify HEAD) ||
+ die "Bad rev input: HEAD"
+ bisect_write "$state" "$rev" ;;
+ 2,bad)
+ rev=$(git rev-parse --verify "$2^{commit}") ||
+ die "Bad rev input: $2"
+ bisect_write "$state" "$rev" ;;
+ *,good|*,skip)
+ shift
+ revs=$(git rev-parse --revs-only --no-flags "$@") &&
+ test '' != "$revs" || die "Bad rev input: $@"
+ for rev in $revs
+ do
+ rev=$(git rev-parse --verify "$rev^{commit}") ||
+ die "Bad rev commit: $rev^{commit}"
+ bisect_write "$state" "$rev"
+ done ;;
+ *)
+ usage ;;
esac
- for rev in $revs
- do
- rev=$(git rev-parse --verify "$rev^{commit}") || exit
- bisect_write_good "$rev"
- echo "git-bisect good $rev" >>"$GIT_DIR/BISECT_LOG"
-
- done
bisect_auto_next
}
-bisect_write_good() {
- rev="$1"
- echo "$rev" >"$GIT_DIR/refs/bisect/good-$rev"
- echo "# good: "$(git show-branch $rev) >>"$GIT_DIR/BISECT_LOG"
-}
-
bisect_next_check() {
missing_good= missing_bad=
git show-ref -q --verify refs/bisect/bad || missing_bad=t
bisect_next_check && bisect_next || :
}
+filter_skipped() {
+ _eval="$1"
+ _skip="$2"
+
+ if [ -z "$_skip" ]; then
+ eval $_eval
+ return
+ fi
+
+ # Let's parse the output of:
+ # "git rev-list --bisect-vars --bisect-all ..."
+ eval $_eval | while read hash line
+ do
+ case "$VARS,$FOUND,$TRIED,$hash" in
+ # We display some vars.
+ 1,*,*,*) echo "$hash $line" ;;
+
+ # Split line.
+ ,*,*,---*) ;;
+
+ # We had nothing to search.
+ ,,,bisect_rev*)
+ echo "bisect_rev="
+ VARS=1
+ ;;
+
+ # We did not find a good bisect rev.
+ # This should happen only if the "bad"
+ # commit is also a "skip" commit.
+ ,,*,bisect_rev*)
+ echo "bisect_rev=$TRIED"
+ VARS=1
+ ;;
+
+ # We are searching.
+ ,,*,*)
+ TRIED="${TRIED:+$TRIED|}$hash"
+ case "$_skip" in
+ *$hash*) ;;
+ *)
+ echo "bisect_rev=$hash"
+ echo "bisect_tried=\"$TRIED\""
+ FOUND=1
+ ;;
+ esac
+ ;;
+
+ # We have already found a rev to be tested.
+ ,1,*,bisect_rev*) VARS=1 ;;
+ ,1,*,*) ;;
+
+ # ???
+ *) die "filter_skipped error " \
+ "VARS: '$VARS' " \
+ "FOUND: '$FOUND' " \
+ "TRIED: '$TRIED' " \
+ "hash: '$hash' " \
+ "line: '$line'"
+ ;;
+ esac
+ done
+}
+
+exit_if_skipped_commits () {
+ _tried=$1
+ if expr "$_tried" : ".*[|].*" > /dev/null ; then
+ echo "There are only 'skip'ped commit left to test."
+ echo "The first bad commit could be any of:"
+ echo "$_tried" | sed -e 's/[|]/\n/g'
+ echo "We cannot bisect more!"
+ exit 2
+ fi
+}
+
bisect_next() {
- case "$#" in 0) ;; *) usage ;; esac
+ case "$#" in 0) ;; *) usage ;; esac
bisect_autostart
bisect_next_check good
+ skip=$(git for-each-ref --format='%(objectname)' \
+ "refs/bisect/skip-*" | tr '[\012]' ' ') || exit
+
+ BISECT_OPT=''
+ test -n "$skip" && BISECT_OPT='--bisect-all'
+
bad=$(git rev-parse --verify refs/bisect/bad) &&
good=$(git for-each-ref --format='^%(objectname)' \
"refs/bisect/good-*" | tr '[\012]' ' ') &&
- eval="git rev-list --bisect-vars $good $bad --" &&
+ eval="git rev-list --bisect-vars $BISECT_OPT $good $bad --" &&
eval="$eval $(cat "$GIT_DIR/BISECT_NAMES")" &&
- eval=$(eval "$eval") &&
+ eval=$(filter_skipped "$eval" "$skip") &&
eval "$eval" || exit
if [ -z "$bisect_rev" ]; then
exit 1
fi
if [ "$bisect_rev" = "$bad" ]; then
+ exit_if_skipped_commits "$bisect_tried"
echo "$bisect_rev is first bad commit"
git diff-tree --pretty $bisect_rev
exit 0
fi
+ # We should exit here only if the "bad"
+ # commit is also a "skip" commit (see above).
+ exit_if_skipped_commits "$bisect_rev"
+
echo "Bisecting: $bisect_nr revisions left to test after this"
echo "$bisect_rev" >"$GIT_DIR/refs/heads/new-bisect"
git checkout -q new-bisect || exit
else
branch=master
fi ;;
- 1) git show-ref --verify --quiet -- "refs/heads/$1" || {
- echo >&2 "$1 does not seem to be a valid branch"
- exit 1
- }
+ 1) git show-ref --verify --quiet -- "refs/heads/$1" ||
+ die "$1 does not seem to be a valid branch"
branch="$1" ;;
- *)
+ *)
usage ;;
esac
if git checkout "$branch"; then
}
bisect_replay () {
- test -r "$1" || {
- echo >&2 "cannot read $1 for replaying"
- exit 1
- }
+ test -r "$1" || die "cannot read $1 for replaying"
bisect_reset
while read bisect command rev
do
case "$command" in
start)
cmd="bisect_start $rev"
- eval "$cmd"
- ;;
- good)
- echo "$rev" >"$GIT_DIR/refs/bisect/good-$rev"
- echo "# good: "$(git show-branch $rev) >>"$GIT_DIR/BISECT_LOG"
- echo "git-bisect good $rev" >>"$GIT_DIR/BISECT_LOG"
- ;;
- bad)
- echo "$rev" >"$GIT_DIR/refs/bisect/bad"
- echo "# bad: "$(git show-branch $rev) >>"$GIT_DIR/BISECT_LOG"
- echo "git-bisect bad $rev" >>"$GIT_DIR/BISECT_LOG"
- ;;
+ eval "$cmd" ;;
+ good|bad|skip)
+ bisect_write "$command" "$rev" ;;
*)
- echo >&2 "?? what are you talking about?"
- exit 1 ;;
+ die "?? what are you talking about?" ;;
esac
done <"$1"
bisect_auto_next
exit $res
fi
- # Use "bisect_good" or "bisect_bad"
- # depending on run success or failure.
- if [ $res -gt 0 ]; then
- next_bisect='bisect_bad'
+ # Find current state depending on run success or failure.
+ # A special exit code of 125 means cannot test.
+ if [ $res -eq 125 ]; then
+ state='skip'
+ elif [ $res -gt 0 ]; then
+ state='bad'
else
- next_bisect='bisect_good'
+ state='good'
fi
- # We have to use a subshell because bisect_good or
- # bisect_bad functions can exit.
- ( $next_bisect > "$GIT_DIR/BISECT_RUN" )
+ # We have to use a subshell because "bisect_state" can exit.
+ ( bisect_state $state > "$GIT_DIR/BISECT_RUN" )
res=$?
cat "$GIT_DIR/BISECT_RUN"
+ if grep "first bad commit could be any of" "$GIT_DIR/BISECT_RUN" \
+ > /dev/null; then
+ echo >&2 "bisect run cannot continue any more"
+ exit $res
+ fi
+
if [ $res -ne 0 ]; then
echo >&2 "bisect run failed:"
- echo >&2 "$next_bisect exited with error code $res"
+ echo >&2 "'bisect_state $state' exited with error code $res"
exit $res
fi
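The exit code 125 convention above gives test scripts a way to say "this revision cannot be judged". A hypothetical test program for "git bisect run", written in C for concreteness; the make and ./run-tests commands are placeholders:

	#include <stdlib.h>

	/* exit 0 = good, 125 = cannot test (skip), other nonzero = bad */
	int main(void)
	{
		if (system("make >/dev/null 2>&1") != 0)
			return 125;	/* does not even build: skip this commit */
		return system("./run-tests") ? 1 : 0;
	}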
case "$cmd" in
start)
bisect_start "$@" ;;
- bad)
- bisect_bad "$@" ;;
- good)
- bisect_good "$@" ;;
+ bad|good|skip)
+ bisect_state "$cmd" "$@" ;;
next)
# Not sure we want "next" at the UI level anymore.
bisect_next "$@" ;;
git ls-files --error-unmatch -- "$@" >/dev/null || exit
git ls-files -- "$@" |
git checkout-index -f -u --stdin
+
+ # Run a post-checkout hook -- the HEAD does not change so the
+ # current HEAD is passed in for both args
+ if test -x "$GIT_DIR"/hooks/post-checkout; then
+ "$GIT_DIR"/hooks/post-checkout $old $old 0
+ fi
+
exit $?
else
# Make sure we did not fall back on $arg^{tree} codepath
else
exit 1
fi
+
+# Run a post-checkout hook
+if test -x "$GIT_DIR"/hooks/post-checkout; then
+ "$GIT_DIR"/hooks/post-checkout $old $new 1
+fi
rm_refuse="echo Not removing"
echo1="echo"
-while case "$#" in 0) break ;; esac
+while test $# != 0
do
case "$1" in
-d)
) 2>/dev/null
}
-if [ -n "$GIT_SSL_NO_VERIFY" ]; then
+if [ -n "$GIT_SSL_NO_VERIFY" -o \
+ "`git config --bool http.sslVerify`" = false ]; then
curl_extra_args="-k"
fi
exit 1
}
+TMP_INDEX=
THIS_INDEX="$GIT_DIR/index"
NEXT_INDEX="$GIT_DIR/next-index$$"
rm -f "$NEXT_INDEX"
only_include_assumed=
untracked_files=
templatefile="`git config commit.template`"
-while case "$#" in 0) break;; esac
+while test $# != 0
do
case "$1" in
-F|--F|-f|--f|--fi|--fil|--file)
if test "$ret" = 0
then
+ git gc --auto
if test -x "$GIT_DIR"/hooks/post-commit
then
"$GIT_DIR"/hooks/post-commit
extern int gitsetenv(const char *, const char *, int);
#endif
+#ifdef NO_MKDTEMP
+#define mkdtemp gitmkdtemp
+extern char *gitmkdtemp(char *);
+#endif
+
#ifdef NO_UNSETENV
#define unsetenv gitunsetenv
extern void gitunsetenv(const char *);
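Platforms built with NO_MKDTEMP get a gitmkdtemp() replacement; its definition is not part of this hunk. A minimal fallback might pair mktemp(3) with mkdir(2), as sketched below (an assumption, and unlike mkdtemp(3) it is not race-free):

	#include <stdlib.h>
	#include <sys/stat.h>

	char *gitmkdtemp(char *path)
	{
		/* mktemp() rewrites the XXXXXX suffix in place; an empty
		 * result or a failed mkdir() signals failure. */
		if (!*mktemp(path) || mkdir(path, 0700))
			return NULL;
		return path;
	}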
@cvs = ('cvs');
}
-# setup a tempdir
-our ($tmpdir, $tmpdirname) = tempdir('git-cvsapplycommit-XXXXXX',
- TMPDIR => 1,
- CLEANUP => 1);
-
# resolve target commit
my $commit;
$commit = pop @ARGV;
print "Checking if patch will apply\n";
my @stat;
-open APPLY, "GIT_DIR= git-apply $context --binary --summary --numstat<.cvsexportcommit.diff|" || die "cannot patch";
+open APPLY, "GIT_DIR= git-apply $context --summary --numstat<.cvsexportcommit.diff|" || die "cannot patch";
@stat=<APPLY>;
close APPLY || die "Cannot patch";
my (@bfiles,@files,@afiles,@dfiles);
}
print "Applying\n";
-`GIT_DIR= git-apply $context --binary --summary --numstat --apply <.cvsexportcommit.diff` || die "cannot patch";
+`GIT_DIR= git-apply $context --summary --numstat --apply <.cvsexportcommit.diff` || die "cannot patch";
print "Patch applied successfully. Adding new files and directories to CVS\n";
my $dirtypatch = 0;
+
+#
+# We have to add the directories in order, otherwise we will have
+# problems when we try to add a subdirectory of a directory we
+# have not added yet.
+#
+# Luckily this is easy to deal with by sorting the directories and
+# dealing with the shortest ones first.
+#
+@dirs = sort { length $a <=> length $b} @dirs;
+
foreach my $d (@dirs) {
if (system(@cvs,'add',$d)) {
$dirtypatch = 1;
}
my $request = $1;
$line = <STDIN>; chomp $line;
- req_Root('root', $line) # reuse Root
- or die "E Invalid root $line \n";
+ unless (req_Root('root', $line)) { # reuse Root
+ print "E Invalid root $line \n";
+ exit 1;
+ }
$line = <STDIN>; chomp $line;
unless ($line eq 'anonymous') {
print "E Only anonymous user allowed via pserver\n";
+++ /dev/null
-#!/bin/sh
-#
-
-USAGE='<fetch-options> <repository> <refspec>...'
-SUBDIRECTORY_OK=Yes
-. git-sh-setup
-set_reflog_action "fetch $*"
-cd_to_toplevel ;# probably unnecessary...
-
-. git-parse-remote
-_x40='[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
-_x40="$_x40$_x40$_x40$_x40$_x40$_x40$_x40$_x40"
-
-LF='
-'
-IFS="$LF"
-
-no_tags=
-tags=
-append=
-force=
-verbose=
-update_head_ok=
-exec=
-keep=
-shallow_depth=
-no_progress=
-test -t 1 || no_progress=--no-progress
-quiet=
-while case "$#" in 0) break ;; esac
-do
- case "$1" in
- -a|--a|--ap|--app|--appe|--appen|--append)
- append=t
- ;;
- --upl|--uplo|--uploa|--upload|--upload-|--upload-p|\
- --upload-pa|--upload-pac|--upload-pack)
- shift
- exec="--upload-pack=$1"
- ;;
- --upl=*|--uplo=*|--uploa=*|--upload=*|\
- --upload-=*|--upload-p=*|--upload-pa=*|--upload-pac=*|--upload-pack=*)
- exec=--upload-pack=$(expr "z$1" : 'z-[^=]*=\(.*\)')
- shift
- ;;
- -f|--f|--fo|--for|--forc|--force)
- force=t
- ;;
- -t|--t|--ta|--tag|--tags)
- tags=t
- ;;
- -n|--n|--no|--no-|--no-t|--no-ta|--no-tag|--no-tags)
- no_tags=t
- ;;
- -u|--u|--up|--upd|--upda|--updat|--update|--update-|--update-h|\
- --update-he|--update-hea|--update-head|--update-head-|\
- --update-head-o|--update-head-ok)
- update_head_ok=t
- ;;
- -q|--q|--qu|--qui|--quie|--quiet)
- quiet=--quiet
- ;;
- -v|--verbose)
- verbose="$verbose"Yes
- ;;
- -k|--k|--ke|--kee|--keep)
- keep='-k -k'
- ;;
- --depth=*)
- shallow_depth="--depth=`expr "z$1" : 'z-[^=]*=\(.*\)'`"
- ;;
- --depth)
- shift
- shallow_depth="--depth=$1"
- ;;
- -*)
- usage
- ;;
- *)
- break
- ;;
- esac
- shift
-done
-
-case "$#" in
-0)
- origin=$(get_default_remote)
- test -n "$(get_remote_url ${origin})" ||
- die "Where do you want to fetch from today?"
- set x $origin ; shift ;;
-esac
-
-if test -z "$exec"
-then
- # No command line override and we have configuration for the remote.
- exec="--upload-pack=$(get_uploadpack $1)"
-fi
-
-remote_nick="$1"
-remote=$(get_remote_url "$@")
-refs=
-rref=
-rsync_slurped_objects=
-
-if test "" = "$append"
-then
- : >"$GIT_DIR/FETCH_HEAD"
-fi
-
-# Global that is reused later
-ls_remote_result=$(git ls-remote $exec "$remote") ||
- die "Cannot get the repository state from $remote"
-
-append_fetch_head () {
- flags=
- test -n "$verbose" && flags="$flags$LF-v"
- test -n "$force$single_force" && flags="$flags$LF-f"
- GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
- git fetch--tool $flags append-fetch-head "$@"
-}
-
-# updating the current HEAD with git-fetch in a bare
-# repository is always fine.
-if test -z "$update_head_ok" && test $(is_bare_repository) = false
-then
- orig_head=$(git rev-parse --verify HEAD 2>/dev/null)
-fi
-
-# Allow --notags from remote.$1.tagopt
-case "$tags$no_tags" in
-'')
- case "$(git config --get "remote.$1.tagopt")" in
- --no-tags)
- no_tags=t ;;
- esac
-esac
-
-# If --tags (and later --heads or --all) is specified, then we are
-# not talking about defaults stored in Pull: line of remotes or
-# branches file, and just fetch those and refspecs explicitly given.
-# Otherwise we do what we always did.
-
-reflist=$(get_remote_refs_for_fetch "$@")
-if test "$tags"
-then
- taglist=`IFS=' ' &&
- echo "$ls_remote_result" |
- git show-ref --exclude-existing=refs/tags/ |
- while read sha1 name
- do
- echo ".${name}:${name}"
- done` || exit
- if test "$#" -gt 1
- then
- # remote URL plus explicit refspecs; we need to merge them.
- reflist="$reflist$LF$taglist"
- else
- # No explicit refspecs; fetch tags only.
- reflist=$taglist
- fi
-fi
-
-fetch_all_at_once () {
-
- eval=$(echo "$1" | git fetch--tool parse-reflist "-")
- eval "$eval"
-
- ( : subshell because we muck with IFS
- IFS=" $LF"
- (
- if test "$remote" = . ; then
- git show-ref $rref || echo failed "$remote"
- elif test -f "$remote" ; then
- test -n "$shallow_depth" &&
- die "shallow clone with bundle is not supported"
- git bundle unbundle "$remote" $rref ||
- echo failed "$remote"
- else
- if test -d "$remote" &&
-
- # The remote might be our alternate. With
- # this optimization we will bypass fetch-pack
- # altogether, which means we cannot be doing
- # the shallow stuff at all.
- test ! -f "$GIT_DIR/shallow" &&
- test -z "$shallow_depth" &&
-
- # See if all of what we are going to fetch are
- # connected to our repository's tips, in which
- # case we do not have to do any fetch.
- theirs=$(echo "$ls_remote_result" | \
- git fetch--tool -s pick-rref "$rref" "-") &&
-
- # This will barf when $theirs reach an object that
- # we do not have in our repository. Otherwise,
- # we already have everything the fetch would bring in.
- git rev-list --objects $theirs --not --all \
- >/dev/null 2>/dev/null
- then
- echo "$ls_remote_result" | \
- git fetch--tool pick-rref "$rref" "-"
- else
- flags=
- case $verbose in
- YesYes*)
- flags="-v"
- ;;
- esac
- git-fetch-pack --thin $exec $keep $shallow_depth \
- $quiet $no_progress $flags "$remote" $rref ||
- echo failed "$remote"
- fi
- fi
- ) |
- (
- flags=
- test -n "$verbose" && flags="$flags -v"
- test -n "$force" && flags="$flags -f"
- GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION" \
- git fetch--tool $flags native-store \
- "$remote" "$remote_nick" "$refs"
- )
- ) || exit
-
-}
-
-fetch_per_ref () {
- reflist="$1"
- refs=
- rref=
-
- for ref in $reflist
- do
- refs="$refs$LF$ref"
-
- # These are relative path from $GIT_DIR, typically starting at refs/
- # but may be HEAD
- if expr "z$ref" : 'z\.' >/dev/null
- then
- not_for_merge=t
- ref=$(expr "z$ref" : 'z\.\(.*\)')
- else
- not_for_merge=
- fi
- if expr "z$ref" : 'z+' >/dev/null
- then
- single_force=t
- ref=$(expr "z$ref" : 'z+\(.*\)')
- else
- single_force=
- fi
- remote_name=$(expr "z$ref" : 'z\([^:]*\):')
- local_name=$(expr "z$ref" : 'z[^:]*:\(.*\)')
-
- rref="$rref$LF$remote_name"
-
- # There are transports that can fetch only one head at a time...
- case "$remote" in
- http://* | https://* | ftp://*)
- test -n "$shallow_depth" &&
- die "shallow clone with http not supported"
- proto=`expr "$remote" : '\([^:]*\):'`
- if [ -n "$GIT_SSL_NO_VERIFY" ]; then
- curl_extra_args="-k"
- fi
- if [ -n "$GIT_CURL_FTP_NO_EPSV" -o \
- "`git config --bool http.noEPSV`" = true ]; then
- noepsv_opt="--disable-epsv"
- fi
-
- # Find $remote_name from ls-remote output.
- head=$(echo "$ls_remote_result" | \
- git fetch--tool -s pick-rref "$remote_name" "-")
- expr "z$head" : "z$_x40\$" >/dev/null ||
- die "No such ref $remote_name at $remote"
- echo >&2 "Fetching $remote_name from $remote using $proto"
- case "$quiet" in '') v=-v ;; *) v= ;; esac
- git-http-fetch $v -a "$head" "$remote" || exit
- ;;
- rsync://*)
- test -n "$shallow_depth" &&
- die "shallow clone with rsync not supported"
- TMP_HEAD="$GIT_DIR/TMP_HEAD"
- rsync -L -q "$remote/$remote_name" "$TMP_HEAD" || exit 1
- head=$(git rev-parse --verify TMP_HEAD)
- rm -f "$TMP_HEAD"
- case "$quiet" in '') v=-v ;; *) v= ;; esac
- test "$rsync_slurped_objects" || {
- rsync -a $v --ignore-existing --exclude info \
- "$remote/objects/" "$GIT_OBJECT_DIRECTORY/" || exit
-
- # Look at objects/info/alternates for rsync -- http will
- # support it natively and git native ones will do it on
- # the remote end. Not having that file is not a crime.
- rsync -q "$remote/objects/info/alternates" \
- "$GIT_DIR/TMP_ALT" 2>/dev/null ||
- rm -f "$GIT_DIR/TMP_ALT"
- if test -f "$GIT_DIR/TMP_ALT"
- then
- resolve_alternates "$remote" <"$GIT_DIR/TMP_ALT" |
- while read alt
- do
- case "$alt" in 'bad alternate: '*) die "$alt";; esac
- echo >&2 "Getting alternate: $alt"
- rsync -av --ignore-existing --exclude info \
- "$alt" "$GIT_OBJECT_DIRECTORY/" || exit
- done
- rm -f "$GIT_DIR/TMP_ALT"
- fi
- rsync_slurped_objects=t
- }
- ;;
- esac
-
- append_fetch_head "$head" "$remote" \
- "$remote_name" "$remote_nick" "$local_name" "$not_for_merge" || exit
-
- done
-
-}
-
-fetch_main () {
- case "$remote" in
- http://* | https://* | ftp://* | rsync://* )
- fetch_per_ref "$@"
- ;;
- *)
- fetch_all_at_once "$@"
- ;;
- esac
-}
-
-fetch_main "$reflist" || exit
-
-# automated tag following
-case "$no_tags$tags" in
-'')
- case "$reflist" in
- *:refs/*)
- # effective only when we are following remote branch
- # using local tracking branch.
- taglist=$(IFS=' ' &&
- echo "$ls_remote_result" |
- git show-ref --exclude-existing=refs/tags/ |
- while read sha1 name
- do
- git cat-file -t "$sha1" >/dev/null 2>&1 || continue
- echo >&2 "Auto-following $name"
- echo ".${name}:${name}"
- done)
- esac
- case "$taglist" in
- '') ;;
- ?*)
- # do not deepen a shallow tree when following tags
- shallow_depth=
- fetch_main "$taglist" || exit ;;
- esac
-esac
-
-# If the original head was empty (i.e. no "master" yet), or
-# if we were told not to worry, we do not have to check.
-case "$orig_head" in
-'')
- ;;
-?*)
- curr_head=$(git rev-parse --verify HEAD 2>/dev/null)
- if test "$curr_head" != "$orig_head"
- then
- git update-ref \
- -m "$GIT_REFLOG_ACTION: Undoing incorrectly fetched HEAD." \
- HEAD "$orig_head"
- die "Cannot fetch into the current branch."
- fi
- ;;
-esac
. git-sh-setup
+git diff-files --quiet &&
+ git diff-index --cached --quiet HEAD ||
+ die "Cannot rewrite branch(es) with a dirty working directory."
+
tempdir=.git-rewrite
filter_env=
filter_tree=
filter_subdir=
orig_namespace=refs/original/
force=
-while case "$#" in 0) usage;; esac
+while :
do
+ test $# = 0 && usage
case "$1" in
--)
shift
esac
done < "$tempdir"/backup-refs
+ORIG_GIT_DIR="$GIT_DIR"
+ORIG_GIT_WORK_TREE="$GIT_WORK_TREE"
+ORIG_GIT_INDEX_FILE="$GIT_INDEX_FILE"
export GIT_DIR GIT_WORK_TREE=.
# These refs should be updated if their heads were rewritten
test $count -gt 0 && echo "These refs were rewritten:"
git show-ref | grep ^"$orig_namespace"
+unset GIT_DIR GIT_WORK_TREE GIT_INDEX_FILE
+test -z "$ORIG_GIT_DIR" || GIT_DIR="$ORIG_GIT_DIR" && export GIT_DIR
+test -z "$ORIG_GIT_WORK_TREE" || GIT_WORK_TREE="$ORIG_GIT_WORK_TREE" &&
+ export GIT_WORK_TREE
+test -z "$ORIG_GIT_INDEX_FILE" || GIT_INDEX_FILE="$ORIG_GIT_INDEX_FILE" &&
+ export GIT_INDEX_FILE
+git read-tree -u -m HEAD
+
exit $ret
global env _search_exe _search_path
if {$_search_path eq {}} {
- if {[is_Cygwin]} {
+ if {[is_Cygwin] && [regexp {^(/|\.:)} $env(PATH)]} {
set _search_path [split [exec cygpath \
--windows \
--path \
set _git [_which git]
if {$_git eq {}} {
catch {wm withdraw .}
- error_popup "Cannot find git in PATH."
+ tk_messageBox \
+ -icon error \
+ -type ok \
+ -title [mc "git-gui: fatal error"] \
+ -message [mc "Cannot find git in PATH."]
exit 1
}
regsub {\.[0-9]+\.g[0-9a-f]+$} $_git_version {} _git_version
regsub {\.rc[0-9]+$} $_git_version {} _git_version
regsub {\.GIT$} $_git_version {} _git_version
+regsub {\.[a-zA-Z]+\.[0-9]+$} $_git_version {} _git_version
if {![regexp {^[1-9]+(\.[0-9]+)+$} $_git_version]} {
catch {wm withdraw .}
}
}
+if {[is_Cygwin]} {
+ set is_git_info_link {}
+ set is_git_info_exclude {}
+ proc have_info_exclude {} {
+ global is_git_info_link is_git_info_exclude
+
+ if {$is_git_info_link eq {}} {
+ set is_git_info_link [file isfile [gitdir info.lnk]]
+ }
+
+ if {$is_git_info_link} {
+ if {$is_git_info_exclude eq {}} {
+ if {[catch {exec test -f [gitdir info exclude]}]} {
+ set is_git_info_exclude 0
+ } else {
+ set is_git_info_exclude 1
+ }
+ }
+ return $is_git_info_exclude
+ } else {
+ return [file readable [gitdir info exclude]]
+ }
+ }
+} else {
+ proc have_info_exclude {} {
+ return [file readable [gitdir info exclude]]
+ }
+}
+
proc rescan_stage2 {fd after} {
global rescan_active buf_rdi buf_rdf buf_rlo
}
set ls_others [list --exclude-per-directory=.gitignore]
- set info_exclude [gitdir info exclude]
- if {[file readable $info_exclude]} {
- lappend ls_others "--exclude-from=$info_exclude"
+ if {[have_info_exclude]} {
+ lappend ls_others "--exclude-from=[gitdir info exclude]"
}
set user_exclude [get_config core.excludesfile]
if {$user_exclude ne {} && [file readable $user_exclude]} {
}
proc ui_status {msg} {
- $::main_status show $msg
+ global main_status
+ if {[info exists main_status]} {
+ $main_status show $msg
+ }
}
proc ui_ready {{test {}}} {
- $::main_status show {Ready.} $test
+ global main_status
+ if {[info exists main_status]} {
+ $main_status show [mc "Ready."] $test
+ }
}
proc escape_path {path} {
if {! [file exists $exe]} {
error_popup "Unable to start gitk:\n\n$exe does not exist"
} else {
+ global env
+
+ if {[info exists env(GIT_DIR)]} {
+ set old_GIT_DIR $env(GIT_DIR)
+ } else {
+ set old_GIT_DIR {}
+ }
+
+ set pwd [pwd]
+ cd [file dirname [gitdir]]
+ set env(GIT_DIR) [file tail [gitdir]]
+
eval exec $cmd $revs &
+
+ if {$old_GIT_DIR eq {}} {
+ unset env(GIT_DIR)
+ } else {
+ set env(GIT_DIR) $old_GIT_DIR
+ }
+ cd $pwd
+
ui_status $::starting_gitk_msg
after 10000 {
ui_ready $starting_gitk_msg
set font [lindex $option 1]
if {[catch {
foreach {cn cv} $repo_config(gui.$name) {
- font configure $font $cn $cv
+ font configure $font $cn $cv -weight normal
}
} err]} {
error_popup "Invalid font specified in gui.$name:\n\n$err"
global repo_config
gets $fd_wt tree_id
- if {$tree_id eq {} || [catch {close $fd_wt} err]} {
+ if {[catch {close $fd_wt} err]} {
error_popup "write-tree failed:\n\n$err"
ui_status {Commit failed.}
unlock_index
} else {
$w.m.t delete $console_cr end
$w.m.t insert end "\n"
- $w.m.t insert end [string range $buf $c $cr]
+ $w.m.t insert end [string range $buf $c [expr {$cr - 1}]]
set c $cr
incr c
}
set prior [string range $meter 0 $r]
set meter [string range $meter [expr {$r + 1}] end]
- if {[regexp "\\((\\d+)/(\\d+)\\)\\s+done\r\$" $prior _j a b]} {
+ set p "\\((\\d+)/(\\d+)\\)"
+ if {[regexp ":\\s*\\d+% $p\(?:, done.\\s*\n|\\s*\r)\$" $prior _j a b]} {
+ update $this $a $b
+ } elseif {[regexp "$p\\s+done\r\$" $prior _j a b]} {
update $this $a $b
}
}
start_httpd () {
httpd_only="`echo $httpd | cut -f1 -d' '`"
- if test "`expr index $httpd_only /`" -eq '1' || \
- which $httpd_only >/dev/null
+ if case "$httpd_only" in /*) : ;; *) which $httpd_only >/dev/null;; esac
then
$httpd $fqgitdir/gitweb/httpd.conf
else
# many httpds are installed in /usr/sbin or /usr/local/sbin
# these days and those are not in most users $PATHs
- for i in /usr/local/sbin /usr/sbin
+ # in addition, we may have generated a server script
+ # in $fqgitdir/gitweb.
+ for i in /usr/local/sbin /usr/sbin "$fqgitdir/gitweb"
do
if test -x "$i/$httpd_only"
then
test -f "$fqgitdir/pid" && kill `cat "$fqgitdir/pid"`
}
-while case "$#" in 0) break ;; esac
+while test $# != 0
do
case "$1" in
--stop|stop)
export GIT_EXEC_PATH GIT_DIR
+webrick_conf () {
+ # generate a standalone server script in $fqgitdir/gitweb.
+ cat >"$fqgitdir/gitweb/$httpd.rb" <<EOF
+require 'webrick'
+require 'yaml'
+options = YAML::load_file(ARGV[0])
+options[:StartCallback] = proc do
+ File.open(options[:PidFile],"w") do |f|
+ f.puts Process.pid
+ end
+end
+options[:ServerType] = WEBrick::Daemon
+server = WEBrick::HTTPServer.new(options)
+['INT', 'TERM'].each do |signal|
+ trap(signal) {server.shutdown}
+end
+server.start
+EOF
+ # generate a shell script to invoke the above ruby script,
+ # which assumes _ruby_ is in the user's $PATH. That is one
+ # portable way to run ruby, which could be installed anywhere.
+ cat >"$fqgitdir/gitweb/$httpd" <<EOF
+#!/bin/sh
+exec ruby "$fqgitdir/gitweb/$httpd.rb" "\$@"
+EOF
+ chmod +x "$fqgitdir/gitweb/$httpd"
+
+ cat >"$conf" <<EOF
+:Port: $port
+:DocumentRoot: "$fqgitdir/gitweb"
+:DirectoryIndex: ["gitweb.cgi"]
+:PidFile: "$fqgitdir/pid"
+EOF
+ test "$local" = true && echo ':BindAddress: "127.0.0.1"' >> "$conf"
+}
+
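
With the generated WEBrick wrapper in place, the new backend can be selected like any other httpd; for example (port illustrative):

    git instaweb --httpd webrick --local --port 8080
    git instaweb --stop
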
lighttpd_conf () {
cat > "$conf" <<EOF
server.document-root = "$fqgitdir/gitweb"
*apache2*)
apache2_conf
;;
+webrick)
+ webrick_conf
+ ;;
*)
echo "Unknown httpd specified: $httpd"
exit 1
}
exec=
-while case "$#" in 0) break;; esac
+while test $# != 0
do
case "$1" in
-h|--h|--he|--hea|--head|--heads)
case "$peek_repo" in
http://* | https://* | ftp://* )
- if [ -n "$GIT_SSL_NO_VERIFY" ]; then
- curl_extra_args="-k"
- fi
+ if [ -n "$GIT_SSL_NO_VERIFY" -o \
+ "`git config --bool http.sslVerify`" = false ]; then
+ curl_extra_args="-k"
+ fi
if [ -n "$GIT_CURL_FTP_NO_EPSV" -o \
"`git config --bool http.noEPSV`" = true ]; then
curl_extra_args="${curl_extra_args} --disable-epsv"
# Copyright (c) 2005 Junio C Hamano
#
-USAGE='[-n] [--summary] [--no-commit] [--squash] [-s <strategy>] [-m=<merge-message>] <commit>+'
+USAGE='[-n] [--summary] [--[no-]commit] [--[no-]squash] [--[no-]ff] [-s <strategy>] [-m=<merge-message>] <commit>+'
SUBDIRECTORY_OK=Yes
. git-sh-setup
squash_message () {
echo Squashed commit of the following:
echo
- git log --no-merges ^"$head" $remote
+ git log --no-merges ^"$head" $remoteheads
}
finish () {
;;
*)
git update-ref -m "$rlogm" HEAD "$1" "$head" || exit 1
+ git gc --auto
;;
esac
;;
fi
;;
esac
+
+ # Run a post-merge hook
+ if test -x "$GIT_DIR"/hooks/post-merge
+ then
+ case "$squash" in
+ t)
+ "$GIT_DIR"/hooks/post-merge 1
+ ;;
+ '')
+ "$GIT_DIR"/hooks/post-merge 0
+ ;;
+ esac
+ fi
}
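
The hook receives a single argument: 1 when the merge was squashed, 0 for a real merge. A minimal, hypothetical .git/hooks/post-merge that merely logs which case ran:

    #!/bin/sh
    case "$1" in
    1) echo "squash merge finished" >>.git/merge.log ;;
    0) echo "true merge finished" >>.git/merge.log ;;
    esac
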
merge_name () {
fi
}
-case "$#" in 0) usage ;; esac
-
-have_message=
-while case "$#" in 0) break ;; esac
-do
+parse_option () {
case "$1" in
-n|--n|--no|--no-|--no-s|--no-su|--no-sum|--no-summ|\
--no-summa|--no-summar|--no-summary)
--summary)
show_diffstat=t ;;
--sq|--squ|--squa|--squas|--squash)
- squash=t no_commit=t ;;
+ allow_fast_forward=t squash=t no_commit=t ;;
+ --no-sq|--no-squ|--no-squa|--no-squas|--no-squash)
+ allow_fast_forward=t squash= no_commit= ;;
+ --c|--co|--com|--comm|--commi|--commit)
+ allow_fast_forward=t squash= no_commit= ;;
--no-c|--no-co|--no-com|--no-comm|--no-commi|--no-commit)
- no_commit=t ;;
+ allow_fast_forward=t squash= no_commit=t ;;
+ --ff)
+ allow_fast_forward=t squash= no_commit= ;;
+ --no-ff)
+ allow_fast_forward=false squash= no_commit= ;;
-s=*|--s=*|--st=*|--str=*|--stra=*|--strat=*|--strate=*|\
--strateg=*|--strategy=*|\
-s|--s|--st|--str|--stra|--strat|--strate|--strateg|--strategy)
have_message=t
;;
-*) usage ;;
- *) break ;;
+ *) return 1 ;;
esac
shift
+ args_left=$#
+}
+
+parse_config () {
+ while test $# -gt 0
+ do
+ parse_option "$@" || usage
+ while test $args_left -lt $#
+ do
+ shift
+ done
+ done
+}
+
+test $# != 0 || usage
+
+have_message=
+
+if branch=$(git-symbolic-ref -q HEAD)
+then
+ mergeopts=$(git config "branch.${branch#refs/heads/}.mergeoptions")
+ if test -n "$mergeopts"
+ then
+ parse_config $mergeopts
+ fi
+fi
+
+while parse_option "$@"
+do
+ while test $args_left -lt $#
+ do
+ shift
+ done
done
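
Together, parse_config and the loop above apply per-branch defaults from the configuration before the actual command-line options; for example (branch name illustrative):

    git config branch.master.mergeoptions "--no-ff --no-summary"
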
if test -z "$show_diffstat"; then
# auto resolved the merge cleanly.
if test '' != "$result_tree"
then
- parents=$(git show-branch --independent "$head" "$@" | sed -e 's/^/-p /')
+ if test "$allow_fast_forward" = "t"
+ then
+ parents=$(git show-branch --independent "$head" "$@")
+ else
+ parents=$(git rev-parse "$head" "$@")
+ fi
+ parents=$(echo "$parents" | sed -e 's/^/-p /')
result_commit=$(printf '%s\n' "$merge_msg" | git commit-tree $result_tree $parents) || exit
finish "$result_commit" "Merge made by $wt_strategy."
dropsave
SUBDIRECTORY_OK=Yes
. git-sh-setup
require_work_tree
+prefix=$(git rev-parse --show-prefix)
# Returns true if the mode reflects a symlink
is_symlink () {
local_mode=`git ls-files -u -- "$path" | awk '{if ($3==2) print $1;}'`
remote_mode=`git ls-files -u -- "$path" | awk '{if ($3==3) print $1;}'`
- base_present && git cat-file blob ":1:$path" > "$BASE" 2>/dev/null
- local_present && git cat-file blob ":2:$path" > "$LOCAL" 2>/dev/null
- remote_present && git cat-file blob ":3:$path" > "$REMOTE" 2>/dev/null
+ base_present && git cat-file blob ":1:$prefix$path" >"$BASE" 2>/dev/null
+ local_present && git cat-file blob ":2:$prefix$path" >"$LOCAL" 2>/dev/null
+ remote_present && git cat-file blob ":3:$prefix$path" >"$REMOTE" 2>/dev/null
if test -z "$local_mode" -o -z "$remote_mode"; then
echo "Deleted merge conflict for '$path':"
case "$merge_tool" in
kdiff3)
if base_present ; then
- (kdiff3 --auto --L1 "$path (Base)" -L2 "$path (Local)" --L3 "$path (Remote)" \
+ (kdiff3 --auto --L1 "$path (Base)" --L2 "$path (Local)" --L3 "$path (Remote)" \
-o "$path" -- "$BASE" "$LOCAL" "$REMOTE" > /dev/null 2>&1)
else
- (kdiff3 --auto -L1 "$path (Local)" --L2 "$path (Remote)" \
+ (kdiff3 --auto --L1 "$path (Local)" --L2 "$path (Remote)" \
-o "$path" -- "$LOCAL" "$REMOTE" > /dev/null 2>&1)
fi
status=$?
;;
emerge)
if base_present ; then
- emacs -f emerge-files-with-ancestor-command "$LOCAL" "$REMOTE" "$BASE" "$path"
+ emacs -f emerge-files-with-ancestor-command "$LOCAL" "$REMOTE" "$BASE" "$(basename "$path")"
else
- emacs -f emerge-files-command "$LOCAL" "$REMOTE" "$path"
+ emacs -f emerge-files-command "$LOCAL" "$REMOTE" "$(basename "$path")"
fi
status=$?
save_backup
cleanup_temp_files
}
-while case $# in 0) break ;; esac
+while test $# != 0
do
case "$1" in
-t|--tool*)
#
# Fetch one or more remote refs and merge it/them into the current HEAD.
-USAGE='[-n | --no-summary] [--no-commit] [-s strategy]... [<fetch-options>] <repo> <head>...'
+USAGE='[-n | --no-summary] [--[no-]commit] [--[no-]squash] [--[no-]ff] [-s strategy]... [<fetch-options>] <repo> <head>...'
LONG_USAGE='Fetch one or more remote refs and merge it/them into the current HEAD.'
SUBDIRECTORY_OK=Yes
. git-sh-setup
test -z "$(git ls-files -u)" ||
die "You are in the middle of a conflicted merge."
-strategy_args= no_summary= no_commit= squash=
-while case "$#,$1" in 0) break ;; *,-*) ;; *) break ;; esac
+strategy_args= no_summary= no_commit= squash= no_ff=
+while :
do
case "$1" in
-n|--n|--no|--no-|--no-s|--no-su|--no-sum|--no-summ|\
;;
--no-c|--no-co|--no-com|--no-comm|--no-commi|--no-commit)
no_commit=--no-commit ;;
+ --c|--co|--com|--comm|--commi|--commit)
+ no_commit=--commit ;;
--sq|--squ|--squa|--squas|--squash)
squash=--squash ;;
+ --no-sq|--no-squ|--no-squa|--no-squas|--no-squash)
+ squash=--no-squash ;;
+ --ff)
+ no_ff=--ff ;;
+ --no-ff)
+ no_ff=--no-ff ;;
-s=*|--s=*|--st=*|--str=*|--stra=*|--strat=*|--strate=*|\
--strateg=*|--strategy=*|\
-s|--s|--st|--str|--stra|--strat|--strate|--strateg|--strategy)
-h|--h|--he|--hel|--help)
usage
;;
- -*)
- # Pass thru anything that is meant for fetch.
+ *)
+ # Pass thru anything that may be meant for fetch.
break
;;
esac
esac
curr_branch=${curr_branch#refs/heads/}
- echo >&2 "Warning: No merge candidate found because value of config option
- \"branch.${curr_branch}.merge\" does not match any remote branch fetched."
- echo >&2 "No changes."
- exit 0
+ echo >&2 "You asked me to pull without telling me which branch you"
+ echo >&2 "want to merge with, and 'branch.${curr_branch}.merge' in"
+ echo >&2 "your configuration file does not tell me either. Please"
+ echo >&2 "name which branch you want to merge on the command line and"
+ echo >&2 "try again (e.g. 'git pull <repository> <refspec>')."
+ echo >&2 "See git-pull(1) for details on the refspec."
+ echo >&2
+ echo >&2 "If you often merge with the same branch, you may want to"
+ echo >&2 "configure the following variables in your configuration"
+ echo >&2 "file:"
+ echo >&2
+ echo >&2 " branch.${curr_branch}.remote = <nickname>"
+ echo >&2 " branch.${curr_branch}.merge = <remote-ref>"
+ echo >&2 " remote.<nickname>.url = <url>"
+ echo >&2 " remote.<nickname>.fetch = <refspec>"
+ echo >&2
+ echo >&2 "See git-config(1) for details."
+ exit 1
;;
?*' '?*)
if test -z "$orig_head"
fi
merge_name=$(git fmt-merge-msg <"$GIT_DIR/FETCH_HEAD") || exit
-exec git-merge $no_summary $no_commit $squash $strategy_args \
+exec git-merge $no_summary $no_commit $squash $no_ff $strategy_args \
"$merge_name" HEAD $merge_head
dry_run=""
quilt_author=""
-while case "$#" in 0) break;; esac
+while test $# != 0
do
case "$1" in
--au=*|--aut=*|--auth=*|--autho=*|--author=*)
mkdir $tmp_dir || exit 2
for patch_name in $(grep -v '^#' < "$QUILT_PATCHES/series" ); do
+ if ! [ -f "$QUILT_PATCHES/$patch_name" ] ; then
+ echo "$patch_name doesn't exist. Skipping."
+ continue
+ fi
echo $patch_name
git mailinfo "$tmp_msg" "$tmp_patch" \
<"$QUILT_PATCHES/$patch_name" >"$tmp_info" || exit 3
output () {
case "$VERBOSE" in
'')
- "$@" > "$DOTEST"/output 2>&1
+ output=$("$@" 2>&1 )
status=$?
- test $status != 0 &&
- cat "$DOTEST"/output
+ test $status != 0 && printf "%s\n" "$output"
return $status
- ;;
+ ;;
*)
"$@"
+ ;;
esac
}
''|rebase*)
GIT_REFLOG_ACTION="rebase -i ($1)"
export GIT_REFLOG_ACTION
+ ;;
esac
}
sed -e 1q < "$TODO" >> "$DONE"
sed -e 1d < "$TODO" >> "$TODO".new
mv -f "$TODO".new "$TODO"
- count=$(($(wc -l < "$DONE")))
- total=$(($count+$(wc -l < "$TODO")))
+ count=$(($(grep -ve '^$' -e '^#' < "$DONE" | wc -l)))
+ total=$(($count+$(grep -ve '^$' -e '^#' < "$TODO" | wc -l)))
printf "Rebasing (%d/%d)\r" $count $total
test -z "$VERBOSE" || echo
}
make_patch () {
- parent_sha1=$(git rev-parse --verify "$1"^ 2> /dev/null)
- git diff "$parent_sha1".."$1" > "$DOTEST"/patch
+ parent_sha1=$(git rev-parse --verify "$1"^) ||
+ die "Cannot get patch for $1^"
+ git diff-tree -p "$parent_sha1".."$1" > "$DOTEST"/patch
+ test -f "$DOTEST"/message ||
+ git cat-file commit "$1" | sed "1,/^$/d" > "$DOTEST"/message
+ test -f "$DOTEST"/author-script ||
+ get_author_ident_from_commit "$1" > "$DOTEST"/author-script
}
die_with_patch () {
- test -f "$DOTEST"/message ||
- git cat-file commit $sha1 | sed "1,/^$/d" > "$DOTEST"/message
- test -f "$DOTEST"/author-script ||
- get_author_ident_from_commit $sha1 > "$DOTEST"/author-script
make_patch "$1"
die "$2"
}
die "$1"
}
+has_action () {
+ grep -vqe '^$' -e '^#' "$1"
+}
+
pick_one () {
no_ff=
case "$1" in -n) sha1=$2; no_ff=t ;; *) sha1=$1 ;; esac
output git rev-parse --verify $sha1 || die "Invalid commit name: $sha1"
test -d "$REWRITTEN" &&
pick_one_preserving_merges "$@" && return
- parent_sha1=$(git rev-parse --verify $sha1^ 2>/dev/null)
+ parent_sha1=$(git rev-parse --verify $sha1^) ||
+ die "Could not get the parent of $sha1"
current_sha1=$(git rev-parse --verify HEAD)
- if test $no_ff$current_sha1 = $parent_sha1; then
+ if test "$no_ff$current_sha1" = "$parent_sha1"; then
output git reset --hard $sha1
test "a$1" = a-n && output git reset --soft $current_sha1
sha1=$(git rev-parse --short $sha1)
output warn Fast forward to $sha1
else
- output git cherry-pick $STRATEGY "$@"
+ output git cherry-pick "$@"
fi
}
fast_forward=t
preserve=t
new_parents=
- for p in $(git rev-list --parents -1 $sha1 | cut -d\ -f2-)
+ for p in $(git rev-list --parents -1 $sha1 | cut -d' ' -f2-)
do
if test -f "$REWRITTEN"/$p
then
;; # do nothing; that parent is already there
*)
new_parents="$new_parents $new_p"
+ ;;
esac
fi
done
case $fast_forward in
t)
output warn "Fast forward to $sha1"
- test $preserve=f && echo $sha1 > "$REWRITTEN"/$sha1
+ test $preserve = f || echo $sha1 > "$REWRITTEN"/$sha1
;;
f)
test "a$1" = a-n && die "Refusing to squash a merge: $sha1"
- first_parent=$(expr "$new_parents" : " \([^ ]*\)")
+ first_parent=$(expr "$new_parents" : ' \([^ ]*\)')
# detach HEAD to current parent
output git checkout $first_parent 2> /dev/null ||
die "Cannot move HEAD to $first_parent"
echo $sha1 > "$DOTEST"/current-commit
case "$new_parents" in
- \ *\ *)
+ ' '*' '*)
# redo merge
author_script=$(get_author_ident_from_commit $sha1)
eval "$author_script"
- msg="$(git cat-file commit $sha1 | \
- sed -e '1,/^$/d' -e "s/[\"\\]/\\\\&/g")"
+ msg="$(git cat-file commit $sha1 | sed -e '1,/^$/d')"
+ # No point in merging the first parent, that's HEAD
+ new_parents=${new_parents# $first_parent}
# NEEDSWORK: give rerere a chance
- if ! output git merge $STRATEGY -m "$msg" $new_parents
+ if ! GIT_AUTHOR_NAME="$GIT_AUTHOR_NAME" \
+ GIT_AUTHOR_EMAIL="$GIT_AUTHOR_EMAIL" \
+ GIT_AUTHOR_DATE="$GIT_AUTHOR_DATE" \
+ output git merge $STRATEGY -m "$msg" \
+ $new_parents
then
- echo "$msg" > "$GIT_DIR"/MERGE_MSG
+ printf "%s\n" "$msg" > "$GIT_DIR"/MERGE_MSG
die Error redoing merge $sha1
fi
;;
*)
- output git cherry-pick $STRATEGY "$@" ||
+ output git cherry-pick "$@" ||
die_with_patch $sha1 "Could not pick $sha1"
+ ;;
esac
+ ;;
esac
}
}
do_next () {
- test -f "$DOTEST"/message && rm "$DOTEST"/message
- test -f "$DOTEST"/author-script && rm "$DOTEST"/author-script
+ rm -f "$DOTEST"/message "$DOTEST"/author-script \
+ "$DOTEST"/amend || exit
read command sha1 rest < "$TODO"
case "$command" in
- \#|'')
+ '#'*|'')
mark_action_done
;;
- pick)
+ pick|p)
comment_for_reflog pick
mark_action_done
pick_one $sha1 ||
die_with_patch $sha1 "Could not apply $sha1... $rest"
;;
- edit)
+ edit|e)
comment_for_reflog edit
mark_action_done
pick_one $sha1 ||
die_with_patch $sha1 "Could not apply $sha1... $rest"
make_patch $sha1
+ : > "$DOTEST"/amend
warn
warn "You can amend the commit now, with"
warn
warn
exit 0
;;
- squash)
+ squash|s)
comment_for_reflog squash
- test -z "$(grep -ve '^$' -e '^#' < $DONE)" &&
+ has_action "$DONE" ||
die "Cannot 'squash' without a previous commit"
mark_action_done
make_squash_message $sha1 > "$MSG"
case "$(peek_next_command)" in
- squash)
+ squash|s)
EDIT_COMMIT=
USE_OUTPUT=output
cp "$MSG" "$SQUASH_MSG"
- ;;
+ ;;
*)
EDIT_COMMIT=-e
USE_OUTPUT=
- test -f "$SQUASH_MSG" && rm "$SQUASH_MSG"
+ rm -f "$SQUASH_MSG" || exit
+ ;;
esac
failed=f
+ author_script=$(get_author_ident_from_commit HEAD)
output git reset --soft HEAD^
pick_one -n $sha1 || failed=t
- author_script=$(get_author_ident_from_commit $sha1)
echo "$author_script" > "$DOTEST"/author-script
case $failed in
f)
# This is like --amend, but with a different message
eval "$author_script"
- export GIT_AUTHOR_NAME GIT_AUTHOR_EMAIL GIT_AUTHOR_DATE
+ GIT_AUTHOR_NAME="$GIT_AUTHOR_NAME" \
+ GIT_AUTHOR_EMAIL="$GIT_AUTHOR_EMAIL" \
+ GIT_AUTHOR_DATE="$GIT_AUTHOR_DATE" \
$USE_OUTPUT git commit -F "$MSG" $EDIT_COMMIT
;;
t)
warn
warn "Could not apply $sha1... $rest"
die_with_patch $sha1 ""
+ ;;
esac
;;
*)
warn "Unknown command: $command $sha1 $rest"
die_with_patch $sha1 "Please fix this in the file $TODO."
+ ;;
esac
test -s "$TODO" && return
else
NEWHEAD=$(git rev-parse HEAD)
fi &&
- message="$GIT_REFLOG_ACTION: $HEADNAME onto $SHORTONTO)" &&
- git update-ref -m "$message" $HEADNAME $NEWHEAD $OLDHEAD &&
- git symbolic-ref HEAD $HEADNAME && {
+ case $HEADNAME in
+ refs/*)
+ message="$GIT_REFLOG_ACTION: $HEADNAME onto $SHORTONTO)" &&
+ git update-ref -m "$message" $HEADNAME $NEWHEAD $OLDHEAD &&
+ git symbolic-ref HEAD $HEADNAME
+ ;;
+ esac && {
test ! -f "$DOTEST"/verbose ||
- git diff --stat $(cat "$DOTEST"/head)..HEAD
+ git diff-tree --stat $(cat "$DOTEST"/head)..HEAD
} &&
rm -rf "$DOTEST" &&
+ git gc --auto &&
warn "Successfully rebased and updated $HEADNAME."
exit
done
}
-while case $# in 0) break ;; esac
+while test $# != 0
do
case "$1" in
--continue)
git update-index --refresh &&
git diff-files --quiet &&
! git diff-index --cached --quiet HEAD &&
- . "$DOTEST"/author-script &&
+ . "$DOTEST"/author-script && {
+ test ! -f "$DOTEST"/amend || git reset --soft HEAD^
+ } &&
export GIT_AUTHOR_NAME GIT_AUTHOR_EMAIL GIT_AUTHOR_DATE &&
git commit -F "$DOTEST"/message -e
HEADNAME=$(cat "$DOTEST"/head-name)
HEAD=$(cat "$DOTEST"/head)
- git symbolic-ref HEAD $HEADNAME &&
+ case $HEADNAME in
+ refs/*)
+ git symbolic-ref HEAD $HEADNAME
+ ;;
+ esac &&
output git reset --hard $HEAD &&
rm -rf "$DOTEST"
exit
output git reset --hard && do_rest
;;
-s|--strategy)
- shift
case "$#,$1" in
*,*=*)
STRATEGY="-s `expr "z$1" : 'z-[^=]*=\(.*\)'`" ;;
require_clean_work_tree
- mkdir "$DOTEST" || die "Could not create temporary $DOTEST"
if test ! -z "$2"
then
output git show-ref --verify --quiet "refs/heads/$2" ||
HEAD=$(git rev-parse --verify HEAD) || die "No HEAD?"
UPSTREAM=$(git rev-parse --verify "$1") || die "Invalid base"
+ mkdir "$DOTEST" || die "Could not create temporary $DOTEST"
+
test -z "$ONTO" && ONTO=$UPSTREAM
: > "$DOTEST"/interactive || die "Could not mark as interactive"
- git symbolic-ref HEAD > "$DOTEST"/head-name ||
- die "Could not get HEAD"
+ git symbolic-ref HEAD > "$DOTEST"/head-name 2> /dev/null ||
+ echo "detached HEAD" > "$DOTEST"/head-name
echo $HEAD > "$DOTEST"/head
echo $UPSTREAM > "$DOTEST"/upstream
$UPSTREAM...$HEAD | \
sed -n "s/^>/pick /p" >> "$TODO"
- test -z "$(grep -ve '^$' -e '^#' < $TODO)" &&
+ has_action "$TODO" ||
die_abort "Nothing to do"
cp "$TODO" "$TODO".backup
git_editor "$TODO" ||
die "Could not execute editor"
- test -z "$(grep -ve '^$' -e '^#' < $TODO)" &&
+ has_action "$TODO" ||
die_abort "Nothing to do"
output git checkout $ONTO && do_rest
+ ;;
esac
shift
done
die "$RESOLVEMSG"
fi
- cmt=`cat $dotest/current`
+ cmt=`cat "$dotest/current"`
if ! git diff-index --quiet HEAD
then
if ! git-commit -C "$cmt"
}
call_merge () {
- cmt="$(cat $dotest/cmt.$1)"
+ cmt="$(cat "$dotest/cmt.$1")"
echo "$cmt" > "$dotest/current"
hd=$(git rev-parse --verify HEAD)
cmt_name=$(git symbolic-ref HEAD)
- msgnum=$(cat $dotest/msgnum)
- end=$(cat $dotest/end)
+ msgnum=$(cat "$dotest/msgnum")
+ end=$(cat "$dotest/end")
eval GITHEAD_$cmt='"${cmt_name##refs/heads/}~$(($end - $msgnum))"'
- eval GITHEAD_$hd='"$(cat $dotest/onto_name)"'
+ eval GITHEAD_$hd='$(cat "$dotest/onto_name")'
export GITHEAD_$cmt GITHEAD_$hd
git-merge-$strategy "$cmt^" -- "$hd" "$cmt"
rv=$?
is_interactive () {
test -f "$dotest"/interactive ||
- while case $#,"$1" in 0,|*,-i|*,--interactive) break ;; esac
- do
+ while :; do case $#,"$1" in 0,|*,-i|*,--interactive) break ;; esac
shift
done && test -n "$1"
}
is_interactive "$@" && exec git-rebase--interactive "$@"
-while case "$#" in 0) break ;; esac
+while test $# != 0
do
case "$1" in
--continue)
}
if test -d "$dotest"
then
- prev_head="`cat $dotest/prev_head`"
- end="`cat $dotest/end`"
- msgnum="`cat $dotest/msgnum`"
- onto="`cat $dotest/onto`"
+ prev_head=$(cat "$dotest/prev_head")
+ end=$(cat "$dotest/end")
+ msgnum=$(cat "$dotest/msgnum")
+ onto=$(cat "$dotest/onto")
continue_merge
while test "$msgnum" -le "$end"
do
if test -d "$dotest"
then
git rerere clear
- prev_head="`cat $dotest/prev_head`"
- end="`cat $dotest/end`"
- msgnum="`cat $dotest/msgnum`"
+ prev_head=$(cat "$dotest/prev_head")
+ end=$(cat "$dotest/end")
+ msgnum=$(cat "$dotest/msgnum")
msgnum=$(($msgnum + 1))
- onto="`cat $dotest/onto`"
+ onto=$(cat "$dotest/onto")
while test "$msgnum" -le "$end"
do
call_merge "$msgnum"
my ($name, $ls_remote) = @_;
if (!exists $remote->{$name}) {
print STDERR "No such remote $name\n";
- return;
+ return 1;
}
my $info = $remote->{$name};
update_ls_remote($ls_remote, $info);
my @v = $git->command(qw(rev-parse --verify), "$prefix/$to_prune");
$git->command(qw(update-ref -d), "$prefix/$to_prune", $v[0]);
}
+ return 0;
}
sub show_remote {
my ($name, $ls_remote) = @_;
if (!exists $remote->{$name}) {
print STDERR "No such remote $name\n";
- return;
+ return 1;
}
my $info = $remote->{$name};
update_ls_remote($ls_remote, $info);
print "* remote $name\n";
print " URL: $info->{'URL'}\n";
for my $branchname (sort keys %$branch) {
- next if ($branch->{$branchname}{'REMOTE'} ne $name);
+ next unless (defined $branch->{$branchname}{'REMOTE'} &&
+ $branch->{$branchname}{'REMOTE'} eq $name);
my @merged = map {
s|^refs/heads/||;
$_;
print " Local branch(es) pushed with 'git push'\n";
print " @pushed\n";
}
+ return 0;
}
sub add_remote {
}
}
+sub rm_remote {
+ my ($name) = @_;
+ if (!exists $remote->{$name}) {
+ print STDERR "No such remote $name\n";
+ return 1;
+ }
+
+ $git->command('config', '--remove-section', "remote.$name");
+
+ eval {
+ my @trackers = $git->command('config', '--get-regexp',
+ 'branch.*.remote', $name);
+ for (@trackers) {
+ /^branch\.(.*)?\.remote/;
+ $git->config('--unset', "branch.$1.remote");
+ $git->config('--unset', "branch.$1.merge");
+ }
+ };
+
+ my @refs = $git->command('for-each-ref',
+ '--format=%(refname) %(objectname)', "refs/remotes/$name");
+ for (@refs) {
+ my ($ref, $object) = split;
+ $git->command(qw(update-ref -d), $ref, $object);
+ }
+ return 0;
+}
+
sub add_usage {
print STDERR "Usage: git remote add [-f] [-t track]* [-m master] <name> <url>\n";
exit(1);
print STDERR "Usage: git remote show <remote>\n";
exit(1);
}
+ my $status = 0;
for (; $i < @ARGV; $i++) {
- show_remote($ARGV[$i], $ls_remote);
+ $status |= show_remote($ARGV[$i], $ls_remote);
}
+ exit($status);
}
elsif ($ARGV[0] eq 'update') {
if (@ARGV <= 1) {
print STDERR "Usage: git remote prune <remote>\n";
exit(1);
}
+ my $status = 0;
for (; $i < @ARGV; $i++) {
- prune_remote($ARGV[$i], $ls_remote);
+ $status |= prune_remote($ARGV[$i], $ls_remote);
}
+ exit($status);
}
elsif ($ARGV[0] eq 'add') {
my %opts = ();
}
add_remote($ARGV[1], $ARGV[2], \%opts);
}
+elsif ($ARGV[0] eq 'rm') {
+ if (@ARGV <= 1) {
+ print STDERR "Usage: git remote rm <remote>\n";
+ exit(1);
+ }
+ exit(rm_remote($ARGV[1]));
+}
else {
print STDERR "Usage: git remote\n";
print STDERR " git remote add <name> <url>\n";
+ print STDERR " git remote rm <name>\n";
print STDERR " git remote show <name>\n";
print STDERR " git remote prune <name>\n";
print STDERR " git remote update [group]\n";
# Copyright (c) 2005 Linus Torvalds
#
-USAGE='[-a] [-d] [-f] [-l] [-n] [-q] [--max-pack-size=N] [--window=N] [--window-memory=N] [--depth=N]'
+USAGE='[-a|-A] [-d] [-f] [-l] [-n] [-q] [--max-pack-size=N] [--window=N] [--window-memory=N] [--depth=N]'
SUBDIRECTORY_OK='Yes'
. git-sh-setup
-no_update_info= all_into_one= remove_redundant=
+no_update_info= all_into_one= remove_redundant= keep_unreachable=
local= quiet= no_reuse= extra=
-while case "$#" in 0) break ;; esac
+while test $# != 0
do
case "$1" in
-n) no_update_info=t ;;
-a) all_into_one=t ;;
+ -A) all_into_one=t
+ keep_unreachable=--keep-unreachable ;;
-d) remove_redundant=t ;;
-q) quiet=-q ;;
-f) no_reuse=--no-reuse-object ;;
fi
done
fi
- [ -z "$args" ] && args='--unpacked --incremental'
+ if test -z "$args"
+ then
+ args='--unpacked --incremental'
+ elif test -n "$keep_unreachable"
+ then
+ args="$args $keep_unreachable"
+ fi
;;
esac
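
The new -A option repacks everything as -a does, but passes --keep-unreachable down to git-pack-objects, so objects that have become unreachable are carried over into the new pack instead of being dropped along with the old packs:

    git repack -A -d
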
the default section.
--smtp-server If set, specifies the outgoing SMTP server to use.
- Defaults to localhost.
+ Defaults to localhost. A port number can be given here in
+ hostname:port format, or with the --smtp-server-port option.
+
+ --smtp-server-port Specify the port to connect to on the outgoing SMTP server.
--smtp-user The username for SMTP-AUTH.
# Variables with corresponding config settings
my ($thread, $chain_reply_to, $suppress_from, $signed_off_cc, $cc_cmd);
-my ($smtp_server, $smtp_authuser, $smtp_authpass, $smtp_ssl);
-my ($identity, $aliasfiletype, @alias_files);
+my ($smtp_server, $smtp_server_port, $smtp_authuser, $smtp_authpass, $smtp_ssl);
+my ($identity, $aliasfiletype, @alias_files, @smtp_host_parts);
my %config_bool_settings = (
"thread" => [\$thread, 1],
my %config_settings = (
"smtpserver" => \$smtp_server,
+ "smtpserverport" => \$smtp_server_port,
"smtpuser" => \$smtp_authuser,
"smtppass" => \$smtp_authpass,
+ "to" => \@to,
"cccmd" => \$cc_cmd,
"aliasfiletype" => \$aliasfiletype,
"bcc" => \@bcclist,
"bcc=s" => \@bcclist,
"chain-reply-to!" => \$chain_reply_to,
"smtp-server=s" => \$smtp_server,
+ "smtp-server-port=s" => \$smtp_server_port,
"smtp-user=s" => \$smtp_authuser,
"smtp-pass=s" => \$smtp_authpass,
"smtp-ssl!" => \$smtp_ssl,
print $sm "$header\n$message";
close $sm or die $?;
} else {
+
+ if (!defined $smtp_server) {
+ die "The required SMTP server is not properly defined."
+ }
+
if ($smtp_ssl) {
+ $smtp_server_port ||= 465; # ssmtp
require Net::SMTP::SSL;
- $smtp ||= Net::SMTP::SSL->new( $smtp_server, Port => 465 );
+ $smtp ||= Net::SMTP::SSL->new($smtp_server, Port => $smtp_server_port);
}
else {
require Net::SMTP;
- $smtp ||= Net::SMTP->new( $smtp_server );
+ $smtp ||= Net::SMTP->new((defined $smtp_server_port)
+ ? "$smtp_server:$smtp_server_port"
+ : $smtp_server);
+ }
+
+ if (!$smtp) {
+ die "Unable to initialize SMTP properly. Is there something wrong with your config?";
+ }
+
+ if ((defined $smtp_authuser) && (defined $smtp_authpass)) {
+ $smtp->auth( $smtp_authuser, $smtp_authpass ) or die $smtp->message;
}
- $smtp->auth( $smtp_authuser, $smtp_authpass )
- or die $smtp->message if (defined $smtp_authuser);
$smtp->mail( $raw_from ) or die $smtp->message;
$smtp->to( @recipients ) or die $smtp->message;
$smtp->data or die $smtp->message;
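
A non-default submission port can now be chosen either on the command line or through the matching sendemail.smtpserverport configuration; for instance (server name illustrative):

    git send-email --smtp-server=mail.example.com \
        --smtp-server-port=587 0001-some.patch
    git config sendemail.smtpserverport 587
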
if [ -z "$SUBDIRECTORY_OK" ]
then
: ${GIT_DIR=.git}
- GIT_DIR=$(GIT_DIR="$GIT_DIR" git rev-parse --git-dir) || {
+ test -z "$(git rev-parse --show-cdup)" || {
exit=$?
echo >&2 "You need to run this command from the toplevel of the working tree."
exit $exit
unstashed_index_tree=
if test -n "$unstash_index" && test "$b_tree" != "$i_tree"
then
- git diff --binary $s^2^..$s^2 | git apply --cached
+ git diff-tree --binary $s^2^..$s^2 | git apply --cached
test $? -ne 0 &&
die 'Conflicts in index. Try without --index.'
unstashed_index_tree=$(git-write-tree) ||
git read-tree "$unstashed_index_tree"
else
a="$TMP-added" &&
- git diff --cached --name-only --diff-filter=A $c_tree >"$a" &&
+ git diff-index --cached --name-only --diff-filter=A $c_tree >"$a" &&
git read-tree --reset $c_tree &&
git update-index --add --stdin <"$a" ||
die "Cannot unstage modified files"
) 2>/dev/null
}
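
The code path changed above runs when a stash is reapplied together with the staged state it recorded; a quick way to exercise it (file name illustrative):

    git add file && git stash
    git stash apply --index
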
+# Resolve relative url by appending to parent's url
+resolve_relative_url ()
+{
+ branch="$(git symbolic-ref HEAD 2>/dev/null)"
+ remote="$(git config branch.${branch#refs/heads/}.remote)"
+ remote="${remote:-origin}"
+ remoteurl="$(git config remote.$remote.url)" ||
+ die "remote ($remote) does not have a url in .git/config"
+ url="$1"
+ while test -n "$url"
+ do
+ case "$url" in
+ ../*)
+ url="${url#../}"
+ remoteurl="${remoteurl%/*}"
+ ;;
+ ./*)
+ url="${url#./}"
+ ;;
+ *)
+ break;;
+ esac
+ done
+ echo "$remoteurl/$url"
+}
+
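
resolve_relative_url strips one trailing component from the parent's remote URL for every leading ../ in the submodule URL; a worked example with illustrative values:

    # remote.origin.url = git://example.com/pub/super.git
    # submodule url     = ../plugins/frotz.git
    # resolves to         git://example.com/pub/plugins/frotz.git
    git submodule add ../plugins/frotz.git plugins/frotz
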
#
# Map submodule path to submodule name
#
usage
fi
- # Turn the source into an absolute path if
- # it is local
- if base=$(get_repo_base "$repo"); then
- repo="$base"
- fi
+ case "$repo" in
+ ./*|../*)
+ # dereference source url relative to parent's url
+ realrepo="$(resolve_relative_url $repo)" ;;
+ *)
+ # Turn the source into an absolute path if
+ # it is local
+ if base=$(get_repo_base "$repo"); then
+ repo="$base"
+ fi
+ realrepo=$repo
+ ;;
+ esac
# Guess path from repo if not specified or strip trailing slashes
if test -z "$path"; then
git ls-files --error-unmatch "$path" > /dev/null 2>&1 &&
die "'$path' already exists in the index"
- module_clone "$path" "$repo" || exit
+ module_clone "$path" "$realrepo" || exit
(unset GIT_DIR && cd "$path" && git checkout -q ${branch:+-b "$branch" "origin/$branch"}) ||
die "Unable to checkout submodule '$path'"
git add "$path" ||
test -z "$url" &&
die "No url found for submodule path '$path' in .gitmodules"
+ # Possibly a url relative to parent
+ case "$url" in
+ ./*|../*)
+ url="$(resolve_relative_url "$url")"
+ ;;
+ esac
+
git config submodule."$name".url "$url" ||
die "Failed to register url for submodule path '$path'"
done
}
-while case "$#" in 0) break ;; esac
+while test $# != 0
do
case "$1" in
add)
$AUTHOR = 'Eric Wong <normalperson@yhbt.net>';
$VERSION = '@@GIT_VERSION@@';
+# From which subdir have we been invoked?
+my $cmd_dir_prefix = eval {
+ command_oneline([qw/rev-parse --show-prefix/], STDERR => 0)
+} || '';
+
my $git_dir_user_set = 1 if defined $ENV{GIT_DIR};
$ENV{GIT_DIR} ||= '.git';
$Git::SVN::default_repo_id = 'svn';
$ENV{TZ} = 'UTC';
$| = 1; # unbuffer STDOUT
-sub fatal (@) { print STDERR @_; exit 1 }
+sub fatal (@) { print STDERR "@_\n"; exit 1 }
require SVN::Core; # use()-ing this causes segfaults for me... *shrug*
require SVN::Ra;
require SVN::Delta;
if ($SVN::Core::VERSION lt '1.1.0') {
- fatal "Need SVN::Core 1.1.0 or better (got $SVN::Core::VERSION)\n";
+ fatal "Need SVN::Core 1.1.0 or better (got $SVN::Core::VERSION)";
}
push @Git::SVN::Ra::ISA, 'SVN::Ra';
push @SVN::Git::Editor::ISA, 'SVN::Delta::Editor';
'set-tree' => [ \&cmd_set_tree,
"Set an SVN repository to a git tree-ish",
{ 'stdin|' => \$_stdin, %cmt_opts, %fc_opts, } ],
+ 'create-ignore' => [ \&cmd_create_ignore,
+ 'Create a .gitignore per svn:ignore',
+ { 'revision|r=i' => \$_revision
+ } ],
+ 'propget' => [ \&cmd_propget,
+ 'Print the value of a property on a file or directory',
+ { 'revision|r=i' => \$_revision } ],
+ 'proplist' => [ \&cmd_proplist,
+ 'List all properties of a file or directory',
+ { 'revision|r=i' => \$_revision } ],
'show-ignore' => [ \&cmd_show_ignore, "Show svn:ignore listings",
{ 'revision|r=i' => \$_revision
} ],
} elsif (scalar @tmp > 1) {
push @revs, reverse(command('rev-list',@tmp));
} else {
- fatal "Failed to rev-parse $c\n";
+ fatal "Failed to rev-parse $c";
}
}
my $gs = Git::SVN->new;
fatal "There are new revisions that were fetched ",
"and need to be merged (or acknowledged) ",
"before committing.\nlast rev: $r_last\n",
- " current: $gs->{last_rev}\n";
+ " current: $gs->{last_rev}";
}
$gs->set_tree($_) foreach @revs;
print "Done committing ",scalar @revs," revisions to SVN\n";
(undef, $last_rev, undef) = cmt_metadata("$d~1");
unless (defined $last_rev) {
fatal "Unable to extract revision information ",
- "from commit $d~1\n";
+ "from commit $d~1";
}
}
if ($_dry_run) {
my ($url, $rev, $uuid, $gs) = working_head_info('HEAD');
$gs ||= Git::SVN->new;
my $r = (defined $_revision ? $_revision : $gs->ra->get_latest_revnum);
- $gs->traverse_ignore(\*STDOUT, $gs->{path}, $r);
+ $gs->prop_walk($gs->{path}, $r, sub {
+ my ($gs, $path, $props) = @_;
+ print STDOUT "\n# $path\n";
+ my $s = $props->{'svn:ignore'} or return;
+ $s =~ s/[\r\n]+/\n/g;
+ chomp $s;
+ $s =~ s#^#$path#gm;
+ print STDOUT "$s\n";
+ });
+}
+
+sub cmd_create_ignore {
+ my ($url, $rev, $uuid, $gs) = working_head_info('HEAD');
+ $gs ||= Git::SVN->new;
+ my $r = (defined $_revision ? $_revision : $gs->ra->get_latest_revnum);
+ $gs->prop_walk($gs->{path}, $r, sub {
+ my ($gs, $path, $props) = @_;
+ # $path is of the form /path/to/dir/
+ my $ignore = '.' . $path . '.gitignore';
+ my $s = $props->{'svn:ignore'} or return;
+ open(GITIGNORE, '>', $ignore)
+ or fatal("Failed to open `$ignore' for writing: $!");
+ $s =~ s/[\r\n]+/\n/g;
+ chomp $s;
+ # Prefix all patterns so that the ignore doesn't apply
+ # to sub-directories.
+ $s =~ s#^#/#gm;
+ print GITIGNORE "$s\n";
+ close(GITIGNORE)
+ or fatal("Failed to close `$ignore': $!");
+ command_noisy('add', $ignore);
+ });
+}
+
+# get_svnprops(PATH)
+# ------------------
+# Helper for cmd_propget and cmd_proplist below.
+sub get_svnprops {
+ my $path = shift;
+ my ($url, $rev, $uuid, $gs) = working_head_info('HEAD');
+ $gs ||= Git::SVN->new;
+
+ # prefix THE PATH by the sub-directory from which the user
+ # invoked us.
+ $path = $cmd_dir_prefix . $path;
+ fatal("No such file or directory: $path") unless -e $path;
+ my $is_dir = -d $path ? 1 : 0;
+ $path = $gs->{path} . '/' . $path;
+
+ # canonicalize the path (otherwise libsvn will abort or fail to
+ # find the file)
+ # File::Spec->canonpath doesn't collapse x/../y into y (for a
+ # good reason), so let's do this manually.
+ $path =~ s#/+#/#g;
+ $path =~ s#/\.(?:/|$)#/#g;
+ $path =~ s#/[^/]+/\.\.##g;
+ $path =~ s#/$##g;
+
+ my $r = (defined $_revision ? $_revision : $gs->ra->get_latest_revnum);
+ my $props;
+ if ($is_dir) {
+ (undef, undef, $props) = $gs->ra->get_dir($path, $r);
+ }
+ else {
+ (undef, $props) = $gs->ra->get_file($path, $r, undef);
+ }
+ return $props;
+}
+
+# cmd_propget (PROP, PATH)
+# ------------------------
+# Print the SVN property PROP for PATH.
+sub cmd_propget {
+ my ($prop, $path) = @_;
+ $path = '.' if not defined $path;
+ usage(1) if not defined $prop;
+ my $props = get_svnprops($path);
+ if (not defined $props->{$prop}) {
+ fatal("`$path' does not have a `$prop' SVN property.");
+ }
+ print $props->{$prop} . "\n";
+}
+
+# cmd_proplist (PATH)
+# -------------------
+# Print the list of SVN properties for PATH.
+sub cmd_proplist {
+ my $path = shift;
+ $path = '.' if not defined $path;
+ my $props = get_svnprops($path);
+ print "Properties on '$path':\n";
+ foreach (sort keys %{$props}) {
+ print " $_\n";
+ }
}
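
Because paths are prefixed with $cmd_dir_prefix, the new subcommands can be run from any subdirectory of a git-svn checkout; illustrative invocations:

    git svn proplist src
    git svn propget svn:keywords src/main.c
    git svn create-ignore
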
sub cmd_multi_init {
sub cmd_commit_diff {
my ($ta, $tb, $url) = @_;
my $usage = "Usage: $0 commit-diff -r<revision> ".
- "<tree-ish> <tree-ish> [<URL>]\n";
+ "<tree-ish> <tree-ish> [<URL>]";
fatal($usage) if (!defined $ta || !defined $tb);
my $svn_path;
if (!defined $url) {
if (defined $_message && defined $_file) {
fatal("Both --message/-m and --file/-F specified ",
"for the commit message.\n",
- "I have no idea what you mean\n");
+ "I have no idea what you mean");
}
if (defined $_file) {
$_message = file_to_s($_file);
if ($path !~ m#^[a-z\+]+://#) {
if (!defined $url || $url !~ m#^[a-z\+]+://#) {
fatal("E: '$path' is not a complete URL ",
- "and a separate URL is not specified\n");
+ "and a separate URL is not specified");
}
return ($url, $path);
}
$repo_path =~ s#^/+##;
unless ($ra) {
fatal("E: '$repo_path' is not a complete URL ",
- "and a separate URL is not specified\n");
+ "and a separate URL is not specified");
}
}
my $url = $ra->{url};
$url;
}
-sub traverse_ignore {
- my ($self, $fh, $path, $r) = @_;
- $path =~ s#^/+##g;
- my $ra = $self->ra;
- my ($dirent, undef, $props) = $ra->get_dir($path, $r);
+# prop_walk(PATH, REV, SUB)
+# -------------------------
+# Recursively traverse PATH at revision REV and invoke SUB for each
+# directory that contains a SVN property. SUB will be invoked as
+# follows: &SUB(gs, path, props); where `gs' is this instance of
+# Git::SVN, `path' the path to the directory where the properties
+# `props' were found. The `path' will be relative to point of checkout,
+# that is, if url://repo/trunk is the current Git branch, and that
+# directory contains a sub-directory `d', SUB will be invoked with `/d/'
+# as `path' (note the trailing `/').
+sub prop_walk {
+ my ($self, $path, $rev, $sub) = @_;
+
+ my ($dirent, undef, $props) = $self->ra->get_dir($path, $rev);
+ $path =~ s#^/*#/#g;
my $p = $path;
- $p =~ s#^\Q$self->{path}\E(/|$)##;
- print $fh length $p ? "\n# $p\n" : "\n# /\n";
- if (my $s = $props->{'svn:ignore'}) {
- $s =~ s/[\r\n]+/\n/g;
- chomp $s;
- if (length $p == 0) {
- $s =~ s#\n#\n/$p#g;
- print $fh "/$s\n";
- } else {
- $s =~ s#\n#\n/$p/#g;
- print $fh "/$p/$s\n";
- }
- }
+ # Strip the irrelevant part of the path.
+ $p =~ s#^/+\Q$self->{path}\E(/|$)#/#;
+ # Ensure the path is terminated by a `/'.
+ $p =~ s#/*$#/#;
+
+ # The properties contain all the internal SVN stuff nobody
+ # (usually) cares about.
+ my $interesting_props = 0;
+ foreach (keys %{$props}) {
+ # If it doesn't start with `svn:', it must be a
+ # user-defined property.
+ ++$interesting_props and next if $_ !~ /^svn:/;
+ # FIXME: Fragile, if SVN adds new public properties,
+ # this needs to be updated.
+ ++$interesting_props if /^svn:(?:ignore|keywords|executable
+ |eol-style|mime-type
+ |externals|needs-lock)$/x;
+ }
+ &$sub($self, $p, $props) if $interesting_props;
+
foreach (sort keys %$dirent) {
next if $dirent->{$_}->{kind} != $SVN::Node::dir;
- $self->traverse_ignore($fh, "$path/$_", $r);
+ $self->prop_walk($path . '/' . $_, $rev, $sub);
}
}
$x = command_oneline('write-tree');
if ($y ne $x) {
::fatal "trees ($treeish) $y != $x\n",
- "Something is seriously wrong...\n";
+ "Something is seriously wrong...";
}
});
}
$gs->ra->gs_do_switch($r0, $rev, $gs,
$self->full_url, $ed)
or die "SVN connection failed somewhere...\n";
+ } elsif ($self->ra->trees_match($new_url, $r0,
+ $self->full_url, $rev)) {
+ print STDERR "Trees match:\n",
+ " $new_url\@$r0\n",
+ " ${\$self->full_url}\@$rev\n",
+ "Following parent with no changes\n";
+ $self->tmp_index_do(sub {
+ command_noisy('read-tree', $parent);
+ });
+ $self->{last_commit} = $parent;
} else {
print STDERR "Following parent with do_update\n";
$ed = SVN::Git::Fetcher->new($self);
my ($self, $tree) = (shift, shift);
my $log_entry = ::get_commit_entry($tree);
unless ($self->{last_rev}) {
- fatal("Must have an existing revision to commit\n");
+ fatal("Must have an existing revision to commit");
}
my %ed_opts = ( r => $self->{last_rev},
log => $log_entry->{log},
my ($cred, $realm, $failures, $cert_info, $may_save, $pool) = @_;
$may_save = undef if $_no_auth_cache;
print STDERR "Error validating server certificate for '$realm':\n";
- if ($failures & $SVN::Auth::SSL::UNKNOWNCA) {
- print STDERR " - The certificate is not issued by a trusted ",
- "authority. Use the\n",
- " fingerprint to validate the certificate manually!\n";
- }
- if ($failures & $SVN::Auth::SSL::CNMISMATCH) {
- print STDERR " - The certificate hostname does not match.\n";
- }
- if ($failures & $SVN::Auth::SSL::NOTYETVALID) {
- print STDERR " - The certificate is not yet valid.\n";
- }
- if ($failures & $SVN::Auth::SSL::EXPIRED) {
- print STDERR " - The certificate has expired.\n";
- }
- if ($failures & $SVN::Auth::SSL::OTHER) {
- print STDERR " - The certificate has an unknown error.\n";
- }
+ {
+ no warnings 'once';
+ # The $SVN::Auth::SSL::* variables are each used only once,
+ # so we suppress Perl's 'once' warnings about them.
+ if ($failures & $SVN::Auth::SSL::UNKNOWNCA) {
+ print STDERR " - The certificate is not issued ",
+ "by a trusted authority. Use the\n",
+ " fingerprint to validate ",
+ "the certificate manually!\n";
+ }
+ if ($failures & $SVN::Auth::SSL::CNMISMATCH) {
+ print STDERR " - The certificate hostname ",
+ "does not match.\n";
+ }
+ if ($failures & $SVN::Auth::SSL::NOTYETVALID) {
+ print STDERR " - The certificate is not yet valid.\n";
+ }
+ if ($failures & $SVN::Auth::SSL::EXPIRED) {
+ print STDERR " - The certificate has expired.\n";
+ }
+ if ($failures & $SVN::Auth::SSL::OTHER) {
+ print STDERR " - The certificate has ",
+ "an unknown error.\n";
+ }
+ } # no warnings 'once'
printf STDERR
"Certificate information:\n".
" - Hostname: %s\n".
$password;
}
-package main;
-
-{
- my $kill_stupid_warnings = $SVN::Node::none.$SVN::Node::file.
- $SVN::Node::dir.$SVN::Node::unknown.
- $SVN::Node::none.$SVN::Node::file.
- $SVN::Node::dir.$SVN::Node::unknown.
- $SVN::Auth::SSL::CNMISMATCH.
- $SVN::Auth::SSL::NOTYETVALID.
- $SVN::Auth::SSL::EXPIRED.
- $SVN::Auth::SSL::UNKNOWNCA.
- $SVN::Auth::SSL::OTHER;
-}
-
package SVN::Git::Fetcher;
use vars qw/@ISA/;
use strict;
if (!defined $t) {
die "$full_path not known in r$self->{r} or we have a bug!\n";
}
- if ($t == $SVN::Node::none) {
- return $self->add_directory($full_path, $baton,
- undef, -1, $self->{pool});
- } elsif ($t == $SVN::Node::dir) {
- return $self->open_directory($full_path, $baton,
- $self->{r}, $self->{pool});
- }
- print STDERR "$full_path already exists in repository at ",
- "r$self->{r} and it is not a directory (",
- ($t == $SVN::Node::file ? 'file' : 'unknown'),"/$t)\n";
+ {
+ no warnings 'once';
+ # SVN::Node::none and SVN::Node::file are used only once,
+ # so we're shutting up Perl's warnings about them.
+ if ($t == $SVN::Node::none) {
+ return $self->add_directory($full_path, $baton,
+ undef, -1, $self->{pool});
+ } elsif ($t == $SVN::Node::dir) {
+ return $self->open_directory($full_path, $baton,
+ $self->{r}, $self->{pool});
+ }
+ print STDERR "$full_path already exists in repository at ",
+ "r$self->{r} and it is not a directory (",
+ ($t == $SVN::Node::file ? 'file' : 'unknown'),"/$t)\n";
+ } # no warnings 'once'
exit 1;
}
if (defined $o{$f}) {
$self->$f($m);
} else {
- fatal("Invalid change type: $f\n");
+ fatal("Invalid change type: $f");
}
}
$self->rmdirs if $_rmdir;
}
}
+sub _auth_providers () {
+ [
+ SVN::Client::get_simple_provider(),
+ SVN::Client::get_ssl_server_trust_file_provider(),
+ SVN::Client::get_simple_prompt_provider(
+ \&Git::SVN::Prompt::simple, 2),
+ SVN::Client::get_ssl_client_cert_file_provider(),
+ SVN::Client::get_ssl_client_cert_prompt_provider(
+ \&Git::SVN::Prompt::ssl_client_cert, 2),
+ SVN::Client::get_ssl_client_cert_pw_prompt_provider(
+ \&Git::SVN::Prompt::ssl_client_cert_pw, 2),
+ SVN::Client::get_username_provider(),
+ SVN::Client::get_ssl_server_trust_prompt_provider(
+ \&Git::SVN::Prompt::ssl_server_trust),
+ SVN::Client::get_username_prompt_provider(
+ \&Git::SVN::Prompt::username, 2)
+ ]
+}
+
sub new {
my ($class, $url) = @_;
$url =~ s!/+$!!;
return $RA if ($RA && $RA->{url} eq $url);
SVN::_Core::svn_config_ensure($config_dir, undef);
- my ($baton, $callbacks) = SVN::Core::auth_open_helper([
- SVN::Client::get_simple_provider(),
- SVN::Client::get_ssl_server_trust_file_provider(),
- SVN::Client::get_simple_prompt_provider(
- \&Git::SVN::Prompt::simple, 2),
- SVN::Client::get_ssl_client_cert_file_provider(),
- SVN::Client::get_ssl_client_cert_prompt_provider(
- \&Git::SVN::Prompt::ssl_client_cert, 2),
- SVN::Client::get_ssl_client_cert_pw_prompt_provider(
- \&Git::SVN::Prompt::ssl_client_cert_pw, 2),
- SVN::Client::get_username_provider(),
- SVN::Client::get_ssl_server_trust_prompt_provider(
- \&Git::SVN::Prompt::ssl_server_trust),
- SVN::Client::get_username_prompt_provider(
- \&Git::SVN::Prompt::username, 2),
- ]);
+ my ($baton, $callbacks) = SVN::Core::auth_open_helper(_auth_providers);
my $config = SVN::Core::config_get_config($config_dir);
$RA = undef;
+ my $dont_store_passwords = 1;
+ my $conf_t = ${$config}{'config'};
+ {
+ no warnings 'once';
+ # The use of the $SVN::_Core::SVN_CONFIG_* variables
+ # produces warnings that the variables are used only once.
+ # I have not found a better way to silence them, so
+ # warnings of type 'once' are disabled in this block.
+ if (SVN::_Core::svn_config_get_bool($conf_t,
+ $SVN::_Core::SVN_CONFIG_SECTION_AUTH,
+ $SVN::_Core::SVN_CONFIG_OPTION_STORE_PASSWORDS,
+ 1) == 0) {
+ SVN::_Core::svn_auth_set_parameter($baton,
+ $SVN::_Core::SVN_AUTH_PARAM_DONT_STORE_PASSWORDS,
+ bless (\$dont_store_passwords, "_p_void"));
+ }
+ if (SVN::_Core::svn_config_get_bool($conf_t,
+ $SVN::_Core::SVN_CONFIG_SECTION_AUTH,
+ $SVN::_Core::SVN_CONFIG_OPTION_STORE_AUTH_CREDS,
+ 1) == 0) {
+ $Git::SVN::Prompt::_no_auth_cache = 1;
+ }
+ } # no warnings 'once'
my $self = SVN::Ra->new(url => $url, auth => $baton,
config => $config,
pool => SVN::Pool->new,
$ret;
}
+sub trees_match {
+ my ($self, $url1, $rev1, $url2, $rev2) = @_;
+ my $ctx = SVN::Client->new(auth => _auth_providers);
+ my $out = IO::File->new_tmpfile;
+
+ # older SVN (1.1.x) doesn't take $pool as the last parameter for
+ # $ctx->diff(), so we'll create a default one
+ my $pool = SVN::Pool->new_default_sub;
+
+ $ra_invalid = 1; # this will open a new SVN::Ra connection to $url1
+ $ctx->diff([], $url1, $rev1, $url2, $rev2, 1, 1, 0, $out, $out);
+ $out->flush;
+ my $ret = (($out->stat)[7] == 0);
+ close $out or croak $!;
+
+ $ret;
+}
+
sub get_commit_editor {
my ($self, $log, $cb, $pool) = @_;
my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef, 0) : ();
}
sub run_pager {
- return unless -t *STDOUT;
+ return unless -t *STDOUT && defined $pager;
pipe my $rfd, my $wfd or return;
- defined(my $pid = fork) or ::fatal "Can't fork: $!\n";
+ defined(my $pid = fork) or ::fatal "Can't fork: $!";
if (!$pid) {
open STDOUT, '>&', $wfd or
- ::fatal "Can't redirect to stdout: $!\n";
+ ::fatal "Can't redirect to stdout: $!";
return;
}
- open STDIN, '<&', $rfd or ::fatal "Can't redirect stdin: $!\n";
+ open STDIN, '<&', $rfd or ::fatal "Can't redirect stdin: $!";
$ENV{LESS} ||= 'FRSX';
- exec $pager or ::fatal "Can't run pager: $! ($pager)\n";
+ exec $pager or ::fatal "Can't run pager: $! ($pager)";
}
sub tz_to_s_offset {
$r_min = $r_max = $::_revision;
} else {
::fatal "-r$::_revision is not supported, use ",
- "standard \'git log\' arguments instead\n";
+ "standard 'git log' arguments instead";
}
}
+++ /dev/null
-#!/usr/bin/perl -w
-
-# This tool is copyright (c) 2005, Matthias Urlichs.
-# It is released under the Gnu Public License, version 2.
-#
-# The basic idea is to pull and analyze SVN changes.
-#
-# Checking out the files is done by a single long-running SVN connection.
-#
-# The head revision is on branch "origin" by default.
-# You can change that with the '-o' option.
-
-use strict;
-use warnings;
-use Getopt::Std;
-use File::Copy;
-use File::Spec;
-use File::Temp qw(tempfile);
-use File::Path qw(mkpath);
-use File::Basename qw(basename dirname);
-use Time::Local;
-use IO::Pipe;
-use POSIX qw(strftime dup2);
-use IPC::Open2;
-use SVN::Core;
-use SVN::Ra;
-
-die "Need SVN:Core 1.2.1 or better" if $SVN::Core::VERSION lt "1.2.1";
-
-$SIG{'PIPE'}="IGNORE";
-$ENV{'TZ'}="UTC";
-
-our($opt_h,$opt_o,$opt_v,$opt_u,$opt_C,$opt_i,$opt_m,$opt_M,$opt_t,$opt_T,
- $opt_b,$opt_r,$opt_I,$opt_A,$opt_s,$opt_l,$opt_d,$opt_D,$opt_S,$opt_F,
- $opt_P,$opt_R);
-
-sub usage() {
- print STDERR <<END;
-Usage: ${\basename $0} # fetch/update GIT from SVN
- [-o branch-for-HEAD] [-h] [-v] [-l max_rev] [-R repack_each_revs]
- [-C GIT_repository] [-t tagname] [-T trunkname] [-b branchname]
- [-d|-D] [-i] [-u] [-r] [-I ignorefilename] [-s start_chg]
- [-m] [-M regex] [-A author_file] [-S] [-F] [-P project_name] [SVN_URL]
-END
- exit(1);
-}
-
-getopts("A:b:C:dDFhiI:l:mM:o:rs:t:T:SP:R:uv") or usage();
-usage if $opt_h;
-
-my $tag_name = $opt_t || "tags";
-my $trunk_name = defined $opt_T ? $opt_T : "trunk";
-my $branch_name = $opt_b || "branches";
-my $project_name = $opt_P || "";
-$project_name = "/" . $project_name if ($project_name);
-my $repack_after = $opt_R || 1000;
-
-@ARGV == 1 or @ARGV == 2 or usage();
-
-$opt_o ||= "origin";
-$opt_s ||= 1;
-my $git_tree = $opt_C;
-$git_tree ||= ".";
-
-my $svn_url = $ARGV[0];
-my $svn_dir = $ARGV[1];
-
-our @mergerx = ();
-if ($opt_m) {
- my $branch_esc = quotemeta ($branch_name);
- my $trunk_esc = quotemeta ($trunk_name);
- @mergerx =
- (
- qr!\b(?:merg(?:ed?|ing))\b.*?\b((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
- qr!\b(?:from|of)\W+((?:(?<=$branch_esc/)[\w\.\-]+)|(?:$trunk_esc))\b!i,
- qr!\b(?:from|of)\W+(?:the )?([\w\.\-]+)[-\s]branch\b!i
- );
-}
-if ($opt_M) {
- unshift (@mergerx, qr/$opt_M/);
-}
-
-# Absolutize filename now, since we will have chdir'ed by the time we
-# get around to opening it.
-$opt_A = File::Spec->rel2abs($opt_A) if $opt_A;
-
-our %users = ();
-our $users_file = undef;
-sub read_users($) {
- $users_file = File::Spec->rel2abs(@_);
- die "Cannot open $users_file\n" unless -f $users_file;
- open(my $authors,$users_file);
- while(<$authors>) {
- chomp;
- next unless /^(\S+?)\s*=\s*(.+?)\s*<(.+)>\s*$/;
- (my $user,my $name,my $email) = ($1,$2,$3);
- $users{$user} = [$name,$email];
- }
- close($authors);
-}
-
-select(STDERR); $|=1; select(STDOUT);
-
-
-package SVNconn;
-# Basic SVN connection.
-# We're only interested in connecting and downloading, so ...
-
-use File::Spec;
-use File::Temp qw(tempfile);
-use POSIX qw(strftime dup2);
-use Fcntl qw(SEEK_SET);
-
-sub new {
- my($what,$repo) = @_;
- $what=ref($what) if ref($what);
-
- my $self = {};
- $self->{'buffer'} = "";
- bless($self,$what);
-
- $repo =~ s#/+$##;
- $self->{'fullrep'} = $repo;
- $self->conn();
-
- return $self;
-}
-
-sub conn {
- my $self = shift;
- my $repo = $self->{'fullrep'};
- my $auth = SVN::Core::auth_open ([SVN::Client::get_simple_provider,
- SVN::Client::get_ssl_server_trust_file_provider,
- SVN::Client::get_username_provider]);
- my $s = SVN::Ra->new(url => $repo, auth => $auth);
- die "SVN connection to $repo: $!\n" unless defined $s;
- $self->{'svn'} = $s;
- $self->{'repo'} = $repo;
- $self->{'maxrev'} = $s->get_latest_revnum();
-}
-
-sub file {
- my($self,$path,$rev) = @_;
-
- my ($fh, $name) = tempfile('gitsvn.XXXXXX',
- DIR => File::Spec->tmpdir(), UNLINK => 1);
-
- print "... $rev $path ...\n" if $opt_v;
- my (undef, $properties);
- my $pool = SVN::Pool->new();
- $path =~ s#^/*##;
- eval { (undef, $properties)
- = $self->{'svn'}->get_file($path,$rev,$fh,$pool); };
- $pool->clear;
- if($@) {
- return undef if $@ =~ /Attempted to get checksum/;
- die $@;
- }
- my $mode;
- if (exists $properties->{'svn:executable'}) {
- $mode = '100755';
- } elsif (exists $properties->{'svn:special'}) {
- my ($special_content, $filesize);
- $filesize = tell $fh;
- seek $fh, 0, SEEK_SET;
- read $fh, $special_content, $filesize;
- if ($special_content =~ s/^link //) {
- $mode = '120000';
- seek $fh, 0, SEEK_SET;
- truncate $fh, 0;
- print $fh $special_content;
- } else {
- die "unexpected svn:special file encountered";
- }
- } else {
- $mode = '100644';
- }
- close ($fh);
-
- return ($name, $mode);
-}
-
-sub ignore {
- my($self,$path,$rev) = @_;
-
- print "... $rev $path ...\n" if $opt_v;
- $path =~ s#^/*##;
- my (undef,undef,$properties)
- = $self->{'svn'}->get_dir($path,$rev,undef);
- if (exists $properties->{'svn:ignore'}) {
- my ($fh, $name) = tempfile('gitsvn.XXXXXX',
- DIR => File::Spec->tmpdir(),
- UNLINK => 1);
- print $fh $properties->{'svn:ignore'};
- close($fh);
- return $name;
- } else {
- return undef;
- }
-}
-
-sub dir_list {
- my($self,$path,$rev) = @_;
- $path =~ s#^/*##;
- my ($dirents,undef,$properties)
- = $self->{'svn'}->get_dir($path,$rev,undef);
- return $dirents;
-}
-
-package main;
-use URI;
-
-our $svn = $svn_url;
-$svn .= "/$svn_dir" if defined $svn_dir;
-my $svn2 = SVNconn->new($svn);
-$svn = SVNconn->new($svn);
-
-my $lwp_ua;
-if($opt_d or $opt_D) {
- $svn_url = URI->new($svn_url)->canonical;
- if($opt_D) {
- $svn_dir =~ s#/*$#/#;
- } else {
- $svn_dir = "";
- }
- if ($svn_url->scheme eq "http") {
- use LWP::UserAgent;
- $lwp_ua = LWP::UserAgent->new(keep_alive => 1, requests_redirectable => []);
- } else {
- print STDERR "Warning: not HTTP; turning off direct file access\n";
- $opt_d=0;
- }
-}
-
-sub pdate($) {
- my($d) = @_;
- $d =~ m#(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)#
- or die "Unparseable date: $d\n";
- my $y=$1; $y-=1900 if $y>1900;
- return timegm($6||0,$5,$4,$3,$2-1,$y);
-}
-
-sub getwd() {
- my $pwd = `pwd`;
- chomp $pwd;
- return $pwd;
-}
-
-
-sub get_headref($$) {
- my $name = shift;
- my $git_dir = shift;
- my $sha;
-
- if (open(C,"$git_dir/refs/heads/$name")) {
- chomp($sha = <C>);
- close(C);
- length($sha) == 40
- or die "Cannot get head id for $name ($sha): $!\n";
- }
- return $sha;
-}
-
-
--d $git_tree
- or mkdir($git_tree,0777)
- or die "Could not create $git_tree: $!";
-chdir($git_tree);
-
-my $orig_branch = "";
-my $forward_master = 0;
-my %branches;
-
-my $git_dir = $ENV{"GIT_DIR"} || ".git";
-$git_dir = getwd()."/".$git_dir unless $git_dir =~ m#^/#;
-$ENV{"GIT_DIR"} = $git_dir;
-my $orig_git_index;
-$orig_git_index = $ENV{GIT_INDEX_FILE} if exists $ENV{GIT_INDEX_FILE};
-my ($git_ih, $git_index) = tempfile('gitXXXXXX', SUFFIX => '.idx',
- DIR => File::Spec->tmpdir());
-close ($git_ih);
-$ENV{GIT_INDEX_FILE} = $git_index;
-my $maxnum = 0;
-my $last_rev = "";
-my $last_branch;
-my $current_rev = $opt_s || 1;
-unless(-d $git_dir) {
- system("git-init");
- die "Cannot init the GIT db at $git_tree: $?\n" if $?;
- system("git-read-tree");
- die "Cannot init an empty tree: $?\n" if $?;
-
- $last_branch = $opt_o;
- $orig_branch = "";
-} else {
- -f "$git_dir/refs/heads/$opt_o"
- or die "Branch '$opt_o' does not exist.\n".
- "Either use the correct '-o branch' option,\n".
- "or import to a new repository.\n";
-
- -f "$git_dir/svn2git"
- or die "'$git_dir/svn2git' does not exist.\n".
- "You need that file for incremental imports.\n";
- open(F, "git-symbolic-ref HEAD |") or
- die "Cannot run git-symbolic-ref: $!\n";
- chomp ($last_branch = <F>);
- $last_branch = basename($last_branch);
- close(F);
- unless($last_branch) {
- warn "Cannot read the last branch name: $! -- assuming 'master'\n";
- $last_branch = "master";
- }
- $orig_branch = $last_branch;
- $last_rev = get_headref($orig_branch, $git_dir);
- if (-f "$git_dir/SVN2GIT_HEAD") {
- die <<EOM;
-SVN2GIT_HEAD exists.
-Make sure your working directory corresponds to HEAD and remove SVN2GIT_HEAD.
-You may need to run
-
- git-read-tree -m -u SVN2GIT_HEAD HEAD
-EOM
- }
- system('cp', "$git_dir/HEAD", "$git_dir/SVN2GIT_HEAD");
-
- $forward_master =
- $opt_o ne 'master' && -f "$git_dir/refs/heads/master" &&
- system('cmp', '-s', "$git_dir/refs/heads/master",
- "$git_dir/refs/heads/$opt_o") == 0;
-
- # populate index
- system('git-read-tree', $last_rev);
- die "read-tree failed: $?\n" if $?;
-
- # Get the last import timestamps
- open my $B,"<", "$git_dir/svn2git";
- while(<$B>) {
- chomp;
- my($num,$branch,$ref) = split;
- $branches{$branch}{$num} = $ref;
- $branches{$branch}{"LAST"} = $ref;
- $current_rev = $num+1 if $current_rev <= $num;
- }
- close($B);
-}
--d $git_dir
- or die "Could not create git subdir ($git_dir).\n";
-
-my $default_authors = "$git_dir/svn-authors";
-if ($opt_A) {
- read_users($opt_A);
- copy($opt_A,$default_authors) or die "Copy failed: $!";
-} else {
- read_users($default_authors) if -f $default_authors;
-}
-
-open BRANCHES,">>", "$git_dir/svn2git";
-
-sub node_kind($$) {
- my ($svnpath, $revision) = @_;
- my $pool=SVN::Pool->new;
- $svnpath =~ s#^/*##;
- my $kind = $svn->{'svn'}->check_path($svnpath,$revision,$pool);
- $pool->clear;
- return $kind;
-}
-
-sub get_file($$$) {
- my($svnpath,$rev,$path) = @_;
-
- # now get it
- my ($name,$mode);
- if($opt_d) {
- my($req,$res);
-
- # /svn/!svn/bc/2/django/trunk/django-docs/build.py
- my $url=$svn_url->clone();
- $url->path($url->path."/!svn/bc/$rev/$svn_dir$svnpath");
- print "... $path...\n" if $opt_v;
- $req = HTTP::Request->new(GET => $url);
- $res = $lwp_ua->request($req);
- if ($res->is_success) {
- my $fh;
- ($fh, $name) = tempfile('gitsvn.XXXXXX',
- DIR => File::Spec->tmpdir(), UNLINK => 1);
- print $fh $res->content;
- close($fh) or die "Could not write $name: $!\n";
- } else {
- return undef if $res->code == 301; # directory?
- die $res->status_line." at $url\n";
- }
- $mode = '0644'; # can't obtain mode via direct http request?
- } else {
- ($name,$mode) = $svn->file("$svnpath",$rev);
- return undef unless defined $name;
- }
-
- my $pid = open(my $F, '-|');
- die $! unless defined $pid;
- if (!$pid) {
- exec("git-hash-object", "-w", $name)
- or die "Cannot create object: $!\n";
- }
- my $sha = <$F>;
- chomp $sha;
- close $F;
- unlink $name;
- return [$mode, $sha, $path];
-}
-
-sub get_ignore($$$$$) {
- my($new,$old,$rev,$path,$svnpath) = @_;
-
- return unless $opt_I;
- my $name = $svn->ignore("$svnpath",$rev);
- if ($path eq '/') {
- $path = $opt_I;
- } else {
- $path = File::Spec->catfile($path,$opt_I);
- }
- if (defined $name) {
- my $pid = open(my $F, '-|');
- die $! unless defined $pid;
- if (!$pid) {
- exec("git-hash-object", "-w", $name)
- or die "Cannot create object: $!\n";
- }
- my $sha = <$F>;
- chomp $sha;
- close $F;
- unlink $name;
- push(@$new,['0644',$sha,$path]);
- } elsif (defined $old) {
- push(@$old,$path);
- }
-}
-
-sub project_path($$)
-{
- my ($path, $project) = @_;
-
- $path = "/".$path unless ($path =~ m#^\/#) ;
- return $1 if ($path =~ m#^$project\/(.*)$#);
-
- $path =~ s#\.#\\\.#g;
- $path =~ s#\+#\\\+#g;
- return "/" if ($project =~ m#^$path.*$#);
-
- return undef;
-}
-
-sub split_path($$) {
- my($rev,$path) = @_;
- my $branch;
-
- if($path =~ s#^/\Q$tag_name\E/([^/]+)/?##) {
- $branch = "/$1";
- } elsif($path =~ s#^/\Q$trunk_name\E/?##) {
- $branch = "/";
- } elsif($path =~ s#^/\Q$branch_name\E/([^/]+)/?##) {
- $branch = $1;
- } else {
- my %no_error = (
- "/" => 1,
- "/$tag_name" => 1,
- "/$branch_name" => 1
- );
- print STDERR "$rev: Unrecognized path: $path\n" unless (defined $no_error{$path});
- return ()
- }
- if ($path eq "") {
- $path = "/";
- } elsif ($project_name) {
- $path = project_path($path, $project_name);
- }
- return ($branch,$path);
-}
-
-sub branch_rev($$) {
-
- my ($srcbranch,$uptorev) = @_;
-
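- # Walk this branch's imported revisions newest-first and return the
- # latest one at or before $uptorev.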
- my $bbranches = $branches{$srcbranch};
- my @revs = reverse sort { ($a eq 'LAST' ? 0 : $a) <=> ($b eq 'LAST' ? 0 : $b) } keys %$bbranches;
- my $therev;
- foreach my $arev(@revs) {
- next if ($arev eq 'LAST');
- if ($arev <= $uptorev) {
- $therev = $arev;
- last;
- }
- }
- return $therev;
-}
-
-sub expand_svndir($$$);
-
-sub expand_svndir($$$)
-{
- my ($svnpath, $rev, $path) = @_;
- my @list;
- get_ignore(\@list, undef, $rev, $path, $svnpath);
- my $dirents = $svn->dir_list($svnpath, $rev);
- foreach my $p(keys %$dirents) {
- my $kind = node_kind($svnpath.'/'.$p, $rev);
- if ($kind eq $SVN::Node::file) {
- my $f = get_file($svnpath.'/'.$p, $rev, $path.'/'.$p);
- push(@list, $f) if $f;
- } elsif ($kind eq $SVN::Node::dir) {
- push(@list,
- expand_svndir($svnpath.'/'.$p, $rev, $path.'/'.$p));
- }
- }
- return @list;
-}
-
-sub copy_path($$$$$$$$) {
- # Somebody copied a whole subdirectory.
- # We need to find the index entries from the old version which the
- # SVN log entry points to, and add them to the new place.
-
- my($newrev,$newbranch,$path,$oldpath,$rev,$node_kind,$new,$parents) = @_;
-
- my($srcbranch,$srcpath) = split_path($rev,$oldpath);
- unless(defined $srcbranch && defined $srcpath) {
- print "Path not found when copying from $oldpath @ $rev.\n".
- "Will try to copy from original SVN location...\n"
- if $opt_v;
- push (@$new, expand_svndir($oldpath, $rev, $path));
- return;
- }
- my $therev = branch_rev($srcbranch, $rev);
- my $gitrev = $branches{$srcbranch}{$therev};
- unless($gitrev) {
- print STDERR "$newrev:$newbranch: could not find $oldpath \@ $rev\n";
- return;
- }
- if ($srcbranch ne $newbranch) {
- push(@$parents, $branches{$srcbranch}{'LAST'});
- }
- print "$newrev:$newbranch:$path: copying from $srcbranch:$srcpath @ $rev\n" if $opt_v;
- if ($node_kind eq $SVN::Node::dir) {
- $srcpath =~ s#/*$#/#;
- }
-
- my $pid = open my $f,'-|';
- die $! unless defined $pid;
- if (!$pid) {
- exec("git-ls-tree","-r","-z",$gitrev,$srcpath)
- or die $!;
- }
- local $/ = "\0";
- while(<$f>) {
- chomp;
- my($m,$p) = split(/\t/,$_,2);
- my($mode,$type,$sha1) = split(/ /,$m);
- next if $type ne "blob";
- if ($node_kind eq $SVN::Node::dir) {
- $p = $path . substr($p,length($srcpath)-1);
- } else {
- $p = $path;
- }
- push(@$new,[$mode,$sha1,$p]);
- }
- close($f) or
- print STDERR "$newrev:$newbranch: could not list files in $oldpath \@ $rev\n";
-}
-
-sub commit {
- my($branch, $changed_paths, $revision, $author, $date, $message) = @_;
- my($committer_name,$committer_email,$dest);
- my($author_name,$author_email);
- my(@old,@new,@parents);
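- # Resolve committer and author identities, find the parent commit and
- # sync the index to it, apply this revision's changes, then write a
- # tree and commit object and update the branch ref or tag.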
-
- if (not defined $author or $author eq "") {
- $committer_name = $committer_email = "unknown";
- } elsif (defined $users_file) {
- die "User $author is not listed in $users_file\n"
- unless exists $users{$author};
- ($committer_name,$committer_email) = @{$users{$author}};
- } elsif ($author =~ /^(.*?)\s+<(.*)>$/) {
- ($committer_name, $committer_email) = ($1, $2);
- } else {
- $author =~ s/^<(.*)>$/$1/;
- $committer_name = $committer_email = $author;
- }
-
- if ($opt_F && $message =~ /From:\s+(.*?)\s+<(.*)>\s*\n/) {
- ($author_name, $author_email) = ($1, $2);
- print "Author from From: $1 <$2>\n" if ($opt_v);;
- } elsif ($opt_S && $message =~ /Signed-off-by:\s+(.*?)\s+<(.*)>\s*\n/) {
- ($author_name, $author_email) = ($1, $2);
- print "Author from Signed-off-by: $1 <$2>\n" if ($opt_v);;
- } else {
- $author_name = $committer_name;
- $author_email = $committer_email;
- }
-
- $date = pdate($date);
-
- my $tag;
- my $parent;
- if($branch eq "/") { # trunk
- $parent = $opt_o;
- } elsif($branch =~ m#^/(.+)#) { # tag
- $tag = 1;
- $parent = $1;
- } else { # "normal" branch
- # nothing to do
- $parent = $branch;
- }
- $dest = $parent;
-
- my $prev = $changed_paths->{"/"};
- if($prev and $prev->[0] eq "A") {
- delete $changed_paths->{"/"};
- my $oldpath = $prev->[1];
- my $rev;
- if(defined $oldpath) {
- my $p;
- ($parent,$p) = split_path($revision,$oldpath);
- if(defined $parent) {
- if($parent eq "/") {
- $parent = $opt_o;
- } else {
- $parent =~ s#^/##; # if it's a tag
- }
- }
- } else {
- $parent = undef;
- }
- }
-
- my $rev;
- if($revision > $opt_s and defined $parent) {
- open(H,'-|',"git-rev-parse","--verify",$parent);
- $rev = <H>;
- close(H) or do {
- print STDERR "$revision: cannot find commit '$parent'!\n";
- return;
- };
- chop $rev;
- if(length($rev) != 40) {
- print STDERR "$revision: cannot find commit '$parent'!\n";
- return;
- }
- $rev = $branches{($parent eq $opt_o) ? "/" : $parent}{"LAST"};
- if($revision != $opt_s and not $rev) {
- print STDERR "$revision: do not know ancestor for '$parent'!\n";
- return;
- }
- } else {
- $rev = undef;
- }
-
-# if($prev and $prev->[0] eq "A") {
-# if(not $tag) {
-# unless(open(H,"> $git_dir/refs/heads/$branch")) {
-# print STDERR "$revision: Could not create branch $branch: $!\n";
-# $state=11;
-# next;
-# }
-# print H "$rev\n"
-# or die "Could not write branch $branch: $!";
-# close(H)
-# or die "Could not write branch $branch: $!";
-# }
-# }
- if(not defined $rev) {
- unlink($git_index);
- } elsif ($rev ne $last_rev) {
- print "Switching from $last_rev to $rev ($branch)\n" if $opt_v;
- system("git-read-tree", $rev);
- die "read-tree failed for $rev: $?\n" if $?;
- $last_rev = $rev;
- }
-
- push (@parents, $rev) if defined $rev;
-
- my $cid;
- if($tag and not %$changed_paths) {
- $cid = $rev;
- } else {
- my @paths = sort keys %$changed_paths;
- foreach my $path(@paths) {
- my $action = $changed_paths->{$path};
-
- if ($action->[0] eq "R") {
- # refer to a file/tree in an earlier commit
- push(@old,$path); # remove any old stuff
- }
- if(($action->[0] eq "A") || ($action->[0] eq "R")) {
- my $node_kind = node_kind($action->[3], $revision);
- if ($node_kind eq $SVN::Node::file) {
- my $f = get_file($action->[3],
- $revision, $path);
- if ($f) {
- push(@new,$f) if $f;
- } else {
- my $opath = $action->[3];
- print STDERR "$revision: $branch: could not fetch '$opath'\n";
- }
- } elsif ($node_kind eq $SVN::Node::dir) {
- if($action->[1]) {
- copy_path($revision, $branch,
- $path, $action->[1],
- $action->[2], $node_kind,
- \@new, \@parents);
- } else {
- get_ignore(\@new, \@old, $revision,
- $path, $action->[3]);
- }
- }
- } elsif ($action->[0] eq "D") {
- push(@old,$path);
- } elsif ($action->[0] eq "M") {
- my $node_kind = node_kind($action->[3], $revision);
- if ($node_kind eq $SVN::Node::file) {
- my $f = get_file($action->[3],
- $revision, $path);
- push(@new,$f) if $f;
- } elsif ($node_kind eq $SVN::Node::dir) {
- get_ignore(\@new, \@old, $revision,
- $path, $action->[3]);
- }
- } else {
- die "$revision: unknown action '".$action->[0]."' for $path\n";
- }
- }
-
- while(@old) {
- my @o1;
- if(@old > 55) {
- @o1 = splice(@old,0,50);
- } else {
- @o1 = @old;
- @old = ();
- }
- my $pid = open my $F, "-|";
- die "$!" unless defined $pid;
- if (!$pid) {
- exec("git-ls-files", "-z", @o1) or die $!;
- }
- @o1 = ();
- local $/ = "\0";
- while(<$F>) {
- chomp;
- push(@o1,$_);
- }
- close($F);
-
- while(@o1) {
- my @o2;
- if(@o1 > 55) {
- @o2 = splice(@o1,0,50);
- } else {
- @o2 = @o1;
- @o1 = ();
- }
- system("git-update-index","--force-remove","--",@o2);
- die "Cannot remove files: $?\n" if $?;
- }
- }
- while(@new) {
- my @n2;
- if(@new > 12) {
- @n2 = splice(@new,0,10);
- } else {
- @n2 = @new;
- @new = ();
- }
- system("git-update-index","--add",
- (map { ('--cacheinfo', @$_) } @n2));
- die "Cannot add files: $?\n" if $?;
- }
-
- my $pid = open(C,"-|");
- die "Cannot fork: $!" unless defined $pid;
- unless($pid) {
- exec("git-write-tree");
- die "Cannot exec git-write-tree: $!\n";
- }
- chomp(my $tree = <C>);
- length($tree) == 40
- or die "Cannot get tree id ($tree): $!\n";
- close(C)
- or die "Error running git-write-tree: $?\n";
- print "Tree ID $tree\n" if $opt_v;
-
- my $pr = IO::Pipe->new() or die "Cannot open pipe: $!\n";
- my $pw = IO::Pipe->new() or die "Cannot open pipe: $!\n";
- $pid = fork();
- die "Fork: $!\n" unless defined $pid;
- unless($pid) {
- $pr->writer();
- $pw->reader();
- open(OUT,">&STDOUT");
- dup2($pw->fileno(),0);
- dup2($pr->fileno(),1);
- $pr->close();
- $pw->close();
-
- my @par = ();
-
- # loose detection of merges
- # based on the commit msg
- foreach my $rx (@mergerx) {
- if ($message =~ $rx) {
- my $mparent = $1;
- if ($mparent eq 'HEAD') { $mparent = $opt_o };
- if ( -e "$git_dir/refs/heads/$mparent") {
- $mparent = get_headref($mparent, $git_dir);
- push (@parents, $mparent);
- print OUT "Merge parent branch: $mparent\n" if $opt_v;
- }
- }
- }
- my %seen_parents = ();
- my @unique_parents = grep { ! $seen_parents{$_} ++ } @parents;
- foreach my $bparent (@unique_parents) {
- push @par, '-p', $bparent;
- print OUT "Merge parent branch: $bparent\n" if $opt_v;
- }
-
- exec("env",
- "GIT_AUTHOR_NAME=$author_name",
- "GIT_AUTHOR_EMAIL=$author_email",
- "GIT_AUTHOR_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
- "GIT_COMMITTER_NAME=$committer_name",
- "GIT_COMMITTER_EMAIL=$committer_email",
- "GIT_COMMITTER_DATE=".strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)),
- "git-commit-tree", $tree,@par);
- die "Cannot exec git-commit-tree: $!\n";
- }
- $pw->writer();
- $pr->reader();
-
- $message =~ s/[\s\n]+\z//;
- $message = "r$revision: $message" if $opt_r;
-
- print $pw "$message\n"
- or die "Error writing to git-commit-tree: $!\n";
- $pw->close();
-
- print "Committed change $revision:$branch ".strftime("%Y-%m-%d %H:%M:%S",gmtime($date)).")\n" if $opt_v;
- chomp($cid = <$pr>);
- length($cid) == 40
- or die "Cannot get commit id ($cid): $!\n";
- print "Commit ID $cid\n" if $opt_v;
- $pr->close();
-
- waitpid($pid,0);
- die "Error running git-commit-tree: $?\n" if $?;
- }
-
- if (not defined $cid) {
- $cid = $branches{"/"}{"LAST"};
- }
-
- if(not defined $dest) {
- print "... no known parent\n" if $opt_v;
- } elsif(not $tag) {
- print "Writing to refs/heads/$dest\n" if $opt_v;
- open(C,">$git_dir/refs/heads/$dest") and
- print C ("$cid\n") and
- close(C)
- or die "Cannot write branch $dest for update: $!\n";
- }
-
- if ($tag) {
- $last_rev = "-" if %$changed_paths;
- # the tag was 'complex', i.e. did not refer to a "real" revision
-
- $dest =~ tr/_/\./ if $opt_u;
-
- system('git-tag', '-f', $dest, $cid) == 0
- or die "Cannot create tag $dest: $!\n";
-
- print "Created tag '$dest' on '$branch'\n" if $opt_v;
- }
- $branches{$branch}{"LAST"} = $cid;
- $branches{$branch}{$revision} = $cid;
- $last_rev = $cid;
- print BRANCHES "$revision $branch $cid\n";
- print "DONE: $revision $dest $cid\n" if $opt_v;
-}
-
-sub commit_all {
- # Recursive use of the SVN connection does not work
- local $svn = $svn2;
-
- my ($changed_paths, $revision, $author, $date, $message, $pool) = @_;
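- # Convert the action objects to plain arrays, group the changed paths
- # by the branch they belong to, and commit each group separately.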
- my %p;
- while(my($path,$action) = each %$changed_paths) {
- $p{$path} = [ $action->action,$action->copyfrom_path, $action->copyfrom_rev, $path ];
- }
- $changed_paths = \%p;
-
- my %done;
- my @col;
- my $pref;
- my $branch;
-
- while(my($path,$action) = each %$changed_paths) {
- ($branch,$path) = split_path($revision,$path);
- next if not defined $branch;
- next if not defined $path;
- $done{$branch}{$path} = $action;
- }
- while(($branch,$changed_paths) = each %done) {
- commit($branch, $changed_paths, $revision, $author, $date, $message);
- }
-}
-
-$opt_l = $svn->{'maxrev'} if not defined $opt_l or $opt_l > $svn->{'maxrev'};
-
-if ($opt_l < $current_rev) {
- print "Up to date: no new revisions to fetch!\n" if $opt_v;
- unlink("$git_dir/SVN2GIT_HEAD");
- exit;
-}
-
-print "Processing from $current_rev to $opt_l ...\n" if $opt_v;
-
-my $from_rev;
-my $to_rev = $current_rev - 1;
-
-while ($to_rev < $opt_l) {
- $from_rev = $to_rev + 1;
- $to_rev = $from_rev + $repack_after;
- $to_rev = $opt_l if $opt_l < $to_rev;
- print "Fetching from $from_rev to $to_rev ...\n" if $opt_v;
- my $pool=SVN::Pool->new;
- $svn->{'svn'}->get_log("/",$from_rev,$to_rev,0,1,1,\&commit_all,$pool);
- $pool->clear;
- my $pid = fork();
- die "Fork: $!\n" unless defined $pid;
- unless($pid) {
- exec("git-repack", "-d")
- or die "Cannot repack: $!\n";
- }
- waitpid($pid, 0);
-}
-
-
-unlink($git_index);
-
-if (defined $orig_git_index) {
- $ENV{GIT_INDEX_FILE} = $orig_git_index;
-} else {
- delete $ENV{GIT_INDEX_FILE};
-}
-
-# Now switch back to the branch we were in before all of this happened
-if($orig_branch) {
- print "DONE\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
- system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
- if $forward_master;
- unless ($opt_i) {
- system('git-read-tree', '-m', '-u', 'SVN2GIT_HEAD', 'HEAD');
- die "read-tree failed: $?\n" if $?;
- }
-} else {
- $orig_branch = "master";
- print "DONE; creating $orig_branch branch\n" if $opt_v and (not defined $opt_l or $opt_l > 0);
- system("cp","$git_dir/refs/heads/$opt_o","$git_dir/refs/heads/master")
- unless -f "$git_dir/refs/heads/master";
- system('git-update-ref', 'HEAD', "$orig_branch");
- unless ($opt_i) {
- system('git checkout');
- die "checkout failed: $?\n" if $?;
- }
-}
-unlink("$git_dir/SVN2GIT_HEAD");
-close(BRANCHES);
{ "diff-files", cmd_diff_files },
{ "diff-index", cmd_diff_index, RUN_SETUP },
{ "diff-tree", cmd_diff_tree, RUN_SETUP },
+ { "fetch", cmd_fetch, RUN_SETUP },
+ { "fetch-pack", cmd_fetch_pack, RUN_SETUP },
{ "fetch--tool", cmd_fetch__tool, RUN_SETUP },
{ "fmt-merge-msg", cmd_fmt_merge_msg, RUN_SETUP },
{ "for-each-ref", cmd_for_each_ref, RUN_SETUP },
{ "get-tar-commit-id", cmd_get_tar_commit_id },
{ "grep", cmd_grep, RUN_SETUP | USE_PAGER },
{ "help", cmd_help },
+#ifndef NO_CURL
+ { "http-fetch", cmd_http_fetch, RUN_SETUP },
+#endif
{ "init", cmd_init_db },
{ "init-db", cmd_init_db },
{ "log", cmd_log, RUN_SETUP | USE_PAGER },
/*
* Take the basename of argv[0] as the command
* name, and the dirname as the default exec_path
- * if it's an absolute path and we don't have
- * anything better.
+ * if we don't have anything better.
*/
if (slash) {
*slash++ = 0;
if (*cmd == '/')
exec_path = cmd;
+ else
+ exec_path = xstrdup(make_absolute_path(cmd));
cmd = slash;
}
proc start_rev_list {view} {
global startmsecs
global commfd leftover tclencoding datemode
- global viewargs viewfiles commitidx
- global lookingforhead showlocalchanges
+ global viewargs viewfiles commitidx viewcomplete vnextroot
+ global showlocalchanges commitinterest mainheadid
+ global progressdirn progresscoords proglastnc curview
set startmsecs [clock clicks -milliseconds]
set commitidx($view) 0
+ set viewcomplete($view) 0
+ set vnextroot($view) 0
set order "--topo-order"
if {$datemode} {
set order "--date-order"
}
if {[catch {
- set fd [open [concat | git log -z --pretty=raw $order --parents \
+ set fd [open [concat | git log --no-color -z --pretty=raw $order --parents \
--boundary $viewargs($view) "--" $viewfiles($view)] r]
} err]} {
error_popup "Error executing git rev-list: $err"
}
set commfd($view) $fd
set leftover($view) {}
- set lookingforhead $showlocalchanges
+ if {$showlocalchanges} {
+ lappend commitinterest($mainheadid) {dodiffindex}
+ }
fconfigure $fd -blocking 0 -translation lf -eofchar {}
if {$tclencoding != {}} {
fconfigure $fd -encoding $tclencoding
}
filerun $fd [list getcommitlines $fd $view]
- nowbusy $view
+ nowbusy $view "Reading"
+ if {$view == $curview} {
+ set progressdirn 1
+ set progresscoords {0 0}
+ set proglastnc 0
+ }
}
proc stop_rev_list {} {
}
proc getcommits {} {
- global phase canv mainfont curview
+ global phase canv curview
set phase getcommits
initlayout
show_status "Reading commits..."
}
+# This makes a string representation of a positive integer which
+# sorts as a string in numerical order
+proc strrep {n} {
+ if {$n < 16} {
+ return [format "%x" $n]
+ } elseif {$n < 256} {
+ return [format "x%.2x" $n]
+ } elseif {$n < 65536} {
+ return [format "y%.4x" $n]
+ }
+ return [format "z%.8x" $n]
+}
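+# For example: strrep 10 -> "a", strrep 1000 -> "y03e8"; the x/y/z
+# prefixes make representations of successively larger ranges sort last.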
+
proc getcommitlines {fd view} {
- global commitlisted
+ global commitlisted commitinterest
global leftover commfd
- global displayorder commitidx commitrow commitdata
+ global displayorder commitidx viewcomplete commitrow commitdata
global parentlist children curview hlview
global vparentlist vdisporder vcmitlisted
+ global ordertok vnextroot idpending
set stuff [read $fd 500000]
# git log doesn't terminate the last commit with a null...
if {![eof $fd]} {
return 1
}
- global viewname
+ # Check if we have seen any ids listed as parents that haven't
+ # appeared in the list
+ foreach vid [array names idpending "$view,*"] {
+ # should only get here if git log is buggy
+ set id [lindex [split $vid ","] 1]
+ set commitrow($vid) $commitidx($view)
+ incr commitidx($view)
+ if {$view == $curview} {
+ lappend parentlist {}
+ lappend displayorder $id
+ lappend commitlisted 0
+ } else {
+ lappend vparentlist($view) {}
+ lappend vdisporder($view) $id
+ lappend vcmitlisted($view) 0
+ }
+ }
+ set viewcomplete($view) 1
+ global viewname progresscoords
unset commfd($view)
notbusy $view
+ set progresscoords {0 0}
+ adjustprogress
# set it blocking so we wait for the process to terminate
fconfigure $fd -blocking 1
if {[catch {close $fd} err]} {
exit 1
}
set id [lindex $ids 0]
+ if {![info exists ordertok($view,$id)]} {
+ set otok "o[strrep $vnextroot($view)]"
+ incr vnextroot($view)
+ set ordertok($view,$id) $otok
+ } else {
+ set otok $ordertok($view,$id)
+ unset idpending($view,$id)
+ }
if {$listed} {
set olds [lrange $ids 1 end]
- set i 0
- foreach p $olds {
- if {$i == 0 || [lsearch -exact $olds $p] >= $i} {
- lappend children($view,$p) $id
+ if {[llength $olds] == 1} {
+ set p [lindex $olds 0]
+ lappend children($view,$p) $id
+ if {![info exists ordertok($view,$p)]} {
+ set ordertok($view,$p) $ordertok($view,$id)
+ set idpending($view,$p) 1
+ }
+ } else {
+ set i 0
+ foreach p $olds {
+ if {$i == 0 || [lsearch -exact $olds $p] >= $i} {
+ lappend children($view,$p) $id
+ }
+ if {![info exists ordertok($view,$p)]} {
+ set ordertok($view,$p) "$otok[strrep $i]]"
+ set idpending($view,$p) 1
+ }
+ incr i
}
- incr i
}
} else {
set olds {}
lappend vdisporder($view) $id
lappend vcmitlisted($view) $listed
}
+ if {[info exists commitinterest($id)]} {
+ foreach script $commitinterest($id) {
+ eval [string map [list "%I" $id] $script]
+ }
+ unset commitinterest($id)
+ }
set gotsome 1
}
if {$gotsome} {
run chewcommits $view
+ if {$view == $curview} {
+ # update progress bar
+ global progressdirn progresscoords proglastnc
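+ # The total commit count isn't known in advance, so sweep a bar of
+ # fixed width 0.2 back and forth across the canvas as input arrives.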
+ set inc [expr {($commitidx($view) - $proglastnc) * 0.0002}]
+ set proglastnc $commitidx($view)
+ set l [lindex $progresscoords 0]
+ set r [lindex $progresscoords 1]
+ if {$progressdirn} {
+ set r [expr {$r + $inc}]
+ if {$r >= 1.0} {
+ set r 1.0
+ set progressdirn 0
+ }
+ if {$r > 0.2} {
+ set l [expr {$r - 0.2}]
+ }
+ } else {
+ set l [expr {$l - $inc}]
+ if {$l <= 0.0} {
+ set l 0.0
+ set progressdirn 1
+ }
+ set r [expr {$l + 0.2}]
+ }
+ set progresscoords [list $l $r]
+ adjustprogress
+ }
}
return 2
}
proc chewcommits {view} {
- global curview hlview commfd
+ global curview hlview viewcomplete
global selectedline pending_select
- set more 0
if {$view == $curview} {
- set allread [expr {![info exists commfd($view)]}]
- set tlimit [expr {[clock clicks -milliseconds] + 50}]
- set more [layoutmore $tlimit $allread]
- if {$allread && !$more} {
+ layoutmore
+ if {$viewcomplete($view)} {
global displayorder commitidx phase
global numcommits startmsecs
if {[info exists hlview] && $view == $hlview} {
vhighlightmore
}
- return $more
+ return 0
}
proc readcommit {id} {
}
proc updatecommits {} {
- global viewdata curview phase displayorder
+ global viewdata curview phase displayorder ordertok idpending
global children commitrow selectedline thickerline showneartags
if {$phase ne {}} {
foreach id $displayorder {
catch {unset children($n,$id)}
catch {unset commitrow($n,$id)}
+ catch {unset ordertok($n,$id)}
+ }
+ foreach vid [array names idpending "$n,*"] {
+ unset idpending($vid)
}
set curview -1
catch {unset selectedline}
proc makewindow {} {
global canv canv2 canv3 linespc charspc ctext cflist
- global textfont mainfont uifont tabstop
+ global tabstop
global findtype findtypemenu findloc findstring fstring geometry
global entries sha1entry sha1string sha1but
global diffcontextstring diffcontext
global highlight_files gdttype
global searchstring sstring
global bgcolor fgcolor bglist fglist diffcolors selectbgcolor
- global headctxmenu
+ global headctxmenu progresscanv progressitem progresscoords statusw
+ global fprogitem fprogcoord lastprogupdate progupdatepending
+ global rprogitem rprogcoord
+ global have_tk85
menu .bar
.bar add cascade -label "File" -menu .bar.file
- .bar configure -font $uifont
+ .bar configure -font uifont
menu .bar.file
.bar.file add command -label "Update" -command updatecommits
.bar.file add command -label "Reread references" -command rereadrefs
.bar.file add command -label "List references" -command showrefs
.bar.file add command -label "Quit" -command doquit
- .bar.file configure -font $uifont
+ .bar.file configure -font uifont
menu .bar.edit
.bar add cascade -label "Edit" -menu .bar.edit
.bar.edit add command -label "Preferences" -command doprefs
- .bar.edit configure -font $uifont
+ .bar.edit configure -font uifont
- menu .bar.view -font $uifont
+ menu .bar.view -font uifont
.bar add cascade -label "View" -menu .bar.view
.bar.view add command -label "New view..." -command {newview 0}
.bar.view add command -label "Edit view..." -command editview \
.bar add cascade -label "Help" -menu .bar.help
.bar.help add command -label "About gitk" -command about
.bar.help add command -label "Key bindings" -command keys
- .bar.help configure -font $uifont
+ .bar.help configure -font uifont
. configure -menu .bar
# the gui has upper and lower half, parts of a paned window.
set entries $sha1entry
set sha1but .tf.bar.sha1label
button $sha1but -text "SHA1 ID: " -state disabled -relief flat \
- -command gotocommit -width 8 -font $uifont
+ -command gotocommit -width 8 -font uifont
$sha1but conf -disabledforeground [$sha1but cget -foreground]
pack .tf.bar.sha1label -side left
- entry $sha1entry -width 40 -font $textfont -textvariable sha1string
+ entry $sha1entry -width 40 -font textfont -textvariable sha1string
trace add variable sha1string write sha1change
pack $sha1entry -side left -pady 2
-state disabled -width 26
pack .tf.bar.rightbut -side left -fill y
- button .tf.bar.findbut -text "Find" -command dofind -font $uifont
- pack .tf.bar.findbut -side left
+ # Status label and progress bar
+ set statusw .tf.bar.status
+ label $statusw -width 15 -relief sunken -font uifont
+ pack $statusw -side left -padx 5
+ set h [expr {[font metrics uifont -linespace] + 2}]
+ set progresscanv .tf.bar.progress
+ canvas $progresscanv -relief sunken -height $h -borderwidth 2
+ set progressitem [$progresscanv create rect -1 0 0 $h -fill green]
+ set fprogitem [$progresscanv create rect -1 0 0 $h -fill yellow]
+ set rprogitem [$progresscanv create rect -1 0 0 $h -fill red]
+ pack $progresscanv -side right -expand 1 -fill x
+ set progresscoords {0 0}
+ set fprogcoord 0
+ set rprogcoord 0
+ bind $progresscanv <Configure> adjustprogress
+ set lastprogupdate [clock clicks -milliseconds]
+ set progupdatepending 0
+
+ # build up the bottom bar of upper window
+ label .tf.lbar.flabel -text "Find " -font uifont
+ button .tf.lbar.fnext -text "next" -command {dofind 1 1} -font uifont
+ button .tf.lbar.fprev -text "prev" -command {dofind -1 1} -font uifont
+ label .tf.lbar.flab2 -text " commit " -font uifont
+ pack .tf.lbar.flabel .tf.lbar.fnext .tf.lbar.fprev .tf.lbar.flab2 \
+ -side left -fill y
+ set gdttype "containing:"
+ set gm [tk_optionMenu .tf.lbar.gdttype gdttype \
+ "containing:" \
+ "touching paths:" \
+ "adding/removing string:"]
+ trace add variable gdttype write gdttype_change
+ $gm conf -font uifont
+ .tf.lbar.gdttype conf -font uifont
+ pack .tf.lbar.gdttype -side left -fill y
+
set findstring {}
- set fstring .tf.bar.findstring
+ set fstring .tf.lbar.findstring
lappend entries $fstring
- entry $fstring -width 30 -font $textfont -textvariable findstring
+ entry $fstring -width 30 -font textfont -textvariable findstring
trace add variable findstring write find_change
- pack $fstring -side left -expand 1 -fill x -in .tf.bar
set findtype Exact
- set findtypemenu [tk_optionMenu .tf.bar.findtype \
+ set findtypemenu [tk_optionMenu .tf.lbar.findtype \
findtype Exact IgnCase Regexp]
- trace add variable findtype write find_change
- .tf.bar.findtype configure -font $uifont
- .tf.bar.findtype.menu configure -font $uifont
+ trace add variable findtype write findcom_change
+ .tf.lbar.findtype configure -font uifont
+ .tf.lbar.findtype.menu configure -font uifont
set findloc "All fields"
- tk_optionMenu .tf.bar.findloc findloc "All fields" Headline \
+ tk_optionMenu .tf.lbar.findloc findloc "All fields" Headline \
Comments Author Committer
trace add variable findloc write find_change
- .tf.bar.findloc configure -font $uifont
- .tf.bar.findloc.menu configure -font $uifont
- pack .tf.bar.findloc -side right
- pack .tf.bar.findtype -side right
-
- # build up the bottom bar of upper window
- label .tf.lbar.flabel -text "Highlight: Commits " \
- -font $uifont
- pack .tf.lbar.flabel -side left -fill y
- set gdttype "touching paths:"
- set gm [tk_optionMenu .tf.lbar.gdttype gdttype "touching paths:" \
- "adding/removing string:"]
- trace add variable gdttype write hfiles_change
- $gm conf -font $uifont
- .tf.lbar.gdttype conf -font $uifont
- pack .tf.lbar.gdttype -side left -fill y
- entry .tf.lbar.fent -width 25 -font $textfont \
- -textvariable highlight_files
- trace add variable highlight_files write hfiles_change
- lappend entries .tf.lbar.fent
- pack .tf.lbar.fent -side left -fill x -expand 1
- label .tf.lbar.vlabel -text " OR in view" -font $uifont
- pack .tf.lbar.vlabel -side left -fill y
- global viewhlmenu selectedhlview
- set viewhlmenu [tk_optionMenu .tf.lbar.vhl selectedhlview None]
- $viewhlmenu entryconf None -command delvhighlight
- $viewhlmenu conf -font $uifont
- .tf.lbar.vhl conf -font $uifont
- pack .tf.lbar.vhl -side left -fill y
- label .tf.lbar.rlabel -text " OR " -font $uifont
- pack .tf.lbar.rlabel -side left -fill y
- global highlight_related
- set m [tk_optionMenu .tf.lbar.relm highlight_related None \
- "Descendent" "Not descendent" "Ancestor" "Not ancestor"]
- $m conf -font $uifont
- .tf.lbar.relm conf -font $uifont
- trace add variable highlight_related write vrel_change
- pack .tf.lbar.relm -side left -fill y
+ .tf.lbar.findloc configure -font uifont
+ .tf.lbar.findloc.menu configure -font uifont
+ pack .tf.lbar.findloc -side right
+ pack .tf.lbar.findtype -side right
+ pack $fstring -side left -expand 1 -fill x
# Finish putting the upper half of the viewer together
pack .tf.lbar -in .tf -side bottom -fill x
frame .bleft.mid
button .bleft.top.search -text "Search" -command dosearch \
- -font $uifont
+ -font uifont
pack .bleft.top.search -side left -padx 5
set sstring .bleft.top.sstring
- entry $sstring -width 20 -font $textfont -textvariable searchstring
+ entry $sstring -width 20 -font textfont -textvariable searchstring
lappend entries $sstring
trace add variable searchstring write incrsearch
pack $sstring -side left -expand 1 -fill x
- radiobutton .bleft.mid.diff -text "Diff" \
+ radiobutton .bleft.mid.diff -text "Diff" -font uifont \
-command changediffdisp -variable diffelide -value {0 0}
- radiobutton .bleft.mid.old -text "Old version" \
+ radiobutton .bleft.mid.old -text "Old version" -font uifont \
-command changediffdisp -variable diffelide -value {0 1}
- radiobutton .bleft.mid.new -text "New version" \
+ radiobutton .bleft.mid.new -text "New version" -font uifont \
-command changediffdisp -variable diffelide -value {1 0}
label .bleft.mid.labeldiffcontext -text " Lines of context: " \
- -font $uifont
+ -font uifont
pack .bleft.mid.diff .bleft.mid.old .bleft.mid.new -side left
- spinbox .bleft.mid.diffcontext -width 5 -font $textfont \
+ spinbox .bleft.mid.diffcontext -width 5 -font textfont \
-from 1 -increment 1 -to 10000000 \
-validate all -validatecommand "diffcontextvalidate %P" \
-textvariable diffcontextstring
pack .bleft.mid.labeldiffcontext .bleft.mid.diffcontext -side left
set ctext .bleft.ctext
text $ctext -background $bgcolor -foreground $fgcolor \
- -tabs "[expr {$tabstop * $charspc}]" \
- -state disabled -font $textfont \
+ -state disabled -font textfont \
-yscrollcommand scrolltext -wrap none
+ if {$have_tk85} {
+ $ctext conf -tabstyle wordprocessor
+ }
scrollbar .bleft.sb -command "$ctext yview"
pack .bleft.top -side top -fill x
pack .bleft.mid -side top -fill x
lappend fglist $ctext
$ctext tag conf comment -wrap $wrapcomment
- $ctext tag conf filesep -font [concat $textfont bold] -back "#aaaaaa"
+ $ctext tag conf filesep -font textfontbold -back "#aaaaaa"
$ctext tag conf hunksep -fore [lindex $diffcolors 2]
$ctext tag conf d0 -fore [lindex $diffcolors 0]
$ctext tag conf d1 -fore [lindex $diffcolors 1]
$ctext tag conf m15 -fore "#ff70b0"
$ctext tag conf mmax -fore darkgrey
set mergemax 16
- $ctext tag conf mresult -font [concat $textfont bold]
- $ctext tag conf msep -font [concat $textfont bold]
+ $ctext tag conf mresult -font textfontbold
+ $ctext tag conf msep -font textfontbold
$ctext tag conf found -back yellow
.pwbottom add .bleft
frame .bright.mode
radiobutton .bright.mode.patch -text "Patch" \
-command reselectline -variable cmitmode -value "patch"
- .bright.mode.patch configure -font $uifont
+ .bright.mode.patch configure -font uifont
radiobutton .bright.mode.tree -text "Tree" \
-command reselectline -variable cmitmode -value "tree"
- .bright.mode.tree configure -font $uifont
+ .bright.mode.tree configure -font uifont
grid .bright.mode.patch .bright.mode.tree -sticky ew
pack .bright.mode -side top -fill x
set cflist .bright.cfiles
- set indent [font measure $mainfont "nn"]
+ set indent [font measure mainfont "nn"]
text $cflist \
-selectbackground $selectbgcolor \
-background $bgcolor -foreground $fgcolor \
- -font $mainfont \
+ -font mainfont \
-tabs [list $indent [expr {2 * $indent}]] \
-yscrollcommand ".bright.sb set" \
-cursor [. cget -cursor] \
pack $cflist -side left -fill both -expand 1
$cflist tag configure highlight \
-background [$cflist cget -selectbackground]
- $cflist tag configure bold -font [concat $mainfont bold]
+ $cflist tag configure bold -font mainfontbold
.pwbottom add .bright
.ctop add .pwbottom
} else {
bindall <ButtonRelease-4> "allcanvs yview scroll -5 units"
bindall <ButtonRelease-5> "allcanvs yview scroll 5 units"
+ if {[tk windowingsystem] eq "aqua"} {
+ bindall <MouseWheel> {
+ set delta [expr {- (%D)}]
+ allcanvs yview scroll $delta units
+ }
+ }
}
bindall <2> "canvscan mark %W %x %y"
bindall <B2-Motion> "canvscan dragto %W %x %y"
bindkey <End> sellastline
bind . <Key-Up> "selnextline -1"
bind . <Key-Down> "selnextline 1"
- bind . <Shift-Key-Up> "next_highlight -1"
- bind . <Shift-Key-Down> "next_highlight 1"
+ bind . <Shift-Key-Up> "dofind -1 0"
+ bind . <Shift-Key-Down> "dofind 1 0"
bindkey <Key-Right> "goforw"
bindkey <Key-Left> "goback"
bind . <Key-Prior> "selnextpage -1"
bindkey b "$ctext yview scroll -1 pages"
bindkey d "$ctext yview scroll 18 units"
bindkey u "$ctext yview scroll -18 units"
- bindkey / {findnext 1}
- bindkey <Key-Return> {findnext 0}
- bindkey ? findprev
+ bindkey / {dofind 1 1}
+ bindkey <Key-Return> {dofind 1 1}
+ bindkey ? {dofind -1 1}
bindkey f nextfile
bindkey <F5> updatecommits
bind . <$M1B-q> doquit
- bind . <$M1B-f> dofind
- bind . <$M1B-g> {findnext 0}
+ bind . <$M1B-f> {dofind 1 1}
+ bind . <$M1B-g> {dofind 1 0}
bind . <$M1B-r> dosearchback
bind . <$M1B-s> dosearch
bind . <$M1B-equal> {incrfont 1}
bind . <$M1B-KP_Subtract> {incrfont -1}
wm protocol . WM_DELETE_WINDOW doquit
bind . <Button-1> "click %W"
- bind $fstring <Key-Return> dofind
+ bind $fstring <Key-Return> {dofind 1 1}
bind $sha1entry <Key-Return> gotocommit
bind $sha1entry <<PasteSelection>> clearsha1
bind $cflist <1> {sel_flist %W %x %y; break}
focus .
}
+# Adjust the progress bar for a change in requested extent or canvas size
+proc adjustprogress {} {
+ global progresscanv progressitem progresscoords
+ global fprogitem fprogcoord lastprogupdate progupdatepending
+ global rprogitem rprogcoord
+
+ set w [expr {[winfo width $progresscanv] - 4}]
+ set x0 [expr {$w * [lindex $progresscoords 0]}]
+ set x1 [expr {$w * [lindex $progresscoords 1]}]
+ set h [winfo height $progresscanv]
+ $progresscanv coords $progressitem $x0 0 $x1 $h
+ $progresscanv coords $fprogitem 0 0 [expr {$w * $fprogcoord}] $h
+ $progresscanv coords $rprogitem 0 0 [expr {$w * $rprogcoord}] $h
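+ # Redraw at most once per 100ms; otherwise schedule a single deferred
+ # update via doprogupdate.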
+ set now [clock clicks -milliseconds]
+ if {$now >= $lastprogupdate + 100} {
+ set progupdatepending 0
+ update
+ } elseif {!$progupdatepending} {
+ set progupdatepending 1
+ after [expr {$lastprogupdate + 100 - $now}] doprogupdate
+ }
+}
+
+proc doprogupdate {} {
+ global lastprogupdate progupdatepending
+
+ if {$progupdatepending} {
+ set progupdatepending 0
+ set lastprogupdate [clock clicks -milliseconds]
+ update
+ }
+}
+
proc savestuff {w} {
- global canv canv2 canv3 ctext cflist mainfont textfont uifont tabstop
+ global canv canv2 canv3 mainfont textfont uifont tabstop
global stuffsaved findmergefiles maxgraphpct
global maxwidth showneartags showlocalchanges
global viewname viewfiles viewargs viewperm nextviewnum
- global cmitmode wrapcomment datetimeformat
+ global cmitmode wrapcomment datetimeformat limitdiffs
global colors bgcolor fgcolor diffcolors diffcontext selectbgcolor
if {$stuffsaved} return
puts $f [list set showneartags $showneartags]
puts $f [list set showlocalchanges $showlocalchanges]
puts $f [list set datetimeformat $datetimeformat]
+ puts $f [list set limitdiffs $limitdiffs]
puts $f [list set bgcolor $bgcolor]
puts $f [list set fgcolor $fgcolor]
puts $f [list set colors $colors]
Use and redistribute under the terms of the GNU General Public License} \
-justify center -aspect 400 -border 2 -bg white -relief groove
pack $w.m -side top -fill x -padx 2 -pady 2
- $w.m configure -font $uifont
+ $w.m configure -font uifont
button $w.ok -text Close -command "destroy $w" -default active
pack $w.ok -side bottom
- $w.ok configure -font $uifont
+ $w.ok configure -font uifont
bind $w <Visibility> "focus $w.ok"
bind $w <Key-Escape> "destroy $w"
bind $w <Key-Return> "destroy $w"
<$M1T-Down> Scroll commit list down one line
<$M1T-PageUp> Scroll commit list up one page
<$M1T-PageDown> Scroll commit list down one page
-<Shift-Up> Move to previous highlighted line
-<Shift-Down> Move to next highlighted line
+<Shift-Up> Find backwards (upwards, later commits)
+<Shift-Down> Find forwards (downwards, earlier commits)
<Delete>, b Scroll diff view up one page
<Backspace> Scroll diff view up one page
<Space> Scroll diff view down one page
" \
-justify left -bg white -border 2 -relief groove
pack $w.m -side top -fill both -padx 2 -pady 2
- $w.m configure -font $uifont
+ $w.m configure -font uifont
button $w.ok -text Close -command "destroy $w" -default active
pack $w.ok -side bottom
- $w.ok configure -font $uifont
+ $w.ok configure -font uifont
bind $w <Visibility> "focus $w.ok"
bind $w <Key-Escape> "destroy $w"
bind $w <Key-Return> "destroy $w"
global ctext cflist cmitmode flist_menu flist_menu_file
global treediffs diffids
+ stopfinding
set l [lindex [split [$w index "@$x,$y"] "."] 0]
if {$l <= 1} return
if {$cmitmode eq "tree"} {
}
proc flist_hl {only} {
- global flist_menu_file highlight_files
+ global flist_menu_file findstring gdttype
set x [shellquote $flist_menu_file]
- if {$only || $highlight_files eq {}} {
- set highlight_files $x
+ if {$only || $findstring eq {} || $gdttype ne "touching paths:"} {
+ set findstring $x
} else {
- append highlight_files " " $x
+ append findstring " " $x
}
+ set gdttype "touching paths:"
}
# Functions for adding and removing shell-type quoting
toplevel $top
wm title $top $title
- label $top.nl -text "Name" -font $uifont
- entry $top.name -width 20 -textvariable newviewname($n) -font $uifont
+ label $top.nl -text "Name" -font uifont
+ entry $top.name -width 20 -textvariable newviewname($n) -font uifont
grid $top.nl $top.name -sticky w -pady 5
checkbutton $top.perm -text "Remember this view" -variable newviewperm($n) \
- -font $uifont
+ -font uifont
grid $top.perm - -pady 5 -sticky w
- message $top.al -aspect 1000 -font $uifont \
+ message $top.al -aspect 1000 -font uifont \
-text "Commits to include (arguments to git rev-list):"
grid $top.al - -sticky w -pady 5
entry $top.args -width 50 -textvariable newviewargs($n) \
- -background white -font $uifont
+ -background white -font uifont
grid $top.args - -sticky ew -padx 5
- message $top.l -aspect 1000 -font $uifont \
+ message $top.l -aspect 1000 -font uifont \
-text "Enter files and directories to include, one per line:"
grid $top.l - -sticky w
- text $top.t -width 40 -height 10 -background white -font $uifont
+ text $top.t -width 40 -height 10 -background white -font uifont
if {[info exists viewfiles($n)]} {
foreach f $viewfiles($n) {
$top.t insert end $f
grid $top.t - -sticky ew -padx 5
frame $top.buts
button $top.buts.ok -text "OK" -command [list newviewok $top $n] \
- -font $uifont
+ -font uifont
button $top.buts.can -text "Cancel" -command [list destroy $top] \
- -font $uifont
+ -font uifont
grid $top.buts.ok $top.buts.can
grid columnconfigure $top.buts 0 -weight 1 -uniform a
grid columnconfigure $top.buts 1 -weight 1 -uniform a
}
proc allviewmenus {n op args} {
- global viewhlmenu
+ # global viewhlmenu
doviewmenu .bar.view 5 [list showview $n] $op $args
- doviewmenu $viewhlmenu 1 [list addvhighlight $n] $op $args
+ # doviewmenu $viewhlmenu 1 [list addvhighlight $n] $op $args
}
proc newviewok {top n} {
set viewname($n) $newviewname($n)
doviewmenu .bar.view 5 [list showview $n] \
entryconf [list -label $viewname($n)]
- doviewmenu $viewhlmenu 1 [list addvhighlight $n] \
- entryconf [list -label $viewname($n) -value $viewname($n)]
+ # doviewmenu $viewhlmenu 1 [list addvhighlight $n] \
+ # entryconf [list -label $viewname($n) -value $viewname($n)]
}
if {$files ne $viewfiles($n) || $newargs ne $viewargs($n)} {
set viewfiles($n) $files
.bar.view add radiobutton -label $viewname($n) \
-command [list showview $n] -variable selectedview -value $n
- $viewhlmenu add radiobutton -label $viewname($n) \
- -command [list addvhighlight $n] -variable selectedhlview
+ #$viewhlmenu add radiobutton -label $viewname($n) \
+ # -command [list addvhighlight $n] -variable selectedhlview
}
proc flatten {var} {
proc showview {n} {
global curview viewdata viewfiles
- global displayorder parentlist rowidlist rowoffsets
+ global displayorder parentlist rowidlist rowisopt rowfinal
global colormap rowtextx commitrow nextcolor canvxmax
- global numcommits rowrangelist commitlisted idrowranges rowchk
+ global numcommits commitlisted
global selectedline currentid canv canvy0
global treediffs
global pending_select phase
- global commitidx rowlaidout rowoptim
+ global commitidx
global commfd
global selectedview selectfirst
global vparentlist vdisporder vcmitlisted
- global hlview selectedhlview
+ global hlview selectedhlview commitinterest
if {$n == $curview} return
set selid {}
set vparentlist($curview) $parentlist
set vdisporder($curview) $displayorder
set vcmitlisted($curview) $commitlisted
- if {$phase ne {}} {
- set viewdata($curview) \
- [list $phase $rowidlist $rowoffsets $rowrangelist \
- [flatten idrowranges] [flatten idinlist] \
- $rowlaidout $rowoptim $numcommits]
- } elseif {![info exists viewdata($curview)]
- || [lindex $viewdata($curview) 0] ne {}} {
+ if {$phase ne {} ||
+ ![info exists viewdata($curview)] ||
+ [lindex $viewdata($curview) 0] ne {}} {
set viewdata($curview) \
- [list {} $rowidlist $rowoffsets $rowrangelist]
+ [list $phase $rowidlist $rowisopt $rowfinal]
}
}
catch {unset treediffs}
unset hlview
set selectedhlview None
}
+ catch {unset commitinterest}
set curview $n
set selectedview $n
.bar.view entryconf Edit* -state [expr {$n == 0? "disabled": "normal"}]
.bar.view entryconf Delete* -state [expr {$n == 0? "disabled": "normal"}]
+ run refill_reflist
if {![info exists viewdata($n)]} {
if {$selid ne {}} {
set pending_select $selid
set parentlist $vparentlist($n)
set commitlisted $vcmitlisted($n)
set rowidlist [lindex $v 1]
- set rowoffsets [lindex $v 2]
- set rowrangelist [lindex $v 3]
- if {$phase eq {}} {
- set numcommits [llength $displayorder]
- catch {unset idrowranges}
- } else {
- unflatten idrowranges [lindex $v 4]
- unflatten idinlist [lindex $v 5]
- set rowlaidout [lindex $v 6]
- set rowoptim [lindex $v 7]
- set numcommits [lindex $v 8]
- catch {unset rowchk}
- }
+ set rowisopt [lindex $v 2]
+ set rowfinal [lindex $v 3]
+ set numcommits $commitidx($n)
catch {unset colormap}
catch {unset rowtextx}
} elseif {$numcommits == 0} {
show_status "No commits selected"
}
- run refill_reflist
}
# Stuff relating to the highlighting facility
}
proc unbolden {} {
- global mainfont boldrows
+ global boldrows
set stillbold {}
foreach row $boldrows {
if {![ishighlighted $row]} {
- bolden $row $mainfont
+ bolden $row mainfont
} else {
lappend stillbold $row
}
}
set hlview $n
if {$n != $curview && ![info exists viewdata($n)]} {
- set viewdata($n) [list getcommits {{}} {{}} {} {} {} 0 0 0 {}]
+ set viewdata($n) [list getcommits {{}} 0 0 0]
set vparentlist($n) {}
set vdisporder($n) {}
set vcmitlisted($n) {}
proc vhighlightmore {} {
global hlview vhl_done commitidx vhighlights
- global displayorder vdisporder curview mainfont
+ global displayorder vdisporder curview
- set font [concat $mainfont bold]
set max $commitidx($hlview)
if {$hlview == $curview} {
set disp $displayorder
set row $commitrow($curview,$id)
if {$r0 <= $row && $row <= $r1} {
if {![highlighted $row]} {
- bolden $row $font
+ bolden $row mainfontbold
}
set vhighlights($row) 1
}
}
proc askvhighlight {row id} {
- global hlview vhighlights commitrow iddrawn mainfont
+ global hlview vhighlights commitrow iddrawn
if {[info exists commitrow($hlview,$id)]} {
if {[info exists iddrawn($id)] && ![ishighlighted $row]} {
- bolden $row [concat $mainfont bold]
+ bolden $row mainfontbold
}
set vhighlights($row) 1
} else {
}
}
-proc hfiles_change {name ix op} {
+proc hfiles_change {} {
global highlight_files filehighlight fhighlights fh_serial
- global mainfont highlight_paths
+ global highlight_paths gdttype
if {[info exists filehighlight]} {
# delete previous highlights
}
}
+proc gdttype_change {name ix op} {
+ global gdttype highlight_files findstring findpattern
+
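+ # The search-type menu changed: route the current string either to the
+ # commit-info find machinery or to the file-highlight machinery.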
+ stopfinding
+ if {$findstring ne {}} {
+ if {$gdttype eq "containing:"} {
+ if {$highlight_files ne {}} {
+ set highlight_files {}
+ hfiles_change
+ }
+ findcom_change
+ } else {
+ if {$findpattern ne {}} {
+ set findpattern {}
+ findcom_change
+ }
+ set highlight_files $findstring
+ hfiles_change
+ }
+ drawvisible
+ }
+ # enable/disable findtype/findloc menus too
+}
+
+proc find_change {name ix op} {
+ global gdttype findstring highlight_files
+
+ stopfinding
+ if {$gdttype eq "containing:"} {
+ findcom_change
+ } else {
+ if {$highlight_files ne $findstring} {
+ set highlight_files $findstring
+ hfiles_change
+ }
+ }
+ drawvisible
+}
+
+proc findcom_change args {
+ global nhighlights boldnamerows
+ global findpattern findtype findstring gdttype
+
+ stopfinding
+ # delete previous highlights, if any
+ foreach row $boldnamerows {
+ bolden_name $row mainfont
+ }
+ set boldnamerows {}
+ catch {unset nhighlights}
+ unbolden
+ unmarkmatches
+ if {$gdttype ne "containing:" || $findstring eq {}} {
+ set findpattern {}
+ } elseif {$findtype eq "Regexp"} {
+ set findpattern $findstring
+ } else {
+ set e [string map {"*" "\\*" "?" "\\?" "\[" "\\\[" "\\" "\\\\"} \
+ $findstring]
+ set findpattern "*$e*"
+ }
+}
+
proc makepatterns {l} {
set ret {}
foreach e $l {
set highlight_paths [makepatterns $paths]
highlight_filelist
set gdtargs [concat -- $paths]
- } else {
+ } elseif {$gdttype eq "adding/removing string:"} {
set gdtargs [list "-S$highlight_files"]
+ } else {
+ # must be "containing:", i.e. we're searching commit info
+ return
}
set cmd [concat | git diff-tree -r -s --stdin $gdtargs]
set filehighlight [open $cmd r+]
}
proc readfhighlight {} {
- global filehighlight fhighlights commitrow curview mainfont iddrawn
- global fhl_list
+ global filehighlight fhighlights commitrow curview iddrawn
+ global fhl_list find_dirn
if {![info exists filehighlight]} {
return 0
if {![info exists commitrow($curview,$line)]} continue
set row $commitrow($curview,$line)
if {[info exists iddrawn($line)] && ![ishighlighted $row]} {
- bolden $row [concat $mainfont bold]
+ bolden $row mainfontbold
}
set fhighlights($row) 1
}
unset filehighlight
return 0
}
- next_hlcont
- return 1
-}
-
-proc find_change {name ix op} {
- global nhighlights mainfont boldnamerows
- global findstring findpattern findtype
-
- # delete previous highlights, if any
- foreach row $boldnamerows {
- bolden_name $row $mainfont
- }
- set boldnamerows {}
- catch {unset nhighlights}
- unbolden
- unmarkmatches
- if {$findtype ne "Regexp"} {
- set e [string map {"*" "\\*" "?" "\\?" "\[" "\\\[" "\\" "\\\\"} \
- $findstring]
- set findpattern "*$e*"
+ if {[info exists find_dirn]} {
+ run findmore
}
- drawvisible
+ return 1
}
proc doesmatch {f} {
- global findtype findstring findpattern
+ global findtype findpattern
if {$findtype eq "Regexp"} {
- return [regexp $findstring $f]
+ return [regexp $findpattern $f]
} elseif {$findtype eq "IgnCase"} {
return [string match -nocase $findpattern $f]
} else {
}
proc askfindhighlight {row id} {
- global nhighlights commitinfo iddrawn mainfont
+ global nhighlights commitinfo iddrawn
global findloc
global markingmatches
}
}
if {$isbold && [info exists iddrawn($id)]} {
- set f [concat $mainfont bold]
if {![ishighlighted $row]} {
- bolden $row $f
+ bolden $row mainfontbold
if {$isbold > 1} {
- bolden_name $row $f
+ bolden_name $row mainfontbold
}
}
if {$markingmatches} {
}
proc askrelhighlight {row id} {
- global descendent highlight_related iddrawn mainfont rhighlights
+ global descendent highlight_related iddrawn rhighlights
global selectedline ancestor
if {![info exists selectedline]} return
}
if {[info exists iddrawn($id)]} {
if {$isbold && ![ishighlighted $row]} {
- bolden $row [concat $mainfont bold]
+ bolden $row mainfontbold
}
}
set rhighlights($row) $isbold
}
-proc next_hlcont {} {
- global fhl_row fhl_dirn displayorder numcommits
- global vhighlights fhighlights nhighlights rhighlights
- global hlview filehighlight findstring highlight_related
-
- if {![info exists fhl_dirn] || $fhl_dirn == 0} return
- set row $fhl_row
- while {1} {
- if {$row < 0 || $row >= $numcommits} {
- bell
- set fhl_dirn 0
- return
- }
- set id [lindex $displayorder $row]
- if {[info exists hlview]} {
- if {![info exists vhighlights($row)]} {
- askvhighlight $row $id
- }
- if {$vhighlights($row) > 0} break
- }
- if {$findstring ne {}} {
- if {![info exists nhighlights($row)]} {
- askfindhighlight $row $id
- }
- if {$nhighlights($row) > 0} break
- }
- if {$highlight_related ne "None"} {
- if {![info exists rhighlights($row)]} {
- askrelhighlight $row $id
- }
- if {$rhighlights($row) > 0} break
- }
- if {[info exists filehighlight]} {
- if {![info exists fhighlights($row)]} {
- # ask for a few more while we're at it...
- set r $row
- for {set n 0} {$n < 100} {incr n} {
- if {![info exists fhighlights($r)]} {
- askfilehighlight $r [lindex $displayorder $r]
- }
- incr r $fhl_dirn
- if {$r < 0 || $r >= $numcommits} break
- }
- flushhighlights
- }
- if {$fhighlights($row) < 0} {
- set fhl_row $row
- return
- }
- if {$fhighlights($row) > 0} break
- }
- incr row $fhl_dirn
- }
- set fhl_dirn 0
- selectline $row 1
-}
-
-proc next_highlight {dirn} {
- global selectedline fhl_row fhl_dirn
- global hlview filehighlight findstring highlight_related
-
- if {![info exists selectedline]} return
- if {!([info exists hlview] || $findstring ne {} ||
- $highlight_related ne "None" || [info exists filehighlight])} return
- set fhl_row [expr {$selectedline + $dirn}]
- set fhl_dirn $dirn
- next_hlcont
-}
-
-proc cancel_next_highlight {} {
- global fhl_dirn
-
- set fhl_dirn 0
-}
-
# Graph layout functions
proc shortids {ids} {
return $res
}
-proc incrange {l x o} {
- set n [llength $l]
- while {$x < $n} {
- set e [lindex $l $x]
- if {$e ne {}} {
- lset l $x [expr {$e + $o}]
- }
- incr x
- }
- return $l
-}
-
proc ntimes {n o} {
set ret {}
- for {} {$n > 0} {incr n -1} {
- lappend ret $o
- }
- return $ret
-}
-
-proc usedinrange {id l1 l2} {
- global children commitrow curview
-
- if {[info exists commitrow($curview,$id)]} {
- set r $commitrow($curview,$id)
- if {$l1 <= $r && $r <= $l2} {
- return [expr {$r - $l1 + 1}]
+ set o [list $o]
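+ # Repeated doubling builds the list in O(log n) concats: append the
+ # current chunk whenever the corresponding bit of $n is set.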
+ for {set mask 1} {$mask <= $n} {incr mask $mask} {
+ if {($n & $mask) != 0} {
+ set ret [concat $ret $o]
}
+ set o [concat $o $o]
}
- set kids $children($curview,$id)
- foreach c $kids {
- set r $commitrow($curview,$c)
- if {$l1 <= $r && $r <= $l2} {
- return [expr {$r - $l1 + 1}]
- }
- }
- return 0
+ return $ret
}
-proc sanity {row {full 0}} {
- global rowidlist rowoffsets
+# Work out where id should go in idlist so that order-token
+# values increase from left to right
+proc idcol {idlist id {i 0}} {
+ global ordertok curview
- set col -1
- set ids [lindex $rowidlist $row]
- foreach id $ids {
- incr col
- if {$id eq {}} continue
- if {$col < [llength $ids] - 1 &&
- [lsearch -exact -start [expr {$col+1}] $ids $id] >= 0} {
- puts "oops: [shortids $id] repeated in row $row col $col: {[shortids [lindex $rowidlist $row]]}"
- }
- set o [lindex $rowoffsets $row $col]
- set y $row
- set x $col
- while {$o ne {}} {
- incr y -1
- incr x $o
- if {[lindex $rowidlist $y $x] != $id} {
- puts "oops: rowoffsets wrong at row [expr {$y+1}] col [expr {$x-$o}]"
- puts " id=[shortids $id] check started at row $row"
- for {set i $row} {$i >= $y} {incr i -1} {
- puts " row $i ids={[shortids [lindex $rowidlist $i]]} offs={[lindex $rowoffsets $i]}"
- }
- break
- }
- if {!$full} break
- set o [lindex $rowoffsets $y $x]
+ set t $ordertok($curview,$id)
+ if {$i >= [llength $idlist] ||
+ $t < $ordertok($curview,[lindex $idlist $i])} {
+ if {$i > [llength $idlist]} {
+ set i [llength $idlist]
}
- }
-}
-
-proc makeuparrow {oid x y z} {
- global rowidlist rowoffsets uparrowlen idrowranges displayorder
-
- for {set i 1} {$i < $uparrowlen && $y > 1} {incr i} {
- incr y -1
- incr x $z
- set off0 [lindex $rowoffsets $y]
- for {set x0 $x} {1} {incr x0} {
- if {$x0 >= [llength $off0]} {
- set x0 [llength [lindex $rowoffsets [expr {$y-1}]]]
- break
- }
- set z [lindex $off0 $x0]
- if {$z ne {}} {
- incr x0 $z
- break
- }
+ while {[incr i -1] >= 0 &&
+ $t < $ordertok($curview,[lindex $idlist $i])} {}
+ incr i
+ } else {
+ if {$t > $ordertok($curview,[lindex $idlist $i])} {
+ while {[incr i] < [llength $idlist] &&
+ $t >= $ordertok($curview,[lindex $idlist $i])} {}
}
- set z [expr {$x0 - $x}]
- lset rowidlist $y [linsert [lindex $rowidlist $y] $x $oid]
- lset rowoffsets $y [linsert [lindex $rowoffsets $y] $x $z]
}
- set tmp [lreplace [lindex $rowoffsets $y] $x $x {}]
- lset rowoffsets $y [incrange $tmp [expr {$x+1}] -1]
- lappend idrowranges($oid) [lindex $displayorder $y]
+ return $i
}
proc initlayout {} {
- global rowidlist rowoffsets displayorder commitlisted
- global rowlaidout rowoptim
- global idinlist rowchk rowrangelist idrowranges
+ global rowidlist rowisopt rowfinal displayorder commitlisted
global numcommits canvxmax canv
global nextcolor
global parentlist
set displayorder {}
set commitlisted {}
set parentlist {}
- set rowrangelist {}
set nextcolor 0
- set rowidlist {{}}
- set rowoffsets {{}}
- catch {unset idinlist}
- catch {unset rowchk}
- set rowlaidout 0
- set rowoptim 0
+ set rowidlist {}
+ set rowisopt {}
+ set rowfinal {}
set canvxmax [$canv cget -width]
catch {unset colormap}
catch {unset rowtextx}
- catch {unset idrowranges}
set selectfirst 1
}
return [list $r0 $r1]
}
-proc layoutmore {tmax allread} {
- global rowlaidout rowoptim commitidx numcommits optim_delay
- global uparrowlen curview rowidlist idinlist
+proc layoutmore {} {
+ global commitidx viewcomplete numcommits
+ global uparrowlen downarrowlen mingaplen curview
- set showlast 0
- set showdelay $optim_delay
- set optdelay [expr {$uparrowlen + 1}]
- while {1} {
- if {$rowoptim - $showdelay > $numcommits} {
- showstuff [expr {$rowoptim - $showdelay}] $showlast
- } elseif {$rowlaidout - $optdelay > $rowoptim} {
- set nr [expr {$rowlaidout - $optdelay - $rowoptim}]
- if {$nr > 100} {
- set nr 100
- }
- optimize_rows $rowoptim 0 [expr {$rowoptim + $nr}]
- incr rowoptim $nr
- } elseif {$commitidx($curview) > $rowlaidout} {
- set nr [expr {$commitidx($curview) - $rowlaidout}]
- # may need to increase this threshold if uparrowlen or
- # mingaplen are increased...
- if {$nr > 150} {
- set nr 150
- }
- set row $rowlaidout
- set rowlaidout [layoutrows $row [expr {$row + $nr}] $allread]
- if {$rowlaidout == $row} {
- return 0
- }
- } elseif {$allread} {
- set optdelay 0
- set nrows $commitidx($curview)
- if {[lindex $rowidlist $nrows] ne {} ||
- [array names idinlist] ne {}} {
- layouttail
- set rowlaidout $commitidx($curview)
- } elseif {$rowoptim == $nrows} {
- set showdelay 0
- set showlast 1
- if {$numcommits == $nrows} {
- return 0
- }
- }
- } else {
- return 0
- }
- if {$tmax ne {} && [clock clicks -milliseconds] >= $tmax} {
- return 1
- }
+ set show $commitidx($curview)
+ if {$show > $numcommits || $viewcomplete($curview)} {
+ showstuff $show $viewcomplete($curview)
}
}
proc showstuff {canshow last} {
global numcommits commitrow pending_select selectedline curview
- global lookingforhead mainheadid displayorder selectfirst
+ global mainheadid displayorder selectfirst
global lastscrollset commitinterest
if {$numcommits == 0} {
set phase "incrdraw"
allcanvs delete all
}
- for {set l $numcommits} {$l < $canshow} {incr l} {
- set id [lindex $displayorder $l]
- if {[info exists commitinterest($id)]} {
- foreach script $commitinterest($id) {
- eval [string map [list "%I" $id] $script]
- }
- unset commitinterest($id)
- }
- }
set r0 $numcommits
set prev $numcommits
set numcommits $canshow
set selectfirst 0
}
}
- if {$lookingforhead && [info exists commitrow($curview,$mainheadid)]
- && ($last || $commitrow($curview,$mainheadid) < $numcommits - 1)} {
- set lookingforhead 0
- dodiffindex
- }
}
proc doshowlocalchanges {} {
- global lookingforhead curview mainheadid phase commitrow
+ global curview mainheadid phase commitrow
if {[info exists commitrow($curview,$mainheadid)] &&
($phase eq {} || $commitrow($curview,$mainheadid) < $numcommits - 1)} {
dodiffindex
} elseif {$phase ne {}} {
- set lookingforhead 1
+ lappend commitinterest($mainheadid) {}
}
}
proc dohidelocalchanges {} {
- global lookingforhead localfrow localirow lserial
+ global localfrow localirow lserial
- set lookingforhead 0
if {$localfrow >= 0} {
removerow $localfrow
set localfrow -1
# spawn off a process to do git diff-index --cached HEAD
proc dodiffindex {} {
- global localirow localfrow lserial
+ global localirow localfrow lserial showlocalchanges
+ if {!$showlocalchanges} return
incr lserial
set localfrow -1
set localirow -1
return 0
}
-proc layoutrows {row endrow last} {
- global rowidlist rowoffsets displayorder
- global uparrowlen downarrowlen maxwidth mingaplen
- global children parentlist
- global idrowranges
- global commitidx curview
- global idinlist rowchk rowrangelist
+proc nextuse {id row} {
+ global commitrow curview children
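+ # Return the first row after $row where $id reappears (one of its
+ # children's rows, or its own row), or -1 if that row isn't known yet.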
- set idlist [lindex $rowidlist $row]
- set offs [lindex $rowoffsets $row]
- while {$row < $endrow} {
- set id [lindex $displayorder $row]
- set nev [expr {[llength $idlist] - $maxwidth + 1}]
- foreach p [lindex $parentlist $row] {
- if {![info exists idinlist($p)] || !$idinlist($p)} {
- incr nev
- }
- }
- if {$nev > 0} {
- if {!$last &&
- $row + $uparrowlen + $mingaplen >= $commitidx($curview)} break
- for {set x [llength $idlist]} {[incr x -1] >= 0} {} {
- set i [lindex $idlist $x]
- if {![info exists rowchk($i)] || $row >= $rowchk($i)} {
- set r [usedinrange $i [expr {$row - $downarrowlen}] \
- [expr {$row + $uparrowlen + $mingaplen}]]
- if {$r == 0} {
- set idlist [lreplace $idlist $x $x]
- set offs [lreplace $offs $x $x]
- set offs [incrange $offs $x 1]
- set idinlist($i) 0
- set rm1 [expr {$row - 1}]
- lappend idrowranges($i) [lindex $displayorder $rm1]
- if {[incr nev -1] <= 0} break
- continue
- }
- set rowchk($i) [expr {$row + $r}]
- }
+ if {[info exists children($curview,$id)]} {
+ foreach kid $children($curview,$id) {
+ if {![info exists commitrow($curview,$kid)]} {
+ return -1
+ }
+ if {$commitrow($curview,$kid) > $row} {
+ return $commitrow($curview,$kid)
}
- lset rowidlist $row $idlist
- lset rowoffsets $row $offs
}
- set oldolds {}
- set newolds {}
- foreach p [lindex $parentlist $row] {
- if {![info exists idinlist($p)]} {
- lappend newolds $p
- } elseif {!$idinlist($p)} {
- lappend oldolds $p
+ }
+ if {[info exists commitrow($curview,$id)]} {
+ return $commitrow($curview,$id)
+ }
+ return -1
+}
+
+proc prevuse {id row} {
+ global commitrow curview children
+
+ set ret -1
+ if {[info exists children($curview,$id)]} {
+ foreach kid $children($curview,$id) {
+ if {![info exists commitrow($curview,$kid)]} break
+ if {$commitrow($curview,$kid) < $row} {
+ set ret $commitrow($curview,$kid)
}
- set idinlist($p) 1
}
- set col [lsearch -exact $idlist $id]
- if {$col < 0} {
- set col [llength $idlist]
- lappend idlist $id
- lset rowidlist $row $idlist
- set z {}
- if {$children($curview,$id) ne {}} {
- set z [expr {[llength [lindex $rowidlist [expr {$row-1}]]] - $col}]
- unset idinlist($id)
- }
- lappend offs $z
- lset rowoffsets $row $offs
- if {$z ne {}} {
- makeuparrow $id $col $row $z
+ }
+ return $ret
+}
+
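+# Compute from scratch the list of ids occupying columns on row $row:
+# the commit itself plus the lines passing through from nearby rows,
+# sorted by their order tokens.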
+proc make_idlist {row} {
+ global displayorder parentlist uparrowlen downarrowlen mingaplen
+ global commitidx curview ordertok children commitrow
+
+ set r [expr {$row - $mingaplen - $downarrowlen - 1}]
+ if {$r < 0} {
+ set r 0
+ }
+ set ra [expr {$row - $downarrowlen}]
+ if {$ra < 0} {
+ set ra 0
+ }
+ set rb [expr {$row + $uparrowlen}]
+ if {$rb > $commitidx($curview)} {
+ set rb $commitidx($curview)
+ }
+ set ids {}
+ for {} {$r < $ra} {incr r} {
+ set nextid [lindex $displayorder [expr {$r + 1}]]
+ foreach p [lindex $parentlist $r] {
+ if {$p eq $nextid} continue
+ set rn [nextuse $p $r]
+ if {$rn >= $row &&
+ $rn <= $r + $downarrowlen + $mingaplen + $uparrowlen} {
+ lappend ids [list $ordertok($curview,$p) $p]
}
- } else {
- unset idinlist($id)
- }
- set ranges {}
- if {[info exists idrowranges($id)]} {
- set ranges $idrowranges($id)
- lappend ranges $id
- unset idrowranges($id)
- }
- lappend rowrangelist $ranges
- incr row
- set offs [ntimes [llength $idlist] 0]
- set l [llength $newolds]
- set idlist [eval lreplace \$idlist $col $col $newolds]
- set o 0
- if {$l != 1} {
- set offs [lrange $offs 0 [expr {$col - 1}]]
- foreach x $newolds {
- lappend offs {}
- incr o -1
- }
- incr o
- set tmp [expr {[llength $idlist] - [llength $offs]}]
- if {$tmp > 0} {
- set offs [concat $offs [ntimes $tmp $o]]
+ }
+ }
+ for {} {$r < $row} {incr r} {
+ set nextid [lindex $displayorder [expr {$r + 1}]]
+ foreach p [lindex $parentlist $r] {
+ if {$p eq $nextid} continue
+ set rn [nextuse $p $r]
+ if {$rn < 0 || $rn >= $row} {
+ lappend ids [list $ordertok($curview,$p) $p]
}
- } else {
- lset offs $col {}
}
- foreach i $newolds {
- set idrowranges($i) $id
+ }
+ set id [lindex $displayorder $row]
+ lappend ids [list $ordertok($curview,$id) $id]
+ while {$r < $rb} {
+ foreach p [lindex $parentlist $r] {
+ set firstkid [lindex $children($curview,$p) 0]
+ if {$commitrow($curview,$firstkid) < $row} {
+ lappend ids [list $ordertok($curview,$p) $p]
+ }
}
- incr col $l
- foreach oid $oldolds {
- set idlist [linsert $idlist $col $oid]
- set offs [linsert $offs $col $o]
- makeuparrow $oid $col $row $o
- incr col
+ incr r
+ set id [lindex $displayorder $r]
+ if {$id ne {}} {
+ set firstkid [lindex $children($curview,$id) 0]
+ if {$firstkid ne {} && $commitrow($curview,$firstkid) < $row} {
+ lappend ids [list $ordertok($curview,$id) $id]
+ }
}
- lappend rowidlist $idlist
- lappend rowoffsets $offs
}
- return $row
+ set idlist {}
+ foreach idx [lsort -unique $ids] {
+ lappend idlist [lindex $idx 1]
+ }
+ return $idlist
+}
+
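+# Compare two row id lists, ignoring empty (padding) entries.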
+proc rowsequal {a b} {
+ while {[set i [lsearch -exact $a {}]] >= 0} {
+ set a [lreplace $a $i $i]
+ }
+ while {[set i [lsearch -exact $b {}]] >= 0} {
+ set b [lreplace $b $i $i]
+ }
+ return [expr {$a eq $b}]
}
-proc addextraid {id row} {
- global displayorder commitrow commitinfo
- global commitidx commitlisted
- global parentlist children curview
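+# Insert $id into the id lists of the rows between its previous use
+# and $row, so that a line can be drawn up towards an earlier child.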
+proc makeupline {id row rend col} {
+ global rowidlist uparrowlen downarrowlen mingaplen
- incr commitidx($curview)
- lappend displayorder $id
- lappend commitlisted 0
- lappend parentlist {}
- set commitrow($curview,$id) $row
- readcommit $id
- if {![info exists commitinfo($id)]} {
- set commitinfo($id) {"No commit information available"}
+ for {set r $rend} {1} {set r $rstart} {
+ set rstart [prevuse $id $r]
+ if {$rstart < 0} return
+ if {$rstart < $row} break
}
- if {![info exists children($curview,$id)]} {
- set children($curview,$id) {}
+ if {$rstart + $uparrowlen + $mingaplen + $downarrowlen < $rend} {
+ set rstart [expr {$rend - $uparrowlen - 1}]
+ }
+ for {set r $rstart} {[incr r] <= $row} {} {
+ set idlist [lindex $rowidlist $r]
+ if {$idlist ne {} && [lsearch -exact $idlist $id] < 0} {
+ set col [idcol $idlist $id $col]
+ lset rowidlist $r [linsert $idlist $col $id]
+ changedrow $r
+ }
}
}
-proc layouttail {} {
- global rowidlist rowoffsets idinlist commitidx curview
- global idrowranges rowrangelist
+proc layoutrows {row endrow} {
+ global rowidlist rowisopt rowfinal displayorder
+ global uparrowlen downarrowlen maxwidth mingaplen
+ global children parentlist
+ global commitidx viewcomplete curview commitrow
- set row $commitidx($curview)
- set idlist [lindex $rowidlist $row]
- while {$idlist ne {}} {
- set col [expr {[llength $idlist] - 1}]
- set id [lindex $idlist $col]
- addextraid $id $row
- catch {unset idinlist($id)}
- lappend idrowranges($id) $id
- lappend rowrangelist $idrowranges($id)
- unset idrowranges($id)
- incr row
- set offs [ntimes $col 0]
- set idlist [lreplace $idlist $col $col]
- lappend rowidlist $idlist
- lappend rowoffsets $offs
- }
-
- foreach id [array names idinlist] {
- unset idinlist($id)
- addextraid $id $row
- lset rowidlist $row [list $id]
- lset rowoffsets $row 0
- makeuparrow $id 0 $row 0
- lappend idrowranges($id) $id
- lappend rowrangelist $idrowranges($id)
- unset idrowranges($id)
- incr row
- lappend rowidlist {}
- lappend rowoffsets {}
+ set idlist {}
+ if {$row > 0} {
+ set rm1 [expr {$row - 1}]
+ foreach id [lindex $rowidlist $rm1] {
+ if {$id ne {}} {
+ lappend idlist $id
+ }
+ }
+ set final [lindex $rowfinal $rm1]
+ }
+ for {} {$row < $endrow} {incr row} {
+ set rm1 [expr {$row - 1}]
+ if {$rm1 < 0 || $idlist eq {}} {
+ set idlist [make_idlist $row]
+ set final 1
+ } else {
+ set id [lindex $displayorder $rm1]
+ set col [lsearch -exact $idlist $id]
+ set idlist [lreplace $idlist $col $col]
+ foreach p [lindex $parentlist $rm1] {
+ if {[lsearch -exact $idlist $p] < 0} {
+ set col [idcol $idlist $p $col]
+ set idlist [linsert $idlist $col $p]
+ # if not the first child, we have to insert a line going up
+ if {$id ne [lindex $children($curview,$p) 0]} {
+ makeupline $p $rm1 $row $col
+ }
+ }
+ }
+ set id [lindex $displayorder $row]
+ if {$row > $downarrowlen} {
+ set termrow [expr {$row - $downarrowlen - 1}]
+ foreach p [lindex $parentlist $termrow] {
+ set i [lsearch -exact $idlist $p]
+ if {$i < 0} continue
+ set nr [nextuse $p $termrow]
+ if {$nr < 0 || $nr >= $row + $mingaplen + $uparrowlen} {
+ set idlist [lreplace $idlist $i $i]
+ }
+ }
+ }
+ set col [lsearch -exact $idlist $id]
+ if {$col < 0} {
+ set col [idcol $idlist $id]
+ set idlist [linsert $idlist $col $id]
+ if {$children($curview,$id) ne {}} {
+ makeupline $id $rm1 $row $col
+ }
+ }
+ set r [expr {$row + $uparrowlen - 1}]
+ if {$r < $commitidx($curview)} {
+ set x $col
+ foreach p [lindex $parentlist $r] {
+ if {[lsearch -exact $idlist $p] >= 0} continue
+ set fk [lindex $children($curview,$p) 0]
+ if {$commitrow($curview,$fk) < $row} {
+ set x [idcol $idlist $p $x]
+ set idlist [linsert $idlist $x $p]
+ }
+ }
+ if {[incr r] < $commitidx($curview)} {
+ set p [lindex $displayorder $r]
+ if {[lsearch -exact $idlist $p] < 0} {
+ set fk [lindex $children($curview,$p) 0]
+ if {$fk ne {} && $commitrow($curview,$fk) < $row} {
+ set x [idcol $idlist $p $x]
+ set idlist [linsert $idlist $x $p]
+ }
+ }
+ }
+ }
+ }
+ if {$final && !$viewcomplete($curview) &&
+ $row + $uparrowlen + $mingaplen + $downarrowlen
+ >= $commitidx($curview)} {
+ set final 0
+ }
+ set l [llength $rowidlist]
+ if {$row == $l} {
+ lappend rowidlist $idlist
+ lappend rowisopt 0
+ lappend rowfinal $final
+ } elseif {$row < $l} {
+ if {![rowsequal $idlist [lindex $rowidlist $row]]} {
+ lset rowidlist $row $idlist
+ changedrow $row
+ }
+ lset rowfinal $row $final
+ } else {
+ set pad [ntimes [expr {$row - $l}] {}]
+ set rowidlist [concat $rowidlist $pad]
+ lappend rowidlist $idlist
+ set rowfinal [concat $rowfinal $pad]
+ lappend rowfinal $final
+ set rowisopt [concat $rowisopt [ntimes [expr {$row - $l + 1}] 0]]
+ }
+ }
+ return $row
+}
+
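+# Record that row $row has changed: mark it and the two rows below it
+# as unoptimized, and request a redisplay if the row was already drawn.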
+proc changedrow {row} {
+ global displayorder iddrawn rowisopt need_redisplay
+
+ set l [llength $rowisopt]
+ if {$row < $l} {
+ lset rowisopt $row 0
+ if {$row + 1 < $l} {
+ lset rowisopt [expr {$row + 1}] 0
+ if {$row + 2 < $l} {
+ lset rowisopt [expr {$row + 2}] 0
+ }
+ }
+ }
+ set id [lindex $displayorder $row]
+ if {[info exists iddrawn($id)]} {
+ set need_redisplay 1
}
}
proc insert_pad {row col npad} {
- global rowidlist rowoffsets
+ global rowidlist
set pad [ntimes $npad {}]
- lset rowidlist $row [eval linsert [list [lindex $rowidlist $row]] $col $pad]
- set tmp [eval linsert [list [lindex $rowoffsets $row]] $col $pad]
- lset rowoffsets $row [incrange $tmp [expr {$col + $npad}] [expr {-$npad}]]
+ set idlist [lindex $rowidlist $row]
+ set bef [lrange $idlist 0 [expr {$col - 1}]]
+ set aft [lrange $idlist $col end]
+ set i [lsearch -exact $aft {}]
+ if {$i > 0} {
+ set aft [lreplace $aft $i $i]
+ }
+ lset rowidlist $row [concat $bef $pad $aft]
+ changedrow $row
}
proc optimize_rows {row col endrow} {
- global rowidlist rowoffsets displayorder
+ global rowidlist rowisopt displayorder curview children
- for {} {$row < $endrow} {incr row} {
- set idlist [lindex $rowidlist $row]
- set offs [lindex $rowoffsets $row]
+ if {$row < 1} {
+ set row 1
+ }
+ for {} {$row < $endrow} {incr row; set col 0} {
+ if {[lindex $rowisopt $row]} continue
set haspad 0
- for {} {$col < [llength $offs]} {incr col} {
- if {[lindex $idlist $col] eq {}} {
+ set y0 [expr {$row - 1}]
+ set ym [expr {$row - 2}]
+ set idlist [lindex $rowidlist $row]
+ set previdlist [lindex $rowidlist $y0]
+ if {$idlist eq {} || $previdlist eq {}} continue
+ if {$ym >= 0} {
+ set pprevidlist [lindex $rowidlist $ym]
+ if {$pprevidlist eq {}} continue
+ } else {
+ set pprevidlist {}
+ }
+ set x0 -1
+ set xm -1
+ for {} {$col < [llength $idlist]} {incr col} {
+ set id [lindex $idlist $col]
+ if {[lindex $previdlist $col] eq $id} continue
+ if {$id eq {}} {
set haspad 1
continue
}
- set z [lindex $offs $col]
- if {$z eq {}} continue
+ set x0 [lsearch -exact $previdlist $id]
+ if {$x0 < 0} continue
+ set z [expr {$x0 - $col}]
set isarrow 0
- set x0 [expr {$col + $z}]
- set y0 [expr {$row - 1}]
- set z0 [lindex $rowoffsets $y0 $x0]
+ set z0 {}
+ if {$ym >= 0} {
+ set xm [lsearch -exact $pprevidlist $id]
+ if {$xm >= 0} {
+ set z0 [expr {$xm - $x0}]
+ }
+ }
if {$z0 eq {}} {
- set id [lindex $idlist $col]
- set ranges [rowranges $id]
- if {$ranges ne {} && $y0 > [lindex $ranges 0]} {
+ # if the commit on row y0 is the first child of $id then it's not an arrow
+ if {[lindex $children($curview,$id) 0] ne
+ [lindex $displayorder $y0]} {
set isarrow 1
}
}
+ if {!$isarrow && $id ne [lindex $displayorder $row] &&
+ [lsearch -exact [lindex $rowidlist [expr {$row+1}]] $id] < 0} {
+ set isarrow 1
+ }
# Looking at lines from this row to the previous row,
# make them go straight up if they end in an arrow on
# the previous row; otherwise make them go straight up
# Line currently goes left too much;
# insert pads in the previous row, then optimize it
set npad [expr {-1 - $z + $isarrow}]
- set offs [incrange $offs $col $npad]
insert_pad $y0 $x0 $npad
if {$y0 > 0} {
optimize_rows $y0 $x0 $row
}
- set z [lindex $offs $col]
- set x0 [expr {$col + $z}]
- set z0 [lindex $rowoffsets $y0 $x0]
+ set previdlist [lindex $rowidlist $y0]
+ set x0 [lsearch -exact $previdlist $id]
+ set z [expr {$x0 - $col}]
+ if {$z0 ne {}} {
+ set pprevidlist [lindex $rowidlist $ym]
+ set xm [lsearch -exact $pprevidlist $id]
+ set z0 [expr {$xm - $x0}]
+ }
} elseif {$z > 1 || ($z > 0 && $isarrow)} {
# Line currently goes right too much;
- # insert pads in this line and adjust the next's rowoffsets
+ # insert pads in this line
set npad [expr {$z - 1 + $isarrow}]
- set y1 [expr {$row + 1}]
- set offs2 [lindex $rowoffsets $y1]
- set x1 -1
- foreach z $offs2 {
- incr x1
- if {$z eq {} || $x1 + $z < $col} continue
- if {$x1 + $z > $col} {
- incr npad
- }
- lset rowoffsets $y1 [incrange $offs2 $x1 $npad]
- break
- }
- set pad [ntimes $npad {}]
- set idlist [eval linsert \$idlist $col $pad]
- set tmp [eval linsert \$offs $col $pad]
+ insert_pad $row $col $npad
+ set idlist [lindex $rowidlist $row]
incr col $npad
- set offs [incrange $tmp $col [expr {-$npad}]]
- set z [lindex $offs $col]
+ set z [expr {$x0 - $col}]
set haspad 1
}
- if {$z0 eq {} && !$isarrow} {
+ if {$z0 eq {} && !$isarrow && $ym >= 0} {
# this line links to its first child on row $row-2
- set rm2 [expr {$row - 2}]
- set id [lindex $displayorder $rm2]
- set xc [lsearch -exact [lindex $rowidlist $rm2] $id]
+ set id [lindex $displayorder $ym]
+ set xc [lsearch -exact $pprevidlist $id]
if {$xc >= 0} {
set z0 [expr {$xc - $x0}]
}
# avoid lines jigging left then immediately right
if {$z0 ne {} && $z < 0 && $z0 > 0} {
insert_pad $y0 $x0 1
- set offs [incrange $offs $col 1]
- optimize_rows $y0 [expr {$x0 + 1}] $row
+ incr x0
+ optimize_rows $y0 $x0 $row
+ set previdlist [lindex $rowidlist $y0]
}
}
if {!$haspad} {
- set o {}
# Find the first column that doesn't have a line going right
for {set col [llength $idlist]} {[incr col -1] >= 0} {} {
- set o [lindex $offs $col]
- if {$o eq {}} {
+ set id [lindex $idlist $col]
+ if {$id eq {}} break
+ set x0 [lsearch -exact $previdlist $id]
+ if {$x0 < 0} {
# check if this is the link to the first child
- set id [lindex $idlist $col]
- set ranges [rowranges $id]
- if {$ranges ne {} && $row == [lindex $ranges 0]} {
+ set kid [lindex $displayorder $y0]
+ if {[lindex $children($curview,$id) 0] eq $kid} {
# it is, work out offset to child
- set y0 [expr {$row - 1}]
- set id [lindex $displayorder $y0]
- set x0 [lsearch -exact [lindex $rowidlist $y0] $id]
- if {$x0 >= 0} {
- set o [expr {$x0 - $col}]
- }
+ set x0 [lsearch -exact $previdlist $kid]
}
}
- if {$o eq {} || $o <= 0} break
+ if {$x0 <= $col} break
}
# Insert a pad at that column as long as it has a line and
- # isn't the last column, and adjust the next row's offsets
- if {$o ne {} && [incr col] < [llength $idlist]} {
- set y1 [expr {$row + 1}]
- set offs2 [lindex $rowoffsets $y1]
- set x1 -1
- foreach z $offs2 {
- incr x1
- if {$z eq {} || $x1 + $z < $col} continue
- lset rowoffsets $y1 [incrange $offs2 $x1 1]
- break
- }
+ # isn't the last column
+ if {$x0 >= 0 && [incr col] < [llength $idlist]} {
set idlist [linsert $idlist $col {}]
- set tmp [linsert $offs $col {}]
- incr col
- set offs [incrange $tmp $col -1]
+ lset rowidlist $row $idlist
+ changedrow $row
}
}
- lset rowidlist $row $idlist
- lset rowoffsets $row $offs
- set col 0
}
}
}
proc rowranges {id} {
- global phase idrowranges commitrow rowlaidout rowrangelist curview
-
- set ranges {}
- if {$phase eq {} ||
- ([info exists commitrow($curview,$id)]
- && $commitrow($curview,$id) < $rowlaidout)} {
- set ranges [lindex $rowrangelist $commitrow($curview,$id)]
- } elseif {[info exists idrowranges($id)]} {
- set ranges $idrowranges($id)
- }
- set linenos {}
- foreach rid $ranges {
- lappend linenos $commitrow($curview,$rid)
- }
- if {$linenos ne {}} {
- lset linenos 0 [expr {[lindex $linenos 0] + 1}]
- }
- return $linenos
-}
-
-# work around tk8.4 refusal to draw arrows on diagonal segments
-proc adjarrowhigh {coords} {
- global linespc
-
- set x0 [lindex $coords 0]
- set x1 [lindex $coords 2]
- if {$x0 != $x1} {
- set y0 [lindex $coords 1]
- set y1 [lindex $coords 3]
- if {$y0 - $y1 <= 2 * $linespc && $x1 == [lindex $coords 4]} {
- # we have a nearby vertical segment, just trim off the diag bit
- set coords [lrange $coords 2 end]
+ global commitrow curview children uparrowlen downarrowlen
+ global rowidlist
+
+ set kids $children($curview,$id)
+ if {$kids eq {}} {
+ return {}
+ }
+ set ret {}
+ lappend kids $id
+ foreach child $kids {
+ if {![info exists commitrow($curview,$child)]} break
+ set row $commitrow($curview,$child)
+ if {![info exists prev]} {
+ lappend ret [expr {$row + 1}]
} else {
- set slope [expr {($x0 - $x1) / ($y0 - $y1)}]
- set xi [expr {$x0 - $slope * $linespc / 2}]
- set yi [expr {$y0 - $linespc / 2}]
- set coords [lreplace $coords 0 1 $xi $y0 $xi $yi]
+ if {$row <= $prevrow} {
+ puts "oops children out of order [shortids $id] $row <= [shortids $prev] $prevrow"
+ }
+ # see if the line extends the whole way from prevrow to row
+ if {$row > $prevrow + $uparrowlen + $downarrowlen &&
+ [lsearch -exact [lindex $rowidlist \
+ [expr {int(($row + $prevrow) / 2)}]] $id] < 0} {
+ # it doesn't, see where it ends
+ set r [expr {$prevrow + $downarrowlen}]
+ if {[lsearch -exact [lindex $rowidlist $r] $id] < 0} {
+ while {[incr r -1] > $prevrow &&
+ [lsearch -exact [lindex $rowidlist $r] $id] < 0} {}
+ } else {
+ while {[incr r] <= $row &&
+ [lsearch -exact [lindex $rowidlist $r] $id] >= 0} {}
+ incr r -1
+ }
+ lappend ret $r
+ # see where it starts up again
+ set r [expr {$row - $uparrowlen}]
+ if {[lsearch -exact [lindex $rowidlist $r] $id] < 0} {
+ while {[incr r] < $row &&
+ [lsearch -exact [lindex $rowidlist $r] $id] < 0} {}
+ } else {
+ while {[incr r -1] >= $prevrow &&
+ [lsearch -exact [lindex $rowidlist $r] $id] >= 0} {}
+ incr r
+ }
+ lappend ret $r
+ }
+ }
+ if {$child eq $id} {
+ lappend ret $row
}
+ set prev $id
+ set prevrow $row
}
- return $coords
+ return $ret
}
proc drawlineseg {id row endrow arrowlow} {
global rowidlist displayorder iddrawn linesegs
- global canv colormap linespc curview maxlinelen
+ global canv colormap linespc curview maxlinelen parentlist
set cols [list [lsearch -exact [lindex $rowidlist $row] $id]]
set le [expr {$row + 1}]
set itl [lindex $lines [expr {$i-1}] 2]
set al [$canv itemcget $itl -arrow]
set arrowlow [expr {$al eq "last" || $al eq "both"}]
- } elseif {$arrowlow &&
- [lsearch -exact [lindex $rowidlist [expr {$row-1}]] $id] >= 0} {
- set arrowlow 0
+ } elseif {$arrowlow} {
+ if {[lsearch -exact [lindex $rowidlist [expr {$row-1}]] $id] >= 0 ||
+ [lsearch -exact [lindex $parentlist [expr {$row-1}]] $id] >= 0} {
+ set arrowlow 0
+ }
}
set arrow [lindex {none first last both} [expr {$arrowhigh + 2*$arrowlow}]]
for {set y $le} {[incr y -1] > $row} {} {
set xc [lsearch -exact [lindex $rowidlist $row] $ch]
if {$xc < 0} {
puts "oops: drawlineseg: child $ch not on row $row"
- } else {
- if {$xc < $x - 1} {
+ } elseif {$xc != $x} {
+ if {($arrowhigh && $le == $row + 1) || $dir == 0} {
+ set d [expr {int(0.5 * $linespc)}]
+ set x1 [xc $row $x]
+ if {$xc < $x} {
+ set x2 [expr {$x1 - $d}]
+ } else {
+ set x2 [expr {$x1 + $d}]
+ }
+ set y2 [yc $row]
+ set y1 [expr {$y2 + $d}]
+ lappend coords $x1 $y1 $x2 $y2
+ } elseif {$xc < $x - 1} {
lappend coords [xc $row [expr {$x-1}]] [yc $row]
} elseif {$xc > $x + 1} {
lappend coords [xc $row [expr {$x+1}]] [yc $row]
} else {
set xn [xc $row $xp]
set yn [yc $row]
- # work around tk8.4 refusal to draw arrows on diagonal segments
- if {$arrowlow && $xn != [lindex $coords end-1]} {
- if {[llength $coords] < 4 ||
- [lindex $coords end-3] != [lindex $coords end-1] ||
- [lindex $coords end] - $yn > 2 * $linespc} {
- set xn [xc $row [expr {$xp - 0.5 * $dir}]]
- set yo [yc [expr {$row + 0.5}]]
- lappend coords $xn $yo $xn $yn
- }
- } else {
- lappend coords $xn $yn
- }
+ lappend coords $xn $yn
}
if {!$joinhigh} {
- if {$arrowhigh} {
- set coords [adjarrowhigh $coords]
- }
assigncolor $id
set t [$canv create line $coords -width [linewidth $id] \
-fill $colormap($id) -tags lines.$id -arrow $arrow]
set coords [concat $coords $clow]
if {!$joinhigh} {
lset lines [expr {$i-1}] 1 $le
- if {$arrowhigh} {
- set coords [adjarrowhigh $coords]
- }
} else {
# coalesce two pieces
$canv delete $ith
proc drawparentlinks {id row} {
global rowidlist canv colormap curview parentlist
- global idpos
+ global idpos linespc
set rowids [lindex $rowidlist $row]
set col [lsearch -exact $rowids $id]
set x [xc $row $col]
set y [yc $row]
set y2 [yc $row2]
+ set d [expr {int(0.5 * $linespc)}]
+ set ymid [expr {$y + $d}]
set ids [lindex $rowidlist $row2]
# rmx = right-most X coord used
set rmx 0
if {$x2 > $rmx} {
set rmx $x2
}
- if {[lsearch -exact $rowids $p] < 0} {
+ set j [lsearch -exact $rowids $p]
+ if {$j < 0} {
# drawlineseg will do this one for us
continue
}
assigncolor $p
# should handle duplicated parents here...
set coords [list $x $y]
- if {$i < $col - 1} {
- lappend coords [xc $row [expr {$i + 1}]] $y
- } elseif {$i > $col + 1} {
- lappend coords [xc $row [expr {$i - 1}]] $y
+ if {$i != $col} {
+ # if attaching to a vertical segment, draw a smaller
+ # slant for visual distinctness
+ if {$i == $j} {
+ if {$i < $col} {
+ lappend coords [expr {$x2 + $d}] $y $x2 $ymid
+ } else {
+ lappend coords [expr {$x2 - $d}] $y $x2 $ymid
+ }
+ } elseif {$i < $col && $i < $j} {
+ # segment slants towards us already
+ lappend coords [xc $row $j] $y
+ } else {
+ if {$i < $col - 1} {
+ lappend coords [expr {$x2 + $linespc}] $y
+ } elseif {$i > $col + 1} {
+ lappend coords [expr {$x2 - $linespc}] $y
+ }
+ lappend coords $x2 $y2
+ }
+ } else {
+ lappend coords $x2 $y2
}
- lappend coords $x2 $y2
set t [$canv create line $coords -width [linewidth $p] \
-fill $colormap($p) -tags lines.$p]
$canv lower $t
global linespc canv canv2 canv3 canvy0 fgcolor curview
global commitlisted commitinfo rowidlist parentlist
global rowtextx idpos idtags idheads idotherrefs
- global linehtag linentag linedtag
- global mainfont canvxmax boldrows boldnamerows fgcolor nullid nullid2
+ global linehtag linentag linedtag selectedline
+ global canvxmax boldrows boldnamerows nullid nullid2
# listed is 0 for boundary, 1 for normal, 2 for left, 3 for right
set listed [lindex $commitlisted $row]
set name [lindex $commitinfo($id) 1]
set date [lindex $commitinfo($id) 2]
set date [formatdate $date]
- set font $mainfont
- set nfont $mainfont
+ set font mainfont
+ set nfont mainfont
set isbold [ishighlighted $row]
if {$isbold > 0} {
lappend boldrows $row
- lappend font bold
+ set font mainfontbold
if {$isbold > 1} {
lappend boldnamerows $row
- lappend nfont bold
+ set nfont mainfontbold
}
}
set linehtag($row) [$canv create text $xt $y -anchor w -fill $fgcolor \
set linentag($row) [$canv2 create text 3 $y -anchor w -fill $fgcolor \
-text $name -font $nfont -tags text]
set linedtag($row) [$canv3 create text 3 $y -anchor w -fill $fgcolor \
- -text $date -font $mainfont -tags text]
- set xr [expr {$xt + [font measure $mainfont $headline]}]
+ -text $date -font mainfont -tags text]
+ if {[info exists selectedline] && $selectedline == $row} {
+ make_secsel $row
+ }
+ set xr [expr {$xt + [font measure $font $headline]}]
if {$xr > $canvxmax} {
set canvxmax $xr
setcanvscroll
}
proc drawcmitrow {row} {
- global displayorder rowidlist
+ global displayorder rowidlist nrows_drawn
global iddrawn markingmatches
global commitinfo parentlist numcommits
- global filehighlight fhighlights findstring nhighlights
+ global filehighlight fhighlights findpattern nhighlights
global hlview vhighlights
global highlight_related rhighlights
if {[info exists filehighlight] && ![info exists fhighlights($row)]} {
askfilehighlight $row $id
}
- if {$findstring ne {} && ![info exists nhighlights($row)]} {
+ if {$findpattern ne {} && ![info exists nhighlights($row)]} {
askfindhighlight $row $id
}
if {$highlight_related ne "None" && ![info exists rhighlights($row)]} {
assigncolor $id
drawcmittext $id $row $col
set iddrawn($id) 1
+ incr nrows_drawn
}
if {$markingmatches} {
markrowmatches $row $id
}
proc drawcommits {row {endrow {}}} {
- global numcommits iddrawn displayorder curview
- global parentlist rowidlist
+ global numcommits iddrawn displayorder curview need_redisplay
+ global parentlist rowidlist rowfinal uparrowlen downarrowlen nrows_drawn
if {$row < 0} {
set row 0
set endrow [expr {$numcommits - 1}]
}
+ set rl1 [expr {$row - $downarrowlen - 3}]
+ if {$rl1 < 0} {
+ set rl1 0
+ }
+ set ro1 [expr {$row - 3}]
+ if {$ro1 < 0} {
+ set ro1 0
+ }
+ set r2 [expr {$endrow + $uparrowlen + 3}]
+ if {$r2 > $numcommits} {
+ set r2 $numcommits
+ }
+ for {set r $rl1} {$r < $r2} {incr r} {
+ if {[lindex $rowidlist $r] ne {} && [lindex $rowfinal $r]} {
+ if {$rl1 < $r} {
+ layoutrows $rl1 $r
+ }
+ set rl1 [expr {$r + 1}]
+ }
+ }
+ if {$rl1 < $r} {
+ layoutrows $rl1 $r
+ }
+ optimize_rows $ro1 0 $r2
+ if {$need_redisplay || $nrows_drawn > 2000} {
+ clear_display
+ drawvisible
+ }
+
# make the lines join to already-drawn rows either side
set r [expr {$row - 1}]
if {$r < 0 || ![info exists iddrawn([lindex $displayorder $r])]} {
drawcmitrow $r
if {$r == $er} break
set nextid [lindex $displayorder [expr {$r + 1}]]
- if {$wasdrawn && [info exists iddrawn($nextid)]} {
- catch {unset prevlines}
- continue
- }
+ if {$wasdrawn && [info exists iddrawn($nextid)]} continue
drawparentlinks $id $r
- if {[info exists lineends($r)]} {
- foreach lid $lineends($r) {
- unset prevlines($lid)
- }
- }
set rowids [lindex $rowidlist $r]
foreach lid $rowids {
if {$lid eq {}} continue
+ if {[info exists lineend($lid)] && $lineend($lid) > $r} continue
if {$lid eq $id} {
# see if this is the first child of any of its parents
foreach p [lindex $parentlist $r] {
if {[lsearch -exact $rowids $p] < 0} {
# make this line extend up to the child
- set le [drawlineseg $p $r $er 0]
- lappend lineends($le) $p
- set prevlines($p) 1
+ set lineend($p) [drawlineseg $p $r $er 0]
}
}
- } elseif {![info exists prevlines($lid)]} {
- set le [drawlineseg $lid $r $er 1]
- lappend lineends($le) $lid
- set prevlines($lid) 1
+ } else {
+ set lineend($lid) [drawlineseg $lid $r $er 1]
}
}
}
}
proc clear_display {} {
- global iddrawn linesegs
+ global iddrawn linesegs need_redisplay nrows_drawn
global vhighlights fhighlights nhighlights rhighlights
allcanvs delete all
catch {unset fhighlights}
catch {unset nhighlights}
catch {unset rhighlights}
+ set need_redisplay 0
+ set nrows_drawn 0
}
proc findcrossings {id} {
- global rowidlist parentlist numcommits rowoffsets displayorder
+ global rowidlist parentlist numcommits displayorder
set cross {}
set ccross {}
set e [expr {$numcommits - 1}]
}
if {$e <= $s} continue
- set x [lsearch -exact [lindex $rowidlist $e] $id]
- if {$x < 0} {
- puts "findcrossings: oops, no [shortids $id] in row $e"
- continue
- }
for {set row $e} {[incr row -1] >= $s} {} {
+ set x [lsearch -exact [lindex $rowidlist $row] $id]
+ if {$x < 0} break
set olds [lindex $parentlist $row]
set kid [lindex $displayorder $row]
set kidx [lsearch -exact [lindex $rowidlist $row] $kid]
}
}
}
- set inc [lindex $rowoffsets $row $x]
- if {$inc eq {}} break
- incr x $inc
}
}
return [concat $ccross {{}} $cross]
proc drawtags {id x xt y1} {
global idtags idheads idotherrefs mainhead
global linespc lthickness
- global canv mainfont commitrow rowtextx curview fgcolor bgcolor
+ global canv commitrow rowtextx curview fgcolor bgcolor
set marks {}
set ntags 0
foreach tag $marks {
incr i
if {$i >= $ntags && $i < $ntags + $nheads && $tag eq $mainhead} {
- set wid [font measure [concat $mainfont bold] $tag]
+ set wid [font measure mainfontbold $tag]
} else {
- set wid [font measure $mainfont $tag]
+ set wid [font measure mainfont $tag]
}
lappend xvals $xt
lappend wvals $wid
foreach tag $marks x $xvals wid $wvals {
set xl [expr {$x + $delta}]
set xr [expr {$x + $delta + $wid + $lthickness}]
- set font $mainfont
+ set font mainfont
if {[incr ntags -1] >= 0} {
# draw a tag
set t [$canv create polygon $x [expr {$yt + $delta}] $xl $yt \
if {[incr nheads -1] >= 0} {
set col green
if {$tag eq $mainhead} {
- lappend font bold
+ set font mainfontbold
}
} else {
set col "#ddddff"
$canv create polygon $x $yt $xr $yt $xr $yb $x $yb \
-width 1 -outline black -fill $col -tags tag.$id
if {[regexp {^(remotes/.*/|remotes/)} $tag match remoteprefix]} {
- set rwid [font measure $mainfont $remoteprefix]
+ set rwid [font measure mainfont $remoteprefix]
set xi [expr {$x + 1}]
set yti [expr {$yt + 1}]
set xri [expr {$x + $rwid}]
}
proc show_status {msg} {
- global canv mainfont fgcolor
+ global canv fgcolor
clear_display
- $canv create text 3 3 -anchor nw -text $msg -font $mainfont \
+ $canv create text 3 3 -anchor nw -text $msg -font mainfont \
-tags text -fill $fgcolor
}
# on that row and below will move down one row.
proc insertrow {row newcmit} {
global displayorder parentlist commitlisted children
- global commitrow curview rowidlist rowoffsets numcommits
- global rowrangelist rowlaidout rowoptim numcommits
- global selectedline rowchk commitidx
+ global commitrow curview rowidlist rowisopt rowfinal numcommits
+ global selectedline commitidx ordertok
if {$row >= $numcommits} {
puts "oops, inserting new row $row but only have $numcommits rows"
set commitrow($curview,$id) $r
}
incr commitidx($curview)
+ set ordertok($curview,$newcmit) $ordertok($curview,$p)
- set idlist [lindex $rowidlist $row]
- set offs [lindex $rowoffsets $row]
- set newoffs {}
- foreach x $idlist {
- if {$x eq {} || ($x eq $p && [llength $kids] == 1)} {
- lappend newoffs {}
- } else {
- lappend newoffs 0
- }
- }
- if {[llength $kids] == 1} {
- set col [lsearch -exact $idlist $p]
- lset idlist $col $newcmit
- } else {
- set col [llength $idlist]
- lappend idlist $newcmit
- lappend offs {}
- lset rowoffsets $row $offs
- }
- set rowidlist [linsert $rowidlist $row $idlist]
- set rowoffsets [linsert $rowoffsets [expr {$row+1}] $newoffs]
-
- set rowrangelist [linsert $rowrangelist $row {}]
- if {[llength $kids] > 1} {
- set rp1 [expr {$row + 1}]
- set ranges [lindex $rowrangelist $rp1]
- if {$ranges eq {}} {
- set ranges [list $newcmit $p]
- } elseif {[lindex $ranges end-1] eq $p} {
- lset ranges end-1 $newcmit
+ if {$row < [llength $rowidlist]} {
+ set idlist [lindex $rowidlist $row]
+ if {$idlist ne {}} {
+ if {[llength $kids] == 1} {
+ set col [lsearch -exact $idlist $p]
+ lset idlist $col $newcmit
+ } else {
+ set col [llength $idlist]
+ lappend idlist $newcmit
+ }
}
- lset rowrangelist $rp1 $ranges
+ set rowidlist [linsert $rowidlist $row $idlist]
+ set rowisopt [linsert $rowisopt $row 0]
+ set rowfinal [linsert $rowfinal $row [lindex $rowfinal $row]]
}
- catch {unset rowchk}
-
- incr rowlaidout
- incr rowoptim
incr numcommits
if {[info exists selectedline] && $selectedline >= $row} {
# Remove a commit that was inserted with insertrow on row $row.
proc removerow {row} {
global displayorder parentlist commitlisted children
- global commitrow curview rowidlist rowoffsets numcommits
- global rowrangelist idrowranges rowlaidout rowoptim numcommits
- global linesegends selectedline rowchk commitidx
+ global commitrow curview rowidlist rowisopt rowfinal numcommits
+ global linesegends selectedline commitidx
if {$row >= $numcommits} {
puts "oops, removing row $row but only have $numcommits rows"
}
incr commitidx($curview) -1
- set rowidlist [lreplace $rowidlist $row $row]
- set rowoffsets [lreplace $rowoffsets $rp1 $rp1]
- if {$kids ne {}} {
- set offs [lindex $rowoffsets $row]
- set offs [lreplace $offs end end]
- lset rowoffsets $row $offs
- }
-
- set rowrangelist [lreplace $rowrangelist $row $row]
- if {[llength $kids] > 0} {
- set ranges [lindex $rowrangelist $row]
- if {[lindex $ranges end-1] eq $id} {
- set ranges [lreplace $ranges end-1 end]
- lset rowrangelist $row $ranges
- }
+ if {$row < [llength $rowidlist]} {
+ set rowidlist [lreplace $rowidlist $row $row]
+ set rowisopt [lreplace $rowisopt $row $row]
+ set rowfinal [lreplace $rowfinal $row $row]
}
- catch {unset rowchk}
-
- incr rowlaidout -1
- incr rowoptim -1
incr numcommits -1
if {[info exists selectedline] && $selectedline > $row} {
set curtextcursor $c
}
-proc nowbusy {what} {
- global isbusy
+proc nowbusy {what {name {}}} {
+ global isbusy busyname statusw
if {[array names isbusy] eq {}} {
. config -cursor watch
settextcursor watch
}
set isbusy($what) 1
+ set busyname($what) $name
+ if {$name ne {}} {
+ $statusw conf -text $name
+ }
}
proc notbusy {what} {
- global isbusy maincursor textcursor
+ global isbusy maincursor textcursor busyname statusw
- catch {unset isbusy($what)}
+ catch {
+ unset isbusy($what)
+ if {$busyname($what) ne {} &&
+ [$statusw cget -text] eq $busyname($what)} {
+ $statusw conf -text {}
+ }
+ }
if {[array names isbusy] eq {}} {
. config -cursor $maincursor
settextcursor $textcursor
return $matches
}
-proc dofind {{rev 0}} {
+proc dofind {{dirn 1} {wrap 1}} {
global findstring findstartline findcurline selectedline numcommits
+ global gdttype filehighlight fh_serial find_dirn findallowwrap
- unmarkmatches
- cancel_next_highlight
+ if {[info exists find_dirn]} {
+ if {$find_dirn == $dirn} return
+ stopfinding
+ }
focus .
if {$findstring eq {} || $numcommits == 0} return
if {![info exists selectedline]} {
- set findstartline [lindex [visiblerows] $rev]
+ set findstartline [lindex [visiblerows] [expr {$dirn < 0}]]
} else {
set findstartline $selectedline
}
set findcurline $findstartline
- nowbusy finding
- if {!$rev} {
- run findmore
- } else {
- if {$findcurline == 0} {
- set findcurline $numcommits
- }
- incr findcurline -1
- run findmorerev
+ nowbusy finding "Searching"
+ if {$gdttype ne "containing:" && ![info exists filehighlight]} {
+ after cancel do_file_hl $fh_serial
+ do_file_hl $fh_serial
}
+ set find_dirn $dirn
+ set findallowwrap $wrap
+ run findmore
}
-proc findnext {restart} {
- global findcurline
- if {![info exists findcurline]} {
- if {$restart} {
- dofind
- } else {
- bell
- }
- } else {
- run findmore
- nowbusy finding
- }
-}
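+# Abandon any search in progress and reset the find state and
+# search progress indicator.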
+proc stopfinding {} {
+ global find_dirn findcurline fprogcoord
-proc findprev {} {
- global findcurline
- if {![info exists findcurline]} {
- dofind 1
- } else {
- run findmorerev
- nowbusy finding
+ if {[info exists find_dirn]} {
+ unset find_dirn
+ unset findcurline
+ notbusy finding
+ set fprogcoord 0
+ adjustprogress
}
}
proc findmore {} {
- global commitdata commitinfo numcommits findstring findpattern findloc
+ global commitdata commitinfo numcommits findpattern findloc
global findstartline findcurline displayorder
+ global find_dirn gdttype fhighlights fprogcoord
+ global findallowwrap
- set fldtypes {Headline Author Date Committer CDate Comments}
- set l [expr {$findcurline + 1}]
- if {$l >= $numcommits} {
- set l 0
- }
- if {$l <= $findstartline} {
- set lim [expr {$findstartline + 1}]
- } else {
- set lim $numcommits
- }
- if {$lim - $l > 500} {
- set lim [expr {$l + 500}]
- }
- set last 0
- for {} {$l < $lim} {incr l} {
- set id [lindex $displayorder $l]
- # shouldn't happen unless git log doesn't give all the commits...
- if {![info exists commitdata($id)]} continue
- if {![doesmatch $commitdata($id)]} continue
- if {![info exists commitinfo($id)]} {
- getcommit $id
- }
- set info $commitinfo($id)
- foreach f $info ty $fldtypes {
- if {($findloc eq "All fields" || $findloc eq $ty) &&
- [doesmatch $f]} {
- findselectline $l
- notbusy finding
- return 0
- }
- }
- }
- if {$l == $findstartline + 1} {
- bell
- unset findcurline
- notbusy finding
+ if {![info exists find_dirn]} {
return 0
}
- set findcurline [expr {$l - 1}]
- return 1
-}
-
-proc findmorerev {} {
- global commitdata commitinfo numcommits findstring findpattern findloc
- global findstartline findcurline displayorder
-
set fldtypes {Headline Author Date Committer CDate Comments}
set l $findcurline
- if {$l == 0} {
- set l $numcommits
- }
- incr l -1
- if {$l >= $findstartline} {
- set lim [expr {$findstartline - 1}]
+ set moretodo 0
+ if {$find_dirn > 0} {
+ incr l
+ if {$l >= $numcommits} {
+ set l 0
+ }
+ if {$l <= $findstartline} {
+ set lim [expr {$findstartline + 1}]
+ } else {
+ set lim $numcommits
+ set moretodo $findallowwrap
+ }
} else {
- set lim -1
- }
- if {$l - $lim > 500} {
- set lim [expr {$l - 500}]
- }
- set last 0
- for {} {$l > $lim} {incr l -1} {
- set id [lindex $displayorder $l]
- if {![doesmatch $commitdata($id)]} continue
- if {![info exists commitinfo($id)]} {
- getcommit $id
+ if {$l == 0} {
+ set l $numcommits
+ }
+ incr l -1
+ if {$l >= $findstartline} {
+ set lim [expr {$findstartline - 1}]
+ } else {
+ set lim -1
+ set moretodo $findallowwrap
+ }
+ }
+ set n [expr {($lim - $l) * $find_dirn}]
+ if {$n > 500} {
+ set n 500
+ set moretodo 1
+ }
+ set found 0
+ set domore 1
+ if {$gdttype eq "containing:"} {
+ for {} {$n > 0} {incr n -1; incr l $find_dirn} {
+ set id [lindex $displayorder $l]
+ # shouldn't happen unless git log doesn't give all the commits...
+ if {![info exists commitdata($id)]} continue
+ if {![doesmatch $commitdata($id)]} continue
+ if {![info exists commitinfo($id)]} {
+ getcommit $id
+ }
+ set info $commitinfo($id)
+ foreach f $info ty $fldtypes {
+ if {($findloc eq "All fields" || $findloc eq $ty) &&
+ [doesmatch $f]} {
+ set found 1
+ break
+ }
+ }
+ if {$found} break
}
- set info $commitinfo($id)
- foreach f $info ty $fldtypes {
- if {($findloc eq "All fields" || $findloc eq $ty) &&
- [doesmatch $f]} {
- findselectline $l
- notbusy finding
- return 0
+ } else {
+ for {} {$n > 0} {incr n -1; incr l $find_dirn} {
+ set id [lindex $displayorder $l]
+ if {![info exists fhighlights($l)]} {
+ askfilehighlight $l $id
+ if {$domore} {
+ set domore 0
+ set findcurline [expr {$l - $find_dirn}]
+ }
+ } elseif {$fhighlights($l)} {
+ set found $domore
+ break
}
}
}
- if {$l == -1} {
- bell
+ if {$found || ($domore && !$moretodo)} {
unset findcurline
+ unset find_dirn
notbusy finding
+ set fprogcoord 0
+ adjustprogress
+ if {$found} {
+ findselectline $l
+ } else {
+ bell
+ }
return 0
}
- set findcurline [expr {$l + 1}]
- return 1
+ if {!$domore} {
+ flushhighlights
+ } else {
+ set findcurline [expr {$l - $find_dirn}]
+ }
+ set n [expr {($findcurline - $findstartline) * $find_dirn - 1}]
+ if {$n < 0} {
+ incr n $numcommits
+ }
+ set fprogcoord [expr {$n * 1.0 / $numcommits}]
+ adjustprogress
+ return $domore
}
proc findselectline {l} {
- global findloc commentend ctext findcurline markingmatches
+ global findloc commentend ctext findcurline markingmatches gdttype
set markingmatches 1
set findcurline $l
}
proc unmarkmatches {} {
- global findids markingmatches findcurline
+ global markingmatches
allcanvs delete matches
- catch {unset findids}
set markingmatches 0
- catch {unset findcurline}
+ stopfinding
}
proc selcanvline {w x y} {
# append some text to the ctext widget, and make any SHA1 ID
# that we know about be a clickable link.
proc appendwithlinks {text tags} {
- global ctext commitrow linknum curview
+ global ctext commitrow linknum curview pendinglinks
set start [$ctext index "end - 1c"]
$ctext insert end $text $tags
set s [lindex $l 0]
set e [lindex $l 1]
set linkid [string range $text $s $e]
- if {![info exists commitrow($curview,$linkid)]} continue
incr e
- $ctext tag add link "$start + $s c" "$start + $e c"
+ $ctext tag delete link$linknum
$ctext tag add link$linknum "$start + $s c" "$start + $e c"
- $ctext tag bind link$linknum <1> \
- [list selectline $commitrow($curview,$linkid) 1]
+ setlink $linkid link$linknum
incr linknum
}
- $ctext tag conf link -foreground blue -underline 1
- $ctext tag bind link <Enter> { %W configure -cursor hand2 }
- $ctext tag bind link <Leave> { %W configure -cursor $curtextcursor }
+}
+
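+# Make text tagged $lk in the details pane a link to commit $id.
+# If $id is not (yet) in the current view, remember the tag so the
+# link can be activated if the commit turns up later.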
+proc setlink {id lk} {
+ global curview commitrow ctext pendinglinks commitinterest
+
+ if {[info exists commitrow($curview,$id)]} {
+ $ctext tag conf $lk -foreground blue -underline 1
+ $ctext tag bind $lk <1> [list selectline $commitrow($curview,$id) 1]
+ $ctext tag bind $lk <Enter> {linkcursor %W 1}
+ $ctext tag bind $lk <Leave> {linkcursor %W -1}
+ } else {
+ lappend pendinglinks($id) $lk
+ lappend commitinterest($id) {makelink %I}
+ }
+}
+
+proc makelink {id} {
+ global pendinglinks
+
+ if {![info exists pendinglinks($id)]} return
+ foreach lk $pendinglinks($id) {
+ setlink $id $lk
+ }
+ unset pendinglinks($id)
+}
+
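+# Reference-counted cursor switching: show a hand cursor while the
+# mouse is over at least one link.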
+proc linkcursor {w inc} {
+ global linkentercount curtextcursor
+
+ if {[incr linkentercount $inc] > 0} {
+ $w configure -cursor hand2
+ } else {
+ $w configure -cursor $curtextcursor
+ if {$linkentercount < 0} {
+ set linkentercount 0
+ }
+ }
}
proc viewnextline {dir} {
$ctext tag delete $lk
$ctext insert $pos $sep
$ctext insert $pos [lindex $ti 0] $lk
- if {[info exists commitrow($curview,$id)]} {
- $ctext tag conf $lk -foreground blue
- $ctext tag bind $lk <1> \
- [list selectline $commitrow($curview,$id) 1]
- $ctext tag conf $lk -underline 1
- $ctext tag bind $lk <Enter> { %W configure -cursor hand2 }
- $ctext tag bind $lk <Leave> \
- { %W configure -cursor $curtextcursor }
- }
+ setlink $id $lk
set sep ", "
}
}
}
}
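+# Draw the secondary-selection highlight behind row $l on all three canvases.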
+proc make_secsel {l} {
+ global linehtag linentag linedtag canv canv2 canv3
+
+ if {![info exists linehtag($l)]} return
+ $canv delete secsel
+ set t [eval $canv create rect [$canv bbox $linehtag($l)] -outline {{}} \
+ -tags secsel -fill [$canv cget -selectbackground]]
+ $canv lower $t
+ $canv2 delete secsel
+ set t [eval $canv2 create rect [$canv2 bbox $linentag($l)] -outline {{}} \
+ -tags secsel -fill [$canv2 cget -selectbackground]]
+ $canv2 lower $t
+ $canv3 delete secsel
+ set t [eval $canv3 create rect [$canv3 bbox $linedtag($l)] -outline {{}} \
+ -tags secsel -fill [$canv3 cget -selectbackground]]
+ $canv3 lower $t
+}
+
proc selectline {l isnew} {
- global canv canv2 canv3 ctext commitinfo selectedline
- global displayorder linehtag linentag linedtag
+ global canv ctext commitinfo selectedline
+ global displayorder
global canvy0 linespc parentlist children curview
global currentid sha1entry
global commentend idtags linknum
catch {unset pending_select}
$canv delete hover
normalline
- cancel_next_highlight
unsel_reflist
+ stopfinding
if {$l < 0 || $l >= $numcommits} return
set y [expr {$canvy0 + $l * $linespc}]
set ymax [lindex [$canv cget -scrollregion] 3]
drawvisible
}
- if {![info exists linehtag($l)]} return
- $canv delete secsel
- set t [eval $canv create rect [$canv bbox $linehtag($l)] -outline {{}} \
- -tags secsel -fill [$canv cget -selectbackground]]
- $canv lower $t
- $canv2 delete secsel
- set t [eval $canv2 create rect [$canv2 bbox $linentag($l)] -outline {{}} \
- -tags secsel -fill [$canv2 cget -selectbackground]]
- $canv2 lower $t
- $canv3 delete secsel
- set t [eval $canv3 create rect [$canv3 bbox $linedtag($l)] -outline {{}} \
- -tags secsel -fill [$canv3 cget -selectbackground]]
- $canv3 lower $t
+ make_secsel $l
if {$isnew} {
addtohistory [list selectline $l 0]
catch {unset currentid}
allcanvs delete secsel
rhighlight_none
- cancel_next_highlight
}
proc reselectline {} {
$ctext insert end "$f\n" filesep
$ctext config -state disabled
$ctext yview $commentend
+ settabs 0
}
proc getblobline {bf id} {
}
proc mergediff {id l} {
- global diffmergeid diffopts mdifffd
+ global diffmergeid mdifffd
global diffids
global parentlist
+ global limitdiffs viewfiles curview
set diffmergeid $id
set diffids $id
# this doesn't seem to actually affect anything...
- set env(GIT_DIFF_OPTS) $diffopts
set cmd [concat | git diff-tree --no-commit-id --cc $id]
+ if {$limitdiffs && $viewfiles($curview) ne {}} {
+ set cmd [concat $cmd -- $viewfiles($curview)]
+ }
if {[catch {set mdf [open $cmd r]} err]} {
error_popup "Error getting merge diffs: $err"
return
fconfigure $mdf -blocking 0
set mdifffd($id) $mdf
set np [llength [lindex $parentlist $l]]
+ settabs $np
filerun $mdf [list getmergediffline $mdf $id $np]
}
proc startdiff {ids} {
global treediffs diffids treepending diffmergeid nullid nullid2
+ settabs 1
set diffids $ids
catch {unset diffmergeid}
if {![info exists treediffs($ids)] ||
}
}
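+# Return 1 if $name matches one of the path prefixes in $filter;
+# an entry ending in "/" matches anything under that directory.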
+proc path_filter {filter name} {
+ foreach p $filter {
+ set l [string length $p]
+ if {[string index $p end] eq "/"} {
+ if {[string compare -length $l $p $name] == 0} {
+ return 1
+ }
+ } else {
+ if {[string compare -length $l $p $name] == 0 &&
+ ([string length $name] == $l ||
+ [string index $name $l] eq "/")} {
+ return 1
+ }
+ }
+ }
+ return 0
+}
+
proc addtocflist {ids} {
- global treediffs cflist
+ global treediffs
+
add_flist $treediffs($ids)
getblobdiffs $ids
}
proc gettreediffline {gdtf ids} {
global treediff treediffs treepending diffids diffmergeid
- global cmitmode
+ global cmitmode viewfiles curview limitdiffs
set nr 0
while {[incr nr] <= 1000 && [gets $gdtf line] >= 0} {
return [expr {$nr >= 1000? 2: 1}]
}
close $gdtf
- set treediffs($ids) $treediff
+ if {$limitdiffs && $viewfiles($curview) ne {}} {
+ set flist {}
+ foreach f $treediff {
+ if {[path_filter $viewfiles($curview) $f]} {
+ lappend flist $f
+ }
+ }
+ set treediffs($ids) $flist
+ } else {
+ set treediffs($ids) $treediff
+ }
unset treepending
if {$cmitmode eq "tree"} {
gettree $diffids
}
proc getblobdiffs {ids} {
- global diffopts blobdifffd diffids env
+ global blobdifffd diffids env
global diffinhdr treediffs
global diffcontext
+ global limitdiffs viewfiles curview
- set env(GIT_DIFF_OPTS) $diffopts
- if {[catch {set bdf [open [diffcmd $ids "-p -C --no-commit-id -U$diffcontext"] r]} err]} {
+ set cmd [diffcmd $ids "-p -C --no-commit-id -U$diffcontext"]
+ if {$limitdiffs && $viewfiles($curview) ne {}} {
+ set cmd [concat $cmd -- $viewfiles($curview)]
+ }
+ if {[catch {set bdf [open $cmd r]} err]} {
puts "error getting diffs: $err"
return
}
set diffinhdr 0
} elseif {$diffinhdr} {
- if {![string compare -length 12 "rename from " $line] ||
- ![string compare -length 10 "copy from " $line]} {
+ if {![string compare -length 12 "rename from " $line]} {
set fname [string range $line [expr 6 + [string first " from " $line] ] end]
if {[string index $fname 0] eq "\""} {
set fname [lindex $fname 0]
proc clear_ctext {{first 1.0}} {
global ctext smarktop smarkbot
+ global pendinglinks
set l [lindex [split $first .] 0]
if {![info exists smarktop] || [$ctext compare $first < $smarktop.0]} {
set smarkbot $l
}
$ctext delete $first end
+ if {$first eq "1.0"} {
+ catch {unset pendinglinks}
+ }
+}
+
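+# Set the tab stops in the details pane to suit $firstab (the number
+# of parents of the commit being shown); with no argument, reapply
+# the current settings, e.g. after a font change.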
+proc settabs {{firstab {}}} {
+ global firsttabstop tabstop ctext have_tk85
+
+ if {$firstab ne {} && $have_tk85} {
+ set firsttabstop $firstab
+ }
+ set w [font measure textfont "0"]
+ if {$firsttabstop != 0} {
+ $ctext conf -tabs [list [expr {($firsttabstop + $tabstop) * $w}] \
+ [expr {($firsttabstop + 2 * $tabstop) * $w}]]
+ } elseif {$have_tk85 || $tabstop != 8} {
+ $ctext conf -tabs [expr {$tabstop * $w}]
+ } else {
+ $ctext conf -tabs {}
+ }
}
proc incrsearch {name ix op} {
}
proc setcoords {} {
- global linespc charspc canvx0 canvy0 mainfont
+ global linespc charspc canvx0 canvy0
global xspc1 xspc2 lthickness
- set linespc [font metrics $mainfont -linespace]
- set charspc [font measure $mainfont "m"]
+ set linespc [font metrics mainfont -linespace]
+ set charspc [font measure mainfont "m"]
set canvy0 [expr {int(3 + 0.5 * $linespc)}]
set canvx0 [expr {int(3 + 0.5 * $linespc)}]
set lthickness [expr {int($linespc / 9) + 1}]
}
}
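+# Parse a Tk font description (family, size, optional styles) into
+# the fontattr array entries for font $f.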
+proc parsefont {f n} {
+ global fontattr
+
+ set fontattr($f,family) [lindex $n 0]
+ set s [lindex $n 1]
+ if {$s eq {} || $s == 0} {
+ set s 10
+ } elseif {$s < 0} {
+ set s [expr {int(-$s / [winfo fpixels . 1p] + 0.5)}]
+ }
+ set fontattr($f,size) $s
+ set fontattr($f,weight) normal
+ set fontattr($f,slant) roman
+ foreach style [lrange $n 2 end] {
+ switch -- $style {
+ "normal" -
+ "bold" {set fontattr($f,weight) $style}
+ "roman" -
+ "italic" {set fontattr($f,slant) $style}
+ }
+ }
+}
+
+proc fontflags {f {isbold 0}} {
+ global fontattr
+
+ return [list -family $fontattr($f,family) -size $fontattr($f,size) \
+ -weight [expr {$isbold? "bold": $fontattr($f,weight)}] \
+ -slant $fontattr($f,slant)]
+}
+
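+# Reconstruct a font description list from the fontattr entries for $f.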
+proc fontname {f} {
+ global fontattr
+
+ set n [list $fontattr($f,family) $fontattr($f,size)]
+ if {$fontattr($f,weight) eq "bold"} {
+ lappend n "bold"
+ }
+ if {$fontattr($f,slant) eq "italic"} {
+ lappend n "italic"
+ }
+ return $n
+}
+
proc incrfont {inc} {
global mainfont textfont ctext canv phase cflist showrefstop
- global charspc tabstop
- global stopped entries
+ global stopped entries fontattr
+
unmarkmatches
- set mainfont [lreplace $mainfont 1 1 [expr {[lindex $mainfont 1] + $inc}]]
- set textfont [lreplace $textfont 1 1 [expr {[lindex $textfont 1] + $inc}]]
+ set s $fontattr(mainfont,size)
+ incr s $inc
+ if {$s < 1} {
+ set s 1
+ }
+ set fontattr(mainfont,size) $s
+ font config mainfont -size $s
+ font config mainfontbold -size $s
+ set mainfont [fontname mainfont]
+ set s $fontattr(textfont,size)
+ incr s $inc
+ if {$s < 1} {
+ set s 1
+ }
+ set fontattr(textfont,size) $s
+ font config textfont -size $s
+ font config textfontbold -size $s
+ set textfont [fontname textfont]
setcoords
- $ctext conf -font $textfont -tabs "[expr {$tabstop * $charspc}]"
- $cflist conf -font $textfont
- $ctext tag conf filesep -font [concat $textfont bold]
- foreach e $entries {
- $e conf -font $mainfont
- }
- if {$phase eq "getcommits"} {
- $canv itemconf textitems -font $mainfont
- }
- if {[info exists showrefstop] && [winfo exists $showrefstop]} {
- $showrefstop.list conf -font $mainfont
- }
+ settabs
redisplay
}
proc linehover {} {
global hoverx hovery hoverid hovertimer
global canv linespc lthickness
- global commitinfo mainfont
+ global commitinfo
set text [lindex $commitinfo($hoverid) 0]
set ymax [lindex [$canv cget -scrollregion] 3]
set y [expr {$hovery + $yfrac * $ymax - $linespc / 2}]
set x0 [expr {$x - 2 * $lthickness}]
set y0 [expr {$y - 2 * $lthickness}]
- set x1 [expr {$x + [font measure $mainfont $text] + 2 * $lthickness}]
+ set x1 [expr {$x + [font measure mainfont $text] + 2 * $lthickness}]
set y1 [expr {$y + $linespc + 2 * $lthickness}]
set t [$canv create rectangle $x0 $y0 $x1 $y1 \
-fill \#ffff80 -outline black -width 1 -tags hover]
$canv raise $t
set t [$canv create text $x $y -anchor nw -text $text -tags hover \
- -font $mainfont]
+ -font mainfont]
$canv raise $t
}
}
proc lineclick {x y id isnew} {
- global ctext commitinfo children canv thickerline curview
+ global ctext commitinfo children canv thickerline curview commitrow
if {![info exists commitinfo($id)] && ![getcommit $id]} return
unmarkmatches
# fill the details pane with info about this line
$ctext conf -state normal
clear_ctext
- $ctext tag conf link -foreground blue -underline 1
- $ctext tag bind link <Enter> { %W configure -cursor hand2 }
- $ctext tag bind link <Leave> { %W configure -cursor $curtextcursor }
+ settabs 0
$ctext insert end "Parent:\t"
- $ctext insert end $id [list link link0]
- $ctext tag bind link0 <1> [list selbyid $id]
+ $ctext insert end $id link0
+ setlink $id link0
set info $commitinfo($id)
$ctext insert end "\n\t[lindex $info 0]\n"
$ctext insert end "\tAuthor:\t[lindex $info 1]\n"
if {![info exists commitinfo($child)] && ![getcommit $child]} continue
set info $commitinfo($child)
$ctext insert end "\n\t"
- $ctext insert end $child [list link link$i]
- $ctext tag bind link$i <1> [list selbyid $child]
+ $ctext insert end $child link$i
+ setlink $child link$i
$ctext insert end "\n\t[lindex $info 0]"
$ctext insert end "\n\tAuthor:\t[lindex $info 1]"
set date [formatdate [lindex $info 2]]
global rowctxmenu commitrow selectedline rowmenuid curview
global nullid nullid2 fakerowmenu mainhead
+ stopfinding
set rowmenuid $id
if {![info exists selectedline]
|| $commitrow($curview,$id) eq $selectedline} {
clear_ctext
init_flist "Top"
$ctext insert end "From "
- $ctext tag conf link -foreground blue -underline 1
- $ctext tag bind link <Enter> { %W configure -cursor hand2 }
- $ctext tag bind link <Leave> { %W configure -cursor $curtextcursor }
- $ctext tag bind link0 <1> [list selbyid $oldid]
- $ctext insert end $oldid [list link link0]
+ $ctext insert end $oldid link0
+ setlink $oldid link0
$ctext insert end "\n "
$ctext insert end [lindex $commitinfo($oldid) 0]
$ctext insert end "\n\nTo "
- $ctext tag bind link1 <1> [list selbyid $newid]
- $ctext insert end $newid [list link link1]
+ $ctext insert end $newid link1
+ setlink $newid link1
$ctext insert end "\n "
$ctext insert end [lindex $commitinfo($newid) 0]
$ctext insert end "\n"
set newid [$patchtop.tosha1 get]
set fname [$patchtop.fname get]
set cmd [diffcmd [list $oldid $newid] -p]
+ # trim off the initial "|"
+ set cmd [lrange $cmd 1 end]
lappend cmd >$fname &
if {[catch {eval exec $cmd} err]} {
error_popup "Error creating patch: $err"
proc redrawtags {id} {
global canv linehtag commitrow idpos selectedline curview
- global mainfont canvxmax iddrawn
+ global canvxmax iddrawn
if {![info exists commitrow($curview,$id)]} return
if {![info exists iddrawn($id)]} return
set xt [eval drawtags $id $idpos($id)]
$canv coords $linehtag($commitrow($curview,$id)) $xt [lindex $idpos($id) 2]
set text [$canv itemcget $linehtag($commitrow($curview,$id)) -text]
- set xr [expr {$xt + [font measure $mainfont $text]}]
+ set xr [expr {$xt + [font measure mainfont $text]}]
if {$xr > $canvxmax} {
set canvxmax $xr
setcanvscroll
included in branch $mainhead -- really re-apply it?"]
if {!$ok} return
}
- nowbusy cherrypick
+ nowbusy cherrypick "Cherry-picking"
update
# Unfortunately git-cherry-pick writes stuff to stderr even when
# no error occurs, and exec takes that as an indication of error...
proc resethead {} {
global mainheadid mainhead rowmenuid confirm_ok resettype
- global showlocalchanges
set confirm_ok 0
set w ".confirmreset"
error_popup $err
} else {
dohidelocalchanges
- set w ".resetprogress"
- filerun $fd [list readresetstat $fd $w]
- toplevel $w
- wm transient $w
- wm title $w "Reset progress"
- message $w.m -text "Reset in progress, please wait..." \
- -justify center -aspect 1000
- pack $w.m -side top -fill x -padx 20 -pady 5
- canvas $w.c -width 150 -height 20 -bg white
- $w.c create rect 0 0 0 20 -fill green -tags rect
- pack $w.c -side top -fill x -padx 20 -pady 5 -expand 1
- nowbusy reset
+ filerun $fd [list readresetstat $fd]
+ nowbusy reset "Resetting"
}
}
-proc readresetstat {fd w} {
- global mainhead mainheadid showlocalchanges
+proc readresetstat {fd} {
+ global mainhead mainheadid showlocalchanges rprogcoord
if {[gets $fd line] >= 0} {
if {[regexp {([0-9]+)% \(([0-9]+)/([0-9]+)\)} $line match p m n]} {
- set x [expr {($m * 150) / $n}]
- $w.c coords rect 0 0 $x 20
+ set rprogcoord [expr {1.0 * $m / $n}]
+ adjustprogress
}
return 1
}
- destroy $w
+ set rprogcoord 0
+ adjustprogress
notbusy reset
if {[catch {close $fd} err]} {
error_popup $err
proc headmenu {x y id head} {
global headmenuid headmenuhead headctxmenu mainhead
+ stopfinding
set headmenuid $id
set headmenuhead $head
set state normal
# check the tree is clean first??
set oldmainhead $mainhead
- nowbusy checkout
+ nowbusy checkout "Checking out"
update
dohidelocalchanges
if {[catch {
# Display a list of tags and heads
proc showrefs {} {
- global showrefstop bgcolor fgcolor selectbgcolor mainfont
- global bglist fglist uifont reflistfilter reflist maincursor
+ global showrefstop bgcolor fgcolor selectbgcolor
+ global bglist fglist reflistfilter reflist maincursor
set top .showrefs
set showrefstop $top
toplevel $top
wm title $top "Tags and heads: [file tail [pwd]]"
text $top.list -background $bgcolor -foreground $fgcolor \
- -selectbackground $selectbgcolor -font $mainfont \
+ -selectbackground $selectbgcolor -font mainfont \
-xscrollcommand "$top.xsb set" -yscrollcommand "$top.ysb set" \
-width 30 -height 20 -cursor $maincursor \
-spacing1 1 -spacing3 1 -state disabled
grid $top.list $top.ysb -sticky nsew
grid $top.xsb x -sticky ew
frame $top.f
- label $top.f.l -text "Filter: " -font $uifont
- entry $top.f.e -width 20 -textvariable reflistfilter -font $uifont
+ label $top.f.l -text "Filter: " -font uifont
+ entry $top.f.e -width 20 -textvariable reflistfilter -font uifont
set reflistfilter "*"
trace add variable reflistfilter write reflistfilter_change
pack $top.f.e -side right -fill x -expand 1
pack $top.f.l -side left
grid $top.f - -sticky ew -pady 2
button $top.close -command [list destroy $top] -text "Close" \
- -font $uifont
+ -font uifont
grid $top.close -
grid columnconfigure $top 0 -weight 1
grid rowconfigure $top 0 -weight 1
# Stuff for finding nearby tags
proc getallcommits {} {
- global allcommits allids nbmp nextarc seeds
+ global allcommits nextarc seeds allccache allcwait cachedarcs allcupdate
+ global idheads idtags idotherrefs allparents tagobjid
if {![info exists allcommits]} {
- set allids {}
- set nbmp 0
set nextarc 0
set allcommits 0
set seeds {}
+ set allcwait 0
+ set cachedarcs 0
+ set allccache [file join [gitdir] "gitk.cache"]
+ if {![catch {
+ set f [open $allccache r]
+ set allcwait 1
+ getcache $f
+ }]} return
}
- set cmd [concat | git rev-list --all --parents]
- foreach id $seeds {
- lappend cmd "^$id"
+ if {$allcwait} {
+ return
+ }
+ set cmd [list | git rev-list --parents]
+ set allcupdate [expr {$seeds ne {}}]
+ if {!$allcupdate} {
+ set ids "--all"
+ } else {
+ set refs [concat [array names idheads] [array names idtags] \
+ [array names idotherrefs]]
+ set ids {}
+ set tagobjs {}
+ foreach name [array names tagobjid] {
+ lappend tagobjs $tagobjid($name)
+ }
+ foreach id [lsort -unique $refs] {
+ if {![info exists allparents($id)] &&
+ [lsearch -exact $tagobjs $id] < 0} {
+ lappend ids $id
+ }
+ }
+ if {$ids ne {}} {
+ foreach id $seeds {
+ lappend ids "^$id"
+ }
+ }
+ }
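+	# $ids now is either "--all" (first full read) or the refs that
+	# are missing from the cached topology, with known seeds negated
+	# via "^<id>"; an empty list means there is nothing new to read.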
+ if {$ids ne {}} {
+ set fd [open [concat $cmd $ids] r]
+ fconfigure $fd -blocking 0
+ incr allcommits
+ nowbusy allcommits
+ filerun $fd [list getallclines $fd]
+ } else {
+ dispneartags 0
}
- set fd [open $cmd r]
- fconfigure $fd -blocking 0
- incr allcommits
- nowbusy allcommits
- filerun $fd [list getallclines $fd]
}
# Since most commits have 1 parent and 1 child, we group strings of
 # coming from descendants, and "outgoing" means going towards ancestors.
proc getallclines {fd} {
- global allids allparents allchildren idtags idheads nextarc nbmp
+ global allparents allchildren idtags idheads nextarc
global arcnos arcids arctags arcout arcend arcstart archeads growing
- global seeds allcommits
-
+ global seeds allcommits cachedarcs allcupdate
+
set nid 0
while {[incr nid] <= 1000 && [gets $fd line] >= 0} {
set id [lindex $line 0]
# seen it already
continue
}
- lappend allids $id
+ set cachedarcs 0
set olds [lrange $line 1 end]
set allparents($id) $olds
if {![info exists allchildren($id)]} {
continue
}
}
- incr nbmp
foreach a $arcnos($id) {
lappend arcids($a) $id
set arcend($a) $id
if {![eof $fd]} {
return [expr {$nid >= 1000? 2: 1}]
}
- close $fd
+ set cacheok 1
+ if {[catch {
+ fconfigure $fd -blocking 1
+ close $fd
+ } err]} {
+ # got an error reading the list of commits
+ # if we were updating, try rereading the whole thing again
+ if {$allcupdate} {
+ incr allcommits -1
+ dropcache $err
+ return
+ }
+ error_popup "Error reading commit topology information;\
+ branch and preceding/following tag information\
+ will be incomplete.\n($err)"
+ set cacheok 0
+ }
if {[incr allcommits -1] == 0} {
notbusy allcommits
+ if {$cacheok} {
+ run savecache
+ }
}
dispneartags 0
return 0
}
proc splitarc {p} {
- global arcnos arcids nextarc nbmp arctags archeads idtags idheads
+ global arcnos arcids nextarc arctags archeads idtags idheads
global arcstart arcend arcout allparents growing
set a $arcnos($p)
set growing($na) 1
unset growing($a)
}
- incr nbmp
foreach id $tail {
if {[llength $arcnos($id)] == 1} {
# Update things for a new commit added that is a child of one
# existing commit. Used when cherry-picking.
proc addnewchild {id p} {
- global allids allparents allchildren idtags nextarc nbmp
+ global allparents allchildren idtags nextarc
global arcnos arcids arctags arcout arcend arcstart archeads growing
global seeds allcommits
- if {![info exists allcommits]} return
- lappend allids $id
+ if {![info exists allcommits] || ![info exists arcnos($p)]} return
set allparents($id) [list $p]
set allchildren($id) {}
set arcnos($id) {}
lappend seeds $id
- incr nbmp
lappend allchildren($p) $id
set a [incr nextarc]
set arcstart($a) $id
set arcout($id) [list $a]
}
+# This implements a cache for the topology information.
+# The cache saves, for each arc, the start and end of the arc,
+# the ids on the arc, and the outgoing arcs from the end.
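+#
+# On disk (written by savecache/writecache, read by getcache/readcache
+# below) the cache is line-oriented, roughly:
+#   1 <number-of-arcs>              -- version and arc count
+#   <start> <end-or-empty> {ids}    -- one line per arc
+#   1                               -- final version marker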
+proc readcache {f} {
+ global arcnos arcids arcout arcstart arcend arctags archeads nextarc
+ global idtags idheads allparents cachedarcs possible_seeds seeds growing
+ global allcwait
+
+ set a $nextarc
+ set lim $cachedarcs
+ if {$lim - $a > 500} {
+ set lim [expr {$a + 500}]
+ }
+ if {[catch {
+ if {$a == $lim} {
+ # finish reading the cache and setting up arctags, etc.
+ set line [gets $f]
+ if {$line ne "1"} {error "bad final version"}
+ close $f
+ foreach id [array names idtags] {
+ if {[info exists arcnos($id)] && [llength $arcnos($id)] == 1 &&
+ [llength $allparents($id)] == 1} {
+ set a [lindex $arcnos($id) 0]
+ if {$arctags($a) eq {}} {
+ recalcarc $a
+ }
+ }
+ }
+ foreach id [array names idheads] {
+ if {[info exists arcnos($id)] && [llength $arcnos($id)] == 1 &&
+ [llength $allparents($id)] == 1} {
+ set a [lindex $arcnos($id) 0]
+ if {$archeads($a) eq {}} {
+ recalcarc $a
+ }
+ }
+ }
+ foreach id [lsort -unique $possible_seeds] {
+ if {$arcnos($id) eq {}} {
+ lappend seeds $id
+ }
+ }
+ set allcwait 0
+ } else {
+ while {[incr a] <= $lim} {
+ set line [gets $f]
+ if {[llength $line] != 3} {error "bad line"}
+ set s [lindex $line 0]
+ set arcstart($a) $s
+ lappend arcout($s) $a
+ if {![info exists arcnos($s)]} {
+ lappend possible_seeds $s
+ set arcnos($s) {}
+ }
+ set e [lindex $line 1]
+ if {$e eq {}} {
+ set growing($a) 1
+ } else {
+ set arcend($a) $e
+ if {![info exists arcout($e)]} {
+ set arcout($e) {}
+ }
+ }
+ set arcids($a) [lindex $line 2]
+ foreach id $arcids($a) {
+ lappend allparents($s) $id
+ set s $id
+ lappend arcnos($id) $a
+ }
+ if {![info exists allparents($s)]} {
+ set allparents($s) {}
+ }
+ set arctags($a) {}
+ set archeads($a) {}
+ }
+ set nextarc [expr {$a - 1}]
+ }
+ } err]} {
+ dropcache $err
+ return 0
+ }
+ if {!$allcwait} {
+ getallcommits
+ }
+ return $allcwait
+}
+
+proc getcache {f} {
+ global nextarc cachedarcs possible_seeds
+
+ if {[catch {
+ set line [gets $f]
+ if {[llength $line] != 2 || [lindex $line 0] ne "1"} {error "bad version"}
+ # make sure it's an integer
+ set cachedarcs [expr {int([lindex $line 1])}]
+ if {$cachedarcs < 0} {error "bad number of arcs"}
+ set nextarc 0
+ set possible_seeds {}
+ run readcache $f
+ } err]} {
+ dropcache $err
+ }
+ return 0
+}
+
+proc dropcache {err} {
+ global allcwait nextarc cachedarcs seeds
+
+ #puts "dropping cache ($err)"
+ foreach v {arcnos arcout arcids arcstart arcend growing \
+ arctags archeads allparents allchildren} {
+ global $v
+ catch {unset $v}
+ }
+ set allcwait 0
+ set nextarc 0
+ set cachedarcs 0
+ set seeds {}
+ getallcommits
+}
+
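+# Write the topology cache out incrementally, 1000 arcs per call;
+# writecache is driven from the run queue and returns 1 while more
+# arcs remain to be written, 0 once done (or on a write error).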
+proc writecache {f} {
+ global cachearc cachedarcs allccache
+ global arcstart arcend arcnos arcids arcout
+
+ set a $cachearc
+ set lim $cachedarcs
+ if {$lim - $a > 1000} {
+ set lim [expr {$a + 1000}]
+ }
+ if {[catch {
+ while {[incr a] <= $lim} {
+ if {[info exists arcend($a)]} {
+ puts $f [list $arcstart($a) $arcend($a) $arcids($a)]
+ } else {
+ puts $f [list $arcstart($a) {} $arcids($a)]
+ }
+ }
+ } err]} {
+ catch {close $f}
+ catch {file delete $allccache}
+ #puts "writing cache failed ($err)"
+ return 0
+ }
+ set cachearc [expr {$a - 1}]
+ if {$a > $cachedarcs} {
+ puts $f "1"
+ close $f
+ return 0
+ }
+ return 1
+}
+
+proc savecache {} {
+ global nextarc cachedarcs cachearc allccache
+
+ if {$nextarc == $cachedarcs} return
+ set cachearc 0
+ set cachedarcs $nextarc
+ catch {
+ set f [open $allccache w]
+ puts $f [list 1 $cachedarcs]
+ run writecache $f
+ }
+}
+
# Returns 1 if a is an ancestor of b, -1 if b is an ancestor of a,
# or 0 if neither is true.
proc anc_or_desc {a b} {
}
$ctext conf -state normal
clear_ctext
+ settabs 0
set linknum 0
if {![info exists tagcontents($tag)]} {
catch {
destroy .
}
+proc mkfontdisp {font top which} {
+ global fontattr fontpref $font
+
+ set fontpref($font) [set $font]
+ button $top.${font}but -text $which -font optionfont \
+ -command [list choosefont $font $which]
+ label $top.$font -relief flat -font $font \
+ -text $fontattr($font,family) -justify left
+ grid x $top.${font}but $top.$font -sticky w
+}
+
+proc choosefont {font which} {
+ global fontparam fontlist fonttop fontattr
+
+ set fontparam(which) $which
+ set fontparam(font) $font
+ set fontparam(family) [font actual $font -family]
+ set fontparam(size) $fontattr($font,size)
+ set fontparam(weight) $fontattr($font,weight)
+ set fontparam(slant) $fontattr($font,slant)
+ set top .gitkfont
+ set fonttop $top
+ if {![winfo exists $top]} {
+ font create sample
+ eval font config sample [font actual $font]
+ toplevel $top
+ wm title $top "Gitk font chooser"
+ label $top.l -textvariable fontparam(which) -font uifont
+ pack $top.l -side top
+ set fontlist [lsort [font families]]
+ frame $top.f
+ listbox $top.f.fam -listvariable fontlist \
+ -yscrollcommand [list $top.f.sb set]
+ bind $top.f.fam <<ListboxSelect>> selfontfam
+ scrollbar $top.f.sb -command [list $top.f.fam yview]
+ pack $top.f.sb -side right -fill y
+ pack $top.f.fam -side left -fill both -expand 1
+ pack $top.f -side top -fill both -expand 1
+ frame $top.g
+ spinbox $top.g.size -from 4 -to 40 -width 4 \
+ -textvariable fontparam(size) \
+ -validatecommand {string is integer -strict %s}
+ checkbutton $top.g.bold -padx 5 \
+ -font {{Times New Roman} 12 bold} -text "B" -indicatoron 0 \
+ -variable fontparam(weight) -onvalue bold -offvalue normal
+ checkbutton $top.g.ital -padx 5 \
+ -font {{Times New Roman} 12 italic} -text "I" -indicatoron 0 \
+ -variable fontparam(slant) -onvalue italic -offvalue roman
+ pack $top.g.size $top.g.bold $top.g.ital -side left
+ pack $top.g -side top
+ canvas $top.c -width 150 -height 50 -border 2 -relief sunk \
+ -background white
+ $top.c create text 100 25 -anchor center -text $which -font sample \
+ -fill black -tags text
+ bind $top.c <Configure> [list centertext $top.c]
+ pack $top.c -side top -fill x
+ frame $top.buts
+ button $top.buts.ok -text "OK" -command fontok -default active \
+ -font uifont
+ button $top.buts.can -text "Cancel" -command fontcan -default normal \
+ -font uifont
+ grid $top.buts.ok $top.buts.can
+ grid columnconfigure $top.buts 0 -weight 1 -uniform a
+ grid columnconfigure $top.buts 1 -weight 1 -uniform a
+ pack $top.buts -side bottom -fill x
+ trace add variable fontparam write chg_fontparam
+ } else {
+ raise $top
+ $top.c itemconf text -text $which
+ }
+ set i [lsearch -exact $fontlist $fontparam(family)]
+ if {$i >= 0} {
+ $top.f.fam selection set $i
+ $top.f.fam see $i
+ }
+}
+
+proc centertext {w} {
+ $w coords text [expr {[winfo width $w] / 2}] [expr {[winfo height $w] / 2}]
+}
+
+proc fontok {} {
+ global fontparam fontpref prefstop
+
+ set f $fontparam(font)
+ set fontpref($f) [list $fontparam(family) $fontparam(size)]
+ if {$fontparam(weight) eq "bold"} {
+ lappend fontpref($f) "bold"
+ }
+ if {$fontparam(slant) eq "italic"} {
+ lappend fontpref($f) "italic"
+ }
+ set w $prefstop.$f
+ $w conf -text $fontparam(family) -font $fontpref($f)
+
+ fontcan
+}
+
+proc fontcan {} {
+ global fonttop fontparam
+
+ if {[info exists fonttop]} {
+ catch {destroy $fonttop}
+ catch {font delete sample}
+ unset fonttop
+ unset fontparam
+ }
+}
+
+proc selfontfam {} {
+ global fonttop fontparam
+
+ set i [$fonttop.f.fam curselection]
+ if {$i ne {}} {
+ set fontparam(family) [$fonttop.f.fam get $i]
+ }
+}
+
+proc chg_fontparam {v sub op} {
+ global fontparam
+
+ font config sample -$sub $fontparam($sub)
+}
+
proc doprefs {} {
- global maxwidth maxgraphpct diffopts
+ global maxwidth maxgraphpct
global oldprefs prefstop showneartags showlocalchanges
global bgcolor fgcolor ctext diffcolors selectbgcolor
- global uifont tabstop
+ global uifont tabstop limitdiffs
set top .gitkprefs
set prefstop $top
raise $top
return
}
- foreach v {maxwidth maxgraphpct diffopts showneartags showlocalchanges} {
+ foreach v {maxwidth maxgraphpct showneartags showlocalchanges \
+ limitdiffs tabstop} {
set oldprefs($v) [set $v]
}
toplevel $top
wm title $top "Gitk preferences"
label $top.ldisp -text "Commit list display options"
- $top.ldisp configure -font $uifont
+ $top.ldisp configure -font uifont
grid $top.ldisp - -sticky w -pady 10
label $top.spacer -text " "
label $top.maxwidthl -text "Maximum graph width (lines)" \
grid x $top.showlocal -sticky w
label $top.ddisp -text "Diff display options"
- $top.ddisp configure -font $uifont
+ $top.ddisp configure -font uifont
grid $top.ddisp - -sticky w -pady 10
- label $top.diffoptl -text "Options for diff program" \
- -font optionfont
- entry $top.diffopt -width 20 -textvariable diffopts
- grid x $top.diffoptl $top.diffopt -sticky w
+ label $top.tabstopl -text "Tab spacing" -font optionfont
+ spinbox $top.tabstop -from 1 -to 20 -width 4 -textvariable tabstop
+ grid x $top.tabstopl $top.tabstop -sticky w
frame $top.ntag
label $top.ntag.l -text "Display nearby tags" -font optionfont
checkbutton $top.ntag.b -variable showneartags
pack $top.ntag.b $top.ntag.l -side left
grid x $top.ntag -sticky w
- label $top.tabstopl -text "tabstop" -font optionfont
- spinbox $top.tabstop -from 1 -to 20 -width 4 -textvariable tabstop
- grid x $top.tabstopl $top.tabstop -sticky w
+ frame $top.ldiff
+ label $top.ldiff.l -text "Limit diffs to listed paths" -font optionfont
+ checkbutton $top.ldiff.b -variable limitdiffs
+ pack $top.ldiff.b $top.ldiff.l -side left
+ grid x $top.ldiff -sticky w
label $top.cdisp -text "Colors: press to choose"
- $top.cdisp configure -font $uifont
+ $top.cdisp configure -font uifont
grid $top.cdisp - -sticky w -pady 10
label $top.bg -padx 40 -relief sunk -background $bgcolor
button $top.bgbut -text "Background" -font optionfont \
-command [list choosecolor selectbgcolor 0 $top.selbgsep background setselbg]
grid x $top.selbgbut $top.selbgsep -sticky w
+ label $top.cfont -text "Fonts: press to choose"
+ $top.cfont configure -font uifont
+ grid $top.cfont - -sticky w -pady 10
+ mkfontdisp mainfont $top "Main font"
+ mkfontdisp textfont $top "Diff display font"
+ mkfontdisp uifont $top "User interface font"
+
frame $top.buts
button $top.buts.ok -text "OK" -command prefsok -default active
- $top.buts.ok configure -font $uifont
+ $top.buts.ok configure -font uifont
button $top.buts.can -text "Cancel" -command prefscan -default normal
- $top.buts.can configure -font $uifont
+ $top.buts.can configure -font uifont
grid $top.buts.ok $top.buts.can
grid columnconfigure $top.buts 0 -weight 1 -uniform a
grid columnconfigure $top.buts 1 -weight 1 -uniform a
}
proc prefscan {} {
- global maxwidth maxgraphpct diffopts
- global oldprefs prefstop showneartags showlocalchanges
+ global oldprefs prefstop
- foreach v {maxwidth maxgraphpct diffopts showneartags showlocalchanges} {
+ foreach v {maxwidth maxgraphpct showneartags showlocalchanges \
+ limitdiffs tabstop} {
+ global $v
set $v $oldprefs($v)
}
catch {destroy $prefstop}
unset prefstop
+ fontcan
}
proc prefsok {} {
global maxwidth maxgraphpct
global oldprefs prefstop showneartags showlocalchanges
- global charspc ctext tabstop
+ global fontpref mainfont textfont uifont
+ global limitdiffs treediffs
catch {destroy $prefstop}
unset prefstop
- $ctext configure -tabs "[expr {$tabstop * $charspc}]"
+ fontcan
+ set fontchanged 0
+ if {$mainfont ne $fontpref(mainfont)} {
+ set mainfont $fontpref(mainfont)
+ parsefont mainfont $mainfont
+ eval font configure mainfont [fontflags mainfont]
+ eval font configure mainfontbold [fontflags mainfont 1]
+ setcoords
+ set fontchanged 1
+ }
+ if {$textfont ne $fontpref(textfont)} {
+ set textfont $fontpref(textfont)
+ parsefont textfont $textfont
+ eval font configure textfont [fontflags textfont]
+ eval font configure textfontbold [fontflags textfont 1]
+ }
+ if {$uifont ne $fontpref(uifont)} {
+ set uifont $fontpref(uifont)
+ parsefont uifont $uifont
+ eval font configure uifont [fontflags uifont]
+ }
+ settabs
if {$showlocalchanges != $oldprefs(showlocalchanges)} {
if {$showlocalchanges} {
doshowlocalchanges
dohidelocalchanges
}
}
- if {$maxwidth != $oldprefs(maxwidth)
+ if {$limitdiffs != $oldprefs(limitdiffs)} {
+ # treediffs elements are limited by path
+ catch {unset treediffs}
+ }
+ if {$fontchanged || $maxwidth != $oldprefs(maxwidth)
|| $maxgraphpct != $oldprefs(maxgraphpct)} {
redisplay
- } elseif {$showneartags != $oldprefs(showneartags)} {
+ } elseif {$showneartags != $oldprefs(showneartags) ||
+ $limitdiffs != $oldprefs(limitdiffs)} {
reselectline
}
}
return {}
}
+# First check that Tcl/Tk is recent enough
+if {[catch {package require Tk 8.4} err]} {
+ show_error {} . "Sorry, gitk cannot run with this version of Tcl/Tk.\n\
+ Gitk requires at least Tcl/Tk 8.4."
+ exit 1
+}
+
# defaults...
set datemode 0
-set diffopts "-U 5 -p"
set wrcomcmd "git diff-tree --stdin -p --pretty"
set gitencoding {}
set maxwidth 16
set revlistorder 0
set fastdate 0
-set uparrowlen 7
-set downarrowlen 7
-set mingaplen 30
+set uparrowlen 5
+set downarrowlen 5
+set mingaplen 100
set cmitmode "patch"
set wrapcomment "none"
set showneartags 1
set maxrefs 20
set maxlinelen 200
set showlocalchanges 1
+set limitdiffs 1
set datetimeformat "%Y-%m-%d %H:%M:%S"
set colors {green red blue magenta darkgrey brown orange}
font create optionfont -family sans-serif -size -12
+parsefont mainfont $mainfont
+eval font create mainfont [fontflags mainfont]
+eval font create mainfontbold [fontflags mainfont 1]
+
+parsefont textfont $textfont
+eval font create textfont [fontflags textfont]
+eval font create textfontbold [fontflags textfont 1]
+
+parsefont uifont $uifont
+eval font create uifont [fontflags uifont]
+
# check that we can find a .git directory somewhere...
if {[catch {set gitdir [gitdir]}]} {
show_error {} . "Cannot find a git repository here."
exit 1
}
+set mergeonly 0
set revtreeargs {}
set cmdline_files {}
set i 0
switch -- $arg {
"" { }
"-d" { set datemode 1 }
+ "--merge" {
+ set mergeonly 1
+ lappend revtreeargs $arg
+ }
"--" {
set cmdline_files [lrange $argv [expr {$i + 1}] end]
break
}
}
+if {$mergeonly} {
+ # find the list of unmerged files
+ set mlist {}
+ set nr_unmerged 0
+ if {[catch {
+ set fd [open "| git ls-files -u" r]
+ } err]} {
+ show_error {} . "Couldn't get list of unmerged files: $err"
+ exit 1
+ }
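+    # "git ls-files -u" prints "<mode> <sha1> <stage>\t<path>" for each
+    # entry; take everything after the first tab as the path (an
+    # unmerged path shows up once per stage, hence the duplicate check)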
+ while {[gets $fd line] >= 0} {
+ set i [string first "\t" $line]
+ if {$i < 0} continue
+ set fname [string range $line [expr {$i+1}] end]
+ if {[lsearch -exact $mlist $fname] >= 0} continue
+ incr nr_unmerged
+ if {$cmdline_files eq {} || [path_filter $cmdline_files $fname]} {
+ lappend mlist $fname
+ }
+ }
+ catch {close $fd}
+ if {$mlist eq {}} {
+ if {$nr_unmerged == 0} {
+ show_error {} . "No files selected: --merge specified but\
+ no files are unmerged."
+ } else {
+ show_error {} . "No files selected: --merge specified but\
+ no unmerged files are within file limit."
+ }
+ exit 1
+ }
+ set cmdline_files $mlist
+}
+
set nullid "0000000000000000000000000000000000000000"
set nullid2 "0000000000000000000000000000000000000001"
+set have_tk85 [expr {[package vcompare $tk_version "8.5"] >= 0}]
set runq {}
set history {}
set fh_serial 0
set nhl_names {}
set highlight_paths {}
+set findpattern {}
set searchdirn -forwards
set boldrows {}
set boldnamerows {}
set diffelide {0 0}
set markingmatches 0
-
-set optim_delay 16
+set linkentercount 0
+set need_redisplay 0
+set nrows_drawn 0
+set firsttabstop 0
set nextviewnum 1
set curview 0
set selectedview 0
set selectedhlview None
+set highlight_related None
+set highlight_files {}
set viewfiles(0) {}
set viewperm(0) 0
set viewargs(0) {}
set stopped 0
set stuffsaved 0
set patchnum 0
-set lookingforhead 0
set localirow -1
set localfrow -1
set lserial 0
#our $projectroot = "/pub/scm";
our $projectroot = "++GITWEB_PROJECTROOT++";
+# filesystem traversal depth limit for getting the project list;
+# the depth is measured relative to $projectroot
+our $project_maxdepth = "++GITWEB_PROJECT_MAXDEPTH++";
+
# target of the home link on top of all pages
our $home_link = $my_uri || "/";
return "$body$tail";
}
+# takes the same arguments as chop_str, but also wraps a <span> around the
+# result with a title attribute if it does get chopped. Additionally, the
+# string is HTML-escaped.
+sub chop_and_escape_str {
+ my $str = shift;
+ my $len = shift;
+ my $add_len = shift || 10;
+
+ my $chopped = chop_str($str, $len, $add_len);
+ if ($chopped eq $str) {
+ return esc_html($chopped);
+ } else {
+ return qq{<span title="} . esc_html($str) . qq{">} .
+ esc_html($chopped) . qq{</span>};
+ }
+}
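+# For example, an owner name such as "Research & Development" chopped
+# to 8 characters would come back as something like
+#   <span title="Research &amp; Development">Research...</span>
+# (illustrative only; the exact chop point is up to chop_str).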
+
## ----------------------------------------------------------------------
## functions returning short strings
# remove the trailing "/"
$dir =~ s!/+$!!;
my $pfxlen = length("$dir");
+ my $pfxdepth = ($dir =~ tr!/!!);
File::Find::find({
follow_fast => 1, # follow symbolic links
return if (m!^[/.]$!);
# only directories can be git repositories
return unless (-d $_);
+		# don't traverse too deep (File::Find is very slow on OS X)
+ if (($File::Find::name =~ tr!/!!) - $pfxdepth > $project_maxdepth) {
+ $File::Find::prune = 1;
+ return;
+ }
my $subdir = substr($File::Find::name, $pfxlen + 1);
# we check related file in $projectroot
return wantarray ? %res : \%res;
}
+# wrapper: return parsed line of git-diff-tree "raw" output
+# (the argument might be raw line, or parsed info)
+sub parsed_difftree_line {
+ my $line_or_ref = shift;
+
+ if (ref($line_or_ref) eq "HASH") {
+ # pre-parsed (or generated by hand)
+ return $line_or_ref;
+ } else {
+ return parse_difftree_raw_line($line_or_ref);
+ }
+}
+
# parse line of git-ls-tree output
sub parse_ls_tree_line ($;%) {
my $line = shift;
}
}
} else {
+ # ordinary (not combined) diff
$from->{'file'} = $diffinfo->{'from_file'} || $diffinfo->{'file'};
if ($diffinfo->{'status'} ne "A") { # not new (added) file
$from->{'href'} = href(action=>"blob", hash_base=>$hash_parent,
## ......................................................................
## functions printing large fragments of HTML
+# get pre-image filenames for merge (combined) diff
sub fill_from_file_info {
my ($diff, @parents) = @_;
return $diff;
}
-# parameters can be strings, or references to arrays of strings
-sub from_ids_eq {
- my ($a, $b) = @_;
-
- if (ref($a) eq "ARRAY" && ref($b) eq "ARRAY" && @$a == @$b) {
- for (my $i = 0; $i < @$a; ++$i) {
- return 0 unless ($a->[$i] eq $b->[$i]);
- }
- return 1;
- } elsif (!ref($a) && !ref($b)) {
- return $a eq $b;
- } else {
- return 0;
- }
-}
-
+# does the current raw difftree line describe a file deletion?
sub is_deleted {
my $diffinfo = shift;
return $diffinfo->{'to_id'} eq ('0' x 40);
}
+# does the patch correspond to the [previous] raw difftree line?
+# $diffinfo - hashref of parsed raw diff format
+# $patchinfo - hashref of parsed patch diff format
+# (the same keys as in $diffinfo)
+sub is_patch_split {
+ my ($diffinfo, $patchinfo) = @_;
+
+ return defined $diffinfo && defined $patchinfo
+ && ($diffinfo->{'to_file'} || $diffinfo->{'file'}) eq $patchinfo->{'to_file'};
+}
+
+
sub git_difftree_body {
my ($difftree, $hash, @parents) = @_;
my ($parent) = $parents[0];
"diff_tree\">\n";
# header only for combined diff in 'commitdiff' view
- my $has_header = @parents > 1 && $action eq 'commitdiff';
+ my $has_header = @$difftree && @parents > 1 && $action eq 'commitdiff';
if ($has_header) {
# table header
print "<thead><tr>\n" .
my $alternate = 1;
my $patchno = 0;
foreach my $line (@{$difftree}) {
- my $diff;
- if (ref($line) eq "HASH") {
- # pre-parsed (or generated by hand)
- $diff = $line;
- } else {
- $diff = parse_difftree_raw_line($line);
- }
+ my $diff = parsed_difftree_line($line);
if ($alternate) {
print "<tr class=\"dark\">\n";
my ($fd, $difftree, $hash, @hash_parents) = @_;
my ($hash_parent) = $hash_parents[0];
+ my $is_combined = (@hash_parents > 1);
my $patch_idx = 0;
my $patch_number = 0;
my $patch_line;
my $diffinfo;
+ my $to_name;
my (%from, %to);
print "<div class=\"patchset\">\n";
PATCH:
while ($patch_line) {
- my @diff_header;
- my ($from_id, $to_id);
- # git diff header
- #assert($patch_line =~ m/^diff /) if DEBUG;
- #assert($patch_line !~ m!$/$!) if DEBUG; # is chomp-ed
- $patch_number++;
- push @diff_header, $patch_line;
-
- # extended diff header
- EXTENDED_HEADER:
- while ($patch_line = <$fd>) {
- chomp $patch_line;
-
- last EXTENDED_HEADER if ($patch_line =~ m/^--- |^diff /);
-
- if ($patch_line =~ m/^index ([0-9a-fA-F]{40})..([0-9a-fA-F]{40})/) {
- $from_id = $1;
- $to_id = $2;
- } elsif ($patch_line =~ m/^index ((?:[0-9a-fA-F]{40},)+[0-9a-fA-F]{40})..([0-9a-fA-F]{40})/) {
- $from_id = [ split(',', $1) ];
- $to_id = $2;
- }
-
- push @diff_header, $patch_line;
+ # parse "git diff" header line
+ if ($patch_line =~ m/^diff --git (\"(?:[^\\\"]*(?:\\.[^\\\"]*)*)\"|[^ "]*) (.*)$/) {
+ # $1 is from_name, which we do not use
+ $to_name = unquote($2);
+ $to_name =~ s!^b/!!;
+ } elsif ($patch_line =~ m/^diff --(cc|combined) ("?.*"?)$/) {
+ # $1 is 'cc' or 'combined', which we do not use
+ $to_name = unquote($2);
+ } else {
+ $to_name = undef;
}
- my $last_patch_line = $patch_line;
 		# check if the current patch belongs to the current raw line
# and parse raw git-diff line if needed
- if (defined $diffinfo &&
- defined $from_id && defined $to_id &&
- from_ids_eq($diffinfo->{'from_id'}, $from_id) &&
- $diffinfo->{'to_id'} eq $to_id) {
+ if (is_patch_split($diffinfo, { 'to_file' => $to_name })) {
# this is continuation of a split patch
print "<div class=\"patch cont\">\n";
} else {
# advance raw git-diff output if needed
$patch_idx++ if defined $diffinfo;
- # compact combined diff output can have some patches skipped
- # find which patch (using pathname of result) we are at now
- my $to_name;
- if ($diff_header[0] =~ m!^diff --cc "?(.*)"?$!) {
- $to_name = $1;
- }
+ # read and prepare patch information
+ $diffinfo = parsed_difftree_line($difftree->[$patch_idx]);
- do {
- # read and prepare patch information
- if (ref($difftree->[$patch_idx]) eq "HASH") {
- # pre-parsed (or generated by hand)
- $diffinfo = $difftree->[$patch_idx];
- } else {
- $diffinfo = parse_difftree_raw_line($difftree->[$patch_idx]);
- }
-
- # check if current raw line has no patch (it got simplified)
- if (defined $to_name && $to_name ne $diffinfo->{'to_file'}) {
+ # compact combined diff output can have some patches skipped
+ # find which patch (using pathname of result) we are at now;
+ if ($is_combined) {
+ while ($to_name ne $diffinfo->{'to_file'}) {
print "<div class=\"patch\" id=\"patch". ($patch_idx+1) ."\">\n" .
format_diff_cc_simplified($diffinfo, @hash_parents) .
"</div>\n"; # class="patch"
$patch_idx++;
$patch_number++;
+
+ last if $patch_idx > $#$difftree;
+ $diffinfo = parsed_difftree_line($difftree->[$patch_idx]);
}
- } until (!defined $to_name || $to_name eq $diffinfo->{'to_file'} ||
- $patch_idx > $#$difftree);
+ }
+
# modifies %from, %to hashes
parse_from_to_diffinfo($diffinfo, \%from, \%to, @hash_parents);
- if ($diffinfo->{'nparents'}) {
- # combined diff
- $from{'file'} = [];
- $from{'href'} = [];
- fill_from_file_info($diffinfo, @hash_parents)
- unless exists $diffinfo->{'from_file'};
- for (my $i = 0; $i < $diffinfo->{'nparents'}; $i++) {
- $from{'file'}[$i] = $diffinfo->{'from_file'}[$i] || $diffinfo->{'to_file'};
- if ($diffinfo->{'status'}[$i] ne "A") { # not new (added) file
- $from{'href'}[$i] = href(action=>"blob",
- hash_base=>$hash_parents[$i],
- hash=>$diffinfo->{'from_id'}[$i],
- file_name=>$from{'file'}[$i]);
- } else {
- $from{'href'}[$i] = undef;
- }
- }
- } else {
- $from{'file'} = $diffinfo->{'from_file'} || $diffinfo->{'file'};
- if ($diffinfo->{'status'} ne "A") { # not new (added) file
- $from{'href'} = href(action=>"blob", hash_base=>$hash_parent,
- hash=>$diffinfo->{'from_id'},
- file_name=>$from{'file'});
- } else {
- delete $from{'href'};
- }
- }
- $to{'file'} = $diffinfo->{'to_file'} || $diffinfo->{'file'};
- if (!is_deleted($diffinfo)) { # file exists in result
- $to{'href'} = href(action=>"blob", hash_base=>$hash,
- hash=>$diffinfo->{'to_id'},
- file_name=>$to{'file'});
- } else {
- delete $to{'href'};
- }
# this is first patch for raw difftree line with $patch_idx index
# we index @$difftree array from 0, but number patches from 1
print "<div class=\"patch\" id=\"patch". ($patch_idx+1) ."\">\n";
}
+ # git diff header
+ #assert($patch_line =~ m/^diff /) if DEBUG;
+ #assert($patch_line !~ m!$/$!) if DEBUG; # is chomp-ed
+ $patch_number++;
# print "git diff" header
- $patch_line = shift @diff_header;
print format_git_diff_header_line($patch_line, $diffinfo,
\%from, \%to);
# print extended diff header
- print "<div class=\"diff extended_header\">\n" if (@diff_header > 0);
+ print "<div class=\"diff extended_header\">\n";
EXTENDED_HEADER:
- foreach $patch_line (@diff_header) {
+ while ($patch_line = <$fd>) {
+ chomp $patch_line;
+
+ last EXTENDED_HEADER if ($patch_line =~ m/^--- |^diff /);
+
print format_extended_diff_header_line($patch_line, $diffinfo,
\%from, \%to);
}
- print "</div>\n" if (@diff_header > 0); # class="diff extended_header"
+ print "</div>\n"; # class="diff extended_header"
# from-file/to-file diff header
- $patch_line = $last_patch_line;
if (! $patch_line) {
print "</div>\n"; # class="patch"
last PATCH;
}
next PATCH if ($patch_line =~ m/^diff /);
#assert($patch_line =~ m/^---/) if DEBUG;
- #assert($patch_line eq $last_patch_line) if DEBUG;
+ my $last_patch_line = $patch_line;
$patch_line = <$fd>;
chomp $patch_line;
#assert($patch_line =~ m/^\+\+\+/) if DEBUG;
 	# for compact combined (--cc) format, with chunk and patch simplification
# patchset might be empty, but there might be unprocessed raw lines
- for ($patch_idx++ if $patch_number > 0;
+ for (++$patch_idx if $patch_number > 0;
$patch_idx < @$difftree;
- $patch_idx++) {
+ ++$patch_idx) {
# read and prepare patch information
- if (ref($difftree->[$patch_idx]) eq "HASH") {
- # pre-parsed (or generated by hand)
- $diffinfo = $difftree->[$patch_idx];
- } else {
- $diffinfo = parse_difftree_raw_line($difftree->[$patch_idx]);
- }
+ $diffinfo = parsed_difftree_line($difftree->[$patch_idx]);
# generate anchor for "patch" links in difftree / whatchanged part
print "<div class=\"patch\" id=\"patch". ($patch_idx+1) ."\">\n" .
"<td>" . $cgi->a({-href => href(project=>$pr->{'path'}, action=>"summary"),
-class => "list", -title => $pr->{'descr_long'}},
esc_html($pr->{'descr'})) . "</td>\n" .
- "<td><i>" . esc_html(chop_str($pr->{'owner'}, 15)) . "</i></td>\n";
+ "<td><i>" . chop_and_escape_str($pr->{'owner'}, 15) . "</i></td>\n";
print "<td class=\"". age_class($pr->{'age'}) . "\">" .
(defined $pr->{'age_string'} ? $pr->{'age_string'} : "No commits") . "</td>\n" .
"<td class=\"link\">" .
print "<tr class=\"light\">\n";
}
$alternate ^= 1;
+ my $author = chop_and_escape_str($co{'author_name'}, 10);
# git_summary() used print "<td><i>$co{'age_string'}</i></td>\n" .
print "<td title=\"$co{'age_string_age'}\"><i>$co{'age_string_date'}</i></td>\n" .
- "<td><i>" . esc_html(chop_str($co{'author_name'}, 10)) . "</i></td>\n" .
+ "<td><i>" . $author . "</i></td>\n" .
"<td>";
print format_subject_html($co{'title'}, $co{'title_short'},
href(action=>"commit", hash=>$commit), $ref);
print "<tr class=\"light\">\n";
}
$alternate ^= 1;
+ # shortlog uses chop_str($co{'author_name'}, 10)
+ my $author = chop_and_escape_str($co{'author_name'}, 15, 3);
print "<td title=\"$co{'age_string_age'}\"><i>$co{'age_string_date'}</i></td>\n" .
- # shortlog uses chop_str($co{'author_name'}, 10)
- "<td><i>" . esc_html(chop_str($co{'author_name'}, 15, 3)) . "</i></td>\n" .
+ "<td><i>" . $author . "</i></td>\n" .
"<td>";
# originally git_history used chop_str($co{'title'}, 50)
print format_subject_html($co{'title'}, $co{'title_short'},
print "<tr class=\"light\">\n";
}
$alternate ^= 1;
+ my $author = chop_and_escape_str($co{'author_name'}, 15, 5);
print "<td title=\"$co{'age_string_age'}\"><i>$co{'age_string_date'}</i></td>\n" .
- "<td><i>" . esc_html(chop_str($co{'author_name'}, 15, 5)) . "</i></td>\n" .
+ "<td><i>" . $author . "</i></td>\n" .
"<td>" .
$cgi->a({-href => href(action=>"commit", hash=>$co{'id'}), -class => "list subject"},
- esc_html(chop_str($co{'title'}, 50)) . "<br/>");
+ chop_and_escape_str($co{'title'}, 50) . "<br/>");
my $comment = $co{'comment'};
foreach my $line (@$comment) {
if ($line =~ m/^(.*)($search_regexp)(.*)$/i) {
print "<tr class=\"light\">\n";
}
$alternate ^= 1;
+ my $author = chop_and_escape_str($co{'author_name'}, 15, 5);
print "<td title=\"$co{'age_string_age'}\"><i>$co{'age_string_date'}</i></td>\n" .
- "<td><i>" . esc_html(chop_str($co{'author_name'}, 15, 5)) . "</i></td>\n" .
+ "<td><i>" . $author . "</i></td>\n" .
"<td>" .
$cgi->a({-href => href(action=>"commit", hash=>$co{'id'}),
-class => "list subject"},
- esc_html(chop_str($co{'title'}, 50)) . "<br/>");
+ chop_and_escape_str($co{'title'}, 50) . "<br/>");
while (my $setref = shift @files) {
my %set = %$setref;
print $cgi->a({-href => href(action=>"blob", hash_base=>$co{'id'},
--- /dev/null
+/*
+ * Some generic hashing helpers.
+ */
+#include "cache.h"
+#include "hash.h"
+
+/*
+ * Look up a hash entry in the hash table. Return the pointer to
+ * the existing entry, or the empty slot if none existed. The caller
+ * can then look at (*ptr) to see whether the entry existed or not.
+ */
+static struct hash_table_entry *lookup_hash_entry(unsigned int hash, struct hash_table *table)
+{
+ unsigned int size = table->size, nr = hash % size;
+ struct hash_table_entry *array = table->array;
+
+ while (array[nr].ptr) {
+ if (array[nr].hash == hash)
+ break;
+ nr++;
+ if (nr >= size)
+ nr = 0;
+ }
+ return array + nr;
+}
+
+/*
+ * Insert a new hash entry pointer into the table.
+ *
+ * If that hash entry already existed, return the pointer to
+ * the existing entry (and the caller can create a list of the
+ * pointers or do anything else). If it didn't exist, return
+ * NULL (and the caller knows the pointer has been inserted).
+ */
+static void **insert_hash_entry(unsigned int hash, void *ptr, struct hash_table *table)
+{
+ struct hash_table_entry *entry = lookup_hash_entry(hash, table);
+
+ if (!entry->ptr) {
+ entry->ptr = ptr;
+ entry->hash = hash;
+ table->nr++;
+ return NULL;
+ }
+ return &entry->ptr;
+}
+
+static void grow_hash_table(struct hash_table *table)
+{
+ unsigned int i;
+ unsigned int old_size = table->size, new_size;
+ struct hash_table_entry *old_array = table->array, *new_array;
+
+ new_size = alloc_nr(old_size);
+ new_array = xcalloc(sizeof(struct hash_table_entry), new_size);
+ table->size = new_size;
+ table->array = new_array;
+ table->nr = 0;
+ for (i = 0; i < old_size; i++) {
+ unsigned int hash = old_array[i].hash;
+ void *ptr = old_array[i].ptr;
+ if (ptr)
+ insert_hash_entry(hash, ptr, table);
+ }
+ free(old_array);
+}
+
+void *lookup_hash(unsigned int hash, struct hash_table *table)
+{
+ if (!table->array)
+ return NULL;
+ return &lookup_hash_entry(hash, table)->ptr;
+}
+
+void **insert_hash(unsigned int hash, void *ptr, struct hash_table *table)
+{
+ unsigned int nr = table->nr;
+ if (nr >= table->size/2)
+ grow_hash_table(table);
+ return insert_hash_entry(hash, ptr, table);
+}
+
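+/*
+ * Walk all occupied slots, calling fn(ptr) on each. A negative
+ * return from fn aborts the walk and is passed through; otherwise
+ * the non-negative return values are summed up and returned.
+ */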
+int for_each_hash(struct hash_table *table, int (*fn)(void *))
+{
+ int sum = 0;
+ unsigned int i;
+ unsigned int size = table->size;
+ struct hash_table_entry *array = table->array;
+
+ for (i = 0; i < size; i++) {
+ void *ptr = array->ptr;
+ array++;
+ if (ptr) {
+ int val = fn(ptr);
+ if (val < 0)
+ return val;
+ sum += val;
+ }
+ }
+ return sum;
+}
+
+void free_hash(struct hash_table *table)
+{
+ free(table->array);
+ table->array = NULL;
+ table->size = 0;
+ table->nr = 0;
+}
--- /dev/null
+#ifndef HASH_H
+#define HASH_H
+
+/*
+ * These are some simple generic hash table helper functions.
+ * Not necessarily suitable for all users, but good for things
+ * where you want to just keep track of a list of things, and
+ * have a good hash to use on them.
+ *
+ * It keeps the hash table at roughly 50-75% free, so the memory
+ * cost of the hash table itself is roughly
+ *
+ * 3 * 2*sizeof(void *) * nr_of_objects
+ *
+ * bytes.
+ *
+ * FIXME: on 64-bit architectures, we waste memory. It would be
+ * good to have just 32-bit pointers, requiring a special allocator
+ * for hashed entries or something.
+ */
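+/*
+ * (The bound follows from hash.c: insert_hash() grows the table once
+ * nr reaches size/2, and alloc_nr() grows capacity by roughly 1.5x,
+ * so occupancy stays between about one third and one half.)
+ */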
+struct hash_table_entry {
+ unsigned int hash;
+ void *ptr;
+};
+
+struct hash_table {
+ unsigned int size, nr;
+ struct hash_table_entry *array;
+};
+
+extern void *lookup_hash(unsigned int hash, struct hash_table *table);
+extern void **insert_hash(unsigned int hash, void *ptr, struct hash_table *table);
+extern int for_each_hash(struct hash_table *table, int (*fn)(void *));
+extern void free_hash(struct hash_table *table);
+
+static inline void init_hash(struct hash_table *table)
+{
+ table->size = 0;
+ table->nr = 0;
+ table->array = NULL;
+}
+
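+/*
+ * Typical usage, as a sketch; hash_of() and handle_collision() are
+ * placeholders for caller-supplied code:
+ *
+ *	struct hash_table table;
+ *	init_hash(&table);
+ *	void **pos = insert_hash(hash_of(key), item, &table);
+ *	if (pos)
+ *		handle_collision(*pos);	  (an entry with that hash existed)
+ */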
+#endif
void help_unknown_cmd(const char *cmd)
{
- printf("git: '%s' is not a git-command\n\n", cmd);
- list_common_cmds_help();
+ fprintf(stderr, "git: '%s' is not a git-command. See 'git --help'.\n", cmd);
exit(1);
}
+++ /dev/null
-#include "cache.h"
-#include "commit.h"
-#include "pack.h"
-#include "fetch.h"
-#include "http.h"
-
-#define PREV_BUF_SIZE 4096
-#define RANGE_HEADER_SIZE 30
-
-static int commits_on_stdin;
-
-static int got_alternates = -1;
-static int corrupt_object_found;
-
-static struct curl_slist *no_pragma_header;
-
-struct alt_base
-{
- char *base;
- int got_indices;
- struct packed_git *packs;
- struct alt_base *next;
-};
-
-static struct alt_base *alt;
-
-enum object_request_state {
- WAITING,
- ABORTED,
- ACTIVE,
- COMPLETE,
-};
-
-struct object_request
-{
- unsigned char sha1[20];
- struct alt_base *repo;
- char *url;
- char filename[PATH_MAX];
- char tmpfile[PATH_MAX];
- int local;
- enum object_request_state state;
- CURLcode curl_result;
- char errorstr[CURL_ERROR_SIZE];
- long http_code;
- unsigned char real_sha1[20];
- SHA_CTX c;
- z_stream stream;
- int zret;
- int rename;
- struct active_request_slot *slot;
- struct object_request *next;
-};
-
-struct alternates_request {
- const char *base;
- char *url;
- struct buffer *buffer;
- struct active_request_slot *slot;
- int http_specific;
-};
-
-static struct object_request *object_queue_head;
-
-static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
- void *data)
-{
- unsigned char expn[4096];
- size_t size = eltsize * nmemb;
- int posn = 0;
- struct object_request *obj_req = (struct object_request *)data;
- do {
- ssize_t retval = xwrite(obj_req->local,
- (char *) ptr + posn, size - posn);
- if (retval < 0)
- return posn;
- posn += retval;
- } while (posn < size);
-
- obj_req->stream.avail_in = size;
- obj_req->stream.next_in = ptr;
- do {
- obj_req->stream.next_out = expn;
- obj_req->stream.avail_out = sizeof(expn);
- obj_req->zret = inflate(&obj_req->stream, Z_SYNC_FLUSH);
- SHA1_Update(&obj_req->c, expn,
- sizeof(expn) - obj_req->stream.avail_out);
- } while (obj_req->stream.avail_in && obj_req->zret == Z_OK);
- data_received++;
- return size;
-}
-
-static int missing__target(int code, int result)
-{
- return /* file:// URL -- do we ever use one??? */
- (result == CURLE_FILE_COULDNT_READ_FILE) ||
- /* http:// and https:// URL */
- (code == 404 && result == CURLE_HTTP_RETURNED_ERROR) ||
- /* ftp:// URL */
- (code == 550 && result == CURLE_FTP_COULDNT_RETR_FILE)
- ;
-}
-
-#define missing_target(a) missing__target((a)->http_code, (a)->curl_result)
-
-static void fetch_alternates(const char *base);
-
-static void process_object_response(void *callback_data);
-
-static void start_object_request(struct object_request *obj_req)
-{
- char *hex = sha1_to_hex(obj_req->sha1);
- char prevfile[PATH_MAX];
- char *url;
- char *posn;
- int prevlocal;
- unsigned char prev_buf[PREV_BUF_SIZE];
- ssize_t prev_read = 0;
- long prev_posn = 0;
- char range[RANGE_HEADER_SIZE];
- struct curl_slist *range_header = NULL;
- struct active_request_slot *slot;
-
- snprintf(prevfile, sizeof(prevfile), "%s.prev", obj_req->filename);
- unlink(prevfile);
- rename(obj_req->tmpfile, prevfile);
- unlink(obj_req->tmpfile);
-
- if (obj_req->local != -1)
- error("fd leakage in start: %d", obj_req->local);
- obj_req->local = open(obj_req->tmpfile,
- O_WRONLY | O_CREAT | O_EXCL, 0666);
- /* This could have failed due to the "lazy directory creation";
- * try to mkdir the last path component.
- */
- if (obj_req->local < 0 && errno == ENOENT) {
- char *dir = strrchr(obj_req->tmpfile, '/');
- if (dir) {
- *dir = 0;
- mkdir(obj_req->tmpfile, 0777);
- *dir = '/';
- }
- obj_req->local = open(obj_req->tmpfile,
- O_WRONLY | O_CREAT | O_EXCL, 0666);
- }
-
- if (obj_req->local < 0) {
- obj_req->state = ABORTED;
- error("Couldn't create temporary file %s for %s: %s",
- obj_req->tmpfile, obj_req->filename, strerror(errno));
- return;
- }
-
- memset(&obj_req->stream, 0, sizeof(obj_req->stream));
-
- inflateInit(&obj_req->stream);
-
- SHA1_Init(&obj_req->c);
-
- url = xmalloc(strlen(obj_req->repo->base) + 51);
- obj_req->url = xmalloc(strlen(obj_req->repo->base) + 51);
- strcpy(url, obj_req->repo->base);
- posn = url + strlen(obj_req->repo->base);
- strcpy(posn, "/objects/");
- posn += 9;
- memcpy(posn, hex, 2);
- posn += 2;
- *(posn++) = '/';
- strcpy(posn, hex + 2);
- strcpy(obj_req->url, url);
-
- /* If a previous temp file is present, process what was already
- fetched. */
- prevlocal = open(prevfile, O_RDONLY);
- if (prevlocal != -1) {
- do {
- prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
- if (prev_read>0) {
- if (fwrite_sha1_file(prev_buf,
- 1,
- prev_read,
- obj_req) == prev_read) {
- prev_posn += prev_read;
- } else {
- prev_read = -1;
- }
- }
- } while (prev_read > 0);
- close(prevlocal);
- }
- unlink(prevfile);
-
- /* Reset inflate/SHA1 if there was an error reading the previous temp
- file; also rewind to the beginning of the local file. */
- if (prev_read == -1) {
- memset(&obj_req->stream, 0, sizeof(obj_req->stream));
- inflateInit(&obj_req->stream);
- SHA1_Init(&obj_req->c);
- if (prev_posn>0) {
- prev_posn = 0;
- lseek(obj_req->local, 0, SEEK_SET);
- ftruncate(obj_req->local, 0);
- }
- }
-
- slot = get_active_slot();
- slot->callback_func = process_object_response;
- slot->callback_data = obj_req;
- obj_req->slot = slot;
-
- curl_easy_setopt(slot->curl, CURLOPT_FILE, obj_req);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
- curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, obj_req->errorstr);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
-
- /* If we have successfully processed data from a previous fetch
- attempt, only fetch the data we don't already have. */
- if (prev_posn>0) {
- if (get_verbosely)
- fprintf(stderr,
- "Resuming fetch of object %s at byte %ld\n",
- hex, prev_posn);
- sprintf(range, "Range: bytes=%ld-", prev_posn);
- range_header = curl_slist_append(range_header, range);
- curl_easy_setopt(slot->curl,
- CURLOPT_HTTPHEADER, range_header);
- }
-
- /* Try to get the request started, abort the request on error */
- obj_req->state = ACTIVE;
- if (!start_active_slot(slot)) {
- obj_req->state = ABORTED;
- obj_req->slot = NULL;
- close(obj_req->local); obj_req->local = -1;
- free(obj_req->url);
- return;
- }
-}
-
-static void finish_object_request(struct object_request *obj_req)
-{
- struct stat st;
-
- fchmod(obj_req->local, 0444);
- close(obj_req->local); obj_req->local = -1;
-
- if (obj_req->http_code == 416) {
- fprintf(stderr, "Warning: requested range invalid; we may already have all the data.\n");
- } else if (obj_req->curl_result != CURLE_OK) {
- if (stat(obj_req->tmpfile, &st) == 0)
- if (st.st_size == 0)
- unlink(obj_req->tmpfile);
- return;
- }
-
- inflateEnd(&obj_req->stream);
- SHA1_Final(obj_req->real_sha1, &obj_req->c);
- if (obj_req->zret != Z_STREAM_END) {
- unlink(obj_req->tmpfile);
- return;
- }
- if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
- unlink(obj_req->tmpfile);
- return;
- }
- obj_req->rename =
- move_temp_to_file(obj_req->tmpfile, obj_req->filename);
-
- if (obj_req->rename == 0)
- pull_say("got %s\n", sha1_to_hex(obj_req->sha1));
-}
-
-static void process_object_response(void *callback_data)
-{
- struct object_request *obj_req =
- (struct object_request *)callback_data;
-
- obj_req->curl_result = obj_req->slot->curl_result;
- obj_req->http_code = obj_req->slot->http_code;
- obj_req->slot = NULL;
- obj_req->state = COMPLETE;
-
- /* Use alternates if necessary */
- if (missing_target(obj_req)) {
- fetch_alternates(alt->base);
- if (obj_req->repo->next != NULL) {
- obj_req->repo =
- obj_req->repo->next;
- close(obj_req->local);
- obj_req->local = -1;
- start_object_request(obj_req);
- return;
- }
- }
-
- finish_object_request(obj_req);
-}
-
-static void release_object_request(struct object_request *obj_req)
-{
- struct object_request *entry = object_queue_head;
-
- if (obj_req->local != -1)
- error("fd leakage in release: %d", obj_req->local);
- if (obj_req == object_queue_head) {
- object_queue_head = obj_req->next;
- } else {
- while (entry->next != NULL && entry->next != obj_req)
- entry = entry->next;
- if (entry->next == obj_req)
- entry->next = entry->next->next;
- }
-
- free(obj_req->url);
- free(obj_req);
-}
-
-#ifdef USE_CURL_MULTI
-void fill_active_slots(void)
-{
- struct object_request *obj_req = object_queue_head;
- struct active_request_slot *slot = active_queue_head;
- int num_transfers;
-
- while (active_requests < max_requests && obj_req != NULL) {
- if (obj_req->state == WAITING) {
- if (has_sha1_file(obj_req->sha1))
- obj_req->state = COMPLETE;
- else
- start_object_request(obj_req);
- curl_multi_perform(curlm, &num_transfers);
- }
- obj_req = obj_req->next;
- }
-
- while (slot != NULL) {
- if (!slot->in_use && slot->curl != NULL) {
- curl_easy_cleanup(slot->curl);
- slot->curl = NULL;
- }
- slot = slot->next;
- }
-}
-#endif
-
-void prefetch(unsigned char *sha1)
-{
- struct object_request *newreq;
- struct object_request *tail;
- char *filename = sha1_file_name(sha1);
-
- newreq = xmalloc(sizeof(*newreq));
- hashcpy(newreq->sha1, sha1);
- newreq->repo = alt;
- newreq->url = NULL;
- newreq->local = -1;
- newreq->state = WAITING;
- snprintf(newreq->filename, sizeof(newreq->filename), "%s", filename);
- snprintf(newreq->tmpfile, sizeof(newreq->tmpfile),
- "%s.temp", filename);
- newreq->slot = NULL;
- newreq->next = NULL;
-
- if (object_queue_head == NULL) {
- object_queue_head = newreq;
- } else {
- tail = object_queue_head;
- while (tail->next != NULL) {
- tail = tail->next;
- }
- tail->next = newreq;
- }
-
-#ifdef USE_CURL_MULTI
- fill_active_slots();
- step_active_slots();
-#endif
-}
-
-static int fetch_index(struct alt_base *repo, unsigned char *sha1)
-{
- char *hex = sha1_to_hex(sha1);
- char *filename;
- char *url;
- char tmpfile[PATH_MAX];
- long prev_posn = 0;
- char range[RANGE_HEADER_SIZE];
- struct curl_slist *range_header = NULL;
-
- FILE *indexfile;
- struct active_request_slot *slot;
- struct slot_results results;
-
- if (has_pack_index(sha1))
- return 0;
-
- if (get_verbosely)
- fprintf(stderr, "Getting index for pack %s\n", hex);
-
- url = xmalloc(strlen(repo->base) + 64);
- sprintf(url, "%s/objects/pack/pack-%s.idx", repo->base, hex);
-
- filename = sha1_pack_index_name(sha1);
- snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
- indexfile = fopen(tmpfile, "a");
- if (!indexfile)
- return error("Unable to open local file %s for pack index",
- filename);
-
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, indexfile);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
- slot->local = indexfile;
-
- /* If there is data present from a previous transfer attempt,
- resume where it left off */
- prev_posn = ftell(indexfile);
- if (prev_posn>0) {
- if (get_verbosely)
- fprintf(stderr,
- "Resuming fetch of index for pack %s at byte %ld\n",
- hex, prev_posn);
- sprintf(range, "Range: bytes=%ld-", prev_posn);
- range_header = curl_slist_append(range_header, range);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
- }
-
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK) {
- fclose(indexfile);
- return error("Unable to get pack index %s\n%s", url,
- curl_errorstr);
- }
- } else {
- fclose(indexfile);
- return error("Unable to start request");
- }
-
- fclose(indexfile);
-
- return move_temp_to_file(tmpfile, filename);
-}
-
-static int setup_index(struct alt_base *repo, unsigned char *sha1)
-{
- struct packed_git *new_pack;
- if (has_pack_file(sha1))
- return 0; /* don't list this as something we can get */
-
- if (fetch_index(repo, sha1))
- return -1;
-
- new_pack = parse_pack_index(sha1);
- new_pack->next = repo->packs;
- repo->packs = new_pack;
- return 0;
-}
-
-static void process_alternates_response(void *callback_data)
-{
- struct alternates_request *alt_req =
- (struct alternates_request *)callback_data;
- struct active_request_slot *slot = alt_req->slot;
- struct alt_base *tail = alt;
- const char *base = alt_req->base;
- static const char null_byte = '\0';
- char *data;
- int i = 0;
-
- if (alt_req->http_specific) {
- if (slot->curl_result != CURLE_OK ||
- !alt_req->buffer->posn) {
-
- /* Try reusing the slot to get non-http alternates */
- alt_req->http_specific = 0;
- sprintf(alt_req->url, "%s/objects/info/alternates",
- base);
- curl_easy_setopt(slot->curl, CURLOPT_URL,
- alt_req->url);
- active_requests++;
- slot->in_use = 1;
- if (slot->finished != NULL)
- (*slot->finished) = 0;
- if (!start_active_slot(slot)) {
- got_alternates = -1;
- slot->in_use = 0;
- if (slot->finished != NULL)
- (*slot->finished) = 1;
- }
- return;
- }
- } else if (slot->curl_result != CURLE_OK) {
- if (!missing_target(slot)) {
- got_alternates = -1;
- return;
- }
- }
-
- fwrite_buffer(&null_byte, 1, 1, alt_req->buffer);
- alt_req->buffer->posn--;
- data = alt_req->buffer->buffer;
-
- while (i < alt_req->buffer->posn) {
- int posn = i;
- while (posn < alt_req->buffer->posn && data[posn] != '\n')
- posn++;
- if (data[posn] == '\n') {
- int okay = 0;
- int serverlen = 0;
- struct alt_base *newalt;
- char *target = NULL;
- if (data[i] == '/') {
- /* This counts
- * http://git.host/pub/scm/linux.git/
- * -----------here^
- * so memcpy(dst, base, serverlen) will
- * copy up to "...git.host".
- */
- const char *colon_ss = strstr(base,"://");
- if (colon_ss) {
- serverlen = (strchr(colon_ss + 3, '/')
- - base);
- okay = 1;
- }
- } else if (!memcmp(data + i, "../", 3)) {
- /* Relative URL; chop the corresponding
- * number of subpath from base (and ../
- * from data), and concatenate the result.
- *
- * The code first drops ../ from data, and
- * then drops one ../ from data and one path
- * from base. IOW, one extra ../ is dropped
- * from data than path is dropped from base.
- *
- * This is not wrong. The alternate in
- * http://git.host/pub/scm/linux.git/
- * to borrow from
- * http://git.host/pub/scm/linus.git/
- * is ../../linus.git/objects/. You need
- * two ../../ to borrow from your direct
- * neighbour.
- */
- i += 3;
- serverlen = strlen(base);
- while (i + 2 < posn &&
- !memcmp(data + i, "../", 3)) {
- do {
- serverlen--;
- } while (serverlen &&
- base[serverlen - 1] != '/');
- i += 3;
- }
- /* If the server got removed, give up. */
- okay = strchr(base, ':') - base + 3 <
- serverlen;
- } else if (alt_req->http_specific) {
- char *colon = strchr(data + i, ':');
- char *slash = strchr(data + i, '/');
- if (colon && slash && colon < data + posn &&
- slash < data + posn && colon < slash) {
- okay = 1;
- }
- }
- /* skip "objects\n" at end */
- if (okay) {
- target = xmalloc(serverlen + posn - i - 6);
- memcpy(target, base, serverlen);
- memcpy(target + serverlen, data + i,
- posn - i - 7);
- target[serverlen + posn - i - 7] = 0;
- if (get_verbosely)
- fprintf(stderr,
- "Also look at %s\n", target);
- newalt = xmalloc(sizeof(*newalt));
- newalt->next = NULL;
- newalt->base = target;
- newalt->got_indices = 0;
- newalt->packs = NULL;
-
- while (tail->next != NULL)
- tail = tail->next;
- tail->next = newalt;
- }
- }
- i = posn + 1;
- }
-
- got_alternates = 1;
-}
-
-static void fetch_alternates(const char *base)
-{
- struct buffer buffer;
- char *url;
- char *data;
- struct active_request_slot *slot;
- struct alternates_request alt_req;
-
- /* If another request has already started fetching alternates,
- wait for them to arrive and return to processing this request's
- curl message */
-#ifdef USE_CURL_MULTI
- while (got_alternates == 0) {
- step_active_slots();
- }
-#endif
-
- /* Nothing to do if they've already been fetched */
- if (got_alternates == 1)
- return;
-
- /* Start the fetch */
- got_alternates = 0;
-
- data = xmalloc(4096);
- buffer.size = 4096;
- buffer.posn = 0;
- buffer.buffer = data;
-
- if (get_verbosely)
- fprintf(stderr, "Getting alternates list for %s\n", base);
-
- url = xmalloc(strlen(base) + 31);
- sprintf(url, "%s/objects/info/http-alternates", base);
-
- /* Use a callback to process the result, since another request
- may fail and need to have alternates loaded before continuing */
- slot = get_active_slot();
- slot->callback_func = process_alternates_response;
- slot->callback_data = &alt_req;
-
- curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
-
- alt_req.base = base;
- alt_req.url = url;
- alt_req.buffer = &buffer;
- alt_req.http_specific = 1;
- alt_req.slot = slot;
-
- if (start_active_slot(slot))
- run_active_slot(slot);
- else
- got_alternates = -1;
-
- free(data);
- free(url);
-}
-
-static int fetch_indices(struct alt_base *repo)
-{
- unsigned char sha1[20];
- char *url;
- struct buffer buffer;
- char *data;
- int i = 0;
-
- struct active_request_slot *slot;
- struct slot_results results;
-
- if (repo->got_indices)
- return 0;
-
- data = xmalloc(4096);
- buffer.size = 4096;
- buffer.posn = 0;
- buffer.buffer = data;
-
- if (get_verbosely)
- fprintf(stderr, "Getting pack list for %s\n", repo->base);
-
- url = xmalloc(strlen(repo->base) + 21);
- sprintf(url, "%s/objects/info/packs", repo->base);
-
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK) {
- if (missing_target(&results)) {
- repo->got_indices = 1;
- free(buffer.buffer);
- return 0;
- } else {
- repo->got_indices = 0;
- free(buffer.buffer);
- return error("%s", curl_errorstr);
- }
- }
- } else {
- repo->got_indices = 0;
- free(buffer.buffer);
- return error("Unable to start request");
- }
-
- data = buffer.buffer;
- while (i < buffer.posn) {
- switch (data[i]) {
- case 'P':
- i++;
- if (i + 52 <= buffer.posn &&
- !prefixcmp(data + i, " pack-") &&
- !prefixcmp(data + i + 46, ".pack\n")) {
- get_sha1_hex(data + i + 6, sha1);
- setup_index(repo, sha1);
- i += 51;
- break;
- }
- default:
- while (i < buffer.posn && data[i] != '\n')
- i++;
- }
- i++;
- }
-
- free(buffer.buffer);
- repo->got_indices = 1;
- return 0;
-}
-
-static int fetch_pack(struct alt_base *repo, unsigned char *sha1)
-{
- char *url;
- struct packed_git *target;
- struct packed_git **lst;
- FILE *packfile;
- char *filename;
- char tmpfile[PATH_MAX];
- int ret;
- long prev_posn = 0;
- char range[RANGE_HEADER_SIZE];
- struct curl_slist *range_header = NULL;
-
- struct active_request_slot *slot;
- struct slot_results results;
-
- if (fetch_indices(repo))
- return -1;
- target = find_sha1_pack(sha1, repo->packs);
- if (!target)
- return -1;
-
- if (get_verbosely) {
- fprintf(stderr, "Getting pack %s\n",
- sha1_to_hex(target->sha1));
- fprintf(stderr, " which contains %s\n",
- sha1_to_hex(sha1));
- }
-
- url = xmalloc(strlen(repo->base) + 65);
- sprintf(url, "%s/objects/pack/pack-%s.pack",
- repo->base, sha1_to_hex(target->sha1));
-
- filename = sha1_pack_name(target->sha1);
- snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
- packfile = fopen(tmpfile, "a");
- if (!packfile)
- return error("Unable to open local file %s for pack",
- filename);
-
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, packfile);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
- slot->local = packfile;
-
- /* If there is data present from a previous transfer attempt,
- resume where it left off */
- prev_posn = ftell(packfile);
- if (prev_posn>0) {
- if (get_verbosely)
- fprintf(stderr,
- "Resuming fetch of pack %s at byte %ld\n",
- sha1_to_hex(target->sha1), prev_posn);
- sprintf(range, "Range: bytes=%ld-", prev_posn);
- range_header = curl_slist_append(range_header, range);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
- }
-
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK) {
- fclose(packfile);
- return error("Unable to get pack file %s\n%s", url,
- curl_errorstr);
- }
- } else {
- fclose(packfile);
- return error("Unable to start request");
- }
-
- target->pack_size = ftell(packfile);
- fclose(packfile);
-
- ret = move_temp_to_file(tmpfile, filename);
- if (ret)
- return ret;
-
- lst = &repo->packs;
- while (*lst != target)
- lst = &((*lst)->next);
- *lst = (*lst)->next;
-
- if (verify_pack(target, 0))
- return -1;
- install_packed_git(target);
-
- return 0;
-}
-
-static void abort_object_request(struct object_request *obj_req)
-{
- if (obj_req->local >= 0) {
- close(obj_req->local);
- obj_req->local = -1;
- }
- unlink(obj_req->tmpfile);
- if (obj_req->slot) {
- release_active_slot(obj_req->slot);
- obj_req->slot = NULL;
- }
- release_object_request(obj_req);
-}
-
-static int fetch_object(struct alt_base *repo, unsigned char *sha1)
-{
- char *hex = sha1_to_hex(sha1);
- int ret = 0;
- struct object_request *obj_req = object_queue_head;
-
- while (obj_req != NULL && hashcmp(obj_req->sha1, sha1))
- obj_req = obj_req->next;
- if (obj_req == NULL)
- return error("Couldn't find request for %s in the queue", hex);
-
- if (has_sha1_file(obj_req->sha1)) {
- abort_object_request(obj_req);
- return 0;
- }
-
-#ifdef USE_CURL_MULTI
- while (obj_req->state == WAITING) {
- step_active_slots();
- }
-#else
- start_object_request(obj_req);
-#endif
-
- while (obj_req->state == ACTIVE) {
- run_active_slot(obj_req->slot);
- }
- if (obj_req->local != -1) {
- close(obj_req->local); obj_req->local = -1;
- }
-
- if (obj_req->state == ABORTED) {
- ret = error("Request for %s aborted", hex);
- } else if (obj_req->curl_result != CURLE_OK &&
- obj_req->http_code != 416) {
- if (missing_target(obj_req))
- ret = -1; /* Be silent, it is probably in a pack. */
- else
- ret = error("%s (curl_result = %d, http_code = %ld, sha1 = %s)",
- obj_req->errorstr, obj_req->curl_result,
- obj_req->http_code, hex);
- } else if (obj_req->zret != Z_STREAM_END) {
- corrupt_object_found++;
- ret = error("File %s (%s) corrupt", hex, obj_req->url);
- } else if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
- ret = error("File %s has bad hash", hex);
- } else if (obj_req->rename < 0) {
- ret = error("unable to write sha1 filename %s",
- obj_req->filename);
- }
-
- release_object_request(obj_req);
- return ret;
-}
-
-int fetch(unsigned char *sha1)
-{
- struct alt_base *altbase = alt;
-
- if (!fetch_object(altbase, sha1))
- return 0;
- while (altbase) {
- if (!fetch_pack(altbase, sha1))
- return 0;
- fetch_alternates(alt->base);
- altbase = altbase->next;
- }
- return error("Unable to find %s under %s", sha1_to_hex(sha1),
- alt->base);
-}
-
-static inline int needs_quote(int ch)
-{
- if (((ch >= 'A') && (ch <= 'Z'))
- || ((ch >= 'a') && (ch <= 'z'))
- || ((ch >= '0') && (ch <= '9'))
- || (ch == '/')
- || (ch == '-')
- || (ch == '.'))
- return 0;
- return 1;
-}
-
-static inline int hex(int v)
-{
- if (v < 10) return '0' + v;
- else return 'A' + v - 10;
-}
-
-static char *quote_ref_url(const char *base, const char *ref)
-{
- const char *cp;
- char *dp, *qref;
- int len, baselen, ch;
-
- baselen = strlen(base);
- len = baselen + 7; /* "/refs/" + NUL */
- for (cp = ref; (ch = *cp) != 0; cp++, len++)
- if (needs_quote(ch))
- len += 2; /* extra two hex plus replacement % */
- qref = xmalloc(len);
- memcpy(qref, base, baselen);
- memcpy(qref + baselen, "/refs/", 6);
- for (cp = ref, dp = qref + baselen + 6; (ch = *cp) != 0; cp++) {
- if (needs_quote(ch)) {
- *dp++ = '%';
- *dp++ = hex((ch >> 4) & 0xF);
- *dp++ = hex(ch & 0xF);
- }
- else
- *dp++ = ch;
- }
- *dp = 0;
-
- return qref;
-}
-
-int fetch_ref(char *ref, unsigned char *sha1)
-{
- char *url;
- char hex[42];
- struct buffer buffer;
- const char *base = alt->base;
- struct active_request_slot *slot;
- struct slot_results results;
- buffer.size = 41;
- buffer.posn = 0;
- buffer.buffer = hex;
- hex[41] = '\0';
-
- url = quote_ref_url(base, ref);
- slot = get_active_slot();
- slot->results = &results;
- curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (results.curl_result != CURLE_OK)
- return error("Couldn't get %s for %s\n%s",
- url, ref, curl_errorstr);
- } else {
- return error("Unable to start request");
- }
-
- hex[40] = '\0';
- get_sha1_hex(hex, sha1);
- return 0;
-}
-
-int main(int argc, const char **argv)
-{
- int commits;
- const char **write_ref = NULL;
- char **commit_id;
- const char *url;
- char *s;
- int arg = 1;
- int rc = 0;
-
- setup_git_directory();
- git_config(git_default_config);
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 't') {
- get_tree = 1;
- } else if (argv[arg][1] == 'c') {
- get_history = 1;
- } else if (argv[arg][1] == 'a') {
- get_all = 1;
- get_tree = 1;
- get_history = 1;
- } else if (argv[arg][1] == 'v') {
- get_verbosely = 1;
- } else if (argv[arg][1] == 'w') {
- write_ref = &argv[arg + 1];
- arg++;
- } else if (!strcmp(argv[arg], "--recover")) {
- get_recover = 1;
- } else if (!strcmp(argv[arg], "--stdin")) {
- commits_on_stdin = 1;
- }
- arg++;
- }
- if (argc < arg + 2 - commits_on_stdin) {
- usage("git-http-fetch [-c] [-t] [-a] [-v] [--recover] [-w ref] [--stdin] commit-id url");
- return 1;
- }
- if (commits_on_stdin) {
- commits = pull_targets_stdin(&commit_id, &write_ref);
- } else {
- commit_id = (char **) &argv[arg++];
- commits = 1;
- }
- url = argv[arg];
-
- http_init();
-
- no_pragma_header = curl_slist_append(no_pragma_header, "Pragma:");
-
- alt = xmalloc(sizeof(*alt));
- alt->base = xmalloc(strlen(url) + 1);
- strcpy(alt->base, url);
- for (s = alt->base + strlen(alt->base) - 1; *s == '/'; --s)
- *s = 0;
- alt->got_indices = 0;
- alt->packs = NULL;
- alt->next = NULL;
-
- if (pull(commits, commit_id, write_ref, url))
- rc = 1;
-
- http_cleanup();
-
- curl_slist_free_all(no_pragma_header);
-
- if (commits_on_stdin)
- pull_targets_free(commits, commit_id, write_ref);
-
- if (corrupt_object_found) {
- fprintf(stderr,
-"Some loose object were found to be corrupt, but they might be just\n"
-"a false '404 Not Found' error message sent with incorrect HTTP\n"
-"status code. Suggest running git-fsck.\n");
- }
- return rc;
-}
#include "cache.h"
#include "commit.h"
#include "pack.h"
-#include "fetch.h"
#include "tag.h"
#include "blob.h"
#include "http.h"
#include <expat.h>
static const char http_push_usage[] =
-"git-http-push [--all] [--force] [--verbose] <remote> [<head>...]\n";
+"git-http-push [--all] [--dry-run] [--force] [--verbose] <remote> [<head>...]\n";
#ifndef XML_STATUS_OK
enum XML_Status {
static int push_verbosely;
static int push_all;
static int force_all;
+static int dry_run;
static struct object_list *objects;
}
#ifdef USE_CURL_MULTI
-void fill_active_slots(void)
+static int fill_active_slot(void *unused)
{
struct transfer_request *request = request_queue_head;
- struct transfer_request *next;
- struct active_request_slot *slot = active_queue_head;
- int num_transfers;
if (aborted)
- return;
+ return 0;
- while (active_requests < max_requests && request != NULL) {
- next = request->next;
+ for (request = request_queue_head; request; request = request->next) {
if (request->state == NEED_FETCH) {
start_fetch_loose(request);
+ return 1;
} else if (pushing && request->state == NEED_PUSH) {
if (remote_dir_exists[request->obj->sha1[0]] == 1) {
start_put(request);
} else {
start_mkcol(request);
}
- curl_multi_perform(curlm, &num_transfers);
- }
- request = next;
- }
-
- while (slot != NULL) {
- if (!slot->in_use && slot->curl != NULL) {
- curl_easy_cleanup(slot->curl);
- slot->curl = NULL;
+ return 1;
}
- slot = slot->next;
}
+ return 0;
}
#endif
force_all = 1;
continue;
}
+ if (!strcmp(arg, "--dry-run")) {
+ dry_run = 1;
+ continue;
+ }
if (!strcmp(arg, "--verbose")) {
push_verbosely = 1;
continue;
if (strcmp(ref->name, ref->peer_ref->name))
fprintf(stderr, " using '%s'", ref->peer_ref->name);
fprintf(stderr, "\n from %s\n to %s\n", old_hex, new_hex);
-
+ if (dry_run)
+ continue;
/* Lock remote branch ref */
ref_lock = lock_remote(ref->name, LOCK_TIME);
objects_to_send);
#ifdef USE_CURL_MULTI
fill_active_slots();
+ add_fill_function(NULL, fill_active_slot);
#endif
finish_all_active_slots();
if (remote->has_info_refs && new_refs) {
if (info_ref_lock && remote->can_update_info_refs) {
fprintf(stderr, "Updating remote server info\n");
- update_remote_info_refs(info_ref_lock);
+ if (!dry_run)
+ update_remote_info_refs(info_ref_lock);
} else {
fprintf(stderr, "Unable to update server info\n");
}
--- /dev/null
+#include "cache.h"
+#include "commit.h"
+#include "pack.h"
+#include "walker.h"
+#include "http.h"
+
+#define PREV_BUF_SIZE 4096
+#define RANGE_HEADER_SIZE 30
+
+struct alt_base
+{
+ char *base;
+ int got_indices;
+ struct packed_git *packs;
+ struct alt_base *next;
+};
+
+enum object_request_state {
+ WAITING,
+ ABORTED,
+ ACTIVE,
+ COMPLETE,
+};
+
+struct object_request
+{
+ struct walker *walker;
+ unsigned char sha1[20];
+ struct alt_base *repo;
+ char *url;
+ char filename[PATH_MAX];
+ char tmpfile[PATH_MAX];
+ int local;
+ enum object_request_state state;
+ CURLcode curl_result;
+ char errorstr[CURL_ERROR_SIZE];
+ long http_code;
+ unsigned char real_sha1[20];
+ SHA_CTX c;
+ z_stream stream;
+ int zret;
+ int rename;
+ struct active_request_slot *slot;
+ struct object_request *next;
+};
+
+struct alternates_request {
+ struct walker *walker;
+ const char *base;
+ char *url;
+ struct buffer *buffer;
+ struct active_request_slot *slot;
+ int http_specific;
+};
+
+struct walker_data {
+ const char *url;
+ int got_alternates;
+ struct alt_base *alt;
+ struct curl_slist *no_pragma_header;
+};
+
+static struct object_request *object_queue_head;
+
+static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
+ void *data)
+{
+ unsigned char expn[4096];
+ size_t size = eltsize * nmemb;
+ int posn = 0;
+ struct object_request *obj_req = (struct object_request *)data;
+ do {
+ ssize_t retval = xwrite(obj_req->local,
+ (char *) ptr + posn, size - posn);
+ if (retval < 0)
+ return posn;
+ posn += retval;
+ } while (posn < size);
+
+ obj_req->stream.avail_in = size;
+ obj_req->stream.next_in = ptr;
+ do {
+ obj_req->stream.next_out = expn;
+ obj_req->stream.avail_out = sizeof(expn);
+ obj_req->zret = inflate(&obj_req->stream, Z_SYNC_FLUSH);
+ SHA1_Update(&obj_req->c, expn,
+ sizeof(expn) - obj_req->stream.avail_out);
+ } while (obj_req->stream.avail_in && obj_req->zret == Z_OK);
+ data_received++;
+ return size;
+}
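
The write callback above never sees the whole object at once: curl hands it
arbitrary-sized pieces, and each piece is pushed through one persistent
z_stream with Z_SYNC_FLUSH while the SHA-1 of the inflated bytes is updated
on the side. A minimal standalone sketch of that chunked inflate loop, using
zlib only (the SHA_CTX half is omitted, and the 5-byte chunk size is an
arbitrary stand-in for curl's buffer sizes; compile with -lz):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char src[] = "loose objects arrive deflated";
        unsigned char packed[128], expn[16], out[128];
        uLongf packedlen = sizeof(packed);
        size_t fed = 0, got = 0;
        int zret = Z_OK;
        z_stream inf;

        /* produce deflated input to stand in for the HTTP body */
        compress(packed, &packedlen, src, sizeof(src));

        memset(&inf, 0, sizeof(inf));
        inflateInit(&inf);

        /* feed a few bytes at a time, as if curl called us per chunk */
        while (fed < packedlen && zret == Z_OK) {
            size_t chunk = packedlen - fed < 5 ? packedlen - fed : 5;
            inf.avail_in = chunk;
            inf.next_in = packed + fed;
            do {
                /* same pattern as fwrite_sha1_file's inner loop */
                inf.next_out = expn;
                inf.avail_out = sizeof(expn);
                zret = inflate(&inf, Z_SYNC_FLUSH);
                memcpy(out + got, expn, sizeof(expn) - inf.avail_out);
                got += sizeof(expn) - inf.avail_out;
            } while (inf.avail_in && zret == Z_OK);
            fed += chunk;
        }
        inflateEnd(&inf);
        printf("%s (complete: %d)\n", out, zret == Z_STREAM_END);
        return 0;
    }
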
+
+static int missing__target(int code, int result)
+{
+ return /* file:// URL -- do we ever use one??? */
+ (result == CURLE_FILE_COULDNT_READ_FILE) ||
+ /* http:// and https:// URL */
+ (code == 404 && result == CURLE_HTTP_RETURNED_ERROR) ||
+ /* ftp:// URL */
+ (code == 550 && result == CURLE_FTP_COULDNT_RETR_FILE)
+ ;
+}
+
+#define missing_target(a) missing__target((a)->http_code, (a)->curl_result)
+
+static void fetch_alternates(struct walker *walker, const char *base);
+
+static void process_object_response(void *callback_data);
+
+static void start_object_request(struct walker *walker,
+ struct object_request *obj_req)
+{
+ char *hex = sha1_to_hex(obj_req->sha1);
+ char prevfile[PATH_MAX];
+ char *url;
+ char *posn;
+ int prevlocal;
+ unsigned char prev_buf[PREV_BUF_SIZE];
+ ssize_t prev_read = 0;
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+ struct active_request_slot *slot;
+ struct walker_data *data = walker->data;
+
+ snprintf(prevfile, sizeof(prevfile), "%s.prev", obj_req->filename);
+ unlink(prevfile);
+ rename(obj_req->tmpfile, prevfile);
+ unlink(obj_req->tmpfile);
+
+ if (obj_req->local != -1)
+ error("fd leakage in start: %d", obj_req->local);
+ obj_req->local = open(obj_req->tmpfile,
+ O_WRONLY | O_CREAT | O_EXCL, 0666);
+ /* This could have failed due to the "lazy directory creation";
+ * try to mkdir the last path component.
+ */
+ if (obj_req->local < 0 && errno == ENOENT) {
+ char *dir = strrchr(obj_req->tmpfile, '/');
+ if (dir) {
+ *dir = 0;
+ mkdir(obj_req->tmpfile, 0777);
+ *dir = '/';
+ }
+ obj_req->local = open(obj_req->tmpfile,
+ O_WRONLY | O_CREAT | O_EXCL, 0666);
+ }
+
+ if (obj_req->local < 0) {
+ obj_req->state = ABORTED;
+ error("Couldn't create temporary file %s for %s: %s",
+ obj_req->tmpfile, obj_req->filename, strerror(errno));
+ return;
+ }
+
+ memset(&obj_req->stream, 0, sizeof(obj_req->stream));
+
+ inflateInit(&obj_req->stream);
+
+ SHA1_Init(&obj_req->c);
+
+ url = xmalloc(strlen(obj_req->repo->base) + 51);
+ obj_req->url = xmalloc(strlen(obj_req->repo->base) + 51);
+ strcpy(url, obj_req->repo->base);
+ posn = url + strlen(obj_req->repo->base);
+ strcpy(posn, "/objects/");
+ posn += 9;
+ memcpy(posn, hex, 2);
+ posn += 2;
+ *(posn++) = '/';
+ strcpy(posn, hex + 2);
+ strcpy(obj_req->url, url);
+
+ /* If a previous temp file is present, process what was already
+ fetched. */
+ prevlocal = open(prevfile, O_RDONLY);
+ if (prevlocal != -1) {
+ do {
+ prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
+ if (prev_read>0) {
+ if (fwrite_sha1_file(prev_buf,
+ 1,
+ prev_read,
+ obj_req) == prev_read) {
+ prev_posn += prev_read;
+ } else {
+ prev_read = -1;
+ }
+ }
+ } while (prev_read > 0);
+ close(prevlocal);
+ }
+ unlink(prevfile);
+
+ /* Reset inflate/SHA1 if there was an error reading the previous temp
+ file; also rewind to the beginning of the local file. */
+ if (prev_read == -1) {
+ memset(&obj_req->stream, 0, sizeof(obj_req->stream));
+ inflateInit(&obj_req->stream);
+ SHA1_Init(&obj_req->c);
+ if (prev_posn>0) {
+ prev_posn = 0;
+ lseek(obj_req->local, 0, SEEK_SET);
+ ftruncate(obj_req->local, 0);
+ }
+ }
+
+ slot = get_active_slot();
+ slot->callback_func = process_object_response;
+ slot->callback_data = obj_req;
+ obj_req->slot = slot;
+
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, obj_req);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
+ curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, obj_req->errorstr);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
+
+ /* If we have successfully processed data from a previous fetch
+ attempt, only fetch the data we don't already have. */
+ if (prev_posn>0) {
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of object %s at byte %ld\n",
+ hex, prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl,
+ CURLOPT_HTTPHEADER, range_header);
+ }
+
+ /* Try to get the request started, abort the request on error */
+ obj_req->state = ACTIVE;
+ if (!start_active_slot(slot)) {
+ obj_req->state = ABORTED;
+ obj_req->slot = NULL;
+ close(obj_req->local); obj_req->local = -1;
+ free(obj_req->url);
+ return;
+ }
+}
+
+static void finish_object_request(struct object_request *obj_req)
+{
+ struct stat st;
+
+ fchmod(obj_req->local, 0444);
+ close(obj_req->local); obj_req->local = -1;
+
+ if (obj_req->http_code == 416) {
+ fprintf(stderr, "Warning: requested range invalid; we may already have all the data.\n");
+ } else if (obj_req->curl_result != CURLE_OK) {
+ if (stat(obj_req->tmpfile, &st) == 0)
+ if (st.st_size == 0)
+ unlink(obj_req->tmpfile);
+ return;
+ }
+
+ inflateEnd(&obj_req->stream);
+ SHA1_Final(obj_req->real_sha1, &obj_req->c);
+ if (obj_req->zret != Z_STREAM_END) {
+ unlink(obj_req->tmpfile);
+ return;
+ }
+ if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
+ unlink(obj_req->tmpfile);
+ return;
+ }
+ obj_req->rename =
+ move_temp_to_file(obj_req->tmpfile, obj_req->filename);
+
+ if (obj_req->rename == 0)
+ walker_say(obj_req->walker, "got %s\n", sha1_to_hex(obj_req->sha1));
+}
+
+static void process_object_response(void *callback_data)
+{
+ struct object_request *obj_req =
+ (struct object_request *)callback_data;
+ struct walker *walker = obj_req->walker;
+ struct walker_data *data = walker->data;
+ struct alt_base *alt = data->alt;
+
+ obj_req->curl_result = obj_req->slot->curl_result;
+ obj_req->http_code = obj_req->slot->http_code;
+ obj_req->slot = NULL;
+ obj_req->state = COMPLETE;
+
+ /* Use alternates if necessary */
+ if (missing_target(obj_req)) {
+ fetch_alternates(walker, alt->base);
+ if (obj_req->repo->next != NULL) {
+ obj_req->repo =
+ obj_req->repo->next;
+ close(obj_req->local);
+ obj_req->local = -1;
+ start_object_request(walker, obj_req);
+ return;
+ }
+ }
+
+ finish_object_request(obj_req);
+}
+
+static void release_object_request(struct object_request *obj_req)
+{
+ struct object_request *entry = object_queue_head;
+
+ if (obj_req->local != -1)
+ error("fd leakage in release: %d", obj_req->local);
+ if (obj_req == object_queue_head) {
+ object_queue_head = obj_req->next;
+ } else {
+ while (entry->next != NULL && entry->next != obj_req)
+ entry = entry->next;
+ if (entry->next == obj_req)
+ entry->next = entry->next->next;
+ }
+
+ free(obj_req->url);
+ free(obj_req);
+}
+
+#ifdef USE_CURL_MULTI
+static int fill_active_slot(struct walker *walker)
+{
+ struct object_request *obj_req;
+
+ for (obj_req = object_queue_head; obj_req; obj_req = obj_req->next) {
+ if (obj_req->state == WAITING) {
+ if (has_sha1_file(obj_req->sha1))
+ obj_req->state = COMPLETE;
+ else {
+ start_object_request(walker, obj_req);
+ return 1;
+ }
+ }
+ }
+ return 0;
+}
+#endif
+
+static void prefetch(struct walker *walker, unsigned char *sha1)
+{
+ struct object_request *newreq;
+ struct object_request *tail;
+ struct walker_data *data = walker->data;
+ char *filename = sha1_file_name(sha1);
+
+ newreq = xmalloc(sizeof(*newreq));
+ newreq->walker = walker;
+ hashcpy(newreq->sha1, sha1);
+ newreq->repo = data->alt;
+ newreq->url = NULL;
+ newreq->local = -1;
+ newreq->state = WAITING;
+ snprintf(newreq->filename, sizeof(newreq->filename), "%s", filename);
+ snprintf(newreq->tmpfile, sizeof(newreq->tmpfile),
+ "%s.temp", filename);
+ newreq->slot = NULL;
+ newreq->next = NULL;
+
+ if (object_queue_head == NULL) {
+ object_queue_head = newreq;
+ } else {
+ tail = object_queue_head;
+ while (tail->next != NULL) {
+ tail = tail->next;
+ }
+ tail->next = newreq;
+ }
+
+#ifdef USE_CURL_MULTI
+ fill_active_slots();
+ step_active_slots();
+#endif
+}
+
+static int fetch_index(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ char *hex = sha1_to_hex(sha1);
+ char *filename;
+ char *url;
+ char tmpfile[PATH_MAX];
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+ struct walker_data *data = walker->data;
+
+ FILE *indexfile;
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ if (has_pack_index(sha1))
+ return 0;
+
+ if (walker->get_verbosely)
+ fprintf(stderr, "Getting index for pack %s\n", hex);
+
+ url = xmalloc(strlen(repo->base) + 64);
+ sprintf(url, "%s/objects/pack/pack-%s.idx", repo->base, hex);
+
+ filename = sha1_pack_index_name(sha1);
+ snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
+ indexfile = fopen(tmpfile, "a");
+ if (!indexfile)
+ return error("Unable to open local file %s for pack index",
+ filename);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, indexfile);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
+ slot->local = indexfile;
+
+ /* If there is data present from a previous transfer attempt,
+ resume where it left off */
+ prev_posn = ftell(indexfile);
+ if (prev_posn>0) {
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of index for pack %s at byte %ld\n",
+ hex, prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
+ }
+
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ fclose(indexfile);
+ return error("Unable to get pack index %s\n%s", url,
+ curl_errorstr);
+ }
+ } else {
+ fclose(indexfile);
+ return error("Unable to start request");
+ }
+
+ fclose(indexfile);
+
+ return move_temp_to_file(tmpfile, filename);
+}
+
+static int setup_index(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ struct packed_git *new_pack;
+ if (has_pack_file(sha1))
+ return 0; /* don't list this as something we can get */
+
+ if (fetch_index(walker, repo, sha1))
+ return -1;
+
+ new_pack = parse_pack_index(sha1);
+ new_pack->next = repo->packs;
+ repo->packs = new_pack;
+ return 0;
+}
+
+static void process_alternates_response(void *callback_data)
+{
+ struct alternates_request *alt_req =
+ (struct alternates_request *)callback_data;
+ struct walker *walker = alt_req->walker;
+ struct walker_data *cdata = walker->data;
+ struct active_request_slot *slot = alt_req->slot;
+ struct alt_base *tail = cdata->alt;
+ const char *base = alt_req->base;
+ static const char null_byte = '\0';
+ char *data;
+ int i = 0;
+
+ if (alt_req->http_specific) {
+ if (slot->curl_result != CURLE_OK ||
+ !alt_req->buffer->posn) {
+
+ /* Try reusing the slot to get non-http alternates */
+ alt_req->http_specific = 0;
+ sprintf(alt_req->url, "%s/objects/info/alternates",
+ base);
+ curl_easy_setopt(slot->curl, CURLOPT_URL,
+ alt_req->url);
+ active_requests++;
+ slot->in_use = 1;
+ if (slot->finished != NULL)
+ (*slot->finished) = 0;
+ if (!start_active_slot(slot)) {
+ cdata->got_alternates = -1;
+ slot->in_use = 0;
+ if (slot->finished != NULL)
+ (*slot->finished) = 1;
+ }
+ return;
+ }
+ } else if (slot->curl_result != CURLE_OK) {
+ if (!missing_target(slot)) {
+ cdata->got_alternates = -1;
+ return;
+ }
+ }
+
+ fwrite_buffer(&null_byte, 1, 1, alt_req->buffer);
+ alt_req->buffer->posn--;
+ data = alt_req->buffer->buffer;
+
+ while (i < alt_req->buffer->posn) {
+ int posn = i;
+ while (posn < alt_req->buffer->posn && data[posn] != '\n')
+ posn++;
+ if (data[posn] == '\n') {
+ int okay = 0;
+ int serverlen = 0;
+ struct alt_base *newalt;
+ char *target = NULL;
+ if (data[i] == '/') {
+ /* This counts
+ * http://git.host/pub/scm/linux.git/
+ * -----------here^
+ * so memcpy(dst, base, serverlen) will
+ * copy up to "...git.host".
+ */
+ const char *colon_ss = strstr(base,"://");
+ if (colon_ss) {
+ serverlen = (strchr(colon_ss + 3, '/')
+ - base);
+ okay = 1;
+ }
+ } else if (!memcmp(data + i, "../", 3)) {
+ /* Relative URL; chop the corresponding
+ * number of subpaths from base (and ../
+ * from data), and concatenate the result.
+ *
+ * The code first drops ../ from data, and
+ * then drops one ../ from data and one path
+ * from base. IOW, one more ../ is dropped
+ * from data than paths are dropped from base.
+ *
+ * This is not wrong. The alternate in
+ * http://git.host/pub/scm/linux.git/
+ * to borrow from
+ * http://git.host/pub/scm/linus.git/
+ * is ../../linus.git/objects/. You need
+ * two ../../ to borrow from your direct
+ * neighbour.
+ */
+ i += 3;
+ serverlen = strlen(base);
+ while (i + 2 < posn &&
+ !memcmp(data + i, "../", 3)) {
+ do {
+ serverlen--;
+ } while (serverlen &&
+ base[serverlen - 1] != '/');
+ i += 3;
+ }
+ /* If the server got removed, give up. */
+ okay = strchr(base, ':') - base + 3 <
+ serverlen;
+ } else if (alt_req->http_specific) {
+ char *colon = strchr(data + i, ':');
+ char *slash = strchr(data + i, '/');
+ if (colon && slash && colon < data + posn &&
+ slash < data + posn && colon < slash) {
+ okay = 1;
+ }
+ }
+ /* skip "objects\n" at end */
+ if (okay) {
+ target = xmalloc(serverlen + posn - i - 6);
+ memcpy(target, base, serverlen);
+ memcpy(target + serverlen, data + i,
+ posn - i - 7);
+ target[serverlen + posn - i - 7] = 0;
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Also look at %s\n", target);
+ newalt = xmalloc(sizeof(*newalt));
+ newalt->next = NULL;
+ newalt->base = target;
+ newalt->got_indices = 0;
+ newalt->packs = NULL;
+
+ while (tail->next != NULL)
+ tail = tail->next;
+ tail->next = newalt;
+ }
+ }
+ i = posn + 1;
+ }
+
+ cdata->got_alternates = 1;
+}
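
The "../" arithmetic above is easy to misread, so here is the same
computation in isolation. rebase_alt_url() is a hypothetical helper, not
part of this patch: it drops one "../" from the alternate outright, then
one path component from the base per remaining "../", and concatenates the
rest; the real code additionally chops the trailing "objects" component to
form the new base URL.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *rebase_alt_url(const char *base, const char *alt)
    {
        size_t serverlen = strlen(base);
        char *result;

        alt += 3;                          /* drop the first "../" */
        while (!strncmp(alt, "../", 3)) {
            do {                           /* drop one path component */
                serverlen--;
            } while (serverlen && base[serverlen - 1] != '/');
            alt += 3;
        }
        result = malloc(serverlen + strlen(alt) + 1);
        memcpy(result, base, serverlen);
        strcpy(result + serverlen, alt);
        return result;
    }

    int main(void)
    {
        /* two "../" reach a sibling repository of linux.git */
        char *url = rebase_alt_url("http://git.host/pub/scm/linux.git",
                                   "../../linus.git/objects/");
        /* prints http://git.host/pub/scm/linus.git/objects/ */
        printf("%s\n", url);
        free(url);
        return 0;
    }
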
+
+static void fetch_alternates(struct walker *walker, const char *base)
+{
+ struct buffer buffer;
+ char *url;
+ char *data;
+ struct active_request_slot *slot;
+ struct alternates_request alt_req;
+ struct walker_data *cdata = walker->data;
+
+ /* If another request has already started fetching alternates,
+ wait for them to arrive and return to processing this request's
+ curl message */
+#ifdef USE_CURL_MULTI
+ while (cdata->got_alternates == 0) {
+ step_active_slots();
+ }
+#endif
+
+ /* Nothing to do if they've already been fetched */
+ if (cdata->got_alternates == 1)
+ return;
+
+ /* Start the fetch */
+ cdata->got_alternates = 0;
+
+ data = xmalloc(4096);
+ buffer.size = 4096;
+ buffer.posn = 0;
+ buffer.buffer = data;
+
+ if (walker->get_verbosely)
+ fprintf(stderr, "Getting alternates list for %s\n", base);
+
+ url = xmalloc(strlen(base) + 31);
+ sprintf(url, "%s/objects/info/http-alternates", base);
+
+ /* Use a callback to process the result, since another request
+ may fail and need to have alternates loaded before continuing */
+ slot = get_active_slot();
+ slot->callback_func = process_alternates_response;
+ alt_req.walker = walker;
+ slot->callback_data = &alt_req;
+
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+
+ alt_req.base = base;
+ alt_req.url = url;
+ alt_req.buffer = &buffer;
+ alt_req.http_specific = 1;
+ alt_req.slot = slot;
+
+ if (start_active_slot(slot))
+ run_active_slot(slot);
+ else
+ cdata->got_alternates = -1;
+
+ free(data);
+ free(url);
+}
+
+static int fetch_indices(struct walker *walker, struct alt_base *repo)
+{
+ unsigned char sha1[20];
+ char *url;
+ struct buffer buffer;
+ char *data;
+ int i = 0;
+
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ if (repo->got_indices)
+ return 0;
+
+ data = xmalloc(4096);
+ buffer.size = 4096;
+ buffer.posn = 0;
+ buffer.buffer = data;
+
+ if (walker->get_verbosely)
+ fprintf(stderr, "Getting pack list for %s\n", repo->base);
+
+ url = xmalloc(strlen(repo->base) + 21);
+ sprintf(url, "%s/objects/info/packs", repo->base);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ if (missing_target(&results)) {
+ repo->got_indices = 1;
+ free(buffer.buffer);
+ return 0;
+ } else {
+ repo->got_indices = 0;
+ free(buffer.buffer);
+ return error("%s", curl_errorstr);
+ }
+ }
+ } else {
+ repo->got_indices = 0;
+ free(buffer.buffer);
+ return error("Unable to start request");
+ }
+
+ data = buffer.buffer;
+ while (i < buffer.posn) {
+ switch (data[i]) {
+ case 'P':
+ i++;
+ if (i + 52 <= buffer.posn &&
+ !prefixcmp(data + i, " pack-") &&
+ !prefixcmp(data + i + 46, ".pack\n")) {
+ get_sha1_hex(data + i + 6, sha1);
+ setup_index(walker, repo, sha1);
+ i += 51;
+ break;
+ }
+ default:
+ while (i < buffer.posn && data[i] != '\n')
+ i++;
+ }
+ i++;
+ }
+
+ free(buffer.buffer);
+ repo->got_indices = 1;
+ return 0;
+}
+
+static int fetch_pack(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ char *url;
+ struct packed_git *target;
+ struct packed_git **lst;
+ FILE *packfile;
+ char *filename;
+ char tmpfile[PATH_MAX];
+ int ret;
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+ struct walker_data *data = walker->data;
+
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ if (fetch_indices(walker, repo))
+ return -1;
+ target = find_sha1_pack(sha1, repo->packs);
+ if (!target)
+ return -1;
+
+ if (walker->get_verbosely) {
+ fprintf(stderr, "Getting pack %s\n",
+ sha1_to_hex(target->sha1));
+ fprintf(stderr, " which contains %s\n",
+ sha1_to_hex(sha1));
+ }
+
+ url = xmalloc(strlen(repo->base) + 65);
+ sprintf(url, "%s/objects/pack/pack-%s.pack",
+ repo->base, sha1_to_hex(target->sha1));
+
+ filename = sha1_pack_name(target->sha1);
+ snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
+ packfile = fopen(tmpfile, "a");
+ if (!packfile)
+ return error("Unable to open local file %s for pack",
+ filename);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, packfile);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
+ slot->local = packfile;
+
+ /* If there is data present from a previous transfer attempt,
+ resume where it left off */
+ prev_posn = ftell(packfile);
+ if (prev_posn>0) {
+ if (walker->get_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of pack %s at byte %ld\n",
+ sha1_to_hex(target->sha1), prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
+ }
+
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ fclose(packfile);
+ return error("Unable to get pack file %s\n%s", url,
+ curl_errorstr);
+ }
+ } else {
+ fclose(packfile);
+ return error("Unable to start request");
+ }
+
+ target->pack_size = ftell(packfile);
+ fclose(packfile);
+
+ ret = move_temp_to_file(tmpfile, filename);
+ if (ret)
+ return ret;
+
+ lst = &repo->packs;
+ while (*lst != target)
+ lst = &((*lst)->next);
+ *lst = (*lst)->next;
+
+ if (verify_pack(target, 0))
+ return -1;
+ install_packed_git(target);
+
+ return 0;
+}
+
+static void abort_object_request(struct object_request *obj_req)
+{
+ if (obj_req->local >= 0) {
+ close(obj_req->local);
+ obj_req->local = -1;
+ }
+ unlink(obj_req->tmpfile);
+ if (obj_req->slot) {
+ release_active_slot(obj_req->slot);
+ obj_req->slot = NULL;
+ }
+ release_object_request(obj_req);
+}
+
+static int fetch_object(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
+{
+ char *hex = sha1_to_hex(sha1);
+ int ret = 0;
+ struct object_request *obj_req = object_queue_head;
+
+ while (obj_req != NULL && hashcmp(obj_req->sha1, sha1))
+ obj_req = obj_req->next;
+ if (obj_req == NULL)
+ return error("Couldn't find request for %s in the queue", hex);
+
+ if (has_sha1_file(obj_req->sha1)) {
+ abort_object_request(obj_req);
+ return 0;
+ }
+
+#ifdef USE_CURL_MULTI
+ while (obj_req->state == WAITING) {
+ step_active_slots();
+ }
+#else
+ start_object_request(walker, obj_req);
+#endif
+
+ while (obj_req->state == ACTIVE) {
+ run_active_slot(obj_req->slot);
+ }
+ if (obj_req->local != -1) {
+ close(obj_req->local); obj_req->local = -1;
+ }
+
+ if (obj_req->state == ABORTED) {
+ ret = error("Request for %s aborted", hex);
+ } else if (obj_req->curl_result != CURLE_OK &&
+ obj_req->http_code != 416) {
+ if (missing_target(obj_req))
+ ret = -1; /* Be silent, it is probably in a pack. */
+ else
+ ret = error("%s (curl_result = %d, http_code = %ld, sha1 = %s)",
+ obj_req->errorstr, obj_req->curl_result,
+ obj_req->http_code, hex);
+ } else if (obj_req->zret != Z_STREAM_END) {
+ walker->corrupt_object_found++;
+ ret = error("File %s (%s) corrupt", hex, obj_req->url);
+ } else if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
+ ret = error("File %s has bad hash", hex);
+ } else if (obj_req->rename < 0) {
+ ret = error("unable to write sha1 filename %s",
+ obj_req->filename);
+ }
+
+ release_object_request(obj_req);
+ return ret;
+}
+
+static int fetch(struct walker *walker, unsigned char *sha1)
+{
+ struct walker_data *data = walker->data;
+ struct alt_base *altbase = data->alt;
+
+ if (!fetch_object(walker, altbase, sha1))
+ return 0;
+ while (altbase) {
+ if (!fetch_pack(walker, altbase, sha1))
+ return 0;
+ fetch_alternates(walker, data->alt->base);
+ altbase = altbase->next;
+ }
+ return error("Unable to find %s under %s", sha1_to_hex(sha1),
+ data->alt->base);
+}
+
+static inline int needs_quote(int ch)
+{
+ if (((ch >= 'A') && (ch <= 'Z'))
+ || ((ch >= 'a') && (ch <= 'z'))
+ || ((ch >= '0') && (ch <= '9'))
+ || (ch == '/')
+ || (ch == '-')
+ || (ch == '.'))
+ return 0;
+ return 1;
+}
+
+static inline int hex(int v)
+{
+ if (v < 10) return '0' + v;
+ else return 'A' + v - 10;
+}
+
+static char *quote_ref_url(const char *base, const char *ref)
+{
+ const char *cp;
+ char *dp, *qref;
+ int len, baselen, ch;
+
+ baselen = strlen(base);
+ len = baselen + 7; /* "/refs/" + NUL */
+ for (cp = ref; (ch = *cp) != 0; cp++, len++)
+ if (needs_quote(ch))
+ len += 2; /* extra two hex plus replacement % */
+ qref = xmalloc(len);
+ memcpy(qref, base, baselen);
+ memcpy(qref + baselen, "/refs/", 6);
+ for (cp = ref, dp = qref + baselen + 6; (ch = *cp) != 0; cp++) {
+ if (needs_quote(ch)) {
+ *dp++ = '%';
+ *dp++ = hex((ch >> 4) & 0xF);
+ *dp++ = hex(ch & 0xF);
+ }
+ else
+ *dp++ = ch;
+ }
+ *dp = 0;
+
+ return qref;
+}
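
Only [A-Za-z0-9], '/', '-' and '.' pass through quote_ref_url() unescaped;
everything else becomes %XX. A self-contained copy (plain malloc standing
in for xmalloc) with one probe:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int needs_quote(int ch)
    {
        if ((ch >= 'A' && ch <= 'Z') || (ch >= 'a' && ch <= 'z') ||
            (ch >= '0' && ch <= '9') ||
            ch == '/' || ch == '-' || ch == '.')
            return 0;
        return 1;
    }

    static int hex(int v)
    {
        return v < 10 ? '0' + v : 'A' + v - 10;
    }

    static char *quote_ref_url(const char *base, const char *ref)
    {
        const char *cp;
        char *dp, *qref;
        int len, baselen, ch;

        baselen = strlen(base);
        len = baselen + 7;                 /* "/refs/" + NUL */
        for (cp = ref; (ch = *cp) != 0; cp++, len++)
            if (needs_quote(ch))
                len += 2;                  /* "%XX" replaces one char */
        qref = malloc(len);
        memcpy(qref, base, baselen);
        memcpy(qref + baselen, "/refs/", 6);
        for (cp = ref, dp = qref + baselen + 6; (ch = *cp) != 0; cp++) {
            if (needs_quote(ch)) {
                *dp++ = '%';
                *dp++ = hex((ch >> 4) & 0xF);
                *dp++ = hex(ch & 0xF);
            } else
                *dp++ = ch;
        }
        *dp = 0;
        return qref;
    }

    int main(void)
    {
        char *url = quote_ref_url("http://git.host/repo.git",
                                  "heads/a branch?");
        /* prints http://git.host/repo.git/refs/heads/a%20branch%3F */
        printf("%s\n", url);
        free(url);
        return 0;
    }
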
+
+static int fetch_ref(struct walker *walker, char *ref, unsigned char *sha1)
+{
+ char *url;
+ char hex[42];
+ struct buffer buffer;
+ struct walker_data *data = walker->data;
+ const char *base = data->alt->base;
+ struct active_request_slot *slot;
+ struct slot_results results;
+ buffer.size = 41;
+ buffer.posn = 0;
+ buffer.buffer = hex;
+ hex[41] = '\0';
+
+ url = quote_ref_url(base, ref);
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK)
+ return error("Couldn't get %s for %s\n%s",
+ url, ref, curl_errorstr);
+ } else {
+ return error("Unable to start request");
+ }
+
+ hex[40] = '\0';
+ get_sha1_hex(hex, sha1);
+ return 0;
+}
+
+static void cleanup(struct walker *walker)
+{
+ struct walker_data *data = walker->data;
+ http_cleanup();
+
+ curl_slist_free_all(data->no_pragma_header);
+}
+
+struct walker *get_http_walker(const char *url)
+{
+ char *s;
+ struct walker_data *data = xmalloc(sizeof(struct walker_data));
+ struct walker *walker = xmalloc(sizeof(struct walker));
+
+ http_init();
+
+ data->no_pragma_header = curl_slist_append(NULL, "Pragma:");
+
+ data->alt = xmalloc(sizeof(*data->alt));
+ data->alt->base = xmalloc(strlen(url) + 1);
+ strcpy(data->alt->base, url);
+ for (s = data->alt->base + strlen(data->alt->base) - 1; *s == '/'; --s)
+ *s = 0;
+
+ data->alt->got_indices = 0;
+ data->alt->packs = NULL;
+ data->alt->next = NULL;
+ data->got_alternates = -1;
+
+ walker->corrupt_object_found = 0;
+ walker->fetch = fetch;
+ walker->fetch_ref = fetch_ref;
+ walker->prefetch = prefetch;
+ walker->cleanup = cleanup;
+ walker->data = data;
+
+#ifdef USE_CURL_MULTI
+ add_fill_function(walker, (int (*)(void *)) fill_active_slot);
+#endif
+
+ return walker;
+}
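
get_http_walker() fills in a small vtable: callers drive any transport
through walker->fetch, walker->fetch_ref, walker->prefetch and
walker->cleanup without knowing the backend. The real field list lives in
walker.h, which this hunk does not show; the sketch below is a simplified
stand-in with a dummy backend, just to illustrate the dispatch pattern.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct walker {
        int (*fetch)(struct walker *, unsigned char *sha1);
        void (*cleanup)(struct walker *);
        void *data;                        /* backend-private state */
    };

    static int dummy_fetch(struct walker *w, unsigned char *sha1)
    {
        printf("fetching %02x%02x... from %s\n",
               sha1[0], sha1[1], (char *)w->data);
        return 0;                          /* pretend we got it */
    }

    static void dummy_cleanup(struct walker *w)
    {
        free(w->data);
    }

    static struct walker *get_dummy_walker(const char *url)
    {
        struct walker *w = malloc(sizeof(*w));
        w->fetch = dummy_fetch;
        w->cleanup = dummy_cleanup;
        w->data = strdup(url);
        return w;
    }

    int main(void)
    {
        unsigned char sha1[20] = { 0xde, 0xad };
        struct walker *w = get_dummy_walker("http://git.host/repo.git");
        w->fetch(w, sha1);                 /* dispatch through the vtable */
        w->cleanup(w);
        free(w);
        return 0;
    }
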
#endif
while (slot != NULL) {
+ struct active_request_slot *next = slot->next;
#ifdef USE_CURL_MULTI
if (slot->in_use) {
curl_easy_getinfo(slot->curl,
#endif
if (slot->curl != NULL)
curl_easy_cleanup(slot->curl);
- slot = slot->next;
+ free(slot);
+ slot = next;
}
+ active_queue_head = NULL;
#ifndef NO_CURL_EASY_DUPHANDLE
curl_easy_cleanup(curl_default);
curl_global_cleanup();
curl_slist_free_all(pragma_header);
- pragma_header = NULL;
+ pragma_header = NULL;
}
struct active_request_slot *get_active_slot(void)
{
#ifdef USE_CURL_MULTI
CURLMcode curlm_result = curl_multi_add_handle(curlm, slot->curl);
+ int num_transfers;
if (curlm_result != CURLM_OK &&
curlm_result != CURLM_CALL_MULTI_PERFORM) {
slot->in_use = 0;
return 0;
}
+
+ /*
+ * We know there must be something to do, since we just added
+ * something.
+ */
+ curl_multi_perform(curlm, &num_transfers);
#endif
return 1;
}
#ifdef USE_CURL_MULTI
+struct fill_chain {
+ void *data;
+ int (*fill)(void *);
+ struct fill_chain *next;
+};
+
+static struct fill_chain *fill_cfg = NULL;
+
+void add_fill_function(void *data, int (*fill)(void *))
+{
+ struct fill_chain *new = xmalloc(sizeof(*new));
+ struct fill_chain **linkp = &fill_cfg;
+ new->data = data;
+ new->fill = fill;
+ new->next = NULL;
+ while (*linkp)
+ linkp = &(*linkp)->next;
+ *linkp = new;
+}
+
+void fill_active_slots(void)
+{
+ struct active_request_slot *slot = active_queue_head;
+
+ while (active_requests < max_requests) {
+ struct fill_chain *fill;
+ for (fill = fill_cfg; fill; fill = fill->next)
+ if (fill->fill(fill->data))
+ break;
+
+ if (!fill)
+ break;
+ }
+
+ while (slot != NULL) {
+ if (!slot->in_use && slot->curl != NULL) {
+ curl_easy_cleanup(slot->curl);
+ slot->curl = NULL;
+ }
+ slot = slot->next;
+ }
+}
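
The contract of the chain above: each registered callback starts at most
one request per call and returns 1 if it did, so fill_active_slots() keeps
sweeping the chain until the slot budget is spent or every callback reports
it has nothing left. A standalone sketch, with slot bookkeeping reduced to
a counter and a single queue-draining callback:

    #include <stdio.h>
    #include <stdlib.h>

    struct fill_chain {
        void *data;
        int (*fill)(void *);
        struct fill_chain *next;
    };

    static struct fill_chain *fill_cfg;
    static int active_requests, max_requests = 3;

    static void add_fill_function(void *data, int (*fill)(void *))
    {
        struct fill_chain *new = malloc(sizeof(*new));
        struct fill_chain **linkp = &fill_cfg;
        new->data = data;
        new->fill = fill;
        new->next = NULL;
        while (*linkp)                     /* append, preserving order */
            linkp = &(*linkp)->next;
        *linkp = new;
    }

    static int fill_from_queue(void *data)
    {
        int *pending = data;
        if (!*pending)
            return 0;                      /* queue drained */
        (*pending)--;
        active_requests++;
        printf("started a request, %d still queued\n", *pending);
        return 1;
    }

    static void fill_active_slots(void)
    {
        while (active_requests < max_requests) {
            struct fill_chain *fill;
            for (fill = fill_cfg; fill; fill = fill->next)
                if (fill->fill(fill->data))
                    break;
            if (!fill)
                break;                     /* nobody had work to start */
        }
    }

    int main(void)
    {
        int pending = 5;
        add_fill_function(&pending, fill_from_queue);
        fill_active_slots();               /* starts max_requests of them */
        return 0;
    }
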
+
void step_active_slots(void)
{
int num_transfers;
#ifdef USE_CURL_MULTI
extern void fill_active_slots(void);
+extern void add_fill_function(void *data, int (*fill)(void *));
extern void step_active_slots(void);
#endif
extern int data_received;
extern int active_requests;
-#ifdef USE_CURL_MULTI
-extern int max_requests;
-extern CURLM *curlm;
-#endif
#ifndef NO_CURL_EASY_DUPHANDLE
extern CURL *curl_default;
#endif
extern struct curl_slist *pragma_header;
extern struct curl_slist *no_range_header;
-extern struct active_request_slot *active_queue_head;
-
#endif /* HTTP_H */
} while (!feof(f));
msg->len = buf.len;
- msg->data = strbuf_detach(&buf);
+ msg->data = strbuf_detach(&buf, NULL);
return msg->len;
}
/* Check for valid interpolation. */
if (i < ninterps) {
value = interps[i].value;
- valuelen = strlen(value);
+ if (!value) {
+ src += namelen;
+ continue;
+ }
+ valuelen = strlen(value);
if (newlen + valuelen < reslen) {
/* Substitute. */
memcpy(dest, value, valuelen);
+++ /dev/null
-/*
- * Copyright (C) 2005 Junio C Hamano
- */
-#include "cache.h"
-#include "commit.h"
-#include "fetch.h"
-
-static int use_link;
-static int use_symlink;
-static int use_filecopy = 1;
-static int commits_on_stdin;
-
-static const char *path; /* "Remote" git repository */
-
-void prefetch(unsigned char *sha1)
-{
-}
-
-static struct packed_git *packs;
-
-static void setup_index(unsigned char *sha1)
-{
- struct packed_git *new_pack;
- char filename[PATH_MAX];
- strcpy(filename, path);
- strcat(filename, "/objects/pack/pack-");
- strcat(filename, sha1_to_hex(sha1));
- strcat(filename, ".idx");
- new_pack = parse_pack_index_file(sha1, filename);
- new_pack->next = packs;
- packs = new_pack;
-}
-
-static int setup_indices(void)
-{
- DIR *dir;
- struct dirent *de;
- char filename[PATH_MAX];
- unsigned char sha1[20];
- sprintf(filename, "%s/objects/pack/", path);
- dir = opendir(filename);
- if (!dir)
- return -1;
- while ((de = readdir(dir)) != NULL) {
- int namelen = strlen(de->d_name);
- if (namelen != 50 ||
- !has_extension(de->d_name, ".pack"))
- continue;
- get_sha1_hex(de->d_name + 5, sha1);
- setup_index(sha1);
- }
- closedir(dir);
- return 0;
-}
-
-static int copy_file(const char *source, char *dest, const char *hex,
- int warn_if_not_exists)
-{
- safe_create_leading_directories(dest);
- if (use_link) {
- if (!link(source, dest)) {
- pull_say("link %s\n", hex);
- return 0;
- }
- /* If we got ENOENT there is no point continuing. */
- if (errno == ENOENT) {
- if (!warn_if_not_exists)
- return -1;
- return error("does not exist %s", source);
- }
- }
- if (use_symlink) {
- struct stat st;
- if (stat(source, &st)) {
- if (!warn_if_not_exists && errno == ENOENT)
- return -1;
- return error("cannot stat %s: %s", source,
- strerror(errno));
- }
- if (!symlink(source, dest)) {
- pull_say("symlink %s\n", hex);
- return 0;
- }
- }
- if (use_filecopy) {
- int ifd, ofd, status = 0;
-
- ifd = open(source, O_RDONLY);
- if (ifd < 0) {
- if (!warn_if_not_exists && errno == ENOENT)
- return -1;
- return error("cannot open %s", source);
- }
- ofd = open(dest, O_WRONLY | O_CREAT | O_EXCL, 0666);
- if (ofd < 0) {
- close(ifd);
- return error("cannot open %s", dest);
- }
- status = copy_fd(ifd, ofd);
- close(ofd);
- if (status)
- return error("cannot write %s", dest);
- pull_say("copy %s\n", hex);
- return 0;
- }
- return error("failed to copy %s with given copy methods.", hex);
-}
-
-static int fetch_pack(const unsigned char *sha1)
-{
- struct packed_git *target;
- char filename[PATH_MAX];
- if (setup_indices())
- return -1;
- target = find_sha1_pack(sha1, packs);
- if (!target)
- return error("Couldn't find %s: not separate or in any pack",
- sha1_to_hex(sha1));
- if (get_verbosely) {
- fprintf(stderr, "Getting pack %s\n",
- sha1_to_hex(target->sha1));
- fprintf(stderr, " which contains %s\n",
- sha1_to_hex(sha1));
- }
- sprintf(filename, "%s/objects/pack/pack-%s.pack",
- path, sha1_to_hex(target->sha1));
- copy_file(filename, sha1_pack_name(target->sha1),
- sha1_to_hex(target->sha1), 1);
- sprintf(filename, "%s/objects/pack/pack-%s.idx",
- path, sha1_to_hex(target->sha1));
- copy_file(filename, sha1_pack_index_name(target->sha1),
- sha1_to_hex(target->sha1), 1);
- install_packed_git(target);
- return 0;
-}
-
-static int fetch_file(const unsigned char *sha1)
-{
- static int object_name_start = -1;
- static char filename[PATH_MAX];
- char *hex = sha1_to_hex(sha1);
- char *dest_filename = sha1_file_name(sha1);
-
- if (object_name_start < 0) {
- strcpy(filename, path); /* e.g. git.git */
- strcat(filename, "/objects/");
- object_name_start = strlen(filename);
- }
- filename[object_name_start+0] = hex[0];
- filename[object_name_start+1] = hex[1];
- filename[object_name_start+2] = '/';
- strcpy(filename + object_name_start + 3, hex + 2);
- return copy_file(filename, dest_filename, hex, 0);
-}
-
-int fetch(unsigned char *sha1)
-{
- if (has_sha1_file(sha1))
- return 0;
- else
- return fetch_file(sha1) && fetch_pack(sha1);
-}
-
-int fetch_ref(char *ref, unsigned char *sha1)
-{
- static int ref_name_start = -1;
- static char filename[PATH_MAX];
- static char hex[41];
- int ifd;
-
- if (ref_name_start < 0) {
- sprintf(filename, "%s/refs/", path);
- ref_name_start = strlen(filename);
- }
- strcpy(filename + ref_name_start, ref);
- ifd = open(filename, O_RDONLY);
- if (ifd < 0) {
- close(ifd);
- return error("cannot open %s", filename);
- }
- if (read_in_full(ifd, hex, 40) != 40 || get_sha1_hex(hex, sha1)) {
- close(ifd);
- return error("cannot read from %s", filename);
- }
- close(ifd);
- pull_say("ref %s\n", sha1_to_hex(sha1));
- return 0;
-}
-
-static const char local_pull_usage[] =
-"git-local-fetch [-c] [-t] [-a] [-v] [-w filename] [--recover] [-l] [-s] [-n] [--stdin] commit-id path";
-
-/*
- * By default we only use file copy.
- * If -l is specified, a hard link is attempted.
- * If -s is specified, then a symlink is attempted.
- * If -n is _not_ specified, then a regular file-to-file copy is done.
- */
-int main(int argc, const char **argv)
-{
- int commits;
- const char **write_ref = NULL;
- char **commit_id;
- int arg = 1;
-
- setup_git_directory();
- git_config(git_default_config);
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 't')
- get_tree = 1;
- else if (argv[arg][1] == 'c')
- get_history = 1;
- else if (argv[arg][1] == 'a') {
- get_all = 1;
- get_tree = 1;
- get_history = 1;
- }
- else if (argv[arg][1] == 'l')
- use_link = 1;
- else if (argv[arg][1] == 's')
- use_symlink = 1;
- else if (argv[arg][1] == 'n')
- use_filecopy = 0;
- else if (argv[arg][1] == 'v')
- get_verbosely = 1;
- else if (argv[arg][1] == 'w')
- write_ref = &argv[++arg];
- else if (!strcmp(argv[arg], "--recover"))
- get_recover = 1;
- else if (!strcmp(argv[arg], "--stdin"))
- commits_on_stdin = 1;
- else
- usage(local_pull_usage);
- arg++;
- }
- if (argc < arg + 2 - commits_on_stdin)
- usage(local_pull_usage);
- if (commits_on_stdin) {
- commits = pull_targets_stdin(&commit_id, &write_ref);
- } else {
- commit_id = (char **) &argv[arg++];
- commits = 1;
- }
- path = argv[arg];
-
- if (pull(commits, commit_id, write_ref, path))
- return 1;
-
- if (commits_on_stdin)
- pull_targets_free(commits, commit_id, write_ref);
-
- return 0;
-}
}
}
-static void show_decorations(struct commit *commit)
+void show_decorations(struct commit *commit)
{
const char *prefix;
struct name_decoration *decoration;
if (opt->show_log_size)
printf("log size %i\n", (int)msgbuf.len);
- printf("%s%s%s", msgbuf.buf, extra, sep);
+ if (msgbuf.len)
+ printf("%s%s%s", msgbuf.buf, extra, sep);
strbuf_release(&msgbuf);
}
* output for readability.
*/
show_log(opt, opt->diffopt.msg_sep);
- if (opt->verbose_header &&
+ if ((opt->diffopt.output_format & ~DIFF_FORMAT_NO_OUTPUT) &&
+ opt->verbose_header &&
opt->commit_format != CMIT_FMT_ONELINE) {
int pch = DIFF_FORMAT_DIFFSTAT | DIFF_FORMAT_PATCH;
if ((pch & opt->diffopt.output_format) == pch)
int log_tree_commit(struct rev_info *, struct commit *);
int log_tree_opt_parse(struct rev_info *, const char **, int);
void show_log(struct rev_info *opt, const char *sep);
+void show_decorations(struct commit *commit);
#endif
const unsigned char *hash2,
int *best_score,
char **best_match,
- char *base,
+ const char *base,
int recurse_limit)
{
struct tree_desc one;
static int call_depth = 0;
static int verbosity = 2;
+static int rename_limit = -1;
static int buffer_output = 1;
static struct strbuf obuf = STRBUF_INIT;
diff_setup(&opts);
opts.recursive = 1;
opts.detect_rename = DIFF_DETECT_RENAME;
+ opts.rename_limit = rename_limit;
opts.output_format = DIFF_FORMAT_NO_OUTPUT;
if (diff_setup_done(&opts) < 0)
die("diff setup failed");
{
struct commit_list *iter;
struct commit *merged_common_ancestors;
- struct tree *mrtree;
+ struct tree *mrtree = mrtree; /* self-assignment quiets gcc's "may be used uninitialized" warning */
int clean;
if (show(4)) {
verbosity = git_config_int(var, value);
return 0;
}
+ if (!strcasecmp(var, "diff.renamelimit")) {
+ rename_limit = git_config_int(var, value);
+ return 0;
+ }
return git_default_config(var, value);
}
SHA1_Final(pack_file_sha1, &c);
write_or_die(pack_fd, pack_file_sha1, 20);
}
+
+char *index_pack_lockfile(int ip_out)
+{
+ int len, s;
+ char packname[46];
+
+ /*
+ * The first thing we expects from index-pack's output
+ * is "pack\t%40s\n" or "keep\t%40s\n" (46 bytes) where
+ * %40s is the newly created pack SHA1 name. In the "keep"
+ * case, we need it to remove the corresponding .keep file
+ * later on. If we don't get that then tough luck with it.
+ */
+ for (len = 0;
+ len < 46 && (s = xread(ip_out, packname+len, 46-len)) > 0;
+ len += s);
+ if (len == 46 && packname[45] == '\n' &&
+ memcmp(packname, "keep\t", 5) == 0) {
+ char path[PATH_MAX];
+ packname[45] = 0;
+ snprintf(path, sizeof(path), "%s/pack/pack-%s.keep",
+ get_object_directory(), packname + 5);
+ return xstrdup(path);
+ }
+ return NULL;
+}
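
The status line index-pack prints is fixed-width: "pack\t" or "keep\t",
forty hex digits, then '\n', 46 bytes in all. A sketch of the same parse
against an in-memory line instead of a pipe fd (parse_keep_line() is a
hypothetical helper for illustration):

    #include <stdio.h>
    #include <string.h>

    static int parse_keep_line(const char *line, char *sha1_hex /* 41 */)
    {
        if (strlen(line) != 46 || line[45] != '\n')
            return 0;
        if (memcmp(line, "keep\t", 5))
            return 0;                      /* e.g. a "pack\t..." line */
        memcpy(sha1_hex, line + 5, 40);
        sha1_hex[40] = '\0';
        return 1;
    }

    int main(void)
    {
        char sha1[41];
        const char *line =
            "keep\t0123456789abcdef0123456789abcdef01234567\n";
        if (parse_keep_line(line, sha1))
            printf("objects/pack/pack-%s.keep\n", sha1);
        return 0;
    }
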
extern int verify_pack(struct packed_git *, int);
extern void fixup_pack_header_footer(int, unsigned char *, const char *, uint32_t);
+extern char *index_pack_lockfile(int fd);
#define PH_ERROR_EOF (-1)
#define PH_ERROR_PACK_SIGNATURE (-2)
char *to_free = NULL;
if (dst->buf == src)
- to_free = strbuf_detach(dst);
+ to_free = strbuf_detach(dst, NULL);
strbuf_addch(dst, '\'');
while (*src) {
else if (ce_compare_gitlink(ce))
changed |= DATA_CHANGED;
return changed;
+ case 0: /* Special case: unmerged file in index */
+ return MODE_CHANGED | DATA_CHANGED | TYPE_CHANGED;
default:
die("internal error: ce_mode is %o", ntohl(ce->ce_mode));
}
struct ref_lock *lock;
if (!prefixcmp(name, "refs/") && check_ref_format(name + 5)) {
- error("refusing to create funny ref '%s' locally", name);
+ error("refusing to create funny ref '%s' remotely", name);
return "funny refname";
}
}
} else {
const char *keeper[6];
- int s, len, status;
+ int s, status;
char keep_arg[256];
- char packname[46];
struct child_process ip;
s = sprintf(keep_arg, "--keep=receive-pack %i on ", getpid());
ip.git_cmd = 1;
if (start_command(&ip))
return "index-pack fork failed";
-
- /*
- * The first thing we expects from index-pack's output
- * is "pack\t%40s\n" or "keep\t%40s\n" (46 bytes) where
- * %40s is the newly created pack SHA1 name. In the "keep"
- * case, we need it to remove the corresponding .keep file
- * later on. If we don't get that then tough luck with it.
- */
- for (len = 0;
- len < 46 && (s = xread(ip.out, packname+len, 46-len)) > 0;
- len += s);
- if (len == 46 && packname[45] == '\n' &&
- memcmp(packname, "keep\t", 5) == 0) {
- char path[PATH_MAX];
- packname[45] = 0;
- snprintf(path, sizeof(path), "%s/pack/pack-%s.keep",
- get_object_directory(), packname + 5);
- pack_lockfile = xstrdup(path);
- }
-
+ pack_lockfile = index_pack_lockfile(ip.out);
status = finish_command(&ip);
if (!status) {
reprepare_packed_git();
#include "refs.h"
#include "object.h"
#include "tag.h"
+#include "dir.h"
/* ISSYMREF=01 and ISPACKED=02 are public interfaces */
#define REF_KNOWS_PEELED 04
return lock;
}
-static int remove_empty_dir_recursive(char *path, int len)
-{
- DIR *dir = opendir(path);
- struct dirent *e;
- int ret = 0;
-
- if (!dir)
- return -1;
- if (path[len-1] != '/')
- path[len++] = '/';
- while ((e = readdir(dir)) != NULL) {
- struct stat st;
- int namlen;
- if ((e->d_name[0] == '.') &&
- ((e->d_name[1] == 0) ||
- ((e->d_name[1] == '.') && e->d_name[2] == 0)))
- continue; /* "." and ".." */
-
- namlen = strlen(e->d_name);
- if ((len + namlen < PATH_MAX) &&
- strcpy(path + len, e->d_name) &&
- !lstat(path, &st) &&
- S_ISDIR(st.st_mode) &&
- !remove_empty_dir_recursive(path, len + namlen))
- continue; /* happy */
-
- /* path too long, stat fails, or non-directory still exists */
- ret = -1;
- break;
- }
- closedir(dir);
- if (!ret) {
- path[len] = 0;
- ret = rmdir(path);
- }
- return ret;
-}
-
-static int remove_empty_directories(char *file)
+static int remove_empty_directories(const char *file)
{
/* we want to create a file but there is a directory there;
* if that is an empty directory (or a directory that contains
* only empty directories), remove them.
*/
- char path[PATH_MAX];
- int len = strlen(file);
+ struct strbuf path;
+ int result;
- if (len >= PATH_MAX) /* path too long ;-) */
- return -1;
- strcpy(path, file);
- return remove_empty_dir_recursive(path, len);
+ strbuf_init(&path, 20);
+ strbuf_addstr(&path, file);
+
+ result = remove_dir_recursively(&path, 1);
+
+ strbuf_release(&path);
+
+ return result;
}
static int is_refname_available(const char *ref, const char *oldref,
static struct remote **remotes;
static int allocated_remotes;
+static struct branch **branches;
+static int allocated_branches;
+
+static struct branch *current_branch;
+static const char *default_remote_name;
+
#define BUF_SIZE (2048)
static char buffer[BUF_SIZE];
remote->fetch_refspec_nr = nr;
}
-static void add_uri(struct remote *remote, const char *uri)
+static void add_url(struct remote *remote, const char *url)
{
- int nr = remote->uri_nr + 1;
- remote->uri =
- xrealloc(remote->uri, nr * sizeof(char *));
- remote->uri[nr-1] = uri;
- remote->uri_nr = nr;
+ int nr = remote->url_nr + 1;
+ remote->url =
+ xrealloc(remote->url, nr * sizeof(char *));
+ remote->url[nr-1] = url;
+ remote->url_nr = nr;
}
static struct remote *make_remote(const char *name, int len)
return remotes[empty];
}
+static void add_merge(struct branch *branch, const char *name)
+{
+ int nr = branch->merge_nr + 1;
+ branch->merge_name =
+ xrealloc(branch->merge_name, nr * sizeof(char *));
+ branch->merge_name[nr-1] = name;
+ branch->merge_nr = nr;
+}
+
+static struct branch *make_branch(const char *name, int len)
+{
+ int i, empty = -1;
+ char *refname;
+
+ for (i = 0; i < allocated_branches; i++) {
+ if (!branches[i]) {
+ if (empty < 0)
+ empty = i;
+ } else {
+ if (len ? (!strncmp(name, branches[i]->name, len) &&
+ !branches[i]->name[len]) :
+ !strcmp(name, branches[i]->name))
+ return branches[i];
+ }
+ }
+
+ if (empty < 0) {
+ empty = allocated_branches;
+ allocated_branches += allocated_branches ? allocated_branches : 1;
+ branches = xrealloc(branches,
+ sizeof(*branches) * allocated_branches);
+ memset(branches + empty, 0,
+ (allocated_branches - empty) * sizeof(*branches));
+ }
+ branches[empty] = xcalloc(1, sizeof(struct branch));
+ if (len)
+ branches[empty]->name = xstrndup(name, len);
+ else
+ branches[empty]->name = xstrdup(name);
+ refname = xmalloc(strlen(name) + strlen("refs/heads/") + 1);
+ strcpy(refname, "refs/heads/");
+ strcpy(refname + strlen("refs/heads/"),
+ branches[empty]->name);
+ branches[empty]->refname = refname;
+
+ return branches[empty];
+}
+
static void read_remotes_file(struct remote *remote)
{
FILE *f = fopen(git_path("remotes/%s", remote->name), "r");
switch (value_list) {
case 0:
- add_uri(remote, xstrdup(s));
+ add_url(remote, xstrdup(s));
break;
case 1:
add_push_refspec(remote, xstrdup(s));
static void read_branches_file(struct remote *remote)
{
const char *slash = strchr(remote->name, '/');
+ char *frag;
+ char *branch;
int n = slash ? slash - remote->name : 1000;
FILE *f = fopen(git_path("branches/%.*s", n, remote->name), "r");
char *s, *p;
strcpy(p, s);
if (slash)
strcat(p, slash);
- add_uri(remote, p);
+ frag = strchr(p, '#');
+ if (frag) {
+ *(frag++) = '\0';
+ branch = xmalloc(strlen(frag) + 12);
+ strcpy(branch, "refs/heads/");
+ strcat(branch, frag);
+ } else {
+ branch = "refs/heads/master";
+ }
+ add_url(remote, p);
+ add_fetch_refspec(remote, branch);
+ remote->fetch_tags = 1; /* always auto-follow */
}
-static char *default_remote_name = NULL;
-static const char *current_branch = NULL;
-static int current_branch_len = 0;
-
static int handle_config(const char *key, const char *value)
{
const char *name;
const char *subkey;
struct remote *remote;
- if (!prefixcmp(key, "branch.") && current_branch &&
- !strncmp(key + 7, current_branch, current_branch_len) &&
- !strcmp(key + 7 + current_branch_len, ".remote")) {
- free(default_remote_name);
- default_remote_name = xstrdup(value);
+ struct branch *branch;
+ if (!prefixcmp(key, "branch.")) {
+ name = key + 7;
+ subkey = strrchr(name, '.');
+ branch = make_branch(name, subkey - name);
+ if (!subkey)
+ return 0;
+ if (!value)
+ return 0;
+ if (!strcmp(subkey, ".remote")) {
+ branch->remote_name = xstrdup(value);
+ if (branch == current_branch)
+ default_remote_name = branch->remote_name;
+ } else if (!strcmp(subkey, ".merge"))
+ add_merge(branch, xstrdup(value));
+ return 0;
}
if (prefixcmp(key, "remote."))
return 0;
return 0; /* ignore unknown booleans */
}
if (!strcmp(subkey, ".url")) {
- add_uri(remote, xstrdup(value));
+ add_url(remote, xstrdup(value));
} else if (!strcmp(subkey, ".push")) {
add_push_refspec(remote, xstrdup(value));
} else if (!strcmp(subkey, ".fetch")) {
remote->receivepack = xstrdup(value);
else
error("more than one receivepack given, using the first");
+ } else if (!strcmp(subkey, ".uploadpack")) {
+ if (!remote->uploadpack)
+ remote->uploadpack = xstrdup(value);
+ else
+ error("more than one uploadpack given, using the first");
+ } else if (!strcmp(subkey, ".tagopt")) {
+ if (!strcmp(value, "--no-tags"))
+ remote->fetch_tags = -1;
}
return 0;
}
head_ref = resolve_ref("HEAD", sha1, 0, &flag);
if (head_ref && (flag & REF_ISSYMREF) &&
!prefixcmp(head_ref, "refs/heads/")) {
- current_branch = head_ref + strlen("refs/heads/");
- current_branch_len = strlen(current_branch);
+ current_branch =
+ make_branch(head_ref + strlen("refs/heads/"), 0);
}
git_config(handle_config);
}
-static struct refspec *parse_ref_spec(int nr_refspec, const char **refspec)
+struct refspec *parse_ref_spec(int nr_refspec, const char **refspec)
{
int i;
struct refspec *rs = xcalloc(sizeof(*rs), nr_refspec);
name = default_remote_name;
ret = make_remote(name, 0);
if (name[0] != '/') {
- if (!ret->uri)
+ if (!ret->url)
read_remotes_file(ret);
- if (!ret->uri)
+ if (!ret->url)
read_branches_file(ret);
}
- if (!ret->uri)
- add_uri(ret, name);
- if (!ret->uri)
+ if (!ret->url)
+ add_url(ret, name);
+ if (!ret->url)
return NULL;
ret->fetch = parse_ref_spec(ret->fetch_refspec_nr, ret->fetch_refspec);
ret->push = parse_ref_spec(ret->push_refspec_nr, ret->push_refspec);
return result;
}
-int remote_has_uri(struct remote *remote, const char *uri)
+void ref_remove_duplicates(struct ref *ref_map)
+{
+ struct ref **posn;
+ struct ref *next;
+ for (; ref_map; ref_map = ref_map->next) {
+ if (!ref_map->peer_ref)
+ continue;
+ posn = &ref_map->next;
+ while (*posn) {
+ if ((*posn)->peer_ref &&
+ !strcmp((*posn)->peer_ref->name,
+ ref_map->peer_ref->name)) {
+ if (strcmp((*posn)->name, ref_map->name))
+ die("%s tracks both %s and %s",
+ ref_map->peer_ref->name,
+ (*posn)->name, ref_map->name);
+ next = (*posn)->next;
+ free((*posn)->peer_ref);
+ free(*posn);
+ *posn = next;
+ } else {
+ posn = &(*posn)->next;
+ }
+ }
+ }
+}
+
+int remote_has_url(struct remote *remote, const char *url)
{
int i;
- for (i = 0; i < remote->uri_nr; i++) {
- if (!strcmp(remote->uri[i], uri))
+ for (i = 0; i < remote->url_nr; i++) {
+ if (!strcmp(remote->url[i], url))
return 1;
}
return 0;
}
+/*
+ * Returns true if, under the matching rules for fetching, name is the
+ * same as the given full name.
+ */
+static int ref_matches_abbrev(const char *name, const char *full)
+{
+ if (!prefixcmp(name, "refs/") || !strcmp(name, "HEAD"))
+ return !strcmp(name, full);
+ if (prefixcmp(full, "refs/"))
+ return 0;
+ if (!prefixcmp(name, "heads/") ||
+ !prefixcmp(name, "tags/") ||
+ !prefixcmp(name, "remotes/"))
+ return !strcmp(name, full + 5);
+ if (prefixcmp(full + 5, "heads/"))
+ return 0;
+ return !strcmp(full + 11, name);
+}
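
Concretely, "master" and "heads/master" both match "refs/heads/master",
and "tags/v1.5.3" matches "refs/tags/v1.5.3", but a bare "v1.5.3" does
not, because only the "heads/" namespace may be omitted entirely. A
self-contained copy with a few probes (prefixcmp() is inlined here since
it normally comes from git-compat-util.h):

    #include <assert.h>
    #include <string.h>

    static int prefixcmp(const char *str, const char *prefix)
    {
        return strncmp(str, prefix, strlen(prefix));
    }

    static int ref_matches_abbrev(const char *name, const char *full)
    {
        if (!prefixcmp(name, "refs/") || !strcmp(name, "HEAD"))
            return !strcmp(name, full);
        if (prefixcmp(full, "refs/"))
            return 0;
        if (!prefixcmp(name, "heads/") ||
            !prefixcmp(name, "tags/") ||
            !prefixcmp(name, "remotes/"))
            return !strcmp(name, full + 5);
        if (prefixcmp(full + 5, "heads/"))
            return 0;
        return !strcmp(full + 11, name);
    }

    int main(void)
    {
        assert(ref_matches_abbrev("master", "refs/heads/master"));
        assert(ref_matches_abbrev("heads/master", "refs/heads/master"));
        assert(ref_matches_abbrev("tags/v1.5.3", "refs/tags/v1.5.3"));
        assert(!ref_matches_abbrev("v1.5.3", "refs/tags/v1.5.3"));
        return 0;
    }
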
+
int remote_find_tracking(struct remote *remote, struct refspec *refspec)
{
int find_src = refspec->src == NULL;
int i;
if (find_src) {
- if (refspec->dst == NULL)
+ if (!refspec->dst)
return error("find_tracking: need either src or dst");
needle = refspec->dst;
result = &refspec->src;
return ret;
}
+static struct ref *copy_ref(struct ref *ref)
+{
+ struct ref *ret = xmalloc(sizeof(struct ref) + strlen(ref->name) + 1);
+ memcpy(ret, ref, sizeof(struct ref) + strlen(ref->name) + 1);
+ ret->next = NULL;
+ return ret;
+}
+
void free_refs(struct ref *ref)
{
struct ref *next;
* way to delete 'other' ref at the remote end.
*/
matched_src = try_explicit_object_name(rs->src);
- if (matched_src)
- break;
- error("src refspec %s does not match any.",
- rs->src);
+ if (!matched_src)
+ error("src refspec %s does not match any.", rs->src);
break;
default:
matched_src = NULL;
- error("src refspec %s matches more than one.",
- rs->src);
+ error("src refspec %s matches more than one.", rs->src);
break;
}
if (!matched_src)
errs = 1;
- if (dst_value == NULL)
+ if (!dst_value) {
+ if (!matched_src)
+ return errs;
dst_value = matched_src->name;
+ }
switch (count_refspec_match(dst_value, dst, &matched_dst)) {
case 1:
dst_value);
break;
}
- if (errs || matched_dst == NULL)
+ if (errs || !matched_dst)
return 1;
if (matched_dst->peer_ref) {
errs = 1;
hashcpy(dst_peer->new_sha1, src->new_sha1);
}
dst_peer->peer_ref = src;
+ if (pat)
+ dst_peer->force = pat->force;
free_name:
free(dst_name);
}
return 0;
}
+
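+/*
+ * Returns the named branch (or the current branch when name is NULL,
+ * empty or "HEAD"), with its configured remote resolved and each
+ * branch.<name>.merge entry turned into a tracking refspec.
+ */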
+struct branch *branch_get(const char *name)
+{
+ struct branch *ret;
+
+ read_config();
+ if (!name || !*name || !strcmp(name, "HEAD"))
+ ret = current_branch;
+ else
+ ret = make_branch(name, 0);
+ if (ret && ret->remote_name) {
+ ret->remote = remote_get(ret->remote_name);
+ if (ret->merge_nr) {
+ int i;
+			ret->merge = xcalloc(ret->merge_nr,
+					     sizeof(*ret->merge));
+ for (i = 0; i < ret->merge_nr; i++) {
+ ret->merge[i] = xcalloc(1, sizeof(**ret->merge));
+ ret->merge[i]->src = xstrdup(ret->merge_name[i]);
+ remote_find_tracking(ret->remote,
+ ret->merge[i]);
+ }
+ }
+ }
+ return ret;
+}
+
+int branch_has_merge_config(struct branch *branch)
+{
+ return branch && !!branch->merge;
+}
+
+int branch_merge_matches(struct branch *branch,
+ int i,
+ const char *refname)
+{
+ if (!branch || i < 0 || i >= branch->merge_nr)
+ return 0;
+ return ref_matches_abbrev(branch->merge[i]->src, refname);
+}
+
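+/*
+ * Expands a pattern refspec against the advertised remote refs: every
+ * remote ref whose name starts with refspec->src gets a copy whose
+ * peer_ref carries the same suffix appended to refspec->dst (so,
+ * illustratively, refs/heads/topic maps to refs/remotes/origin/topic
+ * for a typical "refs/heads/*:refs/remotes/origin/*" fetch refspec).
+ */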
+static struct ref *get_expanded_map(struct ref *remote_refs,
+ const struct refspec *refspec)
+{
+ struct ref *ref;
+ struct ref *ret = NULL;
+ struct ref **tail = &ret;
+
+ int remote_prefix_len = strlen(refspec->src);
+ int local_prefix_len = strlen(refspec->dst);
+
+ for (ref = remote_refs; ref; ref = ref->next) {
+ if (strchr(ref->name, '^'))
+ continue; /* a dereference item */
+ if (!prefixcmp(ref->name, refspec->src)) {
+ char *match;
+ struct ref *cpy = copy_ref(ref);
+ match = ref->name + remote_prefix_len;
+
+ cpy->peer_ref = alloc_ref(local_prefix_len +
+ strlen(match) + 1);
+ sprintf(cpy->peer_ref->name, "%s%s",
+ refspec->dst, match);
+ if (refspec->force)
+ cpy->peer_ref->force = 1;
+ *tail = cpy;
+ tail = &cpy->next;
+ }
+ }
+
+ return ret;
+}
+
+static struct ref *find_ref_by_name_abbrev(struct ref *refs, const char *name)
+{
+ struct ref *ref;
+ for (ref = refs; ref; ref = ref->next) {
+ if (ref_matches_abbrev(name, ref->name))
+ return ref;
+ }
+ return NULL;
+}
+
+struct ref *get_remote_ref(struct ref *remote_refs, const char *name)
+{
+ struct ref *ref = find_ref_by_name_abbrev(remote_refs, name);
+
+ if (!ref)
+ return NULL;
+
+ return copy_ref(ref);
+}
+
+static struct ref *get_local_ref(const char *name)
+{
+ struct ref *ret;
+ if (!name)
+ return NULL;
+
+ if (!prefixcmp(name, "refs/")) {
+ ret = alloc_ref(strlen(name) + 1);
+ strcpy(ret->name, name);
+ return ret;
+ }
+
+ if (!prefixcmp(name, "heads/") ||
+ !prefixcmp(name, "tags/") ||
+ !prefixcmp(name, "remotes/")) {
+ ret = alloc_ref(strlen(name) + 6);
+ sprintf(ret->name, "refs/%s", name);
+ return ret;
+ }
+
+ ret = alloc_ref(strlen(name) + 12);
+ sprintf(ret->name, "refs/heads/%s", name);
+ return ret;
+}
+
+int get_fetch_map(struct ref *remote_refs,
+ const struct refspec *refspec,
+ struct ref ***tail,
+ int missing_ok)
+{
+ struct ref *ref_map, *rm;
+
+ if (refspec->pattern) {
+ ref_map = get_expanded_map(remote_refs, refspec);
+ } else {
+ const char *name = refspec->src[0] ? refspec->src : "HEAD";
+
+ ref_map = get_remote_ref(remote_refs, name);
+ if (!missing_ok && !ref_map)
+ die("Couldn't find remote ref %s", name);
+ if (ref_map) {
+ ref_map->peer_ref = get_local_ref(refspec->dst);
+ if (ref_map->peer_ref && refspec->force)
+ ref_map->peer_ref->force = 1;
+ }
+ }
+
+ for (rm = ref_map; rm; rm = rm->next) {
+ if (rm->peer_ref && check_ref_format(rm->peer_ref->name + 5))
+ die("* refusing to create funny ref '%s' locally",
+ rm->peer_ref->name);
+ }
+
+ if (ref_map)
+ tail_link_ref(ref_map, tail);
+
+ return 0;
+}
struct remote {
const char *name;
- const char **uri;
- int uri_nr;
+ const char **url;
+ int url_nr;
const char **push_refspec;
struct refspec *push;
struct refspec *fetch;
int fetch_refspec_nr;
+ /*
+ * -1 to never fetch tags
+ * 0 to auto-follow tags on heuristic (default)
+ * 1 to always auto-follow tags
+ * 2 to always fetch tags
+ */
+ int fetch_tags;
+
const char *receivepack;
+ const char *uploadpack;
};
struct remote *remote_get(const char *name);
typedef int each_remote_fn(struct remote *remote, void *priv);
int for_each_remote(each_remote_fn fn, void *priv);
-int remote_has_uri(struct remote *remote, const char *uri);
+int remote_has_url(struct remote *remote, const char *url);
struct refspec {
unsigned force : 1;
*/
void free_refs(struct ref *ref);
+/*
+ * Removes and frees any duplicate refs in the map.
+ */
+void ref_remove_duplicates(struct ref *ref_map);
+
+struct refspec *parse_ref_spec(int nr_refspec, const char **refspec);
+
int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
int nr_refspec, char **refspec, int all);
+/*
+ * Given a list of the remote refs and the specification of things to
+ * fetch, makes a (separate) list of the refs to fetch and the local
+ * refs to store into.
+ *
+ * *tail is the pointer to the tail pointer of the list of results
+ * beforehand, and will be set to the tail pointer of the list of
+ * results afterward.
+ *
+ * missing_ok is usually false, but when we are adding branch.$name.merge
+ * it is OK if the branch no longer exists at the remote.
+ */
+int get_fetch_map(struct ref *remote_refs, const struct refspec *refspec,
+ struct ref ***tail, int missing_ok);
+
+struct ref *get_remote_ref(struct ref *remote_refs, const char *name);
+
/*
* For the given remote, reads the refspec's src and sets the other fields.
*/
int remote_find_tracking(struct remote *remote, struct refspec *refspec);
+struct branch {
+ const char *name;
+ const char *refname;
+
+ const char *remote_name;
+ struct remote *remote;
+
+ const char **merge_name;
+ struct refspec **merge;
+ int merge_nr;
+};
+
+struct branch *branch_get(const char *name);
+
+int branch_has_merge_config(struct branch *branch);
+int branch_merge_matches(struct branch *, int n, const char *);
+
#endif
continue;
}
if (!strncmp(arg, "--date=", 7)) {
- if (!strcmp(arg + 7, "relative"))
- revs->date_mode = DATE_RELATIVE;
- else if (!strcmp(arg + 7, "iso8601") ||
- !strcmp(arg + 7, "iso"))
- revs->date_mode = DATE_ISO8601;
- else if (!strcmp(arg + 7, "rfc2822") ||
- !strcmp(arg + 7, "rfc"))
- revs->date_mode = DATE_RFC2822;
- else if (!strcmp(arg + 7, "short"))
- revs->date_mode = DATE_SHORT;
- else if (!strcmp(arg + 7, "local"))
- revs->date_mode = DATE_LOCAL;
- else if (!strcmp(arg + 7, "default"))
- revs->date_mode = DATE_NORMAL;
- else
- die("unknown date format %s", arg);
+ revs->date_mode = parse_date_format(arg + 7);
continue;
}
if (!strcmp(arg, "--log-size")) {
opts = diff_opt_parse(&revs->diffopt, argv+i, argc-i);
if (opts > 0) {
- if (strcmp(argv[i], "-z"))
- revs->diff = 1;
i += opts - 1;
continue;
}
add_pending_object_with_mode(revs, object, def, mode);
}
+ /* Did the user ask for any diff output? Run the diff! */
+ if (revs->diffopt.output_format & ~DIFF_FORMAT_NO_OUTPUT)
+ revs->diff = 1;
+
+ /* Pickaxe and rename following needs diffs */
+ if (revs->diffopt.pickaxe || revs->diffopt.follow_renames)
+ revs->diff = 1;
+
if (revs->topo_order)
revs->limited = 1;
+++ /dev/null
-#include "cache.h"
-#include "rsh.h"
-#include "quote.h"
-
-#define COMMAND_SIZE 4096
-
-int setup_connection(int *fd_in, int *fd_out, const char *remote_prog,
- char *url, int rmt_argc, char **rmt_argv)
-{
- char *host;
- char *path;
- int sv[2];
- int i;
- pid_t pid;
- struct strbuf cmd;
-
- if (!strcmp(url, "-")) {
- *fd_in = 0;
- *fd_out = 1;
- return 0;
- }
-
- host = strstr(url, "//");
- if (host) {
- host += 2;
- path = strchr(host, '/');
- } else {
- host = url;
- path = strchr(host, ':');
- if (path)
- *(path++) = '\0';
- }
- if (!path) {
- return error("Bad URL: %s", url);
- }
-
- /* $GIT_RSH <host> "env GIT_DIR=<path> <remote_prog> <args...>" */
- strbuf_init(&cmd, COMMAND_SIZE);
- strbuf_addstr(&cmd, "env ");
- strbuf_addstr(&cmd, GIT_DIR_ENVIRONMENT "=");
- sq_quote_buf(&cmd, path);
- strbuf_addch(&cmd, ' ');
- sq_quote_buf(&cmd, remote_prog);
-
- for (i = 0 ; i < rmt_argc ; i++) {
- strbuf_addch(&cmd, ' ');
- sq_quote_buf(&cmd, rmt_argv[i]);
- }
-
- strbuf_addstr(&cmd, " -");
-
- if (cmd.len >= COMMAND_SIZE)
- return error("Command line too long");
-
- if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
- return error("Couldn't create socket");
-
- pid = fork();
- if (pid < 0)
- return error("Couldn't fork");
- if (!pid) {
- const char *ssh, *ssh_basename;
- ssh = getenv("GIT_SSH");
- if (!ssh) ssh = "ssh";
- ssh_basename = strrchr(ssh, '/');
- if (!ssh_basename)
- ssh_basename = ssh;
- else
- ssh_basename++;
- close(sv[1]);
- dup2(sv[0], 0);
- dup2(sv[0], 1);
- execlp(ssh, ssh_basename, host, cmd.buf, NULL);
- }
- close(sv[0]);
- *fd_in = sv[1];
- *fd_out = sv[1];
- return 0;
-}
+++ /dev/null
-#ifndef RSH_H
-#define RSH_H
-
-int setup_connection(int *fd_in, int *fd_out, const char *remote_prog,
- char *url, int rmt_argc, char **rmt_argv);
-
-#endif
#include "remote.h"
static const char send_pack_usage[] =
-"git-send-pack [--all] [--force] [--receive-pack=<git-receive-pack>] [--verbose] [--thin] [<host>:]<directory> [<ref>...]\n"
+"git-send-pack [--all] [--dry-run] [--force] [--receive-pack=<git-receive-pack>] [--verbose] [--thin] [<host>:]<directory> [<ref>...]\n"
" --all and explicit <ref> specification are mutually exclusive.";
static const char *receivepack = "git-receive-pack";
static int verbose;
static int send_all;
static int force_update;
static int use_thin_pack;
+static int dry_run;
/*
* Make a pack stream and spit it out into file descriptor fd
return ret;
}
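+/*
+ * After a successful push, bring the local tracking ref for the pushed
+ * ref (if one is configured) in line with what was sent: delete it
+ * when the push deleted the remote ref, otherwise update it to the
+ * pushed value.
+ */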
+static void update_tracking_ref(struct remote *remote, struct ref *ref)
+{
+ struct refspec rs;
+ int will_delete_ref;
+
+ rs.src = ref->name;
+ rs.dst = NULL;
+
+ if (!ref->peer_ref)
+ return;
+
+ will_delete_ref = is_null_sha1(ref->peer_ref->new_sha1);
+
+ if (!will_delete_ref &&
+ !hashcmp(ref->old_sha1, ref->peer_ref->new_sha1))
+ return;
+
+ if (!remote_find_tracking(remote, &rs)) {
+ fprintf(stderr, "updating local tracking ref '%s'\n", rs.dst);
+ if (is_null_sha1(ref->peer_ref->new_sha1)) {
+ if (delete_ref(rs.dst, NULL))
+ error("Failed to delete");
+ } else
+ update_ref("update by push", rs.dst,
+ ref->new_sha1, NULL, 0, 0);
+ free(rs.dst);
+ }
+}
+
static int send_pack(int in, int out, struct remote *remote, int nr_refspec, char **refspec)
{
struct ref *ref;
return -1;
if (!remote_refs) {
- fprintf(stderr, "No refs in common and none specified; doing nothing.\n");
+ fprintf(stderr, "No refs in common and none specified; doing nothing.\n"
+ "Perhaps you should specify a branch such as 'master'.\n");
return 0;
}
strcpy(old_hex, sha1_to_hex(ref->old_sha1));
new_hex = sha1_to_hex(ref->new_sha1);
- if (ask_for_status_report) {
- packet_write(out, "%s %s %s%c%s",
- old_hex, new_hex, ref->name, 0,
- "report-status");
- ask_for_status_report = 0;
- expect_status_report = 1;
+ if (!dry_run) {
+ if (ask_for_status_report) {
+ packet_write(out, "%s %s %s%c%s",
+ old_hex, new_hex, ref->name, 0,
+ "report-status");
+ ask_for_status_report = 0;
+ expect_status_report = 1;
+ }
+ else
+ packet_write(out, "%s %s %s",
+ old_hex, new_hex, ref->name);
}
- else
- packet_write(out, "%s %s %s",
- old_hex, new_hex, ref->name);
if (will_delete_ref)
fprintf(stderr, "deleting '%s'\n", ref->name);
else {
fprintf(stderr, "\n from %s\n to %s\n",
old_hex, new_hex);
}
- if (remote) {
- struct refspec rs;
- rs.src = ref->name;
- rs.dst = NULL;
- if (!remote_find_tracking(remote, &rs)) {
- fprintf(stderr, " Also local %s\n", rs.dst);
- if (will_delete_ref) {
- if (delete_ref(rs.dst, NULL)) {
- error("Failed to delete");
- }
- } else
- update_ref("update by push", rs.dst,
- ref->new_sha1, NULL, 0, 0);
- free(rs.dst);
- }
- }
}
packet_flush(out);
- if (new_refs)
+ if (new_refs && !dry_run)
ret = pack_objects(out, remote_refs);
close(out);
ret = -4;
}
+ if (!dry_run && remote && ret == 0) {
+ for (ref = remote_refs; ref; ref = ref->next)
+ update_tracking_ref(remote, ref);
+ }
+
if (!new_refs && ret == 0)
fprintf(stderr, "Everything up-to-date\n");
return ret;
send_all = 1;
continue;
}
+ if (!strcmp(arg, "--dry-run")) {
+ dry_run = 1;
+ continue;
+ }
if (!strcmp(arg, "--force")) {
force_update = 1;
continue;
if (remote_name) {
remote = remote_get(remote_name);
- if (!remote_has_uri(remote, dest)) {
+ if (!remote_has_url(remote, dest)) {
die("Destination %s is not a uri for %s",
dest, remote_name);
}
if (PATH_MAX - 40 < strlen(gitdirenv))
die("'$%s' too big", GIT_DIR_ENVIRONMENT);
if (is_git_directory(gitdirenv)) {
+ static char buffer[1024 + 1];
+ const char *retval;
+
if (!work_tree_env)
return set_work_tree(gitdirenv);
- return NULL;
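+		/*
+		 * Both GIT_DIR and GIT_WORK_TREE are set, possibly as
+		 * relative paths: compute the prefix of the cwd inside
+		 * the work tree, record an absolute GIT_DIR, and chdir
+		 * to the work tree so the returned prefix stays valid.
+		 */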
+ retval = get_relative_cwd(buffer, sizeof(buffer) - 1,
+ get_git_work_tree());
+ if (!retval || !*retval)
+ return NULL;
+ set_git_dir(make_absolute_path(gitdirenv));
+ if (chdir(work_tree_env) < 0)
+ die ("Could not chdir to %s", work_tree_env);
+ strcat(buffer, "/");
+ return retval;
}
if (nongit_ok) {
*nongit_ok = 1;
munmap(idx_map, idx_size);
return error("wrong index v2 file size in %s", path);
}
- if (idx_size != min_size) {
- /* make sure we can deal with large pack offsets */
- off_t x = 0x7fffffffUL, y = 0xffffffffUL;
- if (x > (x + 1) || y > (y + 1)) {
- munmap(idx_map, idx_size);
- return error("pack too large for current definition of off_t in %s", path);
- }
+ if (idx_size != min_size &&
+ /*
+ * make sure we can deal with large pack offsets.
+ * 31-bit signed offset won't be enough, neither
+ * 32-bit unsigned one will be.
+ */
+ (sizeof(off_t) <= 4)) {
+ munmap(idx_map, idx_size);
+ return error("pack too large for current definition of off_t in %s", path);
}
}
return 0;
}
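+/*
+ * Returns 1 when "name" is either the full path of the pack file or
+ * its basename, and 0 otherwise.
+ */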
-static int matches_pack_name(struct packed_git *p, const char *ig)
+int matches_pack_name(struct packed_git *p, const char *name)
{
const char *last_c, *c;
- if (!strcmp(p->pack_name, ig))
- return 0;
+ if (!strcmp(p->pack_name, name))
+ return 1;
for (c = p->pack_name, last_c = c; *c;)
if (*c == '/')
last_c = ++c;
else
++c;
- if (!strcmp(last_c, ig))
- return 0;
+ if (!strcmp(last_c, name))
+ return 1;
- return 1;
+ return 0;
}
static int find_pack_entry(const unsigned char *sha1, struct pack_entry *e, const char **ignore_packed)
if (ignore_packed) {
const char **ig;
for (ig = ignore_packed; *ig; ig++)
- if (!matches_pack_name(p, *ig))
+ if (matches_pack_name(p, *ig))
break;
if (*ig)
goto next;
strbuf_init(&nbuf, 0);
if (convert_to_git(path, buf, size, &nbuf)) {
munmap(buf, size);
- size = nbuf.len;
- buf = nbuf.buf;
+ buf = strbuf_detach(&nbuf, &size);
re_allocated = 1;
}
}
#include "cache.h"
#include "quote.h"
#include "exec_cmd.h"
+#include "strbuf.h"
static int do_generic_cmd(const char *me, char *arg)
{
return execv_git_cmd(my_argv);
}
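+/*
+ * A "cvs server" request is re-dispatched to git-cvsserver, with the
+ * git exec path prepended to PATH so the spawned server can find the
+ * git commands it needs.
+ */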
+static int do_cvs_cmd(const char *me, char *arg)
+{
+ const char *cvsserver_argv[3] = {
+ "cvsserver", "server", NULL
+ };
+ const char *oldpath = getenv("PATH");
+ struct strbuf newpath = STRBUF_INIT;
+
+ if (!arg || strcmp(arg, "server"))
+ die("git-cvsserver only handles server: %s", arg);
+
+ strbuf_addstr(&newpath, git_exec_path());
+ strbuf_addch(&newpath, ':');
+ strbuf_addstr(&newpath, oldpath);
+
+ setenv("PATH", strbuf_detach(&newpath, NULL), 1);
+
+ return execv_git_cmd(cvsserver_argv);
+}
+
+
static struct commands {
const char *name;
int (*exec)(const char *me, char *arg);
} cmd_list[] = {
{ "git-receive-pack", do_generic_cmd },
{ "git-upload-pack", do_generic_cmd },
+ { "cvs", do_cvs_cmd },
{ NULL },
};
char *prog;
struct commands *cmd;
+ if (argc == 2 && !strcmp(argv[1], "cvs server"))
+ argv--;
/* We want to see "-c cmd args", and nothing else */
- if (argc != 3 || strcmp(argv[1], "-c"))
+ else if (argc != 3 || strcmp(argv[1], "-c"))
die("What do you think I am? A shell?");
prog = argv[2];
ntohl(off64[1]);
off64_nr++;
}
- printf("%llu %s (%08x)\n", (unsigned long long) offset,
+ printf("%" PRIuMAX " %s (%08x)\n", (uintmax_t) offset,
sha1_to_hex(entries[i].sha1),
ntohl(entries[i].crc));
}
+++ /dev/null
-#ifndef COUNTERPART_ENV_NAME
-#define COUNTERPART_ENV_NAME "GIT_SSH_UPLOAD"
-#endif
-#ifndef COUNTERPART_PROGRAM_NAME
-#define COUNTERPART_PROGRAM_NAME "git-ssh-upload"
-#endif
-#ifndef MY_PROGRAM_NAME
-#define MY_PROGRAM_NAME "git-ssh-fetch"
-#endif
-
-#include "cache.h"
-#include "commit.h"
-#include "rsh.h"
-#include "fetch.h"
-#include "refs.h"
-
-static int fd_in;
-static int fd_out;
-
-static unsigned char remote_version;
-static unsigned char local_version = 1;
-
-static int prefetches;
-
-static struct object_list *in_transit;
-static struct object_list **end_of_transit = &in_transit;
-
-void prefetch(unsigned char *sha1)
-{
- char type = 'o';
- struct object_list *node;
- if (prefetches > 100) {
- fetch(in_transit->item->sha1);
- }
- node = xmalloc(sizeof(struct object_list));
- node->next = NULL;
- node->item = lookup_unknown_object(sha1);
- *end_of_transit = node;
- end_of_transit = &node->next;
- /* XXX: what if these writes fail? */
- write_in_full(fd_out, &type, 1);
- write_in_full(fd_out, sha1, 20);
- prefetches++;
-}
-
-static char conn_buf[4096];
-static size_t conn_buf_posn;
-
-int fetch(unsigned char *sha1)
-{
- int ret;
- signed char remote;
- struct object_list *temp;
-
- if (hashcmp(sha1, in_transit->item->sha1)) {
- /* we must have already fetched it to clean the queue */
- return has_sha1_file(sha1) ? 0 : -1;
- }
- prefetches--;
- temp = in_transit;
- in_transit = in_transit->next;
- if (!in_transit)
- end_of_transit = &in_transit;
- free(temp);
-
- if (conn_buf_posn) {
- remote = conn_buf[0];
- memmove(conn_buf, conn_buf + 1, --conn_buf_posn);
- } else {
- if (xread(fd_in, &remote, 1) < 1)
- return -1;
- }
- /* fprintf(stderr, "Got %d\n", remote); */
- if (remote < 0)
- return remote;
- ret = write_sha1_from_fd(sha1, fd_in, conn_buf, 4096, &conn_buf_posn);
- if (!ret)
- pull_say("got %s\n", sha1_to_hex(sha1));
- return ret;
-}
-
-static int get_version(void)
-{
- char type = 'v';
- if (write_in_full(fd_out, &type, 1) != 1 ||
- write_in_full(fd_out, &local_version, 1)) {
- return error("Couldn't request version from remote end");
- }
- if (xread(fd_in, &remote_version, 1) < 1) {
- return error("Couldn't read version from remote end");
- }
- return 0;
-}
-
-int fetch_ref(char *ref, unsigned char *sha1)
-{
- signed char remote;
- char type = 'r';
- int length = strlen(ref) + 1;
- if (write_in_full(fd_out, &type, 1) != 1 ||
- write_in_full(fd_out, ref, length) != length)
- return -1;
-
- if (read_in_full(fd_in, &remote, 1) != 1)
- return -1;
- if (remote < 0)
- return remote;
- if (read_in_full(fd_in, sha1, 20) != 20)
- return -1;
- return 0;
-}
-
-static const char ssh_fetch_usage[] =
- MY_PROGRAM_NAME
- " [-c] [-t] [-a] [-v] [--recover] [-w ref] commit-id url";
-int main(int argc, char **argv)
-{
- const char *write_ref = NULL;
- char *commit_id;
- char *url;
- int arg = 1;
- const char *prog;
-
- prog = getenv("GIT_SSH_PUSH");
- if (!prog) prog = "git-ssh-upload";
-
- setup_git_directory();
- git_config(git_default_config);
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 't') {
- get_tree = 1;
- } else if (argv[arg][1] == 'c') {
- get_history = 1;
- } else if (argv[arg][1] == 'a') {
- get_all = 1;
- get_tree = 1;
- get_history = 1;
- } else if (argv[arg][1] == 'v') {
- get_verbosely = 1;
- } else if (argv[arg][1] == 'w') {
- write_ref = argv[arg + 1];
- arg++;
- } else if (!strcmp(argv[arg], "--recover")) {
- get_recover = 1;
- }
- arg++;
- }
- if (argc < arg + 2) {
- usage(ssh_fetch_usage);
- return 1;
- }
- commit_id = argv[arg];
- url = argv[arg + 1];
-
- if (setup_connection(&fd_in, &fd_out, prog, url, arg, argv + 1))
- return 1;
-
- if (get_version())
- return 1;
-
- if (pull(1, &commit_id, &write_ref, url))
- return 1;
-
- return 0;
-}
+++ /dev/null
-#define COUNTERPART_ENV_NAME "GIT_SSH_PUSH"
-#define COUNTERPART_PROGRAM_NAME "git-ssh-push"
-#define MY_PROGRAM_NAME "git-ssh-pull"
-#include "ssh-fetch.c"
+++ /dev/null
-#define COUNTERPART_ENV_NAME "GIT_SSH_PULL"
-#define COUNTERPART_PROGRAM_NAME "git-ssh-pull"
-#define MY_PROGRAM_NAME "git-ssh-push"
-#include "ssh-upload.c"
+++ /dev/null
-#ifndef COUNTERPART_ENV_NAME
-#define COUNTERPART_ENV_NAME "GIT_SSH_FETCH"
-#endif
-#ifndef COUNTERPART_PROGRAM_NAME
-#define COUNTERPART_PROGRAM_NAME "git-ssh-fetch"
-#endif
-#ifndef MY_PROGRAM_NAME
-#define MY_PROGRAM_NAME "git-ssh-upload"
-#endif
-
-#include "cache.h"
-#include "rsh.h"
-#include "refs.h"
-
-static unsigned char local_version = 1;
-static unsigned char remote_version;
-
-static int verbose;
-
-static int serve_object(int fd_in, int fd_out) {
- ssize_t size;
- unsigned char sha1[20];
- signed char remote;
-
- size = read_in_full(fd_in, sha1, 20);
- if (size < 0) {
- perror("git-ssh-upload: read ");
- return -1;
- }
- if (!size)
- return -1;
-
- if (verbose)
- fprintf(stderr, "Serving %s\n", sha1_to_hex(sha1));
-
- remote = 0;
-
- if (!has_sha1_file(sha1)) {
- fprintf(stderr, "git-ssh-upload: could not find %s\n",
- sha1_to_hex(sha1));
- remote = -1;
- }
-
- if (write_in_full(fd_out, &remote, 1) != 1)
- return 0;
-
- if (remote < 0)
- return 0;
-
- return write_sha1_to_fd(fd_out, sha1);
-}
-
-static int serve_version(int fd_in, int fd_out)
-{
- if (xread(fd_in, &remote_version, 1) < 1)
- return -1;
- write_in_full(fd_out, &local_version, 1);
- return 0;
-}
-
-static int serve_ref(int fd_in, int fd_out)
-{
- char ref[PATH_MAX];
- unsigned char sha1[20];
- int posn = 0;
- signed char remote = 0;
- do {
- if (posn >= PATH_MAX || xread(fd_in, ref + posn, 1) < 1)
- return -1;
- posn++;
- } while (ref[posn - 1]);
-
- if (verbose)
- fprintf(stderr, "Serving %s\n", ref);
-
- if (get_ref_sha1(ref, sha1))
- remote = -1;
- if (write_in_full(fd_out, &remote, 1) != 1)
- return 0;
- if (remote)
- return 0;
- write_in_full(fd_out, sha1, 20);
- return 0;
-}
-
-
-static void service(int fd_in, int fd_out) {
- char type;
- ssize_t retval;
- do {
- retval = xread(fd_in, &type, 1);
- if (retval < 1) {
- if (retval < 0)
- perror("git-ssh-upload: read ");
- return;
- }
- if (type == 'v' && serve_version(fd_in, fd_out))
- return;
- if (type == 'o' && serve_object(fd_in, fd_out))
- return;
- if (type == 'r' && serve_ref(fd_in, fd_out))
- return;
- } while (1);
-}
-
-static const char ssh_push_usage[] =
- MY_PROGRAM_NAME " [-c] [-t] [-a] [-w ref] commit-id url";
-
-int main(int argc, char **argv)
-{
- int arg = 1;
- char *commit_id;
- char *url;
- int fd_in, fd_out;
- const char *prog;
- unsigned char sha1[20];
- char hex[41];
-
- prog = getenv(COUNTERPART_ENV_NAME);
- if (!prog) prog = COUNTERPART_PROGRAM_NAME;
-
- setup_git_directory();
-
- while (arg < argc && argv[arg][0] == '-') {
- if (argv[arg][1] == 'w')
- arg++;
- arg++;
- }
- if (argc < arg + 2)
- usage(ssh_push_usage);
- commit_id = argv[arg];
- url = argv[arg + 1];
- if (get_sha1(commit_id, sha1))
- die("Not a valid object name %s", commit_id);
- memcpy(hex, sha1_to_hex(sha1), sizeof(hex));
- argv[arg] = hex;
-
- if (setup_connection(&fd_in, &fd_out, prog, url, arg, argv + 1))
- return 1;
-
- service(fd_in, fd_out);
- return 0;
-}
#include "cache.h"
+/*
+ * Used as the default ->buf value, so that people can always assume
+ * ->buf is non-NULL and NUL-terminated even for a freshly
+ * initialized strbuf.
+ */
+char strbuf_slopbuf[1];
+
void strbuf_init(struct strbuf *sb, size_t hint)
{
- memset(sb, 0, sizeof(*sb));
+ sb->alloc = sb->len = 0;
+ sb->buf = strbuf_slopbuf;
if (hint)
strbuf_grow(sb, hint);
}
void strbuf_release(struct strbuf *sb)
{
- free(sb->buf);
- memset(sb, 0, sizeof(*sb));
-}
-
-void strbuf_reset(struct strbuf *sb)
-{
- if (sb->len)
- strbuf_setlen(sb, 0);
+ if (sb->alloc) {
+ free(sb->buf);
+ strbuf_init(sb, 0);
+ }
}
-char *strbuf_detach(struct strbuf *sb)
+char *strbuf_detach(struct strbuf *sb, size_t *sz)
{
- char *res = sb->buf;
+ char *res = sb->alloc ? sb->buf : NULL;
+ if (sz)
+ *sz = sb->len;
strbuf_init(sb, 0);
return res;
}
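+
+/*
+ * Illustrative use of the new two-argument strbuf_detach() (a sketch,
+ * not part of this change): the strbuf is re-initialized and the
+ * caller owns the returned buffer.
+ *
+ *	struct strbuf sb = STRBUF_INIT;
+ *	size_t len;
+ *	char *owned;
+ *
+ *	strbuf_addstr(&sb, "hello");
+ *	owned = strbuf_detach(&sb, &len);
+ *	... len is 5; use and eventually free(owned) ...
+ */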
{
if (sb->len + extra + 1 <= sb->len)
die("you want to use way too much memory");
+ if (!sb->alloc)
+ sb->buf = NULL;
ALLOC_GROW(sb->buf, sb->len + extra + 1, sb->alloc);
}
return 0;
}
-int strbuf_read_file(struct strbuf *sb, const char *path)
+int strbuf_read_file(struct strbuf *sb, const char *path, size_t hint)
{
int fd, len;
fd = open(path, O_RDONLY);
if (fd < 0)
return -1;
- len = strbuf_read(sb, fd, 0);
+ len = strbuf_read(sb, fd, hint);
close(fd);
if (len < 0)
return -1;
* 1. the ->buf member is always malloc-ed, hence strbuf's can be used to
* build complex strings/buffers whose final size isn't easily known.
*
- * It is legal to copy the ->buf pointer away. Though if you want to reuse
- * the strbuf after that, setting ->buf to NULL isn't legal.
+ * It is NOT legal to copy the ->buf pointer away.
 * `strbuf_detach' is the operation that detaches a buffer from its shell
* while keeping the shell valid wrt its invariants.
*
#include <assert.h>
+extern char strbuf_slopbuf[];
struct strbuf {
size_t alloc;
size_t len;
char *buf;
};
-#define STRBUF_INIT { 0, 0, NULL }
+#define STRBUF_INIT { 0, 0, strbuf_slopbuf }
/*----- strbuf life cycle -----*/
extern void strbuf_init(struct strbuf *, size_t);
extern void strbuf_release(struct strbuf *);
-extern void strbuf_reset(struct strbuf *);
-extern char *strbuf_detach(struct strbuf *);
+extern char *strbuf_detach(struct strbuf *, size_t *);
extern void strbuf_attach(struct strbuf *, void *, size_t, size_t);
static inline void strbuf_swap(struct strbuf *a, struct strbuf *b) {
struct strbuf tmp = *a;
sb->len = len;
sb->buf[len] = '\0';
}
+#define strbuf_reset(sb) strbuf_setlen(sb, 0)
/*----- content related -----*/
extern void strbuf_rtrim(struct strbuf *);
extern size_t strbuf_fread(struct strbuf *, size_t, FILE *);
/* XXX: if read fails, any partial read is undone */
extern ssize_t strbuf_read(struct strbuf *, int fd, size_t hint);
-extern int strbuf_read_file(struct strbuf *sb, const char *path);
+extern int strbuf_read_file(struct strbuf *sb, const char *path, size_t hint);
extern int strbuf_getline(struct strbuf *, FILE *, int);
}
'
+test_expect_success 'invalid .gitattributes (must not crash)' '
+
+ echo "three +crlf" >>.gitattributes &&
+ git diff
+
+'
+
test_done
test sub/dir/tracked = "$(git ls-files)")
'
+test_expect_success '_gently() groks relative GIT_DIR & GIT_WORK_TREE' '
+ cd repo.git/work/sub/dir &&
+ GIT_DIR=../../.. GIT_WORK_TREE=../.. GIT_PAGER= \
+ git diff --exit-code tracked &&
+ echo changed > tracked &&
+ ! GIT_DIR=../../.. GIT_WORK_TREE=../.. GIT_PAGER= \
+ git diff --exit-code tracked
+'
+
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2007 Carl D. Worth
+#
+
+test_description='git ls-files test (--with-tree).
+
+This test runs git ls-files --with-tree and in particular in
+a scenario known to trigger a crash with some versions of git.
+'
+. ./test-lib.sh
+
+test_expect_success setup '
+
+ # The bug we are exercising requires a fair number of entries
+ # in a sub-directory so that add_index_entry will trigger a
+ # realloc.
+
+ echo file >expected &&
+ mkdir sub &&
+ bad= &&
+ for n in 0 1 2 3 4 5
+ do
+ for m in 0 1 2 3 4 5 6 7 8 9
+ do
+ num=00$n$m &&
+ >sub/file-$num &&
+ echo file-$num >>expected || {
+ bad=t
+ break
+ }
+ done && test -z "$bad" || {
+ bad=t
+ break
+ }
+ done && test -z "$bad" &&
+ git add . &&
+ git commit -m "add a bunch of files" &&
+
+ # We remove them all so that we will have something to add
+ # back with --with-tree and so that we will definitely be
+ # under the realloc size to trigger the bug.
+ rm -rf sub &&
+ git commit -a -m "remove them all" &&
+
+ # The bug also requires some entry before our directory so that
+ # prune_path will modify the_index.cache
+
+ mkdir a_directory_that_sorts_before_sub &&
+ >a_directory_that_sorts_before_sub/file &&
+ mkdir sub &&
+ >sub/file &&
+ git add .
+'
+
+# We have to run from a sub-directory to trigger prune_path
+# Then we finally get to run our --with-tree test
+cd sub
+
+test_expect_success 'git ls-files --with-tree should succeed from subdir' '
+
+ git ls-files --with-tree=HEAD~1 >../output
+
+'
+
+cd ..
+test_expect_success \
	'git ls-files --with-tree should add entries from named tree.' \
+ 'diff -u expected output'
+
+test_done
action=pick
for line in $FAKE_LINES; do
case $line in
- squash)
+ squash|edit)
action="$line";;
*)
echo sed -n "${line}s/^pick/$action/p"
'
test_expect_success 'retain authorship when squashing' '
- git show HEAD | grep "^Author: Nitfol"
+ git show HEAD | grep "^Author: Twerp Snog"
'
test_expect_success 'preserve merges with -p' '
test $HEAD = $(git rev-parse HEAD^)
'
+test_expect_success '--continue tries to commit, even for "edit"' '
+ parent=$(git rev-parse HEAD^) &&
+ test_tick &&
+ FAKE_LINES="edit 1" git rebase -i HEAD^ &&
+ echo edited > file7 &&
+ git add file7 &&
+ FAKE_COMMIT_MESSAGE="chouette!" git rebase --continue &&
+ test edited = $(git show HEAD:file7) &&
+ git show HEAD | grep chouette &&
+ test $parent = $(git rev-parse HEAD^)
+'
+
+test_expect_success 'rebase a detached HEAD' '
+ grandparent=$(git rev-parse HEAD~2) &&
+ git checkout $(git rev-parse HEAD) &&
+ test_tick &&
+ FAKE_LINES="2 1" git rebase -i HEAD~2 &&
+ test $grandparent = $(git rev-parse HEAD~2)
+'
+
test_done
. ./test-lib.sh
compare_with () {
- git show -s $1 | sed -e '1,/^$/d' -e 's/^ //' -e '$d' >current &&
+ git show -s $1 | sed -e '1,/^$/d' -e 's/^ //' >current &&
git diff current "$2"
}
Date: Mon Jun 26 00:02:00 2006 +0000
Third
-
$
git-branch -a >branches && ! grep -q origin/master branches
'
+rewound_push_setup() {
+ rm -rf parent child &&
+ mkdir parent && cd parent &&
+ git-init && echo one >file && git-add file && git-commit -m one &&
+ echo two >file && git-commit -a -m two &&
+ cd .. &&
+ git-clone parent child && cd child && git-reset --hard HEAD^
+}
+
+rewound_push_succeeded() {
+ cmp ../parent/.git/refs/heads/master .git/refs/heads/master
+}
+
+rewound_push_failed() {
+ if rewound_push_succeeded
+ then
+ false
+ else
+ true
+ fi
+}
+
+test_expect_success \
+ 'pushing explicit refspecs respects forcing' '
+ rewound_push_setup &&
+ if git-send-pack ../parent/.git refs/heads/master:refs/heads/master
+ then
+ false
+ else
+ true
+ fi && rewound_push_failed &&
+ git-send-pack ../parent/.git +refs/heads/master:refs/heads/master &&
+ rewound_push_succeeded
+'
+
+test_expect_success \
+ 'pushing wildcard refspecs respects forcing' '
+ rewound_push_setup &&
+ if git-send-pack ../parent/.git refs/heads/*:refs/heads/*
+ then
+ false
+ else
+ true
+ fi && rewound_push_failed &&
+ git-send-pack ../parent/.git +refs/heads/*:refs/heads/* &&
+ rewound_push_succeeded
+'
+
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Josh England
+#
+
+test_description='Test the post-merge hook.'
+. ./test-lib.sh
+
+test_expect_success setup '
+ echo Data for commit0. >a &&
+ git update-index --add a &&
+ tree0=$(git write-tree) &&
+ commit0=$(echo setup | git commit-tree $tree0) &&
+ echo Changed data for commit1. >a &&
+ git update-index a &&
+ tree1=$(git write-tree) &&
+ commit1=$(echo modify | git commit-tree $tree1 -p $commit0) &&
+ git update-ref refs/heads/master $commit0 &&
+ git-clone ./. clone1 &&
+ GIT_DIR=clone1/.git git update-index --add a &&
+ git-clone ./. clone2 &&
+ GIT_DIR=clone2/.git git update-index --add a
+'
+
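+# The post-merge hook is passed a single argument: "0" for a normal
+# merge and "1" for a squash merge, which the greps below check.
+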
+for clone in 1 2; do
+ cat >clone${clone}/.git/hooks/post-merge <<'EOF'
+#!/bin/sh
+echo $@ >> $GIT_DIR/post-merge.args
+EOF
+ chmod u+x clone${clone}/.git/hooks/post-merge
+done
+
+test_expect_failure 'post-merge does not run for up-to-date' '
+ GIT_DIR=clone1/.git git merge $commit0 &&
+ test -e clone1/.git/post-merge.args
+'
+
+test_expect_success 'post-merge runs as expected' '
+ GIT_DIR=clone1/.git git merge $commit1 &&
+ test -e clone1/.git/post-merge.args
+'
+
+test_expect_success 'post-merge from normal merge receives the right argument' '
+ grep 0 clone1/.git/post-merge.args
+'
+
+test_expect_success 'post-merge from squash merge runs as expected' '
+ GIT_DIR=clone2/.git git merge --squash $commit1 &&
+ test -e clone2/.git/post-merge.args
+'
+
+test_expect_success 'post-merge from squash merge receives the right argument' '
+ grep 1 clone2/.git/post-merge.args
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Josh England
+#
+
+test_description='Test the post-checkout hook.'
+. ./test-lib.sh
+
+test_expect_success setup '
+ echo Data for commit0. >a &&
+ echo Data for commit0. >b &&
+ git update-index --add a &&
+ git update-index --add b &&
+ tree0=$(git write-tree) &&
+ commit0=$(echo setup | git commit-tree $tree0) &&
+ git update-ref refs/heads/master $commit0 &&
+ git-clone ./. clone1 &&
+ git-clone ./. clone2 &&
+ GIT_DIR=clone2/.git git branch -a new2 &&
+ echo Data for commit1. >clone2/b &&
+ GIT_DIR=clone2/.git git add clone2/b &&
+ GIT_DIR=clone2/.git git commit -m new2
+'
+
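+# The post-checkout hook receives three arguments: the previous HEAD,
+# the new HEAD, and a flag that is "1" when switching branches and "0"
+# when checking out files, as the tests below verify.
+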
+for clone in 1 2; do
+ cat >clone${clone}/.git/hooks/post-checkout <<'EOF'
+#!/bin/sh
+echo $@ > $GIT_DIR/post-checkout.args
+EOF
+ chmod u+x clone${clone}/.git/hooks/post-checkout
+done
+
+test_expect_success 'post-checkout runs as expected' '
+ GIT_DIR=clone1/.git git checkout master &&
+ test -e clone1/.git/post-checkout.args
+'
+
+test_expect_success 'post-checkout receives the right arguments with HEAD unchanged' '
+ old=$(awk "{print \$1}" clone1/.git/post-checkout.args) &&
+ new=$(awk "{print \$2}" clone1/.git/post-checkout.args) &&
+ flag=$(awk "{print \$3}" clone1/.git/post-checkout.args) &&
+ test $old = $new -a $flag = 1
+'
+
+test_expect_success 'post-checkout args are correct with git checkout -b' '
+ GIT_DIR=clone1/.git git checkout -b new1 &&
+ old=$(awk "{print \$1}" clone1/.git/post-checkout.args) &&
+ new=$(awk "{print \$2}" clone1/.git/post-checkout.args) &&
+ flag=$(awk "{print \$3}" clone1/.git/post-checkout.args) &&
+ test $old = $new -a $flag = 1
+'
+
+test_expect_success 'post-checkout receives the right args with HEAD changed' '
+ GIT_DIR=clone2/.git git checkout new2 &&
+ old=$(awk "{print \$1}" clone2/.git/post-checkout.args) &&
+ new=$(awk "{print \$2}" clone2/.git/post-checkout.args) &&
+ flag=$(awk "{print \$3}" clone2/.git/post-checkout.args) &&
+ test $old != $new -a $flag = 1
+'
+
+test_expect_success 'post-checkout receives the right args when not switching branches' '
+ GIT_DIR=clone2/.git git checkout master b &&
+ old=$(awk "{print \$1}" clone2/.git/post-checkout.args) &&
+ new=$(awk "{print \$2}" clone2/.git/post-checkout.args) &&
+ flag=$(awk "{print \$3}" clone2/.git/post-checkout.args) &&
+ test $old = $new -a $flag = 0
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='git remote porcelain-ish'
+
+. ./test-lib.sh
+
+GIT_CONFIG=.git/config
+export GIT_CONFIG
+
+setup_repository () {
+ mkdir "$1" && (
+ cd "$1" &&
+ git init &&
+ >file &&
+ git add file &&
+ git commit -m "Initial" &&
+ git checkout -b side &&
+ >elif &&
+ git add elif &&
+ git commit -m "Second" &&
+ git checkout master
+ )
+}
+
+tokens_match () {
+ echo "$1" | tr ' ' '\012' | sort | sed -e '/^$/d' >expect &&
+ echo "$2" | tr ' ' '\012' | sort | sed -e '/^$/d' >actual &&
+ diff -u expect actual
+}
+
+check_remote_track () {
+ actual=$(git remote show "$1" | sed -n -e '$p') &&
+ shift &&
+ tokens_match "$*" "$actual"
+}
+
+check_tracking_branch () {
+ f="" &&
+ r=$(git for-each-ref "--format=%(refname)" |
+ sed -ne "s|^refs/remotes/$1/||p") &&
+ shift &&
+ tokens_match "$*" "$r"
+}
+
+test_expect_success setup '
+
+ setup_repository one &&
+ setup_repository two &&
+ (
+ cd two && git branch another
+ ) &&
+ git clone one test
+
+'
+
+test_expect_success 'remote information for the origin' '
+(
+ cd test &&
+ tokens_match origin "$(git remote)" &&
+ check_remote_track origin master side &&
+ check_tracking_branch origin HEAD master side
+)
+'
+
+test_expect_success 'add another remote' '
+(
+ cd test &&
+ git remote add -f second ../two &&
+ tokens_match "origin second" "$(git remote)" &&
+ check_remote_track origin master side &&
+ check_remote_track second master side another &&
+ check_tracking_branch second master side another &&
+ git for-each-ref "--format=%(refname)" refs/remotes |
+ sed -e "/^refs\/remotes\/origin\//d" \
+ -e "/^refs\/remotes\/second\//d" >actual &&
+ >expect &&
+ diff -u expect actual
+)
+'
+
+test_expect_success 'remove remote' '
+(
+ cd test &&
+ git remote rm second
+)
+'
+
+test_expect_success 'remove remote: only origin and its refs remain' '
+(
+ cd test &&
+ tokens_match origin "$(git remote)" &&
+ check_remote_track origin master side &&
+ git for-each-ref "--format=%(refname)" refs/remotes |
+ sed -e "/^refs\/remotes\/origin\//d" >actual &&
+ >expect &&
+ diff -u expect actual
+)
+'
+
+test_done
cut -f -2 .git/FETCH_HEAD >actual &&
diff expected actual'
+test_expect_success 'fetch tags when there are no tags' '
+
+ cd "$D" &&
+
+ mkdir notags &&
+ cd notags &&
+ git init &&
+
+ git fetch -t ..
+
+'
+
test_expect_success 'fetch following tags' '
cd "$D" &&
'
+test "$TEST_RSYNC" && {
+test_expect_success 'fetch via rsync' '
+ git pack-refs &&
+ mkdir rsynced &&
+ cd rsynced &&
+ git init &&
+ git fetch rsync://127.0.0.1$(pwd)/../.git master:refs/heads/master &&
+ git gc --prune &&
+ test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
+ git fsck --full
+'
+
+test_expect_success 'push via rsync' '
+ mkdir ../rsynced2 &&
+ (cd ../rsynced2 &&
+ git init) &&
+ git push rsync://127.0.0.1$(pwd)/../rsynced2/.git master &&
+ cd ../rsynced2 &&
+ git gc --prune &&
+ test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
+ git fsck --full
+'
+
+test_expect_success 'push --all via rsync' '
+ cd .. &&
+ mkdir rsynced3 &&
+ (cd rsynced3 &&
+ git init) &&
+ git push --all rsync://127.0.0.1$(pwd)/rsynced3/.git &&
+ cd rsynced3 &&
+ test $(git rev-parse master) = $(cd .. && git rev-parse master) &&
+ git fsck --full
+'
+}
+
+test_expect_success 'fetch with a non-applying branch.<name>.merge' '
+ git config branch.master.remote yeti &&
+ git config branch.master.merge refs/heads/bigfoot &&
+ git config remote.blub.url one &&
+ git config remote.blub.fetch "refs/heads/*:refs/remotes/one/*" &&
+ git fetch blub
+'
+
test_done
git config branch.br-$remote-merge.merge refs/heads/three &&
git config branch.br-$remote-octopus.remote $remote &&
git config branch.br-$remote-octopus.merge refs/heads/one &&
- git config --add branch.br-$remote-octopus.merge two &&
- git config --add branch.br-$remote-octopus.merge remotes/rem/three
+ git config --add branch.br-$remote-octopus.merge two
done
'
# br-branches-default-merge
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-default-merge branches-default
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-default-octopus
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-default-octopus branches-default
-754b754407bf032e9a2f9d5a9ad05ca79a6b228f branch 'master' of ../
+754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-merge
-8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge branch 'one' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-merge branches-one
-8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge branch 'one' of ../
+0567da4d5edd2ff4bb292a465ba9e64dcad9536b branch 'three' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-octopus
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
# br-branches-one-octopus branches-one
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge branch 'master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 branch 'one' of ../
0567da4d5edd2ff4bb292a465ba9e64dcad9536b not-for-merge branch 'three' of ../
-6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 not-for-merge branch 'two' of ../
+6134ee8f857693b96ff1cc98d3e2fd62b199e5a8 branch 'two' of ../
754b754407bf032e9a2f9d5a9ad05ca79a6b228f not-for-merge tag 'tag-master' of ../
8e32a6d901327a23ef831511badce7bf3bf46689 not-for-merge tag 'tag-one' of ../
22feea448b023a2d864ef94b013735af34d238ba not-for-merge tag 'tag-one-tree' of ../
'
+test_expect_success 'push with dry-run' '
+
+ mk_test heads/master &&
+ cd testrepo &&
+ old_commit=$(git show-ref -s --verify refs/heads/master) &&
+ cd .. &&
+ git push --dry-run testrepo &&
+ check_push_result $old_commit heads/master
+'
+
+test_expect_success 'push updates local refs' '
+
+ rm -rf parent child &&
+ mkdir parent && cd parent && git init &&
+ echo one >foo && git add foo && git commit -m one &&
+ cd .. &&
+ git clone parent child && cd child &&
+ echo two >foo && git commit -a -m two &&
+ git push &&
+ test $(git rev-parse master) = $(git rev-parse remotes/origin/master)
+
+'
+
+test_expect_success 'push does not update local refs on failure' '
+
+ rm -rf parent child &&
+ mkdir parent && cd parent && git init &&
+ echo one >foo && git add foo && git commit -m one &&
+ echo exit 1 >.git/hooks/pre-receive &&
+ chmod +x .git/hooks/pre-receive &&
+ cd .. &&
+ git clone parent child && cd child &&
+ echo two >foo && git commit -a -m two || exit 1
+ git push && exit 1
+ test $(git rev-parse master) != $(git rev-parse remotes/origin/master)
+
+'
+
test_done
test_expect_success 'pulling from reference' \
'cd C &&
-git pull ../B'
+git pull ../B master'
cd "$base_dir"
cd "$base_dir"
test_expect_success 'pulling from reference' \
-'cd D && git pull ../B'
+'cd D && git pull ../B master'
cd "$base_dir"
test_format encoding %e <<'EOF'
commit 131a310eb913d107dd3c09a65d1651175898735d
-<unknown>
commit 86c75cfd708a0e5868dc876ed5b8bb66c80b4873
-<unknown>
EOF
test_format subject %s <<'EOF'
test_format body %b <<'EOF'
commit 131a310eb913d107dd3c09a65d1651175898735d
-<unknown>
commit 86c75cfd708a0e5868dc876ed5b8bb66c80b4873
-<unknown>
EOF
test_format colors %Credfoo%Cgreenbar%Cbluebaz%Cresetxyzzy <<'EOF'
commit f58db70b055c5718631e5c61528b28b12090cdea
iso8859-1
commit 131a310eb913d107dd3c09a65d1651175898735d
-<unknown>
commit 86c75cfd708a0e5868dc876ed5b8bb66c80b4873
-<unknown>
EOF
test_format complex-subject %s <<'EOF'
include an iso8859 character: ¡bueno!
commit 131a310eb913d107dd3c09a65d1651175898735d
-<unknown>
commit 86c75cfd708a0e5868dc876ed5b8bb66c80b4873
-<unknown>
EOF
test_done
git bisect next
'
+# $HASH1 is good, $HASH4 is bad, we skip $HASH3
+# but $HASH2 is bad,
+# so we should find $HASH2 as the first bad commit
+test_expect_success 'bisect skip: successful result' '
+ git bisect reset &&
+ git bisect start $HASH4 $HASH1 &&
+ git bisect skip &&
+ git bisect bad > my_bisect_log.txt &&
+ grep "$HASH2 is first bad commit" my_bisect_log.txt &&
+ git bisect reset
+'
+
+# $HASH1 is good, $HASH4 is bad, we skip $HASH3 and $HASH2
+# so we should not be able to tell the first bad commit
+# among $HASH2, $HASH3 and $HASH4
+test_expect_success 'bisect skip: cannot tell between 3 commits' '
+ git bisect start $HASH4 $HASH1 &&
+ git bisect skip || return 1
+
+ if git bisect skip > my_bisect_log.txt
+ then
+ echo Oops, should have failed.
+ false
+ else
+ test $? -eq 2 &&
+ grep "first bad commit could be any of" my_bisect_log.txt &&
+ ! grep $HASH1 my_bisect_log.txt &&
+ grep $HASH2 my_bisect_log.txt &&
+ grep $HASH3 my_bisect_log.txt &&
+ grep $HASH4 my_bisect_log.txt &&
+ git bisect reset
+ fi
+'
+
+# $HASH1 is good, $HASH4 is bad, we skip $HASH3
+# but $HASH2 is good,
+# so we should not be able to tell the first bad commit
+# among $HASH3 and $HASH4
+test_expect_success 'bisect skip: cannot tell between 2 commits' '
+ git bisect start $HASH4 $HASH1 &&
+ git bisect skip || return 1
+
+ if git bisect good > my_bisect_log.txt
+ then
+ echo Oops, should have failed.
+ false
+ else
+ test $? -eq 2 &&
+ grep "first bad commit could be any of" my_bisect_log.txt &&
+ ! grep $HASH1 my_bisect_log.txt &&
+ ! grep $HASH2 my_bisect_log.txt &&
+ grep $HASH3 my_bisect_log.txt &&
+ grep $HASH4 my_bisect_log.txt &&
+ git bisect reset
+ fi
+'
+
# We want to automatically find the commit that
# introduced "Another" into hello.
test_expect_success \
grep "$HASH4 is first bad commit" my_bisect_log.txt &&
git bisect reset'
+# $HASH1 is good, $HASH5 is bad, we skip $HASH3
+# but $HASH4 is good,
+# so we should find $HASH5 as the first bad commit
+HASH5=
+test_expect_success 'bisect skip: add line and then a new test' '
+ add_line_into_file "5: Another new line." hello &&
+ HASH5=$(git rev-parse --verify HEAD) &&
+ git bisect start $HASH5 $HASH1 &&
+ git bisect skip &&
+ git bisect good > my_bisect_log.txt &&
+ grep "$HASH5 is first bad commit" my_bisect_log.txt &&
+	git bisect log > log_to_replay.txt &&
+ git bisect reset
+'
+
+test_expect_success 'bisect skip and bisect replay' '
+ git bisect replay log_to_replay.txt > my_bisect_log.txt &&
+ grep "$HASH5 is first bad commit" my_bisect_log.txt &&
+ git bisect reset
+'
+
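+# The generated test_script.sh exits 125 for commits that must be
+# skipped: "git bisect run" treats exit code 125 as "cannot test".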
+HASH6=
+test_expect_success 'bisect run & skip: cannot tell between 2' '
+ add_line_into_file "6: Yet a line." hello &&
+ HASH6=$(git rev-parse --verify HEAD) &&
+ echo "#"\!"/bin/sh" > test_script.sh &&
+ echo "tail -1 hello | grep Ciao > /dev/null && exit 125" >> test_script.sh &&
+ echo "grep line hello > /dev/null" >> test_script.sh &&
+ echo "test \$? -ne 0" >> test_script.sh &&
+ chmod +x test_script.sh &&
+ git bisect start $HASH6 $HASH1 &&
+ if git bisect run ./test_script.sh > my_bisect_log.txt
+ then
+ echo Oops, should have failed.
+ false
+ else
+ test $? -eq 2 &&
+ grep "first bad commit could be any of" my_bisect_log.txt &&
+ ! grep $HASH3 my_bisect_log.txt &&
+ ! grep $HASH6 my_bisect_log.txt &&
+ grep $HASH4 my_bisect_log.txt &&
+ grep $HASH5 my_bisect_log.txt
+ fi
+'
+
+HASH7=
+test_expect_success 'bisect run & skip: find first bad' '
+ git bisect reset &&
+ add_line_into_file "7: Should be the last line." hello &&
+ HASH7=$(git rev-parse --verify HEAD) &&
+ echo "#"\!"/bin/sh" > test_script.sh &&
+ echo "tail -1 hello | grep Ciao > /dev/null && exit 125" >> test_script.sh &&
+ echo "tail -1 hello | grep day > /dev/null && exit 125" >> test_script.sh &&
+ echo "grep Yet hello > /dev/null" >> test_script.sh &&
+ echo "test \$? -ne 0" >> test_script.sh &&
+ chmod +x test_script.sh &&
+ git bisect start $HASH7 $HASH1 &&
+ git bisect run ./test_script.sh > my_bisect_log.txt &&
+ grep "$HASH6 is first bad commit" my_bisect_log.txt
+'
+
#
#
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2007 Andy Parkins
+#
+
+test_description='for-each-ref test'
+
+. ./test-lib.sh
+
+# Mon Jul 3 15:18:43 2006 +0000
+datestamp=1151939923
+setdate_and_increment () {
+ GIT_COMMITTER_DATE="$datestamp +0200"
+ datestamp=$(expr "$datestamp" + 1)
+ GIT_AUTHOR_DATE="$datestamp +0200"
+ datestamp=$(expr "$datestamp" + 1)
+ export GIT_COMMITTER_DATE GIT_AUTHOR_DATE
+}
+
+test_expect_success 'Create sample commit with known timestamp' '
+ setdate_and_increment &&
+ echo "Using $datestamp" > one &&
+ git add one &&
+ git commit -m "Initial" &&
+ setdate_and_increment &&
+ git tag -a -m "Tagging at $datestamp" testtag
+'
+
+test_expect_success 'Check atom names are valid' '
+ bad=
+ for token in \
+ refname objecttype objectsize objectname tree parent \
+ numparent object type author authorname authoremail \
+ authordate committer committername committeremail \
+ committerdate tag tagger taggername taggeremail \
+ taggerdate creator creatordate subject body contents
+ do
+ git for-each-ref --format="$token=%($token)" refs/heads || {
+ bad=$token
+ break
+ }
+ done
+ test -z "$bad"
+'
+
+test_expect_failure 'Check invalid atom names are errors' '
+ git-for-each-ref --format="%(INVALID)" refs/heads
+'
+
+test_expect_success 'Check format specifiers are ignored in naming date atoms' '
+ git-for-each-ref --format="%(authordate)" refs/heads &&
+ git-for-each-ref --format="%(authordate:default) %(authordate)" refs/heads &&
+ git-for-each-ref --format="%(authordate) %(authordate:default)" refs/heads &&
+ git-for-each-ref --format="%(authordate:default) %(authordate:default)" refs/heads
+'
+
+test_expect_success 'Check valid format specifiers for date fields' '
+ git-for-each-ref --format="%(authordate:default)" refs/heads &&
+ git-for-each-ref --format="%(authordate:relative)" refs/heads &&
+ git-for-each-ref --format="%(authordate:short)" refs/heads &&
+ git-for-each-ref --format="%(authordate:local)" refs/heads &&
+ git-for-each-ref --format="%(authordate:iso8601)" refs/heads &&
+ git-for-each-ref --format="%(authordate:rfc2822)" refs/heads
+'
+
+test_expect_failure 'Check invalid format specifiers are errors' '
+ git-for-each-ref --format="%(authordate:INVALID)" refs/heads
+'
+
+cat >expected <<\EOF
+'refs/heads/master' 'Mon Jul 3 17:18:43 2006 +0200' 'Mon Jul 3 17:18:44 2006 +0200'
+'refs/tags/testtag' 'Mon Jul 3 17:18:45 2006 +0200'
+EOF
+
+test_expect_success 'Check unformatted date fields output' '
+ (git for-each-ref --shell --format="%(refname) %(committerdate) %(authordate)" refs/heads &&
+ git for-each-ref --shell --format="%(refname) %(taggerdate)" refs/tags) >actual &&
+ git diff expected actual
+'
+
+test_expect_success 'Check format "default" formatted date fields output' '
+ f=default &&
+ (git for-each-ref --shell --format="%(refname) %(committerdate:$f) %(authordate:$f)" refs/heads &&
+ git for-each-ref --shell --format="%(refname) %(taggerdate:$f)" refs/tags) >actual &&
+ git diff expected actual
+'
+
+# We cannot check the "relative" format against fixed expected output,
+# because it depends on when this script runs and we cannot fake the
+# current time for git. Instead, just make sure that "relative" does
+# not exit in error
+#
+#cat >expected <<\EOF
+#
+#EOF
+#
+test_expect_success 'Check format "relative" date fields output' '
+ f=relative &&
+ (git for-each-ref --shell --format="%(refname) %(committerdate:$f) %(authordate:$f)" refs/heads &&
+ git for-each-ref --shell --format="%(refname) %(taggerdate:$f)" refs/tags) >actual
+'
+
+cat >expected <<\EOF
+'refs/heads/master' '2006-07-03' '2006-07-03'
+'refs/tags/testtag' '2006-07-03'
+EOF
+
+test_expect_success 'Check format "short" date fields output' '
+ f=short &&
+ (git for-each-ref --shell --format="%(refname) %(committerdate:$f) %(authordate:$f)" refs/heads &&
+ git for-each-ref --shell --format="%(refname) %(taggerdate:$f)" refs/tags) >actual &&
+ git diff expected actual
+'
+
+cat >expected <<\EOF
+'refs/heads/master' 'Mon Jul 3 15:18:43 2006' 'Mon Jul 3 15:18:44 2006'
+'refs/tags/testtag' 'Mon Jul 3 15:18:45 2006'
+EOF
+
+test_expect_success 'Check format "local" date fields output' '
+ f=local &&
+ (git for-each-ref --shell --format="%(refname) %(committerdate:$f) %(authordate:$f)" refs/heads &&
+ git for-each-ref --shell --format="%(refname) %(taggerdate:$f)" refs/tags) >actual &&
+ git diff expected actual
+'
+
+cat >expected <<\EOF
+'refs/heads/master' '2006-07-03 17:18:43 +0200' '2006-07-03 17:18:44 +0200'
+'refs/tags/testtag' '2006-07-03 17:18:45 +0200'
+EOF
+
+test_expect_success 'Check format "iso8601" date fields output' '
+ f=iso8601 &&
+ (git for-each-ref --shell --format="%(refname) %(committerdate:$f) %(authordate:$f)" refs/heads &&
+ git for-each-ref --shell --format="%(refname) %(taggerdate:$f)" refs/tags) >actual &&
+ git diff expected actual
+'
+
+cat >expected <<\EOF
+'refs/heads/master' 'Mon, 3 Jul 2006 17:18:43 +0200' 'Mon, 3 Jul 2006 17:18:44 +0200'
+'refs/tags/testtag' 'Mon, 3 Jul 2006 17:18:45 +0200'
+EOF
+
+test_expect_success 'Check format "rfc2822" date fields output' '
+ f=rfc2822 &&
+ (git for-each-ref --shell --format="%(refname) %(committerdate:$f) %(authordate:$f)" refs/heads &&
+ git for-each-ref --shell --format="%(refname) %(taggerdate:$f)" refs/tags) >actual &&
+ git diff expected actual
+'
+
+test_done
'
test_expect_success 'test that the file was renamed' '
- test d = $(git show HEAD:doh)
+ test d = $(git show HEAD:doh) &&
+ test -f doh &&
+ test d = $(cat doh)
'
git tag oldD HEAD~4
. ./test-lib.sh
+OLD_TERM="$TERM"
+
for i in GIT_EDITOR core_editor EDITOR VISUAL vi
do
cat >e-$i.sh <<-EOF
'
done
+TERM="$OLD_TERM"
+
test_done
git add foo &&
GIT_EDITOR=../t7500/add-content git commit --template "$TEMPLATE" \
-m "command line msg" &&
- commit_msg_is "command line msg<unknown>"
+ commit_msg_is "command line msg"
'
test_expect_success 'commit message from file should override template' '
echo "standard input msg" |
GIT_EDITOR=../t7500/add-content git commit \
--template "$TEMPLATE" --file - &&
- commit_msg_is "standard input msg<unknown>"
+ commit_msg_is "standard input msg"
'
test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2007 Lars Hjemli
+#
+
+test_description='git-merge
+
+Testing basic merge operations/option parsing.'
+
+. ./test-lib.sh
+
+cat >file <<EOF
+1
+2
+3
+4
+5
+6
+7
+8
+9
+EOF
+
+cat >file.1 <<EOF
+1 X
+2
+3
+4
+5
+6
+7
+8
+9
+EOF
+
+cat >file.5 <<EOF
+1
+2
+3
+4
+5 X
+6
+7
+8
+9
+EOF
+
+cat >file.9 <<EOF
+1
+2
+3
+4
+5
+6
+7
+8
+9 X
+EOF
+
+cat >result.1 <<EOF
+1 X
+2
+3
+4
+5
+6
+7
+8
+9
+EOF
+
+cat >result.1-5 <<EOF
+1 X
+2
+3
+4
+5 X
+6
+7
+8
+9
+EOF
+
+cat >result.1-5-9 <<EOF
+1 X
+2
+3
+4
+5 X
+6
+7
+8
+9 X
+EOF
+
+create_merge_msgs() {
+ echo "Merge commit 'c2'" >msg.1-5 &&
+ echo "Merge commit 'c2'; commit 'c3'" >msg.1-5-9 &&
+ echo "Squashed commit of the following:" >squash.1 &&
+ echo >>squash.1 &&
+ git log --no-merges ^HEAD c1 >>squash.1 &&
+ echo "Squashed commit of the following:" >squash.1-5 &&
+ echo >>squash.1-5 &&
+ git log --no-merges ^HEAD c2 >>squash.1-5 &&
+ echo "Squashed commit of the following:" >squash.1-5-9 &&
+ echo >>squash.1-5-9 &&
+ git log --no-merges ^HEAD c2 c3 >>squash.1-5-9
+}
+
+verify_diff() {
+ if ! diff -u "$1" "$2"
+ then
+ echo "$3"
+ false
+ fi
+}
+
+verify_merge() {
+ verify_diff "$2" "$1" "[OOPS] bad merge result" &&
+ if test $(git ls-files -u | wc -l) -gt 0
+ then
+ echo "[OOPS] unmerged files"
+ false
+ fi &&
+ if ! git diff --exit-code
+ then
+ echo "[OOPS] working tree != index"
+ false
+ fi &&
+ if test -n "$3"
+ then
+ git show -s --pretty=format:%s HEAD >msg.act &&
+ verify_diff "$3" msg.act "[OOPS] bad merge message"
+ fi
+}
+
+verify_head() {
+ if test "$1" != "$(git rev-parse HEAD)"
+ then
+ echo "[OOPS] HEAD != $1"
+ false
+ fi
+}
+
+verify_parents() {
+ i=1
+ while test $# -gt 0
+ do
+ if test "$1" != "$(git rev-parse HEAD^$i)"
+ then
+ echo "[OOPS] HEAD^$i != $1"
+ return 1
+ fi
+ i=$(expr $i + 1)
+ shift
+ done
+}
+
+verify_mergeheads() {
+ i=1
+ if ! test -f .git/MERGE_HEAD
+ then
+ echo "[OOPS] MERGE_HEAD is missing"
+ false
+ fi &&
+ while test $# -gt 0
+ do
+ head=$(head -n $i .git/MERGE_HEAD | tail -n 1)
+ if test "$1" != "$head"
+ then
+ echo "[OOPS] MERGE_HEAD $i != $1"
+ return 1
+ fi
+ i=$(expr $i + 1)
+ shift
+ done
+}
+
+verify_no_mergehead() {
+ if test -f .git/MERGE_HEAD
+ then
+ echo "[OOPS] MERGE_HEAD exists"
+ false
+ fi
+}
+
+
+test_expect_success 'setup' '
+ git add file &&
+ test_tick &&
+ git commit -m "commit 0" &&
+ git tag c0 &&
+ c0=$(git rev-parse HEAD) &&
+ cp file.1 file &&
+ git add file &&
+ test_tick &&
+ git commit -m "commit 1" &&
+ git tag c1 &&
+ c1=$(git rev-parse HEAD) &&
+ git reset --hard "$c0" &&
+ cp file.5 file &&
+ git add file &&
+ test_tick &&
+ git commit -m "commit 2" &&
+ git tag c2 &&
+ c2=$(git rev-parse HEAD) &&
+ git reset --hard "$c0" &&
+ cp file.9 file &&
+ git add file &&
+ test_tick &&
+ git commit -m "commit 3" &&
+ git tag c3 &&
+ c3=$(git rev-parse HEAD) &&
+ git reset --hard "$c0" &&
+ create_merge_msgs
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'test option parsing' '
+ if git merge -$ c1
+ then
+ echo "[OOPS] -$ accepted"
+ false
+ fi &&
+ if git merge --no-such c1
+ then
+ echo "[OOPS] --no-such accepted"
+ false
+ fi &&
+ if git merge -s foobar c1
+ then
+ echo "[OOPS] -s foobar accepted"
+ false
+ fi &&
+ if git merge -s=foobar c1
+ then
+ echo "[OOPS] -s=foobar accepted"
+ false
+ fi &&
+ if git merge -m
+ then
+ echo "[OOPS] missing commit msg accepted"
+ false
+ fi &&
+ if git merge
+ then
+ echo "[OOPS] missing commit references accepted"
+ false
+ fi
+'
+
+test_expect_success 'merge c0 with c1' '
+ git reset --hard c0 &&
+ git merge c1 &&
+ verify_merge file result.1 &&
+ verify_head "$c1"
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2' '
+ git reset --hard c1 &&
+ test_tick &&
+ git merge c2 &&
+ verify_merge file result.1-5 msg.1-5 &&
+ verify_parents $c1 $c2
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 and c3' '
+ git reset --hard c1 &&
+ test_tick &&
+ git merge c2 c3 &&
+ verify_merge file result.1-5-9 msg.1-5-9 &&
+ verify_parents $c1 $c2 $c3
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c0 with c1 (no-commit)' '
+ git reset --hard c0 &&
+ git merge --no-commit c1 &&
+ verify_merge file result.1 &&
+ verify_head $c1
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 (no-commit)' '
+ git reset --hard c1 &&
+ git merge --no-commit c2 &&
+ verify_merge file result.1-5 &&
+ verify_head $c1 &&
+ verify_mergeheads $c2
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 and c3 (no-commit)' '
+ git reset --hard c1 &&
+ git merge --no-commit c2 c3 &&
+ verify_merge file result.1-5-9 &&
+ verify_head $c1 &&
+ verify_mergeheads $c2 $c3
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c0 with c1 (squash)' '
+ git reset --hard c0 &&
+ git merge --squash c1 &&
+ verify_merge file result.1 &&
+ verify_head $c0 &&
+ verify_no_mergehead &&
+ verify_diff squash.1 .git/SQUASH_MSG "[OOPS] bad squash message"
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 (squash)' '
+ git reset --hard c1 &&
+ git merge --squash c2 &&
+ verify_merge file result.1-5 &&
+ verify_head $c1 &&
+ verify_no_mergehead &&
+ verify_diff squash.1-5 .git/SQUASH_MSG "[OOPS] bad squash message"
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 and c3 (squash)' '
+ git reset --hard c1 &&
+ git merge --squash c2 c3 &&
+ verify_merge file result.1-5-9 &&
+ verify_head $c1 &&
+ verify_no_mergehead &&
+ verify_diff squash.1-5-9 .git/SQUASH_MSG "[OOPS] bad squash message"
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 (no-commit in config)' '
+ git reset --hard c1 &&
+ git config branch.master.mergeoptions "--no-commit" &&
+ git merge c2 &&
+ verify_merge file result.1-5 &&
+ verify_head $c1 &&
+ verify_mergeheads $c2
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 (squash in config)' '
+ git reset --hard c1 &&
+ git config branch.master.mergeoptions "--squash" &&
+ git merge c2 &&
+ verify_merge file result.1-5 &&
+ verify_head $c1 &&
+ verify_no_mergehead &&
+ verify_diff squash.1-5 .git/SQUASH_MSG "[OOPS] bad squash message"
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'override config option -n' '
+ git reset --hard c1 &&
+ git config branch.master.mergeoptions "-n" &&
+ test_tick &&
+ git merge --summary c2 >diffstat.txt &&
+ verify_merge file result.1-5 msg.1-5 &&
+ verify_parents $c1 $c2 &&
+ if ! grep -e "^ file | \+2 +-$" diffstat.txt
+ then
+ echo "[OOPS] diffstat was not generated"
+ fi
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'override config option --summary' '
+ git reset --hard c1 &&
+ git config branch.master.mergeoptions "--summary" &&
+ test_tick &&
+ git merge -n c2 >diffstat.txt &&
+ verify_merge file result.1-5 msg.1-5 &&
+ verify_parents $c1 $c2 &&
+ if grep -e "^ file | \+2 +-$" diffstat.txt
+ then
+ echo "[OOPS] diffstat was generated"
+ false
+ fi
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 (override --no-commit)' '
+ git reset --hard c1 &&
+ git config branch.master.mergeoptions "--no-commit" &&
+ test_tick &&
+ git merge --commit c2 &&
+ verify_merge file result.1-5 msg.1-5 &&
+ verify_parents $c1 $c2
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c1 with c2 (override --squash)' '
+ git reset --hard c1 &&
+ git config branch.master.mergeoptions "--squash" &&
+ test_tick &&
+ git merge --no-squash c2 &&
+ verify_merge file result.1-5 msg.1-5 &&
+ verify_parents $c1 $c2
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c0 with c1 (no-ff)' '
+ git reset --hard c0 &&
+ test_tick &&
+ git merge --no-ff c1 &&
+ verify_merge file result.1 &&
+ verify_parents $c0 $c1
+'
+
+test_debug 'gitk --all'
+
+test_expect_success 'merge c0 with c1 (ff overrides no-ff)' '
+ git reset --hard c0 &&
+ git config branch.master.mergeoptions "--no-ff" &&
+ git merge --ff c1 &&
+ verify_merge file result.1 &&
+ verify_head $c1
+'
+
+test_debug 'gitk --all'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+# Based on a test case submitted by Björn Steinbrink.
+
+test_description='git blame on conflicted files'
+. ./test-lib.sh
+
+test_expect_success 'setup first case' '
+ # Create the old file
+ echo "Old line" > file1 &&
+ git add file1 &&
+ git commit --author "Old Line <ol@localhost>" -m file1.a &&
+
+ # Branch
+ git checkout -b foo &&
+
+ # Do an ugly move and change
+ git rm file1 &&
+ echo "New line ..." > file2 &&
+ echo "... and more" >> file2 &&
+ git add file2 &&
+ git commit --author "U Gly <ug@localhost>" -m ugly &&
+
+ # Back to master and change something
+ git checkout master &&
+ echo "
+
+bla" >> file1 &&
+ git commit --author "Old Line <ol@localhost>" -a -m file1.b &&
+
+ # Back to foo and merge master
+ git checkout foo &&
+ if git merge master; then
+ echo needed conflict here
+ exit 1
+ else
+ echo merge failed - resolving automatically
+ fi &&
+ echo "New line ...
+... and more
+
+bla
+Even more" > file2 &&
+ git rm file1 &&
+ git commit --author "M Result <mr@localhost>" -a -m merged &&
+
+ # Back to master and change file1 again
+ git checkout master &&
+ sed s/bla/foo/ <file1 >X &&
+ rm file1 &&
+ mv X file1 &&
+ git commit --author "No Bla <nb@localhost>" -a -m replace &&
+
+ # Try to merge into foo again
+ git checkout foo &&
+ if git merge master; then
+ echo needed conflict here
+ exit 1
+ else
+ echo merge failed - test is setup
+ fi
+'
+
+test_expect_success \
+ 'blame runs on unconflicted file while other file has conflicts' '
+ git blame file2
+'
+
+test_expect_success 'blame runs on conflicted file in stages 1,3' '
+ git blame file1
+'
+
+test_done
'
test_expect_success 'Send patches' '
- git send-email -from="Example <nobody@example.com>" --to=nobody@example.com --smtp-server="$(pwd)/fake.sendmail" $patches 2>errors
+ git send-email --from="Example <nobody@example.com>" --to=nobody@example.com --smtp-server="$(pwd)/fake.sendmail" $patches 2>errors
'
cat >expected <<\EOF
# /
/no-such-file*
-# deeply
+# /deeply/
/deeply/no-such-file*
-# deeply/nested
+# /deeply/nested/
/deeply/nested/no-such-file*
-# deeply/nested/directory
+# /deeply/nested/directory/
/deeply/nested/directory/no-such-file*
EOF
test_expect_success 'test show-ignore' "
cd test_wc &&
mkdir -p deeply/nested/directory &&
+ touch deeply/nested/directory/.keep &&
svn add deeply &&
+ svn up &&
svn propset -R svn:ignore 'no-such-file*' .
svn commit -m 'propset svn:ignore'
cd .. &&
cmp show-ignore.expect show-ignore.got
"
+cat >create-ignore.expect <<\EOF
+/no-such-file*
+EOF
+
+cat >create-ignore-index.expect <<\EOF
+100644 8c52e5dfcd0a8b6b6bcfe6b41b89bcbf493718a5 0 .gitignore
+100644 8c52e5dfcd0a8b6b6bcfe6b41b89bcbf493718a5 0 deeply/.gitignore
+100644 8c52e5dfcd0a8b6b6bcfe6b41b89bcbf493718a5 0 deeply/nested/.gitignore
+100644 8c52e5dfcd0a8b6b6bcfe6b41b89bcbf493718a5 0 deeply/nested/directory/.gitignore
+EOF
+
+test_expect_success 'test create-ignore' "
+ git-svn fetch && git pull . remotes/git-svn &&
+ git-svn create-ignore &&
+ cmp ./.gitignore create-ignore.expect &&
+ cmp ./deeply/.gitignore create-ignore.expect &&
+ cmp ./deeply/nested/.gitignore create-ignore.expect &&
+ cmp ./deeply/nested/directory/.gitignore create-ignore.expect &&
+ git ls-files -s | grep gitignore | cmp - create-ignore-index.expect
+ "
+
+cat >prop.expect <<\EOF
+no-such-file*
+
+EOF
+cat >prop2.expect <<\EOF
+8
+EOF
+
+# This test can be improved: since all the svn:ignore properties contain
+# the same pattern, it can pass even though propget was not run against
+# the right directory.
+test_expect_success 'test propget' "
+ git-svn propget svn:ignore . | cmp - prop.expect &&
+ cd deeply &&
+ git-svn propget svn:ignore . | cmp - ../prop.expect &&
+ git-svn propget svn:entry:committed-rev nested/directory/.keep \
+ | cmp - ../prop2.expect &&
+ git-svn propget svn:ignore .. | cmp - ../prop.expect &&
+ git-svn propget svn:ignore nested/ | cmp - ../prop.expect &&
+ git-svn propget svn:ignore ./nested | cmp - ../prop.expect &&
+ git-svn propget svn:ignore .././deeply/nested | cmp - ../prop.expect
+ "
+
+cat >prop.expect <<\EOF
+Properties on '.':
+ svn:entry:committed-date
+ svn:entry:committed-rev
+ svn:entry:last-author
+ svn:entry:uuid
+ svn:ignore
+EOF
+cat >prop2.expect <<\EOF
+Properties on 'nested/directory/.keep':
+ svn:entry:committed-date
+ svn:entry:committed-rev
+ svn:entry:last-author
+ svn:entry:uuid
+EOF
+
+test_expect_success 'test proplist' "
+ git-svn proplist . | cmp - prop.expect &&
+ git-svn proplist nested/directory/.keep | cmp - prop2.expect
+ "
+
test_done
poke trunk/readme &&
svn commit -m 'another commit' &&
svn up &&
- svn mv -m 'rename to thunk' trunk thunk &&
- svn up &&
+ svn mv trunk thunk &&
echo goodbye >> thunk/readme &&
poke thunk/readme &&
svn commit -m 'bye now' &&
"
test_expect_success 'follow deleted parent' "
- svn cp -m 'resurrecting trunk as junk' \
- -r2 $svnrepo/trunk $svnrepo/junk &&
+ (svn cp -m 'resurrecting trunk as junk' \
+ $svnrepo/trunk@2 $svnrepo/junk ||
+ svn cp -m 'resurrecting trunk as junk' \
+ -r2 $svnrepo/trunk $svnrepo/junk) &&
git config --add svn-remote.svn.fetch \
junk:refs/remotes/svn/junk &&
git-svn fetch -i svn/thunk &&
our \$version = "current";
our \$GIT = "git";
our \$projectroot = "$(pwd)";
+our \$project_maxdepth = 8;
our \$home_link_str = "projects";
our \$site_name = "[localhost]";
our \$site_header = "";
# '
# . ./test-lib.sh
-error () {
- echo "* error: $*"
- trap - exit
- exit 1
-}
-
-say () {
- echo "* $*"
-}
+[ "x$TERM" != "xdumb" ] &&
+ tput bold >/dev/null 2>&1 &&
+ tput setaf 1 >/dev/null 2>&1 &&
+ tput sgr0 >/dev/null 2>&1 &&
+ color=t
test "${test_description}" != "" ||
error "Test script did not set test_description."
exit 0 ;;
-v|--v|--ve|--ver|--verb|--verbo|--verbos|--verbose)
verbose=t; shift ;;
+ -q|--q|--qu|--qui|--quie|--quiet)
+ quiet=t; shift ;;
+ --no-color)
+ color=; shift ;;
--no-python)
# noop now...
shift ;;
esac
done
+if test -n "$color"; then
+ say_color () {
+ case "$1" in
+ error) tput bold; tput setaf 1;; # bold red
+ skip) tput bold; tput setaf 2;; # bold green
+ pass) tput setaf 2;; # green
+ info) tput setaf 3;; # brown
+ *) test -n "$quiet" && return;;
+ esac
+ shift
+ echo "* $*"
+ tput sgr0
+ }
+else
+ say_color () {
+ test -z "$1" && test -n "$quiet" && return
+ shift
+ echo "* $*"
+ }
+fi
+
+error () {
+ say_color error "error: $*"
+ trap - exit
+ exit 1
+}
+
+say () {
+ say_color info "$*"
+}
+
exec 5>&1
if test "$verbose" = "t"
then
test_ok_ () {
test_count=$(expr "$test_count" + 1)
- say " ok $test_count: $@"
+ say_color "" " ok $test_count: $@"
}
test_failure_ () {
test_count=$(expr "$test_count" + 1)
test_failure=$(expr "$test_failure" + 1);
- say "FAIL $test_count: $1"
+ say_color error "FAIL $test_count: $1"
shift
echo "$@" | sed -e 's/^/ /'
test "$immediate" = "" || { trap - exit; exit 1; }
done
case "$to_skip" in
t)
- say >&3 "skipping test: $@"
+ say_color skip >&3 "skipping test: $@"
test_count=$(expr "$test_count" + 1)
- say "skip $test_count: $1"
+ say_color skip "skip $test_count: $1"
: true
;;
*)
# The Makefile provided will clean this test area so
# we will leave things as they are.
- say "passed all $test_count test(s)"
+ say_color pass "passed all $test_count test(s)"
exit 0 ;;
*)
- say "failed $test_failure among $test_count test(s)"
+ say_color error "failed $test_failure among $test_count test(s)"
exit 1 ;;
esac
done
case "$to_skip" in
t)
- say >&3 "skipping test $this_test altogether"
- say "skip all tests in $this_test"
+ say_color skip >&3 "skipping test $this_test altogether"
+ say_color skip "skip all tests in $this_test"
test_done
esac
done
if (/\s$/) {
bad_line("trailing whitespace", $_);
}
- if (/^\s* /) {
+ if (/^\s* \t/) {
bad_line("indent SP followed by a TAB", $_);
}
if (/^(?:[<>=]){7}/) {
--- /dev/null
+#include "cache.h"
+#include "transport.h"
+#include "run-command.h"
+#ifndef NO_CURL
+#include "http.h"
+#endif
+#include "pkt-line.h"
+#include "fetch-pack.h"
+#include "walker.h"
+#include "bundle.h"
+#include "dir.h"
+#include "refs.h"
+
+/* rsync support */
+
+/*
+ * We copy packed-refs and refs/ into a temporary directory, then read
+ * the loose refs recursively (sorting whenever possible), and then
+ * insert the packed refs that are not yet in the list (not validating,
+ * but assuming that the file is sorted).
+ *
+ * Refactoring this out of refs.c appears to be too cumbersome.
+ */
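+/*
+ * A worked example (illustrative names, not from any real repository):
+ * if the loose refs yield the sorted list
+ *
+ *     refs/heads/master, refs/tags/v1.0
+ *
+ * and packed-refs lists refs/heads/pu and refs/tags/v1.0, then only
+ * refs/heads/pu is spliced in between the two, while the packed entry
+ * for refs/tags/v1.0 is skipped because a loose ref by that name is
+ * already in the list.
+ */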
+
+static int str_cmp(const void *a, const void *b)
+{
+ const char *s1 = a;
+ const char *s2 = b;
+
+ return strcmp(s1, s2);
+}
+
+/* path->buf + name_offset is expected to point to "refs/" */
+
+static int read_loose_refs(struct strbuf *path, int name_offset,
+ struct ref **tail)
+{
+ DIR *dir = opendir(path->buf);
+ struct dirent *de;
+ struct {
+ char **entries;
+ int nr, alloc;
+ } list;
+ int i, pathlen;
+
+ if (!dir)
+ return -1;
+
+ memset (&list, 0, sizeof(list));
+
+ while ((de = readdir(dir))) {
+ if (de->d_name[0] == '.' && (de->d_name[1] == '\0' ||
+ (de->d_name[1] == '.' &&
+ de->d_name[2] == '\0')))
+ continue;
+ ALLOC_GROW(list.entries, list.nr + 1, list.alloc);
+ list.entries[list.nr++] = xstrdup(de->d_name);
+ }
+ closedir(dir);
+
+ /* sort the list */
+
+ qsort(list.entries, list.nr, sizeof(char *), str_cmp);
+
+ pathlen = path->len;
+ strbuf_addch(path, '/');
+
+ for (i = 0; i < list.nr; i++, strbuf_setlen(path, pathlen + 1)) {
+ strbuf_addstr(path, list.entries[i]);
+ if (read_loose_refs(path, name_offset, tail)) {
+ int fd = open(path->buf, O_RDONLY);
+ char buffer[40];
+ struct ref *next;
+
+ if (fd < 0)
+ continue;
+ next = alloc_ref(path->len - name_offset + 1);
+ if (read_in_full(fd, buffer, 40) != 40 ||
+ get_sha1_hex(buffer, next->old_sha1)) {
+ close(fd);
+ free(next);
+ continue;
+ }
+ close(fd);
+ strcpy(next->name, path->buf + name_offset);
+ (*tail)->next = next;
+ *tail = next;
+ }
+ }
+ strbuf_setlen(path, pathlen);
+
+ for (i = 0; i < list.nr; i++)
+ free(list.entries[i]);
+ free(list.entries);
+
+ return 0;
+}
+
+/* insert the packed refs for which no loose refs were found */
+
+static void insert_packed_refs(const char *packed_refs, struct ref **list)
+{
+ FILE *f = fopen(packed_refs, "r");
+ static char buffer[PATH_MAX];
+
+ if (!f)
+ return;
+
+ for (;;) {
+ int cmp, len;
+
+ if (!fgets(buffer, sizeof(buffer), f)) {
+ fclose(f);
+ return;
+ }
+
+ if (hexval(buffer[0]) > 0xf)
+ continue;
+ len = strlen(buffer);
+ if (buffer[len - 1] == '\n')
+ buffer[--len] = '\0';
+ if (len < 41)
+ continue;
+ while ((*list)->next &&
+ (cmp = strcmp(buffer + 41,
+ (*list)->next->name)) > 0)
+ list = &(*list)->next;
+ if (!(*list)->next || cmp < 0) {
+ struct ref *next = alloc_ref(len - 40);
+ buffer[40] = '\0';
+ if (get_sha1_hex(buffer, next->old_sha1)) {
+ warning ("invalid SHA-1: %s", buffer);
+ free(next);
+ continue;
+ }
+ strcpy(next->name, buffer + 41);
+ next->next = (*list)->next;
+ (*list)->next = next;
+ list = &(*list)->next;
+ }
+ }
+}
+
+static struct ref *get_refs_via_rsync(const struct transport *transport)
+{
+ struct strbuf buf = STRBUF_INIT, temp_dir = STRBUF_INIT;
+ struct ref dummy = { NULL }, *tail = &dummy;
+ struct child_process rsync;
+ const char *args[5];
+ int temp_dir_len;
+
+ /* copy the refs to the temporary directory */
+
+ strbuf_addstr(&temp_dir, git_path("rsync-refs-XXXXXX"));
+ if (!mkdtemp(temp_dir.buf))
+ die ("Could not make temporary directory");
+ temp_dir_len = temp_dir.len;
+
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addstr(&buf, "/refs");
+
+ memset(&rsync, 0, sizeof(rsync));
+ rsync.argv = args;
+ rsync.stdout_to_stderr = 1;
+ args[0] = "rsync";
+ args[1] = (transport->verbose > 0) ? "-rv" : "-r";
+ args[2] = buf.buf;
+ args[3] = temp_dir.buf;
+ args[4] = NULL;
+
+ if (run_command(&rsync))
+ die ("Could not run rsync to get refs");
+
+ strbuf_reset(&buf);
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addstr(&buf, "/packed-refs");
+
+ args[2] = buf.buf;
+
+ if (run_command(&rsync))
+ die ("Could not run rsync to get refs");
+
+ /* read the copied refs */
+
+ strbuf_addstr(&temp_dir, "/refs");
+ read_loose_refs(&temp_dir, temp_dir_len + 1, &tail);
+ strbuf_setlen(&temp_dir, temp_dir_len);
+
+ tail = &dummy;
+ strbuf_addstr(&temp_dir, "/packed-refs");
+ insert_packed_refs(temp_dir.buf, &tail);
+ strbuf_setlen(&temp_dir, temp_dir_len);
+
+ if (remove_dir_recursively(&temp_dir, 0))
+ warning ("Error removing temporary directory %s.",
+ temp_dir.buf);
+
+ strbuf_release(&buf);
+ strbuf_release(&temp_dir);
+
+ return dummy.next;
+}
+
+static int fetch_objs_via_rsync(struct transport *transport,
+ int nr_objs, struct ref **to_fetch)
+{
+ struct strbuf buf = STRBUF_INIT;
+ struct child_process rsync;
+ const char *args[8];
+ int result;
+
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addstr(&buf, "/objects/");
+
+ memset(&rsync, 0, sizeof(rsync));
+ rsync.argv = args;
+ rsync.stdout_to_stderr = 1;
+ args[0] = "rsync";
+ args[1] = (transport->verbose > 0) ? "-rv" : "-r";
+ args[2] = "--ignore-existing";
+ args[3] = "--exclude";
+ args[4] = "info";
+ args[5] = buf.buf;
+ args[6] = get_object_directory();
+ args[7] = NULL;
+
+ /* NEEDSWORK: handle one level of alternates */
+ result = run_command(&rsync);
+
+ strbuf_release(&buf);
+
+ return result;
+}
+
+static int write_one_ref(const char *name, const unsigned char *sha1,
+ int flags, void *data)
+{
+ struct strbuf *buf = data;
+ int len = buf->len;
+ FILE *f;
+
+ /* when called via for_each_ref(), flags is non-zero */
+ if (flags && prefixcmp(name, "refs/heads/") &&
+ prefixcmp(name, "refs/tags/"))
+ return 0;
+
+ strbuf_addstr(buf, name);
+ if (safe_create_leading_directories(buf->buf) ||
+ !(f = fopen(buf->buf, "w")) ||
+ fprintf(f, "%s\n", sha1_to_hex(sha1)) < 0 ||
+ fclose(f))
+ return error("problems writing temporary file %s", buf->buf);
+ strbuf_setlen(buf, len);
+ return 0;
+}
+
+static int write_refs_to_temp_dir(struct strbuf *temp_dir,
+ int refspec_nr, const char **refspec)
+{
+ int i;
+
+ for (i = 0; i < refspec_nr; i++) {
+ unsigned char sha1[20];
+ char *ref;
+
+ if (dwim_ref(refspec[i], strlen(refspec[i]), sha1, &ref) != 1)
+ return error("Could not get ref %s", refspec[i]);
+
+ if (write_one_ref(ref, sha1, 0, temp_dir)) {
+ free(ref);
+ return -1;
+ }
+ free(ref);
+ }
+ return 0;
+}
+
+static int rsync_transport_push(struct transport *transport,
+ int refspec_nr, const char **refspec, int flags)
+{
+ struct strbuf buf = STRBUF_INIT, temp_dir = STRBUF_INIT;
+ int result = 0, i;
+ struct child_process rsync;
+ const char *args[10];
+
+ /* first push the objects */
+
+ strbuf_addstr(&buf, transport->url);
+ strbuf_addch(&buf, '/');
+
+ memset(&rsync, 0, sizeof(rsync));
+ rsync.argv = args;
+ rsync.stdout_to_stderr = 1;
+ i = 0;
+ args[i++] = "rsync";
+ args[i++] = "-a";
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ args[i++] = "--dry-run";
+ if (transport->verbose > 0)
+ args[i++] = "-v";
+ args[i++] = "--ignore-existing";
+ args[i++] = "--exclude";
+ args[i++] = "info";
+ args[i++] = get_object_directory();
+ args[i++] = buf.buf;
+ args[i++] = NULL;
+
+ if (run_command(&rsync))
+ return error("Could not push objects to %s", transport->url);
+
+ /* copy the refs to the temporary directory; they could be packed. */
+
+ strbuf_addstr(&temp_dir, git_path("rsync-refs-XXXXXX"));
+ if (!mkdtemp(temp_dir.buf))
+ die ("Could not make temporary directory");
+ strbuf_addch(&temp_dir, '/');
+
+ if (flags & TRANSPORT_PUSH_ALL) {
+ if (for_each_ref(write_one_ref, &temp_dir))
+ return -1;
+ } else if (write_refs_to_temp_dir(&temp_dir, refspec_nr, refspec))
+ return -1;
+
+ i = 2;
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ args[i++] = "--dry-run";
+ if (!(flags & TRANSPORT_PUSH_FORCE))
+ args[i++] = "--ignore-existing";
+ args[i++] = temp_dir.buf;
+ args[i++] = transport->url;
+ args[i++] = NULL;
+ if (run_command(&rsync))
+ result = error("Could not push to %s", transport->url);
+
+ if (remove_dir_recursively(&temp_dir, 0))
+ warning ("Could not remove temporary directory %s.",
+ temp_dir.buf);
+
+ strbuf_release(&buf);
+ strbuf_release(&temp_dir);
+
+ return result;
+}
+
+/* Generic functions for using commit walkers */
+
+static int fetch_objs_via_walker(struct transport *transport,
+ int nr_objs, struct ref **to_fetch)
+{
+ char *dest = xstrdup(transport->url);
+ struct walker *walker = transport->data;
+ char **objs = xmalloc(nr_objs * sizeof(*objs));
+ int i;
+
+ walker->get_all = 1;
+ walker->get_tree = 1;
+ walker->get_history = 1;
+ walker->get_verbosely = transport->verbose >= 0;
+ walker->get_recover = 0;
+
+ for (i = 0; i < nr_objs; i++)
+ objs[i] = xstrdup(sha1_to_hex(to_fetch[i]->old_sha1));
+
+ if (walker_fetch(walker, nr_objs, objs, NULL, NULL))
+ die("Fetch failed.");
+
+ for (i = 0; i < nr_objs; i++)
+ free(objs[i]);
+ free(objs);
+ free(dest);
+ return 0;
+}
+
+static int disconnect_walker(struct transport *transport)
+{
+ struct walker *walker = transport->data;
+ if (walker)
+ walker_free(walker);
+ return 0;
+}
+
+#ifndef NO_CURL
+static int curl_transport_push(struct transport *transport, int refspec_nr, const char **refspec, int flags) {
+ const char **argv;
+ int argc;
+ int err;
+
+ argv = xmalloc((refspec_nr + 11) * sizeof(char *));
+ argv[0] = "http-push";
+ argc = 1;
+ if (flags & TRANSPORT_PUSH_ALL)
+ argv[argc++] = "--all";
+ if (flags & TRANSPORT_PUSH_FORCE)
+ argv[argc++] = "--force";
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ argv[argc++] = "--dry-run";
+ argv[argc++] = transport->url;
+ while (refspec_nr--)
+ argv[argc++] = *refspec++;
+ argv[argc] = NULL;
+ err = run_command_v_opt(argv, RUN_GIT_CMD);
+ switch (err) {
+ case -ERR_RUN_COMMAND_FORK:
+ error("unable to fork for %s", argv[0]);
+ break;
+ case -ERR_RUN_COMMAND_EXEC:
+ error("unable to exec %s", argv[0]);
+ break;
+ case -ERR_RUN_COMMAND_WAITPID:
+ case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
+ case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
+ case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
+ error("%s died with strange error", argv[0]);
+ }
+ return !!err;
+}
+
+static int missing__target(int code, int result)
+{
+ return /* file:// URL -- do we ever use one??? */
+ (result == CURLE_FILE_COULDNT_READ_FILE) ||
+ /* http:// and https:// URL */
+ (code == 404 && result == CURLE_HTTP_RETURNED_ERROR) ||
+ /* ftp:// URL */
+ (code == 550 && result == CURLE_FTP_COULDNT_RETR_FILE)
+ ;
+}
+
+#define missing_target(a) missing__target((a)->http_code, (a)->curl_result)
+
+static struct ref *get_refs_via_curl(const struct transport *transport)
+{
+ struct buffer buffer;
+ char *data, *start, *mid;
+ char *ref_name;
+ char *refs_url;
+ int i = 0;
+
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ struct ref *refs = NULL;
+ struct ref *ref = NULL;
+ struct ref *last_ref = NULL;
+
+ data = xmalloc(4096);
+ buffer.size = 4096;
+ buffer.posn = 0;
+ buffer.buffer = data;
+
+ refs_url = xmalloc(strlen(transport->url) + 11);
+ sprintf(refs_url, "%s/info/refs", transport->url);
+
+ http_init();
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, refs_url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ if (missing_target(&results)) {
+ free(buffer.buffer);
+ return NULL;
+ } else {
+ free(buffer.buffer);
+ error("%s", curl_errorstr);
+ return NULL;
+ }
+ }
+ } else {
+ free(buffer.buffer);
+ error("Unable to start request");
+ return NULL;
+ }
+
+ http_cleanup();
+
+ data = buffer.buffer;
+ start = NULL;
+ mid = data;
+ while (i < buffer.posn) {
+ if (!start)
+ start = &data[i];
+ if (data[i] == '\t')
+ mid = &data[i];
+ if (data[i] == '\n') {
+ data[i] = 0;
+ ref_name = mid + 1;
+ ref = xmalloc(sizeof(struct ref) +
+ strlen(ref_name) + 1);
+ memset(ref, 0, sizeof(struct ref));
+ strcpy(ref->name, ref_name);
+ get_sha1_hex(start, ref->old_sha1);
+ if (!refs)
+ refs = ref;
+ if (last_ref)
+ last_ref->next = ref;
+ last_ref = ref;
+ start = NULL;
+ }
+ i++;
+ }
+
+ free(buffer.buffer);
+
+ return refs;
+}
+
+static int fetch_objs_via_curl(struct transport *transport,
+ int nr_objs, struct ref **to_fetch)
+{
+ if (!transport->data)
+ transport->data = get_http_walker(transport->url);
+ return fetch_objs_via_walker(transport, nr_objs, to_fetch);
+}
+
+#endif
+
+struct bundle_transport_data {
+ int fd;
+ struct bundle_header header;
+};
+
+static struct ref *get_refs_from_bundle(const struct transport *transport)
+{
+ struct bundle_transport_data *data = transport->data;
+ struct ref *result = NULL;
+ int i;
+
+ if (data->fd > 0)
+ close(data->fd);
+ data->fd = read_bundle_header(transport->url, &data->header);
+ if (data->fd < 0)
+ die ("Could not read bundle '%s'.", transport->url);
+ for (i = 0; i < data->header.references.nr; i++) {
+ struct ref_list_entry *e = data->header.references.list + i;
+ struct ref *ref = alloc_ref(strlen(e->name) + 1);
+ hashcpy(ref->old_sha1, e->sha1);
+ strcpy(ref->name, e->name);
+ ref->next = result;
+ result = ref;
+ }
+ return result;
+}
+
+static int fetch_refs_from_bundle(struct transport *transport,
+ int nr_heads, struct ref **to_fetch)
+{
+ struct bundle_transport_data *data = transport->data;
+ return unbundle(&data->header, data->fd);
+}
+
+static int close_bundle(struct transport *transport)
+{
+ struct bundle_transport_data *data = transport->data;
+ if (data->fd > 0)
+ close(data->fd);
+ free(data);
+ return 0;
+}
+
+struct git_transport_data {
+ unsigned thin : 1;
+ unsigned keep : 1;
+ int depth;
+ const char *uploadpack;
+ const char *receivepack;
+};
+
+static int set_git_option(struct transport *connection,
+ const char *name, const char *value)
+{
+ struct git_transport_data *data = connection->data;
+ if (!strcmp(name, TRANS_OPT_UPLOADPACK)) {
+ data->uploadpack = value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_RECEIVEPACK)) {
+ data->receivepack = value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_THIN)) {
+ data->thin = !!value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_KEEP)) {
+ data->keep = !!value;
+ return 0;
+ } else if (!strcmp(name, TRANS_OPT_DEPTH)) {
+ if (!value)
+ data->depth = 0;
+ else
+ data->depth = atoi(value);
+ return 0;
+ }
+ return 1;
+}
+
+static struct ref *get_refs_via_connect(const struct transport *transport)
+{
+ struct git_transport_data *data = transport->data;
+ struct ref *refs;
+ int fd[2];
+ pid_t pid;
+ char *dest = xstrdup(transport->url);
+
+ pid = git_connect(fd, dest, data->uploadpack, 0);
+
+ if (pid < 0)
+ die("Failed to connect to \"%s\"", transport->url);
+
+ get_remote_heads(fd[0], &refs, 0, NULL, 0);
+ packet_flush(fd[1]);
+
+ finish_connect(pid);
+
+ free(dest);
+
+ return refs;
+}
+
+static int fetch_refs_via_pack(struct transport *transport,
+ int nr_heads, struct ref **to_fetch)
+{
+ struct git_transport_data *data = transport->data;
+ char **heads = xmalloc(nr_heads * sizeof(*heads));
+ char **origh = xmalloc(nr_heads * sizeof(*origh));
+ struct ref *refs;
+ char *dest = xstrdup(transport->url);
+ struct fetch_pack_args args;
+ int i;
+
+ memset(&args, 0, sizeof(args));
+ args.uploadpack = data->uploadpack;
+ args.keep_pack = data->keep;
+ args.lock_pack = 1;
+ args.use_thin_pack = data->thin;
+ args.verbose = transport->verbose > 0;
+ args.depth = data->depth;
+
+ for (i = 0; i < nr_heads; i++)
+ origh[i] = heads[i] = xstrdup(to_fetch[i]->name);
+ refs = fetch_pack(&args, dest, nr_heads, heads, &transport->pack_lockfile);
+
+ for (i = 0; i < nr_heads; i++)
+ free(origh[i]);
+ free(origh);
+ free(heads);
+ free_refs(refs);
+ free(dest);
+ return 0;
+}
+
+static int git_transport_push(struct transport *transport, int refspec_nr, const char **refspec, int flags) {
+ struct git_transport_data *data = transport->data;
+ const char **argv;
+ char *rem;
+ int argc;
+ int err;
+
+ argv = xmalloc((refspec_nr + 11) * sizeof(char *));
+ argv[0] = "send-pack";
+ argc = 1;
+ if (flags & TRANSPORT_PUSH_ALL)
+ argv[argc++] = "--all";
+ if (flags & TRANSPORT_PUSH_FORCE)
+ argv[argc++] = "--force";
+ if (flags & TRANSPORT_PUSH_DRY_RUN)
+ argv[argc++] = "--dry-run";
+ if (data->receivepack) {
+ char *rp = xmalloc(strlen(data->receivepack) + 16);
+ sprintf(rp, "--receive-pack=%s", data->receivepack);
+ argv[argc++] = rp;
+ }
+ if (data->thin)
+ argv[argc++] = "--thin";
+ rem = xmalloc(strlen(transport->remote->name) + 10);
+ sprintf(rem, "--remote=%s", transport->remote->name);
+ argv[argc++] = rem;
+ argv[argc++] = transport->url;
+ while (refspec_nr--)
+ argv[argc++] = *refspec++;
+ argv[argc] = NULL;
+ err = run_command_v_opt(argv, RUN_GIT_CMD);
+ switch (err) {
+ case -ERR_RUN_COMMAND_FORK:
+ error("unable to fork for %s", argv[0]);
+ break;
+ case -ERR_RUN_COMMAND_EXEC:
+ error("unable to exec %s", argv[0]);
+ break;
+ case -ERR_RUN_COMMAND_WAITPID:
+ case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
+ case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
+ case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
+ error("%s died with strange error", argv[0]);
+ }
+ return !!err;
+}
+
+static int disconnect_git(struct transport *transport)
+{
+ free(transport->data);
+ return 0;
+}
+
+static int is_local(const char *url)
+{
+ const char *colon = strchr(url, ':');
+ const char *slash = strchr(url, '/');
+ return !colon || (slash && slash < colon);
+}
+
+static int is_file(const char *url)
+{
+ struct stat buf;
+ if (stat(url, &buf))
+ return 0;
+ return S_ISREG(buf.st_mode);
+}
+
+struct transport *transport_get(struct remote *remote, const char *url)
+{
+ struct transport *ret = xcalloc(1, sizeof(*ret));
+
+ ret->remote = remote;
+ ret->url = url;
+
+ if (!prefixcmp(url, "rsync://")) {
+ ret->get_refs_list = get_refs_via_rsync;
+ ret->fetch = fetch_objs_via_rsync;
+ ret->push = rsync_transport_push;
+
+ } else if (!prefixcmp(url, "http://")
+ || !prefixcmp(url, "https://")
+ || !prefixcmp(url, "ftp://")) {
+#ifdef NO_CURL
+ error("git was compiled without libcurl support.");
+#else
+ ret->get_refs_list = get_refs_via_curl;
+ ret->fetch = fetch_objs_via_curl;
+ ret->push = curl_transport_push;
+#endif
+ ret->disconnect = disconnect_walker;
+
+ } else if (is_local(url) && is_file(url)) {
+ struct bundle_transport_data *data = xcalloc(1, sizeof(*data));
+ ret->data = data;
+ ret->get_refs_list = get_refs_from_bundle;
+ ret->fetch = fetch_refs_from_bundle;
+ ret->disconnect = close_bundle;
+
+ } else {
+ struct git_transport_data *data = xcalloc(1, sizeof(*data));
+ ret->data = data;
+ ret->set_option = set_git_option;
+ ret->get_refs_list = get_refs_via_connect;
+ ret->fetch = fetch_refs_via_pack;
+ ret->push = git_transport_push;
+ ret->disconnect = disconnect_git;
+
+ data->thin = 1;
+ data->uploadpack = "git-upload-pack";
+ if (remote && remote->uploadpack)
+ data->uploadpack = remote->uploadpack;
+ data->receivepack = "git-receive-pack";
+ if (remote && remote->receivepack)
+ data->receivepack = remote->receivepack;
+ }
+
+ return ret;
+}
+
+int transport_set_option(struct transport *transport,
+ const char *name, const char *value)
+{
+ if (transport->set_option)
+ return transport->set_option(transport, name, value);
+ return 1;
+}
+
+int transport_push(struct transport *transport,
+ int refspec_nr, const char **refspec, int flags)
+{
+ if (!transport->push)
+ return 1;
+ return transport->push(transport, refspec_nr, refspec, flags);
+}
+
+struct ref *transport_get_remote_refs(struct transport *transport)
+{
+ if (!transport->remote_refs)
+ transport->remote_refs = transport->get_refs_list(transport);
+ return transport->remote_refs;
+}
+
+int transport_fetch_refs(struct transport *transport, struct ref *refs)
+{
+ int rc;
+ int nr_heads = 0, nr_alloc = 0;
+ struct ref **heads = NULL;
+ struct ref *rm;
+
+ for (rm = refs; rm; rm = rm->next) {
+ if (rm->peer_ref &&
+ !hashcmp(rm->peer_ref->old_sha1, rm->old_sha1))
+ continue;
+ ALLOC_GROW(heads, nr_heads + 1, nr_alloc);
+ heads[nr_heads++] = rm;
+ }
+
+ rc = transport->fetch(transport, nr_heads, heads);
+ free(heads);
+ return rc;
+}
+
+void transport_unlock_pack(struct transport *transport)
+{
+ if (transport->pack_lockfile) {
+ unlink(transport->pack_lockfile);
+ free(transport->pack_lockfile);
+ transport->pack_lockfile = NULL;
+ }
+}
+
+int transport_disconnect(struct transport *transport)
+{
+ int ret = 0;
+ if (transport->disconnect)
+ ret = transport->disconnect(transport);
+ free(transport);
+ return ret;
+}
--- /dev/null
+#ifndef TRANSPORT_H
+#define TRANSPORT_H
+
+#include "cache.h"
+#include "remote.h"
+
+struct transport {
+ struct remote *remote;
+ const char *url;
+ void *data;
+ struct ref *remote_refs;
+
+ /**
+ * Returns 0 if successful, positive if the option is not
+ * recognized or is inapplicable, and negative if the option
+ * is applicable but the value is invalid.
+ **/
+ int (*set_option)(struct transport *connection, const char *name,
+ const char *value);
+
+ struct ref *(*get_refs_list)(const struct transport *transport);
+ int (*fetch)(struct transport *transport, int refs_nr, struct ref **refs);
+ int (*push)(struct transport *connection, int refspec_nr, const char **refspec, int flags);
+
+ int (*disconnect)(struct transport *connection);
+ char *pack_lockfile;
+ signed verbose : 2;
+};
+
+#define TRANSPORT_PUSH_ALL 1
+#define TRANSPORT_PUSH_FORCE 2
+#define TRANSPORT_PUSH_DRY_RUN 4
+
+/* Returns a transport suitable for the url */
+struct transport *transport_get(struct remote *, const char *);
+
+/* Transport options which apply to git:// and scp-style URLs */
+
+/* The program to use on the remote side to send a pack */
+#define TRANS_OPT_UPLOADPACK "uploadpack"
+
+/* The program to use on the remote side to receive a pack */
+#define TRANS_OPT_RECEIVEPACK "receivepack"
+
+/* Transfer the data as a thin pack if not null */
+#define TRANS_OPT_THIN "thin"
+
+/* Keep the pack that was transferred if not null */
+#define TRANS_OPT_KEEP "keep"
+
+/* Limit the depth of the fetch if not null */
+#define TRANS_OPT_DEPTH "depth"
+
+/**
+ * Returns 0 if the option was used, non-zero otherwise. Prints a
+ * message to stderr if the option is not used.
+ **/
+int transport_set_option(struct transport *transport, const char *name,
+ const char *value);
+
+int transport_push(struct transport *connection,
+ int refspec_nr, const char **refspec, int flags);
+
+struct ref *transport_get_remote_refs(struct transport *transport);
+
+int transport_fetch_refs(struct transport *transport, struct ref *refs);
+void transport_unlock_pack(struct transport *transport);
+int transport_disconnect(struct transport *transport);
+
+#endif
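
Callers are expected to drive the new API roughly as follows; this is a
minimal sketch, not part of this patch (the helper name fetch_all_heads
is made up, and the struct remote is assumed to come from the remote
configuration, e.g. via remote_get() in remote.h):

    #include "cache.h"
    #include "remote.h"
    #include "transport.h"

    static int fetch_all_heads(struct remote *remote, const char *url)
    {
        struct transport *t = transport_get(remote, url);
        struct ref *refs, *r;

        /* transports without a set_option hook return non-zero here */
        transport_set_option(t, TRANS_OPT_THIN, "yes");

        /* the advertised refs are cached inside the transport */
        refs = transport_get_remote_refs(t);
        for (r = refs; r; r = r->next)
            fprintf(stderr, "%s %s\n",
                sha1_to_hex(r->old_sha1), r->name);

        if (transport_fetch_refs(t, refs))
            return error("fetch failed");
        transport_unlock_pack(t);
        return transport_disconnect(t);
    }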
diff_opts.detect_rename = DIFF_DETECT_RENAME;
diff_opts.output_format = DIFF_FORMAT_NO_OUTPUT;
diff_opts.single_follow = opt->paths[0];
+ diff_opts.break_opt = opt->break_opt;
paths[0] = NULL;
diff_tree_setup_paths(paths, &diff_opts);
if (diff_setup_done(&diff_opts) < 0)
--- /dev/null
+#include "cache.h"
+#include "walker.h"
+#include "commit.h"
+#include "tree.h"
+#include "tree-walk.h"
+#include "tag.h"
+#include "blob.h"
+#include "refs.h"
+
+static unsigned char current_commit_sha1[20];
+
+void walker_say(struct walker *walker, const char *fmt, const char *hex)
+{
+ if (walker->get_verbosely)
+ fprintf(stderr, fmt, hex);
+}
+
+static void report_missing(const struct object *obj)
+{
+ char missing_hex[41];
+ strcpy(missing_hex, sha1_to_hex(obj->sha1));
+ fprintf(stderr, "Cannot obtain needed %s %s\n",
+ obj->type ? typename(obj->type): "object", missing_hex);
+ if (!is_null_sha1(current_commit_sha1))
+ fprintf(stderr, "while processing commit %s.\n",
+ sha1_to_hex(current_commit_sha1));
+}
+
+static int process(struct walker *walker, struct object *obj);
+
+static int process_tree(struct walker *walker, struct tree *tree)
+{
+ struct tree_desc desc;
+ struct name_entry entry;
+
+ if (parse_tree(tree))
+ return -1;
+
+ init_tree_desc(&desc, tree->buffer, tree->size);
+ while (tree_entry(&desc, &entry)) {
+ struct object *obj = NULL;
+
+ /* submodule commits are not stored in the superproject */
+ if (S_ISGITLINK(entry.mode))
+ continue;
+ if (S_ISDIR(entry.mode)) {
+ struct tree *tree = lookup_tree(entry.sha1);
+ if (tree)
+ obj = &tree->object;
+ }
+ else {
+ struct blob *blob = lookup_blob(entry.sha1);
+ if (blob)
+ obj = &blob->object;
+ }
+ if (!obj || process(walker, obj))
+ return -1;
+ }
+ free(tree->buffer);
+ tree->buffer = NULL;
+ tree->size = 0;
+ return 0;
+}
+
+#define COMPLETE (1U << 0)
+#define SEEN (1U << 1)
+#define TO_SCAN (1U << 2)
+
+static struct commit_list *complete = NULL;
+
+static int process_commit(struct walker *walker, struct commit *commit)
+{
+ if (parse_commit(commit))
+ return -1;
+
+ while (complete && complete->item->date >= commit->date) {
+ pop_most_recent_commit(&complete, COMPLETE);
+ }
+
+ if (commit->object.flags & COMPLETE)
+ return 0;
+
+ hashcpy(current_commit_sha1, commit->object.sha1);
+
+ walker_say(walker, "walk %s\n", sha1_to_hex(commit->object.sha1));
+
+ if (walker->get_tree) {
+ if (process(walker, &commit->tree->object))
+ return -1;
+ if (!walker->get_all)
+ walker->get_tree = 0;
+ }
+ if (walker->get_history) {
+ struct commit_list *parents = commit->parents;
+ for (; parents; parents = parents->next) {
+ if (process(walker, &parents->item->object))
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int process_tag(struct walker *walker, struct tag *tag)
+{
+ if (parse_tag(tag))
+ return -1;
+ return process(walker, tag->tagged);
+}
+
+static struct object_list *process_queue = NULL;
+static struct object_list **process_queue_end = &process_queue;
+
+static int process_object(struct walker *walker, struct object *obj)
+{
+ if (obj->type == OBJ_COMMIT) {
+ if (process_commit(walker, (struct commit *)obj))
+ return -1;
+ return 0;
+ }
+ if (obj->type == OBJ_TREE) {
+ if (process_tree(walker, (struct tree *)obj))
+ return -1;
+ return 0;
+ }
+ if (obj->type == OBJ_BLOB) {
+ return 0;
+ }
+ if (obj->type == OBJ_TAG) {
+ if (process_tag(walker, (struct tag *)obj))
+ return -1;
+ return 0;
+ }
+ return error("Unable to determine requirements "
+ "of type %s for %s",
+ typename(obj->type), sha1_to_hex(obj->sha1));
+}
+
+static int process(struct walker *walker, struct object *obj)
+{
+ if (obj->flags & SEEN)
+ return 0;
+ obj->flags |= SEEN;
+
+ if (has_sha1_file(obj->sha1)) {
+ /* We already have it, so we should scan it now. */
+ obj->flags |= TO_SCAN;
+ }
+ else {
+ if (obj->flags & COMPLETE)
+ return 0;
+ walker->prefetch(walker, obj->sha1);
+ }
+
+ object_list_insert(obj, process_queue_end);
+ process_queue_end = &(*process_queue_end)->next;
+ return 0;
+}
+
+static int loop(struct walker *walker)
+{
+ struct object_list *elem;
+
+ while (process_queue) {
+ struct object *obj = process_queue->item;
+ elem = process_queue;
+ process_queue = elem->next;
+ free(elem);
+ if (!process_queue)
+ process_queue_end = &process_queue;
+
+ /* If we are not scanning this object, we placed it in
+ * the queue because we needed to fetch it first.
+ */
+ if (! (obj->flags & TO_SCAN)) {
+ if (walker->fetch(walker, obj->sha1)) {
+ report_missing(obj);
+ return -1;
+ }
+ }
+ if (!obj->type)
+ parse_object(obj->sha1);
+ if (process_object(walker, obj))
+ return -1;
+ }
+ return 0;
+}
+
+static int interpret_target(struct walker *walker, char *target, unsigned char *sha1)
+{
+ if (!get_sha1_hex(target, sha1))
+ return 0;
+ if (!check_ref_format(target)) {
+ if (!walker->fetch_ref(walker, target, sha1)) {
+ return 0;
+ }
+ }
+ return -1;
+}
+
+static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
+{
+ struct commit *commit = lookup_commit_reference_gently(sha1, 1);
+ if (commit) {
+ commit->object.flags |= COMPLETE;
+ insert_by_date(commit, &complete);
+ }
+ return 0;
+}
+
+int walker_targets_stdin(char ***target, const char ***write_ref)
+{
+ int targets = 0, targets_alloc = 0;
+ struct strbuf buf;
+ *target = NULL; *write_ref = NULL;
+ strbuf_init(&buf, 0);
+ while (1) {
+ char *rf_one = NULL;
+ char *tg_one;
+
+ if (strbuf_getline(&buf, stdin, '\n') == EOF)
+ break;
+ tg_one = buf.buf;
+ rf_one = strchr(tg_one, '\t');
+ if (rf_one)
+ *rf_one++ = 0;
+
+ if (targets >= targets_alloc) {
+ targets_alloc = targets_alloc ? targets_alloc * 2 : 64;
+ *target = xrealloc(*target, targets_alloc * sizeof(**target));
+ *write_ref = xrealloc(*write_ref, targets_alloc * sizeof(**write_ref));
+ }
+ (*target)[targets] = xstrdup(tg_one);
+ (*write_ref)[targets] = rf_one ? xstrdup(rf_one) : NULL;
+ targets++;
+ }
+ strbuf_release(&buf);
+ return targets;
+}
+
+void walker_targets_free(int targets, char **target, const char **write_ref)
+{
+ while (targets--) {
+ free(target[targets]);
+ if (write_ref && write_ref[targets])
+ free((char *) write_ref[targets]);
+ }
+}
+
+int walker_fetch(struct walker *walker, int targets, char **target,
+ const char **write_ref, const char *write_ref_log_details)
+{
+ struct ref_lock **lock = xcalloc(targets, sizeof(struct ref_lock *));
+ unsigned char *sha1 = xmalloc(targets * 20);
+ char *msg;
+ int ret;
+ int i;
+
+ save_commit_buffer = 0;
+ track_object_refs = 0;
+
+ for (i = 0; i < targets; i++) {
+ if (!write_ref || !write_ref[i])
+ continue;
+
+ lock[i] = lock_ref_sha1(write_ref[i], NULL);
+ if (!lock[i]) {
+ error("Can't lock ref %s", write_ref[i]);
+ goto unlock_and_fail;
+ }
+ }
+
+ if (!walker->get_recover)
+ for_each_ref(mark_complete, NULL);
+
+ for (i = 0; i < targets; i++) {
+ if (interpret_target(walker, target[i], &sha1[20 * i])) {
+ error("Could not interpret %s as something to pull", target[i]);
+ goto unlock_and_fail;
+ }
+ if (process(walker, lookup_unknown_object(&sha1[20 * i])))
+ goto unlock_and_fail;
+ }
+
+ if (loop(walker))
+ goto unlock_and_fail;
+
+ if (write_ref_log_details) {
+ msg = xmalloc(strlen(write_ref_log_details) + 12);
+ sprintf(msg, "fetch from %s", write_ref_log_details);
+ } else {
+ msg = NULL;
+ }
+ for (i = 0; i < targets; i++) {
+ if (!write_ref || !write_ref[i])
+ continue;
+ ret = write_ref_sha1(lock[i], &sha1[20 * i], msg ? msg : "fetch (unknown)");
+ lock[i] = NULL;
+ if (ret)
+ goto unlock_and_fail;
+ }
+ free(msg);
+
+ return 0;
+
+unlock_and_fail:
+ for (i = 0; i < targets; i++)
+ if (lock[i])
+ unlock_ref(lock[i]);
+
+ return -1;
+}
+
+void walker_free(struct walker *walker)
+{
+ walker->cleanup(walker);
+ free(walker);
+}
--- /dev/null
+#ifndef WALKER_H
+#define WALKER_H
+
+struct walker {
+ void *data;
+ int (*fetch_ref)(struct walker *, char *ref, unsigned char *sha1);
+ void (*prefetch)(struct walker *, unsigned char *sha1);
+ int (*fetch)(struct walker *, unsigned char *sha1);
+ void (*cleanup)(struct walker *);
+ int get_tree;
+ int get_history;
+ int get_all;
+ int get_verbosely;
+ int get_recover;
+
+ int corrupt_object_found;
+};
+
+/* Report what we got under get_verbosely */
+void walker_say(struct walker *walker, const char *, const char *);
+
+/* Load pull targets from stdin */
+int walker_targets_stdin(char ***target, const char ***write_ref);
+
+/* Free up loaded targets */
+void walker_targets_free(int targets, char **target, const char **write_ref);
+
+/* If write_ref is set, the ref filename to write the target value to. */
+/* If write_ref_log_details is set, additional text will appear in the ref log. */
+int walker_fetch(struct walker *impl, int targets, char **target,
+ const char **write_ref, const char *write_ref_log_details);
+
+void walker_free(struct walker *walker);
+
+struct walker *get_http_walker(const char *url);
+
+#endif /* WALKER_H */
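
A backend such as the curl-based http walker hands out a struct walker
with the hooks above filled in; a minimal sketch of driving it, not part
of this patch (pull_one and its arguments are illustrative):

    #include "cache.h"
    #include "walker.h"

    static int pull_one(const char *url, char *target, const char *write_ref)
    {
        struct walker *walker = get_http_walker(url);
        int ret;

        /* fetch the commit, its tree, and all of its history */
        walker->get_tree = 1;
        walker->get_history = 1;
        walker->get_all = 1;

        /* one target; write_ref (may be NULL) receives the result */
        ret = walker_fetch(walker, 1, &target, &write_ref, url);
        walker_free(walker);
        return ret;
    }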