autom4te.cache
config.log
config.status
-config.mak.in
config.mak.autogen
+config.mak.append
configure
git-blame
version.
core.sharedRepository::
- If true, the repository is made shareable between several users
- in a group (making sure all the files and objects are group-writable).
- See gitlink:git-init-db[1]. False by default.
+ When 'group' (or 'true'), the repository is made shareable between
+ several users in a group (making sure all the files and objects are
+ group-writable). When 'all' (or 'world' or 'everybody'), the
+ repository will be readable by all users, in addition to being
+ group-shareable. When 'umask' (or 'false'), git will use permissions
+ reported by umask(2). See gitlink:git-init-db[1]. False by default.
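+
+For example, an existing repository can be marked group-shareable with
+gitlink:git-repo-config[1] (note that this by itself does not fix up
+the permission bits of files that already exist in the repository):
+
+	$ git repo-config core.sharedRepository group
+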
core.warnAmbiguousRefs::
If true, git will warn you if the ref name you passed it is ambiguous
Tells `git-apply` how to handle whitespaces, in the same way
as the '--whitespace' option. See gitlink:git-apply[1].
+pager.color::
+ A boolean to enable/disable colored output when the pager is in
+ use (default is true).
+
diff.color::
When true (or `always`), always use colors in patch.
When false (or `never`), never. When set to `auto`, use
See gitlink:git-show-branch[1].
tar.umask::
- By default, git-link:git-tar-tree[1] sets file and directories modes
+ By default, gitlink:git-tar-tree[1] sets file and directory modes
to 0666 or 0777. While this is both useful and acceptable for projects
such as the Linux Kernel, it might be excessive for other projects.
With this variable, it becomes possible to tell
- git-link:git-tar-tree[1] to apply a specific umask to the modes above.
+ gitlink:git-tar-tree[1] to apply a specific umask to the modes above.
The special value "user" indicates that the user's current umask will
be used. This should be enough for most projects, as it will lead to
- the same permissions as git-link:git-checkout[1] would use. The default
+ the same permissions as gitlink:git-checkout[1] would use. The default
value remains 0, which means world read-write.
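
For example, to make gitlink:git-tar-tree[1] honor your current umask
rather than the default of 0, one could set:

	$ git repo-config tar.umask user
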
user.email::
+
It is not recommended to use this feature if you intend to
export changes back to CVS again later with
-git-link[1]::git-cvsexportcommit.
+gitlink:git-cvsexportcommit[1].
OUTPUT
------
[verse]
'git-grep' [--cached]
[-a | --text] [-I] [-i | --ignore-case] [-w | --word-regexp]
- [-v | --invert-match]
+ [-v | --invert-match] [--full-name]
[-E | --extended-regexp] [-G | --basic-regexp] [-F | --fixed-strings]
[-n] [-l | --files-with-matches] [-L | --files-without-match]
[-c | --count]
[-A <post-context>] [-B <pre-context>] [-C <context>]
- [-f <file>] [-e] <pattern>
+ [-f <file>] [-e] <pattern> [--and|--or|--not|(|)|-e <pattern>...]
[<tree>...]
[--] [<path>...]
-v | --invert-match::
Select non-matching lines.
+--full-name::
+ When run from a subdirectory, the command usually
+ outputs paths relative to the current directory. This
+ option forces paths to be output relative to the project
+ top directory.
+
-E | --extended-regexp | -G | --basic-regexp::
Use POSIX extended/basic regexp for patterns. Default
is to use basic regexp.
-e::
The next parameter is the pattern. This option has to be
used for patterns starting with - and should be used in
- scripts passing user input to grep.
+ scripts passing user input to grep. Multiple patterns are
+ combined by 'or'.
+
+--and | --or | --not | ( | )::
+ Specify how multiple patterns are combined using boolean
+ expressions. `--or` is the default operator. `--and` has
+ higher precedence than `--or`. `-e` has to be used for all
+ patterns.
`<tree>...`::
Search blobs in the trees for specified patterns.
-`--`::
+\--::
Signals the end of options; the rest of the parameters
are <path> limiters.
+Example
+-------
+
+git grep -e \'#define\' --and \( -e MAX_PATH -e PATH_MAX \)::
+ Looks for a line that has `#define` and either `MAX_PATH` or
+ `PATH_MAX`.
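+
+git grep -e \'#include\' --and --not -e pthread.h::
+	Looks for a line that has `#include` but does not have
+	`pthread.h` (the patterns here are merely illustrative).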
+
Author
------
Originally written by Linus Torvalds <torvalds@osdl.org>, later
SYNOPSIS
--------
-'git-init-db' [--template=<template_directory>] [--shared]
+'git-init-db' [--template=<template_directory>] [--shared[=<permissions>]]
OPTIONS
-------
+
+--
+
--template=<template_directory>::
- Provide the directory from which templates will be used.
- The default template directory is `/usr/share/git-core/templates`.
---shared::
- Specify that the git repository is to be shared amongst several users.
+Provide the directory from which templates will be used. The default template
+directory is `/usr/share/git-core/templates`.
+
+When specified, `<template_directory>` is used as the source of the template
+files rather than the default. The template files include some directory
+structure, some suggested "exclude patterns", and copies of non-executing
+"hook" files. The suggested patterns and hook files are all modifiable and
+extensible.
+
+--shared[={false|true|umask|group|all|world|everybody}]::
+
+Specify that the git repository is to be shared amongst several users. This
+allows users belonging to the same group to push into that
+repository. When specified, the config variable "core.sharedRepository" is
+set so that files and directories under `$GIT_DIR` are created with the
+requested permissions. When not specified, git will use permissions reported
+by umask(2).
+
+The option can have the following values, defaulting to 'group' if no value
+is given:
+
+ - 'umask' (or 'false'): Use permissions reported by umask(2). The default,
+ when `--shared` is not specified.
+
+ - 'group' (or 'true'): Make the repository group-writable (and g+sx, since
+ the git group may not be the primary group of all users).
+
+ - 'all' (or 'world' or 'everybody'): Same as 'group', but make the repository
+ readable by all users.
+
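+For example, a repository that everybody on the machine may read but
+only members of the owning group may push into could be set up like
+this (the path is only an illustration):
+
+	$ mkdir my-repo.git && cd my-repo.git
+	$ git --bare init-db --shared=all
+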
+--
DESCRIPTION
An initial `HEAD` file that references the HEAD of the master branch
is also created.
-If `--template=<template_directory>` is specified, `<template_directory>`
-is used as the source of the template files rather than the default.
-The template files include some directory structure, some suggested
-"exclude patterns", and copies of non-executing "hook" files. The
-suggested patterns and hook files are all modifiable and extensible.
-
If the `$GIT_DIR` environment variable is set then it specifies a path
to use instead of `./.git` for the base of the repository.
environment variable then the sha1 directories are created underneath -
otherwise the default `$GIT_DIR/objects` directory is used.
-A shared repository allows users belonging to the same group to push into that
-repository. When specifying `--shared` the config variable "core.sharedRepository"
-is set to 'true' so that directories under `$GIT_DIR` are made group writable
-(and g+sx, since the git group may be not the primary group of all users).
-
Running `git-init-db` in an existing repository is safe. It will not overwrite
things that are already there. The primary reason for rerunning `git-init-db`
is to pick up newly added templates.
SYNOPSIS
--------
-'git-push' [--all] [--tags] [--force] <repository> <refspec>...
+'git-push' [--all] [--tags] [-f | --force] <repository> <refspec>...
DESCRIPTION
-----------
[ [\--objects | \--objects-edge] [ \--unpacked ] ]
[ \--pretty | \--header ]
[ \--bisect ]
+ [ \--merge ]
<commit>... [ \-- <paths>... ]
DESCRIPTION
topological order (i.e. descendant commits are shown
before their parents).
+--merge::
+ After a failed merge, show refs that touch files having a
+ conflict and that don't exist on all heads to merge.
+
Author
------
Written by Linus Torvalds <torvalds@osdl.org>
SYNOPSIS
--------
-'git-status'
+'git-status' <options>...
DESCRIPTION
-----------
the current HEAD commit, the command exits with non-zero
status.
+The command takes the same set of options as `git-commit`; it
+shows what would be committed if the same options are given to
+`git-commit`.
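+
+For example, `git status -a` shows what `git commit -a` would commit.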
+
OUTPUT
------
gitlink:git-relink[1]::
Hardlink common objects in local repositories.
+gitlink:git-svn[1]::
+ Bidirectional operation between a single Subversion branch and git.
+
gitlink:git-svnimport[1]::
Import a SVN repository into git.
gitlink:git-imap-send[1]::
Dump a mailbox from stdin into an imap folder.
+gitlink:git-instaweb[1]::
+ Instantly browse your working repository in gitweb.
+
gitlink:git-mailinfo[1]::
Extracts patch and authorship information from a single
e-mail message, optionally transliterating the commit
other
~~~~~
+'GIT_PAGER'::
+ This environment variable overrides `$PAGER`.
+
'GIT_TRACE'::
If this variable is set git will print `trace:` messages on
stderr telling about alias expansion, built-in command
--- /dev/null
+From: Rutger Nijlunsing <rutger@nospam.com>
+Subject: Setting up a git repository which can be pushed into and pulled from over HTTP.
+Date: Thu, 10 Aug 2006 22:00:26 +0200
+
+Since Apache is one of those packages people like to compile
+themselves while others prefer the bureaucrat's dream Debian, it is
+impossible to give guidelines which will work for everyone. Just send
+some feedback to the mailing list at git@vger.kernel.org to get this
+document tailored to your favorite distro.
+
+
+What's needed:
+
+- Have an Apache web-server
+
+ On Debian:
+ $ apt-get install apache2
+ To have apache2 started by default,
+ edit /etc/default/apache2 and set NO_START=0
+
+- Be able to edit its configuration.
+
+ This is usually found under /etc/httpd; otherwise refer to your Apache documentation.
+
+ On Debian: this means being able to edit files under /etc/apache2
+
+- Be able to restart it.
+
+ 'apachectl --graceful' might do. If it doesn't, just stop and
+ restart Apache. Be warned that active connections to your server
+ might be aborted by this.
+
+ On Debian:
+ $ /etc/init.d/apache2 restart
+ or
+ $ /etc/init.d/apache2 force-reload
+ (which seems to do the same)
+ This adds symlinks in /etc/apache2/mods-enabled pointing to
+ /etc/apache2/mods-available.
+
+- Have permission to chown a directory
+
+- Have git installed on the server _and_ the client
+
+In effect, this probably means you're going to be root.
+
+
+Step 1: setup a bare GIT repository
+-----------------------------------
+
+At the time of writing, git-http-push cannot remotely create a GIT
+repository. So we have to do that at the server side with git. Another
+option would be to generate an empty repository at the client and copy
+it to the server with WebDAV. But then you're probably the first to
+try that out :)
+
+Create the directory under the DocumentRoot of the directories served
+by Apache. As an example we take /usr/local/apache2, but try "grep
+DocumentRoot /where/ever/httpd.conf" to find your root:
+
+ $ cd /usr/local/apache2/htdocs
+ $ mkdir my-new-repo.git
+
+ On Debian:
+
+ $ cd /var/www
+ $ mkdir my-new-repo.git
+
+
+Initialize a bare repository
+
+ $ cd my-new-repo.git
+ $ git --bare init-db
+
+
+Change the ownership to your web-server's credentials. Use "grep ^User
+httpd.conf" and "grep ^Group httpd.conf" to find out:
+
+ $ chown -R www.www .
+
+ On Debian:
+
+ $ chown -R www-data.www-data .
+
+
+If you do not know which user Apache runs as, you can alternatively do
+a "chmod -R a+w .", inspect the files which are created later on, and
+set the permissions appropriately.
+
+Restart apache2, and check whether http://server/my-new-repo.git gives
+a directory listing. If not, check whether apache started up
+successfully.
+
+
+Step 2: enable DAV on this repository
+-------------------------------------
+
+First make sure the dav_module is loaded. For this, insert in httpd.conf:
+
+ LoadModule dav_module libexec/httpd/libdav.so
+ AddModule mod_dav.c
+
+Also make sure that this line, which specifies the file used for
+locking DAV operations, is present:
+
+ DAVLockDB "/usr/local/apache2/temp/DAV.lock"
+
+ On Debian these steps can be performed with:
+
+ Enable the dav and dav_fs modules of apache:
+ $ a2enmod dav_fs
+ (just to be sure. dav_fs might be unneeded, I don't know)
+ $ a2enmod dav
+ The DAV lock is located in /etc/apache2/mods-available/dav_fs.conf:
+ DAVLockDB /var/lock/apache2/DAVLock
+
+Of course, it can point somewhere else, but the string is actually just a
+prefix in some Apache configurations, and therefore the _directory_ has to
+be writable by the user Apache runs as.
+
+Then, add something like this to your httpd.conf
+
+ <Location /my-new-repo.git>
+ DAV on
+ AuthType Basic
+ AuthName "Git"
+ AuthUserFile /usr/local/apache2/conf/passwd.git
+ Require valid-user
+ </Location>
+
+ On Debian:
+ Create (or add to) /etc/apache2/conf.d/git.conf :
+
+ <Location /my-new-repo.git>
+ DAV on
+ AuthType Basic
+ AuthName "Git"
+ AuthUserFile /etc/apache2/passwd.git
+ Require valid-user
+ </Location>
+
+ Debian automatically reads all files under /etc/apache2/conf.d.
+
+The password file can be somewhere else, but it has to be readable by
+Apache and preferably not readable by the world.
+
+Create this file by
+ $ htpasswd -c /usr/local/apache2/conf/passwd.git <user>
+
+ On Debian:
+ $ htpasswd -c /etc/apache2/passwd.git <user>
+
+You will be asked for a password, and the file is created. Subsequent calls
+to htpasswd should omit the '-c' option, since you want to append to the
+existing file.
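+
+If you want to make sure the file is not world-readable, something like
+this may help (only a suggestion; use the user and group your Apache
+actually runs as, and the path you chose above):
+
+ $ chown root:www-data /etc/apache2/passwd.git
+ $ chmod 640 /etc/apache2/passwd.git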
+
+You need to restart Apache.
+
+Now go to http://<username>@<servername>/my-new-repo.git in your
+browser to check whether it asks for a password and accepts the right
+password.
+
+On Debian:
+
+ To test the WebDAV part, do:
+
+ $ apt-get install litmus
+ $ litmus http://<servername>/my-new-repo.git <username> <password>
+
+ Most tests should pass.
+
+A command line tool to test WebDAV is cadaver.
+
+If you're on Windows, Internet Explorer supports WebDAV from XP onwards.
+To use it, do Internet Explorer -> Open Location ->
+http://<servername>/my-new-repo.git , check [x] Open as webfolder, and log in.
+
+
+Step 3: setup the client
+------------------------
+
+Make sure that you have HTTP support, i.e. your git was built with curl.
+The easiest way to check is to look for the executable 'git-http-push'.
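+
+One way to check (assuming 'git' itself is already in your PATH):
+
+ $ ls "$(git --exec-path)"/git-http-push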
+
+Then, add the following to your $HOME/.netrc (you can do without, but will be
+asked to input your password a _lot_ of times):
+
+ machine <servername>
+ login <username>
+ password <password>
+
+...and set permissions:
+ chmod 600 ~/.netrc
+
+If you want to access the web-server by its IP, you have to type that in,
+instead of the server name.
+
+To check whether all is OK, do:
+
+ curl --netrc --location -v http://<username>@<servername>/my-new-repo.git/
+
+...this should give a directory listing in HTML of /var/www/my-new-repo.git .
+
+
+Now, add the remote in your existing repository which contains the project
+you want to export:
+
+ $ git-repo-config remote.upload.url \
+ http://<username>@<servername>/my-new-repo.git/
+
+It is important to include the trailing '/'; without it, the server will send
+a redirect which git-http-push does not (yet) understand, and git-http-push
+will repeat the request infinitely.
+
+
+Step 4: make the initial push
+-----------------------------
+
+From your client repository, do
+
+ $ git push upload master
+
+This pushes branch 'master' (which is assumed to be the branch you
+want to export) to the repository called 'upload', which we previously
+defined with git-repo-config.
+
+
+Troubleshooting:
+----------------
+
+If git-http-push says
+
+ Error: no DAV locking support on remote repo http://...
+
+then it means the web-server did not accept your authentication. Make sure
+that the user name and password match in httpd.conf, .netrc and the URL
+you are uploading to.
+
+If git-http-push shows you an error (22/502) when trying to MOVE a blob,
+it means that your web-server somehow does not recognize its name in the
+request; this can happen when you start Apache, but then disable the
+network interface. A simple restart of Apache helps.
+
+Errors like (22/502) are of the format (curl error code/http error
+code). So (22/404) means something like 'not found' at the server.
+
+Reading /usr/local/apache2/logs/error_log is often helpful.
+
+ On Debian: Read /var/log/apache2/error.log instead.
+
+
+Debian References: http://www.debian-administration.org/articles/285
+
+Authors
+ Johannes Schindelin <Johannes.Schindelin@gmx.de>
+ Rutger Nijlunsing <git@wingding.demon.nl>
$ git cat-file -t 513feba2
blob
$ git cat-file blob 513feba2
+hello world!
hello world, again
------------------------------------------------
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v1.4.GIT
+DEF_VER=v1.4.2.GIT
+
+LF='
+'
# First try git-describe, then see if there is a version file
# (included in release tarballs), then default
-if VN=$(git describe --abbrev=4 HEAD 2>/dev/null); then
+if VN=$(git describe --abbrev=4 HEAD 2>/dev/null) &&
+ case "$VN" in
+ *$LF*) (exit 1) ;;
+ v[0-9]*) : happy ;;
+ esac
+then
VN=$(echo "$VN" | sed -e 's/-/./g');
elif test -f version
then
Alternatively you can use autoconf generated ./configure script to
set up install paths (via config.mak.autogen), so you can write instead
- $ autoconf ;# as yourself if ./configure doesn't exist yet
+ $ make configure ;# as yourself
$ ./configure --prefix=/usr ;# as yourself
$ make all doc ;# as yourself
# make install install-doc ;# as root
# Define NO_D_TYPE_IN_DIRENT if your platform defines DT_UNKNOWN but lacks
# d_type in struct dirent (latest Cygwin -- will be fixed soonish).
#
+# Define NO_C99_FORMAT if your formatted IO functions (printf/scanf et al.)
+# do not support the 'size specifiers' introduced by C99, namely ll, hh,
+# j, z, t (representing long long int, char, intmax_t, size_t, ptrdiff_t).
+# Some C compilers supported these specifiers prior to C99 as an extension.
+#
# Define NO_STRCASESTR if you don't have strcasestr.
#
# Define NO_STRLCPY if you don't have strlcpy.
# ... and all the rest that could be moved out of bindir to gitexecdir
PROGRAMS = \
- git-checkout-index$X \
git-convert-objects$X git-fetch-pack$X git-fsck-objects$X \
git-hash-object$X git-index-pack$X git-local-fetch$X \
git-merge-base$X \
- git-merge-index$X git-mktag$X git-mktree$X git-pack-objects$X git-patch-id$X \
- git-peek-remote$X git-prune-packed$X git-receive-pack$X \
+ git-merge-index$X git-mktag$X git-mktree$X git-patch-id$X \
+ git-peek-remote$X git-receive-pack$X \
git-send-pack$X git-shell$X \
git-show-index$X git-ssh-fetch$X \
git-ssh-upload$X git-unpack-file$X \
- git-unpack-objects$X git-update-server-info$X \
+ git-update-server-info$X \
git-upload-pack$X git-verify-pack$X \
- git-symbolic-ref$X \
- git-name-rev$X git-pack-redundant$X git-repo-config$X git-var$X \
+ git-pack-redundant$X git-var$X \
git-describe$X git-merge-tree$X git-blame$X git-imap-send$X
-BUILT_INS = git-log$X git-whatchanged$X git-show$X git-update-ref$X \
- git-count-objects$X git-diff$X git-push$X git-mailsplit$X \
- git-grep$X git-add$X git-rm$X git-rev-list$X git-stripspace$X \
- git-check-ref-format$X git-rev-parse$X git-mailinfo$X \
- git-init-db$X git-tar-tree$X git-upload-tar$X git-format-patch$X \
- git-ls-files$X git-ls-tree$X git-get-tar-commit-id$X \
- git-read-tree$X git-commit-tree$X git-write-tree$X \
- git-apply$X git-show-branch$X git-diff-files$X git-update-index$X \
- git-diff-index$X git-diff-stages$X git-diff-tree$X git-cat-file$X \
- git-fmt-merge-msg$X git-prune$X git-mv$X
+BUILT_INS = \
+ git-format-patch$X git-show$X git-whatchanged$X \
+ git-get-tar-commit-id$X \
+ $(patsubst builtin-%.o,git-%$X,$(BUILTIN_OBJS))
# what 'all' will build and 'install' will install, in gitexecdir
ALL_PROGRAMS = $(PROGRAMS) $(SIMPLE_PROGRAMS) $(SCRIPTS)
blob.h cache.h commit.h csum-file.h delta.h \
diff.h object.h pack.h pkt-line.h quote.h refs.h \
run-command.h strbuf.h tag.h tree.h git-compat-util.h revision.h \
- tree-walk.h log-tree.h dir.h path-list.h
+ tree-walk.h log-tree.h dir.h path-list.h builtin.h
DIFF_OBJS = \
diff.o diff-lib.o diffcore-break.o diffcore-order.o \
server-info.o setup.o sha1_file.o sha1_name.o strbuf.o \
tag.o tree.o usage.o config.o environment.o ctype.o copy.o \
fetch-clone.o revision.o pager.o tree-walk.o xdiff-interface.o \
- alloc.o merge-file.o path-list.o $(DIFF_OBJS)
+ alloc.o merge-file.o path-list.o help.o $(DIFF_OBJS)
BUILTIN_OBJS = \
- builtin-log.o builtin-help.o builtin-count.o builtin-diff.o builtin-push.o \
- builtin-grep.o builtin-add.o builtin-rev-list.o builtin-check-ref-format.o \
- builtin-rm.o builtin-init-db.o builtin-rev-parse.o \
- builtin-tar-tree.o builtin-upload-tar.o builtin-update-index.o \
- builtin-ls-files.o builtin-ls-tree.o builtin-write-tree.o \
- builtin-read-tree.o builtin-commit-tree.o builtin-mailinfo.o \
- builtin-apply.o builtin-show-branch.o builtin-diff-files.o \
- builtin-diff-index.o builtin-diff-stages.o builtin-diff-tree.o \
- builtin-cat-file.o builtin-mailsplit.o builtin-stripspace.o \
- builtin-update-ref.o builtin-fmt-merge-msg.o builtin-prune.o \
- builtin-mv.o
+ builtin-add.o \
+ builtin-apply.o \
+ builtin-cat-file.o \
+ builtin-checkout-index.o \
+ builtin-check-ref-format.o \
+ builtin-commit-tree.o \
+ builtin-count-objects.o \
+ builtin-diff.o \
+ builtin-diff-files.o \
+ builtin-diff-index.o \
+ builtin-diff-stages.o \
+ builtin-diff-tree.o \
+ builtin-fmt-merge-msg.o \
+ builtin-grep.o \
+ builtin-init-db.o \
+ builtin-log.o \
+ builtin-ls-files.o \
+ builtin-ls-tree.o \
+ builtin-mailinfo.o \
+ builtin-mailsplit.o \
+ builtin-mv.o \
+ builtin-name-rev.o \
+ builtin-pack-objects.o \
+ builtin-prune.o \
+ builtin-prune-packed.o \
+ builtin-push.o \
+ builtin-read-tree.o \
+ builtin-repo-config.o \
+ builtin-rev-list.o \
+ builtin-rev-parse.o \
+ builtin-rm.o \
+ builtin-show-branch.o \
+ builtin-stripspace.o \
+ builtin-symbolic-ref.o \
+ builtin-tar-tree.o \
+ builtin-unpack-objects.o \
+ builtin-update-index.o \
+ builtin-update-ref.o \
+ builtin-upload-tar.o \
+ builtin-verify-pack.o \
+ builtin-write-tree.o
GITLIBS = $(LIB_FILE) $(XDIFF_LIB)
LIBS = $(GITLIBS) -lz
NO_D_TYPE_IN_DIRENT = YesPlease
NO_D_INO_IN_DIRENT = YesPlease
NO_STRCASESTR = YesPlease
- NO_STRLCPY = YesPlease
NO_SYMLINK_HEAD = YesPlease
NEEDS_LIBICONV = YesPlease
+ NO_C99_FORMAT = YesPlease
# There are conflicting reports about this.
# On some boxes NO_MMAP is needed, and not so elsewhere.
# Try uncommenting this if you see things break -- YMMV.
ifdef NO_D_INO_IN_DIRENT
ALL_CFLAGS += -DNO_D_INO_IN_DIRENT
endif
+ifdef NO_C99_FORMAT
+ ALL_CFLAGS += -DNO_C99_FORMAT
+endif
ifdef NO_SYMLINK_HEAD
ALL_CFLAGS += -DNO_SYMLINK_HEAD
endif
$(ALL_CFLAGS) -o $@ $(filter %.c,$^) \
$(BUILTIN_OBJS) $(ALL_LDFLAGS) $(LIBS)
-builtin-help.o: common-cmds.h
+help.o: common-cmds.h
$(BUILT_INS): git$X
rm -f $@ && ln git$X $@
chmod +x $@+
mv $@+ $@
+configure: configure.ac
+ rm -f $@ $<+
+ sed -e 's/@@GIT_VERSION@@/$(GIT_VERSION)/g' \
+ $< > $<+
+ autoconf -o $@ $<+
+ rm -f $<+
+
# These can record GIT_VERSION
git$X git.spec \
$(patsubst %.sh,%,$(SCRIPT_SH)) \
$(INSTALL) -d -m755 '$(DESTDIR_SQ)$(gitexecdir_SQ)'
$(INSTALL) $(ALL_PROGRAMS) '$(DESTDIR_SQ)$(gitexecdir_SQ)'
$(INSTALL) git$X gitk '$(DESTDIR_SQ)$(bindir_SQ)'
- $(MAKE) -C templates install
+ $(MAKE) -C templates DESTDIR='$(DESTDIR_SQ)' install
$(INSTALL) -d -m755 '$(DESTDIR_SQ)$(GIT_PYTHON_DIR_SQ)'
$(INSTALL) $(PYMODULES) '$(DESTDIR_SQ)$(GIT_PYTHON_DIR_SQ)'
if test 'z$(bindir_SQ)' != 'z$(gitexecdir_SQ)'; \
rm -f $(ALL_PROGRAMS) $(BUILT_INS) git$X
rm -f *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h TAGS tags
rm -rf autom4te.cache
- rm -f config.log config.mak.autogen configure config.status config.cache
+ rm -f configure config.log config.mak.autogen config.mak.append config.status config.cache
rm -rf $(GIT_TARNAME) .doc-tmp-dir
rm -f $(GIT_TARNAME).tar.gz git-core_$(GIT_VERSION)-*.tar.gz
rm -f $(htmldocs).tar.gz $(manpages).tar.gz
DEFINE_ALLOCATOR(commit)
DEFINE_ALLOCATOR(tag)
+#ifdef NO_C99_FORMAT
+#define SZ_FMT "%u"
+#else
+#define SZ_FMT "%zu"
+#endif
+
+static void report(const char* name, unsigned int count, size_t size)
+{
+ fprintf(stderr, "%10s: %8u (" SZ_FMT " kB)\n", name, count, size);
+}
+
+#undef SZ_FMT
+
#define REPORT(name) \
- fprintf(stderr, "%10s: %8u (%zu kB)\n", #name, name##_allocs, name##_allocs*sizeof(struct name) >> 10)
+ report(#name, name##_allocs, name##_allocs*sizeof(struct name) >> 10)
void alloc_report(void)
{
#define DEBUG 0
-static const char blame_usage[] = "[-c] [-l] [-t] [-S <revs-file>] [--] file [commit]\n"
+static const char blame_usage[] = "git-blame [-c] [-l] [-t] [-S <revs-file>] [--] file [commit]\n"
" -c, --compatibility Use the same output mode as git-annotate (Default: off)\n"
" -l, --long Show long commit SHA1 (Default: off)\n"
" -t, --time Show raw timestamp (Default: off)\n"
git_config(git_default_config);
- newfd = hold_lock_file_for_update(&lock_file, get_index_file());
- if (newfd < 0)
- die("unable to create new index file");
+ newfd = hold_lock_file_for_update(&lock_file, get_index_file(), 1);
if (read_cache() < 0)
die("index file corrupt");
verbose = 1;
continue;
}
- die(builtin_add_usage);
+ usage(builtin_add_usage);
}
pathspec = get_pathspec(prefix, argv + i);
desc.buffer = buf;
if (apply_fragments(&desc, patch) < 0)
return -1;
+
+ /* NUL terminate the result */
+ if (desc.alloc <= desc.size)
+ desc.buffer = xrealloc(desc.buffer, desc.size + 1);
+ desc.buffer[desc.size] = 0;
+
patch->result = desc.buffer;
patch->resultsize = desc.size;
int fd;
if (S_ISLNK(mode))
+ /* Although buf:size is a counted string, it is also NUL
+ * terminated.
+ */
return symlink(buf, path);
fd = open(path, O_CREAT | O_EXCL | O_WRONLY, (mode & 0100) ? 0777 : 0666);
if (fd < 0)
apply = 0;
write_index = check_index && apply;
- if (write_index && newfd < 0) {
+ if (write_index && newfd < 0)
newfd = hold_lock_file_for_update(&lock_file,
- get_index_file());
- if (newfd < 0)
- die("unable to create new index file");
- }
+ get_index_file(), 1);
if (check_index) {
if (read_cache() < 0)
die("unable to read index file");
int cmd_check_ref_format(int argc, const char **argv, const char *prefix)
{
if (argc != 2)
- usage("git check-ref-format refname");
+ usage("git-check-ref-format refname");
return !!check_ref_format(argv[1]);
}
--- /dev/null
+/*
+ * Check-out files from the "current cache directory"
+ *
+ * Copyright (C) 2005 Linus Torvalds
+ *
+ * Careful: order of argument flags does matter. For example,
+ *
+ * git-checkout-index -a -f file.c
+ *
+ * Will first check out all files listed in the cache (but not
+ * overwrite any old ones), and then force-checkout "file.c" a
+ * second time (ie that one _will_ overwrite any old contents
+ * with the same filename).
+ *
+ * Also, just doing "git-checkout-index" does nothing. You probably
+ * meant "git-checkout-index -a". And if you want to force it, you
+ * want "git-checkout-index -f -a".
+ *
+ * Intuitiveness is not the goal here. Repeatability is. The
+ * reason for the "no arguments means no work" thing is that
+ * from scripts you are supposed to be able to do things like
+ *
+ * find . -name '*.h' -print0 | xargs -0 git-checkout-index -f --
+ *
+ * or:
+ *
+ * find . -name '*.h' -print0 | git-checkout-index -f -z --stdin
+ *
+ * which will force all existing *.h files to be replaced with
+ * their cached copies. If an empty command line implied "all",
+ * then this would force-refresh everything in the cache, which
+ * was not the point.
+ *
+ * Oh, and the "--" is just a good idea when you know the rest
+ * will be filenames. Just so that you wouldn't have a filename
+ * of "-a" causing problems (not possible in the above example,
+ * but get used to it in scripting!).
+ */
+#include "cache.h"
+#include "strbuf.h"
+#include "quote.h"
+#include "cache-tree.h"
+
+#define CHECKOUT_ALL 4
+static int line_termination = '\n';
+static int checkout_stage; /* default to checkout stage0 */
+static int to_tempfile;
+static char topath[4][MAXPATHLEN+1];
+
+static struct checkout state;
+
+static void write_tempfile_record(const char *name, int prefix_length)
+{
+ int i;
+
+ if (CHECKOUT_ALL == checkout_stage) {
+ for (i = 1; i < 4; i++) {
+ if (i > 1)
+ putchar(' ');
+ if (topath[i][0])
+ fputs(topath[i], stdout);
+ else
+ putchar('.');
+ }
+ } else
+ fputs(topath[checkout_stage], stdout);
+
+ putchar('\t');
+ write_name_quoted("", 0, name + prefix_length,
+ line_termination, stdout);
+ putchar(line_termination);
+
+ for (i = 0; i < 4; i++) {
+ topath[i][0] = 0;
+ }
+}
+
+static int checkout_file(const char *name, int prefix_length)
+{
+ int namelen = strlen(name);
+ int pos = cache_name_pos(name, namelen);
+ int has_same_name = 0;
+ int did_checkout = 0;
+ int errs = 0;
+
+ if (pos < 0)
+ pos = -pos - 1;
+
+ while (pos < active_nr) {
+ struct cache_entry *ce = active_cache[pos];
+ if (ce_namelen(ce) != namelen ||
+ memcmp(ce->name, name, namelen))
+ break;
+ has_same_name = 1;
+ pos++;
+ if (ce_stage(ce) != checkout_stage
+ && (CHECKOUT_ALL != checkout_stage || !ce_stage(ce)))
+ continue;
+ did_checkout = 1;
+ if (checkout_entry(ce, &state,
+ to_tempfile ? topath[ce_stage(ce)] : NULL) < 0)
+ errs++;
+ }
+
+ if (did_checkout) {
+ if (to_tempfile)
+ write_tempfile_record(name, prefix_length);
+ return errs > 0 ? -1 : 0;
+ }
+
+ if (!state.quiet) {
+ fprintf(stderr, "git-checkout-index: %s ", name);
+ if (!has_same_name)
+ fprintf(stderr, "is not in the cache");
+ else if (checkout_stage)
+ fprintf(stderr, "does not exist at stage %d",
+ checkout_stage);
+ else
+ fprintf(stderr, "is unmerged");
+ fputc('\n', stderr);
+ }
+ return -1;
+}
+
+static int checkout_all(const char *prefix, int prefix_length)
+{
+ int i, errs = 0;
+ struct cache_entry* last_ce = NULL;
+
+ for (i = 0; i < active_nr ; i++) {
+ struct cache_entry *ce = active_cache[i];
+ if (ce_stage(ce) != checkout_stage
+ && (CHECKOUT_ALL != checkout_stage || !ce_stage(ce)))
+ continue;
+ if (prefix && *prefix &&
+ (ce_namelen(ce) <= prefix_length ||
+ memcmp(prefix, ce->name, prefix_length)))
+ continue;
+ if (last_ce && to_tempfile) {
+ if (ce_namelen(last_ce) != ce_namelen(ce)
+ || memcmp(last_ce->name, ce->name, ce_namelen(ce)))
+ write_tempfile_record(last_ce->name, prefix_length);
+ }
+ if (checkout_entry(ce, &state,
+ to_tempfile ? topath[ce_stage(ce)] : NULL) < 0)
+ errs++;
+ last_ce = ce;
+ }
+ if (last_ce && to_tempfile)
+ write_tempfile_record(last_ce->name, prefix_length);
+ if (errs)
+ /* we have already done our error reporting.
+ * exit with the same code as die().
+ */
+ exit(128);
+ return 0;
+}
+
+static const char checkout_cache_usage[] =
+"git-checkout-index [-u] [-q] [-a] [-f] [-n] [--stage=[123]|all] [--prefix=<string>] [--temp] [--] <file>...";
+
+static struct lock_file lock_file;
+
+int cmd_checkout_index(int argc, const char **argv, const char *prefix)
+{
+ int i;
+ int newfd = -1;
+ int all = 0;
+ int read_from_stdin = 0;
+ int prefix_length;
+
+ git_config(git_default_config);
+ state.base_dir = "";
+ prefix_length = prefix ? strlen(prefix) : 0;
+
+ if (read_cache() < 0) {
+ die("invalid cache");
+ }
+
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+
+ if (!strcmp(arg, "--")) {
+ i++;
+ break;
+ }
+ if (!strcmp(arg, "-a") || !strcmp(arg, "--all")) {
+ all = 1;
+ continue;
+ }
+ if (!strcmp(arg, "-f") || !strcmp(arg, "--force")) {
+ state.force = 1;
+ continue;
+ }
+ if (!strcmp(arg, "-q") || !strcmp(arg, "--quiet")) {
+ state.quiet = 1;
+ continue;
+ }
+ if (!strcmp(arg, "-n") || !strcmp(arg, "--no-create")) {
+ state.not_new = 1;
+ continue;
+ }
+ if (!strcmp(arg, "-u") || !strcmp(arg, "--index")) {
+ state.refresh_cache = 1;
+ if (newfd < 0)
+ newfd = hold_lock_file_for_update
+ (&lock_file, get_index_file(), 1);
+ if (newfd < 0)
+ die("cannot open index.lock file.");
+ continue;
+ }
+ if (!strcmp(arg, "-z")) {
+ line_termination = 0;
+ continue;
+ }
+ if (!strcmp(arg, "--stdin")) {
+ if (i != argc - 1)
+ die("--stdin must be at the end");
+ read_from_stdin = 1;
+ i++; /* do not consider arg as a file name */
+ break;
+ }
+ if (!strcmp(arg, "--temp")) {
+ to_tempfile = 1;
+ continue;
+ }
+ if (!strncmp(arg, "--prefix=", 9)) {
+ state.base_dir = arg+9;
+ state.base_dir_len = strlen(state.base_dir);
+ continue;
+ }
+ if (!strncmp(arg, "--stage=", 8)) {
+ if (!strcmp(arg + 8, "all")) {
+ to_tempfile = 1;
+ checkout_stage = CHECKOUT_ALL;
+ } else {
+ int ch = arg[8];
+ if ('1' <= ch && ch <= '3')
+ checkout_stage = arg[8] - '0';
+ else
+ die("stage should be between 1 and 3 or all");
+ }
+ continue;
+ }
+ if (arg[0] == '-')
+ usage(checkout_cache_usage);
+ break;
+ }
+
+ if (state.base_dir_len || to_tempfile) {
+ /* when --prefix is specified we do not
+ * want to update cache.
+ */
+ if (state.refresh_cache) {
+ close(newfd); newfd = -1;
+ rollback_lock_file(&lock_file);
+ }
+ state.refresh_cache = 0;
+ }
+
+ /* Check out named files first */
+ for ( ; i < argc; i++) {
+ const char *arg = argv[i];
+ const char *p;
+
+ if (all)
+ die("git-checkout-index: don't mix '--all' and explicit filenames");
+ if (read_from_stdin)
+ die("git-checkout-index: don't mix '--stdin' and explicit filenames");
+ p = prefix_path(prefix, prefix_length, arg);
+ checkout_file(p, prefix_length);
+ if (p < arg || p > arg + strlen(arg))
+ free((char*)p);
+ }
+
+ if (read_from_stdin) {
+ struct strbuf buf;
+ if (all)
+ die("git-checkout-index: don't mix '--all' and '--stdin'");
+ strbuf_init(&buf);
+ while (1) {
+ char *path_name;
+ const char *p;
+
+ read_line(&buf, stdin, line_termination);
+ if (buf.eof)
+ break;
+ if (line_termination && buf.buf[0] == '"')
+ path_name = unquote_c_style(buf.buf, NULL);
+ else
+ path_name = buf.buf;
+ p = prefix_path(prefix, prefix_length, path_name);
+ checkout_file(p, prefix_length);
+ if (p < path_name || p > path_name + strlen(path_name))
+ free((char *)p);
+ if (path_name != buf.buf)
+ free(path_name);
+ }
+ }
+
+ if (all)
+ checkout_all(prefix, prefix_length);
+
+ if (0 <= newfd &&
+ (write_cache(newfd, active_cache, active_nr) ||
+ close(newfd) || commit_lock_file(&lock_file)))
+ die("Unable to write new index file");
+ return 0;
+}
--- /dev/null
+/*
+ * Builtin "git count-objects".
+ *
+ * Copyright (c) 2006 Junio C Hamano
+ */
+
+#include "cache.h"
+#include "builtin.h"
+
+static const char count_objects_usage[] = "git-count-objects [-v]";
+
+static void count_objects(DIR *d, char *path, int len, int verbose,
+ unsigned long *loose,
+ unsigned long *loose_size,
+ unsigned long *packed_loose,
+ unsigned long *garbage)
+{
+ struct dirent *ent;
+ while ((ent = readdir(d)) != NULL) {
+ char hex[41];
+ unsigned char sha1[20];
+ const char *cp;
+ int bad = 0;
+
+ if ((ent->d_name[0] == '.') &&
+ (ent->d_name[1] == 0 ||
+ ((ent->d_name[1] == '.') && (ent->d_name[2] == 0))))
+ continue;
+ for (cp = ent->d_name; *cp; cp++) {
+ int ch = *cp;
+ if (('0' <= ch && ch <= '9') ||
+ ('a' <= ch && ch <= 'f'))
+ continue;
+ bad = 1;
+ break;
+ }
+ if (cp - ent->d_name != 38)
+ bad = 1;
+ else {
+ struct stat st;
+ memcpy(path + len + 3, ent->d_name, 38);
+ path[len + 2] = '/';
+ path[len + 41] = 0;
+ if (lstat(path, &st) || !S_ISREG(st.st_mode))
+ bad = 1;
+ else
+ (*loose_size) += st.st_blocks;
+ }
+ if (bad) {
+ if (verbose) {
+ error("garbage found: %.*s/%s",
+ len + 2, path, ent->d_name);
+ (*garbage)++;
+ }
+ continue;
+ }
+ (*loose)++;
+ if (!verbose)
+ continue;
+ memcpy(hex, path+len, 2);
+ memcpy(hex+2, ent->d_name, 38);
+ hex[40] = 0;
+ if (get_sha1_hex(hex, sha1))
+ die("internal error");
+ if (has_sha1_pack(sha1))
+ (*packed_loose)++;
+ }
+}
+
+int cmd_count_objects(int ac, const char **av, const char *prefix)
+{
+ int i;
+ int verbose = 0;
+ const char *objdir = get_object_directory();
+ int len = strlen(objdir);
+ char *path = xmalloc(len + 50);
+ unsigned long loose = 0, packed = 0, packed_loose = 0, garbage = 0;
+ unsigned long loose_size = 0;
+
+ for (i = 1; i < ac; i++) {
+ const char *arg = av[i];
+ if (*arg != '-')
+ break;
+ else if (!strcmp(arg, "-v"))
+ verbose = 1;
+ else
+ usage(count_objects_usage);
+ }
+
+ /* we do not take arguments other than flags for now */
+ if (i < ac)
+ usage(count_objects_usage);
+ memcpy(path, objdir, len);
+ if (len && objdir[len-1] != '/')
+ path[len++] = '/';
+ for (i = 0; i < 256; i++) {
+ DIR *d;
+ sprintf(path + len, "%02x", i);
+ d = opendir(path);
+ if (!d)
+ continue;
+ count_objects(d, path, len, verbose,
+ &loose, &loose_size, &packed_loose, &garbage);
+ closedir(d);
+ }
+ if (verbose) {
+ struct packed_git *p;
+ if (!packed_git)
+ prepare_packed_git();
+ for (p = packed_git; p; p = p->next) {
+ if (!p->pack_local)
+ continue;
+ packed += num_packed_objects(p);
+ }
+ printf("count: %lu\n", loose);
+ printf("size: %lu\n", loose_size / 2);
+ printf("in-pack: %lu\n", packed);
+ printf("prune-packable: %lu\n", packed_loose);
+ printf("garbage: %lu\n", garbage);
+ }
+ else
+ printf("%lu objects, %lu kilobytes\n",
+ loose, loose_size / 2);
+ return 0;
+}
+++ /dev/null
-/*
- * Builtin "git count-objects".
- *
- * Copyright (c) 2006 Junio C Hamano
- */
-
-#include "cache.h"
-#include "builtin.h"
-
-static const char count_objects_usage[] = "git-count-objects [-v]";
-
-static void count_objects(DIR *d, char *path, int len, int verbose,
- unsigned long *loose,
- unsigned long *loose_size,
- unsigned long *packed_loose,
- unsigned long *garbage)
-{
- struct dirent *ent;
- while ((ent = readdir(d)) != NULL) {
- char hex[41];
- unsigned char sha1[20];
- const char *cp;
- int bad = 0;
-
- if ((ent->d_name[0] == '.') &&
- (ent->d_name[1] == 0 ||
- ((ent->d_name[1] == '.') && (ent->d_name[2] == 0))))
- continue;
- for (cp = ent->d_name; *cp; cp++) {
- int ch = *cp;
- if (('0' <= ch && ch <= '9') ||
- ('a' <= ch && ch <= 'f'))
- continue;
- bad = 1;
- break;
- }
- if (cp - ent->d_name != 38)
- bad = 1;
- else {
- struct stat st;
- memcpy(path + len + 3, ent->d_name, 38);
- path[len + 2] = '/';
- path[len + 41] = 0;
- if (lstat(path, &st) || !S_ISREG(st.st_mode))
- bad = 1;
- else
- (*loose_size) += st.st_blocks;
- }
- if (bad) {
- if (verbose) {
- error("garbage found: %.*s/%s",
- len + 2, path, ent->d_name);
- (*garbage)++;
- }
- continue;
- }
- (*loose)++;
- if (!verbose)
- continue;
- memcpy(hex, path+len, 2);
- memcpy(hex+2, ent->d_name, 38);
- hex[40] = 0;
- if (get_sha1_hex(hex, sha1))
- die("internal error");
- if (has_sha1_pack(sha1))
- (*packed_loose)++;
- }
-}
-
-int cmd_count_objects(int ac, const char **av, const char *prefix)
-{
- int i;
- int verbose = 0;
- const char *objdir = get_object_directory();
- int len = strlen(objdir);
- char *path = xmalloc(len + 50);
- unsigned long loose = 0, packed = 0, packed_loose = 0, garbage = 0;
- unsigned long loose_size = 0;
-
- for (i = 1; i < ac; i++) {
- const char *arg = av[i];
- if (*arg != '-')
- break;
- else if (!strcmp(arg, "-v"))
- verbose = 1;
- else
- usage(count_objects_usage);
- }
-
- /* we do not take arguments other than flags for now */
- if (i < ac)
- usage(count_objects_usage);
- memcpy(path, objdir, len);
- if (len && objdir[len-1] != '/')
- path[len++] = '/';
- for (i = 0; i < 256; i++) {
- DIR *d;
- sprintf(path + len, "%02x", i);
- d = opendir(path);
- if (!d)
- continue;
- count_objects(d, path, len, verbose,
- &loose, &loose_size, &packed_loose, &garbage);
- closedir(d);
- }
- if (verbose) {
- struct packed_git *p;
- if (!packed_git)
- prepare_packed_git();
- for (p = packed_git; p; p = p->next) {
- if (!p->pack_local)
- continue;
- packed += num_packed_objects(p);
- }
- printf("count: %lu\n", loose);
- printf("size: %lu\n", loose_size / 2);
- printf("in-pack: %lu\n", packed);
- printf("prune-packable: %lu\n", packed_loose);
- printf("garbage: %lu\n", garbage);
- }
- else
- printf("%lu objects, %lu kilobytes\n",
- loose, loose_size / 2);
- return 0;
-}
};
static const char builtin_diff_usage[] =
-"diff <options> <rev>{0,2} -- <path>*";
+"git-diff <options> <rev>{0,2} -- <path>*";
static int builtin_diff_files(struct rev_info *revs,
int argc, const char **argv)
int argc, const char **argv,
struct blobinfo *blob)
{
- /* Blobs: the arguments are reversed when setup_revisions()
- * picked them up.
- */
unsigned mode = canon_mode(S_IFREG | 0644);
if (argc > 1)
stuff_change(&revs->diffopt,
mode, mode,
- blob[1].sha1, blob[0].sha1,
- blob[0].name, blob[0].name);
+ blob[0].sha1, blob[1].sha1,
+ blob[0].name, blob[1].name);
diffcore_std(&revs->diffopt);
diff_flush(&revs->diffopt);
return 0;
argc = setup_revisions(argc, argv, &rev, NULL);
if (!rev.diffopt.output_format) {
rev.diffopt.output_format = DIFF_FORMAT_PATCH;
- diff_setup_done(&rev.diffopt);
+ if (diff_setup_done(&rev.diffopt) < 0)
+ die("diff_setup_done failed");
}
/* Do we have --cached and not have a pending object, then
* A and B. We have ent[0] == merge-base, ent[1] == A,
* and ent[2] == B. Show diff between the base and B.
*/
+ ent[1] = ent[2];
return builtin_diff_tree(&rev, argc, argv, ent);
}
else
+#include "builtin.h"
#include "cache.h"
#include "commit.h"
#include "diff.h"
free_list(&subjects);
}
-int cmd_fmt_merge_msg(int argc, char **argv, const char *prefix)
+int cmd_fmt_merge_msg(int argc, const char **argv, const char *prefix)
{
int limit = 20, i = 0;
char line[1024];
struct grep_pat *pattern_list;
struct grep_pat **pattern_tail;
struct grep_expr *pattern_expression;
+ int prefix_length;
regex_t regexp;
unsigned linenum:1;
unsigned invert:1;
#define GREP_BINARY_TEXT 2
unsigned binary:2;
unsigned extended:1;
+ unsigned relative:1;
int regflags;
unsigned pre_context;
unsigned post_context;
static int match_one_pattern(struct grep_opt *opt, struct grep_pat *p, char *bol, char *eol)
{
int hit = 0;
+ int at_true_bol = 1;
regmatch_t pmatch[10];
+ again:
if (!opt->fixed) {
regex_t *exp = &p->regexp;
hit = !regexec(exp, bol, ARRAY_SIZE(pmatch),
}
if (hit && opt->word_regexp) {
- /* Match beginning must be either
- * beginning of the line, or at word
- * boundary (i.e. the last char must
- * not be alnum or underscore).
- */
if ((pmatch[0].rm_so < 0) ||
(eol - bol) <= pmatch[0].rm_so ||
(pmatch[0].rm_eo < 0) ||
(eol - bol) < pmatch[0].rm_eo)
die("regexp returned nonsense");
- if (pmatch[0].rm_so != 0 &&
- word_char(bol[pmatch[0].rm_so-1]))
- hit = 0;
- if (pmatch[0].rm_eo != (eol-bol) &&
- word_char(bol[pmatch[0].rm_eo]))
+
+ /* Match beginning must be either beginning of the
+ * line, or at word boundary (i.e. the last char must
+ * not be a word char). Similarly, match end must be
+ * either end of the line, or at word boundary
+ * (i.e. the next char must not be a word char).
+ */
+ if ( ((pmatch[0].rm_so == 0 && at_true_bol) ||
+ !word_char(bol[pmatch[0].rm_so-1])) &&
+ ((pmatch[0].rm_eo == (eol-bol)) ||
+ !word_char(bol[pmatch[0].rm_eo])) )
+ ;
+ else
hit = 0;
+
+ if (!hit && pmatch[0].rm_so + bol + 1 < eol) {
+ /* There could be more than one match on the
+ * line, and the first match might not be
+ * a strict word match. But later ones could be!
+ */
+ bol = pmatch[0].rm_so + bol + 1;
+ at_true_bol = 0;
+ goto again;
+ }
}
return hit;
}
return !!last_hit;
}
-static int grep_sha1(struct grep_opt *opt, const unsigned char *sha1, const char *name)
+static int grep_sha1(struct grep_opt *opt, const unsigned char *sha1, const char *name, int tree_name_len)
{
unsigned long size;
char *data;
char type[20];
+ char *to_free = NULL;
int hit;
+
data = read_sha1_file(sha1, type, &size);
if (!data) {
error("'%s': unable to read %s", name, sha1_to_hex(sha1));
return 0;
}
+ if (opt->relative && opt->prefix_length) {
+ static char name_buf[PATH_MAX];
+ char *cp;
+ int name_len = strlen(name) - opt->prefix_length + 1;
+
+ if (!tree_name_len)
+ name += opt->prefix_length;
+ else {
+ if (ARRAY_SIZE(name_buf) <= name_len)
+ cp = to_free = xmalloc(name_len);
+ else
+ cp = name_buf;
+ memcpy(cp, name, tree_name_len);
+ strcpy(cp + tree_name_len,
+ name + tree_name_len + opt->prefix_length);
+ name = cp;
+ }
+ }
hit = grep_buffer(opt, name, data, size);
free(data);
+ free(to_free);
return hit;
}
return 0;
}
close(i);
+ if (opt->relative && opt->prefix_length)
+ filename += opt->prefix_length;
i = grep_buffer(opt, filename, data, st.st_size);
free(data);
return i;
char *argptr = randarg;
struct grep_pat *p;
- if (opt->extended)
+ if (opt->extended || (opt->relative && opt->prefix_length))
return -1;
len = nr = 0;
push_arg("grep");
if (!pathspec_matches(paths, ce->name))
continue;
if (cached)
- hit |= grep_sha1(opt, ce->sha1, ce->name);
+ hit |= grep_sha1(opt, ce->sha1, ce->name, 0);
else
hit |= grep_file(opt, ce->name);
}
int hit = 0;
struct name_entry entry;
char *down;
- char *path_buf = xmalloc(PATH_MAX + strlen(tree_name) + 100);
+ int tn_len = strlen(tree_name);
+ char *path_buf = xmalloc(PATH_MAX + tn_len + 100);
- if (tree_name[0]) {
- int offset = sprintf(path_buf, "%s:", tree_name);
- down = path_buf + offset;
+ if (tn_len) {
+ tn_len = sprintf(path_buf, "%s:", tree_name);
+ down = path_buf + tn_len;
strcat(down, base);
}
else {
if (!pathspec_matches(paths, down))
;
else if (S_ISREG(entry.mode))
- hit |= grep_sha1(opt, entry.sha1, path_buf);
+ hit |= grep_sha1(opt, entry.sha1, path_buf, tn_len);
else if (S_ISDIR(entry.mode)) {
char type[20];
struct tree_desc sub;
struct object *obj, const char *name)
{
if (obj->type == OBJ_BLOB)
- return grep_sha1(opt, obj->sha1, name);
+ return grep_sha1(opt, obj->sha1, name, 0);
if (obj->type == OBJ_COMMIT || obj->type == OBJ_TREE) {
struct tree_desc tree;
void *data;
int i;
memset(&opt, 0, sizeof(opt));
+ opt.prefix_length = (prefix && *prefix) ? strlen(prefix) : 0;
+ opt.relative = 1;
opt.pattern_tail = &opt.pattern_list;
opt.regflags = REG_NEWLINE;
}
die(emsg_missing_argument, arg);
}
+ if (!strcmp("--full-name", arg)) {
+ opt.relative = 0;
+ continue;
+ }
if (!strcmp("--", arg)) {
/* later processing wants to have this at argv[1] */
argv--;
verify_filename(prefix, argv[j]);
}
- if (i < argc)
+ if (i < argc) {
paths = get_pathspec(prefix, argv + i);
+ if (opt.prefix_length && opt.relative) {
+ /* Make sure we do not get outside of paths */
+ for (i = 0; paths[i]; i++)
+ if (strncmp(prefix, paths[i], opt.prefix_length))
+ die("git-grep: cannot generate relative filenames containing '..'");
+ }
+ }
else if (prefix) {
paths = xcalloc(2, sizeof(const char *));
paths[0] = prefix;
+++ /dev/null
-/*
- * builtin-help.c
- *
- * Builtin help-related commands (help, usage, version)
- */
-#include <sys/ioctl.h>
-#include "cache.h"
-#include "builtin.h"
-#include "exec_cmd.h"
-#include "common-cmds.h"
-
-static const char git_usage[] =
- "Usage: git [--version] [--exec-path[=GIT_EXEC_PATH]] [--help] COMMAND [ ARGS ]";
-
-/* most GUI terminals set COLUMNS (although some don't export it) */
-static int term_columns(void)
-{
- char *col_string = getenv("COLUMNS");
- int n_cols = 0;
-
- if (col_string && (n_cols = atoi(col_string)) > 0)
- return n_cols;
-
-#ifdef TIOCGWINSZ
- {
- struct winsize ws;
- if (!ioctl(1, TIOCGWINSZ, &ws)) {
- if (ws.ws_col)
- return ws.ws_col;
- }
- }
-#endif
-
- return 80;
-}
-
-static void oom(void)
-{
- fprintf(stderr, "git: out of memory\n");
- exit(1);
-}
-
-static inline void mput_char(char c, unsigned int num)
-{
- while(num--)
- putchar(c);
-}
-
-static struct cmdname {
- size_t len;
- char name[1];
-} **cmdname;
-static int cmdname_alloc, cmdname_cnt;
-
-static void add_cmdname(const char *name, int len)
-{
- struct cmdname *ent;
- if (cmdname_alloc <= cmdname_cnt) {
- cmdname_alloc = cmdname_alloc + 200;
- cmdname = realloc(cmdname, cmdname_alloc * sizeof(*cmdname));
- if (!cmdname)
- oom();
- }
- ent = malloc(sizeof(*ent) + len);
- if (!ent)
- oom();
- ent->len = len;
- memcpy(ent->name, name, len);
- ent->name[len] = 0;
- cmdname[cmdname_cnt++] = ent;
-}
-
-static int cmdname_compare(const void *a_, const void *b_)
-{
- struct cmdname *a = *(struct cmdname **)a_;
- struct cmdname *b = *(struct cmdname **)b_;
- return strcmp(a->name, b->name);
-}
-
-static void pretty_print_string_list(struct cmdname **cmdname, int longest)
-{
- int cols = 1, rows;
- int space = longest + 1; /* min 1 SP between words */
- int max_cols = term_columns() - 1; /* don't print *on* the edge */
- int i, j;
-
- if (space < max_cols)
- cols = max_cols / space;
- rows = (cmdname_cnt + cols - 1) / cols;
-
- qsort(cmdname, cmdname_cnt, sizeof(*cmdname), cmdname_compare);
-
- for (i = 0; i < rows; i++) {
- printf(" ");
-
- for (j = 0; j < cols; j++) {
- int n = j * rows + i;
- int size = space;
- if (n >= cmdname_cnt)
- break;
- if (j == cols-1 || n + rows >= cmdname_cnt)
- size = 1;
- printf("%-*s", size, cmdname[n]->name);
- }
- putchar('\n');
- }
-}
-
-static void list_commands(const char *exec_path, const char *pattern)
-{
- unsigned int longest = 0;
- char path[PATH_MAX];
- int dirlen;
- DIR *dir = opendir(exec_path);
- struct dirent *de;
-
- if (!dir) {
- fprintf(stderr, "git: '%s': %s\n", exec_path, strerror(errno));
- exit(1);
- }
-
- dirlen = strlen(exec_path);
- if (PATH_MAX - 20 < dirlen) {
- fprintf(stderr, "git: insanely long exec-path '%s'\n",
- exec_path);
- exit(1);
- }
-
- memcpy(path, exec_path, dirlen);
- path[dirlen++] = '/';
-
- while ((de = readdir(dir)) != NULL) {
- struct stat st;
- int entlen;
-
- if (strncmp(de->d_name, "git-", 4))
- continue;
- strcpy(path+dirlen, de->d_name);
- if (stat(path, &st) || /* stat, not lstat */
- !S_ISREG(st.st_mode) ||
- !(st.st_mode & S_IXUSR))
- continue;
-
- entlen = strlen(de->d_name);
- if (4 < entlen && !strcmp(de->d_name + entlen - 4, ".exe"))
- entlen -= 4;
-
- if (longest < entlen)
- longest = entlen;
-
- add_cmdname(de->d_name + 4, entlen-4);
- }
- closedir(dir);
-
- printf("git commands available in '%s'\n", exec_path);
- printf("----------------------------");
- mput_char('-', strlen(exec_path));
- putchar('\n');
- pretty_print_string_list(cmdname, longest - 4);
- putchar('\n');
-}
-
-static void list_common_cmds_help(void)
-{
- int i, longest = 0;
-
- for (i = 0; i < ARRAY_SIZE(common_cmds); i++) {
- if (longest < strlen(common_cmds[i].name))
- longest = strlen(common_cmds[i].name);
- }
-
- puts("The most commonly used git commands are:");
- for (i = 0; i < ARRAY_SIZE(common_cmds); i++) {
- printf(" %s", common_cmds[i].name);
- mput_char(' ', longest - strlen(common_cmds[i].name) + 4);
- puts(common_cmds[i].help);
- }
- puts("(use 'git help -a' to get a list of all installed git commands)");
-}
-
-void cmd_usage(int show_all, const char *exec_path, const char *fmt, ...)
-{
- if (fmt) {
- va_list ap;
-
- va_start(ap, fmt);
- printf("git: ");
- vprintf(fmt, ap);
- va_end(ap);
- putchar('\n');
- }
- else
- puts(git_usage);
-
- if (exec_path) {
- putchar('\n');
- if (show_all)
- list_commands(exec_path, "git-*");
- else
- list_common_cmds_help();
- }
-
- exit(1);
-}
-
-static void show_man_page(const char *git_cmd)
-{
- const char *page;
-
- if (!strncmp(git_cmd, "git", 3))
- page = git_cmd;
- else {
- int page_len = strlen(git_cmd) + 4;
- char *p = malloc(page_len + 1);
- strcpy(p, "git-");
- strcpy(p + 4, git_cmd);
- p[page_len] = 0;
- page = p;
- }
-
- execlp("man", "man", page, NULL);
-}
-
-int cmd_version(int argc, const char **argv, const char *prefix)
-{
- printf("git version %s\n", git_version_string);
- return 0;
-}
-
-int cmd_help(int argc, const char **argv, const char *prefix)
-{
- const char *help_cmd = argc > 1 ? argv[1] : NULL;
- if (!help_cmd)
- cmd_usage(0, git_exec_path(), NULL);
- else if (!strcmp(help_cmd, "--all") || !strcmp(help_cmd, "-a"))
- cmd_usage(1, git_exec_path(), NULL);
- else
- show_man_page(help_cmd);
- return 0;
-}
-
-
else if (!strncmp(arg, "--shared=", 9))
shared_repository = git_config_perm("arg", arg+9);
else
- die(init_db_usage);
+ usage(init_db_usage);
}
/*
struct commit *commit;
prepare_revision_walk(rev);
- setup_pager();
while ((commit = get_revision(rev)) != NULL) {
log_tree_commit(rev, commit);
free(commit->buffer);
char message_id[1024];
char ref_message_id[1024];
+ setup_ident();
git_config(git_format_config);
init_revisions(&rev, prefix);
rev.commit_format = CMIT_FMT_EMAIL;
!strcmp(argv[i], "-s")) {
const char *committer;
const char *endpos;
- setup_ident();
committer = git_committer_info(1);
endpos = strchr(committer, '>');
if (!endpos)
static int line_terminator = '\n';
static int prefix_len = 0, prefix_offset = 0;
-static const char *prefix = NULL;
static const char **pathspec = NULL;
static int error_unmatch = 0;
static char *ps_matched = NULL;
}
}
-static void show_files(struct dir_struct *dir)
+static void show_files(struct dir_struct *dir, const char *prefix)
{
int i;
/*
* Prune the index to only contain stuff starting with "prefix"
*/
-static void prune_cache(void)
+static void prune_cache(const char *prefix)
{
int pos = cache_name_pos(prefix, prefix_len);
unsigned int first, last;
active_nr = last;
}
-static void verify_pathspec(void)
+static const char *verify_pathspec(const char *prefix)
{
const char **p, *n, *prev;
char *real_prefix;
memcpy(real_prefix, prev, max);
real_prefix[max] = 0;
}
- prefix = real_prefix;
+ return real_prefix;
}
static const char ls_files_usage[] =
/* Verify that the pathspec matches the prefix */
if (pathspec)
- verify_pathspec();
+ prefix = verify_pathspec(prefix);
/* Treat unmatching pathspec elements as errors */
if (pathspec && error_unmatch) {
read_cache();
if (prefix)
- prune_cache();
- show_files(&dir);
+ prune_cache(prefix);
+ show_files(&dir, prefix);
if (ps_matched) {
/* We need to make sure all pathspec matched otherwise
if (path[len - 1] != '/') {
char *with_slash = xmalloc(len + 2);
memcpy(with_slash, path, len);
- strcat(with_slash + len, "/");
+ with_slash[len++] = '/';
+ with_slash[len] = 0;
return with_slash;
}
return path;
git_config(git_default_config);
- newfd = hold_lock_file_for_update(&lock_file, get_index_file());
- if (newfd < 0)
- die("unable to create new index file");
-
+ newfd = hold_lock_file_for_update(&lock_file, get_index_file(), 1);
if (read_cache() < 0)
die("index file corrupt");
ignore_errors = 1;
continue;
}
- die(builtin_mv_usage);
+ usage(builtin_mv_usage);
}
count = argc - i - 1;
if (count < 1)
--- /dev/null
+#include <stdlib.h>
+#include "builtin.h"
+#include "cache.h"
+#include "commit.h"
+#include "tag.h"
+#include "refs.h"
+
+static const char name_rev_usage[] =
+ "git-name-rev [--tags] ( --all | --stdin | committish [committish...] )\n";
+
+typedef struct rev_name {
+ const char *tip_name;
+ int merge_traversals;
+ int generation;
+} rev_name;
+
+static long cutoff = LONG_MAX;
+
+static void name_rev(struct commit *commit,
+ const char *tip_name, int merge_traversals, int generation,
+ int deref)
+{
+ struct rev_name *name = (struct rev_name *)commit->util;
+ struct commit_list *parents;
+ int parent_number = 1;
+
+ if (!commit->object.parsed)
+ parse_commit(commit);
+
+ if (commit->date < cutoff)
+ return;
+
+ if (deref) {
+ char *new_name = xmalloc(strlen(tip_name)+3);
+ strcpy(new_name, tip_name);
+ strcat(new_name, "^0");
+ tip_name = new_name;
+
+ if (generation)
+ die("generation: %d, but deref?", generation);
+ }
+
+ if (name == NULL) {
+ name = xmalloc(sizeof(rev_name));
+ commit->util = name;
+ goto copy_data;
+ } else if (name->merge_traversals > merge_traversals ||
+ (name->merge_traversals == merge_traversals &&
+ name->generation > generation)) {
+copy_data:
+ name->tip_name = tip_name;
+ name->merge_traversals = merge_traversals;
+ name->generation = generation;
+ } else
+ return;
+
+ for (parents = commit->parents;
+ parents;
+ parents = parents->next, parent_number++) {
+ if (parent_number > 1) {
+ char *new_name = xmalloc(strlen(tip_name)+8);
+
+ if (generation > 0)
+ sprintf(new_name, "%s~%d^%d", tip_name,
+ generation, parent_number);
+ else
+ sprintf(new_name, "%s^%d", tip_name, parent_number);
+
+ name_rev(parents->item, new_name,
+ merge_traversals + 1 , 0, 0);
+ } else {
+ name_rev(parents->item, tip_name, merge_traversals,
+ generation + 1, 0);
+ }
+ }
+}
+
+static int tags_only = 0;
+
+static int name_ref(const char *path, const unsigned char *sha1)
+{
+ struct object *o = parse_object(sha1);
+ int deref = 0;
+
+ if (tags_only && strncmp(path, "refs/tags/", 10))
+ return 0;
+
+ while (o && o->type == OBJ_TAG) {
+ struct tag *t = (struct tag *) o;
+ if (!t->tagged)
+ break; /* broken repository */
+ o = parse_object(t->tagged->sha1);
+ deref = 1;
+ }
+ if (o && o->type == OBJ_COMMIT) {
+ struct commit *commit = (struct commit *)o;
+
+ if (!strncmp(path, "refs/heads/", 11))
+ path = path + 11;
+ else if (!strncmp(path, "refs/", 5))
+ path = path + 5;
+
+ name_rev(commit, strdup(path), 0, 0, deref);
+ }
+ return 0;
+}
+
+/* returns a static buffer */
+static const char* get_rev_name(struct object *o)
+{
+ static char buffer[1024];
+ struct rev_name *n;
+ struct commit *c;
+
+ if (o->type != OBJ_COMMIT)
+ return "undefined";
+ c = (struct commit *) o;
+ n = c->util;
+ if (!n)
+ return "undefined";
+
+ if (!n->generation)
+ return n->tip_name;
+
+ snprintf(buffer, sizeof(buffer), "%s~%d", n->tip_name, n->generation);
+
+ return buffer;
+}
+
+int cmd_name_rev(int argc, const char **argv, const char *prefix)
+{
+ struct object_array revs = { 0, 0, NULL };
+ int as_is = 0, all = 0, transform_stdin = 0;
+
+ git_config(git_default_config);
+
+ if (argc < 2)
+ usage(name_rev_usage);
+
+ for (--argc, ++argv; argc; --argc, ++argv) {
+ unsigned char sha1[20];
+ struct object *o;
+ struct commit *commit;
+
+ if (!as_is && (*argv)[0] == '-') {
+ if (!strcmp(*argv, "--")) {
+ as_is = 1;
+ continue;
+ } else if (!strcmp(*argv, "--tags")) {
+ tags_only = 1;
+ continue;
+ } else if (!strcmp(*argv, "--all")) {
+ if (argc > 1)
+ die("Specify either a list, or --all, not both!");
+ all = 1;
+ cutoff = 0;
+ continue;
+ } else if (!strcmp(*argv, "--stdin")) {
+ if (argc > 1)
+ die("Specify either a list, or --stdin, not both!");
+ transform_stdin = 1;
+ cutoff = 0;
+ continue;
+ }
+ usage(name_rev_usage);
+ }
+
+ if (get_sha1(*argv, sha1)) {
+ fprintf(stderr, "Could not get sha1 for %s. Skipping.\n",
+ *argv);
+ continue;
+ }
+
+ o = deref_tag(parse_object(sha1), *argv, 0);
+ if (!o || o->type != OBJ_COMMIT) {
+ fprintf(stderr, "Could not get commit for %s. Skipping.\n",
+ *argv);
+ continue;
+ }
+
+ commit = (struct commit *)o;
+
+ if (cutoff > commit->date)
+ cutoff = commit->date;
+
+ add_object_array((struct object *)commit, *argv, &revs);
+ }
+
+ for_each_ref(name_ref);
+
+ if (transform_stdin) {
+ char buffer[2048];
+ char *p, *p_start;
+
+ while (!feof(stdin)) {
+ int forty = 0;
+ p = fgets(buffer, sizeof(buffer), stdin);
+ if (!p)
+ break;
+
+ for (p_start = p; *p; p++) {
+#define ishex(x) (isdigit((x)) || ((x) >= 'a' && (x) <= 'f'))
+ if (!ishex(*p))
+ forty = 0;
+ else if (++forty == 40 &&
+ !ishex(*(p+1))) {
+ unsigned char sha1[40];
+ const char *name = "undefined";
+ char c = *(p+1);
+
+ forty = 0;
+
+ *(p+1) = 0;
+ if (!get_sha1(p - 39, sha1)) {
+ struct object *o =
+ lookup_object(sha1);
+ if (o)
+ name = get_rev_name(o);
+ }
+ *(p+1) = c;
+
+ if (!strcmp(name, "undefined"))
+ continue;
+
+ fwrite(p_start, p - p_start + 1, 1,
+ stdout);
+ printf(" (%s)", name);
+ p_start = p + 1;
+ }
+ }
+
+ /* flush */
+ if (p_start != p)
+ fwrite(p_start, p - p_start, 1, stdout);
+ }
+ } else if (all) {
+ int i, max;
+
+ max = get_max_object_index();
+ for (i = 0; i < max; i++) {
+ struct object * obj = get_indexed_object(i);
+ if (!obj)
+ continue;
+ printf("%s %s\n", sha1_to_hex(obj->sha1), get_rev_name(obj));
+ }
+ } else {
+ int i;
+ for (i = 0; i < revs.nr; i++)
+ printf("%s %s\n",
+ revs.objects[i].name,
+ get_rev_name(revs.objects[i].item));
+ }
+
+ return 0;
+}
+
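+/* Example invocations (illustrative; cf. name_rev_usage above):
+ *
+ *	git-name-rev --tags HEAD
+ *	git-log | git-name-rev --stdin
+ *
+ * The first form prints each argument followed by its computed name; the
+ * second annotates any 40-hex object name found on stdin with " (<name>)".
+ */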
--- /dev/null
+#include "builtin.h"
+#include "cache.h"
+#include "object.h"
+#include "blob.h"
+#include "commit.h"
+#include "tag.h"
+#include "tree.h"
+#include "delta.h"
+#include "pack.h"
+#include "csum-file.h"
+#include "tree-walk.h"
+#include <sys/time.h>
+#include <signal.h>
+
+static const char pack_usage[] = "git-pack-objects [-q] [--no-reuse-delta] [--non-empty] [--local] [--incremental] [--window=N] [--depth=N] {--stdout | base-name} < object-list";
+
+struct object_entry {
+ unsigned char sha1[20];
+ unsigned long size; /* uncompressed size */
+ unsigned long offset; /* offset into the final pack file;
+ * nonzero if already written.
+ */
+ unsigned int depth; /* delta depth */
+ unsigned int delta_limit; /* base adjustment for in-pack delta */
+ unsigned int hash; /* name hint hash */
+ enum object_type type;
+ enum object_type in_pack_type; /* could be delta */
+ unsigned long delta_size; /* delta data size (uncompressed) */
+ struct object_entry *delta; /* delta base object */
+ struct packed_git *in_pack; /* already in pack */
+ unsigned int in_pack_offset;
+ struct object_entry *delta_child; /* deltified objects that use me as a base */
+ struct object_entry *delta_sibling; /* other deltified objects that
+ * use the same base as me
+ */
+ int preferred_base; /* we do not pack this, but it is encouraged
+ * to be used as the base object to delta
+ * huge objects against.
+ */
+};
+
+/*
+ * Objects we are going to pack are collected in the objects array (dynamically
+ * expanded). nr_objects & nr_alloc control this array. They are stored
+ * in the order we see them -- typically the rev-list --objects order, which
+ * gives us a nice "minimum seek" order.
+ *
+ * sorted-by-sha and sorted-by-type are arrays of pointers that point at
+ * elements in the objects array. The former is used to build the pack
+ * index (it lists object names in ascending order to help offset lookup),
+ * and the latter is used to group similar things together by try_delta()
+ * heuristics.
+ */
+
+static unsigned char object_list_sha1[20];
+static int non_empty = 0;
+static int no_reuse_delta = 0;
+static int local = 0;
+static int incremental = 0;
+static struct object_entry **sorted_by_sha, **sorted_by_type;
+static struct object_entry *objects = NULL;
+static int nr_objects = 0, nr_alloc = 0, nr_result = 0;
+static const char *base_name;
+static unsigned char pack_file_sha1[20];
+static int progress = 1;
+static volatile sig_atomic_t progress_update = 0;
+static int window = 10;
+
+/*
+ * The object names in the objects array are hashed with this hashtable,
+ * to help look up entries by object name. Binary search from
+ * sorted_by_sha is also possible, but this was easier to code and faster.
+ * This hashtable is built after all the objects are seen.
+ */
+static int *object_ix = NULL;
+static int object_ix_hashsz = 0;
+
+/*
+ * The pack index for an existing pack gives us easy access to the offsets into
+ * the corresponding pack file where each object's data starts, but the entries
+ * do not store the size of the compressed representation (the uncompressed
+ * size is easily available by examining the pack entry header). We build
+ * a hashtable of existing packs (pack_revindex), and keep the reverse index
+ * here -- the pack index file is sorted by object name, mapping to offset; this
+ * pack_revindex[].revindex array is an ordered list of offsets, so if you
+ * know the offset of an object, the next offset is where its packed
+ * representation ends.
+ */
+struct pack_revindex {
+ struct packed_git *p;
+ unsigned long *revindex;
+} *pack_revindex = NULL;
+static int pack_revindex_hashsz = 0;
+
+/*
+ * stats
+ */
+static int written = 0;
+static int written_delta = 0;
+static int reused = 0;
+static int reused_delta = 0;
+
+static int pack_revindex_ix(struct packed_git *p)
+{
+ unsigned long ui = (unsigned long)p;
+ int i;
+
+ ui = ui ^ (ui >> 16); /* defeat structure alignment */
+ i = (int)(ui % pack_revindex_hashsz);
+ while (pack_revindex[i].p) {
+ if (pack_revindex[i].p == p)
+ return i;
+ if (++i == pack_revindex_hashsz)
+ i = 0;
+ }
+ return -1 - i;
+}
+
+static void prepare_pack_ix(void)
+{
+ int num;
+ struct packed_git *p;
+ for (num = 0, p = packed_git; p; p = p->next)
+ num++;
+ if (!num)
+ return;
+ pack_revindex_hashsz = num * 11;
+ pack_revindex = xcalloc(sizeof(*pack_revindex), pack_revindex_hashsz);
+ for (p = packed_git; p; p = p->next) {
+ num = pack_revindex_ix(p);
+ num = - 1 - num;
+ pack_revindex[num].p = p;
+ }
+ /* revindex elements are lazily initialized */
+}
+
+static int cmp_offset(const void *a_, const void *b_)
+{
+ unsigned long a = *(unsigned long *) a_;
+ unsigned long b = *(unsigned long *) b_;
+ if (a < b)
+ return -1;
+ else if (a == b)
+ return 0;
+ else
+ return 1;
+}
+
+/*
+ * Ordered list of offsets of objects in the pack.
+ */
+static void prepare_pack_revindex(struct pack_revindex *rix)
+{
+ struct packed_git *p = rix->p;
+ int num_ent = num_packed_objects(p);
+ int i;
+ void *index = p->index_base + 256;
+
+ rix->revindex = xmalloc(sizeof(unsigned long) * (num_ent + 1));
+ for (i = 0; i < num_ent; i++) {
+ unsigned int hl = *((unsigned int *)((char *) index + 24*i));
+ rix->revindex[i] = ntohl(hl);
+ }
+ /* This knows the pack format -- the 20-byte trailer
+ * follows immediately after the last object data.
+ */
+ rix->revindex[num_ent] = p->pack_size - 20;
+ qsort(rix->revindex, num_ent, sizeof(unsigned long), cmp_offset);
+}
+
+static unsigned long find_packed_object_size(struct packed_git *p,
+ unsigned long ofs)
+{
+ int num;
+ int lo, hi;
+ struct pack_revindex *rix;
+ unsigned long *revindex;
+ num = pack_revindex_ix(p);
+ if (num < 0)
+ die("internal error: pack revindex uninitialized");
+ rix = &pack_revindex[num];
+ if (!rix->revindex)
+ prepare_pack_revindex(rix);
+ revindex = rix->revindex;
+ lo = 0;
+ hi = num_packed_objects(p) + 1;
+ do {
+ int mi = (lo + hi) / 2;
+ if (revindex[mi] == ofs) {
+ return revindex[mi+1] - ofs;
+ }
+ else if (ofs < revindex[mi])
+ hi = mi;
+ else
+ lo = mi + 1;
+ } while (lo < hi);
+ die("internal error: pack revindex corrupt");
+}
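+
+/* Illustrative example with made-up offsets: if a pack's revindex is
+ * { 12, 496, pack_size - 20 }, then the object starting at offset 12
+ * occupies 496 - 12 bytes of packed data, and the object at offset 496
+ * runs up to the 20-byte SHA1 trailer at the end of the pack.
+ */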
+
+static void *delta_against(void *buf, unsigned long size, struct object_entry *entry)
+{
+ unsigned long othersize, delta_size;
+ char type[10];
+ void *otherbuf = read_sha1_file(entry->delta->sha1, type, &othersize);
+ void *delta_buf;
+
+ if (!otherbuf)
+ die("unable to read %s", sha1_to_hex(entry->delta->sha1));
+ delta_buf = diff_delta(otherbuf, othersize,
+ buf, size, &delta_size, 0);
+ if (!delta_buf || delta_size != entry->delta_size)
+ die("delta size changed");
+ free(buf);
+ free(otherbuf);
+ return delta_buf;
+}
+
+/*
+ * The per-object header is a pretty dense thing, which is
+ * - first byte: low four bits are "size", then three bits of "type",
+ * and the high bit is "size continues".
+ * - each byte afterwards: low seven bits are size continuation,
+ * with the high bit being "size continues"
+ */
+static int encode_header(enum object_type type, unsigned long size, unsigned char *hdr)
+{
+ int n = 1;
+ unsigned char c;
+
+ if (type < OBJ_COMMIT || type > OBJ_DELTA)
+ die("bad type %d", type);
+
+ c = (type << 4) | (size & 15);
+ size >>= 4;
+ while (size) {
+ *hdr++ = c | 0x80;
+ c = size & 0x7f;
+ size >>= 7;
+ n++;
+ }
+ *hdr = c;
+ return n;
+}
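+
+/* Worked example (illustrative; assumes OBJ_COMMIT == 1 as in this tree):
+ * a commit of size 1000 (0x3e8) encodes as two bytes.  The first byte is
+ * (1 << 4) | (0x3e8 & 15) = 0x18, emitted with the continuation bit set
+ * as 0x98; the remaining size bits 0x3e8 >> 4 = 0x3e form the second and
+ * final byte.  git-unpack-objects reverses this encoding.
+ */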
+
+static unsigned long write_object(struct sha1file *f,
+ struct object_entry *entry)
+{
+ unsigned long size;
+ char type[10];
+ void *buf;
+ unsigned char header[10];
+ unsigned hdrlen, datalen;
+ enum object_type obj_type;
+ int to_reuse = 0;
+
+ if (entry->preferred_base)
+ return 0;
+
+ obj_type = entry->type;
+ if (! entry->in_pack)
+ to_reuse = 0; /* can't reuse what we don't have */
+ else if (obj_type == OBJ_DELTA)
+ to_reuse = 1; /* check_object() decided it for us */
+ else if (obj_type != entry->in_pack_type)
+ to_reuse = 0; /* pack has delta which is unusable */
+ else if (entry->delta)
+ to_reuse = 0; /* we want to pack afresh */
+ else
+ to_reuse = 1; /* we have it in-pack undeltified,
+ * and we do not need to deltify it.
+ */
+
+ if (! to_reuse) {
+ buf = read_sha1_file(entry->sha1, type, &size);
+ if (!buf)
+ die("unable to read %s", sha1_to_hex(entry->sha1));
+ if (size != entry->size)
+ die("object %s size inconsistency (%lu vs %lu)",
+ sha1_to_hex(entry->sha1), size, entry->size);
+ if (entry->delta) {
+ buf = delta_against(buf, size, entry);
+ size = entry->delta_size;
+ obj_type = OBJ_DELTA;
+ }
+ /*
+ * The object header is a byte of 'type' followed by zero or
+ * more bytes of length. For deltas, the 20 bytes of the base
+ * object's sha1 follow that.
+ */
+ hdrlen = encode_header(obj_type, size, header);
+ sha1write(f, header, hdrlen);
+
+ if (entry->delta) {
+ sha1write(f, entry->delta, 20);
+ hdrlen += 20;
+ }
+ datalen = sha1write_compressed(f, buf, size);
+ free(buf);
+ }
+ else {
+ struct packed_git *p = entry->in_pack;
+ use_packed_git(p);
+
+ datalen = find_packed_object_size(p, entry->in_pack_offset);
+ buf = (char *) p->pack_base + entry->in_pack_offset;
+ sha1write(f, buf, datalen);
+ unuse_packed_git(p);
+ hdrlen = 0; /* not really */
+ if (obj_type == OBJ_DELTA)
+ reused_delta++;
+ reused++;
+ }
+ if (obj_type == OBJ_DELTA)
+ written_delta++;
+ written++;
+ return hdrlen + datalen;
+}
+
+static unsigned long write_one(struct sha1file *f,
+ struct object_entry *e,
+ unsigned long offset)
+{
+ if (e->offset)
+ /* offset starts from header size and cannot be zero
+ * if it is written already.
+ */
+ return offset;
+ e->offset = offset;
+ offset += write_object(f, e);
+ /* if we are deltified, write out its base object. */
+ if (e->delta)
+ offset = write_one(f, e->delta, offset);
+ return offset;
+}
+
+static void write_pack_file(void)
+{
+ int i;
+ struct sha1file *f;
+ unsigned long offset;
+ struct pack_header hdr;
+ unsigned last_percent = 999;
+ int do_progress = 0;
+
+ if (!base_name)
+ f = sha1fd(1, "<stdout>");
+ else {
+ f = sha1create("%s-%s.%s", base_name,
+ sha1_to_hex(object_list_sha1), "pack");
+ do_progress = progress;
+ }
+ if (do_progress)
+ fprintf(stderr, "Writing %d objects.\n", nr_result);
+
+ hdr.hdr_signature = htonl(PACK_SIGNATURE);
+ hdr.hdr_version = htonl(PACK_VERSION);
+ hdr.hdr_entries = htonl(nr_result);
+ sha1write(f, &hdr, sizeof(hdr));
+ offset = sizeof(hdr);
+ if (!nr_result)
+ goto done;
+ for (i = 0; i < nr_objects; i++) {
+ offset = write_one(f, objects + i, offset);
+ if (do_progress) {
+ unsigned percent = written * 100 / nr_result;
+ if (progress_update || percent != last_percent) {
+ fprintf(stderr, "%4u%% (%u/%u) done\r",
+ percent, written, nr_result);
+ progress_update = 0;
+ last_percent = percent;
+ }
+ }
+ }
+ if (do_progress)
+ fputc('\n', stderr);
+ done:
+ sha1close(f, pack_file_sha1, 1);
+}
+
+static void write_index_file(void)
+{
+ int i;
+ struct sha1file *f = sha1create("%s-%s.%s", base_name,
+ sha1_to_hex(object_list_sha1), "idx");
+ struct object_entry **list = sorted_by_sha;
+ struct object_entry **last = list + nr_result;
+ unsigned int array[256];
+
+ /*
+ * Write the first-level table (the list is sorted,
+ * but we use a 256-entry lookup to be able to avoid
+ * having to do eight extra binary search iterations).
+ */
+ for (i = 0; i < 256; i++) {
+ struct object_entry **next = list;
+ while (next < last) {
+ struct object_entry *entry = *next;
+ if (entry->sha1[0] != i)
+ break;
+ next++;
+ }
+ array[i] = htonl(next - sorted_by_sha);
+ list = next;
+ }
+ sha1write(f, array, 256 * sizeof(int));
+
+ /*
+ * Write the actual SHA1 entries..
+ */
+ list = sorted_by_sha;
+ for (i = 0; i < nr_result; i++) {
+ struct object_entry *entry = *list++;
+ unsigned int offset = htonl(entry->offset);
+ sha1write(f, &offset, 4);
+ sha1write(f, entry->sha1, 20);
+ }
+ sha1write(f, pack_file_sha1, 20);
+ sha1close(f, NULL, 1);
+}
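+
+/* Fan-out example (illustrative): with three objects whose first SHA1
+ * bytes are 0x00, 0x00 and 0x17, the table holds array[0x00..0x16] = 2
+ * and array[0x17..0xff] = 3, i.e. array[i] counts objects whose first
+ * byte is <= i, so a reader can narrow its binary search to one bucket.
+ */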
+
+static int locate_object_entry_hash(const unsigned char *sha1)
+{
+ int i;
+ unsigned int ui;
+ memcpy(&ui, sha1, sizeof(unsigned int));
+ i = ui % object_ix_hashsz;
+ while (0 < object_ix[i]) {
+ if (!memcmp(sha1, objects[object_ix[i]-1].sha1, 20))
+ return i;
+ if (++i == object_ix_hashsz)
+ i = 0;
+ }
+ return -1 - i;
+}
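+
+/* Return-value convention (illustrative): a result of -5 means the name
+ * was not found and slot -1 - (-5) == 4 is the free slot where it would
+ * be inserted; add_object_entry() and rehash_objects() rely on this.
+ */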
+
+static struct object_entry *locate_object_entry(const unsigned char *sha1)
+{
+ int i;
+
+ if (!object_ix_hashsz)
+ return NULL;
+
+ i = locate_object_entry_hash(sha1);
+ if (0 <= i)
+ return &objects[object_ix[i]-1];
+ return NULL;
+}
+
+static void rehash_objects(void)
+{
+ int i;
+ struct object_entry *oe;
+
+ object_ix_hashsz = nr_objects * 3;
+ if (object_ix_hashsz < 1024)
+ object_ix_hashsz = 1024;
+ object_ix = xrealloc(object_ix, sizeof(int) * object_ix_hashsz);
+ memset(object_ix, 0, sizeof(int) * object_ix_hashsz);
+ for (i = 0, oe = objects; i < nr_objects; i++, oe++) {
+ int ix = locate_object_entry_hash(oe->sha1);
+ if (0 <= ix)
+ continue;
+ ix = -1 - ix;
+ object_ix[ix] = i + 1;
+ }
+}
+
+static unsigned name_hash(const char *name)
+{
+ unsigned char c;
+ unsigned hash = 0;
+
+ /*
+ * This effectively just creates a sortable number from the
+ * last sixteen non-whitespace characters. Last characters
+ * count "most", so things that end in ".c" sort together.
+ */
+ while ((c = *name++) != 0) {
+ if (isspace(c))
+ continue;
+ hash = (hash >> 2) + (c << 24);
+ }
+ return hash;
+}
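+
+/* Worked example (illustrative): for "a.c" the loop computes
+ *	'a': hash = 0x61000000
+ *	'.': hash = 0x18400000 + 0x2e000000 = 0x46400000
+ *	'c': hash = 0x11900000 + 0x63000000 = 0x74900000
+ * while "b.c" ends up at 0x74a00000, so names ending in ".c" cluster
+ * together, which is what type_size_sort() wants.
+ */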
+
+static int add_object_entry(const unsigned char *sha1, unsigned hash, int exclude)
+{
+ unsigned int idx = nr_objects;
+ struct object_entry *entry;
+ struct packed_git *p;
+ unsigned int found_offset = 0;
+ struct packed_git *found_pack = NULL;
+ int ix, status = 0;
+
+ if (!exclude) {
+ for (p = packed_git; p; p = p->next) {
+ struct pack_entry e;
+ if (find_pack_entry_one(sha1, &e, p)) {
+ if (incremental)
+ return 0;
+ if (local && !p->pack_local)
+ return 0;
+ if (!found_pack) {
+ found_offset = e.offset;
+ found_pack = e.p;
+ }
+ }
+ }
+ }
+ if ((entry = locate_object_entry(sha1)) != NULL)
+ goto already_added;
+
+ if (idx >= nr_alloc) {
+ unsigned int needed = (idx + 1024) * 3 / 2;
+ objects = xrealloc(objects, needed * sizeof(*entry));
+ nr_alloc = needed;
+ }
+ entry = objects + idx;
+ nr_objects = idx + 1;
+ memset(entry, 0, sizeof(*entry));
+ memcpy(entry->sha1, sha1, 20);
+ entry->hash = hash;
+
+ if (object_ix_hashsz * 3 <= nr_objects * 4)
+ rehash_objects();
+ else {
+ ix = locate_object_entry_hash(entry->sha1);
+ if (0 <= ix)
+ die("internal error in object hashing.");
+ object_ix[-1 - ix] = idx + 1;
+ }
+ status = 1;
+
+ already_added:
+ if (progress_update) {
+ fprintf(stderr, "Counting objects...%d\r", nr_objects);
+ progress_update = 0;
+ }
+ if (exclude)
+ entry->preferred_base = 1;
+ else {
+ if (found_pack) {
+ entry->in_pack = found_pack;
+ entry->in_pack_offset = found_offset;
+ }
+ }
+ return status;
+}
+
+struct pbase_tree_cache {
+ unsigned char sha1[20];
+ int ref;
+ int temporary;
+ void *tree_data;
+ unsigned long tree_size;
+};
+
+static struct pbase_tree_cache *(pbase_tree_cache[256]);
+static int pbase_tree_cache_ix(const unsigned char *sha1)
+{
+ return sha1[0] % ARRAY_SIZE(pbase_tree_cache);
+}
+static int pbase_tree_cache_ix_incr(int ix)
+{
+ return (ix+1) % ARRAY_SIZE(pbase_tree_cache);
+}
+
+static struct pbase_tree {
+ struct pbase_tree *next;
+ /* This is a phony "cache" entry; we are not
+ * going to evict it nor find it through _get()
+ * mechanism -- this is for the toplevel node that
+ * would almost always change with any commit.
+ */
+ struct pbase_tree_cache pcache;
+} *pbase_tree;
+
+static struct pbase_tree_cache *pbase_tree_get(const unsigned char *sha1)
+{
+ struct pbase_tree_cache *ent, *nent;
+ void *data;
+ unsigned long size;
+ char type[20];
+ int neigh;
+ int my_ix = pbase_tree_cache_ix(sha1);
+ int available_ix = -1;
+
+ /* pbase_tree_cache acts as a limited hashtable.
+ * If an object is cached, it will be found at its index or
+ * within a few slots after it.
+ */
+ for (neigh = 0; neigh < 8; neigh++) {
+ ent = pbase_tree_cache[my_ix];
+ if (ent && !memcmp(ent->sha1, sha1, 20)) {
+ ent->ref++;
+ return ent;
+ }
+ else if (((available_ix < 0) && (!ent || !ent->ref)) ||
+ ((0 <= available_ix) &&
+ (!ent && pbase_tree_cache[available_ix])))
+ available_ix = my_ix;
+ if (!ent)
+ break;
+ my_ix = pbase_tree_cache_ix_incr(my_ix);
+ }
+
+ /* Did not find one. Either we got a bogus request or
+ * we need to read and perhaps cache.
+ */
+ data = read_sha1_file(sha1, type, &size);
+ if (!data)
+ return NULL;
+ if (strcmp(type, tree_type)) {
+ free(data);
+ return NULL;
+ }
+
+ /* We need to either cache or return a throwaway copy */
+
+ if (available_ix < 0)
+ ent = NULL;
+ else {
+ ent = pbase_tree_cache[available_ix];
+ my_ix = available_ix;
+ }
+
+ if (!ent) {
+ nent = xmalloc(sizeof(*nent));
+ nent->temporary = (available_ix < 0);
+ }
+ else {
+ /* evict and reuse */
+ free(ent->tree_data);
+ nent = ent;
+ }
+ memcpy(nent->sha1, sha1, 20);
+ nent->tree_data = data;
+ nent->tree_size = size;
+ nent->ref = 1;
+ if (!nent->temporary)
+ pbase_tree_cache[my_ix] = nent;
+ return nent;
+}
+
+static void pbase_tree_put(struct pbase_tree_cache *cache)
+{
+ if (!cache->temporary) {
+ cache->ref--;
+ return;
+ }
+ free(cache->tree_data);
+ free(cache);
+}
+
+static int name_cmp_len(const char *name)
+{
+ int i;
+ for (i = 0; name[i] && name[i] != '\n' && name[i] != '/'; i++)
+ ;
+ return i;
+}
+
+static void add_pbase_object(struct tree_desc *tree,
+ const char *name,
+ int cmplen,
+ const char *fullname)
+{
+ struct name_entry entry;
+
+ while (tree_entry(tree,&entry)) {
+ unsigned long size;
+ char type[20];
+
+ if (entry.pathlen != cmplen ||
+ memcmp(entry.path, name, cmplen) ||
+ !has_sha1_file(entry.sha1) ||
+ sha1_object_info(entry.sha1, type, &size))
+ continue;
+ if (name[cmplen] != '/') {
+ unsigned hash = name_hash(fullname);
+ add_object_entry(entry.sha1, hash, 1);
+ return;
+ }
+ if (!strcmp(type, tree_type)) {
+ struct tree_desc sub;
+ struct pbase_tree_cache *tree;
+ const char *down = name+cmplen+1;
+ int downlen = name_cmp_len(down);
+
+ tree = pbase_tree_get(entry.sha1);
+ if (!tree)
+ return;
+ sub.buf = tree->tree_data;
+ sub.size = tree->tree_size;
+
+ add_pbase_object(&sub, down, downlen, fullname);
+ pbase_tree_put(tree);
+ }
+ }
+}
+
+static unsigned *done_pbase_paths;
+static int done_pbase_paths_num;
+static int done_pbase_paths_alloc;
+static int done_pbase_path_pos(unsigned hash)
+{
+ int lo = 0;
+ int hi = done_pbase_paths_num;
+ while (lo < hi) {
+ int mi = (hi + lo) / 2;
+ if (done_pbase_paths[mi] == hash)
+ return mi;
+ if (done_pbase_paths[mi] < hash)
+ hi = mi;
+ else
+ lo = mi + 1;
+ }
+ return -lo-1;
+}
+
+static int check_pbase_path(unsigned hash)
+{
+ int pos = (!done_pbase_paths) ? -1 : done_pbase_path_pos(hash);
+ if (0 <= pos)
+ return 1;
+ pos = -pos - 1;
+ if (done_pbase_paths_alloc <= done_pbase_paths_num) {
+ done_pbase_paths_alloc = alloc_nr(done_pbase_paths_alloc);
+ done_pbase_paths = xrealloc(done_pbase_paths,
+ done_pbase_paths_alloc *
+ sizeof(unsigned));
+ }
+ done_pbase_paths_num++;
+ if (pos < done_pbase_paths_num)
+ memmove(done_pbase_paths + pos + 1,
+ done_pbase_paths + pos,
+ (done_pbase_paths_num - pos - 1) * sizeof(unsigned));
+ done_pbase_paths[pos] = hash;
+ return 0;
+}
+
+static void add_preferred_base_object(char *name, unsigned hash)
+{
+ struct pbase_tree *it;
+ int cmplen = name_cmp_len(name);
+
+ if (check_pbase_path(hash))
+ return;
+
+ for (it = pbase_tree; it; it = it->next) {
+ if (cmplen == 0) {
+ hash = name_hash("");
+ add_object_entry(it->pcache.sha1, hash, 1);
+ }
+ else {
+ struct tree_desc tree;
+ tree.buf = it->pcache.tree_data;
+ tree.size = it->pcache.tree_size;
+ add_pbase_object(&tree, name, cmplen, name);
+ }
+ }
+}
+
+static void add_preferred_base(unsigned char *sha1)
+{
+ struct pbase_tree *it;
+ void *data;
+ unsigned long size;
+ unsigned char tree_sha1[20];
+
+ data = read_object_with_reference(sha1, tree_type, &size, tree_sha1);
+ if (!data)
+ return;
+
+ for (it = pbase_tree; it; it = it->next) {
+ if (!memcmp(it->pcache.sha1, tree_sha1, 20)) {
+ free(data);
+ return;
+ }
+ }
+
+ it = xcalloc(1, sizeof(*it));
+ it->next = pbase_tree;
+ pbase_tree = it;
+
+ memcpy(it->pcache.sha1, tree_sha1, 20);
+ it->pcache.tree_data = data;
+ it->pcache.tree_size = size;
+}
+
+static void check_object(struct object_entry *entry)
+{
+ char type[20];
+
+ if (entry->in_pack && !entry->preferred_base) {
+ unsigned char base[20];
+ unsigned long size;
+ struct object_entry *base_entry;
+
+ /* We want in_pack_type even if we do not reuse the delta:
+ * a non-delta representation is always worth reusing.
+ */
+ check_reuse_pack_delta(entry->in_pack,
+ entry->in_pack_offset,
+ base, &size,
+ &entry->in_pack_type);
+
+ /* Check if it is delta, and the base is also an object
+ * we are going to pack. If so we will reuse the existing
+ * delta.
+ */
+ if (!no_reuse_delta &&
+ entry->in_pack_type == OBJ_DELTA &&
+ (base_entry = locate_object_entry(base)) &&
+ (!base_entry->preferred_base)) {
+
+ /* Depth value does not matter - find_deltas()
+ * will never consider a reused delta as the
+ * base object to deltify other objects
+ * against, in order to avoid circular deltas.
+ */
+
+ /* uncompressed size of the delta data */
+ entry->size = entry->delta_size = size;
+ entry->delta = base_entry;
+ entry->type = OBJ_DELTA;
+
+ entry->delta_sibling = base_entry->delta_child;
+ base_entry->delta_child = entry;
+
+ return;
+ }
+ /* Otherwise we would do the usual */
+ }
+
+ if (sha1_object_info(entry->sha1, type, &entry->size))
+ die("unable to get type of object %s",
+ sha1_to_hex(entry->sha1));
+
+ if (!strcmp(type, commit_type)) {
+ entry->type = OBJ_COMMIT;
+ } else if (!strcmp(type, tree_type)) {
+ entry->type = OBJ_TREE;
+ } else if (!strcmp(type, blob_type)) {
+ entry->type = OBJ_BLOB;
+ } else if (!strcmp(type, tag_type)) {
+ entry->type = OBJ_TAG;
+ } else
+ die("unable to pack object %s of type %s",
+ sha1_to_hex(entry->sha1), type);
+}
+
+static unsigned int check_delta_limit(struct object_entry *me, unsigned int n)
+{
+ struct object_entry *child = me->delta_child;
+ unsigned int m = n;
+ while (child) {
+ unsigned int c = check_delta_limit(child, n + 1);
+ if (m < c)
+ m = c;
+ child = child->delta_sibling;
+ }
+ return m;
+}
+
+static void get_object_details(void)
+{
+ int i;
+ struct object_entry *entry;
+
+ prepare_pack_ix();
+ for (i = 0, entry = objects; i < nr_objects; i++, entry++)
+ check_object(entry);
+
+ if (nr_objects == nr_result) {
+ /*
+ * Depth of objects that depend on the entry -- this
+ * is subtracted from depth-max to avoid building too deep
+ * a delta chain when reusing delta data.
+ * However, we loosen this restriction when we know we
+ * are creating a thin pack -- it will have to be
+ * expanded on the other end anyway, so do not
+ * artificially cut the delta chain and let it go as
+ * deep as it wants.
+ */
+ for (i = 0, entry = objects; i < nr_objects; i++, entry++)
+ if (!entry->delta && entry->delta_child)
+ entry->delta_limit =
+ check_delta_limit(entry, 1);
+ }
+}
+
+typedef int (*entry_sort_t)(const struct object_entry *, const struct object_entry *);
+
+static entry_sort_t current_sort;
+
+static int sort_comparator(const void *_a, const void *_b)
+{
+ struct object_entry *a = *(struct object_entry **)_a;
+ struct object_entry *b = *(struct object_entry **)_b;
+ return current_sort(a,b);
+}
+
+static struct object_entry **create_sorted_list(entry_sort_t sort)
+{
+ struct object_entry **list = xmalloc(nr_objects * sizeof(struct object_entry *));
+ int i;
+
+ for (i = 0; i < nr_objects; i++)
+ list[i] = objects + i;
+ current_sort = sort;
+ qsort(list, nr_objects, sizeof(struct object_entry *), sort_comparator);
+ return list;
+}
+
+static int sha1_sort(const struct object_entry *a, const struct object_entry *b)
+{
+ return memcmp(a->sha1, b->sha1, 20);
+}
+
+static struct object_entry **create_final_object_list(void)
+{
+ struct object_entry **list;
+ int i, j;
+
+ for (i = nr_result = 0; i < nr_objects; i++)
+ if (!objects[i].preferred_base)
+ nr_result++;
+ list = xmalloc(nr_result * sizeof(struct object_entry *));
+ for (i = j = 0; i < nr_objects; i++) {
+ if (!objects[i].preferred_base)
+ list[j++] = objects + i;
+ }
+ current_sort = sha1_sort;
+ qsort(list, nr_result, sizeof(struct object_entry *), sort_comparator);
+ return list;
+}
+
+static int type_size_sort(const struct object_entry *a, const struct object_entry *b)
+{
+ if (a->type < b->type)
+ return -1;
+ if (a->type > b->type)
+ return 1;
+ if (a->hash < b->hash)
+ return -1;
+ if (a->hash > b->hash)
+ return 1;
+ if (a->preferred_base < b->preferred_base)
+ return -1;
+ if (a->preferred_base > b->preferred_base)
+ return 1;
+ if (a->size < b->size)
+ return -1;
+ if (a->size > b->size)
+ return 1;
+ return a < b ? -1 : (a > b);
+}
+
+struct unpacked {
+ struct object_entry *entry;
+ void *data;
+ struct delta_index *index;
+};
+
+/*
+ * We search for deltas _backwards_ in a list sorted by type and
+ * by size, so that we see progressively smaller and smaller files.
+ * That's because we prefer deltas to be from the bigger file
+ * to the smaller - deletes are potentially cheaper, but perhaps
+ * more importantly, the bigger file is likely the more recent
+ * one.
+ */
+static int try_delta(struct unpacked *trg, struct unpacked *src,
+ unsigned max_depth)
+{
+ struct object_entry *trg_entry = trg->entry;
+ struct object_entry *src_entry = src->entry;
+ unsigned long trg_size, src_size, delta_size, sizediff, max_size, sz;
+ char type[10];
+ void *delta_buf;
+
+ /* Don't bother doing diffs between different types */
+ if (trg_entry->type != src_entry->type)
+ return -1;
+
+ /* We do not compute delta to *create* objects we are not
+ * going to pack.
+ */
+ if (trg_entry->preferred_base)
+ return -1;
+
+ /*
+ * We do not bother to try a delta that we discarded
+ * on an earlier try, but only when reusing delta data.
+ */
+ if (!no_reuse_delta && trg_entry->in_pack &&
+ trg_entry->in_pack == src_entry->in_pack)
+ return 0;
+
+ /*
+ * If the current object is at the pack edge, take the depth of the
+ * objects that depend on the current object into account --
+ * otherwise they would become too deep.
+ */
+ if (trg_entry->delta_child) {
+ if (max_depth <= trg_entry->delta_limit)
+ return 0;
+ max_depth -= trg_entry->delta_limit;
+ }
+ if (src_entry->depth >= max_depth)
+ return 0;
+
+ /* Now some size filtering heuristics. */
+ trg_size = trg_entry->size;
+ max_size = trg_size/2 - 20;
+ max_size = max_size * (max_depth - src_entry->depth) / max_depth;
+ if (max_size == 0)
+ return 0;
+ if (trg_entry->delta && trg_entry->delta_size <= max_size)
+ max_size = trg_entry->delta_size-1;
+ src_size = src_entry->size;
+ sizediff = src_size < trg_size ? trg_size - src_size : 0;
+ if (sizediff >= max_size)
+ return 0;
+
+ /* Load data if not already done */
+ if (!trg->data) {
+ trg->data = read_sha1_file(trg_entry->sha1, type, &sz);
+ if (sz != trg_size)
+ die("object %s inconsistent object length (%lu vs %lu)",
+ sha1_to_hex(trg_entry->sha1), sz, trg_size);
+ }
+ if (!src->data) {
+ src->data = read_sha1_file(src_entry->sha1, type, &sz);
+ if (sz != src_size)
+ die("object %s inconsistent object length (%lu vs %lu)",
+ sha1_to_hex(src_entry->sha1), sz, src_size);
+ }
+ if (!src->index) {
+ src->index = create_delta_index(src->data, src_size);
+ if (!src->index)
+ die("out of memory");
+ }
+
+ delta_buf = create_delta(src->index, trg->data, trg_size, &delta_size, max_size);
+ if (!delta_buf)
+ return 0;
+
+ trg_entry->delta = src_entry;
+ trg_entry->delta_size = delta_size;
+ trg_entry->depth = src_entry->depth + 1;
+ free(delta_buf);
+ return 1;
+}
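+
+/* Size-heuristic sketch (illustrative numbers; assumes no delta_child
+ * adjustment and no existing delta on the target): with trg_size = 1000
+ * and max_depth = 10, max_size starts at 1000/2 - 20 = 480.  A source at
+ * depth 0 keeps 480 * 10/10 = 480, while a source at depth 5 only gets
+ * 480 * 5/10 = 240, so deep candidates must produce much smaller deltas
+ * to be accepted.
+ */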
+
+static void progress_interval(int signum)
+{
+ progress_update = 1;
+}
+
+static void find_deltas(struct object_entry **list, int window, int depth)
+{
+ int i, idx;
+ unsigned int array_size = window * sizeof(struct unpacked);
+ struct unpacked *array = xmalloc(array_size);
+ unsigned processed = 0;
+ unsigned last_percent = 999;
+
+ memset(array, 0, array_size);
+ i = nr_objects;
+ idx = 0;
+ if (progress)
+ fprintf(stderr, "Deltifying %d objects.\n", nr_result);
+
+ while (--i >= 0) {
+ struct object_entry *entry = list[i];
+ struct unpacked *n = array + idx;
+ int j;
+
+ if (!entry->preferred_base)
+ processed++;
+
+ if (progress) {
+ unsigned percent = processed * 100 / nr_result;
+ if (percent != last_percent || progress_update) {
+ fprintf(stderr, "%4u%% (%u/%u) done\r",
+ percent, processed, nr_result);
+ progress_update = 0;
+ last_percent = percent;
+ }
+ }
+
+ if (entry->delta)
+ /* This happens if we decided to reuse existing
+ * delta from a pack. "!no_reuse_delta &&" is implied.
+ */
+ continue;
+
+ if (entry->size < 50)
+ continue;
+ free_delta_index(n->index);
+ n->index = NULL;
+ free(n->data);
+ n->data = NULL;
+ n->entry = entry;
+
+ j = window;
+ while (--j > 0) {
+ unsigned int other_idx = idx + j;
+ struct unpacked *m;
+ if (other_idx >= window)
+ other_idx -= window;
+ m = array + other_idx;
+ if (!m->entry)
+ break;
+ if (try_delta(n, m, depth) < 0)
+ break;
+ }
+ /* If we made n a delta, and if n is already at max
+ * depth, leaving it in the window is pointless; we
+ * should evict it first.
+ */
+ if (entry->delta && depth <= entry->depth)
+ continue;
+
+ idx++;
+ if (idx >= window)
+ idx = 0;
+ }
+
+ if (progress)
+ fputc('\n', stderr);
+
+ for (i = 0; i < window; ++i) {
+ free_delta_index(array[i].index);
+ free(array[i].data);
+ }
+ free(array);
+}
+
+static void prepare_pack(int window, int depth)
+{
+ get_object_details();
+ sorted_by_type = create_sorted_list(type_size_sort);
+ if (window && depth)
+ find_deltas(sorted_by_type, window+1, depth);
+}
+
+static int reuse_cached_pack(unsigned char *sha1, int pack_to_stdout)
+{
+ static const char cache[] = "pack-cache/pack-%s.%s";
+ char *cached_pack, *cached_idx;
+ int ifd, ofd, ifd_ix = -1;
+
+ cached_pack = git_path(cache, sha1_to_hex(sha1), "pack");
+ ifd = open(cached_pack, O_RDONLY);
+ if (ifd < 0)
+ return 0;
+
+ if (!pack_to_stdout) {
+ cached_idx = git_path(cache, sha1_to_hex(sha1), "idx");
+ ifd_ix = open(cached_idx, O_RDONLY);
+ if (ifd_ix < 0) {
+ close(ifd);
+ return 0;
+ }
+ }
+
+ if (progress)
+ fprintf(stderr, "Reusing %d objects pack %s\n", nr_objects,
+ sha1_to_hex(sha1));
+
+ if (pack_to_stdout) {
+ if (copy_fd(ifd, 1))
+ exit(1);
+ close(ifd);
+ }
+ else {
+ char name[PATH_MAX];
+ snprintf(name, sizeof(name),
+ "%s-%s.%s", base_name, sha1_to_hex(sha1), "pack");
+ ofd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0666);
+ if (ofd < 0)
+ die("unable to open %s (%s)", name, strerror(errno));
+ if (copy_fd(ifd, ofd))
+ exit(1);
+ close(ifd);
+
+ snprintf(name, sizeof(name),
+ "%s-%s.%s", base_name, sha1_to_hex(sha1), "idx");
+ ofd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0666);
+ if (ofd < 0)
+ die("unable to open %s (%s)", name, strerror(errno));
+ if (copy_fd(ifd_ix, ofd))
+ exit(1);
+ close(ifd_ix);
+ puts(sha1_to_hex(sha1));
+ }
+
+ return 1;
+}
+
+static void setup_progress_signal(void)
+{
+ struct sigaction sa;
+ struct itimerval v;
+
+ memset(&sa, 0, sizeof(sa));
+ sa.sa_handler = progress_interval;
+ sigemptyset(&sa.sa_mask);
+ sa.sa_flags = SA_RESTART;
+ sigaction(SIGALRM, &sa, NULL);
+
+ v.it_interval.tv_sec = 1;
+ v.it_interval.tv_usec = 0;
+ v.it_value = v.it_interval;
+ setitimer(ITIMER_REAL, &v, NULL);
+}
+
+static int git_pack_config(const char *k, const char *v)
+{
+ if(!strcmp(k, "pack.window")) {
+ window = git_config_int(k, v);
+ return 0;
+ }
+ return git_default_config(k, v);
+}
+
+int cmd_pack_objects(int argc, const char **argv, const char *prefix)
+{
+ SHA_CTX ctx;
+ char line[40 + 1 + PATH_MAX + 2];
+ int depth = 10, pack_to_stdout = 0;
+ struct object_entry **list;
+ int num_preferred_base = 0;
+ int i;
+
+ git_config(git_pack_config);
+
+ progress = isatty(2);
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+
+ if (*arg == '-') {
+ if (!strcmp("--non-empty", arg)) {
+ non_empty = 1;
+ continue;
+ }
+ if (!strcmp("--local", arg)) {
+ local = 1;
+ continue;
+ }
+ if (!strcmp("--progress", arg)) {
+ progress = 1;
+ continue;
+ }
+ if (!strcmp("--incremental", arg)) {
+ incremental = 1;
+ continue;
+ }
+ if (!strncmp("--window=", arg, 9)) {
+ char *end;
+ window = strtoul(arg+9, &end, 0);
+ if (!arg[9] || *end)
+ usage(pack_usage);
+ continue;
+ }
+ if (!strncmp("--depth=", arg, 8)) {
+ char *end;
+ depth = strtoul(arg+8, &end, 0);
+ if (!arg[8] || *end)
+ usage(pack_usage);
+ continue;
+ }
+ if (!strcmp("--progress", arg)) {
+ progress = 1;
+ continue;
+ }
+ if (!strcmp("-q", arg)) {
+ progress = 0;
+ continue;
+ }
+ if (!strcmp("--no-reuse-delta", arg)) {
+ no_reuse_delta = 1;
+ continue;
+ }
+ if (!strcmp("--stdout", arg)) {
+ pack_to_stdout = 1;
+ continue;
+ }
+ usage(pack_usage);
+ }
+ if (base_name)
+ usage(pack_usage);
+ base_name = arg;
+ }
+
+ if (pack_to_stdout != !base_name)
+ usage(pack_usage);
+
+ prepare_packed_git();
+
+ if (progress) {
+ fprintf(stderr, "Generating pack...\n");
+ setup_progress_signal();
+ }
+
+ for (;;) {
+ unsigned char sha1[20];
+ unsigned hash;
+
+ if (!fgets(line, sizeof(line), stdin)) {
+ if (feof(stdin))
+ break;
+ if (!ferror(stdin))
+ die("fgets returned NULL, not EOF, not error!");
+ if (errno != EINTR)
+ die("fgets: %s", strerror(errno));
+ clearerr(stdin);
+ continue;
+ }
+
+ if (line[0] == '-') {
+ if (get_sha1_hex(line+1, sha1))
+ die("expected edge sha1, got garbage:\n %s",
+ line+1);
+ if (num_preferred_base++ < window)
+ add_preferred_base(sha1);
+ continue;
+ }
+ if (get_sha1_hex(line, sha1))
+ die("expected sha1, got garbage:\n %s", line);
+ hash = name_hash(line+41);
+ add_preferred_base_object(line+41, hash);
+ add_object_entry(sha1, hash, 0);
+ }
+ if (progress)
+ fprintf(stderr, "Done counting %d objects.\n", nr_objects);
+ sorted_by_sha = create_final_object_list();
+ if (non_empty && !nr_result)
+ return 0;
+
+ SHA1_Init(&ctx);
+ list = sorted_by_sha;
+ for (i = 0; i < nr_result; i++) {
+ struct object_entry *entry = *list++;
+ SHA1_Update(&ctx, entry->sha1, 20);
+ }
+ SHA1_Final(object_list_sha1, &ctx);
+ if (progress && (nr_objects != nr_result))
+ fprintf(stderr, "Result has %d objects.\n", nr_result);
+
+ if (reuse_cached_pack(object_list_sha1, pack_to_stdout))
+ ;
+ else {
+ if (nr_result)
+ prepare_pack(window, depth);
+ if (progress && pack_to_stdout) {
+ /* the other end usually displays progress itself */
+ struct itimerval v = {{0,},};
+ setitimer(ITIMER_REAL, &v, NULL);
+ signal(SIGALRM, SIG_IGN);
+ progress_update = 0;
+ }
+ write_pack_file();
+ if (!pack_to_stdout) {
+ write_index_file();
+ puts(sha1_to_hex(object_list_sha1));
+ }
+ }
+ if (progress)
+ fprintf(stderr, "Total %d, written %d (delta %d), reused %d (delta %d)\n",
+ nr_result, written, written_delta, reused, reused_delta);
+ return 0;
+}
--- /dev/null
+#include "builtin.h"
+#include "cache.h"
+
+static const char prune_packed_usage[] =
+"git-prune-packed [-n]";
+
+static int dryrun;
+
+static void prune_dir(int i, DIR *dir, char *pathname, int len)
+{
+ struct dirent *de;
+ char hex[40];
+
+ sprintf(hex, "%02x", i);
+ while ((de = readdir(dir)) != NULL) {
+ unsigned char sha1[20];
+ if (strlen(de->d_name) != 38)
+ continue;
+ memcpy(hex+2, de->d_name, 38);
+ if (get_sha1_hex(hex, sha1))
+ continue;
+ if (!has_sha1_pack(sha1))
+ continue;
+ memcpy(pathname + len, de->d_name, 38);
+ if (dryrun)
+ printf("rm -f %s\n", pathname);
+ else if (unlink(pathname) < 0)
+ error("unable to unlink %s", pathname);
+ }
+ pathname[len] = 0;
+ rmdir(pathname);
+}
+
+static void prune_packed_objects(void)
+{
+ int i;
+ static char pathname[PATH_MAX];
+ const char *dir = get_object_directory();
+ int len = strlen(dir);
+
+ if (len > PATH_MAX - 42)
+ die("impossible object directory");
+ memcpy(pathname, dir, len);
+ if (len && pathname[len-1] != '/')
+ pathname[len++] = '/';
+ for (i = 0; i < 256; i++) {
+ DIR *d;
+
+ sprintf(pathname + len, "%02x/", i);
+ d = opendir(pathname);
+ if (!d)
+ continue;
+ prune_dir(i, d, pathname, len + 3);
+ closedir(d);
+ }
+}
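+
+/* Path sketch (hypothetical object name): for i == 0x17 and a directory
+ * entry named "3861..." (38 hex digits), hex becomes the full 40-digit
+ * SHA1 "173861..." and pathname becomes "<objdir>/17/3861...", which is
+ * unlinked only once the object is known to also live in a pack.
+ */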
+
+int cmd_prune_packed(int argc, const char **argv, const char *prefix)
+{
+ int i;
+
+ for (i = 1; i < argc; i++) {
+ const char *arg = argv[i];
+
+ if (*arg == '-') {
+ if (!strcmp(arg, "-n"))
+ dryrun = 1;
+ else
+ usage(prune_packed_usage);
+ continue;
+ }
+ /* Handle arguments here .. */
+ usage(prune_packed_usage);
+ }
+ sync();
+ prune_packed_objects();
+ return 0;
+}
#include "builtin.h"
#include "cache-tree.h"
-static const char prune_usage[] = "git prune [-n]";
+static const char prune_usage[] = "git-prune [-n]";
static int show_only = 0;
static struct rev_info revs;
#define MAX_URI (16)
-static const char push_usage[] = "git push [--all] [--tags] [--force] <repository> [<refspec>...]";
+static const char push_usage[] = "git-push [--all] [--tags] [-f | --force] <repository> [<refspec>...]";
static int all = 0, tags = 0, force = 0, thin = 1;
static const char *execute = NULL;
tags = 1;
continue;
}
- if (!strcmp(arg, "--force")) {
+ if (!strcmp(arg, "--force") || !strcmp(arg, "-f")) {
force = 1;
continue;
}
static struct lock_file lock_file;
-int cmd_read_tree(int argc, const char **argv, const char *prefix)
+int cmd_read_tree(int argc, const char **argv, const char *unused_prefix)
{
int i, newfd, stage = 0;
unsigned char sha1[20];
git_config(git_default_config);
- newfd = hold_lock_file_for_update(&lock_file, get_index_file());
- if (newfd < 0)
- die("unable to create new index file");
+ newfd = hold_lock_file_for_update(&lock_file, get_index_file(), 1);
git_config(git_default_config);
--- /dev/null
+#include "builtin.h"
+#include "cache.h"
+#include <regex.h>
+
+static const char git_config_set_usage[] =
+"git-repo-config [ --bool | --int ] [--get | --get-all | --get-regexp | --replace-all | --unset | --unset-all] name [value [value_regex]] | --list";
+
+static char* key = NULL;
+static regex_t* key_regexp = NULL;
+static regex_t* regexp = NULL;
+static int show_keys = 0;
+static int use_key_regexp = 0;
+static int do_all = 0;
+static int do_not_match = 0;
+static int seen = 0;
+static enum { T_RAW, T_INT, T_BOOL } type = T_RAW;
+
+static int show_all_config(const char *key_, const char *value_)
+{
+ if (value_)
+ printf("%s=%s\n", key_, value_);
+ else
+ printf("%s\n", key_);
+ return 0;
+}
+
+static int show_config(const char* key_, const char* value_)
+{
+ char value[256];
+ const char *vptr = value;
+ int dup_error = 0;
+
+ if (!use_key_regexp && strcmp(key_, key))
+ return 0;
+ if (use_key_regexp && regexec(key_regexp, key_, 0, NULL, 0))
+ return 0;
+ if (regexp != NULL &&
+ (do_not_match ^
+ regexec(regexp, (value_?value_:""), 0, NULL, 0)))
+ return 0;
+
+ if (show_keys)
+ printf("%s ", key_);
+ if (seen && !do_all)
+ dup_error = 1;
+ if (type == T_INT)
+ sprintf(value, "%d", git_config_int(key_, value_?value_:""));
+ else if (type == T_BOOL)
+ vptr = git_config_bool(key_, value_) ? "true" : "false";
+ else
+ vptr = value_?value_:"";
+ seen++;
+ if (dup_error) {
+ error("More than one value for the key %s: %s",
+ key_, vptr);
+ }
+ else
+ printf("%s\n", vptr);
+
+ return 0;
+}
+
+static int get_value(const char* key_, const char* regex_)
+{
+ int ret = -1;
+ char *tl;
+ char *global = NULL, *repo_config = NULL;
+ const char *local;
+
+ local = getenv("GIT_CONFIG");
+ if (!local) {
+ const char *home = getenv("HOME");
+ local = getenv("GIT_CONFIG_LOCAL");
+ if (!local)
+ local = repo_config = strdup(git_path("config"));
+ if (home)
+ global = strdup(mkpath("%s/.gitconfig", home));
+ }
+
+ key = strdup(key_);
+ for (tl=key+strlen(key)-1; tl >= key && *tl != '.'; --tl)
+ *tl = tolower(*tl);
+ for (tl=key; *tl && *tl != '.'; ++tl)
+ *tl = tolower(*tl);
+
+ if (use_key_regexp) {
+ key_regexp = (regex_t*)malloc(sizeof(regex_t));
+ if (regcomp(key_regexp, key, REG_EXTENDED)) {
+ fprintf(stderr, "Invalid key pattern: %s\n", key_);
+ goto free_strings;
+ }
+ }
+
+ if (regex_) {
+ if (regex_[0] == '!') {
+ do_not_match = 1;
+ regex_++;
+ }
+
+ regexp = (regex_t*)malloc(sizeof(regex_t));
+ if (regcomp(regexp, regex_, REG_EXTENDED)) {
+ fprintf(stderr, "Invalid pattern: %s\n", regex_);
+ goto free_strings;
+ }
+ }
+
+ if (do_all && global)
+ git_config_from_file(show_config, global);
+ git_config_from_file(show_config, local);
+ if (!do_all && !seen && global)
+ git_config_from_file(show_config, global);
+
+ free(key);
+ if (regexp) {
+ regfree(regexp);
+ free(regexp);
+ }
+
+ if (do_all)
+ ret = !seen;
+ else
+ ret = (seen == 1) ? 0 : 1;
+
+free_strings:
+ if (repo_config)
+ free(repo_config);
+ if (global)
+ free(global);
+ return ret;
+}
+
+int cmd_repo_config(int argc, const char **argv, const char *prefix)
+{
+ int nongit = 0;
+ setup_git_directory_gently(&nongit);
+
+ while (1 < argc) {
+ if (!strcmp(argv[1], "--int"))
+ type = T_INT;
+ else if (!strcmp(argv[1], "--bool"))
+ type = T_BOOL;
+ else if (!strcmp(argv[1], "--list") || !strcmp(argv[1], "-l"))
+ return git_config(show_all_config);
+ else
+ break;
+ argc--;
+ argv++;
+ }
+
+ switch (argc) {
+ case 2:
+ return get_value(argv[1], NULL);
+ case 3:
+ if (!strcmp(argv[1], "--unset"))
+ return git_config_set(argv[2], NULL);
+ else if (!strcmp(argv[1], "--unset-all"))
+ return git_config_set_multivar(argv[2], NULL, NULL, 1);
+ else if (!strcmp(argv[1], "--get"))
+ return get_value(argv[2], NULL);
+ else if (!strcmp(argv[1], "--get-all")) {
+ do_all = 1;
+ return get_value(argv[2], NULL);
+ } else if (!strcmp(argv[1], "--get-regexp")) {
+ show_keys = 1;
+ use_key_regexp = 1;
+ do_all = 1;
+ return get_value(argv[2], NULL);
+ } else
+ return git_config_set(argv[1], argv[2]);
+ case 4:
+ if (!strcmp(argv[1], "--unset"))
+ return git_config_set_multivar(argv[2], NULL, argv[3], 0);
+ else if (!strcmp(argv[1], "--unset-all"))
+ return git_config_set_multivar(argv[2], NULL, argv[3], 1);
+ else if (!strcmp(argv[1], "--get"))
+ return get_value(argv[2], argv[3]);
+ else if (!strcmp(argv[1], "--get-all")) {
+ do_all = 1;
+ return get_value(argv[2], argv[3]);
+ } else if (!strcmp(argv[1], "--get-regexp")) {
+ show_keys = 1;
+ use_key_regexp = 1;
+ do_all = 1;
+ return get_value(argv[2], argv[3]);
+ } else if (!strcmp(argv[1], "--replace-all"))
+
+ return git_config_set_multivar(argv[2], argv[3], NULL, 1);
+ else
+
+ return git_config_set_multivar(argv[1], argv[2], argv[3], 0);
+ case 5:
+ if (!strcmp(argv[1], "--replace-all"))
+ return git_config_set_multivar(argv[2], argv[3], argv[4], 1);
+ case 1:
+ default:
+ usage(git_config_set_usage);
+ }
+ return 0;
+}
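+
+/* Example invocations (illustrative; cf. git_config_set_usage above):
+ *
+ *	git-repo-config core.filemode false
+ *	git-repo-config --get core.filemode
+ *	git-repo-config --bool --get core.filemode
+ *	git-repo-config --get-regexp 'core\..*'
+ *	git-repo-config --list
+ */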
git_config(git_default_config);
- newfd = hold_lock_file_for_update(&lock_file, get_index_file());
- if (newfd < 0)
- die("unable to create new index file");
+ newfd = hold_lock_file_for_update(&lock_file, get_index_file(), 1);
if (read_cache() < 0)
die("index file corrupt");
force = 1;
continue;
}
- die(builtin_rm_usage);
+ usage(builtin_rm_usage);
}
if (argc <= i)
usage(builtin_rm_usage);
printf("rm '%s'\n", path);
if (remove_file_from_cache(path))
- die("git rm: unable to remove %s", path);
+ die("git-rm: unable to remove %s", path);
cache_tree_invalidate_path(active_cache_tree, path);
}
continue;
}
if (!removed)
- die("git rm: %s: %s", path, strerror(errno));
+ die("git-rm: %s: %s", path, strerror(errno));
}
}
--- /dev/null
+#include "builtin.h"
+#include "cache.h"
+
+static const char git_symbolic_ref_usage[] =
+"git-symbolic-ref name [ref]";
+
+static void check_symref(const char *HEAD)
+{
+ unsigned char sha1[20];
+ const char *git_HEAD = strdup(git_path("%s", HEAD));
+ const char *git_refs_heads_master = resolve_ref(git_HEAD, sha1, 0);
+ if (git_refs_heads_master) {
+ /* we want to strip the .git/ part */
+ int pfxlen = strlen(git_HEAD) - strlen(HEAD);
+ puts(git_refs_heads_master + pfxlen);
+ }
+ else
+ die("No such ref: %s", HEAD);
+}
+
+int cmd_symbolic_ref(int argc, const char **argv, const char *prefix)
+{
+ git_config(git_default_config);
+ switch (argc) {
+ case 2:
+ check_symref(argv[1]);
+ break;
+ case 3:
+ create_symref(strdup(git_path("%s", argv[1])), argv[2]);
+ break;
+ default:
+ usage(git_symbolic_ref_usage);
+ }
+ return 0;
+}
struct commit *commit;
struct tree_desc tree;
struct strbuf current_path;
+ void *buffer;
current_path.buf = xmalloc(PATH_MAX);
current_path.alloc = PATH_MAX;
} else
archive_time = time(NULL);
- tree.buf = read_object_with_reference(sha1, tree_type, &tree.size,
- tree_sha1);
+ tree.buf = buffer = read_object_with_reference(sha1, tree_type,
+ &tree.size, tree_sha1);
if (!tree.buf)
die("not a reference to a tag, commit or tree object: %s",
sha1_to_hex(sha1));
write_entry(tree_sha1, &current_path, 040777, NULL, 0);
traverse_tree(&tree, &current_path);
write_trailer();
+ free(buffer);
free(current_path.buf);
return 0;
}
--- /dev/null
+#include "builtin.h"
+#include "cache.h"
+#include "object.h"
+#include "delta.h"
+#include "pack.h"
+#include "blob.h"
+#include "commit.h"
+#include "tag.h"
+#include "tree.h"
+
+#include <sys/time.h>
+
+static int dry_run, quiet;
+static const char unpack_usage[] = "git-unpack-objects [-n] [-q] < pack-file";
+
+/* We always read in 4kB chunks. */
+static unsigned char buffer[4096];
+static unsigned long offset, len, eof;
+static SHA_CTX ctx;
+
+/*
+ * Make sure at least "min" bytes are available in the buffer, and
+ * return the pointer to the buffer.
+ */
+static void * fill(int min)
+{
+ if (min <= len)
+ return buffer + offset;
+ if (eof)
+ die("unable to fill input");
+ if (min > sizeof(buffer))
+ die("cannot fill %d bytes", min);
+ if (offset) {
+ SHA1_Update(&ctx, buffer, offset);
+ memcpy(buffer, buffer + offset, len);
+ offset = 0;
+ }
+ do {
+ int ret = xread(0, buffer + len, sizeof(buffer) - len);
+ if (ret <= 0) {
+ if (!ret)
+ die("early EOF");
+ die("read error on input: %s", strerror(errno));
+ }
+ len += ret;
+ } while (len < min);
+ return buffer;
+}
+
+static void use(int bytes)
+{
+ if (bytes > len)
+ die("used more bytes than were available");
+ len -= bytes;
+ offset += bytes;
+}
+
+static void *get_data(unsigned long size)
+{
+ z_stream stream;
+ void *buf = xmalloc(size);
+
+ memset(&stream, 0, sizeof(stream));
+
+ stream.next_out = buf;
+ stream.avail_out = size;
+ stream.next_in = fill(1);
+ stream.avail_in = len;
+ inflateInit(&stream);
+
+ for (;;) {
+ int ret = inflate(&stream, 0);
+ use(len - stream.avail_in);
+ if (stream.total_out == size && ret == Z_STREAM_END)
+ break;
+ if (ret != Z_OK)
+ die("inflate returned %d\n", ret);
+ stream.next_in = fill(1);
+ stream.avail_in = len;
+ }
+ inflateEnd(&stream);
+ return buf;
+}
+
+struct delta_info {
+ unsigned char base_sha1[20];
+ unsigned long size;
+ void *delta;
+ struct delta_info *next;
+};
+
+static struct delta_info *delta_list;
+
+static void add_delta_to_list(unsigned char *base_sha1, void *delta, unsigned long size)
+{
+ struct delta_info *info = xmalloc(sizeof(*info));
+
+ memcpy(info->base_sha1, base_sha1, 20);
+ info->size = size;
+ info->delta = delta;
+ info->next = delta_list;
+ delta_list = info;
+}
+
+static void added_object(unsigned char *sha1, const char *type, void *data, unsigned long size);
+
+static void write_object(void *buf, unsigned long size, const char *type)
+{
+ unsigned char sha1[20];
+ if (write_sha1_file(buf, size, type, sha1) < 0)
+ die("failed to write object");
+ added_object(sha1, type, buf, size);
+}
+
+static int resolve_delta(const char *type,
+ void *base, unsigned long base_size,
+ void *delta, unsigned long delta_size)
+{
+ void *result;
+ unsigned long result_size;
+
+ result = patch_delta(base, base_size,
+ delta, delta_size,
+ &result_size);
+ if (!result)
+ die("failed to apply delta");
+ free(delta);
+ write_object(result, result_size, type);
+ free(result);
+ return 0;
+}
+
+static void added_object(unsigned char *sha1, const char *type, void *data, unsigned long size)
+{
+ struct delta_info **p = &delta_list;
+ struct delta_info *info;
+
+ while ((info = *p) != NULL) {
+ if (!memcmp(info->base_sha1, sha1, 20)) {
+ *p = info->next;
+ p = &delta_list;
+ resolve_delta(type, data, size, info->delta, info->size);
+ free(info);
+ continue;
+ }
+ p = &info->next;
+ }
+}
+
+static int unpack_non_delta_entry(enum object_type kind, unsigned long size)
+{
+ void *buf = get_data(size);
+ const char *type;
+
+ switch (kind) {
+ case OBJ_COMMIT: type = commit_type; break;
+ case OBJ_TREE: type = tree_type; break;
+ case OBJ_BLOB: type = blob_type; break;
+ case OBJ_TAG: type = tag_type; break;
+ default: die("bad type %d", kind);
+ }
+ if (!dry_run)
+ write_object(buf, size, type);
+ free(buf);
+ return 0;
+}
+
+static int unpack_delta_entry(unsigned long delta_size)
+{
+ void *delta_data, *base;
+ unsigned long base_size;
+ char type[20];
+ unsigned char base_sha1[20];
+ int result;
+
+ memcpy(base_sha1, fill(20), 20);
+ use(20);
+
+ delta_data = get_data(delta_size);
+ if (dry_run) {
+ free(delta_data);
+ return 0;
+ }
+
+ if (!has_sha1_file(base_sha1)) {
+ add_delta_to_list(base_sha1, delta_data, delta_size);
+ return 0;
+ }
+ base = read_sha1_file(base_sha1, type, &base_size);
+ if (!base)
+ die("failed to read delta-pack base object %s", sha1_to_hex(base_sha1));
+ result = resolve_delta(type, base, base_size, delta_data, delta_size);
+ free(base);
+ return result;
+}
+
+static void unpack_one(unsigned nr, unsigned total)
+{
+ unsigned shift;
+ unsigned char *pack, c;
+ unsigned long size;
+ enum object_type type;
+
+ pack = fill(1);
+ c = *pack;
+ use(1);
+ type = (c >> 4) & 7;
+ size = (c & 15);
+ shift = 4;
+ while (c & 0x80) {
+ pack = fill(1);
+ c = *pack++;
+ use(1);
+ size += (c & 0x7f) << shift;
+ shift += 7;
+ }
+ if (!quiet) {
+ static unsigned long last_sec;
+ static unsigned last_percent;
+ struct timeval now;
+ unsigned percentage = (nr * 100) / total;
+
+ gettimeofday(&now, NULL);
+ if (percentage != last_percent || now.tv_sec != last_sec) {
+ last_sec = now.tv_sec;
+ last_percent = percentage;
+ fprintf(stderr, "%4u%% (%u/%u) done\r", percentage, nr, total);
+ }
+ }
+ switch (type) {
+ case OBJ_COMMIT:
+ case OBJ_TREE:
+ case OBJ_BLOB:
+ case OBJ_TAG:
+ unpack_non_delta_entry(type, size);
+ return;
+ case OBJ_DELTA:
+ unpack_delta_entry(size);
+ return;
+ default:
+ die("bad object type %d", type);
+ }
+}
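+
+/* Decoding sketch (illustrative; mirrors encode_header in
+ * git-pack-objects, assuming OBJ_COMMIT == 1): the byte sequence
+ * 0x98 0x3e yields type (0x98 >> 4) & 7 == 1 and size
+ * (0x98 & 15) + (0x3e << 4) == 8 + 992 == 1000.
+ */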
+
+static void unpack_all(void)
+{
+ int i;
+ struct pack_header *hdr = fill(sizeof(struct pack_header));
+ unsigned nr_objects = ntohl(hdr->hdr_entries);
+
+ if (ntohl(hdr->hdr_signature) != PACK_SIGNATURE)
+ die("bad pack file");
+ if (!pack_version_ok(hdr->hdr_version))
+ die("unknown pack file version %d", ntohl(hdr->hdr_version));
+ fprintf(stderr, "Unpacking %d objects\n", nr_objects);
+
+ use(sizeof(struct pack_header));
+ for (i = 0; i < nr_objects; i++)
+ unpack_one(i+1, nr_objects);
+ if (delta_list)
+ die("unresolved deltas left after unpacking");
+}
+
+int cmd_unpack_objects(int argc, const char **argv, const char *prefix)
+{
+ int i;
+ unsigned char sha1[20];
+
+ quiet = !isatty(2);
+
+ for (i = 1 ; i < argc; i++) {
+ const char *arg = argv[i];
+
+ if (*arg == '-') {
+ if (!strcmp(arg, "-n")) {
+ dry_run = 1;
+ continue;
+ }
+ if (!strcmp(arg, "-q")) {
+ quiet = 1;
+ continue;
+ }
+ usage(unpack_usage);
+ }
+
+ /* We don't take any non-flag arguments now.. Maybe some day */
+ usage(unpack_usage);
+ }
+ SHA1_Init(&ctx);
+ unpack_all();
+ SHA1_Update(&ctx, buffer, offset);
+ SHA1_Final(sha1, &ctx);
+ if (memcmp(fill(20), sha1, 20))
+ die("final sha1 did not match");
+ use(20);
+
+ /* Write the last part of the buffer to stdout */
+ while (len) {
+ int ret = xwrite(1, buffer + offset, len);
+ if (ret <= 0)
+ break;
+ len -= ret;
+ offset += ret;
+ }
+
+ /* All done */
+ if (!quiet)
+ fprintf(stderr, "\n");
+ return 0;
+}
/* We can't free this memory, it becomes part of a linked list parsed atexit() */
lock_file = xcalloc(1, sizeof(struct lock_file));
- newfd = hold_lock_file_for_update(lock_file, get_index_file());
- if (newfd < 0)
- die("unable to create new cachefile");
+ newfd = hold_lock_file_for_update(lock_file, get_index_file(), 1);
entries = read_cache();
if (entries < 0)
--- /dev/null
+#include "builtin.h"
+#include "cache.h"
+#include "pack.h"
+
+static int verify_one_pack(const char *path, int verbose)
+{
+ char arg[PATH_MAX];
+ int len;
+ struct packed_git *pack;
+ int err;
+
+ len = strlcpy(arg, path, PATH_MAX);
+ if (len >= PATH_MAX)
+ return error("name too long: %s", path);
+
+ /*
+ * In addition to "foo.idx" we accept "foo.pack" and "foo";
+ * normalize these forms to "foo.idx" for add_packed_git().
+ */
+ if (has_extension(arg, ".pack")) {
+ strcpy(arg + len - 5, ".idx");
+ len--;
+ } else if (!has_extension(arg, ".idx")) {
+ if (len + 4 >= PATH_MAX)
+ return error("name too long: %s.idx", arg);
+ strcpy(arg + len, ".idx");
+ len += 4;
+ }
+
+ /*
+ * add_packed_git() uses our buffer (containing "foo.idx") to
+ * build the pack filename ("foo.pack"). Make sure it fits.
+ */
+ if (len + 1 >= PATH_MAX) {
+ arg[len - 4] = '\0';
+ return error("name too long: %s.pack", arg);
+ }
+
+ pack = add_packed_git(arg, len, 1);
+ if (!pack)
+ return error("packfile %s not found.", arg);
+
+ err = verify_pack(pack, verbose);
+ free(pack);
+
+ return err;
+}
+
+static const char verify_pack_usage[] = "git-verify-pack [-v] <pack>...";
+
+int cmd_verify_pack(int argc, const char **argv, const char *prefix)
+{
+ int err = 0;
+ int verbose = 0;
+ int no_more_options = 0;
+ int nothing_done = 1;
+
+ while (1 < argc) {
+ if (!no_more_options && argv[1][0] == '-') {
+ if (!strcmp("-v", argv[1]))
+ verbose = 1;
+ else if (!strcmp("--", argv[1]))
+ no_more_options = 1;
+ else
+ usage(verify_pack_usage);
+ }
+ else {
+ if (verify_one_pack(argv[1], verbose))
+ err = 1;
+ nothing_done = 0;
+ }
+ argc--; argv++;
+ }
+
+ if (nothing_done)
+ usage(verify_pack_usage);
+
+ return err;
+}
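+
+/* Example (illustrative): either of
+ *
+ *	git-verify-pack -v .git/objects/pack/pack-<sha1>.idx
+ *	git-verify-pack .git/objects/pack/pack-<sha1>.pack
+ *
+ * works; verify_one_pack() normalizes both forms to the ".idx" name.
+ */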
/* We can't free this memory, it becomes part of a linked list parsed atexit() */
struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
- newfd = hold_lock_file_for_update(lock_file, get_index_file());
+ newfd = hold_lock_file_for_update(lock_file, get_index_file(), 0);
entries = read_cache();
if (entries < 0)
else if (!strncmp(arg, "--prefix=", 9))
prefix = arg + 9;
else
- die(write_tree_usage);
+ usage(write_tree_usage);
argc--; argv++;
}
#define BUILTIN_H
#include <stdio.h>
-
-#ifndef PATH_MAX
-# define PATH_MAX 4096
-#endif
+#include <limits.h>
extern const char git_version_string[];
+extern const char git_usage_string[];
-void cmd_usage(int show_all, const char *exec_path, const char *fmt, ...)
-#ifdef __GNUC__
- __attribute__((__format__(__printf__, 3, 4), __noreturn__))
-#endif
- ;
-
-extern int cmd_help(int argc, const char **argv, const char *prefix);
-extern int cmd_version(int argc, const char **argv, const char *prefix);
-
-extern int cmd_whatchanged(int argc, const char **argv, const char *prefix);
-extern int cmd_show(int argc, const char **argv, const char *prefix);
-extern int cmd_log(int argc, const char **argv, const char *prefix);
-extern int cmd_diff(int argc, const char **argv, const char *prefix);
-extern int cmd_format_patch(int argc, const char **argv, const char *prefix);
-extern int cmd_count_objects(int argc, const char **argv, const char *prefix);
-
-extern int cmd_prune(int argc, const char **argv, const char *prefix);
+extern void help_unknown_cmd(const char *cmd);
+extern int mailinfo(FILE *in, FILE *out, int ks, const char *encoding, const char *msg, const char *patch);
+extern int split_mbox(const char **mbox, const char *dir, int allow_bare, int nr_prec, int skip);
+extern void stripspace(FILE *in, FILE *out);
+extern int write_tree(unsigned char *sha1, int missing_ok, const char *prefix);
-extern int cmd_push(int argc, const char **argv, const char *prefix);
-extern int cmd_grep(int argc, const char **argv, const char *prefix);
-extern int cmd_rm(int argc, const char **argv, const char *prefix);
extern int cmd_add(int argc, const char **argv, const char *prefix);
-extern int cmd_rev_list(int argc, const char **argv, const char *prefix);
+extern int cmd_apply(int argc, const char **argv, const char *prefix);
+extern int cmd_cat_file(int argc, const char **argv, const char *prefix);
+extern int cmd_checkout_index(int argc, const char **argv, const char *prefix);
extern int cmd_check_ref_format(int argc, const char **argv, const char *prefix);
-extern int cmd_init_db(int argc, const char **argv, const char *prefix);
-extern int cmd_tar_tree(int argc, const char **argv, const char *prefix);
-extern int cmd_upload_tar(int argc, const char **argv, const char *prefix);
-extern int cmd_get_tar_commit_id(int argc, const char **argv, const char *prefix);
-extern int cmd_ls_files(int argc, const char **argv, const char *prefix);
-extern int cmd_ls_tree(int argc, const char **argv, const char *prefix);
-extern int cmd_read_tree(int argc, const char **argv, const char *prefix);
extern int cmd_commit_tree(int argc, const char **argv, const char *prefix);
-extern int cmd_apply(int argc, const char **argv, const char *prefix);
-extern int cmd_show_branch(int argc, const char **argv, const char *prefix);
+extern int cmd_count_objects(int argc, const char **argv, const char *prefix);
extern int cmd_diff_files(int argc, const char **argv, const char *prefix);
extern int cmd_diff_index(int argc, const char **argv, const char *prefix);
+extern int cmd_diff(int argc, const char **argv, const char *prefix);
extern int cmd_diff_stages(int argc, const char **argv, const char *prefix);
extern int cmd_diff_tree(int argc, const char **argv, const char *prefix);
-extern int cmd_cat_file(int argc, const char **argv, const char *prefix);
+extern int cmd_fmt_merge_msg(int argc, const char **argv, const char *prefix);
+extern int cmd_format_patch(int argc, const char **argv, const char *prefix);
+extern int cmd_get_tar_commit_id(int argc, const char **argv, const char *prefix);
+extern int cmd_grep(int argc, const char **argv, const char *prefix);
+extern int cmd_help(int argc, const char **argv, const char *prefix);
+extern int cmd_init_db(int argc, const char **argv, const char *prefix);
+extern int cmd_log(int argc, const char **argv, const char *prefix);
+extern int cmd_ls_files(int argc, const char **argv, const char *prefix);
+extern int cmd_ls_tree(int argc, const char **argv, const char *prefix);
+extern int cmd_mailinfo(int argc, const char **argv, const char *prefix);
+extern int cmd_mailsplit(int argc, const char **argv, const char *prefix);
+extern int cmd_mv(int argc, const char **argv, const char *prefix);
+extern int cmd_name_rev(int argc, const char **argv, const char *prefix);
+extern int cmd_pack_objects(int argc, const char **argv, const char *prefix);
+extern int cmd_prune(int argc, const char **argv, const char *prefix);
+extern int cmd_prune_packed(int argc, const char **argv, const char *prefix);
+extern int cmd_push(int argc, const char **argv, const char *prefix);
+extern int cmd_read_tree(int argc, const char **argv, const char *prefix);
+extern int cmd_repo_config(int argc, const char **argv, const char *prefix);
+extern int cmd_rev_list(int argc, const char **argv, const char *prefix);
extern int cmd_rev_parse(int argc, const char **argv, const char *prefix);
+extern int cmd_rm(int argc, const char **argv, const char *prefix);
+extern int cmd_show_branch(int argc, const char **argv, const char *prefix);
+extern int cmd_show(int argc, const char **argv, const char *prefix);
+extern int cmd_stripspace(int argc, const char **argv, const char *prefix);
+extern int cmd_symbolic_ref(int argc, const char **argv, const char *prefix);
+extern int cmd_tar_tree(int argc, const char **argv, const char *prefix);
+extern int cmd_unpack_objects(int argc, const char **argv, const char *prefix);
extern int cmd_update_index(int argc, const char **argv, const char *prefix);
extern int cmd_update_ref(int argc, const char **argv, const char *prefix);
-extern int cmd_fmt_merge_msg(int argc, const char **argv, const char *prefix);
-extern int cmd_mv(int argc, const char **argv, const char *prefix);
-
+extern int cmd_upload_tar(int argc, const char **argv, const char *prefix);
+extern int cmd_version(int argc, const char **argv, const char *prefix);
+extern int cmd_whatchanged(int argc, const char **argv, const char *prefix);
extern int cmd_write_tree(int argc, const char **argv, const char *prefix);
-extern int write_tree(unsigned char *sha1, int missing_ok, const char *prefix);
+extern int cmd_verify_pack(int argc, const char **argv, const char *prefix);
-extern int cmd_mailsplit(int argc, const char **argv, const char *prefix);
-extern int split_mbox(const char **mbox, const char *dir, int allow_bare, int nr_prec, int skip);
-
-extern int cmd_mailinfo(int argc, const char **argv, const char *prefix);
-extern int mailinfo(FILE *in, FILE *out, int ks, const char *encoding, const char *msg, const char *patch);
-
-extern int cmd_stripspace(int argc, const char **argv, const char *prefix);
-extern void stripspace(FILE *in, FILE *out);
#endif
struct lock_file *next;
char filename[PATH_MAX];
};
-extern int hold_lock_file_for_update(struct lock_file *, const char *path);
+extern int hold_lock_file_for_update(struct lock_file *, const char *path, int);
extern int commit_lock_file(struct lock_file *);
extern void rollback_lock_file(struct lock_file *);
/* pager.c */
extern void setup_pager(void);
extern int pager_in_use;
+extern int pager_use_color;
/* base85 */
int decode_85(char *dst, char *line, int linelen);
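
Earlier in this hunk, hold_lock_file_for_update() gains an int argument. Judging from the call sites in this series, it appears to say whether the function should die on failure itself (callers passing 1 drop their own error check) or return a negative fd and leave the decision to the caller (callers passing 0). A self-contained sketch of that "die on error" flag pattern, with hypothetical names standing in for git's lockfile internals:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for git's lockfile API, illustration only. */
    static int acquire_lock(const char *path, int die_on_error)
    {
        char lockpath[4096];
        int fd;

        snprintf(lockpath, sizeof(lockpath), "%s.lock", path);
        fd = open(lockpath, O_RDWR | O_CREAT | O_EXCL, 0666);
        if (fd < 0 && die_on_error) {
            fprintf(stderr, "fatal: unable to create '%s'\n", lockpath);
            exit(128);
        }
        return fd;      /* may be negative when die_on_error == 0 */
    }

    int main(void)
    {
        int fd = acquire_lock("index", 0);  /* caller handles failure */
        if (fd < 0)
            fprintf(stderr, "could not lock index, continuing\n");
        return 0;
    }
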
+++ /dev/null
-/*
- * Check-out files from the "current cache directory"
- *
- * Copyright (C) 2005 Linus Torvalds
- *
- * Careful: order of argument flags does matter. For example,
- *
- * git-checkout-index -a -f file.c
- *
- * Will first check out all files listed in the cache (but not
- * overwrite any old ones), and then force-checkout "file.c" a
- * second time (ie that one _will_ overwrite any old contents
- * with the same filename).
- *
- * Also, just doing "git-checkout-index" does nothing. You probably
- * meant "git-checkout-index -a". And if you want to force it, you
- * want "git-checkout-index -f -a".
- *
- * Intuitiveness is not the goal here. Repeatability is. The
- * reason for the "no arguments means no work" thing is that
- * from scripts you are supposed to be able to do things like
- *
- * find . -name '*.h' -print0 | xargs -0 git-checkout-index -f --
- *
- * or:
- *
- * find . -name '*.h' -print0 | git-checkout-index -f -z --stdin
- *
- * which will force all existing *.h files to be replaced with
- * their cached copies. If an empty command line implied "all",
- * then this would force-refresh everything in the cache, which
- * was not the point.
- *
- * Oh, and the "--" is just a good idea when you know the rest
- * will be filenames. Just so that you wouldn't have a filename
- * of "-a" causing problems (not possible in the above example,
- * but get used to it in scripting!).
- */
-#include "cache.h"
-#include "strbuf.h"
-#include "quote.h"
-#include "cache-tree.h"
-
-#define CHECKOUT_ALL 4
-static const char *prefix;
-static int prefix_length;
-static int line_termination = '\n';
-static int checkout_stage; /* default to checkout stage0 */
-static int to_tempfile;
-static char topath[4][MAXPATHLEN+1];
-
-static struct checkout state;
-
-static void write_tempfile_record (const char *name)
-{
- int i;
-
- if (CHECKOUT_ALL == checkout_stage) {
- for (i = 1; i < 4; i++) {
- if (i > 1)
- putchar(' ');
- if (topath[i][0])
- fputs(topath[i], stdout);
- else
- putchar('.');
- }
- } else
- fputs(topath[checkout_stage], stdout);
-
- putchar('\t');
- write_name_quoted("", 0, name + prefix_length,
- line_termination, stdout);
- putchar(line_termination);
-
- for (i = 0; i < 4; i++) {
- topath[i][0] = 0;
- }
-}
-
-static int checkout_file(const char *name)
-{
- int namelen = strlen(name);
- int pos = cache_name_pos(name, namelen);
- int has_same_name = 0;
- int did_checkout = 0;
- int errs = 0;
-
- if (pos < 0)
- pos = -pos - 1;
-
- while (pos < active_nr) {
- struct cache_entry *ce = active_cache[pos];
- if (ce_namelen(ce) != namelen ||
- memcmp(ce->name, name, namelen))
- break;
- has_same_name = 1;
- pos++;
- if (ce_stage(ce) != checkout_stage
- && (CHECKOUT_ALL != checkout_stage || !ce_stage(ce)))
- continue;
- did_checkout = 1;
- if (checkout_entry(ce, &state,
- to_tempfile ? topath[ce_stage(ce)] : NULL) < 0)
- errs++;
- }
-
- if (did_checkout) {
- if (to_tempfile)
- write_tempfile_record(name);
- return errs > 0 ? -1 : 0;
- }
-
- if (!state.quiet) {
- fprintf(stderr, "git-checkout-index: %s ", name);
- if (!has_same_name)
- fprintf(stderr, "is not in the cache");
- else if (checkout_stage)
- fprintf(stderr, "does not exist at stage %d",
- checkout_stage);
- else
- fprintf(stderr, "is unmerged");
- fputc('\n', stderr);
- }
- return -1;
-}
-
-static int checkout_all(void)
-{
- int i, errs = 0;
- struct cache_entry* last_ce = NULL;
-
- for (i = 0; i < active_nr ; i++) {
- struct cache_entry *ce = active_cache[i];
- if (ce_stage(ce) != checkout_stage
- && (CHECKOUT_ALL != checkout_stage || !ce_stage(ce)))
- continue;
- if (prefix && *prefix &&
- (ce_namelen(ce) <= prefix_length ||
- memcmp(prefix, ce->name, prefix_length)))
- continue;
- if (last_ce && to_tempfile) {
- if (ce_namelen(last_ce) != ce_namelen(ce)
- || memcmp(last_ce->name, ce->name, ce_namelen(ce)))
- write_tempfile_record(last_ce->name);
- }
- if (checkout_entry(ce, &state,
- to_tempfile ? topath[ce_stage(ce)] : NULL) < 0)
- errs++;
- last_ce = ce;
- }
- if (last_ce && to_tempfile)
- write_tempfile_record(last_ce->name);
- if (errs)
- /* we have already done our error reporting.
- * exit with the same code as die().
- */
- exit(128);
- return 0;
-}
-
-static const char checkout_cache_usage[] =
-"git-checkout-index [-u] [-q] [-a] [-f] [-n] [--stage=[123]|all] [--prefix=<string>] [--temp] [--] <file>...";
-
-static struct lock_file lock_file;
-
-int main(int argc, char **argv)
-{
- int i;
- int newfd = -1;
- int all = 0;
- int read_from_stdin = 0;
-
- state.base_dir = "";
- prefix = setup_git_directory();
- git_config(git_default_config);
- prefix_length = prefix ? strlen(prefix) : 0;
-
- if (read_cache() < 0) {
- die("invalid cache");
- }
-
- for (i = 1; i < argc; i++) {
- const char *arg = argv[i];
-
- if (!strcmp(arg, "--")) {
- i++;
- break;
- }
- if (!strcmp(arg, "-a") || !strcmp(arg, "--all")) {
- all = 1;
- continue;
- }
- if (!strcmp(arg, "-f") || !strcmp(arg, "--force")) {
- state.force = 1;
- continue;
- }
- if (!strcmp(arg, "-q") || !strcmp(arg, "--quiet")) {
- state.quiet = 1;
- continue;
- }
- if (!strcmp(arg, "-n") || !strcmp(arg, "--no-create")) {
- state.not_new = 1;
- continue;
- }
- if (!strcmp(arg, "-u") || !strcmp(arg, "--index")) {
- state.refresh_cache = 1;
- if (newfd < 0)
- newfd = hold_lock_file_for_update
- (&lock_file, get_index_file());
- if (newfd < 0)
- die("cannot open index.lock file.");
- continue;
- }
- if (!strcmp(arg, "-z")) {
- line_termination = 0;
- continue;
- }
- if (!strcmp(arg, "--stdin")) {
- if (i != argc - 1)
- die("--stdin must be at the end");
- read_from_stdin = 1;
- i++; /* do not consider arg as a file name */
- break;
- }
- if (!strcmp(arg, "--temp")) {
- to_tempfile = 1;
- continue;
- }
- if (!strncmp(arg, "--prefix=", 9)) {
- state.base_dir = arg+9;
- state.base_dir_len = strlen(state.base_dir);
- continue;
- }
- if (!strncmp(arg, "--stage=", 8)) {
- if (!strcmp(arg + 8, "all")) {
- to_tempfile = 1;
- checkout_stage = CHECKOUT_ALL;
- } else {
- int ch = arg[8];
- if ('1' <= ch && ch <= '3')
- checkout_stage = arg[8] - '0';
- else
- die("stage should be between 1 and 3 or all");
- }
- continue;
- }
- if (arg[0] == '-')
- usage(checkout_cache_usage);
- break;
- }
-
- if (state.base_dir_len || to_tempfile) {
- /* when --prefix is specified we do not
- * want to update cache.
- */
- if (state.refresh_cache) {
- close(newfd); newfd = -1;
- rollback_lock_file(&lock_file);
- }
- state.refresh_cache = 0;
- }
-
- /* Check out named files first */
- for ( ; i < argc; i++) {
- const char *arg = argv[i];
- const char *p;
-
- if (all)
- die("git-checkout-index: don't mix '--all' and explicit filenames");
- if (read_from_stdin)
- die("git-checkout-index: don't mix '--stdin' and explicit filenames");
- p = prefix_path(prefix, prefix_length, arg);
- checkout_file(p);
- if (p < arg || p > arg + strlen(arg))
- free((char*)p);
- }
-
- if (read_from_stdin) {
- struct strbuf buf;
- if (all)
- die("git-checkout-index: don't mix '--all' and '--stdin'");
- strbuf_init(&buf);
- while (1) {
- char *path_name;
- const char *p;
-
- read_line(&buf, stdin, line_termination);
- if (buf.eof)
- break;
- if (line_termination && buf.buf[0] == '"')
- path_name = unquote_c_style(buf.buf, NULL);
- else
- path_name = buf.buf;
- p = prefix_path(prefix, prefix_length, path_name);
- checkout_file(p);
- if (p < path_name || p > path_name + strlen(path_name))
- free((char *)p);
- if (path_name != buf.buf)
- free(path_name);
- }
- }
-
- if (all)
- checkout_all();
-
- if (0 <= newfd &&
- (write_cache(newfd, active_cache, active_nr) ||
- close(newfd) || commit_lock_file(&lock_file)))
- die("Unable to write new index file");
- return 0;
-}
printf(" -%lu,%lu", l0, l1-l0);
}
-static void dump_sline(struct sline *sline, unsigned long cnt, int num_parent)
+static void dump_sline(struct sline *sline, unsigned long cnt, int num_parent,
+ int use_color)
{
unsigned long mark = (1UL<<num_parent);
int i;
unsigned long lno = 0;
+ const char *c_frag = diff_get_color(use_color, DIFF_FRAGINFO);
+ const char *c_new = diff_get_color(use_color, DIFF_FILE_NEW);
+ const char *c_old = diff_get_color(use_color, DIFF_FILE_OLD);
+ const char *c_plain = diff_get_color(use_color, DIFF_PLAIN);
+ const char *c_reset = diff_get_color(use_color, DIFF_RESET);
if (!cnt)
return; /* result deleted */
rlines = hunk_end - lno;
if (cnt < hunk_end)
rlines--; /* pointing at the last delete hunk */
+ fputs(c_frag, stdout);
for (i = 0; i <= num_parent; i++) putchar(combine_marker);
for (i = 0; i < num_parent; i++)
show_parent_lno(sline, lno, hunk_end, i);
printf(" +%lu,%lu ", lno+1, rlines);
for (i = 0; i <= num_parent; i++) putchar(combine_marker);
- putchar('\n');
+ printf("%s\n", c_reset);
while (lno < hunk_end) {
struct lline *ll;
int j;
sl = &sline[lno++];
ll = sl->lost_head;
while (ll) {
+ fputs(c_old, stdout);
for (j = 0; j < num_parent; j++) {
if (ll->parent_map & (1UL<<j))
putchar('-');
else
putchar(' ');
}
- puts(ll->line);
+ printf("%s%s\n", ll->line, c_reset);
ll = ll->next;
}
if (cnt < lno)
break;
p_mask = 1;
+ if (!(sl->flag & (mark-1)))
+ fputs(c_plain, stdout);
+ else
+ fputs(c_new, stdout);
for (j = 0; j < num_parent; j++) {
if (p_mask & sl->flag)
putchar('+');
putchar(' ');
p_mask <<= 1;
}
- printf("%.*s\n", sl->len, sl->bol);
+ printf("%.*s%s\n", sl->len, sl->bol, c_reset);
}
}
}
sline->p_lno[i] = sline->p_lno[j];
}
-static void dump_quoted_path(const char *prefix, const char *path)
+static void dump_quoted_path(const char *prefix, const char *path,
+ const char *c_meta, const char *c_reset)
{
- fputs(prefix, stdout);
+ printf("%s%s", c_meta, prefix);
if (quote_c_style(path, NULL, NULL, 0))
quote_c_style(path, NULL, stdout, 0);
else
printf("%s", path);
- putchar('\n');
+ printf("%s\n", c_reset);
}
static int show_patch_diff(struct combine_diff_path *elem, int num_parent,
if (show_hunks || mode_differs || working_tree_file) {
const char *abb;
+ int use_color = opt->color_diff;
+ const char *c_meta = diff_get_color(use_color, DIFF_METAINFO);
+ const char *c_reset = diff_get_color(use_color, DIFF_RESET);
if (rev->loginfo)
show_log(rev, opt->msg_sep);
- dump_quoted_path(dense ? "diff --cc " : "diff --combined ", elem->path);
- printf("index ");
+ dump_quoted_path(dense ? "diff --cc " : "diff --combined ",
+ elem->path, c_meta, c_reset);
+ printf("%sindex ", c_meta);
for (i = 0; i < num_parent; i++) {
abb = find_unique_abbrev(elem->parent[i].sha1,
abbrev);
printf("%s%s", i ? "," : "", abb);
}
abb = find_unique_abbrev(elem->sha1, abbrev);
- printf("..%s\n", abb);
+ printf("..%s%s\n", abb, c_reset);
if (mode_differs) {
int added = !!elem->mode;
DIFF_STATUS_ADDED)
added = 0;
if (added)
- printf("new file mode %06o", elem->mode);
+ printf("%snew file mode %06o",
+ c_meta, elem->mode);
else {
if (!elem->mode)
- printf("deleted file ");
+ printf("%sdeleted file ", c_meta);
printf("mode ");
for (i = 0; i < num_parent; i++) {
printf("%s%06o", i ? "," : "",
if (elem->mode)
printf("..%06o", elem->mode);
}
- putchar('\n');
+ printf("%s\n", c_reset);
}
- dump_quoted_path("--- a/", elem->path);
- dump_quoted_path("+++ b/", elem->path);
- dump_sline(sline, cnt, num_parent);
+ dump_quoted_path("--- a/", elem->path, c_meta, c_reset);
+ dump_quoted_path("+++ b/", elem->path, c_meta, c_reset);
+ dump_sline(sline, cnt, num_parent, opt->color_diff);
}
free(result);
return 0;
}
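
The combined-diff hunks above bracket each output line with a color escape chosen by diff_get_color() and a reset, and the escapes collapse to empty strings when color is off. A tiny stand-alone imitation of that idiom; the lookup table and enum here are mine, not git's:

    #include <stdio.h>

    enum color_id { C_RESET, C_OLD, C_NEW, C_FRAG };

    /* Return an ANSI escape, or "" when color is disabled. */
    static const char *get_color(int use_color, enum color_id id)
    {
        static const char *codes[] = {
            "\033[m", "\033[31m", "\033[32m", "\033[36m",
        };
        return use_color ? codes[id] : "";
    }

    int main(void)
    {
        int use_color = 1;
        const char *c_old = get_color(use_color, C_OLD);
        const char *c_new = get_color(use_color, C_NEW);
        const char *reset = get_color(use_color, C_RESET);

        /* Same shape as the patched printf calls: color, payload, reset. */
        printf("%s-removed line%s\n", c_old, reset);
        printf("%s+added line%s\n", c_new, reset);
        return 0;
    }
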
+ if (!strcmp(var, "pager.color")) {
+ pager_use_color = git_config_bool(var, value);
+ return 0;
+ }
+
/* Add other config variables here and to Documentation/config.txt. */
return 0;
}
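
The config hunk above adds a "pager.color" key whose value is run through git_config_bool() and stored in the new pager_use_color global, which the diff code then consults together with pager_in_use. A self-contained sketch of that kind of boolean config hook; parse_bool here is only a stand-in for git_config_bool(), which accepts a wider range of spellings:

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    static int pager_use_color = 1;     /* default: color when paging */

    /* Stand-in for git_config_bool(); illustration only. */
    static int parse_bool(const char *value)
    {
        if (!value || !*value)
            return 1;
        if (!strcasecmp(value, "true") || !strcmp(value, "1"))
            return 1;
        return 0;
    }

    static int config_cb(const char *var, const char *value)
    {
        if (!strcmp(var, "pager.color")) {
            pager_use_color = parse_bool(value);
            return 0;
        }
        return 0;       /* other keys are ignored in this demo */
    }

    int main(void)
    {
        config_cb("pager.color", "false");
        printf("pager_use_color = %d\n", pager_use_color);
        return 0;
    }
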
export exec_prefix mandir
export srcdir VPATH
+NO_PYTHON=@NO_PYTHON@
+NEEDS_SSL_WITH_CRYPTO=@NEEDS_SSL_WITH_CRYPTO@
+NO_OPENSSL=@NO_OPENSSL@
+NO_CURL=@NO_CURL@
+NO_EXPAT=@NO_EXPAT@
+NEEDS_LIBICONV=@NEEDS_LIBICONV@
+NEEDS_SOCKET=@NEEDS_SOCKET@
+NO_D_INO_IN_DIRENT=@NO_D_INO_IN_DIRENT@
+NO_D_TYPE_IN_DIRENT=@NO_D_TYPE_IN_DIRENT@
+NO_SOCKADDR_STORAGE=@NO_SOCKADDR_STORAGE@
+NO_IPV6=@NO_IPV6@
+NO_C99_FORMAT=@NO_C99_FORMAT@
+NO_STRCASESTR=@NO_STRCASESTR@
+NO_STRLCPY=@NO_STRLCPY@
+NO_SETENV=@NO_SETENV@
+
# Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
-AC_INIT([git], [1.4.1], [git@vger.kernel.org])
+AC_INIT([git], [@@GIT_VERSION@@], [git@vger.kernel.org])
AC_CONFIG_SRCDIR([git.c])
# Append LINE to file ${config_append}
AC_DEFUN([GIT_CONF_APPEND_LINE],
[echo "$1" >> "${config_append}"])# GIT_CONF_APPEND_LINE
+#
+# GIT_ARG_SET_PATH(PROGRAM)
+# -------------------------
+# Provide --with-PROGRAM=PATH option to set PATH to PROGRAM
+AC_DEFUN([GIT_ARG_SET_PATH],
+[AC_ARG_WITH([$1],
+ [AS_HELP_STRING([--with-$1=PATH],
+ [provide PATH to $1])],
+ [GIT_CONF_APPEND_PATH($1)],[])
+])# GIT_ARG_SET_PATH
+#
+# GIT_CONF_APPEND_PATH(PROGRAM)
+# ------------------------------
+# Parse --with-PROGRAM=PATH option to set PROGRAM_PATH=PATH
+# Used by GIT_ARG_SET_PATH(PROGRAM)
+AC_DEFUN([GIT_CONF_APPEND_PATH],
+[PROGRAM=m4_toupper($1); \
+if test "$withval" = "no"; then \
+ AC_MSG_ERROR([You cannot use git without $1]); \
+else \
+ if test "$withval" = "yes"; then \
+ AC_MSG_WARN([You should provide the path for --with-$1=PATH]); \
+ else \
+ GIT_CONF_APPEND_LINE(${PROGRAM}_PATH=$withval); \
+ fi; \
+fi; \
+]) # GIT_CONF_APPEND_PATH
+#
+# GIT_PARSE_WITH(PACKAGE)
+# -----------------------
+# For use in AC_ARG_WITH action-if-found, for packages that default to ON.
+# * Set NO_PACKAGE=YesPlease for --without-PACKAGE
+# * Set PACKAGEDIR=PATH for --with-PACKAGE=PATH
+# * Unset NO_PACKAGE for --with-PACKAGE without ARG
+AC_DEFUN([GIT_PARSE_WITH],
+[PACKAGE=m4_toupper($1); \
+if test "$withval" = "no"; then \
+ m4_toupper(NO_$1)=YesPlease; \
+elif test "$withval" = "yes"; then \
+ m4_toupper(NO_$1)=; \
+else \
+ m4_toupper(NO_$1)=; \
+ GIT_CONF_APPEND_LINE(${PACKAGE}DIR=$withval); \
+fi \
+])# GIT_PARSE_WITH
+
+
+## Site configuration related to programs (before tests)
+## --with-PACKAGE[=ARG] and --without-PACKAGE
+#
+# Define SHELL_PATH to provide path to shell.
+GIT_ARG_SET_PATH(shell)
+#
+# Define PERL_PATH to provide path to Perl.
+GIT_ARG_SET_PATH(perl)
+#
+# Define NO_PYTHON if you want to lose all benefits of the recursive merge.
+# Define PYTHON_PATH to provide path to Python.
+AC_ARG_WITH(python,[AS_HELP_STRING([--with-python=PATH], [provide PATH to python])
+AS_HELP_STRING([--without-python], [don't use python scripts])],
+ [if test "$withval" = "no"; then \
+ NO_PYTHON=YesPlease; \
+ elif test "$withval" = "yes"; then \
+ NO_PYTHON=; \
+ else \
+ NO_PYTHON=; \
+ PYTHON_PATH=$withval; \
+ fi; \
+ ])
+AC_SUBST(NO_PYTHON)
+AC_SUBST(PYTHON_PATH)
## Checks for programs.
AC_CHECK_PROGS(TAR, [gtar tar])
#
# Define NO_PYTHON if you want to lose all benefits of the recursive merge.
+# Define PYTHON_PATH to provide path to Python.
+if test -z "$NO_PYTHON"; then
+ if test -z "$PYTHON_PATH"; then
+ AC_PATH_PROGS(PYTHON_PATH, [python python2.4 python2.3 python2])
+ fi
+ if test -n "$PYTHON_PATH"; then
+ GIT_CONF_APPEND_LINE([PYTHON_PATH=@PYTHON_PATH@])
+ NO_PYTHON=""
+ fi
+fi
## Checks for libraries.
#
# Define NO_OPENSSL environment variable if you do not have OpenSSL.
# Define NEEDS_SSL_WITH_CRYPTO if you need -lcrypto with -lssl (Darwin).
-AC_CHECK_LIB([ssl], [SHA1_Init],[],
-[AC_CHECK_LIB([crypto], [SHA1_INIT],
- [GIT_CONF_APPEND_LINE(NEEDS_SSL_WITH_CRYPTO=YesPlease)],
- [GIT_CONF_APPEND_LINE(NO_OPENSSL=YesPlease)])])
+AC_CHECK_LIB([crypto], [SHA1_Init],
+[NEEDS_SSL_WITH_CRYPTO=],
+[AC_CHECK_LIB([ssl], [SHA1_Init],
+ [NEEDS_SSL_WITH_CRYPTO=YesPlease],

+ [NO_OPENSSL=YesPlease])])
+AC_SUBST(NEEDS_SSL_WITH_CRYPTO)
+AC_SUBST(NO_OPENSSL)
#
# Define NO_CURL if you do not have curl installed. git-http-pull and
# git-http-push are not built, and you cannot use http:// and https://
# transports.
-AC_CHECK_LIB([curl], [curl_global_init],[],
-[GIT_CONF_APPEND_LINE(NO_CURL=YesPlease)])
+AC_CHECK_LIB([curl], [curl_global_init],
+[NO_CURL=],
+[NO_CURL=YesPlease])
+AC_SUBST(NO_CURL)
#
# Define NO_EXPAT if you do not have expat installed. git-http-push is
# not built, and you cannot push using http:// and https:// transports.
-AC_CHECK_LIB([expat], [XML_ParserCreate],[],
-[GIT_CONF_APPEND_LINE(NO_EXPAT=YesPlease)])
+AC_CHECK_LIB([expat], [XML_ParserCreate],
+[NO_EXPAT=],
+[NO_EXPAT=YesPlease])
+AC_SUBST(NO_EXPAT)
#
# Define NEEDS_LIBICONV if linking with libc is not enough (Darwin).
-AC_CHECK_LIB([c], [iconv],[],
-[AC_CHECK_LIB([iconv],[iconv],
- [GIT_CONF_APPEND_LINE(NEEDS_LIBICONV=YesPlease)],[])])
+AC_CHECK_LIB([c], [iconv],
+[NEEDS_LIBICONV=],
+[NEEDS_LIBICONV=YesPlease])
+AC_SUBST(NEEDS_LIBICONV)
#
# Define NEEDS_SOCKET if linking with libc is not enough (SunOS,
# Patrick Mauritz).
-AC_CHECK_LIB([c], [socket],[],
-[AC_CHECK_LIB([socket],[socket],
- [GIT_CONF_APPEND_LINE(NEEDS_SOCKET=YesPlease)],[])])
+AC_CHECK_LIB([c], [socket],
+[NEEDS_SOCKET=],
+[NEEDS_SOCKET=YesPlease])
+AC_SUBST(NEEDS_SOCKET)
## Checks for header files.
AC_MSG_NOTICE([CHECKS for typedefs, structures, and compiler characteristics])
#
# Define NO_D_INO_IN_DIRENT if you don't have d_ino in your struct dirent.
-AC_CHECK_MEMBER(struct dirent.d_ino,[],
-[GIT_CONF_APPEND_LINE(NO_D_INO_IN_DIRENT=YesPlease)],
+AC_CHECK_MEMBER(struct dirent.d_ino,
+[NO_D_INO_IN_DIRENT=],
+[NO_D_INO_IN_DIRENT=YesPlease],
[#include <dirent.h>])
+AC_SUBST(NO_D_INO_IN_DIRENT)
#
# Define NO_D_TYPE_IN_DIRENT if your platform defines DT_UNKNOWN but lacks
# d_type in struct dirent (latest Cygwin -- will be fixed soonish).
-AC_CHECK_MEMBER(struct dirent.d_type,[],
-[GIT_CONF_APPEND_LINE(NO_D_TYPE_IN_DIRENT=YesPlease)],
+AC_CHECK_MEMBER(struct dirent.d_type,
+[NO_D_TYPE_IN_DIRENT=],
+[NO_D_TYPE_IN_DIRENT=YesPlease],
[#include <dirent.h>])
+AC_SUBST(NO_D_TYPE_IN_DIRENT)
#
# Define NO_SOCKADDR_STORAGE if your platform does not have struct
# sockaddr_storage.
-AC_CHECK_TYPE(struct sockaddr_storage,[],
-[GIT_CONF_APPEND_LINE(NO_SOCKADDR_STORAGE=YesPlease)],
+AC_CHECK_TYPE(struct sockaddr_storage,
+[NO_SOCKADDR_STORAGE=],
+[NO_SOCKADDR_STORAGE=YesPlease],
[#include <netinet/in.h>])
+AC_SUBST(NO_SOCKADDR_STORAGE)
+#
+# Define NO_IPV6 if you lack IPv6 support and getaddrinfo().
+AC_CHECK_TYPE([struct addrinfo],[
+ AC_CHECK_FUNC([getaddrinfo],
+ [NO_IPV6=],
+ [NO_IPV6=YesPlease])
+],[NO_IPV6=YesPlease],[
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netdb.h>
+])
+AC_SUBST(NO_IPV6)
+#
+# Define NO_C99_FORMAT if your formatted IO functions (printf/scanf et al.)
+# do not support the 'size specifiers' introduced by C99, namely ll, hh,
+# j, z, t. (representing long long int, char, intmax_t, size_t, ptrdiff_t).
+# Some C compilers supported these specifiers prior to C99 as an extension.
+AC_CACHE_CHECK(whether formatted IO functions support C99 size specifiers,
+ ac_cv_c_c99_format,
+[# Actually git uses only %z (%zu) in alloc.c, and %t (%td) in mktag.c
+AC_RUN_IFELSE(
+ [AC_LANG_PROGRAM([AC_INCLUDES_DEFAULT],
+ [[char buf[64];
+ if (sprintf(buf, "%lld%hhd%jd%zd%td", (long long int)1, (char)2, (intmax_t)3, (size_t)4, (ptrdiff_t)5) != 5)
+ exit(1);
+ else if (strcmp(buf, "12345"))
+ exit(2);]])],
+ [ac_cv_c_c99_format=yes],
+ [ac_cv_c_c99_format=no])
+])
+if test $ac_cv_c_c99_format = no; then
+ NO_C99_FORMAT=YesPlease
+else
+ NO_C99_FORMAT=
+fi
+AC_SUBST(NO_C99_FORMAT)
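
The configure probe above boils down to a program like the following; a libc that handles the C99 'z' and 't' length modifiers prints "4 5" here, which is what the %zu in alloc.c and the %td in mktag.c rely on (stand-alone illustration, not part of the patch):

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        size_t n = 4;
        ptrdiff_t d = 5;

        /* C99 length modifiers: z for size_t, t for ptrdiff_t. */
        printf("%zu %td\n", n, d);
        return 0;
    }
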
## Checks for library functions.
AC_MSG_NOTICE([CHECKS for library functions])
#
# Define NO_STRCASESTR if you don't have strcasestr.
-AC_CHECK_FUNC(strcasestr,[],
-[GIT_CONF_APPEND_LINE(NO_STRCASESTR=YesPlease)])
+AC_CHECK_FUNC(strcasestr,
+[NO_STRCASESTR=],
+[NO_STRCASESTR=YesPlease])
+AC_SUBST(NO_STRCASESTR)
#
# Define NO_STRLCPY if you don't have strlcpy.
-AC_CHECK_FUNC(strlcpy,[],
-[GIT_CONF_APPEND_LINE(NO_STRLCPY=YesPlease)])
+AC_CHECK_FUNC(strlcpy,
+[NO_STRLCPY=],
+[NO_STRLCPY=YesPlease])
+AC_SUBST(NO_STRLCPY)
#
# Define NO_SETENV if you don't have setenv in the C library.
-AC_CHECK_FUNC(setenv,[],
-[GIT_CONF_APPEND_LINE(NO_SETENV=YesPlease)])
+AC_CHECK_FUNC(setenv,
+[NO_SETENV=],
+[NO_SETENV=YesPlease])
+AC_SUBST(NO_SETENV)
#
# Define NO_MMAP if you want to avoid mmap.
#
-# Define NO_IPV6 if you lack IPv6 support and getaddrinfo().
-#
# Define NO_ICONV if your libc does not properly support iconv.
# a missing newline at the end of the file.
-## Site configuration
+## Site configuration (override autodetection)
## --with-PACKAGE[=ARG] and --without-PACKAGE
-# Define NO_SVN_TESTS if you want to skip time-consuming SVN interopability
+AC_MSG_NOTICE([CHECKS for site configuration])
+#
+# Define NO_SVN_TESTS if you want to skip time-consuming SVN interoperability
# tests. These tests take up a significant amount of the total test time
# but are not needed unless you plan to talk to SVN repos.
#
# Define NO_OPENSSL environment variable if you do not have OpenSSL.
# This also implies MOZILLA_SHA1.
#
+# Define OPENSSLDIR=/foo/bar if your openssl header and library files are in
+# /foo/bar/include and /foo/bar/lib directories.
+AC_ARG_WITH(openssl,
+AS_HELP_STRING([--with-openssl],[use OpenSSL library (default is YES)])
+AS_HELP_STRING([], [ARG can be a prefix for the openssl library and headers]),\
+GIT_PARSE_WITH(openssl))
+#
# Define NO_CURL if you do not have curl installed. git-http-pull and
# git-http-push are not built, and you cannot use http:// and https://
# transports.
#
# Define CURLDIR=/foo/bar if your curl header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
+AC_ARG_WITH(curl,
+AS_HELP_STRING([--with-curl],[support http(s):// transports (default is YES)])
+AS_HELP_STRING([], [ARG can also be a prefix for the curl library and headers]),
+GIT_PARSE_WITH(curl))
#
# Define NO_EXPAT if you do not have expat installed. git-http-push is
# not built, and you cannot push using http:// and https:// transports.
#
-# Define NO_MMAP if you want to avoid mmap.
+# Define EXPATDIR=/foo/bar if your expat header and library files are in
+# /foo/bar/include and /foo/bar/lib directories.
+AC_ARG_WITH(expat,
+AS_HELP_STRING([--with-expat],
+[support git-push using http:// and https:// transports via WebDAV (default is YES)])
+AS_HELP_STRING([], [ARG can also be a prefix for the expat library and headers]),
+GIT_PARSE_WITH(expat))
+#
+# Define NO_FINK if you are building on Darwin/Mac OS X, have Fink
+# installed in /sw, but don't want GIT to link against any libraries
+# installed there. If defined you may specify your own (or Fink's)
+# include directories and library directories by defining CFLAGS
+# and LDFLAGS appropriately.
#
-# Define NO_PYTHON if you want to loose all benefits of the recursive merge.
+# Define NO_DARWIN_PORTS if you are building on Darwin/Mac OS X,
+# have DarwinPorts installed in /opt/local, but don't want GIT to
+# link against any libraries installed there. If defined you may
+# specify your own (or DarwinPorts') include directories and
+# library directories by defining CFLAGS and LDFLAGS appropriately.
#
+# Define NO_MMAP if you want to avoid mmap.
+
## --enable-FEATURE[=ARG] and --disable-FEATURE
+#
# Define COLLISION_CHECK below if you believe that SHA1's
# 1461501637330902918203684832716283019655932542976 hashes do not give you
# sufficient guarantee that no collisions between objects will ever happen.
#define _XOPEN_SOURCE 500 /* glibc2 and AIX 5.3L need this */
#define _XOPEN_SOURCE_EXTENDED 1 /* AIX 5.3L needs this */
+#define _GNU_SOURCE
#include <time.h>
#include "cache.h"
#include "blob.h"
diff_use_color_default = 1; /* bool */
else if (!strcasecmp(value, "auto")) {
diff_use_color_default = 0;
- if (isatty(1) || pager_in_use) {
+ if (isatty(1) || (pager_in_use && pager_use_color)) {
char *term = getenv("TERM");
if (term && strcmp(term, "dumb"))
diff_use_color_default = 1;
int diff_setup_done(struct diff_options *options)
{
- if ((options->find_copies_harder &&
- options->detect_rename != DIFF_DETECT_COPY) ||
- (0 <= options->rename_limit && !options->detect_rename))
- return -1;
+ if (options->find_copies_harder)
+ options->detect_rename = DIFF_DETECT_COPY;
if (options->output_format & (DIFF_FORMAT_NAME |
DIFF_FORMAT_NAME_STATUS |
struct diff_filespec *one,
struct diff_filespec *two)
{
- struct diff_filepair *dp = xmalloc(sizeof(*dp));
+ struct diff_filepair *dp = xcalloc(1, sizeof(*dp));
dp->one = one;
dp->two = two;
- dp->score = 0;
- dp->status = 0;
- dp->source_stays = 0;
- dp->broken_pair = 0;
if (queue)
diff_q(queue, dp);
return dp;
fill_filespec(two, dst->sha1, dst->mode);
dp = diff_queue(NULL, one, two);
+ dp->renamed_pair = 1;
if (!strcmp(src->path, dst->path))
dp->score = rename_src[src_index].score;
else
char status; /* M C R N D U (see Documentation/diff-format.txt) */
unsigned source_stays : 1; /* all of R/C are copies */
unsigned broken_pair : 1;
+ unsigned renamed_pair : 1;
};
#define DIFF_PAIR_UNMERGED(p) \
(!DIFF_FILE_VALID((p)->one) && !DIFF_FILE_VALID((p)->two))
-#define DIFF_PAIR_RENAME(p) (strcmp((p)->one->path, (p)->two->path))
+#define DIFF_PAIR_RENAME(p) ((p)->renamed_pair)
#define DIFF_PAIR_BROKEN(p) \
( (!DIFF_FILE_VALID((p)->one) != !DIFF_FILE_VALID((p)->two)) && \
const char *apply_default_whitespace = NULL;
int zlib_compression_level = Z_DEFAULT_COMPRESSION;
int pager_in_use;
+int pager_use_color = 1;
static char *git_dir, *git_object_dir, *git_index_file, *git_refs_dir,
*git_graft_file;
len--;
switch (buf[0] & 0xFF) {
case 3:
+ safe_write(2, "remote: ", 8);
safe_write(2, buf+1, len);
- fprintf(stderr, "\n");
+ safe_write(2, "\n", 1);
exit(1);
case 2:
+ safe_write(2, "remote: ", 8);
safe_write(2, buf+1, len);
continue;
case 1:
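
The hunk above prefixes messages arriving on the error and progress bands with "remote: " before echoing them to stderr. In git's pack protocol the first byte of each sideband packet selects the stream: band 1 carries pack data, band 2 progress text, and band 3 a fatal error. A compact stand-alone sketch of that demultiplexing, with plain fwrite standing in for safe_write:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustration only: dispatch one demultiplexed sideband packet. */
    static void handle_band(int band, const char *buf, size_t len)
    {
        switch (band) {
        case 3:                         /* fatal error from the remote */
            fwrite("remote: ", 1, 8, stderr);
            fwrite(buf, 1, len, stderr);
            fputc('\n', stderr);
            exit(1);
        case 2:                         /* progress messages */
            fwrite("remote: ", 1, 8, stderr);
            fwrite(buf, 1, len, stderr);
            break;
        case 1:                         /* pack data: pass through */
            fwrite(buf, 1, len, stdout);
            break;
        default:
            fprintf(stderr, "unknown sideband %d\n", band);
        }
    }

    int main(void)
    {
        handle_band(2, "Counting objects: 42", 20);
        return 0;
    }
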
this=$next
}
+cannot_fallback () {
+ echo "$1"
+ echo "Cannot fall back to three-way merge."
+ exit 1
+}
+
fall_back_3way () {
O_OBJECT=`cd "$GIT_OBJECT_DIRECTORY" && pwd`
mkdir "$dotest/patch-merge-tmp-dir"
# First see if the patch records the index info that we can use.
- if git-apply -z --index-info "$dotest/patch" \
- >"$dotest/patch-merge-index-info" 2>/dev/null &&
- GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
- git-update-index -z --index-info <"$dotest/patch-merge-index-info" &&
- GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
- git-write-tree >"$dotest/patch-merge-base+" &&
- # index has the base tree now.
- GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
+ git-apply -z --index-info "$dotest/patch" \
+ >"$dotest/patch-merge-index-info" &&
+ GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
+ git-update-index -z --index-info <"$dotest/patch-merge-index-info" &&
+ GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
+ git-write-tree >"$dotest/patch-merge-base+" ||
+ cannot_fallback "Patch does not record usable index information."
+
+ echo Using index info to reconstruct a base tree...
+ if GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
git-apply $binary --cached <"$dotest/patch"
then
- echo Using index info to reconstruct a base tree...
mv "$dotest/patch-merge-base+" "$dotest/patch-merge-base"
mv "$dotest/patch-merge-tmp-index" "$dotest/patch-merge-index"
+ else
+ cannot_fallback "Did you hand edit your patch?
+It does not apply to blobs recorded in its index."
fi
test -f "$dotest/patch-merge-index" &&
sub handle_rev {
- my $i = 0;
+ my $revseen = 0;
my %seen;
while (my $rev = shift @revqueue) {
next if $seen{$rev}++;
return $parent;
}
+sub git_find_all_parents {
+ my ($rev) = @_;
+
+ my $revparent = open_pipe("git-rev-list","--remove-empty", "--parents","--max-count=1","$rev")
+ or die "Failed to open git-rev-list to find a single parent: $!";
+
+ my $parentline = <$revparent>;
+ chomp $parentline;
+ my ($origrev, @parents) = split m/\s+/, $parentline;
+
+ close($revparent);
+
+ return @parents;
+}
+
+sub git_merge_base {
+ my ($rev1, $rev2) = @_;
+
+ my $mb = open_pipe("git-merge-base", $rev1, $rev2)
+ or die "Failed to open git-merge-base: $!";
+
+ my $base = <$mb>;
+ chomp $base;
+
+ close($mb);
+
+ return $base;
+}
+
+# Construct a set of pseudo parents that are in the same order,
+# and the same quantity as the real parents,
+# but whose SHA1s are as similar to the logical parents
+# as possible.
+sub get_pseudo_parents {
+ my ($all, $fake) = @_;
+
+ my @all = @$all;
+ my @fake = @$fake;
+
+ my @pseudo;
+
+ my %fake = map {$_ => 1} @fake;
+ my %seenfake;
+
+ my $fakeidx = 0;
+ foreach my $p (@all) {
+ if (exists $fake{$p}) {
+ if ($fake[$fakeidx] ne $p) {
+ die sprintf("parent mismatch: %s != %s\nall:%s\nfake:%s\n",
+ $fake[$fakeidx], $p,
+ join(", ", @all),
+ join(", ", @fake),
+ );
+ }
+
+ push @pseudo, $p;
+ $fakeidx++;
+ $seenfake{$p}++;
+
+ } else {
+ my $base = git_merge_base($fake[$fakeidx], $p);
+ if ($base ne $fake[$fakeidx]) {
+ die sprintf("Result of merge-base doesn't match fake: %s,%s != %s\n",
+ $fake[$fakeidx], $p, $base);
+ }
+
+ # The details of how we parse the diffs
+ # mean that we cannot have a duplicate
+ # revision in the list, so if we've already
+ # seen the revision we would normally add, just use
+ # the actual revision.
+ if ($seenfake{$base}) {
+ push @pseudo, $p;
+ } else {
+ push @pseudo, $base;
+ $seenfake{$base}++;
+ }
+ }
+ }
+
+ return @pseudo;
+}
+
# Get a diff between the current revision and a parent.
# Record the commit information that results.
sub git_diff_parse {
my ($parents, $rev, %revinfo) = @_;
+ my @pseudo_parents;
+ my @command = ("git-diff-tree");
+ my $revision_spec;
+
+ if (scalar @$parents == 1) {
+
+ $revision_spec = join("..", $parents->[0], $rev);
+ @pseudo_parents = @$parents;
+ } else {
+ my @all_parents = git_find_all_parents($rev);
+
+ if (@all_parents != @$parents) {
+ @pseudo_parents = get_pseudo_parents(\@all_parents, $parents);
+ } else {
+ @pseudo_parents = @$parents;
+ }
+
+ $revision_spec = $rev;
+ push @command, "-c";
+ }
+
my @filenames = ( $revs{$rev}{'filename'} );
+
foreach my $parent (@$parents) {
push @filenames, $revs{$parent}{'filename'};
}
- my $diff = open_pipe("git-diff-tree","-M","-p","-c",$rev,"--",
- @filenames )
+ push @command, "-p", "-M", $revision_spec, "--", @filenames;
+
+
+ my $diff = open_pipe( @command )
or die "Failed to call git-diff for annotation: $!";
- _git_diff_parse($diff, $parents, $rev, %revinfo);
+ _git_diff_parse($diff, \@pseudo_parents, $rev, %revinfo);
close($diff);
}
$diff_header_regexp .= "@" x @$parents;
$diff_header_regexp .= ' -\d+,\d+' x @$parents;
$diff_header_regexp .= ' \+(\d+),\d+';
+ $diff_header_regexp .= " " . ("@" x @$parents);
my %claim_regexps;
my $allparentplus = '^' . '\\+' x @$parents . '(.*)$';
DIFF:
while(<$diff>) {
chomp;
+ #printf("%d:%s:\n", $gotheader, $_);
if (m/$diff_header_regexp/) {
$remstart = $1 - 1;
# (0-based arrays)
$gotheader = 1;
- printf("Copying from %d to %d\n", $ri, $remstart);
foreach my $parent (@$parents) {
for (my $i = $ri; $i < $remstart; $i++) {
$plines{$parent}[$pi{$parent}++] = $slines->[$i];
printf("parent %s is on line %d\n", $parent, $pi{$parent});
}
+ my @context;
+ for (my $i = -2; $i < 2; $i++) {
+ push @context, get_line($slines, $ri + $i);
+ }
+ my $context = join("\n", @context);
+
+ my $justline = substr($_, scalar @$parents);
die sprintf("Line %d, does not match:\n|%s|\n|%s|\n%s\n",
$ri,
- substr($_,scalar @$parents),
- get_line($slines,$ri), $rev);
+ $justline,
+ $context, $rev);
}
foreach my $parent (@$parents) {
$plines{$parent}[$pi{$parent}++] = $slines->[$ri];
set x "$arg" "$@"
shift
fi
+ case "$1" in
+ --)
+ shift ;;
+ esac
break
;;
esac
[ -e "$dir" ] && echo "$dir already exists." && usage
mkdir -p "$dir" &&
D=$(cd "$dir" && pwd) &&
-trap 'err=$?; cd ..; rm -r "$D"; exit $err' 0
+trap 'err=$?; cd ..; rm -rf "$D"; exit $err' 0
case "$bare" in
yes)
GIT_DIR="$D" ;;
fi
git-ls-remote "$repo" >"$GIT_DIR/CLONE_HEAD" || exit 1
;;
- http://*)
+ https://*|http://*)
if test -z "@@NO_CURL@@"
then
clone_dumb_http "$repo" "$D"
if test -f "$GIT_DIR/CLONE_HEAD"
then
# Read git-fetch-pack -k output and store the remote branches.
- @@PERL@@ -e "$copy_refs" "$GIT_DIR" "$use_separate_remote" "$origin"
+ @@PERL@@ -e "$copy_refs" "$GIT_DIR" "$use_separate_remote" "$origin" ||
+ exit
fi
cd "$D" || exit
ret = malloc(1);
if (!ret)
die("Out of memory, malloc failed");
+#ifdef XMALLOC_POISON
+ memset(ret, 0xA5, size);
+#endif
return ret;
}
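
XMALLOC_POISON, added above, fills every freshly allocated block with the byte 0xA5, so code that reads memory before initialising it sees an obviously bogus pattern instead of whatever malloc happened to return. A minimal stand-alone illustration of the same trick (not git's allocator, just the idea):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Poisoned allocator in the spirit of XMALLOC_POISON. */
    static void *xmalloc_poisoned(size_t size)
    {
        void *ret = malloc(size);
        if (!ret) {
            fprintf(stderr, "out of memory\n");
            exit(1);
        }
        memset(ret, 0xA5, size);    /* make uninitialised reads obvious */
        return ret;
    }

    int main(void)
    {
        unsigned char *p = xmalloc_poisoned(4);
        printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]); /* a5 x4 */
        free(p);
        return 0;
    }
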
}
}
+static inline int has_extension(const char *filename, const char *ext)
+{
+ size_t len = strlen(filename);
+ size_t extlen = strlen(ext);
+ return len > extlen && !memcmp(filename + len - extlen, ext, extlen);
+}
+
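Because has_extension() requires the name to be strictly longer than the extension, a file whose whole name is the extension (for example ".idx") does not count, and the comparison is done from the end of the string with a single memcmp. A quick stand-alone check of those properties, copying the helper from the hunk above:

    #include <stdio.h>
    #include <string.h>

    static int has_extension(const char *filename, const char *ext)
    {
        size_t len = strlen(filename);
        size_t extlen = strlen(ext);
        return len > extlen && !memcmp(filename + len - extlen, ext, extlen);
    }

    int main(void)
    {
        printf("%d\n", has_extension("pack-1.idx", ".idx"));    /* 1 */
        printf("%d\n", has_extension("pack-1.pack", ".idx"));   /* 0 */
        printf("%d\n", has_extension(".idx", ".idx"));          /* 0 */
        return 0;
    }
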
/* Sane ctype - no locale, and works with signed chars */
#undef isspace
#undef isdigit
+++ /dev/null
-#!/bin/sh
-
-USAGE='[--all] [--tags] [--force] <repository> [<refspec>...]'
-. git-sh-setup
-
-# Parse out parameters and then stop at remote, so that we can
-# translate it using .git/branches information
-has_all=
-has_force=
-has_exec=
-has_thin=--thin
-remote=
-do_tags=
-
-while case "$#" in 0) break ;; esac
-do
- case "$1" in
- --all)
- has_all=--all ;;
- --tags)
- do_tags=yes ;;
- --force)
- has_force=--force ;;
- --exec=*)
- has_exec="$1" ;;
- --thin)
- ;; # noop
- --no-thin)
- has_thin= ;;
- -*)
- usage ;;
- *)
- set x "$@"
- shift
- break ;;
- esac
- shift
-done
-case "$#" in
-0)
- echo "Where would you want to push today?"
- usage ;;
-esac
-
-. git-parse-remote
-remote=$(get_remote_url "$@")
-
-case "$has_all" in
---all)
- set x ;;
-'')
- case "$do_tags,$#" in
- yes,1)
- set x $(cd "$GIT_DIR/refs" && find tags -type f -print) ;;
- yes,*)
- set x $(cd "$GIT_DIR/refs" && find tags -type f -print) \
- $(get_remote_refs_for_push "$@") ;;
- ,*)
- set x $(get_remote_refs_for_push "$@") ;;
- esac
-esac
-
-shift ;# away the initial 'x'
-
-# $# is now 0 if there was no explicit refspec on the command line
-# and there was no default refspec to push from remotes/ file.
-# we will let git-send-pack to do its "matching refs" thing.
-
-case "$remote" in
-git://*)
- die "Cannot use READ-ONLY transport to push to $remote" ;;
-rsync://*)
- die "Pushing with rsync transport is deprecated" ;;
-esac
-
-set x "$remote" "$@"; shift
-test "$has_all" && set x "$has_all" "$@" && shift
-test "$has_force" && set x "$has_force" "$@" && shift
-test "$has_exec" && set x "$has_exec" "$@" && shift
-test "$has_thin" && set x "$has_thin" "$@" && shift
-
-case "$remote" in
-http://* | https://*)
- exec git-http-push "$@";;
-*)
- exec git-send-pack "$@";;
-esac
# Check if we are already based on $onto, but this should be
# done only when upstream and onto are the same.
-if test "$upstream" = "$onto"
+mb=$(git-merge-base "$onto" "$branch")
+if test "$upstream" = "$onto" && test "$mb" = "$onto"
then
- mb=$(git-merge-base "$onto" "$branch")
- if test "$mb" = "$onto"
- then
- echo >&2 "Current branch $branch_name is up to date."
- exit 0
- fi
+ echo >&2 "Current branch $branch_name is up to date."
+ exit 0
fi
# Rewind the head to "$onto"; this saves our current head in ORIG_HEAD.
# If the $onto is a proper descendant of the tip of the branch, then
# we just fast forwarded.
-if test "$mb" = "$onto"
+if test "$mb" = "$branch"
then
- echo >&2 "Fast-forwarded $branch to $newbase."
+ echo >&2 "Fast-forwarded $branch_name to $onto_name."
exit 0
fi
exit
esac
+# Make sure we are in a valid repository of a vintage we understand.
if [ -z "$SUBDIRECTORY_OK" ]
then
: ${GIT_DIR=.git}
- : ${GIT_OBJECT_DIRECTORY="$GIT_DIR/objects"}
-
- # Make sure we are in a valid repository of a vintage we understand.
- GIT_DIR="$GIT_DIR" git repo-config --get core.nosuch >/dev/null
- if test $? = 128
- then
- exit
- fi
+ GIT_DIR=$(GIT_DIR="$GIT_DIR" git-rev-parse --git-dir) || exit
else
GIT_DIR=$(git-rev-parse --git-dir) || exit
fi
+: ${GIT_OBJECT_DIRECTORY="$GIT_DIR/objects"}
use File::Path qw/mkpath/;
use Getopt::Long qw/:config gnu_getopt no_ignore_case auto_abbrev pass_through/;
use File::Spec qw//;
+use File::Copy qw/copy/;
use POSIX qw/strftime/;
use IPC::Open3;
use Memoize;
'copy-similarity|C=i'=> \$_cp_similarity
);
-# yes, 'native' sets "\n". Patches to fix this for non-*nix systems welcome:
-my %EOL = ( CR => "\015", LF => "\012", CRLF => "\015\012", native => "\012" );
-
my %cmd = (
fetch => [ \&fetch, "Download new revisions from SVN",
{ 'revision|r=s' => \$_revision, %fc_opts } ],
}
}
- my ($url, $path) = ($full_url =~ m!^([a-z\+]+://[^/]*)(.*)$!i);
- $path =~ s#^/+##;
- my @paths = split(m#/+#, $path);
-
if ($_use_lib) {
- while (1) {
- $SVN = libsvn_connect($url);
- last if (defined $SVN &&
- defined eval { $SVN->get_latest_revnum });
- my $n = shift @paths || last;
- $url .= "/$n";
- }
+ my $tmp = libsvn_connect($full_url);
+ my $url = $tmp->get_repos_root;
+ $full_url =~ s#^\Q$url\E/*##;
+ push @repo_path_split_cache, qr/^(\Q$url\E)/;
+ return ($url, $full_url);
} else {
+ my ($url, $path) = ($full_url =~ m!^([a-z\+]+://[^/]*)(.*)$!i);
+ $path =~ s#^/+##;
+ my @paths = split(m#/+#, $path);
while (quiet_run(qw/svn ls --non-interactive/, $url)) {
my $n = shift @paths || last;
$url .= "/$n";
}
+ push @repo_path_split_cache, qr/^(\Q$url\E)/;
+ $path = join('/',@paths);
+ return ($url, $path);
}
- push @repo_path_split_cache, qr/^(\Q$url\E)/;
- $path = join('/',@paths);
- return ($url, $path);
}
sub setup_git_svn {
sub sys { system(@_) == 0 or croak $? }
-sub eol_cp {
- my ($from, $to) = @_;
- my $es = svn_propget_base('svn:eol-style', $to);
- open my $rfd, '<', $from or croak $!;
- binmode $rfd or croak $!;
- open my $wfd, '>', $to or croak $!;
- binmode $wfd or croak $!;
- eol_cp_fd($rfd, $wfd, $es);
- close $rfd or croak $!;
- close $wfd or croak $!;
-}
-
-sub eol_cp_fd {
- my ($rfd, $wfd, $es) = @_;
- my $eol = defined $es ? $EOL{$es} : undef;
- my $buf;
- use bytes;
- while (1) {
- my ($r, $w, $t);
- defined($r = sysread($rfd, $buf, 4096)) or croak $!;
- return unless $r;
- if ($eol) {
- if ($buf =~ /\015$/) {
- my $c;
- defined($r = sysread($rfd,$c,1)) or croak $!;
- $buf .= $c if $r > 0;
- }
- $buf =~ s/(?:\015\012|\015|\012)/$eol/gs;
- $r = length($buf);
- }
- for ($w = 0; $w < $r; $w += $t) {
- $t = syswrite($wfd, $buf, $r - $w, $w) or croak $!;
- }
- }
- no bytes;
-}
-
sub do_update_index {
my ($z_cmd, $cmd, $no_text_base) = @_;
'text-base',"$f.svn-base");
$tb =~ s#^/##;
}
+ my @s = stat($x);
unlink $x or croak $!;
- eol_cp($tb, $x);
+ copy($tb, $x);
chmod(($mode &~ umask), $x) or croak $!;
+ utime $s[8], $s[9], $x;
}
print $ui $x,"\0";
}
sub libsvn_get_file {
my ($gui, $f, $rev) = @_;
my $p = $f;
- return unless ($p =~ s#^\Q$SVN_PATH\E/##);
+ if (length $SVN_PATH > 0) {
+ return unless ($p =~ s#^\Q$SVN_PATH\E/##);
+ }
my ($hash, $pid, $in, $out);
my $pool = SVN::Pool->new;
if (defined $_authors && ! defined $users{$author}) {
die "Author: $author not defined in $_authors file\n";
}
+ $msg = '' if ($rev == 0 && !defined $msg);
return { revision => $rev, date => "+0000 $Y-$m-$d $H:$M:$S",
author => $author, msg => $msg."\n", parents => $parents || [] }
}
#include "builtin.h"
+const char git_usage_string[] =
+ "git [--version] [--exec-path[=GIT_EXEC_PATH]] [--help] COMMAND [ ARGS ]";
+
static void prepend_to_path(const char *dir, int len)
{
const char *old_path = getenv("PATH");
setenv("GIT_DIR", getcwd(git_dir, 1024), 1);
} else {
fprintf(stderr, "Unknown option: %s\n", cmd);
- cmd_usage(0, NULL, NULL);
+ usage(git_usage_string);
}
(*argv)++;
const char git_version_string[] = GIT_VERSION;
-#define NEEDS_PREFIX 1
+#define RUN_SETUP (1<<0)
+#define USE_PAGER (1<<1)
static void handle_internal_command(int argc, const char **argv, char **envp)
{
static struct cmd_struct {
const char *cmd;
int (*fn)(int, const char **, const char *);
- int prefix;
+ int option;
} commands[] = {
- { "version", cmd_version },
- { "help", cmd_help },
- { "log", cmd_log, NEEDS_PREFIX },
- { "whatchanged", cmd_whatchanged, NEEDS_PREFIX },
- { "show", cmd_show, NEEDS_PREFIX },
- { "push", cmd_push },
- { "format-patch", cmd_format_patch, NEEDS_PREFIX },
+ { "add", cmd_add, RUN_SETUP },
+ { "apply", cmd_apply },
+ { "cat-file", cmd_cat_file, RUN_SETUP },
+ { "checkout-index", cmd_checkout_index, RUN_SETUP },
+ { "check-ref-format", cmd_check_ref_format },
+ { "commit-tree", cmd_commit_tree, RUN_SETUP },
{ "count-objects", cmd_count_objects },
- { "diff", cmd_diff, NEEDS_PREFIX },
- { "grep", cmd_grep, NEEDS_PREFIX },
- { "rm", cmd_rm, NEEDS_PREFIX },
- { "add", cmd_add, NEEDS_PREFIX },
- { "rev-list", cmd_rev_list, NEEDS_PREFIX },
- { "init-db", cmd_init_db },
+ { "diff", cmd_diff, RUN_SETUP },
+ { "diff-files", cmd_diff_files, RUN_SETUP },
+ { "diff-index", cmd_diff_index, RUN_SETUP },
+ { "diff-stages", cmd_diff_stages, RUN_SETUP },
+ { "diff-tree", cmd_diff_tree, RUN_SETUP },
+ { "fmt-merge-msg", cmd_fmt_merge_msg, RUN_SETUP },
+ { "format-patch", cmd_format_patch, RUN_SETUP },
{ "get-tar-commit-id", cmd_get_tar_commit_id },
- { "upload-tar", cmd_upload_tar },
- { "check-ref-format", cmd_check_ref_format },
- { "ls-files", cmd_ls_files, NEEDS_PREFIX },
- { "ls-tree", cmd_ls_tree, NEEDS_PREFIX },
- { "tar-tree", cmd_tar_tree, NEEDS_PREFIX },
- { "read-tree", cmd_read_tree, NEEDS_PREFIX },
- { "commit-tree", cmd_commit_tree, NEEDS_PREFIX },
- { "apply", cmd_apply },
- { "show-branch", cmd_show_branch, NEEDS_PREFIX },
- { "diff-files", cmd_diff_files, NEEDS_PREFIX },
- { "diff-index", cmd_diff_index, NEEDS_PREFIX },
- { "diff-stages", cmd_diff_stages, NEEDS_PREFIX },
- { "diff-tree", cmd_diff_tree, NEEDS_PREFIX },
- { "cat-file", cmd_cat_file, NEEDS_PREFIX },
- { "rev-parse", cmd_rev_parse, NEEDS_PREFIX },
- { "write-tree", cmd_write_tree, NEEDS_PREFIX },
- { "mailsplit", cmd_mailsplit },
+ { "grep", cmd_grep, RUN_SETUP },
+ { "help", cmd_help },
+ { "init-db", cmd_init_db },
+ { "log", cmd_log, RUN_SETUP | USE_PAGER },
+ { "ls-files", cmd_ls_files, RUN_SETUP },
+ { "ls-tree", cmd_ls_tree, RUN_SETUP },
{ "mailinfo", cmd_mailinfo },
+ { "mailsplit", cmd_mailsplit },
+ { "mv", cmd_mv, RUN_SETUP },
+ { "name-rev", cmd_name_rev, RUN_SETUP },
+ { "pack-objects", cmd_pack_objects, RUN_SETUP },
+ { "prune", cmd_prune, RUN_SETUP },
+ { "prune-packed", cmd_prune_packed, RUN_SETUP },
+ { "push", cmd_push, RUN_SETUP },
+ { "read-tree", cmd_read_tree, RUN_SETUP },
+ { "repo-config", cmd_repo_config },
+ { "rev-list", cmd_rev_list, RUN_SETUP },
+ { "rev-parse", cmd_rev_parse, RUN_SETUP },
+ { "rm", cmd_rm, RUN_SETUP },
+ { "show-branch", cmd_show_branch, RUN_SETUP },
+ { "show", cmd_show, RUN_SETUP | USE_PAGER },
{ "stripspace", cmd_stripspace },
- { "update-index", cmd_update_index, NEEDS_PREFIX },
- { "update-ref", cmd_update_ref, NEEDS_PREFIX },
- { "fmt-merge-msg", cmd_fmt_merge_msg, NEEDS_PREFIX },
- { "prune", cmd_prune, NEEDS_PREFIX },
- { "mv", cmd_mv, NEEDS_PREFIX },
+ { "symbolic-ref", cmd_symbolic_ref, RUN_SETUP },
+ { "tar-tree", cmd_tar_tree, RUN_SETUP },
+ { "unpack-objects", cmd_unpack_objects, RUN_SETUP },
+ { "update-index", cmd_update_index, RUN_SETUP },
+ { "update-ref", cmd_update_ref, RUN_SETUP },
+ { "upload-tar", cmd_upload_tar },
+ { "version", cmd_version },
+ { "whatchanged", cmd_whatchanged, RUN_SETUP | USE_PAGER },
+ { "write-tree", cmd_write_tree, RUN_SETUP },
+ { "verify-pack", cmd_verify_pack },
};
int i;
continue;
prefix = NULL;
- if (p->prefix)
+ if (p->option & RUN_SETUP)
prefix = setup_git_directory();
+ if (p->option & USE_PAGER)
+ setup_pager();
if (getenv("GIT_TRACE")) {
int i;
fprintf(stderr, "trace: built-in: git");
}
if (errno == ENOENT)
- cmd_usage(0, exec_path, "'%s' is not a git-command", cmd);
+ help_unknown_cmd(cmd);
fprintf(stderr, "Failed to run command '%s': %s\n",
cmd, strerror(errno));
proc readrefs {} {
global tagids idtags headids idheads tagcontents
- global otherrefids idotherrefs
+ global otherrefids idotherrefs mainhead
foreach v {tagids idtags headids idheads otherrefids idotherrefs} {
catch {unset $v}
}
}
close $refd
+ set mainhead {}
+ catch {
+ set thehead [exec git symbolic-ref HEAD]
+ if {[string match "refs/heads/*" $thehead]} {
+ set mainhead [string range $thehead 11 end]
+ }
+ }
}
proc show_error {w top msg} {
global rowctxmenu mergemax wrapcomment
global highlight_files gdttype
global searchstring sstring
+ global bgcolor fgcolor bglist fglist diffcolors
menu .bar
.bar add cascade -label "File" -menu .bar.file
.ctop add .ctop.top
set canv .ctop.top.clist.canv
canvas $canv -height $geometry(canvh) -width $geometry(canv1) \
- -bg white -bd 0 \
+ -background $bgcolor -bd 0 \
-yscrollincr $linespc -yscrollcommand "scrollcanv $cscroll"
.ctop.top.clist add $canv
set canv2 .ctop.top.clist.canv2
canvas $canv2 -height $geometry(canvh) -width $geometry(canv2) \
- -bg white -bd 0 -yscrollincr $linespc
+ -background $bgcolor -bd 0 -yscrollincr $linespc
.ctop.top.clist add $canv2
set canv3 .ctop.top.clist.canv3
canvas $canv3 -height $geometry(canvh) -width $geometry(canv3) \
- -bg white -bd 0 -yscrollincr $linespc
+ -background $bgcolor -bd 0 -yscrollincr $linespc
.ctop.top.clist add $canv3
bind .ctop.top.clist <Configure> {resizeclistpanes %W %w}
+ lappend bglist $canv $canv2 $canv3
set sha1entry .ctop.top.bar.sha1
set entries $sha1entry
trace add variable searchstring write incrsearch
pack $sstring -side left -expand 1 -fill x
set ctext .ctop.cdet.left.ctext
- text $ctext -bg white -state disabled -font $textfont \
+ text $ctext -background $bgcolor -foreground $fgcolor \
+ -state disabled -font $textfont \
-width $geometry(ctextw) -height $geometry(ctexth) \
-yscrollcommand scrolltext -wrap none
scrollbar .ctop.cdet.left.sb -command "$ctext yview"
pack .ctop.cdet.left.sb -side right -fill y
pack $ctext -side left -fill both -expand 1
.ctop.cdet add .ctop.cdet.left
+ lappend bglist $ctext
+ lappend fglist $ctext
$ctext tag conf comment -wrap $wrapcomment
$ctext tag conf filesep -font [concat $textfont bold] -back "#aaaaaa"
- $ctext tag conf hunksep -fore blue
- $ctext tag conf d0 -fore red
- $ctext tag conf d1 -fore "#00a000"
+ $ctext tag conf hunksep -fore [lindex $diffcolors 2]
+ $ctext tag conf d0 -fore [lindex $diffcolors 0]
+ $ctext tag conf d1 -fore [lindex $diffcolors 1]
$ctext tag conf m0 -fore red
$ctext tag conf m1 -fore blue
$ctext tag conf m2 -fore green
pack .ctop.cdet.right.mode -side top -fill x
set cflist .ctop.cdet.right.cfiles
set indent [font measure $mainfont "nn"]
- text $cflist -width $geometry(cflistw) -background white -font $mainfont \
+ text $cflist -width $geometry(cflistw) \
+ -background $bgcolor -foreground $fgcolor \
+ -font $mainfont \
-tabs [list $indent [expr {2 * $indent}]] \
-yscrollcommand ".ctop.cdet.right.sb set" \
-cursor [. cget -cursor] \
-spacing1 1 -spacing3 1
+ lappend bglist $cflist
+ lappend fglist $cflist
scrollbar .ctop.cdet.right.sb -command "$cflist yview"
pack .ctop.cdet.right.sb -side right -fill y
pack $cflist -side left -fill both -expand 1
global maxwidth showneartags
global viewname viewfiles viewargs viewperm nextviewnum
global cmitmode wrapcomment
+ global colors bgcolor fgcolor diffcolors
if {$stuffsaved} return
if {![winfo viewable .]} return
puts $f [list set cmitmode $cmitmode]
puts $f [list set wrapcomment $wrapcomment]
puts $f [list set showneartags $showneartags]
+ puts $f [list set bgcolor $bgcolor]
+ puts $f [list set fgcolor $fgcolor]
+ puts $f [list set colors $colors]
+ puts $f [list set diffcolors $diffcolors]
puts $f "set geometry(width) [winfo width .ctop]"
puts $f "set geometry(height) [winfo height .ctop]"
puts $f "set geometry(canv1) [expr {[winfo width $canv]-2}]"
}
proc drawcmittext {id row col rmx} {
- global linespc canv canv2 canv3 canvy0
+ global linespc canv canv2 canv3 canvy0 fgcolor
global commitlisted commitinfo rowidlist
global rowtextx idpos idtags idheads idotherrefs
global linehtag linentag linedtag
- global mainfont canvxmax boldrows boldnamerows
+ global mainfont canvxmax boldrows boldnamerows
set ofill [expr {[lindex $commitlisted $row]? "blue": "white"}]
set x [xc $row $col]
set orad [expr {$linespc / 3}]
set t [$canv create oval [expr {$x - $orad}] [expr {$y - $orad}] \
[expr {$x + $orad - 1}] [expr {$y + $orad - 1}] \
- -fill $ofill -outline black -width 1]
+ -fill $ofill -outline $fgcolor -width 1 -tags circle]
$canv raise $t
$canv bind $t <1> {selcanvline {} %x %y}
set xt [xc $row [llength [lindex $rowidlist $row]]]
lappend nfont bold
}
}
- set linehtag($row) [$canv create text $xt $y -anchor w \
- -text $headline -font $font]
+ set linehtag($row) [$canv create text $xt $y -anchor w -fill $fgcolor \
+ -text $headline -font $font -tags text]
$canv bind $linehtag($row) <Button-3> "rowmenu %X %Y $id"
- set linentag($row) [$canv2 create text 3 $y -anchor w \
- -text $name -font $nfont]
- set linedtag($row) [$canv3 create text 3 $y -anchor w \
- -text $date -font $mainfont]
+ set linentag($row) [$canv2 create text 3 $y -anchor w -fill $fgcolor \
+ -text $name -font $nfont -tags text]
+ set linedtag($row) [$canv3 create text 3 $y -anchor w -fill $fgcolor \
+ -text $date -font $mainfont -tags text]
set xr [expr {$xt + [font measure $mainfont $headline]}]
if {$xr > $canvxmax} {
set canvxmax $xr
}
proc drawtags {id x xt y1} {
- global idtags idheads idotherrefs
+ global idtags idheads idotherrefs mainhead
global linespc lthickness
- global canv mainfont commitrow rowtextx curview
+ global canv mainfont commitrow rowtextx curview fgcolor bgcolor
set marks {}
set ntags 0
set yb [expr {$yt + $linespc - 1}]
set xvals {}
set wvals {}
+ set i -1
foreach tag $marks {
- set wid [font measure $mainfont $tag]
+ incr i
+ if {$i >= $ntags && $i < $ntags + $nheads && $tag eq $mainhead} {
+ set wid [font measure [concat $mainfont bold] $tag]
+ } else {
+ set wid [font measure $mainfont $tag]
+ }
lappend xvals $xt
lappend wvals $wid
set xt [expr {$xt + $delta + $wid + $lthickness + $linespc}]
foreach tag $marks x $xvals wid $wvals {
set xl [expr {$x + $delta}]
set xr [expr {$x + $delta + $wid + $lthickness}]
+ set font $mainfont
if {[incr ntags -1] >= 0} {
# draw a tag
set t [$canv create polygon $x [expr {$yt + $delta}] $xl $yt \
# draw a head or other ref
if {[incr nheads -1] >= 0} {
set col green
+ if {$tag eq $mainhead} {
+ lappend font bold
+ }
} else {
set col "#ddddff"
}
-width 0 -fill "#ffddaa" -tags tag.$id
}
}
- set t [$canv create text $xl $y1 -anchor w -text $tag \
- -font $mainfont -tags tag.$id]
+ set t [$canv create text $xl $y1 -anchor w -text $tag -fill $fgcolor \
+ -font $font -tags [list tag.$id text]]
if {$ntags >= 0} {
$canv bind $t <1> [list showtag $tag 1]
}
}
proc show_status {msg} {
- global canv mainfont
+ global canv mainfont fgcolor
clear_display
- $canv create text 3 3 -anchor nw -text $msg -font $mainfont -tags textitems
+ $canv create text 3 3 -anchor nw -text $msg -font $mainfont \
+ -tags text -fill $fgcolor
}
proc finishcommits {} {
set t [$canv create rectangle $x0 $y0 $x1 $y1 \
-fill \#ffff80 -outline black -width 1 -tags hover]
$canv raise $t
- set t [$canv create text $x $y -anchor nw -text $text -tags hover -font $mainfont]
+ set t [$canv create text $x $y -anchor nw -text $text -tags hover \
+ -font $mainfont]
$canv raise $t
}
proc redrawtags {id} {
global canv linehtag commitrow idpos selectedline curview
- global mainfont
+ global mainfont canvxmax
if {![info exists commitrow($curview,$id)]} return
drawcmitrow $commitrow($curview,$id)
proc doprefs {} {
global maxwidth maxgraphpct diffopts
global oldprefs prefstop showneartags
+ global bgcolor fgcolor ctext diffcolors
set top .gitkprefs
set prefstop $top
-font optionfont
spinbox $top.maxpct -from 1 -to 100 -width 4 -textvariable maxgraphpct
grid x $top.maxpctl $top.maxpct -sticky w
+
label $top.ddisp -text "Diff display options"
grid $top.ddisp - -sticky w -pady 10
label $top.diffoptl -text "Options for diff program" \
checkbutton $top.ntag.b -variable showneartags
pack $top.ntag.b $top.ntag.l -side left
grid x $top.ntag -sticky w
+
+ label $top.cdisp -text "Colors: press to choose"
+ grid $top.cdisp - -sticky w -pady 10
+ label $top.bg -padx 40 -relief sunk -background $bgcolor
+ button $top.bgbut -text "Background" -font optionfont \
+ -command [list choosecolor bgcolor 0 $top.bg background setbg]
+ grid x $top.bgbut $top.bg -sticky w
+ label $top.fg -padx 40 -relief sunk -background $fgcolor
+ button $top.fgbut -text "Foreground" -font optionfont \
+ -command [list choosecolor fgcolor 0 $top.fg foreground setfg]
+ grid x $top.fgbut $top.fg -sticky w
+ label $top.diffold -padx 40 -relief sunk -background [lindex $diffcolors 0]
+ button $top.diffoldbut -text "Diff: old lines" -font optionfont \
+ -command [list choosecolor diffcolors 0 $top.diffold "diff old lines" \
+ [list $ctext tag conf d0 -foreground]]
+ grid x $top.diffoldbut $top.diffold -sticky w
+ label $top.diffnew -padx 40 -relief sunk -background [lindex $diffcolors 1]
+ button $top.diffnewbut -text "Diff: new lines" -font optionfont \
+ -command [list choosecolor diffcolors 1 $top.diffnew "diff new lines" \
+ [list $ctext tag conf d1 -foreground]]
+ grid x $top.diffnewbut $top.diffnew -sticky w
+ label $top.hunksep -padx 40 -relief sunk -background [lindex $diffcolors 2]
+ button $top.hunksepbut -text "Diff: hunk header" -font optionfont \
+ -command [list choosecolor diffcolors 2 $top.hunksep \
+ "diff hunk header" \
+ [list $ctext tag conf hunksep -foreground]]
+ grid x $top.hunksepbut $top.hunksep -sticky w
+
frame $top.buts
button $top.buts.ok -text "OK" -command prefsok
button $top.buts.can -text "Cancel" -command prefscan
grid $top.buts - - -pady 10 -sticky ew
}
+proc choosecolor {v vi w x cmd} {
+ global $v
+
+ set c [tk_chooseColor -initialcolor [lindex [set $v] $vi] \
+ -title "Gitk: choose color for $x"]
+ if {$c eq {}} return
+ $w conf -background $c
+ lset $v $vi $c
+ eval $cmd $c
+}
+
+proc setbg {c} {
+ global bglist
+
+ foreach w $bglist {
+ $w conf -background $c
+ }
+}
+
+proc setfg {c} {
+ global fglist canv
+
+ foreach w $fglist {
+ $w conf -foreground $c
+ }
+ allcanvs itemconf text -fill $c
+ $canv itemconf circle -outline $c
+}
+
proc prefscan {} {
global maxwidth maxgraphpct diffopts
global oldprefs prefstop showneartags
set showneartags 1
set colors {green red blue magenta darkgrey brown orange}
+set bgcolor white
+set fgcolor black
+set diffcolors {red "#00a000" blue}
catch {source ~/.gitk}
background-color: #f6f6f0;
}
+tr.dark2 {
+ background-color: #f6f6f0;
+}
+
tr.dark:hover {
background-color: #edece6;
}
git_page_nav('','', $hash_base,$co{'tree'},$hash_base, $formats_nav);
git_header_div('commit', esc_html($co{'title'}), $hash_base);
git_print_page_path($file_name, $ftype);
- my @rev_color = (qw(light dark));
+ my @rev_color = (qw(light2 dark2));
my $num_colors = scalar(@rev_color);
my $current_color = 0;
my $last_rev;
"<td class=\"link\">" . $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$to_id;hb=$hash;f=$file")}, "blob") . "</td>\n";
} elsif ($status eq "D") {
print "<td>" .
- $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$hash;f=$file"), -class => "list"}, esc_html($file)) . "</td>\n" .
+ $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$parent;f=$file"), -class => "list"}, esc_html($file)) . "</td>\n" .
"<td><span class=\"file_status deleted\">[deleted " . file_type($from_mode). "]</span></td>\n" .
"<td class=\"link\">" .
- $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$hash;f=$file")}, "blob") .
- " | " . $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=history;hb=$hash;f=$file")}, "history") .
+ $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$parent;f=$file")}, "blob") .
+ " | " . $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=history;hb=$parent;f=$file")}, "history") .
"</td>\n"
} elsif ($status eq "M" || $status eq "T") {
my $mode_chnge = "";
print "<td>" .
$cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$to_id;hb=$hash;f=$to_file"), -class => "list"}, esc_html($to_file)) . "</td>\n" .
"<td><span class=\"file_status moved\">[moved from " .
- $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$hash;f=$from_file"), -class => "list"}, esc_html($from_file)) .
+ $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$parent;f=$from_file"), -class => "list"}, esc_html($from_file)) .
" with " . (int $similarity) . "% similarity$mode_chng]</span></td>\n" .
"<td class=\"link\">" .
$cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$to_id;hb=$hash;f=$to_file")}, "blob");
git_diff_print(undef, "/dev/null", $to_id, "b/$file");
} elsif ($status eq "D") {
print "<div class=\"diff_info\">" . file_type($from_mode) . ":" .
- $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$hash;f=$file")}, $from_id) . "(deleted)" .
+ $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$hash_parent;f=$file")}, $from_id) . "(deleted)" .
"</div>\n";
git_diff_print($from_id, "a/$file", undef, "/dev/null");
} elsif ($status eq "M") {
if ($from_id ne $to_id) {
print "<div class=\"diff_info\">" .
- file_type($from_mode) . ":" . $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$hash;f=$file")}, $from_id) .
+ file_type($from_mode) . ":" .
+ $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$from_id;hb=$hash_parent;f=$file")}, $from_id) .
" -> " .
- file_type($to_mode) . ":" . $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$to_id;hb=$hash;f=$file")}, $to_id);
+ file_type($to_mode) . ":" .
+ $cgi->a({-href => "$my_uri?" . esc_param("p=$project;a=blob;h=$to_id;hb=$hash;f=$file")}, $to_id);
print "</div>\n";
git_diff_print($from_id, "a/$file", $to_id, "b/$file");
}
if (!no_more_flags && argv[i][0] == '-') {
if (!strcmp(argv[i], "-t")) {
if (argc <= ++i)
- die(hash_object_usage);
+ usage(hash_object_usage);
type = argv[i];
}
else if (!strcmp(argv[i], "-w")) {
hash_stdin(type, write_object);
}
else
- die(hash_object_usage);
- }
+ usage(hash_object_usage);
+ }
else {
const char *arg = argv[i];
if (0 <= prefix_length)
--- /dev/null
+/*
+ * builtin-help.c
+ *
+ * Builtin help-related commands (help, usage, version)
+ */
+#include <sys/ioctl.h>
+#include "cache.h"
+#include "builtin.h"
+#include "exec_cmd.h"
+#include "common-cmds.h"
+
+
+/* most GUI terminals set COLUMNS (although some don't export it) */
+static int term_columns(void)
+{
+ char *col_string = getenv("COLUMNS");
+ int n_cols = 0;
+
+ if (col_string && (n_cols = atoi(col_string)) > 0)
+ return n_cols;
+
+#ifdef TIOCGWINSZ
+ {
+ struct winsize ws;
+ if (!ioctl(1, TIOCGWINSZ, &ws)) {
+ if (ws.ws_col)
+ return ws.ws_col;
+ }
+ }
+#endif
+
+ return 80;
+}
+
+static void oom(void)
+{
+ fprintf(stderr, "git: out of memory\n");
+ exit(1);
+}
+
+static inline void mput_char(char c, unsigned int num)
+{
+ while(num--)
+ putchar(c);
+}
+
+static struct cmdname {
+ size_t len;
+ char name[1];
+} **cmdname;
+static int cmdname_alloc, cmdname_cnt;
+
+static void add_cmdname(const char *name, int len)
+{
+ struct cmdname *ent;
+ if (cmdname_alloc <= cmdname_cnt) {
+ cmdname_alloc = cmdname_alloc + 200;
+ cmdname = realloc(cmdname, cmdname_alloc * sizeof(*cmdname));
+ if (!cmdname)
+ oom();
+ }
+ ent = malloc(sizeof(*ent) + len);
+ if (!ent)
+ oom();
+ ent->len = len;
+ memcpy(ent->name, name, len);
+ ent->name[len] = 0;
+ cmdname[cmdname_cnt++] = ent;
+}
+
+static int cmdname_compare(const void *a_, const void *b_)
+{
+ struct cmdname *a = *(struct cmdname **)a_;
+ struct cmdname *b = *(struct cmdname **)b_;
+ return strcmp(a->name, b->name);
+}
+
+static void pretty_print_string_list(struct cmdname **cmdname, int longest)
+{
+ int cols = 1, rows;
+ int space = longest + 1; /* min 1 SP between words */
+ int max_cols = term_columns() - 1; /* don't print *on* the edge */
+ int i, j;
+
+ if (space < max_cols)
+ cols = max_cols / space;
+ rows = (cmdname_cnt + cols - 1) / cols;
+
+ qsort(cmdname, cmdname_cnt, sizeof(*cmdname), cmdname_compare);
+
+ for (i = 0; i < rows; i++) {
+ printf(" ");
+
+ for (j = 0; j < cols; j++) {
+ int n = j * rows + i;
+ int size = space;
+ if (n >= cmdname_cnt)
+ break;
+ if (j == cols-1 || n + rows >= cmdname_cnt)
+ size = 1;
+ printf("%-*s", size, cmdname[n]->name);
+ }
+ putchar('\n');
+ }
+}
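
The layout above is column-major: entry n for row i, column j is n = j * rows + i, and padding is dropped for the last column or for a cell whose column ends early. A self-contained sketch of the same indexing over a hypothetical, already-sorted name list (the names and the pretend terminal width are made up for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical command names, already sorted */
        const char *names[] = { "add", "bisect", "branch", "clone",
                                "commit", "diff", "grep", "log", "pull" };
        int cnt = 9, longest = 6;
        int space = longest + 1;        /* min 1 SP between words */
        int max_cols = 20 - 1;          /* pretend the terminal is 20 wide */
        int cols = 1, rows, i, j;

        if (space < max_cols)
            cols = max_cols / space;
        rows = (cnt + cols - 1) / cols; /* round up */

        for (i = 0; i < rows; i++) {
            printf("  ");
            for (j = 0; j < cols; j++) {
                int n = j * rows + i;   /* column-major index */
                int size = space;
                if (n >= cnt)
                    break;
                if (j == cols - 1 || n + rows >= cnt)
                    size = 1;           /* last column: no padding */
                printf("%-*s", size, names[n]);
            }
            putchar('\n');
        }
        return 0;
    }
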
+
+static void list_commands(const char *exec_path, const char *pattern)
+{
+ unsigned int longest = 0;
+ char path[PATH_MAX];
+ int dirlen;
+ DIR *dir = opendir(exec_path);
+ struct dirent *de;
+
+ if (!dir) {
+ fprintf(stderr, "git: '%s': %s\n", exec_path, strerror(errno));
+ exit(1);
+ }
+
+ dirlen = strlen(exec_path);
+ if (PATH_MAX - 20 < dirlen) {
+ fprintf(stderr, "git: insanely long exec-path '%s'\n",
+ exec_path);
+ exit(1);
+ }
+
+ memcpy(path, exec_path, dirlen);
+ path[dirlen++] = '/';
+
+ while ((de = readdir(dir)) != NULL) {
+ struct stat st;
+ int entlen;
+
+ if (strncmp(de->d_name, "git-", 4))
+ continue;
+ strcpy(path+dirlen, de->d_name);
+ if (stat(path, &st) || /* stat, not lstat */
+ !S_ISREG(st.st_mode) ||
+ !(st.st_mode & S_IXUSR))
+ continue;
+
+ entlen = strlen(de->d_name);
+ if (has_extension(de->d_name, ".exe"))
+ entlen -= 4;
+
+ if (longest < entlen)
+ longest = entlen;
+
+ add_cmdname(de->d_name + 4, entlen-4);
+ }
+ closedir(dir);
+
+ printf("git commands available in '%s'\n", exec_path);
+ printf("----------------------------");
+ mput_char('-', strlen(exec_path));
+ putchar('\n');
+ pretty_print_string_list(cmdname, longest - 4);
+ putchar('\n');
+}
+
+static void list_common_cmds_help(void)
+{
+ int i, longest = 0;
+
+ for (i = 0; i < ARRAY_SIZE(common_cmds); i++) {
+ if (longest < strlen(common_cmds[i].name))
+ longest = strlen(common_cmds[i].name);
+ }
+
+ puts("The most commonly used git commands are:");
+ for (i = 0; i < ARRAY_SIZE(common_cmds); i++) {
+ printf(" %s", common_cmds[i].name);
+ mput_char(' ', longest - strlen(common_cmds[i].name) + 4);
+ puts(common_cmds[i].help);
+ }
+ puts("(use 'git help -a' to get a list of all installed git commands)");
+}
+
+static void show_man_page(const char *git_cmd)
+{
+ const char *page;
+
+ if (!strncmp(git_cmd, "git", 3))
+ page = git_cmd;
+ else {
+ int page_len = strlen(git_cmd) + 4;
+ char *p = malloc(page_len + 1);
+ strcpy(p, "git-");
+ strcpy(p + 4, git_cmd);
+ p[page_len] = 0;
+ page = p;
+ }
+
+ execlp("man", "man", page, NULL);
+}
+
+void help_unknown_cmd(const char *cmd)
+{
+ printf("git: '%s' is not a git-command\n\n", cmd);
+ list_common_cmds_help();
+ exit(1);
+}
+
+int cmd_version(int argc, const char **argv, const char *prefix)
+{
+ printf("git version %s\n", git_version_string);
+ return 0;
+}
+
+int cmd_help(int argc, const char **argv, const char *prefix)
+{
+ const char *help_cmd = argc > 1 ? argv[1] : NULL;
+ const char *exec_path = git_exec_path();
+
+ if (!help_cmd) {
+ printf("usage: %s\n\n", git_usage_string);
+ list_common_cmds_help();
+ exit(1);
+ }
+
+ else if (!strcmp(help_cmd, "--all") || !strcmp(help_cmd, "-a")) {
+ printf("usage: %s\n\n", git_usage_string);
+ if(exec_path)
+ list_commands(exec_path, "git-*");
+ exit(1);
+ }
+
+ else
+ show_man_page(help_cmd);
+
+ return 0;
+}
+
+
if (strlen(ls->dentry_name) == 63 &&
!strncmp(ls->dentry_name, "objects/pack/pack-", 18) &&
- !strncmp(ls->dentry_name+58, ".pack", 5)) {
+ has_extension(ls->dentry_name, ".pack")) {
get_sha1_hex(ls->dentry_name + 18, sha1);
setup_index(ls->repo, sha1);
}
int arg = 1;
int rc = 0;
+ setup_ident();
setup_git_directory();
git_config(git_default_config);
request->dest = xmalloc(strlen(request->url) + 14);
sprintf(request->dest, "Destination: %s", request->url);
posn += 38;
- *(posn++) = '.';
+ *(posn++) = '_';
strcpy(posn, request->lock->token);
slot = get_active_slot();
usage(index_pack_usage);
if (!index_name) {
int len = strlen(pack_name);
- if (len < 5 || strcmp(pack_name + len - 5, ".pack"))
+ if (!has_extension(pack_name, ".pack"))
die("packfile name '%s' does not end with '.pack'",
pack_name);
index_name_buf = xmalloc(len);
return -1;
while ((de = readdir(dir)) != NULL) {
int namelen = strlen(de->d_name);
- if (namelen != 50 ||
- strcmp(de->d_name + namelen - 5, ".pack"))
+ if (namelen != 50 ||
+ !has_extension(de->d_name, ".pack"))
continue;
get_sha1_hex(de->d_name + 5, sha1);
setup_index(sha1);
char **commit_id;
int arg = 1;
+ setup_ident();
setup_git_directory();
git_config(git_default_config);
raise(signo);
}
-int hold_lock_file_for_update(struct lock_file *lk, const char *path)
+static int lock_file(struct lock_file *lk, const char *path)
{
int fd;
sprintf(lk->filename, "%s.lock", path);
return fd;
}
+int hold_lock_file_for_update(struct lock_file *lk, const char *path, int die_on_error)
+{
+ int fd = lock_file(lk, path);
+ if (fd < 0 && die_on_error)
+ die("unable to create '%s': %s", path, strerror(errno));
+ return fd;
+}
+
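Callers that used to check the returned descriptor and report the failure themselves can now pass die_on_error=1 and drop that boilerplate, which is what the refs.c change at the end of this series does. A rough usage sketch, assuming the struct lock_file, error() and die() declarations from git's cache.h and the commit_lock_file() shown here:

    /* sketch only -- relies on git's cache.h for struct lock_file,
     * hold_lock_file_for_update(), commit_lock_file(), error() and die() */
    #include "cache.h"

    static struct lock_file lock;

    static void update_file_or_die(void)
    {
        /* die_on_error = 1: failure to take the lock is fatal */
        int fd = hold_lock_file_for_update(&lock, "index", 1);
        (void) fd;      /* ... write the new contents to fd here ... */
        if (commit_lock_file(&lock))
            die("unable to write new index file");
    }

    static int update_file_if_possible(void)
    {
        /* die_on_error = 0: the caller handles the failure itself */
        int fd = hold_lock_file_for_update(&lock, "index", 0);
        if (fd < 0)
            return error("could not lock the index");
        (void) fd;      /* ... write the new contents to fd here ... */
        return commit_lock_file(&lock);
    }
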
int commit_lock_file(struct lock_file *lk)
{
char result_file[PATH_MAX];
fputs(diff_unique_abbrev(commit->object.sha1, abbrev_commit), stdout);
if (opt->parents)
show_parents(commit, abbrev_commit);
- putchar('\n');
+ putchar(opt->diffopt.line_termination);
return;
}
#include "tag.h"
/*
- * A signature file has a very simple fixed format: three lines
- * of "object <sha1>" + "type <typename>" + "tag <tagname>",
- * followed by some free-form signature that git itself doesn't
- * care about, but that can be verified with gpg or similar.
+ * A signature file has a very simple fixed format: four lines
+ * of "object <sha1>" + "type <typename>" + "tag <tagname>" +
+ * "tagger <committer>", followed by a blank line, a free-form tag
+ * message and a signature block that git itself doesn't care about,
+ * but that can be verified with gpg or similar.
*
* The first three lines are guaranteed to be at least 63 bytes:
* "object <sha1>\n" is 48 bytes, "type tag\n" at 9 bytes is the
return ret;
}
+#ifdef NO_C99_FORMAT
+#define PD_FMT "%d"
+#else
+#define PD_FMT "%td"
+#endif
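
The %td length modifier for ptrdiff_t is a C99 addition that older printf implementations do not understand, so PD_FMT falls back to %d there (which assumes ptrdiff_t fits in an int on such platforms). A standalone illustration of the macro in use:

    #include <stdio.h>
    #include <stddef.h>

    #ifdef NO_C99_FORMAT
    #define PD_FMT "%d"     /* old libc: assumes ptrdiff_t fits in an int */
    #else
    #define PD_FMT "%td"    /* C99 length modifier for ptrdiff_t */
    #endif

    int main(void)
    {
        const char buffer[] = "object 0123456789abcdef...";
        const char *type_line = buffer + 8;

        printf("char" PD_FMT ": example offset\n", type_line - buffer);
        return 0;
    }
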
+
static int verify_tag(char *buffer, unsigned long size)
{
int typelen;
const char *object, *type_line, *tag_line, *tagger_line;
if (size < 64)
- return error("wanna fool me ? you obviously got the size wrong !\n");
+ return error("wanna fool me ? you obviously got the size wrong !");
buffer[size] = 0;
/* Verify object line */
object = buffer;
if (memcmp(object, "object ", 7))
- return error("char%d: does not start with \"object \"\n", 0);
+ return error("char%d: does not start with \"object \"", 0);
if (get_sha1_hex(object + 7, sha1))
- return error("char%d: could not get SHA1 hash\n", 7);
+ return error("char%d: could not get SHA1 hash", 7);
/* Verify type line */
type_line = object + 48;
if (memcmp(type_line - 1, "\ntype ", 6))
- return error("char%d: could not find \"\\ntype \"\n", 47);
+ return error("char%d: could not find \"\\ntype \"", 47);
/* Verify tag-line */
tag_line = strchr(type_line, '\n');
if (!tag_line)
- return error("char%td: could not find next \"\\n\"\n", type_line - buffer);
+ return error("char" PD_FMT ": could not find next \"\\n\"", type_line - buffer);
tag_line++;
if (memcmp(tag_line, "tag ", 4) || tag_line[4] == '\n')
- return error("char%td: no \"tag \" found\n", tag_line - buffer);
+ return error("char" PD_FMT ": no \"tag \" found", tag_line - buffer);
/* Get the actual type */
typelen = tag_line - type_line - strlen("type \n");
if (typelen >= sizeof(type))
- return error("char%td: type too long\n", type_line+5 - buffer);
+ return error("char" PD_FMT ": type too long", type_line+5 - buffer);
memcpy(type, type_line+5, typelen);
type[typelen] = 0;
/* Verify that the object matches */
- if (get_sha1_hex(object + 7, sha1))
- return error("char%d: could not get SHA1 hash but this is really odd since i got it before !\n", 7);
-
if (verify_object(sha1, type))
- return error("char%d: could not verify object %s\n", 7, sha1);
+ return error("char%d: could not verify object %s", 7, sha1_to_hex(sha1));
/* Verify the tag-name: we don't allow control characters or spaces in it */
tag_line += 4;
break;
if (c > ' ')
continue;
- return error("char%td: could not verify tag name\n", tag_line - buffer);
+ return error("char" PD_FMT ": could not verify tag name", tag_line - buffer);
}
/* Verify the tagger line */
tagger_line = tag_line;
if (memcmp(tagger_line, "tagger", 6) || (tagger_line[6] == '\n'))
- return error("char%td: could not find \"tagger\"\n", tagger_line - buffer);
+ return error("char" PD_FMT ": could not find \"tagger\"", tagger_line - buffer);
+
+ /* TODO: check for committer info + blank line? */
+ /* Also, the minimum length is probably + "tagger .", or 63+8=71 */
/* The actual stuff afterwards we don't care about.. */
return 0;
}
+#undef PD_FMT
+
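Concretely, input that satisfies these checks looks like the hypothetical tag below; the object line is exactly 48 bytes including its newline, which is why the type line can be found at a fixed offset. (git-mktag will still reject it unless the referenced object actually exists in the repository.)

    #include <stdio.h>

    /* hypothetical well-formed input for git-mktag; the sha1 is made up */
    static const char example_tag[] =
        "object 5d826e972970a784bd7a7bdf587512510097b8c7\n"
        "type commit\n"
        "tag v1.0\n"
        "tagger A U Thor <author@example.com> 1148571595 -0700\n"
        "\n"
        "Example release tag\n";

    int main(void)
    {
        fputs(example_tag, stdout);     /* pipe this into git-mktag */
        return 0;
    }
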
int main(int argc, char **argv)
{
unsigned long size = 4096;
unsigned char result_sha1[20];
if (argc != 1)
- usage("cat <signaturefile> | git-mktag");
+ usage("git-mktag < signaturefile");
setup_git_directory();
write_sha1_file(buffer, offset, tree_type, sha1);
}
-static const char mktree_usage[] = "mktree [-z]";
+static const char mktree_usage[] = "git-mktree [-z]";
int main(int ac, char **av)
{
+++ /dev/null
-#include <stdlib.h>
-#include "cache.h"
-#include "commit.h"
-#include "tag.h"
-#include "refs.h"
-
-static const char name_rev_usage[] =
- "git-name-rev [--tags] ( --all | --stdin | committish [committish...] )\n";
-
-typedef struct rev_name {
- const char *tip_name;
- int merge_traversals;
- int generation;
-} rev_name;
-
-static long cutoff = LONG_MAX;
-
-static void name_rev(struct commit *commit,
- const char *tip_name, int merge_traversals, int generation,
- int deref)
-{
- struct rev_name *name = (struct rev_name *)commit->util;
- struct commit_list *parents;
- int parent_number = 1;
-
- if (!commit->object.parsed)
- parse_commit(commit);
-
- if (commit->date < cutoff)
- return;
-
- if (deref) {
- char *new_name = xmalloc(strlen(tip_name)+3);
- strcpy(new_name, tip_name);
- strcat(new_name, "^0");
- tip_name = new_name;
-
- if (generation)
- die("generation: %d, but deref?", generation);
- }
-
- if (name == NULL) {
- name = xmalloc(sizeof(rev_name));
- commit->util = name;
- goto copy_data;
- } else if (name->merge_traversals > merge_traversals ||
- (name->merge_traversals == merge_traversals &&
- name->generation > generation)) {
-copy_data:
- name->tip_name = tip_name;
- name->merge_traversals = merge_traversals;
- name->generation = generation;
- } else
- return;
-
- for (parents = commit->parents;
- parents;
- parents = parents->next, parent_number++) {
- if (parent_number > 1) {
- char *new_name = xmalloc(strlen(tip_name)+8);
-
- if (generation > 0)
- sprintf(new_name, "%s~%d^%d", tip_name,
- generation, parent_number);
- else
- sprintf(new_name, "%s^%d", tip_name, parent_number);
-
- name_rev(parents->item, new_name,
- merge_traversals + 1 , 0, 0);
- } else {
- name_rev(parents->item, tip_name, merge_traversals,
- generation + 1, 0);
- }
- }
-}
-
-static int tags_only = 0;
-
-static int name_ref(const char *path, const unsigned char *sha1)
-{
- struct object *o = parse_object(sha1);
- int deref = 0;
-
- if (tags_only && strncmp(path, "refs/tags/", 10))
- return 0;
-
- while (o && o->type == OBJ_TAG) {
- struct tag *t = (struct tag *) o;
- if (!t->tagged)
- break; /* broken repository */
- o = parse_object(t->tagged->sha1);
- deref = 1;
- }
- if (o && o->type == OBJ_COMMIT) {
- struct commit *commit = (struct commit *)o;
-
- if (!strncmp(path, "refs/heads/", 11))
- path = path + 11;
- else if (!strncmp(path, "refs/", 5))
- path = path + 5;
-
- name_rev(commit, strdup(path), 0, 0, deref);
- }
- return 0;
-}
-
-/* returns a static buffer */
-static const char* get_rev_name(struct object *o)
-{
- static char buffer[1024];
- struct rev_name *n;
- struct commit *c;
-
- if (o->type != OBJ_COMMIT)
- return "undefined";
- c = (struct commit *) o;
- n = c->util;
- if (!n)
- return "undefined";
-
- if (!n->generation)
- return n->tip_name;
-
- snprintf(buffer, sizeof(buffer), "%s~%d", n->tip_name, n->generation);
-
- return buffer;
-}
-
-int main(int argc, char **argv)
-{
- struct object_array revs = { 0, 0, NULL };
- int as_is = 0, all = 0, transform_stdin = 0;
-
- setup_git_directory();
- git_config(git_default_config);
-
- if (argc < 2)
- usage(name_rev_usage);
-
- for (--argc, ++argv; argc; --argc, ++argv) {
- unsigned char sha1[20];
- struct object *o;
- struct commit *commit;
-
- if (!as_is && (*argv)[0] == '-') {
- if (!strcmp(*argv, "--")) {
- as_is = 1;
- continue;
- } else if (!strcmp(*argv, "--tags")) {
- tags_only = 1;
- continue;
- } else if (!strcmp(*argv, "--all")) {
- if (argc > 1)
- die("Specify either a list, or --all, not both!");
- all = 1;
- cutoff = 0;
- continue;
- } else if (!strcmp(*argv, "--stdin")) {
- if (argc > 1)
- die("Specify either a list, or --stdin, not both!");
- transform_stdin = 1;
- cutoff = 0;
- continue;
- }
- usage(name_rev_usage);
- }
-
- if (get_sha1(*argv, sha1)) {
- fprintf(stderr, "Could not get sha1 for %s. Skipping.\n",
- *argv);
- continue;
- }
-
- o = deref_tag(parse_object(sha1), *argv, 0);
- if (!o || o->type != OBJ_COMMIT) {
- fprintf(stderr, "Could not get commit for %s. Skipping.\n",
- *argv);
- continue;
- }
-
- commit = (struct commit *)o;
-
- if (cutoff > commit->date)
- cutoff = commit->date;
-
- add_object_array((struct object *)commit, *argv, &revs);
- }
-
- for_each_ref(name_ref);
-
- if (transform_stdin) {
- char buffer[2048];
- char *p, *p_start;
-
- while (!feof(stdin)) {
- int forty = 0;
- p = fgets(buffer, sizeof(buffer), stdin);
- if (!p)
- break;
-
- for (p_start = p; *p; p++) {
-#define ishex(x) (isdigit((x)) || ((x) >= 'a' && (x) <= 'f'))
- if (!ishex(*p))
- forty = 0;
- else if (++forty == 40 &&
- !ishex(*(p+1))) {
- unsigned char sha1[40];
- const char *name = "undefined";
- char c = *(p+1);
-
- forty = 0;
-
- *(p+1) = 0;
- if (!get_sha1(p - 39, sha1)) {
- struct object *o =
- lookup_object(sha1);
- if (o)
- name = get_rev_name(o);
- }
- *(p+1) = c;
-
- if (!strcmp(name, "undefined"))
- continue;
-
- fwrite(p_start, p - p_start + 1, 1,
- stdout);
- printf(" (%s)", name);
- p_start = p + 1;
- }
- }
-
- /* flush */
- if (p_start != p)
- fwrite(p_start, p - p_start, 1, stdout);
- }
- } else if (all) {
- int i, max;
-
- max = get_max_object_index();
- for (i = 0; i < max; i++) {
- struct object * obj = get_indexed_object(i);
- if (!obj)
- continue;
- printf("%s %s\n", sha1_to_hex(obj->sha1), get_rev_name(obj));
- }
- } else {
- int i;
- for (i = 0; i < revs.nr; i++)
- printf("%s %s\n",
- revs.objects[i].name,
- get_rev_name(revs.objects[i].item));
- }
-
- return 0;
-}
-
+++ /dev/null
-#include "cache.h"
-#include "object.h"
-#include "blob.h"
-#include "commit.h"
-#include "tag.h"
-#include "tree.h"
-#include "delta.h"
-#include "pack.h"
-#include "csum-file.h"
-#include "tree-walk.h"
-#include <sys/time.h>
-#include <signal.h>
-
-static const char pack_usage[] = "git-pack-objects [-q] [--no-reuse-delta] [--non-empty] [--local] [--incremental] [--window=N] [--depth=N] {--stdout | base-name} < object-list";
-
-struct object_entry {
- unsigned char sha1[20];
- unsigned long size; /* uncompressed size */
- unsigned long offset; /* offset into the final pack file;
- * nonzero if already written.
- */
- unsigned int depth; /* delta depth */
- unsigned int delta_limit; /* base adjustment for in-pack delta */
- unsigned int hash; /* name hint hash */
- enum object_type type;
- enum object_type in_pack_type; /* could be delta */
- unsigned long delta_size; /* delta data size (uncompressed) */
- struct object_entry *delta; /* delta base object */
- struct packed_git *in_pack; /* already in pack */
- unsigned int in_pack_offset;
-	struct object_entry *delta_child;	/* deltified objects that use me as their base */
-	struct object_entry *delta_sibling;	/* other deltified objects that
-						 * use the same base as me
- */
-	int preferred_base;	/* we do not pack this, but it is encouraged to
-				 * be used as the base object to delta huge
- * objects against.
- */
-};
-
-/*
- * Objects we are going to pack are collected in objects array (dynamically
- * expanded). nr_objects & nr_alloc controls this array. They are stored
- * in the order we see -- typically rev-list --objects order that gives us
- * nice "minimum seek" order.
- *
- * sorted-by-sha and sorted-by-type are arrays of pointers that point at
- * elements in the objects array. The former is used to build the pack
- * index (lists object names in the ascending order to help offset lookup),
- * and the latter is used to group similar things together by try_delta()
- * heuristics.
- */
-
-static unsigned char object_list_sha1[20];
-static int non_empty = 0;
-static int no_reuse_delta = 0;
-static int local = 0;
-static int incremental = 0;
-static struct object_entry **sorted_by_sha, **sorted_by_type;
-static struct object_entry *objects = NULL;
-static int nr_objects = 0, nr_alloc = 0, nr_result = 0;
-static const char *base_name;
-static unsigned char pack_file_sha1[20];
-static int progress = 1;
-static volatile sig_atomic_t progress_update = 0;
-static int window = 10;
-
-/*
- * The object names in objects array are hashed with this hashtable,
- * to help looking up the entry by object name. Binary search from
- * sorted_by_sha is also possible but this was easier to code and faster.
- * This hashtable is built after all the objects are seen.
- */
-static int *object_ix = NULL;
-static int object_ix_hashsz = 0;
-
-/*
- * Pack index for existing packs gives us easy access to the offsets into
- * corresponding pack file where each object's data starts, but the entries
- * do not store the size of the compressed representation (uncompressed
- * size is easily available by examining the pack entry header). We build
- * a hashtable of existing packs (pack_revindex), and keep reverse index
- * here -- pack index file is sorted by object name mapping to offset; this
- * pack_revindex[].revindex array is an ordered list of offsets, so if you
- * know the offset of an object, the next offset is where its packed
- * representation ends.
- */
-struct pack_revindex {
- struct packed_git *p;
- unsigned long *revindex;
-} *pack_revindex = NULL;
-static int pack_revindex_hashsz = 0;
-
-/*
- * stats
- */
-static int written = 0;
-static int written_delta = 0;
-static int reused = 0;
-static int reused_delta = 0;
-
-static int pack_revindex_ix(struct packed_git *p)
-{
- unsigned long ui = (unsigned long)p;
- int i;
-
- ui = ui ^ (ui >> 16); /* defeat structure alignment */
- i = (int)(ui % pack_revindex_hashsz);
- while (pack_revindex[i].p) {
- if (pack_revindex[i].p == p)
- return i;
- if (++i == pack_revindex_hashsz)
- i = 0;
- }
- return -1 - i;
-}
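
pack_revindex_ix() doubles as a lookup and an insertion probe: a hit returns the slot index, a miss returns -1 - i where i is the first free slot, so the caller recovers the slot with -1 - ret and fills it in (prepare_pack_ix() below does exactly that, and locate_object_entry_hash() uses the same idiom). A minimal sketch of the convention with a hypothetical integer table:

    #include <stdio.h>

    #define TABSZ 8
    static int table[TABSZ];            /* 0 means "empty slot" */

    /* returns the slot if key is present, otherwise -1 - (first free slot) */
    static int probe(int key)
    {
        int i = key % TABSZ;
        while (table[i]) {
            if (table[i] == key)
                return i;
            if (++i == TABSZ)
                i = 0;
        }
        return -1 - i;
    }

    int main(void)
    {
        int keys[] = { 5, 13, 21 };     /* all collide at slot 5 */
        int k;

        for (k = 0; k < 3; k++) {
            int pos = probe(keys[k]);
            if (pos < 0) {              /* miss: decode and claim the slot */
                pos = -1 - pos;
                table[pos] = keys[k];
            }
            printf("%d -> slot %d\n", keys[k], pos);
        }
        return 0;
    }
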
-
-static void prepare_pack_ix(void)
-{
- int num;
- struct packed_git *p;
- for (num = 0, p = packed_git; p; p = p->next)
- num++;
- if (!num)
- return;
- pack_revindex_hashsz = num * 11;
- pack_revindex = xcalloc(sizeof(*pack_revindex), pack_revindex_hashsz);
- for (p = packed_git; p; p = p->next) {
- num = pack_revindex_ix(p);
- num = - 1 - num;
- pack_revindex[num].p = p;
- }
- /* revindex elements are lazily initialized */
-}
-
-static int cmp_offset(const void *a_, const void *b_)
-{
- unsigned long a = *(unsigned long *) a_;
- unsigned long b = *(unsigned long *) b_;
- if (a < b)
- return -1;
- else if (a == b)
- return 0;
- else
- return 1;
-}
-
-/*
- * Ordered list of offsets of objects in the pack.
- */
-static void prepare_pack_revindex(struct pack_revindex *rix)
-{
- struct packed_git *p = rix->p;
- int num_ent = num_packed_objects(p);
- int i;
- void *index = p->index_base + 256;
-
- rix->revindex = xmalloc(sizeof(unsigned long) * (num_ent + 1));
- for (i = 0; i < num_ent; i++) {
- unsigned int hl = *((unsigned int *)((char *) index + 24*i));
- rix->revindex[i] = ntohl(hl);
- }
- /* This knows the pack format -- the 20-byte trailer
- * follows immediately after the last object data.
- */
- rix->revindex[num_ent] = p->pack_size - 20;
- qsort(rix->revindex, num_ent, sizeof(unsigned long), cmp_offset);
-}
-
-static unsigned long find_packed_object_size(struct packed_git *p,
- unsigned long ofs)
-{
- int num;
- int lo, hi;
- struct pack_revindex *rix;
- unsigned long *revindex;
- num = pack_revindex_ix(p);
- if (num < 0)
- die("internal error: pack revindex uninitialized");
- rix = &pack_revindex[num];
- if (!rix->revindex)
- prepare_pack_revindex(rix);
- revindex = rix->revindex;
- lo = 0;
- hi = num_packed_objects(p) + 1;
- do {
- int mi = (lo + hi) / 2;
- if (revindex[mi] == ofs) {
- return revindex[mi+1] - ofs;
- }
- else if (ofs < revindex[mi])
- hi = mi;
- else
- lo = mi + 1;
- } while (lo < hi);
- die("internal error: pack revindex corrupt");
-}
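
Because the pack index is sorted by object name rather than by offset, the reverse index sorts the offsets so that an object's packed size is simply the next offset minus its own, with the position of the 20-byte trailer appended as a sentinel. A minimal sketch of that lookup over a hypothetical offset list:

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical sorted offsets of four objects, plus the offset of
         * the 20-byte pack trailer as the final sentinel */
        unsigned long revindex[] = { 12, 145, 530, 900, 1480 };
        int num_ent = 4;
        unsigned long ofs = 530;        /* object whose packed size we want */
        int lo = 0, hi = num_ent + 1;

        do {                            /* plain binary search */
            int mi = (lo + hi) / 2;
            if (revindex[mi] == ofs) {
                printf("packed size = %lu\n", revindex[mi + 1] - ofs); /* 370 */
                return 0;
            }
            else if (ofs < revindex[mi])
                hi = mi;
            else
                lo = mi + 1;
        } while (lo < hi);
        return 1;                       /* not found: the index is corrupt */
    }
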
-
-static void *delta_against(void *buf, unsigned long size, struct object_entry *entry)
-{
- unsigned long othersize, delta_size;
- char type[10];
- void *otherbuf = read_sha1_file(entry->delta->sha1, type, &othersize);
- void *delta_buf;
-
- if (!otherbuf)
- die("unable to read %s", sha1_to_hex(entry->delta->sha1));
- delta_buf = diff_delta(otherbuf, othersize,
- buf, size, &delta_size, 0);
- if (!delta_buf || delta_size != entry->delta_size)
- die("delta size changed");
- free(buf);
- free(otherbuf);
- return delta_buf;
-}
-
-/*
- * The per-object header is a pretty dense thing, which is
- * - first byte: low four bits are "size", then three bits of "type",
- * and the high bit is "size continues".
- * - each byte afterwards: low seven bits are size continuation,
- * with the high bit being "size continues"
- */
-static int encode_header(enum object_type type, unsigned long size, unsigned char *hdr)
-{
- int n = 1;
- unsigned char c;
-
- if (type < OBJ_COMMIT || type > OBJ_DELTA)
- die("bad type %d", type);
-
- c = (type << 4) | (size & 15);
- size >>= 4;
- while (size) {
- *hdr++ = c | 0x80;
- c = size & 0x7f;
- size >>= 7;
- n++;
- }
- *hdr = c;
- return n;
-}
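
As a worked example, a 1000-byte blob needs two header bytes: the first carries the type and the low four size bits (1000 & 15 = 8) with the continuation flag set, the second carries the remaining bits (1000 >> 4 = 62). A standalone sketch of the same encoding, assuming the conventional pack type number OBJ_BLOB = 3:

    #include <stdio.h>

    /* same scheme as encode_header() above; OBJ_BLOB is assumed to be 3 */
    static int encode(unsigned type, unsigned long size, unsigned char *hdr)
    {
        int n = 1;
        unsigned char c = (type << 4) | (size & 15);

        size >>= 4;
        while (size) {
            *hdr++ = c | 0x80;          /* high bit: "size continues" */
            c = size & 0x7f;
            size >>= 7;
            n++;
        }
        *hdr = c;
        return n;
    }

    int main(void)
    {
        unsigned char hdr[10];
        int i, n = encode(3, 1000, hdr);    /* a 1000-byte blob */

        for (i = 0; i < n; i++)
            printf("%02x ", hdr[i]);        /* prints: b8 3e */
        putchar('\n');
        return 0;
    }
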
-
-static unsigned long write_object(struct sha1file *f,
- struct object_entry *entry)
-{
- unsigned long size;
- char type[10];
- void *buf;
- unsigned char header[10];
- unsigned hdrlen, datalen;
- enum object_type obj_type;
- int to_reuse = 0;
-
- if (entry->preferred_base)
- return 0;
-
- obj_type = entry->type;
- if (! entry->in_pack)
- to_reuse = 0; /* can't reuse what we don't have */
- else if (obj_type == OBJ_DELTA)
- to_reuse = 1; /* check_object() decided it for us */
- else if (obj_type != entry->in_pack_type)
- to_reuse = 0; /* pack has delta which is unusable */
- else if (entry->delta)
- to_reuse = 0; /* we want to pack afresh */
- else
- to_reuse = 1; /* we have it in-pack undeltified,
- * and we do not need to deltify it.
- */
-
- if (! to_reuse) {
- buf = read_sha1_file(entry->sha1, type, &size);
- if (!buf)
- die("unable to read %s", sha1_to_hex(entry->sha1));
- if (size != entry->size)
- die("object %s size inconsistency (%lu vs %lu)",
- sha1_to_hex(entry->sha1), size, entry->size);
- if (entry->delta) {
- buf = delta_against(buf, size, entry);
- size = entry->delta_size;
- obj_type = OBJ_DELTA;
- }
- /*
- * The object header is a byte of 'type' followed by zero or
- * more bytes of length. For deltas, the 20 bytes of delta
-	 * sha1 follow that.
- */
- hdrlen = encode_header(obj_type, size, header);
- sha1write(f, header, hdrlen);
-
- if (entry->delta) {
- sha1write(f, entry->delta, 20);
- hdrlen += 20;
- }
- datalen = sha1write_compressed(f, buf, size);
- free(buf);
- }
- else {
- struct packed_git *p = entry->in_pack;
- use_packed_git(p);
-
- datalen = find_packed_object_size(p, entry->in_pack_offset);
- buf = (char *) p->pack_base + entry->in_pack_offset;
- sha1write(f, buf, datalen);
- unuse_packed_git(p);
- hdrlen = 0; /* not really */
- if (obj_type == OBJ_DELTA)
- reused_delta++;
- reused++;
- }
- if (obj_type == OBJ_DELTA)
- written_delta++;
- written++;
- return hdrlen + datalen;
-}
-
-static unsigned long write_one(struct sha1file *f,
- struct object_entry *e,
- unsigned long offset)
-{
- if (e->offset)
- /* offset starts from header size and cannot be zero
- * if it is written already.
- */
- return offset;
- e->offset = offset;
- offset += write_object(f, e);
- /* if we are deltified, write out its base object. */
- if (e->delta)
- offset = write_one(f, e->delta, offset);
- return offset;
-}
-
-static void write_pack_file(void)
-{
- int i;
- struct sha1file *f;
- unsigned long offset;
- struct pack_header hdr;
- unsigned last_percent = 999;
- int do_progress = 0;
-
- if (!base_name)
- f = sha1fd(1, "<stdout>");
- else {
- f = sha1create("%s-%s.%s", base_name,
- sha1_to_hex(object_list_sha1), "pack");
- do_progress = progress;
- }
- if (do_progress)
- fprintf(stderr, "Writing %d objects.\n", nr_result);
-
- hdr.hdr_signature = htonl(PACK_SIGNATURE);
- hdr.hdr_version = htonl(PACK_VERSION);
- hdr.hdr_entries = htonl(nr_result);
- sha1write(f, &hdr, sizeof(hdr));
- offset = sizeof(hdr);
- if (!nr_result)
- goto done;
- for (i = 0; i < nr_objects; i++) {
- offset = write_one(f, objects + i, offset);
- if (do_progress) {
- unsigned percent = written * 100 / nr_result;
- if (progress_update || percent != last_percent) {
- fprintf(stderr, "%4u%% (%u/%u) done\r",
- percent, written, nr_result);
- progress_update = 0;
- last_percent = percent;
- }
- }
- }
- if (do_progress)
- fputc('\n', stderr);
- done:
- sha1close(f, pack_file_sha1, 1);
-}
-
-static void write_index_file(void)
-{
- int i;
- struct sha1file *f = sha1create("%s-%s.%s", base_name,
- sha1_to_hex(object_list_sha1), "idx");
- struct object_entry **list = sorted_by_sha;
- struct object_entry **last = list + nr_result;
- unsigned int array[256];
-
- /*
- * Write the first-level table (the list is sorted,
- * but we use a 256-entry lookup to be able to avoid
- * having to do eight extra binary search iterations).
- */
- for (i = 0; i < 256; i++) {
- struct object_entry **next = list;
- while (next < last) {
- struct object_entry *entry = *next;
- if (entry->sha1[0] != i)
- break;
- next++;
- }
- array[i] = htonl(next - sorted_by_sha);
- list = next;
- }
- sha1write(f, array, 256 * sizeof(int));
-
- /*
- * Write the actual SHA1 entries..
- */
- list = sorted_by_sha;
- for (i = 0; i < nr_result; i++) {
- struct object_entry *entry = *list++;
- unsigned int offset = htonl(entry->offset);
- sha1write(f, &offset, 4);
- sha1write(f, entry->sha1, 20);
- }
- sha1write(f, pack_file_sha1, 20);
- sha1close(f, NULL, 1);
-}
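
The 256-entry first-level table is a cumulative count: slot i holds the number of objects whose first SHA1 byte is less than or equal to i, so a reader can jump straight to the right slice of the sorted name list. A toy sketch with hypothetical first bytes (byte-order conversion omitted for brevity):

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical first byte of each object name, sorted ascending */
        unsigned char first[] = { 0x01, 0x01, 0x2a, 0x2a, 0x2a, 0xfe };
        int nr = 6, i, n = 0;
        unsigned int fanout[256];

        for (i = 0; i < 256; i++) {
            while (n < nr && first[n] == i)
                n++;
            fanout[i] = n;              /* objects with first byte <= i */
        }
        /* objects starting with 0x2a occupy [fanout[0x29], fanout[0x2a]) */
        printf("%u..%u\n", fanout[0x29], fanout[0x2a]);     /* prints 2..5 */
        return 0;
    }
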
-
-static int locate_object_entry_hash(const unsigned char *sha1)
-{
- int i;
- unsigned int ui;
- memcpy(&ui, sha1, sizeof(unsigned int));
- i = ui % object_ix_hashsz;
- while (0 < object_ix[i]) {
- if (!memcmp(sha1, objects[object_ix[i]-1].sha1, 20))
- return i;
- if (++i == object_ix_hashsz)
- i = 0;
- }
- return -1 - i;
-}
-
-static struct object_entry *locate_object_entry(const unsigned char *sha1)
-{
- int i;
-
- if (!object_ix_hashsz)
- return NULL;
-
- i = locate_object_entry_hash(sha1);
- if (0 <= i)
- return &objects[object_ix[i]-1];
- return NULL;
-}
-
-static void rehash_objects(void)
-{
- int i;
- struct object_entry *oe;
-
- object_ix_hashsz = nr_objects * 3;
- if (object_ix_hashsz < 1024)
- object_ix_hashsz = 1024;
- object_ix = xrealloc(object_ix, sizeof(int) * object_ix_hashsz);
- memset(object_ix, 0, sizeof(int) * object_ix_hashsz);
- for (i = 0, oe = objects; i < nr_objects; i++, oe++) {
- int ix = locate_object_entry_hash(oe->sha1);
- if (0 <= ix)
- continue;
- ix = -1 - ix;
- object_ix[ix] = i + 1;
- }
-}
-
-static unsigned name_hash(const char *name)
-{
- unsigned char c;
- unsigned hash = 0;
-
- /*
- * This effectively just creates a sortable number from the
- * last sixteen non-whitespace characters. Last characters
- * count "most", so things that end in ".c" sort together.
- */
- while ((c = *name++) != 0) {
- if (isspace(c))
- continue;
- hash = (hash >> 2) + (c << 24);
- }
- return hash;
-}
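
Because each later character shifts the earlier contributions down, only the trailing dozen or so characters matter, so paths sharing a suffix hash to nearby values and end up adjacent in the type/size sort that feeds the delta window. A quick standalone check with two hypothetical paths:

    #include <stdio.h>
    #include <ctype.h>

    /* same rolling hash as name_hash() above */
    static unsigned hash_name(const char *name)
    {
        unsigned char c;
        unsigned hash = 0;

        while ((c = *name++) != 0) {
            if (isspace(c))
                continue;
            hash = (hash >> 2) + (c << 24);
        }
        return hash;
    }

    int main(void)
    {
        /* same ".c" suffix, different directories: only the low-order
         * bits of the two hashes differ */
        printf("%08x\n", hash_name("t/foo/revision.c"));
        printf("%08x\n", hash_name("lib/revision.c"));
        return 0;
    }
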
-
-static int add_object_entry(const unsigned char *sha1, unsigned hash, int exclude)
-{
- unsigned int idx = nr_objects;
- struct object_entry *entry;
- struct packed_git *p;
- unsigned int found_offset = 0;
- struct packed_git *found_pack = NULL;
- int ix, status = 0;
-
- if (!exclude) {
- for (p = packed_git; p; p = p->next) {
- struct pack_entry e;
- if (find_pack_entry_one(sha1, &e, p)) {
- if (incremental)
- return 0;
- if (local && !p->pack_local)
- return 0;
- if (!found_pack) {
- found_offset = e.offset;
- found_pack = e.p;
- }
- }
- }
- }
- if ((entry = locate_object_entry(sha1)) != NULL)
- goto already_added;
-
- if (idx >= nr_alloc) {
- unsigned int needed = (idx + 1024) * 3 / 2;
- objects = xrealloc(objects, needed * sizeof(*entry));
- nr_alloc = needed;
- }
- entry = objects + idx;
- nr_objects = idx + 1;
- memset(entry, 0, sizeof(*entry));
- memcpy(entry->sha1, sha1, 20);
- entry->hash = hash;
-
- if (object_ix_hashsz * 3 <= nr_objects * 4)
- rehash_objects();
- else {
- ix = locate_object_entry_hash(entry->sha1);
- if (0 <= ix)
- die("internal error in object hashing.");
- object_ix[-1 - ix] = idx + 1;
- }
- status = 1;
-
- already_added:
- if (progress_update) {
- fprintf(stderr, "Counting objects...%d\r", nr_objects);
- progress_update = 0;
- }
- if (exclude)
- entry->preferred_base = 1;
- else {
- if (found_pack) {
- entry->in_pack = found_pack;
- entry->in_pack_offset = found_offset;
- }
- }
- return status;
-}
-
-struct pbase_tree_cache {
- unsigned char sha1[20];
- int ref;
- int temporary;
- void *tree_data;
- unsigned long tree_size;
-};
-
-static struct pbase_tree_cache *(pbase_tree_cache[256]);
-static int pbase_tree_cache_ix(const unsigned char *sha1)
-{
- return sha1[0] % ARRAY_SIZE(pbase_tree_cache);
-}
-static int pbase_tree_cache_ix_incr(int ix)
-{
- return (ix+1) % ARRAY_SIZE(pbase_tree_cache);
-}
-
-static struct pbase_tree {
- struct pbase_tree *next;
- /* This is a phony "cache" entry; we are not
- * going to evict it nor find it through _get()
- * mechanism -- this is for the toplevel node that
- * would almost always change with any commit.
- */
- struct pbase_tree_cache pcache;
-} *pbase_tree;
-
-static struct pbase_tree_cache *pbase_tree_get(const unsigned char *sha1)
-{
- struct pbase_tree_cache *ent, *nent;
- void *data;
- unsigned long size;
- char type[20];
- int neigh;
- int my_ix = pbase_tree_cache_ix(sha1);
- int available_ix = -1;
-
- /* pbase-tree-cache acts as a limited hashtable.
-	 * An object will be found at its index or within a few
-	 * slots after it if it is cached.
- */
- for (neigh = 0; neigh < 8; neigh++) {
- ent = pbase_tree_cache[my_ix];
- if (ent && !memcmp(ent->sha1, sha1, 20)) {
- ent->ref++;
- return ent;
- }
- else if (((available_ix < 0) && (!ent || !ent->ref)) ||
- ((0 <= available_ix) &&
- (!ent && pbase_tree_cache[available_ix])))
- available_ix = my_ix;
- if (!ent)
- break;
- my_ix = pbase_tree_cache_ix_incr(my_ix);
- }
-
- /* Did not find one. Either we got a bogus request or
- * we need to read and perhaps cache.
- */
- data = read_sha1_file(sha1, type, &size);
- if (!data)
- return NULL;
- if (strcmp(type, tree_type)) {
- free(data);
- return NULL;
- }
-
- /* We need to either cache or return a throwaway copy */
-
- if (available_ix < 0)
- ent = NULL;
- else {
- ent = pbase_tree_cache[available_ix];
- my_ix = available_ix;
- }
-
- if (!ent) {
- nent = xmalloc(sizeof(*nent));
- nent->temporary = (available_ix < 0);
- }
- else {
- /* evict and reuse */
- free(ent->tree_data);
- nent = ent;
- }
- memcpy(nent->sha1, sha1, 20);
- nent->tree_data = data;
- nent->tree_size = size;
- nent->ref = 1;
- if (!nent->temporary)
- pbase_tree_cache[my_ix] = nent;
- return nent;
-}
-
-static void pbase_tree_put(struct pbase_tree_cache *cache)
-{
- if (!cache->temporary) {
- cache->ref--;
- return;
- }
- free(cache->tree_data);
- free(cache);
-}
-
-static int name_cmp_len(const char *name)
-{
- int i;
- for (i = 0; name[i] && name[i] != '\n' && name[i] != '/'; i++)
- ;
- return i;
-}
-
-static void add_pbase_object(struct tree_desc *tree,
- const char *name,
- int cmplen,
- const char *fullname)
-{
- struct name_entry entry;
-
- while (tree_entry(tree,&entry)) {
- unsigned long size;
- char type[20];
-
- if (entry.pathlen != cmplen ||
- memcmp(entry.path, name, cmplen) ||
- !has_sha1_file(entry.sha1) ||
- sha1_object_info(entry.sha1, type, &size))
- continue;
- if (name[cmplen] != '/') {
- unsigned hash = name_hash(fullname);
- add_object_entry(entry.sha1, hash, 1);
- return;
- }
- if (!strcmp(type, tree_type)) {
- struct tree_desc sub;
- struct pbase_tree_cache *tree;
- const char *down = name+cmplen+1;
- int downlen = name_cmp_len(down);
-
- tree = pbase_tree_get(entry.sha1);
- if (!tree)
- return;
- sub.buf = tree->tree_data;
- sub.size = tree->tree_size;
-
- add_pbase_object(&sub, down, downlen, fullname);
- pbase_tree_put(tree);
- }
- }
-}
-
-static unsigned *done_pbase_paths;
-static int done_pbase_paths_num;
-static int done_pbase_paths_alloc;
-static int done_pbase_path_pos(unsigned hash)
-{
- int lo = 0;
- int hi = done_pbase_paths_num;
- while (lo < hi) {
- int mi = (hi + lo) / 2;
- if (done_pbase_paths[mi] == hash)
- return mi;
- if (done_pbase_paths[mi] < hash)
- hi = mi;
- else
- lo = mi + 1;
- }
- return -lo-1;
-}
-
-static int check_pbase_path(unsigned hash)
-{
- int pos = (!done_pbase_paths) ? -1 : done_pbase_path_pos(hash);
- if (0 <= pos)
- return 1;
- pos = -pos - 1;
- if (done_pbase_paths_alloc <= done_pbase_paths_num) {
- done_pbase_paths_alloc = alloc_nr(done_pbase_paths_alloc);
- done_pbase_paths = xrealloc(done_pbase_paths,
- done_pbase_paths_alloc *
- sizeof(unsigned));
- }
- done_pbase_paths_num++;
- if (pos < done_pbase_paths_num)
- memmove(done_pbase_paths + pos + 1,
- done_pbase_paths + pos,
- (done_pbase_paths_num - pos - 1) * sizeof(unsigned));
- done_pbase_paths[pos] = hash;
- return 0;
-}
-
-static void add_preferred_base_object(char *name, unsigned hash)
-{
- struct pbase_tree *it;
- int cmplen = name_cmp_len(name);
-
- if (check_pbase_path(hash))
- return;
-
- for (it = pbase_tree; it; it = it->next) {
- if (cmplen == 0) {
- hash = name_hash("");
- add_object_entry(it->pcache.sha1, hash, 1);
- }
- else {
- struct tree_desc tree;
- tree.buf = it->pcache.tree_data;
- tree.size = it->pcache.tree_size;
- add_pbase_object(&tree, name, cmplen, name);
- }
- }
-}
-
-static void add_preferred_base(unsigned char *sha1)
-{
- struct pbase_tree *it;
- void *data;
- unsigned long size;
- unsigned char tree_sha1[20];
-
- data = read_object_with_reference(sha1, tree_type, &size, tree_sha1);
- if (!data)
- return;
-
- for (it = pbase_tree; it; it = it->next) {
- if (!memcmp(it->pcache.sha1, tree_sha1, 20)) {
- free(data);
- return;
- }
- }
-
- it = xcalloc(1, sizeof(*it));
- it->next = pbase_tree;
- pbase_tree = it;
-
- memcpy(it->pcache.sha1, tree_sha1, 20);
- it->pcache.tree_data = data;
- it->pcache.tree_size = size;
-}
-
-static void check_object(struct object_entry *entry)
-{
- char type[20];
-
- if (entry->in_pack && !entry->preferred_base) {
- unsigned char base[20];
- unsigned long size;
- struct object_entry *base_entry;
-
- /* We want in_pack_type even if we do not reuse delta.
- * There is no point not reusing non-delta representations.
- */
- check_reuse_pack_delta(entry->in_pack,
- entry->in_pack_offset,
- base, &size,
- &entry->in_pack_type);
-
- /* Check if it is delta, and the base is also an object
- * we are going to pack. If so we will reuse the existing
- * delta.
- */
- if (!no_reuse_delta &&
- entry->in_pack_type == OBJ_DELTA &&
- (base_entry = locate_object_entry(base)) &&
- (!base_entry->preferred_base)) {
-
- /* Depth value does not matter - find_deltas()
- * will never consider reused delta as the
- * base object to deltify other objects
- * against, in order to avoid circular deltas.
- */
-
- /* uncompressed size of the delta data */
- entry->size = entry->delta_size = size;
- entry->delta = base_entry;
- entry->type = OBJ_DELTA;
-
- entry->delta_sibling = base_entry->delta_child;
- base_entry->delta_child = entry;
-
- return;
- }
- /* Otherwise we would do the usual */
- }
-
- if (sha1_object_info(entry->sha1, type, &entry->size))
- die("unable to get type of object %s",
- sha1_to_hex(entry->sha1));
-
- if (!strcmp(type, commit_type)) {
- entry->type = OBJ_COMMIT;
- } else if (!strcmp(type, tree_type)) {
- entry->type = OBJ_TREE;
- } else if (!strcmp(type, blob_type)) {
- entry->type = OBJ_BLOB;
- } else if (!strcmp(type, tag_type)) {
- entry->type = OBJ_TAG;
- } else
- die("unable to pack object %s of type %s",
- sha1_to_hex(entry->sha1), type);
-}
-
-static unsigned int check_delta_limit(struct object_entry *me, unsigned int n)
-{
- struct object_entry *child = me->delta_child;
- unsigned int m = n;
- while (child) {
- unsigned int c = check_delta_limit(child, n + 1);
- if (m < c)
- m = c;
- child = child->delta_sibling;
- }
- return m;
-}
-
-static void get_object_details(void)
-{
- int i;
- struct object_entry *entry;
-
- prepare_pack_ix();
- for (i = 0, entry = objects; i < nr_objects; i++, entry++)
- check_object(entry);
-
- if (nr_objects == nr_result) {
- /*
- * Depth of objects that depend on the entry -- this
-		 * is subtracted from depth-max to avoid delta chains that
-		 * grow too deep because of delta data reuse.
- * However, we loosen this restriction when we know we
- * are creating a thin pack -- it will have to be
- * expanded on the other end anyway, so do not
- * artificially cut the delta chain and let it go as
- * deep as it wants.
- */
- for (i = 0, entry = objects; i < nr_objects; i++, entry++)
- if (!entry->delta && entry->delta_child)
- entry->delta_limit =
- check_delta_limit(entry, 1);
- }
-}
-
-typedef int (*entry_sort_t)(const struct object_entry *, const struct object_entry *);
-
-static entry_sort_t current_sort;
-
-static int sort_comparator(const void *_a, const void *_b)
-{
- struct object_entry *a = *(struct object_entry **)_a;
- struct object_entry *b = *(struct object_entry **)_b;
- return current_sort(a,b);
-}
-
-static struct object_entry **create_sorted_list(entry_sort_t sort)
-{
- struct object_entry **list = xmalloc(nr_objects * sizeof(struct object_entry *));
- int i;
-
- for (i = 0; i < nr_objects; i++)
- list[i] = objects + i;
- current_sort = sort;
- qsort(list, nr_objects, sizeof(struct object_entry *), sort_comparator);
- return list;
-}
-
-static int sha1_sort(const struct object_entry *a, const struct object_entry *b)
-{
- return memcmp(a->sha1, b->sha1, 20);
-}
-
-static struct object_entry **create_final_object_list(void)
-{
- struct object_entry **list;
- int i, j;
-
- for (i = nr_result = 0; i < nr_objects; i++)
- if (!objects[i].preferred_base)
- nr_result++;
- list = xmalloc(nr_result * sizeof(struct object_entry *));
- for (i = j = 0; i < nr_objects; i++) {
- if (!objects[i].preferred_base)
- list[j++] = objects + i;
- }
- current_sort = sha1_sort;
- qsort(list, nr_result, sizeof(struct object_entry *), sort_comparator);
- return list;
-}
-
-static int type_size_sort(const struct object_entry *a, const struct object_entry *b)
-{
- if (a->type < b->type)
- return -1;
- if (a->type > b->type)
- return 1;
- if (a->hash < b->hash)
- return -1;
- if (a->hash > b->hash)
- return 1;
- if (a->preferred_base < b->preferred_base)
- return -1;
- if (a->preferred_base > b->preferred_base)
- return 1;
- if (a->size < b->size)
- return -1;
- if (a->size > b->size)
- return 1;
- return a < b ? -1 : (a > b);
-}
-
-struct unpacked {
- struct object_entry *entry;
- void *data;
- struct delta_index *index;
-};
-
-/*
- * We search for deltas _backwards_ in a list sorted by type and
- * by size, so that we see progressively smaller and smaller files.
- * That's because we prefer deltas to be from the bigger file
- * to the smaller - deletes are potentially cheaper, but perhaps
- * more importantly, the bigger file is likely the more recent
- * one.
- */
-static int try_delta(struct unpacked *trg, struct unpacked *src,
- unsigned max_depth)
-{
- struct object_entry *trg_entry = trg->entry;
- struct object_entry *src_entry = src->entry;
- unsigned long trg_size, src_size, delta_size, sizediff, max_size, sz;
- char type[10];
- void *delta_buf;
-
- /* Don't bother doing diffs between different types */
- if (trg_entry->type != src_entry->type)
- return -1;
-
- /* We do not compute delta to *create* objects we are not
- * going to pack.
- */
- if (trg_entry->preferred_base)
- return -1;
-
- /*
- * We do not bother to try a delta that we discarded
- * on an earlier try, but only when reusing delta data.
- */
- if (!no_reuse_delta && trg_entry->in_pack &&
- trg_entry->in_pack == src_entry->in_pack)
- return 0;
-
- /*
- * If the current object is at pack edge, take the depth the
- * objects that depend on the current object into account --
- * otherwise they would become too deep.
- */
- if (trg_entry->delta_child) {
- if (max_depth <= trg_entry->delta_limit)
- return 0;
- max_depth -= trg_entry->delta_limit;
- }
- if (src_entry->depth >= max_depth)
- return 0;
-
- /* Now some size filtering heuristics. */
- trg_size = trg_entry->size;
- max_size = trg_size/2 - 20;
- max_size = max_size * (max_depth - src_entry->depth) / max_depth;
- if (max_size == 0)
- return 0;
- if (trg_entry->delta && trg_entry->delta_size <= max_size)
- max_size = trg_entry->delta_size-1;
- src_size = src_entry->size;
- sizediff = src_size < trg_size ? trg_size - src_size : 0;
- if (sizediff >= max_size)
- return 0;
-
- /* Load data if not already done */
- if (!trg->data) {
- trg->data = read_sha1_file(trg_entry->sha1, type, &sz);
- if (sz != trg_size)
- die("object %s inconsistent object length (%lu vs %lu)",
- sha1_to_hex(trg_entry->sha1), sz, trg_size);
- }
- if (!src->data) {
- src->data = read_sha1_file(src_entry->sha1, type, &sz);
- if (sz != src_size)
- die("object %s inconsistent object length (%lu vs %lu)",
- sha1_to_hex(src_entry->sha1), sz, src_size);
- }
- if (!src->index) {
- src->index = create_delta_index(src->data, src_size);
- if (!src->index)
- die("out of memory");
- }
-
- delta_buf = create_delta(src->index, trg->data, trg_size, &delta_size, max_size);
- if (!delta_buf)
- return 0;
-
- trg_entry->delta = src_entry;
- trg_entry->delta_size = delta_size;
- trg_entry->depth = src_entry->depth + 1;
- free(delta_buf);
- return 1;
-}
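
The size heuristics are easiest to see with numbers: a hypothetical 1000-byte target at the default depth limit of 10, deltified against a source that is already 4 levels deep, allows a delta of at most (1000/2 - 20) * (10 - 4) / 10 = 288 bytes, and if the target already has a 200-byte delta the bar drops to 199. A sketch of just that arithmetic:

    #include <stdio.h>

    int main(void)
    {
        unsigned long trg_size = 1000;      /* hypothetical target object */
        unsigned max_depth = 10, src_depth = 4;
        unsigned long existing_delta = 200; /* delta we already have, if any */
        unsigned long max_size;

        max_size = trg_size / 2 - 20;                               /* 480 */
        max_size = max_size * (max_depth - src_depth) / max_depth;  /* 288 */
        if (existing_delta && existing_delta <= max_size)
            max_size = existing_delta - 1;                          /* 199 */

        printf("a new delta must be no larger than %lu bytes\n", max_size);
        return 0;
    }
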
-
-static void progress_interval(int signum)
-{
- progress_update = 1;
-}
-
-static void find_deltas(struct object_entry **list, int window, int depth)
-{
- int i, idx;
- unsigned int array_size = window * sizeof(struct unpacked);
- struct unpacked *array = xmalloc(array_size);
- unsigned processed = 0;
- unsigned last_percent = 999;
-
- memset(array, 0, array_size);
- i = nr_objects;
- idx = 0;
- if (progress)
- fprintf(stderr, "Deltifying %d objects.\n", nr_result);
-
- while (--i >= 0) {
- struct object_entry *entry = list[i];
- struct unpacked *n = array + idx;
- int j;
-
- if (!entry->preferred_base)
- processed++;
-
- if (progress) {
- unsigned percent = processed * 100 / nr_result;
- if (percent != last_percent || progress_update) {
- fprintf(stderr, "%4u%% (%u/%u) done\r",
- percent, processed, nr_result);
- progress_update = 0;
- last_percent = percent;
- }
- }
-
- if (entry->delta)
- /* This happens if we decided to reuse existing
- * delta from a pack. "!no_reuse_delta &&" is implied.
- */
- continue;
-
- if (entry->size < 50)
- continue;
- free_delta_index(n->index);
- n->index = NULL;
- free(n->data);
- n->data = NULL;
- n->entry = entry;
-
- j = window;
- while (--j > 0) {
- unsigned int other_idx = idx + j;
- struct unpacked *m;
- if (other_idx >= window)
- other_idx -= window;
- m = array + other_idx;
- if (!m->entry)
- break;
- if (try_delta(n, m, depth) < 0)
- break;
- }
- /* if we made n a delta, and if n is already at max
- * depth, leaving it in the window is pointless. we
- * should evict it first.
- */
- if (entry->delta && depth <= entry->depth)
- continue;
-
- idx++;
- if (idx >= window)
- idx = 0;
- }
-
- if (progress)
- fputc('\n', stderr);
-
- for (i = 0; i < window; ++i) {
- free_delta_index(array[i].index);
- free(array[i].data);
- }
- free(array);
-}
-
-static void prepare_pack(int window, int depth)
-{
- get_object_details();
- sorted_by_type = create_sorted_list(type_size_sort);
- if (window && depth)
- find_deltas(sorted_by_type, window+1, depth);
-}
-
-static int reuse_cached_pack(unsigned char *sha1, int pack_to_stdout)
-{
- static const char cache[] = "pack-cache/pack-%s.%s";
- char *cached_pack, *cached_idx;
- int ifd, ofd, ifd_ix = -1;
-
- cached_pack = git_path(cache, sha1_to_hex(sha1), "pack");
- ifd = open(cached_pack, O_RDONLY);
- if (ifd < 0)
- return 0;
-
- if (!pack_to_stdout) {
- cached_idx = git_path(cache, sha1_to_hex(sha1), "idx");
- ifd_ix = open(cached_idx, O_RDONLY);
- if (ifd_ix < 0) {
- close(ifd);
- return 0;
- }
- }
-
- if (progress)
- fprintf(stderr, "Reusing %d objects pack %s\n", nr_objects,
- sha1_to_hex(sha1));
-
- if (pack_to_stdout) {
- if (copy_fd(ifd, 1))
- exit(1);
- close(ifd);
- }
- else {
- char name[PATH_MAX];
- snprintf(name, sizeof(name),
- "%s-%s.%s", base_name, sha1_to_hex(sha1), "pack");
- ofd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0666);
- if (ofd < 0)
- die("unable to open %s (%s)", name, strerror(errno));
- if (copy_fd(ifd, ofd))
- exit(1);
- close(ifd);
-
- snprintf(name, sizeof(name),
- "%s-%s.%s", base_name, sha1_to_hex(sha1), "idx");
- ofd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0666);
- if (ofd < 0)
- die("unable to open %s (%s)", name, strerror(errno));
- if (copy_fd(ifd_ix, ofd))
- exit(1);
- close(ifd_ix);
- puts(sha1_to_hex(sha1));
- }
-
- return 1;
-}
-
-static void setup_progress_signal(void)
-{
- struct sigaction sa;
- struct itimerval v;
-
- memset(&sa, 0, sizeof(sa));
- sa.sa_handler = progress_interval;
- sigemptyset(&sa.sa_mask);
- sa.sa_flags = SA_RESTART;
- sigaction(SIGALRM, &sa, NULL);
-
- v.it_interval.tv_sec = 1;
- v.it_interval.tv_usec = 0;
- v.it_value = v.it_interval;
- setitimer(ITIMER_REAL, &v, NULL);
-}
-
-static int git_pack_config(const char *k, const char *v)
-{
- if(!strcmp(k, "pack.window")) {
- window = git_config_int(k, v);
- return 0;
- }
- return git_default_config(k, v);
-}
-
-int main(int argc, char **argv)
-{
- SHA_CTX ctx;
- char line[40 + 1 + PATH_MAX + 2];
- int depth = 10, pack_to_stdout = 0;
- struct object_entry **list;
- int num_preferred_base = 0;
- int i;
-
- setup_git_directory();
- git_config(git_pack_config);
-
- progress = isatty(2);
- for (i = 1; i < argc; i++) {
- const char *arg = argv[i];
-
- if (*arg == '-') {
- if (!strcmp("--non-empty", arg)) {
- non_empty = 1;
- continue;
- }
- if (!strcmp("--local", arg)) {
- local = 1;
- continue;
- }
- if (!strcmp("--progress", arg)) {
- progress = 1;
- continue;
- }
- if (!strcmp("--incremental", arg)) {
- incremental = 1;
- continue;
- }
- if (!strncmp("--window=", arg, 9)) {
- char *end;
- window = strtoul(arg+9, &end, 0);
- if (!arg[9] || *end)
- usage(pack_usage);
- continue;
- }
- if (!strncmp("--depth=", arg, 8)) {
- char *end;
- depth = strtoul(arg+8, &end, 0);
- if (!arg[8] || *end)
- usage(pack_usage);
- continue;
- }
- if (!strcmp("--progress", arg)) {
- progress = 1;
- continue;
- }
- if (!strcmp("-q", arg)) {
- progress = 0;
- continue;
- }
- if (!strcmp("--no-reuse-delta", arg)) {
- no_reuse_delta = 1;
- continue;
- }
- if (!strcmp("--stdout", arg)) {
- pack_to_stdout = 1;
- continue;
- }
- usage(pack_usage);
- }
- if (base_name)
- usage(pack_usage);
- base_name = arg;
- }
-
- if (pack_to_stdout != !base_name)
- usage(pack_usage);
-
- prepare_packed_git();
-
- if (progress) {
- fprintf(stderr, "Generating pack...\n");
- setup_progress_signal();
- }
-
- for (;;) {
- unsigned char sha1[20];
- unsigned hash;
-
- if (!fgets(line, sizeof(line), stdin)) {
- if (feof(stdin))
- break;
- if (!ferror(stdin))
- die("fgets returned NULL, not EOF, not error!");
- if (errno != EINTR)
- die("fgets: %s", strerror(errno));
- clearerr(stdin);
- continue;
- }
-
- if (line[0] == '-') {
- if (get_sha1_hex(line+1, sha1))
- die("expected edge sha1, got garbage:\n %s",
- line+1);
- if (num_preferred_base++ < window)
- add_preferred_base(sha1);
- continue;
- }
- if (get_sha1_hex(line, sha1))
- die("expected sha1, got garbage:\n %s", line);
- hash = name_hash(line+41);
- add_preferred_base_object(line+41, hash);
- add_object_entry(sha1, hash, 0);
- }
- if (progress)
- fprintf(stderr, "Done counting %d objects.\n", nr_objects);
- sorted_by_sha = create_final_object_list();
- if (non_empty && !nr_result)
- return 0;
-
- SHA1_Init(&ctx);
- list = sorted_by_sha;
- for (i = 0; i < nr_result; i++) {
- struct object_entry *entry = *list++;
- SHA1_Update(&ctx, entry->sha1, 20);
- }
- SHA1_Final(object_list_sha1, &ctx);
- if (progress && (nr_objects != nr_result))
- fprintf(stderr, "Result has %d objects.\n", nr_result);
-
- if (reuse_cached_pack(object_list_sha1, pack_to_stdout))
- ;
- else {
- if (nr_result)
- prepare_pack(window, depth);
- if (progress && pack_to_stdout) {
- /* the other end usually displays progress itself */
- struct itimerval v = {{0,},};
- setitimer(ITIMER_REAL, &v, NULL);
- signal(SIGALRM, SIG_IGN );
- progress_update = 0;
- }
- write_pack_file();
- if (!pack_to_stdout) {
- write_index_file();
- puts(sha1_to_hex(object_list_sha1));
- }
- }
- if (progress)
- fprintf(stderr, "Total %d, written %d (delta %d), reused %d (delta %d)\n",
- nr_result, written, written_delta, reused, reused_delta);
- return 0;
-}
{
pid_t pid;
int fd[2];
- const char *pager = getenv("PAGER");
+ const char *pager = getenv("GIT_PAGER");
if (!isatty(1))
return;
+ if (!pager)
+ pager = getenv("PAGER");
if (!pager)
pager = "less";
else if (!*pager || !strcmp(pager, "cat"))
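The hunk above makes GIT_PAGER take precedence over PAGER, falling back to "less" when neither is set; the tail of the function (not shown here) apparently treats an empty value or "cat" as a request to run without a pager. A minimal stand-alone sketch of just that selection order, assuming nothing beyond standard C (the real setup_pager() additionally forks and redirects stdout):

#include <stdlib.h>
#include <string.h>

/* Sketch of the pager precedence shown in the hunk above.
 * Returns NULL when paging should be disabled. */
static const char *choose_pager(void)
{
	const char *pager = getenv("GIT_PAGER");	/* new: consulted first */
	if (!pager)
		pager = getenv("PAGER");		/* traditional fallback */
	if (!pager)
		pager = "less";				/* built-in default */
	if (!*pager || !strcmp(pager, "cat"))
		return NULL;				/* empty or "cat": no pager */
	return pager;
}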
+++ /dev/null
-#include "cache.h"
-
-static const char prune_packed_usage[] =
-"git-prune-packed [-n]";
-
-static int dryrun;
-
-static void prune_dir(int i, DIR *dir, char *pathname, int len)
-{
- struct dirent *de;
- char hex[40];
-
- sprintf(hex, "%02x", i);
- while ((de = readdir(dir)) != NULL) {
- unsigned char sha1[20];
- if (strlen(de->d_name) != 38)
- continue;
- memcpy(hex+2, de->d_name, 38);
- if (get_sha1_hex(hex, sha1))
- continue;
- if (!has_sha1_pack(sha1))
- continue;
- memcpy(pathname + len, de->d_name, 38);
- if (dryrun)
- printf("rm -f %s\n", pathname);
- else if (unlink(pathname) < 0)
- error("unable to unlink %s", pathname);
- }
- pathname[len] = 0;
- rmdir(pathname);
-}
-
-static void prune_packed_objects(void)
-{
- int i;
- static char pathname[PATH_MAX];
- const char *dir = get_object_directory();
- int len = strlen(dir);
-
- if (len > PATH_MAX - 42)
- die("impossible object directory");
- memcpy(pathname, dir, len);
- if (len && pathname[len-1] != '/')
- pathname[len++] = '/';
- for (i = 0; i < 256; i++) {
- DIR *d;
-
- sprintf(pathname + len, "%02x/", i);
- d = opendir(pathname);
- if (!d)
- continue;
- prune_dir(i, d, pathname, len + 3);
- closedir(d);
- }
-}
-
-int main(int argc, char **argv)
-{
- int i;
-
- setup_git_directory();
-
- for (i = 1; i < argc; i++) {
- const char *arg = argv[i];
-
- if (*arg == '-') {
- if (!strcmp(arg, "-n"))
- dryrun = 1;
- else
- usage(prune_packed_usage);
- continue;
- }
- /* Handle arguments here .. */
- usage(prune_packed_usage);
- }
- sync();
- prune_packed_objects();
- return 0;
-}
unsigned char sha1[20];
if (!index_fd(sha1, fd, st, 0, NULL))
match = memcmp(sha1, ce->sha1, 20);
- close(fd);
+ /* index_fd() closed the file descriptor already */
}
return match;
}
namelen = strlen(de->d_name);
if (namelen > 255)
continue;
- if (namelen>5 && !strcmp(de->d_name+namelen-5,".lock"))
+ if (has_extension(de->d_name, ".lock"))
continue;
memcpy(path + baselen, de->d_name, namelen+1);
if (stat(git_path("%s", path), &st) < 0)
if (safe_create_leading_directories(lock->ref_file))
die("unable to create directory for %s", lock->ref_file);
- lock->lock_fd = hold_lock_file_for_update(lock->lk, lock->ref_file);
- if (lock->lock_fd < 0) {
- error("Couldn't open lock file %s: %s",
- lock->lk->filename, strerror(errno));
- unlock_ref(lock);
- return NULL;
- }
+ lock->lock_fd = hold_lock_file_for_update(lock->lk, lock->ref_file, 1);
return old_sha1 ? verify_lock(lock, old_sha1, mustexist) : lock;
}
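The simplification above relies on hold_lock_file_for_update() taking a third argument that tells the helper to die on failure itself, so the caller can drop its own error()/unlock_ref() cleanup. As a hedged illustration only, here are the two calling styles, with the prototypes written out as assumptions inferred from this hunk rather than quoted from the headers:

/* Assumed declarations, inferred from the hunk above. */
struct lock_file;				/* opaque; defined elsewhere */
int hold_lock_file_for_update(struct lock_file *lk, const char *path,
			      int die_on_error);
int error(const char *fmt, ...);		/* reports and returns -1 */

/* Fatal style, as used in the new code: failure never returns. */
static int lock_or_die(struct lock_file *lk, const char *path)
{
	return hold_lock_file_for_update(lk, path, 1);
}

/* Non-fatal style, roughly what the removed lines did by hand. */
static int lock_or_complain(struct lock_file *lk, const char *path)
{
	int fd = hold_lock_file_for_update(lk, path, 0);
	if (fd < 0)
		return error("could not lock %s", path);
	return fd;
}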
+++ /dev/null
-#include "cache.h"
-#include <regex.h>
-
-static const char git_config_set_usage[] =
-"git-repo-config [ --bool | --int ] [--get | --get-all | --get-regexp | --replace-all | --unset | --unset-all] name [value [value_regex]] | --list";
-
-static char* key = NULL;
-static regex_t* key_regexp = NULL;
-static regex_t* regexp = NULL;
-static int show_keys = 0;
-static int use_key_regexp = 0;
-static int do_all = 0;
-static int do_not_match = 0;
-static int seen = 0;
-static enum { T_RAW, T_INT, T_BOOL } type = T_RAW;
-
-static int show_all_config(const char *key_, const char *value_)
-{
- if (value_)
- printf("%s=%s\n", key_, value_);
- else
- printf("%s\n", key_);
- return 0;
-}
-
-static int show_config(const char* key_, const char* value_)
-{
- char value[256];
- const char *vptr = value;
- int dup_error = 0;
-
- if (!use_key_regexp && strcmp(key_, key))
- return 0;
- if (use_key_regexp && regexec(key_regexp, key_, 0, NULL, 0))
- return 0;
- if (regexp != NULL &&
- (do_not_match ^
- regexec(regexp, (value_?value_:""), 0, NULL, 0)))
- return 0;
-
- if (show_keys)
- printf("%s ", key_);
- if (seen && !do_all)
- dup_error = 1;
- if (type == T_INT)
- sprintf(value, "%d", git_config_int(key_, value_?value_:""));
- else if (type == T_BOOL)
- vptr = git_config_bool(key_, value_) ? "true" : "false";
- else
- vptr = value_?value_:"";
- seen++;
- if (dup_error) {
- error("More than one value for the key %s: %s",
- key_, vptr);
- }
- else
- printf("%s\n", vptr);
-
- return 0;
-}
-
-static int get_value(const char* key_, const char* regex_)
-{
- int ret = -1;
- char *tl;
- char *global = NULL, *repo_config = NULL;
- const char *local;
-
- local = getenv("GIT_CONFIG");
- if (!local) {
- const char *home = getenv("HOME");
- local = getenv("GIT_CONFIG_LOCAL");
- if (!local)
- local = repo_config = strdup(git_path("config"));
- if (home)
- global = strdup(mkpath("%s/.gitconfig", home));
- }
-
- key = strdup(key_);
- for (tl=key+strlen(key)-1; tl >= key && *tl != '.'; --tl)
- *tl = tolower(*tl);
- for (tl=key; *tl && *tl != '.'; ++tl)
- *tl = tolower(*tl);
-
- if (use_key_regexp) {
- key_regexp = (regex_t*)malloc(sizeof(regex_t));
- if (regcomp(key_regexp, key, REG_EXTENDED)) {
- fprintf(stderr, "Invalid key pattern: %s\n", key_);
- goto free_strings;
- }
- }
-
- if (regex_) {
- if (regex_[0] == '!') {
- do_not_match = 1;
- regex_++;
- }
-
- regexp = (regex_t*)malloc(sizeof(regex_t));
- if (regcomp(regexp, regex_, REG_EXTENDED)) {
- fprintf(stderr, "Invalid pattern: %s\n", regex_);
- goto free_strings;
- }
- }
-
- if (do_all && global)
- git_config_from_file(show_config, global);
- git_config_from_file(show_config, local);
- if (!do_all && !seen && global)
- git_config_from_file(show_config, global);
-
- free(key);
- if (regexp) {
- regfree(regexp);
- free(regexp);
- }
-
- if (do_all)
- ret = !seen;
- else
- ret = (seen == 1) ? 0 : 1;
-
-free_strings:
- if (repo_config)
- free(repo_config);
- if (global)
- free(global);
- return ret;
-}
-
-int main(int argc, const char **argv)
-{
- int nongit = 0;
- setup_git_directory_gently(&nongit);
-
- while (1 < argc) {
- if (!strcmp(argv[1], "--int"))
- type = T_INT;
- else if (!strcmp(argv[1], "--bool"))
- type = T_BOOL;
- else if (!strcmp(argv[1], "--list") || !strcmp(argv[1], "-l"))
- return git_config(show_all_config);
- else
- break;
- argc--;
- argv++;
- }
-
- switch (argc) {
- case 2:
- return get_value(argv[1], NULL);
- case 3:
- if (!strcmp(argv[1], "--unset"))
- return git_config_set(argv[2], NULL);
- else if (!strcmp(argv[1], "--unset-all"))
- return git_config_set_multivar(argv[2], NULL, NULL, 1);
- else if (!strcmp(argv[1], "--get"))
- return get_value(argv[2], NULL);
- else if (!strcmp(argv[1], "--get-all")) {
- do_all = 1;
- return get_value(argv[2], NULL);
- } else if (!strcmp(argv[1], "--get-regexp")) {
- show_keys = 1;
- use_key_regexp = 1;
- do_all = 1;
- return get_value(argv[2], NULL);
- } else
-
- return git_config_set(argv[1], argv[2]);
- case 4:
- if (!strcmp(argv[1], "--unset"))
- return git_config_set_multivar(argv[2], NULL, argv[3], 0);
- else if (!strcmp(argv[1], "--unset-all"))
- return git_config_set_multivar(argv[2], NULL, argv[3], 1);
- else if (!strcmp(argv[1], "--get"))
- return get_value(argv[2], argv[3]);
- else if (!strcmp(argv[1], "--get-all")) {
- do_all = 1;
- return get_value(argv[2], argv[3]);
- } else if (!strcmp(argv[1], "--get-regexp")) {
- show_keys = 1;
- use_key_regexp = 1;
- do_all = 1;
- return get_value(argv[2], argv[3]);
- } else if (!strcmp(argv[1], "--replace-all"))
-
- return git_config_set_multivar(argv[2], argv[3], NULL, 1);
- else
-
- return git_config_set_multivar(argv[1], argv[2], argv[3], 0);
- case 5:
- if (!strcmp(argv[1], "--replace-all"))
- return git_config_set_multivar(argv[2], argv[3], argv[4], 1);
- case 1:
- default:
- usage(git_config_set_usage);
- }
- return 0;
-}
revs->diffopt.output_format = DIFF_FORMAT_PATCH;
}
revs->diffopt.abbrev = revs->abbrev;
- diff_setup_done(&revs->diffopt);
+ if (diff_setup_done(&revs->diffopt) < 0)
+ die("diff_setup_done failed");
return left;
}
}
return NULL;
bad_dir_environ:
- if (!nongit_ok) {
+ if (nongit_ok) {
*nongit_ok = 1;
return NULL;
}
int namelen = strlen(de->d_name);
struct packed_git *p;
- if (strcmp(de->d_name + namelen - 4, ".idx"))
+ if (!has_extension(de->d_name, ".idx"))
continue;
/* we have .idx. Is it a file we can map? */
is_null = !memcmp(sha1, null_sha1, 20);
memcpy(hex, sha1_to_hex(sha1), 40);
- if (len == 40)
+ if (len == 40 || !len)
return hex;
while (len < 40) {
unsigned char sha1_ret[20];
prog = getenv("GIT_SSH_PUSH");
if (!prog) prog = "git-ssh-upload";
+ setup_ident();
setup_git_directory();
git_config(git_default_config);
+++ /dev/null
-#include "cache.h"
-
-static const char git_symbolic_ref_usage[] =
-"git-symbolic-ref name [ref]";
-
-static void check_symref(const char *HEAD)
-{
- unsigned char sha1[20];
- const char *git_HEAD = strdup(git_path("%s", HEAD));
- const char *git_refs_heads_master = resolve_ref(git_HEAD, sha1, 0);
- if (git_refs_heads_master) {
- /* we want to strip the .git/ part */
- int pfxlen = strlen(git_HEAD) - strlen(HEAD);
- puts(git_refs_heads_master + pfxlen);
- }
- else
- die("No such ref: %s", HEAD);
-}
-
-int main(int argc, const char **argv)
-{
- setup_git_directory();
- git_config(git_default_config);
- switch (argc) {
- case 2:
- check_symref(argv[1]);
- break;
- case 3:
- create_symref(strdup(git_path("%s", argv[1])), argv[2]);
- break;
- default:
- usage(git_symbolic_ref_usage);
- }
- return 0;
-}
check_count () {
head=
case "$1" in -h) head="$2"; shift; shift ;; esac
- $PROG file $head | perl -e '
+ $PROG file $head >.result || return 1
+ cat .result | perl -e '
my %expect = (@ARGV);
my %count = ();
while (<STDIN>) {
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Junio C Hamano
+#
+
+test_description='git-read-tree --prefix test.
+'
+
+. ./test-lib.sh
+
+test_expect_success setup '
+ echo hello >one &&
+ git-update-index --add one &&
+ tree=`git-write-tree` &&
+ echo tree is $tree
+'
+
+echo 'one
+two/one' >expect
+
+test_expect_success 'read-tree --prefix' '
+ git-read-tree --prefix=two/ $tree &&
+ git-ls-files >actual &&
+ cmp expect actual
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Junio C Hamano
+#
+
+test_description='Try various core-level commands in subdirectory.
+'
+
+. ./test-lib.sh
+
+test_expect_success setup '
+ long="a b c d e f g h i j k l m n o p q r s t u v w x y z" &&
+ for c in $long; do echo $c; done >one &&
+ mkdir dir &&
+ for c in x y z $long a b c; do echo $c; done >dir/two &&
+ cp one original.one &&
+ cp dir/two original.two
+'
+HERE=`pwd`
+LF='
+'
+
+test_expect_success 'update-index and ls-files' '
+ cd $HERE &&
+ git-update-index --add one &&
+ case "`git-ls-files`" in
+ one) echo ok one ;;
+ *) echo bad one; exit 1 ;;
+ esac &&
+ cd dir &&
+ git-update-index --add two &&
+ case "`git-ls-files`" in
+ two) echo ok two ;;
+ *) echo bad two; exit 1 ;;
+ esac &&
+ cd .. &&
+ case "`git-ls-files`" in
+ dir/two"$LF"one) echo ok both ;;
+ *) echo bad; exit 1 ;;
+ esac
+'
+
+test_expect_success 'cat-file' '
+ cd $HERE &&
+ two=`git-ls-files -s dir/two` &&
+ two=`expr "$two" : "[0-7]* \\([0-9a-f]*\\)"` &&
+ echo "$two" &&
+ git-cat-file -p "$two" >actual &&
+ cmp dir/two actual &&
+ cd dir &&
+ git-cat-file -p "$two" >actual &&
+ cmp two actual
+'
+rm -f actual dir/actual
+
+test_expect_success 'diff-files' '
+ cd $HERE &&
+ echo a >>one &&
+ echo d >>dir/two &&
+ case "`git-diff-files --name-only`" in
+ dir/two"$LF"one) echo ok top ;;
+ *) echo bad top; exit 1 ;;
+ esac &&
+ # diff should not omit leading paths
+ cd dir &&
+ case "`git-diff-files --name-only`" in
+ dir/two"$LF"one) echo ok subdir ;;
+ *) echo bad subdir; exit 1 ;;
+ esac &&
+ case "`git-diff-files --name-only .`" in
+ dir/two) echo ok subdir limited ;;
+ *) echo bad subdir limited; exit 1 ;;
+ esac
+'
+
+test_expect_success 'write-tree' '
+ cd $HERE &&
+ top=`git-write-tree` &&
+ echo $top &&
+ cd dir &&
+ sub=`git-write-tree` &&
+ echo $sub &&
+ test "z$top" = "z$sub"
+'
+
+test_expect_success 'checkout-index' '
+ cd $HERE &&
+ git-checkout-index -f -u one &&
+ cmp one original.one &&
+ cd dir &&
+ git-checkout-index -f -u two &&
+ cmp two ../original.two
+'
+
+test_expect_success 'read-tree' '
+ cd $HERE &&
+ rm -f one dir/two &&
+ tree=`git-write-tree` &&
+ git-read-tree --reset -u "$tree" &&
+ cmp one original.one &&
+ cmp dir/two original.two &&
+ cd dir &&
+ rm -f two &&
+ git-read-tree --reset -u "$tree" &&
+ cmp two ../original.two &&
+ cmp ../one ../original.one
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+#
+
+test_description='git-mktag: tag object verify test'
+
+. ./test-lib.sh
+
+###########################################################
+# check the tag.sig file, expecting verify_tag() to fail,
+# and checking that the error message matches the pattern
+# given in the expect.pat file.
+
+check_verify_failure () {
+ test_expect_success \
+ "$1" \
+ 'git-mktag <tag.sig 2>message ||
+ egrep -q -f expect.pat message'
+}
+
+###########################################################
+# first create a commit, so we have a valid object/type
+# for the tag.
+echo Hello >A
+git-update-index --add A
+git-commit -m "Initial commit"
+head=$(git-rev-parse --verify HEAD)
+
+############################################################
+# 1. length check
+
+cat >tag.sig <<EOF
+too short for a tag
+EOF
+
+cat >expect.pat <<EOF
+^error: .*size wrong.*$
+EOF
+
+check_verify_failure 'Tag object length check'
+
+############################################################
+# 2. object line label check
+
+cat >tag.sig <<EOF
+xxxxxx 139e9b33986b1c2670fff52c5067603117b3e895
+type tag
+tag mytag
+EOF
+
+cat >expect.pat <<EOF
+^error: char0: .*"object "$
+EOF
+
+check_verify_failure '"object" line label check'
+
+############################################################
+# 3. object line SHA1 check
+
+cat >tag.sig <<EOF
+object zz9e9b33986b1c2670fff52c5067603117b3e895
+type tag
+tag mytag
+EOF
+
+cat >expect.pat <<EOF
+^error: char7: .*SHA1 hash$
+EOF
+
+check_verify_failure '"object" line SHA1 check'
+
+############################################################
+# 4. type line label check
+
+cat >tag.sig <<EOF
+object 779e9b33986b1c2670fff52c5067603117b3e895
+xxxx tag
+tag mytag
+EOF
+
+cat >expect.pat <<EOF
+^error: char47: .*"[\]ntype "$
+EOF
+
+check_verify_failure '"type" line label check'
+
+############################################################
+# 5. type line eol check
+
+echo "object 779e9b33986b1c2670fff52c5067603117b3e895" >tag.sig
+echo -n "type tagsssssssssssssssssssssssssssssss" >>tag.sig
+
+cat >expect.pat <<EOF
+^error: char48: .*"[\]n"$
+EOF
+
+check_verify_failure '"type" line eol check'
+
+############################################################
+# 6. tag line label check #1
+
+cat >tag.sig <<EOF
+object 779e9b33986b1c2670fff52c5067603117b3e895
+type tag
+xxx mytag
+EOF
+
+cat >expect.pat <<EOF
+^error: char57: no "tag " found$
+EOF
+
+check_verify_failure '"tag" line label check #1'
+
+############################################################
+# 7. tag line label check #2
+
+cat >tag.sig <<EOF
+object 779e9b33986b1c2670fff52c5067603117b3e895
+type taggggggggggggggggggggggggggggggg
+tag
+EOF
+
+cat >expect.pat <<EOF
+^error: char87: no "tag " found$
+EOF
+
+check_verify_failure '"tag" line label check #2'
+
+############################################################
+# 8. type line type-name length check
+
+cat >tag.sig <<EOF
+object 779e9b33986b1c2670fff52c5067603117b3e895
+type taggggggggggggggggggggggggggggggg
+tag mytag
+EOF
+
+cat >expect.pat <<EOF
+^error: char53: type too long$
+EOF
+
+check_verify_failure '"type" line type-name length check'
+
+############################################################
+# 9. verify object (SHA1/type) check
+
+cat >tag.sig <<EOF
+object 779e9b33986b1c2670fff52c5067603117b3e895
+type tagggg
+tag mytag
+EOF
+
+cat >expect.pat <<EOF
+^error: char7: could not verify object.*$
+EOF
+
+check_verify_failure 'verify object (SHA1/type) check'
+
+############################################################
+# 10. verify tag-name check
+
+cat >tag.sig <<EOF
+object $head
+type commit
+tag my tag
+EOF
+
+cat >expect.pat <<EOF
+^error: char67: could not verify tag name$
+EOF
+
+check_verify_failure 'verify tag-name check'
+
+############################################################
+# 11. tagger line label check #1
+
+cat >tag.sig <<EOF
+object $head
+type commit
+tag mytag
+EOF
+
+cat >expect.pat <<EOF
+^error: char70: could not find "tagger"$
+EOF
+
+check_verify_failure '"tagger" line label check #1'
+
+############################################################
+# 12. tagger line label check #2
+
+cat >tag.sig <<EOF
+object $head
+type commit
+tag mytag
+tagger
+EOF
+
+cat >expect.pat <<EOF
+^error: char70: could not find "tagger"$
+EOF
+
+check_verify_failure '"tagger" line label check #2'
+
+############################################################
+# 13. create valid tag
+
+cat >tag.sig <<EOF
+object $head
+type commit
+tag mytag
+tagger another@example.com
+EOF
+
+test_expect_success \
+ 'create valid tag' \
+ 'git-mktag <tag.sig >.git/refs/tags/mytag 2>message'
+
+############################################################
+# 14. check mytag
+
+test_expect_success \
+ 'check mytag' \
+ 'git-tag -l | grep mytag'
+
+
+test_done
+*+ [initial] Initial
EOF
-V=`git version | sed -e 's/^git version //'`
+V=`git version | sed -e 's/^git version //' -e 's/\./\\./g'`
while read cmd
do
case "$cmd" in
test_expect_success "git $cmd" '
{
echo "\$ git $cmd"
- git $cmd | sed -e "s/$V/g-i-t--v-e-r-s-i-o-n/"
+ git $cmd |
+ sed -e "s/^\\(-*\\)$V\\(-*\\)\$/\\1g-i-t--v-e-r-s-i-o-n\2/" \
+ -e "s/^\\( *boundary=\"-*\\)$V\\(-*\\)\"\$/\\1g-i-t--v-e-r-s-i-o-n\2\"/"
echo "\$"
} >"$actual" &&
if test -f "$expect"
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2005 Junio C Hamano
+#
+
+test_description='git-apply symlinks and partial files
+
+'
+
+. ./test-lib.sh
+
+test_expect_success setup '
+
+ ln -s path1/path2/path3/path4/path5 link1 &&
+ git add link? &&
+ git commit -m initial &&
+
+ git branch side &&
+
+ rm -f link? &&
+
+ ln -s htap6 link1 &&
+ git update-index link? &&
+ git commit -m second &&
+
+ git diff-tree -p HEAD^ HEAD >patch &&
+ git apply --stat --summary patch
+
+'
+
+test_expect_success 'apply symlink patch' '
+
+ git checkout side &&
+ git apply patch &&
+ git diff-files -p >patched &&
+ diff -u patch patched
+
+'
+
+test_expect_success 'apply --index symlink patch' '
+
+ git checkout -f side &&
+ git apply --index patch &&
+ git diff-index --cached -p HEAD >patched &&
+ diff -u patch patched
+
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+#
+# Copyright (c) 2006 Junio C Hamano
+#
+
+test_description='git grep various.
+'
+
+. ./test-lib.sh
+
+test_expect_success setup '
+ {
+ echo foo mmap bar
+ echo foo_mmap bar
+ echo foo_mmap bar mmap
+ echo foo mmap bar_mmap
+ echo foo_mmap bar mmap baz
+ } >file &&
+ echo x x xx x >x &&
+ echo y yy >y &&
+ echo zzz > z &&
+ mkdir t &&
+ echo test >t/t &&
+ git add file x y z t/t &&
+ git commit -m initial
+'
+
+for H in HEAD ''
+do
+ case "$H" in
+ HEAD) HC='HEAD:' L='HEAD' ;;
+ '') HC= L='in working tree' ;;
+ esac
+
+ test_expect_success "grep -w $L" '
+ {
+ echo ${HC}file:1:foo mmap bar
+ echo ${HC}file:3:foo_mmap bar mmap
+ echo ${HC}file:4:foo mmap bar_mmap
+ echo ${HC}file:5:foo_mmap bar mmap baz
+ } >expected &&
+ git grep -n -w -e mmap $H >actual &&
+ diff expected actual
+ '
+
+ test_expect_success "grep -w $L (x)" '
+ {
+ echo ${HC}x:1:x x xx x
+ } >expected &&
+ git grep -n -w -e "x xx* x" $H >actual &&
+ diff expected actual
+ '
+
+ test_expect_success "grep -w $L (y-1)" '
+ {
+ echo ${HC}y:1:y yy
+ } >expected &&
+ git grep -n -w -e "^y" $H >actual &&
+ diff expected actual
+ '
+
+ test_expect_success "grep -w $L (y-2)" '
+ : >expected &&
+ if git grep -n -w -e "^y y" $H >actual
+ then
+ echo should not have matched
+ cat actual
+ false
+ else
+ diff expected actual
+ fi
+ '
+
+ test_expect_success "grep -w $L (z)" '
+ : >expected &&
+ if git grep -n -w -e "^z" $H >actual
+ then
+ echo should not have matched
+ cat actual
+ false
+ else
+ diff expected actual
+ fi
+ '
+
+ test_expect_success "grep $L (t-1)" '
+ echo "${HC}t/t:1:test" >expected &&
+ git grep -n -e test $H >actual &&
+ diff expected actual
+ '
+
+ test_expect_success "grep $L (t-2)" '
+ echo "${HC}t:1:test" >expected &&
+ (
+ cd t &&
+ git grep -n -e test $H
+ ) >actual &&
+ diff expected actual
+ '
+
+ test_expect_success "grep $L (t-3)" '
+ echo "${HC}t/t:1:test" >expected &&
+ (
+ cd t &&
+ git grep --full-name -n -e test $H
+ ) >actual &&
+ diff expected actual
+ '
+
+done
+
+test_done
test -L $SVN_TREE/exec-2.sh"
name='modify a symlink to become a file'
- git help > help || true
+ echo git help > help || true
rm exec-2.sh
cp help exec-2.sh
git update-index exec-2.sh
rm -f expected
if test "$have_utf8" = t
then
- echo tree f735671b89a7eb30cab1d8597de35bd4271ab813 > expected
+ echo tree bf522353586b1b883488f2bc73dab0d9f774b9a9 > expected
fi
cat >> expected <<\EOF
-tree 4b9af72bb861eaed053854ec502cf7df72618f0f
+tree 83654bb36f019ae4fe77a0171f81075972087624
tree 031b8d557afc6fea52894eaebb45bec52f1ba6d1
tree 0b094cbff17168f24c302e297f55bfac65eb8bd3
tree d667270a1f7b109f5eb3aaea21ede14b56bfdd6e
test_expect_success "$name" "diff -u a expected"
test_done
-
template_dir ?= $(prefix)/share/git-core/templates/
# DESTDIR=
-# Shell quote;
-# Result of this needs to be placed inside ''
-shq = $(subst ','\'',$(1))
-# This has surrounding ''
-shellquote = '$(call shq,$(1))'
+# Shell quote (do not use $(call) to accommodate ancient setups);
+DESTDIR_SQ = $(subst ','\'',$(DESTDIR))
+template_dir_SQ = $(subst ','\'',$(template_dir))
all: boilerplates.made custom
rm -rf blt boilerplates.made
install: all
- $(INSTALL) -d -m755 $(call shellquote,$(DESTDIR)$(template_dir))
+ $(INSTALL) -d -m755 '$(DESTDIR_SQ)$(template_dir_SQ)'
(cd blt && $(TAR) cf - .) | \
- (cd $(call shellquote,$(DESTDIR)$(template_dir)) && $(TAR) xf -)
+ (cd '$(DESTDIR_SQ)$(template_dir_SQ)' && $(TAR) xf -)
+++ /dev/null
-#include "cache.h"
-#include "object.h"
-#include "delta.h"
-#include "pack.h"
-#include "blob.h"
-#include "commit.h"
-#include "tag.h"
-#include "tree.h"
-
-#include <sys/time.h>
-
-static int dry_run, quiet;
-static const char unpack_usage[] = "git-unpack-objects [-n] [-q] < pack-file";
-
-/* We always read in 4kB chunks. */
-static unsigned char buffer[4096];
-static unsigned long offset, len, eof;
-static SHA_CTX ctx;
-
-/*
- * Make sure at least "min" bytes are available in the buffer, and
- * return the pointer to the buffer.
- */
-static void * fill(int min)
-{
- if (min <= len)
- return buffer + offset;
- if (eof)
- die("unable to fill input");
- if (min > sizeof(buffer))
- die("cannot fill %d bytes", min);
- if (offset) {
- SHA1_Update(&ctx, buffer, offset);
- memcpy(buffer, buffer + offset, len);
- offset = 0;
- }
- do {
- int ret = xread(0, buffer + len, sizeof(buffer) - len);
- if (ret <= 0) {
- if (!ret)
- die("early EOF");
- die("read error on input: %s", strerror(errno));
- }
- len += ret;
- } while (len < min);
- return buffer;
-}
-
-static void use(int bytes)
-{
- if (bytes > len)
- die("used more bytes than were available");
- len -= bytes;
- offset += bytes;
-}
-
-static void *get_data(unsigned long size)
-{
- z_stream stream;
- void *buf = xmalloc(size);
-
- memset(&stream, 0, sizeof(stream));
-
- stream.next_out = buf;
- stream.avail_out = size;
- stream.next_in = fill(1);
- stream.avail_in = len;
- inflateInit(&stream);
-
- for (;;) {
- int ret = inflate(&stream, 0);
- use(len - stream.avail_in);
- if (stream.total_out == size && ret == Z_STREAM_END)
- break;
- if (ret != Z_OK)
- die("inflate returned %d\n", ret);
- stream.next_in = fill(1);
- stream.avail_in = len;
- }
- inflateEnd(&stream);
- return buf;
-}
-
-struct delta_info {
- unsigned char base_sha1[20];
- unsigned long size;
- void *delta;
- struct delta_info *next;
-};
-
-static struct delta_info *delta_list;
-
-static void add_delta_to_list(unsigned char *base_sha1, void *delta, unsigned long size)
-{
- struct delta_info *info = xmalloc(sizeof(*info));
-
- memcpy(info->base_sha1, base_sha1, 20);
- info->size = size;
- info->delta = delta;
- info->next = delta_list;
- delta_list = info;
-}
-
-static void added_object(unsigned char *sha1, const char *type, void *data, unsigned long size);
-
-static void write_object(void *buf, unsigned long size, const char *type)
-{
- unsigned char sha1[20];
- if (write_sha1_file(buf, size, type, sha1) < 0)
- die("failed to write object");
- added_object(sha1, type, buf, size);
-}
-
-static int resolve_delta(const char *type,
- void *base, unsigned long base_size,
- void *delta, unsigned long delta_size)
-{
- void *result;
- unsigned long result_size;
-
- result = patch_delta(base, base_size,
- delta, delta_size,
- &result_size);
- if (!result)
- die("failed to apply delta");
- free(delta);
- write_object(result, result_size, type);
- free(result);
- return 0;
-}
-
-static void added_object(unsigned char *sha1, const char *type, void *data, unsigned long size)
-{
- struct delta_info **p = &delta_list;
- struct delta_info *info;
-
- while ((info = *p) != NULL) {
- if (!memcmp(info->base_sha1, sha1, 20)) {
- *p = info->next;
- p = &delta_list;
- resolve_delta(type, data, size, info->delta, info->size);
- free(info);
- continue;
- }
- p = &info->next;
- }
-}
-
-static int unpack_non_delta_entry(enum object_type kind, unsigned long size)
-{
- void *buf = get_data(size);
- const char *type;
-
- switch (kind) {
- case OBJ_COMMIT: type = commit_type; break;
- case OBJ_TREE: type = tree_type; break;
- case OBJ_BLOB: type = blob_type; break;
- case OBJ_TAG: type = tag_type; break;
- default: die("bad type %d", kind);
- }
- if (!dry_run)
- write_object(buf, size, type);
- free(buf);
- return 0;
-}
-
-static int unpack_delta_entry(unsigned long delta_size)
-{
- void *delta_data, *base;
- unsigned long base_size;
- char type[20];
- unsigned char base_sha1[20];
- int result;
-
- memcpy(base_sha1, fill(20), 20);
- use(20);
-
- delta_data = get_data(delta_size);
- if (dry_run) {
- free(delta_data);
- return 0;
- }
-
- if (!has_sha1_file(base_sha1)) {
- add_delta_to_list(base_sha1, delta_data, delta_size);
- return 0;
- }
- base = read_sha1_file(base_sha1, type, &base_size);
- if (!base)
- die("failed to read delta-pack base object %s", sha1_to_hex(base_sha1));
- result = resolve_delta(type, base, base_size, delta_data, delta_size);
- free(base);
- return result;
-}
-
-static void unpack_one(unsigned nr, unsigned total)
-{
- unsigned shift;
- unsigned char *pack, c;
- unsigned long size;
- enum object_type type;
-
- pack = fill(1);
- c = *pack;
- use(1);
- type = (c >> 4) & 7;
- size = (c & 15);
- shift = 4;
- while (c & 0x80) {
- pack = fill(1);
- c = *pack++;
- use(1);
- size += (c & 0x7f) << shift;
- shift += 7;
- }
- if (!quiet) {
- static unsigned long last_sec;
- static unsigned last_percent;
- struct timeval now;
- unsigned percentage = (nr * 100) / total;
-
- gettimeofday(&now, NULL);
- if (percentage != last_percent || now.tv_sec != last_sec) {
- last_sec = now.tv_sec;
- last_percent = percentage;
- fprintf(stderr, "%4u%% (%u/%u) done\r", percentage, nr, total);
- }
- }
- switch (type) {
- case OBJ_COMMIT:
- case OBJ_TREE:
- case OBJ_BLOB:
- case OBJ_TAG:
- unpack_non_delta_entry(type, size);
- return;
- case OBJ_DELTA:
- unpack_delta_entry(size);
- return;
- default:
- die("bad object type %d", type);
- }
-}
-
-static void unpack_all(void)
-{
- int i;
- struct pack_header *hdr = fill(sizeof(struct pack_header));
- unsigned nr_objects = ntohl(hdr->hdr_entries);
-
- if (ntohl(hdr->hdr_signature) != PACK_SIGNATURE)
- die("bad pack file");
- if (!pack_version_ok(hdr->hdr_version))
- die("unknown pack file version %d", ntohl(hdr->hdr_version));
- fprintf(stderr, "Unpacking %d objects\n", nr_objects);
-
- use(sizeof(struct pack_header));
- for (i = 0; i < nr_objects; i++)
- unpack_one(i+1, nr_objects);
- if (delta_list)
- die("unresolved deltas left after unpacking");
-}
-
-int main(int argc, char **argv)
-{
- int i;
- unsigned char sha1[20];
-
- setup_git_directory();
-
- quiet = !isatty(2);
-
- for (i = 1 ; i < argc; i++) {
- const char *arg = argv[i];
-
- if (*arg == '-') {
- if (!strcmp(arg, "-n")) {
- dry_run = 1;
- continue;
- }
- if (!strcmp(arg, "-q")) {
- quiet = 1;
- continue;
- }
- usage(unpack_usage);
- }
-
- /* We don't take any non-flag arguments now.. Maybe some day */
- usage(unpack_usage);
- }
- SHA1_Init(&ctx);
- unpack_all();
- SHA1_Update(&ctx, buffer, offset);
- SHA1_Final(sha1, &ctx);
- if (memcmp(fill(20), sha1, 20))
- die("final sha1 did not match");
- use(20);
-
- /* Write the last part of the buffer to stdout */
- while (len) {
- int ret = xwrite(1, buffer + offset, len);
- if (ret <= 0)
- break;
- len -= ret;
- offset += ret;
- }
-
- /* All done */
- if (!quiet)
- fprintf(stderr, "\n");
- return 0;
-}
+++ /dev/null
-#include "cache.h"
-#include "pack.h"
-
-static int verify_one_pack(char *arg, int verbose)
-{
- int len = strlen(arg);
- struct packed_git *g;
-
- while (1) {
- /* Should name foo.idx, but foo.pack may be named;
- * convert it to foo.idx
- */
- if (!strcmp(arg + len - 5, ".pack")) {
- strcpy(arg + len - 5, ".idx");
- len--;
- }
- /* Should name foo.idx now */
- if ((g = add_packed_git(arg, len, 1)))
- break;
- /* No? did you name just foo? */
- strcpy(arg + len, ".idx");
- len += 4;
- if ((g = add_packed_git(arg, len, 1)))
- break;
- return error("packfile %s not found.", arg);
- }
- return verify_pack(g, verbose);
-}
-
-static const char verify_pack_usage[] = "git-verify-pack [-v] <pack>...";
-
-int main(int ac, char **av)
-{
- int errs = 0;
- int verbose = 0;
- int no_more_options = 0;
-
- while (1 < ac) {
- char path[PATH_MAX];
-
- if (!no_more_options && av[1][0] == '-') {
- if (!strcmp("-v", av[1]))
- verbose = 1;
- else if (!strcmp("--", av[1]))
- no_more_options = 1;
- else
- usage(verify_pack_usage);
- }
- else {
- strcpy(path, av[1]);
- if (verify_one_pack(path, verbose))
- errs++;
- }
- ac--; av++;
- }
- return !!errs;
-}