The new option --stdin reads the list of paths to be updated from the
standard input. As usual, -z means the paths are terminated with NUL
characters instead of LF.
This is useful for feeding the output of git-diff-files -z and
git-ls-files -z on platforms whose xargs does not support the -0 option,
and it obviously saves one process even when xargs can take -0.
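A minimal usage sketch; the command name (git-update-index) and the
--name-only and --remove flags are assumptions here, since the text
above does not spell out the exact invocation:

git-diff-files --name-only -z | git-update-index --remove -z --stdin

This replaces the equivalent pipeline through 'xargs -0' with a single
process reading NUL-terminated paths.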
With the --recover option, we verify that we have absolutely
everything reachable from the target, not assuming that things
reachable from refs will be complete.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
Do not require clean tree when reverting and cherry-picking.
My stupidity deserved to be yelled at by Linus ... there is no reason
to require the working tree to be clean when merging -- the only
requirements are that the index matches the HEAD commit and that the
paths involved in the merge are up to date in the working tree. Revert
and cherry-pick are just specialized forms of merge, and the
requirements should be the same.
Remove the 'general purpose routine to make sure tree is clean' from
git-sh-setup, to prevent me from getting tempted again.
Being able to try multiple strategies and automatically pick the one
that seems to give the least conflicting result may or may not make
much sense in practice. At least it should not force the normal use
case to additionally require the working tree to be fully clean. As
Linus shouted, local changes do not matter unless they interfere with
the merge.
This commit changes git-merge not to require a clean working tree.
Only when we iterate through more than one merge strategy are local
changes stashed away before trying the first merge, and restored
before the second and later merges are attempted.
The index file must be in sync with HEAD in any case -- otherwise the
merge result would contain changes since HEAD that were made locally
and registered in the index. This check is already enforced by the
three-way read-tree that the existing merge strategies use, but it is
done here as a safeguard as well.
[PATCH] Add "git-update-ref" to update the HEAD (or other) ref
This is a careful version of the script stuff that currently just
blindly writes HEAD with a new value.
You can use
git-update-ref HEAD <newhead>
or
git-update-ref HEAD <newhead> <oldhead>
where the latter version verifies that the old value of HEAD matches
oldhead.
It basically allows a "ref" file to be a symbolic pointer to another ref
file by starting with the four-byte header sequence of "ref:".
More importantly, it allows the update of a ref file to follow these
symbolic pointers, whether they are symlinks or these "regular file
symbolic refs".
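For illustration, such a regular-file symbolic ref would contain just a
single line like this (the branch name is only an example):

ref: refs/heads/master

and git-update-ref HEAD <newhead> would then update refs/heads/master
rather than overwriting HEAD itself.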
NOTE! It follows _real_ symlinks only if they start with "refs/":
otherwise it will just try to read them and update them as a regular file
(ie it will allow the filesystem to follow them, but will overwrite such a
symlink to somewhere else with a regular filename).
In general, using
git-update-ref HEAD "$head"
should be a _lot_ safer than doing
echo "$head" > "$GIT_DIR/HEAD"
both from a symlink following standpoint _and_ an error checking
standpoint. The "refs/" rule for symlinks means that symlinks that point
to "outside" the tree are safe: they'll be followed for reading but not
for writing (so we'll never write through a ref symlink to some other
tree, if you have copied a whole archive by creating a symlink tree).
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] recursive-merge: Don't print a stack trace when read-tree fails.
If the working tree is dirty read-tree will fail, and we don't want an
ugly stack trace in that case. Also make sure we don't print stack
traces when we use 'die'.
Signed-off-by: Fredrik Kuivinen <freku045@student.liu.se> Signed-off-by: Junio C Hamano <junkio@cox.net>
When many paths are modified, rename detection takes a lot of time.
The new option -l<num> can be used to disable rename detection when
more than <num> paths are possibly created as renames.
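For example, one might invoke the diff family with a rename limit like
this (the choice of command and the threshold of 100 are only
illustrative; the text above does not say which commands accept the
option):

git-diff-tree -r -M -l100 <commit1> <commit2>

If more than 100 paths could be rename destinations, rename detection
is skipped and the paths are reported as plain creations and deletions.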
Now we conditionally compile things in compat/, so we should remove
object files there. Python execution can leave *.pyc and *.pyo, which
need to be cleaned as well.
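An illustrative sketch of the extra removal commands for the 'clean'
target (the exact file globs are an assumption):

rm -f compat/*.o
rm -f *.pyc *.pyo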
People typically say 'grep -e $pattern' because $pattern may have a
leading dash which would be mistaken for a grep flag. Make sure we pass
-e in front of $pattern when we invoke grep.
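A small shell illustration (the pattern and file name are made up):

pattern='--some-string'
grep "$pattern" file.txt     # grep parses the pattern as an option and fails
grep -e "$pattern" file.txt  # -e marks it unambiguously as the pattern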
Even without the trouble it causes for people without GNU xargs, it
was not really necessary to print the filename from Perl and then
remove it outside. Just unlink it inside Perl.
Well, this makes it even more clear that we need the packet reader and
friends to use the daemon logging code. :/ Therefore, we at least indicate
in the "Disconnect" log message whether the child process exited with an
error code or not.
Idea by Linus.
Signed-off-by: Petr Baudis <pasky@suse.cz> Signed-off-by: Junio C Hamano <junkio@cox.net>
Further clarify licensing status of compat/subprocess.py.
The PSF license explicitly states that the files in the Python
distribution are compatible with the GPL, and upstream clarified the
licensing terms by shortening the file header. This version is a
verbatim copy from the release24-maint branch of Python CVS.
The change I made to rsh.c would leave the string unterminated under
certain conditions, which unfortunately always applied! This patch
fixes this. For some reason this never bit on i386 or ppc, but bit me
on x86-64.
Fix situation where the buffer was not properly null-terminated.
Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] git-local-fetch: Avoid confusing error messages on packed repositories
If the source repository was packed, and git-local-fetch needed to
fetch a pack file, it spewed a misleading error message about not
being able to find the unpacked object. Fixed by adding the
warn_if_not_exists argument to copy_file(), which controls printing
of error messages in case the source file does not exist.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Fix "git-local-fetch -s" with packed source repository
"git-local-fetch -s" did not work with a packed repository, because
symlink() happily created a link to a non-existing object file,
therefore fetch_file() always returned success, and fetch_pack() was
not called. Fixed by calling stat() before symlink() to ensure the
file really exists.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
After open() failure, copy_file() called close(ifd) with ifd == -1
(harmless, but causes Valgrind noise). The same thing was possible
for the destination file descriptor.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
Make 'git diff --cached' synonymous to 'git diff --cached HEAD'.
When making changes to different files (i.e. a dirty working tree) and
committing logically separate changes in groups, it is often necessary
to run 'git diff --cached HEAD' to make sure that the changes being
committed make sense. Saying 'git diff --cached' by mistake gives a
rather uninformative error message from git-diff-files, complaining
that it does not understand the --cached flag.
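For example (the file name is only illustrative):

git-update-index foo.c    # register the change to foo.c in the index
git diff --cached         # now equivalent to the line below
git diff --cached HEAD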
[PATCH] fetch.c: Remove call to parse_object() from process()
The call to parse_object() in process() is not actually needed - if
the object type is unknown, parse_object() will be called by loop();
if the type is known, the object will be parsed by the appropriate
process_*() function.
After this change blobs which exist locally are no longer parsed,
which gives about 2x CPU usage improvement; the downside is that there
will be no warnings for existing corrupted blobs, but detecting such
corruption is the job of git-fsck-objects, not the fetch programs.
Newly fetched objects are still checked for corruption in http-fetch.c
and ssh-fetch.c (local-fetch.c does not seem to do it, but the removed
parse_object() call would not be reached for new objects anyway).
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] fetch.c: Remove some duplicated code in process()
It does not matter if we call prefetch() or set the TO_SCAN flag before
or after adding the object to process_queue. However, doing it before
object_list_insert() allows us to kill 3 lines of duplicated code.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
The TO_FETCH flag also became redundant after adding the SEEN flag -
it was set and checked in process() to prevent adding the same object
to process_queue multiple times, but now SEEN guards against this.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
After adding the SEEN flag, the SCANNED flag became obviously
redundant - each object can get into process_queue through process()
only once, and therefore multiple calls to process_object() for the
same object are not possible.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] fetch.c: Make process() look at each object only once
The process() function is very often called multiple times for the
same object (because lots of trees refer to the same blobs), but did
not have a fast check for this, therefore a lot of useless calls to
has_sha1_file() and parse_object() were made before discovering that
nothing needs to be done.
This patch adds the SEEN flag which is used in process() to make it
look at each object only once. When testing git-local-fetch on the
repository of GIT, this gives a 14x improvement in CPU usage (mainly
because the redundant calls to parse_object() are now avoided -
parse_object() always unpacks and parses the object data, even if it
was already parsed before).
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] fetch.c: Remove useless lookup_object_type() call in process()
In all places where process() is called except the one in pull() (which
is executed only once) the pointer to the object is already available,
so pass it as the argument to process() instead of sha1 and avoid an
unneeded call to lookup_object_type().
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Retitle 'inspecting what happened' section.
In the tutorial, there is a section entitled "Checking it out"
that shows how to use diff, log and whatchanged to inspect some
of the repository state.
As the phrase "checkout" usually carries some baggage WRT
other revision control mechanisms, I suggest that we re-title
this section something like "Inspecting Changes".
Signed-off-by: Jon Loeliger <jdl@freescale.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
Including the current branch in the list of heads being merged
was not a good idea, so drop it. And shorten the message by
grouping branches and tags together to form a single line.
This patch makes git-daemon --verbose log some useful things on stderr -
in particular connects, disconnects and upload requests, and in such a
way that a particular session can be traced. Some more errors are now
also logged (even when --verbose is not passed). It is still not perfect
since messages produced by the non-daemon-specific code are obviously
not formatted properly.
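A minimal way to run it with the new logging (only --verbose is taken
from the text above; redirecting stderr to a file is just one option):

git-daemon --verbose 2>>daemon.log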
[jc: With minor fix up in the log line truncation, and
use of write(2) as suggested by Linus.]
Signed-off-by: Petr Baudis <pasky@suse.cz> Signed-off-by: Junio C Hamano <junkio@cox.net>
Some old scripts might still use git-rev-tree, but it really is so
clearly inferior in every way to git-rev-list that such scripts should
be fixed anyway. Fixing them should be pretty easy.
git-export was done as a concept example of how easy it is to export
the git data to something else. It's much less powerful than any
number of trivial one-liner scripts now, and real exporters would not
ever use git-export.
It's obviously much less powerful than "git-whatchanged", or just
about any combination of git-rev-list + git-diff-tree.
We generate the ASCII representation of our internal date representation
("seconds since 1970, UTC + timezone information") in two different
places.
One of them uses the stupid and obvious way to make sure that it gets the
sexagesimal representation right for negative timezones even if they might
not be exact hours, and the other one depends on the modulus operator
always matching the sign of the argument.
Hey, the clever one works. And C90 even specifies that behaviour. But I
had to think about it for a while when I was re-visiting this area, and
even if I didn't have to, it's kind of strange to have two different ways
to print out the same data format.
So use a common helper for this. And select the stupid and straightforward
way.
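For illustration, the straightforward way sketched in shell rather than
the actual C helper, for a timezone offset of -270 minutes (i.e. -0430):

offset=-270
sign=+
if test "$offset" -lt 0
then
	sign=-
	offset=$((-offset))
fi
printf '%s%02d%02d\n' "$sign" $((offset / 60)) $((offset % 60))
# prints -0430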
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds several new keybindings to allow history and selectline
navigation. I basically added Opera-like history traversal, as well
as left-right-cursor history traversal and vi-like motion commands.
Signed-off-by: Robert Suetterlin <robert@mpe.mpg.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
Randal L. Schwartz noticed that 'make install' does not rebuild what
is installed. Make the 'install' rule depend on 'man'.
I also noticed that 'touch' of the source files was used to express
include dependencies, which is a no-no. Rewrite it to do dependencies
properly, and add missing include dependencies while we are at it.
Unlike write_sha1_file(), which tries to create the object file in a
temporary location and then move it to the final location, fetch_object()
could be interrupted in the middle, leaving a corrupt file behind.
Clarify dual license status of subprocess.py file.
The author of the file we stole from the Python 2.4 distribution, Peter
Astrand <astrand@lysator.liu.se>, OK'ed adding this at the end of the
licensing terms section of the file:
Use of this file within git is permitted under GPLv2.
[PATCH] Teach "git-rev-parse" about date-based cut-offs
This adds the options "--since=date" and "--before=date" to git-rev-parse,
which knows how to translate them into seconds since the epoch for
git-rev-list.
With this, you can do
git log --since="2 weeks ago"
or
git log --until=yesterday
to show the commits that have happened in the last two weeks or are
older than 24 hours, respectively.
The flags "--after=" and "--before" are synonyms for --since and --until,
and you can combine them, so
git log --after="Aug 5" --before="Aug 10"
is a valid (but strange) thing to do.
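For example, git-rev-parse translates the date into an argument that
git-rev-list understands (the exact timestamp depends on when you run
it):

git-rev-parse --since="2 weeks ago"
# prints something like: --max-age=1125000000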
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>