Being able to try multiple strategies and automatically pick the one
that seems to give the least conflicting result may or may not make
much sense in practice. At least it should not force the normal use
case to additionally require the working tree to be fully clean. As
Linus shouted, local changes do not matter unless they interfere with
the merge.
This commit changes git-merge not to require a clean working tree.
Only when we iterate through more than one merge strategy are local
changes stashed away before trying the first merge, and restored before
the second and later merges are attempted.
The index file must be in sync with HEAD in any case -- otherwise the
merge result would contain changes that were made since HEAD locally
and registered in the index. This check is already enforced by the
three-way read-tree the existing merge strategies use, but it is done
here as a safeguard as well.
[PATCH] Add "git-update-ref" to update the HEAD (or other) ref
This is a careful version of the script stuff that currently just
blindly writes HEAD with a new value.
You can use
git-update-ref HEAD <newhead>
or
git-update-ref HEAD <newhead> <oldhead>
where the latter version verifies that the old value of HEAD matches
oldhead.
It basically allows a "ref" file to be a symbolic pointer to another ref
file by starting with the four-byte header sequence of "ref:".
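For illustration, a symbolic ref file of this kind would contain a
single line looking something like (branch name hypothetical):
    ref: refs/heads/master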
More importantly, it allows the update of a ref file to follow these
symbolic pointers, whether they are symlinks or these "regular file
symbolic refs".
NOTE! It follows _real_ symlinks only if they start with "refs/":
otherwise it will just try to read them and update them as a regular file
(ie it will allow the filesystem to follow them, but will overwrite such a
symlink to somewhere else with a regular filename).
In general, using
git-update-ref HEAD "$head"
should be a _lot_ safer than doing
echo "$head" > "$GIT_DIR/HEAD"
both from a symlink following standpoint _and_ an error checking
standpoint. The "refs/" rule for symlinks means that symlinks that point
to "outside" the tree are safe: they'll be followed for reading but not
for writing (so we'll never write through a ref symlink to some other
tree, if you have copied a whole archive by creating a symlink tree).
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] recursive-merge: Don't print a stack trace when read-tree fails.
If the working tree is dirty read-tree will fail, and we don't want an
ugly stack trace in that case. Also make sure we don't print stack
traces when we use 'die'.
Signed-off-by: Fredrik Kuivinen <freku045@student.liu.se> Signed-off-by: Junio C Hamano <junkio@cox.net>
When many paths are modified, rename detection takes a lot of time.
The new option -l<num> can be used to disable rename detection when
more than <num> paths are possibly created as renames.
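For example, an invocation along the lines of (trees hypothetical)
    git-diff-tree -r -M -l100 <tree1> <tree2>
would skip rename detection as soon as more than 100 paths look like
they could have been created as renames.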
Now we conditionally compile things in compat/, so we should remove
object files there. Python execution can leave *.pyc and *.pyo, which
need to be cleaned as well.
People typically say 'grep -e $pattern' because $pattern has a leading
dash which would be mistaken as a grep flag. Make sure we pass -e in
front of $pattern when we invoke grep.
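In other words, instead of the fragile
    grep "$pattern" "$file"
the scripts now effectively run
    grep -e "$pattern" "$file"
so a $pattern such as '-foo' is taken as a pattern, not as an option.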
Even without the trouble it causes to people without GNU xargs,
it was not really necessary to print from Perl and then remove it
outside. Just unlink it inside Perl.
Well, this makes it even more clear that we need the packet reader and
friends to use the daemon logging code. :/ Therefore, we at least indicate
in the "Disconnect" log message if the child process exitted with an error
code or not.
Idea by Linus.
Signed-off-by: Petr Baudis <pasky@suse.cz> Signed-off-by: Junio C Hamano <junkio@cox.net>
Further clarify licensing status of compat/subprocess.py.
The PSF license explicitly states that the files in the Python
distribution are compatible with the GPL, and upstream clarified the
licensing terms by shortening the file header. This version is a
verbatim copy from the release24-maint branch of Python CVS.
The change I made to rsh.c would leave the string unterminated under
certain conditions, which unfortunately always applied! This patch
fixes this. For some reason this never bit on i386 or ppc, but bit me
on x86-64.
Fix situation where the buffer was not properly null-terminated.
Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] git-local-fetch: Avoid confusing error messages on packed repositories
If the source repository was packed, and git-local-fetch needed to
fetch a pack file, it spewed a misleading error message about not
being able to find the unpacked object. Fixed by adding the
warn_if_not_exists argument to copy_file(), which controls printing
of error messages in case the source file does not exist.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Fix "git-local-fetch -s" with packed source repository
"git-local-fetch -s" did not work with a packed repository, because
symlink() happily created a link to a non-existing object file,
therefore fetch_file() always returned success, and fetch_pack() was
not called. Fixed by calling stat() before symlink() to ensure the
file really exists.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
After open() failure, copy_file() called close(ifd) with ifd == -1
(harmless, but causes Valgrind noise). The same thing was possible
for the destination file descriptor.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
Make 'git diff --cached' synonymous to 'git diff --cached HEAD'.
When making changes to different files (i.e. dirty working tree) and
committing logically separate changes in groups, often it is necessary
to run 'git diff --cached HEAD' to make sure that the changes being
committed make sense. Saying 'git diff --cached' by mistake gives a
rather uninformative error message from git-diff-files complaining that
it does not understand the --cached flag.
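With this change
    git diff --cached
behaves the same as
    git diff --cached HEAD
and shows what would be committed from the current index.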
[PATCH] fetch.c: Remove call to parse_object() from process()
The call to parse_object() in process() is not actually needed - if
the object type is unknown, parse_object() will be called by loop();
if the type is known, the object will be parsed by the appropriate
process_*() function.
After this change blobs which exist locally are no longer parsed,
which gives about 2x CPU usage improvement; the downside is that there
will be no warnings for existing corrupted blobs, but detecting such
corruption is the job of git-fsck-objects, not the fetch programs.
Newly fetched objects are still checked for corruption in http-fetch.c
and ssh-fetch.c (local-fetch.c does not seem to do it, but the removed
parse_object() call would not be reached for new objects anyway).
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] fetch.c: Remove some duplicated code in process()
It does not matter if we call prefetch() or set the TO_SCAN flag before
or after adding the object to process_queue. However, doing it before
object_list_insert() allows us to kill 3 lines of duplicated code.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
The TO_FETCH flag also became redundant after adding the SEEN flag -
it was set and checked in process() to prevent adding the same object
to process_queue multiple times, but now SEEN guards against this.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
After adding the SEEN flag, the SCANNED flag became obviously
redundant - each object can get into process_queue through process()
only once, and therefore multiple calls to process_object() for the
same object are not possible.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] fetch.c: Make process() look at each object only once
The process() function is very often called multiple times for the
same object (because lots of trees refer to the same blobs), but did
not have a fast check for this, therefore a lot of useless calls to
has_sha1_file() and parse_object() were made before discovering that
nothing needs to be done.
This patch adds the SEEN flag which is used in process() to make it
look at each object only once. When testing git-local-fetch on the
repository of GIT, this gives a 14x improvement in CPU usage (mainly
because the redundant calls to parse_object() are now avoided -
parse_object() always unpacks and parses the object data, even if it
was already parsed before).
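The heart of it is an early return at the top of process(); a minimal
sketch of the idea (flag value and details simplified, not the exact
fetch.c code):

    /* sketch only; not the actual fetch.c definitions */
    #define SEEN (1u << 0)

    struct object {
            unsigned flags;
            /* ... type, sha1, and so on ... */
    };

    static int process(struct object *obj)
    {
            if (obj->flags & SEEN)
                    return 0;       /* this object was queued already */
            obj->flags |= SEEN;
            /* queue the object and decide whether to fetch or scan it */
            return 0;
    }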
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] fetch.c: Remove useless lookup_object_type() call in process()
In all places where process() is called except the one in pull() (which
is executed only once) the pointer to the object is already available,
so pass it as the argument to process() instead of sha1 and avoid an
unneeded call to lookup_object_type().
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Retitle 'inspecting what happened' section.
In the tutorial, there is a section entitled "Checking it out"
that shows how to use diff, log and whatchanged to inspect some
of the repository state.
As the phrase "checkout" usually carries some baggage WRT
other revision control mechanisms, I suggest that we re-title
this section something like "Inspecting Changes".
Signed-off-by: Jon Loeliger <jdl@freescale.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
Including the current branch in the list of heads being merged
was not a good idea, so drop it. And shorten the message by
grouping branches and tags together to form a single line.
This patch makes git-daemon --verbose log some useful things on stderr -
in particular connects, disconnects and upload requests, and in such a
way that a particular session can be traced. Some more errors are now
also logged (even when --verbose is not passed). It is still not
perfect, since messages produced by the non-daemon-specific code are
obviously not formatted properly.
[jc: With minor fix up in the log line truncation, and
use of write(2) as suggested by Linus.]
Signed-off-by: Petr Baudis <pasky@suse.cz> Signed-off-by: Junio C Hamano <junkio@cox.net>
Some old scripts might still use git-rev-tree, but it is so clearly
inferior in every way to git-rev-list that such scripts should be fixed
anyway. Fixing them should be pretty easy.
git-export was done as a concept example on how easy it is to export
the git data to something else. It's much less powerful than any
number of trivial one-liner scripts now, and real exporters would not
ever use git-export.
It's obviously much less powerful than "git-whatchanged", or just
about any combination of git-rev-list + git-diff-tree.
We generate the ASCII representation of our internal date representation
("seconds since 1970, UTC + timezone information") in two different
places.
One of them uses the stupid and obvious way to make sure that it gets
the sexagesimal representation right for negative timezones even if they
might not be exact hours, and the other one depends on the modulus
operator always matching the sign of its argument.
Hey, the clever one works. And C90 even specifies that behaviour. But I
had to think about it for a while when I was re-visiting this area, and
even if I didn't have to, it's kind of strange to have two different ways
to print out the same data format.
So use a common helper for this. And select the stupid and
straightforward way.
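The straightforward way splits off the sign before doing the
hour/minute arithmetic; roughly (a sketch, assuming the timezone offset
is kept in minutes east of UTC; the real helper may differ in details):

    #include <stdio.h>

    /* sketch only; buf needs room for "+HHMM" plus the NUL */
    static void show_tz(char *buf, int offset)
    {
            int sign = '+';

            if (offset < 0) {
                    offset = -offset;
                    sign = '-';
            }
            sprintf(buf, "%c%02d%02d", sign, offset / 60, offset % 60);
    }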
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
Randal L. Schwartz noticed that 'make install' does not rebuild what
is installed. Make the 'install' rule depend on 'man'.
I also noticed that 'touch' of the source files was used to express
include dependencies, which is a no-no. Rewrite it to do dependencies
properly, and add missing include dependencies while we are at it.
Unlike write_sha1_file(), which creates the object file in a temporary
location and then moves it to the final location, fetch_object() could
be interrupted in the middle, leaving a corrupt file behind.
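The safer pattern, which write_sha1_file() already follows, looks
roughly like this (a simplified sketch with hypothetical names):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* write to a temporary file first, then move it into place */
    static int save_object(const char *tmpfile, const char *finalfile,
                           const void *buf, size_t size)
    {
            int fd = open(tmpfile, O_WRONLY | O_CREAT | O_EXCL, 0666);

            if (fd < 0)
                    return -1;
            if (write(fd, buf, size) != (ssize_t)size) {
                    close(fd);
                    unlink(tmpfile);
                    return -1;
            }
            close(fd);
            return rename(tmpfile, finalfile);  /* atomic replacement */
    }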
Clarify dual license status of subprocess.py file.
The author of the file we stole from the Python 2.4 distribution, Peter
Astrand <astrand@lysator.liu.se>, OK'ed adding this at the end of the
licensing terms section of the file:
Use of this file within git is permitted under GPLv2.
[PATCH] Teach "git-rev-parse" about date-based cut-offs
This adds the options "--since=date" and "--before=date" to git-rev-parse,
which knows how to translate them into seconds since the epoch for
git-rev-list.
With this, you can do
git log --since="2 weeks ago"
or
git log --until=yesterday
to show the commits that have happened in the last two weeks or are
older than 24 hours, respectively.
The flags "--after=" and "--before" are synonyms for --since and --until,
and you can combine them, so
git log --after="Aug 5" --before="Aug 10"
is a valid (but strange) thing to do.
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
This is my ARM assembly SHA1 implementation for GIT. It is approximately
50% faster than the generic C version. On an XScale processor running at
400MHz:
generic C version: 9.8 MB/s
my version: 14.5 MB/s
It's not that I expect a lot of big GIT users on ARM, but I still know
about one important ARM user that might benefit from it, and writing
that code was fun.
I also reworked the makefile a bit so that any optimized SHA1
implementation is used regardless of whether NO_OPENSSL is defined or not.
Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Make the git-fsck-objects diagnostics more useful
Actually report what exactly is wrong with the object, instead of an
ambiguous 'bad sha1 file' or such. In places where we already do, unify
the format and clean the messages up.
Signed-off-by: Petr Baudis <pasky@suse.cz> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Return proper error value from "parse_date()"
Right now we don't return any error value at all from parse_date(), and if
we can't parse it, we just silently leave the result buffer unchanged.
That's fine for the current user, which will always default to the current
date, but it's a crappy interface, and we might well be better off with an
error message rather than just the default date.
So let's change the thing to return a negative value if an error occurs,
and the length of the result otherwise (snprintf behaviour: if the buffer
is too small, it returns how big it _would_ have been).
[ I started looking at this in case we could support date-based revision
names. Looks ugly. Would have to parse relative dates.. ]
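With the new return value a caller can notice the failure instead of
silently falling back; a sketch (hypothetical caller, using the
parse_date(date, buf, maxlen) interface described above):

    #include <stdio.h>

    extern int parse_date(const char *date, char *buf, int maxlen);

    static void show_date_or_complain(const char *datestr)
    {
            char buf[64];

            if (parse_date(datestr, buf, sizeof(buf)) < 0)
                    fprintf(stderr, "unparseable date: %s\n", datestr);
            else
                    printf("%s\n", buf);
    }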
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
Add -m/--modified to show files that have been modified wrt. the index.
[jc: The original came from Brian Gerst on Sep 1st but it only checked
if the paths were cache dirty without actually checking the files were
modified. I also added the usage string and a new test.]
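For example
    git-ls-files -m
now lists the working tree files whose contents differ from what is
recorded in the index.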
If the length in the stat information does not match what is recorded
in the index, there is no point rehashing the contents to see if the
index entry can be refreshed.
We need to be a bit careful. Immediately after read-tree or
checkout-index without -u, ce_size is set to zero and does not match
the length of the blob that is recorded, and we need to actually look
at the contents to see if it has been changed.
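In code terms the shortcut is roughly the following (a sketch, not the
actual cache-entry code):

    #include <sys/stat.h>

    /*
     * Sketch: if the size recorded in the index does not match what
     * stat() reports, the file is modified for sure and rehashing the
     * contents is pointless.  A recorded size of zero means "unknown"
     * (e.g. right after read-tree without -u), so in that case we
     * still have to look at the contents.
     */
    static int definitely_modified(unsigned int ce_size,
                                   const struct stat *st)
    {
            return ce_size && ce_size != (unsigned int)st->st_size;
    }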
Use GECOS field a bit better to produce default human readable name.
This updates the default human readable name we generate from GECOS
field. We assume the "full-name, followed by additional information
separated by commas" format, with an & expanding to the capitalized
login name.
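For example, with a login name of 'alice' and a (hypothetical) GECOS
field of
    & Liddell,Room 101,555-0100
the default human readable name becomes 'Alice Liddell'.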
Somehow I missed it when we updated read-tree to support the recursive
merge strategy. Also -i should require -m as well, which the command
did not check.
[PATCH] Documentation: Add asciidoc.conf file and gitlink: macro
Introduce an asciidoc.conf file with the purpose of adding a gitlink:
macro which will improve the manpage output.
Original cogito patch by Jonas Fonseca <fonseca@diku.dk>;
asciidoc.conf from that patch was further enhanced to use the proper
DocBook tag <citerefentry> for references to man pages.
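With the macro in place the documentation sources can say something like
    gitlink:git-rev-list[1]
and the reference is turned into a proper <citerefentry> in the DocBook
output, and hence into a real cross reference in the manpage.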
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Junio C Hamano <junkio@cox.net>
The base target directory for copying the templates was initialized to
git_dir, but git_dir[len] is '/' rather than NUL at the time we do the
initialization. This is not what we want for our target directory string,
since we pass it to mkdir(), so zero-terminate it manually.
Signed-off-by: Petr Baudis <pasky@suse.cz> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Remove total confusion from "git checkout"
The target to check out does not need to be a branch. The _result_ of the
checkout needs to be a branch. Don't confuse the two, and then insult the
user.
Insulting is ok, but I personally get really pissed off if a tool is both
confused and insulting. At least be _correct_ and insulting.
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
This makes git-update-index error reporting much less confusing. The
user will know what went wrong with better precision, and will be given
hopefully less confusing advice.
Signed-off-by: Petr Baudis <pasky@suse.cz> Signed-off-by: Junio C Hamano <junkio@cox.net>
This fixes everybody's favourite complaint about "git add", namely that
it doesn't take directories.
We use "git-ls-files --others" to generate an arbitrary list of filenames,
and thus also automatically honor ignore-files while we're at it.
Side note: there's a lot of room for improvement here. In particular, if
we have a long list of filenames (importing a big archive), this will just
do a big stupid for-loop and add them one at a time. Maybe it should use
generate-list | xargs -0 git-update-index --add --
instead.
Also, I think we should have a default ignore list if we don't find a
.git/info/exclude file. Ignoring "*.o" and ".*" by default would probably
be the right thing to do.
But I think this is a good first step.
Use the "-n" flag to just show the list of files to be added without
adding them.
Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH] Support alternates and http-alternates in http-fetch
This allows the remote repository to refer to additional repositories
in a file objects/info/http-alternates or objects/info/alternates.
Each line may be:
- a relative path, starting with ../, to get from the objects directory
  of the starting repository to the objects directory of the added
  repository;
- an absolute path of the objects directory of the added repository (on
  the same server);
- (only in http-alternates) a full URL of the objects directory of the
  added repository.
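For example, an objects/info/http-alternates file could contain lines
like these (paths and URL hypothetical):

    ../../parent-project.git/objects
    /pub/git/parent-project.git/objects
    http://git.example.com/parent-project.git/objects/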
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
The recent safety check to trust only the commits we have made things
impossibly slow and turned out to waste a lot of memory.
This commit fixes it with the following improvements:
- mark already scanned objects and avoid rescanning the same
object again;
- free the tree entries once we have scanned them;
this is the same as b0d8923ec01fd91b75ab079034f89ced91500157
which reduced memory usage by rev-list;
- plug memory leak from the object_list dequeuing code;
- use the process_queue not just for fetching but for scanning,
to make things tail recursive to avoid deep recursion; the
deep recursion was especially prominent when we cloned a big
pack.
- avoid has_sha1_file() call when we already know we do not have
that object.
Using "git repack -a -d" can destroy your git archive if you use it
twice in succession, because the new pack can be called the same as
the old pack. Found by Linus.