t5510: start tracking-ref tests from a known state
We have three sequential tests for whether tracking refs
are updated by various fetches and pulls; the first two
should not update the ref, and the third should. Each test
depends on the state left by the test before.
This is fragile (a failing early test will confuse later
tests), and means we cannot add more "should update" tests
after the third one.
Let's instead save the initial state before these tests, and
then reset to a known state before running each test.
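A sketch of the pattern this aims for, using the test framework's
helpers (the ref and branch names are illustrative, not the ones t5510
actually uses):

    test_expect_success 'record expected tracking state' '
        git rev-parse --verify refs/remotes/origin/master >expect
    '

    # every later test first restores the known state
    test_expect_success 'pull from a path does not update tracking ref' '
        git update-ref refs/remotes/origin/master $(cat expect) &&
        git pull . side &&
        git rev-parse --verify refs/remotes/origin/master >actual &&
        test_cmp expect actual
    '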
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
gitk: Display the date of a tag in a human-friendly way
By selecting a tag within gitk you can display information about it.
This information is output by using the command
'git cat-file tag <tagid>'
This outputs the *raw* information from the tag, amongst which is the
time - in seconds since the epoch. As useful as that value is, I find it
a lot easier to read and process a time when it is formatted like:
"Mon Dec 31 14:26:11 2012 -0800"
This change will modify the display of tags in gitk like so:
@@ -1,7 +1,7 @@
object 5d417842efeafb6e109db7574196901c4e95d273
type commit
tag v1.8.1
-tagger Junio C Hamano <gitster@pobox.com> 1356992771 -0800
+tagger Junio C Hamano <gitster@pobox.com> Mon Dec 31 14:26:11 2012 -0800
Git 1.8.1
-----BEGIN PGP SIGNATURE-----
Signed-off-by: Anand Kumria <wildfire@progsoc.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The drop-down lists used for things like the criteria for finding
commits (containing/touching paths/etc.) use a combobox if we are
using the ttk widgets. By default the combobox exports its value
as the selection when it is changed, which is unnecessary, and sometimes
the combobox wouldn't release the selection, which is annoying.
To fix this, we make these comboboxes not export their selection,
and also clear their selection whenever they are changed. This makes
them more like a simple selection of alternatives, improving the look
and feel of gitk.
Commit 664059f (transport-helper: update remote helper namespace)
updates the namespace whether or not the push succeeds; we should do it
only when the push succeeds.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
CodingGuidelines: Documentation/*.txt are the sources
People not familiar with AsciiDoc may not realize they are
supposed to update *.txt files and not *.html/*.1 files when
preparing patches to the project.
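For instance, to tweak a manual page, edit the AsciiDoc source and
regenerate the formatted output from it (assuming the usual targets in
Documentation/Makefile):

    $EDITOR Documentation/git-commit.txt
    make -C Documentation git-commit.1   # or: make -C Documentation man html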
Signed-off-by: Dale Worley <worley@ariadne.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
Git 1.8.2.3
t5004: avoid using tar for checking emptiness of archive
t5004: ignore pax global header file
mergetools/kdiff3: do not use --auto when diffing
transport-helper: trivial style cleanup
cherry-pick: picking a tag that resolves to a commit is OK
Earlier, 21246dbb9e0a (cherry-pick: make sure all input objects are
commits, 2013-04-11) tried to catch an unlikely "git cherry-pick $blob"
as an error, but broke a more important use case: cherry-picking a
tag that points at a commit.
t5004: avoid using tar for checking emptiness of archive
Test 2 of t5004 checks if a supposedly empty tar archive really
contains no files. 24676f02 (t5004: fix issue with empty archive test
and bsdtar) removed our commit hash to make it work with bsdtar, but
the test still fails on NetBSD and OpenBSD, which use their own tar
that considers a tar file containing only NULs as broken.
Here's what the different archivers do when asked to create a tar
file without entries:
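(Roughly; gtar here is GNU tar, bsdtar is libarchive's tar, plain tar is
NetBSD's native one, and the exact flag for reading an empty file list
may vary per implementation.)

    $ tar cf zero.tar -T /dev/null     # NetBSD tar: zero.tar ends up 0 bytes
    $ gtar cf tenk.tar -T /dev/null    # GNU tar: tenk.tar is 10KB of NULs
    $ bsdtar cf tenk.tar -T /dev/null  # bsdtar: likewise 10KB of NULs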
So NetBSD's native tar creates an empty file, while GNU tar and bsdtar
both give us 10KB of NULs -- just like git archive with an empty tree.
Now let's see how the archivers handle these two kinds of empty tar
files:
$ tar tf zero.tar; echo $?
tar: Unexpected EOF on archive file
1
$ gtar tf zero.tar; echo $?
gtar: This does not look like a tar archive
gtar: Exiting with failure status due to previous errors
2
$ bsdtar tf zero.tar; echo $?
0
$ tar tf tenk.tar; echo $?
tar: Cannot identify format. Searching...
tar: End of archive volume 1 reached
tar: Sorry, unable to determine archive format.
1
$ gtar tf tenk.tar; echo $?
0
$ bsdtar tf tenk.tar; echo $?
0
NetBSD's tar complains about both, bsdtar happily accepts any of them
and GNU tar doesn't like zero-length archive files. So the safest
course of action is to stay with our block-of-NULs format which is
compatible with GNU tar and bsdtar, as we can't make NetBSD's native
tar happy anyway.
We can simplify our test, however, by taking tar out of the picture.
Instead of extracting the archive and checking for the non-presence of
files, check if the file has a size of 10KB and contains only NULs.
This makes t5004 pass on NetBSD and OpenBSD.
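A sketch of that kind of check (not the exact t5004 code; empty.tar
stands for the archive under test):

    test $(wc -c <empty.tar) -eq 10240 &&
    test $(tr -d "\000" <empty.tar | wc -c) -eq 0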
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a test to verify the emptiness of an archive by extracting its
contents. Don't run this test if the version of tar doesn't support
archives containing only a comment header, though.
The existing check 'tar archive of empty tree is empty' used to work
like that (minus the tar capability check) but was changed to depend
on the exact representation of empty tar files created by git archive
instead of on the behaviour of tar in order to avoid issues with
different tar versions.
The different approaches test different things: The existing one is
for empty trees, for which we know the exact expected output and thus
we can simply check it without extracting; the new one is for commits
with empty trees, whose archives include stamps and so the more
"natural" check by extraction is a better fit because it focuses on
the interesting aspect, namely the absence of any archive entries.
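A rough sketch of such a guarded, extraction-based check (archive and
directory names are illustrative; the test suite's prerequisite
machinery is the natural home for the tar capability probe):

    if tar tf commit.tar >/dev/null 2>&1
    then
        mkdir extracted &&
        (cd extracted && tar xf ../commit.tar) &&
        test -z "$(ls -A extracted)"
    fi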
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Versions of tar that don't know pax headers -- like the ones in NetBSD 6
and OpenBSD 5.2 -- extract them as regular files. Explicitly ignore the
file created for our global header when checking the list of extracted
files, as this is normal and harmless fall-back behaviour. This fixes
test 3 of t5004 on these platforms.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `kdiff3 --auto` help message is, "No GUI if all conflicts are auto-
solvable." This flag was carried over from the original mergetool
commands. diff_cmd() is for two-way comparisons only, so remove the
superfluous flag.
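A sketch of the resulting scriptlet, assuming the usual mergetool
variables ($merge_tool_path, $LOCAL, $REMOTE, $MERGED) and kdiff3's
--L1/--L2 label options:

    diff_cmd () {
        # two-way diff; there are no conflicts to auto-solve, so no --auto
        "$merge_tool_path" \
            --L1 "$MERGED (A)" --L2 "$MERGED (B)" \
            "$LOCAL" "$REMOTE" >/dev/null 2>&1
    }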
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The SVN::Fetcher module is now able to filter for inclusion as well
as exclusion (as used by --ignore-paths). Tests, documentation changes
and a git completion script update are also included.
If you have an SVN repository with many top level directories and you
only want a git-svn clone of some of them then using --ignore-path is
difficult as it requires a very long regexp. In this case it's much
easier to filter for inclusion.
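For example, assuming the new option is spelled --include-paths to
mirror --ignore-paths (the URL and regex are made up):

    git svn clone --include-paths="^(gui|cli)/" \
        https://svn.example.org/big-repo project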
[ew: remove trailing whitespace]
Signed-off-by: Paul Walmsley <pjwhams@gmail.com>
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Git::SVN::*: add missing "NAME" section to perldoc
lexgrog(1) relies on the NAME section to find a manpage's subject's
name and description for easy access later using "man -k". Add the
section it expects.
Noticed using lintian.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Eric Wong <normalperson@yhbt.net>
When svn.pushmergeinfo is set, the target branch is included in the
mergeinfo if it was previously merged into one of the source branches.
SVN does not do this.
Remove merge target branch path from resulting mergeinfo when
svn.pushmergeinfo is set to better match the behavior of SVN. Update the
svn-mergeinfo-push test.
[ew: 80 columns]
Signed-off-by: Michael Contreras <michael@inetric.com>
Reported-by: Avishay Lavie <avishay.lavie@gmail.com>
Acked-by: Eric Wong <normalperson@yhbt.net>
Use help.c:help_unknown_ref() instead of die() to provide a
friendlier error message before exiting, when one of the refs
specified in a merge is unknown.
Signed-off-by: Vikrant Varma <vikrant.varma94@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the user gives an unknown string to a command that expects to
get a ref, we could be more helpful than just saying "that's not a
ref" and die.
Add helper function help_unknown_ref() to take care of displaying an
error message along with a list of suggested refs the user might
have meant. An interaction with "git merge" might go like this:
$ git merge foo
merge: foo - not something we can merge
Did you mean one of these?
origin/foo
upstream/foo
Signed-off-by: Vikrant Varma <vikrant.varma94@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It's possible that the previous tip goes away, so we should not assume
it's always present. Fortunately we are only using it to calculate the
progress to display to the user, so only that needs to be fixed.
Also, add a test that triggers this issue.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
clone: allow cloning local paths with colons in them
Usually "foo:bar" is interpreted as an ssh url. This patch allows to
clone from such paths by putting at least one slash before the colon
(i.e. /path/to/foo:bar or just ./foo:bar).
file://foo:bar should also work, but local optimizations are off in
that case, which may be unwanted. While at it, warn the user that
--local is ignored in this case.
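For example, both of these are now taken as local paths rather than ssh
URLs (the destination directory name is arbitrary):

    git clone ./foo:bar foo-bar
    git clone /path/to/foo:bar foo-bar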
Reported-by: William Giokas <1007380@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fast-export: do not parse non-commit objects while reading marks file
We read from the marks file and keep only marked commits, but in
order to find the type of object, we are parsing the whole object,
which is slow, especially in big repositories with lots of big files.
There's no need for that; we can query the object information with
sha1_object_info().
Before this, loading the objects of a fresh emacs import with 260598
blobs took 14 minutes; after this patch, it takes 3 seconds.
This is the way fast-import does it. Also die if the object is not
found (like fast-import).
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-merge-tree causes a null pointer dereference when a directory
entry exists in only one or two of the three trees being compared with
no corresponding entry in the other tree(s).
When this happens, we want to handle the entry as a directory and not
attempt to mark it as a file merge. Do this by setting the entry's bit
in the directory mask when the entry is missing or when it is a
directory, only performing the file comparison when we know that a file
entry exists.
Reported-by: Andreas Jacobsen <andreas@andreasjacobsen.com>
Signed-off-by: John Keeping <john@keeping.me.uk>
Tested-by: Andreas Jacobsen <andreas@andreasjacobsen.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* fc/remote-bzr:
remote-bzr: avoid bad refs
remote-bzr: convert all unicode keys to str
remote-bzr: access branches only when needed
remote-bzr: delay peer branch usage
remote-bzr: iterate revisions properly
remote-bzr: improve progress reporting
remote-bzr: add option to specify branches
remote-bzr: add custom method to find branches
remote-bzr: improve author sanitazion
remote-bzr: add support for shared repo
remote-bzr: fix branch names
remote-bzr: add support for bzr repos
remote-bzr: use branch variable when appropriate
remote-bzr: fix partially pushed merge
remote-bzr: fixes for branch diverge
remote-bzr: add support to push merges
remote-bzr: always try to update the worktree
remote-bzr: fix order of locking in CustomTree
remote-bzr: delay blob fetching until the very end
remote-bzr: cleanup CustomTree
Versions of fast-export before v1.8.2 throw bad 'reset' commands
because of a behavior in transport-helper that is not even needed.
We should ignore them; otherwise we will treat them as branches and
fail.
This was fixed in v1.8.2, but some people use this script in older
versions of git.
Also, check if the ref was a tag, and skip it for now.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 54bb901 (t/Makefile: fix result handling with
TEST_OUTPUT_DIRECTORY - 2013-04-26) incorrectly defined
TEST_RESULTS_DIRECTORY relative to itself, when it should be relative to
TEST_OUTPUT_DIRECTORY. Fix this.
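For example (assuming TEST_OUTPUT_DIRECTORY is passed on the make
command line; the path is arbitrary), results should now land under the
chosen output directory rather than under t/:

    make -C t TEST_OUTPUT_DIRECTORY=/tmp/git-tests
    ls /tmp/git-tests/test-results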
Signed-off-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If there's already a remote-helper tracking ref, we can fetch the
SHA-1 to report proper push messages (as opposed to always reporting
[new branch]).
The remote-helper currently can specify the old SHA-1 to avoid this
problem, but there's no point in forcing all remote-helpers to be aware
of git commit ids; they should be able to remain agnostic of them.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge branch 'tr/remote-tighten-commandline-parsing' into maint
* tr/remote-tighten-commandline-parsing:
remote: 'show' and 'prune' can take more than one remote
remote: check for superfluous arguments in 'git remote add'
remote: add a test for extra arguments, according to docs
t5500: add test for fetching with an unknown 'shallow'
When the client sends a 'shallow' line for an object that the server does
not have, the server should just ignore it and let the client keep that
unknown shallow boundary.
Signed-off-by: Michael Heemskerk <mheemskerk@atlassian.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The lookup_object function is backed by a hash table of all
objects we have seen in the program. We manage collisions
with a linear walk over the colliding entries, checking each
with hashcmp(). The main cost of lookup is in these
hashcmp() calls; finding our item in the first slot is
cheaper than finding it in the second slot, which is cheaper
than the third, and so on.
If we assume that there is some locality to the object
lookups (e.g., if X and Y collide, and we have just looked
up X, the next lookup is more likely to be for X than for
Y), then we can improve our average lookup speed by checking
X before Y.
This patch does so by swapping a found item to the front of
the collision chain. The p0001 perf test reveals that this
does indeed exploit locality in the case of "rev-list --all
--objects":
Test                              origin           this tree
-------------------------------------------------------------------------
0001.1: rev-list --all            0.40(0.38+0.02)  0.40(0.36+0.03)  +0.0%
0001.2: rev-list --all --objects  2.24(2.17+0.05)  1.86(1.79+0.05)  -17.0%
This is not surprising, as the full object traversal will
hit the same tree entries over and over (e.g., for every
commit that doesn't change "Documentation/", we will have to
look up the same sha1 just to find out that we already
processed it).
The reason why this technique works (and does not violate
any properties of the hash table) is subtle and bears some
explanation. Let's imagine we get a lookup for sha1 `X`, and
it hashes to bucket `i` in our table. That stretch of the
table may look like:
index       | i-1 |  i  | i+1 | i+2 |
            -------------------------
entry   ... |  A  |  B  |  C  |  X  | ...
            -------------------------
We start our probe at i, see that B does not match, nor does
C, and finally find X. There may be multiple C's in the
middle, but we know that there are no empty slots (or else
we would not find X at all).
We do not know the original index of B; it may be `i`, or it
may be less than i (e.g., if it were `i-1`, it would collide
with A and spill over into the `i` bucket). So it is
acceptable for us to move it to the right of a contiguous
stretch of entries (because we will find it from a linear
walk starting anywhere at `i` or before), but never to the
left (if we moved it to `i-1`, we would miss it when
starting our walk at `i`).
We do know the original index of X; it is `i`, so it is safe
to place it anywhere in the contiguous stretch between `i`
and where we found it (`i+2` in this case).
This patch does a pure swap; after finding X in the
situation above, we would end with:
index       | i-1 |  i  | i+1 | i+2 |
            -------------------------
entry   ... |  A  |  X  |  C  |  B  | ...
            -------------------------
We could instead bump X into the `i` slot, and then shift
the whole contiguous chain down by one, resulting in:
index       | i-1 |  i  | i+1 | i+2 |
            -------------------------
entry   ... |  A  |  X  |  B  |  C  | ...
            -------------------------
That puts our chain in true most-recently-used order.
However, experiments show that it is not any faster (and in
fact, is slightly slower due to the extra manipulation).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Hold the ref_cache for the main repository in a dedicated,
statically-allocated instance to avoid the need for a function call
and a linked-list traversal when it is needed.
Suggested-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack_one_ref(): use write_packed_entry() to do the writing
Change pack_refs() to work with a file descriptor instead of a FILE*
(making the file-locking code less awkward) and use
write_packed_entry() to do the writing.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change pack_one_ref() to call peel_entry() rather than using its own
code for peeling references. Aside from sharing code, this lets it
take advantage of the optimization introduced by 6c4a060d7d.
Please note that we *could* use any peeled values that happen to
already be stored in the ref_entries, which would avoid some object
lookups for references that were already packed. But doing so would
also propagate any peeling errors across runs of "git pack-refs" and
give no way to recover from such errors. And "git pack-refs" isn't
run often enough that the performance cost is a problem. So instead,
add a new option to peel_entry() to force the entry to be re-peeled,
and call it with that option set.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Function do_not_prune() was redundantly checking REF_ISSYMREF, which
was already tested at the top of pack_one_ref(), so remove that check.
And the rest was trivial, so inline the function.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack_refs() was not using any of the extra features of for_each_ref(),
so change it to use do_for_each_entry(). This also gives it access to
the ref_entry and in particular its peeled field, which will be taken
advantage of in the next commit.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack-refs: merge code from pack-refs.{c,h} into refs.{c,h}
pack-refs.c doesn't contain much code, and the code it does contain is
closely related to reference handling. Moreover, there is some
duplication between pack_refs() and repack_without_ref(). Therefore,
merge pack-refs.c into refs.c and pack-refs.h into refs.h.
The code duplication will be addressed in future commits.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
repack_without_ref(): write peeled refs in the rewritten file
When a reference that existed in the packed-refs file is deleted, the
packed-refs file must be rewritten. Previously, the file was
rewritten without any peeled refs, even if the file contained peeled
refs when it was read. This was not a bug, because the packed-refs
file header didn't claim that the file contained peeled values. But
it had a performance cost, because the repository would lose the
benefit of having precomputed peeled references until pack-refs was
run again.
Teach repack_without_ref() to write peeled refs to the packed-refs
file (regardless of whether they were present in the old version of
the file).
This means that if the old version of the packed-refs file was not
fully peeled, then repack_without_ref() will have to peel references.
To avoid the expense of reading lots of loose references, we take two
shortcuts relative to pack-refs:
* If the peeled value of a reference is already known (i.e., because
it was read from the old version of the packed-refs file), then
output that peeled value again without any checks. This is the
usual code path and should avoid any noticeable overhead. (This is
different than pack-refs, which always re-peels references.)
* We don't verify that the packed ref is still current. It could be
that a packed reference is overridden by a loose reference, in
which case the packed ref is no longer needed and might even refer
to an object that has been garbage collected. But we don't check;
instead, we just try to peel all references. If peeling is
successful, the peeled value is written out (even though it might
not be needed any more); if not, then the reference is silently
omitted from the output.
The extra overhead of peeling references in repack_without_ref()
should only be incurred the first time the packed-refs file is written
by a version of Git that knows about the "fully-peeled" attribute.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t3211: demonstrate loss of peeled refs if a packed ref is deleted
Add a test that demonstrates that the peeled values recorded in
packed-refs are lost if a packed ref is deleted. (The code in
repack_without_ref() doesn't even attempt to write peeled refs.) This
will be fixed in a moment.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a function remove_ref(), which removes a single entry from a
reference cache.
Use this function to reimplement repack_without_ref(). The old
version iterated over all refs, packing all of them except for the one
to be deleted, then discarded the entire packed reference cache. The
new version deletes the doomed reference from the cache *before*
iterating.
This has two advantages:
* the code for writing packed-refs becomes simpler, because it doesn't
have to exclude one of the references.
* it is no longer necessary to discard the packed refs cache after
deleting a reference: symbolic refs cannot be packed, so packed
references cannot depend on each other, so the rest of the packed
refs cache remains valid after a reference is deleted.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
search_ref_dir(): return an index rather than a pointer
Change search_ref_dir() to return the index of the sought entry (or -1
on error) rather than a pointer to the entry. This will make it more
natural to use the function for removing an entry from the list.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
repack_without_ref(): silence errors for dangling packed refs
Stop emitting an error message when deleting a packed reference if we
find another dangling packed reference that is overridden by a loose
reference. See the previous commit for a longer explanation of the
issue.
We have to be careful to make sure that the invalid packed reference
really *is* overridden by a loose reference; otherwise what we have
found is repository corruption, which we *should* report.
Please note that this approach is vulnerable to a race condition
similar to the race conditions already known to affect packed
references [1]:
* Process 1 tries to peel packed reference X as part of deleting
another packed reference. It discovers that X does not refer to a
valid object (because the object that it referred to has been
garbage collected).
* Process 2 tries to delete reference X. It starts by deleting the
loose reference X.
* Process 1 checks whether there is a loose reference X. There is not
(it has just been deleted by process 2), so process 1 reports a
spurious error "X does not point to a valid object!"
The worst case seems relatively harmless, and the fix is identical to
the fix that will be needed for the other race conditions (namely
holding a lock on the packed-refs file during *all* reference
deletions), so we leave the cleaning up of all of them as a future
project.
t3210: test for spurious error messages for dangling packed refs
A packed reference can be overridden by a loose reference, in which
case the packed reference is obsolete and is never used. The object
pointed to by such a reference can be garbage collected. Since d66da478f2, this could lead to the emission of a spurious error
message:
error: refs/heads/master does not point to a valid object!
The error is generated by repack_without_ref() if there is an obsolete
dangling packed reference in packed-refs when the packed-refs file has
to be rewritten due to the deletion of another packed reference. Add
a failing test demonstrating this problem and some passing tests of
related scenarios.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Establish an internal API for iterating over references, which gives
the callback functions direct access to the ref_entry structure
describing the reference. (Do not change the iteration API that is
exposed outside of the module.)
Define a new internal callback signature
int each_ref_entry_fn(struct ref_entry *entry, void *cb_data)
Change do_for_each_ref_in_dir() and do_for_each_ref_in_dirs() to
accept each_ref_entry_fn callbacks, and rename them to
do_for_each_entry_in_dir() and do_for_each_entry_in_dirs(),
respectively. Adapt their callers accordingly.
Add a new function do_for_each_entry() analogous to do_for_each_ref()
but using the new callback style.
Change do_one_ref() into an each_ref_entry_fn that does some
bookkeeping and then calls a wrapped each_ref_fn.
Reimplement do_for_each_ref() in terms of do_for_each_entry(), using
do_one_ref() as an adapter.
Please note that the responsibility for setting current_ref remains in
do_one_ref(), which means that current_ref is *not* set when iterating
over references via the new internal API. This is not a disadvantage,
because current_ref is not needed by callers of the internal API (they
receive a pointer to the current ref_entry anyway). But more
importantly, this change prevents peel_ref() from returning invalid
results in the following scenario:
When iterating via the external API, the iteration always includes
both packed and loose references, and in particular never presents a
packed ref if there is a loose ref with the same name. The internal
API, on the other hand, gives the option to iterate over only the
packed references. During such an iteration, there is no check
whether the packed ref might be hidden by a loose ref of the same
name. But until now the packed ref was recorded in current_ref during
the iteration. So if peel_ref() were called with the reference name
corresponding to the current ref, it would return the peeled version of
the packed ref even though there might be a loose ref that peels to a
different value. This scenario doesn't currently occur in the code,
but fix it to prevent things from breaking in a very confusing way in
the future.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a function, peel_entry(), that peels the entry and, as a side
effect, stores the peeled value in the entry. Use this function from
two places in peel_ref(); a third caller will be added soon.
Please note that this change can lead to ref_entries for unpacked refs
being peeled. This has no practical benefit but is harmless.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
peel_ref(): fix return value for non-peelable, not-current reference
The old version was inconsistent: when a reference was
REF_KNOWS_PEELED but with a null peeled value, it returned non-zero
for the current reference but zero for other references. Change the
behavior for non-current references to match that of current_ref,
which is what callers expect. Document the behavior.
Current callers only call peel_ref() from within a for_each_ref-style
iteration and only for the current ref; therefore, the buggy code path
was never reached.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
peel_object(): give more specific information in return value
Instead of just returning a success/failure bit, return an enumeration
value that explains the reason for any failure. This will come in
handy shortly.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of copying the reference's SHA1 into a caller-supplied
variable, just return the ref_entry itself (or NULL if there is no
such entry). This change will allow the function to be used from
elsewhere.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* rs/pp-user-info-without-extra-allocation:
pretty: remove intermediate strbufs from pp_user_info()
pretty: simplify output line length calculation in pp_user_info()
pretty: simplify input line length calculation in pp_user_info()
* tr/remote-tighten-commandline-parsing:
remote: 'show' and 'prune' can take more than one remote
remote: check for superfluous arguments in 'git remote add'
remote: add a test for extra arguments, according to docs
contrib/subtree: don't delete remote branches if split fails
When using "git subtree push" to split out a subtree and push it to a
remote repository, we do not detect if the split command fails which
causes the LHS of the refspec to be empty, deleting the remote branch.
Fix this by pulling the result of the split command into a variable so
that we can die if the command fails.
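A sketch of the pattern (variable names loosely follow git-subtree.sh;
die is the helper provided by git-sh-setup):

    rev=$(git subtree split --prefix="$prefix" HEAD) || die "split failed"
    git push "$repository" "$rev:refs/heads/$branch"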
Reported-by: Steffen Jaeckel <steffen.jaeckel@stzedn.de>
Signed-off-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bazaar doesn't seem to be well tested for repeated use of branches, so
resources seem to be leaked all over. Let's try to minimize this by
accessing the Branch objects only when needed.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>