"git log -F -E --grep='<ere>'" failed to use the given <ere>
pattern as an extended regular expression, and instead looked for the
string literally. The early part of this series is a fix for it;
the latter part teaches log to respect the grep.* configuration.
* jc/grep-pcre-loose-ends:
log: honor grep.* configuration
log --grep: accept --basic-regexp and --perl-regexp
log --grep: use the same helper to set -E/-F options as "git grep"
revisions: initialize revs->grep_filter using grep_init()
grep: move pattern-type bits support to top-level grep.[ch]
grep: move the configuration parsing logic to grep.[ch]
builtin/grep.c: make configuration callback more reusable
If you remove a submodule, its real repository is kept in $GIT_DIR
of the superproject, so that "git checkout" to an older commit in
the superproject history can resurrect the submodule. Because of
this, a later "git submodule add $path" that adds a different
submodule at the same path will fail. Diagnose this case a bit
better, and if the user really wants to add an unrelated submodule
at the same path, let the "--name" option give it a place in
$GIT_DIR of the superproject that does not conflict with the
original submodule.
* jl/submodule-add-by-name:
submodule add: Fail when .git/modules/<name> already exists unless forced
Teach "git submodule add" the --name option
"git rm submodule" cannot blindly remove a submodule directory as
its working tree may have local changes, and worse yet, it may even
have its repository embedded in it. Teach it some special cases
where it is safe to remove a submodule: specifically, when there are
no local changes in the submodule working tree, and its repository
is not embedded in its working tree but is elsewhere and uses the
gitfile mechanism to point at it.
* jl/submodule-rm:
submodule: teach rm to remove submodules unless they contain a git directory
Speeds up "git upload-pack" (what is invoked by "git fetch" on the
other side of the connection) by reducing the cost of advertising the
branches and tags that are available in the repository.
* jk/peel-ref:
upload-pack: use peel_ref for ref advertisements
peel_ref: check object type before loading
peel_ref: do not return a null sha1
peel_ref: use faster deref_tag_noverify
* fa/remote-svn:
Add a test script for remote-svn
remote-svn: add marks-file regeneration
Add a svnrdump-simulator replaying a dump file for testing
remote-svn: add incremental import
remote-svn: Activate import/export-marks for fast-import
Create a note for every imported commit containing svn metadata
vcs-svn: add fast_export_note to create notes
Allow reading svn dumps from files via file:// urls
remote-svn, vcs-svn: Enable fetching to private refs
When debug==1, start fast-import with "--stats" instead of "--quiet"
Add documentation for the 'bidi-import' capability of remote-helpers
Connect fast-import to the remote-helper via pipe, adding 'bidi-import' capability
Add argv_array_detach and argv_array_free_detached
Add svndump_init_fd to allow reading dumps from arbitrary FDs
Add git-remote-testsvn to Makefile
Implement a remote helper for svn in C
strbuf: always return a non-NULL value from strbuf_detach
The current behavior is to return NULL when strbuf did not
actually allocate a string. This can be quite surprising to
callers, though, who may feed the strbuf from arbitrary data
and expect to always get a valid value.
In most cases, it does not make a difference because calling
any strbuf function will cause an allocation (even if the
function ends up not inserting any data). But if the code is
structured like:
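(a minimal sketch; "cond" and "data" here are hypothetical)

	struct strbuf buf = STRBUF_INIT;
	if (cond)
		strbuf_addstr(&buf, data);	/* may or may not allocate */
	return strbuf_detach(&buf, NULL);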
then you may or may not return NULL, depending on the
condition. This can cause us to segfault in http-push
(when fed an empty URL) and in http-backend (when an empty
parameter like "foo=bar&&" is in the $QUERY_STRING).
This patch forces strbuf_detach to allocate an empty
NUL-terminated string when it is called on a strbuf that has
not been allocated.
I investigated all call-sites of strbuf_detach. The majority
are either not affected by the change (because they call a
strbuf_* function unconditionally), or can handle the empty
string just as easily as NULL.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge tag 'gitgui-0.17.0' of git://repo.or.cz/git-gui
git-gui 0.17.0
* tag 'gitgui-0.17.0' of git://repo.or.cz/git-gui:
git-gui 0.17
git-gui: Don't prepend the prefix if value looks like a full path
git-gui: Detect full path when parsing arguments
git-gui: remove .git/CHERRY_PICK_HEAD after committing
git-gui: Fix a loose/lose mistake
git-gui: Fix semi-working shortcuts for unstage and revert
git-gui: de.po: translate "remote" as "extern"
git-gui: de.po: translate "bare" as "bloß"
git-gui: de.po: consistently add untranslated hook names within braces
git-gui: preserve commit messages in utf-8
git-gui: open console when using --trace on windows
git-gui: fix a typo in po/ files
git-gui: Use PWD if it exists on Mac OS X
git-gui: fix git-gui crash due to uninitialized variable
git-gui: Don't prepend the prefix if value looks like a full path
When argument parsing fails to detect a file name, "git-gui" will try to
use the previously detected "head" as the file name. We should avoid
prepending the prefix if "head" looks like a full path.
Signed-off-by: Andrew Wong <andrew.kw.w@gmail.com>
Signed-off-by: Pat Thoyts <patthoyts@users.sourceforge.net>
When running "git-gui blame" from a subfolder (which means prefix is
non-empty), if we pass a full path as argument, the argument parsing
will fail to recognize the argument as a file name, because prefix is
prepended to the argument.
This patch handles that scenario by adding an additional branch that
checks the file name without using the prefix.
Signed-off-by: Andrew Wong <andrew.kw.w@gmail.com>
Signed-off-by: Pat Thoyts <patthoyts@users.sourceforge.net>
* po/maint-docs:
Doc branch: show -vv option and alternative
Doc clean: add See Also link
Doc add: link gitignore
Doc: separate gitignore pattern sources
Doc: shallow clone deepens _to_ new depth
* jc/ll-merge-binary-ours:
ll-merge: warn about inability to merge binary files only when we can't
attr: "binary" attribute should choose built-in "binary" merge driver
merge: teach -Xours/-Xtheirs to binary ll-merge driver
maybe_flush_or_die: move a too-loose Windows-specific error check to compat
Commit b2f5e268 (Windows: Work around an oddity when a pipe with no reader
is written to) introduced a check for EINVAL after fflush() to fight
spurious "Invalid argument" errors on Windows when a pipe was broken. But
this check may hide real errors on systems that do not have this odd
behavior. Introduce an fflush wrapper in compat/mingw.* so that the treatment
is only applied on Windows.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we get an http 401, we prompt for credentials and put
them in our global credential struct. We also feed them to
the curl handle that produced the 401, with the intent that
they will be used on a retry.
When the code was originally introduced in commit 42653c0,
this was a necessary step. However, since dfa1725, we always
feed our global credential into every curl handle when we
initialize the slot with get_active_slot. So every further
request already feeds the credential to curl.
Moreover, accessing the slot here is somewhat dubious. After
the slot has produced a response, we don't actually control
it any more. If we are using curl_multi, it may even have
been re-initialized to handle a different request.
It just so happens that we will reuse the curl handle within
the slot in such a case, and that because we only keep one
global credential, it will be the one we want. So the
current code is not buggy, but it is misleading.
By cleaning it up, we can remove the slot argument entirely
from handle_curl_result, making it much more obvious that
slots should not be accessed after they are marked as
finished.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit b81401c (http: prompt for credentials on failed POST)
taught post_rpc to call run_slot in a loop in order to retry
a request after asking the user for credentials. However,
after a call to run_slot we will have called
finish_active_slot. This means we have released the slot,
and we should no longer look at it.
As it happens, this does not cause any bugs in the current
code, since we know that we are not using curl_multi in this
code path, and therefore nobody will have taken over our
slot in the meantime. However, it is good form to actually
call get_active_slot again. It also future-proofs us against
changes in the http code.
We can do this by jumping back to a retry label at the top
of our function. We just need to reorder a few setup lines
that should not be repeated; everything else within the loop
is either idempotent, needs to be repeated, or in a path we
do not follow (e.g., we do not even try when large_request
is set, because we don't know how much data we might have
streamed from our helper program).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we create an http active_request_slot, we can set its
"results" pointer back to local storage. The http code will
fill in the details of how the request went, and we can
access those details even after the slot has been cleaned
up.
Commit 8809703 (http: factor out http error code handling)
switched us from accessing our local results struct directly
to accessing it via the "results" pointer of the slot. That
means we're accessing the slot after it has been marked as
finished, defeating the whole purpose of keeping the results
storage separate.
Most of the time this doesn't matter, as finishing the slot
does not actually clean up the pointer. However, when using
curl's multi interface with the dumb-http revision walker,
we might actually start a new request before handing control
back to the original caller. In that case, we may reuse the
slot, zeroing its results pointer, and leading the original
caller to segfault while looking for its results inside the
slot.
Instead, we need to pass a pointer to our local results
storage to the handle_curl_result function, rather than
relying on the pointer in the slot struct. This matches what
the original code did before the refactoring (which did not
use a separate function, and therefore just accessed the
results struct directly).
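In code terms, the call site ends up looking roughly like this (a
sketch only; the exact declarations live in http.[ch]):

	struct slot_results results;
	struct active_request_slot *slot = get_active_slot();
	int ret;

	slot->results = &results;	/* the http layer reports into our storage */
	/* ... run the request to completion ... */
	ret = handle_curl_result(&results);	/* read our copy, not slot->results */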
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
grep: stop looking at random places for .gitattributes
grep searches for .gitattributes using the "name" field in struct
grep_source, but that field is not a real on-disk path name. For example,
"grep pattern rev" fills the field with "rev:path", and Git looks for
.gitattributes in the (non-existent but exploitable) path "rev:path"
instead of "path".
This patch passes real paths down to grep_source_load_driver() when:
- grep on work tree
- grep on the index
- grep a commit (or a tag if it points to a commit)
so that these cases look up .gitattributes at proper paths.
.gitattributes lookup is disabled in all other cases.
Initial-work-by: Jeff King <peff@peff.net>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
p4merge does not properly handle the case where "/dev/null"
is passed as a filename.
Work it around by creating a temporary file for this purpose.
Reported-by: Jeremy Morton <admin@game-point.net>
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
test-lib: Fix say_color () not to interpret \a\b\c in the message
When running with color disabled (e.g. under prove to produce TAP
output), the say_color() helper function is defined to use echo to show
the message. With a message that ends with "\c", echo is allowed to
interpret it as "Do not end the line with LF".
That sets up for later tests that fetch the branch and check that git
svn mangles the refname appropriately.
Unfortunately, modern svn versions interpret path arguments with an
@-sign as an example of path@revision syntax (which pegs a path to a
particular revision) and truncate the path or error out with message
"svn: E205000: Syntax error parsing peg revision '{0}reflog'".
When using subversion 1.6.x, escaping the @ sign as %40 avoids trouble
(see 08fd28bb, 2010-07-08). Newer versions are stricter.
The recommended method for escaping a literal @ sign in a path passed
to subversion is to add an empty peg revision at the end of the path
("branches/not-a@{0}reflog@"). Do that.
Pre-1.6.12 versions of Subversion probably treat the trailing @ as
another literal @-sign (svn issue 3651). Luckily ever since
v1.8.0-rc0~155^2~7 (t9118: workaround inconsistency between SVN
versions, 2012-07-28) the test can survive that.
Tested with Debian Subversion 1.6.12dfsg-6 and 1.7.5-1 and r1395837
of Subversion trunk (1.8.x).
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Tested-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Eric Wong <normalperson@yhbt.net>
git svn: work around SVN 1.7 mishandling of svn:special changes
Subversion represents symlinks as ordinary files with content starting
with "link " and the svn:special property set to "*". Thus a file can
switch between being a symlink and a non-symlink simply by toggling
its svn:special property, and new checkouts will automatically write a
file of the appropriate type. Likewise, in subversion 1.6 and older,
running "svn update" would notice changes in filetype and update the
working copy appropriately.
Starting in subversion 1.7 (issue 4091), changes to the svn:special
property trip an assertion instead:
$ svn up svn-tree
Updating 'svn-tree':
svn: E235000: In file 'subversion/libsvn_wc/update_editor.c' \
line 1583: assertion failed (action == svn_wc_conflict_action_edit \
|| action == svn_wc_conflict_action_delete || action == \
svn_wc_conflict_action_replace)
Revisions prepared with ordinary svn commands ("svn add" and not "svn
propset") don't trip this because they represent these filetype
changes using a replace operation, which is approximately equivalent
to removal followed by adding a new file and works fine. Follow suit.
Noticed using t9100. After this change, git-svn's file-to-symlink
changes are sent in a format that modern "svn update" can handle and
tests t9100.11-13 pass again.
[ew: s,git-svn\.perl,perl/Git/SVN/Editor.pm,g]
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Eric Wong <normalperson@yhbt.net>
MALLOC_CHECK: Allow checking to be disabled from config.mak
The malloc checks can be disabled using the TEST_NO_MALLOC_CHECK
variable, either from the environment or command line of an
'make test' invocation. In order to allow the malloc checks to be
disabled from the 'config.mak' file, we add TEST_NO_MALLOC_CHECK
to the environment using an export directive.
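For example (illustrative; any non-empty value is assumed to
suffice), one could put this line in config.mak to skip the checks:

	TEST_NO_MALLOC_CHECK = YesPlease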
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
log: honor grep.* configuration
Now that the grep_config() callback is reusable from other configuration
callbacks, call it from git_log_config() so that grep.patterntype
and friends can be used with the commands in the "git log" family.
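The resulting cascade in builtin/log.c looks roughly like this (a
sketch; the real function also handles log's own variables):

	static int git_log_config(const char *var, const char *value, void *cb)
	{
		/* ... log-specific variables ... */
		if (grep_config(var, value, cb) < 0)
			return -1;
		return git_diff_ui_config(var, value, cb);
	}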
log --grep: accept --basic-regexp and --perl-regexp
When we added the "--perl-regexp" option (or "-P") to "git grep", we
should have done the same for the commands in the "git log" family,
but somehow we forgot to do so. This corrects it, but we will
reserve the short-and-sweet "-P" option for something else for now.
Also introduce the "--basic-regexp" option for completeness, so that
the "last one wins" principle can be used to defeat an earlier -E
option, e.g. "git log -E --basic-regexp --grep='<bre>'". Note that
it cannot have the short "-G" option as the option is to grep in the
patch text in the context of "log" family.
log --grep: use the same helper to set -E/-F options as "git grep"
The command line option parser for "git log -F -E --grep='<ere>'"
did not flip the "fixed" bit, violating the general "last option
wins" principle among conflicting options.
revisions: initialize revs->grep_filter using grep_init()
Instead of using the hand-rolled initialization sequence,
use grep_init() to populate the necessary bits. This opens
the door to allow the calling commands to optionally read
grep.* configuration variables via git_config() if they
want to.
grep: move pattern-type bits support to top-level grep.[ch]
Switching between -E/-G/-P/-F correctly needs a lot more than just
flipping the opt->regflags bit these days, and we have a nice helper
function buried in builtin/grep.c for the sole use of "git grep".
Extract it so that "log --grep" family can also use it.
grep: move the configuration parsing logic to grep.[ch]
The configuration handling is a library-ish part of this program,
that is not specific to the "git grep" command. It should be reusable
by "log" and others.
builtin/grep.c: make configuration callback more reusable
The grep_config() function takes one instance of grep_opt as its
callback parameter, and populates it by running git_config().
This has three practical implications:
- You have to have an instance of grep_opt already when you call
the configuration, but that is not necessarily always true. You
may be trying to initialize the grep_filter member of rev_info,
but are not ready to call init_revisions() on it yet.
- It is not easy to enhance grep_config() in such a way to make it
cascade to other callback functions to grab other variables in
one call of git_config(); grep_config() can be cascaded into from
other callbacks, but it has to be at the leaf level of a cascade.
- If you ever need to use more than one instance of grep_opt, you
will have to open and read the configuration file(s) every time
you initialize them.
Rearrange the configuration mechanism and model it after how diff
configuration variables are handled. An early call to git_config()
reads and remembers the values taken from the configuration in the
default "template", and a separate call to grep_init() uses this
template to instantiate a grep_opt.
The next step will be to move some of this out of this file so that
the other user of the grep machinery (i.e. "log") can use it.
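Schematically, the arrangement is something like this (an illustrative
sketch, not the exact code that ends up in grep.[ch]):

	static struct grep_opt grep_defaults;	/* the configured "template" */

	/* an early git_config(grep_config, ...) call fills grep_defaults
	 * with the values of the grep.* variables */

	void grep_init(struct grep_opt *opt, const char *prefix)
	{
		*opt = grep_defaults;	/* instantiate from the template */
		opt->prefix = prefix;
	}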
40bfbde ("build: don't duplicate substitution of make variables",
2012-09-11) by mistake removed a necessary comma at the end of
"CC_LD_DYNPATH=-Wl,rpath," in line 414.
When executing "./configure --with-zlib=PATH", this resulted in
[...]
CC xdiff/xhistogram.o
AR xdiff/lib.a
LINK git-credential-store
/usr/bin/ld: bad -rpath option
collect2: ld returned 1 exit status
make: *** [git-credential-store] Error 1
$
during make.
Signed-off-by: Øyvind A. Holm <sunny@sunbase.org>
Acked-by: Stefano Lattarini <stefano.lattarini@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These tests just want a bit-for-bit identical copy; they do not need
even -H (there is no symbolic link involved) nor -p (there are no
funny permission or ownership issues involved).
Just use "cp -R" instead.
Signed-off-by: Ben Walton <bdwalton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git url doc: mark ftp/ftps as read-only and deprecate them
It is not even worth mentioning their removal; just discourage
people from using them.
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fmt-merge-msg" (an internal helper reduce_heads() it uses) had
a severe performance regression; an empty "git pull" took forever to
finish as the result.
* jc/merge-bases-paint-fix:
paint_down_to_common(): parse commit before relying on its timestamp
Merge branch 'jk/receive-pack-unpack-error-to-pusher' into maint
"git receive-pack" (the counterpart to "git push") did not give
progress output while processing objects it received to the puser
when run over the smart-http protocol.
* jk/receive-pack-unpack-error-to-pusher:
receive-pack: drop "n/a" on unpacker errors
receive-pack: send pack-processing stderr over sideband
receive-pack: redirect unpack-objects stdout to /dev/null
A repository created with "git clone --single" had its fetch
refspecs set up just like a clone without "--single", leading the
subsequent "git fetch" to slurp all the other branches, defeating
the whole point of specifying "only this branch".
* rt/maint-clone-single:
clone --single: limit the fetch refspec to fetched branch
gitignore.txt: suggestions how to get literal # or ! at the beginning
We support backslash escape, but we hide the details behind the phrase
"a shell glob suitable for consumption by fnmatch(3)". So it may not
be obvious how one can get a literal # or ! at the beginning of a pattern.
Add a few lines on how to work around the magic characters.
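For instance, the escape lets patterns like these match file names
that literally begin with those characters (illustrative examples):

	\#not-a-comment
	\!important.txt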
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a test script for remote-svn
Use svnrdump_sim.py to emulate svnrdump without an svn server.
Tests fetching, incremental fetching, fetching from file://,
and the regeneration of fast-import's marks file.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote-svn: add marks-file regeneration
fast-import mark files are stored outside the object database; they
are therefore not fetched and can also be lost in other ways. Marks
provide an svn revision --> git sha1 mapping, while the notes that
are attached to each commit when it is imported provide a git sha1
--> svn revision mapping.
If the marks file is not available or not plausible, regenerate it by
walking through the notes tree. The plausibility check tests whether
the highest revision in the marks file matches the revision of the
top ref; it does not ensure that the marks file is completely correct.
This could only be done with an effort equal to unconditional
regeneration.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a svnrdump-simulator replaying a dump file for testing
To ease testing without depending on a reachable svn server, this
compact python script mimics parts of svnrdump's behaviour. It
requires the remote url to start with sim://.
Start and end revisions are evaluated. If the requested revision
doesn't exist, as is the case with an incremental import when no new
commit was added, it returns 1 (like svnrdump).
To allow using the same dump file for simulating multiple incremental
imports, the highest revision can be limited by setting the environment
variable SVNRMAX to that value. This simulates the situation where
higher revs don't exist yet.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote-svn: add incremental import
Search for a note attached to the ref to update and read its
'Revision-number:' line. Start the import from the next svn revision.
If there is no next revision in the svn repo, svnrdump terminates
with a message on stderr and a non-zero return value. This looks a little
weird, but there is no other way to know whether there is a new
revision in the svn repo.
At the start of an incremental import, the parent of the first commit
in the fast-import stream is set to the branch name to update. All
following commits specify their parent by a mark number. Previous mark
files are currently not reused.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote-svn: Activate import/export-marks for fast-import
Enable import and export of a marks file by sending the appropriate
feature commands to fast-import before sending data.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a note for every imported commit containing svn metadata
To provide metadata from svn dumps for further processing, e.g.
branch detection, attach a note to each imported commit that stores
additional information. The notes are currently hard-coded to be stored
in refs/notes/svn/revs. For now, a handful of lines from the svn dump
are accumulated directly in the note; this can be refined as needed.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
vcs-svn: add fast_export_note to create notes
fast_export lacked a way to write notes to the fast-import stream.
Add two new functions: fast_export_note, which is similar to
fast_export_modify, and fast_export_buf_to_data, which makes it
possible to write inline blobs that don't come from a line_buffer
or from delta application.
Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Allow reading svn dumps from files via file:// urls
For testing as well as for importing large, already available dumps,
it's useful to bypass svnrdump and replay the svndump from a file
directly.
Add support for file:// urls in the remote url, e.g.
svn::file:///path/to/dump
When the remote helper finds a url starting with file://, it tries to
open that file instead of invoking svnrdump.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote-svn, vcs-svn: Enable fetching to private refs
The reference to be updated by the fast-import stream is hard-coded.
When fetching from a remote, the remote helper should update refs in
a private namespace, i.e. a private subdirectory of refs/. This
namespace is defined by the 'refspec' capability, which the remote
helper advertises as a reply to the 'capabilities' command.
Extend svndump and fast-export to allow passing the target ref.
Update svn-fe to be compatible.
Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
Acked-by: David Michael Barr <b@rr-dav.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>