The sequencer infrastructure is shared across "git cherry-pick",
"git rebase -i", etc., and has always spawned "git commit" when it
needs to create a commit. It has been taught to do so internally,
when able, by reusing the codepath "git commit" itself uses, which
gives a performance boost of a few tens of percent in some sample
scenarios.
* pw/sequencer-in-process-commit:
sequencer: run 'prepare-commit-msg' hook
t7505: add tests for cherry-pick and rebase -i/-p
t7505: style fixes
sequencer: assign only free()able strings to gpg_sign
sequencer: improve config handling
t3512/t3513: remove KNOWN_FAILURE_CHERRY_PICK_SEES_EMPTY_COMMIT=1
sequencer: try to commit without forking 'git commit'
sequencer: load commit related config
sequencer: simplify adding Signed-off-by: trailer
commit: move print_commit_summary() to libgit
commit: move post-rewrite code to libgit
Add a function to update HEAD after creating a commit
commit: move empty message checks to libgit
t3404: check intermediate squash messages
* nd/shared-index-fix:
read-cache: don't write index twice if we can't write shared index
read-cache.c: move tempfile creation/cleanup out of write_shared_index
read-cache.c: change type of "temp" in write_shared_index()
The split-index mode had a few corner case bugs fixed.
* tg/split-index-fixes:
travis: run tests with GIT_TEST_SPLIT_INDEX
split-index: don't write cache tree with null oid entries
read-cache: fix reading the shared index for other repos
The http tracing code, often used to debug connection issues,
learned to redact potentially sensitive information from its output
so that it can be shared more safely.
* jt/http-redact-cookies:
http: support omitting data from traces
http: support cookie redaction when tracing
The tracing machinery learned to report tweaking of environment
variables as well.
* nd/trace-with-env:
run-command.c: print new cwd in trace_run_command()
run-command.c: print env vars in trace_run_command()
run-command.c: print program 'git' when tracing git_cmd mode
run-command.c: introduce trace_run_command()
trace.c: move strbuf_release() out of print_trace_line()
trace: avoid unnecessary quoting
sq_quote_argv: drop maxlen parameter
* ks/submodule-doc-updates:
Doc/git-submodule: improve readability and grammar of a sentence
Doc/gitsubmodules: make some changes to improve readability and syntax
The machinery to clone & fetch, which in turn involves packing and
unpacking objects, has been told how to omit certain objects using
the filtering mechanism introduced by the jh/object-filtering
topic, and also mark the resulting pack as a promisor pack to
tolerate missing objects, taking advantage of the mechanism
introduced by the jh/fsck-promisors topic.
* jh/partial-clone:
t5616: test bulk prefetch after partial fetch
fetch: inherit filter-spec from partial clone
t5616: end-to-end tests for partial clone
fetch-pack: restore save_commit_buffer after use
unpack-trees: batch fetching of missing blobs
clone: partial clone
partial-clone: define partial clone settings in config
fetch: support filters
fetch: refactor calculation of remote list
fetch-pack: test support excluding large blobs
fetch-pack: add --no-filter
fetch-pack, index-pack, transport: partial clone
upload-pack: add object filtering for partial clone
In preparation for implementing narrow/partial clone, the machinery
for checking object connectivity used by gc and fsck has been
taught that a missing object is OK when it is referenced by a
packfile specially marked as coming from a trusted repository that
promises to make them available on demand and lazily.
* jh/fsck-promisors:
gc: do not repack promisor packfiles
rev-list: support termination at promisor objects
sha1_file: support lazily fetching missing objects
introduce fetch-object: fetch one promisor object
index-pack: refactor writing of .keep files
fsck: support promisor objects as CLI argument
fsck: support referenced promisor objects
fsck: support refs pointing to promisor objects
fsck: introduce partialclone extension
extension.partialclone: introduce partial clone extension
The build procedure for the perl/ part has been greatly simplified by
weaning ourselves off of MakeMaker.
* ab/simplify-perl-makefile:
perl: treat PERLLIB_EXTRA as an extra path again
perl: avoid *.pmc and fix Error.pm further
Makefile: replace perl/Makefile.PL with simple make rules
We moved away from SHA1_HEADER to a preprocessor if chain, but didn't
update the comment discussing the platform defines. Update this comment
so it reflects the current state of our codebase.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
353d84c537 (coccicheck: make transformation for strbuf_addf(sb, "...")
more precise) added a check to avoid transforming calls with format
strings which contain percent signs, as that would change the result.
It uses embedded Python code for that. Simplify this rule by using the
regular expression matching operator instead.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
reset --hard currently uses its own logic for printing the first line of
the commit message in its output. Instead of just using the first line,
use the pretty machinery to create the output.
In addition to making the code easier to follow, this makes the output more
consistent with other commands that print the title of the commit, such
as 'git commit --oneline' or 'git checkout', which both use
'pp_commit_easy()' with the CMIT_FMT_ONELINE modifier.
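For illustration, a minimal C sketch of the idea follows; it is not the
literal builtin/reset.c hunk (the abbreviated-hash handling of the real
print_new_head_line() is elided, and pp_commit_easy(), CMIT_FMT_ONELINE
and the strbuf API are the only names taken from the tree):

    static void print_title(struct commit *commit)
    {
        struct strbuf buf = STRBUF_INIT;

        /* fill "buf" with the commit title, the same way 'git checkout' does */
        pp_commit_easy(CMIT_FMT_ONELINE, commit, &buf);
        if (buf.len > 0)
            printf(" %s", buf.buf);
        putchar('\n');
        strbuf_release(&buf);
    }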
It is a slight change of the output if the second line of the commit
message is not a blank line, i.e. if the commit message is
    foo
    bar
previously we would print "HEAD is now at 000000 foo", while after
this change we print "HEAD is now at 000000 foo bar", same as 'git log
--oneline' shows "000000 foo bar".
So this does make the output more consistent with other commands, and
'reset' is a porcelain command, so nobody should be parsing the output
in scripts.
The current behaviour dates back to 0e5a7faa3a ("Make "git reset" a
builtin.", 2007-09-11), so I assume (without digging into the old
codebase too much) that the logic was implemented because there was
no convenience function such as 'pp_commit_easy' that would do this
already.
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is much easier to diff the output against a previous
one when the fields are sorted.
Helped-by: Philip Oakley <philipoakley@iee.org>
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes it easier to use the aggregate script
on the command line when one wants to get the
"environment" fields set in the codespeed output.
Previously setting GIT_REPO_NAME was needed
for this purpose.
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Convert various uses of direct calls to SHA-1 and 20- and 40-based
constants to use the_hash_algo instead. Don't yet convert the on-disk
data structures, which will be handled in a future commit.
Adjust some comments so as not to refer explicitly to SHA-1.
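As a hedged sketch of the pattern these conversions follow (the_hash_algo,
its init_fn/update_fn/final_fn members and GIT_MAX_RAWSZ are the real
abstraction; the helper below is an illustration, not a hunk from this
patch):

    /* formerly: git_SHA1_Init/Update/Final into an unsigned char[20] */
    static void hash_buffer(const void *buf, size_t len, unsigned char *hash)
    {
        git_hash_ctx ctx;

        the_hash_algo->init_fn(&ctx);
        the_hash_algo->update_fn(&ctx, buf, len);
        the_hash_algo->final_fn(hash, &ctx); /* hash must hold GIT_MAX_RAWSZ bytes */
    }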
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack-write: switch various SHA-1 values to abstract forms
Convert various uses of hardcoded 20- and 40-based numbers to use
the_hash_algo, along with direct calls to SHA-1. Adjust the names of
variables to refer to "hash" instead of "sha1".
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack-check: convert various uses of SHA-1 to abstract forms
Convert various explicit calls to use SHA-1 functions and constants to
references to the_hash_algo. Make several strings more generic with
respect to the hash algorithm used.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
builtin/index-pack: improve hash function abstraction
Convert several uses of unsigned char [20] to struct object_id and
convert various hard-coded constants and uses of SHA-1 functions to use
the_hash_algo.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In various parts of our code, we want to allocate a structure
representing the internal state of a hash algorithm. The original
implementation of the hash algorithm abstraction assumed we would do
that using heap allocations, and added a context size element to struct
git_hash_algo. However, most of the existing code uses stack
allocations and conversion would needlessly complicate various parts of
the code. Add a union for the purpose of allocating hash contexts on
the stack and a typedef for ease of use. Use this union for defining
the init, update, and final functions to avoid casts. Remove the ctxsz
element for struct git_hash_algo, which is no longer very useful.
This does mean that stack allocations will grow slightly as additional
hash functions are added, but this should not be a significant problem,
since we don't allocate many hash contexts. The improved usability and
benefits from avoiding dynamic allocation outweigh this small downside.
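The shape of the change, as a hedged sketch modeled on hash.h (the exact
member list at this point in history is an assumption):

    union git_hash_ctx {
        git_SHA_CTX sha1;   /* contexts for additional algorithms are added here */
    };
    typedef union git_hash_ctx git_hash_ctx;

    typedef void (*git_hash_init_fn)(git_hash_ctx *ctx);
    typedef void (*git_hash_update_fn)(git_hash_ctx *ctx, const void *in, size_t len);
    typedef void (*git_hash_final_fn)(unsigned char *hash, git_hash_ctx *ctx);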
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of the other code dealing with SHA-1 and other hashes is located in
hash.h, which is in turn loaded by cache.h. Move the SHA-1 macros to
hash.h as well, so we can use them in additional hash-related items in
the future.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
trace: measure where the time is spent in the index-heavy operations
All the known heavy code blocks are measured (except object database
access). This should help identify if an optimization is effective or
not. An unoptimized git-status would give something like below:
`fnmatch(3)` is a useful reference if the intended audience is
programmers. For normal users it's probably better to spell out what
a shell glob is.
This paragraph is updated to roughly tell (or remind) what the main
wildcards are supposed to do. All the details are still hidden away
behind the `fnmatch(3)` wall because bringing the whole specification
here may be too much.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Mark the newly added test which creates test files on-disk as
EXPENSIVE_ON_WINDOWS. According to [1] it takes almost ten minutes to
run this test file on Windows after this recent change, but just a few
seconds on Linux as noted in my [2].
This could be done faster by exiting earlier; however, by using this
pattern we'll emit "skip" lines for each skipped test, making it clear
we're not running a lot of them in the TAP output, at the cost of some
overhead.
test-lib: add an EXPENSIVE_ON_WINDOWS prerequisite
Add an EXPENSIVE_ON_WINDOWS prerequisite to mark those tests which are
very expensive to run on Windows, but cheap elsewhere.
Certain tests that heavily stress the filesystem or run a lot of shell
commands are disproportionately expensive on Windows. This
prerequisite will later be used by a test that runs in 4-8 seconds on
a modern Linux system, but takes almost 10 minutes on Windows.
There's no reason to skip such tests by default on other platforms,
but Windows users shouldn't need to wait around while they finish.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
wildmatch test: create & test files on disk in addition to in-memory
There has never been any full roundtrip testing of what git-ls-files
and other commands that use wildmatch() actually do; rather, we've been
satisfied with just testing the underlying C function.
Due to git-ls-files and friends having their own codepaths before they
call wildmatch() there's sometimes differences in the behavior between
the two. Even when we test for those (as with [1]), there was no one
place where you can review how these two modes differ.
Now there is. We now attempt to create a file called $haystack and
match $needle against it for each pair of $needle and $haystack that
we were passing to test-wildmatch.
If we can't create the file we skip the test. This ensures that we can
run this on all platforms and not maintain some infinitely growing
whitelist of e.g. platforms that don't support certain characters in
filenames.
A notable exception to this is Windows, where due to the reasons
explained in [2] the shellscript emulation layer might fake the
creation of a file such as "*", and "test -e" for it will succeed
since it just got created with some character that maps to "*", but
git ls-files won't be fooled by this.
Thus we need to skip creating certain filenames entirely on Windows;
the list here might be overly aggressive. I don't have access to a
Windows system to test this.
As a result of doing these tests we can now see the cases where these
two ways of testing wildmatch differ:
* Creating a file called 'a[]b' and running ls-files 'a[]b' will show
that file, but wildmatch("a[]b", "a[]b") will not match
* wildmatch() won't match a file called \ against \, but ls-files
will.
* `git --glob-pathspecs ls-files 'foo**'` will match a file
'foo/bba/arr', but wildmatch won't; pathmatch, however, will.
This seems like a bug to me; the two are otherwise equivalent, as
these tests show.
This also reveals the case discussed in [1]: since 2.16.0, '' is now an
error as far as ls-files is concerned, but wildmatch() itself happily
accepts it.
1. 9e4e8a64c2 ("pathspec: die on empty strings as pathspec",
2017-06-06)
wildmatch test: perform all tests under all wildmatch() modes
Rewrite the wildmatch() test suite so that each test now tests all
combinations of the wildmatch() WM_CASEFOLD and WM_PATHNAME flags.
Before this change some test inputs were not tested on
e.g. WM_PATHNAME. Now the function is stress tested on all possible
inputs, and for each input we declare what the result should be if the
mode is case-insensitive, or pathname matching, or case-sensitive or
not matching pathnames.
Also before this change, nothing was testing case-insensitive
non-pathname matching, so I've added that to test-wildmatch.c and made
use of it.
This yields a rather scary patch, but there are no functional changes
here, just more test coverage. Some now-redundant tests were deleted
as a result of this change, since they were now duplicating an earlier
test.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
wildmatch test: use test_must_fail, not ! for test-wildmatch
Use of ! should be reserved for non-git programs that are assumed not
to fail; see README. With this change, only
t/t0110-urlmatch-normalization.sh is still using this anti-pattern.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove the unused fnmatch() test parameter from the wildmatch
test. The code that used to test this was removed in 70a8fc999d ("stop
using fnmatch (either native or compat)", 2014-02-15).
As a --word-diff shows, the only change to the body of the tests is the
removal of the second out of four parameters passed to match().
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
wildmatch test: use a paranoia pattern from nul_match()
Use a pattern from the nul_match() function in t7008-grep-binary.sh to
make sure that we don't just fall through to the "else" if there's an
unknown parameter.
This is something I added in commit 77f6f4406f ("grep: add a test
helper function for less verbose -f \0 tests", 2017-05-20) to grep
tests, which were modeled on these wildmatch tests, and I'm now
porting back to the original wildmatch tests.
I am not using the "say '...'; exit 1" pattern from t0000-basic.sh
because if I fail I want to run the rest of the tests (unless under
-i), and doing this makes sure we do that and don't exit right away
without fully reporting our errors.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
wildmatch test: don't try to vertically align our output
Don't try to vertically align the test output, which is futile anyway
under the TAP output where we're going to be emitting a number for
each test without aligning the test count.
This makes subsequent changes of mine where I'm not going to be
aligning this output as I add new tests easier.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
travis-ci: don't fail if user already exists on 32 bit Linux build job
The 32 bit Linux build job runs in a Docker container, which lends
itself to running and debugging locally, too. Especially during
debugging one usually doesn't want to start with a fresh container
every time, to save time spent on installing a bunch of dependencies.
However, that doesn't work quite smoothly, because the script running
in the container always creates a new user, which then must be removed
every time before subsequent executions, or the build script fails.
Make this process more convenient and don't try to create that user if
it already exists and has the right user ID in the container, so
developers don't have to bother with running a 'userdel' each time
before they run the build script.
The build job on Travis CI always starts with a fresh Docker
container, so this change doesn't make a difference there.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
travis-ci: don't run the test suite as root in the 32 bit Linux build
Travis CI runs the 32 bit Linux build job in a Docker container, where
all commands are executed as root by default. Therefore, ever since
we added this build job in 88dedd5e7 (Travis: also test on 32-bit
Linux, 2017-03-05), we have a bit of code to create a user in the
container matching the ID of the host user and then to run the test
suite as this user. Matching the host user ID is important, because
otherwise the host user would have no access to any files written by
processes running in the container, notably the logs of failed tests
couldn't be included in the build job's trace log.
Alas, this piece of code never worked, because it sets the variable
holding the user name ($CI_USER) in a subshell, meaning it doesn't
have any effect by the time we get to the point to actually use the
variable to switch users with 'su'. So all this time we were running
the test suite as root.
Reorganize that piece of code in 'ci/run-linux32-build.sh' a bit to
avoid that problematic subshell and to ensure that we switch to the
right user. Furthermore, make the script's optional host user ID
option mandatory, so running the build accidentally as root will
become harder when debugging locally. If someone really wants to run
the test suite as root, whatever the reasons might be, it'll still be
possible to do so by explicitly passing '0' as host user ID.
Finally, one last catch: since commit 7e72cfcee (travis-ci: save prove
state for the 32 bit Linux build, 2017-12-27) the 'prove' test harness
has been writing its state to the Travis CI cache directory from
within the Docker container while running as root. After this patch
'prove' will run as a regular user, so in future build jobs it won't
be able to overwrite a previously written, still root-owned state file,
resulting in build job failures. To resolve this we should manually
delete caches containing such root-owned files, but that would be a
hassle. Instead, work around this by changing the owner of the whole
contents of the cache directory to the host user ID.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
travis-ci: don't repeat the path of the cache directory
Some of our 'ci/*' scripts repeat the name or full path of the Travis
CI cache directory, and the following patches will add new places
using that path.
Use a variable to refer to the path of the cache directory instead, so
it's hard-coded only in a single place.
Pay extra attention to the 32 bit Linux build: it runs in a Docker
container, so pass the path of the cache directory from the host to
the container in an environment variable. Note that an environment
variable passed this way is exported inside the container, therefore
its value is directly available in the 'su' snippet even though that
snippet is single quoted. Furthermore, use the variable in the
container only if it's been assigned a non-empty value, to prevent
errors when someone is running or debugging the Docker build locally,
because in that case the variable won't be set as there won't be any
Travis CI cache.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
travis-ci: use 'set -e' in the 32 bit Linux build job
The script 'ci/run-linux32-build.sh' running inside the Docker
container of the 32 bit Linux build job uses an && chain to break the
build if one of the commands fails. This is problematic for two
reasons:
- The && chain is broken, because there is this in the middle:
Luckily it is broken in a way that it didn't lead to false
successes. If installing dependencies fails, then the rest of the
first && chain is skipped and execution resumes after the ||
operator. At that point $HOST_UID is still unset, causing
'useradd' to error out with "invalid user ID 'ci'", which in turn
causes the second && chain to abort the script and thus break the
build.
- All other 'ci/*' scripts use 'set -e' to break the build if one of
the commands fails. This inconsistency among these scripts is
asking for trouble: I forgot about the && chain more than once
while working on this patch series.
Enable 'set -e' for the whole script and for the commands executed
under 'su' as well.
While touching every line in the 'su' command block anyway, change
their indentation to use a tab instead of spaces.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-svn: control destruction order to avoid segfault
It seems necessary to control destruction ordering to avoid a
segfault with SVN 1.9.5 when using "git svn branch". I've also
reported the problem against libsvn-perl to Debian [Bug #888791],
but releasing the SVN::Client instance can be beneficial anyway to
save memory.
ref: https://bugs.debian.org/888791
Tested-by: Todd Zullinger <tmz@pobox.com>
Reported-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When 'git show' is called without any object it defaults to HEAD. This
has been true since d4ed9793fd ("Simplify common default options setup
for built-in log family.", 2006-04-16).
The SYNOPSIS suggests that the object argument is required. Clarify
that it is not required and note the default.
Signed-off-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As long as GIT_SHA1_RAWSZ is equal to GIT_MAX_RAWSZ there's no problem,
but when a new hashing algorithm is in place this memset will clear
only a 20-byte prefix of the hash buffer.
Alternatively, the hashclr implementation could be adjusted, but that
function has almost been removed from the codebase already. A separate
implementation of oidclr prevents a potential buffer overrun in case
someone incorrectly uses hashclr on an object_id in the future.
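A hedged sketch of the resulting helper (struct object_id and
GIT_MAX_RAWSZ are the real names; the body is written from memory rather
than quoted):

    static inline void oidclr(struct object_id *oid)
    {
        /* clear the whole buffer, not just the first GIT_SHA1_RAWSZ bytes */
        memset(oid->hash, 0, GIT_MAX_RAWSZ);
    }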
Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Convert the declaration of struct sha1_stat. Adjust all usages of this
struct and replace hash{clr,cmp,cpy} with oid{clr,cmp,cpy} wherever
possible. Rename it to struct oid_stat.
Rename static function load_sha1_stat to load_oid_stat.
Remove macro EMPTY_BLOB_SHA1_BIN, as it's no longer used.
Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Convert the declaration and definition of pretend_sha1_file to use
struct object_id and adjust all usages of this function. Rename it to
pretend_object_file.
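For reference, a hedged before/after sketch of the prototype (argument
names are illustrative, and the old declaration is recalled from cache.h
rather than quoted):

    /* before */
    int pretend_sha1_file(void *buf, unsigned long len, enum object_type type,
                          unsigned char *sha1);
    /* after */
    int pretend_object_file(void *buf, unsigned long len, enum object_type type,
                            struct object_id *oid);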
Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
completion: fix completing merge strategies on non-C locales
The anchor string "Available strategies are:" is translatable so
__git_list_merge_strategies may fail to collect available strategies
from 'git merge' on non-C locales. Force C locale on this command.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
daemon: fix length computation in newline stripping
When git-daemon gets a pktline request, we strip off any
trailing newline, replacing it with a NUL. Clients prior to 5ad312bede (in git v1.4.0) would send:
git-upload-pack repo.git\n
and we need to strip it off to understand their request.
After 5ad312bede, we send the host attribute but no newline,
like:
git-upload-pack repo.git\0host=example.com\0
Both of these are parsed correctly by git-daemon. But if
some client were to combine the two:
git-upload-pack repo.git\n\0host=example.com\0
we don't parse it correctly. The problem is that we use the
"len" variable to record the position of the NUL separator,
but then decrement it when we strip the newline. So we start
with:
git-upload-pack repo.git\n\0host=example.com\0
                          ^-- len
and end up with:
git-upload-pack repo.git\0\0host=example.com\0
                        ^-- len
This is arguably correct, since "len" tells us the length of
the initial string, but we don't actually use it for that.
What we do use it for is finding the offset of the extended
attributes; they used to be at len+1, but are now at len+2.
We can solve that by just leaving "len" where it is. We
don't have to care about the length of the shortened string,
since we just treat it like a C string.
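A hedged C sketch of the fix (not the literal daemon.c hunk):

    #include <string.h>

    /*
     * Returns the offset of the NUL separator; a trailing newline is
     * stripped without moving that offset, so the extended attributes
     * can still be found at line + len + 1.
     */
    static size_t strip_request_newline(char *line)
    {
        size_t len = strlen(line);

        if (len && line[len - 1] == '\n')
            line[len - 1] = '\0'; /* the buggy version did line[--len] = '\0' */
        return len;
    }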
No version of Git ever produced such a string, but it seems
like the daemon code meant to handle this case (and it seems
like a reasonable thing for somebody to do in a 3rd-party
implementation).
Reported-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of our git-protocol tests rely on invoking the client
and having it make a request of a server. That gives a nice
real-world test of how the two behave together, but it
doesn't leave any room for testing how a server might react
to _other_ clients.
Let's add a few test helper functions which can be used to
manually conduct a git-protocol conversation with a remote
git-daemon:
1. To connect to a remote git-daemon, we need something
like "netcat". But not everybody will have netcat. And
even if they do, the behavior with respect to
half-duplex shutdowns is not portable (openbsd netcat
has "-N", with others you must rely on "-q 1", which is
racy).
Here we provide a "fake_nc" that is capable of doing
a client-side netcat, with sane half-duplex semantics.
It relies on perl's IO::Socket::INET. That's been in
the base distribution since 5.6.0, so it's probably
available everywhere. But just to be on the safe side,
we'll add a prereq.
2. To help tests speak and read pktline, this patch adds
packetize() and depacketize() functions.
I've put fake_nc() into lib-git-daemon.sh, since that's
really the only server where we'd need to use a network
socket. The pktline helpers, on the other hand, may be of more general
use, so I've added them to test-lib-functions.sh. Programs
like upload-pack speak pktline, but can talk directly over
stdio without a network socket.
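For readers unfamiliar with the framing, here is a hedged C illustration
of the pkt-line format these helpers speak (the real
packetize()/depacketize() are shell functions in the test library; this
is not their implementation):

    #include <stdio.h>
    #include <string.h>

    /*
     * A pkt-line is the payload prefixed by its total length, including
     * the four length digits themselves, as four lowercase hex digits.
     */
    static void packetize(const char *buf, size_t len)
    {
        printf("%04x", (unsigned int)(len + 4));
        fwrite(buf, 1, len, stdout);
    }

    int main(void)
    {
        const char req[] = "git-upload-pack /repo.git\0host=example.com\0";

        packetize(req, sizeof(req) - 1); /* prints "002f" followed by the request */
        return 0;
    }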
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we receive a request with extended attributes after the
NUL, we try to write those attributes to the log. We do so
with a "%s" format specifier, which will only show
characters up to the first NUL.
That's enough for printing a "host=" specifier. But since dfe422d04d (daemon: recognize hidden request arguments,
2017-10-16) we may have another NUL, followed by protocol
parameters, and those are not logged at all.
Let's cut out the attempt to show the whole string, and
instead log when we parse individual attributes. We could
leave the "extended attributes (%d bytes) exist" part of the
log, which in theory could alert us to attributes that fail
to parse. But anything we don't parse as a "host=" parameter
gets blindly added to the "protocol" attribute, so we'd see
it in that part of the log.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
daemon: fix off-by-one in logging extended attributes
If we receive a request like:
git-upload-pack /foo.git\0host=localhost
we mark the offset of the NUL byte as "len", and then log
the bytes after the NUL with a "%.*s" placeholder, using
"pktlen - len" as the length, and "line + len + 1" as the
start of the string.
This is off-by-one, since the start of the string skips past
the separating NUL byte, but the adjusted length includes
it. Fortunately this doesn't actually read past the end of
the buffer, since "%.*s" will stop when it hits a NUL. And
regardless of what is in the buffer, packet_read() will
always add an extra NUL terminator for safety.
As an aside, the git.git client sends an extra NUL after a
"host" field, too, so we'd generally hit that one first, not
the one added by packet_read(). You can see this in the test
output which reports 15 bytes, even though the string has
only 14 bytes of visible data. But the point is that even a
client sending unusual data could not get us to read past
the end of the buffer, so this is purely a cosmetic fix.
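A hedged, self-contained illustration of the arithmetic (loginfo() and
the exact format string in daemon.c differ from this; it only shows why
the length needs the extra -1 and why the visible output does not
change):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[] = "git-upload-pack /foo.git\0host=localhost";
        int pktlen = (int)sizeof(line); /* includes the trailing NUL */
        int len = (int)strlen(line);    /* offset of the separating NUL */

        /* buggy: the length still counts the separator that the pointer skips */
        printf("<%.*s>\n", pktlen - len, line + len + 1);
        /* fixed: skip the separator in the length as well */
        printf("<%.*s>\n", pktlen - len - 1, line + len + 1);
        /* both print "<host=localhost>", since %.*s stops at the NUL anyway */
        return 0;
    }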
Reported-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we start git-daemon for our tests, we send its stderr
log stream to a named pipe. We synchronously read the first
line to make sure that the daemon started, and then dump the
rest to descriptor 4. This is handy for debugging test
output with "--verbose", but the tests themselves can't
access the log data.
Let's dump the log into a file, as well, so that future
tests can check the log. There are a few subtleties worth
calling out here:
- we'll continue to send output to descriptor 4 for
viewing/debugging, which would imply swapping out "cat"
for "tee". But we want to ensure that there's no
buffering, and "tee" doesn't have a standard way to
ask for that. So we'll use a shell loop around "read"
and "printf" instead. That ensures that after a request
has been served, the matching log entries will have made
it to the file.
- the existing first-line shell loop used read/echo. We'll
switch to consistently using "read -r" and "printf" to
relay data as faithfully as possible.
- we open the logfile for append, rather than just output.
That makes it OK for tests to truncate the logfile
without restarting the daemon (the OS will atomically
seek to the end of the file when outputting each line).
That allows tests to look at the log without worrying
about pollution from earlier tests.
Helped-by: Lucas Werkmeister <mail@lucaswerkmeister.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Separating out the implementation of the handshake when starting a
long-running subprocess (for example, as is done for a clean/smudge
filter) was done in commit fa64a2fdbeed ("sub-process: refactor
handshake to common function", 2017-07-26), but its documentation still
resides in gitattributes. Split out the documentation as well.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
builtin/pull: respect verbosity settings in submodules
In a6d7eb2c7a (pull: optionally rebase submodules (remote submodule
changes only), 2017-06-23), we taught Git how to rebase submodules in
a pull. However, we failed to pass on the verbosity settings.
Reported-by: Robin H. Johnson <robbat2@gentoo.org>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t5570: use ls-remote instead of clone for interp tests
We don't actually care about the clone operation here; we
just want to know if we were able to actually contact the
remote repository. Using ls-remote does that more
efficiently, and without us having to worry about managing
the tmp.git directory.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When git push fails due to a server-side WebDAV error, it's not easy to
pinpoint the main culprit. Additional information about the exact cURL
error and the HTTP server response is helpful for debugging purposes.
The new error log helped me trace the failure of test t5540-http-push-webdav
to a missing Apache dependency in Fedora 27:
https://bugzilla.redhat.com/show_bug.cgi?id=1491151
Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
clang-format: adjust penalty for return type line break
The penalty of 5 makes clang-format very eager to put even short type
declarations (e.g. "extern int") onto a separate line, even when
breaking the parameter list is sufficient.
Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
packed_ref_cache: don't use mmap() for small files
Take a hint from commit ea68b0ce9f8 (hash-object: don't use mmap() for
small files, 2010-02-21) and use read() instead of mmap() for small
packed-refs files.
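A hedged sketch of the approach (xmalloc, read_in_full, xmmap and
die_errno are the real helpers; the function, its error message and
reusing sha1_file.c's 32 kB cutoff as SMALL_FILE_SIZE are assumptions of
this illustration):

    #define SMALL_FILE_SIZE (32*1024)

    static void *map_or_read_file(int fd, size_t size, const char *path)
    {
        void *buf;

        if (size <= SMALL_FILE_SIZE) {
            buf = xmalloc(size);
            if (read_in_full(fd, buf, size) != (ssize_t)size)
                die_errno("couldn't read %s", path);
        } else {
            buf = xmmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
        }
        return buf;
    }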
Signed-off-by: Kim Gybels <kgybels@infogroep.be>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We don't actually create zero-length `packed-refs` files, but they are
valid and we should handle them correctly. The old code `xmmap()`ed
such files, which led to an error when `munmap()` was called. So, if
the `packed-refs` file is empty, leave the snapshot at its zero values
and return 0 without trying to read or mmap the file.
Returning 0 also makes `create_snapshot()` exit early, which avoids
the technically undefined comparison `NULL < NULL`.
Reported-by: Kim Gybels <kgybels@infogroep.be>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
packed_ref_iterator_begin(): make optimization more general
We can return an empty iterator not only if the `packed-refs` file is
missing, but also if it is empty or if there are no references whose
names succeed `prefix`. Optimize away those cases as well by moving
the call to `find_reference_location()` higher in the function and
checking whether the determined start position is the same as
`snapshot->eof`. (This is possible now because the previous commit
made `find_reference_location()` robust against empty snapshots.)
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
struct snapshot: store `start` rather than `header_len`
Store a pointer to the start of the actual references within the
`packed-refs` contents rather than storing the length of the header.
This is more convenient for most users of this field.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 356ee4659b ("sequencer: try to commit without forking 'git
commit'", 2017-11-24) forgot to run the 'prepare-commit-msg' hook when
creating the commit. Fix this by writing the commit message to a
different file and running the hook. Using a different file means that
if the commit is cancelled the original message file is
unchanged. Also move the checks for an empty commit so the order
matches 'git commit'.
Reported-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Helped-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Check that cherry-pick and rebase call the 'prepare-commit-msg' hook
correctly. The expected values for the hook arguments are taken to
match the current master branch. I think there is scope for improving
the arguments passed so they make a bit more sense - for instance
cherry-pick currently passes different arguments depending on whether
the commit message is being edited. Also the arguments for rebase
could be improved. Commit 7c4188360ac ("rebase -i: proper
prepare-commit-msg hook argument when squashing", 2008-10-3) apparently
changed things so that when squashing rebase would pass 'squash' as
the argument to the hook but that has been lost.
I think that it would make more sense to pass 'message' for revert and
cherry-pick -x/-s (i.e. cases where there is a new message or the
current message is modified by the command), 'squash' when squashing
with a new message and 'commit HEAD/CHERRY_PICK_HEAD'
otherwise (picking and squashing without a new message).
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the <msg> or <patch> files can't be opened, then mailinfo() returns an
error before it even initializes mi->p_hdr_data or mi->s_hdr_data.
When cmd_mailinfo() then calls clear_mailinfo(), we dereference the
NULL pointers trying to free their contents.
Signed-off-by: Juan F. Codagnone <jcodagnone@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
read-cache: don't write index twice if we can't write shared index
In a0a967568e ("update-index --split-index: do not split if $GIT_DIR is
read only", 2014-06-13), we tried to make sure we can still write an
index, even if the shared index can not be written.
We did so by calling 'do_write_locked_index()' just before
'write_shared_index()'. 'do_write_locked_index()' always at least
closes the tempfile nowadays, and used to close or commit the lockfile
if COMMIT_LOCK or CLOSE_LOCK were given at the time this feature was
introduced. COMMIT_LOCK or CLOSE_LOCK is passed in by most callers of
'write_locked_index()'.
After calling 'write_shared_index()', we call 'write_split_index()',
which calls 'do_write_locked_index()' again, which then tries to use the
closed lockfile again, but in fact fails to do so as it's already
closed. This eventually leads to a segfault.
Make sure to write the main index only once.
[nd: most of the commit message and investigation done by Thomas, I only
tweaked the solution a bit]
Helped-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace the custom calls to mru.[ch] with calls to list.h. This patch is
the final step in removing the mru API completely and inlining the logic.
This patch leads to a significant code reduction; the mru API is hence
no longer a useful abstraction.
Signed-off-by: Gargi Sharma <gs051095@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of maintaining home-grown email address parsing code, ship
a copy of a reasonably recent Mail::Address to be used as a fallback
in 'git send-email' when the platform lacks it.
* mm/send-email-fallback-to-local-mail-address:
send-email: add test for Linux's get_maintainer.pl
perl/Git: remove now useless email-address parsing code
send-email: add and use a local copy of Mail::Address