* av/fsmonitor:
fsmonitor: simplify determining the git worktree under Windows
fsmonitor: store fsmonitor bitmap before splitting index
fsmonitor: read from getcwd(), not the PWD environment variable
fsmonitor: delay updating state until after split index is merged
fsmonitor: document GIT_TRACE_FSMONITOR
fsmonitor: don't bother pretty-printing JSON from watchman
fsmonitor: set the PWD to the top of the working tree
We learned to talk to watchman to speed up "git status" and other
operations that need to see which paths have been modified.
* bp/fsmonitor:
fsmonitor: preserve utf8 filenames in fsmonitor-watchman log
fsmonitor: read entirety of watchman output
fsmonitor: MINGW support for watchman integration
fsmonitor: add a performance test
fsmonitor: add a sample integration script for Watchman
fsmonitor: add test cases for fsmonitor extension
split-index: disable the fsmonitor extension when running the split index test
fsmonitor: add a test tool to dump the index extension
update-index: add fsmonitor support to update-index
ls-files: Add support in ls-files to display the fsmonitor valid bit
fsmonitor: add documentation for the fsmonitor extension.
fsmonitor: teach git to optionally utilize a file system monitor to speed up detecting new or changed files.
update-index: add a new --force-write-index option
preload-index: add override to enable testing preload-index
bswap: add 64 bit endianness helper get_be64
Let's make it clear how patches should flow into
contrib/git-jump. The normal Git maintainer does not
necessarily care about things in contrib/, and authors of
individual components should be the ones giving the final
review/ack for a patch. Ditto for bug reports, which are
likely to get more attention from the area expert.
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
contrib/git-jump: allow to configure the grep command
Add the configuration option "jump.grepCmd" that allows to configure the
command that is used to search in grep mode. This allows the users of
git-jump to use ag(1) or ack(1) as search engines.
Signed-off-by: Beat Bolli <dev+git@drbeat.li>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git bisect run" that did not specify any command to run used to go
ahead and treat all commits to be tested as 'good'. This has
been corrected by making the command error out.
* sb/bisect-run-empty:
bisect run: die if no command is given
The remote-helper for talking to MediaWiki has been updated to
work with MediaWiki namespaces.
* ab/mediawiki-namespace:
remote-mediawiki: show progress while fetching namespaces
remote-mediawiki: process namespaces in order
remote-mediawiki: support fetching from (Main) namespace
remote-mediawiki: skip virtual namespaces
remote-mediawiki: show known namespace choices on failure
remote-mediawiki: allow fetching namespaces with spaces
remote-mediawiki: add namespace support
The "--format=..." option of "git for-each-ref" learned to show
the name of the 'remote' repository and the ref at the remote side
that is affected for 'upstream' and 'push' via "%(push:remotename)"
and friends.
* js/for-each-ref-remote-name-and-ref:
for-each-ref: test :remotename and :remoteref
for-each-ref: let upstream/push report the remote ref name
for-each-ref: let upstream/push optionally report the remote name
* mh/tidy-ref-update-flags:
refs: update some more docs to use "oid" rather than "sha1"
write_packed_entry(): take `object_id` arguments
refs: rename constant `REF_ISPRUNING` to `REF_IS_PRUNING`
refs: rename constant `REF_NODEREF` to `REF_NO_DEREF`
refs: tidy up and adjust visibility of the `ref_update` flags
ref_transaction_add_update(): remove a check
ref_transaction_update(): die on disallowed flags
prune_ref(): call `ref_transaction_add_update()` directly
files_transaction_prepare(): don't leak flags to packed transaction
* ma/bisect-leakfix:
bisect: fix memory leak when returning best element
bisect: fix off-by-one error in `best_bisection_sorted()`
bisect: fix memory leak in `find_bisection()`
bisect: change calling-convention of `find_bisection()`
* rs/sequencer-rewrite-file-cleanup:
sequencer.c: check return value of close() in rewrite_file()
sequencer: use O_TRUNC to truncate files
sequencer: factor out rewrite_file()
Recent update to the refs infrastructure implementation started
rewriting packed-refs file more often than before; this has been
optimized again for most trivial cases.
* mh/avoid-rewriting-packed-refs:
files-backend: don't rewrite the `packed-refs` file unnecessarily
t1409: check that `packed-refs` is not rewritten unnecessarily
Merge branch 'js/mingw-redirect-std-handles' into maint
MinGW updates.
* js/mingw-redirect-std-handles:
mingw: document the standard handle redirection
mingw: optionally redirect stderr/stdout via the same handle
mingw: add experimental feature to redirect standard handles
Merge branch 'dk/libsecret-unlock-to-load-fix' into maint
The credential helper for libsecret (in contrib/) has been improved
to allow possibly prompting the end user to unlock secrets that are
currently locked (otherwise the secrets may not be loaded).
"git check-ref-format --branch @{-1}" bit a "BUG()" when run
outside a repository for obvious reasons; clarify the documentation
and make sure we do not even try to expand the at-mark magic in
such a case, but still call the validation logic for branch names.
* jc/check-ref-format-oor:
check-ref-format doc: --branch validates and expands <branch>
check-ref-format --branch: strip refs/heads/ using skip_prefix
check-ref-format --branch: do not expand @{...} outside repository
Merge branch 'js/submodule-in-excluded' into maint
"git status --ignored -u" did not stop at a working tree of a
separate project that is embedded in an ignored directory and
listed files in that other project, instead of just showing the
directory itself as ignored.
* js/submodule-in-excluded:
status: do not get confused by submodules in excluded directories
Merge branch 'sb/diff-color-moved-use-xdl-recmatch' into maint
Instead of using custom line comparison and hashing functions to
implement "moved lines" coloring in the diff output, use the pair
of these functions from lower-layer xdiff/ code.
* sb/diff-color-moved-use-xdl-recmatch:
diff.c: get rid of duplicate implementation
xdiff-interface: export comparing and hashing strings
The experimental "color moved lines differently in diff output"
feature was buggy around "ignore whitespace changes" edges, which
has been corrected.
* jk/diff-color-moved-fix:
diff: handle NULs in get_string_hash()
diff: fix whitespace-skipping with --color-moved
t4015: test the output of "diff --color-moved -b"
t4015: check "negative" case for "-w --color-moved"
t4015: refactor --color-moved whitespace test
Merge branch 'kd/auto-col-with-pager-fix' into maint
"auto" as a value for the columnar output configuration ought to
judge "is the output consumed by humans?" with the same criteria as
"auto" for coloured output configuration, i.e. either the standard
output stream is going to a tty, or a pager is in use. We forgot the
latter, which has been fixed.
* kd/auto-col-with-pager-fix:
column: do not include pager.c
column: show auto columns when pager is active
The set of paths output from "git status --ignored" was tied
closely with its "--untracked=<mode>" option, but now it can be
controlled more flexibly. Most notably, a directory that is
ignored because it is listed to be ignored in the ignore/exclude
mechanism can be handled differently from a directory that ends up
being ignored only because all files in it are ignored.
* jm/status-ignored-files-list:
status: test ignored modes
status: document options to show matching ignored files
status: report matching ignored and normal untracked
status: add option to show ignored files differently
If an empty string is passed to link_alt_odb_entries(), our
loop finds no entries and we link nothing. But we still do
some preparatory work to normalize the object directory
path, even though we'll never look at the result. This
triggers in basically every git process, since we feed the
usually-empty ALTERNATE_DB_ENVIRONMENT to the function.
Let's detect early that there's nothing to do and return.
While we're at it, let's treat NULL the same as an empty
string as a favor to our callers. That saves
prepare_alt_odb() from having to cover this case.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
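A minimal sketch of the early return described above, with a simplified
signature; treat the function shape as illustrative rather than as the
exact patch:

    static void link_alt_odb_entries(const char *alt, int sep,
                                     const char *relative_base, int depth)
    {
            /*
             * Nothing to do for a NULL or empty list of entries; return
             * before doing any path-normalization work.
             */
            if (!alt || !*alt)
                    return;

            /* ... normalize relative_base and link each entry as before ... */
    }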
Documentation: convert SubmittingPatches to AsciiDoc
The SubmittingPatches document is often cited by outside parties as an
example of good practices to follow, including logical, independent
commits; patch sign-offs; and sending patches to a mailing list.
Currently, people who want to cite a particular section tend to either
refer to it by name and let the interested party search through the
document to find it, or link to a given line number on GitHub and hope
the file doesn't change.
Instead, convert the document to AsciiDoc. Build it as part of the
technical documentation, since it is likely of interest to the same
group of people. Provide stable links to the sections which outside
parties are likely to want to link to. Make some minor structural
changes to organize it so that it can be formatted sanely.
Since the makefile needs a .txt extension in order to build with the
rest of the documentation, simply copy the file. Ignore the temporary
file so it doesn't get checked in accidentally, and remove it as part of
the clean process. Do this instead of renaming the file so that people
who have already linked to the documentation (who we're trying to help)
don't find their links broken. Avoid symlinking since Windows will not
like that.
This allows us to render the document as part of the website for the
benefit of others who wish to link to it as well as providing a more
nicely formatted display for our community and potential contributors.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It was possible to invoke "git bisect run" without any command.
This considers all commits as good commits, since the return value
of "$@" is 0 when $@ is empty.
This is most probably not what a user wants (otherwise she would
invoke "git bisect run true"), so not providing a command now
results in an error.
Signed-off-by: Stephan Beyer <s-beyer@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you have a pcre1 library which is compiled with JIT enabled then
PCRE_STUDY_JIT_COMPILE will be defined whether or not the
NO_LIBPCRE1_JIT configuration is set.
This means that we enable JIT functionality when calling pcre_study
even if NO_LIBPCRE1_JIT has been explicitly set and we just use plain
pcre_exec later.
Fix this by using our own macro (GIT_PCRE_STUDY_JIT_COMPILE) which we set to
PCRE_STUDY_JIT_COMPILE only if NO_LIBPCRE1_JIT is not set and define to
0 otherwise, as before.
Reviewed-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
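A rough sketch of the macro arrangement described above, assuming
libpcre1's header is available; Git's real build logic has a few more
guards, so this is illustrative only:

    #include <pcre.h>

    /* Pass JIT study flags only when JIT use has not been disabled. */
    #ifndef NO_LIBPCRE1_JIT
    #define GIT_PCRE_STUDY_JIT_COMPILE PCRE_STUDY_JIT_COMPILE
    #else
    #define GIT_PCRE_STUDY_JIT_COMPILE 0
    #endif

    /* ... later, when compiling a pattern ... */
    /* extra = pcre_study(re, GIT_PCRE_STUDY_JIT_COMPILE, &error); */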
t4201: make use of abbreviation in the test more robust
The test for '--abbrev' in t4201-shortlog.sh assumes that the commits
generated in the test can always be uniquely abbreviated to 5 hex digits
but this is not always the case. If you were unlucky and happened to run
the test at (say) Thu Jun 22 03:04:49 2017 +0000, you would find that
the first commit generated would collide with a tree object created
later in the same test.
This can be simulated in the version of t4201-shortlog.sh prior to this
commit by setting GIT_COMMITTER_DATE and GIT_AUTHOR_DATE to 1498100689
after sourcing test-lib.sh.
Change the test to test --abbrev=35 instead of --abbrev=5 to almost
completely avoid the possibility of a partial collision and add a call
to test_tick in the setup to make the test repeatable (the latter alone
is sufficient to make it robust enough).
Signed-off-by: Charles Bailey <cbailey32@bloomberg.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fsmonitor: simplify determining the git worktree under Windows
Simplify and speed up the process of finding the git worktree when
running on Windows by keeping it in perl and avoiding spawning helper
processes.
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fsmonitor: store fsmonitor bitmap before splitting index
ba1b9cac ("fsmonitor: delay updating state until after split index
is merged", 2017-10-27) resolved the problem of the fsmonitor data
being applied to the non-base index when reading; however, a similar
problem exists when writing the index. Specifically, writing of the
fsmonitor extension happens only after the work to split the index
has been applied -- as such, the information in the index is only
for the non-"base" index, and thus the extension information
contains only partial data.
When saving, compute the ewah bitmap before the index is split, and
store it in the fsmonitor_dirty field, mirroring the behavior that
occurred during reading. fsmonitor_dirty is kept from being leaked by
being freed when the extension data is written -- which always happens
precisely once, no matter the split index configuration.
Signed-off-by: Alex Vandiver <alexmv@dropbox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
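The flow described above could look roughly like the sketch below; the
helper name is chosen for illustration here and the real implementation
may differ in detail:

    /*
     * Compute the ewah bitmap of entries that are not marked
     * CE_FSMONITOR_VALID while the index is still whole, i.e. before
     * the split-index machinery separates out the base index.
     */
    void fill_fsmonitor_bitmap(struct index_state *istate)
    {
            unsigned int i;

            istate->fsmonitor_dirty = ewah_new();
            for (i = 0; i < istate->cache_nr; i++)
                    if (!(istate->cache[i]->ce_flags & CE_FSMONITOR_VALID))
                            ewah_set(istate->fsmonitor_dirty, i);
    }

    /*
     * When the extension is written -- which happens exactly once --
     * the bitmap is serialized and then freed, so it does not leak.
     */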
fsmonitor: read from getcwd(), not the PWD environment variable
Though the process has chdir'd to the root of the working tree, the
PWD environment variable is only guaranteed to be updated accordingly
if a shell is involved -- which is not guaranteed to be the case.
That is, if `/usr/bin/perl` is a binary, $ENV{PWD} is unchanged from
whatever spawned `git` -- if `/usr/bin/perl` is a trivial shell
wrapper to the real `perl`, `$ENV{PWD}` will have been updated to the
root of the working copy.
Update to read from the Cwd module using the `getcwd` syscall, not the
PWD environment variable. The Cygwin case is left unchanged, as it
necessarily _does_ go through a shell.
Signed-off-by: Alex Vandiver <alexmv@dropbox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A single-word "unsigned flags" in the diff options is being split
into a structure with many bitfields.
* bw/diff-opt-impl-to-bitfields:
diff: make struct diff_flags members lowercase
diff: remove DIFF_OPT_CLR macro
diff: remove DIFF_OPT_SET macro
diff: remove DIFF_OPT_TST macro
diff: remove touched flags
diff: add flag to indicate textconv was set via cmdline
diff: convert flags to be stored in bitfields
add, reset: use DIFF_OPT_SET macro to set a diff flag
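For the bw/diff-opt-impl-to-bitfields topic above, a minimal sketch of
the direction taken; the member names are examples standing in for the
former DIFF_OPT_* bits:

    /* One named single-bit field per former DIFF_OPT_* flag. */
    struct diff_flags {
            unsigned recursive:1;
            unsigned binary:1;
            unsigned text:1;
            /* ... */
    };

    /* DIFF_OPT_SET(&opts, RECURSIVE) becomes: opts.flags.recursive = 1; */
    /* DIFF_OPT_TST(&opts, RECURSIVE) becomes: opts.flags.recursive     */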
Replace Free Software Foundation address in license notices
The mailing address for the FSF has changed over the years. Rather than
updating the address across all files, refer readers to gnu.org, as the
GNU GPL documentation now suggests for license notices. The mailing
address is retained in the full license files (COPYING and LGPL-2.1).
Signed-off-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace Free Software Foundation address in license notices
The mailing address for the FSF has changed over the years. Rather than
updating the address across all files, refer readers to gnu.org, as the
GNU GPL documentation now suggests for license notices. The mailing
address is retained in the full license files (COPYING and LGPL-2.1).
The old address is still present in t/diff-lib/COPYING. This is
intentional, as the file is used in tests and the contents are not
expected to change.
Signed-off-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We currently have seven callers of `reduce_heads(foo)`. Six of them do
not use the original list `foo` again, and actually, all six of those
end up leaking it.
Introduce and use `reduce_heads_replace(&foo)` as a leak-free version of
`foo = reduce_heads(foo)` to fix several of these. Fix the remaining
leaks using `free_commit_list()`.
While we're here, document `reduce_heads()` and mark it as `extern`.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
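Based on the description above, such a leak-free wrapper could look
roughly like this sketch against Git's commit_list API (not necessarily
the exact patch):

    void reduce_heads_replace(struct commit_list **heads)
    {
            struct commit_list *result = reduce_heads(*heads);

            free_commit_list(*heads);  /* drop the original, now-unused list */
            *heads = result;
    }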
In several functions, we iterate through a commit list by assigning
`result = result->next`. As a consequence, we lose the original pointer
and eventually leak the list.
Rewrite the loops so that we keep the original pointers, then call
`free_commit_list()`. Various alternatives were considered:
1) Use `UNLEAK(result)` before the loop. Simple change, but not very
pretty. These would definitely be new lows among our usages of UNLEAK.
2) Use `pop_commit()` when looping. Slightly less simple change, but it
feels slightly preferable to first display the list, then free it.
3) As in this patch, but with `UNLEAK()` instead of freeing. We'd still
go through all the trouble of refactoring the loop, and because it's not
super-obvious that we're about to exit, let's just free the lists -- it
probably doesn't affect the runtime much.
In `handle_independent()` we can drop `result` while we're here and
reuse the `revs`-variable instead. That matches several other users of
`reduce_heads()`. The memory-leak that this hides will be addressed in
the next commit.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
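The loop rewrite described above follows a simple pattern: iterate with
a separate cursor so the head of the list survives, then free the whole
list afterwards. A sketch (the list producer and the per-item action are
placeholders):

    struct commit_list *result = get_result_list();  /* hypothetical producer */
    struct commit_list *l;

    for (l = result; l; l = l->next)
            printf("%s\n", oid_to_hex(&l->item->object.oid));

    free_commit_list(result);
    result = NULL;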
remote-mediawiki: show progress while fetching namespaces
Without this, the fetch process appears to hang while we fetch page
listings across the namespaces. Obviously, it should be possible to
silence this with -q, but that's an issue already present everywhere
in the code and should be fixed separately:
Ideally, we'd process them in numeric order since that is more
logical, but we can't do that yet since this is where we find the
numeric identifiers in the first place. Lexicographic order is a good
compromise.
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote-mediawiki: support fetching from (Main) namespace
When we specify a list of namespaces to fetch from, by default the MW
API will not fetch from the default namespace, referred to as "(Main)"
in the documentation:
I haven't found a way to address that "(Main)" namespace when getting
the namespace ids: indeed, when listing namespaces, there is no
"canonical" field for the main namespace, although there is a "*"
field that is set to "" (empty). So in theory, we could specify the
empty namespace to get the main namespace, but that would make
specifying namespaces harder for the user: we would need to teach
users about the "empty" default namespace. It would also make the code
more complicated: we'd need to parse quotes in the configuration.
So we simply override the query here and allow the user to specify
"(Main)" since that is the publicly documented name.
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote-mediawiki: skip virtual namespaces
Virtual namespaces do not correspond to pages in the database and are
automatically generated by MediaWiki. It makes little sense,
therefore, to fetch pages from those namespaces and the MW API doesn't
support listing those pages.
According to the documentation, those virtual namespaces are currently
"Special" (-1) and "Media" (-2) but we treat all negative namespaces
as "virtual" as a future-proofing mechanism.
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
remote-mediawiki: show known namespace choices on failure
If we fail to find a requested namespace, we should tell the user
which ones we know about, since those were already fetched. This
allows users to fetch all namespaces by specifying a dummy namespace,
failing, then copying the list of namespaces in the config.
Eventually, we should have a flag that allows fetching all namespaces
automatically.
Reviewed-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
read_index_from(): speed index loading by skipping verification of the entry order
There is code in post_read_index_from() to catch out of order
entries when reading an index file. This order verification is ~13%
of the cost of every call to read_index_from().
Update check_ce_order() so that it skips this verification unless
the "verify_ce_order" global variable is set.
Teach fsck to force this verification.
The effect can be seen using t/perf/p0002-read-cache.sh:
Test                                          HEAD              HEAD~1
--------------------------------------------------------------------------------------
0002.1: read_cache/discard_cache 1000 times   0.41(0.04+0.04)   0.50(0.00+0.10) +22.0%
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
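A sketch of the guard described above; the global's name follows the
commit message, and the body of the existing checks is elided:

    int verify_ce_order;    /* off by default; fsck (and tests) turn it on */

    static void check_ce_order(struct index_state *istate)
    {
            unsigned int i;

            if (!verify_ce_order)
                    return;  /* skip the ~13% verification cost on normal reads */

            for (i = 1; i < istate->cache_nr; i++) {
                    /* ... existing out-of-order and duplicate-name checks ... */
            }
    }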
for-each-ref: let upstream/push report the remote ref name
There are times when scripts want to know not only the name of the
push branch on the remote, but also the name of the branch as known
by the remote repository.
An example of this is when a tool wants to push to the very same branch
from which it would pull automatically, i.e. the `<remote>` and the `<to>`
in `git push <remote> <from>:<to>` would be provided by
`%(upstream:remotename)` and `%(upstream:remoteref)`, respectively.
This patch offers the new suffix :remoteref for the `upstream` and `push`
atoms, making it possible to show exactly that. Example:
Signed-off-by: J Wyman <jwyman@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Factor out the commonalities from test_submodule_switch() and
test_submodule_forced_switch() in lib-submodule-update.sh, and document
their usage.
This also makes explicit (through the KNOWN_FAILURE_FORCED_SWITCH_TESTS
variable) the fact that, currently, all functionality tested using
test_submodule_forced_switch() does not correctly handle the situation in
which a submodule is replaced with an ordinary directory.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
wt-status: actually ignore submodules when requested
Since ff6f1f564 (submodule-config: lazy-load a repository's .gitmodules
file, 2017-08-03) rebase interactive fails if there are any submodules
with unstaged changes which have been configured with a value for
'submodule.<name>.ignore' in the repository's config.
This is due to how configured values of 'submodule.<name>.ignore' are
handled in addition to a change in how the submodule config is loaded.
When the diff machinery hits a submodule (gitlink as well as a
corresponding entry in the submodule subsystem) it will read the value
of 'submodule.<name>.ignore' stored in the repository's config and if
the config is present it will clear the 'IGNORE_SUBMODULES' (which is
the flag explicitly requested by rebase interactive),
'IGNORE_UNTRACKED_IN_SUBMODULES', and 'IGNORE_DIRTY_SUBMODULES' diff
flags and then set one of them based on the configured value.
Historically this wasn't a problem because the submodule subsystem
wasn't initialized because the .gitmodules file wasn't explicitly loaded
by the rebase interactive command. So when the diff machinery hit a
submodule it would skip over reading any configured values of
'submodule.<name>.ignore'.
In order to preserve the behavior of submodules being ignored by rebase
interactive, also set the 'OVERRIDE_SUBMODULE_CONFIG' diff flag when
submodules are requested to be ignored when checking for unstaged
changes.
Reported-by: Orgad Shaneh <orgads@gmail.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
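Based on the description above, the fix amounts to also setting the
override flag wherever submodules are requested to be ignored, roughly
like this (using the DIFF_OPT_SET interface of that time; a sketch, not
the verbatim patch):

    /* e.g. in wt-status.c, when checking for unstaged changes */
    if (ignore_submodules) {
            DIFF_OPT_SET(&rev_info.diffopt, IGNORE_SUBMODULES);
            DIFF_OPT_SET(&rev_info.diffopt, OVERRIDE_SUBMODULE_CONFIG);
    }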
To make them more generic and make it easy to reuse them,
the following changes are made:
- we don't require capabilities to come in a fixed order,
- we allow duplicates,
- we check that the remote supports the capabilities we
advertise,
- we don't check if the remote declares any capability we
don't know about.
The reason behind the last change is that the protocol
should work using only the capabilities that both ends
support, and it should not stop working if one end starts
to advertise a new capability.
Despite those changes, we can still require a set of
capabilities, and die if one of them is not supported.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As checking for a lf character at the end of a buffer
will be useful in another function, let's refactor this
functionality into a small remove_final_lf_or_die()
helper function.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Let's refactor the code to initialize communication into its own
packet_initialize() function, so that we can reuse this
functionality in following patches.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before further refactoring the "t0021/rot13-filter.pl" script,
let's modernize the style of its 'if .. elsif .. else' clauses
to improve its readability by making it more similar to our
other perl scripts.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To make it possible in a following commit to move packet
reading and writing functions into a Packet.pm module,
let's refactor these functions, so they don't handle
printing debug output and exiting.
While at it let's create packet_required_key_val_read()
to still handle erroring out in a common case.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>