It's common to pipe the tar output produced by "git archive"
through gzip or some other compressor. Locally, this can
easily be done by using a shell pipe. When requesting a
remote archive, though, it cannot be done through the
upload-archive interface.
This patch allows configurable tar filters, so that one
could define a "tar.gz" format that automatically pipes tar
output through gzip.
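For example, one could put something like this in the configuration
(the exact key layout here is meant only as an illustration of the idea):

    [tar "tar.gz"]
        command = gzip -cn

and then ask for the new format with "git archive --format=tar.gz".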
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git-archive will guess a format from the output filename if
no format is explicitly given. The current function just
hardcodes "zip" to the zip format, and leaves everything
else NULL (which will default to tar). Since we are about
to add user-specified formats, we need to be more flexible.
The new rule is "if a filename ends with a dot and the name
of a format, it matches that format". For the existing "tar"
and "zip" formats, this is identical to the current
behavior. For new user-specified formats, this will do what
the user expects if they name their formats appropriately.
Because we will eventually start matching arbitrary
user-specified extensions that may include dots, the strrchr
search for the final dot is not sufficient. We need to do an
actual suffix match with each extension.
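A minimal sketch of such a suffix match (the helper name and details
here are illustrative, not the exact code in this patch):

    #include <string.h>

    static int ends_with_ext(const char *filename, const char *ext)
    {
            size_t flen = strlen(filename);
            size_t elen = strlen(ext);

            /* need room for at least "." plus the extension */
            if (flen < elen + 1)
                    return 0;
            return filename[flen - elen - 1] == '.' &&
                   !strcmp(filename + flen - elen, ext);
    }

The caller can then walk the list of known formats and pick the one
whose name matches the end of the output filename this way.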
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The process for guessing an archive output format based on
the filename is something like this:
a. parse --output in cmd_archive; check the filename
against a static set of mapping heuristics (right now
it just matches ".zip" for zip files).
b. if found, stick a fake "--format=zip" at the beginning
of the arguments list (if the user did specify a
--format manually, the later option will override our
fake one)
c. if it's a remote call, ship the arguments to the remote
(including the fake), which will call write_archive on
their end
d. if it's local, ship the arguments to write_archive
locally
There are two problems:
1. The set of mappings is static and at too high a level.
The write_archive level is going to check config for
user-defined formats, some of which will specify
extensions. We need to delay lookup until those are
parsed, so we can match against them.
2. For a remote archive call, our set of mappings (or
formats) may not match the remote side's. This is OK in
practice right now, because all versions of git
understand "zip" and "tar". But as new formats are
added, there is going to be a mismatch between what the
client can do and what the remote server can do.
To fix (1), this patch refactors the location guessing to
happen at the write_archive level, instead of the
cmd_archive level. So instead of sticking a fake --format
field in the argv list, we actually pass a "name hint" down
the callchain; this hint is used at the appropriate time to
guess the format (if one hasn't been given already).
This patch leaves (2) unfixed. The name_hint is converted to
a "--format" option as before, and passed to the remote.
This means the local side's idea of how extensions map to
formats will take precedence.
Another option would be to pass the name hint to the remote
side and let the remote choose. This isn't a good idea for
two reasons:
1. There's no room in the protocol for passing that
information. We can pass a new argument, but older
versions of git on the server will choke on it.
2. Letting the remote side decide creates a silent
inconsistency in user experience. Consider the case
that the locally installed git knows about the "tar.gz"
format, but a remote server doesn't.
Running "git archive -o foo.tar.gz" will use the tar.gz
format. If we use --remote, and the local side chooses
the format, then we send "--format=tar.gz" to the
remote, which will complain about the unknown format.
But if we let the remote side choose the format, then
it will realize that it doesn't know about "tar.gz" and
output uncompressed tar without even issuing a warning.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
archive: pass archiver struct to write_archive callback
The current archivers are very static; when you are in the
write_tar_archive function, you know you are writing a tar.
However, to facilitate runtime-configurable archivers
that will share a common write function we need to tell the
function which archiver was used.
As a convenience, we also provide an opaque data pointer in
the archiver struct so that individual archivers can put
something useful there when they register themselves.
Technically they could just use the "name" field to look in
an internal map of names to data, but this is much simpler.
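Concretely, the archiver description becomes a small struct along
these lines (simplified sketch; the exact field list is an assumption
here, not a quote of the patch):

    struct archiver_args;   /* filled in by the shared archive machinery */

    struct archiver {
            const char *name;     /* e.g. "tar", "zip", or a user-defined name */
            int (*write_archive)(const struct archiver *ar,
                                 struct archiver_args *args);
            void *data;           /* opaque, owned by the registering format */
    };

A shared write function can then look at ar->name or ar->data to
decide, for example, which filter command to run.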
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of the tar and zip code was nicely split out into two
abstracted files which knew only about their specific
formats. The entry point to this code was a single "write
archive" function.
However, as these basic formats grow more complex (e.g., by
handling multiple file extensions and format names), a
static list of the entry point functions won't be enough.
Instead, let's provide a way for the tar and zip code to
tell the main archive code what they support by registering
archiver names and functions.
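Registration itself can be as small as the following sketch (function
and variable names are illustrative):

    /* in archive.c */
    static struct archiver **archivers;
    static int nr_archivers, alloc_archivers;

    void register_archiver(struct archiver *ar)
    {
            ALLOC_GROW(archivers, nr_archivers + 1, alloc_archivers);
            archivers[nr_archivers++] = ar;
    }

    /* in archive-tar.c */
    static struct archiver tar_archiver = {
            "tar",
            write_tar_archive,
    };

    void init_tar_archiver(void)
    {
            register_archiver(&tar_archiver);
    }

The main archive code then only needs to loop over the registered
archivers when parsing --format or guessing a format from a filename.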
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We load our own tar-specific config, and then chain to
git_default_config. This is pointless, as our caller should
already have loaded the default config. It also introduces a
needless inconsistency with the zip archiver, which does not
look at the config files at all (and therefore relies on the
caller to have loaded config).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
gitweb: 'pickaxe' and 'grep' features require 'search' to be enabled
Both 'pickaxe' (searching changes) and 'grep' (searching files)
require the basic 'search' feature to be enabled in order to work. Enabling
e.g. only 'pickaxe' won't work.
Add a comment about this.
Signed-off-by: Jakub Narebski <jnareb@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Michael J Gruber noticed that under /bin/dash this test failed
(as is expected -- \n in the string can be interpreted by the
command), while it passed with bash. We probably could work it
around by using backquote in front of it, but it is safer and
more readable to avoid "echo" altogether in a case like this.
Add a gcc profile feedback build option "profile-all" to the main
Makefile. It simply runs the test suite to generate feedback data and then
recompiles the main executables with that. The basic structure is similar
to the existing gcov code.
gcc is often able to generate better code with profile feedback data. The
training load also doesn't need to be too similar to the actual load; it
still gives benefits.
The test suite run is unfortunately quite long. It would be good to find a
suitable subset that runs faster and still gives reasonable feedback.
For now the test suite runs single threaded (I had some trouble running
the test suite with -jX).
I tested it with git gc and git blame kernel/sched.c on a Linux kernel
tree. For gc I get about 2.7% improvement in wall clock time by using the
feedback build, for blame about 2.4%. That's not gigantic, but not shabby
either for a very small patch.
If anyone has any favourite CPU intensive git benchmarks feel free to try
them too.
I hope distributors will switch to use a feedback build in their packages.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Earlier 7974843 (compat/cygwin.c: make runtime detection of lstat/stat
lessor impact, 2008-10-23) fixed the low-level "do we use cygwin specific
hacks for stat/lstat?" logic not to call into git_default_config() from
random codepaths that are typically very late in the program, to prevent
the call from potentially overwriting other variables that are initialized
from the configuration.
However, it forgot that on Cygwin, trust-executable-bit should default to
true.
Noticed by J6t, confirmed by Ramsay Jones, and the brown paper bag is on
Gitster's head.
fetch: Also fetch submodules in subdirectories in on-demand mode
When on-demand mode was active, examining the new commits just fetched in
the superproject (to check if they record commits for submodules which are
not downloaded yet) wasn't done recursively. Because of that fetch did not
recursively fetch submodules living in subdirectories even when it should
have.
Fix that by adding the RECURSIVE flag to the diff_options used to check
the new commits and avoid future regressions in this area by moving a
submodule in t5526 into a subdirectory.
Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Until now, "git tag -l foo* bar*" would silently ignore the
second argument, showing only refs starting with "foo". It's
not just unfriendly not to take a second pattern; we
actually generated subtly wrong results (from the user's
perspective) because some of the requested tags were
omitted.
This patch allows an arbitrary number of patterns on the
command line; if any of them matches, the ref is shown.
While we're tweaking the documentation, let's also make it
clear that the pattern is fnmatch.
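The matching itself then boils down to a small loop, conceptually
(a sketch; fnmatch(3) is what the documentation now points at):

    #include <fnmatch.h>

    /* show the ref if it matches any pattern, or if no patterns were given */
    static int match_any_pattern(const char **patterns, const char *refname)
    {
            if (!patterns || !*patterns)
                    return 1;
            for (; *patterns; patterns++)
                    if (!fnmatch(*patterns, refname, 0))
                            return 1;
            return 0;
    }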
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rebase -i -p: include non-first-parent commits in todo list
Consider this graph:
D---E (topic, HEAD)
/ /
A---B---C (master)
\
F (topic2)
and the following three commands:
1. git rebase -i -p A
2. git rebase -i -p --onto F A
3. git rebase -i -p B
Currently, (1) and (2) will pick B, D, C, and E onto A and F,
respectively. However, (3) will only pick D and E onto B, but not C,
which is inconsistent with (1) and (2). As a result, we cannot modify C
during the interactive-rebase.
The current behavior also creates a bug if we do:
4. git rebase -i -p C
In (4), E is never picked. And since interactive-rebase resets "HEAD"
to "onto" before picking any commits, D and E are lost after the
interactive-rebase.
This patch fixes the inconsistency and bug by ensuring that all children
of upstream are always picked. This essentially reverts commit
d80d6bc146232d81f1bb4bc58e5d89263fd228d4.
When compiling the todo list, commits reachable from "upstream" should
never be skipped under any conditions. Otherwise, we lose the ability
to modify them like (3), and create a bug like (4).
Two of the tests contain a scenario like (3). Since the new behavior
added more commits for picking, these tests need to be updated to
account for the additional pick lines. A new test has also been added
for (4).
Signed-off-by: Andrew Wong <andrew.kw.w@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
tests: link shell libraries into valgrind directory
When we run tests under valgrind, we symlink anything
executable that starts with git-* or test-* into a special
valgrind bin directory, and then make that our
GIT_EXEC_PATH.
However, shell libraries like git-sh-setup do not have the
executable bit marked, and did not get symlinked. This
means that any test looking for shell libraries in our
exec-path would fail to find them, even though that is a
fine thing to do when testing against a regular git build
(or in a git install, for that matter).
t2300 demonstrated this problem. The fix is to symlink these
shell libraries directly into the valgrind directory.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t/Makefile: pass test opts to valgrind target properly
The valgrind target just reinvokes make with GIT_TEST_OPTS
set to "--valgrind". However, it does this using an
environment variable, which means GIT_TEST_OPTS in your
config.mak would override it, and "make valgrind" would
simply run the test suite without valgrind on.
Instead, we should pass GIT_TEST_OPTS on the command-line,
overriding what's in config.mak, and take care to append to
whatever the user has there already.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The getopt.h header file is not used. Its inclusion is left over from the
original version of this source. Additionally, getopt.h does not exist on
all platforms (SunOS 5.7) and will cause a compilation failure. So, let's
remove it.
Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
config.c: Make git_config() work correctly when called recursively
On Cygwin, this fixes a test failure in t3301-notes.sh (test 98,
"git notes copy --for-rewrite (disabled)").
The test failure is caused by a recursive call to git_config() which
has the effect of skipping to the end-of-file while processing the
"notes.rewriteref" config variable. Thus, any config variables that
appear after "notes.rewriteref" are simply ignored by git_config().
Also, we note that the original FILE handle is leaked as a result
of the recursive call.
The recursive call to git_config() is due to the "schizophrenic stat"
functions on cygwin, where one of two different implementations of
the l/stat functions is selected lazily, depending on some config
variables.
In this case, the init_copy_notes_for_rewrite() function calls
git_config() with the notes_rewrite_config() callback function.
This callback, while processing the "notes.rewriteref" variable,
in turn calls string_list_add_refs_by_glob() to process the
associated ref value. This eventually leads to a call to the
get_ref_dir() function, which in turn calls stat(). On cygwin,
the stat() macro leads to an indirect call to cygwin_stat_stub()
which, via init_stat(), then calls git_config() in order to
determine which l/stat implementation to bind to.
In order to solve this problem, we modify git_config() so that the
global state variables used by the config reading code are packaged
up and managed on a local state stack.
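A rough sketch of the idea (names are illustrative; the real patch
packages more of the parser state than shown here):

    /* everything the config parser used to keep in file-scope globals */
    struct config_state {
            FILE *f;
            const char *name;
            int linenr;
            int eof;
    };

    static struct config_state *cur;   /* state of the innermost parse */

    int git_config_from_file(config_fn_t fn, const char *filename, void *data)
    {
            struct config_state state = { NULL, filename, 1, 0 };
            struct config_state *saved = cur;    /* the parse we interrupt, if any */
            int ret;

            state.f = fopen(filename, "r");
            if (!state.f)
                    return -1;
            cur = &state;
            ret = parse_config_stream(fn, data); /* illustrative; reads via "cur" */
            fclose(state.f);
            cur = saved;                         /* restore the outer parse */
            return ret;
    }

With the state on the call stack, a recursive git_config() call simply
nests instead of clobbering the outer parse or leaking its FILE handle.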
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'forced modes' test fails on cygwin because the post-update
hook loses its executable bit when copied from the templates
directory by git-init. The template loses its executable bit
because the lstat() function resolves to the "native Win32 API"
implementation.
This call to lstat() happens after git-init has set the "git_dir"
(so has_git_dir() returns true), but before the configuration has
been fully initialised. At this point git_config() does not find
any config files to parse and returns 0. Unfortunately, the code
used to determine the cygwin l/stat() function bindings did not
check the return from git_config() and assumed that the config
was complete and accessible once "git_dir" was set.
In order to fix the test, we simply change the binding code to
test the return value from git_config(), to ensure that it actually
had config values to read, before determining the requested binding.
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
help.c: Fix detection of custom merge strategy on cygwin
Test t7606-merge-custom.sh fails on cygwin when git-merge fails
with an "Could not find merge strategy 'theirs'" error, despite
the test correctly preparing an (executable) git-merge-theirs
script.
The cause of the failure is the mis-detection of the executable
status of the script, by the is_executable() function, while the
load_command_list() function is searching the path for additional
merge strategy programs.
Note that the l/stat() "functions" on cygwin are somewhat
schizophrenic (see commits adbc0b6, 7faee6b and 7974843), and
their behaviour depends on the timing of various git setup and
config function calls. In particular, until the "git_dir" has
been set (have_git_dir() returns true), the real cygwin (POSIX
emulating) l/stat() functions are called. Once "git_dir" has
been set, the "native Win32 API" implementations of l/stat()
may, or may not, be called depending on the setting of the
core.filemode and core.ignorecygwinfstricks config variables.
We also note that, since commit c869753, core.filemode is forced
to false, even on NTFS, by git-init and git-clone. A user (or a
test) can, of course, reset core.filemode to true explicitly if
the filesystem supports it (and he doesn't use any problematic
windows software). The test-suite currently runs all tests on
cygwin with core.filemode set to false.
Given the above, we see that the built-in merge strategies are
correctly detected as executable, since they are checked for
before "git_dir" is set, whereas all custom merge strategies are
not, since they are checked for after "git_dir" is set.
In order to fix the mis-detection problem, we change the code in
is_executable() to re-use the conditional WIN32 code section,
which actually looks at the content of the file to determine if
the file is executable. On cygwin we also make the additional
code conditional on the executable bit of the file mode returned
by the initial stat() call. (only the real cygwin function would
set the executable bit in the file mode.)
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
refs.c had an error message "Trying to write ref with nonexistant object",
and no tests relied on the wrong spelling.
The typo was also present in some test script internals; those tests still pass.
Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
archive: reorder option parsing and config reading
The archive command does three things during its
initialization phase:
1. parse command-line options
2. setup the git directory
3. read config
During phase (1), if we see any options that do not require
a git directory (like "--list"), we handle them immediately
and exit, making it safe to abort step (2) if we are not in
a git directory.
Step (3) must come after step (2), since the git directory
may influence configuration. However, this leaves no
possibility of configuration from step (3) impacting the
command-line options in step (1) (which is useful, for
example, for supporting user-configurable output formats).
Instead, let's reorder this to:
1. setup the git directory, if it exists
2. read config
3. parse command-line options
4. if we are not in a git repository, die
This should have the same external behavior, but puts
configuration before command-line parsing.
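In outline, cmd_archive() then starts like this (a sketch; the option
table and the exact write_archive() signature are simplified here):

    int cmd_archive(int argc, const char **argv, const char *prefix)
    {
            int nongit = 0;
            struct option options[] = { OPT_END() };  /* real table elided */
            const char * const usage_str[] = {
                    "git archive [options] <tree-ish>", NULL
            };

            /* 1. set up the git directory, tolerating not being in one */
            prefix = setup_git_directory_gently(&nongit);

            /* 2. read whatever configuration is available */
            git_config(git_default_config, NULL);

            /* 3. parse options; --list and friends may handle things and exit */
            argc = parse_options(argc, argv, prefix, options, usage_str,
                                 PARSE_OPT_KEEP_ALL);

            /* 4. only now insist on a repository for the real work */
            if (nongit)
                    die("Not a git repository");

            return write_archive(argc, argv, prefix);  /* simplified */
    }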
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t/gitweb-lib.sh: skip gitweb tests when perl dependencies are not met
Linus noticed that we go ahead testing gitweb and fail miserably on a
box with Perl but not the perl-CGI library. We already have code to detect
the lack of Perl and refrain from testing gitweb in t/gitweb-lib.sh (by the
way, shouldn't it be called t/lib-gitweb.sh?), so let's extend it
to cover this case as well.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In openSUSE, /etc/mime.types has only spaces. I don't know if there's
a canonical reference that says that only tabs are allowed. Mutt at
least also accepts spaces, so make gitweb more liberal too.
Signed-off-by: Ludwig Nussel <ludwig.nussel@suse.de>
Acked-by: Jakub Narebski <jnareb@gmail.com>
Acked-by: John 'Warthog9' Hawley <warthog9@eaglescrag.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-submodule.sh: clarify the "should we die now" logic
Earlier the decision to stop or continue was made on the $action variable
that was set by inspecting $update_module variable. The former is a
redundant variable and will be removed in another topic.
Decide upon inspecting $update_module if a failure should cascade up and
cause us to stop immediately, and use a variable that means just that, to
clarify the logic.
Incidentally this also makes the merge with the other topic slightly
easier and cleaner to understand.
"git submodule update" stops at the first error and gives control
back to the user. Only after the user fixes the problematic
submodule and runs "git submodule update" again is the second error
found. The user then needs to repeat this until all the problems are
found and fixed, one by one. This is tedious.
Instead, the command can remember which submodules it had trouble with,
continue updating the ones it can, and report which ones had errors at
the end. The user can run "git submodule update", find all the ones that
need minor fixing (e.g. a dirty working tree), and fix them in a single
pass. Then another "git submodule update" can be run to update them all.
Note that the problematic submodules are skipped only when they are to
be integrated with a safer value of submodule.<name>.update option,
namely "checkout". Fixing a failure in a submodule that uses "rebase" or
"merge" may need an involved conflict resolution by the user, and
leaving too many submodules in states that need resolution would not
reduce the mental burden on the user.
Signed-off-by: Fredrik Gustafsson <iveqy@iveqy.com>
Mentored-by: Jens Lehmann <Jens.Lehmann@web.de>
Mentored-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update zlib_post_call() that adjusts the wrapper's notion of avail_in and
avail_out to what came back from zlib, so that the callers can feed
buffers larger than 4GB to the API.
When underlying inflate/deflate stopped processing because we fed a buffer
larger than 4GB limit, detect that case, update the state variables, and
let the zlib function work another round.
The size of objects we read from the repository and data we try to put
into the repository are represented in "unsigned long", so that on larger
architectures we can handle objects that weigh more than 4GB.
But the interface defined in zlib.h to communicate with inflate/deflate
limits avail_in (how many bytes of input are we calling zlib with) and
avail_out (how many bytes of output from zlib are we ready to accept)
fields effectively to 4GB by defining their type to be uInt.
In many places in our code, we allocate a large buffer (e.g. mmap'ing a
large loose object file) and tell zlib its size by assigning the size to
avail_in field of the stream, but that will truncate the high octets of
the real size. The worst part of this story is that we often pass around
z_stream (the state object used by zlib) to keep track of the number of
used bytes in input/output buffer by inspecting these two fields, which
practically limits our callchain to the same 4GB limit.
Wrap z_stream in another structure git_zstream that can express avail_in
and avail_out in unsigned long. For now, just die() when the caller gives
a size that cannot be given to a single zlib call. In later patches in the
series, we would make git_inflate() and git_deflate() internally loop to
give callers an illusion that our "improved" version of zlib interface can
operate on a buffer larger than 4GB in one go.
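The wrapper essentially looks like this (a sketch; the real structure
may carry a few more bookkeeping fields):

    #include <zlib.h>

    struct git_zstream {
            z_stream z;                  /* the real zlib state */
            unsigned long avail_in;      /* what the caller asked for ... */
            unsigned long avail_out;     /* ... which may not fit in zlib's uInt */
            unsigned long total_in;
            unsigned long total_out;
            unsigned char *next_in;
            unsigned char *next_out;
    };

git_inflate() and git_deflate() copy as much of avail_in/avail_out into
the embedded z_stream as fits in a uInt before calling zlib, and fold
the progress back into the unsigned long counters afterwards.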
Wrap deflateInit, deflate, and deflateEnd for everybody, and the sole use
of deflateInit2 in remote-curl.c to tell the library to use gzip header
and trailer in git_deflate_init_gzip().
There is only one caller that cares about the status from deflateEnd().
Introduce git_deflate_end_gently() to let that sole caller retrieve the
status and act on it (i.e. die) for now, but we would probably want to
make inflate_end/deflate_end die when they ran out of memory and get
rid of the _gently() kind.
zlib: wrap the inflateInit2 call used to accept only the gzip format
http-backend.c uses inflateInit2() to tell the library that it wants to
accept only gzip format. Wrap it in a helper function so that readers do
not have to wonder what the magic numbers 15 and 16 are for.
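Inside zlib.c the wrapper is tiny, roughly like this (a sketch;
zlib_pre_call(), zlib_post_call() and zerr_to_string() are the
wrapper's internal helpers as I understand them):

    void git_inflate_init_gzip_only(git_zstream *strm)
    {
            /*
             * 15 is the (maximum) window size; adding 16 tells zlib to
             * expect a gzip header and trailer instead of the zlib format.
             */
            const int windowBits = 15 + 16;
            int status;

            zlib_pre_call(strm);
            status = inflateInit2(&strm->z, windowBits);
            zlib_post_call(strm);
            if (status != Z_OK)
                    die("inflateInit2: %s", zerr_to_string(status));
    }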
Before refactoring the main part of the wrappers, first move the
logic that converts the error status that comes back from zlib into
a string out to a helper function.
gitweb: do not misparse nonnumeric content tag files that contain a digit
v1.7.6-rc0~27^2~4 (gitweb: Change the way "content tags" ('ctags') are
handled, 2011-04-29) tried to make gitweb's tag cloud feature more
intuitive for webmasters by checking whether the ctags/<label> under
a project's .git dir contains a number (representing the strength of
association to <label>) before treating it as one.
With that change, after putting '$feature{'ctags'}{'default'} = [1];'
in your $GITWEB_CONFIG, you could do
echo Linux >.git/ctags/linux
and gitweb would treat that as a request to tag the current repository
with the Linux tag, instead of the previous behavior of writing an
error page embedded in the projects list that triggers error messages
from Chromium and Firefox about malformed XML.
Unfortunately the pattern (\d+) used to match numbers is too loose,
and the "XML declaration allowed only at the start of the document"
error can still be experienced if you write "Linux-2.6" in place of
"Linux" in the example above. Fix it by tightening the pattern to
^\d+$.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Document the underlying protocol used by shallow repositories and --depth commands.
Explain the exchange that occurs between a client and server when
the client is requesting shallow history and/or is already using
a shallow repository.
Signed-off-by: Alex Neronskiy <zakmagnus@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix documentation of fetch-pack that implies that the client can disconnect after sending wants.
Specify conditions under which the client can terminate the connection
early. Previously, an unintended behavior was possible which could
confuse servers.
Based-on-patch-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Alex Neronskiy <zakmagnus@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
sha1_file.c: "legacy" is really the current format
Every time I look at the read-loose-object codepath, legacy_loose_object()
function makes my brain go through mental contortion. When we were playing
with the experimental loose object format, it may have made sense to call
the traditional format "legacy", in the hope that the experimental one
will some day replace it to become official, but it never happened.
This renames the function (and negates its return value) to detect if we
are looking at the experimental format, and move the code around in its
caller which used to do "if we are looking at legacy, do this special case,
otherwise the normal case is this". The codepath to read from the loose
objects in experimental format is the "unlikely" case.
Someday after Git 2.0, we should drop the support of this format.
verify_path(): simplify check at the directory boundary
We simply want to say "At a directory boundary, be careful with a name
that begins with a dot, and forbid a name that ends with the boundary
character or has duplicated boundary characters".
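After the simplification, the check ends up looking roughly like this
(is_dir_sep() and verify_dotfile() are the existing helpers; the shape
of the code here is a sketch):

    static int verify_path(const char *path)
    {
            char c;

            goto inside;    /* the start of the path counts as a boundary, too */
            for (;;) {
                    if (!c)
                            return 1;
                    if (is_dir_sep(c)) {
    inside:
                            c = *path++;
                            switch (c) {
                            default:
                                    continue;   /* ordinary character */
                            case '/': case '\0':
                                    break;      /* empty component or trailing '/' */
                            case '.':
                                    if (verify_dotfile(path))
                                            continue;   /* a harmless dot-name */
                            }
                            return 0;
                    }
                    c = *path++;
            }
    }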
In cmd_add() the switch statement used to resolve a relative url was
present twice. Remove the second one and use the realrepo variable set
by the first one (lines 194 ff.) instead.
Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
submodule add: allow relative repository path even when no url is set
Adding a submodule with a relative repository path only succeeded when
the superproject's default remote was set. But when that is unset, the
superproject is its own authoritative upstream, so let's use its working
directory as upstream instead.
This allows users to set up a new superproject where the submodule urls
are configured relative to the superproject's upstream while its default
remote can be configured later.
Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jn/gitweb-docs:
gitweb: Move "Requirements" up in gitweb/INSTALL
gitweb: Describe CSSMIN and JSMIN in gitweb/INSTALL
gitweb: Move information about installation from README to INSTALL
* bc/maint-status-z-to-use-porcelain:
builtin/commit.c: set status_format _after_ option parsing
t7508: demonstrate status's failure to use --porcelain format with -z
Windows: teach getenv to do a case-sensitive search
getenv() on Windows looks up environment variables in a case-insensitive
manner. Even though all documentation claims that the environment is
case-insensitive, it is possible for applications to pass an environment
to child processes that has variables that differ only in case. Bash on
Windows does this, for example, and sh-i18n--envsubst depends on this
behavior.
With this patch environment variables are first looked up in a
case-sensitive manner; only if this finds nothing is the system's getenv()
used as a fallback.
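A sketch of the approach (the helper name is illustrative; the real
code lives in compat/mingw.c):

    #include <stdlib.h>
    #include <string.h>

    extern char **environ;

    /* case-SENSITIVE scan over the environment block */
    static char *lookup_env(char **env, const char *name, size_t namelen)
    {
            for (; *env; env++) {
                    if (!strncmp(*env, name, namelen) && (*env)[namelen] == '=')
                            return *env + namelen + 1;
            }
            return NULL;
    }

    char *mingw_getenv(const char *name)
    {
            char *result = lookup_env(environ, name, strlen(name));
            if (!result)
                    /*
                     * here getenv is the system's case-insensitive one,
                     * thanks to an earlier #undef getenv
                     */
                    result = getenv(name);
            return result;
    }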
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We want to use static lookup_env() in a subsequent change.
At first sight, this change looks innocent. But it is not, due to the
#undef getenv. There is one caller of getenv between the old location and
the new location whose behavior could change. But as can be seen from the
definition of mingw_getenv, the behavior for this caller does not change
substantially.
To ensure consistent behavior in the future, change all getenv callers
in mingw.c to use mingw_getenv.
With this patch, this is not a big deal, yet, but with the subsequent
change, where we teach getenv to do a case-sensitive lookup, the behavior
of all call sites is changed.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This finally gets rid of the inefficient verify-pack implementation that
walks objects in the packfile in their object name order and replaces it
with a call to index-pack --verify. As a side effect, it also removes
packed_object_info_detail() API which is rather expensive.
As this changes the way errors are reported (verify-pack used to rely on
the usual runtime error detection routine unpack_entry() to diagnose the
CRC errors in an entry in the *.idx file; index-pack --verify checks the
whole *.idx file in one go), update a test that expected the string "CRC"
to appear in the error message.
index-pack: show histogram when emulating "verify-pack -v"
The histogram produced by "verify-pack -v" always had an artificial
limit of 50, but index-pack knows what the maximum delta depth is, so
we do not have to limit it.
index-pack: start learning to emulate "verify-pack -v"
The "index-pack" machinery already has almost enough knowledge to produce
the same output as "verify-pack -v". Fill small gaps in its bookkeeping,
and teach it to show what it knows.
Add a few more command line options that do not have to be advertised to
the end users. They will be used internally when verify-pack calls this.
The eventual goal is to remove verify-pack implementation and redo it as a
thin wrapper around the index-pack, so that we can remove the rather
expensive packed_object_info_detail() API.
This still does not do the delta-chain-depth histogram yet but that part
is easy.
When creating a new branch, we fed "refs/heads/<proposed name>" as a string
to get_sha1() and expected it to fail when no branch with that name exists
yet.
The right way to check if a ref exists is to check with resolve_ref().
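For illustration, with the resolve_ref() of this era that amounts to
something like:

    unsigned char sha1[20];
    struct strbuf ref = STRBUF_INIT;

    strbuf_addf(&ref, "refs/heads/%s", name);
    if (resolve_ref(ref.buf, sha1, 1, NULL))
            die("branch '%s' already exists", name);

which only answers "does this ref exist", without get_sha1()'s extra
interpretations of the string.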
A naïve solution that might appear attractive but does not work is to
forbid slashes in get_describe_name(). A describe name is in the form
of "ANYTHING-g<short sha1>", and that ANYTHING part comes from an
original tag name used in the repository where the user ran the
describe command. A sick user could have a confusing hierarchical tag
whose name is "refs/heads/foobar" (stored as "refs/tags/refs/heads/foobar")
to generate a describe name "refs/heads/foobar-6-g02ac983", and we should
be able to use that name to refer to the object whose name is 02ac983.
With --heading, the filename is printed once before matches from that
file instead of at the start of each line, giving more screen space to
the actual search results.
This option is taken from ack (http://betterthangrep.com/), and now
git grep can dress up like it.
With --break, an empty line is printed between matches from different
files, increasing readability. This option is taken from ack
(http://betterthangrep.com/).
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 431d6e7b (grep: enable threading for context line printing)
split the printing of the "--\n" mark between results from different
files out into two places: show_line() in grep.c for the non-threaded
case and work_done() in builtin/grep.c for the threaded case. Commit 55f638bd (grep: Colorize filename, line number, and separator) updated
the former, but not the latter, so the separators between files are
not colored if threads are used.
This patch merges the two. In the threaded case, hunk marks are now
printed by show_line() for every file, including the first one, and the
very first mark is simply skipped in work_done(). This ensures that the
output is properly colored and works just as well.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With XSS prevention on (enabled using $prevent_xss), blobs
('blob_plain') of all types except a few known safe ones are served
with "Content-Disposition: attachment". However the check was too
strict; it didn't take into account optional parameter attributes,
http: pass http.cookiefile using CURLOPT_COOKIEFILE
If the config option http.cookiefile is set, pass this file to libCURL using
the CURLOPT_COOKIEFILE option. This is similar to calling curl with the -b
option. This allows git http authorization with authentication mechanisms
that use cookies, such as SAML Enhanced Client or Proxy (ECP) used by
Shibboleth.
To use SAML/ECP, the user needs to request a session cookie with their own ECP
code. See for example:
gitweb: Move information about installation from README to INSTALL
Almost straightforward moving of "How to configure gitweb for your
local system" section from gitweb/README to gitweb/INSTALL, as it is
about build time configuration. Updated references to it.
Signed-off-by: Jakub Narebski <jnareb@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 1b908b6 (wt-status: rename and restructure
status-print-untracked, 2010-04-10) converted the
wt_status_print_untracked function into
wt_status_print_other, taking a string_list of either
untracked or ignored items to print. However, the "nothing
to show" early return still checked the wt_status->untracked
list instead of the passed-in list.
That meant that if we had ignored items to show, but no
untracked items, we would erroneously exit early and fail to
show the ignored items.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge branch 'jk/maint-config-alias-fix' into maint
* jk/maint-config-alias-fix:
handle_options(): do not miscount how many arguments were used
config: always parse GIT_CONFIG_PARAMETERS during git_config
git_config: don't peek at global config_parameters
config: make environment parsing routines static
* jk/maint-docs:
docs: fix some antique example output
docs: make sure literal "->" isn't converted to arrow
docs: update status --porcelain format
docs: minor grammar fixes to git-status
Since 9d8a5a5 (diffcore-rename: refactor "too many candidates" logic,
2011-01-06), diffcore_rename() initializes num_src but does not use it
anymore. "-Wunused-but-set-variable" in gcc-4.6 complains about this.
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/format-patch-am:
format-patch: preserve subject newlines with -k
clean up calling conventions for pretty.c functions
pretty: add pp_commit_easy function for simple callers
mailinfo: always clean up rfc822 header folding
t: test subject handling in format-patch / am pipeline
* jk/maint-docs:
docs: fix some antique example output
docs: make sure literal "->" isn't converted to arrow
docs: update status --porcelain format
docs: minor grammar fixes to git-status