This variable can be set to 'input',
in which case no output conversion is performed.
+core.checkRoundtripEncoding::
+ A comma and/or whitespace separated list of encodings that Git
+ performs UTF-8 round trip checks on if they are used in a
+ `working-tree-encoding` attribute (see linkgit:gitattributes[5]).
+ The default value is `SHIFT-JIS`.
+
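The check itself is conceptually a round trip: convert the blob content from the listed encoding to UTF-8, convert it back, and verify the bytes are unchanged. A minimal standalone sketch of that idea (this is not Git's convert.c implementation; the helper names and buffer sizing are illustrative assumptions):

#include <iconv.h>
#include <stdlib.h>
#include <string.h>

/* One conversion pass; returns a malloc'd buffer or NULL on failure.
 * Illustrative only -- no E2BIG retry loop, fixed output headroom. */
static char *convert_buf(const char *from, const char *to,
                         const char *in, size_t inlen, size_t *outlen)
{
        iconv_t cd = iconv_open(to, from);
        size_t cap = inlen * 4 + 16, left = cap;
        char *inp = (char *)in, *out, *outp;

        if (cd == (iconv_t)-1)
                return NULL;
        out = outp = malloc(cap);
        if (!out || iconv(cd, &inp, &inlen, &outp, &left) == (size_t)-1) {
                free(out);
                iconv_close(cd);
                return NULL;
        }
        iconv_close(cd);
        *outlen = cap - left;
        return out;
}

/* Round-trip check: encoding -> UTF-8 -> encoding must reproduce the input. */
static int roundtrips_cleanly(const char *encoding, const char *src, size_t len)
{
        size_t utf8_len = 0, back_len = 0;
        char *utf8 = convert_buf(encoding, "UTF-8", src, len, &utf8_len);
        char *back = utf8 ? convert_buf("UTF-8", encoding, utf8, utf8_len, &back_len) : NULL;
        int ok = back && back_len == len && !memcmp(back, src, len);

        free(utf8);
        free(back);
        return ok;
}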
core.symlinks::
If false, symbolic links are checked out as small plain files that
contain the link text. linkgit:git-update-index[1] and
This setting defaults to "refs/notes/commits", and it can be overridden by
the `GIT_NOTES_REF` environment variable. See linkgit:git-notes[1].
+core.commitGraph::
+ Enable the Git commit-graph feature, which allows reading from
+ the commit-graph file.
+
core.sparseCheckout::
Enable "sparse checkout" feature. See section "Sparse checkout" in
linkgit:git-read-tree[1] for more information.
A boolean to make git-clean do nothing unless given -f,
-i or -n. Defaults to true.
+color.advice::
+ A boolean to enable/disable color in hints (e.g. when a push
+ failed, see `advice.*` for a list). May be set to `always`,
+ `false` (or `never`) or `auto` (or `true`), in which case colors
+ are used only when the error output goes to a terminal. If
+ unset, then the value of `color.ui` is used (`auto` by default).
+
+color.advice.hint::
+ Use customized color for hints.
+
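For example, to color hints yellow only when they go to a terminal (the color name here is just an illustration):

    [color]
    	advice = auto
    [color "advice"]
    	hint = yellow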
color.branch::
A boolean to enable/disable color in the output of
linkgit:git-branch[1]. May be set to `always`,
A boolean to enable/disable colored output when the pager is in
use (default is true).
+color.push::
+ A boolean to enable/disable color in push errors. May be set to
+ `always`, `false` (or `never`) or `auto` (or `true`), in which
+ case colors are used only when the error output goes to a terminal.
+ If unset, then the value of `color.ui` is used (`auto` by default).
+
+color.push.error::
+ Use customized color for push errors.
+
color.showBranch::
A boolean to enable/disable color in the output of
linkgit:git-show-branch[1]. May be set to `always`,
status short-format), or
`unmerged` (files which have unmerged changes).
+color.transport::
+ A boolean to enable/disable color when pushes are rejected. May be
+ set to `always`, `false` (or `never`) or `auto` (or `true`), in which
+ case colors are used only when the error output goes to a terminal.
+ If unset, then the value of `color.ui` is used (`auto` by default).
+
+color.transport.rejected::
+ Use customized color when a push was rejected.
+
color.ui::
This variable determines the default value for variables such
as `color.diff` and `color.grep` that control the use of color
Make `git gc --auto` return immediately and run in background
if the system supports it. Default is true.
+ gc.bigPackThreshold::
+ If non-zero, all packs larger than this limit are kept when
+ `git gc` is run. This is very similar to `--keep-largest-pack`
+ except that all packs that meet the threshold are kept, not
+ just the largest pack. Defaults to zero. Common unit suffixes of
+ 'k', 'm', or 'g' are supported.
+ +
+ Note that if the number of kept packs is more than `gc.autoPackLimit`,
+ this configuration variable is ignored; all packs except the largest
+ pack will be repacked. After this the number of packs should go below
+ `gc.autoPackLimit` and `gc.bigPackThreshold` should be respected again.
+
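The interaction between the two limits can be sketched as follows (a condensed illustration of the logic added to builtin/gc.c later in this series, reusing its helper and variable names; `choose_kept_packs` itself is hypothetical):

static void choose_kept_packs(struct string_list *keep_pack)
{
	if (!big_pack_threshold)
		return;
	/* keep every local pack at or above the threshold ... */
	find_base_packs(keep_pack, big_pack_threshold);
	if (keep_pack->nr >= gc_auto_pack_limit) {
		/*
		 * ... unless that defeats gc.autoPackLimit; then fall
		 * back to keeping only the single largest pack
		 */
		big_pack_threshold = 0;
		string_list_clear(keep_pack, 0);
		find_base_packs(keep_pack, 0);
	}
}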
gc.logExpiry::
If the file gc.log exists, then `git gc --auto` won't run
unless that file is more than 'gc.logExpiry' old. Default is
SYNOPSIS
--------
[verse]
- 'git gc' [--aggressive] [--auto] [--quiet] [--prune=<date> | --no-prune] [--force]
+ 'git gc' [--aggressive] [--auto] [--quiet] [--prune=<date> | --no-prune] [--force] [--keep-largest-pack]
DESCRIPTION
-----------
to 0 disables automatic packing of loose objects.
+
If the number of packs exceeds the value of `gc.autoPackLimit`,
- then existing packs (except those marked with a `.keep` file)
+ then existing packs (except those marked with a `.keep` file
+ or over the `gc.bigPackThreshold` limit)
are consolidated into a single pack by using the `-A` option of
- 'git repack'. Setting `gc.autoPackLimit` to 0 disables
- automatic consolidation of packs.
+ 'git repack'.
+ If the amount of memory estimated to be needed for `git repack` to
+ run smoothly is not available and `gc.bigPackThreshold` is not set,
+ the largest pack will also be excluded (this is the equivalent of
+ running `git gc` with `--keep-largest-pack`).
+ Setting `gc.autoPackLimit` to 0 disables automatic consolidation of
+ packs.
+
If housekeeping is required due to many loose objects or packs, all
other housekeeping tasks (e.g. rerere, working trees, reflog...) will
Force `git gc` to run even if there may be another `git gc`
instance running on this repository.
+ --keep-largest-pack::
+ All packs except the largest pack and those marked with a
+ `.keep` file are consolidated into a single pack. When this
+ option is used, `gc.bigPackThreshold` is ignored.
+
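For example, to repack everything into a single pack while leaving the largest existing pack alone, regardless of any configured `gc.bigPackThreshold`:

    git gc --keep-largest-pack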
Configuration
-------------
much time is spent optimizing the delta compression of the objects in
the repository when the --aggressive option is specified. The larger
the value, the more time is spent optimizing the delta compression. See
-the documentation for the --window' option in linkgit:git-repack[1] for
+the documentation for the --window option in linkgit:git-repack[1] for
more details. This defaults to 250.
Similarly, the optional configuration variable `gc.aggressiveDepth`
#include "commit.h"
#include "packfile.h"
#include "object-store.h"
+ #include "pack.h"
+ #include "pack-objects.h"
+ #include "blob.h"
+ #include "tree.h"
#define FAILED_RUN "failed to run %s"
static const char *gc_log_expire = "1.day.ago";
static const char *prune_expire = "2.weeks.ago";
static const char *prune_worktrees_expire = "3.months.ago";
+ static unsigned long big_pack_threshold;
+ static unsigned long max_delta_cache_size = DEFAULT_DELTA_CACHE_SIZE;
static struct argv_array pack_refs_cmd = ARGV_ARRAY_INIT;
static struct argv_array reflog = ARGV_ARRAY_INIT;
git_config_get_expiry("gc.worktreepruneexpire", &prune_worktrees_expire);
git_config_get_expiry("gc.logexpiry", &gc_log_expire);
+ git_config_get_ulong("gc.bigpackthreshold", &big_pack_threshold);
+ git_config_get_ulong("pack.deltacachesize", &max_delta_cache_size);
+
git_config(git_default_config, NULL);
}
return needed;
}
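+ /*
+  * Collect the names of packs that should be kept: with a non-zero
+  * `limit`, every local pack whose size is at or above the limit; with
+  * limit == 0, only the single largest local pack. Returns the largest
+  * pack when limit is zero, NULL otherwise.
+  */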
+ static struct packed_git *find_base_packs(struct string_list *packs,
+ unsigned long limit)
+ {
+ struct packed_git *p, *base = NULL;
+
+ for (p = get_packed_git(the_repository); p; p = p->next) {
+ if (!p->pack_local)
+ continue;
+ if (limit) {
+ if (p->pack_size >= limit)
+ string_list_append(packs, p->pack_name);
+ } else if (!base || base->pack_size < p->pack_size) {
+ base = p;
+ }
+ }
+
+ if (base)
+ string_list_append(packs, base->pack_name);
+
+ return base;
+ }
+
static int too_many_packs(void)
{
struct packed_git *p;
return gc_auto_pack_limit < cnt;
}
- static void add_repack_all_option(void)
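+ /*
+  * Best-effort detection of the total physical memory of the system;
+  * returns 0 when none of the probed interfaces is available.
+  */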
+ static uint64_t total_ram(void)
+ {
+ #if defined(HAVE_SYSINFO)
+ struct sysinfo si;
+
+ if (!sysinfo(&si))
+ return si.totalram;
+ #elif defined(HAVE_BSD_SYSCTL) && (defined(HW_MEMSIZE) || defined(HW_PHYSMEM))
+ int64_t physical_memory;
+ int mib[2];
+ size_t length;
+
+ mib[0] = CTL_HW;
+ # if defined(HW_MEMSIZE)
+ mib[1] = HW_MEMSIZE;
+ # else
+ mib[1] = HW_PHYSMEM;
+ # endif
+ length = sizeof(int64_t);
+ if (!sysctl(mib, 2, &physical_memory, &length, NULL, 0))
+ return physical_memory;
+ #elif defined(GIT_WINDOWS_NATIVE)
+ MEMORYSTATUSEX memInfo;
+
+ memInfo.dwLength = sizeof(MEMORYSTATUSEX);
+ if (GlobalMemoryStatusEx(&memInfo))
+ return memInfo.ullTotalPhys;
+ #endif
+ return 0;
+ }
+
+ static uint64_t estimate_repack_memory(struct packed_git *pack)
+ {
+ unsigned long nr_objects = approximate_object_count();
+ size_t os_cache, heap;
+
+ if (!pack || !nr_objects)
+ return 0;
+
+ /*
+ * First we have to scan through at least one pack.
+ * Assume enough room in OS file cache to keep the entire pack
+ * or we may accidentally evict data of other processes from
+ * the cache.
+ */
+ os_cache = pack->pack_size + pack->index_size;
+ /* then pack-objects needs lots more for book keeping */
+ heap = sizeof(struct object_entry) * nr_objects;
+ /*
+ * internal rev-list --all --objects takes up some memory too,
+ * let's say half of it is for blobs
+ */
+ heap += sizeof(struct blob) * nr_objects / 2;
+ /*
+ * and the other half is for trees (commits and tags are
+ * usually insignificant)
+ */
+ heap += sizeof(struct tree) * nr_objects / 2;
+ /* and then obj_hash[], underestimated in fact */
+ heap += sizeof(struct object *) * nr_objects;
+ /* revindex is used also */
+ heap += sizeof(struct revindex_entry) * nr_objects;
+ /*
+ * read_sha1_file() (either at delta calculation phase, or
+ * writing phase) also fills up the delta base cache
+ */
+ heap += delta_base_cache_limit;
+ /* and of course pack-objects has its own delta cache */
+ heap += max_delta_cache_size;
+
+ return os_cache + heap;
+ }
+
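As a rough worked example under assumed sizes (the per-struct numbers and the 96 MiB delta base cache default are illustrative assumptions; only the 256 MiB delta cache default comes from this series): a repository with about one million objects and a 500 MiB pack with a 30 MiB index gives

    os_cache ~ 500 MiB (pack) + 30 MiB (index)            ~ 530 MiB
    heap     ~ 80 (object entries) + 45 (blobs/trees)
             + 24 (obj_hash, revindex) + 96 (delta base cache)
             + 256 (delta cache)                           ~ 500 MiB

for a total of roughly 1 GiB, so with 1 GiB of RAM the largest pack would stay out of the repack, while with 4 GiB it would not.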
+ static int keep_one_pack(struct string_list_item *item, void *data)
+ {
+ argv_array_pushf(&repack, "--keep-pack=%s", basename(item->string));
+ return 0;
+ }
+
+ static void add_repack_all_option(struct string_list *keep_pack)
{
if (prune_expire && !strcmp(prune_expire, "now"))
argv_array_push(&repack, "-a");
if (prune_expire)
argv_array_pushf(&repack, "--unpack-unreachable=%s", prune_expire);
}
+
+ if (keep_pack)
+ for_each_string_list(keep_pack, keep_one_pack, NULL);
}
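With the default `gc.pruneExpire` and a single kept pack, the repack command assembled by `git gc` would now look roughly like this (pack name hypothetical):

    git repack -d -l -A --unpack-unreachable=2.weeks.ago --keep-pack=pack-1234abcd.pack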
static void add_repack_incremental_option(void)
* we run "repack -A -d -l". Otherwise we tell the caller
* there is no need.
*/
- if (too_many_packs())
- add_repack_all_option();
- else if (too_many_loose_objects())
+ if (too_many_packs()) {
+ struct string_list keep_pack = STRING_LIST_INIT_NODUP;
+
+ if (big_pack_threshold) {
+ find_base_packs(&keep_pack, big_pack_threshold);
+ if (keep_pack.nr >= gc_auto_pack_limit) {
+ big_pack_threshold = 0;
+ string_list_clear(&keep_pack, 0);
+ find_base_packs(&keep_pack, 0);
+ }
+ } else {
+ struct packed_git *p = find_base_packs(&keep_pack, 0);
+ uint64_t mem_have, mem_want;
+
+ mem_have = total_ram();
+ mem_want = estimate_repack_memory(p);
+
+ /*
+ * Only allow 1/2 of memory for pack-objects, leave
+ * the rest for the OS and other processes in the
+ * system.
+ */
+ if (!mem_have || mem_want < mem_have / 2)
+ string_list_clear(&keep_pack, 0);
+ }
+
+ add_repack_all_option(&keep_pack);
+ string_list_clear(&keep_pack, 0);
+ } else if (too_many_loose_objects())
add_repack_incremental_option();
else
return 0;
const char *name;
pid_t pid;
int daemonized = 0;
+ int keep_base_pack = -1;
+ timestamp_t dummy;
struct option builtin_gc_options[] = {
OPT__QUIET(&quiet, N_("suppress progress reporting")),
OPT_BOOL_F(0, "force", &force,
N_("force running gc even if there may be another gc running"),
PARSE_OPT_NOCOMPLETE),
+ OPT_BOOL(0, "keep-largest-pack", &keep_base_pack,
+ N_("repack all other packs except the largest pack")),
OPT_END()
};
/* default expiry time, overwritten in gc_config */
gc_config();
if (parse_expiry_date(gc_log_expire, &gc_log_expire_time))
- die(_("Failed to parse gc.logexpiry value %s"), gc_log_expire);
+ die(_("failed to parse gc.logexpiry value %s"), gc_log_expire);
if (pack_refs < 0)
pack_refs = !is_bare_repository();
if (argc > 0)
usage_with_options(builtin_gc_usage, builtin_gc_options);
+ if (prune_expire && parse_expiry_date(prune_expire, &dummy))
+ die(_("failed to parse prune expiry value %s"), prune_expire);
+
if (aggressive) {
argv_array_push(&repack, "-f");
if (aggressive_depth > 0)
*/
daemonized = !daemonize();
}
- } else
- add_repack_all_option();
+ } else {
+ struct string_list keep_pack = STRING_LIST_INIT_NODUP;
+
+ if (keep_base_pack != -1) {
+ if (keep_base_pack)
+ find_base_packs(&keep_pack, 0);
+ } else if (big_pack_threshold) {
+ find_base_packs(&keep_pack, big_pack_threshold);
+ }
+
+ add_repack_all_option(&keep_pack);
+ string_list_clear(&keep_pack, 0);
+ }
name = lock_repo_for_gc(force, &pid);
if (name) {
#include "list.h"
#include "packfile.h"
#include "object-store.h"
+ #include "dir.h"
static const char *pack_usage[] = {
N_("git pack-objects --stdout [<options>...] [< <ref-list> | < <object-list>]"),
static struct packing_data to_pack;
static struct pack_idx_entry **written_list;
- static uint32_t nr_result, nr_written;
+ static uint32_t nr_result, nr_written, nr_seen;
static int non_empty;
static int reuse_delta = 1, reuse_object = 1;
static int local;
static int have_non_local_packs;
static int incremental;
- static int ignore_packed_keep;
+ static int ignore_packed_keep_on_disk;
+ static int ignore_packed_keep_in_core;
static int allow_ofs_delta;
static struct pack_idx_option pack_idx_opts;
static const char *base_name;
static int exclude_promisor_objects;
static unsigned long delta_cache_size = 0;
- static unsigned long max_delta_cache_size = 256 * 1024 * 1024;
+ static unsigned long max_delta_cache_size = DEFAULT_DELTA_CACHE_SIZE;
static unsigned long cache_max_small_delta_size = 1000;
static unsigned long window_memory_limit = 0;
* If so, rewrite it like in fast-import
*/
if (pack_to_stdout) {
- hashclose(f, oid.hash, CSUM_CLOSE);
+ finalize_hashfile(f, oid.hash, CSUM_HASH_IN_STREAM | CSUM_CLOSE);
} else if (nr_written == nr_remaining) {
- hashclose(f, oid.hash, CSUM_FSYNC);
+ finalize_hashfile(f, oid.hash, CSUM_HASH_IN_STREAM | CSUM_FSYNC | CSUM_CLOSE);
} else {
- int fd = hashclose(f, oid.hash, 0);
+ int fd = finalize_hashfile(f, oid.hash, 0);
fixup_pack_header_footer(fd, oid.hash, pack_tmp_name,
nr_written, oid.hash, offset);
close(fd);
* Otherwise, we signal "-1" at the end to tell the caller that we do
* not know either way, and it needs to check more packs.
*/
- if (!ignore_packed_keep &&
+ if (!ignore_packed_keep_on_disk &&
+ !ignore_packed_keep_in_core &&
(!local || !have_non_local_packs))
return 1;
if (local && !p->pack_local)
return 0;
- if (ignore_packed_keep && p->pack_local && p->pack_keep)
+ if (p->pack_local &&
+ ((ignore_packed_keep_on_disk && p->pack_keep) ||
+ (ignore_packed_keep_in_core && p->pack_keep_in_core)))
return 0;
/* we don't know yet; keep looking for more packs */
off_t found_offset = 0;
uint32_t index_pos;
+ display_progress(progress_state, ++nr_seen);
+
if (have_duplicate_entry(oid, exclude, &index_pos))
return 0;
create_object_entry(oid, type, pack_name_hash(name),
exclude, name && no_try_delta(name),
index_pos, found_pack, found_offset);
-
- display_progress(progress_state, nr_result);
return 1;
}
{
uint32_t index_pos;
+ display_progress(progress_state, ++nr_seen);
+
if (have_duplicate_entry(oid, 0, &index_pos))
return 0;
return 0;
create_object_entry(oid, type, name_hash, 0, 0, index_pos, pack, offset);
-
- display_progress(progress_state, nr_result);
return 1;
}
uint32_t i;
struct object_entry **sorted_by_offset;
+ if (progress)
+ progress_state = start_progress(_("Counting objects"),
+ to_pack.nr_objects);
+
sorted_by_offset = xcalloc(to_pack.nr_objects, sizeof(struct object_entry *));
for (i = 0; i < to_pack.nr_objects; i++)
sorted_by_offset[i] = to_pack.objects + i;
check_object(entry);
if (big_file_threshold < entry->size)
entry->no_try_delta = 1;
+ display_progress(progress_state, i + 1);
}
+ stop_progress(&progress_state);
/*
* This must happen in a second pass, since we rely on the delta
struct object_id oid;
struct object *o;
- if (!p->pack_local || p->pack_keep)
+ if (!p->pack_local || p->pack_keep || p->pack_keep_in_core)
continue;
if (open_pack_index(p))
die("cannot open pack index");
get_packed_git(the_repository);
while (p) {
- if ((!p->pack_local || p->pack_keep) &&
+ if ((!p->pack_local || p->pack_keep ||
+ p->pack_keep_in_core) &&
find_pack_entry_one(oid->hash, p)) {
last_found = p;
return 1;
struct object_id oid;
for (p = get_packed_git(the_repository); p; p = p->next) {
- if (!p->pack_local || p->pack_keep)
+ if (!p->pack_local || p->pack_keep || p->pack_keep_in_core)
continue;
if (open_pack_index(p))
{
return pack_to_stdout &&
allow_ofs_delta &&
- !ignore_packed_keep &&
+ !ignore_packed_keep_on_disk &&
+ !ignore_packed_keep_in_core &&
(!local || !have_non_local_packs) &&
!incremental;
}
oid_array_clear(&recent_objects);
}
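+ /*
+  * Mark every local pack whose basename matches one of the names given
+  * via --keep-pack as kept in core, and remember that such packs exist.
+  */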
+ static void add_extra_kept_packs(const struct string_list *names)
+ {
+ struct packed_git *p;
+
+ if (!names->nr)
+ return;
+
+ for (p = get_packed_git(the_repository); p; p = p->next) {
+ const char *name = basename(p->pack_name);
+ int i;
+
+ if (!p->pack_local)
+ continue;
+
+ for (i = 0; i < names->nr; i++)
+ if (!fspathcmp(name, names->items[i].string))
+ break;
+
+ if (i < names->nr) {
+ p->pack_keep_in_core = 1;
+ ignore_packed_keep_in_core = 1;
+ continue;
+ }
+ }
+ }
+
static int option_parse_index_version(const struct option *opt,
const char *arg, int unset)
{
struct argv_array rp = ARGV_ARRAY_INIT;
int rev_list_unpacked = 0, rev_list_all = 0, rev_list_reflog = 0;
int rev_list_index = 0;
+ struct string_list keep_pack_list = STRING_LIST_INIT_NODUP;
struct option pack_objects_options[] = {
OPT_SET_INT('q', "quiet", &progress,
N_("do not show progress meter"), 0),
N_("create thin packs")),
OPT_BOOL(0, "shallow", &shallow,
N_("create packs suitable for shallow fetches")),
- OPT_BOOL(0, "honor-pack-keep", &ignore_packed_keep,
+ OPT_BOOL(0, "honor-pack-keep", &ignore_packed_keep_on_disk,
N_("ignore packs that have companion .keep file")),
+ OPT_STRING_LIST(0, "keep-pack", &keep_pack_list, N_("name"),
+ N_("ignore this pack")),
OPT_INTEGER(0, "compression", &pack_compression_level,
N_("pack compression level")),
OPT_SET_INT(0, "keep-true-parents", &grafts_replace_parents,
if (progress && all_progress_implied)
progress = 2;
- if (ignore_packed_keep) {
+ add_extra_kept_packs(&keep_pack_list);
+ if (ignore_packed_keep_on_disk) {
struct packed_git *p;
for (p = get_packed_git(the_repository); p; p = p->next)
if (p->pack_local && p->pack_keep)
break;
if (!p) /* no keep-able packs found */
- ignore_packed_keep = 0;
+ ignore_packed_keep_on_disk = 0;
}
if (local) {
/*
- * unlike ignore_packed_keep above, we do not want to
- * unset "local" based on looking at packs, as it
- * also covers non-local objects
+ * unlike ignore_packed_keep_on_disk above, we do not
+ * want to unset "local" based on looking at packs, as
+ * it also covers non-local objects
*/
struct packed_git *p;
for (p = get_packed_git(the_repository); p; p = p->next) {
}
if (progress)
- progress_state = start_progress(_("Counting objects"), 0);
+ progress_state = start_progress(_("Enumerating objects"), 0);
if (!use_internal_rev_list)
read_object_list_from_stdin();
else {
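Together with the `nr_seen` counters above, this changes the user-visible progress output roughly from

    Counting objects: 123456, done.

to

    Enumerating objects: 123456, done.
    Counting objects: 100% (123456/123456), done.

(the figures shown are illustrative).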
HAVE_GETDELIM = YesPlease
SANE_TEXT_GREP=-a
FREAD_READS_DIRECTORIES = UnfortunatelyYes
+ BASIC_CFLAGS += -DHAVE_SYSINFO
+ PROCFS_EXECUTABLE_PATH = /proc/self/exe
endif
ifeq ($(uname_S),GNU/kFreeBSD)
HAVE_ALLOCA_H = YesPlease
BASIC_CFLAGS += -DPROTECT_HFS_DEFAULT=1
HAVE_BSD_SYSCTL = YesPlease
FREAD_READS_DIRECTORIES = UnfortunatelyYes
+ HAVE_NS_GET_EXECUTABLE_PATH = YesPlease
endif
ifeq ($(uname_S),SunOS)
NEEDS_SOCKET = YesPlease
HAVE_PATHS_H = YesPlease
GMTIME_UNRELIABLE_ERRORS = UnfortunatelyYes
HAVE_BSD_SYSCTL = YesPlease
+ HAVE_BSD_KERN_PROC_SYSCTL = YesPlease
PAGER_ENV = LESS=FRX LV=-c MORE=FRX
FREAD_READS_DIRECTORIES = UnfortunatelyYes
endif
BASIC_LDFLAGS += -L/usr/local/lib
HAVE_PATHS_H = YesPlease
HAVE_BSD_SYSCTL = YesPlease
+ HAVE_BSD_KERN_PROC_SYSCTL = YesPlease
+ PROCFS_EXECUTABLE_PATH = /proc/curproc/file
endif
ifeq ($(uname_S),MirBSD)
NO_STRCASESTR = YesPlease
USE_ST_TIMESPEC = YesPlease
HAVE_PATHS_H = YesPlease
HAVE_BSD_SYSCTL = YesPlease
+ HAVE_BSD_KERN_PROC_SYSCTL = YesPlease
+ PROCFS_EXECUTABLE_PATH = /proc/curproc/exe
endif
ifeq ($(uname_S),AIX)
DEFAULT_PAGER = more
SNPRINTF_RETURNS_BOGUS = YesPlease
NO_SVN_TESTS = YesPlease
RUNTIME_PREFIX = YesPlease
+ HAVE_WPGMPTR = YesWeDo
NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
NO_NSEC = YesPlease
USE_WIN32_MMAP = YesPlease
NO_SVN_TESTS = YesPlease
NO_PERL_MAKEMAKER = YesPlease
RUNTIME_PREFIX = YesPlease
+ HAVE_WPGMPTR = YesWeDo
NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
NO_NSEC = YesPlease
USE_WIN32_MMAP = YesPlease
#include <openssl/err.h>
#endif
+ #ifdef HAVE_SYSINFO
+ # include <sys/sysinfo.h>
+ #endif
+
/* On most systems <netdb.h> would have given us this, but
* not on some systems (e.g. z/OS).
*/
extern void set_die_is_recursing_routine(int (*routine)(void));
extern int starts_with(const char *str, const char *prefix);
+extern int istarts_with(const char *str, const char *prefix);
/*
* If the string "str" begins with the string found in "prefix", return 1.
#ifndef OBJECT_STORE_H
#define OBJECT_STORE_H
+#include "oidmap.h"
+
struct alternate_object_database {
struct alternate_object_database *next;
int pack_fd;
unsigned pack_local:1,
pack_keep:1,
+ pack_keep_in_core:1,
freshened:1,
do_not_close:1,
pack_promisor:1;
struct alternate_object_database *alt_odb_list;
struct alternate_object_database **alt_odb_tail;
+ /*
+ * Objects that should be substituted by other objects
+ * (see git-replace(1)).
+ */
+ struct oidmap *replace_map;
+
/*
* private data
*