The old code allowed many references to be efficiently added to a
single directory, because it just appended the references to the
containing directory unsorted without doing any searching (and
therefore without requiring any intermediate sorting). But the old
code was inefficient when a large number of subdirectories were added
to a directory, because the directory always had to be searched to see
if the new subdirectory already existed, and this search required the
directory to be sorted first. The same sort and search were repeated
for every new subdirectory, so the total time scaled like O(N^2), where
N is the number of subdirectories within a single directory.
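For illustration, the old pattern amounted to something like the
following simplified model (plain qsort()/bsearch() over an array of
names, with capacity management omitted; the function and variable
names are illustrative, not the actual ref_cache code):

    #include <stdlib.h>
    #include <string.h>

    static int cmp_name(const void *a, const void *b)
    {
        return strcmp(*(const char *const *)a, *(const char *const *)b);
    }

    /* names[] is assumed to have room for one more entry. */
    static int old_style_add_subdir(const char **names, int *nr,
                                    const char *name)
    {
        /* sort before every lookup: repeated for each of N additions */
        qsort(names, *nr, sizeof(*names), cmp_name);
        if (bsearch(&name, names, *nr, sizeof(*names), cmp_name))
            return 0;              /* subdirectory already exists */
        names[(*nr)++] = name;     /* append, leaving the array unsorted... */
        return 1;                  /* ...so the next addition must sort again */
    }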
In practice, references are often added to the ref_cache in
lexicographic order, for example when reading the packed-refs file.
So build some intelligence into add_entry_to_dir() to optimize for the
case of references and/or subdirectories being added in lexicographic
order: if the existing entries were already sorted, and the new entry
comes after the last existing entry, then adjust ref_dir::sorted to
reflect the fact that the ref_dir is still sorted.
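A minimal sketch of that check, assuming a simplified ref_dir layout
(the real git structures and surrounding helpers differ in detail; only
the ref_dir::sorted bookkeeping described above is shown):

    #include <string.h>

    struct ref_entry {
        const char *name;
    };

    struct ref_dir {
        int nr;          /* number of entries */
        int sorted;      /* entries[0..sorted-1] are known to be sorted */
        struct ref_entry **entries;
    };

    /* dir->entries[] is assumed to have room for one more entry. */
    static void add_entry_to_dir(struct ref_dir *dir, struct ref_entry *entry)
    {
        dir->entries[dir->nr++] = entry;

        /*
         * If the entries before this append were already sorted and the
         * new entry sorts after the previous last one, the directory is
         * still sorted; record that so a later lookup can skip the sort.
         */
        if (dir->nr == 1 ||
            (dir->nr == dir->sorted + 1 &&
             strcmp(dir->entries[dir->nr - 2]->name,
                    dir->entries[dir->nr - 1]->name) < 0))
            dir->sorted = dir->nr;
    }

When the packed-refs file supplies references in lexicographic order,
this keeps sorted equal to nr throughout, so the first lookup afterwards
no longer has to sort the directory.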
Thanks to Peff for pointing out the performance regression that
inspired this change.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>