   1/*
   2  This is a version (aka dlmalloc) of malloc/free/realloc written by
   3  Doug Lea and released to the public domain, as explained at
   4  http://creativecommons.org/licenses/publicdomain.  Send questions,
   5  comments, complaints, performance data, etc to dl@cs.oswego.edu
   6
   7* Version pre-2.8.4 Mon Nov 27 11:22:37 2006    (dl at gee)
   8
   9   Note: There may be an updated version of this malloc obtainable at
  10           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
  11         Check before installing!
  12
  13* Quickstart
  14
  15  This library is all in one file to simplify the most common usage:
  16  ftp it, compile it (-O3), and link it into another program. All of
  17  the compile-time options default to reasonable values for use on
  18  most platforms.  You might later want to step through various
  19  compile-time and dynamic tuning options.
  20
  21  For convenience, an include file for code using this malloc is at:
  22     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
  23  You don't really need this .h file unless you call functions not
  24  defined in your system include files.  The .h file contains only the
  25  excerpts from this file needed for using this malloc on ANSI C/C++
  26  systems, so long as you haven't changed compile-time options about
  27  naming and tuning parameters.  If you do, then you can create your
  28  own malloc.h that does include all settings by cutting at the point
  29  indicated below. Note that you may already by default be using a C
  30  library containing a malloc that is based on some version of this
  31  malloc (for example in linux). You might still want to use the one
  32  in this file to customize settings or to avoid overheads associated
  33  with library versions.
  34
  35* Vital statistics:
  36
  37  Supported pointer/size_t representation:       4 or 8 bytes
  38       size_t MUST be an unsigned type of the same width as
  39       pointers. (If you are using an ancient system that declares
  40       size_t as a signed type, or need it to be a different width
  41       than pointers, you can use a previous release of this malloc
  42       (e.g. 2.7.2) supporting these.)
  43
  44  Alignment:                                     8 bytes (default)
  45       This suffices for nearly all current machines and C compilers.
  46       However, you can define MALLOC_ALIGNMENT to be wider than this
  47       if necessary (up to 128 bytes), at the expense of using more space.
  48
  49  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
  50                                          8 or 16 bytes (if 8byte sizes)
  51       Each malloced chunk has a hidden word of overhead holding size
  52       and status information, and additional cross-check word
  53       if FOOTERS is defined.
  54
  55  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
  56                          8-byte ptrs:  32 bytes    (including overhead)
  57
  58       Even a request for zero bytes (i.e., malloc(0)) returns a
  59       pointer to something of the minimum allocatable size.
  60       The maximum overhead wastage (i.e., the number of extra bytes
  61       allocated beyond what was requested in malloc) is less than or equal
  62       to the minimum size, except for requests >= mmap_threshold that
  63       are serviced via mmap(), where the worst case wastage is about
  64       32 bytes plus the remainder from a system page (the minimal
  65       mmap unit); typically 4096 or 8192 bytes.
  66
  67  Security: static-safe; optionally more or less
  68       The "security" of malloc refers to the ability of malicious
  69       code to accentuate the effects of errors (for example, freeing
  70       space that is not currently malloc'ed or overwriting past the
  71       ends of chunks) in code that calls malloc.  This malloc
  72       guarantees not to modify any memory locations below the base of
  73       heap, i.e., static variables, even in the presence of usage
  74       errors.  The routines additionally detect most improper frees
  75       and reallocs.  All this holds as long as the static bookkeeping
  76       for malloc itself is not corrupted by some other means.  This
  77       is only one aspect of security -- these checks do not, and
  78       cannot, detect all possible programming errors.
  79
  80       If FOOTERS is defined nonzero, then each allocated chunk
  81       carries an additional check word to verify that it was malloced
  82       from its space.  These check words are the same within each
  83       execution of a program using malloc, but differ across
  84       executions, so externally crafted fake chunks cannot be
  85       freed. This improves security by rejecting frees/reallocs that
  86       could corrupt heap memory, in addition to the always-on checks
  87       preventing writes to statics.  This may further improve
  88       security at the expense of time and space overhead.  (Note that
  89       FOOTERS may also be worth using with MSPACES.)
  90
  91       By default detected errors cause the program to abort (calling
  92       "abort()"). You can override this to instead proceed past
  93       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
  94       has no effect, and a malloc that encounters a bad address
  95       caused by user overwrites will ignore the bad address by
  96       dropping pointers and indices to all known memory. This may
  97       be appropriate for programs that should continue if at all
  98       possible in the face of programming errors, although they may
  99       run out of memory because dropped memory is never reclaimed.
 100
 101       If you don't like either of these options, you can define
 102       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
 103       else. And if you are sure that your program using malloc has
 104       no errors or vulnerabilities, you can define INSECURE to 1,
 105       which might (or might not) provide a small performance improvement.
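
       For example (an illustrative sketch only; my_log_heap_error is a
       hypothetical function you would supply), a build that prefers to log
       and continue could override the hooks that, as defined later in this
       file, take the internal malloc state m and, for usage errors, the
       offending address p:

         #define CORRUPTION_ERROR_ACTION(m)  my_log_heap_error("corrupt heap", 0)
         #define USAGE_ERROR_ACTION(m, p)    my_log_heap_error("bad free/realloc", (p))

       Continuing after a detected corruption leaves the heap in an
       unreliable state, so aborting remains the safer default.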
 106
 107  Thread-safety: NOT thread-safe unless USE_LOCKS defined
 108       When USE_LOCKS is defined, each public call to malloc, free,
 109       etc is surrounded with either a pthread mutex or a win32
 110       spinlock (depending on WIN32). This is not especially fast, and
 111       can be a major bottleneck.  It is designed only to provide
 112       minimal protection in concurrent environments, and to provide a
 113       basis for extensions.  If you are using malloc in a concurrent
 114       program, consider instead using nedmalloc
 115       (http://www.nedprod.com/programs/portable/nedmalloc/) or
 116       ptmalloc (See http://www.malloc.de), which are derived
 117       from versions of this malloc.
 118
 119  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
 120       This malloc can use unix sbrk or any emulation (invoked using
 121       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
 122       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
 123       memory.  On most unix systems, it tends to work best if both
 124       MORECORE and MMAP are enabled.  On Win32, it uses emulations
 125       based on VirtualAlloc. It also uses common C library functions
 126       like memset.
 127
 128  Compliance: I believe it is compliant with the Single Unix Specification
 129       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
 130       others as well.
 131
 132* Overview of algorithms
 133
 134  This is not the fastest, most space-conserving, most portable, or
 135  most tunable malloc ever written. However it is among the fastest
 136  while also being among the most space-conserving, portable and
 137  tunable.  Consistent balance across these factors results in a good
 138  general-purpose allocator for malloc-intensive programs.
 139
 140  In most ways, this malloc is a best-fit allocator. Generally, it
 141  chooses the best-fitting existing chunk for a request, with ties
 142  broken in approximately least-recently-used order. (This strategy
 143  normally maintains low fragmentation.) However, for requests less
 144  than 256 bytes, it deviates from best-fit when there is not an
 145  exactly fitting available chunk by preferring to use space adjacent
 146  to that used for the previous small request, as well as by breaking
 147  ties in approximately most-recently-used order. (These enhance
 148  locality of series of small allocations.)  And for very large requests
 149  (>= 256Kb by default), it relies on system memory mapping
 150  facilities, if supported.  (This helps avoid carrying around and
 151  possibly fragmenting memory used only for large chunks.)
 152
 153  All operations (except malloc_stats and mallinfo) have execution
 154  times that are bounded by a constant factor of the number of bits in
 155  a size_t, not counting any clearing in calloc or copying in realloc,
 156  or actions surrounding MORECORE and MMAP that have times
 157  proportional to the number of non-contiguous regions returned by
 158  system allocation routines, which is often just 1. In real-time
 159  applications, you can optionally suppress segment traversals using
 160  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
 161  system allocators return non-contiguous spaces, at the typical
 162  expense of carrying around more memory and increased fragmentation.
 163
 164  The implementation is not very modular and seriously overuses
 165  macros. Perhaps someday all C compilers will do as good a job
 166  inlining modular code as can now be done by brute-force expansion,
 167  but for now, enough of them seem not to.
 168
 169  Some compilers issue a lot of warnings about code that is
 170  dead/unreachable only on some platforms, and also about intentional
 171  uses of negation on unsigned types. All known cases of each can be
 172  ignored.
 173
 174  For a longer but out of date high-level description, see
 175     http://gee.cs.oswego.edu/dl/html/malloc.html
 176
 177* MSPACES
 178  If MSPACES is defined, then in addition to malloc, free, etc.,
 179  this file also defines mspace_malloc, mspace_free, etc. These
 180  are versions of malloc routines that take an "mspace" argument
 181  obtained using create_mspace, to control all internal bookkeeping.
 182  If ONLY_MSPACES is defined, only these versions are compiled.
 183  So if you would like to use this allocator for only some allocations,
 184  and your system malloc for others, you can compile with
 185  ONLY_MSPACES and then do something like...
 186    static mspace mymspace = create_mspace(0,0); // for example
 187    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
 188
 189  (Note: If you only need one instance of an mspace, you can instead
 190  use "USE_DL_PREFIX" to relabel the global malloc.)
 191
 192  You can similarly create thread-local allocators by storing
 193  mspaces as thread-locals. For example:
 194    static __thread mspace tlms = 0;
 195    void*  tlmalloc(size_t bytes) {
 196      if (tlms == 0) tlms = create_mspace(0, 0);
 197      return mspace_malloc(tlms, bytes);
 198    }
 199    void  tlfree(void* mem) { mspace_free(tlms, mem); }
 200
 201  Unless FOOTERS is defined, each mspace is completely independent.
 202  You cannot allocate from one and free to another (although
 203  conformance is only weakly checked, so usage errors are not always
 204  caught). If FOOTERS is defined, then each chunk carries around a tag
 205  indicating its originating mspace, and frees are directed to their
 206  originating spaces.
 207
 208 -------------------------  Compile-time options ---------------------------
 209
 210Be careful in setting #define values for numerical constants of type
 211size_t. On some systems, literal values are not automatically extended
 212to size_t precision unless they are explicitly cast. You can also
 213use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
 214
 215WIN32                    default: defined if _WIN32 defined
 216  Defining WIN32 sets up defaults for MS environment and compilers.
 217  Otherwise defaults are for unix. Beware that there seem to be some
 218  cases where this malloc might not be a pure drop-in replacement for
 219  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
 220  SetDIBits()) may be due to bugs in some video driver implementations
 221  when pixel buffers are malloc()ed, and the region spans more than
 222  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
 223  default granularity, pixel buffers may straddle virtual allocation
 224  regions more often than when using the Microsoft allocator.  You can
 225  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
 226  buffers rather than using malloc().  If this is not possible,
 227  recompile this malloc with a larger DEFAULT_GRANULARITY.
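
  As an illustrative sketch only (not part of this malloc), such a buffer
  could be obtained and released directly from the OS:

    #include <windows.h>
    // hypothetical helpers for pixel buffers, bypassing malloc entirely
    void* alloc_pixel_buffer(size_t bytes) {
      return VirtualAlloc(NULL, bytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    }
    void free_pixel_buffer(void* p) {
      if (p != NULL)
        VirtualFree(p, 0, MEM_RELEASE);   // size must be 0 with MEM_RELEASE
    }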
 228
 229MALLOC_ALIGNMENT         default: (size_t)8
 230  Controls the minimum alignment for malloc'ed chunks.  It must be a
 231  power of two and at least 8, even on machines for which smaller
 232  alignments would suffice. It may be defined as larger than this
 233  though. Note however that code and data structures are optimized for
 234  the case of 8-byte alignment.
 235
 236MSPACES                  default: 0 (false)
 237  If true, compile in support for independent allocation spaces.
 238  This is only supported if HAVE_MMAP is true.
 239
 240ONLY_MSPACES             default: 0 (false)
 241  If true, only compile in mspace versions, not regular versions.
 242
 243USE_LOCKS                default: 0 (false)
 244  Causes each call to each public routine to be surrounded with
 245  pthread or WIN32 mutex lock/unlock. (If set true, this can be
 246  overridden on a per-mspace basis for mspace versions.) If set to a
 247  non-zero value other than 1, locks are used, but their
 248  implementation is left out, so lock functions must be supplied manually.
 249
 250USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and on x86 using gcc or MSC
 251  If true, uses custom spin locks for locking. This is currently
 252  supported only for x86 platforms using gcc or recent MS compilers.
 253  Otherwise, posix locks or win32 critical sections are used.
 254
 255FOOTERS                  default: 0
 256  If true, provide extra checking and dispatching by placing
 257  information in the footers of allocated chunks. This adds
 258  space and time overhead.
 259
 260INSECURE                 default: 0
 261  If true, omit checks for usage errors and heap space overwrites.
 262
 263USE_DL_PREFIX            default: NOT defined
 264  Causes compiler to prefix all public routines with the string 'dl'.
 265  This can be useful when you only want to use this malloc in one part
 266  of a program, using your regular system malloc elsewhere.
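
  For example (a sketch, assuming this file is built with -DUSE_DL_PREFIX),
  the two allocators can coexist as long as each pointer is released by the
  allocator that produced it:

    #include <stdlib.h>           // system malloc/free
    void* dlmalloc(size_t);       // prefixed routines from this file
    void  dlfree(void*);

    void example(void) {
      void* a = malloc(100);      // system allocator
      void* b = dlmalloc(100);    // this allocator
      dlfree(b);                  // return each pointer to its own allocator
      free(a);
    }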
 267
 268ABORT                    default: defined as abort()
 269  Defines how to abort on failed checks.  On most systems, a failed
 270  check cannot die with an "assert" or even print an informative
 271  message, because the underlying print routines in turn call malloc,
 272  which will fail again.  Generally, the best policy is to simply call
 273  abort(). It's not very useful to do more than this because many
 274  errors due to overwriting will show up as address faults (null, odd
 275  addresses etc) rather than malloc-triggered checks, so will also
 276  abort.  Also, most compilers know that abort() does not return, so
 277  can better optimize code conditionally calling it.
 278
 279PROCEED_ON_ERROR           default: defined as 0 (false)
 280  Controls whether detected bad addresses cause them to be bypassed
 281  rather than aborting. If set, detected bad arguments to free and
 282  realloc are ignored. And all bookkeeping information is zeroed out
 283  upon a detected overwrite of freed heap space, thus losing the
 284  ability to ever return it from malloc again, but enabling the
 285  application to proceed. If PROCEED_ON_ERROR is defined, the
 286  static variable malloc_corruption_error_count is compiled in
 287  and can be examined to see if errors have occurred. This option
 288  generates slower code than the default abort policy.
 289
 290DEBUG                    default: NOT defined
 291  The DEBUG setting is mainly intended for people trying to modify
 292  this code or diagnose problems when porting to new platforms.
 293  However, it may also be able to better isolate user errors than just
 294  using runtime checks.  The assertions in the check routines spell
 295  out in more detail the assumptions and invariants underlying the
 296  algorithms.  The checking is fairly extensive, and will slow down
 297  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
 298  set will attempt to check every non-mmapped allocated and free chunk
 299  in the course of computing the summaries.
 300
 301ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
 302  Debugging assertion failures can be nearly impossible if your
 303  version of the assert macro causes malloc to be called, which will
 304  lead to a cascade of further failures, blowing the runtime stack.
 305  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
 306  which will usually make debugging easier.
 307
 308MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
 309  The action to take before "return 0" when malloc fails because no
 310  memory is available.
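
  For example (a sketch; oom_seen is a hypothetical flag you would define),
  a build could record the failure instead of touching errno:

    #define MALLOC_FAILURE_ACTION  oom_seen = 1;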
 311
 312HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
 313  True if this system supports sbrk or an emulation of it.
 314
 315MORECORE                  default: sbrk
 316  The name of the sbrk-style system routine to call to obtain more
 317  memory.  See below for guidance on writing custom MORECORE
 318  functions. The type of the argument to sbrk/MORECORE varies across
 319  systems.  It cannot be size_t, because it supports negative
 320  arguments, so it is normally the signed type of the same width as
 321  size_t (sometimes declared as "intptr_t").  It doesn't much matter
 322  though. Internally, we only call it with arguments less than half
 323  the max value of a size_t, which should work across all reasonable
 324  possibilities, although sometimes generating compiler warnings.
 325
 326MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
 327  If true, take advantage of fact that consecutive calls to MORECORE
 328  with positive arguments always return contiguous increasing
 329  addresses.  This is true of unix sbrk. It does not hurt too much to
 330  set it true anyway, since malloc copes with non-contiguities.
 331  Setting it false when definitely non-contiguous saves time
 332  and possibly wasted space it would take to discover this though.
 333
 334MORECORE_CANNOT_TRIM      default: NOT defined
 335  True if MORECORE cannot release space back to the system when given
 336  negative arguments. This is generally necessary only if you are
 337  using a hand-crafted MORECORE function that cannot handle negative
 338  arguments.
 339
 340NO_SEGMENT_TRAVERSAL       default: 0
 341  If non-zero, suppresses traversals of memory segments
 342  returned by either MORECORE or CALL_MMAP. This disables
 343  merging of contiguous segments, as well as selective release of
 344  unused segments back to the OS, but bounds execution times.
 345
 346HAVE_MMAP                 default: 1 (true)
 347  True if this system supports mmap or an emulation of it.  If so, and
 348  HAVE_MORECORE is not true, MMAP is used for all system
 349  allocation. If set and HAVE_MORECORE is true as well, MMAP is
 350  primarily used to directly allocate very large blocks. It is also
 351  used as a backup strategy in cases where MORECORE fails to provide
 352  space from system. Note: A single call to MUNMAP is assumed to be
 353  able to unmap memory that may have been allocated using multiple calls
 354  to MMAP, so long as they are adjacent.
 355
 356HAVE_MREMAP               default: 1 on linux, else 0
 357  If true realloc() uses mremap() to re-allocate large blocks and
 358  extend or shrink allocation spaces.
 359
 360MMAP_CLEARS               default: 1 except on WINCE.
 361  True if mmap clears memory so calloc doesn't need to. This is true
 362  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
 363
 364USE_BUILTIN_FFS            default: 0 (i.e., not used)
 365  Causes malloc to use the builtin ffs() function to compute indices.
 366  Some compilers may recognize and intrinsify ffs to be faster than the
 367  supplied C version. Also, the case of x86 using gcc is special-cased
 368  to an asm instruction, so is already as fast as it can be, and so
 369  this setting has no effect. Similarly for Win32 under recent MS compilers.
 370  (On most x86s, the asm version is only slightly faster than the C version.)
 371
 372malloc_getpagesize         default: derive from system includes, or 4096.
 373  The system page size. To the extent possible, this malloc manages
 374  memory from the system in page-size units.  This may be (and
 375  usually is) a function rather than a constant. This is ignored
 376  if WIN32, where page size is determined using getSystemInfo during
 377  initialization.
 378
 379USE_DEV_RANDOM             default: 0 (i.e., not used)
 380  Causes malloc to use /dev/random to initialize secure magic seed for
 381  stamping footers. Otherwise, the current time is used.
 382
 383NO_MALLINFO                default: 0
 384  If defined, don't compile "mallinfo". This can be a simple way
 385  of dealing with mismatches between system declarations and
 386  those in this file.
 387
 388MALLINFO_FIELD_TYPE        default: size_t
 389  The type of the fields in the mallinfo struct. This was originally
 390  defined as "int" in SVID etc, but is more usefully defined as
 391  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
 392
 393REALLOC_ZERO_BYTES_FREES    default: not defined
 394  This should be set if a call to realloc with zero bytes should
 395  be the same as a call to free. Some people think it should. Otherwise,
 396  since this malloc returns a unique pointer for malloc(0), so does
 397  realloc(p, 0).
 398
 399LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
 400LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
 401LACKS_STDLIB_H                default: NOT defined unless on WIN32
 402  Define these if your system does not have these header files.
 403  You might need to manually insert some of the declarations they provide.
 404
 405DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
 406                                system_info.dwAllocationGranularity in WIN32,
 407                                otherwise 64K.
 408      Also settable using mallopt(M_GRANULARITY, x)
 409  The unit for allocating and deallocating memory from the system.  On
 410  most systems with contiguous MORECORE, there is no reason to
 411  make this more than a page. However, systems with MMAP tend to
 412  either require or encourage larger granularities.  You can increase
 413  this value to prevent system allocation functions from being called so
 414  often, especially if they are slow.  The value must be at least one
 415  page and must be a power of two.  Setting to 0 causes initialization
 416  to either page size or win32 region size.  (Note: In previous
 417  versions of malloc, the equivalent of this option was called
 418  "TOP_PAD")
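
  For example (a sketch), an mmap-only build with slow system calls might
  request memory in 1MB units:

    #define DEFAULT_GRANULARITY ((size_t)1U * 1024U * 1024U)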
 419
 420DEFAULT_TRIM_THRESHOLD    default: 2MB
 421      Also settable using mallopt(M_TRIM_THRESHOLD, x)
 422  The maximum amount of unused top-most memory to keep before
 423  releasing via malloc_trim in free().  Automatic trimming is mainly
 424  useful in long-lived programs using contiguous MORECORE.  Because
 425  trimming via sbrk can be slow on some systems, and can sometimes be
 426  wasteful (in cases where programs immediately afterward allocate
 427  more large chunks) the value should be high enough so that your
 428  overall system performance would improve by releasing this much
 429  memory.  As a rough guide, you might set to a value close to the
 430  average size of a process (program) running on your system.
 431  Releasing this much memory would allow such a process to run in
 432  memory.  Generally, it is worth tuning trim thresholds when a
 433  program undergoes phases where several large chunks are allocated
 434  and released in ways that can reuse each other's storage, perhaps
 435  mixed with phases where there are no such chunks at all. The trim
 436  value must be greater than page size to have any useful effect.  To
 437  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
 438  some people use of mallocing a huge space and then freeing it at
 439  program startup, in an attempt to reserve system memory, doesn't
 440  have the intended effect under automatic trimming, since that memory
 441  will immediately be returned to the system.
 442
 443DEFAULT_MMAP_THRESHOLD       default: 256K
 444      Also settable using mallopt(M_MMAP_THRESHOLD, x)
 445  The request size threshold for using MMAP to directly service a
 446  request. Requests of at least this size that cannot be allocated
 447  using already-existing space will be serviced via mmap.  (If enough
 448  normal freed space already exists it is used instead.)  Using mmap
 449  segregates relatively large chunks of memory so that they can be
 450  individually obtained and released from the host system. A request
 451  serviced through mmap is never reused by any other request (at least
 452  not directly; the system may just so happen to remap successive
 453  requests to the same locations).  Segregating space in this way has
 454  the benefits that: Mmapped space can always be individually released
 455  back to the system, which helps keep the system level memory demands
 456  of a long-lived program low.  Also, mapped memory doesn't become
 457  `locked' between other chunks, as can happen with normally allocated
 458  chunks, which means that even trimming via malloc_trim would not
 459  release them.  However, it has the disadvantage that the space
 460  cannot be reclaimed, consolidated, and then used to service later
 461  requests, as happens with normal chunks.  The advantages of mmap
 462  nearly always outweigh disadvantages for "large" chunks, but the
 463  value of "large" may vary across systems.  The default is an
 464  empirically derived value that works well in most systems. You can
 465  disable mmap by setting to MAX_SIZE_T.
 466
 467MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
 468  The number of consolidated frees between checks to release
 469  unused segments when freeing. When using non-contiguous segments,
 470  especially with multiple mspaces, checking only for topmost space
 471  doesn't always suffice to trigger trimming. To compensate for this,
 472  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
 473  current number of segments, if greater) try to release unused
 474  segments to the OS when freeing chunks that result in
 475  consolidation. The best value for this parameter is a compromise
 476  between slowing down frees with relatively costly checks that
 477  rarely trigger versus holding on to unused memory. To effectively
 478  disable, set to MAX_SIZE_T. This may lead to a very slight speed
 479  improvement at the expense of carrying around more memory.
 480*/
 481
 482/* Version identifier to allow people to support multiple versions */
 483#ifndef DLMALLOC_VERSION
 484#define DLMALLOC_VERSION 20804
 485#endif /* DLMALLOC_VERSION */
 486
 487#if defined(linux)
 488#define _GNU_SOURCE 1
 489#endif
 490
 491#ifndef WIN32
 492#ifdef _WIN32
 493#define WIN32 1
 494#endif  /* _WIN32 */
 495#ifdef _WIN32_WCE
 496#define LACKS_FCNTL_H
 497#define WIN32 1
 498#endif /* _WIN32_WCE */
 499#endif  /* WIN32 */
 500#ifdef WIN32
 501#define WIN32_LEAN_AND_MEAN
 502#ifndef _WIN32_WINNT
 503#define _WIN32_WINNT 0x403
 504#endif
 505#include <windows.h>
 506#define HAVE_MMAP 1
 507#define HAVE_MORECORE 0
 508#define LACKS_UNISTD_H
 509#define LACKS_SYS_PARAM_H
 510#define LACKS_SYS_MMAN_H
 511#define LACKS_STRING_H
 512#define LACKS_STRINGS_H
 513#define LACKS_SYS_TYPES_H
 514#define LACKS_ERRNO_H
 515#ifndef MALLOC_FAILURE_ACTION
 516#define MALLOC_FAILURE_ACTION
 517#endif /* MALLOC_FAILURE_ACTION */
 518#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
 519#define MMAP_CLEARS 0
 520#else
 521#define MMAP_CLEARS 1
 522#endif /* _WIN32_WCE */
 523#endif  /* WIN32 */
 524
 525#if defined(DARWIN) || defined(_DARWIN)
 526/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
 527#ifndef HAVE_MORECORE
 528#define HAVE_MORECORE 0
 529#define HAVE_MMAP 1
 530/* OSX allocators provide 16 byte alignment */
 531#ifndef MALLOC_ALIGNMENT
 532#define MALLOC_ALIGNMENT ((size_t)16U)
 533#endif
 534#endif  /* HAVE_MORECORE */
 535#endif  /* DARWIN */
 536
 537#ifndef LACKS_SYS_TYPES_H
 538#include <sys/types.h>  /* For size_t */
 539#endif  /* LACKS_SYS_TYPES_H */
 540
 541/* The maximum possible size_t value has all bits set */
 542#define MAX_SIZE_T           (~(size_t)0)
 543
 544#ifndef ONLY_MSPACES
 545#define ONLY_MSPACES 0     /* define to a value */
 546#else
 547#define ONLY_MSPACES 1
 548#endif  /* ONLY_MSPACES */
 549#ifndef MSPACES
 550#if ONLY_MSPACES
 551#define MSPACES 1
 552#else   /* ONLY_MSPACES */
 553#define MSPACES 0
 554#endif  /* ONLY_MSPACES */
 555#endif  /* MSPACES */
 556#ifndef MALLOC_ALIGNMENT
 557#define MALLOC_ALIGNMENT ((size_t)8U)
 558#endif  /* MALLOC_ALIGNMENT */
 559#ifndef FOOTERS
 560#define FOOTERS 0
 561#endif  /* FOOTERS */
 562#ifndef ABORT
 563#define ABORT  abort()
 564#endif  /* ABORT */
 565#ifndef ABORT_ON_ASSERT_FAILURE
 566#define ABORT_ON_ASSERT_FAILURE 1
 567#endif  /* ABORT_ON_ASSERT_FAILURE */
 568#ifndef PROCEED_ON_ERROR
 569#define PROCEED_ON_ERROR 0
 570#endif  /* PROCEED_ON_ERROR */
 571#ifndef USE_LOCKS
 572#define USE_LOCKS 0
 573#endif  /* USE_LOCKS */
 574#ifndef USE_SPIN_LOCKS
 575#if USE_LOCKS && (defined(__GNUC__) && ((defined(__i386__) || defined(__x86_64__)))) || (defined(_MSC_VER) && _MSC_VER>=1310)
 576#define USE_SPIN_LOCKS 1
 577#else
 578#define USE_SPIN_LOCKS 0
 579#endif /* USE_LOCKS && ... */
 580#endif /* USE_SPIN_LOCKS */
 581#ifndef INSECURE
 582#define INSECURE 0
 583#endif  /* INSECURE */
 584#ifndef HAVE_MMAP
 585#define HAVE_MMAP 1
 586#endif  /* HAVE_MMAP */
 587#ifndef MMAP_CLEARS
 588#define MMAP_CLEARS 1
 589#endif  /* MMAP_CLEARS */
 590#ifndef HAVE_MREMAP
 591#ifdef linux
 592#define HAVE_MREMAP 1
 593#else   /* linux */
 594#define HAVE_MREMAP 0
 595#endif  /* linux */
 596#endif  /* HAVE_MREMAP */
 597#ifndef MALLOC_FAILURE_ACTION
 598#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
 599#endif  /* MALLOC_FAILURE_ACTION */
 600#ifndef HAVE_MORECORE
 601#if ONLY_MSPACES
 602#define HAVE_MORECORE 0
 603#else   /* ONLY_MSPACES */
 604#define HAVE_MORECORE 1
 605#endif  /* ONLY_MSPACES */
 606#endif  /* HAVE_MORECORE */
 607#if !HAVE_MORECORE
 608#define MORECORE_CONTIGUOUS 0
 609#else   /* !HAVE_MORECORE */
 610#define MORECORE_DEFAULT sbrk
 611#ifndef MORECORE_CONTIGUOUS
 612#define MORECORE_CONTIGUOUS 1
 613#endif  /* MORECORE_CONTIGUOUS */
 614#endif  /* HAVE_MORECORE */
 615#ifndef DEFAULT_GRANULARITY
 616#if (MORECORE_CONTIGUOUS || defined(WIN32))
 617#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
 618#else   /* MORECORE_CONTIGUOUS */
 619#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
 620#endif  /* MORECORE_CONTIGUOUS */
 621#endif  /* DEFAULT_GRANULARITY */
 622#ifndef DEFAULT_TRIM_THRESHOLD
 623#ifndef MORECORE_CANNOT_TRIM
 624#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
 625#else   /* MORECORE_CANNOT_TRIM */
 626#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
 627#endif  /* MORECORE_CANNOT_TRIM */
 628#endif  /* DEFAULT_TRIM_THRESHOLD */
 629#ifndef DEFAULT_MMAP_THRESHOLD
 630#if HAVE_MMAP
 631#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
 632#else   /* HAVE_MMAP */
 633#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
 634#endif  /* HAVE_MMAP */
 635#endif  /* DEFAULT_MMAP_THRESHOLD */
 636#ifndef MAX_RELEASE_CHECK_RATE
 637#if HAVE_MMAP
 638#define MAX_RELEASE_CHECK_RATE 4095
 639#else
 640#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
 641#endif /* HAVE_MMAP */
 642#endif /* MAX_RELEASE_CHECK_RATE */
 643#ifndef USE_BUILTIN_FFS
 644#define USE_BUILTIN_FFS 0
 645#endif  /* USE_BUILTIN_FFS */
 646#ifndef USE_DEV_RANDOM
 647#define USE_DEV_RANDOM 0
 648#endif  /* USE_DEV_RANDOM */
 649#ifndef NO_MALLINFO
 650#define NO_MALLINFO 0
 651#endif  /* NO_MALLINFO */
 652#ifndef MALLINFO_FIELD_TYPE
 653#define MALLINFO_FIELD_TYPE size_t
 654#endif  /* MALLINFO_FIELD_TYPE */
 655#ifndef NO_SEGMENT_TRAVERSAL
 656#define NO_SEGMENT_TRAVERSAL 0
 657#endif /* NO_SEGMENT_TRAVERSAL */
 658
 659/*
 660  mallopt tuning options.  SVID/XPG defines four standard parameter
 661  numbers for mallopt, normally defined in malloc.h.  None of these
 662  are used in this malloc, so setting them has no effect. But this
 663  malloc does support the following options.
 664*/
 665
 666#define M_TRIM_THRESHOLD     (-1)
 667#define M_GRANULARITY        (-2)
 668#define M_MMAP_THRESHOLD     (-3)
 669
 670/* ------------------------ Mallinfo declarations ------------------------ */
 671
 672#if !NO_MALLINFO
 673/*
 674  This version of malloc supports the standard SVID/XPG mallinfo
 675  routine that returns a struct containing usage properties and
 676  statistics. It should work on any system that has a
 677  /usr/include/malloc.h defining struct mallinfo.  The main
 678  declaration needed is the mallinfo struct that is returned (by-copy)
 679  by mallinfo().  The mallinfo struct contains a bunch of fields that
 680  are not even meaningful in this version of malloc.  These fields are
 681  instead filled by mallinfo() with other numbers that might be of
 682  interest.
 683
 684  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
 685  /usr/include/malloc.h file that includes a declaration of struct
 686  mallinfo.  If so, it is included; else a compliant version is
 687  declared below.  These must be precisely the same for mallinfo() to
 688  work.  The original SVID version of this struct, defined on most
 689  systems with mallinfo, declares all fields as ints. But some others
 690  define them as unsigned long. If your system defines the fields using a
 691  type of different width than listed here, you MUST #include your
 692  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
 693*/
 694
 695/* #define HAVE_USR_INCLUDE_MALLOC_H */
 696
 697#ifdef HAVE_USR_INCLUDE_MALLOC_H
 698#include "/usr/include/malloc.h"
 699#else /* HAVE_USR_INCLUDE_MALLOC_H */
 700#ifndef STRUCT_MALLINFO_DECLARED
 701#define STRUCT_MALLINFO_DECLARED 1
 702struct mallinfo {
 703  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
 704  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
 705  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
 706  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
 707  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
 708  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
 709  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
 710  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
 711  MALLINFO_FIELD_TYPE fordblks; /* total free space */
 712  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
 713};
 714#endif /* STRUCT_MALLINFO_DECLARED */
 715#endif /* HAVE_USR_INCLUDE_MALLOC_H */
 716#endif /* NO_MALLINFO */
 717
 718/*
 719  Try to persuade compilers to inline. The most critical functions for
 720  inlining are defined as macros, so these aren't used for them.
 721*/
 722
 723#ifndef FORCEINLINE
 724  #if defined(__GNUC__)
 725#define FORCEINLINE __inline __attribute__ ((always_inline))
 726  #elif defined(_MSC_VER)
 727    #define FORCEINLINE __forceinline
 728  #endif
 729#endif
 730#ifndef NOINLINE
 731  #if defined(__GNUC__)
 732    #define NOINLINE __attribute__ ((noinline))
 733  #elif defined(_MSC_VER)
 734    #define NOINLINE __declspec(noinline)
 735  #else
 736    #define NOINLINE
 737  #endif
 738#endif
 739
 740#ifdef __cplusplus
 741extern "C" {
 742#ifndef FORCEINLINE
 743 #define FORCEINLINE inline
 744#endif
 745#endif /* __cplusplus */
 746#ifndef FORCEINLINE
 747 #define FORCEINLINE
 748#endif
 749
 750#if !ONLY_MSPACES
 751
 752/* ------------------- Declarations of public routines ------------------- */
 753
 754#ifndef USE_DL_PREFIX
 755#define dlcalloc               calloc
 756#define dlfree                 free
 757#define dlmalloc               malloc
 758#define dlmemalign             memalign
 759#define dlrealloc              realloc
 760#define dlvalloc               valloc
 761#define dlpvalloc              pvalloc
 762#define dlmallinfo             mallinfo
 763#define dlmallopt              mallopt
 764#define dlmalloc_trim          malloc_trim
 765#define dlmalloc_stats         malloc_stats
 766#define dlmalloc_usable_size   malloc_usable_size
 767#define dlmalloc_footprint     malloc_footprint
 768#define dlmalloc_max_footprint malloc_max_footprint
 769#define dlindependent_calloc   independent_calloc
 770#define dlindependent_comalloc independent_comalloc
 771#endif /* USE_DL_PREFIX */
 772
 773
 774/*
 775  malloc(size_t n)
 776  Returns a pointer to a newly allocated chunk of at least n bytes, or
 777  null if no space is available, in which case errno is set to ENOMEM
 778  on ANSI C systems.
 779
 780  If n is zero, malloc returns a minimum-sized chunk. (The minimum
 781  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
 782  systems.)  Note that size_t is an unsigned type, so calls with
 783  arguments that would be negative if signed are interpreted as
 784  requests for huge amounts of space, which will often fail. The
 785  maximum supported value of n differs across systems, but is in all
 786  cases less than the maximum representable value of a size_t.
 787*/
 788void* dlmalloc(size_t);
 789
 790/*
 791  free(void* p)
 792  Releases the chunk of memory pointed to by p, that had been previously
 793  allocated using malloc or a related routine such as realloc.
 794  It has no effect if p is null. If p was not malloced or already
 795  freed, free(p) will by default cause the current program to abort.
 796*/
 797void  dlfree(void*);
 798
 799/*
 800  calloc(size_t n_elements, size_t element_size);
 801  Returns a pointer to n_elements * element_size bytes, with all locations
 802  set to zero.
 803*/
 804void* dlcalloc(size_t, size_t);
 805
 806/*
 807  realloc(void* p, size_t n)
 808  Returns a pointer to a chunk of size n that contains the same data
 809  as does chunk p up to the minimum of (n, p's size) bytes, or null
 810  if no space is available.
 811
 812  The returned pointer may or may not be the same as p. The algorithm
 813  prefers extending p in most cases when possible, otherwise it
 814  employs the equivalent of a malloc-copy-free sequence.
 815
 816  If p is null, realloc is equivalent to malloc.
 817
 818  If space is not available, realloc returns null, errno is set (if on
 819  ANSI) and p is NOT freed.
 820
 821  if n is for fewer bytes than already held by p, the newly unused
 822  space is lopped off and freed if possible.  realloc with a size
 823  argument of zero (re)allocates a minimum-sized chunk.
 824
 825  The old unix realloc convention of allowing the last-free'd chunk
 826  to be used as an argument to realloc is not supported.
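
  Because a failed realloc leaves p intact, the usual pattern (a minimal
  sketch) assigns to a temporary first:

    void* tmp = realloc(p, n);
    if (tmp == 0) {
      // p is still valid; handle the failure without losing the data
    } else {
      p = tmp;
    }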
 827*/
 828
 829void* dlrealloc(void*, size_t);
 830
 831/*
 832  memalign(size_t alignment, size_t n);
 833  Returns a pointer to a newly allocated chunk of n bytes, aligned
 834  in accord with the alignment argument.
 835
 836  The alignment argument should be a power of two. If the argument is
 837  not a power of two, the nearest greater power is used.
 838  8-byte alignment is guaranteed by normal malloc calls, so don't
 839  bother calling memalign with an argument of 8 or less.
 840
 841  Overreliance on memalign is a sure way to fragment space.
 842*/
 843void* dlmemalign(size_t, size_t);
 844
 845/*
 846  valloc(size_t n);
 847  Equivalent to memalign(pagesize, n), where pagesize is the page
 848  size of the system. If the pagesize is unknown, 4096 is used.
 849*/
 850void* dlvalloc(size_t);
 851
 852/*
 853  mallopt(int parameter_number, int parameter_value)
 854  Sets tunable parameters The format is to provide a
 855  Sets tunable parameters. The format is to provide a
 856  corresponding parameter to the argument value if it can (i.e., so
 857  long as the value is meaningful), and returns 1 if successful else
 858  0.  To work around the fact that mallopt is specified to use int,
 859  not size_t parameters, the value -1 is specially treated as the
 860  maximum unsigned size_t value.
 861
 862  SVID/XPG/ANSI defines four standard param numbers for mallopt,
 863  normally defined in malloc.h.  None of these are used in this malloc,
 864  so setting them has no effect. But this malloc also supports other
 865  options in mallopt. See below for details.  Briefly, supported
 866  parameters are as follows (listed defaults are for "typical"
 867  configurations).
 868
 869  Symbol            param #  default    allowed param values
 870  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
 871  M_GRANULARITY        -2     page size   any power of 2 >= page size
 872  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
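
  For example (a sketch), to route requests of 1MB and larger through mmap
  and to disable automatic trimming:

    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // returns 1 on success, 0 on failure
    mallopt(M_TRIM_THRESHOLD, -1);           // -1 stands in for the maximum size_t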
 873*/
 874int dlmallopt(int, int);
 875
 876/*
 877  malloc_footprint();
 878  Returns the number of bytes obtained from the system.  The total
 879  number of bytes allocated by malloc, realloc etc., is less than this
 880  value. Unlike mallinfo, this function returns only a precomputed
 881  result, so can be called frequently to monitor memory consumption.
 882  Even if locks are otherwise defined, this function does not use them,
 883  so results might not be up to date.
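
  A minimal monitoring sketch (with <stdio.h>):

    size_t sys_bytes = malloc_footprint();   // cheap: returns a precomputed value
    fprintf(stderr, "heap footprint: %lu bytes\n", (unsigned long)sys_bytes);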
 884*/
 885size_t dlmalloc_footprint(void);
 886
 887/*
 888  malloc_max_footprint();
 889  Returns the maximum number of bytes obtained from the system. This
 890  value will be greater than current footprint if deallocated space
 891  has been reclaimed by the system. The peak number of bytes allocated
 892  by malloc, realloc etc., is less than this value. Unlike mallinfo,
 893  this function returns only a precomputed result, so can be called
 894  frequently to monitor memory consumption.  Even if locks are
 895  otherwise defined, this function does not use them, so results might
 896  not be up to date.
 897*/
 898size_t dlmalloc_max_footprint(void);
 899
 900#if !NO_MALLINFO
 901/*
 902  mallinfo()
 903  Returns (by copy) a struct containing various summary statistics:
 904
 905  arena:     current total non-mmapped bytes allocated from system
 906  ordblks:   the number of free chunks
 907  smblks:    always zero.
 908  hblks:     current number of mmapped regions
 909  hblkhd:    total bytes held in mmapped regions
 910  usmblks:   the maximum total allocated space. This will be greater
 911                than current total if trimming has occurred.
 912  fsmblks:   always zero
 913  uordblks:  current total allocated space (normal or mmapped)
 914  fordblks:  total free space
 915  keepcost:  the maximum number of bytes that could ideally be released
 916               back to system via malloc_trim. ("ideally" means that
 917               it ignores page restrictions etc.)
 918
 919  Because these fields are ints, but internal bookkeeping may
 920  be kept as longs, the reported values may wrap around zero and
 921  thus be inaccurate.
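
  For example (a sketch, with <stdio.h>, assuming MALLINFO_FIELD_TYPE is
  left at its size_t default):

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "allocated: %lu  free: %lu  trimmable: %lu\n",
            (unsigned long)mi.uordblks,
            (unsigned long)mi.fordblks,
            (unsigned long)mi.keepcost);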
 922*/
 923struct mallinfo dlmallinfo(void);
 924#endif /* NO_MALLINFO */
 925
 926/*
 927  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
 928
 929  independent_calloc is similar to calloc, but instead of returning a
 930  single cleared space, it returns an array of pointers to n_elements
 931  independent elements that can hold contents of size elem_size, each
 932  of which starts out cleared, and can be independently freed,
 933  realloc'ed etc. The elements are guaranteed to be adjacently
 934  allocated (this is not guaranteed to occur with multiple callocs or
 935  mallocs), which may also improve cache locality in some
 936  applications.
 937
 938  The "chunks" argument is optional (i.e., may be null, which is
 939  probably the most typical usage). If it is null, the returned array
 940  is itself dynamically allocated and should also be freed when it is
 941  no longer needed. Otherwise, the chunks array must be of at least
 942  n_elements in length. It is filled in with the pointers to the
 943  chunks.
 944
 945  In either case, independent_calloc returns this pointer array, or
 946  null if the allocation failed.  If n_elements is zero and "chunks"
 947  is null, it returns a chunk representing an array with zero elements
 948  (which should be freed if not wanted).
 949
 950  Each element must be individually freed when it is no longer
 951  needed. If you'd like to instead be able to free all at once, you
 952  should instead use regular calloc and assign pointers into this
 953  space to represent elements.  (In this case though, you cannot
 954  independently free elements.)
 955
 956  independent_calloc simplifies and speeds up implementations of many
 957  kinds of pools.  It may also be useful when constructing large data
 958  structures that initially have a fixed number of fixed-sized nodes,
 959  but the number is not known at compile time, and some of the nodes
 960  may later need to be freed. For example:
 961
 962  struct Node { int item; struct Node* next; };
 963
 964  struct Node* build_list() {
 965    struct Node** pool;
 966    int n = read_number_of_nodes_needed();
 967    if (n <= 0) return 0;
 968    pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
 969    if (pool == 0) die();
 970    // organize into a linked list...
 971    struct Node* first = pool[0];
 972    for (int i = 0; i < n-1; ++i)
 973      pool[i]->next = pool[i+1];
 974    free(pool);     // Can now free the array (or not, if it is needed later)
 975    return first;
 976  }
 977*/
 978void** dlindependent_calloc(size_t, size_t, void**);
 979
 980/*
 981  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
 982
 983  independent_comalloc allocates, all at once, a set of n_elements
 984  chunks with sizes indicated in the "sizes" array.    It returns
 985  an array of pointers to these elements, each of which can be
 986  independently freed, realloc'ed etc. The elements are guaranteed to
 987  be adjacently allocated (this is not guaranteed to occur with
 988  multiple callocs or mallocs), which may also improve cache locality
 989  in some applications.
 990
 991  The "chunks" argument is optional (i.e., may be null). If it is null
 992  the returned array is itself dynamically allocated and should also
 993  be freed when it is no longer needed. Otherwise, the chunks array
 994  must be of at least n_elements in length. It is filled in with the
 995  pointers to the chunks.
 996
 997  In either case, independent_comalloc returns this pointer array, or
 998  null if the allocation failed.  If n_elements is zero and chunks is
 999  null, it returns a chunk representing an array with zero elements
1000  (which should be freed if not wanted).
1001
1002  Each element must be individually freed when it is no longer
1003  needed. If you'd like to instead be able to free all at once, you
1004  should instead use a single regular malloc, and assign pointers at
1005  particular offsets in the aggregate space. (In this case though, you
1006  cannot independently free elements.)
1007
1008  independent_comalloc differs from independent_calloc in that each
1009  element may have a different size, and also that it does not
1010  automatically clear elements.
1011
1012  independent_comalloc can be used to speed up allocation in cases
1013  where several structs or objects must always be allocated at the
1014  same time.  For example:
1015
1016  struct Head { ... };
1017  struct Foot { ... };
1018
1019  void send_message(char* msg) {
1020    int msglen = strlen(msg);
1021    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1022    void* chunks[3];
1023    if (independent_comalloc(3, sizes, chunks) == 0)
1024      die();
1025    struct Head* head = (struct Head*)(chunks[0]);
1026    char*        body = (char*)(chunks[1]);
1027    struct Foot* foot = (struct Foot*)(chunks[2]);
1028    // ...
1029  }
1030
1031  In general though, independent_comalloc is worth using only for
1032  larger values of n_elements. For small values, you probably won't
1033  detect enough difference from series of malloc calls to bother.
1034
1035  Overuse of independent_comalloc can increase overall memory usage,
1036  since it cannot reuse existing noncontiguous small chunks that
1037  might be available for some of the elements.
1038*/
1039void** dlindependent_comalloc(size_t, size_t*, void**);
1040
1041
1042/*
1043  pvalloc(size_t n);
1044  Equivalent to valloc(minimum-page-that-holds(n)), that is,
1045  round up n to nearest pagesize.
1046 */
1047void*  dlpvalloc(size_t);
1048
1049/*
1050  malloc_trim(size_t pad);
1051
1052  If possible, gives memory back to the system (via negative arguments
1053  to sbrk) if there is unused memory at the `high' end of the malloc
1054  pool or in unused MMAP segments. You can call this after freeing
1055  large blocks of memory to potentially reduce the system-level memory
1056  requirements of a program. However, it cannot guarantee to reduce
1057  memory. Under some allocation patterns, some large free blocks of
1058  memory will be locked between two used chunks, so they cannot be
1059  given back to the system.
1060
1061  The `pad' argument to malloc_trim represents the amount of free
1062  trailing space to leave untrimmed. If this argument is zero, only
1063  the minimum amount of memory to maintain internal data structures
1064  will be left. Non-zero arguments can be supplied to maintain enough
1065  trailing space to service future expected allocations without having
1066  to re-obtain memory from the system.
1067
1068  Malloc_trim returns 1 if it actually released any memory, else 0.
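
  A typical usage sketch after releasing a large working set (big_buffer is
  a hypothetical allocation):

    free(big_buffer);
    int released = malloc_trim(64 * 1024);  // keep ~64K of slack for future requests
    // released is 1 if any memory went back to the system, 0 otherwise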
1069*/
1070int  dlmalloc_trim(size_t);
1071
1072/*
1073  malloc_stats();
1074  Prints on stderr the amount of space obtained from the system (both
1075  via sbrk and mmap), the maximum amount (which may be more than
1076  current if malloc_trim and/or munmap got called), and the current
1077  number of bytes allocated via malloc (or realloc, etc) but not yet
1078  freed. Note that this is the number of bytes allocated, not the
1079  number requested. It will be larger than the number requested
1080  because of alignment and bookkeeping overhead. Because it includes
1081  alignment wastage as being in use, this figure may be greater than
1082  zero even when no user-level chunks are allocated.
1083
1084  The reported current and maximum system memory can be inaccurate if
1085  a program makes other calls to system memory allocation functions
1086  (normally sbrk) outside of malloc.
1087
1088  malloc_stats prints only the most commonly interesting statistics.
1089  More information can be obtained by calling mallinfo.
1090*/
1091void  dlmalloc_stats(void);
1092
1093#endif /* ONLY_MSPACES */
1094
1095/*
1096  malloc_usable_size(void* p);
1097
1098  Returns the number of bytes you can actually use in
1099  an allocated chunk, which may be more than you requested (although
1100  often not) due to alignment and minimum size constraints.
1101  You can use this many bytes without worrying about
1102  overwriting other allocated objects. This is not a particularly great
1103  programming practice. malloc_usable_size can be more useful in
1104  debugging and assertions, for example:
1105
1106  p = malloc(n);
1107  assert(malloc_usable_size(p) >= 256);
1108*/
1109size_t dlmalloc_usable_size(void*);
1110
1111
1112#if MSPACES
1113
1114/*
1115  mspace is an opaque type representing an independent
1116  region of space that supports mspace_malloc, etc.
1117*/
1118typedef void* mspace;
1119
1120/*
1121  create_mspace creates and returns a new independent space with the
1122  given initial capacity, or, if 0, the default granularity size.  It
1123  returns null if there is no system memory available to create the
1124  space.  If argument locked is non-zero, the space uses a separate
1125  lock to control access. The capacity of the space will grow
1126  dynamically as needed to service mspace_malloc requests.  You can
1127  control the sizes of incremental increases of this space by
1128  compiling with a different DEFAULT_GRANULARITY or dynamically
1129  setting with mallopt(M_GRANULARITY, value).
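
  A minimal lifecycle sketch (assuming MSPACES is enabled):

    mspace arena = create_mspace(0, 0);   // default capacity, no locking
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);
      mspace_free(arena, p);
      destroy_mspace(arena);              // returns the space's memory to the system
    }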
1130*/
1131mspace create_mspace(size_t capacity, int locked);
1132
1133/*
1134  destroy_mspace destroys the given space, and attempts to return all
1135  of its memory back to the system, returning the total number of
1136  bytes freed. After destruction, the results of access to all memory
1137  used by the space become undefined.
1138*/
1139size_t destroy_mspace(mspace msp);
1140
1141/*
1142  create_mspace_with_base uses the memory supplied as the initial base
1143  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1144  space is used for bookkeeping, so the capacity must be at least this
1145  large. (Otherwise 0 is returned.) When this initial space is
1146  exhausted, additional memory will be obtained from the system.
1147  Destroying this space will deallocate all additionally allocated
1148  space (if possible) but not the initial base.
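
  For example (a sketch), a caller-owned buffer can seed an mspace; early
  chunks come from the buffer, later ones from the system:

    static char arena_buf[1024 * 1024];
    mspace m = create_mspace_with_base(arena_buf, sizeof(arena_buf), 0);
    if (m != 0) {
      void* p = mspace_malloc(m, 256);
      mspace_free(m, p);
      destroy_mspace(m);   // releases extra system memory, but not arena_buf
    }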
1149*/
1150mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1151
1152/*
1153  mspace_mmap_large_chunks controls whether requests for large chunks
1154  are allocated in their own mmapped regions, separate from others in
1155  this mspace. By default this is enabled, which reduces
1156  fragmentation. However, such chunks are not necessarily released to
1157  the system upon destroy_mspace.  Disabling it (by passing zero) may
1158  increase fragmentation, but avoids leakage when relying on
1159  destroy_mspace to release all memory allocated using this space.
1160*/
1161int mspace_mmap_large_chunks(mspace msp, int enable);
1162
1163
1164/*
1165  mspace_malloc behaves as malloc, but operates within
1166  the given space.
1167*/
1168void* mspace_malloc(mspace msp, size_t bytes);
1169
1170/*
1171  mspace_free behaves as free, but operates within
1172  the given space.
1173
1174  If compiled with FOOTERS==1, mspace_free is not actually needed.
1175  free may be called instead of mspace_free because freed chunks from
1176  any space are handled by their originating spaces.
1177*/
1178void mspace_free(mspace msp, void* mem);
1179
1180/*
1181  mspace_realloc behaves as realloc, but operates within
1182  the given space.
1183
1184  If compiled with FOOTERS==1, mspace_realloc is not actually
1185  needed.  realloc may be called instead of mspace_realloc because
1186  realloced chunks from any space are handled by their originating
1187  spaces.
1188*/
1189void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1190
1191/*
1192  mspace_calloc behaves as calloc, but operates within
1193  the given space.
1194*/
1195void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1196
1197/*
1198  mspace_memalign behaves as memalign, but operates within
1199  the given space.
1200*/
1201void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1202
1203/*
1204  mspace_independent_calloc behaves as independent_calloc, but
1205  operates within the given space.
1206*/
1207void** mspace_independent_calloc(mspace msp, size_t n_elements,
1208                                 size_t elem_size, void* chunks[]);
1209
1210/*
1211  mspace_independent_comalloc behaves as independent_comalloc, but
1212  operates within the given space.
1213*/
1214void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1215                                   size_t sizes[], void* chunks[]);
1216
1217/*
1218  mspace_footprint() returns the number of bytes obtained from the
1219  system for this space.
1220*/
1221size_t mspace_footprint(mspace msp);
1222
1223/*
1224  mspace_max_footprint() returns the peak number of bytes obtained from the
1225  system for this space.
1226*/
1227size_t mspace_max_footprint(mspace msp);
1228
1229
1230#if !NO_MALLINFO
1231/*
1232  mspace_mallinfo behaves as mallinfo, but reports properties of
1233  the given space.
1234*/
1235struct mallinfo mspace_mallinfo(mspace msp);
1236#endif /* NO_MALLINFO */
1237
1238/*
1239  mspace_usable_size(void* p) behaves the same as malloc_usable_size.
1240*/
1241size_t mspace_usable_size(void* mem);
1242
1243/*
1244  mspace_malloc_stats behaves as malloc_stats, but reports
1245  properties of the given space.
1246*/
1247void mspace_malloc_stats(mspace msp);
1248
1249/*
1250  mspace_trim behaves as malloc_trim, but
1251  operates within the given space.
1252*/
1253int mspace_trim(mspace msp, size_t pad);
1254
1255/*
1256  An alias for mallopt.
1257*/
1258int mspace_mallopt(int, int);
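
/*
  A minimal end-to-end usage sketch (illustrative only; assumes this file
  is compiled with MSPACES enabled):

    mspace ms = create_mspace(0, 0);       // default capacity, no locking
    if (ms != 0) {
      void* p = mspace_calloc(ms, 16, sizeof(int));
      size_t fp = mspace_footprint(ms);    // bytes obtained from the system
      mspace_free(ms, p);
      destroy_mspace(ms);                  // releases everything at once
    }
*/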
1259
1260#endif /* MSPACES */
1261
1262#ifdef __cplusplus
1263};  /* end of extern "C" */
1264#endif /* __cplusplus */
1265
1266/*
1267  ========================================================================
1268  To make a fully customizable malloc.h header file, cut everything
1269  above this line, put into file malloc.h, edit to suit, and #include it
1270  on the next line, as well as in programs that use this malloc.
1271  ========================================================================
1272*/
1273
1274/* #include "malloc.h" */
1275
1276/*------------------------------ internal #includes ---------------------- */
1277
1278#ifdef WIN32
1279#ifndef __GNUC__
1280#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1281#endif
1282#endif /* WIN32 */
1283
1284#include <stdio.h>       /* for printing in malloc_stats */
1285
1286#ifndef LACKS_ERRNO_H
1287#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1288#endif /* LACKS_ERRNO_H */
1289#if FOOTERS
1290#include <time.h>        /* for magic initialization */
1291#endif /* FOOTERS */
1292#ifndef LACKS_STDLIB_H
1293#include <stdlib.h>      /* for abort() */
1294#endif /* LACKS_STDLIB_H */
1295#ifdef DEBUG
1296#if ABORT_ON_ASSERT_FAILURE
1297#define assert(x) if(!(x)) ABORT
1298#else /* ABORT_ON_ASSERT_FAILURE */
1299#include <assert.h>
1300#endif /* ABORT_ON_ASSERT_FAILURE */
1301#else  /* DEBUG */
1302#ifndef assert
1303#define assert(x)
1304#endif
1305#define DEBUG 0
1306#endif /* DEBUG */
1307#ifndef LACKS_STRING_H
1308#include <string.h>      /* for memset etc */
1309#endif  /* LACKS_STRING_H */
1310#if USE_BUILTIN_FFS
1311#ifndef LACKS_STRINGS_H
1312#include <strings.h>     /* for ffs */
1313#endif /* LACKS_STRINGS_H */
1314#endif /* USE_BUILTIN_FFS */
1315#if HAVE_MMAP
1316#ifndef LACKS_SYS_MMAN_H
1317#include <sys/mman.h>    /* for mmap */
1318#endif /* LACKS_SYS_MMAN_H */
1319#ifndef LACKS_FCNTL_H
1320#include <fcntl.h>
1321#endif /* LACKS_FCNTL_H */
1322#endif /* HAVE_MMAP */
1323#ifndef LACKS_UNISTD_H
1324#include <unistd.h>     /* for sbrk, sysconf */
1325#else /* LACKS_UNISTD_H */
1326#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1327extern void*     sbrk(ptrdiff_t);
1328#endif /* FreeBSD etc */
1329#endif /* LACKS_UNISTD_H */
1330
1331/* Declarations for locking */
1332#if USE_LOCKS
1333#ifndef WIN32
1334#include <pthread.h>
1335#if defined (__SVR4) && defined (__sun)  /* solaris */
1336#include <thread.h>
1337#endif /* solaris */
1338#else
1339#ifndef _M_AMD64
1340/* These are already defined on AMD64 builds */
1341#ifdef __cplusplus
1342extern "C" {
1343#endif /* __cplusplus */
1344#ifndef __MINGW32__
1345LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1346LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1347#endif
1348#ifdef __cplusplus
1349}
1350#endif /* __cplusplus */
1351#endif /* _M_AMD64 */
1352#ifndef __MINGW32__
1353#pragma intrinsic (_InterlockedCompareExchange)
1354#pragma intrinsic (_InterlockedExchange)
1355#else
1356  /* --[ start GCC compatibility ]----------------------------------------------
1357   * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic
1358   * Microsoft Visual C++ functions. Originally developed for the ReactOS
1359   * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>)
1360   * projects.
1361   *
1362   * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com>
1363   *
1364   * Permission is hereby granted, free of charge, to any person obtaining a
1365   * copy of this software and associated documentation files (the "Software"),
1366   * to deal in the Software without restriction, including without limitation
1367   * the rights to use, copy, modify, merge, publish, distribute, sublicense,
1368   * and/or sell copies of the Software, and to permit persons to whom the
1369   * Software is furnished to do so, subject to the following conditions:
1370   *
1371   * The above copyright notice and this permission notice shall be included in
1372   * all copies or substantial portions of the Software.
1373   *
1374   * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
1375   * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
1376   * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
1377   * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
1378   * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
1379   * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
1380   * DEALINGS IN THE SOFTWARE.
1381   */
1382
1383  /*** Atomic operations ***/
1384  #if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
1385    #define _ReadWriteBarrier() __sync_synchronize()
1386  #else
1387    static __inline__ __attribute__((always_inline)) long __sync_lock_test_and_set(volatile long * const Target, const long Value)
1388    {
1389      long res;
1390      __asm__ __volatile__("xchg%z0 %2, %0" : "=g" (*(Target)), "=r" (res) : "1" (Value));
1391      return res;
1392    }
1393    static void __inline__ __attribute__((always_inline)) _MemoryBarrier(void)
1394    {
1395      __asm__ __volatile__("" : : : "memory");
1396    }
1397    #define _ReadWriteBarrier() _MemoryBarrier()
1398  #endif
1399  /* BUGBUG: GCC only supports full barriers */
1400  static __inline__ __attribute__((always_inline)) long _InterlockedExchange(volatile long * const Target, const long Value)
1401  {
1402    /* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */
1403    _ReadWriteBarrier();
1404    return __sync_lock_test_and_set(Target, Value);
1405  }
1406  /* --[ end GCC compatibility ]---------------------------------------------- */
1407#endif
1408#define interlockedcompareexchange _InterlockedCompareExchange
1409#define interlockedexchange _InterlockedExchange
1410#endif /* Win32 */
1411#endif /* USE_LOCKS */
1412
1413/* Declarations for bit scanning on win32 */
1414#if defined(_MSC_VER) && _MSC_VER>=1300
1415#ifndef BitScanForward  /* Try to avoid pulling in WinNT.h */
1416#ifdef __cplusplus
1417extern "C" {
1418#endif /* __cplusplus */
1419unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1420unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1421#ifdef __cplusplus
1422}
1423#endif /* __cplusplus */
1424
1425#define BitScanForward _BitScanForward
1426#define BitScanReverse _BitScanReverse
1427#pragma intrinsic(_BitScanForward)
1428#pragma intrinsic(_BitScanReverse)
1429#endif /* BitScanForward */
1430#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1431
1432#ifndef WIN32
1433#ifndef malloc_getpagesize
1434#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1435#    ifndef _SC_PAGE_SIZE
1436#      define _SC_PAGE_SIZE _SC_PAGESIZE
1437#    endif
1438#  endif
1439#  ifdef _SC_PAGE_SIZE
1440#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1441#  else
1442#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1443       extern size_t getpagesize();
1444#      define malloc_getpagesize getpagesize()
1445#    else
1446#      ifdef WIN32 /* use supplied emulation of getpagesize */
1447#        define malloc_getpagesize getpagesize()
1448#      else
1449#        ifndef LACKS_SYS_PARAM_H
1450#          include <sys/param.h>
1451#        endif
1452#        ifdef EXEC_PAGESIZE
1453#          define malloc_getpagesize EXEC_PAGESIZE
1454#        else
1455#          ifdef NBPG
1456#            ifndef CLSIZE
1457#              define malloc_getpagesize NBPG
1458#            else
1459#              define malloc_getpagesize (NBPG * CLSIZE)
1460#            endif
1461#          else
1462#            ifdef NBPC
1463#              define malloc_getpagesize NBPC
1464#            else
1465#              ifdef PAGESIZE
1466#                define malloc_getpagesize PAGESIZE
1467#              else /* just guess */
1468#                define malloc_getpagesize ((size_t)4096U)
1469#              endif
1470#            endif
1471#          endif
1472#        endif
1473#      endif
1474#    endif
1475#  endif
1476#endif
1477#endif
1478
1479
1480
1481/* ------------------- size_t and alignment properties -------------------- */
1482
1483/* The byte and bit size of a size_t */
1484#define SIZE_T_SIZE         (sizeof(size_t))
1485#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1486
1487/* Some constants coerced to size_t */
1488/* Annoying but necessary to avoid errors on some platforms */
1489#define SIZE_T_ZERO         ((size_t)0)
1490#define SIZE_T_ONE          ((size_t)1)
1491#define SIZE_T_TWO          ((size_t)2)
1492#define SIZE_T_FOUR         ((size_t)4)
1493#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1494#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1495#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1496#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1497
1498/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1499#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1500
1501/* True if address a has acceptable alignment */
1502#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1503
1504/* the number of bytes to offset an address to align it */
1505#define align_offset(A)\
1506 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1507  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1508
1509/* -------------------------- MMAP preliminaries ------------------------- */
1510
1511/*
1512   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1513   checks to fail so the compiler optimizer can delete code rather than
1514   using so many "#if"s.
1515*/
1516
1517
1518/* MORECORE and MMAP must return MFAIL on failure */
1519#define MFAIL                ((void*)(MAX_SIZE_T))
1520#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1521
1522#if HAVE_MMAP
1523
1524#ifndef WIN32
1525#define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
1526#define MMAP_PROT            (PROT_READ|PROT_WRITE)
1527#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1528#define MAP_ANONYMOUS        MAP_ANON
1529#endif /* MAP_ANON */
1530#ifdef MAP_ANONYMOUS
1531#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1532#define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1533#else /* MAP_ANONYMOUS */
1534/*
1535   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1536   is unlikely to be needed, but is supplied just in case.
1537*/
1538#define MMAP_FLAGS           (MAP_PRIVATE)
1539static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1540#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1541           (dev_zero_fd = open("/dev/zero", O_RDWR), \
1542            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1543            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1544#endif /* MAP_ANONYMOUS */
1545
1546#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1547
1548#else /* WIN32 */
1549
1550/* Win32 MMAP via VirtualAlloc */
1551static FORCEINLINE void* win32mmap(size_t size) {
1552  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1553  return (ptr != 0)? ptr: MFAIL;
1554}
1555
1556/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1557static FORCEINLINE void* win32direct_mmap(size_t size) {
1558  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1559                           PAGE_READWRITE);
1560  return (ptr != 0)? ptr: MFAIL;
1561}
1562
1563/* This function supports releasing coalesced segments */
1564static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1565  MEMORY_BASIC_INFORMATION minfo;
1566  char* cptr = (char*)ptr;
1567  while (size) {
1568    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1569      return -1;
1570    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1571        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1572      return -1;
1573    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1574      return -1;
1575    cptr += minfo.RegionSize;
1576    size -= minfo.RegionSize;
1577  }
1578  return 0;
1579}
1580
1581#define MMAP_DEFAULT(s)             win32mmap(s)
1582#define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
1583#define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
1584#endif /* WIN32 */
1585#endif /* HAVE_MMAP */
1586
1587#if HAVE_MREMAP
1588#ifndef WIN32
1589#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1590#endif /* WIN32 */
1591#endif /* HAVE_MREMAP */
1592
1593
1594/**
1595 * Define CALL_MORECORE
1596 */
1597#if HAVE_MORECORE
1598    #ifdef MORECORE
1599        #define CALL_MORECORE(S)    MORECORE(S)
1600    #else  /* MORECORE */
1601        #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
1602    #endif /* MORECORE */
1603#else  /* HAVE_MORECORE */
1604    #define CALL_MORECORE(S)        MFAIL
1605#endif /* HAVE_MORECORE */
1606
1607/**
1608 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1609 */
1610#if HAVE_MMAP
1611    #define IS_MMAPPED_BIT          (SIZE_T_ONE)
1612    #define USE_MMAP_BIT            (SIZE_T_ONE)
1613
1614    #ifdef MMAP
1615        #define CALL_MMAP(s)        MMAP(s)
1616    #else /* MMAP */
1617        #define CALL_MMAP(s)        MMAP_DEFAULT(s)
1618    #endif /* MMAP */
1619    #ifdef MUNMAP
1620        #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
1621    #else /* MUNMAP */
1622        #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
1623    #endif /* MUNMAP */
1624    #ifdef DIRECT_MMAP
1625        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1626    #else /* DIRECT_MMAP */
1627        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1628    #endif /* DIRECT_MMAP */
1629#else  /* HAVE_MMAP */
1630    #define IS_MMAPPED_BIT          (SIZE_T_ZERO)
1631    #define USE_MMAP_BIT            (SIZE_T_ZERO)
1632
1633    #define MMAP(s)                 MFAIL
1634    #define MUNMAP(a, s)            (-1)
1635    #define DIRECT_MMAP(s)          MFAIL
1636    #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
1637    #define CALL_MMAP(s)            MMAP(s)
1638    #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
1639#endif /* HAVE_MMAP */
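
/*
  Hypothetical override sketch (illustrative only): a port can route these
  calls through its own primitives by defining MMAP/MUNMAP before this
  point, following the MFAIL-on-failure convention noted earlier, e.g.:

    #define MMAP(s)      my_port_reserve_pages(s)      // placeholder name
    #define MUNMAP(a, s) my_port_release_pages((a), (s))
*/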
1640
1641/**
1642 * Define CALL_MREMAP
1643 */
1644#if HAVE_MMAP && HAVE_MREMAP
1645    #ifdef MREMAP
1646        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1647    #else /* MREMAP */
1648        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1649    #endif /* MREMAP */
1650#else  /* HAVE_MMAP && HAVE_MREMAP */
1651    #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
1652#endif /* HAVE_MMAP && HAVE_MREMAP */
1653
1654/* mstate bit set if contiguous morecore disabled or failed */
1655#define USE_NONCONTIGUOUS_BIT (4U)
1656
1657/* segment bit set in create_mspace_with_base */
1658#define EXTERN_BIT            (8U)
1659
1660
1661/* --------------------------- Lock preliminaries ------------------------ */
1662
1663/*
1664  When locks are defined, there is one global lock, plus
1665  one per-mspace lock.
1666
1667  The global lock ensures that mparams.magic and other unique
1668  mparams values are initialized only once. It also protects
1669  sequences of calls to MORECORE.  In many cases sys_alloc requires
1670  two calls that should not be interleaved with calls by other
1671  threads.  This does not protect against direct calls to MORECORE
1672  by other threads not using this lock, so there is still code to
1673  cope as best we can with interference.
1674
1675  Per-mspace locks surround calls to malloc, free, etc.  To enable use
1676  in layered extensions, per-mspace locks are reentrant.
1677
1678  Because lock-protected regions generally have bounded times, it is
1679  OK to use the supplied simple spinlocks in the custom versions for
1680  x86.
1681
1682  If USE_LOCKS is > 1, the definitions of lock routines here are
1683  bypassed, in which case you will need to define at least
1684  INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK
1685  (which is not used in this malloc, but commonly needed in
1686  extensions.)
1687*/
1688
1689#if USE_LOCKS == 1
1690
1691#if USE_SPIN_LOCKS
1692#ifndef WIN32
1693
1694/* Custom pthread-style spin locks on x86 and x64 for gcc */
1695struct pthread_mlock_t {
1696  volatile unsigned int l;
1697  volatile unsigned int c;
1698  volatile pthread_t threadid;
1699};
1700#define MLOCK_T struct        pthread_mlock_t
1701#define CURRENT_THREAD        pthread_self()
1702#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
1703#define ACQUIRE_LOCK(sl)      pthread_acquire_lock(sl)
1704#define RELEASE_LOCK(sl)      pthread_release_lock(sl)
1705#define TRY_LOCK(sl)          pthread_try_lock(sl)
1706#define SPINS_PER_YIELD       63
1707
1708static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1709
1710static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
1711  int spins = 0;
1712  volatile unsigned int* lp = &sl->l;
1713  for (;;) {
1714    if (*lp != 0) {
1715      if (sl->threadid == CURRENT_THREAD) {
1716        ++sl->c;
1717        return 0;
1718      }
1719    }
1720    else {
1721      /* place args to cmpxchgl in locals to evade oddities in some gccs */
1722      int cmp = 0;
1723      int val = 1;
1724      int ret;
1725      __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1726                             : "=a" (ret)
1727                             : "r" (val), "m" (*(lp)), "0"(cmp)
1728                             : "memory", "cc");
1729      if (!ret) {
1730        assert(!sl->threadid);
1731        sl->c = 1;
1732        sl->threadid = CURRENT_THREAD;
1733        return 0;
1734      }
1735      if ((++spins & SPINS_PER_YIELD) == 0) {
1736#if defined (__SVR4) && defined (__sun) /* solaris */
1737        thr_yield();
1738#else
1739#if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
1740        sched_yield();
1741#else  /* no-op yield on unknown systems */
1742        ;
1743#endif /* __linux__ || __FreeBSD__ || __APPLE__ */
1744#endif /* solaris */
1745      }
1746    }
1747  }
1748}
1749
1750static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
1751  assert(sl->l != 0);
1752  assert(sl->threadid == CURRENT_THREAD);
1753  if (--sl->c == 0) {
1754    sl->threadid = 0;
1755    volatile unsigned int* lp = &sl->l;
1756    int prev = 0;
1757    int ret;
1758    __asm__ __volatile__ ("lock; xchgl %0, %1"
1759                          : "=r" (ret)
1760                          : "m" (*(lp)), "0"(prev)
1761                          : "memory");
1762  }
1763}
1764
1765static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
1766  volatile unsigned int* lp = &sl->l;
1767  if (*lp != 0) {
1768      if (sl->threadid == CURRENT_THREAD) {
1769        ++sl->c;
1770        return 1;
1771      }
1772  }
1773  else {
1774    int cmp = 0;
1775    int val = 1;
1776    int ret;
1777    __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1778                           : "=a" (ret)
1779                           : "r" (val), "m" (*(lp)), "0"(cmp)
1780                           : "memory", "cc");
1781    if (!ret) {
1782      assert(!sl->threadid);
1783      sl->c = 1;
1784      sl->threadid = CURRENT_THREAD;
1785      return 1;
1786    }
1787  }
1788  return 0;
1789}
1790
1791
1792#else /* WIN32 */
1793/* Custom win32-style spin locks on x86 and x64 for MSC */
1794struct win32_mlock_t
1795{
1796  volatile long l;
1797  volatile unsigned int c;
1798  volatile long threadid;
1799};
1800
1801#define MLOCK_T               struct win32_mlock_t
1802#define CURRENT_THREAD        win32_getcurrentthreadid()
1803#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
1804#define ACQUIRE_LOCK(sl)      win32_acquire_lock(sl)
1805#define RELEASE_LOCK(sl)      win32_release_lock(sl)
1806#define TRY_LOCK(sl)          win32_try_lock(sl)
1807#define SPINS_PER_YIELD       63
1808
1809static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1810
1811static FORCEINLINE long win32_getcurrentthreadid(void) {
1812#ifdef _MSC_VER
1813#if defined(_M_IX86)
1814  long *threadstruct=(long *)__readfsdword(0x18);
1815  long threadid=threadstruct[0x24/sizeof(long)];
1816  return threadid;
1817#elif defined(_M_X64)
1818  /* todo */
1819  return GetCurrentThreadId();
1820#else
1821  return GetCurrentThreadId();
1822#endif
1823#else
1824  return GetCurrentThreadId();
1825#endif
1826}
1827
1828static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
1829  int spins = 0;
1830  for (;;) {
1831    if (sl->l != 0) {
1832      if (sl->threadid == CURRENT_THREAD) {
1833        ++sl->c;
1834        return 0;
1835      }
1836    }
1837    else {
1838      if (!interlockedexchange(&sl->l, 1)) {
1839        assert(!sl->threadid);
1841        sl->threadid = CURRENT_THREAD;
1842        sl->c = 1;
1843        return 0;
1844      }
1845    }
1846    if ((++spins & SPINS_PER_YIELD) == 0)
1847      SleepEx(0, FALSE);
1848  }
1849}
1850
1851static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
1852  assert(sl->threadid == CURRENT_THREAD);
1853  assert(sl->l != 0);
1854  if (--sl->c == 0) {
1855    sl->threadid = 0;
1856    interlockedexchange (&sl->l, 0);
1857  }
1858}
1859
1860static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
1861  if(sl->l != 0) {
1862      if (sl->threadid == CURRENT_THREAD) {
1863        ++sl->c;
1864        return 1;
1865      }
1866  }
1867  else {
1868    if (!interlockedexchange(&sl->l, 1)){
1869      assert(!sl->threadid);
1870      sl->threadid = CURRENT_THREAD;
1871      sl->c = 1;
1872      return 1;
1873    }
1874  }
1875  return 0;
1876}
1877
1878#endif /* WIN32 */
1879#else /* USE_SPIN_LOCKS */
1880
1881#ifndef WIN32
1882/* pthreads-based locks */
1883
1884#define MLOCK_T               pthread_mutex_t
1885#define CURRENT_THREAD        pthread_self()
1886#define INITIAL_LOCK(sl)      pthread_init_lock(sl)
1887#define ACQUIRE_LOCK(sl)      pthread_mutex_lock(sl)
1888#define RELEASE_LOCK(sl)      pthread_mutex_unlock(sl)
1889#define TRY_LOCK(sl)          (!pthread_mutex_trylock(sl))
1890
1891static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
1892
1893/* Cope with old-style linux recursive lock initialization by adding */
1894/* skipped internal declaration from pthread.h */
1895#ifdef linux
1896#ifndef PTHREAD_MUTEX_RECURSIVE
1897extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
1898                                           int __kind));
1899#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
1900#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
1901#endif
1902#endif
1903
1904static int pthread_init_lock (MLOCK_T *sl) {
1905  pthread_mutexattr_t attr;
1906  if (pthread_mutexattr_init(&attr)) return 1;
1907  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
1908  if (pthread_mutex_init(sl, &attr)) return 1;
1909  if (pthread_mutexattr_destroy(&attr)) return 1;
1910  return 0;
1911}
1912
1913#else /* WIN32 */
1914/* Win32 critical sections */
1915#define MLOCK_T               CRITICAL_SECTION
1916#define CURRENT_THREAD        GetCurrentThreadId()
1917#define INITIAL_LOCK(s)       (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000))
1918#define ACQUIRE_LOCK(s)       (EnterCriticalSection(s), 0)
1919#define RELEASE_LOCK(s)       LeaveCriticalSection(s)
1920#define TRY_LOCK(s)           TryEnterCriticalSection(s)
1921#define NEED_GLOBAL_LOCK_INIT
1922
1923static MLOCK_T malloc_global_mutex;
1924static volatile long malloc_global_mutex_status;
1925
1926/* Use spin loop to initialize global lock */
1927static void init_malloc_global_mutex() {
1928  for (;;) {
1929    long stat = malloc_global_mutex_status;
1930    if (stat > 0)
1931      return;
1932    /* transition to < 0 while initializing, then to > 0 */
1933    if (stat == 0 &&
1934        interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1935      InitializeCriticalSection(&malloc_global_mutex);
1936      interlockedexchange(&malloc_global_mutex_status,1);
1937      return;
1938    }
1939    SleepEx(0, FALSE);
1940  }
1941}
1942
1943#endif /* WIN32 */
1944#endif /* USE_SPIN_LOCKS */
1945#endif /* USE_LOCKS == 1 */
1946
1947/* -----------------------  User-defined locks ------------------------ */
1948
1949#if USE_LOCKS > 1
1950/* Define your own lock implementation here */
1951/* #define INITIAL_LOCK(sl)  ... */
1952/* #define ACQUIRE_LOCK(sl)  ... */
1953/* #define RELEASE_LOCK(sl)  ... */
1954/* #define TRY_LOCK(sl) ... */
1955/* static MLOCK_T malloc_global_mutex = ... */
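
/*
  For example (an illustrative sketch only), plain pthread mutexes could be
  supplied here:

    #define MLOCK_T           pthread_mutex_t
    #define INITIAL_LOCK(sl)  pthread_mutex_init(sl, NULL)
    #define ACQUIRE_LOCK(sl)  pthread_mutex_lock(sl)
    #define RELEASE_LOCK(sl)  pthread_mutex_unlock(sl)
    #define TRY_LOCK(sl)      (!pthread_mutex_trylock(sl))
    static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;

  Note that the lock preliminaries above expect per-mspace locks to be
  reentrant, so a recursive mutex (as set up in pthread_init_lock) is the
  safer choice in practice.
*/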
1956#endif /* USE_LOCKS > 1 */
1957
1958/* -----------------------  Lock-based state ------------------------ */
1959
1960#if USE_LOCKS
1961#define USE_LOCK_BIT               (2U)
1962#else  /* USE_LOCKS */
1963#define USE_LOCK_BIT               (0U)
1964#define INITIAL_LOCK(l)
1965#endif /* USE_LOCKS */
1966
1967#if USE_LOCKS
1968#define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
1969#define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
1970#else  /* USE_LOCKS */
1971#define ACQUIRE_MALLOC_GLOBAL_LOCK()
1972#define RELEASE_MALLOC_GLOBAL_LOCK()
1973#endif /* USE_LOCKS */
1974
1975
1976/* -----------------------  Chunk representations ------------------------ */
1977
1978/*
1979  (The following includes lightly edited explanations by Colin Plumb.)
1980
1981  The malloc_chunk declaration below is misleading (but accurate and
1982  necessary).  It declares a "view" into memory allowing access to
1983  necessary fields at known offsets from a given base.
1984
1985  Chunks of memory are maintained using a `boundary tag' method as
1986  originally described by Knuth.  (See the paper by Paul Wilson
1987  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1988  techniques.)  Sizes of free chunks are stored both in the front of
1989  each chunk and at the end.  This makes consolidating fragmented
1990  chunks into bigger chunks fast.  The head fields also hold bits
1991  representing whether chunks are free or in use.
1992
1993  Here are some pictures to make it clearer.  They are "exploded" to
1994  show that the state of a chunk can be thought of as extending from
1995  the high 31 bits of the head field of its header through the
1996  prev_foot and PINUSE_BIT bit of the following chunk header.
1997
1998  A chunk that's in use looks like:
1999
2000   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2001           | Size of previous chunk (if P = 0)                             |
2002           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2003         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2004         | Size of this chunk                                         1| +-+
2005   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2006         |                                                               |
2007         +-                                                             -+
2008         |                                                               |
2009         +-                                                             -+
2010         |                                                               :
2011         +-      size - sizeof(size_t) available payload bytes          -+
2012         :                                                               |
2013 chunk-> +-                                                             -+
2014         |                                                               |
2015         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2016       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2017       | Size of next chunk (may or may not be in use)               | +-+
2018 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2019
2020    And if it's free, it looks like this:
2021
2022   chunk-> +-                                                             -+
2023           | User payload (must be in use, or we would have merged!)       |
2024           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2025         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2026         | Size of this chunk                                         0| +-+
2027   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2028         | Next pointer                                                  |
2029         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2030         | Prev pointer                                                  |
2031         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2032         |                                                               :
2033         +-      size - sizeof(struct chunk) unused bytes               -+
2034         :                                                               |
2035 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2036         | Size of this chunk                                            |
2037         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2038       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2039       | Size of next chunk (must be in use, or we would have merged)| +-+
2040 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2041       |                                                               :
2042       +- User payload                                                -+
2043       :                                                               |
2044       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2045                                                                     |0|
2046                                                                     +-+
2047  Note that since we always merge adjacent free chunks, the chunks
2048  adjacent to a free chunk must be in use.
2049
2050  Given a pointer to a chunk (which can be derived trivially from the
2051  payload pointer) we can, in O(1) time, find out whether the adjacent
2052  chunks are free, and if so, unlink them from the lists that they
2053  are on and merge them with the current chunk.
2054
2055  Chunks always begin on even word boundaries, so the mem portion
2056  (which is returned to the user) is also on an even word boundary, and
2057  thus at least double-word aligned.
2058
2059  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2060  chunk size (which is always a multiple of two words), is an in-use
2061  bit for the *previous* chunk.  If that bit is *clear*, then the
2062  word before the current chunk size contains the previous chunk
2063  size, and can be used to find the front of the previous chunk.
2064  The very first chunk allocated always has this bit set, preventing
2065  access to non-existent (or non-owned) memory. If pinuse is set for
2066  any given chunk, then you CANNOT determine the size of the
2067  previous chunk, and might even get a memory addressing fault when
2068  trying to do so.
2069
2070  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2071  the chunk size redundantly records whether the current chunk is
2072  inuse. This redundancy enables usage checks within free and realloc,
2073  and reduces indirection when freeing and consolidating chunks.
2074
2075  Each freshly allocated chunk must have both cinuse and pinuse set.
2076  That is, each allocated chunk borders either a previously allocated
2077  and still in-use chunk, or the base of its memory arena. This is
2078  ensured by making all allocations from the `lowest' part of any
2079  found chunk.  Further, no free chunk physically borders another one,
2080  so each free chunk is known to be preceded and followed by either
2081  inuse chunks or the ends of memory.
2082
2083  Note that the `foot' of the current chunk is actually represented
2084  as the prev_foot of the NEXT chunk. This makes it easier to
2085  deal with alignments etc but can be very confusing when trying
2086  to extend or adapt this code.
2087
2088  The exceptions to all this are
2089
2090     1. The special chunk `top' is the top-most available chunk (i.e.,
2091        the one bordering the end of available memory). It is treated
2092        specially.  Top is never included in any bin, is used only if
2093        no other chunk is available, and is released back to the
2094        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
2095        the top chunk is treated as larger (and thus less well
2096        fitting) than any other available chunk.  The top chunk
2097        doesn't update its trailing size field since there is no next
2098        contiguous chunk that would have to index off it. However,
2099        space is still allocated for it (TOP_FOOT_SIZE) to enable
2100        separation or merging when space is extended.
2101
2102     2. Chunks allocated via mmap, which have the lowest-order bit
2103        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
2104        PINUSE_BIT in their head fields.  Because they are allocated
2105        one-by-one, each must carry its own prev_foot field, which is
2106        also used to hold the offset this chunk has within its mmapped
2107        region, which is needed to preserve alignment. Each mmapped
2108        chunk is trailed by the first two fields of a fake next-chunk
2109        for the sake of usage checks.
2110
2111*/
2112
2113struct malloc_chunk {
2114  size_t               prev_foot;  /* Size of previous chunk (if free).  */
2115  size_t               head;       /* Size and inuse bits. */
2116  struct malloc_chunk* fd;         /* double links -- used only if free. */
2117  struct malloc_chunk* bk;
2118};
2119
2120typedef struct malloc_chunk  mchunk;
2121typedef struct malloc_chunk* mchunkptr;
2122typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
2123typedef unsigned int bindex_t;         /* Described below */
2124typedef unsigned int binmap_t;         /* Described below */
2125typedef unsigned int flag_t;           /* The type of various bit flag sets */
2126
2127/* ------------------- Chunks sizes and alignments ----------------------- */
2128
2129#define MCHUNK_SIZE         (sizeof(mchunk))
2130
2131#if FOOTERS
2132#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
2133#else /* FOOTERS */
2134#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
2135#endif /* FOOTERS */
2136
2137/* MMapped chunks need a second word of overhead ... */
2138#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2139/* ... and additional padding for fake next-chunk at foot */
2140#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
2141
2142/* The smallest size we can malloc is an aligned minimal chunk */
2143#define MIN_CHUNK_SIZE\
2144  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2145
2146/* conversion from malloc headers to user pointers, and back */
2147#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
2148#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2149/* chunk associated with aligned address A */
2150#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
2151
2152/* Bounds on request (not chunk) sizes. */
2153#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
2154#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2155
2156/* pad request bytes into a usable size */
2157#define pad_request(req) \
2158   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2159
2160/* pad request, checking for minimum (but not maximum) */
2161#define request2size(req) \
2162  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
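
/*
  Worked arithmetic (illustrative; assumes 4-byte size_t, 8-byte alignment,
  and FOOTERS disabled, so CHUNK_OVERHEAD == 4 and MIN_CHUNK_SIZE == 16):
    pad_request(25)  == (25 + 4 + 7) & ~7 == 32
    request2size(1)  == 16    (bumped up to the minimum chunk size)
    request2size(25) == 32
*/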
2163
2164
2165/* ------------------ Operations on head and foot fields ----------------- */
2166
2167/*
2168  The head field of a chunk is or'ed with PINUSE_BIT when the previous
2169  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
2170  in use. If the chunk was obtained with mmap, the prev_foot field has
2171  IS_MMAPPED_BIT set, and its remaining bits hold the offset of the
2172  chunk's base within its mmapped region.
2173
2174  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2175*/
2176
2177#define PINUSE_BIT          (SIZE_T_ONE)
2178#define CINUSE_BIT          (SIZE_T_TWO)
2179#define FLAG4_BIT           (SIZE_T_FOUR)
2180#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
2181#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2182
2183/* Head value for fenceposts */
2184#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
2185
2186/* extraction of fields from head words */
2187#define cinuse(p)           ((p)->head & CINUSE_BIT)
2188#define pinuse(p)           ((p)->head & PINUSE_BIT)
2189#define chunksize(p)        ((p)->head & ~(FLAG_BITS))
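
/*
  Worked example (illustrative): an in-use chunk of total size 48 whose
  previous neighbor is also in use has head == 48 | CINUSE_BIT | PINUSE_BIT
  == 0x33; chunksize() masks the flags back off to recover 48, and both
  cinuse() and pinuse() report nonzero.
*/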
2190
2191#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
2192#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
2193
2194/* Treat space at ptr +/- offset as a chunk */
2195#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
2196#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2197
2198/* Ptr to next or previous physical malloc_chunk. */
2199#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2200#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2201
2202/* extract next chunk's pinuse bit */
2203#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
2204
2205/* Get/set size at footer */
2206#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2207#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2208
2209/* Set size, pinuse bit, and foot */
2210#define set_size_and_pinuse_of_free_chunk(p, s)\
2211  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2212
2213/* Set size, pinuse bit, foot, and clear next pinuse */
2214#define set_free_with_pinuse(p, s, n)\
2215  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2216
2217#define is_mmapped(p)\
2218  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
2219
2220/* Get the internal overhead associated with chunk p */
2221#define overhead_for(p)\
2222 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2223
2224/* Return true if malloced space is not necessarily cleared */
2225#if MMAP_CLEARS
2226#define calloc_must_clear(p) (!is_mmapped(p))
2227#else /* MMAP_CLEARS */
2228#define calloc_must_clear(p) (1)
2229#endif /* MMAP_CLEARS */
2230
2231/* ---------------------- Overlaid data structures ----------------------- */
2232
2233/*
2234  When chunks are not in use, they are treated as nodes of either
2235  lists or trees.
2236
2237  "Small"  chunks are stored in circular doubly-linked lists, and look
2238  like this:
2239
2240    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2241            |             Size of previous chunk                            |
2242            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2243    `head:' |             Size of chunk, in bytes                         |P|
2244      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2245            |             Forward pointer to next chunk in list             |
2246            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2247            |             Back pointer to previous chunk in list            |
2248            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2249            |             Unused space (may be 0 bytes long)                .
2250            .                                                               .
2251            .                                                               |
2252nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2253    `foot:' |             Size of chunk, in bytes                           |
2254            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2255
2256  Larger chunks are kept in a form of bitwise digital trees (aka
2257  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
2258  free chunks greater than 256 bytes, their size doesn't impose any
2259  constraints on user chunk sizes.  Each node looks like:
2260
2261    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2262            |             Size of previous chunk                            |
2263            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2264    `head:' |             Size of chunk, in bytes                         |P|
2265      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2266            |             Forward pointer to next chunk of same size        |
2267            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2268            |             Back pointer to previous chunk of same size       |
2269            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2270            |             Pointer to left child (child[0])                  |
2271            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2272            |             Pointer to right child (child[1])                 |
2273            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2274            |             Pointer to parent                                 |
2275            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2276            |             bin index of this chunk                           |
2277            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2278            |             Unused space                                      .
2279            .                                                               |
2280nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2281    `foot:' |             Size of chunk, in bytes                           |
2282            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2283
2284  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
2285  of the same size are arranged in a circularly-linked list, with only
2286  the oldest chunk (the next to be used, in our FIFO ordering)
2287  actually in the tree.  (Tree members are distinguished by a non-null
2288  parent pointer.)  If a chunk with the same size as an existing node
2289  is inserted, it is linked off the existing node using pointers that
2290  work in the same way as fd/bk pointers of small chunks.
2291
2292  Each tree contains a power of 2 sized range of chunk sizes (the
2293  smallest is 0x100 <= x < 0x180), which is divided in half at each
2294  tree level, with the chunks in the smaller half of the range (0x100
2295  <= x < 0x140 for the top node) in the left subtree and the larger
2296  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
2297  done by inspecting individual bits.
2298
2299  Using these rules, each node's left subtree contains all smaller
2300  sizes than its right subtree.  However, the node at the root of each
2301  subtree has no particular ordering relationship to either.  (The
2302  dividing line between the subtree sizes is based on trie relation.)
2303  If we remove the last chunk of a given size from the interior of the
2304  tree, we need to replace it with a leaf node.  The tree ordering
2305  rules permit a node to be replaced by any leaf below it.
2306
2307  The smallest chunk in a tree (a common operation in a best-fit
2308  allocator) can be found by walking a path to the leftmost leaf in
2309  the tree.  Unlike a usual binary tree, where we follow left child
2310  pointers until we reach a null, here we follow the right child
2311  pointer any time the left one is null, until we reach a leaf with
2312  both child pointers null. The smallest chunk in the tree will be
2313  somewhere along that path.
2314
2315  The worst case number of steps to add, find, or remove a node is
2316  bounded by the number of bits differentiating chunks within
2317  bins. Under current bin calculations, this ranges from 6 up to 21
2318  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2319  is of course much better.
2320*/
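
/*
  Worked example (illustrative): the smallest treebin covers sizes
  0x100 <= x < 0x180. Inserting a free chunk of size 0x150 first descends
  to the right child (the 0x40 bit is set, placing it in the upper half
  0x140 <= x < 0x180); the next level splits that range on the 0x20 bit,
  and so on one bit per level, until a null child slot or an equal-sized
  node is reached.
*/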
2321
2322struct malloc_tree_chunk {
2323  /* The first four fields must be compatible with malloc_chunk */
2324  size_t                    prev_foot;
2325  size_t                    head;
2326  struct malloc_tree_chunk* fd;
2327  struct malloc_tree_chunk* bk;
2328
2329  struct malloc_tree_chunk* child[2];
2330  struct malloc_tree_chunk* parent;
2331  bindex_t                  index;
2332};
2333
2334typedef struct malloc_tree_chunk  tchunk;
2335typedef struct malloc_tree_chunk* tchunkptr;
2336typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2337
2338/* A little helper macro for trees */
2339#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2340
2341/* ----------------------------- Segments -------------------------------- */
2342
2343/*
2344  Each malloc space may include non-contiguous segments, held in a
2345  list headed by an embedded malloc_segment record representing the
2346  top-most space. Segments also include flags holding properties of
2347  the space. Large chunks that are directly allocated by mmap are not
2348  included in this list. They are instead independently created and
2349  destroyed without otherwise keeping track of them.
2350
2351  Segment management mainly comes into play for spaces allocated by
2352  MMAP.  Any call to MMAP might or might not return memory that is
2353  adjacent to an existing segment.  MORECORE normally contiguously
2354  extends the current space, so this space is almost always adjacent,
2355  which is simpler and faster to deal with. (This is why MORECORE is
2356  used preferentially to MMAP when both are available -- see
2357  sys_alloc.)  When allocating using MMAP, we don't use any of the
2358  hinting mechanisms (inconsistently) supported in various
2359  implementations of unix mmap, or distinguish reserving from
2360  committing memory. Instead, we just ask for space, and exploit
2361  contiguity when we get it.  It is probably possible to do
2362  better than this on some systems, but no general scheme seems
2363  to be significantly better.
2364
2365  Management entails a simpler variant of the consolidation scheme
2366  used for chunks to reduce fragmentation -- new adjacent memory is
2367  normally prepended or appended to an existing segment. However,
2368  there are limitations compared to chunk consolidation that mostly
2369  reflect the fact that segment processing is relatively infrequent
2370  (occurring only when getting memory from system) and that we
2371  don't expect to have huge numbers of segments:
2372
2373  * Segments are not indexed, so traversal requires linear scans.  (It
2374    would be possible to index these, but is not worth the extra
2375    overhead and complexity for most programs on most platforms.)
2376  * New segments are only appended to old ones when holding top-most
2377    memory; if they cannot be prepended to others, they are held in
2378    different segments.
2379
2380  Except for the top-most segment of an mstate, each segment record
2381  is kept at the tail of its segment. Segments are added by pushing
2382  segment records onto the list headed by &mstate.seg for the
2383  containing mstate.
2384
2385  Segment flags control allocation/merge/deallocation policies:
2386  * If EXTERN_BIT set, then we did not allocate this segment,
2387    and so should not try to deallocate or merge with others.
2388    (This currently holds only for the initial segment passed
2389    into create_mspace_with_base.)
2390  * If IS_MMAPPED_BIT set, the segment may be merged with
2391    other surrounding mmapped segments and trimmed/de-allocated
2392    using munmap.
2393  * If neither bit is set, then the segment was obtained using
2394    MORECORE so can be merged with surrounding MORECORE'd segments
2395    and deallocated/trimmed using MORECORE with negative arguments.
2396*/
2397
2398struct malloc_segment {
2399  char*        base;             /* base address */
2400  size_t       size;             /* allocated size */
2401  struct malloc_segment* next;   /* ptr to next segment */
2402  flag_t       sflags;           /* mmap and extern flag */
2403};
2404
2405#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
2406#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
2407
2408typedef struct malloc_segment  msegment;
2409typedef struct malloc_segment* msegmentptr;
2410
2411/* ---------------------------- malloc_state ----------------------------- */
2412
2413/*
2414   A malloc_state holds all of the bookkeeping for a space.
2415   The main fields are:
2416
2417  Top
2418    The topmost chunk of the currently active segment. Its size is
2419    cached in topsize.  The actual size of topmost space is
2420    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2421    fenceposts and segment records if necessary when getting more
2422    space from the system.  The size at which to autotrim top is
2423    cached from mparams in trim_check, except that it is disabled if
2424    an autotrim fails.
2425
2426  Designated victim (dv)
2427    This is the preferred chunk for servicing small requests that
2428    don't have exact fits.  It is normally the chunk split off most
2429    recently to service another small request.  Its size is cached in
2430    dvsize. The link fields of this chunk are not maintained since it
2431    is not kept in a bin.
2432
2433  SmallBins
2434    An array of bin headers for free chunks.  These bins hold chunks
2435    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2436    chunks of all the same size, spaced 8 bytes apart.  To simplify
2437    use in double-linked lists, each bin header acts as a malloc_chunk
2438    pointing to the real first node, if it exists (else pointing to
2439    itself).  This avoids special-casing for headers.  But to avoid
2440    waste, we allocate only the fd/bk pointers of bins, and then use
2441    repositioning tricks to treat these as the fields of a chunk.
2442
2443  TreeBins
2444    Treebins are pointers to the roots of trees holding a range of
2445    sizes. There are 2 equally spaced treebins for each power of two
2446    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2447    larger.
2448
2449  Bin maps
2450    There is one bit map for small bins ("smallmap") and one for
2451    treebins ("treemap").  Each bin sets its bit when non-empty, and
2452    clears the bit when empty.  Bit operations are then used to avoid
2453    bin-by-bin searching -- nearly all "search" is done without ever
2454    looking at bins that won't be selected.  The bit maps
2455    conservatively use 32 bits per map word, even on 64-bit systems.
2456    For a good description of some of the bit-based techniques used
2457    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2458    supplement at http://hackersdelight.org/). Many of these are
2459    intended to reduce the branchiness of paths through malloc etc, as
2460    well as to reduce the number of memory locations read or written.
2461
2462  Segments
2463    A list of segments headed by an embedded malloc_segment record
2464    representing the initial space.
2465
2466  Address check support
2467    The least_addr field is the least address ever obtained from
2468    MORECORE or MMAP. Attempted frees and reallocs of any address less
2469    than this are trapped (unless INSECURE is defined).
2470
2471  Magic tag
2472    A cross-check field that should always hold the same value as mparams.magic.
2473
2474  Flags
2475    Bits recording whether to use MMAP, locks, or contiguous MORECORE
2476
2477  Statistics
2478    Each space keeps track of current and maximum system memory
2479    obtained via MORECORE or MMAP.
2480
2481  Trim support
2482    Fields holding the amount of unused topmost memory that should trigger
2483    trimming, and a counter to force periodic scanning to release unused
2484    non-topmost segments.
2485
2486  Locking
2487    If USE_LOCKS is defined, the "mutex" lock is acquired and released
2488    around every public call using this mspace.
2489
2490  Extension support
2491    A void* pointer and a size_t field that can be used to help implement
2492    extensions to this malloc.
2493*/
2494
2495/* Bin types, widths and sizes */
2496#define NSMALLBINS        (32U)
2497#define NTREEBINS         (32U)
2498#define SMALLBIN_SHIFT    (3U)
2499#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2500#define TREEBIN_SHIFT     (8U)
2501#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2502#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2503#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
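
/*
  Worked sizes (illustrative; assumes 4-byte size_t, 8-byte alignment, and
  FOOTERS disabled): MIN_LARGE_SIZE == 256, so smallbins serve chunks
  smaller than 256 bytes, and MAX_SMALL_REQUEST == 255 - 7 - 4 == 244;
  larger requests are routed to the treebins.
*/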
2504
2505struct malloc_state {
2506  binmap_t   smallmap;
2507  binmap_t   treemap;
2508  size_t     dvsize;
2509  size_t     topsize;
2510  char*      least_addr;
2511  mchunkptr  dv;
2512  mchunkptr  top;
2513  size_t     trim_check;
2514  size_t     release_checks;
2515  size_t     magic;
2516  mchunkptr  smallbins[(NSMALLBINS+1)*2];
2517  tbinptr    treebins[NTREEBINS];
2518  size_t     footprint;
2519  size_t     max_footprint;
2520  flag_t     mflags;
2521#if USE_LOCKS
2522  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2523#endif /* USE_LOCKS */
2524  msegment   seg;
2525  void*      extp;      /* Unused but available for extensions */
2526  size_t     exts;
2527};
2528
2529typedef struct malloc_state*    mstate;
2530
2531/* ------------- Global malloc_state and malloc_params ------------------- */
2532
2533/*
2534  malloc_params holds global properties, including those that can be
2535  dynamically set using mallopt. There is a single instance, mparams,
2536  initialized in init_mparams. Note that the non-zeroness of "magic"
2537  also serves as an initialization flag.
2538*/
2539
2540struct malloc_params {
2541  volatile size_t magic;
2542  size_t page_size;
2543  size_t granularity;
2544  size_t mmap_threshold;
2545  size_t trim_threshold;
2546  flag_t default_mflags;
2547};
2548
2549static struct malloc_params mparams;
2550
2551/* Ensure mparams initialized */
2552#define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams()))
2553
2554#if !ONLY_MSPACES
2555
2556/* The global malloc_state used for all non-"mspace" calls */
2557static struct malloc_state _gm_;
2558#define gm                 (&_gm_)
2559#define is_global(M)       ((M) == &_gm_)
2560
2561#endif /* !ONLY_MSPACES */
2562
2563#define is_initialized(M)  ((M)->top != 0)
2564
2565/* -------------------------- system alloc setup ------------------------- */
2566
2567/* Operations on mflags */
2568
2569#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2570#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2571#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2572
2573#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2574#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2575#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2576
2577#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2578#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2579
2580#define set_lock(M,L)\
2581 ((M)->mflags = (L)?\
2582  ((M)->mflags | USE_LOCK_BIT) :\
2583  ((M)->mflags & ~USE_LOCK_BIT))
2584
2585/* page-align a size */
2586#define page_align(S)\
2587 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2588
2589/* granularity-align a size */
2590#define granularity_align(S)\
2591  (((S) + (mparams.granularity - SIZE_T_ONE))\
2592   & ~(mparams.granularity - SIZE_T_ONE))
2593
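/*
  Worked example (illustrative only, assuming a 4096-byte page size):
  page_align(5000) == (5000 + 4095) & ~4095 == 8192, i.e. a size is
  rounded up to the next page boundary.  granularity_align behaves the
  same way, with mparams.granularity as the rounding unit.
*/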
2594
2595/* For mmap, use granularity alignment on Windows, else page-align */
2596#ifdef WIN32
2597#define mmap_align(S) granularity_align(S)
2598#else
2599#define mmap_align(S) page_align(S)
2600#endif
2601
2602/* For sys_alloc, enough padding to ensure the request can be malloc'ed on success */
2603#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2604
2605#define is_page_aligned(S)\
2606   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2607#define is_granularity_aligned(S)\
2608   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2609
2610/*  True if segment S holds address A */
2611#define segment_holds(S, A)\
2612  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2613
2614/* Return segment holding given address */
2615static msegmentptr segment_holding(mstate m, char* addr) {
2616  msegmentptr sp = &m->seg;
2617  for (;;) {
2618    if (addr >= sp->base && addr < sp->base + sp->size)
2619      return sp;
2620    if ((sp = sp->next) == 0)
2621      return 0;
2622  }
2623}
2624
2625/* Return true if segment contains a segment link */
2626static int has_segment_link(mstate m, msegmentptr ss) {
2627  msegmentptr sp = &m->seg;
2628  for (;;) {
2629    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2630      return 1;
2631    if ((sp = sp->next) == 0)
2632      return 0;
2633  }
2634}
2635
2636#ifndef MORECORE_CANNOT_TRIM
2637#define should_trim(M,s)  ((s) > (M)->trim_check)
2638#else  /* MORECORE_CANNOT_TRIM */
2639#define should_trim(M,s)  (0)
2640#endif /* MORECORE_CANNOT_TRIM */
2641
2642/*
2643  TOP_FOOT_SIZE is padding at the end of a segment, including space
2644  that may be needed to place segment records and fenceposts when new
2645  noncontiguous segments are added.
2646*/
2647#define TOP_FOOT_SIZE\
2648  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2649
2650
2651/* -------------------------------  Hooks -------------------------------- */
2652
2653/*
2654  PREACTION should be defined to return 0 on success, and nonzero on
2655  failure. If you are not using locking, you can redefine these to do
2656  anything you like.
2657*/
2658
2659#if USE_LOCKS
2660
2661#define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2662#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2663#else /* USE_LOCKS */
2664
2665#ifndef PREACTION
2666#define PREACTION(M) (0)
2667#endif  /* PREACTION */
2668
2669#ifndef POSTACTION
2670#define POSTACTION(M)
2671#endif  /* POSTACTION */
2672
2673#endif /* USE_LOCKS */
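/*
  Illustrative sketch (not a definition in this file): public entry
  points bracket their work in the following pattern, skipping the body
  when PREACTION reports that the lock could not be acquired:

    if (!PREACTION(gm)) {
      ... operate on the malloc_state gm ...
      POSTACTION(gm);
    }
*/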
2674
2675/*
2676  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2677  USAGE_ERROR_ACTION is triggered on detected bad frees and
2678  reallocs. The argument p is an address that might have triggered the
2679  fault. It is ignored by the two predefined actions, but might be
2680  useful in custom actions that try to help diagnose errors.
2681*/
2682
2683#if PROCEED_ON_ERROR
2684
2685/* A count of the number of corruption errors causing resets */
2686int malloc_corruption_error_count;
2687
2688/* default corruption action */
2689static void reset_on_error(mstate m);
2690
2691#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2692#define USAGE_ERROR_ACTION(m, p)
2693
2694#else /* PROCEED_ON_ERROR */
2695
2696#ifndef CORRUPTION_ERROR_ACTION
2697#define CORRUPTION_ERROR_ACTION(m) ABORT
2698#endif /* CORRUPTION_ERROR_ACTION */
2699
2700#ifndef USAGE_ERROR_ACTION
2701#define USAGE_ERROR_ACTION(m,p) ABORT
2702#endif /* USAGE_ERROR_ACTION */
2703
2704#endif /* PROCEED_ON_ERROR */
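/*
  Illustrative sketch: a build could supply its own handlers by defining
  these macros before the defaults above take effect (the handler names
  below, report_heap_corruption and report_bad_free, are hypothetical):

    #define CORRUPTION_ERROR_ACTION(m)  report_heap_corruption(m)
    #define USAGE_ERROR_ACTION(m, p)    report_bad_free((m), (p))
*/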
2705
2706/* -------------------------- Debugging setup ---------------------------- */
2707
2708#if ! DEBUG
2709
2710#define check_free_chunk(M,P)
2711#define check_inuse_chunk(M,P)
2712#define check_malloced_chunk(M,P,N)
2713#define check_mmapped_chunk(M,P)
2714#define check_malloc_state(M)
2715#define check_top_chunk(M,P)
2716
2717#else /* DEBUG */
2718#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2719#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2720#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2721#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2722#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2723#define check_malloc_state(M)       do_check_malloc_state(M)
2724
2725static void   do_check_any_chunk(mstate m, mchunkptr p);
2726static void   do_check_top_chunk(mstate m, mchunkptr p);
2727static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2728static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2729static void   do_check_free_chunk(mstate m, mchunkptr p);
2730static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2731static void   do_check_tree(mstate m, tchunkptr t);
2732static void   do_check_treebin(mstate m, bindex_t i);
2733static void   do_check_smallbin(mstate m, bindex_t i);
2734static void   do_check_malloc_state(mstate m);
2735static int    bin_find(mstate m, mchunkptr x);
2736static size_t traverse_and_check(mstate m);
2737#endif /* DEBUG */
2738
2739/* ---------------------------- Indexing Bins ---------------------------- */
2740
2741#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2742#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
2743#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2744#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2745
2746/* addressing by index. See above about smallbin repositioning */
2747#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2748#define treebin_at(M,i)     (&((M)->treebins[i]))
2749
2750/* assign tree index for size S to variable I. Use x86 asm if possible  */
2751#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2752#define compute_tree_index(S, I)\
2753{\
2754  unsigned int X = S >> TREEBIN_SHIFT;\
2755  if (X == 0)\
2756    I = 0;\
2757  else if (X > 0xFFFF)\
2758    I = NTREEBINS-1;\
2759  else {\
2760    unsigned int K;\
2761    __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "rm"  (X));\
2762    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2763  }\
2764}
2765
2766#elif defined (__INTEL_COMPILER)
2767#define compute_tree_index(S, I)\
2768{\
2769  size_t X = S >> TREEBIN_SHIFT;\
2770  if (X == 0)\
2771    I = 0;\
2772  else if (X > 0xFFFF)\
2773    I = NTREEBINS-1;\
2774  else {\
2775    unsigned int K = _bit_scan_reverse (X); \
2776    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2777  }\
2778}
2779
2780#elif defined(_MSC_VER) && _MSC_VER>=1300
2781#define compute_tree_index(S, I)\
2782{\
2783  size_t X = S >> TREEBIN_SHIFT;\
2784  if (X == 0)\
2785    I = 0;\
2786  else if (X > 0xFFFF)\
2787    I = NTREEBINS-1;\
2788  else {\
2789    unsigned int K;\
2790    _BitScanReverse((DWORD *) &K, X);\
2791    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2792  }\
2793}
2794
2795#else /* GNUC */
2796#define compute_tree_index(S, I)\
2797{\
2798  size_t X = S >> TREEBIN_SHIFT;\
2799  if (X == 0)\
2800    I = 0;\
2801  else if (X > 0xFFFF)\
2802    I = NTREEBINS-1;\
2803  else {\
2804    unsigned int Y = (unsigned int)X;\
2805    unsigned int N = ((Y - 0x100) >> 16) & 8;\
2806    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2807    N += K;\
2808    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2809    K = 14 - N + ((Y <<= K) >> 15);\
2810    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2811  }\
2812}
2813#endif /* GNUC */
2814
2815/* Bit representing maximum resolved size in a treebin at i */
2816#define bit_for_tree_index(i) \
2817   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2818
2819/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2820#define leftshift_for_tree_index(i) \
2821   ((i == NTREEBINS-1)? 0 : \
2822    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2823
2824/* The size of the smallest chunk held in bin with index i */
2825#define minsize_for_tree_index(i) \
2826   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2827   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
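/*
  Worked example (illustrative only): for a chunk size of 768 (0x300),
  X = 768 >> TREEBIN_SHIFT = 3, the highest set bit of X is K = 1, so
  I = (K << 1) + ((768 >> (K + TREEBIN_SHIFT - 1)) & 1) = 2 + 1 = 3.
  Consistently, minsize_for_tree_index(3) == 768 and
  minsize_for_tree_index(4) == 1024, so treebin 3 holds chunks with
  sizes in [768, 1024).
*/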
2828
2829
2830/* ------------------------ Operations on bin maps ----------------------- */
2831
2832/* bit corresponding to given index */
2833#define idx2bit(i)              ((binmap_t)(1) << (i))
2834
2835/* Mark/Clear bits with given index */
2836#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2837#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2838#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2839
2840#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2841#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2842#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2843
2844/* isolate the least set bit of a bitmap */
2845#define least_bit(x)         ((x) & -(x))
2846
2847/* mask with all bits to left of least bit of x on */
2848#define left_bits(x)         ((x<<1) | -(x<<1))
2849
2850/* mask with all bits to left of or equal to least bit of x on */
2851#define same_or_left_bits(x) ((x) | -(x))
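/*
  Illustrative example: with x == 0x28 (bits 3 and 5 set),
    least_bit(x)         == 0x08
    left_bits(x)         == all bits strictly above bit 3 (...FFF0)
    same_or_left_bits(x) == all bits at or above bit 3    (...FFF8)
  left_bits(idx2bit(i)) & treemap is how the allocator locates the next
  non-empty treebin with an index greater than i.
*/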
2852
2853/* index corresponding to given bit. Use x86 asm if possible */
2854
2855#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2856#define compute_bit2idx(X, I)\
2857{\
2858  unsigned int J;\
2859  __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "rm" (X));\
2860  I = (bindex_t)J;\
2861}
2862
2863#elif defined (__INTEL_COMPILER)
2864#define compute_bit2idx(X, I)\
2865{\
2866  unsigned int J;\
2867  J = _bit_scan_forward (X); \
2868  I = (bindex_t)J;\
2869}
2870
2871#elif defined(_MSC_VER) && _MSC_VER>=1300
2872#define compute_bit2idx(X, I)\
2873{\
2874  unsigned int J;\
2875  _BitScanForward((DWORD *) &J, X);\
2876  I = (bindex_t)J;\
2877}
2878
2879#elif USE_BUILTIN_FFS
2880#define compute_bit2idx(X, I) I = ffs(X)-1
2881
2882#else
2883#define compute_bit2idx(X, I)\
2884{\
2885  unsigned int Y = X - 1;\
2886  unsigned int K = Y >> (16-4) & 16;\
2887  unsigned int N = K;        Y >>= K;\
2888  N += K = Y >> (8-3) &  8;  Y >>= K;\
2889  N += K = Y >> (4-2) &  4;  Y >>= K;\
2890  N += K = Y >> (2-1) &  2;  Y >>= K;\
2891  N += K = Y >> (1-0) &  1;  Y >>= K;\
2892  I = (bindex_t)(N + Y);\
2893}
2894#endif /* GNUC */
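/*
  Illustrative example: every variant above maps a single set bit to its
  index, e.g. compute_bit2idx((binmap_t)0x40, I) leaves I == 6.  It is
  the inverse of idx2bit, used to turn a bit isolated with least_bit
  back into a bin index.
*/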
2895
2896
2897/* ----------------------- Runtime Check Support ------------------------- */
2898
2899/*
2900  For security, the main invariant is that malloc/free/etc never
2901  writes to a static address other than malloc_state, unless static
2902  malloc_state itself has been corrupted, which cannot occur via
2903  malloc (because of these checks). In essence this means that we
2904  believe all pointers, sizes, maps etc held in malloc_state, but
2905  check all of those linked or offset from other embedded data
2906  structures.  These checks are interspersed with main code in a way
2907  that tends to minimize their run-time cost.
2908
2909  When FOOTERS is defined, in addition to range checking, we also
2910  verify footer fields of inuse chunks, which can be used to guarantee
2911  that the mstate controlling malloc/free is intact.  This is a
2912  streamlined version of the approach described by William Robertson
2913  et al in "Run-time Detection of Heap-based Overflows" LISA'03
2914  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2915  of an inuse chunk holds the xor of its mstate and a random seed,
2916  that is checked upon calls to free() and realloc().  This is
2917  (probabilistically) unguessable from outside the program, but can be
2918  computed by any code successfully malloc'ing any chunk, so does not
2919  itself provide protection against code that has already broken
2920  security through some other means.  Unlike Robertson et al, we
2921  always dynamically check addresses of all offset chunks (previous,
2922  next, etc). This turns out to be cheaper than relying on hashes.
2923*/
2924
2925#if !INSECURE
2926/* Check if address a is at least as high as any from MORECORE or MMAP */
2927#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2928/* Check if address of next chunk n is higher than base chunk p */
2929#define ok_next(p, n)    ((char*)(p) < (char*)(n))
2930/* Check if p has its cinuse bit on */
2931#define ok_cinuse(p)     cinuse(p)
2932/* Check if p has its pinuse bit on */
2933#define ok_pinuse(p)     pinuse(p)
2934
2935#else /* !INSECURE */
2936#define ok_address(M, a) (1)
2937#define ok_next(b, n)    (1)
2938#define ok_cinuse(p)     (1)
2939#define ok_pinuse(p)     (1)
2940#endif /* !INSECURE */
2941
2942#if (FOOTERS && !INSECURE)
2943/* Check if (alleged) mstate m has expected magic field */
2944#define ok_magic(M)      ((M)->magic == mparams.magic)
2945#else  /* (FOOTERS && !INSECURE) */
2946#define ok_magic(M)      (1)
2947#endif /* (FOOTERS && !INSECURE) */
2948
2949
2950/* In gcc, use __builtin_expect to minimize impact of checks */
2951#if !INSECURE
2952#if defined(__GNUC__) && __GNUC__ >= 3
2953#define RTCHECK(e)  __builtin_expect(e, 1)
2954#else /* GNUC */
2955#define RTCHECK(e)  (e)
2956#endif /* GNUC */
2957#else /* !INSECURE */
2958#define RTCHECK(e)  (1)
2959#endif /* !INSECURE */
2960
2961/* macros to set up inuse chunks with or without footers */
2962
2963#if !FOOTERS
2964
2965#define mark_inuse_foot(M,p,s)
2966
2967/* Set cinuse bit and pinuse bit of next chunk */
2968#define set_inuse(M,p,s)\
2969  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2970  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2971
2972/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2973#define set_inuse_and_pinuse(M,p,s)\
2974  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2975  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2976
2977/* Set size, cinuse and pinuse bit of this chunk */
2978#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2979  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2980
2981#else /* FOOTERS */
2982
2983/* Set foot of inuse chunk to be xor of mstate and seed */
2984#define mark_inuse_foot(M,p,s)\
2985  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2986
2987#define get_mstate_for(p)\
2988  ((mstate)(((mchunkptr)((char*)(p) +\
2989    (chunksize(p))))->prev_foot ^ mparams.magic))
2990
2991#define set_inuse(M,p,s)\
2992  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2993  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2994  mark_inuse_foot(M,p,s))
2995
2996#define set_inuse_and_pinuse(M,p,s)\
2997  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2998  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2999 mark_inuse_foot(M,p,s))
3000
3001#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3002  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3003  mark_inuse_foot(M, p, s))
3004
3005#endif /* !FOOTERS */
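/*
  Illustrative sketch of the FOOTERS check described above (hedged, not
  a verbatim excerpt): because the footer of an inuse chunk p stores
  ((size_t)m ^ mparams.magic), a deallocation path can recover and
  validate the owning mstate from the chunk alone:

    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }

  A chunk whose footer was overwritten, or that belongs to a different
  mstate, fails ok_magic and is rejected instead of being freed.
*/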
3006
3007/* ---------------------------- setting mparams -------------------------- */
3008
3009/* Initialize mparams */
3010static int init_mparams(void) {
3011#ifdef NEED_GLOBAL_LOCK_INIT
3012  if (malloc_global_mutex_status <= 0)
3013    init_malloc_global_mutex();
3014#endif
3015
3016  ACQUIRE_MALLOC_GLOBAL_LOCK();
3017  if (mparams.magic == 0) {
3018    size_t magic;
3019    size_t psize;
3020    size_t gsize;
3021
3022#ifndef WIN32
3023    psize = malloc_getpagesize;
3024    gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3025#else /* WIN32 */
3026    {
3027      SYSTEM_INFO system_info;
3028      GetSystemInfo(&system_info);
3029      psize = system_info.dwPageSize;
3030      gsize = ((DEFAULT_GRANULARITY != 0)?
3031               DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3032    }
3033#endif /* WIN32 */
3034
3035    /* Sanity-check configuration:
3036       size_t must be unsigned and as wide as pointer type.
3037       ints must be at least 4 bytes.
3038       alignment must be at least 8.
3039       Alignment, min chunk size, and page size must all be powers of 2.
3040    */
3041    if ((sizeof(size_t) != sizeof(char*)) ||
3042        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
3043        (sizeof(int) < 4)  ||
3044        (MALLOC_ALIGNMENT < (size_t)8U) ||
3045        ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3046        ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
3047        ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
3048        ((psize            & (psize-SIZE_T_ONE))            != 0))
3049      ABORT;
3050
3051    mparams.granularity = gsize;
3052    mparams.page_size = psize;
3053    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3054    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3055#if MORECORE_CONTIGUOUS
3056    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3057#else  /* MORECORE_CONTIGUOUS */
3058    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3059#endif /* MORECORE_CONTIGUOUS */
3060
3061#if !ONLY_MSPACES
3062    /* Set up lock for main malloc area */
3063    gm->mflags = mparams.default_mflags;
3064    INITIAL_LOCK(&gm->mutex);
3065#endif
3066
3067#if (FOOTERS && !INSECURE)
3068    {
3069#if USE_DEV_RANDOM
3070      int fd;
3071      unsigned char buf[sizeof(size_t)];
3072      /* Try to use /dev/urandom, else fall back on using time */
3073      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3074          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3075        magic = *((size_t *) buf);
3076        close(fd);
3077      }
3078      else
3079#endif /* USE_DEV_RANDOM */
3080#ifdef WIN32
3081        magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3082#else
3083      magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3084#endif
3085      magic |= (size_t)8U;    /* ensure nonzero */
3086      magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
3087    }
3088#else /* (FOOTERS && !INSECURE) */
3089    magic = (size_t)0x58585858U;
3090#endif /* (FOOTERS && !INSECURE) */
3091
3092    mparams.magic = magic;
3093  }
3094
3095  RELEASE_MALLOC_GLOBAL_LOCK();
3096  return 1;
3097}
3098
3099/* support for mallopt */
3100static int change_mparam(int param_number, int value) {
3101  size_t val = (value == -1)? MAX_SIZE_T : (size_t)value;
3102  ensure_initialization();
3103  switch(param_number) {
3104  case M_TRIM_THRESHOLD:
3105    mparams.trim_threshold = val;
3106    return 1;
3107  case M_GRANULARITY:
3108    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3109      mparams.granularity = val;
3110      return 1;
3111    }
3112    else
3113      return 0;
3114  case M_MMAP_THRESHOLD:
3115    mparams.mmap_threshold = val;
3116    return 1;
3117  default:
3118    return 0;
3119  }
3120}
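/*
  Illustrative usage (hedged sketch): the public mallopt entry point
  relays to change_mparam, so for example
    change_mparam(M_GRANULARITY, 128 * 1024)
  succeeds only if the value is a power of two no smaller than the page
  size, while
    change_mparam(M_TRIM_THRESHOLD, -1)
  maps -1 to MAX_SIZE_T, effectively disabling automatic trimming.
  Unrecognized param_number values return 0.
*/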
3121
3122#if DEBUG
3123/* ------------------------- Debugging Support --------------------------- */
3124
3125/* Check properties of any chunk, whether free, inuse, mmapped etc  */
3126static void do_check_any_chunk(mstate m, mchunkptr p) {
3127  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3128  assert(ok_address(m, p));
3129}
3130
3131/* Check properties of top chunk */
3132static void do_check_top_chunk(mstate m, mchunkptr p) {
3133  msegmentptr sp = segment_holding(m, (char*)p);
3134  size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3135  assert(sp != 0);
3136  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3137  assert(ok_address(m, p));
3138  assert(sz == m->topsize);
3139  assert(sz > 0);
3140  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3141  assert(pinuse(p));
3142  assert(!pinuse(chunk_plus_offset(p, sz)));
3143}
3144
3145/* Check properties of (inuse) mmapped chunks */
3146static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3147  size_t  sz = chunksize(p);
3148  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
3149  assert(is_mmapped(p));
3150  assert(use_mmap(m));
3151  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3152  assert(ok_address(m, p));
3153  assert(!is_small(sz));
3154  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3155  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3156  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3157}
3158
3159/* Check properties of inuse chunks */
3160static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3161  do_check_any_chunk(m, p);
3162  assert(cinuse(p));
3163  assert(next_pinuse(p));
3164  /* If not pinuse and not mmapped, previous chunk has OK offset */
3165  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3166  if (is_mmapped(p))
3167    do_check_mmapped_chunk(m, p);
3168}
3169
3170/* Check properties of free chunks */
3171static void do_check_free_chunk(mstate m, mchunkptr p) {
3172  size_t sz = chunksize(p);
3173  mchunkptr next = chunk_plus_offset(p, sz);
3174  do_check_any_chunk(m, p);
3175  assert(!cinuse(p));
3176  assert(!next_pinuse(p));
3177  assert (!is_mmapped(p));
3178  if (p != m->dv && p != m->top) {
3179    if (sz >= MIN_CHUNK_SIZE) {
3180      assert((sz & CHUNK_ALIGN_MASK) == 0);
3181      assert(is_aligned(chunk2mem(p)));
3182      assert(next->prev_foot == sz);
3183      assert(pinuse(p));
3184      assert (next == m->top || cinuse(next));
3185      assert(p->fd->bk == p);
3186      assert(p->bk->fd == p);
3187    }
3188    else  /* markers are always of size SIZE_T_SIZE */
3189      assert(sz == SIZE_T_SIZE);
3190  }
3191}
3192
3193/* Check properties of malloced chunks at the point they are malloced */
3194static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3195  if (mem != 0) {
3196    mchunkptr p = mem2chunk(mem);
3197    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
3198    do_check_inuse_chunk(m, p);
3199    assert((sz & CHUNK_ALIGN_MASK) == 0);
3200    assert(sz >= MIN_CHUNK_SIZE);
3201    assert(sz >= s);
3202    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3203    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3204  }
3205}
3206
3207/* Check a tree and its subtrees.  */
3208static void do_check_tree(mstate m, tchunkptr t) {
3209  tchunkptr head = 0;
3210  tchunkptr u = t;
3211  bindex_t tindex = t->index;
3212  size_t tsize = chunksize(t);
3213  bindex_t idx;
3214  compute_tree_index(tsize, idx);
3215  assert(tindex == idx);
3216  assert(tsize >= MIN_LARGE_SIZE);
3217  assert(tsize >= minsize_for_tree_index(idx));
3218  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3219
3220  do { /* traverse through chain of same-sized nodes */
3221    do_check_any_chunk(m, ((mchunkptr)u));
3222    assert(u->index == tindex);
3223    assert(chunksize(u) == tsize);
3224    assert(!cinuse(u));
3225    assert(!next_pinuse(u));
3226    assert(u->fd->bk == u);
3227    assert(u->bk->fd == u);
3228    if (u->parent == 0) {
3229      assert(u->child[0] == 0);
3230      assert(u->child[1] == 0);
3231    }
3232    else {
3233      assert(head == 0); /* only one node on chain has parent */
3234      head = u;
3235      assert(u->parent != u);
3236      assert (u->parent->child[0] == u ||
3237              u->parent->child[1] == u ||
3238              *((tbinptr*)(u->parent)) == u);
3239      if (u->child[0] != 0) {
3240        assert(u->child[0]->parent == u);
3241        assert(u->child[0] != u);
3242        do_check_tree(m, u->child[0]);
3243      }
3244      if (u->child[1] != 0) {
3245        assert(u->child[1]->parent == u);
3246        assert(u->child[1] != u);
3247        do_check_tree(m, u->child[1]);
3248      }
3249      if (u->child[0] != 0 && u->child[1] != 0) {
3250        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3251      }
3252    }
3253    u = u->fd;
3254  } while (u != t);
3255  assert(head != 0);
3256}
3257
3258/*  Check all the chunks in a treebin.  */
3259static void do_check_treebin(mstate m, bindex_t i) {
3260  tbinptr* tb = treebin_at(m, i);
3261  tchunkptr t = *tb;
3262  int empty = (m->treemap & (1U << i)) == 0;
3263  if (t == 0)
3264    assert(empty);
3265  if (!empty)
3266    do_check_tree(m, t);
3267}
3268
3269/*  Check all the chunks in a smallbin.  */
3270static void do_check_smallbin(mstate m, bindex_t i) {
3271  sbinptr b = smallbin_at(m, i);
3272  mchunkptr p = b->bk;
3273  unsigned int empty = (m->smallmap & (1U << i)) == 0;
3274  if (p == b)
3275    assert(empty);
3276  if (!empty) {
3277    for (; p != b; p = p->bk) {
3278      size_t size = chunksize(p);
3279      mchunkptr q;
3280      /* each chunk claims to be free */
3281      do_check_free_chunk(m, p);
3282      /* chunk belongs in bin */
3283      assert(small_index(size) == i);
3284      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3285      /* chunk is followed by an inuse chunk */
3286      q = next_chunk(p);
3287      if (q->head != FENCEPOST_HEAD)
3288        do_check_inuse_chunk(m, q);
3289    }
3290  }
3291}
3292
3293/* Find x in a bin. Used in other check functions. */
3294static int bin_find(mstate m, mchunkptr x) {
3295  size_t size = chunksize(x);
3296  if (is_small(size)) {
3297    bindex_t sidx = small_index(size);
3298    sbinptr b = smallbin_at(m, sidx);
3299    if (smallmap_is_marked(m, sidx)) {
3300      mchunkptr p = b;
3301      do {
3302        if (p == x)
3303          return 1;
3304      } while ((p = p->fd) != b);
3305    }
3306  }
3307  else {
3308    bindex_t tidx;
3309    compute_tree_index(size, tidx);
3310    if (treemap_is_marked(m, tidx)) {
3311      tchunkptr t = *treebin_at(m, tidx);
3312      size_t sizebits = size << leftshift_for_tree_index(tidx);
3313      while (t != 0 && chunksize(t) != size) {
3314        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3315        sizebits <<= 1;
3316      }
3317      if (t != 0) {
3318        tchunkptr u = t;
3319        do {
3320          if (u == (tchunkptr)x)
3321            return 1;
3322        } while ((u = u->fd) != t);
3323      }
3324    }
3325  }
3326  return 0;
3327}
3328
3329/* Traverse each chunk and check it; return total */
3330static size_t traverse_and_check(mstate m) {
3331  size_t sum = 0;
3332  if (is_initialized(m)) {
3333    msegmentptr s = &m->seg;
3334    sum += m->topsize + TOP_FOOT_SIZE;
3335    while (s != 0) {
3336      mchunkptr q = align_as_chunk(s->base);
3337      mchunkptr lastq = 0;
3338      assert(pinuse(q));
3339      while (segment_holds(s, q) &&
3340             q != m->top && q->head != FENCEPOST_HEAD) {
3341        sum += chunksize(q);
3342        if (cinuse(q)) {
3343          assert(!bin_find(m, q));
3344          do_check_inuse_chunk(m, q);
3345        }
3346        else {
3347          assert(q == m->dv || bin_find(m, q));
3348          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
3349          do_check_free_chunk(m, q);
3350        }
3351        lastq = q;
3352        q = next_chunk(q);
3353      }
3354      s = s->next;
3355    }
3356  }
3357  return sum;
3358}
3359
3360/* Check all properties of malloc_state. */
3361static void do_check_malloc_state(mstate m) {
3362  bindex_t i;
3363  size_t total;
3364  /* check bins */
3365  for (i = 0; i < NSMALLBINS; ++i)
3366    do_check_smallbin(m, i);
3367  for (i = 0; i < NTREEBINS; ++i)
3368    do_check_treebin(m, i);
3369
3370  if (m->dvsize != 0) { /* check dv chunk */
3371    do_check_any_chunk(m, m->dv);
3372    assert(m->dvsize == chunksize(m->dv));
3373    assert(m->dvsize >= MIN_CHUNK_SIZE);
3374    assert(bin_find(m, m->dv) == 0);
3375  }
3376
3377  if (m->top != 0) {   /* check top chunk */
3378    do_check_top_chunk(m, m->top);
3379    /*assert(m->topsize == chunksize(m->top)); redundant */
3380    assert(m->topsize > 0);
3381    assert(bin_find(m, m->top) == 0);
3382  }
3383
3384  total = traverse_and_check(m);
3385  assert(total <= m->footprint);
3386  assert(m->footprint <= m->max_footprint);
3387}
3388#endif /* DEBUG */
3389
3390/* ----------------------------- statistics ------------------------------ */
3391
3392#if !NO_MALLINFO
3393static struct mallinfo internal_mallinfo(mstate m) {
3394  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3395  ensure_initialization();
3396  if (!PREACTION(m)) {
3397    check_malloc_state(m);
3398    if (is_initialized(m)) {
3399      size_t nfree = SIZE_T_ONE; /* top always free */
3400      size_t mfree = m->topsize + TOP_FOOT_SIZE;
3401      size_t sum = mfree;
3402      msegmentptr s = &m->seg;
3403      while (s != 0) {
3404        mchunkptr q = align_as_chunk(s->base);
3405        while (segment_holds(s, q) &&
3406               q != m->top && q->head != FENCEPOST_HEAD) {
3407          size_t sz = chunksize(q);
3408          sum += sz;
3409          if (!cinuse(q)) {
3410            mfree += sz;
3411            ++nfree;
3412          }
3413          q = next_chunk(q);
3414        }
3415        s = s->next;
3416      }
3417
3418      nm.arena    = sum;
3419      nm.ordblks  = nfree;
3420      nm.hblkhd   = m->footprint - sum;
3421      nm.usmblks  = m->max_footprint;
3422      nm.uordblks = m->footprint - mfree;
3423      nm.fordblks = mfree;
3424      nm.keepcost = m->topsize;
3425    }
3426
3427    POSTACTION(m);
3428  }
3429  return nm;
3430}
3431#endif /* !NO_MALLINFO */
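/*
  Reading the fields computed above: fordblks is the total free space
  (including top), uordblks is footprint minus that free space, so
  uordblks + fordblks == footprint; hblkhd is the part of the footprint
  not reached by the traversal (footprint - arena); keepcost is just the
  releasable top size.
*/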
3432
3433static void internal_malloc_stats(mstate m) {
3434  ensure_initialization();
3435  if (!PREACTION(m)) {
3436    size_t maxfp = 0;
3437    size_t fp = 0;
3438    size_t used = 0;
3439    check_malloc_state(m);
3440    if (is_initialized(m)) {
3441      msegmentptr s = &m->seg;
3442      maxfp = m->max_footprint;
3443      fp = m->footprint;
3444      used = fp - (m->topsize + TOP_FOOT_SIZE);
3445
3446      while (s != 0) {
3447        mchunkptr q = align_as_chunk(s->base);
3448        while (segment_holds(s, q) &&
3449               q != m->top && q->head != FENCEPOST_HEAD) {
3450          if (!cinuse(q))
3451            used -= chunksize(q);
3452          q = next_chunk(q);
3453        }
3454        s = s->next;
3455      }
3456    }
3457
3458    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3459    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
3460    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
3461
3462    POSTACTION(m);
3463  }
3464}
3465
3466/* ----------------------- Operations on smallbins ----------------------- */
3467
3468/*
3469  Various forms of linking and unlinking are defined as macros.  Even
3470  the ones for trees, which are very long but have very short typical
3471  paths.  This is ugly but reduces reliance on inlining support of
3472  compilers.
3473*/
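/*
  Each smallbin is a circular doubly-linked list whose header acts as a
  sentinel, so for any free chunk P in a bin the invariants
  P->fd->bk == P and P->bk->fd == P hold (they are asserted in
  do_check_free_chunk).  insert_small_chunk below pushes P onto the
  front of its bin; unlink_small_chunk splices it back out.
*/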
3474
3475/* Link a free chunk into a smallbin  */
3476#define insert_small_chunk(M, P, S) {\
3477  bindex_t I  = small_index(S);\
3478  mchunkptr B = smallbin_at(M, I);\
3479  mchunkptr F = B;\
3480  assert(S >= MIN_CHUNK_SIZE);\
3481  if (!smallmap_is_marked(M, I))\
3482    mark_smallmap(M, I);\
3483  else if (RTCHECK(ok_address(M, B->fd)))\
3484    F = B->fd;\
3485  else {\
3486    CORRUPTION_ERROR_ACTION(M);\
3487  }\
3488  B->fd = P;\
3489  F->bk = P;\
3490  P->fd = F;\
3491  P->bk = B;\
3492}
3493
3494/* Unlink a chunk from a smallbin  */
3495#define unlink_small_chunk(M, P, S) {\
3496  mchunkptr F = P->fd;\
3497  mchunkptr B = P->bk;\
3498  bindex_t I = small_index(S);\
3499  assert(P != B);\
3500  assert(P != F);\
3501  assert(chunksize(P) == small_index2size(I));\
3502  if (F == B)\
3503    clear_smallmap(M, I);\
3504  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3505                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3506    F->bk = B;\
3507    B->fd = F;\
3508  }\
3509  else {\
3510    CORRUPTION_ERROR_ACTION(M);\
3511  }\
3512}
3513
3514/* Unlink the first chunk from a smallbin */
3515#define unlink_first_small_chunk(M, B, P, I) {\
3516  mchunkptr F = P->fd;\
3517  assert(P != B);\
3518  assert(P != F);\
3519  assert(chunksize(P) == small_index2size(I));\
3520  if (B == F)\
3521    clear_smallmap(M, I);\
3522  else if (RTCHECK(ok_address(M, F))) {\
3523    B->fd = F;\
3524    F->bk = B;\
3525  }\
3526  else {\
3527    CORRUPTION_ERROR_ACTION(M);\
3528  }\
3529}
3530
3531
3532
3533/* Replace dv node, binning the old one */
3534/* Used only when dvsize known to be small */
3535#define replace_dv(M, P, S) {\
3536  size_t DVS = M->dvsize;\
3537  if (DVS != 0) {\
3538    mchunkptr DV = M->dv;\
3539    assert(is_small(DVS));\
3540    insert_small_chunk(M, DV, DVS);\
3541  }\
3542  M->dvsize = S;\
3543  M->dv = P;\
3544}
3545
3546/* ------------------------- Operations on trees ------------------------- */
3547
3548/* Insert chunk into tree */
3549#define insert_large_chunk(M, X, S) {\
3550  tbinptr* H;\
3551  bindex_t I;\
3552  compute_tree_index(S, I);\
3553  H = treebin_at(M, I);\
3554  X->index = I;\
3555  X->child[0] = X->child[1] = 0;\
3556  if (!treemap_is_marked(M, I)) {\
3557    mark_treemap(M, I);\
3558    *H = X;\
3559    X->parent = (tchunkptr)H;\
3560    X->fd = X->bk = X;\
3561  }\
3562  else {\
3563    tchunkptr T = *H;\
3564    size_t K = S << leftshift_for_tree_index(I);\
3565    for (;;) {\
3566      if (chunksize(T) != S) {\
3567        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3568        K <<= 1;\
3569        if (*C != 0)\
3570          T = *C;\
3571        else if (RTCHECK(ok_address(M, C))) {\
3572          *C = X;\
3573          X->parent = T;\
3574          X->fd = X->bk = X;\
3575          break;\
3576        }\
3577        else {\
3578          CORRUPTION_ERROR_ACTION(M);\
3579          break;\
3580        }\
3581      }\
3582      else {\
3583        tchunkptr F = T->fd;\
3584        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3585          T->fd = F->bk = X;\
3586          X->fd = F;\
3587          X->bk = T;\
3588          X->parent = 0;\
3589          break;\
3590        }\
3591        else {\
3592          CORRUPTION_ERROR_ACTION(M);\
3593          break;\
3594        }\
3595      }\
3596    }\
3597  }\
3598}
3599
3600/*
3601  Unlink steps:
3602
3603  1. If x is a chained node, unlink it from its same-sized fd/bk links
3604     and choose its bk node as its replacement.
3605  2. If x was the last node of its size, but not a leaf node, it must
3606     be replaced with a leaf node (not merely one with an open left or
3607     right), to make sure that lefts and rights of descendants
3608     correspond properly to bit masks.  We use the rightmost descendant
3609     of x.  We could use any other leaf, but this is easy to locate and
3610     tends to counteract removal of leftmosts elsewhere, and so keeps
3611     paths shorter than minimally guaranteed.  This doesn't loop much
3612     because on average a node in a tree is near the bottom.
3613  3. If x is the base of a chain (i.e., has parent links) relink
3614     x's parent and children to x's replacement (or null if none).
3615*/
3616
3617#define unlink_large_chunk(M, X) {\
3618  tchunkptr XP = X->parent;\
3619  tchunkptr R;\
3620  if (X->bk != X) {\
3621    tchunkptr F = X->fd;\
3622    R = X->bk;\
3623    if (RTCHECK(ok_address(M, F))) {\
3624      F->bk = R;\
3625      R->fd = F;\
3626    }\
3627    else {\
3628      CORRUPTION_ERROR_ACTION(M);\
3629    }\
3630  }\
3631  else {\
3632    tchunkptr* RP;\
3633    if (((R = *(RP = &(X->child[1]))) != 0) ||\
3634        ((R = *(RP = &(X->child[0]))) != 0)) {\
3635      tchunkptr* CP;\
3636      while ((*(CP = &(R->child[1])) != 0) ||\
3637             (*(CP = &(R->child[0])) != 0)) {\
3638        R = *(RP = CP);\
3639      }\
3640      if (RTCHECK(ok_address(M, RP)))\
3641        *RP = 0;\
3642      else {\
3643        CORRUPTION_ERROR_ACTION(M);\
3644      }\
3645    }\
3646  }\
3647  if (XP != 0) {\
3648    tbinptr* H = treebin_at(M, X->index);\
3649    if (X == *H) {\
3650      if ((*H = R) == 0) \
3651        clear_treemap(M, X->index);\
3652    }\
3653    else if (RTCHECK(ok_address(M, XP))) {\
3654      if (XP->child[0] == X) \
3655        XP->child[0] = R;\
3656      else \
3657        XP->child[1] = R;\
3658    }\
3659    else\
3660      CORRUPTION_ERROR_ACTION(M);\
3661    if (R != 0) {\
3662      if (RTCHECK(ok_address(M, R))) {\
3663        tchunkptr C0, C1;\
3664        R->parent = XP;\
3665        if ((C0 = X->child[0]) != 0) {\
3666          if (RTCHECK(ok_address(M, C0))) {\
3667            R->child[0] = C0;\
3668            C0->parent = R;\
3669          }\
3670          else\
3671            CORRUPTION_ERROR_ACTION(M);\
3672        }\
3673        if ((C1 = X->child[1]) != 0) {\
3674          if (RTCHECK(ok_address(M, C1))) {\
3675            R->child[1] = C1;\
3676            C1->parent = R;\
3677          }\
3678          else\
3679            CORRUPTION_ERROR_ACTION(M);\
3680        }\
3681      }\
3682      else\
3683        CORRUPTION_ERROR_ACTION(M);\
3684    }\
3685  }\
3686}
3687
3688/* Relays to large vs small bin operations */
3689
3690#define insert_chunk(M, P, S)\
3691  if (is_small(S)) insert_small_chunk(M, P, S)\
3692  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3693
3694#define unlink_chunk(M, P, S)\
3695  if (is_small(S)) unlink_small_chunk(M, P, S)\
3696  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3697
3698
3699/* Relays to internal calls to malloc/free from realloc, memalign etc */
3700
3701#if ONLY_MSPACES
3702#define internal_malloc(m, b) mspace_malloc(m, b)
3703#define internal_free(m, mem) mspace_free(m,mem);
3704#else /* ONLY_MSPACES */
3705#if MSPACES
3706#define internal_malloc(m, b)\
3707   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3708#define internal_free(m, mem)\
3709   if (m == gm) dlfree(mem); else mspace_free(m,mem);
3710#else /* MSPACES */
3711#define internal_malloc(m, b) dlmalloc(b)
3712#define internal_free(m, mem) dlfree(mem)
3713#endif /* MSPACES */
3714#endif /* ONLY_MSPACES */
3715
3716/* -----------------------  Direct-mmapping chunks ----------------------- */
3717
3718/*
3719  Directly mmapped chunks are set up with an offset to the start of
3720  the mmapped region stored in the prev_foot field of the chunk. This
3721  allows reconstruction of the required argument to MUNMAP when freed,
3722  and also allows adjustment of the returned chunk to meet alignment
3723  requirements (especially in memalign).  There is also enough space
3724  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3725  the PINUSE bit so frees can be checked.
3726*/
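/*
  Illustrative sketch (hedged, not a verbatim excerpt): because the
  offset to the start of the mapping is kept in prev_foot, a later free
  can reconstruct the exact MUNMAP arguments from the chunk alone:

    size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
    char*  base   = (char*)p - offset;
    size_t length = chunksize(p) + offset + MMAP_FOOT_PAD;
    CALL_MUNMAP(base, length);

  do_check_mmapped_chunk above verifies that this length is a multiple
  of the page size.
*/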
3727
3728/* Malloc using mmap */
3729static void* mmap_alloc(mstate m, size_t nb) {
3730  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3731  if (mmsize > nb) {     /* Check for wrap around 0 */
3732    char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3733    if (mm != CMFAIL) {
3734      size_t offset = align_offset(chunk2mem(mm));
3735      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3736      mchunkptr p = (mchunkptr)(mm + offset);
3737      p->prev_foot = offset | IS_MMAPPED_BIT;
3738      (p)->head = (psize|CINUSE_BIT);
3739      mark_inuse_foot(m, p, psize);
3740      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3741      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3742
3743      if (mm < m->least_addr)
3744        m->least_addr = mm;
3745      if ((m->footprint += mmsize) > m->max_footprint)
3746        m->max_footprint = m->footprint;
3747      assert(is_aligned(chunk2mem(p)));
3748      check_mmapped_chunk(m, p);
3749      return chunk2mem(p);
3750    }
3751  }
3752  return 0;
3753}
3754
3755/* Realloc using mmap */
3756static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3757  size_t oldsize = chunksize(oldp);
3758  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3759    return 0;
3760  /* Keep old chunk if big enough but not too big */
3761  if (oldsize >= nb + SIZE_T_SIZE &&
3762      (oldsize - nb) <= (mparams.granularity << 1))
3763    return oldp;
3764  else {
3765    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3766    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3767    size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3768    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3769                                  oldmmsize, newmmsize, 1);
3770    if (cp != CMFAIL) {
3771      mchunkptr newp = (mchunkptr)(cp + offset);
3772      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3773      newp->head = (psize|CINUSE_BIT);
3774      mark_inuse_foot(m, newp, psize);
3775      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3776      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3777
3778      if (cp < m->least_addr)
3779        m->least_addr = cp;
3780      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3781        m->max_footprint = m->footprint;
3782      check_mmapped_chunk(m, newp);
3783      return newp;
3784    }
3785  }
3786  return 0;
3787}
3788
3789/* -------------------------- mspace management -------------------------- */
3790
3791/* Initialize top chunk and its size */
3792static void init_top(mstate m, mchunkptr p, size_t psize) {
3793  /* Ensure alignment */
3794  size_t offset = align_offset(chunk2mem(p));
3795  p = (mchunkptr)((char*)p + offset);
3796  psize -= offset;
3797
3798  m->top = p;
3799  m->topsize = psize;
3800  p->head = psize | PINUSE_BIT;
3801  /* set size of fake trailing chunk holding overhead space only once */
3802  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3803  m->trim_check = mparams.trim_threshold; /* reset on each update */
3804}
3805
3806/* Initialize bins for a new mstate that is otherwise zeroed out */
3807static void init_bins(mstate m) {
3808  /* Establish circular links for smallbins */
3809  bindex_t i;
3810  for (i = 0; i < NSMALLBINS; ++i) {
3811    sbinptr bin = smallbin_at(m,i);
3812    bin->fd = bin->bk = bin;
3813  }
3814}
3815
3816#if PROCEED_ON_ERROR
3817
3818/* default corruption action */
3819static void reset_on_error(mstate m) {
3820  int i;
3821  ++malloc_corruption_error_count;
3822  /* Reinitialize fields to forget about all memory */
3823  m->smallbins = m->treebins = 0;
3824  m->dvsize = m->topsize = 0;
3825  m->seg.base = 0;
3826  m->seg.size = 0;
3827  m->seg.next = 0;
3828  m->top = m->dv = 0;
3829  for (i = 0; i < NTREEBINS; ++i)
3830    *treebin_at(m, i) = 0;
3831  init_bins(m);
3832}
3833#endif /* PROCEED_ON_ERROR */
3834
3835/* Allocate a chunk at newbase, prepending the remainder to the first chunk of the old (successor) base. */
3836static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3837                           size_t nb) {
3838  mchunkptr p = align_as_chunk(newbase);
3839  mchunkptr oldfirst = align_as_chunk(oldbase);
3840  size_t psize = (char*)oldfirst - (char*)p;
3841  mchunkptr q = chunk_plus_offset(p, nb);
3842  size_t qsize = psize - nb;
3843  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3844
3845  assert((char*)oldfirst > (char*)q);
3846  assert(pinuse(oldfirst));
3847  assert(qsize >= MIN_CHUNK_SIZE);
3848
3849  /* consolidate remainder with first chunk of old base */
3850  if (oldfirst == m->top) {
3851    size_t tsize = m->topsize += qsize;
3852    m->top = q;
3853    q->head = tsize | PINUSE_BIT;
3854    check_top_chunk(m, q);
3855  }
3856  else if (oldfirst == m->dv) {
3857    size_t dsize = m->dvsize += qsize;
3858    m->dv = q;
3859    set_size_and_pinuse_of_free_chunk(q, dsize);
3860  }
3861  else {
3862    if (!cinuse(oldfirst)) {
3863      size_t nsize = chunksize(oldfirst);
3864      unlink_chunk(m, oldfirst, nsize);
3865      oldfirst = chunk_plus_offset(oldfirst, nsize);
3866      qsize += nsize;
3867    }
3868    set_free_with_pinuse(q, qsize, oldfirst);
3869    insert_chunk(m, q, qsize);
3870    check_free_chunk(m, q);
3871  }
3872
3873  check_malloced_chunk(m, chunk2mem(p), nb);
3874  return chunk2mem(p);
3875}
3876
3877/* Add a segment to hold a new noncontiguous region */
3878static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3879  /* Determine locations and sizes of segment, fenceposts, old top */
3880  char* old_top = (char*)m->top;
3881  msegmentptr oldsp = segment_holding(m, old_top);
3882  char* old_end = oldsp->base + oldsp->size;
3883  size_t ssize = pad_request(sizeof(struct malloc_segment));
3884  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3885  size_t offset = align_offset(chunk2mem(rawsp));
3886  char* asp = rawsp + offset;
3887  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3888  mchunkptr sp = (mchunkptr)csp;
3889  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3890  mchunkptr tnext = chunk_plus_offset(sp, ssize);
3891  mchunkptr p = tnext;
3892  int nfences = 0;
3893
3894  /* reset top to new space */
3895  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3896
3897  /* Set up segment record */
3898  assert(is_aligned(ss));
3899  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3900  *ss = m->seg; /* Push current record */
3901  m->seg.base = tbase;
3902  m->seg.size = tsize;
3903  m->seg.sflags = mmapped;
3904  m->seg.next = ss;
3905
3906  /* Insert trailing fenceposts */
3907  for (;;) {
3908    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3909    p->head = FENCEPOST_HEAD;
3910    ++nfences;
3911    if ((char*)(&(nextp->head)) < old_end)
3912      p = nextp;
3913    else
3914      break;
3915  }
3916  assert(nfences >= 2);
3917
3918  /* Insert the rest of old top into a bin as an ordinary free chunk */
3919  if (csp != old_top) {
3920    mchunkptr q = (mchunkptr)old_top;
3921    size_t psize = csp - old_top;
3922    mchunkptr tn = chunk_plus_offset(q, psize);
3923    set_free_with_pinuse(q, psize, tn);
3924    insert_chunk(m, q, psize);
3925  }
3926
3927  check_top_chunk(m, m->top);
3928}
3929
3930/* -------------------------- System allocation -------------------------- */
3931
3932/* Get memory from system using MORECORE or MMAP */
3933static void* sys_alloc(mstate m, size_t nb) {
3934  char* tbase = CMFAIL;
3935  size_t tsize = 0;
3936  flag_t mmap_flag = 0;
3937
3938  ensure_initialization();
3939
3940  /* Directly map large chunks */
3941  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3942    void* mem = mmap_alloc(m, nb);
3943    if (mem != 0)
3944      return mem;
3945  }
3946
3947  /*
3948    Try getting memory in any of three ways (in most-preferred to
3949    least-preferred order):
3950    1. A call to MORECORE that can normally contiguously extend memory.
3951       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3952       main space is mmapped or a previous contiguous call failed)
3953    2. A call to MMAP new space (disabled if not HAVE_MMAP).
3954       Note that under the default settings, if MORECORE is unable to
3955       fulfill a request, and HAVE_MMAP is true, then mmap is
3956       used as a noncontiguous system allocator. This is a useful backup
3957       strategy for systems with holes in address spaces -- in this case
3958       sbrk cannot contiguously expand the heap, but mmap may be able to
3959       find space.
3960    3. A call to MORECORE that cannot usually contiguously extend memory.
3961       (disabled if not HAVE_MORECORE)
3962
3963   In all cases, we need to request enough bytes from system to ensure
3964   we can malloc nb bytes upon success, so pad with enough space for
3965   top_foot, plus alignment-pad to make sure we don't lose bytes if
3966   not on boundary, and round this up to a granularity unit.
3967  */
3968
3969  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3970    char* br = CMFAIL;
3971    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3972    size_t asize = 0;
3973    ACQUIRE_MALLOC_GLOBAL_LOCK();
3974
3975    if (ss == 0) {  /* First time through or recovery */
3976      char* base = (char*)CALL_MORECORE(0);
3977      if (base != CMFAIL) {
3978        asize = granularity_align(nb + SYS_ALLOC_PADDING);
3979        /* Adjust to end on a page boundary */
3980        if (!is_page_aligned(base))
3981          asize += (page_align((size_t)base) - (size_t)base);
3982        /* Can't call MORECORE if size is negative when treated as signed */
3983        if (asize < HALF_MAX_SIZE_T &&
3984            (br = (char*)(CALL_MORECORE(asize))) == base) {
3985          tbase = base;
3986          tsize = asize;
3987        }
3988      }
3989    }
3990    else {
3991      /* Subtract out existing available top space from MORECORE request. */
3992      asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
3993      /* Use mem here only if it did contiguously extend old space */
3994      if (asize < HALF_MAX_SIZE_T &&
3995          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3996        tbase = br;
3997        tsize = asize;
3998      }
3999    }
4000
4001    if (tbase == CMFAIL) {    /* Cope with partial failure */
4002      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
4003        if (asize < HALF_MAX_SIZE_T &&
4004            asize < nb + SYS_ALLOC_PADDING) {
4005          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
4006          if (esize < HALF_MAX_SIZE_T) {
4007            char* end = (char*)CALL_MORECORE(esize);
4008            if (end != CMFAIL)
4009              asize += esize;
4010            else {            /* Can't use; try to release */
4011              (void) CALL_MORECORE(-asize);
4012              br = CMFAIL;
4013            }
4014          }
4015        }
4016      }
4017      if (br != CMFAIL) {    /* Use the space we did get */
4018        tbase = br;
4019        tsize = asize;
4020      }
4021      else
4022        disable_contiguous(m); /* Don't try contiguous path in the future */
4023    }
4024
4025    RELEASE_MALLOC_GLOBAL_LOCK();
4026  }
4027
4028  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
4029    size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
4030    if (rsize > nb) { /* Fail if wraps around zero */
4031      char* mp = (char*)(CALL_MMAP(rsize));
4032      if (mp != CMFAIL) {
4033        tbase = mp;
4034        tsize = rsize;
4035        mmap_flag = IS_MMAPPED_BIT;
4036      }
4037    }
4038  }
4039
4040  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4041    size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);
4042    if (asize < HALF_MAX_SIZE_T) {
4043      char* br = CMFAIL;
4044      char* end = CMFAIL;
4045      ACQUIRE_MALLOC_GLOBAL_LOCK();
4046      br = (char*)(CALL_MORECORE(asize));
4047      end = (char*)(CALL_MORECORE(0));
4048      RELEASE_MALLOC_GLOBAL_LOCK();
4049      if (br != CMFAIL && end != CMFAIL && br < end) {
4050        size_t ssize = end - br;
4051        if (ssize > nb + TOP_FOOT_SIZE) {
4052          tbase = br;
4053          tsize = ssize;
4054        }
4055      }
4056    }
4057  }
4058
4059  if (tbase != CMFAIL) {
4060
4061    if ((m->footprint += tsize) > m->max_footprint)
4062      m->max_footprint = m->footprint;
4063
4064    if (!is_initialized(m)) { /* first-time initialization */
4065      m->seg.base = m->least_addr = tbase;
4066      m->seg.size = tsize;
4067      m->seg.sflags = mmap_flag;
4068      m->magic = mparams.magic;
4069      m->release_checks = MAX_RELEASE_CHECK_RATE;
4070      init_bins(m);
4071#if !ONLY_MSPACES
4072      if (is_global(m))
4073        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4074      else
4075#endif
4076      {
4077        /* Offset top by embedded malloc_state */
4078        mchunkptr mn = next_chunk(mem2chunk(m));
4079        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4080      }
4081    }
4082
4083    else {
4084      /* Try to merge with an existing segment */
4085      msegmentptr sp = &m->seg;
4086      /* Only consider most recent segment if traversal suppressed */
4087      while (sp != 0 && tbase != sp->base + sp->size)
4088        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4089      if (sp != 0 &&
4090          !is_extern_segment(sp) &&
4091          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
4092          segment_holds(sp, m->top)) { /* append */
4093        sp->size += tsize;
4094        init_top(m, m->top, m->topsize + tsize);
4095      }
4096      else {
4097        if (tbase < m->least_addr)
4098          m->least_addr = tbase;
4099        sp = &m->seg;
4100        while (sp != 0 && sp->base != tbase + tsize)
4101          sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4102        if (sp != 0 &&
4103            !is_extern_segment(sp) &&
4104            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
4105          char* oldbase = sp->base;
4106          sp->base = tbase;
4107          sp->size += tsize;
4108          return prepend_alloc(m, tbase, oldbase, nb);
4109        }
4110        else
4111          add_segment(m, tbase, tsize, mmap_flag);
4112      }
4113    }
4114
4115    if (nb < m->topsize) { /* Allocate from new or extended top space */
4116      size_t rsize = m->topsize -= nb;
4117      mchunkptr p = m->top;
4118      mchunkptr r = m->top = chunk_plus_offset(p, nb);
4119      r->head = rsize | PINUSE_BIT;
4120      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4121      check_top_chunk(m, m->top);
4122      check_malloced_chunk(m, chunk2mem(p), nb);
4123      return chunk2mem(p);
4124    }
4125  }
4126
4127  MALLOC_FAILURE_ACTION;
4128  return 0;
4129}
4130
4131/* -----------------------  system deallocation -------------------------- */
4132
4133/* Unmap and unlink any mmapped segments that don't contain used chunks */
4134static size_t release_unused_segments(mstate m) {
4135  size_t released = 0;
4136  int nsegs = 0;
4137  msegmentptr pred = &m->seg;
4138  msegmentptr sp = pred->next;
4139  while (sp != 0) {
4140    char* base = sp->base;
4141    size_t size = sp->size;
4142    msegmentptr next = sp->next;
4143    ++nsegs;
4144    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4145      mchunkptr p = align_as_chunk(base);
4146      size_t psize = chunksize(p);
4147      /* Can unmap if first chunk holds entire segment and not pinned */
4148      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4149        tchunkptr tp = (tchunkptr)p;
4150        assert(segment_holds(sp, (char*)sp));
4151        if (p == m->dv) {
4152          m->dv = 0;
4153          m->dvsize = 0;
4154        }
4155        else {
4156          unlink_large_chunk(m, tp);
4157        }
4158        if (CALL_MUNMAP(base, size) == 0) {
4159          released += size;
4160          m->footprint -= size;
4161          /* unlink obsoleted record */
4162          sp = pred;
4163          sp->next = next;
4164        }
4165        else { /* back out if cannot unmap */
4166          insert_large_chunk(m, tp, psize);
4167        }
4168      }
4169    }
4170    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4171      break;
4172    pred = sp;
4173    sp = next;
4174  }
4175  /* Reset check counter */
4176  m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
4177                       nsegs : MAX_RELEASE_CHECK_RATE);
4178  return released;
4179}
4180
4181static int sys_trim(mstate m, size_t pad) {
4182  size_t released = 0;
4183  ensure_initialization();
4184  if (pad < MAX_REQUEST && is_initialized(m)) {
4185    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4186
4187    if (m->topsize > pad) {
4188      /* Shrink top space in granularity-size units, keeping at least one */
4189      size_t unit = mparams.granularity;
4190      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4191                      SIZE_T_ONE) * unit;
4192      msegmentptr sp = segment_holding(m, (char*)m->top);
4193
4194      if (!is_extern_segment(sp)) {
4195        if (is_mmapped_segment(sp)) {
4196          if (HAVE_MMAP &&
4197              sp->size >= extra &&
4198              !has_segment_link(m, sp)) { /* can't shrink if pinned */
4199            size_t newsize = sp->size - extra;
4200            /* Prefer mremap, fall back to munmap */
4201            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4202                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4203              released = extra;
4204            }
4205          }
4206        }
4207        else if (HAVE_MORECORE) {
4208          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4209            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4210          ACQUIRE_MALLOC_GLOBAL_LOCK();
4211          {
4212            /* Make sure end of memory is where we last set it. */
4213            char* old_br = (char*)(CALL_MORECORE(0));
4214            if (old_br == sp->base + sp->size) {
4215              char* rel_br = (char*)(CALL_MORECORE(-extra));
4216              char* new_br = (char*)(CALL_MORECORE(0));
4217              if (rel_br != CMFAIL && new_br < old_br)
4218                released = old_br - new_br;
4219            }
4220          }
4221          RELEASE_MALLOC_GLOBAL_LOCK();
4222        }
4223      }
4224
4225      if (released != 0) {
4226        sp->size -= released;
4227        m->footprint -= released;
4228        init_top(m, m->top, m->topsize - released);
4229        check_top_chunk(m, m->top);
4230      }
4231    }
4232
4233    /* Unmap any unused mmapped segments */
4234    if (HAVE_MMAP)
4235      released += release_unused_segments(m);
4236
4237    /* On failure, disable autotrim to avoid repeated failed future calls */
4238    if (released == 0 && m->topsize > m->trim_check)
4239      m->trim_check = MAX_SIZE_T;
4240  }
4241
4242  return (released != 0)? 1 : 0;
4243}
4244
4245
4246/* ---------------------------- malloc support --------------------------- */
4247
4248/* allocate a large request from the best fitting chunk in a treebin */
4249static void* tmalloc_large(mstate m, size_t nb) {
4250  tchunkptr v = 0;
4251  size_t rsize = -nb; /* Unsigned negation */
4252  tchunkptr t;
4253  bindex_t idx;
4254  compute_tree_index(nb, idx);
4255  if ((t = *treebin_at(m, idx)) != 0) {
4256    /* Traverse tree for this bin looking for node with size == nb */
4257    size_t sizebits = nb << leftshift_for_tree_index(idx);
4258    tchunkptr rst = 0;  /* The deepest untaken right subtree */
4259    for (;;) {
4260      tchunkptr rt;
4261      size_t trem = chunksize(t) - nb;
4262      if (trem < rsize) {
4263        v = t;
4264        if ((rsize = trem) == 0)
4265          break;
4266      }
4267      rt = t->child[1];
4268      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4269      if (rt != 0 && rt != t)
4270        rst = rt;
4271      if (t == 0) {
4272        t = rst; /* set t to least subtree holding sizes > nb */
4273        break;
4274      }
4275      sizebits <<= 1;
4276    }
4277  }
4278  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4279    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4280    if (leftbits != 0) {
4281      bindex_t i;
4282      binmap_t leastbit = least_bit(leftbits);
4283      compute_bit2idx(leastbit, i);
4284      t = *treebin_at(m, i);
4285    }
4286  }
4287
4288  while (t != 0) { /* find smallest of tree or subtree */
4289    size_t trem = chunksize(t) - nb;
4290    if (trem < rsize) {
4291      rsize = trem;
4292      v = t;
4293    }
4294    t = leftmost_child(t);
4295  }
4296
4297  /*  If dv is a better fit, return 0 so malloc will use it */
4298  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4299    if (RTCHECK(ok_address(m, v))) { /* split */
4300      mchunkptr r = chunk_plus_offset(v, nb);
4301      assert(chunksize(v) == rsize + nb);
4302      if (RTCHECK(ok_next(v, r))) {
4303        unlink_large_chunk(m, v);
4304        if (rsize < MIN_CHUNK_SIZE)
4305          set_inuse_and_pinuse(m, v, (rsize + nb));
4306        else {
4307          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4308          set_size_and_pinuse_of_free_chunk(r, rsize);
4309          insert_chunk(m, r, rsize);
4310        }
4311        return chunk2mem(v);
4312      }
4313    }
4314    CORRUPTION_ERROR_ACTION(m);
4315  }
4316  return 0;
4317}
4318
4319/* allocate a small request from the best fitting chunk in a treebin */
4320static void* tmalloc_small(mstate m, size_t nb) {
4321  tchunkptr t, v;
4322  size_t rsize;
4323  bindex_t i;
4324  binmap_t leastbit = least_bit(m->treemap);
4325  compute_bit2idx(leastbit, i);
4326  v = t = *treebin_at(m, i);
4327  rsize = chunksize(t) - nb;
4328
4329  while ((t = leftmost_child(t)) != 0) {
4330    size_t trem = chunksize(t) - nb;
4331    if (trem < rsize) {
4332      rsize = trem;
4333      v = t;
4334    }
4335  }
4336
4337  if (RTCHECK(ok_address(m, v))) {
4338    mchunkptr r = chunk_plus_offset(v, nb);
4339    assert(chunksize(v) == rsize + nb);
4340    if (RTCHECK(ok_next(v, r))) {
4341      unlink_large_chunk(m, v);
4342      if (rsize < MIN_CHUNK_SIZE)
4343        set_inuse_and_pinuse(m, v, (rsize + nb));
4344      else {
4345        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4346        set_size_and_pinuse_of_free_chunk(r, rsize);
4347        replace_dv(m, r, rsize);
4348      }
4349      return chunk2mem(v);
4350    }
4351  }
4352
4353  CORRUPTION_ERROR_ACTION(m);
4354  return 0;
4355}
4356
4357/* --------------------------- realloc support --------------------------- */
4358
4359static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
4360  if (bytes >= MAX_REQUEST) {
4361    MALLOC_FAILURE_ACTION;
4362    return 0;
4363  }
4364  if (!PREACTION(m)) {
4365    mchunkptr oldp = mem2chunk(oldmem);
4366    size_t oldsize = chunksize(oldp);
4367    mchunkptr next = chunk_plus_offset(oldp, oldsize);
4368    mchunkptr newp = 0;
4369    void* extra = 0;
4370
4371    /* Try to either shrink or extend into top. Else malloc-copy-free */
4372
4373    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
4374                ok_next(oldp, next) && ok_pinuse(next))) {
4375      size_t nb = request2size(bytes);
4376      if (is_mmapped(oldp))
4377        newp = mmap_resize(m, oldp, nb);
4378      else if (oldsize >= nb) { /* already big enough */
4379        size_t rsize = oldsize - nb;
4380        newp = oldp;
4381        if (rsize >= MIN_CHUNK_SIZE) {
4382          mchunkptr remainder = chunk_plus_offset(newp, nb);
4383          set_inuse(m, newp, nb);
4384          set_inuse(m, remainder, rsize);
4385          extra = chunk2mem(remainder);
4386        }
4387      }
4388      else if (next == m->top && oldsize + m->topsize > nb) {
4389        /* Expand into top */
4390        size_t newsize = oldsize + m->topsize;
4391        size_t newtopsize = newsize - nb;
4392        mchunkptr newtop = chunk_plus_offset(oldp, nb);
4393        set_inuse(m, oldp, nb);
4394        newtop->head = newtopsize | PINUSE_BIT;
4395        m->top = newtop;
4396        m->topsize = newtopsize;
4397        newp = oldp;
4398      }
4399    }
4400    else {
4401      USAGE_ERROR_ACTION(m, oldmem);
4402      POSTACTION(m);
4403      return 0;
4404    }
4405
4406    POSTACTION(m);
4407
4408    if (newp != 0) {
4409      if (extra != 0) {
4410        internal_free(m, extra);
4411      }
4412      check_inuse_chunk(m, newp);
4413      return chunk2mem(newp);
4414    }
4415    else {
4416      void* newmem = internal_malloc(m, bytes);
4417      if (newmem != 0) {
4418        size_t oc = oldsize - overhead_for(oldp);
4419        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
4420        internal_free(m, oldmem);
4421      }
4422      return newmem;
4423    }
4424  }
4425  return 0;
4426}
4427
4428/* --------------------------- memalign support -------------------------- */
4429
4430static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4431  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
4432    return internal_malloc(m, bytes);
4433  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4434    alignment = MIN_CHUNK_SIZE;
4435  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4436    size_t a = MALLOC_ALIGNMENT << 1;
4437    while (a < alignment) a <<= 1;
4438    alignment = a;
4439  }
4440
4441  if (bytes >= MAX_REQUEST - alignment) {
4442    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
4443      MALLOC_FAILURE_ACTION;
4444    }
4445  }
4446  else {
4447    size_t nb = request2size(bytes);
4448    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4449    char* mem = (char*)internal_malloc(m, req);
4450    if (mem != 0) {
4451      void* leader = 0;
4452      void* trailer = 0;
4453      mchunkptr p = mem2chunk(mem);
4454
4455      if (PREACTION(m)) return 0;
4456      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4457        /*
4458          Find an aligned spot inside chunk.  Since we need to give
4459          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4460          the first calculation places us at a spot with less than
4461          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4462          We've allocated enough total room so that this is always
4463          possible.
4464        */
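        /*
          Concrete illustration (assumed numbers, not from the original
          sources): with alignment == 64 and mem at an address ending in
          0x...008, the first candidate corresponds to aligned address
          0x...040; if that would leave a leader smaller than
          MIN_CHUNK_SIZE, the next boundary one alignment unit later is
          used instead.
        */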
4465        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4466                                                       alignment -
4467                                                       SIZE_T_ONE)) &
4468                                             -alignment));
4469        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4470          br : br+alignment;
4471        mchunkptr newp = (mchunkptr)pos;
4472        size_t leadsize = pos - (char*)(p);
4473        size_t newsize = chunksize(p) - leadsize;
4474
4475        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4476          newp->prev_foot = p->prev_foot + leadsize;
4477          newp->head = (newsize|CINUSE_BIT);
4478        }
4479        else { /* Otherwise, give back leader, use the rest */
4480          set_inuse(m, newp, newsize);
4481          set_inuse(m, p, leadsize);
4482          leader = chunk2mem(p);
4483        }
4484        p = newp;
4485      }
4486
4487      /* Give back spare room at the end */
4488      if (!is_mmapped(p)) {
4489        size_t size = chunksize(p);
4490        if (size > nb + MIN_CHUNK_SIZE) {
4491          size_t remainder_size = size - nb;
4492          mchunkptr remainder = chunk_plus_offset(p, nb);
4493          set_inuse(m, p, nb);
4494          set_inuse(m, remainder, remainder_size);
4495          trailer = chunk2mem(remainder);
4496        }
4497      }
4498
4499      assert (chunksize(p) >= nb);
4500      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4501      check_inuse_chunk(m, p);
4502      POSTACTION(m);
4503      if (leader != 0) {
4504        internal_free(m, leader);
4505      }
4506      if (trailer != 0) {
4507        internal_free(m, trailer);
4508      }
4509      return chunk2mem(p);
4510    }
4511  }
4512  return 0;
4513}
4514
4515/* ------------------------ comalloc/coalloc support --------------------- */
4516
4517static void** ialloc(mstate m,
4518                     size_t n_elements,
4519                     size_t* sizes,
4520                     int opts,
4521                     void* chunks[]) {
4522  /*
4523    This provides common support for independent_X routines, handling
4524    all of the combinations that can result.
4525
4526    The opts arg has:
4527    bit 0 set if all elements are same size (using sizes[0])
4528    bit 1 set if elements should be zeroed
4529  */
4530
4531  size_t    element_size;   /* chunksize of each element, if all same */
4532  size_t    contents_size;  /* total size of elements */
4533  size_t    array_size;     /* request size of pointer array */
4534  void*     mem;            /* malloced aggregate space */
4535  mchunkptr p;              /* corresponding chunk */
4536  size_t    remainder_size; /* remaining bytes while splitting */
4537  void**    marray;         /* either "chunks" or malloced ptr array */
4538  mchunkptr array_chunk;    /* chunk for malloced ptr array */
4539  flag_t    was_enabled;    /* to disable mmap */
4540  size_t    size;
4541  size_t    i;
4542
4543  ensure_initialization();
4544  /* compute array length, if needed */
4545  if (chunks != 0) {
4546    if (n_elements == 0)
4547      return chunks; /* nothing to do */
4548    marray = chunks;
4549    array_size = 0;
4550  }
4551  else {
4552    /* if empty req, must still return chunk representing empty array */
4553    if (n_elements == 0)
4554      return (void**)internal_malloc(m, 0);
4555    marray = 0;
4556    array_size = request2size(n_elements * (sizeof(void*)));
4557  }
4558
4559  /* compute total element size */
4560  if (opts & 0x1) { /* all-same-size */
4561    element_size = request2size(*sizes);
4562    contents_size = n_elements * element_size;
4563  }
4564  else { /* add up all the sizes */
4565    element_size = 0;
4566    contents_size = 0;
4567    for (i = 0; i != n_elements; ++i)
4568      contents_size += request2size(sizes[i]);
4569  }
4570
4571  size = contents_size + array_size;
4572
4573  /*
4574     Allocate the aggregate chunk.  First disable direct-mmapping so
4575     malloc won't use it, since we would not be able to later
4576     free/realloc space internal to a segregated mmap region.
4577  */
4578  was_enabled = use_mmap(m);
4579  disable_mmap(m);
4580  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4581  if (was_enabled)
4582    enable_mmap(m);
4583  if (mem == 0)
4584    return 0;
4585
4586  if (PREACTION(m)) return 0;
4587  p = mem2chunk(mem);
4588  remainder_size = chunksize(p);
4589
4590  assert(!is_mmapped(p));
4591
4592  if (opts & 0x2) {       /* optionally clear the elements */
4593    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4594  }
4595
4596  /* If not provided, allocate the pointer array as final part of chunk */
4597  if (marray == 0) {
4598    size_t  array_chunk_size;
4599    array_chunk = chunk_plus_offset(p, contents_size);
4600    array_chunk_size = remainder_size - contents_size;
4601    marray = (void**) (chunk2mem(array_chunk));
4602    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4603    remainder_size = contents_size;
4604  }
4605
4606  /* split out elements */
4607  for (i = 0; ; ++i) {
4608    marray[i] = chunk2mem(p);
4609    if (i != n_elements-1) {
4610      if (element_size != 0)
4611        size = element_size;
4612      else
4613        size = request2size(sizes[i]);
4614      remainder_size -= size;
4615      set_size_and_pinuse_of_inuse_chunk(m, p, size);
4616      p = chunk_plus_offset(p, size);
4617    }
4618    else { /* the final element absorbs any overallocation slop */
4619      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4620      break;
4621    }
4622  }
4623
4624#if DEBUG
4625  if (marray != chunks) {
4626    /* final element must have exactly exhausted chunk */
4627    if (element_size != 0) {
4628      assert(remainder_size == element_size);
4629    }
4630    else {
4631      assert(remainder_size == request2size(sizes[i]));
4632    }
4633    check_inuse_chunk(m, mem2chunk(marray));
4634  }
4635  for (i = 0; i != n_elements; ++i)
4636    check_inuse_chunk(m, mem2chunk(marray[i]));
4637
4638#endif /* DEBUG */
4639
4640  POSTACTION(m);
4641  return marray;
4642}
4643
4644
4645/* -------------------------- public routines ---------------------------- */
4646
4647#if !ONLY_MSPACES
4648
4649void* dlmalloc(size_t bytes) {
4650  /*
4651     Basic algorithm:
4652     If a small request (< 256 bytes minus per-chunk overhead):
4653       1. If one exists, use a remainderless chunk in associated smallbin.
4654          (Remainderless means that there are too few excess bytes to
4655          represent as a chunk.)
4656       2. If it is big enough, use the dv chunk, which is normally the
4657          chunk adjacent to the one used for the most recent small request.
4658       3. If one exists, split the smallest available chunk in a bin,
4659          saving remainder in dv.
4660       4. If it is big enough, use the top chunk.
4661       5. If available, get memory from system and use it
4662     Otherwise, for a large request:
4663       1. Find the smallest available binned chunk that fits, and use it
4664          if it is better fitting than dv chunk, splitting if necessary.
4665       2. If better fitting than any binned chunk, use the dv chunk.
4666       3. If it is big enough, use the top chunk.
4667       4. If request size >= mmap threshold, try to directly mmap this chunk.
4668       5. If available, get memory from system and use it
4669
4670     The ugly goto's here ensure that postaction occurs along all paths.
4671  */
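  /*
     Worked example (illustrative only; figures assume the default 8-byte
     alignment, one hidden size_t of per-chunk overhead, no FOOTERS, and a
     64-bit size_t): malloc(100) is padded to a 112-byte chunk
     ((100 + 8 + 7) & ~7) and is handled by the small-request steps above,
     while a 512K request exceeds the default 256K mmap threshold and is
     normally mmapped directly by sys_alloc.
  */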
4672
4673#if USE_LOCKS
4674  ensure_initialization(); /* initialize in sys_alloc if not using locks */
4675#endif
4676
4677  if (!PREACTION(gm)) {
4678    void* mem;
4679    size_t nb;
4680    if (bytes <= MAX_SMALL_REQUEST) {
4681      bindex_t idx;
4682      binmap_t smallbits;
4683      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4684      idx = small_index(nb);
4685      smallbits = gm->smallmap >> idx;
4686
4687      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4688        mchunkptr b, p;
4689        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4690        b = smallbin_at(gm, idx);
4691        p = b->fd;
4692        assert(chunksize(p) == small_index2size(idx));
4693        unlink_first_small_chunk(gm, b, p, idx);
4694        set_inuse_and_pinuse(gm, p, small_index2size(idx));
4695        mem = chunk2mem(p);
4696        check_malloced_chunk(gm, mem, nb);
4697        goto postaction;
4698      }
4699
4700      else if (nb > gm->dvsize) {
4701        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4702          mchunkptr b, p, r;
4703          size_t rsize;
4704          bindex_t i;
4705          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4706          binmap_t leastbit = least_bit(leftbits);
4707          compute_bit2idx(leastbit, i);
4708          b = smallbin_at(gm, i);
4709          p = b->fd;
4710          assert(chunksize(p) == small_index2size(i));
4711          unlink_first_small_chunk(gm, b, p, i);
4712          rsize = small_index2size(i) - nb;
4713          /* Fit here cannot be remainderless if 4byte sizes */
4714          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4715            set_inuse_and_pinuse(gm, p, small_index2size(i));
4716          else {
4717            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4718            r = chunk_plus_offset(p, nb);
4719            set_size_and_pinuse_of_free_chunk(r, rsize);
4720            replace_dv(gm, r, rsize);
4721          }
4722          mem = chunk2mem(p);
4723          check_malloced_chunk(gm, mem, nb);
4724          goto postaction;
4725        }
4726
4727        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4728          check_malloced_chunk(gm, mem, nb);
4729          goto postaction;
4730        }
4731      }
4732    }
4733    else if (bytes >= MAX_REQUEST)
4734      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4735    else {
4736      nb = pad_request(bytes);
4737      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4738        check_malloced_chunk(gm, mem, nb);
4739        goto postaction;
4740      }
4741    }
4742
4743    if (nb <= gm->dvsize) {
4744      size_t rsize = gm->dvsize - nb;
4745      mchunkptr p = gm->dv;
4746      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4747        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4748        gm->dvsize = rsize;
4749        set_size_and_pinuse_of_free_chunk(r, rsize);
4750        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4751      }
4752      else { /* exhaust dv */
4753        size_t dvs = gm->dvsize;
4754        gm->dvsize = 0;
4755        gm->dv = 0;
4756        set_inuse_and_pinuse(gm, p, dvs);
4757      }
4758      mem = chunk2mem(p);
4759      check_malloced_chunk(gm, mem, nb);
4760      goto postaction;
4761    }
4762
4763    else if (nb < gm->topsize) { /* Split top */
4764      size_t rsize = gm->topsize -= nb;
4765      mchunkptr p = gm->top;
4766      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4767      r->head = rsize | PINUSE_BIT;
4768      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4769      mem = chunk2mem(p);
4770      check_top_chunk(gm, gm->top);
4771      check_malloced_chunk(gm, mem, nb);
4772      goto postaction;
4773    }
4774
4775    mem = sys_alloc(gm, nb);
4776
4777  postaction:
4778    POSTACTION(gm);
4779    return mem;
4780  }
4781
4782  return 0;
4783}
4784
4785void dlfree(void* mem) {
4786  /*
4787     Consolidate freed chunks with preceding or succeeding bordering
4788     free chunks, if they exist, and then place in a bin.  Intermixed
4789     with special cases for top, dv, mmapped chunks, and usage errors.
4790  */
4791
4792  if (mem != 0) {
4793    mchunkptr p  = mem2chunk(mem);
4794#if FOOTERS
4795    mstate fm = get_mstate_for(p);
4796    if (!ok_magic(fm)) {
4797      USAGE_ERROR_ACTION(fm, p);
4798      return;
4799    }
4800#else /* FOOTERS */
4801#define fm gm
4802#endif /* FOOTERS */
4803    if (!PREACTION(fm)) {
4804      check_inuse_chunk(fm, p);
4805      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4806        size_t psize = chunksize(p);
4807        mchunkptr next = chunk_plus_offset(p, psize);
4808        if (!pinuse(p)) {
4809          size_t prevsize = p->prev_foot;
4810          if ((prevsize & IS_MMAPPED_BIT) != 0) {
4811            prevsize &= ~IS_MMAPPED_BIT;
4812            psize += prevsize + MMAP_FOOT_PAD;
4813            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4814              fm->footprint -= psize;
4815            goto postaction;
4816          }
4817          else {
4818            mchunkptr prev = chunk_minus_offset(p, prevsize);
4819            psize += prevsize;
4820            p = prev;
4821            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4822              if (p != fm->dv) {
4823                unlink_chunk(fm, p, prevsize);
4824              }
4825              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4826                fm->dvsize = psize;
4827                set_free_with_pinuse(p, psize, next);
4828                goto postaction;
4829              }
4830            }
4831            else
4832              goto erroraction;
4833          }
4834        }
4835
4836        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4837          if (!cinuse(next)) {  /* consolidate forward */
4838            if (next == fm->top) {
4839              size_t tsize = fm->topsize += psize;
4840              fm->top = p;
4841              p->head = tsize | PINUSE_BIT;
4842              if (p == fm->dv) {
4843                fm->dv = 0;
4844                fm->dvsize = 0;
4845              }
4846              if (should_trim(fm, tsize))
4847                sys_trim(fm, 0);
4848              goto postaction;
4849            }
4850            else if (next == fm->dv) {
4851              size_t dsize = fm->dvsize += psize;
4852              fm->dv = p;
4853              set_size_and_pinuse_of_free_chunk(p, dsize);
4854              goto postaction;
4855            }
4856            else {
4857              size_t nsize = chunksize(next);
4858              psize += nsize;
4859              unlink_chunk(fm, next, nsize);
4860              set_size_and_pinuse_of_free_chunk(p, psize);
4861              if (p == fm->dv) {
4862                fm->dvsize = psize;
4863                goto postaction;
4864              }
4865            }
4866          }
4867          else
4868            set_free_with_pinuse(p, psize, next);
4869
4870          if (is_small(psize)) {
4871            insert_small_chunk(fm, p, psize);
4872            check_free_chunk(fm, p);
4873          }
4874          else {
4875            tchunkptr tp = (tchunkptr)p;
4876            insert_large_chunk(fm, tp, psize);
4877            check_free_chunk(fm, p);
4878            if (--fm->release_checks == 0)
4879              release_unused_segments(fm);
4880          }
4881          goto postaction;
4882        }
4883      }
4884    erroraction:
4885      USAGE_ERROR_ACTION(fm, p);
4886    postaction:
4887      POSTACTION(fm);
4888    }
4889  }
4890#if !FOOTERS
4891#undef fm
4892#endif /* FOOTERS */
4893}
4894
4895void* dlcalloc(size_t n_elements, size_t elem_size) {
4896  void* mem;
4897  size_t req = 0;
4898  if (n_elements != 0) {
4899    req = n_elements * elem_size;
4900    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4901        (req / n_elements != elem_size))
4902      req = MAX_SIZE_T; /* force downstream failure on overflow */
4903  }
4904  mem = dlmalloc(req);
4905  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4906    memset(mem, 0, req);
4907  return mem;
4908}
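/*
  Note (illustrative): the overflow guard in dlcalloc matters for 32-bit
  size_t.  For example dlcalloc(0x10000, 0x10000) would wrap
  n_elements * elem_size to 0; the division cross-check detects this,
  forces req to MAX_SIZE_T, and the request fails instead of silently
  returning a chunk far smaller than intended.
*/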
4909
4910void* dlrealloc(void* oldmem, size_t bytes) {
4911  if (oldmem == 0)
4912    return dlmalloc(bytes);
4913#ifdef REALLOC_ZERO_BYTES_FREES
4914  if (bytes == 0) {
4915    dlfree(oldmem);
4916    return 0;
4917  }
4918#endif /* REALLOC_ZERO_BYTES_FREES */
4919  else {
4920#if ! FOOTERS
4921    mstate m = gm;
4922#else /* FOOTERS */
4923    mstate m = get_mstate_for(mem2chunk(oldmem));
4924    if (!ok_magic(m)) {
4925      USAGE_ERROR_ACTION(m, oldmem);
4926      return 0;
4927    }
4928#endif /* FOOTERS */
4929    return internal_realloc(m, oldmem, bytes);
4930  }
4931}
4932
4933void* dlmemalign(size_t alignment, size_t bytes) {
4934  return internal_memalign(gm, alignment, bytes);
4935}
4936
4937void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4938                                 void* chunks[]) {
4939  size_t sz = elem_size; /* serves as 1-element array */
4940  return ialloc(gm, n_elements, &sz, 3, chunks);
4941}
4942
4943void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4944                                   void* chunks[]) {
4945  return ialloc(gm, n_elements, sizes, 0, chunks);
4946}
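/*
  Usage sketch (illustrative only, not from the original sources): carve
  three independently freeable objects of different sizes out of a single
  underlying allocation.  Because no `chunks' array is passed in, the
  returned pointer array is itself allocated as part of the same call (see
  ialloc above) and may also be released with dlfree.

    size_t sizes[3] = { 16, 64, 256 };
    void** objs = dlindependent_comalloc(3, sizes, 0);
    if (objs != 0) {
      ... use objs[0], objs[1], objs[2] ...
      dlfree(objs[0]);
      dlfree(objs[1]);
      dlfree(objs[2]);
      dlfree(objs);
    }
*/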
4947
4948void* dlvalloc(size_t bytes) {
4949  size_t pagesz;
4950  ensure_initialization();
4951  pagesz = mparams.page_size;
4952  return dlmemalign(pagesz, bytes);
4953}
4954
4955void* dlpvalloc(size_t bytes) {
4956  size_t pagesz;
4957  ensure_initialization();
4958  pagesz = mparams.page_size;
4959  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4960}
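/*
  For example, assuming a 4096-byte page size, dlpvalloc(5000) rounds the
  request up to (5000 + 4095) & ~4095 == 8192 bytes of page-aligned memory,
  whereas dlvalloc(5000) allocates 5000 bytes and only guarantees that the
  start is page-aligned.
*/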
4961
4962int dlmalloc_trim(size_t pad) {
4963  int result = 0;
4964  ensure_initialization();
4965  if (!PREACTION(gm)) {
4966    result = sys_trim(gm, pad);
4967    POSTACTION(gm);
4968  }
4969  return result;
4970}
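/*
  For example, dlmalloc_trim(256 * 1024) tries to give unused memory back
  to the system while keeping roughly 256K of slack in the top chunk for
  future requests; it returns 1 only if some memory was actually released.
*/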
4971
4972size_t dlmalloc_footprint(void) {
4973  return gm->footprint;
4974}
4975
4976size_t dlmalloc_max_footprint(void) {
4977  return gm->max_footprint;
4978}
4979
4980#if !NO_MALLINFO
4981struct mallinfo dlmallinfo(void) {
4982  return internal_mallinfo(gm);
4983}
4984#endif /* NO_MALLINFO */
4985
4986void dlmalloc_stats(void) {
4987  internal_malloc_stats(gm);
4988}
4989
4990int dlmallopt(int param_number, int value) {
4991  return change_mparam(param_number, value);
4992}
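/*
  Typical use (a sketch; M_TRIM_THRESHOLD and M_MMAP_THRESHOLD are the
  mallopt parameter numbers defined in the header portion of this file):

    dlmallopt(M_TRIM_THRESHOLD, 1024 * 1024);  // trim only when ~1MB is free
    dlmallopt(M_MMAP_THRESHOLD, 512 * 1024);   // mmap requests of 512K and up

  change_mparam returns nonzero if the parameter was recognized and the
  value accepted, and 0 otherwise.
*/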
4993
4994#endif /* !ONLY_MSPACES */
4995
4996size_t dlmalloc_usable_size(void* mem) {
4997  if (mem != 0) {
4998    mchunkptr p = mem2chunk(mem);
4999    if (cinuse(p))
5000      return chunksize(p) - overhead_for(p);
5001  }
5002  return 0;
5003}
5004
5005/* ----------------------------- user mspaces ---------------------------- */
5006
5007#if MSPACES
5008
5009static mstate init_user_mstate(char* tbase, size_t tsize) {
5010  size_t msize = pad_request(sizeof(struct malloc_state));
5011  mchunkptr mn;
5012  mchunkptr msp = align_as_chunk(tbase);
5013  mstate m = (mstate)(chunk2mem(msp));
5014  memset(m, 0, msize);
5015  INITIAL_LOCK(&m->mutex);
5016  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
5017  m->seg.base = m->least_addr = tbase;
5018  m->seg.size = m->footprint = m->max_footprint = tsize;
5019  m->magic = mparams.magic;
5020  m->release_checks = MAX_RELEASE_CHECK_RATE;
5021  m->mflags = mparams.default_mflags;
5022  m->extp = 0;
5023  m->exts = 0;
5024  disable_contiguous(m);
5025  init_bins(m);
5026  mn = next_chunk(mem2chunk(m));
5027  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5028  check_top_chunk(m, m->top);
5029  return m;
5030}
5031
5032mspace create_mspace(size_t capacity, int locked) {
5033  mstate m = 0;
5034  size_t msize;
5035  ensure_initialization();
5036  msize = pad_request(sizeof(struct malloc_state));
5037  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5038    size_t rs = ((capacity == 0)? mparams.granularity :
5039                 (capacity + TOP_FOOT_SIZE + msize));
5040    size_t tsize = granularity_align(rs);
5041    char* tbase = (char*)(CALL_MMAP(tsize));
5042    if (tbase != CMFAIL) {
5043      m = init_user_mstate(tbase, tsize);
5044      m->seg.sflags = IS_MMAPPED_BIT;
5045      set_lock(m, locked);
5046    }
5047  }
5048  return (mspace)m;
5049}
5050
5051mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5052  mstate m = 0;
5053  size_t msize;
5054  ensure_initialization();
5055  msize = pad_request(sizeof(struct malloc_state));
5056  if (capacity > msize + TOP_FOOT_SIZE &&
5057      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5058    m = init_user_mstate((char*)base, capacity);
5059    m->seg.sflags = EXTERN_BIT;
5060    set_lock(m, locked);
5061  }
5062  return (mspace)m;
5063}
5064
5065int mspace_mmap_large_chunks(mspace msp, int enable) {
5066  int ret = 0;
5067  mstate ms = (mstate)msp;
5068  if (!PREACTION(ms)) {
5069    if (use_mmap(ms))
5070      ret = 1;
5071    if (enable)
5072      enable_mmap(ms);
5073    else
5074      disable_mmap(ms);
5075    POSTACTION(ms);
5076  }
5077  return ret;
5078}
5079
5080size_t destroy_mspace(mspace msp) {
5081  size_t freed = 0;
5082  mstate ms = (mstate)msp;
5083  if (ok_magic(ms)) {
5084    msegmentptr sp = &ms->seg;
5085    while (sp != 0) {
5086      char* base = sp->base;
5087      size_t size = sp->size;
5088      flag_t flag = sp->sflags;
5089      sp = sp->next;
5090      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
5091          CALL_MUNMAP(base, size) == 0)
5092        freed += size;
5093    }
5094  }
5095  else {
5096    USAGE_ERROR_ACTION(ms,ms);
5097  }
5098  return freed;
5099}
5100
5101/*
5102  mspace versions of routines are near-clones of the global
5103  versions. This is not so nice but better than the alternatives.
5104*/
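/*
  Illustrative sketch (not from the original sources): a private arena
  created with create_mspace behaves like an independent copy of malloc,
  and destroy_mspace gives back everything it mapped in one call.

    mspace arena = create_mspace(0, 0);       // default capacity, no locking
    if (arena != 0) {
      void* a = mspace_malloc(arena, 128);
      void* b = mspace_calloc(arena, 10, 32);
      mspace_free(arena, a);
      ... use b ...
      destroy_mspace(arena);                  // releases all of its segments
    }
*/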
5105
5106
5107void* mspace_malloc(mspace msp, size_t bytes) {
5108  mstate ms = (mstate)msp;
5109  if (!ok_magic(ms)) {
5110    USAGE_ERROR_ACTION(ms,ms);
5111    return 0;
5112  }
5113  if (!PREACTION(ms)) {
5114    void* mem;
5115    size_t nb;
5116    if (bytes <= MAX_SMALL_REQUEST) {
5117      bindex_t idx;
5118      binmap_t smallbits;
5119      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5120      idx = small_index(nb);
5121      smallbits = ms->smallmap >> idx;
5122
5123      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5124        mchunkptr b, p;
5125        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
5126        b = smallbin_at(ms, idx);
5127        p = b->fd;
5128        assert(chunksize(p) == small_index2size(idx));
5129        unlink_first_small_chunk(ms, b, p, idx);
5130        set_inuse_and_pinuse(ms, p, small_index2size(idx));
5131        mem = chunk2mem(p);
5132        check_malloced_chunk(ms, mem, nb);
5133        goto postaction;
5134      }
5135
5136      else if (nb > ms->dvsize) {
5137        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5138          mchunkptr b, p, r;
5139          size_t rsize;
5140          bindex_t i;
5141          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5142          binmap_t leastbit = least_bit(leftbits);
5143          compute_bit2idx(leastbit, i);
5144          b = smallbin_at(ms, i);
5145          p = b->fd;
5146          assert(chunksize(p) == small_index2size(i));
5147          unlink_first_small_chunk(ms, b, p, i);
5148          rsize = small_index2size(i) - nb;
5149          /* Fit here cannot be remainderless if 4byte sizes */
5150          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5151            set_inuse_and_pinuse(ms, p, small_index2size(i));
5152          else {
5153            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5154            r = chunk_plus_offset(p, nb);
5155            set_size_and_pinuse_of_free_chunk(r, rsize);
5156            replace_dv(ms, r, rsize);
5157          }
5158          mem = chunk2mem(p);
5159          check_malloced_chunk(ms, mem, nb);
5160          goto postaction;
5161        }
5162
5163        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5164          check_malloced_chunk(ms, mem, nb);
5165          goto postaction;
5166        }
5167      }
5168    }
5169    else if (bytes >= MAX_REQUEST)
5170      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5171    else {
5172      nb = pad_request(bytes);
5173      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5174        check_malloced_chunk(ms, mem, nb);
5175        goto postaction;
5176      }
5177    }
5178
5179    if (nb <= ms->dvsize) {
5180      size_t rsize = ms->dvsize - nb;
5181      mchunkptr p = ms->dv;
5182      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5183        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5184        ms->dvsize = rsize;
5185        set_size_and_pinuse_of_free_chunk(r, rsize);
5186        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5187      }
5188      else { /* exhaust dv */
5189        size_t dvs = ms->dvsize;
5190        ms->dvsize = 0;
5191        ms->dv = 0;
5192        set_inuse_and_pinuse(ms, p, dvs);
5193      }
5194      mem = chunk2mem(p);
5195      check_malloced_chunk(ms, mem, nb);
5196      goto postaction;
5197    }
5198
5199    else if (nb < ms->topsize) { /* Split top */
5200      size_t rsize = ms->topsize -= nb;
5201      mchunkptr p = ms->top;
5202      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5203      r->head = rsize | PINUSE_BIT;
5204      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5205      mem = chunk2mem(p);
5206      check_top_chunk(ms, ms->top);
5207      check_malloced_chunk(ms, mem, nb);
5208      goto postaction;
5209    }
5210
5211    mem = sys_alloc(ms, nb);
5212
5213  postaction:
5214    POSTACTION(ms);
5215    return mem;
5216  }
5217
5218  return 0;
5219}
5220
5221void mspace_free(mspace msp, void* mem) {
5222  if (mem != 0) {
5223    mchunkptr p  = mem2chunk(mem);
5224#if FOOTERS
5225    mstate fm = get_mstate_for(p);
5226#else /* FOOTERS */
5227    mstate fm = (mstate)msp;
5228#endif /* FOOTERS */
5229    if (!ok_magic(fm)) {
5230      USAGE_ERROR_ACTION(fm, p);
5231      return;
5232    }
5233    if (!PREACTION(fm)) {
5234      check_inuse_chunk(fm, p);
5235      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
5236        size_t psize = chunksize(p);
5237        mchunkptr next = chunk_plus_offset(p, psize);
5238        if (!pinuse(p)) {
5239          size_t prevsize = p->prev_foot;
5240          if ((prevsize & IS_MMAPPED_BIT) != 0) {
5241            prevsize &= ~IS_MMAPPED_BIT;
5242            psize += prevsize + MMAP_FOOT_PAD;
5243            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5244              fm->footprint -= psize;
5245            goto postaction;
5246          }
5247          else {
5248            mchunkptr prev = chunk_minus_offset(p, prevsize);
5249            psize += prevsize;
5250            p = prev;
5251            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5252              if (p != fm->dv) {
5253                unlink_chunk(fm, p, prevsize);
5254              }
5255              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5256                fm->dvsize = psize;
5257                set_free_with_pinuse(p, psize, next);
5258                goto postaction;
5259              }
5260            }
5261            else
5262              goto erroraction;
5263          }
5264        }
5265
5266        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5267          if (!cinuse(next)) {  /* consolidate forward */
5268            if (next == fm->top) {
5269              size_t tsize = fm->topsize += psize;
5270              fm->top = p;
5271              p->head = tsize | PINUSE_BIT;
5272              if (p == fm->dv) {
5273                fm->dv = 0;
5274                fm->dvsize = 0;
5275              }
5276              if (should_trim(fm, tsize))
5277                sys_trim(fm, 0);
5278              goto postaction;
5279            }
5280            else if (next == fm->dv) {
5281              size_t dsize = fm->dvsize += psize;
5282              fm->dv = p;
5283              set_size_and_pinuse_of_free_chunk(p, dsize);
5284              goto postaction;
5285            }
5286            else {
5287              size_t nsize = chunksize(next);
5288              psize += nsize;
5289              unlink_chunk(fm, next, nsize);
5290              set_size_and_pinuse_of_free_chunk(p, psize);
5291              if (p == fm->dv) {
5292                fm->dvsize = psize;
5293                goto postaction;
5294              }
5295            }
5296          }
5297          else
5298            set_free_with_pinuse(p, psize, next);
5299
5300          if (is_small(psize)) {
5301            insert_small_chunk(fm, p, psize);
5302            check_free_chunk(fm, p);
5303          }
5304          else {
5305            tchunkptr tp = (tchunkptr)p;
5306            insert_large_chunk(fm, tp, psize);
5307            check_free_chunk(fm, p);
5308            if (--fm->release_checks == 0)
5309              release_unused_segments(fm);
5310          }
5311          goto postaction;
5312        }
5313      }
5314    erroraction:
5315      USAGE_ERROR_ACTION(fm, p);
5316    postaction:
5317      POSTACTION(fm);
5318    }
5319  }
5320}
5321
5322void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5323  void* mem;
5324  size_t req = 0;
5325  mstate ms = (mstate)msp;
5326  if (!ok_magic(ms)) {
5327    USAGE_ERROR_ACTION(ms,ms);
5328    return 0;
5329  }
5330  if (n_elements != 0) {
5331    req = n_elements * elem_size;
5332    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5333        (req / n_elements != elem_size))
5334      req = MAX_SIZE_T; /* force downstream failure on overflow */
5335  }
5336  mem = internal_malloc(ms, req);
5337  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5338    memset(mem, 0, req);
5339  return mem;
5340}
5341
5342void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5343  if (oldmem == 0)
5344    return mspace_malloc(msp, bytes);
5345#ifdef REALLOC_ZERO_BYTES_FREES
5346  if (bytes == 0) {
5347    mspace_free(msp, oldmem);
5348    return 0;
5349  }
5350#endif /* REALLOC_ZERO_BYTES_FREES */
5351  else {
5352#if FOOTERS
5353    mchunkptr p  = mem2chunk(oldmem);
5354    mstate ms = get_mstate_for(p);
5355#else /* FOOTERS */
5356    mstate ms = (mstate)msp;
5357#endif /* FOOTERS */
5358    if (!ok_magic(ms)) {
5359      USAGE_ERROR_ACTION(ms,ms);
5360      return 0;
5361    }
5362    return internal_realloc(ms, oldmem, bytes);
5363  }
5364}
5365
5366void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5367  mstate ms = (mstate)msp;
5368  if (!ok_magic(ms)) {
5369    USAGE_ERROR_ACTION(ms,ms);
5370    return 0;
5371  }
5372  return internal_memalign(ms, alignment, bytes);
5373}
5374
5375void** mspace_independent_calloc(mspace msp, size_t n_elements,
5376                                 size_t elem_size, void* chunks[]) {
5377  size_t sz = elem_size; /* serves as 1-element array */
5378  mstate ms = (mstate)msp;
5379  if (!ok_magic(ms)) {
5380    USAGE_ERROR_ACTION(ms,ms);
5381    return 0;
5382  }
5383  return ialloc(ms, n_elements, &sz, 3, chunks);
5384}
5385
5386void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5387                                   size_t sizes[], void* chunks[]) {
5388  mstate ms = (mstate)msp;
5389  if (!ok_magic(ms)) {
5390    USAGE_ERROR_ACTION(ms,ms);
5391    return 0;
5392  }
5393  return ialloc(ms, n_elements, sizes, 0, chunks);
5394}
5395
5396int mspace_trim(mspace msp, size_t pad) {
5397  int result = 0;
5398  mstate ms = (mstate)msp;
5399  if (ok_magic(ms)) {
5400    if (!PREACTION(ms)) {
5401      result = sys_trim(ms, pad);
5402      POSTACTION(ms);
5403    }
5404  }
5405  else {
5406    USAGE_ERROR_ACTION(ms,ms);
5407  }
5408  return result;
5409}
5410
5411void mspace_malloc_stats(mspace msp) {
5412  mstate ms = (mstate)msp;
5413  if (ok_magic(ms)) {
5414    internal_malloc_stats(ms);
5415  }
5416  else {
5417    USAGE_ERROR_ACTION(ms,ms);
5418  }
5419}
5420
5421size_t mspace_footprint(mspace msp) {
5422  size_t result = 0;
5423  mstate ms = (mstate)msp;
5424  if (ok_magic(ms)) {
5425    result = ms->footprint;
5426  }
5427  else {
5428    USAGE_ERROR_ACTION(ms,ms);
5429  }
5430  return result;
5431}
5432
5433
5434size_t mspace_max_footprint(mspace msp) {
5435  size_t result = 0;
5436  mstate ms = (mstate)msp;
5437  if (ok_magic(ms)) {
5438    result = ms->max_footprint;
5439  }
5440  else {
5441    USAGE_ERROR_ACTION(ms,ms);
5442  }
5443  return result;
5444}
5445
5446
5447#if !NO_MALLINFO
5448struct mallinfo mspace_mallinfo(mspace msp) {
5449  mstate ms = (mstate)msp;
5450  if (!ok_magic(ms)) {
5451    USAGE_ERROR_ACTION(ms,ms);
5452  }
5453  return internal_mallinfo(ms);
5454}
5455#endif /* NO_MALLINFO */
5456
5457size_t mspace_usable_size(void* mem) {
5458  if (mem != 0) {
5459    mchunkptr p = mem2chunk(mem);
5460    if (cinuse(p))
5461      return chunksize(p) - overhead_for(p);
5462  }
5463  return 0;
5464}
5465
5466int mspace_mallopt(int param_number, int value) {
5467  return change_mparam(param_number, value);
5468}
5469
5470#endif /* MSPACES */
5471
5472/* -------------------- Alternative MORECORE functions ------------------- */
5473
5474/*
5475  Guidelines for creating a custom version of MORECORE:
5476
5477  * For best performance, MORECORE should allocate in multiples of pagesize.
5478  * MORECORE may allocate more memory than requested. (Or even less,
5479      but this will usually result in a malloc failure.)
5480  * MORECORE must not allocate memory when given argument zero, but
5481      instead return one past the end address of memory from previous
5482      nonzero call.
5483  * For best performance, consecutive calls to MORECORE with positive
5484      arguments should return increasing addresses, indicating that
5485      space has been contiguously extended.
5486  * Even though consecutive calls to MORECORE need not return contiguous
5487      addresses, it must be OK for malloc'ed chunks to span multiple
5488      regions in those cases where they do happen to be contiguous.
5489  * MORECORE need not handle negative arguments -- it may instead
5490      just return MFAIL when given negative arguments.
5491      Negative arguments are always multiples of pagesize. MORECORE
5492      must not misinterpret negative args as large positive unsigned
5493      args. You can suppress such calls entirely by defining
5494      MORECORE_CANNOT_TRIM.
5495
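  For a minimal illustration of this contract (a sketch only, not from the
  original author: it carves space out of a fixed static arena rather than
  asking the operating system), a conforming MORECORE could look like:

  #define ARENA_SIZE (16 * 1024 * 1024)
  static char arena[ARENA_SIZE];
  static size_t arena_used;

  void* arena_morecore(int size)
  {
    if (size > 0)
    {
      if ((size_t)size > ARENA_SIZE - arena_used)
        return (void *) MFAIL;                   // out of arena space
      arena_used += (size_t)size;
      return arena + arena_used - (size_t)size;  // start of the new region
    }
    else if (size < 0)
    {
      return (void *) MFAIL;                     // trimming not supported
    }
    else
    {
      return arena + arena_used;                 // one past end of prior memory
    }
  }
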
5496  As an example alternative MORECORE, here is a custom allocator
5497  kindly contributed for the classic (pre-OS X) Mac OS.  It uses virtually
5498  but not necessarily physically contiguous non-paged memory (locked in,
5499  present and won't get swapped out).  You can use it by uncommenting
5500  this section, adding some #includes, and setting up the appropriate
5501  defines above:
5502
5503      #define MORECORE osMoreCore
5504
5505  There is also a shutdown routine that should somehow be called for
5506  cleanup upon program exit.
5507
5508  #define MAX_POOL_ENTRIES 100
5509  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
5510  static int next_os_pool;
5511  void *our_os_pools[MAX_POOL_ENTRIES];
5512
5513  void *osMoreCore(int size)
5514  {
5515    void *ptr = 0;
5516    static void *sbrk_top = 0;
5517
5518    if (size > 0)
5519    {
5520      if (size < MINIMUM_MORECORE_SIZE)
5521         size = MINIMUM_MORECORE_SIZE;
5522      if (CurrentExecutionLevel() == kTaskLevel)
5523         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5524      if (ptr == 0)
5525      {
5526        return (void *) MFAIL;
5527      }
5528      // save ptrs so they can be freed during cleanup
5529      our_os_pools[next_os_pool] = ptr;
5530      next_os_pool++;
5531      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5532      sbrk_top = (char *) ptr + size;
5533      return ptr;
5534    }
5535    else if (size < 0)
5536    {
5537      // we don't currently support shrink behavior
5538      return (void *) MFAIL;
5539    }
5540    else
5541    {
5542      return sbrk_top;
5543    }
5544  }
5545
5546  // cleanup any allocated memory pools
5547  // called as last thing before shutting down driver
5548
5549  void osCleanupMem(void)
5550  {
5551    void **ptr;
5552
5553    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5554      if (*ptr)
5555      {
5556         PoolDeallocate(*ptr);
5557         *ptr = 0;
5558      }
5559  }
5560
5561*/
5562
5563
5564/* -----------------------------------------------------------------------
5565History:
5566    V2.8.4 (not yet released)
5567      * Add mspace_mmap_large_chunks; thanks to Jean Brouwers
5568      * Fix insufficient sys_alloc padding when using 16byte alignment
5569      * Fix bad error check in mspace_footprint
5570      * Adaptations for ptmalloc, courtesy of Wolfram Gloger.
5571      * Reentrant spin locks, courtesy of Earl Chew and others
5572      * Win32 improvements, courtesy of Niall Douglas and Earl Chew
5573      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
5574      * Extension hook in malloc_state
5575      * Various small adjustments to reduce warnings on some compilers
5576      * Various configuration extensions/changes for more platforms. Thanks
5577         to all who contributed these.
5578
5579    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
5580      * Add max_footprint functions
5581      * Ensure all appropriate literals are size_t
5582      * Fix conditional compilation problem for some #define settings
5583      * Avoid concatenating segments with the one provided
5584        in create_mspace_with_base
5585      * Rename some variables to avoid compiler shadowing warnings
5586      * Use explicit lock initialization.
5587      * Better handling of sbrk interference.
5588      * Simplify and fix segment insertion, trimming and mspace_destroy
5589      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5590      * Thanks especially to Dennis Flanagan for help on these.
5591
5592    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
5593      * Fix memalign brace error.
5594
5595    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
5596      * Fix improper #endif nesting in C++
5597      * Add explicit casts needed for C++
5598
5599    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
5600      * Use trees for large bins
5601      * Support mspaces
5602      * Use segments to unify sbrk-based and mmap-based system allocation,
5603        removing need for emulation on most platforms without sbrk.
5604      * Default safety checks
5605      * Optional footer checks. Thanks to William Robertson for the idea.
5606      * Internal code refactoring
5607      * Incorporate suggestions and platform-specific changes.
5608        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5609        Aaron Bachmann,  Emery Berger, and others.
5610      * Speed up non-fastbin processing enough to remove fastbins.
5611      * Remove useless cfree() to avoid conflicts with other apps.
5612      * Remove internal memcpy, memset. Compilers handle builtins better.
5613      * Remove some options that no one ever used and rename others.
5614
5615    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
5616      * Fix malloc_state bitmap array misdeclaration
5617
5618    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
5619      * Allow tuning of FIRST_SORTED_BIN_SIZE
5620      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5621      * Better detection and support for non-contiguousness of MORECORE.
5622        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5623      * Bypass most of malloc if no frees. Thanks to Emery Berger.
5624      * Fix freeing of old top non-contiguous chunk in sysmalloc.
5625      * Raised default trim and map thresholds to 256K.
5626      * Fix mmap-related #defines. Thanks to Lubos Lunak.
5627      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5628      * Branch-free bin calculation
5629      * Default trim and mmap thresholds now 256K.
5630
5631    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
5632      * Introduce independent_comalloc and independent_calloc.
5633        Thanks to Michael Pachos for motivation and help.
5634      * Make optional .h file available
5635      * Allow > 2GB requests on 32bit systems.
5636      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5637        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5638        and Anonymous.
5639      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5640        helping test this.)
5641      * memalign: check alignment arg
5642      * realloc: don't try to shift chunks backwards, since this
5643        leads to  more fragmentation in some programs and doesn't
5644        seem to help in any others.
5645      * Collect all cases in malloc requiring system memory into sysmalloc
5646      * Use mmap as backup to sbrk
5647      * Place all internal state in malloc_state
5648      * Introduce fastbins (although similar to 2.5.1)
5649      * Many minor tunings and cosmetic improvements
5650      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5651      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5652        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5653      * Include errno.h to support default failure action.
5654
5655    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
5656      * return null for negative arguments
5657      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5658         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5659          (e.g. WIN32 platforms)
5660         * Cleanup header file inclusion for WIN32 platforms
5661         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5662         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5663           memory allocation routines
5664         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5665         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5666           usage of 'assert' in non-WIN32 code
5667         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5668           avoid infinite loop
5669      * Always call 'fREe()' rather than 'free()'
5670
5671    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
5672      * Fixed ordering problem with boundary-stamping
5673
5674    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
5675      * Added pvalloc, as recommended by H.J. Liu
5676      * Added 64bit pointer support mainly from Wolfram Gloger
5677      * Added anonymously donated WIN32 sbrk emulation
5678      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5679      * malloc_extend_top: fix mask error that caused wastage after
5680        foreign sbrks
5681      * Add linux mremap support code from HJ Liu
5682
5683    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
5684      * Integrated most documentation with the code.
5685      * Add support for mmap, with help from
5686        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5687      * Use last_remainder in more cases.
5688      * Pack bins using idea from  colin@nyx10.cs.du.edu
5689      * Use ordered bins instead of best-fit threshold
5690      * Eliminate block-local decls to simplify tracing and debugging.
5691      * Support another case of realloc via move into top
5692      * Fix error occurring when initial sbrk_base not word-aligned.
5693      * Rely on page size for units instead of SBRK_UNIT to
5694        avoid surprises about sbrk alignment conventions.
5695      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5696        (raymond@es.ele.tue.nl) for the suggestion.
5697      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5698      * More precautions for cases where other routines call sbrk,
5699        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5700      * Added macros etc., allowing use in linux libc from
5701        H.J. Lu (hjl@gnu.ai.mit.edu)
5702      * Inverted this history list
5703
5704    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
5705      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5706      * Removed all preallocation code since under current scheme
5707        the work required to undo bad preallocations exceeds
5708        the work saved in good cases for most test programs.
5709      * No longer use return list or unconsolidated bins since
5710        no scheme using them consistently outperforms those that don't
5711        given the above changes.
5712      * Use best fit for very large chunks to prevent some worst-cases.
5713      * Added some support for debugging
5714
5715    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
5716      * Removed footers when chunks are in use. Thanks to
5717        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5718
5719    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
5720      * Added malloc_trim, with help from Wolfram Gloger
5721        (wmglo@Dent.MED.Uni-Muenchen.DE).
5722
5723    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
5724
5725    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
5726      * realloc: try to expand in both directions
5727      * malloc: swap order of clean-bin strategy;
5728      * realloc: only conditionally expand backwards
5729      * Try not to scavenge used bins
5730      * Use bin counts as a guide to preallocation
5731      * Occasionally bin return list chunks in first scan
5732      * Add a few optimizations from colin@nyx10.cs.du.edu
5733
5734    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
5735      * faster bin computation & slightly different binning
5736      * merged all consolidations to one part of malloc proper
5737         (eliminating old malloc_find_space & malloc_clean_bin)
5738      * Scan 2 returns chunks (not just 1)
5739      * Propagate failure in realloc if malloc returns 0
5740      * Add stuff to allow compilation on non-ANSI compilers
5741          from kpv@research.att.com
5742
5743    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
5744      * removed potential for odd address access in prev_chunk
5745      * removed dependency on getpagesize.h
5746      * misc cosmetics and a bit more internal documentation
5747      * anticosmetics: mangled names in macros to evade debugger strangeness
5748      * tested on sparc, hp-700, dec-mips, rs6000
5749          with gcc & native cc (hp, dec only) allowing
5750          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5751
5752    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
5753      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5754         structure of old version,  but most details differ.)
5755
5756*/