   1/*
   2  This is a version (aka dlmalloc) of malloc/free/realloc written by
   3  Doug Lea and released to the public domain, as explained at
   4  http://creativecommons.org/licenses/publicdomain.  Send questions,
   5  comments, complaints, performance data, etc to dl@cs.oswego.edu
   6
   7* Version pre-2.8.4 Mon Nov 27 11:22:37 2006    (dl at gee)
   8
   9   Note: There may be an updated version of this malloc obtainable at
  10           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
  11         Check before installing!
  12
  13* Quickstart
  14
  15  This library is all in one file to simplify the most common usage:
  16  ftp it, compile it (-O3), and link it into another program. All of
  17  the compile-time options default to reasonable values for use on
  18  most platforms.  You might later want to step through various
  19  compile-time and dynamic tuning options.
  20
  21  For convenience, an include file for code using this malloc is at:
  22     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
  23  You don't really need this .h file unless you call functions not
  24  defined in your system include files.  The .h file contains only the
  25  excerpts from this file needed for using this malloc on ANSI C/C++
  26  systems, so long as you haven't changed compile-time options about
  27  naming and tuning parameters.  If you do, then you can create your
  28  own malloc.h that does include all settings by cutting at the point
  29  indicated below. Note that you may already by default be using a C
  30  library containing a malloc that is based on some version of this
  31  malloc (for example, on Linux). You might still want to use the one
  32  in this file to customize settings or to avoid overheads associated
  33  with library versions.
  34
  35* Vital statistics:
  36
  37  Supported pointer/size_t representation:       4 or 8 bytes
  38       size_t MUST be an unsigned type of the same width as
  39       pointers. (If you are using an ancient system that declares
  40       size_t as a signed type, or need it to be a different width
  41       than pointers, you can use a previous release of this malloc
  42       (e.g. 2.7.2) supporting these.)
  43
  44  Alignment:                                     8 bytes (default)
  45       This suffices for nearly all current machines and C compilers.
  46       However, you can define MALLOC_ALIGNMENT to be wider than this
  47       if necessary (up to 128 bytes), at the expense of using more space.
  48
  49  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
  50                                          8 or 16 bytes (if 8byte sizes)
  51       Each malloced chunk has a hidden word of overhead holding size
  52       and status information, and additional cross-check word
  53       if FOOTERS is defined.
  54
  55  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
  56                          8-byte ptrs:  32 bytes    (including overhead)
  57
  58       Even a request for zero bytes (i.e., malloc(0)) returns a
  59       pointer to something of the minimum allocatable size.
  60       The maximum overhead wastage (i.e., the number of extra bytes
  61       allocated beyond those requested in malloc) is less than or equal
  62       to the minimum size, except for requests >= mmap_threshold that
  63       are serviced via mmap(), where the worst case wastage is about
  64       32 bytes plus the remainder from a system page (the minimal
  65       mmap unit); typically 4096 or 8192 bytes.
  66
  67  Security: static-safe; optionally more or less
  68       The "security" of malloc refers to the ability of malicious
  69       code to accentuate the effects of errors (for example, freeing
  70       space that is not currently malloc'ed or overwriting past the
  71       ends of chunks) in code that calls malloc.  This malloc
  72       guarantees not to modify any memory locations below the base of
  73       heap, i.e., static variables, even in the presence of usage
  74       errors.  The routines additionally detect most improper frees
  75       and reallocs.  All this holds as long as the static bookkeeping
  76       for malloc itself is not corrupted by some other means.  This
  77       is only one aspect of security -- these checks do not, and
  78       cannot, detect all possible programming errors.
  79
  80       If FOOTERS is defined nonzero, then each allocated chunk
  81       carries an additional check word to verify that it was malloced
  82       from its space.  These check words are the same within each
  83       execution of a program using malloc, but differ across
  84       executions, so externally crafted fake chunks cannot be
  85       freed. This improves security by rejecting frees/reallocs that
  86       could corrupt heap memory, in addition to the checks preventing
  87       writes to statics that are always on.  This extra protection comes
  88       at the expense of some time and space overhead.  (Note that
  89       FOOTERS may also be worth using with MSPACES.)
  90
  91       By default detected errors cause the program to abort (calling
  92       "abort()"). You can override this to instead proceed past
  93       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
  94       has no effect, and a malloc that encounters a bad address
  95       caused by user overwrites will ignore the bad address by
  96       dropping pointers and indices to all known memory. This may
  97       be appropriate for programs that should continue if at all
  98       possible in the face of programming errors, although they may
  99       run out of memory because dropped memory is never reclaimed.
 100
 101       If you don't like either of these options, you can define
 102       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
 103  else. And if you are sure that your program using malloc has
 104       no errors or vulnerabilities, you can define INSECURE to 1,
 105       which might (or might not) provide a small performance improvement.
 106
 107  Thread-safety: NOT thread-safe unless USE_LOCKS defined
 108       When USE_LOCKS is defined, each public call to malloc, free,
 109       etc is surrounded with either a pthread mutex or a win32
 110       spinlock (depending on WIN32). This is not especially fast, and
 111       can be a major bottleneck.  It is designed only to provide
 112       minimal protection in concurrent environments, and to provide a
 113       basis for extensions.  If you are using malloc in a concurrent
 114       program, consider instead using nedmalloc
 115       (http://www.nedprod.com/programs/portable/nedmalloc/) or
 116       ptmalloc (See http://www.malloc.de), which are derived
 117       from versions of this malloc.
 118
 119  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
 120       This malloc can use unix sbrk or any emulation (invoked using
 121       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
 122       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
 123       memory.  On most unix systems, it tends to work best if both
 124       MORECORE and MMAP are enabled.  On Win32, it uses emulations
 125       based on VirtualAlloc. It also uses common C library functions
 126       like memset.
 127
 128  Compliance: I believe it is compliant with the Single Unix Specification
 129       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
 130       others as well.
 131
 132* Overview of algorithms
 133
 134  This is not the fastest, most space-conserving, most portable, or
 135  most tunable malloc ever written. However it is among the fastest
 136  while also being among the most space-conserving, portable and
 137  tunable.  Consistent balance across these factors results in a good
 138  general-purpose allocator for malloc-intensive programs.
 139
 140  In most ways, this malloc is a best-fit allocator. Generally, it
 141  chooses the best-fitting existing chunk for a request, with ties
 142  broken in approximately least-recently-used order. (This strategy
 143  normally maintains low fragmentation.) However, for requests less
 144  than 256 bytes, it deviates from best-fit when there is not an
 145  exactly fitting available chunk by preferring to use space adjacent
 146  to that used for the previous small request, as well as by breaking
 147  ties in approximately most-recently-used order. (These enhance
 148  locality of series of small allocations.)  And for very large requests
 149  (>= 256Kb by default), it relies on system memory mapping
 150  facilities, if supported.  (This helps avoid carrying around and
 151  possibly fragmenting memory used only for large chunks.)
 152
 153  All operations (except malloc_stats and mallinfo) have execution
 154  times that are bounded by a constant factor of the number of bits in
 155  a size_t, not counting any clearing in calloc or copying in realloc,
 156  or actions surrounding MORECORE and MMAP that have times
 157  proportional to the number of non-contiguous regions returned by
 158  system allocation routines, which is often just 1. In real-time
 159  applications, you can optionally suppress segment traversals using
 160  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
 161  system allocators return non-contiguous spaces, at the typical
 162  expense of carrying around more memory and increased fragmentation.
 163
 164  The implementation is not very modular and seriously overuses
 165  macros. Perhaps someday all C compilers will do as good a job
 166  inlining modular code as can now be done by brute-force expansion,
 167  but for now, enough of them seem not to.
 168
 169  Some compilers issue a lot of warnings about code that is
 170  dead/unreachable only on some platforms, and also about intentional
 171  uses of negation on unsigned types. All known cases of each can be
 172  ignored.
 173
 174  For a longer but out of date high-level description, see
 175     http://gee.cs.oswego.edu/dl/html/malloc.html
 176
 177* MSPACES
 178  If MSPACES is defined, then in addition to malloc, free, etc.,
 179  this file also defines mspace_malloc, mspace_free, etc. These
 180  are versions of malloc routines that take an "mspace" argument
 181  obtained using create_mspace, to control all internal bookkeeping.
 182  If ONLY_MSPACES is defined, only these versions are compiled.
 183  So if you would like to use this allocator for only some allocations,
 184  and your system malloc for others, you can compile with
 185  ONLY_MSPACES and then do something like...
 186    static mspace mymspace = create_mspace(0,0); // for example
 187    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
 188
 189  (Note: If you only need one instance of an mspace, you can instead
 190  use "USE_DL_PREFIX" to relabel the global malloc.)
 191
 192  You can similarly create thread-local allocators by storing
 193  mspaces as thread-locals. For example:
 194    static __thread mspace tlms = 0;
 195    void*  tlmalloc(size_t bytes) {
 196      if (tlms == 0) tlms = create_mspace(0, 0);
 197      return mspace_malloc(tlms, bytes);
 198    }
 199    void  tlfree(void* mem) { mspace_free(tlms, mem); }
 200
 201  Unless FOOTERS is defined, each mspace is completely independent.
 202  You cannot allocate from one and free to another (although
 203  conformance is only weakly checked, so usage errors are not always
 204  caught). If FOOTERS is defined, then each chunk carries around a tag
 205  indicating its originating mspace, and frees are directed to their
 206  originating spaces.
 207
 208 -------------------------  Compile-time options ---------------------------
 209
 210Be careful in setting #define values for numerical constants of type
 211size_t. On some systems, literal values are not automatically extended
 212to size_t precision unless they are explicitly cast. You can also
 213use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
 214
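For example, a constant of type size_t can be written with explicit casts,
in the same style as the defaults that appear later in this file (the MY_*
names below are only illustrative):

    #define MY_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
    #define MY_NEVER_TRIM     MAX_SIZE_T
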
 215WIN32                    default: defined if _WIN32 defined
 216  Defining WIN32 sets up defaults for MS environment and compilers.
 217  Otherwise defaults are for unix. Beware that there seem to be some
 218  cases where this malloc might not be a pure drop-in replacement for
 219  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
 220  SetDIBits()) may be due to bugs in some video driver implementations
 221  when pixel buffers are malloc()ed, and the region spans more than
 222  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
 223  default granularity, pixel buffers may straddle virtual allocation
 224  regions more often than when using the Microsoft allocator.  You can
 225  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
 226  buffers rather than using malloc().  If this is not possible,
 227  recompile this malloc with a larger DEFAULT_GRANULARITY.
 228
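  As a sketch, a rebuild with a 1MB granularity only needs a definition like
  the following ahead of the defaults below, for example on the compiler
  command line (the value must be a power of two and at least one page, as
  noted under DEFAULT_GRANULARITY):

    #define DEFAULT_GRANULARITY ((size_t)1024U * (size_t)1024U)
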
 229MALLOC_ALIGNMENT         default: (size_t)8
 230  Controls the minimum alignment for malloc'ed chunks.  It must be a
 231  power of two and at least 8, even on machines for which smaller
 232  alignments would suffice. It may be defined as larger than this
 233  though. Note however that code and data structures are optimized for
 234  the case of 8-byte alignment.
 235
 236MSPACES                  default: 0 (false)
 237  If true, compile in support for independent allocation spaces.
 238  This is only supported if HAVE_MMAP is true.
 239
 240ONLY_MSPACES             default: 0 (false)
 241  If true, only compile in mspace versions, not regular versions.
 242
 243USE_LOCKS                default: 0 (false)
 244  Causes each call to each public routine to be surrounded with
 245  pthread or WIN32 mutex lock/unlock. (If set true, this can be
 246  overridden on a per-mspace basis for mspace versions.) If set to a
 247  non-zero value other than 1, locks are used, but their
 248  implementation is left out, so lock functions must be supplied manually.
 249
 250USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and on x86 using gcc or MSC
 251  If true, uses custom spin locks for locking. This is currently
 252  supported only for x86 platforms using gcc or recent MS compilers.
 253  Otherwise, posix locks or win32 critical sections are used.
 254
 255FOOTERS                  default: 0
 256  If true, provide extra checking and dispatching by placing
 257  information in the footers of allocated chunks. This adds
 258  space and time overhead.
 259
 260INSECURE                 default: 0
 261  If true, omit checks for usage errors and heap space overwrites.
 262
 263USE_DL_PREFIX            default: NOT defined
 264  Causes compiler to prefix all public routines with the string 'dl'.
 265  This can be useful when you only want to use this malloc in one part
 266  of a program, using your regular system malloc elsewhere.
 267
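  For example (a sketch, assuming <stdlib.h> is included for the system
  malloc; the use_both wrapper is only illustrative), the two allocators can
  then be used side by side:

    void use_both(void) {
      void* p = dlmalloc(100);   // served by this allocator (dl-prefixed)
      void* q = malloc(100);     // served by the system C library
      dlfree(p);
      free(q);
    }
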
 268ABORT                    default: defined as abort()
 269  Defines how to abort on failed checks.  On most systems, a failed
 270  check cannot die with an "assert" or even print an informative
 271  message, because the underlying print routines in turn call malloc,
 272  which will fail again.  Generally, the best policy is to simply call
 273  abort(). It's not very useful to do more than this because many
 274  errors due to overwriting will show up as address faults (null, odd
 275  addresses etc) rather than malloc-triggered checks, so will also
 276  abort.  Also, most compilers know that abort() does not return, so
 277  can better optimize code conditionally calling it.
 278
 279PROCEED_ON_ERROR           default: defined as 0 (false)
 280  Controls whether detected bad addresses cause them to be bypassed
 281  rather than aborting. If set, detected bad arguments to free and
 282  realloc are ignored. And all bookkeeping information is zeroed out
 283  upon a detected overwrite of freed heap space, thus losing the
 284  ability to ever return it from malloc again, but enabling the
 285  application to proceed. If PROCEED_ON_ERROR is defined, the
 286  static variable malloc_corruption_error_count is compiled in
 287  and can be examined to see if errors have occurred. This option
 288  generates slower code than the default abort policy.
 289
 290DEBUG                    default: NOT defined
 291  The DEBUG setting is mainly intended for people trying to modify
 292  this code or diagnose problems when porting to new platforms.
 293  However, it may also be able to better isolate user errors than just
 294  using runtime checks.  The assertions in the check routines spell
 295  out in more detail the assumptions and invariants underlying the
 296  algorithms.  The checking is fairly extensive, and will slow down
 297  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
 298  set will attempt to check every non-mmapped allocated and free chunk
 299  in the course of computing the summaries.
 300
 301ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
 302  Debugging assertion failures can be nearly impossible if your
 303  version of the assert macro causes malloc to be called, which will
 304  lead to a cascade of further failures, blowing the runtime stack.
 305  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
 306  which will usually make debugging easier.
 307
 308MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
 309  The action to take before "return 0" when malloc fails to return
 310  memory because none is available.
 311
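  For example (a sketch; my_oom_hook is a hypothetical function you would
  supply yourself), a definition that records the failure before returning:

    #define MALLOC_FAILURE_ACTION  { errno = ENOMEM; my_oom_hook(); }
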
 312HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
 313  True if this system supports sbrk or an emulation of it.
 314
 315MORECORE                  default: sbrk
 316  The name of the sbrk-style system routine to call to obtain more
 317  memory.  See below for guidance on writing custom MORECORE
 318  functions. The type of the argument to sbrk/MORECORE varies across
 319  systems.  It cannot be size_t, because it supports negative
 320  arguments, so it is normally the signed type of the same width as
 321  size_t (sometimes declared as "intptr_t").  It doesn't much matter
 322  though. Internally, we only call it with arguments less than half
 323  the max value of a size_t, which should work across all reasonable
 324  possibilities, although sometimes generating compiler warnings.
 325
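  As a sketch of a hand-crafted MORECORE (all names here are illustrative,
  <stddef.h> is assumed for ptrdiff_t, and the sketch follows the usual sbrk
  conventions: return the previous break on success and (void*)-1 on failure,
  with a zero argument simply reporting the current break), memory can be
  carved from a static arena:

    #define MY_ARENA_SIZE ((size_t)1024U * (size_t)1024U)
    static char   my_arena[MY_ARENA_SIZE];   // backing store handed out in order
    static size_t my_brk = 0;                // current break offset into my_arena

    void* my_morecore(ptrdiff_t increment) {
      if (increment == 0)
        return my_arena + my_brk;            // report the current break
      if (increment > 0 && (size_t)increment <= MY_ARENA_SIZE - my_brk) {
        void* prev = my_arena + my_brk;      // previous break becomes the new block
        my_brk += (size_t)increment;
        return prev;
      }
      return (void*)(-1);                    // cannot extend; negative requests refused
    }

  It would be selected with "#define MORECORE my_morecore" and, because it
  refuses negative arguments, paired with MORECORE_CANNOT_TRIM (see below).
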
 326MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
 327  If true, take advantage of fact that consecutive calls to MORECORE
 328  with positive arguments always return contiguous increasing
 329  addresses.  This is true of unix sbrk. It does not hurt too much to
 330  set it true anyway, since malloc copes with non-contiguities.
 331  Setting it false when definitely non-contiguous saves time
 332  and possibly wasted space it would take to discover this though.
 333
 334MORECORE_CANNOT_TRIM      default: NOT defined
 335  True if MORECORE cannot release space back to the system when given
 336  negative arguments. This is generally necessary only if you are
 337  using a hand-crafted MORECORE function that cannot handle negative
 338  arguments.
 339
 340NO_SEGMENT_TRAVERSAL       default: 0
 341  If non-zero, suppresses traversals of memory segments
 342  returned by either MORECORE or CALL_MMAP. This disables
 343  merging of contiguous segments and the selective release of unused
 344  segments to the OS, but it bounds execution times.
 345
 346HAVE_MMAP                 default: 1 (true)
 347  True if this system supports mmap or an emulation of it.  If so, and
 348  HAVE_MORECORE is not true, MMAP is used for all system
 349  allocation. If set and HAVE_MORECORE is true as well, MMAP is
 350  primarily used to directly allocate very large blocks. It is also
 351  used as a backup strategy in cases where MORECORE fails to provide
 352  space from the system. Note: A single call to MUNMAP is assumed to be
 353  able to unmap memory that may have been allocated using multiple calls
 354  to MMAP, so long as they are adjacent.
 355
 356HAVE_MREMAP               default: 1 on linux, else 0
 357  If true, realloc() uses mremap() to re-allocate large blocks and
 358  extend or shrink allocation spaces.
 359
 360MMAP_CLEARS               default: 1 except on WINCE.
 361  True if mmap clears memory so calloc doesn't need to. This is true
 362  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
 363
 364USE_BUILTIN_FFS            default: 0 (i.e., not used)
 365  Causes malloc to use the builtin ffs() function to compute indices.
 366  Some compilers may recognize and intrinsify ffs to be faster than the
 367  supplied C version. Also, the case of x86 using gcc is special-cased
 368  to an asm instruction, so is already as fast as it can be, and so
 369  this setting has no effect. Similarly for Win32 under recent MS compilers.
 370  (On most x86s, the asm version is only slightly faster than the C version.)
 371
 372malloc_getpagesize         default: derive from system includes, or 4096.
 373  The system page size. To the extent possible, this malloc manages
 374  memory from the system in page-size units.  This may be (and
 375  usually is) a function rather than a constant. This is ignored
 376  if WIN32, where page size is determined using GetSystemInfo during
 377  initialization.
 378
 379USE_DEV_RANDOM             default: 0 (i.e., not used)
 380  Causes malloc to use /dev/random to initialize secure magic seed for
 381  stamping footers. Otherwise, the current time is used.
 382
 383NO_MALLINFO                default: 0
 384  If defined, don't compile "mallinfo". This can be a simple way
 385  of dealing with mismatches between system declarations and
 386  those in this file.
 387
 388MALLINFO_FIELD_TYPE        default: size_t
 389  The type of the fields in the mallinfo struct. This was originally
 390  defined as "int" in SVID etc, but is more usefully defined as
 391  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
 392
 393REALLOC_ZERO_BYTES_FREES    default: not defined
 394  This should be set if a call to realloc with zero bytes should
 395  be the same as a call to free. Some people think it should. Otherwise,
 396  because this malloc returns a unique pointer for malloc(0),
 397  realloc(p, 0) does too.
 398
 399LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
 400LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
 401LACKS_STDLIB_H                default: NOT defined unless on WIN32
 402  Define these if your system does not have these header files.
 403  You might need to manually insert some of the declarations they provide.
 404
 405DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
 406                                system_info.dwAllocationGranularity in WIN32,
 407                                otherwise 64K.
 408      Also settable using mallopt(M_GRANULARITY, x)
 409  The unit for allocating and deallocating memory from the system.  On
 410  most systems with contiguous MORECORE, there is no reason to
 411  make this more than a page. However, systems with MMAP tend to
 412  either require or encourage larger granularities.  You can increase
 413  this value to prevent system allocation functions from being called so
 414  often, especially if they are slow.  The value must be at least one
 415  page and must be a power of two.  Setting to 0 causes initialization
 416  to either page size or win32 region size.  (Note: In previous
 417  versions of malloc, the equivalent of this option was called
 418  "TOP_PAD")
 419
 420DEFAULT_TRIM_THRESHOLD    default: 2MB
 421      Also settable using mallopt(M_TRIM_THRESHOLD, x)
 422  The maximum amount of unused top-most memory to keep before
 423  releasing via malloc_trim in free().  Automatic trimming is mainly
 424  useful in long-lived programs using contiguous MORECORE.  Because
 425  trimming via sbrk can be slow on some systems, and can sometimes be
 426  wasteful (in cases where programs immediately afterward allocate
 427  more large chunks) the value should be high enough so that your
 428  overall system performance would improve by releasing this much
 429  memory.  As a rough guide, you might set to a value close to the
 430  average size of a process (program) running on your system.
 431  Releasing this much memory would allow such a process to run in
 432  memory.  Generally, it is worth tuning trim thresholds when a
 433  program undergoes phases where several large chunks are allocated
 434  and released in ways that can reuse each other's storage, perhaps
 435  mixed with phases where there are no such chunks at all. The trim
 436  value must be greater than page size to have any useful effect.  To
 437  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
 438  some people use of mallocing a huge space and then freeing it at
 439  program startup, in an attempt to reserve system memory, doesn't
 440  have the intended effect under automatic trimming, since that memory
 441  will immediately be returned to the system.
 442
 443DEFAULT_MMAP_THRESHOLD       default: 256K
 444      Also settable using mallopt(M_MMAP_THRESHOLD, x)
 445  The request size threshold for using MMAP to directly service a
 446  request. Requests of at least this size that cannot be allocated
 447  using already-existing space will be serviced via mmap.  (If enough
 448  normal freed space already exists it is used instead.)  Using mmap
 449  segregates relatively large chunks of memory so that they can be
 450  individually obtained and released from the host system. A request
 451  serviced through mmap is never reused by any other request (at least
 452  not directly; the system may just so happen to remap successive
 453  requests to the same locations).  Segregating space in this way has
 454  the benefits that: Mmapped space can always be individually released
 455  back to the system, which helps keep the system level memory demands
 456  of a long-lived program low.  Also, mapped memory doesn't become
 457  `locked' between other chunks, as can happen with normally allocated
 458  chunks, which means that even trimming via malloc_trim would not
 459  release them.  However, it has the disadvantage that the space
 460  cannot be reclaimed, consolidated, and then used to service later
 461  requests, as happens with normal chunks.  The advantages of mmap
 462  nearly always outweigh disadvantages for "large" chunks, but the
 463  value of "large" may vary across systems.  The default is an
 464  empirically derived value that works well in most systems. You can
 465  disable mmap by setting to MAX_SIZE_T.
 466
 467MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
 468  The number of consolidated frees between checks to release
 469  unused segments when freeing. When using non-contiguous segments,
 470  especially with multiple mspaces, checking only for topmost space
 471  doesn't always suffice to trigger trimming. To compensate for this,
 472  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
 473  current number of segments, if greater), try to release unused
 474  segments to the OS when freeing chunks that result in
 475  consolidation. The best value for this parameter is a compromise
 476  between slowing down frees with relatively costly checks that
 477  rarely trigger versus holding on to unused memory. To effectively
 478  disable, set to MAX_SIZE_T. This may lead to a very slight speed
 479  improvement at the expense of carrying around more memory.
 480*/
 481
 482/* Version identifier to allow people to support multiple versions */
 483#ifndef DLMALLOC_VERSION
 484#define DLMALLOC_VERSION 20804
 485#endif /* DLMALLOC_VERSION */
 486
 487#ifndef WIN32
 488#ifdef _WIN32
 489#define WIN32 1
 490#endif  /* _WIN32 */
 491#ifdef _WIN32_WCE
 492#define LACKS_FCNTL_H
 493#define WIN32 1
 494#endif /* _WIN32_WCE */
 495#endif  /* WIN32 */
 496#ifdef WIN32
 497#define WIN32_LEAN_AND_MEAN
 498#define _WIN32_WINNT 0x403
 499#include <windows.h>
 500#define HAVE_MMAP 1
 501#define HAVE_MORECORE 0
 502#define LACKS_UNISTD_H
 503#define LACKS_SYS_PARAM_H
 504#define LACKS_SYS_MMAN_H
 505#define LACKS_STRING_H
 506#define LACKS_STRINGS_H
 507#define LACKS_SYS_TYPES_H
 508#define LACKS_ERRNO_H
 509#ifndef MALLOC_FAILURE_ACTION
 510#define MALLOC_FAILURE_ACTION
 511#endif /* MALLOC_FAILURE_ACTION */
 512#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
 513#define MMAP_CLEARS 0
 514#else
 515#define MMAP_CLEARS 1
 516#endif /* _WIN32_WCE */
 517#endif  /* WIN32 */
 518
 519#if defined(DARWIN) || defined(_DARWIN)
 520/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
 521#ifndef HAVE_MORECORE
 522#define HAVE_MORECORE 0
 523#define HAVE_MMAP 1
 524/* OSX allocators provide 16 byte alignment */
 525#ifndef MALLOC_ALIGNMENT
 526#define MALLOC_ALIGNMENT ((size_t)16U)
 527#endif
 528#endif  /* HAVE_MORECORE */
 529#endif  /* DARWIN */
 530
 531#ifndef LACKS_SYS_TYPES_H
 532#include <sys/types.h>  /* For size_t */
 533#endif  /* LACKS_SYS_TYPES_H */
 534
 535/* The maximum possible size_t value has all bits set */
 536#define MAX_SIZE_T           (~(size_t)0)
 537
 538#ifndef ONLY_MSPACES
 539#define ONLY_MSPACES 0     /* define to a value */
 540#else
 541#define ONLY_MSPACES 1
 542#endif  /* ONLY_MSPACES */
 543#ifndef MSPACES
 544#if ONLY_MSPACES
 545#define MSPACES 1
 546#else   /* ONLY_MSPACES */
 547#define MSPACES 0
 548#endif  /* ONLY_MSPACES */
 549#endif  /* MSPACES */
 550#ifndef MALLOC_ALIGNMENT
 551#define MALLOC_ALIGNMENT ((size_t)8U)
 552#endif  /* MALLOC_ALIGNMENT */
 553#ifndef FOOTERS
 554#define FOOTERS 0
 555#endif  /* FOOTERS */
 556#ifndef ABORT
 557#define ABORT  abort()
 558#endif  /* ABORT */
 559#ifndef ABORT_ON_ASSERT_FAILURE
 560#define ABORT_ON_ASSERT_FAILURE 1
 561#endif  /* ABORT_ON_ASSERT_FAILURE */
 562#ifndef PROCEED_ON_ERROR
 563#define PROCEED_ON_ERROR 0
 564#endif  /* PROCEED_ON_ERROR */
 565#ifndef USE_LOCKS
 566#define USE_LOCKS 0
 567#endif  /* USE_LOCKS */
 568#ifndef USE_SPIN_LOCKS
 569#if USE_LOCKS && ((defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))) || (defined(_MSC_VER) && _MSC_VER>=1310))
 570#define USE_SPIN_LOCKS 1
 571#else
 572#define USE_SPIN_LOCKS 0
 573#endif /* USE_LOCKS && ... */
 574#endif /* USE_SPIN_LOCKS */
 575#ifndef INSECURE
 576#define INSECURE 0
 577#endif  /* INSECURE */
 578#ifndef HAVE_MMAP
 579#define HAVE_MMAP 1
 580#endif  /* HAVE_MMAP */
 581#ifndef MMAP_CLEARS
 582#define MMAP_CLEARS 1
 583#endif  /* MMAP_CLEARS */
 584#ifndef HAVE_MREMAP
 585#ifdef linux
 586#define HAVE_MREMAP 1
 587#else   /* linux */
 588#define HAVE_MREMAP 0
 589#endif  /* linux */
 590#endif  /* HAVE_MREMAP */
 591#ifndef MALLOC_FAILURE_ACTION
 592#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
 593#endif  /* MALLOC_FAILURE_ACTION */
 594#ifndef HAVE_MORECORE
 595#if ONLY_MSPACES
 596#define HAVE_MORECORE 0
 597#else   /* ONLY_MSPACES */
 598#define HAVE_MORECORE 1
 599#endif  /* ONLY_MSPACES */
 600#endif  /* HAVE_MORECORE */
 601#if !HAVE_MORECORE
 602#define MORECORE_CONTIGUOUS 0
 603#else   /* !HAVE_MORECORE */
 604#define MORECORE_DEFAULT sbrk
 605#ifndef MORECORE_CONTIGUOUS
 606#define MORECORE_CONTIGUOUS 1
 607#endif  /* MORECORE_CONTIGUOUS */
 608#endif  /* HAVE_MORECORE */
 609#ifndef DEFAULT_GRANULARITY
 610#if (MORECORE_CONTIGUOUS || defined(WIN32))
 611#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
 612#else   /* MORECORE_CONTIGUOUS */
 613#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
 614#endif  /* MORECORE_CONTIGUOUS */
 615#endif  /* DEFAULT_GRANULARITY */
 616#ifndef DEFAULT_TRIM_THRESHOLD
 617#ifndef MORECORE_CANNOT_TRIM
 618#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
 619#else   /* MORECORE_CANNOT_TRIM */
 620#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
 621#endif  /* MORECORE_CANNOT_TRIM */
 622#endif  /* DEFAULT_TRIM_THRESHOLD */
 623#ifndef DEFAULT_MMAP_THRESHOLD
 624#if HAVE_MMAP
 625#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
 626#else   /* HAVE_MMAP */
 627#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
 628#endif  /* HAVE_MMAP */
 629#endif  /* DEFAULT_MMAP_THRESHOLD */
 630#ifndef MAX_RELEASE_CHECK_RATE
 631#if HAVE_MMAP
 632#define MAX_RELEASE_CHECK_RATE 4095
 633#else
 634#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
 635#endif /* HAVE_MMAP */
 636#endif /* MAX_RELEASE_CHECK_RATE */
 637#ifndef USE_BUILTIN_FFS
 638#define USE_BUILTIN_FFS 0
 639#endif  /* USE_BUILTIN_FFS */
 640#ifndef USE_DEV_RANDOM
 641#define USE_DEV_RANDOM 0
 642#endif  /* USE_DEV_RANDOM */
 643#ifndef NO_MALLINFO
 644#define NO_MALLINFO 0
 645#endif  /* NO_MALLINFO */
 646#ifndef MALLINFO_FIELD_TYPE
 647#define MALLINFO_FIELD_TYPE size_t
 648#endif  /* MALLINFO_FIELD_TYPE */
 649#ifndef NO_SEGMENT_TRAVERSAL
 650#define NO_SEGMENT_TRAVERSAL 0
 651#endif /* NO_SEGMENT_TRAVERSAL */
 652
 653/*
 654  mallopt tuning options.  SVID/XPG defines four standard parameter
 655  numbers for mallopt, normally defined in malloc.h.  None of these
 656  are used in this malloc, so setting them has no effect. But this
 657  malloc does support the following options.
 658*/
 659
 660#define M_TRIM_THRESHOLD     (-1)
 661#define M_GRANULARITY        (-2)
 662#define M_MMAP_THRESHOLD     (-3)
 663
 664/* ------------------------ Mallinfo declarations ------------------------ */
 665
 666#if !NO_MALLINFO
 667/*
 668  This version of malloc supports the standard SVID/XPG mallinfo
 669  routine that returns a struct containing usage properties and
 670  statistics. It should work on any system that has a
 671  /usr/include/malloc.h defining struct mallinfo.  The main
 672  declaration needed is the mallinfo struct that is returned (by-copy)
 673  by mallinfo().  The mallinfo struct contains a bunch of fields that
 674  are not even meaningful in this version of malloc.  These fields
 675  are instead filled by mallinfo() with other numbers that might be of
 676  interest.
 677
 678  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
 679  /usr/include/malloc.h file that includes a declaration of struct
 680  mallinfo.  If so, it is included; else a compliant version is
 681  declared below.  These must be precisely the same for mallinfo() to
 682  work.  The original SVID version of this struct, defined on most
 683  systems with mallinfo, declares all fields as ints. But some others
 684  define as unsigned long. If your system defines the fields using a
 685  type of different width than listed here, you MUST #include your
 686  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
 687*/
 688
 689/* #define HAVE_USR_INCLUDE_MALLOC_H */
 690
 691#ifdef HAVE_USR_INCLUDE_MALLOC_H
 692#include "/usr/include/malloc.h"
 693#else /* HAVE_USR_INCLUDE_MALLOC_H */
 694#ifndef STRUCT_MALLINFO_DECLARED
 695#define STRUCT_MALLINFO_DECLARED 1
 696struct mallinfo {
 697  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
 698  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
 699  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
 700  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
 701  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
 702  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
 703  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
 704  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
 705  MALLINFO_FIELD_TYPE fordblks; /* total free space */
 706  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
 707};
 708#endif /* STRUCT_MALLINFO_DECLARED */
 709#endif /* HAVE_USR_INCLUDE_MALLOC_H */
 710#endif /* NO_MALLINFO */
 711
 712/*
 713  Try to persuade compilers to inline. The most critical functions for
 714  inlining are defined as macros, so these aren't used for them.
 715*/
 716
 717#ifndef FORCEINLINE
 718  #if defined(__GNUC__)
 719#define FORCEINLINE __inline __attribute__ ((always_inline))
 720  #elif defined(_MSC_VER)
 721    #define FORCEINLINE __forceinline
 722  #endif
 723#endif
 724#ifndef NOINLINE
 725  #if defined(__GNUC__)
 726    #define NOINLINE __attribute__ ((noinline))
 727  #elif defined(_MSC_VER)
 728    #define NOINLINE __declspec(noinline)
 729  #else
 730    #define NOINLINE
 731  #endif
 732#endif
 733
 734#ifdef __cplusplus
 735extern "C" {
 736#ifndef FORCEINLINE
 737 #define FORCEINLINE inline
 738#endif
 739#endif /* __cplusplus */
 740#ifndef FORCEINLINE
 741 #define FORCEINLINE
 742#endif
 743
 744#if !ONLY_MSPACES
 745
 746/* ------------------- Declarations of public routines ------------------- */
 747
 748#ifndef USE_DL_PREFIX
 749#define dlcalloc               calloc
 750#define dlfree                 free
 751#define dlmalloc               malloc
 752#define dlmemalign             memalign
 753#define dlrealloc              realloc
 754#define dlvalloc               valloc
 755#define dlpvalloc              pvalloc
 756#define dlmallinfo             mallinfo
 757#define dlmallopt              mallopt
 758#define dlmalloc_trim          malloc_trim
 759#define dlmalloc_stats         malloc_stats
 760#define dlmalloc_usable_size   malloc_usable_size
 761#define dlmalloc_footprint     malloc_footprint
 762#define dlmalloc_max_footprint malloc_max_footprint
 763#define dlindependent_calloc   independent_calloc
 764#define dlindependent_comalloc independent_comalloc
 765#endif /* USE_DL_PREFIX */
 766
 767
 768/*
 769  malloc(size_t n)
 770  Returns a pointer to a newly allocated chunk of at least n bytes, or
 771  null if no space is available, in which case errno is set to ENOMEM
 772  on ANSI C systems.
 773
 774  If n is zero, malloc returns a minimum-sized chunk. (The minimum
 775  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
 776  systems.)  Note that size_t is an unsigned type, so calls with
 777  arguments that would be negative if signed are interpreted as
 778  requests for huge amounts of space, which will often fail. The
 779  maximum supported value of n differs across systems, but is in all
 780  cases less than the maximum representable value of a size_t.
 781*/
 782void* dlmalloc(size_t);
 783
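/*
  For example (a sketch), checking the result of a request:

    size_t n = 1000;               // illustrative request size
    void*  p = dlmalloc(n);
    if (p == 0) {
      // no space: on ANSI C systems errno is now ENOMEM
    } else {
      dlfree(p);
    }
*/
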
 784/*
 785  free(void* p)
 786  Releases the chunk of memory pointed to by p, that had been previously
 787  allocated using malloc or a related routine such as realloc.
 788  It has no effect if p is null. If p was not malloced or already
 789  freed, free(p) will by default cause the current program to abort.
 790*/
 791void  dlfree(void*);
 792
 793/*
 794  calloc(size_t n_elements, size_t element_size);
 795  Returns a pointer to n_elements * element_size bytes, with all locations
 796  set to zero.
 797*/
 798void* dlcalloc(size_t, size_t);
 799
 800/*
 801  realloc(void* p, size_t n)
 802  Returns a pointer to a chunk of size n that contains the same data
 803  as does chunk p up to the minimum of (n, p's size) bytes, or null
 804  if no space is available.
 805
 806  The returned pointer may or may not be the same as p. The algorithm
 807  prefers extending p in most cases when possible, otherwise it
 808  employs the equivalent of a malloc-copy-free sequence.
 809
 810  If p is null, realloc is equivalent to malloc.
 811
 812  If space is not available, realloc returns null, errno is set (if on
 813  ANSI) and p is NOT freed.
 814
 815  if n is for fewer bytes than already held by p, the newly unused
 816  space is lopped off and freed if possible.  realloc with a size
 817  argument of zero (re)allocates a minimum-sized chunk.
 818
 819  The old unix realloc convention of allowing the last-free'd chunk
 820  to be used as an argument to realloc is not supported.
 821*/
 822
 823void* dlrealloc(void*, size_t);
 824
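/*
  For example (a sketch; p and newsize stand for whatever you currently hold):
  because a failed realloc leaves p allocated, it is safest to store the
  result in a temporary before overwriting the old pointer:

    void* tmp = dlrealloc(p, newsize);
    if (tmp != 0)
      p = tmp;    // success: p now refers to the resized chunk
    // on failure p is unchanged, still valid, and still owns its old contents
*/
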
 825/*
 826  memalign(size_t alignment, size_t n);
 827  Returns a pointer to a newly allocated chunk of n bytes, aligned
 828  in accord with the alignment argument.
 829
 830  The alignment argument should be a power of two. If the argument is
 831  not a power of two, the nearest greater power is used.
 832  8-byte alignment is guaranteed by normal malloc calls, so don't
 833  bother calling memalign with an argument of 8 or less.
 834
 835  Overreliance on memalign is a sure way to fragment space.
 836*/
 837void* dlmemalign(size_t, size_t);
 838
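/*
  For example (a sketch; 64 is just an illustrative power of two larger than
  the default 8-byte alignment):

    void* p = dlmemalign(64, 1000);   // at least 1000 bytes, 64-byte aligned
    dlfree(p);                        // memaligned chunks are freed normally
*/
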
 839/*
 840  valloc(size_t n);
 841  Equivalent to memalign(pagesize, n), where pagesize is the page
 842  size of the system. If the pagesize is unknown, 4096 is used.
 843*/
 844void* dlvalloc(size_t);
 845
 846/*
 847  mallopt(int parameter_number, int parameter_value)
 848  Sets tunable parameters. The format is to provide a
 849  (parameter-number, parameter-value) pair.  mallopt then sets the
 850  corresponding parameter to the argument value if it can (i.e., so
 851  long as the value is meaningful), and returns 1 if successful else
 852  0.  To work around the fact that mallopt is specified to use int,
 853  not size_t parameters, the value -1 is specially treated as the
 854  maximum unsigned size_t value.
 855
 856  SVID/XPG/ANSI defines four standard param numbers for mallopt,
 857  normally defined in malloc.h.  None of these are used in this malloc,
 858  so setting them has no effect. But this malloc also supports other
 859  options in mallopt. See below for details.  Briefly, supported
 860  parameters are as follows (listed defaults are for "typical"
 861  configurations).
 862
 863  Symbol            param #  default    allowed param values
 864  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
 865  M_GRANULARITY        -2     page size   any power of 2 >= page size
 866  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
 867*/
 868int dlmallopt(int, int);
 869
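/*
  For example (a sketch using the parameters listed above; -1 stands for the
  maximum size_t value, and the granularity assumes a page size of 64K or less):

    dlmallopt(M_TRIM_THRESHOLD, -1);            // never trim automatically
    dlmallopt(M_GRANULARITY,    64 * 1024);     // allocate from the system in 64K units
    dlmallopt(M_MMAP_THRESHOLD, 1024 * 1024);   // mmap requests of 1MB and larger

  Each call returns 1 if the parameter was accepted, else 0.
*/
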
 870/*
 871  malloc_footprint();
 872  Returns the number of bytes obtained from the system.  The total
 873  number of bytes allocated by malloc, realloc etc., is less than this
 874  value. Unlike mallinfo, this function returns only a precomputed
 875  result, so can be called frequently to monitor memory consumption.
 876  Even if locks are otherwise defined, this function does not use them,
 877  so results might not be up to date.
 878*/
 879size_t dlmalloc_footprint(void);
 880
 881/*
 882  malloc_max_footprint();
 883  Returns the maximum number of bytes obtained from the system. This
 884  value will be greater than current footprint if deallocated space
 885  has been reclaimed by the system. The peak number of bytes allocated
 886  by malloc, realloc etc., is less than this value. Unlike mallinfo,
 887  this function returns only a precomputed result, so can be called
 888  frequently to monitor memory consumption.  Even if locks are
 889  otherwise defined, this function does not use them, so results might
 890  not be up to date.
 891*/
 892size_t dlmalloc_max_footprint(void);
 893
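/*
  For example (a sketch), a cheap way to watch memory consumption:

    size_t now  = dlmalloc_footprint();      // bytes currently obtained from the system
    size_t peak = dlmalloc_max_footprint();  // high-water mark of the same quantity
*/
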
 894#if !NO_MALLINFO
 895/*
 896  mallinfo()
 897  Returns (by copy) a struct containing various summary statistics:
 898
 899  arena:     current total non-mmapped bytes allocated from system
 900  ordblks:   the number of free chunks
 901  smblks:    always zero.
 902  hblks:     current number of mmapped regions
 903  hblkhd:    total bytes held in mmapped regions
 904  usmblks:   the maximum total allocated space. This will be greater
 905                than current total if trimming has occurred.
 906  fsmblks:   always zero
 907  uordblks:  current total allocated space (normal or mmapped)
 908  fordblks:  total free space
 909  keepcost:  the maximum number of bytes that could ideally be released
 910               back to system via malloc_trim. ("ideally" means that
 911               it ignores page restrictions etc.)
 912
 913  Because these fields are ints, but internal bookkeeping may
 914  be kept as longs, the reported values may wrap around zero and
 915  thus be inaccurate.
 916*/
 917struct mallinfo dlmallinfo(void);
 918#endif /* NO_MALLINFO */
 919
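/*
  For example (a sketch, usable when NO_MALLINFO is 0 and <stdio.h> is
  included), printing a few of the fields described above:

    struct mallinfo mi = dlmallinfo();
    printf("in use: %lu  free: %lu  trimmable: %lu\n",
           (unsigned long)mi.uordblks,
           (unsigned long)mi.fordblks,
           (unsigned long)mi.keepcost);
*/
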
 920/*
 921  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
 922
 923  independent_calloc is similar to calloc, but instead of returning a
 924  single cleared space, it returns an array of pointers to n_elements
 925  independent elements that can hold contents of size elem_size, each
 926  of which starts out cleared, and can be independently freed,
 927  realloc'ed etc. The elements are guaranteed to be adjacently
 928  allocated (this is not guaranteed to occur with multiple callocs or
 929  mallocs), which may also improve cache locality in some
 930  applications.
 931
 932  The "chunks" argument is optional (i.e., may be null, which is
 933  probably the most typical usage). If it is null, the returned array
 934  is itself dynamically allocated and should also be freed when it is
 935  no longer needed. Otherwise, the chunks array must be of at least
 936  n_elements in length. It is filled in with the pointers to the
 937  chunks.
 938
 939  In either case, independent_calloc returns this pointer array, or
 940  null if the allocation failed.  If n_elements is zero and "chunks"
 941  is null, it returns a chunk representing an array with zero elements
 942  (which should be freed if not wanted).
 943
 944  Each element must be individually freed when it is no longer
 945  needed. If you'd like to instead be able to free all at once, you
 946  should instead use regular calloc and assign pointers into this
 947  space to represent elements.  (In this case though, you cannot
 948  independently free elements.)
 949
 950  independent_calloc simplifies and speeds up implementations of many
 951  kinds of pools.  It may also be useful when constructing large data
 952  structures that initially have a fixed number of fixed-sized nodes,
 953  but the number is not known at compile time, and some of the nodes
 954  may later need to be freed. For example:
 955
 956  struct Node { int item; struct Node* next; };
 957
 958  struct Node* build_list() {
 959    struct Node** pool;
 960    int n = read_number_of_nodes_needed();
 961    if (n <= 0) return 0;
 962    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
 963    if (pool == 0) die();
 964    // organize into a linked list...
 965    struct Node* first = pool[0];
 966    for (int i = 0; i < n-1; ++i)
 967      pool[i]->next = pool[i+1];
 968    free(pool);     // Can now free the array (or not, if it is needed later)
 969    return first;
 970  }
 971*/
 972void** dlindependent_calloc(size_t, size_t, void**);
 973
 974/*
 975  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
 976
 977  independent_comalloc allocates, all at once, a set of n_elements
 978  chunks with sizes indicated in the "sizes" array.    It returns
 979  an array of pointers to these elements, each of which can be
 980  independently freed, realloc'ed etc. The elements are guaranteed to
 981  be adjacently allocated (this is not guaranteed to occur with
 982  multiple callocs or mallocs), which may also improve cache locality
 983  in some applications.
 984
 985  The "chunks" argument is optional (i.e., may be null). If it is null
 986  the returned array is itself dynamically allocated and should also
 987  be freed when it is no longer needed. Otherwise, the chunks array
 988  must be of at least n_elements in length. It is filled in with the
 989  pointers to the chunks.
 990
 991  In either case, independent_comalloc returns this pointer array, or
 992  null if the allocation failed.  If n_elements is zero and chunks is
 993  null, it returns a chunk representing an array with zero elements
 994  (which should be freed if not wanted).
 995
 996  Each element must be individually freed when it is no longer
 997  needed. If you'd like to instead be able to free all at once, you
 998  should instead use a single regular malloc, and assign pointers at
 999  particular offsets in the aggregate space. (In this case though, you
1000  cannot independently free elements.)
1001
1002  independent_comalloc differs from independent_calloc in that each
1003  element may have a different size, and also that it does not
1004  automatically clear elements.
1005
1006  independent_comalloc can be used to speed up allocation in cases
1007  where several structs or objects must always be allocated at the
1008  same time.  For example:
1009
1010  struct Head { ... };
1011  struct Foot { ... };
1012
1013  void send_message(char* msg) {
1014    int msglen = strlen(msg);
1015    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1016    void* chunks[3];
1017    if (independent_comalloc(3, sizes, chunks) == 0)
1018      die();
1019    struct Head* head = (struct Head*)(chunks[0]);
1020    char*        body = (char*)(chunks[1]);
1021    struct Foot* foot = (struct Foot*)(chunks[2]);
1022    // ...
1023  }
1024
1025  In general though, independent_comalloc is worth using only for
1026  larger values of n_elements. For small values, you probably won't
1027  detect enough difference from series of malloc calls to bother.
1028
1029  Overuse of independent_comalloc can increase overall memory usage,
1030  since it cannot reuse existing noncontiguous small chunks that
1031  might be available for some of the elements.
1032*/
1033void** dlindependent_comalloc(size_t, size_t*, void**);
1034
1035
1036/*
1037  pvalloc(size_t n);
1038  Equivalent to valloc(minimum-page-that-holds(n)), that is,
1039  n rounded up to the nearest pagesize.
1040 */
1041void*  dlpvalloc(size_t);
1042
1043/*
1044  malloc_trim(size_t pad);
1045
1046  If possible, gives memory back to the system (via negative arguments
1047  to sbrk) if there is unused memory at the `high' end of the malloc
1048  pool or in unused MMAP segments. You can call this after freeing
1049  large blocks of memory to potentially reduce the system-level memory
1050  requirements of a program. However, it cannot guarantee to reduce
1051  memory. Under some allocation patterns, some large free blocks of
1052  memory will be locked between two used chunks, so they cannot be
1053  given back to the system.
1054
1055  The `pad' argument to malloc_trim represents the amount of free
1056  trailing space to leave untrimmed. If this argument is zero, only
1057  the minimum amount of memory to maintain internal data structures
1058  will be left. Non-zero arguments can be supplied to maintain enough
1059  trailing space to service future expected allocations without having
1060  to re-obtain memory from the system.
1061
1062  Malloc_trim returns 1 if it actually released any memory, else 0.
1063*/
1064int  dlmalloc_trim(size_t);
1065
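/*
  For example (a sketch): after dropping a large working buffer, you can ask
  the allocator to hand unused space back to the system:

    void* big = dlmalloc((size_t)8 * 1024 * 1024);   // illustrative 8MB buffer
    // ... use the buffer ...
    dlfree(big);
    int released = dlmalloc_trim(0);   // 1 if any memory went back to the system
*/
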
1066/*
1067  malloc_stats();
1068  Prints on stderr the amount of space obtained from the system (both
1069  via sbrk and mmap), the maximum amount (which may be more than
1070  current if malloc_trim and/or munmap got called), and the current
1071  number of bytes allocated via malloc (or realloc, etc) but not yet
1072  freed. Note that this is the number of bytes allocated, not the
1073  number requested. It will be larger than the number requested
1074  because of alignment and bookkeeping overhead. Because it includes
1075  alignment wastage as being in use, this figure may be greater than
1076  zero even when no user-level chunks are allocated.
1077
1078  The reported current and maximum system memory can be inaccurate if
1079  a program makes other calls to system memory allocation functions
1080  (normally sbrk) outside of malloc.
1081
1082  malloc_stats prints only the most commonly interesting statistics.
1083  More information can be obtained by calling mallinfo.
1084*/
1085void  dlmalloc_stats(void);
1086
1087#endif /* ONLY_MSPACES */
1088
1089/*
1090  malloc_usable_size(void* p);
1091
1092  Returns the number of bytes you can actually use in
1093  an allocated chunk, which may be more than you requested (although
1094  often not) due to alignment and minimum size constraints.
1095  You can use this many bytes without worrying about
1096  overwriting other allocated objects. This is not a particularly great
1097  programming practice. malloc_usable_size can be more useful in
1098  debugging and assertions, for example:
1099
1100  p = malloc(n);
1101  assert(malloc_usable_size(p) >= 256);
1102*/
1103size_t dlmalloc_usable_size(void*);
1104
1105
1106#if MSPACES
1107
1108/*
1109  mspace is an opaque type representing an independent
1110  region of space that supports mspace_malloc, etc.
1111*/
1112typedef void* mspace;
1113
1114/*
1115  create_mspace creates and returns a new independent space with the
1116  given initial capacity, or, if 0, the default granularity size.  It
1117  returns null if there is no system memory available to create the
1118  space.  If argument locked is non-zero, the space uses a separate
1119  lock to control access. The capacity of the space will grow
1120  dynamically as needed to service mspace_malloc requests.  You can
1121  control the sizes of incremental increases of this space by
1122  compiling with a different DEFAULT_GRANULARITY or dynamically
1123  setting with mallopt(M_GRANULARITY, value).
1124*/
1125mspace create_mspace(size_t capacity, int locked);
1126
1127/*
1128  destroy_mspace destroys the given space, and attempts to return all
1129  of its memory back to the system, returning the total number of
1130  bytes freed. After destruction, the results of access to all memory
1131  used by the space become undefined.
1132*/
1133size_t destroy_mspace(mspace msp);
1134
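/*
  For example (a sketch of the typical lifecycle):

    mspace ms = create_mspace(0, 0);      // default initial capacity, no locking
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);   // allocate within this space only
      mspace_free(ms, p);                 // free back to the same space
      destroy_mspace(ms);                 // return all of its memory to the system
    }
*/
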
1135/*
1136  create_mspace_with_base uses the memory supplied as the initial base
1137  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1138  space is used for bookkeeping, so the capacity must be at least this
1139  large. (Otherwise 0 is returned.) When this initial space is
1140  exhausted, additional memory will be obtained from the system.
1141  Destroying this space will deallocate all additionally allocated
1142  space (if possible) but not the initial base.
1143*/
1144mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1145
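/*
  For example (a sketch; the buffer size is illustrative, but it must exceed
  the bookkeeping overhead of less than 128*sizeof(size_t) bytes noted above):

    static char arena[64 * 1024];         // caller-supplied initial base
    mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
    if (ms != 0) {
      void* p = mspace_malloc(ms, 100);   // served from arena until it is exhausted
      mspace_free(ms, p);
      destroy_mspace(ms);                 // does not deallocate arena itself
    }
*/
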
1146/*
1147  mspace_mmap_large_chunks controls whether requests for large chunks
1148  are allocated in their own mmapped regions, separate from others in
1149  this mspace. By default this is enabled, which reduces
1150  fragmentation. However, such chunks are not necessarily released to
1151  the system upon destroy_mspace.  Disabling by setting to false may
1152  increase fragmentation, but avoids leakage when relying on
1153  destroy_mspace to release all memory allocated using this space.
1154*/
1155int mspace_mmap_large_chunks(mspace msp, int enable);
1156
1157
1158/*
1159  mspace_malloc behaves as malloc, but operates within
1160  the given space.
1161*/
1162void* mspace_malloc(mspace msp, size_t bytes);
1163
1164/*
1165  mspace_free behaves as free, but operates within
1166  the given space.
1167
1168  If compiled with FOOTERS==1, mspace_free is not actually needed.
1169  free may be called instead of mspace_free because freed chunks from
1170  any space are handled by their originating spaces.
1171*/
1172void mspace_free(mspace msp, void* mem);
1173
1174/*
1175  mspace_realloc behaves as realloc, but operates within
1176  the given space.
1177
1178  If compiled with FOOTERS==1, mspace_realloc is not actually
1179  needed.  realloc may be called instead of mspace_realloc because
1180  realloced chunks from any space are handled by their originating
1181  spaces.
1182*/
1183void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1184
1185/*
1186  mspace_calloc behaves as calloc, but operates within
1187  the given space.
1188*/
1189void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1190
1191/*
1192  mspace_memalign behaves as memalign, but operates within
1193  the given space.
1194*/
1195void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1196
1197/*
1198  mspace_independent_calloc behaves as independent_calloc, but
1199  operates within the given space.
1200*/
1201void** mspace_independent_calloc(mspace msp, size_t n_elements,
1202                                 size_t elem_size, void* chunks[]);
1203
1204/*
1205  mspace_independent_comalloc behaves as independent_comalloc, but
1206  operates within the given space.
1207*/
1208void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1209                                   size_t sizes[], void* chunks[]);
1210
1211/*
1212  mspace_footprint() returns the number of bytes obtained from the
1213  system for this space.
1214*/
1215size_t mspace_footprint(mspace msp);
1216
1217/*
1218  mspace_max_footprint() returns the peak number of bytes obtained from the
1219  system for this space.
1220*/
1221size_t mspace_max_footprint(mspace msp);
1222
1223
1224#if !NO_MALLINFO
1225/*
1226  mspace_mallinfo behaves as mallinfo, but reports properties of
1227  the given space.
1228*/
1229struct mallinfo mspace_mallinfo(mspace msp);
1230#endif /* NO_MALLINFO */
1231
1232/*
1233  mspace_usable_size(void* p) behaves the same as malloc_usable_size(p).
1234*/
1235size_t mspace_usable_size(void* mem);
1236
1237/*
1238  mspace_malloc_stats behaves as malloc_stats, but reports
1239  properties of the given space.
1240*/
1241void mspace_malloc_stats(mspace msp);
1242
1243/*
1244  mspace_trim behaves as malloc_trim, but
1245  operates within the given space.
1246*/
1247int mspace_trim(mspace msp, size_t pad);
1248
1249/*
1250  An alias for mallopt.
1251*/
1252int mspace_mallopt(int, int);
1253
1254#endif /* MSPACES */
1255
1256#ifdef __cplusplus
1257};  /* end of extern "C" */
1258#endif /* __cplusplus */
1259
1260/*
1261  ========================================================================
1262  To make a fully customizable malloc.h header file, cut everything
1263  above this line, put into file malloc.h, edit to suit, and #include it
1264  on the next line, as well as in programs that use this malloc.
1265  ========================================================================
1266*/
1267
1268/* #include "malloc.h" */
1269
1270/*------------------------------ internal #includes ---------------------- */
1271
1272#ifdef WIN32
1273#ifndef __GNUC__
1274#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1275#endif
1276#endif /* WIN32 */
1277
1278#include <stdio.h>       /* for printing in malloc_stats */
1279
1280#ifndef LACKS_ERRNO_H
1281#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1282#endif /* LACKS_ERRNO_H */
1283#if FOOTERS
1284#include <time.h>        /* for magic initialization */
1285#endif /* FOOTERS */
1286#ifndef LACKS_STDLIB_H
1287#include <stdlib.h>      /* for abort() */
1288#endif /* LACKS_STDLIB_H */
1289#ifdef DEBUG
1290#if ABORT_ON_ASSERT_FAILURE
1291#define assert(x) if(!(x)) ABORT
1292#else /* ABORT_ON_ASSERT_FAILURE */
1293#include <assert.h>
1294#endif /* ABORT_ON_ASSERT_FAILURE */
1295#else  /* DEBUG */
1296#ifndef assert
1297#define assert(x)
1298#endif
1299#define DEBUG 0
1300#endif /* DEBUG */
1301#ifndef LACKS_STRING_H
1302#include <string.h>      /* for memset etc */
1303#endif  /* LACKS_STRING_H */
1304#if USE_BUILTIN_FFS
1305#ifndef LACKS_STRINGS_H
1306#include <strings.h>     /* for ffs */
1307#endif /* LACKS_STRINGS_H */
1308#endif /* USE_BUILTIN_FFS */
1309#if HAVE_MMAP
1310#ifndef LACKS_SYS_MMAN_H
1311#include <sys/mman.h>    /* for mmap */
1312#endif /* LACKS_SYS_MMAN_H */
1313#ifndef LACKS_FCNTL_H
1314#include <fcntl.h>
1315#endif /* LACKS_FCNTL_H */
1316#endif /* HAVE_MMAP */
1317#ifndef LACKS_UNISTD_H
1318#include <unistd.h>     /* for sbrk, sysconf */
1319#else /* LACKS_UNISTD_H */
1320#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1321extern void*     sbrk(ptrdiff_t);
1322#endif /* FreeBSD etc */
1323#endif /* LACKS_UNISTD_H */
1324
1325/* Declarations for locking */
1326#if USE_LOCKS
1327#ifndef WIN32
1328#include <pthread.h>
1329#if defined (__SVR4) && defined (__sun)  /* solaris */
1330#include <thread.h>
1331#endif /* solaris */
1332#else
1333#ifndef _M_AMD64
1334/* These are already defined on AMD64 builds */
1335#ifdef __cplusplus
1336extern "C" {
1337#endif /* __cplusplus */
1338#ifndef __MINGW32__
1339LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1340LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1341#endif
1342#ifdef __cplusplus
1343}
1344#endif /* __cplusplus */
1345#endif /* _M_AMD64 */
1346#ifndef __MINGW32__
1347#pragma intrinsic (_InterlockedCompareExchange)
1348#pragma intrinsic (_InterlockedExchange)
1349#else
1350  /* --[ start GCC compatibility ]----------------------------------------------
1351   * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic
1352   * Microsoft Visual C++ functions. Originally developed for the ReactOS
1353   * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>)
1354   * projects.
1355   *
1356   * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com>
1357   *
1358   * Permission is hereby granted, free of charge, to any person obtaining a
1359   * copy of this software and associated documentation files (the "Software"),
1360   * to deal in the Software without restriction, including without limitation
1361   * the rights to use, copy, modify, merge, publish, distribute, sublicense,
1362   * and/or sell copies of the Software, and to permit persons to whom the
1363   * Software is furnished to do so, subject to the following conditions:
1364   *
1365   * The above copyright notice and this permission notice shall be included in
1366   * all copies or substantial portions of the Software.
1367   *
1368   * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
1369   * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
1370   * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
1371   * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
1372   * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
1373   * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
1374   * DEALINGS IN THE SOFTWARE.
1375   */
1376
1377  /*** Atomic operations ***/
1378  #if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
1379    #define _ReadWriteBarrier() __sync_synchronize()
1380  #else
1381    static __inline__ __attribute__((always_inline)) long __sync_lock_test_and_set(volatile long * const Target, const long Value)
1382    {
1383      long res;
1384      __asm__ __volatile__("xchg%z0 %2, %0" : "=g" (*(Target)), "=r" (res) : "1" (Value));
1385      return res;
1386    }
1387    static void __inline__ __attribute__((always_inline)) _MemoryBarrier(void)
1388    {
1389      __asm__ __volatile__("" : : : "memory");
1390    }
1391    #define _ReadWriteBarrier() _MemoryBarrier()
1392  #endif
1393  /* BUGBUG: GCC only supports full barriers */
1394  static __inline__ __attribute__((always_inline)) long _InterlockedExchange(volatile long * const Target, const long Value)
1395  {
1396    /* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */
1397    _ReadWriteBarrier();
1398    return __sync_lock_test_and_set(Target, Value);
1399  }
1400  /* --[ end GCC compatibility ]---------------------------------------------- */
1401#endif
1402#define interlockedcompareexchange _InterlockedCompareExchange
1403#define interlockedexchange _InterlockedExchange
1404#endif /* Win32 */
1405#endif /* USE_LOCKS */
1406
1407/* Declarations for bit scanning on win32 */
1408#if defined(_MSC_VER) && _MSC_VER>=1300
1409#ifndef BitScanForward  /* Try to avoid pulling in WinNT.h */
1410#ifdef __cplusplus
1411extern "C" {
1412#endif /* __cplusplus */
1413unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1414unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1415#ifdef __cplusplus
1416}
1417#endif /* __cplusplus */
1418
1419#define BitScanForward _BitScanForward
1420#define BitScanReverse _BitScanReverse
1421#pragma intrinsic(_BitScanForward)
1422#pragma intrinsic(_BitScanReverse)
1423#endif /* BitScanForward */
1424#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1425
1426#ifndef WIN32
1427#ifndef malloc_getpagesize
1428#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1429#    ifndef _SC_PAGE_SIZE
1430#      define _SC_PAGE_SIZE _SC_PAGESIZE
1431#    endif
1432#  endif
1433#  ifdef _SC_PAGE_SIZE
1434#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1435#  else
1436#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1437       extern size_t getpagesize();
1438#      define malloc_getpagesize getpagesize()
1439#    else
1440#      ifdef WIN32 /* use supplied emulation of getpagesize */
1441#        define malloc_getpagesize getpagesize()
1442#      else
1443#        ifndef LACKS_SYS_PARAM_H
1444#          include <sys/param.h>
1445#        endif
1446#        ifdef EXEC_PAGESIZE
1447#          define malloc_getpagesize EXEC_PAGESIZE
1448#        else
1449#          ifdef NBPG
1450#            ifndef CLSIZE
1451#              define malloc_getpagesize NBPG
1452#            else
1453#              define malloc_getpagesize (NBPG * CLSIZE)
1454#            endif
1455#          else
1456#            ifdef NBPC
1457#              define malloc_getpagesize NBPC
1458#            else
1459#              ifdef PAGESIZE
1460#                define malloc_getpagesize PAGESIZE
1461#              else /* just guess */
1462#                define malloc_getpagesize ((size_t)4096U)
1463#              endif
1464#            endif
1465#          endif
1466#        endif
1467#      endif
1468#    endif
1469#  endif
1470#endif
1471#endif
1472
1473
1474
1475/* ------------------- size_t and alignment properties -------------------- */
1476
1477/* The byte and bit size of a size_t */
1478#define SIZE_T_SIZE         (sizeof(size_t))
1479#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1480
1481/* Some constants coerced to size_t */
1482/* Annoying but necessary to avoid errors on some platforms */
1483#define SIZE_T_ZERO         ((size_t)0)
1484#define SIZE_T_ONE          ((size_t)1)
1485#define SIZE_T_TWO          ((size_t)2)
1486#define SIZE_T_FOUR         ((size_t)4)
1487#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1488#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1489#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1490#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1491
1492/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1493#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1494
1495/* True if address a has acceptable alignment */
1496#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1497
1498/* the number of bytes to offset an address to align it */
1499#define align_offset(A)\
1500 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1501  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
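/*
  Worked example (illustrative): with the default MALLOC_ALIGNMENT of 8,
  CHUNK_ALIGN_MASK is 7.  For an address whose low bits are ...100 (e.g.
  0x1004), (size_t)A & 7 == 4, so align_offset(A) == (8 - 4) & 7 == 4,
  i.e. adding 4 bytes reaches the next 8-byte boundary.  For an already
  aligned address the first branch yields 0.
*/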
1502
1503/* -------------------------- MMAP preliminaries ------------------------- */
1504
1505/*
1506   If HAVE_MORECORE or HAVE_MMAP is false, we just define the
1507   corresponding calls and checks to fail, so the compiler optimizer
1508   can delete the dead code rather than our needing so many "#if"s.
1509*/
1510
1511
1512/* MORECORE and MMAP must return MFAIL on failure */
1513#define MFAIL                ((void*)(MAX_SIZE_T))
1514#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1515
1516#if HAVE_MMAP
1517
1518#ifndef WIN32
1519#define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
1520#define MMAP_PROT            (PROT_READ|PROT_WRITE)
1521#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1522#define MAP_ANONYMOUS        MAP_ANON
1523#endif /* MAP_ANON */
1524#ifdef MAP_ANONYMOUS
1525#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1526#define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1527#else /* MAP_ANONYMOUS */
1528/*
1529   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1530   is unlikely to be needed, but is supplied just in case.
1531*/
1532#define MMAP_FLAGS           (MAP_PRIVATE)
1533static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1534#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1535           (dev_zero_fd = open("/dev/zero", O_RDWR), \
1536            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1537            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1538#endif /* MAP_ANONYMOUS */
1539
1540#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1541
1542#else /* WIN32 */
1543
1544/* Win32 MMAP via VirtualAlloc */
1545static FORCEINLINE void* win32mmap(size_t size) {
1546  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1547  return (ptr != 0)? ptr: MFAIL;
1548}
1549
1550/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1551static FORCEINLINE void* win32direct_mmap(size_t size) {
1552  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1553                           PAGE_READWRITE);
1554  return (ptr != 0)? ptr: MFAIL;
1555}
1556
1557/* This function supports releasing coalesced segments */
1558static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1559  MEMORY_BASIC_INFORMATION minfo;
1560  char* cptr = (char*)ptr;
1561  while (size) {
1562    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1563      return -1;
1564    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1565        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1566      return -1;
1567    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1568      return -1;
1569    cptr += minfo.RegionSize;
1570    size -= minfo.RegionSize;
1571  }
1572  return 0;
1573}
1574
1575#define MMAP_DEFAULT(s)             win32mmap(s)
1576#define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
1577#define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
1578#endif /* WIN32 */
1579#endif /* HAVE_MMAP */
1580
1581#if HAVE_MREMAP
1582#ifndef WIN32
1583#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1584#endif /* WIN32 */
1585#endif /* HAVE_MREMAP */
1586
1587
1588/**
1589 * Define CALL_MORECORE
1590 */
1591#if HAVE_MORECORE
1592    #ifdef MORECORE
1593        #define CALL_MORECORE(S)    MORECORE(S)
1594    #else  /* MORECORE */
1595        #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
1596    #endif /* MORECORE */
1597#else  /* HAVE_MORECORE */
1598    #define CALL_MORECORE(S)        MFAIL
1599#endif /* HAVE_MORECORE */
1600
1601/**
1602 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1603 */
1604#if HAVE_MMAP
1605    #define IS_MMAPPED_BIT          (SIZE_T_ONE)
1606    #define USE_MMAP_BIT            (SIZE_T_ONE)
1607
1608    #ifdef MMAP
1609        #define CALL_MMAP(s)        MMAP(s)
1610    #else /* MMAP */
1611        #define CALL_MMAP(s)        MMAP_DEFAULT(s)
1612    #endif /* MMAP */
1613    #ifdef MUNMAP
1614        #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
1615    #else /* MUNMAP */
1616        #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
1617    #endif /* MUNMAP */
1618    #ifdef DIRECT_MMAP
1619        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1620    #else /* DIRECT_MMAP */
1621        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1622    #endif /* DIRECT_MMAP */
1623#else  /* HAVE_MMAP */
1624    #define IS_MMAPPED_BIT          (SIZE_T_ZERO)
1625    #define USE_MMAP_BIT            (SIZE_T_ZERO)
1626
1627    #define MMAP(s)                 MFAIL
1628    #define MUNMAP(a, s)            (-1)
1629    #define DIRECT_MMAP(s)          MFAIL
1630    #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
1631    #define CALL_MMAP(s)            MMAP(s)
1632    #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
1633#endif /* HAVE_MMAP */
1634
1635/**
1636 * Define CALL_MREMAP
1637 */
1638#if HAVE_MMAP && HAVE_MREMAP
1639    #ifdef MREMAP
1640        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1641    #else /* MREMAP */
1642        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1643    #endif /* MREMAP */
1644#else  /* HAVE_MMAP && HAVE_MREMAP */
1645    #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
1646#endif /* HAVE_MMAP && HAVE_MREMAP */
1647
1648/* mstate bit set if contiguous morecore disabled or failed */
1649#define USE_NONCONTIGUOUS_BIT (4U)
1650
1651/* segment bit set in create_mspace_with_base */
1652#define EXTERN_BIT            (8U)
1653
1654
1655/* --------------------------- Lock preliminaries ------------------------ */
1656
1657/*
1658  When locks are defined, there is one global lock, plus
1659  one per-mspace lock.
1660
1661  The global lock ensures that mparams.magic and other unique
1662  mparams values are initialized only once. It also protects
1663  sequences of calls to MORECORE.  In many cases sys_alloc requires
1664  two calls that should not be interleaved with calls by other
1665  threads.  This does not protect against direct calls to MORECORE
1666  by other threads not using this lock, so there is still code to
1667  cope as best we can with interference.
1668
1669  Per-mspace locks surround calls to malloc, free, etc.  To enable use
1670  in layered extensions, per-mspace locks are reentrant.
1671
1672  Because lock-protected regions generally have bounded times, it is
1673  OK to use the supplied simple spinlocks in the custom versions for
1674  x86.
1675
1676  If USE_LOCKS is > 1, the definitions of lock routines here are
1677  bypassed, in which case you will need to define at least
1678  INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK
1679  (which is not used in this malloc, but commonly needed in
1680  extensions.)
1681*/
1682
1683#if USE_LOCKS == 1
1684
1685#if USE_SPIN_LOCKS
1686#ifndef WIN32
1687
1688/* Custom pthread-style spin locks on x86 and x64 for gcc */
1689struct pthread_mlock_t {
1690  volatile unsigned int l;
1691  volatile unsigned int c;
1692  volatile pthread_t threadid;
1693};
1694#define MLOCK_T struct        pthread_mlock_t
1695#define CURRENT_THREAD        pthread_self()
1696#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
1697#define ACQUIRE_LOCK(sl)      pthread_acquire_lock(sl)
1698#define RELEASE_LOCK(sl)      pthread_release_lock(sl)
1699#define TRY_LOCK(sl)          pthread_try_lock(sl)
1700#define SPINS_PER_YIELD       63
1701
1702static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1703
1704static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
1705  int spins = 0;
1706  volatile unsigned int* lp = &sl->l;
1707  for (;;) {
1708    if (*lp != 0) {
1709      if (sl->threadid == CURRENT_THREAD) {
1710        ++sl->c;
1711        return 0;
1712      }
1713    }
1714    else {
1715      /* place args to cmpxchgl in locals to evade oddities in some gccs */
1716      int cmp = 0;
1717      int val = 1;
1718      int ret;
1719      __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1720                             : "=a" (ret)
1721                             : "r" (val), "m" (*(lp)), "0"(cmp)
1722                             : "memory", "cc");
1723      if (!ret) {
1724        assert(!sl->threadid);
1725        sl->c = 1;
1726        sl->threadid = CURRENT_THREAD;
1727        return 0;
1728      }
1729      if ((++spins & SPINS_PER_YIELD) == 0) {
1730#if defined (__SVR4) && defined (__sun) /* solaris */
1731        thr_yield();
1732#else
1733#if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
1734        sched_yield();
1735#else  /* no-op yield on unknown systems */
1736        ;
1737#endif /* __linux__ || __FreeBSD__ || __APPLE__ */
1738#endif /* solaris */
1739      }
1740    }
1741  }
1742}
1743
1744static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
1745  assert(sl->l != 0);
1746  assert(sl->threadid == CURRENT_THREAD);
1747  if (--sl->c == 0) {
1748    sl->threadid = 0;
1749    volatile unsigned int* lp = &sl->l;
1750    int prev = 0;
1751    int ret;
1752    __asm__ __volatile__ ("lock; xchgl %0, %1"
1753                          : "=r" (ret)
1754                          : "m" (*(lp)), "0"(prev)
1755                          : "memory");
1756  }
1757}
1758
1759static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
1760  volatile unsigned int* lp = &sl->l;
1761  if (*lp != 0) {
1762      if (sl->threadid == CURRENT_THREAD) {
1763        ++sl->c;
1764        return 1;
1765      }
1766  }
1767  else {
1768    int cmp = 0;
1769    int val = 1;
1770    int ret;
1771    __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1772                           : "=a" (ret)
1773                           : "r" (val), "m" (*(lp)), "0"(cmp)
1774                           : "memory", "cc");
1775    if (!ret) {
1776      assert(!sl->threadid);
1777      sl->c = 1;
1778      sl->threadid = CURRENT_THREAD;
1779      return 1;
1780    }
1781  }
1782  return 0;
1783}
1784
1785
1786#else /* WIN32 */
1787/* Custom win32-style spin locks on x86 and x64 for MSC */
1788struct win32_mlock_t
1789{
1790  volatile long l;
1791  volatile unsigned int c;
1792  volatile long threadid;
1793};
1794
1795#define MLOCK_T               struct win32_mlock_t
1796#define CURRENT_THREAD        win32_getcurrentthreadid()
1797#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
1798#define ACQUIRE_LOCK(sl)      win32_acquire_lock(sl)
1799#define RELEASE_LOCK(sl)      win32_release_lock(sl)
1800#define TRY_LOCK(sl)          win32_try_lock(sl)
1801#define SPINS_PER_YIELD       63
1802
1803static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1804
1805static FORCEINLINE long win32_getcurrentthreadid() {
1806#ifdef _MSC_VER
1807#if defined(_M_IX86)
1808  long *threadstruct=(long *)__readfsdword(0x18);
1809  long threadid=threadstruct[0x24/sizeof(long)];
1810  return threadid;
1811#elif defined(_M_X64)
1812  /* todo */
1813  return GetCurrentThreadId();
1814#else
1815  return GetCurrentThreadId();
1816#endif
1817#else
1818  return GetCurrentThreadId();
1819#endif
1820}
1821
1822static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
1823  int spins = 0;
1824  for (;;) {
1825    if (sl->l != 0) {
1826      if (sl->threadid == CURRENT_THREAD) {
1827        ++sl->c;
1828        return 0;
1829      }
1830    }
1831    else {
1832      if (!interlockedexchange(&sl->l, 1)) {
1833        assert(!sl->threadid);
1835        sl->threadid = CURRENT_THREAD;
1836        sl->c = 1;
1837        return 0;
1838      }
1839    }
1840    if ((++spins & SPINS_PER_YIELD) == 0)
1841      SleepEx(0, FALSE);
1842  }
1843}
1844
1845static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
1846  assert(sl->threadid == CURRENT_THREAD);
1847  assert(sl->l != 0);
1848  if (--sl->c == 0) {
1849    sl->threadid = 0;
1850    interlockedexchange (&sl->l, 0);
1851  }
1852}
1853
1854static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
1855  if(sl->l != 0) {
1856      if (sl->threadid == CURRENT_THREAD) {
1857        ++sl->c;
1858        return 1;
1859      }
1860  }
1861  else {
1862    if (!interlockedexchange(&sl->l, 1)){
1863      assert(!sl->threadid);
1864      sl->threadid = CURRENT_THREAD;
1865      sl->c = 1;
1866      return 1;
1867    }
1868  }
1869  return 0;
1870}
1871
1872#endif /* WIN32 */
1873#else /* USE_SPIN_LOCKS */
1874
1875#ifndef WIN32
1876/* pthreads-based locks */
1877
1878#define MLOCK_T               pthread_mutex_t
1879#define CURRENT_THREAD        pthread_self()
1880#define INITIAL_LOCK(sl)      pthread_init_lock(sl)
1881#define ACQUIRE_LOCK(sl)      pthread_mutex_lock(sl)
1882#define RELEASE_LOCK(sl)      pthread_mutex_unlock(sl)
1883#define TRY_LOCK(sl)          (!pthread_mutex_trylock(sl))
1884
1885static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
1886
1887/* Cope with old-style linux recursive lock initialization by adding */
1888/* skipped internal declaration from pthread.h */
1889#ifdef linux
1890#ifndef PTHREAD_MUTEX_RECURSIVE
1891extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
1892                                           int __kind));
1893#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
1894#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
1895#endif
1896#endif
1897
1898static int pthread_init_lock (MLOCK_T *sl) {
1899  pthread_mutexattr_t attr;
1900  if (pthread_mutexattr_init(&attr)) return 1;
1901  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
1902  if (pthread_mutex_init(sl, &attr)) return 1;
1903  if (pthread_mutexattr_destroy(&attr)) return 1;
1904  return 0;
1905}
1906
1907#else /* WIN32 */
1908/* Win32 critical sections */
1909#define MLOCK_T               CRITICAL_SECTION
1910#define CURRENT_THREAD        GetCurrentThreadId()
1911#define INITIAL_LOCK(s)       (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000))
1912#define ACQUIRE_LOCK(s)       (EnterCriticalSection(s), 0)
1913#define RELEASE_LOCK(s)       LeaveCriticalSection(s)
1914#define TRY_LOCK(s)           TryEnterCriticalSection(s)
1915#define NEED_GLOBAL_LOCK_INIT
1916
1917static MLOCK_T malloc_global_mutex;
1918static volatile long malloc_global_mutex_status;
1919
1920/* Use spin loop to initialize global lock */
1921static void init_malloc_global_mutex() {
1922  for (;;) {
1923    long stat = malloc_global_mutex_status;
1924    if (stat > 0)
1925      return;
1926    /* transition to < 0 while initializing, then to > 0 */
1927    if (stat == 0 &&
1928        interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1929      InitializeCriticalSection(&malloc_global_mutex);
1930      interlockedexchange(&malloc_global_mutex_status,1);
1931      return;
1932    }
1933    SleepEx(0, FALSE);
1934  }
1935}
1936
1937#endif /* WIN32 */
1938#endif /* USE_SPIN_LOCKS */
1939#endif /* USE_LOCKS == 1 */
1940
1941/* -----------------------  User-defined locks ------------------------ */
1942
1943#if USE_LOCKS > 1
1944/* Define your own lock implementation here */
1945/* #define INITIAL_LOCK(sl)  ... */
1946/* #define ACQUIRE_LOCK(sl)  ... */
1947/* #define RELEASE_LOCK(sl)  ... */
1948/* #define TRY_LOCK(sl) ... */
1949/* static MLOCK_T malloc_global_mutex = ... */
1950#endif /* USE_LOCKS > 1 */
1951
1952/* -----------------------  Lock-based state ------------------------ */
1953
1954#if USE_LOCKS
1955#define USE_LOCK_BIT               (2U)
1956#else  /* USE_LOCKS */
1957#define USE_LOCK_BIT               (0U)
1958#define INITIAL_LOCK(l)
1959#endif /* USE_LOCKS */
1960
1961#if USE_LOCKS
1962#define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
1963#define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
1964#else  /* USE_LOCKS */
1965#define ACQUIRE_MALLOC_GLOBAL_LOCK()
1966#define RELEASE_MALLOC_GLOBAL_LOCK()
1967#endif /* USE_LOCKS */
1968
1969
1970/* -----------------------  Chunk representations ------------------------ */
1971
1972/*
1973  (The following includes lightly edited explanations by Colin Plumb.)
1974
1975  The malloc_chunk declaration below is misleading (but accurate and
1976  necessary).  It declares a "view" into memory allowing access to
1977  necessary fields at known offsets from a given base.
1978
1979  Chunks of memory are maintained using a `boundary tag' method as
1980  originally described by Knuth.  (See the paper by Paul Wilson
1981  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1982  techniques.)  Sizes of free chunks are stored both in the front of
1983  each chunk and at the end.  This makes consolidating fragmented
1984  chunks into bigger chunks fast.  The head fields also hold bits
1985  representing whether chunks are free or in use.
1986
1987  Here are some pictures to make it clearer.  They are "exploded" to
1988  show that the state of a chunk can be thought of as extending from
1989  the high 31 bits of the head field of its header through the
1990  prev_foot and PINUSE_BIT bit of the following chunk header.
1991
1992  A chunk that's in use looks like:
1993
1994   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1995           | Size of previous chunk (if P = 0)                             |
1996           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1997         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1998         | Size of this chunk                                         1| +-+
1999   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2000         |                                                               |
2001         +-                                                             -+
2002         |                                                               |
2003         +-                                                             -+
2004         |                                                               :
2005         +-      size - sizeof(size_t) available payload bytes          -+
2006         :                                                               |
2007 chunk-> +-                                                             -+
2008         |                                                               |
2009         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2010       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2011       | Size of next chunk (may or may not be in use)               | +-+
2012 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2013
2014    And if it's free, it looks like this:
2015
2016   chunk-> +-                                                             -+
2017           | User payload (must be in use, or we would have merged!)       |
2018           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2019         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2020         | Size of this chunk                                         0| +-+
2021   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2022         | Next pointer                                                  |
2023         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2024         | Prev pointer                                                  |
2025         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2026         |                                                               :
2027         +-      size - sizeof(struct chunk) unused bytes               -+
2028         :                                                               |
2029 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2030         | Size of this chunk                                            |
2031         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2032       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2033       | Size of next chunk (must be in use, or we would have merged)| +-+
2034 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2035       |                                                               :
2036       +- User payload                                                -+
2037       :                                                               |
2038       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2039                                                                     |0|
2040                                                                     +-+
2041  Note that since we always merge adjacent free chunks, the chunks
2042  adjacent to a free chunk must be in use.
2043
2044  Given a pointer to a chunk (which can be derived trivially from the
2045  payload pointer) we can, in O(1) time, find out whether the adjacent
2046  chunks are free, and if so, unlink them from the lists that they
2047  are on and merge them with the current chunk.
2048
2049  Chunks always begin on even word boundaries, so the mem portion
2050  (which is returned to the user) is also on an even word boundary, and
2051  thus at least double-word aligned.
2052
2053  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2054  chunk size (which is always a multiple of two words), is an in-use
2055  bit for the *previous* chunk.  If that bit is *clear*, then the
2056  word before the current chunk size contains the previous chunk
2057  size, and can be used to find the front of the previous chunk.
2058  The very first chunk allocated always has this bit set, preventing
2059  access to non-existent (or non-owned) memory. If pinuse is set for
2060  any given chunk, then you CANNOT determine the size of the
2061  previous chunk, and might even get a memory addressing fault when
2062  trying to do so.
2063
2064  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2065  the chunk size redundantly records whether the current chunk is
2066  inuse. This redundancy enables usage checks within free and realloc,
2067  and reduces indirection when freeing and consolidating chunks.
2068
2069  Each freshly allocated chunk must have both cinuse and pinuse set.
2070  That is, each allocated chunk borders either a previously allocated
2071  and still in-use chunk, or the base of its memory arena. This is
2072  ensured by making all allocations from the `lowest' part of any
2073  found chunk.  Further, no free chunk physically borders another one,
2074  so each free chunk is known to be preceded and followed by either
2075  inuse chunks or the ends of memory.
2076
2077  Note that the `foot' of the current chunk is actually represented
2078  as the prev_foot of the NEXT chunk. This makes it easier to
2079  deal with alignments etc but can be very confusing when trying
2080  to extend or adapt this code.
2081
2082  The exceptions to all this are
2083
2084     1. The special chunk `top' is the top-most available chunk (i.e.,
2085        the one bordering the end of available memory). It is treated
2086        specially.  Top is never included in any bin, is used only if
2087        no other chunk is available, and is released back to the
2088        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
2089        the top chunk is treated as larger (and thus less well
2090        fitting) than any other available chunk.  The top chunk
2091        doesn't update its trailing size field since there is no next
2092        contiguous chunk that would have to index off it. However,
2093        space is still allocated for it (TOP_FOOT_SIZE) to enable
2094        separation or merging when space is extended.
2095
2096     2. Chunks allocated via mmap, which have the lowest-order bit
2097        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
2098        PINUSE_BIT in their head fields.  Because they are allocated
2099        one-by-one, each must carry its own prev_foot field, which is
2100        also used to hold the offset this chunk has within its mmapped
2101        region, which is needed to preserve alignment. Each mmapped
2102        chunk is trailed by the first two fields of a fake next-chunk
2103        for sake of usage checks.
2104
2105*/
2106
2107struct malloc_chunk {
2108  size_t               prev_foot;  /* Size of previous chunk (if free).  */
2109  size_t               head;       /* Size and inuse bits. */
2110  struct malloc_chunk* fd;         /* double links -- used only if free. */
2111  struct malloc_chunk* bk;
2112};
2113
2114typedef struct malloc_chunk  mchunk;
2115typedef struct malloc_chunk* mchunkptr;
2116typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
2117typedef unsigned int bindex_t;         /* Described below */
2118typedef unsigned int binmap_t;         /* Described below */
2119typedef unsigned int flag_t;           /* The type of various bit flag sets */
2120
2121/* ------------------- Chunks sizes and alignments ----------------------- */
2122
2123#define MCHUNK_SIZE         (sizeof(mchunk))
2124
2125#if FOOTERS
2126#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
2127#else /* FOOTERS */
2128#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
2129#endif /* FOOTERS */
2130
2131/* MMapped chunks need a second word of overhead ... */
2132#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2133/* ... and additional padding for fake next-chunk at foot */
2134#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
2135
2136/* The smallest size we can malloc is an aligned minimal chunk */
2137#define MIN_CHUNK_SIZE\
2138  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2139
2140/* conversion from malloc headers to user pointers, and back */
2141#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
2142#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2143/* chunk associated with aligned address A */
2144#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
2145
2146/* Bounds on request (not chunk) sizes. */
2147#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
2148#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2149
2150/* pad request bytes into a usable size */
2151#define pad_request(req) \
2152   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2153
2154/* pad request, checking for minimum (but not maximum) */
2155#define request2size(req) \
2156  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
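/*
  Worked example (illustrative; assumes 4-byte size_t, 8-byte alignment,
  and FOOTERS disabled, so CHUNK_OVERHEAD == 4 and MIN_CHUNK_SIZE == 16):

     request2size(0)   == 16    (below MIN_REQUEST, so minimum chunk)
     request2size(12)  == 16    (12 + 4 overhead, already a multiple of 8)
     request2size(13)  == 24    (13 + 4 overhead, rounded up to 24)
     request2size(100) == 104   (100 + 4 overhead, already a multiple of 8)

  Every result is a multiple of MALLOC_ALIGNMENT and includes room for
  the size/status head word.
*/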
2157
2158
2159/* ------------------ Operations on head and foot fields ----------------- */
2160
2161/*
2162  The head field of a chunk is or'ed with PINUSE_BIT when the previous
2163  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
2164  use. If the chunk was obtained with mmap, the prev_foot field has
2165  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
2166  mmapped region to the base of the chunk.
2167
2168  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2169*/
2170
2171#define PINUSE_BIT          (SIZE_T_ONE)
2172#define CINUSE_BIT          (SIZE_T_TWO)
2173#define FLAG4_BIT           (SIZE_T_FOUR)
2174#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
2175#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2176
2177/* Head value for fenceposts */
2178#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
2179
2180/* extraction of fields from head words */
2181#define cinuse(p)           ((p)->head & CINUSE_BIT)
2182#define pinuse(p)           ((p)->head & PINUSE_BIT)
2183#define chunksize(p)        ((p)->head & ~(FLAG_BITS))
2184
2185#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
2186#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
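/*
  Worked example (illustrative): an in-use chunk of size 48 whose
  predecessor is also in use has head == 48 | PINUSE_BIT | CINUSE_BIT
  == 51.  chunksize() masks off FLAG_BITS to recover 48, while cinuse()
  and pinuse() report the two status bits individually.
*/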
2187
2188/* Treat space at ptr +/- offset as a chunk */
2189#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
2190#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2191
2192/* Ptr to next or previous physical malloc_chunk. */
2193#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2194#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2195
2196/* extract next chunk's pinuse bit */
2197#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
2198
2199/* Get/set size at footer */
2200#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2201#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2202
2203/* Set size, pinuse bit, and foot */
2204#define set_size_and_pinuse_of_free_chunk(p, s)\
2205  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2206
2207/* Set size, pinuse bit, foot, and clear next pinuse */
2208#define set_free_with_pinuse(p, s, n)\
2209  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2210
2211#define is_mmapped(p)\
2212  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
2213
2214/* Get the internal overhead associated with chunk p */
2215#define overhead_for(p)\
2216 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2217
2218/* Return true if malloced space is not necessarily cleared */
2219#if MMAP_CLEARS
2220#define calloc_must_clear(p) (!is_mmapped(p))
2221#else /* MMAP_CLEARS */
2222#define calloc_must_clear(p) (1)
2223#endif /* MMAP_CLEARS */
2224
2225/* ---------------------- Overlaid data structures ----------------------- */
2226
2227/*
2228  When chunks are not in use, they are treated as nodes of either
2229  lists or trees.
2230
2231  "Small"  chunks are stored in circular doubly-linked lists, and look
2232  like this:
2233
2234    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2235            |             Size of previous chunk                            |
2236            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2237    `head:' |             Size of chunk, in bytes                         |P|
2238      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2239            |             Forward pointer to next chunk in list             |
2240            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2241            |             Back pointer to previous chunk in list            |
2242            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2243            |             Unused space (may be 0 bytes long)                .
2244            .                                                               .
2245            .                                                               |
2246nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2247    `foot:' |             Size of chunk, in bytes                           |
2248            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2249
2250  Larger chunks are kept in a form of bitwise digital trees (aka
2251  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
2252  free chunks greater than 256 bytes, their size doesn't impose any
2253  constraints on user chunk sizes.  Each node looks like:
2254
2255    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2256            |             Size of previous chunk                            |
2257            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2258    `head:' |             Size of chunk, in bytes                         |P|
2259      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2260            |             Forward pointer to next chunk of same size        |
2261            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2262            |             Back pointer to previous chunk of same size       |
2263            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2264            |             Pointer to left child (child[0])                  |
2265            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2266            |             Pointer to right child (child[1])                 |
2267            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2268            |             Pointer to parent                                 |
2269            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2270            |             bin index of this chunk                           |
2271            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2272            |             Unused space                                      .
2273            .                                                               |
2274nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2275    `foot:' |             Size of chunk, in bytes                           |
2276            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2277
2278  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
2279  of the same size are arranged in a circularly-linked list, with only
2280  the oldest chunk (the next to be used, in our FIFO ordering)
2281  actually in the tree.  (Tree members are distinguished by a non-null
2282  parent pointer.)  If a chunk with the same size as an existing node
2283  is inserted, it is linked off the existing node using pointers that
2284  work in the same way as fd/bk pointers of small chunks.
2285
2286  Each tree contains a power of 2 sized range of chunk sizes (the
2287  smallest is 0x100 <= x < 0x180), which is divided in half at each
2288  tree level, with the chunks in the smaller half of the range (0x100
2289  <= x < 0x140 for the top node) in the left subtree and the larger
2290  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
2291  done by inspecting individual bits.
2292
2293  Using these rules, each node's left subtree contains all smaller
2294  sizes than its right subtree.  However, the node at the root of each
2295  subtree has no particular ordering relationship to either.  (The
2296  dividing line between the subtree sizes is based on trie relation.)
2297  If we remove the last chunk of a given size from the interior of the
2298  tree, we need to replace it with a leaf node.  The tree ordering
2299  rules permit a node to be replaced by any leaf below it.
2300
2301  The smallest chunk in a tree (a common operation in a best-fit
2302  allocator) can be found by walking a path to the leftmost leaf in
2303  the tree.  Unlike a usual binary tree, where we follow left child
2304  pointers until we reach a null, here we follow the right child
2305  pointer any time the left one is null, until we reach a leaf with
2306  both child pointers null. The smallest chunk in the tree will be
2307  somewhere along that path.
2308
2309  The worst case number of steps to add, find, or remove a node is
2310  bounded by the number of bits differentiating chunks within
2311  bins. Under current bin calculations, this ranges from 6 up to 21
2312  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2313  is of course much better.
2314*/
2315
2316struct malloc_tree_chunk {
2317  /* The first four fields must be compatible with malloc_chunk */
2318  size_t                    prev_foot;
2319  size_t                    head;
2320  struct malloc_tree_chunk* fd;
2321  struct malloc_tree_chunk* bk;
2322
2323  struct malloc_tree_chunk* child[2];
2324  struct malloc_tree_chunk* parent;
2325  bindex_t                  index;
2326};
2327
2328typedef struct malloc_tree_chunk  tchunk;
2329typedef struct malloc_tree_chunk* tchunkptr;
2330typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2331
2332/* A little helper macro for trees */
2333#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
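/*
  Illustrative sketch of the "leftmost path" walk described above: to
  find the smallest chunk at or below a tree node, follow child pointers,
  preferring the left child and falling back to the right whenever the
  left is null, keeping the smallest size seen.  This is a simplified
  restatement of the walk used by the tree-bin allocation code later in
  this file, not a replacement for it.

    static tchunkptr example_smallest(tchunkptr t) {
      tchunkptr best = t;
      while ((t = leftmost_child(t)) != 0) {
        if (chunksize(t) < chunksize(best))
          best = t;
      }
      return best;
    }
*/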
2334
2335/* ----------------------------- Segments -------------------------------- */
2336
2337/*
2338  Each malloc space may include non-contiguous segments, held in a
2339  list headed by an embedded malloc_segment record representing the
2340  top-most space. Segments also include flags holding properties of
2341  the space. Large chunks that are directly allocated by mmap are not
2342  included in this list. They are instead independently created and
2343  destroyed without otherwise keeping track of them.
2344
2345  Segment management mainly comes into play for spaces allocated by
2346  MMAP.  Any call to MMAP might or might not return memory that is
2347  adjacent to an existing segment.  MORECORE normally contiguously
2348  extends the current space, so this space is almost always adjacent,
2349  which is simpler and faster to deal with. (This is why MORECORE is
2350  used preferentially to MMAP when both are available -- see
2351  sys_alloc.)  When allocating using MMAP, we don't use any of the
2352  hinting mechanisms (inconsistently) supported in various
2353  implementations of unix mmap, or distinguish reserving from
2354  committing memory. Instead, we just ask for space, and exploit
2355  contiguity when we get it.  It is probably possible to do
2356  better than this on some systems, but no general scheme seems
2357  to be significantly better.
2358
2359  Management entails a simpler variant of the consolidation scheme
2360  used for chunks to reduce fragmentation -- new adjacent memory is
2361  normally prepended or appended to an existing segment. However,
2362  there are limitations compared to chunk consolidation that mostly
2363  reflect the fact that segment processing is relatively infrequent
2364  (occurring only when getting memory from system) and that we
2365  don't expect to have huge numbers of segments:
2366
2367  * Segments are not indexed, so traversal requires linear scans.  (It
2368    would be possible to index these, but is not worth the extra
2369    overhead and complexity for most programs on most platforms.)
2370  * New segments are only appended to old ones when holding top-most
2371    memory; if they cannot be prepended to others, they are held in
2372    different segments.
2373
2374  Except for the top-most segment of an mstate, each segment record
2375  is kept at the tail of its segment. Segments are added by pushing
2376  segment records onto the list headed by &mstate.seg for the
2377  containing mstate.
2378
2379  Segment flags control allocation/merge/deallocation policies:
2380  * If EXTERN_BIT set, then we did not allocate this segment,
2381    and so should not try to deallocate or merge with others.
2382    (This currently holds only for the initial segment passed
2383    into create_mspace_with_base.)
2384  * If IS_MMAPPED_BIT set, the segment may be merged with
2385    other surrounding mmapped segments and trimmed/de-allocated
2386    using munmap.
2387  * If neither bit is set, then the segment was obtained using
2388    MORECORE so can be merged with surrounding MORECORE'd segments
2389    and deallocated/trimmed using MORECORE with negative arguments.
2390*/
2391
2392struct malloc_segment {
2393  char*        base;             /* base address */
2394  size_t       size;             /* allocated size */
2395  struct malloc_segment* next;   /* ptr to next segment */
2396  flag_t       sflags;           /* mmap and extern flag */
2397};
2398
2399#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
2400#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
2401
2402typedef struct malloc_segment  msegment;
2403typedef struct malloc_segment* msegmentptr;
2404
2405/* ---------------------------- malloc_state ----------------------------- */
2406
2407/*
2408   A malloc_state holds all of the bookkeeping for a space.
2409   The main fields are:
2410
2411  Top
2412    The topmost chunk of the currently active segment. Its size is
2413    cached in topsize.  The actual size of topmost space is
2414    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2415    fenceposts and segment records if necessary when getting more
2416    space from the system.  The size at which to autotrim top is
2417    cached from mparams in trim_check, except that it is disabled if
2418    an autotrim fails.
2419
2420  Designated victim (dv)
2421    This is the preferred chunk for servicing small requests that
2422    don't have exact fits.  It is normally the chunk split off most
2423    recently to service another small request.  Its size is cached in
2424    dvsize. The link fields of this chunk are not maintained since it
2425    is not kept in a bin.
2426
2427  SmallBins
2428    An array of bin headers for free chunks.  These bins hold chunks
2429    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2430    chunks of all the same size, spaced 8 bytes apart.  To simplify
2431    use in double-linked lists, each bin header acts as a malloc_chunk
2432    pointing to the real first node, if it exists (else pointing to
2433    itself).  This avoids special-casing for headers.  But to avoid
2434    waste, we allocate only the fd/bk pointers of bins, and then use
2435    repositioning tricks to treat these as the fields of a chunk.
2436
2437  TreeBins
2438    Treebins are pointers to the roots of trees holding a range of
2439    sizes. There are 2 equally spaced treebins for each power of two
2440    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2441    larger.
2442
2443  Bin maps
2444    There is one bit map for small bins ("smallmap") and one for
2445    treebins ("treemap").  Each bin sets its bit when non-empty, and
2446    clears the bit when empty.  Bit operations are then used to avoid
2447    bin-by-bin searching -- nearly all "search" is done without ever
2448    looking at bins that won't be selected.  The bit maps
2449    conservatively use 32 bits per map word, even on a 64-bit system.
2450    For a good description of some of the bit-based techniques used
2451    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2452    supplement at http://hackersdelight.org/). Many of these are
2453    intended to reduce the branchiness of paths through malloc etc, as
2454    well as to reduce the number of memory locations read or written.
2455
2456  Segments
2457    A list of segments headed by an embedded malloc_segment record
2458    representing the initial space.
2459
2460  Address check support
2461    The least_addr field is the least address ever obtained from
2462    MORECORE or MMAP. Attempted frees and reallocs of any address less
2463    than this are trapped (unless INSECURE is defined).
2464
2465  Magic tag
2466    A cross-check field that should always hold the same value as mparams.magic.
2467
2468  Flags
2469    Bits recording whether to use MMAP, locks, or contiguous MORECORE
2470
2471  Statistics
2472    Each space keeps track of current and maximum system memory
2473    obtained via MORECORE or MMAP.
2474
2475  Trim support
2476    Fields holding the amount of unused topmost memory that should trigger
2477    timming, and a counter to force periodic scanning to release unused
2478    non-topmost segments.
2479
2480  Locking
2481    If USE_LOCKS is defined, the "mutex" lock is acquired and released
2482    around every public call using this mspace.
2483
2484  Extension support
2485    A void* pointer and a size_t field that can be used to help implement
2486    extensions to this malloc.
2487*/
2488
2489/* Bin types, widths and sizes */
2490#define NSMALLBINS        (32U)
2491#define NTREEBINS         (32U)
2492#define SMALLBIN_SHIFT    (3U)
2493#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2494#define TREEBIN_SHIFT     (8U)
2495#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2496#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2497#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
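/*
  Worked example (illustrative): with SMALLBIN_SHIFT == 3 and
  TREEBIN_SHIFT == 8, chunks smaller than MIN_LARGE_SIZE == 256 bytes are
  kept in small bins, one bin per 8-byte size step, and everything of 256
  bytes or more goes to the tree bins.  A 40-byte chunk therefore maps to
  small bin index 40 >> SMALLBIN_SHIFT == 5.
*/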
2498
2499struct malloc_state {
2500  binmap_t   smallmap;
2501  binmap_t   treemap;
2502  size_t     dvsize;
2503  size_t     topsize;
2504  char*      least_addr;
2505  mchunkptr  dv;
2506  mchunkptr  top;
2507  size_t     trim_check;
2508  size_t     release_checks;
2509  size_t     magic;
2510  mchunkptr  smallbins[(NSMALLBINS+1)*2];
2511  tbinptr    treebins[NTREEBINS];
2512  size_t     footprint;
2513  size_t     max_footprint;
2514  flag_t     mflags;
2515#if USE_LOCKS
2516  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2517#endif /* USE_LOCKS */
2518  msegment   seg;
2519  void*      extp;      /* Unused but available for extensions */
2520  size_t     exts;
2521};
2522
2523typedef struct malloc_state*    mstate;
2524
2525/* ------------- Global malloc_state and malloc_params ------------------- */
2526
2527/*
2528  malloc_params holds global properties, including those that can be
2529  dynamically set using mallopt. There is a single instance, mparams,
2530  initialized in init_mparams. Note that the non-zeroness of "magic"
2531  also serves as an initialization flag.
2532*/
2533
2534struct malloc_params {
2535  volatile size_t magic;
2536  size_t page_size;
2537  size_t granularity;
2538  size_t mmap_threshold;
2539  size_t trim_threshold;
2540  flag_t default_mflags;
2541};
2542
2543static struct malloc_params mparams;
2544
2545/* Ensure mparams initialized */
2546#define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams()))
2547
2548#if !ONLY_MSPACES
2549
2550/* The global malloc_state used for all non-"mspace" calls */
2551static struct malloc_state _gm_;
2552#define gm                 (&_gm_)
2553#define is_global(M)       ((M) == &_gm_)
2554
2555#endif /* !ONLY_MSPACES */
2556
2557#define is_initialized(M)  ((M)->top != 0)
2558
2559/* -------------------------- system alloc setup ------------------------- */
2560
2561/* Operations on mflags */
2562
2563#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2564#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2565#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2566
2567#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2568#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2569#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2570
2571#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2572#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2573
2574#define set_lock(M,L)\
2575 ((M)->mflags = (L)?\
2576  ((M)->mflags | USE_LOCK_BIT) :\
2577  ((M)->mflags & ~USE_LOCK_BIT))
2578
2579/* page-align a size */
2580#define page_align(S)\
2581 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2582
2583/* granularity-align a size */
2584#define granularity_align(S)\
2585  (((S) + (mparams.granularity - SIZE_T_ONE))\
2586   & ~(mparams.granularity - SIZE_T_ONE))
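
/*
  A quick sanity check of the rounding above -- a sketch assuming
  mparams.page_size == 4096 and mparams.granularity == 65536 (both are
  discovered at runtime in init_mparams, so the real values may differ):

    page_align(1)             == (1 + 4095) & ~4095        == 4096
    page_align(4096)          == 4096  (already aligned)
    page_align(5000)          == (5000 + 4095) & ~4095     == 8192
    granularity_align(100000) == (100000 + 65535) & ~65535 == 131072

  Both macros rely on page_size and granularity being powers of two,
  which init_mparams verifies before accepting them.
*/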
2587
2588
2589/* For mmap, use granularity alignment on windows, else page-align */
2590#ifdef WIN32
2591#define mmap_align(S) granularity_align(S)
2592#else
2593#define mmap_align(S) page_align(S)
2594#endif
2595
2596/* For sys_alloc, enough padding to ensure we can malloc the request on success */
2597#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2598
2599#define is_page_aligned(S)\
2600   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2601#define is_granularity_aligned(S)\
2602   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2603
2604/*  True if segment S holds address A */
2605#define segment_holds(S, A)\
2606  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2607
2608/* Return segment holding given address */
2609static msegmentptr segment_holding(mstate m, char* addr) {
2610  msegmentptr sp = &m->seg;
2611  for (;;) {
2612    if (addr >= sp->base && addr < sp->base + sp->size)
2613      return sp;
2614    if ((sp = sp->next) == 0)
2615      return 0;
2616  }
2617}
2618
2619/* Return true if segment contains a segment link */
2620static int has_segment_link(mstate m, msegmentptr ss) {
2621  msegmentptr sp = &m->seg;
2622  for (;;) {
2623    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2624      return 1;
2625    if ((sp = sp->next) == 0)
2626      return 0;
2627  }
2628}
2629
2630#ifndef MORECORE_CANNOT_TRIM
2631#define should_trim(M,s)  ((s) > (M)->trim_check)
2632#else  /* MORECORE_CANNOT_TRIM */
2633#define should_trim(M,s)  (0)
2634#endif /* MORECORE_CANNOT_TRIM */
2635
2636/*
2637  TOP_FOOT_SIZE is padding at the end of a segment, including space
2638  that may be needed to place segment records and fenceposts when new
2639  noncontiguous segments are added.
2640*/
2641#define TOP_FOOT_SIZE\
2642  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2643
2644
2645/* -------------------------------  Hooks -------------------------------- */
2646
2647/*
2648  PREACTION should be defined to return 0 on success, and nonzero on
2649  failure. If you are not using locking, you can redefine these to do
2650  anything you like.
2651*/
2652
2653#if USE_LOCKS
2654
2655#define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2656#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2657#else /* USE_LOCKS */
2658
2659#ifndef PREACTION
2660#define PREACTION(M) (0)
2661#endif  /* PREACTION */
2662
2663#ifndef POSTACTION
2664#define POSTACTION(M)
2665#endif  /* POSTACTION */
2666
2667#endif /* USE_LOCKS */
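
/*
  The allocation routines later in this file all bracket their work with
  these hooks in the same way (internal_mallinfo and internal_malloc_stats
  below are examples).  A minimal sketch of the pattern, with the actual
  work elided:

    if (!PREACTION(m)) {      zero means the lock (if any) is now held
      ... inspect or modify the mstate m ...
      POSTACTION(m);          releases the lock; a no-op when locking is
    }                         compiled out or disabled for this mstate
*/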
2668
2669/*
2670  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2671  USAGE_ERROR_ACTION is triggered on detected bad frees and
2672  reallocs. The argument p is an address that might have triggered the
2673  fault. It is ignored by the two predefined actions, but might be
2674  useful in custom actions that try to help diagnose errors.
2675*/
2676
2677#if PROCEED_ON_ERROR
2678
2679/* A count of the number of corruption errors causing resets */
2680int malloc_corruption_error_count;
2681
2682/* default corruption action */
2683static void reset_on_error(mstate m);
2684
2685#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2686#define USAGE_ERROR_ACTION(m, p)
2687
2688#else /* PROCEED_ON_ERROR */
2689
2690#ifndef CORRUPTION_ERROR_ACTION
2691#define CORRUPTION_ERROR_ACTION(m) ABORT
2692#endif /* CORRUPTION_ERROR_ACTION */
2693
2694#ifndef USAGE_ERROR_ACTION
2695#define USAGE_ERROR_ACTION(m,p) ABORT
2696#endif /* USAGE_ERROR_ACTION */
2697
2698#endif /* PROCEED_ON_ERROR */
2699
2700/* -------------------------- Debugging setup ---------------------------- */
2701
2702#if ! DEBUG
2703
2704#define check_free_chunk(M,P)
2705#define check_inuse_chunk(M,P)
2706#define check_malloced_chunk(M,P,N)
2707#define check_mmapped_chunk(M,P)
2708#define check_malloc_state(M)
2709#define check_top_chunk(M,P)
2710
2711#else /* DEBUG */
2712#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2713#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2714#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2715#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2716#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2717#define check_malloc_state(M)       do_check_malloc_state(M)
2718
2719static void   do_check_any_chunk(mstate m, mchunkptr p);
2720static void   do_check_top_chunk(mstate m, mchunkptr p);
2721static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2722static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2723static void   do_check_free_chunk(mstate m, mchunkptr p);
2724static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2725static void   do_check_tree(mstate m, tchunkptr t);
2726static void   do_check_treebin(mstate m, bindex_t i);
2727static void   do_check_smallbin(mstate m, bindex_t i);
2728static void   do_check_malloc_state(mstate m);
2729static int    bin_find(mstate m, mchunkptr x);
2730static size_t traverse_and_check(mstate m);
2731#endif /* DEBUG */
2732
2733/* ---------------------------- Indexing Bins ---------------------------- */
2734
2735#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2736#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
2737#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2738#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2739
2740/* addressing by index. See above about smallbin repositioning */
2741#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2742#define treebin_at(M,i)     (&((M)->treebins[i]))
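
/*
  Index arithmetic in practice -- a sketch assuming a 4-byte size_t, so
  that MIN_CHUNK_SIZE == 16:

    small_index(16)     == 2    the smallest usable smallbin index
    small_index(64)     == 8    a 64-byte chunk lives in smallbin 8
    small_index2size(8) == 64   the inverse mapping
    is_small(255)       == 1    255 >> 3 == 31  <  NSMALLBINS
    is_small(256)       == 0    256 >> 3 == 32, handled by treebins

  smallbin_at(M, i) doubles the index because each bin contributes only
  two pointer-sized slots (its fd/bk links) to M->smallbins rather than
  a whole malloc_chunk.
*/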
2743
2744/* assign tree index for size S to variable I. Use x86 asm if possible  */
2745#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2746#define compute_tree_index(S, I)\
2747{\
2748  unsigned int X = S >> TREEBIN_SHIFT;\
2749  if (X == 0)\
2750    I = 0;\
2751  else if (X > 0xFFFF)\
2752    I = NTREEBINS-1;\
2753  else {\
2754    unsigned int K;\
2755    __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "rm"  (X));\
2756    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2757  }\
2758}
2759
2760#elif defined (__INTEL_COMPILER)
2761#define compute_tree_index(S, I)\
2762{\
2763  size_t X = S >> TREEBIN_SHIFT;\
2764  if (X == 0)\
2765    I = 0;\
2766  else if (X > 0xFFFF)\
2767    I = NTREEBINS-1;\
2768  else {\
2769    unsigned int K = _bit_scan_reverse (X); \
2770    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2771  }\
2772}
2773
2774#elif defined(_MSC_VER) && _MSC_VER>=1300
2775#define compute_tree_index(S, I)\
2776{\
2777  size_t X = S >> TREEBIN_SHIFT;\
2778  if (X == 0)\
2779    I = 0;\
2780  else if (X > 0xFFFF)\
2781    I = NTREEBINS-1;\
2782  else {\
2783    unsigned int K;\
2784    _BitScanReverse((DWORD *) &K, X);\
2785    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2786  }\
2787}
2788
2789#else /* GNUC */
2790#define compute_tree_index(S, I)\
2791{\
2792  size_t X = S >> TREEBIN_SHIFT;\
2793  if (X == 0)\
2794    I = 0;\
2795  else if (X > 0xFFFF)\
2796    I = NTREEBINS-1;\
2797  else {\
2798    unsigned int Y = (unsigned int)X;\
2799    unsigned int N = ((Y - 0x100) >> 16) & 8;\
2800    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2801    N += K;\
2802    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2803    K = 14 - N + ((Y <<= K) >> 15);\
2804    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2805  }\
2806}
2807#endif /* GNUC */
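
/*
  All four variants above compute the same index; a worked example,
  independent of word size:

    S = 768 (0x300):   X = S >> TREEBIN_SHIFT = 3,  bsr(3)    = 1,  so
                       I = (1 << 1)  + ((768 >> 8) & 1)       = 2 + 1 = 3
    S = 1048576 (1MB): X = 4096,                    bsr(4096) = 12, so
                       I = (12 << 1) + ((1048576 >> 19) & 1)  = 24 + 0 = 24

  Cross-checking with minsize_for_tree_index below: bin 3 holds sizes in
  [768, 1024) and bin 24 starts exactly at 1 MiB, so each pair of bins
  splits a power-of-two range into a lower and an upper half.
*/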
2808
2809/* Bit representing maximum resolved size in a treebin at i */
2810#define bit_for_tree_index(i) \
2811   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2812
2813/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2814#define leftshift_for_tree_index(i) \
2815   ((i == NTREEBINS-1)? 0 : \
2816    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2817
2818/* The size of the smallest chunk held in bin with index i */
2819#define minsize_for_tree_index(i) \
2820   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2821   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2822
2823
2824/* ------------------------ Operations on bin maps ----------------------- */
2825
2826/* bit corresponding to given index */
2827#define idx2bit(i)              ((binmap_t)(1) << (i))
2828
2829/* Mark/Clear bits with given index */
2830#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2831#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2832#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2833
2834#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2835#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2836#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2837
2838/* isolate the least set bit of a bitmap */
2839#define least_bit(x)         ((x) & -(x))
2840
2841/* mask with all bits to left of least bit of x on */
2842#define left_bits(x)         ((x<<1) | -(x<<1))
2843
2844/* mask with all bits to left of or equal to least bit of x on */
2845#define same_or_left_bits(x) ((x) | -(x))
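
/*
  How these masks work together -- an illustrative sketch:

    binmap_t map   = idx2bit(4) | idx2bit(9) | idx2bit(20);  bins 4, 9, 20 occupied
    binmap_t above = left_bits(idx2bit(6)) & map;            bins strictly above 6,
                                                             i.e. idx2bit(9) | idx2bit(20)
    binmap_t b     = least_bit(above);                       idx2bit(9)

  compute_bit2idx (below) turns that single bit back into the bin index
  9, which is how the allocator locates the nearest occupied bin above
  one that turned out to be empty.
*/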
2846
2847/* index corresponding to given bit. Use x86 asm if possible */
2848
2849#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2850#define compute_bit2idx(X, I)\
2851{\
2852  unsigned int J;\
2853  __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "rm" (X));\
2854  I = (bindex_t)J;\
2855}
2856
2857#elif defined (__INTEL_COMPILER)
2858#define compute_bit2idx(X, I)\
2859{\
2860  unsigned int J;\
2861  J = _bit_scan_forward (X); \
2862  I = (bindex_t)J;\
2863}
2864
2865#elif defined(_MSC_VER) && _MSC_VER>=1300
2866#define compute_bit2idx(X, I)\
2867{\
2868  unsigned int J;\
2869  _BitScanForward((DWORD *) &J, X);\
2870  I = (bindex_t)J;\
2871}
2872
2873#elif USE_BUILTIN_FFS
2874#define compute_bit2idx(X, I) I = ffs(X)-1
2875
2876#else
2877#define compute_bit2idx(X, I)\
2878{\
2879  unsigned int Y = X - 1;\
2880  unsigned int K = Y >> (16-4) & 16;\
2881  unsigned int N = K;        Y >>= K;\
2882  N += K = Y >> (8-3) &  8;  Y >>= K;\
2883  N += K = Y >> (4-2) &  4;  Y >>= K;\
2884  N += K = Y >> (2-1) &  2;  Y >>= K;\
2885  N += K = Y >> (1-0) &  1;  Y >>= K;\
2886  I = (bindex_t)(N + Y);\
2887}
2888#endif /* GNUC */
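
/*
  Example (a sketch): for X == idx2bit(9) == 0x200 every variant above
  yields I == 9 -- bsf, _BitScanForward and ffs locate bit 9 directly,
  while the portable fallback counts it from Y == 0x1FF (the ">> (8-3) & 8"
  step contributes 8, the final Y is 1, and I == N + Y == 9).  X is always
  a single-bit value here because callers pass the result of least_bit.
*/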
2889
2890
2891/* ----------------------- Runtime Check Support ------------------------- */
2892
2893/*
2894  For security, the main invariant is that malloc/free/etc never
2895  writes to a static address other than malloc_state, unless static
2896  malloc_state itself has been corrupted, which cannot occur via
2897  malloc (because of these checks). In essence this means that we
2898  believe all pointers, sizes, maps etc held in malloc_state, but
2899  check all of those linked or offset from other embedded data
2900  structures.  These checks are interspersed with main code in a way
2901  that tends to minimize their run-time cost.
2902
2903  When FOOTERS is defined, in addition to range checking, we also
2904  verify footer fields of inuse chunks, which can be used to guarantee
2905  that the mstate controlling malloc/free is intact.  This is a
2906  streamlined version of the approach described by William Robertson
2907  et al in "Run-time Detection of Heap-based Overflows" LISA'03
2908  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2909  of an inuse chunk holds the xor of its mstate and a random seed,
2910  that is checked upon calls to free() and realloc().  This is
2911  (probabilistically) unguessable from outside the program, but can be
2912  computed by any code successfully malloc'ing any chunk, so does not
2913  itself provide protection against code that has already broken
2914  security through some other means.  Unlike Robertson et al, we
2915  always dynamically check addresses of all offset chunks (previous,
2916  next, etc). This turns out to be cheaper than relying on hashes.
2917*/
2918
2919#if !INSECURE
2920/* Check if address a is at least as high as any from MORECORE or MMAP */
2921#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2922/* Check if address of next chunk n is higher than base chunk p */
2923#define ok_next(p, n)    ((char*)(p) < (char*)(n))
2924/* Check if p has its cinuse bit on */
2925#define ok_cinuse(p)     cinuse(p)
2926/* Check if p has its pinuse bit on */
2927#define ok_pinuse(p)     pinuse(p)
2928
2929#else /* !INSECURE */
2930#define ok_address(M, a) (1)
2931#define ok_next(b, n)    (1)
2932#define ok_cinuse(p)     (1)
2933#define ok_pinuse(p)     (1)
2934#endif /* !INSECURE */
2935
2936#if (FOOTERS && !INSECURE)
2937/* Check if (alleged) mstate m has expected magic field */
2938#define ok_magic(M)      ((M)->magic == mparams.magic)
2939#else  /* (FOOTERS && !INSECURE) */
2940#define ok_magic(M)      (1)
2941#endif /* (FOOTERS && !INSECURE) */
2942
2943
2944/* In gcc, use __builtin_expect to minimize impact of checks */
2945#if !INSECURE
2946#if defined(__GNUC__) && __GNUC__ >= 3
2947#define RTCHECK(e)  __builtin_expect(e, 1)
2948#else /* GNUC */
2949#define RTCHECK(e)  (e)
2950#endif /* GNUC */
2951#else /* !INSECURE */
2952#define RTCHECK(e)  (1)
2953#endif /* !INSECURE */
2954
2955/* macros to set up inuse chunks with or without footers */
2956
2957#if !FOOTERS
2958
2959#define mark_inuse_foot(M,p,s)
2960
2961/* Set size and cinuse bit of this chunk and pinuse bit of next chunk */
2962#define set_inuse(M,p,s)\
2963  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2964  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2965
2966/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2967#define set_inuse_and_pinuse(M,p,s)\
2968  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2969  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2970
2971/* Set size, cinuse and pinuse bit of this chunk */
2972#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2973  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2974
2975#else /* FOOTERS */
2976
2977/* Set foot of inuse chunk to be xor of mstate and seed */
2978#define mark_inuse_foot(M,p,s)\
2979  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2980
2981#define get_mstate_for(p)\
2982  ((mstate)(((mchunkptr)((char*)(p) +\
2983    (chunksize(p))))->prev_foot ^ mparams.magic))
2984
2985#define set_inuse(M,p,s)\
2986  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2987  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2988  mark_inuse_foot(M,p,s))
2989
2990#define set_inuse_and_pinuse(M,p,s)\
2991  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2992  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2993 mark_inuse_foot(M,p,s))
2994
2995#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2996  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2997  mark_inuse_foot(M, p, s))
2998
2999#endif /* !FOOTERS */
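
/*
  With FOOTERS on, the footer written by mark_inuse_foot and the lookup
  in get_mstate_for are exact inverses.  A sketch, for any mem previously
  returned on behalf of mstate m:

    mchunkptr p = mem2chunk(mem);
    footer of p (the prev_foot of the chunk at (char*)p + chunksize(p))
                      == (size_t)m ^ mparams.magic
    get_mstate_for(p) == (mstate)(((size_t)m ^ mparams.magic) ^ mparams.magic)
                      == m

  free() and realloc() recover m this way and then test ok_magic(m), so a
  corrupted or forged footer is (probabilistically) caught before any of
  its bookkeeping is trusted.
*/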
3000
3001/* ---------------------------- setting mparams -------------------------- */
3002
3003/* Initialize mparams */
3004static int init_mparams(void) {
3005#ifdef NEED_GLOBAL_LOCK_INIT
3006  if (malloc_global_mutex_status <= 0)
3007    init_malloc_global_mutex();
3008#endif
3009
3010  ACQUIRE_MALLOC_GLOBAL_LOCK();
3011  if (mparams.magic == 0) {
3012    size_t magic;
3013    size_t psize;
3014    size_t gsize;
3015
3016#ifndef WIN32
3017    psize = malloc_getpagesize;
3018    gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3019#else /* WIN32 */
3020    {
3021      SYSTEM_INFO system_info;
3022      GetSystemInfo(&system_info);
3023      psize = system_info.dwPageSize;
3024      gsize = ((DEFAULT_GRANULARITY != 0)?
3025               DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3026    }
3027#endif /* WIN32 */
3028
3029    /* Sanity-check configuration:
3030       size_t must be unsigned and as wide as pointer type.
3031       ints must be at least 4 bytes.
3032       alignment must be at least 8.
3033       Alignment, min chunk size, and page size must all be powers of 2.
3034    */
3035    if ((sizeof(size_t) != sizeof(char*)) ||
3036        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
3037        (sizeof(int) < 4)  ||
3038        (MALLOC_ALIGNMENT < (size_t)8U) ||
3039        ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3040        ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
3041        ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
3042        ((psize            & (psize-SIZE_T_ONE))            != 0))
3043      ABORT;
3044
3045    mparams.granularity = gsize;
3046    mparams.page_size = psize;
3047    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3048    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3049#if MORECORE_CONTIGUOUS
3050    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3051#else  /* MORECORE_CONTIGUOUS */
3052    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3053#endif /* MORECORE_CONTIGUOUS */
3054
3055#if !ONLY_MSPACES
3056    /* Set up lock for main malloc area */
3057    gm->mflags = mparams.default_mflags;
3058    INITIAL_LOCK(&gm->mutex);
3059#endif
3060
3061#if (FOOTERS && !INSECURE)
3062    {
3063#if USE_DEV_RANDOM
3064      int fd;
3065      unsigned char buf[sizeof(size_t)];
3066      /* Try to use /dev/urandom, else fall back on using time */
3067      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3068          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3069        magic = *((size_t *) buf);
3070        close(fd);
3071      }
3072      else
3073#endif /* USE_DEV_RANDOM */
3074#ifdef WIN32
3075        magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3076#else
3077      magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3078#endif
3079      magic |= (size_t)8U;    /* ensure nonzero */
3080      magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
3081    }
3082#else /* (FOOTERS && !INSECURE) */
3083    magic = (size_t)0x58585858U;
3084#endif /* (FOOTERS && !INSECURE) */
3085
3086    mparams.magic = magic;
3087  }
3088
3089  RELEASE_MALLOC_GLOBAL_LOCK();
3090  return 1;
3091}
3092
3093/* support for mallopt */
3094static int change_mparam(int param_number, int value) {
3095  size_t val = (value == -1)? MAX_SIZE_T : (size_t)value;
3096  ensure_initialization();
3097  switch(param_number) {
3098  case M_TRIM_THRESHOLD:
3099    mparams.trim_threshold = val;
3100    return 1;
3101  case M_GRANULARITY:
3102    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3103      mparams.granularity = val;
3104      return 1;
3105    }
3106    else
3107      return 0;
3108  case M_MMAP_THRESHOLD:
3109    mparams.mmap_threshold = val;
3110    return 1;
3111  default:
3112    return 0;
3113  }
3114}
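
/*
  change_mparam backs the public mallopt entry point.  An illustrative
  sketch of what it accepts (constants as used in the switch above):

    change_mparam(M_TRIM_THRESHOLD, 256 * 1024);   accepted; a value of -1
                                                   maps to MAX_SIZE_T and
                                                   effectively disables trimming
    change_mparam(M_GRANULARITY, 128 * 1024);      accepted when it is a power
                                                   of two no smaller than
                                                   mparams.page_size
    change_mparam(M_GRANULARITY, 100000);          rejected: not a power of two
    change_mparam(M_MMAP_THRESHOLD, 1024 * 1024);  accepted unconditionally

  Each call returns 1 on success and 0 for an unknown or invalid setting.
*/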
3115
3116#if DEBUG
3117/* ------------------------- Debugging Support --------------------------- */
3118
3119/* Check properties of any chunk, whether free, inuse, mmapped etc  */
3120static void do_check_any_chunk(mstate m, mchunkptr p) {
3121  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3122  assert(ok_address(m, p));
3123}
3124
3125/* Check properties of top chunk */
3126static void do_check_top_chunk(mstate m, mchunkptr p) {
3127  msegmentptr sp = segment_holding(m, (char*)p);
3128  size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3129  assert(sp != 0);
3130  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3131  assert(ok_address(m, p));
3132  assert(sz == m->topsize);
3133  assert(sz > 0);
3134  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3135  assert(pinuse(p));
3136  assert(!pinuse(chunk_plus_offset(p, sz)));
3137}
3138
3139/* Check properties of (inuse) mmapped chunks */
3140static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3141  size_t  sz = chunksize(p);
3142  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
3143  assert(is_mmapped(p));
3144  assert(use_mmap(m));
3145  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3146  assert(ok_address(m, p));
3147  assert(!is_small(sz));
3148  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3149  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3150  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3151}
3152
3153/* Check properties of inuse chunks */
3154static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3155  do_check_any_chunk(m, p);
3156  assert(cinuse(p));
3157  assert(next_pinuse(p));
3158  /* If not pinuse and not mmapped, previous chunk has OK offset */
3159  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3160  if (is_mmapped(p))
3161    do_check_mmapped_chunk(m, p);
3162}
3163
3164/* Check properties of free chunks */
3165static void do_check_free_chunk(mstate m, mchunkptr p) {
3166  size_t sz = chunksize(p);
3167  mchunkptr next = chunk_plus_offset(p, sz);
3168  do_check_any_chunk(m, p);
3169  assert(!cinuse(p));
3170  assert(!next_pinuse(p));
3171  assert (!is_mmapped(p));
3172  if (p != m->dv && p != m->top) {
3173    if (sz >= MIN_CHUNK_SIZE) {
3174      assert((sz & CHUNK_ALIGN_MASK) == 0);
3175      assert(is_aligned(chunk2mem(p)));
3176      assert(next->prev_foot == sz);
3177      assert(pinuse(p));
3178      assert (next == m->top || cinuse(next));
3179      assert(p->fd->bk == p);
3180      assert(p->bk->fd == p);
3181    }
3182    else  /* markers are always of size SIZE_T_SIZE */
3183      assert(sz == SIZE_T_SIZE);
3184  }
3185}
3186
3187/* Check properties of malloced chunks at the point they are malloced */
3188static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3189  if (mem != 0) {
3190    mchunkptr p = mem2chunk(mem);
3191    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
3192    do_check_inuse_chunk(m, p);
3193    assert((sz & CHUNK_ALIGN_MASK) == 0);
3194    assert(sz >= MIN_CHUNK_SIZE);
3195    assert(sz >= s);
3196    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3197    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3198  }
3199}
3200
3201/* Check a tree and its subtrees.  */
3202static void do_check_tree(mstate m, tchunkptr t) {
3203  tchunkptr head = 0;
3204  tchunkptr u = t;
3205  bindex_t tindex = t->index;
3206  size_t tsize = chunksize(t);
3207  bindex_t idx;
3208  compute_tree_index(tsize, idx);
3209  assert(tindex == idx);
3210  assert(tsize >= MIN_LARGE_SIZE);
3211  assert(tsize >= minsize_for_tree_index(idx));
3212  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3213
3214  do { /* traverse through chain of same-sized nodes */
3215    do_check_any_chunk(m, ((mchunkptr)u));
3216    assert(u->index == tindex);
3217    assert(chunksize(u) == tsize);
3218    assert(!cinuse(u));
3219    assert(!next_pinuse(u));
3220    assert(u->fd->bk == u);
3221    assert(u->bk->fd == u);
3222    if (u->parent == 0) {
3223      assert(u->child[0] == 0);
3224      assert(u->child[1] == 0);
3225    }
3226    else {
3227      assert(head == 0); /* only one node on chain has parent */
3228      head = u;
3229      assert(u->parent != u);
3230      assert (u->parent->child[0] == u ||
3231              u->parent->child[1] == u ||
3232              *((tbinptr*)(u->parent)) == u);
3233      if (u->child[0] != 0) {
3234        assert(u->child[0]->parent == u);
3235        assert(u->child[0] != u);
3236        do_check_tree(m, u->child[0]);
3237      }
3238      if (u->child[1] != 0) {
3239        assert(u->child[1]->parent == u);
3240        assert(u->child[1] != u);
3241        do_check_tree(m, u->child[1]);
3242      }
3243      if (u->child[0] != 0 && u->child[1] != 0) {
3244        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3245      }
3246    }
3247    u = u->fd;
3248  } while (u != t);
3249  assert(head != 0);
3250}
3251
3252/*  Check all the chunks in a treebin.  */
3253static void do_check_treebin(mstate m, bindex_t i) {
3254  tbinptr* tb = treebin_at(m, i);
3255  tchunkptr t = *tb;
3256  int empty = (m->treemap & (1U << i)) == 0;
3257  if (t == 0)
3258    assert(empty);
3259  if (!empty)
3260    do_check_tree(m, t);
3261}
3262
3263/*  Check all the chunks in a smallbin.  */
3264static void do_check_smallbin(mstate m, bindex_t i) {
3265  sbinptr b = smallbin_at(m, i);
3266  mchunkptr p = b->bk;
3267  unsigned int empty = (m->smallmap & (1U << i)) == 0;
3268  if (p == b)
3269    assert(empty);
3270  if (!empty) {
3271    for (; p != b; p = p->bk) {
3272      size_t size = chunksize(p);
3273      mchunkptr q;
3274      /* each chunk claims to be free */
3275      do_check_free_chunk(m, p);
3276      /* chunk belongs in bin */
3277      assert(small_index(size) == i);
3278      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3279      /* chunk is followed by an inuse chunk */
3280      q = next_chunk(p);
3281      if (q->head != FENCEPOST_HEAD)
3282        do_check_inuse_chunk(m, q);
3283    }
3284  }
3285}
3286
3287/* Find x in a bin. Used in other check functions. */
3288static int bin_find(mstate m, mchunkptr x) {
3289  size_t size = chunksize(x);
3290  if (is_small(size)) {
3291    bindex_t sidx = small_index(size);
3292    sbinptr b = smallbin_at(m, sidx);
3293    if (smallmap_is_marked(m, sidx)) {
3294      mchunkptr p = b;
3295      do {
3296        if (p == x)
3297          return 1;
3298      } while ((p = p->fd) != b);
3299    }
3300  }
3301  else {
3302    bindex_t tidx;
3303    compute_tree_index(size, tidx);
3304    if (treemap_is_marked(m, tidx)) {
3305      tchunkptr t = *treebin_at(m, tidx);
3306      size_t sizebits = size << leftshift_for_tree_index(tidx);
3307      while (t != 0 && chunksize(t) != size) {
3308        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3309        sizebits <<= 1;
3310      }
3311      if (t != 0) {
3312        tchunkptr u = t;
3313        do {
3314          if (u == (tchunkptr)x)
3315            return 1;
3316        } while ((u = u->fd) != t);
3317      }
3318    }
3319  }
3320  return 0;
3321}
3322
3323/* Traverse each chunk and check it; return total */
3324static size_t traverse_and_check(mstate m) {
3325  size_t sum = 0;
3326  if (is_initialized(m)) {
3327    msegmentptr s = &m->seg;
3328    sum += m->topsize + TOP_FOOT_SIZE;
3329    while (s != 0) {
3330      mchunkptr q = align_as_chunk(s->base);
3331      mchunkptr lastq = 0;
3332      assert(pinuse(q));
3333      while (segment_holds(s, q) &&
3334             q != m->top && q->head != FENCEPOST_HEAD) {
3335        sum += chunksize(q);
3336        if (cinuse(q)) {
3337          assert(!bin_find(m, q));
3338          do_check_inuse_chunk(m, q);
3339        }
3340        else {
3341          assert(q == m->dv || bin_find(m, q));
3342          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
3343          do_check_free_chunk(m, q);
3344        }
3345        lastq = q;
3346        q = next_chunk(q);
3347      }
3348      s = s->next;
3349    }
3350  }
3351  return sum;
3352}
3353
3354/* Check all properties of malloc_state. */
3355static void do_check_malloc_state(mstate m) {
3356  bindex_t i;
3357  size_t total;
3358  /* check bins */
3359  for (i = 0; i < NSMALLBINS; ++i)
3360    do_check_smallbin(m, i);
3361  for (i = 0; i < NTREEBINS; ++i)
3362    do_check_treebin(m, i);
3363
3364  if (m->dvsize != 0) { /* check dv chunk */
3365    do_check_any_chunk(m, m->dv);
3366    assert(m->dvsize == chunksize(m->dv));
3367    assert(m->dvsize >= MIN_CHUNK_SIZE);
3368    assert(bin_find(m, m->dv) == 0);
3369  }
3370
3371  if (m->top != 0) {   /* check top chunk */
3372    do_check_top_chunk(m, m->top);
3373    /*assert(m->topsize == chunksize(m->top)); redundant */
3374    assert(m->topsize > 0);
3375    assert(bin_find(m, m->top) == 0);
3376  }
3377
3378  total = traverse_and_check(m);
3379  assert(total <= m->footprint);
3380  assert(m->footprint <= m->max_footprint);
3381}
3382#endif /* DEBUG */
3383
3384/* ----------------------------- statistics ------------------------------ */
3385
3386#if !NO_MALLINFO
3387static struct mallinfo internal_mallinfo(mstate m) {
3388  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3389  ensure_initialization();
3390  if (!PREACTION(m)) {
3391    check_malloc_state(m);
3392    if (is_initialized(m)) {
3393      size_t nfree = SIZE_T_ONE; /* top always free */
3394      size_t mfree = m->topsize + TOP_FOOT_SIZE;
3395      size_t sum = mfree;
3396      msegmentptr s = &m->seg;
3397      while (s != 0) {
3398        mchunkptr q = align_as_chunk(s->base);
3399        while (segment_holds(s, q) &&
3400               q != m->top && q->head != FENCEPOST_HEAD) {
3401          size_t sz = chunksize(q);
3402          sum += sz;
3403          if (!cinuse(q)) {
3404            mfree += sz;
3405            ++nfree;
3406          }
3407          q = next_chunk(q);
3408        }
3409        s = s->next;
3410      }
3411
3412      nm.arena    = sum;
3413      nm.ordblks  = nfree;
3414      nm.hblkhd   = m->footprint - sum;
3415      nm.usmblks  = m->max_footprint;
3416      nm.uordblks = m->footprint - mfree;
3417      nm.fordblks = mfree;
3418      nm.keepcost = m->topsize;
3419    }
3420
3421    POSTACTION(m);
3422  }
3423  return nm;
3424}
3425#endif /* !NO_MALLINFO */
3426
3427static void internal_malloc_stats(mstate m) {
3428  ensure_initialization();
3429  if (!PREACTION(m)) {
3430    size_t maxfp = 0;
3431    size_t fp = 0;
3432    size_t used = 0;
3433    check_malloc_state(m);
3434    if (is_initialized(m)) {
3435      msegmentptr s = &m->seg;
3436      maxfp = m->max_footprint;
3437      fp = m->footprint;
3438      used = fp - (m->topsize + TOP_FOOT_SIZE);
3439
3440      while (s != 0) {
3441        mchunkptr q = align_as_chunk(s->base);
3442        while (segment_holds(s, q) &&
3443               q != m->top && q->head != FENCEPOST_HEAD) {
3444          if (!cinuse(q))
3445            used -= chunksize(q);
3446          q = next_chunk(q);
3447        }
3448        s = s->next;
3449      }
3450    }
3451
3452    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3453    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
3454    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
3455
3456    POSTACTION(m);
3457  }
3458}
3459
3460/* ----------------------- Operations on smallbins ----------------------- */
3461
3462/*
3463  Various forms of linking and unlinking are defined as macros.  Even
3464  the ones for trees, which are very long but have very short typical
3465  paths.  This is ugly but reduces reliance on inlining support of
3466  compilers.
3467*/
3468
3469/* Link a free chunk into a smallbin  */
3470#define insert_small_chunk(M, P, S) {\
3471  bindex_t I  = small_index(S);\
3472  mchunkptr B = smallbin_at(M, I);\
3473  mchunkptr F = B;\
3474  assert(S >= MIN_CHUNK_SIZE);\
3475  if (!smallmap_is_marked(M, I))\
3476    mark_smallmap(M, I);\
3477  else if (RTCHECK(ok_address(M, B->fd)))\
3478    F = B->fd;\
3479  else {\
3480    CORRUPTION_ERROR_ACTION(M);\
3481  }\
3482  B->fd = P;\
3483  F->bk = P;\
3484  P->fd = F;\
3485  P->bk = B;\
3486}
3487
3488/* Unlink a chunk from a smallbin  */
3489#define unlink_small_chunk(M, P, S) {\
3490  mchunkptr F = P->fd;\
3491  mchunkptr B = P->bk;\
3492  bindex_t I = small_index(S);\
3493  assert(P != B);\
3494  assert(P != F);\
3495  assert(chunksize(P) == small_index2size(I));\
3496  if (F == B)\
3497    clear_smallmap(M, I);\
3498  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3499                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3500    F->bk = B;\
3501    B->fd = F;\
3502  }\
3503  else {\
3504    CORRUPTION_ERROR_ACTION(M);\
3505  }\
3506}
3507
3508/* Unlink the first chunk from a smallbin */
3509#define unlink_first_small_chunk(M, B, P, I) {\
3510  mchunkptr F = P->fd;\
3511  assert(P != B);\
3512  assert(P != F);\
3513  assert(chunksize(P) == small_index2size(I));\
3514  if (B == F)\
3515    clear_smallmap(M, I);\
3516  else if (RTCHECK(ok_address(M, F))) {\
3517    B->fd = F;\
3518    F->bk = B;\
3519  }\
3520  else {\
3521    CORRUPTION_ERROR_ACTION(M);\
3522  }\
3523}
3524
3525
3526
3527/* Replace dv node, binning the old one */
3528/* Used only when dvsize known to be small */
3529#define replace_dv(M, P, S) {\
3530  size_t DVS = M->dvsize;\
3531  if (DVS != 0) {\
3532    mchunkptr DV = M->dv;\
3533    assert(is_small(DVS));\
3534    insert_small_chunk(M, DV, DVS);\
3535  }\
3536  M->dvsize = S;\
3537  M->dv = P;\
3538}
3539
3540/* ------------------------- Operations on trees ------------------------- */
3541
3542/* Insert chunk into tree */
3543#define insert_large_chunk(M, X, S) {\
3544  tbinptr* H;\
3545  bindex_t I;\
3546  compute_tree_index(S, I);\
3547  H = treebin_at(M, I);\
3548  X->index = I;\
3549  X->child[0] = X->child[1] = 0;\
3550  if (!treemap_is_marked(M, I)) {\
3551    mark_treemap(M, I);\
3552    *H = X;\
3553    X->parent = (tchunkptr)H;\
3554    X->fd = X->bk = X;\
3555  }\
3556  else {\
3557    tchunkptr T = *H;\
3558    size_t K = S << leftshift_for_tree_index(I);\
3559    for (;;) {\
3560      if (chunksize(T) != S) {\
3561        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3562        K <<= 1;\
3563        if (*C != 0)\
3564          T = *C;\
3565        else if (RTCHECK(ok_address(M, C))) {\
3566          *C = X;\
3567          X->parent = T;\
3568          X->fd = X->bk = X;\
3569          break;\
3570        }\
3571        else {\
3572          CORRUPTION_ERROR_ACTION(M);\
3573          break;\
3574        }\
3575      }\
3576      else {\
3577        tchunkptr F = T->fd;\
3578        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3579          T->fd = F->bk = X;\
3580          X->fd = F;\
3581          X->bk = T;\
3582          X->parent = 0;\
3583          break;\
3584        }\
3585        else {\
3586          CORRUPTION_ERROR_ACTION(M);\
3587          break;\
3588        }\
3589      }\
3590    }\
3591  }\
3592}
3593
3594/*
3595  Unlink steps:
3596
3597  1. If x is a chained node, unlink it from its same-sized fd/bk links
3598     and choose its bk node as its replacement.
3599  2. If x was the last node of its size, but not a leaf node, it must
3600     be replaced with a leaf node (not merely one with an open left or
3601     right), to make sure that lefts and rights of descendants
3602     correspond properly to bit masks.  We use the rightmost descendant
3603     of x.  We could use any other leaf, but this is easy to locate and
3604     tends to counteract removal of leftmosts elsewhere, and so keeps
3605     paths shorter than minimally guaranteed.  This doesn't loop much
3606     because on average a node in a tree is near the bottom.
3607  3. If x is the base of a chain (i.e., has parent links) relink
3608     x's parent and children to x's replacement (or null if none).
3609*/
3610
3611#define unlink_large_chunk(M, X) {\
3612  tchunkptr XP = X->parent;\
3613  tchunkptr R;\
3614  if (X->bk != X) {\
3615    tchunkptr F = X->fd;\
3616    R = X->bk;\
3617    if (RTCHECK(ok_address(M, F))) {\
3618      F->bk = R;\
3619      R->fd = F;\
3620    }\
3621    else {\
3622      CORRUPTION_ERROR_ACTION(M);\
3623    }\
3624  }\
3625  else {\
3626    tchunkptr* RP;\
3627    if (((R = *(RP = &(X->child[1]))) != 0) ||\
3628        ((R = *(RP = &(X->child[0]))) != 0)) {\
3629      tchunkptr* CP;\
3630      while ((*(CP = &(R->child[1])) != 0) ||\
3631             (*(CP = &(R->child[0])) != 0)) {\
3632        R = *(RP = CP);\
3633      }\
3634      if (RTCHECK(ok_address(M, RP)))\
3635        *RP = 0;\
3636      else {\
3637        CORRUPTION_ERROR_ACTION(M);\
3638      }\
3639    }\
3640  }\
3641  if (XP != 0) {\
3642    tbinptr* H = treebin_at(M, X->index);\
3643    if (X == *H) {\
3644      if ((*H = R) == 0) \
3645        clear_treemap(M, X->index);\
3646    }\
3647    else if (RTCHECK(ok_address(M, XP))) {\
3648      if (XP->child[0] == X) \
3649        XP->child[0] = R;\
3650      else \
3651        XP->child[1] = R;\
3652    }\
3653    else\
3654      CORRUPTION_ERROR_ACTION(M);\
3655    if (R != 0) {\
3656      if (RTCHECK(ok_address(M, R))) {\
3657        tchunkptr C0, C1;\
3658        R->parent = XP;\
3659        if ((C0 = X->child[0]) != 0) {\
3660          if (RTCHECK(ok_address(M, C0))) {\
3661            R->child[0] = C0;\
3662            C0->parent = R;\
3663          }\
3664          else\
3665            CORRUPTION_ERROR_ACTION(M);\
3666        }\
3667        if ((C1 = X->child[1]) != 0) {\
3668          if (RTCHECK(ok_address(M, C1))) {\
3669            R->child[1] = C1;\
3670            C1->parent = R;\
3671          }\
3672          else\
3673            CORRUPTION_ERROR_ACTION(M);\
3674        }\
3675      }\
3676      else\
3677        CORRUPTION_ERROR_ACTION(M);\
3678    }\
3679  }\
3680}
3681
3682/* Relays to large vs small bin operations */
3683
3684#define insert_chunk(M, P, S)\
3685  if (is_small(S)) insert_small_chunk(M, P, S)\
3686  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3687
3688#define unlink_chunk(M, P, S)\
3689  if (is_small(S)) unlink_small_chunk(M, P, S)\
3690  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3691
3692
3693/* Relays to internal calls to malloc/free from realloc, memalign etc */
3694
3695#if ONLY_MSPACES
3696#define internal_malloc(m, b) mspace_malloc(m, b)
3697#define internal_free(m, mem) mspace_free(m,mem);
3698#else /* ONLY_MSPACES */
3699#if MSPACES
3700#define internal_malloc(m, b)\
3701   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3702#define internal_free(m, mem)\
3703   if (m == gm) dlfree(mem); else mspace_free(m,mem);
3704#else /* MSPACES */
3705#define internal_malloc(m, b) dlmalloc(b)
3706#define internal_free(m, mem) dlfree(mem)
3707#endif /* MSPACES */
3708#endif /* ONLY_MSPACES */
3709
3710/* -----------------------  Direct-mmapping chunks ----------------------- */
3711
3712/*
3713  Directly mmapped chunks are set up with an offset to the start of
3714  the mmapped region stored in the prev_foot field of the chunk. This
3715  allows reconstruction of the required argument to MUNMAP when freed,
3716  and also allows adjustment of the returned chunk to meet alignment
3717  requirements (especially in memalign).  There is also enough space
3718  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3719  the PINUSE bit so frees can be checked.
3720*/
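
/*
  Concretely (a sketch), for a chunk p produced by mmap_alloc below:

    offset        == p->prev_foot & ~IS_MMAPPED_BIT    distance from the start
                                                       of the mmapped region to p
    region base   == (char*)p - offset
    region length == chunksize(p) + offset + MMAP_FOOT_PAD

  which is exactly what mmap_resize and the mmapped-free path hand to
  MREMAP or MUNMAP.
*/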
3721
3722/* Malloc using mmap */
3723static void* mmap_alloc(mstate m, size_t nb) {
3724  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3725  if (mmsize > nb) {     /* Check for wrap around 0 */
3726    char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3727    if (mm != CMFAIL) {
3728      size_t offset = align_offset(chunk2mem(mm));
3729      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3730      mchunkptr p = (mchunkptr)(mm + offset);
3731      p->prev_foot = offset | IS_MMAPPED_BIT;
3732      (p)->head = (psize|CINUSE_BIT);
3733      mark_inuse_foot(m, p, psize);
3734      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3735      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3736
3737      if (mm < m->least_addr)
3738        m->least_addr = mm;
3739      if ((m->footprint += mmsize) > m->max_footprint)
3740        m->max_footprint = m->footprint;
3741      assert(is_aligned(chunk2mem(p)));
3742      check_mmapped_chunk(m, p);
3743      return chunk2mem(p);
3744    }
3745  }
3746  return 0;
3747}
3748
3749/* Realloc using mmap */
3750static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3751  size_t oldsize = chunksize(oldp);
3752  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3753    return 0;
3754  /* Keep old chunk if big enough but not too big */
3755  if (oldsize >= nb + SIZE_T_SIZE &&
3756      (oldsize - nb) <= (mparams.granularity << 1))
3757    return oldp;
3758  else {
3759    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3760    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3761    size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3762    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3763                                  oldmmsize, newmmsize, 1);
3764    if (cp != CMFAIL) {
3765      mchunkptr newp = (mchunkptr)(cp + offset);
3766      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3767      newp->head = (psize|CINUSE_BIT);
3768      mark_inuse_foot(m, newp, psize);
3769      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3770      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3771
3772      if (cp < m->least_addr)
3773        m->least_addr = cp;
3774      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3775        m->max_footprint = m->footprint;
3776      check_mmapped_chunk(m, newp);
3777      return newp;
3778    }
3779  }
3780  return 0;
3781}
3782
3783/* -------------------------- mspace management -------------------------- */
3784
3785/* Initialize top chunk and its size */
3786static void init_top(mstate m, mchunkptr p, size_t psize) {
3787  /* Ensure alignment */
3788  size_t offset = align_offset(chunk2mem(p));
3789  p = (mchunkptr)((char*)p + offset);
3790  psize -= offset;
3791
3792  m->top = p;
3793  m->topsize = psize;
3794  p->head = psize | PINUSE_BIT;
3795  /* set size of fake trailing chunk holding overhead space only once */
3796  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3797  m->trim_check = mparams.trim_threshold; /* reset on each update */
3798}
3799
3800/* Initialize bins for a new mstate that is otherwise zeroed out */
3801static void init_bins(mstate m) {
3802  /* Establish circular links for smallbins */
3803  bindex_t i;
3804  for (i = 0; i < NSMALLBINS; ++i) {
3805    sbinptr bin = smallbin_at(m,i);
3806    bin->fd = bin->bk = bin;
3807  }
3808}
3809
3810#if PROCEED_ON_ERROR
3811
3812/* default corruption action */
3813static void reset_on_error(mstate m) {
3814  int i;
3815  ++malloc_corruption_error_count;
3816  /* Reinitialize fields to forget about all memory */
3817  m->smallbins = m->treebins = 0;
3818  m->dvsize = m->topsize = 0;
3819  m->seg.base = 0;
3820  m->seg.size = 0;
3821  m->seg.next = 0;
3822  m->top = m->dv = 0;
3823  for (i = 0; i < NTREEBINS; ++i)
3824    *treebin_at(m, i) = 0;
3825  init_bins(m);
3826}
3827#endif /* PROCEED_ON_ERROR */
3828
3829/* Allocate a chunk from newbase and prepend the remainder to the first chunk in the successor (old) base. */
3830static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3831                           size_t nb) {
3832  mchunkptr p = align_as_chunk(newbase);
3833  mchunkptr oldfirst = align_as_chunk(oldbase);
3834  size_t psize = (char*)oldfirst - (char*)p;
3835  mchunkptr q = chunk_plus_offset(p, nb);
3836  size_t qsize = psize - nb;
3837  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3838
3839  assert((char*)oldfirst > (char*)q);
3840  assert(pinuse(oldfirst));
3841  assert(qsize >= MIN_CHUNK_SIZE);
3842
3843  /* consolidate remainder with first chunk of old base */
3844  if (oldfirst == m->top) {
3845    size_t tsize = m->topsize += qsize;
3846    m->top = q;
3847    q->head = tsize | PINUSE_BIT;
3848    check_top_chunk(m, q);
3849  }
3850  else if (oldfirst == m->dv) {
3851    size_t dsize = m->dvsize += qsize;
3852    m->dv = q;
3853    set_size_and_pinuse_of_free_chunk(q, dsize);
3854  }
3855  else {
3856    if (!cinuse(oldfirst)) {
3857      size_t nsize = chunksize(oldfirst);
3858      unlink_chunk(m, oldfirst, nsize);
3859      oldfirst = chunk_plus_offset(oldfirst, nsize);
3860      qsize += nsize;
3861    }
3862    set_free_with_pinuse(q, qsize, oldfirst);
3863    insert_chunk(m, q, qsize);
3864    check_free_chunk(m, q);
3865  }
3866
3867  check_malloced_chunk(m, chunk2mem(p), nb);
3868  return chunk2mem(p);
3869}
3870
3871/* Add a segment to hold a new noncontiguous region */
3872static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3873  /* Determine locations and sizes of segment, fenceposts, old top */
3874  char* old_top = (char*)m->top;
3875  msegmentptr oldsp = segment_holding(m, old_top);
3876  char* old_end = oldsp->base + oldsp->size;
3877  size_t ssize = pad_request(sizeof(struct malloc_segment));
3878  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3879  size_t offset = align_offset(chunk2mem(rawsp));
3880  char* asp = rawsp + offset;
3881  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3882  mchunkptr sp = (mchunkptr)csp;
3883  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3884  mchunkptr tnext = chunk_plus_offset(sp, ssize);
3885  mchunkptr p = tnext;
3886  int nfences = 0;
3887
3888  /* reset top to new space */
3889  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3890
3891  /* Set up segment record */
3892  assert(is_aligned(ss));
3893  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3894  *ss = m->seg; /* Push current record */
3895  m->seg.base = tbase;
3896  m->seg.size = tsize;
3897  m->seg.sflags = mmapped;
3898  m->seg.next = ss;
3899
3900  /* Insert trailing fenceposts */
3901  for (;;) {
3902    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3903    p->head = FENCEPOST_HEAD;
3904    ++nfences;
3905    if ((char*)(&(nextp->head)) < old_end)
3906      p = nextp;
3907    else
3908      break;
3909  }
3910  assert(nfences >= 2);
3911
3912  /* Insert the rest of old top into a bin as an ordinary free chunk */
3913  if (csp != old_top) {
3914    mchunkptr q = (mchunkptr)old_top;
3915    size_t psize = csp - old_top;
3916    mchunkptr tn = chunk_plus_offset(q, psize);
3917    set_free_with_pinuse(q, psize, tn);
3918    insert_chunk(m, q, psize);
3919  }
3920
3921  check_top_chunk(m, m->top);
3922}
3923
3924/* -------------------------- System allocation -------------------------- */
3925
3926/* Get memory from system using MORECORE or MMAP */
3927static void* sys_alloc(mstate m, size_t nb) {
3928  char* tbase = CMFAIL;
3929  size_t tsize = 0;
3930  flag_t mmap_flag = 0;
3931
3932  ensure_initialization();
3933
3934  /* Directly map large chunks */
3935  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3936    void* mem = mmap_alloc(m, nb);
3937    if (mem != 0)
3938      return mem;
3939  }
3940
3941  /*
3942    Try getting memory in any of three ways (in most-preferred to
3943    least-preferred order):
3944    1. A call to MORECORE that can normally contiguously extend memory.
3945       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3946       main space is mmapped or a previous contiguous call failed)
3947    2. A call to MMAP new space (disabled if not HAVE_MMAP).
3948       Note that under the default settings, if MORECORE is unable to
3949       fulfill a request, and HAVE_MMAP is true, then mmap is
3950       used as a noncontiguous system allocator. This is a useful backup
3951       strategy for systems with holes in address spaces -- in this case
3952       sbrk cannot contiguously expand the heap, but mmap may be able to
3953       find space.
3954    3. A call to MORECORE that cannot usually contiguously extend memory.
3955       (disabled if not HAVE_MORECORE)
3956
3957   In all cases, we need to request enough bytes from system to ensure
3958   we can malloc nb bytes upon success, so pad with enough space for
3959   top_foot, plus alignment-pad to make sure we don't lose bytes if
3960   not on boundary, and round this up to a granularity unit.
3961  */
3962
3963  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3964    char* br = CMFAIL;
3965    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3966    size_t asize = 0;
3967    ACQUIRE_MALLOC_GLOBAL_LOCK();
3968
3969    if (ss == 0) {  /* First time through or recovery */
3970      char* base = (char*)CALL_MORECORE(0);
3971      if (base != CMFAIL) {
3972        asize = granularity_align(nb + SYS_ALLOC_PADDING);
3973        /* Adjust to end on a page boundary */
3974        if (!is_page_aligned(base))
3975          asize += (page_align((size_t)base) - (size_t)base);
3976        /* Can't call MORECORE if size is negative when treated as signed */
3977        if (asize < HALF_MAX_SIZE_T &&
3978            (br = (char*)(CALL_MORECORE(asize))) == base) {
3979          tbase = base;
3980          tsize = asize;
3981        }
3982      }
3983    }
3984    else {
3985      /* Subtract out existing available top space from MORECORE request. */
3986      asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
3987      /* Use mem here only if it did contiguously extend old space */
3988      if (asize < HALF_MAX_SIZE_T &&
3989          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3990        tbase = br;
3991        tsize = asize;
3992      }
3993    }
3994
3995    if (tbase == CMFAIL) {    /* Cope with partial failure */
3996      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
3997        if (asize < HALF_MAX_SIZE_T &&
3998            asize < nb + SYS_ALLOC_PADDING) {
3999          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
4000          if (esize < HALF_MAX_SIZE_T) {
4001            char* end = (char*)CALL_MORECORE(esize);
4002            if (end != CMFAIL)
4003              asize += esize;
4004            else {            /* Can't use; try to release */
4005              (void) CALL_MORECORE(-asize);
4006              br = CMFAIL;
4007            }
4008          }
4009        }
4010      }
4011      if (br != CMFAIL) {    /* Use the space we did get */
4012        tbase = br;
4013        tsize = asize;
4014      }
4015      else
4016        disable_contiguous(m); /* Don't try contiguous path in the future */
4017    }
4018
4019    RELEASE_MALLOC_GLOBAL_LOCK();
4020  }
4021
4022  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
4023    size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
4024    if (rsize > nb) { /* Fail if wraps around zero */
4025      char* mp = (char*)(CALL_MMAP(rsize));
4026      if (mp != CMFAIL) {
4027        tbase = mp;
4028        tsize = rsize;
4029        mmap_flag = IS_MMAPPED_BIT;
4030      }
4031    }
4032  }
4033
4034  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4035    size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);
4036    if (asize < HALF_MAX_SIZE_T) {
4037      char* br = CMFAIL;
4038      char* end = CMFAIL;
4039      ACQUIRE_MALLOC_GLOBAL_LOCK();
4040      br = (char*)(CALL_MORECORE(asize));
4041      end = (char*)(CALL_MORECORE(0));
4042      RELEASE_MALLOC_GLOBAL_LOCK();
4043      if (br != CMFAIL && end != CMFAIL && br < end) {
4044        size_t ssize = end - br;
4045        if (ssize > nb + TOP_FOOT_SIZE) {
4046          tbase = br;
4047          tsize = ssize;
4048        }
4049      }
4050    }
4051  }
4052
4053  if (tbase != CMFAIL) {
4054
4055    if ((m->footprint += tsize) > m->max_footprint)
4056      m->max_footprint = m->footprint;
4057
4058    if (!is_initialized(m)) { /* first-time initialization */
4059      m->seg.base = m->least_addr = tbase;
4060      m->seg.size = tsize;
4061      m->seg.sflags = mmap_flag;
4062      m->magic = mparams.magic;
4063      m->release_checks = MAX_RELEASE_CHECK_RATE;
4064      init_bins(m);
4065#if !ONLY_MSPACES
4066      if (is_global(m))
4067        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4068      else
4069#endif
4070      {
4071        /* Offset top by embedded malloc_state */
4072        mchunkptr mn = next_chunk(mem2chunk(m));
4073        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4074      }
4075    }
4076
4077    else {
4078      /* Try to merge with an existing segment */
4079      msegmentptr sp = &m->seg;
4080      /* Only consider most recent segment if traversal suppressed */
4081      while (sp != 0 && tbase != sp->base + sp->size)
4082        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4083      if (sp != 0 &&
4084          !is_extern_segment(sp) &&
4085          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
4086          segment_holds(sp, m->top)) { /* append */
4087        sp->size += tsize;
4088        init_top(m, m->top, m->topsize + tsize);
4089      }
4090      else {
4091        if (tbase < m->least_addr)
4092          m->least_addr = tbase;
4093        sp = &m->seg;
4094        while (sp != 0 && sp->base != tbase + tsize)
4095          sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4096        if (sp != 0 &&
4097            !is_extern_segment(sp) &&
4098            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
4099          char* oldbase = sp->base;
4100          sp->base = tbase;
4101          sp->size += tsize;
4102          return prepend_alloc(m, tbase, oldbase, nb);
4103        }
4104        else
4105          add_segment(m, tbase, tsize, mmap_flag);
4106      }
4107    }
4108
4109    if (nb < m->topsize) { /* Allocate from new or extended top space */
4110      size_t rsize = m->topsize -= nb;
4111      mchunkptr p = m->top;
4112      mchunkptr r = m->top = chunk_plus_offset(p, nb);
4113      r->head = rsize | PINUSE_BIT;
4114      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4115      check_top_chunk(m, m->top);
4116      check_malloced_chunk(m, chunk2mem(p), nb);
4117      return chunk2mem(p);
4118    }
4119  }
4120
4121  MALLOC_FAILURE_ACTION;
4122  return 0;
4123}
4124
4125/* -----------------------  system deallocation -------------------------- */
4126
4127/* Unmap and unlink any mmapped segments that don't contain used chunks */
4128static size_t release_unused_segments(mstate m) {
4129  size_t released = 0;
4130  int nsegs = 0;
4131  msegmentptr pred = &m->seg;
4132  msegmentptr sp = pred->next;
4133  while (sp != 0) {
4134    char* base = sp->base;
4135    size_t size = sp->size;
4136    msegmentptr next = sp->next;
4137    ++nsegs;
4138    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4139      mchunkptr p = align_as_chunk(base);
4140      size_t psize = chunksize(p);
4141      /* Can unmap if first chunk holds entire segment and not pinned */
4142      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4143        tchunkptr tp = (tchunkptr)p;
4144        assert(segment_holds(sp, (char*)sp));
4145        if (p == m->dv) {
4146          m->dv = 0;
4147          m->dvsize = 0;
4148        }
4149        else {
4150          unlink_large_chunk(m, tp);
4151        }
4152        if (CALL_MUNMAP(base, size) == 0) {
4153          released += size;
4154          m->footprint -= size;
4155          /* unlink obsoleted record */
4156          sp = pred;
4157          sp->next = next;
4158        }
4159        else { /* back out if cannot unmap */
4160          insert_large_chunk(m, tp, psize);
4161        }
4162      }
4163    }
4164    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4165      break;
4166    pred = sp;
4167    sp = next;
4168  }
4169  /* Reset check counter */
4170  m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
4171                       nsegs : MAX_RELEASE_CHECK_RATE);
4172  return released;
4173}
4174
4175static int sys_trim(mstate m, size_t pad) {
4176  size_t released = 0;
4177  ensure_initialization();
4178  if (pad < MAX_REQUEST && is_initialized(m)) {
4179    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4180
4181    if (m->topsize > pad) {
4182      /* Shrink top space in granularity-size units, keeping at least one */
4183      size_t unit = mparams.granularity;
4184      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4185                      SIZE_T_ONE) * unit;
4186      msegmentptr sp = segment_holding(m, (char*)m->top);
4187
4188      if (!is_extern_segment(sp)) {
4189        if (is_mmapped_segment(sp)) {
4190          if (HAVE_MMAP &&
4191              sp->size >= extra &&
4192              !has_segment_link(m, sp)) { /* can't shrink if pinned */
4193            size_t newsize = sp->size - extra;
4194            /* Prefer mremap, fall back to munmap */
4195            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4196                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4197              released = extra;
4198            }
4199          }
4200        }
4201        else if (HAVE_MORECORE) {
4202          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4203            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4204          ACQUIRE_MALLOC_GLOBAL_LOCK();
4205          {
4206            /* Make sure end of memory is where we last set it. */
4207            char* old_br = (char*)(CALL_MORECORE(0));
4208            if (old_br == sp->base + sp->size) {
4209              char* rel_br = (char*)(CALL_MORECORE(-extra));
4210              char* new_br = (char*)(CALL_MORECORE(0));
4211              if (rel_br != CMFAIL && new_br < old_br)
4212                released = old_br - new_br;
4213            }
4214          }
4215          RELEASE_MALLOC_GLOBAL_LOCK();
4216        }
4217      }
4218
4219      if (released != 0) {
4220        sp->size -= released;
4221        m->footprint -= released;
4222        init_top(m, m->top, m->topsize - released);
4223        check_top_chunk(m, m->top);
4224      }
4225    }
4226
4227    /* Unmap any unused mmapped segments */
4228    if (HAVE_MMAP)
4229      released += release_unused_segments(m);
4230
4231    /* On failure, disable autotrim to avoid repeated failed future calls */
4232    if (released == 0 && m->topsize > m->trim_check)
4233      m->trim_check = MAX_SIZE_T;
4234  }
4235
4236  return (released != 0)? 1 : 0;
4237}
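
/*
  A worked instance of the shrink computation in sys_trim above (a sketch
  with assumed values, not taken from this build): with a granularity unit
  of 64K and (m->topsize - pad) == 200K,
      extra = ((200K + 64K - 1) / 64K - 1) * 64K = (4 - 1) * 64K = 192K,
  so at most three whole granules are given back and top keeps the
  remaining 8K plus the requested pad.
*/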
4238
4239
4240/* ---------------------------- malloc support --------------------------- */
4241
4242/* allocate a large request from the best fitting chunk in a treebin */
4243static void* tmalloc_large(mstate m, size_t nb) {
4244  tchunkptr v = 0;
4245  size_t rsize = -nb; /* Unsigned negation */
4246  tchunkptr t;
4247  bindex_t idx;
4248  compute_tree_index(nb, idx);
4249  if ((t = *treebin_at(m, idx)) != 0) {
4250    /* Traverse tree for this bin looking for node with size == nb */
4251    size_t sizebits = nb << leftshift_for_tree_index(idx);
4252    tchunkptr rst = 0;  /* The deepest untaken right subtree */
4253    for (;;) {
4254      tchunkptr rt;
4255      size_t trem = chunksize(t) - nb;
4256      if (trem < rsize) {
4257        v = t;
4258        if ((rsize = trem) == 0)
4259          break;
4260      }
4261      rt = t->child[1];
4262      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4263      if (rt != 0 && rt != t)
4264        rst = rt;
4265      if (t == 0) {
4266        t = rst; /* set t to least subtree holding sizes > nb */
4267        break;
4268      }
4269      sizebits <<= 1;
4270    }
4271  }
4272  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4273    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4274    if (leftbits != 0) {
4275      bindex_t i;
4276      binmap_t leastbit = least_bit(leftbits);
4277      compute_bit2idx(leastbit, i);
4278      t = *treebin_at(m, i);
4279    }
4280  }
4281
4282  while (t != 0) { /* find smallest of tree or subtree */
4283    size_t trem = chunksize(t) - nb;
4284    if (trem < rsize) {
4285      rsize = trem;
4286      v = t;
4287    }
4288    t = leftmost_child(t);
4289  }
4290
4291  /*  If dv is a better fit, return 0 so malloc will use it */
4292  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4293    if (RTCHECK(ok_address(m, v))) { /* split */
4294      mchunkptr r = chunk_plus_offset(v, nb);
4295      assert(chunksize(v) == rsize + nb);
4296      if (RTCHECK(ok_next(v, r))) {
4297        unlink_large_chunk(m, v);
4298        if (rsize < MIN_CHUNK_SIZE)
4299          set_inuse_and_pinuse(m, v, (rsize + nb));
4300        else {
4301          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4302          set_size_and_pinuse_of_free_chunk(r, rsize);
4303          insert_chunk(m, r, rsize);
4304        }
4305        return chunk2mem(v);
4306      }
4307    }
4308    CORRUPTION_ERROR_ACTION(m);
4309  }
4310  return 0;
4311}
4312
4313/* allocate a small request from the best fitting chunk in a treebin */
4314static void* tmalloc_small(mstate m, size_t nb) {
4315  tchunkptr t, v;
4316  size_t rsize;
4317  bindex_t i;
4318  binmap_t leastbit = least_bit(m->treemap);
4319  compute_bit2idx(leastbit, i);
4320  v = t = *treebin_at(m, i);
4321  rsize = chunksize(t) - nb;
4322
4323  while ((t = leftmost_child(t)) != 0) {
4324    size_t trem = chunksize(t) - nb;
4325    if (trem < rsize) {
4326      rsize = trem;
4327      v = t;
4328    }
4329  }
4330
4331  if (RTCHECK(ok_address(m, v))) {
4332    mchunkptr r = chunk_plus_offset(v, nb);
4333    assert(chunksize(v) == rsize + nb);
4334    if (RTCHECK(ok_next(v, r))) {
4335      unlink_large_chunk(m, v);
4336      if (rsize < MIN_CHUNK_SIZE)
4337        set_inuse_and_pinuse(m, v, (rsize + nb));
4338      else {
4339        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4340        set_size_and_pinuse_of_free_chunk(r, rsize);
4341        replace_dv(m, r, rsize);
4342      }
4343      return chunk2mem(v);
4344    }
4345  }
4346
4347  CORRUPTION_ERROR_ACTION(m);
4348  return 0;
4349}
4350
4351/* --------------------------- realloc support --------------------------- */
4352
4353static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
4354  if (bytes >= MAX_REQUEST) {
4355    MALLOC_FAILURE_ACTION;
4356    return 0;
4357  }
4358  if (!PREACTION(m)) {
4359    mchunkptr oldp = mem2chunk(oldmem);
4360    size_t oldsize = chunksize(oldp);
4361    mchunkptr next = chunk_plus_offset(oldp, oldsize);
4362    mchunkptr newp = 0;
4363    void* extra = 0;
4364
4365    /* Try to either shrink or extend into top. Else malloc-copy-free */
4366
4367    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
4368                ok_next(oldp, next) && ok_pinuse(next))) {
4369      size_t nb = request2size(bytes);
4370      if (is_mmapped(oldp))
4371        newp = mmap_resize(m, oldp, nb);
4372      else if (oldsize >= nb) { /* already big enough */
4373        size_t rsize = oldsize - nb;
4374        newp = oldp;
4375        if (rsize >= MIN_CHUNK_SIZE) {
4376          mchunkptr remainder = chunk_plus_offset(newp, nb);
4377          set_inuse(m, newp, nb);
4378          set_inuse(m, remainder, rsize);
4379          extra = chunk2mem(remainder);
4380        }
4381      }
4382      else if (next == m->top && oldsize + m->topsize > nb) {
4383        /* Expand into top */
4384        size_t newsize = oldsize + m->topsize;
4385        size_t newtopsize = newsize - nb;
4386        mchunkptr newtop = chunk_plus_offset(oldp, nb);
4387        set_inuse(m, oldp, nb);
4388        newtop->head = newtopsize | PINUSE_BIT;
4389        m->top = newtop;
4390        m->topsize = newtopsize;
4391        newp = oldp;
4392      }
4393    }
4394    else {
4395      USAGE_ERROR_ACTION(m, oldmem);
4396      POSTACTION(m);
4397      return 0;
4398    }
4399
4400    POSTACTION(m);
4401
4402    if (newp != 0) {
4403      if (extra != 0) {
4404        internal_free(m, extra);
4405      }
4406      check_inuse_chunk(m, newp);
4407      return chunk2mem(newp);
4408    }
4409    else {
4410      void* newmem = internal_malloc(m, bytes);
4411      if (newmem != 0) {
4412        size_t oc = oldsize - overhead_for(oldp);
4413        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
4414        internal_free(m, oldmem);
4415      }
4416      return newmem;
4417    }
4418  }
4419  return 0;
4420}
4421
4422/* --------------------------- memalign support -------------------------- */
4423
4424static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4425  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
4426    return internal_malloc(m, bytes);
4427  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4428    alignment = MIN_CHUNK_SIZE;
4429  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4430    size_t a = MALLOC_ALIGNMENT << 1;
4431    while (a < alignment) a <<= 1;
4432    alignment = a;
4433  }
4434
4435  if (bytes >= MAX_REQUEST - alignment) {
4436    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
4437      MALLOC_FAILURE_ACTION;
4438    }
4439  }
4440  else {
4441    size_t nb = request2size(bytes);
4442    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4443    char* mem = (char*)internal_malloc(m, req);
4444    if (mem != 0) {
4445      void* leader = 0;
4446      void* trailer = 0;
4447      mchunkptr p = mem2chunk(mem);
4448
4449      if (PREACTION(m)) return 0;
4450      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4451        /*
4452          Find an aligned spot inside chunk.  Since we need to give
4453          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4454          the first calculation places us at a spot with less than
4455          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4456          We've allocated enough total room so that this is always
4457          possible.
4458        */
4459        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4460                                                       alignment -
4461                                                       SIZE_T_ONE)) &
4462                                             -alignment));
4463        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4464          br : br+alignment;
4465        mchunkptr newp = (mchunkptr)pos;
4466        size_t leadsize = pos - (char*)(p);
4467        size_t newsize = chunksize(p) - leadsize;
4468
4469        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4470          newp->prev_foot = p->prev_foot + leadsize;
4471          newp->head = (newsize|CINUSE_BIT);
4472        }
4473        else { /* Otherwise, give back leader, use the rest */
4474          set_inuse(m, newp, newsize);
4475          set_inuse(m, p, leadsize);
4476          leader = chunk2mem(p);
4477        }
4478        p = newp;
4479      }
4480
4481      /* Give back spare room at the end */
4482      if (!is_mmapped(p)) {
4483        size_t size = chunksize(p);
4484        if (size > nb + MIN_CHUNK_SIZE) {
4485          size_t remainder_size = size - nb;
4486          mchunkptr remainder = chunk_plus_offset(p, nb);
4487          set_inuse(m, p, nb);
4488          set_inuse(m, remainder, remainder_size);
4489          trailer = chunk2mem(remainder);
4490        }
4491      }
4492
4493      assert (chunksize(p) >= nb);
4494      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4495      check_inuse_chunk(m, p);
4496      POSTACTION(m);
4497      if (leader != 0) {
4498        internal_free(m, leader);
4499      }
4500      if (trailer != 0) {
4501        internal_free(m, trailer);
4502      }
4503      return chunk2mem(p);
4504    }
4505  }
4506  return 0;
4507}
4508
4509/* ------------------------ comalloc/coalloc support --------------------- */
4510
4511static void** ialloc(mstate m,
4512                     size_t n_elements,
4513                     size_t* sizes,
4514                     int opts,
4515                     void* chunks[]) {
4516  /*
4517    This provides common support for independent_X routines, handling
4518    all of the combinations that can result.
4519
4520    The opts arg has:
4521    bit 0 set if all elements are same size (using sizes[0])
4522    bit 1 set if elements should be zeroed
4523  */
4524
4525  size_t    element_size;   /* chunksize of each element, if all same */
4526  size_t    contents_size;  /* total size of elements */
4527  size_t    array_size;     /* request size of pointer array */
4528  void*     mem;            /* malloced aggregate space */
4529  mchunkptr p;              /* corresponding chunk */
4530  size_t    remainder_size; /* remaining bytes while splitting */
4531  void**    marray;         /* either "chunks" or malloced ptr array */
4532  mchunkptr array_chunk;    /* chunk for malloced ptr array */
4533  flag_t    was_enabled;    /* to disable mmap */
4534  size_t    size;
4535  size_t    i;
4536
4537  ensure_initialization();
4538  /* compute array length, if needed */
4539  if (chunks != 0) {
4540    if (n_elements == 0)
4541      return chunks; /* nothing to do */
4542    marray = chunks;
4543    array_size = 0;
4544  }
4545  else {
4546    /* if empty req, must still return chunk representing empty array */
4547    if (n_elements == 0)
4548      return (void**)internal_malloc(m, 0);
4549    marray = 0;
4550    array_size = request2size(n_elements * (sizeof(void*)));
4551  }
4552
4553  /* compute total element size */
4554  if (opts & 0x1) { /* all-same-size */
4555    element_size = request2size(*sizes);
4556    contents_size = n_elements * element_size;
4557  }
4558  else { /* add up all the sizes */
4559    element_size = 0;
4560    contents_size = 0;
4561    for (i = 0; i != n_elements; ++i)
4562      contents_size += request2size(sizes[i]);
4563  }
4564
4565  size = contents_size + array_size;
4566
4567  /*
4568     Allocate the aggregate chunk.  First disable direct-mmapping so
4569     malloc won't use it, since we would not be able to later
4570     free/realloc space internal to a segregated mmap region.
4571  */
4572  was_enabled = use_mmap(m);
4573  disable_mmap(m);
4574  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4575  if (was_enabled)
4576    enable_mmap(m);
4577  if (mem == 0)
4578    return 0;
4579
4580  if (PREACTION(m)) return 0;
4581  p = mem2chunk(mem);
4582  remainder_size = chunksize(p);
4583
4584  assert(!is_mmapped(p));
4585
4586  if (opts & 0x2) {       /* optionally clear the elements */
4587    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4588  }
4589
4590  /* If not provided, allocate the pointer array as final part of chunk */
4591  if (marray == 0) {
4592    size_t  array_chunk_size;
4593    array_chunk = chunk_plus_offset(p, contents_size);
4594    array_chunk_size = remainder_size - contents_size;
4595    marray = (void**) (chunk2mem(array_chunk));
4596    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4597    remainder_size = contents_size;
4598  }
4599
4600  /* split out elements */
4601  for (i = 0; ; ++i) {
4602    marray[i] = chunk2mem(p);
4603    if (i != n_elements-1) {
4604      if (element_size != 0)
4605        size = element_size;
4606      else
4607        size = request2size(sizes[i]);
4608      remainder_size -= size;
4609      set_size_and_pinuse_of_inuse_chunk(m, p, size);
4610      p = chunk_plus_offset(p, size);
4611    }
4612    else { /* the final element absorbs any overallocation slop */
4613      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4614      break;
4615    }
4616  }
4617
4618#if DEBUG
4619  if (marray != chunks) {
4620    /* final element must have exactly exhausted chunk */
4621    if (element_size != 0) {
4622      assert(remainder_size == element_size);
4623    }
4624    else {
4625      assert(remainder_size == request2size(sizes[i]));
4626    }
4627    check_inuse_chunk(m, mem2chunk(marray));
4628  }
4629  for (i = 0; i != n_elements; ++i)
4630    check_inuse_chunk(m, mem2chunk(marray[i]));
4631
4632#endif /* DEBUG */
4633
4634  POSTACTION(m);
4635  return marray;
4636}
4637
4638
4639/* -------------------------- public routines ---------------------------- */
4640
4641#if !ONLY_MSPACES
4642
4643void* dlmalloc(size_t bytes) {
4644  /*
4645     Basic algorithm:
4646     If a small request (< 256 bytes minus per-chunk overhead):
4647       1. If one exists, use a remainderless chunk in associated smallbin.
4648          (Remainderless means that there are too few excess bytes to
4649          represent as a chunk.)
4650       2. If it is big enough, use the dv chunk, which is normally the
4651          chunk adjacent to the one used for the most recent small request.
4652       3. If one exists, split the smallest available chunk in a bin,
4653          saving remainder in dv.
4654       4. If it is big enough, use the top chunk.
4655       5. If available, get memory from system and use it
4656     Otherwise, for a large request:
4657       1. Find the smallest available binned chunk that fits, and use it
4658          if it is better fitting than dv chunk, splitting if necessary.
4659       2. If better fitting than any binned chunk, use the dv chunk.
4660       3. If it is big enough, use the top chunk.
4661       4. If request size >= mmap threshold, try to directly mmap this chunk.
4662       5. If available, get memory from system and use it
4663
4664     The ugly goto's here ensure that postaction occurs along all paths.
4665  */
4666
4667#if USE_LOCKS
4668  ensure_initialization(); /* initialize in sys_alloc if not using locks */
4669#endif
4670
4671  if (!PREACTION(gm)) {
4672    void* mem;
4673    size_t nb;
4674    if (bytes <= MAX_SMALL_REQUEST) {
4675      bindex_t idx;
4676      binmap_t smallbits;
4677      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4678      idx = small_index(nb);
4679      smallbits = gm->smallmap >> idx;
4680
4681      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4682        mchunkptr b, p;
4683        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4684        b = smallbin_at(gm, idx);
4685        p = b->fd;
4686        assert(chunksize(p) == small_index2size(idx));
4687        unlink_first_small_chunk(gm, b, p, idx);
4688        set_inuse_and_pinuse(gm, p, small_index2size(idx));
4689        mem = chunk2mem(p);
4690        check_malloced_chunk(gm, mem, nb);
4691        goto postaction;
4692      }
4693
4694      else if (nb > gm->dvsize) {
4695        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4696          mchunkptr b, p, r;
4697          size_t rsize;
4698          bindex_t i;
4699          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4700          binmap_t leastbit = least_bit(leftbits);
4701          compute_bit2idx(leastbit, i);
4702          b = smallbin_at(gm, i);
4703          p = b->fd;
4704          assert(chunksize(p) == small_index2size(i));
4705          unlink_first_small_chunk(gm, b, p, i);
4706          rsize = small_index2size(i) - nb;
4707          /* Fit here cannot be remainderless if 4byte sizes */
4708          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4709            set_inuse_and_pinuse(gm, p, small_index2size(i));
4710          else {
4711            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4712            r = chunk_plus_offset(p, nb);
4713            set_size_and_pinuse_of_free_chunk(r, rsize);
4714            replace_dv(gm, r, rsize);
4715          }
4716          mem = chunk2mem(p);
4717          check_malloced_chunk(gm, mem, nb);
4718          goto postaction;
4719        }
4720
4721        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4722          check_malloced_chunk(gm, mem, nb);
4723          goto postaction;
4724        }
4725      }
4726    }
4727    else if (bytes >= MAX_REQUEST)
4728      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4729    else {
4730      nb = pad_request(bytes);
4731      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4732        check_malloced_chunk(gm, mem, nb);
4733        goto postaction;
4734      }
4735    }
4736
4737    if (nb <= gm->dvsize) {
4738      size_t rsize = gm->dvsize - nb;
4739      mchunkptr p = gm->dv;
4740      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4741        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4742        gm->dvsize = rsize;
4743        set_size_and_pinuse_of_free_chunk(r, rsize);
4744        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4745      }
4746      else { /* exhaust dv */
4747        size_t dvs = gm->dvsize;
4748        gm->dvsize = 0;
4749        gm->dv = 0;
4750        set_inuse_and_pinuse(gm, p, dvs);
4751      }
4752      mem = chunk2mem(p);
4753      check_malloced_chunk(gm, mem, nb);
4754      goto postaction;
4755    }
4756
4757    else if (nb < gm->topsize) { /* Split top */
4758      size_t rsize = gm->topsize -= nb;
4759      mchunkptr p = gm->top;
4760      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4761      r->head = rsize | PINUSE_BIT;
4762      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4763      mem = chunk2mem(p);
4764      check_top_chunk(gm, gm->top);
4765      check_malloced_chunk(gm, mem, nb);
4766      goto postaction;
4767    }
4768
4769    mem = sys_alloc(gm, nb);
4770
4771  postaction:
4772    POSTACTION(gm);
4773    return mem;
4774  }
4775
4776  return 0;
4777}
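
/*
  A worked instance of the small-request path in dlmalloc above (a sketch
  assuming a 4-byte size_t, 8-byte alignment and no FOOTERS; other
  configurations give different numbers): a request for 20 bytes is padded
  to nb = (20 + 4 + 7) & ~7 = 24, which maps to smallbin index 24 >> 3 == 3,
  so the first check is for a remainderless chunk in bin 3 or bin 4; if
  neither exists, steps 2-5 of the numbered list above are tried in order.
*/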
4778
4779void dlfree(void* mem) {
4780  /*
4781     Consolidate freed chunks with preceding or succeeding bordering
4782     free chunks, if they exist, and then place in a bin.  Intermixed
4783     with special cases for top, dv, mmapped chunks, and usage errors.
4784  */
4785
4786  if (mem != 0) {
4787    mchunkptr p  = mem2chunk(mem);
4788#if FOOTERS
4789    mstate fm = get_mstate_for(p);
4790    if (!ok_magic(fm)) {
4791      USAGE_ERROR_ACTION(fm, p);
4792      return;
4793    }
4794#else /* FOOTERS */
4795#define fm gm
4796#endif /* FOOTERS */
4797    if (!PREACTION(fm)) {
4798      check_inuse_chunk(fm, p);
4799      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4800        size_t psize = chunksize(p);
4801        mchunkptr next = chunk_plus_offset(p, psize);
4802        if (!pinuse(p)) {
4803          size_t prevsize = p->prev_foot;
4804          if ((prevsize & IS_MMAPPED_BIT) != 0) {
4805            prevsize &= ~IS_MMAPPED_BIT;
4806            psize += prevsize + MMAP_FOOT_PAD;
4807            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4808              fm->footprint -= psize;
4809            goto postaction;
4810          }
4811          else {
4812            mchunkptr prev = chunk_minus_offset(p, prevsize);
4813            psize += prevsize;
4814            p = prev;
4815            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4816              if (p != fm->dv) {
4817                unlink_chunk(fm, p, prevsize);
4818              }
4819              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4820                fm->dvsize = psize;
4821                set_free_with_pinuse(p, psize, next);
4822                goto postaction;
4823              }
4824            }
4825            else
4826              goto erroraction;
4827          }
4828        }
4829
4830        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4831          if (!cinuse(next)) {  /* consolidate forward */
4832            if (next == fm->top) {
4833              size_t tsize = fm->topsize += psize;
4834              fm->top = p;
4835              p->head = tsize | PINUSE_BIT;
4836              if (p == fm->dv) {
4837                fm->dv = 0;
4838                fm->dvsize = 0;
4839              }
4840              if (should_trim(fm, tsize))
4841                sys_trim(fm, 0);
4842              goto postaction;
4843            }
4844            else if (next == fm->dv) {
4845              size_t dsize = fm->dvsize += psize;
4846              fm->dv = p;
4847              set_size_and_pinuse_of_free_chunk(p, dsize);
4848              goto postaction;
4849            }
4850            else {
4851              size_t nsize = chunksize(next);
4852              psize += nsize;
4853              unlink_chunk(fm, next, nsize);
4854              set_size_and_pinuse_of_free_chunk(p, psize);
4855              if (p == fm->dv) {
4856                fm->dvsize = psize;
4857                goto postaction;
4858              }
4859            }
4860          }
4861          else
4862            set_free_with_pinuse(p, psize, next);
4863
4864          if (is_small(psize)) {
4865            insert_small_chunk(fm, p, psize);
4866            check_free_chunk(fm, p);
4867          }
4868          else {
4869            tchunkptr tp = (tchunkptr)p;
4870            insert_large_chunk(fm, tp, psize);
4871            check_free_chunk(fm, p);
4872            if (--fm->release_checks == 0)
4873              release_unused_segments(fm);
4874          }
4875          goto postaction;
4876        }
4877      }
4878    erroraction:
4879      USAGE_ERROR_ACTION(fm, p);
4880    postaction:
4881      POSTACTION(fm);
4882    }
4883  }
4884#if !FOOTERS
4885#undef fm
4886#endif /* FOOTERS */
4887}
4888
4889void* dlcalloc(size_t n_elements, size_t elem_size) {
4890  void* mem;
4891  size_t req = 0;
4892  if (n_elements != 0) {
4893    req = n_elements * elem_size;
4894    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4895        (req / n_elements != elem_size))
4896      req = MAX_SIZE_T; /* force downstream failure on overflow */
4897  }
4898  mem = dlmalloc(req);
4899  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4900    memset(mem, 0, req);
4901  return mem;
4902}
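
/*
  How the overflow guard in dlcalloc above works, on assumed 32-bit
  size_t values (a sketch, not specific to this build): for
  n_elements == 65536 and elem_size == 65537 the product wraps to 65536,
  but because an operand has bits above 0xffff the division check runs,
  and 65536 / 65536 == 1 != 65537, so req is forced to MAX_SIZE_T and the
  request fails instead of returning an undersized block.
*/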
4903
4904void* dlrealloc(void* oldmem, size_t bytes) {
4905  if (oldmem == 0)
4906    return dlmalloc(bytes);
4907#ifdef REALLOC_ZERO_BYTES_FREES
4908  if (bytes == 0) {
4909    dlfree(oldmem);
4910    return 0;
4911  }
4912#endif /* REALLOC_ZERO_BYTES_FREES */
4913  else {
4914#if ! FOOTERS
4915    mstate m = gm;
4916#else /* FOOTERS */
4917    mstate m = get_mstate_for(mem2chunk(oldmem));
4918    if (!ok_magic(m)) {
4919      USAGE_ERROR_ACTION(m, oldmem);
4920      return 0;
4921    }
4922#endif /* FOOTERS */
4923    return internal_realloc(m, oldmem, bytes);
4924  }
4925}
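
/*
  Typical use of dlrealloc (a sketch; p, newsize and handle_failure are
  illustrative names): keep the old pointer until the call succeeds, since
  on failure the original block is left intact and must still be freed.

      void* q = dlrealloc(p, newsize);
      if (q != 0)
        p = q;              // success: adopt the possibly-moved block
      else
        handle_failure();   // p is untouched and remains valid
*/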
4926
4927void* dlmemalign(size_t alignment, size_t bytes) {
4928  return internal_memalign(gm, alignment, bytes);
4929}
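
/*
  Example use of dlmemalign (a sketch): a non-power-of-two alignment is
  rounded up to the next power of two, and alignments no larger than
  MALLOC_ALIGNMENT fall back to plain malloc.

      void* p = dlmemalign(64, 1000);   // 64-byte aligned, >= 1000 usable bytes
      if (p != 0) {
        assert(((size_t)p & 63) == 0);
        dlfree(p);
      }
*/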
4930
4931void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4932                                 void* chunks[]) {
4933  size_t sz = elem_size; /* serves as 1-element array */
4934  return ialloc(gm, n_elements, &sz, 3, chunks);
4935}
4936
4937void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4938                                   void* chunks[]) {
4939  return ialloc(gm, n_elements, sizes, 0, chunks);
4940}
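
/*
  Example use of dlindependent_comalloc (a sketch; the element sizes are
  illustrative): carve one allocation into three independently freeable
  pieces of different sizes, using a caller-supplied pointer array so the
  array itself need not be allocated.

      size_t sizes[3];
      void*  ptrs[3];
      sizes[0] = 16 * sizeof(int);
      sizes[1] = 8 * sizeof(double);
      sizes[2] = 100;
      if (dlindependent_comalloc(3, sizes, ptrs) != 0) {
        // ptrs[0..2] may each be used and later dlfree'd individually
      }
*/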
4941
4942void* dlvalloc(size_t bytes) {
4943  size_t pagesz;
4944  ensure_initialization();
4945  pagesz = mparams.page_size;
4946  return dlmemalign(pagesz, bytes);
4947}
4948
4949void* dlpvalloc(size_t bytes) {
4950  size_t pagesz;
4951  ensure_initialization();
4952  pagesz = mparams.page_size;
4953  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4954}
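
/*
  The difference between the two page-aligned variants above (a sketch
  assuming a 4096-byte page):

      void* a = dlvalloc(100);    // page-aligned, at least 100 usable bytes
      void* b = dlpvalloc(100);   // page-aligned, request rounded up to 4096
*/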
4955
4956int dlmalloc_trim(size_t pad) {
4957  int result = 0;
4958  ensure_initialization();
4959  if (!PREACTION(gm)) {
4960    result = sys_trim(gm, pad);
4961    POSTACTION(gm);
4962  }
4963  return result;
4964}
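
/*
  Example use of dlmalloc_trim (a sketch; objs and n are illustrative):
  after freeing a large batch of objects, ask the allocator to return
  unused top/segment space to the system, keeping pad bytes of headroom.

      size_t i;
      for (i = 0; i < n; ++i)
        dlfree(objs[i]);
      if (dlmalloc_trim(64 * 1024))
        ;   // nonzero: some memory was actually released
*/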
4965
4966size_t dlmalloc_footprint(void) {
4967  return gm->footprint;
4968}
4969
4970size_t dlmalloc_max_footprint(void) {
4971  return gm->max_footprint;
4972}
4973
4974#if !NO_MALLINFO
4975struct mallinfo dlmallinfo(void) {
4976  return internal_mallinfo(gm);
4977}
4978#endif /* NO_MALLINFO */
4979
4980void dlmalloc_stats() {
4981  internal_malloc_stats(gm);
4982}
4983
4984int dlmallopt(int param_number, int value) {
4985  return change_mparam(param_number, value);
4986}
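
/*
  Example use of dlmallopt (a sketch, assuming the M_TRIM_THRESHOLD
  parameter number defined earlier in this file): raise the trim threshold
  so top is trimmed less eagerly; the return value is nonzero on success.

      if (!dlmallopt(M_TRIM_THRESHOLD, 1024 * 1024))
        ;   // parameter or value was not accepted
*/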
4987
4988#endif /* !ONLY_MSPACES */
4989
4990size_t dlmalloc_usable_size(void* mem) {
4991  if (mem != 0) {
4992    mchunkptr p = mem2chunk(mem);
4993    if (cinuse(p))
4994      return chunksize(p) - overhead_for(p);
4995  }
4996  return 0;
4997}
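
/*
  Example of dlmalloc_usable_size (a sketch): because requests are padded
  and rounded to chunk granularity, the usable size may exceed what was
  asked for, and all of it may be used without reallocating.

      char* p = (char*)dlmalloc(100);
      if (p != 0) {
        size_t n = dlmalloc_usable_size(p);   // n >= 100
        memset(p, 0, n);                      // all n bytes are legal to touch
        dlfree(p);
      }
*/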
4998
4999/* ----------------------------- user mspaces ---------------------------- */
5000
5001#if MSPACES
5002
5003static mstate init_user_mstate(char* tbase, size_t tsize) {
5004  size_t msize = pad_request(sizeof(struct malloc_state));
5005  mchunkptr mn;
5006  mchunkptr msp = align_as_chunk(tbase);
5007  mstate m = (mstate)(chunk2mem(msp));
5008  memset(m, 0, msize);
5009  INITIAL_LOCK(&m->mutex);
5010  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
5011  m->seg.base = m->least_addr = tbase;
5012  m->seg.size = m->footprint = m->max_footprint = tsize;
5013  m->magic = mparams.magic;
5014  m->release_checks = MAX_RELEASE_CHECK_RATE;
5015  m->mflags = mparams.default_mflags;
5016  m->extp = 0;
5017  m->exts = 0;
5018  disable_contiguous(m);
5019  init_bins(m);
5020  mn = next_chunk(mem2chunk(m));
5021  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5022  check_top_chunk(m, m->top);
5023  return m;
5024}
5025
5026mspace create_mspace(size_t capacity, int locked) {
5027  mstate m = 0;
5028  size_t msize;
5029  ensure_initialization();
5030  msize = pad_request(sizeof(struct malloc_state));
5031  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5032    size_t rs = ((capacity == 0)? mparams.granularity :
5033                 (capacity + TOP_FOOT_SIZE + msize));
5034    size_t tsize = granularity_align(rs);
5035    char* tbase = (char*)(CALL_MMAP(tsize));
5036    if (tbase != CMFAIL) {
5037      m = init_user_mstate(tbase, tsize);
5038      m->seg.sflags = IS_MMAPPED_BIT;
5039      set_lock(m, locked);
5040    }
5041  }
5042  return (mspace)m;
5043}
5044
5045mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5046  mstate m = 0;
5047  size_t msize;
5048  ensure_initialization();
5049  msize = pad_request(sizeof(struct malloc_state));
5050  if (capacity > msize + TOP_FOOT_SIZE &&
5051      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5052    m = init_user_mstate((char*)base, capacity);
5053    m->seg.sflags = EXTERN_BIT;
5054    set_lock(m, locked);
5055  }
5056  return (mspace)m;
5057}
5058
5059int mspace_mmap_large_chunks(mspace msp, int enable) {
5060  int ret = 0;
5061  mstate ms = (mstate)msp;
5062  if (!PREACTION(ms)) {
5063    if (use_mmap(ms))
5064      ret = 1;
5065    if (enable)
5066      enable_mmap(ms);
5067    else
5068      disable_mmap(ms);
5069    POSTACTION(ms);
5070  }
5071  return ret;
5072}
5073
5074size_t destroy_mspace(mspace msp) {
5075  size_t freed = 0;
5076  mstate ms = (mstate)msp;
5077  if (ok_magic(ms)) {
5078    msegmentptr sp = &ms->seg;
5079    while (sp != 0) {
5080      char* base = sp->base;
5081      size_t size = sp->size;
5082      flag_t flag = sp->sflags;
5083      sp = sp->next;
5084      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
5085          CALL_MUNMAP(base, size) == 0)
5086        freed += size;
5087    }
5088  }
5089  else {
5090    USAGE_ERROR_ACTION(ms,ms);
5091  }
5092  return freed;
5093}
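
/*
  Example use of the mspace routines in this section (a sketch; the sizes
  are illustrative): create an independent space, allocate from it, and
  tear everything down with one call instead of freeing each object.

      mspace ms = create_mspace(0, 0);        // default capacity, no locking
      if (ms != 0) {
        void* a = mspace_malloc(ms, 128);
        void* b = mspace_calloc(ms, 10, sizeof(double));
        mspace_free(ms, a);                   // individual frees still work
        (void)b;
        destroy_mspace(ms);                   // releases all remaining memory
      }
*/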
5094
5095/*
5096  mspace versions of routines are near-clones of the global
5097  versions. This is not so nice but better than the alternatives.
5098*/
5099
5100
5101void* mspace_malloc(mspace msp, size_t bytes) {
5102  mstate ms = (mstate)msp;
5103  if (!ok_magic(ms)) {
5104    USAGE_ERROR_ACTION(ms,ms);
5105    return 0;
5106  }
5107  if (!PREACTION(ms)) {
5108    void* mem;
5109    size_t nb;
5110    if (bytes <= MAX_SMALL_REQUEST) {
5111      bindex_t idx;
5112      binmap_t smallbits;
5113      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5114      idx = small_index(nb);
5115      smallbits = ms->smallmap >> idx;
5116
5117      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5118        mchunkptr b, p;
5119        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
5120        b = smallbin_at(ms, idx);
5121        p = b->fd;
5122        assert(chunksize(p) == small_index2size(idx));
5123        unlink_first_small_chunk(ms, b, p, idx);
5124        set_inuse_and_pinuse(ms, p, small_index2size(idx));
5125        mem = chunk2mem(p);
5126        check_malloced_chunk(ms, mem, nb);
5127        goto postaction;
5128      }
5129
5130      else if (nb > ms->dvsize) {
5131        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5132          mchunkptr b, p, r;
5133          size_t rsize;
5134          bindex_t i;
5135          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5136          binmap_t leastbit = least_bit(leftbits);
5137          compute_bit2idx(leastbit, i);
5138          b = smallbin_at(ms, i);
5139          p = b->fd;
5140          assert(chunksize(p) == small_index2size(i));
5141          unlink_first_small_chunk(ms, b, p, i);
5142          rsize = small_index2size(i) - nb;
5143          /* Fit here cannot be remainderless if 4byte sizes */
5144          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5145            set_inuse_and_pinuse(ms, p, small_index2size(i));
5146          else {
5147            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5148            r = chunk_plus_offset(p, nb);
5149            set_size_and_pinuse_of_free_chunk(r, rsize);
5150            replace_dv(ms, r, rsize);
5151          }
5152          mem = chunk2mem(p);
5153          check_malloced_chunk(ms, mem, nb);
5154          goto postaction;
5155        }
5156
5157        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5158          check_malloced_chunk(ms, mem, nb);
5159          goto postaction;
5160        }
5161      }
5162    }
5163    else if (bytes >= MAX_REQUEST)
5164      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5165    else {
5166      nb = pad_request(bytes);
5167      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5168        check_malloced_chunk(ms, mem, nb);
5169        goto postaction;
5170      }
5171    }
5172
5173    if (nb <= ms->dvsize) {
5174      size_t rsize = ms->dvsize - nb;
5175      mchunkptr p = ms->dv;
5176      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5177        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5178        ms->dvsize = rsize;
5179        set_size_and_pinuse_of_free_chunk(r, rsize);
5180        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5181      }
5182      else { /* exhaust dv */
5183        size_t dvs = ms->dvsize;
5184        ms->dvsize = 0;
5185        ms->dv = 0;
5186        set_inuse_and_pinuse(ms, p, dvs);
5187      }
5188      mem = chunk2mem(p);
5189      check_malloced_chunk(ms, mem, nb);
5190      goto postaction;
5191    }
5192
5193    else if (nb < ms->topsize) { /* Split top */
5194      size_t rsize = ms->topsize -= nb;
5195      mchunkptr p = ms->top;
5196      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5197      r->head = rsize | PINUSE_BIT;
5198      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5199      mem = chunk2mem(p);
5200      check_top_chunk(ms, ms->top);
5201      check_malloced_chunk(ms, mem, nb);
5202      goto postaction;
5203    }
5204
5205    mem = sys_alloc(ms, nb);
5206
5207  postaction:
5208    POSTACTION(ms);
5209    return mem;
5210  }
5211
5212  return 0;
5213}
5214
5215void mspace_free(mspace msp, void* mem) {
5216  if (mem != 0) {
5217    mchunkptr p  = mem2chunk(mem);
5218#if FOOTERS
5219    mstate fm = get_mstate_for(p);
5220#else /* FOOTERS */
5221    mstate fm = (mstate)msp;
5222#endif /* FOOTERS */
5223    if (!ok_magic(fm)) {
5224      USAGE_ERROR_ACTION(fm, p);
5225      return;
5226    }
5227    if (!PREACTION(fm)) {
5228      check_inuse_chunk(fm, p);
5229      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
5230        size_t psize = chunksize(p);
5231        mchunkptr next = chunk_plus_offset(p, psize);
5232        if (!pinuse(p)) {
5233          size_t prevsize = p->prev_foot;
5234          if ((prevsize & IS_MMAPPED_BIT) != 0) {
5235            prevsize &= ~IS_MMAPPED_BIT;
5236            psize += prevsize + MMAP_FOOT_PAD;
5237            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5238              fm->footprint -= psize;
5239            goto postaction;
5240          }
5241          else {
5242            mchunkptr prev = chunk_minus_offset(p, prevsize);
5243            psize += prevsize;
5244            p = prev;
5245            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5246              if (p != fm->dv) {
5247                unlink_chunk(fm, p, prevsize);
5248              }
5249              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5250                fm->dvsize = psize;
5251                set_free_with_pinuse(p, psize, next);
5252                goto postaction;
5253              }
5254            }
5255            else
5256              goto erroraction;
5257          }
5258        }
5259
5260        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5261          if (!cinuse(next)) {  /* consolidate forward */
5262            if (next == fm->top) {
5263              size_t tsize = fm->topsize += psize;
5264              fm->top = p;
5265              p->head = tsize | PINUSE_BIT;
5266              if (p == fm->dv) {
5267                fm->dv = 0;
5268                fm->dvsize = 0;
5269              }
5270              if (should_trim(fm, tsize))
5271                sys_trim(fm, 0);
5272              goto postaction;
5273            }
5274            else if (next == fm->dv) {
5275              size_t dsize = fm->dvsize += psize;
5276              fm->dv = p;
5277              set_size_and_pinuse_of_free_chunk(p, dsize);
5278              goto postaction;
5279            }
5280            else {
5281              size_t nsize = chunksize(next);
5282              psize += nsize;
5283              unlink_chunk(fm, next, nsize);
5284              set_size_and_pinuse_of_free_chunk(p, psize);
5285              if (p == fm->dv) {
5286                fm->dvsize = psize;
5287                goto postaction;
5288              }
5289            }
5290          }
5291          else
5292            set_free_with_pinuse(p, psize, next);
5293
5294          if (is_small(psize)) {
5295            insert_small_chunk(fm, p, psize);
5296            check_free_chunk(fm, p);
5297          }
5298          else {
5299            tchunkptr tp = (tchunkptr)p;
5300            insert_large_chunk(fm, tp, psize);
5301            check_free_chunk(fm, p);
5302            if (--fm->release_checks == 0)
5303              release_unused_segments(fm);
5304          }
5305          goto postaction;
5306        }
5307      }
5308    erroraction:
5309      USAGE_ERROR_ACTION(fm, p);
5310    postaction:
5311      POSTACTION(fm);
5312    }
5313  }
5314}
5315
5316void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5317  void* mem;
5318  size_t req = 0;
5319  mstate ms = (mstate)msp;
5320  if (!ok_magic(ms)) {
5321    USAGE_ERROR_ACTION(ms,ms);
5322    return 0;
5323  }
5324  if (n_elements != 0) {
5325    req = n_elements * elem_size;
5326    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5327        (req / n_elements != elem_size))
5328      req = MAX_SIZE_T; /* force downstream failure on overflow */
5329  }
5330  mem = internal_malloc(ms, req);
5331  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5332    memset(mem, 0, req);
5333  return mem;
5334}
5335
5336void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5337  if (oldmem == 0)
5338    return mspace_malloc(msp, bytes);
5339#ifdef REALLOC_ZERO_BYTES_FREES
5340  if (bytes == 0) {
5341    mspace_free(msp, oldmem);
5342    return 0;
5343  }
5344#endif /* REALLOC_ZERO_BYTES_FREES */
5345  else {
5346#if FOOTERS
5347    mchunkptr p  = mem2chunk(oldmem);
5348    mstate ms = get_mstate_for(p);
5349#else /* FOOTERS */
5350    mstate ms = (mstate)msp;
5351#endif /* FOOTERS */
5352    if (!ok_magic(ms)) {
5353      USAGE_ERROR_ACTION(ms,ms);
5354      return 0;
5355    }
5356    return internal_realloc(ms, oldmem, bytes);
5357  }
5358}
5359
5360void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5361  mstate ms = (mstate)msp;
5362  if (!ok_magic(ms)) {
5363    USAGE_ERROR_ACTION(ms,ms);
5364    return 0;
5365  }
5366  return internal_memalign(ms, alignment, bytes);
5367}
5368
5369void** mspace_independent_calloc(mspace msp, size_t n_elements,
5370                                 size_t elem_size, void* chunks[]) {
5371  size_t sz = elem_size; /* serves as 1-element array */
5372  mstate ms = (mstate)msp;
5373  if (!ok_magic(ms)) {
5374    USAGE_ERROR_ACTION(ms,ms);
5375    return 0;
5376  }
5377  return ialloc(ms, n_elements, &sz, 3, chunks);
5378}
5379
5380void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5381                                   size_t sizes[], void* chunks[]) {
5382  mstate ms = (mstate)msp;
5383  if (!ok_magic(ms)) {
5384    USAGE_ERROR_ACTION(ms,ms);
5385    return 0;
5386  }
5387  return ialloc(ms, n_elements, sizes, 0, chunks);
5388}
5389
5390int mspace_trim(mspace msp, size_t pad) {
5391  int result = 0;
5392  mstate ms = (mstate)msp;
5393  if (ok_magic(ms)) {
5394    if (!PREACTION(ms)) {
5395      result = sys_trim(ms, pad);
5396      POSTACTION(ms);
5397    }
5398  }
5399  else {
5400    USAGE_ERROR_ACTION(ms,ms);
5401  }
5402  return result;
5403}
5404
5405void mspace_malloc_stats(mspace msp) {
5406  mstate ms = (mstate)msp;
5407  if (ok_magic(ms)) {
5408    internal_malloc_stats(ms);
5409  }
5410  else {
5411    USAGE_ERROR_ACTION(ms,ms);
5412  }
5413}
5414
5415size_t mspace_footprint(mspace msp) {
5416  size_t result = 0;
5417  mstate ms = (mstate)msp;
5418  if (ok_magic(ms)) {
5419    result = ms->footprint;
5420  }
5421  else {
5422    USAGE_ERROR_ACTION(ms,ms);
5423  }
5424  return result;
5425}
5426
5427
5428size_t mspace_max_footprint(mspace msp) {
5429  size_t result = 0;
5430  mstate ms = (mstate)msp;
5431  if (ok_magic(ms)) {
5432    result = ms->max_footprint;
5433  }
5434  else {
5435    USAGE_ERROR_ACTION(ms,ms);
5436  }
5437  return result;
5438}
5439
5440
5441#if !NO_MALLINFO
5442struct mallinfo mspace_mallinfo(mspace msp) {
5443  mstate ms = (mstate)msp;
5444  if (!ok_magic(ms)) {
5445    USAGE_ERROR_ACTION(ms,ms);
5446  }
5447  return internal_mallinfo(ms);
5448}
5449#endif /* NO_MALLINFO */
5450
5451size_t mspace_usable_size(void* mem) {
5452  if (mem != 0) {
5453    mchunkptr p = mem2chunk(mem);
5454    if (cinuse(p))
5455      return chunksize(p) - overhead_for(p);
5456  }
5457  return 0;
5458}
5459
5460int mspace_mallopt(int param_number, int value) {
5461  return change_mparam(param_number, value);
5462}
5463
5464#endif /* MSPACES */
5465
5466/* -------------------- Alternative MORECORE functions ------------------- */
5467
5468/*
5469  Guidelines for creating a custom version of MORECORE:
5470
5471  * For best performance, MORECORE should allocate in multiples of pagesize.
5472  * MORECORE may allocate more memory than requested. (Or even less,
5473      but this will usually result in a malloc failure.)
5474  * MORECORE must not allocate memory when given argument zero, but
5475      instead return one past the end address of memory from previous
5476      nonzero call.
5477  * For best performance, consecutive calls to MORECORE with positive
5478      arguments should return increasing addresses, indicating that
5479      space has been contiguously extended.
5480  * Even though consecutive calls to MORECORE need not return contiguous
5481      addresses, it must be OK for malloc'ed chunks to span multiple
5482      regions in those cases where they do happen to be contiguous.
5483  * MORECORE need not handle negative arguments -- it may instead
5484      just return MFAIL when given negative arguments.
5485      Negative arguments are always multiples of pagesize. MORECORE
5486      must not misinterpret negative args as large positive unsigned
5487      args. You can suppress all such calls from even occurring by defining
5488      MORECORE_CANNOT_TRIM.
5489
5490  As an example alternative MORECORE, here is a custom allocator
5491  kindly contributed for pre-OSX macOS.  It uses virtually but not
5492  necessarily physically contiguous non-paged memory (locked in,
5493  present and won't get swapped out).  You can use it by uncommenting
5494  this section, adding some #includes, and setting up the appropriate
5495  defines above:
5496
5497      #define MORECORE osMoreCore
5498
5499  There is also a shutdown routine that should somehow be called for
5500  cleanup upon program exit.
5501
5502  #define MAX_POOL_ENTRIES 100
5503  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
5504  static int next_os_pool;
5505  void *our_os_pools[MAX_POOL_ENTRIES];
5506
5507  void *osMoreCore(int size)
5508  {
5509    void *ptr = 0;
5510    static void *sbrk_top = 0;
5511
5512    if (size > 0)
5513    {
5514      if (size < MINIMUM_MORECORE_SIZE)
5515         size = MINIMUM_MORECORE_SIZE;
5516      if (CurrentExecutionLevel() == kTaskLevel)
5517         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5518      if (ptr == 0)
5519      {
5520        return (void *) MFAIL;
5521      }
5522      // save ptrs so they can be freed during cleanup
5523      our_os_pools[next_os_pool] = ptr;
5524      next_os_pool++;
5525      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5526      sbrk_top = (char *) ptr + size;
5527      return ptr;
5528    }
5529    else if (size < 0)
5530    {
5531      // we don't currently support shrink behavior
5532      return (void *) MFAIL;
5533    }
5534    else
5535    {
5536      return sbrk_top;
5537    }
5538  }
5539
5540  // cleanup any allocated memory pools
5541  // called as last thing before shutting down driver
5542
5543  void osCleanupMem(void)
5544  {
5545    void **ptr;
5546
5547    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5548      if (*ptr)
5549      {
5550         PoolDeallocate(*ptr);
5551         *ptr = 0;
5552      }
5553  }
5554
5555*/
5556
5557
5558/* -----------------------------------------------------------------------
5559History:
5560    V2.8.4 (not yet released)
5561      * Add mspace_mmap_large_chunks; thanks to Jean Brouwers
5562      * Fix insufficient sys_alloc padding when using 16byte alignment
5563      * Fix bad error check in mspace_footprint
5564      * Adaptations for ptmalloc, courtesy of Wolfram Gloger.
5565      * Reentrant spin locks, courtesy of Earl Chew and others
5566      * Win32 improvements, courtesy of Niall Douglas and Earl Chew
5567      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
5568      * Extension hook in malloc_state
5569      * Various small adjustments to reduce warnings on some compilers
5570      * Various configuration extensions/changes for more platforms. Thanks
5571         to all who contributed these.
5572
5573    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
5574      * Add max_footprint functions
5575      * Ensure all appropriate literals are size_t
5576      * Fix conditional compilation problem for some #define settings
5577      * Avoid concatenating segments with the one provided
5578        in create_mspace_with_base
5579      * Rename some variables to avoid compiler shadowing warnings
5580      * Use explicit lock initialization.
5581      * Better handling of sbrk interference.
5582      * Simplify and fix segment insertion, trimming and mspace_destroy
5583      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5584      * Thanks especially to Dennis Flanagan for help on these.
5585
5586    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
5587      * Fix memalign brace error.
5588
5589    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
5590      * Fix improper #endif nesting in C++
5591      * Add explicit casts needed for C++
5592
5593    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
5594      * Use trees for large bins
5595      * Support mspaces
5596      * Use segments to unify sbrk-based and mmap-based system allocation,
5597        removing need for emulation on most platforms without sbrk.
5598      * Default safety checks
5599      * Optional footer checks. Thanks to William Robertson for the idea.
5600      * Internal code refactoring
5601      * Incorporate suggestions and platform-specific changes.
5602        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5603        Aaron Bachmann,  Emery Berger, and others.
5604      * Speed up non-fastbin processing enough to remove fastbins.
5605      * Remove useless cfree() to avoid conflicts with other apps.
5606      * Remove internal memcpy, memset. Compilers handle builtins better.
5607      * Remove some options that no one ever used and rename others.
5608
5609    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
5610      * Fix malloc_state bitmap array misdeclaration
5611
5612    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
5613      * Allow tuning of FIRST_SORTED_BIN_SIZE
5614      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5615      * Better detection and support for non-contiguousness of MORECORE.
5616        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5617      * Bypass most of malloc if no frees. Thanks to Emery Berger.
5618      * Fix freeing of old top non-contiguous chunk in sysmalloc.
5619      * Raised default trim and map thresholds to 256K.
5620      * Fix mmap-related #defines. Thanks to Lubos Lunak.
5621      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5622      * Branch-free bin calculation
5623      * Default trim and mmap thresholds now 256K.
5624
5625    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
5626      * Introduce independent_comalloc and independent_calloc.
5627        Thanks to Michael Pachos for motivation and help.
5628      * Make optional .h file available
5629      * Allow > 2GB requests on 32bit systems.
5630      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5631        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5632        and Anonymous.
5633      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5634        helping test this.)
5635      * memalign: check alignment arg
5636      * realloc: don't try to shift chunks backwards, since this
5637        leads to  more fragmentation in some programs and doesn't
5638        seem to help in any others.
5639      * Collect all cases in malloc requiring system memory into sysmalloc
5640      * Use mmap as backup to sbrk
5641      * Place all internal state in malloc_state
5642      * Introduce fastbins (although similar to 2.5.1)
5643      * Many minor tunings and cosmetic improvements
5644      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5645      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5646        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5647      * Include errno.h to support default failure action.
5648
5649    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
5650      * return null for negative arguments
5651      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5652         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5653          (e.g. WIN32 platforms)
5654         * Cleanup header file inclusion for WIN32 platforms
5655         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5656         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5657           memory allocation routines
5658         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5659         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5660           usage of 'assert' in non-WIN32 code
5661         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5662           avoid infinite loop
5663      * Always call 'fREe()' rather than 'free()'
5664
5665    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
5666      * Fixed ordering problem with boundary-stamping
5667
5668    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
5669      * Added pvalloc, as recommended by H.J. Liu
5670      * Added 64bit pointer support mainly from Wolfram Gloger
5671      * Added anonymously donated WIN32 sbrk emulation
5672      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5673      * malloc_extend_top: fix mask error that caused wastage after
5674        foreign sbrks
5675      * Add linux mremap support code from HJ Liu
5676
5677    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
5678      * Integrated most documentation with the code.
5679      * Add support for mmap, with help from
5680        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5681      * Use last_remainder in more cases.
5682      * Pack bins using idea from  colin@nyx10.cs.du.edu
5683      * Use ordered bins instead of best-fit threshold
5684      * Eliminate block-local decls to simplify tracing and debugging.
5685      * Support another case of realloc via move into top
5686      * Fix error occurring when initial sbrk_base not word-aligned.
5687      * Rely on page size for units instead of SBRK_UNIT to
5688        avoid surprises about sbrk alignment conventions.
5689      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5690        (raymond@es.ele.tue.nl) for the suggestion.
5691      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5692      * More precautions for cases where other routines call sbrk,
5693        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5694      * Added macros etc., allowing use in linux libc from
5695        H.J. Lu (hjl@gnu.ai.mit.edu)
5696      * Inverted this history list
5697
5698    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
5699      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5700      * Removed all preallocation code since under current scheme
5701        the work required to undo bad preallocations exceeds
5702        the work saved in good cases for most test programs.
5703      * No longer use return list or unconsolidated bins since
5704        no scheme using them consistently outperforms those that don't
5705        given above changes.
5706      * Use best fit for very large chunks to prevent some worst-cases.
5707      * Added some support for debugging
5708
5709    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
5710      * Removed footers when chunks are in use. Thanks to
5711        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5712
5713    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
5714      * Added malloc_trim, with help from Wolfram Gloger
5715        (wmglo@Dent.MED.Uni-Muenchen.DE).
5716
5717    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
5718
5719    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
5720      * realloc: try to expand in both directions
5721      * malloc: swap order of clean-bin strategy;
5722      * realloc: only conditionally expand backwards
5723      * Try not to scavenge used bins
5724      * Use bin counts as a guide to preallocation
5725      * Occasionally bin return list chunks in first scan
5726      * Add a few optimizations from colin@nyx10.cs.du.edu
5727
5728    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
5729      * faster bin computation & slightly different binning
5730      * merged all consolidations to one part of malloc proper
5731         (eliminating old malloc_find_space & malloc_clean_bin)
5732      * Scan 2 returns chunks (not just 1)
5733      * Propagate failure in realloc if malloc returns 0
5734      * Add stuff to allow compilation on non-ANSI compilers
5735          from kpv@research.att.com
5736
5737    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
5738      * removed potential for odd address access in prev_chunk
5739      * removed dependency on getpagesize.h
5740      * misc cosmetics and a bit more internal documentation
5741      * anticosmetics: mangled names in macros to evade debugger strangeness
5742      * tested on sparc, hp-700, dec-mips, rs6000
5743          with gcc & native cc (hp, dec only) allowing
5744          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5745
5746    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
5747      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5748         structure of old version,  but most details differ.)
5749
5750*/
5751
5752