   1/*
   2  This is a version (aka dlmalloc) of malloc/free/realloc written by
   3  Doug Lea and released to the public domain, as explained at
   4  http://creativecommons.org/licenses/publicdomain.  Send questions,
   5  comments, complaints, performance data, etc to dl@cs.oswego.edu
   6
   7* Version pre-2.8.4 Mon Nov 27 11:22:37 2006    (dl at gee)
   8
   9   Note: There may be an updated version of this malloc obtainable at
  10           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
  11         Check before installing!
  12
  13* Quickstart
  14
  15  This library is all in one file to simplify the most common usage:
  16  ftp it, compile it (-O3), and link it into another program. All of
  17  the compile-time options default to reasonable values for use on
  18  most platforms.  You might later want to step through various
  19  compile-time and dynamic tuning options.
  20
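      As a concrete example, a plain build and a thread-safe, 'dl'-prefixed
      build might be done like this (a sketch; 'cc' stands for whatever
      compiler you use, and the -D flags select options described below):

        cc -O3 -c malloc.c
        cc -O3 -DUSE_DL_PREFIX -DUSE_LOCKS=1 -c malloc.c
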
  21  For convenience, an include file for code using this malloc is at:
  22     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
  23  You don't really need this .h file unless you call functions not
  24  defined in your system include files.  The .h file contains only the
  25  excerpts from this file needed for using this malloc on ANSI C/C++
  26  systems, so long as you haven't changed compile-time options about
  27  naming and tuning parameters.  If you do, then you can create your
  28  own malloc.h that does include all settings by cutting at the point
  29  indicated below. Note that you may already, by default, be using a C
  30  library containing a malloc that is based on some version of this
  31  malloc (for example in linux). You might still want to use the one
  32  in this file to customize settings or to avoid overheads associated
  33  with library versions.
  34
  35* Vital statistics:
  36
  37  Supported pointer/size_t representation:       4 or 8 bytes
  38       size_t MUST be an unsigned type of the same width as
  39       pointers. (If you are using an ancient system that declares
  40       size_t as a signed type, or need it to be a different width
  41       than pointers, you can use a previous release of this malloc
  42       (e.g. 2.7.2) supporting these.)
  43
  44  Alignment:                                     8 bytes (default)
  45       This suffices for nearly all current machines and C compilers.
  46       However, you can define MALLOC_ALIGNMENT to be wider than this
  47       if necessary (up to 128 bytes), at the expense of using more space.
  48
  49  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
  50                                          8 or 16 bytes (if 8byte sizes)
  51       Each malloced chunk has a hidden word of overhead holding size
  52       and status information, and an additional cross-check word
  53       if FOOTERS is defined.
  54
  55  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
  56                          8-byte ptrs:  32 bytes    (including overhead)
  57
  58       Even a request for zero bytes (i.e., malloc(0)) returns a
  59       pointer to something of the minimum allocatable size.
  60       The maximum overhead wastage (i.e., the number of extra bytes
  61       allocated beyond what was requested in malloc) is less than or equal
  62       to the minimum size, except for requests >= mmap_threshold that
  63       are serviced via mmap(), where the worst case wastage is about
  64       32 bytes plus the remainder from a system page (the minimal
  65       mmap unit); typically 4096 or 8192 bytes.
  66
  67  Security: static-safe; optionally more or less
  68       The "security" of malloc refers to the ability of malicious
  69       code to accentuate the effects of errors (for example, freeing
  70       space that is not currently malloc'ed or overwriting past the
  71       ends of chunks) in code that calls malloc.  This malloc
  72       guarantees not to modify any memory locations below the base of
  73       heap, i.e., static variables, even in the presence of usage
  74       errors.  The routines additionally detect most improper frees
  75       and reallocs.  All this holds as long as the static bookkeeping
  76       for malloc itself is not corrupted by some other means.  This
  77       is only one aspect of security -- these checks do not, and
  78       cannot, detect all possible programming errors.
  79
  80       If FOOTERS is defined nonzero, then each allocated chunk
  81       carries an additional check word to verify that it was malloced
  82       from its space.  These check words are the same within each
  83       execution of a program using malloc, but differ across
  84       executions, so externally crafted fake chunks cannot be
  85       freed. This improves security by rejecting frees/reallocs that
  86       could corrupt heap memory, in addition to the always-on checks
  87       preventing writes to statics.  This may further improve
  88       security, at the expense of time and space overhead.  (Note that
  89       FOOTERS may also be worth using with MSPACES.)
  90
  91       By default detected errors cause the program to abort (calling
  92       "abort()"). You can override this to instead proceed past
  93       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
  94       has no effect, and a malloc that encounters a bad address
  95       caused by user overwrites will ignore the bad address by
  96       dropping pointers and indices to all known memory. This may
  97       be appropriate for programs that should continue if at all
  98       possible in the face of programming errors, although they may
  99       run out of memory because dropped memory is never reclaimed.
 100
 101       If you don't like either of these options, you can define
 102       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
 103       else. And if you are sure that your program using malloc has
 104       no errors or vulnerabilities, you can define INSECURE to 1,
 105       which might (or might not) provide a small performance improvement.
 106
 107  Thread-safety: NOT thread-safe unless USE_LOCKS defined
 108       When USE_LOCKS is defined, each public call to malloc, free,
 109       etc is surrounded with either a pthread mutex or a win32
 110       spinlock (depending on WIN32). This is not especially fast, and
 111       can be a major bottleneck.  It is designed only to provide
 112       minimal protection in concurrent environments, and to provide a
 113       basis for extensions.  If you are using malloc in a concurrent
 114       program, consider instead using nedmalloc
 115       (http://www.nedprod.com/programs/portable/nedmalloc/) or
 116       ptmalloc (See http://www.malloc.de), which are derived
 117       from versions of this malloc.
 118
 119  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
 120       This malloc can use unix sbrk or any emulation (invoked using
 121       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
 122       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
 123       memory.  On most unix systems, it tends to work best if both
 124       MORECORE and MMAP are enabled.  On Win32, it uses emulations
 125       based on VirtualAlloc. It also uses common C library functions
 126       like memset.
 127
 128  Compliance: I believe it is compliant with the Single Unix Specification
 129       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
 130       others as well.
 131
 132* Overview of algorithms
 133
 134  This is not the fastest, most space-conserving, most portable, or
 135  most tunable malloc ever written. However it is among the fastest
 136  while also being among the most space-conserving, portable and
 137  tunable.  Consistent balance across these factors results in a good
 138  general-purpose allocator for malloc-intensive programs.
 139
 140  In most ways, this malloc is a best-fit allocator. Generally, it
 141  chooses the best-fitting existing chunk for a request, with ties
 142  broken in approximately least-recently-used order. (This strategy
 143  normally maintains low fragmentation.) However, for requests less
 144  than 256 bytes, it deviates from best-fit when there is not an
 145  exactly fitting available chunk by preferring to use space adjacent
 146  to that used for the previous small request, as well as by breaking
 147  ties in approximately most-recently-used order. (These enhance
 148  locality of series of small allocations.)  And for very large requests
 149  (>= 256Kb by default), it relies on system memory mapping
 150  facilities, if supported.  (This helps avoid carrying around and
 151  possibly fragmenting memory used only for large chunks.)
 152
 153  All operations (except malloc_stats and mallinfo) have execution
 154  times that are bounded by a constant factor of the number of bits in
 155  a size_t, not counting any clearing in calloc or copying in realloc,
 156  or actions surrounding MORECORE and MMAP that have times
 157  proportional to the number of non-contiguous regions returned by
 158  system allocation routines, which is often just 1. In real-time
 159  applications, you can optionally suppress segment traversals using
 160  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
 161  system allocators return non-contiguous spaces, at the typical
 162  expense of carrying around more memory and increased fragmentation.
 163
 164  The implementation is not very modular and seriously overuses
 165  macros. Perhaps someday all C compilers will do as good a job
 166  inlining modular code as can now be done by brute-force expansion,
 167  but now, enough of them seem not to.
 168
 169  Some compilers issue a lot of warnings about code that is
 170  dead/unreachable only on some platforms, and also about intentional
 171  uses of negation on unsigned types. All known cases of each can be
 172  ignored.
 173
 174  For a longer but out of date high-level description, see
 175     http://gee.cs.oswego.edu/dl/html/malloc.html
 176
 177* MSPACES
 178  If MSPACES is defined, then in addition to malloc, free, etc.,
 179  this file also defines mspace_malloc, mspace_free, etc. These
 180  are versions of malloc routines that take an "mspace" argument
 181  obtained using create_mspace, to control all internal bookkeeping.
 182  If ONLY_MSPACES is defined, only these versions are compiled.
 183  So if you would like to use this allocator for only some allocations,
 184  and your system malloc for others, you can compile with
 185  ONLY_MSPACES and then do something like...
 186    static mspace mymspace = create_mspace(0,0); // for example
 187    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
 188
 189  (Note: If you only need one instance of an mspace, you can instead
 190  use "USE_DL_PREFIX" to relabel the global malloc.)
 191
 192  You can similarly create thread-local allocators by storing
 193  mspaces as thread-locals. For example:
 194    static __thread mspace tlms = 0;
 195    void*  tlmalloc(size_t bytes) {
 196      if (tlms == 0) tlms = create_mspace(0, 0);
 197      return mspace_malloc(tlms, bytes);
 198    }
 199    void  tlfree(void* mem) { mspace_free(tlms, mem); }
 200
 201  Unless FOOTERS is defined, each mspace is completely independent.
 202  You cannot allocate from one and free to another (although
 203  conformance is only weakly checked, so usage errors are not always
 204  caught). If FOOTERS is defined, then each chunk carries around a tag
 205  indicating its originating mspace, and frees are directed to their
 206  originating spaces.
 207
 208 -------------------------  Compile-time options ---------------------------
 209
 210Be careful in setting #define values for numerical constants of type
 211size_t. On some systems, literal values are not automatically extended
 212to size_t precision unless they are explicitly cast. You can also
 213use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
 214
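    For example, a 64K granularity is safest written with explicit size_t
    arithmetic, as the defaults further below do (a sketch):

        #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
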
 215WIN32                    default: defined if _WIN32 defined
 216  Defining WIN32 sets up defaults for MS environment and compilers.
 217  Otherwise defaults are for unix. Beware that there seem to be some
 218  cases where this malloc might not be a pure drop-in replacement for
 219  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
 220  SetDIBits()) may be due to bugs in some video driver implementations
 221  when pixel buffers are malloc()ed, and the region spans more than
 222  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
 223  default granularity, pixel buffers may straddle virtual allocation
 224  regions more often than when using the Microsoft allocator.  You can
 225  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
 226  buffers rather than using malloc().  If this is not possible,
 227  recompile this malloc with a larger DEFAULT_GRANULARITY.
 228
 229MALLOC_ALIGNMENT         default: (size_t)8
 230  Controls the minimum alignment for malloc'ed chunks.  It must be a
 231  power of two and at least 8, even on machines for which smaller
 232  alignments would suffice. It may be defined as larger than this
 233  though. Note however that code and data structures are optimized for
 234  the case of 8-byte alignment.
 235
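      For example, to force 16-byte alignment (as is done for OSX further
      below), a sketch:

        #define MALLOC_ALIGNMENT ((size_t)16U)
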
 236MSPACES                  default: 0 (false)
 237  If true, compile in support for independent allocation spaces.
 238  This is only supported if HAVE_MMAP is true.
 239
 240ONLY_MSPACES             default: 0 (false)
 241  If true, only compile in mspace versions, not regular versions.
 242
 243USE_LOCKS                default: 0 (false)
 244  Causes each call to each public routine to be surrounded with
 245  pthread or WIN32 mutex lock/unlock. (If set true, this can be
 246  overridden on a per-mspace basis for mspace versions.) If set to a
 247  non-zero value other than 1, locks are used, but their
 248  implementation is left out, so lock functions must be supplied manually.
 249
 250USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and on x86 using gcc or MSC
 251  If true, uses custom spin locks for locking. This is currently
 252  supported only for x86 platforms using gcc or recent MS compilers.
 253  Otherwise, posix locks or win32 critical sections are used.
 254
 255FOOTERS                  default: 0
 256  If true, provide extra checking and dispatching by placing
 257  information in the footers of allocated chunks. This adds
 258  space and time overhead.
 259
 260INSECURE                 default: 0
 261  If true, omit checks for usage errors and heap space overwrites.
 262
 263USE_DL_PREFIX            default: NOT defined
 264  Causes compiler to prefix all public routines with the string 'dl'.
 265  This can be useful when you only want to use this malloc in one part
 266  of a program, using your regular system malloc elsewhere.
 267
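      For example, when compiled with USE_DL_PREFIX you can mix the two
      allocators explicitly (a sketch):

        void* a = dlmalloc(100);    // from this file
        void* b = malloc(100);      // from the system library
        dlfree(a);
        free(b);
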
 268ABORT                    default: defined as abort()
 269  Defines how to abort on failed checks.  On most systems, a failed
 270  check cannot die with an "assert" or even print an informative
 271  message, because the underlying print routines in turn call malloc,
 272  which will fail again.  Generally, the best policy is to simply call
 273  abort(). It's not very useful to do more than this because many
 274  errors due to overwriting will show up as address faults (null, odd
 275  addresses etc) rather than malloc-triggered checks, so will also
 276  abort.  Also, most compilers know that abort() does not return, so
 277  can better optimize code conditionally calling it.
 278
 279PROCEED_ON_ERROR           default: defined as 0 (false)
 280  Controls whether detected bad addresses cause them to be bypassed
 281  rather than aborting. If set, detected bad arguments to free and
 282  realloc are ignored. And all bookkeeping information is zeroed out
 283  upon a detected overwrite of freed heap space, thus losing the
 284  ability to ever return it from malloc again, but enabling the
 285  application to proceed. If PROCEED_ON_ERROR is defined, the
 286  static variable malloc_corruption_error_count is compiled in
 287  and can be examined to see if errors have occurred. This option
 288  generates slower code than the default abort policy.
 289
 290DEBUG                    default: NOT defined
 291  The DEBUG setting is mainly intended for people trying to modify
 292  this code or diagnose problems when porting to new platforms.
 293  However, it may also be able to better isolate user errors than just
 294  using runtime checks.  The assertions in the check routines spell
 295  out in more detail the assumptions and invariants underlying the
 296  algorithms.  The checking is fairly extensive, and will slow down
 297  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
 298  set will attempt to check every non-mmapped allocated and free chunk
 299  in the course of computing the summaries.
 300
 301ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
 302  Debugging assertion failures can be nearly impossible if your
 303  version of the assert macro causes malloc to be called, which will
 304  lead to a cascade of further failures, blowing the runtime stack.
 305  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
 306  which will usually make debugging easier.
 307
 308MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
 309  The action to take before "return 0" when malloc is unable to
 310  return memory because none is available.
 311
 312HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
 313  True if this system supports sbrk or an emulation of it.
 314
 315MORECORE                  default: sbrk
 316  The name of the sbrk-style system routine to call to obtain more
 317  memory.  See below for guidance on writing custom MORECORE
 318  functions. The type of the argument to sbrk/MORECORE varies across
 319  systems.  It cannot be size_t, because it supports negative
 320  arguments, so it is normally the signed type of the same width as
 321  size_t (sometimes declared as "intptr_t").  It doesn't much matter
 322  though. Internally, we only call it with arguments less than half
 323  the max value of a size_t, which should work across all reasonable
 324  possibilities, although sometimes generating compiler warnings.
 325
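      As a sketch of a hand-crafted MORECORE (all names here are
      illustrative, and a real version should follow the custom-MORECORE
      guidance given later in this file):

        static char my_heap[1 << 24];       // fixed arena, handed out in order
        static size_t my_brk = 0;
        void* my_morecore(ptrdiff_t increment) {
          if (increment >= 0 && my_brk + increment <= sizeof(my_heap)) {
            char* p = my_heap + my_brk;
            my_brk += increment;
            return p;                       // contiguous, increasing addresses
          }
          return (void*)MAX_SIZE_T;         // the failure value this file checks for
        }

      compiled with something like -DMORECORE=my_morecore -DMORECORE_CANNOT_TRIM.
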
 326MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
 327  If true, take advantage of fact that consecutive calls to MORECORE
 328  with positive arguments always return contiguous increasing
 329  addresses.  This is true of unix sbrk. It does not hurt too much to
 330  set it true anyway, since malloc copes with non-contiguities.
 331  Setting it false when MORECORE is definitely non-contiguous, however, saves
 332  the time and possibly wasted space it would otherwise take to discover this.
 333
 334MORECORE_CANNOT_TRIM      default: NOT defined
 335  True if MORECORE cannot release space back to the system when given
 336  negative arguments. This is generally necessary only if you are
 337  using a hand-crafted MORECORE function that cannot handle negative
 338  arguments.
 339
 340NO_SEGMENT_TRAVERSAL       default: 0
 341  If non-zero, suppresses traversals of memory segments
 342  returned by either MORECORE or CALL_MMAP. This disables merging
 343  of contiguous segments and the selective release of unused segments
 344  to the OS, but it bounds execution times.
 345
 346HAVE_MMAP                 default: 1 (true)
 347  True if this system supports mmap or an emulation of it.  If so, and
 348  HAVE_MORECORE is not true, MMAP is used for all system
 349  allocation. If set and HAVE_MORECORE is true as well, MMAP is
 350  primarily used to directly allocate very large blocks. It is also
 351  used as a backup strategy in cases where MORECORE fails to provide
 352  space from the system. Note: A single call to MUNMAP is assumed to be
 353  able to unmap memory that may have been allocated using multiple calls
 354  to MMAP, so long as they are adjacent.
 355
 356HAVE_MREMAP               default: 1 on linux, else 0
 357  If true realloc() uses mremap() to re-allocate large blocks and
 358  extend or shrink allocation spaces.
 359
 360MMAP_CLEARS               default: 1 except on WINCE.
 361  True if mmap clears memory so calloc doesn't need to. This is true
 362  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
 363
 364USE_BUILTIN_FFS            default: 0 (i.e., not used)
 365  Causes malloc to use the builtin ffs() function to compute indices.
 366  Some compilers may recognize and intrinsify ffs to be faster than the
 367  supplied C version. Also, the case of x86 using gcc is special-cased
 368  to an asm instruction, so is already as fast as it can be, and so
 369  this setting has no effect. Similarly for Win32 under recent MS compilers.
 370  (On most x86s, the asm version is only slightly faster than the C version.)
 371
 372malloc_getpagesize         default: derive from system includes, or 4096.
 373  The system page size. To the extent possible, this malloc manages
 374  memory from the system in page-size units.  This may be (and
 375  usually is) a function rather than a constant. This is ignored
 376  if WIN32, where page size is determined using GetSystemInfo during
 377  initialization.
 378
 379USE_DEV_RANDOM             default: 0 (i.e., not used)
 380  Causes malloc to use /dev/random to initialize secure magic seed for
 381  stamping footers. Otherwise, the current time is used.
 382
 383NO_MALLINFO                default: 0
 384  If defined, don't compile "mallinfo". This can be a simple way
 385  of dealing with mismatches between system declarations and
 386  those in this file.
 387
 388MALLINFO_FIELD_TYPE        default: size_t
 389  The type of the fields in the mallinfo struct. This was originally
 390  defined as "int" in SVID etc, but is more usefully defined as
 391  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
 392
 393REALLOC_ZERO_BYTES_FREES    default: not defined
 394  This should be set if a call to realloc with zero bytes should
 395  be the same as a call to free. Some people think it should. Otherwise,
 396  because this malloc returns a unique pointer for malloc(0),
 397  realloc(p, 0) does so as well.
 398
 399LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
 400LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
 401LACKS_STDLIB_H                default: NOT defined unless on WIN32
 402  Define these if your system does not have these header files.
 403  You might need to manually insert some of the declarations they provide.
 404
 405DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
 406                                system_info.dwAllocationGranularity in WIN32,
 407                                otherwise 64K.
 408      Also settable using mallopt(M_GRANULARITY, x)
 409  The unit for allocating and deallocating memory from the system.  On
 410  most systems with contiguous MORECORE, there is no reason to
 411  make this more than a page. However, systems with MMAP tend to
 412  either require or encourage larger granularities.  You can increase
 413  this value to prevent system allocation functions from being called so
 414  often, especially if they are slow.  The value must be at least one
 415  page and must be a power of two.  Setting to 0 causes initialization
 416  to either page size or win32 region size.  (Note: In previous
 417  versions of malloc, the equivalent of this option was called
 418  "TOP_PAD")
 419
 420DEFAULT_TRIM_THRESHOLD    default: 2MB
 421      Also settable using mallopt(M_TRIM_THRESHOLD, x)
 422  The maximum amount of unused top-most memory to keep before
 423  releasing via malloc_trim in free().  Automatic trimming is mainly
 424  useful in long-lived programs using contiguous MORECORE.  Because
 425  trimming via sbrk can be slow on some systems, and can sometimes be
 426  wasteful (in cases where programs immediately afterward allocate
 427  more large chunks) the value should be high enough so that your
 428  overall system performance would improve by releasing this much
 429  memory.  As a rough guide, you might set to a value close to the
 430  average size of a process (program) running on your system.
 431  Releasing this much memory would allow such a process to run in
 432  memory.  Generally, it is worth tuning trim thresholds when a
 433  program undergoes phases where several large chunks are allocated
 434  and released in ways that can reuse each other's storage, perhaps
 435  mixed with phases where there are no such chunks at all. The trim
 436  value must be greater than page size to have any useful effect.  To
 437  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
 438  some people use of mallocing a huge space and then freeing it at
 439  program startup, in an attempt to reserve system memory, doesn't
 440  have the intended effect under automatic trimming, since that memory
 441  will immediately be returned to the system.
 442
 443DEFAULT_MMAP_THRESHOLD       default: 256K
 444      Also settable using mallopt(M_MMAP_THRESHOLD, x)
 445  The request size threshold for using MMAP to directly service a
 446  request. Requests of at least this size that cannot be allocated
 447  using already-existing space will be serviced via mmap.  (If enough
 448  normal freed space already exists it is used instead.)  Using mmap
 449  segregates relatively large chunks of memory so that they can be
 450  individually obtained and released from the host system. A request
 451  serviced through mmap is never reused by any other request (at least
 452  not directly; the system may just so happen to remap successive
 453  requests to the same locations).  Segregating space in this way has
 454  the benefits that: Mmapped space can always be individually released
 455  back to the system, which helps keep the system level memory demands
 456  of a long-lived program low.  Also, mapped memory doesn't become
 457  `locked' between other chunks, as can happen with normally allocated
 458  chunks, which means that even trimming via malloc_trim would not
 459  release them.  However, it has the disadvantage that the space
 460  cannot be reclaimed, consolidated, and then used to service later
 461  requests, as happens with normal chunks.  The advantages of mmap
 462  nearly always outweigh disadvantages for "large" chunks, but the
 463  value of "large" may vary across systems.  The default is an
 464  empirically derived value that works well in most systems. You can
 465  disable mmap by setting to MAX_SIZE_T.
 466
 467MAX_RELEASE_CHECK_RATE   default: 4095 if HAVE_MMAP, else MAX_SIZE_T
 468  The number of consolidated frees between checks to release
 469  unused segments when freeing. When using non-contiguous segments,
 470  especially with multiple mspaces, checking only for topmost space
 471  doesn't always suffice to trigger trimming. To compensate for this,
 472  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
 473  current number of segments, if greater) try to release unused
 474  segments to the OS when freeing chunks that result in
 475  consolidation. The best value for this parameter is a compromise
 476  between slowing down frees with relatively costly checks that
 477  rarely trigger versus holding on to unused memory. To effectively
 478  disable, set to MAX_SIZE_T. This may lead to a very slight speed
 479  improvement at the expense of carrying around more memory.
 480*/
 481
 482/* Version identifier to allow people to support multiple versions */
 483#ifndef DLMALLOC_VERSION
 484#define DLMALLOC_VERSION 20804
 485#endif /* DLMALLOC_VERSION */
 486
 487#if defined(linux)
 488#define _GNU_SOURCE 1
 489#endif
 490
 491#ifndef WIN32
 492#ifdef _WIN32
 493#define WIN32 1
 494#endif  /* _WIN32 */
 495#ifdef _WIN32_WCE
 496#define LACKS_FCNTL_H
 497#define WIN32 1
 498#endif /* _WIN32_WCE */
 499#endif  /* WIN32 */
 500#ifdef WIN32
 501#define WIN32_LEAN_AND_MEAN
 502#define _WIN32_WINNT 0x403
 503#include <windows.h>
 504#define HAVE_MMAP 1
 505#define HAVE_MORECORE 0
 506#define LACKS_UNISTD_H
 507#define LACKS_SYS_PARAM_H
 508#define LACKS_SYS_MMAN_H
 509#define LACKS_STRING_H
 510#define LACKS_STRINGS_H
 511#define LACKS_SYS_TYPES_H
 512#define LACKS_ERRNO_H
 513#ifndef MALLOC_FAILURE_ACTION
 514#define MALLOC_FAILURE_ACTION
 515#endif /* MALLOC_FAILURE_ACTION */
 516#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
 517#define MMAP_CLEARS 0
 518#else
 519#define MMAP_CLEARS 1
 520#endif /* _WIN32_WCE */
 521#endif  /* WIN32 */
 522
 523#if defined(DARWIN) || defined(_DARWIN)
 524/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
 525#ifndef HAVE_MORECORE
 526#define HAVE_MORECORE 0
 527#define HAVE_MMAP 1
 528/* OSX allocators provide 16 byte alignment */
 529#ifndef MALLOC_ALIGNMENT
 530#define MALLOC_ALIGNMENT ((size_t)16U)
 531#endif
 532#endif  /* HAVE_MORECORE */
 533#endif  /* DARWIN */
 534
 535#ifndef LACKS_SYS_TYPES_H
 536#include <sys/types.h>  /* For size_t */
 537#endif  /* LACKS_SYS_TYPES_H */
 538
 539/* The maximum possible size_t value has all bits set */
 540#define MAX_SIZE_T           (~(size_t)0)
 541
 542#ifndef ONLY_MSPACES
 543#define ONLY_MSPACES 0     /* define to a value */
 544#else
 545#define ONLY_MSPACES 1
 546#endif  /* ONLY_MSPACES */
 547#ifndef MSPACES
 548#if ONLY_MSPACES
 549#define MSPACES 1
 550#else   /* ONLY_MSPACES */
 551#define MSPACES 0
 552#endif  /* ONLY_MSPACES */
 553#endif  /* MSPACES */
 554#ifndef MALLOC_ALIGNMENT
 555#define MALLOC_ALIGNMENT ((size_t)8U)
 556#endif  /* MALLOC_ALIGNMENT */
 557#ifndef FOOTERS
 558#define FOOTERS 0
 559#endif  /* FOOTERS */
 560#ifndef ABORT
 561#define ABORT  abort()
 562#endif  /* ABORT */
 563#ifndef ABORT_ON_ASSERT_FAILURE
 564#define ABORT_ON_ASSERT_FAILURE 1
 565#endif  /* ABORT_ON_ASSERT_FAILURE */
 566#ifndef PROCEED_ON_ERROR
 567#define PROCEED_ON_ERROR 0
 568#endif  /* PROCEED_ON_ERROR */
 569#ifndef USE_LOCKS
 570#define USE_LOCKS 0
 571#endif  /* USE_LOCKS */
 572#ifndef USE_SPIN_LOCKS
 573#if USE_LOCKS && ((defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))) || (defined(_MSC_VER) && _MSC_VER>=1310))
 574#define USE_SPIN_LOCKS 1
 575#else
 576#define USE_SPIN_LOCKS 0
 577#endif /* USE_LOCKS && ... */
 578#endif /* USE_SPIN_LOCKS */
 579#ifndef INSECURE
 580#define INSECURE 0
 581#endif  /* INSECURE */
 582#ifndef HAVE_MMAP
 583#define HAVE_MMAP 1
 584#endif  /* HAVE_MMAP */
 585#ifndef MMAP_CLEARS
 586#define MMAP_CLEARS 1
 587#endif  /* MMAP_CLEARS */
 588#ifndef HAVE_MREMAP
 589#ifdef linux
 590#define HAVE_MREMAP 1
 591#else   /* linux */
 592#define HAVE_MREMAP 0
 593#endif  /* linux */
 594#endif  /* HAVE_MREMAP */
 595#ifndef MALLOC_FAILURE_ACTION
 596#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
 597#endif  /* MALLOC_FAILURE_ACTION */
 598#ifndef HAVE_MORECORE
 599#if ONLY_MSPACES
 600#define HAVE_MORECORE 0
 601#else   /* ONLY_MSPACES */
 602#define HAVE_MORECORE 1
 603#endif  /* ONLY_MSPACES */
 604#endif  /* HAVE_MORECORE */
 605#if !HAVE_MORECORE
 606#define MORECORE_CONTIGUOUS 0
 607#else   /* !HAVE_MORECORE */
 608#define MORECORE_DEFAULT sbrk
 609#ifndef MORECORE_CONTIGUOUS
 610#define MORECORE_CONTIGUOUS 1
 611#endif  /* MORECORE_CONTIGUOUS */
 612#endif  /* HAVE_MORECORE */
 613#ifndef DEFAULT_GRANULARITY
 614#if (MORECORE_CONTIGUOUS || defined(WIN32))
 615#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
 616#else   /* MORECORE_CONTIGUOUS */
 617#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
 618#endif  /* MORECORE_CONTIGUOUS */
 619#endif  /* DEFAULT_GRANULARITY */
 620#ifndef DEFAULT_TRIM_THRESHOLD
 621#ifndef MORECORE_CANNOT_TRIM
 622#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
 623#else   /* MORECORE_CANNOT_TRIM */
 624#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
 625#endif  /* MORECORE_CANNOT_TRIM */
 626#endif  /* DEFAULT_TRIM_THRESHOLD */
 627#ifndef DEFAULT_MMAP_THRESHOLD
 628#if HAVE_MMAP
 629#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
 630#else   /* HAVE_MMAP */
 631#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
 632#endif  /* HAVE_MMAP */
 633#endif  /* DEFAULT_MMAP_THRESHOLD */
 634#ifndef MAX_RELEASE_CHECK_RATE
 635#if HAVE_MMAP
 636#define MAX_RELEASE_CHECK_RATE 4095
 637#else
 638#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
 639#endif /* HAVE_MMAP */
 640#endif /* MAX_RELEASE_CHECK_RATE */
 641#ifndef USE_BUILTIN_FFS
 642#define USE_BUILTIN_FFS 0
 643#endif  /* USE_BUILTIN_FFS */
 644#ifndef USE_DEV_RANDOM
 645#define USE_DEV_RANDOM 0
 646#endif  /* USE_DEV_RANDOM */
 647#ifndef NO_MALLINFO
 648#define NO_MALLINFO 0
 649#endif  /* NO_MALLINFO */
 650#ifndef MALLINFO_FIELD_TYPE
 651#define MALLINFO_FIELD_TYPE size_t
 652#endif  /* MALLINFO_FIELD_TYPE */
 653#ifndef NO_SEGMENT_TRAVERSAL
 654#define NO_SEGMENT_TRAVERSAL 0
 655#endif /* NO_SEGMENT_TRAVERSAL */
 656
 657/*
 658  mallopt tuning options.  SVID/XPG defines four standard parameter
 659  numbers for mallopt, normally defined in malloc.h.  None of these
 660  are used in this malloc, so setting them has no effect. But this
 661  malloc does support the following options.
 662*/
 663
 664#define M_TRIM_THRESHOLD     (-1)
 665#define M_GRANULARITY        (-2)
 666#define M_MMAP_THRESHOLD     (-3)
 667
 668/* ------------------------ Mallinfo declarations ------------------------ */
 669
 670#if !NO_MALLINFO
 671/*
 672  This version of malloc supports the standard SVID/XPG mallinfo
 673  routine that returns a struct containing usage properties and
 674  statistics. It should work on any system that has a
 675  /usr/include/malloc.h defining struct mallinfo.  The main
 676  declaration needed is the mallinfo struct that is returned (by-copy)
 677  by mallinfo().  The mallinfo struct contains a bunch of fields that
 678  are not even meaningful in this version of malloc.  These fields
 679  are instead filled by mallinfo() with other numbers that might be of
 680  interest.
 681
 682  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
 683  /usr/include/malloc.h file that includes a declaration of struct
 684  mallinfo.  If so, it is included; else a compliant version is
 685  declared below.  These must be precisely the same for mallinfo() to
 686  work.  The original SVID version of this struct, defined on most
 687  systems with mallinfo, declares all fields as ints. But some others
 688  define as unsigned long. If your system defines the fields using a
 689  type of different width than listed here, you MUST #include your
 690  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
 691*/
 692
 693/* #define HAVE_USR_INCLUDE_MALLOC_H */
 694
 695#ifdef HAVE_USR_INCLUDE_MALLOC_H
 696#include "/usr/include/malloc.h"
 697#else /* HAVE_USR_INCLUDE_MALLOC_H */
 698#ifndef STRUCT_MALLINFO_DECLARED
 699#define STRUCT_MALLINFO_DECLARED 1
 700struct mallinfo {
 701  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
 702  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
 703  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
 704  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
 705  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
 706  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
 707  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
 708  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
 709  MALLINFO_FIELD_TYPE fordblks; /* total free space */
 710  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
 711};
 712#endif /* STRUCT_MALLINFO_DECLARED */
 713#endif /* HAVE_USR_INCLUDE_MALLOC_H */
 714#endif /* NO_MALLINFO */
 715
 716/*
 717  Try to persuade compilers to inline. The most critical functions for
 718  inlining are defined as macros, so these aren't used for them.
 719*/
 720
 721#ifndef FORCEINLINE
 722  #if defined(__GNUC__)
 723#define FORCEINLINE __inline __attribute__ ((always_inline))
 724  #elif defined(_MSC_VER)
 725    #define FORCEINLINE __forceinline
 726  #endif
 727#endif
 728#ifndef NOINLINE
 729  #if defined(__GNUC__)
 730    #define NOINLINE __attribute__ ((noinline))
 731  #elif defined(_MSC_VER)
 732    #define NOINLINE __declspec(noinline)
 733  #else
 734    #define NOINLINE
 735  #endif
 736#endif
 737
 738#ifdef __cplusplus
 739extern "C" {
 740#ifndef FORCEINLINE
 741 #define FORCEINLINE inline
 742#endif
 743#endif /* __cplusplus */
 744#ifndef FORCEINLINE
 745 #define FORCEINLINE
 746#endif
 747
 748#if !ONLY_MSPACES
 749
 750/* ------------------- Declarations of public routines ------------------- */
 751
 752#ifndef USE_DL_PREFIX
 753#define dlcalloc               calloc
 754#define dlfree                 free
 755#define dlmalloc               malloc
 756#define dlmemalign             memalign
 757#define dlrealloc              realloc
 758#define dlvalloc               valloc
 759#define dlpvalloc              pvalloc
 760#define dlmallinfo             mallinfo
 761#define dlmallopt              mallopt
 762#define dlmalloc_trim          malloc_trim
 763#define dlmalloc_stats         malloc_stats
 764#define dlmalloc_usable_size   malloc_usable_size
 765#define dlmalloc_footprint     malloc_footprint
 766#define dlmalloc_max_footprint malloc_max_footprint
 767#define dlindependent_calloc   independent_calloc
 768#define dlindependent_comalloc independent_comalloc
 769#endif /* USE_DL_PREFIX */
 770
 771
 772/*
 773  malloc(size_t n)
 774  Returns a pointer to a newly allocated chunk of at least n bytes, or
 775  null if no space is available, in which case errno is set to ENOMEM
 776  on ANSI C systems.
 777
 778  If n is zero, malloc returns a minimum-sized chunk. (The minimum
 779  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
 780  systems.)  Note that size_t is an unsigned type, so calls with
 781  arguments that would be negative if signed are interpreted as
 782  requests for huge amounts of space, which will often fail. The
 783  maximum supported value of n differs across systems, but is in all
 784  cases less than the maximum representable value of a size_t.
 785*/
 786void* dlmalloc(size_t);
 787
 788/*
 789  free(void* p)
 790  Releases the chunk of memory pointed to by p, that had been previously
 791  allocated using malloc or a related routine such as realloc.
 792  It has no effect if p is null. If p was not malloced or already
 793  freed, free(p) will by default cause the current program to abort.
 794*/
 795void  dlfree(void*);
 796
 797/*
 798  calloc(size_t n_elements, size_t element_size);
 799  Returns a pointer to n_elements * element_size bytes, with all locations
 800  set to zero.
 801*/
 802void* dlcalloc(size_t, size_t);
 803
 804/*
 805  realloc(void* p, size_t n)
 806  Returns a pointer to a chunk of size n that contains the same data
 807  as does chunk p up to the minimum of (n, p's size) bytes, or null
 808  if no space is available.
 809
 810  The returned pointer may or may not be the same as p. The algorithm
 811  prefers extending p in most cases when possible, otherwise it
 812  employs the equivalent of a malloc-copy-free sequence.
 813
 814  If p is null, realloc is equivalent to malloc.
 815
 816  If space is not available, realloc returns null, errno is set (if on
 817  ANSI) and p is NOT freed.
 818
 819  if n is for fewer bytes than already held by p, the newly unused
 820  space is lopped off and freed if possible.  realloc with a size
 821  argument of zero (re)allocates a minimum-sized chunk.
 822
 823  The old unix realloc convention of allowing the last-free'd chunk
 824  to be used as an argument to realloc is not supported.
 825*/
 826
 827void* dlrealloc(void*, size_t);
 828
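    /*
      Because a failed realloc leaves p allocated, the usual safe idiom
      applies (a sketch; handle_out_of_memory is an illustrative name):

        void* q = realloc(p, n);
        if (q != 0)
          p = q;
        else
          handle_out_of_memory();
    */
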
 829/*
 830  memalign(size_t alignment, size_t n);
 831  Returns a pointer to a newly allocated chunk of n bytes, aligned
 832  in accord with the alignment argument.
 833
 834  The alignment argument should be a power of two. If the argument is
 835  not a power of two, the nearest greater power is used.
 836  8-byte alignment is guaranteed by normal malloc calls, so don't
 837  bother calling memalign with an argument of 8 or less.
 838
 839  Overreliance on memalign is a sure way to fragment space.
 840*/
 841void* dlmemalign(size_t, size_t);
 842
 843/*
 844  valloc(size_t n);
 845  Equivalent to memalign(pagesize, n), where pagesize is the page
 846  size of the system. If the pagesize is unknown, 4096 is used.
 847*/
 848void* dlvalloc(size_t);
 849
 850/*
 851  mallopt(int parameter_number, int parameter_value)
 852  Sets tunable parameters. The format is to provide a
 853  (parameter-number, parameter-value) pair.  mallopt then sets the
 854  corresponding parameter to the argument value if it can (i.e., so
 855  long as the value is meaningful), and returns 1 if successful else
 856  0.  To work around the fact that mallopt is specified to use int,
 857  not size_t parameters, the value -1 is specially treated as the
 858  maximum unsigned size_t value.
 859
 860  SVID/XPG/ANSI defines four standard param numbers for mallopt,
 861  normally defined in malloc.h.  None of these are used in this malloc,
 862  so setting them has no effect. But this malloc also supports other
 863  options in mallopt. See below for details.  Briefly, supported
 864  parameters are as follows (listed defaults are for "typical"
 865  configurations).
 866
 867  Symbol            param #  default    allowed param values
 868  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
 869  M_GRANULARITY        -2     page size   any power of 2 >= page size
 870  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
 871*/
 872int dlmallopt(int, int);
 873
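    /*
      For example, a sketch that raises the mmap threshold to 1MB and
      disables automatic trimming (-1 stands for the maximum size_t value):

        mallopt(M_MMAP_THRESHOLD, 1024*1024);
        mallopt(M_TRIM_THRESHOLD, -1);
    */
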
 874/*
 875  malloc_footprint();
 876  Returns the number of bytes obtained from the system.  The total
 877  number of bytes allocated by malloc, realloc etc., is less than this
 878  value. Unlike mallinfo, this function returns only a precomputed
 879  result, so can be called frequently to monitor memory consumption.
 880  Even if locks are otherwise defined, this function does not use them,
 881  so results might not be up to date.
 882*/
 883size_t dlmalloc_footprint(void);
 884
 885/*
 886  malloc_max_footprint();
 887  Returns the maximum number of bytes obtained from the system. This
 888  value will be greater than current footprint if deallocated space
 889  has been reclaimed by the system. The peak number of bytes allocated
 890  by malloc, realloc etc., is less than this value. Unlike mallinfo,
 891  this function returns only a precomputed result, so can be called
 892  frequently to monitor memory consumption.  Even if locks are
 893  otherwise defined, this function does not use them, so results might
 894  not be up to date.
 895*/
 896size_t dlmalloc_max_footprint(void);
 897
 898#if !NO_MALLINFO
 899/*
 900  mallinfo()
 901  Returns (by copy) a struct containing various summary statistics:
 902
 903  arena:     current total non-mmapped bytes allocated from system
 904  ordblks:   the number of free chunks
 905  smblks:    always zero.
 906  hblks:     current number of mmapped regions
 907  hblkhd:    total bytes held in mmapped regions
 908  usmblks:   the maximum total allocated space. This will be greater
 909                than current total if trimming has occurred.
 910  fsmblks:   always zero
 911  uordblks:  current total allocated space (normal or mmapped)
 912  fordblks:  total free space
 913  keepcost:  the maximum number of bytes that could ideally be released
 914               back to system via malloc_trim. ("ideally" means that
 915               it ignores page restrictions etc.)
 916
 917  Because these fields are ints, but internal bookkeeping may
 918  be kept as longs, the reported values may wrap around zero and
 919  thus be inaccurate.
 920*/
 921struct mallinfo dlmallinfo(void);
 922#endif /* NO_MALLINFO */
 923
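    /*
      For example, a sketch printing two of the fields described above:

        struct mallinfo mi = mallinfo();
        fprintf(stderr, "allocated: %lu  free: %lu\n",
                (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);
    */
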
 924/*
 925  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
 926
 927  independent_calloc is similar to calloc, but instead of returning a
 928  single cleared space, it returns an array of pointers to n_elements
 929  independent elements that can hold contents of size elem_size, each
 930  of which starts out cleared, and can be independently freed,
 931  realloc'ed etc. The elements are guaranteed to be adjacently
 932  allocated (this is not guaranteed to occur with multiple callocs or
 933  mallocs), which may also improve cache locality in some
 934  applications.
 935
 936  The "chunks" argument is optional (i.e., may be null, which is
 937  probably the most typical usage). If it is null, the returned array
 938  is itself dynamically allocated and should also be freed when it is
 939  no longer needed. Otherwise, the chunks array must be of at least
 940  n_elements in length. It is filled in with the pointers to the
 941  chunks.
 942
 943  In either case, independent_calloc returns this pointer array, or
 944  null if the allocation failed.  If n_elements is zero and "chunks"
 945  is null, it returns a chunk representing an array with zero elements
 946  (which should be freed if not wanted).
 947
 948  Each element must be individually freed when it is no longer
 949  needed. If you'd like to instead be able to free all at once, you
 950  should instead use regular calloc and assign pointers into this
 951  space to represent elements.  (In this case though, you cannot
 952  independently free elements.)
 953
 954  independent_calloc simplifies and speeds up implementations of many
 955  kinds of pools.  It may also be useful when constructing large data
 956  structures that initially have a fixed number of fixed-sized nodes,
 957  but the number is not known at compile time, and some of the nodes
 958  may later need to be freed. For example:
 959
 960  struct Node { int item; struct Node* next; };
 961
 962  struct Node* build_list() {
 963    struct Node** pool;
 964    int n = read_number_of_nodes_needed();
 965    if (n <= 0) return 0;
 966    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
 967    if (pool == 0) die();
 968    // organize into a linked list...
 969    struct Node* first = pool[0];
 970    for (int i = 0; i < n-1; ++i)
 971      pool[i]->next = pool[i+1];
 972    free(pool);     // Can now free the array (or not, if it is needed later)
 973    return first;
 974  }
 975*/
 976void** dlindependent_calloc(size_t, size_t, void**);
 977
 978/*
 979  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
 980
 981  independent_comalloc allocates, all at once, a set of n_elements
 982  chunks with sizes indicated in the "sizes" array.    It returns
 983  an array of pointers to these elements, each of which can be
 984  independently freed, realloc'ed etc. The elements are guaranteed to
 985  be adjacently allocated (this is not guaranteed to occur with
 986  multiple callocs or mallocs), which may also improve cache locality
 987  in some applications.
 988
 989  The "chunks" argument is optional (i.e., may be null). If it is null
 990  the returned array is itself dynamically allocated and should also
 991  be freed when it is no longer needed. Otherwise, the chunks array
 992  must be of at least n_elements in length. It is filled in with the
 993  pointers to the chunks.
 994
 995  In either case, independent_comalloc returns this pointer array, or
 996  null if the allocation failed.  If n_elements is zero and chunks is
 997  null, it returns a chunk representing an array with zero elements
 998  (which should be freed if not wanted).
 999
1000  Each element must be individually freed when it is no longer
1001  needed. If you'd like to instead be able to free all at once, you
1002  should instead use a single regular malloc, and assign pointers at
1003  particular offsets in the aggregate space. (In this case though, you
1004  cannot independently free elements.)
1005
1006  independent_comalloc differs from independent_calloc in that each
1007  element may have a different size, and also that it does not
1008  automatically clear elements.
1009
1010  independent_comalloc can be used to speed up allocation in cases
1011  where several structs or objects must always be allocated at the
1012  same time.  For example:
1013
1014  struct Head { ... }
1015  struct Foot { ... }
1016
1017  void send_message(char* msg) {
1018    int msglen = strlen(msg);
1019    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1020    void* chunks[3];
1021    if (independent_comalloc(3, sizes, chunks) == 0)
1022      die();
1023    struct Head* head = (struct Head*)(chunks[0]);
1024    char*        body = (char*)(chunks[1]);
1025    struct Foot* foot = (struct Foot*)(chunks[2]);
1026    // ...
1027  }
1028
1029  In general though, independent_comalloc is worth using only for
1030  larger values of n_elements. For small values, you probably won't
1031  detect enough difference from series of malloc calls to bother.
1032
1033  Overuse of independent_comalloc can increase overall memory usage,
1034  since it cannot reuse existing noncontiguous small chunks that
1035  might be available for some of the elements.
1036*/
1037void** dlindependent_comalloc(size_t, size_t*, void**);
1038
1039
1040/*
1041  pvalloc(size_t n);
1042  Equivalent to valloc(minimum-page-that-holds(n)), that is,
1043  round up n to nearest pagesize.
1044 */
1045void*  dlpvalloc(size_t);
1046
1047/*
1048  malloc_trim(size_t pad);
1049
1050  If possible, gives memory back to the system (via negative arguments
1051  to sbrk) if there is unused memory at the `high' end of the malloc
1052  pool or in unused MMAP segments. You can call this after freeing
1053  large blocks of memory to potentially reduce the system-level memory
1054  requirements of a program. However, it cannot guarantee to reduce
1055  memory. Under some allocation patterns, some large free blocks of
1056  memory will be locked between two used chunks, so they cannot be
1057  given back to the system.
1058
1059  The `pad' argument to malloc_trim represents the amount of free
1060  trailing space to leave untrimmed. If this argument is zero, only
1061  the minimum amount of memory to maintain internal data structures
1062  will be left. Non-zero arguments can be supplied to maintain enough
1063  trailing space to service future expected allocations without having
1064  to re-obtain memory from the system.
1065
1066  Malloc_trim returns 1 if it actually released any memory, else 0.
1067*/
1068int  dlmalloc_trim(size_t);
1069
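    /*
      For example, after freeing a large working set you might return unused
      trailing memory to the system while keeping roughly 1MB in reserve
      (a sketch):

        malloc_trim((size_t)1024 * 1024);
    */
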
1070/*
1071  malloc_stats();
1072  Prints on stderr the amount of space obtained from the system (both
1073  via sbrk and mmap), the maximum amount (which may be more than
1074  current if malloc_trim and/or munmap got called), and the current
1075  number of bytes allocated via malloc (or realloc, etc) but not yet
1076  freed. Note that this is the number of bytes allocated, not the
1077  number requested. It will be larger than the number requested
1078  because of alignment and bookkeeping overhead. Because it includes
1079  alignment wastage as being in use, this figure may be greater than
1080  zero even when no user-level chunks are allocated.
1081
1082  The reported current and maximum system memory can be inaccurate if
1083  a program makes other calls to system memory allocation functions
1084  (normally sbrk) outside of malloc.
1085
1086  malloc_stats prints only the most commonly interesting statistics.
1087  More information can be obtained by calling mallinfo.
1088*/
1089void  dlmalloc_stats(void);
1090
1091#endif /* ONLY_MSPACES */
1092
1093/*
1094  malloc_usable_size(void* p);
1095
1096  Returns the number of bytes you can actually use in
1097  an allocated chunk, which may be more than you requested (although
1098  often not) due to alignment and minimum size constraints.
1099  You can use this many bytes without worrying about
1100  overwriting other allocated objects. This is not a particularly great
1101  programming practice. malloc_usable_size can be more useful in
1102  debugging and assertions, for example:
1103
1104  p = malloc(n);
1105  assert(malloc_usable_size(p) >= 256);
1106*/
1107size_t dlmalloc_usable_size(void*);
1108
1109
1110#if MSPACES
1111
1112/*
1113  mspace is an opaque type representing an independent
1114  region of space that supports mspace_malloc, etc.
1115*/
1116typedef void* mspace;
1117
1118/*
1119  create_mspace creates and returns a new independent space with the
1120  given initial capacity, or, if 0, the default granularity size.  It
1121  returns null if there is no system memory available to create the
1122  space.  If argument locked is non-zero, the space uses a separate
1123  lock to control access. The capacity of the space will grow
1124  dynamically as needed to service mspace_malloc requests.  You can
1125  control the sizes of incremental increases of this space by
1126  compiling with a different DEFAULT_GRANULARITY or dynamically
1127  setting with mallopt(M_GRANULARITY, value).
1128*/
1129mspace create_mspace(size_t capacity, int locked);
1130
1131/*
1132  destroy_mspace destroys the given space, and attempts to return all
1133  of its memory back to the system, returning the total number of
1134  bytes freed. After destruction, the results of access to all memory
1135  used by the space become undefined.
1136*/
1137size_t destroy_mspace(mspace msp);
1138
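    /*
      A sketch of the typical lifecycle (sizes and names are illustrative):

        mspace ms = create_mspace(0, 0);    // default capacity, not locked
        if (ms != 0) {
          void* p = mspace_malloc(ms, 128);
          ...
          mspace_free(ms, p);
          destroy_mspace(ms);               // returns bytes released to the system
        }
    */
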
1139/*
1140  create_mspace_with_base uses the memory supplied as the initial base
1141  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1142  space is used for bookkeeping, so the capacity must be at least this
1143  large. (Otherwise 0 is returned.) When this initial space is
1144  exhausted, additional memory will be obtained from the system.
1145  Destroying this space will deallocate all additionally allocated
1146  space (if possible) but not the initial base.
1147*/
1148mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1149
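    /*
      For example, an mspace carved out of a static buffer (a sketch; the
      buffer name and size are illustrative, and part of the buffer is
      consumed by bookkeeping as noted above):

        static char arena[1 << 20];
        mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
    */
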
1150/*
1151  mspace_mmap_large_chunks controls whether requests for large chunks
1152  are allocated in their own mmapped regions, separate from others in
1153  this mspace. By default this is enabled, which reduces
1154  fragmentation. However, such chunks are not necessarily released to
1155  the system upon destroy_mspace.  Disabling by setting to false may
1156  increase fragmentation, but avoids leakage when relying on
1157  destroy_mspace to release all memory allocated using this space.
1158*/
1159int mspace_mmap_large_chunks(mspace msp, int enable);
1160
1161
1162/*
1163  mspace_malloc behaves as malloc, but operates within
1164  the given space.
1165*/
1166void* mspace_malloc(mspace msp, size_t bytes);
1167
1168/*
1169  mspace_free behaves as free, but operates within
1170  the given space.
1171
1172  If compiled with FOOTERS==1, mspace_free is not actually needed.
1173  free may be called instead of mspace_free because freed chunks from
1174  any space are handled by their originating spaces.
1175*/
1176void mspace_free(mspace msp, void* mem);
1177
1178/*
1179  mspace_realloc behaves as realloc, but operates within
1180  the given space.
1181
1182  If compiled with FOOTERS==1, mspace_realloc is not actually
1183  needed.  realloc may be called instead of mspace_realloc because
1184  realloced chunks from any space are handled by their originating
1185  spaces.
1186*/
1187void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1188
1189/*
1190  mspace_calloc behaves as calloc, but operates within
1191  the given space.
1192*/
1193void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1194
1195/*
1196  mspace_memalign behaves as memalign, but operates within
1197  the given space.
1198*/
1199void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1200
1201/*
1202  mspace_independent_calloc behaves as independent_calloc, but
1203  operates within the given space.
1204*/
1205void** mspace_independent_calloc(mspace msp, size_t n_elements,
1206                                 size_t elem_size, void* chunks[]);
1207
1208/*
1209  mspace_independent_comalloc behaves as independent_comalloc, but
1210  operates within the given space.
1211*/
1212void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1213                                   size_t sizes[], void* chunks[]);
1214
1215/*
1216  mspace_footprint() returns the number of bytes obtained from the
1217  system for this space.
1218*/
1219size_t mspace_footprint(mspace msp);
1220
1221/*
1222  mspace_max_footprint() returns the peak number of bytes obtained from the
1223  system for this space.
1224*/
1225size_t mspace_max_footprint(mspace msp);
1226
1227
1228#if !NO_MALLINFO
1229/*
1230  mspace_mallinfo behaves as mallinfo, but reports properties of
1231  the given space.
1232*/
1233struct mallinfo mspace_mallinfo(mspace msp);
1234#endif /* NO_MALLINFO */
1235
1236/*
1237  mspace_usable_size(void* p) behaves the same as malloc_usable_size.
1238*/
1239size_t mspace_usable_size(void* mem);
1240
1241/*
1242  mspace_malloc_stats behaves as malloc_stats, but reports
1243  properties of the given space.
1244*/
1245void mspace_malloc_stats(mspace msp);
1246
1247/*
1248  mspace_trim behaves as malloc_trim, but
1249  operates within the given space.
1250*/
1251int mspace_trim(mspace msp, size_t pad);
1252
1253/*
1254  An alias for mallopt.
1255*/
1256int mspace_mallopt(int, int);
1257
1258#endif /* MSPACES */
1259
1260#ifdef __cplusplus
1261}  /* end of extern "C" */
1262#endif /* __cplusplus */
1263
1264/*
1265  ========================================================================
1266  To make a fully customizable malloc.h header file, cut everything
1267  above this line, put into file malloc.h, edit to suit, and #include it
1268  on the next line, as well as in programs that use this malloc.
1269  ========================================================================
1270*/
1271
1272/* #include "malloc.h" */
1273
1274/*------------------------------ internal #includes ---------------------- */
1275
1276#ifdef WIN32
1277#ifndef __GNUC__
1278#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1279#endif
1280#endif /* WIN32 */
1281
1282#include <stdio.h>       /* for printing in malloc_stats */
1283
1284#ifndef LACKS_ERRNO_H
1285#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1286#endif /* LACKS_ERRNO_H */
1287#if FOOTERS
1288#include <time.h>        /* for magic initialization */
1289#endif /* FOOTERS */
1290#ifndef LACKS_STDLIB_H
1291#include <stdlib.h>      /* for abort() */
1292#endif /* LACKS_STDLIB_H */
1293#ifdef DEBUG
1294#if ABORT_ON_ASSERT_FAILURE
1295#define assert(x) if(!(x)) ABORT
1296#else /* ABORT_ON_ASSERT_FAILURE */
1297#include <assert.h>
1298#endif /* ABORT_ON_ASSERT_FAILURE */
1299#else  /* DEBUG */
1300#ifndef assert
1301#define assert(x)
1302#endif
1303#define DEBUG 0
1304#endif /* DEBUG */
1305#ifndef LACKS_STRING_H
1306#include <string.h>      /* for memset etc */
1307#endif  /* LACKS_STRING_H */
1308#if USE_BUILTIN_FFS
1309#ifndef LACKS_STRINGS_H
1310#include <strings.h>     /* for ffs */
1311#endif /* LACKS_STRINGS_H */
1312#endif /* USE_BUILTIN_FFS */
1313#if HAVE_MMAP
1314#ifndef LACKS_SYS_MMAN_H
1315#include <sys/mman.h>    /* for mmap */
1316#endif /* LACKS_SYS_MMAN_H */
1317#ifndef LACKS_FCNTL_H
1318#include <fcntl.h>
1319#endif /* LACKS_FCNTL_H */
1320#endif /* HAVE_MMAP */
1321#ifndef LACKS_UNISTD_H
1322#include <unistd.h>     /* for sbrk, sysconf */
1323#else /* LACKS_UNISTD_H */
1324#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1325extern void*     sbrk(ptrdiff_t);
1326#endif /* FreeBSD etc */
1327#endif /* LACKS_UNISTD_H */
1328
1329/* Declarations for locking */
1330#if USE_LOCKS
1331#ifndef WIN32
1332#include <pthread.h>
1333#if defined (__SVR4) && defined (__sun)  /* solaris */
1334#include <thread.h>
1335#endif /* solaris */
1336#else
1337#ifndef _M_AMD64
1338/* These are already defined on AMD64 builds */
1339#ifdef __cplusplus
1340extern "C" {
1341#endif /* __cplusplus */
1342#ifndef __MINGW32__
1343LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1344LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1345#endif
1346#ifdef __cplusplus
1347}
1348#endif /* __cplusplus */
1349#endif /* _M_AMD64 */
1350#ifndef __MINGW32__
1351#pragma intrinsic (_InterlockedCompareExchange)
1352#pragma intrinsic (_InterlockedExchange)
1353#else
1354  /* --[ start GCC compatibility ]----------------------------------------------
1355   * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic
1356   * Microsoft Visual C++ functions. Originally developed for the ReactOS
1357   * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>)
1358   * projects.
1359   *
1360   * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com>
1361   *
1362   * Permission is hereby granted, free of charge, to any person obtaining a
1363   * copy of this software and associated documentation files (the "Software"),
1364   * to deal in the Software without restriction, including without limitation
1365   * the rights to use, copy, modify, merge, publish, distribute, sublicense,
1366   * and/or sell copies of the Software, and to permit persons to whom the
1367   * Software is furnished to do so, subject to the following conditions:
1368   *
1369   * The above copyright notice and this permission notice shall be included in
1370   * all copies or substantial portions of the Software.
1371   *
1372   * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
1373   * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
1374   * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
1375   * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
1376   * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
1377   * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
1378   * DEALINGS IN THE SOFTWARE.
1379   */
1380
1381  /*** Atomic operations ***/
1382  #if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
1383    #define _ReadWriteBarrier() __sync_synchronize()
1384  #else
1385    static __inline__ __attribute__((always_inline)) long __sync_lock_test_and_set(volatile long * const Target, const long Value)
1386    {
1387      long res;
1388      __asm__ __volatile__("xchg%z0 %2, %0" : "=g" (*(Target)), "=r" (res) : "1" (Value));
1389      return res;
1390    }
1391    static void __inline__ __attribute__((always_inline)) _MemoryBarrier(void)
1392    {
1393      __asm__ __volatile__("" : : : "memory");
1394    }
1395    #define _ReadWriteBarrier() _MemoryBarrier()
1396  #endif
1397  /* BUGBUG: GCC only supports full barriers */
1398  static __inline__ __attribute__((always_inline)) long _InterlockedExchange(volatile long * const Target, const long Value)
1399  {
1400    /* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */
1401    _ReadWriteBarrier();
1402    return __sync_lock_test_and_set(Target, Value);
1403  }
1404  /* --[ end GCC compatibility ]---------------------------------------------- */
1405#endif
1406#define interlockedcompareexchange _InterlockedCompareExchange
1407#define interlockedexchange _InterlockedExchange
1408#endif /* Win32 */
1409#endif /* USE_LOCKS */
1410
1411/* Declarations for bit scanning on win32 */
1412#if defined(_MSC_VER) && _MSC_VER>=1300
1413#ifndef BitScanForward  /* Try to avoid pulling in WinNT.h */
1414#ifdef __cplusplus
1415extern "C" {
1416#endif /* __cplusplus */
1417unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1418unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1419#ifdef __cplusplus
1420}
1421#endif /* __cplusplus */
1422
1423#define BitScanForward _BitScanForward
1424#define BitScanReverse _BitScanReverse
1425#pragma intrinsic(_BitScanForward)
1426#pragma intrinsic(_BitScanReverse)
1427#endif /* BitScanForward */
1428#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1429
1430#ifndef WIN32
1431#ifndef malloc_getpagesize
1432#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1433#    ifndef _SC_PAGE_SIZE
1434#      define _SC_PAGE_SIZE _SC_PAGESIZE
1435#    endif
1436#  endif
1437#  ifdef _SC_PAGE_SIZE
1438#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1439#  else
1440#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1441       extern size_t getpagesize();
1442#      define malloc_getpagesize getpagesize()
1443#    else
1444#      ifdef WIN32 /* use supplied emulation of getpagesize */
1445#        define malloc_getpagesize getpagesize()
1446#      else
1447#        ifndef LACKS_SYS_PARAM_H
1448#          include <sys/param.h>
1449#        endif
1450#        ifdef EXEC_PAGESIZE
1451#          define malloc_getpagesize EXEC_PAGESIZE
1452#        else
1453#          ifdef NBPG
1454#            ifndef CLSIZE
1455#              define malloc_getpagesize NBPG
1456#            else
1457#              define malloc_getpagesize (NBPG * CLSIZE)
1458#            endif
1459#          else
1460#            ifdef NBPC
1461#              define malloc_getpagesize NBPC
1462#            else
1463#              ifdef PAGESIZE
1464#                define malloc_getpagesize PAGESIZE
1465#              else /* just guess */
1466#                define malloc_getpagesize ((size_t)4096U)
1467#              endif
1468#            endif
1469#          endif
1470#        endif
1471#      endif
1472#    endif
1473#  endif
1474#endif
1475#endif
1476
1477
1478
1479/* ------------------- size_t and alignment properties -------------------- */
1480
1481/* The byte and bit size of a size_t */
1482#define SIZE_T_SIZE         (sizeof(size_t))
1483#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1484
1485/* Some constants coerced to size_t */
1486/* Annoying but necessary to avoid errors on some platforms */
1487#define SIZE_T_ZERO         ((size_t)0)
1488#define SIZE_T_ONE          ((size_t)1)
1489#define SIZE_T_TWO          ((size_t)2)
1490#define SIZE_T_FOUR         ((size_t)4)
1491#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1492#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1493#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1494#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1495
1496/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1497#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1498
1499/* True if address a has acceptable alignment */
1500#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1501
1502/* the number of bytes to offset an address to align it */
1503#define align_offset(A)\
1504 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1505  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
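
/* Worked example, added for illustration: with the default 8-byte
   MALLOC_ALIGNMENT, align_offset((void*)0x1000) == 0 and
   align_offset((void*)0x100c) == 4, since 0x100c + 4 is 8-byte aligned. */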
1506
1507/* -------------------------- MMAP preliminaries ------------------------- */
1508
1509/*
1510   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1511   checks to fail so compiler optimizer can delete code rather than
1512   using so many "#if"s.
1513*/
1514
1515
1516/* MORECORE and MMAP must return MFAIL on failure */
1517#define MFAIL                ((void*)(MAX_SIZE_T))
1518#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1519
1520#if HAVE_MMAP
1521
1522#ifndef WIN32
1523#define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
1524#define MMAP_PROT            (PROT_READ|PROT_WRITE)
1525#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1526#define MAP_ANONYMOUS        MAP_ANON
1527#endif /* MAP_ANON */
1528#ifdef MAP_ANONYMOUS
1529#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1530#define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1531#else /* MAP_ANONYMOUS */
1532/*
1533   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1534   is unlikely to be needed, but is supplied just in case.
1535*/
1536#define MMAP_FLAGS           (MAP_PRIVATE)
1537static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1538#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1539           (dev_zero_fd = open("/dev/zero", O_RDWR), \
1540            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1541            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1542#endif /* MAP_ANONYMOUS */
1543
1544#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1545
1546#else /* WIN32 */
1547
1548/* Win32 MMAP via VirtualAlloc */
1549static FORCEINLINE void* win32mmap(size_t size) {
1550  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1551  return (ptr != 0)? ptr: MFAIL;
1552}
1553
1554/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1555static FORCEINLINE void* win32direct_mmap(size_t size) {
1556  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1557                           PAGE_READWRITE);
1558  return (ptr != 0)? ptr: MFAIL;
1559}
1560
1561/* This function supports releasing coalesced segments */
1562static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1563  MEMORY_BASIC_INFORMATION minfo;
1564  char* cptr = (char*)ptr;
1565  while (size) {
1566    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1567      return -1;
1568    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1569        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1570      return -1;
1571    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1572      return -1;
1573    cptr += minfo.RegionSize;
1574    size -= minfo.RegionSize;
1575  }
1576  return 0;
1577}
1578
1579#define MMAP_DEFAULT(s)             win32mmap(s)
1580#define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
1581#define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
1582#endif /* WIN32 */
1583#endif /* HAVE_MMAP */
1584
1585#if HAVE_MREMAP
1586#ifndef WIN32
1587#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1588#endif /* WIN32 */
1589#endif /* HAVE_MREMAP */
1590
1591
1592/**
1593 * Define CALL_MORECORE
1594 */
1595#if HAVE_MORECORE
1596    #ifdef MORECORE
1597        #define CALL_MORECORE(S)    MORECORE(S)
1598    #else  /* MORECORE */
1599        #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
1600    #endif /* MORECORE */
1601#else  /* HAVE_MORECORE */
1602    #define CALL_MORECORE(S)        MFAIL
1603#endif /* HAVE_MORECORE */
1604
1605/**
1606 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1607 */
1608#if HAVE_MMAP
1609    #define IS_MMAPPED_BIT          (SIZE_T_ONE)
1610    #define USE_MMAP_BIT            (SIZE_T_ONE)
1611
1612    #ifdef MMAP
1613        #define CALL_MMAP(s)        MMAP(s)
1614    #else /* MMAP */
1615        #define CALL_MMAP(s)        MMAP_DEFAULT(s)
1616    #endif /* MMAP */
1617    #ifdef MUNMAP
1618        #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
1619    #else /* MUNMAP */
1620        #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
1621    #endif /* MUNMAP */
1622    #ifdef DIRECT_MMAP
1623        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1624    #else /* DIRECT_MMAP */
1625        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1626    #endif /* DIRECT_MMAP */
1627#else  /* HAVE_MMAP */
1628    #define IS_MMAPPED_BIT          (SIZE_T_ZERO)
1629    #define USE_MMAP_BIT            (SIZE_T_ZERO)
1630
1631    #define MMAP(s)                 MFAIL
1632    #define MUNMAP(a, s)            (-1)
1633    #define DIRECT_MMAP(s)          MFAIL
1634    #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
1635    #define CALL_MMAP(s)            MMAP(s)
1636    #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
1637#endif /* HAVE_MMAP */
1638
1639/**
1640 * Define CALL_MREMAP
1641 */
1642#if HAVE_MMAP && HAVE_MREMAP
1643    #ifdef MREMAP
1644        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1645    #else /* MREMAP */
1646        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1647    #endif /* MREMAP */
1648#else  /* HAVE_MMAP && HAVE_MREMAP */
1649    #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
1650#endif /* HAVE_MMAP && HAVE_MREMAP */
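
/*
  A build can substitute its own system primitives by defining MORECORE,
  MMAP, MUNMAP and/or DIRECT_MMAP before this point; the CALL_* macros
  above then route through them instead of the defaults.  A hedged
  sketch (my_mmap and my_munmap are hypothetical wrappers supplied by
  the embedding program, not functions defined in this file):

    #define MMAP(s)        my_mmap(s)
    #define MUNMAP(a, s)   my_munmap((a), (s))

  Replacements must follow the same conventions as the defaults: MMAP
  returns MFAIL on failure, and MUNMAP returns 0 on success.
*/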
1651
1652/* mstate bit set if contiguous morecore disabled or failed */
1653#define USE_NONCONTIGUOUS_BIT (4U)
1654
1655/* segment bit set in create_mspace_with_base */
1656#define EXTERN_BIT            (8U)
1657
1658
1659/* --------------------------- Lock preliminaries ------------------------ */
1660
1661/*
1662  When locks are defined, there is one global lock, plus
1663  one per-mspace lock.
1664
1665  The global lock ensures that mparams.magic and other unique
1666  mparams values are initialized only once. It also protects
1667  sequences of calls to MORECORE.  In many cases sys_alloc requires
1668  two calls that should not be interleaved with calls by other
1669  threads.  This does not protect against direct calls to MORECORE
1670  by other threads not using this lock, so there is still code to
1671  cope as best we can with interference.
1672
1673  Per-mspace locks surround calls to malloc, free, etc.  To enable use
1674  in layered extensions, per-mspace locks are reentrant.
1675
1676  Because lock-protected regions generally have bounded times, it is
1677  OK to use the supplied simple spinlocks in the custom versions for
1678  x86.
1679
1680  If USE_LOCKS is > 1, the definitions of lock routines here are
1681  bypassed, in which case you will need to define at least
1682  INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK
1683  (which is not used in this malloc, but commonly needed in
1684  extensions.)
1685*/
1686
1687#if USE_LOCKS == 1
1688
1689#if USE_SPIN_LOCKS
1690#ifndef WIN32
1691
1692/* Custom pthread-style spin locks on x86 and x64 for gcc */
1693struct pthread_mlock_t {
1694  volatile unsigned int l;
1695  volatile unsigned int c;
1696  volatile pthread_t threadid;
1697};
1698#define MLOCK_T struct        pthread_mlock_t
1699#define CURRENT_THREAD        pthread_self()
1700#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
1701#define ACQUIRE_LOCK(sl)      pthread_acquire_lock(sl)
1702#define RELEASE_LOCK(sl)      pthread_release_lock(sl)
1703#define TRY_LOCK(sl)          pthread_try_lock(sl)
1704#define SPINS_PER_YIELD       63
1705
1706static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1707
1708static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
1709  int spins = 0;
1710  volatile unsigned int* lp = &sl->l;
1711  for (;;) {
1712    if (*lp != 0) {
1713      if (sl->threadid == CURRENT_THREAD) {
1714        ++sl->c;
1715        return 0;
1716      }
1717    }
1718    else {
1719      /* place args to cmpxchgl in locals to evade oddities in some gccs */
1720      int cmp = 0;
1721      int val = 1;
1722      int ret;
1723      __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1724                             : "=a" (ret)
1725                             : "r" (val), "m" (*(lp)), "0"(cmp)
1726                             : "memory", "cc");
1727      if (!ret) {
1728        assert(!sl->threadid);
1729        sl->c = 1;
1730        sl->threadid = CURRENT_THREAD;
1731        return 0;
1732      }
1733      if ((++spins & SPINS_PER_YIELD) == 0) {
1734#if defined (__SVR4) && defined (__sun) /* solaris */
1735        thr_yield();
1736#else
1737#if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
1738        sched_yield();
1739#else  /* no-op yield on unknown systems */
1740        ;
1741#endif /* __linux__ || __FreeBSD__ || __APPLE__ */
1742#endif /* solaris */
1743      }
1744    }
1745  }
1746}
1747
1748static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
1749  assert(sl->l != 0);
1750  assert(sl->threadid == CURRENT_THREAD);
1751  if (--sl->c == 0) {
1752    sl->threadid = 0;
1753    volatile unsigned int* lp = &sl->l;
1754    int prev = 0;
1755    int ret;
1756    __asm__ __volatile__ ("lock; xchgl %0, %1"
1757                          : "=r" (ret)
1758                          : "m" (*(lp)), "0"(prev)
1759                          : "memory");
1760  }
1761}
1762
1763static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
1764  volatile unsigned int* lp = &sl->l;
1765  if (*lp != 0) {
1766      if (sl->threadid == CURRENT_THREAD) {
1767        ++sl->c;
1768        return 1;
1769      }
1770  }
1771  else {
1772    int cmp = 0;
1773    int val = 1;
1774    int ret;
1775    __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1776                           : "=a" (ret)
1777                           : "r" (val), "m" (*(lp)), "0"(cmp)
1778                           : "memory", "cc");
1779    if (!ret) {
1780      assert(!sl->threadid);
1781      sl->c = 1;
1782      sl->threadid = CURRENT_THREAD;
1783      return 1;
1784    }
1785  }
1786  return 0;
1787}
1788
1789
1790#else /* WIN32 */
1791/* Custom win32-style spin locks on x86 and x64 for MSC */
1792struct win32_mlock_t
1793{
1794  volatile long l;
1795  volatile unsigned int c;
1796  volatile long threadid;
1797};
1798
1799#define MLOCK_T               struct win32_mlock_t
1800#define CURRENT_THREAD        win32_getcurrentthreadid()
1801#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
1802#define ACQUIRE_LOCK(sl)      win32_acquire_lock(sl)
1803#define RELEASE_LOCK(sl)      win32_release_lock(sl)
1804#define TRY_LOCK(sl)          win32_try_lock(sl)
1805#define SPINS_PER_YIELD       63
1806
1807static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1808
1809static FORCEINLINE long win32_getcurrentthreadid(void) {
1810#ifdef _MSC_VER
1811#if defined(_M_IX86)
1812  long *threadstruct=(long *)__readfsdword(0x18);
1813  long threadid=threadstruct[0x24/sizeof(long)];
1814  return threadid;
1815#elif defined(_M_X64)
1816  /* todo */
1817  return GetCurrentThreadId();
1818#else
1819  return GetCurrentThreadId();
1820#endif
1821#else
1822  return GetCurrentThreadId();
1823#endif
1824}
1825
1826static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
1827  int spins = 0;
1828  for (;;) {
1829    if (sl->l != 0) {
1830      if (sl->threadid == CURRENT_THREAD) {
1831        ++sl->c;
1832        return 0;
1833      }
1834    }
1835    else {
1836      if (!interlockedexchange(&sl->l, 1)) {
1837        assert(!sl->threadid);
1839        sl->threadid = CURRENT_THREAD;
1840        sl->c = 1;
1841        return 0;
1842      }
1843    }
1844    if ((++spins & SPINS_PER_YIELD) == 0)
1845      SleepEx(0, FALSE);
1846  }
1847}
1848
1849static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
1850  assert(sl->threadid == CURRENT_THREAD);
1851  assert(sl->l != 0);
1852  if (--sl->c == 0) {
1853    sl->threadid = 0;
1854    interlockedexchange (&sl->l, 0);
1855  }
1856}
1857
1858static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
1859  if(sl->l != 0) {
1860      if (sl->threadid == CURRENT_THREAD) {
1861        ++sl->c;
1862        return 1;
1863      }
1864  }
1865  else {
1866    if (!interlockedexchange(&sl->l, 1)){
1867      assert(!sl->threadid);
1868      sl->threadid = CURRENT_THREAD;
1869      sl->c = 1;
1870      return 1;
1871    }
1872  }
1873  return 0;
1874}
1875
1876#endif /* WIN32 */
1877#else /* USE_SPIN_LOCKS */
1878
1879#ifndef WIN32
1880/* pthreads-based locks */
1881
1882#define MLOCK_T               pthread_mutex_t
1883#define CURRENT_THREAD        pthread_self()
1884#define INITIAL_LOCK(sl)      pthread_init_lock(sl)
1885#define ACQUIRE_LOCK(sl)      pthread_mutex_lock(sl)
1886#define RELEASE_LOCK(sl)      pthread_mutex_unlock(sl)
1887#define TRY_LOCK(sl)          (!pthread_mutex_trylock(sl))
1888
1889static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
1890
1891/* Cope with old-style linux recursive lock initialization by adding */
1892/* skipped internal declaration from pthread.h */
1893#ifdef linux
1894#ifndef PTHREAD_MUTEX_RECURSIVE
1895extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
1896                                           int __kind));
1897#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
1898#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
1899#endif
1900#endif
1901
1902static int pthread_init_lock (MLOCK_T *sl) {
1903  pthread_mutexattr_t attr;
1904  if (pthread_mutexattr_init(&attr)) return 1;
1905  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
1906  if (pthread_mutex_init(sl, &attr)) return 1;
1907  if (pthread_mutexattr_destroy(&attr)) return 1;
1908  return 0;
1909}
1910
1911#else /* WIN32 */
1912/* Win32 critical sections */
1913#define MLOCK_T               CRITICAL_SECTION
1914#define CURRENT_THREAD        GetCurrentThreadId()
1915#define INITIAL_LOCK(s)       (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000))
1916#define ACQUIRE_LOCK(s)       (EnterCriticalSection(s), 0)
1917#define RELEASE_LOCK(s)       LeaveCriticalSection(s)
1918#define TRY_LOCK(s)           TryEnterCriticalSection(s)
1919#define NEED_GLOBAL_LOCK_INIT
1920
1921static MLOCK_T malloc_global_mutex;
1922static volatile long malloc_global_mutex_status;
1923
1924/* Use spin loop to initialize global lock */
1925static void init_malloc_global_mutex() {
1926  for (;;) {
1927    long stat = malloc_global_mutex_status;
1928    if (stat > 0)
1929      return;
1930    /* transition to < 0 while initializing, then to > 0 */
1931    if (stat == 0 &&
1932        interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1933      InitializeCriticalSection(&malloc_global_mutex);
1934      interlockedexchange(&malloc_global_mutex_status,1);
1935      return;
1936    }
1937    SleepEx(0, FALSE);
1938  }
1939}
1940
1941#endif /* WIN32 */
1942#endif /* USE_SPIN_LOCKS */
1943#endif /* USE_LOCKS == 1 */
1944
1945/* -----------------------  User-defined locks ------------------------ */
1946
1947#if USE_LOCKS > 1
1948/* Define your own lock implementation here */
1949/* #define INITIAL_LOCK(sl)  ... */
1950/* #define ACQUIRE_LOCK(sl)  ... */
1951/* #define RELEASE_LOCK(sl)  ... */
1952/* #define TRY_LOCK(sl) ... */
1953/* static MLOCK_T malloc_global_mutex = ... */
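/*
  One possible shape for these definitions, sketched for illustration
  only.  It mirrors the recursive pthread-mutex path above; the
  PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP initializer is a glibc
  extension, and custom_init_lock is a hypothetical helper the
  embedding program would supply:

    #define MLOCK_T           pthread_mutex_t
    #define INITIAL_LOCK(sl)  custom_init_lock(sl)
    #define ACQUIRE_LOCK(sl)  pthread_mutex_lock(sl)
    #define RELEASE_LOCK(sl)  pthread_mutex_unlock(sl)
    #define TRY_LOCK(sl)      (!pthread_mutex_trylock(sl))
    static MLOCK_T malloc_global_mutex = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;

    static int custom_init_lock(pthread_mutex_t* sl) {
      pthread_mutexattr_t attr;
      if (pthread_mutexattr_init(&attr)) return 1;
      if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
      if (pthread_mutex_init(sl, &attr)) return 1;
      return pthread_mutexattr_destroy(&attr);
    }

  Per-mspace locks must be reentrant (see the lock preliminaries above),
  so a recursive mutex type is required.
*/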
1954#endif /* USE_LOCKS > 1 */
1955
1956/* -----------------------  Lock-based state ------------------------ */
1957
1958#if USE_LOCKS
1959#define USE_LOCK_BIT               (2U)
1960#else  /* USE_LOCKS */
1961#define USE_LOCK_BIT               (0U)
1962#define INITIAL_LOCK(l)
1963#endif /* USE_LOCKS */
1964
1965#if USE_LOCKS
1966#define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
1967#define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
1968#else  /* USE_LOCKS */
1969#define ACQUIRE_MALLOC_GLOBAL_LOCK()
1970#define RELEASE_MALLOC_GLOBAL_LOCK()
1971#endif /* USE_LOCKS */
1972
1973
1974/* -----------------------  Chunk representations ------------------------ */
1975
1976/*
1977  (The following includes lightly edited explanations by Colin Plumb.)
1978
1979  The malloc_chunk declaration below is misleading (but accurate and
1980  necessary).  It declares a "view" into memory allowing access to
1981  necessary fields at known offsets from a given base.
1982
1983  Chunks of memory are maintained using a `boundary tag' method as
1984  originally described by Knuth.  (See the paper by Paul Wilson
1985  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1986  techniques.)  Sizes of free chunks are stored both in the front of
1987  each chunk and at the end.  This makes consolidating fragmented
1988  chunks into bigger chunks fast.  The head fields also hold bits
1989  representing whether chunks are free or in use.
1990
1991  Here are some pictures to make it clearer.  They are "exploded" to
1992  show that the state of a chunk can be thought of as extending from
1993  the high 31 bits of the head field of its header through the
1994  prev_foot and PINUSE_BIT bit of the following chunk header.
1995
1996  A chunk that's in use looks like:
1997
1998   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1999           | Size of previous chunk (if P = 0)                             |
2000           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2001         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2002         | Size of this chunk                                         1| +-+
2003   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2004         |                                                               |
2005         +-                                                             -+
2006         |                                                               |
2007         +-                                                             -+
2008         |                                                               :
2009         +-      size - sizeof(size_t) available payload bytes          -+
2010         :                                                               |
2011 chunk-> +-                                                             -+
2012         |                                                               |
2013         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2014       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2015       | Size of next chunk (may or may not be in use)               | +-+
2016 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2017
2018    And if it's free, it looks like this:
2019
2020   chunk-> +-                                                             -+
2021           | User payload (must be in use, or we would have merged!)       |
2022           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2023         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2024         | Size of this chunk                                         0| +-+
2025   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2026         | Next pointer                                                  |
2027         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2028         | Prev pointer                                                  |
2029         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2030         |                                                               :
2031         +-      size - sizeof(struct chunk) unused bytes               -+
2032         :                                                               |
2033 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2034         | Size of this chunk                                            |
2035         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2036       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2037       | Size of next chunk (must be in use, or we would have merged)| +-+
2038 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2039       |                                                               :
2040       +- User payload                                                -+
2041       :                                                               |
2042       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2043                                                                     |0|
2044                                                                     +-+
2045  Note that since we always merge adjacent free chunks, the chunks
2046  adjacent to a free chunk must be in use.
2047
2048  Given a pointer to a chunk (which can be derived trivially from the
2049  payload pointer) we can, in O(1) time, find out whether the adjacent
2050  chunks are free, and if so, unlink them from the lists that they
2051  are on and merge them with the current chunk.
2052
2053  Chunks always begin on even word boundaries, so the mem portion
2054  (which is returned to the user) is also on an even word boundary, and
2055  thus at least double-word aligned.
2056
2057  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2058  chunk size (which is always a multiple of two words), is an in-use
2059  bit for the *previous* chunk.  If that bit is *clear*, then the
2060  word before the current chunk size contains the previous chunk
2061  size, and can be used to find the front of the previous chunk.
2062  The very first chunk allocated always has this bit set, preventing
2063  access to non-existent (or non-owned) memory. If pinuse is set for
2064  any given chunk, then you CANNOT determine the size of the
2065  previous chunk, and might even get a memory addressing fault when
2066  trying to do so.
2067
2068  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2069  the chunk size redundantly records whether the current chunk is
2070  inuse. This redundancy enables usage checks within free and realloc,
2071  and reduces indirection when freeing and consolidating chunks.
2072
2073  Each freshly allocated chunk must have both cinuse and pinuse set.
2074  That is, each allocated chunk borders either a previously allocated
2075  and still in-use chunk, or the base of its memory arena. This is
2076  ensured by making all allocations from the `lowest' part of any
2077  found chunk.  Further, no free chunk physically borders another one,
2078  so each free chunk is known to be preceded and followed by either
2079  inuse chunks or the ends of memory.
2080
2081  Note that the `foot' of the current chunk is actually represented
2082  as the prev_foot of the NEXT chunk. This makes it easier to
2083  deal with alignments etc but can be very confusing when trying
2084  to extend or adapt this code.
2085
2086  The exceptions to all this are
2087
2088     1. The special chunk `top' is the top-most available chunk (i.e.,
2089        the one bordering the end of available memory). It is treated
2090        specially.  Top is never included in any bin, is used only if
2091        no other chunk is available, and is released back to the
2092        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
2093        the top chunk is treated as larger (and thus less well
2094        fitting) than any other available chunk.  The top chunk
2095        doesn't update its trailing size field since there is no next
2096        contiguous chunk that would have to index off it. However,
2097        space is still allocated for it (TOP_FOOT_SIZE) to enable
2098        separation or merging when space is extended.
2099
2100     2. Chunks allocated via mmap, which have the lowest-order bit
2101        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
2102        PINUSE_BIT in their head fields.  Because they are allocated
2103        one-by-one, each must carry its own prev_foot field, which is
2104        also used to hold the offset this chunk has within its mmapped
2105        region, which is needed to preserve alignment. Each mmapped
2106        chunk is trailed by the first two fields of a fake next-chunk
2107        for sake of usage checks.
2108
2109*/
2110
2111struct malloc_chunk {
2112  size_t               prev_foot;  /* Size of previous chunk (if free).  */
2113  size_t               head;       /* Size and inuse bits. */
2114  struct malloc_chunk* fd;         /* double links -- used only if free. */
2115  struct malloc_chunk* bk;
2116};
2117
2118typedef struct malloc_chunk  mchunk;
2119typedef struct malloc_chunk* mchunkptr;
2120typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
2121typedef unsigned int bindex_t;         /* Described below */
2122typedef unsigned int binmap_t;         /* Described below */
2123typedef unsigned int flag_t;           /* The type of various bit flag sets */
2124
2125/* ------------------- Chunks sizes and alignments ----------------------- */
2126
2127#define MCHUNK_SIZE         (sizeof(mchunk))
2128
2129#if FOOTERS
2130#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
2131#else /* FOOTERS */
2132#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
2133#endif /* FOOTERS */
2134
2135/* MMapped chunks need a second word of overhead ... */
2136#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2137/* ... and additional padding for fake next-chunk at foot */
2138#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
2139
2140/* The smallest size we can malloc is an aligned minimal chunk */
2141#define MIN_CHUNK_SIZE\
2142  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2143
2144/* conversion from malloc headers to user pointers, and back */
2145#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
2146#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2147/* chunk associated with aligned address A */
2148#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
2149
2150/* Bounds on request (not chunk) sizes. */
2151#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
2152#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2153
2154/* pad request bytes into a usable size */
2155#define pad_request(req) \
2156   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2157
2158/* pad request, checking for minimum (but not maximum) */
2159#define request2size(req) \
2160  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
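
/* Worked example, added for illustration: on a 32-bit build with the
   default 8-byte alignment and FOOTERS disabled, CHUNK_OVERHEAD is 4
   and MIN_CHUNK_SIZE is 16, so
     request2size(0)  == 16
     request2size(13) == (13 + 4 + 7) & ~7 == 24
     request2size(24) == (24 + 4 + 7) & ~7 == 32
*/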
2161
2162
2163/* ------------------ Operations on head and foot fields ----------------- */
2164
2165/*
2166  The head field of a chunk is or'ed with PINUSE_BIT when previous
2167  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2168  use. If the chunk was obtained with mmap, the prev_foot field has
2169  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
2170  mmapped region to the base of the chunk.
2171
2172  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2173*/
2174
2175#define PINUSE_BIT          (SIZE_T_ONE)
2176#define CINUSE_BIT          (SIZE_T_TWO)
2177#define FLAG4_BIT           (SIZE_T_FOUR)
2178#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
2179#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2180
2181/* Head value for fenceposts */
2182#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
2183
2184/* extraction of fields from head words */
2185#define cinuse(p)           ((p)->head & CINUSE_BIT)
2186#define pinuse(p)           ((p)->head & PINUSE_BIT)
2187#define chunksize(p)        ((p)->head & ~(FLAG_BITS))
2188
2189#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
2190#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
2191
2192/* Treat space at ptr +/- offset as a chunk */
2193#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
2194#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2195
2196/* Ptr to next or previous physical malloc_chunk. */
2197#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2198#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2199
2200/* extract next chunk's pinuse bit */
2201#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
2202
2203/* Get/set size at footer */
2204#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2205#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2206
2207/* Set size, pinuse bit, and foot */
2208#define set_size_and_pinuse_of_free_chunk(p, s)\
2209  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2210
2211/* Set size, pinuse bit, foot, and clear next pinuse */
2212#define set_free_with_pinuse(p, s, n)\
2213  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
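
/*
  Illustrative sketch, not used by the implementation: next_chunk() steps
  forward using the size stored in p->head; pinuse() of that next chunk
  reports whether p itself is free; and prev_chunk() is valid only when
  pinuse(p) is clear, because only then does p->prev_foot hold the
  previous chunk's size.  Assuming p points to a valid, non-mmapped
  chunk that is not the last chunk of its segment:

    static size_t surrounding_sizes(mchunkptr p) {
      size_t total = chunksize(p) + chunksize(next_chunk(p));
      if (!pinuse(p))
        total += chunksize(prev_chunk(p));
      return total;
    }
*/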
2214
2215#define is_mmapped(p)\
2216  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
2217
2218/* Get the internal overhead associated with chunk p */
2219#define overhead_for(p)\
2220 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2221
2222/* Return true if malloced space is not necessarily cleared */
2223#if MMAP_CLEARS
2224#define calloc_must_clear(p) (!is_mmapped(p))
2225#else /* MMAP_CLEARS */
2226#define calloc_must_clear(p) (1)
2227#endif /* MMAP_CLEARS */
2228
2229/* ---------------------- Overlaid data structures ----------------------- */
2230
2231/*
2232  When chunks are not in use, they are treated as nodes of either
2233  lists or trees.
2234
2235  "Small"  chunks are stored in circular doubly-linked lists, and look
2236  like this:
2237
2238    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2239            |             Size of previous chunk                            |
2240            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2241    `head:' |             Size of chunk, in bytes                         |P|
2242      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2243            |             Forward pointer to next chunk in list             |
2244            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2245            |             Back pointer to previous chunk in list            |
2246            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2247            |             Unused space (may be 0 bytes long)                .
2248            .                                                               .
2249            .                                                               |
2250nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2251    `foot:' |             Size of chunk, in bytes                           |
2252            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2253
2254  Larger chunks are kept in a form of bitwise digital trees (aka
2255  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
2256  free chunks greater than 256 bytes, their size doesn't impose any
2257  constraints on user chunk sizes.  Each node looks like:
2258
2259    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2260            |             Size of previous chunk                            |
2261            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2262    `head:' |             Size of chunk, in bytes                         |P|
2263      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2264            |             Forward pointer to next chunk of same size        |
2265            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2266            |             Back pointer to previous chunk of same size       |
2267            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2268            |             Pointer to left child (child[0])                  |
2269            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2270            |             Pointer to right child (child[1])                 |
2271            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2272            |             Pointer to parent                                 |
2273            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2274            |             bin index of this chunk                           |
2275            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2276            |             Unused space                                      .
2277            .                                                               |
2278nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2279    `foot:' |             Size of chunk, in bytes                           |
2280            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2281
2282  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
2283  of the same size are arranged in a circularly-linked list, with only
2284  the oldest chunk (the next to be used, in our FIFO ordering)
2285  actually in the tree.  (Tree members are distinguished by a non-null
2286  parent pointer.)  If a chunk with the same size as an existing node
2287  is inserted, it is linked off the existing node using pointers that
2288  work in the same way as fd/bk pointers of small chunks.
2289
2290  Each tree contains a power of 2 sized range of chunk sizes (the
2291  smallest is 0x100 <= x < 0x180), which is divided in half at each
2292  tree level, with the chunks in the smaller half of the range (0x100
2293  <= x < 0x140 for the top node) in the left subtree and the larger
2294  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
2295  done by inspecting individual bits.
2296
2297  Using these rules, each node's left subtree contains all smaller
2298  sizes than its right subtree.  However, the node at the root of each
2299  subtree has no particular ordering relationship to either.  (The
2300  dividing line between the subtree sizes is based on trie relation.)
2301  If we remove the last chunk of a given size from the interior of the
2302  tree, we need to replace it with a leaf node.  The tree ordering
2303  rules permit a node to be replaced by any leaf below it.
2304
2305  The smallest chunk in a tree (a common operation in a best-fit
2306  allocator) can be found by walking a path to the leftmost leaf in
2307  the tree.  Unlike a usual binary tree, where we follow left child
2308  pointers until we reach a null, here we follow the right child
2309  pointer any time the left one is null, until we reach a leaf with
2310  both child pointers null. The smallest chunk in the tree will be
2311  somewhere along that path.
2312
2313  The worst case number of steps to add, find, or remove a node is
2314  bounded by the number of bits differentiating chunks within
2315  bins. Under current bin calculations, this ranges from 6 up to 21
2316  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2317  is of course much better.
2318*/
2319
2320struct malloc_tree_chunk {
2321  /* The first four fields must be compatible with malloc_chunk */
2322  size_t                    prev_foot;
2323  size_t                    head;
2324  struct malloc_tree_chunk* fd;
2325  struct malloc_tree_chunk* bk;
2326
2327  struct malloc_tree_chunk* child[2];
2328  struct malloc_tree_chunk* parent;
2329  bindex_t                  index;
2330};
2331
2332typedef struct malloc_tree_chunk  tchunk;
2333typedef struct malloc_tree_chunk* tchunkptr;
2334typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2335
2336/* A little helper macro for trees */
2337#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2338
2339/* ----------------------------- Segments -------------------------------- */
2340
2341/*
2342  Each malloc space may include non-contiguous segments, held in a
2343  list headed by an embedded malloc_segment record representing the
2344  top-most space. Segments also include flags holding properties of
2345  the space. Large chunks that are directly allocated by mmap are not
2346  included in this list. They are instead independently created and
2347  destroyed without otherwise keeping track of them.
2348
2349  Segment management mainly comes into play for spaces allocated by
2350  MMAP.  Any call to MMAP might or might not return memory that is
2351  adjacent to an existing segment.  MORECORE normally contiguously
2352  extends the current space, so this space is almost always adjacent,
2353  which is simpler and faster to deal with. (This is why MORECORE is
2354  used preferentially to MMAP when both are available -- see
2355  sys_alloc.)  When allocating using MMAP, we don't use any of the
2356  hinting mechanisms (inconsistently) supported in various
2357  implementations of unix mmap, or distinguish reserving from
2358  committing memory. Instead, we just ask for space, and exploit
2359  contiguity when we get it.  It is probably possible to do
2360  better than this on some systems, but no general scheme seems
2361  to be significantly better.
2362
2363  Management entails a simpler variant of the consolidation scheme
2364  used for chunks to reduce fragmentation -- new adjacent memory is
2365  normally prepended or appended to an existing segment. However,
2366  there are limitations compared to chunk consolidation that mostly
2367  reflect the fact that segment processing is relatively infrequent
2368  (occurring only when getting memory from system) and that we
2369  don't expect to have huge numbers of segments:
2370
2371  * Segments are not indexed, so traversal requires linear scans.  (It
2372    would be possible to index these, but is not worth the extra
2373    overhead and complexity for most programs on most platforms.)
2374  * New segments are only appended to old ones when holding top-most
2375    memory; if they cannot be prepended to others, they are held in
2376    different segments.
2377
2378  Except for the top-most segment of an mstate, each segment record
2379  is kept at the tail of its segment. Segments are added by pushing
2380  segment records onto the list headed by &mstate.seg for the
2381  containing mstate.
2382
2383  Segment flags control allocation/merge/deallocation policies:
2384  * If EXTERN_BIT set, then we did not allocate this segment,
2385    and so should not try to deallocate or merge with others.
2386    (This currently holds only for the initial segment passed
2387    into create_mspace_with_base.)
2388  * If IS_MMAPPED_BIT set, the segment may be merged with
2389    other surrounding mmapped segments and trimmed/de-allocated
2390    using munmap.
2391  * If neither bit is set, then the segment was obtained using
2392    MORECORE so can be merged with surrounding MORECORE'd segments
2393    and deallocated/trimmed using MORECORE with negative arguments.
2394*/
2395
2396struct malloc_segment {
2397  char*        base;             /* base address */
2398  size_t       size;             /* allocated size */
2399  struct malloc_segment* next;   /* ptr to next segment */
2400  flag_t       sflags;           /* mmap and extern flag */
2401};
2402
2403#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
2404#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
2405
2406typedef struct malloc_segment  msegment;
2407typedef struct malloc_segment* msegmentptr;
2408
2409/* ---------------------------- malloc_state ----------------------------- */
2410
2411/*
2412   A malloc_state holds all of the bookkeeping for a space.
2413   The main fields are:
2414
2415  Top
2416    The topmost chunk of the currently active segment. Its size is
2417    cached in topsize.  The actual size of topmost space is
2418    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2419    fenceposts and segment records if necessary when getting more
2420    space from the system.  The size at which to autotrim top is
2421    cached from mparams in trim_check, except that it is disabled if
2422    an autotrim fails.
2423
2424  Designated victim (dv)
2425    This is the preferred chunk for servicing small requests that
2426    don't have exact fits.  It is normally the chunk split off most
2427    recently to service another small request.  Its size is cached in
2428    dvsize. The link fields of this chunk are not maintained since it
2429    is not kept in a bin.
2430
2431  SmallBins
2432    An array of bin headers for free chunks.  These bins hold chunks
2433    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2434    chunks of all the same size, spaced 8 bytes apart.  To simplify
2435    use in double-linked lists, each bin header acts as a malloc_chunk
2436    pointing to the real first node, if it exists (else pointing to
2437    itself).  This avoids special-casing for headers.  But to avoid
2438    waste, we allocate only the fd/bk pointers of bins, and then use
2439    repositioning tricks to treat these as the fields of a chunk.
2440
2441  TreeBins
2442    Treebins are pointers to the roots of trees holding a range of
2443    sizes. There are 2 equally spaced treebins for each power of two
2444    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2445    larger.
2446
2447  Bin maps
2448    There is one bit map for small bins ("smallmap") and one for
2449    treebins ("treemap").  Each bin sets its bit when non-empty, and
2450    clears the bit when empty.  Bit operations are then used to avoid
2451    bin-by-bin searching -- nearly all "search" is done without ever
2452    looking at bins that won't be selected.  The bit maps
2453    conservatively use 32 bits per map word, even on 64-bit systems.
2454    For a good description of some of the bit-based techniques used
2455    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2456    supplement at http://hackersdelight.org/). Many of these are
2457    intended to reduce the branchiness of paths through malloc etc, as
2458    well as to reduce the number of memory locations read or written.
2459
2460  Segments
2461    A list of segments headed by an embedded malloc_segment record
2462    representing the initial space.
2463
2464  Address check support
2465    The least_addr field is the least address ever obtained from
2466    MORECORE or MMAP. Attempted frees and reallocs of any address less
2467    than this are trapped (unless INSECURE is defined).
2468
2469  Magic tag
2470    A cross-check field that should always hold the same value as mparams.magic.
2471
2472  Flags
2473    Bits recording whether to use MMAP, locks, or contiguous MORECORE
2474
2475  Statistics
2476    Each space keeps track of current and maximum system memory
2477    obtained via MORECORE or MMAP.
2478
2479  Trim support
2480    Fields holding the amount of unused topmost memory that should trigger
2481    trimming, and a counter to force periodic scanning to release unused
2482    non-topmost segments.
2483
2484  Locking
2485    If USE_LOCKS is defined, the "mutex" lock is acquired and released
2486    around every public call using this mspace.
2487
2488  Extension support
2489    A void* pointer and a size_t field that can be used to help implement
2490    extensions to this malloc.
2491*/
2492
2493/* Bin types, widths and sizes */
2494#define NSMALLBINS        (32U)
2495#define NTREEBINS         (32U)
2496#define SMALLBIN_SHIFT    (3U)
2497#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2498#define TREEBIN_SHIFT     (8U)
2499#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2500#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2501#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
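
/*
  For illustration, with these defaults: MIN_LARGE_SIZE is 256, so free
  chunks smaller than 256 bytes live in smallbins and larger ones in
  treebins.  Smallbins are spaced SMALLBIN_WIDTH (8) bytes apart, and
  the index macros defined further below map a size to its bin by
  shifting, so a 40-byte chunk belongs to smallbin 40 >> SMALLBIN_SHIFT
  == 5.  Each treebin covers a power-of-two range of sizes, split into
  two equally spaced bins, with the last treebin catching everything
  larger.
*/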
2502
2503struct malloc_state {
2504  binmap_t   smallmap;
2505  binmap_t   treemap;
2506  size_t     dvsize;
2507  size_t     topsize;
2508  char*      least_addr;
2509  mchunkptr  dv;
2510  mchunkptr  top;
2511  size_t     trim_check;
2512  size_t     release_checks;
2513  size_t     magic;
2514  mchunkptr  smallbins[(NSMALLBINS+1)*2];
2515  tbinptr    treebins[NTREEBINS];
2516  size_t     footprint;
2517  size_t     max_footprint;
2518  flag_t     mflags;
2519#if USE_LOCKS
2520  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2521#endif /* USE_LOCKS */
2522  msegment   seg;
2523  void*      extp;      /* Unused but available for extensions */
2524  size_t     exts;
2525};
2526
2527typedef struct malloc_state*    mstate;
2528
2529/* ------------- Global malloc_state and malloc_params ------------------- */
2530
2531/*
2532  malloc_params holds global properties, including those that can be
2533  dynamically set using mallopt. There is a single instance, mparams,
2534  initialized in init_mparams. Note that the non-zeroness of "magic"
2535  also serves as an initialization flag.
2536*/
2537
2538struct malloc_params {
2539  volatile size_t magic;
2540  size_t page_size;
2541  size_t granularity;
2542  size_t mmap_threshold;
2543  size_t trim_threshold;
2544  flag_t default_mflags;
2545};
2546
2547static struct malloc_params mparams;
2548
2549/* Ensure mparams initialized */
2550#define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams()))
2551
2552#if !ONLY_MSPACES
2553
2554/* The global malloc_state used for all non-"mspace" calls */
2555static struct malloc_state _gm_;
2556#define gm                 (&_gm_)
2557#define is_global(M)       ((M) == &_gm_)
2558
2559#endif /* !ONLY_MSPACES */
2560
2561#define is_initialized(M)  ((M)->top != 0)
2562
2563/* -------------------------- system alloc setup ------------------------- */
2564
2565/* Operations on mflags */
2566
2567#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2568#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2569#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2570
2571#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2572#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2573#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2574
2575#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2576#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2577
2578#define set_lock(M,L)\
2579 ((M)->mflags = (L)?\
2580  ((M)->mflags | USE_LOCK_BIT) :\
2581  ((M)->mflags & ~USE_LOCK_BIT))
2582
2583/* page-align a size */
2584#define page_align(S)\
2585 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2586
2587/* granularity-align a size */
2588#define granularity_align(S)\
2589  (((S) + (mparams.granularity - SIZE_T_ONE))\
2590   & ~(mparams.granularity - SIZE_T_ONE))
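
/*
  Illustrative note: with a 4096-byte page, page_align(1) == 4096 and
  page_align(4096) == 4096.  granularity_align rounds up the same way,
  but to mparams.granularity, which init_mparams below sets (unless
  DEFAULT_GRANULARITY overrides it) to the Windows allocation
  granularity, typically 64K, or to the page size elsewhere.
*/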
2591
2592
2593/* For mmap, use granularity alignment on Windows, else page-align */
2594#ifdef WIN32
2595#define mmap_align(S) granularity_align(S)
2596#else
2597#define mmap_align(S) page_align(S)
2598#endif
2599
2600/* For sys_alloc, enough padding to ensure the request can be malloc'd on success */
2601#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2602
2603#define is_page_aligned(S)\
2604   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2605#define is_granularity_aligned(S)\
2606   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2607
2608/*  True if segment S holds address A */
2609#define segment_holds(S, A)\
2610  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2611
2612/* Return segment holding given address */
2613static msegmentptr segment_holding(mstate m, char* addr) {
2614  msegmentptr sp = &m->seg;
2615  for (;;) {
2616    if (addr >= sp->base && addr < sp->base + sp->size)
2617      return sp;
2618    if ((sp = sp->next) == 0)
2619      return 0;
2620  }
2621}
2622
2623/* Return true if segment contains a segment link */
2624static int has_segment_link(mstate m, msegmentptr ss) {
2625  msegmentptr sp = &m->seg;
2626  for (;;) {
2627    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2628      return 1;
2629    if ((sp = sp->next) == 0)
2630      return 0;
2631  }
2632}
2633
2634#ifndef MORECORE_CANNOT_TRIM
2635#define should_trim(M,s)  ((s) > (M)->trim_check)
2636#else  /* MORECORE_CANNOT_TRIM */
2637#define should_trim(M,s)  (0)
2638#endif /* MORECORE_CANNOT_TRIM */
2639
2640/*
2641  TOP_FOOT_SIZE is padding at the end of a segment, including space
2642  that may be needed to place segment records and fenceposts when new
2643  noncontiguous segments are added.
2644*/
2645#define TOP_FOOT_SIZE\
2646  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2647
2648
2649/* -------------------------------  Hooks -------------------------------- */
2650
2651/*
2652  PREACTION should be defined to return 0 on success, and nonzero on
2653  failure. If you are not using locking, you can redefine these to do
2654  anything you like.
2655*/
2656
2657#if USE_LOCKS
2658
2659#define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2660#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2661#else /* USE_LOCKS */
2662
2663#ifndef PREACTION
2664#define PREACTION(M) (0)
2665#endif  /* PREACTION */
2666
2667#ifndef POSTACTION
2668#define POSTACTION(M)
2669#endif  /* POSTACTION */
2670
2671#endif /* USE_LOCKS */
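
/*
  Illustrative sketch: when USE_LOCKS is 0, the #ifndef guards above let
  PREACTION/POSTACTION be supplied by the build to wrap every public
  call, e.g. to reject accidental reentrancy.  The flag below is
  hypothetical, not part of this malloc:

    static int dl_in_use;
    #define PREACTION(M)  (dl_in_use ? 1 : (dl_in_use = 1, 0))
    #define POSTACTION(M) (dl_in_use = 0)

  A nonzero PREACTION result makes the public entry points bail out
  without touching the malloc_state (malloc-style calls return 0).
*/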
2672
2673/*
2674  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2675  USAGE_ERROR_ACTION is triggered on detected bad frees and
2676  reallocs. The argument p is an address that might have triggered the
2677  fault. It is ignored by the two predefined actions, but might be
2678  useful in custom actions that try to help diagnose errors.
2679*/
2680
2681#if PROCEED_ON_ERROR
2682
2683/* A count of the number of corruption errors causing resets */
2684int malloc_corruption_error_count;
2685
2686/* default corruption action */
2687static void reset_on_error(mstate m);
2688
2689#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2690#define USAGE_ERROR_ACTION(m, p)
2691
2692#else /* PROCEED_ON_ERROR */
2693
2694#ifndef CORRUPTION_ERROR_ACTION
2695#define CORRUPTION_ERROR_ACTION(m) ABORT
2696#endif /* CORRUPTION_ERROR_ACTION */
2697
2698#ifndef USAGE_ERROR_ACTION
2699#define USAGE_ERROR_ACTION(m,p) ABORT
2700#endif /* USAGE_ERROR_ACTION */
2701
2702#endif /* PROCEED_ON_ERROR */
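
/*
  Illustrative sketch: when PROCEED_ON_ERROR is not set, both hooks honor
  the #ifndef guards above, so a build that wants a diagnostic before
  aborting on a bad free could predefine something like the following
  (hypothetical, not part of this malloc):

    #define USAGE_ERROR_ACTION(m, p) \
      (fprintf(stderr, "malloc usage error near %p\n", (void*)(p)), ABORT)

  The arguments are exactly what the call sites below pass: the mstate m
  and the address p that triggered the check.
*/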
2703
2704/* -------------------------- Debugging setup ---------------------------- */
2705
2706#if ! DEBUG
2707
2708#define check_free_chunk(M,P)
2709#define check_inuse_chunk(M,P)
2710#define check_malloced_chunk(M,P,N)
2711#define check_mmapped_chunk(M,P)
2712#define check_malloc_state(M)
2713#define check_top_chunk(M,P)
2714
2715#else /* DEBUG */
2716#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2717#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2718#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2719#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2720#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2721#define check_malloc_state(M)       do_check_malloc_state(M)
2722
2723static void   do_check_any_chunk(mstate m, mchunkptr p);
2724static void   do_check_top_chunk(mstate m, mchunkptr p);
2725static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2726static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2727static void   do_check_free_chunk(mstate m, mchunkptr p);
2728static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2729static void   do_check_tree(mstate m, tchunkptr t);
2730static void   do_check_treebin(mstate m, bindex_t i);
2731static void   do_check_smallbin(mstate m, bindex_t i);
2732static void   do_check_malloc_state(mstate m);
2733static int    bin_find(mstate m, mchunkptr x);
2734static size_t traverse_and_check(mstate m);
2735#endif /* DEBUG */
2736
2737/* ---------------------------- Indexing Bins ---------------------------- */
2738
2739#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2740#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
2741#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2742#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2743
2744/* addressing by index. See above about smallbin repositioning */
2745#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2746#define treebin_at(M,i)     (&((M)->treebins[i]))
2747
2748/* assign tree index for size S to variable I. Use x86 asm if possible  */
2749#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2750#define compute_tree_index(S, I)\
2751{\
2752  unsigned int X = S >> TREEBIN_SHIFT;\
2753  if (X == 0)\
2754    I = 0;\
2755  else if (X > 0xFFFF)\
2756    I = NTREEBINS-1;\
2757  else {\
2758    unsigned int K;\
2759    __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "rm"  (X));\
2760    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2761  }\
2762}
2763
2764#elif defined (__INTEL_COMPILER)
2765#define compute_tree_index(S, I)\
2766{\
2767  size_t X = S >> TREEBIN_SHIFT;\
2768  if (X == 0)\
2769    I = 0;\
2770  else if (X > 0xFFFF)\
2771    I = NTREEBINS-1;\
2772  else {\
2773    unsigned int K = _bit_scan_reverse (X); \
2774    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2775  }\
2776}
2777
2778#elif defined(_MSC_VER) && _MSC_VER>=1300
2779#define compute_tree_index(S, I)\
2780{\
2781  size_t X = S >> TREEBIN_SHIFT;\
2782  if (X == 0)\
2783    I = 0;\
2784  else if (X > 0xFFFF)\
2785    I = NTREEBINS-1;\
2786  else {\
2787    unsigned int K;\
2788    _BitScanReverse((DWORD *) &K, X);\
2789    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2790  }\
2791}
2792
2793#else /* GNUC */
2794#define compute_tree_index(S, I)\
2795{\
2796  size_t X = S >> TREEBIN_SHIFT;\
2797  if (X == 0)\
2798    I = 0;\
2799  else if (X > 0xFFFF)\
2800    I = NTREEBINS-1;\
2801  else {\
2802    unsigned int Y = (unsigned int)X;\
2803    unsigned int N = ((Y - 0x100) >> 16) & 8;\
2804    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2805    N += K;\
2806    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2807    K = 14 - N + ((Y <<= K) >> 15);\
2808    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2809  }\
2810}
2811#endif /* GNUC */
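
/*
  Worked example (illustrative): for chunk size S == 768 (0x300),
  X == S >> TREEBIN_SHIFT == 3, whose highest set bit is bit 1, so
  I == (1 << 1) + ((768 >> 8) & 1) == 3.  This agrees with
  minsize_for_tree_index(3) == 768 and minsize_for_tree_index(4) == 1024
  defined just below: treebin 3 holds sizes in [768, 1024).
*/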
2812
2813/* Bit representing maximum resolved size in a treebin at i */
2814#define bit_for_tree_index(i) \
2815   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2816
2817/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2818#define leftshift_for_tree_index(i) \
2819   ((i == NTREEBINS-1)? 0 : \
2820    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2821
2822/* The size of the smallest chunk held in bin with index i */
2823#define minsize_for_tree_index(i) \
2824   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2825   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2826
2827
2828/* ------------------------ Operations on bin maps ----------------------- */
2829
2830/* bit corresponding to given index */
2831#define idx2bit(i)              ((binmap_t)(1) << (i))
2832
2833/* Mark/Clear bits with given index */
2834#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2835#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2836#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2837
2838#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2839#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2840#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2841
2842/* isolate the least set bit of a bitmap */
2843#define least_bit(x)         ((x) & -(x))
2844
2845/* mask with all bits to left of least bit of x on */
2846#define left_bits(x)         ((x<<1) | -(x<<1))
2847
2848/* mask with all bits to left of or equal to least bit of x on */
2849#define same_or_left_bits(x) ((x) | -(x))
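
/*
  Illustrative note: these act on the one-bit-per-bin maps above.  For
  example, least_bit(0x34) == 0x04; left_bits(0x04) turns on every bit
  strictly above bit 2; same_or_left_bits(0x04) also keeps bit 2 itself.
  tmalloc_large below combines left_bits with the treemap to locate the
  next non-empty bin of larger sizes when the exact-fit bin is empty.
*/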
2850
2851/* index corresponding to given bit. Use x86 asm if possible */
2852
2853#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2854#define compute_bit2idx(X, I)\
2855{\
2856  unsigned int J;\
2857  __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "rm" (X));\
2858  I = (bindex_t)J;\
2859}
2860
2861#elif defined (__INTEL_COMPILER)
2862#define compute_bit2idx(X, I)\
2863{\
2864  unsigned int J;\
2865  J = _bit_scan_forward (X); \
2866  I = (bindex_t)J;\
2867}
2868
2869#elif defined(_MSC_VER) && _MSC_VER>=1300
2870#define compute_bit2idx(X, I)\
2871{\
2872  unsigned int J;\
2873  _BitScanForward((DWORD *) &J, X);\
2874  I = (bindex_t)J;\
2875}
2876
2877#elif USE_BUILTIN_FFS
2878#define compute_bit2idx(X, I) I = ffs(X)-1
2879
2880#else
2881#define compute_bit2idx(X, I)\
2882{\
2883  unsigned int Y = X - 1;\
2884  unsigned int K = Y >> (16-4) & 16;\
2885  unsigned int N = K;        Y >>= K;\
2886  N += K = Y >> (8-3) &  8;  Y >>= K;\
2887  N += K = Y >> (4-2) &  4;  Y >>= K;\
2888  N += K = Y >> (2-1) &  2;  Y >>= K;\
2889  N += K = Y >> (1-0) &  1;  Y >>= K;\
2890  I = (bindex_t)(N + Y);\
2891}
2892#endif /* GNUC */
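
/*
  Illustrative note: compute_bit2idx expects X to have exactly one bit
  set (as produced by least_bit/idx2bit) and yields that bit's position;
  e.g. X == 0x10 gives I == 4 in every variant above, whether computed
  via bsf, _BitScanForward, ffs(), or the portable shift cascade.
*/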
2893
2894
2895/* ----------------------- Runtime Check Support ------------------------- */
2896
2897/*
2898  For security, the main invariant is that malloc/free/etc never
2899  writes to a static address other than malloc_state, unless static
2900  malloc_state itself has been corrupted, which cannot occur via
2901  malloc (because of these checks). In essence this means that we
2902  believe all pointers, sizes, maps etc held in malloc_state, but
2903  check all of those linked or offsetted from other embedded data
2904  structures.  These checks are interspersed with main code in a way
2905  that tends to minimize their run-time cost.
2906
2907  When FOOTERS is defined, in addition to range checking, we also
2908  verify footer fields of inuse chunks, which can be used to guarantee
2909  that the mstate controlling malloc/free is intact.  This is a
2910  streamlined version of the approach described by William Robertson
2911  et al in "Run-time Detection of Heap-based Overflows" LISA'03
2912  http://www.usenix.org/events/lisa03/tech/robertson.html. The footer
2913  of an inuse chunk holds the xor of its mstate and a random seed,
2914  which is checked upon calls to free() and realloc().  This is
2915  (probabilistically) unguessable from outside the program, but can be
2916  computed by any code successfully malloc'ing any chunk, so does not
2917  itself provide protection against code that has already broken
2918  security through some other means.  Unlike Robertson et al, we
2919  always dynamically check addresses of all offset chunks (previous,
2920  next, etc). This turns out to be cheaper than relying on hashes.
2921*/
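
/*
  Illustrative note: with FOOTERS enabled, mark_inuse_foot (defined
  below) stores ((size_t)m ^ mparams.magic) just past each inuse chunk,
  and free/realloc recover the owning mstate with get_mstate_for and
  reject the chunk when ok_magic fails -- roughly:

    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) { USAGE_ERROR_ACTION(fm, p); return; }
*/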
2922
2923#if !INSECURE
2924/* Check if address a is at least as high as any from MORECORE or MMAP */
2925#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2926/* Check if address of next chunk n is higher than base chunk p */
2927#define ok_next(p, n)    ((char*)(p) < (char*)(n))
2928/* Check if p has its cinuse bit on */
2929#define ok_cinuse(p)     cinuse(p)
2930/* Check if p has its pinuse bit on */
2931#define ok_pinuse(p)     pinuse(p)
2932
2933#else /* !INSECURE */
2934#define ok_address(M, a) (1)
2935#define ok_next(b, n)    (1)
2936#define ok_cinuse(p)     (1)
2937#define ok_pinuse(p)     (1)
2938#endif /* !INSECURE */
2939
2940#if (FOOTERS && !INSECURE)
2941/* Check if (alleged) mstate m has expected magic field */
2942#define ok_magic(M)      ((M)->magic == mparams.magic)
2943#else  /* (FOOTERS && !INSECURE) */
2944#define ok_magic(M)      (1)
2945#endif /* (FOOTERS && !INSECURE) */
2946
2947
2948/* In gcc, use __builtin_expect to minimize impact of checks */
2949#if !INSECURE
2950#if defined(__GNUC__) && __GNUC__ >= 3
2951#define RTCHECK(e)  __builtin_expect(e, 1)
2952#else /* GNUC */
2953#define RTCHECK(e)  (e)
2954#endif /* GNUC */
2955#else /* !INSECURE */
2956#define RTCHECK(e)  (1)
2957#endif /* !INSECURE */
2958
2959/* macros to set up inuse chunks with or without footers */
2960
2961#if !FOOTERS
2962
2963#define mark_inuse_foot(M,p,s)
2964
2965/* Set cinuse bit and pinuse bit of next chunk */
2966#define set_inuse(M,p,s)\
2967  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2968  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2969
2970/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2971#define set_inuse_and_pinuse(M,p,s)\
2972  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2973  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2974
2975/* Set size, cinuse and pinuse bit of this chunk */
2976#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2977  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2978
2979#else /* FOOTERS */
2980
2981/* Set foot of inuse chunk to be xor of mstate and seed */
2982#define mark_inuse_foot(M,p,s)\
2983  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2984
2985#define get_mstate_for(p)\
2986  ((mstate)(((mchunkptr)((char*)(p) +\
2987    (chunksize(p))))->prev_foot ^ mparams.magic))
2988
2989#define set_inuse(M,p,s)\
2990  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2991  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2992  mark_inuse_foot(M,p,s))
2993
2994#define set_inuse_and_pinuse(M,p,s)\
2995  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2996  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2997 mark_inuse_foot(M,p,s))
2998
2999#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3000  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3001  mark_inuse_foot(M, p, s))
3002
3003#endif /* !FOOTERS */
3004
3005/* ---------------------------- setting mparams -------------------------- */
3006
3007/* Initialize mparams */
3008static int init_mparams(void) {
3009#ifdef NEED_GLOBAL_LOCK_INIT
3010  if (malloc_global_mutex_status <= 0)
3011    init_malloc_global_mutex();
3012#endif
3013
3014  ACQUIRE_MALLOC_GLOBAL_LOCK();
3015  if (mparams.magic == 0) {
3016    size_t magic;
3017    size_t psize;
3018    size_t gsize;
3019
3020#ifndef WIN32
3021    psize = malloc_getpagesize;
3022    gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3023#else /* WIN32 */
3024    {
3025      SYSTEM_INFO system_info;
3026      GetSystemInfo(&system_info);
3027      psize = system_info.dwPageSize;
3028      gsize = ((DEFAULT_GRANULARITY != 0)?
3029               DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3030    }
3031#endif /* WIN32 */
3032
3033    /* Sanity-check configuration:
3034       size_t must be unsigned and as wide as pointer type.
3035       ints must be at least 4 bytes.
3036       alignment must be at least 8.
3037       Alignment, min chunk size, and page size must all be powers of 2.
3038    */
3039    if ((sizeof(size_t) != sizeof(char*)) ||
3040        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
3041        (sizeof(int) < 4)  ||
3042        (MALLOC_ALIGNMENT < (size_t)8U) ||
3043        ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3044        ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
3045        ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
3046        ((psize            & (psize-SIZE_T_ONE))            != 0))
3047      ABORT;
3048
3049    mparams.granularity = gsize;
3050    mparams.page_size = psize;
3051    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3052    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3053#if MORECORE_CONTIGUOUS
3054    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3055#else  /* MORECORE_CONTIGUOUS */
3056    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3057#endif /* MORECORE_CONTIGUOUS */
3058
3059#if !ONLY_MSPACES
3060    /* Set up lock for main malloc area */
3061    gm->mflags = mparams.default_mflags;
3062    INITIAL_LOCK(&gm->mutex);
3063#endif
3064
3065#if (FOOTERS && !INSECURE)
3066    {
3067#if USE_DEV_RANDOM
3068      int fd;
3069      unsigned char buf[sizeof(size_t)];
3070      /* Try to use /dev/urandom, else fall back on using time */
3071      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3072          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3073        magic = *((size_t *) buf);
3074        close(fd);
3075      }
3076      else
3077#endif /* USE_DEV_RANDOM */
3078#ifdef WIN32
3079        magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3080#else
3081      magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3082#endif
3083      magic |= (size_t)8U;    /* ensure nonzero */
3084      magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
3085    }
3086#else /* (FOOTERS && !INSECURE) */
3087    magic = (size_t)0x58585858U;
3088#endif /* (FOOTERS && !INSECURE) */
3089
3090    mparams.magic = magic;
3091  }
3092
3093  RELEASE_MALLOC_GLOBAL_LOCK();
3094  return 1;
3095}
3096
3097/* support for mallopt */
3098static int change_mparam(int param_number, int value) {
3099  size_t val = (value == -1)? MAX_SIZE_T : (size_t)value;
3100  ensure_initialization();
3101  switch(param_number) {
3102  case M_TRIM_THRESHOLD:
3103    mparams.trim_threshold = val;
3104    return 1;
3105  case M_GRANULARITY:
3106    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3107      mparams.granularity = val;
3108      return 1;
3109    }
3110    else
3111      return 0;
3112  case M_MMAP_THRESHOLD:
3113    mparams.mmap_threshold = val;
3114    return 1;
3115  default:
3116    return 0;
3117  }
3118}
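
/*
  Illustrative usage (assuming the default dl-prefixed public names,
  where mallopt relays here):

    dlmallopt(M_GRANULARITY, 64 * 1024);    accepted only if the value is
                                            a power of two >= page size
    dlmallopt(M_MMAP_THRESHOLD, 1 << 20);   always accepted

  Changes affect only subsequent allocations; existing chunks and
  segments are left alone.
*/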
3119
3120#if DEBUG
3121/* ------------------------- Debugging Support --------------------------- */
3122
3123/* Check properties of any chunk, whether free, inuse, mmapped etc  */
3124static void do_check_any_chunk(mstate m, mchunkptr p) {
3125  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3126  assert(ok_address(m, p));
3127}
3128
3129/* Check properties of top chunk */
3130static void do_check_top_chunk(mstate m, mchunkptr p) {
3131  msegmentptr sp = segment_holding(m, (char*)p);
3132  size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3133  assert(sp != 0);
3134  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3135  assert(ok_address(m, p));
3136  assert(sz == m->topsize);
3137  assert(sz > 0);
3138  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3139  assert(pinuse(p));
3140  assert(!pinuse(chunk_plus_offset(p, sz)));
3141}
3142
3143/* Check properties of (inuse) mmapped chunks */
3144static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3145  size_t  sz = chunksize(p);
3146  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
3147  assert(is_mmapped(p));
3148  assert(use_mmap(m));
3149  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3150  assert(ok_address(m, p));
3151  assert(!is_small(sz));
3152  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3153  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3154  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3155}
3156
3157/* Check properties of inuse chunks */
3158static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3159  do_check_any_chunk(m, p);
3160  assert(cinuse(p));
3161  assert(next_pinuse(p));
3162  /* If not pinuse and not mmapped, previous chunk has OK offset */
3163  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3164  if (is_mmapped(p))
3165    do_check_mmapped_chunk(m, p);
3166}
3167
3168/* Check properties of free chunks */
3169static void do_check_free_chunk(mstate m, mchunkptr p) {
3170  size_t sz = chunksize(p);
3171  mchunkptr next = chunk_plus_offset(p, sz);
3172  do_check_any_chunk(m, p);
3173  assert(!cinuse(p));
3174  assert(!next_pinuse(p));
3175  assert (!is_mmapped(p));
3176  if (p != m->dv && p != m->top) {
3177    if (sz >= MIN_CHUNK_SIZE) {
3178      assert((sz & CHUNK_ALIGN_MASK) == 0);
3179      assert(is_aligned(chunk2mem(p)));
3180      assert(next->prev_foot == sz);
3181      assert(pinuse(p));
3182      assert (next == m->top || cinuse(next));
3183      assert(p->fd->bk == p);
3184      assert(p->bk->fd == p);
3185    }
3186    else  /* markers are always of size SIZE_T_SIZE */
3187      assert(sz == SIZE_T_SIZE);
3188  }
3189}
3190
3191/* Check properties of malloced chunks at the point they are malloced */
3192static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3193  if (mem != 0) {
3194    mchunkptr p = mem2chunk(mem);
3195    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
3196    do_check_inuse_chunk(m, p);
3197    assert((sz & CHUNK_ALIGN_MASK) == 0);
3198    assert(sz >= MIN_CHUNK_SIZE);
3199    assert(sz >= s);
3200    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3201    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3202  }
3203}
3204
3205/* Check a tree and its subtrees.  */
3206static void do_check_tree(mstate m, tchunkptr t) {
3207  tchunkptr head = 0;
3208  tchunkptr u = t;
3209  bindex_t tindex = t->index;
3210  size_t tsize = chunksize(t);
3211  bindex_t idx;
3212  compute_tree_index(tsize, idx);
3213  assert(tindex == idx);
3214  assert(tsize >= MIN_LARGE_SIZE);
3215  assert(tsize >= minsize_for_tree_index(idx));
3216  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3217
3218  do { /* traverse through chain of same-sized nodes */
3219    do_check_any_chunk(m, ((mchunkptr)u));
3220    assert(u->index == tindex);
3221    assert(chunksize(u) == tsize);
3222    assert(!cinuse(u));
3223    assert(!next_pinuse(u));
3224    assert(u->fd->bk == u);
3225    assert(u->bk->fd == u);
3226    if (u->parent == 0) {
3227      assert(u->child[0] == 0);
3228      assert(u->child[1] == 0);
3229    }
3230    else {
3231      assert(head == 0); /* only one node on chain has parent */
3232      head = u;
3233      assert(u->parent != u);
3234      assert (u->parent->child[0] == u ||
3235              u->parent->child[1] == u ||
3236              *((tbinptr*)(u->parent)) == u);
3237      if (u->child[0] != 0) {
3238        assert(u->child[0]->parent == u);
3239        assert(u->child[0] != u);
3240        do_check_tree(m, u->child[0]);
3241      }
3242      if (u->child[1] != 0) {
3243        assert(u->child[1]->parent == u);
3244        assert(u->child[1] != u);
3245        do_check_tree(m, u->child[1]);
3246      }
3247      if (u->child[0] != 0 && u->child[1] != 0) {
3248        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3249      }
3250    }
3251    u = u->fd;
3252  } while (u != t);
3253  assert(head != 0);
3254}
3255
3256/*  Check all the chunks in a treebin.  */
3257static void do_check_treebin(mstate m, bindex_t i) {
3258  tbinptr* tb = treebin_at(m, i);
3259  tchunkptr t = *tb;
3260  int empty = (m->treemap & (1U << i)) == 0;
3261  if (t == 0)
3262    assert(empty);
3263  if (!empty)
3264    do_check_tree(m, t);
3265}
3266
3267/*  Check all the chunks in a smallbin.  */
3268static void do_check_smallbin(mstate m, bindex_t i) {
3269  sbinptr b = smallbin_at(m, i);
3270  mchunkptr p = b->bk;
3271  unsigned int empty = (m->smallmap & (1U << i)) == 0;
3272  if (p == b)
3273    assert(empty);
3274  if (!empty) {
3275    for (; p != b; p = p->bk) {
3276      size_t size = chunksize(p);
3277      mchunkptr q;
3278      /* each chunk claims to be free */
3279      do_check_free_chunk(m, p);
3280      /* chunk belongs in bin */
3281      assert(small_index(size) == i);
3282      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3283      /* chunk is followed by an inuse chunk */
3284      q = next_chunk(p);
3285      if (q->head != FENCEPOST_HEAD)
3286        do_check_inuse_chunk(m, q);
3287    }
3288  }
3289}
3290
3291/* Find x in a bin. Used in other check functions. */
3292static int bin_find(mstate m, mchunkptr x) {
3293  size_t size = chunksize(x);
3294  if (is_small(size)) {
3295    bindex_t sidx = small_index(size);
3296    sbinptr b = smallbin_at(m, sidx);
3297    if (smallmap_is_marked(m, sidx)) {
3298      mchunkptr p = b;
3299      do {
3300        if (p == x)
3301          return 1;
3302      } while ((p = p->fd) != b);
3303    }
3304  }
3305  else {
3306    bindex_t tidx;
3307    compute_tree_index(size, tidx);
3308    if (treemap_is_marked(m, tidx)) {
3309      tchunkptr t = *treebin_at(m, tidx);
3310      size_t sizebits = size << leftshift_for_tree_index(tidx);
3311      while (t != 0 && chunksize(t) != size) {
3312        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3313        sizebits <<= 1;
3314      }
3315      if (t != 0) {
3316        tchunkptr u = t;
3317        do {
3318          if (u == (tchunkptr)x)
3319            return 1;
3320        } while ((u = u->fd) != t);
3321      }
3322    }
3323  }
3324  return 0;
3325}
3326
3327/* Traverse each chunk and check it; return total */
3328static size_t traverse_and_check(mstate m) {
3329  size_t sum = 0;
3330  if (is_initialized(m)) {
3331    msegmentptr s = &m->seg;
3332    sum += m->topsize + TOP_FOOT_SIZE;
3333    while (s != 0) {
3334      mchunkptr q = align_as_chunk(s->base);
3335      mchunkptr lastq = 0;
3336      assert(pinuse(q));
3337      while (segment_holds(s, q) &&
3338             q != m->top && q->head != FENCEPOST_HEAD) {
3339        sum += chunksize(q);
3340        if (cinuse(q)) {
3341          assert(!bin_find(m, q));
3342          do_check_inuse_chunk(m, q);
3343        }
3344        else {
3345          assert(q == m->dv || bin_find(m, q));
3346          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
3347          do_check_free_chunk(m, q);
3348        }
3349        lastq = q;
3350        q = next_chunk(q);
3351      }
3352      s = s->next;
3353    }
3354  }
3355  return sum;
3356}
3357
3358/* Check all properties of malloc_state. */
3359static void do_check_malloc_state(mstate m) {
3360  bindex_t i;
3361  size_t total;
3362  /* check bins */
3363  for (i = 0; i < NSMALLBINS; ++i)
3364    do_check_smallbin(m, i);
3365  for (i = 0; i < NTREEBINS; ++i)
3366    do_check_treebin(m, i);
3367
3368  if (m->dvsize != 0) { /* check dv chunk */
3369    do_check_any_chunk(m, m->dv);
3370    assert(m->dvsize == chunksize(m->dv));
3371    assert(m->dvsize >= MIN_CHUNK_SIZE);
3372    assert(bin_find(m, m->dv) == 0);
3373  }
3374
3375  if (m->top != 0) {   /* check top chunk */
3376    do_check_top_chunk(m, m->top);
3377    /*assert(m->topsize == chunksize(m->top)); redundant */
3378    assert(m->topsize > 0);
3379    assert(bin_find(m, m->top) == 0);
3380  }
3381
3382  total = traverse_and_check(m);
3383  assert(total <= m->footprint);
3384  assert(m->footprint <= m->max_footprint);
3385}
3386#endif /* DEBUG */
3387
3388/* ----------------------------- statistics ------------------------------ */
3389
3390#if !NO_MALLINFO
3391static struct mallinfo internal_mallinfo(mstate m) {
3392  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3393  ensure_initialization();
3394  if (!PREACTION(m)) {
3395    check_malloc_state(m);
3396    if (is_initialized(m)) {
3397      size_t nfree = SIZE_T_ONE; /* top always free */
3398      size_t mfree = m->topsize + TOP_FOOT_SIZE;
3399      size_t sum = mfree;
3400      msegmentptr s = &m->seg;
3401      while (s != 0) {
3402        mchunkptr q = align_as_chunk(s->base);
3403        while (segment_holds(s, q) &&
3404               q != m->top && q->head != FENCEPOST_HEAD) {
3405          size_t sz = chunksize(q);
3406          sum += sz;
3407          if (!cinuse(q)) {
3408            mfree += sz;
3409            ++nfree;
3410          }
3411          q = next_chunk(q);
3412        }
3413        s = s->next;
3414      }
3415
3416      nm.arena    = sum;
3417      nm.ordblks  = nfree;
3418      nm.hblkhd   = m->footprint - sum;
3419      nm.usmblks  = m->max_footprint;
3420      nm.uordblks = m->footprint - mfree;
3421      nm.fordblks = mfree;
3422      nm.keepcost = m->topsize;
3423    }
3424
3425    POSTACTION(m);
3426  }
3427  return nm;
3428}
3429#endif /* !NO_MALLINFO */
3430
3431static void internal_malloc_stats(mstate m) {
3432  ensure_initialization();
3433  if (!PREACTION(m)) {
3434    size_t maxfp = 0;
3435    size_t fp = 0;
3436    size_t used = 0;
3437    check_malloc_state(m);
3438    if (is_initialized(m)) {
3439      msegmentptr s = &m->seg;
3440      maxfp = m->max_footprint;
3441      fp = m->footprint;
3442      used = fp - (m->topsize + TOP_FOOT_SIZE);
3443
3444      while (s != 0) {
3445        mchunkptr q = align_as_chunk(s->base);
3446        while (segment_holds(s, q) &&
3447               q != m->top && q->head != FENCEPOST_HEAD) {
3448          if (!cinuse(q))
3449            used -= chunksize(q);
3450          q = next_chunk(q);
3451        }
3452        s = s->next;
3453      }
3454    }
3455
3456    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3457    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
3458    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
3459
3460    POSTACTION(m);
3461  }
3462}
3463
3464/* ----------------------- Operations on smallbins ----------------------- */
3465
3466/*
3467  Various forms of linking and unlinking are defined as macros, even
3468  the ones for trees, which are very long but have very short typical
3469  paths.  This is ugly but reduces reliance on the inlining support of
3470  compilers.
3471*/
3472
3473/* Link a free chunk into a smallbin  */
3474#define insert_small_chunk(M, P, S) {\
3475  bindex_t I  = small_index(S);\
3476  mchunkptr B = smallbin_at(M, I);\
3477  mchunkptr F = B;\
3478  assert(S >= MIN_CHUNK_SIZE);\
3479  if (!smallmap_is_marked(M, I))\
3480    mark_smallmap(M, I);\
3481  else if (RTCHECK(ok_address(M, B->fd)))\
3482    F = B->fd;\
3483  else {\
3484    CORRUPTION_ERROR_ACTION(M);\
3485  }\
3486  B->fd = P;\
3487  F->bk = P;\
3488  P->fd = F;\
3489  P->bk = B;\
3490}
3491
3492/* Unlink a chunk from a smallbin  */
3493#define unlink_small_chunk(M, P, S) {\
3494  mchunkptr F = P->fd;\
3495  mchunkptr B = P->bk;\
3496  bindex_t I = small_index(S);\
3497  assert(P != B);\
3498  assert(P != F);\
3499  assert(chunksize(P) == small_index2size(I));\
3500  if (F == B)\
3501    clear_smallmap(M, I);\
3502  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3503                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3504    F->bk = B;\
3505    B->fd = F;\
3506  }\
3507  else {\
3508    CORRUPTION_ERROR_ACTION(M);\
3509  }\
3510}
3511
3512/* Unlink the first chunk from a smallbin */
3513#define unlink_first_small_chunk(M, B, P, I) {\
3514  mchunkptr F = P->fd;\
3515  assert(P != B);\
3516  assert(P != F);\
3517  assert(chunksize(P) == small_index2size(I));\
3518  if (B == F)\
3519    clear_smallmap(M, I);\
3520  else if (RTCHECK(ok_address(M, F))) {\
3521    B->fd = F;\
3522    F->bk = B;\
3523  }\
3524  else {\
3525    CORRUPTION_ERROR_ACTION(M);\
3526  }\
3527}
3528
3529
3530
3531/* Replace dv node, binning the old one */
3532/* Used only when dvsize known to be small */
3533#define replace_dv(M, P, S) {\
3534  size_t DVS = M->dvsize;\
3535  if (DVS != 0) {\
3536    mchunkptr DV = M->dv;\
3537    assert(is_small(DVS));\
3538    insert_small_chunk(M, DV, DVS);\
3539  }\
3540  M->dvsize = S;\
3541  M->dv = P;\
3542}
3543
3544/* ------------------------- Operations on trees ------------------------- */
3545
3546/* Insert chunk into tree */
3547#define insert_large_chunk(M, X, S) {\
3548  tbinptr* H;\
3549  bindex_t I;\
3550  compute_tree_index(S, I);\
3551  H = treebin_at(M, I);\
3552  X->index = I;\
3553  X->child[0] = X->child[1] = 0;\
3554  if (!treemap_is_marked(M, I)) {\
3555    mark_treemap(M, I);\
3556    *H = X;\
3557    X->parent = (tchunkptr)H;\
3558    X->fd = X->bk = X;\
3559  }\
3560  else {\
3561    tchunkptr T = *H;\
3562    size_t K = S << leftshift_for_tree_index(I);\
3563    for (;;) {\
3564      if (chunksize(T) != S) {\
3565        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3566        K <<= 1;\
3567        if (*C != 0)\
3568          T = *C;\
3569        else if (RTCHECK(ok_address(M, C))) {\
3570          *C = X;\
3571          X->parent = T;\
3572          X->fd = X->bk = X;\
3573          break;\
3574        }\
3575        else {\
3576          CORRUPTION_ERROR_ACTION(M);\
3577          break;\
3578        }\
3579      }\
3580      else {\
3581        tchunkptr F = T->fd;\
3582        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3583          T->fd = F->bk = X;\
3584          X->fd = F;\
3585          X->bk = T;\
3586          X->parent = 0;\
3587          break;\
3588        }\
3589        else {\
3590          CORRUPTION_ERROR_ACTION(M);\
3591          break;\
3592        }\
3593      }\
3594    }\
3595  }\
3596}
3597
3598/*
3599  Unlink steps:
3600
3601  1. If x is a chained node, unlink it from its same-sized fd/bk links
3602     and choose its bk node as its replacement.
3603  2. If x was the last node of its size, but not a leaf node, it must
3604     be replaced with a leaf node (not merely one with an open left or
3605     right), to make sure that lefts and rights of descendants
3606     correspond properly to bit masks.  We use the rightmost descendant
3607     of x.  We could use any other leaf, but this is easy to locate and
3608     tends to counteract removal of leftmosts elsewhere, and so keeps
3609     paths shorter than minimally guaranteed.  This doesn't loop much
3610     because on average a node in a tree is near the bottom.
3611  3. If x is the base of a chain (i.e., has parent links) relink
3612     x's parent and children to x's replacement (or null if none).
3613*/
3614
3615#define unlink_large_chunk(M, X) {\
3616  tchunkptr XP = X->parent;\
3617  tchunkptr R;\
3618  if (X->bk != X) {\
3619    tchunkptr F = X->fd;\
3620    R = X->bk;\
3621    if (RTCHECK(ok_address(M, F))) {\
3622      F->bk = R;\
3623      R->fd = F;\
3624    }\
3625    else {\
3626      CORRUPTION_ERROR_ACTION(M);\
3627    }\
3628  }\
3629  else {\
3630    tchunkptr* RP;\
3631    if (((R = *(RP = &(X->child[1]))) != 0) ||\
3632        ((R = *(RP = &(X->child[0]))) != 0)) {\
3633      tchunkptr* CP;\
3634      while ((*(CP = &(R->child[1])) != 0) ||\
3635             (*(CP = &(R->child[0])) != 0)) {\
3636        R = *(RP = CP);\
3637      }\
3638      if (RTCHECK(ok_address(M, RP)))\
3639        *RP = 0;\
3640      else {\
3641        CORRUPTION_ERROR_ACTION(M);\
3642      }\
3643    }\
3644  }\
3645  if (XP != 0) {\
3646    tbinptr* H = treebin_at(M, X->index);\
3647    if (X == *H) {\
3648      if ((*H = R) == 0) \
3649        clear_treemap(M, X->index);\
3650    }\
3651    else if (RTCHECK(ok_address(M, XP))) {\
3652      if (XP->child[0] == X) \
3653        XP->child[0] = R;\
3654      else \
3655        XP->child[1] = R;\
3656    }\
3657    else\
3658      CORRUPTION_ERROR_ACTION(M);\
3659    if (R != 0) {\
3660      if (RTCHECK(ok_address(M, R))) {\
3661        tchunkptr C0, C1;\
3662        R->parent = XP;\
3663        if ((C0 = X->child[0]) != 0) {\
3664          if (RTCHECK(ok_address(M, C0))) {\
3665            R->child[0] = C0;\
3666            C0->parent = R;\
3667          }\
3668          else\
3669            CORRUPTION_ERROR_ACTION(M);\
3670        }\
3671        if ((C1 = X->child[1]) != 0) {\
3672          if (RTCHECK(ok_address(M, C1))) {\
3673            R->child[1] = C1;\
3674            C1->parent = R;\
3675          }\
3676          else\
3677            CORRUPTION_ERROR_ACTION(M);\
3678        }\
3679      }\
3680      else\
3681        CORRUPTION_ERROR_ACTION(M);\
3682    }\
3683  }\
3684}
3685
3686/* Relays to large vs small bin operations */
3687
3688#define insert_chunk(M, P, S)\
3689  if (is_small(S)) insert_small_chunk(M, P, S)\
3690  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3691
3692#define unlink_chunk(M, P, S)\
3693  if (is_small(S)) unlink_small_chunk(M, P, S)\
3694  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3695
3696
3697/* Relays to internal calls to malloc/free from realloc, memalign etc */
3698
3699#if ONLY_MSPACES
3700#define internal_malloc(m, b) mspace_malloc(m, b)
3701#define internal_free(m, mem) mspace_free(m,mem);
3702#else /* ONLY_MSPACES */
3703#if MSPACES
3704#define internal_malloc(m, b)\
3705   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3706#define internal_free(m, mem)\
3707   if (m == gm) dlfree(mem); else mspace_free(m,mem);
3708#else /* MSPACES */
3709#define internal_malloc(m, b) dlmalloc(b)
3710#define internal_free(m, mem) dlfree(mem)
3711#endif /* MSPACES */
3712#endif /* ONLY_MSPACES */
3713
3714/* -----------------------  Direct-mmapping chunks ----------------------- */
3715
3716/*
3717  Directly mmapped chunks are set up with an offset to the start of
3718  the mmapped region stored in the prev_foot field of the chunk. This
3719  allows reconstruction of the required argument to MUNMAP when freed,
3720  and also allows adjustment of the returned chunk to meet alignment
3721  requirements (especially in memalign).  There is also enough space
3722  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3723  the PINUSE bit so frees can be checked.
3724*/
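
/*
  Illustrative layout of a directly mmapped chunk as built by mmap_alloc
  below (addresses increase to the right):

    mm               p == mm + offset                        mm + mmsize
    |-- offset pad --|[prev_foot = offset|IS_MMAPPED_BIT]
                      [head = psize|CINUSE_BIT][ user memory ... ]
                      [FENCEPOST_HEAD][0]

  prev_foot records how far p lies past the mapping base, so that free
  and mremap can reconstruct the original (mm, mmsize) region.
*/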
3725
3726/* Malloc using mmap */
3727static void* mmap_alloc(mstate m, size_t nb) {
3728  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3729  if (mmsize > nb) {     /* Check for wrap around 0 */
3730    char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3731    if (mm != CMFAIL) {
3732      size_t offset = align_offset(chunk2mem(mm));
3733      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3734      mchunkptr p = (mchunkptr)(mm + offset);
3735      p->prev_foot = offset | IS_MMAPPED_BIT;
3736      (p)->head = (psize|CINUSE_BIT);
3737      mark_inuse_foot(m, p, psize);
3738      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3739      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3740
3741      if (mm < m->least_addr)
3742        m->least_addr = mm;
3743      if ((m->footprint += mmsize) > m->max_footprint)
3744        m->max_footprint = m->footprint;
3745      assert(is_aligned(chunk2mem(p)));
3746      check_mmapped_chunk(m, p);
3747      return chunk2mem(p);
3748    }
3749  }
3750  return 0;
3751}
3752
3753/* Realloc using mmap */
3754static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3755  size_t oldsize = chunksize(oldp);
3756  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3757    return 0;
3758  /* Keep old chunk if big enough but not too big */
3759  if (oldsize >= nb + SIZE_T_SIZE &&
3760      (oldsize - nb) <= (mparams.granularity << 1))
3761    return oldp;
3762  else {
3763    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3764    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3765    size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3766    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3767                                  oldmmsize, newmmsize, 1);
3768    if (cp != CMFAIL) {
3769      mchunkptr newp = (mchunkptr)(cp + offset);
3770      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3771      newp->head = (psize|CINUSE_BIT);
3772      mark_inuse_foot(m, newp, psize);
3773      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3774      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3775
3776      if (cp < m->least_addr)
3777        m->least_addr = cp;
3778      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3779        m->max_footprint = m->footprint;
3780      check_mmapped_chunk(m, newp);
3781      return newp;
3782    }
3783  }
3784  return 0;
3785}
3786
3787/* -------------------------- mspace management -------------------------- */
3788
3789/* Initialize top chunk and its size */
3790static void init_top(mstate m, mchunkptr p, size_t psize) {
3791  /* Ensure alignment */
3792  size_t offset = align_offset(chunk2mem(p));
3793  p = (mchunkptr)((char*)p + offset);
3794  psize -= offset;
3795
3796  m->top = p;
3797  m->topsize = psize;
3798  p->head = psize | PINUSE_BIT;
3799  /* set size of fake trailing chunk holding overhead space only once */
3800  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3801  m->trim_check = mparams.trim_threshold; /* reset on each update */
3802}
3803
3804/* Initialize bins for a new mstate that is otherwise zeroed out */
3805static void init_bins(mstate m) {
3806  /* Establish circular links for smallbins */
3807  bindex_t i;
3808  for (i = 0; i < NSMALLBINS; ++i) {
3809    sbinptr bin = smallbin_at(m,i);
3810    bin->fd = bin->bk = bin;
3811  }
3812}
3813
3814#if PROCEED_ON_ERROR
3815
3816/* default corruption action */
3817static void reset_on_error(mstate m) {
3818  int i;
3819  ++malloc_corruption_error_count;
3820  /* Reinitialize fields to forget about all memory */
3821  m->smallbins = m->treebins = 0;
3822  m->dvsize = m->topsize = 0;
3823  m->seg.base = 0;
3824  m->seg.size = 0;
3825  m->seg.next = 0;
3826  m->top = m->dv = 0;
3827  for (i = 0; i < NTREEBINS; ++i)
3828    *treebin_at(m, i) = 0;
3829  init_bins(m);
3830}
3831#endif /* PROCEED_ON_ERROR */
3832
3833/* Allocate chunk and prepend remainder with chunk in successor base. */
3834static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3835                           size_t nb) {
3836  mchunkptr p = align_as_chunk(newbase);
3837  mchunkptr oldfirst = align_as_chunk(oldbase);
3838  size_t psize = (char*)oldfirst - (char*)p;
3839  mchunkptr q = chunk_plus_offset(p, nb);
3840  size_t qsize = psize - nb;
3841  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3842
3843  assert((char*)oldfirst > (char*)q);
3844  assert(pinuse(oldfirst));
3845  assert(qsize >= MIN_CHUNK_SIZE);
3846
3847  /* consolidate remainder with first chunk of old base */
3848  if (oldfirst == m->top) {
3849    size_t tsize = m->topsize += qsize;
3850    m->top = q;
3851    q->head = tsize | PINUSE_BIT;
3852    check_top_chunk(m, q);
3853  }
3854  else if (oldfirst == m->dv) {
3855    size_t dsize = m->dvsize += qsize;
3856    m->dv = q;
3857    set_size_and_pinuse_of_free_chunk(q, dsize);
3858  }
3859  else {
3860    if (!cinuse(oldfirst)) {
3861      size_t nsize = chunksize(oldfirst);
3862      unlink_chunk(m, oldfirst, nsize);
3863      oldfirst = chunk_plus_offset(oldfirst, nsize);
3864      qsize += nsize;
3865    }
3866    set_free_with_pinuse(q, qsize, oldfirst);
3867    insert_chunk(m, q, qsize);
3868    check_free_chunk(m, q);
3869  }
3870
3871  check_malloced_chunk(m, chunk2mem(p), nb);
3872  return chunk2mem(p);
3873}
3874
3875/* Add a segment to hold a new noncontiguous region */
3876static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3877  /* Determine locations and sizes of segment, fenceposts, old top */
3878  char* old_top = (char*)m->top;
3879  msegmentptr oldsp = segment_holding(m, old_top);
3880  char* old_end = oldsp->base + oldsp->size;
3881  size_t ssize = pad_request(sizeof(struct malloc_segment));
3882  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3883  size_t offset = align_offset(chunk2mem(rawsp));
3884  char* asp = rawsp + offset;
3885  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3886  mchunkptr sp = (mchunkptr)csp;
3887  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3888  mchunkptr tnext = chunk_plus_offset(sp, ssize);
3889  mchunkptr p = tnext;
3890  int nfences = 0;
3891
3892  /* reset top to new space */
3893  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3894
3895  /* Set up segment record */
3896  assert(is_aligned(ss));
3897  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3898  *ss = m->seg; /* Push current record */
3899  m->seg.base = tbase;
3900  m->seg.size = tsize;
3901  m->seg.sflags = mmapped;
3902  m->seg.next = ss;
3903
3904  /* Insert trailing fenceposts */
3905  for (;;) {
3906    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3907    p->head = FENCEPOST_HEAD;
3908    ++nfences;
3909    if ((char*)(&(nextp->head)) < old_end)
3910      p = nextp;
3911    else
3912      break;
3913  }
3914  assert(nfences >= 2);
3915
3916  /* Insert the rest of old top into a bin as an ordinary free chunk */
3917  if (csp != old_top) {
3918    mchunkptr q = (mchunkptr)old_top;
3919    size_t psize = csp - old_top;
3920    mchunkptr tn = chunk_plus_offset(q, psize);
3921    set_free_with_pinuse(q, psize, tn);
3922    insert_chunk(m, q, psize);
3923  }
3924
3925  check_top_chunk(m, m->top);
3926}
3927
3928/* -------------------------- System allocation -------------------------- */
3929
3930/* Get memory from system using MORECORE or MMAP */
3931static void* sys_alloc(mstate m, size_t nb) {
3932  char* tbase = CMFAIL;
3933  size_t tsize = 0;
3934  flag_t mmap_flag = 0;
3935
3936  ensure_initialization();
3937
3938  /* Directly map large chunks */
3939  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3940    void* mem = mmap_alloc(m, nb);
3941    if (mem != 0)
3942      return mem;
3943  }
3944
3945  /*
3946    Try getting memory in any of three ways (in most-preferred to
3947    least-preferred order):
3948    1. A call to MORECORE that can normally contiguously extend memory.
3949       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3950       main space is mmapped or a previous contiguous call failed)
3951    2. A call to MMAP new space (disabled if not HAVE_MMAP).
3952       Note that under the default settings, if MORECORE is unable to
3953       fulfill a request, and HAVE_MMAP is true, then mmap is
3954       used as a noncontiguous system allocator. This is a useful backup
3955       strategy for systems with holes in address spaces -- in this case
3956       sbrk cannot contiguously expand the heap, but mmap may be able to
3957       find space.
3958    3. A call to MORECORE that cannot usually contiguously extend memory.
3959       (disabled if not HAVE_MORECORE)
3960
3961   In all cases, we need to request enough bytes from system to ensure
3962   we can malloc nb bytes upon success, so pad with enough space for
3963   top_foot, plus alignment-pad to make sure we don't lose bytes if
3964   not on boundary, and round this up to a granularity unit.
3965  */
3966
3967  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3968    char* br = CMFAIL;
3969    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3970    size_t asize = 0;
3971    ACQUIRE_MALLOC_GLOBAL_LOCK();
3972
3973    if (ss == 0) {  /* First time through or recovery */
3974      char* base = (char*)CALL_MORECORE(0);
3975      if (base != CMFAIL) {
3976        asize = granularity_align(nb + SYS_ALLOC_PADDING);
3977        /* Adjust to end on a page boundary */
3978        if (!is_page_aligned(base))
3979          asize += (page_align((size_t)base) - (size_t)base);
3980        /* Can't call MORECORE if size is negative when treated as signed */
3981        if (asize < HALF_MAX_SIZE_T &&
3982            (br = (char*)(CALL_MORECORE(asize))) == base) {
3983          tbase = base;
3984          tsize = asize;
3985        }
3986      }
3987    }
3988    else {
3989      /* Subtract out existing available top space from MORECORE request. */
3990      asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
3991      /* Use mem here only if it did contiguously extend old space */
3992      if (asize < HALF_MAX_SIZE_T &&
3993          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3994        tbase = br;
3995        tsize = asize;
3996      }
3997    }
3998
3999    if (tbase == CMFAIL) {    /* Cope with partial failure */
4000      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
4001        if (asize < HALF_MAX_SIZE_T &&
4002            asize < nb + SYS_ALLOC_PADDING) {
4003          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
4004          if (esize < HALF_MAX_SIZE_T) {
4005            char* end = (char*)CALL_MORECORE(esize);
4006            if (end != CMFAIL)
4007              asize += esize;
4008            else {            /* Can't use; try to release */
4009              (void) CALL_MORECORE(-asize);
4010              br = CMFAIL;
4011            }
4012          }
4013        }
4014      }
4015      if (br != CMFAIL) {    /* Use the space we did get */
4016        tbase = br;
4017        tsize = asize;
4018      }
4019      else
4020        disable_contiguous(m); /* Don't try contiguous path in the future */
4021    }
4022
4023    RELEASE_MALLOC_GLOBAL_LOCK();
4024  }
4025
4026  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
4027    size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
4028    if (rsize > nb) { /* Fail if wraps around zero */
4029      char* mp = (char*)(CALL_MMAP(rsize));
4030      if (mp != CMFAIL) {
4031        tbase = mp;
4032        tsize = rsize;
4033        mmap_flag = IS_MMAPPED_BIT;
4034      }
4035    }
4036  }
4037
4038  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4039    size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);
4040    if (asize < HALF_MAX_SIZE_T) {
4041      char* br = CMFAIL;
4042      char* end = CMFAIL;
4043      ACQUIRE_MALLOC_GLOBAL_LOCK();
4044      br = (char*)(CALL_MORECORE(asize));
4045      end = (char*)(CALL_MORECORE(0));
4046      RELEASE_MALLOC_GLOBAL_LOCK();
4047      if (br != CMFAIL && end != CMFAIL && br < end) {
4048        size_t ssize = end - br;
4049        if (ssize > nb + TOP_FOOT_SIZE) {
4050          tbase = br;
4051          tsize = ssize;
4052        }
4053      }
4054    }
4055  }
4056
4057  if (tbase != CMFAIL) {
4058
4059    if ((m->footprint += tsize) > m->max_footprint)
4060      m->max_footprint = m->footprint;
4061
4062    if (!is_initialized(m)) { /* first-time initialization */
4063      m->seg.base = m->least_addr = tbase;
4064      m->seg.size = tsize;
4065      m->seg.sflags = mmap_flag;
4066      m->magic = mparams.magic;
4067      m->release_checks = MAX_RELEASE_CHECK_RATE;
4068      init_bins(m);
4069#if !ONLY_MSPACES
4070      if (is_global(m))
4071        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4072      else
4073#endif
4074      {
4075        /* Offset top by embedded malloc_state */
4076        mchunkptr mn = next_chunk(mem2chunk(m));
4077        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4078      }
4079    }
4080
4081    else {
4082      /* Try to merge with an existing segment */
4083      msegmentptr sp = &m->seg;
4084      /* Only consider most recent segment if traversal suppressed */
4085      while (sp != 0 && tbase != sp->base + sp->size)
4086        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4087      if (sp != 0 &&
4088          !is_extern_segment(sp) &&
4089          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
4090          segment_holds(sp, m->top)) { /* append */
4091        sp->size += tsize;
4092        init_top(m, m->top, m->topsize + tsize);
4093      }
4094      else {
4095        if (tbase < m->least_addr)
4096          m->least_addr = tbase;
4097        sp = &m->seg;
4098        while (sp != 0 && sp->base != tbase + tsize)
4099          sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4100        if (sp != 0 &&
4101            !is_extern_segment(sp) &&
4102            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
4103          char* oldbase = sp->base;
4104          sp->base = tbase;
4105          sp->size += tsize;
4106          return prepend_alloc(m, tbase, oldbase, nb);
4107        }
4108        else
4109          add_segment(m, tbase, tsize, mmap_flag);
4110      }
4111    }
4112
4113    if (nb < m->topsize) { /* Allocate from new or extended top space */
4114      size_t rsize = m->topsize -= nb;
4115      mchunkptr p = m->top;
4116      mchunkptr r = m->top = chunk_plus_offset(p, nb);
4117      r->head = rsize | PINUSE_BIT;
4118      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4119      check_top_chunk(m, m->top);
4120      check_malloced_chunk(m, chunk2mem(p), nb);
4121      return chunk2mem(p);
4122    }
4123  }
4124
4125  MALLOC_FAILURE_ACTION;
4126  return 0;
4127}
4128
4129/* -----------------------  system deallocation -------------------------- */
4130
4131/* Unmap and unlink any mmapped segments that don't contain used chunks */
4132static size_t release_unused_segments(mstate m) {
4133  size_t released = 0;
4134  int nsegs = 0;
4135  msegmentptr pred = &m->seg;
4136  msegmentptr sp = pred->next;
4137  while (sp != 0) {
4138    char* base = sp->base;
4139    size_t size = sp->size;
4140    msegmentptr next = sp->next;
4141    ++nsegs;
4142    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4143      mchunkptr p = align_as_chunk(base);
4144      size_t psize = chunksize(p);
4145      /* Can unmap if first chunk holds entire segment and not pinned */
4146      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4147        tchunkptr tp = (tchunkptr)p;
4148        assert(segment_holds(sp, (char*)sp));
4149        if (p == m->dv) {
4150          m->dv = 0;
4151          m->dvsize = 0;
4152        }
4153        else {
4154          unlink_large_chunk(m, tp);
4155        }
4156        if (CALL_MUNMAP(base, size) == 0) {
4157          released += size;
4158          m->footprint -= size;
4159          /* unlink obsoleted record */
4160          sp = pred;
4161          sp->next = next;
4162        }
4163        else { /* back out if cannot unmap */
4164          insert_large_chunk(m, tp, psize);
4165        }
4166      }
4167    }
4168    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4169      break;
4170    pred = sp;
4171    sp = next;
4172  }
4173  /* Reset check counter */
4174  m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
4175                       nsegs : MAX_RELEASE_CHECK_RATE);
4176  return released;
4177}
4178
4179static int sys_trim(mstate m, size_t pad) {
4180  size_t released = 0;
4181  ensure_initialization();
4182  if (pad < MAX_REQUEST && is_initialized(m)) {
4183    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4184
4185    if (m->topsize > pad) {
4186      /* Shrink top space in granularity-size units, keeping at least one */
4187      size_t unit = mparams.granularity;
4188      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4189                      SIZE_T_ONE) * unit;
4190      msegmentptr sp = segment_holding(m, (char*)m->top);
4191
4192      if (!is_extern_segment(sp)) {
4193        if (is_mmapped_segment(sp)) {
4194          if (HAVE_MMAP &&
4195              sp->size >= extra &&
4196              !has_segment_link(m, sp)) { /* can't shrink if pinned */
4197            size_t newsize = sp->size - extra;
4198            /* Prefer mremap, fall back to munmap */
4199            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4200                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4201              released = extra;
4202            }
4203          }
4204        }
4205        else if (HAVE_MORECORE) {
4206          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4207            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4208          ACQUIRE_MALLOC_GLOBAL_LOCK();
4209          {
4210            /* Make sure end of memory is where we last set it. */
4211            char* old_br = (char*)(CALL_MORECORE(0));
4212            if (old_br == sp->base + sp->size) {
4213              char* rel_br = (char*)(CALL_MORECORE(-extra));
4214              char* new_br = (char*)(CALL_MORECORE(0));
4215              if (rel_br != CMFAIL && new_br < old_br)
4216                released = old_br - new_br;
4217            }
4218          }
4219          RELEASE_MALLOC_GLOBAL_LOCK();
4220        }
4221      }
4222
4223      if (released != 0) {
4224        sp->size -= released;
4225        m->footprint -= released;
4226        init_top(m, m->top, m->topsize - released);
4227        check_top_chunk(m, m->top);
4228      }
4229    }
4230
4231    /* Unmap any unused mmapped segments */
4232    if (HAVE_MMAP)
4233      released += release_unused_segments(m);
4234
4235    /* On failure, disable autotrim to avoid repeated failed future calls */
4236    if (released == 0 && m->topsize > m->trim_check)
4237      m->trim_check = MAX_SIZE_T;
4238  }
4239
4240  return (released != 0)? 1 : 0;
4241}
4242
4243
4244/* ---------------------------- malloc support --------------------------- */
4245
4246/* allocate a large request from the best fitting chunk in a treebin */
4247static void* tmalloc_large(mstate m, size_t nb) {
4248  tchunkptr v = 0;
4249  size_t rsize = -nb; /* Unsigned negation */
4250  tchunkptr t;
4251  bindex_t idx;
4252  compute_tree_index(nb, idx);
4253  if ((t = *treebin_at(m, idx)) != 0) {
4254    /* Traverse tree for this bin looking for node with size == nb */
4255    size_t sizebits = nb << leftshift_for_tree_index(idx);
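    /* Each level of the descent uses the top bit of sizebits to pick the
       left (0) or right (1) child, so the walk follows the bits of nb
       below this bin's prefix, as in a bitwise trie keyed on size. */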
4256    tchunkptr rst = 0;  /* The deepest untaken right subtree */
4257    for (;;) {
4258      tchunkptr rt;
4259      size_t trem = chunksize(t) - nb;
4260      if (trem < rsize) {
4261        v = t;
4262        if ((rsize = trem) == 0)
4263          break;
4264      }
4265      rt = t->child[1];
4266      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4267      if (rt != 0 && rt != t)
4268        rst = rt;
4269      if (t == 0) {
4270        t = rst; /* set t to least subtree holding sizes > nb */
4271        break;
4272      }
4273      sizebits <<= 1;
4274    }
4275  }
4276  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4277    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4278    if (leftbits != 0) {
4279      bindex_t i;
4280      binmap_t leastbit = least_bit(leftbits);
4281      compute_bit2idx(leastbit, i);
4282      t = *treebin_at(m, i);
4283    }
4284  }
4285
4286  while (t != 0) { /* find smallest of tree or subtree */
4287    size_t trem = chunksize(t) - nb;
4288    if (trem < rsize) {
4289      rsize = trem;
4290      v = t;
4291    }
4292    t = leftmost_child(t);
4293  }
4294
4295  /*  If dv is a better fit, return 0 so malloc will use it */
4296  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4297    if (RTCHECK(ok_address(m, v))) { /* split */
4298      mchunkptr r = chunk_plus_offset(v, nb);
4299      assert(chunksize(v) == rsize + nb);
4300      if (RTCHECK(ok_next(v, r))) {
4301        unlink_large_chunk(m, v);
4302        if (rsize < MIN_CHUNK_SIZE)
4303          set_inuse_and_pinuse(m, v, (rsize + nb));
4304        else {
4305          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4306          set_size_and_pinuse_of_free_chunk(r, rsize);
4307          insert_chunk(m, r, rsize);
4308        }
4309        return chunk2mem(v);
4310      }
4311    }
4312    CORRUPTION_ERROR_ACTION(m);
4313  }
4314  return 0;
4315}
4316
4317/* allocate a small request from the best fitting chunk in a treebin */
4318static void* tmalloc_small(mstate m, size_t nb) {
4319  tchunkptr t, v;
4320  size_t rsize;
4321  bindex_t i;
4322  binmap_t leastbit = least_bit(m->treemap);
4323  compute_bit2idx(leastbit, i);
4324  v = t = *treebin_at(m, i);
4325  rsize = chunksize(t) - nb;
4326
4327  while ((t = leftmost_child(t)) != 0) {
4328    size_t trem = chunksize(t) - nb;
4329    if (trem < rsize) {
4330      rsize = trem;
4331      v = t;
4332    }
4333  }
4334
4335  if (RTCHECK(ok_address(m, v))) {
4336    mchunkptr r = chunk_plus_offset(v, nb);
4337    assert(chunksize(v) == rsize + nb);
4338    if (RTCHECK(ok_next(v, r))) {
4339      unlink_large_chunk(m, v);
4340      if (rsize < MIN_CHUNK_SIZE)
4341        set_inuse_and_pinuse(m, v, (rsize + nb));
4342      else {
4343        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4344        set_size_and_pinuse_of_free_chunk(r, rsize);
4345        replace_dv(m, r, rsize);
4346      }
4347      return chunk2mem(v);
4348    }
4349  }
4350
4351  CORRUPTION_ERROR_ACTION(m);
4352  return 0;
4353}
4354
4355/* --------------------------- realloc support --------------------------- */
4356
4357static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
4358  if (bytes >= MAX_REQUEST) {
4359    MALLOC_FAILURE_ACTION;
4360    return 0;
4361  }
4362  if (!PREACTION(m)) {
4363    mchunkptr oldp = mem2chunk(oldmem);
4364    size_t oldsize = chunksize(oldp);
4365    mchunkptr next = chunk_plus_offset(oldp, oldsize);
4366    mchunkptr newp = 0;
4367    void* extra = 0;
4368
4369    /* Try to either shrink or extend into top. Else malloc-copy-free */
4370
4371    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
4372                ok_next(oldp, next) && ok_pinuse(next))) {
4373      size_t nb = request2size(bytes);
4374      if (is_mmapped(oldp))
4375        newp = mmap_resize(m, oldp, nb);
4376      else if (oldsize >= nb) { /* already big enough */
4377        size_t rsize = oldsize - nb;
4378        newp = oldp;
4379        if (rsize >= MIN_CHUNK_SIZE) {
4380          mchunkptr remainder = chunk_plus_offset(newp, nb);
4381          set_inuse(m, newp, nb);
4382          set_inuse(m, remainder, rsize);
4383          extra = chunk2mem(remainder);
4384        }
4385      }
4386      else if (next == m->top && oldsize + m->topsize > nb) {
4387        /* Expand into top */
4388        size_t newsize = oldsize + m->topsize;
4389        size_t newtopsize = newsize - nb;
4390        mchunkptr newtop = chunk_plus_offset(oldp, nb);
4391        set_inuse(m, oldp, nb);
4392        newtop->head = newtopsize | PINUSE_BIT;
4393        m->top = newtop;
4394        m->topsize = newtopsize;
4395        newp = oldp;
4396      }
4397    }
4398    else {
4399      USAGE_ERROR_ACTION(m, oldmem);
4400      POSTACTION(m);
4401      return 0;
4402    }
4403
4404    POSTACTION(m);
4405
4406    if (newp != 0) {
4407      if (extra != 0) {
4408        internal_free(m, extra);
4409      }
4410      check_inuse_chunk(m, newp);
4411      return chunk2mem(newp);
4412    }
4413    else {
4414      void* newmem = internal_malloc(m, bytes);
4415      if (newmem != 0) {
4416        size_t oc = oldsize - overhead_for(oldp);
4417        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
4418        internal_free(m, oldmem);
4419      }
4420      return newmem;
4421    }
4422  }
4423  return 0;
4424}
4425
4426/* --------------------------- memalign support -------------------------- */
4427
4428static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4429  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
4430    return internal_malloc(m, bytes);
4431  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4432    alignment = MIN_CHUNK_SIZE;
4433  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4434    size_t a = MALLOC_ALIGNMENT << 1;
4435    while (a < alignment) a <<= 1;
4436    alignment = a;
4437  }
4438
4439  if (bytes >= MAX_REQUEST - alignment) {
4440    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
4441      MALLOC_FAILURE_ACTION;
4442    }
4443  }
4444  else {
4445    size_t nb = request2size(bytes);
4446    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4447    char* mem = (char*)internal_malloc(m, req);
4448    if (mem != 0) {
4449      void* leader = 0;
4450      void* trailer = 0;
4451      mchunkptr p = mem2chunk(mem);
4452
4453      if (PREACTION(m)) return 0;
4454      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4455        /*
4456          Find an aligned spot inside chunk.  Since we need to give
4457          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4458          the first calculation places us at a spot with less than
4459          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4460          We've allocated enough total room so that this is always
4461          possible.
4462        */
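        /* Worked example (illustrative addresses): with alignment = 64
           and mem = 0x1008, ((0x1008 + 63) & -64) = 0x1040, so br is the
           chunk whose payload would start at 0x1040; if that leaves less
           than MIN_CHUNK_SIZE of leading space, the next aligned spot
           (br + alignment) is used instead. */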
4463        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4464                                                       alignment -
4465                                                       SIZE_T_ONE)) &
4466                                             -alignment));
4467        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4468          br : br+alignment;
4469        mchunkptr newp = (mchunkptr)pos;
4470        size_t leadsize = pos - (char*)(p);
4471        size_t newsize = chunksize(p) - leadsize;
4472
4473        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4474          newp->prev_foot = p->prev_foot + leadsize;
4475          newp->head = (newsize|CINUSE_BIT);
4476        }
4477        else { /* Otherwise, give back leader, use the rest */
4478          set_inuse(m, newp, newsize);
4479          set_inuse(m, p, leadsize);
4480          leader = chunk2mem(p);
4481        }
4482        p = newp;
4483      }
4484
4485      /* Give back spare room at the end */
4486      if (!is_mmapped(p)) {
4487        size_t size = chunksize(p);
4488        if (size > nb + MIN_CHUNK_SIZE) {
4489          size_t remainder_size = size - nb;
4490          mchunkptr remainder = chunk_plus_offset(p, nb);
4491          set_inuse(m, p, nb);
4492          set_inuse(m, remainder, remainder_size);
4493          trailer = chunk2mem(remainder);
4494        }
4495      }
4496
4497      assert (chunksize(p) >= nb);
4498      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4499      check_inuse_chunk(m, p);
4500      POSTACTION(m);
4501      if (leader != 0) {
4502        internal_free(m, leader);
4503      }
4504      if (trailer != 0) {
4505        internal_free(m, trailer);
4506      }
4507      return chunk2mem(p);
4508    }
4509  }
4510  return 0;
4511}
4512
4513/* ------------------------ comalloc/coalloc support --------------------- */
4514
4515static void** ialloc(mstate m,
4516                     size_t n_elements,
4517                     size_t* sizes,
4518                     int opts,
4519                     void* chunks[]) {
4520  /*
4521    This provides common support for independent_X routines, handling
4522    all of the combinations that can result.
4523
4524    The opts arg has:
4525    bit 0 set if all elements are same size (using sizes[0])
4526    bit 1 set if elements should be zeroed
4527  */
4528
4529  size_t    element_size;   /* chunksize of each element, if all same */
4530  size_t    contents_size;  /* total size of elements */
4531  size_t    array_size;     /* request size of pointer array */
4532  void*     mem;            /* malloced aggregate space */
4533  mchunkptr p;              /* corresponding chunk */
4534  size_t    remainder_size; /* remaining bytes while splitting */
4535  void**    marray;         /* either "chunks" or malloced ptr array */
4536  mchunkptr array_chunk;    /* chunk for malloced ptr array */
4537  flag_t    was_enabled;    /* to disable mmap */
4538  size_t    size;
4539  size_t    i;
4540
4541  ensure_initialization();
4542  /* compute array length, if needed */
4543  if (chunks != 0) {
4544    if (n_elements == 0)
4545      return chunks; /* nothing to do */
4546    marray = chunks;
4547    array_size = 0;
4548  }
4549  else {
4550    /* if empty req, must still return chunk representing empty array */
4551    if (n_elements == 0)
4552      return (void**)internal_malloc(m, 0);
4553    marray = 0;
4554    array_size = request2size(n_elements * (sizeof(void*)));
4555  }
4556
4557  /* compute total element size */
4558  if (opts & 0x1) { /* all-same-size */
4559    element_size = request2size(*sizes);
4560    contents_size = n_elements * element_size;
4561  }
4562  else { /* add up all the sizes */
4563    element_size = 0;
4564    contents_size = 0;
4565    for (i = 0; i != n_elements; ++i)
4566      contents_size += request2size(sizes[i]);
4567  }
4568
4569  size = contents_size + array_size;
4570
4571  /*
4572     Allocate the aggregate chunk.  First disable direct-mmapping so
4573     malloc won't use it, since we would not be able to later
4574     free/realloc space internal to a segregated mmap region.
4575  */
4576  was_enabled = use_mmap(m);
4577  disable_mmap(m);
4578  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4579  if (was_enabled)
4580    enable_mmap(m);
4581  if (mem == 0)
4582    return 0;
4583
4584  if (PREACTION(m)) return 0;
4585  p = mem2chunk(mem);
4586  remainder_size = chunksize(p);
4587
4588  assert(!is_mmapped(p));
4589
4590  if (opts & 0x2) {       /* optionally clear the elements */
4591    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4592  }
4593
4594  /* If not provided, allocate the pointer array as final part of chunk */
4595  if (marray == 0) {
4596    size_t  array_chunk_size;
4597    array_chunk = chunk_plus_offset(p, contents_size);
4598    array_chunk_size = remainder_size - contents_size;
4599    marray = (void**) (chunk2mem(array_chunk));
4600    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4601    remainder_size = contents_size;
4602  }
4603
4604  /* split out elements */
4605  for (i = 0; ; ++i) {
4606    marray[i] = chunk2mem(p);
4607    if (i != n_elements-1) {
4608      if (element_size != 0)
4609        size = element_size;
4610      else
4611        size = request2size(sizes[i]);
4612      remainder_size -= size;
4613      set_size_and_pinuse_of_inuse_chunk(m, p, size);
4614      p = chunk_plus_offset(p, size);
4615    }
4616    else { /* the final element absorbs any overallocation slop */
4617      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4618      break;
4619    }
4620  }
4621
4622#if DEBUG
4623  if (marray != chunks) {
4624    /* final element must have exactly exhausted chunk */
4625    if (element_size != 0) {
4626      assert(remainder_size == element_size);
4627    }
4628    else {
4629      assert(remainder_size == request2size(sizes[i]));
4630    }
4631    check_inuse_chunk(m, mem2chunk(marray));
4632  }
4633  for (i = 0; i != n_elements; ++i)
4634    check_inuse_chunk(m, mem2chunk(marray[i]));
4635
4636#endif /* DEBUG */
4637
4638  POSTACTION(m);
4639  return marray;
4640}
4641
4642
4643/* -------------------------- public routines ---------------------------- */
4644
4645#if !ONLY_MSPACES
4646
4647void* dlmalloc(size_t bytes) {
4648  /*
4649     Basic algorithm:
4650     If a small request (< 256 bytes minus per-chunk overhead):
4651       1. If one exists, use a remainderless chunk in associated smallbin.
4652          (Remainderless means that there are too few excess bytes to
4653          represent as a chunk.)
4654       2. If it is big enough, use the dv chunk, which is normally the
4655          chunk adjacent to the one used for the most recent small request.
4656       3. If one exists, split the smallest available chunk in a bin,
4657          saving remainder in dv.
4658       4. If it is big enough, use the top chunk.
4659       5. If available, get memory from system and use it.
4660     Otherwise, for a large request:
4661       1. Find the smallest available binned chunk that fits, and use it
4662          if it is better fitting than dv chunk, splitting if necessary.
4663       2. If better fitting than any binned chunk, use the dv chunk.
4664       3. If it is big enough, use the top chunk.
4665       4. If request size >= mmap threshold, try to directly mmap this chunk.
4666       5. If available, get memory from system and use it.
4667
4668     The ugly gotos here ensure that postaction occurs along all paths.
4669  */
4670
4671#if USE_LOCKS
4672  ensure_initialization(); /* initialize in sys_alloc if not using locks */
4673#endif
4674
4675  if (!PREACTION(gm)) {
4676    void* mem;
4677    size_t nb;
4678    if (bytes <= MAX_SMALL_REQUEST) {
4679      bindex_t idx;
4680      binmap_t smallbits;
4681      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4682      idx = small_index(nb);
4683      smallbits = gm->smallmap >> idx;
4684
4685      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4686        mchunkptr b, p;
4687        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4688        b = smallbin_at(gm, idx);
4689        p = b->fd;
4690        assert(chunksize(p) == small_index2size(idx));
4691        unlink_first_small_chunk(gm, b, p, idx);
4692        set_inuse_and_pinuse(gm, p, small_index2size(idx));
4693        mem = chunk2mem(p);
4694        check_malloced_chunk(gm, mem, nb);
4695        goto postaction;
4696      }
4697
4698      else if (nb > gm->dvsize) {
4699        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4700          mchunkptr b, p, r;
4701          size_t rsize;
4702          bindex_t i;
4703          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4704          binmap_t leastbit = least_bit(leftbits);
4705          compute_bit2idx(leastbit, i);
4706          b = smallbin_at(gm, i);
4707          p = b->fd;
4708          assert(chunksize(p) == small_index2size(i));
4709          unlink_first_small_chunk(gm, b, p, i);
4710          rsize = small_index2size(i) - nb;
4711          /* With 4-byte sizes the fit here can never be remainderless */
4712          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4713            set_inuse_and_pinuse(gm, p, small_index2size(i));
4714          else {
4715            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4716            r = chunk_plus_offset(p, nb);
4717            set_size_and_pinuse_of_free_chunk(r, rsize);
4718            replace_dv(gm, r, rsize);
4719          }
4720          mem = chunk2mem(p);
4721          check_malloced_chunk(gm, mem, nb);
4722          goto postaction;
4723        }
4724
4725        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4726          check_malloced_chunk(gm, mem, nb);
4727          goto postaction;
4728        }
4729      }
4730    }
4731    else if (bytes >= MAX_REQUEST)
4732      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4733    else {
4734      nb = pad_request(bytes);
4735      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4736        check_malloced_chunk(gm, mem, nb);
4737        goto postaction;
4738      }
4739    }
4740
4741    if (nb <= gm->dvsize) {
4742      size_t rsize = gm->dvsize - nb;
4743      mchunkptr p = gm->dv;
4744      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4745        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4746        gm->dvsize = rsize;
4747        set_size_and_pinuse_of_free_chunk(r, rsize);
4748        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4749      }
4750      else { /* exhaust dv */
4751        size_t dvs = gm->dvsize;
4752        gm->dvsize = 0;
4753        gm->dv = 0;
4754        set_inuse_and_pinuse(gm, p, dvs);
4755      }
4756      mem = chunk2mem(p);
4757      check_malloced_chunk(gm, mem, nb);
4758      goto postaction;
4759    }
4760
4761    else if (nb < gm->topsize) { /* Split top */
4762      size_t rsize = gm->topsize -= nb;
4763      mchunkptr p = gm->top;
4764      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4765      r->head = rsize | PINUSE_BIT;
4766      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4767      mem = chunk2mem(p);
4768      check_top_chunk(gm, gm->top);
4769      check_malloced_chunk(gm, mem, nb);
4770      goto postaction;
4771    }
4772
4773    mem = sys_alloc(gm, nb);
4774
4775  postaction:
4776    POSTACTION(gm);
4777    return mem;
4778  }
4779
4780  return 0;
4781}
4782
4783void dlfree(void* mem) {
4784  /*
4785     Consolidate freed chunks with preceding or succeeding bordering
4786     free chunks, if they exist, and then place in a bin.  Intermixed
4787     with special cases for top, dv, mmapped chunks, and usage errors.
4788  */
4789
4790  if (mem != 0) {
4791    mchunkptr p  = mem2chunk(mem);
4792#if FOOTERS
4793    mstate fm = get_mstate_for(p);
4794    if (!ok_magic(fm)) {
4795      USAGE_ERROR_ACTION(fm, p);
4796      return;
4797    }
4798#else /* FOOTERS */
4799#define fm gm
4800#endif /* FOOTERS */
4801    if (!PREACTION(fm)) {
4802      check_inuse_chunk(fm, p);
4803      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4804        size_t psize = chunksize(p);
4805        mchunkptr next = chunk_plus_offset(p, psize);
4806        if (!pinuse(p)) {
4807          size_t prevsize = p->prev_foot;
4808          if ((prevsize & IS_MMAPPED_BIT) != 0) {
4809            prevsize &= ~IS_MMAPPED_BIT;
4810            psize += prevsize + MMAP_FOOT_PAD;
4811            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4812              fm->footprint -= psize;
4813            goto postaction;
4814          }
4815          else {
4816            mchunkptr prev = chunk_minus_offset(p, prevsize);
4817            psize += prevsize;
4818            p = prev;
4819            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4820              if (p != fm->dv) {
4821                unlink_chunk(fm, p, prevsize);
4822              }
4823              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4824                fm->dvsize = psize;
4825                set_free_with_pinuse(p, psize, next);
4826                goto postaction;
4827              }
4828            }
4829            else
4830              goto erroraction;
4831          }
4832        }
4833
4834        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4835          if (!cinuse(next)) {  /* consolidate forward */
4836            if (next == fm->top) {
4837              size_t tsize = fm->topsize += psize;
4838              fm->top = p;
4839              p->head = tsize | PINUSE_BIT;
4840              if (p == fm->dv) {
4841                fm->dv = 0;
4842                fm->dvsize = 0;
4843              }
4844              if (should_trim(fm, tsize))
4845                sys_trim(fm, 0);
4846              goto postaction;
4847            }
4848            else if (next == fm->dv) {
4849              size_t dsize = fm->dvsize += psize;
4850              fm->dv = p;
4851              set_size_and_pinuse_of_free_chunk(p, dsize);
4852              goto postaction;
4853            }
4854            else {
4855              size_t nsize = chunksize(next);
4856              psize += nsize;
4857              unlink_chunk(fm, next, nsize);
4858              set_size_and_pinuse_of_free_chunk(p, psize);
4859              if (p == fm->dv) {
4860                fm->dvsize = psize;
4861                goto postaction;
4862              }
4863            }
4864          }
4865          else
4866            set_free_with_pinuse(p, psize, next);
4867
4868          if (is_small(psize)) {
4869            insert_small_chunk(fm, p, psize);
4870            check_free_chunk(fm, p);
4871          }
4872          else {
4873            tchunkptr tp = (tchunkptr)p;
4874            insert_large_chunk(fm, tp, psize);
4875            check_free_chunk(fm, p);
4876            if (--fm->release_checks == 0)
4877              release_unused_segments(fm);
4878          }
4879          goto postaction;
4880        }
4881      }
4882    erroraction:
4883      USAGE_ERROR_ACTION(fm, p);
4884    postaction:
4885      POSTACTION(fm);
4886    }
4887  }
4888#if !FOOTERS
4889#undef fm
4890#endif /* FOOTERS */
4891}
4892
4893void* dlcalloc(size_t n_elements, size_t elem_size) {
4894  void* mem;
4895  size_t req = 0;
4896  if (n_elements != 0) {
4897    req = n_elements * elem_size;
4898    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4899        (req / n_elements != elem_size))
4900      req = MAX_SIZE_T; /* force downstream failure on overflow */
4901  }
4902  mem = dlmalloc(req);
4903  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4904    memset(mem, 0, req);
4905  return mem;
4906}
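
/*
  Note on the overflow guard above (added commentary): the cheap bit test
  skips the division whenever both operands fit in 16 bits, since their
  product then cannot overflow a 32-bit or wider size_t.  Otherwise
  req / n_elements != elem_size detects wraparound; e.g. with a 32-bit
  size_t, n_elements = 0x10000 and elem_size = 0x10001 wrap to
  req = 0x10000, and 0x10000 / 0x10000 == 1 != 0x10001, so req is forced
  to MAX_SIZE_T and the allocation fails.
*/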
4907
4908void* dlrealloc(void* oldmem, size_t bytes) {
4909  if (oldmem == 0)
4910    return dlmalloc(bytes);
4911#ifdef REALLOC_ZERO_BYTES_FREES
4912  if (bytes == 0) {
4913    dlfree(oldmem);
4914    return 0;
4915  }
4916#endif /* REALLOC_ZERO_BYTES_FREES */
4917  else {
4918#if ! FOOTERS
4919    mstate m = gm;
4920#else /* FOOTERS */
4921    mstate m = get_mstate_for(mem2chunk(oldmem));
4922    if (!ok_magic(m)) {
4923      USAGE_ERROR_ACTION(m, oldmem);
4924      return 0;
4925    }
4926#endif /* FOOTERS */
4927    return internal_realloc(m, oldmem, bytes);
4928  }
4929}
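
/*
  Usage sketch (illustrative only): the usual grow-safely idiom on top of
  dlrealloc.  On failure the old block is left untouched, so it must not
  be overwritten until the new pointer is known to be valid.

    int grow(double** vec, size_t* cap) {
      size_t  ncap = (*cap == 0) ? 16 : *cap * 2;
      double* nvec = (double*) dlrealloc(*vec, ncap * sizeof(double));
      if (nvec == 0)
        return 0;                              // *vec is still valid
      *vec = nvec;
      *cap = ncap;
      return 1;
    }
*/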
4930
4931void* dlmemalign(size_t alignment, size_t bytes) {
4932  return internal_memalign(gm, alignment, bytes);
4933}
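
/*
  Usage sketch (illustrative only): request a 64-byte-aligned buffer, for
  example for cache-line alignment.  A non-power-of-two alignment is
  rounded up to the next power of two by internal_memalign above.

    void* buf = dlmemalign(64, 4096);
    if (buf != 0) {
      // ((size_t) buf % 64) == 0 here
      dlfree(buf);
    }
*/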
4934
4935void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4936                                 void* chunks[]) {
4937  size_t sz = elem_size; /* serves as 1-element array */
4938  return ialloc(gm, n_elements, &sz, 3, chunks);
4939}
4940
4941void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4942                                   void* chunks[]) {
4943  return ialloc(gm, n_elements, sizes, 0, chunks);
4944}
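
/*
  Usage sketch (illustrative only, not part of the original documentation):
  dlindependent_comalloc can carve a header and two equal-lifetime arrays
  out of one underlying allocation.  The struct and sizes below are made
  up; each returned pointer may later be passed to dlfree individually.

    struct hdr { size_t n; };                  // hypothetical element type

    struct hdr* make_aggregate(size_t n, double** xs, int** flags) {
      size_t sizes[3];
      void*  ptrs[3];
      sizes[0] = sizeof(struct hdr);
      sizes[1] = n * sizeof(double);
      sizes[2] = n * sizeof(int);
      if (dlindependent_comalloc(3, sizes, ptrs) == 0)
        return 0;                              // allocation failed
      *xs    = (double*) ptrs[1];
      *flags = (int*)    ptrs[2];
      return (struct hdr*) ptrs[0];
    }
*/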
4945
4946void* dlvalloc(size_t bytes) {
4947  size_t pagesz;
4948  ensure_initialization();
4949  pagesz = mparams.page_size;
4950  return dlmemalign(pagesz, bytes);
4951}
4952
4953void* dlpvalloc(size_t bytes) {
4954  size_t pagesz;
4955  ensure_initialization();
4956  pagesz = mparams.page_size;
4957  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4958}
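
/*
  Note (illustrative numbers): dlpvalloc rounds the request up to a whole
  number of pages before page-aligning the result; with a 4096-byte page,
  a request of 5000 bytes becomes (5000 + 4095) & ~4095 = 8192.
*/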
4959
4960int dlmalloc_trim(size_t pad) {
4961  int result = 0;
4962  ensure_initialization();
4963  if (!PREACTION(gm)) {
4964    result = sys_trim(gm, pad);
4965    POSTACTION(gm);
4966  }
4967  return result;
4968}
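
/*
  Usage sketch (illustrative only): after freeing a large working set, ask
  the allocator to return unused memory to the system and observe the
  effect on the footprint.  Assumes the dl-prefixed entry points and
  <stdio.h>.

    void report_trim(void) {
      size_t before = dlmalloc_footprint();
      int    shrank = dlmalloc_trim(0);        // pad 0: keep minimal slack
      size_t after  = dlmalloc_footprint();
      fprintf(stderr, "trim %s: %lu -> %lu bytes\n",
              shrank ? "released memory" : "released nothing",
              (unsigned long) before, (unsigned long) after);
    }
*/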
4969
4970size_t dlmalloc_footprint(void) {
4971  return gm->footprint;
4972}
4973
4974size_t dlmalloc_max_footprint(void) {
4975  return gm->max_footprint;
4976}
4977
4978#if !NO_MALLINFO
4979struct mallinfo dlmallinfo(void) {
4980  return internal_mallinfo(gm);
4981}
4982#endif /* NO_MALLINFO */
4983
4984void dlmalloc_stats(void) {
4985  internal_malloc_stats(gm);
4986}
4987
4988int dlmallopt(int param_number, int value) {
4989  return change_mparam(param_number, value);
4990}
4991
4992#endif /* !ONLY_MSPACES */
4993
4994size_t dlmalloc_usable_size(void* mem) {
4995  if (mem != 0) {
4996    mchunkptr p = mem2chunk(mem);
4997    if (cinuse(p))
4998      return chunksize(p) - overhead_for(p);
4999  }
5000  return 0;
5001}
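
/*
  Illustrative note: the usable size can exceed the request because
  requests are padded up to chunk granularity, e.g. (sketch only):

    char*  p = (char*) dlmalloc(5);
    size_t n = dlmalloc_usable_size(p);        // n >= 5 whenever p != 0
*/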
5002
5003/* ----------------------------- user mspaces ---------------------------- */
5004
5005#if MSPACES
5006
5007static mstate init_user_mstate(char* tbase, size_t tsize) {
5008  size_t msize = pad_request(sizeof(struct malloc_state));
5009  mchunkptr mn;
5010  mchunkptr msp = align_as_chunk(tbase);
5011  mstate m = (mstate)(chunk2mem(msp));
5012  memset(m, 0, msize);
5013  INITIAL_LOCK(&m->mutex);
5014  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
5015  m->seg.base = m->least_addr = tbase;
5016  m->seg.size = m->footprint = m->max_footprint = tsize;
5017  m->magic = mparams.magic;
5018  m->release_checks = MAX_RELEASE_CHECK_RATE;
5019  m->mflags = mparams.default_mflags;
5020  m->extp = 0;
5021  m->exts = 0;
5022  disable_contiguous(m);
5023  init_bins(m);
5024  mn = next_chunk(mem2chunk(m));
5025  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5026  check_top_chunk(m, m->top);
5027  return m;
5028}
5029
5030mspace create_mspace(size_t capacity, int locked) {
5031  mstate m = 0;
5032  size_t msize;
5033  ensure_initialization();
5034  msize = pad_request(sizeof(struct malloc_state));
5035  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5036    size_t rs = ((capacity == 0)? mparams.granularity :
5037                 (capacity + TOP_FOOT_SIZE + msize));
5038    size_t tsize = granularity_align(rs);
5039    char* tbase = (char*)(CALL_MMAP(tsize));
5040    if (tbase != CMFAIL) {
5041      m = init_user_mstate(tbase, tsize);
5042      m->seg.sflags = IS_MMAPPED_BIT;
5043      set_lock(m, locked);
5044    }
5045  }
5046  return (mspace)m;
5047}
5048
5049mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5050  mstate m = 0;
5051  size_t msize;
5052  ensure_initialization();
5053  msize = pad_request(sizeof(struct malloc_state));
5054  if (capacity > msize + TOP_FOOT_SIZE &&
5055      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5056    m = init_user_mstate((char*)base, capacity);
5057    m->seg.sflags = EXTERN_BIT;
5058    set_lock(m, locked);
5059  }
5060  return (mspace)m;
5061}
5062
5063int mspace_mmap_large_chunks(mspace msp, int enable) {
5064  int ret = 0;
5065  mstate ms = (mstate)msp;
5066  if (!PREACTION(ms)) {
5067    if (use_mmap(ms))
5068      ret = 1;
5069    if (enable)
5070      enable_mmap(ms);
5071    else
5072      disable_mmap(ms);
5073    POSTACTION(ms);
5074  }
5075  return ret;
5076}
5077
5078size_t destroy_mspace(mspace msp) {
5079  size_t freed = 0;
5080  mstate ms = (mstate)msp;
5081  if (ok_magic(ms)) {
5082    msegmentptr sp = &ms->seg;
5083    while (sp != 0) {
5084      char* base = sp->base;
5085      size_t size = sp->size;
5086      flag_t flag = sp->sflags;
5087      sp = sp->next;
5088      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
5089          CALL_MUNMAP(base, size) == 0)
5090        freed += size;
5091    }
5092  }
5093  else {
5094    USAGE_ERROR_ACTION(ms,ms);
5095  }
5096  return freed;
5097}
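
/*
  Usage sketch (illustrative only): an mspace as a private, discardable
  heap.  Capacity 0 lets the space grow on demand; the second argument
  requests locking.

    void scratch_work(void) {
      mspace ms = create_mspace(0, 1);
      if (ms != 0) {
        char* buf = (char*) mspace_malloc(ms, 1024);
        if (buf != 0)
          mspace_free(ms, buf);                // optional given the destroy
        destroy_mspace(ms);                    // unmaps the space's segments
      }
    }
*/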
5098
5099/*
5100  mspace versions of routines are near-clones of the global
5101  versions. This is not so nice but better than the alternatives.
5102*/
5103
5104
5105void* mspace_malloc(mspace msp, size_t bytes) {
5106  mstate ms = (mstate)msp;
5107  if (!ok_magic(ms)) {
5108    USAGE_ERROR_ACTION(ms,ms);
5109    return 0;
5110  }
5111  if (!PREACTION(ms)) {
5112    void* mem;
5113    size_t nb;
5114    if (bytes <= MAX_SMALL_REQUEST) {
5115      bindex_t idx;
5116      binmap_t smallbits;
5117      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5118      idx = small_index(nb);
5119      smallbits = ms->smallmap >> idx;
5120
5121      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5122        mchunkptr b, p;
5123        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
5124        b = smallbin_at(ms, idx);
5125        p = b->fd;
5126        assert(chunksize(p) == small_index2size(idx));
5127        unlink_first_small_chunk(ms, b, p, idx);
5128        set_inuse_and_pinuse(ms, p, small_index2size(idx));
5129        mem = chunk2mem(p);
5130        check_malloced_chunk(ms, mem, nb);
5131        goto postaction;
5132      }
5133
5134      else if (nb > ms->dvsize) {
5135        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5136          mchunkptr b, p, r;
5137          size_t rsize;
5138          bindex_t i;
5139          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5140          binmap_t leastbit = least_bit(leftbits);
5141          compute_bit2idx(leastbit, i);
5142          b = smallbin_at(ms, i);
5143          p = b->fd;
5144          assert(chunksize(p) == small_index2size(i));
5145          unlink_first_small_chunk(ms, b, p, i);
5146          rsize = small_index2size(i) - nb;
5147          /* With 4-byte sizes the fit here can never be remainderless */
5148          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5149            set_inuse_and_pinuse(ms, p, small_index2size(i));
5150          else {
5151            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5152            r = chunk_plus_offset(p, nb);
5153            set_size_and_pinuse_of_free_chunk(r, rsize);
5154            replace_dv(ms, r, rsize);
5155          }
5156          mem = chunk2mem(p);
5157          check_malloced_chunk(ms, mem, nb);
5158          goto postaction;
5159        }
5160
5161        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5162          check_malloced_chunk(ms, mem, nb);
5163          goto postaction;
5164        }
5165      }
5166    }
5167    else if (bytes >= MAX_REQUEST)
5168      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5169    else {
5170      nb = pad_request(bytes);
5171      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5172        check_malloced_chunk(ms, mem, nb);
5173        goto postaction;
5174      }
5175    }
5176
5177    if (nb <= ms->dvsize) {
5178      size_t rsize = ms->dvsize - nb;
5179      mchunkptr p = ms->dv;
5180      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5181        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5182        ms->dvsize = rsize;
5183        set_size_and_pinuse_of_free_chunk(r, rsize);
5184        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5185      }
5186      else { /* exhaust dv */
5187        size_t dvs = ms->dvsize;
5188        ms->dvsize = 0;
5189        ms->dv = 0;
5190        set_inuse_and_pinuse(ms, p, dvs);
5191      }
5192      mem = chunk2mem(p);
5193      check_malloced_chunk(ms, mem, nb);
5194      goto postaction;
5195    }
5196
5197    else if (nb < ms->topsize) { /* Split top */
5198      size_t rsize = ms->topsize -= nb;
5199      mchunkptr p = ms->top;
5200      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5201      r->head = rsize | PINUSE_BIT;
5202      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5203      mem = chunk2mem(p);
5204      check_top_chunk(ms, ms->top);
5205      check_malloced_chunk(ms, mem, nb);
5206      goto postaction;
5207    }
5208
5209    mem = sys_alloc(ms, nb);
5210
5211  postaction:
5212    POSTACTION(ms);
5213    return mem;
5214  }
5215
5216  return 0;
5217}
5218
5219void mspace_free(mspace msp, void* mem) {
5220  if (mem != 0) {
5221    mchunkptr p  = mem2chunk(mem);
5222#if FOOTERS
5223    mstate fm = get_mstate_for(p);
5224#else /* FOOTERS */
5225    mstate fm = (mstate)msp;
5226#endif /* FOOTERS */
5227    if (!ok_magic(fm)) {
5228      USAGE_ERROR_ACTION(fm, p);
5229      return;
5230    }
5231    if (!PREACTION(fm)) {
5232      check_inuse_chunk(fm, p);
5233      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
5234        size_t psize = chunksize(p);
5235        mchunkptr next = chunk_plus_offset(p, psize);
5236        if (!pinuse(p)) {
5237          size_t prevsize = p->prev_foot;
5238          if ((prevsize & IS_MMAPPED_BIT) != 0) {
5239            prevsize &= ~IS_MMAPPED_BIT;
5240            psize += prevsize + MMAP_FOOT_PAD;
5241            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5242              fm->footprint -= psize;
5243            goto postaction;
5244          }
5245          else {
5246            mchunkptr prev = chunk_minus_offset(p, prevsize);
5247            psize += prevsize;
5248            p = prev;
5249            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5250              if (p != fm->dv) {
5251                unlink_chunk(fm, p, prevsize);
5252              }
5253              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5254                fm->dvsize = psize;
5255                set_free_with_pinuse(p, psize, next);
5256                goto postaction;
5257              }
5258            }
5259            else
5260              goto erroraction;
5261          }
5262        }
5263
5264        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5265          if (!cinuse(next)) {  /* consolidate forward */
5266            if (next == fm->top) {
5267              size_t tsize = fm->topsize += psize;
5268              fm->top = p;
5269              p->head = tsize | PINUSE_BIT;
5270              if (p == fm->dv) {
5271                fm->dv = 0;
5272                fm->dvsize = 0;
5273              }
5274              if (should_trim(fm, tsize))
5275                sys_trim(fm, 0);
5276              goto postaction;
5277            }
5278            else if (next == fm->dv) {
5279              size_t dsize = fm->dvsize += psize;
5280              fm->dv = p;
5281              set_size_and_pinuse_of_free_chunk(p, dsize);
5282              goto postaction;
5283            }
5284            else {
5285              size_t nsize = chunksize(next);
5286              psize += nsize;
5287              unlink_chunk(fm, next, nsize);
5288              set_size_and_pinuse_of_free_chunk(p, psize);
5289              if (p == fm->dv) {
5290                fm->dvsize = psize;
5291                goto postaction;
5292              }
5293            }
5294          }
5295          else
5296            set_free_with_pinuse(p, psize, next);
5297
5298          if (is_small(psize)) {
5299            insert_small_chunk(fm, p, psize);
5300            check_free_chunk(fm, p);
5301          }
5302          else {
5303            tchunkptr tp = (tchunkptr)p;
5304            insert_large_chunk(fm, tp, psize);
5305            check_free_chunk(fm, p);
5306            if (--fm->release_checks == 0)
5307              release_unused_segments(fm);
5308          }
5309          goto postaction;
5310        }
5311      }
5312    erroraction:
5313      USAGE_ERROR_ACTION(fm, p);
5314    postaction:
5315      POSTACTION(fm);
5316    }
5317  }
5318}
5319
5320void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5321  void* mem;
5322  size_t req = 0;
5323  mstate ms = (mstate)msp;
5324  if (!ok_magic(ms)) {
5325    USAGE_ERROR_ACTION(ms,ms);
5326    return 0;
5327  }
5328  if (n_elements != 0) {
5329    req = n_elements * elem_size;
5330    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5331        (req / n_elements != elem_size))
5332      req = MAX_SIZE_T; /* force downstream failure on overflow */
5333  }
5334  mem = internal_malloc(ms, req);
5335  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5336    memset(mem, 0, req);
5337  return mem;
5338}
5339
5340void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5341  if (oldmem == 0)
5342    return mspace_malloc(msp, bytes);
5343#ifdef REALLOC_ZERO_BYTES_FREES
5344  if (bytes == 0) {
5345    mspace_free(msp, oldmem);
5346    return 0;
5347  }
5348#endif /* REALLOC_ZERO_BYTES_FREES */
5349  else {
5350#if FOOTERS
5351    mchunkptr p  = mem2chunk(oldmem);
5352    mstate ms = get_mstate_for(p);
5353#else /* FOOTERS */
5354    mstate ms = (mstate)msp;
5355#endif /* FOOTERS */
5356    if (!ok_magic(ms)) {
5357      USAGE_ERROR_ACTION(ms,ms);
5358      return 0;
5359    }
5360    return internal_realloc(ms, oldmem, bytes);
5361  }
5362}
5363
5364void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5365  mstate ms = (mstate)msp;
5366  if (!ok_magic(ms)) {
5367    USAGE_ERROR_ACTION(ms,ms);
5368    return 0;
5369  }
5370  return internal_memalign(ms, alignment, bytes);
5371}
5372
5373void** mspace_independent_calloc(mspace msp, size_t n_elements,
5374                                 size_t elem_size, void* chunks[]) {
5375  size_t sz = elem_size; /* serves as 1-element array */
5376  mstate ms = (mstate)msp;
5377  if (!ok_magic(ms)) {
5378    USAGE_ERROR_ACTION(ms,ms);
5379    return 0;
5380  }
5381  return ialloc(ms, n_elements, &sz, 3, chunks);
5382}
5383
5384void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5385                                   size_t sizes[], void* chunks[]) {
5386  mstate ms = (mstate)msp;
5387  if (!ok_magic(ms)) {
5388    USAGE_ERROR_ACTION(ms,ms);
5389    return 0;
5390  }
5391  return ialloc(ms, n_elements, sizes, 0, chunks);
5392}
5393
5394int mspace_trim(mspace msp, size_t pad) {
5395  int result = 0;
5396  mstate ms = (mstate)msp;
5397  if (ok_magic(ms)) {
5398    if (!PREACTION(ms)) {
5399      result = sys_trim(ms, pad);
5400      POSTACTION(ms);
5401    }
5402  }
5403  else {
5404    USAGE_ERROR_ACTION(ms,ms);
5405  }
5406  return result;
5407}
5408
5409void mspace_malloc_stats(mspace msp) {
5410  mstate ms = (mstate)msp;
5411  if (ok_magic(ms)) {
5412    internal_malloc_stats(ms);
5413  }
5414  else {
5415    USAGE_ERROR_ACTION(ms,ms);
5416  }
5417}
5418
5419size_t mspace_footprint(mspace msp) {
5420  size_t result = 0;
5421  mstate ms = (mstate)msp;
5422  if (ok_magic(ms)) {
5423    result = ms->footprint;
5424  }
5425  else {
5426    USAGE_ERROR_ACTION(ms,ms);
5427  }
5428  return result;
5429}
5430
5431
5432size_t mspace_max_footprint(mspace msp) {
5433  size_t result = 0;
5434  mstate ms = (mstate)msp;
5435  if (ok_magic(ms)) {
5436    result = ms->max_footprint;
5437  }
5438  else {
5439    USAGE_ERROR_ACTION(ms,ms);
5440  }
5441  return result;
5442}
5443
5444
5445#if !NO_MALLINFO
5446struct mallinfo mspace_mallinfo(mspace msp) {
5447  mstate ms = (mstate)msp;
5448  if (!ok_magic(ms)) {
5449    USAGE_ERROR_ACTION(ms,ms);
5450  }
5451  return internal_mallinfo(ms);
5452}
5453#endif /* NO_MALLINFO */
5454
5455size_t mspace_usable_size(void* mem) {
5456  if (mem != 0) {
5457    mchunkptr p = mem2chunk(mem);
5458    if (cinuse(p))
5459      return chunksize(p) - overhead_for(p);
5460  }
5461  return 0;
5462}
5463
5464int mspace_mallopt(int param_number, int value) {
5465  return change_mparam(param_number, value);
5466}
5467
5468#endif /* MSPACES */
5469
5470/* -------------------- Alternative MORECORE functions ------------------- */
5471
5472/*
5473  Guidelines for creating a custom version of MORECORE:
5474
5475  * For best performance, MORECORE should allocate in multiples of pagesize.
5476  * MORECORE may allocate more memory than requested. (Or even less,
5477      but this will usually result in a malloc failure.)
5478  * MORECORE must not allocate memory when given argument zero, but
5479      instead return one past the end address of memory from previous
5480      nonzero call.
5481  * For best performance, consecutive calls to MORECORE with positive
5482      arguments should return increasing addresses, indicating that
5483      space has been contiguously extended.
5484  * Even though consecutive calls to MORECORE need not return contiguous
5485      addresses, it must be OK for malloc'ed chunks to span multiple
5486      regions in those cases where they do happen to be contiguous.
5487  * MORECORE need not handle negative arguments -- it may instead
5488      just return MFAIL when given negative arguments.
5489      Negative arguments are always multiples of pagesize. MORECORE
5490      must not misinterpret negative args as large positive unsigned
5491      args. You can suppress all such calls from even occurring by defining
5492      MORECORE_CANNOT_TRIM. (See also the static-arena sketch after this block.)
5493
5494  As an example alternative MORECORE, here is a custom allocator
5495  kindly contributed for pre-OSX macOS.  It uses virtually but not
5496  necessarily physically contiguous non-paged memory (locked in,
5497  present and won't get swapped out).  You can use it by uncommenting
5498  this section, adding some #includes, and setting up the appropriate
5499  defines above:
5500
5501      #define MORECORE osMoreCore
5502
5503  There is also a shutdown routine that should somehow be called for
5504  cleanup upon program exit.
5505
5506  #define MAX_POOL_ENTRIES 100
5507  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
5508  static int next_os_pool;
5509  void *our_os_pools[MAX_POOL_ENTRIES];
5510
5511  void *osMoreCore(int size)
5512  {
5513    void *ptr = 0;
5514    static void *sbrk_top = 0;
5515
5516    if (size > 0)
5517    {
5518      if (size < MINIMUM_MORECORE_SIZE)
5519         size = MINIMUM_MORECORE_SIZE;
5520      if (CurrentExecutionLevel() == kTaskLevel)
5521         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5522      if (ptr == 0)
5523      {
5524        return (void *) MFAIL;
5525      }
5526      // save ptrs so they can be freed during cleanup
5527      our_os_pools[next_os_pool] = ptr;
5528      next_os_pool++;
5529      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5530      sbrk_top = (char *) ptr + size;
5531      return ptr;
5532    }
5533    else if (size < 0)
5534    {
5535      // we don't currently support shrink behavior
5536      return (void *) MFAIL;
5537    }
5538    else
5539    {
5540      return sbrk_top;
5541    }
5542  }
5543
5544  // cleanup any allocated memory pools
5545  // called as last thing before shutting down driver
5546
5547  void osCleanupMem(void)
5548  {
5549    void **ptr;
5550
5551    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5552      if (*ptr)
5553      {
5554         PoolDeallocate(*ptr);
5555         *ptr = 0;
5556      }
5557  }
5558
5559*/
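
/*
  A further minimal sketch (added here for illustration; assumes a fixed
  arena is acceptable and MORECORE_CONTIGUOUS holds): serving MORECORE
  from a static buffer.  Point MORECORE at this function, and consider
  defining MORECORE_CANNOT_TRIM since it never shrinks.

    #define ARENA_SIZE (1024 * 1024)           // illustrative size
    static char   arena[ARENA_SIZE];
    static size_t arena_used;

    void* static_morecore(int incr)
    {
      if (incr < 0)
        return (void*) MFAIL;                  // no trimming supported
      if (incr == 0)
        return arena + arena_used;             // current end of memory
      if ((size_t) incr > ARENA_SIZE - arena_used)
        return (void*) MFAIL;                  // arena exhausted
      arena_used += (size_t) incr;
      return arena + arena_used - (size_t) incr;
    }
*/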
5560
5561
5562/* -----------------------------------------------------------------------
5563History:
5564    V2.8.4 (not yet released)
5565      * Add mspace_mmap_large_chunks; thanks to Jean Brouwers
5566      * Fix insufficient sys_alloc padding when using 16byte alignment
5567      * Fix bad error check in mspace_footprint
5568      * Adaptations for ptmalloc, courtesy of Wolfram Gloger.
5569      * Reentrant spin locks, courtesy of Earl Chew and others
5570      * Win32 improvements, courtesy of Niall Douglas and Earl Chew
5571      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
5572      * Extension hook in malloc_state
5573      * Various small adjustments to reduce warnings on some compilers
5574      * Various configuration extensions/changes for more platforms. Thanks
5575         to all who contributed these.
5576
5577    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
5578      * Add max_footprint functions
5579      * Ensure all appropriate literals are size_t
5580      * Fix conditional compilation problem for some #define settings
5581      * Avoid concatenating segments with the one provided
5582        in create_mspace_with_base
5583      * Rename some variables to avoid compiler shadowing warnings
5584      * Use explicit lock initialization.
5585      * Better handling of sbrk interference.
5586      * Simplify and fix segment insertion, trimming and mspace_destroy
5587      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5588      * Thanks especially to Dennis Flanagan for help on these.
5589
5590    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
5591      * Fix memalign brace error.
5592
5593    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
5594      * Fix improper #endif nesting in C++
5595      * Add explicit casts needed for C++
5596
5597    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
5598      * Use trees for large bins
5599      * Support mspaces
5600      * Use segments to unify sbrk-based and mmap-based system allocation,
5601        removing need for emulation on most platforms without sbrk.
5602      * Default safety checks
5603      * Optional footer checks. Thanks to William Robertson for the idea.
5604      * Internal code refactoring
5605      * Incorporate suggestions and platform-specific changes.
5606        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5607        Aaron Bachmann,  Emery Berger, and others.
5608      * Speed up non-fastbin processing enough to remove fastbins.
5609      * Remove useless cfree() to avoid conflicts with other apps.
5610      * Remove internal memcpy, memset. Compilers handle builtins better.
5611      * Remove some options that no one ever used and rename others.
5612
5613    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
5614      * Fix malloc_state bitmap array misdeclaration
5615
5616    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
5617      * Allow tuning of FIRST_SORTED_BIN_SIZE
5618      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5619      * Better detection and support for non-contiguousness of MORECORE.
5620        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5621      * Bypass most of malloc if no frees. Thanks To Emery Berger.
5622      * Fix freeing of old top non-contiguous chunk in sysmalloc.
5623      * Raised default trim and map thresholds to 256K.
5624      * Fix mmap-related #defines. Thanks to Lubos Lunak.
5625      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5626      * Branch-free bin calculation
5627      * Default trim and mmap thresholds now 256K.
5628
5629    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
5630      * Introduce independent_comalloc and independent_calloc.
5631        Thanks to Michael Pachos for motivation and help.
5632      * Make optional .h file available
5633      * Allow > 2GB requests on 32bit systems.
5634      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5635        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5636        and Anonymous.
5637      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5638        helping test this.)
5639      * memalign: check alignment arg
5640      * realloc: don't try to shift chunks backwards, since this
5641        leads to  more fragmentation in some programs and doesn't
5642        seem to help in any others.
5643      * Collect all cases in malloc requiring system memory into sysmalloc
5644      * Use mmap as backup to sbrk
5645      * Place all internal state in malloc_state
5646      * Introduce fastbins (although similar to 2.5.1)
5647      * Many minor tunings and cosmetic improvements
5648      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5649      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5650        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5651      * Include errno.h to support default failure action.
5652
5653    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
5654      * return null for negative arguments
5655      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5656         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5657          (e.g. WIN32 platforms)
5658         * Cleanup header file inclusion for WIN32 platforms
5659         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5660         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5661           memory allocation routines
5662         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5663         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5664           usage of 'assert' in non-WIN32 code
5665         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5666           avoid infinite loop
5667      * Always call 'fREe()' rather than 'free()'
5668
5669    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
5670      * Fixed ordering problem with boundary-stamping
5671
5672    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
5673      * Added pvalloc, as recommended by H.J. Liu
5674      * Added 64bit pointer support mainly from Wolfram Gloger
5675      * Added anonymously donated WIN32 sbrk emulation
5676      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5677      * malloc_extend_top: fix mask error that caused wastage after
5678        foreign sbrks
5679      * Add linux mremap support code from HJ Liu
5680
5681    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
5682      * Integrated most documentation with the code.
5683      * Add support for mmap, with help from
5684        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5685      * Use last_remainder in more cases.
5686      * Pack bins using idea from  colin@nyx10.cs.du.edu
5687      * Use ordered bins instead of best-fit threshold
5688      * Eliminate block-local decls to simplify tracing and debugging.
5689      * Support another case of realloc via move into top
5690      * Fix error occurring when initial sbrk_base not word-aligned.
5691      * Rely on page size for units instead of SBRK_UNIT to
5692        avoid surprises about sbrk alignment conventions.
5693      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5694        (raymond@es.ele.tue.nl) for the suggestion.
5695      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5696      * More precautions for cases where other routines call sbrk,
5697        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5698      * Added macros etc., allowing use in linux libc from
5699        H.J. Lu (hjl@gnu.ai.mit.edu)
5700      * Inverted this history list
5701
5702    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
5703      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5704      * Removed all preallocation code since under current scheme
5705        the work required to undo bad preallocations exceeds
5706        the work saved in good cases for most test programs.
5707      * No longer use return list or unconsolidated bins since
5708        no scheme using them consistently outperforms those that don't
5709        given above changes.
5710      * Use best fit for very large chunks to prevent some worst-cases.
5711      * Added some support for debugging
5712
5713    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
5714      * Removed footers when chunks are in use. Thanks to
5715        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5716
5717    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
5718      * Added malloc_trim, with help from Wolfram Gloger
5719        (wmglo@Dent.MED.Uni-Muenchen.DE).
5720
5721    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
5722
5723    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
5724      * realloc: try to expand in both directions
5725      * malloc: swap order of clean-bin strategy;
5726      * realloc: only conditionally expand backwards
5727      * Try not to scavenge used bins
5728      * Use bin counts as a guide to preallocation
5729      * Occasionally bin return list chunks in first scan
5730      * Add a few optimizations from colin@nyx10.cs.du.edu
5731
5732    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
5733      * faster bin computation & slightly different binning
5734      * merged all consolidations to one part of malloc proper
5735         (eliminating old malloc_find_space & malloc_clean_bin)
5736      * Scan 2 returns chunks (not just 1)
5737      * Propagate failure in realloc if malloc returns 0
5738      * Add stuff to allow compilation on non-ANSI compilers
5739          from kpv@research.att.com
5740
5741    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
5742      * removed potential for odd address access in prev_chunk
5743      * removed dependency on getpagesize.h
5744      * misc cosmetics and a bit more internal documentation
5745      * anticosmetics: mangled names in macros to evade debugger strangeness
5746      * tested on sparc, hp-700, dec-mips, rs6000
5747          with gcc & native cc (hp, dec only) allowing
5748          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5749
5750    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
5751      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5752         structure of old version,  but most details differ.)
5753
5754*/