/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version pre-2.8.4 Mon Nov 27 11:22:37 2006  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
                                          8 or 16 bytes (if 8-byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond those requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.
       This malloc guarantees not to modify any memory locations
       below the base of the heap, i.e., static variables, even in
       the presence of usage errors.  The routines additionally
       detect most improper frees and reallocs.  All this holds as
       long as the static bookkeeping for malloc itself is not
       corrupted by some other means.  This is only one aspect of
       security -- these checks do not, and cannot, detect all
       possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics. This may further improve
       security at the expense of time and space overhead. (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (See http://www.malloc.de), which are derived
       from versions of this malloc.
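       For example, a thread-safe build might be compiled as follows
       (an illustrative command line; USE_LOCKS is the only required
       option):

         cc -O3 -DUSE_LOCKS=1 -c malloc.c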
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory. On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out-of-date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options  ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
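For example, this file's own default for one such option (repeated
below) keeps the whole expression at size_t width:

    #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)

A plain literal such as 2*1024*1024 might instead be evaluated at int
width on some systems.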
WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix. Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator.  You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc().  If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually.

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and on x86 using gcc or MSC
  If true, uses custom spin locks for locking. This is currently
  supported only for x86 platforms using gcc or recent MS compilers.
  Otherwise, posix locks or win32 critical sections are used.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
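  For instance, a sketch of mixed usage (assuming this file is
  compiled with -DUSE_DL_PREFIX):

    void* p = dlmalloc(100);   // served by this allocator
    void* q = malloc(100);     // served by the system allocator
    dlfree(p);
    free(q);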
ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR         default: defined as 0 (false)
  Controls whether detected bad addresses are bypassed rather than
  causing an abort. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also help isolate user errors better than runtime
  checks alone. The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE  default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort()
  instead, which will usually make debugging easier.

MALLOC_FAILURE_ACTION    default: sets errno to ENOMEM, or no-op on win32
  The action to take just before malloc returns 0 because no memory
  is available.

HAVE_MORECORE            default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                 default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.
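  As a rough sketch of the idea (the names my_morecore and the arena
  size here are illustrative, not part of this file; on failure the
  routine must return this allocator's MFAIL sentinel, which is
  (void*)MAX_SIZE_T):

    static char arena[1 << 20];        // fixed backing store
    static char* arena_top = arena;    // current "break"

    void* my_morecore(intptr_t increment) {
      char* old_top = arena_top;
      if (increment > 0 &&
          (size_t)increment > (size_t)(arena + sizeof(arena) - arena_top))
        return (void*)~(size_t)0;      // i.e., MFAIL: no more space
      arena_top += increment;          // negative increments release space
      return old_top;
    }

  Compiling with -DMORECORE=my_morecore (often together with
  -DHAVE_MMAP=0) would then route system requests through it.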
MORECORE_CONTIGUOUS      default: 1 (true) if HAVE_MORECORE
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous, though,
  saves the time and possibly wasted space it would otherwise take to
  discover this.

MORECORE_CANNOT_TRIM     default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL     default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of contiguous segments and selectively releasing
  them to the OS when unused, but bounds execution times.

HAVE_MMAP                default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation.  If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP              default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS              default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS          default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize       default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM           default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO              default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE      default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES  default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).
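  A minimal illustration of the difference:

    void* p = malloc(16);
    p = realloc(p, 0);  // frees p and yields 0 if REALLOC_ZERO_BYTES_FREES,
                        // else returns a new minimum-sized chunk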
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H            default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY       default: page size if MORECORE_CONTIGUOUS,
                              system_info.dwAllocationGranularity in WIN32,
                              otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least
  one page and must be a power of two.  Setting to 0 causes
  initialization to either page size or win32 region size.  (Note: In
  previous versions of malloc, the equivalent of this option was
  called "TOP_PAD")
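  For example, on a system where the allocation calls are slow, you
  might recompile with a larger (illustrative) unit such as:

    #define DEFAULT_GRANULARITY ((size_t)256U * (size_t)1024U)  // 256K units

  Any power of two of at least one page is acceptable.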
DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD    default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations). Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system-level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater) try to release unused
  segments to the OS when freeing chunks that result in
  consolidation. The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory. To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20804
#endif /* DLMALLOC_VERSION */

#if defined(linux)
#define _GNU_SOURCE 1
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif  /* _WIN32_WCE */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#define _WIN32_WINNT 0x403
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif  /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0     /* define to a value */
#else
#define ONLY_MSPACES 1
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif  /* USE_LOCKS */
#ifndef USE_SPIN_LOCKS
#if USE_LOCKS && ((defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))) || (defined(_MSC_VER) && _MSC_VER>=1310))
#define USE_SPIN_LOCKS 1
#else
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS && ... */
#endif /* USE_SPIN_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif /* NO_SEGMENT_TRAVERSAL */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo. The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields are
  instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline. The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
#if defined(__GNUC__)
#define FORCEINLINE __inline __attribute__ ((always_inline))
#elif defined(_MSC_VER)
#define FORCEINLINE __forceinline
#endif
#endif
#ifndef NOINLINE
#if defined(__GNUC__)
#define NOINLINE __attribute__ ((noinline))
#elif defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#else
#define NOINLINE
#endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
#define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
#define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, which had been
  previously allocated using malloc or a related routine such as
  realloc. It has no effect if p is null. If p was not malloced or
  already freed, free(p) will by default cause the current program to
  abort.
*/
void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);
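/*
  Because a failed realloc leaves p intact, a minimal sketch of the
  usual safe calling idiom (generic C, not specific to this
  allocator) is:

    void* q = realloc(p, newsize);
    if (q == 0) {
      // out of memory: p is still valid and must eventually be freed
    }
    else
      p = q;
*/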
/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  To work around the fact that mallopt is specified to use int,
  not size_t parameters, the value -1 is specially treated as the
  maximum unsigned size_t value.

  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this
  malloc, so setting them has no effect. But this malloc also
  supports other options in mallopt. See below for details.  Briefly,
  supported parameters are as follows (listed defaults are for
  "typical" configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
*/
int dlmallopt(int, int);
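/*
  For example (illustrative values only):

    mallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap requests of 1MB and up
    mallopt(M_TRIM_THRESHOLD, -1);         // -1 disables automatic trimming
*/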
/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
               than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

    struct Node { int item; struct Node* next; };

    struct Node* build_list() {
      struct Node** pool;
      int i;
      int n = read_number_of_nodes_needed();
      if (n <= 0) return 0;
      pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
      if (pool == 0) die();
      // organize into a linked list...
      struct Node* first = pool[0];
      for (i = 0; i < n-1; ++i)
        pool[i]->next = pool[i+1];
      free(pool);  // Can now free the array (or not, if it is needed later)
      return first;
    }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null).
  If it is null, the returned array is itself dynamically allocated
  and should also be freed when it is no longer needed. Otherwise,
  the chunks array must be of at least n_elements in length. It is
  filled in with the pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

    struct Head { ... };
    struct Foot { ... };

    void send_message(char* msg) {
      int msglen = strlen(msg);
      size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
      void* chunks[3];
      if (independent_comalloc(3, sizes, chunks) == 0)
        die();
      struct Head* head = (struct Head*)(chunks[0]);
      char*        body = (char*)(chunks[1]);
      struct Foot* foot = (struct Foot*)(chunks[2]);
      // ...
    }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
 */
void*  dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left.
  Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  malloc_trim returns 1 if it actually released any memory, else 0.
*/
int  dlmalloc_trim(size_t);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
void  dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

    p = malloc(n);
    assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);


#if MSPACES

/*
  mspace is an opaque type representing an independent
  region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;

/*
  create_mspace creates and returns a new independent space with the
  given initial capacity, or, if 0, the default granularity size.  It
  returns null if there is no system memory available to create the
  space.  If argument locked is non-zero, the space uses a separate
  lock to control access. The capacity of the space will grow
  dynamically as needed to service mspace_malloc requests.  You can
  control the sizes of incremental increases of this space by
  compiling with a different DEFAULT_GRANULARITY or dynamically
  setting with mallopt(M_GRANULARITY, value).
*/
mspace create_mspace(size_t capacity, int locked);

/*
  destroy_mspace destroys the given space, and attempts to return all
  of its memory back to the system, returning the total number of
  bytes freed. After destruction, the results of access to all memory
  used by the space become undefined.
*/
size_t destroy_mspace(mspace msp);

/*
  create_mspace_with_base uses the memory supplied as the initial base
  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
  space is used for bookkeeping, so the capacity must be at least this
  large. (Otherwise 0 is returned.) When this initial space is
  exhausted, additional memory will be obtained from the system.
  Destroying this space will deallocate all additionally allocated
  space (if possible) but not the initial base.
*/
mspace create_mspace_with_base(void* base, size_t capacity, int locked);
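/*
  A minimal usage sketch (the buffer size and names here are
  illustrative only):

    static char buf[256 * 1024];
    mspace ms = create_mspace_with_base(buf, sizeof(buf), 0);
    if (ms != 0) {
      void* p = mspace_malloc(ms, 100);
      mspace_free(ms, p);
      destroy_mspace(ms);  // releases memory obtained later, but not buf
    }
*/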
/*
  mspace_mmap_large_chunks controls whether requests for large chunks
  are allocated in their own mmapped regions, separate from others in
  this mspace. By default this is enabled, which reduces
  fragmentation. However, such chunks are not necessarily released to
  the system upon destroy_mspace.  Disabling by setting to false may
  increase fragmentation, but avoids leakage when relying on
  destroy_mspace to release all memory allocated using this space.
*/
int mspace_mmap_large_chunks(mspace msp, int enable);


/*
  mspace_malloc behaves as malloc, but operates within
  the given space.
*/
void* mspace_malloc(mspace msp, size_t bytes);

/*
  mspace_free behaves as free, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_free is not actually needed.
  free may be called instead of mspace_free because freed chunks from
  any space are handled by their originating spaces.
*/
void mspace_free(mspace msp, void* mem);

/*
  mspace_realloc behaves as realloc, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_realloc is not actually
  needed. realloc may be called instead of mspace_realloc because
  realloced chunks from any space are handled by their originating
  spaces.
*/
void* mspace_realloc(mspace msp, void* mem, size_t newsize);

/*
  mspace_calloc behaves as calloc, but operates within
  the given space.
*/
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);

/*
  mspace_memalign behaves as memalign, but operates within
  the given space.
*/
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);

/*
  mspace_independent_calloc behaves as independent_calloc, but
  operates within the given space.
*/
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]);

/*
  mspace_independent_comalloc behaves as independent_comalloc, but
  operates within the given space.
*/
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]);

/*
  mspace_footprint() returns the number of bytes obtained from the
  system for this space.
*/
size_t mspace_footprint(mspace msp);

/*
  mspace_max_footprint() returns the peak number of bytes obtained from the
  system for this space.
*/
size_t mspace_max_footprint(mspace msp);


#if !NO_MALLINFO
/*
  mspace_mallinfo behaves as mallinfo, but reports properties of
  the given space.
*/
struct mallinfo mspace_mallinfo(mspace msp);
#endif /* NO_MALLINFO */

/*
  mspace_usable_size(void* p) behaves the same as malloc_usable_size;
*/
size_t mspace_usable_size(void* mem);

/*
  mspace_malloc_stats behaves as malloc_stats, but reports
  properties of the given space.
*/
void mspace_malloc_stats(mspace msp);

/*
  mspace_trim behaves as malloc_trim, but
  operates within the given space.
*/
int mspace_trim(mspace msp, size_t pad);

/*
  An alias for mallopt.
*/
int mspace_mallopt(int, int);

#endif /* MSPACES */

#ifdef __cplusplus
};  /* end of extern "C" */
#endif /* __cplusplus */

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef WIN32
#ifndef __GNUC__
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif
#endif /* WIN32 */

#include <stdio.h>       /* for printing in malloc_stats */

#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#if FOOTERS
#include <time.h>        /* for magic initialization */
#endif /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#ifndef assert
#define assert(x)
#endif
#define DEBUG 0
#endif /* DEBUG */
#ifndef LACKS_STRING_H
#include <string.h>      /* for memset etc */
#endif  /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>    /* for mmap */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#ifndef LACKS_UNISTD_H
#include <unistd.h>      /* for sbrk, sysconf */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void* sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */

/* Declarations for locking */
#if USE_LOCKS
#ifndef WIN32
#include <pthread.h>
#if defined (__SVR4) && defined (__sun)  /* solaris */
#include <thread.h>
#endif /* solaris */
#else
#ifndef _M_AMD64
/* These are already defined on AMD64 builds */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
#ifndef __MINGW32__
LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
#endif
#ifdef __cplusplus
}
#endif /* __cplusplus */
#endif /* _M_AMD64 */
#ifndef __MINGW32__
#pragma intrinsic (_InterlockedCompareExchange)
#pragma intrinsic (_InterlockedExchange)
#else
/* --[ start GCC compatibility ]----------------------------------------------
 * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic
 * Microsoft Visual C++ functions.
 * Originally developed for the ReactOS
 * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>)
 * projects.
 *
 * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com>
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

/*** Atomic operations ***/
#if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
#define _ReadWriteBarrier() __sync_synchronize()
#else
static __inline__ __attribute__((always_inline)) long __sync_lock_test_and_set(volatile long * const Target, const long Value)
{
  long res;
  __asm__ __volatile__("xchg%z0 %2, %0" : "=g" (*(Target)), "=r" (res) : "1" (Value));
  return res;
}
static void __inline__ __attribute__((always_inline)) _MemoryBarrier(void)
{
  __asm__ __volatile__("" : : : "memory");
}
#define _ReadWriteBarrier() _MemoryBarrier()
#endif
/* BUGBUG: GCC only supports full barriers */
static __inline__ __attribute__((always_inline)) long _InterlockedExchange(volatile long * const Target, const long Value)
{
  /* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */
  _ReadWriteBarrier();
  return __sync_lock_test_and_set(Target, Value);
}
/* --[ end GCC compatibility ]---------------------------------------------- */
#endif
#define interlockedcompareexchange _InterlockedCompareExchange
#define interlockedexchange _InterlockedExchange
#endif /* Win32 */
#endif /* USE_LOCKS */

/* Declarations for bit scanning on win32 */
#if defined(_MSC_VER) && _MSC_VER>=1300
#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
#ifdef __cplusplus
}
#endif /* __cplusplus */

#define BitScanForward _BitScanForward
#define BitScanReverse _BitScanReverse
#pragma intrinsic(_BitScanForward)
#pragma intrinsic(_BitScanReverse)
#endif /* BitScanForward */
#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */

#ifndef WIN32
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize ((size_t)4096U)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
#endif

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE         (sizeof(size_t))
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO         ((size_t)0)
#define SIZE_T_ONE          ((size_t)1)
#define SIZE_T_TWO          ((size_t)2)
#define SIZE_T_FOUR         ((size_t)4)
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)

/* The bit mask value corresponding to MALLOC_ALIGNMENT */
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)

/* True if address a has acceptable alignment */
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
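/*
  Worked example: with MALLOC_ALIGNMENT == 8, CHUNK_ALIGN_MASK == 7.
  For an address A == 0x100C:
    A & CHUNK_ALIGN_MASK           == 4   (misaligned)
    align_offset(A) == (8 - 4) & 7 == 4
  so A + align_offset(A) == 0x1010, which is_aligned accepts.
  Already-aligned addresses get an offset of 0.
*/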
/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   checks to fail so the compiler optimizer can delete code rather than
   using so many "#if"s.
*/

/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if HAVE_MMAP

#ifndef WIN32
#define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS        MAP_ANON
#endif /* MAP_ANON */
#ifdef MAP_ANONYMOUS
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
#define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
#else /* MAP_ANONYMOUS */
/*
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   is unlikely to be needed, but is supplied just in case.
*/
#define MMAP_FLAGS           (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
#endif /* MAP_ANONYMOUS */

#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)

#else /* WIN32 */

/* Win32 MMAP via VirtualAlloc */
static FORCEINLINE void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static FORCEINLINE void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static FORCEINLINE int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = (char*)ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}

#define MMAP_DEFAULT(s)             win32mmap(s)
#define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
#define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
#endif /* WIN32 */
#endif /* HAVE_MMAP */

#if HAVE_MREMAP
#ifndef WIN32
#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
#endif /* WIN32 */
#endif /* HAVE_MREMAP */


/**
 * Define CALL_MORECORE
 */
#if HAVE_MORECORE
    #ifdef MORECORE
        #define CALL_MORECORE(S)    MORECORE(S)
    #else  /* MORECORE */
        #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
    #endif /* MORECORE */
#else  /* HAVE_MORECORE */
    #define CALL_MORECORE(S)        MFAIL
#endif /* HAVE_MORECORE */

/**
 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
 */
#if HAVE_MMAP
    #define IS_MMAPPED_BIT          (SIZE_T_ONE)
    #define USE_MMAP_BIT            (SIZE_T_ONE)

    #ifdef MMAP
        #define CALL_MMAP(s)        MMAP(s)
    #else  /* MMAP */
        #define CALL_MMAP(s)        MMAP_DEFAULT(s)
    #endif /* MMAP */
    #ifdef MUNMAP
        #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
    #else  /* MUNMAP */
        #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
    #endif /* MUNMAP */
    #ifdef DIRECT_MMAP
        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
    #else  /* DIRECT_MMAP */
        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
    #endif /* DIRECT_MMAP */
#else  /* HAVE_MMAP */
    #define IS_MMAPPED_BIT          (SIZE_T_ZERO)
    #define USE_MMAP_BIT            (SIZE_T_ZERO)

    #define MMAP(s)                 MFAIL
    #define MUNMAP(a, s)            (-1)
    #define DIRECT_MMAP(s)          MFAIL
    #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
    #define CALL_MMAP(s)            MMAP(s)
    #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
#endif /* HAVE_MMAP */

/**
 * Define CALL_MREMAP
 */
#if HAVE_MMAP && HAVE_MREMAP
    #ifdef MREMAP
        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
    #else  /* MREMAP */
        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
    #endif /* MREMAP */
#else  /* HAVE_MMAP && HAVE_MREMAP */
    #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)
/* --------------------------- Lock preliminaries ------------------------ */

/*
  When locks are defined, there is one global lock, plus
  one per-mspace lock.

  The global lock ensures that mparams.magic and other unique
  mparams values are initialized only once.  It also protects
  sequences of calls to MORECORE.  In many cases sys_alloc requires
  two calls that should not be interleaved with calls by other
  threads.  This does not protect against direct calls to MORECORE
  by other threads not using this lock, so there is still code to
  cope as best we can with interference.

  Per-mspace locks surround calls to malloc, free, etc.  To enable use
  in layered extensions, per-mspace locks are reentrant.

  Because lock-protected regions generally have bounded times, it is
  OK to use the supplied simple spinlocks in the custom versions for
  x86.

  If USE_LOCKS is > 1, the definitions of lock routines here are
  bypassed, in which case you will need to define at least
  INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK
  (which is not used in this malloc, but commonly needed in
  extensions.)
*/

#if USE_LOCKS == 1

#if USE_SPIN_LOCKS
#ifndef WIN32

/* Custom pthread-style spin locks on x86 and x64 for gcc */
struct pthread_mlock_t {
  volatile unsigned int l;
  volatile unsigned int c;
  volatile pthread_t threadid;
};
#define MLOCK_T               struct pthread_mlock_t
#define CURRENT_THREAD        pthread_self()
#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
#define ACQUIRE_LOCK(sl)      pthread_acquire_lock(sl)
#define RELEASE_LOCK(sl)      pthread_release_lock(sl)
#define TRY_LOCK(sl)          pthread_try_lock(sl)
#define SPINS_PER_YIELD       63

static MLOCK_T malloc_global_mutex = { 0, 0, 0};

static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
  int spins = 0;
  volatile unsigned int* lp = &sl->l;
  for (;;) {
    if (*lp != 0) {
      if (sl->threadid == CURRENT_THREAD) {
        ++sl->c;
        return 0;
      }
    }
    else {
      /* place args to cmpxchgl in locals to evade oddities in some gccs */
      int cmp = 0;
      int val = 1;
      int ret;
      __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
                            : "=a" (ret)
                            : "r" (val), "m" (*(lp)), "0"(cmp)
                            : "memory", "cc");
      if (!ret) {
        assert(!sl->threadid);
        sl->c = 1;
        sl->threadid = CURRENT_THREAD;
        return 0;
      }
      if ((++spins & SPINS_PER_YIELD) == 0) {
#if defined (__SVR4) && defined (__sun) /* solaris */
        thr_yield();
#else
#if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
        sched_yield();
#else  /* no-op yield on unknown systems */
        ;
#endif /* __linux__ || __FreeBSD__ || __APPLE__ */
#endif /* solaris */
      }
    }
  }
}

static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
  assert(sl->l != 0);
  assert(sl->threadid == CURRENT_THREAD);
  if (--sl->c == 0) {
    sl->threadid = 0;
    volatile unsigned int* lp = &sl->l;
    int prev = 0;
    int ret;
    __asm__ __volatile__ ("lock; xchgl %0, %1"
                          : "=r" (ret)
                          : "m" (*(lp)), "0"(prev)
                          : "memory");
  }
}

static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
  volatile unsigned int* lp = &sl->l;
  if (*lp != 0) {
    if (sl->threadid == CURRENT_THREAD) {
      ++sl->c;
      return 1;
    }
  }
  else {
    int cmp = 0;
    int val = 1;
    int ret;
    __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
                          : "=a" (ret)
                          : "r" (val), "m" (*(lp)), "0"(cmp)
                          : "memory", "cc");
    if (!ret) {
      assert(!sl->threadid);
      sl->c = 1;
      sl->threadid = CURRENT_THREAD;
      return 1;
    }
  }
  return 0;
}


#else /* WIN32 */
/* Custom win32-style spin locks on x86 and x64 for MSC */
struct win32_mlock_t {
  volatile long l;
  volatile unsigned int c;
  volatile long threadid;
};

#define MLOCK_T               struct win32_mlock_t
#define CURRENT_THREAD        win32_getcurrentthreadid()
#define INITIAL_LOCK(sl)      (memset(sl, 0, sizeof(MLOCK_T)), 0)
#define ACQUIRE_LOCK(sl)      win32_acquire_lock(sl)
#define RELEASE_LOCK(sl)      win32_release_lock(sl)
#define TRY_LOCK(sl)          win32_try_lock(sl)
#define SPINS_PER_YIELD       63

static MLOCK_T malloc_global_mutex = { 0, 0, 0};

static FORCEINLINE long win32_getcurrentthreadid(void) {
#ifdef _MSC_VER
#if defined(_M_IX86)
  long *threadstruct = (long *)__readfsdword(0x18);
  long threadid = threadstruct[0x24 / sizeof(long)];
  return threadid;
#elif defined(_M_X64)
  /* todo */
  return GetCurrentThreadId();
#else
  return GetCurrentThreadId();
#endif
#else
  return GetCurrentThreadId();
#endif
}

static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
  int spins = 0;
  for (;;) {
    if (sl->l != 0) {
      if (sl->threadid == CURRENT_THREAD) {
        ++sl->c;
        return 0;
      }
    }
    else {
      if (!interlockedexchange(&sl->l, 1)) {
        assert(!sl->threadid);
        sl->threadid = CURRENT_THREAD;
        sl->c = 1;
        return 0;
      }
    }
    if ((++spins & SPINS_PER_YIELD) == 0)
      SleepEx(0, FALSE);
  }
}

static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
  assert(sl->threadid == CURRENT_THREAD);
  assert(sl->l != 0);
  if (--sl->c == 0) {
    sl->threadid = 0;
    interlockedexchange (&sl->l, 0);
  }
}

static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
  if (sl->l != 0) {
    if (sl->threadid == CURRENT_THREAD) {
      ++sl->c;
      return 1;
    }
  }
  else {
    if (!interlockedexchange(&sl->l, 1)){
      assert(!sl->threadid);
      sl->threadid = CURRENT_THREAD;
      sl->c = 1;
      return 1;
    }
  }
  return 0;
}

#endif /* WIN32 */
#else /* USE_SPIN_LOCKS */

#ifndef WIN32
/* pthreads-based locks */

#define MLOCK_T               pthread_mutex_t
#define CURRENT_THREAD        pthread_self()
#define INITIAL_LOCK(sl)      pthread_init_lock(sl)
#define ACQUIRE_LOCK(sl)      pthread_mutex_lock(sl)
#define RELEASE_LOCK(sl)      pthread_mutex_unlock(sl)
#define TRY_LOCK(sl)          (!pthread_mutex_trylock(sl))

static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Cope with old-style linux recursive lock initialization by adding */
/* skipped internal declaration from pthread.h */
#ifdef linux
#ifndef PTHREAD_MUTEX_RECURSIVE
extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
                                              int __kind));
#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
#endif
#endif

static int pthread_init_lock (MLOCK_T *sl) {
  pthread_mutexattr_t attr;
  if (pthread_mutexattr_init(&attr)) return 1;
  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
  if (pthread_mutex_init(sl, &attr)) return 1;
  if (pthread_mutexattr_destroy(&attr)) return 1;
  return 0;
}

#else /* WIN32 */
/* Win32 critical sections */
#define MLOCK_T               CRITICAL_SECTION
#define CURRENT_THREAD        GetCurrentThreadId()
#define INITIAL_LOCK(s)       (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000))
#define ACQUIRE_LOCK(s)       (EnterCriticalSection(s), 0)
#define RELEASE_LOCK(s)       LeaveCriticalSection(s)
#define TRY_LOCK(s)           TryEnterCriticalSection(s)
#define NEED_GLOBAL_LOCK_INIT

static MLOCK_T malloc_global_mutex;
static volatile long malloc_global_mutex_status;

/* Use spin loop to initialize global lock */
static void init_malloc_global_mutex() {
  for (;;) {
    long stat = malloc_global_mutex_status;
    if (stat > 0)
      return;
    /* transition to < 0 while initializing, then to > 0 */
    if (stat == 0 &&
        interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
      InitializeCriticalSection(&malloc_global_mutex);
      interlockedexchange(&malloc_global_mutex_status, 1);
      return;
    }
    SleepEx(0, FALSE);
  }
}

#endif /* WIN32 */
#endif /* USE_SPIN_LOCKS */
#endif /* USE_LOCKS == 1 */

/* -----------------------  User-defined locks ------------------------ */

#if USE_LOCKS > 1
/* Define your own lock implementation here */
/* #define INITIAL_LOCK(sl)  ... */
/* #define ACQUIRE_LOCK(sl)  ... */
/* #define RELEASE_LOCK(sl)  ... */
/* #define TRY_LOCK(sl) ... */
/* static MLOCK_T malloc_global_mutex = ... */
#endif /* USE_LOCKS > 1 */
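/*
  For illustration only -- the shape a USE_LOCKS > 1 replacement
  typically takes, borrowing the pthreads-based definitions used
  above when USE_SPIN_LOCKS is off.  Note that a real replacement
  must make the per-mspace lock reentrant (e.g. by initializing with
  PTHREAD_MUTEX_RECURSIVE, as pthread_init_lock does):

    #define MLOCK_T            pthread_mutex_t
    #define INITIAL_LOCK(sl)   ... init with a recursive attribute ...
    #define ACQUIRE_LOCK(sl)   pthread_mutex_lock(sl)
    #define RELEASE_LOCK(sl)   pthread_mutex_unlock(sl)
    #define TRY_LOCK(sl)       (!pthread_mutex_trylock(sl))
    static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
*/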
They are "exploded" to1992 show that the state of a chunk can be thought of as extending from1993 the high 31 bits of the head field of its header through the1994 prev_foot and PINUSE_BIT bit of the following chunk header.19951996 A chunk that's in use looks like:19971998 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+1999 | Size of previous chunk (if P = 0) |2000 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2001 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|2002 | Size of this chunk 1| +-+2003 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2004 | |2005 +- -+2006 | |2007 +- -+2008 | :2009 +- size - sizeof(size_t) available payload bytes -+2010 : |2011 chunk-> +- -+2012 | |2013 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2014 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|2015 | Size of next chunk (may or may not be in use) | +-+2016 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+20172018 And if it's free, it looks like this:20192020 chunk-> +- -+2021 | User payload (must be in use, or we would have merged!) |2022 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2023 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|2024 | Size of this chunk 0| +-+2025 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2026 | Next pointer |2027 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2028 | Prev pointer |2029 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2030 | :2031 +- size - sizeof(struct chunk) unused bytes -+2032 : |2033 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2034 | Size of this chunk |2035 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2036 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|2037 | Size of next chunk (must be in use, or we would have merged)| +-+2038 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2039 | :2040 +- User payload -+2041 : |2042 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2043 |0|2044 +-+2045 Note that since we always merge adjacent free chunks, the chunks2046 adjacent to a free chunk must be in use.20472048 Given a pointer to a chunk (which can be derived trivially from the2049 payload pointer) we can, in O(1) time, find out whether the adjacent2050 chunks are free, and if so, unlink them from the lists that they2051 are on and merge them with the current chunk.20522053 Chunks always begin on even word boundaries, so the mem portion2054 (which is returned to the user) is also on an even word boundary, and2055 thus at least double-word aligned.20562057 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the2058 chunk size (which is always a multiple of two words), is an in-use2059 bit for the *previous* chunk. If that bit is *clear*, then the2060 word before the current chunk size contains the previous chunk2061 size, and can be used to find the front of the previous chunk.2062 The very first chunk allocated always has this bit set, preventing2063 access to non-existent (or non-owned) memory. 
  If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size, redundantly records whether the current chunk is
  inuse.  This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena.  This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk.  This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory).  It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it.  However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have the lowest-order bit
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
        PINUSE_BIT in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment.  Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for the sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
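/*
  Worked example (assuming 4-byte size_t, MALLOC_ALIGNMENT == 8, and
  FOOTERS == 0, so CHUNK_OVERHEAD == SIZE_T_SIZE == 4):
    MIN_CHUNK_SIZE   == 16
    MIN_REQUEST      == 16 - 4 - 1 == 11
    request2size(8)  == 16   (below MIN_REQUEST; rounded up to minimum)
    request2size(13) == (13 + 4 + 7) & ~7 == 24
  So a 13-byte request occupies a 24-byte chunk: one 4-byte head word
  plus 20 usable payload bytes.
*/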
/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when previous
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
  use.  If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
  mmapped region to the base of the chunk.

  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define FLAG4_BIT           (SIZE_T_FOUR)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define chunksize(p)        ((p)->head & ~(FLAG_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

#define is_mmapped(p)\
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */

/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small" chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of previous chunk                                        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' | Size of chunk, in bytes                                     |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Forward pointer to next chunk in list                         |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Back pointer to previous chunk in list                        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Unused space (may be 0 bytes long)                            .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' | Size of chunk, in bytes                                       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of previous chunk                                        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' | Size of chunk, in bytes                                     |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Forward pointer to next chunk of same size                    |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Back pointer to previous chunk of same size                   |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Pointer to left child (child[0])                              |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Pointer to right child (child[1])                             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Pointer to parent                                             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | bin index of this chunk                                       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Unused space                                                  .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' | Size of chunk, in bytes                                       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)
  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null.  The smallest chunk in the tree will be
  somewhere along that path.  (A sketch of this walk appears after
  the leftmost_child macro below.)

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins.  Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes).  The typical case
  is of course much better.
*/

struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

typedef struct malloc_tree_chunk  tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
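/*
  Sketch of the leftmost walk described above (illustrative only; the
  real traversals in this file fuse the walk with bookkeeping):

    static tchunkptr least_chunk_on_path(tchunkptr t) {
      tchunkptr least = t;
      while ((t = leftmost_child(t)) != 0) {
        if (chunksize(t) < chunksize(least))
          least = t;   the smallest chunk lies somewhere along this path
      }
      return least;
    }
*/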
/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space.  Segments also include flags holding properties of
  the space.  Large chunks that are directly allocated by mmap are not
  included in this list.  They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with.  (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory.  Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment.  However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment.  Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If IS_MMAPPED_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;

/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment.  Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize.  The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.
    These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes.  Each bin contains
    chunks of all the same size, spaced 8 bytes apart.  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes.  There are 2 equally spaced treebins for each power of two
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16.  The last bin holds
    anything larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on a 64-bit system.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/).  Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP.  Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE.

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Trim support
    Fields holding the amount of unused topmost memory that should
    trigger trimming, and a counter to force periodic scanning to
    release unused non-topmost segments.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.

  Extension support
    A void* pointer and a size_t field that can be used to help implement
    extensions to this malloc.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     release_checks;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
  void*      extp;      /* Unused but available for extensions */
  size_t     exts;
};

typedef struct malloc_state*    mstate;
/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt.  There is a single instance, mparams,
  initialized in init_mparams.  Note that the non-zeroness of "magic"
  also serves as an initialization flag.
*/

struct malloc_params {
  volatile size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* Ensure mparams initialized */
#define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams()))

#if !ONLY_MSPACES

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)

#endif /* !ONLY_MSPACES */

#define is_initialized(M)  ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity - SIZE_T_ONE))\
   & ~(mparams.granularity - SIZE_T_ONE))


/* For mmap, use granularity alignment on windows, else page-align */
#ifdef WIN32
#define mmap_align(S) granularity_align(S)
#else
#define mmap_align(S) page_align(S)
#endif

/* For sys_alloc, enough padding to ensure can malloc request on success */
#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)

/*  True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s)  ((s) > (M)->trim_check)
#else  /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s)  (0)
#endif /* MORECORE_CANNOT_TRIM */
/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)


/* -------------------------------  Hooks -------------------------------- */

/*
  PREACTION should be defined to return 0 on success, and nonzero on
  failure.  If you are not using locking, you can redefine these to do
  anything you like.
*/

#if USE_LOCKS

#define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */

/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs.  The argument p is an address that might have triggered the
  fault.  It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */
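/*
  Shape of every public entry point using these hooks (illustrative
  only, not the actual function bodies):

    void* dlmalloc(size_t bytes) {
      void* mem = 0;
      if (!PREACTION(gm)) {     acquires lock when use_lock(gm)
        ... find, split, or extend to obtain a chunk, setting mem ...
        POSTACTION(gm);         releases lock
      }
      return mem;
    }

  Inconsistencies detected inside such a region invoke
  CORRUPTION_ERROR_ACTION(m) or USAGE_ERROR_ACTION(m, p) as defined
  above.
*/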
/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
#define check_malloc_state(M)       do_check_malloc_state(M)

static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */

/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))
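/*
  Example: with SMALLBIN_SHIFT == 3, a 24-byte chunk has
  small_index(24) == 3 and small_index2size(3) == 24.  is_small holds
  for sizes below NSMALLBINS << SMALLBIN_SHIFT == 256 bytes, matching
  MIN_LARGE_SIZE above.
*/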
/* assign tree index for size S to variable I. Use x86 asm if possible  */
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
#define compute_tree_index(S, I)\
{\
  unsigned int X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "rm" (X));\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#elif defined (__INTEL_COMPILER)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K = _bit_scan_reverse (X); \
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#elif defined(_MSC_VER) && _MSC_VER>=1300
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    _BitScanReverse((DWORD *) &K, X);\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */

/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))


/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))
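/*
  Worked example: suppose smallmap == 0x14 (bins 2 and 4 non-empty)
  and a request maps to bin index 3:
    least_bit(0x14)       == 0x04        (bin 2 is the smallest non-empty)
    left_bits(idx2bit(3)) == 0xFFFFFFF0  (all bins with index > 3)
    0x14 & 0xFFFFFFF0     == 0x10        -> compute_bit2idx yields index 4
  so bin 4 supplies the next-larger chunk without scanning bins one
  at a time.
*/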
/* index corresponding to given bit. Use x86 asm if possible */

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "rm" (X));\
  I = (bindex_t)J;\
}

#elif defined (__INTEL_COMPILER)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  J = _bit_scan_forward (X); \
  I = (bindex_t)J;\
}

#elif defined(_MSC_VER) && _MSC_VER>=1300
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  _BitScanForward((DWORD *) &J, X);\
  I = (bindex_t)J;\
}

#elif USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* GNUC */


/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks).  In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offsetted from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
  that the mstate controlling malloc/free is intact.  This is a
  streamlined version of the approach described by William Robertson
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  of an inuse chunk holds the xor of its mstate and a random seed,
  which is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
  computed by any code successfully malloc'ing any chunk, so does not
  itself provide protection against code that has already broken
  security through some other means.  Unlike Robertson et al, we
  always dynamically check addresses of all offset chunks (previous,
  next, etc).  This turns out to be cheaper than relying on hashes.
*/
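/*
  Typical shape of these checks in the deallocation paths
  (illustrative only):

    if (RTCHECK(ok_address(m, p) && ok_cinuse(p))) {
      ... proceed to free or realloc p ...
    }
    else {
      USAGE_ERROR_ACTION(m, p);
    }
*/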
#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has its cinuse bit on */
#define ok_cinuse(p)     cinuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_cinuse(p)     (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */


/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */

/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */
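/*
  Illustration of the footer scheme: for an inuse chunk p of size s,
  mark_inuse_foot stores ((size_t)M ^ mparams.magic) just past the
  chunk, and get_mstate_for recovers M by xor'ing with magic again.
  With FOOTERS on, free(mem) can therefore locate and verify the
  originating space, roughly:

    mchunkptr p  = mem2chunk(mem);
    mstate    fm = get_mstate_for(p);
    if (!ok_magic(fm))
      USAGE_ERROR_ACTION(fm, p);
*/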
/* ---------------------------- setting mparams -------------------------- */

/* Initialize mparams */
static int init_mparams(void) {
#ifdef NEED_GLOBAL_LOCK_INIT
  if (malloc_global_mutex_status <= 0)
    init_malloc_global_mutex();
#endif

  ACQUIRE_MALLOC_GLOBAL_LOCK();
  if (mparams.magic == 0) {
    size_t magic;
    size_t psize;
    size_t gsize;

#ifndef WIN32
    psize = malloc_getpagesize;
    gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      psize = system_info.dwPageSize;
      gsize = ((DEFAULT_GRANULARITY != 0)?
               DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
        ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
        ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
        ((psize            & (psize-SIZE_T_ONE))            != 0))
      ABORT;

    mparams.granularity = gsize;
    mparams.page_size = psize;
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
#else  /* MORECORE_CONTIGUOUS */
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
#endif /* MORECORE_CONTIGUOUS */

#if !ONLY_MSPACES
    /* Set up lock for main malloc area */
    gm->mflags = mparams.default_mflags;
    INITIAL_LOCK(&gm->mutex);
#endif

#if (FOOTERS && !INSECURE)
    {
#if USE_DEV_RANDOM
      int fd;
      unsigned char buf[sizeof(size_t)];
      /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
        magic = *((size_t *) buf);
        close(fd);
      }
      else
#endif /* USE_DEV_RANDOM */
#ifdef WIN32
        magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
#else
        magic = (size_t)(time(0) ^ (size_t)0x55555555U);
#endif
      magic |= (size_t)8U;    /* ensure nonzero */
      magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
    }
#else /* (FOOTERS && !INSECURE) */
    magic = (size_t)0x58585858U;
#endif /* (FOOTERS && !INSECURE) */

    mparams.magic = magic;
  }

  RELEASE_MALLOC_GLOBAL_LOCK();
  return 1;
}

/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val = (value == -1)? MAX_SIZE_T : (size_t)value;
  ensure_initialization();
  switch (param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}
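/*
  Illustrative only: these parameters can be tuned at runtime through
  the public mallopt entry point (dlmallopt, defined later in this
  file).  Passing -1 maps to MAX_SIZE_T, which for M_TRIM_THRESHOLD
  effectively disables trimming.  A minimal sketch:
*/
#if 0
static void example_tuning(void) {
  dlmallopt(M_MMAP_THRESHOLD, 256*1024); /* mmap requests >= 256kb */
  dlmallopt(M_TRIM_THRESHOLD, -1);       /* never give memory back */
  /* granularity must be a power of 2 no smaller than the page size */
  dlmallopt(M_GRANULARITY, 128*1024);
}
#endif /* 0 (example) */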
#if DEBUG
/* ------------------------- Debugging Support --------------------------- */

/* Check properties of any chunk, whether free, inuse, mmapped etc  */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!pinuse(chunk_plus_offset(p, sz)));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t  sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}

/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = chunksize(p);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert(!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert(next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}
/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert(u->parent->child[0] == u ||
             u->parent->child[1] == u ||
             *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/*  Check all the chunks in a treebin.  */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/* Check all the chunks in a smallbin. */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}
/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}

/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    /*assert(m->topsize == chunksize(m->top)); redundant */
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */

/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0,0,0,0,0,0,0,0,0,0 };
  ensure_initialization();
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */
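/*
  Illustrative only: callers can read the totals computed above via
  the public dlmallinfo() wrapper defined later in this file.  A
  minimal sketch (field widths depend on MALLINFO_FIELD_TYPE, so the
  values are cast for printing, as internal_malloc_stats does below):
*/
#if 0
static void example_print_mallinfo(void) {
  struct mallinfo mi = dlmallinfo();
  fprintf(stderr, "non-mmapped arena: %lu\n", (unsigned long)mi.arena);
  fprintf(stderr, "free chunks:       %lu\n", (unsigned long)mi.ordblks);
  fprintf(stderr, "total free space:  %lu\n", (unsigned long)mi.fordblks);
  fprintf(stderr, "releasable (top):  %lu\n", (unsigned long)mi.keepcost);
}
#endif /* 0 (example) */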
static void internal_malloc_stats(mstate m) {
  ensure_initialization();
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}

/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}

/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (F == B)\
    clear_smallmap(M, I);\
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F)\
    clear_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, F))) {\
    B->fd = F;\
    F->bk = B;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    assert(is_small(DVS));\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}
/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}

/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendants
     correspond properly to bit masks.  We use the rightmost descendant
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links) relink
     x's parent and children to x's replacement (or null if none).
*/

#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F))) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0)) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}
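/*
  Illustrative only: a small sketch of unlink step 2 above.  When X is
  the last node of its size and is not a leaf, its rightmost descendant
  R (a leaf) is detached from its old slot and promoted into X's place:

        XP                        XP
        |                         |
        X           ==>           R
       / \                       / \
      C0  C1                   C0   C1 (with R removed
           \                         from its old slot)
            ...
             R   <- rightmost descendant, has no children

  Because R is a leaf, promoting it cannot leave behind a child slot
  whose left/right direction disagrees with the size-bit path used by
  insert_large_chunk and bin_find to locate nodes.
*/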
/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */

/* -----------------------  Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).  There is also enough space
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
  the PINUSE bit so frees can be checked.
*/

/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset | IS_MMAPPED_BIT;
      (p)->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}

/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
  size_t oldsize = chunksize(oldp);
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, 1);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}
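/*
  Illustrative only: the worst-case space cost of a directly mmapped
  chunk follows from the arithmetic in mmap_alloc above.  A request of
  nb bytes is padded and rounded up to a page-aligned mmsize, so the
  slop beyond the request is bounded by one page plus the fixed
  padding.  A minimal sketch:
*/
#if 0
static void example_mmap_overhead(size_t nb) {
  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  /* the bytes obtained beyond the request never exceed roughly one
     page plus the six-word padding reserved for the header, the
     alignment offset, and the trailing fenceposts */
  assert(mmsize - nb <
         mparams.page_size + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
}
#endif /* 0 (example) */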
/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}

/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}

#if PROCEED_ON_ERROR

/* default corruption action */
static void reset_on_error(mstate m) {
  int i;
  ++malloc_corruption_error_count;
  /* Reinitialize fields to forget about all memory */
  m->smallmap = m->treemap = 0;
  m->dvsize = m->topsize = 0;
  m->seg.base = 0;
  m->seg.size = 0;
  m->seg.next = 0;
  m->top = m->dv = 0;
  for (i = 0; i < NTREEBINS; ++i)
    *treebin_at(m, i) = 0;
  init_bins(m);
}
#endif /* PROCEED_ON_ERROR */

/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!cinuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}
/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;

  ensure_initialization();

  /* Directly map large chunks */
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)

    In all cases, we need to request enough bytes from system to ensure
    we can malloc nb bytes upon success, so pad with enough space for
    top_foot, plus alignment-pad to make sure we don't lose bytes if
    not on boundary, and round this up to a granularity unit.
  */

  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    size_t asize = 0;
    ACQUIRE_MALLOC_GLOBAL_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        asize = granularity_align(nb + SYS_ALLOC_PADDING);
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          asize += (page_align((size_t)base) - (size_t)base);
        /* Can't call MORECORE if size is negative when treated as signed */
        if (asize < HALF_MAX_SIZE_T &&
            (br = (char*)(CALL_MORECORE(asize))) == base) {
          tbase = base;
          tsize = asize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
      /* Use mem here only if it did continuously extend old space */
      if (asize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
        tbase = br;
        tsize = asize;
      }
    }
    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
        if (asize < HALF_MAX_SIZE_T &&
            asize < nb + SYS_ALLOC_PADDING) {
          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              asize += esize;
            else {            /* Can't use; try to release */
              (void) CALL_MORECORE(-asize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = asize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MALLOC_GLOBAL_LOCK();
  }

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
    if (rsize > nb) { /* Fail if wraps around zero */
      char* mp = (char*)(CALL_MMAP(rsize));
      if (mp != CMFAIL) {
        tbase = mp;
        tsize = rsize;
        mmap_flag = IS_MMAPPED_BIT;
      }
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MALLOC_GLOBAL_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MALLOC_GLOBAL_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      m->seg.base = m->least_addr = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      m->release_checks = MAX_RELEASE_CHECK_RATE;
      init_bins(m);
#if !ONLY_MSPACES
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else
#endif
      {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      /* Only consider most recent segment if traversal suppressed */
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

  MALLOC_FAILURE_ACTION;
  return 0;
}

/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  int nsegs = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    ++nsegs;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
      break;
    pred = sp;
    sp = next;
  }
  /* Reset check counter */
  m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
                       nsegs : MAX_RELEASE_CHECK_RATE);
  return released;
}
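/*
  Illustrative only: trimming is also available on demand through the
  public dlmalloc_trim() wrapper (defined later in this file), which
  forwards to sys_trim below.  A pad of zero asks for as much of the
  top chunk to be returned to the system as granularity allows.  A
  minimal sketch:
*/
#if 0
static void example_trim(void) {
  void* p = dlmalloc(1024*1024);
  dlfree(p);         /* space is coalesced into the top chunk */
  dlmalloc_trim(0);  /* then released back to the system here */
}
#endif /* 0 (example) */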
static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  ensure_initialization();
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MALLOC_GLOBAL_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MALLOC_GLOBAL_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0 && m->topsize > m->trim_check)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}


/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);
  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /* If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}

/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);
  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}

/* --------------------------- realloc support --------------------------- */

static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
  if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }
  if (!PREACTION(m)) {
    mchunkptr oldp = mem2chunk(oldmem);
    size_t oldsize = chunksize(oldp);
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
    mchunkptr newp = 0;
    void* extra = 0;

    /* Try to either shrink or extend into top. Else malloc-copy-free */

    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
                ok_next(oldp, next) && ok_pinuse(next))) {
      size_t nb = request2size(bytes);
      if (is_mmapped(oldp))
        newp = mmap_resize(m, oldp, nb);
      else if (oldsize >= nb) { /* already big enough */
        size_t rsize = oldsize - nb;
        newp = oldp;
        if (rsize >= MIN_CHUNK_SIZE) {
          mchunkptr remainder = chunk_plus_offset(newp, nb);
          set_inuse(m, newp, nb);
          set_inuse(m, remainder, rsize);
          extra = chunk2mem(remainder);
        }
      }
      else if (next == m->top && oldsize + m->topsize > nb) {
        /* Expand into top */
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
        set_inuse(m, oldp, nb);
        newtop->head = newtopsize |PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = oldp;
      }
    }
    else {
      USAGE_ERROR_ACTION(m, oldmem);
      POSTACTION(m);
      return 0;
    }

    POSTACTION(m);

    if (newp != 0) {
      if (extra != 0) {
        internal_free(m, extra);
      }
      check_inuse_chunk(m, newp);
      return chunk2mem(newp);
    }
    else {
      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
        internal_free(m, oldmem);
      }
      return newmem;
    }
  }
  return 0;
}
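/*
  Illustrative only: as the malloc-copy-free fallback above shows, when
  reallocation fails the original block is left intact, so callers
  should not overwrite their only pointer with the result.  A minimal
  sketch of the usual idiom against the dlrealloc wrapper:
*/
#if 0
static void* example_grow(void* p, size_t newsize) {
  void* q = dlrealloc(p, newsize);
  if (q == 0) {
    /* p is still valid and unchanged; caller decides whether to free */
    return p;
  }
  return q;
}
#endif /* 0 (example) */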
/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      if (PREACTION(m)) return 0;
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
                                                       alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = (newsize|CINUSE_BIT);
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert(chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}
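/*
  Illustrative only: the public dlmemalign wrapper below forwards here.
  Alignments at or below MALLOC_ALIGNMENT cost nothing extra; larger
  ones may split off and free a leader and/or trailer as above.  A
  minimal sketch:
*/
#if 0
static void example_memalign(void) {
  /* 4096 is a power of two, so it is used as-is */
  void* p = dlmemalign(4096, 1000);
  assert(((size_t)p % 4096) == 0);
  dlfree(p);
}
#endif /* 0 (example) */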
/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  ensure_initialization();
  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
    Allocate the aggregate chunk.  First disable direct-mmapping so
    malloc won't use it, since we would not be able to later
    free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}
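/*
  Illustrative only: ialloc is the engine behind dlindependent_calloc
  and dlindependent_comalloc (defined below).  A minimal sketch of the
  comalloc form, which carves one aggregate chunk into independently
  freeable pieces of the requested sizes:
*/
#if 0
static void example_comalloc(void) {
  size_t sizes[3] = { 16, 100, 24 };
  void** elems = dlindependent_comalloc(3, sizes, 0);
  if (elems != 0) {
    /* each element may be freed independently, in any order */
    dlfree(elems[1]);
    dlfree(elems[0]);
    dlfree(elems[2]);
    dlfree(elems);  /* the returned pointer array itself was malloced */
  }
}
#endif /* 0 (example) */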
/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */

#if USE_LOCKS
  ensure_initialization(); /* initialize in sys_alloc if not using locks */
#endif

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}
Intermixed4787 with special cases for top, dv, mmapped chunks, and usage errors.4788 */47894790if(mem !=0) {4791 mchunkptr p =mem2chunk(mem);4792#if FOOTERS4793 mstate fm =get_mstate_for(p);4794if(!ok_magic(fm)) {4795USAGE_ERROR_ACTION(fm, p);4796return;4797}4798#else/* FOOTERS */4799#define fm gm4800#endif/* FOOTERS */4801if(!PREACTION(fm)) {4802check_inuse_chunk(fm, p);4803if(RTCHECK(ok_address(fm, p) &&ok_cinuse(p))) {4804size_t psize =chunksize(p);4805 mchunkptr next =chunk_plus_offset(p, psize);4806if(!pinuse(p)) {4807size_t prevsize = p->prev_foot;4808if((prevsize & IS_MMAPPED_BIT) !=0) {4809 prevsize &= ~IS_MMAPPED_BIT;4810 psize += prevsize + MMAP_FOOT_PAD;4811if(CALL_MUNMAP((char*)p - prevsize, psize) ==0)4812 fm->footprint -= psize;4813goto postaction;4814}4815else{4816 mchunkptr prev =chunk_minus_offset(p, prevsize);4817 psize += prevsize;4818 p = prev;4819if(RTCHECK(ok_address(fm, prev))) {/* consolidate backward */4820if(p != fm->dv) {4821unlink_chunk(fm, p, prevsize);4822}4823else if((next->head & INUSE_BITS) == INUSE_BITS) {4824 fm->dvsize = psize;4825set_free_with_pinuse(p, psize, next);4826goto postaction;4827}4828}4829else4830goto erroraction;4831}4832}48334834if(RTCHECK(ok_next(p, next) &&ok_pinuse(next))) {4835if(!cinuse(next)) {/* consolidate forward */4836if(next == fm->top) {4837size_t tsize = fm->topsize += psize;4838 fm->top = p;4839 p->head = tsize | PINUSE_BIT;4840if(p == fm->dv) {4841 fm->dv =0;4842 fm->dvsize =0;4843}4844if(should_trim(fm, tsize))4845sys_trim(fm,0);4846goto postaction;4847}4848else if(next == fm->dv) {4849size_t dsize = fm->dvsize += psize;4850 fm->dv = p;4851set_size_and_pinuse_of_free_chunk(p, dsize);4852goto postaction;4853}4854else{4855size_t nsize =chunksize(next);4856 psize += nsize;4857unlink_chunk(fm, next, nsize);4858set_size_and_pinuse_of_free_chunk(p, psize);4859if(p == fm->dv) {4860 fm->dvsize = psize;4861goto postaction;4862}4863}4864}4865else4866set_free_with_pinuse(p, psize, next);48674868if(is_small(psize)) {4869insert_small_chunk(fm, p, psize);4870check_free_chunk(fm, p);4871}4872else{4873 tchunkptr tp = (tchunkptr)p;4874insert_large_chunk(fm, tp, psize);4875check_free_chunk(fm, p);4876if(--fm->release_checks ==0)4877release_unused_segments(fm);4878}4879goto postaction;4880}4881}4882 erroraction:4883USAGE_ERROR_ACTION(fm, p);4884 postaction:4885POSTACTION(fm);4886}4887}4888#if !FOOTERS4889#undef fm4890#endif/* FOOTERS */4891}48924893void*dlcalloc(size_t n_elements,size_t elem_size) {4894void* mem;4895size_t req =0;4896if(n_elements !=0) {4897 req = n_elements * elem_size;4898if(((n_elements | elem_size) & ~(size_t)0xffff) &&4899(req / n_elements != elem_size))4900 req = MAX_SIZE_T;/* force downstream failure on overflow */4901}4902 mem =dlmalloc(req);4903if(mem !=0&&calloc_must_clear(mem2chunk(mem)))4904memset(mem,0, req);4905return mem;4906}49074908void*dlrealloc(void* oldmem,size_t bytes) {4909if(oldmem ==0)4910returndlmalloc(bytes);4911#ifdef REALLOC_ZERO_BYTES_FREES4912if(bytes ==0) {4913dlfree(oldmem);4914return0;4915}4916#endif/* REALLOC_ZERO_BYTES_FREES */4917else{4918#if ! 
void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz,
                    (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

int dlmalloc_trim(size_t pad) {
  int result = 0;
  ensure_initialization();
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->release_checks = MAX_RELEASE_CHECK_RATE;
  m->mflags = mparams.default_mflags;
  m->extp = 0;
  m->exts = 0;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}
int dlmalloc_trim(size_t pad) {
  int result = 0;
  ensure_initialization();
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->release_checks = MAX_RELEASE_CHECK_RATE;
  m->mflags = mparams.default_mflags;
  m->extp = 0;
  m->exts = 0;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = IS_MMAPPED_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}

int mspace_mmap_large_chunks(mspace msp, int enable) {
  int ret = 0;
  mstate ms = (mstate)msp;
  if (!PREACTION(ms)) {
    if (use_mmap(ms))
      ret = 1;
    if (enable)
      enable_mmap(ms);
    else
      disable_mmap(ms);
    POSTACTION(ms);
  }
  return ret;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return freed;
}

/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/
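/*
  An illustrative sketch of typical mspace usage, assuming MSPACES
  is enabled (error handling abbreviated; the variable names are
  hypothetical):

    mspace arena = create_mspace(0, 0);  // default capacity, no locking
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);
      mspace_free(arena, p);
      destroy_mspace(arena);  // releases all memory held by the arena
    }
*/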
void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}
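/*
  A summary of the consolidation performed by dlfree and mspace_free
  above. Given a chunk p being freed:
  1. If p was allocated directly by mmap (signaled by IS_MMAPPED_BIT
     in prev_foot), the whole mapping is returned to the system via
     CALL_MUNMAP.
  2. Otherwise, if the previous chunk is free, p is merged backward
     into it, unlinking the previous chunk from its bin unless it is
     the designated victim (dv).
  3. If the following chunk is free, p is merged forward into top,
     into the dv, or into an ordinary free neighbor that is unlinked
     from its bin.
  4. The resulting chunk is placed in a smallbin or tree bin;
     placing a large chunk occasionally triggers
     release_unused_segments.
*/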
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms, ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return result;
}
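/*
  A usage note (illustrative, not from the original distribution):
  after freeing a large amount of memory, an application can ask an
  mspace to return unused space at the top of its heap to the
  system, retaining pad bytes for future allocations:

    mspace_trim(arena, 0);        // release as much as possible
    mspace_trim(arena, 1 << 16);  // but keep ~64K ready for reuse
*/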
void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return result;
}

size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return result;
}

#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

size_t mspace_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */
/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by
      defining MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
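/*
  A second, minimal sketch of a custom MORECORE (not part of the
  original distribution): one backed by a fixed static arena. All
  names here (static_morecore, static_arena, arena_top) are
  hypothetical. It follows the rules above: a zero argument returns
  one past the end of previously supplied memory, positive requests
  extend the arena contiguously, and requests that cannot be
  satisfied return MFAIL. Defining MORECORE_CANNOT_TRIM suppresses
  the negative-argument (shrink) calls it does not support.

    #define MORECORE static_morecore
    #define MORECORE_CANNOT_TRIM

    static char static_arena[1 << 20];  // fixed 1MB backing store
    static size_t arena_top = 0;        // bytes handed out so far

    void* static_morecore(int size)
    {
      void* ptr;
      if (size == 0)                    // report the current break
        return static_arena + arena_top;
      if (size < 0 ||                   // no shrink support
          (size_t)size > sizeof(static_arena) - arena_top)
        return (void*)MFAIL;
      ptr = static_arena + arena_top;
      arena_top += (size_t)size;
      return ptr;
    }
*/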
/* -----------------------------------------------------------------------
History:
    V2.8.4 (not yet released)
      * Add mspace_mmap_large_chunks; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc, courtesy of Wolfram Gloger.
      * Reentrant spin locks, courtesy of Earl Chew and others
      * Win32 improvements, courtesy of Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/