1/* 2 This is a version (aka dlmalloc) of malloc/free/realloc written by 3 Doug Lea and released to the public domain, as explained at 4 http://creativecommons.org/licenses/publicdomain. Send questions, 5 comments, complaints, performance data, etc to dl@cs.oswego.edu 6 7* Version pre-2.8.4 Mon Nov 27 11:22:37 2006 (dl at gee) 8 9 Note: There may be an updated version of this malloc obtainable at 10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c 11 Check before installing! 12 13* Quickstart 14 15 This library is all in one file to simplify the most common usage: 16 ftp it, compile it (-O3), and link it into another program. All of 17 the compile-time options default to reasonable values for use on 18 most platforms. You might later want to step through various 19 compile-time and dynamic tuning options. 20 21 For convenience, an include file for code using this malloc is at: 22 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h 23 You don't really need this .h file unless you call functions not 24 defined in your system include files. The .h file contains only the 25 excerpts from this file needed for using this malloc on ANSI C/C++ 26 systems, so long as you haven't changed compile-time options about 27 naming and tuning parameters. If you do, then you can create your 28 own malloc.h that does include all settings by cutting at the point 29 indicated below. Note that you may already by default be using a C 30 library containing a malloc that is based on some version of this 31 malloc (for example in linux). You might still want to use the one 32 in this file to customize settings or to avoid overheads associated 33 with library versions. 34 35* Vital statistics: 36 37 Supported pointer/size_t representation: 4 or 8 bytes 38 size_t MUST be an unsigned type of the same width as 39 pointers. (If you are using an ancient system that declares 40 size_t as a signed type, or need it to be a different width 41 than pointers, you can use a previous release of this malloc 42 (e.g. 2.7.2) supporting these.) 43 44 Alignment: 8 bytes (default) 45 This suffices for nearly all current machines and C compilers. 46 However, you can define MALLOC_ALIGNMENT to be wider than this 47 if necessary (up to 128bytes), at the expense of using more space. 48 49 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes) 50 8 or 16 bytes (if 8byte sizes) 51 Each malloced chunk has a hidden word of overhead holding size 52 and status information, and additional cross-check word 53 if FOOTERS is defined. 54 55 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead) 56 8-byte ptrs: 32 bytes (including overhead) 57 58 Even a request for zero bytes (i.e., malloc(0)) returns a 59 pointer to something of the minimum allocatable size. 60 The maximum overhead wastage (i.e., number of extra bytes 61 allocated than were requested in malloc) is less than or equal 62 to the minimum size, except for requests >= mmap_threshold that 63 are serviced via mmap(), where the worst case wastage is about 64 32 bytes plus the remainder from a system page (the minimal 65 mmap unit); typically 4096 or 8192 bytes. 66 67 Security: static-safe; optionally more or less 68 The "security" of malloc refers to the ability of malicious 69 code to accentuate the effects of errors (for example, freeing 70 space that is not currently malloc'ed or overwriting past the 71 ends of chunks) in code that calls malloc. This malloc 72 guarantees not to modify any memory locations below the base of 73 heap, i.e., static variables, even in the presence of usage 74 errors. The routines additionally detect most improper frees 75 and reallocs. All this holds as long as the static bookkeeping 76 for malloc itself is not corrupted by some other means. This 77 is only one aspect of security -- these checks do not, and 78 cannot, detect all possible programming errors. 79 80 If FOOTERS is defined nonzero, then each allocated chunk 81 carries an additional check word to verify that it was malloced 82 from its space. These check words are the same within each 83 execution of a program using malloc, but differ across 84 executions, so externally crafted fake chunks cannot be 85 freed. This improves security by rejecting frees/reallocs that 86 could corrupt heap memory, in addition to the checks preventing 87 writes to statics that are always on. This may further improve 88 security at the expense of time and space overhead. (Note that 89 FOOTERS may also be worth using with MSPACES.) 90 91 By default detected errors cause the program to abort (calling 92 "abort()"). You can override this to instead proceed past 93 errors by defining PROCEED_ON_ERROR. In this case, a bad free 94 has no effect, and a malloc that encounters a bad address 95 caused by user overwrites will ignore the bad address by 96 dropping pointers and indices to all known memory. This may 97 be appropriate for programs that should continue if at all 98 possible in the face of programming errors, although they may 99 run out of memory because dropped memory is never reclaimed. 100 101 If you don't like either of these options, you can define 102 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything 103 else. And if you are sure that your program using malloc has 104 no errors or vulnerabilities, you can define INSECURE to 1, 105 which might (or might not) provide a small performance improvement. 106 107 Thread-safety: NOT thread-safe unless USE_LOCKS defined 108 When USE_LOCKS is defined, each public call to malloc, free, 109 etc is surrounded with either a pthread mutex or a win32 110 spinlock (depending on WIN32). This is not especially fast, and 111 can be a major bottleneck. It is designed only to provide 112 minimal protection in concurrent environments, and to provide a 113 basis for extensions. If you are using malloc in a concurrent 114 program, consider instead using nedmalloc 115 (http://www.nedprod.com/programs/portable/nedmalloc/) or 116 ptmalloc (See http://www.malloc.de), which are derived 117 from versions of this malloc. 118 119 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP 120 This malloc can use unix sbrk or any emulation (invoked using 121 the CALL_MORECORE macro) and/or mmap/munmap or any emulation 122 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system 123 memory. On most unix systems, it tends to work best if both 124 MORECORE and MMAP are enabled. On Win32, it uses emulations 125 based on VirtualAlloc. It also uses common C library functions 126 like memset. 127 128 Compliance: I believe it is compliant with the Single Unix Specification 129 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably 130 others as well. 131 132* Overview of algorithms 133 134 This is not the fastest, most space-conserving, most portable, or 135 most tunable malloc ever written. However it is among the fastest 136 while also being among the most space-conserving, portable and 137 tunable. Consistent balance across these factors results in a good 138 general-purpose allocator for malloc-intensive programs. 139 140 In most ways, this malloc is a best-fit allocator. Generally, it 141 chooses the best-fitting existing chunk for a request, with ties 142 broken in approximately least-recently-used order. (This strategy 143 normally maintains low fragmentation.) However, for requests less 144 than 256bytes, it deviates from best-fit when there is not an 145 exactly fitting available chunk by preferring to use space adjacent 146 to that used for the previous small request, as well as by breaking 147 ties in approximately most-recently-used order. (These enhance 148 locality of series of small allocations.) And for very large requests 149 (>= 256Kb by default), it relies on system memory mapping 150 facilities, if supported. (This helps avoid carrying around and 151 possibly fragmenting memory used only for large chunks.) 152 153 All operations (except malloc_stats and mallinfo) have execution 154 times that are bounded by a constant factor of the number of bits in 155 a size_t, not counting any clearing in calloc or copying in realloc, 156 or actions surrounding MORECORE and MMAP that have times 157 proportional to the number of non-contiguous regions returned by 158 system allocation routines, which is often just 1. In real-time 159 applications, you can optionally suppress segment traversals using 160 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when 161 system allocators return non-contiguous spaces, at the typical 162 expense of carrying around more memory and increased fragmentation. 163 164 The implementation is not very modular and seriously overuses 165 macros. Perhaps someday all C compilers will do as good a job 166 inlining modular code as can now be done by brute-force expansion, 167 but now, enough of them seem not to. 168 169 Some compilers issue a lot of warnings about code that is 170 dead/unreachable only on some platforms, and also about intentional 171 uses of negation on unsigned types. All known cases of each can be 172 ignored. 173 174 For a longer but out of date high-level description, see 175 http://gee.cs.oswego.edu/dl/html/malloc.html 176 177* MSPACES 178 If MSPACES is defined, then in addition to malloc, free, etc., 179 this file also defines mspace_malloc, mspace_free, etc. These 180 are versions of malloc routines that take an "mspace" argument 181 obtained using create_mspace, to control all internal bookkeeping. 182 If ONLY_MSPACES is defined, only these versions are compiled. 183 So if you would like to use this allocator for only some allocations, 184 and your system malloc for others, you can compile with 185 ONLY_MSPACES and then do something like... 186 static mspace mymspace = create_mspace(0,0); // for example 187 #define mymalloc(bytes) mspace_malloc(mymspace, bytes) 188 189 (Note: If you only need one instance of an mspace, you can instead 190 use "USE_DL_PREFIX" to relabel the global malloc.) 191 192 You can similarly create thread-local allocators by storing 193 mspaces as thread-locals. For example: 194 static __thread mspace tlms = 0; 195 void* tlmalloc(size_t bytes) { 196 if (tlms == 0) tlms = create_mspace(0, 0); 197 return mspace_malloc(tlms, bytes); 198 } 199 void tlfree(void* mem) { mspace_free(tlms, mem); } 200 201 Unless FOOTERS is defined, each mspace is completely independent. 202 You cannot allocate from one and free to another (although 203 conformance is only weakly checked, so usage errors are not always 204 caught). If FOOTERS is defined, then each chunk carries around a tag 205 indicating its originating mspace, and frees are directed to their 206 originating spaces. 207 208 ------------------------- Compile-time options --------------------------- 209 210Be careful in setting #define values for numerical constants of type 211size_t. On some systems, literal values are not automatically extended 212to size_t precision unless they are explicitly casted. You can also 213use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below. 214 215WIN32 default: defined if _WIN32 defined 216 Defining WIN32 sets up defaults for MS environment and compilers. 217 Otherwise defaults are for unix. Beware that there seem to be some 218 cases where this malloc might not be a pure drop-in replacement for 219 Win32 malloc: Random-looking failures from Win32 GDI API's (eg; 220 SetDIBits()) may be due to bugs in some video driver implementations 221 when pixel buffers are malloc()ed, and the region spans more than 222 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb) 223 default granularity, pixel buffers may straddle virtual allocation 224 regions more often than when using the Microsoft allocator. You can 225 avoid this by using VirtualAlloc() and VirtualFree() for all pixel 226 buffers rather than using malloc(). If this is not possible, 227 recompile this malloc with a larger DEFAULT_GRANULARITY. 228 229MALLOC_ALIGNMENT default: (size_t)8 230 Controls the minimum alignment for malloc'ed chunks. It must be a 231 power of two and at least 8, even on machines for which smaller 232 alignments would suffice. It may be defined as larger than this 233 though. Note however that code and data structures are optimized for 234 the case of 8-byte alignment. 235 236MSPACES default: 0 (false) 237 If true, compile in support for independent allocation spaces. 238 This is only supported if HAVE_MMAP is true. 239 240ONLY_MSPACES default: 0 (false) 241 If true, only compile in mspace versions, not regular versions. 242 243USE_LOCKS default: 0 (false) 244 Causes each call to each public routine to be surrounded with 245 pthread or WIN32 mutex lock/unlock. (If set true, this can be 246 overridden on a per-mspace basis for mspace versions.) If set to a 247 non-zero value other than 1, locks are used, but their 248 implementation is left out, so lock functions must be supplied manually. 249 250USE_SPIN_LOCKS default: 1 iff USE_LOCKS and on x86 using gcc or MSC 251 If true, uses custom spin locks for locking. This is currently 252 supported only for x86 platforms using gcc or recent MS compilers. 253 Otherwise, posix locks or win32 critical sections are used. 254 255FOOTERS default: 0 256 If true, provide extra checking and dispatching by placing 257 information in the footers of allocated chunks. This adds 258 space and time overhead. 259 260INSECURE default: 0 261 If true, omit checks for usage errors and heap space overwrites. 262 263USE_DL_PREFIX default: NOT defined 264 Causes compiler to prefix all public routines with the string 'dl'. 265 This can be useful when you only want to use this malloc in one part 266 of a program, using your regular system malloc elsewhere. 267 268ABORT default: defined as abort() 269 Defines how to abort on failed checks. On most systems, a failed 270 check cannot die with an "assert" or even print an informative 271 message, because the underlying print routines in turn call malloc, 272 which will fail again. Generally, the best policy is to simply call 273 abort(). It's not very useful to do more than this because many 274 errors due to overwriting will show up as address faults (null, odd 275 addresses etc) rather than malloc-triggered checks, so will also 276 abort. Also, most compilers know that abort() does not return, so 277 can better optimize code conditionally calling it. 278 279PROCEED_ON_ERROR default: defined as 0 (false) 280 Controls whether detected bad addresses cause them to bypassed 281 rather than aborting. If set, detected bad arguments to free and 282 realloc are ignored. And all bookkeeping information is zeroed out 283 upon a detected overwrite of freed heap space, thus losing the 284 ability to ever return it from malloc again, but enabling the 285 application to proceed. If PROCEED_ON_ERROR is defined, the 286 static variable malloc_corruption_error_count is compiled in 287 and can be examined to see if errors have occurred. This option 288 generates slower code than the default abort policy. 289 290DEBUG default: NOT defined 291 The DEBUG setting is mainly intended for people trying to modify 292 this code or diagnose problems when porting to new platforms. 293 However, it may also be able to better isolate user errors than just 294 using runtime checks. The assertions in the check routines spell 295 out in more detail the assumptions and invariants underlying the 296 algorithms. The checking is fairly extensive, and will slow down 297 execution noticeably. Calling malloc_stats or mallinfo with DEBUG 298 set will attempt to check every non-mmapped allocated and free chunk 299 in the course of computing the summaries. 300 301ABORT_ON_ASSERT_FAILURE default: defined as 1 (true) 302 Debugging assertion failures can be nearly impossible if your 303 version of the assert macro causes malloc to be called, which will 304 lead to a cascade of further failures, blowing the runtime stack. 305 ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(), 306 which will usually make debugging easier. 307 308MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32 309 The action to take before "return 0" when malloc fails to be able to 310 return memory because there is none available. 311 312HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES 313 True if this system supports sbrk or an emulation of it. 314 315MORECORE default: sbrk 316 The name of the sbrk-style system routine to call to obtain more 317 memory. See below for guidance on writing custom MORECORE 318 functions. The type of the argument to sbrk/MORECORE varies across 319 systems. It cannot be size_t, because it supports negative 320 arguments, so it is normally the signed type of the same width as 321 size_t (sometimes declared as "intptr_t"). It doesn't much matter 322 though. Internally, we only call it with arguments less than half 323 the max value of a size_t, which should work across all reasonable 324 possibilities, although sometimes generating compiler warnings. 325 326MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE 327 If true, take advantage of fact that consecutive calls to MORECORE 328 with positive arguments always return contiguous increasing 329 addresses. This is true of unix sbrk. It does not hurt too much to 330 set it true anyway, since malloc copes with non-contiguities. 331 Setting it false when definitely non-contiguous saves time 332 and possibly wasted space it would take to discover this though. 333 334MORECORE_CANNOT_TRIM default: NOT defined 335 True if MORECORE cannot release space back to the system when given 336 negative arguments. This is generally necessary only if you are 337 using a hand-crafted MORECORE function that cannot handle negative 338 arguments. 339 340NO_SEGMENT_TRAVERSAL default: 0 341 If non-zero, suppresses traversals of memory segments 342 returned by either MORECORE or CALL_MMAP. This disables 343 merging of segments that are contiguous, and selectively 344 releasing them to the OS if unused, but bounds execution times. 345 346HAVE_MMAP default: 1 (true) 347 True if this system supports mmap or an emulation of it. If so, and 348 HAVE_MORECORE is not true, MMAP is used for all system 349 allocation. If set and HAVE_MORECORE is true as well, MMAP is 350 primarily used to directly allocate very large blocks. It is also 351 used as a backup strategy in cases where MORECORE fails to provide 352 space from system. Note: A single call to MUNMAP is assumed to be 353 able to unmap memory that may have be allocated using multiple calls 354 to MMAP, so long as they are adjacent. 355 356HAVE_MREMAP default: 1 on linux, else 0 357 If true realloc() uses mremap() to re-allocate large blocks and 358 extend or shrink allocation spaces. 359 360MMAP_CLEARS default: 1 except on WINCE. 361 True if mmap clears memory so calloc doesn't need to. This is true 362 for standard unix mmap using /dev/zero and on WIN32 except for WINCE. 363 364USE_BUILTIN_FFS default: 0 (i.e., not used) 365 Causes malloc to use the builtin ffs() function to compute indices. 366 Some compilers may recognize and intrinsify ffs to be faster than the 367 supplied C version. Also, the case of x86 using gcc is special-cased 368 to an asm instruction, so is already as fast as it can be, and so 369 this setting has no effect. Similarly for Win32 under recent MS compilers. 370 (On most x86s, the asm version is only slightly faster than the C version.) 371 372malloc_getpagesize default: derive from system includes, or 4096. 373 The system page size. To the extent possible, this malloc manages 374 memory from the system in page-size units. This may be (and 375 usually is) a function rather than a constant. This is ignored 376 if WIN32, where page size is determined using getSystemInfo during 377 initialization. 378 379USE_DEV_RANDOM default: 0 (i.e., not used) 380 Causes malloc to use /dev/random to initialize secure magic seed for 381 stamping footers. Otherwise, the current time is used. 382 383NO_MALLINFO default: 0 384 If defined, don't compile "mallinfo". This can be a simple way 385 of dealing with mismatches between system declarations and 386 those in this file. 387 388MALLINFO_FIELD_TYPE default: size_t 389 The type of the fields in the mallinfo struct. This was originally 390 defined as "int" in SVID etc, but is more usefully defined as 391 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set 392 393REALLOC_ZERO_BYTES_FREES default: not defined 394 This should be set if a call to realloc with zero bytes should 395 be the same as a call to free. Some people think it should. Otherwise, 396 since this malloc returns a unique pointer for malloc(0), so does 397 realloc(p, 0). 398 399LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H 400LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H 401LACKS_STDLIB_H default: NOT defined unless on WIN32 402 Define these if your system does not have these header files. 403 You might need to manually insert some of the declarations they provide. 404 405DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS, 406 system_info.dwAllocationGranularity in WIN32, 407 otherwise 64K. 408 Also settable using mallopt(M_GRANULARITY, x) 409 The unit for allocating and deallocating memory from the system. On 410 most systems with contiguous MORECORE, there is no reason to 411 make this more than a page. However, systems with MMAP tend to 412 either require or encourage larger granularities. You can increase 413 this value to prevent system allocation functions to be called so 414 often, especially if they are slow. The value must be at least one 415 page and must be a power of two. Setting to 0 causes initialization 416 to either page size or win32 region size. (Note: In previous 417 versions of malloc, the equivalent of this option was called 418 "TOP_PAD") 419 420DEFAULT_TRIM_THRESHOLD default: 2MB 421 Also settable using mallopt(M_TRIM_THRESHOLD, x) 422 The maximum amount of unused top-most memory to keep before 423 releasing via malloc_trim in free(). Automatic trimming is mainly 424 useful in long-lived programs using contiguous MORECORE. Because 425 trimming via sbrk can be slow on some systems, and can sometimes be 426 wasteful (in cases where programs immediately afterward allocate 427 more large chunks) the value should be high enough so that your 428 overall system performance would improve by releasing this much 429 memory. As a rough guide, you might set to a value close to the 430 average size of a process (program) running on your system. 431 Releasing this much memory would allow such a process to run in 432 memory. Generally, it is worth tuning trim thresholds when a 433 program undergoes phases where several large chunks are allocated 434 and released in ways that can reuse each other's storage, perhaps 435 mixed with phases where there are no such chunks at all. The trim 436 value must be greater than page size to have any useful effect. To 437 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick 438 some people use of mallocing a huge space and then freeing it at 439 program startup, in an attempt to reserve system memory, doesn't 440 have the intended effect under automatic trimming, since that memory 441 will immediately be returned to the system. 442 443DEFAULT_MMAP_THRESHOLD default: 256K 444 Also settable using mallopt(M_MMAP_THRESHOLD, x) 445 The request size threshold for using MMAP to directly service a 446 request. Requests of at least this size that cannot be allocated 447 using already-existing space will be serviced via mmap. (If enough 448 normal freed space already exists it is used instead.) Using mmap 449 segregates relatively large chunks of memory so that they can be 450 individually obtained and released from the host system. A request 451 serviced through mmap is never reused by any other request (at least 452 not directly; the system may just so happen to remap successive 453 requests to the same locations). Segregating space in this way has 454 the benefits that: Mmapped space can always be individually released 455 back to the system, which helps keep the system level memory demands 456 of a long-lived program low. Also, mapped memory doesn't become 457 `locked' between other chunks, as can happen with normally allocated 458 chunks, which means that even trimming via malloc_trim would not 459 release them. However, it has the disadvantage that the space 460 cannot be reclaimed, consolidated, and then used to service later 461 requests, as happens with normal chunks. The advantages of mmap 462 nearly always outweigh disadvantages for "large" chunks, but the 463 value of "large" may vary across systems. The default is an 464 empirically derived value that works well in most systems. You can 465 disable mmap by setting to MAX_SIZE_T. 466 467MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP 468 The number of consolidated frees between checks to release 469 unused segments when freeing. When using non-contiguous segments, 470 especially with multiple mspaces, checking only for topmost space 471 doesn't always suffice to trigger trimming. To compensate for this, 472 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the 473 current number of segments, if greater) try to release unused 474 segments to the OS when freeing chunks that result in 475 consolidation. The best value for this parameter is a compromise 476 between slowing down frees with relatively costly checks that 477 rarely trigger versus holding on to unused memory. To effectively 478 disable, set to MAX_SIZE_T. This may lead to a very slight speed 479 improvement at the expense of carrying around more memory. 480*/ 481 482/* Version identifier to allow people to support multiple versions */ 483#ifndef DLMALLOC_VERSION 484#define DLMALLOC_VERSION 20804 485#endif/* DLMALLOC_VERSION */ 486 487#if defined(linux) 488#define _GNU_SOURCE 1 489#endif 490 491#ifndef WIN32 492#ifdef _WIN32 493#define WIN32 1 494#endif/* _WIN32 */ 495#ifdef _WIN32_WCE 496#define LACKS_FCNTL_H 497#define WIN32 1 498#endif/* _WIN32_WCE */ 499#endif/* WIN32 */ 500#ifdef WIN32 501#define WIN32_LEAN_AND_MEAN 502#ifndef _WIN32_WINNT 503#define _WIN32_WINNT 0x403 504#endif 505#include <windows.h> 506#define HAVE_MMAP 1 507#define HAVE_MORECORE 0 508#define LACKS_UNISTD_H 509#define LACKS_SYS_PARAM_H 510#define LACKS_SYS_MMAN_H 511#define LACKS_STRING_H 512#define LACKS_STRINGS_H 513#define LACKS_SYS_TYPES_H 514#define LACKS_ERRNO_H 515#ifndef MALLOC_FAILURE_ACTION 516#define MALLOC_FAILURE_ACTION 517#endif/* MALLOC_FAILURE_ACTION */ 518#ifdef _WIN32_WCE/* WINCE reportedly does not clear */ 519#define MMAP_CLEARS 0 520#else 521#define MMAP_CLEARS 1 522#endif/* _WIN32_WCE */ 523#endif/* WIN32 */ 524 525#if defined(DARWIN) || defined(_DARWIN) 526/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */ 527#ifndef HAVE_MORECORE 528#define HAVE_MORECORE 0 529#define HAVE_MMAP 1 530/* OSX allocators provide 16 byte alignment */ 531#ifndef MALLOC_ALIGNMENT 532#define MALLOC_ALIGNMENT ((size_t)16U) 533#endif 534#endif/* HAVE_MORECORE */ 535#endif/* DARWIN */ 536 537#ifndef LACKS_SYS_TYPES_H 538#include <sys/types.h>/* For size_t */ 539#endif/* LACKS_SYS_TYPES_H */ 540 541/* The maximum possible size_t value has all bits set */ 542#define MAX_SIZE_T (~(size_t)0) 543 544#ifndef ONLY_MSPACES 545#define ONLY_MSPACES 0/* define to a value */ 546#else 547#define ONLY_MSPACES 1 548#endif/* ONLY_MSPACES */ 549#ifndef MSPACES 550#if ONLY_MSPACES 551#define MSPACES 1 552#else/* ONLY_MSPACES */ 553#define MSPACES 0 554#endif/* ONLY_MSPACES */ 555#endif/* MSPACES */ 556#ifndef MALLOC_ALIGNMENT 557#define MALLOC_ALIGNMENT ((size_t)8U) 558#endif/* MALLOC_ALIGNMENT */ 559#ifndef FOOTERS 560#define FOOTERS 0 561#endif/* FOOTERS */ 562#ifndef ABORT 563#define ABORT abort() 564#endif/* ABORT */ 565#ifndef ABORT_ON_ASSERT_FAILURE 566#define ABORT_ON_ASSERT_FAILURE 1 567#endif/* ABORT_ON_ASSERT_FAILURE */ 568#ifndef PROCEED_ON_ERROR 569#define PROCEED_ON_ERROR 0 570#endif/* PROCEED_ON_ERROR */ 571#ifndef USE_LOCKS 572#define USE_LOCKS 0 573#endif/* USE_LOCKS */ 574#ifndef USE_SPIN_LOCKS 575#if USE_LOCKS && (defined(__GNUC__) && ((defined(__i386__) || defined(__x86_64__)))) || (defined(_MSC_VER) && _MSC_VER>=1310) 576#define USE_SPIN_LOCKS 1 577#else 578#define USE_SPIN_LOCKS 0 579#endif/* USE_LOCKS && ... */ 580#endif/* USE_SPIN_LOCKS */ 581#ifndef INSECURE 582#define INSECURE 0 583#endif/* INSECURE */ 584#ifndef HAVE_MMAP 585#define HAVE_MMAP 1 586#endif/* HAVE_MMAP */ 587#ifndef MMAP_CLEARS 588#define MMAP_CLEARS 1 589#endif/* MMAP_CLEARS */ 590#ifndef HAVE_MREMAP 591#ifdef linux 592#define HAVE_MREMAP 1 593#else/* linux */ 594#define HAVE_MREMAP 0 595#endif/* linux */ 596#endif/* HAVE_MREMAP */ 597#ifndef MALLOC_FAILURE_ACTION 598#define MALLOC_FAILURE_ACTION errno = ENOMEM; 599#endif/* MALLOC_FAILURE_ACTION */ 600#ifndef HAVE_MORECORE 601#if ONLY_MSPACES 602#define HAVE_MORECORE 0 603#else/* ONLY_MSPACES */ 604#define HAVE_MORECORE 1 605#endif/* ONLY_MSPACES */ 606#endif/* HAVE_MORECORE */ 607#if !HAVE_MORECORE 608#define MORECORE_CONTIGUOUS 0 609#else/* !HAVE_MORECORE */ 610#define MORECORE_DEFAULT sbrk 611#ifndef MORECORE_CONTIGUOUS 612#define MORECORE_CONTIGUOUS 1 613#endif/* MORECORE_CONTIGUOUS */ 614#endif/* HAVE_MORECORE */ 615#ifndef DEFAULT_GRANULARITY 616#if (MORECORE_CONTIGUOUS || defined(WIN32)) 617#define DEFAULT_GRANULARITY (0)/* 0 means to compute in init_mparams */ 618#else/* MORECORE_CONTIGUOUS */ 619#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U) 620#endif/* MORECORE_CONTIGUOUS */ 621#endif/* DEFAULT_GRANULARITY */ 622#ifndef DEFAULT_TRIM_THRESHOLD 623#ifndef MORECORE_CANNOT_TRIM 624#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U) 625#else/* MORECORE_CANNOT_TRIM */ 626#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T 627#endif/* MORECORE_CANNOT_TRIM */ 628#endif/* DEFAULT_TRIM_THRESHOLD */ 629#ifndef DEFAULT_MMAP_THRESHOLD 630#if HAVE_MMAP 631#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U) 632#else/* HAVE_MMAP */ 633#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T 634#endif/* HAVE_MMAP */ 635#endif/* DEFAULT_MMAP_THRESHOLD */ 636#ifndef MAX_RELEASE_CHECK_RATE 637#if HAVE_MMAP 638#define MAX_RELEASE_CHECK_RATE 4095 639#else 640#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T 641#endif/* HAVE_MMAP */ 642#endif/* MAX_RELEASE_CHECK_RATE */ 643#ifndef USE_BUILTIN_FFS 644#define USE_BUILTIN_FFS 0 645#endif/* USE_BUILTIN_FFS */ 646#ifndef USE_DEV_RANDOM 647#define USE_DEV_RANDOM 0 648#endif/* USE_DEV_RANDOM */ 649#ifndef NO_MALLINFO 650#define NO_MALLINFO 0 651#endif/* NO_MALLINFO */ 652#ifndef MALLINFO_FIELD_TYPE 653#define MALLINFO_FIELD_TYPE size_t 654#endif/* MALLINFO_FIELD_TYPE */ 655#ifndef NO_SEGMENT_TRAVERSAL 656#define NO_SEGMENT_TRAVERSAL 0 657#endif/* NO_SEGMENT_TRAVERSAL */ 658 659/* 660 mallopt tuning options. SVID/XPG defines four standard parameter 661 numbers for mallopt, normally defined in malloc.h. None of these 662 are used in this malloc, so setting them has no effect. But this 663 malloc does support the following options. 664*/ 665 666#define M_TRIM_THRESHOLD (-1) 667#define M_GRANULARITY (-2) 668#define M_MMAP_THRESHOLD (-3) 669 670/* ------------------------ Mallinfo declarations ------------------------ */ 671 672#if !NO_MALLINFO 673/* 674 This version of malloc supports the standard SVID/XPG mallinfo 675 routine that returns a struct containing usage properties and 676 statistics. It should work on any system that has a 677 /usr/include/malloc.h defining struct mallinfo. The main 678 declaration needed is the mallinfo struct that is returned (by-copy) 679 by mallinfo(). The malloinfo struct contains a bunch of fields that 680 are not even meaningful in this version of malloc. These fields are 681 are instead filled by mallinfo() with other numbers that might be of 682 interest. 683 684 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a 685 /usr/include/malloc.h file that includes a declaration of struct 686 mallinfo. If so, it is included; else a compliant version is 687 declared below. These must be precisely the same for mallinfo() to 688 work. The original SVID version of this struct, defined on most 689 systems with mallinfo, declares all fields as ints. But some others 690 define as unsigned long. If your system defines the fields using a 691 type of different width than listed here, you MUST #include your 692 system version and #define HAVE_USR_INCLUDE_MALLOC_H. 693*/ 694 695/* #define HAVE_USR_INCLUDE_MALLOC_H */ 696 697#ifdef HAVE_USR_INCLUDE_MALLOC_H 698#include"/usr/include/malloc.h" 699#else/* HAVE_USR_INCLUDE_MALLOC_H */ 700#ifndef STRUCT_MALLINFO_DECLARED 701#define STRUCT_MALLINFO_DECLARED 1 702struct mallinfo { 703 MALLINFO_FIELD_TYPE arena;/* non-mmapped space allocated from system */ 704 MALLINFO_FIELD_TYPE ordblks;/* number of free chunks */ 705 MALLINFO_FIELD_TYPE smblks;/* always 0 */ 706 MALLINFO_FIELD_TYPE hblks;/* always 0 */ 707 MALLINFO_FIELD_TYPE hblkhd;/* space in mmapped regions */ 708 MALLINFO_FIELD_TYPE usmblks;/* maximum total allocated space */ 709 MALLINFO_FIELD_TYPE fsmblks;/* always 0 */ 710 MALLINFO_FIELD_TYPE uordblks;/* total allocated space */ 711 MALLINFO_FIELD_TYPE fordblks;/* total free space */ 712 MALLINFO_FIELD_TYPE keepcost;/* releasable (via malloc_trim) space */ 713}; 714#endif/* STRUCT_MALLINFO_DECLARED */ 715#endif/* HAVE_USR_INCLUDE_MALLOC_H */ 716#endif/* NO_MALLINFO */ 717 718/* 719 Try to persuade compilers to inline. The most critical functions for 720 inlining are defined as macros, so these aren't used for them. 721*/ 722 723#ifndef FORCEINLINE 724#if defined(__GNUC__) 725#define FORCEINLINE __inline __attribute__ ((always_inline)) 726#elif defined(_MSC_VER) 727#define FORCEINLINE __forceinline 728#endif 729#endif 730#ifndef NOINLINE 731#if defined(__GNUC__) 732#define NOINLINE __attribute__ ((noinline)) 733#elif defined(_MSC_VER) 734#define NOINLINE __declspec(noinline) 735#else 736#define NOINLINE 737#endif 738#endif 739 740#ifdef __cplusplus 741extern"C"{ 742#ifndef FORCEINLINE 743#define FORCEINLINE inline 744#endif 745#endif/* __cplusplus */ 746#ifndef FORCEINLINE 747#define FORCEINLINE 748#endif 749 750#if !ONLY_MSPACES 751 752/* ------------------- Declarations of public routines ------------------- */ 753 754#ifndef USE_DL_PREFIX 755#define dlcalloc calloc 756#define dlfree free 757#define dlmalloc malloc 758#define dlmemalign memalign 759#define dlrealloc realloc 760#define dlvalloc valloc 761#define dlpvalloc pvalloc 762#define dlmallinfo mallinfo 763#define dlmallopt mallopt 764#define dlmalloc_trim malloc_trim 765#define dlmalloc_stats malloc_stats 766#define dlmalloc_usable_size malloc_usable_size 767#define dlmalloc_footprint malloc_footprint 768#define dlmalloc_max_footprint malloc_max_footprint 769#define dlindependent_calloc independent_calloc 770#define dlindependent_comalloc independent_comalloc 771#endif/* USE_DL_PREFIX */ 772 773 774/* 775 malloc(size_t n) 776 Returns a pointer to a newly allocated chunk of at least n bytes, or 777 null if no space is available, in which case errno is set to ENOMEM 778 on ANSI C systems. 779 780 If n is zero, malloc returns a minimum-sized chunk. (The minimum 781 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit 782 systems.) Note that size_t is an unsigned type, so calls with 783 arguments that would be negative if signed are interpreted as 784 requests for huge amounts of space, which will often fail. The 785 maximum supported value of n differs across systems, but is in all 786 cases less than the maximum representable value of a size_t. 787*/ 788void*dlmalloc(size_t); 789 790/* 791 free(void* p) 792 Releases the chunk of memory pointed to by p, that had been previously 793 allocated using malloc or a related routine such as realloc. 794 It has no effect if p is null. If p was not malloced or already 795 freed, free(p) will by default cause the current program to abort. 796*/ 797voiddlfree(void*); 798 799/* 800 calloc(size_t n_elements, size_t element_size); 801 Returns a pointer to n_elements * element_size bytes, with all locations 802 set to zero. 803*/ 804void*dlcalloc(size_t,size_t); 805 806/* 807 realloc(void* p, size_t n) 808 Returns a pointer to a chunk of size n that contains the same data 809 as does chunk p up to the minimum of (n, p's size) bytes, or null 810 if no space is available. 811 812 The returned pointer may or may not be the same as p. The algorithm 813 prefers extending p in most cases when possible, otherwise it 814 employs the equivalent of a malloc-copy-free sequence. 815 816 If p is null, realloc is equivalent to malloc. 817 818 If space is not available, realloc returns null, errno is set (if on 819 ANSI) and p is NOT freed. 820 821 if n is for fewer bytes than already held by p, the newly unused 822 space is lopped off and freed if possible. realloc with a size 823 argument of zero (re)allocates a minimum-sized chunk. 824 825 The old unix realloc convention of allowing the last-free'd chunk 826 to be used as an argument to realloc is not supported. 827*/ 828 829void*dlrealloc(void*,size_t); 830 831/* 832 memalign(size_t alignment, size_t n); 833 Returns a pointer to a newly allocated chunk of n bytes, aligned 834 in accord with the alignment argument. 835 836 The alignment argument should be a power of two. If the argument is 837 not a power of two, the nearest greater power is used. 838 8-byte alignment is guaranteed by normal malloc calls, so don't 839 bother calling memalign with an argument of 8 or less. 840 841 Overreliance on memalign is a sure way to fragment space. 842*/ 843void*dlmemalign(size_t,size_t); 844 845/* 846 valloc(size_t n); 847 Equivalent to memalign(pagesize, n), where pagesize is the page 848 size of the system. If the pagesize is unknown, 4096 is used. 849*/ 850void*dlvalloc(size_t); 851 852/* 853 mallopt(int parameter_number, int parameter_value) 854 Sets tunable parameters The format is to provide a 855 (parameter-number, parameter-value) pair. mallopt then sets the 856 corresponding parameter to the argument value if it can (i.e., so 857 long as the value is meaningful), and returns 1 if successful else 858 0. To workaround the fact that mallopt is specified to use int, 859 not size_t parameters, the value -1 is specially treated as the 860 maximum unsigned size_t value. 861 862 SVID/XPG/ANSI defines four standard param numbers for mallopt, 863 normally defined in malloc.h. None of these are use in this malloc, 864 so setting them has no effect. But this malloc also supports other 865 options in mallopt. See below for details. Briefly, supported 866 parameters are as follows (listed defaults are for "typical" 867 configurations). 868 869 Symbol param # default allowed param values 870 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables) 871 M_GRANULARITY -2 page size any power of 2 >= page size 872 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support) 873*/ 874intdlmallopt(int,int); 875 876/* 877 malloc_footprint(); 878 Returns the number of bytes obtained from the system. The total 879 number of bytes allocated by malloc, realloc etc., is less than this 880 value. Unlike mallinfo, this function returns only a precomputed 881 result, so can be called frequently to monitor memory consumption. 882 Even if locks are otherwise defined, this function does not use them, 883 so results might not be up to date. 884*/ 885size_tdlmalloc_footprint(void); 886 887/* 888 malloc_max_footprint(); 889 Returns the maximum number of bytes obtained from the system. This 890 value will be greater than current footprint if deallocated space 891 has been reclaimed by the system. The peak number of bytes allocated 892 by malloc, realloc etc., is less than this value. Unlike mallinfo, 893 this function returns only a precomputed result, so can be called 894 frequently to monitor memory consumption. Even if locks are 895 otherwise defined, this function does not use them, so results might 896 not be up to date. 897*/ 898size_tdlmalloc_max_footprint(void); 899 900#if !NO_MALLINFO 901/* 902 mallinfo() 903 Returns (by copy) a struct containing various summary statistics: 904 905 arena: current total non-mmapped bytes allocated from system 906 ordblks: the number of free chunks 907 smblks: always zero. 908 hblks: current number of mmapped regions 909 hblkhd: total bytes held in mmapped regions 910 usmblks: the maximum total allocated space. This will be greater 911 than current total if trimming has occurred. 912 fsmblks: always zero 913 uordblks: current total allocated space (normal or mmapped) 914 fordblks: total free space 915 keepcost: the maximum number of bytes that could ideally be released 916 back to system via malloc_trim. ("ideally" means that 917 it ignores page restrictions etc.) 918 919 Because these fields are ints, but internal bookkeeping may 920 be kept as longs, the reported values may wrap around zero and 921 thus be inaccurate. 922*/ 923struct mallinfo dlmallinfo(void); 924#endif/* NO_MALLINFO */ 925 926/* 927 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]); 928 929 independent_calloc is similar to calloc, but instead of returning a 930 single cleared space, it returns an array of pointers to n_elements 931 independent elements that can hold contents of size elem_size, each 932 of which starts out cleared, and can be independently freed, 933 realloc'ed etc. The elements are guaranteed to be adjacently 934 allocated (this is not guaranteed to occur with multiple callocs or 935 mallocs), which may also improve cache locality in some 936 applications. 937 938 The "chunks" argument is optional (i.e., may be null, which is 939 probably the most typical usage). If it is null, the returned array 940 is itself dynamically allocated and should also be freed when it is 941 no longer needed. Otherwise, the chunks array must be of at least 942 n_elements in length. It is filled in with the pointers to the 943 chunks. 944 945 In either case, independent_calloc returns this pointer array, or 946 null if the allocation failed. If n_elements is zero and "chunks" 947 is null, it returns a chunk representing an array with zero elements 948 (which should be freed if not wanted). 949 950 Each element must be individually freed when it is no longer 951 needed. If you'd like to instead be able to free all at once, you 952 should instead use regular calloc and assign pointers into this 953 space to represent elements. (In this case though, you cannot 954 independently free elements.) 955 956 independent_calloc simplifies and speeds up implementations of many 957 kinds of pools. It may also be useful when constructing large data 958 structures that initially have a fixed number of fixed-sized nodes, 959 but the number is not known at compile time, and some of the nodes 960 may later need to be freed. For example: 961 962 struct Node { int item; struct Node* next; }; 963 964 struct Node* build_list() { 965 struct Node** pool; 966 int n = read_number_of_nodes_needed(); 967 if (n <= 0) return 0; 968 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0); 969 if (pool == 0) die(); 970 // organize into a linked list... 971 struct Node* first = pool[0]; 972 for (i = 0; i < n-1; ++i) 973 pool[i]->next = pool[i+1]; 974 free(pool); // Can now free the array (or not, if it is needed later) 975 return first; 976 } 977*/ 978void**dlindependent_calloc(size_t,size_t,void**); 979 980/* 981 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]); 982 983 independent_comalloc allocates, all at once, a set of n_elements 984 chunks with sizes indicated in the "sizes" array. It returns 985 an array of pointers to these elements, each of which can be 986 independently freed, realloc'ed etc. The elements are guaranteed to 987 be adjacently allocated (this is not guaranteed to occur with 988 multiple callocs or mallocs), which may also improve cache locality 989 in some applications. 990 991 The "chunks" argument is optional (i.e., may be null). If it is null 992 the returned array is itself dynamically allocated and should also 993 be freed when it is no longer needed. Otherwise, the chunks array 994 must be of at least n_elements in length. It is filled in with the 995 pointers to the chunks. 996 997 In either case, independent_comalloc returns this pointer array, or 998 null if the allocation failed. If n_elements is zero and chunks is 999 null, it returns a chunk representing an array with zero elements1000 (which should be freed if not wanted).10011002 Each element must be individually freed when it is no longer1003 needed. If you'd like to instead be able to free all at once, you1004 should instead use a single regular malloc, and assign pointers at1005 particular offsets in the aggregate space. (In this case though, you1006 cannot independently free elements.)10071008 independent_comallac differs from independent_calloc in that each1009 element may have a different size, and also that it does not1010 automatically clear elements.10111012 independent_comalloc can be used to speed up allocation in cases1013 where several structs or objects must always be allocated at the1014 same time. For example:10151016 struct Head { ... }1017 struct Foot { ... }10181019 void send_message(char* msg) {1020 int msglen = strlen(msg);1021 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };1022 void* chunks[3];1023 if (independent_comalloc(3, sizes, chunks) == 0)1024 die();1025 struct Head* head = (struct Head*)(chunks[0]);1026 char* body = (char*)(chunks[1]);1027 struct Foot* foot = (struct Foot*)(chunks[2]);1028 // ...1029 }10301031 In general though, independent_comalloc is worth using only for1032 larger values of n_elements. For small values, you probably won't1033 detect enough difference from series of malloc calls to bother.10341035 Overuse of independent_comalloc can increase overall memory usage,1036 since it cannot reuse existing noncontiguous small chunks that1037 might be available for some of the elements.1038*/1039void**dlindependent_comalloc(size_t,size_t*,void**);104010411042/*1043 pvalloc(size_t n);1044 Equivalent to valloc(minimum-page-that-holds(n)), that is,1045 round up n to nearest pagesize.1046 */1047void*dlpvalloc(size_t);10481049/*1050 malloc_trim(size_t pad);10511052 If possible, gives memory back to the system (via negative arguments1053 to sbrk) if there is unused memory at the `high' end of the malloc1054 pool or in unused MMAP segments. You can call this after freeing1055 large blocks of memory to potentially reduce the system-level memory1056 requirements of a program. However, it cannot guarantee to reduce1057 memory. Under some allocation patterns, some large free blocks of1058 memory will be locked between two used chunks, so they cannot be1059 given back to the system.10601061 The `pad' argument to malloc_trim represents the amount of free1062 trailing space to leave untrimmed. If this argument is zero, only1063 the minimum amount of memory to maintain internal data structures1064 will be left. Non-zero arguments can be supplied to maintain enough1065 trailing space to service future expected allocations without having1066 to re-obtain memory from the system.10671068 Malloc_trim returns 1 if it actually released any memory, else 0.1069*/1070intdlmalloc_trim(size_t);10711072/*1073 malloc_stats();1074 Prints on stderr the amount of space obtained from the system (both1075 via sbrk and mmap), the maximum amount (which may be more than1076 current if malloc_trim and/or munmap got called), and the current1077 number of bytes allocated via malloc (or realloc, etc) but not yet1078 freed. Note that this is the number of bytes allocated, not the1079 number requested. It will be larger than the number requested1080 because of alignment and bookkeeping overhead. Because it includes1081 alignment wastage as being in use, this figure may be greater than1082 zero even when no user-level chunks are allocated.10831084 The reported current and maximum system memory can be inaccurate if1085 a program makes other calls to system memory allocation functions1086 (normally sbrk) outside of malloc.10871088 malloc_stats prints only the most commonly interesting statistics.1089 More information can be obtained by calling mallinfo.1090*/1091voiddlmalloc_stats(void);10921093#endif/* ONLY_MSPACES */10941095/*1096 malloc_usable_size(void* p);10971098 Returns the number of bytes you can actually use in1099 an allocated chunk, which may be more than you requested (although1100 often not) due to alignment and minimum size constraints.1101 You can use this many bytes without worrying about1102 overwriting other allocated objects. This is not a particularly great1103 programming practice. malloc_usable_size can be more useful in1104 debugging and assertions, for example:11051106 p = malloc(n);1107 assert(malloc_usable_size(p) >= 256);1108*/1109size_tdlmalloc_usable_size(void*);111011111112#if MSPACES11131114/*1115 mspace is an opaque type representing an independent1116 region of space that supports mspace_malloc, etc.1117*/1118typedefvoid* mspace;11191120/*1121 create_mspace creates and returns a new independent space with the1122 given initial capacity, or, if 0, the default granularity size. It1123 returns null if there is no system memory available to create the1124 space. If argument locked is non-zero, the space uses a separate1125 lock to control access. The capacity of the space will grow1126 dynamically as needed to service mspace_malloc requests. You can1127 control the sizes of incremental increases of this space by1128 compiling with a different DEFAULT_GRANULARITY or dynamically1129 setting with mallopt(M_GRANULARITY, value).1130*/1131mspace create_mspace(size_t capacity,int locked);11321133/*1134 destroy_mspace destroys the given space, and attempts to return all1135 of its memory back to the system, returning the total number of1136 bytes freed. After destruction, the results of access to all memory1137 used by the space become undefined.1138*/1139size_tdestroy_mspace(mspace msp);11401141/*1142 create_mspace_with_base uses the memory supplied as the initial base1143 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this1144 space is used for bookkeeping, so the capacity must be at least this1145 large. (Otherwise 0 is returned.) When this initial space is1146 exhausted, additional memory will be obtained from the system.1147 Destroying this space will deallocate all additionally allocated1148 space (if possible) but not the initial base.1149*/1150mspace create_mspace_with_base(void* base,size_t capacity,int locked);11511152/*1153 mspace_mmap_large_chunks controls whether requests for large chunks1154 are allocated in their own mmapped regions, separate from others in1155 this mspace. By default this is enabled, which reduces1156 fragmentation. However, such chunks are not necessarily released to1157 the system upon destroy_mspace. Disabling by setting to false may1158 increase fragmentation, but avoids leakage when relying on1159 destroy_mspace to release all memory allocated using this space.1160*/1161intmspace_mmap_large_chunks(mspace msp,int enable);116211631164/*1165 mspace_malloc behaves as malloc, but operates within1166 the given space.1167*/1168void*mspace_malloc(mspace msp,size_t bytes);11691170/*1171 mspace_free behaves as free, but operates within1172 the given space.11731174 If compiled with FOOTERS==1, mspace_free is not actually needed.1175 free may be called instead of mspace_free because freed chunks from1176 any space are handled by their originating spaces.1177*/1178voidmspace_free(mspace msp,void* mem);11791180/*1181 mspace_realloc behaves as realloc, but operates within1182 the given space.11831184 If compiled with FOOTERS==1, mspace_realloc is not actually1185 needed. realloc may be called instead of mspace_realloc because1186 realloced chunks from any space are handled by their originating1187 spaces.1188*/1189void*mspace_realloc(mspace msp,void* mem,size_t newsize);11901191/*1192 mspace_calloc behaves as calloc, but operates within1193 the given space.1194*/1195void*mspace_calloc(mspace msp,size_t n_elements,size_t elem_size);11961197/*1198 mspace_memalign behaves as memalign, but operates within1199 the given space.1200*/1201void*mspace_memalign(mspace msp,size_t alignment,size_t bytes);12021203/*1204 mspace_independent_calloc behaves as independent_calloc, but1205 operates within the given space.1206*/1207void**mspace_independent_calloc(mspace msp,size_t n_elements,1208size_t elem_size,void* chunks[]);12091210/*1211 mspace_independent_comalloc behaves as independent_comalloc, but1212 operates within the given space.1213*/1214void**mspace_independent_comalloc(mspace msp,size_t n_elements,1215size_t sizes[],void* chunks[]);12161217/*1218 mspace_footprint() returns the number of bytes obtained from the1219 system for this space.1220*/1221size_tmspace_footprint(mspace msp);12221223/*1224 mspace_max_footprint() returns the peak number of bytes obtained from the1225 system for this space.1226*/1227size_tmspace_max_footprint(mspace msp);122812291230#if !NO_MALLINFO1231/*1232 mspace_mallinfo behaves as mallinfo, but reports properties of1233 the given space.1234*/1235struct mallinfo mspace_mallinfo(mspace msp);1236#endif/* NO_MALLINFO */12371238/*1239 malloc_usable_size(void* p) behaves the same as malloc_usable_size;1240*/1241size_tmspace_usable_size(void* mem);12421243/*1244 mspace_malloc_stats behaves as malloc_stats, but reports1245 properties of the given space.1246*/1247voidmspace_malloc_stats(mspace msp);12481249/*1250 mspace_trim behaves as malloc_trim, but1251 operates within the given space.1252*/1253intmspace_trim(mspace msp,size_t pad);12541255/*1256 An alias for mallopt.1257*/1258intmspace_mallopt(int,int);12591260#endif/* MSPACES */12611262#ifdef __cplusplus1263};/* end of extern "C" */1264#endif/* __cplusplus */12651266/*1267 ========================================================================1268 To make a fully customizable malloc.h header file, cut everything1269 above this line, put into file malloc.h, edit to suit, and #include it1270 on the next line, as well as in programs that use this malloc.1271 ========================================================================1272*/12731274/* #include "malloc.h" */12751276/*------------------------------ internal #includes ---------------------- */12771278#ifdef WIN321279#ifndef __GNUC__1280#pragma warning( disable : 4146 )/* no "unsigned" warnings */1281#endif1282#endif/* WIN32 */12831284#include <stdio.h>/* for printing in malloc_stats */12851286#ifndef LACKS_ERRNO_H1287#include <errno.h>/* for MALLOC_FAILURE_ACTION */1288#endif/* LACKS_ERRNO_H */1289#if FOOTERS1290#include <time.h>/* for magic initialization */1291#endif/* FOOTERS */1292#ifndef LACKS_STDLIB_H1293#include <stdlib.h>/* for abort() */1294#endif/* LACKS_STDLIB_H */1295#ifdef DEBUG1296#if ABORT_ON_ASSERT_FAILURE1297#define assert(x) if(!(x)) ABORT1298#else/* ABORT_ON_ASSERT_FAILURE */1299#include <assert.h>1300#endif/* ABORT_ON_ASSERT_FAILURE */1301#else/* DEBUG */1302#ifndef assert1303#define assert(x)1304#endif1305#define DEBUG 01306#endif/* DEBUG */1307#ifndef LACKS_STRING_H1308#include <string.h>/* for memset etc */1309#endif/* LACKS_STRING_H */1310#if USE_BUILTIN_FFS1311#ifndef LACKS_STRINGS_H1312#include <strings.h>/* for ffs */1313#endif/* LACKS_STRINGS_H */1314#endif/* USE_BUILTIN_FFS */1315#if HAVE_MMAP1316#ifndef LACKS_SYS_MMAN_H1317#include <sys/mman.h>/* for mmap */1318#endif/* LACKS_SYS_MMAN_H */1319#ifndef LACKS_FCNTL_H1320#include <fcntl.h>1321#endif/* LACKS_FCNTL_H */1322#endif/* HAVE_MMAP */1323#ifndef LACKS_UNISTD_H1324#include <unistd.h>/* for sbrk, sysconf */1325#else/* LACKS_UNISTD_H */1326#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)1327externvoid*sbrk(ptrdiff_t);1328#endif/* FreeBSD etc */1329#endif/* LACKS_UNISTD_H */13301331/* Declarations for locking */1332#if USE_LOCKS1333#ifndef WIN321334#include <pthread.h>1335#if defined (__SVR4) && defined (__sun)/* solaris */1336#include <thread.h>1337#endif/* solaris */1338#else1339#ifndef _M_AMD641340/* These are already defined on AMD64 builds */1341#ifdef __cplusplus1342extern"C"{1343#endif/* __cplusplus */1344#ifndef __MINGW32__1345LONG __cdecl _InterlockedCompareExchange(LONG volatile*Dest, LONG Exchange, LONG Comp);1346LONG __cdecl _InterlockedExchange(LONG volatile*Target, LONG Value);1347#endif1348#ifdef __cplusplus1349}1350#endif/* __cplusplus */1351#endif/* _M_AMD64 */1352#ifndef __MINGW32__1353#pragma intrinsic (_InterlockedCompareExchange)1354#pragma intrinsic (_InterlockedExchange)1355#else1356/* --[ start GCC compatibility ]----------------------------------------------1357 * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic1358 * Microsoft Visual C++ functions. Originally developed for the ReactOS1359 * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>)1360 * projects.1361 *1362 * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com>1363 *1364 * Permission is hereby granted, free of charge, to any person obtaining a1365 * copy of this software and associated documentation files (the "Software"),1366 * to deal in the Software without restriction, including without limitation1367 * the rights to use, copy, modify, merge, publish, distribute, sublicense,1368 * and/or sell copies of the Software, and to permit persons to whom the1369 * Software is furnished to do so, subject to the following conditions:1370 *1371 * The above copyright notice and this permission notice shall be included in1372 * all copies or substantial portions of the Software.1373 *1374 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR1375 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,1376 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE1377 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER1378 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING1379 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER1380 * DEALINGS IN THE SOFTWARE.1381 */13821383/*** Atomic operations ***/1384#if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 401001385#define _ReadWriteBarrier() __sync_synchronize()1386#else1387static __inline__ __attribute__((always_inline))long__sync_lock_test_and_set(volatilelong*const Target,const long Value)1388{1389long res;1390 __asm__ __volatile__("xchg%z0%2,%0":"=g"(*(Target)),"=r"(res) :"1"(Value));1391return res;1392}1393static void __inline__ __attribute__((always_inline))_MemoryBarrier(void)1394{1395 __asm__ __volatile__("": : :"memory");1396}1397#define _ReadWriteBarrier() _MemoryBarrier()1398#endif1399/* BUGBUG: GCC only supports full barriers */1400static __inline__ __attribute__((always_inline))long_InterlockedExchange(volatilelong*const Target,const long Value)1401{1402/* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */1403_ReadWriteBarrier();1404return__sync_lock_test_and_set(Target, Value);1405}1406/* --[ end GCC compatibility ]---------------------------------------------- */1407#endif1408#define interlockedcompareexchange _InterlockedCompareExchange1409#define interlockedexchange _InterlockedExchange1410#endif/* Win32 */1411#endif/* USE_LOCKS */14121413/* Declarations for bit scanning on win32 */1414#if defined(_MSC_VER) && _MSC_VER>=13001415#ifndef BitScanForward/* Try to avoid pulling in WinNT.h */1416#ifdef __cplusplus1417extern"C"{1418#endif/* __cplusplus */1419unsigned char_BitScanForward(unsigned long*index,unsigned long mask);1420unsigned char_BitScanReverse(unsigned long*index,unsigned long mask);1421#ifdef __cplusplus1422}1423#endif/* __cplusplus */14241425#define BitScanForward _BitScanForward1426#define BitScanReverse _BitScanReverse1427#pragma intrinsic(_BitScanForward)1428#pragma intrinsic(_BitScanReverse)1429#endif/* BitScanForward */1430#endif/* defined(_MSC_VER) && _MSC_VER>=1300 */14311432#ifndef WIN321433#ifndef malloc_getpagesize1434# ifdef _SC_PAGESIZE/* some SVR4 systems omit an underscore */1435# ifndef _SC_PAGE_SIZE1436# define _SC_PAGE_SIZE _SC_PAGESIZE1437# endif1438# endif1439# ifdef _SC_PAGE_SIZE1440# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)1441# else1442# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)1443externsize_tgetpagesize();1444# define malloc_getpagesize getpagesize()1445# else1446# ifdef WIN32/* use supplied emulation of getpagesize */1447# define malloc_getpagesize getpagesize()1448# else1449# ifndef LACKS_SYS_PARAM_H1450# include <sys/param.h>1451# endif1452# ifdef EXEC_PAGESIZE1453# define malloc_getpagesize EXEC_PAGESIZE1454# else1455# ifdef NBPG1456# ifndef CLSIZE1457# define malloc_getpagesize NBPG1458# else1459# define malloc_getpagesize (NBPG * CLSIZE)1460# endif1461# else1462# ifdef NBPC1463# define malloc_getpagesize NBPC1464# else1465# ifdef PAGESIZE1466# define malloc_getpagesize PAGESIZE1467# else/* just guess */1468# define malloc_getpagesize ((size_t)4096U)1469# endif1470# endif1471# endif1472# endif1473# endif1474# endif1475# endif1476#endif1477#endif1478147914801481/* ------------------- size_t and alignment properties -------------------- */14821483/* The byte and bit size of a size_t */1484#define SIZE_T_SIZE (sizeof(size_t))1485#define SIZE_T_BITSIZE (sizeof(size_t) << 3)14861487/* Some constants coerced to size_t */1488/* Annoying but necessary to avoid errors on some platforms */1489#define SIZE_T_ZERO ((size_t)0)1490#define SIZE_T_ONE ((size_t)1)1491#define SIZE_T_TWO ((size_t)2)1492#define SIZE_T_FOUR ((size_t)4)1493#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)1494#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)1495#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)1496#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)14971498/* The bit mask value corresponding to MALLOC_ALIGNMENT */1499#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)15001501/* True if address a has acceptable alignment */1502#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)15031504/* the number of bytes to offset an address to align it */1505#define align_offset(A)\1506 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\1507 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))15081509/* -------------------------- MMAP preliminaries ------------------------- */15101511/*1512 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and1513 checks to fail so compiler optimizer can delete code rather than1514 using so many "#if"s.1515*/151615171518/* MORECORE and MMAP must return MFAIL on failure */1519#define MFAIL ((void*)(MAX_SIZE_T))1520#define CMFAIL ((char*)(MFAIL))/* defined for convenience */15211522#if HAVE_MMAP15231524#ifndef WIN321525#define MUNMAP_DEFAULT(a, s) munmap((a), (s))1526#define MMAP_PROT (PROT_READ|PROT_WRITE)1527#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)1528#define MAP_ANONYMOUS MAP_ANON1529#endif/* MAP_ANON */1530#ifdef MAP_ANONYMOUS1531#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)1532#define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)1533#else/* MAP_ANONYMOUS */1534/*1535 Nearly all versions of mmap support MAP_ANONYMOUS, so the following1536 is unlikely to be needed, but is supplied just in case.1537*/1538#define MMAP_FLAGS (MAP_PRIVATE)1539static int dev_zero_fd = -1;/* Cached file descriptor for /dev/zero. */1540#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \1541 (dev_zero_fd = open("/dev/zero", O_RDWR), \1542 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \1543 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))1544#endif/* MAP_ANONYMOUS */15451546#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)15471548#else/* WIN32 */15491550/* Win32 MMAP via VirtualAlloc */1551static FORCEINLINE void*win32mmap(size_t size) {1552void* ptr =VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);1553return(ptr !=0)? ptr: MFAIL;1554}15551556/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */1557static FORCEINLINE void*win32direct_mmap(size_t size) {1558void* ptr =VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,1559 PAGE_READWRITE);1560return(ptr !=0)? ptr: MFAIL;1561}15621563/* This function supports releasing coalesed segments */1564static FORCEINLINE intwin32munmap(void* ptr,size_t size) {1565 MEMORY_BASIC_INFORMATION minfo;1566char* cptr = (char*)ptr;1567while(size) {1568if(VirtualQuery(cptr, &minfo,sizeof(minfo)) ==0)1569return-1;1570if(minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||1571 minfo.State != MEM_COMMIT || minfo.RegionSize > size)1572return-1;1573if(VirtualFree(cptr,0, MEM_RELEASE) ==0)1574return-1;1575 cptr += minfo.RegionSize;1576 size -= minfo.RegionSize;1577}1578return0;1579}15801581#define MMAP_DEFAULT(s) win32mmap(s)1582#define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))1583#define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)1584#endif/* WIN32 */1585#endif/* HAVE_MMAP */15861587#if HAVE_MREMAP1588#ifndef WIN321589#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))1590#endif/* WIN32 */1591#endif/* HAVE_MREMAP */159215931594/**1595 * Define CALL_MORECORE1596 */1597#if HAVE_MORECORE1598#ifdef MORECORE1599#define CALL_MORECORE(S) MORECORE(S)1600#else/* MORECORE */1601#define CALL_MORECORE(S) MORECORE_DEFAULT(S)1602#endif/* MORECORE */1603#else/* HAVE_MORECORE */1604#define CALL_MORECORE(S) MFAIL1605#endif/* HAVE_MORECORE */16061607/**1608 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP1609 */1610#if HAVE_MMAP1611#define IS_MMAPPED_BIT (SIZE_T_ONE)1612#define USE_MMAP_BIT (SIZE_T_ONE)16131614#ifdef MMAP1615#define CALL_MMAP(s) MMAP(s)1616#else/* MMAP */1617#define CALL_MMAP(s) MMAP_DEFAULT(s)1618#endif/* MMAP */1619#ifdef MUNMAP1620#define CALL_MUNMAP(a, s) MUNMAP((a), (s))1621#else/* MUNMAP */1622#define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))1623#endif/* MUNMAP */1624#ifdef DIRECT_MMAP1625#define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)1626#else/* DIRECT_MMAP */1627#define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)1628#endif/* DIRECT_MMAP */1629#else/* HAVE_MMAP */1630#define IS_MMAPPED_BIT (SIZE_T_ZERO)1631#define USE_MMAP_BIT (SIZE_T_ZERO)16321633#define MMAP(s) MFAIL1634#define MUNMAP(a, s) (-1)1635#define DIRECT_MMAP(s) MFAIL1636#define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)1637#define CALL_MMAP(s) MMAP(s)1638#define CALL_MUNMAP(a, s) MUNMAP((a), (s))1639#endif/* HAVE_MMAP */16401641/**1642 * Define CALL_MREMAP1643 */1644#if HAVE_MMAP && HAVE_MREMAP1645#ifdef MREMAP1646#define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))1647#else/* MREMAP */1648#define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))1649#endif/* MREMAP */1650#else/* HAVE_MMAP && HAVE_MREMAP */1651#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL1652#endif/* HAVE_MMAP && HAVE_MREMAP */16531654/* mstate bit set if continguous morecore disabled or failed */1655#define USE_NONCONTIGUOUS_BIT (4U)16561657/* segment bit set in create_mspace_with_base */1658#define EXTERN_BIT (8U)165916601661/* --------------------------- Lock preliminaries ------------------------ */16621663/*1664 When locks are defined, there is one global lock, plus1665 one per-mspace lock.16661667 The global lock_ensures that mparams.magic and other unique1668 mparams values are initialized only once. It also protects1669 sequences of calls to MORECORE. In many cases sys_alloc requires1670 two calls, that should not be interleaved with calls by other1671 threads. This does not protect against direct calls to MORECORE1672 by other threads not using this lock, so there is still code to1673 cope the best we can on interference.16741675 Per-mspace locks surround calls to malloc, free, etc. To enable use1676 in layered extensions, per-mspace locks are reentrant.16771678 Because lock-protected regions generally have bounded times, it is1679 OK to use the supplied simple spinlocks in the custom versions for1680 x86.16811682 If USE_LOCKS is > 1, the definitions of lock routines here are1683 bypassed, in which case you will need to define at least1684 INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK1685 (which is not used in this malloc, but commonly needed in1686 extensions.)1687*/16881689#if USE_LOCKS == 116901691#if USE_SPIN_LOCKS1692#ifndef WIN3216931694/* Custom pthread-style spin locks on x86 and x64 for gcc */1695struct pthread_mlock_t {1696volatileunsigned int l;1697volatileunsigned int c;1698volatile pthread_t threadid;1699};1700#define MLOCK_T struct pthread_mlock_t1701#define CURRENT_THREAD pthread_self()1702#define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0)1703#define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl)1704#define RELEASE_LOCK(sl) pthread_release_lock(sl)1705#define TRY_LOCK(sl) pthread_try_lock(sl)1706#define SPINS_PER_YIELD 6317071708static MLOCK_T malloc_global_mutex = {0,0,0};17091710static FORCEINLINE intpthread_acquire_lock(MLOCK_T *sl) {1711int spins =0;1712volatileunsigned int* lp = &sl->l;1713for(;;) {1714if(*lp !=0) {1715if(sl->threadid == CURRENT_THREAD) {1716++sl->c;1717return0;1718}1719}1720else{1721/* place args to cmpxchgl in locals to evade oddities in some gccs */1722int cmp =0;1723int val =1;1724int ret;1725 __asm__ __volatile__("lock; cmpxchgl%1,%2"1726:"=a"(ret)1727:"r"(val),"m"(*(lp)),"0"(cmp)1728:"memory","cc");1729if(!ret) {1730assert(!sl->threadid);1731 sl->c =1;1732 sl->threadid = CURRENT_THREAD;1733return0;1734}1735if((++spins & SPINS_PER_YIELD) ==0) {1736#if defined (__SVR4) && defined (__sun)/* solaris */1737thr_yield();1738#else1739#if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)1740sched_yield();1741#else/* no-op yield on unknown systems */1742;1743#endif/* __linux__ || __FreeBSD__ || __APPLE__ */1744#endif/* solaris */1745}1746}1747}1748}17491750static FORCEINLINE voidpthread_release_lock(MLOCK_T *sl) {1751assert(sl->l !=0);1752assert(sl->threadid == CURRENT_THREAD);1753if(--sl->c ==0) {1754 sl->threadid =0;1755volatileunsigned int* lp = &sl->l;1756int prev =0;1757int ret;1758 __asm__ __volatile__("lock; xchgl%0,%1"1759:"=r"(ret)1760:"m"(*(lp)),"0"(prev)1761:"memory");1762}1763}17641765static FORCEINLINE intpthread_try_lock(MLOCK_T *sl) {1766volatileunsigned int* lp = &sl->l;1767if(*lp !=0) {1768if(sl->threadid == CURRENT_THREAD) {1769++sl->c;1770return1;1771}1772}1773else{1774int cmp =0;1775int val =1;1776int ret;1777 __asm__ __volatile__("lock; cmpxchgl%1,%2"1778:"=a"(ret)1779:"r"(val),"m"(*(lp)),"0"(cmp)1780:"memory","cc");1781if(!ret) {1782assert(!sl->threadid);1783 sl->c =1;1784 sl->threadid = CURRENT_THREAD;1785return1;1786}1787}1788return0;1789}179017911792#else/* WIN32 */1793/* Custom win32-style spin locks on x86 and x64 for MSC */1794struct win32_mlock_t1795{1796volatilelong l;1797volatileunsigned int c;1798volatilelong threadid;1799};18001801#define MLOCK_T struct win32_mlock_t1802#define CURRENT_THREAD win32_getcurrentthreadid()1803#define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0)1804#define ACQUIRE_LOCK(sl) win32_acquire_lock(sl)1805#define RELEASE_LOCK(sl) win32_release_lock(sl)1806#define TRY_LOCK(sl) win32_try_lock(sl)1807#define SPINS_PER_YIELD 6318081809static MLOCK_T malloc_global_mutex = {0,0,0};18101811static FORCEINLINE longwin32_getcurrentthreadid(void) {1812#ifdef _MSC_VER1813#if defined(_M_IX86)1814long*threadstruct=(long*)__readfsdword(0x18);1815long threadid=threadstruct[0x24/sizeof(long)];1816return threadid;1817#elif defined(_M_X64)1818/* todo */1819returnGetCurrentThreadId();1820#else1821returnGetCurrentThreadId();1822#endif1823#else1824returnGetCurrentThreadId();1825#endif1826}18271828static FORCEINLINE intwin32_acquire_lock(MLOCK_T *sl) {1829int spins =0;1830for(;;) {1831if(sl->l !=0) {1832if(sl->threadid == CURRENT_THREAD) {1833++sl->c;1834return0;1835}1836}1837else{1838if(!interlockedexchange(&sl->l,1)) {1839assert(!sl->threadid);1840 sl->c=CURRENT_THREAD;1841 sl->threadid = CURRENT_THREAD;1842 sl->c =1;1843return0;1844}1845}1846if((++spins & SPINS_PER_YIELD) ==0)1847SleepEx(0, FALSE);1848}1849}18501851static FORCEINLINE voidwin32_release_lock(MLOCK_T *sl) {1852assert(sl->threadid == CURRENT_THREAD);1853assert(sl->l !=0);1854if(--sl->c ==0) {1855 sl->threadid =0;1856interlockedexchange(&sl->l,0);1857}1858}18591860static FORCEINLINE intwin32_try_lock(MLOCK_T *sl) {1861if(sl->l !=0) {1862if(sl->threadid == CURRENT_THREAD) {1863++sl->c;1864return1;1865}1866}1867else{1868if(!interlockedexchange(&sl->l,1)){1869assert(!sl->threadid);1870 sl->threadid = CURRENT_THREAD;1871 sl->c =1;1872return1;1873}1874}1875return0;1876}18771878#endif/* WIN32 */1879#else/* USE_SPIN_LOCKS */18801881#ifndef WIN321882/* pthreads-based locks */18831884#define MLOCK_T pthread_mutex_t1885#define CURRENT_THREAD pthread_self()1886#define INITIAL_LOCK(sl) pthread_init_lock(sl)1887#define ACQUIRE_LOCK(sl) pthread_mutex_lock(sl)1888#define RELEASE_LOCK(sl) pthread_mutex_unlock(sl)1889#define TRY_LOCK(sl) (!pthread_mutex_trylock(sl))18901891static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;18921893/* Cope with old-style linux recursive lock initialization by adding */1894/* skipped internal declaration from pthread.h */1895#ifdef linux1896#ifndef PTHREAD_MUTEX_RECURSIVE1897externint pthread_mutexattr_setkind_np __P((pthread_mutexattr_t *__attr,1898int __kind));1899#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP1900#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)1901#endif1902#endif19031904static intpthread_init_lock(MLOCK_T *sl) {1905 pthread_mutexattr_t attr;1906if(pthread_mutexattr_init(&attr))return1;1907if(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE))return1;1908if(pthread_mutex_init(sl, &attr))return1;1909if(pthread_mutexattr_destroy(&attr))return1;1910return0;1911}19121913#else/* WIN32 */1914/* Win32 critical sections */1915#define MLOCK_T CRITICAL_SECTION1916#define CURRENT_THREAD GetCurrentThreadId()1917#define INITIAL_LOCK(s) (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000))1918#define ACQUIRE_LOCK(s) (EnterCriticalSection(s), 0)1919#define RELEASE_LOCK(s) LeaveCriticalSection(s)1920#define TRY_LOCK(s) TryEnterCriticalSection(s)1921#define NEED_GLOBAL_LOCK_INIT19221923static MLOCK_T malloc_global_mutex;1924staticvolatilelong malloc_global_mutex_status;19251926/* Use spin loop to initialize global lock */1927static voidinit_malloc_global_mutex() {1928for(;;) {1929long stat = malloc_global_mutex_status;1930if(stat >0)1931return;1932/* transition to < 0 while initializing, then to > 0) */1933if(stat ==0&&1934interlockedcompareexchange(&malloc_global_mutex_status, -1,0) ==0) {1935InitializeCriticalSection(&malloc_global_mutex);1936interlockedexchange(&malloc_global_mutex_status,1);1937return;1938}1939SleepEx(0, FALSE);1940}1941}19421943#endif/* WIN32 */1944#endif/* USE_SPIN_LOCKS */1945#endif/* USE_LOCKS == 1 */19461947/* ----------------------- User-defined locks ------------------------ */19481949#if USE_LOCKS > 11950/* Define your own lock implementation here */1951/* #define INITIAL_LOCK(sl) ... */1952/* #define ACQUIRE_LOCK(sl) ... */1953/* #define RELEASE_LOCK(sl) ... */1954/* #define TRY_LOCK(sl) ... */1955/* static MLOCK_T malloc_global_mutex = ... */1956#endif/* USE_LOCKS > 1 */19571958/* ----------------------- Lock-based state ------------------------ */19591960#if USE_LOCKS1961#define USE_LOCK_BIT (2U)1962#else/* USE_LOCKS */1963#define USE_LOCK_BIT (0U)1964#define INITIAL_LOCK(l)1965#endif/* USE_LOCKS */19661967#if USE_LOCKS1968#define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);1969#define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);1970#else/* USE_LOCKS */1971#define ACQUIRE_MALLOC_GLOBAL_LOCK()1972#define RELEASE_MALLOC_GLOBAL_LOCK()1973#endif/* USE_LOCKS */197419751976/* ----------------------- Chunk representations ------------------------ */19771978/*1979 (The following includes lightly edited explanations by Colin Plumb.)19801981 The malloc_chunk declaration below is misleading (but accurate and1982 necessary). It declares a "view" into memory allowing access to1983 necessary fields at known offsets from a given base.19841985 Chunks of memory are maintained using a `boundary tag' method as1986 originally described by Knuth. (See the paper by Paul Wilson1987 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such1988 techniques.) Sizes of free chunks are stored both in the front of1989 each chunk and at the end. This makes consolidating fragmented1990 chunks into bigger chunks fast. The head fields also hold bits1991 representing whether chunks are free or in use.19921993 Here are some pictures to make it clearer. They are "exploded" to1994 show that the state of a chunk can be thought of as extending from1995 the high 31 bits of the head field of its header through the1996 prev_foot and PINUSE_BIT bit of the following chunk header.19971998 A chunk that's in use looks like:19992000 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2001 | Size of previous chunk (if P = 0) |2002 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2003 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|2004 | Size of this chunk 1| +-+2005 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2006 | |2007 +- -+2008 | |2009 +- -+2010 | :2011 +- size - sizeof(size_t) available payload bytes -+2012 : |2013 chunk-> +- -+2014 | |2015 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2016 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|2017 | Size of next chunk (may or may not be in use) | +-+2018 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+20192020 And if it's free, it looks like this:20212022 chunk-> +- -+2023 | User payload (must be in use, or we would have merged!) |2024 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2025 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|2026 | Size of this chunk 0| +-+2027 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2028 | Next pointer |2029 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2030 | Prev pointer |2031 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2032 | :2033 +- size - sizeof(struct chunk) unused bytes -+2034 : |2035 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2036 | Size of this chunk |2037 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2038 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|2039 | Size of next chunk (must be in use, or we would have merged)| +-+2040 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2041 | :2042 +- User payload -+2043 : |2044 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2045 |0|2046 +-+2047 Note that since we always merge adjacent free chunks, the chunks2048 adjacent to a free chunk must be in use.20492050 Given a pointer to a chunk (which can be derived trivially from the2051 payload pointer) we can, in O(1) time, find out whether the adjacent2052 chunks are free, and if so, unlink them from the lists that they2053 are on and merge them with the current chunk.20542055 Chunks always begin on even word boundaries, so the mem portion2056 (which is returned to the user) is also on an even word boundary, and2057 thus at least double-word aligned.20582059 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the2060 chunk size (which is always a multiple of two words), is an in-use2061 bit for the *previous* chunk. If that bit is *clear*, then the2062 word before the current chunk size contains the previous chunk2063 size, and can be used to find the front of the previous chunk.2064 The very first chunk allocated always has this bit set, preventing2065 access to non-existent (or non-owned) memory. If pinuse is set for2066 any given chunk, then you CANNOT determine the size of the2067 previous chunk, and might even get a memory addressing fault when2068 trying to do so.20692070 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of2071 the chunk size redundantly records whether the current chunk is2072 inuse. This redundancy enables usage checks within free and realloc,2073 and reduces indirection when freeing and consolidating chunks.20742075 Each freshly allocated chunk must have both cinuse and pinuse set.2076 That is, each allocated chunk borders either a previously allocated2077 and still in-use chunk, or the base of its memory arena. This is2078 ensured by making all allocations from the `lowest' part of any2079 found chunk. Further, no free chunk physically borders another one,2080 so each free chunk is known to be preceded and followed by either2081 inuse chunks or the ends of memory.20822083 Note that the `foot' of the current chunk is actually represented2084 as the prev_foot of the NEXT chunk. This makes it easier to2085 deal with alignments etc but can be very confusing when trying2086 to extend or adapt this code.20872088 The exceptions to all this are20892090 1. The special chunk `top' is the top-most available chunk (i.e.,2091 the one bordering the end of available memory). It is treated2092 specially. Top is never included in any bin, is used only if2093 no other chunk is available, and is released back to the2094 system if it is very large (see M_TRIM_THRESHOLD). In effect,2095 the top chunk is treated as larger (and thus less well2096 fitting) than any other available chunk. The top chunk2097 doesn't update its trailing size field since there is no next2098 contiguous chunk that would have to index off it. However,2099 space is still allocated for it (TOP_FOOT_SIZE) to enable2100 separation or merging when space is extended.21012102 3. Chunks allocated via mmap, which have the lowest-order bit2103 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set2104 PINUSE_BIT in their head fields. Because they are allocated2105 one-by-one, each must carry its own prev_foot field, which is2106 also used to hold the offset this chunk has within its mmapped2107 region, which is needed to preserve alignment. Each mmapped2108 chunk is trailed by the first two fields of a fake next-chunk2109 for sake of usage checks.21102111*/21122113struct malloc_chunk {2114size_t prev_foot;/* Size of previous chunk (if free). */2115size_t head;/* Size and inuse bits. */2116struct malloc_chunk* fd;/* double links -- used only if free. */2117struct malloc_chunk* bk;2118};21192120typedefstruct malloc_chunk mchunk;2121typedefstruct malloc_chunk* mchunkptr;2122typedefstruct malloc_chunk* sbinptr;/* The type of bins of chunks */2123typedefunsigned int bindex_t;/* Described below */2124typedefunsigned int binmap_t;/* Described below */2125typedefunsigned int flag_t;/* The type of various bit flag sets */21262127/* ------------------- Chunks sizes and alignments ----------------------- */21282129#define MCHUNK_SIZE (sizeof(mchunk))21302131#if FOOTERS2132#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)2133#else/* FOOTERS */2134#define CHUNK_OVERHEAD (SIZE_T_SIZE)2135#endif/* FOOTERS */21362137/* MMapped chunks need a second word of overhead ... */2138#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)2139/* ... and additional padding for fake next-chunk at foot */2140#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)21412142/* The smallest size we can malloc is an aligned minimal chunk */2143#define MIN_CHUNK_SIZE\2144 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)21452146/* conversion from malloc headers to user pointers, and back */2147#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))2148#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))2149/* chunk associated with aligned address A */2150#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))21512152/* Bounds on request (not chunk) sizes. */2153#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)2154#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)21552156/* pad request bytes into a usable size */2157#define pad_request(req) \2158 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)21592160/* pad request, checking for minimum (but not maximum) */2161#define request2size(req) \2162 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))216321642165/* ------------------ Operations on head and foot fields ----------------- */21662167/*2168 The head field of a chunk is or'ed with PINUSE_BIT when previous2169 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in2170 use. If the chunk was obtained with mmap, the prev_foot field has2171 IS_MMAPPED_BIT set, otherwise holding the offset of the base of the2172 mmapped region to the base of the chunk.21732174 FLAG4_BIT is not used by this malloc, but might be useful in extensions.2175*/21762177#define PINUSE_BIT (SIZE_T_ONE)2178#define CINUSE_BIT (SIZE_T_TWO)2179#define FLAG4_BIT (SIZE_T_FOUR)2180#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)2181#define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)21822183/* Head value for fenceposts */2184#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)21852186/* extraction of fields from head words */2187#define cinuse(p) ((p)->head & CINUSE_BIT)2188#define pinuse(p) ((p)->head & PINUSE_BIT)2189#define chunksize(p) ((p)->head & ~(FLAG_BITS))21902191#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)2192#define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)21932194/* Treat space at ptr +/- offset as a chunk */2195#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))2196#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))21972198/* Ptr to next or previous physical malloc_chunk. */2199#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))2200#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))22012202/* extract next chunk's pinuse bit */2203#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)22042205/* Get/set size at footer */2206#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)2207#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))22082209/* Set size, pinuse bit, and foot */2210#define set_size_and_pinuse_of_free_chunk(p, s)\2211 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))22122213/* Set size, pinuse bit, foot, and clear next pinuse */2214#define set_free_with_pinuse(p, s, n)\2215 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))22162217#define is_mmapped(p)\2218 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))22192220/* Get the internal overhead associated with chunk p */2221#define overhead_for(p)\2222 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)22232224/* Return true if malloced space is not necessarily cleared */2225#if MMAP_CLEARS2226#define calloc_must_clear(p) (!is_mmapped(p))2227#else/* MMAP_CLEARS */2228#define calloc_must_clear(p) (1)2229#endif/* MMAP_CLEARS */22302231/* ---------------------- Overlaid data structures ----------------------- */22322233/*2234 When chunks are not in use, they are treated as nodes of either2235 lists or trees.22362237 "Small" chunks are stored in circular doubly-linked lists, and look2238 like this:22392240 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2241 | Size of previous chunk |2242 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2243 `head:' | Size of chunk, in bytes |P|2244 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2245 | Forward pointer to next chunk in list |2246 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2247 | Back pointer to previous chunk in list |2248 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2249 | Unused space (may be 0 bytes long) .2250 . .2251 . |2252nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2253 `foot:' | Size of chunk, in bytes |2254 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+22552256 Larger chunks are kept in a form of bitwise digital trees (aka2257 tries) keyed on chunksizes. Because malloc_tree_chunks are only for2258 free chunks greater than 256 bytes, their size doesn't impose any2259 constraints on user chunk sizes. Each node looks like:22602261 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2262 | Size of previous chunk |2263 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2264 `head:' | Size of chunk, in bytes |P|2265 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2266 | Forward pointer to next chunk of same size |2267 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2268 | Back pointer to previous chunk of same size |2269 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2270 | Pointer to left child (child[0]) |2271 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2272 | Pointer to right child (child[1]) |2273 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2274 | Pointer to parent |2275 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2276 | bin index of this chunk |2277 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2278 | Unused space .2279 . |2280nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2281 `foot:' | Size of chunk, in bytes |2282 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+22832284 Each tree holding treenodes is a tree of unique chunk sizes. Chunks2285 of the same size are arranged in a circularly-linked list, with only2286 the oldest chunk (the next to be used, in our FIFO ordering)2287 actually in the tree. (Tree members are distinguished by a non-null2288 parent pointer.) If a chunk with the same size as an existing node2289 is inserted, it is linked off the existing node using pointers that2290 work in the same way as fd/bk pointers of small chunks.22912292 Each tree contains a power of 2 sized range of chunk sizes (the2293 smallest is 0x100 <= x < 0x180), which is divided in half at each2294 tree level, with the chunks in the smaller half of the range (0x1002295 <= x < 0x140 for the top nose) in the left subtree and the larger2296 half (0x140 <= x < 0x180) in the right subtree. This is, of course,2297 done by inspecting individual bits.22982299 Using these rules, each node's left subtree contains all smaller2300 sizes than its right subtree. However, the node at the root of each2301 subtree has no particular ordering relationship to either. (The2302 dividing line between the subtree sizes is based on trie relation.)2303 If we remove the last chunk of a given size from the interior of the2304 tree, we need to replace it with a leaf node. The tree ordering2305 rules permit a node to be replaced by any leaf below it.23062307 The smallest chunk in a tree (a common operation in a best-fit2308 allocator) can be found by walking a path to the leftmost leaf in2309 the tree. Unlike a usual binary tree, where we follow left child2310 pointers until we reach a null, here we follow the right child2311 pointer any time the left one is null, until we reach a leaf with2312 both child pointers null. The smallest chunk in the tree will be2313 somewhere along that path.23142315 The worst case number of steps to add, find, or remove a node is2316 bounded by the number of bits differentiating chunks within2317 bins. Under current bin calculations, this ranges from 6 up to 212318 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case2319 is of course much better.2320*/23212322struct malloc_tree_chunk {2323/* The first four fields must be compatible with malloc_chunk */2324size_t prev_foot;2325size_t head;2326struct malloc_tree_chunk* fd;2327struct malloc_tree_chunk* bk;23282329struct malloc_tree_chunk* child[2];2330struct malloc_tree_chunk* parent;2331 bindex_t index;2332};23332334typedefstruct malloc_tree_chunk tchunk;2335typedefstruct malloc_tree_chunk* tchunkptr;2336typedefstruct malloc_tree_chunk* tbinptr;/* The type of bins of trees */23372338/* A little helper macro for trees */2339#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])23402341/* ----------------------------- Segments -------------------------------- */23422343/*2344 Each malloc space may include non-contiguous segments, held in a2345 list headed by an embedded malloc_segment record representing the2346 top-most space. Segments also include flags holding properties of2347 the space. Large chunks that are directly allocated by mmap are not2348 included in this list. They are instead independently created and2349 destroyed without otherwise keeping track of them.23502351 Segment management mainly comes into play for spaces allocated by2352 MMAP. Any call to MMAP might or might not return memory that is2353 adjacent to an existing segment. MORECORE normally contiguously2354 extends the current space, so this space is almost always adjacent,2355 which is simpler and faster to deal with. (This is why MORECORE is2356 used preferentially to MMAP when both are available -- see2357 sys_alloc.) When allocating using MMAP, we don't use any of the2358 hinting mechanisms (inconsistently) supported in various2359 implementations of unix mmap, or distinguish reserving from2360 committing memory. Instead, we just ask for space, and exploit2361 contiguity when we get it. It is probably possible to do2362 better than this on some systems, but no general scheme seems2363 to be significantly better.23642365 Management entails a simpler variant of the consolidation scheme2366 used for chunks to reduce fragmentation -- new adjacent memory is2367 normally prepended or appended to an existing segment. However,2368 there are limitations compared to chunk consolidation that mostly2369 reflect the fact that segment processing is relatively infrequent2370 (occurring only when getting memory from system) and that we2371 don't expect to have huge numbers of segments:23722373 * Segments are not indexed, so traversal requires linear scans. (It2374 would be possible to index these, but is not worth the extra2375 overhead and complexity for most programs on most platforms.)2376 * New segments are only appended to old ones when holding top-most2377 memory; if they cannot be prepended to others, they are held in2378 different segments.23792380 Except for the top-most segment of an mstate, each segment record2381 is kept at the tail of its segment. Segments are added by pushing2382 segment records onto the list headed by &mstate.seg for the2383 containing mstate.23842385 Segment flags control allocation/merge/deallocation policies:2386 * If EXTERN_BIT set, then we did not allocate this segment,2387 and so should not try to deallocate or merge with others.2388 (This currently holds only for the initial segment passed2389 into create_mspace_with_base.)2390 * If IS_MMAPPED_BIT set, the segment may be merged with2391 other surrounding mmapped segments and trimmed/de-allocated2392 using munmap.2393 * If neither bit is set, then the segment was obtained using2394 MORECORE so can be merged with surrounding MORECORE'd segments2395 and deallocated/trimmed using MORECORE with negative arguments.2396*/23972398struct malloc_segment {2399char* base;/* base address */2400size_t size;/* allocated size */2401struct malloc_segment* next;/* ptr to next segment */2402 flag_t sflags;/* mmap and extern flag */2403};24042405#define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)2406#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)24072408typedefstruct malloc_segment msegment;2409typedefstruct malloc_segment* msegmentptr;24102411/* ---------------------------- malloc_state ----------------------------- */24122413/*2414 A malloc_state holds all of the bookkeeping for a space.2415 The main fields are:24162417 Top2418 The topmost chunk of the currently active segment. Its size is2419 cached in topsize. The actual size of topmost space is2420 topsize+TOP_FOOT_SIZE, which includes space reserved for adding2421 fenceposts and segment records if necessary when getting more2422 space from the system. The size at which to autotrim top is2423 cached from mparams in trim_check, except that it is disabled if2424 an autotrim fails.24252426 Designated victim (dv)2427 This is the preferred chunk for servicing small requests that2428 don't have exact fits. It is normally the chunk split off most2429 recently to service another small request. Its size is cached in2430 dvsize. The link fields of this chunk are not maintained since it2431 is not kept in a bin.24322433 SmallBins2434 An array of bin headers for free chunks. These bins hold chunks2435 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains2436 chunks of all the same size, spaced 8 bytes apart. To simplify2437 use in double-linked lists, each bin header acts as a malloc_chunk2438 pointing to the real first node, if it exists (else pointing to2439 itself). This avoids special-casing for headers. But to avoid2440 waste, we allocate only the fd/bk pointers of bins, and then use2441 repositioning tricks to treat these as the fields of a chunk.24422443 TreeBins2444 Treebins are pointers to the roots of trees holding a range of2445 sizes. There are 2 equally spaced treebins for each power of two2446 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything2447 larger.24482449 Bin maps2450 There is one bit map for small bins ("smallmap") and one for2451 treebins ("treemap). Each bin sets its bit when non-empty, and2452 clears the bit when empty. Bit operations are then used to avoid2453 bin-by-bin searching -- nearly all "search" is done without ever2454 looking at bins that won't be selected. The bit maps2455 conservatively use 32 bits per map word, even if on 64bit system.2456 For a good description of some of the bit-based techniques used2457 here, see Henry S. Warren Jr's book "Hacker's Delight" (and2458 supplement at http://hackersdelight.org/). Many of these are2459 intended to reduce the branchiness of paths through malloc etc, as2460 well as to reduce the number of memory locations read or written.24612462 Segments2463 A list of segments headed by an embedded malloc_segment record2464 representing the initial space.24652466 Address check support2467 The least_addr field is the least address ever obtained from2468 MORECORE or MMAP. Attempted frees and reallocs of any address less2469 than this are trapped (unless INSECURE is defined).24702471 Magic tag2472 A cross-check field that should always hold same value as mparams.magic.24732474 Flags2475 Bits recording whether to use MMAP, locks, or contiguous MORECORE24762477 Statistics2478 Each space keeps track of current and maximum system memory2479 obtained via MORECORE or MMAP.24802481 Trim support2482 Fields holding the amount of unused topmost memory that should trigger2483 timming, and a counter to force periodic scanning to release unused2484 non-topmost segments.24852486 Locking2487 If USE_LOCKS is defined, the "mutex" lock is acquired and released2488 around every public call using this mspace.24892490 Extension support2491 A void* pointer and a size_t field that can be used to help implement2492 extensions to this malloc.2493*/24942495/* Bin types, widths and sizes */2496#define NSMALLBINS (32U)2497#define NTREEBINS (32U)2498#define SMALLBIN_SHIFT (3U)2499#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)2500#define TREEBIN_SHIFT (8U)2501#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)2502#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)2503#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)25042505struct malloc_state {2506 binmap_t smallmap;2507 binmap_t treemap;2508size_t dvsize;2509size_t topsize;2510char* least_addr;2511 mchunkptr dv;2512 mchunkptr top;2513size_t trim_check;2514size_t release_checks;2515size_t magic;2516 mchunkptr smallbins[(NSMALLBINS+1)*2];2517 tbinptr treebins[NTREEBINS];2518size_t footprint;2519size_t max_footprint;2520 flag_t mflags;2521#if USE_LOCKS2522 MLOCK_T mutex;/* locate lock among fields that rarely change */2523#endif/* USE_LOCKS */2524 msegment seg;2525void* extp;/* Unused but available for extensions */2526size_t exts;2527};25282529typedefstruct malloc_state* mstate;25302531/* ------------- Global malloc_state and malloc_params ------------------- */25322533/*2534 malloc_params holds global properties, including those that can be2535 dynamically set using mallopt. There is a single instance, mparams,2536 initialized in init_mparams. Note that the non-zeroness of "magic"2537 also serves as an initialization flag.2538*/25392540struct malloc_params {2541volatilesize_t magic;2542size_t page_size;2543size_t granularity;2544size_t mmap_threshold;2545size_t trim_threshold;2546 flag_t default_mflags;2547};25482549static struct malloc_params mparams;25502551/* Ensure mparams initialized */2552#define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams()))25532554#if !ONLY_MSPACES25552556/* The global malloc_state used for all non-"mspace" calls */2557static struct malloc_state _gm_;2558#define gm (&_gm_)2559#define is_global(M) ((M) == &_gm_)25602561#endif/* !ONLY_MSPACES */25622563#define is_initialized(M) ((M)->top != 0)25642565/* -------------------------- system alloc setup ------------------------- */25662567/* Operations on mflags */25682569#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)2570#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)2571#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)25722573#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)2574#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)2575#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)25762577#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)2578#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)25792580#define set_lock(M,L)\2581 ((M)->mflags = (L)?\2582 ((M)->mflags | USE_LOCK_BIT) :\2583 ((M)->mflags & ~USE_LOCK_BIT))25842585/* page-align a size */2586#define page_align(S)\2587 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))25882589/* granularity-align a size */2590#define granularity_align(S)\2591 (((S) + (mparams.granularity - SIZE_T_ONE))\2592 & ~(mparams.granularity - SIZE_T_ONE))259325942595/* For mmap, use granularity alignment on windows, else page-align */2596#ifdef WIN322597#define mmap_align(S) granularity_align(S)2598#else2599#define mmap_align(S) page_align(S)2600#endif26012602/* For sys_alloc, enough padding to ensure can malloc request on success */2603#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)26042605#define is_page_aligned(S)\2606 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)2607#define is_granularity_aligned(S)\2608 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)26092610/* True if segment S holds address A */2611#define segment_holds(S, A)\2612 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)26132614/* Return segment holding given address */2615static msegmentptr segment_holding(mstate m,char* addr) {2616 msegmentptr sp = &m->seg;2617for(;;) {2618if(addr >= sp->base && addr < sp->base + sp->size)2619return sp;2620if((sp = sp->next) ==0)2621return0;2622}2623}26242625/* Return true if segment contains a segment link */2626static inthas_segment_link(mstate m, msegmentptr ss) {2627 msegmentptr sp = &m->seg;2628for(;;) {2629if((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)2630return1;2631if((sp = sp->next) ==0)2632return0;2633}2634}26352636#ifndef MORECORE_CANNOT_TRIM2637#define should_trim(M,s) ((s) > (M)->trim_check)2638#else/* MORECORE_CANNOT_TRIM */2639#define should_trim(M,s) (0)2640#endif/* MORECORE_CANNOT_TRIM */26412642/*2643 TOP_FOOT_SIZE is padding at the end of a segment, including space2644 that may be needed to place segment records and fenceposts when new2645 noncontiguous segments are added.2646*/2647#define TOP_FOOT_SIZE\2648 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)264926502651/* ------------------------------- Hooks -------------------------------- */26522653/*2654 PREACTION should be defined to return 0 on success, and nonzero on2655 failure. If you are not using locking, you can redefine these to do2656 anything you like.2657*/26582659#if USE_LOCKS26602661#define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)2662#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }2663#else/* USE_LOCKS */26642665#ifndef PREACTION2666#define PREACTION(M) (0)2667#endif/* PREACTION */26682669#ifndef POSTACTION2670#define POSTACTION(M)2671#endif/* POSTACTION */26722673#endif/* USE_LOCKS */26742675/*2676 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.2677 USAGE_ERROR_ACTION is triggered on detected bad frees and2678 reallocs. The argument p is an address that might have triggered the2679 fault. It is ignored by the two predefined actions, but might be2680 useful in custom actions that try to help diagnose errors.2681*/26822683#if PROCEED_ON_ERROR26842685/* A count of the number of corruption errors causing resets */2686int malloc_corruption_error_count;26872688/* default corruption action */2689static voidreset_on_error(mstate m);26902691#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)2692#define USAGE_ERROR_ACTION(m, p)26932694#else/* PROCEED_ON_ERROR */26952696#ifndef CORRUPTION_ERROR_ACTION2697#define CORRUPTION_ERROR_ACTION(m) ABORT2698#endif/* CORRUPTION_ERROR_ACTION */26992700#ifndef USAGE_ERROR_ACTION2701#define USAGE_ERROR_ACTION(m,p) ABORT2702#endif/* USAGE_ERROR_ACTION */27032704#endif/* PROCEED_ON_ERROR */27052706/* -------------------------- Debugging setup ---------------------------- */27072708#if ! DEBUG27092710#define check_free_chunk(M,P)2711#define check_inuse_chunk(M,P)2712#define check_malloced_chunk(M,P,N)2713#define check_mmapped_chunk(M,P)2714#define check_malloc_state(M)2715#define check_top_chunk(M,P)27162717#else/* DEBUG */2718#define check_free_chunk(M,P) do_check_free_chunk(M,P)2719#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)2720#define check_top_chunk(M,P) do_check_top_chunk(M,P)2721#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)2722#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)2723#define check_malloc_state(M) do_check_malloc_state(M)27242725static voiddo_check_any_chunk(mstate m, mchunkptr p);2726static voiddo_check_top_chunk(mstate m, mchunkptr p);2727static voiddo_check_mmapped_chunk(mstate m, mchunkptr p);2728static voiddo_check_inuse_chunk(mstate m, mchunkptr p);2729static voiddo_check_free_chunk(mstate m, mchunkptr p);2730static voiddo_check_malloced_chunk(mstate m,void* mem,size_t s);2731static voiddo_check_tree(mstate m, tchunkptr t);2732static voiddo_check_treebin(mstate m, bindex_t i);2733static voiddo_check_smallbin(mstate m, bindex_t i);2734static voiddo_check_malloc_state(mstate m);2735static intbin_find(mstate m, mchunkptr x);2736static size_ttraverse_and_check(mstate m);2737#endif/* DEBUG */27382739/* ---------------------------- Indexing Bins ---------------------------- */27402741#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)2742#define small_index(s) ((s) >> SMALLBIN_SHIFT)2743#define small_index2size(i) ((i) << SMALLBIN_SHIFT)2744#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))27452746/* addressing by index. See above about smallbin repositioning */2747#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))2748#define treebin_at(M,i) (&((M)->treebins[i]))27492750/* assign tree index for size S to variable I. Use x86 asm if possible */2751#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))2752#define compute_tree_index(S, I)\2753{\2754 unsigned int X = S >> TREEBIN_SHIFT;\2755 if (X == 0)\2756 I = 0;\2757 else if (X > 0xFFFF)\2758 I = NTREEBINS-1;\2759 else {\2760 unsigned int K;\2761 __asm__("bsrl\t%1,%0\n\t" :"=r" (K) :"rm" (X));\2762 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\2763 }\2764}27652766#elif defined (__INTEL_COMPILER)2767#define compute_tree_index(S, I)\2768{\2769 size_t X = S >> TREEBIN_SHIFT;\2770 if (X == 0)\2771 I = 0;\2772 else if (X > 0xFFFF)\2773 I = NTREEBINS-1;\2774 else {\2775 unsigned int K = _bit_scan_reverse (X); \2776 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\2777 }\2778}27792780#elif defined(_MSC_VER) && _MSC_VER>=13002781#define compute_tree_index(S, I)\2782{\2783 size_t X = S >> TREEBIN_SHIFT;\2784 if (X == 0)\2785 I = 0;\2786 else if (X > 0xFFFF)\2787 I = NTREEBINS-1;\2788 else {\2789 unsigned int K;\2790 _BitScanReverse((DWORD *) &K, X);\2791 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\2792 }\2793}27942795#else/* GNUC */2796#define compute_tree_index(S, I)\2797{\2798 size_t X = S >> TREEBIN_SHIFT;\2799 if (X == 0)\2800 I = 0;\2801 else if (X > 0xFFFF)\2802 I = NTREEBINS-1;\2803 else {\2804 unsigned int Y = (unsigned int)X;\2805 unsigned int N = ((Y - 0x100) >> 16) & 8;\2806 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\2807 N += K;\2808 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\2809 K = 14 - N + ((Y <<= K) >> 15);\2810 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\2811 }\2812}2813#endif/* GNUC */28142815/* Bit representing maximum resolved size in a treebin at i */2816#define bit_for_tree_index(i) \2817 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)28182819/* Shift placing maximum resolved bit in a treebin at i as sign bit */2820#define leftshift_for_tree_index(i) \2821 ((i == NTREEBINS-1)? 0 : \2822 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))28232824/* The size of the smallest chunk held in bin with index i */2825#define minsize_for_tree_index(i) \2826 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \2827 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))282828292830/* ------------------------ Operations on bin maps ----------------------- */28312832/* bit corresponding to given index */2833#define idx2bit(i) ((binmap_t)(1) << (i))28342835/* Mark/Clear bits with given index */2836#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))2837#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))2838#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))28392840#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))2841#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))2842#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))28432844/* isolate the least set bit of a bitmap */2845#define least_bit(x) ((x) & -(x))28462847/* mask with all bits to left of least bit of x on */2848#define left_bits(x) ((x<<1) | -(x<<1))28492850/* mask with all bits to left of or equal to least bit of x on */2851#define same_or_left_bits(x) ((x) | -(x))28522853/* index corresponding to given bit. Use x86 asm if possible */28542855#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))2856#define compute_bit2idx(X, I)\2857{\2858 unsigned int J;\2859 __asm__("bsfl\t%1,%0\n\t" :"=r" (J) :"rm" (X));\2860 I = (bindex_t)J;\2861}28622863#elif defined (__INTEL_COMPILER)2864#define compute_bit2idx(X, I)\2865{\2866 unsigned int J;\2867 J = _bit_scan_forward (X); \2868 I = (bindex_t)J;\2869}28702871#elif defined(_MSC_VER) && _MSC_VER>=13002872#define compute_bit2idx(X, I)\2873{\2874 unsigned int J;\2875 _BitScanForward((DWORD *) &J, X);\2876 I = (bindex_t)J;\2877}28782879#elif USE_BUILTIN_FFS2880#define compute_bit2idx(X, I) I = ffs(X)-128812882#else2883#define compute_bit2idx(X, I)\2884{\2885 unsigned int Y = X - 1;\2886 unsigned int K = Y >> (16-4) & 16;\2887 unsigned int N = K; Y >>= K;\2888 N += K = Y >> (8-3) & 8; Y >>= K;\2889 N += K = Y >> (4-2) & 4; Y >>= K;\2890 N += K = Y >> (2-1) & 2; Y >>= K;\2891 N += K = Y >> (1-0) & 1; Y >>= K;\2892 I = (bindex_t)(N + Y);\2893}2894#endif/* GNUC */289528962897/* ----------------------- Runtime Check Support ------------------------- */28982899/*2900 For security, the main invariant is that malloc/free/etc never2901 writes to a static address other than malloc_state, unless static2902 malloc_state itself has been corrupted, which cannot occur via2903 malloc (because of these checks). In essence this means that we2904 believe all pointers, sizes, maps etc held in malloc_state, but2905 check all of those linked or offsetted from other embedded data2906 structures. These checks are interspersed with main code in a way2907 that tends to minimize their run-time cost.29082909 When FOOTERS is defined, in addition to range checking, we also2910 verify footer fields of inuse chunks, which can be used guarantee2911 that the mstate controlling malloc/free is intact. This is a2912 streamlined version of the approach described by William Robertson2913 et al in "Run-time Detection of Heap-based Overflows" LISA'032914 http://www.usenix.org/events/lisa03/tech/robertson.html The footer2915 of an inuse chunk holds the xor of its mstate and a random seed,2916 that is checked upon calls to free() and realloc(). This is2917 (probablistically) unguessable from outside the program, but can be2918 computed by any code successfully malloc'ing any chunk, so does not2919 itself provide protection against code that has already broken2920 security through some other means. Unlike Robertson et al, we2921 always dynamically check addresses of all offset chunks (previous,2922 next, etc). This turns out to be cheaper than relying on hashes.2923*/29242925#if !INSECURE2926/* Check if address a is at least as high as any from MORECORE or MMAP */2927#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)2928/* Check if address of next chunk n is higher than base chunk p */2929#define ok_next(p, n) ((char*)(p) < (char*)(n))2930/* Check if p has its cinuse bit on */2931#define ok_cinuse(p) cinuse(p)2932/* Check if p has its pinuse bit on */2933#define ok_pinuse(p) pinuse(p)29342935#else/* !INSECURE */2936#define ok_address(M, a) (1)2937#define ok_next(b, n) (1)2938#define ok_cinuse(p) (1)2939#define ok_pinuse(p) (1)2940#endif/* !INSECURE */29412942#if (FOOTERS && !INSECURE)2943/* Check if (alleged) mstate m has expected magic field */2944#define ok_magic(M) ((M)->magic == mparams.magic)2945#else/* (FOOTERS && !INSECURE) */2946#define ok_magic(M) (1)2947#endif/* (FOOTERS && !INSECURE) */294829492950/* In gcc, use __builtin_expect to minimize impact of checks */2951#if !INSECURE2952#if defined(__GNUC__) && __GNUC__ >= 32953#define RTCHECK(e) __builtin_expect(e, 1)2954#else/* GNUC */2955#define RTCHECK(e) (e)2956#endif/* GNUC */2957#else/* !INSECURE */2958#define RTCHECK(e) (1)2959#endif/* !INSECURE */29602961/* macros to set up inuse chunks with or without footers */29622963#if !FOOTERS29642965#define mark_inuse_foot(M,p,s)29662967/* Set cinuse bit and pinuse bit of next chunk */2968#define set_inuse(M,p,s)\2969 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\2970 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)29712972/* Set cinuse and pinuse of this chunk and pinuse of next chunk */2973#define set_inuse_and_pinuse(M,p,s)\2974 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\2975 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)29762977/* Set size, cinuse and pinuse bit of this chunk */2978#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\2979 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))29802981#else/* FOOTERS */29822983/* Set foot of inuse chunk to be xor of mstate and seed */2984#define mark_inuse_foot(M,p,s)\2985 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))29862987#define get_mstate_for(p)\2988 ((mstate)(((mchunkptr)((char*)(p) +\2989 (chunksize(p))))->prev_foot ^ mparams.magic))29902991#define set_inuse(M,p,s)\2992 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\2993 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \2994 mark_inuse_foot(M,p,s))29952996#define set_inuse_and_pinuse(M,p,s)\2997 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\2998 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\2999 mark_inuse_foot(M,p,s))30003001#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\3002 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\3003 mark_inuse_foot(M, p, s))30043005#endif/* !FOOTERS */30063007/* ---------------------------- setting mparams -------------------------- */30083009/* Initialize mparams */3010static intinit_mparams(void) {3011#ifdef NEED_GLOBAL_LOCK_INIT3012if(malloc_global_mutex_status <=0)3013init_malloc_global_mutex();3014#endif30153016ACQUIRE_MALLOC_GLOBAL_LOCK();3017if(mparams.magic ==0) {3018size_t magic;3019size_t psize;3020size_t gsize;30213022#ifndef WIN323023 psize = malloc_getpagesize;3024 gsize = ((DEFAULT_GRANULARITY !=0)? DEFAULT_GRANULARITY : psize);3025#else/* WIN32 */3026{3027 SYSTEM_INFO system_info;3028GetSystemInfo(&system_info);3029 psize = system_info.dwPageSize;3030 gsize = ((DEFAULT_GRANULARITY !=0)?3031 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);3032}3033#endif/* WIN32 */30343035/* Sanity-check configuration:3036 size_t must be unsigned and as wide as pointer type.3037 ints must be at least 4 bytes.3038 alignment must be at least 8.3039 Alignment, min chunk size, and page size must all be powers of 2.3040 */3041if((sizeof(size_t) !=sizeof(char*)) ||3042(MAX_SIZE_T < MIN_CHUNK_SIZE) ||3043(sizeof(int) <4) ||3044(MALLOC_ALIGNMENT < (size_t)8U) ||3045((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) !=0) ||3046((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) !=0) ||3047((gsize & (gsize-SIZE_T_ONE)) !=0) ||3048((psize & (psize-SIZE_T_ONE)) !=0))3049 ABORT;30503051 mparams.granularity = gsize;3052 mparams.page_size = psize;3053 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;3054 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;3055#if MORECORE_CONTIGUOUS3056 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;3057#else/* MORECORE_CONTIGUOUS */3058 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;3059#endif/* MORECORE_CONTIGUOUS */30603061#if !ONLY_MSPACES3062/* Set up lock for main malloc area */3063 gm->mflags = mparams.default_mflags;3064INITIAL_LOCK(&gm->mutex);3065#endif30663067#if (FOOTERS && !INSECURE)3068{3069#if USE_DEV_RANDOM3070int fd;3071unsigned char buf[sizeof(size_t)];3072/* Try to use /dev/urandom, else fall back on using time */3073if((fd =open("/dev/urandom", O_RDONLY)) >=0&&3074read(fd, buf,sizeof(buf)) ==sizeof(buf)) {3075 magic = *((size_t*) buf);3076close(fd);3077}3078else3079#endif/* USE_DEV_RANDOM */3080#ifdef WIN323081 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);3082#else3083 magic = (size_t)(time(0) ^ (size_t)0x55555555U);3084#endif3085 magic |= (size_t)8U;/* ensure nonzero */3086 magic &= ~(size_t)7U;/* improve chances of fault for bad values */3087}3088#else/* (FOOTERS && !INSECURE) */3089 magic = (size_t)0x58585858U;3090#endif/* (FOOTERS && !INSECURE) */30913092 mparams.magic = magic;3093}30943095RELEASE_MALLOC_GLOBAL_LOCK();3096return1;3097}30983099/* support for mallopt */3100static intchange_mparam(int param_number,int value) {3101size_t val = (value == -1)? MAX_SIZE_T : (size_t)value;3102ensure_initialization();3103switch(param_number) {3104case M_TRIM_THRESHOLD:3105 mparams.trim_threshold = val;3106return1;3107case M_GRANULARITY:3108if(val >= mparams.page_size && ((val & (val-1)) ==0)) {3109 mparams.granularity = val;3110return1;3111}3112else3113return0;3114case M_MMAP_THRESHOLD:3115 mparams.mmap_threshold = val;3116return1;3117default:3118return0;3119}3120}31213122#if DEBUG3123/* ------------------------- Debugging Support --------------------------- */31243125/* Check properties of any chunk, whether free, inuse, mmapped etc */3126static voiddo_check_any_chunk(mstate m, mchunkptr p) {3127assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));3128assert(ok_address(m, p));3129}31303131/* Check properties of top chunk */3132static voiddo_check_top_chunk(mstate m, mchunkptr p) {3133 msegmentptr sp =segment_holding(m, (char*)p);3134size_t sz = p->head & ~INUSE_BITS;/* third-lowest bit can be set! */3135assert(sp !=0);3136assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));3137assert(ok_address(m, p));3138assert(sz == m->topsize);3139assert(sz >0);3140assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);3141assert(pinuse(p));3142assert(!pinuse(chunk_plus_offset(p, sz)));3143}31443145/* Check properties of (inuse) mmapped chunks */3146static voiddo_check_mmapped_chunk(mstate m, mchunkptr p) {3147size_t sz =chunksize(p);3148size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);3149assert(is_mmapped(p));3150assert(use_mmap(m));3151assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));3152assert(ok_address(m, p));3153assert(!is_small(sz));3154assert((len & (mparams.page_size-SIZE_T_ONE)) ==0);3155assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);3156assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head ==0);3157}31583159/* Check properties of inuse chunks */3160static voiddo_check_inuse_chunk(mstate m, mchunkptr p) {3161do_check_any_chunk(m, p);3162assert(cinuse(p));3163assert(next_pinuse(p));3164/* If not pinuse and not mmapped, previous chunk has OK offset */3165assert(is_mmapped(p) ||pinuse(p) ||next_chunk(prev_chunk(p)) == p);3166if(is_mmapped(p))3167do_check_mmapped_chunk(m, p);3168}31693170/* Check properties of free chunks */3171static voiddo_check_free_chunk(mstate m, mchunkptr p) {3172size_t sz =chunksize(p);3173 mchunkptr next =chunk_plus_offset(p, sz);3174do_check_any_chunk(m, p);3175assert(!cinuse(p));3176assert(!next_pinuse(p));3177assert(!is_mmapped(p));3178if(p != m->dv && p != m->top) {3179if(sz >= MIN_CHUNK_SIZE) {3180assert((sz & CHUNK_ALIGN_MASK) ==0);3181assert(is_aligned(chunk2mem(p)));3182assert(next->prev_foot == sz);3183assert(pinuse(p));3184assert(next == m->top ||cinuse(next));3185assert(p->fd->bk == p);3186assert(p->bk->fd == p);3187}3188else/* markers are always of size SIZE_T_SIZE */3189assert(sz == SIZE_T_SIZE);3190}3191}31923193/* Check properties of malloced chunks at the point they are malloced */3194static voiddo_check_malloced_chunk(mstate m,void* mem,size_t s) {3195if(mem !=0) {3196 mchunkptr p =mem2chunk(mem);3197size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);3198do_check_inuse_chunk(m, p);3199assert((sz & CHUNK_ALIGN_MASK) ==0);3200assert(sz >= MIN_CHUNK_SIZE);3201assert(sz >= s);3202/* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */3203assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));3204}3205}32063207/* Check a tree and its subtrees. */3208static voiddo_check_tree(mstate m, tchunkptr t) {3209 tchunkptr head =0;3210 tchunkptr u = t;3211 bindex_t tindex = t->index;3212size_t tsize =chunksize(t);3213 bindex_t idx;3214compute_tree_index(tsize, idx);3215assert(tindex == idx);3216assert(tsize >= MIN_LARGE_SIZE);3217assert(tsize >=minsize_for_tree_index(idx));3218assert((idx == NTREEBINS-1) || (tsize <minsize_for_tree_index((idx+1))));32193220do{/* traverse through chain of same-sized nodes */3221do_check_any_chunk(m, ((mchunkptr)u));3222assert(u->index == tindex);3223assert(chunksize(u) == tsize);3224assert(!cinuse(u));3225assert(!next_pinuse(u));3226assert(u->fd->bk == u);3227assert(u->bk->fd == u);3228if(u->parent ==0) {3229assert(u->child[0] ==0);3230assert(u->child[1] ==0);3231}3232else{3233assert(head ==0);/* only one node on chain has parent */3234 head = u;3235assert(u->parent != u);3236assert(u->parent->child[0] == u ||3237 u->parent->child[1] == u ||3238*((tbinptr*)(u->parent)) == u);3239if(u->child[0] !=0) {3240assert(u->child[0]->parent == u);3241assert(u->child[0] != u);3242do_check_tree(m, u->child[0]);3243}3244if(u->child[1] !=0) {3245assert(u->child[1]->parent == u);3246assert(u->child[1] != u);3247do_check_tree(m, u->child[1]);3248}3249if(u->child[0] !=0&& u->child[1] !=0) {3250assert(chunksize(u->child[0]) <chunksize(u->child[1]));3251}3252}3253 u = u->fd;3254}while(u != t);3255assert(head !=0);3256}32573258/* Check all the chunks in a treebin. */3259static voiddo_check_treebin(mstate m, bindex_t i) {3260 tbinptr* tb =treebin_at(m, i);3261 tchunkptr t = *tb;3262int empty = (m->treemap & (1U<< i)) ==0;3263if(t ==0)3264assert(empty);3265if(!empty)3266do_check_tree(m, t);3267}32683269/* Check all the chunks in a smallbin. */3270static voiddo_check_smallbin(mstate m, bindex_t i) {3271 sbinptr b =smallbin_at(m, i);3272 mchunkptr p = b->bk;3273unsigned int empty = (m->smallmap & (1U<< i)) ==0;3274if(p == b)3275assert(empty);3276if(!empty) {3277for(; p != b; p = p->bk) {3278size_t size =chunksize(p);3279 mchunkptr q;3280/* each chunk claims to be free */3281do_check_free_chunk(m, p);3282/* chunk belongs in bin */3283assert(small_index(size) == i);3284assert(p->bk == b ||chunksize(p->bk) ==chunksize(p));3285/* chunk is followed by an inuse chunk */3286 q =next_chunk(p);3287if(q->head != FENCEPOST_HEAD)3288do_check_inuse_chunk(m, q);3289}3290}3291}32923293/* Find x in a bin. Used in other check functions. */3294static intbin_find(mstate m, mchunkptr x) {3295size_t size =chunksize(x);3296if(is_small(size)) {3297 bindex_t sidx =small_index(size);3298 sbinptr b =smallbin_at(m, sidx);3299if(smallmap_is_marked(m, sidx)) {3300 mchunkptr p = b;3301do{3302if(p == x)3303return1;3304}while((p = p->fd) != b);3305}3306}3307else{3308 bindex_t tidx;3309compute_tree_index(size, tidx);3310if(treemap_is_marked(m, tidx)) {3311 tchunkptr t = *treebin_at(m, tidx);3312size_t sizebits = size <<leftshift_for_tree_index(tidx);3313while(t !=0&&chunksize(t) != size) {3314 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) &1];3315 sizebits <<=1;3316}3317if(t !=0) {3318 tchunkptr u = t;3319do{3320if(u == (tchunkptr)x)3321return1;3322}while((u = u->fd) != t);3323}3324}3325}3326return0;3327}33283329/* Traverse each chunk and check it; return total */3330static size_ttraverse_and_check(mstate m) {3331size_t sum =0;3332if(is_initialized(m)) {3333 msegmentptr s = &m->seg;3334 sum += m->topsize + TOP_FOOT_SIZE;3335while(s !=0) {3336 mchunkptr q =align_as_chunk(s->base);3337 mchunkptr lastq =0;3338assert(pinuse(q));3339while(segment_holds(s, q) &&3340 q != m->top && q->head != FENCEPOST_HEAD) {3341 sum +=chunksize(q);3342if(cinuse(q)) {3343assert(!bin_find(m, q));3344do_check_inuse_chunk(m, q);3345}3346else{3347assert(q == m->dv ||bin_find(m, q));3348assert(lastq ==0||cinuse(lastq));/* Not 2 consecutive free */3349do_check_free_chunk(m, q);3350}3351 lastq = q;3352 q =next_chunk(q);3353}3354 s = s->next;3355}3356}3357return sum;3358}33593360/* Check all properties of malloc_state. */3361static voiddo_check_malloc_state(mstate m) {3362 bindex_t i;3363size_t total;3364/* check bins */3365for(i =0; i < NSMALLBINS; ++i)3366do_check_smallbin(m, i);3367for(i =0; i < NTREEBINS; ++i)3368do_check_treebin(m, i);33693370if(m->dvsize !=0) {/* check dv chunk */3371do_check_any_chunk(m, m->dv);3372assert(m->dvsize ==chunksize(m->dv));3373assert(m->dvsize >= MIN_CHUNK_SIZE);3374assert(bin_find(m, m->dv) ==0);3375}33763377if(m->top !=0) {/* check top chunk */3378do_check_top_chunk(m, m->top);3379/*assert(m->topsize == chunksize(m->top)); redundant */3380assert(m->topsize >0);3381assert(bin_find(m, m->top) ==0);3382}33833384 total =traverse_and_check(m);3385assert(total <= m->footprint);3386assert(m->footprint <= m->max_footprint);3387}3388#endif/* DEBUG */33893390/* ----------------------------- statistics ------------------------------ */33913392#if !NO_MALLINFO3393static struct mallinfo internal_mallinfo(mstate m) {3394struct mallinfo nm = {0,0,0,0,0,0,0,0,0,0};3395ensure_initialization();3396if(!PREACTION(m)) {3397check_malloc_state(m);3398if(is_initialized(m)) {3399size_t nfree = SIZE_T_ONE;/* top always free */3400size_t mfree = m->topsize + TOP_FOOT_SIZE;3401size_t sum = mfree;3402 msegmentptr s = &m->seg;3403while(s !=0) {3404 mchunkptr q =align_as_chunk(s->base);3405while(segment_holds(s, q) &&3406 q != m->top && q->head != FENCEPOST_HEAD) {3407size_t sz =chunksize(q);3408 sum += sz;3409if(!cinuse(q)) {3410 mfree += sz;3411++nfree;3412}3413 q =next_chunk(q);3414}3415 s = s->next;3416}34173418 nm.arena = sum;3419 nm.ordblks = nfree;3420 nm.hblkhd = m->footprint - sum;3421 nm.usmblks = m->max_footprint;3422 nm.uordblks = m->footprint - mfree;3423 nm.fordblks = mfree;3424 nm.keepcost = m->topsize;3425}34263427POSTACTION(m);3428}3429return nm;3430}3431#endif/* !NO_MALLINFO */34323433static voidinternal_malloc_stats(mstate m) {3434ensure_initialization();3435if(!PREACTION(m)) {3436size_t maxfp =0;3437size_t fp =0;3438size_t used =0;3439check_malloc_state(m);3440if(is_initialized(m)) {3441 msegmentptr s = &m->seg;3442 maxfp = m->max_footprint;3443 fp = m->footprint;3444 used = fp - (m->topsize + TOP_FOOT_SIZE);34453446while(s !=0) {3447 mchunkptr q =align_as_chunk(s->base);3448while(segment_holds(s, q) &&3449 q != m->top && q->head != FENCEPOST_HEAD) {3450if(!cinuse(q))3451 used -=chunksize(q);3452 q =next_chunk(q);3453}3454 s = s->next;3455}3456}34573458fprintf(stderr,"max system bytes =%10lu\n", (unsigned long)(maxfp));3459fprintf(stderr,"system bytes =%10lu\n", (unsigned long)(fp));3460fprintf(stderr,"in use bytes =%10lu\n", (unsigned long)(used));34613462POSTACTION(m);3463}3464}34653466/* ----------------------- Operations on smallbins ----------------------- */34673468/*3469 Various forms of linking and unlinking are defined as macros. Even3470 the ones for trees, which are very long but have very short typical3471 paths. This is ugly but reduces reliance on inlining support of3472 compilers.3473*/34743475/* Link a free chunk into a smallbin */3476#define insert_small_chunk(M, P, S) {\3477 bindex_t I = small_index(S);\3478 mchunkptr B = smallbin_at(M, I);\3479 mchunkptr F = B;\3480 assert(S >= MIN_CHUNK_SIZE);\3481 if (!smallmap_is_marked(M, I))\3482 mark_smallmap(M, I);\3483 else if (RTCHECK(ok_address(M, B->fd)))\3484 F = B->fd;\3485 else {\3486 CORRUPTION_ERROR_ACTION(M);\3487 }\3488 B->fd = P;\3489 F->bk = P;\3490 P->fd = F;\3491 P->bk = B;\3492}34933494/* Unlink a chunk from a smallbin */3495#define unlink_small_chunk(M, P, S) {\3496 mchunkptr F = P->fd;\3497 mchunkptr B = P->bk;\3498 bindex_t I = small_index(S);\3499 assert(P != B);\3500 assert(P != F);\3501 assert(chunksize(P) == small_index2size(I));\3502 if (F == B)\3503 clear_smallmap(M, I);\3504 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\3505 (B == smallbin_at(M,I) || ok_address(M, B)))) {\3506 F->bk = B;\3507 B->fd = F;\3508 }\3509 else {\3510 CORRUPTION_ERROR_ACTION(M);\3511 }\3512}35133514/* Unlink the first chunk from a smallbin */3515#define unlink_first_small_chunk(M, B, P, I) {\3516 mchunkptr F = P->fd;\3517 assert(P != B);\3518 assert(P != F);\3519 assert(chunksize(P) == small_index2size(I));\3520 if (B == F)\3521 clear_smallmap(M, I);\3522 else if (RTCHECK(ok_address(M, F))) {\3523 B->fd = F;\3524 F->bk = B;\3525 }\3526 else {\3527 CORRUPTION_ERROR_ACTION(M);\3528 }\3529}3530353135323533/* Replace dv node, binning the old one */3534/* Used only when dvsize known to be small */3535#define replace_dv(M, P, S) {\3536 size_t DVS = M->dvsize;\3537 if (DVS != 0) {\3538 mchunkptr DV = M->dv;\3539 assert(is_small(DVS));\3540 insert_small_chunk(M, DV, DVS);\3541 }\3542 M->dvsize = S;\3543 M->dv = P;\3544}35453546/* ------------------------- Operations on trees ------------------------- */35473548/* Insert chunk into tree */3549#define insert_large_chunk(M, X, S) {\3550 tbinptr* H;\3551 bindex_t I;\3552 compute_tree_index(S, I);\3553 H = treebin_at(M, I);\3554 X->index = I;\3555 X->child[0] = X->child[1] = 0;\3556 if (!treemap_is_marked(M, I)) {\3557 mark_treemap(M, I);\3558 *H = X;\3559 X->parent = (tchunkptr)H;\3560 X->fd = X->bk = X;\3561 }\3562 else {\3563 tchunkptr T = *H;\3564 size_t K = S << leftshift_for_tree_index(I);\3565 for (;;) {\3566 if (chunksize(T) != S) {\3567 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\3568 K <<= 1;\3569 if (*C != 0)\3570 T = *C;\3571 else if (RTCHECK(ok_address(M, C))) {\3572 *C = X;\3573 X->parent = T;\3574 X->fd = X->bk = X;\3575 break;\3576 }\3577 else {\3578 CORRUPTION_ERROR_ACTION(M);\3579 break;\3580 }\3581 }\3582 else {\3583 tchunkptr F = T->fd;\3584 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\3585 T->fd = F->bk = X;\3586 X->fd = F;\3587 X->bk = T;\3588 X->parent = 0;\3589 break;\3590 }\3591 else {\3592 CORRUPTION_ERROR_ACTION(M);\3593 break;\3594 }\3595 }\3596 }\3597 }\3598}35993600/*3601 Unlink steps:36023603 1. If x is a chained node, unlink it from its same-sized fd/bk links3604 and choose its bk node as its replacement.3605 2. If x was the last node of its size, but not a leaf node, it must3606 be replaced with a leaf node (not merely one with an open left or3607 right), to make sure that lefts and rights of descendants3608 correspond properly to bit masks. We use the rightmost descendant3609 of x. We could use any other leaf, but this is easy to locate and3610 tends to counteract removal of leftmosts elsewhere, and so keeps3611 paths shorter than minimally guaranteed. This doesn't loop much3612 because on average a node in a tree is near the bottom.3613 3. If x is the base of a chain (i.e., has parent links) relink3614 x's parent and children to x's replacement (or null if none).3615*/36163617#define unlink_large_chunk(M, X) {\3618 tchunkptr XP = X->parent;\3619 tchunkptr R;\3620 if (X->bk != X) {\3621 tchunkptr F = X->fd;\3622 R = X->bk;\3623 if (RTCHECK(ok_address(M, F))) {\3624 F->bk = R;\3625 R->fd = F;\3626 }\3627 else {\3628 CORRUPTION_ERROR_ACTION(M);\3629 }\3630 }\3631 else {\3632 tchunkptr* RP;\3633 if (((R = *(RP = &(X->child[1]))) != 0) ||\3634 ((R = *(RP = &(X->child[0]))) != 0)) {\3635 tchunkptr* CP;\3636 while ((*(CP = &(R->child[1])) != 0) ||\3637 (*(CP = &(R->child[0])) != 0)) {\3638 R = *(RP = CP);\3639 }\3640 if (RTCHECK(ok_address(M, RP)))\3641 *RP = 0;\3642 else {\3643 CORRUPTION_ERROR_ACTION(M);\3644 }\3645 }\3646 }\3647 if (XP != 0) {\3648 tbinptr* H = treebin_at(M, X->index);\3649 if (X == *H) {\3650 if ((*H = R) == 0) \3651 clear_treemap(M, X->index);\3652 }\3653 else if (RTCHECK(ok_address(M, XP))) {\3654 if (XP->child[0] == X) \3655 XP->child[0] = R;\3656 else \3657 XP->child[1] = R;\3658 }\3659 else\3660 CORRUPTION_ERROR_ACTION(M);\3661 if (R != 0) {\3662 if (RTCHECK(ok_address(M, R))) {\3663 tchunkptr C0, C1;\3664 R->parent = XP;\3665 if ((C0 = X->child[0]) != 0) {\3666 if (RTCHECK(ok_address(M, C0))) {\3667 R->child[0] = C0;\3668 C0->parent = R;\3669 }\3670 else\3671 CORRUPTION_ERROR_ACTION(M);\3672 }\3673 if ((C1 = X->child[1]) != 0) {\3674 if (RTCHECK(ok_address(M, C1))) {\3675 R->child[1] = C1;\3676 C1->parent = R;\3677 }\3678 else\3679 CORRUPTION_ERROR_ACTION(M);\3680 }\3681 }\3682 else\3683 CORRUPTION_ERROR_ACTION(M);\3684 }\3685 }\3686}36873688/* Relays to large vs small bin operations */36893690#define insert_chunk(M, P, S)\3691 if (is_small(S)) insert_small_chunk(M, P, S)\3692 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }36933694#define unlink_chunk(M, P, S)\3695 if (is_small(S)) unlink_small_chunk(M, P, S)\3696 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }369736983699/* Relays to internal calls to malloc/free from realloc, memalign etc */37003701#if ONLY_MSPACES3702#define internal_malloc(m, b) mspace_malloc(m, b)3703#define internal_free(m, mem) mspace_free(m,mem);3704#else/* ONLY_MSPACES */3705#if MSPACES3706#define internal_malloc(m, b)\3707 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)3708#define internal_free(m, mem)\3709 if (m == gm) dlfree(mem); else mspace_free(m,mem);3710#else/* MSPACES */3711#define internal_malloc(m, b) dlmalloc(b)3712#define internal_free(m, mem) dlfree(mem)3713#endif/* MSPACES */3714#endif/* ONLY_MSPACES */37153716/* ----------------------- Direct-mmapping chunks ----------------------- */37173718/*3719 Directly mmapped chunks are set up with an offset to the start of3720 the mmapped region stored in the prev_foot field of the chunk. This3721 allows reconstruction of the required argument to MUNMAP when freed,3722 and also allows adjustment of the returned chunk to meet alignment3723 requirements (especially in memalign). There is also enough space3724 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain3725 the PINUSE bit so frees can be checked.3726*/37273728/* Malloc using mmap */3729static void*mmap_alloc(mstate m,size_t nb) {3730size_t mmsize =mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);3731if(mmsize > nb) {/* Check for wrap around 0 */3732char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));3733if(mm != CMFAIL) {3734size_t offset =align_offset(chunk2mem(mm));3735size_t psize = mmsize - offset - MMAP_FOOT_PAD;3736 mchunkptr p = (mchunkptr)(mm + offset);3737 p->prev_foot = offset | IS_MMAPPED_BIT;3738(p)->head = (psize|CINUSE_BIT);3739mark_inuse_foot(m, p, psize);3740chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;3741chunk_plus_offset(p, psize+SIZE_T_SIZE)->head =0;37423743if(mm < m->least_addr)3744 m->least_addr = mm;3745if((m->footprint += mmsize) > m->max_footprint)3746 m->max_footprint = m->footprint;3747assert(is_aligned(chunk2mem(p)));3748check_mmapped_chunk(m, p);3749returnchunk2mem(p);3750}3751}3752return0;3753}37543755/* Realloc using mmap */3756static mchunkptr mmap_resize(mstate m, mchunkptr oldp,size_t nb) {3757size_t oldsize =chunksize(oldp);3758if(is_small(nb))/* Can't shrink mmap regions below small size */3759return0;3760/* Keep old chunk if big enough but not too big */3761if(oldsize >= nb + SIZE_T_SIZE &&3762(oldsize - nb) <= (mparams.granularity <<1))3763return oldp;3764else{3765size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;3766size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;3767size_t newmmsize =mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);3768char* cp = (char*)CALL_MREMAP((char*)oldp - offset,3769 oldmmsize, newmmsize,1);3770if(cp != CMFAIL) {3771 mchunkptr newp = (mchunkptr)(cp + offset);3772size_t psize = newmmsize - offset - MMAP_FOOT_PAD;3773 newp->head = (psize|CINUSE_BIT);3774mark_inuse_foot(m, newp, psize);3775chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;3776chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head =0;37773778if(cp < m->least_addr)3779 m->least_addr = cp;3780if((m->footprint += newmmsize - oldmmsize) > m->max_footprint)3781 m->max_footprint = m->footprint;3782check_mmapped_chunk(m, newp);3783return newp;3784}3785}3786return0;3787}37883789/* -------------------------- mspace management -------------------------- */37903791/* Initialize top chunk and its size */3792static voidinit_top(mstate m, mchunkptr p,size_t psize) {3793/* Ensure alignment */3794size_t offset =align_offset(chunk2mem(p));3795 p = (mchunkptr)((char*)p + offset);3796 psize -= offset;37973798 m->top = p;3799 m->topsize = psize;3800 p->head = psize | PINUSE_BIT;3801/* set size of fake trailing chunk holding overhead space only once */3802chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;3803 m->trim_check = mparams.trim_threshold;/* reset on each update */3804}38053806/* Initialize bins for a new mstate that is otherwise zeroed out */3807static voidinit_bins(mstate m) {3808/* Establish circular links for smallbins */3809 bindex_t i;3810for(i =0; i < NSMALLBINS; ++i) {3811 sbinptr bin =smallbin_at(m,i);3812 bin->fd = bin->bk = bin;3813}3814}38153816#if PROCEED_ON_ERROR38173818/* default corruption action */3819static voidreset_on_error(mstate m) {3820int i;3821++malloc_corruption_error_count;3822/* Reinitialize fields to forget about all memory */3823 m->smallbins = m->treebins =0;3824 m->dvsize = m->topsize =0;3825 m->seg.base =0;3826 m->seg.size =0;3827 m->seg.next =0;3828 m->top = m->dv =0;3829for(i =0; i < NTREEBINS; ++i)3830*treebin_at(m, i) =0;3831init_bins(m);3832}3833#endif/* PROCEED_ON_ERROR */38343835/* Allocate chunk and prepend remainder with chunk in successor base. */3836static void*prepend_alloc(mstate m,char* newbase,char* oldbase,3837size_t nb) {3838 mchunkptr p =align_as_chunk(newbase);3839 mchunkptr oldfirst =align_as_chunk(oldbase);3840size_t psize = (char*)oldfirst - (char*)p;3841 mchunkptr q =chunk_plus_offset(p, nb);3842size_t qsize = psize - nb;3843set_size_and_pinuse_of_inuse_chunk(m, p, nb);38443845assert((char*)oldfirst > (char*)q);3846assert(pinuse(oldfirst));3847assert(qsize >= MIN_CHUNK_SIZE);38483849/* consolidate remainder with first chunk of old base */3850if(oldfirst == m->top) {3851size_t tsize = m->topsize += qsize;3852 m->top = q;3853 q->head = tsize | PINUSE_BIT;3854check_top_chunk(m, q);3855}3856else if(oldfirst == m->dv) {3857size_t dsize = m->dvsize += qsize;3858 m->dv = q;3859set_size_and_pinuse_of_free_chunk(q, dsize);3860}3861else{3862if(!cinuse(oldfirst)) {3863size_t nsize =chunksize(oldfirst);3864unlink_chunk(m, oldfirst, nsize);3865 oldfirst =chunk_plus_offset(oldfirst, nsize);3866 qsize += nsize;3867}3868set_free_with_pinuse(q, qsize, oldfirst);3869insert_chunk(m, q, qsize);3870check_free_chunk(m, q);3871}38723873check_malloced_chunk(m,chunk2mem(p), nb);3874returnchunk2mem(p);3875}38763877/* Add a segment to hold a new noncontiguous region */3878static voidadd_segment(mstate m,char* tbase,size_t tsize, flag_t mmapped) {3879/* Determine locations and sizes of segment, fenceposts, old top */3880char* old_top = (char*)m->top;3881 msegmentptr oldsp =segment_holding(m, old_top);3882char* old_end = oldsp->base + oldsp->size;3883size_t ssize =pad_request(sizeof(struct malloc_segment));3884char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);3885size_t offset =align_offset(chunk2mem(rawsp));3886char* asp = rawsp + offset;3887char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;3888 mchunkptr sp = (mchunkptr)csp;3889 msegmentptr ss = (msegmentptr)(chunk2mem(sp));3890 mchunkptr tnext =chunk_plus_offset(sp, ssize);3891 mchunkptr p = tnext;3892int nfences =0;38933894/* reset top to new space */3895init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);38963897/* Set up segment record */3898assert(is_aligned(ss));3899set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);3900*ss = m->seg;/* Push current record */3901 m->seg.base = tbase;3902 m->seg.size = tsize;3903 m->seg.sflags = mmapped;3904 m->seg.next = ss;39053906/* Insert trailing fenceposts */3907for(;;) {3908 mchunkptr nextp =chunk_plus_offset(p, SIZE_T_SIZE);3909 p->head = FENCEPOST_HEAD;3910++nfences;3911if((char*)(&(nextp->head)) < old_end)3912 p = nextp;3913else3914break;3915}3916assert(nfences >=2);39173918/* Insert the rest of old top into a bin as an ordinary free chunk */3919if(csp != old_top) {3920 mchunkptr q = (mchunkptr)old_top;3921size_t psize = csp - old_top;3922 mchunkptr tn =chunk_plus_offset(q, psize);3923set_free_with_pinuse(q, psize, tn);3924insert_chunk(m, q, psize);3925}39263927check_top_chunk(m, m->top);3928}39293930/* -------------------------- System allocation -------------------------- */39313932/* Get memory from system using MORECORE or MMAP */3933static void*sys_alloc(mstate m,size_t nb) {3934char* tbase = CMFAIL;3935size_t tsize =0;3936 flag_t mmap_flag =0;39373938ensure_initialization();39393940/* Directly map large chunks */3941if(use_mmap(m) && nb >= mparams.mmap_threshold) {3942void* mem =mmap_alloc(m, nb);3943if(mem !=0)3944return mem;3945}39463947/*3948 Try getting memory in any of three ways (in most-preferred to3949 least-preferred order):3950 1. A call to MORECORE that can normally contiguously extend memory.3951 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or3952 main space is mmapped or a previous contiguous call failed)3953 2. A call to MMAP new space (disabled if not HAVE_MMAP).3954 Note that under the default settings, if MORECORE is unable to3955 fulfill a request, and HAVE_MMAP is true, then mmap is3956 used as a noncontiguous system allocator. This is a useful backup3957 strategy for systems with holes in address spaces -- in this case3958 sbrk cannot contiguously expand the heap, but mmap may be able to3959 find space.3960 3. A call to MORECORE that cannot usually contiguously extend memory.3961 (disabled if not HAVE_MORECORE)39623963 In all cases, we need to request enough bytes from system to ensure3964 we can malloc nb bytes upon success, so pad with enough space for3965 top_foot, plus alignment-pad to make sure we don't lose bytes if3966 not on boundary, and round this up to a granularity unit.3967 */39683969if(MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {3970char* br = CMFAIL;3971 msegmentptr ss = (m->top ==0)?0:segment_holding(m, (char*)m->top);3972size_t asize =0;3973ACQUIRE_MALLOC_GLOBAL_LOCK();39743975if(ss ==0) {/* First time through or recovery */3976char* base = (char*)CALL_MORECORE(0);3977if(base != CMFAIL) {3978 asize =granularity_align(nb + SYS_ALLOC_PADDING);3979/* Adjust to end on a page boundary */3980if(!is_page_aligned(base))3981 asize += (page_align((size_t)base) - (size_t)base);3982/* Can't call MORECORE if size is negative when treated as signed */3983if(asize < HALF_MAX_SIZE_T &&3984(br = (char*)(CALL_MORECORE(asize))) == base) {3985 tbase = base;3986 tsize = asize;3987}3988}3989}3990else{3991/* Subtract out existing available top space from MORECORE request. */3992 asize =granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);3993/* Use mem here only if it did continuously extend old space */3994if(asize < HALF_MAX_SIZE_T &&3995(br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {3996 tbase = br;3997 tsize = asize;3998}3999}40004001if(tbase == CMFAIL) {/* Cope with partial failure */4002if(br != CMFAIL) {/* Try to use/extend the space we did get */4003if(asize < HALF_MAX_SIZE_T &&4004 asize < nb + SYS_ALLOC_PADDING) {4005size_t esize =granularity_align(nb + SYS_ALLOC_PADDING - asize);4006if(esize < HALF_MAX_SIZE_T) {4007char* end = (char*)CALL_MORECORE(esize);4008if(end != CMFAIL)4009 asize += esize;4010else{/* Can't use; try to release */4011(void)CALL_MORECORE(-asize);4012 br = CMFAIL;4013}4014}4015}4016}4017if(br != CMFAIL) {/* Use the space we did get */4018 tbase = br;4019 tsize = asize;4020}4021else4022disable_contiguous(m);/* Don't try contiguous path in the future */4023}40244025RELEASE_MALLOC_GLOBAL_LOCK();4026}40274028if(HAVE_MMAP && tbase == CMFAIL) {/* Try MMAP */4029size_t rsize =granularity_align(nb + SYS_ALLOC_PADDING);4030if(rsize > nb) {/* Fail if wraps around zero */4031char* mp = (char*)(CALL_MMAP(rsize));4032if(mp != CMFAIL) {4033 tbase = mp;4034 tsize = rsize;4035 mmap_flag = IS_MMAPPED_BIT;4036}4037}4038}40394040if(HAVE_MORECORE && tbase == CMFAIL) {/* Try noncontiguous MORECORE */4041size_t asize =granularity_align(nb + SYS_ALLOC_PADDING);4042if(asize < HALF_MAX_SIZE_T) {4043char* br = CMFAIL;4044char* end = CMFAIL;4045ACQUIRE_MALLOC_GLOBAL_LOCK();4046 br = (char*)(CALL_MORECORE(asize));4047 end = (char*)(CALL_MORECORE(0));4048RELEASE_MALLOC_GLOBAL_LOCK();4049if(br != CMFAIL && end != CMFAIL && br < end) {4050size_t ssize = end - br;4051if(ssize > nb + TOP_FOOT_SIZE) {4052 tbase = br;4053 tsize = ssize;4054}4055}4056}4057}40584059if(tbase != CMFAIL) {40604061if((m->footprint += tsize) > m->max_footprint)4062 m->max_footprint = m->footprint;40634064if(!is_initialized(m)) {/* first-time initialization */4065 m->seg.base = m->least_addr = tbase;4066 m->seg.size = tsize;4067 m->seg.sflags = mmap_flag;4068 m->magic = mparams.magic;4069 m->release_checks = MAX_RELEASE_CHECK_RATE;4070init_bins(m);4071#if !ONLY_MSPACES4072if(is_global(m))4073init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);4074else4075#endif4076{4077/* Offset top by embedded malloc_state */4078 mchunkptr mn =next_chunk(mem2chunk(m));4079init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);4080}4081}40824083else{4084/* Try to merge with an existing segment */4085 msegmentptr sp = &m->seg;4086/* Only consider most recent segment if traversal suppressed */4087while(sp !=0&& tbase != sp->base + sp->size)4088 sp = (NO_SEGMENT_TRAVERSAL) ?0: sp->next;4089if(sp !=0&&4090!is_extern_segment(sp) &&4091(sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&4092segment_holds(sp, m->top)) {/* append */4093 sp->size += tsize;4094init_top(m, m->top, m->topsize + tsize);4095}4096else{4097if(tbase < m->least_addr)4098 m->least_addr = tbase;4099 sp = &m->seg;4100while(sp !=0&& sp->base != tbase + tsize)4101 sp = (NO_SEGMENT_TRAVERSAL) ?0: sp->next;4102if(sp !=0&&4103!is_extern_segment(sp) &&4104(sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {4105char* oldbase = sp->base;4106 sp->base = tbase;4107 sp->size += tsize;4108returnprepend_alloc(m, tbase, oldbase, nb);4109}4110else4111add_segment(m, tbase, tsize, mmap_flag);4112}4113}41144115if(nb < m->topsize) {/* Allocate from new or extended top space */4116size_t rsize = m->topsize -= nb;4117 mchunkptr p = m->top;4118 mchunkptr r = m->top =chunk_plus_offset(p, nb);4119 r->head = rsize | PINUSE_BIT;4120set_size_and_pinuse_of_inuse_chunk(m, p, nb);4121check_top_chunk(m, m->top);4122check_malloced_chunk(m,chunk2mem(p), nb);4123returnchunk2mem(p);4124}4125}41264127 MALLOC_FAILURE_ACTION;4128return0;4129}41304131/* ----------------------- system deallocation -------------------------- */41324133/* Unmap and unlink any mmapped segments that don't contain used chunks */4134static size_trelease_unused_segments(mstate m) {4135size_t released =0;4136int nsegs =0;4137 msegmentptr pred = &m->seg;4138 msegmentptr sp = pred->next;4139while(sp !=0) {4140char* base = sp->base;4141size_t size = sp->size;4142 msegmentptr next = sp->next;4143++nsegs;4144if(is_mmapped_segment(sp) && !is_extern_segment(sp)) {4145 mchunkptr p =align_as_chunk(base);4146size_t psize =chunksize(p);4147/* Can unmap if first chunk holds entire segment and not pinned */4148if(!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {4149 tchunkptr tp = (tchunkptr)p;4150assert(segment_holds(sp, (char*)sp));4151if(p == m->dv) {4152 m->dv =0;4153 m->dvsize =0;4154}4155else{4156unlink_large_chunk(m, tp);4157}4158if(CALL_MUNMAP(base, size) ==0) {4159 released += size;4160 m->footprint -= size;4161/* unlink obsoleted record */4162 sp = pred;4163 sp->next = next;4164}4165else{/* back out if cannot unmap */4166insert_large_chunk(m, tp, psize);4167}4168}4169}4170if(NO_SEGMENT_TRAVERSAL)/* scan only first segment */4171break;4172 pred = sp;4173 sp = next;4174}4175/* Reset check counter */4176 m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?4177 nsegs : MAX_RELEASE_CHECK_RATE);4178return released;4179}41804181static intsys_trim(mstate m,size_t pad) {4182size_t released =0;4183ensure_initialization();4184if(pad < MAX_REQUEST &&is_initialized(m)) {4185 pad += TOP_FOOT_SIZE;/* ensure enough room for segment overhead */41864187if(m->topsize > pad) {4188/* Shrink top space in granularity-size units, keeping at least one */4189size_t unit = mparams.granularity;4190size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -4191 SIZE_T_ONE) * unit;4192 msegmentptr sp =segment_holding(m, (char*)m->top);41934194if(!is_extern_segment(sp)) {4195if(is_mmapped_segment(sp)) {4196if(HAVE_MMAP &&4197 sp->size >= extra &&4198!has_segment_link(m, sp)) {/* can't shrink if pinned */4199size_t newsize = sp->size - extra;4200/* Prefer mremap, fall back to munmap */4201if((CALL_MREMAP(sp->base, sp->size, newsize,0) != MFAIL) ||4202(CALL_MUNMAP(sp->base + newsize, extra) ==0)) {4203 released = extra;4204}4205}4206}4207else if(HAVE_MORECORE) {4208if(extra >= HALF_MAX_SIZE_T)/* Avoid wrapping negative */4209 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;4210ACQUIRE_MALLOC_GLOBAL_LOCK();4211{4212/* Make sure end of memory is where we last set it. */4213char* old_br = (char*)(CALL_MORECORE(0));4214if(old_br == sp->base + sp->size) {4215char* rel_br = (char*)(CALL_MORECORE(-extra));4216char* new_br = (char*)(CALL_MORECORE(0));4217if(rel_br != CMFAIL && new_br < old_br)4218 released = old_br - new_br;4219}4220}4221RELEASE_MALLOC_GLOBAL_LOCK();4222}4223}42244225if(released !=0) {4226 sp->size -= released;4227 m->footprint -= released;4228init_top(m, m->top, m->topsize - released);4229check_top_chunk(m, m->top);4230}4231}42324233/* Unmap any unused mmapped segments */4234if(HAVE_MMAP)4235 released +=release_unused_segments(m);42364237/* On failure, disable autotrim to avoid repeated failed future calls */4238if(released ==0&& m->topsize > m->trim_check)4239 m->trim_check = MAX_SIZE_T;4240}42414242return(released !=0)?1:0;4243}424442454246/* ---------------------------- malloc support --------------------------- */42474248/* allocate a large request from the best fitting chunk in a treebin */4249static void*tmalloc_large(mstate m,size_t nb) {4250 tchunkptr v =0;4251size_t rsize = -nb;/* Unsigned negation */4252 tchunkptr t;4253 bindex_t idx;4254compute_tree_index(nb, idx);4255if((t = *treebin_at(m, idx)) !=0) {4256/* Traverse tree for this bin looking for node with size == nb */4257size_t sizebits = nb <<leftshift_for_tree_index(idx);4258 tchunkptr rst =0;/* The deepest untaken right subtree */4259for(;;) {4260 tchunkptr rt;4261size_t trem =chunksize(t) - nb;4262if(trem < rsize) {4263 v = t;4264if((rsize = trem) ==0)4265break;4266}4267 rt = t->child[1];4268 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) &1];4269if(rt !=0&& rt != t)4270 rst = rt;4271if(t ==0) {4272 t = rst;/* set t to least subtree holding sizes > nb */4273break;4274}4275 sizebits <<=1;4276}4277}4278if(t ==0&& v ==0) {/* set t to root of next non-empty treebin */4279 binmap_t leftbits =left_bits(idx2bit(idx)) & m->treemap;4280if(leftbits !=0) {4281 bindex_t i;4282 binmap_t leastbit =least_bit(leftbits);4283compute_bit2idx(leastbit, i);4284 t = *treebin_at(m, i);4285}4286}42874288while(t !=0) {/* find smallest of tree or subtree */4289size_t trem =chunksize(t) - nb;4290if(trem < rsize) {4291 rsize = trem;4292 v = t;4293}4294 t =leftmost_child(t);4295}42964297/* If dv is a better fit, return 0 so malloc will use it */4298if(v !=0&& rsize < (size_t)(m->dvsize - nb)) {4299if(RTCHECK(ok_address(m, v))) {/* split */4300 mchunkptr r =chunk_plus_offset(v, nb);4301assert(chunksize(v) == rsize + nb);4302if(RTCHECK(ok_next(v, r))) {4303unlink_large_chunk(m, v);4304if(rsize < MIN_CHUNK_SIZE)4305set_inuse_and_pinuse(m, v, (rsize + nb));4306else{4307set_size_and_pinuse_of_inuse_chunk(m, v, nb);4308set_size_and_pinuse_of_free_chunk(r, rsize);4309insert_chunk(m, r, rsize);4310}4311returnchunk2mem(v);4312}4313}4314CORRUPTION_ERROR_ACTION(m);4315}4316return0;4317}43184319/* allocate a small request from the best fitting chunk in a treebin */4320static void*tmalloc_small(mstate m,size_t nb) {4321 tchunkptr t, v;4322size_t rsize;4323 bindex_t i;4324 binmap_t leastbit =least_bit(m->treemap);4325compute_bit2idx(leastbit, i);4326 v = t = *treebin_at(m, i);4327 rsize =chunksize(t) - nb;43284329while((t =leftmost_child(t)) !=0) {4330size_t trem =chunksize(t) - nb;4331if(trem < rsize) {4332 rsize = trem;4333 v = t;4334}4335}43364337if(RTCHECK(ok_address(m, v))) {4338 mchunkptr r =chunk_plus_offset(v, nb);4339assert(chunksize(v) == rsize + nb);4340if(RTCHECK(ok_next(v, r))) {4341unlink_large_chunk(m, v);4342if(rsize < MIN_CHUNK_SIZE)4343set_inuse_and_pinuse(m, v, (rsize + nb));4344else{4345set_size_and_pinuse_of_inuse_chunk(m, v, nb);4346set_size_and_pinuse_of_free_chunk(r, rsize);4347replace_dv(m, r, rsize);4348}4349returnchunk2mem(v);4350}4351}43524353CORRUPTION_ERROR_ACTION(m);4354return0;4355}43564357/* --------------------------- realloc support --------------------------- */43584359static void*internal_realloc(mstate m,void* oldmem,size_t bytes) {4360if(bytes >= MAX_REQUEST) {4361 MALLOC_FAILURE_ACTION;4362return0;4363}4364if(!PREACTION(m)) {4365 mchunkptr oldp =mem2chunk(oldmem);4366size_t oldsize =chunksize(oldp);4367 mchunkptr next =chunk_plus_offset(oldp, oldsize);4368 mchunkptr newp =0;4369void* extra =0;43704371/* Try to either shrink or extend into top. Else malloc-copy-free */43724373if(RTCHECK(ok_address(m, oldp) &&ok_cinuse(oldp) &&4374ok_next(oldp, next) &&ok_pinuse(next))) {4375size_t nb =request2size(bytes);4376if(is_mmapped(oldp))4377 newp =mmap_resize(m, oldp, nb);4378else if(oldsize >= nb) {/* already big enough */4379size_t rsize = oldsize - nb;4380 newp = oldp;4381if(rsize >= MIN_CHUNK_SIZE) {4382 mchunkptr remainder =chunk_plus_offset(newp, nb);4383set_inuse(m, newp, nb);4384set_inuse(m, remainder, rsize);4385 extra =chunk2mem(remainder);4386}4387}4388else if(next == m->top && oldsize + m->topsize > nb) {4389/* Expand into top */4390size_t newsize = oldsize + m->topsize;4391size_t newtopsize = newsize - nb;4392 mchunkptr newtop =chunk_plus_offset(oldp, nb);4393set_inuse(m, oldp, nb);4394 newtop->head = newtopsize |PINUSE_BIT;4395 m->top = newtop;4396 m->topsize = newtopsize;4397 newp = oldp;4398}4399}4400else{4401USAGE_ERROR_ACTION(m, oldmem);4402POSTACTION(m);4403return0;4404}44054406POSTACTION(m);44074408if(newp !=0) {4409if(extra !=0) {4410internal_free(m, extra);4411}4412check_inuse_chunk(m, newp);4413returnchunk2mem(newp);4414}4415else{4416void* newmem =internal_malloc(m, bytes);4417if(newmem !=0) {4418size_t oc = oldsize -overhead_for(oldp);4419memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);4420internal_free(m, oldmem);4421}4422return newmem;4423}4424}4425return0;4426}44274428/* --------------------------- memalign support -------------------------- */44294430static void*internal_memalign(mstate m,size_t alignment,size_t bytes) {4431if(alignment <= MALLOC_ALIGNMENT)/* Can just use malloc */4432returninternal_malloc(m, bytes);4433if(alignment < MIN_CHUNK_SIZE)/* must be at least a minimum chunk size */4434 alignment = MIN_CHUNK_SIZE;4435if((alignment & (alignment-SIZE_T_ONE)) !=0) {/* Ensure a power of 2 */4436size_t a = MALLOC_ALIGNMENT <<1;4437while(a < alignment) a <<=1;4438 alignment = a;4439}44404441if(bytes >= MAX_REQUEST - alignment) {4442if(m !=0) {/* Test isn't needed but avoids compiler warning */4443 MALLOC_FAILURE_ACTION;4444}4445}4446else{4447size_t nb =request2size(bytes);4448size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;4449char* mem = (char*)internal_malloc(m, req);4450if(mem !=0) {4451void* leader =0;4452void* trailer =0;4453 mchunkptr p =mem2chunk(mem);44544455if(PREACTION(m))return0;4456if((((size_t)(mem)) % alignment) !=0) {/* misaligned */4457/*4458 Find an aligned spot inside chunk. Since we need to give4459 back leading space in a chunk of at least MIN_CHUNK_SIZE, if4460 the first calculation places us at a spot with less than4461 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.4462 We've allocated enough total room so that this is always4463 possible.4464 */4465char* br = (char*)mem2chunk((size_t)(((size_t)(mem +4466 alignment -4467 SIZE_T_ONE)) &4468-alignment));4469char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?4470 br : br+alignment;4471 mchunkptr newp = (mchunkptr)pos;4472size_t leadsize = pos - (char*)(p);4473size_t newsize =chunksize(p) - leadsize;44744475if(is_mmapped(p)) {/* For mmapped chunks, just adjust offset */4476 newp->prev_foot = p->prev_foot + leadsize;4477 newp->head = (newsize|CINUSE_BIT);4478}4479else{/* Otherwise, give back leader, use the rest */4480set_inuse(m, newp, newsize);4481set_inuse(m, p, leadsize);4482 leader =chunk2mem(p);4483}4484 p = newp;4485}44864487/* Give back spare room at the end */4488if(!is_mmapped(p)) {4489size_t size =chunksize(p);4490if(size > nb + MIN_CHUNK_SIZE) {4491size_t remainder_size = size - nb;4492 mchunkptr remainder =chunk_plus_offset(p, nb);4493set_inuse(m, p, nb);4494set_inuse(m, remainder, remainder_size);4495 trailer =chunk2mem(remainder);4496}4497}44984499assert(chunksize(p) >= nb);4500assert((((size_t)(chunk2mem(p))) % alignment) ==0);4501check_inuse_chunk(m, p);4502POSTACTION(m);4503if(leader !=0) {4504internal_free(m, leader);4505}4506if(trailer !=0) {4507internal_free(m, trailer);4508}4509returnchunk2mem(p);4510}4511}4512return0;4513}45144515/* ------------------------ comalloc/coalloc support --------------------- */45164517static void**ialloc(mstate m,4518size_t n_elements,4519size_t* sizes,4520int opts,4521void* chunks[]) {4522/*4523 This provides common support for independent_X routines, handling4524 all of the combinations that can result.45254526 The opts arg has:4527 bit 0 set if all elements are same size (using sizes[0])4528 bit 1 set if elements should be zeroed4529 */45304531size_t element_size;/* chunksize of each element, if all same */4532size_t contents_size;/* total size of elements */4533size_t array_size;/* request size of pointer array */4534void* mem;/* malloced aggregate space */4535 mchunkptr p;/* corresponding chunk */4536size_t remainder_size;/* remaining bytes while splitting */4537void** marray;/* either "chunks" or malloced ptr array */4538 mchunkptr array_chunk;/* chunk for malloced ptr array */4539 flag_t was_enabled;/* to disable mmap */4540size_t size;4541size_t i;45424543ensure_initialization();4544/* compute array length, if needed */4545if(chunks !=0) {4546if(n_elements ==0)4547return chunks;/* nothing to do */4548 marray = chunks;4549 array_size =0;4550}4551else{4552/* if empty req, must still return chunk representing empty array */4553if(n_elements ==0)4554return(void**)internal_malloc(m,0);4555 marray =0;4556 array_size =request2size(n_elements * (sizeof(void*)));4557}45584559/* compute total element size */4560if(opts &0x1) {/* all-same-size */4561 element_size =request2size(*sizes);4562 contents_size = n_elements * element_size;4563}4564else{/* add up all the sizes */4565 element_size =0;4566 contents_size =0;4567for(i =0; i != n_elements; ++i)4568 contents_size +=request2size(sizes[i]);4569}45704571 size = contents_size + array_size;45724573/*4574 Allocate the aggregate chunk. First disable direct-mmapping so4575 malloc won't use it, since we would not be able to later4576 free/realloc space internal to a segregated mmap region.4577 */4578 was_enabled =use_mmap(m);4579disable_mmap(m);4580 mem =internal_malloc(m, size - CHUNK_OVERHEAD);4581if(was_enabled)4582enable_mmap(m);4583if(mem ==0)4584return0;45854586if(PREACTION(m))return0;4587 p =mem2chunk(mem);4588 remainder_size =chunksize(p);45894590assert(!is_mmapped(p));45914592if(opts &0x2) {/* optionally clear the elements */4593memset((size_t*)mem,0, remainder_size - SIZE_T_SIZE - array_size);4594}45954596/* If not provided, allocate the pointer array as final part of chunk */4597if(marray ==0) {4598size_t array_chunk_size;4599 array_chunk =chunk_plus_offset(p, contents_size);4600 array_chunk_size = remainder_size - contents_size;4601 marray = (void**) (chunk2mem(array_chunk));4602set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);4603 remainder_size = contents_size;4604}46054606/* split out elements */4607for(i =0; ; ++i) {4608 marray[i] =chunk2mem(p);4609if(i != n_elements-1) {4610if(element_size !=0)4611 size = element_size;4612else4613 size =request2size(sizes[i]);4614 remainder_size -= size;4615set_size_and_pinuse_of_inuse_chunk(m, p, size);4616 p =chunk_plus_offset(p, size);4617}4618else{/* the final element absorbs any overallocation slop */4619set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);4620break;4621}4622}46234624#if DEBUG4625if(marray != chunks) {4626/* final element must have exactly exhausted chunk */4627if(element_size !=0) {4628assert(remainder_size == element_size);4629}4630else{4631assert(remainder_size ==request2size(sizes[i]));4632}4633check_inuse_chunk(m,mem2chunk(marray));4634}4635for(i =0; i != n_elements; ++i)4636check_inuse_chunk(m,mem2chunk(marray[i]));46374638#endif/* DEBUG */46394640POSTACTION(m);4641return marray;4642}464346444645/* -------------------------- public routines ---------------------------- */46464647#if !ONLY_MSPACES46484649void*dlmalloc(size_t bytes) {4650/*4651 Basic algorithm:4652 If a small request (< 256 bytes minus per-chunk overhead):4653 1. If one exists, use a remainderless chunk in associated smallbin.4654 (Remainderless means that there are too few excess bytes to4655 represent as a chunk.)4656 2. If it is big enough, use the dv chunk, which is normally the4657 chunk adjacent to the one used for the most recent small request.4658 3. If one exists, split the smallest available chunk in a bin,4659 saving remainder in dv.4660 4. If it is big enough, use the top chunk.4661 5. If available, get memory from system and use it4662 Otherwise, for a large request:4663 1. Find the smallest available binned chunk that fits, and use it4664 if it is better fitting than dv chunk, splitting if necessary.4665 2. If better fitting than any binned chunk, use the dv chunk.4666 3. If it is big enough, use the top chunk.4667 4. If request size >= mmap threshold, try to directly mmap this chunk.4668 5. If available, get memory from system and use it46694670 The ugly goto's here ensure that postaction occurs along all paths.4671 */46724673#if USE_LOCKS4674ensure_initialization();/* initialize in sys_alloc if not using locks */4675#endif46764677if(!PREACTION(gm)) {4678void* mem;4679size_t nb;4680if(bytes <= MAX_SMALL_REQUEST) {4681 bindex_t idx;4682 binmap_t smallbits;4683 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE :pad_request(bytes);4684 idx =small_index(nb);4685 smallbits = gm->smallmap >> idx;46864687if((smallbits &0x3U) !=0) {/* Remainderless fit to a smallbin. */4688 mchunkptr b, p;4689 idx += ~smallbits &1;/* Uses next bin if idx empty */4690 b =smallbin_at(gm, idx);4691 p = b->fd;4692assert(chunksize(p) ==small_index2size(idx));4693unlink_first_small_chunk(gm, b, p, idx);4694set_inuse_and_pinuse(gm, p,small_index2size(idx));4695 mem =chunk2mem(p);4696check_malloced_chunk(gm, mem, nb);4697goto postaction;4698}46994700else if(nb > gm->dvsize) {4701if(smallbits !=0) {/* Use chunk in next nonempty smallbin */4702 mchunkptr b, p, r;4703size_t rsize;4704 bindex_t i;4705 binmap_t leftbits = (smallbits << idx) &left_bits(idx2bit(idx));4706 binmap_t leastbit =least_bit(leftbits);4707compute_bit2idx(leastbit, i);4708 b =smallbin_at(gm, i);4709 p = b->fd;4710assert(chunksize(p) ==small_index2size(i));4711unlink_first_small_chunk(gm, b, p, i);4712 rsize =small_index2size(i) - nb;4713/* Fit here cannot be remainderless if 4byte sizes */4714if(SIZE_T_SIZE !=4&& rsize < MIN_CHUNK_SIZE)4715set_inuse_and_pinuse(gm, p,small_index2size(i));4716else{4717set_size_and_pinuse_of_inuse_chunk(gm, p, nb);4718 r =chunk_plus_offset(p, nb);4719set_size_and_pinuse_of_free_chunk(r, rsize);4720replace_dv(gm, r, rsize);4721}4722 mem =chunk2mem(p);4723check_malloced_chunk(gm, mem, nb);4724goto postaction;4725}47264727else if(gm->treemap !=0&& (mem =tmalloc_small(gm, nb)) !=0) {4728check_malloced_chunk(gm, mem, nb);4729goto postaction;4730}4731}4732}4733else if(bytes >= MAX_REQUEST)4734 nb = MAX_SIZE_T;/* Too big to allocate. Force failure (in sys alloc) */4735else{4736 nb =pad_request(bytes);4737if(gm->treemap !=0&& (mem =tmalloc_large(gm, nb)) !=0) {4738check_malloced_chunk(gm, mem, nb);4739goto postaction;4740}4741}47424743if(nb <= gm->dvsize) {4744size_t rsize = gm->dvsize - nb;4745 mchunkptr p = gm->dv;4746if(rsize >= MIN_CHUNK_SIZE) {/* split dv */4747 mchunkptr r = gm->dv =chunk_plus_offset(p, nb);4748 gm->dvsize = rsize;4749set_size_and_pinuse_of_free_chunk(r, rsize);4750set_size_and_pinuse_of_inuse_chunk(gm, p, nb);4751}4752else{/* exhaust dv */4753size_t dvs = gm->dvsize;4754 gm->dvsize =0;4755 gm->dv =0;4756set_inuse_and_pinuse(gm, p, dvs);4757}4758 mem =chunk2mem(p);4759check_malloced_chunk(gm, mem, nb);4760goto postaction;4761}47624763else if(nb < gm->topsize) {/* Split top */4764size_t rsize = gm->topsize -= nb;4765 mchunkptr p = gm->top;4766 mchunkptr r = gm->top =chunk_plus_offset(p, nb);4767 r->head = rsize | PINUSE_BIT;4768set_size_and_pinuse_of_inuse_chunk(gm, p, nb);4769 mem =chunk2mem(p);4770check_top_chunk(gm, gm->top);4771check_malloced_chunk(gm, mem, nb);4772goto postaction;4773}47744775 mem =sys_alloc(gm, nb);47764777 postaction:4778POSTACTION(gm);4779return mem;4780}47814782return0;4783}47844785voiddlfree(void* mem) {4786/*4787 Consolidate freed chunks with preceding or succeeding bordering4788 free chunks, if they exist, and then place in a bin. Intermixed4789 with special cases for top, dv, mmapped chunks, and usage errors.4790 */47914792if(mem !=0) {4793 mchunkptr p =mem2chunk(mem);4794#if FOOTERS4795 mstate fm =get_mstate_for(p);4796if(!ok_magic(fm)) {4797USAGE_ERROR_ACTION(fm, p);4798return;4799}4800#else/* FOOTERS */4801#define fm gm4802#endif/* FOOTERS */4803if(!PREACTION(fm)) {4804check_inuse_chunk(fm, p);4805if(RTCHECK(ok_address(fm, p) &&ok_cinuse(p))) {4806size_t psize =chunksize(p);4807 mchunkptr next =chunk_plus_offset(p, psize);4808if(!pinuse(p)) {4809size_t prevsize = p->prev_foot;4810if((prevsize & IS_MMAPPED_BIT) !=0) {4811 prevsize &= ~IS_MMAPPED_BIT;4812 psize += prevsize + MMAP_FOOT_PAD;4813if(CALL_MUNMAP((char*)p - prevsize, psize) ==0)4814 fm->footprint -= psize;4815goto postaction;4816}4817else{4818 mchunkptr prev =chunk_minus_offset(p, prevsize);4819 psize += prevsize;4820 p = prev;4821if(RTCHECK(ok_address(fm, prev))) {/* consolidate backward */4822if(p != fm->dv) {4823unlink_chunk(fm, p, prevsize);4824}4825else if((next->head & INUSE_BITS) == INUSE_BITS) {4826 fm->dvsize = psize;4827set_free_with_pinuse(p, psize, next);4828goto postaction;4829}4830}4831else4832goto erroraction;4833}4834}48354836if(RTCHECK(ok_next(p, next) &&ok_pinuse(next))) {4837if(!cinuse(next)) {/* consolidate forward */4838if(next == fm->top) {4839size_t tsize = fm->topsize += psize;4840 fm->top = p;4841 p->head = tsize | PINUSE_BIT;4842if(p == fm->dv) {4843 fm->dv =0;4844 fm->dvsize =0;4845}4846if(should_trim(fm, tsize))4847sys_trim(fm,0);4848goto postaction;4849}4850else if(next == fm->dv) {4851size_t dsize = fm->dvsize += psize;4852 fm->dv = p;4853set_size_and_pinuse_of_free_chunk(p, dsize);4854goto postaction;4855}4856else{4857size_t nsize =chunksize(next);4858 psize += nsize;4859unlink_chunk(fm, next, nsize);4860set_size_and_pinuse_of_free_chunk(p, psize);4861if(p == fm->dv) {4862 fm->dvsize = psize;4863goto postaction;4864}4865}4866}4867else4868set_free_with_pinuse(p, psize, next);48694870if(is_small(psize)) {4871insert_small_chunk(fm, p, psize);4872check_free_chunk(fm, p);4873}4874else{4875 tchunkptr tp = (tchunkptr)p;4876insert_large_chunk(fm, tp, psize);4877check_free_chunk(fm, p);4878if(--fm->release_checks ==0)4879release_unused_segments(fm);4880}4881goto postaction;4882}4883}4884 erroraction:4885USAGE_ERROR_ACTION(fm, p);4886 postaction:4887POSTACTION(fm);4888}4889}4890#if !FOOTERS4891#undef fm4892#endif/* FOOTERS */4893}48944895void*dlcalloc(size_t n_elements,size_t elem_size) {4896void* mem;4897size_t req =0;4898if(n_elements !=0) {4899 req = n_elements * elem_size;4900if(((n_elements | elem_size) & ~(size_t)0xffff) &&4901(req / n_elements != elem_size))4902 req = MAX_SIZE_T;/* force downstream failure on overflow */4903}4904 mem =dlmalloc(req);4905if(mem !=0&&calloc_must_clear(mem2chunk(mem)))4906memset(mem,0, req);4907return mem;4908}49094910void*dlrealloc(void* oldmem,size_t bytes) {4911if(oldmem ==0)4912returndlmalloc(bytes);4913#ifdef REALLOC_ZERO_BYTES_FREES4914if(bytes ==0) {4915dlfree(oldmem);4916return0;4917}4918#endif/* REALLOC_ZERO_BYTES_FREES */4919else{4920#if ! FOOTERS4921 mstate m = gm;4922#else/* FOOTERS */4923 mstate m =get_mstate_for(mem2chunk(oldmem));4924if(!ok_magic(m)) {4925USAGE_ERROR_ACTION(m, oldmem);4926return0;4927}4928#endif/* FOOTERS */4929returninternal_realloc(m, oldmem, bytes);4930}4931}49324933void*dlmemalign(size_t alignment,size_t bytes) {4934returninternal_memalign(gm, alignment, bytes);4935}49364937void**dlindependent_calloc(size_t n_elements,size_t elem_size,4938void* chunks[]) {4939size_t sz = elem_size;/* serves as 1-element array */4940returnialloc(gm, n_elements, &sz,3, chunks);4941}49424943void**dlindependent_comalloc(size_t n_elements,size_t sizes[],4944void* chunks[]) {4945returnialloc(gm, n_elements, sizes,0, chunks);4946}49474948void*dlvalloc(size_t bytes) {4949size_t pagesz;4950ensure_initialization();4951 pagesz = mparams.page_size;4952returndlmemalign(pagesz, bytes);4953}49544955void*dlpvalloc(size_t bytes) {4956size_t pagesz;4957ensure_initialization();4958 pagesz = mparams.page_size;4959returndlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));4960}49614962intdlmalloc_trim(size_t pad) {4963ensure_initialization();4964int result =0;4965if(!PREACTION(gm)) {4966 result =sys_trim(gm, pad);4967POSTACTION(gm);4968}4969return result;4970}49714972size_tdlmalloc_footprint(void) {4973return gm->footprint;4974}49754976size_tdlmalloc_max_footprint(void) {4977return gm->max_footprint;4978}49794980#if !NO_MALLINFO4981struct mallinfo dlmallinfo(void) {4982returninternal_mallinfo(gm);4983}4984#endif/* NO_MALLINFO */49854986voiddlmalloc_stats() {4987internal_malloc_stats(gm);4988}49894990intdlmallopt(int param_number,int value) {4991returnchange_mparam(param_number, value);4992}49934994#endif/* !ONLY_MSPACES */49954996size_tdlmalloc_usable_size(void* mem) {4997if(mem !=0) {4998 mchunkptr p =mem2chunk(mem);4999if(cinuse(p))5000returnchunksize(p) -overhead_for(p);5001}5002return0;5003}50045005/* ----------------------------- user mspaces ---------------------------- */50065007#if MSPACES50085009static mstate init_user_mstate(char* tbase,size_t tsize) {5010size_t msize =pad_request(sizeof(struct malloc_state));5011 mchunkptr mn;5012 mchunkptr msp =align_as_chunk(tbase);5013 mstate m = (mstate)(chunk2mem(msp));5014memset(m,0, msize);5015INITIAL_LOCK(&m->mutex);5016 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);5017 m->seg.base = m->least_addr = tbase;5018 m->seg.size = m->footprint = m->max_footprint = tsize;5019 m->magic = mparams.magic;5020 m->release_checks = MAX_RELEASE_CHECK_RATE;5021 m->mflags = mparams.default_mflags;5022 m->extp =0;5023 m->exts =0;5024disable_contiguous(m);5025init_bins(m);5026 mn =next_chunk(mem2chunk(m));5027init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);5028check_top_chunk(m, m->top);5029return m;5030}50315032mspace create_mspace(size_t capacity,int locked) {5033 mstate m =0;5034size_t msize;5035ensure_initialization();5036 msize =pad_request(sizeof(struct malloc_state));5037if(capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {5038size_t rs = ((capacity ==0)? mparams.granularity :5039(capacity + TOP_FOOT_SIZE + msize));5040size_t tsize =granularity_align(rs);5041char* tbase = (char*)(CALL_MMAP(tsize));5042if(tbase != CMFAIL) {5043 m =init_user_mstate(tbase, tsize);5044 m->seg.sflags = IS_MMAPPED_BIT;5045set_lock(m, locked);5046}5047}5048return(mspace)m;5049}50505051mspace create_mspace_with_base(void* base,size_t capacity,int locked) {5052 mstate m =0;5053size_t msize;5054ensure_initialization();5055 msize =pad_request(sizeof(struct malloc_state));5056if(capacity > msize + TOP_FOOT_SIZE &&5057 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {5058 m =init_user_mstate((char*)base, capacity);5059 m->seg.sflags = EXTERN_BIT;5060set_lock(m, locked);5061}5062return(mspace)m;5063}50645065intmspace_mmap_large_chunks(mspace msp,int enable) {5066int ret =0;5067 mstate ms = (mstate)msp;5068if(!PREACTION(ms)) {5069if(use_mmap(ms))5070 ret =1;5071if(enable)5072enable_mmap(ms);5073else5074disable_mmap(ms);5075POSTACTION(ms);5076}5077return ret;5078}50795080size_tdestroy_mspace(mspace msp) {5081size_t freed =0;5082 mstate ms = (mstate)msp;5083if(ok_magic(ms)) {5084 msegmentptr sp = &ms->seg;5085while(sp !=0) {5086char* base = sp->base;5087size_t size = sp->size;5088 flag_t flag = sp->sflags;5089 sp = sp->next;5090if((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&5091CALL_MUNMAP(base, size) ==0)5092 freed += size;5093}5094}5095else{5096USAGE_ERROR_ACTION(ms,ms);5097}5098return freed;5099}51005101/*5102 mspace versions of routines are near-clones of the global5103 versions. This is not so nice but better than the alternatives.5104*/510551065107void*mspace_malloc(mspace msp,size_t bytes) {5108 mstate ms = (mstate)msp;5109if(!ok_magic(ms)) {5110USAGE_ERROR_ACTION(ms,ms);5111return0;5112}5113if(!PREACTION(ms)) {5114void* mem;5115size_t nb;5116if(bytes <= MAX_SMALL_REQUEST) {5117 bindex_t idx;5118 binmap_t smallbits;5119 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE :pad_request(bytes);5120 idx =small_index(nb);5121 smallbits = ms->smallmap >> idx;51225123if((smallbits &0x3U) !=0) {/* Remainderless fit to a smallbin. */5124 mchunkptr b, p;5125 idx += ~smallbits &1;/* Uses next bin if idx empty */5126 b =smallbin_at(ms, idx);5127 p = b->fd;5128assert(chunksize(p) ==small_index2size(idx));5129unlink_first_small_chunk(ms, b, p, idx);5130set_inuse_and_pinuse(ms, p,small_index2size(idx));5131 mem =chunk2mem(p);5132check_malloced_chunk(ms, mem, nb);5133goto postaction;5134}51355136else if(nb > ms->dvsize) {5137if(smallbits !=0) {/* Use chunk in next nonempty smallbin */5138 mchunkptr b, p, r;5139size_t rsize;5140 bindex_t i;5141 binmap_t leftbits = (smallbits << idx) &left_bits(idx2bit(idx));5142 binmap_t leastbit =least_bit(leftbits);5143compute_bit2idx(leastbit, i);5144 b =smallbin_at(ms, i);5145 p = b->fd;5146assert(chunksize(p) ==small_index2size(i));5147unlink_first_small_chunk(ms, b, p, i);5148 rsize =small_index2size(i) - nb;5149/* Fit here cannot be remainderless if 4byte sizes */5150if(SIZE_T_SIZE !=4&& rsize < MIN_CHUNK_SIZE)5151set_inuse_and_pinuse(ms, p,small_index2size(i));5152else{5153set_size_and_pinuse_of_inuse_chunk(ms, p, nb);5154 r =chunk_plus_offset(p, nb);5155set_size_and_pinuse_of_free_chunk(r, rsize);5156replace_dv(ms, r, rsize);5157}5158 mem =chunk2mem(p);5159check_malloced_chunk(ms, mem, nb);5160goto postaction;5161}51625163else if(ms->treemap !=0&& (mem =tmalloc_small(ms, nb)) !=0) {5164check_malloced_chunk(ms, mem, nb);5165goto postaction;5166}5167}5168}5169else if(bytes >= MAX_REQUEST)5170 nb = MAX_SIZE_T;/* Too big to allocate. Force failure (in sys alloc) */5171else{5172 nb =pad_request(bytes);5173if(ms->treemap !=0&& (mem =tmalloc_large(ms, nb)) !=0) {5174check_malloced_chunk(ms, mem, nb);5175goto postaction;5176}5177}51785179if(nb <= ms->dvsize) {5180size_t rsize = ms->dvsize - nb;5181 mchunkptr p = ms->dv;5182if(rsize >= MIN_CHUNK_SIZE) {/* split dv */5183 mchunkptr r = ms->dv =chunk_plus_offset(p, nb);5184 ms->dvsize = rsize;5185set_size_and_pinuse_of_free_chunk(r, rsize);5186set_size_and_pinuse_of_inuse_chunk(ms, p, nb);5187}5188else{/* exhaust dv */5189size_t dvs = ms->dvsize;5190 ms->dvsize =0;5191 ms->dv =0;5192set_inuse_and_pinuse(ms, p, dvs);5193}5194 mem =chunk2mem(p);5195check_malloced_chunk(ms, mem, nb);5196goto postaction;5197}51985199else if(nb < ms->topsize) {/* Split top */5200size_t rsize = ms->topsize -= nb;5201 mchunkptr p = ms->top;5202 mchunkptr r = ms->top =chunk_plus_offset(p, nb);5203 r->head = rsize | PINUSE_BIT;5204set_size_and_pinuse_of_inuse_chunk(ms, p, nb);5205 mem =chunk2mem(p);5206check_top_chunk(ms, ms->top);5207check_malloced_chunk(ms, mem, nb);5208goto postaction;5209}52105211 mem =sys_alloc(ms, nb);52125213 postaction:5214POSTACTION(ms);5215return mem;5216}52175218return0;5219}52205221voidmspace_free(mspace msp,void* mem) {5222if(mem !=0) {5223 mchunkptr p =mem2chunk(mem);5224#if FOOTERS5225 mstate fm =get_mstate_for(p);5226#else/* FOOTERS */5227 mstate fm = (mstate)msp;5228#endif/* FOOTERS */5229if(!ok_magic(fm)) {5230USAGE_ERROR_ACTION(fm, p);5231return;5232}5233if(!PREACTION(fm)) {5234check_inuse_chunk(fm, p);5235if(RTCHECK(ok_address(fm, p) &&ok_cinuse(p))) {5236size_t psize =chunksize(p);5237 mchunkptr next =chunk_plus_offset(p, psize);5238if(!pinuse(p)) {5239size_t prevsize = p->prev_foot;5240if((prevsize & IS_MMAPPED_BIT) !=0) {5241 prevsize &= ~IS_MMAPPED_BIT;5242 psize += prevsize + MMAP_FOOT_PAD;5243if(CALL_MUNMAP((char*)p - prevsize, psize) ==0)5244 fm->footprint -= psize;5245goto postaction;5246}5247else{5248 mchunkptr prev =chunk_minus_offset(p, prevsize);5249 psize += prevsize;5250 p = prev;5251if(RTCHECK(ok_address(fm, prev))) {/* consolidate backward */5252if(p != fm->dv) {5253unlink_chunk(fm, p, prevsize);5254}5255else if((next->head & INUSE_BITS) == INUSE_BITS) {5256 fm->dvsize = psize;5257set_free_with_pinuse(p, psize, next);5258goto postaction;5259}5260}5261else5262goto erroraction;5263}5264}52655266if(RTCHECK(ok_next(p, next) &&ok_pinuse(next))) {5267if(!cinuse(next)) {/* consolidate forward */5268if(next == fm->top) {5269size_t tsize = fm->topsize += psize;5270 fm->top = p;5271 p->head = tsize | PINUSE_BIT;5272if(p == fm->dv) {5273 fm->dv =0;5274 fm->dvsize =0;5275}5276if(should_trim(fm, tsize))5277sys_trim(fm,0);5278goto postaction;5279}5280else if(next == fm->dv) {5281size_t dsize = fm->dvsize += psize;5282 fm->dv = p;5283set_size_and_pinuse_of_free_chunk(p, dsize);5284goto postaction;5285}5286else{5287size_t nsize =chunksize(next);5288 psize += nsize;5289unlink_chunk(fm, next, nsize);5290set_size_and_pinuse_of_free_chunk(p, psize);5291if(p == fm->dv) {5292 fm->dvsize = psize;5293goto postaction;5294}5295}5296}5297else5298set_free_with_pinuse(p, psize, next);52995300if(is_small(psize)) {5301insert_small_chunk(fm, p, psize);5302check_free_chunk(fm, p);5303}5304else{5305 tchunkptr tp = (tchunkptr)p;5306insert_large_chunk(fm, tp, psize);5307check_free_chunk(fm, p);5308if(--fm->release_checks ==0)5309release_unused_segments(fm);5310}5311goto postaction;5312}5313}5314 erroraction:5315USAGE_ERROR_ACTION(fm, p);5316 postaction:5317POSTACTION(fm);5318}5319}5320}53215322void*mspace_calloc(mspace msp,size_t n_elements,size_t elem_size) {5323void* mem;5324size_t req =0;5325 mstate ms = (mstate)msp;5326if(!ok_magic(ms)) {5327USAGE_ERROR_ACTION(ms,ms);5328return0;5329}5330if(n_elements !=0) {5331 req = n_elements * elem_size;5332if(((n_elements | elem_size) & ~(size_t)0xffff) &&5333(req / n_elements != elem_size))5334 req = MAX_SIZE_T;/* force downstream failure on overflow */5335}5336 mem =internal_malloc(ms, req);5337if(mem !=0&&calloc_must_clear(mem2chunk(mem)))5338memset(mem,0, req);5339return mem;5340}53415342void*mspace_realloc(mspace msp,void* oldmem,size_t bytes) {5343if(oldmem ==0)5344returnmspace_malloc(msp, bytes);5345#ifdef REALLOC_ZERO_BYTES_FREES5346if(bytes ==0) {5347mspace_free(msp, oldmem);5348return0;5349}5350#endif/* REALLOC_ZERO_BYTES_FREES */5351else{5352#if FOOTERS5353 mchunkptr p =mem2chunk(oldmem);5354 mstate ms =get_mstate_for(p);5355#else/* FOOTERS */5356 mstate ms = (mstate)msp;5357#endif/* FOOTERS */5358if(!ok_magic(ms)) {5359USAGE_ERROR_ACTION(ms,ms);5360return0;5361}5362returninternal_realloc(ms, oldmem, bytes);5363}5364}53655366void*mspace_memalign(mspace msp,size_t alignment,size_t bytes) {5367 mstate ms = (mstate)msp;5368if(!ok_magic(ms)) {5369USAGE_ERROR_ACTION(ms,ms);5370return0;5371}5372returninternal_memalign(ms, alignment, bytes);5373}53745375void**mspace_independent_calloc(mspace msp,size_t n_elements,5376size_t elem_size,void* chunks[]) {5377size_t sz = elem_size;/* serves as 1-element array */5378 mstate ms = (mstate)msp;5379if(!ok_magic(ms)) {5380USAGE_ERROR_ACTION(ms,ms);5381return0;5382}5383returnialloc(ms, n_elements, &sz,3, chunks);5384}53855386void**mspace_independent_comalloc(mspace msp,size_t n_elements,5387size_t sizes[],void* chunks[]) {5388 mstate ms = (mstate)msp;5389if(!ok_magic(ms)) {5390USAGE_ERROR_ACTION(ms,ms);5391return0;5392}5393returnialloc(ms, n_elements, sizes,0, chunks);5394}53955396intmspace_trim(mspace msp,size_t pad) {5397int result =0;5398 mstate ms = (mstate)msp;5399if(ok_magic(ms)) {5400if(!PREACTION(ms)) {5401 result =sys_trim(ms, pad);5402POSTACTION(ms);5403}5404}5405else{5406USAGE_ERROR_ACTION(ms,ms);5407}5408return result;5409}54105411voidmspace_malloc_stats(mspace msp) {5412 mstate ms = (mstate)msp;5413if(ok_magic(ms)) {5414internal_malloc_stats(ms);5415}5416else{5417USAGE_ERROR_ACTION(ms,ms);5418}5419}54205421size_tmspace_footprint(mspace msp) {5422size_t result =0;5423 mstate ms = (mstate)msp;5424if(ok_magic(ms)) {5425 result = ms->footprint;5426}5427else{5428USAGE_ERROR_ACTION(ms,ms);5429}5430return result;5431}543254335434size_tmspace_max_footprint(mspace msp) {5435size_t result =0;5436 mstate ms = (mstate)msp;5437if(ok_magic(ms)) {5438 result = ms->max_footprint;5439}5440else{5441USAGE_ERROR_ACTION(ms,ms);5442}5443return result;5444}544554465447#if !NO_MALLINFO5448struct mallinfo mspace_mallinfo(mspace msp) {5449 mstate ms = (mstate)msp;5450if(!ok_magic(ms)) {5451USAGE_ERROR_ACTION(ms,ms);5452}5453returninternal_mallinfo(ms);5454}5455#endif/* NO_MALLINFO */54565457size_tmspace_usable_size(void* mem) {5458if(mem !=0) {5459 mchunkptr p =mem2chunk(mem);5460if(cinuse(p))5461returnchunksize(p) -overhead_for(p);5462}5463return0;5464}54655466intmspace_mallopt(int param_number,int value) {5467returnchange_mparam(param_number, value);5468}54695470#endif/* MSPACES */54715472/* -------------------- Alternative MORECORE functions ------------------- */54735474/*5475 Guidelines for creating a custom version of MORECORE:54765477 * For best performance, MORECORE should allocate in multiples of pagesize.5478 * MORECORE may allocate more memory than requested. (Or even less,5479 but this will usually result in a malloc failure.)5480 * MORECORE must not allocate memory when given argument zero, but5481 instead return one past the end address of memory from previous5482 nonzero call.5483 * For best performance, consecutive calls to MORECORE with positive5484 arguments should return increasing addresses, indicating that5485 space has been contiguously extended.5486 * Even though consecutive calls to MORECORE need not return contiguous5487 addresses, it must be OK for malloc'ed chunks to span multiple5488 regions in those cases where they do happen to be contiguous.5489 * MORECORE need not handle negative arguments -- it may instead5490 just return MFAIL when given negative arguments.5491 Negative arguments are always multiples of pagesize. MORECORE5492 must not misinterpret negative args as large positive unsigned5493 args. You can suppress all such calls from even occurring by defining5494 MORECORE_CANNOT_TRIM,54955496 As an example alternative MORECORE, here is a custom allocator5497 kindly contributed for pre-OSX macOS. It uses virtually but not5498 necessarily physically contiguous non-paged memory (locked in,5499 present and won't get swapped out). You can use it by uncommenting5500 this section, adding some #includes, and setting up the appropriate5501 defines above:55025503 #define MORECORE osMoreCore55045505 There is also a shutdown routine that should somehow be called for5506 cleanup upon program exit.55075508 #define MAX_POOL_ENTRIES 1005509 #define MINIMUM_MORECORE_SIZE (64 * 1024U)5510 static int next_os_pool;5511 void *our_os_pools[MAX_POOL_ENTRIES];55125513 void *osMoreCore(int size)5514 {5515 void *ptr = 0;5516 static void *sbrk_top = 0;55175518 if (size > 0)5519 {5520 if (size < MINIMUM_MORECORE_SIZE)5521 size = MINIMUM_MORECORE_SIZE;5522 if (CurrentExecutionLevel() == kTaskLevel)5523 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);5524 if (ptr == 0)5525 {5526 return (void *) MFAIL;5527 }5528 // save ptrs so they can be freed during cleanup5529 our_os_pools[next_os_pool] = ptr;5530 next_os_pool++;5531 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);5532 sbrk_top = (char *) ptr + size;5533 return ptr;5534 }5535 else if (size < 0)5536 {5537 // we don't currently support shrink behavior5538 return (void *) MFAIL;5539 }5540 else5541 {5542 return sbrk_top;5543 }5544 }55455546 // cleanup any allocated memory pools5547 // called as last thing before shutting down driver55485549 void osCleanupMem(void)5550 {5551 void **ptr;55525553 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)5554 if (*ptr)5555 {5556 PoolDeallocate(*ptr);5557 *ptr = 0;5558 }5559 }55605561*/556255635564/* -----------------------------------------------------------------------5565History:5566 V2.8.4 (not yet released)5567 * Add mspace_mmap_large_chunks; thanks to Jean Brouwers5568 * Fix insufficient sys_alloc padding when using 16byte alignment5569 * Fix bad error check in mspace_footprint5570 * Adaptations for ptmalloc, courtesy of Wolfram Gloger.5571 * Reentrant spin locks, courtesy of Earl Chew and others5572 * Win32 improvements, courtesy of Niall Douglas and Earl Chew5573 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options5574 * Extension hook in malloc_state5575 * Various small adjustments to reduce warnings on some compilers5576 * Various configuration extensions/changes for more platforms. Thanks5577 to all who contributed these.55785579 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)5580 * Add max_footprint functions5581 * Ensure all appropriate literals are size_t5582 * Fix conditional compilation problem for some #define settings5583 * Avoid concatenating segments with the one provided5584 in create_mspace_with_base5585 * Rename some variables to avoid compiler shadowing warnings5586 * Use explicit lock initialization.5587 * Better handling of sbrk interference.5588 * Simplify and fix segment insertion, trimming and mspace_destroy5589 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x5590 * Thanks especially to Dennis Flanagan for help on these.55915592 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)5593 * Fix memalign brace error.55945595 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)5596 * Fix improper #endif nesting in C++5597 * Add explicit casts needed for C++55985599 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)5600 * Use trees for large bins5601 * Support mspaces5602 * Use segments to unify sbrk-based and mmap-based system allocation,5603 removing need for emulation on most platforms without sbrk.5604 * Default safety checks5605 * Optional footer checks. Thanks to William Robertson for the idea.5606 * Internal code refactoring5607 * Incorporate suggestions and platform-specific changes.5608 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,5609 Aaron Bachmann, Emery Berger, and others.5610 * Speed up non-fastbin processing enough to remove fastbins.5611 * Remove useless cfree() to avoid conflicts with other apps.5612 * Remove internal memcpy, memset. Compilers handle builtins better.5613 * Remove some options that no one ever used and rename others.56145615 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)5616 * Fix malloc_state bitmap array misdeclaration56175618 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)5619 * Allow tuning of FIRST_SORTED_BIN_SIZE5620 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.5621 * Better detection and support for non-contiguousness of MORECORE.5622 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger5623 * Bypass most of malloc if no frees. Thanks To Emery Berger.5624 * Fix freeing of old top non-contiguous chunk im sysmalloc.5625 * Raised default trim and map thresholds to 256K.5626 * Fix mmap-related #defines. Thanks to Lubos Lunak.5627 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.5628 * Branch-free bin calculation5629 * Default trim and mmap thresholds now 256K.56305631 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)5632 * Introduce independent_comalloc and independent_calloc.5633 Thanks to Michael Pachos for motivation and help.5634 * Make optional .h file available5635 * Allow > 2GB requests on 32bit systems.5636 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.5637 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,5638 and Anonymous.5639 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for5640 helping test this.)5641 * memalign: check alignment arg5642 * realloc: don't try to shift chunks backwards, since this5643 leads to more fragmentation in some programs and doesn't5644 seem to help in any others.5645 * Collect all cases in malloc requiring system memory into sysmalloc5646 * Use mmap as backup to sbrk5647 * Place all internal state in malloc_state5648 * Introduce fastbins (although similar to 2.5.1)5649 * Many minor tunings and cosmetic improvements5650 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK5651 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS5652 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.5653 * Include errno.h to support default failure action.56545655 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)5656 * return null for negative arguments5657 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>5658 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'5659 (e.g. WIN32 platforms)5660 * Cleanup header file inclusion for WIN32 platforms5661 * Cleanup code to avoid Microsoft Visual C++ compiler complaints5662 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing5663 memory allocation routines5664 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)5665 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to5666 usage of 'assert' in non-WIN32 code5667 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to5668 avoid infinite loop5669 * Always call 'fREe()' rather than 'free()'56705671 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)5672 * Fixed ordering problem with boundary-stamping56735674 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)5675 * Added pvalloc, as recommended by H.J. Liu5676 * Added 64bit pointer support mainly from Wolfram Gloger5677 * Added anonymously donated WIN32 sbrk emulation5678 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen5679 * malloc_extend_top: fix mask error that caused wastage after5680 foreign sbrks5681 * Add linux mremap support code from HJ Liu56825683 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)5684 * Integrated most documentation with the code.5685 * Add support for mmap, with help from5686 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).5687 * Use last_remainder in more cases.5688 * Pack bins using idea from colin@nyx10.cs.du.edu5689 * Use ordered bins instead of best-fit threshold5690 * Eliminate block-local decls to simplify tracing and debugging.5691 * Support another case of realloc via move into top5692 * Fix error occurring when initial sbrk_base not word-aligned.5693 * Rely on page size for units instead of SBRK_UNIT to5694 avoid surprises about sbrk alignment conventions.5695 * Add mallinfo, mallopt. Thanks to Raymond Nijssen5696 (raymond@es.ele.tue.nl) for the suggestion.5697 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.5698 * More precautions for cases where other routines call sbrk,5699 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).5700 * Added macros etc., allowing use in linux libc from5701 H.J. Lu (hjl@gnu.ai.mit.edu)5702 * Inverted this history list57035704 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)5705 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.5706 * Removed all preallocation code since under current scheme5707 the work required to undo bad preallocations exceeds5708 the work saved in good cases for most test programs.5709 * No longer use return list or unconsolidated bins since5710 no scheme using them consistently outperforms those that don't5711 given above changes.5712 * Use best fit for very large chunks to prevent some worst-cases.5713 * Added some support for debugging57145715 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)5716 * Removed footers when chunks are in use. Thanks to5717 Paul Wilson (wilson@cs.texas.edu) for the suggestion.57185719 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)5720 * Added malloc_trim, with help from Wolfram Gloger5721 (wmglo@Dent.MED.Uni-Muenchen.DE).57225723 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)57245725 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)5726 * realloc: try to expand in both directions5727 * malloc: swap order of clean-bin strategy;5728 * realloc: only conditionally expand backwards5729 * Try not to scavenge used bins5730 * Use bin counts as a guide to preallocation5731 * Occasionally bin return list chunks in first scan5732 * Add a few optimizations from colin@nyx10.cs.du.edu57335734 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)5735 * faster bin computation & slightly different binning5736 * merged all consolidations to one part of malloc proper5737 (eliminating old malloc_find_space & malloc_clean_bin)5738 * Scan 2 returns chunks (not just 1)5739 * Propagate failure in realloc if malloc returns 05740 * Add stuff to allow compilation on non-ANSI compilers5741 from kpv@research.att.com57425743 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)5744 * removed potential for odd address access in prev_chunk5745 * removed dependency on getpagesize.h5746 * misc cosmetics and a bit more internal documentation5747 * anticosmetics: mangled names in macros to evade debugger strangeness5748 * tested on sparc, hp-700, dec-mips, rs60005749 with gcc & native cc (hp, dec only) allowing5750 Detlefs & Zorn comparison study (in SIGPLAN Notices.)57515752 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)5753 * Based loosely on libg++-1.2X malloc. (It retains some of the overall5754 structure of old version, but most details differ.)57555756*/