1/*
2Copyright 2023 Doug Lea
3
4Permission is hereby granted, free of charge, to any person obtaining
5a copy of this software and associated documentation files (the
6"Software"), to deal in the Software without restriction, including
7without limitation the rights to use, copy, modify, merge, publish,
8distribute, sublicense, and/or sell copies of the Software, and to
9permit persons to whom the Software is furnished to do so.
10
11THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
12EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
13MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
14NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
15LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
16OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
17WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
18
19* Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
20 Re-licensed 25 Sep 2023 with MIT-0 replacing obsolete CC0
21 See https://opensource.org/license/mit-0/
22
23* Quickstart
24
25 This library is all in one file to simplify the most common usage:
26 ftp it, compile it (-O3), and link it into another program. All of
27 the compile-time options default to reasonable values for use on
28 most platforms. You might later want to step through various
29 compile-time and dynamic tuning options.
30
31 For convenience, an include file for code using this malloc is at:
32 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
33 You don't really need this .h file unless you call functions not
34 defined in your system include files. The .h file contains only the
35 excerpts from this file needed for using this malloc on ANSI C/C++
36 systems, so long as you haven't changed compile-time options about
37 naming and tuning parameters. If you do, then you can create your
38 own malloc.h that does include all settings by cutting at the point
39 indicated below. Note that you may already by default be using a C
40 library containing a malloc that is based on some version of this
41 malloc (for example in linux). You might still want to use the one
42 in this file to customize settings or to avoid overheads associated
43 with library versions.
44
45* Vital statistics:
46
47 Supported pointer/size_t representation: 4 or 8 bytes
48 size_t MUST be an unsigned type of the same width as
49 pointers. (If you are using an ancient system that declares
50 size_t as a signed type, or need it to be a different width
51 than pointers, you can use a previous release of this malloc
52 (e.g. 2.7.2) supporting these.)
53
54 Alignment: 8 bytes (minimum)
55 This suffices for nearly all current machines and C compilers.
56 However, you can define MALLOC_ALIGNMENT to be wider than this
57 if necessary (up to 128bytes), at the expense of using more space.
58
59 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
60 8 or 16 bytes (if 8byte sizes)
61 Each malloced chunk has a hidden word of overhead holding size
62 and status information, and additional cross-check word
63 if FOOTERS is defined.
64
65 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
66 8-byte ptrs: 32 bytes (including overhead)
67
68 Even a request for zero bytes (i.e., malloc(0)) returns a
69 pointer to something of the minimum allocatable size.
70 The maximum overhead wastage (i.e., number of extra bytes
71 allocated than were requested in malloc) is less than or equal
72 to the minimum size, except for requests >= mmap_threshold that
73 are serviced via mmap(), where the worst case wastage is about
74 32 bytes plus the remainder from a system page (the minimal
75 mmap unit); typically 4096 or 8192 bytes.
76
77 Security: static-safe; optionally more or less
78 The "security" of malloc refers to the ability of malicious
79 code to accentuate the effects of errors (for example, freeing
80 space that is not currently malloc'ed or overwriting past the
81 ends of chunks) in code that calls malloc. This malloc
82 guarantees not to modify any memory locations below the base of
83 heap, i.e., static variables, even in the presence of usage
84 errors. The routines additionally detect most improper frees
85 and reallocs. All this holds as long as the static bookkeeping
86 for malloc itself is not corrupted by some other means. This
87 is only one aspect of security -- these checks do not, and
88 cannot, detect all possible programming errors.
89
90 If FOOTERS is defined nonzero, then each allocated chunk
91 carries an additional check word to verify that it was malloced
92 from its space. These check words are the same within each
93 execution of a program using malloc, but differ across
94 executions, so externally crafted fake chunks cannot be
95 freed. This improves security by rejecting frees/reallocs that
96 could corrupt heap memory, in addition to the checks preventing
97 writes to statics that are always on. This may further improve
98 security at the expense of time and space overhead. (Note that
99 FOOTERS may also be worth using with MSPACES.)
100
101 By default detected errors cause the program to abort (calling
102 "abort()"). You can override this to instead proceed past
103 errors by defining PROCEED_ON_ERROR. In this case, a bad free
104 has no effect, and a malloc that encounters a bad address
105 caused by user overwrites will ignore the bad address by
106 dropping pointers and indices to all known memory. This may
107 be appropriate for programs that should continue if at all
108 possible in the face of programming errors, although they may
109 run out of memory because dropped memory is never reclaimed.
110
111 If you don't like either of these options, you can define
112 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
113 else. And if you are sure that your program using malloc has
114 no errors or vulnerabilities, you can define INSECURE to 1,
115 which might (or might not) provide a small performance improvement.
116
117 It is also possible to limit the maximum total allocatable
118 space, using malloc_set_footprint_limit. This is not
119 designed as a security feature in itself (calls to set limits
120 are not screened or privileged), but may be useful as one
121 aspect of a secure implementation.
122
123 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
124 When USE_LOCKS is defined, each public call to malloc, free,
125 etc is surrounded with a lock. By default, this uses a plain
126 pthread mutex, win32 critical section, or a spin-lock if
127 available for the platform and not disabled by setting
128 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
129 recursive versions are used instead (which are not required for
130 base functionality but may be needed in layered extensions).
131 Using a global lock is not especially fast, and can be a major
132 bottleneck. It is designed only to provide minimal protection
133 in concurrent environments, and to provide a basis for
134 extensions. If you are using malloc in a concurrent program,
135 consider instead using nedmalloc
136 (http://www.nedprod.com/programs/portable/nedmalloc/) or
137 ptmalloc (See http://www.malloc.de), which are derived from
138 versions of this malloc.
139
140 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
141 This malloc can use unix sbrk or any emulation (invoked using
142 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
143 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
144 memory. On most unix systems, it tends to work best if both
145 MORECORE and MMAP are enabled. On Win32, it uses emulations
146 based on VirtualAlloc. It also uses common C library functions
147 like memset.
148
149 Compliance: I believe it is compliant with the Single Unix Specification
150 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
151 others as well.
152
153* Overview of algorithms
154
155 This is not the fastest, most space-conserving, most portable, or
156 most tunable malloc ever written. However it is among the fastest
157 while also being among the most space-conserving, portable and
158 tunable. Consistent balance across these factors results in a good
159 general-purpose allocator for malloc-intensive programs.
160
161 In most ways, this malloc is a best-fit allocator. Generally, it
162 chooses the best-fitting existing chunk for a request, with ties
163 broken in approximately least-recently-used order. (This strategy
164 normally maintains low fragmentation.) However, for requests less
165 than 256bytes, it deviates from best-fit when there is not an
166 exactly fitting available chunk by preferring to use space adjacent
167 to that used for the previous small request, as well as by breaking
168 ties in approximately most-recently-used order. (These enhance
169 locality of series of small allocations.) And for very large requests
170 (>= 256Kb by default), it relies on system memory mapping
171 facilities, if supported. (This helps avoid carrying around and
172 possibly fragmenting memory used only for large chunks.)
173
174 All operations (except malloc_stats and mallinfo) have execution
175 times that are bounded by a constant factor of the number of bits in
176 a size_t, not counting any clearing in calloc or copying in realloc,
177 or actions surrounding MORECORE and MMAP that have times
178 proportional to the number of non-contiguous regions returned by
179 system allocation routines, which is often just 1. In real-time
180 applications, you can optionally suppress segment traversals using
181 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
182 system allocators return non-contiguous spaces, at the typical
183 expense of carrying around more memory and increased fragmentation.
184
185 The implementation is not very modular and seriously overuses
186 macros. Perhaps someday all C compilers will do as good a job
187 inlining modular code as can now be done by brute-force expansion,
188 but now, enough of them seem not to.
189
190 Some compilers issue a lot of warnings about code that is
191 dead/unreachable only on some platforms, and also about intentional
192 uses of negation on unsigned types. All known cases of each can be
193 ignored.
194
195 For a longer but out of date high-level description, see
196 http://gee.cs.oswego.edu/dl/html/malloc.html
197
198* MSPACES
199 If MSPACES is defined, then in addition to malloc, free, etc.,
200 this file also defines mspace_malloc, mspace_free, etc. These
201 are versions of malloc routines that take an "mspace" argument
202 obtained using create_mspace, to control all internal bookkeeping.
203 If ONLY_MSPACES is defined, only these versions are compiled.
204 So if you would like to use this allocator for only some allocations,
205 and your system malloc for others, you can compile with
206 ONLY_MSPACES and then do something like...
207 static mspace mymspace = create_mspace(0,0); // for example
208 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
209
210 (Note: If you only need one instance of an mspace, you can instead
211 use "USE_DL_PREFIX" to relabel the global malloc.)
212
213 You can similarly create thread-local allocators by storing
214 mspaces as thread-locals. For example:
215 static __thread mspace tlms = 0;
216 void* tlmalloc(size_t bytes) {
217 if (tlms == 0) tlms = create_mspace(0, 0);
218 return mspace_malloc(tlms, bytes);
219 }
220 void tlfree(void* mem) { mspace_free(tlms, mem); }
221
222 Unless FOOTERS is defined, each mspace is completely independent.
223 You cannot allocate from one and free to another (although
224 conformance is only weakly checked, so usage errors are not always
225 caught). If FOOTERS is defined, then each chunk carries around a tag
226 indicating its originating mspace, and frees are directed to their
227 originating spaces. Normally, this requires use of locks.
228
229 ------------------------- Compile-time options ---------------------------
230
231Be careful in setting #define values for numerical constants of type
232size_t. On some systems, literal values are not automatically extended
233to size_t precision unless they are explicitly cast. You can also
234use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
235
236WIN32 default: defined if _WIN32 defined
237 Defining WIN32 sets up defaults for MS environment and compilers.
238 Otherwise defaults are for unix. Beware that there seem to be some
239 cases where this malloc might not be a pure drop-in replacement for
240 Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
241 SetDIBits()) may be due to bugs in some video driver implementations
242 when pixel buffers are malloc()ed, and the region spans more than
243 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
244 default granularity, pixel buffers may straddle virtual allocation
245 regions more often than when using the Microsoft allocator. You can
246 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
247 buffers rather than using malloc(). If this is not possible,
248 recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
249 in cases where MSC and gcc (cygwin) are known to differ on WIN32,
250 conditions use _MSC_VER to distinguish them.
251
252DLMALLOC_EXPORT default: extern
253 Defines how public APIs are declared. If you want to export via a
254 Windows DLL, you might define this as
255 #define DLMALLOC_EXPORT extern __declspec(dllexport)
256 If you want a POSIX ELF shared object, you might use
257 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
258
259MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
260 Controls the minimum alignment for malloc'ed chunks. It must be a
261 power of two and at least 8, even on machines for which smaller
262 alignments would suffice. It may be defined as larger than this
263 though. Note however that code and data structures are optimized for
264 the case of 8-byte alignment.
265
266MSPACES default: 0 (false)
267 If true, compile in support for independent allocation spaces.
268 This is only supported if HAVE_MMAP is true.
269
270ONLY_MSPACES default: 0 (false)
271 If true, only compile in mspace versions, not regular versions.
272
273USE_LOCKS default: 0 (false)
274 Causes each call to each public routine to be surrounded with
275 pthread or WIN32 mutex lock/unlock. (If set true, this can be
276 overridden on a per-mspace basis for mspace versions.) If set to a
277 non-zero value other than 1, locks are used, but their
278 implementation is left out, so lock functions must be supplied manually,
279 as described below.
280
281USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
282 If true, uses custom spin locks for locking. This is currently
283 supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
284 MS compilers. Otherwise, posix locks or win32 critical sections are
285 used.
286
287USE_RECURSIVE_LOCKS default: not defined
288 If defined nonzero, uses recursive (aka reentrant) locks, otherwise
289 uses plain mutexes. This is not required for malloc proper, but may
290 be needed for layered allocators such as nedmalloc.
291
292LOCK_AT_FORK default: not defined
293 If defined nonzero, performs pthread_atfork upon initialization
294 to initialize child lock while holding parent lock. The implementation
295 assumes that pthread locks (not custom locks) are being used. In other
296 cases, you may need to customize the implementation.
297
298FOOTERS default: 0
299 If true, provide extra checking and dispatching by placing
300 information in the footers of allocated chunks. This adds
301 space and time overhead.
302
303INSECURE default: 0
304 If true, omit checks for usage errors and heap space overwrites.
305
306USE_DL_PREFIX default: NOT defined
307 Causes compiler to prefix all public routines with the string 'dl'.
308 This can be useful when you only want to use this malloc in one part
309 of a program, using your regular system malloc elsewhere.
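  For example (a minimal usage sketch), compiling with -DUSE_DL_PREFIX
  leaves the system allocator untouched while exposing the prefixed
  entry points declared below:

    void* p = dlmalloc(100);   // this file's allocator
    void* q = malloc(100);     // the system allocator
    dlfree(p);
    free(q);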
310
311MALLOC_INSPECT_ALL default: NOT defined
312 If defined, compiles malloc_inspect_all and mspace_inspect_all, that
313 perform traversal of all heap space. Unless access to these
314 functions is otherwise restricted, you probably do not want to
315 include them in secure implementations.
316
317ABORT default: defined as abort()
318 Defines how to abort on failed checks. On most systems, a failed
319 check cannot die with an "assert" or even print an informative
320 message, because the underlying print routines in turn call malloc,
321 which will fail again. Generally, the best policy is to simply call
322 abort(). It's not very useful to do more than this because many
323 errors due to overwriting will show up as address faults (null, odd
324 addresses etc) rather than malloc-triggered checks, so will also
325 abort. Also, most compilers know that abort() does not return, so
326 can better optimize code conditionally calling it.
327
328PROCEED_ON_ERROR default: defined as 0 (false)
329 Controls whether detected bad addresses cause them to be bypassed
330 rather than aborting. If set, detected bad arguments to free and
331 realloc are ignored. And all bookkeeping information is zeroed out
332 upon a detected overwrite of freed heap space, thus losing the
333 ability to ever return it from malloc again, but enabling the
334 application to proceed. If PROCEED_ON_ERROR is defined, the
335 static variable malloc_corruption_error_count is compiled in
336 and can be examined to see if errors have occurred. This option
337 generates slower code than the default abort policy.
338
339DEBUG default: NOT defined
340 The DEBUG setting is mainly intended for people trying to modify
341 this code or diagnose problems when porting to new platforms.
342 However, it may also be able to better isolate user errors than just
343 using runtime checks. The assertions in the check routines spell
344 out in more detail the assumptions and invariants underlying the
345 algorithms. The checking is fairly extensive, and will slow down
346 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
347 set will attempt to check every non-mmapped allocated and free chunk
348 in the course of computing the summaries.
349
350ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
351 Debugging assertion failures can be nearly impossible if your
352 version of the assert macro causes malloc to be called, which will
353 lead to a cascade of further failures, blowing the runtime stack.
354 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
355 which will usually make debugging easier.
356
357MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
358 The action to take before "return 0" when malloc cannot return
359 memory because none is available.
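  For example, to call a handler of your own instead of setting errno
  (a sketch; my_oom_hook is a hypothetical function, not part of this file):
    #define MALLOC_FAILURE_ACTION  my_oom_hook();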
360
361HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
362 True if this system supports sbrk or an emulation of it.
363
364MORECORE default: sbrk
365 The name of the sbrk-style system routine to call to obtain more
366 memory. See below for guidance on writing custom MORECORE
367 functions. The type of the argument to sbrk/MORECORE varies across
368 systems. It cannot be size_t, because it supports negative
369 arguments, so it is normally the signed type of the same width as
370 size_t (sometimes declared as "intptr_t"). It doesn't much matter
371 though. Internally, we only call it with arguments less than half
372 the max value of a size_t, which should work across all reasonable
373 possibilities, although sometimes generating compiler warnings.
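  As an illustrative sketch only (see the detailed guidance mentioned
  above), a hand-crafted MORECORE that carves space from a fixed static
  arena might look like the following. The arena name and size are
  arbitrary, intptr_t comes from <stdint.h>, and since this version
  cannot release space you would also define MORECORE_CANNOT_TRIM:

    #define MORECORE my_morecore
    #define MORECORE_CANNOT_TRIM
    static char my_arena[1 << 20];   // hypothetical 1MB pool
    static size_t my_brk = 0;        // bytes handed out so far
    void* my_morecore(intptr_t increment) {
      if (increment <= 0)
        return (void*)(my_arena + my_brk);     // report current break
      if (my_brk + (size_t)increment > sizeof(my_arena))
        return (void*)-1;                      // sbrk-style failure
      void* p = my_arena + my_brk;
      my_brk += (size_t)increment;
      return p;
    }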
374
375MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
376 If true, take advantage of fact that consecutive calls to MORECORE
377 with positive arguments always return contiguous increasing
378 addresses. This is true of unix sbrk. It does not hurt too much to
379 set it true anyway, since malloc copes with non-contiguities.
380 Setting it false when MORECORE is definitely non-contiguous saves the
381 time and possibly wasted space it would otherwise take to discover this.
382
383MORECORE_CANNOT_TRIM default: NOT defined
384 True if MORECORE cannot release space back to the system when given
385 negative arguments. This is generally necessary only if you are
386 using a hand-crafted MORECORE function that cannot handle negative
387 arguments.
388
389NO_SEGMENT_TRAVERSAL default: 0
390 If non-zero, suppresses traversals of memory segments
391 returned by either MORECORE or CALL_MMAP. This disables
392 merging of segments that are contiguous, and selectively
393 releasing them to the OS if unused, but bounds execution times.
394
395HAVE_MMAP default: 1 (true)
396 True if this system supports mmap or an emulation of it. If so, and
397 HAVE_MORECORE is not true, MMAP is used for all system
398 allocation. If set and HAVE_MORECORE is true as well, MMAP is
399 primarily used to directly allocate very large blocks. It is also
400 used as a backup strategy in cases where MORECORE fails to provide
401 space from system. Note: A single call to MUNMAP is assumed to be
402 able to unmap memory that may have been allocated using multiple calls
403 to MMAP, so long as they are adjacent.
404
405HAVE_MREMAP default: 1 on linux, else 0
406 If true, realloc() uses mremap() to re-allocate large blocks and
407 extend or shrink allocation spaces.
408
409MMAP_CLEARS default: 1 except on WINCE.
410 True if mmap clears memory so calloc doesn't need to. This is true
411 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
412
413USE_BUILTIN_FFS default: 0 (i.e., not used)
414 Causes malloc to use the builtin ffs() function to compute indices.
415 Some compilers may recognize and intrinsify ffs to be faster than the
416 supplied C version. Also, the case of x86 using gcc is special-cased
417 to an asm instruction, so is already as fast as it can be, and so
418 this setting has no effect. Similarly for Win32 under recent MS compilers.
419 (On most x86s, the asm version is only slightly faster than the C version.)
420
421malloc_getpagesize default: derive from system includes, or 4096.
422 The system page size. To the extent possible, this malloc manages
423 memory from the system in page-size units. This may be (and
424 usually is) a function rather than a constant. This is ignored
425 if WIN32, where page size is determined using GetSystemInfo during
426 initialization.
427
428USE_DEV_RANDOM default: 0 (i.e., not used)
429 Causes malloc to use /dev/random to initialize secure magic seed for
430 stamping footers. Otherwise, the current time is used.
431
432NO_MALLINFO default: 0
433 If defined, don't compile "mallinfo". This can be a simple way
434 of dealing with mismatches between system declarations and
435 those in this file.
436
437MALLINFO_FIELD_TYPE default: size_t
438 The type of the fields in the mallinfo struct. This was originally
439 defined as "int" in SVID etc, but is more usefully defined as
440 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
441
442NO_MALLOC_STATS default: 0
443 If defined, don't compile "malloc_stats". This avoids calls to
444 fprintf and bringing in stdio dependencies you might not want.
445
446REALLOC_ZERO_BYTES_FREES default: not defined
447 This should be set if a call to realloc with zero bytes should
448 be the same as a call to free. Some people think it should. Otherwise,
449 since this malloc returns a unique pointer for malloc(0),
450 realloc(p, 0) does the same.
451
452LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
453LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
454LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
455 Define these if your system does not have these header files.
456 You might need to manually insert some of the declarations they provide.
457
458DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
459 system_info.dwAllocationGranularity in WIN32,
460 otherwise 64K.
461 Also settable using mallopt(M_GRANULARITY, x)
462 The unit for allocating and deallocating memory from the system. On
463 most systems with contiguous MORECORE, there is no reason to
464 make this more than a page. However, systems with MMAP tend to
465 either require or encourage larger granularities. You can increase
466 this value to prevent system allocation functions from being called so
467 often, especially if they are slow. The value must be at least one
468 page and must be a power of two. Setting to 0 causes initialization
469 to either page size or win32 region size. (Note: In previous
470 versions of malloc, the equivalent of this option was called
471 "TOP_PAD")
472
473DEFAULT_TRIM_THRESHOLD default: 2MB
474 Also settable using mallopt(M_TRIM_THRESHOLD, x)
475 The maximum amount of unused top-most memory to keep before
476 releasing via malloc_trim in free(). Automatic trimming is mainly
477 useful in long-lived programs using contiguous MORECORE. Because
478 trimming via sbrk can be slow on some systems, and can sometimes be
479 wasteful (in cases where programs immediately afterward allocate
480 more large chunks) the value should be high enough so that your
481 overall system performance would improve by releasing this much
482 memory. As a rough guide, you might set to a value close to the
483 average size of a process (program) running on your system.
484 Releasing this much memory would allow such a process to run in
485 memory. Generally, it is worth tuning trim thresholds when a
486 program undergoes phases where several large chunks are allocated
487 and released in ways that can reuse each other's storage, perhaps
488 mixed with phases where there are no such chunks at all. The trim
489 value must be greater than page size to have any useful effect. To
490 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
491 some people use of mallocing a huge space and then freeing it at
492 program startup, in an attempt to reserve system memory, doesn't
493 have the intended effect under automatic trimming, since that memory
494 will immediately be returned to the system.
495
496DEFAULT_MMAP_THRESHOLD default: 256K
497 Also settable using mallopt(M_MMAP_THRESHOLD, x)
498 The request size threshold for using MMAP to directly service a
499 request. Requests of at least this size that cannot be allocated
500 using already-existing space will be serviced via mmap. (If enough
501 normal freed space already exists it is used instead.) Using mmap
502 segregates relatively large chunks of memory so that they can be
503 individually obtained and released from the host system. A request
504 serviced through mmap is never reused by any other request (at least
505 not directly; the system may just so happen to remap successive
506 requests to the same locations). Segregating space in this way has
507 the benefits that: Mmapped space can always be individually released
508 back to the system, which helps keep the system level memory demands
509 of a long-lived program low. Also, mapped memory doesn't become
510 `locked' between other chunks, as can happen with normally allocated
511 chunks, which means that even trimming via malloc_trim would not
512 release them. However, it has the disadvantage that the space
513 cannot be reclaimed, consolidated, and then used to service later
514 requests, as happens with normal chunks. The advantages of mmap
515 nearly always outweigh disadvantages for "large" chunks, but the
516 value of "large" may vary across systems. The default is an
517 empirically derived value that works well in most systems. You can
518 disable mmap by setting to MAX_SIZE_T.
519
520MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
521 The number of consolidated frees between checks to release
522 unused segments when freeing. When using non-contiguous segments,
523 especially with multiple mspaces, checking only for topmost space
524 doesn't always suffice to trigger trimming. To compensate for this,
525 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
526 current number of segments, if greater) try to release unused
527 segments to the OS when freeing chunks that result in
528 consolidation. The best value for this parameter is a compromise
529 between slowing down frees with relatively costly checks that
530 rarely trigger versus holding on to unused memory. To effectively
531 disable, set to MAX_SIZE_T. This may lead to a very slight speed
532 improvement at the expense of carrying around more memory.
533*/
534
535/* Version identifier to allow people to support multiple versions */
536#ifndef DLMALLOC_VERSION
537#define DLMALLOC_VERSION 20806
538#endif /* DLMALLOC_VERSION */
539
540#ifndef DLMALLOC_EXPORT
541#define DLMALLOC_EXPORT extern
542#endif
543
544#ifndef WIN32
545#ifdef _WIN32
546#define WIN32 1
547#endif /* _WIN32 */
548#ifdef _WIN32_WCE
549#define LACKS_FCNTL_H
550#define WIN32 1
551#endif /* _WIN32_WCE */
552#endif /* WIN32 */
553#ifdef WIN32
554#define WIN32_LEAN_AND_MEAN
555#include <windows.h>
556#include <tchar.h>
557#define HAVE_MMAP 1
558#define HAVE_MORECORE 0
559#define LACKS_UNISTD_H
560#define LACKS_SYS_PARAM_H
561#define LACKS_SYS_MMAN_H
562#define LACKS_STRING_H
563#define LACKS_STRINGS_H
564#define LACKS_SYS_TYPES_H
565#define LACKS_ERRNO_H
566#define LACKS_SCHED_H
567#ifndef MALLOC_FAILURE_ACTION
568#define MALLOC_FAILURE_ACTION
569#endif /* MALLOC_FAILURE_ACTION */
570#ifndef MMAP_CLEARS
571#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
572#define MMAP_CLEARS 0
573#else
574#define MMAP_CLEARS 1
575#endif /* _WIN32_WCE */
576#endif /* MMAP_CLEARS */
577#endif /* WIN32 */
578
579#if defined(DARWIN) || defined(_DARWIN)
580/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
581#ifndef HAVE_MORECORE
582#define HAVE_MORECORE 0
583#define HAVE_MMAP 1
584/* OSX allocators provide 16 byte alignment */
585#ifndef MALLOC_ALIGNMENT
586#define MALLOC_ALIGNMENT ((size_t)16U)
587#endif
588#endif /* HAVE_MORECORE */
589#endif /* DARWIN */
590
591#ifndef LACKS_SYS_TYPES_H
592#include <sys/types.h> /* For size_t */
593#endif /* LACKS_SYS_TYPES_H */
594
595/* The maximum possible size_t value has all bits set */
596#define MAX_SIZE_T (~(size_t)0)
597
598#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
599#define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
600 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
601#endif /* USE_LOCKS */
602
603#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
604#if ((defined(__GNUC__) && \
605 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
606 defined(__i386__) || defined(__x86_64__))) || \
607 (defined(_MSC_VER) && _MSC_VER>=1310))
608#ifndef USE_SPIN_LOCKS
609#define USE_SPIN_LOCKS 1
610#endif /* USE_SPIN_LOCKS */
611#elif USE_SPIN_LOCKS
612#error "USE_SPIN_LOCKS defined without implementation"
613#endif /* ... locks available... */
614#elif !defined(USE_SPIN_LOCKS)
615#define USE_SPIN_LOCKS 0
616#endif /* USE_LOCKS */
617
618#ifndef ONLY_MSPACES
619#define ONLY_MSPACES 0
620#endif /* ONLY_MSPACES */
621#ifndef MSPACES
622#if ONLY_MSPACES
623#define MSPACES 1
624#else /* ONLY_MSPACES */
625#define MSPACES 0
626#endif /* ONLY_MSPACES */
627#endif /* MSPACES */
628#ifndef MALLOC_ALIGNMENT
629#define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
630#endif /* MALLOC_ALIGNMENT */
631#ifndef FOOTERS
632#define FOOTERS 0
633#endif /* FOOTERS */
634#ifndef ABORT
635#define ABORT abort()
636#endif /* ABORT */
637#ifndef ABORT_ON_ASSERT_FAILURE
638#define ABORT_ON_ASSERT_FAILURE 1
639#endif /* ABORT_ON_ASSERT_FAILURE */
640#ifndef PROCEED_ON_ERROR
641#define PROCEED_ON_ERROR 0
642#endif /* PROCEED_ON_ERROR */
643
644#ifndef INSECURE
645#define INSECURE 0
646#endif /* INSECURE */
647#ifndef MALLOC_INSPECT_ALL
648#define MALLOC_INSPECT_ALL 0
649#endif /* MALLOC_INSPECT_ALL */
650#ifndef HAVE_MMAP
651#define HAVE_MMAP 1
652#endif /* HAVE_MMAP */
653#ifndef MMAP_CLEARS
654#define MMAP_CLEARS 1
655#endif /* MMAP_CLEARS */
656#ifndef HAVE_MREMAP
657#ifdef linux
658#define HAVE_MREMAP 1
659#define _GNU_SOURCE /* Turns on mremap() definition */
660#else /* linux */
661#define HAVE_MREMAP 0
662#endif /* linux */
663#endif /* HAVE_MREMAP */
664#ifndef MALLOC_FAILURE_ACTION
665#define MALLOC_FAILURE_ACTION errno = ENOMEM;
666#endif /* MALLOC_FAILURE_ACTION */
667#ifndef HAVE_MORECORE
668#if ONLY_MSPACES
669#define HAVE_MORECORE 0
670#else /* ONLY_MSPACES */
671#define HAVE_MORECORE 1
672#endif /* ONLY_MSPACES */
673#endif /* HAVE_MORECORE */
674#if !HAVE_MORECORE
675#define MORECORE_CONTIGUOUS 0
676#else /* !HAVE_MORECORE */
677#define MORECORE_DEFAULT sbrk
678#ifndef MORECORE_CONTIGUOUS
679#define MORECORE_CONTIGUOUS 1
680#endif /* MORECORE_CONTIGUOUS */
681#endif /* HAVE_MORECORE */
682#ifndef DEFAULT_GRANULARITY
683#if (MORECORE_CONTIGUOUS || defined(WIN32))
684#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
685#else /* MORECORE_CONTIGUOUS */
686#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
687#endif /* MORECORE_CONTIGUOUS */
688#endif /* DEFAULT_GRANULARITY */
689#ifndef DEFAULT_TRIM_THRESHOLD
690#ifndef MORECORE_CANNOT_TRIM
691#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
692#else /* MORECORE_CANNOT_TRIM */
693#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
694#endif /* MORECORE_CANNOT_TRIM */
695#endif /* DEFAULT_TRIM_THRESHOLD */
696#ifndef DEFAULT_MMAP_THRESHOLD
697#if HAVE_MMAP
698#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
699#else /* HAVE_MMAP */
700#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
701#endif /* HAVE_MMAP */
702#endif /* DEFAULT_MMAP_THRESHOLD */
703#ifndef MAX_RELEASE_CHECK_RATE
704#if HAVE_MMAP
705#define MAX_RELEASE_CHECK_RATE 4095
706#else
707#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
708#endif /* HAVE_MMAP */
709#endif /* MAX_RELEASE_CHECK_RATE */
710#ifndef USE_BUILTIN_FFS
711#define USE_BUILTIN_FFS 0
712#endif /* USE_BUILTIN_FFS */
713#ifndef USE_DEV_RANDOM
714#define USE_DEV_RANDOM 0
715#endif /* USE_DEV_RANDOM */
716#ifndef NO_MALLINFO
717#define NO_MALLINFO 0
718#endif /* NO_MALLINFO */
719#ifndef MALLINFO_FIELD_TYPE
720#define MALLINFO_FIELD_TYPE size_t
721#endif /* MALLINFO_FIELD_TYPE */
722#ifndef NO_MALLOC_STATS
723#define NO_MALLOC_STATS 0
724#endif /* NO_MALLOC_STATS */
725#ifndef NO_SEGMENT_TRAVERSAL
726#define NO_SEGMENT_TRAVERSAL 0
727#endif /* NO_SEGMENT_TRAVERSAL */
728
729/*
730 mallopt tuning options. SVID/XPG defines four standard parameter
731 numbers for mallopt, normally defined in malloc.h. None of these
732 are used in this malloc, so setting them has no effect. But this
733 malloc does support the following options.
734*/
735
736#define M_TRIM_THRESHOLD (-1)
737#define M_GRANULARITY (-2)
738#define M_MMAP_THRESHOLD (-3)
739
740/* ------------------------ Mallinfo declarations ------------------------ */
741
742#if !NO_MALLINFO
743/*
744 This version of malloc supports the standard SVID/XPG mallinfo
745 routine that returns a struct containing usage properties and
746 statistics. It should work on any system that has a
747 /usr/include/malloc.h defining struct mallinfo. The main
748 declaration needed is the mallinfo struct that is returned (by-copy)
749 by mallinfo(). The mallinfo struct contains a bunch of fields that
750 are not even meaningful in this version of malloc. These fields are
751 instead filled by mallinfo() with other numbers that might be of
752 interest.
753
754 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
755 /usr/include/malloc.h file that includes a declaration of struct
756 mallinfo. If so, it is included; else a compliant version is
757 declared below. These must be precisely the same for mallinfo() to
758 work. The original SVID version of this struct, defined on most
759 systems with mallinfo, declares all fields as ints. But some others
760 define them as unsigned long. If your system defines the fields using a
761 type of different width than listed here, you MUST #include your
762 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
763*/
764
765/* #define HAVE_USR_INCLUDE_MALLOC_H */
766
767#ifdef HAVE_USR_INCLUDE_MALLOC_H
768#include "/usr/include/malloc.h"
769#else /* HAVE_USR_INCLUDE_MALLOC_H */
770#ifndef STRUCT_MALLINFO_DECLARED
771/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
772#define _STRUCT_MALLINFO
773#define STRUCT_MALLINFO_DECLARED 1
774struct mallinfo {
775 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
776 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
777 MALLINFO_FIELD_TYPE smblks; /* always 0 */
778 MALLINFO_FIELD_TYPE hblks; /* always 0 */
779 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
780 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
781 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
782 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
783 MALLINFO_FIELD_TYPE fordblks; /* total free space */
784 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
785};
786#endif /* STRUCT_MALLINFO_DECLARED */
787#endif /* HAVE_USR_INCLUDE_MALLOC_H */
788#endif /* NO_MALLINFO */
789
790/*
791 Try to persuade compilers to inline. The most critical functions for
792 inlining are defined as macros, so these aren't used for them.
793*/
794
795#ifndef FORCEINLINE
796 #if defined(__GNUC__)
797#define FORCEINLINE __inline __attribute__ ((always_inline))
798 #elif defined(_MSC_VER)
799 #define FORCEINLINE __forceinline
800 #endif
801#endif
802#ifndef NOINLINE
803 #if defined(__GNUC__)
804 #define NOINLINE __attribute__ ((noinline))
805 #elif defined(_MSC_VER)
806 #define NOINLINE __declspec(noinline)
807 #else
808 #define NOINLINE
809 #endif
810#endif
811
812#ifdef __cplusplus
813extern "C" {
814#ifndef FORCEINLINE
815 #define FORCEINLINE inline
816#endif
817#endif /* __cplusplus */
818#ifndef FORCEINLINE
819 #define FORCEINLINE
820#endif
821
822#if !ONLY_MSPACES
823
824/* ------------------- Declarations of public routines ------------------- */
825
826#ifndef USE_DL_PREFIX
827#define dlcalloc calloc
828#define dlfree free
829#define dlmalloc malloc
830#define dlmemalign memalign
831#define dlposix_memalign posix_memalign
832#define dlrealloc realloc
833#define dlrealloc_in_place realloc_in_place
834#define dlvalloc valloc
835#define dlpvalloc pvalloc
836#define dlmallinfo mallinfo
837#define dlmallopt mallopt
838#define dlmalloc_trim malloc_trim
839#define dlmalloc_stats malloc_stats
840#define dlmalloc_usable_size malloc_usable_size
841#define dlmalloc_footprint malloc_footprint
842#define dlmalloc_max_footprint malloc_max_footprint
843#define dlmalloc_footprint_limit malloc_footprint_limit
844#define dlmalloc_set_footprint_limit malloc_set_footprint_limit
845#define dlmalloc_inspect_all malloc_inspect_all
846#define dlindependent_calloc independent_calloc
847#define dlindependent_comalloc independent_comalloc
848#define dlbulk_free bulk_free
849#endif /* USE_DL_PREFIX */
850
851/*
852 malloc(size_t n)
853 Returns a pointer to a newly allocated chunk of at least n bytes, or
854 null if no space is available, in which case errno is set to ENOMEM
855 on ANSI C systems.
856
857 If n is zero, malloc returns a minimum-sized chunk. (The minimum
858 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
859 systems.) Note that size_t is an unsigned type, so calls with
860 arguments that would be negative if signed are interpreted as
861 requests for huge amounts of space, which will often fail. The
862 maximum supported value of n differs across systems, but is in all
863 cases less than the maximum representable value of a size_t.
864*/
865DLMALLOC_EXPORT void* dlmalloc(size_t);
866
867/*
868 free(void* p)
869 Releases the chunk of memory pointed to by p, that had been previously
870 allocated using malloc or a related routine such as realloc.
871 It has no effect if p is null. If p was not malloced or already
872 freed, free(p) will by default cause the current program to abort.
873*/
874DLMALLOC_EXPORT void dlfree(void*);
875
876/*
877 calloc(size_t n_elements, size_t element_size);
878 Returns a pointer to n_elements * element_size bytes, with all locations
879 set to zero.
880*/
881DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
882
883/*
884 realloc(void* p, size_t n)
885 Returns a pointer to a chunk of size n that contains the same data
886 as does chunk p up to the minimum of (n, p's size) bytes, or null
887 if no space is available.
888
889 The returned pointer may or may not be the same as p. The algorithm
890 prefers extending p in most cases when possible, otherwise it
891 employs the equivalent of a malloc-copy-free sequence.
892
893 If p is null, realloc is equivalent to malloc.
894
895 If space is not available, realloc returns null, errno is set (if on
896 ANSI) and p is NOT freed.
897
898 If n is for fewer bytes than already held by p, the newly unused
899 space is lopped off and freed if possible. realloc with a size
900 argument of zero (re)allocates a minimum-sized chunk.
901
902 The old unix realloc convention of allowing the last-free'd chunk
903 to be used as an argument to realloc is not supported.
904*/
905DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
906
907/*
908 realloc_in_place(void* p, size_t n)
909 Resizes the space allocated for p to size n, only if this can be
910 done without moving p (i.e., only if there is adjacent space
911 available if n is greater than p's current allocated size, or n is
912 less than or equal to p's size). This may be used instead of plain
913 realloc if an alternative allocation strategy is needed upon failure
914 to expand space; for example, reallocation of a buffer that must be
915 memory-aligned or cleared. You can use realloc_in_place to trigger
916 these alternatives only when needed.
917
918 Returns p if successful; otherwise null.
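  For example (a sketch of the pattern described above; the helper name
  and the 64-byte alignment are arbitrary, and memcpy needs <string.h>):

    void* grow_aligned(void* p, size_t oldsize, size_t newsize) {
      if (dlrealloc_in_place(p, newsize) != 0)
        return p;                          // resized without moving
      void* q = dlmemalign(64, newsize);   // fall back to a new aligned block
      if (q == 0) return 0;
      memcpy(q, p, oldsize);
      dlfree(p);
      return q;
    }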
919*/
920DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
921
922/*
923 memalign(size_t alignment, size_t n);
924 Returns a pointer to a newly allocated chunk of n bytes, aligned
925 in accord with the alignment argument.
926
927 The alignment argument should be a power of two. If the argument is
928 not a power of two, the nearest greater power is used.
929 8-byte alignment is guaranteed by normal malloc calls, so don't
930 bother calling memalign with an argument of 8 or less.
931
932 Overreliance on memalign is a sure way to fragment space.
933*/
934DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
935
936/*
937 int posix_memalign(void** pp, size_t alignment, size_t n);
938 Allocates a chunk of n bytes, aligned in accord with the alignment
939 argument. Differs from memalign only in that it (1) assigns the
940 allocated memory to *pp rather than returning it, (2) fails and
941 returns EINVAL if the alignment is not a power of two, and (3) fails and
942 returns ENOMEM if memory cannot be allocated.
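  For example (a minimal usage sketch):

    void* p = 0;
    int rc = dlposix_memalign(&p, 64, 1000);  // 64-byte aligned block
    if (rc == 0) {
      // ... use p ...
      dlfree(p);
    }                                         // else rc is EINVAL or ENOMEM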
943*/
944DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
945
946/*
947 valloc(size_t n);
948 Equivalent to memalign(pagesize, n), where pagesize is the page
949 size of the system. If the pagesize is unknown, 4096 is used.
950*/
951DLMALLOC_EXPORT void* dlvalloc(size_t);
952
953/*
954 mallopt(int parameter_number, int parameter_value)
955 Sets tunable parameters. The format is to provide a
956 (parameter-number, parameter-value) pair. mallopt then sets the
957 corresponding parameter to the argument value if it can (i.e., so
958 long as the value is meaningful), and returns 1 if successful else
959 0. To work around the fact that mallopt is specified to use int,
960 not size_t parameters, the value -1 is specially treated as the
961 maximum unsigned size_t value.
962
963 SVID/XPG/ANSI defines four standard param numbers for mallopt,
964 normally defined in malloc.h. None of these are used in this malloc,
965 so setting them has no effect. But this malloc also supports other
966 options in mallopt. See below for details. Briefly, supported
967 parameters are as follows (listed defaults are for "typical"
968 configurations).
969
970 Symbol param # default allowed param values
971 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
972 M_GRANULARITY -2 page size any power of 2 >= page size
973 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
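  For example, a usage sketch (dlmallopt is this file's name for
  mallopt; the parameter-number macros are defined above):

    dlmallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // mmap requests of 1MB and up
    dlmallopt(M_TRIM_THRESHOLD, -1);           // -1 disables trimming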
974*/
975DLMALLOC_EXPORT int dlmallopt(int, int);
976
977/*
978 malloc_footprint();
979 Returns the number of bytes obtained from the system. The total
980 number of bytes allocated by malloc, realloc etc., is less than this
981 value. Unlike mallinfo, this function returns only a precomputed
982 result, so can be called frequently to monitor memory consumption.
983 Even if locks are otherwise defined, this function does not use them,
984 so results might not be up to date.
985*/
986DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
987
988/*
989 malloc_max_footprint();
990 Returns the maximum number of bytes obtained from the system. This
991 value will be greater than current footprint if deallocated space
992 has been reclaimed by the system. The peak number of bytes allocated
993 by malloc, realloc etc., is less than this value. Unlike mallinfo,
994 this function returns only a precomputed result, so can be called
995 frequently to monitor memory consumption. Even if locks are
996 otherwise defined, this function does not use them, so results might
997 not be up to date.
998*/
999DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
1000
1001/*
1002 malloc_footprint_limit();
1003 Returns the number of bytes that the heap is allowed to obtain from
1004 the system, returning the last value returned by
1005 malloc_set_footprint_limit, or the maximum size_t value if
1006 never set. The returned value reflects a permission. There is no
1007 guarantee that this number of bytes can actually be obtained from
1008 the system.
1009*/
1010DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();
1011
1012/*
1013 malloc_set_footprint_limit();
1014 Sets the maximum number of bytes to obtain from the system, causing
1015 failure returns from malloc and related functions upon attempts to
1016 exceed this value. The argument value may be subject to page
1017 rounding to an enforceable limit; this actual value is returned.
1018 Using an argument of the maximum possible size_t effectively
1019 disables checks. If the argument is less than or equal to the
1020 current malloc_footprint, then all future allocations that require
1021 additional system memory will fail. However, invocation cannot
1022 retroactively deallocate existing used memory.
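  For example (a usage sketch):

    size_t granted = dlmalloc_set_footprint_limit((size_t)16 * 1024 * 1024);
    // granted holds the page-rounded limit actually enforced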
1023*/
1024DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1025
1026#if MALLOC_INSPECT_ALL
1027/*
1028 malloc_inspect_all(void(*handler)(void *start,
1029 void *end,
1030 size_t used_bytes,
1031 void* callback_arg),
1032 void* arg);
1033 Traverses the heap and calls the given handler for each managed
1034 region, skipping all bytes that are (or may be) used for bookkeeping
1035 purposes. Traversal does not include chunks that have been
1036 directly memory mapped. Each reported region begins at the start
1037 address, and continues up to but not including the end address. The
1038 first used_bytes of the region contain allocated data. If
1039 used_bytes is zero, the region is unallocated. The handler is
1040 invoked with the given callback argument. If locks are defined, they
1041 are held during the entire traversal. It is a bad idea to invoke
1042 other malloc functions from within the handler.
1043
1044 For example, to count the number of in-use chunks with size greater
1045 than 1000, you could write:
1046 static int count = 0;
1047 void count_chunks(void* start, void* end, size_t used, void* arg) {
1048 if (used >= 1000) ++count;
1049 }
1050 then:
1051 malloc_inspect_all(count_chunks, NULL);
1052
1053 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1054*/
1055DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
1056 void* arg);
1057
1058#endif /* MALLOC_INSPECT_ALL */
1059
1060#if !NO_MALLINFO
1061/*
1062 mallinfo()
1063 Returns (by copy) a struct containing various summary statistics:
1064
1065 arena: current total non-mmapped bytes allocated from system
1066 ordblks: the number of free chunks
1067 smblks: always zero.
1068 hblks: current number of mmapped regions
1069 hblkhd: total bytes held in mmapped regions
1070 usmblks: the maximum total allocated space. This will be greater
1071 than current total if trimming has occurred.
1072 fsmblks: always zero
1073 uordblks: current total allocated space (normal or mmapped)
1074 fordblks: total free space
1075 keepcost: the maximum number of bytes that could ideally be released
1076 back to system via malloc_trim. ("ideally" means that
1077 it ignores page restrictions etc.)
1078
1079 Because these fields are ints, but internal bookkeeping may
1080 be kept as longs, the reported values may wrap around zero and
1081 thus be inaccurate.
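  For example (a sketch; printf brings in stdio):

    struct mallinfo mi = dlmallinfo();
    printf("allocated: %lu  free: %lu\n",
           (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);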
1082*/
1083DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1084#endif /* NO_MALLINFO */
1085
1086/*
1087 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1088
1089 independent_calloc is similar to calloc, but instead of returning a
1090 single cleared space, it returns an array of pointers to n_elements
1091 independent elements that can hold contents of size elem_size, each
1092 of which starts out cleared, and can be independently freed,
1093 realloc'ed etc. The elements are guaranteed to be adjacently
1094 allocated (this is not guaranteed to occur with multiple callocs or
1095 mallocs), which may also improve cache locality in some
1096 applications.
1097
1098 The "chunks" argument is optional (i.e., may be null, which is
1099 probably the most typical usage). If it is null, the returned array
1100 is itself dynamically allocated and should also be freed when it is
1101 no longer needed. Otherwise, the chunks array must be of at least
1102 n_elements in length. It is filled in with the pointers to the
1103 chunks.
1104
1105 In either case, independent_calloc returns this pointer array, or
1106 null if the allocation failed. If n_elements is zero and "chunks"
1107 is null, it returns a chunk representing an array with zero elements
1108 (which should be freed if not wanted).
1109
1110 Each element must be freed when it is no longer needed. This can be
1111 done all at once using bulk_free.
1112
1113 independent_calloc simplifies and speeds up implementations of many
1114 kinds of pools. It may also be useful when constructing large data
1115 structures that initially have a fixed number of fixed-sized nodes,
1116 but the number is not known at compile time, and some of the nodes
1117 may later need to be freed. For example:
1118
1119 struct Node { int item; struct Node* next; };
1120
1121 struct Node* build_list() {
1122 struct Node** pool;
1123 int n = read_number_of_nodes_needed();
1124 if (n <= 0) return 0;
1125 pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
1126 if (pool == 0) die();
1127 // organize into a linked list...
1128 struct Node* first = pool[0];
1129 for (int i = 0; i < n-1; ++i)
1130 pool[i]->next = pool[i+1];
1131 free(pool); // Can now free the array (or not, if it is needed later)
1132 return first;
1133 }
1134*/
1135DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1136
1137/*
1138 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1139
1140 independent_comalloc allocates, all at once, a set of n_elements
1141 chunks with sizes indicated in the "sizes" array. It returns
1142 an array of pointers to these elements, each of which can be
1143 independently freed, realloc'ed etc. The elements are guaranteed to
1144 be adjacently allocated (this is not guaranteed to occur with
1145 multiple callocs or mallocs), which may also improve cache locality
1146 in some applications.
1147
1148 The "chunks" argument is optional (i.e., may be null). If it is null
1149 the returned array is itself dynamically allocated and should also
1150 be freed when it is no longer needed. Otherwise, the chunks array
1151 must be of at least n_elements in length. It is filled in with the
1152 pointers to the chunks.
1153
1154 In either case, independent_comalloc returns this pointer array, or
1155 null if the allocation failed. If n_elements is zero and chunks is
1156 null, it returns a chunk representing an array with zero elements
1157 (which should be freed if not wanted).
1158
1159 Each element must be freed when it is no longer needed. This can be
1160 done all at once using bulk_free.
1161
1162 independent_comalloc differs from independent_calloc in that each
1163 element may have a different size, and also that it does not
1164 automatically clear elements.
1165
1166 independent_comalloc can be used to speed up allocation in cases
1167 where several structs or objects must always be allocated at the
1168 same time. For example:
1169
1170 struct Head { ... }
1171 struct Foot { ... }
1172
1173 void send_message(char* msg) {
1174 int msglen = strlen(msg);
1175 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1176 void* chunks[3];
1177 if (independent_comalloc(3, sizes, chunks) == 0)
1178 die();
1179 struct Head* head = (struct Head*)(chunks[0]);
1180 char* body = (char*)(chunks[1]);
1181 struct Foot* foot = (struct Foot*)(chunks[2]);
1182 // ...
1183 }
1184
1185 In general though, independent_comalloc is worth using only for
1186 larger values of n_elements. For small values, you probably won't
1187 detect enough difference from series of malloc calls to bother.
1188
1189 Overuse of independent_comalloc can increase overall memory usage,
1190 since it cannot reuse existing noncontiguous small chunks that
1191 might be available for some of the elements.
1192*/
1193DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1194
1195/*
1196 bulk_free(void* array[], size_t n_elements)
1197 Frees and clears (sets to null) each non-null pointer in the given
1198 array. This is likely to be faster than freeing them one-by-one.
1199 If footers are used, pointers that have been allocated in different
1200 mspaces are not freed or cleared, and the count of all such pointers
1201 is returned. For large arrays of pointers with poor locality, it
1202 may be worthwhile to sort this array before calling bulk_free.
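  For example, paired with independent_calloc (a sketch; the pointer
  array itself is a separate allocation that still needs freeing):

    void** ptrs = dlindependent_calloc(100, 32, 0);  // 100 cleared 32-byte chunks
    if (ptrs != 0) {
      // ... use ptrs[0] .. ptrs[99] ...
      dlbulk_free(ptrs, 100);  // frees the elements and nulls the entries
      dlfree(ptrs);            // then free the array itself
    }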
1203*/
1204DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
1205
1206/*
1207 pvalloc(size_t n);
1208 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1209 round up n to nearest pagesize.
1210 */
1211DLMALLOC_EXPORT void* dlpvalloc(size_t);
1212
1213/*
1214 malloc_trim(size_t pad);
1215
1216 If possible, gives memory back to the system (via negative arguments
1217 to sbrk) if there is unused memory at the `high' end of the malloc
1218 pool or in unused MMAP segments. You can call this after freeing
1219 large blocks of memory to potentially reduce the system-level memory
1220 requirements of a program. However, it cannot guarantee to reduce
1221 memory. Under some allocation patterns, some large free blocks of
1222 memory will be locked between two used chunks, so they cannot be
1223 given back to the system.
1224
1225 The `pad' argument to malloc_trim represents the amount of free
1226 trailing space to leave untrimmed. If this argument is zero, only
1227 the minimum amount of memory to maintain internal data structures
1228 will be left. Non-zero arguments can be supplied to maintain enough
1229 trailing space to service future expected allocations without having
1230 to re-obtain memory from the system.
1231
1232 Malloc_trim returns 1 if it actually released any memory, else 0.
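  For example, after freeing a large working set:

    dlmalloc_trim(64 * 1024);  // trim, keeping about 64K of slack for future requests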
1233*/
1234DLMALLOC_EXPORT int dlmalloc_trim(size_t);
1235
1236/*
1237 malloc_stats();
1238 Prints on stderr the amount of space obtained from the system (both
1239 via sbrk and mmap), the maximum amount (which may be more than
1240 current if malloc_trim and/or munmap got called), and the current
1241 number of bytes allocated via malloc (or realloc, etc) but not yet
1242 freed. Note that this is the number of bytes allocated, not the
1243 number requested. It will be larger than the number requested
1244 because of alignment and bookkeeping overhead. Because it includes
1245 alignment wastage as being in use, this figure may be greater than
1246 zero even when no user-level chunks are allocated.
1247
1248 The reported current and maximum system memory can be inaccurate if
1249 a program makes other calls to system memory allocation functions
1250 (normally sbrk) outside of malloc.
1251
1252 malloc_stats prints only the most commonly interesting statistics.
1253 More information can be obtained by calling mallinfo.
1254*/
1255DLMALLOC_EXPORT void dlmalloc_stats(void);
1256
1257/*
1258 malloc_usable_size(void* p);
1259
1260 Returns the number of bytes you can actually use in
1261 an allocated chunk, which may be more than you requested (although
1262 often not) due to alignment and minimum size constraints.
1263 You can use this many bytes without worrying about
1264 overwriting other allocated objects. This is not a particularly great
1265 programming practice. malloc_usable_size can be more useful in
1266 debugging and assertions, for example:
1267
1268 p = malloc(n);
1269 assert(malloc_usable_size(p) >= 256);
1270*/
1271size_t dlmalloc_usable_size(void*);
1272
1273#endif /* ONLY_MSPACES */
1274
1275#if MSPACES
1276
1277/*
1278 mspace is an opaque type representing an independent
1279 region of space that supports mspace_malloc, etc.
1280*/
1281typedef void* mspace;
1282
1283/*
1284 create_mspace creates and returns a new independent space with the
1285 given initial capacity, or, if 0, the default granularity size. It
1286 returns null if there is no system memory available to create the
1287 space. If argument locked is non-zero, the space uses a separate
1288 lock to control access. The capacity of the space will grow
1289 dynamically as needed to service mspace_malloc requests. You can
1290 control the sizes of incremental increases of this space by
1291 compiling with a different DEFAULT_GRANULARITY or dynamically
1292 setting with mallopt(M_GRANULARITY, value).
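
  A minimal usage sketch (default initial capacity, no locking):

    mspace msp = create_mspace(0, 0);
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      // ... use p ...
      mspace_free(msp, p);
      destroy_mspace(msp);   // return remaining memory to the system
    }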
1293*/
1294DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1295
1296/*
1297 destroy_mspace destroys the given space, and attempts to return all
1298 of its memory back to the system, returning the total number of
1299 bytes freed. After destruction, the results of access to all memory
1300 used by the space become undefined.
1301*/
1302DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1303
1304/*
1305 create_mspace_with_base uses the memory supplied as the initial base
1306 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1307 space is used for bookkeeping, so the capacity must be at least this
1308 large. (Otherwise 0 is returned.) When this initial space is
1309 exhausted, additional memory will be obtained from the system.
1310 Destroying this space will deallocate all additionally allocated
1311 space (if possible) but not the initial base.
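
  For example, a sketch carving an mspace out of a caller-supplied
  buffer (the static array is illustrative):

    static char arena[1024 * 1024];
    mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
    if (msp != 0) {
      void* p = mspace_malloc(msp, 256);
      // ... destroy_mspace(msp) will not unmap arena itself ...
    }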
1312*/
1313DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1314
1315/*
1316 mspace_track_large_chunks controls whether requests for large chunks
1317 are allocated in their own untracked mmapped regions, separate from
1318 others in this mspace. By default large chunks are not tracked,
1319 which reduces fragmentation. However, such chunks are not
1320 necessarily released to the system upon destroy_mspace. Enabling
1321 tracking by setting to true may increase fragmentation, but avoids
1322 leakage when relying on destroy_mspace to release all memory
1323 allocated using this space. The function returns the previous
1324 setting.
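
  For example, a sketch of relying on destroy_mspace for cleanup:

    mspace msp = create_mspace(0, 0);
    mspace_track_large_chunks(msp, 1);
    void* big = mspace_malloc(msp, 4 * 1024 * 1024);
    // ... use big; no individual mspace_free is needed ...
    destroy_mspace(msp);   // also releases the mmapped region holding big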
1325*/
1326DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1327
1328
1329/*
1330 mspace_malloc behaves as malloc, but operates within
1331 the given space.
1332*/
1333DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1334
1335/*
1336 mspace_free behaves as free, but operates within
1337 the given space.
1338
1339 If compiled with FOOTERS==1, mspace_free is not actually needed.
1340 free may be called instead of mspace_free because freed chunks from
1341 any space are handled by their originating spaces.
1342*/
1343DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1344
1345/*
1346 mspace_realloc behaves as realloc, but operates within
1347 the given space.
1348
1349 If compiled with FOOTERS==1, mspace_realloc is not actually
1350 needed. realloc may be called instead of mspace_realloc because
1351 realloced chunks from any space are handled by their originating
1352 spaces.
1353*/
1354DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1355
1356/*
1357 mspace_calloc behaves as calloc, but operates within
1358 the given space.
1359*/
1360DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1361
1362/*
1363 mspace_memalign behaves as memalign, but operates within
1364 the given space.
1365*/
1366DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1367
1368/*
1369 mspace_independent_calloc behaves as independent_calloc, but
1370 operates within the given space.
1371*/
1372DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1373 size_t elem_size, void* chunks[]);
1374
1375/*
1376 mspace_independent_comalloc behaves as independent_comalloc, but
1377 operates within the given space.
1378*/
1379DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1380 size_t sizes[], void* chunks[]);
1381
1382/*
1383 mspace_footprint() returns the number of bytes obtained from the
1384 system for this space.
1385*/
1386DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1387
1388/*
1389 mspace_max_footprint() returns the peak number of bytes obtained from the
1390 system for this space.
1391*/
1392DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1393
1394
1395#if !NO_MALLINFO
1396/*
1397 mspace_mallinfo behaves as mallinfo, but reports properties of
1398 the given space.
1399*/
1400DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1401#endif /* NO_MALLINFO */
1402
1403/*
1404 mspace_usable_size(void* p) behaves the same as malloc_usable_size;
1405*/
1406DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1407
1408/*
1409 mspace_malloc_stats behaves as malloc_stats, but reports
1410 properties of the given space.
1411*/
1412DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1413
1414/*
1415 mspace_trim behaves as malloc_trim, but
1416 operates within the given space.
1417*/
1418DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1419
1420/*
1421 An alias for mallopt.
1422*/
1423DLMALLOC_EXPORT int mspace_mallopt(int, int);
1424
1425#endif /* MSPACES */
1426
1427#ifdef __cplusplus
1428} /* end of extern "C" */
1429#endif /* __cplusplus */
1430
1431/*
1432 ========================================================================
1433 To make a fully customizable malloc.h header file, cut everything
1434 above this line, put into file malloc.h, edit to suit, and #include it
1435 on the next line, as well as in programs that use this malloc.
1436 ========================================================================
1437*/
1438
1439/* #include "malloc.h" */
1440
1441/*------------------------------ internal #includes ---------------------- */
1442
1443#ifdef _MSC_VER
1444#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1445#endif /* _MSC_VER */
1446#if !NO_MALLOC_STATS
1447#include <stdio.h> /* for printing in malloc_stats */
1448#endif /* NO_MALLOC_STATS */
1449#ifndef LACKS_ERRNO_H
1450#include <errno.h> /* for MALLOC_FAILURE_ACTION */
1451#endif /* LACKS_ERRNO_H */
1452#ifdef DEBUG
1453#if ABORT_ON_ASSERT_FAILURE
1454#undef assert
1455#define assert(x) if(!(x)) ABORT
1456#else /* ABORT_ON_ASSERT_FAILURE */
1457#include <assert.h>
1458#endif /* ABORT_ON_ASSERT_FAILURE */
1459#else /* DEBUG */
1460#ifndef assert
1461#define assert(x)
1462#endif
1463#define DEBUG 0
1464#endif /* DEBUG */
1465#if !defined(WIN32) && !defined(LACKS_TIME_H)
1466#include <time.h> /* for magic initialization */
1467#endif /* WIN32 */
1468#ifndef LACKS_STDLIB_H
1469#include <stdlib.h> /* for abort() */
1470#endif /* LACKS_STDLIB_H */
1471#ifndef LACKS_STRING_H
1472#include <string.h> /* for memset etc */
1473#endif /* LACKS_STRING_H */
1474#if USE_BUILTIN_FFS
1475#ifndef LACKS_STRINGS_H
1476#include <strings.h> /* for ffs */
1477#endif /* LACKS_STRINGS_H */
1478#endif /* USE_BUILTIN_FFS */
1479#if HAVE_MMAP
1480#ifndef LACKS_SYS_MMAN_H
1481/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1482#if (defined(linux) && !defined(__USE_GNU))
1483#define __USE_GNU 1
1484#include <sys/mman.h> /* for mmap */
1485#undef __USE_GNU
1486#else
1487#include <sys/mman.h> /* for mmap */
1488#endif /* linux */
1489#endif /* LACKS_SYS_MMAN_H */
1490#ifndef LACKS_FCNTL_H
1491#include <fcntl.h>
1492#endif /* LACKS_FCNTL_H */
1493#endif /* HAVE_MMAP */
1494#ifndef LACKS_UNISTD_H
1495#include <unistd.h> /* for sbrk, sysconf */
1496#else /* LACKS_UNISTD_H */
1497#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1498extern void* sbrk(ptrdiff_t);
1499#endif /* FreeBSD etc */
1500#endif /* LACKS_UNISTD_H */
1501
1502/* Declarations for locking */
1503#if USE_LOCKS
1504#ifndef WIN32
1505#if defined (__SVR4) && defined (__sun) /* solaris */
1506#include <thread.h>
1507#elif !defined(LACKS_SCHED_H)
1508#include <sched.h>
1509#endif /* solaris or LACKS_SCHED_H */
1510#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1511#include <pthread.h>
1512#endif /* USE_RECURSIVE_LOCKS ... */
1513#elif defined(_MSC_VER)
1514#ifndef _M_AMD64
1515/* These are already defined on AMD64 builds */
1516#ifdef __cplusplus
1517extern "C" {
1518#endif /* __cplusplus */
1519LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1520LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1521#ifdef __cplusplus
1522}
1523#endif /* __cplusplus */
1524#endif /* _M_AMD64 */
1525#pragma intrinsic (_InterlockedCompareExchange)
1526#pragma intrinsic (_InterlockedExchange)
1527#define interlockedcompareexchange _InterlockedCompareExchange
1528#define interlockedexchange _InterlockedExchange
1529#elif defined(WIN32) && defined(__GNUC__)
1530#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1531#define interlockedexchange __sync_lock_test_and_set
1532#endif /* Win32 */
1533#else /* USE_LOCKS */
1534#endif /* USE_LOCKS */
1535
1536#ifndef LOCK_AT_FORK
1537#define LOCK_AT_FORK 0
1538#endif
1539
1540/* Declarations for bit scanning on win32 */
1541#if defined(_MSC_VER) && _MSC_VER>=1300
1542#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1543#ifdef __cplusplus
1544extern "C" {
1545#endif /* __cplusplus */
1546unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1547unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1548#ifdef __cplusplus
1549}
1550#endif /* __cplusplus */
1551
1552#define BitScanForward _BitScanForward
1553#define BitScanReverse _BitScanReverse
1554#pragma intrinsic(_BitScanForward)
1555#pragma intrinsic(_BitScanReverse)
1556#endif /* BitScanForward */
1557#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1558
1559#ifndef WIN32
1560#ifndef malloc_getpagesize
1561# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1562# ifndef _SC_PAGE_SIZE
1563# define _SC_PAGE_SIZE _SC_PAGESIZE
1564# endif
1565# endif
1566# ifdef _SC_PAGE_SIZE
1567# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1568# else
1569# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1570 extern size_t getpagesize();
1571# define malloc_getpagesize getpagesize()
1572# else
1573# ifdef WIN32 /* use supplied emulation of getpagesize */
1574# define malloc_getpagesize getpagesize()
1575# else
1576# ifndef LACKS_SYS_PARAM_H
1577# include <sys/param.h>
1578# endif
1579# ifdef EXEC_PAGESIZE
1580# define malloc_getpagesize EXEC_PAGESIZE
1581# else
1582# ifdef NBPG
1583# ifndef CLSIZE
1584# define malloc_getpagesize NBPG
1585# else
1586# define malloc_getpagesize (NBPG * CLSIZE)
1587# endif
1588# else
1589# ifdef NBPC
1590# define malloc_getpagesize NBPC
1591# else
1592# ifdef PAGESIZE
1593# define malloc_getpagesize PAGESIZE
1594# else /* just guess */
1595# define malloc_getpagesize ((size_t)4096U)
1596# endif
1597# endif
1598# endif
1599# endif
1600# endif
1601# endif
1602# endif
1603#endif
1604#endif
1605
1606/* ------------------- size_t and alignment properties -------------------- */
1607
1608/* The byte and bit size of a size_t */
1609#define SIZE_T_SIZE (sizeof(size_t))
1610#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1611
1612/* Some constants coerced to size_t */
1613/* Annoying but necessary to avoid errors on some platforms */
1614#define SIZE_T_ZERO ((size_t)0)
1615#define SIZE_T_ONE ((size_t)1)
1616#define SIZE_T_TWO ((size_t)2)
1617#define SIZE_T_FOUR ((size_t)4)
1618#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1619#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1620#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1621#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1622
1623/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1624#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1625
1626/* True if address a has acceptable alignment */
1627#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1628
1629/* the number of bytes to offset an address to align it */
1630#define align_offset(A)\
1631 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1632 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
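
/*
  For example, assuming MALLOC_ALIGNMENT is 16 (the default with 8-byte
  pointers), so CHUNK_ALIGN_MASK is 15: align_offset(0x1000) is 0, while
  align_offset(0x1004) is 12, since 0x1004 + 12 == 0x1010 is the next
  16-byte-aligned address.
*/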
1633
1634/* -------------------------- MMAP preliminaries ------------------------- */
1635
1636/*
1637 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1638 checks to fail so compiler optimizer can delete code rather than
1639 using so many "#if"s.
1640*/
1641
1642
1643/* MORECORE and MMAP must return MFAIL on failure */
1644#define MFAIL ((void*)(MAX_SIZE_T))
1645#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1646
1647#if HAVE_MMAP
1648
1649#ifndef WIN32
1650#define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1651#define MMAP_PROT (PROT_READ|PROT_WRITE)
1652#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1653#define MAP_ANONYMOUS MAP_ANON
1654#endif /* MAP_ANON */
1655#ifdef MAP_ANONYMOUS
1656#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1657#define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1658#else /* MAP_ANONYMOUS */
1659/*
1660 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1661 is unlikely to be needed, but is supplied just in case.
1662*/
1663#define MMAP_FLAGS (MAP_PRIVATE)
1664static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1665#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1666 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1667 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1668 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1669#endif /* MAP_ANONYMOUS */
1670
1671#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1672
1673#else /* WIN32 */
1674
1675/* Win32 MMAP via VirtualAlloc */
1676static FORCEINLINE void* win32mmap(size_t size) {
1677 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1678 return (ptr != 0)? ptr: MFAIL;
1679}
1680
1681/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1682static FORCEINLINE void* win32direct_mmap(size_t size) {
1683 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1684 PAGE_READWRITE);
1685 return (ptr != 0)? ptr: MFAIL;
1686}
1687
1688/* This function supports releasing coalesced segments */
1689static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1690 MEMORY_BASIC_INFORMATION minfo;
1691 char* cptr = (char*)ptr;
1692 while (size) {
1693 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1694 return -1;
1695 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1696 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1697 return -1;
1698 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1699 return -1;
1700 cptr += minfo.RegionSize;
1701 size -= minfo.RegionSize;
1702 }
1703 return 0;
1704}
1705
1706#define MMAP_DEFAULT(s) win32mmap(s)
1707#define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1708#define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1709#endif /* WIN32 */
1710#endif /* HAVE_MMAP */
1711
1712#if HAVE_MREMAP
1713#ifndef WIN32
1714#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1715#endif /* WIN32 */
1716#endif /* HAVE_MREMAP */
1717
1718/**
1719 * Define CALL_MORECORE
1720 */
1721#if HAVE_MORECORE
1722 #ifdef MORECORE
1723 #define CALL_MORECORE(S) MORECORE(S)
1724 #else /* MORECORE */
1725 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1726 #endif /* MORECORE */
1727#else /* HAVE_MORECORE */
1728 #define CALL_MORECORE(S) MFAIL
1729#endif /* HAVE_MORECORE */
1730
1731/**
1732 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1733 */
1734#if HAVE_MMAP
1735 #define USE_MMAP_BIT (SIZE_T_ONE)
1736
1737 #ifdef MMAP
1738 #define CALL_MMAP(s) MMAP(s)
1739 #else /* MMAP */
1740 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1741 #endif /* MMAP */
1742 #ifdef MUNMAP
1743 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1744 #else /* MUNMAP */
1745 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1746 #endif /* MUNMAP */
1747 #ifdef DIRECT_MMAP
1748 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1749 #else /* DIRECT_MMAP */
1750 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1751 #endif /* DIRECT_MMAP */
1752#else /* HAVE_MMAP */
1753 #define USE_MMAP_BIT (SIZE_T_ZERO)
1754
1755 #define MMAP(s) MFAIL
1756 #define MUNMAP(a, s) (-1)
1757 #define DIRECT_MMAP(s) MFAIL
1758 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1759 #define CALL_MMAP(s) MMAP(s)
1760 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1761#endif /* HAVE_MMAP */
1762
1763/**
1764 * Define CALL_MREMAP
1765 */
1766#if HAVE_MMAP && HAVE_MREMAP
1767 #ifdef MREMAP
1768 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1769 #else /* MREMAP */
1770 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1771 #endif /* MREMAP */
1772#else /* HAVE_MMAP && HAVE_MREMAP */
1773 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1774#endif /* HAVE_MMAP && HAVE_MREMAP */
1775
1776/* mstate bit set if contiguous morecore disabled or failed */
1777#define USE_NONCONTIGUOUS_BIT (4U)
1778
1779/* segment bit set in create_mspace_with_base */
1780#define EXTERN_BIT (8U)
1781
1782
1783/* --------------------------- Lock preliminaries ------------------------ */
1784
1785/*
1786 When locks are defined, there is one global lock, plus
1787 one per-mspace lock.
1788
1789 The global lock ensures that mparams.magic and other unique
1790 mparams values are initialized only once. It also protects
1791 sequences of calls to MORECORE. In many cases sys_alloc requires
1792 two calls, which should not be interleaved with calls by other
1793 threads. This does not protect against direct calls to MORECORE
1794 by other threads not using this lock, so there is still code to
1795 cope as best we can with interference.
1796
1797 Per-mspace locks surround calls to malloc, free, etc.
1798 By default, locks are simple non-reentrant mutexes.
1799
1800 Because lock-protected regions generally have bounded times, it is
1801 OK to use the supplied simple spinlocks. Spinlocks are likely to
1802 improve performance for lightly contended applications, but worsen
1803 performance under heavy contention.
1804
1805 If USE_LOCKS is > 1, the definitions of lock routines here are
1806 bypassed, in which case you will need to define the type MLOCK_T,
1807 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1808 and TRY_LOCK. You must also declare a
1809 static MLOCK_T malloc_global_mutex = { initialization values };.
1810
1811*/
1812
1813#if !USE_LOCKS
1814#define USE_LOCK_BIT (0U)
1815#define INITIAL_LOCK(l) (0)
1816#define DESTROY_LOCK(l) (0)
1817#define ACQUIRE_MALLOC_GLOBAL_LOCK()
1818#define RELEASE_MALLOC_GLOBAL_LOCK()
1819
1820#else
1821#if USE_LOCKS > 1
1822/* ----------------------- User-defined locks ------------------------ */
1823/* Define your own lock implementation here */
1824/* #define INITIAL_LOCK(lk) ... */
1825/* #define DESTROY_LOCK(lk) ... */
1826/* #define ACQUIRE_LOCK(lk) ... */
1827/* #define RELEASE_LOCK(lk) ... */
1828/* #define TRY_LOCK(lk) ... */
1829/* static MLOCK_T malloc_global_mutex = ... */
1830
1831#elif USE_SPIN_LOCKS
1832
1833/* First, define CAS_LOCK and CLEAR_LOCK on ints */
1834/* Note CAS_LOCK defined to return 0 on success */
1835
1836#if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1837#define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1838#define CLEAR_LOCK(sl) __sync_lock_release(sl)
1839
1840#elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1841/* Custom spin locks for older gcc on x86 */
1842static FORCEINLINE int x86_cas_lock(int *sl) {
1843 int ret;
1844 int val = 1;
1845 int cmp = 0;
1846 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1847 : "=a" (ret)
1848 : "r" (val), "m" (*(sl)), "0"(cmp)
1849 : "memory", "cc");
1850 return ret;
1851}
1852
1853static FORCEINLINE void x86_clear_lock(int* sl) {
1854 assert(*sl != 0);
1855 int prev = 0;
1856 int ret;
1857 __asm__ __volatile__ ("lock; xchgl %0, %1"
1858 : "=r" (ret)
1859 : "m" (*(sl)), "0"(prev)
1860 : "memory");
1861}
1862
1863#define CAS_LOCK(sl) x86_cas_lock(sl)
1864#define CLEAR_LOCK(sl) x86_clear_lock(sl)
1865
1866#else /* Win32 MSC */
1867#define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)
1868#define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1869
1870#endif /* ... gcc spin locks ... */
1871
1872/* How to yield for a spin lock */
1873#define SPINS_PER_YIELD 63
1874#if defined(_MSC_VER)
1875#define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1876#define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1877#elif defined (__SVR4) && defined (__sun) /* solaris */
1878#define SPIN_LOCK_YIELD thr_yield();
1879#elif !defined(LACKS_SCHED_H)
1880#define SPIN_LOCK_YIELD sched_yield();
1881#else
1882#define SPIN_LOCK_YIELD
1883#endif /* ... yield ... */
1884
1885#if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1886/* Plain spin locks use a single word (embedded in each malloc_state) */
1887static int spin_acquire_lock(int *sl) {
1888 int spins = 0;
1889 while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1890 if ((++spins & SPINS_PER_YIELD) == 0) {
1891 SPIN_LOCK_YIELD;
1892 }
1893 }
1894 return 0;
1895}
1896
1897#define MLOCK_T int
1898#define TRY_LOCK(sl) !CAS_LOCK(sl)
1899#define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1900#define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1901#define INITIAL_LOCK(sl) (*sl = 0)
1902#define DESTROY_LOCK(sl) (0)
1903static MLOCK_T malloc_global_mutex = 0;
1904
1905#else /* USE_RECURSIVE_LOCKS */
1906/* types for lock owners */
1907#ifdef WIN32
1908#define THREAD_ID_T DWORD
1909#define CURRENT_THREAD GetCurrentThreadId()
1910#define EQ_OWNER(X,Y) ((X) == (Y))
1911#else
1912/*
1913 Note: the following assume that pthread_t is a type that can be
1914 initialized to (casted) zero. If this is not the case, you will need to
1915 somehow redefine these or not use spin locks.
1916*/
1917#define THREAD_ID_T pthread_t
1918#define CURRENT_THREAD pthread_self()
1919#define EQ_OWNER(X,Y) pthread_equal(X, Y)
1920#endif
1921
1922struct malloc_recursive_lock {
1923 int sl;
1924 unsigned int c;
1925 THREAD_ID_T threadid;
1926};
1927
1928#define MLOCK_T struct malloc_recursive_lock
1929static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1930
1931static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1932 assert(lk->sl != 0);
1933 if (--lk->c == 0) {
1934 CLEAR_LOCK(&lk->sl);
1935 }
1936}
1937
1938static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1939 THREAD_ID_T mythreadid = CURRENT_THREAD;
1940 int spins = 0;
1941 for (;;) {
1942 if (*((volatile int *)(&lk->sl)) == 0) {
1943 if (!CAS_LOCK(&lk->sl)) {
1944 lk->threadid = mythreadid;
1945 lk->c = 1;
1946 return 0;
1947 }
1948 }
1949 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1950 ++lk->c;
1951 return 0;
1952 }
1953 if ((++spins & SPINS_PER_YIELD) == 0) {
1954 SPIN_LOCK_YIELD;
1955 }
1956 }
1957}
1958
1959static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1960 THREAD_ID_T mythreadid = CURRENT_THREAD;
1961 if (*((volatile int *)(&lk->sl)) == 0) {
1962 if (!CAS_LOCK(&lk->sl)) {
1963 lk->threadid = mythreadid;
1964 lk->c = 1;
1965 return 1;
1966 }
1967 }
1968 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1969 ++lk->c;
1970 return 1;
1971 }
1972 return 0;
1973}
1974
1975#define RELEASE_LOCK(lk) recursive_release_lock(lk)
1976#define TRY_LOCK(lk) recursive_try_lock(lk)
1977#define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1978#define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1979#define DESTROY_LOCK(lk) (0)
1980#endif /* USE_RECURSIVE_LOCKS */
1981
1982#elif defined(WIN32) /* Win32 critical sections */
1983#define MLOCK_T CRITICAL_SECTION
1984#define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1985#define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1986#define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1987#define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1988#define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1989#define NEED_GLOBAL_LOCK_INIT
1990
1991static MLOCK_T malloc_global_mutex;
1992static volatile LONG malloc_global_mutex_status;
1993
1994/* Use spin loop to initialize global lock */
1995static void init_malloc_global_mutex() {
1996 for (;;) {
1997 long stat = malloc_global_mutex_status;
1998 if (stat > 0)
1999 return;
2000 /* transition to < 0 while initializing, then to > 0 */
2001 if (stat == 0 &&
2002 interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
2003 InitializeCriticalSection(&malloc_global_mutex);
2004 interlockedexchange(&malloc_global_mutex_status, (LONG)1);
2005 return;
2006 }
2007 SleepEx(0, FALSE);
2008 }
2009}
2010
2011#else /* pthreads-based locks */
2012#define MLOCK_T pthread_mutex_t
2013#define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2014#define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2015#define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2016#define INITIAL_LOCK(lk) pthread_init_lock(lk)
2017#define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2018
2019#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2020/* Cope with old-style linux recursive lock initialization by adding */
2021/* skipped internal declaration from pthread.h */
2022extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2023 int __kind));
2024#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2025#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2026#endif /* USE_RECURSIVE_LOCKS ... */
2027
2028static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2029
2030static int pthread_init_lock (MLOCK_T *lk) {
2031 pthread_mutexattr_t attr;
2032 if (pthread_mutexattr_init(&attr)) return 1;
2033#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2034 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2035#endif
2036 if (pthread_mutex_init(lk, &attr)) return 1;
2037 if (pthread_mutexattr_destroy(&attr)) return 1;
2038 return 0;
2039}
2040
2041#endif /* ... lock types ... */
2042
2043/* Common code for all lock types */
2044#define USE_LOCK_BIT (2U)
2045
2046#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2047#define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2048#endif
2049
2050#ifndef RELEASE_MALLOC_GLOBAL_LOCK
2051#define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2052#endif
2053
2054#endif /* USE_LOCKS */
2055
2056/* ----------------------- Chunk representations ------------------------ */
2057
2058/*
2059 (The following includes lightly edited explanations by Colin Plumb.)
2060
2061 The malloc_chunk declaration below is misleading (but accurate and
2062 necessary). It declares a "view" into memory allowing access to
2063 necessary fields at known offsets from a given base.
2064
2065 Chunks of memory are maintained using a `boundary tag' method as
2066 originally described by Knuth. (See the paper by Paul Wilson
2067 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2068 techniques.) Sizes of free chunks are stored both in the front of
2069 each chunk and at the end. This makes consolidating fragmented
2070 chunks into bigger chunks fast. The head fields also hold bits
2071 representing whether chunks are free or in use.
2072
2073 Here are some pictures to make it clearer. They are "exploded" to
2074 show that the state of a chunk can be thought of as extending from
2075 the high 31 bits of the head field of its header through the
2076 prev_foot and PINUSE_BIT bit of the following chunk header.
2077
2078 A chunk that's in use looks like:
2079
2080 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2081 | Size of previous chunk (if P = 0) |
2082 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2083 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2084 | Size of this chunk 1| +-+
2085 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2086 | |
2087 +- -+
2088 | |
2089 +- -+
2090 | :
2091 +- size - sizeof(size_t) available payload bytes -+
2092 : |
2093 chunk-> +- -+
2094 | |
2095 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2096 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2097 | Size of next chunk (may or may not be in use) | +-+
2098 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2099
2100 And if it's free, it looks like this:
2101
2102 chunk-> +- -+
2103 | User payload (must be in use, or we would have merged!) |
2104 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2105 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2106 | Size of this chunk 0| +-+
2107 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2108 | Next pointer |
2109 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2110 | Prev pointer |
2111 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2112 | :
2113 +- size - sizeof(struct chunk) unused bytes -+
2114 : |
2115 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116 | Size of this chunk |
2117 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2118 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2119 | Size of next chunk (must be in use, or we would have merged)| +-+
2120 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2121 | :
2122 +- User payload -+
2123 : |
2124 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2125 |0|
2126 +-+
2127 Note that since we always merge adjacent free chunks, the chunks
2128 adjacent to a free chunk must be in use.
2129
2130 Given a pointer to a chunk (which can be derived trivially from the
2131 payload pointer) we can, in O(1) time, find out whether the adjacent
2132 chunks are free, and if so, unlink them from the lists that they
2133 are on and merge them with the current chunk.
2134
2135 Chunks always begin on even word boundaries, so the mem portion
2136 (which is returned to the user) is also on an even word boundary, and
2137 thus at least double-word aligned.
2138
2139 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2140 chunk size (which is always a multiple of two words), is an in-use
2141 bit for the *previous* chunk. If that bit is *clear*, then the
2142 word before the current chunk size contains the previous chunk
2143 size, and can be used to find the front of the previous chunk.
2144 The very first chunk allocated always has this bit set, preventing
2145 access to non-existent (or non-owned) memory. If pinuse is set for
2146 any given chunk, then you CANNOT determine the size of the
2147 previous chunk, and might even get a memory addressing fault when
2148 trying to do so.
2149
2150 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2151 the chunk size redundantly records whether the current chunk is
2152 inuse (unless the chunk is mmapped). This redundancy enables usage
2153 checks within free and realloc, and reduces indirection when freeing
2154 and consolidating chunks.
2155
2156 Each freshly allocated chunk must have both cinuse and pinuse set.
2157 That is, each allocated chunk borders either a previously allocated
2158 and still in-use chunk, or the base of its memory arena. This is
2159 ensured by making all allocations from the `lowest' part of any
2160 found chunk. Further, no free chunk physically borders another one,
2161 so each free chunk is known to be preceded and followed by either
2162 inuse chunks or the ends of memory.
2163
2164 Note that the `foot' of the current chunk is actually represented
2165 as the prev_foot of the NEXT chunk. This makes it easier to
2166 deal with alignments etc but can be very confusing when trying
2167 to extend or adapt this code.
2168
2169 The exceptions to all this are
2170
2171 1. The special chunk `top' is the top-most available chunk (i.e.,
2172 the one bordering the end of available memory). It is treated
2173 specially. Top is never included in any bin, is used only if
2174 no other chunk is available, and is released back to the
2175 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2176 the top chunk is treated as larger (and thus less well
2177 fitting) than any other available chunk. The top chunk
2178 doesn't update its trailing size field since there is no next
2179 contiguous chunk that would have to index off it. However,
2180 space is still allocated for it (TOP_FOOT_SIZE) to enable
2181 separation or merging when space is extended.
2182
2183 2. Chunks allocated via mmap have both cinuse and pinuse bits
2184 cleared in their head fields. Because they are allocated
2185 one-by-one, each must carry its own prev_foot field, which is
2186 also used to hold the offset this chunk has within its mmapped
2187 region, which is needed to preserve alignment. Each mmapped
2188 chunk is trailed by the first two fields of a fake next-chunk
2189 for sake of usage checks.
2190
2191*/
2192
2193struct malloc_chunk {
2194 size_t prev_foot; /* Size of previous chunk (if free). */
2195 size_t head; /* Size and inuse bits. */
2196 struct malloc_chunk* fd; /* double links -- used only if free. */
2197 struct malloc_chunk* bk;
2198};
2199
2200typedef struct malloc_chunk mchunk;
2201typedef struct malloc_chunk* mchunkptr;
2202typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2203typedef unsigned int bindex_t; /* Described below */
2204typedef unsigned int binmap_t; /* Described below */
2205typedef unsigned int flag_t; /* The type of various bit flag sets */
2206
2207/* ------------------- Chunks sizes and alignments ----------------------- */
2208
2209#define MCHUNK_SIZE (sizeof(mchunk))
2210
2211#if FOOTERS
2212#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2213#else /* FOOTERS */
2214#define CHUNK_OVERHEAD (SIZE_T_SIZE)
2215#endif /* FOOTERS */
2216
2217/* MMapped chunks need a second word of overhead ... */
2218#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2219/* ... and additional padding for fake next-chunk at foot */
2220#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2221
2222/* The smallest size we can malloc is an aligned minimal chunk */
2223#define MIN_CHUNK_SIZE\
2224 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2225
2226/* conversion from malloc headers to user pointers, and back */
2227#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2228#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2229/* chunk associated with aligned address A */
2230#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2231
2232/* Bounds on request (not chunk) sizes. */
2233#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2234#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2235
2236/* pad request bytes into a usable size */
2237#define pad_request(req) \
2238 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2239
2240/* pad request, checking for minimum (but not maximum) */
2241#define request2size(req) \
2242 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
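
/*
  Worked example, assuming 8-byte size_t, 16-byte MALLOC_ALIGNMENT and
  FOOTERS disabled, so CHUNK_OVERHEAD is 8 and MIN_CHUNK_SIZE is 32:
  request2size(10) is MIN_CHUNK_SIZE (32) because 10 < MIN_REQUEST (23),
  while request2size(100) is (100 + 8 + 15) & ~15 == 112.
*/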
2243
2244
2245/* ------------------ Operations on head and foot fields ----------------- */
2246
2247/*
2248 The head field of a chunk is or'ed with PINUSE_BIT when previous
2249 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2250 use, unless mmapped, in which case both bits are cleared.
2251
2252 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2253*/
2254
2255#define PINUSE_BIT (SIZE_T_ONE)
2256#define CINUSE_BIT (SIZE_T_TWO)
2257#define FLAG4_BIT (SIZE_T_FOUR)
2258#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2259#define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2260
2261/* Head value for fenceposts */
2262#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2263
2264/* extraction of fields from head words */
2265#define cinuse(p) ((p)->head & CINUSE_BIT)
2266#define pinuse(p) ((p)->head & PINUSE_BIT)
2267#define flag4inuse(p) ((p)->head & FLAG4_BIT)
2268#define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2269#define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2270
2271#define chunksize(p) ((p)->head & ~(FLAG_BITS))
2272
2273#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2274#define set_flag4(p) ((p)->head |= FLAG4_BIT)
2275#define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
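
/*
  Worked example of the head encoding: an in-use 48-byte chunk whose
  predecessor is also in use has head == 48 | CINUSE_BIT | PINUSE_BIT
  == 0x33, so chunksize(p) == 48, is_inuse(p) is true and is_mmapped(p)
  is false. A chunk with both inuse bits clear is reported as mmapped.
*/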
2276
2277/* Treat space at ptr +/- offset as a chunk */
2278#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2279#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2280
2281/* Ptr to next or previous physical malloc_chunk. */
2282#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2283#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2284
2285/* extract next chunk's pinuse bit */
2286#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2287
2288/* Get/set size at footer */
2289#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2290#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2291
2292/* Set size, pinuse bit, and foot */
2293#define set_size_and_pinuse_of_free_chunk(p, s)\
2294 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2295
2296/* Set size, pinuse bit, foot, and clear next pinuse */
2297#define set_free_with_pinuse(p, s, n)\
2298 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2299
2300/* Get the internal overhead associated with chunk p */
2301#define overhead_for(p)\
2302 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2303
2304/* Return true if malloced space is not necessarily cleared */
2305#if MMAP_CLEARS
2306#define calloc_must_clear(p) (!is_mmapped(p))
2307#else /* MMAP_CLEARS */
2308#define calloc_must_clear(p) (1)
2309#endif /* MMAP_CLEARS */
2310
2311/* ---------------------- Overlaid data structures ----------------------- */
2312
2313/*
2314 When chunks are not in use, they are treated as nodes of either
2315 lists or trees.
2316
2317 "Small" chunks are stored in circular doubly-linked lists, and look
2318 like this:
2319
2320 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2321 | Size of previous chunk |
2322 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2323 `head:' | Size of chunk, in bytes |P|
2324 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2325 | Forward pointer to next chunk in list |
2326 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2327 | Back pointer to previous chunk in list |
2328 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2329 | Unused space (may be 0 bytes long) .
2330 . .
2331 . |
2332nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2333 `foot:' | Size of chunk, in bytes |
2334 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2335
2336 Larger chunks are kept in a form of bitwise digital trees (aka
2337 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2338 free chunks greater than 256 bytes, their size doesn't impose any
2339 constraints on user chunk sizes. Each node looks like:
2340
2341 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2342 | Size of previous chunk |
2343 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2344 `head:' | Size of chunk, in bytes |P|
2345 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2346 | Forward pointer to next chunk of same size |
2347 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2348 | Back pointer to previous chunk of same size |
2349 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2350 | Pointer to left child (child[0]) |
2351 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352 | Pointer to right child (child[1]) |
2353 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354 | Pointer to parent |
2355 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2356 | bin index of this chunk |
2357 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2358 | Unused space .
2359 . |
2360nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2361 `foot:' | Size of chunk, in bytes |
2362 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2363
2364 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2365 of the same size are arranged in a circularly-linked list, with only
2366 the oldest chunk (the next to be used, in our FIFO ordering)
2367 actually in the tree. (Tree members are distinguished by a non-null
2368 parent pointer.) If a chunk with the same size as an existing node
2369 is inserted, it is linked off the existing node using pointers that
2370 work in the same way as fd/bk pointers of small chunks.
2371
2372 Each tree contains a power of 2 sized range of chunk sizes (the
2373 smallest is 0x100 <= x < 0x180), which is divided in half at each
2374 tree level, with the chunks in the smaller half of the range (0x100
2375 <= x < 0x140 for the top node) in the left subtree and the larger
2376 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2377 done by inspecting individual bits.
2378
2379 Using these rules, each node's left subtree contains all smaller
2380 sizes than its right subtree. However, the node at the root of each
2381 subtree has no particular ordering relationship to either. (The
2382 dividing line between the subtree sizes is based on trie relation.)
2383 If we remove the last chunk of a given size from the interior of the
2384 tree, we need to replace it with a leaf node. The tree ordering
2385 rules permit a node to be replaced by any leaf below it.
2386
2387 The smallest chunk in a tree (a common operation in a best-fit
2388 allocator) can be found by walking a path to the leftmost leaf in
2389 the tree. Unlike a usual binary tree, where we follow left child
2390 pointers until we reach a null, here we follow the right child
2391 pointer any time the left one is null, until we reach a leaf with
2392 both child pointers null. The smallest chunk in the tree will be
2393 somewhere along that path.
2394
2395 The worst case number of steps to add, find, or remove a node is
2396 bounded by the number of bits differentiating chunks within
2397 bins. Under current bin calculations, this ranges from 6 up to 21
2398 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2399 is of course much better.
2400*/
2401
2402struct malloc_tree_chunk {
2403 /* The first four fields must be compatible with malloc_chunk */
2404 size_t prev_foot;
2405 size_t head;
2406 struct malloc_tree_chunk* fd;
2407 struct malloc_tree_chunk* bk;
2408
2409 struct malloc_tree_chunk* child[2];
2410 struct malloc_tree_chunk* parent;
2411 bindex_t index;
2412};
2413
2414typedef struct malloc_tree_chunk tchunk;
2415typedef struct malloc_tree_chunk* tchunkptr;
2416typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2417
2418/* A little helper macro for trees */
2419#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
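
/*
  A sketch of the leftmost walk described in the comment above: starting
  from the root t of a non-empty treebin, the smallest chunk is found
  along the path

    tchunkptr best = t;
    while ((t = leftmost_child(t)) != 0) {
      if (chunksize(t) < chunksize(best))
        best = t;
    }
*/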
2420
2421/* ----------------------------- Segments -------------------------------- */
2422
2423/*
2424 Each malloc space may include non-contiguous segments, held in a
2425 list headed by an embedded malloc_segment record representing the
2426 top-most space. Segments also include flags holding properties of
2427 the space. Large chunks that are directly allocated by mmap are not
2428 included in this list. They are instead independently created and
2429 destroyed without otherwise keeping track of them.
2430
2431 Segment management mainly comes into play for spaces allocated by
2432 MMAP. Any call to MMAP might or might not return memory that is
2433 adjacent to an existing segment. MORECORE normally contiguously
2434 extends the current space, so this space is almost always adjacent,
2435 which is simpler and faster to deal with. (This is why MORECORE is
2436 used preferentially to MMAP when both are available -- see
2437 sys_alloc.) When allocating using MMAP, we don't use any of the
2438 hinting mechanisms (inconsistently) supported in various
2439 implementations of unix mmap, or distinguish reserving from
2440 committing memory. Instead, we just ask for space, and exploit
2441 contiguity when we get it. It is probably possible to do
2442 better than this on some systems, but no general scheme seems
2443 to be significantly better.
2444
2445 Management entails a simpler variant of the consolidation scheme
2446 used for chunks to reduce fragmentation -- new adjacent memory is
2447 normally prepended or appended to an existing segment. However,
2448 there are limitations compared to chunk consolidation that mostly
2449 reflect the fact that segment processing is relatively infrequent
2450 (occurring only when getting memory from system) and that we
2451 don't expect to have huge numbers of segments:
2452
2453 * Segments are not indexed, so traversal requires linear scans. (It
2454 would be possible to index these, but is not worth the extra
2455 overhead and complexity for most programs on most platforms.)
2456 * New segments are only appended to old ones when holding top-most
2457 memory; if they cannot be prepended to others, they are held in
2458 different segments.
2459
2460 Except for the top-most segment of an mstate, each segment record
2461 is kept at the tail of its segment. Segments are added by pushing
2462 segment records onto the list headed by &mstate.seg for the
2463 containing mstate.
2464
2465 Segment flags control allocation/merge/deallocation policies:
2466 * If EXTERN_BIT set, then we did not allocate this segment,
2467 and so should not try to deallocate or merge with others.
2468 (This currently holds only for the initial segment passed
2469 into create_mspace_with_base.)
2470 * If USE_MMAP_BIT set, the segment may be merged with
2471 other surrounding mmapped segments and trimmed/de-allocated
2472 using munmap.
2473 * If neither bit is set, then the segment was obtained using
2474 MORECORE so can be merged with surrounding MORECORE'd segments
2475 and deallocated/trimmed using MORECORE with negative arguments.
2476*/
2477
2478struct malloc_segment {
2479 char* base; /* base address */
2480 size_t size; /* allocated size */
2481 struct malloc_segment* next; /* ptr to next segment */
2482 flag_t sflags; /* mmap and extern flag */
2483};
2484
2485#define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2486#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2487
2488typedef struct malloc_segment msegment;
2489typedef struct malloc_segment* msegmentptr;
2490
2491/* ---------------------------- malloc_state ----------------------------- */
2492
2493/*
2494 A malloc_state holds all of the bookkeeping for a space.
2495 The main fields are:
2496
2497 Top
2498 The topmost chunk of the currently active segment. Its size is
2499 cached in topsize. The actual size of topmost space is
2500 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2501 fenceposts and segment records if necessary when getting more
2502 space from the system. The size at which to autotrim top is
2503 cached from mparams in trim_check, except that it is disabled if
2504 an autotrim fails.
2505
2506 Designated victim (dv)
2507 This is the preferred chunk for servicing small requests that
2508 don't have exact fits. It is normally the chunk split off most
2509 recently to service another small request. Its size is cached in
2510 dvsize. The link fields of this chunk are not maintained since it
2511 is not kept in a bin.
2512
2513 SmallBins
2514 An array of bin headers for free chunks. These bins hold chunks
2515 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2516 chunks of all the same size, spaced 8 bytes apart. To simplify
2517 use in double-linked lists, each bin header acts as a malloc_chunk
2518 pointing to the real first node, if it exists (else pointing to
2519 itself). This avoids special-casing for headers. But to avoid
2520 waste, we allocate only the fd/bk pointers of bins, and then use
2521 repositioning tricks to treat these as the fields of a chunk.
2522
2523 TreeBins
2524 Treebins are pointers to the roots of trees holding a range of
2525 sizes. There are 2 equally spaced treebins for each power of two
2526 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2527 larger.
2528
2529 Bin maps
2530 There is one bit map for small bins ("smallmap") and one for
2531 treebins ("treemap"). Each bin sets its bit when non-empty, and
2532 clears the bit when empty. Bit operations are then used to avoid
2533 bin-by-bin searching -- nearly all "search" is done without ever
2534 looking at bins that won't be selected. The bit maps
2535 conservatively use 32 bits per map word, even on 64-bit systems.
2536 For a good description of some of the bit-based techniques used
2537 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2538 supplement at http://hackersdelight.org/). Many of these are
2539 intended to reduce the branchiness of paths through malloc etc, as
2540 well as to reduce the number of memory locations read or written.
2541
2542 Segments
2543 A list of segments headed by an embedded malloc_segment record
2544 representing the initial space.
2545
2546 Address check support
2547 The least_addr field is the least address ever obtained from
2548 MORECORE or MMAP. Attempted frees and reallocs of any address less
2549 than this are trapped (unless INSECURE is defined).
2550
2551 Magic tag
2552 A cross-check field that should always hold the same value as mparams.magic.
2553
2554 Max allowed footprint
2555 The maximum allowed bytes to allocate from system (zero means no limit)
2556
2557 Flags
2558 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2559
2560 Statistics
2561 Each space keeps track of current and maximum system memory
2562 obtained via MORECORE or MMAP.
2563
2564 Trim support
2565 Fields holding the amount of unused topmost memory that should trigger
2566 trimming, and a counter to force periodic scanning to release unused
2567 non-topmost segments.
2568
2569 Locking
2570 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2571 around every public call using this mspace.
2572
2573 Extension support
2574 A void* pointer and a size_t field that can be used to help implement
2575 extensions to this malloc.
2576*/
2577
2578/* Bin types, widths and sizes */
2579#define NSMALLBINS (32U)
2580#define NTREEBINS (32U)
2581#define SMALLBIN_SHIFT (3U)
2582#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2583#define TREEBIN_SHIFT (8U)
2584#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2585#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2586#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2587
2588struct malloc_state {
2589 binmap_t smallmap;
2590 binmap_t treemap;
2591 size_t dvsize;
2592 size_t topsize;
2593 char* least_addr;
2594 mchunkptr dv;
2595 mchunkptr top;
2596 size_t trim_check;
2597 size_t release_checks;
2598 size_t magic;
2599 mchunkptr smallbins[(NSMALLBINS+1)*2];
2600 tbinptr treebins[NTREEBINS];
2601 size_t footprint;
2602 size_t max_footprint;
2603 size_t footprint_limit; /* zero means no limit */
2604 flag_t mflags;
2605#if USE_LOCKS
2606 MLOCK_T mutex; /* locate lock among fields that rarely change */
2607#endif /* USE_LOCKS */
2608 msegment seg;
2609 void* extp; /* Unused but available for extensions */
2610 size_t exts;
2611};
2612
2613typedef struct malloc_state* mstate;
2614
2615/* ------------- Global malloc_state and malloc_params ------------------- */
2616
2617/*
2618 malloc_params holds global properties, including those that can be
2619 dynamically set using mallopt. There is a single instance, mparams,
2620 initialized in init_mparams. Note that the non-zeroness of "magic"
2621 also serves as an initialization flag.
2622*/
2623
2624struct malloc_params {
2625 size_t magic;
2626 size_t page_size;
2627 size_t granularity;
2628 size_t mmap_threshold;
2629 size_t trim_threshold;
2630 flag_t default_mflags;
2631};
2632
2633static struct malloc_params mparams;
2634
2635/* Ensure mparams initialized */
2636#define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2637
2638#if !ONLY_MSPACES
2639
2640/* The global malloc_state used for all non-"mspace" calls */
2641static struct malloc_state _gm_;
2642#define gm (&_gm_)
2643#define is_global(M) ((M) == &_gm_)
2644
2645#endif /* !ONLY_MSPACES */
2646
2647#define is_initialized(M) ((M)->top != 0)
2648
2649/* -------------------------- system alloc setup ------------------------- */
2650
2651/* Operations on mflags */
2652
2653#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2654#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2655#if USE_LOCKS
2656#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2657#else
2658#define disable_lock(M)
2659#endif
2660
2661#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2662#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2663#if HAVE_MMAP
2664#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2665#else
2666#define disable_mmap(M)
2667#endif
2668
2669#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2670#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2671
2672#define set_lock(M,L)\
2673 ((M)->mflags = (L)?\
2674 ((M)->mflags | USE_LOCK_BIT) :\
2675 ((M)->mflags & ~USE_LOCK_BIT))
2676
2677/* page-align a size */
2678#define page_align(S)\
2679 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2680
2681/* granularity-align a size */
2682#define granularity_align(S)\
2683 (((S) + (mparams.granularity - SIZE_T_ONE))\
2684 & ~(mparams.granularity - SIZE_T_ONE))
2685
2686
2687/* For mmap, use granularity alignment on windows, else page-align */
2688#ifdef WIN32
2689#define mmap_align(S) granularity_align(S)
2690#else
2691#define mmap_align(S) page_align(S)
2692#endif
2693
2694/* For sys_alloc, enough padding to ensure a malloc request can be serviced on success */
2695#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2696
2697#define is_page_aligned(S)\
2698 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2699#define is_granularity_aligned(S)\
2700 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
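
/*
  For example, with a 4096-byte page size: page_align(1) and
  page_align(4096) are both 4096, page_align(4097) is 8192, and
  is_page_aligned(0x2000) is nonzero.
*/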
2701
2702/* True if segment S holds address A */
2703#define segment_holds(S, A)\
2704 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2705
2706/* Return segment holding given address */
2707static msegmentptr segment_holding(mstate m, char* addr) {
2708 msegmentptr sp = &m->seg;
2709 for (;;) {
2710 if (addr >= sp->base && addr < sp->base + sp->size)
2711 return sp;
2712 if ((sp = sp->next) == 0)
2713 return 0;
2714 }
2715}
2716
2717/* Return true if segment contains a segment link */
2718static int has_segment_link(mstate m, msegmentptr ss) {
2719 msegmentptr sp = &m->seg;
2720 for (;;) {
2721 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2722 return 1;
2723 if ((sp = sp->next) == 0)
2724 return 0;
2725 }
2726}
2727
2728#ifndef MORECORE_CANNOT_TRIM
2729#define should_trim(M,s) ((s) > (M)->trim_check)
2730#else /* MORECORE_CANNOT_TRIM */
2731#define should_trim(M,s) (0)
2732#endif /* MORECORE_CANNOT_TRIM */
2733
2734/*
2735 TOP_FOOT_SIZE is padding at the end of a segment, including space
2736 that may be needed to place segment records and fenceposts when new
2737 noncontiguous segments are added.
2738*/
2739#define TOP_FOOT_SIZE\
2740 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2741
2742
2743/* ------------------------------- Hooks -------------------------------- */
2744
2745/*
2746 PREACTION should be defined to return 0 on success, and nonzero on
2747 failure. If you are not using locking, you can redefine these to do
2748 anything you like.
2749*/
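/*
  A minimal sketch (hypothetical, not part of this distribution): with
  USE_LOCKS disabled, the hooks could be redefined ahead of this point to
  trace entry and exit of the public routines, for example

    #define PREACTION(M)  (fprintf(stderr, "enter %p\n", (void*)(M)), 0)
    #define POSTACTION(M) fprintf(stderr, "exit  %p\n", (void*)(M))

  PREACTION must still evaluate to zero so that the guarded operation
  proceeds.
*/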
2750
2751#if USE_LOCKS
2752#define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2753#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2754#else /* USE_LOCKS */
2755
2756#ifndef PREACTION
2757#define PREACTION(M) (0)
2758#endif /* PREACTION */
2759
2760#ifndef POSTACTION
2761#define POSTACTION(M)
2762#endif /* POSTACTION */
2763
2764#endif /* USE_LOCKS */
2765
2766/*
2767 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2768 USAGE_ERROR_ACTION is triggered on detected bad frees and
2769 reallocs. The argument p is an address that might have triggered the
2770 fault. It is ignored by the two predefined actions, but might be
2771 useful in custom actions that try to help diagnose errors.
2772*/
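/*
  For instance (an illustrative sketch, not one of the predefined actions),
  a build could log the offending address before aborting:

    #define USAGE_ERROR_ACTION(m, p) do {                               \
      fprintf(stderr, "dlmalloc: bad free/realloc of %p\n", (void*)(p));\
      ABORT;                                                            \
    } while (0)
*/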
2773
2774#if PROCEED_ON_ERROR
2775
2776/* A count of the number of corruption errors causing resets */
2777int malloc_corruption_error_count;
2778
2779/* default corruption action */
2780static void reset_on_error(mstate m);
2781
2782#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2783#define USAGE_ERROR_ACTION(m, p)
2784
2785#else /* PROCEED_ON_ERROR */
2786
2787#ifndef CORRUPTION_ERROR_ACTION
2788#define CORRUPTION_ERROR_ACTION(m) ABORT
2789#endif /* CORRUPTION_ERROR_ACTION */
2790
2791#ifndef USAGE_ERROR_ACTION
2792#define USAGE_ERROR_ACTION(m,p) ABORT
2793#endif /* USAGE_ERROR_ACTION */
2794
2795#endif /* PROCEED_ON_ERROR */
2796
2797
2798/* -------------------------- Debugging setup ---------------------------- */
2799
2800#if ! DEBUG
2801
2802#define check_free_chunk(M,P)
2803#define check_inuse_chunk(M,P)
2804#define check_malloced_chunk(M,P,N)
2805#define check_mmapped_chunk(M,P)
2806#define check_malloc_state(M)
2807#define check_top_chunk(M,P)
2808
2809#else /* DEBUG */
2810#define check_free_chunk(M,P) do_check_free_chunk(M,P)
2811#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2812#define check_top_chunk(M,P) do_check_top_chunk(M,P)
2813#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2814#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2815#define check_malloc_state(M) do_check_malloc_state(M)
2816
2817static void do_check_any_chunk(mstate m, mchunkptr p);
2818static void do_check_top_chunk(mstate m, mchunkptr p);
2819static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2820static void do_check_inuse_chunk(mstate m, mchunkptr p);
2821static void do_check_free_chunk(mstate m, mchunkptr p);
2822static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2823static void do_check_tree(mstate m, tchunkptr t);
2824static void do_check_treebin(mstate m, bindex_t i);
2825static void do_check_smallbin(mstate m, bindex_t i);
2826static void do_check_malloc_state(mstate m);
2827static int bin_find(mstate m, mchunkptr x);
2828static size_t traverse_and_check(mstate m);
2829#endif /* DEBUG */
2830
2831/* ---------------------------- Indexing Bins ---------------------------- */
2832
2833#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2834#define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2835#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2836#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
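/*
  Worked example (assuming the default SMALLBIN_SHIFT of 3 and NSMALLBINS
  of 32): a chunk of size 72 has small_index(72) == 9 and
  small_index2size(9) == 72, and is_small(72) holds since 9 < 32, so
  chunk sizes below 256 bytes are handled by the smallbins.
*/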
2837
2838/* addressing by index. See above about smallbin repositioning */
2839#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2840#define treebin_at(M,i) (&((M)->treebins[i]))
2841
2842/* assign tree index for size S to variable I. Use x86 asm if possible */
2843#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2844#define compute_tree_index(S, I)\
2845{\
2846 unsigned int X = S >> TREEBIN_SHIFT;\
2847 if (X == 0)\
2848 I = 0;\
2849 else if (X > 0xFFFF)\
2850 I = NTREEBINS-1;\
2851 else {\
2852 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2853 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2854 }\
2855}
2856
2857#elif defined (__INTEL_COMPILER)
2858#define compute_tree_index(S, I)\
2859{\
2860 size_t X = S >> TREEBIN_SHIFT;\
2861 if (X == 0)\
2862 I = 0;\
2863 else if (X > 0xFFFF)\
2864 I = NTREEBINS-1;\
2865 else {\
2866 unsigned int K = _bit_scan_reverse (X); \
2867 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2868 }\
2869}
2870
2871#elif defined(_MSC_VER) && _MSC_VER>=1300
2872#define compute_tree_index(S, I)\
2873{\
2874 size_t X = S >> TREEBIN_SHIFT;\
2875 if (X == 0)\
2876 I = 0;\
2877 else if (X > 0xFFFF)\
2878 I = NTREEBINS-1;\
2879 else {\
2880 unsigned int K;\
2881 _BitScanReverse((DWORD *) &K, (DWORD) X);\
2882 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2883 }\
2884}
2885
2886#else /* GNUC */
2887#define compute_tree_index(S, I)\
2888{\
2889 size_t X = S >> TREEBIN_SHIFT;\
2890 if (X == 0)\
2891 I = 0;\
2892 else if (X > 0xFFFF)\
2893 I = NTREEBINS-1;\
2894 else {\
2895 unsigned int Y = (unsigned int)X;\
2896 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2897 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2898 N += K;\
2899 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2900 K = 14 - N + ((Y <<= K) >> 15);\
2901 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2902 }\
2903}
2904#endif /* GNUC */
2905
2906/* Bit representing maximum resolved size in a treebin at i */
2907#define bit_for_tree_index(i) \
2908 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2909
2910/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2911#define leftshift_for_tree_index(i) \
2912 ((i == NTREEBINS-1)? 0 : \
2913 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2914
2915/* The size of the smallest chunk held in bin with index i */
2916#define minsize_for_tree_index(i) \
2917 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2918 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
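/*
  Worked example (assuming the default TREEBIN_SHIFT of 8): for a chunk of
  size S == 1536, X == S >> 8 == 6, the highest set bit of X is bit K == 2,
  and compute_tree_index yields I == (K << 1) + ((S >> 9) & 1) == 5.
  Consistently, minsize_for_tree_index(5) == 1536 and
  minsize_for_tree_index(6) == 2048, so treebin 5 holds sizes in
  [1536, 2048).
*/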
2919
2920
2921/* ------------------------ Operations on bin maps ----------------------- */
2922
2923/* bit corresponding to given index */
2924#define idx2bit(i) ((binmap_t)(1) << (i))
2925
2926/* Mark/Clear bits with given index */
2927#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2928#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2929#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2930
2931#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2932#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2933#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2934
2935/* isolate the least set bit of a bitmap */
2936#define least_bit(x) ((x) & -(x))
2937
2938/* mask with all bits to left of least bit of x on */
2939#define left_bits(x) ((x<<1) | -(x<<1))
2940
2941/* mask with all bits to left of or equal to least bit of x on */
2942#define same_or_left_bits(x) ((x) | -(x))
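/*
  Example (illustrative): for x == 0x28 (binary 101000), least_bit(x) ==
  0x08, left_bits(x) is the mask of all bits strictly above bit 3
  (0xFFFFFFF0 for a 32-bit binmap_t), and same_or_left_bits(x) ==
  0xFFFFFFF8. These are used below to locate the next usable bin at or
  above a given index.
*/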
2943
2944/* index corresponding to given bit. Use x86 asm if possible */
2945
2946#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2947#define compute_bit2idx(X, I)\
2948{\
2949 unsigned int J;\
2950 J = __builtin_ctz(X); \
2951 I = (bindex_t)J;\
2952}
2953
2954#elif defined (__INTEL_COMPILER)
2955#define compute_bit2idx(X, I)\
2956{\
2957 unsigned int J;\
2958 J = _bit_scan_forward (X); \
2959 I = (bindex_t)J;\
2960}
2961
2962#elif defined(_MSC_VER) && _MSC_VER>=1300
2963#define compute_bit2idx(X, I)\
2964{\
2965 unsigned int J;\
2966 _BitScanForward((DWORD *) &J, X);\
2967 I = (bindex_t)J;\
2968}
2969
2970#elif USE_BUILTIN_FFS
2971#define compute_bit2idx(X, I) I = ffs(X)-1
2972
2973#else
2974#define compute_bit2idx(X, I)\
2975{\
2976 unsigned int Y = X - 1;\
2977 unsigned int K = Y >> (16-4) & 16;\
2978 unsigned int N = K; Y >>= K;\
2979 N += K = Y >> (8-3) & 8; Y >>= K;\
2980 N += K = Y >> (4-2) & 4; Y >>= K;\
2981 N += K = Y >> (2-1) & 2; Y >>= K;\
2982 N += K = Y >> (1-0) & 1; Y >>= K;\
2983 I = (bindex_t)(N + Y);\
2984}
2985#endif /* GNUC */
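/*
  For example, compute_bit2idx(0x08, I) sets I to 3: the argument is
  expected to be a single-bit mask (as produced by least_bit above), and I
  receives the index of that bit.
*/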
2986
2987
2988/* ----------------------- Runtime Check Support ------------------------- */
2989
2990/*
2991 For security, the main invariant is that malloc/free/etc never
2992 writes to a static address other than malloc_state, unless static
2993 malloc_state itself has been corrupted, which cannot occur via
2994 malloc (because of these checks). In essence this means that we
2995 believe all pointers, sizes, maps etc held in malloc_state, but
2996 check all of those linked or offset from other embedded data
2997 structures. These checks are interspersed with main code in a way
2998 that tends to minimize their run-time cost.
2999
3000 When FOOTERS is defined, in addition to range checking, we also
3001 verify footer fields of inuse chunks, which can be used to guarantee
3002 that the mstate controlling malloc/free is intact. This is a
3003 streamlined version of the approach described by William Robertson
3004 et al in "Run-time Detection of Heap-based Overflows" LISA'03
3005 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
3006 of an inuse chunk holds the xor of its mstate and a random seed,
3007 that is checked upon calls to free() and realloc(). This is
3008 (probabalistically) unguessable from outside the program, but can be
3009 computed by any code successfully malloc'ing any chunk, so does not
3010 itself provide protection against code that has already broken
3011 security through some other means. Unlike Robertson et al, we
3012 always dynamically check addresses of all offset chunks (previous,
3013 next, etc). This turns out to be cheaper than relying on hashes.
3014*/
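/*
  Concretely, in the FOOTERS configuration defined below: when a chunk p of
  size s is marked inuse, the word just past it is set to
  ((size_t)m ^ mparams.magic); a later free() recovers the owning mstate
  via get_mstate_for(p) and rejects the call through USAGE_ERROR_ACTION if
  ok_magic() fails on the result.
*/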
3015
3016#if !INSECURE
3017/* Check if address a is at least as high as any from MORECORE or MMAP */
3018#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3019/* Check if address of next chunk n is higher than base chunk p */
3020#define ok_next(p, n) ((char*)(p) < (char*)(n))
3021/* Check if p has inuse status */
3022#define ok_inuse(p) is_inuse(p)
3023/* Check if p has its pinuse bit on */
3024#define ok_pinuse(p) pinuse(p)
3025
3026#else /* !INSECURE */
3027#define ok_address(M, a) (1)
3028#define ok_next(b, n) (1)
3029#define ok_inuse(p) (1)
3030#define ok_pinuse(p) (1)
3031#endif /* !INSECURE */
3032
3033#if (FOOTERS && !INSECURE)
3034/* Check if (alleged) mstate m has expected magic field */
3035#define ok_magic(M) ((M)->magic == mparams.magic)
3036#else /* (FOOTERS && !INSECURE) */
3037#define ok_magic(M) (1)
3038#endif /* (FOOTERS && !INSECURE) */
3039
3040/* In gcc, use __builtin_expect to minimize impact of checks */
3041#if !INSECURE
3042#if defined(__GNUC__) && __GNUC__ >= 3
3043#define RTCHECK(e) __builtin_expect(e, 1)
3044#else /* GNUC */
3045#define RTCHECK(e) (e)
3046#endif /* GNUC */
3047#else /* !INSECURE */
3048#define RTCHECK(e) (1)
3049#endif /* !INSECURE */
3050
3051/* macros to set up inuse chunks with or without footers */
3052
3053#if !FOOTERS
3054
3055#define mark_inuse_foot(M,p,s)
3056
3057/* Macros for setting head/foot of non-mmapped chunks */
3058
3059/* Set cinuse bit and pinuse bit of next chunk */
3060#define set_inuse(M,p,s)\
3061 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3062 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3063
3064/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3065#define set_inuse_and_pinuse(M,p,s)\
3066 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3067 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3068
3069/* Set size, cinuse and pinuse bit of this chunk */
3070#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3071 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3072
3073#else /* FOOTERS */
3074
3075/* Set foot of inuse chunk to be xor of mstate and seed */
3076#define mark_inuse_foot(M,p,s)\
3077 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3078
3079#define get_mstate_for(p)\
3080 ((mstate)(((mchunkptr)((char*)(p) +\
3081 (chunksize(p))))->prev_foot ^ mparams.magic))
3082
3083#define set_inuse(M,p,s)\
3084 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3085 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3086 mark_inuse_foot(M,p,s))
3087
3088#define set_inuse_and_pinuse(M,p,s)\
3089 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3090 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3091 mark_inuse_foot(M,p,s))
3092
3093#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3094 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3095 mark_inuse_foot(M, p, s))
3096
3097#endif /* !FOOTERS */
3098
3099/* ---------------------------- setting mparams -------------------------- */
3100
3101#if LOCK_AT_FORK
3102static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); }
3103static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3104static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); }
3105#endif /* LOCK_AT_FORK */
3106
3107/* Initialize mparams */
3108static int init_mparams(void) {
3109#ifdef NEED_GLOBAL_LOCK_INIT
3110 if (malloc_global_mutex_status <= 0)
3111 init_malloc_global_mutex();
3112#endif
3113
3114 ACQUIRE_MALLOC_GLOBAL_LOCK();
3115 if (mparams.magic == 0) {
3116 size_t magic;
3117 size_t psize;
3118 size_t gsize;
3119
3120#ifndef WIN32
3121 psize = malloc_getpagesize;
3122 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3123#else /* WIN32 */
3124 {
3125 SYSTEM_INFO system_info;
3126 GetSystemInfo(&system_info);
3127 psize = system_info.dwPageSize;
3128 gsize = ((DEFAULT_GRANULARITY != 0)?
3129 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3130 }
3131#endif /* WIN32 */
3132
3133 /* Sanity-check configuration:
3134 size_t must be unsigned and as wide as pointer type.
3135 ints must be at least 4 bytes.
3136 alignment must be at least 8.
3137 Alignment, min chunk size, and page size must all be powers of 2.
3138 */
3139 if ((sizeof(size_t) != sizeof(char*)) ||
3140 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3141 (sizeof(int) < 4) ||
3142 (MALLOC_ALIGNMENT < (size_t)8U) ||
3143 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3144 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3145 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3146 ((psize & (psize-SIZE_T_ONE)) != 0))
3147 ABORT;
3148 mparams.granularity = gsize;
3149 mparams.page_size = psize;
3150 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3151 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3152#if MORECORE_CONTIGUOUS
3153 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3154#else /* MORECORE_CONTIGUOUS */
3155 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3156#endif /* MORECORE_CONTIGUOUS */
3157
3158#if !ONLY_MSPACES
3159 /* Set up lock for main malloc area */
3160 gm->mflags = mparams.default_mflags;
3161 (void)INITIAL_LOCK(&gm->mutex);
3162#endif
3163#if LOCK_AT_FORK
3164 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3165#endif
3166
3167 {
3168#if USE_DEV_RANDOM
3169 int fd;
3170 unsigned char buf[sizeof(size_t)];
3171 /* Try to use /dev/urandom, else fall back on using time */
3172 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3173 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3174 magic = *((size_t *) buf);
3175 close(fd);
3176 }
3177 else
3178#endif /* USE_DEV_RANDOM */
3179#ifdef WIN32
3180 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3181#elif defined(LACKS_TIME_H)
3182 magic = (size_t)&magic ^ (size_t)0x55555555U;
3183#else
3184 magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3185#endif
3186 magic |= (size_t)8U; /* ensure nonzero */
3187 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3188 /* Until memory-ordering primitives are commonly available, use a volatile write */
3189 (*(volatile size_t *)(&(mparams.magic))) = magic;
3190 }
3191 }
3192
3193 RELEASE_MALLOC_GLOBAL_LOCK();
3194 return 1;
3195}
3196
3197/* support for mallopt */
3198static int change_mparam(int param_number, int value) {
3199 size_t val;
3200 ensure_initialization();
3201 val = (value == -1)? MAX_SIZE_T : (size_t)value;
3202 switch(param_number) {
3203 case M_TRIM_THRESHOLD:
3204 mparams.trim_threshold = val;
3205 return 1;
3206 case M_GRANULARITY:
3207 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3208 mparams.granularity = val;
3209 return 1;
3210 }
3211 else
3212 return 0;
3213 case M_MMAP_THRESHOLD:
3214 mparams.mmap_threshold = val;
3215 return 1;
3216 default:
3217 return 0;
3218 }
3219}
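/*
  Usage sketch (illustrative; the public entry point is named dlmallopt or
  mallopt depending on the configured prefix):

    dlmallopt(M_TRIM_THRESHOLD, 256*1024);   raise the trim threshold
    dlmallopt(M_GRANULARITY,    128*1024);   must be a power of two no
                                             smaller than the page size

  A value of -1 maps to MAX_SIZE_T, effectively disabling the
  corresponding behavior.
*/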
3220
3221#if DEBUG
3222/* ------------------------- Debugging Support --------------------------- */
3223
3224/* Check properties of any chunk, whether free, inuse, mmapped etc */
3225static void do_check_any_chunk(mstate m, mchunkptr p) {
3226 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3227 assert(ok_address(m, p));
3228}
3229
3230/* Check properties of top chunk */
3231static void do_check_top_chunk(mstate m, mchunkptr p) {
3232 msegmentptr sp = segment_holding(m, (char*)p);
3233 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3234 assert(sp != 0);
3235 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3236 assert(ok_address(m, p));
3237 assert(sz == m->topsize);
3238 assert(sz > 0);
3239 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3240 assert(pinuse(p));
3241 assert(!pinuse(chunk_plus_offset(p, sz)));
3242}
3243
3244/* Check properties of (inuse) mmapped chunks */
3245static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3246 size_t sz = chunksize(p);
3247 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3248 assert(is_mmapped(p));
3249 assert(use_mmap(m));
3250 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3251 assert(ok_address(m, p));
3252 assert(!is_small(sz));
3253 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3254 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3255 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3256}
3257
3258/* Check properties of inuse chunks */
3259static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3260 do_check_any_chunk(m, p);
3261 assert(is_inuse(p));
3262 assert(next_pinuse(p));
3263 /* If not pinuse and not mmapped, previous chunk has OK offset */
3264 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3265 if (is_mmapped(p))
3266 do_check_mmapped_chunk(m, p);
3267}
3268
3269/* Check properties of free chunks */
3270static void do_check_free_chunk(mstate m, mchunkptr p) {
3271 size_t sz = chunksize(p);
3272 mchunkptr next = chunk_plus_offset(p, sz);
3273 do_check_any_chunk(m, p);
3274 assert(!is_inuse(p));
3275 assert(!next_pinuse(p));
3276 assert (!is_mmapped(p));
3277 if (p != m->dv && p != m->top) {
3278 if (sz >= MIN_CHUNK_SIZE) {
3279 assert((sz & CHUNK_ALIGN_MASK) == 0);
3280 assert(is_aligned(chunk2mem(p)));
3281 assert(next->prev_foot == sz);
3282 assert(pinuse(p));
3283 assert (next == m->top || is_inuse(next));
3284 assert(p->fd->bk == p);
3285 assert(p->bk->fd == p);
3286 }
3287 else /* markers are always of size SIZE_T_SIZE */
3288 assert(sz == SIZE_T_SIZE);
3289 }
3290}
3291
3292/* Check properties of malloced chunks at the point they are malloced */
3293static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3294 if (mem != 0) {
3295 mchunkptr p = mem2chunk(mem);
3296 size_t sz = p->head & ~INUSE_BITS;
3297 do_check_inuse_chunk(m, p);
3298 assert((sz & CHUNK_ALIGN_MASK) == 0);
3299 assert(sz >= MIN_CHUNK_SIZE);
3300 assert(sz >= s);
3301 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3302 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3303 }
3304}
3305
3306/* Check a tree and its subtrees. */
3307static void do_check_tree(mstate m, tchunkptr t) {
3308 tchunkptr head = 0;
3309 tchunkptr u = t;
3310 bindex_t tindex = t->index;
3311 size_t tsize = chunksize(t);
3312 bindex_t idx;
3313 compute_tree_index(tsize, idx);
3314 assert(tindex == idx);
3315 assert(tsize >= MIN_LARGE_SIZE);
3316 assert(tsize >= minsize_for_tree_index(idx));
3317 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3318
3319 do { /* traverse through chain of same-sized nodes */
3320 do_check_any_chunk(m, ((mchunkptr)u));
3321 assert(u->index == tindex);
3322 assert(chunksize(u) == tsize);
3323 assert(!is_inuse(u));
3324 assert(!next_pinuse(u));
3325 assert(u->fd->bk == u);
3326 assert(u->bk->fd == u);
3327 if (u->parent == 0) {
3328 assert(u->child[0] == 0);
3329 assert(u->child[1] == 0);
3330 }
3331 else {
3332 assert(head == 0); /* only one node on chain has parent */
3333 head = u;
3334 assert(u->parent != u);
3335 assert (u->parent->child[0] == u ||
3336 u->parent->child[1] == u ||
3337 *((tbinptr*)(u->parent)) == u);
3338 if (u->child[0] != 0) {
3339 assert(u->child[0]->parent == u);
3340 assert(u->child[0] != u);
3341 do_check_tree(m, u->child[0]);
3342 }
3343 if (u->child[1] != 0) {
3344 assert(u->child[1]->parent == u);
3345 assert(u->child[1] != u);
3346 do_check_tree(m, u->child[1]);
3347 }
3348 if (u->child[0] != 0 && u->child[1] != 0) {
3349 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3350 }
3351 }
3352 u = u->fd;
3353 } while (u != t);
3354 assert(head != 0);
3355}
3356
3357/* Check all the chunks in a treebin. */
3358static void do_check_treebin(mstate m, bindex_t i) {
3359 tbinptr* tb = treebin_at(m, i);
3360 tchunkptr t = *tb;
3361 int empty = (m->treemap & (1U << i)) == 0;
3362 if (t == 0)
3363 assert(empty);
3364 if (!empty)
3365 do_check_tree(m, t);
3366}
3367
3368/* Check all the chunks in a smallbin. */
3369static void do_check_smallbin(mstate m, bindex_t i) {
3370 sbinptr b = smallbin_at(m, i);
3371 mchunkptr p = b->bk;
3372 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3373 if (p == b)
3374 assert(empty);
3375 if (!empty) {
3376 for (; p != b; p = p->bk) {
3377 size_t size = chunksize(p);
3378 mchunkptr q;
3379 /* each chunk claims to be free */
3380 do_check_free_chunk(m, p);
3381 /* chunk belongs in bin */
3382 assert(small_index(size) == i);
3383 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3384 /* chunk is followed by an inuse chunk */
3385 q = next_chunk(p);
3386 if (q->head != FENCEPOST_HEAD)
3387 do_check_inuse_chunk(m, q);
3388 }
3389 }
3390}
3391
3392/* Find x in a bin. Used in other check functions. */
3393static int bin_find(mstate m, mchunkptr x) {
3394 size_t size = chunksize(x);
3395 if (is_small(size)) {
3396 bindex_t sidx = small_index(size);
3397 sbinptr b = smallbin_at(m, sidx);
3398 if (smallmap_is_marked(m, sidx)) {
3399 mchunkptr p = b;
3400 do {
3401 if (p == x)
3402 return 1;
3403 } while ((p = p->fd) != b);
3404 }
3405 }
3406 else {
3407 bindex_t tidx;
3408 compute_tree_index(size, tidx);
3409 if (treemap_is_marked(m, tidx)) {
3410 tchunkptr t = *treebin_at(m, tidx);
3411 size_t sizebits = size << leftshift_for_tree_index(tidx);
3412 while (t != 0 && chunksize(t) != size) {
3413 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3414 sizebits <<= 1;
3415 }
3416 if (t != 0) {
3417 tchunkptr u = t;
3418 do {
3419 if (u == (tchunkptr)x)
3420 return 1;
3421 } while ((u = u->fd) != t);
3422 }
3423 }
3424 }
3425 return 0;
3426}
3427
3428/* Traverse each chunk and check it; return total */
3429static size_t traverse_and_check(mstate m) {
3430 size_t sum = 0;
3431 if (is_initialized(m)) {
3432 msegmentptr s = &m->seg;
3433 sum += m->topsize + TOP_FOOT_SIZE;
3434 while (s != 0) {
3435 mchunkptr q = align_as_chunk(s->base);
3436 mchunkptr lastq = 0;
3437 assert(pinuse(q));
3438 while (segment_holds(s, q) &&
3439 q != m->top && q->head != FENCEPOST_HEAD) {
3440 sum += chunksize(q);
3441 if (is_inuse(q)) {
3442 assert(!bin_find(m, q));
3443 do_check_inuse_chunk(m, q);
3444 }
3445 else {
3446 assert(q == m->dv || bin_find(m, q));
3447 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3448 do_check_free_chunk(m, q);
3449 }
3450 lastq = q;
3451 q = next_chunk(q);
3452 }
3453 s = s->next;
3454 }
3455 }
3456 return sum;
3457}
3458
3459
3460/* Check all properties of malloc_state. */
3461static void do_check_malloc_state(mstate m) {
3462 bindex_t i;
3463 size_t total;
3464 /* check bins */
3465 for (i = 0; i < NSMALLBINS; ++i)
3466 do_check_smallbin(m, i);
3467 for (i = 0; i < NTREEBINS; ++i)
3468 do_check_treebin(m, i);
3469
3470 if (m->dvsize != 0) { /* check dv chunk */
3471 do_check_any_chunk(m, m->dv);
3472 assert(m->dvsize == chunksize(m->dv));
3473 assert(m->dvsize >= MIN_CHUNK_SIZE);
3474 assert(bin_find(m, m->dv) == 0);
3475 }
3476
3477 if (m->top != 0) { /* check top chunk */
3478 do_check_top_chunk(m, m->top);
3479 /*assert(m->topsize == chunksize(m->top)); redundant */
3480 assert(m->topsize > 0);
3481 assert(bin_find(m, m->top) == 0);
3482 }
3483
3484 total = traverse_and_check(m);
3485 assert(total <= m->footprint);
3486 assert(m->footprint <= m->max_footprint);
3487}
3488#endif /* DEBUG */
3489
3490/* ----------------------------- statistics ------------------------------ */
3491
3492#if !NO_MALLINFO
3493static struct mallinfo internal_mallinfo(mstate m) {
3494 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3495 ensure_initialization();
3496 if (!PREACTION(m)) {
3497 check_malloc_state(m);
3498 if (is_initialized(m)) {
3499 size_t nfree = SIZE_T_ONE; /* top always free */
3500 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3501 size_t sum = mfree;
3502 msegmentptr s = &m->seg;
3503 while (s != 0) {
3504 mchunkptr q = align_as_chunk(s->base);
3505 while (segment_holds(s, q) &&
3506 q != m->top && q->head != FENCEPOST_HEAD) {
3507 size_t sz = chunksize(q);
3508 sum += sz;
3509 if (!is_inuse(q)) {
3510 mfree += sz;
3511 ++nfree;
3512 }
3513 q = next_chunk(q);
3514 }
3515 s = s->next;
3516 }
3517
3518 nm.arena = sum;
3519 nm.ordblks = nfree;
3520 nm.hblkhd = m->footprint - sum;
3521 nm.usmblks = m->max_footprint;
3522 nm.uordblks = m->footprint - mfree;
3523 nm.fordblks = mfree;
3524 nm.keepcost = m->topsize;
3525 }
3526
3527 POSTACTION(m);
3528 }
3529 return nm;
3530}
3531#endif /* !NO_MALLINFO */
3532
3533#if !NO_MALLOC_STATS
3534static void internal_malloc_stats(mstate m) {
3535 ensure_initialization();
3536 if (!PREACTION(m)) {
3537 size_t maxfp = 0;
3538 size_t fp = 0;
3539 size_t used = 0;
3540 check_malloc_state(m);
3541 if (is_initialized(m)) {
3542 msegmentptr s = &m->seg;
3543 maxfp = m->max_footprint;
3544 fp = m->footprint;
3545 used = fp - (m->topsize + TOP_FOOT_SIZE);
3546
3547 while (s != 0) {
3548 mchunkptr q = align_as_chunk(s->base);
3549 while (segment_holds(s, q) &&
3550 q != m->top && q->head != FENCEPOST_HEAD) {
3551 if (!is_inuse(q))
3552 used -= chunksize(q);
3553 q = next_chunk(q);
3554 }
3555 s = s->next;
3556 }
3557 }
3558 POSTACTION(m); /* drop lock */
3559 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3560 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3561 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3562 }
3563}
3564#endif /* NO_MALLOC_STATS */
3565
3566/* ----------------------- Operations on smallbins ----------------------- */
3567
3568/*
3569 Various forms of linking and unlinking are defined as macros. Even
3570 the ones for trees, which are very long but have very short typical
3571 paths. This is ugly but reduces reliance on inlining support of
3572 compilers.
3573*/
3574
3575/* Link a free chunk into a smallbin */
3576#define insert_small_chunk(M, P, S) {\
3577 bindex_t I = small_index(S);\
3578 mchunkptr B = smallbin_at(M, I);\
3579 mchunkptr F = B;\
3580 assert(S >= MIN_CHUNK_SIZE);\
3581 if (!smallmap_is_marked(M, I))\
3582 mark_smallmap(M, I);\
3583 else if (RTCHECK(ok_address(M, B->fd)))\
3584 F = B->fd;\
3585 else {\
3586 CORRUPTION_ERROR_ACTION(M);\
3587 }\
3588 B->fd = P;\
3589 F->bk = P;\
3590 P->fd = F;\
3591 P->bk = B;\
3592}
3593
3594/* Unlink a chunk from a smallbin */
3595#define unlink_small_chunk(M, P, S) {\
3596 mchunkptr F = P->fd;\
3597 mchunkptr B = P->bk;\
3598 bindex_t I = small_index(S);\
3599 assert(P != B);\
3600 assert(P != F);\
3601 assert(chunksize(P) == small_index2size(I));\
3602 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3603 if (B == F) {\
3604 clear_smallmap(M, I);\
3605 }\
3606 else if (RTCHECK(B == smallbin_at(M,I) ||\
3607 (ok_address(M, B) && B->fd == P))) {\
3608 F->bk = B;\
3609 B->fd = F;\
3610 }\
3611 else {\
3612 CORRUPTION_ERROR_ACTION(M);\
3613 }\
3614 }\
3615 else {\
3616 CORRUPTION_ERROR_ACTION(M);\
3617 }\
3618}
3619
3620/* Unlink the first chunk from a smallbin */
3621#define unlink_first_small_chunk(M, B, P, I) {\
3622 mchunkptr F = P->fd;\
3623 assert(P != B);\
3624 assert(P != F);\
3625 assert(chunksize(P) == small_index2size(I));\
3626 if (B == F) {\
3627 clear_smallmap(M, I);\
3628 }\
3629 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3630 F->bk = B;\
3631 B->fd = F;\
3632 }\
3633 else {\
3634 CORRUPTION_ERROR_ACTION(M);\
3635 }\
3636}
3637
3638/* Replace dv node, binning the old one */
3639/* Used only when dvsize known to be small */
3640#define replace_dv(M, P, S) {\
3641 size_t DVS = M->dvsize;\
3642 assert(is_small(DVS));\
3643 if (DVS != 0) {\
3644 mchunkptr DV = M->dv;\
3645 insert_small_chunk(M, DV, DVS);\
3646 }\
3647 M->dvsize = S;\
3648 M->dv = P;\
3649}
3650
3651/* ------------------------- Operations on trees ------------------------- */
3652
3653/* Insert chunk into tree */
3654#define insert_large_chunk(M, X, S) {\
3655 tbinptr* H;\
3656 bindex_t I;\
3657 compute_tree_index(S, I);\
3658 H = treebin_at(M, I);\
3659 X->index = I;\
3660 X->child[0] = X->child[1] = 0;\
3661 if (!treemap_is_marked(M, I)) {\
3662 mark_treemap(M, I);\
3663 *H = X;\
3664 X->parent = (tchunkptr)H;\
3665 X->fd = X->bk = X;\
3666 }\
3667 else {\
3668 tchunkptr T = *H;\
3669 size_t K = S << leftshift_for_tree_index(I);\
3670 for (;;) {\
3671 if (chunksize(T) != S) {\
3672 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3673 K <<= 1;\
3674 if (*C != 0)\
3675 T = *C;\
3676 else if (RTCHECK(ok_address(M, C))) {\
3677 *C = X;\
3678 X->parent = T;\
3679 X->fd = X->bk = X;\
3680 break;\
3681 }\
3682 else {\
3683 CORRUPTION_ERROR_ACTION(M);\
3684 break;\
3685 }\
3686 }\
3687 else {\
3688 tchunkptr F = T->fd;\
3689 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3690 T->fd = F->bk = X;\
3691 X->fd = F;\
3692 X->bk = T;\
3693 X->parent = 0;\
3694 break;\
3695 }\
3696 else {\
3697 CORRUPTION_ERROR_ACTION(M);\
3698 break;\
3699 }\
3700 }\
3701 }\
3702 }\
3703}
3704
3705/*
3706 Unlink steps:
3707
3708 1. If x is a chained node, unlink it from its same-sized fd/bk links
3709 and choose its bk node as its replacement.
3710 2. If x was the last node of its size, but not a leaf node, it must
3711 be replaced with a leaf node (not merely one with an open left or
3712 right), to make sure that lefts and rights of descendants
3713 correspond properly to bit masks. We use the rightmost descendant
3714 of x. We could use any other leaf, but this is easy to locate and
3715 tends to counteract removal of leftmosts elsewhere, and so keeps
3716 paths shorter than minimally guaranteed. This doesn't loop much
3717 because on average a node in a tree is near the bottom.
3718 3. If x is the base of a chain (i.e., has parent links) relink
3719 x's parent and children to x's replacement (or null if none).
3720*/
3721
3722#define unlink_large_chunk(M, X) {\
3723 tchunkptr XP = X->parent;\
3724 tchunkptr R;\
3725 if (X->bk != X) {\
3726 tchunkptr F = X->fd;\
3727 R = X->bk;\
3728 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3729 F->bk = R;\
3730 R->fd = F;\
3731 }\
3732 else {\
3733 CORRUPTION_ERROR_ACTION(M);\
3734 }\
3735 }\
3736 else {\
3737 tchunkptr* RP;\
3738 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3739 ((R = *(RP = &(X->child[0]))) != 0)) {\
3740 tchunkptr* CP;\
3741 while ((*(CP = &(R->child[1])) != 0) ||\
3742 (*(CP = &(R->child[0])) != 0)) {\
3743 R = *(RP = CP);\
3744 }\
3745 if (RTCHECK(ok_address(M, RP)))\
3746 *RP = 0;\
3747 else {\
3748 CORRUPTION_ERROR_ACTION(M);\
3749 }\
3750 }\
3751 }\
3752 if (XP != 0) {\
3753 tbinptr* H = treebin_at(M, X->index);\
3754 if (X == *H) {\
3755 if ((*H = R) == 0) \
3756 clear_treemap(M, X->index);\
3757 }\
3758 else if (RTCHECK(ok_address(M, XP))) {\
3759 if (XP->child[0] == X) \
3760 XP->child[0] = R;\
3761 else \
3762 XP->child[1] = R;\
3763 }\
3764 else\
3765 CORRUPTION_ERROR_ACTION(M);\
3766 if (R != 0) {\
3767 if (RTCHECK(ok_address(M, R))) {\
3768 tchunkptr C0, C1;\
3769 R->parent = XP;\
3770 if ((C0 = X->child[0]) != 0) {\
3771 if (RTCHECK(ok_address(M, C0))) {\
3772 R->child[0] = C0;\
3773 C0->parent = R;\
3774 }\
3775 else\
3776 CORRUPTION_ERROR_ACTION(M);\
3777 }\
3778 if ((C1 = X->child[1]) != 0) {\
3779 if (RTCHECK(ok_address(M, C1))) {\
3780 R->child[1] = C1;\
3781 C1->parent = R;\
3782 }\
3783 else\
3784 CORRUPTION_ERROR_ACTION(M);\
3785 }\
3786 }\
3787 else\
3788 CORRUPTION_ERROR_ACTION(M);\
3789 }\
3790 }\
3791}
3792
3793/* Relays to large vs small bin operations */
3794
3795#define insert_chunk(M, P, S)\
3796 if (is_small(S)) insert_small_chunk(M, P, S)\
3797 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3798
3799#define unlink_chunk(M, P, S)\
3800 if (is_small(S)) unlink_small_chunk(M, P, S)\
3801 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3802
3803
3804/* Relays to internal calls to malloc/free from realloc, memalign etc */
3805
3806#if ONLY_MSPACES
3807#define internal_malloc(m, b) mspace_malloc(m, b)
3808#define internal_free(m, mem) mspace_free(m,mem);
3809#else /* ONLY_MSPACES */
3810#if MSPACES
3811#define internal_malloc(m, b)\
3812 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3813#define internal_free(m, mem)\
3814 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3815#else /* MSPACES */
3816#define internal_malloc(m, b) dlmalloc(b)
3817#define internal_free(m, mem) dlfree(mem)
3818#endif /* MSPACES */
3819#endif /* ONLY_MSPACES */
3820
3821/* ----------------------- Direct-mmapping chunks ----------------------- */
3822
3823/*
3824 Directly mmapped chunks are set up with an offset to the start of
3825 the mmapped region stored in the prev_foot field of the chunk. This
3826 allows reconstruction of the required argument to MUNMAP when freed,
3827 and also allows adjustment of the returned chunk to meet alignment
3828 requirements (especially in memalign).
3829*/
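/*
  Layout sketch of a directly mmapped chunk as built by mmap_alloc below,
  where mm is the start of the mapping and offset is the alignment pad
  recorded in prev_foot:

    mm .. mm+offset            alignment padding (possibly empty)
    p = mm+offset              prev_foot = offset, head = psize
    p + psize                  fencepost: head = FENCEPOST_HEAD
    p + psize + SIZE_T_SIZE    terminating zero head word

  with psize == mmsize - offset - MMAP_FOOT_PAD.
*/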
3830
3831/* Malloc using mmap */
3832static void* mmap_alloc(mstate m, size_t nb) {
3833 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3834 if (m->footprint_limit != 0) {
3835 size_t fp = m->footprint + mmsize;
3836 if (fp <= m->footprint || fp > m->footprint_limit)
3837 return 0;
3838 }
3839 if (mmsize > nb) { /* Check for wrap around 0 */
3840 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3841 if (mm != CMFAIL) {
3842 size_t offset = align_offset(chunk2mem(mm));
3843 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3844 mchunkptr p = (mchunkptr)(mm + offset);
3845 p->prev_foot = offset;
3846 p->head = psize;
3847 mark_inuse_foot(m, p, psize);
3848 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3849 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3850
3851 if (m->least_addr == 0 || mm < m->least_addr)
3852 m->least_addr = mm;
3853 if ((m->footprint += mmsize) > m->max_footprint)
3854 m->max_footprint = m->footprint;
3855 assert(is_aligned(chunk2mem(p)));
3856 check_mmapped_chunk(m, p);
3857 return chunk2mem(p);
3858 }
3859 }
3860 return 0;
3861}
3862
3863/* Realloc using mmap */
3864static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3865 size_t oldsize = chunksize(oldp);
3866 (void)flags; /* placate people compiling -Wunused */
3867 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3868 return 0;
3869 /* Keep old chunk if big enough but not too big */
3870 if (oldsize >= nb + SIZE_T_SIZE &&
3871 (oldsize - nb) <= (mparams.granularity << 1))
3872 return oldp;
3873 else {
3874 size_t offset = oldp->prev_foot;
3875 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3876 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3877 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3878 oldmmsize, newmmsize, flags);
3879 if (cp != CMFAIL) {
3880 mchunkptr newp = (mchunkptr)(cp + offset);
3881 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3882 newp->head = psize;
3883 mark_inuse_foot(m, newp, psize);
3884 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3885 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3886
3887 if (cp < m->least_addr)
3888 m->least_addr = cp;
3889 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3890 m->max_footprint = m->footprint;
3891 check_mmapped_chunk(m, newp);
3892 return newp;
3893 }
3894 }
3895 return 0;
3896}
3897
3898
3899/* -------------------------- mspace management -------------------------- */
3900
3901/* Initialize top chunk and its size */
3902static void init_top(mstate m, mchunkptr p, size_t psize) {
3903 /* Ensure alignment */
3904 size_t offset = align_offset(chunk2mem(p));
3905 p = (mchunkptr)((char*)p + offset);
3906 psize -= offset;
3907
3908 m->top = p;
3909 m->topsize = psize;
3910 p->head = psize | PINUSE_BIT;
3911 /* set size of fake trailing chunk holding overhead space only once */
3912 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3913 m->trim_check = mparams.trim_threshold; /* reset on each update */
3914}
3915
3916/* Initialize bins for a new mstate that is otherwise zeroed out */
3917static void init_bins(mstate m) {
3918 /* Establish circular links for smallbins */
3919 bindex_t i;
3920 for (i = 0; i < NSMALLBINS; ++i) {
3921 sbinptr bin = smallbin_at(m,i);
3922 bin->fd = bin->bk = bin;
3923 }
3924}
3925
3926#if PROCEED_ON_ERROR
3927
3928/* default corruption action */
3929static void reset_on_error(mstate m) {
3930 int i;
3931 ++malloc_corruption_error_count;
3932 /* Reinitialize fields to forget about all memory */
3933 m->smallmap = m->treemap = 0;
3934 m->dvsize = m->topsize = 0;
3935 m->seg.base = 0;
3936 m->seg.size = 0;
3937 m->seg.next = 0;
3938 m->top = m->dv = 0;
3939 for (i = 0; i < NTREEBINS; ++i)
3940 *treebin_at(m, i) = 0;
3941 init_bins(m);
3942}
3943#endif /* PROCEED_ON_ERROR */
3944
3945/* Allocate a chunk from newbase and prepend the remainder to the first chunk of the successor (old) base. */
3946static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3947 size_t nb) {
3948 mchunkptr p = align_as_chunk(newbase);
3949 mchunkptr oldfirst = align_as_chunk(oldbase);
3950 size_t psize = (char*)oldfirst - (char*)p;
3951 mchunkptr q = chunk_plus_offset(p, nb);
3952 size_t qsize = psize - nb;
3953 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3954
3955 assert((char*)oldfirst > (char*)q);
3956 assert(pinuse(oldfirst));
3957 assert(qsize >= MIN_CHUNK_SIZE);
3958
3959 /* consolidate remainder with first chunk of old base */
3960 if (oldfirst == m->top) {
3961 size_t tsize = m->topsize += qsize;
3962 m->top = q;
3963 q->head = tsize | PINUSE_BIT;
3964 check_top_chunk(m, q);
3965 }
3966 else if (oldfirst == m->dv) {
3967 size_t dsize = m->dvsize += qsize;
3968 m->dv = q;
3969 set_size_and_pinuse_of_free_chunk(q, dsize);
3970 }
3971 else {
3972 if (!is_inuse(oldfirst)) {
3973 size_t nsize = chunksize(oldfirst);
3974 unlink_chunk(m, oldfirst, nsize);
3975 oldfirst = chunk_plus_offset(oldfirst, nsize);
3976 qsize += nsize;
3977 }
3978 set_free_with_pinuse(q, qsize, oldfirst);
3979 insert_chunk(m, q, qsize);
3980 check_free_chunk(m, q);
3981 }
3982
3983 check_malloced_chunk(m, chunk2mem(p), nb);
3984 return chunk2mem(p);
3985}
3986
3987/* Add a segment to hold a new noncontiguous region */
3988static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3989 /* Determine locations and sizes of segment, fenceposts, old top */
3990 char* old_top = (char*)m->top;
3991 msegmentptr oldsp = segment_holding(m, old_top);
3992 char* old_end = oldsp->base + oldsp->size;
3993 size_t ssize = pad_request(sizeof(struct malloc_segment));
3994 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3995 size_t offset = align_offset(chunk2mem(rawsp));
3996 char* asp = rawsp + offset;
3997 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3998 mchunkptr sp = (mchunkptr)csp;
3999 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
4000 mchunkptr tnext = chunk_plus_offset(sp, ssize);
4001 mchunkptr p = tnext;
4002 int nfences = 0;
4003
4004 /* reset top to new space */
4005 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4006
4007 /* Set up segment record */
4008 assert(is_aligned(ss));
4009 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4010 *ss = m->seg; /* Push current record */
4011 m->seg.base = tbase;
4012 m->seg.size = tsize;
4013 m->seg.sflags = mmapped;
4014 m->seg.next = ss;
4015
4016 /* Insert trailing fenceposts */
4017 for (;;) {
4018 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4019 p->head = FENCEPOST_HEAD;
4020 ++nfences;
4021 if ((char*)(&(nextp->head)) < old_end)
4022 p = nextp;
4023 else
4024 break;
4025 }
4026 assert(nfences >= 2);
4027
4028 /* Insert the rest of old top into a bin as an ordinary free chunk */
4029 if (csp != old_top) {
4030 mchunkptr q = (mchunkptr)old_top;
4031 size_t psize = csp - old_top;
4032 mchunkptr tn = chunk_plus_offset(q, psize);
4033 set_free_with_pinuse(q, psize, tn);
4034 insert_chunk(m, q, psize);
4035 }
4036
4037 check_top_chunk(m, m->top);
4038}
4039
4040/* -------------------------- System allocation -------------------------- */
4041
4042/* Get memory from system using MORECORE or MMAP */
4043static void* sys_alloc(mstate m, size_t nb) {
4044 char* tbase = CMFAIL;
4045 size_t tsize = 0;
4046 flag_t mmap_flag = 0;
4047 size_t asize; /* allocation size */
4048
4049 ensure_initialization();
4050
4051 /* Directly map large chunks, but only if already initialized */
4052 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4053 void* mem = mmap_alloc(m, nb);
4054 if (mem != 0)
4055 return mem;
4056 }
4057
4058 asize = granularity_align(nb + SYS_ALLOC_PADDING);
4059 if (asize <= nb)
4060 return 0; /* wraparound */
4061 if (m->footprint_limit != 0) {
4062 size_t fp = m->footprint + asize;
4063 if (fp <= m->footprint || fp > m->footprint_limit)
4064 return 0;
4065 }
4066
4067 /*
4068 Try getting memory in any of three ways (in most-preferred to
4069 least-preferred order):
4070 1. A call to MORECORE that can normally contiguously extend memory.
4071 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4072 main space is mmapped or a previous contiguous call failed)
4073 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4074 Note that under the default settings, if MORECORE is unable to
4075 fulfill a request, and HAVE_MMAP is true, then mmap is
4076 used as a noncontiguous system allocator. This is a useful backup
4077 strategy for systems with holes in address spaces -- in this case
4078 sbrk cannot contiguously expand the heap, but mmap may be able to
4079 find space.
4080 3. A call to MORECORE that cannot usually contiguously extend memory.
4081 (disabled if not HAVE_MORECORE)
4082
4083 In all cases, we need to request enough bytes from system to ensure
4084 we can malloc nb bytes upon success, so pad with enough space for
4085 top_foot, plus alignment-pad to make sure we don't lose bytes if
4086 not on boundary, and round this up to a granularity unit.
4087 */
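  /*
    For example (illustrative numbers): with a 64K granularity, a request
    of nb == 100000 bytes yields
    asize == granularity_align(100000 + SYS_ALLOC_PADDING) == 131072,
    so roughly 31K of the new space ends up in the top chunk after the
    allocation is carved out.
  */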
4088
4089 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4090 char* br = CMFAIL;
4091 size_t ssize = asize; /* sbrk call size */
4092 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4093 ACQUIRE_MALLOC_GLOBAL_LOCK();
4094
4095 if (ss == 0) { /* First time through or recovery */
4096 char* base = (char*)CALL_MORECORE(0);
4097 if (base != CMFAIL) {
4098 size_t fp;
4099 /* Adjust to end on a page boundary */
4100 if (!is_page_aligned(base))
4101 ssize += (page_align((size_t)base) - (size_t)base);
4102 fp = m->footprint + ssize; /* recheck limits */
4103 if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4104 (m->footprint_limit == 0 ||
4105 (fp > m->footprint && fp <= m->footprint_limit)) &&
4106 (br = (char*)(CALL_MORECORE(ssize))) == base) {
4107 tbase = base;
4108 tsize = ssize;
4109 }
4110 }
4111 }
4112 else {
4113 /* Subtract out existing available top space from MORECORE request. */
4114 ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4115 /* Use mem here only if it did contiguously extend old space */
4116 if (ssize < HALF_MAX_SIZE_T &&
4117 (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4118 tbase = br;
4119 tsize = ssize;
4120 }
4121 }
4122
4123 if (tbase == CMFAIL) { /* Cope with partial failure */
4124 if (br != CMFAIL) { /* Try to use/extend the space we did get */
4125 if (ssize < HALF_MAX_SIZE_T &&
4126 ssize < nb + SYS_ALLOC_PADDING) {
4127 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4128 if (esize < HALF_MAX_SIZE_T) {
4129 char* end = (char*)CALL_MORECORE(esize);
4130 if (end != CMFAIL)
4131 ssize += esize;
4132 else { /* Can't use; try to release */
4133 (void) CALL_MORECORE(-ssize);
4134 br = CMFAIL;
4135 }
4136 }
4137 }
4138 }
4139 if (br != CMFAIL) { /* Use the space we did get */
4140 tbase = br;
4141 tsize = ssize;
4142 }
4143 else
4144 disable_contiguous(m); /* Don't try contiguous path in the future */
4145 }
4146
4147 RELEASE_MALLOC_GLOBAL_LOCK();
4148 }
4149
4150 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4151 char* mp = (char*)(CALL_MMAP(asize));
4152 if (mp != CMFAIL) {
4153 tbase = mp;
4154 tsize = asize;
4155 mmap_flag = USE_MMAP_BIT;
4156 }
4157 }
4158
4159 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4160 if (asize < HALF_MAX_SIZE_T) {
4161 char* br = CMFAIL;
4162 char* end = CMFAIL;
4163 ACQUIRE_MALLOC_GLOBAL_LOCK();
4164 br = (char*)(CALL_MORECORE(asize));
4165 end = (char*)(CALL_MORECORE(0));
4166 RELEASE_MALLOC_GLOBAL_LOCK();
4167 if (br != CMFAIL && end != CMFAIL && br < end) {
4168 size_t ssize = end - br;
4169 if (ssize > nb + TOP_FOOT_SIZE) {
4170 tbase = br;
4171 tsize = ssize;
4172 }
4173 }
4174 }
4175 }
4176
4177 if (tbase != CMFAIL) {
4178
4179 if ((m->footprint += tsize) > m->max_footprint)
4180 m->max_footprint = m->footprint;
4181
4182 if (!is_initialized(m)) { /* first-time initialization */
4183 if (m->least_addr == 0 || tbase < m->least_addr)
4184 m->least_addr = tbase;
4185 m->seg.base = tbase;
4186 m->seg.size = tsize;
4187 m->seg.sflags = mmap_flag;
4188 m->magic = mparams.magic;
4189 m->release_checks = MAX_RELEASE_CHECK_RATE;
4190 init_bins(m);
4191#if !ONLY_MSPACES
4192 if (is_global(m))
4193 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4194 else
4195#endif
4196 {
4197 /* Offset top by embedded malloc_state */
4198 mchunkptr mn = next_chunk(mem2chunk(m));
4199 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4200 }
4201 }
4202
4203 else {
4204 /* Try to merge with an existing segment */
4205 msegmentptr sp = &m->seg;
4206 /* Only consider most recent segment if traversal suppressed */
4207 while (sp != 0 && tbase != sp->base + sp->size)
4208 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4209 if (sp != 0 &&
4210 !is_extern_segment(sp) &&
4211 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4212 segment_holds(sp, m->top)) { /* append */
4213 sp->size += tsize;
4214 init_top(m, m->top, m->topsize + tsize);
4215 }
4216 else {
4217 if (tbase < m->least_addr)
4218 m->least_addr = tbase;
4219 sp = &m->seg;
4220 while (sp != 0 && sp->base != tbase + tsize)
4221 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4222 if (sp != 0 &&
4223 !is_extern_segment(sp) &&
4224 (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4225 char* oldbase = sp->base;
4226 sp->base = tbase;
4227 sp->size += tsize;
4228 return prepend_alloc(m, tbase, oldbase, nb);
4229 }
4230 else
4231 add_segment(m, tbase, tsize, mmap_flag);
4232 }
4233 }
4234
4235 if (nb < m->topsize) { /* Allocate from new or extended top space */
4236 size_t rsize = m->topsize -= nb;
4237 mchunkptr p = m->top;
4238 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4239 r->head = rsize | PINUSE_BIT;
4240 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4241 check_top_chunk(m, m->top);
4242 check_malloced_chunk(m, chunk2mem(p), nb);
4243 return chunk2mem(p);
4244 }
4245 }
4246
4247 MALLOC_FAILURE_ACTION;
4248 return 0;
4249}
4250
4251/* ----------------------- system deallocation -------------------------- */
4252
4253/* Unmap and unlink any mmapped segments that don't contain used chunks */
4254static size_t release_unused_segments(mstate m) {
4255 size_t released = 0;
4256 int nsegs = 0;
4257 msegmentptr pred = &m->seg;
4258 msegmentptr sp = pred->next;
4259 while (sp != 0) {
4260 char* base = sp->base;
4261 size_t size = sp->size;
4262 msegmentptr next = sp->next;
4263 ++nsegs;
4264 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4265 mchunkptr p = align_as_chunk(base);
4266 size_t psize = chunksize(p);
4267 /* Can unmap if first chunk holds entire segment and not pinned */
4268 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4269 tchunkptr tp = (tchunkptr)p;
4270 assert(segment_holds(sp, (char*)sp));
4271 if (p == m->dv) {
4272 m->dv = 0;
4273 m->dvsize = 0;
4274 }
4275 else {
4276 unlink_large_chunk(m, tp);
4277 }
4278 if (CALL_MUNMAP(base, size) == 0) {
4279 released += size;
4280 m->footprint -= size;
4281 /* unlink obsoleted record */
4282 sp = pred;
4283 sp->next = next;
4284 }
4285 else { /* back out if cannot unmap */
4286 insert_large_chunk(m, tp, psize);
4287 }
4288 }
4289 }
4290 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4291 break;
4292 pred = sp;
4293 sp = next;
4294 }
4295 /* Reset check counter */
4296 m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4297 (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4298 return released;
4299}
4300
4301static int sys_trim(mstate m, size_t pad) {
4302 size_t released = 0;
4303 ensure_initialization();
4304 if (pad < MAX_REQUEST && is_initialized(m)) {
4305 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4306
4307 if (m->topsize > pad) {
4308 /* Shrink top space in granularity-size units, keeping at least one */
4309 size_t unit = mparams.granularity;
4310 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4311 SIZE_T_ONE) * unit;
4312 msegmentptr sp = segment_holding(m, (char*)m->top);
4313
4314 if (!is_extern_segment(sp)) {
4315 if (is_mmapped_segment(sp)) {
4316 if (HAVE_MMAP &&
4317 sp->size >= extra &&
4318 !has_segment_link(m, sp)) { /* can't shrink if pinned */
4319 size_t newsize = sp->size - extra;
4320 (void)newsize; /* placate people compiling -Wunused-variable */
4321 /* Prefer mremap, fall back to munmap */
4322 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4323 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4324 released = extra;
4325 }
4326 }
4327 }
4328 else if (HAVE_MORECORE) {
4329 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4330 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4331 ACQUIRE_MALLOC_GLOBAL_LOCK();
4332 {
4333 /* Make sure end of memory is where we last set it. */
4334 char* old_br = (char*)(CALL_MORECORE(0));
4335 if (old_br == sp->base + sp->size) {
4336 char* rel_br = (char*)(CALL_MORECORE(-extra));
4337 char* new_br = (char*)(CALL_MORECORE(0));
4338 if (rel_br != CMFAIL && new_br < old_br)
4339 released = old_br - new_br;
4340 }
4341 }
4342 RELEASE_MALLOC_GLOBAL_LOCK();
4343 }
4344 }
4345
4346 if (released != 0) {
4347 sp->size -= released;
4348 m->footprint -= released;
4349 init_top(m, m->top, m->topsize - released);
4350 check_top_chunk(m, m->top);
4351 }
4352 }
4353
4354 /* Unmap any unused mmapped segments */
4355 if (HAVE_MMAP)
4356 released += release_unused_segments(m);
4357
4358 /* On failure, disable autotrim to avoid repeated failed future calls */
4359 if (released == 0 && m->topsize > m->trim_check)
4360 m->trim_check = MAX_SIZE_T;
4361 }
4362
4363 return (released != 0)? 1 : 0;
4364}
4365
4366/* Consolidate and bin a chunk. Differs from exported versions
4367 of free mainly in that the chunk need not be marked as inuse.
4368*/
4369static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4370 mchunkptr next = chunk_plus_offset(p, psize);
4371 if (!pinuse(p)) {
4372 mchunkptr prev;
4373 size_t prevsize = p->prev_foot;
4374 if (is_mmapped(p)) {
4375 psize += prevsize + MMAP_FOOT_PAD;
4376 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4377 m->footprint -= psize;
4378 return;
4379 }
4380 prev = chunk_minus_offset(p, prevsize);
4381 psize += prevsize;
4382 p = prev;
4383 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4384 if (p != m->dv) {
4385 unlink_chunk(m, p, prevsize);
4386 }
4387 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4388 m->dvsize = psize;
4389 set_free_with_pinuse(p, psize, next);
4390 return;
4391 }
4392 }
4393 else {
4394 CORRUPTION_ERROR_ACTION(m);
4395 return;
4396 }
4397 }
4398 if (RTCHECK(ok_address(m, next))) {
4399 if (!cinuse(next)) { /* consolidate forward */
4400 if (next == m->top) {
4401 size_t tsize = m->topsize += psize;
4402 m->top = p;
4403 p->head = tsize | PINUSE_BIT;
4404 if (p == m->dv) {
4405 m->dv = 0;
4406 m->dvsize = 0;
4407 }
4408 return;
4409 }
4410 else if (next == m->dv) {
4411 size_t dsize = m->dvsize += psize;
4412 m->dv = p;
4413 set_size_and_pinuse_of_free_chunk(p, dsize);
4414 return;
4415 }
4416 else {
4417 size_t nsize = chunksize(next);
4418 psize += nsize;
4419 unlink_chunk(m, next, nsize);
4420 set_size_and_pinuse_of_free_chunk(p, psize);
4421 if (p == m->dv) {
4422 m->dvsize = psize;
4423 return;
4424 }
4425 }
4426 }
4427 else {
4428 set_free_with_pinuse(p, psize, next);
4429 }
4430 insert_chunk(m, p, psize);
4431 }
4432 else {
4433 CORRUPTION_ERROR_ACTION(m);
4434 }
4435}
4436
4437/* ---------------------------- malloc --------------------------- */
4438
4439/* allocate a large request from the best fitting chunk in a treebin */
4440static void* tmalloc_large(mstate m, size_t nb) {
4441 tchunkptr v = 0;
4442 size_t rsize = -nb; /* Unsigned negation */
4443 tchunkptr t;
4444 bindex_t idx;
4445 compute_tree_index(nb, idx);
4446 if ((t = *treebin_at(m, idx)) != 0) {
4447 /* Traverse tree for this bin looking for node with size == nb */
4448 size_t sizebits = nb << leftshift_for_tree_index(idx);
4449 tchunkptr rst = 0; /* The deepest untaken right subtree */
4450 for (;;) {
4451 tchunkptr rt;
4452 size_t trem = chunksize(t) - nb;
4453 if (trem < rsize) {
4454 v = t;
4455 if ((rsize = trem) == 0)
4456 break;
4457 }
4458 rt = t->child[1];
4459 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4460 if (rt != 0 && rt != t)
4461 rst = rt;
4462 if (t == 0) {
4463 t = rst; /* set t to least subtree holding sizes > nb */
4464 break;
4465 }
4466 sizebits <<= 1;
4467 }
4468 }
4469 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4470 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4471 if (leftbits != 0) {
4472 bindex_t i;
4473 binmap_t leastbit = least_bit(leftbits);
4474 compute_bit2idx(leastbit, i);
4475 t = *treebin_at(m, i);
4476 }
4477 }
4478
4479 while (t != 0) { /* find smallest of tree or subtree */
4480 size_t trem = chunksize(t) - nb;
4481 if (trem < rsize) {
4482 rsize = trem;
4483 v = t;
4484 }
4485 t = leftmost_child(t);
4486 }
4487
4488 /* If dv is a better fit, return 0 so malloc will use it */
4489 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4490 if (RTCHECK(ok_address(m, v))) { /* split */
4491 mchunkptr r = chunk_plus_offset(v, nb);
4492 assert(chunksize(v) == rsize + nb);
4493 if (RTCHECK(ok_next(v, r))) {
4494 unlink_large_chunk(m, v);
4495 if (rsize < MIN_CHUNK_SIZE)
4496 set_inuse_and_pinuse(m, v, (rsize + nb));
4497 else {
4498 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4499 set_size_and_pinuse_of_free_chunk(r, rsize);
4500 insert_chunk(m, r, rsize);
4501 }
4502 return chunk2mem(v);
4503 }
4504 }
4505 CORRUPTION_ERROR_ACTION(m);
4506 }
4507 return 0;
4508}
4509
4510/* allocate a small request from the best fitting chunk in a treebin */
4511static void* tmalloc_small(mstate m, size_t nb) {
4512 tchunkptr t, v;
4513 size_t rsize;
4514 bindex_t i;
4515 binmap_t leastbit = least_bit(m->treemap);
4516 compute_bit2idx(leastbit, i);
4517 v = t = *treebin_at(m, i);
4518 rsize = chunksize(t) - nb;
4519
4520 while ((t = leftmost_child(t)) != 0) {
4521 size_t trem = chunksize(t) - nb;
4522 if (trem < rsize) {
4523 rsize = trem;
4524 v = t;
4525 }
4526 }
4527
4528 if (RTCHECK(ok_address(m, v))) {
4529 mchunkptr r = chunk_plus_offset(v, nb);
4530 assert(chunksize(v) == rsize + nb);
4531 if (RTCHECK(ok_next(v, r))) {
4532 unlink_large_chunk(m, v);
4533 if (rsize < MIN_CHUNK_SIZE)
4534 set_inuse_and_pinuse(m, v, (rsize + nb));
4535 else {
4536 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4537 set_size_and_pinuse_of_free_chunk(r, rsize);
4538 replace_dv(m, r, rsize);
4539 }
4540 return chunk2mem(v);
4541 }
4542 }
4543
4544 CORRUPTION_ERROR_ACTION(m);
4545 return 0;
4546}
4547
4548#if !ONLY_MSPACES
4549
4550void* dlmalloc(size_t bytes) {
4551 /*
4552 Basic algorithm:
4553 If a small request (< 256 bytes minus per-chunk overhead):
4554 1. If one exists, use a remainderless chunk in associated smallbin.
4555 (Remainderless means that there are too few excess bytes to
4556 represent as a chunk.)
4557 2. If it is big enough, use the dv chunk, which is normally the
4558 chunk adjacent to the one used for the most recent small request.
4559 3. If one exists, split the smallest available chunk in a bin,
4560 saving remainder in dv.
4561 4. If it is big enough, use the top chunk.
4562       5. If available, get memory from system and use it.
4563 Otherwise, for a large request:
4564 1. Find the smallest available binned chunk that fits, and use it
4565 if it is better fitting than dv chunk, splitting if necessary.
4566 2. If better fitting than any binned chunk, use the dv chunk.
4567 3. If it is big enough, use the top chunk.
4568 4. If request size >= mmap threshold, try to directly mmap this chunk.
4569       5. If available, get memory from system and use it.
4570
4571 The ugly goto's here ensure that postaction occurs along all paths.
4572 */
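  /*
    Illustrative usage sketch (not part of the allocator itself; note
    that unless USE_DL_PREFIX is defined these names are also mapped
    onto malloc/free):

      void example(void) {
        void* p = dlmalloc(100);   // small request: steps 1-5 above
        if (p != 0) {
          // ... use up to dlmalloc_usable_size(p) bytes ...
          dlfree(p);               // freed chunk is coalesced and binned
        }
      }
  */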
4573
4574#if USE_LOCKS
4575 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4576#endif
4577
4578 if (!PREACTION(gm)) {
4579 void* mem;
4580 size_t nb;
4581 if (bytes <= MAX_SMALL_REQUEST) {
4582 bindex_t idx;
4583 binmap_t smallbits;
4584 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4585 idx = small_index(nb);
4586 smallbits = gm->smallmap >> idx;
4587
4588 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4589 mchunkptr b, p;
4590 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4591 b = smallbin_at(gm, idx);
4592 p = b->fd;
4593 assert(chunksize(p) == small_index2size(idx));
4594 unlink_first_small_chunk(gm, b, p, idx);
4595 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4596 mem = chunk2mem(p);
4597 check_malloced_chunk(gm, mem, nb);
4598 goto postaction;
4599 }
4600
4601 else if (nb > gm->dvsize) {
4602 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4603 mchunkptr b, p, r;
4604 size_t rsize;
4605 bindex_t i;
4606 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4607 binmap_t leastbit = least_bit(leftbits);
4608 compute_bit2idx(leastbit, i);
4609 b = smallbin_at(gm, i);
4610 p = b->fd;
4611 assert(chunksize(p) == small_index2size(i));
4612 unlink_first_small_chunk(gm, b, p, i);
4613 rsize = small_index2size(i) - nb;
4614 /* Fit here cannot be remainderless if 4byte sizes */
4615 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4616 set_inuse_and_pinuse(gm, p, small_index2size(i));
4617 else {
4618 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4619 r = chunk_plus_offset(p, nb);
4620 set_size_and_pinuse_of_free_chunk(r, rsize);
4621 replace_dv(gm, r, rsize);
4622 }
4623 mem = chunk2mem(p);
4624 check_malloced_chunk(gm, mem, nb);
4625 goto postaction;
4626 }
4627
4628 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4629 check_malloced_chunk(gm, mem, nb);
4630 goto postaction;
4631 }
4632 }
4633 }
4634 else if (bytes >= MAX_REQUEST)
4635 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4636 else {
4637 nb = pad_request(bytes);
4638 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4639 check_malloced_chunk(gm, mem, nb);
4640 goto postaction;
4641 }
4642 }
4643
4644 if (nb <= gm->dvsize) {
4645 size_t rsize = gm->dvsize - nb;
4646 mchunkptr p = gm->dv;
4647 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4648 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4649 gm->dvsize = rsize;
4650 set_size_and_pinuse_of_free_chunk(r, rsize);
4651 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4652 }
4653 else { /* exhaust dv */
4654 size_t dvs = gm->dvsize;
4655 gm->dvsize = 0;
4656 gm->dv = 0;
4657 set_inuse_and_pinuse(gm, p, dvs);
4658 }
4659 mem = chunk2mem(p);
4660 check_malloced_chunk(gm, mem, nb);
4661 goto postaction;
4662 }
4663
4664 else if (nb < gm->topsize) { /* Split top */
4665 size_t rsize = gm->topsize -= nb;
4666 mchunkptr p = gm->top;
4667 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4668 r->head = rsize | PINUSE_BIT;
4669 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4670 mem = chunk2mem(p);
4671 check_top_chunk(gm, gm->top);
4672 check_malloced_chunk(gm, mem, nb);
4673 goto postaction;
4674 }
4675
4676 mem = sys_alloc(gm, nb);
4677
4678 postaction:
4679 POSTACTION(gm);
4680 return mem;
4681 }
4682
4683 return 0;
4684}
4685
4686/* ---------------------------- free --------------------------- */
4687
4688void dlfree(void* mem) {
4689 /*
4690     Consolidate freed chunks with preceding or succeeding bordering
4691 free chunks, if they exist, and then place in a bin. Intermixed
4692 with special cases for top, dv, mmapped chunks, and usage errors.
4693 */
4694
4695 if (mem != 0) {
4696 mchunkptr p = mem2chunk(mem);
4697#if FOOTERS
4698 mstate fm = get_mstate_for(p);
4699 if (!ok_magic(fm)) {
4700 USAGE_ERROR_ACTION(fm, p);
4701 return;
4702 }
4703#else /* FOOTERS */
4704#define fm gm
4705#endif /* FOOTERS */
4706 if (!PREACTION(fm)) {
4707 check_inuse_chunk(fm, p);
4708 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4709 size_t psize = chunksize(p);
4710 mchunkptr next = chunk_plus_offset(p, psize);
4711 if (!pinuse(p)) {
4712 size_t prevsize = p->prev_foot;
4713 if (is_mmapped(p)) {
4714 psize += prevsize + MMAP_FOOT_PAD;
4715 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4716 fm->footprint -= psize;
4717 goto postaction;
4718 }
4719 else {
4720 mchunkptr prev = chunk_minus_offset(p, prevsize);
4721 psize += prevsize;
4722 p = prev;
4723 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4724 if (p != fm->dv) {
4725 unlink_chunk(fm, p, prevsize);
4726 }
4727 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4728 fm->dvsize = psize;
4729 set_free_with_pinuse(p, psize, next);
4730 goto postaction;
4731 }
4732 }
4733 else
4734 goto erroraction;
4735 }
4736 }
4737
4738 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4739 if (!cinuse(next)) { /* consolidate forward */
4740 if (next == fm->top) {
4741 size_t tsize = fm->topsize += psize;
4742 fm->top = p;
4743 p->head = tsize | PINUSE_BIT;
4744 if (p == fm->dv) {
4745 fm->dv = 0;
4746 fm->dvsize = 0;
4747 }
4748 if (should_trim(fm, tsize))
4749 sys_trim(fm, 0);
4750 goto postaction;
4751 }
4752 else if (next == fm->dv) {
4753 size_t dsize = fm->dvsize += psize;
4754 fm->dv = p;
4755 set_size_and_pinuse_of_free_chunk(p, dsize);
4756 goto postaction;
4757 }
4758 else {
4759 size_t nsize = chunksize(next);
4760 psize += nsize;
4761 unlink_chunk(fm, next, nsize);
4762 set_size_and_pinuse_of_free_chunk(p, psize);
4763 if (p == fm->dv) {
4764 fm->dvsize = psize;
4765 goto postaction;
4766 }
4767 }
4768 }
4769 else
4770 set_free_with_pinuse(p, psize, next);
4771
4772 if (is_small(psize)) {
4773 insert_small_chunk(fm, p, psize);
4774 check_free_chunk(fm, p);
4775 }
4776 else {
4777 tchunkptr tp = (tchunkptr)p;
4778 insert_large_chunk(fm, tp, psize);
4779 check_free_chunk(fm, p);
4780 if (--fm->release_checks == 0)
4781 release_unused_segments(fm);
4782 }
4783 goto postaction;
4784 }
4785 }
4786 erroraction:
4787 USAGE_ERROR_ACTION(fm, p);
4788 postaction:
4789 POSTACTION(fm);
4790 }
4791 }
4792#if !FOOTERS
4793#undef fm
4794#endif /* FOOTERS */
4795}
4796
4797void* dlcalloc(size_t n_elements, size_t elem_size) {
4798 void* mem;
4799 size_t req = 0;
4800 if (n_elements != 0) {
4801 req = n_elements * elem_size;
4802 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4803 (req / n_elements != elem_size))
4804 req = MAX_SIZE_T; /* force downstream failure on overflow */
4805 }
4806 mem = dlmalloc(req);
4807 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4808 memset(mem, 0, req);
4809 return mem;
4810}
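/*
  Worked example of the overflow guard in dlcalloc (assuming a 32-bit
  size_t): when both n_elements and elem_size fit in 16 bits, their
  product cannot exceed 32 bits, so the division test is skipped.
  For n_elements == 0x10000 and elem_size == 0x10001 the true product
  0x100010000 wraps to 0x10000; since 0x10000 / 0x10000 != 0x10001,
  req becomes MAX_SIZE_T and dlmalloc fails cleanly instead of
  returning an undersized block.
*/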
4811
4812#endif /* !ONLY_MSPACES */
4813
4814/* ------------ Internal support for realloc, memalign, etc -------------- */
4815
4816/* Try to realloc; only in-place unless can_move true */
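/* Cases handled below, in order: resize an mmapped chunk via
   mmap_resize; shrink in place when the chunk is already big enough,
   splitting off any remainder of at least MIN_CHUNK_SIZE; or grow in
   place by absorbing the adjacent top, dv, or next free chunk when the
   combined size suffices. Returns 0 if none of these apply. */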
4817static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4818 int can_move) {
4819 mchunkptr newp = 0;
4820 size_t oldsize = chunksize(p);
4821 mchunkptr next = chunk_plus_offset(p, oldsize);
4822 if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4823 ok_next(p, next) && ok_pinuse(next))) {
4824 if (is_mmapped(p)) {
4825 newp = mmap_resize(m, p, nb, can_move);
4826 }
4827 else if (oldsize >= nb) { /* already big enough */
4828 size_t rsize = oldsize - nb;
4829 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4830 mchunkptr r = chunk_plus_offset(p, nb);
4831 set_inuse(m, p, nb);
4832 set_inuse(m, r, rsize);
4833 dispose_chunk(m, r, rsize);
4834 }
4835 newp = p;
4836 }
4837 else if (next == m->top) { /* extend into top */
4838 if (oldsize + m->topsize > nb) {
4839 size_t newsize = oldsize + m->topsize;
4840 size_t newtopsize = newsize - nb;
4841 mchunkptr newtop = chunk_plus_offset(p, nb);
4842 set_inuse(m, p, nb);
4843        newtop->head = newtopsize | PINUSE_BIT;
4844 m->top = newtop;
4845 m->topsize = newtopsize;
4846 newp = p;
4847 }
4848 }
4849 else if (next == m->dv) { /* extend into dv */
4850 size_t dvs = m->dvsize;
4851 if (oldsize + dvs >= nb) {
4852 size_t dsize = oldsize + dvs - nb;
4853 if (dsize >= MIN_CHUNK_SIZE) {
4854 mchunkptr r = chunk_plus_offset(p, nb);
4855 mchunkptr n = chunk_plus_offset(r, dsize);
4856 set_inuse(m, p, nb);
4857 set_size_and_pinuse_of_free_chunk(r, dsize);
4858 clear_pinuse(n);
4859 m->dvsize = dsize;
4860 m->dv = r;
4861 }
4862 else { /* exhaust dv */
4863 size_t newsize = oldsize + dvs;
4864 set_inuse(m, p, newsize);
4865 m->dvsize = 0;
4866 m->dv = 0;
4867 }
4868 newp = p;
4869 }
4870 }
4871 else if (!cinuse(next)) { /* extend into next free chunk */
4872 size_t nextsize = chunksize(next);
4873 if (oldsize + nextsize >= nb) {
4874 size_t rsize = oldsize + nextsize - nb;
4875 unlink_chunk(m, next, nextsize);
4876 if (rsize < MIN_CHUNK_SIZE) {
4877 size_t newsize = oldsize + nextsize;
4878 set_inuse(m, p, newsize);
4879 }
4880 else {
4881 mchunkptr r = chunk_plus_offset(p, nb);
4882 set_inuse(m, p, nb);
4883 set_inuse(m, r, rsize);
4884 dispose_chunk(m, r, rsize);
4885 }
4886 newp = p;
4887 }
4888 }
4889 }
4890 else {
4891 USAGE_ERROR_ACTION(m, chunk2mem(p));
4892 }
4893 return newp;
4894}
4895
4896static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4897 void* mem = 0;
4898 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4899 alignment = MIN_CHUNK_SIZE;
4900 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4901 size_t a = MALLOC_ALIGNMENT << 1;
4902 while (a < alignment) a <<= 1;
4903 alignment = a;
4904 }
4905 if (bytes >= MAX_REQUEST - alignment) {
4906 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4907 MALLOC_FAILURE_ACTION;
4908 }
4909 }
4910 else {
4911 size_t nb = request2size(bytes);
4912 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4913 mem = internal_malloc(m, req);
4914 if (mem != 0) {
4915 mchunkptr p = mem2chunk(mem);
4916 if (PREACTION(m))
4917 return 0;
4918 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4919 /*
4920 Find an aligned spot inside chunk. Since we need to give
4921 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4922 the first calculation places us at a spot with less than
4923 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4924 We've allocated enough total room so that this is always
4925 possible.
4926 */
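        /*
          Worked example (illustrative): with alignment == 64 the
          expression ((mem + 63) & -64) rounds the payload address up
          to the next 64-byte boundary (for a power of two, -alignment
          equals ~(alignment - 1)). If that leaves a leader smaller
          than MIN_CHUNK_SIZE, the boundary 64 bytes further on is used
          instead; the alignment + MIN_CHUNK_SIZE of extra space
          requested above guarantees this still fits in the chunk.
        */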
4927 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4928 SIZE_T_ONE)) &
4929 -alignment));
4930 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4931 br : br+alignment;
4932 mchunkptr newp = (mchunkptr)pos;
4933 size_t leadsize = pos - (char*)(p);
4934 size_t newsize = chunksize(p) - leadsize;
4935
4936 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4937 newp->prev_foot = p->prev_foot + leadsize;
4938 newp->head = newsize;
4939 }
4940 else { /* Otherwise, give back leader, use the rest */
4941 set_inuse(m, newp, newsize);
4942 set_inuse(m, p, leadsize);
4943 dispose_chunk(m, p, leadsize);
4944 }
4945 p = newp;
4946 }
4947
4948 /* Give back spare room at the end */
4949 if (!is_mmapped(p)) {
4950 size_t size = chunksize(p);
4951 if (size > nb + MIN_CHUNK_SIZE) {
4952 size_t remainder_size = size - nb;
4953 mchunkptr remainder = chunk_plus_offset(p, nb);
4954 set_inuse(m, p, nb);
4955 set_inuse(m, remainder, remainder_size);
4956 dispose_chunk(m, remainder, remainder_size);
4957 }
4958 }
4959
4960 mem = chunk2mem(p);
4961      assert(chunksize(p) >= nb);
4962 assert(((size_t)mem & (alignment - 1)) == 0);
4963 check_inuse_chunk(m, p);
4964 POSTACTION(m);
4965 }
4966 }
4967 return mem;
4968}
4969
4970/*
4971 Common support for independent_X routines, handling
4972 all of the combinations that can result.
4973 The opts arg has:
4974 bit 0 set if all elements are same size (using sizes[0])
4975 bit 1 set if elements should be zeroed
4976*/
4977static void** ialloc(mstate m,
4978 size_t n_elements,
4979 size_t* sizes,
4980 int opts,
4981 void* chunks[]) {
4982
4983 size_t element_size; /* chunksize of each element, if all same */
4984 size_t contents_size; /* total size of elements */
4985 size_t array_size; /* request size of pointer array */
4986 void* mem; /* malloced aggregate space */
4987 mchunkptr p; /* corresponding chunk */
4988 size_t remainder_size; /* remaining bytes while splitting */
4989 void** marray; /* either "chunks" or malloced ptr array */
4990 mchunkptr array_chunk; /* chunk for malloced ptr array */
4991 flag_t was_enabled; /* to disable mmap */
4992 size_t size;
4993 size_t i;
4994
4995 ensure_initialization();
4996 /* compute array length, if needed */
4997 if (chunks != 0) {
4998 if (n_elements == 0)
4999 return chunks; /* nothing to do */
5000 marray = chunks;
5001 array_size = 0;
5002 }
5003 else {
5004 /* if empty req, must still return chunk representing empty array */
5005 if (n_elements == 0)
5006 return (void**)internal_malloc(m, 0);
5007 marray = 0;
5008 array_size = request2size(n_elements * (sizeof(void*)));
5009 }
5010
5011 /* compute total element size */
5012 if (opts & 0x1) { /* all-same-size */
5013 element_size = request2size(*sizes);
5014 contents_size = n_elements * element_size;
5015 }
5016 else { /* add up all the sizes */
5017 element_size = 0;
5018 contents_size = 0;
5019 for (i = 0; i != n_elements; ++i)
5020 contents_size += request2size(sizes[i]);
5021 }
5022
5023 size = contents_size + array_size;
5024
5025 /*
5026 Allocate the aggregate chunk. First disable direct-mmapping so
5027 malloc won't use it, since we would not be able to later
5028 free/realloc space internal to a segregated mmap region.
5029 */
5030 was_enabled = use_mmap(m);
5031 disable_mmap(m);
5032 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5033 if (was_enabled)
5034 enable_mmap(m);
5035 if (mem == 0)
5036 return 0;
5037
5038 if (PREACTION(m)) return 0;
5039 p = mem2chunk(mem);
5040 remainder_size = chunksize(p);
5041
5042 assert(!is_mmapped(p));
5043
5044 if (opts & 0x2) { /* optionally clear the elements */
5045 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5046 }
5047
5048 /* If not provided, allocate the pointer array as final part of chunk */
5049 if (marray == 0) {
5050 size_t array_chunk_size;
5051 array_chunk = chunk_plus_offset(p, contents_size);
5052 array_chunk_size = remainder_size - contents_size;
5053 marray = (void**) (chunk2mem(array_chunk));
5054 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5055 remainder_size = contents_size;
5056 }
5057
5058 /* split out elements */
5059 for (i = 0; ; ++i) {
5060 marray[i] = chunk2mem(p);
5061 if (i != n_elements-1) {
5062 if (element_size != 0)
5063 size = element_size;
5064 else
5065 size = request2size(sizes[i]);
5066 remainder_size -= size;
5067 set_size_and_pinuse_of_inuse_chunk(m, p, size);
5068 p = chunk_plus_offset(p, size);
5069 }
5070 else { /* the final element absorbs any overallocation slop */
5071 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5072 break;
5073 }
5074 }
5075
5076#if DEBUG
5077 if (marray != chunks) {
5078 /* final element must have exactly exhausted chunk */
5079 if (element_size != 0) {
5080 assert(remainder_size == element_size);
5081 }
5082 else {
5083 assert(remainder_size == request2size(sizes[i]));
5084 }
5085 check_inuse_chunk(m, mem2chunk(marray));
5086 }
5087 for (i = 0; i != n_elements; ++i)
5088 check_inuse_chunk(m, mem2chunk(marray[i]));
5089
5090#endif /* DEBUG */
5091
5092 POSTACTION(m);
5093 return marray;
5094}
5095
5096/* Try to free all pointers in the given array.
5097   Note: this could be made faster by delaying consolidation,
5098   at the price of disabling some user integrity checks. We
5099 still optimize some consolidations by combining adjacent
5100 chunks before freeing, which will occur often if allocated
5101 with ialloc or the array is sorted.
5102*/
5103static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5104 size_t unfreed = 0;
5105 if (!PREACTION(m)) {
5106 void** a;
5107 void** fence = &(array[nelem]);
5108 for (a = array; a != fence; ++a) {
5109 void* mem = *a;
5110 if (mem != 0) {
5111 mchunkptr p = mem2chunk(mem);
5112 size_t psize = chunksize(p);
5113#if FOOTERS
5114 if (get_mstate_for(p) != m) {
5115 ++unfreed;
5116 continue;
5117 }
5118#endif
5119 check_inuse_chunk(m, p);
5120 *a = 0;
5121 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5122 void ** b = a + 1; /* try to merge with next chunk */
5123 mchunkptr next = next_chunk(p);
5124 if (b != fence && *b == chunk2mem(next)) {
5125 size_t newsize = chunksize(next) + psize;
5126 set_inuse(m, p, newsize);
5127 *b = chunk2mem(p);
5128 }
5129 else
5130 dispose_chunk(m, p, psize);
5131 }
5132 else {
5133 CORRUPTION_ERROR_ACTION(m);
5134 break;
5135 }
5136 }
5137 }
5138 if (should_trim(m, m->topsize))
5139 sys_trim(m, 0);
5140 POSTACTION(m);
5141 }
5142 return unfreed;
5143}
5144
5145/* Traversal */
5146#if MALLOC_INSPECT_ALL
5147static void internal_inspect_all(mstate m,
5148 void(*handler)(void *start,
5149 void *end,
5150 size_t used_bytes,
5151 void* callback_arg),
5152 void* arg) {
5153 if (is_initialized(m)) {
5154 mchunkptr top = m->top;
5155 msegmentptr s;
5156 for (s = &m->seg; s != 0; s = s->next) {
5157 mchunkptr q = align_as_chunk(s->base);
5158 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5159 mchunkptr next = next_chunk(q);
5160 size_t sz = chunksize(q);
5161 size_t used;
5162 void* start;
5163 if (is_inuse(q)) {
5164 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5165 start = chunk2mem(q);
5166 }
5167 else {
5168 used = 0;
5169 if (is_small(sz)) { /* offset by possible bookkeeping */
5170 start = (void*)((char*)q + sizeof(struct malloc_chunk));
5171 }
5172 else {
5173 start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5174 }
5175 }
5176 if (start < (void*)next) /* skip if all space is bookkeeping */
5177 handler(start, next, used, arg);
5178 if (q == top)
5179 break;
5180 q = next;
5181 }
5182 }
5183 }
5184}
5185#endif /* MALLOC_INSPECT_ALL */
5186
5187/* ------------------ Exported realloc, memalign, etc -------------------- */
5188
5189#if !ONLY_MSPACES
5190
5191void* dlrealloc(void* oldmem, size_t bytes) {
5192 void* mem = 0;
5193 if (oldmem == 0) {
5194 mem = dlmalloc(bytes);
5195 }
5196 else if (bytes >= MAX_REQUEST) {
5197 MALLOC_FAILURE_ACTION;
5198 }
5199#ifdef REALLOC_ZERO_BYTES_FREES
5200 else if (bytes == 0) {
5201 dlfree(oldmem);
5202 }
5203#endif /* REALLOC_ZERO_BYTES_FREES */
5204 else {
5205 size_t nb = request2size(bytes);
5206 mchunkptr oldp = mem2chunk(oldmem);
5207#if ! FOOTERS
5208 mstate m = gm;
5209#else /* FOOTERS */
5210 mstate m = get_mstate_for(oldp);
5211 if (!ok_magic(m)) {
5212 USAGE_ERROR_ACTION(m, oldmem);
5213 return 0;
5214 }
5215#endif /* FOOTERS */
5216 if (!PREACTION(m)) {
5217 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5218 POSTACTION(m);
5219 if (newp != 0) {
5220 check_inuse_chunk(m, newp);
5221 mem = chunk2mem(newp);
5222 }
5223 else {
5224 mem = internal_malloc(m, bytes);
5225 if (mem != 0) {
5226 size_t oc = chunksize(oldp) - overhead_for(oldp);
5227 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5228 internal_free(m, oldmem);
5229 }
5230 }
5231 }
5232 }
5233 return mem;
5234}
5235
5236void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5237 void* mem = 0;
5238 if (oldmem != 0) {
5239 if (bytes >= MAX_REQUEST) {
5240 MALLOC_FAILURE_ACTION;
5241 }
5242 else {
5243 size_t nb = request2size(bytes);
5244 mchunkptr oldp = mem2chunk(oldmem);
5245#if ! FOOTERS
5246 mstate m = gm;
5247#else /* FOOTERS */
5248 mstate m = get_mstate_for(oldp);
5249 if (!ok_magic(m)) {
5250 USAGE_ERROR_ACTION(m, oldmem);
5251 return 0;
5252 }
5253#endif /* FOOTERS */
5254 if (!PREACTION(m)) {
5255 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5256 POSTACTION(m);
5257 if (newp == oldp) {
5258 check_inuse_chunk(m, newp);
5259 mem = oldmem;
5260 }
5261 }
5262 }
5263 }
5264 return mem;
5265}
5266
5267void* dlmemalign(size_t alignment, size_t bytes) {
5268 if (alignment <= MALLOC_ALIGNMENT) {
5269 return dlmalloc(bytes);
5270 }
5271 return internal_memalign(gm, alignment, bytes);
5272}
5273
5274int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5275 void* mem = 0;
5276 if (alignment == MALLOC_ALIGNMENT)
5277 mem = dlmalloc(bytes);
5278 else {
5279 size_t d = alignment / sizeof(void*);
5280 size_t r = alignment % sizeof(void*);
5281 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5282 return EINVAL;
5283 else if (bytes <= MAX_REQUEST - alignment) {
5284 if (alignment < MIN_CHUNK_SIZE)
5285 alignment = MIN_CHUNK_SIZE;
5286 mem = internal_memalign(gm, alignment, bytes);
5287 }
5288 }
5289 if (mem == 0)
5290 return ENOMEM;
5291 else {
5292 *pp = mem;
5293 return 0;
5294 }
5295}
5296
5297void* dlvalloc(size_t bytes) {
5298 size_t pagesz;
5299 ensure_initialization();
5300 pagesz = mparams.page_size;
5301 return dlmemalign(pagesz, bytes);
5302}
5303
5304void* dlpvalloc(size_t bytes) {
5305 size_t pagesz;
5306 ensure_initialization();
5307 pagesz = mparams.page_size;
5308 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5309}
5310
5311void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5312 void* chunks[]) {
5313 size_t sz = elem_size; /* serves as 1-element array */
5314 return ialloc(gm, n_elements, &sz, 3, chunks);
5315}
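/*
  Illustrative sketch of dlindependent_calloc (hypothetical Node type;
  error handling abbreviated). Passing 0 for the chunks argument makes
  the routine allocate the pointer array as well, so both the elements
  and the returned array are freed separately:

    struct Node { int value; struct Node* next; };

    void demo_nodes(size_t n) {
      void** nodes = dlindependent_calloc(n, sizeof(struct Node), 0);
      if (nodes != 0) {
        size_t i;
        for (i = 0; i + 1 < n; ++i)   // elements come back zeroed
          ((struct Node*)nodes[i])->next = (struct Node*)nodes[i + 1];
        dlbulk_free(nodes, n);        // release every element ...
        dlfree(nodes);                // ... then the pointer array itself
      }
    }
*/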
5316
5317void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5318 void* chunks[]) {
5319 return ialloc(gm, n_elements, sizes, 0, chunks);
5320}
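/*
  Illustrative sketch of dlindependent_comalloc (hypothetical struct
  names): it carves differently sized, independently freeable pieces
  out of one aggregate allocation, which improves locality and reduces
  per-chunk overhead for objects that live and die together:

    struct Header { size_t len; };
    struct Body   { double data[16]; };

    void demo_pair(void) {
      size_t sizes[2] = { sizeof(struct Header), sizeof(struct Body) };
      void*  parts[2];
      if (dlindependent_comalloc(2, sizes, parts) != 0) {
        struct Header* h = (struct Header*)parts[0];
        struct Body*   b = (struct Body*)parts[1];
        h->len = sizeof(struct Body);
        (void)b;                      // elements are not zeroed here
        dlfree(parts[1]);
        dlfree(parts[0]);
      }
    }
*/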
5321
5322size_t dlbulk_free(void* array[], size_t nelem) {
5323 return internal_bulk_free(gm, array, nelem);
5324}
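/*
  Note on dlbulk_free (descriptive): array slots whose pointers were
  freed are set to 0 on return, and the result counts pointers that
  could not be freed, which can be nonzero only when FOOTERS-based
  checking detects a chunk belonging to a different malloc space.
*/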
5325
5326#if MALLOC_INSPECT_ALL
5327void dlmalloc_inspect_all(void(*handler)(void *start,
5328 void *end,
5329 size_t used_bytes,
5330 void* callback_arg),
5331 void* arg) {
5332 ensure_initialization();
5333 if (!PREACTION(gm)) {
5334 internal_inspect_all(gm, handler, arg);
5335 POSTACTION(gm);
5336 }
5337}
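/*
  Illustrative handler sketch (assumes MALLOC_INSPECT_ALL is defined;
  the handler runs while the malloc lock is held, so it must not call
  back into malloc or free):

    static void add_used(void* start, void* end, size_t used_bytes,
                         void* callback_arg) {
      (void)start; (void)end;
      *(size_t*)callback_arg += used_bytes;
    }

    size_t total_used_bytes(void) {
      size_t total = 0;
      dlmalloc_inspect_all(add_used, &total);
      return total;
    }
*/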
5338#endif /* MALLOC_INSPECT_ALL */
5339
5340int dlmalloc_trim(size_t pad) {
5341 int result = 0;
5342 ensure_initialization();
5343 if (!PREACTION(gm)) {
5344 result = sys_trim(gm, pad);
5345 POSTACTION(gm);
5346 }
5347 return result;
5348}
5349
5350size_t dlmalloc_footprint(void) {
5351 return gm->footprint;
5352}
5353
5354size_t dlmalloc_max_footprint(void) {
5355 return gm->max_footprint;
5356}
5357
5358size_t dlmalloc_footprint_limit(void) {
5359 size_t maf = gm->footprint_limit;
5360 return maf == 0 ? MAX_SIZE_T : maf;
5361}
5362
5363size_t dlmalloc_set_footprint_limit(size_t bytes) {
5364 size_t result; /* invert sense of 0 */
5365 if (bytes == 0)
5366 result = granularity_align(1); /* Use minimal size */
5367  else if (bytes == MAX_SIZE_T)
5368 result = 0; /* disable */
5369 else
5370 result = granularity_align(bytes);
5371 return gm->footprint_limit = result;
5372}
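/*
  Usage note (illustrative): because the sense of 0 is inverted,
  dlmalloc_set_footprint_limit(MAX_SIZE_T) removes any limit, while
  dlmalloc_set_footprint_limit(0) requests the smallest representable
  limit of one allocation granule. The value actually stored (rounded
  up to the granularity) is returned.
*/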
5373
5374#if !NO_MALLINFO
5375struct mallinfo dlmallinfo(void) {
5376 return internal_mallinfo(gm);
5377}
5378#endif /* NO_MALLINFO */
5379
5380#if !NO_MALLOC_STATS
5381void dlmalloc_stats() {
5382 internal_malloc_stats(gm);
5383}
5384#endif /* NO_MALLOC_STATS */
5385
5386int dlmallopt(int param_number, int value) {
5387 return change_mparam(param_number, value);
5388}
5389
5390size_t dlmalloc_usable_size(void* mem) {
5391 if (mem != 0) {
5392 mchunkptr p = mem2chunk(mem);
5393 if (is_inuse(p))
5394 return chunksize(p) - overhead_for(p);
5395 }
5396 return 0;
5397}
5398
5399#endif /* !ONLY_MSPACES */
5400
5401/* ----------------------------- user mspaces ---------------------------- */
5402
5403#if MSPACES
5404
5405static mstate init_user_mstate(char* tbase, size_t tsize) {
5406 size_t msize = pad_request(sizeof(struct malloc_state));
5407 mchunkptr mn;
5408 mchunkptr msp = align_as_chunk(tbase);
5409 mstate m = (mstate)(chunk2mem(msp));
5410 memset(m, 0, msize);
5411 (void)INITIAL_LOCK(&m->mutex);
5412 msp->head = (msize|INUSE_BITS);
5413 m->seg.base = m->least_addr = tbase;
5414 m->seg.size = m->footprint = m->max_footprint = tsize;
5415 m->magic = mparams.magic;
5416 m->release_checks = MAX_RELEASE_CHECK_RATE;
5417 m->mflags = mparams.default_mflags;
5418 m->extp = 0;
5419 m->exts = 0;
5420 disable_contiguous(m);
5421 init_bins(m);
5422 mn = next_chunk(mem2chunk(m));
5423 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5424 check_top_chunk(m, m->top);
5425 return m;
5426}
5427
5428mspace create_mspace(size_t capacity, int locked) {
5429 mstate m = 0;
5430 size_t msize;
5431 ensure_initialization();
5432 msize = pad_request(sizeof(struct malloc_state));
5433 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5434 size_t rs = ((capacity == 0)? mparams.granularity :
5435 (capacity + TOP_FOOT_SIZE + msize));
5436 size_t tsize = granularity_align(rs);
5437 char* tbase = (char*)(CALL_MMAP(tsize));
5438 if (tbase != CMFAIL) {
5439 m = init_user_mstate(tbase, tsize);
5440 m->seg.sflags = USE_MMAP_BIT;
5441 set_lock(m, locked);
5442 }
5443 }
5444 return (mspace)m;
5445}
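/*
  Illustrative sketch (assumes MSPACES is defined): an mspace is an
  independent heap whose chunks must be released through the mspace_*
  routines, and whose space can be reclaimed all at once:

    void demo_mspace(void) {
      mspace msp = create_mspace(0, 0);   // default capacity, no locking
      if (msp != 0) {
        void* p = mspace_malloc(msp, 128);
        if (p != 0)
          mspace_free(msp, p);
        (void)destroy_mspace(msp);        // releases all of its segments
      }
    }
*/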
5446
5447mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5448 mstate m = 0;
5449 size_t msize;
5450 ensure_initialization();
5451 msize = pad_request(sizeof(struct malloc_state));
5452 if (capacity > msize + TOP_FOOT_SIZE &&
5453 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5454 m = init_user_mstate((char*)base, capacity);
5455 m->seg.sflags = EXTERN_BIT;
5456 set_lock(m, locked);
5457 }
5458 return (mspace)m;
5459}
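/*
  Illustrative sketch of create_mspace_with_base (the caller owns the
  backing storage, which must outlive the mspace and is not released
  by destroy_mspace):

    static char arena[1 << 20];           // hypothetical 1MB region

    void demo_with_base(void) {
      mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
      if (msp != 0) {
        void* p = mspace_malloc(msp, 256);
        if (p != 0)
          mspace_free(msp, p);
        (void)destroy_mspace(msp);
      }
    }
*/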
5460
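/*
  Descriptive note (based on the code below): "tracking" large chunks
  means serving requests above the mmap threshold from the mspace's own
  segments rather than from separate mmapped regions, so that they
  count toward the mspace footprint and are reclaimed by destroy_mspace.
  Enabling tracking therefore disables per-chunk mmap for this mspace;
  the return value reports whether tracking was already in effect.
*/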
5461int mspace_track_large_chunks(mspace msp, int enable) {
5462 int ret = 0;
5463 mstate ms = (mstate)msp;
5464 if (!PREACTION(ms)) {
5465 if (!use_mmap(ms)) {
5466 ret = 1;
5467 }
5468 if (!enable) {
5469 enable_mmap(ms);
5470 } else {
5471 disable_mmap(ms);
5472 }
5473 POSTACTION(ms);
5474 }
5475 return ret;
5476}
5477
5478size_t destroy_mspace(mspace msp) {
5479 size_t freed = 0;
5480 mstate ms = (mstate)msp;
5481 if (ok_magic(ms)) {
5482 msegmentptr sp = &ms->seg;
5483 (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5484 while (sp != 0) {
5485 char* base = sp->base;
5486 size_t size = sp->size;
5487 flag_t flag = sp->sflags;
5488 (void)base; /* placate people compiling -Wunused-variable */
5489 sp = sp->next;
5490 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5491 CALL_MUNMAP(base, size) == 0)
5492 freed += size;
5493 }
5494 }
5495 else {
5496 USAGE_ERROR_ACTION(ms,ms);
5497 }
5498 return freed;
5499}
5500
5501/*
5502 mspace versions of routines are near-clones of the global
5503 versions. This is not so nice but better than the alternatives.
5504*/
5505
5506void* mspace_malloc(mspace msp, size_t bytes) {
5507 mstate ms = (mstate)msp;
5508 if (!ok_magic(ms)) {
5509 USAGE_ERROR_ACTION(ms,ms);
5510 return 0;
5511 }
5512 if (!PREACTION(ms)) {
5513 void* mem;
5514 size_t nb;
5515 if (bytes <= MAX_SMALL_REQUEST) {
5516 bindex_t idx;
5517 binmap_t smallbits;
5518 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5519 idx = small_index(nb);
5520 smallbits = ms->smallmap >> idx;
5521
5522 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5523 mchunkptr b, p;
5524 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5525 b = smallbin_at(ms, idx);
5526 p = b->fd;
5527 assert(chunksize(p) == small_index2size(idx));
5528 unlink_first_small_chunk(ms, b, p, idx);
5529 set_inuse_and_pinuse(ms, p, small_index2size(idx));
5530 mem = chunk2mem(p);
5531 check_malloced_chunk(ms, mem, nb);
5532 goto postaction;
5533 }
5534
5535 else if (nb > ms->dvsize) {
5536 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5537 mchunkptr b, p, r;
5538 size_t rsize;
5539 bindex_t i;
5540 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5541 binmap_t leastbit = least_bit(leftbits);
5542 compute_bit2idx(leastbit, i);
5543 b = smallbin_at(ms, i);
5544 p = b->fd;
5545 assert(chunksize(p) == small_index2size(i));
5546 unlink_first_small_chunk(ms, b, p, i);
5547 rsize = small_index2size(i) - nb;
5548 /* Fit here cannot be remainderless if 4byte sizes */
5549 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5550 set_inuse_and_pinuse(ms, p, small_index2size(i));
5551 else {
5552 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5553 r = chunk_plus_offset(p, nb);
5554 set_size_and_pinuse_of_free_chunk(r, rsize);
5555 replace_dv(ms, r, rsize);
5556 }
5557 mem = chunk2mem(p);
5558 check_malloced_chunk(ms, mem, nb);
5559 goto postaction;
5560 }
5561
5562 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5563 check_malloced_chunk(ms, mem, nb);
5564 goto postaction;
5565 }
5566 }
5567 }
5568 else if (bytes >= MAX_REQUEST)
5569 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5570 else {
5571 nb = pad_request(bytes);
5572 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5573 check_malloced_chunk(ms, mem, nb);
5574 goto postaction;
5575 }
5576 }
5577
5578 if (nb <= ms->dvsize) {
5579 size_t rsize = ms->dvsize - nb;
5580 mchunkptr p = ms->dv;
5581 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5582 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5583 ms->dvsize = rsize;
5584 set_size_and_pinuse_of_free_chunk(r, rsize);
5585 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5586 }
5587 else { /* exhaust dv */
5588 size_t dvs = ms->dvsize;
5589 ms->dvsize = 0;
5590 ms->dv = 0;
5591 set_inuse_and_pinuse(ms, p, dvs);
5592 }
5593 mem = chunk2mem(p);
5594 check_malloced_chunk(ms, mem, nb);
5595 goto postaction;
5596 }
5597
5598 else if (nb < ms->topsize) { /* Split top */
5599 size_t rsize = ms->topsize -= nb;
5600 mchunkptr p = ms->top;
5601 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5602 r->head = rsize | PINUSE_BIT;
5603 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5604 mem = chunk2mem(p);
5605 check_top_chunk(ms, ms->top);
5606 check_malloced_chunk(ms, mem, nb);
5607 goto postaction;
5608 }
5609
5610 mem = sys_alloc(ms, nb);
5611
5612 postaction:
5613 POSTACTION(ms);
5614 return mem;
5615 }
5616
5617 return 0;
5618}
5619
5620void mspace_free(mspace msp, void* mem) {
5621 if (mem != 0) {
5622 mchunkptr p = mem2chunk(mem);
5623#if FOOTERS
5624 mstate fm = get_mstate_for(p);
5625 (void)msp; /* placate people compiling -Wunused */
5626#else /* FOOTERS */
5627 mstate fm = (mstate)msp;
5628#endif /* FOOTERS */
5629 if (!ok_magic(fm)) {
5630 USAGE_ERROR_ACTION(fm, p);
5631 return;
5632 }
5633 if (!PREACTION(fm)) {
5634 check_inuse_chunk(fm, p);
5635 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5636 size_t psize = chunksize(p);
5637 mchunkptr next = chunk_plus_offset(p, psize);
5638 if (!pinuse(p)) {
5639 size_t prevsize = p->prev_foot;
5640 if (is_mmapped(p)) {
5641 psize += prevsize + MMAP_FOOT_PAD;
5642 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5643 fm->footprint -= psize;
5644 goto postaction;
5645 }
5646 else {
5647 mchunkptr prev = chunk_minus_offset(p, prevsize);
5648 psize += prevsize;
5649 p = prev;
5650 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5651 if (p != fm->dv) {
5652 unlink_chunk(fm, p, prevsize);
5653 }
5654 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5655 fm->dvsize = psize;
5656 set_free_with_pinuse(p, psize, next);
5657 goto postaction;
5658 }
5659 }
5660 else
5661 goto erroraction;
5662 }
5663 }
5664
5665 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5666 if (!cinuse(next)) { /* consolidate forward */
5667 if (next == fm->top) {
5668 size_t tsize = fm->topsize += psize;
5669 fm->top = p;
5670 p->head = tsize | PINUSE_BIT;
5671 if (p == fm->dv) {
5672 fm->dv = 0;
5673 fm->dvsize = 0;
5674 }
5675 if (should_trim(fm, tsize))
5676 sys_trim(fm, 0);
5677 goto postaction;
5678 }
5679 else if (next == fm->dv) {
5680 size_t dsize = fm->dvsize += psize;
5681 fm->dv = p;
5682 set_size_and_pinuse_of_free_chunk(p, dsize);
5683 goto postaction;
5684 }
5685 else {
5686 size_t nsize = chunksize(next);
5687 psize += nsize;
5688 unlink_chunk(fm, next, nsize);
5689 set_size_and_pinuse_of_free_chunk(p, psize);
5690 if (p == fm->dv) {
5691 fm->dvsize = psize;
5692 goto postaction;
5693 }
5694 }
5695 }
5696 else
5697 set_free_with_pinuse(p, psize, next);
5698
5699 if (is_small(psize)) {
5700 insert_small_chunk(fm, p, psize);
5701 check_free_chunk(fm, p);
5702 }
5703 else {
5704 tchunkptr tp = (tchunkptr)p;
5705 insert_large_chunk(fm, tp, psize);
5706 check_free_chunk(fm, p);
5707 if (--fm->release_checks == 0)
5708 release_unused_segments(fm);
5709 }
5710 goto postaction;
5711 }
5712 }
5713 erroraction:
5714 USAGE_ERROR_ACTION(fm, p);
5715 postaction:
5716 POSTACTION(fm);
5717 }
5718 }
5719}
5720
5721void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5722 void* mem;
5723 size_t req = 0;
5724 mstate ms = (mstate)msp;
5725 if (!ok_magic(ms)) {
5726 USAGE_ERROR_ACTION(ms,ms);
5727 return 0;
5728 }
5729 if (n_elements != 0) {
5730 req = n_elements * elem_size;
5731 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5732 (req / n_elements != elem_size))
5733 req = MAX_SIZE_T; /* force downstream failure on overflow */
5734 }
5735 mem = internal_malloc(ms, req);
5736 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5737 memset(mem, 0, req);
5738 return mem;
5739}
5740
5741void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5742 void* mem = 0;
5743 if (oldmem == 0) {
5744 mem = mspace_malloc(msp, bytes);
5745 }
5746 else if (bytes >= MAX_REQUEST) {
5747 MALLOC_FAILURE_ACTION;
5748 }
5749#ifdef REALLOC_ZERO_BYTES_FREES
5750 else if (bytes == 0) {
5751 mspace_free(msp, oldmem);
5752 }
5753#endif /* REALLOC_ZERO_BYTES_FREES */
5754 else {
5755 size_t nb = request2size(bytes);
5756 mchunkptr oldp = mem2chunk(oldmem);
5757#if ! FOOTERS
5758 mstate m = (mstate)msp;
5759#else /* FOOTERS */
5760 mstate m = get_mstate_for(oldp);
5761 if (!ok_magic(m)) {
5762 USAGE_ERROR_ACTION(m, oldmem);
5763 return 0;
5764 }
5765#endif /* FOOTERS */
5766 if (!PREACTION(m)) {
5767 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5768 POSTACTION(m);
5769 if (newp != 0) {
5770 check_inuse_chunk(m, newp);
5771 mem = chunk2mem(newp);
5772 }
5773 else {
5774 mem = mspace_malloc(m, bytes);
5775 if (mem != 0) {
5776 size_t oc = chunksize(oldp) - overhead_for(oldp);
5777 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5778 mspace_free(m, oldmem);
5779 }
5780 }
5781 }
5782 }
5783 return mem;
5784}
5785
5786void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5787 void* mem = 0;
5788 if (oldmem != 0) {
5789 if (bytes >= MAX_REQUEST) {
5790 MALLOC_FAILURE_ACTION;
5791 }
5792 else {
5793 size_t nb = request2size(bytes);
5794 mchunkptr oldp = mem2chunk(oldmem);
5795#if ! FOOTERS
5796 mstate m = (mstate)msp;
5797#else /* FOOTERS */
5798 mstate m = get_mstate_for(oldp);
5799 (void)msp; /* placate people compiling -Wunused */
5800 if (!ok_magic(m)) {
5801 USAGE_ERROR_ACTION(m, oldmem);
5802 return 0;
5803 }
5804#endif /* FOOTERS */
5805 if (!PREACTION(m)) {
5806 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5807 POSTACTION(m);
5808 if (newp == oldp) {
5809 check_inuse_chunk(m, newp);
5810 mem = oldmem;
5811 }
5812 }
5813 }
5814 }
5815 return mem;
5816}
5817
5818void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5819 mstate ms = (mstate)msp;
5820 if (!ok_magic(ms)) {
5821 USAGE_ERROR_ACTION(ms,ms);
5822 return 0;
5823 }
5824 if (alignment <= MALLOC_ALIGNMENT)
5825 return mspace_malloc(msp, bytes);
5826 return internal_memalign(ms, alignment, bytes);
5827}
5828
5829void** mspace_independent_calloc(mspace msp, size_t n_elements,
5830 size_t elem_size, void* chunks[]) {
5831 size_t sz = elem_size; /* serves as 1-element array */
5832 mstate ms = (mstate)msp;
5833 if (!ok_magic(ms)) {
5834 USAGE_ERROR_ACTION(ms,ms);
5835 return 0;
5836 }
5837 return ialloc(ms, n_elements, &sz, 3, chunks);
5838}
5839
5840void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5841 size_t sizes[], void* chunks[]) {
5842 mstate ms = (mstate)msp;
5843 if (!ok_magic(ms)) {
5844 USAGE_ERROR_ACTION(ms,ms);
5845 return 0;
5846 }
5847 return ialloc(ms, n_elements, sizes, 0, chunks);
5848}
5849
5850size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5851 return internal_bulk_free((mstate)msp, array, nelem);
5852}
5853
5854#if MALLOC_INSPECT_ALL
5855void mspace_inspect_all(mspace msp,
5856 void(*handler)(void *start,
5857 void *end,
5858 size_t used_bytes,
5859 void* callback_arg),
5860 void* arg) {
5861 mstate ms = (mstate)msp;
5862 if (ok_magic(ms)) {
5863 if (!PREACTION(ms)) {
5864 internal_inspect_all(ms, handler, arg);
5865 POSTACTION(ms);
5866 }
5867 }
5868 else {
5869 USAGE_ERROR_ACTION(ms,ms);
5870 }
5871}
5872#endif /* MALLOC_INSPECT_ALL */
5873
5874int mspace_trim(mspace msp, size_t pad) {
5875 int result = 0;
5876 mstate ms = (mstate)msp;
5877 if (ok_magic(ms)) {
5878 if (!PREACTION(ms)) {
5879 result = sys_trim(ms, pad);
5880 POSTACTION(ms);
5881 }
5882 }
5883 else {
5884 USAGE_ERROR_ACTION(ms,ms);
5885 }
5886 return result;
5887}
5888
5889#if !NO_MALLOC_STATS
5890void mspace_malloc_stats(mspace msp) {
5891 mstate ms = (mstate)msp;
5892 if (ok_magic(ms)) {
5893 internal_malloc_stats(ms);
5894 }
5895 else {
5896 USAGE_ERROR_ACTION(ms,ms);
5897 }
5898}
5899#endif /* NO_MALLOC_STATS */
5900
5901size_t mspace_footprint(mspace msp) {
5902 size_t result = 0;
5903 mstate ms = (mstate)msp;
5904 if (ok_magic(ms)) {
5905 result = ms->footprint;
5906 }
5907 else {
5908 USAGE_ERROR_ACTION(ms,ms);
5909 }
5910 return result;
5911}
5912
5913size_t mspace_max_footprint(mspace msp) {
5914 size_t result = 0;
5915 mstate ms = (mstate)msp;
5916 if (ok_magic(ms)) {
5917 result = ms->max_footprint;
5918 }
5919 else {
5920 USAGE_ERROR_ACTION(ms,ms);
5921 }
5922 return result;
5923}
5924
5925size_t mspace_footprint_limit(mspace msp) {
5926 size_t result = 0;
5927 mstate ms = (mstate)msp;
5928 if (ok_magic(ms)) {
5929 size_t maf = ms->footprint_limit;
5930 result = (maf == 0) ? MAX_SIZE_T : maf;
5931 }
5932 else {
5933 USAGE_ERROR_ACTION(ms,ms);
5934 }
5935 return result;
5936}
5937
5938size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5939 size_t result = 0;
5940 mstate ms = (mstate)msp;
5941 if (ok_magic(ms)) {
5942 if (bytes == 0)
5943 result = granularity_align(1); /* Use minimal size */
5944    else if (bytes == MAX_SIZE_T)
5945 result = 0; /* disable */
5946 else
5947 result = granularity_align(bytes);
5948 ms->footprint_limit = result;
5949 }
5950 else {
5951 USAGE_ERROR_ACTION(ms,ms);
5952 }
5953 return result;
5954}
5955
5956#if !NO_MALLINFO
5957struct mallinfo mspace_mallinfo(mspace msp) {
5958 mstate ms = (mstate)msp;
5959 if (!ok_magic(ms)) {
5960 USAGE_ERROR_ACTION(ms,ms);
5961 }
5962 return internal_mallinfo(ms);
5963}
5964#endif /* NO_MALLINFO */
5965
5966size_t mspace_usable_size(const void* mem) {
5967 if (mem != 0) {
5968 mchunkptr p = mem2chunk(mem);
5969 if (is_inuse(p))
5970 return chunksize(p) - overhead_for(p);
5971 }
5972 return 0;
5973}
5974
5975int mspace_mallopt(int param_number, int value) {
5976 return change_mparam(param_number, value);
5977}
5978
5979#endif /* MSPACES */
5980
5981
5982/* -------------------- Alternative MORECORE functions ------------------- */
5983
5984/*
5985 Guidelines for creating a custom version of MORECORE:
5986
5987 * For best performance, MORECORE should allocate in multiples of pagesize.
5988 * MORECORE may allocate more memory than requested. (Or even less,
5989 but this will usually result in a malloc failure.)
5990 * MORECORE must not allocate memory when given argument zero, but
5991 instead return one past the end address of memory from previous
5992 nonzero call.
5993 * For best performance, consecutive calls to MORECORE with positive
5994 arguments should return increasing addresses, indicating that
5995 space has been contiguously extended.
5996 * Even though consecutive calls to MORECORE need not return contiguous
5997 addresses, it must be OK for malloc'ed chunks to span multiple
5998 regions in those cases where they do happen to be contiguous.
5999 * MORECORE need not handle negative arguments -- it may instead
6000 just return MFAIL when given negative arguments.
6001 Negative arguments are always multiples of pagesize. MORECORE
6002 must not misinterpret negative args as large positive unsigned
6003 args. You can suppress all such calls from even occurring by defining
6004  MORECORE_CANNOT_TRIM.
6005
6006 As an example alternative MORECORE, here is a custom allocator
6007 kindly contributed for pre-OSX macOS. It uses virtually but not
6008 necessarily physically contiguous non-paged memory (locked in,
6009 present and won't get swapped out). You can use it by uncommenting
6010 this section, adding some #includes, and setting up the appropriate
6011 defines above:
6012
6013 #define MORECORE osMoreCore
6014
6015 There is also a shutdown routine that should somehow be called for
6016 cleanup upon program exit.
6017
6018 #define MAX_POOL_ENTRIES 100
6019 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
6020 static int next_os_pool;
6021 void *our_os_pools[MAX_POOL_ENTRIES];
6022
6023 void *osMoreCore(int size)
6024 {
6025 void *ptr = 0;
6026 static void *sbrk_top = 0;
6027
6028 if (size > 0)
6029 {
6030 if (size < MINIMUM_MORECORE_SIZE)
6031 size = MINIMUM_MORECORE_SIZE;
6032 if (CurrentExecutionLevel() == kTaskLevel)
6033 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6034 if (ptr == 0)
6035 {
6036 return (void *) MFAIL;
6037 }
6038 // save ptrs so they can be freed during cleanup
6039 our_os_pools[next_os_pool] = ptr;
6040 next_os_pool++;
6041 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6042 sbrk_top = (char *) ptr + size;
6043 return ptr;
6044 }
6045 else if (size < 0)
6046 {
6047 // we don't currently support shrink behavior
6048 return (void *) MFAIL;
6049 }
6050 else
6051 {
6052 return sbrk_top;
6053 }
6054 }
6055
6056 // cleanup any allocated memory pools
6057 // called as last thing before shutting down driver
6058
6059 void osCleanupMem(void)
6060 {
6061 void **ptr;
6062
6063 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6064 if (*ptr)
6065 {
6066 PoolDeallocate(*ptr);
6067 *ptr = 0;
6068 }
6069 }
6070
6071*/
6072
6073
6074/* -----------------------------------------------------------------------
6075History:
6076 v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
6077 * fix bad comparison in dlposix_memalign
6078 * don't reuse adjusted asize in sys_alloc
6079 * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6080 * reduce compiler warnings -- thanks to all who reported/suggested these
6081
6082 v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6083 * Always perform unlink checks unless INSECURE
6084 * Add posix_memalign.
6085 * Improve realloc to expand in more cases; expose realloc_in_place.
6086 Thanks to Peter Buhr for the suggestion.
6087 * Add footprint_limit, inspect_all, bulk_free. Thanks
6088 to Barry Hayes and others for the suggestions.
6089 * Internal refactorings to avoid calls while holding locks
6090 * Use non-reentrant locks by default. Thanks to Roland McGrath
6091 for the suggestion.
6092 * Small fixes to mspace_destroy, reset_on_error.
6093 * Various configuration extensions/changes. Thanks
6094 to all who contributed these.
6095
6096 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6097 * Update Creative Commons URL
6098
6099 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6100 * Use zeros instead of prev foot for is_mmapped
6101 * Add mspace_track_large_chunks; thanks to Jean Brouwers
6102 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6103 * Fix insufficient sys_alloc padding when using 16byte alignment
6104 * Fix bad error check in mspace_footprint
6105 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6106 * Reentrant spin locks; thanks to Earl Chew and others
6107 * Win32 improvements; thanks to Niall Douglas and Earl Chew
6108 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6109 * Extension hook in malloc_state
6110 * Various small adjustments to reduce warnings on some compilers
6111 * Various configuration extensions/changes for more platforms. Thanks
6112 to all who contributed these.
6113
6114 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6115 * Add max_footprint functions
6116 * Ensure all appropriate literals are size_t
6117 * Fix conditional compilation problem for some #define settings
6118 * Avoid concatenating segments with the one provided
6119 in create_mspace_with_base
6120 * Rename some variables to avoid compiler shadowing warnings
6121 * Use explicit lock initialization.
6122 * Better handling of sbrk interference.
6123 * Simplify and fix segment insertion, trimming and mspace_destroy
6124 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6125 * Thanks especially to Dennis Flanagan for help on these.
6126
6127 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6128 * Fix memalign brace error.
6129
6130 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6131 * Fix improper #endif nesting in C++
6132 * Add explicit casts needed for C++
6133
6134 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6135 * Use trees for large bins
6136 * Support mspaces
6137 * Use segments to unify sbrk-based and mmap-based system allocation,
6138 removing need for emulation on most platforms without sbrk.
6139 * Default safety checks
6140 * Optional footer checks. Thanks to William Robertson for the idea.
6141 * Internal code refactoring
6142 * Incorporate suggestions and platform-specific changes.
6143 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6144 Aaron Bachmann, Emery Berger, and others.
6145 * Speed up non-fastbin processing enough to remove fastbins.
6146 * Remove useless cfree() to avoid conflicts with other apps.
6147 * Remove internal memcpy, memset. Compilers handle builtins better.
6148 * Remove some options that no one ever used and rename others.
6149
6150 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6151 * Fix malloc_state bitmap array misdeclaration
6152
6153 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6154 * Allow tuning of FIRST_SORTED_BIN_SIZE
6155 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6156 * Better detection and support for non-contiguousness of MORECORE.
6157 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6158      * Bypass most of malloc if no frees. Thanks to Emery Berger.
6159      * Fix freeing of old top non-contiguous chunk in sysmalloc.
6160 * Raised default trim and map thresholds to 256K.
6161 * Fix mmap-related #defines. Thanks to Lubos Lunak.
6162 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6163 * Branch-free bin calculation
6164 * Default trim and mmap thresholds now 256K.
6165
6166 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6167 * Introduce independent_comalloc and independent_calloc.
6168 Thanks to Michael Pachos for motivation and help.
6169 * Make optional .h file available
6170 * Allow > 2GB requests on 32bit systems.
6171 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6172 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6173 and Anonymous.
6174 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6175 helping test this.)
6176 * memalign: check alignment arg
6177 * realloc: don't try to shift chunks backwards, since this
6178 leads to more fragmentation in some programs and doesn't
6179 seem to help in any others.
6180 * Collect all cases in malloc requiring system memory into sysmalloc
6181 * Use mmap as backup to sbrk
6182 * Place all internal state in malloc_state
6183 * Introduce fastbins (although similar to 2.5.1)
6184 * Many minor tunings and cosmetic improvements
6185 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6186 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6187 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6188 * Include errno.h to support default failure action.
6189
6190 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6191 * return null for negative arguments
6192 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6193 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6194 (e.g. WIN32 platforms)
6195 * Cleanup header file inclusion for WIN32 platforms
6196 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6197 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6198 memory allocation routines
6199 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6200 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6201 usage of 'assert' in non-WIN32 code
6202 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6203 avoid infinite loop
6204 * Always call 'fREe()' rather than 'free()'
6205
6206 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6207 * Fixed ordering problem with boundary-stamping
6208
6209 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6210 * Added pvalloc, as recommended by H.J. Liu
6211 * Added 64bit pointer support mainly from Wolfram Gloger
6212 * Added anonymously donated WIN32 sbrk emulation
6213 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6214 * malloc_extend_top: fix mask error that caused wastage after
6215 foreign sbrks
6216 * Add linux mremap support code from HJ Liu
6217
6218 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6219 * Integrated most documentation with the code.
6220 * Add support for mmap, with help from
6221 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6222 * Use last_remainder in more cases.
6223 * Pack bins using idea from colin@nyx10.cs.du.edu
6224      * Use ordered bins instead of best-fit threshold
6225 * Eliminate block-local decls to simplify tracing and debugging.
6226 * Support another case of realloc via move into top
6227      * Fix error occurring when initial sbrk_base not word-aligned.
6228 * Rely on page size for units instead of SBRK_UNIT to
6229 avoid surprises about sbrk alignment conventions.
6230 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6231 (raymond@es.ele.tue.nl) for the suggestion.
6232 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6233 * More precautions for cases where other routines call sbrk,
6234 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6235 * Added macros etc., allowing use in linux libc from
6236 H.J. Lu (hjl@gnu.ai.mit.edu)
6237 * Inverted this history list
6238
6239 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6240 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6241 * Removed all preallocation code since under current scheme
6242 the work required to undo bad preallocations exceeds
6243 the work saved in good cases for most test programs.
6244 * No longer use return list or unconsolidated bins since
6245 no scheme using them consistently outperforms those that don't
6246 given above changes.
6247 * Use best fit for very large chunks to prevent some worst-cases.
6248 * Added some support for debugging
6249
6250 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6251 * Removed footers when chunks are in use. Thanks to
6252 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6253
6254 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6255 * Added malloc_trim, with help from Wolfram Gloger
6256 (wmglo@Dent.MED.Uni-Muenchen.DE).
6257
6258 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6259
6260 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6261 * realloc: try to expand in both directions
6262 * malloc: swap order of clean-bin strategy;
6263 * realloc: only conditionally expand backwards
6264 * Try not to scavenge used bins
6265 * Use bin counts as a guide to preallocation
6266 * Occasionally bin return list chunks in first scan
6267 * Add a few optimizations from colin@nyx10.cs.du.edu
6268
6269 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6270 * faster bin computation & slightly different binning
6271 * merged all consolidations to one part of malloc proper
6272 (eliminating old malloc_find_space & malloc_clean_bin)
6273 * Scan 2 returns chunks (not just 1)
6274 * Propagate failure in realloc if malloc returns 0
6275 * Add stuff to allow compilation on non-ANSI compilers
6276 from kpv@research.att.com
6277
6278 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6279 * removed potential for odd address access in prev_chunk
6280 * removed dependency on getpagesize.h
6281 * misc cosmetics and a bit more internal documentation
6282 * anticosmetics: mangled names in macros to evade debugger strangeness
6283 * tested on sparc, hp-700, dec-mips, rs6000
6284 with gcc & native cc (hp, dec only) allowing
6285 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6286
6287 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6288 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6289 structure of old version, but most details differ.)
6290
6291*/