malloc.c - vx32 - Local 9vx git repository for patches.
 (HTM) git clone git://r-36.net/vx32
       ---
       malloc.c (183203B)
       ---
            1 /*
            2   This is a version (aka dlmalloc) of malloc/free/realloc written by
            3   Doug Lea and released to the public domain.  Use, modify, and
            4   redistribute this code without permission or acknowledgement in any
            5   way you wish.  Send questions, comments, complaints, performance
            6   data, etc to dl@cs.oswego.edu
            7 
            8 * VERSION 2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
            9 
           10    Note: There may be an updated version of this malloc obtainable at
           11            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
           12          Check before installing!
           13 
           14 * Quickstart
           15 
           16   This library is all in one file to simplify the most common usage:
           17   ftp it, compile it (-O), and link it into another program. All
           18   of the compile-time options default to reasonable values for use on
           19   most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
           20   You might later want to step through various compile-time and dynamic
           21   tuning options.
           22 
           23   For convenience, an include file for code using this malloc is at:
           24      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h
           25   You don't really need this .h file unless you call functions not
           26   defined in your system include files.  The .h file contains only the
           27   excerpts from this file needed for using this malloc on ANSI C/C++
           28   systems, so long as you haven't changed compile-time options about
           29   naming and tuning parameters.  If you do, then you can create your
           30   own malloc.h that does include all settings by cutting at the point
           31   indicated below.
           32 
           33 * Why use this malloc?
           34 
           35   This is not the fastest, most space-conserving, most portable, or
            36   most tunable malloc ever written. However, it is among the fastest
           37   while also being among the most space-conserving, portable and tunable.
           38   Consistent balance across these factors results in a good general-purpose
           39   allocator for malloc-intensive programs.
           40 
           41   The main properties of the algorithms are:
           42   * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
           43     with ties normally decided via FIFO (i.e. least recently used).
           44   * For small (<= 64 bytes by default) requests, it is a caching
           45     allocator, that maintains pools of quickly recycled chunks.
           46   * In between, and for combinations of large and small requests, it does
           47     the best it can trying to meet both goals at once.
           48   * For very large requests (>= 128KB by default), it relies on system
           49     memory mapping facilities, if supported.
           50 
           51   For a longer but slightly out of date high-level description, see
           52      http://gee.cs.oswego.edu/dl/html/malloc.html
           53 
            54   Your C library may already contain a malloc based on some version of
            55   this malloc (for example, in Linux). You might still want to use the
            56   one in this file in order to
           57   customize settings or to avoid overheads associated with library
           58   versions.
           59 
           60 * Contents, described in more detail in "description of public routines" below.
           61 
           62   Standard (ANSI/SVID/...)  functions:
           63     malloc(size_t n);
           64     calloc(size_t n_elements, size_t element_size);
           65     free(Void_t* p);
           66     realloc(Void_t* p, size_t n);
           67     memalign(size_t alignment, size_t n);
           68     valloc(size_t n);
           69     mallinfo()
           70     mallopt(int parameter_number, int parameter_value)
           71 
           72   Additional functions:
           73     independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
           74     independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
           75     pvalloc(size_t n);
           76     cfree(Void_t* p);
           77     malloc_trim(size_t pad);
           78     malloc_usable_size(Void_t* p);
           79     malloc_stats();
           80 
           81 * Vital statistics:
           82 
           83   Supported pointer representation:       4 or 8 bytes
           84   Supported size_t  representation:       4 or 8 bytes 
           85        Note that size_t is allowed to be 4 bytes even if pointers are 8.
           86        You can adjust this by defining INTERNAL_SIZE_T
           87 
           88   Alignment:                              2 * sizeof(size_t) (default)
           89        (i.e., 8 byte alignment with 4byte size_t). This suffices for
           90        nearly all current machines and C compilers. However, you can
           91        define MALLOC_ALIGNMENT to be wider than this if necessary.
           92 
           93   Minimum overhead per allocated chunk:   4 or 8 bytes
           94        Each malloced chunk has a hidden word of overhead holding size
           95        and status information.
           96 
           97   Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
            98                           8-byte ptrs:  24/32 bytes (including 4/8 overhead)
           99 
          100        When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
          101        ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
          102        needed; 4 (8) for a trailing size field and 8 (16) bytes for
          103        free list pointers. Thus, the minimum allocatable size is
          104        16/24/32 bytes.
          105 
          106        Even a request for zero bytes (i.e., malloc(0)) returns a
          107        pointer to something of the minimum allocatable size.
          108 
          109        The maximum overhead wastage (i.e., number of extra bytes
          110        allocated than were requested in malloc) is less than or equal
          111        to the minimum size, except for requests >= mmap_threshold that
          112        are serviced via mmap(), where the worst case wastage is 2 *
          113        sizeof(size_t) bytes plus the remainder from a system page (the
          114        minimal mmap unit); typically 4096 or 8192 bytes.
          115 
          116   Maximum allocated size:  4-byte size_t: 2^32 minus about two pages 
          117                            8-byte size_t: 2^64 minus about two pages
          118 
          119        It is assumed that (possibly signed) size_t values suffice to
          120        represent chunk sizes. `Possibly signed' is due to the fact
          121        that `size_t' may be defined on a system as either a signed or
          122        an unsigned type. The ISO C standard says that it must be
          123        unsigned, but a few systems are known not to adhere to this.
          124        Additionally, even when size_t is unsigned, sbrk (which is by
          125        default used to obtain memory from system) accepts signed
          126        arguments, and may not be able to handle size_t-wide arguments
          127        with negative sign bit.  Generally, values that would
          128        appear as negative after accounting for overhead and alignment
          129        are supported only via mmap(), which does not have this
          130        limitation.
          131 
          132        Requests for sizes outside the allowed range will perform an optional
          133        failure action and then return null. (Requests may also
           134        fail because a system is out of memory.)
          135 
          136   Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
          137 
          138        When USE_MALLOC_LOCK is defined, wrappers are created to
          139        surround every public call with either a pthread mutex or
          140        a win32 spinlock (depending on WIN32). This is not
          141        especially fast, and can be a major bottleneck.
          142        It is designed only to provide minimal protection
          143        in concurrent environments, and to provide a basis for
          144        extensions.  If you are using malloc in a concurrent program,
          145        you would be far better off obtaining ptmalloc, which is
          146        derived from a version of this malloc, and is well-tuned for
          147        concurrent programs. (See http://www.malloc.de) Note that
           148        even when USE_MALLOC_LOCK is defined, you can guarantee
          149        full thread-safety only if no threads acquire memory through 
          150        direct calls to MORECORE or other system-level allocators.
          151 
          152   Compliance: I believe it is compliant with the 1997 Single Unix Specification
          153        (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably 
          154        others as well.
          155 
          156 * Synopsis of compile-time options:
          157 
          158     People have reported using previous versions of this malloc on all
          159     versions of Unix, sometimes by tweaking some of the defines
          160     below. It has been tested most extensively on Solaris and
          161     Linux. It is also reported to work on WIN32 platforms.
          162     People also report using it in stand-alone embedded systems.
          163 
          164     The implementation is in straight, hand-tuned ANSI C.  It is not
          165     at all modular. (Sorry!)  It uses a lot of macros.  To be at all
          166     usable, this code should be compiled using an optimizing compiler
          167     (for example gcc -O3) that can simplify expressions and control
          168     paths. (FAQ: some macros import variables as arguments rather than
          169     declare locals because people reported that some debuggers
          170     otherwise get confused.)
          171 
          172     OPTION                     DEFAULT VALUE
          173 
          174     Compilation Environment options:
          175 
          176     __STD_C                    derived from C compiler defines
          177     WIN32                      NOT defined
          178     HAVE_MEMCPY                defined
          179     USE_MEMCPY                 1 if HAVE_MEMCPY is defined
          180     HAVE_MMAP                  defined as 1 
          181     MMAP_CLEARS                1
          182     HAVE_MREMAP                0 unless linux defined
          183     malloc_getpagesize         derived from system #includes, or 4096 if not
          184     HAVE_USR_INCLUDE_MALLOC_H  NOT defined
          185     LACKS_UNISTD_H             NOT defined unless WIN32
          186     LACKS_SYS_PARAM_H          NOT defined unless WIN32
          187     LACKS_SYS_MMAN_H           NOT defined unless WIN32
          188     LACKS_FCNTL_H              NOT defined
          189 
          190     Changing default word sizes:
          191 
          192     INTERNAL_SIZE_T            size_t
          193     MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)
          194     PTR_UINT                   unsigned long
          195     CHUNK_SIZE_T               unsigned long
          196 
          197     Configuration and functionality options:
          198 
          199     USE_DL_PREFIX              NOT defined
          200     USE_PUBLIC_MALLOC_WRAPPERS NOT defined
          201     USE_MALLOC_LOCK            NOT defined
          202     DEBUG                      NOT defined
          203     REALLOC_ZERO_BYTES_FREES   NOT defined
          204     MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
          205     TRIM_FASTBINS              0
          206     FIRST_SORTED_BIN_SIZE      512
          207 
          208     Options for customizing MORECORE:
          209 
          210     MORECORE                   sbrk
          211     MORECORE_CONTIGUOUS        1 
          212     MORECORE_CANNOT_TRIM       NOT defined
          213     MMAP_AS_MORECORE_SIZE      (1024 * 1024) 
          214 
          215     Tuning options that are also dynamically changeable via mallopt:
          216 
          217     DEFAULT_MXFAST             64
          218     DEFAULT_TRIM_THRESHOLD     256 * 1024
          219     DEFAULT_TOP_PAD            0
          220     DEFAULT_MMAP_THRESHOLD     256 * 1024
          221     DEFAULT_MMAP_MAX           65536
          222 
          223     There are several other #defined constants and macros that you
          224     probably don't want to touch unless you are extending or adapting malloc.
          225 */
          226 
          227 /*
          228   WIN32 sets up defaults for MS environment and compilers.
          229   Otherwise defaults are for unix.
          230 */
          231 
          232 /* #define WIN32 */
          233 
          234 #ifdef WIN32
          235 
          236 #define WIN32_LEAN_AND_MEAN
          237 #include <windows.h>
          238 
          239 /* Win32 doesn't supply or need the following headers */
          240 #define LACKS_UNISTD_H
          241 #define LACKS_SYS_PARAM_H
          242 #define LACKS_SYS_MMAN_H
          243 
          244 /* Use the supplied emulation of sbrk */
          245 #define MORECORE sbrk
          246 #define MORECORE_CONTIGUOUS 1
          247 #define MORECORE_FAILURE    ((void*)(-1))
          248 
          249 /* Use the supplied emulation of mmap and munmap */
          250 #define HAVE_MMAP 1
          251 #define MUNMAP_FAILURE  (-1)
          252 #define MMAP_CLEARS 1
          253 
          254 /* These values don't really matter in windows mmap emulation */
          255 #define MAP_PRIVATE 1
          256 #define MAP_ANONYMOUS 2
          257 #define PROT_READ 1
          258 #define PROT_WRITE 2
          259 
          260 /* Emulation functions defined at the end of this file */
          261 
          262 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
          263 #ifdef USE_MALLOC_LOCK
          264 static int slwait(int *sl);
          265 static int slrelease(int *sl);
          266 #endif
          267 
          268 static long getpagesize(void);
          269 static long getregionsize(void);
          270 static void *sbrk(long size);
          271 static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
          272 static long munmap(void *ptr, long size);
          273 
          274 static void vminfo (unsigned long*free, unsigned long*reserved, unsigned long*committed);
          275 static int cpuinfo (int whole, unsigned long*kernel, unsigned long*user);
          276 
          277 #endif
          278 
          279 /*
          280   __STD_C should be nonzero if using ANSI-standard C compiler, a C++
          281   compiler, or a C compiler sufficiently close to ANSI to get away
          282   with it.
          283 */
          284 
          285 #ifndef __STD_C
           286 #if defined(__STDC__) || defined(__cplusplus)
          287 #define __STD_C     1
          288 #else
          289 #define __STD_C     0
          290 #endif 
          291 #endif /*__STD_C*/
          292 
          293 
          294 /*
          295   Void_t* is the pointer type that malloc should say it returns
          296 */
          297 
          298 #ifndef Void_t
          299 #if (__STD_C || defined(WIN32))
          300 #define Void_t      void
          301 #else
          302 #define Void_t      char
          303 #endif
          304 #endif /*Void_t*/
          305 
          306 #if __STD_C
          307 #include <stddef.h>   /* for size_t */
          308 #else
          309 #include <sys/types.h>
          310 #endif
          311 
          312 #ifdef __cplusplus
          313 extern "C" {
          314 #endif
          315 
          316 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
          317 
          318 #define  LACKS_UNISTD_H
          319 
          320 #ifndef LACKS_UNISTD_H
          321 #include <unistd.h>
          322 #endif
          323 
          324 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
          325 
          326 #define  LACKS_SYS_PARAM_H
          327 
          328 
          329 #include <stdio.h>    /* needed for malloc_stats */
          330 #include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */
          331 
          332 
          333 /*
          334   Debugging:
          335 
          336   Because freed chunks may be overwritten with bookkeeping fields, this
          337   malloc will often die when freed memory is overwritten by user
          338   programs.  This can be very effective (albeit in an annoying way)
          339   in helping track down dangling pointers.
          340 
          341   If you compile with -DDEBUG, a number of assertion checks are
          342   enabled that will catch more memory errors. You probably won't be
          343   able to make much sense of the actual assertion errors, but they
          344   should help you locate incorrectly overwritten memory.  The
          345   checking is fairly extensive, and will slow down execution
          346   noticeably. Calling malloc_stats or mallinfo with DEBUG set will
          347   attempt to check every non-mmapped allocated and free chunk in the
           348   course of computing the summaries. (By nature, mmapped regions
          349   cannot be checked very much automatically.)
          350 
          351   Setting DEBUG may also be helpful if you are trying to modify
          352   this code. The assertions in the check routines spell out in more
          353   detail the assumptions and invariants underlying the algorithms.
          354 
          355   Setting DEBUG does NOT provide an automated mechanism for checking
          356   that all accesses to malloced memory stay within their
          357   bounds. However, there are several add-ons and adaptations of this
          358   or other mallocs available that do this.
          359 */
          360 
          361 #if DEBUG
          362 #include <assert.h>
          363 #else
          364 #define assert(x) ((void)0)
          365 #endif
          366 
          367 /*
          368   The unsigned integer type used for comparing any two chunk sizes.
          369   This should be at least as wide as size_t, but should not be signed.
          370 */
          371 
          372 #ifndef CHUNK_SIZE_T
          373 #define CHUNK_SIZE_T unsigned long
          374 #endif
          375 
          376 /* 
           377   The unsigned integer type used to hold addresses when they are
          378   manipulated as integers. Except that it is not defined on all
          379   systems, intptr_t would suffice.
          380 */
          381 #ifndef PTR_UINT
          382 #define PTR_UINT unsigned long
          383 #endif
          384 
          385 
          386 /*
          387   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
          388   of chunk sizes.
          389 
          390   The default version is the same as size_t.
          391 
          392   While not strictly necessary, it is best to define this as an
          393   unsigned type, even if size_t is a signed type. This may avoid some
          394   artificial size limitations on some systems.
          395 
          396   On a 64-bit machine, you may be able to reduce malloc overhead by
          397   defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
          398   expense of not being able to handle more than 2^32 of malloced
          399   space. If this limitation is acceptable, you are encouraged to set
          400   this unless you are on a platform requiring 16byte alignments. In
          401   this case the alignment requirements turn out to negate any
          402   potential advantages of decreasing size_t word size.
          403 
          404   Implementors: Beware of the possible combinations of:
          405      - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
          406        and might be the same width as int or as long
          407      - size_t might have different width and signedness as INTERNAL_SIZE_T
          408      - int and long might be 32 or 64 bits, and might be the same width
          409   To deal with this, most comparisons and difference computations
          410   among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being
          411   aware of the fact that casting an unsigned int to a wider long does
          412   not sign-extend. (This also makes checking for negative numbers
          413   awkward.) Some of these casts result in harmless compiler warnings
          414   on some systems.
          415 */
          416 
          417 #ifndef INTERNAL_SIZE_T
          418 #define INTERNAL_SIZE_T size_t
          419 #endif
          420 
          421 /* The corresponding word size */
          422 #define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
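
               /*
                 An illustration of the cast discipline described above (not part of
                 the configuration): size comparisons in this file are typically
                 written along these lines, with both INTERNAL_SIZE_T operands
                 widened to the unsigned CHUNK_SIZE_T first (nb and MINSIZE are
                 internal names used later in this file):

                   if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
                     // safe to split off a remainder of size - nb bytes
                   }

                 A value whose sign bit happens to be set is thus treated as a huge
                 unsigned number rather than as a negative one.
               */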
          423 
          424 
          425 
          426 /*
          427   MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
          428   It must be a power of two at least 2 * SIZE_SZ, even on machines
          429   for which smaller alignments would suffice. It may be defined as
          430   larger than this though. Note however that code and data structures
          431   are optimized for the case of 8-byte alignment.
          432 */
          433 
          434 
          435 #ifndef MALLOC_ALIGNMENT
          436 #define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
          437 #endif
          438 
          439 /* The corresponding bit mask value */
          440 #define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
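
               /*
                 Illustration only: rounding a request up to a multiple of
                 MALLOC_ALIGNMENT uses the usual mask trick, which is also at the
                 heart of the internal request2size computation further below.  The
                 helper name here is invented:

                   size_t round_up_to_alignment(size_t req) {
                     return (req + MALLOC_ALIGN_MASK) & ~(size_t)MALLOC_ALIGN_MASK;
                   }

                 With the default 8-byte alignment, a request of 13 rounds up to 16.
               */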
          441 
          442 
          443 
          444 /*
          445   REALLOC_ZERO_BYTES_FREES should be set if a call to
          446   realloc with zero bytes should be the same as a call to free.
          447   Some people think it should. Otherwise, since this malloc
          448   returns a unique pointer for malloc(0), so does realloc(p, 0).
          449 */
          450 
          451 /*   #define REALLOC_ZERO_BYTES_FREES */
          452 
          453 /*
          454   TRIM_FASTBINS controls whether free() of a very small chunk can
          455   immediately lead to trimming. Setting to true (1) can reduce memory
          456   footprint, but will almost always slow down programs that use a lot
          457   of small chunks.
          458 
          459   Define this only if you are willing to give up some speed to more
          460   aggressively reduce system-level memory footprint when releasing
          461   memory in programs that use many small chunks.  You can get
          462   essentially the same effect by setting MXFAST to 0, but this can
          463   lead to even greater slowdowns in programs using many small chunks.
           464   TRIM_FASTBINS is an in-between compile-time option that disables
          465   only those chunks bordering topmost memory from being placed in
          466   fastbins.
          467 */
          468 
          469 #ifndef TRIM_FASTBINS
          470 #define TRIM_FASTBINS  0
          471 #endif
          472 
          473 
          474 /*
          475   USE_DL_PREFIX will prefix all public routines with the string 'dl'.
          476   This is necessary when you only want to use this malloc in one part 
          477   of a program, using your regular system malloc elsewhere.
          478 */
          479 
          480 /* #define USE_DL_PREFIX */
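
               /*
                 Usage sketch: with -DUSE_DL_PREFIX this allocator can coexist with
                 the system malloc in one program, e.g.

                   void* a = dlmalloc(100);   // served by this file
                   void* b = malloc(100);     // served by the system allocator
                   dlfree(a);
                   free(b);

                 Allocating with one allocator and freeing with the other is, of
                 course, an error.
               */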
          481 
          482 
          483 /*
          484   USE_MALLOC_LOCK causes wrapper functions to surround each
          485   callable routine with pthread mutex lock/unlock.
          486 
          487   USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
          488 */
          489 
          490 
          491 /* #define USE_MALLOC_LOCK */
          492 
          493 
          494 /*
          495   If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
          496   actually a wrapper function that first calls MALLOC_PREACTION, then
          497   calls the internal routine, and follows it with
          498   MALLOC_POSTACTION. This is needed for locking, but you can also use
          499   this, without USE_MALLOC_LOCK, for purposes of interception,
          500   instrumentation, etc. It is a sad fact that using wrappers often
          501   noticeably degrades performance of malloc-intensive programs.
          502 */
          503 
          504 #ifdef USE_MALLOC_LOCK
          505 #define USE_PUBLIC_MALLOC_WRAPPERS
          506 #else
          507 /* #define USE_PUBLIC_MALLOC_WRAPPERS */
          508 #endif
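
               /*
                 For orientation, a wrapper produced under USE_PUBLIC_MALLOC_WRAPPERS
                 has roughly the following shape (the real definitions appear later
                 in this file; this is only a sketch):

                   Void_t* public_mALLOc(size_t bytes) {
                     Void_t* m;
                     if (MALLOC_PREACTION != 0)
                       return 0;                // could not acquire the lock
                     m = mALLOc(bytes);
                     if (MALLOC_POSTACTION != 0) {
                       // nothing useful can be done if releasing fails
                     }
                     return m;
                   }

                 MALLOC_PREACTION and MALLOC_POSTACTION default to lock/unlock
                 operations when USE_MALLOC_LOCK is defined, and to no-ops otherwise.
               */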
          509 
          510 
          511 /* 
          512    Two-phase name translation.
          513    All of the actual routines are given mangled names.
          514    When wrappers are used, they become the public callable versions.
          515    When DL_PREFIX is used, the callable names are prefixed.
          516 */
          517 
          518 #ifndef USE_PUBLIC_MALLOC_WRAPPERS
          519 #define cALLOc      public_cALLOc
          520 #define fREe        public_fREe
          521 #define cFREe       public_cFREe
          522 #define mALLOc      public_mALLOc
          523 #define mEMALIGn    public_mEMALIGn
          524 #define rEALLOc     public_rEALLOc
          525 #define vALLOc      public_vALLOc
          526 #define pVALLOc     public_pVALLOc
          527 #define mALLINFo    public_mALLINFo
          528 #define mALLOPt     public_mALLOPt
          529 #define mTRIm       public_mTRIm
          530 #define mSTATs      public_mSTATs
          531 #define mUSABLe     public_mUSABLe
          532 #define iCALLOc     public_iCALLOc
          533 #define iCOMALLOc   public_iCOMALLOc
          534 #endif
          535 
          536 #ifdef USE_DL_PREFIX
          537 #define public_cALLOc    dlcalloc
          538 #define public_fREe      dlfree
          539 #define public_cFREe     dlcfree
          540 #define public_mALLOc    dlmalloc
          541 #define public_mEMALIGn  dlmemalign
          542 #define public_rEALLOc   dlrealloc
          543 #define public_vALLOc    dlvalloc
          544 #define public_pVALLOc   dlpvalloc
          545 #define public_mALLINFo  dlmallinfo
          546 #define public_mALLOPt   dlmallopt
          547 #define public_mTRIm     dlmalloc_trim
          548 #define public_mSTATs    dlmalloc_stats
          549 #define public_mUSABLe   dlmalloc_usable_size
          550 #define public_iCALLOc   dlindependent_calloc
          551 #define public_iCOMALLOc dlindependent_comalloc
          552 #else /* USE_DL_PREFIX */
          553 #define public_cALLOc    calloc
          554 #define public_fREe      free
          555 #define public_cFREe     cfree
          556 #define public_mALLOc    malloc
          557 #define public_mEMALIGn  memalign
          558 #define public_rEALLOc   realloc
          559 #define public_vALLOc    valloc
          560 #define public_pVALLOc   pvalloc
          561 #define public_mALLINFo  mallinfo
          562 #define public_mALLOPt   mallopt
          563 #define public_mTRIm     malloc_trim
          564 #define public_mSTATs    malloc_stats
          565 #define public_mUSABLe   malloc_usable_size
          566 #define public_iCALLOc   independent_calloc
          567 #define public_iCOMALLOc independent_comalloc
          568 #endif /* USE_DL_PREFIX */
          569 
          570 
          571 /*
          572   HAVE_MEMCPY should be defined if you are not otherwise using
          573   ANSI STD C, but still have memcpy and memset in your C library
          574   and want to use them in calloc and realloc. Otherwise simple
          575   macro versions are defined below.
          576 
          577   USE_MEMCPY should be defined as 1 if you actually want to
          578   have memset and memcpy called. People report that the macro
          579   versions are faster than libc versions on some systems.
          580   
          581   Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
          582   (of <= 36 bytes) are manually unrolled in realloc and calloc.
          583 */
          584 
          585 #define HAVE_MEMCPY
          586 
          587 #ifndef USE_MEMCPY
          588 #ifdef HAVE_MEMCPY
          589 #define USE_MEMCPY 1
          590 #else
          591 #define USE_MEMCPY 0
          592 #endif
          593 #endif
          594 
          595 
          596 #if (__STD_C || defined(HAVE_MEMCPY))
          597 
          598 #ifdef WIN32
          599 /* On Win32 memset and memcpy are already declared in windows.h */
          600 #else
          601 #if __STD_C
          602 void* memset(void*, int, size_t);
          603 void* memcpy(void*, const void*, size_t);
          604 #else
          605 Void_t* memset();
          606 Void_t* memcpy();
          607 #endif
          608 #endif
          609 #endif
          610 
          611 /*
          612   MALLOC_FAILURE_ACTION is the action to take before "return 0" when
          613   malloc fails to be able to return memory, either because memory is
          614   exhausted or because of illegal arguments.
          615   
          616   By default, sets errno if running on STD_C platform, else does nothing.  
          617 */
          618 
          619 #ifndef MALLOC_FAILURE_ACTION
          620 #if __STD_C
          621 #define MALLOC_FAILURE_ACTION \
          622    errno = ENOMEM;
          623 
          624 #else
          625 #define MALLOC_FAILURE_ACTION
          626 #endif
          627 #endif
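
               /*
                 If the default is not what you want, define MALLOC_FAILURE_ACTION
                 yourself before this point (for example on the compiler command
                 line).  A sketch, with a made-up handler name:

                   #define MALLOC_FAILURE_ACTION  note_out_of_memory();

                 Whatever statement you supply is executed just before the failing
                 routine returns 0.
               */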
          628 
          629 /*
          630   MORECORE-related declarations. By default, rely on sbrk
          631 */
          632 
          633 
          634 #ifdef LACKS_UNISTD_H
          635 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
          636 #if __STD_C
          637 extern Void_t*     sbrk(ptrdiff_t);
          638 #else
          639 extern Void_t*     sbrk();
          640 #endif
          641 #endif
          642 #endif
          643 
          644 /*
          645   MORECORE is the name of the routine to call to obtain more memory
          646   from the system.  See below for general guidance on writing
          647   alternative MORECORE functions, as well as a version for WIN32 and a
          648   sample version for pre-OSX macos.
          649 */
          650 
          651 #ifndef MORECORE
          652 #define MORECORE sbrk
          653 #endif
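
               /*
                 For orientation only: an alternative MORECORE must follow the sbrk
                 calling convention.  A minimal sketch over a static arena (all
                 names here are invented; see the longer guidance on MORECORE
                 further below in this file):

                   static char arena[4 * 1024 * 1024];
                   static char* arena_brk = arena;

                   void* arena_morecore(ptrdiff_t increment) {
                     char* old = arena_brk;
                     if (increment < 0 ||
                         increment > (arena + sizeof(arena)) - arena_brk)
                       return (void*) MORECORE_FAILURE;
                     arena_brk += increment;
                     return old;   // old break; equals current break when increment is 0
                   }

                 compiled with -DMORECORE=arena_morecore.  Because this version
                 rejects negative increments, MORECORE_CANNOT_TRIM should be
                 defined along with it.
               */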
          654 
          655 /*
          656   MORECORE_FAILURE is the value returned upon failure of MORECORE
          657   as well as mmap. Since it cannot be an otherwise valid memory address,
          658   and must reflect values of standard sys calls, you probably ought not
          659   try to redefine it.
          660 */
          661 
          662 #ifndef MORECORE_FAILURE
          663 #define MORECORE_FAILURE (-1)
          664 #endif
          665 
          666 /*
          667   If MORECORE_CONTIGUOUS is true, take advantage of fact that
          668   consecutive calls to MORECORE with positive arguments always return
          669   contiguous increasing addresses.  This is true of unix sbrk.  Even
          670   if not defined, when regions happen to be contiguous, malloc will
          671   permit allocations spanning regions obtained from different
          672   calls. But defining this when applicable enables some stronger
          673   consistency checks and space efficiencies. 
          674 */
          675 
          676 #ifndef MORECORE_CONTIGUOUS
          677 #define MORECORE_CONTIGUOUS 1
          678 #endif
          679 
          680 /*
          681   Define MORECORE_CANNOT_TRIM if your version of MORECORE
          682   cannot release space back to the system when given negative
          683   arguments. This is generally necessary only if you are using
          684   a hand-crafted MORECORE function that cannot handle negative arguments.
          685 */
          686 
          687 /* #define MORECORE_CANNOT_TRIM */
          688 
          689 
          690 /*
          691   Define HAVE_MMAP as true to optionally make malloc() use mmap() to
          692   allocate very large blocks.  These will be returned to the
          693   operating system immediately after a free(). Also, if mmap
          694   is available, it is used as a backup strategy in cases where
          695   MORECORE fails to provide space from system.
          696 
          697   This malloc is best tuned to work with mmap for large requests.
          698   If you do not have mmap, operations involving very large chunks (1MB
          699   or so) may be slower than you'd like.
          700 */
          701 
          702 #ifndef HAVE_MMAP
          703 #define HAVE_MMAP 0
          704 #endif
          705 
          706 #if HAVE_MMAP
          707 /* 
          708    Standard unix mmap using /dev/zero clears memory so calloc doesn't
          709    need to.
          710 */
          711 
          712 #ifndef MMAP_CLEARS
          713 #define MMAP_CLEARS 1
          714 #endif
          715 
          716 #else /* no mmap */
          717 #ifndef MMAP_CLEARS
          718 #define MMAP_CLEARS 0
          719 #endif
          720 #endif
          721 
          722 
          723 /* 
          724    MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
          725    sbrk fails, and mmap is used as a backup (which is done only if
          726    HAVE_MMAP).  The value must be a multiple of page size.  This
          727    backup strategy generally applies only when systems have "holes" in
          728    address space, so sbrk cannot perform contiguous expansion, but
          729    there is still space available on system.  On systems for which
          730    this is known to be useful (i.e. most linux kernels), this occurs
          731    only when programs allocate huge amounts of memory.  Between this,
          732    and the fact that mmap regions tend to be limited, the size should
          733    be large, to avoid too many mmap calls and thus avoid running out
          734    of kernel resources.
          735 */
          736 
          737 #ifndef MMAP_AS_MORECORE_SIZE
          738 #define MMAP_AS_MORECORE_SIZE (1024 * 1024)
          739 #endif
          740 
          741 /*
          742   Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
          743   large blocks.  This is currently only possible on Linux with
          744   kernel versions newer than 1.3.77.
          745 */
          746 
          747 #ifndef HAVE_MREMAP
          748 #ifdef linux
          749 #define HAVE_MREMAP 1
          750 #else
          751 #define HAVE_MREMAP 0
          752 #endif
          753 
           754 #endif /* HAVE_MREMAP */
          755 
          756 
          757 /*
          758   The system page size. To the extent possible, this malloc manages
          759   memory from the system in page-size units.  Note that this value is
          760   cached during initialization into a field of malloc_state. So even
          761   if malloc_getpagesize is a function, it is only called once.
          762 
          763   The following mechanics for getpagesize were adapted from bsd/gnu
          764   getpagesize.h. If none of the system-probes here apply, a value of
           765   4096 is used, which should be OK: on systems where the probes don't
           766   apply, using the actual value probably doesn't impact performance anyway.
          767 */
          768 
          769 
          770 #ifndef malloc_getpagesize
          771 
          772 #ifndef LACKS_UNISTD_H
          773 #  include <unistd.h>
          774 #endif
          775 
          776 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
          777 #    ifndef _SC_PAGE_SIZE
          778 #      define _SC_PAGE_SIZE _SC_PAGESIZE
          779 #    endif
          780 #  endif
          781 
          782 #  ifdef _SC_PAGE_SIZE
          783 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
          784 #  else
          785 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
          786        extern size_t getpagesize();
          787 #      define malloc_getpagesize getpagesize()
          788 #    else
          789 #      ifdef WIN32 /* use supplied emulation of getpagesize */
          790 #        define malloc_getpagesize getpagesize() 
          791 #      else
          792 #        ifndef LACKS_SYS_PARAM_H
          793 #          include <sys/param.h>
          794 #        endif
          795 #        ifdef EXEC_PAGESIZE
          796 #          define malloc_getpagesize EXEC_PAGESIZE
          797 #        else
          798 #          ifdef NBPG
          799 #            ifndef CLSIZE
          800 #              define malloc_getpagesize NBPG
          801 #            else
          802 #              define malloc_getpagesize (NBPG * CLSIZE)
          803 #            endif
          804 #          else
          805 #            ifdef NBPC
          806 #              define malloc_getpagesize NBPC
          807 #            else
          808 #              ifdef PAGESIZE
          809 #                define malloc_getpagesize PAGESIZE
          810 #              else /* just guess */
          811 #                define malloc_getpagesize (4096) 
          812 #              endif
          813 #            endif
          814 #          endif
          815 #        endif
          816 #      endif
          817 #    endif
          818 #  endif
          819 #endif
          820 
          821 /*
          822   This version of malloc supports the standard SVID/XPG mallinfo
          823   routine that returns a struct containing usage properties and
          824   statistics. It should work on any SVID/XPG compliant system that has
          825   a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
          826   install such a thing yourself, cut out the preliminary declarations
          827   as described above and below and save them in a malloc.h file. But
          828   there's no compelling reason to bother to do this.)
          829 
          830   The main declaration needed is the mallinfo struct that is returned
           831   (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
          832   bunch of fields that are not even meaningful in this version of
           833   malloc.  These fields are instead filled by mallinfo() with
          834   other numbers that might be of interest.
          835 
          836   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
          837   /usr/include/malloc.h file that includes a declaration of struct
          838   mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
          839   version is declared below.  These must be precisely the same for
          840   mallinfo() to work.  The original SVID version of this struct,
          841   defined on most systems with mallinfo, declares all fields as
           842   ints. But some others define them as unsigned long. If your system
          843   defines the fields using a type of different width than listed here,
          844   you must #include your system version and #define
          845   HAVE_USR_INCLUDE_MALLOC_H.
          846 */
          847 
          848 /* #define HAVE_USR_INCLUDE_MALLOC_H */
          849 
          850 #ifdef HAVE_USR_INCLUDE_MALLOC_H
          851 #include "/usr/include/malloc.h"
          852 #else
          853 
          854 /* SVID2/XPG mallinfo structure */
          855 
          856 struct mallinfo {
          857   int arena;    /* non-mmapped space allocated from system */
          858   int ordblks;  /* number of free chunks */
          859   int smblks;   /* number of fastbin blocks */
          860   int hblks;    /* number of mmapped regions */
          861   int hblkhd;   /* space in mmapped regions */
          862   int usmblks;  /* maximum total allocated space */
          863   int fsmblks;  /* space available in freed fastbin blocks */
          864   int uordblks; /* total allocated space */
          865   int fordblks; /* total free space */
          866   int keepcost; /* top-most, releasable (via malloc_trim) space */
          867 };
          868 
          869 /*
          870   SVID/XPG defines four standard parameter numbers for mallopt,
          871   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
          872   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
          873   so setting them has no effect. But this malloc also supports other
          874   options in mallopt described below.
          875 */
          876 #endif
          877 
          878 
          879 /* ---------- description of public routines ------------ */
          880 
          881 /*
          882   malloc(size_t n)
          883   Returns a pointer to a newly allocated chunk of at least n bytes, or null
          884   if no space is available. Additionally, on failure, errno is
          885   set to ENOMEM on ANSI C systems.
          886 
           887   If n is zero, malloc returns a minimum-sized chunk. (The minimum
          888   size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
          889   systems.)  On most systems, size_t is an unsigned type, so calls
          890   with negative arguments are interpreted as requests for huge amounts
          891   of space, which will often fail. The maximum supported value of n
          892   differs across systems, but is in all cases less than the maximum
          893   representable value of a size_t.
          894 */
          895 #if __STD_C
          896 Void_t*  public_mALLOc(size_t);
          897 #else
          898 Void_t*  public_mALLOc();
          899 #endif
          900 
          901 /*
          902   free(Void_t* p)
          903   Releases the chunk of memory pointed to by p, that had been previously
          904   allocated using malloc or a related routine such as realloc.
          905   It has no effect if p is null. It can have arbitrary (i.e., bad!)
          906   effects if p has already been freed.
          907 
           908   Unless disabled (using mallopt), freeing very large spaces will,
          909   when possible, automatically trigger operations that give
          910   back unused memory to the system, thus reducing program footprint.
          911 */
          912 #if __STD_C
          913 void     public_fREe(Void_t*);
          914 #else
          915 void     public_fREe();
          916 #endif
          917 
          918 /*
          919   calloc(size_t n_elements, size_t element_size);
          920   Returns a pointer to n_elements * element_size bytes, with all locations
          921   set to zero.
          922 */
          923 #if __STD_C
          924 Void_t*  public_cALLOc(size_t, size_t);
          925 #else
          926 Void_t*  public_cALLOc();
          927 #endif
          928 
          929 /*
          930   realloc(Void_t* p, size_t n)
          931   Returns a pointer to a chunk of size n that contains the same data
          932   as does chunk p up to the minimum of (n, p's size) bytes, or null
          933   if no space is available. 
          934 
          935   The returned pointer may or may not be the same as p. The algorithm
          936   prefers extending p when possible, otherwise it employs the
          937   equivalent of a malloc-copy-free sequence.
          938 
          939   If p is null, realloc is equivalent to malloc.  
          940 
          941   If space is not available, realloc returns null, errno is set (if on
          942   ANSI) and p is NOT freed.
          943 
           944   If n is for fewer bytes than already held by p, the newly unused
          945   space is lopped off and freed if possible.  Unless the #define
          946   REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
          947   zero (re)allocates a minimum-sized chunk.
          948 
          949   Large chunks that were internally obtained via mmap will always
          950   be reallocated using malloc-copy-free sequences unless
          951   the system supports MREMAP (currently only linux).
          952 
          953   The old unix realloc convention of allowing the last-free'd chunk
          954   to be used as an argument to realloc is not supported.
          955 */
          956 #if __STD_C
          957 Void_t*  public_rEALLOc(Void_t*, size_t);
          958 #else
          959 Void_t*  public_rEALLOc();
          960 #endif
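
               /*
                 Because a failed realloc leaves p intact, the usual safe calling
                 pattern (a sketch) keeps the old pointer until the new one is
                 known to be good:

                   void* q = realloc(p, newsize);
                   if (q == 0) {
                     // p is still valid and still allocated here; handle the
                     // failure without losing it
                   } else {
                     p = q;
                   }
               */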
          961 
          962 /*
          963   memalign(size_t alignment, size_t n);
          964   Returns a pointer to a newly allocated chunk of n bytes, aligned
          965   in accord with the alignment argument.
          966 
          967   The alignment argument should be a power of two. If the argument is
          968   not a power of two, the nearest greater power is used.
          969   8-byte alignment is guaranteed by normal malloc calls, so don't
          970   bother calling memalign with an argument of 8 or less.
          971 
          972   Overreliance on memalign is a sure way to fragment space.
          973 */
          974 #if __STD_C
          975 Void_t*  public_mEMALIGn(size_t, size_t);
          976 #else
          977 Void_t*  public_mEMALIGn();
          978 #endif
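
               /*
                 Example (sketch): to obtain a buffer starting on a 64-byte
                 boundary, say to match a cache line size known to apply on your
                 target:

                   void* buf = memalign(64, 4096);   // null on failure

                 For alignments of 8 or less, plain malloc already suffices, as
                 noted above.
               */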
          979 
          980 /*
          981   valloc(size_t n);
          982   Equivalent to memalign(pagesize, n), where pagesize is the page
          983   size of the system. If the pagesize is unknown, 4096 is used.
          984 */
          985 #if __STD_C
          986 Void_t*  public_vALLOc(size_t);
          987 #else
          988 Void_t*  public_vALLOc();
          989 #endif
          990 
          991 
          992 
          993 /*
          994   mallopt(int parameter_number, int parameter_value)
           995   Sets tunable parameters. The format is to provide a
          996   (parameter-number, parameter-value) pair.  mallopt then sets the
          997   corresponding parameter to the argument value if it can (i.e., so
          998   long as the value is meaningful), and returns 1 if successful else
          999   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
         1000   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
         1001   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
         1002   so setting them has no effect. But this malloc also supports four
         1003   other options in mallopt. See below for details.  Briefly, supported
         1004   parameters are as follows (listed defaults are for "typical"
         1005   configurations).
         1006 
         1007   Symbol            param #   default    allowed param values
         1008   M_MXFAST          1         64         0-80  (0 disables fastbins)
         1009   M_TRIM_THRESHOLD -1         256*1024   any   (-1U disables trimming)
         1010   M_TOP_PAD        -2         0          any  
         1011   M_MMAP_THRESHOLD -3         256*1024   any   (or 0 if no MMAP support)
         1012   M_MMAP_MAX       -4         65536      any   (0 disables use of mmap)
         1013 */
         1014 #if __STD_C
         1015 int      public_mALLOPt(int, int);
         1016 #else
         1017 int      public_mALLOPt();
         1018 #endif
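
               /*
                 Usage sketch (the M_* parameter names are defined later in this
                 file):

                   mallopt(M_MXFAST, 0);                  // disable fastbins entirely
                   mallopt(M_TRIM_THRESHOLD, 128*1024);   // trim back to the system sooner
                   mallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap only requests of 1MB and up

                 Each call returns 1 if the setting was accepted and 0 otherwise.
               */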
         1019 
         1020 
         1021 /*
         1022   mallinfo()
         1023   Returns (by copy) a struct containing various summary statistics:
         1024 
         1025   arena:     current total non-mmapped bytes allocated from system 
         1026   ordblks:   the number of free chunks 
         1027   smblks:    the number of fastbin blocks (i.e., small chunks that
          1028                have been freed but not yet reused or consolidated)
         1029   hblks:     current number of mmapped regions 
         1030   hblkhd:    total bytes held in mmapped regions 
         1031   usmblks:   the maximum total allocated space. This will be greater
         1032                 than current total if trimming has occurred.
         1033   fsmblks:   total bytes held in fastbin blocks 
         1034   uordblks:  current total allocated space (normal or mmapped)
         1035   fordblks:  total free space 
         1036   keepcost:  the maximum number of bytes that could ideally be released
         1037                back to system via malloc_trim. ("ideally" means that
         1038                it ignores page restrictions etc.)
         1039 
         1040   Because these fields are ints, but internal bookkeeping may
         1041   be kept as longs, the reported values may wrap around zero and 
         1042   thus be inaccurate.
         1043 */
         1044 #if __STD_C
         1045 struct mallinfo public_mALLINFo(void);
         1046 #else
         1047 struct mallinfo public_mALLINFo();
         1048 #endif
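
               /*
                 Reporting sketch:

                   struct mallinfo mi = mallinfo();
                   printf("non-mmapped bytes from system:  %d\n", mi.arena);
                   printf("total allocated space:          %d\n", mi.uordblks);
                   printf("total free space:               %d\n", mi.fordblks);
                   printf("releasable via malloc_trim:     %d\n", mi.keepcost);

                 Remember that the fields are ints, so the reported values can wrap
                 in programs that move a lot of memory.
               */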
         1049 
         1050 /*
         1051   independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
         1052 
         1053   independent_calloc is similar to calloc, but instead of returning a
         1054   single cleared space, it returns an array of pointers to n_elements
         1055   independent elements that can hold contents of size elem_size, each
         1056   of which starts out cleared, and can be independently freed,
         1057   realloc'ed etc. The elements are guaranteed to be adjacently
         1058   allocated (this is not guaranteed to occur with multiple callocs or
         1059   mallocs), which may also improve cache locality in some
         1060   applications.
         1061 
         1062   The "chunks" argument is optional (i.e., may be null, which is
         1063   probably the most typical usage). If it is null, the returned array
         1064   is itself dynamically allocated and should also be freed when it is
         1065   no longer needed. Otherwise, the chunks array must be of at least
         1066   n_elements in length. It is filled in with the pointers to the
         1067   chunks.
         1068 
         1069   In either case, independent_calloc returns this pointer array, or
         1070   null if the allocation failed.  If n_elements is zero and "chunks"
         1071   is null, it returns a chunk representing an array with zero elements
         1072   (which should be freed if not wanted).
         1073 
         1074   Each element must be individually freed when it is no longer
         1075   needed. If you'd like to instead be able to free all at once, you
         1076   should instead use regular calloc and assign pointers into this
         1077   space to represent elements.  (In this case though, you cannot
         1078   independently free elements.)
         1079   
         1080   independent_calloc simplifies and speeds up implementations of many
         1081   kinds of pools.  It may also be useful when constructing large data
         1082   structures that initially have a fixed number of fixed-sized nodes,
         1083   but the number is not known at compile time, and some of the nodes
         1084   may later need to be freed. For example:
         1085 
         1086   struct Node { int item; struct Node* next; };
         1087   
         1088   struct Node* build_list() {
         1089     struct Node** pool;
         1090     int n = read_number_of_nodes_needed();
         1091     if (n <= 0) return 0;
          1092     pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
         1093     if (pool == 0) die(); 
         1094     // organize into a linked list... 
         1095     struct Node* first = pool[0];
          1096     for (int i = 0; i < n-1; ++i)
         1097       pool[i]->next = pool[i+1];
         1098     free(pool);     // Can now free the array (or not, if it is needed later)
         1099     return first;
         1100   }
         1101 */
         1102 #if __STD_C
         1103 Void_t** public_iCALLOc(size_t, size_t, Void_t**);
         1104 #else
         1105 Void_t** public_iCALLOc();
         1106 #endif
         1107 
         1108 /*
         1109   independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
         1110 
         1111   independent_comalloc allocates, all at once, a set of n_elements
         1112   chunks with sizes indicated in the "sizes" array.    It returns
         1113   an array of pointers to these elements, each of which can be
         1114   independently freed, realloc'ed etc. The elements are guaranteed to
         1115   be adjacently allocated (this is not guaranteed to occur with
         1116   multiple callocs or mallocs), which may also improve cache locality
         1117   in some applications.
         1118 
         1119   The "chunks" argument is optional (i.e., may be null). If it is null
         1120   the returned array is itself dynamically allocated and should also
         1121   be freed when it is no longer needed. Otherwise, the chunks array
         1122   must be of at least n_elements in length. It is filled in with the
         1123   pointers to the chunks.
         1124 
         1125   In either case, independent_comalloc returns this pointer array, or
         1126   null if the allocation failed.  If n_elements is zero and chunks is
         1127   null, it returns a chunk representing an array with zero elements
         1128   (which should be freed if not wanted).
         1129   
         1130   Each element must be individually freed when it is no longer
         1131   needed. If you'd like to instead be able to free all at once, you
         1132   should instead use a single regular malloc, and assign pointers at
         1133   particular offsets in the aggregate space. (In this case though, you 
         1134   cannot independently free elements.)
         1135 
          1136   independent_comalloc differs from independent_calloc in that each
         1137   element may have a different size, and also that it does not
         1138   automatically clear elements.
         1139 
         1140   independent_comalloc can be used to speed up allocation in cases
         1141   where several structs or objects must always be allocated at the
         1142   same time.  For example:
         1143 
          1144   struct Head { ... };
          1145   struct Foot { ... };
         1146 
         1147   void send_message(char* msg) {
         1148     int msglen = strlen(msg);
         1149     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
         1150     void* chunks[3];
         1151     if (independent_comalloc(3, sizes, chunks) == 0)
         1152       die();
         1153     struct Head* head = (struct Head*)(chunks[0]);
         1154     char*        body = (char*)(chunks[1]);
         1155     struct Foot* foot = (struct Foot*)(chunks[2]);
         1156     // ...
         1157   }
         1158 
         1159   In general though, independent_comalloc is worth using only for
         1160   larger values of n_elements. For small values, you probably won't
         1161   detect enough difference from series of malloc calls to bother.
         1162 
         1163   Overuse of independent_comalloc can increase overall memory usage,
         1164   since it cannot reuse existing noncontiguous small chunks that
         1165   might be available for some of the elements.
         1166 */
         1167 #if __STD_C
         1168 Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
         1169 #else
         1170 Void_t** public_iCOMALLOc();
         1171 #endif
         1172 
         1173 
         1174 /*
         1175   pvalloc(size_t n);
         1176   Equivalent to valloc(minimum-page-that-holds(n)), that is,
         1177   round up n to nearest pagesize.
         1178  */
         1179 #if __STD_C
         1180 Void_t*  public_pVALLOc(size_t);
         1181 #else
         1182 Void_t*  public_pVALLOc();
         1183 #endif
         1184 
         1185 /*
         1186   cfree(Void_t* p);
         1187   Equivalent to free(p).
         1188 
         1189   cfree is needed/defined on some systems that pair it with calloc,
         1190   for odd historical reasons (such as: cfree is used in example 
         1191   code in the first edition of K&R).
         1192 */
         1193 #if __STD_C
         1194 void     public_cFREe(Void_t*);
         1195 #else
         1196 void     public_cFREe();
         1197 #endif
         1198 
         1199 /*
         1200   malloc_trim(size_t pad);
         1201 
         1202   If possible, gives memory back to the system (via negative
         1203   arguments to sbrk) if there is unused memory at the `high' end of
         1204   the malloc pool. You can call this after freeing large blocks of
         1205   memory to potentially reduce the system-level memory requirements
         1206   of a program. However, it cannot guarantee to reduce memory. Under
         1207   some allocation patterns, some large free blocks of memory will be
         1208   locked between two used chunks, so they cannot be given back to
         1209   the system.
         1210   
         1211   The `pad' argument to malloc_trim represents the amount of free
         1212   trailing space to leave untrimmed. If this argument is zero,
         1213   only the minimum amount of memory to maintain internal data
         1214   structures will be left (one page or less). Non-zero arguments
         1215   can be supplied to maintain enough trailing space to service
         1216   future expected allocations without having to re-obtain memory
         1217   from the system.
         1218   
         1219   Malloc_trim returns 1 if it actually released any memory, else 0.
         1220   On systems that do not support "negative sbrks", it will always
         1221   return 0.
         1222 */
         1223 #if __STD_C
         1224 int      public_mTRIm(size_t);
         1225 #else
         1226 int      public_mTRIm();
         1227 #endif
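
              /*
                A minimal usage sketch, assuming the default exported name malloc_trim
                (the renaming options earlier in this file may change it); big_buffer
                stands for some hypothetical large, previously malloc'ed block:

                  free(big_buffer);
                  if (malloc_trim(0) == 0) {
                    // nothing was released -- common and harmless, e.g. when the
                    // remaining free space is locked between in-use chunks
                  }
              */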
         1228 
         1229 /*
         1230   malloc_usable_size(Void_t* p);
         1231 
         1232   Returns the number of bytes you can actually use in
         1233   an allocated chunk, which may be more than you requested (although
         1234   often not) due to alignment and minimum size constraints.
         1235   You can use this many bytes without worrying about
         1236   overwriting other allocated objects. This is not a particularly great
         1237   programming practice. malloc_usable_size can be more useful in
         1238   debugging and assertions, for example:
         1239 
         1240   p = malloc(n);
         1241   assert(malloc_usable_size(p) >= 256);
         1242 
         1243 */
         1244 #if __STD_C
         1245 size_t   public_mUSABLe(Void_t*);
         1246 #else
         1247 size_t   public_mUSABLe();
         1248 #endif
         1249 
         1250 /*
         1251   malloc_stats();
         1252   Prints on stderr the amount of space obtained from the system (both
         1253   via sbrk and mmap), the maximum amount (which may be more than
         1254   current if malloc_trim and/or munmap got called), and the current
         1255   number of bytes allocated via malloc (or realloc, etc) but not yet
         1256   freed. Note that this is the number of bytes allocated, not the
         1257   number requested. It will be larger than the number requested
         1258   because of alignment and bookkeeping overhead. Because it includes
         1259   alignment wastage as being in use, this figure may be greater than
         1260   zero even when no user-level chunks are allocated.
         1261 
         1262   The reported current and maximum system memory can be inaccurate if
         1263   a program makes other calls to system memory allocation functions
         1264   (normally sbrk) outside of malloc.
         1265 
         1266   malloc_stats prints only the most commonly interesting statistics.
         1267   More information can be obtained by calling mallinfo.
         1268 
         1269 */
         1270 #if __STD_C
         1271 void     public_mSTATs();
         1272 #else
         1273 void     public_mSTATs();
         1274 #endif
         1275 
         1276 /* mallopt tuning options */
         1277 
         1278 /*
         1279   M_MXFAST is the maximum request size used for "fastbins", special bins
         1280   that hold returned chunks without consolidating their spaces. This
         1281   enables future requests for chunks of the same size to be handled
         1282   very quickly, but can increase fragmentation, and thus increase the
         1283   overall memory footprint of a program.
         1284 
         1285   This malloc manages fastbins very conservatively yet still
         1286   efficiently, so fragmentation is rarely a problem for values less
         1287   than or equal to the default.  The maximum supported value of MXFAST
         1288   is 80. You wouldn't want it any higher than this anyway.  Fastbins
         1289   are designed especially for use with many small structs, objects or
         1290   strings -- the default handles structs/objects/arrays with sizes up
         1291   to 16 4byte fields, or small strings representing words, tokens,
         1292   etc. Using fastbins for larger objects normally worsens
         1293   fragmentation without improving speed.
         1294 
         1295   M_MXFAST is set in REQUEST size units. It is internally used in
         1296   chunksize units, which adds padding and alignment.  You can reduce
         1297   M_MXFAST to 0 to disable all use of fastbins.  This causes the malloc
         1298   algorithm to be a closer approximation of fifo-best-fit in all cases,
         1299   not just for larger requests, but will generally cause it to be
         1300   slower.
         1301 */
         1302 
         1303 
         1304 /* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
         1305 #ifndef M_MXFAST
         1306 #define M_MXFAST            1    
         1307 #endif
         1308 
         1309 #ifndef DEFAULT_MXFAST
         1310 #define DEFAULT_MXFAST     64
         1311 #endif
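
              /*
                An illustrative sketch of tuning M_MXFAST via mallopt (assuming the
                default exported name mallopt; the values are examples only):

                  mallopt(M_MXFAST, 0);    // disable fastbins: closer to pure
                                           // fifo-best-fit, but generally slower
                  mallopt(M_MXFAST, 80);   // the maximum supported request size
              */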
         1312 
         1313 
         1314 /*
         1315   M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
         1316   to keep before releasing via malloc_trim in free().
         1317 
         1318   Automatic trimming is mainly useful in long-lived programs.
         1319   Because trimming via sbrk can be slow on some systems, and can
         1320   sometimes be wasteful (in cases where programs immediately
         1321   afterward allocate more large chunks) the value should be high
         1322   enough so that your overall system performance would improve by
         1323   releasing this much memory.
         1324 
         1325   The trim threshold and the mmap control parameters (see below)
         1326   can be traded off with one another. Trimming and mmapping are
         1327   two different ways of releasing unused memory back to the
         1328   system. Between these two, it is often possible to keep
         1329   system-level demands of a long-lived program down to a bare
         1330   minimum. For example, in one test suite of sessions measuring
         1331   the XF86 X server on Linux, using a trim threshold of 128K and a
         1332   mmap threshold of 192K led to near-minimal long term resource
         1333   consumption.
         1334 
         1335   If you are using this malloc in a long-lived program, it should
         1336   pay to experiment with these values.  As a rough guide, you
         1337   might set it to a value close to the average size of a process
         1338   (program) running on your system.  Releasing this much memory
         1339   would allow such a process to run in memory.  Generally, it's
         1340   worth it to tune for trimming rather than memory mapping when a
         1341   program undergoes phases where several large chunks are
         1342   allocated and released in ways that can reuse each other's
         1343   storage, perhaps mixed with phases where there are no such
         1344   chunks at all.  And in well-behaved long-lived programs,
         1345   controlling release of large blocks via trimming versus mapping
         1346   is usually faster.
         1347 
         1348   However, in most programs, these parameters serve mainly as
         1349   protection against the system-level effects of carrying around
         1350   massive amounts of unneeded memory. Since frequent calls to
         1351   sbrk, mmap, and munmap otherwise degrade performance, the default
         1352   parameters are set to relatively high values that serve only as
         1353   safeguards.
         1354 
         1355   The trim value must be greater than page size to have any useful
         1356   effect.  To disable trimming completely, you can set it to
         1357   (unsigned long)(-1).
         1358 
         1359   Trim settings interact with fastbin (MXFAST) settings: Unless
         1360   TRIM_FASTBINS is defined, automatic trimming never takes place upon
         1361   freeing a chunk with size less than or equal to MXFAST. Trimming is
         1362   instead delayed until subsequent freeing of larger chunks. However,
         1363   you can still force an attempted trim by calling malloc_trim.
         1364 
         1365   Also, trimming is not generally possible in cases where
         1366   the main arena is obtained via mmap.
         1367 
         1368   Note that the trick some people use of mallocing a huge space and
         1369   then freeing it at program startup, in an attempt to reserve system
         1370   memory, doesn't have the intended effect under automatic trimming,
         1371   since that memory will immediately be returned to the system.
         1372 */
         1373 
         1374 #define M_TRIM_THRESHOLD       -1
         1375 
         1376 #ifndef DEFAULT_TRIM_THRESHOLD
         1377 #define DEFAULT_TRIM_THRESHOLD (256 * 1024)
         1378 #endif
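
              /*
                An illustrative sketch of adjusting the trim threshold via mallopt
                (assuming the default exported name mallopt; 128K is the X server
                figure mentioned above):

                  mallopt(M_TRIM_THRESHOLD, 128 * 1024);  // trim when more than 128K is unused at top
                  mallopt(M_TRIM_THRESHOLD, -1);          // or: disable trimming entirely
              */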
         1379 
         1380 /*
         1381   M_TOP_PAD is the amount of extra `padding' space to allocate or
         1382   retain whenever sbrk is called. It is used in two ways internally:
         1383 
         1384   * When sbrk is called to extend the top of the arena to satisfy
         1385   a new malloc request, this much padding is added to the sbrk
         1386   request.
         1387 
         1388   * When malloc_trim is called automatically from free(),
         1389   it is used as the `pad' argument.
         1390 
         1391   In both cases, the actual amount of padding is rounded
         1392   so that the end of the arena is always a system page boundary.
         1393 
         1394   The main reason for using padding is to avoid calling sbrk so
         1395   often. Having even a small pad greatly reduces the likelihood
         1396   that nearly every malloc request during program start-up (or
         1397   after trimming) will invoke sbrk, which needlessly wastes
         1398   time.
         1399 
         1400   Automatic rounding-up to page-size units is normally sufficient
         1401   to avoid measurable overhead, so the default is 0.  However, in
         1402   systems where sbrk is relatively slow, it can pay to increase
         1403   this value, at the expense of carrying around more memory than
         1404   the program needs.
         1405 */
         1406 
         1407 #define M_TOP_PAD              -2
         1408 
         1409 #ifndef DEFAULT_TOP_PAD
         1410 #define DEFAULT_TOP_PAD        (0)
         1411 #endif
         1412 
         1413 /*
         1414   M_MMAP_THRESHOLD is the request size threshold for using mmap()
         1415   to service a request. Requests of at least this size that cannot
         1416   be allocated using already-existing space will be serviced via mmap.
         1417   (If enough normal freed space already exists it is used instead.)
         1418 
         1419   Using mmap segregates relatively large chunks of memory so that
         1420   they can be individually obtained and released from the host
         1421   system. A request serviced through mmap is never reused by any
         1422   other request (at least not directly; the system may just so
         1423   happen to remap successive requests to the same locations).
         1424 
         1425   Segregating space in this way has the benefits that:
         1426 
         1427    1. Mmapped space can ALWAYS be individually released back 
         1428       to the system, which helps keep the system level memory 
         1429       demands of a long-lived program low. 
         1430    2. Mapped memory can never become `locked' between
         1431       other chunks, as can happen with normally allocated chunks, which
         1432       means that even trimming via malloc_trim would not release them.
         1433    3. On some systems with "holes" in address spaces, mmap can obtain
         1434       memory that sbrk cannot.
         1435 
         1436   However, it has the disadvantages that:
         1437 
         1438    1. The space cannot be reclaimed, consolidated, and then
         1439       used to service later requests, as happens with normal chunks.
         1440    2. It can lead to more wastage because of mmap page alignment
         1441       requirements.
         1442    3. It causes malloc performance to be more dependent on host
         1443       system memory management support routines which may vary in
         1444       implementation quality and may impose arbitrary
         1445       limitations. Generally, servicing a request via normal
         1446       malloc steps is faster than going through a system's mmap.
         1447 
         1448   The advantages of mmap nearly always outweigh disadvantages for
         1449   "large" chunks, but the value of "large" varies across systems.  The
         1450   default is an empirically derived value that works well in most
         1451   systems.
         1452 */
         1453 
         1454 #define M_MMAP_THRESHOLD      -3
         1455 
         1456 #ifndef DEFAULT_MMAP_THRESHOLD
         1457 #define DEFAULT_MMAP_THRESHOLD (256 * 1024)
         1458 #endif
         1459 
         1460 /*
         1461   M_MMAP_MAX is the maximum number of requests to simultaneously
         1462   service using mmap. This parameter exists because some systems
         1463   have a limited number of internal tables for use by mmap, and
         1464   using more than a few of them may degrade
         1465   performance.
         1466 
         1467   The default is set to a value that serves only as a safeguard.
         1468   Setting to 0 disables use of mmap for servicing large requests.  If
         1469   HAVE_MMAP is not set, the default value is 0, and attempts to set it
         1470   to non-zero values in mallopt will fail.
         1471 */
         1472 
         1473 #define M_MMAP_MAX             -4
         1474 
         1475 #ifndef DEFAULT_MMAP_MAX
         1476 #if HAVE_MMAP
         1477 #define DEFAULT_MMAP_MAX       (65536)
         1478 #else
         1479 #define DEFAULT_MMAP_MAX       (0)
         1480 #endif
         1481 #endif
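
              /*
                An illustrative sketch of the mmap controls via mallopt (assuming the
                default exported name mallopt; 192K is the X server figure mentioned
                in the M_TRIM_THRESHOLD notes above):

                  mallopt(M_MMAP_THRESHOLD, 192 * 1024);  // mmap requests of at least 192K
                  mallopt(M_MMAP_MAX, 0);                 // or: never use mmap for large requests
              */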
         1482 
         1483 #ifdef __cplusplus
         1484 };  /* end of extern "C" */
         1485 #endif
         1486 
         1487 /* 
         1488   ========================================================================
         1489   To make a fully customizable malloc.h header file, cut everything
         1490   above this line, put into file malloc.h, edit to suit, and #include it 
         1491   on the next line, as well as in programs that use this malloc.
         1492   ========================================================================
         1493 */
         1494 
         1495 /* #include "malloc.h" */
         1496 
         1497 /* --------------------- public wrappers ---------------------- */
         1498 
         1499 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
         1500 
         1501 /* Declare all routines as internal */
         1502 #if __STD_C
         1503 static Void_t*  mALLOc(size_t);
         1504 static void     fREe(Void_t*);
         1505 static Void_t*  rEALLOc(Void_t*, size_t);
         1506 static Void_t*  mEMALIGn(size_t, size_t);
         1507 static Void_t*  vALLOc(size_t);
         1508 static Void_t*  pVALLOc(size_t);
         1509 static Void_t*  cALLOc(size_t, size_t);
         1510 static Void_t** iCALLOc(size_t, size_t, Void_t**);
         1511 static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
         1512 static void     cFREe(Void_t*);
         1513 static int      mTRIm(size_t);
         1514 static size_t   mUSABLe(Void_t*);
         1515 static void     mSTATs();
         1516 static int      mALLOPt(int, int);
         1517 static struct mallinfo mALLINFo(void);
         1518 #else
         1519 static Void_t*  mALLOc();
         1520 static void     fREe();
         1521 static Void_t*  rEALLOc();
         1522 static Void_t*  mEMALIGn();
         1523 static Void_t*  vALLOc();
         1524 static Void_t*  pVALLOc();
         1525 static Void_t*  cALLOc();
         1526 static Void_t** iCALLOc();
         1527 static Void_t** iCOMALLOc();
         1528 static void     cFREe();
         1529 static int      mTRIm();
         1530 static size_t   mUSABLe();
         1531 static void     mSTATs();
         1532 static int      mALLOPt();
         1533 static struct mallinfo mALLINFo();
         1534 #endif
         1535 
         1536 /*
         1537   MALLOC_PREACTION and MALLOC_POSTACTION should be
         1538   defined to return 0 on success, and nonzero on failure.
         1539   The return value of MALLOC_POSTACTION is currently ignored
         1540   in wrapper functions since there is no reasonable default
         1541   action to take on failure.
         1542 */
         1543 
         1544 
         1545 #ifdef USE_MALLOC_LOCK
         1546 
         1547 #ifdef WIN32
         1548 
         1549 static int mALLOC_MUTEx;
         1550 #define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
         1551 #define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)
         1552 
         1553 #else
         1554 
         1555 #include <pthread.h>
         1556 
         1557 static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
         1558 
         1559 #define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
         1560 #define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)
         1561 
         1562 #endif /* WIN32 */
         1563 
         1564 #else
         1565 
         1566 /* Substitute anything you like for these */
         1567 
         1568 #define MALLOC_PREACTION   (0)
         1569 #define MALLOC_POSTACTION  (0)
         1570 
         1571 #endif
         1572 
         1573 Void_t* public_mALLOc(size_t bytes) {
         1574   Void_t* m;
         1575   if (MALLOC_PREACTION != 0) {
         1576     return 0;
         1577   }
         1578   m = mALLOc(bytes);
         1579   if (MALLOC_POSTACTION != 0) {
         1580   }
         1581   return m;
         1582 }
         1583 
         1584 void public_fREe(Void_t* m) {
         1585   if (MALLOC_PREACTION != 0) {
         1586     return;
         1587   }
         1588   fREe(m);
         1589   if (MALLOC_POSTACTION != 0) {
         1590   }
         1591 }
         1592 
         1593 Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
         1594   if (MALLOC_PREACTION != 0) {
         1595     return 0;
         1596   }
         1597   m = rEALLOc(m, bytes);
         1598   if (MALLOC_POSTACTION != 0) {
         1599   }
         1600   return m;
         1601 }
         1602 
         1603 Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
         1604   Void_t* m;
         1605   if (MALLOC_PREACTION != 0) {
         1606     return 0;
         1607   }
         1608   m = mEMALIGn(alignment, bytes);
         1609   if (MALLOC_POSTACTION != 0) {
         1610   }
         1611   return m;
         1612 }
         1613 
         1614 Void_t* public_vALLOc(size_t bytes) {
         1615   Void_t* m;
         1616   if (MALLOC_PREACTION != 0) {
         1617     return 0;
         1618   }
         1619   m = vALLOc(bytes);
         1620   if (MALLOC_POSTACTION != 0) {
         1621   }
         1622   return m;
         1623 }
         1624 
         1625 Void_t* public_pVALLOc(size_t bytes) {
         1626   Void_t* m;
         1627   if (MALLOC_PREACTION != 0) {
         1628     return 0;
         1629   }
         1630   m = pVALLOc(bytes);
         1631   if (MALLOC_POSTACTION != 0) {
         1632   }
         1633   return m;
         1634 }
         1635 
         1636 Void_t* public_cALLOc(size_t n, size_t elem_size) {
         1637   Void_t* m;
         1638   if (MALLOC_PREACTION != 0) {
         1639     return 0;
         1640   }
         1641   m = cALLOc(n, elem_size);
         1642   if (MALLOC_POSTACTION != 0) {
         1643   }
         1644   return m;
         1645 }
         1646 
         1647 
         1648 Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
         1649   Void_t** m;
         1650   if (MALLOC_PREACTION != 0) {
         1651     return 0;
         1652   }
         1653   m = iCALLOc(n, elem_size, chunks);
         1654   if (MALLOC_POSTACTION != 0) {
         1655   }
         1656   return m;
         1657 }
         1658 
         1659 Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
         1660   Void_t** m;
         1661   if (MALLOC_PREACTION != 0) {
         1662     return 0;
         1663   }
         1664   m = iCOMALLOc(n, sizes, chunks);
         1665   if (MALLOC_POSTACTION != 0) {
         1666   }
         1667   return m;
         1668 }
         1669 
         1670 void public_cFREe(Void_t* m) {
         1671   if (MALLOC_PREACTION != 0) {
         1672     return;
         1673   }
         1674   cFREe(m);
         1675   if (MALLOC_POSTACTION != 0) {
         1676   }
         1677 }
         1678 
         1679 int public_mTRIm(size_t s) {
         1680   int result;
         1681   if (MALLOC_PREACTION != 0) {
         1682     return 0;
         1683   }
         1684   result = mTRIm(s);
         1685   if (MALLOC_POSTACTION != 0) {
         1686   }
         1687   return result;
         1688 }
         1689 
         1690 size_t public_mUSABLe(Void_t* m) {
         1691   size_t result;
         1692   if (MALLOC_PREACTION != 0) {
         1693     return 0;
         1694   }
         1695   result = mUSABLe(m);
         1696   if (MALLOC_POSTACTION != 0) {
         1697   }
         1698   return result;
         1699 }
         1700 
         1701 void public_mSTATs() {
         1702   if (MALLOC_PREACTION != 0) {
         1703     return;
         1704   }
         1705   mSTATs();
         1706   if (MALLOC_POSTACTION != 0) {
         1707   }
         1708 }
         1709 
         1710 struct mallinfo public_mALLINFo() {
         1711   struct mallinfo m;
         1712   if (MALLOC_PREACTION != 0) {
         1713     struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
         1714     return nm;
         1715   }
         1716   m = mALLINFo();
         1717   if (MALLOC_POSTACTION != 0) {
         1718   }
         1719   return m;
         1720 }
         1721 
         1722 int public_mALLOPt(int p, int v) {
         1723   int result;
         1724   if (MALLOC_PREACTION != 0) {
         1725     return 0;
         1726   }
         1727   result = mALLOPt(p, v);
         1728   if (MALLOC_POSTACTION != 0) {
         1729   }
         1730   return result;
         1731 }
         1732 
         1733 #endif
         1734 
         1735 
         1736 
         1737 /* ------------- Optional versions of memcopy ---------------- */
         1738 
         1739 
         1740 #if USE_MEMCPY
         1741 
         1742 /* 
         1743   Note: memcpy is ONLY invoked with non-overlapping regions,
         1744   so the (usually slower) memmove is not needed.
         1745 */
         1746 
         1747 #define MALLOC_COPY(dest, src, nbytes)  memcpy(dest, src, nbytes)
         1748 #define MALLOC_ZERO(dest, nbytes)       memset(dest, 0,   nbytes)
         1749 
         1750 #else /* !USE_MEMCPY */
         1751 
         1752 /* Use Duff's device for good zeroing/copying performance. */
         1753 
         1754 #define MALLOC_ZERO(charp, nbytes)                                            \
         1755 do {                                                                          \
         1756   INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
         1757   CHUNK_SIZE_T  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
         1758   long mcn;                                                                   \
         1759   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
         1760   switch (mctmp) {                                                            \
         1761     case 0: for(;;) { *mzp++ = 0;                                             \
         1762     case 7:           *mzp++ = 0;                                             \
         1763     case 6:           *mzp++ = 0;                                             \
         1764     case 5:           *mzp++ = 0;                                             \
         1765     case 4:           *mzp++ = 0;                                             \
         1766     case 3:           *mzp++ = 0;                                             \
         1767     case 2:           *mzp++ = 0;                                             \
         1768     case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
         1769   }                                                                           \
         1770 } while(0)
         1771 
         1772 #define MALLOC_COPY(dest,src,nbytes)                                          \
         1773 do {                                                                          \
         1774   INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
         1775   INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
         1776   CHUNK_SIZE_T  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
         1777   long mcn;                                                                   \
         1778   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
         1779   switch (mctmp) {                                                            \
         1780     case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
         1781     case 7:           *mcdst++ = *mcsrc++;                                    \
         1782     case 6:           *mcdst++ = *mcsrc++;                                    \
         1783     case 5:           *mcdst++ = *mcsrc++;                                    \
         1784     case 4:           *mcdst++ = *mcsrc++;                                    \
         1785     case 3:           *mcdst++ = *mcsrc++;                                    \
         1786     case 2:           *mcdst++ = *mcsrc++;                                    \
         1787     case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
         1788   }                                                                           \
         1789 } while(0)
         1790 
         1791 #endif
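
              /*
                For reference, a plain-loop sketch of what the Duff's-device
                MALLOC_ZERO above computes for the word counts malloc actually passes
                it (always at least one full INTERNAL_SIZE_T word); MALLOC_COPY is
                analogous, copying instead of zeroing:

                  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);
                  CHUNK_SIZE_T nwords  = (nbytes)/sizeof(INTERNAL_SIZE_T);
                  while (nwords-- > 0)
                    *mzp++ = 0;
              */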
         1792 
         1793 /* ------------------ MMAP support ------------------  */
         1794 
         1795 
         1796 #if HAVE_MMAP
         1797 
         1798 #ifndef LACKS_FCNTL_H
         1799 #include <fcntl.h>
         1800 #endif
         1801 
         1802 #ifndef LACKS_SYS_MMAN_H
         1803 #include <sys/mman.h>
         1804 #endif
         1805 
         1806 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
         1807 #define MAP_ANONYMOUS MAP_ANON
         1808 #endif
         1809 
         1810 /* 
         1811    Nearly all versions of mmap support MAP_ANONYMOUS, 
         1812    so the following is unlikely to be needed, but is
         1813    supplied just in case.
         1814 */
         1815 
         1816 #ifndef MAP_ANONYMOUS
         1817 
         1818 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
         1819 
         1820 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
         1821  (dev_zero_fd = open("/dev/zero", O_RDWR), \
         1822   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
         1823    mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
         1824 
         1825 #else
         1826 
         1827 #define MMAP(addr, size, prot, flags) \
         1828  (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
         1829 
         1830 #endif
         1831 
         1832 
         1833 #endif /* HAVE_MMAP */
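
              /*
                Sketch of how the MMAP macro is invoked later in this file
                (illustrative; `size' stands for an already page-rounded request, and
                MORECORE_FAILURE is the failure value defined earlier):

                  char* mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
                  if (mm != (char*)(MORECORE_FAILURE)) {
                    // got a fresh, page-aligned region of `size' bytes
                  }
              */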
         1834 
         1835 
         1836 /*
         1837   -----------------------  Chunk representations -----------------------
         1838 */
         1839 
         1840 
         1841 /*
         1842   This struct declaration is misleading (but accurate and necessary).
         1843   It declares a "view" into memory allowing access to necessary
         1844   fields at known offsets from a given base. See explanation below.
         1845 */
         1846 
         1847 struct malloc_chunk {
         1848 
         1849   INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
         1850   INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */
         1851 
         1852   struct malloc_chunk* fd;         /* double links -- used only if free. */
         1853   struct malloc_chunk* bk;
         1854 };
         1855 
         1856 
         1857 typedef struct malloc_chunk* mchunkptr;
         1858 
         1859 /*
         1860    malloc_chunk details:
         1861 
         1862     (The following includes lightly edited explanations by Colin Plumb.)
         1863 
         1864     Chunks of memory are maintained using a `boundary tag' method as
         1865     described in e.g., Knuth or Standish.  (See the paper by Paul
         1866     Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
         1867     survey of such techniques.)  Sizes of free chunks are stored both
         1868     in the front of each chunk and at the end.  This makes
         1869     consolidating fragmented chunks into bigger chunks very fast.  The
         1870     size fields also hold bits representing whether chunks are free or
         1871     in use.
         1872 
         1873     An allocated chunk looks like this:
         1874 
         1875 
         1876     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1877             |             Size of previous chunk, if allocated            | |
         1878             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1879             |             Size of chunk, in bytes                         |P|
         1880       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1881             |             User data starts here...                          .
         1882             .                                                               .
         1883             .             (malloc_usable_space() bytes)                     .
         1884             .                                                               |
         1885 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1886             |             Size of chunk                                     |
         1887             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1888 
         1889 
         1890     Where "chunk" is the front of the chunk for the purpose of most of
         1891     the malloc code, but "mem" is the pointer that is returned to the
         1892     user.  "Nextchunk" is the beginning of the next contiguous chunk.
         1893 
         1894     Chunks always begin on even word boundaries, so the mem portion
         1895     (which is returned to the user) is also on an even word boundary, and
         1896     thus at least double-word aligned.
         1897 
         1898     Free chunks are stored in circular doubly-linked lists, and look like this:
         1899 
         1900     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1901             |             Size of previous chunk                            |
         1902             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1903     `head:' |             Size of chunk, in bytes                         |P|
         1904       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1905             |             Forward pointer to next chunk in list             |
         1906             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1907             |             Back pointer to previous chunk in list            |
         1908             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1909             |             Unused space (may be 0 bytes long)                .
         1910             .                                                               .
         1911             .                                                               |
         1912 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1913     `foot:' |             Size of chunk, in bytes                           |
         1914             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         1915 
         1916     The P (PREV_INUSE) bit, stored in the unused low-order bit of the
         1917     chunk size (which is always a multiple of two words), is an in-use
         1918     bit for the *previous* chunk.  If that bit is *clear*, then the
         1919     word before the current chunk size contains the previous chunk
         1920     size, and can be used to find the front of the previous chunk.
         1921     The very first chunk allocated always has this bit set,
         1922     preventing access to non-existent (or non-owned) memory. If
         1923     prev_inuse is set for any given chunk, then you CANNOT determine
         1924     the size of the previous chunk, and might even get a memory
         1925     addressing fault when trying to do so.
         1926 
         1927     Note that the `foot' of the current chunk is actually represented
         1928     as the prev_size of the NEXT chunk. This makes it easier to
         1929     deal with alignments etc but can be very confusing when trying
         1930     to extend or adapt this code.
         1931 
         1932     The two exceptions to all this are
         1933 
         1934      1. The special chunk `top' doesn't bother using the
         1935         trailing size field since there is no next contiguous chunk
         1936         that would have to index off it. After initialization, `top'
         1937         is forced to always exist.  If it would become less than
         1938         MINSIZE bytes long, it is replenished.
         1939 
         1940      2. Chunks allocated via mmap, which have the second-lowest-order
         1941         bit (IS_MMAPPED) set in their size fields.  Because they are
         1942         allocated one-by-one, each must contain its own trailing size field.
         1943 
         1944 */
         1945 
         1946 /*
         1947   ---------- Size and alignment checks and conversions ----------
         1948 */
         1949 
         1950 /* conversion from malloc headers to user pointers, and back */
         1951 
         1952 #define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
         1953 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
         1954 
         1955 /* The smallest possible chunk */
         1956 #define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))
         1957 
         1958 /* The smallest size we can malloc is an aligned minimal chunk */
         1959 
         1960 #define MINSIZE  \
         1961   (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
         1962 
         1963 /* Check if m has acceptable alignment */
         1964 
         1965 #define aligned_OK(m)  (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0)
         1966 
         1967 
         1968 /* 
         1969    Check if a request is so large that it would wrap around zero when
         1970    padded and aligned. To simplify some other code, the bound is made
         1971    low enough so that adding MINSIZE will also not wrap around zero.
         1972 */
         1973 
         1974 #define REQUEST_OUT_OF_RANGE(req)                                 \
         1975   ((CHUNK_SIZE_T)(req) >=                                        \
         1976    (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE))    
         1977 
         1978 /* pad request bytes into a usable size -- internal version */
         1979 
         1980 #define request2size(req)                                         \
         1981   (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
         1982    MINSIZE :                                                      \
         1983    ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
         1984 
         1985 /*  Same, except also perform argument check */
         1986 
         1987 #define checked_request2size(req, sz)                             \
         1988   if (REQUEST_OUT_OF_RANGE(req)) {                                \
         1989     MALLOC_FAILURE_ACTION;                                        \
         1990     return 0;                                                     \
         1991   }                                                               \
         1992   (sz) = request2size(req);                                              
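
              /*
                Worked example of request2size, assuming 4-byte SIZE_SZ and 8-byte
                alignment (so MALLOC_ALIGN_MASK == 7 and MINSIZE == 16):

                  request2size(1)  == 16   // below MINSIZE, rounded up to MINSIZE
                  request2size(20) == 24   // (20 + 4 + 7) & ~7
                  request2size(24) == 32   // (24 + 4 + 7) & ~7
              */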
         1993 
         1994 /*
         1995   --------------- Physical chunk operations ---------------
         1996 */
         1997 
         1998 
         1999 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
         2000 #define PREV_INUSE 0x1
         2001 
         2002 /* extract inuse bit of previous chunk */
         2003 #define prev_inuse(p)       ((p)->size & PREV_INUSE)
         2004 
         2005 
         2006 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
         2007 #define IS_MMAPPED 0x2
         2008 
         2009 /* check for mmap()'ed chunk */
         2010 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
         2011 
         2012 /* 
         2013   Bits to mask off when extracting size 
         2014 
         2015   Note: IS_MMAPPED is intentionally not masked off from size field in
         2016   macros for which mmapped chunks should never be seen. This should
         2017   cause helpful core dumps to occur if it is tried by accident by
         2018   people extending or adapting this malloc.
         2019 */
         2020 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
         2021 
         2022 /* Get size, ignoring use bits */
         2023 #define chunksize(p)         ((p)->size & ~(SIZE_BITS))
         2024 
         2025 
         2026 /* Ptr to next physical malloc_chunk. */
         2027 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
         2028 
         2029 /* Ptr to previous physical malloc_chunk */
         2030 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
         2031 
         2032 /* Treat space at ptr + offset as a chunk */
         2033 #define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
         2034 
         2035 /* extract p's inuse bit */
         2036 #define inuse(p)\
         2037 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
         2038 
         2039 /* set/clear chunk as being inuse without otherwise disturbing */
         2040 #define set_inuse(p)\
         2041 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
         2042 
         2043 #define clear_inuse(p)\
         2044 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
         2045 
         2046 
         2047 /* check/set/clear inuse bits in known places */
         2048 #define inuse_bit_at_offset(p, s)\
         2049  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
         2050 
         2051 #define set_inuse_bit_at_offset(p, s)\
         2052  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
         2053 
         2054 #define clear_inuse_bit_at_offset(p, s)\
         2055  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
         2056 
         2057 
         2058 /* Set size at head, without disturbing its use bit */
         2059 #define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))
         2060 
         2061 /* Set size/use field */
         2062 #define set_head(p, s)       ((p)->size = (s))
         2063 
         2064 /* Set size at footer (only when chunk is not in use) */
         2065 #define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
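
              /*
                A small navigation sketch using the macros above; p stands for some
                hypothetical mchunkptr whose physical neighbours exist:

                  INTERNAL_SIZE_T sz  = chunksize(p);    // size with flag bits masked off
                  mchunkptr       nxt = next_chunk(p);   // chunk physically following p
                  if (!prev_inuse(p)) {
                    mchunkptr prv = prev_chunk(p);       // valid only because PREV_INUSE is clear
                    // p could be consolidated backwards into prv here
                  }
                  if (!inuse(p))
                    set_foot(p, sz);                     // free chunks carry a trailing size
              */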
         2066 
         2067 
         2068 /*
         2069   -------------------- Internal data structures --------------------
         2070 
         2071    All internal state is held in an instance of malloc_state defined
         2072    below. There are no other static variables, except in two optional
         2073    cases: 
         2074    * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above. 
         2075    * If HAVE_MMAP is true, but mmap doesn't support
         2076      MAP_ANONYMOUS, a dummy file descriptor for mmap.
         2077 
         2078    Beware of lots of tricks that minimize the total bookkeeping space
         2079    requirements. The result is a little over 1K bytes (for 4byte
         2080    pointers and size_t.)
         2081 */
         2082 
         2083 /*
         2084   Bins
         2085 
         2086     An array of bin headers for free chunks. Each bin is doubly
         2087     linked.  The bins are approximately proportionally (log) spaced.
         2088     There are a lot of these bins (128). This may look excessive, but
         2089     works very well in practice.  Most bins hold sizes that are
         2090     unusual as malloc request sizes, but are more usual for fragments
         2091     and consolidated sets of chunks, which is what these bins hold, so
         2092     they can be found quickly.  All procedures maintain the invariant
         2093     that no consolidated chunk physically borders another one, so each
         2094     chunk in a list is known to be preceded and followed by either
         2095     inuse chunks or the ends of memory.
         2096 
         2097     Chunks in bins are kept in size order, with ties going to the
         2098     approximately least recently used chunk. Ordering isn't needed
         2099     for the small bins, which all contain the same-sized chunks, but
         2100     facilitates best-fit allocation for larger chunks. These lists
         2101     are just sequential. Keeping them in order almost never requires
         2102     enough traversal to warrant using fancier ordered data
         2103     structures.  
         2104 
         2105     Chunks of the same size are linked with the most
         2106     recently freed at the front, and allocations are taken from the
         2107     back.  This results in LRU (FIFO) allocation order, which tends
         2108     to give each chunk an equal opportunity to be consolidated with
         2109     adjacent freed chunks, resulting in larger free chunks and less
         2110     fragmentation.
         2111 
         2112     To simplify use in double-linked lists, each bin header acts
         2113     as a malloc_chunk. This avoids special-casing for headers.
         2114     But to conserve space and improve locality, we allocate
         2115     only the fd/bk pointers of bins, and then use repositioning tricks
         2116     to treat these as the fields of a malloc_chunk*.  
         2117 */
         2118 
         2119 typedef struct malloc_chunk* mbinptr;
         2120 
         2121 /* addressing -- note that bin_at(0) does not exist */
         2122 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
         2123 
         2124 /* analog of ++bin */
         2125 #define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
         2126 
         2127 /* Reminders about list directionality within bins */
         2128 #define first(b)     ((b)->fd)
         2129 #define last(b)      ((b)->bk)
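
              /*
                How the repositioning trick lines up, assuming pointers and
                INTERNAL_SIZE_T have the same size (so fd sits at offset 2*SIZE_SZ
                within a malloc_chunk):

                  bin_at(m, i)      == (mbinptr)((char*)&(m)->bins[2*i] - 2*SIZE_SZ)
                  bin_at(m, i)->fd  overlays  (m)->bins[2*i]      // i.e. first(bin)
                  bin_at(m, i)->bk  overlays  (m)->bins[2*i + 1]  // i.e. last(bin)

                The prev_size/size fields of such a "header chunk" would overlap the
                previous bin's entries and are never touched.
              */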
         2130 
         2131 /* Take a chunk off a bin list */
         2132 #define unlink(P, BK, FD) {                                            \
         2133   FD = P->fd;                                                          \
         2134   BK = P->bk;                                                          \
         2135   FD->bk = BK;                                                         \
         2136   BK->fd = FD;                                                         \
         2137 }
         2138 
         2139 /*
         2140   Indexing
         2141 
         2142     Bins for sizes < 512 bytes contain chunks of all the same size, spaced
         2143     8 bytes apart. Larger bins are approximately logarithmically spaced:
         2144 
         2145     64 bins of size       8
         2146     32 bins of size      64
         2147     16 bins of size     512
         2148      8 bins of size    4096
         2149      4 bins of size   32768
         2150      2 bins of size  262144
         2151      1 bin  of size what's left
         2152 
         2153     The bins top out around 1MB because we expect to service large
         2154     requests via mmap.
         2155 */
         2156 
         2157 #define NBINS              96
         2158 #define NSMALLBINS         32
         2159 #define SMALLBIN_WIDTH      8
         2160 #define MIN_LARGE_SIZE    256
         2161 
         2162 #define in_smallbin_range(sz)  \
         2163   ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE)
         2164 
         2165 #define smallbin_index(sz)     (((unsigned)(sz)) >> 3)
         2166 
         2167 /*
         2168   Compute index for size. We expect this to be inlined when
         2169   compiled with optimization, else not, which works out well.
         2170 */
         2171 static int largebin_index(unsigned int sz) {
         2172   unsigned int  x = sz >> SMALLBIN_WIDTH; 
         2173   unsigned int m;            /* bit position of highest set bit of x */
         2174 
         2175   if (x >= 0x10000) return NBINS-1;
         2176 
         2177   /* On intel, use BSRL instruction to find highest bit */
         2178 #if defined(__GNUC__) && defined(i386)
         2179 
         2180   __asm__("bsrl %1,%0\n\t"
         2181           : "=r" (m) 
         2182           : "g"  (x));
         2183 
         2184 #else
         2185   {
         2186     /*
         2187       Based on branch-free nlz algorithm in chapter 5 of Henry
         2188       S. Warren Jr's book "Hacker's Delight".
         2189     */
         2190 
         2191     unsigned int n = ((x - 0x100) >> 16) & 8;
         2192     x <<= n; 
         2193     m = ((x - 0x1000) >> 16) & 4;
         2194     n += m; 
         2195     x <<= m; 
         2196     m = ((x - 0x4000) >> 16) & 2;
         2197     n += m; 
         2198     x = (x << m) >> 14;
         2199     m = 13 - n + (x & ~(x>>1));
         2200   }
         2201 #endif
         2202 
         2203   /* Use next 2 bits to create finer-granularity bins */
         2204   return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3);
         2205 }
         2206 
         2207 #define bin_index(sz) \
         2208  ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
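
              /*
                A few concrete values produced by the indexing above:

                  bin_index(16)  ==  2   // small: 16 >> 3
                  bin_index(40)  ==  5   // small bins are 8 bytes apart
                  bin_index(248) == 31   // last bin below MIN_LARGE_SIZE
                  bin_index(256) == 32   // first large bin (== NSMALLBINS)
                  bin_index(320) == 33   // large bins get progressively wider
              */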
         2209 
         2210 /*
         2211   FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the
         2212   first bin that is maintained in sorted order. This must
         2213   be the smallest size corresponding to a given bin.
         2214 
         2215   Normally, this should be MIN_LARGE_SIZE. But you can weaken
         2216   best fit guarantees to sometimes speed up malloc by increasing the value.
         2217   Doing this means that malloc may choose a chunk that is 
         2218   non-best-fitting by up to the width of the bin.
         2219 
         2220   Some useful cutoff values:
         2221       512 - all bins sorted
         2222      2560 - leaves bins <=     64 bytes wide unsorted  
         2223     12288 - leaves bins <=    512 bytes wide unsorted
         2224     65536 - leaves bins <=   4096 bytes wide unsorted
         2225    262144 - leaves bins <=  32768 bytes wide unsorted
         2226        -1 - no bins sorted (not recommended!)
         2227 */
         2228 
         2229 #define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE 
         2230 /* #define FIRST_SORTED_BIN_SIZE 65536 */
         2231 
         2232 /*
         2233   Unsorted chunks
         2234 
         2235     All remainders from chunk splits, as well as all returned chunks,
         2236     are first placed in the "unsorted" bin. They are then placed
         2237     in regular bins after malloc gives them ONE chance to be used before
         2238     binning. So, basically, the unsorted_chunks list acts as a queue,
         2239     with chunks being placed on it in free (and malloc_consolidate),
         2240     and taken off (to be either used or placed in bins) in malloc.
         2241 */
         2242 
         2243 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
         2244 #define unsorted_chunks(M)          (bin_at(M, 1))
         2245 
         2246 /*
         2247   Top
         2248 
         2249     The top-most available chunk (i.e., the one bordering the end of
         2250     available memory) is treated specially. It is never included in
         2251     any bin, is used only if no other chunk is available, and is
         2252     released back to the system if it is very large (see
         2253     M_TRIM_THRESHOLD).  Because top initially
         2254     points to its own bin with initial zero size, thus forcing
         2255     extension on the first malloc request, we avoid having any special
         2256     code in malloc to check whether it even exists yet. But we still
         2257     need to do so when getting memory from system, so we make
         2258     initial_top treat the bin as a legal but unusable chunk during the
         2259     interval between initialization and the first call to
         2260     sYSMALLOc. (This is somewhat delicate, since it relies on
         2261     the 2 preceding words to be zero during this interval as well.)
         2262 */
         2263 
         2264 /* Conveniently, the unsorted bin can be used as dummy top on first call */
         2265 #define initial_top(M)              (unsorted_chunks(M))
         2266 
         2267 /*
         2268   Binmap
         2269 
         2270     To help compensate for the large number of bins, a one-level index
         2271     structure is used for bin-by-bin searching.  `binmap' is a
         2272     bitvector recording whether bins are definitely empty so they can
         2273     be skipped over during traversals.  The bits are NOT always
         2274     cleared as soon as bins are empty, but instead only
         2275     when they are noticed to be empty during traversal in malloc.
         2276 */
         2277 
         2278 /* Conservatively use 32 bits per map word, even if on 64bit system */
         2279 #define BINMAPSHIFT      5
         2280 #define BITSPERMAP       (1U << BINMAPSHIFT)
         2281 #define BINMAPSIZE       (NBINS / BITSPERMAP)
         2282 
         2283 #define idx2block(i)     ((i) >> BINMAPSHIFT)
         2284 #define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
         2285 
         2286 #define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
         2287 #define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
         2288 #define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
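
              /*
                Example of the binmap arithmetic: bin index 37 lives in map word
                idx2block(37) == 1 (37 >> 5), at bit idx2bit(37) == (1U << 5),
                since 37 & 31 == 5.
              */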
         2289 
         2290 /*
         2291   Fastbins
         2292 
         2293     An array of lists holding recently freed small chunks.  Fastbins
         2294     are not doubly linked.  It is faster to single-link them, and
         2295     since chunks are never removed from the middles of these lists,
         2296     double linking is not necessary. Also, unlike regular bins, they
         2297     are not even processed in FIFO order (they use faster LIFO) since
         2298     ordering doesn't much matter in the transient contexts in which
         2299     fastbins are normally used.
         2300 
         2301     Chunks in fastbins keep their inuse bit set, so they cannot
         2302     be consolidated with other free chunks. malloc_consolidate
         2303     releases all chunks in fastbins and consolidates them with
         2304     other free chunks. 
         2305 */
         2306 
         2307 typedef struct malloc_chunk* mfastbinptr;
         2308 
         2309 /* offset 2 to use otherwise unindexable first 2 bins */
         2310 #define fastbin_index(sz)        ((((unsigned int)(sz)) >> 3) - 2)
         2311 
         2312 /* The maximum fastbin request size we support */
         2313 #define MAX_FAST_SIZE     80
         2314 
         2315 #define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)
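
              /*
                For example, assuming 4-byte SIZE_SZ and 8-byte alignment: a minimal
                16-byte chunk maps to fastbin_index(16) == 0, request2size(80) == 88,
                and so NFASTBINS == fastbin_index(88) + 1 == 10.
              */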
         2316 
         2317 /*
         2318   FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
         2319   that triggers automatic consolidation of possibly-surrounding
         2320   fastbin chunks. This is a heuristic, so the exact value should not
         2321   matter too much. It is defined at half the default trim threshold as a
         2322   compromise heuristic to only attempt consolidation if it is likely
         2323   to lead to trimming. However, it is not dynamically tunable, since
         2324   consolidation reduces fragmentation surrounding large chunks even 
         2325   if trimming is not used.
         2326 */
         2327 
         2328 #define FASTBIN_CONSOLIDATION_THRESHOLD  \
         2329   ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1)
         2330 
         2331 /*
         2332   Since the lowest 2 bits in max_fast don't matter in size comparisons, 
         2333   they are used as flags.
         2334 */
         2335 
         2336 /*
         2337   ANYCHUNKS_BIT held in max_fast indicates that there may be any
         2338   freed chunks at all. It is set true when entering a chunk into any
         2339   bin.
         2340 */
         2341 
         2342 #define ANYCHUNKS_BIT        (1U)
         2343 
         2344 #define have_anychunks(M)     (((M)->max_fast &  ANYCHUNKS_BIT))
         2345 #define set_anychunks(M)      ((M)->max_fast |=  ANYCHUNKS_BIT)
         2346 #define clear_anychunks(M)    ((M)->max_fast &= ~ANYCHUNKS_BIT)
         2347 
         2348 /*
         2349   FASTCHUNKS_BIT held in max_fast indicates that there are probably
         2350   some fastbin chunks. It is set true on entering a chunk into any
         2351   fastbin, and cleared only in malloc_consolidate.
         2352 */
         2353 
         2354 #define FASTCHUNKS_BIT        (2U)
         2355 
         2356 #define have_fastchunks(M)   (((M)->max_fast &  FASTCHUNKS_BIT))
         2357 #define set_fastchunks(M)    ((M)->max_fast |=  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
         2358 #define clear_fastchunks(M)  ((M)->max_fast &= ~(FASTCHUNKS_BIT))
         2359 
         2360 /* 
         2361    Set value of max_fast. 
         2362    Use impossibly small value if 0.
         2363 */
         2364 
         2365 #define set_max_fast(M, s) \
         2366   (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
         2367   ((M)->max_fast &  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
         2368 
         2369 #define get_max_fast(M) \
         2370   ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
         2371 
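               /*
                 Worked example (added for exposition; assumes SIZE_SZ == 4 and
                 MALLOC_ALIGNMENT == 8): set_max_fast(M, 64) stores
                 request2size(64) == 72 in max_fast.  Because 72 is a multiple of 8,
                 its two low bits are zero and remain free to hold FASTCHUNKS_BIT
                 and ANYCHUNKS_BIT; get_max_fast(M) masks those bits off and yields
                 72 again.
               */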
         2372 
         2373 /*
         2374   morecore_properties is a status word holding dynamically discovered
         2375   or controlled properties of the morecore function
         2376 */
         2377 
         2378 #define MORECORE_CONTIGUOUS_BIT  (1U)
         2379 
         2380 #define contiguous(M) \
         2381         (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT))
         2382 #define noncontiguous(M) \
         2383         (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT) == 0)
         2384 #define set_contiguous(M) \
         2385         ((M)->morecore_properties |=  MORECORE_CONTIGUOUS_BIT)
         2386 #define set_noncontiguous(M) \
         2387         ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT)
         2388 
         2389 
         2390 /*
         2391    ----------- Internal state representation and initialization -----------
         2392 */
         2393 
         2394 struct malloc_state {
         2395 
         2396   /* The maximum chunk size to be eligible for fastbin */
         2397   INTERNAL_SIZE_T  max_fast;   /* low 2 bits used as flags */
         2398 
         2399   /* Fastbins */
         2400   mfastbinptr      fastbins[NFASTBINS];
         2401 
         2402   /* Base of the topmost chunk -- not otherwise kept in a bin */
         2403   mchunkptr        top;
         2404 
         2405   /* The remainder from the most recent split of a small request */
         2406   mchunkptr        last_remainder;
         2407 
         2408   /* Normal bins packed as described above */
         2409   mchunkptr        bins[NBINS * 2];
         2410 
         2411   /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
         2412   unsigned int     binmap[BINMAPSIZE+1];
         2413 
         2414   /* Tunable parameters */
         2415   CHUNK_SIZE_T     trim_threshold;
         2416   INTERNAL_SIZE_T  top_pad;
         2417   INTERNAL_SIZE_T  mmap_threshold;
         2418 
         2419   /* Memory map support */
         2420   int              n_mmaps;
         2421   int              n_mmaps_max;
         2422   int              max_n_mmaps;
         2423 
         2424   /* Cache malloc_getpagesize */
         2425   unsigned int     pagesize;    
         2426 
         2427   /* Track properties of MORECORE */
         2428   unsigned int     morecore_properties;
         2429 
         2430   /* Statistics */
         2431   INTERNAL_SIZE_T  mmapped_mem;
         2432   INTERNAL_SIZE_T  sbrked_mem;
         2433   INTERNAL_SIZE_T  max_sbrked_mem;
         2434   INTERNAL_SIZE_T  max_mmapped_mem;
         2435   INTERNAL_SIZE_T  max_total_mem;
         2436 };
         2437 
         2438 typedef struct malloc_state *mstate;
         2439 
         2440 /* 
         2441    There is exactly one instance of this struct in this malloc.
         2442    If you are adapting this malloc in a way that does NOT use a static
         2443    malloc_state, you MUST explicitly zero-fill it before using. This
         2444    malloc relies on the property that malloc_state is initialized to
         2445    all zeroes (as is true of C statics).
         2446 */
         2447 
         2448 static struct malloc_state av_;  /* never directly referenced */
         2449 
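               /*
                 Illustrative sketch (not part of the original source): an
                 adaptation that obtains its malloc_state some other way (e.g. out
                 of a mapped region) must establish the all-zero initial state
                 itself, as required above.  The helper name below is hypothetical.
               */
               #if 0
               #include <string.h>                        /* for memset */
               static void setup_arena(struct malloc_state* m)
               {
                 /* dlmalloc assumes an all-zero malloc_state before first use;
                    supply that property explicitly when C static
                    zero-initialization does not apply. */
                 memset(m, 0, sizeof *m);
               }
               #endif
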
         2450 /*
         2451    All uses of av_ are via get_malloc_state().
         2452    At most one "call" to get_malloc_state is made per invocation of
         2453    the public versions of malloc and free, but other routines
          2454    that in turn invoke malloc and/or free may call more than once.
         2455    Also, it is called in check* routines if DEBUG is set.
         2456 */
         2457 
         2458 #define get_malloc_state() (&(av_))
         2459 
         2460 /*
         2461   Initialize a malloc_state struct.
         2462 
          2463   This is called only from within malloc_consolidate, which needs to
          2464   be called in the same contexts anyway.  It is never called directly
         2465   outside of malloc_consolidate because some optimizing compilers try
         2466   to inline it at all call points, which turns out not to be an
         2467   optimization at all. (Inlining it in malloc_consolidate is fine though.)
         2468 */
         2469 
         2470 #if __STD_C
         2471 static void malloc_init_state(mstate av)
         2472 #else
         2473 static void malloc_init_state(av) mstate av;
         2474 #endif
         2475 {
         2476   int     i;
         2477   mbinptr bin;
         2478   
         2479   /* Establish circular links for normal bins */
         2480   for (i = 1; i < NBINS; ++i) { 
         2481     bin = bin_at(av,i);
         2482     bin->fd = bin->bk = bin;
         2483   }
         2484 
         2485   av->top_pad        = DEFAULT_TOP_PAD;
         2486   av->n_mmaps_max    = DEFAULT_MMAP_MAX;
         2487   av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
         2488   av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
         2489 
         2490 #if MORECORE_CONTIGUOUS
         2491   set_contiguous(av);
         2492 #else
         2493   set_noncontiguous(av);
         2494 #endif
         2495 
         2496 
         2497   set_max_fast(av, DEFAULT_MXFAST);
         2498 
         2499   av->top            = initial_top(av);
         2500   av->pagesize       = malloc_getpagesize;
         2501 }
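
               /*
                 Exposition note (added): after malloc_init_state, every normal bin
                 b satisfies b->fd == b->bk == b, which is exactly the emptiness
                 test (last(b) == b) used by the consistency checks and search
                 loops below.
               */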
         2502 
         2503 /* 
         2504    Other internal utilities operating on mstates
         2505 */
         2506 
         2507 #if __STD_C
         2508 static Void_t*  sYSMALLOc(INTERNAL_SIZE_T, mstate);
         2509 static int      sYSTRIm(size_t, mstate);
         2510 static void     malloc_consolidate(mstate);
         2511 static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
         2512 #else
         2513 static Void_t*  sYSMALLOc();
         2514 static int      sYSTRIm();
         2515 static void     malloc_consolidate();
         2516 static Void_t** iALLOc();
         2517 #endif
         2518 
         2519 /*
         2520   Debugging support
         2521 
         2522   These routines make a number of assertions about the states
         2523   of data structures that should be true at all times. If any
         2524   are not true, it's very likely that a user program has somehow
         2525   trashed memory. (It's also possible that there is a coding error
          2526   in malloc, in which case please report it!)
         2527 */
         2528 
         2529 #if ! DEBUG
         2530 
         2531 #define check_chunk(P)
         2532 #define check_free_chunk(P)
         2533 #define check_inuse_chunk(P)
         2534 #define check_remalloced_chunk(P,N)
         2535 #define check_malloced_chunk(P,N)
         2536 #define check_malloc_state()
         2537 
         2538 #else
         2539 #define check_chunk(P)              do_check_chunk(P)
         2540 #define check_free_chunk(P)         do_check_free_chunk(P)
         2541 #define check_inuse_chunk(P)        do_check_inuse_chunk(P)
         2542 #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
         2543 #define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
         2544 #define check_malloc_state()        do_check_malloc_state()
         2545 
         2546 /*
         2547   Properties of all chunks
         2548 */
         2549 
         2550 #if __STD_C
         2551 static void do_check_chunk(mchunkptr p)
         2552 #else
         2553 static void do_check_chunk(p) mchunkptr p;
         2554 #endif
         2555 {
         2556   mstate av = get_malloc_state();
         2557   CHUNK_SIZE_T  sz = chunksize(p);
         2558   /* min and max possible addresses assuming contiguous allocation */
         2559   char* max_address = (char*)(av->top) + chunksize(av->top);
         2560   char* min_address = max_address - av->sbrked_mem;
         2561 
         2562   if (!chunk_is_mmapped(p)) {
         2563     
         2564     /* Has legal address ... */
         2565     if (p != av->top) {
         2566       if (contiguous(av)) {
         2567         assert(((char*)p) >= min_address);
         2568         assert(((char*)p + sz) <= ((char*)(av->top)));
         2569       }
         2570     }
         2571     else {
         2572       /* top size is always at least MINSIZE */
         2573       assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
         2574       /* top predecessor always marked inuse */
         2575       assert(prev_inuse(p));
         2576     }
         2577       
         2578   }
         2579   else {
         2580 #if HAVE_MMAP
         2581     /* address is outside main heap  */
         2582     if (contiguous(av) && av->top != initial_top(av)) {
         2583       assert(((char*)p) < min_address || ((char*)p) > max_address);
         2584     }
         2585     /* chunk is page-aligned */
         2586     assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
         2587     /* mem is aligned */
         2588     assert(aligned_OK(chunk2mem(p)));
         2589 #else
         2590     /* force an appropriate assert violation if debug set */
         2591     assert(!chunk_is_mmapped(p));
         2592 #endif
         2593   }
         2594 }
         2595 
         2596 /*
         2597   Properties of free chunks
         2598 */
         2599 
         2600 #if __STD_C
         2601 static void do_check_free_chunk(mchunkptr p)
         2602 #else
         2603 static void do_check_free_chunk(p) mchunkptr p;
         2604 #endif
         2605 {
         2606   mstate av = get_malloc_state();
         2607 
         2608   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
         2609   mchunkptr next = chunk_at_offset(p, sz);
         2610 
         2611   do_check_chunk(p);
         2612 
         2613   /* Chunk must claim to be free ... */
         2614   assert(!inuse(p));
         2615   assert (!chunk_is_mmapped(p));
         2616 
         2617   /* Unless a special marker, must have OK fields */
         2618   if ((CHUNK_SIZE_T)(sz) >= MINSIZE)
         2619   {
         2620     assert((sz & MALLOC_ALIGN_MASK) == 0);
         2621     assert(aligned_OK(chunk2mem(p)));
         2622     /* ... matching footer field */
         2623     assert(next->prev_size == sz);
         2624     /* ... and is fully consolidated */
         2625     assert(prev_inuse(p));
         2626     assert (next == av->top || inuse(next));
         2627 
         2628     /* ... and has minimally sane links */
         2629     assert(p->fd->bk == p);
         2630     assert(p->bk->fd == p);
         2631   }
         2632   else /* markers are always of size SIZE_SZ */
         2633     assert(sz == SIZE_SZ);
         2634 }
         2635 
         2636 /*
         2637   Properties of inuse chunks
         2638 */
         2639 
         2640 #if __STD_C
         2641 static void do_check_inuse_chunk(mchunkptr p)
         2642 #else
         2643 static void do_check_inuse_chunk(p) mchunkptr p;
         2644 #endif
         2645 {
         2646   mstate av = get_malloc_state();
         2647   mchunkptr next;
         2648   do_check_chunk(p);
         2649 
         2650   if (chunk_is_mmapped(p))
         2651     return; /* mmapped chunks have no next/prev */
         2652 
         2653   /* Check whether it claims to be in use ... */
         2654   assert(inuse(p));
         2655 
         2656   next = next_chunk(p);
         2657 
         2658   /* ... and is surrounded by OK chunks.
         2659     Since more things can be checked with free chunks than inuse ones,
          2660     if an inuse chunk borders them and debug is on, it's worth checking them.
         2661   */
         2662   if (!prev_inuse(p))  {
         2663     /* Note that we cannot even look at prev unless it is not inuse */
         2664     mchunkptr prv = prev_chunk(p);
         2665     assert(next_chunk(prv) == p);
         2666     do_check_free_chunk(prv);
         2667   }
         2668 
         2669   if (next == av->top) {
         2670     assert(prev_inuse(next));
         2671     assert(chunksize(next) >= MINSIZE);
         2672   }
         2673   else if (!inuse(next))
         2674     do_check_free_chunk(next);
         2675 }
         2676 
         2677 /*
         2678   Properties of chunks recycled from fastbins
         2679 */
         2680 
         2681 #if __STD_C
         2682 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
         2683 #else
         2684 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
         2685 #endif
         2686 {
         2687   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
         2688 
         2689   do_check_inuse_chunk(p);
         2690 
         2691   /* Legal size ... */
         2692   assert((sz & MALLOC_ALIGN_MASK) == 0);
         2693   assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
         2694   /* ... and alignment */
         2695   assert(aligned_OK(chunk2mem(p)));
         2696   /* chunk is less than MINSIZE more than request */
         2697   assert((long)(sz) - (long)(s) >= 0);
         2698   assert((long)(sz) - (long)(s + MINSIZE) < 0);
         2699 }
         2700 
         2701 /*
         2702   Properties of nonrecycled chunks at the point they are malloced
         2703 */
         2704 
         2705 #if __STD_C
         2706 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
         2707 #else
         2708 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
         2709 #endif
         2710 {
         2711   /* same as recycled case ... */
         2712   do_check_remalloced_chunk(p, s);
         2713 
         2714   /*
         2715     ... plus,  must obey implementation invariant that prev_inuse is
         2716     always true of any allocated chunk; i.e., that each allocated
         2717     chunk borders either a previously allocated and still in-use
         2718     chunk, or the base of its memory arena. This is ensured
          2719     by making all allocations from the `lowest' part of any found
         2720     chunk.  This does not necessarily hold however for chunks
         2721     recycled via fastbins.
         2722   */
         2723 
         2724   assert(prev_inuse(p));
         2725 }
         2726 
         2727 
         2728 /*
         2729   Properties of malloc_state.
         2730 
         2731   This may be useful for debugging malloc, as well as detecting user
         2732   programmer errors that somehow write into malloc_state.
         2733 
         2734   If you are extending or experimenting with this malloc, you can
         2735   probably figure out how to hack this routine to print out or
         2736   display chunk addresses, sizes, bins, and other instrumentation.
         2737 */
         2738 
         2739 static void do_check_malloc_state()
         2740 {
         2741   mstate av = get_malloc_state();
         2742   int i;
         2743   mchunkptr p;
         2744   mchunkptr q;
         2745   mbinptr b;
         2746   unsigned int binbit;
         2747   int empty;
         2748   unsigned int idx;
         2749   INTERNAL_SIZE_T size;
         2750   CHUNK_SIZE_T  total = 0;
         2751   int max_fast_bin;
         2752 
         2753   /* internal size_t must be no wider than pointer type */
         2754   assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
         2755 
         2756   /* alignment is a power of 2 */
         2757   assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
         2758 
         2759   /* cannot run remaining checks until fully initialized */
         2760   if (av->top == 0 || av->top == initial_top(av))
         2761     return;
         2762 
         2763   /* pagesize is a power of 2 */
         2764   assert((av->pagesize & (av->pagesize-1)) == 0);
         2765 
         2766   /* properties of fastbins */
         2767 
         2768   /* max_fast is in allowed range */
         2769   assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE));
         2770 
         2771   max_fast_bin = fastbin_index(av->max_fast);
         2772 
         2773   for (i = 0; i < NFASTBINS; ++i) {
         2774     p = av->fastbins[i];
         2775 
         2776     /* all bins past max_fast are empty */
         2777     if (i > max_fast_bin)
         2778       assert(p == 0);
         2779 
         2780     while (p != 0) {
         2781       /* each chunk claims to be inuse */
         2782       do_check_inuse_chunk(p);
         2783       total += chunksize(p);
         2784       /* chunk belongs in this bin */
         2785       assert(fastbin_index(chunksize(p)) == i);
         2786       p = p->fd;
         2787     }
         2788   }
         2789 
         2790   if (total != 0)
         2791     assert(have_fastchunks(av));
         2792   else if (!have_fastchunks(av))
         2793     assert(total == 0);
         2794 
         2795   /* check normal bins */
         2796   for (i = 1; i < NBINS; ++i) {
         2797     b = bin_at(av,i);
         2798 
         2799     /* binmap is accurate (except for bin 1 == unsorted_chunks) */
         2800     if (i >= 2) {
         2801       binbit = get_binmap(av,i);
         2802       empty = last(b) == b;
         2803       if (!binbit)
         2804         assert(empty);
         2805       else if (!empty)
         2806         assert(binbit);
         2807     }
         2808 
         2809     for (p = last(b); p != b; p = p->bk) {
         2810       /* each chunk claims to be free */
         2811       do_check_free_chunk(p);
         2812       size = chunksize(p);
         2813       total += size;
         2814       if (i >= 2) {
         2815         /* chunk belongs in bin */
         2816         idx = bin_index(size);
         2817         assert(idx == i);
         2818         /* lists are sorted */
         2819         if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
         2820           assert(p->bk == b || 
         2821                  (CHUNK_SIZE_T)chunksize(p->bk) >= 
         2822                  (CHUNK_SIZE_T)chunksize(p));
         2823         }
         2824       }
         2825       /* chunk is followed by a legal chain of inuse chunks */
         2826       for (q = next_chunk(p);
         2827            (q != av->top && inuse(q) && 
         2828              (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE);
         2829            q = next_chunk(q))
         2830         do_check_inuse_chunk(q);
         2831     }
         2832   }
         2833 
         2834   /* top chunk is OK */
         2835   check_chunk(av->top);
         2836 
         2837   /* sanity checks for statistics */
         2838 
         2839   assert(total <= (CHUNK_SIZE_T)(av->max_total_mem));
         2840   assert(av->n_mmaps >= 0);
         2841   assert(av->n_mmaps <= av->max_n_mmaps);
         2842 
         2843   assert((CHUNK_SIZE_T)(av->sbrked_mem) <=
         2844          (CHUNK_SIZE_T)(av->max_sbrked_mem));
         2845 
         2846   assert((CHUNK_SIZE_T)(av->mmapped_mem) <=
         2847          (CHUNK_SIZE_T)(av->max_mmapped_mem));
         2848 
         2849   assert((CHUNK_SIZE_T)(av->max_total_mem) >=
         2850          (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem));
         2851 }
         2852 #endif
         2853 
         2854 
         2855 /* ----------- Routines dealing with system allocation -------------- */
         2856 
         2857 /*
         2858   sysmalloc handles malloc cases requiring more memory from the system.
         2859   On entry, it is assumed that av->top does not have enough
         2860   space to service request for nb bytes, thus requiring that av->top
         2861   be extended or replaced.
         2862 */
         2863 
         2864 #if __STD_C
         2865 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
         2866 #else
         2867 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
         2868 #endif
         2869 {
         2870   mchunkptr       old_top;        /* incoming value of av->top */
         2871   INTERNAL_SIZE_T old_size;       /* its size */
         2872   char*           old_end;        /* its end address */
         2873 
         2874   long            size;           /* arg to first MORECORE or mmap call */
         2875   char*           brk;            /* return value from MORECORE */
         2876 
         2877   long            correction;     /* arg to 2nd MORECORE call */
         2878   char*           snd_brk;        /* 2nd return val */
         2879 
         2880   INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
         2881   INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
         2882   char*           aligned_brk;    /* aligned offset into brk */
         2883 
         2884   mchunkptr       p;              /* the allocated/returned chunk */
         2885   mchunkptr       remainder;      /* remainder from allocation */
         2886   CHUNK_SIZE_T    remainder_size; /* its size */
         2887 
         2888   CHUNK_SIZE_T    sum;            /* for updating stats */
         2889 
         2890   size_t          pagemask  = av->pagesize - 1;
         2891 
         2892   /*
         2893     If there is space available in fastbins, consolidate and retry
         2894     malloc from scratch rather than getting memory from system.  This
         2895     can occur only if nb is in smallbin range so we didn't consolidate
         2896     upon entry to malloc. It is much easier to handle this case here
         2897     than in malloc proper.
         2898   */
         2899 
         2900   if (have_fastchunks(av)) {
         2901     assert(in_smallbin_range(nb));
         2902     malloc_consolidate(av);
         2903     return mALLOc(nb - MALLOC_ALIGN_MASK);
         2904   }
         2905 
         2906 
         2907 #if HAVE_MMAP
         2908 
         2909   /*
         2910     If have mmap, and the request size meets the mmap threshold, and
         2911     the system supports mmap, and there are few enough currently
         2912     allocated mmapped regions, try to directly map this request
         2913     rather than expanding top.
         2914   */
         2915 
         2916   if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) &&
         2917       (av->n_mmaps < av->n_mmaps_max)) {
         2918 
         2919     char* mm;             /* return value from mmap call*/
         2920 
         2921     /*
         2922       Round up size to nearest page.  For mmapped chunks, the overhead
         2923       is one SIZE_SZ unit larger than for normal chunks, because there
         2924       is no following chunk whose prev_size field could be used.
         2925     */
         2926     size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
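                   /*
                     Worked example (exposition only; assumes SIZE_SZ == 4,
                     MALLOC_ALIGN_MASK == 7 and a 4096-byte page): for nb == 200000,
                     size == (200000 + 4 + 7 + 4095) & ~4095 == 200704, i.e. 49 pages.
                   */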
         2927 
         2928     /* Don't try if size wraps around 0 */
         2929     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
         2930 
         2931       mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
         2932       
         2933       if (mm != (char*)(MORECORE_FAILURE)) {
         2934         
         2935         /*
         2936           The offset to the start of the mmapped region is stored
         2937           in the prev_size field of the chunk. This allows us to adjust
         2938           returned start address to meet alignment requirements here 
         2939           and in memalign(), and still be able to compute proper
         2940           address argument for later munmap in free() and realloc().
         2941         */
         2942         
         2943         front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
         2944         if (front_misalign > 0) {
         2945           correction = MALLOC_ALIGNMENT - front_misalign;
         2946           p = (mchunkptr)(mm + correction);
         2947           p->prev_size = correction;
         2948           set_head(p, (size - correction) |IS_MMAPPED);
         2949         }
         2950         else {
         2951           p = (mchunkptr)mm;
         2952           p->prev_size = 0;
         2953           set_head(p, size|IS_MMAPPED);
         2954         }
         2955         
         2956         /* update statistics */
         2957         
         2958         if (++av->n_mmaps > av->max_n_mmaps) 
         2959           av->max_n_mmaps = av->n_mmaps;
         2960         
         2961         sum = av->mmapped_mem += size;
         2962         if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem)) 
         2963           av->max_mmapped_mem = sum;
         2964         sum += av->sbrked_mem;
         2965         if (sum > (CHUNK_SIZE_T)(av->max_total_mem)) 
         2966           av->max_total_mem = sum;
         2967 
         2968         check_chunk(p);
         2969         
         2970         return chunk2mem(p);
         2971       }
         2972     }
         2973   }
         2974 #endif
         2975 
         2976   /* Record incoming configuration of top */
         2977 
         2978   old_top  = av->top;
         2979   old_size = chunksize(old_top);
         2980   old_end  = (char*)(chunk_at_offset(old_top, old_size));
         2981 
         2982   brk = snd_brk = (char*)(MORECORE_FAILURE); 
         2983 
         2984   /* 
         2985      If not the first time through, we require old_size to be
         2986      at least MINSIZE and to have prev_inuse set.
         2987   */
         2988 
         2989   assert((old_top == initial_top(av) && old_size == 0) || 
         2990          ((CHUNK_SIZE_T) (old_size) >= MINSIZE &&
         2991           prev_inuse(old_top)));
         2992 
         2993   /* Precondition: not enough current space to satisfy nb request */
         2994   assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE));
         2995 
         2996   /* Precondition: all fastbins are consolidated */
         2997   assert(!have_fastchunks(av));
         2998 
         2999 
         3000   /* Request enough space for nb + pad + overhead */
         3001 
         3002   size = nb + av->top_pad + MINSIZE;
         3003 
         3004   /*
         3005     If contiguous, we can subtract out existing space that we hope to
         3006     combine with new space. We add it back later only if
         3007     we don't actually get contiguous space.
         3008   */
         3009 
         3010   if (contiguous(av))
         3011     size -= old_size;
         3012 
         3013   /*
         3014     Round to a multiple of page size.
         3015     If MORECORE is not contiguous, this ensures that we only call it
         3016     with whole-page arguments.  And if MORECORE is contiguous and
         3017     this is not first time through, this preserves page-alignment of
         3018     previous calls. Otherwise, we correct to page-align below.
         3019   */
         3020 
         3021   size = (size + pagemask) & ~pagemask;
         3022 
         3023   /*
         3024     Don't try to call MORECORE if argument is so big as to appear
         3025     negative. Note that since mmap takes size_t arg, it may succeed
         3026     below even if we cannot call MORECORE.
         3027   */
         3028 
         3029   if (size > 0) 
         3030     brk = (char*)(MORECORE(size));
         3031 
         3032   /*
         3033     If have mmap, try using it as a backup when MORECORE fails or
         3034     cannot be used. This is worth doing on systems that have "holes" in
         3035     address space, so sbrk cannot extend to give contiguous space, but
         3036     space is available elsewhere.  Note that we ignore mmap max count
         3037     and threshold limits, since the space will not be used as a
         3038     segregated mmap region.
         3039   */
         3040 
         3041 #if HAVE_MMAP
         3042   if (brk == (char*)(MORECORE_FAILURE)) {
         3043 
         3044     /* Cannot merge with old top, so add its size back in */
         3045     if (contiguous(av))
         3046       size = (size + old_size + pagemask) & ~pagemask;
         3047 
         3048     /* If we are relying on mmap as backup, then use larger units */
         3049     if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE))
         3050       size = MMAP_AS_MORECORE_SIZE;
         3051 
         3052     /* Don't try if size wraps around 0 */
         3053     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
         3054 
         3055       brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
         3056       
         3057       if (brk != (char*)(MORECORE_FAILURE)) {
         3058         
         3059         /* We do not need, and cannot use, another sbrk call to find end */
         3060         snd_brk = brk + size;
         3061         
         3062         /* 
         3063            Record that we no longer have a contiguous sbrk region. 
         3064            After the first time mmap is used as backup, we do not
         3065            ever rely on contiguous space since this could incorrectly
         3066            bridge regions.
         3067         */
         3068         set_noncontiguous(av);
         3069       }
         3070     }
         3071   }
         3072 #endif
         3073 
         3074   if (brk != (char*)(MORECORE_FAILURE)) {
         3075     av->sbrked_mem += size;
         3076 
         3077     /*
         3078       If MORECORE extends previous space, we can likewise extend top size.
         3079     */
         3080     
         3081     if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
         3082       set_head(old_top, (size + old_size) | PREV_INUSE);
         3083     }
         3084 
         3085     /*
         3086       Otherwise, make adjustments:
         3087       
         3088       * If the first time through or noncontiguous, we need to call sbrk
         3089         just to find out where the end of memory lies.
         3090 
         3091       * We need to ensure that all returned chunks from malloc will meet
         3092         MALLOC_ALIGNMENT
         3093 
         3094       * If there was an intervening foreign sbrk, we need to adjust sbrk
          3095         request size to account for the fact that we will not be able to
         3096         combine new space with existing space in old_top.
         3097 
         3098       * Almost all systems internally allocate whole pages at a time, in
         3099         which case we might as well use the whole last page of request.
         3100         So we allocate enough more memory to hit a page boundary now,
         3101         which in turn causes future contiguous calls to page-align.
         3102     */
         3103     
         3104     else {
         3105       front_misalign = 0;
         3106       end_misalign = 0;
         3107       correction = 0;
         3108       aligned_brk = brk;
         3109 
         3110       /*
         3111         If MORECORE returns an address lower than we have seen before,
         3112         we know it isn't really contiguous.  This and some subsequent
         3113         checks help cope with non-conforming MORECORE functions and
         3114         the presence of "foreign" calls to MORECORE from outside of
         3115         malloc or by other threads.  We cannot guarantee to detect
         3116         these in all cases, but cope with the ones we do detect.
         3117       */
         3118       if (contiguous(av) && old_size != 0 && brk < old_end) {
         3119         set_noncontiguous(av);
         3120       }
         3121       
         3122       /* handle contiguous cases */
         3123       if (contiguous(av)) { 
         3124 
         3125         /* 
         3126            We can tolerate forward non-contiguities here (usually due
         3127            to foreign calls) but treat them as part of our space for
         3128            stats reporting.
         3129         */
         3130         if (old_size != 0) 
         3131           av->sbrked_mem += brk - old_end;
         3132         
         3133         /* Guarantee alignment of first new chunk made from this space */
         3134 
         3135         front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
         3136         if (front_misalign > 0) {
         3137 
         3138           /*
         3139             Skip over some bytes to arrive at an aligned position.
         3140             We don't need to specially mark these wasted front bytes.
         3141             They will never be accessed anyway because
         3142             prev_inuse of av->top (and any chunk created from its start)
         3143             is always true after initialization.
         3144           */
         3145 
         3146           correction = MALLOC_ALIGNMENT - front_misalign;
         3147           aligned_brk += correction;
         3148         }
         3149         
         3150         /*
         3151           If this isn't adjacent to existing space, then we will not
         3152           be able to merge with old_top space, so must add to 2nd request.
         3153         */
         3154         
         3155         correction += old_size;
         3156         
         3157         /* Extend the end address to hit a page boundary */
         3158         end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
         3159         correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
         3160         
         3161         assert(correction >= 0);
         3162         snd_brk = (char*)(MORECORE(correction));
         3163         
         3164         if (snd_brk == (char*)(MORECORE_FAILURE)) {
         3165           /*
         3166             If can't allocate correction, try to at least find out current
         3167             brk.  It might be enough to proceed without failing.
         3168           */
         3169           correction = 0;
         3170           snd_brk = (char*)(MORECORE(0));
         3171         }
         3172         else if (snd_brk < brk) {
         3173           /*
         3174             If the second call gives noncontiguous space even though
         3175             it says it won't, the only course of action is to ignore
         3176             results of second call, and conservatively estimate where
         3177             the first call left us. Also set noncontiguous, so this
         3178             won't happen again, leaving at most one hole.
         3179             
         3180             Note that this check is intrinsically incomplete.  Because
         3181             MORECORE is allowed to give more space than we ask for,
         3182             there is no reliable way to detect a noncontiguity
         3183             producing a forward gap for the second call.
         3184           */
         3185           snd_brk = brk + size;
         3186           correction = 0;
         3187           set_noncontiguous(av);
         3188         }
         3189 
         3190       }
         3191       
         3192       /* handle non-contiguous cases */
         3193       else { 
         3194         /* MORECORE/mmap must correctly align */
         3195         assert(aligned_OK(chunk2mem(brk)));
         3196         
         3197         /* Find out current end of memory */
         3198         if (snd_brk == (char*)(MORECORE_FAILURE)) {
         3199           snd_brk = (char*)(MORECORE(0));
         3200           av->sbrked_mem += snd_brk - brk - size;
         3201         }
         3202       }
         3203       
         3204       /* Adjust top based on results of second sbrk */
         3205       if (snd_brk != (char*)(MORECORE_FAILURE)) {
         3206         av->top = (mchunkptr)aligned_brk;
         3207         set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
         3208         av->sbrked_mem += correction;
         3209      
         3210         /*
         3211           If not the first time through, we either have a
         3212           gap due to foreign sbrk or a non-contiguous region.  Insert a
         3213           double fencepost at old_top to prevent consolidation with space
         3214           we don't own. These fenceposts are artificial chunks that are
         3215           marked as inuse and are in any case too small to use.  We need
         3216           two to make sizes and alignments work out.
         3217         */
         3218    
         3219         if (old_size != 0) {
         3220           /* 
         3221              Shrink old_top to insert fenceposts, keeping size a
         3222              multiple of MALLOC_ALIGNMENT. We know there is at least
         3223              enough space in old_top to do this.
         3224           */
         3225           old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
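                         /*
                           Worked example (exposition only; assumes SIZE_SZ == 4 and
                           MALLOC_ALIGN_MASK == 7): an old_size of 48 shrinks to
                           (48 - 12) & ~7 == 32, leaving 16 bytes above it for the
                           two SIZE_SZ-sized fenceposts written just below.
                         */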
         3226           set_head(old_top, old_size | PREV_INUSE);
         3227           
         3228           /*
         3229             Note that the following assignments completely overwrite
         3230             old_top when old_size was previously MINSIZE.  This is
         3231             intentional. We need the fencepost, even if old_top otherwise gets
         3232             lost.
         3233           */
         3234           chunk_at_offset(old_top, old_size          )->size =
         3235             SIZE_SZ|PREV_INUSE;
         3236 
         3237           chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
         3238             SIZE_SZ|PREV_INUSE;
         3239 
         3240           /* 
         3241              If possible, release the rest, suppressing trimming.
         3242           */
         3243           if (old_size >= MINSIZE) {
         3244             INTERNAL_SIZE_T tt = av->trim_threshold;
         3245             av->trim_threshold = (INTERNAL_SIZE_T)(-1);
         3246             fREe(chunk2mem(old_top));
         3247             av->trim_threshold = tt;
         3248           }
         3249         }
         3250       }
         3251     }
         3252     
         3253     /* Update statistics */
         3254     sum = av->sbrked_mem;
         3255     if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem))
         3256       av->max_sbrked_mem = sum;
         3257     
         3258     sum += av->mmapped_mem;
         3259     if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
         3260       av->max_total_mem = sum;
         3261 
         3262     check_malloc_state();
         3263     
         3264     /* finally, do the allocation */
         3265 
         3266     p = av->top;
         3267     size = chunksize(p);
         3268     
         3269     /* check that one of the above allocation paths succeeded */
         3270     if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
         3271       remainder_size = size - nb;
         3272       remainder = chunk_at_offset(p, nb);
         3273       av->top = remainder;
         3274       set_head(p, nb | PREV_INUSE);
         3275       set_head(remainder, remainder_size | PREV_INUSE);
         3276       check_malloced_chunk(p, nb);
         3277       return chunk2mem(p);
         3278     }
         3279 
         3280   }
         3281 
         3282   /* catch all failure paths */
         3283   MALLOC_FAILURE_ACTION;
         3284   return 0;
         3285 }
         3286 
         3287 
         3288 
         3289 
         3290 /*
         3291   sYSTRIm is an inverse of sorts to sYSMALLOc.  It gives memory back
         3292   to the system (via negative arguments to sbrk) if there is unused
         3293   memory at the `high' end of the malloc pool. It is called
         3294   automatically by free() when top space exceeds the trim
         3295   threshold. It is also called by the public malloc_trim routine.  It
         3296   returns 1 if it actually released any memory, else 0.
         3297 */
         3298 
         3299 #if __STD_C
         3300 static int sYSTRIm(size_t pad, mstate av)
         3301 #else
         3302 static int sYSTRIm(pad, av) size_t pad; mstate av;
         3303 #endif
         3304 {
         3305   long  top_size;        /* Amount of top-most memory */
         3306   long  extra;           /* Amount to release */
         3307   long  released;        /* Amount actually released */
         3308   char* current_brk;     /* address returned by pre-check sbrk call */
         3309   char* new_brk;         /* address returned by post-check sbrk call */
         3310   size_t pagesz;
         3311 
         3312   pagesz = av->pagesize;
         3313   top_size = chunksize(av->top);
         3314   
         3315   /* Release in pagesize units, keeping at least one page */
         3316   extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
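                 /*
                   Worked example (exposition only; assumes MINSIZE == 16 and a
                   4096-byte page): with pad == 0 and top_size == 100000,
                   extra == ((100000 - 16 + 4095) / 4096 - 1) * 4096 == 24 * 4096
                   == 98304, so roughly 24 pages can be handed back while at least
                   one page stays in top.
                 */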
         3317   
         3318   if (extra > 0) {
         3319     
         3320     /*
         3321       Only proceed if end of memory is where we last set it.
         3322       This avoids problems if there were foreign sbrk calls.
         3323     */
         3324     current_brk = (char*)(MORECORE(0));
         3325     if (current_brk == (char*)(av->top) + top_size) {
         3326       
         3327       /*
         3328         Attempt to release memory. We ignore MORECORE return value,
         3329         and instead call again to find out where new end of memory is.
          3330         This avoids problems if the first call releases less than we asked,
          3331         or if failure somehow altered the brk value. (We could still
         3332         encounter problems if it altered brk in some very bad way,
         3333         but the only thing we can do is adjust anyway, which will cause
         3334         some downstream failure.)
         3335       */
         3336       
         3337       MORECORE(-extra);
         3338       new_brk = (char*)(MORECORE(0));
         3339       
         3340       if (new_brk != (char*)MORECORE_FAILURE) {
         3341         released = (long)(current_brk - new_brk);
         3342         
         3343         if (released != 0) {
         3344           /* Success. Adjust top. */
         3345           av->sbrked_mem -= released;
         3346           set_head(av->top, (top_size - released) | PREV_INUSE);
         3347           check_malloc_state();
         3348           return 1;
         3349         }
         3350       }
         3351     }
         3352   }
         3353   return 0;
         3354 }
         3355 
         3356 /*
         3357   ------------------------------ malloc ------------------------------
         3358 */
         3359 
         3360 
         3361 #if __STD_C
         3362 Void_t* mALLOc(size_t bytes)
         3363 #else
         3364   Void_t* mALLOc(bytes) size_t bytes;
         3365 #endif
         3366 {
         3367   mstate av = get_malloc_state();
         3368 
         3369   INTERNAL_SIZE_T nb;               /* normalized request size */
         3370   unsigned int    idx;              /* associated bin index */
         3371   mbinptr         bin;              /* associated bin */
         3372   mfastbinptr*    fb;               /* associated fastbin */
         3373 
         3374   mchunkptr       victim;           /* inspected/selected chunk */
         3375   INTERNAL_SIZE_T size;             /* its size */
         3376   int             victim_index;     /* its bin index */
         3377 
         3378   mchunkptr       remainder;        /* remainder from a split */
         3379   CHUNK_SIZE_T    remainder_size;   /* its size */
         3380 
         3381   unsigned int    block;            /* bit map traverser */
         3382   unsigned int    bit;              /* bit map traverser */
         3383   unsigned int    map;              /* current word of binmap */
         3384 
         3385   mchunkptr       fwd;              /* misc temp for linking */
         3386   mchunkptr       bck;              /* misc temp for linking */
         3387 
         3388   /*
         3389     Convert request size to internal form by adding SIZE_SZ bytes
         3390     overhead plus possibly more to obtain necessary alignment and/or
         3391     to obtain a size of at least MINSIZE, the smallest allocatable
         3392     size. Also, checked_request2size traps (returning 0) request sizes
         3393     that are so large that they wrap around zero when padded and
         3394     aligned.
         3395   */
         3396 
         3397   checked_request2size(bytes, nb);
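                 /*
                   Worked example (exposition only; assumes SIZE_SZ == 4,
                   MALLOC_ALIGNMENT == 8 and MINSIZE == 16): a 12-byte request
                   becomes nb == 16 (12 bytes plus SIZE_SZ of overhead, already
                   8-aligned), and a 0-byte request is padded up to MINSIZE == 16.
                 */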
         3398 
         3399   /*
         3400     Bypass search if no frees yet
         3401    */
         3402   if (!have_anychunks(av)) {
         3403     if (av->max_fast == 0) /* initialization check */
         3404       malloc_consolidate(av);
         3405     goto use_top;
         3406   }
         3407 
         3408   /*
         3409     If the size qualifies as a fastbin, first check corresponding bin.
         3410   */
         3411 
         3412   if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) { 
         3413     fb = &(av->fastbins[(fastbin_index(nb))]);
         3414     if ( (victim = *fb) != 0) {
         3415       *fb = victim->fd;
         3416       check_remalloced_chunk(victim, nb);
         3417       return chunk2mem(victim);
         3418     }
         3419   }
         3420 
         3421   /*
         3422     If a small request, check regular bin.  Since these "smallbins"
         3423     hold one size each, no searching within bins is necessary.
         3424     (For a large request, we need to wait until unsorted chunks are
         3425     processed to find best fit. But for small ones, fits are exact
         3426     anyway, so we can check now, which is faster.)
         3427   */
         3428 
         3429   if (in_smallbin_range(nb)) {
         3430     idx = smallbin_index(nb);
         3431     bin = bin_at(av,idx);
         3432 
         3433     if ( (victim = last(bin)) != bin) {
         3434       bck = victim->bk;
         3435       set_inuse_bit_at_offset(victim, nb);
         3436       bin->bk = bck;
         3437       bck->fd = bin;
         3438       
         3439       check_malloced_chunk(victim, nb);
         3440       return chunk2mem(victim);
         3441     }
         3442   }
         3443 
         3444   /* 
         3445      If this is a large request, consolidate fastbins before continuing.
         3446      While it might look excessive to kill all fastbins before
         3447      even seeing if there is space available, this avoids
         3448      fragmentation problems normally associated with fastbins.
         3449      Also, in practice, programs tend to have runs of either small or
         3450      large requests, but less often mixtures, so consolidation is not 
          3451      invoked all that often in most programs. And the programs in
          3452      which it is invoked frequently otherwise tend to fragment.
         3453   */
         3454 
         3455   else {
         3456     idx = largebin_index(nb);
         3457     if (have_fastchunks(av)) 
         3458       malloc_consolidate(av);
         3459   }
         3460 
         3461   /*
         3462     Process recently freed or remaindered chunks, taking one only if
          3463     it is an exact fit, or, if this is a small request, the chunk is the
          3464     remainder from the most recent non-exact fit.  Place other traversed
          3465     chunks in bins.  Note that this step is the only place in any routine
          3466     where chunks are placed in bins.
         3467   */
         3468     
         3469   while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
         3470     bck = victim->bk;
         3471     size = chunksize(victim);
         3472     
         3473     /* 
         3474        If a small request, try to use last remainder if it is the
         3475        only chunk in unsorted bin.  This helps promote locality for
         3476        runs of consecutive small requests. This is the only
         3477        exception to best-fit, and applies only when there is
         3478        no exact fit for a small chunk.
         3479     */
         3480     
         3481     if (in_smallbin_range(nb) && 
         3482         bck == unsorted_chunks(av) &&
         3483         victim == av->last_remainder &&
         3484         (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
         3485       
         3486       /* split and reattach remainder */
         3487       remainder_size = size - nb;
         3488       remainder = chunk_at_offset(victim, nb);
         3489       unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
         3490       av->last_remainder = remainder; 
         3491       remainder->bk = remainder->fd = unsorted_chunks(av);
         3492       
         3493       set_head(victim, nb | PREV_INUSE);
         3494       set_head(remainder, remainder_size | PREV_INUSE);
         3495       set_foot(remainder, remainder_size);
         3496       
         3497       check_malloced_chunk(victim, nb);
         3498       return chunk2mem(victim);
         3499     }
         3500     
         3501     /* remove from unsorted list */
         3502     unsorted_chunks(av)->bk = bck;
         3503     bck->fd = unsorted_chunks(av);
         3504     
         3505     /* Take now instead of binning if exact fit */
         3506     
         3507     if (size == nb) {
         3508       set_inuse_bit_at_offset(victim, size);
         3509       check_malloced_chunk(victim, nb);
         3510       return chunk2mem(victim);
         3511     }
         3512     
         3513     /* place chunk in bin */
         3514     
         3515     if (in_smallbin_range(size)) {
         3516       victim_index = smallbin_index(size);
         3517       bck = bin_at(av, victim_index);
         3518       fwd = bck->fd;
         3519     }
         3520     else {
         3521       victim_index = largebin_index(size);
         3522       bck = bin_at(av, victim_index);
         3523       fwd = bck->fd;
         3524       
         3525       if (fwd != bck) {
         3526         /* if smaller than smallest, place first */
         3527         if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) {
         3528           fwd = bck;
         3529           bck = bck->bk;
         3530         }
         3531         else if ((CHUNK_SIZE_T)(size) >= 
         3532                  (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
         3533           
         3534           /* maintain large bins in sorted order */
         3535           size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
         3536           while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size)) 
         3537             fwd = fwd->fd;
         3538           bck = fwd->bk;
         3539         }
         3540       }
         3541     }
         3542       
         3543     mark_bin(av, victim_index);
         3544     victim->bk = bck;
         3545     victim->fd = fwd;
         3546     fwd->bk = victim;
         3547     bck->fd = victim;
         3548   }
         3549   
         3550   /*
         3551     If a large request, scan through the chunks of current bin to
         3552     find one that fits.  (This will be the smallest that fits unless
         3553     FIRST_SORTED_BIN_SIZE has been changed from default.)  This is
         3554     the only step where an unbounded number of chunks might be
         3555     scanned without doing anything useful with them. However the
         3556     lists tend to be short.
         3557   */
         3558   
         3559   if (!in_smallbin_range(nb)) {
         3560     bin = bin_at(av, idx);
         3561     
         3562     for (victim = last(bin); victim != bin; victim = victim->bk) {
         3563       size = chunksize(victim);
         3564       
         3565       if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) {
         3566         remainder_size = size - nb;
         3567         unlink(victim, bck, fwd);
         3568         
         3569         /* Exhaust */
         3570         if (remainder_size < MINSIZE)  {
         3571           set_inuse_bit_at_offset(victim, size);
         3572           check_malloced_chunk(victim, nb);
         3573           return chunk2mem(victim);
         3574         }
         3575         /* Split */
         3576         else {
         3577           remainder = chunk_at_offset(victim, nb);
         3578           unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
         3579           remainder->bk = remainder->fd = unsorted_chunks(av);
         3580           set_head(victim, nb | PREV_INUSE);
         3581           set_head(remainder, remainder_size | PREV_INUSE);
         3582           set_foot(remainder, remainder_size);
         3583           check_malloced_chunk(victim, nb);
         3584           return chunk2mem(victim);
         3585         } 
         3586       }
         3587     }    
         3588   }
         3589 
         3590   /*
         3591     Search for a chunk by scanning bins, starting with next largest
         3592     bin. This search is strictly by best-fit; i.e., the smallest
         3593     (with ties going to approximately the least recently used) chunk
         3594     that fits is selected.
         3595     
         3596     The bitmap avoids needing to check that most blocks are nonempty.
         3597   */
         3598     
         3599   ++idx;
         3600   bin = bin_at(av,idx);
         3601   block = idx2block(idx);
         3602   map = av->binmap[block];
         3603   bit = idx2bit(idx);
         3604   
         3605   for (;;) {
         3606     
         3607     /* Skip rest of block if there are no more set bits in this block.  */
         3608     if (bit > map || bit == 0) {
         3609       do {
         3610         if (++block >= BINMAPSIZE)  /* out of bins */
         3611           goto use_top;
         3612       } while ( (map = av->binmap[block]) == 0);
         3613       
         3614       bin = bin_at(av, (block << BINMAPSHIFT));
         3615       bit = 1;
         3616     }
         3617     
         3618     /* Advance to bin with set bit. There must be one. */
         3619     while ((bit & map) == 0) {
         3620       bin = next_bin(bin);
         3621       bit <<= 1;
         3622       assert(bit != 0);
         3623     }
         3624     
         3625     /* Inspect the bin. It is likely to be non-empty */
         3626     victim = last(bin);
         3627     
         3628     /*  If a false alarm (empty bin), clear the bit. */
         3629     if (victim == bin) {
         3630       av->binmap[block] = map &= ~bit; /* Write through */
         3631       bin = next_bin(bin);
         3632       bit <<= 1;
         3633     }
         3634     
         3635     else {
         3636       size = chunksize(victim);
         3637       
         3638       /*  We know the first chunk in this bin is big enough to use. */
         3639       assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb));
         3640       
         3641       remainder_size = size - nb;
         3642       
         3643       /* unlink */
         3644       bck = victim->bk;
         3645       bin->bk = bck;
         3646       bck->fd = bin;
         3647       
         3648       /* Exhaust */
         3649       if (remainder_size < MINSIZE) {
         3650         set_inuse_bit_at_offset(victim, size);
         3651         check_malloced_chunk(victim, nb);
         3652         return chunk2mem(victim);
         3653       }
         3654       
         3655       /* Split */
         3656       else {
         3657         remainder = chunk_at_offset(victim, nb);
         3658         
         3659         unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
         3660         remainder->bk = remainder->fd = unsorted_chunks(av);
         3661         /* advertise as last remainder */
         3662         if (in_smallbin_range(nb)) 
         3663           av->last_remainder = remainder; 
         3664         
         3665         set_head(victim, nb | PREV_INUSE);
         3666         set_head(remainder, remainder_size | PREV_INUSE);
         3667         set_foot(remainder, remainder_size);
         3668         check_malloced_chunk(victim, nb);
         3669         return chunk2mem(victim);
         3670       }
         3671     }
         3672   }
         3673 
         3674   use_top:    
         3675   /*
         3676     If large enough, split off the chunk bordering the end of memory
         3677     (held in av->top). Note that this is in accord with the best-fit
         3678     search rule.  In effect, av->top is treated as larger (and thus
         3679     less well fitting) than any other available chunk since it can
         3680     be extended to be as large as necessary (up to system
         3681     limitations).
         3682     
         3683     We require that av->top always exists (i.e., has size >=
         3684     MINSIZE) after initialization, so if it would otherwise be
         3685     exhausted by the current request, it is replenished. (The main
         3686     reason for ensuring it exists is that we may need MINSIZE space
         3687     to put in fenceposts in sysmalloc.)
         3688   */
         3689   
         3690   victim = av->top;
         3691   size = chunksize(victim);
         3692   
         3693   if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
         3694     remainder_size = size - nb;
         3695     remainder = chunk_at_offset(victim, nb);
         3696     av->top = remainder;
         3697     set_head(victim, nb | PREV_INUSE);
         3698     set_head(remainder, remainder_size | PREV_INUSE);
         3699     
         3700     check_malloced_chunk(victim, nb);
         3701     return chunk2mem(victim);
         3702   }
         3703   
         3704   /* 
         3705      If no space in top, relay to handle system-dependent cases 
         3706   */
         3707   return sYSMALLOc(nb, av);    
         3708 }
         3709 
         3710 /*
         3711   ------------------------------ free ------------------------------
         3712 */
         3713 
         3714 #if __STD_C
         3715 void fREe(Void_t* mem)
         3716 #else
         3717 void fREe(mem) Void_t* mem;
         3718 #endif
         3719 {
         3720   mstate av = get_malloc_state();
         3721 
         3722   mchunkptr       p;           /* chunk corresponding to mem */
         3723   INTERNAL_SIZE_T size;        /* its size */
         3724   mfastbinptr*    fb;          /* associated fastbin */
         3725   mchunkptr       nextchunk;   /* next contiguous chunk */
         3726   INTERNAL_SIZE_T nextsize;    /* its size */
         3727   int             nextinuse;   /* true if nextchunk is used */
         3728   INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
         3729   mchunkptr       bck;         /* misc temp for linking */
         3730   mchunkptr       fwd;         /* misc temp for linking */
         3731 
         3732   /* free(0) has no effect */
         3733   if (mem != 0) {
         3734     p = mem2chunk(mem);
         3735     size = chunksize(p);
         3736 
         3737     check_inuse_chunk(p);
         3738 
         3739     /*
         3740       If eligible, place chunk on a fastbin so it can be found
         3741       and used quickly in malloc.
         3742     */
         3743 
         3744     if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast)
         3745 
         3746 #if TRIM_FASTBINS
         3747         /* 
         3748            If TRIM_FASTBINS set, don't place chunks
         3749            bordering top into fastbins
         3750         */
         3751         && (chunk_at_offset(p, size) != av->top)
         3752 #endif
         3753         ) {
         3754 
         3755       set_fastchunks(av);
         3756       fb = &(av->fastbins[fastbin_index(size)]);
         3757       p->fd = *fb;
         3758       *fb = p;
         3759     }
         3760 
         3761     /*
         3762        Consolidate other non-mmapped chunks as they arrive.
         3763     */
         3764 
         3765     else if (!chunk_is_mmapped(p)) {
         3766       set_anychunks(av);
         3767 
         3768       nextchunk = chunk_at_offset(p, size);
         3769       nextsize = chunksize(nextchunk);
         3770 
         3771       /* consolidate backward */
         3772       if (!prev_inuse(p)) {
         3773         prevsize = p->prev_size;
         3774         size += prevsize;
         3775         p = chunk_at_offset(p, -((long) prevsize));
         3776         unlink(p, bck, fwd);
         3777       }
         3778 
         3779       if (nextchunk != av->top) {
         3780         /* get and clear inuse bit */
         3781         nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
         3782         set_head(nextchunk, nextsize);
         3783 
         3784         /* consolidate forward */
         3785         if (!nextinuse) {
         3786           unlink(nextchunk, bck, fwd);
         3787           size += nextsize;
         3788         }
         3789 
         3790         /*
         3791           Place the chunk in unsorted chunk list. Chunks are
         3792           not placed into regular bins until after they have
         3793           been given one chance to be used in malloc.
         3794         */
         3795 
         3796         bck = unsorted_chunks(av);
         3797         fwd = bck->fd;
         3798         p->bk = bck;
         3799         p->fd = fwd;
         3800         bck->fd = p;
         3801         fwd->bk = p;
         3802 
         3803         set_head(p, size | PREV_INUSE);
         3804         set_foot(p, size);
         3805         
         3806         check_free_chunk(p);
         3807       }
         3808 
         3809       /*
         3810          If the chunk borders the current high end of memory,
         3811          consolidate into top
         3812       */
         3813 
         3814       else {
         3815         size += nextsize;
         3816         set_head(p, size | PREV_INUSE);
         3817         av->top = p;
         3818         check_chunk(p);
         3819       }
         3820 
         3821       /*
         3822         If freeing a large space, consolidate possibly-surrounding
         3823         chunks. Then, if the total unused topmost memory exceeds trim
         3824         threshold, ask malloc_trim to reduce top.
         3825 
         3826         Unless max_fast is 0, we don't know if there are fastbins
         3827         bordering top, so we cannot tell for sure whether threshold
         3828         has been reached unless fastbins are consolidated.  But we
         3829         don't want to consolidate on each free.  As a compromise,
         3830         consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
         3831         is reached.
         3832       */
         3833 
         3834       if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) { 
         3835         if (have_fastchunks(av)) 
         3836           malloc_consolidate(av);
         3837 
         3838 #ifndef MORECORE_CANNOT_TRIM        
         3839         if ((CHUNK_SIZE_T)(chunksize(av->top)) >= 
         3840             (CHUNK_SIZE_T)(av->trim_threshold))
         3841           sYSTRIm(av->top_pad, av);
         3842 #endif
         3843       }
         3844 
         3845     }
         3846     /*
         3847       If the chunk was allocated via mmap, release via munmap()
         3848       Note that if HAVE_MMAP is false but chunk_is_mmapped is
         3849       true, then the user must have overwritten memory. There's nothing
         3850       we can do to catch this error unless DEBUG is set, in which case
         3851       check_inuse_chunk (above) will have triggered an error.
         3852     */
         3853 
         3854     else {
         3855 #if HAVE_MMAP
         3856       int ret;
         3857       INTERNAL_SIZE_T offset = p->prev_size;
         3858       av->n_mmaps--;
         3859       av->mmapped_mem -= (size + offset);
         3860       ret = munmap((char*)p - offset, size + offset);
         3861       /* munmap returns non-zero on failure */
         3862       assert(ret == 0);
         3863 #endif
         3864     }
         3865   }
         3866 }
         3867 
         3868 /*
         3869   ------------------------- malloc_consolidate -------------------------
         3870 
         3871   malloc_consolidate is a specialized version of free() that tears
         3872   down chunks held in fastbins.  Free itself cannot be used for this
         3873   purpose since, among other things, it might place chunks back onto
         3874   fastbins.  So, instead, we need to use a minor variant of the same
         3875   code.
         3876   
         3877   Also, because this routine needs to be called the first time through
         3878   malloc anyway, it turns out to be the perfect place to trigger
         3879   initialization code.
         3880 */
         3881 
         3882 #if __STD_C
         3883 static void malloc_consolidate(mstate av)
         3884 #else
         3885 static void malloc_consolidate(av) mstate av;
         3886 #endif
         3887 {
         3888   mfastbinptr*    fb;                 /* current fastbin being consolidated */
         3889   mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
         3890   mchunkptr       p;                  /* current chunk being consolidated */
         3891   mchunkptr       nextp;              /* next chunk to consolidate */
         3892   mchunkptr       unsorted_bin;       /* bin header */
         3893   mchunkptr       first_unsorted;     /* chunk to link to */
         3894 
         3895   /* These have same use as in free() */
         3896   mchunkptr       nextchunk;
         3897   INTERNAL_SIZE_T size;
         3898   INTERNAL_SIZE_T nextsize;
         3899   INTERNAL_SIZE_T prevsize;
         3900   int             nextinuse;
         3901   mchunkptr       bck;
         3902   mchunkptr       fwd;
         3903 
         3904   /*
         3905     If max_fast is 0, we know that av hasn't
         3906     yet been initialized, in which case do so below
         3907   */
         3908 
         3909   if (av->max_fast != 0) {
         3910     clear_fastchunks(av);
         3911 
         3912     unsorted_bin = unsorted_chunks(av);
         3913 
         3914     /*
         3915       Remove each chunk from its fast bin and consolidate it, then place
         3916       it in the unsorted bin. Among other reasons for doing this,
         3917       placing in unsorted bin avoids needing to calculate actual bins
         3918       until malloc is sure that chunks aren't immediately going to be
         3919       reused anyway.
         3920     */
         3921     
         3922     maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
         3923     fb = &(av->fastbins[0]);
         3924     do {
         3925       if ( (p = *fb) != 0) {
         3926         *fb = 0;
         3927         
         3928         do {
         3929           check_inuse_chunk(p);
         3930           nextp = p->fd;
         3931           
         3932           /* Slightly streamlined version of consolidation code in free() */
         3933           size = p->size & ~PREV_INUSE;
         3934           nextchunk = chunk_at_offset(p, size);
         3935           nextsize = chunksize(nextchunk);
         3936           
         3937           if (!prev_inuse(p)) {
         3938             prevsize = p->prev_size;
         3939             size += prevsize;
         3940             p = chunk_at_offset(p, -((long) prevsize));
         3941             unlink(p, bck, fwd);
         3942           }
         3943           
         3944           if (nextchunk != av->top) {
         3945             nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
         3946             set_head(nextchunk, nextsize);
         3947             
         3948             if (!nextinuse) {
         3949               size += nextsize;
         3950               unlink(nextchunk, bck, fwd);
         3951             }
         3952             
         3953             first_unsorted = unsorted_bin->fd;
         3954             unsorted_bin->fd = p;
         3955             first_unsorted->bk = p;
         3956             
         3957             set_head(p, size | PREV_INUSE);
         3958             p->bk = unsorted_bin;
         3959             p->fd = first_unsorted;
         3960             set_foot(p, size);
         3961           }
         3962           
         3963           else {
         3964             size += nextsize;
         3965             set_head(p, size | PREV_INUSE);
         3966             av->top = p;
         3967           }
         3968           
         3969         } while ( (p = nextp) != 0);
         3970         
         3971       }
         3972     } while (fb++ != maxfb);
         3973   }
         3974   else {
         3975     malloc_init_state(av);
         3976     check_malloc_state();
         3977   }
         3978 }
         3979 
         3980 /*
         3981   ------------------------------ realloc ------------------------------
         3982 */
         3983 
         3984 
         3985 #if __STD_C
         3986 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
         3987 #else
         3988 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
         3989 #endif
         3990 {
         3991   mstate av = get_malloc_state();
         3992 
         3993   INTERNAL_SIZE_T  nb;              /* padded request size */
         3994 
         3995   mchunkptr        oldp;            /* chunk corresponding to oldmem */
         3996   INTERNAL_SIZE_T  oldsize;         /* its size */
         3997 
         3998   mchunkptr        newp;            /* chunk to return */
         3999   INTERNAL_SIZE_T  newsize;         /* its size */
         4000   Void_t*          newmem;          /* corresponding user mem */
         4001 
         4002   mchunkptr        next;            /* next contiguous chunk after oldp */
         4003 
         4004   mchunkptr        remainder;       /* extra space at end of newp */
         4005   CHUNK_SIZE_T     remainder_size;  /* its size */
         4006 
         4007   mchunkptr        bck;             /* misc temp for linking */
         4008   mchunkptr        fwd;             /* misc temp for linking */
         4009 
         4010   CHUNK_SIZE_T     copysize;        /* bytes to copy */
         4011   unsigned int     ncopies;         /* INTERNAL_SIZE_T words to copy */
         4012   INTERNAL_SIZE_T* s;               /* copy source */ 
         4013   INTERNAL_SIZE_T* d;               /* copy destination */
         4014 
         4015 
         4016 #ifdef REALLOC_ZERO_BYTES_FREES
         4017   if (bytes == 0) {
         4018     fREe(oldmem);
         4019     return 0;
         4020   }
         4021 #endif
         4022 
         4023   /* realloc of null is supposed to be same as malloc */
         4024   if (oldmem == 0) return mALLOc(bytes);
         4025 
         4026   checked_request2size(bytes, nb);
         4027 
         4028   oldp    = mem2chunk(oldmem);
         4029   oldsize = chunksize(oldp);
         4030 
         4031   check_inuse_chunk(oldp);
         4032 
         4033   if (!chunk_is_mmapped(oldp)) {
         4034 
         4035     if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) {
         4036       /* already big enough; split below */
         4037       newp = oldp;
         4038       newsize = oldsize;
         4039     }
         4040 
         4041     else {
         4042       next = chunk_at_offset(oldp, oldsize);
         4043 
         4044       /* Try to expand forward into top */
         4045       if (next == av->top &&
         4046           (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
         4047           (CHUNK_SIZE_T)(nb + MINSIZE)) {
         4048         set_head_size(oldp, nb);
         4049         av->top = chunk_at_offset(oldp, nb);
         4050         set_head(av->top, (newsize - nb) | PREV_INUSE);
         4051         return chunk2mem(oldp);
         4052       }
         4053       
         4054       /* Try to expand forward into next chunk;  split off remainder below */
         4055       else if (next != av->top && 
         4056                !inuse(next) &&
         4057                (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
         4058                (CHUNK_SIZE_T)(nb)) {
         4059         newp = oldp;
         4060         unlink(next, bck, fwd);
         4061       }
         4062 
         4063       /* allocate, copy, free */
         4064       else {
         4065         newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
         4066         if (newmem == 0)
         4067           return 0; /* propagate failure */
         4068       
         4069         newp = mem2chunk(newmem);
         4070         newsize = chunksize(newp);
         4071         
         4072         /*
         4073           Avoid copy if newp is next chunk after oldp.
         4074         */
         4075         if (newp == next) {
         4076           newsize += oldsize;
         4077           newp = oldp;
         4078         }
         4079         else {
         4080           /*
         4081             Unroll copy of <= 36 bytes (72 if 8-byte sizes)
         4082             We know that contents have an odd number of
         4083             INTERNAL_SIZE_T-sized words; minimally 3.
         4084           */
         4085           
         4086           copysize = oldsize - SIZE_SZ;
         4087           s = (INTERNAL_SIZE_T*)(oldmem);
         4088           d = (INTERNAL_SIZE_T*)(newmem);
         4089           ncopies = copysize / sizeof(INTERNAL_SIZE_T);
         4090           assert(ncopies >= 3);
         4091           
         4092           if (ncopies > 9)
         4093             MALLOC_COPY(d, s, copysize);
         4094           
         4095           else {
         4096             *(d+0) = *(s+0);
         4097             *(d+1) = *(s+1);
         4098             *(d+2) = *(s+2);
         4099             if (ncopies > 4) {
         4100               *(d+3) = *(s+3);
         4101               *(d+4) = *(s+4);
         4102               if (ncopies > 6) {
         4103                 *(d+5) = *(s+5);
         4104                 *(d+6) = *(s+6);
         4105                 if (ncopies > 8) {
         4106                   *(d+7) = *(s+7);
         4107                   *(d+8) = *(s+8);
         4108                 }
         4109               }
         4110             }
         4111           }
         4112           
         4113           fREe(oldmem);
         4114           check_inuse_chunk(newp);
         4115           return chunk2mem(newp);
         4116         }
         4117       }
         4118     }
         4119 
         4120     /* If possible, free extra space in old or extended chunk */
         4121 
         4122     assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb));
         4123 
         4124     remainder_size = newsize - nb;
         4125 
         4126     if (remainder_size < MINSIZE) { /* not enough extra to split off */
         4127       set_head_size(newp, newsize);
         4128       set_inuse_bit_at_offset(newp, newsize);
         4129     }
         4130     else { /* split remainder */
         4131       remainder = chunk_at_offset(newp, nb);
         4132       set_head_size(newp, nb);
         4133       set_head(remainder, remainder_size | PREV_INUSE);
         4134       /* Mark remainder as inuse so free() won't complain */
         4135       set_inuse_bit_at_offset(remainder, remainder_size);
         4136       fREe(chunk2mem(remainder)); 
         4137     }
         4138 
         4139     check_inuse_chunk(newp);
         4140     return chunk2mem(newp);
         4141   }
         4142 
         4143   /*
         4144     Handle mmap cases
         4145   */
         4146 
         4147   else {
         4148 #if HAVE_MMAP
         4149 
         4150 #if HAVE_MREMAP
         4151     INTERNAL_SIZE_T offset = oldp->prev_size;
         4152     size_t pagemask = av->pagesize - 1;
         4153     char *cp;
         4154     CHUNK_SIZE_T  sum;
         4155     
         4156     /* Note the extra SIZE_SZ overhead */
         4157     newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
         4158 
         4159     /* don't need to remap if still within same page */
         4160     if (oldsize == newsize - offset) 
         4161       return oldmem;
         4162 
         4163     cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
         4164     
         4165     if (cp != (char*)MORECORE_FAILURE) {
         4166 
         4167       newp = (mchunkptr)(cp + offset);
         4168       set_head(newp, (newsize - offset)|IS_MMAPPED);
         4169       
         4170       assert(aligned_OK(chunk2mem(newp)));
         4171       assert((newp->prev_size == offset));
         4172       
         4173       /* update statistics */
         4174       sum = av->mmapped_mem += newsize - oldsize;
         4175       if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem)) 
         4176         av->max_mmapped_mem = sum;
         4177       sum += av->sbrked_mem;
         4178       if (sum > (CHUNK_SIZE_T)(av->max_total_mem)) 
         4179         av->max_total_mem = sum;
         4180       
         4181       return chunk2mem(newp);
         4182     }
         4183 #endif
         4184 
         4185     /* Note the extra SIZE_SZ overhead. */
         4186     if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ)) 
         4187       newmem = oldmem; /* do nothing */
         4188     else {
         4189       /* Must alloc, copy, free. */
         4190       newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
         4191       if (newmem != 0) {
         4192         MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
         4193         fREe(oldmem);
         4194       }
         4195     }
         4196     return newmem;
         4197 
         4198 #else 
         4199     /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
         4200     check_malloc_state();
         4201     MALLOC_FAILURE_ACTION;
         4202     return 0;
         4203 #endif
         4204   }
         4205 }
         4206 
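              /*
                A minimal usage sketch (not part of the allocator itself), assuming
                the routines above are exported under their standard names.  Note
                that realloc(0, n) behaves like malloc(n), while realloc(p, 0)
                frees p and returns 0 only when REALLOC_ZERO_BYTES_FREES is defined.

                #include <stdlib.h>
                #include <string.h>

                int grow_buffer(void)
                {
                  char *buf = realloc(0, 16);          // same as malloc(16)
                  char *bigger;
                  if (buf == 0)
                    return -1;
                  strcpy(buf, "hello");
                  bigger = realloc(buf, 4096);         // may move; contents preserved
                  if (bigger == 0) {                   // on failure the old block is untouched
                    free(buf);
                    return -1;
                  }
                  free(bigger);
                  return 0;
                }
              */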
         4207 /*
         4208   ------------------------------ memalign ------------------------------
         4209 */
         4210 
         4211 #if __STD_C
         4212 Void_t* mEMALIGn(size_t alignment, size_t bytes)
         4213 #else
         4214 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
         4215 #endif
         4216 {
         4217   INTERNAL_SIZE_T nb;             /* padded  request size */
         4218   char*           m;              /* memory returned by malloc call */
         4219   mchunkptr       p;              /* corresponding chunk */
         4220   char*           brk;            /* alignment point within p */
         4221   mchunkptr       newp;           /* chunk to return */
         4222   INTERNAL_SIZE_T newsize;        /* its size */
         4223   INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
         4224   mchunkptr       remainder;      /* spare room at end to split off */
         4225   CHUNK_SIZE_T    remainder_size; /* its size */
         4226   INTERNAL_SIZE_T size;
         4227 
         4228   /* If the requested alignment is no more than we provide anyway, just relay to malloc */
         4229 
         4230   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
         4231 
         4232   /* Otherwise, ensure that it is at least a minimum chunk size */
         4233 
         4234   if (alignment <  MINSIZE) alignment = MINSIZE;
         4235 
         4236   /* Make sure alignment is power of 2 (in case MINSIZE is not).  */
         4237   if ((alignment & (alignment - 1)) != 0) {
         4238     size_t a = MALLOC_ALIGNMENT * 2;
         4239     while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment) a <<= 1;
         4240     alignment = a;
         4241   }
         4242 
         4243   checked_request2size(bytes, nb);
         4244 
         4245   /*
         4246     Strategy: find a spot within the chunk obtained from malloc (below)
         4247     that meets the alignment request, and then possibly free the leading and trailing space.
         4248   */
         4249 
         4250 
         4251   /* Call malloc with worst case padding to hit alignment. */
         4252 
         4253   m  = (char*)(mALLOc(nb + alignment + MINSIZE));
         4254 
         4255   if (m == 0) return 0; /* propagate failure */
         4256 
         4257   p = mem2chunk(m);
         4258 
         4259   if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */
         4260 
         4261     /*
         4262       Find an aligned spot inside chunk.  Since we need to give back
         4263       leading space in a chunk of at least MINSIZE, if the first
         4264       calculation places us at a spot with less than MINSIZE leader,
         4265       we can move to the next aligned spot -- we've allocated enough
         4266       total room so that this is always possible.
         4267     */
         4268 
         4269     brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) &
         4270                            -((signed long) alignment)));
         4271     if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE)
         4272       brk += alignment;
         4273 
         4274     newp = (mchunkptr)brk;
         4275     leadsize = brk - (char*)(p);
         4276     newsize = chunksize(p) - leadsize;
         4277 
         4278     /* For mmapped chunks, just adjust offset */
         4279     if (chunk_is_mmapped(p)) {
         4280       newp->prev_size = p->prev_size + leadsize;
         4281       set_head(newp, newsize|IS_MMAPPED);
         4282       return chunk2mem(newp);
         4283     }
         4284 
         4285     /* Otherwise, give back leader, use the rest */
         4286     set_head(newp, newsize | PREV_INUSE);
         4287     set_inuse_bit_at_offset(newp, newsize);
         4288     set_head_size(p, leadsize);
         4289     fREe(chunk2mem(p));
         4290     p = newp;
         4291 
         4292     assert (newsize >= nb &&
         4293             (((PTR_UINT)(chunk2mem(p))) % alignment) == 0);
         4294   }
         4295 
         4296   /* Also give back spare room at the end */
         4297   if (!chunk_is_mmapped(p)) {
         4298     size = chunksize(p);
         4299     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
         4300       remainder_size = size - nb;
         4301       remainder = chunk_at_offset(p, nb);
         4302       set_head(remainder, remainder_size | PREV_INUSE);
         4303       set_head_size(p, nb);
         4304       fREe(chunk2mem(remainder));
         4305     }
         4306   }
         4307 
         4308   check_inuse_chunk(p);
         4309   return chunk2mem(p);
         4310 }
         4311 
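              /*
                An illustrative sketch of the alignment interface (not part of the
                allocator), assuming mEMALIGn/fREe are exported as memalign/free
                and declared (e.g. via malloc.h).  A non-power-of-two alignment is
                rounded up to a power of two, as the code above does.

                #include <stdint.h>
                #include <assert.h>

                void memalign_example(void)
                {
                  void *p = memalign(64, 1000);        // 1000 usable bytes, 64-byte aligned
                  if (p != 0) {
                    assert(((uintptr_t) p % 64) == 0);
                    free(p);
                  }
                }
              */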
         4312 /*
         4313   ------------------------------ calloc ------------------------------
         4314 */
         4315 
         4316 #if __STD_C
         4317 Void_t* cALLOc(size_t n_elements, size_t elem_size)
         4318 #else
         4319 Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
         4320 #endif
         4321 {
         4322   mchunkptr p;
         4323   CHUNK_SIZE_T  clearsize;
         4324   CHUNK_SIZE_T  nclears;
         4325   INTERNAL_SIZE_T* d;
         4326 
         4327   Void_t* mem = mALLOc(n_elements * elem_size);
         4328 
         4329   if (mem != 0) {
         4330     p = mem2chunk(mem);
         4331 
         4332     if (!chunk_is_mmapped(p))
         4333     {  
         4334       /*
         4335         Unroll clear of <= 36 bytes (72 if 8-byte sizes)
         4336         We know that contents have an odd number of
         4337         INTERNAL_SIZE_T-sized words; minimally 3.
         4338       */
         4339 
         4340       d = (INTERNAL_SIZE_T*)mem;
         4341       clearsize = chunksize(p) - SIZE_SZ;
         4342       nclears = clearsize / sizeof(INTERNAL_SIZE_T);
         4343       assert(nclears >= 3);
         4344 
         4345       if (nclears > 9)
         4346         MALLOC_ZERO(d, clearsize);
         4347 
         4348       else {
         4349         *(d+0) = 0;
         4350         *(d+1) = 0;
         4351         *(d+2) = 0;
         4352         if (nclears > 4) {
         4353           *(d+3) = 0;
         4354           *(d+4) = 0;
         4355           if (nclears > 6) {
         4356             *(d+5) = 0;
         4357             *(d+6) = 0;
         4358             if (nclears > 8) {
         4359               *(d+7) = 0;
         4360               *(d+8) = 0;
         4361             }
         4362           }
         4363         }
         4364       }
         4365     }
         4366 #if ! MMAP_CLEARS
         4367     else
         4368     {
         4369       d = (INTERNAL_SIZE_T*)mem;
         4370       /*
         4371         Note the additional SIZE_SZ
         4372       */
         4373       clearsize = chunksize(p) - 2*SIZE_SZ;
         4374       MALLOC_ZERO(d, clearsize);
         4375     }
         4376 #endif
         4377   }
         4378   return mem;
         4379 }
         4380 
         4381 /*
         4382   ------------------------------ cfree ------------------------------
         4383 */
         4384 
         4385 #if __STD_C
         4386 void cFREe(Void_t *mem)
         4387 #else
         4388 void cFREe(mem) Void_t *mem;
         4389 #endif
         4390 {
         4391   fREe(mem);
         4392 }
         4393 
         4394 /*
         4395   ------------------------- independent_calloc -------------------------
         4396 */
         4397 
         4398 #if __STD_C
         4399 Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
         4400 #else
         4401 Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
         4402 #endif
         4403 {
         4404   size_t sz = elem_size; /* serves as 1-element array */
         4405   /* opts arg of 3 means all elements are same size, and should be cleared */
         4406   return iALLOc(n_elements, &sz, 3, chunks);
         4407 }
         4408 
         4409 /*
         4410   ------------------------- independent_comalloc -------------------------
         4411 */
         4412 
         4413 #if __STD_C
         4414 Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
         4415 #else
         4416 Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
         4417 #endif
         4418 {
         4419   return iALLOc(n_elements, sizes, 0, chunks);
         4420 }
         4421 
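              /*
                A usage sketch for the independent_* entry points (illustrative
                only), assuming iCOMALLOc is exported as independent_comalloc and
                declared along with free.  Three differently sized blocks are
                carved out of one aggregate chunk; each element may later be
                passed to free() individually.

                #include <stdlib.h>

                void comalloc_example(void)
                {
                  size_t sizes[3] = { 400, 240, 16 };
                  void  *ptrs[3];

                  if (independent_comalloc(3, sizes, ptrs) != 0) {
                    // ptrs[0..2] are usable and independently freeable
                    free(ptrs[0]);
                    free(ptrs[1]);
                    free(ptrs[2]);
                  }
                }
              */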
         4422 
         4423 /*
         4424   ------------------------------ ialloc ------------------------------
         4425   ialloc provides common support for independent_X routines, handling all of
         4426   the combinations that can result.
         4427 
         4428   The opts arg has:
         4429     bit 0 set if all elements are same size (using sizes[0])
         4430     bit 1 set if elements should be zeroed
         4431 */
         4432 
         4433 
         4434 #if __STD_C
         4435 static Void_t** iALLOc(size_t n_elements, 
         4436                        size_t* sizes,  
         4437                        int opts,
         4438                        Void_t* chunks[])
         4439 #else
         4440 static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
         4441 #endif
         4442 {
         4443   mstate av = get_malloc_state();
         4444   INTERNAL_SIZE_T element_size;   /* chunksize of each element, if all same */
         4445   INTERNAL_SIZE_T contents_size;  /* total size of elements */
         4446   INTERNAL_SIZE_T array_size;     /* request size of pointer array */
         4447   Void_t*         mem;            /* malloced aggregate space */
         4448   mchunkptr       p;              /* corresponding chunk */
         4449   INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
         4450   Void_t**        marray;         /* either "chunks" or malloced ptr array */
         4451   mchunkptr       array_chunk;    /* chunk for malloced ptr array */
         4452   int             mmx;            /* to disable mmap */
         4453   INTERNAL_SIZE_T size;           
         4454   size_t          i;
         4455 
         4456   /* Ensure initialization */
         4457   if (av->max_fast == 0) malloc_consolidate(av);
         4458 
         4459   /* compute array length, if needed */
         4460   if (chunks != 0) {
         4461     if (n_elements == 0)
         4462       return chunks; /* nothing to do */
         4463     marray = chunks;
         4464     array_size = 0;
         4465   }
         4466   else {
         4467     /* if empty req, must still return chunk representing empty array */
         4468     if (n_elements == 0) 
         4469       return (Void_t**) mALLOc(0);
         4470     marray = 0;
         4471     array_size = request2size(n_elements * (sizeof(Void_t*)));
         4472   }
         4473 
         4474   /* compute total element size */
         4475   if (opts & 0x1) { /* all-same-size */
         4476     element_size = request2size(*sizes);
         4477     contents_size = n_elements * element_size;
         4478   }
         4479   else { /* add up all the sizes */
         4480     element_size = 0;
         4481     contents_size = 0;
         4482     for (i = 0; i != n_elements; ++i) 
         4483       contents_size += request2size(sizes[i]);     
         4484   }
         4485 
         4486   /* subtract out alignment bytes from total to minimize overallocation */
         4487   size = contents_size + array_size - MALLOC_ALIGN_MASK;
         4488   
         4489   /* 
         4490      Allocate the aggregate chunk.
         4491      But first disable mmap so malloc won't use it, since
         4492      we would not be able to later free/realloc space internal
         4493      to a segregated mmap region.
         4494  */
         4495   mmx = av->n_mmaps_max;   /* disable mmap */
         4496   av->n_mmaps_max = 0;
         4497   mem = mALLOc(size);
         4498   av->n_mmaps_max = mmx;   /* reset mmap */
         4499   if (mem == 0) 
         4500     return 0;
         4501 
         4502   p = mem2chunk(mem);
         4503   assert(!chunk_is_mmapped(p)); 
         4504   remainder_size = chunksize(p);
         4505 
         4506   if (opts & 0x2) {       /* optionally clear the elements */
         4507     MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
         4508   }
         4509 
         4510   /* If not provided, allocate the pointer array as final part of chunk */
         4511   if (marray == 0) {
         4512     array_chunk = chunk_at_offset(p, contents_size);
         4513     marray = (Void_t**) (chunk2mem(array_chunk));
         4514     set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
         4515     remainder_size = contents_size;
         4516   }
         4517 
         4518   /* split out elements */
         4519   for (i = 0; ; ++i) {
         4520     marray[i] = chunk2mem(p);
         4521     if (i != n_elements-1) {
         4522       if (element_size != 0) 
         4523         size = element_size;
         4524       else
         4525         size = request2size(sizes[i]);          
         4526       remainder_size -= size;
         4527       set_head(p, size | PREV_INUSE);
         4528       p = chunk_at_offset(p, size);
         4529     }
         4530     else { /* the final element absorbs any overallocation slop */
         4531       set_head(p, remainder_size | PREV_INUSE);
         4532       break;
         4533     }
         4534   }
         4535 
         4536 #if DEBUG
         4537   if (marray != chunks) {
         4538     /* final element must have exactly exhausted chunk */
         4539     if (element_size != 0) 
         4540       assert(remainder_size == element_size);
         4541     else
         4542       assert(remainder_size == request2size(sizes[i]));
         4543     check_inuse_chunk(mem2chunk(marray));
         4544   }
         4545 
         4546   for (i = 0; i != n_elements; ++i)
         4547     check_inuse_chunk(mem2chunk(marray[i]));
         4548 #endif
         4549 
         4550   return marray;
         4551 }
         4552 
         4553 
         4554 /*
         4555   ------------------------------ valloc ------------------------------
         4556 */
         4557 
         4558 #if __STD_C
         4559 Void_t* vALLOc(size_t bytes)
         4560 #else
         4561 Void_t* vALLOc(bytes) size_t bytes;
         4562 #endif
         4563 {
         4564   /* Ensure initialization */
         4565   mstate av = get_malloc_state();
         4566   if (av->max_fast == 0) malloc_consolidate(av);
         4567   return mEMALIGn(av->pagesize, bytes);
         4568 }
         4569 
         4570 /*
         4571   ------------------------------ pvalloc ------------------------------
         4572 */
         4573 
         4574 
         4575 #if __STD_C
         4576 Void_t* pVALLOc(size_t bytes)
         4577 #else
         4578 Void_t* pVALLOc(bytes) size_t bytes;
         4579 #endif
         4580 {
         4581   mstate av = get_malloc_state();
         4582   size_t pagesz;
         4583 
         4584   /* Ensure initialization */
         4585   if (av->max_fast == 0) malloc_consolidate(av);
         4586   pagesz = av->pagesize;
         4587   return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
         4588 }
         4589    
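              /*
                A worked example of the rounding done above (assuming a 4096-byte
                page): pVALLOc(100) requests mEMALIGn(4096, 4096), so the caller
                receives a page-aligned block with a full page of usable space.
              */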
         4590 
         4591 /*
         4592   ------------------------------ malloc_trim ------------------------------
         4593 */
         4594 
         4595 #if __STD_C
         4596 int mTRIm(size_t pad)
         4597 #else
         4598 int mTRIm(pad) size_t pad;
         4599 #endif
         4600 {
         4601   mstate av = get_malloc_state();
         4602   /* Ensure initialization/consolidation */
         4603   malloc_consolidate(av);
         4604 
         4605 #ifndef MORECORE_CANNOT_TRIM        
         4606   return sYSTRIm(pad, av);
         4607 #else
         4608   return 0;
         4609 #endif
         4610 }
         4611 
         4612 
         4613 /*
         4614   ------------------------- malloc_usable_size -------------------------
         4615 */
         4616 
         4617 #if __STD_C
         4618 size_t mUSABLe(Void_t* mem)
         4619 #else
         4620 size_t mUSABLe(mem) Void_t* mem;
         4621 #endif
         4622 {
         4623   mchunkptr p;
         4624   if (mem != 0) {
         4625     p = mem2chunk(mem);
         4626     if (chunk_is_mmapped(p))
         4627       return chunksize(p) - 2*SIZE_SZ;
         4628     else if (inuse(p))
         4629       return chunksize(p) - SIZE_SZ;
         4630   }
         4631   return 0;
         4632 }
         4633 
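              /*
                An illustrative sketch of the relationship between a request and
                mUSABLe (assuming the routine is exported as malloc_usable_size
                and declared, e.g. via malloc.h): the reported value is at least
                the number of bytes requested, and the caller may use all of it.

                #include <stdlib.h>
                #include <assert.h>

                void usable_size_example(void)
                {
                  char  *p = malloc(100);
                  size_t n;
                  if (p != 0) {
                    n = malloc_usable_size(p);
                    assert(n >= 100);                  // padding may make n larger
                    p[n - 1] = 0;                      // the extra bytes are usable
                    free(p);
                  }
                }
              */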
         4634 /*
         4635   ------------------------------ mallinfo ------------------------------
         4636 */
         4637 
         4638 struct mallinfo mALLINFo()
         4639 {
         4640   mstate av = get_malloc_state();
         4641   struct mallinfo mi;
         4642   int i;
         4643   mbinptr b;
         4644   mchunkptr p;
         4645   INTERNAL_SIZE_T avail;
         4646   INTERNAL_SIZE_T fastavail;
         4647   int nblocks;
         4648   int nfastblocks;
         4649 
         4650   /* Ensure initialization */
         4651   if (av->top == 0)  malloc_consolidate(av);
         4652 
         4653   check_malloc_state();
         4654 
         4655   /* Account for top */
         4656   avail = chunksize(av->top);
         4657   nblocks = 1;  /* top always exists */
         4658 
         4659   /* traverse fastbins */
         4660   nfastblocks = 0;
         4661   fastavail = 0;
         4662 
         4663   for (i = 0; i < NFASTBINS; ++i) {
         4664     for (p = av->fastbins[i]; p != 0; p = p->fd) {
         4665       ++nfastblocks;
         4666       fastavail += chunksize(p);
         4667     }
         4668   }
         4669 
         4670   avail += fastavail;
         4671 
         4672   /* traverse regular bins */
         4673   for (i = 1; i < NBINS; ++i) {
         4674     b = bin_at(av, i);
         4675     for (p = last(b); p != b; p = p->bk) {
         4676       ++nblocks;
         4677       avail += chunksize(p);
         4678     }
         4679   }
         4680 
         4681   mi.smblks = nfastblocks;
         4682   mi.ordblks = nblocks;
         4683   mi.fordblks = avail;
         4684   mi.uordblks = av->sbrked_mem - avail;
         4685   mi.arena = av->sbrked_mem;
         4686   mi.hblks = av->n_mmaps;
         4687   mi.hblkhd = av->mmapped_mem;
         4688   mi.fsmblks = fastavail;
         4689   mi.keepcost = chunksize(av->top);
         4690   mi.usmblks = av->max_total_mem;
         4691   return mi;
         4692 }
         4693 
         4694 /*
         4695   ------------------------------ malloc_stats ------------------------------
         4696 */
         4697 
         4698 void mSTATs()
         4699 {
         4700   struct mallinfo mi = mALLINFo();
         4701 
         4702 #ifdef WIN32
         4703   {
         4704     CHUNK_SIZE_T  free, reserved, committed;
         4705     vminfo (&free, &reserved, &committed);
         4706     fprintf(stderr, "free bytes       = %10lu\n", 
         4707             free);
         4708     fprintf(stderr, "reserved bytes   = %10lu\n", 
         4709             reserved);
         4710     fprintf(stderr, "committed bytes  = %10lu\n", 
         4711             committed);
         4712   }
         4713 #endif
         4714 
         4715 
         4716   fprintf(stderr, "max system bytes = %10lu\n",
         4717           (CHUNK_SIZE_T)(mi.usmblks));
         4718   fprintf(stderr, "system bytes     = %10lu\n",
         4719           (CHUNK_SIZE_T)(mi.arena + mi.hblkhd));
         4720   fprintf(stderr, "in use bytes     = %10lu\n",
         4721           (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd));
         4722 
         4723 #ifdef WIN32 
         4724   {
         4725     CHUNK_SIZE_T  kernel, user;
         4726     if (cpuinfo (TRUE, &kernel, &user)) {
         4727       fprintf(stderr, "kernel ms        = %10lu\n", 
         4728               kernel);
         4729       fprintf(stderr, "user ms          = %10lu\n", 
         4730               user);
         4731     }
         4732   }
         4733 #endif
         4734 }
         4735 
         4736 
         4737 /*
         4738   ------------------------------ mallopt ------------------------------
         4739 */
         4740 
         4741 #if __STD_C
         4742 int mALLOPt(int param_number, int value)
         4743 #else
         4744 int mALLOPt(param_number, value) int param_number; int value;
         4745 #endif
         4746 {
         4747   mstate av = get_malloc_state();
         4748   /* Ensure initialization/consolidation */
         4749   malloc_consolidate(av);
         4750 
         4751   switch(param_number) {
         4752   case M_MXFAST:
         4753     if (value >= 0 && value <= MAX_FAST_SIZE) {
         4754       set_max_fast(av, value);
         4755       return 1;
         4756     }
         4757     else
         4758       return 0;
         4759 
         4760   case M_TRIM_THRESHOLD:
         4761     av->trim_threshold = value;
         4762     return 1;
         4763 
         4764   case M_TOP_PAD:
         4765     av->top_pad = value;
         4766     return 1;
         4767 
         4768   case M_MMAP_THRESHOLD:
         4769     av->mmap_threshold = value;
         4770     return 1;
         4771 
         4772   case M_MMAP_MAX:
         4773 #if !HAVE_MMAP
         4774     if (value != 0)
         4775       return 0;
         4776 #endif
         4777     av->n_mmaps_max = value;
         4778     return 1;
         4779 
         4780   default:
         4781     return 0;
         4782   }
         4783 }
         4784 
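              /*
                A tuning sketch (illustrative, assuming mALLOPt is exported as
                mallopt and the M_* constants above are visible): each call
                returns 1 on success and 0 if the parameter or value is not
                supported.

                void tune_allocator(void)
                {
                  mallopt(M_MXFAST, 64);                 // fastbin requests up to 64 bytes
                  mallopt(M_TRIM_THRESHOLD, 256 * 1024); // trim top when 256KB is unused
                  mallopt(M_MMAP_THRESHOLD, 512 * 1024); // mmap only requests >= 512KB
                  mallopt(M_MMAP_MAX, 0);                // or disable mmap entirely
                }
              */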
         4785 
         4786 /* 
         4787   -------------------- Alternative MORECORE functions --------------------
         4788 */
         4789 
         4790 
         4791 /*
         4792   General Requirements for MORECORE.
         4793 
         4794   The MORECORE function must have the following properties:
         4795 
         4796   If MORECORE_CONTIGUOUS is false:
         4797 
         4798     * MORECORE must allocate in multiples of pagesize. It will
         4799       only be called with arguments that are multiples of pagesize.
         4800 
         4801     * MORECORE(0) must return an address that is at least 
         4802       MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
         4803 
         4804   else (i.e. If MORECORE_CONTIGUOUS is true):
         4805 
         4806     * Consecutive calls to MORECORE with positive arguments
         4807       return increasing addresses, indicating that space has been
         4808       contiguously extended. 
         4809 
         4810     * MORECORE need not allocate in multiples of pagesize.
         4811       Calls to MORECORE need not have args of multiples of pagesize.
         4812 
         4813     * MORECORE need not page-align.
         4814 
         4815   In either case:
         4816 
         4817     * MORECORE may allocate more memory than requested. (Or even less,
         4818       but this will generally result in a malloc failure.)
         4819 
         4820     * MORECORE must not allocate memory when given argument zero, but
         4821       instead return the address one past the end of memory from the previous
         4822       nonzero call. This malloc does NOT call MORECORE(0)
         4823       until at least one call with positive arguments is made, so
         4824       the initial value returned is not important.
         4825 
         4826     * Even though consecutive calls to MORECORE need not return contiguous
         4827       addresses, it must be OK for malloc'ed chunks to span multiple
         4828       regions in those cases where they do happen to be contiguous.
         4829 
         4830     * MORECORE need not handle negative arguments -- it may instead
         4831       just return MORECORE_FAILURE when given negative arguments.
         4832       Negative arguments are always multiples of pagesize. MORECORE
         4833       must not misinterpret negative args as large positive unsigned
         4834       args. You can suppress all such calls from even occurring by defining
         4835       MORECORE_CANNOT_TRIM.
         4836 
         4837   There is some variation across systems about the type of the
         4838   argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
         4839   actually be size_t, because sbrk supports negative args, so it is
         4840   normally the signed type of the same width as size_t (sometimes
         4841   declared as "intptr_t", and sometimes "ptrdiff_t").  It doesn't much
         4842   matter though. Internally, we use "long" as arguments, which should
         4843   work across all reasonable possibilities.
         4844 
         4845   Additionally, if MORECORE ever returns failure for a positive
         4846   request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
         4847   system allocator. This is a useful backup strategy for systems with
         4848   holes in address spaces -- in this case sbrk cannot contiguously
         4849   expand the heap, but mmap may be able to map noncontiguous space.
         4850 
         4851   If you'd like mmap to ALWAYS be used, you can define MORECORE to be
         4852   a function that always returns MORECORE_FAILURE.
         4853 
         4854   Malloc only has limited ability to detect failures of MORECORE
         4855   to supply contiguous space when it says it can. In particular,
         4856   multithreaded programs that do not use locks may suffer race
         4857   conditions across calls to MORECORE that result in gaps
         4858   that cannot be detected as such, and subsequent corruption.
         4859 
         4860   If you are using this malloc with something other than sbrk (or its
         4861   emulation) to supply memory regions, you probably want to set
         4862   MORECORE_CONTIGUOUS as false.  As an example, here is a custom
         4863   allocator kindly contributed for pre-OSX macOS.  It uses virtually
         4864   but not necessarily physically contiguous non-paged memory (locked
         4865   in, present and won't get swapped out).  You can use it by
         4866   uncommenting this section, adding some #includes, and setting up the
         4867   appropriate defines above:
         4868 
         4869       #define MORECORE osMoreCore
         4870       #define MORECORE_CONTIGUOUS 0
         4871 
         4872   There is also a shutdown routine that should somehow be called for
         4873   cleanup upon program exit.
         4874 
         4875   #define MAX_POOL_ENTRIES 100
         4876   #define MINIMUM_MORECORE_SIZE  (64 * 1024)
         4877   static int next_os_pool;
         4878   void *our_os_pools[MAX_POOL_ENTRIES];
         4879 
         4880   void *osMoreCore(int size)
         4881   {
         4882     void *ptr = 0;
         4883     static void *sbrk_top = 0;
         4884 
         4885     if (size > 0)
         4886     {
         4887       if (size < MINIMUM_MORECORE_SIZE)
         4888          size = MINIMUM_MORECORE_SIZE;
         4889       if (CurrentExecutionLevel() == kTaskLevel)
         4890          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
         4891       if (ptr == 0)
         4892       {
         4893         return (void *) MORECORE_FAILURE;
         4894       }
         4895       // save ptrs so they can be freed during cleanup
         4896       our_os_pools[next_os_pool] = ptr;
         4897       next_os_pool++;
         4898       ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
         4899       sbrk_top = (char *) ptr + size;
         4900       return ptr;
         4901     }
         4902     else if (size < 0)
         4903     {
         4904       // we don't currently support shrink behavior
         4905       return (void *) MORECORE_FAILURE;
         4906     }
         4907     else
         4908     {
         4909       return sbrk_top;
         4910     }
         4911   }
         4912 
         4913   // cleanup any allocated memory pools
         4914   // called as last thing before shutting down driver
         4915 
         4916   void osCleanupMem(void)
         4917   {
         4918     void **ptr;
         4919 
         4920     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
         4921       if (*ptr)
         4922       {
         4923          PoolDeallocate(*ptr);
         4924          *ptr = 0;
         4925       }
         4926   }
         4927 
         4928 */
         4929 
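              /*
                A minimal sketch of a custom MORECORE over a fixed static arena,
                illustrating the contract described above (the names and arena
                size are hypothetical).  It returns increasing addresses, refuses
                negative requests with MORECORE_FAILURE, and reports the current
                break for MORECORE(0).  To use something like it you would
                #define MORECORE arenaMoreCore (and typically MORECORE_CANNOT_TRIM).

                #include <stddef.h>

                #define ARENA_SIZE (1024 * 1024)
                static char arena[ARENA_SIZE];
                static size_t arena_used;

                void *arenaMoreCore(int size)
                {
                  if (size > 0) {
                    if ((size_t) size > ARENA_SIZE - arena_used)
                      return (void *) MORECORE_FAILURE;  // out of arena space
                    arena_used += size;
                    return arena + arena_used - size;    // old break; addresses increase
                  }
                  if (size < 0)
                    return (void *) MORECORE_FAILURE;    // shrinking not supported
                  return arena + arena_used;             // MORECORE(0): current break
                }
              */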
         4930 
         4931 /* 
         4932   -------------------------------------------------------------- 
         4933 
         4934   Emulation of sbrk for win32. 
         4935   Donated by J. Walter <Walter@GeNeSys-e.de>.
         4936   For additional information about this code, and malloc on Win32, see 
         4937      http://www.genesys-e.de/jwalter/
         4938 */
         4939 
         4940 
         4941 #ifdef WIN32
         4942 
         4943 #ifdef _DEBUG
         4944 /* #define TRACE */
         4945 #endif
         4946 
         4947 /* Support for USE_MALLOC_LOCK */
         4948 #ifdef USE_MALLOC_LOCK
         4949 
         4950 /* Wait for spin lock */
         4951 static int slwait (int *sl) {
         4952     while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0) 
         4953             Sleep (0);
         4954     return 0;
         4955 }
         4956 
         4957 /* Release spin lock */
         4958 static int slrelease (int *sl) {
         4959     InterlockedExchange (sl, 0);
         4960     return 0;
         4961 }
         4962 
         4963 #ifdef NEEDED
         4964 /* Spin lock for emulation code */
         4965 static int g_sl;
         4966 #endif
         4967 
         4968 #endif /* USE_MALLOC_LOCK */
         4969 
         4970 /* getpagesize for windows */
         4971 static long getpagesize (void) {
         4972     static long g_pagesize = 0;
         4973     if (! g_pagesize) {
         4974         SYSTEM_INFO system_info;
         4975         GetSystemInfo (&system_info);
         4976         g_pagesize = system_info.dwPageSize;
         4977     }
         4978     return g_pagesize;
         4979 }
         4980 static long getregionsize (void) {
         4981     static long g_regionsize = 0;
         4982     if (! g_regionsize) {
         4983         SYSTEM_INFO system_info;
         4984         GetSystemInfo (&system_info);
         4985         g_regionsize = system_info.dwAllocationGranularity;
         4986     }
         4987     return g_regionsize;
         4988 }
         4989 
         4990 /* A region list entry */
         4991 typedef struct _region_list_entry {
         4992     void *top_allocated;
         4993     void *top_committed;
         4994     void *top_reserved;
         4995     long reserve_size;
         4996     struct _region_list_entry *previous;
         4997 } region_list_entry;
         4998 
         4999 /* Allocate and link a region entry in the region list */
         5000 static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
         5001     region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
         5002     if (! next)
         5003         return FALSE;
         5004     next->top_allocated = (char *) base_reserved;
         5005     next->top_committed = (char *) base_reserved;
         5006     next->top_reserved = (char *) base_reserved + reserve_size;
         5007     next->reserve_size = reserve_size;
         5008     next->previous = *last;
         5009     *last = next;
         5010     return TRUE;
         5011 }
         5012 /* Free and unlink the last region entry from the region list */
         5013 static int region_list_remove (region_list_entry **last) {
         5014     region_list_entry *previous = (*last)->previous;
          5015     if (! HeapFree (GetProcessHeap (), 0, *last))
         5016         return FALSE;
         5017     *last = previous;
         5018     return TRUE;
         5019 }
         5020 
         5021 #define CEIL(size,to)        (((size)+(to)-1)&~((to)-1))
         5022 #define FLOOR(size,to)        ((size)&~((to)-1))
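               /* Both macros assume `to' is a power of two, e.g.
                  CEIL (5000, 4096) == 8192 and FLOOR (5000, 4096) == 4096. */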
         5023 
         5024 #define SBRK_SCALE  0
         5025 /* #define SBRK_SCALE  1 */
         5026 /* #define SBRK_SCALE  2 */
         5027 /* #define SBRK_SCALE  4  */
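               /* SBRK_SCALE widens the commit/reserve granularity: sbrk below
                  commits in units of (pagesize << SBRK_SCALE) and reserves in units
                  of (allocation granularity << SBRK_SCALE), trading address space
                  for fewer VirtualAlloc calls. */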
         5028 
         5029 /* sbrk for windows */
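               /* Positive requests grow the current region by committing more pages
                  and, once its reservation is exhausted, reserve a fresh region
                  (preferring an address contiguous with the old one).  Negative
                  requests decommit pages and release whole regions from the top
                  down. */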
         5030 static void *sbrk (long size) {
         5031     static long g_pagesize, g_my_pagesize;
         5032     static long g_regionsize, g_my_regionsize;
         5033     static region_list_entry *g_last;
         5034     void *result = (void *) MORECORE_FAILURE;
         5035 #ifdef TRACE
         5036     printf ("sbrk %d\n", size);
         5037 #endif
         5038 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
         5039     /* Wait for spin lock */
         5040     slwait (&g_sl);
         5041 #endif
         5042     /* First time initialization */
         5043     if (! g_pagesize) {
         5044         g_pagesize = getpagesize ();
         5045         g_my_pagesize = g_pagesize << SBRK_SCALE;
         5046     }
         5047     if (! g_regionsize) {
         5048         g_regionsize = getregionsize ();
         5049         g_my_regionsize = g_regionsize << SBRK_SCALE;
         5050     }
         5051     if (! g_last) {
         5052         if (! region_list_append (&g_last, 0, 0)) 
         5053            goto sbrk_exit;
         5054     }
         5055     /* Assert invariants */
         5056     assert (g_last);
         5057     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
         5058             g_last->top_allocated <= g_last->top_committed);
         5059     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
         5060             g_last->top_committed <= g_last->top_reserved &&
         5061             (unsigned) g_last->top_committed % g_pagesize == 0);
         5062     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
         5063     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
         5064     /* Allocation requested? */
         5065     if (size >= 0) {
         5066         /* Allocation size is the requested size */
         5067         long allocate_size = size;
         5068         /* Compute the size to commit */
         5069         long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
         5070         /* Do we reach the commit limit? */
         5071         if (to_commit > 0) {
         5072             /* Round size to commit */
         5073             long commit_size = CEIL (to_commit, g_my_pagesize);
         5074             /* Compute the size to reserve */
         5075             long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
         5076             /* Do we reach the reserve limit? */
         5077             if (to_reserve > 0) {
         5078                 /* Compute the remaining size to commit in the current region */
         5079                 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
         5080                 if (remaining_commit_size > 0) {
         5081                     /* Assert preconditions */
         5082                     assert ((unsigned) g_last->top_committed % g_pagesize == 0);
         5083                     assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
         5084                         /* Commit this */
         5085                         void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
         5086                                                                                          MEM_COMMIT, PAGE_READWRITE);
         5087                         /* Check returned pointer for consistency */
         5088                         if (base_committed != g_last->top_committed)
         5089                             goto sbrk_exit;
         5090                         /* Assert postconditions */
         5091                         assert ((unsigned) base_committed % g_pagesize == 0);
         5092 #ifdef TRACE
         5093                         printf ("Commit %p %d\n", base_committed, remaining_commit_size);
         5094 #endif
          5095                         /* Adjust the region's commit top */
         5096                         g_last->top_committed = (char *) base_committed + remaining_commit_size;
         5097                     }
         5098                 } {
         5099                     /* Now we are going to search and reserve. */
         5100                     int contiguous = -1;
         5101                     int found = FALSE;
         5102                     MEMORY_BASIC_INFORMATION memory_info;
         5103                     void *base_reserved;
         5104                     long reserve_size;
         5105                     do {
         5106                         /* Assume contiguous memory */
         5107                         contiguous = TRUE;
         5108                         /* Round size to reserve */
         5109                         reserve_size = CEIL (to_reserve, g_my_regionsize);
         5110                         /* Start with the current region's top */
         5111                         memory_info.BaseAddress = g_last->top_reserved;
         5112                         /* Assert preconditions */
         5113                         assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
         5114                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
         5115                         while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
         5116                             /* Assert postconditions */
         5117                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
         5118 #ifdef TRACE
         5119                             printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize, 
         5120                                     memory_info.State == MEM_FREE ? "FREE": 
         5121                                     (memory_info.State == MEM_RESERVE ? "RESERVED":
         5122                                      (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
         5123 #endif
         5124                             /* Region is free, well aligned and big enough: we are done */
         5125                             if (memory_info.State == MEM_FREE &&
         5126                                 (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
         5127                                 memory_info.RegionSize >= (unsigned) reserve_size) {
         5128                                 found = TRUE;
         5129                                 break;
         5130                             }
         5131                             /* From now on we can't get contiguous memory! */
         5132                             contiguous = FALSE;
         5133                             /* Recompute size to reserve */
         5134                             reserve_size = CEIL (allocate_size, g_my_regionsize);
         5135                             memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
         5136                             /* Assert preconditions */
         5137                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
         5138                             assert (0 < reserve_size && reserve_size % g_regionsize == 0);
         5139                         }
         5140                         /* Search failed? */
         5141                         if (! found) 
         5142                             goto sbrk_exit;
         5143                         /* Assert preconditions */
         5144                         assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
         5145                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
         5146                         /* Try to reserve this */
         5147                         base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size, 
         5148                                                                           MEM_RESERVE, PAGE_NOACCESS);
         5149                         if (! base_reserved) {
         5150                             int rc = GetLastError ();
         5151                             if (rc != ERROR_INVALID_ADDRESS) 
         5152                                 goto sbrk_exit;
         5153                         }
         5154                         /* A null pointer signals (hopefully) a race condition with another thread. */
         5155                         /* In this case, we try again. */
         5156                     } while (! base_reserved);
         5157                     /* Check returned pointer for consistency */
         5158                     if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
         5159                         goto sbrk_exit;
         5160                     /* Assert postconditions */
         5161                     assert ((unsigned) base_reserved % g_regionsize == 0);
         5162 #ifdef TRACE
         5163                     printf ("Reserve %p %d\n", base_reserved, reserve_size);
         5164 #endif
         5165                     /* Did we get contiguous memory? */
         5166                     if (contiguous) {
         5167                         long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
         5168                         /* Adjust allocation size */
         5169                         allocate_size -= start_size;
          5170                         /* Adjust the region's allocation top */
         5171                         g_last->top_allocated = g_last->top_committed;
         5172                         /* Recompute the size to commit */
         5173                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
         5174                         /* Round size to commit */
         5175                         commit_size = CEIL (to_commit, g_my_pagesize);
         5176                     } 
         5177                     /* Append the new region to the list */
         5178                     if (! region_list_append (&g_last, base_reserved, reserve_size))
         5179                         goto sbrk_exit;
         5180                     /* Didn't we get contiguous memory? */
         5181                     if (! contiguous) {
         5182                         /* Recompute the size to commit */
         5183                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
         5184                         /* Round size to commit */
         5185                         commit_size = CEIL (to_commit, g_my_pagesize);
         5186                     }
         5187                 }
         5188             } 
         5189             /* Assert preconditions */
         5190             assert ((unsigned) g_last->top_committed % g_pagesize == 0);
         5191             assert (0 < commit_size && commit_size % g_pagesize == 0); {
         5192                 /* Commit this */
         5193                 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size, 
         5194                                                                                  MEM_COMMIT, PAGE_READWRITE);
         5195                 /* Check returned pointer for consistency */
         5196                 if (base_committed != g_last->top_committed)
         5197                     goto sbrk_exit;
         5198                 /* Assert postconditions */
         5199                 assert ((unsigned) base_committed % g_pagesize == 0);
         5200 #ifdef TRACE
         5201                 printf ("Commit %p %d\n", base_committed, commit_size);
         5202 #endif
          5203                 /* Adjust the region's commit top */
         5204                 g_last->top_committed = (char *) base_committed + commit_size;
         5205             }
         5206         } 
          5207         /* Adjust the region's allocation top */
         5208         g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
         5209         result = (char *) g_last->top_allocated - size;
         5210     /* Deallocation requested? */
         5211     } else if (size < 0) {
         5212         long deallocate_size = - size;
         5213         /* As long as we have a region to release */
         5214         while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
         5215             /* Get the size to release */
         5216             long release_size = g_last->reserve_size;
         5217             /* Get the base address */
         5218             void *base_reserved = (char *) g_last->top_reserved - release_size;
         5219             /* Assert preconditions */
         5220             assert ((unsigned) base_reserved % g_regionsize == 0); 
         5221             assert (0 < release_size && release_size % g_regionsize == 0); {
         5222                 /* Release this */
         5223                 int rc = VirtualFree (base_reserved, 0, 
         5224                                       MEM_RELEASE);
         5225                 /* Check returned code for consistency */
         5226                 if (! rc)
         5227                     goto sbrk_exit;
         5228 #ifdef TRACE
         5229                 printf ("Release %p %d\n", base_reserved, release_size);
         5230 #endif
         5231             }
         5232             /* Adjust deallocation size */
         5233             deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
         5234             /* Remove the old region from the list */
         5235             if (! region_list_remove (&g_last))
         5236                 goto sbrk_exit;
         5237         } {
         5238             /* Compute the size to decommit */
         5239             long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
         5240             if (to_decommit >= g_my_pagesize) {
         5241                 /* Compute the size to decommit */
         5242                 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
         5243                 /*  Compute the base address */
         5244                 void *base_committed = (char *) g_last->top_committed - decommit_size;
         5245                 /* Assert preconditions */
         5246                 assert ((unsigned) base_committed % g_pagesize == 0);
         5247                 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
         5248                     /* Decommit this */
         5249                     int rc = VirtualFree ((char *) base_committed, decommit_size, 
         5250                                           MEM_DECOMMIT);
         5251                     /* Check returned code for consistency */
         5252                     if (! rc)
         5253                         goto sbrk_exit;
         5254 #ifdef TRACE
         5255                     printf ("Decommit %p %d\n", base_committed, decommit_size);
         5256 #endif
         5257                 }
          5258                 /* Adjust the deallocation size and the region's commit and allocation tops */
         5259                 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
         5260                 g_last->top_committed = base_committed;
         5261                 g_last->top_allocated = base_committed;
         5262             }
         5263         }
          5264         /* Adjust the region's allocation top */
         5265         g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
         5266         /* Check for underflow */
         5267         if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
         5268             g_last->top_allocated > g_last->top_committed) {
          5269             /* Adjust the region's allocation top */
         5270             g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
         5271             goto sbrk_exit;
         5272         }
         5273         result = g_last->top_allocated;
         5274     }
         5275     /* Assert invariants */
         5276     assert (g_last);
         5277     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
         5278             g_last->top_allocated <= g_last->top_committed);
         5279     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
         5280             g_last->top_committed <= g_last->top_reserved &&
         5281             (unsigned) g_last->top_committed % g_pagesize == 0);
         5282     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
         5283     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
         5284 
         5285 sbrk_exit:
         5286 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
         5287     /* Release spin lock */
         5288     slrelease (&g_sl);
         5289 #endif
         5290     return result;
         5291 }
         5292 
         5293 /* mmap for windows */
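               /* The emulation ignores the prot/type/handle/arg parameters and
                  simply reserves and commits read-write pages in a single
                  VirtualAlloc call, using MEM_TOP_DOWN, which allocates at the
                  highest available addresses and so tends to stay clear of the
                  sbrk-managed regions. */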
         5294 static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
         5295     static long g_pagesize;
         5296     static long g_regionsize;
         5297 #ifdef TRACE
         5298     printf ("mmap %d\n", size);
         5299 #endif
         5300 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
         5301     /* Wait for spin lock */
         5302     slwait (&g_sl);
         5303 #endif
         5304     /* First time initialization */
         5305     if (! g_pagesize) 
         5306         g_pagesize = getpagesize ();
         5307     if (! g_regionsize) 
         5308         g_regionsize = getregionsize ();
         5309     /* Assert preconditions */
         5310     assert ((unsigned) ptr % g_regionsize == 0);
         5311     assert (size % g_pagesize == 0);
         5312     /* Allocate this */
         5313     ptr = VirtualAlloc (ptr, size,
         5314                                             MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
         5315     if (! ptr) {
         5316         ptr = (void *) MORECORE_FAILURE;
         5317         goto mmap_exit;
         5318     }
         5319     /* Assert postconditions */
         5320     assert ((unsigned) ptr % g_regionsize == 0);
         5321 #ifdef TRACE
         5322     printf ("Commit %p %d\n", ptr, size);
         5323 #endif
         5324 mmap_exit:
         5325 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
         5326     /* Release spin lock */
         5327     slrelease (&g_sl);
         5328 #endif
         5329     return ptr;
         5330 }
         5331 
         5332 /* munmap for windows */
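               /* VirtualFree with MEM_RELEASE requires a size of 0 and releases the
                  whole reservation starting at ptr, so the size argument here is
                  used only for the precondition checks. */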
         5333 static long munmap (void *ptr, long size) {
         5334     static long g_pagesize;
         5335     static long g_regionsize;
         5336     int rc = MUNMAP_FAILURE;
         5337 #ifdef TRACE
         5338     printf ("munmap %p %d\n", ptr, size);
         5339 #endif
         5340 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
         5341     /* Wait for spin lock */
         5342     slwait (&g_sl);
         5343 #endif
         5344     /* First time initialization */
         5345     if (! g_pagesize) 
         5346         g_pagesize = getpagesize ();
         5347     if (! g_regionsize) 
         5348         g_regionsize = getregionsize ();
         5349     /* Assert preconditions */
         5350     assert ((unsigned) ptr % g_regionsize == 0);
         5351     assert (size % g_pagesize == 0);
         5352     /* Free this */
         5353     if (! VirtualFree (ptr, 0, 
         5354                        MEM_RELEASE))
         5355         goto munmap_exit;
         5356     rc = 0;
         5357 #ifdef TRACE
         5358     printf ("Release %p %d\n", ptr, size);
         5359 #endif
         5360 munmap_exit:
         5361 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
         5362     /* Release spin lock */
         5363     slrelease (&g_sl);
         5364 #endif
         5365     return rc;
         5366 }
         5367 
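               /* Scan the whole address space with VirtualQuery and report the
                  total number of bytes that are free, reserved and committed. */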
         5368 static void vminfo (CHUNK_SIZE_T  *free, CHUNK_SIZE_T  *reserved, CHUNK_SIZE_T  *committed) {
         5369     MEMORY_BASIC_INFORMATION memory_info;
         5370     memory_info.BaseAddress = 0;
         5371     *free = *reserved = *committed = 0;
         5372     while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
         5373         switch (memory_info.State) {
         5374         case MEM_FREE:
         5375             *free += memory_info.RegionSize;
         5376             break;
         5377         case MEM_RESERVE:
         5378             *reserved += memory_info.RegionSize;
         5379             break;
         5380         case MEM_COMMIT:
         5381             *committed += memory_info.RegionSize;
         5382             break;
         5383         }
         5384         memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
         5385     }
         5386 }
         5387 
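               /* Report kernel and user CPU time in milliseconds, for the whole
                  process (whole != 0) or just the current thread;
                  GetProcessTimes/GetThreadTimes return FILETIMEs in 100-nanosecond
                  units, hence the division by 10000. */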
         5388 static int cpuinfo (int whole, CHUNK_SIZE_T  *kernel, CHUNK_SIZE_T  *user) {
         5389     if (whole) {
         5390         __int64 creation64, exit64, kernel64, user64;
         5391         int rc = GetProcessTimes (GetCurrentProcess (), 
         5392                                   (FILETIME *) &creation64,  
         5393                                   (FILETIME *) &exit64, 
         5394                                   (FILETIME *) &kernel64, 
         5395                                   (FILETIME *) &user64);
         5396         if (! rc) {
         5397             *kernel = 0;
         5398             *user = 0;
         5399             return FALSE;
         5400         } 
         5401         *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
         5402         *user = (CHUNK_SIZE_T) (user64 / 10000);
         5403         return TRUE;
         5404     } else {
         5405         __int64 creation64, exit64, kernel64, user64;
         5406         int rc = GetThreadTimes (GetCurrentThread (), 
         5407                                  (FILETIME *) &creation64,  
         5408                                  (FILETIME *) &exit64, 
         5409                                  (FILETIME *) &kernel64, 
         5410                                  (FILETIME *) &user64);
         5411         if (! rc) {
         5412             *kernel = 0;
         5413             *user = 0;
         5414             return FALSE;
         5415         } 
         5416         *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
         5417         *user = (CHUNK_SIZE_T) (user64 / 10000);
         5418         return TRUE;
         5419     }
         5420 }
         5421 
         5422 #endif /* WIN32 */
         5423 
         5424 /* ------------------------------------------------------------
         5425 History:
         5426     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
         5427       * Fix malloc_state bitmap array misdeclaration
         5428 
         5429     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
         5430       * Allow tuning of FIRST_SORTED_BIN_SIZE
         5431       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
         5432       * Better detection and support for non-contiguousness of MORECORE. 
         5433         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
          5434       * Bypass most of malloc if no frees. Thanks to Emery Berger.
          5435       * Fix freeing of old top non-contiguous chunk in sysmalloc.
         5436       * Raised default trim and map thresholds to 256K.
         5437       * Fix mmap-related #defines. Thanks to Lubos Lunak.
         5438       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
         5439       * Branch-free bin calculation
         5440       * Default trim and mmap thresholds now 256K.
         5441 
         5442     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
         5443       * Introduce independent_comalloc and independent_calloc.
         5444         Thanks to Michael Pachos for motivation and help.
         5445       * Make optional .h file available
         5446       * Allow > 2GB requests on 32bit systems.
         5447       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
         5448         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
         5449         and Anonymous.
         5450       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for 
         5451         helping test this.)
         5452       * memalign: check alignment arg
         5453       * realloc: don't try to shift chunks backwards, since this
         5454         leads to  more fragmentation in some programs and doesn't
         5455         seem to help in any others.
         5456       * Collect all cases in malloc requiring system memory into sYSMALLOc
         5457       * Use mmap as backup to sbrk
         5458       * Place all internal state in malloc_state
         5459       * Introduce fastbins (although similar to 2.5.1)
         5460       * Many minor tunings and cosmetic improvements
         5461       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK 
         5462       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
         5463         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
         5464       * Include errno.h to support default failure action.
         5465 
         5466     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
         5467       * return null for negative arguments
         5468       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         5469          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
         5470           (e.g. WIN32 platforms)
         5471          * Cleanup header file inclusion for WIN32 platforms
         5472          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         5473          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
         5474            memory allocation routines
         5475          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         5476          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
         5477            usage of 'assert' in non-WIN32 code
         5478          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
         5479            avoid infinite loop
         5480       * Always call 'fREe()' rather than 'free()'
         5481 
         5482     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
         5483       * Fixed ordering problem with boundary-stamping
         5484 
         5485     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
          5486       * Added pvalloc, as recommended by H.J. Lu
         5487       * Added 64bit pointer support mainly from Wolfram Gloger
         5488       * Added anonymously donated WIN32 sbrk emulation
         5489       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
         5490       * malloc_extend_top: fix mask error that caused wastage after
         5491         foreign sbrks
          5492       * Add linux mremap support code from H.J. Lu
         5493 
         5494     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
         5495       * Integrated most documentation with the code.
         5496       * Add support for mmap, with help from
         5497         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
         5498       * Use last_remainder in more cases.
         5499       * Pack bins using idea from  colin@nyx10.cs.du.edu
          5500       * Use ordered bins instead of best-fit threshold
         5501       * Eliminate block-local decls to simplify tracing and debugging.
         5502       * Support another case of realloc via move into top
          5503       * Fix error occurring when initial sbrk_base not word-aligned.
         5504       * Rely on page size for units instead of SBRK_UNIT to
         5505         avoid surprises about sbrk alignment conventions.
         5506       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
         5507         (raymond@es.ele.tue.nl) for the suggestion.
         5508       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
         5509       * More precautions for cases where other routines call sbrk,
         5510         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
         5511       * Added macros etc., allowing use in linux libc from
         5512         H.J. Lu (hjl@gnu.ai.mit.edu)
         5513       * Inverted this history list
         5514 
         5515     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
         5516       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
         5517       * Removed all preallocation code since under current scheme
         5518         the work required to undo bad preallocations exceeds
         5519         the work saved in good cases for most test programs.
         5520       * No longer use return list or unconsolidated bins since
         5521         no scheme using them consistently outperforms those that don't
         5522         given above changes.
         5523       * Use best fit for very large chunks to prevent some worst-cases.
         5524       * Added some support for debugging
         5525 
         5526     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
         5527       * Removed footers when chunks are in use. Thanks to
         5528         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
         5529 
         5530     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
         5531       * Added malloc_trim, with help from Wolfram Gloger
         5532         (wmglo@Dent.MED.Uni-Muenchen.DE).
         5533 
         5534     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
         5535 
         5536     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
         5537       * realloc: try to expand in both directions
         5538       * malloc: swap order of clean-bin strategy;
         5539       * realloc: only conditionally expand backwards
         5540       * Try not to scavenge used bins
         5541       * Use bin counts as a guide to preallocation
         5542       * Occasionally bin return list chunks in first scan
         5543       * Add a few optimizations from colin@nyx10.cs.du.edu
         5544 
         5545     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
         5546       * faster bin computation & slightly different binning
         5547       * merged all consolidations to one part of malloc proper
         5548          (eliminating old malloc_find_space & malloc_clean_bin)
         5549       * Scan 2 returns chunks (not just 1)
         5550       * Propagate failure in realloc if malloc returns 0
         5551       * Add stuff to allow compilation on non-ANSI compilers
         5552           from kpv@research.att.com
         5553 
         5554     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
         5555       * removed potential for odd address access in prev_chunk
         5556       * removed dependency on getpagesize.h
         5557       * misc cosmetics and a bit more internal documentation
         5558       * anticosmetics: mangled names in macros to evade debugger strangeness
         5559       * tested on sparc, hp-700, dec-mips, rs6000
         5560           with gcc & native cc (hp, dec only) allowing
         5561           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
         5562 
         5563     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
         5564       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         5565          structure of old version,  but most details differ.)
         5566 
         5567 */