uboot/common/dlmalloc.c
#include <common.h>

#if 0   /* Moved to malloc.h */
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://g.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation
  below; a short usage sketch follows this list.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc.  Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

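  A minimal usage sketch of the routines above (illustrative only; the
  pointer names and sizes are arbitrary examples, not part of this
  allocator):

      char* p = (char*) malloc(100);        -- at least 100 usable bytes
      p = (char*) realloc(p, 200);          -- may move the block
      free(p);
      void* aligned = memalign(4096, 50);   -- alignment: a power of two
      free(aligned);
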
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably for Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY               (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                 (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize        (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but doing so ensures consistency.
  INTERNAL_SIZE_T           (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB      (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                     (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H            (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H         (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                  (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE          (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS           (default: 1)
     true (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX             (default: undefined)
     Prefix all public routines with the string 'dl'.  Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.


*/



/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */
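
#if 0  /* illustrative sketch, never compiled: the observable
          difference made by REALLOC_ZERO_BYTES_FREES */
static void realloc_zero_example(void)
{
  void* p = malloc(10);
  void* q = realloc(p, 0); /* default: a unique minimum-sized chunk,
                              like malloc(0); with the option defined
                              above, frees p and returns null instead */
  if (q != 0)
    free(q);
}
#endif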


/*
  WIN32 causes an emulation of sbrk to be compiled in;
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
/* 'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif
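
#if 0  /* standalone sketch (never compiled) of the Duff's-device
          pattern used by the macros above: the switch jumps into the
          middle of an eight-way unrolled loop, so any positive word
          count is handled with one dispatch plus full eight-word passes */
static void duff_copy_words(long* dst, const long* src, long nwords)
{
  long passes;
  if (nwords <= 0)
    return;
  passes = (nwords + 7) / 8;    /* total loop iterations */
  switch (nwords % 8) {
    case 0: do { *dst++ = *src++;
    case 7:      *dst++ = *src++;
    case 6:      *dst++ = *src++;
    case 5:      *dst++ = *src++;
    case 4:      *dst++ = *src++;
    case 3:      *dst++ = *src++;
    case 2:      *dst++ = *src++;
    case 1:      *dst++ = *src++;
            } while (--passes > 0);
  }
}
#endif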


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
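
#if 0  /* sketch (never compiled): rounding a byte count up to
          page-size units, assuming malloc_getpagesize resolves to a
          power of two as it does in all the cases handled above */
static size_t example_round_to_page(size_t n)
{
  size_t pagesz = malloc_getpagesize;
  return (n + pagesz - 1) & ~(pagesz - 1);
}
#endif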


/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif

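#if 0  /* sketch (never compiled): reading the summary fields that
          this malloc actually fills in, per the comments above */
static void example_show_mallinfo(void)
{
  struct mallinfo mi = mallinfo();
  printf("arena=%d allocated=%d free=%d releasable=%d\n",
         mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
}
#endif
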
/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4


#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
      to keep before releasing via malloc_trim in free().

      Automatic trimming is mainly useful in long-lived programs.
      Because trimming via sbrk can be slow on some systems, and can
      sometimes be wasteful (in cases where programs immediately
      afterward allocate more large chunks), the value should be high
      enough so that your overall system performance would improve by
      releasing.

      The trim threshold and the mmap control parameters (see below)
      can be traded off with one another. Trimming and mmapping are
      two different ways of releasing unused memory back to the
      system. Between these two, it is often possible to keep
      system-level demands of a long-lived program down to a bare
      minimum. For example, in one test suite of sessions measuring
      the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.

      If you are using this malloc in a long-lived program, it should
      pay to experiment with these values.  As a rough guide, you
      might set it to a value close to the average size of a process
      (program) running on your system.  Releasing this much memory
      would allow such a process to run in memory.  Generally, it's
      worth it to tune for trimming rather than memory mapping when a
      program undergoes phases where several large chunks are
      allocated and released in ways that can reuse each other's
      storage, perhaps mixed with phases where there are no such
      chunks at all.  And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.

      However, in most programs, these parameters serve mainly as
      protection against the system-level effects of carrying around
      massive amounts of unneeded memory. Since frequent calls to
      sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.

      The default trim value is high enough to cause trimming only in
      fairly extreme (by current memory consumption standards) cases.
      It must be greater than page size to have any useful effect.  To
      disable trimming completely, you can set it to (unsigned long)(-1).


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
      retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
        a new malloc request, this much padding is added to the sbrk
        request.

      * When malloc_trim is called automatically from free(),
        it is used as the `pad' argument.

      In both cases, the actual amount of padding is rounded
      so that the end of the arena is always a system page boundary.

      The main reason for using padding is to avoid calling sbrk so
      often. Having even a small pad greatly reduces the likelihood
      that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      time.

      Automatic rounding-up to page-size units is normally sufficient
      to avoid measurable overhead, so the default is 0.  However, in
      systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
      to service a request. Requests of at least this size that cannot
      be allocated using already-existing space will be serviced via mmap.
      (If enough normal freed space already exists it is used instead.)

      Using mmap segregates relatively large chunks of memory so that
      they can be individually obtained and released from the host
      system. A request serviced through mmap is never reused by any
      other request (at least not directly; the system may just so
      happen to remap successive requests to the same locations).

      Segregating space in this way has the benefit that mmapped space
      can ALWAYS be individually released back to the system, which
      helps keep the system level memory demands of a long-lived
      program low. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.

      However, it has the disadvantages that:

         1. The space cannot be reclaimed, consolidated, and then
            used to service later requests, as happens with normal chunks.
         2. It can lead to more wastage because of mmap page alignment
            requirements
         3. It causes malloc performance to be more dependent on host
            system memory management support routines which may vary in
            implementation quality and may impose arbitrary
            limitations. Generally, servicing a request via normal
            malloc steps is faster than going through a system's mmap.

      All together, these considerations should lead you to use mmap
      only for relatively large requests.


*/


#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
      service using mmap. This parameter exists because:

         1. Some systems have a limited number of internal tables for
            use by mmap.
         2. In most systems, overreliance on mmap can degrade overall
            performance.
         3. If a program allocates many large regions, it is probably
            better off using normal sbrk-based allocation routines that
            can reclaim and reallocate normal heap memory. Using a
            small value allows transition into this mode after the
            first few allocations.

      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
      the default value is 0, and attempts to set it to non-zero values
      in mallopt will fail.
*/
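
#if 0  /* sketch (never compiled): tuning the four parameters above
          via mallopt; the values are arbitrary examples, not
          recommendations */
static void example_tune_allocator(void)
{
  mallopt(M_TRIM_THRESHOLD, 128 * 1024);  /* trim when top exceeds 128K  */
  mallopt(M_TOP_PAD,        16 * 1024);   /* pad each sbrk call by 16K   */
  mallopt(M_MMAP_THRESHOLD, 192 * 1024);  /* mmap requests >= 192K       */
  mallopt(M_MMAP_MAX,       64);          /* at most 64 mmapped chunks   */
}
#endif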


/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
      Useful to quickly avoid procedure declaration conflicts and linker
      symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */


/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc          __libc_calloc
#define fREe            __libc_free
#define mALLOc          __libc_malloc
#define mEMALIGn        __libc_memalign
#define rEALLOc         __libc_realloc
#define vALLOc          __libc_valloc
#define pvALLOc         __libc_pvalloc
#define mALLINFo        __libc_mallinfo
#define mALLOPt         __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef USE_DL_PREFIX
#define cALLOc          dlcalloc
#define fREe            dlfree
#define mALLOc          dlmalloc
#define mEMALIGn        dlmemalign
#define rEALLOc         dlrealloc
#define vALLOc          dlvalloc
#define pvALLOc         dlpvalloc
#define mALLINFo        dlmallinfo
#define mALLOPt         dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc          calloc
#define fREe            free
#define mALLOc          malloc
#define mEMALIGn        memalign
#define rEALLOc         realloc
#define vALLOc          valloc
#define pvALLOc         pvalloc
#define mALLINFo        mallinfo
#define mALLOPt         mallopt
#endif /* USE_DL_PREFIX */

#endif

/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */
#endif  /* 0 */                 /* Moved to malloc.h */

#include <malloc.h>
#ifdef DEBUG
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
#else
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
#endif  /* DEBUG */

DECLARE_GLOBAL_DATA_PTR;

/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
        GmListElement* next;
        void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
        GmListElement* this;
        this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
        assert (this);
        if (this)
        {
                this->base = bas;
                this->next = head;
                head = this;
        }
        return this;
}

void gcleanup ()
{
        BOOL rval;
        assert ( (head == NULL) || (head->base == (void*)gAddressBase));
        if (gAddressBase && (gNextAddress - gAddressBase))
        {
                rval = VirtualFree ((void*)gAddressBase,
                                                        gNextAddress - gAddressBase,
                                                        MEM_DECOMMIT);
                assert (rval);
        }
        while (head)
        {
                GmListElement* next = head->next;
                rval = VirtualFree (head->base, 0, MEM_RELEASE);
                assert (rval);
                LocalFree (head);
                head = next;
        }
}

static
void* findRegion (void* start_address, unsigned long size)
{
        MEMORY_BASIC_INFORMATION info;
        if (size >= TOP_MEMORY) return NULL;

        while ((unsigned long)start_address + size < TOP_MEMORY)
        {
                VirtualQuery (start_address, &info, sizeof (info));
                if ((info.State == MEM_FREE) && (info.RegionSize >= size))
                        return start_address;
                else
                {
                        /* Requested region is not available so see if the */
                        /* next region is available.  Set 'start_address' */
                        /* to the next region and call 'VirtualQuery()' */
                        /* again. */

                        start_address = (char*)info.BaseAddress + info.RegionSize;

                        /* Make sure we start looking for the next region */
                        /* on the *next* 64K boundary.  Otherwise, even if */
                        /* the new region is free according to */
                        /* 'VirtualQuery()', the subsequent call to */
                        /* 'VirtualAlloc()' (which follows the call to */
                        /* this routine in 'wsbrk()') will round *down* */
                        /* the requested address to a 64K boundary which */
                        /* we already know is an address in the */
                        /* unavailable region.  Thus, the subsequent call */
                        /* to 'VirtualAlloc()' will fail and bring us back */
                        /* here, causing us to go into an infinite loop. */

                        start_address =
                                (void *) AlignPage64K((unsigned long) start_address);
                }
        }
        return NULL;

}


void* wsbrk (long size)
{
        void* tmp;
        if (size > 0)
        {
                if (gAddressBase == 0)
                {
                        gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
                        gNextAddress = gAddressBase =
                                (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
                                                                                        MEM_RESERVE, PAGE_NOACCESS);
                } else if (AlignPage (gNextAddress + size) > (gAddressBase +
                           gAllocatedSize))
                {
                        long new_size = max (NEXT_SIZE, AlignPage (size));
                        void* new_address = (void*)(gAddressBase+gAllocatedSize);
                        do
                        {
                                new_address = findRegion (new_address, new_size);

                                if (new_address == 0)
                                        return (void*)-1;

                                gAddressBase = gNextAddress =
                                        (unsigned int)VirtualAlloc (new_address, new_size,
                                                                                                MEM_RESERVE, PAGE_NOACCESS);
                                /* repeat in case of race condition */
                                /* The region that we found has been snagged */
                                /* by another thread */
                        }
                        while (gAddressBase == 0);

                        assert (new_address == (void*)gAddressBase);

                        gAllocatedSize = new_size;

                        if (!makeGmListElement ((void*)gAddressBase))
                                return (void*)-1;
                }
                if ((size + gNextAddress) > AlignPage (gNextAddress))
                {
                        void* res;
                        res = VirtualAlloc ((void*)AlignPage (gNextAddress),
                                                                (size + gNextAddress -
                                                                 AlignPage (gNextAddress)),
                                                                MEM_COMMIT, PAGE_READWRITE);
                        if (res == 0)
                                return (void*)-1;
                }
                tmp = (void*)gNextAddress;
                gNextAddress = (unsigned int)tmp + size;
                return tmp;
        }
        else if (size < 0)
        {
                unsigned int alignedGoal = AlignPage (gNextAddress + size);
                /* Trim by releasing the virtual memory */
                if (alignedGoal >= gAddressBase)
                {
                        VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
                                                 MEM_DECOMMIT);
                        gNextAddress = gNextAddress + size;
                        return (void*)gNextAddress;
                }
                else
                {
                        VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
                                                 MEM_DECOMMIT);
                        gNextAddress = gAddressBase;
                        return (void*)-1;
                }
        }
        else
        {
                return (void*)gNextAddress;
        }
}
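
#if 0  /* sketch (never compiled): wsbrk mimics sbrk -- a positive
          size grows the emulated break, a negative size trims it,
          and zero queries the current break */
static void example_wsbrk (void)
{
        void* blk = wsbrk (4096);       /* grow by one page */
        if (blk != (void*)-1)
                wsbrk (-4096);          /* give the page back */
        (void) wsbrk (0);               /* current break address */
}
#endif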

#endif



/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
} __attribute__((__may_alias__)) ;

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_size() bytes)                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
        trailing size field since there is no
        next contiguous chunk that would have to index off it. (After
        initialization, `top' is forced to always exist.  If it would
        become less than MINSIZE bytes long, it is replenished via
        malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        never merged or traversed from any other chunk, they have no
        foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked.  The bins are approximately
       proportionally (log) spaced.  There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice.  All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size.  This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.)  Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back.  This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    *  Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/

/*  sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
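
/*
   For example, with 4-byte INTERNAL_SIZE_T (SIZE_SZ == 4):
   chunk2mem(p) returns (char*)p + 8, stepping over the prev_size and
   size fields, and mem2chunk inverts this, so
   mem2chunk(chunk2mem(p)) == p.
*/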

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
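
/*
   Worked example, again with SIZE_SZ == 4 (so MALLOC_ALIGNMENT == 8
   and MINSIZE == 16): request2size(0) and request2size(12) both yield
   16, since 12 + SIZE_SZ == 16 already fits the minimum, while
   request2size(13) yields 24, i.e. 13 plus 4 bytes of overhead,
   rounded up to a multiple of 8.
*/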

/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)




/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))




/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))

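#if 0  /* sketch (never compiled): reading a chunk's status bits for
          some chunk pointer p; note that inuse(p) lives in the NEXT
          chunk's size field, while prev_inuse(p) lives in p's own */
static void example_status_bits(mchunkptr p)
{
  printf("size=%lu inuse=%d prev_inuse=%d mmapped=%d\n",
         (unsigned long)(p->size & ~SIZE_BITS),
         inuse(p) != 0, prev_inuse(p) != 0, chunk_is_mmapped(p) != 0);
}
#endif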


/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1395
1396
1397
1398
1399
1400/*
1401   Bins
1402
1403    The bins, `av_', are an array of pairs of pointers serving as the
1404    heads of (initially empty) doubly-linked lists of chunks, laid out
1405    in a way so that each pair can be treated as if it were in a
1406    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1407    and chunks are the same).
1408
1409    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1410    8 bytes apart. Larger bins are approximately logarithmically
1411    spaced. (See the table below.) The `av_' array is never mentioned
1412    directly in the code, but instead via bin access macros.
1413
1414    Bin layout:
1415
1416    64 bins of size       8
1417    32 bins of size      64
1418    16 bins of size     512
1419     8 bins of size    4096
1420     4 bins of size   32768
1421     2 bins of size  262144
1422     1 bin  of size what's left
1423
1424    There is actually a little bit of slop in the numbers in bin_index
1425    for the sake of speed. This makes no difference elsewhere.
1426
1427    The special chunks `top' and `last_remainder' get their own bins,
1428    (this is implemented via yet more trickery with the av_ array),
1429    although `top' is never properly linked to its bin since it is
1430    always handled specially.
1431
1432*/
1433
1434#define NAV             128   /* number of bins */
1435
1436typedef struct malloc_chunk* mbinptr;
1437
1438/* access macros */
1439
1440#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1441#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1442#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1443
1444/*
1445   The first 2 bins are never indexed. The corresponding av_ cells are instead
1446   used for bookkeeping. This is not to save space, but to simplify
1447   indexing, maintain locality, and avoid some initialization tests.
1448*/
1449
1450#define top            (av_[2])          /* The topmost chunk */
1451#define last_remainder (bin_at(1))       /* remainder from last split */
1452
1453
1454/*
1455   Because top initially points to its own bin with initial
1456   zero size, thus forcing extension on the first malloc request,
1457   we avoid having any special code in malloc to check whether
1458   it even exists yet. But we still need to check this in malloc_extend_top.
1459*/
1460
1461#define initial_top    ((mchunkptr)(bin_at(0)))
1462
1463/* Helper macro to initialize bins */
1464
1465#define IAV(i)  bin_at(i), bin_at(i)
1466
1467static mbinptr av_[NAV * 2 + 2] = {
1468 NULL, NULL,
1469 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
1470 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
1471 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
1472 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
1473 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
1474 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
1475 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
1476 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
1477 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
1478 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
1479 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
1480 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
1481 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
1482 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1483 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1484 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1485};
1486
1487#ifdef CONFIG_NEEDS_MANUAL_RELOC
1488static void malloc_bin_reloc(void)
1489{
1490        mbinptr *p = &av_[2];
1491        size_t i;
1492
1493        for (i = 2; i < ARRAY_SIZE(av_); ++i, ++p)
1494                *p = (mbinptr)((ulong)*p + gd->reloc_off);
1495}
1496#else
1497static inline void malloc_bin_reloc(void) {}
1498#endif
1499
1500ulong mem_malloc_start = 0;
1501ulong mem_malloc_end = 0;
1502ulong mem_malloc_brk = 0;
1503
1504void *sbrk(ptrdiff_t increment)
1505{
1506        ulong old = mem_malloc_brk;
1507        ulong new = old + increment;
1508
1509        /*
1510         * if we are giving memory back make sure we clear it out since
1511         * we set MORECORE_CLEARS to 1
1512         */
1513        if (increment < 0)
1514                memset((void *)new, 0, -increment);
1515
1516        if ((new < mem_malloc_start) || (new > mem_malloc_end))
1517                return (void *)MORECORE_FAILURE;
1518
1519        mem_malloc_brk = new;
1520
1521        return (void *)old;
1522}
1523
1524void mem_malloc_init(ulong start, ulong size)
1525{
1526        mem_malloc_start = start;
1527        mem_malloc_end = start + size;
1528        mem_malloc_brk = start;
1529
1530        memset((void *)mem_malloc_start, 0, size);
1531
1532        malloc_bin_reloc();
1533}
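
/*
  Typical bring-up sequence (a sketch; the base address and arena size
  below are hypothetical -- real boards take them from their memory map):
*/
#if 0   /* illustrative only, never compiled */
static void example_malloc_setup(void)          /* hypothetical helper */
{
        void *p;

        mem_malloc_init(0x84000000, 0x100000);  /* 1 MiB arena */
        p = mALLOc(64);         /* sbrk() now stays inside the arena */
        fREe(p);
}
#endif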
1534
1535/* field-extraction macros */
1536
1537#define first(b) ((b)->fd)
1538#define last(b)  ((b)->bk)
1539
1540/*
1541  Indexing into bins
1542*/
1543
1544#define bin_index(sz)                                                          \
1545(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
1546 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
1547 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
1548 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
1549 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
1550 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1551                                          126)
1552/*
1553  bins for chunks < 512 are all spaced 8 bytes apart, and hold
1554  identically sized chunks. This is exploited in malloc.
1555*/
1556
1557#define MAX_SMALLBIN         63
1558#define MAX_SMALLBIN_SIZE   512
1559#define SMALLBIN_WIDTH        8
1560
1561#define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)
1562
1563/*
1564   Requests are `small' if both the corresponding and the next bin are small
1565*/
1566
1567#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
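
/*
  Worked examples of the mapping (plain arithmetic, matching the bin
  layout table further above):

     smallbin_index(40)  ==  40 >> 3         ==  5   (bin 5 holds 40-byte chunks)
     bin_index(600)      ==  56 + (600 >> 6) == 65   (576..639-byte chunks)
*/
#if 0   /* illustrative only, never compiled */
assert(smallbin_index(40) == 5);
assert(bin_index(600) == 65);
#endif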
1568
1569
1570
1571/*
1572    To help compensate for the large number of bins, a one-level index
1573    structure is used for bin-by-bin searching.  `binblocks' is a
1574    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1575    have any (possibly) non-empty bins, so they can be skipped over
1576    all at once during traversals. The bits are NOT always
1577    cleared as soon as all bins in a block are empty, but instead only
1578    when all are noticed to be empty during traversal in malloc.
1579*/
1580
1581#define BINBLOCKWIDTH     4   /* bins per block */
1582
1583#define binblocks_r     ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
1584#define binblocks_w     (av_[1])
1585
1586/* bin<->block macros */
1587
1588#define idx2binblock(ix)    ((unsigned)1 << (ix / BINBLOCKWIDTH))
1589#define mark_binblock(ii)   (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
1590#define clear_binblock(ii)  (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))
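
/*
  Example: with BINBLOCKWIDTH == 4, bins 12..15 all map to the same
  block bit, idx2binblock(12) == idx2binblock(15) == (1 << 3), so a
  single test against binblocks_r can skip all four bins at once.
*/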
1591
1592
1593
1594
1595
1596/*  Other static bookkeeping data */
1597
1598/* variables holding tunable values */
1599
1600static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
1601static unsigned long top_pad          = DEFAULT_TOP_PAD;
1602static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
1603static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
1604
1605/* The first value returned from sbrk */
1606static char* sbrk_base = (char*)(-1);
1607
1608/* The maximum memory obtained from system via sbrk */
1609static unsigned long max_sbrked_mem = 0;
1610
1611/* The maximum via either sbrk or mmap */
1612static unsigned long max_total_mem = 0;
1613
1614/* internal working copy of mallinfo */
1615static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1616
1617/* The total memory obtained from system via sbrk */
1618#define sbrked_mem  (current_mallinfo.arena)
1619
1620/* Tracking mmaps */
1621
1622#if defined(DEBUG) || HAVE_MMAP
1623static unsigned int n_mmaps = 0;
1624#endif  /* DEBUG || HAVE_MMAP; mmap_chunk()/munmap_chunk() use this too */
1625static unsigned long mmapped_mem = 0;
1626#if HAVE_MMAP
1627static unsigned int max_n_mmaps = 0;
1628static unsigned long max_mmapped_mem = 0;
1629#endif
1630
1631
1632
1633/*
1634  Debugging support
1635*/
1636
1637#ifdef DEBUG
1638
1639
1640/*
1641  These routines make a number of assertions about the states
1642  of data structures that should be true at all times. If any
1643  are not true, it's very likely that a user program has somehow
1644  trashed memory. (It's also possible that there is a coding error
1645  in malloc. In which case, please report it!)
1646*/
1647
1648#if __STD_C
1649static void do_check_chunk(mchunkptr p)
1650#else
1651static void do_check_chunk(p) mchunkptr p;
1652#endif
1653{
1654  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1655
1656  /* No checkable chunk is mmapped */
1657  assert(!chunk_is_mmapped(p));
1658
1659  /* Check for legal address ... */
1660  assert((char*)p >= sbrk_base);
1661  if (p != top)
1662    assert((char*)p + sz <= (char*)top);
1663  else
1664    assert((char*)p + sz <= sbrk_base + sbrked_mem);
1665
1666}
1667
1668
1669#if __STD_C
1670static void do_check_free_chunk(mchunkptr p)
1671#else
1672static void do_check_free_chunk(p) mchunkptr p;
1673#endif
1674{
1675  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1676  mchunkptr next = chunk_at_offset(p, sz);
1677
1678  do_check_chunk(p);
1679
1680  /* Check whether it claims to be free ... */
1681  assert(!inuse(p));
1682
1683  /* Unless a special marker, must have OK fields */
1684  if ((long)sz >= (long)MINSIZE)
1685  {
1686    assert((sz & MALLOC_ALIGN_MASK) == 0);
1687    assert(aligned_OK(chunk2mem(p)));
1688    /* ... matching footer field */
1689    assert(next->prev_size == sz);
1690    /* ... and is fully consolidated */
1691    assert(prev_inuse(p));
1692    assert (next == top || inuse(next));
1693
1694    /* ... and has minimally sane links */
1695    assert(p->fd->bk == p);
1696    assert(p->bk->fd == p);
1697  }
1698  else /* markers are always of size SIZE_SZ */
1699    assert(sz == SIZE_SZ);
1700}
1701
1702#if __STD_C
1703static void do_check_inuse_chunk(mchunkptr p)
1704#else
1705static void do_check_inuse_chunk(p) mchunkptr p;
1706#endif
1707{
1708  mchunkptr next = next_chunk(p);
1709  do_check_chunk(p);
1710
1711  /* Check whether it claims to be in use ... */
1712  assert(inuse(p));
1713
1714  /* ... and is surrounded by OK chunks.
1715    Since more things can be checked with free chunks than inuse ones,
1716    if an inuse chunk borders them and debug is on, it's worth checking them.
1717  */
1718  if (!prev_inuse(p))
1719  {
1720    mchunkptr prv = prev_chunk(p);
1721    assert(next_chunk(prv) == p);
1722    do_check_free_chunk(prv);
1723  }
1724  if (next == top)
1725  {
1726    assert(prev_inuse(next));
1727    assert(chunksize(next) >= MINSIZE);
1728  }
1729  else if (!inuse(next))
1730    do_check_free_chunk(next);
1731
1732}
1733
1734#if __STD_C
1735static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1736#else
1737static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1738#endif
1739{
1740  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1741  long room = sz - s;
1742
1743  do_check_inuse_chunk(p);
1744
1745  /* Legal size ... */
1746  assert((long)sz >= (long)MINSIZE);
1747  assert((sz & MALLOC_ALIGN_MASK) == 0);
1748  assert(room >= 0);
1749  assert(room < (long)MINSIZE);
1750
1751  /* ... and alignment */
1752  assert(aligned_OK(chunk2mem(p)));
1753
1754
1755  /* ... and was allocated at front of an available chunk */
1756  assert(prev_inuse(p));
1757
1758}
1759
1760
1761#define check_free_chunk(P)  do_check_free_chunk(P)
1762#define check_inuse_chunk(P) do_check_inuse_chunk(P)
1763#define check_chunk(P) do_check_chunk(P)
1764#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1765#else
1766#define check_free_chunk(P)
1767#define check_inuse_chunk(P)
1768#define check_chunk(P)
1769#define check_malloced_chunk(P,N)
1770#endif
1771
1772
1773
1774/*
1775  Macro-based internal utilities
1776*/
1777
1778
1779/*
1780  Linking chunks in bin lists.
1781  Call these only with variables, not arbitrary expressions, as arguments.
1782*/
1783
1784/*
1785  Place chunk p of size s in its bin, in size order,
1786  putting it ahead of others of same size.
1787*/
1788
1789
1790#define frontlink(P, S, IDX, BK, FD)                                          \
1791{                                                                             \
1792  if (S < MAX_SMALLBIN_SIZE)                                                  \
1793  {                                                                           \
1794    IDX = smallbin_index(S);                                                  \
1795    mark_binblock(IDX);                                                       \
1796    BK = bin_at(IDX);                                                         \
1797    FD = BK->fd;                                                              \
1798    P->bk = BK;                                                               \
1799    P->fd = FD;                                                               \
1800    FD->bk = BK->fd = P;                                                      \
1801  }                                                                           \
1802  else                                                                        \
1803  {                                                                           \
1804    IDX = bin_index(S);                                                       \
1805    BK = bin_at(IDX);                                                         \
1806    FD = BK->fd;                                                              \
1807    if (FD == BK) mark_binblock(IDX);                                         \
1808    else                                                                      \
1809    {                                                                         \
1810      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
1811      BK = FD->bk;                                                            \
1812    }                                                                         \
1813    P->bk = BK;                                                               \
1814    P->fd = FD;                                                               \
1815    FD->bk = BK->fd = P;                                                      \
1816  }                                                                           \
1817}
1818
1819
1820/* take a chunk off a list */
1821
1822#define unlink(P, BK, FD)                                                     \
1823{                                                                             \
1824  BK = P->bk;                                                                 \
1825  FD = P->fd;                                                                 \
1826  FD->bk = BK;                                                                \
1827  BK->fd = FD;                                                                \
1828}
1829
1830/* Place p as the last remainder */
1831
1832#define link_last_remainder(P)                                                \
1833{                                                                             \
1834  last_remainder->fd = last_remainder->bk =  P;                               \
1835  P->fd = P->bk = last_remainder;                                             \
1836}
1837
1838/* Clear the last_remainder bin */
1839
1840#define clear_last_remainder \
1841  (last_remainder->fd = last_remainder->bk = last_remainder)
1842
1843
1844
1845
1846
1847/* Routines dealing with mmap(). */
1848
1849#if HAVE_MMAP
1850
1851#if __STD_C
1852static mchunkptr mmap_chunk(size_t size)
1853#else
1854static mchunkptr mmap_chunk(size) size_t size;
1855#endif
1856{
1857  size_t page_mask = malloc_getpagesize - 1;
1858  mchunkptr p;
1859
1860#ifndef MAP_ANONYMOUS
1861  static int fd = -1;
1862#endif
1863
1864  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1865
1866  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1867   * there is no following chunk whose prev_size field could be used.
1868   */
1869  size = (size + SIZE_SZ + page_mask) & ~page_mask;
1870
1871#ifdef MAP_ANONYMOUS
1872  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1873                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1874#else /* !MAP_ANONYMOUS */
1875  if (fd < 0)
1876  {
1877    fd = open("/dev/zero", O_RDWR);
1878    if(fd < 0) return 0;
1879  }
1880  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1881#endif
1882
1883  if(p == (mchunkptr)-1) return 0;
1884
1885  n_mmaps++;
1886  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1887
1888  /* We demand that eight bytes into a page must be 8-byte aligned. */
1889  assert(aligned_OK(chunk2mem(p)));
1890
1891  /* The offset to the start of the mmapped region is stored
1892   * in the prev_size field of the chunk; normally it is zero,
1893   * but that can be changed in memalign().
1894   */
1895  p->prev_size = 0;
1896  set_head(p, size|IS_MMAPPED);
1897
1898  mmapped_mem += size;
1899  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1900    max_mmapped_mem = mmapped_mem;
1901  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1902    max_total_mem = mmapped_mem + sbrked_mem;
1903  return p;
1904}
1905
1906#if __STD_C
1907static void munmap_chunk(mchunkptr p)
1908#else
1909static void munmap_chunk(p) mchunkptr p;
1910#endif
1911{
1912  INTERNAL_SIZE_T size = chunksize(p);
1913  int ret;
1914
1915  assert (chunk_is_mmapped(p));
1916  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1917  assert((n_mmaps > 0));
1918  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1919
1920  n_mmaps--;
1921  mmapped_mem -= (size + p->prev_size);
1922
1923  ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1924
1925  /* munmap returns non-zero on failure */
1926  assert(ret == 0);
1927}
1928
1929#if HAVE_MREMAP
1930
1931#if __STD_C
1932static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1933#else
1934static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1935#endif
1936{
1937  size_t page_mask = malloc_getpagesize - 1;
1938  INTERNAL_SIZE_T offset = p->prev_size;
1939  INTERNAL_SIZE_T size = chunksize(p);
1940  char *cp;
1941
1942  assert (chunk_is_mmapped(p));
1943  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1944  assert((n_mmaps > 0));
1945  assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1946
1947  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1948  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1949
1950  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1951
1952  if (cp == (char *)-1) return 0;
1953
1954  p = (mchunkptr)(cp + offset);
1955
1956  assert(aligned_OK(chunk2mem(p)));
1957
1958  assert((p->prev_size == offset));
1959  set_head(p, (new_size - offset)|IS_MMAPPED);
1960
1961  mmapped_mem -= size + offset;
1962  mmapped_mem += new_size;
1963  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1964    max_mmapped_mem = mmapped_mem;
1965  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1966    max_total_mem = mmapped_mem + sbrked_mem;
1967  return p;
1968}
1969
1970#endif /* HAVE_MREMAP */
1971
1972#endif /* HAVE_MMAP */
1973
1974
1975
1976
1977/*
1978  Extend the top-most chunk by obtaining memory from system.
1979  Main interface to sbrk (but see also malloc_trim).
1980*/
1981
1982#if __STD_C
1983static void malloc_extend_top(INTERNAL_SIZE_T nb)
1984#else
1985static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1986#endif
1987{
1988  char*     brk;                  /* return value from sbrk */
1989  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1990  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
1991  char*     new_brk;              /* return of 2nd sbrk call */
1992  INTERNAL_SIZE_T top_size;       /* new size of top chunk */
1993
1994  mchunkptr old_top     = top;  /* Record state of old top */
1995  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1996  char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));
1997
1998  /* Pad request with top_pad plus minimal overhead */
1999
2000  INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
2001  unsigned long pagesz    = malloc_getpagesize;
2002
2003  /* If not the first time through, round to preserve page boundary */
2004  /* Otherwise, we need to correct to a page size below anyway. */
2005  /* (We also correct below if an intervening foreign sbrk call.) */
2006
2007  if (sbrk_base != (char*)(-1))
2008    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2009
2010  brk = (char*)(MORECORE (sbrk_size));
2011
2012  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2013  if (brk == (char*)(MORECORE_FAILURE) ||
2014      (brk < old_end && old_top != initial_top))
2015    return;
2016
2017  sbrked_mem += sbrk_size;
2018
2019  if (brk == old_end) /* can just add bytes to current top */
2020  {
2021    top_size = sbrk_size + old_top_size;
2022    set_head(top, top_size | PREV_INUSE);
2023  }
2024  else
2025  {
2026    if (sbrk_base == (char*)(-1))  /* First time through. Record base */
2027      sbrk_base = brk;
2028    else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
2029      sbrked_mem += brk - (char*)old_end;
2030
2031    /* Guarantee alignment of first new chunk made from this space */
2032    front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2033    if (front_misalign > 0)
2034    {
2035      correction = (MALLOC_ALIGNMENT) - front_misalign;
2036      brk += correction;
2037    }
2038    else
2039      correction = 0;
2040
2041    /* Guarantee the next brk will be at a page boundary */
2042
2043    correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
2044                   ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
2045
2046    /* Allocate correction */
2047    new_brk = (char*)(MORECORE (correction));
2048    if (new_brk == (char*)(MORECORE_FAILURE)) return;
2049
2050    sbrked_mem += correction;
2051
2052    top = (mchunkptr)brk;
2053    top_size = new_brk - brk + correction;
2054    set_head(top, top_size | PREV_INUSE);
2055
2056    if (old_top != initial_top)
2057    {
2058
2059      /* There must have been an intervening foreign sbrk call. */
2060      /* A double fencepost is necessary to prevent consolidation */
2061
2062      /* If not enough space to do this, then user did something very wrong */
2063      if (old_top_size < MINSIZE)
2064      {
2065        set_head(top, PREV_INUSE); /* will force null return from malloc */
2066        return;
2067      }
2068
2069      /* Also keep size a multiple of MALLOC_ALIGNMENT */
2070      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2071      set_head_size(old_top, old_top_size);
2072      chunk_at_offset(old_top, old_top_size          )->size =
2073        SIZE_SZ|PREV_INUSE;
2074      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2075        SIZE_SZ|PREV_INUSE;
2076      /* If possible, release the rest. */
2077      if (old_top_size >= MINSIZE)
2078        fREe(chunk2mem(old_top));
2079    }
2080  }
2081
2082  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2083    max_sbrked_mem = sbrked_mem;
2084  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2085    max_total_mem = mmapped_mem + sbrked_mem;
2086
2087  /* We always land on a page boundary */
2088  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2089}
2090
2091
2092
2093
2094/* Main public routines */
2095
2096
2097/*
2098  Malloc Algorithm:
2099
2100    The requested size is first converted into a usable form, `nb'.
2101    This currently means to add 4 bytes overhead plus possibly more to
2102    obtain 8-byte alignment and/or to obtain a size of at least
2103    MINSIZE (currently 16 bytes), the smallest allocatable size.
2104    (All fits are considered `exact' if they are within MINSIZE bytes; see the worked instance below.)
2105
2106    From there, the first of the following steps that succeeds is taken:
2107
2108      1. The bin corresponding to the request size is scanned, and if
2109         a chunk of exactly the right size is found, it is taken.
2110
2111      2. The most recently remaindered chunk is used if it is big
2112         enough.  This is a form of (roving) first fit, used only in
2113         the absence of exact fits. Runs of consecutive requests use
2114         the remainder of the chunk used for the previous such request
2115         whenever possible. This limited use of a first-fit style
2116         allocation strategy tends to give contiguous chunks
2117         coextensive lifetimes, which improves locality and can reduce
2118         fragmentation in the long run.
2119
2120      3. Other bins are scanned in increasing size order, using a
2121         chunk big enough to fulfill the request, and splitting off
2122         any remainder.  This search is strictly by best-fit; i.e.,
2123         the smallest (with ties going to approximately the least
2124         recently used) chunk that fits is selected.
2125
2126      4. If large enough, the chunk bordering the end of memory
2127         (`top') is split off. (This use of `top' is in accord with
2128         the best-fit search rule.  In effect, `top' is treated as
2129         larger (and thus less well fitting) than any other available
2130         chunk since it can be extended to be as large as necessary
2131         (up to system limitations).)
2132
2133      5. If the request size meets the mmap threshold and the
2134         system supports mmap, and there are few enough currently
2135         allocated mmapped regions, and a call to mmap succeeds,
2136         the request is allocated via direct memory mapping.
2137
2138      6. Otherwise, the top of memory is extended by
2139         obtaining more space from the system (normally using sbrk,
2140         but definable to anything else via the MORECORE macro).
2141         Memory is gathered from the system (in system page-sized
2142         units) in a way that allows chunks obtained across different
2143         sbrk calls to be consolidated, but does not require
2144         contiguous memory. Thus, it should be safe to intersperse
2145         mallocs with other sbrk calls.
2146
2147
2148      All allocations are made from the `lowest' part of any found
2149      chunk. (The implementation invariant is that prev_inuse is
2150      always true of any allocated chunk; i.e., that each allocated
2151      chunk borders either a previously allocated and still in-use chunk,
2152      or the base of its memory arena.)
2153
2154*/
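
/*
  A worked instance of the `exact fit' tolerance (a sketch assuming
  SIZE_SZ == 4, so MINSIZE == 16): for a padded request nb == 24, any
  free chunk of 24..39 bytes is taken whole, because splitting it
  would leave a remainder under MINSIZE; only chunks of 40 bytes or
  more are split.
*/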
2155
2156#if __STD_C
2157Void_t* mALLOc(size_t bytes)
2158#else
2159Void_t* mALLOc(bytes) size_t bytes;
2160#endif
2161{
2162  mchunkptr victim;                  /* inspected/selected chunk */
2163  INTERNAL_SIZE_T victim_size;       /* its size */
2164  int       idx;                     /* index for bin traversal */
2165  mbinptr   bin;                     /* associated bin */
2166  mchunkptr remainder;               /* remainder from a split */
2167  long      remainder_size;          /* its size */
2168  int       remainder_index;         /* its bin index */
2169  unsigned long block;               /* block traverser bit */
2170  int       startidx;                /* first bin of a traversed block */
2171  mchunkptr fwd;                     /* misc temp for linking */
2172  mchunkptr bck;                     /* misc temp for linking */
2173  mbinptr q;                         /* misc temp */
2174
2175  INTERNAL_SIZE_T nb;
2176
2177  /* check if mem_malloc_init() was run */
2178  if ((mem_malloc_start == 0) && (mem_malloc_end == 0)) {
2179    /* not initialized yet */
2180    return NULL;
2181  }
2182
2183  if ((long)bytes < 0) return NULL;
2184
2185  nb = request2size(bytes);  /* padded request size; */
2186
2187  /* Check for exact match in a bin */
2188
2189  if (is_small_request(nb))  /* Faster version for small requests */
2190  {
2191    idx = smallbin_index(nb);
2192
2193    /* No traversal or size check necessary for small bins.  */
2194
2195    q = bin_at(idx);
2196    victim = last(q);
2197
2198    /* Also scan the next one, since it would have a remainder < MINSIZE */
2199    if (victim == q)
2200    {
2201      q = next_bin(q);
2202      victim = last(q);
2203    }
2204    if (victim != q)
2205    {
2206      victim_size = chunksize(victim);
2207      unlink(victim, bck, fwd);
2208      set_inuse_bit_at_offset(victim, victim_size);
2209      check_malloced_chunk(victim, nb);
2210      return chunk2mem(victim);
2211    }
2212
2213    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2214
2215  }
2216  else
2217  {
2218    idx = bin_index(nb);
2219    bin = bin_at(idx);
2220
2221    for (victim = last(bin); victim != bin; victim = victim->bk)
2222    {
2223      victim_size = chunksize(victim);
2224      remainder_size = victim_size - nb;
2225
2226      if (remainder_size >= (long)MINSIZE) /* too big */
2227      {
2228        --idx; /* adjust to rescan below after checking last remainder */
2229        break;
2230      }
2231
2232      else if (remainder_size >= 0) /* exact fit */
2233      {
2234        unlink(victim, bck, fwd);
2235        set_inuse_bit_at_offset(victim, victim_size);
2236        check_malloced_chunk(victim, nb);
2237        return chunk2mem(victim);
2238      }
2239    }
2240
2241    ++idx;
2242
2243  }
2244
2245  /* Try to use the last split-off remainder */
2246
2247  if ( (victim = last_remainder->fd) != last_remainder)
2248  {
2249    victim_size = chunksize(victim);
2250    remainder_size = victim_size - nb;
2251
2252    if (remainder_size >= (long)MINSIZE) /* re-split */
2253    {
2254      remainder = chunk_at_offset(victim, nb);
2255      set_head(victim, nb | PREV_INUSE);
2256      link_last_remainder(remainder);
2257      set_head(remainder, remainder_size | PREV_INUSE);
2258      set_foot(remainder, remainder_size);
2259      check_malloced_chunk(victim, nb);
2260      return chunk2mem(victim);
2261    }
2262
2263    clear_last_remainder;
2264
2265    if (remainder_size >= 0)  /* exhaust */
2266    {
2267      set_inuse_bit_at_offset(victim, victim_size);
2268      check_malloced_chunk(victim, nb);
2269      return chunk2mem(victim);
2270    }
2271
2272    /* Else place in bin */
2273
2274    frontlink(victim, victim_size, remainder_index, bck, fwd);
2275  }
2276
2277  /*
2278     If there are any possibly nonempty big-enough blocks,
2279     search for best fitting chunk by scanning bins in blockwidth units.
2280  */
2281
2282  if ( (block = idx2binblock(idx)) <= binblocks_r)
2283  {
2284
2285    /* Get to the first marked block */
2286
2287    if ( (block & binblocks_r) == 0)
2288    {
2289      /* force to an even block boundary */
2290      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2291      block <<= 1;
2292      while ((block & binblocks_r) == 0)
2293      {
2294        idx += BINBLOCKWIDTH;
2295        block <<= 1;
2296      }
2297    }
2298
2299    /* For each possibly nonempty block ... */
2300    for (;;)
2301    {
2302      startidx = idx;          /* (track incomplete blocks) */
2303      q = bin = bin_at(idx);
2304
2305      /* For each bin in this block ... */
2306      do
2307      {
2308        /* Find and use first big enough chunk ... */
2309
2310        for (victim = last(bin); victim != bin; victim = victim->bk)
2311        {
2312          victim_size = chunksize(victim);
2313          remainder_size = victim_size - nb;
2314
2315          if (remainder_size >= (long)MINSIZE) /* split */
2316          {
2317            remainder = chunk_at_offset(victim, nb);
2318            set_head(victim, nb | PREV_INUSE);
2319            unlink(victim, bck, fwd);
2320            link_last_remainder(remainder);
2321            set_head(remainder, remainder_size | PREV_INUSE);
2322            set_foot(remainder, remainder_size);
2323            check_malloced_chunk(victim, nb);
2324            return chunk2mem(victim);
2325          }
2326
2327          else if (remainder_size >= 0)  /* take */
2328          {
2329            set_inuse_bit_at_offset(victim, victim_size);
2330            unlink(victim, bck, fwd);
2331            check_malloced_chunk(victim, nb);
2332            return chunk2mem(victim);
2333          }
2334
2335        }
2336
2337       bin = next_bin(bin);
2338
2339      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2340
2341      /* Clear out the block bit. */
2342
2343      do   /* Possibly backtrack to try to clear a partial block */
2344      {
2345        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2346        {
2347          av_[1] = (mbinptr)(binblocks_r & ~block);
2348          break;
2349        }
2350        --startidx;
2351       q = prev_bin(q);
2352      } while (first(q) == q);
2353
2354      /* Get to the next possibly nonempty block */
2355
2356      if ( (block <<= 1) <= binblocks_r && (block != 0) )
2357      {
2358        while ((block & binblocks_r) == 0)
2359        {
2360          idx += BINBLOCKWIDTH;
2361          block <<= 1;
2362        }
2363      }
2364      else
2365        break;
2366    }
2367  }
2368
2369
2370  /* Try to use top chunk */
2371
2372  /* Require that there be a remainder, ensuring top always exists  */
2373  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2374  {
2375
2376#if HAVE_MMAP
2377    /* If big and would otherwise need to extend, try to use mmap instead */
2378    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2379        (victim = mmap_chunk(nb)) != 0)
2380      return chunk2mem(victim);
2381#endif
2382
2383    /* Try to extend */
2384    malloc_extend_top(nb);
2385    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2386      return NULL; /* propagate failure */
2387  }
2388
2389  victim = top;
2390  set_head(victim, nb | PREV_INUSE);
2391  top = chunk_at_offset(victim, nb);
2392  set_head(top, remainder_size | PREV_INUSE);
2393  check_malloced_chunk(victim, nb);
2394  return chunk2mem(victim);
2395
2396}
2397
2398
2399
2400
2401/*
2402
2403  free() algorithm :
2404
2405    cases:
2406
2407       1. free(0) has no effect.
2408
2409       2. If the chunk was allocated via mmap, it is released via munmap().
2410
2411       3. If a returned chunk borders the current high end of memory,
2412          it is consolidated into the top, and if the total unused
2413          topmost memory exceeds the trim threshold, malloc_trim is
2414          called.
2415
2416       4. Other chunks are consolidated as they arrive, and
2417          placed in corresponding bins. (This includes the case of
2418          consolidating with the current `last_remainder').
2419
2420*/
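
/*
  For instance (case 4): freeing the middle one of three adjacent
  24-byte chunks whose neighbours are both already free merges
  backward via p->prev_size, then forward via the next chunk's size
  field, and files a single 72-byte chunk into its bin instead of
  leaving three fragments.
*/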
2421
2422
2423#if __STD_C
2424void fREe(Void_t* mem)
2425#else
2426void fREe(mem) Void_t* mem;
2427#endif
2428{
2429  mchunkptr p;         /* chunk corresponding to mem */
2430  INTERNAL_SIZE_T hd;  /* its head field */
2431  INTERNAL_SIZE_T sz;  /* its size */
2432  int       idx;       /* its bin index */
2433  mchunkptr next;      /* next contiguous chunk */
2434  INTERNAL_SIZE_T nextsz; /* its size */
2435  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2436  mchunkptr bck;       /* misc temp for linking */
2437  mchunkptr fwd;       /* misc temp for linking */
2438  int       islr;      /* track whether merging with last_remainder */
2439
2440  if (mem == NULL)                              /* free(0) has no effect */
2441    return;
2442
2443  p = mem2chunk(mem);
2444  hd = p->size;
2445
2446#if HAVE_MMAP
2447  if (hd & IS_MMAPPED)                       /* release mmapped memory. */
2448  {
2449    munmap_chunk(p);
2450    return;
2451  }
2452#endif
2453
2454  check_inuse_chunk(p);
2455
2456  sz = hd & ~PREV_INUSE;
2457  next = chunk_at_offset(p, sz);
2458  nextsz = chunksize(next);
2459
2460  if (next == top)                            /* merge with top */
2461  {
2462    sz += nextsz;
2463
2464    if (!(hd & PREV_INUSE))                    /* consolidate backward */
2465    {
2466      prevsz = p->prev_size;
2467      p = chunk_at_offset(p, -((long) prevsz));
2468      sz += prevsz;
2469      unlink(p, bck, fwd);
2470    }
2471
2472    set_head(p, sz | PREV_INUSE);
2473    top = p;
2474    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2475      malloc_trim(top_pad);
2476    return;
2477  }
2478
2479  set_head(next, nextsz);                    /* clear inuse bit */
2480
2481  islr = 0;
2482
2483  if (!(hd & PREV_INUSE))                    /* consolidate backward */
2484  {
2485    prevsz = p->prev_size;
2486    p = chunk_at_offset(p, -((long) prevsz));
2487    sz += prevsz;
2488
2489    if (p->fd == last_remainder)             /* keep as last_remainder */
2490      islr = 1;
2491    else
2492      unlink(p, bck, fwd);
2493  }
2494
2495  if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
2496  {
2497    sz += nextsz;
2498
2499    if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
2500    {
2501      islr = 1;
2502      link_last_remainder(p);
2503    }
2504    else
2505      unlink(next, bck, fwd);
2506  }
2507
2508
2509  set_head(p, sz | PREV_INUSE);
2510  set_foot(p, sz);
2511  if (!islr)
2512    frontlink(p, sz, idx, bck, fwd);
2513}
2514
2515
2516
2517
2518
2519/*
2520
2521  Realloc algorithm:
2522
2523    Chunks that were obtained via mmap cannot be extended or shrunk
2524    unless HAVE_MREMAP is defined, in which case mremap is used.
2525    Otherwise, if their reallocation is for additional space, they are
2526    copied.  If for less, they are just left alone.
2527
2528    Otherwise, if the reallocation is for additional space, and the
2529    chunk can be extended, it is, else a malloc-copy-free sequence is
2530    taken.  There are several different ways that a chunk could be
2531    extended. All are tried:
2532
2533       * Extending forward into following adjacent free chunk.
2534       * Shifting backwards, joining preceding adjacent space
2535       * Both shifting backwards and extending forward.
2536       * Extending into newly sbrked space
2537
2538    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2539    size argument of zero (re)allocates a minimum-sized chunk.
2540
2541    If the reallocation is for less space, and the new request is for
2542    a `small' (<512 bytes) size, then the newly unused space is lopped
2543    off and freed.
2544
2545    The old unix realloc convention of allowing the last-free'd chunk
2546    to be used as an argument to realloc is no longer supported.
2547    I don't know of any programs still relying on this feature,
2548    and allowing it would also make too many other incorrect
2549    usages of realloc appear sensible.
2550
2551
2552*/
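
/*
  Worked instance (a sketch assuming SIZE_SZ == 4): growing a 24-byte
  chunk to a padded request of nb == 40 succeeds in place when the
  32-byte chunk after it is free, since 24 + 32 >= 40; the 16-byte
  excess is split back off.  If no neighbour helps, the data is copied
  into a freshly malloc'ed chunk and the old one is freed.
*/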
2553
2554
2555#if __STD_C
2556Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2557#else
2558Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2559#endif
2560{
2561  INTERNAL_SIZE_T    nb;      /* padded request size */
2562
2563  mchunkptr oldp;             /* chunk corresponding to oldmem */
2564  INTERNAL_SIZE_T    oldsize; /* its size */
2565
2566  mchunkptr newp;             /* chunk to return */
2567  INTERNAL_SIZE_T    newsize; /* its size */
2568  Void_t*   newmem;           /* corresponding user mem */
2569
2570  mchunkptr next;             /* next contiguous chunk after oldp */
2571  INTERNAL_SIZE_T  nextsize;  /* its size */
2572
2573  mchunkptr prev;             /* previous contiguous chunk before oldp */
2574  INTERNAL_SIZE_T  prevsize;  /* its size */
2575
2576  mchunkptr remainder;        /* holds split off extra space from newp */
2577  INTERNAL_SIZE_T  remainder_size;   /* its size */
2578
2579  mchunkptr bck;              /* misc temp for linking */
2580  mchunkptr fwd;              /* misc temp for linking */
2581
2582#ifdef REALLOC_ZERO_BYTES_FREES
2583  if (bytes == 0) { fREe(oldmem); return 0; }
2584#endif
2585
2586  if ((long)bytes < 0) return NULL;
2587
2588  /* realloc of null is supposed to be the same as malloc */
2589  if (oldmem == NULL) return mALLOc(bytes);
2590
2591  newp    = oldp    = mem2chunk(oldmem);
2592  newsize = oldsize = chunksize(oldp);
2593
2594
2595  nb = request2size(bytes);
2596
2597#if HAVE_MMAP
2598  if (chunk_is_mmapped(oldp))
2599  {
2600#if HAVE_MREMAP
2601    newp = mremap_chunk(oldp, nb);
2602    if(newp) return chunk2mem(newp);
2603#endif
2604    /* Note the extra SIZE_SZ overhead. */
2605    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2606    /* Must alloc, copy, free. */
2607    newmem = mALLOc(bytes);
2608    if (newmem == 0) return 0; /* propagate failure */
2609    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2610    munmap_chunk(oldp);
2611    return newmem;
2612  }
2613#endif
2614
2615  check_inuse_chunk(oldp);
2616
2617  if ((long)(oldsize) < (long)(nb))
2618  {
2619
2620    /* Try expanding forward */
2621
2622    next = chunk_at_offset(oldp, oldsize);
2623    if (next == top || !inuse(next))
2624    {
2625      nextsize = chunksize(next);
2626
2627      /* Forward into top only if a remainder */
2628      if (next == top)
2629      {
2630        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2631        {
2632          newsize += nextsize;
2633          top = chunk_at_offset(oldp, nb);
2634          set_head(top, (newsize - nb) | PREV_INUSE);
2635          set_head_size(oldp, nb);
2636          return chunk2mem(oldp);
2637        }
2638      }
2639
2640      /* Forward into next chunk */
2641      else if (((long)(nextsize + newsize) >= (long)(nb)))
2642      {
2643        unlink(next, bck, fwd);
2644        newsize  += nextsize;
2645        goto split;
2646      }
2647    }
2648    else
2649    {
2650      next = NULL;
2651      nextsize = 0;
2652    }
2653
2654    /* Try shifting backwards. */
2655
2656    if (!prev_inuse(oldp))
2657    {
2658      prev = prev_chunk(oldp);
2659      prevsize = chunksize(prev);
2660
2661      /* try forward + backward first to save a later consolidation */
2662
2663      if (next != NULL)
2664      {
2665        /* into top */
2666        if (next == top)
2667        {
2668          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2669          {
2670            unlink(prev, bck, fwd);
2671            newp = prev;
2672            newsize += prevsize + nextsize;
2673            newmem = chunk2mem(newp);
2674            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2675            top = chunk_at_offset(newp, nb);
2676            set_head(top, (newsize - nb) | PREV_INUSE);
2677            set_head_size(newp, nb);
2678            return newmem;
2679          }
2680        }
2681
2682        /* into next chunk */
2683        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2684        {
2685          unlink(next, bck, fwd);
2686          unlink(prev, bck, fwd);
2687          newp = prev;
2688          newsize += nextsize + prevsize;
2689          newmem = chunk2mem(newp);
2690          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2691          goto split;
2692        }
2693      }
2694
2695      /* backward only */
2696      if (prev != NULL && (long)(prevsize + newsize) >= (long)nb)
2697      {
2698        unlink(prev, bck, fwd);
2699        newp = prev;
2700        newsize += prevsize;
2701        newmem = chunk2mem(newp);
2702        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2703        goto split;
2704      }
2705    }
2706
2707    /* Must allocate */
2708
2709    newmem = mALLOc (bytes);
2710
2711    if (newmem == NULL)  /* propagate failure */
2712      return NULL;
2713
2714    /* Avoid copy if newp is next chunk after oldp. */
2715    /* (This can only happen when new chunk is sbrk'ed.) */
2716
2717    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2718    {
2719      newsize += chunksize(newp);
2720      newp = oldp;
2721      goto split;
2722    }
2723
2724    /* Otherwise copy, free, and exit */
2725    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2726    fREe(oldmem);
2727    return newmem;
2728  }
2729
2730
2731 split:  /* split off extra room in old or expanded chunk */
2732
2733  if (newsize - nb >= MINSIZE) /* split off remainder */
2734  {
2735    remainder = chunk_at_offset(newp, nb);
2736    remainder_size = newsize - nb;
2737    set_head_size(newp, nb);
2738    set_head(remainder, remainder_size | PREV_INUSE);
2739    set_inuse_bit_at_offset(remainder, remainder_size);
2740    fREe(chunk2mem(remainder)); /* let free() deal with it */
2741  }
2742  else
2743  {
2744    set_head_size(newp, newsize);
2745    set_inuse_bit_at_offset(newp, newsize);
2746  }
2747
2748  check_inuse_chunk(newp);
2749  return chunk2mem(newp);
2750}
2751
2752
2753
2754
2755/*
2756
2757  memalign algorithm:
2758
2759    memalign requests more than enough space from malloc, finds a spot
2760    within that chunk that meets the alignment request, and then
2761    possibly frees the leading and trailing space.
2762
2763    The alignment argument must be a power of two. This property is not
2764    checked by memalign, so misuse may result in random runtime errors.
2765
2766    8-byte alignment is guaranteed by normal malloc calls, so don't
2767    bother calling memalign with an argument of 8 or less.
2768
2769    Overreliance on memalign is a sure way to fragment space.
2770
2771*/
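
/*
  Typical use (a sketch; the 64-byte alignment is just an example):
*/
#if 0   /* illustrative only, never compiled */
static void example_memalign_use(void)  /* hypothetical helper */
{
  Void_t* buf = mEMALIGn(64, 200);  /* 200 usable bytes, 64-byte aligned */

  if (buf != NULL)
    fREe(buf);                      /* memalign'ed memory is freed normally */
}
#endif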
2772
2773
2774#if __STD_C
2775Void_t* mEMALIGn(size_t alignment, size_t bytes)
2776#else
2777Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2778#endif
2779{
2780  INTERNAL_SIZE_T    nb;      /* padded  request size */
2781  char*     m;                /* memory returned by malloc call */
2782  mchunkptr p;                /* corresponding chunk */
2783  char*     brk;              /* alignment point within p */
2784  mchunkptr newp;             /* chunk to return */
2785  INTERNAL_SIZE_T  newsize;   /* its size */
2786  INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
2787  mchunkptr remainder;        /* spare room at end to split off */
2788  long      remainder_size;   /* its size */
2789
2790  if ((long)bytes < 0) return NULL;
2791
2792  /* If need less alignment than we give anyway, just relay to malloc */
2793
2794  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2795
2796  /* Otherwise, ensure that it is at least a minimum chunk size */
2797
2798  if (alignment <  MINSIZE) alignment = MINSIZE;
2799
2800  /* Call malloc with worst case padding to hit alignment. */
2801
2802  nb = request2size(bytes);
2803  m  = (char*)(mALLOc(nb + alignment + MINSIZE));
2804
2805  if (m == NULL) return NULL; /* propagate failure */
2806
2807  p = mem2chunk(m);
2808
2809  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2810  {
2811#if HAVE_MMAP
2812    if(chunk_is_mmapped(p))
2813      return chunk2mem(p); /* nothing more to do */
2814#endif
2815  }
2816  else /* misaligned */
2817  {
2818    /*
2819      Find an aligned spot inside chunk.
2820      Since we need to give back leading space in a chunk of at
2821      least MINSIZE, if the first calculation places us at
2822      a spot with less than MINSIZE leader, we can move to the
2823      next aligned spot -- we've allocated enough total room so that
2824      this is always possible.
2825    */
2826
2827    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2828    if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2829
2830    newp = (mchunkptr)brk;
2831    leadsize = brk - (char*)(p);
2832    newsize = chunksize(p) - leadsize;
2833
2834#if HAVE_MMAP
2835    if(chunk_is_mmapped(p))
2836    {
2837      newp->prev_size = p->prev_size + leadsize;
2838      set_head(newp, newsize|IS_MMAPPED);
2839      return chunk2mem(newp);
2840    }
2841#endif
2842
2843    /* give back leader, use the rest */
2844
2845    set_head(newp, newsize | PREV_INUSE);
2846    set_inuse_bit_at_offset(newp, newsize);
2847    set_head_size(p, leadsize);
2848    fREe(chunk2mem(p));
2849    p = newp;
2850
2851    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2852  }
2853
2854  /* Also give back spare room at the end */
2855
2856  remainder_size = chunksize(p) - nb;
2857
2858  if (remainder_size >= (long)MINSIZE)
2859  {
2860    remainder = chunk_at_offset(p, nb);
2861    set_head(remainder, remainder_size | PREV_INUSE);
2862    set_head_size(p, nb);
2863    fREe(chunk2mem(remainder));
2864  }
2865
2866  check_inuse_chunk(p);
2867  return chunk2mem(p);
2868
2869}
2870
2871
2872
2873
2874/*
2875    valloc just invokes memalign with alignment argument equal
2876    to the page size of the system (or as near to this as can
2877    be figured out from all the includes/defines above.)
2878*/
2879
2880#if __STD_C
2881Void_t* vALLOc(size_t bytes)
2882#else
2883Void_t* vALLOc(bytes) size_t bytes;
2884#endif
2885{
2886  return mEMALIGn (malloc_getpagesize, bytes);
2887}
2888
2889/*
2890  pvalloc just rounds the request up to the nearest multiple of the
2891  pagesize and relays to memalign
2892*/
2893
2894
2895#if __STD_C
2896Void_t* pvALLOc(size_t bytes)
2897#else
2898Void_t* pvALLOc(bytes) size_t bytes;
2899#endif
2900{
2901  size_t pagesize = malloc_getpagesize;
2902  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2903}
2904
2905/*
2906
2907  calloc calls malloc, then zeroes out the allocated chunk.
2908
2909*/
2910
2911#if __STD_C
2912Void_t* cALLOc(size_t n, size_t elem_size)
2913#else
2914Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2915#endif
2916{
2917  mchunkptr p;
2918  INTERNAL_SIZE_T csz;
2919
2920  INTERNAL_SIZE_T sz = n * elem_size;
2921  Void_t* mem;
2922
2923  /* check if malloc_extend_top was called, in which case we don't need to clear */
2924#if MORECORE_CLEARS
2925  mchunkptr oldtop = top;
2926  INTERNAL_SIZE_T oldtopsize = chunksize(top);
2927#endif
2928
2929  if ((long)n < 0) return NULL;  /* reject bogus counts before allocating */
2930  mem = mALLOc (sz);
2931
2932  if (mem == NULL)
2933    return NULL;
2934  else
2935  {
2936    p = mem2chunk(mem);
2937
2938    /* Two optional cases in which clearing is not necessary */
2939
2940
2941#if HAVE_MMAP
2942    if (chunk_is_mmapped(p)) return mem;
2943#endif
2944
2945    csz = chunksize(p);
2946
2947#if MORECORE_CLEARS
2948    if (p == oldtop && csz > oldtopsize)
2949    {
2950      /* clear only the bytes from non-freshly-sbrked memory */
2951      csz = oldtopsize;
2952    }
2953#endif
2954
2955    MALLOC_ZERO(mem, csz - SIZE_SZ);
2956    return mem;
2957  }
2958}
2959
2960/*
2961
2962  cfree just calls free. It is needed/defined on some systems
2963  that pair it with calloc, presumably for odd historical reasons.
2964
2965*/
2966
2967#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
2968#if __STD_C
2969void cfree(Void_t *mem)
2970#else
2971void cfree(mem) Void_t *mem;
2972#endif
2973{
2974  fREe(mem);
2975}
2976#endif
2977
2978
2979
2980/*
2981
2982    Malloc_trim gives memory back to the system (via negative
2983    arguments to sbrk) if there is unused memory at the `high' end of
2984    the malloc pool. You can call this after freeing large blocks of
2985    memory to potentially reduce the system-level memory requirements
2986    of a program. However, it cannot guarantee to reduce memory. Under
2987    some allocation patterns, some large free blocks of memory will be
2988    locked between two used chunks, so they cannot be given back to
2989    the system.
2990
2991    The `pad' argument to malloc_trim represents the amount of free
2992    trailing space to leave untrimmed. If this argument is zero,
2993    only the minimum amount of memory to maintain internal data
2994    structures will be left (one page or less). Non-zero arguments
2995    can be supplied to maintain enough trailing space to service
2996    future expected allocations without having to re-obtain memory
2997    from the system.
2998
2999    Malloc_trim returns 1 if it actually released any memory, else 0.
3000
3001*/
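
/*
  Worked example (a sketch assuming a 4096-byte page): with
  top_size == 20000 and pad == 0,

     extra = ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096 == 16384

  so four whole pages are handed back and about 3.5 KB stays in top.
*/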
3002
3003#if __STD_C
3004int malloc_trim(size_t pad)
3005#else
3006int malloc_trim(pad) size_t pad;
3007#endif
3008{
3009  long  top_size;        /* Amount of top-most memory */
3010  long  extra;           /* Amount to release */
3011  char* current_brk;     /* address returned by pre-check sbrk call */
3012  char* new_brk;         /* address returned by negative sbrk call */
3013
3014  unsigned long pagesz = malloc_getpagesize;
3015
3016  top_size = chunksize(top);
3017  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3018
3019  if (extra < (long)pagesz)  /* Not enough memory to release */
3020    return 0;
3021
3022  else
3023  {
3024    /* Test to make sure no one else called sbrk */
3025    current_brk = (char*)(MORECORE (0));
3026    if (current_brk != (char*)(top) + top_size)
3027      return 0;     /* Apparently we don't own memory; must fail */
3028
3029    else
3030    {
3031      new_brk = (char*)(MORECORE (-extra));
3032
3033      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3034      {
3035        /* Try to figure out what we have */
3036        current_brk = (char*)(MORECORE (0));
3037        top_size = current_brk - (char*)top;
3038        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3039        {
3040          sbrked_mem = current_brk - sbrk_base;
3041          set_head(top, top_size | PREV_INUSE);
3042        }
3043        check_chunk(top);
3044        return 0;
3045      }
3046
3047      else
3048      {
3049        /* Success. Adjust top accordingly. */
3050        set_head(top, (top_size - extra) | PREV_INUSE);
3051        sbrked_mem -= extra;
3052        check_chunk(top);
3053        return 1;
3054      }
3055    }
3056  }
3057}
3058
3059
3060
3061/*
3062  malloc_usable_size:
3063
3064    This routine tells you how many bytes you can actually use in an
3065    allocated chunk, which may be more than you requested (although
3066    often not). You can use this many bytes without worrying about
3067    overwriting other allocated objects. Not a particularly great
3068    programming practice, but still sometimes useful.
3069
3070*/
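
/*
  E.g. (a sketch assuming SIZE_SZ == 4): malloc(20) is served by a
  24-byte chunk, so malloc_usable_size() on it reports 24 - SIZE_SZ
  == 20 usable bytes.
*/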
3071
3072#if __STD_C
3073size_t malloc_usable_size(Void_t* mem)
3074#else
3075size_t malloc_usable_size(mem) Void_t* mem;
3076#endif
3077{
3078  mchunkptr p;
3079  if (mem == NULL)
3080    return 0;
3081  else
3082  {
3083    p = mem2chunk(mem);
3084    if(!chunk_is_mmapped(p))
3085    {
3086      if (!inuse(p)) return 0;
3087      check_inuse_chunk(p);
3088      return chunksize(p) - SIZE_SZ;
3089    }
3090    return chunksize(p) - 2*SIZE_SZ;
3091  }
3092}
3093
3094
3095
3096
3097/* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3098
3099#ifdef DEBUG
3100static void malloc_update_mallinfo(void)
3101{
3102  int i;
3103  mbinptr b;
3104  mchunkptr p;
3105#ifdef DEBUG
3106  mchunkptr q;
3107#endif
3108
3109  INTERNAL_SIZE_T avail = chunksize(top);
3110  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3111
3112  for (i = 1; i < NAV; ++i)
3113  {
3114    b = bin_at(i);
3115    for (p = last(b); p != b; p = p->bk)
3116    {
3117#ifdef DEBUG
3118      check_free_chunk(p);
3119      for (q = next_chunk(p);
3120           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3121           q = next_chunk(q))
3122        check_inuse_chunk(q);
3123#endif
3124      avail += chunksize(p);
3125      navail++;
3126    }
3127  }
3128
3129  current_mallinfo.ordblks = navail;
3130  current_mallinfo.uordblks = sbrked_mem - avail;
3131  current_mallinfo.fordblks = avail;
3132  current_mallinfo.hblks = n_mmaps;
3133  current_mallinfo.hblkhd = mmapped_mem;
3134  current_mallinfo.keepcost = chunksize(top);
3135
3136}
3137#endif  /* DEBUG */



/*

  malloc_stats:

    Prints the amount of space obtained from the system (both via
    sbrk and mmap), the maximum amount (which may be more than the
    current amount if malloc_trim and/or munmap got called), the
    maximum number of simultaneous mmap regions used, and the current
    number of bytes allocated via malloc (or realloc, etc) but not
    yet freed. (Note that this is the number of bytes allocated, not
    the number requested. It will be larger than the number requested
    because of alignment and bookkeeping overhead.)

*/

#ifdef DEBUG
void malloc_stats()
{
  malloc_update_mallinfo();
  printf("max system bytes = %10u\n",
          (unsigned int)(max_total_mem));
  printf("system bytes     = %10u\n",
          (unsigned int)(sbrked_mem + mmapped_mem));
  printf("in use bytes     = %10u\n",
          (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
#if HAVE_MMAP
  printf("max mmap regions = %10u\n",
          (unsigned int)max_n_mmaps);
#endif
}
#endif  /* DEBUG */
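
/*
  Example output (illustrative figures only; the routine exists only
  when DEBUG is defined, and the actual values depend on heap state):

    max system bytes =     540672
    system bytes     =     540672
    in use bytes     =     296960
*/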

/*
  mallinfo returns a copy of the updated current_mallinfo structure.
*/

#ifdef DEBUG
struct mallinfo mALLINFo()
{
  malloc_update_mallinfo();
  return current_mallinfo;
}
#endif  /* DEBUG */
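
/*
  Usage sketch (illustrative only, not part of the original source;
  mALLINFo exists only when DEBUG is defined). Note the invariant
  uordblks + fordblks == sbrked_mem, which follows directly from
  malloc_update_mallinfo() above.
*/
#if 0   /* example only -- compiled out, like the header block above */
static void example_mallinfo(void)
{
  struct mallinfo mi = mALLINFo();
  printf("free chunks     = %d\n", mi.ordblks);
  printf("free bytes      = %d\n", mi.fordblks);
  printf("allocated bytes = %d\n", mi.uordblks);
  printf("trimmable (top) = %d\n", mi.keepcost);
}
#endif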




/*
  mallopt:

    mallopt is the general SVID/XPG interface to tunable parameters.
    The format is to provide a (parameter-number, parameter-value) pair.
    mallopt then sets the corresponding parameter to the argument
    value if it can (i.e., so long as the value is meaningful),
    and returns 1 if successful else 0.

    See descriptions of tunable parameters above.

*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      if (value != 0) return 0;
      n_mmaps_max = value; return 1;
#endif

    default:
      return 0;
  }
}
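
/*
  Usage sketch (illustrative only, not part of the original source):
  raise the trim threshold so short bursts of frees do not trigger
  repeated sbrk traffic, and disable mmap use entirely. Setting
  M_MMAP_MAX to 0 succeeds whether or not HAVE_MMAP is defined.
*/
#if 0   /* example only -- compiled out, like the header block above */
static void example_mallopt(void)
{
  if (mALLOPt(M_TRIM_THRESHOLD, 256 * 1024) == 0)
    printf("could not set trim threshold\n");
  if (mALLOPt(M_MMAP_MAX, 0) == 0)
    printf("could not disable mmap\n");
}
#endif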

/*

History:

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
          (e.g. WIN32 platforms)
         * Clean up header file inclusion for WIN32 platforms
         * Clean up code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't,
        given the above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of the old version, but most details differ.)

*/