==============
Memory Hotplug
==============

Created:                                        Jul 28 2007
Add description of notifier of memory hotplug   Oct 11 2007

This document describes memory hotplug: how to use it and its current status.
Because memory hotplug is still under development, the contents of this text
will change often.

1. Introduction
  1.1 purpose of memory hotplug
  1.2. Phases of memory hotplug
  1.3. Unit of Memory online/offline operation
2. Kernel Configuration
3. sysfs files for memory hotplug
4. Physical memory hot-add phase
  4.1 Hardware(Firmware) Support
  4.2 Notify memory hot-add event by hand
5. Logical Memory hot-add phase
  5.1. State of memory
  5.2. How to online memory
6. Logical memory remove
  6.1 Memory offline and ZONE_MOVABLE
  6.2. How to offline memory
7. Physical memory remove
8. Memory hotplug event notifier
9. Future Work List

Note(1): x86_64 has a special implementation for memory hotplug.
         This text does not describe it.
Note(2): This text assumes that sysfs is mounted at /sys.


---------------
1. Introduction
---------------

1.1 purpose of memory hotplug
------------
Memory Hotplug allows users to increase/decrease the amount of memory.
Generally, there are two purposes:

(A) Changing the amount of memory.
    This allows a feature like capacity on demand.
(B) Installing/removing DIMMs or NUMA-nodes physically.
    This is to exchange DIMMs/NUMA-nodes, reduce power consumption, etc.

(A) is required by highly virtualized environments and (B) is required by
hardware which supports memory power management.

Linux memory hotplug is designed for both purposes.


1.2. Phases of memory hotplug
---------------
There are 2 phases in Memory Hotplug:
  1) Physical Memory Hotplug phase
  2) Logical Memory Hotplug phase

The first phase is to communicate with hardware/firmware and to create/delete
the environment for the hotplugged memory. Basically, this phase is necessary
for purpose (B), but it is also a good point of communication in highly
virtualized environments.

When memory is hotplugged, the kernel recognizes the new memory, creates new
memory management tables, and creates sysfs files for operating on the new
memory.

If the firmware supports notifying the OS of the connection of new memory,
this phase is triggered automatically. ACPI can notify this event. If not,
the "probe" operation by the system administrator is used instead
(see Section 4).

The Logical Memory Hotplug phase is to change the memory state into
available/unavailable for users. The amount of memory seen by users is changed
by this phase. When a memory range is made available, the kernel turns all of
its memory into free pages.

In this document, this phase is described as online/offline.

The Logical Memory Hotplug phase is triggered by a write to a sysfs file by the
system administrator. For the hot-add case, it must be executed by hand after
the Physical Hotplug phase. (However, if you write udev hotplug scripts for
memory hotplug, these phases can be executed seamlessly, as in the sketch
below.)
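
For instance, a udev rule along the following lines onlines every memory block
as soon as it appears; this is only a minimal sketch, and the rule file name is
an example, not something shipped by udev:

# /etc/udev/rules.d/80-memory-hotplug.rules (example name)
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"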


1.3. Unit of Memory online/offline operation
------------
Memory hotplug uses the SPARSEMEM memory model. SPARSEMEM divides the whole
memory into chunks of the same size. Such a chunk is called a "section". The
size of a section is architecture dependent. For example, power uses 16MiB and
ia64 uses 1GiB. The unit of online/offline operation is "one section"
(see Section 3).

To determine the size of sections, please read this file:

/sys/devices/system/memory/block_size_bytes

This file shows the size of sections in bytes.
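
For example, on a configuration whose section size is 128MiB the value (printed
in hexadecimal, without a leading "0x") would look like this; the exact value
depends on your architecture and configuration:

% cat /sys/devices/system/memory/block_size_bytes
8000000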

-----------------------
2. Kernel Configuration
-----------------------
To use the memory hotplug feature, the kernel must be compiled with the
following config options (a quick way to check a running kernel is shown after
this list).

- For all memory hotplug
    Memory model -> Sparse Memory  (CONFIG_SPARSEMEM)
    Allow for memory hot-add       (CONFIG_MEMORY_HOTPLUG)

- To enable memory removal, the following are also necessary
    Allow for memory hot remove    (CONFIG_MEMORY_HOTREMOVE)
    Page Migration                 (CONFIG_MIGRATION)

- For ACPI memory hotplug, the following is also necessary
    Memory hotplug (under ACPI Support menu) (CONFIG_ACPI_HOTPLUG_MEMORY)
    This option can be built as a kernel module.

- As a related configuration, if your box has a feature of NUMA-node hotplug
  via ACPI, then this option is necessary too.
    ACPI0004, PNP0A05 and PNP0A06 Container Driver (under ACPI Support menu)
    (CONFIG_ACPI_CONTAINER).
    This option can be built as a kernel module too.
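
Assuming your distribution installs the kernel configuration under /boot (this
path is an assumption; adjust as needed), the relevant options of a running
kernel can be checked like this; the output shown is only illustrative:

% grep -E "CONFIG_(SPARSEMEM|MEMORY_HOTPLUG|MEMORY_HOTREMOVE|MIGRATION|ACPI_HOTPLUG_MEMORY)=" /boot/config-$(uname -r)
CONFIG_SPARSEMEM=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_MIGRATION=y
CONFIG_ACPI_HOTPLUG_MEMORY=m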

---------------------------------
3. sysfs files for memory hotplug
---------------------------------
All sections have their device information in sysfs.  Each section is part of
a memory block under /sys/devices/system/memory as

/sys/devices/system/memory/memoryXXX
(XXX is the section id.)

Now, XXX is defined as (start_address_of_section / section_size) of the first
section contained in the memory block.  The files 'phys_index' and
'end_phys_index' under each directory report the beginning and end section ids
for the memory block covered by the sysfs directory.  It is expected that all
memory sections in this range are present and no memory holes exist in the
range. Currently there is no way to determine if there is a memory hole, but
the existence of one should not affect the hotplug capabilities of the memory
block.

For example, assume a 1GiB section size. A device for memory starting at
0x100000000 is /sys/devices/system/memory/memory4
(0x100000000 / 1GiB = 4).
This device covers the address range [0x100000000 ... 0x140000000)
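
In the same spirit, the block directory name for an arbitrary physical address
can be computed from block_size_bytes (note the added "0x" prefix, since that
file reports hexadecimal). This is only a sketch; with the 1GiB section size
assumed above it prints "memory4":

% echo memory$(( 0x100000000 / 0x$(cat /sys/devices/system/memory/block_size_bytes) ))
memory4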

Under each section, you can see 4 or 5 files, the end_phys_index file being
a recent addition and not present on older kernels:

/sys/devices/system/memory/memoryXXX/phys_index
/sys/devices/system/memory/memoryXXX/end_phys_index
/sys/devices/system/memory/memoryXXX/phys_device
/sys/devices/system/memory/memoryXXX/state
/sys/devices/system/memory/memoryXXX/removable

'phys_index'      : read-only and contains the section id of the first section
                    in the memory block, same as XXX.
'end_phys_index'  : read-only and contains the section id of the last section
                    in the memory block.
'state'           : read-write
                    at read:  contains the online/offline state of memory.
                    at write: user can specify "online_kernel",
                    "online_movable", "online", or "offline" commands,
                    which will be performed on all sections in the block.
'phys_device'     : read-only: designed to show the name of the physical memory
                    device.  This is not well implemented now.
'removable'       : read-only: contains an integer value indicating
                    whether the memory block is removable or not
                    removable.  A value of 1 indicates that the memory
                    block is removable and a value of 0 indicates that
                    it is not removable. A memory block is removable only if
                    every section in the block is removable.
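
For illustration, reading these files for the memory4 block from the earlier
example might look as follows; the values are only examples, and phys_index
and end_phys_index are reported in hexadecimal:

% cd /sys/devices/system/memory/memory4
% cat phys_index end_phys_index state removable
00000004
00000004
online
1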

NOTE:
  These directories/files appear after the physical memory hotplug phase.

If CONFIG_NUMA is enabled, the memoryXXX/ directories can also be accessed
via symbolic links located in the /sys/devices/system/node/node* directories.

For example:
/sys/devices/system/node/node0/memory9 -> ../../memory/memory9

A backlink will also be created:
/sys/devices/system/memory/memory9/node0 -> ../../node/node0

--------------------------------
4. Physical memory hot-add phase
--------------------------------

4.1 Hardware(Firmware) Support
------------
On x86_64/ia64 platforms, memory hotplug by ACPI is supported.

In general, firmware (ACPI) which supports memory hotplug defines a memory
class object of _HID "PNP0C80". When a notify is asserted to PNP0C80, Linux's
ACPI handler hot-adds the memory to the system and calls a hotplug udev
script. This is done automatically.

But scripts for memory hotplug are not contained in the generic udev package
(yet). You may have to write one yourself, or online/offline memory by hand.
Please see "How to online memory" and "How to offline memory" in this text.

If the firmware supports NUMA-node hotplug and defines an object with _HID
"ACPI0004", "PNP0A05", or "PNP0A06", notification is asserted to it, and the
ACPI handler calls the hotplug code for all objects which are defined in it.
If a memory device is found, the memory hotplug code will be called.


4.2 Notify memory hot-add event by hand
------------
In some environments, especially virtualized environments, the firmware will
not notify the kernel of a memory hotplug event. For such environments, the
"probe" interface is supported. This interface depends on
CONFIG_ARCH_MEMORY_PROBE.

Now, CONFIG_ARCH_MEMORY_PROBE is supported only by powerpc, but it does not
contain highly architecture-specific code. Please add the config option if you
need the "probe" interface.

The probe interface is located at

/sys/devices/system/memory/probe

You can tell the kernel the physical address of new memory by

% echo start_address_of_new_memory > /sys/devices/system/memory/probe

Then, the [start_address_of_new_memory, start_address_of_new_memory +
section_size) memory range is hot-added. In this case, the hotplug script is
not called (in the current implementation). You'll have to online the memory
yourself. Please see "How to online memory" in this text.
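
For example, to hot-add a section starting at physical address 0x100000000
(the address is only illustrative):

% echo 0x100000000 > /sys/devices/system/memory/probe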



------------------------------
5. Logical Memory hot-add phase
------------------------------

5.1. State of memory
------------
To see the (online/offline) state of a memory section, read the 'state' file:

% cat /sys/devices/system/memory/memoryXXX/state

If the memory section is online, you'll read "online".
If the memory section is offline, you'll read "offline".


5.2. How to online memory
------------
Even if the memory is hot-added, it is not in a ready-to-use state.
To use newly added memory, you have to "online" the memory section.

For onlining, you have to write "online" to the section's state file as:

% echo online > /sys/devices/system/memory/memoryXXX/state

This onlining will not change the ZONE type of the target memory section.
If the memory section is in ZONE_NORMAL, you can change it to ZONE_MOVABLE:

% echo online_movable > /sys/devices/system/memory/memoryXXX/state
(NOTE: current limit: this memory section must be adjacent to ZONE_MOVABLE)

And if the memory section is in ZONE_MOVABLE, you can change it to ZONE_NORMAL:

% echo online_kernel > /sys/devices/system/memory/memoryXXX/state
(NOTE: current limit: this memory section must be adjacent to ZONE_NORMAL)

After this, section memoryXXX's state will be 'online' and the amount of
available memory will be increased.

Currently, newly added memory is added as ZONE_NORMAL (for powerpc, ZONE_DMA).
This may be changed in the future.
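
Since each block must be onlined individually, a small loop is handy. This is
just a sketch (run it as root); it onlines every block that is still offline:

for state in /sys/devices/system/memory/memory*/state; do
    grep -q offline "$state" && echo online > "$state"
done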



------------------------
6. Logical memory remove
------------------------

6.1 Memory offline and ZONE_MOVABLE
------------
Memory offlining is more complicated than memory onlining. Because memory
offlining has to make the whole memory section unused, it can fail if the
section includes memory which cannot be freed.

In general, memory offlining can use 2 techniques:

(1) reclaim and free all memory in the section.
(2) migrate all pages in the section.

In the current implementation, Linux's memory offlining uses method (2),
freeing all pages in the section by page migration. But not all pages are
migratable. Under current Linux, migratable pages are anonymous pages and
page caches. For offlining a section by migration, the kernel has to guarantee
that the section contains only migratable pages.

Now, a boot option for making a section which consists of migratable pages is
supported. By specifying the "kernelcore=" or "movablecore=" boot option, you
can create ZONE_MOVABLE...a zone which is just used for movable pages.
(See also Documentation/kernel-parameters.txt)

Assume the system has a "TOTAL" amount of memory at boot time; these boot
options create ZONE_MOVABLE as follows (a worked example follows the list).

1) When the kernelcore=YYYY boot option is used,
  the size of memory not for movable pages (not for offline) is YYYY, and
  the size of memory for movable pages (for offline) is TOTAL-YYYY.

2) When the movablecore=ZZZZ boot option is used,
  the size of memory not for movable pages (not for offline) is TOTAL - ZZZZ, and
  the size of memory for movable pages (for offline) is ZZZZ.
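
As a worked example (the numbers are only illustrative): on a machine with
TOTAL = 16GiB, booting with "movablecore=4G" puts 4GiB into ZONE_MOVABLE
(offlinable) and leaves TOTAL - ZZZZ = 12GiB not offlinable; booting with
"kernelcore=12G" on the same machine has the same effect.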


Note) Unfortunately, there is no information to show which sections belong
to ZONE_MOVABLE. This is TBD.


6.2. How to offline memory
------------
You can offline a section by using the same sysfs interface that was used in
memory onlining:

% echo offline > /sys/devices/system/memory/memoryXXX/state

If offlining succeeds, the state of the memory section is changed to "offline".
If it fails, some error code (like -EBUSY) will be returned by the kernel.
Even if a section does not belong to ZONE_MOVABLE, you can try to offline it.
If it doesn't contain 'unmovable' memory, the offlining will succeed.
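
A typical sequence (block number and output are only examples) is to check
'removable' first and then attempt the offline:

% cat /sys/devices/system/memory/memory4/removable
1
% echo offline > /sys/devices/system/memory/memory4/state
% cat /sys/devices/system/memory/memory4/state
offline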

A section under ZONE_MOVABLE is considered easy to offline. But under some busy
state, offlining may return -EBUSY. Even if a memory section cannot be offlined
due to -EBUSY, you can retry offlining it and may be able to offline it (or
not). (For example, a page may be referred to by some kernel internal call and
released soon afterwards.)

Consideration:
Memory hotplug's design direction is to make the possibility of memory
offlining higher and to guarantee unplugging memory under any situation. But it
needs more work. Returning -EBUSY under some situations may be good because the
user can decide whether to retry or not. Currently, the memory offlining code
does some amount of retrying with a 120 second timeout.

-------------------------
7. Physical memory remove
-------------------------
More implementation is still needed:
 - Notification from the OS to firmware that the remove work has completed.
 - Guarding against removal of memory that has not yet been offlined.

--------------------------------
8. Memory hotplug event notifier
--------------------------------
Memory hotplug has an event notifier. There are 6 types of notification:

MEMORY_GOING_ONLINE
  Generated before new memory becomes available, in order to be able to
  prepare subsystems to handle the memory. The page allocator is still unable
  to allocate from the new memory.

MEMORY_CANCEL_ONLINE
  Generated if MEMORY_GOING_ONLINE fails.

MEMORY_ONLINE
  Generated when memory has successfully been brought online. The callback may
  allocate pages from the new memory.

MEMORY_GOING_OFFLINE
  Generated to begin the process of offlining memory. Allocations are no
  longer possible from the memory but some of the memory to be offlined
  is still in use. The callback can be used to free memory known to a
  subsystem from the indicated memory section.

MEMORY_CANCEL_OFFLINE
  Generated if MEMORY_GOING_OFFLINE fails. Memory is available again from
  the section that we attempted to offline.

MEMORY_OFFLINE
  Generated after offlining memory is complete.

A callback routine can be registered by
  hotplug_memory_notifier(callback_func, priority)

The second argument of the callback function (action) is one of the event types
above. The third argument is a pointer to a struct memory_notify:

struct memory_notify {
       unsigned long start_pfn;
       unsigned long nr_pages;
       int status_change_nid_normal;
       int status_change_nid_high;
       int status_change_nid;
};

start_pfn is the start_pfn of the online/offline memory.
nr_pages is the number of pages of the online/offline memory.
status_change_nid_normal is set to the node id when N_NORMAL_MEMORY of the
nodemask is (or will be) set/cleared; if this is -1, the nodemask status is not
changed.
status_change_nid_high is set to the node id when N_HIGH_MEMORY of the nodemask
is (or will be) set/cleared; if this is -1, the nodemask status is not changed.
status_change_nid is set to the node id when N_MEMORY of the nodemask is (or
will be) set/cleared. It means a new (memoryless) node gains memory by onlining
or a node loses all its memory. If this is -1, the nodemask status is not
changed.
If status_change_nid* >= 0, the callback should create/discard structures for
the node if necessary.
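
As a sketch of how a subsystem might use this (the function names below are
made up; in the kernel headers the event types above are spelled
MEM_GOING_ONLINE, MEM_ONLINE, MEM_GOING_OFFLINE, MEM_CANCEL_ONLINE,
MEM_CANCEL_OFFLINE and MEM_OFFLINE):

#include <linux/init.h>
#include <linux/memory.h>
#include <linux/notifier.h>

/* Example callback: the name and the per-case actions are assumptions. */
static int example_memory_callback(struct notifier_block *self,
                                   unsigned long action, void *arg)
{
        struct memory_notify *mn = arg;

        switch (action) {
        case MEM_GOING_ONLINE:
                /* prepare per-node data for mn->status_change_nid if >= 0 */
                break;
        case MEM_ONLINE:
                /* pages [mn->start_pfn, mn->start_pfn + mn->nr_pages) usable */
                break;
        case MEM_GOING_OFFLINE:
                /* release any pages this subsystem holds in that range */
                break;
        case MEM_CANCEL_ONLINE:
        case MEM_CANCEL_OFFLINE:
        case MEM_OFFLINE:
                /* undo preparations or finalize, as appropriate */
                break;
        }
        return NOTIFY_OK;
}

static int __init example_init(void)
{
        hotplug_memory_notifier(example_memory_callback, 0);
        return 0;
}
device_initcall(example_init);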

--------------
9. Future Work
--------------
  - allowing memory hot-add to ZONE_MOVABLE. Maybe we need some switch like a
    sysctl or a new control file.
  - showing the relationship between memory sections and physical devices.
  - showing whether a memory section is under ZONE_MOVABLE or not.
  - testing and improving memory offlining.
  - support for HugeTLB page migration and offlining.
  - memmap removal at memory offline.
  - physical memory removal.