.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries.  This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace.  The mount option is
        ignored on non-init namespace mounts.  Please refer to the
        Delegation section for details.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

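As an illustration (the mount point and option combination are only an
example), the v2 hierarchy could be mounted with delegation and
recursive protection semantics enabled with::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup

or the options could be changed later, from the init namespace, with::

  # mount -o remount,nsdelegate,memory_recursiveprot none /sys/fs/cgroup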

Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

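For example, assuming a previously created child cgroup named "test"
(the name and the use of the shell's own PID are illustrative), the
current shell can be migrated into it with::

  # echo $$ > test/cgroup.procs
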
When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain (invalid)"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or when threaded controllers are enabled in
the "cgroup.subtree_control" file while there are processes in the
cgroup.  A threaded domain reverts to a normal domain when the
conditions clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.

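As a sketch of how a threaded subtree might be assembled (the names,
the $TID placeholder and the choice of "cpu", a threaded controller,
are illustrative)::

  # mkdir threads
  # echo $$ > threads/cgroup.procs
  # mkdir threads/t1 threads/t2
  # echo threaded > threads/t1/cgroup.type
  # echo threaded > threads/t2/cgroup.type
  # echo "+cpu" > threads/cgroup.subtree_control
  # echo $TID > threads/t1/cgroup.threads

Making t1 threaded turns "threads" into a threaded domain; the last
write then moves one thread of a process in that domain into t1.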

[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.  After
the one process in C exits, B and C's "populated" fields would flip to
"0" and file modified events will be generated on the "cgroup.events"
files of both cgroups.

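A minimal sketch of waiting for that event from the shell, assuming
inotifywait from the inotify-tools package is available and the cgroup
path is illustrative::

  # inotifywait -e modify test-cgroup/cgroup.events
  # grep populated test-cgroup/cgroup.events
  populated 0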

Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or all fail.  If multiple operations on the same controller
are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.

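A minimal sketch of that last pattern, with hypothetical names (a
populated cgroup "svc" grows a "leaf" child which takes over its
processes before "svc" enables a controller for its children)::

  # mkdir svc/leaf
  # while read pid; do echo $pid > svc/leaf/cgroup.procs; done < svc/cgroup.procs
  # echo "+memory" > svc/cgroup.subtree_control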

Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

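For the first method, the delegation might be set up as follows, with
the user name and cgroup path being illustrative::

  # chown u0 delegated
  # chown u0 delegated/cgroup.procs
  # chown u0 delegated/cgroup.threads
  # chown u0 delegated/cgroup.subtree_control
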
The end results are equivalent for both delegation types.  Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.

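For instance, if three active children have weights 100, 100 and 200
(hypothetical values), they receive 25%, 25% and 50% of the resource
respectively; should one of the weight-100 children go idle, the
remaining two split the resource 1:2, i.e. roughly 33% and 67%.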

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

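For example, read bandwidth and write IOPS limits could be set on
device 8:0 with (the values and device number are illustrative; the
IO controller section documents the exact key names)::

  # echo "8:0 rbps=2097152 wiops=120" > io.max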

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled.  It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line.  The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root.  Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line.  The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows a space separated list of all controllers available
        to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups.  Starts out empty.

        When read, it shows a space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        A space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers.  A controller
        name prefixed with '+' enables the controller and '-'
        disables.  If a controller appears more than once on the list,
        the last one is effective.  When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file.  The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file.  The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.
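
        For example, the nesting allowed in a delegated subtree might
        be capped by writing to both files (the values are
        illustrative)::

          # echo 100 > cgroup.max.descendants
          # echo 5 > cgroup.max.depth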

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups.  A cgroup
                becomes dying after being deleted by a user.  The cgroup
                will remain in the dying state for some undefined time
                (which can depend on system load) before being
                completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root cgroups.
        Allowed values are "0" and "1".  The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups.  This means that all belonging processes will
        be stopped and will not run until the cgroup is explicitly
        unfrozen.  Freezing of the cgroup may take some time; when this
        action is completed, the "frozen" value in the cgroup.events
        control file will be updated to "1" and the corresponding
        notification will be issued.

        A cgroup can be frozen either by its own settings, or by settings
        of any ancestor cgroups.  If any of the ancestor cgroups is
        frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal signal.
        They also can enter and leave a frozen cgroup: either by an
        explicit move by a user, or if freezing of the cgroup races with
        fork().  If a process is moved to a frozen cgroup, it stops.  If
        a process is moved out of a frozen cgroup, it becomes running.

        Frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty) cgroup,
        as well as create new sub-cgroups.

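        For example, a cgroup could be frozen and checked as follows
        (the path is illustrative; "frozen" may still read 0
        immediately after the write while freezing is in progress)::

          # echo 1 > frozen-group/cgroup.freeze
          # grep frozen frozen-group/cgroup.events
          frozen 1
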
  cgroup.kill
        A write-only single value file which exists in non-root cgroups.
        The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant
        cgroups to be killed.  This means that all processes located in
        the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks
        appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP as
        killing cgroups is a process directed operation, i.e. it affects
        the whole thread-group.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal
basis and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup.  Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups.  The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2).  Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit.  It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration.  "max" for $MAX indicates no limit.  If only
        one number is written, $MAX is updated.

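        For example, a group could be limited to half a CPU with (the
        50ms quota per 100ms period split is only an illustration)::

          # echo "50000 100000" > cpu.max
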
  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
        A read-write single value file which exists on non-root cgroups.
        The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization clamp
        values similar to sched_setattr(2).  This minimum utilization
        value is used to clamp the task specific minimum utilization clamp.

        The requested minimum utilization (protection) is always capped by
        the current value for the maximum utilization (limit), i.e.
        `cpu.uclamp.max`.

  cpu.uclamp.max
        A read-write single value file which exists on non-root cgroups.
        The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage rational
        number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization clamp
        values similar to sched_setattr(2).  This maximum utilization
        value is used to clamp the task specific maximum utilization clamp.


Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

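For example, on a system with 4096-byte pages (a common but not
universal configuration), writing a non-aligned value could round up
as follows::

  # echo 100000 > memory.high
  # cat memory.high
  102400
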
  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Hard memory protection.  If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions.  If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked.  Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective min boundary is limited by the memory.min values
        of all ancestor cgroups.  If there is memory.min overcommitment
        (the child cgroup or cgroups are requiring more protected
        memory than the parent will allow), then each child cgroup will
        get the part of the parent's protection proportional to its
        actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Best-effort memory protection.  If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.
        Above the effective low boundary (or
        effective min boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective low boundary is limited by the memory.low values
        of all ancestor cgroups.  If there is memory.low overcommitment
        (the child cgroup or cgroups are requiring more protected
        memory than the parent will allow), then each child cgroup will
        get the part of the parent's protection proportional to its
        actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage throttle limit.  This is the main mechanism to
        control memory usage of a cgroup.  If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.

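        As an illustration (the value is arbitrary), a workload could
        be confined to roughly 1GiB with::

          # echo 1G > memory.high
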
1182  memory.max
1183        A read-write single value file which exists on non-root
1184        cgroups.  The default is "max".
1185
1186        Memory usage hard limit.  This is the final protection
1187        mechanism.  If a cgroup's memory usage reaches this limit and
1188        can't be reduced, the OOM killer is invoked in the cgroup.
1189        Under certain circumstances, the usage may go over the limit
1190        temporarily.
1191
1192        In default configuration regular 0-order allocations always
1193        succeed unless OOM killer chooses current task as a victim.
1194
1195        Some kinds of allocations don't invoke the OOM killer.
1196        Caller could retry them differently, return into userspace
1197        as -ENOMEM or silently ignore in cases like disk readahead.
1198
1199        This is the ultimate protection mechanism.  As long as the
1200        high limit is used and monitored properly, this limit's
1201        utility is limited to providing the final safety net.
1202
1203  memory.oom.group
1204        A read-write single value file which exists on non-root
1205        cgroups.  The default value is "0".
1206
1207        Determines whether the cgroup should be treated as
1208        an indivisible workload by the OOM killer. If set,
1209        all tasks belonging to the cgroup or to its descendants
1210        (if the memory cgroup is not a leaf cgroup) are killed
1211        together or not at all. This can be used to avoid
1212        partial kills to guarantee workload integrity.
1213
1214        Tasks with the OOM protection (oom_score_adj set to -1000)
1215        are treated as an exception and are never killed.
1216
1217        If the OOM killer is invoked in a cgroup, it's not going
1218        to kill any tasks outside of this cgroup, regardless
1219        memory.oom.group values of ancestor cgroups.
1220
1221  memory.events
1222        A read-only flat-keyed file which exists on non-root cgroups.
1223        The following entries are defined.  Unless specified
1224        otherwise, a value change in this file generates a file
1225        modified event.
1226
1227        Note that all fields in this file are hierarchical and the
1228        file modified event can be generated due to an event down the
1229        hierarchy. For for the local events at the cgroup level see
1230        memory.events.local.
1231
1232          low
1233                The number of times the cgroup is reclaimed due to
1234                high memory pressure even though its usage is under
1235                the low boundary.  This usually indicates that the low
1236                boundary is over-committed.
1237
1238          high
1239                The number of times processes of the cgroup are
1240                throttled and routed to perform direct memory reclaim
1241                because the high memory boundary was exceeded.  For a
1242                cgroup whose memory usage is capped by the high limit
1243                rather than global memory pressure, this event's
1244                occurrences are expected.
1245
1246          max
1247                The number of times the cgroup's memory usage was
1248                about to go over the max boundary.  If direct reclaim
1249                fails to bring it down, the cgroup goes to OOM state.
1250
1251          oom
1252                The number of time the cgroup's memory usage was
1253                reached the limit and allocation was about to fail.
1254
1255                This event is not raised if the OOM killer is not
1256                considered as an option, e.g. for failed high-order
1257                allocations or if caller asked to not retry attempts.
1258
1259          oom_kill
1260                The number of processes belonging to this cgroup
1261                killed by any kind of OOM killer.
1262
1263  memory.events.local
1264        Similar to memory.events but the fields in the file are local
1265        to the cgroup i.e. not hierarchical. The file modified event
1266        generated on this file reflects only the local events.
1267
1268  memory.stat
1269        A read-only flat-keyed file which exists on non-root cgroups.
1270
1271        This breaks down the cgroup's memory footprint into different
1272        types of memory, type-specific details, and other information
1273        on the state and past events of the memory management system.
1274
1275        All memory amounts are in bytes.
1276
1277        The entries are ordered to be human readable, and new entries
1278        can show up in the middle. Don't rely on items remaining in a
1279        fixed position; use the keys to look up specific values!
1280
1281        If the entry has no per-node counter (or not show in the
1282        memory.numa_stat). We use 'npn' (non-per-node) as the tag
1283        to indicate that it will not show in the memory.numa_stat.
1284
1285          anon
1286                Amount of memory used in anonymous mappings such as
1287                brk(), sbrk(), and mmap(MAP_ANONYMOUS)
1288
1289          file
1290                Amount of memory used to cache filesystem data,
1291                including tmpfs and shared memory.
1292
1293          kernel_stack
1294                Amount of memory allocated to kernel stacks.
1295
1296          pagetables
1297                Amount of memory allocated for page tables.
1298
1299          percpu (npn)
1300                Amount of memory used for storing per-cpu kernel
1301                data structures.
1302
1303          sock (npn)
1304                Amount of memory used in network transmission buffers
1305
1306          shmem
1307                Amount of cached filesystem data that is swap-backed,
1308                such as tmpfs, shm segments, shared anonymous mmap()s
1309
1310          file_mapped
1311                Amount of cached filesystem data mapped with mmap()
1312
1313          file_dirty
1314                Amount of cached filesystem data that was modified but
1315                not yet written back to disk
1316
1317          file_writeback
1318                Amount of cached filesystem data that was modified and
1319                is currently being written back to disk
1320
1321          swapcached
1322                Amount of swap cached in memory. The swapcache is accounted
1323                against both memory and swap usage.
1324
1325          anon_thp
1326                Amount of memory used in anonymous mappings backed by
1327                transparent hugepages
1328
1329          file_thp
1330                Amount of cached filesystem data backed by transparent
1331                hugepages
1332
1333          shmem_thp
1334                Amount of shm, tmpfs, shared anonymous mmap()s backed by
1335                transparent hugepages
1336
1337          inactive_anon, active_anon, inactive_file, active_file, unevictable
1338                Amount of memory, swap-backed and filesystem-backed,
1339                on the internal memory management lists used by the
1340                page reclaim algorithm.
1341
                As these represent internal list state (e.g. shmem pages are on anon
1343                memory management lists), inactive_foo + active_foo may not be equal to
1344                the value for the foo counter, since the foo counter is type-based, not
1345                list-based.
1346
1347          slab_reclaimable
1348                Part of "slab" that might be reclaimed, such as
1349                dentries and inodes.
1350
1351          slab_unreclaimable
1352                Part of "slab" that cannot be reclaimed on memory
1353                pressure.
1354
1355          slab (npn)
1356                Amount of memory used for storing in-kernel data
1357                structures.
1358
1359          workingset_refault_anon
1360                Number of refaults of previously evicted anonymous pages.
1361
1362          workingset_refault_file
1363                Number of refaults of previously evicted file pages.
1364
1365          workingset_activate_anon
1366                Number of refaulted anonymous pages that were immediately
1367                activated.
1368
1369          workingset_activate_file
1370                Number of refaulted file pages that were immediately activated.
1371
1372          workingset_restore_anon
1373                Number of restored anonymous pages which have been detected as
1374                an active workingset before they got reclaimed.
1375
1376          workingset_restore_file
1377                Number of restored file pages which have been detected as an
1378                active workingset before they got reclaimed.
1379
1380          workingset_nodereclaim
1381                Number of times a shadow node has been reclaimed
1382
1383          pgfault (npn)
1384                Total number of page faults incurred
1385
1386          pgmajfault (npn)
1387                Number of major page faults incurred
1388
          pgrefill (npn)
                Number of pages scanned (in an active LRU list)

          pgscan (npn)
                Number of pages scanned (in an inactive LRU list)

          pgsteal (npn)
                Number of pages reclaimed

          pgactivate (npn)
                Number of pages moved to the active LRU list

          pgdeactivate (npn)
                Number of pages moved to the inactive LRU list

          pglazyfree (npn)
                Number of pages postponed to be freed under memory pressure

          pglazyfreed (npn)
                Number of reclaimed lazyfree pages
1409
1410          thp_fault_alloc (npn)
1411                Number of transparent hugepages which were allocated to satisfy
1412                a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
1413                is not set.
1414
1415          thp_collapse_alloc (npn)
1416                Number of transparent hugepages which were allocated to allow
1417                collapsing an existing range of pages. This counter is not
1418                present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1419
1420  memory.numa_stat
1421        A read-only nested-keyed file which exists on non-root cgroups.
1422
1423        This breaks down the cgroup's memory footprint into different
1424        types of memory, type-specific details, and other information
1425        per node on the state of the memory management system.
1426
        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node.  One use case is evaluating
        application performance by combining this information with the
        application's CPU allocation.
1432
1433        All memory amounts are in bytes.
1434
1435        The output format of memory.numa_stat is::
1436
1437          type N0=<bytes in node 0> N1=<bytes in node 1> ...
1438
1439        The entries are ordered to be human readable, and new entries
1440        can show up in the middle. Don't rely on items remaining in a
1441        fixed position; use the keys to look up specific values!
1442
        The entries have the same meanings as their counterparts in
        "memory.stat".
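
        An illustrative read output on a hypothetical two-node system
        (the values are made up)::

          # cat memory.numa_stat
          anon N0=51200 N1=102400
          file N0=1048576 N1=0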
1444
1445  memory.swap.current
1446        A read-only single value file which exists on non-root
1447        cgroups.
1448
1449        The total amount of swap currently being used by the cgroup
1450        and its descendants.
1451
1452  memory.swap.high
1453        A read-write single value file which exists on non-root
1454        cgroups.  The default is "max".
1455
1456        Swap usage throttle limit.  If a cgroup's swap usage exceeds
1457        this limit, all its further allocations will be throttled to
1458        allow userspace to implement custom out-of-memory procedures.
1459
1460        This limit marks a point of no return for the cgroup. It is NOT
1461        designed to manage the amount of swapping a workload does
1462        during regular operation. Compare to memory.swap.max, which
1463        prohibits swapping past a set amount, but lets the cgroup
1464        continue unimpeded as long as other memory can be reclaimed.
1465
1466        Healthy workloads are not expected to reach this limit.
1467
1468  memory.swap.max
1469        A read-write single value file which exists on non-root
1470        cgroups.  The default is "max".
1471
1472        Swap usage hard limit.  If a cgroup's swap usage reaches this
1473        limit, anonymous memory of the cgroup will not be swapped out.
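
        For example, swapping can be disabled for a cgroup altogether
        with the following sketch (the cgroup path is hypothetical)::

          # echo 0 > /sys/fs/cgroup/workload/memory.swap.max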
1474
1475  memory.swap.events
1476        A read-only flat-keyed file which exists on non-root cgroups.
1477        The following entries are defined.  Unless specified
1478        otherwise, a value change in this file generates a file
1479        modified event.
1480
1481          high
1482                The number of times the cgroup's swap usage was over
1483                the high threshold.
1484
1485          max
1486                The number of times the cgroup's swap usage was about
1487                to go over the max boundary and swap allocation
1488                failed.
1489
1490          fail
1491                The number of times swap allocation failed either
1492                because of running out of swap system-wide or max
1493                limit.
1494
1495        When reduced under the current usage, the existing swap
1496        entries are reclaimed gradually and the swap usage may stay
1497        higher than the limit for an extended period of time.  This
1498        reduces the impact on the workload and memory management.
1499
1500  memory.pressure
1501        A read-only nested-keyed file.
1502
1503        Shows pressure stall information for memory. See
1504        :ref:`Documentation/accounting/psi.rst <psi>` for details.
1505
1506
1507Usage Guidelines
1508~~~~~~~~~~~~~~~~
1509
"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.
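
For example, a management agent might cap a workload and then watch
"memory.events" for throttling; a sketch with a hypothetical cgroup
name and limit (the output is illustrative)::

  # echo 10G > /sys/fs/cgroup/workload/memory.high
  # grep high /sys/fs/cgroup/workload/memory.events
  high 0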
1514
1515Because breach of the high limit doesn't trigger the OOM killer but
1516throttles the offending cgroup, a management agent has ample
1517opportunities to monitor and take appropriate actions such as granting
1518more memory or terminating the workload.
1519
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
the network to a file can use all available memory but can also
perform just as well with a small amount of memory.  A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; the pressure stall information exposed through
"memory.pressure" (see above) can serve as such a measure.
1529
1530
1531Memory Ownership
1532~~~~~~~~~~~~~~~~
1533
1534A memory area is charged to the cgroup which instantiated it and stays
1535charged to the cgroup until the area is released.  Migrating a process
1536to a different cgroup doesn't move the memory usages that it
1537instantiated while in the previous cgroup to the new cgroup.
1538
A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.
1543
1544If a cgroup sweeps a considerable amount of memory which is expected
1545to be accessed repeatedly by other cgroups, it may make sense to use
1546POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1547belonging to the affected files to ensure correct memory ownership.
1548
1549
1550IO
1551--
1552
1553The "io" controller regulates the distribution of IO resources.  This
1554controller implements both weight based and absolute bandwidth or IOPS
1555limit distribution; however, weight based distribution is available
1556only if cfq-iosched is in use and neither scheme is available for
1557blk-mq devices.
1558
1559
1560IO Interface Files
1561~~~~~~~~~~~~~~~~~~
1562
1563  io.stat
1564        A read-only nested-keyed file.
1565
1566        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1567        The following nested keys are defined.
1568
1569          ======        =====================
1570          rbytes        Bytes read
1571          wbytes        Bytes written
1572          rios          Number of read IOs
1573          wios          Number of write IOs
1574          dbytes        Bytes discarded
1575          dios          Number of discard IOs
1576          ======        =====================
1577
1578        An example read output follows::
1579
1580          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1581          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1582
1583  io.cost.qos
1584        A read-write nested-keyed file which exists only on the root
1585        cgroup.
1586
1587        This file configures the Quality of Service of the IO cost
1588        model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1589        currently implements "io.weight" proportional control.  Lines
1590        are keyed by $MAJ:$MIN device numbers and not ordered.  The
1591        line for a given device is populated on the first write for
1592        the device on "io.cost.qos" or "io.cost.model".  The following
1593        nested keys are defined.
1594
1595          ======        =====================================
1596          enable        Weight-based control enable
1597          ctrl          "auto" or "user"
1598          rpct          Read latency percentile    [0, 100]
1599          rlat          Read latency threshold
1600          wpct          Write latency percentile   [0, 100]
1601          wlat          Write latency threshold
1602          min           Minimum scaling percentage [1, 10000]
1603          max           Maximum scaling percentage [1, 10000]
1604          ======        =====================================
1605
1606        The controller is disabled by default and can be enabled by
1607        setting "enable" to 1.  "rpct" and "wpct" parameters default
1608        to zero and the controller uses internal device saturation
1609        state to adjust the overall IO rate between "min" and "max".
1610
1611        When a better control quality is needed, latency QoS
1612        parameters can be configured.  For example::
1613
          8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1615
        shows that on sdb, the controller is enabled, will consider
        the device saturated if the 95th percentile of read completion
        latencies is above 75ms or that of write completion latencies
        is above 150ms, and will adjust the overall IO issue rate
        between 50% and 150% accordingly.
1620
1621        The lower the saturation point, the better the latency QoS at
1622        the cost of aggregate bandwidth.  The narrower the allowed
1623        adjustment range between "min" and "max", the more conformant
1624        to the cost model the IO behavior.  Note that the IO issue
1625        base rate may be far off from 100% and setting "min" and "max"
1626        blindly can lead to a significant loss of device capacity or
1627        control quality.  "min" and "max" are useful for regulating
        devices which show wide temporary behavior changes - e.g. an
        ssd which accepts writes at the line speed for a while and
1630        then completely stalls for multiple seconds.
1631
1632        When "ctrl" is "auto", the parameters are controlled by the
1633        kernel and may change automatically.  Setting "ctrl" to "user"
1634        or setting any of the percentile and latency parameters puts
1635        it into "user" mode and disables the automatic changes.  The
1636        automatic mode can be restored by setting "ctrl" to "auto".
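
        For example, a minimal sketch which enables cost-based control
        on a hypothetical device 8:16 and leaves all other parameters
        to the kernel::

          # echo "8:16 enable=1" > io.cost.qos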
1637
1638  io.cost.model
1639        A read-write nested-keyed file which exists only on the root
1640        cgroup.
1641
1642        This file configures the cost model of the IO cost model based
1643        controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1644        implements "io.weight" proportional control.  Lines are keyed
1645        by $MAJ:$MIN device numbers and not ordered.  The line for a
1646        given device is populated on the first write for the device on
1647        "io.cost.qos" or "io.cost.model".  The following nested keys
1648        are defined.
1649
1650          =====         ================================
1651          ctrl          "auto" or "user"
1652          model         The cost model in use - "linear"
1653          =====         ================================
1654
        When "ctrl" is "auto", the kernel may change all parameters
        dynamically.  When "ctrl" is set to "user" or any other
        parameter is written to, "ctrl" becomes "user" and the
        automatic changes are disabled.
1659
1660        When "model" is "linear", the following model parameters are
1661        defined.
1662
1663          ============= ========================================
1664          [r|w]bps      The maximum sequential IO throughput
1665          [r|w]seqiops  The maximum 4k sequential IOs per second
1666          [r|w]randiops The maximum 4k random IOs per second
1667          ============= ========================================
1668
1669        From the above, the builtin linear model determines the base
1670        costs of a sequential and random IO and the cost coefficient
1671        for the IO size.  While simple, this model can cover most
1672        common device classes acceptably.
1673
1674        The IO cost model isn't expected to be accurate in absolute
1675        sense and is scaled to the device behavior dynamically.
1676
1677        If needed, tools/cgroup/iocost_coef_gen.py can be used to
1678        generate device-specific coefficients.
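
        For example, a sketch which installs a hand-tuned linear model
        on a hypothetical device 8:0 (the coefficients below are
        purely illustrative)::

          # echo "8:0 ctrl=user model=linear rbps=500000000 rseqiops=100000 rrandiops=110000 wbps=400000000 wseqiops=90000 wrandiops=80000" > io.cost.model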
1679
1680  io.weight
1681        A read-write flat-keyed file which exists on non-root cgroups.
1682        The default is "default 100".
1683
        The first line is the default weight applied to devices
        without specific override.  The rest are overrides keyed by
        $MAJ:$MIN device numbers and not ordered.  The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.
1689
1690        The default weight can be updated by writing either "default
1691        $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1692        "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1693
1694        An example read output follows::
1695
1696          default 100
1697          8:16 200
1698          8:0 50
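
        For example, a sketch which updates the default weight and
        then sets and clears a per-device override (the device numbers
        are illustrative)::

          # echo 200 > io.weight
          # echo "8:16 50" > io.weight
          # echo "8:16 default" > io.weight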
1699
1700  io.max
1701        A read-write nested-keyed file which exists on non-root
1702        cgroups.
1703
1704        BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1705        device numbers and not ordered.  The following nested keys are
1706        defined.
1707
1708          =====         ==================================
1709          rbps          Max read bytes per second
1710          wbps          Max write bytes per second
1711          riops         Max read IO operations per second
1712          wiops         Max write IO operations per second
1713          =====         ==================================
1714
1715        When writing, any number of nested key-value pairs can be
1716        specified in any order.  "max" can be specified as the value
1717        to remove a specific limit.  If the same key is specified
1718        multiple times, the outcome is undefined.
1719
1720        BPS and IOPS are measured in each IO direction and IOs are
1721        delayed if limit is reached.  Temporary bursts are allowed.
1722
1723        Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1724
1725          echo "8:16 rbps=2097152 wiops=120" > io.max
1726
1727        Reading returns the following::
1728
1729          8:16 rbps=2097152 wbps=max riops=max wiops=120
1730
1731        Write IOPS limit can be removed by writing the following::
1732
1733          echo "8:16 wiops=max" > io.max
1734
1735        Reading now returns the following::
1736
1737          8:16 rbps=2097152 wbps=max riops=max wiops=max
1738
1739  io.pressure
1740        A read-only nested-keyed file.
1741
1742        Shows pressure stall information for IO. See
1743        :ref:`Documentation/accounting/psi.rst <psi>` for details.
1744
1745
1746Writeback
1747~~~~~~~~~
1748
1749Page cache is dirtied through buffered writes and shared mmaps and
1750written asynchronously to the backing filesystem by the writeback
1751mechanism.  Writeback sits between the memory and IO domains and
1752regulates the proportion of dirty memory by balancing dirtying and
1753write IOs.
1754
1755The io controller, in conjunction with the memory controller,
1756implements control of page cache writeback IOs.  The memory controller
1757defines the memory domain that dirty memory ratio is calculated and
1758maintained for and the io controller defines the io domain which
1759writes out dirty pages for the memory domain.  Both system-wide and
1760per-cgroup dirty memory states are examined and the more restrictive
1761of the two is enforced.
1762
1763cgroup writeback requires explicit support from the underlying
1764filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
1765btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are 
1766attributed to the root cgroup.
1767
1768There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of writeback, an
1771inode is assigned to a cgroup and all IO requests to write dirty pages
1772from the inode are attributed to that cgroup.
1773
1774As cgroup ownership for memory is tracked per page, there can be pages
1775which are associated with different cgroups than the one the inode is
1776associated with.  These are called foreign pages.  The writeback
1777constantly keeps track of foreign pages and, if a particular foreign
1778cgroup becomes the majority over a certain period of time, switches
1779the ownership of the inode to that cgroup.
1780
1781While this model is enough for most use cases where a given inode is
1782mostly dirtied by a single cgroup even when the main writing cgroup
1783changes over time, use cases where multiple cgroups write to a single
1784inode simultaneously are not supported well.  In such circumstances, a
1785significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
1787doesn't update it until the page is released, even if writeback
1788strictly follows page ownership, multiple cgroups dirtying overlapping
1789areas wouldn't work as expected.  It's recommended to avoid such usage
1790patterns.
1791
1792The sysctl knobs which affect writeback behavior are applied to cgroup
1793writeback as follows.
1794
1795  vm.dirty_background_ratio, vm.dirty_ratio
1796        These ratios apply the same to cgroup writeback with the
1797        amount of available memory capped by limits imposed by the
1798        memory controller and system-wide clean memory.
1799
1800  vm.dirty_background_bytes, vm.dirty_bytes
        For cgroup writeback, these are calculated into a ratio
        against total available memory and applied the same way as
        vm.dirty[_background]_ratio.
1804
1805
1806IO Latency
1807~~~~~~~~~~
1808
1809This is a cgroup v2 controller for IO workload protection.  You provide a group
1810with a latency target, and if the average latency exceeds that target the
1811controller will throttle any peers that have a lower latency target than the
1812protected workload.
1813
1814The limits are only applied at the peer level in the hierarchy.  This means that
1815in the diagram below, only groups A, B, and C will influence each other, and
1816groups D and F will influence each other.  Group G will influence nobody::
1817
1818                        [root]
1819                /          |            \
1820                A          B            C
1821               /  \        |
1822              D    F       G
1823
1824
1825So the ideal way to configure this is to set io.latency in groups A, B, and C.
1826Generally you do not want to set a value lower than the latency your device
1827supports.  Experiment to find the value that works best for your workload.
1828Start at higher than the expected latency for your device and watch the
1829avg_lat value in io.stat for your workload group to get an idea of the
1830latency you see during normal operation.  Use the avg_lat value as a basis for
1831your real setting, setting at 10-15% higher than the value in io.stat.
1832
1833How IO Latency Throttling Works
1834~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1835
1836io.latency is work conserving; so as long as everybody is meeting their latency
1837target the controller doesn't do anything.  Once a group starts missing its
1838target it begins throttling any peer group that has a higher target than itself.
1839This throttling takes 2 forms:
1840
- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.
1844
1845- Artificial delay induction.  There are certain types of IO that cannot be
1846  throttled without possibly adversely affecting higher priority groups.  This
1847  includes swapping and metadata IO.  These types of IO are allowed to occur
1848  normally, however they are "charged" to the originating group.  If the
1849  originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this number can
1852  grow quite large if there is a lot of swapping or metadata IO occurring we
1853  limit the individual delay events to 1 second at a time.
1854
1855Once the victimized group starts meeting its latency target again it will start
1856unthrottling any peer groups that were throttled previously.  If the victimized
1857group simply stops doing IO the global counter will unthrottle appropriately.
1858
1859IO Latency Interface Files
1860~~~~~~~~~~~~~~~~~~~~~~~~~~
1861
1862  io.latency
1863        This takes a similar format as the other controllers.
1864
                "MAJOR:MINOR target=<target time in microseconds>"
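
        For example, a sketch setting a 75ms (75000us) latency target
        on a hypothetical device 8:16::

          # echo "8:16 target=75000" > io.latency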
1866
1867  io.stat
1868        If the controller is enabled you will see extra stats in io.stat in
1869        addition to the normal ones.
1870
1871          depth
1872                This is the current queue depth for the group.
1873
1874          avg_lat
1875                This is an exponential moving average with a decay rate of 1/exp
1876                bound by the sampling interval.  The decay rate interval can be
1877                calculated by multiplying the win value in io.stat by the
1878                corresponding number of samples based on the win value.
1879
1880          win
1881                The sampling window size in milliseconds.  This is the minimum
1882                duration of time between evaluation events.  Windows only elapse
1883                with IO activity.  Idle periods extend the most recent window.
1884
1885IO Priority
1886~~~~~~~~~~~
1887
A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute. The following values are accepted for
that attribute:
1891
1892  no-change
1893        Do not modify the I/O priority class.
1894
1895  none-to-rt
1896        For requests that do not have an I/O priority class (NONE),
1897        change the I/O priority class into RT. Do not modify
1898        the I/O priority class of other requests.
1899
1900  restrict-to-be
1901        For requests that do not have an I/O priority class or that have I/O
1902        priority class RT, change it into BE. Do not modify the I/O priority
1903        class of requests that have priority class IDLE.
1904
1905  idle
1906        Change the I/O priority class of all requests into IDLE, the lowest
1907        I/O priority class.
1908
1909The following numerical values are associated with the I/O priority policies:
1910
+----------------+---+
| no-change      | 0 |
+----------------+---+
| none-to-rt     | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
1920
1921The numerical value that corresponds to each I/O priority class is as follows:
1922
1923+-------------------------------+---+
1924| IOPRIO_CLASS_NONE             | 0 |
1925+-------------------------------+---+
1926| IOPRIO_CLASS_RT (real-time)   | 1 |
1927+-------------------------------+---+
1928| IOPRIO_CLASS_BE (best effort) | 2 |
1929+-------------------------------+---+
1930| IOPRIO_CLASS_IDLE             | 3 |
1931+-------------------------------+---+
1932
1933The algorithm to set the I/O priority class for a request is as follows:
1934
1935- Translate the I/O priority class policy into a number.
1936- Change the request I/O priority class into the maximum of the I/O priority
1937  class policy number and the numerical I/O priority class.
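
For example, under the restrict-to-be policy (value 2), a request with
IOPRIO_CLASS_RT (1) is changed to max(2, 1) = 2, i.e. best effort,
while an IDLE request keeps max(2, 3) = 3.  A sketch of applying the
policy to a cgroup, assuming the io controller is enabled::

  # echo restrict-to-be > io.prio.class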
1938
1939PID
1940---
1941
1942The process number controller is used to allow a cgroup to stop any
1943new tasks from being fork()'d or clone()'d after a specified limit is
1944reached.
1945
1946The number of tasks in a cgroup can be exhausted in ways which other
1947controllers cannot prevent, thus warranting its own controller.  For
1948example, a fork bomb is likely to exhaust the number of tasks before
1949hitting memory restrictions.
1950
Note that PIDs used in this controller refer to TIDs, i.e. process
IDs as used by the kernel.
1953
1954
1955PID Interface Files
1956~~~~~~~~~~~~~~~~~~~
1957
1958  pids.max
1959        A read-write single value file which exists on non-root
1960        cgroups.  The default is "max".
1961
1962        Hard limit of number of processes.
1963
1964  pids.current
1965        A read-only single value file which exists on all cgroups.
1966
1967        The number of processes currently in the cgroup and its
1968        descendants.
1969
1970Organisational operations are not blocked by cgroup policies, so it is
1971possible to have pids.current > pids.max.  This can be done by either
1972setting the limit to be smaller than pids.current, or attaching enough
1973processes to the cgroup such that pids.current is larger than
1974pids.max.  However, it is not possible to violate a cgroup PID policy
1975through fork() or clone(). These will return -EAGAIN if the creation
1976of a new process would cause a cgroup policy to be violated.
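
For example, a sketch which caps a hypothetical cgroup at 100 tasks
and reads back the current count (the output is illustrative)::

  # echo 100 > /sys/fs/cgroup/workload/pids.max
  # cat /sys/fs/cgroup/workload/pids.current
  3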
1977
1978
1979Cpuset
1980------
1981
1982The "cpuset" controller provides a mechanism for constraining
1983the CPU and memory node placement of tasks to only the resources
1984specified in the cpuset interface files in a task's current cgroup.
1985This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
1987memory placement to reduce cross-node memory access and contention
1988can improve overall system performance.
1989
1990The "cpuset" controller is hierarchical.  That means the controller
1991cannot use CPUs or memory nodes not allowed in its parent.
1992
1993
1994Cpuset Interface Files
1995~~~~~~~~~~~~~~~~~~~~~~
1996
1997  cpuset.cpus
1998        A read-write multiple values file which exists on non-root
1999        cpuset-enabled cgroups.
2000
2001        It lists the requested CPUs to be used by tasks within this
2002        cgroup.  The actual list of CPUs to be granted, however, is
2003        subjected to constraints imposed by its parent and can differ
2004        from the requested CPUs.
2005
2006        The CPU numbers are comma-separated numbers or ranges.
2007        For example::
2008
2009          # cat cpuset.cpus
2010          0-4,6,8-10
2011
2012        An empty value indicates that the cgroup is using the same
2013        setting as the nearest cgroup ancestor with a non-empty
2014        "cpuset.cpus" or all the available CPUs if none is found.
2015
2016        The value of "cpuset.cpus" stays constant until the next update
2017        and won't be affected by any CPU hotplug events.
2018
2019  cpuset.cpus.effective
2020        A read-only multiple values file which exists on all
2021        cpuset-enabled cgroups.
2022
2023        It lists the onlined CPUs that are actually granted to this
2024        cgroup by its parent.  These CPUs are allowed to be used by
2025        tasks within the current cgroup.
2026
2027        If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2028        all the CPUs from the parent cgroup that can be available to
2029        be used by this cgroup.  Otherwise, it should be a subset of
2030        "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2031        can be granted.  In this case, it will be treated just like an
2032        empty "cpuset.cpus".
2033
2034        Its value will be affected by CPU hotplug events.
2035
2036  cpuset.mems
2037        A read-write multiple values file which exists on non-root
2038        cpuset-enabled cgroups.
2039
2040        It lists the requested memory nodes to be used by tasks within
2041        this cgroup.  The actual list of memory nodes granted, however,
2042        is subjected to constraints imposed by its parent and can differ
2043        from the requested memory nodes.
2044
2045        The memory node numbers are comma-separated numbers or ranges.
2046        For example::
2047
2048          # cat cpuset.mems
2049          0-1,3
2050
2051        An empty value indicates that the cgroup is using the same
2052        setting as the nearest cgroup ancestor with a non-empty
2053        "cpuset.mems" or all the available memory nodes if none
2054        is found.
2055
2056        The value of "cpuset.mems" stays constant until the next update
2057        and won't be affected by any memory nodes hotplug events.
2058
2059  cpuset.mems.effective
2060        A read-only multiple values file which exists on all
2061        cpuset-enabled cgroups.
2062
2063        It lists the onlined memory nodes that are actually granted to
2064        this cgroup by its parent. These memory nodes are allowed to
2065        be used by tasks within the current cgroup.
2066
2067        If "cpuset.mems" is empty, it shows all the memory nodes from the
2068        parent cgroup that will be available to be used by this cgroup.
2069        Otherwise, it should be a subset of "cpuset.mems" unless none of
2070        the memory nodes listed in "cpuset.mems" can be granted.  In this
2071        case, it will be treated just like an empty "cpuset.mems".
2072
2073        Its value will be affected by memory nodes hotplug events.
2074
2075  cpuset.cpus.partition
2076        A read-write single value file which exists on non-root
2077        cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2078        and is not delegatable.
2079
2080        It accepts only the following input values when written to.
2081
2082          ========      ================================
2083          "root"        a partition root
2084          "member"      a non-root member of a partition
2085          ========      ================================
2086
2087        When set to be a partition root, the current cgroup is the
2088        root of a new partition or scheduling domain that comprises
2089        itself and all its descendants except those that are separate
2090        partition roots themselves and their descendants.  The root
2091        cgroup is always a partition root.
2092
2093        There are constraints on where a partition root can be set.
2094        It can only be set in a cgroup if all the following conditions
2095        are true.
2096
        1) The "cpuset.cpus" is not empty and the list of CPUs is
           exclusive, i.e. they are not shared by any of its siblings.
        2) The parent cgroup is a partition root.
        3) The "cpuset.cpus" is also a proper subset of the parent's
           "cpuset.cpus.effective".
        4) There are no child cgroups with cpuset enabled.  This
           eliminates corner cases that would have to be handled if
           such a condition were allowed.
2105
2106        Setting it to partition root will take the CPUs away from the
2107        effective CPUs of the parent cgroup.  Once it is set, this
2108        file cannot be reverted back to "member" if there are any child
2109        cgroups with cpuset enabled.
2110
2111        A parent partition cannot distribute all its CPUs to its
2112        child partitions.  There must be at least one cpu left in the
2113        parent partition.
2114
        Once it becomes a partition root, changes to "cpuset.cpus" are
        generally allowed as long as the first condition above remains
        true, the change does not take away all the CPUs from the
        parent partition, and the new "cpuset.cpus" value is a superset
        of its children's "cpuset.cpus" values.
2120
        Sometimes, external factors like changes to ancestors'
        "cpuset.cpus" or cpu hotplug can cause the state of the
        partition root to change.  On read, the "cpuset.cpus.partition"
        file can show the following values.
2125
2126          ==============        ==============================
2127          "member"              Non-root member of a partition
2128          "root"                Partition root
2129          "root invalid"        Invalid partition root
2130          ==============        ==============================
2131
2132        It is a partition root if the first 2 partition root conditions
2133        above are true and at least one CPU from "cpuset.cpus" is
2134        granted by the parent cgroup.
2135
        A partition root can become invalid if none of the CPUs
        requested in "cpuset.cpus" can be granted by the parent cgroup
        or if the parent cgroup is no longer a partition root itself.  In this
2139        case, it is not a real partition even though the restriction
2140        of the first partition root condition above will still apply.
2141        The cpu affinity of all the tasks in the cgroup will then be
2142        associated with CPUs in the nearest ancestor partition.
2143
2144        An invalid partition root can be transitioned back to a
2145        real partition root if at least one of the requested CPUs
2146        can now be granted by its parent.  In this case, the cpu
2147        affinity of all the tasks in the formerly invalid partition
2148        will be associated to the CPUs of the newly formed partition.
2149        Changing the partition state of an invalid partition root to
2150        "member" is always allowed even if child cpusets are present.
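
        For example, a sketch which carves CPUs 0-3 into a new
        partition, assuming a child cgroup "p1" with cpuset enabled
        and CPUs 0-3 not used by any sibling::

          # echo "0-3" > p1/cpuset.cpus
          # echo root > p1/cpuset.cpus.partition
          # cat p1/cpuset.cpus.partition
          root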
2151
2152
2153Device controller
2154-----------------
2155
The device controller manages access to device files. It includes both
the creation of new device files (using mknod) and access to existing
device files.
2159
2160Cgroup v2 device controller has no interface files and is implemented
on top of cgroup BPF. To control access to device files, a user may
create BPF programs of type BPF_CGROUP_DEVICE and attach them
2163to cgroups. On an attempt to access a device file, corresponding
2164BPF programs will be executed, and depending on the return value
2165the attempt will succeed or fail with -EPERM.
2166
2167A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
2168structure, which describes the device access attempt: access type
2169(mknod/read/write) and device (type, major and minor numbers).
2170If the program returns 0, the attempt fails with -EPERM, otherwise
2171it succeeds.
2172
2173An example of BPF_CGROUP_DEVICE program may be found in the kernel
2174source tree in the tools/testing/selftests/bpf/progs/dev_cgroup.c file.
2175
2176
2177RDMA
2178----
2179
2180The "rdma" controller regulates the distribution and accounting of
2181RDMA resources.
2182
2183RDMA Interface Files
2184~~~~~~~~~~~~~~~~~~~~
2185
2186  rdma.max
        A read-write nested-keyed file that exists for all the cgroups
        except root and describes the currently configured resource
        limit for an RDMA/IB device.
2190
2191        Lines are keyed by device name and are not ordered.
        Each line contains a space-separated resource name and its
        configured limit that can be distributed.
2194
2195        The following nested keys are defined.
2196
2197          ==========    =============================
2198          hca_handle    Maximum number of HCA Handles
2199          hca_object    Maximum number of HCA Objects
2200          ==========    =============================
2201
        An example for mlx4 and ocrdma devices follows::
2203
2204          mlx4_0 hca_handle=2 hca_object=2000
2205          ocrdma1 hca_handle=3 hca_object=max
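
        A limit can be configured by writing a line for the device;
        a sketch (the device names and values are illustrative)::

          # echo mlx4_0 hca_handle=2 hca_object=2000 > rdma.max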
2206
2207  rdma.current
        A read-only file that describes current resource usage.
        It exists for all the cgroups except root.
2210
        An example for mlx4 and ocrdma devices follows::
2212
2213          mlx4_0 hca_handle=1 hca_object=20
2214          ocrdma1 hca_handle=1 hca_object=23
2215
2216HugeTLB
2217-------
2218
The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.
2221
2222HugeTLB Interface Files
2223~~~~~~~~~~~~~~~~~~~~~~~
2224
2225  hugetlb.<hugepagesize>.current
        Shows current usage for "hugepagesize" hugetlb.  It exists for
        all the cgroups except root.
2228
2229  hugetlb.<hugepagesize>.max
        Set/show the hard limit of "hugepagesize" hugetlb usage.
        The default value is "max".  It exists for all the cgroups
        except root.
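
        For example, a sketch which limits a cgroup to 1GiB of 2MB
        hugepages (the "<hugepagesize>" component depends on the
        hugepage sizes supported by the system)::

          # echo 1G > hugetlb.2MB.max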
2232
2233  hugetlb.<hugepagesize>.events
2234        A read-only flat-keyed file which exists on non-root cgroups.
2235
2236          max
                The number of allocation failures due to the HugeTLB limit
2238
2239  hugetlb.<hugepagesize>.events.local
2240        Similar to hugetlb.<hugepagesize>.events but the fields in the file
2241        are local to the cgroup i.e. not hierarchical. The file modified event
2242        generated on this file reflects only the local events.
2243
2244Misc
2245----
2246
2247The Miscellaneous cgroup provides the resource limiting and tracking
2248mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources.  The controller is enabled by the CONFIG_CGROUP_MISC
config option.
2251
2252A resource can be added to the controller via enum misc_res_type{} in the
2253include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
in the kernel/cgroup/misc.c file.  The provider of the resource must set its
capacity prior to using the resource by calling misc_cg_set_capacity().
2256
2257Once a capacity is set then the resource usage can be updated using charge and
2258uncharge APIs. All of the APIs to interact with misc controller are in
2259include/linux/misc_cgroup.h.
2260
2261Misc Interface Files
2262~~~~~~~~~~~~~~~~~~~~
2263
The miscellaneous controller provides 3 interface files.  If two misc
resources (res_a and res_b) are registered then:
2265
2266  misc.capacity
2267        A read-only flat-keyed file shown only in the root cgroup.  It shows
2268        miscellaneous scalar resources available on the platform along with
2269        their quantities::
2270
2271          $ cat misc.capacity
2272          res_a 50
2273          res_b 10
2274
2275  misc.current
        A read-only flat-keyed file shown in the non-root cgroups.  It shows
        the current usage of the resources in the cgroup and its children::
2278
2279          $ cat misc.current
2280          res_a 3
2281          res_b 0
2282
2283  misc.max
        A read-write flat-keyed file shown in the non-root cgroups.  Allowed
        maximum usage of the resources in the cgroup and its children::
2286
2287          $ cat misc.max
2288          res_a max
2289          res_b 4
2290
2291        Limit can be set by::
2292
2293          # echo res_a 1 > misc.max
2294
2295        Limit can be set to max by::
2296
2297          # echo res_a max > misc.max
2298
2299        Limits can be set higher than the capacity value in the misc.capacity
2300        file.
2301
2302Migration and Ownership
2303~~~~~~~~~~~~~~~~~~~~~~~
2304
2305A miscellaneous scalar resource is charged to the cgroup in which it is used
2306first, and stays charged to that cgroup until that resource is freed. Migrating
2307a process to a different cgroup does not move the charge to the destination
2308cgroup where the process has moved.
2309
2310Others
2311------
2312
2313perf_event
2314~~~~~~~~~~
2315
The perf_event controller, if not mounted on a legacy hierarchy, is
2317automatically enabled on the v2 hierarchy so that perf events can
2318always be filtered by cgroup v2 path.  The controller can still be
2319moved to a legacy hierarchy after v2 hierarchy is populated.
2320
2321
2322Non-normative information
2323-------------------------
2324
2325This section contains information that isn't considered to be a part of
2326the stable kernel API and so is subject to change.
2327
2328
2329CPU controller root cgroup process behaviour
2330~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2331
When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  This child cgroup's weight is dependent on its
thread's nice level.
2336
For details of this mapping see the sched_prio_to_weight array in
the kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
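
For example, sched_prio_to_weight maps nice 0 to 1024 and nice -20 to
88761; scaled so that nice 0 reads 100, these correspond to cgroup
weights of 100 and roughly 8668 (88761 * 100 / 1024) respectively.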
2340
2341
2342IO controller root cgroup process behaviour
2343~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2344
2345Root cgroup processes are hosted in an implicit leaf child node.
2346When distributing IO resources this implicit child node is taken into
2347account as if it was a normal child cgroup of the root cgroup with a
2348weight value of 200.
2349
2350
2351Namespace
2352=========
2353
2354Basics
2355------
2356
2357cgroup namespace provides a mechanism to virtualize the view of the
2358"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
2359flag can be used with clone(2) and unshare(2) to create a new cgroup
2360namespace.  The process running inside the cgroup namespace will have
2361its "/proc/$PID/cgroup" output restricted to cgroupns root.  The
2362cgroupns root is the cgroup of the process at the time of creation of
2363the cgroup namespace.
2364
2365Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2366complete path of the cgroup of a process.  In a container setup where
2367a set of cgroups and namespaces are intended to isolate processes the
2368"/proc/$PID/cgroup" file may leak potential system level information
2369to the isolated processes.  For example::
2370
2371  # cat /proc/self/cgroup
2372  0::/batchjobs/container_id1
2373
2374The path '/batchjobs/container_id1' can be considered as system-data
2375and undesirable to expose to the isolated processes.  cgroup namespace
2376can be used to restrict visibility of this path.  For example, before
2377creating a cgroup namespace, one would see::
2378
2379  # ls -l /proc/self/ns/cgroup
2380  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2381  # cat /proc/self/cgroup
2382  0::/batchjobs/container_id1
2383
2384After unsharing a new namespace, the view changes::
2385
2386  # ls -l /proc/self/ns/cgroup
2387  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2388  # cat /proc/self/cgroup
2389  0::/
2390
2391When some thread from a multi-threaded process unshares its cgroup
2392namespace, the new cgroupns gets applied to the entire process (all
2393the threads).  This is natural for the v2 hierarchy; however, for the
2394legacy hierarchies, this may be unexpected.
2395
2396A cgroup namespace is alive as long as there are processes inside or
2397mounts pinning it.  When the last usage goes away, the cgroup
2398namespace is destroyed.  The cgroupns root and the actual cgroups
2399remain.
2400
2401
2402The Root and Views
2403------------------
2404
2405The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2406process calling unshare(2) is running.  For example, if a process in
2407/batchjobs/container_id1 cgroup calls unshare, cgroup
2408/batchjobs/container_id1 becomes the cgroupns root.  For the
2409init_cgroup_ns, this is the real root ('/') cgroup.
2410
2411The cgroupns root cgroup does not change even if the namespace creator
2412process later moves to a different cgroup::
2413
2414  # ~/unshare -c # unshare cgroupns in some cgroup
2415  # cat /proc/self/cgroup
2416  0::/
2417  # mkdir sub_cgrp_1
2418  # echo 0 > sub_cgrp_1/cgroup.procs
2419  # cat /proc/self/cgroup
2420  0::/sub_cgrp_1
2421
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
2423
2424Processes running inside the cgroup namespace will be able to see
2425cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2426From within an unshared cgroupns::
2427
2428  # sleep 100000 &
2429  [1] 7353
2430  # echo 7353 > sub_cgrp_1/cgroup.procs
2431  # cat /proc/7353/cgroup
2432  0::/sub_cgrp_1
2433
2434From the initial cgroup namespace, the real cgroup path will be
2435visible::
2436
2437  $ cat /proc/7353/cgroup
2438  0::/batchjobs/container_id1/sub_cgrp_1
2439
2440From a sibling cgroup namespace (that is, a namespace rooted at a
2441different cgroup), the cgroup path relative to its own cgroup
2442namespace root will be shown.  For instance, if PID 7353's cgroup
2443namespace root is at '/batchjobs/container_id2', then it will see::
2444
2445  # cat /proc/7353/cgroup
2446  0::/../container_id2/sub_cgrp_1
2447
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
2450
2451
2452Migration and setns(2)
2453----------------------
2454
2455Processes inside a cgroup namespace can move into and out of the
2456namespace root if they have proper access to external cgroups.  For
2457example, from inside a namespace with cgroupns root at
2458/batchjobs/container_id1, and assuming that the global hierarchy is
2459still accessible inside cgroupns::
2460
2461  # cat /proc/7353/cgroup
2462  0::/sub_cgrp_1
2463  # echo 7353 > batchjobs/container_id2/cgroup.procs
2464  # cat /proc/7353/cgroup
2465  0::/../container_id2
2466
2467Note that this kind of setup is not encouraged.  A task inside cgroup
2468namespace should only be exposed to its own cgroupns hierarchy.
2469
2470setns(2) to another cgroup namespace is allowed when:
2471
2472(a) the process has CAP_SYS_ADMIN against its current user namespace
2473(b) the process has CAP_SYS_ADMIN against the target cgroup
2474    namespace's userns
2475
No implicit cgroup changes happen when attaching to another cgroup
namespace.  It is expected that someone moves the attaching
process under the target cgroup namespace root.
2479
2480
2481Interaction with Other Namespaces
2482---------------------------------
2483
A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
2486
2487  # mount -t cgroup2 none $MOUNT_POINT
2488
2489This will mount the unified cgroup hierarchy with cgroupns root as the
2490filesystem root.  The process needs CAP_SYS_ADMIN against its user and
2491mount namespaces.
2492
The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.
2496
2497
2498Information on Kernel Programming
2499=================================
2500
2501This section contains kernel programming information in the areas
2502where interacting with cgroup is necessary.  cgroup core and
2503controllers are not covered.
2504
2505
2506Filesystem Support for Writeback
2507--------------------------------
2508
A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.
2512
2513  wbc_init_bio(@wbc, @bio)
2514        Should be called for each bio carrying writeback data and
2515        associates the bio with the inode's owner cgroup and the
2516        corresponding request queue.  This must be called after
2517        a queue (device) has been associated with the bio and
2518        before submission.
2519
2520  wbc_account_cgroup_owner(@wbc, @page, @bytes)
2521        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it is easiest and most natural
        to call it as data segments are added to a bio.
2525
With writeback bios annotated, cgroup support can be enabled per
2527super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
2528selective disabling of cgroup writeback support which is helpful when
2529certain filesystem features, e.g. journaled data mode, are
2530incompatible.
2531
2532wbc_init_bio() binds the specified bio to its cgroup.  Depending on
2533the configuration, the bio may be executed at a lower priority and if
2534the writeback session is holding shared resources, e.g. a journal
2535entry, may lead to priority inversion.  There is no one easy solution
2536for the problem.  Filesystems can try to work around specific problem
2537cases by skipping wbc_init_bio() and using bio_associate_blkg()
2538directly.
2539
2540
2541Deprecated v1 Core Features
2542===========================
2543
2544- Multiple hierarchies including named ones are not supported.
2545
- None of the v1 mount options is supported.
2547
2548- The "tasks" file is removed and "cgroup.procs" is not sorted.
2549
2550- "cgroup.clone_children" is removed.
2551
2552- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" file
2553  at the root instead.
2554
2555
2556Issues with v1 and Rationales for v2
2557====================================
2558
2559Multiple Hierarchies
2560--------------------
2561
2562cgroup v1 allowed an arbitrary number of hierarchies and each
2563hierarchy could host any number of controllers.  While this seemed to
2564provide a high level of flexibility, it wasn't useful in practice.
2565
2566For example, as there is only one instance of each controller, utility
2567type controllers such as freezer which can be useful in all
2568hierarchies could only be used in one.  The issue is exacerbated by
2569the fact that controllers couldn't be moved to another hierarchy once
2570hierarchies were populated.  Another issue was that all controllers
2571bound to a hierarchy were forced to have exactly the same view of the
2572hierarchy.  It wasn't possible to vary the granularity depending on
2573the specific controller.
2574
2575In practice, these issues heavily limited which controllers could be
2576put on the same hierarchy and most configurations resorted to putting
2577each controller on its own hierarchy.  Only closely related ones, such
2578as the cpu and cpuacct controllers, made sense to be put on the same
2579hierarchy.  This often meant that userland ended up managing multiple
2580similar hierarchies repeating the same steps on each hierarchy
2581whenever a hierarchy management operation was necessary.
2582
2583Furthermore, support for multiple hierarchies came at a steep cost.
2584It greatly complicated cgroup core implementation but more importantly
2585the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2587
2588There was no limit on how many hierarchies there might be, which meant
2589that a thread's cgroup membership couldn't be described in finite
2590length.  The key might contain any number of entries and was unlimited
2591in length, which made it highly awkward to manipulate and led to
2592addition of controllers which existed only to identify membership,
2593which in turn exacerbated the original problem of proliferating number
2594of hierarchies.
2595
2596Also, as a controller couldn't have any expectation regarding the
2597topologies of hierarchies other controllers might be on, each
2598controller had to assume that all other controllers were attached to
2599completely orthogonal hierarchies.  This made it impossible, or at
2600least very cumbersome, for controllers to cooperate with each other.
2601
2602In most use cases, putting controllers on hierarchies which are
2603completely orthogonal to each other isn't necessary.  What usually is
2604called for is the ability to have differing levels of granularity
2605depending on the specific controller.  In other words, hierarchy may
2606be collapsed from leaf towards root when viewed from specific
2607controllers.  For example, a given configuration might not care about
2608how memory is distributed beyond a certain level while still wanting
2609to control how CPU cycles are distributed.
2610
2611
2612Thread Granularity
2613------------------
2614
2615cgroup v1 allowed threads of a process to belong to different cgroups.
2616This didn't make sense for some controllers and those controllers
2617ended up implementing different ways to ignore such situations but
2618much more importantly it blurred the line between API exposed to
2619individual applications and system management interface.
2620
2621Generally, in-process knowledge is available only to the process
2622itself; thus, unlike service-level organization of processes,
2623categorizing threads of a process requires active participation from
2624the application which owns the target process.
2625
2626cgroup v1 had an ambiguously defined delegation model which got abused
2627in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
2629sub-hierarchies and control resource distributions along them.  This
2630effectively raised cgroup to the status of a syscall-like API exposed
2631to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to it, open the
result and then read and/or write to it.  This is not only extremely
clunky and unusual but also inherently racy.  There is no conventional
way to define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.
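
For illustration, a minimal sketch of those steps in shell, assuming
the typical v1 layout where the memory controller is mounted alone
under /sys/fs/cgroup/memory (the mount point and the limit value are
assumptions, not part of any stable contract)::

  # Extract this process's path on the memory hierarchy from the
  # "ID:controllers:path" lines of /proc/self/cgroup.
  path=$(awk -F: '$2 == "memory" {print $3}' /proc/self/cgroup)

  # Construct the knob path and write to it.  Nothing makes these
  # steps atomic - the process may be migrated between any of them.
  echo 100M > "/sys/fs/cgroup/memory${path}/memory.limit_in_bytes"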

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and the kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.
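
To illustrate with assumed numbers - v1's default cpu.shares of 1024
and a weight of 1024 for a nice-0 thread - consider a parent whose
children are meant to split its cycles 2:1::

  parent/
    child-A/  cpu.shares=2048   # intended 2/3 of the parent's cycles
    child-B/  cpu.shares=1024   # intended 1/3

  # With no threads in parent/ itself: A:B = 2048:1024 = 2/3 : 1/3.
  # With two nice-0 threads in parent/, each weighing 1024:
  #   A = 2048/5120 = 2/5,  B = 1024/5120 = 1/5
  # and the ratios keep drifting as internal threads come and go.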

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
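
For example, v1's blkio controller carried duplicated weight knobs for
the hidden leaf::

  blkio.weight          blkio.leaf_weight
  blkio.weight_device   blkio.leaf_weight_device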

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed by the cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large number of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not only
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.
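
As a sketch of such a top-down reserve (the sizes are arbitrary), a
delegatee that received a protected budget can subdivide it among its
own children::

  parent/     memory.low=4G    # reserve granted from above
  parent/a/   memory.low=3G    # delegatee's own subdivision
  parent/b/   memory.low=1G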

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error-prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.
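
A common pattern following from this (the values are illustrative) is
a conservative high boundary backed by a hard cap::

  echo 4G > memory.high   # throttle into direct reclaim past this
  echo 5G > memory.max    # contain runaway or malicious spillover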

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set
the limit to prevent new charges, and then reclaim and OOM kill until
the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.