================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>
   7
   8This is the authoritative documentation on the design, interface and
   9conventions of cgroup v2.  It describes all userland-visible aspects
  10of cgroup including core and specific controller behaviors.  All
  11future changes must be reflected in this document.  Documentation for
  12v1 is available under Documentation/cgroup-v1/.
  13
  14.. CONTENTS
  15
  16   1. Introduction
  17     1-1. Terminology
  18     1-2. What is cgroup?
  19   2. Basic Operations
  20     2-1. Mounting
  21     2-2. Organizing Processes and Threads
  22       2-2-1. Processes
  23       2-2-2. Threads
  24     2-3. [Un]populated Notification
  25     2-4. Controlling Controllers
  26       2-4-1. Enabling and Disabling
  27       2-4-2. Top-down Constraint
  28       2-4-3. No Internal Process Constraint
  29     2-5. Delegation
  30       2-5-1. Model of Delegation
  31       2-5-2. Delegation Containment
  32     2-6. Guidelines
  33       2-6-1. Organize Once and Control
  34       2-6-2. Avoid Name Collisions
  35   3. Resource Distribution Models
  36     3-1. Weights
  37     3-2. Limits
  38     3-3. Protections
  39     3-4. Allocations
  40   4. Interface Files
  41     4-1. Format
  42     4-2. Conventions
  43     4-3. Core Interface Files
  44   5. Controllers
  45     5-1. CPU
  46       5-1-1. CPU Interface Files
  47     5-2. Memory
  48       5-2-1. Memory Interface Files
  49       5-2-2. Usage Guidelines
  50       5-2-3. Memory Ownership
  51     5-3. IO
  52       5-3-1. IO Interface Files
  53       5-3-2. Writeback
  54     5-4. PID
  55       5-4-1. PID Interface Files
  56     5-5. Device
  57     5-6. RDMA
  58       5-6-1. RDMA Interface Files
  59     5-7. Misc
  60       5-7-1. perf_event
  61     5-N. Non-normative information
  62       5-N-1. CPU controller root cgroup process behaviour
  63       5-N-2. IO controller root cgroup process behaviour
  64   6. Namespace
  65     6-1. Basics
  66     6-2. The Root and Views
  67     6-3. Migration and setns(2)
  68     6-4. Interaction with Other Namespaces
  69   P. Information on Kernel Programming
  70     P-1. Filesystem Support for Writeback
  71   D. Deprecated v1 Core Features
  72   R. Issues with v1 and Rationales for v2
  73     R-1. Multiple Hierarchies
  74     R-2. Thread Granularity
  75     R-3. Competition Between Inner Nodes and Threads
  76     R-4. Other Interface Issues
  77     R-5. Controller Issues and Remedies
  78       R-5-1. Memory
  79
  80
  81Introduction
  82============
  83
  84Terminology
  85-----------
  86
  87"cgroup" stands for "control group" and is never capitalized.  The
  88singular form is used to designate the whole feature and also as a
  89qualifier as in "cgroup controllers".  When explicitly referring to
  90multiple individual control groups, the plural form "cgroups" is used.
  91
  92
  93What is cgroup?
  94---------------
  95
  96cgroup is a mechanism to organize processes hierarchically and
  97distribute system resources along the hierarchy in a controlled and
  98configurable manner.
  99
 100cgroup is largely composed of two parts - the core and controllers.
 101cgroup core is primarily responsible for hierarchically organizing
 102processes.  A cgroup controller is usually responsible for
 103distributing a specific type of system resource along the hierarchy
 104although there are utility controllers which serve purposes other than
 105resource distribution.
 106
 107cgroups form a tree structure and every process in the system belongs
 108to one and only one cgroup.  All threads of a process belong to the
 109same cgroup.  On creation, all processes are put in the cgroup that
 110the parent process belongs to at the time.  A process can be migrated
 111to another cgroup.  Migration of a process doesn't affect already
 112existing descendant processes.
 113
 114Following certain structural constraints, controllers may be enabled or
 115disabled selectively on a cgroup.  All controller behaviors are
 116hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
 118sub-hierarchy of the cgroup.  When a controller is enabled on a nested
 119cgroup, it always restricts the resource distribution further.  The
 120restrictions set closer to the root in the hierarchy can not be
 121overridden from further away.
 122
 123
 124Basic Operations
 125================
 126
 127Mounting
 128--------
 129
Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
 131hierarchy can be mounted with the following mount command::
 132
 133  # mount -t cgroup2 none $MOUNT_POINT
 134
 135cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
 136controllers which support v2 and are not bound to a v1 hierarchy are
 137automatically bound to the v2 hierarchy and show up at the root.
 138Controllers which are not in active use in the v2 hierarchy can be
 139bound to other hierarchies.  This allows mixing v2 hierarchy with the
 140legacy v1 multiple hierarchies in a fully backward compatible way.
 141
 142A controller can be moved across hierarchies only after the controller
 143is no longer referenced in its current hierarchy.  Because per-cgroup
 144controller states are destroyed asynchronously and controllers may
 145have lingering references, a controller may not show up immediately on
 146the v2 hierarchy after the final umount of the previous hierarchy.
 147Similarly, a controller should be fully disabled to be moved out of
 148the unified hierarchy and it may take some time for the disabled
 149controller to become available for other hierarchies; furthermore, due
 150to inter-controller dependencies, other controllers may need to be
 151disabled too.
 152
 153While useful for development and manual configurations, moving
 154controllers dynamically between the v2 and other hierarchies is
 155strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
 157controllers after system boot.
 158
 159During transition to v2, system management software might still
 160automount the v1 cgroup filesystem and so hijack all controllers
 161during boot, before manual intervention is possible. To make testing
 162and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.
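
As a minimal illustration (the controller selection is arbitrary), the
parameter takes either a comma-separated list of controller names or
"all"::

  cgroup_no_v1=io,memory
  cgroup_no_v1=all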
 164
 165cgroup v2 currently supports the following mount options.
 166
 167  nsdelegate
 168
 169        Consider cgroup namespaces as delegation boundaries.  This
 170        option is system wide and can only be set on mount or modified
 171        through remount from the init namespace.  The mount option is
 172        ignored on non-init namespace mounts.  Please refer to the
 173        Delegation section for details.
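
        For example (the mount point is only a placeholder), the option
        can be set at mount time or modified later with a remount from
        the init namespace::

          # mount -t cgroup2 -o nsdelegate none $MOUNT_POINT
          # mount -o remount,nsdelegate $MOUNT_POINT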
 174
 175
 176Organizing Processes and Threads
 177--------------------------------
 178
 179Processes
 180~~~~~~~~~
 181
 182Initially, only the root cgroup exists to which all processes belong.
 183A child cgroup can be created by creating a sub-directory::
 184
 185  # mkdir $CGROUP_NAME
 186
 187A given cgroup may have multiple child cgroups forming a tree
 188structure.  Each cgroup has a read-writable interface file
 189"cgroup.procs".  When read, it lists the PIDs of all processes which
 190belong to the cgroup one-per-line.  The PIDs are not ordered and the
 191same PID may show up more than once if the process got moved to
 192another cgroup and then back or the PID got recycled while reading.
 193
 194A process can be migrated into a cgroup by writing its PID to the
 195target cgroup's "cgroup.procs" file.  Only one process can be migrated
 196on a single write(2) call.  If a process is composed of multiple
 197threads, writing the PID of any thread migrates all threads of the
 198process.
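
For example, assuming a child cgroup named "workload" already exists
and $PID identifies the process to move (both names are placeholders)::

  # echo $PID > workload/cgroup.procs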
 199
 200When a process forks a child process, the new process is born into the
 201cgroup that the forking process belongs to at the time of the
 202operation.  After exit, a process stays associated with the cgroup
 203that it belonged to at the time of exit until it's reaped; however, a
 204zombie process does not appear in "cgroup.procs" and thus can't be
 205moved to another cgroup.
 206
 207A cgroup which doesn't have any children or live processes can be
 208destroyed by removing the directory.  Note that a cgroup which doesn't
 209have any children and is associated only with zombie processes is
 210considered empty and can be removed::
 211
 212  # rmdir $CGROUP_NAME
 213
 214"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
 215cgroup is in use in the system, this file may contain multiple lines,
 216one for each hierarchy.  The entry for cgroup v2 is always in the
 217format "0::$PATH"::
 218
 219  # cat /proc/842/cgroup
 220  ...
 221  0::/test-cgroup/test-cgroup-nested
 222
 223If the process becomes a zombie and the cgroup it was associated with
 224is removed subsequently, " (deleted)" is appended to the path::
 225
 226  # cat /proc/842/cgroup
 227  ...
 228  0::/test-cgroup/test-cgroup-nested (deleted)
 229
 230
 231Threads
 232~~~~~~~
 233
 234cgroup v2 supports thread granularity for a subset of controllers to
 235support use cases requiring hierarchical resource distribution across
 236the threads of a group of processes.  By default, all threads of a
 237process belong to the same cgroup, which also serves as the resource
 238domain to host resource consumptions which are not specific to a
 239process or thread.  The thread mode allows threads to be spread across
 240a subtree while still maintaining the common resource domain for them.
 241
 242Controllers which support thread mode are called threaded controllers.
 243The ones which don't are called domain controllers.
 244
 245Marking a cgroup threaded makes it join the resource domain of its
 246parent as a threaded cgroup.  The parent may be another threaded
 247cgroup whose resource domain is further up in the hierarchy.  The root
 248of a threaded subtree, that is, the nearest ancestor which is not
 249threaded, is called threaded domain or thread root interchangeably and
 250serves as the resource domain for the entire subtree.
 251
 252Inside a threaded subtree, threads of a process can be put in
 253different cgroups and are not subject to the no internal process
 254constraint - threaded controllers can be enabled on non-leaf cgroups
 255whether they have threads in them or not.
 256
 257As the threaded domain cgroup hosts all the domain resource
 258consumptions of the subtree, it is considered to have internal
 259resource consumptions whether there are processes in it or not and
 260can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it can
 262serve both as a threaded domain and a parent to domain cgroups.
 263
 264The current operation mode or type of the cgroup is shown in the
 265"cgroup.type" file which indicates whether the cgroup is a normal
 266domain, a domain which is serving as the domain of a threaded subtree,
 267or a threaded cgroup.
 268
 269On creation, a cgroup is always a domain cgroup and can be made
 270threaded by writing "threaded" to the "cgroup.type" file.  The
 271operation is single direction::
 272
 273  # echo threaded > cgroup.type
 274
 275Once threaded, the cgroup can't be made a domain again.  To enable the
 276thread mode, the following conditions must be met.
 277
- As the cgroup will join the parent's resource domain, the parent
 279  must either be a valid (threaded) domain or a threaded cgroup.
 280
 281- When the parent is an unthreaded domain, it must not have any domain
 282  controllers enabled or populated domain children.  The root is
 283  exempt from this requirement.
 284
 285Topology-wise, a cgroup can be in an invalid state.  Please consider
 286the following topology::
 287
 288  A (threaded domain) - B (threaded) - C (domain, just created)
 289
 290C is created as a domain but isn't connected to a parent which can
 291host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain invalid" in
 293these cases.  Operations which fail due to invalid topology use
 294EOPNOTSUPP as the errno.
 295
 296A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
 298"cgroup.subtree_control" file while there are processes in the cgroup.
 299A threaded domain reverts to a normal domain when the conditions
 300clear.
 301
 302When read, "cgroup.threads" contains the list of the thread IDs of all
 303threads in the cgroup.  Except that the operations are per-thread
 304instead of per-process, "cgroup.threads" has the same format and
 305behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
 306written to in any cgroup, as it can only move threads inside the same
 307threaded domain, its operations are confined inside each threaded
 308subtree.
 309
 310The threaded domain cgroup serves as the resource domain for the whole
 311subtree, and, while the threads can be scattered across the subtree,
 312all the processes are considered to be in the threaded domain cgroup.
 313"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
 314processes in the subtree and is not readable in the subtree proper.
 315However, "cgroup.procs" can be written to from anywhere in the subtree
 316to migrate all threads of the matching process to the cgroup.
 317
 318Only threaded controllers can be enabled in a threaded subtree.  When
 319a threaded controller is enabled inside a threaded subtree, it only
 320accounts for and controls resource consumptions associated with the
 321threads in the cgroup and its descendants.  All consumptions which
 322aren't tied to a specific thread belong to the threaded domain cgroup.
 323
 324Because a threaded subtree is exempt from no internal process
 325constraint, a threaded controller must be able to handle competition
 326between threads in a non-leaf cgroup and its child cgroups.  Each
 327threaded controller defines how such competitions are handled.
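
The following is a minimal sketch of building a threaded subtree; the
cgroup names, the use of $$ and $TID, and the choice of "cpu" as an
available threaded controller are illustrative only::

  # mkdir pool
  # echo $$ > pool/cgroup.procs                # a multi-threaded process
  # mkdir pool/worker
  # echo threaded > pool/worker/cgroup.type    # pool becomes a threaded domain
  # echo "+cpu" > pool/cgroup.subtree_control  # threaded controller only
  # echo $TID > pool/worker/cgroup.threads     # move one thread into worker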
 328
 329
 330[Un]populated Notification
 331--------------------------
 332
 333Each non-root cgroup has a "cgroup.events" file which contains
 334"populated" field indicating whether the cgroup's sub-hierarchy has
 335live processes in it.  Its value is 0 if there is no live process in
 336the cgroup and its descendants; otherwise, 1.  poll and [id]notify
 337events are triggered when the value changes.  This can be used, for
 338example, to start a clean-up operation after all processes of a given
 339sub-hierarchy have exited.  The populated state updates and
 340notifications are recursive.  Consider the following sub-hierarchy
 341where the numbers in the parentheses represent the numbers of processes
 342in each cgroup::
 343
 344  A(4) - B(0) - C(1)
 345              \ D(0)
 346
A, B and C's "populated" fields would be 1 while D's would be 0.  After the one
 348process in C exits, B and C's "populated" fields would flip to "0" and
 349file modified events will be generated on the "cgroup.events" files of
 350both cgroups.
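
For example, a supervising agent can block until a sub-hierarchy empties
by waiting for a modification event on the file; the inotifywait utility
from inotify-tools is just one way to consume the notification::

  # inotifywait -e modify A/B/cgroup.events
  # cat A/B/cgroup.events
  populated 0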
 351
 352
 353Controlling Controllers
 354-----------------------
 355
 356Enabling and Disabling
 357~~~~~~~~~~~~~~~~~~~~~~
 358
 359Each cgroup has a "cgroup.controllers" file which lists all
 360controllers available for the cgroup to enable::
 361
 362  # cat cgroup.controllers
 363  cpu io memory
 364
 365No controller is enabled by default.  Controllers can be enabled and
 366disabled by writing to the "cgroup.subtree_control" file::
 367
 368  # echo "+cpu +memory -io" > cgroup.subtree_control
 369
 370Only controllers which are listed in "cgroup.controllers" can be
 371enabled.  When multiple operations are specified as above, either they
all succeed or all fail.  If multiple operations on the same controller
 373are specified, the last one is effective.
 374
 375Enabling a controller in a cgroup indicates that the distribution of
 376the target resource across its immediate children will be controlled.
 377Consider the following sub-hierarchy.  The enabled controllers are
 378listed in parentheses::
 379
 380  A(cpu,memory) - B(memory) - C()
 381                            \ D()
 382
 383As A has "cpu" and "memory" enabled, A will control the distribution
 384of CPU cycles and memory to its children, in this case, B.  As B has
 385"memory" enabled but not "CPU", C and D will compete freely on CPU
 386cycles but their division of memory available to B will be controlled.
 387
 388As a controller regulates the distribution of the target resource to
 389the cgroup's children, enabling it creates the controller's interface
 390files in the child cgroups.  In the above example, enabling "cpu" on B
 391would create the "cpu." prefixed controller interface files in C and
 392D.  Likewise, disabling "memory" from B would remove the "memory."
 393prefixed controller interface files from C and D.  This means that the
 394controller interface files - anything which doesn't start with
 395"cgroup." are owned by the parent rather than the cgroup itself.
 396
 397
 398Top-down Constraint
 399~~~~~~~~~~~~~~~~~~~
 400
 401Resources are distributed top-down and a cgroup can further distribute
 402a resource only if the resource has been distributed to it from the
 403parent.  This means that all non-root "cgroup.subtree_control" files
 404can only contain controllers which are enabled in the parent's
 405"cgroup.subtree_control" file.  A controller can be enabled only if
 406the parent has the controller enabled and a controller can't be
 407disabled if one or more children have it enabled.
 408
 409
 410No Internal Process Constraint
 411~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 412
 413Non-root cgroups can distribute domain resources to their children
 414only when they don't have any processes of their own.  In other words,
 415only domain cgroups which don't contain any processes can have domain
 416controllers enabled in their "cgroup.subtree_control" files.
 417
 418This guarantees that, when a domain controller is looking at the part
 419of the hierarchy which has it enabled, processes are always only on
 420the leaves.  This rules out situations where child cgroups compete
 421against internal processes of the parent.
 422
 423The root cgroup is exempt from this restriction.  Root contains
 424processes and anonymous resource consumption which can't be associated
 425with any other cgroups and requires special treatment from most
 426controllers.  How resource consumption in the root cgroup is governed
 427is up to each controller (for more information on this topic please
 428refer to the Non-normative information section in the Controllers
 429chapter).
 430
 431Note that the restriction doesn't get in the way if there is no
 432enabled controller in the cgroup's "cgroup.subtree_control".  This is
 433important as otherwise it wouldn't be possible to create children of a
 434populated cgroup.  To control resource distribution of a cgroup, the
 435cgroup must create children and transfer all its processes to the
 436children before enabling controllers in its "cgroup.subtree_control"
 437file.
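
A minimal sketch of that workflow, with a hypothetical PID and child
cgroup name::

  # cat cgroup.procs
  1234
  # mkdir leaf
  # echo 1234 > leaf/cgroup.procs
  # echo "+memory" > cgroup.subtree_control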
 438
 439
 440Delegation
 441----------
 442
 443Model of Delegation
 444~~~~~~~~~~~~~~~~~~~
 445
 446A cgroup can be delegated in two ways.  First, to a less privileged
 447user by granting write access of the directory and its "cgroup.procs",
 448"cgroup.threads" and "cgroup.subtree_control" files to the user.
 449Second, if the "nsdelegate" mount option is set, automatically to a
 450cgroup namespace on namespace creation.
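
For the first method, handing a sub-hierarchy to user U0 could look like
the following sketch; the mount point and user name are placeholders and
granting write access through ownership is only one possible policy::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control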
 451
 452Because the resource control interface files in a given directory
 453control the distribution of the parent's resources, the delegatee
 454shouldn't be allowed to write to them.  For the first method, this is
 455achieved by not granting access to these files.  For the second, the
 456kernel rejects writes to all files other than "cgroup.procs" and
 457"cgroup.subtree_control" on a namespace root from inside the
 458namespace.
 459
 460The end results are equivalent for both delegation types.  Once
 461delegated, the user can build sub-hierarchy under the directory,
 462organize processes inside it as it sees fit and further distribute the
 463resources it received from the parent.  The limits and other settings
 464of all resource controllers are hierarchical and regardless of what
 465happens in the delegated sub-hierarchy, nothing can escape the
 466resource restrictions imposed by the parent.
 467
 468Currently, cgroup doesn't impose any restrictions on the number of
 469cgroups in or nesting depth of a delegated sub-hierarchy; however,
 470this may be limited explicitly in the future.
 471
 472
 473Delegation Containment
 474~~~~~~~~~~~~~~~~~~~~~~
 475
 476A delegated sub-hierarchy is contained in the sense that processes
 477can't be moved into or out of the sub-hierarchy by the delegatee.
 478
 479For delegations to a less privileged user, this is achieved by
 480requiring the following conditions for a process with a non-root euid
 481to migrate a target process into a cgroup by writing its PID to the
 482"cgroup.procs" file.
 483
 484- The writer must have write access to the "cgroup.procs" file.
 485
 486- The writer must have write access to the "cgroup.procs" file of the
 487  common ancestor of the source and destination cgroups.
 488
 489The above two constraints ensure that while a delegatee may migrate
 490processes around freely in the delegated sub-hierarchy it can't pull
 491in from or push out to outside the sub-hierarchy.
 492
 493For an example, let's assume cgroups C0 and C1 have been delegated to
 494user U0 who created C00, C01 under C0 and C10 under C1 as follows and
 495all processes under C0 and C1 belong to U0::
 496
 497  ~~~~~~~~~~~~~ - C0 - C00
 498  ~ cgroup    ~      \ C01
 499  ~ hierarchy ~
 500  ~~~~~~~~~~~~~ - C1 - C10
 501
 502Let's also say U0 wants to write the PID of a process which is
 503currently in C10 into "C00/cgroup.procs".  U0 has write access to the
 504file; however, the common ancestor of the source cgroup C10 and the
 505destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
 507will be denied with -EACCES.
 508
 509For delegations to namespaces, containment is achieved by requiring
 510that both the source and destination cgroups are reachable from the
 511namespace of the process which is attempting the migration.  If either
 512is not reachable, the migration is rejected with -ENOENT.
 513
 514
 515Guidelines
 516----------
 517
 518Organize Once and Control
 519~~~~~~~~~~~~~~~~~~~~~~~~~
 520
 521Migrating a process across cgroups is a relatively expensive operation
 522and stateful resources such as memory are not moved together with the
 523process.  This is an explicit design decision as there often exist
 524inherent trade-offs between migration and various hot paths in terms
 525of synchronization cost.
 526
 527As such, migrating processes across cgroups frequently as a means to
 528apply different resource restrictions is discouraged.  A workload
 529should be assigned to a cgroup according to the system's logical and
 530resource structure once on start-up.  Dynamic adjustments to resource
 531distribution can be made by changing controller configuration through
 532the interface files.
 533
 534
 535Avoid Name Collisions
 536~~~~~~~~~~~~~~~~~~~~~
 537
Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
 540with interface files.
 541
 542All cgroup core interface files are prefixed with "cgroup." and each
 543controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and
 545'_'s but never begins with an '_' so it can be used as the prefix
 546character for collision avoidance.  Also, interface file names won't
 547start or end with terms which are often used in categorizing workloads
 548such as job, service, slice, unit or workload.
 549
 550cgroup doesn't do anything to prevent name collisions and it's the
 551user's responsibility to avoid them.
 552
 553
 554Resource Distribution Models
 555============================
 556
 557cgroup controllers implement several resource distribution schemes
 558depending on the resource type and expected use cases.  This section
 559describes major schemes in use along with their expected behaviors.
 560
 561
 562Weights
 563-------
 564
 565A parent's resource is distributed by adding up the weights of all
 566active children and giving each the fraction matching the ratio of its
 567weight against the sum.  As only children which can make use of the
 568resource at the moment participate in the distribution, this is
 569work-conserving.  Due to the dynamic nature, this model is usually
 570used for stateless resources.
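
For example, if two children are actively competing with weights of 100
and 300, the split follows directly from the ratio of each weight to the
sum::

  A (weight 100)  ->  100 / (100 + 300) = 25% of the resource
  B (weight 300)  ->  300 / (100 + 300) = 75% of the resource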
 571
 572All weights are in the range [1, 10000] with the default at 100.  This
 573allows symmetric multiplicative biases in both directions at fine
 574enough granularity while staying in the intuitive range.
 575
 576As long as the weight is in range, all configuration combinations are
 577valid and there is no reason to reject configuration changes or
 578process migrations.
 579
 580"cpu.weight" proportionally distributes CPU cycles to active children
 581and is an example of this type.
 582
 583
 584Limits
 585------
 586
A child can only consume up to the configured amount of the resource.
 588Limits can be over-committed - the sum of the limits of children can
 589exceed the amount of resource available to the parent.
 590
Limits are in the range [0, max] and default to "max", which is a noop.
 592
 593As limits can be over-committed, all configuration combinations are
 594valid and there is no reason to reject configuration changes or
 595process migrations.
 596
 597"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
 598on an IO device and is an example of this type.
 599
 600
 601Protections
 602-----------
 603
A cgroup is protected up to the configured amount of the resource
if the usages of all its ancestors are under their protected
levels.  Protections can be hard guarantees or best effort soft
boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.
 610
Protections are in the range [0, max] and default to 0, which is a
noop.
 613
 614As protections can be over-committed, all configuration combinations
 615are valid and there is no reason to reject configuration changes or
 616process migrations.
 617
 618"memory.low" implements best-effort memory protection and is an
 619example of this type.
 620
 621
 622Allocations
 623-----------
 624
 625A cgroup is exclusively allocated a certain amount of a finite
 626resource.  Allocations can't be over-committed - the sum of the
 627allocations of children can not exceed the amount of resource
 628available to the parent.
 629
Allocations are in the range [0, max] and default to 0, which means no
 631resource.
 632
 633As allocations can't be over-committed, some configuration
 634combinations are invalid and should be rejected.  Also, if the
 635resource is mandatory for execution of processes, process migrations
 636may be rejected.
 637
 638"cpu.rt.max" hard-allocates realtime slices and is an example of this
 639type.
 640
 641
 642Interface Files
 643===============
 644
 645Format
 646------
 647
 648All interface files should be in one of the following formats whenever
 649possible::
 650
 651  New-line separated values
 652  (when only one value can be written at once)
 653
 654        VAL0\n
 655        VAL1\n
 656        ...
 657
 658  Space separated values
 659  (when read-only or multiple values can be written at once)
 660
 661        VAL0 VAL1 ...\n
 662
 663  Flat keyed
 664
 665        KEY0 VAL0\n
 666        KEY1 VAL1\n
 667        ...
 668
 669  Nested keyed
 670
 671        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
 672        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
 673        ...
 674
 675For a writable file, the format for writing should generally match
 676reading; however, controllers may allow omitting later fields or
 677implement restricted shortcuts for most common use cases.
 678
 679For both flat and nested keyed files, only the values for a single key
 680can be written at a time.  For nested keyed files, the sub key pairs
 681may be specified in any order and not all pairs have to be specified.
 682
 683
 684Conventions
 685-----------
 686
 687- Settings for a single feature should be contained in a single file.
 688
 689- The root cgroup should be exempt from resource control and thus
 690  shouldn't have resource control interface files.  Also,
 691  informational files on the root cgroup which end up showing global
 692  information available elsewhere shouldn't exist.
 693
 694- If a controller implements weight based resource distribution, its
 695  interface file should be named "weight" and have the range [1,
 696  10000] with 100 as the default.  The values are chosen to allow
 697  enough and symmetric bias in both directions while keeping it
 698  intuitive (the default is 100%).
 699
 700- If a controller implements an absolute resource guarantee and/or
 701  limit, the interface files should be named "min" and "max"
 702  respectively.  If a controller implements best effort resource
 703  guarantee and/or limit, the interface files should be named "low"
 704  and "high" respectively.
 705
 706  In the above four control files, the special token "max" should be
 707  used to represent upward infinity for both reading and writing.
 708
 709- If a setting has a configurable default value and keyed specific
 710  overrides, the default entry should be keyed with "default" and
 711  appear as the first entry in the file.
 712
 713  The default value can be updated by writing either "default $VAL" or
 714  "$VAL".
 715
 716  When writing to update a specific override, "default" can be used as
 717  the value to indicate removal of the override.  Override entries
 718  with "default" as the value must not appear when read.
 719
 720  For example, a setting which is keyed by major:minor device numbers
 721  with integer values may look like the following::
 722
 723    # cat cgroup-example-interface-file
 724    default 150
 725    8:0 300
 726
 727  The default value can be updated by::
 728
 729    # echo 125 > cgroup-example-interface-file
 730
 731  or::
 732
 733    # echo "default 125" > cgroup-example-interface-file
 734
 735  An override can be set by::
 736
 737    # echo "8:16 170" > cgroup-example-interface-file
 738
 739  and cleared by::
 740
 741    # echo "8:0 default" > cgroup-example-interface-file
 742    # cat cgroup-example-interface-file
 743    default 125
 744    8:16 170
 745
 746- For events which are not very high frequency, an interface file
 747  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
 749  generated on the file.
 750
 751
 752Core Interface Files
 753--------------------
 754
 755All cgroup core files are prefixed with "cgroup."
 756
 757  cgroup.type
 758
 759        A read-write single value file which exists on non-root
 760        cgroups.
 761
 762        When read, it indicates the current type of the cgroup, which
 763        can be one of the following values.
 764
 765        - "domain" : A normal valid domain cgroup.
 766
 767        - "domain threaded" : A threaded domain cgroup which is
 768          serving as the root of a threaded subtree.
 769
 770        - "domain invalid" : A cgroup which is in an invalid state.
 771          It can't be populated or have controllers enabled.  It may
 772          be allowed to become a threaded cgroup.
 773
 774        - "threaded" : A threaded cgroup which is a member of a
 775          threaded subtree.
 776
 777        A cgroup can be turned into a threaded cgroup by writing
 778        "threaded" to this file.
 779
 780  cgroup.procs
 781        A read-write new-line separated values file which exists on
 782        all cgroups.
 783
 784        When read, it lists the PIDs of all processes which belong to
 785        the cgroup one-per-line.  The PIDs are not ordered and the
 786        same PID may show up more than once if the process got moved
 787        to another cgroup and then back or the PID got recycled while
 788        reading.
 789
 790        A PID can be written to migrate the process associated with
 791        the PID to the cgroup.  The writer should match all of the
 792        following conditions.
 793
 794        - It must have write access to the "cgroup.procs" file.
 795
 796        - It must have write access to the "cgroup.procs" file of the
 797          common ancestor of the source and destination cgroups.
 798
 799        When delegating a sub-hierarchy, write access to this file
 800        should be granted along with the containing directory.
 801
 802        In a threaded cgroup, reading this file fails with EOPNOTSUPP
 803        as all the processes belong to the thread root.  Writing is
 804        supported and moves every thread of the process to the cgroup.
 805
 806  cgroup.threads
 807        A read-write new-line separated values file which exists on
 808        all cgroups.
 809
 810        When read, it lists the TIDs of all threads which belong to
 811        the cgroup one-per-line.  The TIDs are not ordered and the
 812        same TID may show up more than once if the thread got moved to
 813        another cgroup and then back or the TID got recycled while
 814        reading.
 815
 816        A TID can be written to migrate the thread associated with the
 817        TID to the cgroup.  The writer should match all of the
 818        following conditions.
 819
 820        - It must have write access to the "cgroup.threads" file.
 821
 822        - The cgroup that the thread is currently in must be in the
 823          same resource domain as the destination cgroup.
 824
 825        - It must have write access to the "cgroup.procs" file of the
 826          common ancestor of the source and destination cgroups.
 827
 828        When delegating a sub-hierarchy, write access to this file
 829        should be granted along with the containing directory.
 830
 831  cgroup.controllers
 832        A read-only space separated values file which exists on all
 833        cgroups.
 834
 835        It shows space separated list of all controllers available to
 836        the cgroup.  The controllers are not ordered.
 837
 838  cgroup.subtree_control
 839        A read-write space separated values file which exists on all
 840        cgroups.  Starts out empty.
 841
 842        When read, it shows space separated list of the controllers
 843        which are enabled to control resource distribution from the
 844        cgroup to its children.
 845
 846        Space separated list of controllers prefixed with '+' or '-'
 847        can be written to enable or disable controllers.  A controller
 848        name prefixed with '+' enables the controller and '-'
 849        disables.  If a controller appears more than once on the list,
 850        the last one is effective.  When multiple enable and disable
 851        operations are specified, either all succeed or all fail.
 852
 853  cgroup.events
 854        A read-only flat-keyed file which exists on non-root cgroups.
 855        The following entries are defined.  Unless specified
 856        otherwise, a value change in this file generates a file
 857        modified event.
 858
 859          populated
                1 if the cgroup or its descendants contain any live
 861                processes; otherwise, 0.
 862
 863  cgroup.max.descendants
        A read-write single value file.  The default is "max".

        Maximum allowed number of descendant cgroups.
 867        If the actual number of descendants is equal or larger,
 868        an attempt to create a new cgroup in the hierarchy will fail.
 869
 870  cgroup.max.depth
        A read-write single value file.  The default is "max".
 872
 873        Maximum allowed descent depth below the current cgroup.
 874        If the actual descent depth is equal or larger,
 875        an attempt to create a new child cgroup will fail.
 876
 877  cgroup.stat
 878        A read-only flat-keyed file with the following entries:
 879
 880          nr_descendants
 881                Total number of visible descendant cgroups.
 882
 883          nr_dying_descendants
 884                Total number of dying descendant cgroups. A cgroup becomes
 885                dying after being deleted by a user. The cgroup will remain
                in dying state for some undefined time (which can depend
 887                on system load) before being completely destroyed.
 888
                A process can't enter a dying cgroup under any circumstances,
                and a dying cgroup can't be revived.
 891
                A dying cgroup can consume system resources not exceeding
                the limits which were active at the moment of cgroup deletion.
 894
 895
 896Controllers
 897===========
 898
 899CPU
 900---
 901
 902The "cpu" controllers regulates distribution of CPU cycles.  This
 903controller implements weight and absolute bandwidth limit models for
 904normal scheduling policy and absolute bandwidth allocation model for
 905realtime scheduling policy.
 906
 907WARNING: cgroup2 doesn't yet support control of realtime processes and
 908the cpu controller can only be enabled when all RT processes are in
 909the root cgroup.  Be aware that system management software may already
 910have placed RT processes into nonroot cgroups during the system boot
 911process, and these processes may need to be moved to the root cgroup
 912before the cpu controller can be enabled.
 913
 914
 915CPU Interface Files
 916~~~~~~~~~~~~~~~~~~~
 917
 918All time durations are in microseconds.
 919
 920  cpu.stat
 921        A read-only flat-keyed file which exists on non-root cgroups.
 922        This file exists whether the controller is enabled or not.
 923
 924        It always reports the following three stats:
 925
 926        - usage_usec
 927        - user_usec
 928        - system_usec
 929
 930        and the following three when the controller is enabled:
 931
 932        - nr_periods
 933        - nr_throttled
 934        - throttled_usec
 935
 936  cpu.weight
 937        A read-write single value file which exists on non-root
 938        cgroups.  The default is "100".
 939
 940        The weight in the range [1, 10000].
 941
 942  cpu.weight.nice
 943        A read-write single value file which exists on non-root
 944        cgroups.  The default is "0".
 945
 946        The nice value is in the range [-20, 19].
 947
 948        This interface file is an alternative interface for
 949        "cpu.weight" and allows reading and setting weight using the
 950        same values used by nice(2).  Because the range is smaller and
 951        granularity is coarser for the nice values, the read value is
 952        the closest approximation of the current weight.
 953
 954  cpu.max
 955        A read-write two value file which exists on non-root cgroups.
 956        The default is "max 100000".
 957
 958        The maximum bandwidth limit.  It's in the following format::
 959
 960          $MAX $PERIOD
 961
        which indicates that the group may consume up to $MAX in each
 963        $PERIOD duration.  "max" for $MAX indicates no limit.  If only
 964        one number is written, $MAX is updated.
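
        For example, the following first caps the cgroup at 20ms of CPU
        time per 100ms period and then raises only $MAX while keeping
        the period (the values are illustrative)::

          # echo "20000 100000" > cpu.max
          # echo 50000 > cpu.max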
 965
 966
 967Memory
 968------
 969
 970The "memory" controller regulates distribution of memory.  Memory is
 971stateful and implements both limit and protection models.  Due to the
 972intertwining between memory usage and reclaim pressure and the
 973stateful nature of memory, the distribution model is relatively
 974complex.
 975
 976While not completely water-tight, all major memory usages by a given
 977cgroup are tracked so that the total memory consumption can be
 978accounted and controlled to a reasonable extent.  Currently, the
 979following types of memory usages are tracked.
 980
 981- Userland memory - page cache and anonymous memory.
 982
 983- Kernel data structures such as dentries and inodes.
 984
 985- TCP socket buffers.
 986
 987The above list may expand in the future for better coverage.
 988
 989
 990Memory Interface Files
 991~~~~~~~~~~~~~~~~~~~~~~
 992
 993All memory amounts are in bytes.  If a value which is not aligned to
 994PAGE_SIZE is written, the value may be rounded up to the closest
 995PAGE_SIZE multiple when read back.
 996
 997  memory.current
 998        A read-only single value file which exists on non-root
 999        cgroups.
1000
1001        The total amount of memory currently being used by the cgroup
1002        and its descendants.
1003
1004  memory.low
1005        A read-write single value file which exists on non-root
1006        cgroups.  The default is "0".
1007
1008        Best-effort memory protection.  If the memory usages of a
1009        cgroup and all its ancestors are below their low boundaries,
1010        the cgroup's memory won't be reclaimed unless memory can be
1011        reclaimed from unprotected cgroups.
1012
1013        Putting more memory than generally available under this
1014        protection is discouraged.
1015
1016  memory.high
1017        A read-write single value file which exists on non-root
1018        cgroups.  The default is "max".
1019
1020        Memory usage throttle limit.  This is the main mechanism to
1021        control memory usage of a cgroup.  If a cgroup's usage goes
1022        over the high boundary, the processes of the cgroup are
1023        throttled and put under heavy reclaim pressure.
1024
1025        Going over the high limit never invokes the OOM killer and
1026        under extreme conditions the limit may be breached.
1027
1028  memory.max
1029        A read-write single value file which exists on non-root
1030        cgroups.  The default is "max".
1031
1032        Memory usage hard limit.  This is the final protection
1033        mechanism.  If a cgroup's memory usage reaches this limit and
1034        can't be reduced, the OOM killer is invoked in the cgroup.
1035        Under certain circumstances, the usage may go over the limit
1036        temporarily.
1037
1038        This is the ultimate protection mechanism.  As long as the
1039        high limit is used and monitored properly, this limit's
1040        utility is limited to providing the final safety net.
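
        As an illustrative sketch (byte values are arbitrary), a cgroup
        can be throttled at roughly 1 GiB while the hard limit and the
        OOM killer are kept slightly above it as the safety net::

          # echo 1073741824 > memory.high
          # echo 1342177280 > memory.max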
1041
1042  memory.events
1043        A read-only flat-keyed file which exists on non-root cgroups.
1044        The following entries are defined.  Unless specified
1045        otherwise, a value change in this file generates a file
1046        modified event.
1047
1048          low
1049                The number of times the cgroup is reclaimed due to
1050                high memory pressure even though its usage is under
1051                the low boundary.  This usually indicates that the low
1052                boundary is over-committed.
1053
1054          high
1055                The number of times processes of the cgroup are
1056                throttled and routed to perform direct memory reclaim
1057                because the high memory boundary was exceeded.  For a
1058                cgroup whose memory usage is capped by the high limit
1059                rather than global memory pressure, this event's
1060                occurrences are expected.
1061
1062          max
1063                The number of times the cgroup's memory usage was
1064                about to go over the max boundary.  If direct reclaim
1065                fails to bring it down, the cgroup goes to OOM state.
1066
1067          oom
                The number of times the cgroup's memory usage reached
                the limit and an allocation was about to fail.

                Depending on the context, the result could be an
                invocation of the OOM killer followed by a retried
                allocation, or a failed allocation.

                A failed allocation, in turn, could be returned to
                userspace as -ENOMEM or silently ignored in cases like
                disk readahead.  For now, OOM in a memory cgroup kills
                tasks only if the shortage has happened inside a page
                fault.
1078
1079          oom_kill
1080                The number of processes belonging to this cgroup
1081                killed by any kind of OOM killer.
1082
1083  memory.stat
1084        A read-only flat-keyed file which exists on non-root cgroups.
1085
1086        This breaks down the cgroup's memory footprint into different
1087        types of memory, type-specific details, and other information
1088        on the state and past events of the memory management system.
1089
1090        All memory amounts are in bytes.
1091
1092        The entries are ordered to be human readable, and new entries
1093        can show up in the middle. Don't rely on items remaining in a
1094        fixed position; use the keys to look up specific values!
1095
1096          anon
1097                Amount of memory used in anonymous mappings such as
1098                brk(), sbrk(), and mmap(MAP_ANONYMOUS)
1099
1100          file
1101                Amount of memory used to cache filesystem data,
1102                including tmpfs and shared memory.
1103
1104          kernel_stack
1105                Amount of memory allocated to kernel stacks.
1106
1107          slab
1108                Amount of memory used for storing in-kernel data
1109                structures.
1110
1111          sock
1112                Amount of memory used in network transmission buffers
1113
1114          shmem
1115                Amount of cached filesystem data that is swap-backed,
1116                such as tmpfs, shm segments, shared anonymous mmap()s
1117
1118          file_mapped
1119                Amount of cached filesystem data mapped with mmap()
1120
1121          file_dirty
1122                Amount of cached filesystem data that was modified but
1123                not yet written back to disk
1124
1125          file_writeback
1126                Amount of cached filesystem data that was modified and
1127                is currently being written back to disk
1128
1129          inactive_anon, active_anon, inactive_file, active_file, unevictable
1130                Amount of memory, swap-backed and filesystem-backed,
1131                on the internal memory management lists used by the
1132                page reclaim algorithm
1133
1134          slab_reclaimable
1135                Part of "slab" that might be reclaimed, such as
1136                dentries and inodes.
1137
1138          slab_unreclaimable
1139                Part of "slab" that cannot be reclaimed on memory
1140                pressure.
1141
1142          pgfault
1143                Total number of page faults incurred
1144
1145          pgmajfault
1146                Number of major page faults incurred
1147
          workingset_refault
                Number of refaults of previously evicted pages

          workingset_activate
                Number of refaulted pages that were immediately activated

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed

          pgrefill
                Amount of scanned pages (in an active LRU list)

          pgscan
                Amount of scanned pages (in an inactive LRU list)

          pgsteal
                Amount of reclaimed pages

          pgactivate
                Amount of pages moved to the active LRU list

          pgdeactivate
                Amount of pages moved to the inactive LRU list

          pglazyfree
                Amount of pages postponed to be freed under memory pressure

          pglazyfreed
                Amount of reclaimed lazyfree pages
1187
1188  memory.swap.current
1189        A read-only single value file which exists on non-root
1190        cgroups.
1191
1192        The total amount of swap currently being used by the cgroup
1193        and its descendants.
1194
1195  memory.swap.max
1196        A read-write single value file which exists on non-root
1197        cgroups.  The default is "max".
1198
1199        Swap usage hard limit.  If a cgroup's swap usage reaches this
1200        limit, anonymous memory of the cgroup will not be swapped out.
1201
1202
1203Usage Guidelines
1204~~~~~~~~~~~~~~~~
1205
1206"memory.high" is the main mechanism to control memory usage.
1207Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
1209usage is a viable strategy.
1210
1211Because breach of the high limit doesn't trigger the OOM killer but
1212throttles the offending cgroup, a management agent has ample
1213opportunities to monitor and take appropriate actions such as granting
1214more memory or terminating the workload.
1215
1216Determining whether a cgroup has enough memory is not trivial as
1217memory usage doesn't indicate whether the workload can benefit from
1218more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as well with a small amount of memory.  A measure of memory
1221pressure - how much the workload is being impacted due to lack of
1222memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
1224implemented yet.
1225
1226
1227Memory Ownership
1228~~~~~~~~~~~~~~~~
1229
1230A memory area is charged to the cgroup which instantiated it and stays
1231charged to the cgroup until the area is released.  Migrating a process
1232to a different cgroup doesn't move the memory usages that it
1233instantiated while in the previous cgroup to the new cgroup.
1234
1235A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is not deterministic; however,
1237over time, the memory area is likely to end up in a cgroup which has
1238enough memory allowance to avoid high reclaim pressure.
1239
1240If a cgroup sweeps a considerable amount of memory which is expected
1241to be accessed repeatedly by other cgroups, it may make sense to use
1242POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1243belonging to the affected files to ensure correct memory ownership.
1244
1245
1246IO
1247--
1248
1249The "io" controller regulates the distribution of IO resources.  This
1250controller implements both weight based and absolute bandwidth or IOPS
1251limit distribution; however, weight based distribution is available
1252only if cfq-iosched is in use and neither scheme is available for
1253blk-mq devices.
1254
1255
1256IO Interface Files
1257~~~~~~~~~~~~~~~~~~
1258
1259  io.stat
1260        A read-only nested-keyed file which exists on non-root
1261        cgroups.
1262
1263        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1264        The following nested keys are defined.
1265
1266          ======        ===================
1267          rbytes        Bytes read
1268          wbytes        Bytes written
1269          rios          Number of read IOs
1270          wios          Number of write IOs
1271          ======        ===================
1272
        An example read output follows::
1274
1275          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353
1276          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252
1277
1278  io.weight
1279        A read-write flat-keyed file which exists on non-root cgroups.
1280        The default is "default 100".
1281
1282        The first line is the default weight applied to devices
1283        without specific override.  The rest are overrides keyed by
1284        $MAJ:$MIN device numbers and not ordered.  The weights are in
        the range [1, 10000] and specify the relative amount of IO time
1286        the cgroup can use in relation to its siblings.
1287
1288        The default weight can be updated by writing either "default
1289        $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1290        "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1291
1292        An example read output follows::
1293
1294          default 100
1295          8:16 200
1296          8:0 50
1297
1298  io.max
1299        A read-write nested-keyed file which exists on non-root
1300        cgroups.
1301
1302        BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1303        device numbers and not ordered.  The following nested keys are
1304        defined.
1305
1306          =====         ==================================
1307          rbps          Max read bytes per second
1308          wbps          Max write bytes per second
1309          riops         Max read IO operations per second
1310          wiops         Max write IO operations per second
1311          =====         ==================================
1312
1313        When writing, any number of nested key-value pairs can be
1314        specified in any order.  "max" can be specified as the value
1315        to remove a specific limit.  If the same key is specified
1316        multiple times, the outcome is undefined.
1317
1318        BPS and IOPS are measured in each IO direction and IOs are
1319        delayed if limit is reached.  Temporary bursts are allowed.
1320
1321        Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1322
1323          echo "8:16 rbps=2097152 wiops=120" > io.max
1324
1325        Reading returns the following::
1326
1327          8:16 rbps=2097152 wbps=max riops=max wiops=120
1328
1329        Write IOPS limit can be removed by writing the following::
1330
1331          echo "8:16 wiops=max" > io.max
1332
1333        Reading now returns the following::
1334
1335          8:16 rbps=2097152 wbps=max riops=max wiops=max
1336
1337
1338Writeback
1339~~~~~~~~~
1340
1341Page cache is dirtied through buffered writes and shared mmaps and
1342written asynchronously to the backing filesystem by the writeback
1343mechanism.  Writeback sits between the memory and IO domains and
1344regulates the proportion of dirty memory by balancing dirtying and
1345write IOs.
1346
1347The io controller, in conjunction with the memory controller,
1348implements control of page cache writeback IOs.  The memory controller
1349defines the memory domain that dirty memory ratio is calculated and
1350maintained for and the io controller defines the io domain which
1351writes out dirty pages for the memory domain.  Both system-wide and
1352per-cgroup dirty memory states are examined and the more restrictive
1353of the two is enforced.
1354
1355cgroup writeback requires explicit support from the underlying
1356filesystem.  Currently, cgroup writeback is implemented on ext2, ext4
1357and btrfs.  On other filesystems, all writeback IOs are attributed to
1358the root cgroup.
1359
1360There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
1362page while writeback per inode.  For the purpose of writeback, an
1363inode is assigned to a cgroup and all IO requests to write dirty pages
1364from the inode are attributed to that cgroup.
1365
1366As cgroup ownership for memory is tracked per page, there can be pages
1367which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
mechanism constantly keeps track of foreign pages and, if a particular
foreign cgroup becomes the majority over a certain period of time,
switches the ownership of the inode to that cgroup.
1372
1373While this model is enough for most use cases where a given inode is
1374mostly dirtied by a single cgroup even when the main writing cgroup
1375changes over time, use cases where multiple cgroups write to a single
1376inode simultaneously are not supported well.  In such circumstances, a
1377significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
1379doesn't update it until the page is released, even if writeback
1380strictly follows page ownership, multiple cgroups dirtying overlapping
1381areas wouldn't work as expected.  It's recommended to avoid such usage
1382patterns.
1383
1384The sysctl knobs which affect writeback behavior are applied to cgroup
1385writeback as follows.
1386
1387  vm.dirty_background_ratio, vm.dirty_ratio
1388        These ratios apply the same to cgroup writeback with the
1389        amount of available memory capped by limits imposed by the
1390        memory controller and system-wide clean memory.
1391
1392  vm.dirty_background_bytes, vm.dirty_bytes
        For cgroup writeback, these are calculated as a ratio of the
        total available memory and applied the same way as
        vm.dirty[_background]_ratio.
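
For example (a purely illustrative configuration), on a system with
64GiB of available memory, setting::

  # sysctl -w vm.dirty_bytes=$((4 * 1024 * 1024 * 1024))

behaves for cgroup writeback roughly like a vm.dirty_ratio of 6.25%
(4GiB out of 64GiB), applied against each cgroup's own available
memory.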
1396
1397
1398PID
1399---
1400
1401The process number controller is used to allow a cgroup to stop any
1402new tasks from being fork()'d or clone()'d after a specified limit is
1403reached.
1404
1405The number of tasks in a cgroup can be exhausted in ways which other
1406controllers cannot prevent, thus warranting its own controller.  For
1407example, a fork bomb is likely to exhaust the number of tasks before
1408hitting memory restrictions.
1409
Note that the PIDs referred to by this controller are kernel task IDs
(TIDs), so every thread of a process counts towards the limit.
1412
1413
1414PID Interface Files
1415~~~~~~~~~~~~~~~~~~~
1416
1417  pids.max
1418        A read-write single value file which exists on non-root
1419        cgroups.  The default is "max".
1420
1421        Hard limit of number of processes.
1422
1423  pids.current
1424        A read-only single value file which exists on all cgroups.
1425
1426        The number of processes currently in the cgroup and its
1427        descendants.
1428
Organizational operations are not blocked by cgroup policies, so it is
1430possible to have pids.current > pids.max.  This can be done by either
1431setting the limit to be smaller than pids.current, or attaching enough
1432processes to the cgroup such that pids.current is larger than
1433pids.max.  However, it is not possible to violate a cgroup PID policy
1434through fork() or clone(). These will return -EAGAIN if the creation
1435of a new process would cause a cgroup policy to be violated.
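
For example, the following caps a cgroup at two tasks; with the shell
as the only process already in the cgroup, the second background job
cannot be forked (a minimal sketch - the cgroup layout and names are
illustrative)::

  # echo "+pids" > /sys/fs/cgroup/cgroup.subtree_control
  # mkdir /sys/fs/cgroup/jobs
  # echo 2 > /sys/fs/cgroup/jobs/pids.max
  # echo $$ > /sys/fs/cgroup/jobs/cgroup.procs
  # sleep 1000 &        # shell + sleep: pids.current is now 2
  # sleep 1000 &        # a third task - this fork() fails with -EAGAIN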
1436
1437
1438Device controller
1439-----------------
1440
The device controller manages access to device files.  It covers both
the creation of new device files (using mknod) and access to existing
device files.
1444
The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of the BPF_CGROUP_DEVICE type and
attach them to cgroups.  On an attempt to access a device file, the
corresponding BPF programs are executed, and depending on the return
value the attempt succeeds or fails with -EPERM.
1451
1452A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
1453structure, which describes the device access attempt: access type
1454(mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM; otherwise
it succeeds.
1457
An example BPF_CGROUP_DEVICE program can be found in the kernel
source tree at tools/testing/selftests/bpf/dev_cgroup.c.
1460
1461
1462RDMA
1463----
1464
1465The "rdma" controller regulates the distribution and accounting of
1466of RDMA resources.
1467
1468RDMA Interface Files
1469~~~~~~~~~~~~~~~~~~~~
1470
1471  rdma.max
        A read-write nested-keyed file that exists for all cgroups
        except the root.  It describes the currently configured
        resource limits for RDMA/IB devices.

        Lines are keyed by device name and are not ordered.  Each
        line contains space-separated resource names and their
        configured limits that can be distributed.
1479
1480        The following nested keys are defined.
1481
1482          ==========    =============================
1483          hca_handle    Maximum number of HCA Handles
1484          hca_object    Maximum number of HCA Objects
1485          ==========    =============================
1486
1487        An example for mlx4 and ocrdma device follows::
1488
1489          mlx4_0 hca_handle=2 hca_object=2000
1490          ocrdma1 hca_handle=3 hca_object=max
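
        Limits can be configured by writing nested key-value pairs in
        the same format, for example (illustrative values)::

          echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max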
1491
1492  rdma.current
        A read-only file that describes current resource usage.
        It exists for all cgroups except the root.
1495
1496        An example for mlx4 and ocrdma device follows::
1497
1498          mlx4_0 hca_handle=1 hca_object=20
1499          ocrdma1 hca_handle=1 hca_object=23
1500
1501
1502Misc
1503----
1504
1505perf_event
1506~~~~~~~~~~
1507
The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.
1512
1513
1514Non-normative information
1515-------------------------
1516
1517This section contains information that isn't considered to be a part of
1518the stable kernel API and so is subject to change.
1519
1520
1521CPU controller root cgroup process behaviour
1522~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1523
When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  The weight of such a child cgroup depends on the
nice level of its thread.
1528
For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so that the neutral - nice 0 - value is 100 instead of
1024).
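
For example, the entries for nice -20 and nice 19 in that array are
88761 and 15 respectively, so the weights of the implicit child
cgroups for threads at those nice levels work out to roughly (plain
integer arithmetic, shown with shell for illustration)::

  # echo $((88761 * 100 / 1024))
  8668
  # echo $((15 * 100 / 1024))
  1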
1532
1533
1534IO controller root cgroup process behaviour
1535~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1536
1537Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources, this implicit child node is taken
into account as if it were a normal child cgroup of the root cgroup
with a weight value of 200.
1541
1542
1543Namespace
1544=========
1545
1546Basics
1547------
1548
1549cgroup namespace provides a mechanism to virtualize the view of the
1550"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
1551flag can be used with clone(2) and unshare(2) to create a new cgroup
1552namespace.  The process running inside the cgroup namespace will have
1553its "/proc/$PID/cgroup" output restricted to cgroupns root.  The
1554cgroupns root is the cgroup of the process at the time of creation of
1555the cgroup namespace.
1556
1557Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
1558complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces is intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system-level information
to the isolated processes.  For example::
1562
1563  # cat /proc/self/cgroup
1564  0::/batchjobs/container_id1
1565
The path '/batchjobs/container_id1' can be considered system data
that is undesirable to expose to the isolated processes.  A cgroup
namespace can be used to restrict the visibility of this path.  For
example, before creating a cgroup namespace, one would see::
1570
1571  # ls -l /proc/self/ns/cgroup
1572  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
1573  # cat /proc/self/cgroup
1574  0::/batchjobs/container_id1
1575
1576After unsharing a new namespace, the view changes::
1577
1578  # ls -l /proc/self/ns/cgroup
1579  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
1580  # cat /proc/self/cgroup
1581  0::/
1582
1583When some thread from a multi-threaded process unshares its cgroup
1584namespace, the new cgroupns gets applied to the entire process (all
1585the threads).  This is natural for the v2 hierarchy; however, for the
1586legacy hierarchies, this may be unexpected.
1587
1588A cgroup namespace is alive as long as there are processes inside or
1589mounts pinning it.  When the last usage goes away, the cgroup
1590namespace is destroyed.  The cgroupns root and the actual cgroups
1591remain.
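
As with other namespace types, a bind mount of the namespace file is
enough to pin a cgroup namespace without keeping a process inside it,
for example (the paths are illustrative)::

  # touch /tmp/cgroup-ns
  # mount --bind /proc/$PID/ns/cgroup /tmp/cgroup-ns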
1592
1593
1594The Root and Views
1595------------------
1596
1597The 'cgroupns root' for a cgroup namespace is the cgroup in which the
1598process calling unshare(2) is running.  For example, if a process in
1599/batchjobs/container_id1 cgroup calls unshare, cgroup
1600/batchjobs/container_id1 becomes the cgroupns root.  For the
1601init_cgroup_ns, this is the real root ('/') cgroup.
1602
1603The cgroupns root cgroup does not change even if the namespace creator
1604process later moves to a different cgroup::
1605
1606  # ~/unshare -c # unshare cgroupns in some cgroup
1607  # cat /proc/self/cgroup
1608  0::/
1609  # mkdir sub_cgrp_1
1610  # echo 0 > sub_cgrp_1/cgroup.procs
1611  # cat /proc/self/cgroup
1612  0::/sub_cgrp_1
1613
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
1615
1616Processes running inside the cgroup namespace will be able to see
1617cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
1618From within an unshared cgroupns::
1619
1620  # sleep 100000 &
1621  [1] 7353
1622  # echo 7353 > sub_cgrp_1/cgroup.procs
1623  # cat /proc/7353/cgroup
1624  0::/sub_cgrp_1
1625
1626From the initial cgroup namespace, the real cgroup path will be
1627visible::
1628
1629  $ cat /proc/7353/cgroup
1630  0::/batchjobs/container_id1/sub_cgrp_1
1631
1632From a sibling cgroup namespace (that is, a namespace rooted at a
1633different cgroup), the cgroup path relative to its own cgroup
1634namespace root will be shown.  For instance, if PID 7353's cgroup
1635namespace root is at '/batchjobs/container_id2', then it will see::
1636
1637  # cat /proc/7353/cgroup
1638  0::/../container_id2/sub_cgrp_1
1639
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
1642
1643
1644Migration and setns(2)
1645----------------------
1646
1647Processes inside a cgroup namespace can move into and out of the
1648namespace root if they have proper access to external cgroups.  For
1649example, from inside a namespace with cgroupns root at
1650/batchjobs/container_id1, and assuming that the global hierarchy is
1651still accessible inside cgroupns::
1652
1653  # cat /proc/7353/cgroup
1654  0::/sub_cgrp_1
1655  # echo 7353 > batchjobs/container_id2/cgroup.procs
1656  # cat /proc/7353/cgroup
1657  0::/../container_id2
1658
Note that this kind of setup is not encouraged.  A task inside a
cgroup namespace should only be exposed to its own cgroupns
hierarchy.
1661
1662setns(2) to another cgroup namespace is allowed when:
1663
1664(a) the process has CAP_SYS_ADMIN against its current user namespace
1665(b) the process has CAP_SYS_ADMIN against the target cgroup
1666    namespace's userns
1667
No implicit cgroup changes happen when attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
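
setns(2) can be exercised from a shell with nsenter(1), for example
(the PID below is illustrative)::

  # nsenter --cgroup=/proc/7353/ns/cgroup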
1671
1672
1673Interaction with Other Namespaces
1674---------------------------------
1675
A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
1678
1679  # mount -t cgroup2 none $MOUNT_POINT
1680
1681This will mount the unified cgroup hierarchy with cgroupns root as the
1682filesystem root.  The process needs CAP_SYS_ADMIN against its user and
1683mount namespaces.
1684
The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.
1688
1689
1690Information on Kernel Programming
1691=================================
1692
1693This section contains kernel programming information in the areas
1694where interacting with cgroup is necessary.  cgroup core and
1695controllers are not covered.
1696
1697
1698Filesystem Support for Writeback
1699--------------------------------
1700
A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.
1704
1705  wbc_init_bio(@wbc, @bio)
1706        Should be called for each bio carrying writeback data and
1707        associates the bio with the inode's owner cgroup.  Can be
1708        called anytime between bio allocation and submission.
1709
1710  wbc_account_io(@wbc, @page, @bytes)
1711        Should be called for each data segment being written out.
1712        While this function doesn't care exactly when it's called
1713        during the writeback session, it's the easiest and most
1714        natural to call it as data segments are added to a bio.
1715
With writeback bios annotated, cgroup support can be enabled per
1717super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
1718selective disabling of cgroup writeback support which is helpful when
1719certain filesystem features, e.g. journaled data mode, are
1720incompatible.
1721
1722wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session is holding shared resources, e.g. a journal
entry, this may lead to priority inversion.  There is no one easy solution
1726for the problem.  Filesystems can try to work around specific problem
1727cases by skipping wbc_init_bio() or using bio_associate_blkcg()
1728directly.
1729
1730
1731Deprecated v1 Core Features
1732===========================
1733
1734- Multiple hierarchies including named ones are not supported.
1735
- None of the v1 mount options is supported.
1737
1738- The "tasks" file is removed and "cgroup.procs" is not sorted.
1739
1740- "cgroup.clone_children" is removed.
1741
1742- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" file
1743  at the root instead.
1744
1745
1746Issues with v1 and Rationales for v2
1747====================================
1748
1749Multiple Hierarchies
1750--------------------
1751
1752cgroup v1 allowed an arbitrary number of hierarchies and each
1753hierarchy could host any number of controllers.  While this seemed to
1754provide a high level of flexibility, it wasn't useful in practice.
1755
1756For example, as there is only one instance of each controller, utility
1757type controllers such as freezer which can be useful in all
1758hierarchies could only be used in one.  The issue is exacerbated by
1759the fact that controllers couldn't be moved to another hierarchy once
1760hierarchies were populated.  Another issue was that all controllers
1761bound to a hierarchy were forced to have exactly the same view of the
1762hierarchy.  It wasn't possible to vary the granularity depending on
1763the specific controller.
1764
1765In practice, these issues heavily limited which controllers could be
1766put on the same hierarchy and most configurations resorted to putting
1767each controller on its own hierarchy.  Only closely related ones, such
1768as the cpu and cpuacct controllers, made sense to be put on the same
1769hierarchy.  This often meant that userland ended up managing multiple
1770similar hierarchies repeating the same steps on each hierarchy
1771whenever a hierarchy management operation was necessary.
1772
1773Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.
1777
There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key describing the membership might contain any number of
entries and was unlimited in length, which made it highly awkward to
manipulate and led to the addition of controllers which existed only
to identify membership, which in turn exacerbated the original problem
of a proliferating number of hierarchies.
1785
1786Also, as a controller couldn't have any expectation regarding the
1787topologies of hierarchies other controllers might be on, each
1788controller had to assume that all other controllers were attached to
1789completely orthogonal hierarchies.  This made it impossible, or at
1790least very cumbersome, for controllers to cooperate with each other.
1791
1792In most use cases, putting controllers on hierarchies which are
1793completely orthogonal to each other isn't necessary.  What usually is
1794called for is the ability to have differing levels of granularity
1795depending on the specific controller.  In other words, hierarchy may
1796be collapsed from leaf towards root when viewed from specific
1797controllers.  For example, a given configuration might not care about
1798how memory is distributed beyond a certain level while still wanting
1799to control how CPU cycles are distributed.
1800
1801
1802Thread Granularity
1803------------------
1804
1805cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers, and those controllers
ended up implementing different ways to ignore such situations; much
more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.
1810
1811Generally, in-process knowledge is available only to the process
1812itself; thus, unlike service-level organization of processes,
1813categorizing threads of a process requires active participation from
1814the application which owns the target process.
1815
1816cgroup v1 had an ambiguously defined delegation model which got abused
1817in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
1819sub-hierarchies and control resource distributions along them.  This
1820effectively raised cgroup to the status of a syscall-like API exposed
1821to lay programs.
1822
1823First of all, cgroup has a fundamentally inadequate interface to be
1824exposed this way.  For a process to access its own knobs, it has to
1825extract the path on the target hierarchy from /proc/self/cgroup,
1826construct the path by appending the name of the knob to the path, open
1827and then read and/or write to it.  This is not only extremely clunky
1828and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can guarantee
1830that the process would actually be operating on its own sub-hierarchy.
1831
1832cgroup controllers implemented a number of knobs which would never be
1833accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
1835knobs which were not properly abstracted or refined and directly
1836revealed kernel internal details.  These knobs got exposed to
1837individual applications through the ill-defined delegation mechanism
1838effectively abusing cgroup as a shortcut to implementing public APIs
1839without going through the required scrutiny.
1840
This was painful for both userland and the kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces, and the kernel
ended up exposing and being locked into constructs inadvertently.
1844
1845
1846Competition Between Inner Nodes and Threads
1847-------------------------------------------
1848
cgroup v1 allowed threads to be in any cgroup, which created an
1850interesting problem where threads belonging to a parent cgroup and its
1851children cgroups competed for resources.  This was nasty as two
1852different types of entities competed and there was no obvious way to
1853settle it.  Different controllers did different things.
1854
1855The cpu controller considered threads and cgroups as equivalents and
1856mapped nice levels to cgroup weights.  This worked for some cases but
1857fell flat when children wanted to be allocated specific ratios of CPU
1858cycles and the number of internal threads fluctuated - the ratios
1859constantly changed as the number of competing entities fluctuated.
1860There also were other issues.  The mapping from nice level to weight
1861wasn't obvious or universal, and there were various other knobs which
1862simply weren't available for threads.
1863
1864The io controller implicitly created a hidden leaf node for each
1865cgroup to host the threads.  The hidden leaf had its own copies of all
1866the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
1868always added an extra layer of nesting which wouldn't be necessary
1869otherwise, made the interface messy and significantly complicated the
1870implementation.
1871
1872The memory controller didn't have a way to control what happened
1873between internal tasks and child cgroups and the behavior was not
1874clearly defined.  There were attempts to add ad-hoc behaviors and
1875knobs to tailor the behavior to specific workloads which would have
1876led to problems extremely difficult to resolve in the long term.
1877
1878Multiple controllers struggled with internal tasks and came up with
1879different ways to deal with it; unfortunately, all the approaches were
1880severely flawed and, furthermore, the widely different behaviors
1881made cgroup as a whole highly inconsistent.
1882
1883This clearly is a problem which needs to be addressed from cgroup core
1884in a uniform way.
1885
1886
1887Other Interface Issues
1888----------------------
1889
1890cgroup v1 grew without oversight and developed a large number of
1891idiosyncrasies and inconsistencies.  One issue on the cgroup core side
1892was how an empty cgroup was notified - a userland helper binary was
1893forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.
1897
1898Controller interfaces were problematic too.  An extreme example is
1899controllers completely ignoring hierarchical organization and treating
1900all cgroups as if they were all located directly under the root
1901cgroup.  Some controllers exposed a large amount of inconsistent
1902implementation details to userland.
1903
1904There also was no consistency across controllers.  When a new cgroup
1905was created, some controllers defaulted to not imposing extra
1906restrictions while others disallowed any resource usage until
1907explicitly configured.  Configuration knobs for the same type of
1908control used widely differing naming schemes and formats.  Statistics
1909and information knobs were named arbitrarily and used different
1910formats and units even in the same controller.
1911
1912cgroup v2 establishes common conventions where appropriate and updates
1913controllers so that they expose minimal and consistent interfaces.
1914
1915
1916Controller Issues and Remedies
1917------------------------------
1918
1919Memory
1920~~~~~~
1921
1922The original lower boundary, the soft limit, is defined as a limit
that is unset by default.  As a result, the set of cgroups that
1924global reclaim prefers is opt-in, rather than opt-out.  The costs for
1925optimizing these mostly negative lookups are so high that the
1926implementation, despite its enormous size, does not even provide the
1927basic desirable behavior.  First off, the soft limit has no
1928hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
1930in the hierarchy.  This makes subtree delegation impossible.  Second,
1931the soft limit reclaim pass is so aggressive that it not just
1932introduces high allocation latencies into the system, but also impacts
1933system performance due to overreclaim, to the point where the feature
1934becomes self-defeating.
1935
1936The memory.low boundary on the other hand is a top-down allocated
1937reserve.  A cgroup enjoys reclaim protection when it and all its
1938ancestors are below their low boundaries, which makes delegation of
subtrees possible.  Secondly, new cgroups have no reserve by default
1940and in the common case most cgroups are eligible for the preferred
1941reclaim pass.  This allows the new low boundary to be efficiently
1942implemented with just a minor addition to the generic reclaim code,
1943without the need for out-of-band data structures and reclaim passes.
1944Because the generic reclaim code considers all cgroups except for the
1945ones running low in the preferred first reclaim pass, overreclaim of
1946individual groups is eliminated as well, resulting in much better
1947overall workload performance.
1948
1949The original high boundary, the hard limit, is defined as a strict
1950limit that can not budge, even if the OOM killer has to be called.
1951But this generally goes against the goal of making the most out of the
1952available memory.  The memory consumption of workloads varies during
1953runtime, and that requires users to overcommit.  But doing that with a
1954strict upper limit requires either a fairly accurate prediction of the
1955working set size or adding slack to the limit.  Since working set size
1956estimation is hard and error prone, and getting it wrong results in
1957OOM kills, most users tend to err on the side of a looser limit and
1958end up wasting precious resources.
1959
1960The memory.high boundary on the other hand can be set much more
1961conservatively.  When hit, it throttles allocations by forcing them
1962into direct reclaim to work off the excess, but it never invokes the
1963OOM killer.  As a result, a high boundary that is chosen too
1964aggressively will not terminate the processes, but instead it will
1965lead to gradual performance degradation.  The user can monitor this
1966and make corrections until the minimal memory footprint that still
1967gives acceptable performance is found.
1968
1969In extreme cases, with many concurrent allocations and a complete
1970breakdown of reclaim progress within the group, the high boundary can
1971be exceeded.  But even then it's mostly better to satisfy the
1972allocation from the slack available in other groups or the rest of the
1973system than killing the group.  Otherwise, memory.max is there to
1974limit this type of spillover and ultimately contain buggy or even
1975malicious applications.
1976
1977Setting the original memory.limit_in_bytes below the current usage was
1978subject to a race condition, where concurrent charges could cause the
1979limit setting to fail. memory.max on the other hand will first set the
1980limit to prevent new charges, and then reclaim and OOM kill until the
1981new limit is met - or the task writing to memory.max is killed.
1982
1983The combined memory+swap accounting and limiting is replaced by real
1984control over swap space.
1985
1986The main argument for a combined memory+swap facility in the original
1987cgroup design was that global or parental pressure would always be
1988able to swap all anonymous memory of a child group, regardless of the
1989child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.
1993
1994For trusted jobs, on the other hand, a combined counter is not an
1995intuitive userspace interface, and it flies in the face of the idea
1996that cgroup controllers should account and limit specific physical
1997resources.  Swap space is a resource like all others in the system,
1998and that's why unified hierarchy allows distributing it separately.
1999