REDUCING OS JITTER DUE TO PER-CPU KTHREADS

This document lists per-CPU kthreads in the Linux kernel and presents
options to control their OS jitter.  Note that non-per-CPU kthreads are
not listed here.  To reduce OS jitter from non-per-CPU kthreads, bind
them to a "housekeeping" CPU dedicated to such work.


REFERENCES

o       Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.

o       Documentation/cgroups:  Using cgroups to bind tasks to sets of CPUs.

o       man taskset:  Using the taskset command to bind tasks to sets
        of CPUs.
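
        For example, a minimal sketch (the PID 1234 and the housekeeping
        CPU list 0-2 are placeholder values):

                taskset -cp 0-2 1234  # restrict PID 1234 to CPUs 0-2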

o       man sched_setaffinity:  Using the sched_setaffinity() system
        call to bind tasks to sets of CPUs.

o       /sys/devices/system/cpu/cpuN/online:  Control CPU N's hotplug state,
        writing "0" to offline and "1" to online.

o       In order to locate kernel-generated OS jitter on CPU N:

                cd /sys/kernel/debug/tracing
                echo 1 > max_graph_depth # Increase the "1" for more detail
                echo function_graph > current_tracer
                # run workload
                cat per_cpu/cpuN/trace


KTHREADS

Name: ehca_comp/%u
Purpose: Periodically process Infiniband-related work.
To reduce its OS jitter, do any of the following:
1.      Don't use eHCA Infiniband hardware, instead choosing hardware
        that does not require per-CPU kthreads.  This will prevent these
        kthreads from being created in the first place.  (This will
        work for most people, as this hardware, though important, is
        relatively old and is produced in relatively low unit volumes.)
2.      Do all eHCA-Infiniband-related work on other CPUs, including
        interrupts.
3.      Rework the eHCA driver so that its per-CPU kthreads are
        provisioned only on selected CPUs.


Name: irq/%d-%s
Purpose: Handle threaded interrupts.
To reduce its OS jitter, do the following:
1.      Use irq affinity to force the irq threads to execute on
        some other CPU.
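
        For example, a minimal sketch (IRQ 42 and the housekeeping CPU
        list 0-2 are placeholder values; adjust them for your system):

                # Restrict IRQ 42, and thus its irq/42-* kthread, to CPUs 0-2.
                echo 0-2 > /proc/irq/42/smp_affinity_list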

Name: kcmtpd_ctr_%d
Purpose: Handle Bluetooth work.
To reduce its OS jitter, do one of the following:
1.      Don't use Bluetooth, in which case these kthreads won't be
        created in the first place.
2.      Use irq affinity to force Bluetooth-related interrupts to
        occur on some other CPU and furthermore initiate all
        Bluetooth activity on some other CPU.

Name: ksoftirqd/%u
Purpose: Execute softirq handlers when threaded or when under heavy load.
To reduce its OS jitter, each softirq vector must be handled
separately as follows:
TIMER_SOFTIRQ:  Do all of the following:
1.      To the extent possible, keep the CPU out of the kernel when it
        is non-idle, for example, by avoiding system calls and by forcing
        both kernel threads and interrupts to execute elsewhere.
2.      Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
        the CPU offline, then bring it back online.  This forces
        recurring timers to migrate elsewhere.  If you are concerned
        with multiple CPUs, force them all offline before bringing the
        first one back online.  Once you have onlined the CPUs in question,
        do not offline any other CPUs, because doing so could force the
        timer back onto one of the CPUs in question.
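
        A minimal sketch of the offline/online cycle described in item 2
        above (CPU 3 is a placeholder for the CPU to be de-jittered):

                echo 0 > /sys/devices/system/cpu/cpu3/online  # migrate recurring timers away
                echo 1 > /sys/devices/system/cpu/cpu3/online  # then restore the CPU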
NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
1.      Force networking interrupts onto other CPUs.
2.      Initiate any network I/O on other CPUs.
3.      Once your application has started, prevent CPU-hotplug operations
        from being initiated from tasks that might run on the CPU to
        be de-jittered.  (It is OK to force this CPU offline and then
        bring it back online before you start your application.)
BLOCK_SOFTIRQ:  Do all of the following:
1.      Force block-device interrupts onto some other CPU.
2.      Initiate any block I/O on other CPUs.
3.      Once your application has started, prevent CPU-hotplug operations
        from being initiated from tasks that might run on the CPU to
        be de-jittered.  (It is OK to force this CPU offline and then
        bring it back online before you start your application.)
IRQ_POLL_SOFTIRQ:  Do all of the following:
1.      Force block-device interrupts onto some other CPU.
2.      Initiate any block I/O and block-I/O polling on other CPUs.
3.      Once your application has started, prevent CPU-hotplug operations
        from being initiated from tasks that might run on the CPU to
        be de-jittered.  (It is OK to force this CPU offline and then
        bring it back online before you start your application.)
TASKLET_SOFTIRQ: Do one or more of the following:
1.      Avoid use of drivers that use tasklets.  (Such drivers will contain
        calls to things like tasklet_schedule().)
2.      Convert all drivers that you must use from tasklets to workqueues.
3.      Force interrupts for drivers using tasklets onto other CPUs,
        and also do I/O involving these drivers on other CPUs.
SCHED_SOFTIRQ: Do all of the following:
1.      Avoid sending scheduler IPIs to the CPU to be de-jittered,
        for example, ensure that at most one runnable kthread is present
        on that CPU.  If a thread that expects to run on the de-jittered
        CPU awakens, the scheduler will send an IPI that can result in
        a subsequent SCHED_SOFTIRQ.
2.      Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
        CONFIG_NO_HZ_FULL=y, and, in addition, ensure that the CPU
        to be de-jittered is marked as an adaptive-ticks CPU using the
        "nohz_full=" boot parameter.  This reduces the number of
        scheduler-clock interrupts that the de-jittered CPU receives,
        minimizing its chances of being selected to do the load balancing
        work that runs in SCHED_SOFTIRQ context.
3.      To the extent possible, keep the CPU out of the kernel when it
        is non-idle, for example, by avoiding system calls and by
        forcing both kernel threads and interrupts to execute elsewhere.
        This further reduces the number of scheduler-clock interrupts
        received by the de-jittered CPU.
HRTIMER_SOFTIRQ:  Do all of the following:
1.      To the extent possible, keep the CPU out of the kernel when it
        is non-idle.  For example, avoid system calls and force both
        kernel threads and interrupts to execute elsewhere.
2.      Build with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the
        CPU offline, then bring it back online.  This forces recurring
        timers to migrate elsewhere.  If you are concerned with multiple
        CPUs, force them all offline before bringing the first one
        back online.  Once you have onlined the CPUs in question, do not
        offline any other CPUs, because doing so could force the timer
        back onto one of the CPUs in question.
RCU_SOFTIRQ:  Do at least one of the following:
1.      Offload callbacks and keep the CPU in either dyntick-idle or
        adaptive-ticks state by doing all of the following:
        a.      Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
                CONFIG_NO_HZ_FULL=y, and, in addition, ensure that the CPU
                to be de-jittered is marked as an adaptive-ticks CPU using
                the "nohz_full=" boot parameter (see the example boot line
                at the end of this entry).  Bind the rcuo kthreads to
                housekeeping CPUs, which can tolerate OS jitter.
        b.      To the extent possible, keep the CPU out of the kernel
                when it is non-idle, for example, by avoiding system
                calls and by forcing both kernel threads and interrupts
                to execute elsewhere.
2.      Enable RCU to do its processing remotely via dyntick-idle by
        doing all of the following:
        a.      Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
        b.      Ensure that the CPU goes idle frequently, allowing other
                CPUs to detect that it has passed through an RCU quiescent
                state.  If the kernel is built with CONFIG_NO_HZ_FULL=y,
                userspace execution also allows other CPUs to detect that
                the CPU in question has passed through a quiescent state.
        c.      To the extent possible, keep the CPU out of the kernel
                when it is non-idle, for example, by avoiding system
                calls and by forcing both kernel threads and interrupts
                to execute elsewhere.
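
For example, a sketch of the boot-parameter usage described in the
SCHED_SOFTIRQ and RCU_SOFTIRQ items above (CPU 3 is a placeholder for
the CPU to be de-jittered):

                # Append to the kernel command line in your boot loader:
                nohz_full=3

                # After boot, the list of adaptive-ticks CPUs can be read from:
                cat /sys/devices/system/cpu/nohz_full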

Name: kworker/%u:%d%s (cpu, id, priority)
Purpose: Execute workqueue requests.
To reduce its OS jitter, do any of the following:
1.      Run your workload at a real-time priority, which will allow
        preempting the kworker daemons.
2.      A given workqueue can be made visible in the sysfs filesystem
        by passing the WQ_SYSFS flag to that workqueue's alloc_workqueue().
        Such a workqueue can be confined to a given subset of the
        CPUs using the /sys/devices/virtual/workqueue/*/cpumask sysfs
        files (see the sketch at the end of this entry).  The set of
        WQ_SYSFS workqueues can be displayed using
        "ls /sys/devices/virtual/workqueue".  That said, the workqueues
        maintainer would like to caution people against indiscriminately
        sprinkling WQ_SYSFS across all the workqueues.  The reason for
        caution is that it is easy to add WQ_SYSFS, but because sysfs is
        part of the formal user/kernel API, it can be nearly impossible
        to remove it, even if its addition was a mistake.
3.      Do any of the following needed to avoid jitter that your
        application cannot tolerate:
        a.      Build your kernel with CONFIG_SLUB=y rather than
                CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
                use of each CPU's workqueues to run its cache_reap()
                function.
        b.      Avoid using oprofile, thus avoiding OS jitter from
                wq_sync_buffer().
        c.      Limit your CPU frequency so that a CPU-frequency
                governor is not required, possibly enlisting the aid of
                special heatsinks or other cooling technologies.  If done
                correctly, and if your CPU architecture permits, you should
                be able to build your kernel with CONFIG_CPU_FREQ=n to
                avoid the CPU-frequency governor periodically running
                on each CPU, including cs_dbs_timer() and od_dbs_timer().
                WARNING:  Please check your CPU specifications to
                make sure that this is safe on your particular system.
        d.      As of v3.18, Christoph Lameter's on-demand vmstat workers
                commit prevents OS jitter due to vmstat_update() on
                CONFIG_SMP=y systems.  Before v3.18, it is not possible
                to entirely get rid of the OS jitter, but you can
                decrease its frequency by writing a large value to
                /proc/sys/vm/stat_interval (see the sketch at the end
                of this entry).  The default value is HZ, for an interval
                of one second.  Of course, larger values will make your
                virtual-memory statistics update more slowly.  Alternatively,
                you can run your workload at a real-time priority, thus
                preempting vmstat_update(), but if your workload is
                CPU-bound, this is a bad idea.  However, there is an RFC
                patch from Christoph Lameter (based on an earlier one from
                Gilad Ben-Yossef) that reduces or even eliminates vmstat
                overhead for some workloads at
                https://lkml.org/lkml/2013/9/4/379.
        e.      Boot with "elevator=noop" to avoid workqueue use by
                the block layer.
        f.      If running on high-end powerpc servers, build with
                CONFIG_PPC_RTAS_DAEMON=n.  This prevents the RTAS
                daemon from running on each CPU every second or so.
                (This will require editing Kconfig files and will defeat
                this platform's RAS functionality.)  This avoids jitter
                due to the rtas_event_scan() function.
                WARNING:  Please check your CPU specifications to
                make sure that this is safe on your particular system.
        g.      If running on a Cell processor, build your kernel with
                CONFIG_CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
                spu_gov_work().
                WARNING:  Please check your CPU specifications to
                make sure that this is safe on your particular system.
        h.      If running on a PowerMac, build your kernel with
                CONFIG_PMAC_RACKMETER=n to disable the CPU meter,
                avoiding OS jitter from rackmeter_do_timer().
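
For example, a minimal sketch of the sysfs and procfs knobs mentioned
in items 2 and 3d above (the "writeback" workqueue name, the hex mask
0x7 for CPUs 0-2, and the 120-second interval are placeholder values;
your system may expose different WQ_SYSFS workqueues):

                # Confine a WQ_SYSFS workqueue to housekeeping CPUs 0-2.
                ls /sys/devices/virtual/workqueue
                echo 7 > /sys/devices/virtual/workqueue/writeback/cpumask

                # Update virtual-memory statistics every 120 seconds instead
                # of every second (pre-v3.18 kernels).
                echo 120 > /proc/sys/vm/stat_interval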

Name: rcuc/%u
Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
To reduce its OS jitter, do at least one of the following:
1.      Build the kernel with CONFIG_PREEMPT=n.  This prevents these
        kthreads from being created in the first place, and also obviates
        the need for RCU priority boosting.  This approach is feasible
        for workloads that do not require high degrees of responsiveness.
2.      Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
        kthreads from being created in the first place.  This approach
        is feasible only if your workload never requires RCU priority
        boosting, for example, if you ensure frequent idle time on all
        CPUs that might execute within the kernel.
3.      Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
        which offloads all RCU callbacks to kthreads that can be moved
        off of CPUs susceptible to OS jitter.  This approach prevents the
        rcuc/%u kthreads from having any work to do, so that they are
        never awakened.
4.      Ensure that the CPU never enters the kernel, and, in particular,
        avoid initiating any CPU hotplug operations on this CPU.  This is
        another way of preventing any callbacks from being queued on the
        CPU, again preventing the rcuc/%u kthreads from having any work
        to do.

Name: rcuob/%d, rcuop/%d, and rcuos/%d
Purpose: Offload RCU callbacks from the corresponding CPU.
To reduce its OS jitter, do at least one of the following:
1.      Use affinity, cgroups, or other mechanisms to force these kthreads
        to execute on some other CPU (see the sketch below).
2.      Build with CONFIG_RCU_NOCB_CPU=n, which will prevent these
        kthreads from being created in the first place.  However, please
        note that this will not eliminate OS jitter, but will instead
        shift it to RCU_SOFTIRQ.
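
For example, a minimal sketch of item 1 that binds all rcuo kthreads to
housekeeping CPUs 0-2 (a placeholder CPU list; the pgrep pattern assumes
the kthread names listed above):

                for pid in $(pgrep '^rcuo'); do
                        taskset -cp 0-2 $pid
                done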

Name: watchdog/%u
Purpose: Detect software lockups on each CPU.
To reduce its OS jitter, do at least one of the following:
1.      Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
        kthreads from being created in the first place.
2.      Boot with "nosoftlockup", which will also prevent these kthreads
        from being created.  Other related watchdog and softlockup boot
        parameters may be found in Documentation/kernel-parameters.txt
        and Documentation/watchdog/watchdog-parameters.txt.
3.      Echo a zero to /proc/sys/kernel/watchdog to disable the
        watchdog timer.
4.      Echo a large number to /proc/sys/kernel/watchdog_thresh in
        order to reduce the frequency of OS jitter due to the watchdog
        timer to a level that is acceptable for your workload (see the
        sketch below).
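
For example, a minimal sketch of items 3 and 4 (the 60-second threshold
is a placeholder value):

                echo 0 > /proc/sys/kernel/watchdog          # item 3: disable the watchdog entirely
                echo 60 > /proc/sys/kernel/watchdog_thresh  # item 4: merely reduce its frequency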