                         ============================
                         LINUX KERNEL MEMORY BARRIERS
                         ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete. This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask.  Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/.  Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.
     - Acquires vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = B;
        B = 4;          y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3,      STORE B=4,      y=LOAD A->3,    x=LOAD B->4
        STORE A=3,      STORE B=4,      x=LOAD B->4,    y=LOAD A->3
        STORE A=3,      y=LOAD A->3,    STORE B=4,      x=LOAD B->4
        STORE A=3,      y=LOAD A->3,    x=LOAD B->2,    STORE B=4
        STORE A=3,      x=LOAD B->2,    STORE B=4,      y=LOAD A->3
        STORE A=3,      x=LOAD B->2,    y=LOAD A->3,    STORE B=4
        STORE B=4,      STORE A=3,      y=LOAD A->3,    x=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 2, y == 1
        x == 2, y == 3
        x == 4, y == 1
        x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;          Q = P;
        P = &B;         D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
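
In the Linux kernel, ordered access to a device like this is normally obtained
through the MMIO accessors, which include whatever ordering each architecture
needs.  A minimal sketch of the register read above (the device, the register
offsets and the function name are all hypothetical):

        #include <linux/io.h>

        #define EXAMPLE_ADDR_PORT       0x00    /* hypothetical offset */
        #define EXAMPLE_DATA_PORT       0x04    /* hypothetical offset */

        /* Read internal register 'reg' via the address/data port pair.
         * writel() and readl() are ordered with respect to each other
         * for accesses to the same device, so the address port is
         * written before the data port is read.
         */
        static u32 example_read_reg(void __iomem *base, u32 reg)
        {
                writel(reg, base + EXAMPLE_ADDR_PORT);
                return readl(base + EXAMPLE_DATA_PORT);
        }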


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

        Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order.  However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

        Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

        a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

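As an illustration of the first guarantee, a reader can follow a shared
pointer with plain READ_ONCE() and rely on the dependent accesses being
issued in order on any given CPU; a minimal sketch (the structure and
variable names are invented for the example):

        struct foo {
                int a;
        };

        struct foo *gp;         /* shared pointer; made-up name */

        /* The load of gp and the dependent load of q->a are issued in
         * that order on any CPU.  READ_ONCE() also prevents compiler
         * mischief and, on DEC Alpha, supplies the barrier described
         * above.
         */
        int example_reader(void)
        {
                struct foo *q = READ_ONCE(gp);

                return q ? READ_ONCE(q->a) : -1;
        }
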
And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
        X = LOAD *A,  STORE *D = Z, Y = LOAD *B
        Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
        Y = LOAD *B,  STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A,  Y = LOAD *B
        STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; *(A + 4) = Y;

     we may get any of:

        STORE *A = X; STORE *(A + 4) = Y;
        STORE *(A + 4) = Y; STORE *A = X;
        STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

        memory location
                either an object of scalar type, or a maximal sequence
                of adjacent bit-fields all having nonzero width

                NOTE 1: Two threads of execution can update and access
                separate memory locations without interfering with
                each other.

                NOTE 2: A bit-field and an adjacent non-bit-field member
                are in separate memory locations. The same applies
                to two bit-fields, if one is declared inside a nested
                structure declaration and the other is not, or if the two
                are separated by a zero-length bit-field declaration,
                or if they are separated by a non-bit-field member
                declaration. It is not safe to concurrently update two
                bit-fields in the same structure if all members declared
                between them are also bit-fields, no matter what the
                sizes of those intervening bit-fields happen to be.
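
To illustrate the bitfield anti-guarantees, consider this sketch (the
structures and lock names are invented; only the layout matters):

        struct flags {
                int     a : 1;  /* protected by lock_a */
                int     b : 1;  /* protected by lock_b: BUG!  An update
                                 * to 'b' may be compiled as a non-atomic
                                 * read-modify-write of the word that also
                                 * holds 'a', corrupting 'a'.
                                 */
        };

        struct flags_fixed {
                int     a : 1;  /* protected by lock_a */
                int       : 0;  /* zero-width bit-field: per the C11 text
                                 * quoted above, this puts 'b' in a
                                 * separate memory location */
                int     b : 1;  /* protected by lock_b: now safe */
        };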


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the
     address obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.  The latter builds the necessary
     ACQUIRE semantics by relying on a control dependency and smp_rmb().

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system. RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions.  For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

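For example, smp_store_release() and smp_load_acquire() can be used to pass
a message from one CPU to another; a minimal sketch (the variable and
function names are invented):

        int data;               /* payload; made-up name */
        int flag;               /* ready flag; made-up name */

        /* CPU 1: the RELEASE store orders the write to 'data' before
         * the store to 'flag', as seen by a CPU that ACQUIREs 'flag'.
         */
        void example_producer(void)
        {
                data = 42;
                smp_store_release(&flag, 1);
        }

        /* CPU 2: if the ACQUIRE load observes flag == 1, the later
         * read of 'data' is guaranteed to observe 42.
         */
        int example_consumer(void)
        {
                if (smp_load_acquire(&flag))
                        return data;
                return -1;      /* message not yet published */
        }
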
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/PCI/pci.txt
            Documentation/DMA-API-HOWTO.txt
            Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS (HISTORICAL)
-------------------------------------

As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
added to READ_ONCE(), which means that about the only people who
need to pay attention to this section are those working on DEC Alpha
architecture-specific code and those working on READ_ONCE() itself.
For those who need it, and for those who are interested in the history,
here is the story of data-dependency barriers.

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              <data dependency barrier>
                              D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

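On a post-v4.15 kernel, the reader side of the sequence above can rely on
READ_ONCE() alone, since READ_ONCE() now supplies the data dependency
barrier; a minimal sketch (the variable names follow the example above):

        int a = 1, b = 2;
        int *p = &a;                    /* plays the part of P */

        /* CPU 1 */
        void example_writer(void)
        {
                b = 4;
                smp_wmb();              /* the <write barrier> above */
                WRITE_ONCE(p, &b);
        }

        /* CPU 2 */
        int example_reader(void)
        {
                int *q = READ_ONCE(p);  /* includes the data dependency
                                         * barrier on DEC Alpha */

                return *q;              /* q == &b implies a result of 4 */
        }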

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


A data-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes
until they are certain (1) that the write will actually happen, (2)
of the location of the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.txt file:  The compiler can and does
break dependencies in a great many highly creative ways.

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              WRITE_ONCE(*Q, 5);

Therefore, no data-dependency barrier is required to order the read into
Q with the store into *Q.  In other words, this outcome is prohibited,
even without a data-dependency barrier:

        (Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes.  This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by a data dependency is local to
the CPU containing it.  See the section on "Multicopy atomicity" for
more information.


The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.
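
For instance, a sketch of RCU-style publish and subscribe (the structure
and variable names are invented; see Documentation/RCU/ for the full API
rules):

        #include <linux/rcupdate.h>
        #include <linux/slab.h>

        struct foo {
                int a;
        };

        struct foo __rcu *gp;   /* RCU-protected pointer; made-up name */

        /* Publisher: rcu_assign_pointer() provides the barrier that
         * orders the initialisation of *p before its publication.
         */
        void example_publish(void)
        {
                struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

                if (!p)
                        return;
                p->a = 1;
                rcu_assign_pointer(gp, p);
        }

        /* Subscriber: rcu_dereference() provides the data dependency
         * barrier (where one is needed) for the dereference that
         * follows.
         */
        int example_subscribe(void)
        {
                struct foo *p;
                int ret = -1;

                rcu_read_lock();
                p = rcu_dereference(gp);
                if (p)
                        ret = p->a;
                rcu_read_unlock();
                return ret;
        }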

See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them.  The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly.  Consider the
following bit of code:

        q = READ_ONCE(a);
        if (q) {
                <data dependency barrier>  /* BUG: No data dependency!!! */
                p = READ_ONCE(b);
        }

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

        q = READ_ONCE(a);
        if (q) {
                <read barrier>
                p = READ_ONCE(b);
        }

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        }

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional! Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

        q = READ_ONCE(a);
        if (q) {
                barrier();
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                barrier();
                WRITE_ONCE(b, 1);
                do_something_else();
        }

Unfortunately, current compilers will transform this as follows at high
optimization levels:

        q = READ_ONCE(a);
        barrier();
        WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something();
        } else {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something_else();
        }

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, 1);
                do_something();
        } else {
                smp_store_release(&b, 1);
                do_something_else();
        }

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

        q = READ_ONCE(a);
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 2);
        do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

        q = READ_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

        q = READ_ONCE(a);
        if (q || 1 > 0)
                WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows, defeating
the control dependency:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do
not necessarily apply to code following the if-statement:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        } else {
                WRITE_ONCE(b, 2);
        }
        WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition.  Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

        ld r1,a
        cmp r1,$0
        cmov,ne r4,$1
        cmov,eq r4,$2
        st r4,b
        st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'.  The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it.  See the section on "Multicopy atomicity"
for more information.


In summary:

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores to
      the same variable, then those stores must be ordered, either by
      preceding both of them with smp_mb() or by using smp_store_release()
      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at the beginning of each leg of the "if" statement
      because, as shown by the example above, optimizing compilers can
      destroy the control dependency while respecting the letter of the
      barrier() law.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler is able
      to optimize the conditional away, it will have also optimized
      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
      can help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of READ_ONCE() or
      atomic{,64}_read() can help to preserve your control dependency.
      Please see the COMPILER BARRIER section for more information.

  (*) Control dependencies apply only to the then-clause and else-clause
      of the if-statement containing the control dependency, including
      any functions that these two clauses call.  Control dependencies
      do -not- apply to code following the if-statement containing the
      control dependency.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide multicopy atomicity.  If you
      need all the CPUs to see a given store at the same time, use smp_mb().

  (*) Compilers do not understand control dependencies.  It is therefore
      your job to ensure that they do not break your code.

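Putting these rules together, a minimal load-store control dependency that
current compilers cannot legitimately break looks like this (the variables
are invented, as in the examples above):

        int a, b;

        /* CPU 1 */
        void example_cpu1(void)
        {
                WRITE_ONCE(a, 1);
        }

        /* CPU 2: the conditional orders the load from 'a' before the
         * store to 'b' -- on this CPU only -- because the stores in the
         * two legs differ and READ_ONCE() prevents the compiler from
         * proving the value of 'a'.
         */
        void example_cpu2(void)
        {
                if (READ_ONCE(a))
                        WRITE_ONCE(b, 1);
                else
                        WRITE_ONCE(b, 2);
        }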

SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with a data dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier.  Similarly a
read barrier, control dependency, or a data dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

        CPU 1                 CPU 2
        ===============       ===============
        WRITE_ONCE(a, 1);
        <write barrier>
        WRITE_ONCE(b, 2);     x = READ_ONCE(b);
                              <read barrier>
                              y = READ_ONCE(a);

Or:

        CPU 1                 CPU 2
        ===============       ===============================
        a = 1;
        <write barrier>
        WRITE_ONCE(b, &a);    x = READ_ONCE(b);
                              <data dependency barrier>
                              y = *x;

Or even:

        CPU 1                 CPU 2
        ===============       ===============================
        r1 = READ_ONCE(y);
        <general barrier>
        WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
                                 <implicit control dependency>
                                 WRITE_ONCE(y, 1);
                              }

        assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

        CPU 1                               CPU 2
        ===================                 ===================
        WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
        WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
        <write barrier>            \        <read barrier>
        WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
        WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);

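In kernel code, the first pairing above is typically written with smp_wmb()
and smp_rmb(); a minimal sketch (the variable names are invented):

        int a, b;

        /* CPU 1 */
        void example_writer(void)
        {
                WRITE_ONCE(a, 1);
                smp_wmb();      /* pairs with smp_rmb() on CPU 2 */
                WRITE_ONCE(b, 2);
        }

        /* CPU 2 */
        int example_reader(void)
        {
                int x = READ_ONCE(b);
                int y;

                smp_rmb();      /* pairs with smp_wmb() on CPU 1 */
                y = READ_ONCE(a);
                return x == 2 ? y : -1; /* x == 2 implies y == 1 */
        }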

EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <data dependency barrier>
                                LOAD *C (reads B)

then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
          Makes sure all effects --->   \   ddddddddddddddddd   |       |
          prior to the store of C        \      +-------+       |       |
          are perceptible to              ----->| B->2  |------>|       |
          subsequent loads                      +-------+       |       |
                                                :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by
CPU 2:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>|       |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
                                        |       +-------+       |       |
                                        |       | A->0  |------>| 1st   |
                                        |       +-------+       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>| 2nd   |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                         \      :       :       |       |
                                          \     +-------+       |       |
                                           ---->| A->1  |------>| 1st   |
                                                +-------+       |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
                                                | A->1  |------>| 2nd   |
                                                +-------+       |       |
                                                :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time when they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This
permits the actual load instruction to potentially complete immediately
because the CPU already has the value to hand.
1267
1268It may turn out that the CPU didn't actually need the value - perhaps because a
1269branch circumvented the load - in which case it can discard the value or just
1270cache it for later use.
1271
1272Consider:
1273
1274        CPU 1                   CPU 2
1275        ======================= =======================
1276                                LOAD B
1277                                DIVIDE          } Divide instructions generally
1278                                DIVIDE          } take a long time to perform
1279                                LOAD A
1280
1281Which might appear as this:
1282
1283                                                :       :       +-------+
1284                                                +-------+       |       |
1285                                            --->| B->2  |------>|       |
1286                                                +-------+       | CPU 2 |
1287                                                :       :DIVIDE |       |
1288                                                +-------+       |       |
1289        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1290        division speculates on the              +-------+   ~   |       |
1291        LOAD of A                               :       :   ~   |       |
1292                                                :       :DIVIDE |       |
1293                                                :       :   ~   |       |
1294        Once the divisions are complete -->     :       :   ~-->|       |
1295        the CPU can then perform the            :       :       |       |
1296        LOAD with immediate effect              :       :       +-------+
1297
1298
1299Placing a read barrier or a data dependency barrier just before the second
1300load:
1301
1302        CPU 1                   CPU 2
1303        ======================= =======================
1304                                LOAD B
1305                                DIVIDE
1306                                DIVIDE
1307                                <read barrier>
1308                                LOAD A
1309
1310will force any value speculatively obtained to be reconsidered to an extent
1311dependent on the type of barrier used.  If there was no change made to the
1312speculated memory location, then the speculated value will just be used:
1313
1314                                                :       :       +-------+
1315                                                +-------+       |       |
1316                                            --->| B->2  |------>|       |
1317                                                +-------+       | CPU 2 |
1318                                                :       :DIVIDE |       |
1319                                                +-------+       |       |
1320        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1321        division speculates on the              +-------+   ~   |       |
1322        LOAD of A                               :       :   ~   |       |
1323                                                :       :DIVIDE |       |
1324                                                :       :   ~   |       |
1325                                                :       :   ~   |       |
1326                                            rrrrrrrrrrrrrrrr~   |       |
1327                                                :       :   ~   |       |
1328                                                :       :   ~-->|       |
1329                                                :       :       |       |
1330                                                :       :       +-------+
1331
1332
1333but if there was an update or an invalidation from another CPU pending, then
1334the speculation will be cancelled and the value reloaded:
1335
1336                                                :       :       +-------+
1337                                                +-------+       |       |
1338                                            --->| B->2  |------>|       |
1339                                                +-------+       | CPU 2 |
1340                                                :       :DIVIDE |       |
1341                                                +-------+       |       |
1342        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1343        division speculates on the              +-------+   ~   |       |
1344        LOAD of A                               :       :   ~   |       |
1345                                                :       :DIVIDE |       |
1346                                                :       :   ~   |       |
1347                                                :       :   ~   |       |
1348                                            rrrrrrrrrrrrrrrrr   |       |
1349                                                +-------+       |       |
1350        The speculation is discarded --->   --->| A->1  |------>|       |
1351        and an updated value is                 +-------+       |       |
1352        retrieved                               :       :       +-------+
1353
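In kernel C, CPU 2's sequence above might be sketched as follows (the
r1 and r2 temporaries are assumed):

        r1 = READ_ONCE(B);
        /* long-running divisions; the CPU may speculate the load of A here */
        smp_rmb();              /* any stale speculation of A is discarded */
        r2 = READ_ONCE(A);
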
1354
1355MULTICOPY ATOMICITY
-------------------
1357
1358Multicopy atomicity is a deeply intuitive notion about ordering that is
1359not always provided by real computer systems, namely that a given store
1360becomes visible at the same time to all CPUs, or, alternatively, that all
1361CPUs agree on the order in which all stores become visible.  However,
1362support of full multicopy atomicity would rule out valuable hardware
1363optimizations, so a weaker form called ``other multicopy atomicity''
1364instead guarantees only that a given store becomes visible at the same
1365time to all -other- CPUs.  The remainder of this document discusses this
1366weaker form, but for brevity will call it simply ``multicopy atomicity''.
1367
1368The following example demonstrates multicopy atomicity:
1369
1370        CPU 1                   CPU 2                   CPU 3
1371        ======================= ======================= =======================
1372                { X = 0, Y = 0 }
1373        STORE X=1               r1=LOAD X (reads 1)     LOAD Y (reads 1)
1374                                <general barrier>       <read barrier>
1375                                STORE Y=r1              LOAD X
1376
1377Suppose that CPU 2's load from X returns 1, which it then stores to Y,
1378and CPU 3's load from Y returns 1.  This indicates that CPU 1's store
1379to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1380CPU 3's load from Y.  In addition, the memory barriers guarantee that
1381CPU 2 executes its load before its store, and CPU 3 loads from Y before
1382it loads from X.  The question is then "Can CPU 3's load from X return 0?"
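
For concreteness, the same test can be sketched in kernel-style C; the
thread functions and the r1/r2/r3 variables here are illustrative only:

        int x, y;
        int r1, r2, r3;

        void cpu1(void)
        {
                WRITE_ONCE(x, 1);
        }

        void cpu2(void)
        {
                r1 = READ_ONCE(x);      /* assume this reads 1 */
                smp_mb();               /* <general barrier> */
                WRITE_ONCE(y, r1);
        }

        void cpu3(void)
        {
                r2 = READ_ONCE(y);      /* assume this reads 1 */
                smp_rmb();              /* <read barrier> */
                r3 = READ_ONCE(x);      /* can this read 0? */
        }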
1383
1384Because CPU 3's load from X in some sense comes after CPU 2's load, it
1385is natural to expect that CPU 3's load from X must therefore return 1.
1386This expectation follows from multicopy atomicity: if a load executing
1387on CPU B follows a load from the same variable executing on CPU A (and
1388CPU A did not originally store the value which it read), then on
1389multicopy-atomic systems, CPU B's load must return either the same value
1390that CPU A's load did or some later value.  However, the Linux kernel
1391does not require systems to be multicopy atomic.
1392
1393The use of a general memory barrier in the example above compensates
1394for any lack of multicopy atomicity.  In the example, if CPU 2's load
1395from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
1396from X must indeed also return 1.
1397
1398However, dependencies, read barriers, and write barriers are not always
1399able to compensate for non-multicopy atomicity.  For example, suppose
1400that CPU 2's general barrier is removed from the above example, leaving
1401only the data dependency shown below:
1402
1403        CPU 1                   CPU 2                   CPU 3
1404        ======================= ======================= =======================
1405                { X = 0, Y = 0 }
1406        STORE X=1               r1=LOAD X (reads 1)     LOAD Y (reads 1)
1407                                <data dependency>       <read barrier>
1408                                STORE Y=r1              LOAD X (reads 0)
1409
This substitution allows non-multicopy atomicity to run rampant: in
this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and CPU 3's load from X to return 0.
1413
1414The key point is that although CPU 2's data dependency orders its load
1415and store, it does not guarantee to order CPU 1's store.  Thus, if this
1416example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
1417store buffer or a level of cache, CPU 2 might have early access to CPU 1's
1418writes.  General barriers are therefore required to ensure that all CPUs
1419agree on the combined order of multiple accesses.
1420
1421General barriers can compensate not only for non-multicopy atomicity,
1422but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations.  In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
1425which means that only those CPUs on the chain are guaranteed to agree
1426on the combined order of the accesses.  For example, switching to C code
1427in deference to the ghost of Herman Hollerith:
1428
1429        int u, v, x, y, z;
1430
1431        void cpu0(void)
1432        {
1433                r0 = smp_load_acquire(&x);
1434                WRITE_ONCE(u, 1);
1435                smp_store_release(&y, 1);
1436        }
1437
1438        void cpu1(void)
1439        {
1440                r1 = smp_load_acquire(&y);
1441                r4 = READ_ONCE(v);
1442                r5 = READ_ONCE(u);
1443                smp_store_release(&z, 1);
1444        }
1445
1446        void cpu2(void)
1447        {
1448                r2 = smp_load_acquire(&z);
1449                smp_store_release(&x, 1);
1450        }
1451
1452        void cpu3(void)
1453        {
1454                WRITE_ONCE(v, 1);
1455                smp_mb();
1456                r3 = READ_ONCE(u);
1457        }
1458
1459Because cpu0(), cpu1(), and cpu2() participate in a chain of
1460smp_store_release()/smp_load_acquire() pairs, the following outcome
1461is prohibited:
1462
1463        r0 == 1 && r1 == 1 && r2 == 1
1464
1465Furthermore, because of the release-acquire relationship between cpu0()
1466and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1467outcome is prohibited:
1468
1469        r1 == 1 && r5 == 0
1470
1471However, the ordering provided by a release-acquire chain is local
1472to the CPUs participating in that chain and does not apply to cpu3(),
1473at least aside from stores.  Therefore, the following outcome is possible:
1474
1475        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1476
1477As an aside, the following outcome is also possible:
1478
1479        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1480
1481Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1482writes in order, CPUs not involved in the release-acquire chain might
1483well disagree on the order.  This disagreement stems from the fact that
1484the weak memory-barrier instructions used to implement smp_load_acquire()
1485and smp_store_release() are not required to order prior stores against
1486subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
1487store to u as happening -after- cpu1()'s load from v, even though
1488both cpu0() and cpu1() agree that these two operations occurred in the
1489intended order.
1490
1491However, please keep in mind that smp_load_acquire() is not magic.
1492In particular, it simply reads from its argument with ordering.  It does
1493-not- ensure that any particular value will be read.  Therefore, the
1494following outcome is possible:
1495
1496        r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1497
1498Note that this outcome can happen even on a mythical sequentially
1499consistent system where nothing is ever reordered.
1500
1501To reiterate, if your code requires full ordering of all operations,
1502use general barriers throughout.
1503
1504
1505========================
1506EXPLICIT KERNEL BARRIERS
1507========================
1508
1509The Linux kernel has a variety of different barriers that act at different
1510levels:
1511
1512  (*) Compiler barrier.
1513
1514  (*) CPU memory barriers.
1515
1516  (*) MMIO write barrier.
1517
1518
1519COMPILER BARRIER
1520----------------
1521
1522The Linux kernel has an explicit compiler barrier function that prevents the
1523compiler from moving the memory accesses either side of it to the other side:
1524
1525        barrier();
1526
1527This is a general barrier -- there are no read-read or write-write
1528variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
1529thought of as weak forms of barrier() that affect only the specific
1530accesses flagged by the READ_ONCE() or WRITE_ONCE().
1531
1532The barrier() function has the following effects:
1533
1534 (*) Prevents the compiler from reordering accesses following the
1535     barrier() to precede any accesses preceding the barrier().
1536     One example use for this property is to ease communication between
1537     interrupt-handler code and the code that was interrupted.
1538
1539 (*) Within a loop, forces the compiler to load the variables used
1540     in that loop's conditional on each pass through that loop.
1541
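For example, a minimal sketch of the second property, with an assumed
shared variable 'flag':

        while (!flag)
                barrier();      /* forces 'flag' to be reloaded each pass */

(In real code, READ_ONCE(flag) would be the more selective tool, as
described below.)
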
1542The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1543optimizations that, while perfectly safe in single-threaded code, can
1544be fatal in concurrent code.  Here are some examples of these sorts
1545of optimizations:
1546
1547 (*) The compiler is within its rights to reorder loads and stores
1548     to the same variable, and in some cases, the CPU is within its
1549     rights to reorder loads to the same variable.  This means that
1550     the following code:
1551
1552        a[0] = x;
1553        a[1] = x;
1554
1555     Might result in an older value of x stored in a[1] than in a[0].
1556     Prevent both the compiler and the CPU from doing this as follows:
1557
1558        a[0] = READ_ONCE(x);
1559        a[1] = READ_ONCE(x);
1560
1561     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1562     accesses from multiple CPUs to a single variable.
1563
1564 (*) The compiler is within its rights to merge successive loads from
1565     the same variable.  Such merging can cause the compiler to "optimize"
1566     the following code:
1567
1568        while (tmp = a)
1569                do_something_with(tmp);
1570
1571     into the following code, which, although in some sense legitimate
1572     for single-threaded code, is almost certainly not what the developer
1573     intended:
1574
1575        if (tmp = a)
1576                for (;;)
1577                        do_something_with(tmp);
1578
1579     Use READ_ONCE() to prevent the compiler from doing this to you:
1580
1581        while (tmp = READ_ONCE(a))
1582                do_something_with(tmp);
1583
1584 (*) The compiler is within its rights to reload a variable, for example,
1585     in cases where high register pressure prevents the compiler from
1586     keeping all data of interest in registers.  The compiler might
1587     therefore optimize the variable 'tmp' out of our previous example:
1588
1589        while (tmp = a)
1590                do_something_with(tmp);
1591
1592     This could result in the following code, which is perfectly safe in
1593     single-threaded code, but can be fatal in concurrent code:
1594
1595        while (a)
1596                do_something_with(a);
1597
1598     For example, the optimized version of this code could result in
1599     passing a zero to do_something_with() in the case where the variable
1600     a was modified by some other CPU between the "while" statement and
1601     the call to do_something_with().
1602
1603     Again, use READ_ONCE() to prevent the compiler from doing this:
1604
1605        while (tmp = READ_ONCE(a))
1606                do_something_with(tmp);
1607
1608     Note that if the compiler runs short of registers, it might save
1609     tmp onto the stack.  The overhead of this saving and later restoring
1610     is why compilers reload variables.  Doing so is perfectly safe for
1611     single-threaded code, so you need to tell the compiler about cases
1612     where it is not safe.
1613
1614 (*) The compiler is within its rights to omit a load entirely if it knows
1615     what the value will be.  For example, if the compiler can prove that
1616     the value of variable 'a' is always zero, it can optimize this code:
1617
1618        while (tmp = a)
1619                do_something_with(tmp);
1620
1621     Into this:
1622
1623        do { } while (0);
1624
1625     This transformation is a win for single-threaded code because it
1626     gets rid of a load and a branch.  The problem is that the compiler
1627     will carry out its proof assuming that the current CPU is the only
1628     one updating variable 'a'.  If variable 'a' is shared, then the
1629     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
1630     compiler that it doesn't know as much as it thinks it does:
1631
1632        while (tmp = READ_ONCE(a))
1633                do_something_with(tmp);
1634
1635     But please note that the compiler is also closely watching what you
1636     do with the value after the READ_ONCE().  For example, suppose you
1637     do the following and MAX is a preprocessor macro with the value 1:
1638
1639        while ((tmp = READ_ONCE(a)) % MAX)
1640                do_something_with(tmp);
1641
1642     Then the compiler knows that the result of the "%" operator applied
1643     to MAX will always be zero, again allowing the compiler to optimize
1644     the code into near-nonexistence.  (It will still load from the
1645     variable 'a'.)
1646
1647 (*) Similarly, the compiler is within its rights to omit a store entirely
1648     if it knows that the variable already has the value being stored.
1649     Again, the compiler assumes that the current CPU is the only one
1650     storing into the variable, which can cause the compiler to do the
1651     wrong thing for shared variables.  For example, suppose you have
1652     the following:
1653
1654        a = 0;
1655        ... Code that does not store to variable a ...
1656        a = 0;
1657
1658     The compiler sees that the value of variable 'a' is already zero, so
1659     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU had stored to variable 'a' in the
1661     meantime.
1662
1663     Use WRITE_ONCE() to prevent the compiler from making this sort of
1664     wrong guess:
1665
1666        WRITE_ONCE(a, 0);
1667        ... Code that does not store to variable a ...
1668        WRITE_ONCE(a, 0);
1669
1670 (*) The compiler is within its rights to reorder memory accesses unless
1671     you tell it not to.  For example, consider the following interaction
1672     between process-level code and an interrupt handler:
1673
1674        void process_level(void)
1675        {
1676                msg = get_message();
1677                flag = true;
1678        }
1679
1680        void interrupt_handler(void)
1681        {
1682                if (flag)
1683                        process_message(msg);
1684        }
1685
1686     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
1688     win for single-threaded code:
1689
1690        void process_level(void)
1691        {
1692                flag = true;
1693                msg = get_message();
1694        }
1695
     If the interrupt occurs between these two statements, then
1697     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
1698     to prevent this as follows:
1699
1700        void process_level(void)
1701        {
1702                WRITE_ONCE(msg, get_message());
1703                WRITE_ONCE(flag, true);
1704        }
1705
1706        void interrupt_handler(void)
1707        {
1708                if (READ_ONCE(flag))
1709                        process_message(READ_ONCE(msg));
1710        }
1711
1712     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1713     interrupt_handler() are needed if this interrupt handler can itself
1714     be interrupted by something that also accesses 'flag' and 'msg',
1715     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
1716     and WRITE_ONCE() are not needed in interrupt_handler() other than
1717     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
1719     interrupt handler returns with interrupts enabled, you will get a
1720     WARN_ONCE() splat.)
1721
1722     You should assume that the compiler can move READ_ONCE() and
1723     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1724     barrier(), or similar primitives.
1725
1726     This effect could also be achieved using barrier(), but READ_ONCE()
1727     and WRITE_ONCE() are more selective:  With READ_ONCE() and
1728     WRITE_ONCE(), the compiler need only forget the contents of the
1729     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
1731     cached in any machine registers.  Of course, the compiler must also
1732     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1733     though the CPU of course need not do so.
1734
1735 (*) The compiler is within its rights to invent stores to a variable,
1736     as in the following example:
1737
1738        if (a)
1739                b = a;
1740        else
1741                b = 42;
1742
1743     The compiler might save a branch by optimizing this as follows:
1744
1745        b = 42;
1746        if (a)
1747                b = a;
1748
1749     In single-threaded code, this is not only safe, but also saves
1750     a branch.  Unfortunately, in concurrent code, this optimization
1751     could cause some other CPU to see a spurious value of 42 -- even
1752     if variable 'a' was never zero -- when loading variable 'b'.
1753     Use WRITE_ONCE() to prevent this as follows:
1754
1755        if (a)
1756                WRITE_ONCE(b, a);
1757        else
1758                WRITE_ONCE(b, 42);
1759
1760     The compiler can also invent loads.  These are usually less
1761     damaging, but they can result in cache-line bouncing and thus in
1762     poor performance and scalability.  Use READ_ONCE() to prevent
1763     invented loads.
1764
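     For example (a sketch mirroring the invented-store case above;
     'cond', 'tmp' and 'a' are again illustrative):

        if (cond)
                tmp = a;        /* load wanted only when cond holds... */
        else
                tmp = 0;

     might be transformed by the compiler into:

        tmp = a;                /* ...but this invented load always runs */
        if (!cond)
                tmp = 0;

     which can needlessly drag a's cache line to this CPU even when
     'cond' is false.
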
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having
1769     16-bit store instructions with 7-bit immediate fields, the compiler
1770     might be tempted to use two 16-bit store-immediate instructions to
1771     implement the following 32-bit store:
1772
1773        p = 0x00010002;
1774
1775     Please note that GCC really does use this sort of optimization,
1776     which is not surprising given that it would likely take more
1777     than two instructions to build the constant and then store it.
1778     This optimization can therefore be a win in single-threaded code.
1779     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1780     this optimization in a volatile store.  In the absence of such bugs,
1781     use of WRITE_ONCE() prevents store tearing in the following example:
1782
1783        WRITE_ONCE(p, 0x00010002);
1784
1785     Use of packed structures can also result in load and store tearing,
1786     as in this example:
1787
1788        struct __attribute__((__packed__)) foo {
1789                short a;
1790                int b;
1791                short c;
1792        };
1793        struct foo foo1, foo2;
1794        ...
1795
1796        foo2.a = foo1.a;
1797        foo2.b = foo1.b;
1798        foo2.c = foo1.c;
1799
1800     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1801     volatile markings, the compiler would be well within its rights to
1802     implement these three assignment statements as a pair of 32-bit
1803     loads followed by a pair of 32-bit stores.  This would result in
1804     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
1805     and WRITE_ONCE() again prevent tearing in this example:
1806
1807        foo2.a = foo1.a;
1808        WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
1809        foo2.c = foo1.c;
1810
1811All that aside, it is never necessary to use READ_ONCE() and
1812WRITE_ONCE() on a variable that has been marked volatile.  For example,
1813because 'jiffies' is marked volatile, it is never necessary to
1814say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no additional
effect when their argument is already marked volatile.
1817
1818Please note that these compiler barriers have no direct effect on the CPU,
1819which may then reorder things however it wishes.
1820
1821
1822CPU MEMORY BARRIERS
1823-------------------
1824
The Linux kernel has seven basic CPU memory barriers:
1826
1827        TYPE            MANDATORY               SMP CONDITIONAL
1828        =============== ======================= ===========================
1829        GENERAL         mb()                    smp_mb()
1830        WRITE           wmb()                   smp_wmb()
1831        READ            rmb()                   smp_rmb()
1832        DATA DEPENDENCY                         READ_ONCE()
1833
1834
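As a quick illustration of the SMP conditional forms, consider this
minimal producer/consumer sketch (the variables and functions are
assumed), in which an smp_wmb() on one CPU pairs with an smp_rmb() on
another:

        int data;
        int ready;

        void producer(void)             /* runs on CPU 1 */
        {
                data = 42;
                smp_wmb();              /* order the data store before the flag */
                WRITE_ONCE(ready, 1);
        }

        void consumer(void)             /* runs on CPU 2 */
        {
                while (!READ_ONCE(ready))
                        cpu_relax();
                smp_rmb();              /* pairs with the smp_wmb() above */
                do_something_with(data);        /* guaranteed to see 42 */
        }
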
1835All memory barriers except the data dependency barriers imply a compiler
1836barrier.  Data dependencies do not impose any additional compiler ordering.
1837
1838Aside: In the case of data dependencies, the compiler would be expected
1839to issue the loads in the correct order (eg. `a[b]` would have to load
the value of b before loading a[b]); however, there is no guarantee in
the C specification that the compiler will not speculate the value of b
1842(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
1843tmp = a[b]; ).  There is also the problem of a compiler reloading b after
1844having loaded a[b], thus having a newer copy of b than a[b].  A consensus
1845has not yet been reached about these problems, however the READ_ONCE()
1846macro is a good place to start looking.
1847
1848SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1849systems because it is assumed that a CPU will appear to be self-consistent,
1850and will order overlapping accesses correctly with respect to itself.
1851However, see the subsection on "Virtual Machine Guests" below.
1852
1853[!] Note that SMP memory barriers _must_ be used to control the ordering of
1854references to shared memory on SMP systems, though the use of locking instead
1855is sufficient.
1856
1857Mandatory barriers should not be used to control SMP effects, since mandatory
1858barriers impose unnecessary overhead on both SMP and UP systems. They may,
1859however, be used to control MMIO effects on accesses through relaxed memory I/O
1860windows.  These barriers are required even on non-SMP systems as they affect
1861the order in which memory operations appear to a device by prohibiting both the
1862compiler and the CPU from reordering them.
1863
1864
1865There are some more advanced barrier functions:
1866
1867 (*) smp_store_mb(var, value)
1868
1869     This assigns the value to the variable and then inserts a full memory
1870     barrier after it.  It isn't guaranteed to insert anything more than a
1871     compiler barrier in a UP compilation.
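
     A sketch of the generic fallback (an architecture may instead use
     a single instruction such as xchg):

        #define smp_store_mb(var, value)                                \
                do { WRITE_ONCE(var, value); smp_mb(); } while (0)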
1872
1873
1874 (*) smp_mb__before_atomic();
1875 (*) smp_mb__after_atomic();
1876
1877     These are for use with atomic (such as add, subtract, increment and
1878     decrement) functions that don't return a value, especially when used for
1879     reference counting.  These functions do not imply memory barriers.
1880
1881     These are also used for atomic bitop functions that do not return a
1882     value (such as set_bit and clear_bit).
1883
1884     As an example, consider a piece of code that marks an object as being dead
1885     and then decrements the object's reference count:
1886
1887        obj->dead = 1;
1888        smp_mb__before_atomic();
1889        atomic_dec(&obj->ref_count);
1890
1891     This makes sure that the death mark on the object is perceived to be set
1892     *before* the reference counter is decremented.
1893
1894     See Documentation/atomic_{t,bitops}.txt for more information.
1895
1896
1897 (*) dma_wmb();
1898 (*) dma_rmb();
1899
1900     These are for use with consistent memory to guarantee the ordering
1901     of writes or reads of shared memory accessible to both the CPU and a
1902     DMA capable device.
1903
1904     For example, consider a device driver that shares memory with a device
1905     and uses a descriptor status value to indicate if the descriptor belongs
1906     to the device or the CPU, and a doorbell to notify it when new
1907     descriptors are available:
1908
1909        if (desc->status != DEVICE_OWN) {
1910                /* do not read data until we own descriptor */
1911                dma_rmb();
1912
1913                /* read/modify data */
1914                read_data = desc->data;
1915                desc->data = write_data;
1916
1917                /* flush modifications before status update */
1918                dma_wmb();
1919
1920                /* assign ownership */
1921                desc->status = DEVICE_OWN;
1922
1923                /* force memory to sync before notifying device via MMIO */
1924                wmb();
1925
1926                /* notify device of new descriptors */
1927                writel(DESC_NOTIFY, doorbell);
1928        }
1929
     The dma_rmb() allows us to guarantee the device has released ownership
1931     before we read the data from the descriptor, and the dma_wmb() allows
1932     us to guarantee the data is written to the descriptor before the device
1933     can see it now has ownership.  The wmb() is needed to guarantee that the
1934     cache coherent memory writes have completed before attempting a write to
1935     the cache incoherent MMIO region.
1936
1937     See Documentation/DMA-API.txt for more information on consistent memory.
1938
1939
1940MMIO WRITE BARRIER
1941------------------
1942
1943The Linux kernel also has a special barrier for use with memory-mapped I/O
1944writes:
1945
1946        mmiowb();
1947
1948This is a variation on the mandatory write barrier that causes writes to weakly
1949ordered I/O regions to be partially ordered.  Its effects may go beyond the
1950CPU->Hardware interface and actually affect the hardware at some level.
1951
1952See the subsection "Acquires vs I/O accesses" for more information.
1953
1954
1955===============================
1956IMPLICIT KERNEL MEMORY BARRIERS
1957===============================
1958
Some of the other functions in the Linux kernel imply memory barriers, amongst
1960which are locking and scheduling functions.
1961
1962This specification is a _minimum_ guarantee; any particular architecture may
1963provide more substantial guarantees, but these may not be relied upon outside
1964of arch specific code.
1965
1966
1967LOCK ACQUISITION FUNCTIONS
1968--------------------------
1969
1970The Linux kernel has a number of locking constructs:
1971
1972 (*) spin locks
1973 (*) R/W spin locks
1974 (*) mutexes
1975 (*) semaphores
1976 (*) R/W semaphores
1977
1978In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1979for each construct.  These operations all imply certain barriers:
1980
1981 (1) ACQUIRE operation implication:
1982
1983     Memory operations issued after the ACQUIRE will be completed after the
1984     ACQUIRE operation has completed.
1985
1986     Memory operations issued before the ACQUIRE may be completed after
1987     the ACQUIRE operation has completed.
1988
1989 (2) RELEASE operation implication:
1990
1991     Memory operations issued before the RELEASE will be completed before the
1992     RELEASE operation has completed.
1993
1994     Memory operations issued after the RELEASE may be completed before the
1995     RELEASE operation has completed.
1996
1997 (3) ACQUIRE vs ACQUIRE implication:
1998
1999     All ACQUIRE operations issued before another ACQUIRE operation will be
2000     completed before that ACQUIRE operation.
2001
2002 (4) ACQUIRE vs RELEASE implication:
2003
2004     All ACQUIRE operations issued before a RELEASE operation will be
2005     completed before the RELEASE operation.
2006
2007 (5) Failed conditional ACQUIRE implication:
2008
2009     Certain locking variants of the ACQUIRE operation may fail, either due to
2010     being unable to get the lock immediately, or due to receiving an unblocked
2011     signal whilst asleep waiting for the lock to become available.  Failed
2012     locks do not imply any sort of barrier.
2013
2014[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
2015one-way barriers is that the effects of instructions outside of a critical
2016section may seep into the inside of the critical section.
2017
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
2019because it is possible for an access preceding the ACQUIRE to happen after the
2020ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
2021the two accesses can themselves then cross:
2022
2023        *A = a;
2024        ACQUIRE M
2025        RELEASE M
2026        *B = b;
2027
2028may occur as:
2029
2030        ACQUIRE M, STORE *B, STORE *A, RELEASE M
2031
2032When the ACQUIRE and RELEASE are a lock acquisition and release,
2033respectively, this same reordering can occur if the lock's ACQUIRE and
2034RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
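
Rendered with the spinlock API, that sequence corresponds to something
like the following sketch (the spinlock 'M' and the locations 'A' and
'B' are assumed), where the two stores may cross inside the empty
critical section:

        *A = a;                 /* may move down past the spin_lock() */
        spin_lock(&M);
        spin_unlock(&M);
        *B = b;                 /* may move up before the spin_unlock() */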
2037
2038Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
2039not imply a full memory barrier.  Therefore, the CPU's execution of the
2040critical sections corresponding to the RELEASE and the ACQUIRE can cross,
2041so that:
2042
2043        *A = a;
2044        RELEASE M
2045        ACQUIRE N
2046        *B = b;
2047
2048could occur as:
2049
2050        ACQUIRE N, STORE *B, STORE *A, RELEASE M
2051
2052It might appear that this reordering could introduce a deadlock.
2053However, this cannot happen because if such a deadlock threatened,
2054the RELEASE would simply complete, thereby avoiding the deadlock.
2055
2056        Why does this work?
2057
2058        One key point is that we are only talking about the CPU doing
2059        the reordering, not the compiler.  If the compiler (or, for
2060        that matter, the developer) switched the operations, deadlock
2061        -could- occur.
2062
2063        But suppose the CPU reordered the operations.  In this case,
2064        the unlock precedes the lock in the assembly code.  The CPU
2065        simply elected to try executing the later lock operation first.
2066        If there is a deadlock, this lock operation will simply spin (or
2067        try to sleep, but more on that later).  The CPU will eventually
2068        execute the unlock operation (which preceded the lock operation
2069        in the assembly code), which will unravel the potential deadlock,
2070        allowing the lock operation to succeed.
2071
2072        But what if the lock is a sleeplock?  In that case, the code will
2073        try to enter the scheduler, where it will eventually encounter
2074        a memory barrier, which will force the earlier unlock operation
2075        to complete, again unraveling the deadlock.  There might be
2076        a sleep-unlock race, but the locking primitive needs to resolve
2077        such races properly in any case.
2078
2079Locks and semaphores may not provide any guarantee of ordering on UP compiled
2080systems, and so cannot be counted on in such a situation to actually achieve
2081anything at all - especially with respect to I/O accesses - unless combined
2082with interrupt disabling operations.
2083
2084See also the section on "Inter-CPU acquiring barrier effects".
2085
2086
2087As an example, consider the following:
2088
2089        *A = a;
2090        *B = b;
2091        ACQUIRE
2092        *C = c;
2093        *D = d;
2094        RELEASE
2095        *E = e;
2096        *F = f;
2097
2098The following sequence of events is acceptable:
2099
2100        ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2101
2102        [+] Note that {*F,*A} indicates a combined access.
2103
2104But none of the following are:
2105
2106        {*F,*A}, *B,    ACQUIRE, *C, *D,        RELEASE, *E
2107        *A, *B, *C,     ACQUIRE, *D,            RELEASE, *E, *F
2108        *A, *B,         ACQUIRE, *C,            RELEASE, *D, *E, *F
2109        *B,             ACQUIRE, *C, *D,        RELEASE, {*F,*A}, *E
2110
2111
2112
2113INTERRUPT DISABLING FUNCTIONS
2114-----------------------------
2115
2116Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2117(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.
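
For example, in the following sketch (all names assumed), disabling
interrupts does nothing to order the two stores as seen by another CPU;
an explicit SMP barrier is still needed:

        unsigned long flags;

        local_irq_save(flags);          /* implies barrier(), nothing more */
        shared_data = compute_data();
        smp_wmb();                      /* still needed to order the stores */
        WRITE_ONCE(data_ready, 1);
        local_irq_restore(flags);       /* implies barrier(), nothing more */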
2120
2121
2122SLEEP AND WAKE-UP FUNCTIONS
2123---------------------------
2124
2125Sleeping and waking on an event flagged in global data can be viewed as an
2126interaction between two pieces of data: the task state of the task waiting for
2127the event and the global data used to indicate the event.  To make sure that
2128these appear to happen in the right order, the primitives to begin the process
2129of going to sleep, and the primitives to initiate a wake up imply certain
2130barriers.
2131
2132Firstly, the sleeper normally follows something like this sequence of events:
2133
2134        for (;;) {
2135                set_current_state(TASK_UNINTERRUPTIBLE);
2136                if (event_indicated)
2137                        break;
2138                schedule();
2139        }
2140
2141A general memory barrier is interpolated automatically by set_current_state()
2142after it has altered the task state:
2143
2144        CPU 1
2145        ===============================
2146        set_current_state();
2147          smp_store_mb();
2148            STORE current->state
2149            <general barrier>
2150        LOAD event_indicated
2151
2152set_current_state() may be wrapped by:
2153
2154        prepare_to_wait();
2155        prepare_to_wait_exclusive();
2156
2157which therefore also imply a general memory barrier after setting the state.
2158The whole sequence above is available in various canned forms, all of which
2159interpolate the memory barrier in the right place:
2160
2161        wait_event();
2162        wait_event_interruptible();
2163        wait_event_interruptible_exclusive();
2164        wait_event_interruptible_timeout();
2165        wait_event_killable();
2166        wait_event_timeout();
2167        wait_on_bit();
2168        wait_on_bit_lock();
2169
2170
2171Secondly, code that performs a wake up normally follows something like this:
2172
2173        event_indicated = 1;
2174        wake_up(&event_wait_queue);
2175
2176or:
2177
2178        event_indicated = 1;
2179        wake_up_process(event_daemon);
2180
2181A write memory barrier is implied by wake_up() and co.  if and only if they
2182wake something up.  The barrier occurs before the task state is cleared, and so
2183sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
2184
2185        CPU 1                           CPU 2
2186        =============================== ===============================
2187        set_current_state();            STORE event_indicated
2188          smp_store_mb();               wake_up();
2189            STORE current->state          <write barrier>
2190            <general barrier>             STORE current->state
2191        LOAD event_indicated
2192
2193To repeat, this write memory barrier is present if and only if something
2194is actually awakened.  To see this, consider the following sequence of
2195events, where X and Y are both initially zero:
2196
2197        CPU 1                           CPU 2
2198        =============================== ===============================
2199        X = 1;                          STORE event_indicated
2200        smp_mb();                       wake_up();
2201        Y = 1;                          wait_event(wq, Y == 1);
2202        wake_up();                        load from Y sees 1, no memory barrier
2203                                        load from X might see 0
2204
2205In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
2206to see 1.
2207
2208The available waker functions include:
2209
2210        complete();
2211        wake_up();
2212        wake_up_all();
2213        wake_up_bit();
2214        wake_up_interruptible();
2215        wake_up_interruptible_all();
2216        wake_up_interruptible_nr();
2217        wake_up_interruptible_poll();
2218        wake_up_interruptible_sync();
2219        wake_up_interruptible_sync_poll();
2220        wake_up_locked();
2221        wake_up_locked_poll();
2222        wake_up_nr();
2223        wake_up_poll();
2224        wake_up_process();
2225
2226
2227[!] Note that the memory barriers implied by the sleeper and the waker do _not_
2228order multiple stores before the wake-up with respect to loads of those stored
2229values after the sleeper has called set_current_state().  For instance, if the
2230sleeper does:
2231
2232        set_current_state(TASK_INTERRUPTIBLE);
2233        if (event_indicated)
2234                break;
2235        __set_current_state(TASK_RUNNING);
2236        do_something(my_data);
2237
2238and the waker does:
2239
2240        my_data = value;
2241        event_indicated = 1;
2242        wake_up(&event_wait_queue);
2243
2244there's no guarantee that the change to event_indicated will be perceived by
2245the sleeper as coming after the change to my_data.  In such a circumstance, the
2246code on both sides must interpolate its own memory barriers between the
2247separate data accesses.  Thus the above sleeper ought to do:
2248
2249        set_current_state(TASK_INTERRUPTIBLE);
2250        if (event_indicated) {
2251                smp_rmb();
2252                do_something(my_data);
2253        }
2254
2255and the waker should do:
2256
2257        my_data = value;
2258        smp_wmb();
2259        event_indicated = 1;
2260        wake_up(&event_wait_queue);
2261
2262
2263MISCELLANEOUS FUNCTIONS
2264-----------------------
2265
2266Other functions that imply barriers:
2267
2268 (*) schedule() and similar imply full memory barriers.
2269
2270
2271===================================
2272INTER-CPU ACQUIRING BARRIER EFFECTS
2273===================================
2274
2275On SMP systems locking primitives give a more substantial form of barrier: one
2276that does affect memory access ordering on other CPUs, within the context of
2277conflict on any particular lock.
2278
2279
2280ACQUIRES VS MEMORY ACCESSES
2281---------------------------
2282
2283Consider the following: the system has a pair of spinlocks (M) and (Q), and
2284three CPUs; then should the following sequence of events occur:
2285
2286        CPU 1                           CPU 2
2287        =============================== ===============================
2288        WRITE_ONCE(*A, a);              WRITE_ONCE(*E, e);
2289        ACQUIRE M                       ACQUIRE Q
2290        WRITE_ONCE(*B, b);              WRITE_ONCE(*F, f);
2291        WRITE_ONCE(*C, c);              WRITE_ONCE(*G, g);
2292        RELEASE M                       RELEASE Q
2293        WRITE_ONCE(*D, d);              WRITE_ONCE(*H, h);
2294
2295Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2296through *H occur in, other than the constraints imposed by the separate locks
2297on the separate CPUs.  It might, for example, see:
2298
2299        *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2300
2301But it won't see any of:
2302
2303        *B, *C or *D preceding ACQUIRE M
2304        *A, *B or *C following RELEASE M
2305        *F, *G or *H preceding ACQUIRE Q
2306        *E, *F or *G following RELEASE Q
2307
2308
2309
2310ACQUIRES VS I/O ACCESSES
2311------------------------
2312
2313Under certain circumstances (especially involving NUMA), I/O accesses within
2314two spinlocked sections on two different CPUs may be seen as interleaved by the
2315PCI bridge, because the PCI bridge does not necessarily participate in the
2316cache-coherence protocol, and is therefore incapable of issuing the required
2317read memory barriers.
2318
2319For example:
2320
2321        CPU 1                           CPU 2
2322        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
        writel(1, DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        spin_unlock(Q);
2331
2332may be seen by the PCI bridge as follows:
2333
2334        STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2335
2336which would probably cause the hardware to malfunction.
2337
2338
2339What is necessary here is to intervene with an mmiowb() before dropping the
2340spinlock, for example:
2341
2342        CPU 1                           CPU 2
2343        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
        writel(1, DATA);
        mmiowb();
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        mmiowb();
                                        spin_unlock(Q);
2354
2355this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2356before either of the stores issued on CPU 2.
2357
2358
2359Furthermore, following a store by a load from the same device obviates the need
2360for the mmiowb(), because the load forces the store to complete before the load
2361is performed:
2362
2363        CPU 1                           CPU 2
2364        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
        a = readl(DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        b = readl(DATA);
                                        spin_unlock(Q);
2373
2374
2375See Documentation/driver-api/device-io.rst for more information.
2376
2377
2378=================================
2379WHERE ARE MEMORY BARRIERS NEEDED?
2380=================================
2381
2382Under normal operation, memory operation reordering is generally not going to
2383be a problem as a single-threaded linear piece of code will still appear to
2384work correctly, even if it's in an SMP kernel.  There are, however, four
2385circumstances in which reordering definitely _could_ be a problem:
2386
2387 (*) Interprocessor interaction.
2388
2389 (*) Atomic operations.
2390
2391 (*) Accessing devices.
2392
2393 (*) Interrupts.
2394
2395
2396INTERPROCESSOR INTERACTION
2397--------------------------
2398
2399When there's a system with more than one processor, more than one CPU in the
2400system may be working on the same data set at the same time.  This can cause
2401synchronisation problems, and the usual way of dealing with them is to use
2402locks.  Locks, however, are quite expensive, and so it may be preferable to
2403operate without the use of a lock if at all possible.  In such a case
2404operations that affect both CPUs may have to be carefully ordered to prevent
2405a malfunction.
2406
2407Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2408queued on the semaphore, by virtue of it having a piece of its stack linked to
2409the semaphore's list of waiting processes:
2410
2411        struct rw_semaphore {
2412                ...
2413                spinlock_t lock;
2414                struct list_head waiters;
2415        };
2416
2417        struct rwsem_waiter {
2418                struct list_head list;
2419                struct task_struct *task;
2420        };
2421
2422To wake up a particular waiter, the up_read() or up_write() functions have to:
2423
 (1) read the next pointer from this waiter's record to know where the
2425     next waiter record is;
2426
2427 (2) read the pointer to the waiter's task structure;
2428
2429 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2430
2431 (4) call wake_up_process() on the task; and
2432
2433 (5) release the reference held on the waiter's task struct.
2434
2435In other words, it has to perform this sequence of events:
2436
2437        LOAD waiter->list.next;
2438        LOAD waiter->task;
2439        STORE waiter->task;
2440        CALL wakeup
2441        RELEASE task
2442
2443and if any of these steps occur out of order, then the whole thing may
2444malfunction.
2445
2446Once it has queued itself and dropped the semaphore lock, the waiter does not
2447get the lock again; it instead just waits for its task pointer to be cleared
2448before proceeding.  Since the record is on the waiter's stack, this means that
2449if the task pointer is cleared _before_ the next pointer in the list is read,
2450another CPU might start processing the waiter and might clobber the waiter's
2451stack before the up*() function has a chance to read the next pointer.
2452
2453Consider then what might happen to the above sequence of events:
2454
2455        CPU 1                           CPU 2
2456        =============================== ===============================
2457                                        down_xxx()
2458                                        Queue waiter
2459                                        Sleep
2460        up_yyy()
2461        LOAD waiter->task;
2462        STORE waiter->task;
2463                                        Woken up by other event
2464        <preempt>
2465                                        Resume processing
2466                                        down_xxx() returns
2467                                        call foo()
2468                                        foo() clobbers *waiter
2469        </preempt>
2470        LOAD waiter->list.next;
2471        --- OOPS ---
2472
2473This could be dealt with using the semaphore lock, but then the down_xxx()
2474function has to needlessly get the spinlock again after being woken up.
2475
2476The way to deal with this is to insert a general SMP memory barrier:
2477
2478        LOAD waiter->list.next;
2479        LOAD waiter->task;
2480        smp_mb();
2481        STORE waiter->task;
2482        CALL wakeup
2483        RELEASE task
2484
2485In this case, the barrier makes a guarantee that all memory accesses before the
2486barrier will appear to happen before all the memory accesses after the barrier
2487with respect to the other CPUs on the system.  It does _not_ guarantee that all
2488the memory accesses before the barrier will be complete by the time the barrier
2489instruction itself is complete.
2490
2491On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2492compiler barrier, thus making sure the compiler emits the instructions in the
2493right order without actually intervening in the CPU.  Since there's only one
2494CPU, that CPU's dependency ordering logic will take care of everything else.
2495
2496
2497ATOMIC OPERATIONS
2498-----------------
2499
2500Whilst they are technically interprocessor interaction considerations, atomic
2501operations are noted specially as some of them imply full memory barriers and
2502some don't, but they're very heavily relied on as a group throughout the
2503kernel.
2504
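For example (a sketch; 'obj' and free_object() are assumed), an atomic
operation that returns a value implies a full barrier, while one that
does not must be supplemented explicitly:

        if (atomic_dec_and_test(&obj->ref_count))       /* implies full barrier */
                free_object(obj);

versus:

        smp_mb__before_atomic();        /* atomic_dec() implies no barrier */
        atomic_dec(&obj->ref_count);
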
2505See Documentation/atomic_t.txt for more information.
2506
2507
2508ACCESSING DEVICES
2509-----------------
2510
2511Many devices can be memory mapped, and so appear to the CPU as if they're just
2512a set of memory locations.  To control such a device, the driver usually has to
2513make the right memory accesses in exactly the right order.
2514
2515However, having a clever CPU or a clever compiler creates a potential problem
2516in that the carefully sequenced accesses in the driver code won't reach the
2517device in the requisite order if the CPU or the compiler thinks it is more
2518efficient to reorder, combine or merge accesses - something that would cause
2519the device to malfunction.
2520
2521Inside of the Linux kernel, I/O should be done through the appropriate accessor
2522routines - such as inb() or writel() - which know how to make such accesses
2523appropriately sequential.  Whilst this, for the most part, renders the explicit
2524use of memory barriers unnecessary, there are a couple of situations where they
2525might be needed:
2526
2527 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2528     so for _all_ general drivers locks should be used and mmiowb() must be
2529     issued prior to unlocking the critical section.
2530
2531 (2) If the accessor functions are used to refer to an I/O memory window with
2532     relaxed memory access properties, then _mandatory_ memory barriers are
2533     required to enforce ordering.
2534
2535See Documentation/driver-api/device-io.rst for more information.
2536
2537
2538INTERRUPTS
2539----------
2540
2541A driver may be interrupted by its own interrupt service routine, and thus the
2542two parts of the driver may interfere with each other's attempts to control or
2543access the device.
2544
2545This may be alleviated - at least in part - by disabling local interrupts (a
2546form of locking), such that the critical operations are all contained within
2547the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2548routine is executing, the driver's core may not run on the same CPU, and its
2549interrupt is not permitted to happen again until the current interrupt has been
2550handled, thus the interrupt handler does not need to lock against that.
2551
2552However, consider a driver that was talking to an ethernet card that sports an
2553address register and a data register.  If that driver's core talks to the card
2554under interrupt-disablement and then the driver's interrupt handler is invoked:
2555
        LOCAL IRQ DISABLE
        writew(3, ADDR);
        writew(y, DATA);
        LOCAL IRQ ENABLE
        <interrupt>
        writew(4, ADDR);
        q = readw(DATA);
        </interrupt>
2564
2565The store to the data register might happen after the second store to the
2566address register if ordering rules are sufficiently relaxed:
2567
2568        STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2569
2570
If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt-disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.
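
Such an interrupt-disabling lock might be used as in the following sketch; the
card structure and the CARD_ADDR/CARD_DATA register offsets are hypothetical:

        struct card {
                spinlock_t      lock;
                void __iomem    *regs;
        };

        static u16 card_read_reg(struct card *c, u16 index)
        {
                unsigned long flags;
                u16 val;

                spin_lock_irqsave(&c->lock, flags);
                writew(index, c->regs + CARD_ADDR);     /* hypothetical */
                val = readw(c->regs + CARD_DATA);       /* hypothetical */
                spin_unlock_irqrestore(&c->lock, flags);
                return val;
        }

The interrupt handler takes the same lock with spin_lock(), so the indexed
ADDR/DATA access pairs cannot interleave, whether the interference comes from
the local interrupt handler or from another CPU.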


==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
         cause a malfunction - consider the 16550 Rx/Tx serial registers for
         example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.
 (*) readX_relaxed(), writeX_relaxed():

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering with
     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
     ordering with respect to LOCK or UNLOCK operations.  If the latter is
     required, an mmiowb() barrier can be used (see the sketch after this
     list).  Note that relaxed accesses to the same peripheral are guaranteed
     to be ordered with respect to each other.

 (*) ioreadX(), iowriteX():

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().

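The rules for the relaxed accessors can be illustrated with a minimal sketch;
the dev_ring structure, the DOORBELL offset and the descriptor layout are
hypothetical:

        static void ring_doorbell(struct dev_ring *ring, u32 len)
        {
                spin_lock(&ring->lock);
                ring->desc->len = len;  /* normal (DMA-visible) memory */
                wmb();          /* order the descriptor before the doorbell */
                writel_relaxed(1, ring->base + DOORBELL);
                mmiowb();       /* order the doorbell before the unlock */
                spin_unlock(&ring->lock);
        }

With a plain writel() the wmb() would be unnecessary, since the non-relaxed
accessors are already ordered with respect to normal memory accesses.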

========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

            <--- CPU --->         :       <----------- Memory ----------->
                                  :
        +--------+    +--------+  :   +--------+    +-----------+
        |        |    |        |  :   |        |    |           |    +--------+
        |  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |--->| Memory |
        |        |    |        |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    |        |
                                  :                 | Cache     |    +--------+
                                  :                 | Coherency |
                                  :                 | Mechanism |    +--------+
        +--------+    +--------+  :   +--------+    |           |    |        |
        |        |    |        |  :   |        |    |           |    |        |
        |  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |    |        |
        |        |    |        |  :   |        |    |           |    +--------+
        +--------+    +--------+  :   +--------+    +-----------+
                                  :
                                  :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.


Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

                    :
                    :                          +--------+
                    :      +---------+         |        |
        +--------+  : +--->| Cache A |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 1 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache B |<------->|        |
                    :      +---------+         |        |
                    :                          | Memory |
                    :      +---------+         | System |
        +--------+  : +--->| Cache C |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 2 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache D |<------->|        |
                    :      +---------+         |        |
                    :                          +--------+
                    :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that cache
     to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();                      Make sure change to v is visible before
                                         change to p
        <A:modify v=2>                  v is now in cache A exclusively
        p = &v;
        <B:modify p=&v>                 p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
        ...
                        q = p;
                        x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        x = *q;
                        <C:read *q>     Reads from v before v updated in cache
                        <C:unbusy>
                        <C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads (which as of v4.15 is supplied unconditionally
by the READ_ONCE() macro).  This will force the cache to commit its
coherency queue before processing any further requests:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        smp_read_barrier_depends()
                        <C:unbusy>
                        <C:commit v=2>
                        x = *q;
                        <C:read *q>     Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for hardware coordination in the absence of memory barriers, which
permitted Alpha to sport higher CPU clock rates back in the day.  However,
please note that (again, as of v4.15) smp_read_barrier_depends() should not
be used except in Alpha arch-specific code and within the READ_ONCE() macro.
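
Written as kernel code, the intervention above is simply the combination of
smp_wmb() on the writer side and READ_ONCE() on the reader side; v and p here
stand in for the shared variables in the tables:

        int v;
        int *p;         /* initially NULL */

        void writer(void)
        {
                v = 2;
                smp_wmb();      /* order the store to v before the store to p */
                WRITE_ONCE(p, &v);
        }

        void reader(void)
        {
                int *q, x;

                q = READ_ONCE(p);       /* supplies the dependency barrier */
                if (q)
                        x = *q;         /* sees v == 2, even on Alpha */
        }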


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.
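
In practice the flushing and invalidation is normally done through the DMA
mapping API rather than by hand.  A minimal, hypothetical sketch, assuming
dev, buf and size have been set up elsewhere:

        dma_addr_t handle;

        /* CPU -> device: flushes any dirty cachelines covering buf */
        handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, handle))
                return -ENOMEM;

        /* ... point the device at 'handle' and start the transfer ... */

        dma_unmap_single(dev, handle, size, DMA_TO_DEVICE);

A DMA_FROM_DEVICE mapping performs the invalidation step instead, so that the
CPU does not read stale cachelines in preference to the device's data.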


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

        a = READ_ONCE(*A);
        WRITE_ONCE(*B, b);
        c = READ_ONCE(*C);
        d = READ_ONCE(*D);
        WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

        LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

        LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

        (Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

        U = READ_ONCE(*A);
        WRITE_ONCE(*A, V);
        WRITE_ONCE(*A, W);
        X = READ_ONCE(*A);
        WRITE_ONCE(*A, Y);
        Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

        U == the original value of *A
        X == W
        Z == Y
        *A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

        U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this, for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

        *A = V;
        *A = W;

may be reduced to:

        *A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

        *A = Y;
        Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

        *A = Y;
        Z = Y;

and the LOAD operation never appears outside of the CPU.
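
A classic manifestation of this is a busy-wait loop; 'flag' here is a
hypothetical variable shared with an interrupt handler or another CPU:

        /* BROKEN: the compiler may hoist the load and spin forever */
        while (!flag)
                cpu_relax();

        /* Forces a fresh load of flag on each iteration */
        while (!READ_ONCE(flag))
                cpu_relax();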


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15
the Linux kernel's addition of smp_read_barrier_depends() to READ_ONCE()
greatly reduced Alpha's impact on the memory model.

See the subsection on "Cache Coherency" above.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine guests
should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to their smp_mb() etc counterparts in all other respects;
in particular, they do not control MMIO effects: to control MMIO effects, use
mandatory barriers.
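
For example, a guest might publish a buffer descriptor to the host through a
shared ring; the ring layout here is hypothetical:

        ring->desc[idx].addr = addr;    /* shared with the host */
        ring->desc[idx].len  = len;
        virt_wmb();     /* make the descriptor visible to the host first */
        ring->avail_idx = idx + 1;      /* the host polls this index */

On an SMP build virt_wmb() matches smp_wmb(); on a UP build it still emits the
same code, rather than degrading to a plain compiler barrier as smp_wmb()
would.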


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

        Documentation/circular-buffers.txt

for details.
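
A condensed sketch of the pattern described there, using a power-of-2 sized
buffer and the CIRC_SPACE()/CIRC_CNT() helpers from <linux/circ_buf.h>; the
buffer structure and the produce_item()/consume_item() helpers are
hypothetical:

        /* producer, running on one CPU */
        unsigned long head = buffer->head;
        unsigned long tail = READ_ONCE(buffer->tail);

        if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
                buffer->item[head] = produce_item();
                smp_store_release(&buffer->head,
                                  (head + 1) & (buffer->size - 1));
        }

        /* consumer, running on another CPU */
        unsigned long head = smp_load_acquire(&buffer->head);
        unsigned long tail = buffer->tail;

        if (CIRC_CNT(head, tail, buffer->size) >= 1) {
                consume_item(buffer->item[tail]);
                smp_store_release(&buffer->tail,
                                  (tail + 1) & (buffer->size - 1));
        }

The release store of each index pairs with the acquire load on the other side,
so an item is guaranteed to be visible before the updated index is.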


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
        Chapter 5.2: Physical Address Space Characteristics
        Chapter 5.4: Caches and Write Buffers
        Chapter 5.5: Data Sharing
        Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
        Chapter 7.1: Memory-Access Ordering
        Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
        Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
        Chapter 7.1: Locked Atomic Operations
        Chapter 7.2: Memory Ordering
        Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
        Chapter 8: Memory Models
        Appendix D: Formal Specification of the Memory Models
        Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
        Chapter 5: Memory Accesses and Cacheability
        Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
        Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
        Chapter 8: Memory Models

UltraSPARC Architecture 2005
        Chapter 9: Memory
        Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
        Chapter 8: Memory Models
        Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
        Chapter 3.3: Hardware Considerations for Locks and
                        Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
        Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
        Section 2.6: Speculation
        Section 4.4: Memory Access
