===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for Inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on simple data structures that are shared between
the remote processors, and access to them is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
if an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).

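For illustration, this is how a DT-based driver might translate a phandle
into a usable lock handle. This is only a sketch; the helper name and the
assumption that index 0 refers to the desired hwlock phandle entry of the
node are illustrative::

        #include <linux/hwspinlock.h>
        #include <linux/of.h>

        /* hypothetical helper: claim the lock referenced by the first
         * hwlock phandle of this driver's device node */
        static struct hwspinlock *example_get_dt_lock(struct device_node *np)
        {
                int id;

                /* translate the phandle at index 0 into a global lock id */
                id = of_hwspin_lock_get_id(np, 0);
                if (id < 0)
                        return NULL; /* real code would propagate -EPROBE_DEFER */

                /* claim that specific lock through the normal request API */
                return hwspin_lock_request_specific(id);
        }
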
::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

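To illustrate how the saved flags pair with hwspin_unlock_irqrestore(), here
is a minimal sketch; the function name and the 10 msecs timeout are
arbitrary::

        #include <linux/hwspinlock.h>

        int example_irqsave_section(struct hwspinlock *hwlock)
        {
                unsigned long flags;
                int ret;

                /* take the lock, saving the current irq state in 'flags' */
                ret = hwspin_lock_timeout_irqsave(hwlock, 10, &flags);
                if (ret)
                        return ret; /* e.g. -ETIMEDOUT */

                /* critical section: preemption and irqs are off, do NOT sleep */

                /* release the lock and restore the saved interrupt state */
                hwspin_unlock_irqrestore(hwlock, &flags);
                return 0;
        }
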
::

  int hwspin_lock_timeout_raw(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

Caution: the user must serialize the taking of the hardware lock with a
mutex or spinlock to avoid deadlock. Since this variant leaves preemption
and interrupts untouched, the user may then perform time-consuming or
sleepable operations under the hardware lock.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

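A minimal sketch of such usage, with a driver-private mutex providing the
local serialization; all names here are illustrative, and it is the mutex
(unlike a spinlock) that permits sleepable operations under the hardware
lock::

        #include <linux/hwspinlock.h>
        #include <linux/mutex.h>

        /* hypothetical local lock serializing callers on this OS */
        static DEFINE_MUTEX(example_sw_lock);

        int example_raw_section(struct hwspinlock *hwlock)
        {
                int ret;

                /* serialize local callers first, to avoid deadlock */
                mutex_lock(&example_sw_lock);

                /* then take the hardware lock; preemption/irqs stay enabled */
                ret = hwspin_lock_timeout_raw(hwlock, 10);
                if (!ret) {
                        /* sleepable operations are permitted here */
                        hwspin_unlock_raw(hwlock);
                }

                mutex_unlock(&example_sw_lock);
                return ret;
        }
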
::

  int hwspin_lock_timeout_in_atomic(struct hwspinlock *hwlock, unsigned int to);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

This function shall be called only from an atomic context and the timeout
value shall not exceed a few msecs.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

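For example, a caller that already runs under a local spinlock might use
this variant; a minimal sketch, with an illustrative local lock and an
arbitrary 1 msec timeout::

        #include <linux/hwspinlock.h>
        #include <linux/spinlock.h>

        /* hypothetical local lock; holding it makes the context atomic */
        static DEFINE_SPINLOCK(example_local_lock);

        void example_atomic_section(struct hwspinlock *hwlock)
        {
                spin_lock(&example_local_lock);

                /* keep the timeout short; we busy-wait in atomic context */
                if (!hwspin_lock_timeout_in_atomic(hwlock, 1)) {
                        /* critical section, do NOT sleep */
                        hwspin_unlock_in_atomic(hwlock);
                }

                spin_unlock(&example_local_lock);
        }
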
::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as soon
as possible, in order to minimize remote cores polling on the hardware
interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_raw(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Caution: the user must serialize the taking of the hardware lock with a
mutex or spinlock to avoid deadlock. Since this variant leaves preemption
and interrupts untouched, the user may then perform time-consuming or
sleepable operations under the hardware lock.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_in_atomic(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

This function shall be called only from an atomic context.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. This function always succeeds, and
can be called from any context (it never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  void hwspin_unlock_raw(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  void hwspin_unlock_in_atomic(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve the id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

        #include <linux/hwspinlock.h>
        #include <linux/err.h>

        int hwspinlock_example1(void)
        {
                struct hwspinlock *hwlock;
                int id;
                int ret;

                /* dynamically assign a hwspinlock */
                hwlock = hwspin_lock_request();
                if (!hwlock)
                        ...

                id = hwspin_lock_get_id(hwlock);
                /* probably need to communicate id to a remote processor now */

                /* take the lock, spin for 1 sec if it's already taken */
                ret = hwspin_lock_timeout(hwlock, 1000);
                if (ret)
                        ...

                /*
                 * we took the lock, do our thing now, but do NOT sleep
                 */

                /* release the lock */
                hwspin_unlock(hwlock);

                /* free the lock */
                ret = hwspin_lock_free(hwlock);
                if (ret)
                        ...

                return ret;
        }

        int hwspinlock_example2(void)
        {
                struct hwspinlock *hwlock;
                int ret;

                /*
                 * assign a specific hwspinlock id - this should be called early
                 * by board init code.
                 */
                hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
                if (!hwlock)
                        ...

                /* try to take it, but don't spin on it */
                ret = hwspin_trylock(hwlock);
                if (ret) {
                        pr_info("lock is already taken\n");
                        return -EBUSY;
                }

                /*
                 * we took the lock, do our thing now, but do NOT sleep
                 */

                /* release the lock */
                hwspin_unlock(hwlock);

                /* free the lock */
                ret = hwspin_lock_free(hwlock);
                if (ret)
                        ...

                return ret;
        }


API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
                const struct hwspinlock_ops *ops, int base_id, int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or appropriate error code on failure.

::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g.
-EBUSY if the hwspinlock is still in use).

Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

        /**
         * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
         * @dev: underlying device, will be used to invoke runtime PM api
         * @ops: platform-specific hwspinlock handlers
         * @base_id: id index of the first lock in this device
         * @num_locks: number of locks in this device
         * @lock: dynamically allocated array of 'struct hwspinlock'
         */
        struct hwspinlock_device {
                struct device *dev;
                const struct hwspinlock_ops *ops;
                int base_id;
                int num_locks;
                struct hwspinlock lock[0];
        };

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

        /**
         * struct hwspinlock - this struct represents a single hwspinlock instance
         * @bank: the hwspinlock_device structure which owns this lock
         * @lock: initialized and used by hwspinlock core
         * @priv: private data, owned by the underlying platform-specific hwspinlock drv
         */
        struct hwspinlock {
                struct hwspinlock_device *bank;
                spinlock_t lock;
                void *priv;
        };

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

        struct hwspinlock_ops {
                int (*trylock)(struct hwspinlock *lock);
                void (*unlock)(struct hwspinlock *lock);
                void (*relax)(struct hwspinlock *lock);
        };

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.

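To tie these pieces together, here is a minimal sketch of a hypothetical
memory-mapped implementation and its registration. The register layout (one
32-bit cell per lock, which reads as 0 when the lock was free and is thereby
taken, reads nonzero when busy, and is written 0 to release) and every name
below are illustrative, not a real driver::

        #include <linux/device.h>
        #include <linux/hwspinlock.h>
        #include <linux/io.h>
        #include <linux/slab.h>

        static int example_trylock(struct hwspinlock *lock)
        {
                void __iomem *addr = lock->priv;

                /* ->trylock() must return 1 on success and 0 on failure */
                return readl(addr) == 0;
        }

        static void example_unlock(struct hwspinlock *lock)
        {
                void __iomem *addr = lock->priv;

                writel(0, addr);
        }

        static const struct hwspinlock_ops example_ops = {
                .trylock = example_trylock,
                .unlock  = example_unlock,
                /* ->relax() is optional and omitted here */
        };

        /* probe-time registration for a bank of num_locks locks */
        static int example_register(struct device *dev, void __iomem *io_base,
                                    int base_id, int num_locks)
        {
                struct hwspinlock_device *bank;
                int i;

                bank = devm_kzalloc(dev, sizeof(*bank) +
                                    num_locks * sizeof(*bank->lock),
                                    GFP_KERNEL);
                if (!bank)
                        return -ENOMEM;

                /* the driver only fills in priv; the core does the rest */
                for (i = 0; i < num_locks; i++)
                        bank->lock[i].priv = io_base + i * sizeof(u32);

                return hwspin_lock_register(bank, dev, &example_ops,
                                            base_id, num_locks);
        }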