===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for Inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).
To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on simple data structures that are shared
between the remote processors, and access to them is synchronized using
the hwspinlock module (the remote processor directly places new messages
in these shared data structures).

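As a purely illustrative sketch (the message layout, the shared mapping and
the timeout value below are hypothetical and not part of the framework), the
Linux side of such a scheme might look roughly like this, using the user API
described below::

        #include <linux/hwspinlock.h>
        #include <linux/string.h>
        #include <linux/types.h>

        /* hypothetical message slot living in memory shared with a remote core */
        struct shared_msg {
                u32 len;
                u8 data[64];
        };

        /* copy one message out of the shared slot under the hardware lock */
        static int read_remote_msg(struct hwspinlock *hwlock,
                                   struct shared_msg *slot,
                                   struct shared_msg *out)
        {
                int ret;

                /* spin for up to 10 msecs if the remote core holds the lock */
                ret = hwspin_lock_timeout(hwlock, 10);
                if (ret)
                        return ret;

                /* critical section: keep it short and do not sleep */
                memcpy(out, slot, sizeof(*out));

                hwspin_unlock(hwlock);
                return 0;
        }
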
A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).

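For example, a DT-aware consumer driver might translate its phandle into a
usable lock roughly as follows (a minimal sketch: np is assumed to be the
consumer's device node and index 0 its first hwspinlock specifier)::

  static int example_request_from_dt(struct device_node *np)
  {
          struct hwspinlock *hwlock;
          int id;

          /* translate this node's first hwspinlock phandle into a global id */
          id = of_hwspin_lock_get_id(np, 0);
          if (id < 0)
                  return id;      /* may be -EPROBE_DEFER */

          /* reserve that specific lock */
          hwlock = hwspin_lock_request_specific(id);
          if (!hwlock)
                  return -EBUSY;

          /* communicate 'id' to the remote core, then use 'hwlock' */
          return 0;
  }
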
::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

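A minimal sketch of how this variant is typically paired with
hwspin_unlock_irq(), described below (the hwlock argument and the 5 msecs
timeout are just placeholders)::

  static int example_irq_protected_section(struct hwspinlock *hwlock)
  {
          int ret;

          /* spin for up to 5 msecs; on success local interrupts are disabled */
          ret = hwspin_lock_timeout_irq(hwlock, 5);
          if (ret)
                  return ret;

          /* short, non-sleeping critical section goes here */

          /* release the lock and re-enable local interrupts */
          hwspin_unlock_irq(hwlock);
          return 0;
  }
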
::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

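A minimal sketch of pairing this call with hwspin_unlock_irqrestore(),
described below (the hwlock argument and the 10 msecs timeout are just
placeholders)::

  static int example_irqsave_protected_section(struct hwspinlock *hwlock)
  {
          unsigned long flags;
          int ret;

          /* spin for up to 10 msecs; the previous IRQ state is saved in 'flags' */
          ret = hwspin_lock_timeout_irqsave(hwlock, 10, &flags);
          if (ret)
                  return ret;

          /* short, non-sleeping critical section goes here */

          /* release the lock and restore the saved interrupt state */
          hwspin_unlock_irqrestore(hwlock, &flags);
          return 0;
  }
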
::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as soon
as possible, in order to minimize remote cores polling on the hardware
interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve the id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

        #include <linux/hwspinlock.h>
        #include <linux/err.h>

        int hwspinlock_example1(void)
        {
                struct hwspinlock *hwlock;
                int id;
                int ret;

                /* dynamically assign a hwspinlock */
                hwlock = hwspin_lock_request();
                if (!hwlock)
                        ...

                id = hwspin_lock_get_id(hwlock);
                /* probably need to communicate id to a remote processor now */

                /* take the lock, spin for 1 sec if it's already taken */
                ret = hwspin_lock_timeout(hwlock, 1000);
                if (ret)
                        ...

                /*
                 * we took the lock, do our thing now, but do NOT sleep
                 */

                /* release the lock */
                hwspin_unlock(hwlock);

                /* free the lock */
                ret = hwspin_lock_free(hwlock);
                if (ret)
                        ...

                return ret;
        }

        int hwspinlock_example2(void)
        {
                struct hwspinlock *hwlock;
                int ret;

                /*
                 * assign a specific hwspinlock id - this should be called early
                 * by board init code.
                 */
                hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
                if (!hwlock)
                        ...

                /* try to take it, but don't spin on it */
                ret = hwspin_trylock(hwlock);
                if (ret) {
                        pr_info("lock is already taken\n");
                        return -EBUSY;
                }

                /*
                 * we took the lock, do our thing now, but do NOT sleep
                 */

                /* release the lock */
                hwspin_unlock(hwlock);

                /* free the lock */
                ret = hwspin_lock_free(hwlock);
                if (ret)
                        ...

                return ret;
        }

API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
                const struct hwspinlock_ops *ops, int base_id, int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or appropriate error code on failure.

::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g.
-EBUSY if one of the hwspinlocks is still in use).

Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

        /**
         * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
         * @dev: underlying device, will be used to invoke runtime PM api
         * @ops: platform-specific hwspinlock handlers
         * @base_id: id index of the first lock in this device
         * @num_locks: number of locks in this device
         * @lock: dynamically allocated array of 'struct hwspinlock'
         */
        struct hwspinlock_device {
                struct device *dev;
                const struct hwspinlock_ops *ops;
                int base_id;
                int num_locks;
                struct hwspinlock lock[0];
        };

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

        /**
         * struct hwspinlock - this struct represents a single hwspinlock instance
         * @bank: the hwspinlock_device structure which owns this lock
         * @lock: initialized and used by hwspinlock core
         * @priv: private data, owned by the underlying platform-specific hwspinlock drv
         */
        struct hwspinlock {
                struct hwspinlock_device *bank;
                spinlock_t lock;
                void *priv;
        };

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

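To make the implementor's side more concrete, here is a deliberately
simplified probe/remove sketch for a hypothetical platform device (the
register layout of one 32-bit word per lock, the lock count, the base_id of
0 and my_hwspinlock_ops are all made up for illustration, not taken from a
real driver)::

        #include <linux/err.h>
        #include <linux/hwspinlock.h>
        #include <linux/io.h>
        #include <linux/platform_device.h>

        #define MY_NUM_LOCKS    32      /* made-up number of locks in the bank */

        /* the ops structure is sketched in the next section */
        extern const struct hwspinlock_ops my_hwspinlock_ops;

        static int my_hwspinlock_probe(struct platform_device *pdev)
        {
                struct hwspinlock_device *bank;
                void __iomem *io_base;
                struct resource *res;
                int i;

                res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
                io_base = devm_ioremap_resource(&pdev->dev, res);
                if (IS_ERR(io_base))
                        return PTR_ERR(io_base);

                bank = devm_kzalloc(&pdev->dev, sizeof(*bank) +
                                    MY_NUM_LOCKS * sizeof(struct hwspinlock),
                                    GFP_KERNEL);
                if (!bank)
                        return -ENOMEM;

                platform_set_drvdata(pdev, bank);

                /* only priv needs to be set; the core initializes the rest */
                for (i = 0; i < MY_NUM_LOCKS; i++)
                        bank->lock[i].priv = io_base + i * sizeof(u32);

                return hwspin_lock_register(bank, &pdev->dev, &my_hwspinlock_ops,
                                            0, MY_NUM_LOCKS);
        }

        static int my_hwspinlock_remove(struct platform_device *pdev)
        {
                struct hwspinlock_device *bank = platform_get_drvdata(pdev);

                /* fails with an error code if any lock is still in use */
                return hwspin_lock_unregister(bank);
        }
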
Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

        struct hwspinlock_ops {
                int (*trylock)(struct hwspinlock *lock);
                void (*unlock)(struct hwspinlock *lock);
                void (*relax)(struct hwspinlock *lock);
        };

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.
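
As an illustration only, the callbacks for a hypothetical controller with one
32-bit register per lock (reading 0 means the lock was just acquired, writing
0 releases it; this register semantic is invented here, and priv is assumed to
hold the register address, as in the registration sketch above) could look
roughly like this::

        #include <linux/delay.h>
        #include <linux/hwspinlock.h>
        #include <linux/io.h>

        static int my_hwspinlock_trylock(struct hwspinlock *lock)
        {
                void __iomem *lock_reg = lock->priv;

                /* single attempt: 0 means the hardware granted us the lock */
                return readl(lock_reg) == 0;
        }

        static void my_hwspinlock_unlock(struct hwspinlock *lock)
        {
                void __iomem *lock_reg = lock->priv;

                /* writing 0 releases the lock in this made-up register layout */
                writel(0, lock_reg);
        }

        static void my_hwspinlock_relax(struct hwspinlock *lock)
        {
                /* back off briefly between ->trylock() attempts */
                ndelay(50);
        }

        const struct hwspinlock_ops my_hwspinlock_ops = {
                .trylock = my_hwspinlock_trylock,
                .unlock  = my_hwspinlock_unlock,
                .relax   = my_hwspinlock_relax,
        };

The 50 nsec back-off in ->relax() is only an example of throttling repeated
->trylock() attempts; a real driver would pick whatever delay suits its
hardware interconnect, or omit the callback entirely.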
 405