Zephyr Project API 4.0.0
A Scalable Open Source RTOS
Spinlock APIs

Spinlock APIs.

Data Structures

struct  k_spinlock
 Kernel Spin Lock.
 

Macros

#define K_SPINLOCK_BREAK   continue
 Leaves a code block guarded with K_SPINLOCK after releasing the lock.
 
#define K_SPINLOCK(lck)
 Guards a code block with the given spinlock, automatically acquiring the lock before executing the code block.
 

Typedefs

typedef struct z_spinlock_key k_spinlock_key_t
 Spinlock key type.
 

Functions

static ALWAYS_INLINE k_spinlock_key_t k_spin_lock (struct k_spinlock *l)
 Lock a spinlock.
 
static ALWAYS_INLINE int k_spin_trylock (struct k_spinlock *l, k_spinlock_key_t *k)
 Attempt to lock a spinlock.
 
static ALWAYS_INLINE void k_spin_unlock (struct k_spinlock *l, k_spinlock_key_t key)
 Unlock a spin lock.
 

Detailed Description

Spinlock APIs.

Macro Definition Documentation

◆ K_SPINLOCK

#define K_SPINLOCK(lck)

#include <zephyr/spinlock.h>

Value:
for (k_spinlock_key_t __i K_SPINLOCK_ONEXIT = {}, __key = k_spin_lock(lck); !__i.key; \
k_spin_unlock((lck), __key), __i.key = 1)

Guards a code block with the given spinlock, automatically acquiring the lock before executing the code block.

The lock will be released either when reaching the end of the code block or when leaving the block with K_SPINLOCK_BREAK.

Example usage:

K_SPINLOCK(&mylock) {

        ...execute statements with the lock held...

        if (some_condition) {
                ...release the lock and leave the guarded section prematurely:
                K_SPINLOCK_BREAK;
        }

        ...execute statements with the lock held...
}

Behind the scenes this pattern expands to a for-loop whose body is executed exactly once:

for (k_spinlock_key_t key = k_spin_lock(&mylock); ...; k_spin_unlock(&mylock, key)) {
...
}
Warning
The code block must execute to its end or be left by calling K_SPINLOCK_BREAK. Otherwise, for example if the block is exited with a break, goto or return statement, the spinlock will not be released on exit.
Note
In user mode the spinlock must be placed in memory accessible to the application, see K_APP_DMEM and K_APP_BMEM macros for details.
Parameters
    lck    Spinlock used to guard the enclosed code block.
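
For a more concrete, purely illustrative sketch (my_lock and shared_count are invented names, not part of this API), a guarded update with an early exit via K_SPINLOCK_BREAK could look like:

#include <zephyr/kernel.h>

static struct k_spinlock my_lock;   /* zero-initialized, i.e. unlocked */
static int shared_count;            /* data guarded by my_lock */

void count_up_to_limit(int limit)
{
        K_SPINLOCK(&my_lock) {
                /* The lock is held throughout this block. */
                if (shared_count >= limit) {
                        /* Releases my_lock and leaves the block early. */
                        K_SPINLOCK_BREAK;
                }
                shared_count++;
        }
        /* my_lock has been released here on either path. */
}

This also explains why K_SPINLOCK_BREAK is defined as continue: in the for-loop expansion shown above, the loop's increment expression performs the unlock, and continue is the only jump that still reaches it.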

◆ K_SPINLOCK_BREAK

#define K_SPINLOCK_BREAK   continue

#include <zephyr/spinlock.h>

Leaves a code block guarded with K_SPINLOCK after releasing the lock.

See K_SPINLOCK for details.

Typedef Documentation

◆ k_spinlock_key_t

typedef struct z_spinlock_key k_spinlock_key_t

#include <zephyr/spinlock.h>

Spinlock key type.

This type defines a "key" value used by a spinlock implementation to store the system interrupt state at the time of a call to k_spin_lock(). It is expected to be passed to a matching k_spin_unlock().

This type is opaque and should not be inspected by application code.
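
In typical use (a minimal sketch; my_lock and critical_update are illustrative names, not part of the API), the key lives in a local variable that bridges one k_spin_lock()/k_spin_unlock() pair:

#include <zephyr/spinlock.h>

static struct k_spinlock my_lock;

void critical_update(void)
{
        /* The key records interrupt state; treat it as opaque. */
        k_spinlock_key_t key = k_spin_lock(&my_lock);

        /* ...access data guarded by my_lock... */

        k_spin_unlock(&my_lock, key);
}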

Function Documentation

◆ k_spin_lock()

static ALWAYS_INLINE k_spinlock_key_t k_spin_lock (struct k_spinlock *l)

#include <zephyr/spinlock.h>

Lock a spinlock.

This routine locks the specified spinlock, returning a key handle representing interrupt state needed at unlock time. Upon returning, the calling thread is guaranteed not to be suspended or interrupted on its current CPU until it calls k_spin_unlock(). The implementation guarantees mutual exclusion: exactly one thread on one CPU will return from k_spin_lock() at a time. Other CPUs trying to acquire a lock already held by another CPU will enter an implementation-defined busy loop ("spinning") until the lock is released.

Separate spin locks may be nested. It is legal to lock an (unlocked) spin lock while holding a different lock. Spin locks are not recursive, however: an attempt to acquire a spin lock that the CPU already holds will deadlock.

In circumstances where only one CPU exists, the behavior of k_spin_lock() remains as specified above, though obviously no spinning will take place. Implementations are free to optimize in uniprocessor contexts such that the locking reduces to an interrupt mask operation.

Parameters
    l    A pointer to the spinlock to lock
Returns
A key value that must be passed to k_spin_unlock() when the lock is released.
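
As an illustration only (sensor_isr, sample_lock and latest_sample are hypothetical names), a thread and an interrupt handler might serialize access to a shared value like this; because k_spin_lock() also masks interrupts on the local CPU, the ISR cannot preempt the thread-side critical section:

#include <zephyr/kernel.h>

static struct k_spinlock sample_lock;
static uint32_t latest_sample;     /* shared between ISR and thread */

/* Hypothetical interrupt handler producing data. */
void sensor_isr(const void *arg)
{
        ARG_UNUSED(arg);

        k_spinlock_key_t key = k_spin_lock(&sample_lock);
        latest_sample++;           /* a real driver would read hardware here */
        k_spin_unlock(&sample_lock, key);
}

/* Thread-side consumer. */
uint32_t read_latest_sample(void)
{
        k_spinlock_key_t key = k_spin_lock(&sample_lock);
        uint32_t val = latest_sample;

        k_spin_unlock(&sample_lock, key);
        return val;
}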

◆ k_spin_trylock()

static ALWAYS_INLINE int k_spin_trylock (struct k_spinlock *l, k_spinlock_key_t *k)

#include <zephyr/spinlock.h>

Attempt to lock a spinlock.

This routine makes one attempt to lock l. If it is successful, the key is stored into k.

Parameters
    [in]   l    A pointer to the spinlock to lock
    [out]  k    A pointer to the spinlock key
Return values
    0        on success
    -EBUSY   if another thread holds the lock
See also
k_spin_lock
k_spin_unlock
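
A hedged sketch of one possible pattern (try_consume, budget_lock and shared_budget are invented for illustration): a caller that would rather skip the work than spin on contention checks the return value and backs off:

#include <errno.h>
#include <stdbool.h>
#include <zephyr/spinlock.h>

static struct k_spinlock budget_lock;
static int shared_budget = 10;

/* Returns true if one unit of budget was consumed; false if the lock
 * was contended or the budget was exhausted.
 */
bool try_consume(void)
{
        k_spinlock_key_t key;

        if (k_spin_trylock(&budget_lock, &key) != 0) {
                return false;   /* lock held elsewhere: don't spin, just skip */
        }

        bool consumed = (shared_budget > 0);

        if (consumed) {
                shared_budget--;
        }

        k_spin_unlock(&budget_lock, key);
        return consumed;
}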

◆ k_spin_unlock()

static ALWAYS_INLINE void k_spin_unlock (struct k_spinlock *l, k_spinlock_key_t key)

#include <zephyr/spinlock.h>

Unlock a spin lock.

This releases a lock acquired by k_spin_lock(). After this function is called, any CPU will be able to acquire the lock. If other CPUs are currently spinning inside k_spin_lock() waiting for this lock, exactly one of them will return synchronously with the lock held.

Spin locks must be properly nested. A call to k_spin_unlock() must be made on the lock object most recently locked using k_spin_lock(), using the key value that it returned. Attempting to unlock mis-nested locks, to unlock locks that are not held, or to pass a key parameter other than the one returned from k_spin_lock() is illegal. When CONFIG_SPIN_VALIDATE is set, some of these errors can be detected by the framework.

Parameters
    l      A pointer to the spinlock to release
    key    The value returned from k_spin_lock() when this lock was acquired
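
A minimal sketch of the required nesting discipline, using two invented locks lock_a and lock_b: distinct locks may be nested, and each is released with its own key, in the reverse order of acquisition:

#include <zephyr/spinlock.h>

static struct k_spinlock lock_a;
static struct k_spinlock lock_b;

void update_both(void)
{
        k_spinlock_key_t key_a = k_spin_lock(&lock_a);
        k_spinlock_key_t key_b = k_spin_lock(&lock_b); /* nesting a distinct lock is legal */

        /* ...update data guarded by both locks... */

        /* Unlock in reverse order, each with its matching key. */
        k_spin_unlock(&lock_b, key_b);
        k_spin_unlock(&lock_a, key_a);
}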