|  |  |  |
|---|---|---|
| author | Mathieu Desnoyers <mathieu.desnoyers@efficios.com> | 2018-01-29 15:20:13 -0500 |
| committer | Ingo Molnar <mingo@kernel.org> | 2018-02-05 15:34:31 -0500 |
| commit | c5f58bd58f432be5d92df33c5458e0bcbee3aadf | |
| tree | 0a7c6d59b6101cd22de8a7da86b75010c84c199f /include/uapi/linux | |
| parent | 306e060435d7a3aef8f6f033e43b0f581638adce | |
membarrier: Provide GLOBAL_EXPEDITED command
Allow expedited membarrier to be used for data shared between processes
through shared memory.
Processes wishing to receive the membarriers register with
MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED; those which want to issue a
membarrier invoke MEMBARRIER_CMD_GLOBAL_EXPEDITED.
This allows an extremely simple kernel-level implementation: almost
everything needed is already there in the PRIVATE_EXPEDITED barrier code.
All that remains is to add a flag in the mm_struct that is checked to
decide whether the IPI needs to be sent to the current thread of each CPU.
There is a slight downside to this approach compared to targeting
specific shared memory users: when performing a membarrier operation,
all registered "global" receivers will get the barrier, even if they
don't share a memory mapping with the sender issuing
MEMBARRIER_CMD_GLOBAL_EXPEDITED.
This registration approach fits the requirement of not disturbing
processes that deeply care about real-time latency: they simply should
not register with MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.
To align the membarrier command names, the "MEMBARRIER_CMD_SHARED"
command is renamed to "MEMBARRIER_CMD_GLOBAL"; MEMBARRIER_CMD_SHARED is
kept as an alias of MEMBARRIER_CMD_GLOBAL for UAPI header backward
compatibility.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrea Parri <parri.andrea@gmail.com>
Cc: Andrew Hunter <ahh@google.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Avi Kivity <avi@scylladb.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Dave Watson <davejwatson@fb.com>
Cc: David Sehr <sehr@google.com>
Cc: Greg Hackmann <ghackmann@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maged Michael <maged.michael@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-api@vger.kernel.org
Link: http://lkml.kernel.org/r/20180129202020.8515-5-mathieu.desnoyers@efficios.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/uapi/linux')
| -rw-r--r-- | include/uapi/linux/membarrier.h | 42 |
1 file changed, 35 insertions(+), 7 deletions(-)
```diff
diff --git a/include/uapi/linux/membarrier.h b/include/uapi/linux/membarrier.h
index 4e01ad7ffe98..d252506e1b5e 100644
--- a/include/uapi/linux/membarrier.h
+++ b/include/uapi/linux/membarrier.h
@@ -31,7 +31,7 @@
  * enum membarrier_cmd - membarrier system call command
  * @MEMBARRIER_CMD_QUERY:   Query the set of supported commands. It returns
  *                          a bitmask of valid commands.
- * @MEMBARRIER_CMD_SHARED:  Execute a memory barrier on all running threads.
+ * @MEMBARRIER_CMD_GLOBAL:  Execute a memory barrier on all running threads.
  *                          Upon return from system call, the caller thread
  *                          is ensured that all running threads have passed
  *                          through a state where all memory accesses to
@@ -40,6 +40,28 @@
  *                          (non-running threads are de facto in such a
  *                          state). This covers threads from all processes
  *                          running on the system. This command returns 0.
+ * @MEMBARRIER_CMD_GLOBAL_EXPEDITED:
+ *                          Execute a memory barrier on all running threads
+ *                          of all processes which previously registered
+ *                          with MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.
+ *                          Upon return from system call, the caller thread
+ *                          is ensured that all running threads have passed
+ *                          through a state where all memory accesses to
+ *                          user-space addresses match program order between
+ *                          entry to and return from the system call
+ *                          (non-running threads are de facto in such a
+ *                          state). This only covers threads from processes
+ *                          which registered with
+ *                          MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.
+ *                          This command returns 0. Given that
+ *                          registration is about the intent to receive
+ *                          the barriers, it is valid to invoke
+ *                          MEMBARRIER_CMD_GLOBAL_EXPEDITED from a
+ *                          non-registered process.
+ * @MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED:
+ *                          Register the process intent to receive
+ *                          MEMBARRIER_CMD_GLOBAL_EXPEDITED memory
+ *                          barriers. Always returns 0.
  * @MEMBARRIER_CMD_PRIVATE_EXPEDITED:
  *                          Execute a memory barrier on each running
  *                          thread belonging to the same process as the current
@@ -64,18 +86,24 @@
  *                          Register the process intent to use
  *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED. Always
  *                          returns 0.
+ * @MEMBARRIER_CMD_SHARED:
+ *                          Alias to MEMBARRIER_CMD_GLOBAL. Provided for
+ *                          header backward compatibility.
  *
  * Command to be passed to the membarrier system call. The commands need to
  * be a single bit each, except for MEMBARRIER_CMD_QUERY which is assigned to
  * the value 0.
  */
 enum membarrier_cmd {
 	MEMBARRIER_CMD_QUERY = 0,
-	MEMBARRIER_CMD_SHARED = (1 << 0),
-	/* reserved for MEMBARRIER_CMD_SHARED_EXPEDITED (1 << 1) */
-	/* reserved for MEMBARRIER_CMD_PRIVATE (1 << 2) */
+	MEMBARRIER_CMD_GLOBAL = (1 << 0),
+	MEMBARRIER_CMD_GLOBAL_EXPEDITED = (1 << 1),
+	MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED = (1 << 2),
 	MEMBARRIER_CMD_PRIVATE_EXPEDITED = (1 << 3),
 	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED = (1 << 4),
+
+	/* Alias for header backward compatibility. */
+	MEMBARRIER_CMD_SHARED = MEMBARRIER_CMD_GLOBAL,
 };
 
 #endif /* _UAPI_LINUX_MEMBARRIER_H */
```
