| author | Nick Desaulniers <ndesaulniers@google.com> | 2018-01-03 15:39:52 -0500 |
|---|---|---|
| committer | Thomas Gleixner <tglx@linutronix.de> | 2018-01-03 17:19:33 -0500 |
| commit | 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb (patch) | |
| tree | 341ab7ce325f5f390affa28d968e36d423642a17 | |
| parent | d7732ba55c4b6a2da339bb12589c515830cfac2c (diff) | |
x86/process: Define cpu_tss_rw in same section as declaration
cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED but then defined
with DEFINE_PER_CPU_SHARED_ALIGNED, leading to section mismatch warnings.

Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because
cpu_tss_rw is mapped into the cpu entry area and must be page aligned.
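For context, each DECLARE_PER_CPU_*/DEFINE_PER_CPU_* pair places the variable
into a specific per-CPU linker section, so the declaration and the definition
have to agree. Below is a minimal sketch of the pairing this patch restores;
the section names in the comments reflect a reading of
include/linux/percpu-defs.h (SMP case) and are illustrative, not part of the
patch itself:

```c
/* Declaration (arch/x86/include/asm/processor.h): places cpu_tss_rw in the
 * page-aligned per-CPU section (.data..percpu..page_aligned under SMP). */
DECLARE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw);

/* Definition (arch/x86/kernel/process.c): must target the same section.
 * The previous DEFINE_PER_CPU_SHARED_ALIGNED instead emitted the variable
 * into the cacheline-aligned per-CPU section, which is what produced the
 * section mismatch warning. */
__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
	.x86_tss = {
		/* ... */
	},
};
```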
[ tglx: Massaged changelog a bit ]
Fixes: 1a935bc3d4ea ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: thomas.lendacky@amd.com
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: tklauser@distanz.ch
Cc: minipli@googlemail.com
Cc: me@kylehuey.com
Cc: namit@vmware.com
Cc: luto@kernel.org
Cc: jpoimboe@redhat.com
Cc: tj@kernel.org
Cc: cl@linux.com
Cc: bp@suse.de
Cc: thgarnie@google.com
Cc: kirill.shutemov@linux.intel.com
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@google.com
| -rw-r--r-- | arch/x86/kernel/process.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 517415978409..3cb2486c47e4 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -47,7 +47,7 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss_rw) = {
+__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
 	.x86_tss = {
 		/*
 		 * .sp0 is only used when entering ring 0 from a lower
