| author | J. Bruce Fields <bfields@redhat.com> | 2020-05-21 10:58:15 -0400 |
|---|---|---|
| committer | J. Bruce Fields <bfields@redhat.com> | 2020-05-21 10:58:15 -0400 |
| commit | 6670ee2ef219ac9e1c836a277dda0c949ad8b1ff (patch) | |
| tree | 0eee3d97941caab0bc85f93a7653b0cfd4abf51b /kernel/trace/trace.c | |
| parent | 746c6237ece641da79df09abcf87bc29f6f9665b (diff) | |
| parent | f2453978a4f2ddb1938fa80e9bf0c9d6252bd5f8 (diff) | |
Merge branch 'nfsd-5.8' of git://linux-nfs.org/~cel/cel-2.6 into for-5.8-incoming
Highlights of this series:
* Remove serialization of sending RPC/RDMA Replies
* Convert the TCP socket send path to use xdr_buf::bvecs (a prerequisite for
RPC-on-TLS); see the bvec sketch after this list
* Fix svcrdma backchannel sendto return code
* Convert a number of dprintk call sites to use tracepoints (see the tracepoint sketch after this list)
* Fix the "suggest braces around empty body in an 'else' statement" warning
Diffstat (limited to 'kernel/trace/trace.c')
| -rw-r--r-- | kernel/trace/trace.c | 16 |

1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8d2b98812625..29615f15a820 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -947,7 +947,8 @@ int __trace_bputs(unsigned long ip, const char *str)
 EXPORT_SYMBOL_GPL(__trace_bputs);
 
 #ifdef CONFIG_TRACER_SNAPSHOT
-void tracing_snapshot_instance_cond(struct trace_array *tr, void *cond_data)
+static void tracing_snapshot_instance_cond(struct trace_array *tr,
+					   void *cond_data)
 {
 	struct tracer *tracer = tr->current_trace;
 	unsigned long flags;
@@ -8525,6 +8526,19 @@ static int allocate_trace_buffers(struct trace_array *tr, int size)
 	 */
 	allocate_snapshot = false;
 #endif
+
+	/*
+	 * Because of some magic with the way alloc_percpu() works on
+	 * x86_64, we need to synchronize the pgd of all the tables,
+	 * otherwise the trace events that happen in x86_64 page fault
+	 * handlers can't cope with accessing the chance that a
+	 * alloc_percpu()'d memory might be touched in the page fault trace
+	 * event. Oh, and we need to audit all other alloc_percpu() and vmalloc()
+	 * calls in tracing, because something might get triggered within a
+	 * page fault trace event!
+	 */
+	vmalloc_sync_mappings();
+
 	return 0;
 }
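For background on the comment in the hunk above: dynamically allocated per-cpu memory sits in a vmalloc-style area on x86_64, so the first touch through a given set of page tables can itself fault; if that touch happens inside a page-fault trace event handler, the handler recurses into the fault path. Below is a minimal, hypothetical sketch of the alloc_percpu() usage pattern the comment is worried about; the names are illustrative and do not come from trace.c.

```c
#include <linux/percpu.h>

/* Hypothetical per-cpu counter, illustrating the allocation pattern only */
struct example_stats {
	unsigned long hits;
};

static struct example_stats __percpu *example_stats;

static int example_init(void)
{
	/* Dynamically allocated per-cpu memory; on x86_64 this lives in a
	 * vmalloc-style mapping, which is why allocate_trace_buffers() calls
	 * vmalloc_sync_mappings() before trace events may touch such memory. */
	example_stats = alloc_percpu(struct example_stats);
	if (!example_stats)
		return -ENOMEM;
	return 0;
}

static void example_hit(void)
{
	/* Preemption-safe increment of this CPU's copy */
	this_cpu_inc(example_stats->hits);
}
```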