| field | value | date |
|---|---|---|
| author | Scott Mayhew <smayhew@redhat.com> | 2015-04-29 10:38:26 -0400 |
| committer | J. Bruce Fields <bfields@redhat.com> | 2015-05-04 12:02:44 -0400 |
| commit | 72faedae8bc3504ee4252cebf14737a23677cb8f (patch) | |
| tree | 54b8cf6952635e1d1f287527fe5cd984b42717a4 /Documentation/filesystems/nfs | |
| parent | fd891454609ec036dc23e34536e45d655b4ca4db (diff) | |
Documentation: remove overloads-avoided counter from knfsd-stats.txt
The 'overloads-avoided' counter itself was removed several years ago by
commit 78c210e (Revert "knfsd: avoid overloading the CPU scheduler with
enormous load averages").
Signed-off-by: Scott Mayhew <smayhew@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Diffstat (limited to 'Documentation/filesystems/nfs')

| mode | file | lines |
|---|---|---|
| -rw-r--r-- | Documentation/filesystems/nfs/knfsd-stats.txt | 44 |

1 file changed, 4 insertions, 40 deletions
diff --git a/Documentation/filesystems/nfs/knfsd-stats.txt b/Documentation/filesystems/nfs/knfsd-stats.txt
index 64ced5149d37..1a5d82180b84 100644
--- a/Documentation/filesystems/nfs/knfsd-stats.txt
+++ b/Documentation/filesystems/nfs/knfsd-stats.txt
@@ -68,16 +68,10 @@ sockets-enqueued
 	rate of change for this counter is zero; significantly non-zero
 	values may indicate a performance limitation.
 
-	This can happen either because there are too few nfsd threads in the
-	thread pool for the NFS workload (the workload is thread-limited),
-	or because the NFS workload needs more CPU time than is available in
-	the thread pool (the workload is CPU-limited). In the former case,
-	configuring more nfsd threads will probably improve the performance
-	of the NFS workload. In the latter case, the sunrpc server layer is
-	already choosing not to wake idle nfsd threads because there are too
-	many nfsd threads which want to run but cannot, so configuring more
-	nfsd threads will make no difference whatsoever. The overloads-avoided
-	statistic (see below) can be used to distinguish these cases.
+	This can happen because there are too few nfsd threads in the thread
+	pool for the NFS workload (the workload is thread-limited), in which
+	case configuring more nfsd threads will probably improve the
+	performance of the NFS workload.
 
 threads-woken
 	Counts how many times an idle nfsd thread is woken to try to
@@ -88,36 +82,6 @@ threads-woken
 	thing. The ideal rate of change for this counter will be close
 	to but less than the rate of change of the packets-arrived counter.
 
-overloads-avoided
-	Counts how many times the sunrpc server layer chose not to wake an
-	nfsd thread, despite the presence of idle nfsd threads, because
-	too many nfsd threads had been recently woken but could not get
-	enough CPU time to actually run.
-
-	This statistic counts a circumstance where the sunrpc layer
-	heuristically avoids overloading the CPU scheduler with too many
-	runnable nfsd threads. The ideal rate of change for this counter
-	is zero. Significant non-zero values indicate that the workload
-	is CPU limited. Usually this is associated with heavy CPU usage
-	on all the CPUs in the nfsd thread pool.
-
-	If a sustained large overloads-avoided rate is detected on a pool,
-	the top(1) utility should be used to check for the following
-	pattern of CPU usage on all the CPUs associated with the given
-	nfsd thread pool.
-
-	 - %us ~= 0 (as you're *NOT* running applications on your NFS server)
-
-	 - %wa ~= 0
-
-	 - %id ~= 0
-
-	 - %sy + %hi + %si ~= 100
-
-	If this pattern is seen, configuring more nfsd threads will *not*
-	improve the performance of the workload. If this patten is not
-	seen, then something more subtle is wrong.
-
 threads-timedout
 	Counts how many times an nfsd thread triggered an idle timeout,
 	i.e. was not woken to handle any incoming network packets for
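The documentation quoted above interprets every counter as a rate of change rather than a raw value. As a point of reference only, here is a minimal sketch of that kind of sampling. It assumes the per-pool counters are exported through /proc/fs/nfsd/pool_stats with a "# pool ..." header line followed by one row of numbers per pool; the path, sampling interval, and parsing details are illustrative assumptions, not part of this patch.

```python
#!/usr/bin/env python3
# Hedged sketch: sample the knfsd per-pool counters twice and print their
# approximate per-second rates of change, which is how knfsd-stats.txt
# suggests the counters be read.  The file path and column layout below are
# assumptions; adjust them to match your kernel.
import time

POOL_STATS = "/proc/fs/nfsd/pool_stats"  # assumed location of the counters
INTERVAL = 10                            # seconds between the two samples

def read_pools():
    # Return (counter names from the header line, {pool id: [counter values]}).
    with open(POOL_STATS) as f:
        lines = f.read().splitlines()
    header = lines[0].lstrip("# ").split()   # e.g. "pool packets-arrived ..."
    pools = {}
    for line in lines[1:]:
        fields = line.split()
        if fields:
            pools[fields[0]] = [int(v) for v in fields[1:]]
    return header[1:], pools

names, before = read_pools()
time.sleep(INTERVAL)
_, after = read_pools()

for pool, new in after.items():
    old = before.get(pool, [0] * len(new))
    rates = [(n - o) / INTERVAL for n, o in zip(new, old)]
    print("pool %s: %s" % (
        pool,
        ", ".join("%s %.1f/s" % (name, rate) for name, rate in zip(names, rates))))
```

Per the updated text above, a sustained non-zero sockets-enqueued rate suggests the thread pool is too small for the workload, in which case configuring more nfsd threads will probably help.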