path: root/include/crypto/aead.h
author    Dmitry Kasatkin <dmitry.kasatkin@intel.com>  2012-02-08 14:15:42 -0500
committer Mimi Zohar <zohar@linux.vnet.ibm.com>  2012-09-07 14:57:46 -0400
commit    a10bf26b2f53242836e9362c6c9c857b627b82a9 (patch)
tree      98c7b83684f1df42571013af4c0572c7eeea8e76 /include/crypto/aead.h
parent    bf2276d10ce58ff44ab8857266a6718024496af6 (diff)
ima: replace iint spinblock with rwlock/read_lock
For performance, replace the iint spinlock with an rwlock/read_lock.

Eric Paris questioned this change from spinlocks to rwlocks, saying
"rwlocks have been shown to actually be slower on multi processor
systems in a number of cases due to the cache line bouncing required."
Based on performance measurements compiling the kernel on a cold boot
with multiple jobs, with and without this patch, Dmitry Kasatkin and I
found that rwlocks performed better than spinlocks, but only very
slightly. For example, with a total compilation time of around 6
minutes, the rwlock build was consistently 1 - 3 seconds faster.

Changelog v2:
- new patch taken from the 'allocating iint improvements' patch

Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com>
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Diffstat (limited to 'include/crypto/aead.h')
0 files changed, 0 insertions, 0 deletions