| author | David Jeffery <djeffery@redhat.com> | 2013-07-10 13:19:50 -0400 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2013-07-28 16:18:48 -0700 |
| commit | a00c4c9f176094d7b71acd410f871b609f5e7c84 | |
| tree | 3f55b6c425c57429f771fc7f230116f22bd5cead /fs | |
| parent | 101e49a9ce6fadd876ae68bddf6f155bbd02a776 | |
lockd: protect nlm_blocked access in nlmsvc_retry_blocked
commit 1c327d962fc420aea046c16215a552710bde8231 upstream.
In nlmsvc_retry_blocked, the check that the list is non-empty and the acquisition
of the pointer to its first entry are unprotected by any lock. This allows a rare
race condition when there is only one entry on the list. A function such as
nlmsvc_grant_callback() can be called, which will temporarily remove the entry
from the list. Between the list_empty() and list_entry() calls, the list may become
empty, causing an invalid pointer to be used as an nlm_block and leading to a
possible crash.
This patch adds the nlm_blocked_lock around these calls to prevent concurrent
use of the nlm_blocked list.
This was a regression introduced by
f904be9cc77f361d37d71468b13ff3d1a1823dea "lockd: Mostly remove BKL from
the server".
Signed-off-by: David Jeffery <djeffery@redhat.com>
Cc: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'fs')
-rw-r--r-- | fs/lockd/svclock.c | 4 |
1 file changed, 4 insertions, 0 deletions
diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
index 6e31695d046..db7be3a462b 100644
--- a/fs/lockd/svclock.c
+++ b/fs/lockd/svclock.c
@@ -913,6 +913,7 @@ nlmsvc_retry_blocked(void)
 	unsigned long	timeout = MAX_SCHEDULE_TIMEOUT;
 	struct nlm_block *block;
 
+	spin_lock(&nlm_blocked_lock);
 	while (!list_empty(&nlm_blocked) && !kthread_should_stop()) {
 		block = list_entry(nlm_blocked.next, struct nlm_block, b_list);
 
@@ -922,6 +923,7 @@ nlmsvc_retry_blocked(void)
 			timeout = block->b_when - jiffies;
 			break;
 		}
+		spin_unlock(&nlm_blocked_lock);
 
 		dprintk("nlmsvc_retry_blocked(%p, when=%ld)\n",
 			block, block->b_when);
@@ -931,7 +933,9 @@ nlmsvc_retry_blocked(void)
 			retry_deferred_block(block);
 		} else
 			nlmsvc_grant_blocked(block);
+		spin_lock(&nlm_blocked_lock);
 	}
+	spin_unlock(&nlm_blocked_lock);
 
 	return timeout;
 }