path: root/fs/direct-io.c
author	Jeff Moyer <jmoyer@redhat.com>	2009-10-02 18:57:36 -0400
committer	Jens Axboe <jens.axboe@oracle.com>	2009-10-28 09:29:25 +0100
commit	cfb1e33eed48165763edc7a4a067cf5f74898d0b (patch)
tree	d0e0bdd0664615b1f7be6cf770476e16dbcad116 /fs/direct-io.c
parent	1af60fbd759d31f565552fea315c2033947cfbe6 (diff)
aio: implement request batching
Hi,

Some workloads issue batches of small I/O, and the performance is poor
due to the call to blk_run_address_space for every single iocb.  Nathan
Roberts pointed this out, and suggested that by deferring this call
until all I/Os in the iocb array are submitted to the block layer, we
can realize some impressive performance gains (up to 30% for sequential
4k reads in batches of 16).

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
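For context, the workload described above looks roughly like the following userspace sketch; it is an illustration under assumptions, not part of this commit (the file name "testfile", buffer handling, and error handling are made up). It prepares an array of 16 sequential 4k O_DIRECT reads and hands them to the kernel with a single io_submit() call through libaio, which is the kind of batch whose submission path this series speeds up.

/*
 * Minimal sketch (illustrative assumption, not part of this commit):
 * issue a batch of 16 sequential 4k O_DIRECT reads with one io_submit().
 * Build with: gcc -o aio_batch aio_batch.c -laio
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BATCH	16
#define BLKSZ	4096

int main(void)
{
	io_context_t ctx = 0;
	struct iocb iocbs[BATCH], *iocbps[BATCH];
	struct io_event events[BATCH];
	void *bufs[BATCH];
	int fd, i, ret;

	/* O_DIRECT so the reads go through the fs/direct-io.c path */
	fd = open("testfile", O_RDONLY | O_DIRECT);	/* file name is illustrative */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (io_setup(BATCH, &ctx) < 0) {
		fprintf(stderr, "io_setup failed\n");
		return 1;
	}

	for (i = 0; i < BATCH; i++) {
		/* O_DIRECT buffers must be suitably aligned */
		if (posix_memalign(&bufs[i], BLKSZ, BLKSZ))
			return 1;
		io_prep_pread(&iocbs[i], fd, bufs[i], BLKSZ, (long long)i * BLKSZ);
		iocbps[i] = &iocbs[i];
	}

	/*
	 * One io_submit() for the whole iocb array.  Before this series the
	 * block queue was kicked once per iocb during submission; with the
	 * batching it is kicked once for the batch.
	 */
	ret = io_submit(ctx, BATCH, iocbps);
	if (ret != BATCH) {
		fprintf(stderr, "io_submit returned %d\n", ret);
		return 1;
	}

	ret = io_getevents(ctx, BATCH, BATCH, events, NULL);
	if (ret != BATCH)
		fprintf(stderr, "io_getevents returned %d\n", ret);

	io_destroy(ctx);
	close(fd);
	return 0;
}

As the hunks below show, direct_io_worker() now only kicks the queue itself when it is about to wait for completion synchronously; for queued async iocbs the kick is deferred until the whole batch has been submitted.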
Diffstat (limited to 'fs/direct-io.c')
-rw-r--r--	fs/direct-io.c	8
1 file changed, 4 insertions, 4 deletions
diff --git a/fs/direct-io.c b/fs/direct-io.c
index c86d35f142d..3af761c8c5c 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -1028,9 +1028,6 @@ direct_io_worker(int rw, struct kiocb *iocb, struct inode *inode,
 	if (dio->bio)
 		dio_bio_submit(dio);
 
-	/* All IO is now issued, send it on its way */
-	blk_run_address_space(inode->i_mapping);
-
 	/*
 	 * It is possible that, we return short IO due to end of file.
 	 * In that case, we need to release all the pages we got hold on.
@@ -1057,8 +1054,11 @@ direct_io_worker(int rw, struct kiocb *iocb, struct inode *inode,
 	    ((rw & READ) || (dio->result == dio->size)))
 		ret = -EIOCBQUEUED;
 
-	if (ret != -EIOCBQUEUED)
+	if (ret != -EIOCBQUEUED) {
+		/* All IO is now issued, send it on its way */
+		blk_run_address_space(inode->i_mapping);
 		dio_await_completion(dio);
+	}
 
 	/*
 	 * Sync will always be dropping the final ref and completing the