Commit 5b0a3500 authored by Anton Altaparmakov's avatar Anton Altaparmakov Committed by Anton Altaparmakov

Merge cam.ac.uk:/rain/usr/src/bkntfs-tng-2.5 into cam.ac.uk:/usr/src/tng

parents c1a6eca6 3d16a3d1
......@@ -8,7 +8,7 @@ ToDo:
functions need to clone block_read_full_page and modify it to cope
with the significance of the different attribute sizes.
Still need to go through:
aops.c, attrib.c, compress.c, dir.c, mft.c
aops.c, attrib.c, compress.c, dir.c
- Find and fix bugs.
- W.r.t. s_maxbytes still need to be careful on reading/truncating as
there are dragons lurking in the details, e.g. read_inode() currently
......@@ -33,12 +33,19 @@ ToDo:
in between. But now we have a semaphore, so we are ok. Update
load_attribute_list to use the semaphore so that it can be called outside
read_inode, too. Apply the same optimization where desired.
- Optimize mft_readpage() to not do i/o on buffer heads beyond
initialized_size and just zero those buffer heads instead. Question: how
do we set up the buffer heads so that they point to the correct on-disk
location (after all, they are allocated) but are never read from disk?
tng-0.0.9 - Work in progress
- Add kill_super, just keeping up with the vfs changes in the kernel.
- Repeat some changes from tng-0.0.8 that somehow got lost on the way
from the CVS import into BitKeeper.
- Begin to implement proper handling of allocated_size vs
initialized_size vs data_size (i.e. i_size). Done are
mft.c::ntfs_mft_readpage() and aops.c::end_buffer_read_index_async().
tng-0.0.8 - 08/03/2002 - Now using BitKeeper, http://linux-ntfs.bkbits.net/
......
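The tng-0.0.9 entry above begins distinguishing allocated_size, initialized_size and data_size (i.e. i_size). As an illustrative aid only, the small program below classifies a byte offset against those sizes under the NTFS ordering allocated_size >= data_size >= initialized_size; the helper, its names and its example numbers are invented here and are not part of the driver. allocated_size only governs reserved on-disk space and does not change what a read returns, so it does not appear in the classification.

/*
 * Illustrative-only: classify a byte offset within an attribute against
 * the sizes the tng-0.0.9 entry talks about.  Assumes the NTFS ordering
 * allocated_size >= data_size (i_size) >= initialized_size; the example
 * numbers are invented and none of this code is part of the driver.
 */
#include <stdio.h>

enum region { REAL_DATA, ZERO_FILL, PAST_EOF };

static enum region classify(long long ofs, long long initialized_size,
		long long data_size)
{
	if (ofs >= data_size)
		return PAST_EOF;	/* beyond i_size: nothing to return */
	if (ofs >= initialized_size)
		return ZERO_FILL;	/* within the file but never written: reads back as zeros */
	return REAL_DATA;		/* backed by initialized on-disk data */
}

int main(void)
{
	long long initialized_size = 13000, data_size = 20000;	/* example values */
	long long probes[] = { 0, 12999, 13000, 19999, 20000 };

	for (int i = 0; i < 5; i++)
		printf("offset %lld -> region %d\n", probes[i],
				(int)classify(probes[i], initialized_size,
				data_size));
	return 0;
}

Offsets in the ZERO_FILL region are exactly the ones the readpage paths below have to zero themselves, since that data was never written to disk.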
......@@ -431,12 +431,46 @@ void end_buffer_read_index_async(struct buffer_head *bh, int uptodate)
unsigned long flags;
struct buffer_head *tmp;
struct page *page;
ntfs_inode *ni;
mark_buffer_uptodate(bh, uptodate);
/* This is a temporary buffer used for page I/O. */
page = bh->b_page;
if (!uptodate)
ni = NTFS_I(page->mapping->host);
	if (likely(uptodate)) {
		/*
		 * The below code is very cpu intensive so we add an extra
		 * check to ensure it is only run when processing buffer heads
		 * in pages reaching beyond the initialized data size.
		 */
		if ((page->index + 1) << PAGE_CACHE_SHIFT >
				ni->initialized_size) {
			s64 file_ofs;
			char *addr;
			int page_ofs;
			addr = kmap_atomic(page, KM_BIO_IRQ);
			BUG_ON(bh->b_data < addr);
			BUG_ON(bh->b_data + bh->b_size > addr +
					PAGE_CACHE_SIZE);
			page_ofs = bh->b_data - addr;
			file_ofs = (page->index << PAGE_CACHE_SHIFT) + page_ofs;
			/* Check for the current buffer head overflowing. */
			if (file_ofs + bh->b_size > ni->initialized_size) {
				int bh_ofs = 0;
				if (file_ofs < ni->initialized_size) {
					bh_ofs = ni->initialized_size -
							file_ofs;
					BUG_ON(bh_ofs < 0);
					BUG_ON(bh_ofs >= bh->b_size);
				}
				memset(bh->b_data + bh_ofs, 0,
						bh->b_size - bh_ofs);
				flush_dcache_page(page);
			}
			kunmap_atomic(addr, KM_BIO_IRQ);
		}
	} else
		SetPageError(page);
/*
* Be _very_ careful from here on. Bad things can happen if
......@@ -470,7 +504,6 @@ void end_buffer_read_index_async(struct buffer_head *bh, int uptodate)
char *addr;
unsigned int i, recs, nr_err = 0;
u32 rec_size;
ntfs_inode *ni = NTFS_I(page->mapping->host);
addr = kmap_atomic(page, KM_BIO_IRQ);
rec_size = ni->_IDM(index_block_size);
......@@ -487,6 +520,7 @@ void end_buffer_read_index_async(struct buffer_head *bh, int uptodate)
PAGE_CACHE_SHIFT >>
ni->_IDM(index_block_size_bits)) + i));
}
flush_dcache_page(page);
kunmap_atomic(addr, KM_BIO_IRQ);
if (!nr_err && recs)
SetPageUptodate(page);
......
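The block added to end_buffer_read_index_async() above only runs for pages reaching beyond initialized_size and then zeroes whatever part of the completed buffer head lies past it. The same arithmetic is easier to step through outside kernel context; the userspace sketch below assumes 4KiB pages, 512-byte buffer heads and an arbitrary initialized_size, none of which are taken from the driver, and simulates the page contents with a plain array.

/*
 * Minimal userspace sketch of the zeroing arithmetic added to
 * end_buffer_read_index_async() above.  Page/block geometry and the
 * initialized_size value are made-up example numbers, not taken from
 * the driver.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_CACHE_SHIFT	12			/* assumed 4KiB pages */
#define PAGE_CACHE_SIZE		(1UL << PAGE_CACHE_SHIFT)

int main(void)
{
	unsigned long page_index = 3;		/* example: fourth page of the attribute */
	unsigned int b_size = 512;		/* example: one buffer head per 512-byte block */
	long long initialized_size = 13000;	/* example: ends inside this page */
	char page_data[PAGE_CACHE_SIZE];

	memset(page_data, 0xaa, sizeof(page_data));	/* pretend the read filled the page */

	/* Walk the buffer heads of the page as the completion handler would. */
	for (unsigned int page_ofs = 0; page_ofs < PAGE_CACHE_SIZE;
			page_ofs += b_size) {
		long long file_ofs = ((long long)page_index <<
				PAGE_CACHE_SHIFT) + page_ofs;

		/* Only buffer heads overflowing initialized_size need work. */
		if (file_ofs + b_size <= initialized_size)
			continue;

		/* Zero from initialized_size (or the start of the bh) onwards. */
		unsigned int bh_ofs = 0;
		if (file_ofs < initialized_size)
			bh_ofs = initialized_size - file_ofs;
		memset(page_data + page_ofs + bh_ofs, 0, b_size - bh_ofs);

		printf("bh at file offset %lld: zeroed %u bytes from bh offset %u\n",
				file_ofs, b_size - bh_ofs, bh_ofs);
	}
	return 0;
}

The early continue plays the role of the driver's per-buffer-head overflow check; the page-level comparison in the handler serves the same purpose one level up, so the cpu intensive path is skipped for pages that lie wholly within the initialized data.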
......@@ -148,7 +148,17 @@ int format_mft_record(ntfs_inode *ni, MFT_RECORD *mft_rec)
*
* Note, we only set up asynchronous I/O on the page and return. I/O completion
* is signalled via our asynchronous I/O completion handler
* end_buffer_read_index_async().
* end_buffer_read_index_async(). We also take care of the nitty-gritty
* details towards the end of the file and zero out any non-initialized regions.
*
* TODO/FIXME: The current implementation is simple but wasteful, as we perform
* actual i/o from disk for all data up to the allocated size, completely
* ignoring the fact that the initialized size, and the data size for that
* matter, may well be lower, so there is no point in reading that data in. We
* could just zero the page range instead, which is what our async i/o
* completion handler currently does anyway once the read from disk completes.
* However, I am not sure how to set up the buffer heads in that case, so for
* now we do the pointless i/o. Any help with this would be appreciated...
*/
static int ntfs_mft_readpage(struct file *file, struct page *page)
{
......@@ -160,7 +170,7 @@ static int ntfs_mft_readpage(struct file *file, struct page *page)
ntfs_volume *vol;
struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
sector_t iblock, lblock;
unsigned int blocksize, blocks,vcn_ofs;
unsigned int blocksize, blocks, vcn_ofs;
int i, nr;
unsigned char blocksize_bits;
......
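The TODO/FIXME in the comment above describes the open question: blocks lying entirely beyond initialized_size are currently read from disk even though the completion handler throws their contents away and zeroes them. Purely to picture the intended optimization, the sketch below walks one page block by block and counts which buffer heads would still need a read; it is a userspace model with invented geometry, not driver code, and the idea that fully uninitialized buffer heads could simply be zeroed and marked up to date instead of being queued for i/o is my assumption, not something the driver does yet.

/*
 * Userspace model of the read-vs-zero decision the TODO/FIXME above asks
 * about.  Geometry and initialized_size are example numbers; in the
 * driver the "read" branch would roughly correspond to putting the
 * buffer head into the arr[] read array, and the "zero" branch to
 * satisfying the buffer head without any disk i/o (an assumption, not
 * what the current code does).
 */
#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed 4KiB pages */
#define BLOCK_SIZE	512	/* assumed one buffer head per 512-byte block */

int main(void)
{
	unsigned long page_index = 3;		/* example page of the attribute */
	long long initialized_size = 13000;	/* example value */
	int reads = 0, zeroed = 0;

	for (unsigned long page_ofs = 0; page_ofs < (1UL << PAGE_SHIFT);
			page_ofs += BLOCK_SIZE) {
		long long file_ofs = ((long long)page_index << PAGE_SHIFT) +
				page_ofs;

		if (file_ofs < initialized_size)
			reads++;	/* contains real data: must be read from disk */
		else
			zeroed++;	/* entirely uninitialized: could be zeroed in place */
	}
	printf("%d blocks need disk i/o, %d could be zeroed without i/o\n",
			reads, zeroed);
	return 0;
}

For these example numbers only two of the eight blocks need i/o, which is the saving the FIXME is after.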