Commit b1507c9a authored by Linus Torvalds

v2.5.0.8 -> v2.5.0.9

- Jeff Garzik: separate out handling of older tulip chips
- Jens Axboe: more bio stuff
- Anton Altaparmakov: NTFS 1.1.21 update
parent 098b7955
......@@ -8,11 +8,6 @@
<author>
<firstname>Jeff</firstname>
<surname>Garzik</surname>
<affiliation>
<address>
<email>jgarzik@mandrakesoft.com</email>
</address>
</affiliation>
</author>
</authorgroup>
......@@ -115,7 +110,7 @@
<sect1 id="bugrepdiag"><title>Diagnostic output</title>
<para>
Obtain the via-audio-diag diagnostics program from
http://gtf.org/garzik/drivers/via82cxxx/ and provide a dump of the
http://sf.net/projects/gkernel/ and provide a dump of the
audio chip's registers while the problem is occurring. Sample command line:
</para>
<programlisting>
......
......@@ -98,6 +98,16 @@ list at sourceforge: linux-ntfs-dev@lists.sourceforge.net
ChangeLog
=========
NTFS 1.1.21:
- Fixed bug with reading $MFT where we try to read higher mft records
before having read the $DATA attribute of $MFT. (Note this is only a
partial solution which will only work in the case that the attribute
list is resident or non-resident but $DATA is in the first 1024
bytes. But this should be enough in the majority of cases. I am not
going to bother fixing the general case until someone finds this to
be a problem for them, which I doubt very much will ever happen...)
- Fixed bogus BUG() call in readdir().
NTFS 1.1.20:
- Fixed two bugs in ntfs_readwrite_attr(). Thanks to Jan Kara for
spotting the out of bounds one.
......
Tulip Ethernet Card Driver
Maintained by Jeff Garzik <jgarzik@mandrakesoft.com>
The Tulip driver was developed by Donald Becker and changed by
Takashi Manabe and a cast of thousands.
For 2.4.x and later kernels, the Linux Tulip driver is available at
http://sourceforge.net/projects/tulip/
This driver is for the Digital "Tulip" Ethernet adapter interface.
It should work with most DEC 21*4*-based chips/ethercards, as well as
with work-alike chips from Lite-On (PNIC) and Macronix (MXIC) and ASIX.
The author may be reached as becker@scyld.com, or C/O
Center of Excellence in Space Data and Information Sciences
Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771
Additional information on Donald Becker's tulip.c
is available at http://www.scyld.com/network/tulip.html
Theory of Operation
Board Compatibility
===================
This device driver is designed for the DECchip "Tulip", Digital's
single-chip ethernet controllers for PCI. Supported members of the family
are the 21040, 21041, 21140, 21140A, 21142, and 21143. Similar work-alike
chips from Lite-On, Macronix, ASIX, Compex and others listed below are also
supported.
These chips are used on at least 140 unique PCI board designs. The great
number of chips and board designs supported is the reason for the
driver size and complexity. Almost all of the increased complexity is in the
board configuration and media selection code. There is very little
increase in the operational critical path length.
Board-specific settings
=======================
PCI bus devices are configured by the system at boot time, so no jumpers
need to be set on the board. The system BIOS preferably should assign the
PCI INTA signal to an otherwise unused system IRQ line.
Some boards have EEPROM tables with a default media entry. The factory default
is usually "autoselect". This should only be overridden when using
transceiver connections without link beat, e.g. 10base2 or AUI, or (rarely!)
for forcing full-duplex when used with old link partners that do not do
autonegotiation.
Driver operation
================
Ring buffers
------------
The Tulip can use either ring buffers or lists of Tx and Rx descriptors.
This driver uses statically allocated rings of Rx and Tx descriptors, set at
compile time by RX/TX_RING_SIZE. This version of the driver allocates skbuffs
for the Rx ring buffers at open() time and passes the skb->data field to the
Tulip as receive data buffers. When an incoming frame is less than
RX_COPYBREAK bytes long, a fresh skbuff is allocated and the frame is
copied to the new skbuff. When the incoming frame is larger, the skbuff is
passed directly up the protocol stack and replaced by a newly allocated
skbuff.
The RX_COPYBREAK value is chosen to trade off the memory wasted by
using a full-sized skbuff for small frames vs. the copying costs of larger
frames. For small frames the copying cost is negligible (esp. considering
that we are pre-loading the cache with immediately useful header
information). For large frames the copying cost is non-trivial, and the
larger copy might flush the cache of useful data. A subtle aspect of this
choice is that the Tulip only receives into longword aligned buffers, thus
the IP header at offset 14 isn't longword aligned for further processing.
Copied frames are put into the new skbuff at an offset of "+2", thus copying
has the beneficial effect of aligning the IP header and preloading the
cache.
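As an illustration of the copybreak decision just described (not the driver's
actual code), here is a minimal sketch; rx_copybreak, the function name and the
surrounding ring-buffer handling are assumed names:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/string.h>

static const int rx_copybreak = 100;    /* illustrative threshold */

/* Decide whether to copy a small received frame or pass the ring skb up. */
static struct sk_buff *rx_copybreak_skb(struct sk_buff *ring_skb, int pkt_len)
{
        struct sk_buff *skb;

        if (pkt_len < rx_copybreak &&
            (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
                /* Small frame: copy into a fresh skb.  The "+2" reserve
                 * longword-aligns the IP header (at offset 14) and preloads
                 * the cache with the header bytes. */
                skb_reserve(skb, 2);
                memcpy(skb_put(skb, pkt_len), ring_skb->data, pkt_len);
        } else {
                /* Large frame: hand the ring skb up the stack; the caller
                 * then allocates a replacement skb for the Rx descriptor. */
                skb = ring_skb;
                skb_put(skb, pkt_len);
        }
        return skb;
}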
Synchronization
---------------
The driver runs as two independent, single-threaded flows of control. One
is the send-packet routine, which enforces single-threaded use by the
dev->tbusy flag. The other thread is the interrupt handler, which is single
threaded by the hardware and other software.
The send packet thread has partial control over the Tx ring and 'dev->tbusy'
flag. It sets the tbusy flag whenever it's queuing a Tx packet. If the next
queue slot is empty, it clears the tbusy flag when finished; otherwise it sets
the 'tp->tx_full' flag.
The interrupt handler has exclusive control over the Rx ring and records stats
from the Tx ring. (The Tx-done interrupt can't be selectively turned off, so
we can't avoid the interrupt overhead by having the Tx routine reap the Tx
stats.) After reaping the stats, it marks the queue entry as empty by setting
the 'base' to zero. Iff the 'tp->tx_full' flag is set, it clears both the
tx_full and tbusy flags.
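A minimal sketch of the tbusy/tx_full handshake described above, written
against an assumed example_private structure (the tx_busy bit stands in for
the dev->tbusy flag of older kernels; newer kernels use netif_stop_queue()
and friends instead):

#include <linux/skbuff.h>
#include <asm/bitops.h>

#define EX_TX_RING_SIZE 16              /* illustrative ring size */

struct example_private {
        unsigned long tx_busy;          /* stands in for dev->tbusy */
        unsigned int cur_tx, dirty_tx;  /* producer / consumer counts */
        int tx_full;
};

/* Transmit side: claim the queue, post one descriptor, then either
 * reopen the queue or mark the ring full. */
static int example_start_xmit(struct sk_buff *skb, struct example_private *tp)
{
        if (test_and_set_bit(0, &tp->tx_busy))
                return 1;               /* a transmit is already queuing */

        /* ... fill tx_ring[tp->cur_tx % EX_TX_RING_SIZE], give it to the chip ... */
        tp->cur_tx++;

        if (tp->cur_tx - tp->dirty_tx < EX_TX_RING_SIZE - 1)
                clear_bit(0, &tp->tx_busy);     /* room left: accept more */
        else
                tp->tx_full = 1;                /* leave the busy bit set */
        return 0;
}

/* Interrupt side: after recording stats and marking reaped entries empty,
 * reopen the queue iff tx_full was set. */
static void example_tx_done(struct example_private *tp)
{
        if (tp->tx_full && tp->cur_tx - tp->dirty_tx < EX_TX_RING_SIZE - 1) {
                tp->tx_full = 0;
                clear_bit(0, &tp->tx_busy);
        }
}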
Notes
=====
Thanks to Duke Kamstra of SMC for long ago providing an EtherPower board.
Greg LaPolla at Linksys provided PNIC and other Linksys boards.
Znyx provided a four-port card for testing.
References
==========
http://cesdis.gsfc.nasa.gov/linux/misc/NWay.html
http://www.digital.com (search for current 21*4* datasheets and "21X4 SROM")
http://www.national.com/pf/DP/DP83840A.html
http://www.asix.com.tw/pmac.htm
http://www.admtek.com.tw/
Errata
======
The old DEC databooks were light on details.
The 21040 databook claims that CSR13, CSR14, and CSR15 should each be the last
register of the set CSR12-15 written. Hmmm, now how is that possible?
The DEC SROM format is very badly designed, i.e. not precisely defined, leading to
part of the media selection junkheap below. Some boards do not have EEPROM
media tables and need to be patched up. Worse, other boards use the DEC
design kit media table when it isn't correct for their board.
We cannot use MII interrupts because there is no defined GPIO pin to attach
them. The MII transceiver status is polled using a kernel timer.
Source tree tour
================
The following is a list of files comprising the Tulip ethernet driver in
the drivers/net/tulip subdirectory.
21142.c - 21142-specific h/w interaction
eeprom.c - EEPROM reading and parsing
interrupt.c - Interrupt handler
media.c - Media selection and MII support
pnic.c - PNIC-specific h/w interaction
timer.c - Main driver timer, and misc h/w timers
tulip.h - Private driver header
tulip_core.c - Driver core (a.k.a. where "everything else" goes)
Version history
===============
0.9.14 (February 20, 2000):
* Fix PNIC problems (Manfred Spraul)
* Add new PCI id for Accton comet
* Support Davicom tulips
* Fix oops in eeprom parsing
* Enable workarounds for early PCI chipsets
* IA64, hppa csr0 support
* Support media types 5, 6
* Interpret a bit more of the 21142 SROM extended media type 3
* Add missing delay in eeprom reading
0.9.11 (November 3, 2000):
* Eliminate extra bus accesses when sharing interrupts (prumpf)
* Barrier following ownership descriptor bit flip (prumpf)
* Endianness fixes for >14 addresses in setup frames (prumpf)
* Report link beat to kernel/userspace via netif_carrier_*. (kuznet)
* Better spinlocking in set_rx_mode.
* Fix I/O resource request failure error messages (DaveM catch)
* Handle DMA allocation failure.
0.9.10 (September 6, 2000):
* Simple interrupt mitigation (via jamal)
* More PCI ids
0.9.9 (August 11, 2000):
* More PCI ids
0.9.8 (July 13, 2000):
* Correct signed/unsigned comparison for dummy frame index
* Remove outdated references to struct enet_statistics
0.9.7 (June 17, 2000):
* Timer cleanups (Andrew Morton)
* Alpha compile fix (somebody?)
0.9.6 (May 31, 2000):
* Revert 21143-related support flag patch
* Add HPPA/media-table debugging printk
0.9.5 (May 30, 2000):
* HPPA support (willy@puffingroup)
* CSR6 bits and tulip.h cleanup (Chris Smith)
* Improve debugging messages a bit
* Add delay after CSR13 write in t21142_start_nway
* Remove unused ETHER_STATS code
* Convert 'extern inline' to 'static inline' in tulip.h (Chris Smith)
* Update DS21143 support flags in tulip_chip_info[]
* Use spin_lock_irq, not _irqsave/restore, in tulip_start_xmit()
* Add locking to set_rx_mode()
* Fix race with chip setting DescOwned bit (Hal Murray)
* Request 100% of PIO and MMIO resource space assigned to card
* Remove error message from pci_enable_device failure
0.9.4.3 (April 14, 2000):
* mod_timer fix (Hal Murray)
* PNIC2 resuscitation (Chris Smith)
0.9.4.2 (March 21, 2000):
* Fix 21041 CSR7, CSR13/14/15 handling
* Merge some PCI ids from tulip 0.91x
* Merge some HAS_xxx flags and flag settings from tulip 0.91x
* asm/io.h fix (submitted by many) and cleanup
* s/HAS_NWAY143/HAS_NWAY/
* Cleanup 21041 mode reporting
* Small code cleanups
0.9.4.1 (March 18, 2000):
* Finish PCI DMA conversion (davem)
* Do not netif_start_queue() at end of tulip_tx_timeout() (kuznet)
* PCI DMA fix (kuznet)
* eeprom.c code cleanup
* Remove Xircom Tulip crud
......@@ -1692,10 +1692,8 @@ S: Maintained
VIA 82Cxxx AUDIO DRIVER
P: Jeff Garzik
M: jgarzik@mandrakesoft.com
L: linux-via@gtf.org
W: http://sourceforge.net/projects/gkernel/
S: Maintained
S: Odd fixes
USB DIAMOND RIO500 DRIVER
P: Cesar Miquel
......
VERSION = 2
PATCHLEVEL = 5
SUBLEVEL = 1
EXTRAVERSION =-pre8
EXTRAVERSION =-pre9
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
......
......@@ -22,7 +22,7 @@
/* Note mask bit is true for DISABLED irqs. */
static unsigned int cached_irq_mask = 0xffff;
spinlock_t i8259_irq_lock = SPIN_LOCK_UNLOCKED;
static spinlock_t i8259_irq_lock = SPIN_LOCK_UNLOCKED;
static inline void
i8259_update_irq_hw(unsigned int irq, unsigned long mask)
......
......@@ -100,7 +100,7 @@ typedef struct _efivar_entry_t {
struct list_head list;
} efivar_entry_t;
spinlock_t efivars_lock = SPIN_LOCK_UNLOCKED;
static spinlock_t efivars_lock = SPIN_LOCK_UNLOCKED;
static LIST_HEAD(efivar_list);
static struct proc_dir_entry *efi_vars_dir = NULL;
......
......@@ -46,7 +46,7 @@ extern void ia64_mca_check_errors( void );
* This interrupt-safe spinlock protects all accesses to PCI
* configuration space.
*/
spinlock_t pci_lock = SPIN_LOCK_UNLOCKED;
static spinlock_t pci_lock = SPIN_LOCK_UNLOCKED;
struct pci_fixup pcibios_fixups[] = {
{ 0 }
......
......@@ -61,7 +61,7 @@ typedef struct cpuprom_info {
}cpuprom_info_t;
static cpuprom_info_t *cpuprom_head;
spinlock_t cpuprom_spinlock;
static spinlock_t cpuprom_spinlock;
#define PROM_LOCK() mutex_spinlock(&cpuprom_spinlock)
#define PROM_UNLOCK(s) mutex_spinunlock(&cpuprom_spinlock, (s))
......
......@@ -13,7 +13,7 @@ unsigned char cached_8259[2] = { 0xff, 0xff };
#define cached_A1 (cached_8259[0])
#define cached_21 (cached_8259[1])
spinlock_t i8259_lock = SPIN_LOCK_UNLOCKED;
static spinlock_t i8259_lock = SPIN_LOCK_UNLOCKED;
int i8259_pic_irq_offset;
......
......@@ -36,7 +36,7 @@ static volatile struct pmac_irq_hw *pmac_irq_hw[4] = {
static int max_irqs;
static int max_real_irqs;
spinlock_t pmac_pic_lock = SPIN_LOCK_UNLOCKED;
static spinlock_t pmac_pic_lock = SPIN_LOCK_UNLOCKED;
#define GATWICK_IRQ_POOL_SIZE 10
......
......@@ -1928,7 +1928,7 @@ print_properties(struct device_node *np)
}
#endif
spinlock_t rtas_lock = SPIN_LOCK_UNLOCKED;
static spinlock_t rtas_lock = SPIN_LOCK_UNLOCKED;
/* this can be called after setup -- Cort */
int __openfirmware
......
......@@ -33,7 +33,7 @@ extern asmlinkage int sys32_ioctl(unsigned int fd, unsigned int cmd,
u32 arg);
asmlinkage int solaris_ioctl(unsigned int fd, unsigned int cmd, u32 arg);
spinlock_t timod_pagelock = SPIN_LOCK_UNLOCKED;
static spinlock_t timod_pagelock = SPIN_LOCK_UNLOCKED;
static char * page = NULL ;
#ifndef DEBUG_SOLARIS_KMALLOC
......
......@@ -257,7 +257,8 @@ void blk_queue_segment_boundary(request_queue_t *q, unsigned long mask)
static char *rq_flags[] = { "REQ_RW", "REQ_RW_AHEAD", "REQ_BARRIER",
"REQ_CMD", "REQ_NOMERGE", "REQ_STARTED",
"REQ_DONTPREP", "REQ_DRIVE_CMD", "REQ_DRIVE_TASK",
"REQ_PC", "REQ_SENSE", "REQ_SPECIAL" };
"REQ_PC", "REQ_BLOCK_PC", "REQ_SENSE",
"REQ_SPECIAL" };
void blk_dump_rq_flags(struct request *rq, char *msg)
{
......@@ -315,15 +316,46 @@ static int ll_10byte_cmd_build(request_queue_t *q, struct request *rq)
return 0;
}
/*
* can we merge the two segments, or do we need to start a new one?
*/
inline int blk_same_segment(request_queue_t *q, struct bio *bio,
void blk_recount_segments(request_queue_t *q, struct bio *bio)
{
struct bio_vec *bv, *bvprv = NULL;
int i, nr_segs, seg_size, cluster;
if (unlikely(!bio->bi_io_vec))
return;
cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
seg_size = nr_segs = 0;
bio_for_each_segment(bv, bio, i) {
if (bvprv && cluster) {
if (seg_size + bv->bv_len > q->max_segment_size)
goto new_segment;
if (!BIOVEC_MERGEABLE(bvprv, bv))
goto new_segment;
if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
goto new_segment;
seg_size += bv->bv_len;
bvprv = bv;
continue;
}
new_segment:
nr_segs++;
bvprv = bv;
seg_size = 0;
}
bio->bi_hw_seg = nr_segs;
bio->bi_flags |= (1 << BIO_SEG_VALID);
}
inline int blk_contig_segment(request_queue_t *q, struct bio *bio,
struct bio *nxt)
{
/*
* not contiguous, just forget it
*/
if (!(q->queue_flags & (1 << QUEUE_FLAG_CLUSTER)))
return 0;
if (!BIO_CONTIG(bio, nxt))
return 0;
......@@ -343,19 +375,17 @@ inline int blk_same_segment(request_queue_t *q, struct bio *bio,
*/
int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg)
{
unsigned long long lastend;
struct bio_vec *bvec;
struct bio_vec *bvec, *bvprv;
struct bio *bio;
int nsegs, i, cluster;
nsegs = 0;
bio = rq->bio;
lastend = ~0ULL;
cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
/*
* for each bio in rq
*/
bvprv = NULL;
rq_for_each_bio(bio, rq) {
/*
* for each segment in bio
......@@ -363,26 +393,20 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
bio_for_each_segment(bvec, bio, i) {
int nbytes = bvec->bv_len;
BIO_BUG_ON(i > bio->bi_vcnt);
if (cluster && bvec_to_phys(bvec) == lastend) {
if (bvprv && cluster) {
if (sg[nsegs - 1].length + nbytes > q->max_segment_size)
goto new_segment;
/*
* make sure to not map a segment across a
* boundary that the queue doesn't want
*/
if (!__BIO_SEG_BOUNDARY(lastend, lastend + nbytes, q->seg_boundary_mask))
lastend = ~0ULL;
else
lastend += nbytes;
if (!BIOVEC_MERGEABLE(bvprv, bvec))
goto new_segment;
if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
goto new_segment;
sg[nsegs - 1].length += nbytes;
} else {
new_segment:
if (nsegs > q->max_segments) {
printk("map: %d >= %d\n", nsegs, q->max_segments);
if (nsegs >= q->max_segments) {
printk("map: %d >= %d, i %d, segs %d, size %ld\n", nsegs, q->max_segments, i, rq->nr_segments, rq->nr_sectors);
BUG();
}
......@@ -391,9 +415,9 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
sg[nsegs].length = nbytes;
sg[nsegs].offset = bvec->bv_offset;
lastend = bvec_to_phys(bvec) + nbytes;
nsegs++;
}
bvprv = bvec;
} /* segments in bio */
} /* bios in rq */
......@@ -405,35 +429,55 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
* specific ones if so desired
*/
static inline int ll_new_segment(request_queue_t *q, struct request *req,
struct bio *bio)
struct bio *bio, int nr_segs)
{
if (req->nr_segments + bio->bi_vcnt <= q->max_segments) {
req->nr_segments += bio->bi_vcnt;
if (req->nr_segments + nr_segs <= q->max_segments) {
req->nr_segments += nr_segs;
return 1;
}
req->flags |= REQ_NOMERGE;
return 0;
}
static int ll_back_merge_fn(request_queue_t *q, struct request *req,
struct bio *bio)
{
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors)
int bio_segs;
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors) {
req->flags |= REQ_NOMERGE;
return 0;
if (blk_same_segment(q, req->biotail, bio))
}
bio_segs = bio_hw_segments(q, bio);
if (blk_contig_segment(q, req->biotail, bio))
bio_segs--;
if (!bio_segs)
return 1;
return ll_new_segment(q, req, bio);
return ll_new_segment(q, req, bio, bio_segs);
}
static int ll_front_merge_fn(request_queue_t *q, struct request *req,
struct bio *bio)
{
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors)
int bio_segs;
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors) {
req->flags |= REQ_NOMERGE;
return 0;
if (blk_same_segment(q, bio, req->bio))
}
bio_segs = bio_hw_segments(q, bio);
if (blk_contig_segment(q, bio, req->bio))
bio_segs--;
if (!bio_segs)
return 1;
return ll_new_segment(q, req, bio);
return ll_new_segment(q, req, bio, bio_segs);
}
static int ll_merge_requests_fn(request_queue_t *q, struct request *req,
......@@ -441,7 +485,7 @@ static int ll_merge_requests_fn(request_queue_t *q, struct request *req,
{
int total_segments = req->nr_segments + next->nr_segments;
if (blk_same_segment(q, req->biotail, next->bio))
if (blk_contig_segment(q, req->biotail, next->bio))
total_segments--;
if (total_segments > q->max_segments)
......@@ -643,7 +687,7 @@ int blk_init_queue(request_queue_t *q, request_fn_proc *rfn)
return ret;
}
q->request_fn = rfn;
q->request_fn = rfn;
q->back_merge_fn = ll_back_merge_fn;
q->front_merge_fn = ll_front_merge_fn;
q->merge_requests_fn = ll_merge_requests_fn;
......@@ -973,6 +1017,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
BUG_ON(req->flags & REQ_NOMERGE);
if (!q->back_merge_fn(q, req, bio))
break;
elevator->elevator_merge_cleanup_fn(q, req, nr_sectors);
req->biotail->bi_next = bio;
......@@ -987,6 +1032,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
BUG_ON(req->flags & REQ_NOMERGE);
if (!q->front_merge_fn(q, req, bio))
break;
elevator->elevator_merge_cleanup_fn(q, req, nr_sectors);
bio->bi_next = req->bio;
......@@ -1070,7 +1116,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
req->hard_nr_sectors = req->nr_sectors = nr_sectors;
req->current_nr_sectors = req->hard_cur_sectors = cur_nr_sectors;
req->nr_segments = bio->bi_vcnt;
req->nr_hw_segments = req->nr_segments;
req->nr_hw_segments = bio_hw_segments(q, bio);
req->buffer = bio_data(bio); /* see ->buffer comment above */
req->waiting = NULL;
req->bio = req->biotail = bio;
......@@ -1451,7 +1497,6 @@ int end_that_request_first(struct request *req, int uptodate, int nr_sectors)
while ((bio = req->bio)) {
nsect = bio_iovec(bio)->bv_len >> 9;
bio->bi_size -= bio_iovec(bio)->bv_len;
/*
* not a complete bvec done
......@@ -1459,12 +1504,18 @@ int end_that_request_first(struct request *req, int uptodate, int nr_sectors)
if (unlikely(nsect > nr_sectors)) {
int residual = (nsect - nr_sectors) << 9;
bio->bi_size -= residual;
bio_iovec(bio)->bv_offset += residual;
bio_iovec(bio)->bv_len -= residual;
blk_recalc_request(req, nr_sectors);
return 1;
}
/*
* account transfer
*/
bio->bi_size -= bio_iovec(bio)->bv_len;
nr_sectors -= nsect;
total_nsect += nsect;
......
......@@ -483,7 +483,8 @@ static void do_ps2esdi_request(request_queue_t * q)
} /* check for above 16Mb dmas */
else if ((CURRENT_DEV < ps2esdi_drives) &&
(CURRENT->sector + CURRENT->current_nr_sectors <=
ps2esdi[MINOR(CURRENT->rq_dev)].nr_sects)) {
ps2esdi[MINOR(CURRENT->rq_dev)].nr_sects) &&
CURRENT->flags & REQ_CMD) {
#if 0
printk("%s:got request. device : %d minor : %d command : %d sector : %ld count : %ld\n",
DEVICE_NAME,
......@@ -495,7 +496,7 @@ static void do_ps2esdi_request(request_queue_t * q)
block = CURRENT->sector;
count = CURRENT->current_nr_sectors;
switch (CURRENT->cmd) {
switch (rq_data_dir(CURRENT)) {
case READ:
ps2esdi_readwrite(READ, CURRENT_DEV, block, count);
break;
......
......@@ -336,7 +336,7 @@ static struct sysrq_key_op sysrq_killall_op = {
/* Key Operations table and lock */
spinlock_t sysrq_key_table_lock = SPIN_LOCK_UNLOCKED;
static spinlock_t sysrq_key_table_lock = SPIN_LOCK_UNLOCKED;
#define SYSRQ_KEY_TABLE_LENGTH 36
static struct sysrq_key_op *sysrq_key_table[SYSRQ_KEY_TABLE_LENGTH] = {
/* 0 */ &sysrq_loglevel_op,
......
......@@ -226,12 +226,13 @@ ide_startstop_t ide_dma_intr (ide_drive_t *drive)
static int ide_build_sglist (ide_hwif_t *hwif, struct request *rq)
{
request_queue_t *q = &hwif->drives[DEVICE_NR(rq->rq_dev) & 1].queue;
struct scatterlist *sg = hwif->sg_table;
int nents;
nents = blk_rq_map_sg(rq->q, rq, hwif->sg_table);
nents = blk_rq_map_sg(q, rq, hwif->sg_table);
if (nents > rq->nr_segments)
if (rq->q && nents > rq->nr_segments)
printk("ide-dma: received %d segments, build %d\n", rq->nr_segments, nents);
if (rq_data_dir(rq) == READ)
......
......@@ -1887,7 +1887,8 @@ static void idetape_end_request (byte uptodate, ide_hwgroup_t *hwgroup)
printk("ide-tape: %s: skipping over config parition..\n", tape->name);
#endif
tape->onstream_write_error = OS_PART_ERROR;
complete(tape->waiting);
if (tape->waiting)
complete(tape->waiting);
}
}
remove_stage = 1;
......@@ -1903,7 +1904,8 @@ static void idetape_end_request (byte uptodate, ide_hwgroup_t *hwgroup)
tape->nr_pending_stages++;
tape->next_stage = tape->first_stage;
rq->current_nr_sectors = rq->nr_sectors;
complete(tape->waiting);
if (tape->waiting)
complete(tape->waiting);
}
}
} else if (rq->cmd == IDETAPE_READ_RQ) {
......
......@@ -180,7 +180,7 @@ static int initializing; /* set while initializing built-in drivers */
*
* anti-deadlock ordering: ide_lock -> DRIVE_LOCK
*/
spinlock_t ide_lock = SPIN_LOCK_UNLOCKED;
spinlock_t ide_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;
#ifdef CONFIG_BLK_DEV_IDEPCI
static int ide_scan_direction; /* THIS was formerly 2.2.x pci=reverse */
......
......@@ -159,7 +159,8 @@ if [ "$CONFIG_NET_ETHERNET" = "y" ]; then
dep_tristate ' Apricot Xen-II on board Ethernet' CONFIG_APRICOT $CONFIG_ISA
dep_tristate ' CS89x0 support' CONFIG_CS89x0 $CONFIG_ISA
dep_tristate ' DECchip Tulip (dc21x4x) PCI support' CONFIG_TULIP $CONFIG_PCI
dep_tristate ' Early DECchip Tulip (dc2104x) PCI support (EXPERIMENTAL)' CONFIG_DE2104X $CONFIG_PCI $CONFIG_EXPERIMENTAL
dep_tristate ' DECchip Tulip (dc2114x) PCI support' CONFIG_TULIP $CONFIG_PCI
if [ "$CONFIG_TULIP" = "y" -o "$CONFIG_TULIP" = "m" ]; then
dep_bool ' New bus configuration (EXPERIMENTAL)' CONFIG_TULIP_MWI $CONFIG_EXPERIMENTAL
bool ' Use PCI shared mem for NIC registers' CONFIG_TULIP_MMIO
......
......@@ -174,6 +174,7 @@ obj-$(CONFIG_DEPCA) += depca.o
obj-$(CONFIG_EWRK3) += ewrk3.o
obj-$(CONFIG_ATP) += atp.o
obj-$(CONFIG_DE4X5) += de4x5.o
obj-$(CONFIG_DE2104X) += de2104x.o
obj-$(CONFIG_NI5010) += ni5010.o
obj-$(CONFIG_NI52) += ni52.o
obj-$(CONFIG_NI65) += ni65.o
......
......@@ -7,6 +7,11 @@ Tested in single and dual HBA configuration, 32 and 64bit busses,
SEST size 512 Exchanges (simultaneous I/Os) limited by module kmalloc()
max of 128k bytes contiguous.
Ver 2.5.0 Nov 29, 2001
* eliminated io_request_lock. This change makes the driver specific
to the 2.5.x kernels.
* silenced excessively noisy printks.
Ver 2.1.1 Oct 18, 2001
* reinitialize Cmnd->SCp.sent_command (used to identify commands as
passthrus) on calling scsi_done, since the scsi mid layer does not
......
......@@ -97,11 +97,11 @@ int CpqTsCreateTachLiteQues( void* pHBA, int opcode)
fcChip->Exchanges = NULL;
cpqfcHBAdata->fcLQ = NULL;
printk("Allocating %u for %u Exchanges ",
(ULONG)sizeof(FC_EXCHANGES), TACH_MAX_XID);
/* printk("Allocating %u for %u Exchanges ",
(ULONG)sizeof(FC_EXCHANGES), TACH_MAX_XID); */
fcChip->Exchanges = pci_alloc_consistent(cpqfcHBAdata->PciDev,
sizeof(FC_EXCHANGES), &fcChip->exch_dma_handle);
printk("@ %p\n", fcChip->Exchanges);
/* printk("@ %p\n", fcChip->Exchanges); */
if( fcChip->Exchanges == NULL ) // fatal error!!
{
......@@ -112,10 +112,10 @@ int CpqTsCreateTachLiteQues( void* pHBA, int opcode)
memset( fcChip->Exchanges, 0, sizeof( FC_EXCHANGES));
printk("Allocating %u for LinkQ ", (ULONG)sizeof(FC_LINK_QUE));
/* printk("Allocating %u for LinkQ ", (ULONG)sizeof(FC_LINK_QUE)); */
cpqfcHBAdata->fcLQ = pci_alloc_consistent(cpqfcHBAdata->PciDev,
sizeof( FC_LINK_QUE), &cpqfcHBAdata->fcLQ_dma_handle);
printk("@ %p (%u elements)\n", cpqfcHBAdata->fcLQ, FC_LINKQ_DEPTH);
/* printk("@ %p (%u elements)\n", cpqfcHBAdata->fcLQ, FC_LINKQ_DEPTH); */
memset( cpqfcHBAdata->fcLQ, 0, sizeof( FC_LINK_QUE));
if( cpqfcHBAdata->fcLQ == NULL ) // fatal error!!
......@@ -222,8 +222,8 @@ int CpqTsCreateTachLiteQues( void* pHBA, int opcode)
// power-of-2 boundary
// LIVE DANGEROUSLY! Assume the boundary for SEST mem will
// be on physical page (e.g. 4k) boundary.
printk("Allocating %u for TachSEST for %u Exchanges\n",
(ULONG)sizeof(TachSEST), TACH_SEST_LEN);
/* printk("Allocating %u for TachSEST for %u Exchanges\n",
(ULONG)sizeof(TachSEST), TACH_SEST_LEN); */
fcChip->SEST = fcMemManager( cpqfcHBAdata->PciDev,
&cpqfcHBAdata->dynamic_mem[0],
sizeof(TachSEST), 4, 0L, &SESTdma );
......@@ -289,7 +289,7 @@ int CpqTsCreateTachLiteQues( void* pHBA, int opcode)
// set the Host's pointer for Tachyon to access
printk(" cpqfcTS: writing IMQ BASE %Xh ", fcChip->IMQ->base );
/* printk(" cpqfcTS: writing IMQ BASE %Xh ", fcChip->IMQ->base ); */
writel( fcChip->IMQ->base,
(fcChip->Registers.ReMapMemBase + IMQ_BASE));
......@@ -315,9 +315,9 @@ int CpqTsCreateTachLiteQues( void* pHBA, int opcode)
return -1; // failed
}
#endif
//#if DBG
#if DBG
printk(" PI %Xh\n", (ULONG)ulAddr );
//#endif
#endif
writel( (ULONG)ulAddr,
(fcChip->Registers.ReMapMemBase + IMQ_PRODUCER_INDEX));
......@@ -337,9 +337,9 @@ int CpqTsCreateTachLiteQues( void* pHBA, int opcode)
writel( fcChip->SEST->base,
(fcChip->Registers.ReMapMemBase + TL_MEM_SEST_BASE));
printk(" cpqfcTS: SEST %p(virt): Wrote base %Xh @ %p\n",
/* printk(" cpqfcTS: SEST %p(virt): Wrote base %Xh @ %p\n",
fcChip->SEST, fcChip->SEST->base,
fcChip->Registers.ReMapMemBase + TL_MEM_SEST_BASE);
fcChip->Registers.ReMapMemBase + TL_MEM_SEST_BASE); */
writel( fcChip->SEST->length,
(fcChip->Registers.ReMapMemBase + TL_MEM_SEST_LENGTH));
......@@ -1723,7 +1723,7 @@ int CpqTsInitializeTachLite( void *pHBA, int opcode1, int opcode2)
UCHAR Minor = (UCHAR)(RevId & 0x3);
UCHAR Major = (UCHAR)((RevId & 0x1C) >>2);
printk(" HBA Tachyon RevId %d.%d\n", Major, Minor);
/* printk(" HBA Tachyon RevId %d.%d\n", Major, Minor); */
if( (Major == 1) && (Minor == 2) )
{
sprintf( cpqfcHBAdata->fcChip.Name, STACHLITE66_TS12);
......
......@@ -188,7 +188,7 @@ static void Cpqfc_initHBAdata( CPQFCHBA *cpqfcHBAdata, struct pci_dev *PciDev )
DEBUG_PCI(printk(" IOBaseU = %x\n",
cpqfcHBAdata->fcChip.Registers.IOBaseU));
printk(" ioremap'd Membase: %p\n", cpqfcHBAdata->fcChip.Registers.ReMapMemBase);
/* printk(" ioremap'd Membase: %p\n", cpqfcHBAdata->fcChip.Registers.ReMapMemBase); */
DEBUG_PCI(printk(" SFQconsumerIndex.address = %p\n",
cpqfcHBAdata->fcChip.Registers.SFQconsumerIndex.address));
......@@ -242,7 +242,7 @@ static void launch_FCworker_thread(struct Scsi_Host *HostAdapter)
cpqfcHBAdata->notify_wt = &sem;
/* must unlock before kernel_thread(), for it may cause a reschedule. */
spin_unlock_irq(&io_request_lock);
spin_unlock_irq(&HostAdapter->host_lock);
kernel_thread((int (*)(void *))cpqfcTSWorkerThread,
(void *) HostAdapter, 0);
/*
......@@ -250,7 +250,7 @@ static void launch_FCworker_thread(struct Scsi_Host *HostAdapter)
*/
down (&sem);
spin_lock_irq(&io_request_lock);
spin_lock_irq(&HostAdapter->host_lock);
cpqfcHBAdata->notify_wt = NULL;
LEAVE("launch_FC_worker_thread");
......@@ -312,8 +312,8 @@ int cpqfcTS_detect(Scsi_Host_Template *ScsiHostTemplate)
}
// NOTE: (kernel 2.2.12-32) limits allocation to 128k bytes...
printk(" scsi_register allocating %d bytes for FC HBA\n",
(ULONG)sizeof(CPQFCHBA));
/* printk(" scsi_register allocating %d bytes for FC HBA\n",
(ULONG)sizeof(CPQFCHBA)); */
HostAdapter = scsi_register( ScsiHostTemplate, sizeof( CPQFCHBA ) );
......@@ -403,9 +403,11 @@ int cpqfcTS_detect(Scsi_Host_Template *ScsiHostTemplate)
DEBUG_PCI(printk(" Requesting 255 I/O addresses @ %x\n",
cpqfcHBAdata->fcChip.Registers.IOBaseU ));
// start our kernel worker thread
spin_lock_irq(&HostAdapter->host_lock);
launch_FCworker_thread(HostAdapter);
......@@ -445,15 +447,16 @@ int cpqfcTS_detect(Scsi_Host_Template *ScsiHostTemplate)
unsigned long stop_time;
spin_unlock_irq(&io_request_lock);
spin_unlock_irq(&HostAdapter->host_lock);
stop_time = jiffies + 4*HZ;
while ( time_before(jiffies, stop_time) )
schedule(); // (our worker task needs to run)
spin_lock_irq(&io_request_lock);
}
spin_lock_irq(&HostAdapter->host_lock);
NumberOfAdapters++;
spin_unlock_irq(&HostAdapter->host_lock);
} // end of while()
}
......@@ -1593,9 +1596,9 @@ int cpqfcTS_eh_device_reset(Scsi_Cmnd *Cmnd)
int retval;
Scsi_Device *SDpnt = Cmnd->device;
// printk(" ENTERING cpqfcTS_eh_device_reset() \n");
spin_unlock_irq(&io_request_lock);
spin_unlock_irq(&Cmnd->host->host_lock);
retval = cpqfcTS_TargetDeviceReset( SDpnt, 0);
spin_lock_irq(&io_request_lock);
spin_lock_irq(&Cmnd->host->host_lock);
return retval;
}
......@@ -1650,8 +1653,7 @@ void cpqfcTS_intr_handler( int irq,
UCHAR IntPending;
ENTER("intr_handler");
spin_lock_irqsave( &io_request_lock, flags);
spin_lock_irqsave( &HostAdapter->host_lock, flags);
// is this our INT?
IntPending = readb( cpqfcHBA->fcChip.Registers.INTPEND.address);
......@@ -1700,7 +1702,7 @@ void cpqfcTS_intr_handler( int irq,
}
}
}
spin_unlock_irqrestore( &io_request_lock, flags);
spin_unlock_irqrestore( &HostAdapter->host_lock, flags);
LEAVE("intr_handler");
}
......
......@@ -32,8 +32,8 @@
#define CPQFCTS_DRIVER_VER(maj,min,submin) ((maj<<16)|(min<<8)|(submin))
// don't forget to also change MODULE_DESCRIPTION in cpqfcTSinit.c
#define VER_MAJOR 2
#define VER_MINOR 1
#define VER_SUBMINOR 1
#define VER_MINOR 5
#define VER_SUBMINOR 0
// Macros for kernel (esp. SMP) tracing using a PCI analyzer
// (e.g. x86).
......
......@@ -227,7 +227,7 @@ void cpqfcTSWorkerThread( void *host)
PCI_TRACE( 0x90)
// first, take the IO lock so the SCSI upper layers can't call
// into our _quecommand function (this also disables INTs)
spin_lock_irqsave( &io_request_lock, flags); // STOP _que function
spin_lock_irqsave( &HostAdapter->host_lock, flags); // STOP _que function
PCI_TRACE( 0x90)
CPQ_SPINLOCK_HBA( cpqfcHBAdata)
......@@ -241,7 +241,7 @@ void cpqfcTSWorkerThread( void *host)
PCI_TRACE( 0x90)
// release the IO lock (and re-enable interrupts)
spin_unlock_irqrestore( &io_request_lock, flags);
spin_unlock_irqrestore( &HostAdapter->host_lock, flags);
// disable OUR HBA interrupt (keep them off as much as possible
// during error recovery)
......@@ -3077,7 +3077,8 @@ void cpqfcTSheartbeat( unsigned long ptr )
if( cpqfcHBAdata->BoardLock) // Worker Task Running?
goto Skip;
spin_lock_irqsave( &io_request_lock, flags); // STOP _que function
// STOP _que function
spin_lock_irqsave( &cpqfcHBAdata->HostAdapter->host_lock, flags);
PCI_TRACE( 0xA8)
......@@ -3085,7 +3086,7 @@ void cpqfcTSheartbeat( unsigned long ptr )
cpqfcHBAdata->BoardLock = &BoardLock; // stop Linux SCSI command queuing
// release the IO lock (and re-enable interrupts)
spin_unlock_irqrestore( &io_request_lock, flags);
spin_unlock_irqrestore( &cpqfcHBAdata->HostAdapter->host_lock, flags);
// Ensure no contention from _quecommand or Worker process
CPQ_SPINLOCK_HBA( cpqfcHBAdata)
......
......@@ -242,7 +242,7 @@ static inline void idescsi_free_bio (struct bio *bio)
while (bio) {
bhp = bio;
bio = bio->bi_next;
kfree (bhp);
bio_put(bhp);
}
}
......
......@@ -181,18 +181,23 @@ void scsi_build_commandblocks(Scsi_Device * SDpnt);
void scsi_initialize_queue(Scsi_Device * SDpnt, struct Scsi_Host * SHpnt)
{
request_queue_t *q = &SDpnt->request_queue;
int max_segments = SHpnt->sg_tablesize;
blk_init_queue(q, scsi_request_fn);
q->queuedata = (void *) SDpnt;
#ifdef DMA_CHUNK_SIZE
blk_queue_max_segments(q, 64);
#else
blk_queue_max_segments(q, SHpnt->sg_tablesize);
if (max_segments > 64)
max_segments = 64;
#endif
blk_queue_max_segments(q, max_segments);
blk_queue_max_sectors(q, SHpnt->max_sectors);
if (!SHpnt->use_clustering)
clear_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags);
if (SHpnt->unchecked_isa_dma)
blk_queue_segment_boundary(q, ISA_DMA_THRESHOLD);
}
#ifdef MODULE
......
......@@ -189,25 +189,23 @@ __inline static int __count_segments(struct request *req,
void
recount_segments(Scsi_Cmnd * SCpnt)
{
struct request *req;
struct Scsi_Host *SHpnt;
Scsi_Device * SDpnt;
req = &SCpnt->request;
SHpnt = SCpnt->host;
SDpnt = SCpnt->device;
struct request *req = &SCpnt->request;
struct Scsi_Host *SHpnt = SCpnt->host;
req->nr_segments = __count_segments(req, SHpnt->unchecked_isa_dma,NULL);
}
/*
* IOMMU hackery for sparc64
*/
#ifdef DMA_CHUNK_SIZE
#define MERGEABLE_BUFFERS(X,Y) \
(((((long)bio_to_phys((X))+(X)->bi_size)|((long)bio_to_phys((Y)))) & \
(DMA_CHUNK_SIZE - 1)) == 0)
((((bvec_to_phys(__BVEC_END((X))) + __BVEC_END((X))->bv_len) | bio_to_phys((Y))) & (DMA_CHUNK_SIZE - 1)) == 0)
#ifdef DMA_CHUNK_SIZE
static inline int scsi_new_mergeable(request_queue_t * q,
struct request * req,
struct Scsi_Host *SHpnt)
int nr_segs)
{
/*
* pci_map_sg will be able to merge these two
......@@ -216,49 +214,51 @@ static inline int scsi_new_mergeable(request_queue_t * q,
* scsi.c allocates for this purpose
* min(64,sg_tablesize) entries.
*/
if (req->nr_segments >= q->max_segments)
if (req->nr_segments + nr_segs > q->max_segments)
return 0;
req->nr_segments++;
req->nr_segments += nr_segs;
return 1;
}
static inline int scsi_new_segment(request_queue_t * q,
struct request * req,
struct bio *bio)
struct bio *bio, int nr_segs)
{
/*
* pci_map_sg won't be able to map these two
* into a single hardware sg entry, so we have to
* check if things fit into sg_tablesize.
*/
if (req->nr_hw_segments >= q->max_segments)
if (req->nr_hw_segments + nr_segs > q->max_segments)
return 0;
else if (req->nr_segments + bio->bi_vcnt > q->max_segments)
else if (req->nr_segments + nr_segs > q->max_segments)
return 0;
req->nr_hw_segments += bio->bi_vcnt;
req->nr_segments += bio->bi_vcnt;
req->nr_hw_segments += nr_segs;
req->nr_segments += nr_segs;
return 1;
}
#else
#else /* DMA_CHUNK_SIZE */
static inline int scsi_new_segment(request_queue_t * q,
struct request * req,
struct bio *bio)
struct bio *bio, int nr_segs)
{
if (req->nr_segments + bio->bi_vcnt > q->max_segments)
if (req->nr_segments + nr_segs > q->max_segments) {
req->flags |= REQ_NOMERGE;
return 0;
}
/*
* This will form the start of a new segment. Bump the
* counter.
*/
req->nr_segments += bio->bi_vcnt;
req->nr_segments += nr_segs;
return 1;
}
#endif
#endif /* DMA_CHUNK_SIZE */
/*
* Function: __scsi_merge_fn()
......@@ -294,36 +294,47 @@ static inline int scsi_new_segment(request_queue_t * q,
*/
__inline static int __scsi_back_merge_fn(request_queue_t * q,
struct request *req,
struct bio *bio,
int dma_host)
struct bio *bio)
{
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors)
return 0;
else if (!BIO_SEG_BOUNDARY(q, req->biotail, bio))
int bio_segs;
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors) {
req->flags |= REQ_NOMERGE;
return 0;
}
bio_segs = bio_hw_segments(q, bio);
if (blk_contig_segment(q, req->biotail, bio))
bio_segs--;
#ifdef DMA_CHUNK_SIZE
if (MERGEABLE_BUFFERS(req->biotail, bio))
return scsi_new_mergeable(q, req, q->queuedata);
if (MERGEABLE_BUFFERS(bio, req->bio))
return scsi_new_mergeable(q, req, bio_segs);
#endif
return scsi_new_segment(q, req, bio);
return scsi_new_segment(q, req, bio, bio_segs);
}
__inline static int __scsi_front_merge_fn(request_queue_t * q,
struct request *req,
struct bio *bio,
int dma_host)
struct bio *bio)
{
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors)
return 0;
else if (!BIO_SEG_BOUNDARY(q, bio, req->bio))
int bio_segs;
if (req->nr_sectors + bio_sectors(bio) > q->max_sectors) {
req->flags |= REQ_NOMERGE;
return 0;
}
bio_segs = bio_hw_segments(q, bio);
if (blk_contig_segment(q, req->biotail, bio))
bio_segs--;
#ifdef DMA_CHUNK_SIZE
if (MERGEABLE_BUFFERS(bio, req->bio))
return scsi_new_mergeable(q, req, q->queuedata);
return scsi_new_mergeable(q, req, bio_segs);
#endif
return scsi_new_segment(q, req, bio);
return scsi_new_segment(q, req, bio, bio_segs);
}
/*
......@@ -343,7 +354,7 @@ __inline static int __scsi_front_merge_fn(request_queue_t * q,
* Notes: Optimized for different cases depending upon whether
* ISA DMA is in use and whether clustering should be used.
*/
#define MERGEFCT(_FUNCTION, _BACK_FRONT, _DMA) \
#define MERGEFCT(_FUNCTION, _BACK_FRONT) \
static int _FUNCTION(request_queue_t * q, \
struct request * req, \
struct bio *bio) \
......@@ -351,16 +362,12 @@ static int _FUNCTION(request_queue_t * q, \
int ret; \
ret = __scsi_ ## _BACK_FRONT ## _merge_fn(q, \
req, \
bio, \
_DMA); \
bio); \
return ret; \
}
MERGEFCT(scsi_back_merge_fn_, back, 0)
MERGEFCT(scsi_back_merge_fn_d, back, 1)
MERGEFCT(scsi_front_merge_fn_, front, 0)
MERGEFCT(scsi_front_merge_fn_d, front, 1)
MERGEFCT(scsi_back_merge_fn, back)
MERGEFCT(scsi_front_merge_fn, front)
/*
* Function: __scsi_merge_requests_fn()
......@@ -390,8 +397,7 @@ __inline static int __scsi_merge_requests_fn(request_queue_t * q,
struct request *next,
int dma_host)
{
Scsi_Device *SDpnt;
struct Scsi_Host *SHpnt;
int bio_segs;
/*
* First check if the either of the requests are re-queued
......@@ -399,69 +405,44 @@ __inline static int __scsi_merge_requests_fn(request_queue_t * q,
*/
if (req->special || next->special)
return 0;
else if (!BIO_SEG_BOUNDARY(q, req->biotail, next->bio))
return 0;
SDpnt = (Scsi_Device *) q->queuedata;
SHpnt = SDpnt->host;
#ifdef DMA_CHUNK_SIZE
/* If it would not fit into prepared memory space for sg chain,
* then don't allow the merge.
/*
* will become too large?
*/
if (req->nr_segments + next->nr_segments - 1 > q->max_segments)
if ((req->nr_sectors + next->nr_sectors) > q->max_sectors)
return 0;
if (req->nr_hw_segments + next->nr_hw_segments - 1 > q->max_segments)
return 0;
#else
bio_segs = req->nr_segments + next->nr_segments;
if (blk_contig_segment(q, req->biotail, next->bio))
bio_segs--;
/*
* If the two requests together are too large (even assuming that we
* can merge the boundary requests into one segment, then don't
* allow the merge.
* exceeds our max allowed segments?
*/
if (req->nr_segments + next->nr_segments - 1 > q->max_segments) {
return 0;
}
#endif
if ((req->nr_sectors + next->nr_sectors) > SHpnt->max_sectors)
if (bio_segs > q->max_segments)
return 0;
#ifdef DMA_CHUNK_SIZE
if (req->nr_segments + next->nr_segments > q->max_segments)
return 0;
bio_segs = req->nr_hw_segments + next->nr_hw_segments;
if (blk_contig_segment(q, req->biotail, next->bio))
bio_segs--;
/* If dynamic DMA mapping can merge last segment in req with
* first segment in next, then the check for hw segments was
* done above already, so we can always merge.
*/
if (MERGEABLE_BUFFERS(req->biotail, next->bio)) {
req->nr_hw_segments += next->nr_hw_segments - 1;
} else if (req->nr_hw_segments + next->nr_hw_segments > q->max_segments)
if (bio_segs > q->max_segments)
return 0;
} else {
req->nr_hw_segments += next->nr_hw_segments;
}
req->nr_segments += next->nr_segments;
return 1;
#else
req->nr_hw_segments = bio_segs;
#endif
/*
* We know that the two requests at the boundary should not be combined.
* Make sure we can fix something that is the sum of the two.
* A slightly stricter test than we had above.
* This will form the start of a new segment. Bump the
* counter.
*/
if (req->nr_segments + next->nr_segments > q->max_segments) {
return 0;
} else {
/*
* This will form the start of a new segment. Bump the
* counter.
*/
req->nr_segments += next->nr_segments;
return 1;
}
#endif
req->nr_segments = bio_segs;
return 1;
}
/*
......@@ -530,7 +511,6 @@ __inline static int __init_io(Scsi_Cmnd * SCpnt,
int dma_host)
{
struct bio * bio;
struct bio * bioprev;
char * buff;
int count;
int i;
......@@ -603,7 +583,6 @@ __inline static int __init_io(Scsi_Cmnd * SCpnt,
SCpnt->request_buffer = (char *) sgpnt;
SCpnt->request_bufflen = 0;
req->buffer = NULL;
bioprev = NULL;
if (dma_host)
bbpnt = (void **) ((char *)sgpnt +
......@@ -833,13 +812,13 @@ void initialize_merge_fn(Scsi_Device * SDpnt)
* rather than rely upon the default behavior of ll_rw_blk.
*/
if (SHpnt->unchecked_isa_dma == 0) {
q->back_merge_fn = scsi_back_merge_fn_;
q->front_merge_fn = scsi_front_merge_fn_;
q->back_merge_fn = scsi_back_merge_fn;
q->front_merge_fn = scsi_front_merge_fn;
q->merge_requests_fn = scsi_merge_requests_fn_;
SDpnt->scsi_init_io_fn = scsi_init_io_v;
} else {
q->back_merge_fn = scsi_back_merge_fn_d;
q->front_merge_fn = scsi_front_merge_fn_d;
q->back_merge_fn = scsi_back_merge_fn;
q->front_merge_fn = scsi_front_merge_fn;
q->merge_requests_fn = scsi_merge_requests_fn_d;
SDpnt->scsi_init_io_fn = scsi_init_io_vd;
}
......
/*
* Support for VIA 82Cxxx Audio Codecs
* Copyright 1999,2000 Jeff Garzik <jgarzik@mandrakesoft.com>
* Copyright 1999,2000 Jeff Garzik
*
* Distributed under the GNU GENERAL PUBLIC LICENSE (GPL) Version 2.
* See the "COPYING" file distributed with this software for more info.
......@@ -8,9 +8,6 @@
* For a list of known bugs (errata) and documentation,
* see via-audio.pdf in linux/Documentation/DocBook.
* If this documentation does not exist, run "make pdfdocs".
* If "make pdfdocs" fails, obtain the documentation from
* the driver's Website at
* http://gtf.org/garzik/drivers/via82cxxx/
*
*/
......@@ -3357,7 +3354,7 @@ static void __exit cleanup_via82cxxx_audio(void)
module_init(init_via82cxxx_audio);
module_exit(cleanup_via82cxxx_audio);
MODULE_AUTHOR("Jeff Garzik <jgarzik@mandrakesoft.com>");
MODULE_AUTHOR("Jeff Garzik");
MODULE_DESCRIPTION("DSP audio and mixer driver for Via 82Cxxx audio devices");
MODULE_LICENSE("GPL");
......
......@@ -33,6 +33,7 @@
#include <linux/compiler.h>
#include <asm/uaccess.h>
#include <asm/io.h>
kmem_cache_t *bio_cachep;
static spinlock_t __cacheline_aligned bio_lock = SPIN_LOCK_UNLOCKED;
......@@ -53,7 +54,8 @@ static struct biovec_pool bvec_list[BIOVEC_NR_POOLS];
/*
* if you change this list, also change bvec_alloc or things will
* break badly!
* break badly! cannot be bigger than what you can fit into an
* unsigned short
*/
static const int bvec_pool_sizes[BIOVEC_NR_POOLS] = { 1, 4, 16, 64, 128, 256 };
......@@ -204,6 +206,7 @@ inline void bio_init(struct bio *bio)
bio->bi_rw = 0;
bio->bi_vcnt = 0;
bio->bi_idx = 0;
bio->bi_hw_seg = 0;
bio->bi_size = 0;
bio->bi_end_io = NULL;
atomic_set(&bio->bi_cnt, 1);
......@@ -312,6 +315,14 @@ void bio_put(struct bio *bio)
bio_free(bio);
}
inline int bio_hw_segments(request_queue_t *q, struct bio *bio)
{
if (unlikely(!(bio->bi_flags & BIO_SEG_VALID)))
blk_recount_segments(q, bio);
return bio->bi_hw_seg;
}
/**
* __bio_clone - clone a bio
* @bio: destination bio
......@@ -331,11 +342,15 @@ inline void __bio_clone(struct bio *bio, struct bio *bio_src)
bio->bi_rw = bio_src->bi_rw;
/*
* notes -- maybe just leave bi_idx alone. bi_max has no used
* on a cloned bio
* notes -- maybe just leave bi_idx alone. bi_max has no use
* on a cloned bio. assume identical mapping for the clone
*/
bio->bi_vcnt = bio_src->bi_vcnt;
bio->bi_idx = bio_src->bi_idx;
if (bio_src->bi_flags & (1 << BIO_SEG_VALID)) {
bio->bi_hw_seg = bio_src->bi_hw_seg;
bio->bi_flags |= (1 << BIO_SEG_VALID);
}
bio->bi_size = bio_src->bi_size;
bio->bi_max = bio_src->bi_max;
}
......@@ -387,8 +402,15 @@ struct bio *bio_copy(struct bio *bio, int gfp_mask, int copy)
if (bbv->bv_page == NULL)
goto oom;
bbv->bv_len = bv->bv_len;
bbv->bv_offset = bv->bv_offset;
/*
* if doing a copy for a READ request, no need
* to memcpy page data
*/
if (!copy)
goto fill_in;
continue;
if (gfp_mask & __GFP_WAIT) {
vfrom = kmap(bv->bv_page);
......@@ -408,10 +430,6 @@ struct bio *bio_copy(struct bio *bio, int gfp_mask, int copy)
kunmap_atomic(vfrom, KM_BIO_IRQ);
local_irq_restore(flags);
}
fill_in:
bbv->bv_len = bv->bv_len;
bbv->bv_offset = bv->bv_offset;
}
b->bi_sector = bio->bi_sector;
......@@ -595,9 +613,6 @@ void ll_rw_kio(int rw, struct kiobuf *kio, kdev_t dev, sector_t sector)
}
queue_io:
if (bio->bi_vcnt > 1)
bio->bi_flags |= 1 << BIO_PREBUILT;
submit_bio(rw, bio);
if (total_nr_pages)
......
......@@ -2005,12 +2005,12 @@ int generic_direct_IO(int rw, struct inode * inode, struct kiobuf * iobuf, unsig
{
int i, nr_blocks, retval;
sector_t *blocks = iobuf->blocks;
struct buffer_head bh;
bh.b_dev = inode->i_dev;
nr_blocks = iobuf->length / blocksize;
/* build the blocklist */
for (i = 0; i < nr_blocks; i++, blocknr++) {
struct buffer_head bh;
bh.b_state = 0;
bh.b_dev = inode->i_dev;
bh.b_size = blocksize;
......@@ -2036,7 +2036,8 @@ int generic_direct_IO(int rw, struct inode * inode, struct kiobuf * iobuf, unsig
blocks[i] = bh.b_blocknr;
}
retval = brw_kiovec(rw, 1, &iobuf, inode->i_dev, blocks, blocksize);
/* This does not understand multi-device filesystems currently */
retval = brw_kiovec(rw, 1, &iobuf, bh.b_dev, blocks, blocksize);
out:
return retval;
......
......@@ -543,7 +543,7 @@ void shrink_dcache_parent(struct dentry * parent)
* too much.
*
* Priority:
* 0 - very urgent: shrink everything
* 1 - very urgent: shrink everything
* ...
* 6 - base-level: try to shrink a bit.
*/
......
......@@ -408,10 +408,25 @@ static void prune_dqcache(int count)
}
}
/*
* This is called from kswapd when we think we need some
* more memory, but aren't really sure how much. So we
* carefully try to free a _bit_ of our dqcache, but not
* too much.
*
* Priority:
* 1 - very urgent: shrink everything
* ...
* 6 - base-level: try to shrink a bit.
*/
int shrink_dqcache_memory(int priority, unsigned int gfp_mask)
{
int count = 0;
lock_kernel();
prune_dqcache(nr_free_dquots / (priority + 1));
count = nr_free_dquots / priority;
prune_dqcache(count);
unlock_kernel();
kmem_cache_shrink(dquot_cachep);
return 0;
......
......@@ -707,7 +707,17 @@ void prune_icache(int goal)
if (goal)
schedule_task(&unused_inodes_flush_task);
}
/*
* This is called from kswapd when we think we need some
* more memory, but aren't really sure how much. So we
* carefully try to free a _bit_ of our icache, but not
* too much.
*
* Priority:
* 1 - very urgent: shrink everything
* ...
* 6 - base-level: try to shrink a bit.
*/
int shrink_icache_memory(int priority, int gfp_mask)
{
int count = 0;
......
......@@ -5,7 +5,7 @@ O_TARGET := ntfs.o
obj-y := fs.o sysctl.o support.o util.o inode.o dir.o super.o attr.o unistr.o
obj-m := $(O_TARGET)
# New version format started 3 February 2001.
EXTRA_CFLAGS = -DNTFS_VERSION=\"1.1.20\" #-DDEBUG
EXTRA_CFLAGS = -DNTFS_VERSION=\"1.1.21\" #-DDEBUG
include $(TOPDIR)/Rules.make
......@@ -548,7 +548,7 @@ int ntfs_create_attr(ntfs_inode *ino, int anum, char *aname, void *data,
* attribute.
*/
static int ntfs_process_runs(ntfs_inode *ino, ntfs_attribute* attr,
unsigned char *data)
unsigned char *data)
{
int startvcn, endvcn;
int vcn, cnum;
......@@ -622,7 +622,7 @@ static int ntfs_process_runs(ntfs_inode *ino, ntfs_attribute* attr,
}
/* Insert the attribute starting at attr in the inode ino. */
int ntfs_insert_attribute(ntfs_inode *ino, unsigned char* attrdata)
int ntfs_insert_attribute(ntfs_inode *ino, unsigned char *attrdata)
{
int i, found;
int type;
......
......@@ -303,10 +303,8 @@ static int ntfs_readdir(struct file* filp, void *dirent, filldir_t filldir)
ntfs_debug(DEBUG_OTHER, __FUNCTION__ "(): After ntfs_getdir_unsorted()"
" calls, f_pos 0x%Lx.\n", filp->f_pos);
if (!err) {
#ifdef DEBUG
if (cb.ph != 0x7fff || cb.pl)
BUG();
done:
#ifdef DEBUG
if (!cb.ret_code)
ntfs_debug(DEBUG_OTHER, __FUNCTION__ "(): EOD, f_pos "
"0x%Lx, returning 0.\n", filp->f_pos);
......@@ -314,8 +312,6 @@ static int ntfs_readdir(struct file* filp, void *dirent, filldir_t filldir)
ntfs_debug(DEBUG_OTHER, __FUNCTION__ "(): filldir "
"returned %i, returning 0, f_pos "
"0x%Lx.\n", cb.ret_code, filp->f_pos);
#else
done:
#endif
return 0;
}
......
......@@ -159,7 +159,7 @@ static int ntfs_insert_mft_attributes(ntfs_inode* ino, char *mft, int mftno)
* Return 0 on success or -errno on error.
*/
static int ntfs_insert_mft_attribute(ntfs_inode* ino, int mftno,
ntfs_u8 *attr)
ntfs_u8 *attr)
{
int i, error, present = 0;
......@@ -207,6 +207,7 @@ static int parse_attributes(ntfs_inode *ino, ntfs_u8 *alist, int *plen)
int mftno, l, error;
int last_mft = -1;
int len = *plen;
int tries = 0;
if (!ino->attr) {
ntfs_error("parse_attributes: called on inode 0x%x without a "
......@@ -230,9 +231,13 @@ static int parse_attributes(ntfs_inode *ino, ntfs_u8 *alist, int *plen)
* then occur there or the user notified to run
* ntfsck. (AIA) */
if (mftno != ino->i_number && mftno != last_mft) {
continue_after_loading_mft_data:
last_mft = mftno;
error = ntfs_read_mft_record(ino->vol, mftno, mft);
if (error) {
if (error == -EINVAL && !tries)
goto force_load_mft_data;
failed_reading_mft_data:
ntfs_debug(DEBUG_FILE3, "parse_attributes: "
"ntfs_read_mft_record(mftno = 0x%x) "
"failed\n", mftno);
......@@ -272,9 +277,106 @@ static int parse_attributes(ntfs_inode *ino, ntfs_u8 *alist, int *plen)
ntfs_free(mft);
*plen = len;
return 0;
force_load_mft_data:
{
ntfs_u8 *mft2, *attr2;
int mftno2;
int last_mft2 = last_mft;
int len2 = len;
int error2;
int found2 = 0;
ntfs_u8 *alist2 = alist;
/*
* We only get here if $DATA wasn't found in $MFT which only happens
* on volume mount when $MFT has an attribute list and there are
* attributes before $DATA which are inside extent mft records. So
* we just skip forward to the $DATA attribute and read that. Then we
* restart which is safe as an attribute will not be inserted twice.
*
* This still will not fix the case where the attribute list is non-
* resident, larger than 1024 bytes, and the $DATA attribute list entry
* is not in the first 1024 bytes. FIXME: This should be implemented
* somehow! Perhaps by passing special error code up to
* ntfs_load_attributes() so it keeps going trying to get to $DATA
* regardless. Then it would have to restart just like we do here.
*/
mft2 = ntfs_malloc(ino->vol->mft_record_size);
if (!mft2) {
ntfs_free(mft);
return -ENOMEM;
}
ntfs_memcpy(mft2, mft, ino->vol->mft_record_size);
while (len2 > 8) {
l = NTFS_GETU16(alist2 + 4);
if (l > len2)
break;
if (NTFS_GETU32(alist2 + 0x0) < ino->vol->at_data) {
len2 -= l;
alist2 += l;
continue;
}
if (NTFS_GETU32(alist2 + 0x0) > ino->vol->at_data) {
if (found2)
break;
/* Uh-oh! It really isn't there! */
ntfs_error("Either the $MFT is corrupt or, equally "
"likely, the $MFT is too complex for "
"the current driver to handle. Please "
"email the ntfs maintainer that you "
"saw this message. Thank you.\n");
goto failed_reading_mft_data;
}
/* Process attribute description. */
mftno2 = NTFS_GETU32(alist2 + 0x10);
if (mftno2 != ino->i_number && mftno2 != last_mft2) {
last_mft2 = mftno2;
error2 = ntfs_read_mft_record(ino->vol, mftno2, mft2);
if (error2) {
ntfs_debug(DEBUG_FILE3, "parse_attributes: "
"ntfs_read_mft_record(mftno2 = 0x%x) "
"failed\n", mftno2);
ntfs_free(mft2);
goto failed_reading_mft_data;
}
}
attr2 = ntfs_find_attr_in_mft_rec(
ino->vol, /* ntfs volume */
mftno2 == ino->i_number ?/* mft record is: */
ino->attr: /* base record */
mft2, /* extension record */
NTFS_GETU32(alist2 + 0), /* type */
(wchar_t*)(alist2 + alist2[7]), /* name */
alist2[6], /* name length */
1, /* ignore case */
NTFS_GETU16(alist2 + 24) /* instance number */
);
if (!attr2) {
ntfs_error("parse_attributes: mft records 0x%x and/or "
"0x%x corrupt!\n", ino->i_number,
mftno2);
ntfs_free(mft2);
goto failed_reading_mft_data;
}
error2 = ntfs_insert_mft_attribute(ino, mftno2, attr2);
if (error2) {
ntfs_debug(DEBUG_FILE3, "parse_attributes: "
"ntfs_insert_mft_attribute(mftno2 0x%x, "
"attribute2 type 0x%x) failed\n", mftno2,
NTFS_GETU32(alist2 + 0));
ntfs_free(mft2);
goto failed_reading_mft_data;
}
len2 -= l;
alist2 += l;
found2 = 1;
}
ntfs_free(mft2);
tries = 1;
goto continue_after_loading_mft_data;
}
}
static void ntfs_load_attributes(ntfs_inode* ino)
static void ntfs_load_attributes(ntfs_inode *ino)
{
ntfs_attribute *alist;
int datasize;
......
......@@ -150,7 +150,7 @@ int ntfs_read_mft_record(ntfs_volume *vol, int mftno, char *buf)
* now as we just can't handle some on disk structures
* this way. (AIA) */
printk(KERN_WARNING "NTFS: Invalid MFT record for 0x%x\n", mftno);
return -EINVAL;
return -EIO;
}
ntfs_debug(DEBUG_OTHER, "read_mft_record: Done 0x%x\n", mftno);
return 0;
......
......@@ -104,6 +104,13 @@ extern void iounmap(void *addr);
#define bus_to_virt phys_to_virt
#define page_to_bus page_to_phys
/*
* can the hardware map this into one segment or not, given no other
* constraints.
*/
#define BIOVEC_MERGEABLE(vec1, vec2) \
((bvec_to_phys((vec1)) + (vec1)->bv_len) == bvec_to_phys((vec2)))
/*
* readX/writeX() are used to access memory mapped devices. On some
* architectures the memory mapped IO stuff needs to be accessed
......
......@@ -18,7 +18,11 @@ extern unsigned long virt_to_bus_not_defined_use_pci_map(volatile void *addr);
extern unsigned long bus_to_virt_not_defined_use_pci_map(volatile void *addr);
#define bus_to_virt bus_to_virt_not_defined_use_pci_map
#define page_to_phys(page) (((page) - mem_map) << PAGE_SHIFT)
extern unsigned long phys_base;
#define page_to_phys(page) ((((page) - mem_map) << PAGE_SHIFT)+phys_base)
#define BIOVEC_MERGEABLE(vec1, vec2) \
((((bvec_to_phys((vec1)) + (vec1)->bv_len) | bvec_to_phys((vec2))) & (DMA_CHUNK_SIZE - 1)) == 0)
/* Different PCI controllers we support have their PCI MEM space
* mapped to an either 2GB (Psycho) or 4GB (Sabre) aligned area,
......
......@@ -57,8 +57,9 @@ struct bio {
* top bits priority
*/
unsigned int bi_vcnt; /* how many bio_vec's */
unsigned int bi_idx; /* current index into bvl_vec */
unsigned short bi_vcnt; /* how many bio_vec's */
unsigned short bi_idx; /* current index into bvl_vec */
unsigned short bi_hw_seg; /* actual mapped segments */
unsigned int bi_size; /* total size in bytes */
unsigned int bi_max; /* max bvl_vecs we can hold,
used as index into pool */
......@@ -79,7 +80,7 @@ struct bio {
#define BIO_UPTODATE 0 /* ok after I/O completion */
#define BIO_RW_BLOCK 1 /* RW_AHEAD set, and read/write would block */
#define BIO_EOF 2 /* out-of-bounds error */
#define BIO_PREBUILT 3 /* not merged big */
#define BIO_SEG_VALID 3 /* nr_hw_seg valid */
#define BIO_CLONED 4 /* doesn't own data */
/*
......@@ -108,8 +109,8 @@ struct bio {
/*
* will die
*/
#define bio_to_phys(bio) (page_to_phys(bio_page((bio))) + bio_offset((bio)))
#define bvec_to_phys(bv) (page_to_phys((bv)->bv_page) + (bv)->bv_offset)
#define bio_to_phys(bio) (page_to_phys(bio_page((bio))) + (unsigned long) bio_offset((bio)))
#define bvec_to_phys(bv) (page_to_phys((bv)->bv_page) + (unsigned long) (bv)->bv_offset)
/*
* queues that have highmem support enabled may still need to revert to
......@@ -125,13 +126,16 @@ struct bio {
/*
* merge helpers etc
*/
#define __BVEC_END(bio) bio_iovec_idx((bio), (bio)->bi_vcnt - 1)
#define __BVEC_END(bio) bio_iovec_idx((bio), (bio)->bi_vcnt - 1)
#define __BVEC_START(bio) bio_iovec_idx((bio), 0)
#define BIO_CONTIG(bio, nxt) \
(bvec_to_phys(__BVEC_END((bio))) + (bio)->bi_size == bio_to_phys((nxt)))
BIOVEC_MERGEABLE(__BVEC_END((bio)), __BVEC_START((nxt)))
#define __BIO_SEG_BOUNDARY(addr1, addr2, mask) \
(((addr1) | (mask)) == (((addr2) - 1) | (mask)))
#define BIOVEC_SEG_BOUNDARY(q, b1, b2) \
__BIO_SEG_BOUNDARY(bvec_to_phys((b1)), bvec_to_phys((b2)) + (b2)->bv_len, (q)->seg_boundary_mask)
#define BIO_SEG_BOUNDARY(q, b1, b2) \
__BIO_SEG_BOUNDARY(bvec_to_phys(__BVEC_END((b1))), bio_to_phys((b2)) + (b2)->bi_size, (q)->seg_boundary_mask)
BIOVEC_SEG_BOUNDARY((q), __BVEC_END((b1)), __BVEC_START((b2)))
#define bio_io_error(bio) bio_endio((bio), 0, bio_sectors((bio)))
......@@ -167,6 +171,8 @@ extern struct bio *bio_alloc(int, int);
extern void bio_put(struct bio *);
extern int bio_endio(struct bio *, int, int);
struct request_queue;
extern inline int bio_hw_segments(struct request_queue *, struct bio *);
extern inline void __bio_clone(struct bio *, struct bio *);
extern struct bio *bio_clone(struct bio *, int);
......
......@@ -84,11 +84,12 @@ extern inline struct request *elv_next_request(request_queue_t *q)
(q)->elevator.elevator_add_req_fn((q), (rq), (where)); \
} while (0)
#define __elv_add_request(q, rq, back, p) \
#define __elv_add_request(q, rq, back, p) do { \
if ((back)) \
__elv_add_request_core((q), (rq), (q)->queue_head.prev, (p)); \
else \
__elv_add_request_core((q), (rq), &(q)->queue_head, 0); \
} while (0)
#define elv_add_request(q, rq, back) __elv_add_request((q), (rq), (back), 1)
......
......@@ -256,6 +256,8 @@ extern void blk_attempt_remerge(request_queue_t *, struct request *);
extern struct request *blk_get_request(request_queue_t *, int, int);
extern void blk_put_request(struct request *);
extern void blk_plug_device(request_queue_t *);
extern void blk_recount_segments(request_queue_t *, struct bio *);
extern inline int blk_contig_segment(request_queue_t *q, struct bio *, struct bio *);
extern int block_ioctl(kdev_t, unsigned int, unsigned long);
......
......@@ -9,6 +9,7 @@
#include <linux/smp_lock.h>
#include <linux/blk.h>
#include <linux/tty.h>
#include <linux/fd.h>
#include <linux/nfs_fs.h>
#include <linux/nfs_fs_sb.h>
......@@ -29,6 +30,11 @@ static inline _syscall2(int,umount,char *,name,int,flags);
extern void rd_load(void);
extern void initrd_load(void);
extern int get_filesystem_list(char * buf);
extern void wait_for_keypress(void);
asmlinkage long sys_mount(char * dev_name, char * dir_name, char * type,
unsigned long flags, void * data);
#ifdef CONFIG_BLK_DEV_INITRD
unsigned int real_root_dev; /* do_proc_dointvec cannot handle kdev_t */
......@@ -276,29 +282,25 @@ static void __init mount_root(void)
char path[64];
char *name = "/dev/root";
char *fs_names, *p;
int err;
int do_devfs = 0;
#ifdef CONFIG_ROOT_NFS
void *data;
#endif
root_mountflags |= MS_VERBOSE;
fs_names = __getname();
get_fs_names(fs_names);
#ifdef CONFIG_ROOT_NFS
if (MAJOR(ROOT_DEV) != UNNAMED_MAJOR)
goto skip_nfs;
data = nfs_root_data();
if (!data)
goto no_nfs;
err = mount("/dev/root", "/root", "nfs", root_mountflags, data);
if (!err)
goto done;
no_nfs:
printk(KERN_ERR "VFS: Unable to mount root fs via NFS, trying floppy.\n");
ROOT_DEV = MKDEV(FLOPPY_MAJOR, 0);
skip_nfs:
if (MAJOR(ROOT_DEV) == UNNAMED_MAJOR) {
void *data;
data = nfs_root_data();
if (data) {
int err = mount("/dev/root", "/root", "nfs", root_mountflags, data);
if (!err)
goto done;
}
printk(KERN_ERR "VFS: Unable to mount root fs via NFS, trying floppy.\n");
ROOT_DEV = MKDEV(FLOPPY_MAJOR, 0);
}
#endif
#ifdef CONFIG_BLK_DEV_FD
......
......@@ -899,7 +899,6 @@ static int __init device_init_root(void)
int __init device_driver_init(void)
{
int error = 0;
int pid;
DBG("DEV: Initialising Device Tree\n");
......
......@@ -45,7 +45,8 @@ extern int C_A_D;
extern int bdf_prm[], bdflush_min[], bdflush_max[];
extern int sysctl_overcommit_memory;
extern int max_threads;
extern int nr_queued_signals, max_queued_signals;
extern atomic_t nr_queued_signals;
extern int max_queued_signals;
extern int sysrq_enabled;
extern int core_uses_pid;
extern int cad_pid;
......
......@@ -3881,6 +3881,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
}
}
/* Fall through */
case TCP_LAST_ACK:
case TCP_ESTABLISHED:
tcp_data_queue(sk, skb);
queued = 1;
......