Commit f8735053 authored by Linus Torvalds

Merge master.kernel.org:/home/davem/BK/net-2.5

into home.transmeta.com:/home/torvalds/v2.5/linux
parents 0694164e ddd11110
...@@ -31,6 +31,7 @@ Offset Type Description
0x1e0	unsigned long	ALT_MEM_K, alternative mem check, in Kb
0x1e8	char		number of entries in E820MAP (below)
0x1e9 unsigned char number of entries in EDDBUF (below)
0x1f1	char		size of setup.S, number of sectors
0x1f2	unsigned short	MOUNT_ROOT_RDONLY (if !=0)
0x1f4	unsigned short	size of compressed kernel-part in the
...@@ -66,6 +67,7 @@ Offset Type Description
0x220	4 bytes		(setup.S)
0x224	unsigned short	setup.S heap end pointer
0x2d0 - 0x600	E820MAP
0x600 - 0x7D4 EDDBUF (setup.S)
0x800	string, 2K max	COMMAND_LINE, the kernel commandline as
			copied using CL_OFFSET.
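The fixed offsets in the table above can be checked against a C struct. The sketch below is illustrative only: the field names are invented for this example (the kernel addresses these bytes directly from setup.S), the offsets come from the table, and `unsigned long` is 4 bytes on i386, hence `uint32_t`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical view of part of the boot parameter page; padding arrays
 * place each field at the offset documented in the table above. */
struct boot_params_sketch {
	uint8_t  _pad0[0x1e0];
	uint32_t alt_mem_k;           /* 0x1e0 ALT_MEM_K, in Kb */
	uint8_t  _pad1[0x1e8 - 0x1e4];
	uint8_t  e820_entries;        /* 0x1e8 entries in E820MAP */
	uint8_t  eddbuf_entries;      /* 0x1e9 entries in EDDBUF */
	uint8_t  _pad2[0x1f1 - 0x1ea];
	uint8_t  setup_sects;         /* 0x1f1 size of setup.S in sectors */
	uint16_t mount_root_rdonly;   /* 0x1f2 MOUNT_ROOT_RDONLY */
} __attribute__((packed));
```

A compile-time walk with `offsetof` confirms the padding arithmetic matches the documented layout.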
...
Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
==============================================================
September 16, 2002
Contents
...@@ -19,26 +19,9 @@ In This Release
===============
This file describes the Linux* Base Driver for the Intel(R) PRO/100 Family of
Adapters, version 2.1.x. This driver includes support for Itanium(TM)-based
systems.
New for this release:
- Additional ethtool functionality, including link status test and EEPROM
read/write. A third-party application can use the ethtool interface to
get and set driver parameters.
- Support for Zero copy on 82550-based adapters. This feature provides
faster data throughput and significant CPU usage improvement in systems
that use the relevant system call (sendfile(2)).
- Support for large MTU-enabling interface (1504 bytes) with kernel's
VLAN module
- Support for polling on RX
- Support for Wake On LAN* on 82550 and 82559-based adapters
Supported Adapters
==================
...@@ -96,8 +79,7 @@ CNR PRO/100 VE Desktop Adapter A10386-xxx, A10725-xxx,
To verify that your adapter is supported, find the board ID number on the
adapter. Look for a label that has a barcode and a number in the format
A12345-001. Match this to the list of numbers above.
For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:
...@@ -106,15 +88,20 @@ Driver ID Guide at:
For the latest Intel PRO/100 network driver for Linux, see:
http://downloadfinder.intel.com/scripts-df/support_intel.asp
Command Line Parameters
=======================
The following optional parameters are used by entering them on the command
line with the modprobe or insmod command using this syntax:
modprobe e100 [<option>=<VAL1>,<VAL2>,...]
insmod e100 [<option>=<VAL1>,<VAL2>,...]
For example, with two Intel PRO/100 PCI adapters, entering:
modprobe e100 TxDescriptors=32,128
...@@ -122,16 +109,20 @@ loads the e100 driver with 32 TX resources for the first adapter and 128 TX
resources for the second adapter. This configuration favors the second
adapter. The driver supports up to 16 network adapters concurrently.
The default value for each parameter is generally the recommended setting,
unless otherwise noted.
NOTE: Giving any command line option the value "-1" causes the driver to use
      the appropriate default value for that option, as if no value was
      specified.
BundleMax
Valid Range: 1-65535
Default Value: 6
   This parameter holds the maximum number of small packets (less than 128
   bytes) in a bundle. Suggested values range from 2 to 10. See "CPU Cycle
   Saver."
BundleSmallFr
Valid Range: 0-1 (0=off, 1=on)
...@@ -142,48 +133,33 @@ Default Value: 0
e100_speed_duplex
Valid Range: 0-4 (1=10half;2=10full;3=100half;4=100full)
Default Value: 0
   The default value of 0 sets the adapter to auto-negotiate. Other values
   set the adapter to forced speed and duplex.
   Example usage: insmod e100.o e100_speed_duplex=4,4 (for two adapters)
flow_control
Valid Range: 0-1 (0=off, 1=on)
Default Value: 0
   This parameter controls the automatic generation (Tx) of and response (Rx)
   to Ethernet PAUSE frames. flow_control should NOT be set to 1 when the
   adapter is connected to an interface that does not support Ethernet PAUSE
   frames and when the e100_speed_duplex parameter is NOT set to zero.
IntDelay
Valid Range: 0-65535 (0=off)
Default Value: 1536
   This parameter holds the number of time units (in adapter terminology)
   until the adapter generates an interrupt. The recommended value for
   IntDelay is 1536 (upon initialization). Suggested values range from
   512 to 2048. See "CPU Cycle Saver."
IFS
Valid Range: 0-1 (0=off, 1=on)
Default Value: 1
   Inter Frame Spacing (IFS) aims to reduce the number of Ethernet frame
   collisions by altering the time between frame transmissions. When IFS is
   enabled the driver tries to find an optimal IFS value. It is used only at
   half duplex.
PollingMaxWork
Valid Range: 1-1024 (max number of RxDescriptors)
Default Value: Specified number of RxDescriptors
This value specifies the maximum number of receive packets that are
processed on a single polling call. This parameter is invalid if
RxCongestionControl is set to 0.
RxCongestionControl
Valid Range: 0-1 (0=off, 1=on)
Default Value: 1
1 enables polling mode. When the link is congested, the driver can decide
to handle received packets by polling them, instead of waiting until
interrupts occur.
RxDescriptors
Valid Range: 8-1024
...@@ -200,13 +176,15 @@ Default Value: 64
   Increasing this value allows the protocol stack to queue more transmits at
   the driver level. The maximum value for Itanium-based systems is 64.
ucode
Valid Range: 0-1 (0=off, 1=on)
Default Value: 0 for 82558-based adapters
1 for 82559(and higher)-based adapters 1 for 82559, 82550, and 82551-based adapters
   On uploads the microcode to the adapter, which enables CPU Cycle Saver.
   See the section "CPU Cycle Saver" below.
Example usage: insmod e100.o ucode=0 (does not reduce CPU usage) Example usage: insmod e100.o ucode=1
Not available on 82557-based adapters.
XsumRX
Valid Range: 0-1 (0=off, 1=on)
...@@ -214,6 +192,8 @@ Default Value: 1
   On allows Rx checksum offloading for TCP/UDP packets. Requires that the
   hardware support this feature.
Not available on 82557 and 82558-based adapters.
CPU Cycle Saver
===============
...@@ -234,10 +214,11 @@ switching to and from the driver.
CPU Cycle Saver consists of these arguments: IntDelay, BundleMax and
BundleSmallFr. When IntDelay is increased, the adapter waits longer for
frames to arrive before generating the interrupt. By increasing BundleMax,
the network adapter waits for the number of small frames (less than 128 bytes)
specified to arrive before generating the interrupt. When BundleSmallFr is
disabled, the adapter does not bundle small packets. Such small packets are
often, but not always, control packets that are better served immediately;
therefore, BundleSmallFr is disabled by default.
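As a rough model of the interaction described above (an illustrative sketch only, not driver code; the struct and function names are hypothetical):

```c
#include <assert.h>

/* Toy model of the CPU Cycle Saver bundling decision. */
struct csaver {
	int int_delay;       /* time units to wait before interrupting */
	int bundle_max;      /* max small frames bundled per interrupt */
	int bundle_small_fr; /* bundle frames shorter than 128 bytes? */
};

/* Decide whether a newly received frame should raise an interrupt now. */
static int fires_interrupt(const struct csaver *c, int frame_len,
			   int bundled_so_far)
{
	if (frame_len < 128 && !c->bundle_small_fr)
		return 1;	/* small frames are served immediately */
	if (bundled_so_far + 1 >= c->bundle_max)
		return 1;	/* bundle is full */
	return 0;		/* keep waiting; IntDelay timer still runs */
}
```

With the defaults (BundleSmallFr off), a 64-byte frame interrupts immediately; with bundling enabled, it only interrupts once BundleMax frames have accumulated (or IntDelay expires, which this sketch omits).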
For most users, it is recommended that CPU Cycle Saver be used with the
default values specified in the Command Line Parameters section. However, in
...@@ -249,7 +230,7 @@ ucode=0.
Support
=======
For general information, go to the Intel support website at:
http://support.intel.com
...
Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters
===============================================================
October 12, 2002
Contents
...@@ -20,7 +20,7 @@ In This Release
===============
This file describes the Linux* Base Driver for the Intel(R) PRO/1000 Family
of Adapters, version 4.4.x. This driver includes support for
Itanium(TM)-based systems.
This release version includes the following:
...@@ -32,7 +32,7 @@ This release version includes the following:
   default in supporting kernels. It is not supported on the Intel(R)
   PRO/1000 Gigabit Server Adapter.
Features include:
- Support for the 82545 and 82546-based adapters listed below
...@@ -144,8 +144,7 @@ Default Value: 80
RxIntDelay
Valid Range: 0-65535 (0=off)
Default Value: 0
   This value delays the generation of receive interrupts in units of 1.024
   microseconds. Receive interrupt reduction can improve CPU efficiency if
   properly tuned for specific network traffic. Increasing this value adds
...@@ -154,13 +153,12 @@ Default Value: 0 (82542, 82543, and 82544-based adapters)
   may be set too high, causing the driver to run out of available receive
   descriptors.
   CAUTION: When setting RxIntDelay to a value other than 0, adapters may
            hang (stop transmitting) under certain network conditions. If
            this occurs a NETDEV WATCHDOG message is logged in the system
            event log. In addition, the controller is automatically reset,
            restoring the network connection. To eliminate the potential for
            the hang ensure that RxIntDelay is set to 0.
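Since each RxIntDelay unit is 1.024 microseconds (1024 nanoseconds), converting a setting to wall-clock time is simple arithmetic; a hypothetical helper, for illustration:

```c
#include <assert.h>

/* Each RxIntDelay unit is 1.024 us, i.e. exactly 1024 ns. */
static unsigned long rx_int_delay_ns(unsigned int units)
{
	return (unsigned long) units * 1024;
}
```

For example, a setting of 128 corresponds to 131.072 microseconds of interrupt delay.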
RxAbsIntDelay (82540, 82545, and 82546-based adapters only)
Valid Range: 0-65535 (0=off)
...
...@@ -1094,3 +1094,11 @@ CONFIG_SCx200
  This support is also available as a module. If compiled as a
  module, it will be called scx200.o.
CONFIG_EDD
  Say Y or M here if you want to enable BIOS Enhanced Disk Drive
  Services real mode BIOS calls to determine which disk the BIOS
  tries to boot from. This information is then exported via driverfs.
  This option is experimental, but believed to be safe; note that
  most disk controller BIOS vendors do not yet implement this feature.
...@@ -45,6 +45,9 @@
 * New A20 code ported from SYSLINUX by H. Peter Anvin. AMD Elan bugfixes
 * by Robert Schwebel, December 2001 <robert@schwebel.de>
 *
* BIOS Enhanced Disk Drive support
* by Matt Domsch <Matt_Domsch@dell.com> September 2002
*
 */
#include <linux/config.h>
...@@ -53,6 +56,7 @@
#include <linux/compile.h>
#include <asm/boot.h>
#include <asm/e820.h>
#include <asm/edd.h>
#include <asm/page.h>
/* Signature words to ensure LILO loaded us right */
...@@ -543,6 +547,49 @@ no_32_apm_bios:
done_apm_bios:
#endif
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
# Do the BIOS Enhanced Disk Drive calls
# This code is sensitive to the size of the structs in edd.h
edd_start:
# %ds points to the bootsector
# result buffer for fn48
movw $EDDBUF+EDDEXTSIZE, %si # in ds:si, fn41 results
# kept just before that
movb $0, (EDDNR) # zero value at EDDNR
movb $0x80, %dl # BIOS device 0x80
edd_check_ext:
movb $0x41, %ah # Function 41
movw $0x55aa, %bx # magic
int $0x13 # make the call
jc edd_done # no more BIOS devices
cmpw $0xAA55, %bx # is magic right?
jne edd_next # nope, next...
movb %dl, %ds:-4(%si) # store device number
movb %ah, %ds:-3(%si) # store version
movw %cx, %ds:-2(%si) # store extensions
incb (EDDNR) # note that we stored something
edd_get_device_params:
movw $EDDPARMSIZE, %ds:(%si) # put size
movb $0x48, %ah # Function 48
int $0x13 # make the call
# Don't check for fail return
# it doesn't matter.
movw %si, %ax # increment si
addw $EDDPARMSIZE+EDDEXTSIZE, %ax
movw %ax, %si
edd_next:
incb %dl # increment to next device
cmpb $EDDMAXNR, (EDDNR) # Out of space?
jb edd_check_ext # keep looping
edd_done:
#endif
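The probe loop above can be sketched in C. This is an illustrative model only: the real `int $0x13` BIOS call is replaced by a stub that pretends two fixed disks exist, the 0xAA55 magic check is folded into the stub, and the sizes are assumptions derived from the buffer layout (0x600-0x7D4 is 468 bytes, i.e. six 78-byte entries):

```c
#include <assert.h>
#include <stdint.h>

#define EDDEXTSIZE  4	/* device number, version, extension bits */
#define EDDPARMSIZE 74	/* fn48 result buffer per device */
#define EDDMAXNR    6	/* six entries fit between 0x600 and 0x7D4 */

struct edd_entry {
	uint8_t  device;	/* BIOS device number (0x80, 0x81, ...) */
	uint8_t  version;	/* EDD version from fn41 (AH) */
	uint16_t extensions;	/* interface support bitmap (CX) */
	uint8_t  params[EDDPARMSIZE]; /* raw fn48 data */
};

/* Stub for "int $0x13, AH=0x41": pretend two fixed disks exist. */
static int bios_check_ext(uint8_t dev, uint8_t *version, uint16_t *ext)
{
	if (dev != 0x80 && dev != 0x81)
		return -1;	/* carry set: no such device */
	*version = 0x30;	/* claims EDD 3.0 */
	*ext = 0x07;
	return 0;
}

/* Model of edd_start..edd_done: walk devices from 0x80, stop on the
 * first failing fn41 call or when EDDMAXNR entries are stored. */
static int edd_probe(struct edd_entry out[EDDMAXNR])
{
	int eddnr = 0;		/* what the asm keeps at EDDNR (0x1e9) */
	uint8_t dev;

	for (dev = 0x80; eddnr < EDDMAXNR; dev++) {
		uint8_t ver;
		uint16_t ext;

		if (bios_check_ext(dev, &ver, &ext))
			break;			/* jc edd_done */
		out[eddnr].device = dev;	/* movb %dl, -4(%si) */
		out[eddnr].version = ver;	/* movb %ah, -3(%si) */
		out[eddnr].extensions = ext;	/* movw %cx, -2(%si) */
		/* fn48 would fill params here; failure is ignored */
		eddnr++;			/* incb (EDDNR) */
	}
	return eddnr;
}
```

Note one divergence from the assembly: on an fn41 magic mismatch the real loop skips to the next device rather than stopping, which this sketch does not model.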
# Now we want to move to protected mode ...
	cmpw	$0, %cs:realmode_swtch
	jz	rmodeswtch_normal
...
...@@ -216,6 +216,10 @@ tristate '/dev/cpu/microcode - Intel IA32 CPU microcode support' CONFIG_MICROCOD
tristate '/dev/cpu/*/msr - Model-specific register support' CONFIG_X86_MSR
tristate '/dev/cpu/*/cpuid - CPU information support' CONFIG_X86_CPUID
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
tristate 'BIOS Enhanced Disk Drive calls determine boot disk (EXPERIMENTAL)' CONFIG_EDD
fi
choice 'High Memory Support' \
	"off CONFIG_NOHIGHMEM \
	 4GB CONFIG_HIGHMEM4G \
...
...@@ -70,6 +70,7 @@ CONFIG_X86_MCE_P4THERMAL=y
# CONFIG_MICROCODE is not set
# CONFIG_X86_MSR is not set
# CONFIG_X86_CPUID is not set
# CONFIG_EDD is not set
CONFIG_NOHIGHMEM=y
# CONFIG_HIGHMEM4G is not set
# CONFIG_HIGHMEM64G is not set
...
...@@ -28,6 +28,7 @@ obj-$(CONFIG_X86_IO_APIC) += io_apic.o
obj-$(CONFIG_SOFTWARE_SUSPEND)	+= suspend.o
obj-$(CONFIG_X86_NUMAQ)		+= numaq.o
obj-$(CONFIG_PROFILING)		+= profile.o
obj-$(CONFIG_EDD) += edd.o
EXTRA_AFLAGS := -traditional
...
/*
* linux/arch/i386/kernel/edd.c
* Copyright (C) 2002 Dell Computer Corporation
* by Matt Domsch <Matt_Domsch@dell.com>
*
* BIOS Enhanced Disk Drive Services (EDD)
* conformant to T13 Committee www.t13.org
* projects 1572D, 1484D, 1386D, 1226DT
*
* This code takes information provided by BIOS EDD calls
* fn41 - Check Extensions Present and
 * fn48 - Get Device Parameters with EDD extensions
* made in setup.S, copied to safe structures in setup.c,
* and presents it in driverfs.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License v2.0 as published by
* the Free Software Foundation
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
/*
* Known issues:
* - module unload leaves a directory around. Seems related to
* creating symlinks in that directory. Seen on kernel 2.5.41.
* - refcounting of struct device objects could be improved.
*
* TODO:
* - Add IDE and USB disk device support
* - when driverfs model of discs and partitions changes,
* update symlink accordingly.
* - Get symlink creator helper functions exported from
* drivers/base instead of duplicating them here.
* - move edd.[ch] to better locations if/when one is decided
*/
#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/stat.h>
#include <linux/err.h>
#include <linux/ctype.h>
#include <linux/slab.h>
#include <linux/limits.h>
#include <linux/driverfs_fs.h>
#include <linux/pci.h>
#include <asm/edd.h>
#include <linux/device.h>
#include <linux/blkdev.h>
/* FIXME - this really belongs in include/scsi/scsi.h */
#include <../drivers/scsi/scsi.h>
#include <../drivers/scsi/hosts.h>
MODULE_AUTHOR("Matt Domsch <Matt_Domsch@Dell.com>");
MODULE_DESCRIPTION("driverfs interface to BIOS EDD information");
MODULE_LICENSE("GPL");
#define EDD_VERSION "0.06 2002-Oct-09"
#define EDD_DEVICE_NAME_SIZE 16
#define REPORT_URL "http://domsch.com/linux/edd30/results.html"
/*
* bios_dir may go away completely,
* and it definitely won't be at the root
* of driverfs forever.
*/
static struct driver_dir_entry bios_dir = {
.name = "bios",
.mode = (S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO),
};
struct edd_device {
char name[EDD_DEVICE_NAME_SIZE];
struct edd_info *info;
struct list_head node;
struct driver_dir_entry dir;
};
struct edd_attribute {
struct attribute attr;
ssize_t(*show) (struct edd_device * edev, char *buf, size_t count,
loff_t off);
};
/* forward declarations */
static int edd_dev_is_type(struct edd_device *edev, const char *type);
static struct pci_dev *edd_get_pci_dev(struct edd_device *edev);
static struct scsi_device *edd_find_matching_scsi_device(struct edd_device *edev);
static struct edd_device *edd_devices[EDDMAXNR];
#define EDD_DEVICE_ATTR(_name,_mode,_show) \
struct edd_attribute edd_attr_##_name = { \
.attr = {.name = __stringify(_name), .mode = _mode }, \
.show = _show, \
};
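Outside the kernel, the effect of this macro can be checked with stand-in types. In the sketch below, `struct attribute`, `__stringify`, and the show-function signature are simplified stand-ins, not the kernel definitions:

```c
#include <assert.h>
#include <string.h>

/* Minimal stand-ins so the macro expansion can be exercised standalone. */
#define __stringify_1(x) #x
#define __stringify(x)   __stringify_1(x)

struct attribute {
	const char *name;
	int mode;
};

struct edd_device;	/* opaque here */

struct edd_attribute {
	struct attribute attr;
	long (*show)(struct edd_device *edev, char *buf,
		     unsigned long count, long long off);
};

/* Same shape as the kernel macro: names the variable edd_attr_<name>
 * and stringifies <name> into the attribute's name field. */
#define EDD_DEVICE_ATTR(_name,_mode,_show) \
struct edd_attribute edd_attr_##_name = { \
	.attr = {.name = __stringify(_name), .mode = _mode }, \
	.show = _show, \
}

static long show_version(struct edd_device *edev, char *buf,
			 unsigned long count, long long off)
{
	(void) edev; (void) buf; (void) count; (void) off;
	return 0;
}

static EDD_DEVICE_ATTR(version, 0444, show_version);
```

The expansion produces a variable `edd_attr_version` whose attribute name is the string "version", which is how the later `EDD_DEVICE_ATTR(raw_data, ...)` lines create their driverfs files.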
static inline struct edd_info *
edd_dev_get_info(struct edd_device *edev)
{
return edev->info;
}
static inline void
edd_dev_set_info(struct edd_device *edev, struct edd_info *info)
{
edev->info = info;
}
#define to_edd_attr(_attr) container_of(_attr,struct edd_attribute,attr)
#define to_edd_device(_dir) container_of(_dir,struct edd_device,dir)
static ssize_t
edd_attr_show(struct driver_dir_entry *dir, struct attribute *attr,
char *buf, size_t count, loff_t off)
{
struct edd_device *dev = to_edd_device(dir);
struct edd_attribute *edd_attr = to_edd_attr(attr);
ssize_t ret = 0;
if (edd_attr->show)
ret = edd_attr->show(dev, buf, count, off);
return ret;
}
static struct driverfs_ops edd_attr_ops = {
.show = edd_attr_show,
};
static int
edd_dump_raw_data(char *b, void *data, int length)
{
char *orig_b = b;
char buffer1[80], buffer2[80], *b1, *b2, c;
unsigned char *p = data;
unsigned long column = 0;
int length_printed = 0;
const char maxcolumn = 16;
while (length_printed < length) {
b1 = buffer1;
b2 = buffer2;
for (column = 0;
column < maxcolumn && length_printed < length; column++) {
b1 += sprintf(b1, "%02x ", (unsigned char) *p);
if (*p < 32 || *p > 126)
c = '.';
else
c = *p;
b2 += sprintf(b2, "%c", c);
p++;
length_printed++;
}
/* pad out the line */
for (; column < maxcolumn; column++) {
b1 += sprintf(b1, " ");
b2 += sprintf(b2, " ");
}
b += sprintf(b, "%s\t%s\n", buffer1, buffer2);
}
return (b - orig_b);
}
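The dump format produced above (16 hex bytes, a tab, then a printable-ASCII rendering with '.' for non-printables) can be reproduced with a self-contained single-row version; `dump_row` is a name invented for this sketch:

```c
#include <stdio.h>
#include <string.h>

/* Format one 16-column row in the same layout as edd_dump_raw_data:
 * hex column, tab, ASCII column, newline. Returns bytes written. */
static int dump_row(char *b, const unsigned char *p, int n)
{
	char hex[80], asc[80];
	char *h = hex, *a = asc;
	int col;

	for (col = 0; col < 16; col++) {
		if (col < n) {
			h += sprintf(h, "%02x ", p[col]);
			*a++ = (p[col] < 32 || p[col] > 126)
				? '.' : (char) p[col];
		} else {
			h += sprintf(h, "   ");	/* pad out the line */
			*a++ = ' ';
		}
	}
	*a = '\0';
	return sprintf(b, "%s\t%s\n", hex, asc);
}
```

For the three bytes "ABC" this yields a 48-character hex column ("41 42 43 " plus padding), a tab, and "ABC" padded to 16 characters.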
static ssize_t
edd_show_host_bus(struct edd_device *edev, char *buf, size_t count, loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
int i;
if (!edev || !info || !buf || off) {
return 0;
}
for (i = 0; i < 4; i++) {
if (isprint(info->params.host_bus_type[i])) {
p += sprintf(p, "%c", info->params.host_bus_type[i]);
} else {
p += sprintf(p, " ");
}
}
if (!strncmp(info->params.host_bus_type, "ISA", 3)) {
p += sprintf(p, "\tbase_address: %x\n",
info->params.interface_path.isa.base_address);
} else if (!strncmp(info->params.host_bus_type, "PCIX", 4) ||
!strncmp(info->params.host_bus_type, "PCI", 3)) {
p += sprintf(p,
"\t%02x:%02x.%01x channel: %u\n",
info->params.interface_path.pci.bus,
info->params.interface_path.pci.slot,
info->params.interface_path.pci.function,
info->params.interface_path.pci.channel);
} else if (!strncmp(info->params.host_bus_type, "IBND", 4) ||
!strncmp(info->params.host_bus_type, "XPRS", 4) ||
!strncmp(info->params.host_bus_type, "HTPT", 4)) {
p += sprintf(p,
"\tTBD: %llx\n",
info->params.interface_path.ibnd.reserved);
} else {
p += sprintf(p, "\tunknown: %llx\n",
info->params.interface_path.unknown.reserved);
}
return (p - buf);
}
static ssize_t
edd_show_interface(struct edd_device *edev, char *buf, size_t count, loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
int i;
if (!edev || !info || !buf || off) {
return 0;
}
for (i = 0; i < 8; i++) {
if (isprint(info->params.interface_type[i])) {
p += sprintf(p, "%c", info->params.interface_type[i]);
} else {
p += sprintf(p, " ");
}
}
if (!strncmp(info->params.interface_type, "ATAPI", 5)) {
p += sprintf(p, "\tdevice: %u lun: %u\n",
info->params.device_path.atapi.device,
info->params.device_path.atapi.lun);
} else if (!strncmp(info->params.interface_type, "ATA", 3)) {
p += sprintf(p, "\tdevice: %u\n",
info->params.device_path.ata.device);
} else if (!strncmp(info->params.interface_type, "SCSI", 4)) {
p += sprintf(p, "\tid: %u lun: %llu\n",
info->params.device_path.scsi.id,
info->params.device_path.scsi.lun);
} else if (!strncmp(info->params.interface_type, "USB", 3)) {
p += sprintf(p, "\tserial_number: %llx\n",
info->params.device_path.usb.serial_number);
} else if (!strncmp(info->params.interface_type, "1394", 4)) {
p += sprintf(p, "\teui: %llx\n",
info->params.device_path.i1394.eui);
} else if (!strncmp(info->params.interface_type, "FIBRE", 5)) {
p += sprintf(p, "\twwid: %llx lun: %llx\n",
info->params.device_path.fibre.wwid,
info->params.device_path.fibre.lun);
} else if (!strncmp(info->params.interface_type, "I2O", 3)) {
p += sprintf(p, "\tidentity_tag: %llx\n",
info->params.device_path.i2o.identity_tag);
} else if (!strncmp(info->params.interface_type, "RAID", 4)) {
p += sprintf(p, "\tidentity_tag: %x\n",
info->params.device_path.raid.array_number);
} else if (!strncmp(info->params.interface_type, "SATA", 4)) {
p += sprintf(p, "\tdevice: %u\n",
info->params.device_path.sata.device);
} else {
p += sprintf(p, "\tunknown: %llx %llx\n",
info->params.device_path.unknown.reserved1,
info->params.device_path.unknown.reserved2);
}
return (p - buf);
}
/**
 * edd_show_raw_data() - formats the raw EDD information for user-space
*
* Returns: number of bytes written, or 0 on failure
*/
static ssize_t
edd_show_raw_data(struct edd_device *edev, char *buf, size_t count, loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
int i, rc, warn_padding = 0, email = 0, nonzero_path = 0,
len = sizeof (*edd) - 4, found_pci=0;
uint8_t checksum = 0, c = 0;
char *p = buf;
struct pci_dev *pci_dev=NULL;
struct scsi_device *sd;
if (!edev || !info || !buf || off) {
return 0;
}
if (!(info->params.key == 0xBEDD || info->params.key == 0xDDBE))
len = info->params.length;
p += sprintf(p, "int13 fn48 returned data:\n\n");
p += edd_dump_raw_data(p, ((char *) edd) + 4, len);
/* Spec violation. Adaptec AIC7899 returns 0xDDBE
here, when it should be 0xBEDD.
*/
p += sprintf(p, "\n");
if (info->params.key == 0xDDBE) {
p += sprintf(p,
"Warning: Spec violation. Key should be 0xBEDD, is 0xDDBE\n");
email++;
}
if (!(info->params.key == 0xBEDD || info->params.key == 0xDDBE)) {
goto out;
}
for (i = 30; i <= 73; i++) {
c = *(((uint8_t *) edd) + i + 4);
if (c)
nonzero_path++;
checksum += c;
}
if (checksum) {
p += sprintf(p,
"Warning: Spec violation. Device Path checksum invalid.\n");
email++;
}
if (!nonzero_path) {
p += sprintf(p, "Error: Spec violation. Empty device path.\n");
email++;
goto out;
}
for (i = 0; i < 4; i++) {
if (!isprint(info->params.host_bus_type[i])) {
warn_padding++;
}
}
for (i = 0; i < 8; i++) {
if (!isprint(info->params.interface_type[i])) {
warn_padding++;
}
}
if (warn_padding) {
p += sprintf(p,
"Warning: Spec violation. Padding should be 0x20.\n");
email++;
}
rc = edd_dev_is_type(edev, "PCI");
if (!rc) {
pci_dev = pci_find_slot(info->params.interface_path.pci.bus,
PCI_DEVFN(info->params.interface_path.
pci.slot,
info->params.interface_path.
pci.function));
if (!pci_dev) {
p += sprintf(p, "Error: BIOS says this is a PCI device, but the OS doesn't know\n");
p += sprintf(p, " about a PCI device at %02x:%02x.%01x\n",
info->params.interface_path.pci.bus,
info->params.interface_path.pci.slot,
info->params.interface_path.pci.function);
email++;
}
else {
found_pci++;
}
}
if (found_pci) {
sd = edd_find_matching_scsi_device(edev);
if (!sd) {
p += sprintf(p, "Error: BIOS says this is a SCSI device, but\n");
p += sprintf(p, " the OS doesn't know about this SCSI device.\n");
p += sprintf(p, " Do you have its driver module loaded?\n");
email++;
}
}
out:
if (email) {
p += sprintf(p, "\nPlease check %s\n", REPORT_URL);
p += sprintf(p, "to see if this has been reported. If not,\n");
p += sprintf(p, "please send the information requested there.\n");
}
return (p - buf);
}
static ssize_t
edd_show_version(struct edd_device *edev, char *buf, size_t count, loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
if (!edev || !info || !buf || off) {
return 0;
}
p += sprintf(p, "0x%02x\n", info->version);
return (p - buf);
}
static ssize_t
edd_show_extensions(struct edd_device *edev, char *buf, size_t count,
loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
if (!edev || !info || !buf || off) {
return 0;
}
if (info->interface_support & EDD_EXT_FIXED_DISK_ACCESS) {
p += sprintf(p, "Fixed disk access\n");
}
if (info->interface_support & EDD_EXT_DEVICE_LOCKING_AND_EJECTING) {
p += sprintf(p, "Device locking and ejecting\n");
}
if (info->interface_support & EDD_EXT_ENHANCED_DISK_DRIVE_SUPPORT) {
p += sprintf(p, "Enhanced Disk Drive support\n");
}
if (info->interface_support & EDD_EXT_64BIT_EXTENSIONS) {
p += sprintf(p, "64-bit extensions\n");
}
return (p - buf);
}
static ssize_t
edd_show_info_flags(struct edd_device *edev, char *buf, size_t count,
loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
if (!edev || !info || !buf || off) {
return 0;
}
if (info->params.info_flags & EDD_INFO_DMA_BOUNDRY_ERROR_TRANSPARENT)
p += sprintf(p, "DMA boundry error transparent\n");
if (info->params.info_flags & EDD_INFO_GEOMETRY_VALID)
p += sprintf(p, "geometry valid\n");
if (info->params.info_flags & EDD_INFO_REMOVABLE)
p += sprintf(p, "removable\n");
if (info->params.info_flags & EDD_INFO_WRITE_VERIFY)
p += sprintf(p, "write verify\n");
if (info->params.info_flags & EDD_INFO_MEDIA_CHANGE_NOTIFICATION)
p += sprintf(p, "media change notification\n");
if (info->params.info_flags & EDD_INFO_LOCKABLE)
p += sprintf(p, "lockable\n");
if (info->params.info_flags & EDD_INFO_NO_MEDIA_PRESENT)
p += sprintf(p, "no media present\n");
if (info->params.info_flags & EDD_INFO_USE_INT13_FN50)
p += sprintf(p, "use int13 fn50\n");
return (p - buf);
}
static ssize_t
edd_show_default_cylinders(struct edd_device *edev, char *buf, size_t count,
loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
if (!edev || !info || !buf || off) {
return 0;
}
p += sprintf(p, "0x%x\n", info->params.num_default_cylinders);
return (p - buf);
}
static ssize_t
edd_show_default_heads(struct edd_device *edev, char *buf, size_t count,
loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
if (!edev || !info || !buf || off) {
return 0;
}
p += sprintf(p, "0x%x\n", info->params.num_default_heads);
return (p - buf);
}
static ssize_t
edd_show_default_sectors_per_track(struct edd_device *edev, char *buf,
size_t count, loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
if (!edev || !info || !buf || off) {
return 0;
}
p += sprintf(p, "0x%x\n", info->params.sectors_per_track);
return (p - buf);
}
static ssize_t
edd_show_sectors(struct edd_device *edev, char *buf, size_t count, loff_t off)
{
struct edd_info *info = edd_dev_get_info(edev);
char *p = buf;
if (!edev || !info || !buf || off) {
return 0;
}
p += sprintf(p, "0x%llx\n", info->params.number_of_sectors);
return (p - buf);
}
static EDD_DEVICE_ATTR(raw_data, 0444, edd_show_raw_data);
static EDD_DEVICE_ATTR(version, 0444, edd_show_version);
static EDD_DEVICE_ATTR(extensions, 0444, edd_show_extensions);
static EDD_DEVICE_ATTR(info_flags, 0444, edd_show_info_flags);
static EDD_DEVICE_ATTR(default_cylinders, 0444, edd_show_default_cylinders);
static EDD_DEVICE_ATTR(default_heads, 0444, edd_show_default_heads);
static EDD_DEVICE_ATTR(default_sectors_per_track, 0444,
edd_show_default_sectors_per_track);
static EDD_DEVICE_ATTR(sectors, 0444, edd_show_sectors);
static EDD_DEVICE_ATTR(interface, 0444, edd_show_interface);
static EDD_DEVICE_ATTR(host_bus, 0444, edd_show_host_bus);
/*
* Some device instances may not have all the above attributes,
 * or the attribute values may be meaningless (e.g. a pre-EDD-3.0
 * device won't have host_bus and interface information), so don't
 * bother making files for them.  Likewise, if the
 * default_{cylinders,heads,sectors_per_track} values are zero, the
 * BIOS didn't provide sane values, so don't create files for them
 * either.
 *
 * struct attr_test pairs an attribute with an existence test: a NULL
 * test means the attribute always exists, while a test returning
 * nonzero suppresses the file.  Individual tests may be written for
 * each attribute.
*/
struct attr_test {
struct edd_attribute *attr;
int (*test) (struct edd_device * edev);
};
static int
edd_has_default_cylinders(struct edd_device *edev)
{
struct edd_info *info = edd_dev_get_info(edev);
if (!edev || !info)
return 1;
return !info->params.num_default_cylinders;
}
static int
edd_has_default_heads(struct edd_device *edev)
{
struct edd_info *info = edd_dev_get_info(edev);
if (!edev || !info)
return 1;
return !info->params.num_default_heads;
}
static int
edd_has_default_sectors_per_track(struct edd_device *edev)
{
struct edd_info *info = edd_dev_get_info(edev);
if (!edev || !info)
return 1;
return !info->params.sectors_per_track;
}
static int
edd_has_edd30(struct edd_device *edev)
{
struct edd_info *info = edd_dev_get_info(edev);
int i, nonzero_path = 0;
char c;
if (!edev || !info)
return 1;
if (!(info->params.key == 0xBEDD || info->params.key == 0xDDBE)) {
return 1;
}
for (i = 30; i <= 73; i++) {
c = *(((uint8_t *) edd) + i + 4);
if (c) {
nonzero_path++;
break;
}
}
if (!nonzero_path) {
return 1;
}
return 0;
}
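The key test above accepts the EDD 3.0 device-path signature in either byte order (0xBEDD or its byte-swapped form 0xDDBE, as some BIOSes store it). A minimal userspace sketch of that check (hypothetical helper names, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Byte-swap a 16-bit value, as a BIOS storing the EDD 3.0
 * device-path key in the opposite byte order would present it. */
static uint16_t swap16(uint16_t v)
{
	return (uint16_t) ((v << 8) | (v >> 8));
}

/* Mirror of the key test in edd_has_edd30(): accept either order. */
static int key_is_edd30(uint16_t key)
{
	return key == 0xBEDD || key == 0xDDBE;
}
```

Note that 0xDDBE is exactly `swap16(0xBEDD)`, which is why the driver checks both.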
static struct attr_test def_attrs[] = {
{.attr = &edd_attr_raw_data},
{.attr = &edd_attr_version},
{.attr = &edd_attr_extensions},
{.attr = &edd_attr_info_flags},
{.attr = &edd_attr_sectors},
{.attr = &edd_attr_default_cylinders,
.test = &edd_has_default_cylinders},
{.attr = &edd_attr_default_heads,
.test = &edd_has_default_heads},
{.attr = &edd_attr_default_sectors_per_track,
.test = &edd_has_default_sectors_per_track},
{.attr = &edd_attr_interface,
.test = &edd_has_edd30},
{.attr = &edd_attr_host_bus,
.test = &edd_has_edd30},
{.attr = NULL,.test = NULL},
};
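The attr/test pairing in def_attrs[] can be sketched in plain userspace C (hypothetical names; the kernel attribute types are replaced by stubs): a NULL test means "always create the file", and a nonzero return from a test suppresses it, matching the `!s->test || !s->test(edev)` check in edd_populate_dir().

```c
#include <assert.h>
#include <stddef.h>

struct fake_attr { const char *name; };

struct fake_pair {
	struct fake_attr *attr;
	int (*test) (void *ctx);	/* nonzero => skip this attribute */
};

static int always_skip(void *ctx) { (void) ctx; return 1; }

/* Count how many attribute files would be created for this context. */
static int count_created(struct fake_pair *pairs, void *ctx)
{
	int created = 0;
	for (int i = 0; pairs[i].attr; i++)
		if (!pairs[i].test || !pairs[i].test(ctx))
			created++;	/* would call edd_create_file() here */
	return created;
}
```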
/* edd_get_devpath_length(), edd_fill_devpath(), and edd_device_link()
were taken from linux/drivers/base/fs/device.c. When these
or similar are exported to generic code, remove these.
*/
static int
edd_get_devpath_length(struct device *dev)
{
int length = 1;
struct device *parent = dev;
/* walk up the ancestors until we hit the root.
* Add 1 to strlen for leading '/' of each level.
*/
do {
length += strlen(parent->bus_id) + 1;
parent = parent->parent;
} while (parent);
return length;
}
static void
edd_fill_devpath(struct device *dev, char *path, int length)
{
struct device *parent;
--length;
for (parent = dev; parent; parent = parent->parent) {
int cur = strlen(parent->bus_id);
/* back up enough to print this bus id with '/' */
length -= cur;
strncpy(path + length, parent->bus_id, cur);
*(path + --length) = '/';
}
}
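The two helpers above size and then fill the path right-to-left, walking the parent chain from the leaf device up to the root. A self-contained userspace model (struct node is a stand-in for struct device; names are hypothetical):

```c
#include <assert.h>
#include <string.h>

struct node {
	const char *bus_id;
	struct node *parent;
};

/* Model of edd_get_devpath_length(): one '/' plus the id per level,
 * plus one byte for the trailing NUL. */
static int devpath_length(struct node *dev)
{
	int length = 1;
	for (struct node *p = dev; p; p = p->parent)
		length += strlen(p->bus_id) + 1;
	return length;
}

/* Model of edd_fill_devpath(): write each component backwards so the
 * result reads root-first, e.g. "/pci0/00:01.0". */
static void fill_devpath(struct node *dev, char *path, int length)
{
	--length;
	path[length] = '\0';
	for (struct node *p = dev; p; p = p->parent) {
		int cur = strlen(p->bus_id);
		length -= cur;
		memcpy(path + length, p->bus_id, cur);
		path[--length] = '/';
	}
}
```

Unlike the kernel version (which relies on the caller pre-zeroing the buffer), this sketch terminates the string explicitly.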
static int
edd_device_symlink(struct edd_device *edev, struct device *dev, char *name)
{
char *path;
int length;
int error = 0;
if (!dev->bus || !name)
return 0;
length = edd_get_devpath_length(dev);
	/* now add the path from the edd_device directory:
	 * '../..' climbs out of this device's directory and the
	 * 'bios' directory to the root of the fs, and 'root' then
	 * descends into the device tree.
	 */
length += strlen("../../root");
if (length > PATH_MAX)
return -ENAMETOOLONG;
if (!(path = kmalloc(length, GFP_KERNEL)))
return -ENOMEM;
memset(path, 0, length);
/* our relative position */
strcpy(path, "../../root");
edd_fill_devpath(dev, path, length);
error = driverfs_create_symlink(&edev->dir, name, path);
kfree(path);
return error;
}
/**
* edd_dev_is_type() - is this EDD device a 'type' device?
 * @edev - edd_device to check
* @type - a host bus or interface identifier string per the EDD spec
*
* Returns 0 if it is a 'type' device, nonzero otherwise.
*/
static int
edd_dev_is_type(struct edd_device *edev, const char *type)
{
int rc;
struct edd_info *info = edd_dev_get_info(edev);
if (!edev || !info)
return 1;
rc = strncmp(info->params.host_bus_type, type, strlen(type));
if (!rc)
return 0;
return strncmp(info->params.interface_type, type, strlen(type));
}
/**
* edd_get_pci_dev() - finds pci_dev that matches edev
* @edev - edd_device
*
* Returns pci_dev if found, or NULL
*/
static struct pci_dev *
edd_get_pci_dev(struct edd_device *edev)
{
struct edd_info *info = edd_dev_get_info(edev);
int rc;
rc = edd_dev_is_type(edev, "PCI");
if (rc)
return NULL;
return pci_find_slot(info->params.interface_path.pci.bus,
PCI_DEVFN(info->params.interface_path.pci.slot,
info->params.interface_path.pci.
function));
}
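pci_find_slot() takes a single devfn byte, which PCI_DEVFN() builds by packing five bits of slot number over three bits of function number (as defined in <linux/pci.h>). A standalone sketch of that encoding and its inverse:

```c
#include <assert.h>

/* PCI_DEVFN/PCI_SLOT/PCI_FUNC as in <linux/pci.h>: 5 bits of slot,
 * 3 bits of function, packed into the single devfn byte. */
#define MY_PCI_DEVFN(slot, func)  ((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define MY_PCI_SLOT(devfn)        (((devfn) >> 3) & 0x1f)
#define MY_PCI_FUNC(devfn)        ((devfn) & 0x07)
```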
static int
edd_create_symlink_to_pcidev(struct edd_device *edev)
{
struct pci_dev *pci_dev = edd_get_pci_dev(edev);
if (!pci_dev)
return 1;
return edd_device_symlink(edev, &pci_dev->dev, "pci_dev");
}
/**
* edd_match_scsidev()
* @edev - EDD device is a known SCSI device
 * @sd - scsi_device whose host's parent is a PCI controller
*
* returns 0 on success, 1 on failure
*/
static int
edd_match_scsidev(struct edd_device *edev, struct scsi_device *sd)
{
struct edd_info *info = edd_dev_get_info(edev);
if (!edev || !sd || !info)
return 1;
if ((sd->channel == info->params.interface_path.pci.channel) &&
(sd->id == info->params.device_path.scsi.id) &&
(sd->lun == info->params.device_path.scsi.lun)) {
return 0;
}
return 1;
}
/**
* edd_find_matching_device()
* @edev - edd_device to match
*
* Returns struct scsi_device * on success,
* or NULL on failure.
* This assumes that all children of the PCI controller
* are scsi_hosts, and that all children of scsi_hosts
* are scsi_devices.
* The reference counting probably isn't the best it could be.
*/
#define to_scsi_host(d) \
container_of(d, struct Scsi_Host, host_driverfs_dev)
#define children_to_dev(n) container_of(n,struct device,node)
static struct scsi_device *
edd_find_matching_scsi_device(struct edd_device *edev)
{
struct list_head *shost_node, *sdev_node;
int rc = 1;
struct scsi_device *sd = NULL;
struct device *shost_dev, *sdev_dev;
struct pci_dev *pci_dev;
struct Scsi_Host *sh;
rc = edd_dev_is_type(edev, "SCSI");
if (rc)
return NULL;
pci_dev = edd_get_pci_dev(edev);
if (!pci_dev)
return NULL;
get_device(&pci_dev->dev);
list_for_each(shost_node, &pci_dev->dev.children) {
shost_dev = children_to_dev(shost_node);
get_device(shost_dev);
sh = to_scsi_host(shost_dev);
list_for_each(sdev_node, &shost_dev->children) {
sdev_dev = children_to_dev(sdev_node);
get_device(sdev_dev);
sd = to_scsi_device(sdev_dev);
rc = edd_match_scsidev(edev, sd);
put_device(sdev_dev);
if (!rc)
break;
}
put_device(shost_dev);
if (!rc)
break;
}
put_device(&pci_dev->dev);
if (!rc)
return sd;
return NULL;
}
static int
edd_create_symlink_to_scsidev(struct edd_device *edev)
{
struct scsi_device *sdev;
struct pci_dev *pci_dev;
struct edd_info *info = edd_dev_get_info(edev);
int rc;
rc = edd_dev_is_type(edev, "PCI");
if (rc)
return rc;
pci_dev = pci_find_slot(info->params.interface_path.pci.bus,
PCI_DEVFN(info->params.interface_path.pci.slot,
info->params.interface_path.pci.
function));
if (!pci_dev)
return 1;
sdev = edd_find_matching_scsi_device(edev);
if (!sdev)
return 1;
get_device(&sdev->sdev_driverfs_dev);
rc = edd_device_symlink(edev, &sdev->sdev_driverfs_dev, "disc");
put_device(&sdev->sdev_driverfs_dev);
return rc;
}
static inline int
edd_create_file(struct edd_device *edev, struct edd_attribute *attr)
{
return driverfs_create_file(&attr->attr, &edev->dir);
}
static inline void
edd_device_unregister(struct edd_device *edev)
{
driverfs_remove_file(&edev->dir, "pci_dev");
driverfs_remove_file(&edev->dir, "disc");
driverfs_remove_dir(&edev->dir);
list_del_init(&edev->node);
}
static int
edd_populate_dir(struct edd_device *edev)
{
struct attr_test *s;
int i;
int error = 0;
for (i = 0; def_attrs[i].attr; i++) {
s = &def_attrs[i];
		if (!s->test || !s->test(edev)) {
if ((error = edd_create_file(edev, s->attr))) {
break;
}
}
}
if (error)
return error;
edd_create_symlink_to_pcidev(edev);
edd_create_symlink_to_scsidev(edev);
return 0;
}
static int
edd_make_dir(struct edd_device *edev)
{
int error;
edev->dir.name = edev->name;
edev->dir.mode = (S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO);
edev->dir.ops = &edd_attr_ops;
error = driverfs_create_dir(&edev->dir, &bios_dir);
if (!error)
error = edd_populate_dir(edev);
return error;
}
static int
edd_device_register(struct edd_device *edev, int i)
{
int error;
if (!edev)
return 1;
memset(edev, 0, sizeof (*edev));
edd_dev_set_info(edev, &edd[i]);
snprintf(edev->name, EDD_DEVICE_NAME_SIZE, "int13_dev%02x",
edd[i].device);
error = edd_make_dir(edev);
return error;
}
/**
* edd_init() - creates driverfs tree of EDD data
*
* This assumes that eddnr and edd were
* assigned in setup.c already.
*/
static int __init
edd_init(void)
{
unsigned int i;
int rc;
struct edd_device *edev;
printk(KERN_INFO "BIOS EDD facility v%s, %d devices found\n",
EDD_VERSION, eddnr);
if (!eddnr) {
printk(KERN_INFO "EDD information not available.\n");
return 1;
}
rc = driverfs_create_dir(&bios_dir, NULL);
if (rc)
return rc;
for (i = 0; i < eddnr && i < EDDMAXNR && !rc; i++) {
edev = kmalloc(sizeof (*edev), GFP_KERNEL);
if (!edev)
return -ENOMEM;
rc = edd_device_register(edev, i);
if (rc) {
kfree(edev);
break;
}
edd_devices[i] = edev;
}
if (rc) {
driverfs_remove_dir(&bios_dir);
return rc;
}
return 0;
}
static void __exit
edd_exit(void)
{
int i;
struct edd_device *edev;
for (i = 0; i < eddnr && i < EDDMAXNR; i++) {
if ((edev = edd_devices[i])) {
edd_device_unregister(edev);
kfree(edev);
}
}
driverfs_remove_dir(&bios_dir);
}
late_initcall(edd_init);
module_exit(edd_exit);
...@@ -31,6 +31,7 @@ ...@@ -31,6 +31,7 @@
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
#include <asm/nmi.h> #include <asm/nmi.h>
#include <asm/edd.h>
extern void dump_thread(struct pt_regs *, struct user *); extern void dump_thread(struct pt_regs *, struct user *);
extern spinlock_t rtc_lock; extern spinlock_t rtc_lock;
...@@ -201,3 +202,8 @@ EXPORT_SYMBOL(kmap_atomic); ...@@ -201,3 +202,8 @@ EXPORT_SYMBOL(kmap_atomic);
EXPORT_SYMBOL(kunmap_atomic); EXPORT_SYMBOL(kunmap_atomic);
EXPORT_SYMBOL(kmap_atomic_to_page); EXPORT_SYMBOL(kmap_atomic_to_page);
#endif #endif
#ifdef CONFIG_EDD_MODULE
EXPORT_SYMBOL(edd);
EXPORT_SYMBOL(eddnr);
#endif
...@@ -36,6 +36,7 @@ ...@@ -36,6 +36,7 @@
#include <linux/highmem.h> #include <linux/highmem.h>
#include <asm/e820.h> #include <asm/e820.h>
#include <asm/mpspec.h> #include <asm/mpspec.h>
#include <asm/edd.h>
#include <asm/setup.h> #include <asm/setup.h>
#include <asm/arch_hooks.h> #include <asm/arch_hooks.h>
#include "setup_arch_pre.h" #include "setup_arch_pre.h"
...@@ -466,6 +467,22 @@ static int __init copy_e820_map(struct e820entry * biosmap, int nr_map) ...@@ -466,6 +467,22 @@ static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
return 0; return 0;
} }
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
unsigned char eddnr;
struct edd_info edd[EDDNR];
/**
* copy_edd() - Copy the BIOS EDD information into a safe place.
*
*/
static inline void copy_edd(void)
{
eddnr = EDD_NR;
memcpy(edd, EDD_BUF, sizeof(edd));
}
#else
#define copy_edd() do {} while (0)
#endif
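When EDD support is compiled out, copy_edd() becomes a `do {} while (0)` stub. That idiom makes the macro expand to a single statement, so it stays safe in an unbraced if/else. A small illustration (hypothetical names):

```c
#include <assert.h>

/* The do { } while (0) idiom used for the copy_edd() stub: the macro
 * expands to one statement, avoiding dangling-else surprises. */
#define COPY_EDD_STUB() do { } while (0)

static int branch_demo(int have_edd, int *copied)
{
	if (have_edd)
		*copied = 1;		/* the real copy_edd() would run here */
	else
		COPY_EDD_STUB();	/* expands cleanly as one statement */
	return *copied;
}
```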
/* /*
* Do NOT EVER look at the BIOS memory size location. * Do NOT EVER look at the BIOS memory size location.
* It does not work on many machines. * It does not work on many machines.
...@@ -843,6 +860,7 @@ void __init setup_arch(char **cmdline_p) ...@@ -843,6 +860,7 @@ void __init setup_arch(char **cmdline_p)
#endif #endif
ARCH_SETUP ARCH_SETUP
setup_memory_region(); setup_memory_region();
copy_edd();
if (!MOUNT_ROOT_RDONLY) if (!MOUNT_ROOT_RDONLY)
root_mountflags &= ~MS_RDONLY; root_mountflags &= ~MS_RDONLY;
......
...@@ -136,7 +136,7 @@ void put_device(struct device * dev) ...@@ -136,7 +136,7 @@ void put_device(struct device * dev)
list_del_init(&dev->g_list); list_del_init(&dev->g_list);
up(&device_sem); up(&device_sem);
BUG_ON((dev->state != DEVICE_GONE)); WARN_ON(dev->state != DEVICE_GONE);
device_del(dev); device_del(dev);
} }
......
...@@ -150,6 +150,7 @@ static devfs_handle_t devfs_handle = NULL; ...@@ -150,6 +150,7 @@ static devfs_handle_t devfs_handle = NULL;
static int __init xd_init(void) static int __init xd_init(void)
{ {
u_char i,controller; u_char i,controller;
u_char count = 0;
unsigned int address; unsigned int address;
int err; int err;
......
...@@ -349,8 +349,8 @@ static struct sysrq_key_op *sysrq_key_table[SYSRQ_KEY_TABLE_LENGTH] = { ...@@ -349,8 +349,8 @@ static struct sysrq_key_op *sysrq_key_table[SYSRQ_KEY_TABLE_LENGTH] = {
/* 8 */ &sysrq_loglevel_op, /* 8 */ &sysrq_loglevel_op,
/* 9 */ &sysrq_loglevel_op, /* 9 */ &sysrq_loglevel_op,
/* a */ NULL, /* Don't use for system provided sysrqs, /* a */ NULL, /* Don't use for system provided sysrqs,
it is handled specially on the spark it is handled specially on the sparc
and will never arive */ and will never arrive */
/* b */ &sysrq_reboot_op, /* b */ &sysrq_reboot_op,
/* c */ NULL, /* c */ NULL,
/* d */ NULL, /* d */ NULL,
......
...@@ -121,7 +121,6 @@ ...@@ -121,7 +121,6 @@
#define E100_DEFAULT_CPUSAVER_BUNDLE_MAX 6 #define E100_DEFAULT_CPUSAVER_BUNDLE_MAX 6
#define E100_DEFAULT_CPUSAVER_INTERRUPT_DELAY 0x600 #define E100_DEFAULT_CPUSAVER_INTERRUPT_DELAY 0x600
#define E100_DEFAULT_BUNDLE_SMALL_FR false #define E100_DEFAULT_BUNDLE_SMALL_FR false
#define E100_DEFAULT_RX_CONGESTION_CONTROL true
/* end of configurables */ /* end of configurables */
...@@ -146,8 +145,6 @@ struct driver_stats { ...@@ -146,8 +145,6 @@ struct driver_stats {
unsigned long xmt_tco_pkts; unsigned long xmt_tco_pkts;
unsigned long rcv_tco_pkts; unsigned long rcv_tco_pkts;
unsigned long rx_intr_pkts; unsigned long rx_intr_pkts;
unsigned long rx_tasklet_pkts;
unsigned long poll_intr_switch;
}; };
/* TODO: kill me when we can do C99 */ /* TODO: kill me when we can do C99 */
...@@ -838,7 +835,6 @@ typedef struct _bd_dma_able_t { ...@@ -838,7 +835,6 @@ typedef struct _bd_dma_able_t {
#define PRM_FC 0x00000004 #define PRM_FC 0x00000004
#define PRM_IFS 0x00000008 #define PRM_IFS 0x00000008
#define PRM_BUNDLE_SMALL 0x00000010 #define PRM_BUNDLE_SMALL 0x00000010
#define PRM_RX_CONG 0x00000020
struct cfg_params { struct cfg_params {
int e100_speed_duplex; int e100_speed_duplex;
...@@ -847,7 +843,6 @@ struct cfg_params { ...@@ -847,7 +843,6 @@ struct cfg_params {
int IntDelay; int IntDelay;
int BundleMax; int BundleMax;
int ber; int ber;
int PollingMaxWork;
u32 b_params; u32 b_params;
}; };
struct ethtool_lpbk_data{ struct ethtool_lpbk_data{
...@@ -949,8 +944,6 @@ struct e100_private { ...@@ -949,8 +944,6 @@ struct e100_private {
u32 speed_duplex_caps; /* adapter's speed/duplex capabilities */ u32 speed_duplex_caps; /* adapter's speed/duplex capabilities */
struct tasklet_struct polling_tasklet;
/* WOL params for ethtool */ /* WOL params for ethtool */
u32 wolsupported; u32 wolsupported;
u32 wolopts; u32 wolopts;
...@@ -961,6 +954,7 @@ struct e100_private { ...@@ -961,6 +954,7 @@ struct e100_private {
#ifdef CONFIG_PM #ifdef CONFIG_PM
u32 pci_state[16]; u32 pci_state[16];
#endif #endif
char ifname[IFNAMSIZ];
}; };
#define E100_AUTONEG 0 #define E100_AUTONEG 0
......
...@@ -45,6 +45,15 @@ ...@@ -45,6 +45,15 @@
**********************************************************************/ **********************************************************************/
/* Change Log /* Change Log
*
* 2.1.24 10/7/02
* o Bug fix: Wrong files under /proc/net/PRO_LAN_Adapters/ when interface
* name is changed
 * o Bug fix: Rx skb corruption when the Rx polling code and the Rx
 *    interrupt code run concurrently under stress traffic on a
 *    shared-interrupt system. Removed the Rx polling code
 * o Added detailed printk if selftest fails during insmod
* o Removed misleading printks
* *
* 2.1.12 8/2/02 * 2.1.12 8/2/02
* o Feature: ethtool register dump * o Feature: ethtool register dump
...@@ -62,25 +71,6 @@ ...@@ -62,25 +71,6 @@
* o Bug fix: PHY loopback diagnostic fails * o Bug fix: PHY loopback diagnostic fails
* *
* 2.1.6 7/5/02 * 2.1.6 7/5/02
* o Added device ID support for Dell LOM.
 * o Added device ID support for 82551QM mobile nics.
* o Bug fix: ethtool get/set EEPROM routines modified to use byte
* addressing rather than word addressing.
* o Feature: added MDIX mode support for 82550 and up.
* o Bug fix: added reboot notifer to setup WOL settings when
* shutting system down.
* o Cleanup: removed yield() redefinition (Andrew Morton,
* akpm@zip.com.au).
* o Bug fix: flow control now working when link partner is
* autoneg capable but not flow control capable.
* o Bug fix: added check for corrupted EEPROM
* o Bug fix: don't report checksum offloading for the older
* controllers that don't support the feature.
* o Bug fix: calculate cable diagnostics when link goes down
 * rather than when querying the /proc file.
* o Cleanup: move mdi_access_lock to local get/set mdi routines.
*
* 2.0.30 5/30/02
*/ */
#include <linux/config.h> #include <linux/config.h>
...@@ -94,8 +84,8 @@ ...@@ -94,8 +84,8 @@
#include "e100_vendor.h" #include "e100_vendor.h"
#ifdef CONFIG_PROC_FS #ifdef CONFIG_PROC_FS
extern int e100_create_proc_subdir(struct e100_private *); extern int e100_create_proc_subdir(struct e100_private *, char *);
extern void e100_remove_proc_subdir(struct e100_private *); extern void e100_remove_proc_subdir(struct e100_private *, char *);
#else #else
#define e100_create_proc_subdir(X) 0 #define e100_create_proc_subdir(X) 0
#define e100_remove_proc_subdir(X) do {} while(0) #define e100_remove_proc_subdir(X) do {} while(0)
...@@ -145,7 +135,7 @@ static void e100_non_tx_background(unsigned long); ...@@ -145,7 +135,7 @@ static void e100_non_tx_background(unsigned long);
/* Global Data structures and variables */ /* Global Data structures and variables */
char e100_copyright[] __devinitdata = "Copyright (c) 2002 Intel Corporation"; char e100_copyright[] __devinitdata = "Copyright (c) 2002 Intel Corporation";
char e100_driver_version[]="2.1.15-k1"; char e100_driver_version[]="2.1.24-k1";
const char *e100_full_driver_name = "Intel(R) PRO/100 Network Driver"; const char *e100_full_driver_name = "Intel(R) PRO/100 Network Driver";
char e100_short_driver_name[] = "e100"; char e100_short_driver_name[] = "e100";
static int e100nics = 0; static int e100nics = 0;
...@@ -154,12 +144,19 @@ static int e100nics = 0; ...@@ -154,12 +144,19 @@ static int e100nics = 0;
static int e100_notify_reboot(struct notifier_block *, unsigned long event, void *ptr); static int e100_notify_reboot(struct notifier_block *, unsigned long event, void *ptr);
static int e100_suspend(struct pci_dev *pcid, u32 state); static int e100_suspend(struct pci_dev *pcid, u32 state);
static int e100_resume(struct pci_dev *pcid); static int e100_resume(struct pci_dev *pcid);
struct notifier_block e100_notifier = { struct notifier_block e100_notifier_reboot = {
notifier_call: e100_notify_reboot, notifier_call: e100_notify_reboot,
next: NULL, next: NULL,
priority: 0 priority: 0
}; };
#endif #endif
static int e100_notify_netdev(struct notifier_block *, unsigned long event, void *ptr);
struct notifier_block e100_notifier_netdev = {
notifier_call: e100_notify_netdev,
next: NULL,
priority: 0
};
static void e100_get_mdix_status(struct e100_private *bdp); static void e100_get_mdix_status(struct e100_private *bdp);
...@@ -349,9 +346,7 @@ e100_alloc_skbs(struct e100_private *bdp) ...@@ -349,9 +346,7 @@ e100_alloc_skbs(struct e100_private *bdp)
} }
void e100_tx_srv(struct e100_private *); void e100_tx_srv(struct e100_private *);
u32 e100_rx_srv(struct e100_private *, u32, int *); u32 e100_rx_srv(struct e100_private *);
void e100_polling_tasklet(unsigned long);
void e100_watchdog(struct net_device *); void e100_watchdog(struct net_device *);
static void e100_do_hwi(struct net_device *); static void e100_do_hwi(struct net_device *);
...@@ -379,9 +374,6 @@ E100_PARAM(IntDelay, "Value for CPU saver's interrupt delay"); ...@@ -379,9 +374,6 @@ E100_PARAM(IntDelay, "Value for CPU saver's interrupt delay");
E100_PARAM(BundleSmallFr, "Disable or enable interrupt bundling of small frames"); E100_PARAM(BundleSmallFr, "Disable or enable interrupt bundling of small frames");
E100_PARAM(BundleMax, "Maximum number for CPU saver's packet bundling"); E100_PARAM(BundleMax, "Maximum number for CPU saver's packet bundling");
E100_PARAM(IFS, "Disable or enable the adaptive IFS algorithm"); E100_PARAM(IFS, "Disable or enable the adaptive IFS algorithm");
E100_PARAM(RxCongestionControl, "Disable or enable switch to polling mode");
E100_PARAM(PollingMaxWork, "Max number of receive packets processed on single "
"polling call");
/** /**
 * e100_exec_cmd - issue a command * e100_exec_cmd - issue a command
...@@ -656,6 +648,8 @@ e100_found1(struct pci_dev *pcid, const struct pci_device_id *ent) ...@@ -656,6 +648,8 @@ e100_found1(struct pci_dev *pcid, const struct pci_device_id *ent)
if ((rc = register_netdev(dev)) != 0) { if ((rc = register_netdev(dev)) != 0) {
goto err_pci; goto err_pci;
} }
memcpy(bdp->ifname, dev->name, IFNAMSIZ);
bdp->ifname[IFNAMSIZ-1] = 0;
bdp->device_type = ent->driver_data; bdp->device_type = ent->driver_data;
printk(KERN_NOTICE printk(KERN_NOTICE
...@@ -674,7 +668,7 @@ e100_found1(struct pci_dev *pcid, const struct pci_device_id *ent) ...@@ -674,7 +668,7 @@ e100_found1(struct pci_dev *pcid, const struct pci_device_id *ent)
bdp->cable_status = "Not available"; bdp->cable_status = "Not available";
} }
if (e100_create_proc_subdir(bdp) < 0) { if (e100_create_proc_subdir(bdp, bdp->ifname) < 0) {
printk(KERN_ERR "e100: Failed to create proc dir for %s\n", printk(KERN_ERR "e100: Failed to create proc dir for %s\n",
bdp->device->name); bdp->device->name);
} }
...@@ -738,7 +732,7 @@ e100_remove1(struct pci_dev *pcid) ...@@ -738,7 +732,7 @@ e100_remove1(struct pci_dev *pcid)
unregister_netdev(dev); unregister_netdev(dev);
e100_remove_proc_subdir(bdp); e100_remove_proc_subdir(bdp, bdp->ifname);
e100_sw_reset(bdp, PORT_SELECTIVE_RESET); e100_sw_reset(bdp, PORT_SELECTIVE_RESET);
...@@ -772,10 +766,12 @@ e100_init_module(void) ...@@ -772,10 +766,12 @@ e100_init_module(void)
int ret; int ret;
ret = pci_module_init(&e100_driver); ret = pci_module_init(&e100_driver);
if(ret >= 0) {
#ifdef CONFIG_PM #ifdef CONFIG_PM
if(ret >= 0) register_reboot_notifier(&e100_notifier_reboot);
register_reboot_notifier(&e100_notifier);
#endif #endif
register_netdevice_notifier(&e100_notifier_netdev);
}
return ret; return ret;
} }
...@@ -784,8 +780,9 @@ static void __exit ...@@ -784,8 +780,9 @@ static void __exit
e100_cleanup_module(void) e100_cleanup_module(void)
{ {
#ifdef CONFIG_PM #ifdef CONFIG_PM
unregister_reboot_notifier(&e100_notifier); unregister_reboot_notifier(&e100_notifier_reboot);
#endif #endif
unregister_netdevice_notifier(&e100_notifier_netdev);
pci_unregister_driver(&e100_driver); pci_unregister_driver(&e100_driver);
} }
...@@ -856,14 +853,6 @@ e100_check_options(int board, struct e100_private *bdp) ...@@ -856,14 +853,6 @@ e100_check_options(int board, struct e100_private *bdp)
0xFFFF, E100_DEFAULT_CPUSAVER_BUNDLE_MAX, 0xFFFF, E100_DEFAULT_CPUSAVER_BUNDLE_MAX,
"CPU saver bundle max value"); "CPU saver bundle max value");
e100_set_bool_option(bdp, RxCongestionControl[board], PRM_RX_CONG,
E100_DEFAULT_RX_CONGESTION_CONTROL,
"Rx Congestion Control value");
e100_set_int_option(&(bdp->params.PollingMaxWork),
PollingMaxWork[board], 1, E100_MAX_RFD,
bdp->params.RxDescriptors,
"Polling Max Work value");
} }
/** /**
...@@ -991,11 +980,6 @@ e100_open(struct net_device *dev) ...@@ -991,11 +980,6 @@ e100_open(struct net_device *dev)
del_timer_sync(&bdp->watchdog_timer); del_timer_sync(&bdp->watchdog_timer);
goto err_exit; goto err_exit;
} }
if (bdp->params.b_params & PRM_RX_CONG) {
DECLARE_TASKLET(polling_tasklet,
e100_polling_tasklet, (unsigned long) bdp);
bdp->polling_tasklet = polling_tasklet;
}
bdp->intr_mask = 0; bdp->intr_mask = 0;
e100_set_intr_mask(bdp); e100_set_intr_mask(bdp);
...@@ -1024,10 +1008,6 @@ e100_close(struct net_device *dev) ...@@ -1024,10 +1008,6 @@ e100_close(struct net_device *dev)
free_irq(dev->irq, dev); free_irq(dev->irq, dev);
e100_clear_pools(bdp); e100_clear_pools(bdp);
if (bdp->params.b_params & PRM_RX_CONG) {
tasklet_kill(&(bdp->polling_tasklet));
}
/* set the isolate flag to false, so e100_open can be called */ /* set the isolate flag to false, so e100_open can be called */
bdp->driver_isolated = false; bdp->driver_isolated = false;
...@@ -1263,12 +1243,21 @@ e100_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) ...@@ -1263,12 +1243,21 @@ e100_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
static unsigned char __devinit static unsigned char __devinit
e100_init(struct e100_private *bdp) e100_init(struct e100_private *bdp)
{ {
u32 st_timeout = 0;
u32 st_result = 0;
e100_sw_init(bdp); e100_sw_init(bdp);
if (!e100_selftest(bdp, NULL, NULL)) { if (!e100_selftest(bdp, &st_timeout, &st_result)) {
printk(KERN_ERR "e100: selftest failed\n"); if (st_timeout) {
printk(KERN_ERR "e100: selftest timeout\n");
} else {
printk(KERN_ERR "e100: selftest failed. Results: %x\n",
st_result);
}
return false; return false;
} }
else
printk(KERN_DEBUG "e100: selftest OK.\n");
/* read the MAC address from the eprom */ /* read the MAC address from the eprom */
e100_rd_eaddr(bdp); e100_rd_eaddr(bdp);
...@@ -1802,47 +1791,6 @@ e100_manage_adaptive_ifs(struct e100_private *bdp) ...@@ -1802,47 +1791,6 @@ e100_manage_adaptive_ifs(struct e100_private *bdp)
} }
} }
void
e100_polling_tasklet(unsigned long ptr)
{
struct e100_private *bdp = (struct e100_private *) ptr;
unsigned int rx_congestion = 0;
u32 skb_cnt;
/* the device is closed, don't continue or else bad things may happen. */
if (!netif_running(bdp->device)) {
return;
}
read_lock(&(bdp->isolate_lock));
if (bdp->driver_isolated) {
tasklet_schedule(&(bdp->polling_tasklet));
goto exit;
}
e100_alloc_skbs(bdp);
skb_cnt = e100_rx_srv(bdp, bdp->params.PollingMaxWork, &rx_congestion);
bdp->drv_stats.rx_tasklet_pkts += skb_cnt;
if (rx_congestion || skb_cnt) {
tasklet_schedule(&(bdp->polling_tasklet));
} else {
bdp->intr_mask &= ~SCB_INT_MASK;
bdp->drv_stats.poll_intr_switch++;
}
bdp->tx_count = 0; /* restart tx interrupt batch count */
e100_tx_srv(bdp);
e100_set_intr_mask(bdp);
exit:
read_unlock(&(bdp->isolate_lock));
}
/** /**
* e100intr - interrupt handler * e100intr - interrupt handler
* @irq: the IRQ number * @irq: the IRQ number
...@@ -1892,18 +1840,8 @@ e100intr(int irq, void *dev_inst, struct pt_regs *regs) ...@@ -1892,18 +1840,8 @@ e100intr(int irq, void *dev_inst, struct pt_regs *regs)
/* do recv work if any */ /* do recv work if any */
if (intr_status & if (intr_status &
(SCB_STATUS_ACK_FR | SCB_STATUS_ACK_RNR | SCB_STATUS_ACK_SWI)) { (SCB_STATUS_ACK_FR | SCB_STATUS_ACK_RNR | SCB_STATUS_ACK_SWI))
int rx_congestion; bdp->drv_stats.rx_intr_pkts += e100_rx_srv(bdp);
bdp->drv_stats.rx_intr_pkts +=
e100_rx_srv(bdp, 0, &rx_congestion);
if ((bdp->params.b_params & PRM_RX_CONG) && rx_congestion) {
bdp->intr_mask |= SCB_INT_MASK;
tasklet_schedule(&(bdp->polling_tasklet));
bdp->drv_stats.poll_intr_switch++;
}
}
/* clean up after tx'ed packets */ /* clean up after tx'ed packets */
if (intr_status & (SCB_STATUS_ACK_CNA | SCB_STATUS_ACK_CX)) { if (intr_status & (SCB_STATUS_ACK_CNA | SCB_STATUS_ACK_CX)) {
...@@ -1997,8 +1935,7 @@ e100_tx_srv(struct e100_private *bdp) ...@@ -1997,8 +1935,7 @@ e100_tx_srv(struct e100_private *bdp)
* It returns the number of serviced RFDs. * It returns the number of serviced RFDs.
*/ */
u32 u32
e100_rx_srv(struct e100_private *bdp, u32 max_number_of_rfds, e100_rx_srv(struct e100_private *bdp)
int *rx_congestion)
{ {
rfd_t *rfd; /* new rfd, received rfd */ rfd_t *rfd; /* new rfd, received rfd */
int i; int i;
...@@ -2009,10 +1946,6 @@ e100_rx_srv(struct e100_private *bdp, u32 max_number_of_rfds, ...@@ -2009,10 +1946,6 @@ e100_rx_srv(struct e100_private *bdp, u32 max_number_of_rfds,
struct rx_list_elem *rx_struct; struct rx_list_elem *rx_struct;
u32 rfd_cnt = 0; u32 rfd_cnt = 0;
if (rx_congestion) {
*rx_congestion = 0;
}
dev = bdp->device; dev = bdp->device;
/* current design of rx is as following: /* current design of rx is as following:
...@@ -2027,9 +1960,6 @@ e100_rx_srv(struct e100_private *bdp, u32 max_number_of_rfds, ...@@ -2027,9 +1960,6 @@ e100_rx_srv(struct e100_private *bdp, u32 max_number_of_rfds,
* (watchdog trigger SWI intr and isr should allocate new skbs) * (watchdog trigger SWI intr and isr should allocate new skbs)
*/ */
for (i = 0; i < bdp->params.RxDescriptors; i++) { for (i = 0; i < bdp->params.RxDescriptors; i++) {
if (max_number_of_rfds && (rfd_cnt >= max_number_of_rfds)) {
break;
}
if (list_empty(&(bdp->active_rx_list))) { if (list_empty(&(bdp->active_rx_list))) {
break; break;
} }
...@@ -2094,20 +2024,12 @@ e100_rx_srv(struct e100_private *bdp, u32 max_number_of_rfds, ...@@ -2094,20 +2024,12 @@ e100_rx_srv(struct e100_private *bdp, u32 max_number_of_rfds,
} else { } else {
skb->ip_summed = CHECKSUM_NONE; skb->ip_summed = CHECKSUM_NONE;
} }
switch (netif_rx(skb)) { switch (netif_rx(skb)) {
case NET_RX_BAD: case NET_RX_BAD:
break;
case NET_RX_DROP: case NET_RX_DROP:
case NET_RX_CN_MOD: case NET_RX_CN_MOD:
case NET_RX_CN_HIGH: case NET_RX_CN_HIGH:
if (bdp->params.b_params & PRM_RX_CONG) { break;
if (rx_congestion) {
*rx_congestion = 1;
}
}
/* FALL THROUGH TO STATISTICS UPDATE */
default: default:
bdp->drv_stats.net_stats.rx_bytes += skb->len; bdp->drv_stats.net_stats.rx_bytes += skb->len;
break; break;
...@@ -3032,11 +2954,6 @@ e100_print_brd_conf(struct e100_private *bdp) ...@@ -3032,11 +2954,6 @@ e100_print_brd_conf(struct e100_private *bdp)
" Mem:0x%08lx IRQ:%d Speed:%d Mbps Dx:%s\n", " Mem:0x%08lx IRQ:%d Speed:%d Mbps Dx:%s\n",
(unsigned long) bdp->device->mem_start, (unsigned long) bdp->device->mem_start,
bdp->device->irq, 0, "N/A"); bdp->device->irq, 0, "N/A");
/* Auto negotiation failed so we should display an error */
printk(KERN_NOTICE " Failed to detect cable link\n");
printk(KERN_NOTICE " Speed and duplex will be determined "
"at time of connection\n");
} }
/* Print the string if checksum Offloading was enabled */ /* Print the string if checksum Offloading was enabled */
...@@ -4176,11 +4093,34 @@ e100_non_tx_background(unsigned long ptr) ...@@ -4176,11 +4093,34 @@ e100_non_tx_background(unsigned long ptr)
spin_unlock_bh(&(bdp->bd_non_tx_lock)); spin_unlock_bh(&(bdp->bd_non_tx_lock));
} }
int e100_notify_netdev(struct notifier_block *nb, unsigned long event, void *p)
{
struct e100_private *bdp;
struct net_device *netdev = p;
if(netdev == NULL)
return NOTIFY_DONE;
switch(event) {
case NETDEV_CHANGENAME:
if(netdev->open == e100_open) {
bdp = netdev->priv;
/* rename the proc nodes the easy way */
e100_remove_proc_subdir(bdp, bdp->ifname);
memcpy(bdp->ifname, netdev->name, IFNAMSIZ);
bdp->ifname[IFNAMSIZ-1] = 0;
e100_create_proc_subdir(bdp, bdp->ifname);
}
break;
}
return NOTIFY_DONE;
}
#ifdef CONFIG_PM #ifdef CONFIG_PM
static int static int
e100_notify_reboot(struct notifier_block *nb, unsigned long event, void *p) e100_notify_reboot(struct notifier_block *nb, unsigned long event, void *p)
{ {
struct pci_dev *pdev = NULL; struct pci_dev *pdev;
switch(event) { switch(event) {
case SYS_DOWN: case SYS_DOWN:
......
...@@ -249,8 +249,6 @@ static e100_proc_entry e100_proc_list[] = { ...@@ -249,8 +249,6 @@ static e100_proc_entry e100_proc_list[] = {
{"Rx_TCO_Packets", read_gen_ulong, 0, bdp_drv_off(rcv_tco_pkts)}, {"Rx_TCO_Packets", read_gen_ulong, 0, bdp_drv_off(rcv_tco_pkts)},
{"\n",}, {"\n",},
{"Rx_Interrupt_Packets", read_gen_ulong, 0, bdp_drv_off(rx_intr_pkts)}, {"Rx_Interrupt_Packets", read_gen_ulong, 0, bdp_drv_off(rx_intr_pkts)},
{"Rx_Polling_Packets", read_gen_ulong, 0, bdp_drv_off(rx_tasklet_pkts)},
{"Polling_Interrupt_Switch", read_gen_ulong, 0, bdp_drv_off(poll_intr_switch)},
{"Identify_Adapter", 0, write_blink_led_timer, 0}, {"Identify_Adapter", 0, write_blink_led_timer, 0},
{"", 0, 0, 0} {"", 0, 0, 0}
}; };
...@@ -291,7 +289,7 @@ read_info(char *page, char **start, off_t off, int count, int *eof, void *data) ...@@ -291,7 +289,7 @@ read_info(char *page, char **start, off_t off, int count, int *eof, void *data)
return generic_read(page, start, off, count, eof, len); return generic_read(page, start, off, count, eof, len);
} }
static struct proc_dir_entry * __devinit static struct proc_dir_entry *
create_proc_rw(char *name, void *data, struct proc_dir_entry *parent, create_proc_rw(char *name, void *data, struct proc_dir_entry *parent,
read_proc_t * read_proc, write_proc_t * write_proc) read_proc_t * read_proc, write_proc_t * write_proc)
{ {
...@@ -318,7 +316,7 @@ create_proc_rw(char *name, void *data, struct proc_dir_entry *parent, ...@@ -318,7 +316,7 @@ create_proc_rw(char *name, void *data, struct proc_dir_entry *parent,
} }
void void
e100_remove_proc_subdir(struct e100_private *bdp) e100_remove_proc_subdir(struct e100_private *bdp, char *name)
{ {
e100_proc_entry *pe; e100_proc_entry *pe;
char info[256]; char info[256];
...@@ -329,8 +327,8 @@ e100_remove_proc_subdir(struct e100_private *bdp) ...@@ -329,8 +327,8 @@ e100_remove_proc_subdir(struct e100_private *bdp)
return; return;
} }
len = strlen(bdp->device->name); len = strlen(bdp->ifname);
strncpy(info, bdp->device->name, sizeof (info)); strncpy(info, bdp->ifname, sizeof (info));
strncat(info + len, ".info", sizeof (info) - len); strncat(info + len, ".info", sizeof (info) - len);
if (bdp->proc_parent) { if (bdp->proc_parent) {
...@@ -341,7 +339,7 @@ e100_remove_proc_subdir(struct e100_private *bdp) ...@@ -341,7 +339,7 @@ e100_remove_proc_subdir(struct e100_private *bdp)
remove_proc_entry(pe->name, bdp->proc_parent); remove_proc_entry(pe->name, bdp->proc_parent);
} }
remove_proc_entry(bdp->device->name, adapters_proc_dir); remove_proc_entry(bdp->ifname, adapters_proc_dir);
bdp->proc_parent = NULL; bdp->proc_parent = NULL;
} }
...@@ -351,7 +349,7 @@ e100_remove_proc_subdir(struct e100_private *bdp) ...@@ -351,7 +349,7 @@ e100_remove_proc_subdir(struct e100_private *bdp)
e100_proc_cleanup(); e100_proc_cleanup();
} }
int __devinit int
e100_create_proc_subdir(struct e100_private *bdp) e100_create_proc_subdir(struct e100_private *bdp)
{ {
struct proc_dir_entry *dev_dir; struct proc_dir_entry *dev_dir;
...@@ -366,7 +364,7 @@ e100_create_proc_subdir(struct e100_private *bdp) ...@@ -366,7 +364,7 @@ e100_create_proc_subdir(struct e100_private *bdp)
return -ENOMEM; return -ENOMEM;
} }
strncpy(info, bdp->device->name, sizeof (info)); strncpy(info, bdp->ifname, sizeof (info));
len = strlen(info); len = strlen(info);
strncat(info + len, ".info", sizeof (info) - len); strncat(info + len, ".info", sizeof (info) - len);
...@@ -376,12 +374,12 @@ e100_create_proc_subdir(struct e100_private *bdp) ...@@ -376,12 +374,12 @@ e100_create_proc_subdir(struct e100_private *bdp)
return -ENOMEM; return -ENOMEM;
} }
dev_dir = create_proc_entry(bdp->device->name, S_IFDIR, dev_dir = create_proc_entry(bdp->ifname, S_IFDIR,
adapters_proc_dir); adapters_proc_dir);
bdp->proc_parent = dev_dir; bdp->proc_parent = dev_dir;
if (!dev_dir) { if (!dev_dir) {
e100_remove_proc_subdir(bdp); e100_remove_proc_subdir(bdp, bdp->ifname);
return -ENOMEM; return -ENOMEM;
} }
...@@ -396,7 +394,7 @@ e100_create_proc_subdir(struct e100_private *bdp) ...@@ -396,7 +394,7 @@ e100_create_proc_subdir(struct e100_private *bdp)
if (!(create_proc_rw(pe->name, data, dev_dir, if (!(create_proc_rw(pe->name, data, dev_dir,
pe->read_proc, pe->write_proc))) { pe->read_proc, pe->write_proc))) {
e100_remove_proc_subdir(bdp); e100_remove_proc_subdir(bdp, bdp->ifname);
return -ENOMEM; return -ENOMEM;
} }
} }
......
...@@ -95,6 +95,15 @@ struct e1000_adapter; ...@@ -95,6 +95,15 @@ struct e1000_adapter;
#define E1000_RXBUFFER_8192 8192 #define E1000_RXBUFFER_8192 8192
#define E1000_RXBUFFER_16384 16384 #define E1000_RXBUFFER_16384 16384
/* Flow Control High-Watermark: 43464 bytes */
#define E1000_FC_HIGH_THRESH 0xA9C8
/* Flow Control Low-Watermark: 43456 bytes */
#define E1000_FC_LOW_THRESH 0xA9C0
/* Flow Control Pause Time: 858 usec */
#define E1000_FC_PAUSE_TIME 0x0680
/* How many Tx Descriptors do we need to call netif_wake_queue ? */ /* How many Tx Descriptors do we need to call netif_wake_queue ? */
#define E1000_TX_QUEUE_WAKE 16 #define E1000_TX_QUEUE_WAKE 16
/* How many Rx Buffers do we bundle into one write to the hardware ? */ /* How many Rx Buffers do we bundle into one write to the hardware ? */
...@@ -194,5 +203,6 @@ struct e1000_adapter { ...@@ -194,5 +203,6 @@ struct e1000_adapter {
uint32_t pci_state[16]; uint32_t pci_state[16];
char ifname[IFNAMSIZ];
}; };
#endif /* _E1000_H_ */ #endif /* _E1000_H_ */
...@@ -117,7 +117,8 @@ e1000_ethtool_sset(struct e1000_adapter *adapter, struct ethtool_cmd *ecmd) ...@@ -117,7 +117,8 @@ e1000_ethtool_sset(struct e1000_adapter *adapter, struct ethtool_cmd *ecmd)
if(ecmd->autoneg == AUTONEG_ENABLE) { if(ecmd->autoneg == AUTONEG_ENABLE) {
hw->autoneg = 1; hw->autoneg = 1;
hw->autoneg_advertised = (ecmd->advertising & 0x002F); hw->autoneg_advertised = 0x002F;
ecmd->advertising = 0x002F;
} else { } else {
hw->autoneg = 0; hw->autoneg = 0;
switch(ecmd->speed + ecmd->duplex) { switch(ecmd->speed + ecmd->duplex) {
...@@ -228,6 +229,7 @@ e1000_ethtool_geeprom(struct e1000_adapter *adapter, ...@@ -228,6 +229,7 @@ e1000_ethtool_geeprom(struct e1000_adapter *adapter,
for(i = 0; i <= (last_word - first_word); i++) for(i = 0; i <= (last_word - first_word); i++)
e1000_read_eeprom(hw, first_word + i, &eeprom_buff[i]); e1000_read_eeprom(hw, first_word + i, &eeprom_buff[i]);
return 0; return 0;
} }
...@@ -290,7 +292,6 @@ e1000_ethtool_gwol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol) ...@@ -290,7 +292,6 @@ e1000_ethtool_gwol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
case E1000_DEV_ID_82543GC_FIBER: case E1000_DEV_ID_82543GC_FIBER:
case E1000_DEV_ID_82543GC_COPPER: case E1000_DEV_ID_82543GC_COPPER:
case E1000_DEV_ID_82544EI_FIBER: case E1000_DEV_ID_82544EI_FIBER:
default:
wol->supported = 0; wol->supported = 0;
wol->wolopts = 0; wol->wolopts = 0;
return; return;
...@@ -304,14 +305,7 @@ e1000_ethtool_gwol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol) ...@@ -304,14 +305,7 @@ e1000_ethtool_gwol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
} }
/* Fall Through */ /* Fall Through */
case E1000_DEV_ID_82544EI_COPPER: default:
case E1000_DEV_ID_82544GC_COPPER:
case E1000_DEV_ID_82544GC_LOM:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
case E1000_DEV_ID_82545EM_FIBER:
case E1000_DEV_ID_82546EB_COPPER:
wol->supported = WAKE_PHY | WAKE_UCAST | wol->supported = WAKE_PHY | WAKE_UCAST |
WAKE_MCAST | WAKE_BCAST | WAKE_MAGIC; WAKE_MCAST | WAKE_BCAST | WAKE_MAGIC;
...@@ -340,7 +334,6 @@ e1000_ethtool_swol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol) ...@@ -340,7 +334,6 @@ e1000_ethtool_swol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
case E1000_DEV_ID_82543GC_FIBER: case E1000_DEV_ID_82543GC_FIBER:
case E1000_DEV_ID_82543GC_COPPER: case E1000_DEV_ID_82543GC_COPPER:
case E1000_DEV_ID_82544EI_FIBER: case E1000_DEV_ID_82544EI_FIBER:
default:
return wol->wolopts ? -EOPNOTSUPP : 0; return wol->wolopts ? -EOPNOTSUPP : 0;
case E1000_DEV_ID_82546EB_FIBER: case E1000_DEV_ID_82546EB_FIBER:
...@@ -349,14 +342,7 @@ e1000_ethtool_swol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol) ...@@ -349,14 +342,7 @@ e1000_ethtool_swol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
return wol->wolopts ? -EOPNOTSUPP : 0; return wol->wolopts ? -EOPNOTSUPP : 0;
/* Fall Through */ /* Fall Through */
case E1000_DEV_ID_82544EI_COPPER: default:
case E1000_DEV_ID_82544GC_COPPER:
case E1000_DEV_ID_82544GC_LOM:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
case E1000_DEV_ID_82545EM_FIBER:
case E1000_DEV_ID_82546EB_COPPER:
if(wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE)) if(wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE))
return -EOPNOTSUPP; return -EOPNOTSUPP;
...@@ -518,7 +504,8 @@ e1000_ethtool_ioctl(struct net_device *netdev, struct ifreq *ifr) ...@@ -518,7 +504,8 @@ e1000_ethtool_ioctl(struct net_device *netdev, struct ifreq *ifr)
if(copy_from_user(&eeprom, addr, sizeof(eeprom))) if(copy_from_user(&eeprom, addr, sizeof(eeprom)))
return -EFAULT; return -EFAULT;
if((err = e1000_ethtool_geeprom(adapter, &eeprom, eeprom_buff))<0) if((err = e1000_ethtool_geeprom(adapter,
&eeprom, eeprom_buff)))
return err; return err;
if(copy_to_user(addr, &eeprom, sizeof(eeprom))) if(copy_to_user(addr, &eeprom, sizeof(eeprom)))
......
...@@ -47,9 +47,9 @@ static void e1000_lower_ee_clk(struct e1000_hw *hw, uint32_t *eecd); ...@@ -47,9 +47,9 @@ static void e1000_lower_ee_clk(struct e1000_hw *hw, uint32_t *eecd);
static void e1000_shift_out_ee_bits(struct e1000_hw *hw, uint16_t data, uint16_t count); static void e1000_shift_out_ee_bits(struct e1000_hw *hw, uint16_t data, uint16_t count);
static uint16_t e1000_shift_in_ee_bits(struct e1000_hw *hw); static uint16_t e1000_shift_in_ee_bits(struct e1000_hw *hw);
static void e1000_setup_eeprom(struct e1000_hw *hw); static void e1000_setup_eeprom(struct e1000_hw *hw);
static void e1000_standby_eeprom(struct e1000_hw *hw);
static void e1000_clock_eeprom(struct e1000_hw *hw); static void e1000_clock_eeprom(struct e1000_hw *hw);
static void e1000_cleanup_eeprom(struct e1000_hw *hw); static void e1000_cleanup_eeprom(struct e1000_hw *hw);
static void e1000_standby_eeprom(struct e1000_hw *hw);
static int32_t e1000_id_led_init(struct e1000_hw * hw); static int32_t e1000_id_led_init(struct e1000_hw * hw);
/****************************************************************************** /******************************************************************************
...@@ -88,6 +88,9 @@ e1000_set_mac_type(struct e1000_hw *hw) ...@@ -88,6 +88,9 @@ e1000_set_mac_type(struct e1000_hw *hw)
break; break;
case E1000_DEV_ID_82540EM: case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM: case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
hw->mac_type = e1000_82540; hw->mac_type = e1000_82540;
break; break;
case E1000_DEV_ID_82545EM_COPPER: case E1000_DEV_ID_82545EM_COPPER:
...@@ -655,6 +658,8 @@ e1000_setup_copper_link(struct e1000_hw *hw) ...@@ -655,6 +658,8 @@ e1000_setup_copper_link(struct e1000_hw *hw)
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
} }
phy_data |= M88E1000_EPSCR_TX_CLK_25; phy_data |= M88E1000_EPSCR_TX_CLK_25;
if (hw->phy_revision < M88E1011_I_REV_4) {
/* Configure Master and Slave downshift values */ /* Configure Master and Slave downshift values */
phy_data &= ~(M88E1000_EPSCR_MASTER_DOWNSHIFT_MASK | phy_data &= ~(M88E1000_EPSCR_MASTER_DOWNSHIFT_MASK |
M88E1000_EPSCR_SLAVE_DOWNSHIFT_MASK); M88E1000_EPSCR_SLAVE_DOWNSHIFT_MASK);
...@@ -664,6 +669,7 @@ e1000_setup_copper_link(struct e1000_hw *hw) ...@@ -664,6 +669,7 @@ e1000_setup_copper_link(struct e1000_hw *hw)
DEBUGOUT("PHY Write Error\n"); DEBUGOUT("PHY Write Error\n");
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
} }
}
/* SW Reset the PHY so all changes take effect */ /* SW Reset the PHY so all changes take effect */
ret_val = e1000_phy_reset(hw); ret_val = e1000_phy_reset(hw);
...@@ -1008,7 +1014,6 @@ e1000_phy_force_speed_duplex(struct e1000_hw *hw) ...@@ -1008,7 +1014,6 @@ e1000_phy_force_speed_duplex(struct e1000_hw *hw)
/* Write the configured values back to the Device Control Reg. */ /* Write the configured values back to the Device Control Reg. */
E1000_WRITE_REG(hw, CTRL, ctrl); E1000_WRITE_REG(hw, CTRL, ctrl);
/* Write the MII Control Register with the new PHY configuration. */
if(e1000_read_phy_reg(hw, M88E1000_PHY_SPEC_CTRL, &phy_data) < 0) { if(e1000_read_phy_reg(hw, M88E1000_PHY_SPEC_CTRL, &phy_data) < 0) {
DEBUGOUT("PHY Read Error\n"); DEBUGOUT("PHY Read Error\n");
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
...@@ -1026,6 +1031,8 @@ e1000_phy_force_speed_duplex(struct e1000_hw *hw) ...@@ -1026,6 +1031,8 @@ e1000_phy_force_speed_duplex(struct e1000_hw *hw)
/* Need to reset the PHY or these changes will be ignored */ /* Need to reset the PHY or these changes will be ignored */
mii_ctrl_reg |= MII_CR_RESET; mii_ctrl_reg |= MII_CR_RESET;
/* Write back the modified PHY MII control register. */
if(e1000_write_phy_reg(hw, PHY_CTRL, mii_ctrl_reg) < 0) { if(e1000_write_phy_reg(hw, PHY_CTRL, mii_ctrl_reg) < 0) {
DEBUGOUT("PHY Write Error\n"); DEBUGOUT("PHY Write Error\n");
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
...@@ -2100,6 +2107,7 @@ e1000_detect_gig_phy(struct e1000_hw *hw) ...@@ -2100,6 +2107,7 @@ e1000_detect_gig_phy(struct e1000_hw *hw)
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
} }
hw->phy_id |= (uint32_t) (phy_id_low & PHY_REVISION_MASK); hw->phy_id |= (uint32_t) (phy_id_low & PHY_REVISION_MASK);
hw->phy_revision = (uint32_t) phy_id_low & ~PHY_REVISION_MASK;
switch(hw->mac_type) { switch(hw->mac_type) {
case e1000_82543: case e1000_82543:
...@@ -2242,7 +2250,7 @@ e1000_raise_ee_clk(struct e1000_hw *hw, ...@@ -2242,7 +2250,7 @@ e1000_raise_ee_clk(struct e1000_hw *hw,
uint32_t *eecd) uint32_t *eecd)
{ {
/* Raise the clock input to the EEPROM (by setting the SK bit), and then /* Raise the clock input to the EEPROM (by setting the SK bit), and then
* wait 50 microseconds. * wait <delay> microseconds.
*/ */
*eecd = *eecd | E1000_EECD_SK; *eecd = *eecd | E1000_EECD_SK;
E1000_WRITE_REG(hw, EECD, *eecd); E1000_WRITE_REG(hw, EECD, *eecd);
...@@ -2331,11 +2339,11 @@ e1000_shift_in_ee_bits(struct e1000_hw *hw) ...@@ -2331,11 +2339,11 @@ e1000_shift_in_ee_bits(struct e1000_hw *hw)
uint32_t i; uint32_t i;
uint16_t data; uint16_t data;
/* In order to read a register from the EEPROM, we need to shift 16 bits /* In order to read a register from the EEPROM, we need to shift 'count'
* in from the EEPROM. Bits are "shifted in" by raising the clock input to * bits in from the EEPROM. Bits are "shifted in" by raising the clock
* the EEPROM (setting the SK bit), and then reading the value of the "DO" * input to the EEPROM (setting the SK bit), and then reading the value of
* bit. During this "shifting in" process the "DI" bit should always be * the "DO" bit. During this "shifting in" process the "DI" bit should
* clear.. * always be clear.
*/ */
eecd = E1000_READ_REG(hw, EECD); eecd = E1000_READ_REG(hw, EECD);
...@@ -3140,6 +3148,9 @@ e1000_setup_led(struct e1000_hw *hw) ...@@ -3140,6 +3148,9 @@ e1000_setup_led(struct e1000_hw *hw)
ledctl |= (E1000_LEDCTL_MODE_LED_OFF << E1000_LEDCTL_LED0_MODE_SHIFT); ledctl |= (E1000_LEDCTL_MODE_LED_OFF << E1000_LEDCTL_LED0_MODE_SHIFT);
E1000_WRITE_REG(hw, LEDCTL, ledctl); E1000_WRITE_REG(hw, LEDCTL, ledctl);
break; break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM: case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM: case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER: case E1000_DEV_ID_82545EM_COPPER:
...@@ -3173,6 +3184,9 @@ e1000_cleanup_led(struct e1000_hw *hw) ...@@ -3173,6 +3184,9 @@ e1000_cleanup_led(struct e1000_hw *hw)
case E1000_DEV_ID_82544GC_LOM: case E1000_DEV_ID_82544GC_LOM:
/* No cleanup necessary */ /* No cleanup necessary */
break; break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM: case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM: case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER: case E1000_DEV_ID_82545EM_COPPER:
...@@ -3223,6 +3237,9 @@ e1000_led_on(struct e1000_hw *hw) ...@@ -3223,6 +3237,9 @@ e1000_led_on(struct e1000_hw *hw)
ctrl |= E1000_CTRL_SWDPIO0; ctrl |= E1000_CTRL_SWDPIO0;
E1000_WRITE_REG(hw, CTRL, ctrl); E1000_WRITE_REG(hw, CTRL, ctrl);
break; break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM: case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM: case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER: case E1000_DEV_ID_82545EM_COPPER:
...@@ -3270,6 +3287,9 @@ e1000_led_off(struct e1000_hw *hw) ...@@ -3270,6 +3287,9 @@ e1000_led_off(struct e1000_hw *hw)
ctrl |= E1000_CTRL_SWDPIO0; ctrl |= E1000_CTRL_SWDPIO0;
E1000_WRITE_REG(hw, CTRL, ctrl); E1000_WRITE_REG(hw, CTRL, ctrl);
break; break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM: case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM: case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER: case E1000_DEV_ID_82545EM_COPPER:
......
...@@ -246,11 +246,14 @@ void e1000_write_reg_io(struct e1000_hw *hw, uint32_t offset, uint32_t value); ...@@ -246,11 +246,14 @@ void e1000_write_reg_io(struct e1000_hw *hw, uint32_t offset, uint32_t value);
#define E1000_DEV_ID_82544GC_LOM 0x100D #define E1000_DEV_ID_82544GC_LOM 0x100D
#define E1000_DEV_ID_82540EM 0x100E #define E1000_DEV_ID_82540EM 0x100E
#define E1000_DEV_ID_82540EM_LOM 0x1015 #define E1000_DEV_ID_82540EM_LOM 0x1015
#define E1000_DEV_ID_82540EP_LOM 0x1016
#define E1000_DEV_ID_82540EP 0x1017
#define E1000_DEV_ID_82540EP_LP 0x101E
#define E1000_DEV_ID_82545EM_COPPER 0x100F #define E1000_DEV_ID_82545EM_COPPER 0x100F
#define E1000_DEV_ID_82545EM_FIBER 0x1011 #define E1000_DEV_ID_82545EM_FIBER 0x1011
#define E1000_DEV_ID_82546EB_COPPER 0x1010 #define E1000_DEV_ID_82546EB_COPPER 0x1010
#define E1000_DEV_ID_82546EB_FIBER 0x1012 #define E1000_DEV_ID_82546EB_FIBER 0x1012
#define NUM_DEV_IDS 13 #define NUM_DEV_IDS 16
#define NODE_ADDRESS_SIZE 6 #define NODE_ADDRESS_SIZE 6
#define ETH_LENGTH_OF_ADDRESS 6 #define ETH_LENGTH_OF_ADDRESS 6
...@@ -851,6 +854,7 @@ struct e1000_hw { ...@@ -851,6 +854,7 @@ struct e1000_hw {
e1000_bus_type bus_type; e1000_bus_type bus_type;
uint32_t io_base; uint32_t io_base;
uint32_t phy_id; uint32_t phy_id;
uint32_t phy_revision;
uint32_t phy_addr; uint32_t phy_addr;
uint32_t original_fc; uint32_t original_fc;
uint32_t txcw; uint32_t txcw;
...@@ -1755,6 +1759,7 @@ struct e1000_hw { ...@@ -1755,6 +1759,7 @@ struct e1000_hw {
#define M88E1011_I_PHY_ID 0x01410C20 #define M88E1011_I_PHY_ID 0x01410C20
#define M88E1000_12_PHY_ID M88E1000_E_PHY_ID #define M88E1000_12_PHY_ID M88E1000_E_PHY_ID
#define M88E1000_14_PHY_ID M88E1000_E_PHY_ID #define M88E1000_14_PHY_ID M88E1000_E_PHY_ID
#define M88E1011_I_REV_4 0x04
/* Miscellaneous PHY bit definitions. */ /* Miscellaneous PHY bit definitions. */
#define PHY_PREAMBLE 0xFFFFFFFF #define PHY_PREAMBLE 0xFFFFFFFF
......
...@@ -30,6 +30,19 @@ ...@@ -30,6 +30,19 @@
#include "e1000.h" #include "e1000.h"
/* Change Log /* Change Log
*
* 4.4.12 10/15/02
* o Clean up: use members of pci_device rather than direct calls to
* pci_read_config_word.
* o Bug fix: changed default flow control settings.
* o Clean up: ethtool file now has an inclusive list for adapters in the
* Wake-On-LAN capabilities instead of an exclusive list.
* o Bug fix: miscellaneous WoL bug fixes.
 * o Added software interrupt for clearing rx ring.
* o Bug fix: easier to undo "forcing" of 1000/fd using ethtool.
* o Now setting netdev->mem_end in e1000_probe.
* o Clean up: Moved tx_timeout from interrupt context to process context
* using schedule_task.
* *
* o Feature: merged in modified NAPI patch from Robert Olsson * o Feature: merged in modified NAPI patch from Robert Olsson
 * <Robert.Olsson@its.uu.se> Uppsala University, Sweden. * <Robert.Olsson@its.uu.se> Uppsala University, Sweden.
...@@ -49,29 +62,11 @@ ...@@ -49,29 +62,11 @@
* o Misc ethtool bug fixes. * o Misc ethtool bug fixes.
* *
* 4.3.2 7/5/02 * 4.3.2 7/5/02
* o Bug fix: perform controller reset using I/O rather than mmio because
* some chipsets try to perform a 64-bit write, but the controller ignores
 * the upper 32-bit write once the reset is initiated by the lower 32-bit
* write, causing a master abort.
* o Bug fix: fixed jumbo frames sized from 1514 to 2048.
* o ASF support: disable ARP when driver is loaded or resumed; enable when
* driver is removed or suspended.
* o Bug fix: changed default setting for RxIntDelay to 0 for 82542/3/4
* controllers to workaround h/w errata where controller will hang when
 * RxIntDelay <> 0 under certain network conditions.
* o Clean up: removed unused and undocumented user-settable settings for
* PHY.
* o Bug fix: ethtool GEEPROM was using byte address rather than word
* addressing.
* o Feature: added support for ethtool SEEPROM.
* o Feature: added support for entropy pool.
*
* 4.2.17 5/30/02
*/ */
char e1000_driver_name[] = "e1000"; char e1000_driver_name[] = "e1000";
char e1000_driver_string[] = "Intel(R) PRO/1000 Network Driver"; char e1000_driver_string[] = "Intel(R) PRO/1000 Network Driver";
char e1000_driver_version[] = "4.3.15-k1"; char e1000_driver_version[] = "4.4.12-k1";
char e1000_copyright[] = "Copyright (c) 1999-2002 Intel Corporation."; char e1000_copyright[] = "Copyright (c) 1999-2002 Intel Corporation.";
/* e1000_pci_tbl - PCI Device ID Table /* e1000_pci_tbl - PCI Device ID Table
...@@ -113,6 +108,9 @@ static struct pci_device_id e1000_pci_tbl[] __devinitdata = { ...@@ -113,6 +108,9 @@ static struct pci_device_id e1000_pci_tbl[] __devinitdata = {
{0x8086, 0x1011, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, {0x8086, 0x1011, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{0x8086, 0x1010, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, {0x8086, 0x1010, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{0x8086, 0x1012, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, {0x8086, 0x1012, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{0x8086, 0x1016, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{0x8086, 0x1017, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{0x8086, 0x101E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
/* required last entry */ /* required last entry */
{0,} {0,}
}; };
...@@ -179,17 +177,24 @@ static void e1000_vlan_rx_add_vid(struct net_device *netdev, uint16_t vid); ...@@ -179,17 +177,24 @@ static void e1000_vlan_rx_add_vid(struct net_device *netdev, uint16_t vid);
static void e1000_vlan_rx_kill_vid(struct net_device *netdev, uint16_t vid); static void e1000_vlan_rx_kill_vid(struct net_device *netdev, uint16_t vid);
static int e1000_notify_reboot(struct notifier_block *, unsigned long event, void *ptr); static int e1000_notify_reboot(struct notifier_block *, unsigned long event, void *ptr);
static int e1000_notify_netdev(struct notifier_block *, unsigned long event, void *ptr);
static int e1000_suspend(struct pci_dev *pdev, uint32_t state); static int e1000_suspend(struct pci_dev *pdev, uint32_t state);
#ifdef CONFIG_PM #ifdef CONFIG_PM
static int e1000_resume(struct pci_dev *pdev); static int e1000_resume(struct pci_dev *pdev);
#endif #endif
struct notifier_block e1000_notifier = { struct notifier_block e1000_notifier_reboot = {
.notifier_call = e1000_notify_reboot, .notifier_call = e1000_notify_reboot,
.next = NULL, .next = NULL,
.priority = 0 .priority = 0
}; };
struct notifier_block e1000_notifier_netdev = {
.notifier_call = e1000_notify_netdev,
.next = NULL,
.priority = 0
};
/* Exported from other modules */ /* Exported from other modules */
extern void e1000_check_options(struct e1000_adapter *adapter); extern void e1000_check_options(struct e1000_adapter *adapter);
...@@ -230,8 +235,10 @@ e1000_init_module(void) ...@@ -230,8 +235,10 @@ e1000_init_module(void)
printk(KERN_INFO "%s\n", e1000_copyright); printk(KERN_INFO "%s\n", e1000_copyright);
ret = pci_module_init(&e1000_driver); ret = pci_module_init(&e1000_driver);
if(ret >= 0) if(ret >= 0) {
register_reboot_notifier(&e1000_notifier); register_reboot_notifier(&e1000_notifier_reboot);
register_netdevice_notifier(&e1000_notifier_netdev);
}
return ret; return ret;
} }
...@@ -247,7 +254,8 @@ module_init(e1000_init_module); ...@@ -247,7 +254,8 @@ module_init(e1000_init_module);
static void __exit static void __exit
e1000_exit_module(void) e1000_exit_module(void)
{ {
unregister_reboot_notifier(&e1000_notifier); unregister_reboot_notifier(&e1000_notifier_reboot);
unregister_netdevice_notifier(&e1000_notifier_netdev);
pci_unregister_driver(&e1000_driver); pci_unregister_driver(&e1000_driver);
} }
...@@ -408,6 +416,7 @@ e1000_probe(struct pci_dev *pdev, ...@@ -408,6 +416,7 @@ e1000_probe(struct pci_dev *pdev,
netdev->irq = pdev->irq; netdev->irq = pdev->irq;
netdev->mem_start = mmio_start; netdev->mem_start = mmio_start;
netdev->mem_end = mmio_start + mmio_len;
netdev->base_addr = adapter->hw.io_base; netdev->base_addr = adapter->hw.io_base;
adapter->bd_number = cards_found; adapter->bd_number = cards_found;
...@@ -475,6 +484,8 @@ e1000_probe(struct pci_dev *pdev, ...@@ -475,6 +484,8 @@ e1000_probe(struct pci_dev *pdev,
(void (*)(void *))e1000_tx_timeout_task, netdev); (void (*)(void *))e1000_tx_timeout_task, netdev);
register_netdev(netdev); register_netdev(netdev);
memcpy(adapter->ifname, netdev->name, IFNAMSIZ);
adapter->ifname[IFNAMSIZ-1] = 0;
/* we're going to reset, so assume we have no link for now */ /* we're going to reset, so assume we have no link for now */
...@@ -588,9 +599,9 @@ e1000_sw_init(struct e1000_adapter *adapter) ...@@ -588,9 +599,9 @@ e1000_sw_init(struct e1000_adapter *adapter)
/* flow control settings */ /* flow control settings */
hw->fc_high_water = FC_DEFAULT_HI_THRESH; hw->fc_high_water = E1000_FC_HIGH_THRESH;
hw->fc_low_water = FC_DEFAULT_LO_THRESH; hw->fc_low_water = E1000_FC_LOW_THRESH;
hw->fc_pause_time = FC_DEFAULT_TX_TIMER; hw->fc_pause_time = E1000_FC_PAUSE_TIME;
hw->fc_send_xon = 1; hw->fc_send_xon = 1;
/* Media type - copper or fiber */ /* Media type - copper or fiber */
...@@ -911,8 +922,9 @@ e1000_configure_rx(struct e1000_adapter *adapter) ...@@ -911,8 +922,9 @@ e1000_configure_rx(struct e1000_adapter *adapter)
/* set the Receive Delay Timer Register */ /* set the Receive Delay Timer Register */
if(adapter->hw.mac_type >= e1000_82540) {
E1000_WRITE_REG(&adapter->hw, RDTR, adapter->rx_int_delay); E1000_WRITE_REG(&adapter->hw, RDTR, adapter->rx_int_delay);
if(adapter->hw.mac_type >= e1000_82540) {
E1000_WRITE_REG(&adapter->hw, RADV, adapter->rx_abs_int_delay); E1000_WRITE_REG(&adapter->hw, RADV, adapter->rx_abs_int_delay);
/* Set the interrupt throttling rate. Value is calculated /* Set the interrupt throttling rate. Value is calculated
...@@ -920,9 +932,6 @@ e1000_configure_rx(struct e1000_adapter *adapter) ...@@ -920,9 +932,6 @@ e1000_configure_rx(struct e1000_adapter *adapter)
#define MAX_INTS_PER_SEC 8000 #define MAX_INTS_PER_SEC 8000
#define DEFAULT_ITR 1000000000/(MAX_INTS_PER_SEC * 256) #define DEFAULT_ITR 1000000000/(MAX_INTS_PER_SEC * 256)
E1000_WRITE_REG(&adapter->hw, ITR, DEFAULT_ITR); E1000_WRITE_REG(&adapter->hw, ITR, DEFAULT_ITR);
} else {
E1000_WRITE_REG(&adapter->hw, RDTR, adapter->rx_int_delay);
} }
/* Setup the Base and Length of the Rx Descriptor Ring */ /* Setup the Base and Length of the Rx Descriptor Ring */
...@@ -1280,6 +1289,10 @@ e1000_watchdog(unsigned long data) ...@@ -1280,6 +1289,10 @@ e1000_watchdog(unsigned long data)
e1000_update_stats(adapter); e1000_update_stats(adapter);
e1000_update_adaptive(&adapter->hw); e1000_update_adaptive(&adapter->hw);
/* Cause software interrupt to ensure rx ring is cleaned */
E1000_WRITE_REG(&adapter->hw, ICS, E1000_ICS_RXDMT0);
/* Early detection of hung controller */ /* Early detection of hung controller */
i = txdr->next_to_clean; i = txdr->next_to_clean;
if(txdr->buffer_info[i].dma && if(txdr->buffer_info[i].dma &&
...@@ -1497,7 +1510,6 @@ e1000_xmit_frame(struct sk_buff *skb, struct net_device *netdev) ...@@ -1497,7 +1510,6 @@ e1000_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
int f; int f;
count = TXD_USE_COUNT(skb->len - skb->data_len, count = TXD_USE_COUNT(skb->len - skb->data_len,
adapter->max_data_per_txd); adapter->max_data_per_txd);
for(f = 0; f < skb_shinfo(skb)->nr_frags; f++) for(f = 0; f < skb_shinfo(skb)->nr_frags; f++)
...@@ -1744,6 +1756,7 @@ e1000_update_stats(struct e1000_adapter *adapter) ...@@ -1744,6 +1756,7 @@ e1000_update_stats(struct e1000_adapter *adapter)
adapter->stats.latecol; adapter->stats.latecol;
adapter->net_stats.tx_aborted_errors = adapter->stats.ecol; adapter->net_stats.tx_aborted_errors = adapter->stats.ecol;
adapter->net_stats.tx_window_errors = adapter->stats.latecol; adapter->net_stats.tx_window_errors = adapter->stats.latecol;
adapter->net_stats.tx_carrier_errors = adapter->stats.tncrs;
/* Tx Dropped needs to be maintained elsewhere */ /* Tx Dropped needs to be maintained elsewhere */
...@@ -1756,7 +1769,8 @@ e1000_update_stats(struct e1000_adapter *adapter) ...@@ -1756,7 +1769,8 @@ e1000_update_stats(struct e1000_adapter *adapter)
adapter->phy_stats.idle_errors += phy_tmp; adapter->phy_stats.idle_errors += phy_tmp;
} }
if(!e1000_read_phy_reg(hw, M88E1000_RX_ERR_CNTR, &phy_tmp)) if((hw->mac_type <= e1000_82546) &&
!e1000_read_phy_reg(hw, M88E1000_RX_ERR_CNTR, &phy_tmp))
adapter->phy_stats.receive_errors += phy_tmp; adapter->phy_stats.receive_errors += phy_tmp;
} }
} }
...@@ -2164,8 +2178,7 @@ e1000_alloc_rx_buffers(struct e1000_adapter *adapter) ...@@ -2164,8 +2178,7 @@ e1000_alloc_rx_buffers(struct e1000_adapter *adapter)
while(!rx_ring->buffer_info[i].skb) { while(!rx_ring->buffer_info[i].skb) {
rx_desc = E1000_RX_DESC(*rx_ring, i); rx_desc = E1000_RX_DESC(*rx_ring, i);
skb = alloc_skb(adapter->rx_buffer_len + reserve_len, skb = dev_alloc_skb(adapter->rx_buffer_len + reserve_len);
GFP_ATOMIC);
if(!skb) { if(!skb) {
/* Better luck next round */ /* Better luck next round */
@@ -2396,6 +2409,29 @@ e1000_notify_reboot(struct notifier_block *nb, unsigned long event, void *p)
 	return NOTIFY_DONE;
 }
+static int
+e1000_notify_netdev(struct notifier_block *nb, unsigned long event, void *p)
+{
+	struct e1000_adapter *adapter;
+	struct net_device *netdev = p;
+
+	if(netdev == NULL)
+		return NOTIFY_DONE;
+
+	switch(event) {
+	case NETDEV_CHANGENAME:
+		if(netdev->open == e1000_open) {
+			adapter = netdev->priv;
+			/* rename the proc nodes the easy way */
+			e1000_proc_dev_free(adapter);
+			memcpy(adapter->ifname, netdev->name, IFNAMSIZ);
+			adapter->ifname[IFNAMSIZ-1] = 0;
+			e1000_proc_dev_setup(adapter);
+		}
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
 static int
 e1000_suspend(struct pci_dev *pdev, uint32_t state)
 {
@@ -2412,30 +2448,40 @@ e1000_suspend(struct pci_dev *pdev, uint32_t state)
 		e1000_setup_rctl(adapter);
 		e1000_set_multi(netdev);
+		/* turn on all-multi mode if wake on multicast is enabled */
 		if(adapter->wol & E1000_WUFC_MC) {
 			rctl = E1000_READ_REG(&adapter->hw, RCTL);
 			rctl |= E1000_RCTL_MPE;
 			E1000_WRITE_REG(&adapter->hw, RCTL, rctl);
 		}
-		if(adapter->hw.media_type == e1000_media_type_fiber) {
-#define E1000_CTRL_ADVD3WUC 0x00100000
+		if(adapter->hw.mac_type >= e1000_82540) {
 			ctrl = E1000_READ_REG(&adapter->hw, CTRL);
-			ctrl |= E1000_CTRL_ADVD3WUC;
+			/* advertise wake from D3Cold */
+#define E1000_CTRL_ADVD3WUC 0x00100000
+			/* phy power management enable */
+#define E1000_CTRL_EN_PHY_PWR_MGMT 0x00200000
+			ctrl |= E1000_CTRL_ADVD3WUC |
+				E1000_CTRL_EN_PHY_PWR_MGMT;
 			E1000_WRITE_REG(&adapter->hw, CTRL, ctrl);
+		}
+		if(adapter->hw.media_type == e1000_media_type_fiber) {
+			/* keep the laser running in D3 */
 			ctrl_ext = E1000_READ_REG(&adapter->hw, CTRL_EXT);
 			ctrl_ext |= E1000_CTRL_EXT_SDP7_DATA;
 			E1000_WRITE_REG(&adapter->hw, CTRL_EXT, ctrl_ext);
 		}
-		E1000_WRITE_REG(&adapter->hw, WUC, 0);
+		E1000_WRITE_REG(&adapter->hw, WUC, E1000_WUC_PME_EN);
 		E1000_WRITE_REG(&adapter->hw, WUFC, adapter->wol);
 		pci_enable_wake(pdev, 3, 1);
+		pci_enable_wake(pdev, 4, 1); /* 4 == D3 cold */
 	} else {
 		E1000_WRITE_REG(&adapter->hw, WUC, 0);
 		E1000_WRITE_REG(&adapter->hw, WUFC, 0);
 		pci_enable_wake(pdev, 3, 0);
+		pci_enable_wake(pdev, 4, 0); /* 4 == D3 cold */
 	}
 	pci_save_state(pdev, adapter->pci_state);
@@ -2445,9 +2491,12 @@ e1000_suspend(struct pci_dev *pdev, uint32_t state)
 		if(manc & E1000_MANC_SMBUS_EN) {
 			manc |= E1000_MANC_ARP_EN;
 			E1000_WRITE_REG(&adapter->hw, MANC, manc);
+			state = 0;
+		}
 	}
-	} else
-		pci_set_power_state(pdev, 3);
+
+	state = (state > 0) ? 3 : 0;
+	pci_set_power_state(pdev, state);
 	return 0;
 }
@@ -2462,10 +2511,11 @@ e1000_resume(struct pci_dev *pdev)
 	pci_set_power_state(pdev, 0);
 	pci_restore_state(pdev, adapter->pci_state);
-	/* Clear the wakeup status bits */
+	pci_enable_wake(pdev, 0, 0);
+	pci_enable_wake(pdev, 3, 0);
+	pci_enable_wake(pdev, 4, 0); /* 4 == D3 cold */
+
+	e1000_reset(adapter);
 	E1000_WRITE_REG(&adapter->hw, WUS, ~0);
 	if(netif_running(netdev))
...
@@ -96,6 +96,6 @@ typedef enum {
 	readl((a)->hw_addr + E1000_##reg + ((offset) << 2)) : \
 	readl((a)->hw_addr + E1000_82542_##reg + ((offset) << 2)))
-#define E1000_WRITE_FLUSH(a) ((void)E1000_READ_REG(a, STATUS))
+#define E1000_WRITE_FLUSH(a) E1000_READ_REG(a, STATUS)
 #endif /* _E1000_OSDEP_H_ */
@@ -197,8 +197,7 @@ E1000_PARAM(RxAbsIntDelay, "Receive Absolute Interrupt Delay");
 #define MIN_RXD 80
 #define MAX_82544_RXD 4096
-#define DEFAULT_RDTR 128
-#define DEFAULT_RDTR_82544 0
+#define DEFAULT_RDTR 0
 #define MAX_RXDELAY 0xFFFF
 #define MIN_RXDELAY 0
@@ -315,7 +314,8 @@ e1000_check_options(struct e1000_adapter *adapter)
 		};
 		struct e1000_desc_ring *tx_ring = &adapter->tx_ring;
 		e1000_mac_type mac_type = adapter->hw.mac_type;
-		opt.arg.r.max = mac_type < e1000_82544 ? MAX_TXD : MAX_82544_TXD;
+		opt.arg.r.max = mac_type < e1000_82544 ?
+			MAX_TXD : MAX_82544_TXD;
 		tx_ring->count = TxDescriptors[bd];
 		e1000_validate_option(&tx_ring->count, &opt);
@@ -398,16 +398,13 @@ e1000_check_options(struct e1000_adapter *adapter)
 	}
 	{ /* Receive Interrupt Delay */
 		char *rdtr = "using default of " __MODULE_STRING(DEFAULT_RDTR);
-		char *rdtr_82544 = "using default of "
-			__MODULE_STRING(DEFAULT_RDTR_82544);
 		struct e1000_option opt = {
 			.type = range_option,
 			.name = "Receive Interrupt Delay",
 			.arg  = { r: { min: MIN_RXDELAY, max: MAX_RXDELAY }}
 		};
-		e1000_mac_type mac_type = adapter->hw.mac_type;
-		opt.def = mac_type > e1000_82544 ? DEFAULT_RDTR : 0;
-		opt.err = mac_type > e1000_82544 ? rdtr : rdtr_82544;
+		opt.def = DEFAULT_RDTR;
+		opt.err = rdtr;
 		adapter->rx_int_delay = RxIntDelay[bd];
 		e1000_validate_option(&adapter->rx_int_delay, &opt);
...
@@ -132,7 +132,7 @@ e1000_proc_single_read(char *page, char **start, off_t off,
 	return e1000_proc_read(page, start, off, count, eof);
 }
-static void __devexit
+static void
 e1000_proc_dirs_free(char *name, struct list_head *proc_list_head)
 {
 	struct proc_dir_entry *intel_proc_dir, *proc_dir;
@@ -188,7 +188,7 @@ e1000_proc_dirs_free(char *name, struct list_head *proc_list_head)
 }
-static int __devinit
+static int
 e1000_proc_singles_create(struct proc_dir_entry *parent,
 			  struct list_head *proc_list_head)
 {
@@ -215,7 +215,7 @@ e1000_proc_singles_create(struct proc_dir_entry *parent,
 	return 1;
 }
-static void __devinit
+static void
 e1000_proc_dirs_create(void *data, char *name,
 		       struct list_head *proc_list_head)
 {
@@ -255,7 +255,7 @@ e1000_proc_dirs_create(void *data, char *name,
 	info_entry->data = proc_list_head;
 }
-static void __devinit
+static void
 e1000_proc_list_add(struct list_head *proc_list_head, char *tag,
 		    void *data, size_t len,
 		    char *(*func)(void *, size_t, char *))
@@ -274,7 +274,7 @@ e1000_proc_list_add(struct list_head *proc_list_head, char *tag,
 	list_add_tail(&new->list, proc_list_head);
 }
-static void __devexit
+static void
 e1000_proc_list_free(struct list_head *proc_list_head)
 {
 	struct proc_list *elem;
@@ -542,7 +542,7 @@ e1000_proc_rx_status(void *data, size_t len, char *buf)
 #define LIST_ADD_H(T,D) LIST_ADD_F((T), (D), e1000_proc_hex)
 #define LIST_ADD_U(T,D) LIST_ADD_F((T), (D), e1000_proc_unsigned)
-static void __devinit
+static void
 e1000_proc_list_setup(struct e1000_adapter *adapter)
 {
 	struct e1000_hw *hw = &adapter->hw;
@@ -572,7 +572,7 @@ e1000_proc_list_setup(struct e1000_adapter *adapter)
 	}
 	LIST_ADD_U("IRQ", &adapter->pdev->irq);
-	LIST_ADD_S("System_Device_Name", adapter->netdev->name);
+	LIST_ADD_S("System_Device_Name", adapter->ifname);
 	LIST_ADD_F("Current_HWaddr",
 		   adapter->netdev->dev_addr, e1000_proc_hwaddr);
 	LIST_ADD_F("Permanent_HWaddr",
@@ -670,13 +670,13 @@ e1000_proc_list_setup(struct e1000_adapter *adapter)
  * @adapter: board private structure
  */
-void __devinit
+void
e1000_proc_dev_setup(struct e1000_adapter *adapter)
 {
 	e1000_proc_list_setup(adapter);
 	e1000_proc_dirs_create(adapter,
-			       adapter->netdev->name,
+			       adapter->ifname,
 			       &adapter->proc_list_head);
 }
@@ -685,18 +685,18 @@ e1000_proc_dev_setup(struct e1000_adapter *adapter)
  * @adapter: board private structure
  */
-void __devexit
+void
e1000_proc_dev_free(struct e1000_adapter *adapter)
 {
-	e1000_proc_dirs_free(adapter->netdev->name, &adapter->proc_list_head);
+	e1000_proc_dirs_free(adapter->ifname, &adapter->proc_list_head);
 	e1000_proc_list_free(&adapter->proc_list_head);
 }
 #else /* CONFIG_PROC_FS */
-void __devinit e1000_proc_dev_setup(struct e1000_adapter *adapter) {}
-void __devexit e1000_proc_dev_free(struct e1000_adapter *adapter) {}
+void e1000_proc_dev_setup(struct e1000_adapter *adapter) {}
+void e1000_proc_dev_free(struct e1000_adapter *adapter) {}
 #endif /* CONFIG_PROC_FS */
@@ -65,7 +65,6 @@ static int multicast_filter_limit = 64;
    e.g. "options=16" for FD, "options=32" for 100mbps-only. */
 static int full_duplex[] = {-1, -1, -1, -1, -1, -1, -1, -1};
 static int options[] = {-1, -1, -1, -1, -1, -1, -1, -1};
-static int debug = -1;			/* The debug level */
 /* A few values that may be tweaked. */
 /* The ring sizes should be a power of two for efficiency. */
@@ -119,6 +118,16 @@ static int debug = -1;			/* The debug level */
 #include <linux/etherdevice.h>
 #include <linux/skbuff.h>
 #include <linux/ethtool.h>
+#include <linux/mii.h>
+
+static int debug = -1;
+#define DEBUG_DEFAULT (NETIF_MSG_DRV | \
+		       NETIF_MSG_IFDOWN | \
+		       NETIF_MSG_IFUP | \
+		       NETIF_MSG_RX_ERR | \
+		       NETIF_MSG_TX_ERR)
+#define DEBUG ((debug >= 0) ? (1<<debug)-1 : DEBUG_DEFAULT)
 MODULE_AUTHOR("Maintainer: Andrey V. Savochkin <saw@saw.sw.com.sg>");
 MODULE_DESCRIPTION("Intel i82557/i82558/i82559 PCI EtherExpressPro driver");
@@ -167,7 +176,6 @@ static inline int null_set_power_state(struct pci_dev *dev, int state)
 } while(0)
-static int speedo_debug = 1;
 /*
 				Theory of Operation
@@ -316,23 +324,11 @@ static inline void io_outw(unsigned int val, unsigned long port)
 #define outl writel
 #endif
-/* How to wait for the command unit to accept a command.
-   Typically this takes 0 ticks. */
-static inline void wait_for_cmd_done(long cmd_ioaddr)
-{
-	int wait = 1000;
-	do udelay(1);
-	while(inb(cmd_ioaddr) && --wait >= 0);
-#ifndef final_version
-	if (wait < 0)
-		printk(KERN_ALERT "eepro100: wait_for_cmd_done timeout!\n");
-#endif
-}
 /* Offsets to the various registers.
    All accesses need not be longword aligned. */
 enum speedo_offsets {
 	SCBStatus = 0, SCBCmd = 2,	/* Rx/Command Unit command and status. */
+	SCBIntmask = 3,
 	SCBPointer = 4,			/* General purpose pointer. */
 	SCBPort = 8,			/* Misc. commands and operands. */
 	SCBflash = 12, SCBeeprom = 14,	/* EEPROM and flash memory control. */
@@ -488,7 +484,6 @@ struct speedo_private {
 	unsigned char acpi_pwr;
 	signed char rx_mode;		/* Current PROMISC/ALLMULTI setting. */
 	unsigned int tx_full:1;		/* The Tx queue is full. */
-	unsigned int full_duplex:1;	/* Full-duplex operation requested. */
 	unsigned int flow_ctrl:1;	/* Use 802.3x flow control. */
 	unsigned int rx_bug:1;		/* Work around receiver hang errata. */
 	unsigned char default_port:8;	/* Last dev->if_port value. */
@@ -496,6 +491,7 @@ struct speedo_private {
 	unsigned short phy[2];		/* PHY media interfaces available. */
 	unsigned short partner;		/* Link partner caps. */
 	struct mii_if_info mii_if;	/* MII API hooks, info */
+	u32 msg_enable;			/* debug message level */
 #ifdef CONFIG_PM
 	u32 pm_state[16];
 #endif
@@ -559,6 +555,26 @@ static int mii_ctrl[8] = { 0x3300, 0x3100, 0x0000, 0x0100,
 			   0x2000, 0x2100, 0x0400, 0x3100};
 #endif
+/* How to wait for the command unit to accept a command.
+   Typically this takes 0 ticks. */
+static inline unsigned char wait_for_cmd_done(struct net_device *dev)
+{
+	int wait = 1000;
+	long cmd_ioaddr = dev->base_addr + SCBCmd;
+	unsigned char r;
+
+	do {
+		udelay(1);
+		r = inb(cmd_ioaddr);
+	} while(r && --wait >= 0);
+
+#ifndef final_version
+	if (wait < 0)
+		printk(KERN_ALERT "%s: wait_for_cmd_done timeout!\n", dev->name);
+#endif
+	return r;
+}
 static int __devinit eepro100_init_one (struct pci_dev *pdev,
 		const struct pci_device_id *ent)
 {
@@ -567,9 +583,12 @@ static int __devinit eepro100_init_one (struct pci_dev *pdev,
 	int acpi_idle_state = 0, pm;
 	static int cards_found /* = 0 */;
-	static int did_version /* = 0 */;	/* Already printed version info. */
-	if (speedo_debug > 0 && did_version++ == 0)
+#ifndef MODULE
+	/* when built-in, we only print version if device is found */
+	static int did_version;
+	if (did_version++ == 0)
 		printk(version);
+#endif
 	/* save power state before pci_enable_device overwrites it */
 	pm = pci_find_capability(pdev, PCI_CAP_ID_PM);
@@ -598,7 +617,7 @@ static int __devinit eepro100_init_one (struct pci_dev *pdev,
 	irq = pdev->irq;
 #ifdef USE_IO
 	ioaddr = pci_resource_start(pdev, 1);
-	if (speedo_debug > 2)
+	if (DEBUG & NETIF_MSG_PROBE)
 		printk("Found Intel i82557 PCI Speedo at I/O %#lx, IRQ %d.\n",
 			   ioaddr, irq);
 #else
@@ -609,7 +628,7 @@ static int __devinit eepro100_init_one (struct pci_dev *pdev,
 			pci_resource_len(pdev, 0), pci_resource_start(pdev, 0));
 		goto err_out_free_mmio_region;
 	}
-	if (speedo_debug > 2)
+	if (DEBUG & NETIF_MSG_PROBE)
 		printk("Found Intel i82557 PCI Speedo, MMIO at %#lx, IRQ %d.\n",
 			   pci_resource_start(pdev, 0), irq);
 #endif
@@ -815,6 +834,7 @@ static int __devinit speedo_found1(struct pci_dev *pdev,
 	sp = dev->priv;
 	sp->pdev = pdev;
+	sp->msg_enable = DEBUG;
 	sp->acpi_pwr = acpi_idle_state;
 	sp->tx_ring = tx_ring_space;
 	sp->tx_ring_dma = tx_ring_dma;
@@ -960,7 +980,7 @@ speedo_open(struct net_device *dev)
 	long ioaddr = dev->base_addr;
 	int retval;
-	if (speedo_debug > 1)
+	if (netif_msg_ifup(sp))
 		printk(KERN_DEBUG "%s: speedo_open() irq %d.\n", dev->name, dev->irq);
 	pci_set_power_state(sp->pdev, 0);
@@ -1017,12 +1037,9 @@ speedo_open(struct net_device *dev)
 	if ((sp->phy[0] & 0x8000) == 0)
 		sp->mii_if.advertising = mdio_read(dev, sp->phy[0] & 0x1f, MII_ADVERTISE);
-	if (mdio_read(dev, sp->phy[0] & 0x1f, MII_BMSR) & BMSR_LSTATUS)
-		netif_carrier_on(dev);
-	else
-		netif_carrier_off(dev);
+	mii_check_link(&sp->mii_if);
-	if (speedo_debug > 2) {
+	if (netif_msg_ifup(sp)) {
 		printk(KERN_DEBUG "%s: Done speedo_open(), status %8.8x.\n",
 			   dev->name, inw(ioaddr + SCBStatus));
 	}
@@ -1054,8 +1071,7 @@ static void speedo_resume(struct net_device *dev)
 	sp->tx_threshold = 0x01208000;
 	/* Set the segment registers to '0'. */
-	wait_for_cmd_done(ioaddr + SCBCmd);
-	if (inb(ioaddr + SCBCmd)) {
+	if (wait_for_cmd_done(dev) != 0) {
 		outl(PortPartialReset, ioaddr + SCBPort);
 		udelay(10);
 	}
@@ -1074,10 +1090,10 @@ static void speedo_resume(struct net_device *dev)
 	outb(CUStatsAddr, ioaddr + SCBCmd);
 	sp->lstats->done_marker = 0;
-	wait_for_cmd_done(ioaddr + SCBCmd);
+	wait_for_cmd_done(dev);
 	if (sp->rx_ringp[sp->cur_rx % RX_RING_SIZE] == NULL) {
-		if (speedo_debug > 2)
+		if (netif_msg_rx_err(sp))
 			printk(KERN_DEBUG "%s: NULL cur_rx in speedo_resume().\n",
 					dev->name);
 	} else {
@@ -1133,8 +1149,7 @@ speedo_rx_soft_reset(struct net_device *dev)
 	long ioaddr;
 	ioaddr = dev->base_addr;
-	wait_for_cmd_done(ioaddr + SCBCmd);
-	if (inb(ioaddr + SCBCmd) != 0) {
+	if (wait_for_cmd_done(dev) != 0) {
 		printk("%s: previous command stalled\n", dev->name);
 		return;
 	}
@@ -1147,9 +1162,7 @@ speedo_rx_soft_reset(struct net_device *dev)
 	rfd->rx_buf_addr = 0xffffffff;
-	wait_for_cmd_done(ioaddr + SCBCmd);
-	if (inb(ioaddr + SCBCmd) != 0) {
+	if (wait_for_cmd_done(dev) != 0) {
 		printk("%s: RxAbort command stalled\n", dev->name);
 		return;
 	}
@@ -1172,7 +1185,7 @@ static void speedo_timer(unsigned long data)
 		int partner = mdio_read(dev, phy_num, MII_LPA);
 		if (partner != sp->partner) {
 			int flow_ctrl = sp->mii_if.advertising & partner & 0x0400 ? 1 : 0;
-			if (speedo_debug > 2) {
+			if (netif_msg_link(sp)) {
 				printk(KERN_DEBUG "%s: Link status change.\n", dev->name);
 				printk(KERN_DEBUG "%s: Old partner %x, new %x, adv %x.\n",
 					   dev->name, sp->partner, partner, sp->mii_if.advertising);
@@ -1182,16 +1195,10 @@ static void speedo_timer(unsigned long data)
 				sp->flow_ctrl = flow_ctrl;
 				sp->rx_mode = -1;	/* Trigger a reload. */
 			}
-			/* Clear sticky bit. */
-			mdio_read(dev, phy_num, MII_BMSR);
-			/* If link beat has returned... */
-			if (mdio_read(dev, phy_num, MII_BMSR) & BMSR_LSTATUS)
-				netif_carrier_on(dev);
-			else
-				netif_carrier_off(dev);
 		}
 	}
-	if (speedo_debug > 3) {
+	mii_check_link(&sp->mii_if);
+	if (netif_msg_timer(sp)) {
 		printk(KERN_DEBUG "%s: Media control tick, status %4.4x.\n",
 			   dev->name, inw(ioaddr + SCBStatus));
 	}
@@ -1200,7 +1207,7 @@ static void speedo_timer(unsigned long data)
 		/* We haven't received a packet in a Long Time.  We might have been
 		   bitten by the receiver hang bug.  This can be cleared by sending
 		   a set multicast list command. */
-		if (speedo_debug > 3)
+		if (netif_msg_rx_err(sp))
 			printk(KERN_DEBUG "%s: Sending a multicast list set command"
 				   " from a timer routine,"
 				   " m=%d, j=%ld, l=%ld.\n",
@@ -1218,8 +1225,6 @@ static void speedo_show_state(struct net_device *dev)
 	int i;
 	/* Print a few items for debugging. */
-	if (speedo_debug > 0) {
-		int i;
 	printk(KERN_DEBUG "%s: Tx ring dump, Tx queue %u / %u:\n", dev->name,
 		   sp->cur_tx, sp->dirty_tx);
 	for (i = 0; i < TX_RING_SIZE; i++)
@@ -1227,11 +1232,10 @@ static void speedo_show_state(struct net_device *dev)
 			   i == sp->dirty_tx % TX_RING_SIZE ? '*' : ' ',
 			   i == sp->cur_tx % TX_RING_SIZE ? '=' : ' ',
 			   i, sp->tx_ring[i].status);
-	}
 	printk(KERN_DEBUG "%s: Printing Rx ring"
 		   " (next to receive into %u, dirty index %u).\n",
 		   dev->name, sp->cur_rx, sp->dirty_rx);
 	for (i = 0; i < RX_RING_SIZE; i++)
 		printk(KERN_DEBUG "%s: %c%c%c%2d %8.8x.\n", dev->name,
 			   sp->rx_ringp[i] == sp->last_rxf ? 'l' : ' ',
@@ -1324,7 +1328,7 @@ static void speedo_purge_tx(struct net_device *dev)
 	}
 	while (sp->mc_setup_head != NULL) {
 		struct speedo_mc_block *t;
-		if (speedo_debug > 1)
+		if (netif_msg_tx_err(sp))
 			printk(KERN_DEBUG "%s: freeing mc frame.\n", dev->name);
 		pci_unmap_single(sp->pdev, sp->mc_setup_head->frame_dma,
 				sp->mc_setup_head->len, PCI_DMA_TODEVICE);
@@ -1367,6 +1371,7 @@ static void speedo_tx_timeout(struct net_device *dev)
 	int status = inw(ioaddr + SCBStatus);
 	unsigned long flags;
+	if (netif_msg_tx_err(sp)) {
 		printk(KERN_WARNING "%s: Transmit timed out: status %4.4x "
 			   " %4.4x at %d/%d command %8.8x.\n",
 			   dev->name, status, inw(ioaddr + SCBCmd),
@@ -1374,6 +1379,7 @@ static void speedo_tx_timeout(struct net_device *dev)
 			   sp->tx_ring[sp->dirty_tx % TX_RING_SIZE].status);
 		speedo_show_state(dev);
+	}
 #if 0
 	if ((status & 0x00C0) != 0x0080
 		&& (status & 0x003C) == 0x0010) {
@@ -1462,13 +1468,13 @@ speedo_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* workaround for hardware bug on 10 mbit half duplex */
 	if ((sp->partner == 0) && (sp->chip_id == 1)) {
-		wait_for_cmd_done(ioaddr + SCBCmd);
+		wait_for_cmd_done(dev);
 		outb(0 , ioaddr + SCBCmd);
 		udelay(1);
 	}
 	/* Trigger the command unit resume. */
-	wait_for_cmd_done(ioaddr + SCBCmd);
+	wait_for_cmd_done(dev);
 	clear_suspend(sp->last_cmd);
 	/* We want the time window between clearing suspend flag on the previous
 	   command and resuming CU to be as small as possible.
@@ -1500,14 +1506,14 @@ static void speedo_tx_buffer_gc(struct net_device *dev)
 		int entry = dirty_tx % TX_RING_SIZE;
 		int status = le32_to_cpu(sp->tx_ring[entry].status);
-		if (speedo_debug > 5)
+		if (netif_msg_tx_done(sp))
 			printk(KERN_DEBUG " scavenge candidate %d status %4.4x.\n",
 				   entry, status);
 		if ((status & StatusComplete) == 0)
 			break;			/* It still hasn't been processed. */
 		if (status & TxUnderrun)
 			if (sp->tx_threshold < 0x01e08000) {
-				if (speedo_debug > 2)
+				if (netif_msg_tx_err(sp))
 					printk(KERN_DEBUG "%s: TX underrun, threshold adjusted.\n",
 						   dev->name);
 				sp->tx_threshold += 0x00040000;
@@ -1525,7 +1531,7 @@ static void speedo_tx_buffer_gc(struct net_device *dev)
 		dirty_tx++;
 	}
-	if (speedo_debug && (int)(sp->cur_tx - dirty_tx) > TX_RING_SIZE) {
+	if (netif_msg_tx_err(sp) && (int)(sp->cur_tx - dirty_tx) > TX_RING_SIZE) {
 		printk(KERN_ERR "out-of-sync dirty pointer, %d vs. %d,"
 			   " full=%d.\n",
 			   dirty_tx, sp->cur_tx, sp->tx_full);
@@ -1535,7 +1541,7 @@ static void speedo_tx_buffer_gc(struct net_device *dev)
 	while (sp->mc_setup_head != NULL
 		   && (int)(dirty_tx - sp->mc_setup_head->tx - 1) > 0) {
 		struct speedo_mc_block *t;
-		if (speedo_debug > 1)
+		if (netif_msg_tx_err(sp))
 			printk(KERN_DEBUG "%s: freeing mc frame.\n", dev->name);
 		pci_unmap_single(sp->pdev, sp->mc_setup_head->frame_dma,
 				sp->mc_setup_head->len, PCI_DMA_TODEVICE);
@@ -1585,7 +1591,7 @@ static void speedo_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
 		   FCP and ER interrupts --Dragan */
 		outw(status & 0xfc00, ioaddr + SCBStatus);
-		if (speedo_debug > 4)
+		if (netif_msg_intr(sp))
 			printk(KERN_DEBUG "%s: interrupt  status=%#4.4x.\n",
 				   dev->name, status);
@@ -1647,7 +1653,7 @@ static void speedo_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
 		}
 	} while (1);
-	if (speedo_debug > 3)
+	if (netif_msg_intr(sp))
 		printk(KERN_DEBUG "%s: exiting interrupt, status=%#4.4x.\n",
 			   dev->name, inw(ioaddr + SCBStatus));
...@@ -1708,7 +1714,7 @@ static int speedo_refill_rx_buf(struct net_device *dev, int force) ...@@ -1708,7 +1714,7 @@ static int speedo_refill_rx_buf(struct net_device *dev, int force)
if (rxf == NULL) { if (rxf == NULL) {
unsigned int forw; unsigned int forw;
int forw_entry; int forw_entry;
if (speedo_debug > 2 || !(sp->rx_ring_state & RrOOMReported)) { if (netif_msg_rx_err(sp) || !(sp->rx_ring_state & RrOOMReported)) {
printk(KERN_WARNING "%s: can't fill rx buffer (force %d)!\n", printk(KERN_WARNING "%s: can't fill rx buffer (force %d)!\n",
dev->name, force); dev->name, force);
speedo_show_state(dev); speedo_show_state(dev);
...@@ -1754,8 +1760,9 @@ speedo_rx(struct net_device *dev) ...@@ -1754,8 +1760,9 @@ speedo_rx(struct net_device *dev)
int entry = sp->cur_rx % RX_RING_SIZE; int entry = sp->cur_rx % RX_RING_SIZE;
int rx_work_limit = sp->dirty_rx + RX_RING_SIZE - sp->cur_rx; int rx_work_limit = sp->dirty_rx + RX_RING_SIZE - sp->cur_rx;
int alloc_ok = 1; int alloc_ok = 1;
int npkts = 0;
if (speedo_debug > 4) if (netif_msg_intr(sp))
printk(KERN_DEBUG " In speedo_rx().\n"); printk(KERN_DEBUG " In speedo_rx().\n");
/* If we own the next entry, it's a new packet. Send it up. */ /* If we own the next entry, it's a new packet. Send it up. */
while (sp->rx_ringp[entry] != NULL) { while (sp->rx_ringp[entry] != NULL) {
...@@ -1778,14 +1785,14 @@ speedo_rx(struct net_device *dev) ...@@ -1778,14 +1785,14 @@ speedo_rx(struct net_device *dev)
if (sp->last_rxf == sp->rx_ringp[entry]) { if (sp->last_rxf == sp->rx_ringp[entry]) {
/* Postpone the packet. It'll be reaped at an interrupt when this /* Postpone the packet. It'll be reaped at an interrupt when this
packet is no longer the last packet in the ring. */ packet is no longer the last packet in the ring. */
if (speedo_debug > 2) if (netif_msg_rx_err(sp))
printk(KERN_DEBUG "%s: RX packet postponed!\n", printk(KERN_DEBUG "%s: RX packet postponed!\n",
dev->name); dev->name);
sp->rx_ring_state |= RrPostponed; sp->rx_ring_state |= RrPostponed;
break; break;
} }
if (speedo_debug > 4) if (netif_msg_intr(sp))
printk(KERN_DEBUG " speedo_rx() status %8.8x len %d.\n", status, printk(KERN_DEBUG " speedo_rx() status %8.8x len %d.\n", status,
pkt_len); pkt_len);
if ((status & (RxErrTooBig|RxOK|0x0f90)) != RxOK) { if ((status & (RxErrTooBig|RxOK|0x0f90)) != RxOK) {
...@@ -1820,6 +1827,7 @@ speedo_rx(struct net_device *dev) ...@@ -1820,6 +1827,7 @@ speedo_rx(struct net_device *dev)
memcpy(skb_put(skb, pkt_len), sp->rx_skbuff[entry]->tail, memcpy(skb_put(skb, pkt_len), sp->rx_skbuff[entry]->tail,
pkt_len); pkt_len);
#endif #endif
npkts++;
} else { } else {
/* Pass up the already-filled skbuff. */ /* Pass up the already-filled skbuff. */
skb = sp->rx_skbuff[entry]; skb = sp->rx_skbuff[entry];
...@@ -1830,6 +1838,7 @@ speedo_rx(struct net_device *dev) ...@@ -1830,6 +1838,7 @@ speedo_rx(struct net_device *dev)
} }
sp->rx_skbuff[entry] = NULL; sp->rx_skbuff[entry] = NULL;
skb_put(skb, pkt_len); skb_put(skb, pkt_len);
npkts++;
sp->rx_ringp[entry] = NULL; sp->rx_ringp[entry] = NULL;
pci_unmap_single(sp->pdev, sp->rx_ring_dma[entry], pci_unmap_single(sp->pdev, sp->rx_ring_dma[entry],
PKT_BUF_SZ + sizeof(struct RxFD), PCI_DMA_FROMDEVICE); PKT_BUF_SZ + sizeof(struct RxFD), PCI_DMA_FROMDEVICE);
...@@ -1851,6 +1860,7 @@ speedo_rx(struct net_device *dev) ...@@ -1851,6 +1860,7 @@ speedo_rx(struct net_device *dev)
/* Try hard to refill the recently taken buffers. */ /* Try hard to refill the recently taken buffers. */
speedo_refill_rx_buffers(dev, 1); speedo_refill_rx_buffers(dev, 1);
if (npkts)
sp->last_rx_time = jiffies; sp->last_rx_time = jiffies;
return 0; return 0;
...@@ -1866,20 +1876,27 @@ speedo_close(struct net_device *dev) ...@@ -1866,20 +1876,27 @@ speedo_close(struct net_device *dev)
netdevice_stop(dev); netdevice_stop(dev);
netif_stop_queue(dev); netif_stop_queue(dev);
if (speedo_debug > 1) if (netif_msg_ifdown(sp))
printk(KERN_DEBUG "%s: Shutting down ethercard, status was %4.4x.\n", printk(KERN_DEBUG "%s: Shutting down ethercard, status was %4.4x.\n",
dev->name, inw(ioaddr + SCBStatus)); dev->name, inw(ioaddr + SCBStatus));
/* Shut off the media monitoring timer. */ /* Shut off the media monitoring timer. */
del_timer_sync(&sp->timer); del_timer_sync(&sp->timer);
outw(SCBMaskAll, ioaddr + SCBCmd);
/* Shutting down the chip nicely fails to disable flow control. So.. */ /* Shutting down the chip nicely fails to disable flow control. So.. */
outl(PortPartialReset, ioaddr + SCBPort); outl(PortPartialReset, ioaddr + SCBPort);
inl(ioaddr + SCBPort); /* flush posted write */
/*
* The chip requires a 10 microsecond quiet period. Wait here!
*/
udelay(10);
free_irq(dev->irq, dev); free_irq(dev->irq, dev);
/* Print a few items for debugging. */ /* Print a few items for debugging. */
if (speedo_debug > 3) if (netif_msg_ifdown(sp))
speedo_show_state(dev); speedo_show_state(dev);
/* Free all the skbuffs in the Rx and Tx queues. */ /* Free all the skbuffs in the Rx and Tx queues. */
...@@ -1915,7 +1932,7 @@ speedo_close(struct net_device *dev) ...@@ -1915,7 +1932,7 @@ speedo_close(struct net_device *dev)
sp->mc_setup_head = t; sp->mc_setup_head = t;
} }
sp->mc_setup_tail = NULL; sp->mc_setup_tail = NULL;
if (speedo_debug > 0) if (netif_msg_ifdown(sp))
printk(KERN_DEBUG "%s: %d multicast blocks dropped.\n", dev->name, i); printk(KERN_DEBUG "%s: %d multicast blocks dropped.\n", dev->name, i);
pci_set_power_state(sp->pdev, 2); pci_set_power_state(sp->pdev, 2);
...@@ -1956,7 +1973,7 @@ speedo_get_stats(struct net_device *dev) ...@@ -1956,7 +1973,7 @@ speedo_get_stats(struct net_device *dev)
/* Take a spinlock to make wait_for_cmd_done and sending the /* Take a spinlock to make wait_for_cmd_done and sending the
command atomic. --SAW */ command atomic. --SAW */
spin_lock_irqsave(&sp->lock, flags); spin_lock_irqsave(&sp->lock, flags);
wait_for_cmd_done(ioaddr + SCBCmd); wait_for_cmd_done(dev);
outb(CUDumpStats, ioaddr + SCBCmd); outb(CUDumpStats, ioaddr + SCBCmd);
spin_unlock_irqrestore(&sp->lock, flags); spin_unlock_irqrestore(&sp->lock, flags);
} }
...@@ -2018,6 +2035,22 @@ static int netdev_ethtool_ioctl(struct net_device *dev, void *useraddr) ...@@ -2018,6 +2035,22 @@ static int netdev_ethtool_ioctl(struct net_device *dev, void *useraddr)
return -EFAULT; return -EFAULT;
return 0; return 0;
} }
/* get message-level */
case ETHTOOL_GMSGLVL: {
struct ethtool_value edata = {ETHTOOL_GMSGLVL};
edata.data = sp->msg_enable;
if (copy_to_user(useraddr, &edata, sizeof(edata)))
return -EFAULT;
return 0;
}
/* set message-level */
case ETHTOOL_SMSGLVL: {
struct ethtool_value edata;
if (copy_from_user(&edata, useraddr, sizeof(edata)))
return -EFAULT;
sp->msg_enable = edata.data;
return 0;
}
} }
...@@ -2092,7 +2125,7 @@ static void set_rx_mode(struct net_device *dev) ...@@ -2092,7 +2125,7 @@ static void set_rx_mode(struct net_device *dev)
} else } else
new_rx_mode = 0; new_rx_mode = 0;
if (speedo_debug > 3) if (netif_msg_rx_status(sp))
printk(KERN_DEBUG "%s: set_rx_mode %d -> %d\n", dev->name, printk(KERN_DEBUG "%s: set_rx_mode %d -> %d\n", dev->name,
sp->rx_mode, new_rx_mode); sp->rx_mode, new_rx_mode);
...@@ -2133,7 +2166,7 @@ static void set_rx_mode(struct net_device *dev) ...@@ -2133,7 +2166,7 @@ static void set_rx_mode(struct net_device *dev)
config_cmd_data[8] = 0; config_cmd_data[8] = 0;
} }
/* Trigger the command unit resume. */ /* Trigger the command unit resume. */
wait_for_cmd_done(ioaddr + SCBCmd); wait_for_cmd_done(dev);
clear_suspend(last_cmd); clear_suspend(last_cmd);
outb(CUResume, ioaddr + SCBCmd); outb(CUResume, ioaddr + SCBCmd);
if ((int)(sp->cur_tx - sp->dirty_tx) >= TX_QUEUE_LIMIT) { if ((int)(sp->cur_tx - sp->dirty_tx) >= TX_QUEUE_LIMIT) {
...@@ -2170,7 +2203,7 @@ static void set_rx_mode(struct net_device *dev) ...@@ -2170,7 +2203,7 @@ static void set_rx_mode(struct net_device *dev)
*setup_params++ = *eaddrs++; *setup_params++ = *eaddrs++;
} }
wait_for_cmd_done(ioaddr + SCBCmd); wait_for_cmd_done(dev);
clear_suspend(last_cmd); clear_suspend(last_cmd);
/* Immediately trigger the command unit resume. */ /* Immediately trigger the command unit resume. */
outb(CUResume, ioaddr + SCBCmd); outb(CUResume, ioaddr + SCBCmd);
...@@ -2203,7 +2236,7 @@ static void set_rx_mode(struct net_device *dev) ...@@ -2203,7 +2236,7 @@ static void set_rx_mode(struct net_device *dev)
mc_setup_frm = &mc_blk->frame; mc_setup_frm = &mc_blk->frame;
/* Fill the setup frame. */ /* Fill the setup frame. */
if (speedo_debug > 1) if (netif_msg_ifup(sp))
printk(KERN_DEBUG "%s: Constructing a setup frame at %p.\n", printk(KERN_DEBUG "%s: Constructing a setup frame at %p.\n",
dev->name, mc_setup_frm); dev->name, mc_setup_frm);
mc_setup_frm->cmd_status = mc_setup_frm->cmd_status =
...@@ -2246,7 +2279,7 @@ static void set_rx_mode(struct net_device *dev) ...@@ -2246,7 +2279,7 @@ static void set_rx_mode(struct net_device *dev)
pci_dma_sync_single(sp->pdev, mc_blk->frame_dma, pci_dma_sync_single(sp->pdev, mc_blk->frame_dma,
mc_blk->len, PCI_DMA_TODEVICE); mc_blk->len, PCI_DMA_TODEVICE);
wait_for_cmd_done(ioaddr + SCBCmd); wait_for_cmd_done(dev);
clear_suspend(last_cmd); clear_suspend(last_cmd);
/* Immediately trigger the command unit resume. */ /* Immediately trigger the command unit resume. */
outb(CUResume, ioaddr + SCBCmd); outb(CUResume, ioaddr + SCBCmd);
...@@ -2257,7 +2290,7 @@ static void set_rx_mode(struct net_device *dev) ...@@ -2257,7 +2290,7 @@ static void set_rx_mode(struct net_device *dev)
} }
spin_unlock_irqrestore(&sp->lock, flags); spin_unlock_irqrestore(&sp->lock, flags);
if (speedo_debug > 5) if (netif_msg_rx_status(sp))
printk(" CmdMCSetup frame length %d in entry %d.\n", printk(" CmdMCSetup frame length %d in entry %d.\n",
dev->mc_count, entry); dev->mc_count, entry);
} }
...@@ -2400,11 +2433,9 @@ static int pci_module_init(struct pci_driver *pdev) ...@@ -2400,11 +2433,9 @@ static int pci_module_init(struct pci_driver *pdev)
static int __init eepro100_init_module(void) static int __init eepro100_init_module(void)
{ {
if (debug >= 0 && speedo_debug != debug) #ifdef MODULE
printk(KERN_INFO "eepro100.c: Debug level is %d.\n", debug); printk(version);
if (debug >= 0) #endif
speedo_debug = debug;
return pci_module_init(&eepro100_driver); return pci_module_init(&eepro100_driver);
} }
......
@@ -192,9 +192,12 @@ int mii_nway_restart (struct mii_if_info *mii)
 void mii_check_link (struct mii_if_info *mii)
 {
-if (mii_link_ok(mii))
+int cur_link = mii_link_ok(mii);
+int prev_link = netif_carrier_ok(mii->dev);
+if (cur_link && !prev_link)
 netif_carrier_on(mii->dev);
-else
+else if (prev_link && !cur_link)
 netif_carrier_off(mii->dev);
 }
...
@@ -184,7 +184,7 @@
 NETIF_MSG_WOL | \
 NETIF_MSG_RX_ERR | \
 NETIF_MSG_TX_ERR)
-static int debug = NATSEMI_DEF_MSG;
+static int debug = -1;
 /* Maximum events (Rx packets, etc.) to handle at each interrupt. */
 static int max_interrupt_work = 20;
@@ -256,7 +256,7 @@ MODULE_PARM(full_duplex, "1-" __MODULE_STRING(MAX_UNITS) "i");
 MODULE_PARM_DESC(max_interrupt_work,
 "DP8381x maximum events handled per interrupt");
 MODULE_PARM_DESC(mtu, "DP8381x MTU (all boards)");
-MODULE_PARM_DESC(debug, "DP8381x default debug bitmask");
+MODULE_PARM_DESC(debug, "DP8381x default debug level");
 MODULE_PARM_DESC(rx_copybreak,
 "DP8381x copy breakpoint for copy-only-tiny-frames");
 MODULE_PARM_DESC(options,
@@ -796,7 +796,7 @@ static int __devinit natsemi_probe1 (struct pci_dev *pdev,
 pci_set_drvdata(pdev, dev);
 np->iosize = iosize;
 spin_lock_init(&np->lock);
-np->msg_enable = debug;
+np->msg_enable = (debug >= 0) ? (1<<debug)-1 : NATSEMI_DEF_MSG;
 np->hands_off = 0;
 /* Reset the chip to erase previous misconfiguration. */
...
@@ -474,6 +474,8 @@ int scsi_register_host(Scsi_Host_Template *shost_tp)
 struct list_head *lh;
 struct Scsi_Host *shost;
+INIT_LIST_HEAD(&shost_tp->shtp_list);
 /*
 * Check no detect routine.
 */
...
@@ -149,12 +149,13 @@ struct bio *bio_alloc(int gfp_mask, int nr_iovecs)
 bio_init(bio);
 if (unlikely(!nr_iovecs))
-goto out;
+goto noiovec;
 bvl = bvec_alloc(gfp_mask, nr_iovecs, &idx);
 if (bvl) {
 bio->bi_flags |= idx << BIO_POOL_OFFSET;
 bio->bi_max_vecs = bvec_array[idx].nr_vecs;
+noiovec:
 bio->bi_io_vec = bvl;
 bio->bi_destructor = bio_destructor;
 out:
...
@@ -151,14 +151,19 @@ __asm__ __volatile__("mb": : :"memory")
 #define wmb() \
 __asm__ __volatile__("wmb": : :"memory")
+#define read_barrier_depends() \
+__asm__ __volatile__("mb": : :"memory")
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
 #define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
 #define smp_wmb() barrier()
+#define smp_read_barrier_depends() barrier()
 #endif
 #define set_mb(var, value) \
...
@@ -52,6 +52,7 @@ extern asmlinkage void __backtrace(void);
 #define mb() __asm__ __volatile__ ("" : : : "memory")
 #define rmb() mb()
 #define wmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define set_mb(var, value) do { var = value; mb(); } while (0)
 #define set_wmb(var, value) do { var = value; wmb(); } while (0)
 #define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
@@ -78,12 +79,14 @@ extern struct task_struct *__switch_to(struct thread_info *, struct thread_info
 #define smp_mb() mb()
 #define smp_rmb() rmb()
 #define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
 #define smp_wmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define clf() __clf()
 #define stf() __stf()
...
@@ -150,6 +150,7 @@ static inline unsigned long __xchg(unsigned long x, void * ptr, int size)
 #define mb() __asm__ __volatile__ ("" : : : "memory")
 #define rmb() mb()
 #define wmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define set_mb(var, value) do { var = value; mb(); } while (0)
 #define set_wmb(var, value) do { var = value; wmb(); } while (0)
@@ -157,10 +158,12 @@ static inline unsigned long __xchg(unsigned long x, void * ptr, int size)
 #define smp_mb() mb()
 #define smp_rmb() rmb()
 #define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
 #define smp_wmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #endif
 #define iret()
...
/*
* linux/include/asm-i386/edd.h
* Copyright (C) 2002 Dell Computer Corporation
* by Matt Domsch <Matt_Domsch@dell.com>
*
* structures and definitions for the int 13h, ax={41,48}h
* BIOS Enhanced Disk Drive Services
* This is based on the T13 group document D1572 Revision 0 (August 14 2002)
* available at http://www.t13.org/docs2002/d1572r0.pdf. It is
* very similar to D1484 Revision 3 http://www.t13.org/docs2002/d1484r3.pdf
*
* In a nutshell, arch/i386/boot/setup.S populates a scratch table
* in the empty_zero_block that contains a list of BIOS-enumerated
* boot devices.
* In arch/i386/kernel/setup.c, this information is
* transferred into the edd structure, and in arch/i386/kernel/edd.c, that
* information is used to identify the BIOS boot disk. The code in setup.S
* is very sensitive to the size of these structures.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License v2.0 as published by
* the Free Software Foundation
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ASM_I386_EDD_H
#define _ASM_I386_EDD_H
#define EDDNR 0x1e9 /* addr of number of edd_info structs at EDDBUF
in empty_zero_block - treat this as 1 byte */
#define EDDBUF 0x600 /* addr of edd_info structs in empty_zero_block */
#define EDDMAXNR 6 /* number of edd_info structs starting at EDDBUF */
#define EDDEXTSIZE 4 /* change these if you muck with the structures */
#define EDDPARMSIZE 74
#ifndef __ASSEMBLY__
#define EDD_EXT_FIXED_DISK_ACCESS (1 << 0)
#define EDD_EXT_DEVICE_LOCKING_AND_EJECTING (1 << 1)
#define EDD_EXT_ENHANCED_DISK_DRIVE_SUPPORT (1 << 2)
#define EDD_EXT_64BIT_EXTENSIONS (1 << 3)
#define EDD_INFO_DMA_BOUNDRY_ERROR_TRANSPARENT (1 << 0)
#define EDD_INFO_GEOMETRY_VALID (1 << 1)
#define EDD_INFO_REMOVABLE (1 << 2)
#define EDD_INFO_WRITE_VERIFY (1 << 3)
#define EDD_INFO_MEDIA_CHANGE_NOTIFICATION (1 << 4)
#define EDD_INFO_LOCKABLE (1 << 5)
#define EDD_INFO_NO_MEDIA_PRESENT (1 << 6)
#define EDD_INFO_USE_INT13_FN50 (1 << 7)
struct edd_device_params {
u16 length;
u16 info_flags;
u32 num_default_cylinders;
u32 num_default_heads;
u32 sectors_per_track;
u64 number_of_sectors;
u16 bytes_per_sector;
u32 dpte_ptr; /* 0xFFFFFFFF for our purposes */
u16 key; /* = 0xBEDD */
u8 device_path_info_length; /* = 44 */
u8 reserved2;
u16 reserved3;
u8 host_bus_type[4];
u8 interface_type[8];
union {
struct {
u16 base_address;
u16 reserved1;
u32 reserved2;
} __attribute__ ((packed)) isa;
struct {
u8 bus;
u8 slot;
u8 function;
u8 channel;
u32 reserved;
} __attribute__ ((packed)) pci;
/* pcix is same as pci */
struct {
u64 reserved;
} __attribute__ ((packed)) ibnd;
struct {
u64 reserved;
} __attribute__ ((packed)) xprs;
struct {
u64 reserved;
} __attribute__ ((packed)) htpt;
struct {
u64 reserved;
} __attribute__ ((packed)) unknown;
} interface_path;
union {
struct {
u8 device;
u8 reserved1;
u16 reserved2;
u32 reserved3;
u64 reserved4;
} __attribute__ ((packed)) ata;
struct {
u8 device;
u8 lun;
u8 reserved1;
u8 reserved2;
u32 reserved3;
u64 reserved4;
} __attribute__ ((packed)) atapi;
struct {
u16 id;
u64 lun;
u16 reserved1;
u32 reserved2;
} __attribute__ ((packed)) scsi;
struct {
u64 serial_number;
u64 reserved;
} __attribute__ ((packed)) usb;
struct {
u64 eui;
u64 reserved;
} __attribute__ ((packed)) i1394;
struct {
u64 wwid;
u64 lun;
} __attribute__ ((packed)) fibre;
struct {
u64 identity_tag;
u64 reserved;
} __attribute__ ((packed)) i2o;
struct {
u32 array_number;
u32 reserved1;
u64 reserved2;
} __attribute__ ((packed)) raid;
struct {
u8 device;
u8 reserved1;
u16 reserved2;
u32 reserved3;
u64 reserved4;
} __attribute__ ((packed)) sata;
struct {
u64 reserved1;
u64 reserved2;
} __attribute__ ((packed)) unknown;
} device_path;
u8 reserved4;
u8 checksum;
} __attribute__ ((packed));
struct edd_info {
u8 device;
u8 version;
u16 interface_support;
struct edd_device_params params;
} __attribute__ ((packed));
extern struct edd_info edd[EDDNR];
extern unsigned char eddnr;
#endif /*!__ASSEMBLY__ */
#endif /* _ASM_I386_EDD_H */
@@ -37,6 +37,8 @@
 #define KERNEL_START (*(unsigned long *) (PARAM+0x214))
 #define INITRD_START (*(unsigned long *) (PARAM+0x218))
 #define INITRD_SIZE (*(unsigned long *) (PARAM+0x21c))
+#define EDD_NR (*(unsigned char *) (PARAM+EDDNR))
+#define EDD_BUF ((struct edd_info *) (PARAM+EDDBUF))
 #define COMMAND_LINE ((char *) (PARAM+2048))
 #define COMMAND_LINE_SIZE 256
...
@@ -286,6 +286,60 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 #define mb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory")
 #define rmb() mb()
+/**
+ * read_barrier_depends - Flush all pending reads that subsequent reads
+ * depend on.
+ *
+ * No data-dependent reads from memory-like regions are ever reordered
+ * over this barrier. All reads preceding this primitive are guaranteed
+ * to access memory (but not necessarily other CPUs' caches) before any
+ * reads following this primitive that depend on the data returned by
+ * any of the preceding reads. This primitive is much lighter weight than
+ * rmb() on most CPUs, and is never heavier weight than is
+ * rmb().
+ *
+ * These ordering constraints are respected by both the local CPU
+ * and the compiler.
+ *
+ * Ordering is not guaranteed by anything other than these primitives,
+ * not even by data dependencies. See the documentation for
+ * memory_barrier() for examples and URLs to more information.
+ *
+ * For example, the following code would force ordering (the initial
+ * value of "a" is zero, "b" is one, and "p" is "&a"):
+ *
+ * <programlisting>
+ * CPU 0 CPU 1
+ *
+ * b = 2;
+ * memory_barrier();
+ * p = &b; q = p;
+ * read_barrier_depends();
+ * d = *q;
+ * </programlisting>
+ *
+ * because the read of "*q" depends on the read of "p" and these
+ * two reads are separated by a read_barrier_depends(). However,
+ * the following code, with the same initial values for "a" and "b":
+ *
+ * <programlisting>
+ * CPU 0 CPU 1
+ *
+ * a = 2;
+ * memory_barrier();
+ * b = 3; y = b;
+ * read_barrier_depends();
+ * x = a;
+ * </programlisting>
+ *
+ * does not enforce ordering, since there is no data dependency between
+ * the read of "a" and the read of "b". Therefore, on some CPUs, such
+ * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb()
+ * in cases like this where there are no data dependencies.
+ **/
+#define read_barrier_depends() do { } while(0)
 #ifdef CONFIG_X86_OOSTORE
 #define wmb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory")
 #else
@@ -296,11 +350,13 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 #define smp_mb() mb()
 #define smp_rmb() rmb()
 #define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define set_mb(var, value) do { xchg(&var, value); } while (0)
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
 #define smp_wmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define set_mb(var, value) do { var = value; barrier(); } while (0)
 #endif
...
@@ -85,15 +85,18 @@ ia64_insn_group_barrier (void)
 #define mb() __asm__ __volatile__ ("mf" ::: "memory")
 #define rmb() mb()
 #define wmb() mb()
+#define read_barrier_depends() do { } while(0)
 #ifdef CONFIG_SMP
 # define smp_mb() mb()
 # define smp_rmb() rmb()
 # define smp_wmb() wmb()
+# define smp_read_barrier_depends() read_barrier_depends()
 #else
 # define smp_mb() barrier()
 # define smp_rmb() barrier()
 # define smp_wmb() barrier()
+# define smp_read_barrier_depends() do { } while(0)
 #endif
 /*
...
@@ -146,6 +146,7 @@ extern void __global_restore_flags(unsigned long);
 #define rmb() do { } while(0)
 #define wmb() wbflush()
 #define mb() wbflush()
+#define read_barrier_depends() do { } while(0)
 #else /* CONFIG_CPU_HAS_WB */
@@ -161,6 +162,7 @@ __asm__ __volatile__( \
 : "memory")
 #define rmb() mb()
 #define wmb() mb()
+#define read_barrier_depends() do { } while(0)
 #endif /* CONFIG_CPU_HAS_WB */
@@ -168,10 +170,12 @@ __asm__ __volatile__( \
 #define smp_mb() mb()
 #define smp_rmb() rmb()
 #define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
 #define smp_wmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #endif
 #define set_mb(var, value) \
...
@@ -142,15 +142,18 @@ __asm__ __volatile__( \
 : "memory")
 #define rmb() mb()
 #define wmb() mb()
+#define read_barrier_depends() do { } while(0)
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
 #define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
 #define smp_wmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #endif
 #define set_mb(var, value) \
...
@@ -51,6 +51,7 @@ extern struct task_struct *_switch_to(struct task_struct *, struct task_struct *
 #define smp_mb() mb()
 #define smp_rmb() rmb()
 #define smp_wmb() wmb()
+#define smp_read_barrier_depends() do { } while(0)
 #else
 /* This is simply the barrier() macro from linux/kernel.h but when serial.c
 * uses tqueue.h uses smp_mb() defined using barrier(), linux/kernel.h
@@ -59,6 +60,7 @@ extern struct task_struct *_switch_to(struct task_struct *, struct task_struct *
 #define smp_mb() __asm__ __volatile__("":::"memory");
 #define smp_rmb() __asm__ __volatile__("":::"memory");
 #define smp_wmb() __asm__ __volatile__("":::"memory");
+#define smp_read_barrier_depends() do { } while(0)
 #endif
 /* interrupt control */
@@ -120,6 +122,7 @@ static inline void set_eiem(unsigned long val)
 #define mb() __asm__ __volatile__ ("sync" : : :"memory")
 #define wmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define set_mb(var, value) do { var = value; mb(); } while (0)
 #define set_wmb(var, value) do { var = value; wmb(); } while (0)
...
...@@ -22,6 +22,8 @@ ...@@ -22,6 +22,8 @@
* mb() prevents loads and stores being reordered across this point. * mb() prevents loads and stores being reordered across this point.
* rmb() prevents loads being reordered across this point. * rmb() prevents loads being reordered across this point.
* wmb() prevents stores being reordered across this point. * wmb() prevents stores being reordered across this point.
* read_barrier_depends() prevents data-dependant loads being reordered
* across this point (nop on PPC).
* *
* We can use the eieio instruction for wmb, but since it doesn't * We can use the eieio instruction for wmb, but since it doesn't
* give any ordering guarantees about loads, we have to use the * give any ordering guarantees about loads, we have to use the
...@@ -30,6 +32,7 @@ ...@@ -30,6 +32,7 @@
#define mb() __asm__ __volatile__ ("sync" : : : "memory") #define mb() __asm__ __volatile__ ("sync" : : : "memory")
#define rmb() __asm__ __volatile__ ("sync" : : : "memory") #define rmb() __asm__ __volatile__ ("sync" : : : "memory")
#define wmb() __asm__ __volatile__ ("eieio" : : : "memory") #define wmb() __asm__ __volatile__ ("eieio" : : : "memory")
#define read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; mb(); } while (0) #define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0) #define set_wmb(var, value) do { var = value; wmb(); } while (0)
...@@ -38,10 +41,12 @@ ...@@ -38,10 +41,12 @@
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else
#define smp_mb() __asm__ __volatile__("": : :"memory")
#define smp_rmb() __asm__ __volatile__("": : :"memory")
#define smp_wmb() __asm__ __volatile__("": : :"memory")
#define smp_read_barrier_depends() do { } while(0)
#endif /* CONFIG_SMP */
#ifdef __KERNEL__
...
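The new read_barrier_depends() primitive orders data-dependent loads: a reader that fetches a pointer and then dereferences it needs this barrier (a real instruction only on Alpha, a no-op everywhere else) between the two loads, paired with a wmb() on the writer's side. A minimal single-threaded userspace sketch of that pairing, with the barrier macros reduced to hypothetical stand-ins:

```c
/* Sketch of the wmb()/read_barrier_depends() publish pattern.
 * The macros below are stand-ins for the kernel ones: a compiler
 * barrier for wmb() and a no-op for read_barrier_depends(), as on
 * any non-Alpha architecture. */
#include <stddef.h>

#define wmb()                  __asm__ __volatile__("" ::: "memory")
#define read_barrier_depends() do { } while (0)

struct item { int payload; };

static struct item slot;
static struct item *shared;          /* published pointer, NULL at start */

static void publisher(void)
{
    slot.payload = 42;               /* initialize the object first */
    wmb();                           /* order the init before the pointer store */
    shared = &slot;                  /* now make it reachable */
}

static int consumer(void)
{
    struct item *p = shared;         /* load the pointer ...            */
    read_barrier_depends();          /* ... barrier before the          */
    return p ? p->payload : -1;      /* ... dependent load through it   */
}
```

Without the write-side wmb(), a reader on a weakly ordered CPU could see the pointer before the payload it points to.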
...@@ -25,6 +25,8 @@
* mb() prevents loads and stores being reordered across this point.
* rmb() prevents loads being reordered across this point.
* wmb() prevents stores being reordered across this point.
* read_barrier_depends() prevents data-dependent loads being reordered
* across this point (nop on PPC).
*
* We can use the eieio instruction for wmb, but since it doesn't
* give any ordering guarantees about loads, we have to use the
...@@ -33,6 +35,7 @@
#define mb() __asm__ __volatile__ ("sync" : : : "memory")
#define rmb() __asm__ __volatile__ ("lwsync" : : : "memory")
#define wmb() __asm__ __volatile__ ("eieio" : : : "memory")
#define read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
...@@ -41,10 +44,12 @@
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else
#define smp_mb() __asm__ __volatile__("": : :"memory")
#define smp_rmb() __asm__ __volatile__("": : :"memory")
#define smp_wmb() __asm__ __volatile__("": : :"memory")
#define smp_read_barrier_depends() do { } while(0)
#endif /* CONFIG_SMP */
#ifdef CONFIG_DEBUG_KERNEL
...
...@@ -227,9 +227,11 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
#define mb() eieio()
#define rmb() eieio()
#define wmb() eieio()
#define read_barrier_depends() do { } while(0)
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
...
...@@ -238,9 +238,11 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
#define mb() eieio()
#define rmb() eieio()
#define wmb() eieio()
#define read_barrier_depends() do { } while(0)
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
...
...@@ -89,15 +89,18 @@ extern void __xchg_called_with_bad_pointer(void);
#define mb() __asm__ __volatile__ ("": : :"memory")
#define rmb() mb()
#define wmb() __asm__ __volatile__ ("": : :"memory")
#define read_barrier_depends() do { } while(0)
#ifdef CONFIG_SMP
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#endif
#define set_mb(var, value) do { xchg(&var, value); } while (0)
...
...@@ -310,11 +310,13 @@ extern void __global_restore_flags(unsigned long flags);
#define mb() __asm__ __volatile__ ("" : : : "memory")
#define rmb() mb()
#define wmb() mb()
#define read_barrier_depends() do { } while(0)
#define set_mb(__var, __value) do { __var = __value; mb(); } while(0)
#define set_wmb(__var, __value) set_mb(__var, __value)
#define smp_mb() __asm__ __volatile__("":::"memory");
#define smp_rmb() __asm__ __volatile__("":::"memory");
#define smp_wmb() __asm__ __volatile__("":::"memory");
#define smp_read_barrier_depends() do { } while(0)
#define nop() __asm__ __volatile__ ("nop");
...
...@@ -215,10 +215,12 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() do {} while(0)
#else
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#define smp_read_barrier_depends() do {} while(0)
#endif
...@@ -230,6 +232,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
#define mb() asm volatile("mfence":::"memory")
#define rmb() asm volatile("lfence":::"memory")
#define wmb() asm volatile("sfence":::"memory")
#define read_barrier_depends() do {} while(0)
#define set_mb(var, value) do { xchg(&var, value); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
...
...@@ -200,6 +200,12 @@ struct sysinfo {
};
#define BUG_ON(condition) do { if (unlikely((condition)!=0)) BUG(); } while(0)
#define WARN_ON(condition) do { \
if (unlikely((condition)!=0)) { \
printk("Badness in %s at %s:%d\n", __FUNCTION__, __FILE__, __LINE__); \
dump_stack(); \
} \
} while (0)
extern void BUILD_BUG(void);
#define BUILD_BUG_ON(condition) do { if (condition) BUILD_BUG(); } while(0)
...
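Unlike BUG_ON(), which halts the kernel, the new WARN_ON() reports the condition and lets execution continue. A userspace sketch of that semantic, with printk()/dump_stack() replaced by a stderr message and a hypothetical counter, and C99 __func__ standing in for __FUNCTION__:

```c
/* Userspace sketch of WARN_ON(): warn and keep going.
 * warn_count stands in for the dump_stack() side effect so the
 * behaviour is observable in a test. */
#include <stdio.h>

static int warn_count;

#define unlikely(x) __builtin_expect(!!(x), 0)

#define WARN_ON(condition) do {                                  \
    if (unlikely((condition) != 0)) {                            \
        fprintf(stderr, "Badness in %s at %s:%d\n",              \
                __func__, __FILE__, __LINE__);                   \
        warn_count++;   /* stands in for dump_stack() */         \
    }                                                            \
} while (0)

static int demo(int x)
{
    WARN_ON(x < 0);     /* complains, but execution continues */
    return x * 2;
}
```

The do { ... } while (0) wrapper is the usual trick that lets the macro behave like a single statement after an unbraced if.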
...@@ -4,6 +4,7 @@
#if defined(__KERNEL__) || defined(_LVM_H_INCLUDE)
#include <linux/prefetch.h>
#include <asm/system.h>
/*
* Simple doubly linked list implementation.
...@@ -70,6 +71,49 @@ static inline void list_add_tail(struct list_head *new, struct list_head *head)
__list_add(new, head->prev, head);
}
/*
* Insert a new entry between two known consecutive entries.
*
* This is only for internal list manipulation where we know
* the prev/next entries already!
*/
static __inline__ void __list_add_rcu(struct list_head * new,
struct list_head * prev,
struct list_head * next)
{
new->next = next;
new->prev = prev;
wmb();
next->prev = new;
prev->next = new;
}
/**
* list_add_rcu - add a new entry to rcu-protected list
* @new: new entry to be added
* @head: list head to add it after
*
* Insert a new entry after the specified head.
* This is good for implementing stacks.
*/
static __inline__ void list_add_rcu(struct list_head *new, struct list_head *head)
{
__list_add_rcu(new, head, head->next);
}
/**
* list_add_tail_rcu - add a new entry to rcu-protected list
* @new: new entry to be added
* @head: list head to add it before
*
* Insert a new entry before the specified head.
* This is useful for implementing queues.
*/
static __inline__ void list_add_tail_rcu(struct list_head *new, struct list_head *head)
{
__list_add_rcu(new, head->prev, head);
}
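The wmb() in __list_add_rcu() ensures the new entry's own next/prev pointers are complete before the neighbouring nodes are updated to make it reachable, so a lockless reader can never observe a half-initialized element. A self-contained, single-threaded sketch of the add path, with wmb() reduced to a compiler barrier for the demo:

```c
/* Sketch of list_add_rcu() from the patch above, extracted into a
 * standalone compile unit.  wmb() is only a compiler barrier here
 * since the demo is single-threaded. */
#include <stddef.h>

#define wmb() __asm__ __volatile__("" ::: "memory")

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static inline void __list_add_rcu(struct list_head *new,
                                  struct list_head *prev,
                                  struct list_head *next)
{
    new->next = next;      /* link the new entry fully ...            */
    new->prev = prev;
    wmb();                 /* ... before making it visible to readers */
    next->prev = new;
    prev->next = new;
}

static inline void list_add_rcu(struct list_head *new, struct list_head *head)
{
    __list_add_rcu(new, head, head->next);   /* push at the front */
}
```

Readers that traverse forward only ever follow next pointers, which is why ordering the two stores after the barrier is enough for stack-style insertion.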
/*
* Delete a list entry by making the prev/next entries
* point to each other.
...@@ -93,6 +137,17 @@ static inline void list_del(struct list_head *entry)
{
__list_del(entry->prev, entry->next);
}
/**
* list_del_rcu - deletes entry from list without re-initialization
* @entry: the element to delete from the list.
* Note: list_empty on entry does not return true after this,
* the entry is in an undefined state. It is useful for RCU based
* lockfree traversal.
*/
static inline void list_del_rcu(struct list_head *entry)
{
__list_del(entry->prev, entry->next);
}
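list_del_rcu() unlinks the entry from its neighbours but deliberately leaves entry->next pointing into the list, so a concurrent reader that already reached the entry can finish its traversal; only list_empty() on the entry stops being meaningful. A single-threaded sketch making that property observable:

```c
/* Sketch of list_del_rcu(): the deleted node keeps its outgoing
 * pointers, unlike list_del_init() which would reinitialize them. */
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

static inline void __list_del(struct list_head *prev, struct list_head *next)
{
    next->prev = prev;
    prev->next = next;
}

static inline void list_del_rcu(struct list_head *entry)
{
    /* entry->next and entry->prev are intentionally left intact, so
     * an in-flight lockless traversal holding entry can still follow
     * entry->next back into the live list. */
    __list_del(entry->prev, entry->next);
}
```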
/**
* list_del_init - deletes entry from list and reinitialize it.
...@@ -240,6 +295,30 @@ static inline void list_splice_init(struct list_head *list,
pos = list_entry(pos->member.next, typeof(*pos), member), \
prefetch(pos->member.next))
/**
* list_for_each_rcu - iterate over an rcu-protected list
* @pos: the &struct list_head to use as a loop counter.
* @head: the head for your list.
*/
#define list_for_each_rcu(pos, head) \
for (pos = (head)->next, prefetch(pos->next); pos != (head); \
pos = pos->next, ({ read_barrier_depends(); 0;}), prefetch(pos->next))
#define __list_for_each_rcu(pos, head) \
for (pos = (head)->next; pos != (head); \
pos = pos->next, ({ read_barrier_depends(); 0;}))
/**
* list_for_each_safe_rcu - iterate over an rcu-protected list safe
* against removal of list entry
* @pos: the &struct list_head to use as a loop counter.
* @n: another &struct list_head to use as temporary storage
* @head: the head for your list.
*/
#define list_for_each_safe_rcu(pos, n, head) \
for (pos = (head)->next, n = pos->next; pos != (head); \
pos = n, ({ read_barrier_depends(); 0;}), n = pos->next)
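The _rcu iterators issue read_barrier_depends() between loading pos->next and dereferencing it, which is what makes lockless traversal safe against a concurrent list_add_rcu(). A sketch using the __list_for_each_rcu() shape (GCC statement expressions; the barrier collapses to a no-op on this hypothetical non-Alpha build):

```c
/* Sketch of lockless traversal with __list_for_each_rcu().
 * Single-threaded demo, so read_barrier_depends() is a no-op, as it
 * would be on any architecture except Alpha. */
#include <stddef.h>

#define read_barrier_depends() do { } while (0)

struct list_head { struct list_head *next, *prev; };

/* Same form as the macro in the patch: re-read next each step, with
 * the dependency barrier slipped in via a statement expression. */
#define __list_for_each_rcu(pos, head) \
    for (pos = (head)->next; pos != (head); \
         pos = pos->next, ({ read_barrier_depends(); 0; }))

static int count_entries(struct list_head *head)
{
    struct list_head *pos;
    int n = 0;

    __list_for_each_rcu(pos, head)
        n++;
    return n;
}
```

The `({ ...; 0; })` statement expression exists only to sequence the barrier inside the for-loop's comma expression without changing its value.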
#endif /* __KERNEL__ || _LVM_H_INCLUDE */
#endif