Commit f8735053 authored by Linus Torvalds

Merge master.kernel.org:/home/davem/BK/net-2.5

into home.transmeta.com:/home/torvalds/v2.5/linux
parents 0694164e ddd11110
@@ -31,6 +31,7 @@ Offset Type Description
0x1e0 unsigned long ALT_MEM_K, alternative mem check, in Kb
0x1e8 char number of entries in E820MAP (below)
0x1e9 unsigned char number of entries in EDDBUF (below)
0x1f1 char size of setup.S, number of sectors
0x1f2 unsigned short MOUNT_ROOT_RDONLY (if !=0)
0x1f4 unsigned short size of compressed kernel-part in the
@@ -66,6 +67,7 @@ Offset Type Description
0x220 4 bytes (setup.S)
0x224 unsigned short setup.S heap end pointer
0x2d0 - 0x600 E820MAP
0x600 - 0x7D4 EDDBUF (setup.S)
0x800 string, 2K max COMMAND_LINE, the kernel commandline as
copied using CL_OFFSET.
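For illustration only (not part of this commit): a minimal user-space sketch of
how the two new fields could be read from a saved copy of the boot parameter
block. The offsets 0x1e9 and 0x600 come from the table above; the dump file
name and everything else in the sketch is hypothetical.

    #include <stdio.h>

    int main(void)
    {
            unsigned char zero_page[0x800];
            FILE *f = fopen("zero-page.bin", "rb");  /* hypothetical dump of the zero page */

            if (!f || fread(zero_page, 1, sizeof(zero_page), f) != sizeof(zero_page))
                    return 1;
            fclose(f);

            /* 0x1e9: number of EDDBUF entries; 0x600-0x7D4: the EDDBUF area itself */
            printf("EDDBUF entries reported by setup.S: %d\n", zero_page[0x1e9]);
            printf("EDDBUF area size: %d bytes\n", 0x7D4 - 0x600);
            return 0;
    }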
...
Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
==============================================================
-April 9, 2002
+September 16, 2002
Contents
@@ -19,26 +19,9 @@ In This Release
===============
This file describes the Linux* Base Driver for the Intel(R) PRO/100 Family of
-Adapters, version 2.0.x. This driver includes support for Itanium(TM)-based
+Adapters, version 2.1.x. This driver includes support for Itanium(TM)-based
systems.
New for this release:
- Additional ethtool functionality, including link status test and EEPROM
read/write. A third-party application can use the ethtool interface to
get and set driver parameters.
- Support for Zero copy on 82550-based adapters. This feature provides
faster data throughput and significant CPU usage improvement in systems
that use the relevant system call (sendfile(2)).
- Support for large MTU-enabling interface (1504 bytes) with kernel's
VLAN module
- Support for polling on RX
- Support for Wake On LAN* on 82550 and 82559-based adapters
Supported Adapters
==================
@@ -96,8 +79,7 @@ CNR PRO/100 VE Desktop Adapter A10386-xxx, A10725-xxx,
To verify that your adapter is supported, find the board ID number on the
adapter. Look for a label that has a barcode and a number in the format
-123456-001 (six digits hyphen three digits). Match this to the list of
-numbers above.
+A12345-001. Match this to the list of numbers above.
For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:
@@ -106,15 +88,20 @@ Driver ID Guide at:
For the latest Intel PRO/100 network driver for Linux, see:
-http://appsr.intel.com/scripts-df/support_intel.asp
+http://downloadfinder.intel.com/scripts-df/support_intel.asp
Command Line Parameters
=======================
-The following parameters are used by entering them on the command line with
-the modprobe or insmod command. For example, with two Intel PRO/100 PCI
-adapters, entering:
+The following optional parameters are used by entering them on the command
+line with the modprobe or insmod command using this syntax:
+modprobe e100 [<option>=<VAL1>,<VAL2>,...]
+insmod e100 [<option>=<VAL1>,<VAL2>,...]
+For example, with two Intel PRO/100 PCI adapters, entering:
modprobe e100 TxDescriptors=32,128
@@ -122,16 +109,20 @@ loads the e100 driver with 32 TX resources for the first adapter and 128 TX
resources for the second adapter. This configuration favors the second
adapter. The driver supports up to 16 network adapters concurrently.
The default value for each parameter is generally the recommended setting,
unless otherwise noted.
NOTE: Giving any command line option the value "-1" causes the driver to use
the appropriate default value for that option, as if no value was
specified.
BundleMax
-Valid Range: 0x1-0xFFFF
+Valid Range: 1-65535
Default Value: 6
-This parameter holds the maximum number of packets in a bundle. Suggested
-values range from 2 to 10. See "CPU Cycle Saver."
+This parameter holds the maximum number of small packets (less than 128
+bytes) in a bundle. Suggested values range from 2 to 10. See "CPU Cycle
+Saver."
BundleSmallFr
Valid Range: 0-1 (0=off, 1=on)
@@ -142,48 +133,33 @@ Default Value: 0
e100_speed_duplex
Valid Range: 0-4 (1=10half;2=10full;3=100half;4=100full)
Default Value: 0
-The default value of 0 is set to auto-negotiate if the link partner is set
-to auto-negotiate. If the link partner is forced, e100_speed_duplex
-defaults to half-duplex.
+The default value of 0 sets the adapter to auto-negotiate. Other values
+set the adapter to forced speed and duplex.
Example usage: insmod e100.o e100_speed_duplex=4,4 (for two adapters)
flow_control
Valid Range: 0-1 (0=off, 1=on)
Default Value: 0
This parameter controls the automatic generation(Tx) and response(Rx) to
-Ethernet PAUSE frames. flow_control should NOT be set to 1 when the e100
+Ethernet PAUSE frames. flow_control should NOT be set to 1 when the
adapter is connected to an interface that does not support Ethernet PAUSE
frames and when the e100_speed_duplex parameter is NOT set to zero.
IntDelay
-Valid Range: 0-0xFFFF (0=off)
+Valid Range: 0-65535 (0=off)
Default Value: 1536
This parameter holds the number of time units (in adapter terminology)
until the adapter generates an interrupt. The recommended value for
-IntDelay is 0x600 (upon initialization). Suggested values range from
-0x200h to 0x800. See "CPU Cycle Saver."
+IntDelay is 1536 (upon initialization). Suggested values range from
+512 to 2048. See "CPU Cycle Saver."
IFS
Valid Range: 0-1 (0=off, 1=on)
Default Value: 1
Inter Frame Spacing (IFS) aims to reduce the number of Ethernet frame
collisions by altering the time between frame transmissions. When IFS is
-enabled the driver tries to find an optimal IFS value. However, some
-switches function better when IFS is disabled.
+enabled the driver tries to find an optimal IFS value. It is used only at
+half duplex.
PollingMaxWork
Valid Range: 1-1024 (max number of RxDescriptors)
Default Value: Specified number of RxDescriptors
This value specifies the maximum number of receive packets that are
processed on a single polling call. This parameter is invalid if
RxCongestionControl is set to 0.
RxCongestionControl
Valid Range: 0-1 (0=off, 1=on)
Default Value: 1
1 enables polling mode. When the link is congested, the driver can decide
to handle received packets by polling them, instead of waiting until
interrupts occur.
RxDescriptors
Valid Range: 8-1024
@@ -200,13 +176,15 @@ Default Value: 64
Increasing this value allows the protocol stack to queue more transmits at
the driver level. The maximum value for Itanium-based systems is 64.
-ucode (not available for 82557-based adapters)
+ucode
Valid Range: 0-1 (0=off, 1=on)
Default Value: 0 for 82558-based adapters
-1 for 82559(and higher)-based adapters
+1 for 82559, 82550, and 82551-based adapters
On uploads the micro code to the adapter, which enables CPU Cycle Saver.
See the section "CPU Cycle Saver" below.
-Example usage: insmod e100.o ucode=0 (does not reduce CPU usage)
+Example usage: insmod e100.o ucode=1
Not available on 82557-based adapters.
XsumRX
Valid Range: 0-1 (0=off, 1=on)
@@ -214,6 +192,8 @@ Default Value: 1
On allows Rx checksum offloading for TCP/UDP packets. Requires that the
hardware support this feature.
Not available on 82557 and 82558-based adapters.
CPU Cycle Saver
================
@@ -234,10 +214,11 @@ switching to and from the driver.
CPU Cycle Saver consists of these arguments: IntDelay, BundleMax and
BundleSmallFr. When IntDelay is increased, the adapter waits longer for
frames to arrive before generating the interrupt. By increasing BundleMax,
-the network adapter waits for the number of frames specified to arrive before
-generating the interrupt. When BundleSmallFr is disabled, the adapter does
-not bundle packets that are smaller than 128 bytes. Such small packets are
-often, but not always, control packets that are better served immediately.
+the network adapter waits for the number of small frames (less than 128 bytes)
+specified to arrive before generating the interrupt. When BundleSmallFr is
+disabled, the adapter does not bundle small packets. Such small packets are
+often, but not always, control packets that are better served immediately;
+therefore, BundleSmallFr is disabled by default.
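(For illustration only, not part of this commit: these options combine on one
command line using the same syntax as the examples above, e.g. something like
"modprobe e100 ucode=1 IntDelay=1536 BundleMax=6 BundleSmallFr=0" for a single
adapter. The values shown are just the documented defaults, not a tuning
recommendation.)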
For most users, it is recommended that CPU Cycle Saver be used with the
default values specified in the Command Line Parameters section. However, in
@@ -249,7 +230,7 @@ ucode=0.
Support
=======
-For general information and support, go to the Intel support website at:
+For general information, go to the Intel support website at:
http://support.intel.com
...
Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters
===============================================================
-August 6, 2002
+October 12, 2002
Contents
@@ -20,7 +20,7 @@ In This Release
===============
This file describes the Linux* Base Driver for the Intel(R) PRO/1000 Family
-of Adapters, version 4.3.x. This driver includes support for
+of Adapters, version 4.4.x. This driver includes support for
Itanium(TM)-based systems.
This release version includes the following:
@@ -32,7 +32,7 @@ This release version includes the following:
default in supporting kernels. It is not supported on the Intel(R)
PRO/1000 Gigabit Server Adapter.
-New features include:
+Features include:
- Support for the 82545 and 82546-based adapters listed below
@@ -144,8 +144,7 @@ Default Value: 80
RxIntDelay
Valid Range: 0-65535 (0=off)
-Default Value: 0 (82542, 82543, and 82544-based adapters)
-128 (82540, 82545, and 82546-based adapters)
+Default Value: 0
This value delays the generation of receive interrupts in units of 1.024
microseconds. Receive interrupt reduction can improve CPU efficiency if
properly tuned for specific network traffic. Increasing this value adds
@@ -154,13 +153,12 @@ Default Value: 0 (82542, 82543, and 82544-based adapters)
may be set too high, causing the driver to run out of available receive
descriptors.
-CAUTION: When setting RxIntDelay to a value other than 0, adapters based
-on the Intel 82543 and 82544 LAN controllers may hang (stop
-transmitting) under certain network conditions. If this occurs a
-message is logged in the system event log. In addition, the
-controller is automatically reset, restoring the network
-connection. To eliminate the potential for the hang ensure that
-RxIntDelay is set to 0.
+CAUTION: When setting RxIntDelay to a value other than 0, adapters may
+hang (stop transmitting) under certain network conditions. If
+this occurs a NETDEV WATCHDOG message is logged in the system
+event log. In addition, the controller is automatically reset,
+restoring the network connection. To eliminate the potential for
+the hang ensure that RxIntDelay is set to 0.
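(For illustration only, not part of this commit: using the module parameter
syntax shown for the e100 driver earlier in this diff, the workaround in the
CAUTION above would look something like "insmod e1000.o RxIntDelay=0"; a value
of 0 simply disables the receive interrupt delay.)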
RxAbsIntDelay (82540, 82545, and 82546-based adapters only)
Valid Range: 0-65535 (0=off)
...
@@ -1094,3 +1094,11 @@ CONFIG_SCx200
This support is also available as a module. If compiled as a
module, it will be called scx200.o.
CONFIG_EDD
Say Y or M here if you want to enable BIOS Enhanced Disk Drive
Services real mode BIOS calls to determine which disk the
BIOS tries to boot from. This information is then exported via driverfs.
This option is experimental, but believed to be safe,
and most disk controller BIOS vendors do not yet implement this feature.
@@ -45,6 +45,9 @@
* New A20 code ported from SYSLINUX by H. Peter Anvin. AMD Elan bugfixes
* by Robert Schwebel, December 2001 <robert@schwebel.de>
*
* BIOS Enhanced Disk Drive support
* by Matt Domsch <Matt_Domsch@dell.com> September 2002
*
*/
#include <linux/config.h>
@@ -53,6 +56,7 @@
#include <linux/compile.h>
#include <asm/boot.h>
#include <asm/e820.h>
#include <asm/edd.h>
#include <asm/page.h>
/* Signature words to ensure LILO loaded us right */
@@ -543,6 +547,49 @@ no_32_apm_bios:
done_apm_bios:
#endif
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
# Do the BIOS Enhanced Disk Drive calls
# This code is sensitive to the size of the structs in edd.h
edd_start:
# %ds points to the bootsector
# result buffer for fn48
movw $EDDBUF+EDDEXTSIZE, %si # in ds:si, fn41 results
# kept just before that
movb $0, (EDDNR) # zero value at EDDNR
movb $0x80, %dl # BIOS device 0x80
edd_check_ext:
movb $0x41, %ah # Function 41
movw $0x55aa, %bx # magic
int $0x13 # make the call
jc edd_done # no more BIOS devices
cmpw $0xAA55, %bx # is magic right?
jne edd_next # nope, next...
movb %dl, %ds:-4(%si) # store device number
movb %ah, %ds:-3(%si) # store version
movw %cx, %ds:-2(%si) # store extensions
incb (EDDNR) # note that we stored something
edd_get_device_params:
movw $EDDPARMSIZE, %ds:(%si) # put size
movb $0x48, %ah # Function 48
int $0x13 # make the call
# Don't check for fail return
# it doesn't matter.
movw %si, %ax # increment si
addw $EDDPARMSIZE+EDDEXTSIZE, %ax
movw %ax, %si
edd_next:
incb %dl # increment to next device
cmpb $EDDMAXNR, (EDDNR) # Out of space?
jb edd_check_ext # keep looping
edd_done:
#endif
# Now we want to move to protected mode ...
cmpw $0, %cs:realmode_swtch
jz rmodeswtch_normal
...
@@ -216,6 +216,10 @@ tristate '/dev/cpu/microcode - Intel IA32 CPU microcode support' CONFIG_MICROCOD
tristate '/dev/cpu/*/msr - Model-specific register support' CONFIG_X86_MSR
tristate '/dev/cpu/*/cpuid - CPU information support' CONFIG_X86_CPUID
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
tristate 'BIOS Enhanced Disk Drive calls determine boot disk (EXPERIMENTAL)' CONFIG_EDD
fi
choice 'High Memory Support' \
"off CONFIG_NOHIGHMEM \
4GB CONFIG_HIGHMEM4G \
...
@@ -70,6 +70,7 @@ CONFIG_X86_MCE_P4THERMAL=y
# CONFIG_MICROCODE is not set
# CONFIG_X86_MSR is not set
# CONFIG_X86_CPUID is not set
# CONFIG_EDD is not set
CONFIG_NOHIGHMEM=y
# CONFIG_HIGHMEM4G is not set
# CONFIG_HIGHMEM64G is not set
...
@@ -28,6 +28,7 @@ obj-$(CONFIG_X86_IO_APIC) += io_apic.o
obj-$(CONFIG_SOFTWARE_SUSPEND) += suspend.o
obj-$(CONFIG_X86_NUMAQ) += numaq.o
obj-$(CONFIG_PROFILING) += profile.o
obj-$(CONFIG_EDD) += edd.o
EXTRA_AFLAGS := -traditional
...
@@ -31,6 +31,7 @@
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/nmi.h>
#include <asm/edd.h>
extern void dump_thread(struct pt_regs *, struct user *);
extern spinlock_t rtc_lock;
@@ -201,3 +202,8 @@ EXPORT_SYMBOL(kmap_atomic);
EXPORT_SYMBOL(kunmap_atomic);
EXPORT_SYMBOL(kmap_atomic_to_page);
#endif
#ifdef CONFIG_EDD_MODULE
EXPORT_SYMBOL(edd);
EXPORT_SYMBOL(eddnr);
#endif
@@ -36,6 +36,7 @@
#include <linux/highmem.h>
#include <asm/e820.h>
#include <asm/mpspec.h>
#include <asm/edd.h>
#include <asm/setup.h>
#include <asm/arch_hooks.h>
#include "setup_arch_pre.h"
@@ -466,6 +467,22 @@ static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
return 0;
}
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
unsigned char eddnr;
struct edd_info edd[EDDNR];
/**
* copy_edd() - Copy the BIOS EDD information into a safe place.
*
*/
static inline void copy_edd(void)
{
eddnr = EDD_NR;
memcpy(edd, EDD_BUF, sizeof(edd));
}
#else
#define copy_edd() do {} while (0)
#endif
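For illustration only (not part of this commit): the eddnr/edd variables saved
by copy_edd() above, and exported for CONFIG_EDD_MODULE in i386_ksyms.c earlier
in this diff, are what a consumer such as the new edd.o driver could walk. A
minimal sketch, assuming struct edd_info exposes the BIOS device number and EDD
version that setup.S stored (the member names 'device' and 'version' are
assumptions here, not taken from this diff):

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <asm/edd.h>

    extern unsigned char eddnr;
    extern struct edd_info edd[];

    static int __init edd_dump_init(void)
    {
            int i;

            /* walk the entries copied out of the zero page at boot */
            for (i = 0; i < eddnr; i++)
                    printk(KERN_INFO "EDD: BIOS device 0x%02x, EDD version 0x%02x\n",
                           edd[i].device, edd[i].version);
            return 0;
    }

    module_init(edd_dump_init);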
/*
* Do NOT EVER look at the BIOS memory size location.
* It does not work on many machines.
@@ -843,6 +860,7 @@ void __init setup_arch(char **cmdline_p)
#endif
ARCH_SETUP
setup_memory_region();
copy_edd();
if (!MOUNT_ROOT_RDONLY)
root_mountflags &= ~MS_RDONLY;
...
@@ -136,7 +136,7 @@ void put_device(struct device * dev)
list_del_init(&dev->g_list);
up(&device_sem);
-BUG_ON((dev->state != DEVICE_GONE));
+WARN_ON(dev->state != DEVICE_GONE);
device_del(dev);
}
...
@@ -150,6 +150,7 @@ static devfs_handle_t devfs_handle = NULL;
static int __init xd_init(void)
{
u_char i,controller;
u_char count = 0;
unsigned int address;
int err;
...
@@ -349,8 +349,8 @@ static struct sysrq_key_op *sysrq_key_table[SYSRQ_KEY_TABLE_LENGTH] = {
/* 8 */ &sysrq_loglevel_op,
/* 9 */ &sysrq_loglevel_op,
/* a */ NULL, /* Don't use for system provided sysrqs,
-it is handled specially on the spark
-and will never arive */
+it is handled specially on the sparc
+and will never arrive */
/* b */ &sysrq_reboot_op,
/* c */ NULL,
/* d */ NULL,
...
@@ -121,7 +121,6 @@
#define E100_DEFAULT_CPUSAVER_BUNDLE_MAX 6
#define E100_DEFAULT_CPUSAVER_INTERRUPT_DELAY 0x600
#define E100_DEFAULT_BUNDLE_SMALL_FR false
#define E100_DEFAULT_RX_CONGESTION_CONTROL true
/* end of configurables */
@@ -146,8 +145,6 @@ struct driver_stats {
unsigned long xmt_tco_pkts;
unsigned long rcv_tco_pkts;
unsigned long rx_intr_pkts;
unsigned long rx_tasklet_pkts;
unsigned long poll_intr_switch;
};
/* TODO: kill me when we can do C99 */
@@ -838,7 +835,6 @@ typedef struct _bd_dma_able_t {
#define PRM_FC 0x00000004
#define PRM_IFS 0x00000008
#define PRM_BUNDLE_SMALL 0x00000010
#define PRM_RX_CONG 0x00000020
struct cfg_params {
int e100_speed_duplex;
@@ -847,7 +843,6 @@ struct cfg_params {
int IntDelay;
int BundleMax;
int ber;
int PollingMaxWork;
u32 b_params;
};
struct ethtool_lpbk_data{
@@ -949,8 +944,6 @@ struct e100_private {
u32 speed_duplex_caps; /* adapter's speed/duplex capabilities */
struct tasklet_struct polling_tasklet;
/* WOL params for ethtool */
u32 wolsupported;
u32 wolopts;
@@ -961,6 +954,7 @@ struct e100_private {
#ifdef CONFIG_PM
u32 pci_state[16];
#endif
char ifname[IFNAMSIZ];
};
#define E100_AUTONEG 0
...
@@ -249,8 +249,6 @@ static e100_proc_entry e100_proc_list[] = {
{"Rx_TCO_Packets", read_gen_ulong, 0, bdp_drv_off(rcv_tco_pkts)},
{"\n",},
{"Rx_Interrupt_Packets", read_gen_ulong, 0, bdp_drv_off(rx_intr_pkts)},
{"Rx_Polling_Packets", read_gen_ulong, 0, bdp_drv_off(rx_tasklet_pkts)},
{"Polling_Interrupt_Switch", read_gen_ulong, 0, bdp_drv_off(poll_intr_switch)},
{"Identify_Adapter", 0, write_blink_led_timer, 0}, {"Identify_Adapter", 0, write_blink_led_timer, 0},
{"", 0, 0, 0} {"", 0, 0, 0}
}; };
...@@ -291,7 +289,7 @@ read_info(char *page, char **start, off_t off, int count, int *eof, void *data) ...@@ -291,7 +289,7 @@ read_info(char *page, char **start, off_t off, int count, int *eof, void *data)
return generic_read(page, start, off, count, eof, len); return generic_read(page, start, off, count, eof, len);
} }
static struct proc_dir_entry * __devinit static struct proc_dir_entry *
create_proc_rw(char *name, void *data, struct proc_dir_entry *parent, create_proc_rw(char *name, void *data, struct proc_dir_entry *parent,
read_proc_t * read_proc, write_proc_t * write_proc) read_proc_t * read_proc, write_proc_t * write_proc)
{ {
...@@ -318,7 +316,7 @@ create_proc_rw(char *name, void *data, struct proc_dir_entry *parent, ...@@ -318,7 +316,7 @@ create_proc_rw(char *name, void *data, struct proc_dir_entry *parent,
} }
void void
e100_remove_proc_subdir(struct e100_private *bdp) e100_remove_proc_subdir(struct e100_private *bdp, char *name)
{ {
e100_proc_entry *pe; e100_proc_entry *pe;
char info[256]; char info[256];
...@@ -329,8 +327,8 @@ e100_remove_proc_subdir(struct e100_private *bdp) ...@@ -329,8 +327,8 @@ e100_remove_proc_subdir(struct e100_private *bdp)
return; return;
} }
len = strlen(bdp->device->name); len = strlen(bdp->ifname);
strncpy(info, bdp->device->name, sizeof (info)); strncpy(info, bdp->ifname, sizeof (info));
strncat(info + len, ".info", sizeof (info) - len); strncat(info + len, ".info", sizeof (info) - len);
if (bdp->proc_parent) { if (bdp->proc_parent) {
...@@ -341,7 +339,7 @@ e100_remove_proc_subdir(struct e100_private *bdp) ...@@ -341,7 +339,7 @@ e100_remove_proc_subdir(struct e100_private *bdp)
remove_proc_entry(pe->name, bdp->proc_parent); remove_proc_entry(pe->name, bdp->proc_parent);
} }
remove_proc_entry(bdp->device->name, adapters_proc_dir); remove_proc_entry(bdp->ifname, adapters_proc_dir);
bdp->proc_parent = NULL; bdp->proc_parent = NULL;
} }
...@@ -351,7 +349,7 @@ e100_remove_proc_subdir(struct e100_private *bdp) ...@@ -351,7 +349,7 @@ e100_remove_proc_subdir(struct e100_private *bdp)
e100_proc_cleanup(); e100_proc_cleanup();
} }
int __devinit int
e100_create_proc_subdir(struct e100_private *bdp) e100_create_proc_subdir(struct e100_private *bdp)
{ {
struct proc_dir_entry *dev_dir; struct proc_dir_entry *dev_dir;
...@@ -366,7 +364,7 @@ e100_create_proc_subdir(struct e100_private *bdp) ...@@ -366,7 +364,7 @@ e100_create_proc_subdir(struct e100_private *bdp)
return -ENOMEM; return -ENOMEM;
} }
strncpy(info, bdp->device->name, sizeof (info)); strncpy(info, bdp->ifname, sizeof (info));
len = strlen(info); len = strlen(info);
strncat(info + len, ".info", sizeof (info) - len); strncat(info + len, ".info", sizeof (info) - len);
...@@ -376,12 +374,12 @@ e100_create_proc_subdir(struct e100_private *bdp) ...@@ -376,12 +374,12 @@ e100_create_proc_subdir(struct e100_private *bdp)
return -ENOMEM; return -ENOMEM;
} }
dev_dir = create_proc_entry(bdp->device->name, S_IFDIR, dev_dir = create_proc_entry(bdp->ifname, S_IFDIR,
adapters_proc_dir); adapters_proc_dir);
bdp->proc_parent = dev_dir; bdp->proc_parent = dev_dir;
if (!dev_dir) { if (!dev_dir) {
e100_remove_proc_subdir(bdp); e100_remove_proc_subdir(bdp, bdp->ifname);
return -ENOMEM; return -ENOMEM;
} }
...@@ -396,7 +394,7 @@ e100_create_proc_subdir(struct e100_private *bdp) ...@@ -396,7 +394,7 @@ e100_create_proc_subdir(struct e100_private *bdp)
if (!(create_proc_rw(pe->name, data, dev_dir, if (!(create_proc_rw(pe->name, data, dev_dir,
pe->read_proc, pe->write_proc))) { pe->read_proc, pe->write_proc))) {
e100_remove_proc_subdir(bdp); e100_remove_proc_subdir(bdp, bdp->ifname);
return -ENOMEM; return -ENOMEM;
} }
} }
......
@@ -95,6 +95,15 @@ struct e1000_adapter;
#define E1000_RXBUFFER_8192 8192
#define E1000_RXBUFFER_16384 16384
/* Flow Control High-Watermark: 43464 bytes */
#define E1000_FC_HIGH_THRESH 0xA9C8
/* Flow Control Low-Watermark: 43456 bytes */
#define E1000_FC_LOW_THRESH 0xA9C0
/* Flow Control Pause Time: 858 usec */
#define E1000_FC_PAUSE_TIME 0x0680
/* How many Tx Descriptors do we need to call netif_wake_queue ? */
#define E1000_TX_QUEUE_WAKE 16
/* How many Rx Buffers do we bundle into one write to the hardware ? */
@@ -194,5 +203,6 @@ struct e1000_adapter {
uint32_t pci_state[16];
char ifname[IFNAMSIZ];
};
#endif /* _E1000_H_ */
@@ -117,7 +117,8 @@ e1000_ethtool_sset(struct e1000_adapter *adapter, struct ethtool_cmd *ecmd)
if(ecmd->autoneg == AUTONEG_ENABLE) {
hw->autoneg = 1;
-hw->autoneg_advertised = (ecmd->advertising & 0x002F);
+hw->autoneg_advertised = 0x002F;
+ecmd->advertising = 0x002F;
} else {
hw->autoneg = 0;
switch(ecmd->speed + ecmd->duplex) {
@@ -228,6 +229,7 @@ e1000_ethtool_geeprom(struct e1000_adapter *adapter,
for(i = 0; i <= (last_word - first_word); i++)
e1000_read_eeprom(hw, first_word + i, &eeprom_buff[i]);
return 0;
}
@@ -290,7 +292,6 @@ e1000_ethtool_gwol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
case E1000_DEV_ID_82543GC_FIBER:
case E1000_DEV_ID_82543GC_COPPER:
case E1000_DEV_ID_82544EI_FIBER:
default:
wol->supported = 0;
wol->wolopts = 0;
return;
@@ -304,14 +305,7 @@ e1000_ethtool_gwol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
}
/* Fall Through */
-case E1000_DEV_ID_82544EI_COPPER:
+default:
case E1000_DEV_ID_82544GC_COPPER:
case E1000_DEV_ID_82544GC_LOM:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
case E1000_DEV_ID_82545EM_FIBER:
case E1000_DEV_ID_82546EB_COPPER:
wol->supported = WAKE_PHY | WAKE_UCAST |
WAKE_MCAST | WAKE_BCAST | WAKE_MAGIC;
@@ -340,7 +334,6 @@ e1000_ethtool_swol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
case E1000_DEV_ID_82543GC_FIBER:
case E1000_DEV_ID_82543GC_COPPER:
case E1000_DEV_ID_82544EI_FIBER:
default:
return wol->wolopts ? -EOPNOTSUPP : 0;
case E1000_DEV_ID_82546EB_FIBER:
@@ -349,14 +342,7 @@ e1000_ethtool_swol(struct e1000_adapter *adapter, struct ethtool_wolinfo *wol)
return wol->wolopts ? -EOPNOTSUPP : 0;
/* Fall Through */
-case E1000_DEV_ID_82544EI_COPPER:
+default:
case E1000_DEV_ID_82544GC_COPPER:
case E1000_DEV_ID_82544GC_LOM:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
case E1000_DEV_ID_82545EM_FIBER:
case E1000_DEV_ID_82546EB_COPPER:
if(wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE))
return -EOPNOTSUPP;
@@ -518,7 +504,8 @@ e1000_ethtool_ioctl(struct net_device *netdev, struct ifreq *ifr)
if(copy_from_user(&eeprom, addr, sizeof(eeprom)))
return -EFAULT;
-if((err = e1000_ethtool_geeprom(adapter, &eeprom, eeprom_buff))<0)
+if((err = e1000_ethtool_geeprom(adapter,
+&eeprom, eeprom_buff)))
return err;
if(copy_to_user(addr, &eeprom, sizeof(eeprom)))
...
@@ -47,9 +47,9 @@ static void e1000_lower_ee_clk(struct e1000_hw *hw, uint32_t *eecd);
static void e1000_shift_out_ee_bits(struct e1000_hw *hw, uint16_t data, uint16_t count);
static uint16_t e1000_shift_in_ee_bits(struct e1000_hw *hw);
static void e1000_setup_eeprom(struct e1000_hw *hw);
static void e1000_standby_eeprom(struct e1000_hw *hw);
static void e1000_clock_eeprom(struct e1000_hw *hw);
static void e1000_cleanup_eeprom(struct e1000_hw *hw);
static void e1000_standby_eeprom(struct e1000_hw *hw);
static int32_t e1000_id_led_init(struct e1000_hw * hw);
/******************************************************************************
@@ -88,6 +88,9 @@ e1000_set_mac_type(struct e1000_hw *hw)
break;
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
hw->mac_type = e1000_82540;
break;
case E1000_DEV_ID_82545EM_COPPER:
@@ -655,6 +658,8 @@ e1000_setup_copper_link(struct e1000_hw *hw)
return -E1000_ERR_PHY;
}
phy_data |= M88E1000_EPSCR_TX_CLK_25;
if (hw->phy_revision < M88E1011_I_REV_4) {
/* Configure Master and Slave downshift values */
phy_data &= ~(M88E1000_EPSCR_MASTER_DOWNSHIFT_MASK |
M88E1000_EPSCR_SLAVE_DOWNSHIFT_MASK);
@@ -664,6 +669,7 @@ e1000_setup_copper_link(struct e1000_hw *hw)
DEBUGOUT("PHY Write Error\n");
return -E1000_ERR_PHY;
}
}
/* SW Reset the PHY so all changes take effect */
ret_val = e1000_phy_reset(hw);
@@ -1008,7 +1014,6 @@ e1000_phy_force_speed_duplex(struct e1000_hw *hw)
/* Write the configured values back to the Device Control Reg. */
E1000_WRITE_REG(hw, CTRL, ctrl);
/* Write the MII Control Register with the new PHY configuration. */
if(e1000_read_phy_reg(hw, M88E1000_PHY_SPEC_CTRL, &phy_data) < 0) {
DEBUGOUT("PHY Read Error\n");
return -E1000_ERR_PHY;
@@ -1026,6 +1031,8 @@ e1000_phy_force_speed_duplex(struct e1000_hw *hw)
/* Need to reset the PHY or these changes will be ignored */
mii_ctrl_reg |= MII_CR_RESET;
/* Write back the modified PHY MII control register. */
if(e1000_write_phy_reg(hw, PHY_CTRL, mii_ctrl_reg) < 0) {
DEBUGOUT("PHY Write Error\n");
return -E1000_ERR_PHY;
@@ -2100,6 +2107,7 @@ e1000_detect_gig_phy(struct e1000_hw *hw)
return -E1000_ERR_PHY;
}
hw->phy_id |= (uint32_t) (phy_id_low & PHY_REVISION_MASK);
hw->phy_revision = (uint32_t) phy_id_low & ~PHY_REVISION_MASK;
switch(hw->mac_type) {
case e1000_82543:
@@ -2242,7 +2250,7 @@ e1000_raise_ee_clk(struct e1000_hw *hw,
uint32_t *eecd)
{
/* Raise the clock input to the EEPROM (by setting the SK bit), and then
-* wait 50 microseconds.
+* wait <delay> microseconds.
*/
*eecd = *eecd | E1000_EECD_SK;
E1000_WRITE_REG(hw, EECD, *eecd);
@@ -2331,11 +2339,11 @@ e1000_shift_in_ee_bits(struct e1000_hw *hw)
uint32_t i;
uint16_t data;
-/* In order to read a register from the EEPROM, we need to shift 16 bits
-* in from the EEPROM. Bits are "shifted in" by raising the clock input to
-* the EEPROM (setting the SK bit), and then reading the value of the "DO"
-* bit. During this "shifting in" process the "DI" bit should always be
-* clear..
+/* In order to read a register from the EEPROM, we need to shift 'count'
+* bits in from the EEPROM. Bits are "shifted in" by raising the clock
+* input to the EEPROM (setting the SK bit), and then reading the value of
+* the "DO" bit. During this "shifting in" process the "DI" bit should
+* always be clear.
*/
eecd = E1000_READ_REG(hw, EECD);
@@ -3140,6 +3148,9 @@ e1000_setup_led(struct e1000_hw *hw)
ledctl |= (E1000_LEDCTL_MODE_LED_OFF << E1000_LEDCTL_LED0_MODE_SHIFT);
E1000_WRITE_REG(hw, LEDCTL, ledctl);
break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
@@ -3173,6 +3184,9 @@ e1000_cleanup_led(struct e1000_hw *hw)
case E1000_DEV_ID_82544GC_LOM:
/* No cleanup necessary */
break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
@@ -3223,6 +3237,9 @@ e1000_led_on(struct e1000_hw *hw)
ctrl |= E1000_CTRL_SWDPIO0;
E1000_WRITE_REG(hw, CTRL, ctrl);
break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
@@ -3270,6 +3287,9 @@ e1000_led_off(struct e1000_hw *hw)
ctrl |= E1000_CTRL_SWDPIO0;
E1000_WRITE_REG(hw, CTRL, ctrl);
break;
case E1000_DEV_ID_82540EP:
case E1000_DEV_ID_82540EP_LOM:
case E1000_DEV_ID_82540EP_LP:
case E1000_DEV_ID_82540EM:
case E1000_DEV_ID_82540EM_LOM:
case E1000_DEV_ID_82545EM_COPPER:
...
@@ -246,11 +246,14 @@ void e1000_write_reg_io(struct e1000_hw *hw, uint32_t offset, uint32_t value);
#define E1000_DEV_ID_82544GC_LOM 0x100D
#define E1000_DEV_ID_82540EM 0x100E
#define E1000_DEV_ID_82540EM_LOM 0x1015
#define E1000_DEV_ID_82540EP_LOM 0x1016
#define E1000_DEV_ID_82540EP 0x1017
#define E1000_DEV_ID_82540EP_LP 0x101E
#define E1000_DEV_ID_82545EM_COPPER 0x100F
#define E1000_DEV_ID_82545EM_FIBER 0x1011
#define E1000_DEV_ID_82546EB_COPPER 0x1010
#define E1000_DEV_ID_82546EB_FIBER 0x1012
-#define NUM_DEV_IDS 13
+#define NUM_DEV_IDS 16
#define NODE_ADDRESS_SIZE 6
#define ETH_LENGTH_OF_ADDRESS 6
@@ -851,6 +854,7 @@ struct e1000_hw {
e1000_bus_type bus_type;
uint32_t io_base;
uint32_t phy_id;
uint32_t phy_revision;
uint32_t phy_addr;
uint32_t original_fc;
uint32_t txcw;
@@ -1755,6 +1759,7 @@ struct e1000_hw {
#define M88E1011_I_PHY_ID 0x01410C20
#define M88E1000_12_PHY_ID M88E1000_E_PHY_ID
#define M88E1000_14_PHY_ID M88E1000_E_PHY_ID
#define M88E1011_I_REV_4 0x04
/* Miscellaneous PHY bit definitions. */
#define PHY_PREAMBLE 0xFFFFFFFF
...
@@ -96,6 +96,6 @@ typedef enum {
readl((a)->hw_addr + E1000_##reg + ((offset) << 2)) : \
readl((a)->hw_addr + E1000_82542_##reg + ((offset) << 2)))
-#define E1000_WRITE_FLUSH(a) ((void)E1000_READ_REG(a, STATUS))
+#define E1000_WRITE_FLUSH(a) E1000_READ_REG(a, STATUS)
#endif /* _E1000_OSDEP_H_ */
@@ -197,8 +197,7 @@ E1000_PARAM(RxAbsIntDelay, "Receive Absolute Interrupt Delay");
#define MIN_RXD 80
#define MAX_82544_RXD 4096
-#define DEFAULT_RDTR 128
+#define DEFAULT_RDTR 0
#define DEFAULT_RDTR_82544 0
#define MAX_RXDELAY 0xFFFF
#define MIN_RXDELAY 0
@@ -315,7 +314,8 @@ e1000_check_options(struct e1000_adapter *adapter)
};
struct e1000_desc_ring *tx_ring = &adapter->tx_ring;
e1000_mac_type mac_type = adapter->hw.mac_type;
-opt.arg.r.max = mac_type < e1000_82544 ? MAX_TXD : MAX_82544_TXD;
+opt.arg.r.max = mac_type < e1000_82544 ?
+MAX_TXD : MAX_82544_TXD;
tx_ring->count = TxDescriptors[bd];
e1000_validate_option(&tx_ring->count, &opt);
@@ -398,16 +398,13 @@ e1000_check_options(struct e1000_adapter *adapter)
}
{ /* Receive Interrupt Delay */
char *rdtr = "using default of " __MODULE_STRING(DEFAULT_RDTR);
char *rdtr_82544 = "using default of "
__MODULE_STRING(DEFAULT_RDTR_82544);
struct e1000_option opt = {
.type = range_option,
.name = "Receive Interrupt Delay",
.arg = { r: { min: MIN_RXDELAY, max: MAX_RXDELAY }}
};
-e1000_mac_type mac_type = adapter->hw.mac_type;
-opt.def = mac_type > e1000_82544 ? DEFAULT_RDTR : 0;
-opt.err = mac_type > e1000_82544 ? rdtr : rdtr_82544;
+opt.def = DEFAULT_RDTR;
+opt.err = rdtr;
adapter->rx_int_delay = RxIntDelay[bd];
e1000_validate_option(&adapter->rx_int_delay, &opt);
...
@@ -132,7 +132,7 @@ e1000_proc_single_read(char *page, char **start, off_t off,
return e1000_proc_read(page, start, off, count, eof);
}
-static void __devexit
+static void
e1000_proc_dirs_free(char *name, struct list_head *proc_list_head)
{
struct proc_dir_entry *intel_proc_dir, *proc_dir;
@@ -188,7 +188,7 @@ e1000_proc_dirs_free(char *name, struct list_head *proc_list_head)
}
-static int __devinit
+static int
e1000_proc_singles_create(struct proc_dir_entry *parent,
struct list_head *proc_list_head)
{
@@ -215,7 +215,7 @@ e1000_proc_singles_create(struct proc_dir_entry *parent,
return 1;
}
-static void __devinit
+static void
e1000_proc_dirs_create(void *data, char *name,
struct list_head *proc_list_head)
{
@@ -255,7 +255,7 @@ e1000_proc_dirs_create(void *data, char *name,
info_entry->data = proc_list_head;
}
-static void __devinit
+static void
e1000_proc_list_add(struct list_head *proc_list_head, char *tag,
void *data, size_t len,
char *(*func)(void *, size_t, char *))
@@ -274,7 +274,7 @@ e1000_proc_list_add(struct list_head *proc_list_head, char *tag,
list_add_tail(&new->list, proc_list_head);
}
-static void __devexit
+static void
e1000_proc_list_free(struct list_head *proc_list_head)
{
struct proc_list *elem;
@@ -542,7 +542,7 @@ e1000_proc_rx_status(void *data, size_t len, char *buf)
#define LIST_ADD_H(T,D) LIST_ADD_F((T), (D), e1000_proc_hex)
#define LIST_ADD_U(T,D) LIST_ADD_F((T), (D), e1000_proc_unsigned)
-static void __devinit
+static void
e1000_proc_list_setup(struct e1000_adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
@@ -572,7 +572,7 @@ e1000_proc_list_setup(struct e1000_adapter *adapter)
}
LIST_ADD_U("IRQ", &adapter->pdev->irq);
-LIST_ADD_S("System_Device_Name", adapter->netdev->name);
+LIST_ADD_S("System_Device_Name", adapter->ifname);
LIST_ADD_F("Current_HWaddr",
adapter->netdev->dev_addr, e1000_proc_hwaddr);
LIST_ADD_F("Permanent_HWaddr",
@@ -670,13 +670,13 @@ e1000_proc_list_setup(struct e1000_adapter *adapter)
* @adapter: board private structure
*/
-void __devinit
+void
e1000_proc_dev_setup(struct e1000_adapter *adapter)
{
e1000_proc_list_setup(adapter);
e1000_proc_dirs_create(adapter,
-adapter->netdev->name,
+adapter->ifname,
&adapter->proc_list_head);
}
@@ -685,18 +685,18 @@ e1000_proc_dev_setup(struct e1000_adapter *adapter)
* @adapter: board private structure
*/
-void __devexit
+void
e1000_proc_dev_free(struct e1000_adapter *adapter)
{
-e1000_proc_dirs_free(adapter->netdev->name, &adapter->proc_list_head);
+e1000_proc_dirs_free(adapter->ifname, &adapter->proc_list_head);
e1000_proc_list_free(&adapter->proc_list_head);
}
#else /* CONFIG_PROC_FS */
-void __devinit e1000_proc_dev_setup(struct e1000_adapter *adapter) {}
-void __devexit e1000_proc_dev_free(struct e1000_adapter *adapter) {}
+void e1000_proc_dev_setup(struct e1000_adapter *adapter) {}
+void e1000_proc_dev_free(struct e1000_adapter *adapter) {}
#endif /* CONFIG_PROC_FS */
@@ -192,9 +192,12 @@ int mii_nway_restart (struct mii_if_info *mii)
void mii_check_link (struct mii_if_info *mii)
{
-if (mii_link_ok(mii))
+int cur_link = mii_link_ok(mii);
+int prev_link = netif_carrier_ok(mii->dev);
+if (cur_link && !prev_link)
netif_carrier_on(mii->dev);
-else
+else if (prev_link && !cur_link)
netif_carrier_off(mii->dev);
}
...
@@ -184,7 +184,7 @@
NETIF_MSG_WOL | \
NETIF_MSG_RX_ERR | \
NETIF_MSG_TX_ERR)
-static int debug = NATSEMI_DEF_MSG;
+static int debug = -1;
/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
static int max_interrupt_work = 20;
@@ -256,7 +256,7 @@ MODULE_PARM(full_duplex, "1-" __MODULE_STRING(MAX_UNITS) "i");
MODULE_PARM_DESC(max_interrupt_work,
"DP8381x maximum events handled per interrupt");
MODULE_PARM_DESC(mtu, "DP8381x MTU (all boards)");
-MODULE_PARM_DESC(debug, "DP8381x default debug bitmask");
+MODULE_PARM_DESC(debug, "DP8381x default debug level");
MODULE_PARM_DESC(rx_copybreak,
"DP8381x copy breakpoint for copy-only-tiny-frames");
MODULE_PARM_DESC(options,
@@ -796,7 +796,7 @@ static int __devinit natsemi_probe1 (struct pci_dev *pdev,
pci_set_drvdata(pdev, dev);
np->iosize = iosize;
spin_lock_init(&np->lock);
-np->msg_enable = debug;
+np->msg_enable = (debug >= 0) ? (1<<debug)-1 : NATSEMI_DEF_MSG;
np->hands_off = 0;
/* Reset the chip to erase previous misconfiguration. */
...
@@ -474,6 +474,8 @@ int scsi_register_host(Scsi_Host_Template *shost_tp)
struct list_head *lh;
struct Scsi_Host *shost;
INIT_LIST_HEAD(&shost_tp->shtp_list);
/*
* Check no detect routine.
*/
...
@@ -149,12 +149,13 @@ struct bio *bio_alloc(int gfp_mask, int nr_iovecs)
bio_init(bio);
if (unlikely(!nr_iovecs))
-goto out;
+goto noiovec;
bvl = bvec_alloc(gfp_mask, nr_iovecs, &idx);
if (bvl) {
bio->bi_flags |= idx << BIO_POOL_OFFSET;
bio->bi_max_vecs = bvec_array[idx].nr_vecs;
+noiovec:
bio->bi_io_vec = bvl;
bio->bi_destructor = bio_destructor;
out:
...
@@ -151,14 +151,19 @@ __asm__ __volatile__("mb": : :"memory")
#define wmb() \
__asm__ __volatile__("wmb": : :"memory")
#define read_barrier_depends() \
__asm__ __volatile__("mb": : :"memory")
#ifdef CONFIG_SMP
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#define smp_read_barrier_depends() barrier()
#endif
#define set_mb(var, value) \
...
...@@ -52,6 +52,7 @@ extern asmlinkage void __backtrace(void); ...@@ -52,6 +52,7 @@ extern asmlinkage void __backtrace(void);
#define mb() __asm__ __volatile__ ("" : : : "memory") #define mb() __asm__ __volatile__ ("" : : : "memory")
#define rmb() mb() #define rmb() mb()
#define wmb() mb() #define wmb() mb()
#define read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; mb(); } while (0) #define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0) #define set_wmb(var, value) do { var = value; wmb(); } while (0)
#define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t"); #define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
...@@ -78,12 +79,14 @@ extern struct task_struct *__switch_to(struct thread_info *, struct thread_info ...@@ -78,12 +79,14 @@ extern struct task_struct *__switch_to(struct thread_info *, struct thread_info
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else #else
#define smp_mb() barrier() #define smp_mb() barrier()
#define smp_rmb() barrier() #define smp_rmb() barrier()
#define smp_wmb() barrier() #define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#define clf() __clf() #define clf() __clf()
#define stf() __stf() #define stf() __stf()
......
...@@ -150,6 +150,7 @@ static inline unsigned long __xchg(unsigned long x, void * ptr, int size) ...@@ -150,6 +150,7 @@ static inline unsigned long __xchg(unsigned long x, void * ptr, int size)
#define mb() __asm__ __volatile__ ("" : : : "memory") #define mb() __asm__ __volatile__ ("" : : : "memory")
#define rmb() mb() #define rmb() mb()
#define wmb() mb() #define wmb() mb()
#define read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; mb(); } while (0) #define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0) #define set_wmb(var, value) do { var = value; wmb(); } while (0)
...@@ -157,10 +158,12 @@ static inline unsigned long __xchg(unsigned long x, void * ptr, int size) ...@@ -157,10 +158,12 @@ static inline unsigned long __xchg(unsigned long x, void * ptr, int size)
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else #else
#define smp_mb() barrier() #define smp_mb() barrier()
#define smp_rmb() barrier() #define smp_rmb() barrier()
#define smp_wmb() barrier() #define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#endif #endif
#define iret() #define iret()
......
/*
* linux/include/asm-i386/edd.h
* Copyright (C) 2002 Dell Computer Corporation
* by Matt Domsch <Matt_Domsch@dell.com>
*
* structures and definitions for the int 13h, ax={41,48}h
* BIOS Enhanced Disk Drive Services
* This is based on the T13 group document D1572 Revision 0 (August 14 2002)
* available at http://www.t13.org/docs2002/d1572r0.pdf. It is
* very similar to D1484 Revision 3 http://www.t13.org/docs2002/d1484r3.pdf
*
* In a nutshell, arch/i386/boot/setup.S populates a scratch table
* in the empty_zero_block that contains a list of BIOS-enumerated
* boot devices.
* In arch/i386/kernel/setup.c, this information is
* transferred into the edd structure, and in arch/i386/kernel/edd.c, that
 * information is used to identify the BIOS boot disk. The code in setup.S
* is very sensitive to the size of these structures.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License v2.0 as published by
* the Free Software Foundation
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ASM_I386_EDD_H
#define _ASM_I386_EDD_H
#define EDDNR 0x1e9 /* addr of number of edd_info structs at EDDBUF
in empty_zero_block - treat this as 1 byte */
#define EDDBUF 0x600 /* addr of edd_info structs in empty_zero_block */
#define EDDMAXNR 6 /* number of edd_info structs starting at EDDBUF */
#define EDDEXTSIZE 4 /* change these if you muck with the structures */
#define EDDPARMSIZE 74
#ifndef __ASSEMBLY__
#define EDD_EXT_FIXED_DISK_ACCESS (1 << 0)
#define EDD_EXT_DEVICE_LOCKING_AND_EJECTING (1 << 1)
#define EDD_EXT_ENHANCED_DISK_DRIVE_SUPPORT (1 << 2)
#define EDD_EXT_64BIT_EXTENSIONS (1 << 3)
#define EDD_INFO_DMA_BOUNDRY_ERROR_TRANSPARENT (1 << 0)
#define EDD_INFO_GEOMETRY_VALID (1 << 1)
#define EDD_INFO_REMOVABLE (1 << 2)
#define EDD_INFO_WRITE_VERIFY (1 << 3)
#define EDD_INFO_MEDIA_CHANGE_NOTIFICATION (1 << 4)
#define EDD_INFO_LOCKABLE (1 << 5)
#define EDD_INFO_NO_MEDIA_PRESENT (1 << 6)
#define EDD_INFO_USE_INT13_FN50 (1 << 7)
struct edd_device_params {
u16 length;
u16 info_flags;
u32 num_default_cylinders;
u32 num_default_heads;
u32 sectors_per_track;
u64 number_of_sectors;
u16 bytes_per_sector;
u32 dpte_ptr; /* 0xFFFFFFFF for our purposes */
u16 key; /* = 0xBEDD */
u8 device_path_info_length; /* = 44 */
u8 reserved2;
u16 reserved3;
u8 host_bus_type[4];
u8 interface_type[8];
union {
struct {
u16 base_address;
u16 reserved1;
u32 reserved2;
} __attribute__ ((packed)) isa;
struct {
u8 bus;
u8 slot;
u8 function;
u8 channel;
u32 reserved;
} __attribute__ ((packed)) pci;
/* pcix is same as pci */
struct {
u64 reserved;
} __attribute__ ((packed)) ibnd;
struct {
u64 reserved;
} __attribute__ ((packed)) xprs;
struct {
u64 reserved;
} __attribute__ ((packed)) htpt;
struct {
u64 reserved;
} __attribute__ ((packed)) unknown;
} interface_path;
union {
struct {
u8 device;
u8 reserved1;
u16 reserved2;
u32 reserved3;
u64 reserved4;
} __attribute__ ((packed)) ata;
struct {
u8 device;
u8 lun;
u8 reserved1;
u8 reserved2;
u32 reserved3;
u64 reserved4;
} __attribute__ ((packed)) atapi;
struct {
u16 id;
u64 lun;
u16 reserved1;
u32 reserved2;
} __attribute__ ((packed)) scsi;
struct {
u64 serial_number;
u64 reserved;
} __attribute__ ((packed)) usb;
struct {
u64 eui;
u64 reserved;
} __attribute__ ((packed)) i1394;
struct {
u64 wwid;
u64 lun;
} __attribute__ ((packed)) fibre;
struct {
u64 identity_tag;
u64 reserved;
} __attribute__ ((packed)) i2o;
struct {
u32 array_number;
u32 reserved1;
u64 reserved2;
} __attribute__ ((packed)) raid;
struct {
u8 device;
u8 reserved1;
u16 reserved2;
u32 reserved3;
u64 reserved4;
} __attribute__ ((packed)) sata;
struct {
u64 reserved1;
u64 reserved2;
} __attribute__ ((packed)) unknown;
} device_path;
u8 reserved4;
u8 checksum;
} __attribute__ ((packed));
struct edd_info {
u8 device;
u8 version;
u16 interface_support;
struct edd_device_params params;
} __attribute__ ((packed));
extern struct edd_info edd[EDDNR];
extern unsigned char eddnr;
#endif /*!__ASSEMBLY__ */
#endif /* _ASM_I386_EDD_H */
...@@ -37,6 +37,8 @@ ...@@ -37,6 +37,8 @@
#define KERNEL_START (*(unsigned long *) (PARAM+0x214)) #define KERNEL_START (*(unsigned long *) (PARAM+0x214))
#define INITRD_START (*(unsigned long *) (PARAM+0x218)) #define INITRD_START (*(unsigned long *) (PARAM+0x218))
#define INITRD_SIZE (*(unsigned long *) (PARAM+0x21c)) #define INITRD_SIZE (*(unsigned long *) (PARAM+0x21c))
#define EDD_NR (*(unsigned char *) (PARAM+EDDNR))
#define EDD_BUF ((struct edd_info *) (PARAM+EDDBUF))
#define COMMAND_LINE ((char *) (PARAM+2048)) #define COMMAND_LINE ((char *) (PARAM+2048))
#define COMMAND_LINE_SIZE 256 #define COMMAND_LINE_SIZE 256
......
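The edd.h comment above says arch/i386/kernel/setup.c transfers the BIOS-filled scratch table into the exported edd[] array via EDD_NR and EDD_BUF. A minimal sketch of what that copy might look like, assuming the usual setup.c includes (copy_edd() is an assumed name, not necessarily what the patch uses):
/* Hypothetical sketch of the copy described in edd.h above. */
static void __init copy_edd(void)
{
	eddnr = EDD_NR;				/* entry count from the zero page */
	if (eddnr > EDDMAXNR)			/* never trust the BIOS blindly */
		eddnr = EDDMAXNR;
	memcpy(edd, EDD_BUF, eddnr * sizeof(struct edd_info));
}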
...@@ -286,6 +286,60 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old, ...@@ -286,6 +286,60 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
#define mb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory") #define mb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory")
#define rmb() mb() #define rmb() mb()
/**
 * read_barrier_depends - Flush all pending reads that subsequent reads
* depend on.
*
* No data-dependent reads from memory-like regions are ever reordered
* over this barrier. All reads preceding this primitive are guaranteed
* to access memory (but not necessarily other CPUs' caches) before any
 * reads following this primitive that depend on the data returned by
* any of the preceding reads. This primitive is much lighter weight than
* rmb() on most CPUs, and is never heavier weight than is
* rmb().
*
* These ordering constraints are respected by both the local CPU
* and the compiler.
*
* Ordering is not guaranteed by anything other than these primitives,
* not even by data dependencies. See the documentation for
* memory_barrier() for examples and URLs to more information.
*
* For example, the following code would force ordering (the initial
* value of "a" is zero, "b" is one, and "p" is "&a"):
*
* <programlisting>
* CPU 0 CPU 1
*
* b = 2;
* memory_barrier();
* p = &b; q = p;
* read_barrier_depends();
* d = *q;
* </programlisting>
*
* because the read of "*q" depends on the read of "p" and these
* two reads are separated by a read_barrier_depends(). However,
* the following code, with the same initial values for "a" and "b":
*
* <programlisting>
* CPU 0 CPU 1
*
* a = 2;
* memory_barrier();
* b = 3; y = b;
* read_barrier_depends();
* x = a;
* </programlisting>
*
* does not enforce ordering, since there is no data dependency between
* the read of "a" and the read of "b". Therefore, on some CPUs, such
* as Alpha, "y" could be set to 3 and "x" to 0. Use rmb()
 * in cases like this where there are no data dependencies.
**/
#define read_barrier_depends() do { } while(0)
#ifdef CONFIG_X86_OOSTORE #ifdef CONFIG_X86_OOSTORE
#define wmb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory") #define wmb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory")
#else #else
...@@ -296,11 +350,13 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old, ...@@ -296,11 +350,13 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#define set_mb(var, value) do { xchg(&var, value); } while (0) #define set_mb(var, value) do { xchg(&var, value); } while (0)
#else #else
#define smp_mb() barrier() #define smp_mb() barrier()
#define smp_rmb() barrier() #define smp_rmb() barrier()
#define smp_wmb() barrier() #define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; barrier(); } while (0) #define set_mb(var, value) do { var = value; barrier(); } while (0)
#endif #endif
......
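The asm-i386/system.h comment above describes the pointer-publication case that read_barrier_depends() exists for. Below is a minimal user-space sketch of that pattern, not taken from the patch; the barrier macros here are stand-ins (in the kernel, wmb() and read_barrier_depends() come from <asm/system.h>).
#define barrier()              __asm__ __volatile__("" : : : "memory")
#define wmb()                  barrier()               /* stand-in for the real wmb() */
#define read_barrier_depends() do { } while (0)        /* no-op everywhere but Alpha */
static int a, b = 1;
static int *p = &a;
void publisher(void)            /* "CPU 0" in the comment above */
{
	b = 2;
	wmb();                  /* order the store to b before the store to p */
	p = &b;
}
int subscriber(void)            /* "CPU 1" in the comment above */
{
	int *q = p;
	read_barrier_depends(); /* pairs with the publisher's write barrier */
	return *q;              /* once q == &b, this is guaranteed to see 2 */
}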
...@@ -85,15 +85,18 @@ ia64_insn_group_barrier (void) ...@@ -85,15 +85,18 @@ ia64_insn_group_barrier (void)
#define mb() __asm__ __volatile__ ("mf" ::: "memory") #define mb() __asm__ __volatile__ ("mf" ::: "memory")
#define rmb() mb() #define rmb() mb()
#define wmb() mb() #define wmb() mb()
#define read_barrier_depends() do { } while(0)
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
# define smp_mb() mb() # define smp_mb() mb()
# define smp_rmb() rmb() # define smp_rmb() rmb()
# define smp_wmb() wmb() # define smp_wmb() wmb()
# define smp_read_barrier_depends() read_barrier_depends()
#else #else
# define smp_mb() barrier() # define smp_mb() barrier()
# define smp_rmb() barrier() # define smp_rmb() barrier()
# define smp_wmb() barrier() # define smp_wmb() barrier()
# define smp_read_barrier_depends() do { } while(0)
#endif #endif
/* /*
......
...@@ -146,6 +146,7 @@ extern void __global_restore_flags(unsigned long); ...@@ -146,6 +146,7 @@ extern void __global_restore_flags(unsigned long);
#define rmb() do { } while(0) #define rmb() do { } while(0)
#define wmb() wbflush() #define wmb() wbflush()
#define mb() wbflush() #define mb() wbflush()
#define read_barrier_depends() do { } while(0)
#else /* CONFIG_CPU_HAS_WB */ #else /* CONFIG_CPU_HAS_WB */
...@@ -161,6 +162,7 @@ __asm__ __volatile__( \ ...@@ -161,6 +162,7 @@ __asm__ __volatile__( \
: "memory") : "memory")
#define rmb() mb() #define rmb() mb()
#define wmb() mb() #define wmb() mb()
#define read_barrier_depends() do { } while(0)
#endif /* CONFIG_CPU_HAS_WB */ #endif /* CONFIG_CPU_HAS_WB */
...@@ -168,10 +170,12 @@ __asm__ __volatile__( \ ...@@ -168,10 +170,12 @@ __asm__ __volatile__( \
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else #else
#define smp_mb() barrier() #define smp_mb() barrier()
#define smp_rmb() barrier() #define smp_rmb() barrier()
#define smp_wmb() barrier() #define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#endif #endif
#define set_mb(var, value) \ #define set_mb(var, value) \
......
...@@ -142,15 +142,18 @@ __asm__ __volatile__( \ ...@@ -142,15 +142,18 @@ __asm__ __volatile__( \
: "memory") : "memory")
#define rmb() mb() #define rmb() mb()
#define wmb() mb() #define wmb() mb()
#define read_barrier_depends() do { } while(0)
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else #else
#define smp_mb() barrier() #define smp_mb() barrier()
#define smp_rmb() barrier() #define smp_rmb() barrier()
#define smp_wmb() barrier() #define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#endif #endif
#define set_mb(var, value) \ #define set_mb(var, value) \
......
...@@ -51,6 +51,7 @@ extern struct task_struct *_switch_to(struct task_struct *, struct task_struct * ...@@ -51,6 +51,7 @@ extern struct task_struct *_switch_to(struct task_struct *, struct task_struct *
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() do { } while(0)
#else #else
/* This is simply the barrier() macro from linux/kernel.h but when serial.c /* This is simply the barrier() macro from linux/kernel.h but when serial.c
* uses tqueue.h uses smp_mb() defined using barrier(), linux/kernel.h * uses tqueue.h uses smp_mb() defined using barrier(), linux/kernel.h
...@@ -59,6 +60,7 @@ extern struct task_struct *_switch_to(struct task_struct *, struct task_struct * ...@@ -59,6 +60,7 @@ extern struct task_struct *_switch_to(struct task_struct *, struct task_struct *
#define smp_mb() __asm__ __volatile__("":::"memory"); #define smp_mb() __asm__ __volatile__("":::"memory");
#define smp_rmb() __asm__ __volatile__("":::"memory"); #define smp_rmb() __asm__ __volatile__("":::"memory");
#define smp_wmb() __asm__ __volatile__("":::"memory"); #define smp_wmb() __asm__ __volatile__("":::"memory");
#define smp_read_barrier_depends() do { } while(0)
#endif #endif
/* interrupt control */ /* interrupt control */
...@@ -120,6 +122,7 @@ static inline void set_eiem(unsigned long val) ...@@ -120,6 +122,7 @@ static inline void set_eiem(unsigned long val)
#define mb() __asm__ __volatile__ ("sync" : : :"memory") #define mb() __asm__ __volatile__ ("sync" : : :"memory")
#define wmb() mb() #define wmb() mb()
#define read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; mb(); } while (0) #define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0) #define set_wmb(var, value) do { var = value; wmb(); } while (0)
......
...@@ -22,6 +22,8 @@ ...@@ -22,6 +22,8 @@
* mb() prevents loads and stores being reordered across this point. * mb() prevents loads and stores being reordered across this point.
* rmb() prevents loads being reordered across this point. * rmb() prevents loads being reordered across this point.
* wmb() prevents stores being reordered across this point. * wmb() prevents stores being reordered across this point.
 * read_barrier_depends() prevents data-dependent loads being reordered
* across this point (nop on PPC).
* *
* We can use the eieio instruction for wmb, but since it doesn't * We can use the eieio instruction for wmb, but since it doesn't
* give any ordering guarantees about loads, we have to use the * give any ordering guarantees about loads, we have to use the
...@@ -30,6 +32,7 @@ ...@@ -30,6 +32,7 @@
#define mb() __asm__ __volatile__ ("sync" : : : "memory") #define mb() __asm__ __volatile__ ("sync" : : : "memory")
#define rmb() __asm__ __volatile__ ("sync" : : : "memory") #define rmb() __asm__ __volatile__ ("sync" : : : "memory")
#define wmb() __asm__ __volatile__ ("eieio" : : : "memory") #define wmb() __asm__ __volatile__ ("eieio" : : : "memory")
#define read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; mb(); } while (0) #define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0) #define set_wmb(var, value) do { var = value; wmb(); } while (0)
...@@ -38,10 +41,12 @@ ...@@ -38,10 +41,12 @@
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else #else
#define smp_mb() __asm__ __volatile__("": : :"memory") #define smp_mb() __asm__ __volatile__("": : :"memory")
#define smp_rmb() __asm__ __volatile__("": : :"memory") #define smp_rmb() __asm__ __volatile__("": : :"memory")
#define smp_wmb() __asm__ __volatile__("": : :"memory") #define smp_wmb() __asm__ __volatile__("": : :"memory")
#define smp_read_barrier_depends() do { } while(0)
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#ifdef __KERNEL__ #ifdef __KERNEL__
......
...@@ -25,6 +25,8 @@ ...@@ -25,6 +25,8 @@
* mb() prevents loads and stores being reordered across this point. * mb() prevents loads and stores being reordered across this point.
* rmb() prevents loads being reordered across this point. * rmb() prevents loads being reordered across this point.
* wmb() prevents stores being reordered across this point. * wmb() prevents stores being reordered across this point.
 * read_barrier_depends() prevents data-dependent loads being reordered
* across this point (nop on PPC).
* *
* We can use the eieio instruction for wmb, but since it doesn't * We can use the eieio instruction for wmb, but since it doesn't
* give any ordering guarantees about loads, we have to use the * give any ordering guarantees about loads, we have to use the
...@@ -33,6 +35,7 @@ ...@@ -33,6 +35,7 @@
#define mb() __asm__ __volatile__ ("sync" : : : "memory") #define mb() __asm__ __volatile__ ("sync" : : : "memory")
#define rmb() __asm__ __volatile__ ("lwsync" : : : "memory") #define rmb() __asm__ __volatile__ ("lwsync" : : : "memory")
#define wmb() __asm__ __volatile__ ("eieio" : : : "memory") #define wmb() __asm__ __volatile__ ("eieio" : : : "memory")
#define read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; mb(); } while (0) #define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0) #define set_wmb(var, value) do { var = value; wmb(); } while (0)
...@@ -41,10 +44,12 @@ ...@@ -41,10 +44,12 @@
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else #else
#define smp_mb() __asm__ __volatile__("": : :"memory") #define smp_mb() __asm__ __volatile__("": : :"memory")
#define smp_rmb() __asm__ __volatile__("": : :"memory") #define smp_rmb() __asm__ __volatile__("": : :"memory")
#define smp_wmb() __asm__ __volatile__("": : :"memory") #define smp_wmb() __asm__ __volatile__("": : :"memory")
#define smp_read_barrier_depends() do { } while(0)
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#ifdef CONFIG_DEBUG_KERNEL #ifdef CONFIG_DEBUG_KERNEL
......
...@@ -227,9 +227,11 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size) ...@@ -227,9 +227,11 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
#define mb() eieio() #define mb() eieio()
#define rmb() eieio() #define rmb() eieio()
#define wmb() eieio() #define wmb() eieio()
#define read_barrier_depends() do { } while(0)
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#define smp_mb__before_clear_bit() smp_mb() #define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb() #define smp_mb__after_clear_bit() smp_mb()
......
...@@ -238,9 +238,11 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size) ...@@ -238,9 +238,11 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
#define mb() eieio() #define mb() eieio()
#define rmb() eieio() #define rmb() eieio()
#define wmb() eieio() #define wmb() eieio()
#define read_barrier_depends() do { } while(0)
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#define smp_mb__before_clear_bit() smp_mb() #define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb() #define smp_mb__after_clear_bit() smp_mb()
......
...@@ -89,15 +89,18 @@ extern void __xchg_called_with_bad_pointer(void); ...@@ -89,15 +89,18 @@ extern void __xchg_called_with_bad_pointer(void);
#define mb() __asm__ __volatile__ ("": : :"memory") #define mb() __asm__ __volatile__ ("": : :"memory")
#define rmb() mb() #define rmb() mb()
#define wmb() __asm__ __volatile__ ("": : :"memory") #define wmb() __asm__ __volatile__ ("": : :"memory")
#define read_barrier_depends() do { } while(0)
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else #else
#define smp_mb() barrier() #define smp_mb() barrier()
#define smp_rmb() barrier() #define smp_rmb() barrier()
#define smp_wmb() barrier() #define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#endif #endif
#define set_mb(var, value) do { xchg(&var, value); } while (0) #define set_mb(var, value) do { xchg(&var, value); } while (0)
......
...@@ -310,11 +310,13 @@ extern void __global_restore_flags(unsigned long flags); ...@@ -310,11 +310,13 @@ extern void __global_restore_flags(unsigned long flags);
#define mb() __asm__ __volatile__ ("" : : : "memory") #define mb() __asm__ __volatile__ ("" : : : "memory")
#define rmb() mb() #define rmb() mb()
#define wmb() mb() #define wmb() mb()
#define read_barrier_depends() do { } while(0)
#define set_mb(__var, __value) do { __var = __value; mb(); } while(0) #define set_mb(__var, __value) do { __var = __value; mb(); } while(0)
#define set_wmb(__var, __value) set_mb(__var, __value) #define set_wmb(__var, __value) set_mb(__var, __value)
#define smp_mb() __asm__ __volatile__("":::"memory"); #define smp_mb() __asm__ __volatile__("":::"memory");
#define smp_rmb() __asm__ __volatile__("":::"memory"); #define smp_rmb() __asm__ __volatile__("":::"memory");
#define smp_wmb() __asm__ __volatile__("":::"memory"); #define smp_wmb() __asm__ __volatile__("":::"memory");
#define smp_read_barrier_depends() do { } while(0)
#define nop() __asm__ __volatile__ ("nop"); #define nop() __asm__ __volatile__ ("nop");
......
...@@ -215,10 +215,12 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old, ...@@ -215,10 +215,12 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
#define smp_mb() mb() #define smp_mb() mb()
#define smp_rmb() rmb() #define smp_rmb() rmb()
#define smp_wmb() wmb() #define smp_wmb() wmb()
#define smp_read_barrier_depends() do {} while(0)
#else #else
#define smp_mb() barrier() #define smp_mb() barrier()
#define smp_rmb() barrier() #define smp_rmb() barrier()
#define smp_wmb() barrier() #define smp_wmb() barrier()
#define smp_read_barrier_depends() do {} while(0)
#endif #endif
...@@ -230,6 +232,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old, ...@@ -230,6 +232,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
#define mb() asm volatile("mfence":::"memory") #define mb() asm volatile("mfence":::"memory")
#define rmb() asm volatile("lfence":::"memory") #define rmb() asm volatile("lfence":::"memory")
#define wmb() asm volatile("sfence":::"memory") #define wmb() asm volatile("sfence":::"memory")
#define read_barrier_depends() do {} while(0)
#define set_mb(var, value) do { xchg(&var, value); } while (0) #define set_mb(var, value) do { xchg(&var, value); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0) #define set_wmb(var, value) do { var = value; wmb(); } while (0)
......
...@@ -200,6 +200,12 @@ struct sysinfo { ...@@ -200,6 +200,12 @@ struct sysinfo {
}; };
#define BUG_ON(condition) do { if (unlikely((condition)!=0)) BUG(); } while(0) #define BUG_ON(condition) do { if (unlikely((condition)!=0)) BUG(); } while(0)
#define WARN_ON(condition) do { \
if (unlikely((condition)!=0)) { \
printk("Badness in %s at %s:%d\n", __FUNCTION__, __FILE__, __LINE__); \
dump_stack(); \
} \
} while (0)
extern void BUILD_BUG(void); extern void BUILD_BUG(void);
#define BUILD_BUG_ON(condition) do { if (condition) BUILD_BUG(); } while(0) #define BUILD_BUG_ON(condition) do { if (condition) BUILD_BUG(); } while(0)
......
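For illustration only (not from the patch): the WARN_ON() macro added above lets a sanity check complain with a backtrace yet keep running, where BUG_ON() would kill the box. struct foo, foo_put() and foo_free() are made up.
struct foo { int refcount; };
extern void foo_free(struct foo *f);
void foo_put(struct foo *f)
{
	WARN_ON(f->refcount <= 0);	/* prints "Badness in foo_put at ..." plus a stack dump */
	if (--f->refcount == 0)
		foo_free(f);
}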
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
#if defined(__KERNEL__) || defined(_LVM_H_INCLUDE) #if defined(__KERNEL__) || defined(_LVM_H_INCLUDE)
#include <linux/prefetch.h> #include <linux/prefetch.h>
#include <asm/system.h>
/* /*
* Simple doubly linked list implementation. * Simple doubly linked list implementation.
...@@ -70,6 +71,49 @@ static inline void list_add_tail(struct list_head *new, struct list_head *head) ...@@ -70,6 +71,49 @@ static inline void list_add_tail(struct list_head *new, struct list_head *head)
__list_add(new, head->prev, head); __list_add(new, head->prev, head);
} }
/*
* Insert a new entry between two known consecutive entries.
*
* This is only for internal list manipulation where we know
* the prev/next entries already!
*/
static __inline__ void __list_add_rcu(struct list_head * new,
struct list_head * prev,
struct list_head * next)
{
new->next = next;
new->prev = prev;
wmb();
next->prev = new;
prev->next = new;
}
/**
* list_add_rcu - add a new entry to rcu-protected list
* @new: new entry to be added
* @head: list head to add it after
*
* Insert a new entry after the specified head.
* This is good for implementing stacks.
*/
static __inline__ void list_add_rcu(struct list_head *new, struct list_head *head)
{
__list_add_rcu(new, head, head->next);
}
/**
* list_add_tail_rcu - add a new entry to rcu-protected list
* @new: new entry to be added
* @head: list head to add it before
*
* Insert a new entry before the specified head.
* This is useful for implementing queues.
*/
static __inline__ void list_add_tail_rcu(struct list_head *new, struct list_head *head)
{
__list_add_rcu(new, head->prev, head);
}
/* /*
* Delete a list entry by making the prev/next entries * Delete a list entry by making the prev/next entries
* point to each other. * point to each other.
...@@ -93,6 +137,17 @@ static inline void list_del(struct list_head *entry) ...@@ -93,6 +137,17 @@ static inline void list_del(struct list_head *entry)
{ {
__list_del(entry->prev, entry->next); __list_del(entry->prev, entry->next);
} }
/**
* list_del_rcu - deletes entry from list without re-initialization
* @entry: the element to delete from the list.
 * Note: list_empty() on entry does not return true after this;
 * the entry is in an undefined state. It is useful for RCU-based
 * lock-free traversal.
*/
static inline void list_del_rcu(struct list_head *entry)
{
__list_del(entry->prev, entry->next);
}
/** /**
* list_del_init - deletes entry from list and reinitialize it. * list_del_init - deletes entry from list and reinitialize it.
...@@ -240,6 +295,30 @@ static inline void list_splice_init(struct list_head *list, ...@@ -240,6 +295,30 @@ static inline void list_splice_init(struct list_head *list,
pos = list_entry(pos->member.next, typeof(*pos), member), \ pos = list_entry(pos->member.next, typeof(*pos), member), \
prefetch(pos->member.next)) prefetch(pos->member.next))
/**
* list_for_each_rcu - iterate over an rcu-protected list
* @pos: the &struct list_head to use as a loop counter.
* @head: the head for your list.
*/
#define list_for_each_rcu(pos, head) \
for (pos = (head)->next, prefetch(pos->next); pos != (head); \
pos = pos->next, ({ read_barrier_depends(); 0;}), prefetch(pos->next))
#define __list_for_each_rcu(pos, head) \
for (pos = (head)->next; pos != (head); \
pos = pos->next, ({ read_barrier_depends(); 0;}))
/**
 * list_for_each_safe_rcu - iterate over an rcu-protected list,
 * safe against removal of list entries
* @pos: the &struct list_head to use as a loop counter.
* @n: another &struct list_head to use as temporary storage
* @head: the head for your list.
*/
#define list_for_each_safe_rcu(pos, n, head) \
for (pos = (head)->next, n = pos->next; pos != (head); \
pos = n, ({ read_barrier_depends(); 0;}), n = pos->next)
#endif /* __KERNEL__ || _LVM_H_INCLUDE */ #endif /* __KERNEL__ || _LVM_H_INCLUDE */
#endif #endif
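A hypothetical use of the new RCU list helpers added above. struct my_entry, my_list, my_add() and my_find() are made-up names, and the sketch is simplified: a real user still serializes writers against each other and defers freeing removed entries (e.g. via the RCU callback machinery).
struct my_entry {
	int			val;
	struct list_head	list;
};
static LIST_HEAD(my_list);
/* Writer side: additions are published with list_add_rcu(); the wmb()
 * inside __list_add_rcu() makes *e visible before the list pointers. */
void my_add(struct my_entry *e)
{
	list_add_rcu(&e->list, &my_list);
}
/* Reader side: lock-free traversal; list_for_each_rcu() issues
 * read_barrier_depends() after each pointer fetch. */
int my_find(int val)
{
	struct list_head *pos;
	list_for_each_rcu(pos, &my_list) {
		struct my_entry *e = list_entry(pos, struct my_entry, list);
		if (e->val == val)
			return 1;
	}
	return 0;
}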