Commit 70d9d825
Authored Nov 05, 2005 by Linus Torvalds
Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6
Parents: 537a95d9, f896424c
Showing 16 changed files with 792 additions and 507 deletions (+792, -507)
Documentation/networking/s2io.txt       +152  -47
MAINTAINERS                             +9    -0
drivers/net/Kconfig                     +1    -12
drivers/net/fec_8xx/Kconfig             +7    -1
drivers/net/fec_8xx/fec_mii.c           +42   -0
drivers/net/fs_enet/fs_enet-main.c      +12   -9
drivers/net/ibm_emac/ibm_emac.h         +21   -1
drivers/net/ibm_emac/ibm_emac_core.c    +11   -9
drivers/net/ibm_emac/ibm_emac_mal.h     +3    -2
drivers/net/ibm_emac/ibm_emac_phy.c     +12   -0
drivers/net/pcnet32.c                   +60   -27
drivers/net/phy/mdio_bus.c              +3    -0
drivers/net/s2io.c                      +408  -354
drivers/net/s2io.h                      +47   -44
drivers/net/wireless/airo.c             +1    -1
include/linux/phy.h                     +3    -0
Documentation/networking/s2io.txt

(old text, removed by this commit)

S2IO Technologies XFrame 10 Gig adapter.
-------------------------------------------

I. Module loadable parameters.
When loaded as a module, the driver provides a host of Module loadable
parameters, so the device can be tuned as per the users needs.
A list of the Module params is given below.
(i)    ring_num: This can be used to program the number of
           receive rings used in the driver.
(ii)   ring_len: This defines the number of descriptors each ring
           can have. There can be a maximum of 8 rings.
(iii)  frame_len: This is an array of size 8. Using this we can
           set the maximum size of the received frame that can
           be steered into the corrsponding receive ring.
(iv)   fifo_num: This defines the number of Tx FIFOs thats used in
           the driver.
(v)    fifo_len: Each element defines the number of
           Tx descriptors that can be associated with each
           corresponding FIFO. There are a maximum of 8 FIFOs.
(vi)   tx_prio: This is a bool, if module is loaded with a non-zero
           value for tx_prio multi FIFO scheme is activated.
(vii)  rx_prio: This is a bool, if module is loaded with a non-zero
           value for tx_prio multi RING scheme is activated.
(viii) latency_timer: The value given against this param will be
           loaded into the latency timer register in PCI Config
           space, else the register is left with its reset value.

II. Performance tuning.
By changing a few sysctl parameters.
Copy the following lines into a file and run the following command,
"sysctl -p <file_name>"
### IPV4 specific settings
net.ipv4.tcp_timestamps = 0 # turns TCP timestamp support off, default 1, reduces CPU use
net.ipv4.tcp_sack = 0 # turn SACK support off, default on
# on systems with a VERY fast bus -> memory interface this is the big gainer
net.ipv4.tcp_rmem = 10000000 10000000 10000000 # sets min/default/max TCP read buffer, default 4096 87380 174760
net.ipv4.tcp_wmem = 10000000 10000000 10000000 # sets min/pressure/max TCP write buffer, default 4096 16384 131072
net.ipv4.tcp_mem = 10000000 10000000 10000000 # sets min/pressure/max TCP buffer space, default 31744 32256 32768
### CORE settings (mostly for socket and UDP effect)
net.core.rmem_max = 524287 # maximum receive socket buffer size, default 131071
net.core.wmem_max = 524287 # maximum send socket buffer size, default 131071
net.core.rmem_default = 524287 # default receive socket buffer size, default 65535
net.core.wmem_default = 524287 # default send socket buffer size, default 65535
net.core.optmem_max = 524287 # maximum amount of option memory buffers, default 10240
net.core.netdev_max_backlog = 300000 # number of unprocessed input packets before kernel starts dropping them, default 300
---End of performance tuning file---

(new text, added by this commit)

Release notes for Neterion's (Formerly S2io) Xframe I/II PCI-X 10GbE driver.

Contents
=======
- 1. Introduction
- 2. Identifying the adapter/interface
- 3. Features supported
- 4. Command line parameters
- 5. Performance suggestions
- 6. Available Downloads

1. Introduction:
This Linux driver supports Neterion's Xframe I PCI-X 1.0 and
Xframe II PCI-X 2.0 adapters. It supports several features
such as jumbo frames, MSI/MSI-X, checksum offloads, TSO, UFO and so on.
See below for complete list of features.
All features are supported for both IPv4 and IPv6.

2. Identifying the adapter/interface:
a. Insert the adapter(s) in your system.
b. Build and load driver
# insmod s2io.ko
c. View log messages
# dmesg | tail -40
You will see messages similar to:
eth3: Neterion Xframe I 10GbE adapter (rev 3), Version 2.0.9.1, Intr type INTA
eth4: Neterion Xframe II 10GbE adapter (rev 2), Version 2.0.9.1, Intr type INTA
eth4: Device is on 64 bit 133MHz PCIX(M1) bus

The above messages identify the adapter type(Xframe I/II), adapter revision,
driver version, interface name(eth3, eth4), Interrupt type(INTA, MSI, MSI-X).
In case of Xframe II, the PCI/PCI-X bus width and frequency are displayed
as well.

To associate an interface with a physical adapter use "ethtool -p <ethX>".
The corresponding adapter's LED will blink multiple times.

3. Features supported:
a. Jumbo frames. Xframe I/II supports MTU upto 9600 bytes,
modifiable using ifconfig command.

b. Offloads. Supports checksum offload(TCP/UDP/IP) on transmit
and receive, TSO.

c. Multi-buffer receive mode. Scattering of packet across multiple
buffers. Currently driver supports 2-buffer mode which yields
significant performance improvement on certain platforms(SGI Altix,
IBM xSeries).
d. MSI/MSI-X. Can be enabled on platforms which support this feature
(IA64, Xeon) resulting in noticeable performance improvement(upto 7%
on certain platforms).
e. NAPI. Compile-time option(CONFIG_S2IO_NAPI) for better Rx interrupt
moderation.
f. Statistics. Comprehensive MAC-level and software statistics displayed
using "ethtool -S" option.
g. Multi-FIFO/Ring. Supports up to 8 transmit queues and receive rings,
with multiple steering options.
4. Command line parameters
a. tx_fifo_num
Number of transmit queues
Valid range: 1-8
Default: 1
b. rx_ring_num
Number of receive rings
Valid range: 1-8
Default: 1
c. tx_fifo_len
Size of each transmit queue
Valid range: Total length of all queues should not exceed 8192
Default: 4096
d. rx_ring_sz
Size of each receive ring(in 4K blocks)
Valid range: Limited by memory on system
Default: 30
e. intr_type
Specifies interrupt type. Possible values 1(INTA), 2(MSI), 3(MSI-X)
Valid range: 1-3
Default: 1
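
These load-time options use the ordinary 2.6 module-parameter mechanism. The
sketch below is illustrative only -- it is not the literal s2io.c source, and
the real declarations may use parameter arrays and different defaults -- but it
shows how a driver typically exposes the parameters named above:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Illustrative sketch only, using the parameter names documented above. */
static int tx_fifo_num = 1;	/* number of transmit queues, valid 1-8 */
static int rx_ring_num = 1;	/* number of receive rings, valid 1-8 */
static int intr_type = 1;	/* 1 = INTA, 2 = MSI, 3 = MSI-X */

module_param(tx_fifo_num, int, 0);
module_param(rx_ring_num, int, 0);
module_param(intr_type, int, 0);
MODULE_PARM_DESC(tx_fifo_num, "Number of transmit queues");
MODULE_PARM_DESC(rx_ring_num, "Number of receive rings");
MODULE_PARM_DESC(intr_type, "Interrupt type (1=INTA, 2=MSI, 3=MSI-X)");

Parameters declared this way are passed on the load command line, for example
"insmod s2io.ko tx_fifo_num=2 intr_type=2".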
5. Performance suggestions
General:
a. Set MTU to maximum(9000 for switch setup, 9600 in back-to-back configuration)
b. Set TCP windows size to optimal value.
For instance, for MTU=1500 a value of 210K has been observed to result in
good performance.
# sysctl -w net.ipv4.tcp_rmem="210000 210000 210000"
# sysctl -w net.ipv4.tcp_wmem="210000 210000 210000"
For MTU=9000, TCP window size of 10 MB is recommended.
# sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"
# sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"
Transmit performance:
a. By default, the driver respects BIOS settings for PCI bus parameters.
However, you may want to experiment with PCI bus parameters
max-split-transactions(MOST) and MMRBC (use setpci command).
A MOST value of 2 has been found optimal for Opterons and 3 for Itanium.
It could be different for your hardware.
Set MMRBC to 4K**.
For example you can set
For opteron
#setpci -d 17d5:* 62=1d
For Itanium
#setpci -d 17d5:* 62=3d
For detailed description of the PCI registers, please see Xframe User Guide.
b. Ensure Transmit Checksum offload is enabled. Use ethtool to set/verify this
parameter.
c. Turn on TSO(using "ethtool -K")
# ethtool -K <ethX> tso on
Receive performance:
a. By default, the driver respects BIOS settings for PCI bus parameters.
However, you may want to set PCI latency timer to 248.
#setpci -d 17d5:* LATENCY_TIMER=f8
For detailed description of the PCI registers, please see Xframe User Guide.
b. Use 2-buffer mode. This results in large performance boost on
certain platforms(eg. SGI Altix, IBM xSeries).
c. Ensure Receive Checksum offload is enabled. Use "ethtool -K ethX" command to
set/verify this option.
d. Enable NAPI feature(in kernel configuration Device Drivers ---> Network
device support ---> Ethernet (10000 Mbit) ---> S2IO 10Gbe Xframe NIC) to
bring down CPU utilization.
** For AMD opteron platforms with 8131 chipset, MMRBC=1 and MOST=1 are
recommended as safe parameters.
For more information, please review the AMD8131 errata at
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/26310.pdf
6. Available Downloads
Neterion "s2io" driver in Red Hat and Suse 2.6-based distributions is kept up
to date, also the latest "s2io" code (including support for 2.4 kernels) is
available via "Support" link on the Neterion site: http://www.neterion.com.
For Xframe User Guide (Programming manual), visit ftp site ns1.s2io.com,
user: linuxdocs password: HALdocs
7. Support
For further support please contact either your 10GbE Xframe NIC vendor (IBM,
HP, SGI etc.) or click on the "Support" link on the Neterion site:
http://www.neterion.com.
MAINTAINERS
...
@@ -910,6 +910,15 @@ L: linux-fbdev-devel@lists.sourceforge.net
 W:	http://linux-fbdev.sourceforge.net/
 S:	Maintained

+FREESCALE SOC FS_ENET DRIVER
+P:	Pantelis Antoniou
+M:	pantelis.antoniou@gmail.com
+P:	Vitaly Bordug
+M:	vbordug@ru.mvista.com
+L:	linuxppc-embedded@ozlabs.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+
 FILE LOCKING (flock() and fcntl()/lockf())
 P:	Matthew Wilcox
 M:	matthew@wil.cx
...
drivers/net/Kconfig
...
@@ -1203,7 +1203,7 @@ config IBM_EMAC_RX_SKB_HEADROOM
 config IBM_EMAC_PHY_RX_CLK_FIX
 	bool "PHY Rx clock workaround"
-	depends on IBM_EMAC && (405EP || 440GX || 440EP)
+	depends on IBM_EMAC && (405EP || 440GX || 440EP || 440GR)
 	help
 	  Enable this if EMAC attached to a PHY which doesn't generate
 	  RX clock if there is no link, if this is the case, you will
...
@@ -2258,17 +2258,6 @@ config S2IO_NAPI
 	  If in doubt, say N.

-config 2BUFF_MODE
-	bool "Use 2 Buffer Mode on Rx side."
-	depends on S2IO
-	---help---
-	On enabling the 2 buffer mode, the received frame will be
-	split into 2 parts before being DMA'ed to the hosts memory.
-	The parts are the ethernet header and ethernet payload.
-	This is useful on systems where DMA'ing to to unaligned
-	physical memory loactions comes with a heavy price.
-	If not sure please say N.
-
 endmenu

 if !UML
...
drivers/net/fec_8xx/Kconfig

 config FEC_8XX
 	tristate "Motorola 8xx FEC driver"
-	depends on NET_ETHERNET && 8xx && (NETTA || NETPHONE)
+	depends on NET_ETHERNET
 	select MII

 config FEC_8XX_GENERIC_PHY
...
@@ -12,3 +12,9 @@ config FEC_8XX_DM9161_PHY
 	bool "Support DM9161 PHY"
 	depends on FEC_8XX
 	default n
+
+config FEC_8XX_LXT971_PHY
+	bool "Support LXT971/LXT972 PHY"
+	depends on FEC_8XX
+	default n
drivers/net/fec_8xx/fec_mii.c
...
@@ -203,6 +203,39 @@ static void dm9161_shutdown(struct net_device *dev)

 #endif

+#ifdef CONFIG_FEC_8XX_LXT971_PHY
+
+/* Support for LXT971/972 PHY */
+
+#define MII_LXT971_PCR		16	/* Port Control Register */
+#define MII_LXT971_SR2		17	/* Status Register 2 */
+#define MII_LXT971_IER		18	/* Interrupt Enable Register */
+#define MII_LXT971_ISR		19	/* Interrupt Status Register */
+#define MII_LXT971_LCR		20	/* LED Control Register */
+#define MII_LXT971_TCR		30	/* Transmit Control Register */
+
+static void lxt971_startup(struct net_device *dev)
+{
+	struct fec_enet_private *fep = netdev_priv(dev);
+
+	fec_mii_write(dev, fep->mii_if.phy_id, MII_LXT971_IER, 0x00F2);
+}
+
+static void lxt971_ack_int(struct net_device *dev)
+{
+	struct fec_enet_private *fep = netdev_priv(dev);
+
+	fec_mii_read(dev, fep->mii_if.phy_id, MII_LXT971_ISR);
+}
+
+static void lxt971_shutdown(struct net_device *dev)
+{
+	struct fec_enet_private *fep = netdev_priv(dev);
+
+	fec_mii_write(dev, fep->mii_if.phy_id, MII_LXT971_IER, 0x0000);
+}
+#endif
+
 /**********************************************************************************/

 static const struct phy_info phy_info[] = {
...
@@ -215,6 +248,15 @@ static const struct phy_info phy_info[] = {
 		.shutdown	= dm9161_shutdown,
 	},
 #endif
+#ifdef CONFIG_FEC_8XX_LXT971_PHY
+	{
+		.id = 0x0001378e,
+		.name = "LXT971/972",
+		.startup = lxt971_startup,
+		.ack_int = lxt971_ack_int,
+		.shutdown = lxt971_shutdown,
+	},
+#endif
 #ifdef CONFIG_FEC_8XX_GENERIC_PHY
 	{
 		.id = 0,
...
drivers/net/fs_enet/fs_enet-main.c
...
@@ -130,7 +130,7 @@ static int fs_enet_rx_napi(struct net_device *dev, int *budget)
 		skb = fep->rx_skbuff[curidx];

-		dma_unmap_single(fep->dev, skb->data,
+		dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
 			L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
 			DMA_FROM_DEVICE);
...
@@ -144,7 +144,7 @@ static int fs_enet_rx_napi(struct net_device *dev, int *budget)
 		skb = fep->rx_skbuff[curidx];

-		dma_unmap_single(fep->dev, skb->data,
+		dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
 			L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
 			DMA_FROM_DEVICE);
...
@@ -268,7 +268,7 @@ static int fs_enet_rx_non_napi(struct net_device *dev)
 		skb = fep->rx_skbuff[curidx];

-		dma_unmap_single(fep->dev, skb->data,
+		dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
 			L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
 			DMA_FROM_DEVICE);
...
@@ -278,7 +278,7 @@ static int fs_enet_rx_non_napi(struct net_device *dev)
 		skb = fep->rx_skbuff[curidx];

-		dma_unmap_single(fep->dev, skb->data,
+		dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
 			L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
 			DMA_FROM_DEVICE);
...
@@ -399,7 +399,8 @@ static void fs_enet_tx(struct net_device *dev)
 			fep->stats.collisions++;

 		/* unmap */
-		dma_unmap_single(fep->dev, skb->data, skb->len, DMA_TO_DEVICE);
+		dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
+				skb->len, DMA_TO_DEVICE);

 		/*
 		 * Free the sk buffer associated with this last transmit.
...
@@ -547,17 +548,19 @@ void fs_cleanup_bds(struct net_device *dev)
 {
 	struct fs_enet_private *fep = netdev_priv(dev);
 	struct sk_buff *skb;
+	cbd_t *bdp;
 	int i;

 	/*
 	 * Reset SKB transmit buffers.
 	 */
-	for (i = 0; i < fep->tx_ring; i++) {
+	for (i = 0, bdp = fep->tx_bd_base; i < fep->tx_ring; i++, bdp++) {
 		if ((skb = fep->tx_skbuff[i]) == NULL)
 			continue;

 		/* unmap */
-		dma_unmap_single(fep->dev, skb->data, skb->len, DMA_TO_DEVICE);
+		dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
+				skb->len, DMA_TO_DEVICE);

 		fep->tx_skbuff[i] = NULL;
 		dev_kfree_skb(skb);
...
@@ -566,12 +569,12 @@ void fs_cleanup_bds(struct net_device *dev)
 	/*
 	 * Reset SKB receive buffers
 	 */
-	for (i = 0; i < fep->rx_ring; i++) {
+	for (i = 0, bdp = fep->rx_bd_base; i < fep->rx_ring; i++, bdp++) {
 		if ((skb = fep->rx_skbuff[i]) == NULL)
 			continue;

 		/* unmap */
-		dma_unmap_single(fep->dev, skb->data,
+		dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
 			L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
 			DMA_FROM_DEVICE);
...
drivers/net/ibm_emac/ibm_emac.h
...
@@ -26,7 +26,8 @@
 /* This is a simple check to prevent use of this driver on non-tested SoCs */
 #if !defined(CONFIG_405GP) && !defined(CONFIG_405GPR) && !defined(CONFIG_405EP) && \
     !defined(CONFIG_440GP) && !defined(CONFIG_440GX) && !defined(CONFIG_440SP) && \
-    !defined(CONFIG_440EP) && !defined(CONFIG_NP405H)
+    !defined(CONFIG_440EP) && !defined(CONFIG_NP405H) && !defined(CONFIG_440SPE) && \
+    !defined(CONFIG_440GR)
 #error "Unknown SoC. Please, check chip user manual and make sure EMAC defines are OK"
 #endif
...
@@ -246,6 +247,25 @@ struct emac_regs {
 #define EMAC_STACR_PCDA_SHIFT		5
 #define EMAC_STACR_PRA_MASK		0x1f

+/*
+ * For the 440SPe, AMCC inexplicably changed the polarity of
+ * the "operation complete" bit in the MII control register.
+ */
+#if defined(CONFIG_440SPE)
+static inline int emac_phy_done(u32 stacr)
+{
+	return !(stacr & EMAC_STACR_OC);
+};
+#define EMAC_STACR_START		EMAC_STACR_OC
+#else /* CONFIG_440SPE */
+static inline int emac_phy_done(u32 stacr)
+{
+	return stacr & EMAC_STACR_OC;
+};
+#define EMAC_STACR_START		0
+#endif /* !CONFIG_440SPE */
+
 /* EMACx_TRTR */
 #if !defined(CONFIG_IBM_EMAC4)
 #define EMAC_TRTR_SHIFT			27
...
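The emac_phy_done()/EMAC_STACR_START pair added above is consumed by the MDIO
polling loops in ibm_emac_core.c, changed later in this same commit. The sketch
below condenses that usage; the helper name emac_wait_phy_done and the -1
return value are illustrative only (the real code jumps to a timeout label):

/* Sketch of the polling pattern from __emac_mdio_read()/__emac_mdio_write():
 * emac_phy_done() hides whether the STACR "operation complete" bit is
 * active-high (most SoCs) or inverted (440SPe), so one loop serves both. */
static int emac_wait_phy_done(struct emac_regs *p)
{
	int n = 10;

	while (!emac_phy_done(in_be32(&p->stacr))) {
		udelay(1);
		if (!--n)
			return -1;	/* timed out */
	}
	return 0;
}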
drivers/net/ibm_emac/ibm_emac_core.c
...
@@ -87,10 +87,11 @@ MODULE_LICENSE("GPL");
  */
 static u32 busy_phy_map;

-#if defined(CONFIG_IBM_EMAC_PHY_RX_CLK_FIX) && (defined(CONFIG_405EP) || defined(CONFIG_440EP))
+#if defined(CONFIG_IBM_EMAC_PHY_RX_CLK_FIX) && \
+    (defined(CONFIG_405EP) || defined(CONFIG_440EP) || defined(CONFIG_440GR))
 /* 405EP has "EMAC to PHY Control Register" (CPC0_EPCTL) which can help us
  * with PHY RX clock problem.
- * 440EP has more sane SDR0_MFR register implementation than 440GX, which
+ * 440EP/440GR has more sane SDR0_MFR register implementation than 440GX, which
  * also allows controlling each EMAC clock
  */
 static inline void EMAC_RX_CLK_TX(int idx)
...
@@ -100,7 +101,7 @@ static inline void EMAC_RX_CLK_TX(int idx)
 #if defined(CONFIG_405EP)
 	mtdcr(0xf3, mfdcr(0xf3) | (1 << idx));
-#else /* CONFIG_440EP */
+#else /* CONFIG_440EP || CONFIG_440GR */
 	SDR_WRITE(DCRN_SDR_MFR, SDR_READ(DCRN_SDR_MFR) | (0x08000000 >> idx));
 #endif
...
@@ -546,7 +547,7 @@ static int __emac_mdio_read(struct ocp_enet_private *dev, u8 id, u8 reg)
 	/* Wait for management interface to become idle */
 	n = 10;
-	while (!(in_be32(&p->stacr) & EMAC_STACR_OC)) {
+	while (!emac_phy_done(in_be32(&p->stacr))) {
 		udelay(1);
 		if (!--n)
 			goto to;
...
@@ -556,11 +557,12 @@ static int __emac_mdio_read(struct ocp_enet_private *dev, u8 id, u8 reg)
 	out_be32(&p->stacr,
 		 EMAC_STACR_BASE(emac_opb_mhz()) | EMAC_STACR_STAC_READ |
 		 (reg & EMAC_STACR_PRA_MASK)
-		 | ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT));
+		 | ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT)
+		 | EMAC_STACR_START);

 	/* Wait for read to complete */
 	n = 100;
-	while (!((r = in_be32(&p->stacr)) & EMAC_STACR_OC)) {
+	while (!emac_phy_done(r = in_be32(&p->stacr))) {
 		udelay(1);
 		if (!--n)
 			goto to;
...
@@ -594,7 +596,7 @@ static void __emac_mdio_write(struct ocp_enet_private *dev, u8 id, u8 reg,
 	/* Wait for management interface to be idle */
 	n = 10;
-	while (!(in_be32(&p->stacr) & EMAC_STACR_OC)) {
+	while (!emac_phy_done(in_be32(&p->stacr))) {
 		udelay(1);
 		if (!--n)
 			goto to;
...
@@ -605,11 +607,11 @@ static void __emac_mdio_write(struct ocp_enet_private *dev, u8 id, u8 reg,
 		 EMAC_STACR_BASE(emac_opb_mhz()) | EMAC_STACR_STAC_WRITE |
 		 (reg & EMAC_STACR_PRA_MASK) |
 		 ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT) |
-		 (val << EMAC_STACR_PHYD_SHIFT));
+		 (val << EMAC_STACR_PHYD_SHIFT) | EMAC_STACR_START);

 	/* Wait for write to complete */
 	n = 100;
-	while (!(in_be32(&p->stacr) & EMAC_STACR_OC)) {
+	while (!emac_phy_done(in_be32(&p->stacr))) {
 		udelay(1);
 		if (!--n)
 			goto to;
...
drivers/net/ibm_emac/ibm_emac_mal.h
...
@@ -32,9 +32,10 @@
  * reflect the fact that 40x and 44x have slightly different MALs. --ebs
  */
 #if defined(CONFIG_405GP) || defined(CONFIG_405GPR) || defined(CONFIG_405EP) || \
-    defined(CONFIG_440EP) || defined(CONFIG_NP405H)
+    defined(CONFIG_440EP) || defined(CONFIG_440GR) || defined(CONFIG_NP405H)
 #define MAL_VERSION		1
-#elif defined(CONFIG_440GP) || defined(CONFIG_440GX) || defined(CONFIG_440SP)
+#elif defined(CONFIG_440GP) || defined(CONFIG_440GX) || defined(CONFIG_440SP) || \
+      defined(CONFIG_440SPE)
 #define MAL_VERSION		2
 #else
 #error "Unknown SoC, please check chip manual and choose MAL 'version'"
...
drivers/net/ibm_emac/ibm_emac_phy.c
...
@@ -236,12 +236,16 @@ static struct mii_phy_def genmii_phy_def = {
 };

 /* CIS8201 */
+#define MII_CIS8201_10BTCSR	0x16
+#define TENBTCSR_ECHO_DISABLE	0x2000
 #define MII_CIS8201_EPCR	0x17
 #define EPCR_MODE_MASK		0x3000
 #define EPCR_GMII_MODE		0x0000
 #define EPCR_RGMII_MODE		0x1000
 #define EPCR_TBI_MODE		0x2000
 #define EPCR_RTBI_MODE		0x3000
+#define MII_CIS8201_ACSR	0x1c
+#define ACSR_PIN_PRIO_SELECT	0x0004

 static int cis8201_init(struct mii_phy *phy)
 {
...
@@ -270,6 +274,14 @@ static int cis8201_init(struct mii_phy *phy)
 	phy_write(phy, MII_CIS8201_EPCR, epcr);

+	/* MII regs override strap pins */
+	phy_write(phy, MII_CIS8201_ACSR,
+		  phy_read(phy, MII_CIS8201_ACSR) | ACSR_PIN_PRIO_SELECT);
+
+	/* Disable TX_EN -> CRS echo mode, otherwise 10/HDX doesn't work */
+	phy_write(phy, MII_CIS8201_10BTCSR,
+		  phy_read(phy, MII_CIS8201_10BTCSR) | TENBTCSR_ECHO_DISABLE);
+
 	return 0;
 }
...
drivers/net/pcnet32.c
...
@@ -22,8 +22,8 @@
 #define DRV_NAME	"pcnet32"
-#define DRV_VERSION	"1.31a"
-#define DRV_RELDATE	"12.Sep.2005"
+#define DRV_VERSION	"1.31c"
+#define DRV_RELDATE	"01.Nov.2005"
 #define PFX		DRV_NAME ": "

 static const char *version =
...
@@ -260,6 +260,11 @@ static int homepna[MAX_UNITS];
 * v1.31   02 Sep 2005 Hubert WS Lin <wslin@tw.ibm.c0m> added set_ringparam().
 * v1.31a  12 Sep 2005 Hubert WS Lin <wslin@tw.ibm.c0m> set min ring size to 4
 *	   to allow loopback test to work unchanged.
+ * v1.31b  06 Oct 2005 Don Fry changed alloc_ring to show name of device
+ *	   if allocation fails
+ * v1.31c  01 Nov 2005 Don Fry Allied Telesyn 2700/2701 FX are 100Mbit only.
+ *	   Force 100Mbit FD if Auto (ASEL) is selected.
+ *	   See Bugzilla 2669 and 4551.
 */
...
@@ -408,7 +413,7 @@ static int pcnet32_get_regs_len(struct net_device *dev);
 static void pcnet32_get_regs(struct net_device *dev, struct ethtool_regs *regs,
			      void *ptr);
 static void pcnet32_purge_tx_ring(struct net_device *dev);
-static int pcnet32_alloc_ring(struct net_device *dev);
+static int pcnet32_alloc_ring(struct net_device *dev, char *name);
 static void pcnet32_free_ring(struct net_device *dev);
...
@@ -669,15 +674,17 @@ static int pcnet32_set_ringparam(struct net_device *dev, struct ethtool_ringpara
     lp->rx_mod_mask = lp->rx_ring_size - 1;
     lp->rx_len_bits = (i << 4);
-    if (pcnet32_alloc_ring(dev)) {
+    if (pcnet32_alloc_ring(dev, dev->name)) {
	pcnet32_free_ring(dev);
+	spin_unlock_irqrestore(&lp->lock, flags);
	return -ENOMEM;
     }
     spin_unlock_irqrestore(&lp->lock, flags);
     if (pcnet32_debug & NETIF_MSG_DRV)
-	printk(KERN_INFO PFX "Ring Param Settings: RX: %d, TX: %d\n",
-	       lp->rx_ring_size, lp->tx_ring_size);
+	printk(KERN_INFO PFX "%s: Ring Param Settings: RX: %d, TX: %d\n",
+	       dev->name, lp->rx_ring_size, lp->tx_ring_size);
     if (netif_running(dev))
	pcnet32_open(dev);
...
@@ -981,7 +988,11 @@ static void pcnet32_get_regs(struct net_device *dev, struct ethtool_regs *regs,
     *buff++ = a->read_csr(ioaddr, 114);

     /* read bus configuration registers */
-    for (i = 0; i < 36; i++) {
+    for (i = 0; i < 30; i++) {
+	*buff++ = a->read_bcr(ioaddr, i);
+    }
+    *buff++ = 0;	/* skip bcr30 so as not to hang 79C976 */
+    for (i = 31; i < 36; i++) {
	*buff++ = a->read_bcr(ioaddr, i);
     }
...
@@ -1340,7 +1351,8 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
     }
     lp->a = *a;
-    if (pcnet32_alloc_ring(dev)) {
+    /* prior to register_netdev, dev->name is not yet correct */
+    if (pcnet32_alloc_ring(dev, pci_name(lp->pci_dev))) {
	ret = -ENOMEM;
	goto err_free_ring;
     }
...
@@ -1448,48 +1460,63 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
 }

-static int pcnet32_alloc_ring(struct net_device *dev)
+/* if any allocation fails, caller must also call pcnet32_free_ring */
+static int pcnet32_alloc_ring(struct net_device *dev, char *name)
 {
     struct pcnet32_private *lp = dev->priv;

-    if ((lp->tx_ring = pci_alloc_consistent(lp->pci_dev,
-	    sizeof(struct pcnet32_tx_head) * lp->tx_ring_size,
-	    &lp->tx_ring_dma_addr)) == NULL) {
+    lp->tx_ring = pci_alloc_consistent(lp->pci_dev,
+	    sizeof(struct pcnet32_tx_head) * lp->tx_ring_size,
+	    &lp->tx_ring_dma_addr);
+    if (lp->tx_ring == NULL) {
	if (pcnet32_debug & NETIF_MSG_DRV)
-	    printk(KERN_ERR PFX "Consistent memory allocation failed.\n");
+	    printk("\n" KERN_ERR PFX "%s: Consistent memory allocation failed.\n",
		   name);
	return -ENOMEM;
     }

-    if ((lp->rx_ring = pci_alloc_consistent(lp->pci_dev,
-	    sizeof(struct pcnet32_rx_head) * lp->rx_ring_size,
-	    &lp->rx_ring_dma_addr)) == NULL) {
+    lp->rx_ring = pci_alloc_consistent(lp->pci_dev,
+	    sizeof(struct pcnet32_rx_head) * lp->rx_ring_size,
+	    &lp->rx_ring_dma_addr);
+    if (lp->rx_ring == NULL) {
	if (pcnet32_debug & NETIF_MSG_DRV)
-	    printk(KERN_ERR PFX "Consistent memory allocation failed.\n");
+	    printk("\n" KERN_ERR PFX "%s: Consistent memory allocation failed.\n",
		   name);
	return -ENOMEM;
     }

-    if (!(lp->tx_dma_addr = kmalloc(sizeof(dma_addr_t) * lp->tx_ring_size,
-	    GFP_ATOMIC))) {
+    lp->tx_dma_addr = kmalloc(sizeof(dma_addr_t) * lp->tx_ring_size, GFP_ATOMIC);
+    if (!lp->tx_dma_addr) {
	if (pcnet32_debug & NETIF_MSG_DRV)
-	    printk(KERN_ERR PFX "Memory allocation failed.\n");
+	    printk("\n" KERN_ERR PFX "%s: Memory allocation failed.\n", name);
	return -ENOMEM;
     }
     memset(lp->tx_dma_addr, 0, sizeof(dma_addr_t) * lp->tx_ring_size);

     (the rx_dma_addr, tx_skbuff and rx_skbuff kmalloc allocations are converted
     to the same assign-then-test pattern, each printing "%s: Memory allocation
     failed." with the passed-in name, followed by their memset calls)
...
@@ -1592,12 +1619,18 @@ pcnet32_open(struct net_device *dev)
	val |= 0x10;
     lp->a.write_csr(ioaddr, 124, val);

-    /* Allied Telesyn AT 2700/2701 FX looses the link, so skip that */
+    /* Allied Telesyn AT 2700/2701 FX are 100Mbit only and do not negotiate */
     if (lp->pci_dev->subsystem_vendor == PCI_VENDOR_ID_AT &&
	(lp->pci_dev->subsystem_device == PCI_SUBDEVICE_ID_AT_2700FX ||
	 lp->pci_dev->subsystem_device == PCI_SUBDEVICE_ID_AT_2701FX)) {
-	printk(KERN_DEBUG "%s: Skipping PHY selection.\n", dev->name);
-    } else {
+	if (lp->options & PCNET32_PORT_ASEL) {
+	    lp->options = PCNET32_PORT_FD | PCNET32_PORT_100;
+	    if (netif_msg_link(lp))
+		printk(KERN_DEBUG "%s: Setting 100Mb-Full Duplex.\n",
		       dev->name);
+	}
+    }
+    {
	/*
	 * 24 Jun 2004 according AMD, in order to change the PHY,
	 * DANAS (or DISPM for 79C976) must be set; then select the speed,
...
drivers/net/phy/mdio_bus.c
...
@@ -61,6 +61,9 @@ int mdiobus_register(struct mii_bus *bus)
 	for (i = 0; i < PHY_MAX_ADDR; i++) {
 		struct phy_device *phydev;

+		if (bus->phy_mask & (1 << i))
+			continue;
+
 		phydev = get_phy_device(bus, i);

 		if (IS_ERR(phydev))
...
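The new phy_mask test lets platform code keep mdiobus_register() away from MDIO
addresses that must not be probed. A minimal sketch of how a board setup path
might use it is given below; the helper name and the chosen mask value are
illustrative, and the bus is assumed to have been filled in with its read/write
callbacks already:

#include <linux/phy.h>

/* Illustrative: probe only PHY address 0; every set bit in phy_mask makes
 * mdiobus_register() skip that address instead of calling get_phy_device(). */
static int board_register_mdio_bus(struct mii_bus *bus)
{
	bus->phy_mask = ~(u32)0x1;
	return mdiobus_register(bus);
}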
drivers/net/s2io.c
View file @
70d9d825
...
@@ -30,6 +30,8 @@
...
@@ -30,6 +30,8 @@
* in the driver.
* in the driver.
* rx_ring_sz: This defines the number of descriptors each ring can have. This
* rx_ring_sz: This defines the number of descriptors each ring can have. This
* is also an array of size 8.
* is also an array of size 8.
* rx_ring_mode: This defines the operation mode of all 8 rings. The valid
* values are 1, 2 and 3.
* tx_fifo_num: This defines the number of Tx FIFOs thats used int the driver.
* tx_fifo_num: This defines the number of Tx FIFOs thats used int the driver.
* tx_fifo_len: This too is an array of 8. Each element defines the number of
* tx_fifo_len: This too is an array of 8. Each element defines the number of
* Tx descriptors that can be associated with each corresponding FIFO.
* Tx descriptors that can be associated with each corresponding FIFO.
...
@@ -65,12 +67,15 @@
...
@@ -65,12 +67,15 @@
#include "s2io.h"
#include "s2io.h"
#include "s2io-regs.h"
#include "s2io-regs.h"
#define DRV_VERSION "Version 2.0.9.
1
"
#define DRV_VERSION "Version 2.0.9.
3
"
/* S2io Driver name & version. */
/* S2io Driver name & version. */
static
char
s2io_driver_name
[]
=
"Neterion"
;
static
char
s2io_driver_name
[]
=
"Neterion"
;
static
char
s2io_driver_version
[]
=
DRV_VERSION
;
static
char
s2io_driver_version
[]
=
DRV_VERSION
;
int
rxd_size
[
4
]
=
{
32
,
48
,
48
,
64
};
int
rxd_count
[
4
]
=
{
127
,
85
,
85
,
63
};
static
inline
int
RXD_IS_UP2DT
(
RxD_t
*
rxdp
)
static
inline
int
RXD_IS_UP2DT
(
RxD_t
*
rxdp
)
{
{
int
ret
;
int
ret
;
...
@@ -104,7 +109,7 @@ static inline int rx_buffer_level(nic_t * sp, int rxb_size, int ring)
...
@@ -104,7 +109,7 @@ static inline int rx_buffer_level(nic_t * sp, int rxb_size, int ring)
mac_control
=
&
sp
->
mac_control
;
mac_control
=
&
sp
->
mac_control
;
if
((
mac_control
->
rings
[
ring
].
pkt_cnt
-
rxb_size
)
>
16
)
{
if
((
mac_control
->
rings
[
ring
].
pkt_cnt
-
rxb_size
)
>
16
)
{
level
=
LOW
;
level
=
LOW
;
if
(
rxb_size
<=
MAX_RXDS_PER_BLOCK
)
{
if
(
rxb_size
<=
rxd_count
[
sp
->
rxd_mode
]
)
{
level
=
PANIC
;
level
=
PANIC
;
}
}
}
}
...
@@ -296,6 +301,7 @@ static unsigned int rx_ring_sz[MAX_RX_RINGS] =
...
@@ -296,6 +301,7 @@ static unsigned int rx_ring_sz[MAX_RX_RINGS] =
{[
0
...(
MAX_RX_RINGS
-
1
)]
=
0
};
{[
0
...(
MAX_RX_RINGS
-
1
)]
=
0
};
static
unsigned
int
rts_frm_len
[
MAX_RX_RINGS
]
=
static
unsigned
int
rts_frm_len
[
MAX_RX_RINGS
]
=
{[
0
...(
MAX_RX_RINGS
-
1
)]
=
0
};
{[
0
...(
MAX_RX_RINGS
-
1
)]
=
0
};
static
unsigned
int
rx_ring_mode
=
1
;
static
unsigned
int
use_continuous_tx_intrs
=
1
;
static
unsigned
int
use_continuous_tx_intrs
=
1
;
static
unsigned
int
rmac_pause_time
=
65535
;
static
unsigned
int
rmac_pause_time
=
65535
;
static
unsigned
int
mc_pause_threshold_q0q3
=
187
;
static
unsigned
int
mc_pause_threshold_q0q3
=
187
;
...
@@ -304,6 +310,7 @@ static unsigned int shared_splits;
...
@@ -304,6 +310,7 @@ static unsigned int shared_splits;
static
unsigned
int
tmac_util_period
=
5
;
static
unsigned
int
tmac_util_period
=
5
;
static
unsigned
int
rmac_util_period
=
5
;
static
unsigned
int
rmac_util_period
=
5
;
static
unsigned
int
bimodal
=
0
;
static
unsigned
int
bimodal
=
0
;
static
unsigned
int
l3l4hdr_size
=
128
;
#ifndef CONFIG_S2IO_NAPI
#ifndef CONFIG_S2IO_NAPI
static
unsigned
int
indicate_max_pkts
;
static
unsigned
int
indicate_max_pkts
;
#endif
#endif
...
@@ -357,10 +364,8 @@ static int init_shared_mem(struct s2io_nic *nic)
...
@@ -357,10 +364,8 @@ static int init_shared_mem(struct s2io_nic *nic)
int
i
,
j
,
blk_cnt
,
rx_sz
,
tx_sz
;
int
i
,
j
,
blk_cnt
,
rx_sz
,
tx_sz
;
int
lst_size
,
lst_per_page
;
int
lst_size
,
lst_per_page
;
struct
net_device
*
dev
=
nic
->
dev
;
struct
net_device
*
dev
=
nic
->
dev
;
#ifdef CONFIG_2BUFF_MODE
unsigned
long
tmp
;
unsigned
long
tmp
;
buffAdd_t
*
ba
;
buffAdd_t
*
ba
;
#endif
mac_info_t
*
mac_control
;
mac_info_t
*
mac_control
;
struct
config_param
*
config
;
struct
config_param
*
config
;
...
@@ -458,7 +463,8 @@ static int init_shared_mem(struct s2io_nic *nic)
...
@@ -458,7 +463,8 @@ static int init_shared_mem(struct s2io_nic *nic)
/* Allocation and initialization of RXDs in Rings */
/* Allocation and initialization of RXDs in Rings */
size
=
0
;
size
=
0
;
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
if
(
config
->
rx_cfg
[
i
].
num_rxd
%
(
MAX_RXDS_PER_BLOCK
+
1
))
{
if
(
config
->
rx_cfg
[
i
].
num_rxd
%
(
rxd_count
[
nic
->
rxd_mode
]
+
1
))
{
DBG_PRINT
(
ERR_DBG
,
"%s: RxD count of "
,
dev
->
name
);
DBG_PRINT
(
ERR_DBG
,
"%s: RxD count of "
,
dev
->
name
);
DBG_PRINT
(
ERR_DBG
,
"Ring%d is not a multiple of "
,
DBG_PRINT
(
ERR_DBG
,
"Ring%d is not a multiple of "
,
i
);
i
);
...
@@ -467,11 +473,15 @@ static int init_shared_mem(struct s2io_nic *nic)
...
@@ -467,11 +473,15 @@ static int init_shared_mem(struct s2io_nic *nic)
}
}
size
+=
config
->
rx_cfg
[
i
].
num_rxd
;
size
+=
config
->
rx_cfg
[
i
].
num_rxd
;
mac_control
->
rings
[
i
].
block_count
=
mac_control
->
rings
[
i
].
block_count
=
config
->
rx_cfg
[
i
].
num_rxd
/
(
MAX_RXDS_PER_BLOCK
+
1
);
config
->
rx_cfg
[
i
].
num_rxd
/
mac_control
->
rings
[
i
].
pkt_cnt
=
(
rxd_count
[
nic
->
rxd_mode
]
+
1
);
config
->
rx_cfg
[
i
].
num_rxd
-
mac_control
->
rings
[
i
].
block_count
;
mac_control
->
rings
[
i
].
pkt_cnt
=
config
->
rx_cfg
[
i
].
num_rxd
-
mac_control
->
rings
[
i
].
block_count
;
}
}
size
=
(
size
*
(
sizeof
(
RxD_t
)));
if
(
nic
->
rxd_mode
==
RXD_MODE_1
)
size
=
(
size
*
(
sizeof
(
RxD1_t
)));
else
size
=
(
size
*
(
sizeof
(
RxD3_t
)));
rx_sz
=
size
;
rx_sz
=
size
;
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
...
@@ -486,15 +496,15 @@ static int init_shared_mem(struct s2io_nic *nic)
...
@@ -486,15 +496,15 @@ static int init_shared_mem(struct s2io_nic *nic)
mac_control
->
rings
[
i
].
nic
=
nic
;
mac_control
->
rings
[
i
].
nic
=
nic
;
mac_control
->
rings
[
i
].
ring_no
=
i
;
mac_control
->
rings
[
i
].
ring_no
=
i
;
blk_cnt
=
blk_cnt
=
config
->
rx_cfg
[
i
].
num_rxd
/
config
->
rx_cfg
[
i
].
num_rxd
/
(
MAX_RXDS_PER_BLOCK
+
1
);
(
rxd_count
[
nic
->
rxd_mode
]
+
1
);
/* Allocating all the Rx blocks */
/* Allocating all the Rx blocks */
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
#ifndef CONFIG_2BUFF_MODE
rx_block_info_t
*
rx_blocks
;
size
=
(
MAX_RXDS_PER_BLOCK
+
1
)
*
(
sizeof
(
RxD_t
))
;
int
l
;
#else
size
=
SIZE_OF_BLOCK
;
rx_blocks
=
&
mac_control
->
rings
[
i
].
rx_blocks
[
j
]
;
#endif
size
=
SIZE_OF_BLOCK
;
//size is always page size
tmp_v_addr
=
pci_alloc_consistent
(
nic
->
pdev
,
size
,
tmp_v_addr
=
pci_alloc_consistent
(
nic
->
pdev
,
size
,
&
tmp_p_addr
);
&
tmp_p_addr
);
if
(
tmp_v_addr
==
NULL
)
{
if
(
tmp_v_addr
==
NULL
)
{
...
@@ -504,11 +514,24 @@ static int init_shared_mem(struct s2io_nic *nic)
...
@@ -504,11 +514,24 @@ static int init_shared_mem(struct s2io_nic *nic)
* memory that was alloced till the
* memory that was alloced till the
* failure happened.
* failure happened.
*/
*/
mac_control
->
rings
[
i
].
rx_blocks
[
j
].
block_virt_addr
=
rx_blocks
->
block_virt_addr
=
tmp_v_addr
;
tmp_v_addr
;
return
-
ENOMEM
;
return
-
ENOMEM
;
}
}
memset
(
tmp_v_addr
,
0
,
size
);
memset
(
tmp_v_addr
,
0
,
size
);
rx_blocks
->
block_virt_addr
=
tmp_v_addr
;
rx_blocks
->
block_dma_addr
=
tmp_p_addr
;
rx_blocks
->
rxds
=
kmalloc
(
sizeof
(
rxd_info_t
)
*
rxd_count
[
nic
->
rxd_mode
],
GFP_KERNEL
);
for
(
l
=
0
;
l
<
rxd_count
[
nic
->
rxd_mode
];
l
++
)
{
rx_blocks
->
rxds
[
l
].
virt_addr
=
rx_blocks
->
block_virt_addr
+
(
rxd_size
[
nic
->
rxd_mode
]
*
l
);
rx_blocks
->
rxds
[
l
].
dma_addr
=
rx_blocks
->
block_dma_addr
+
(
rxd_size
[
nic
->
rxd_mode
]
*
l
);
}
mac_control
->
rings
[
i
].
rx_blocks
[
j
].
block_virt_addr
=
mac_control
->
rings
[
i
].
rx_blocks
[
j
].
block_virt_addr
=
tmp_v_addr
;
tmp_v_addr
;
mac_control
->
rings
[
i
].
rx_blocks
[
j
].
block_dma_addr
=
mac_control
->
rings
[
i
].
rx_blocks
[
j
].
block_dma_addr
=
...
@@ -528,45 +551,41 @@ static int init_shared_mem(struct s2io_nic *nic)
...
@@ -528,45 +551,41 @@ static int init_shared_mem(struct s2io_nic *nic)
blk_cnt
].
block_dma_addr
;
blk_cnt
].
block_dma_addr
;
pre_rxd_blk
=
(
RxD_block_t
*
)
tmp_v_addr
;
pre_rxd_blk
=
(
RxD_block_t
*
)
tmp_v_addr
;
pre_rxd_blk
->
reserved_1
=
END_OF_BLOCK
;
/* last RxD
* marker.
*/
#ifndef CONFIG_2BUFF_MODE
pre_rxd_blk
->
reserved_2_pNext_RxD_block
=
pre_rxd_blk
->
reserved_2_pNext_RxD_block
=
(
unsigned
long
)
tmp_v_addr_next
;
(
unsigned
long
)
tmp_v_addr_next
;
#endif
pre_rxd_blk
->
pNext_RxD_Blk_physical
=
pre_rxd_blk
->
pNext_RxD_Blk_physical
=
(
u64
)
tmp_p_addr_next
;
(
u64
)
tmp_p_addr_next
;
}
}
}
}
if
(
nic
->
rxd_mode
>=
RXD_MODE_3A
)
{
#ifdef CONFIG_2BUFF_MODE
/*
/*
* Allocation of Storages for buffer addresses in 2BUFF mode
* Allocation of Storages for buffer addresses in 2BUFF mode
* and the buffers as well.
* and the buffers as well.
*/
*/
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
blk_cnt
=
blk_cnt
=
config
->
rx_cfg
[
i
].
num_rxd
/
config
->
rx_cfg
[
i
].
num_rxd
/
(
MAX_RXDS_PER_BLOCK
+
1
);
(
rxd_count
[
nic
->
rxd_mode
]
+
1
);
mac_control
->
rings
[
i
].
ba
=
kmalloc
((
sizeof
(
buffAdd_t
*
)
*
blk_cnt
),
mac_control
->
rings
[
i
].
ba
=
kmalloc
((
sizeof
(
buffAdd_t
*
)
*
blk_cnt
),
GFP_KERNEL
);
GFP_KERNEL
);
if
(
!
mac_control
->
rings
[
i
].
ba
)
if
(
!
mac_control
->
rings
[
i
].
ba
)
return
-
ENOMEM
;
return
-
ENOMEM
;
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
int
k
=
0
;
int
k
=
0
;
mac_control
->
rings
[
i
].
ba
[
j
]
=
kmalloc
((
sizeof
(
buffAdd_t
)
*
mac_control
->
rings
[
i
].
ba
[
j
]
=
(
MAX_RXDS_PER_BLOCK
+
1
)),
kmalloc
((
sizeof
(
buffAdd_t
)
*
(
rxd_count
[
nic
->
rxd_mode
]
+
1
)),
GFP_KERNEL
);
GFP_KERNEL
);
if
(
!
mac_control
->
rings
[
i
].
ba
[
j
])
if
(
!
mac_control
->
rings
[
i
].
ba
[
j
])
return
-
ENOMEM
;
return
-
ENOMEM
;
while
(
k
!=
MAX_RXDS_PER_BLOCK
)
{
while
(
k
!=
rxd_count
[
nic
->
rxd_mode
]
)
{
ba
=
&
mac_control
->
rings
[
i
].
ba
[
j
][
k
];
ba
=
&
mac_control
->
rings
[
i
].
ba
[
j
][
k
];
ba
->
ba_0_org
=
(
void
*
)
kmalloc
ba
->
ba_0_org
=
(
void
*
)
kmalloc
(
BUF0_LEN
+
ALIGN_SIZE
,
GFP_KERNEL
);
(
BUF0_LEN
+
ALIGN_SIZE
,
GFP_KERNEL
);
if
(
!
ba
->
ba_0_org
)
if
(
!
ba
->
ba_0_org
)
return
-
ENOMEM
;
return
-
ENOMEM
;
tmp
=
(
unsigned
long
)
ba
->
ba_0_org
;
tmp
=
(
unsigned
long
)
ba
->
ba_0_org
;
tmp
+=
ALIGN_SIZE
;
tmp
+=
ALIGN_SIZE
;
tmp
&=
~
((
unsigned
long
)
ALIGN_SIZE
);
tmp
&=
~
((
unsigned
long
)
ALIGN_SIZE
);
ba
->
ba_0
=
(
void
*
)
tmp
;
ba
->
ba_0
=
(
void
*
)
tmp
;
...
@@ -583,7 +602,7 @@ static int init_shared_mem(struct s2io_nic *nic)
...
@@ -583,7 +602,7 @@ static int init_shared_mem(struct s2io_nic *nic)
}
}
}
}
}
}
#endif
}
/* Allocation and initialization of Statistics block */
/* Allocation and initialization of Statistics block */
size
=
sizeof
(
StatInfo_t
);
size
=
sizeof
(
StatInfo_t
);
...
@@ -669,11 +688,7 @@ static void free_shared_mem(struct s2io_nic *nic)
...
@@ -669,11 +688,7 @@ static void free_shared_mem(struct s2io_nic *nic)
kfree
(
mac_control
->
fifos
[
i
].
list_info
);
kfree
(
mac_control
->
fifos
[
i
].
list_info
);
}
}
#ifndef CONFIG_2BUFF_MODE
size
=
(
MAX_RXDS_PER_BLOCK
+
1
)
*
(
sizeof
(
RxD_t
));
#else
size
=
SIZE_OF_BLOCK
;
size
=
SIZE_OF_BLOCK
;
#endif
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
blk_cnt
=
mac_control
->
rings
[
i
].
block_count
;
blk_cnt
=
mac_control
->
rings
[
i
].
block_count
;
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
...
@@ -685,20 +700,22 @@ static void free_shared_mem(struct s2io_nic *nic)
...
@@ -685,20 +700,22 @@ static void free_shared_mem(struct s2io_nic *nic)
break
;
break
;
pci_free_consistent
(
nic
->
pdev
,
size
,
pci_free_consistent
(
nic
->
pdev
,
size
,
tmp_v_addr
,
tmp_p_addr
);
tmp_v_addr
,
tmp_p_addr
);
kfree
(
mac_control
->
rings
[
i
].
rx_blocks
[
j
].
rxds
);
}
}
}
}
#ifdef CONFIG_2BUFF_MODE
if
(
nic
->
rxd_mode
>=
RXD_MODE_3A
)
{
/* Freeing buffer storage addresses in 2BUFF mode. */
/* Freeing buffer storage addresses in 2BUFF mode. */
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
for
(
i
=
0
;
i
<
config
->
rx_ring_num
;
i
++
)
{
blk_cnt
=
blk_cnt
=
config
->
rx_cfg
[
i
].
num_rxd
/
config
->
rx_cfg
[
i
].
num_rxd
/
(
MAX_RXDS_PER_BLOCK
+
1
);
(
rxd_count
[
nic
->
rxd_mode
]
+
1
);
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
for
(
j
=
0
;
j
<
blk_cnt
;
j
++
)
{
int
k
=
0
;
int
k
=
0
;
if
(
!
mac_control
->
rings
[
i
].
ba
[
j
])
if
(
!
mac_control
->
rings
[
i
].
ba
[
j
])
continue
;
continue
;
while
(
k
!=
MAX_RXDS_PER_BLOCK
)
{
while
(
k
!=
rxd_count
[
nic
->
rxd_mode
])
{
buffAdd_t
*
ba
=
&
mac_control
->
rings
[
i
].
ba
[
j
][
k
];
buffAdd_t
*
ba
=
&
mac_control
->
rings
[
i
].
ba
[
j
][
k
];
kfree
(
ba
->
ba_0_org
);
kfree
(
ba
->
ba_0_org
);
kfree
(
ba
->
ba_1_org
);
kfree
(
ba
->
ba_1_org
);
k
++
;
k
++
;
...
@@ -707,7 +724,7 @@ static void free_shared_mem(struct s2io_nic *nic)
...
@@ -707,7 +724,7 @@ static void free_shared_mem(struct s2io_nic *nic)
}
}
kfree
(
mac_control
->
rings
[
i
].
ba
);
kfree
(
mac_control
->
rings
[
i
].
ba
);
}
}
#endif
}
if
(
mac_control
->
stats_mem
)
{
if
(
mac_control
->
stats_mem
)
{
pci_free_consistent
(
nic
->
pdev
,
pci_free_consistent
(
nic
->
pdev
,
...
@@ -1894,20 +1911,19 @@ static int start_nic(struct s2io_nic *nic)
...
@@ -1894,20 +1911,19 @@ static int start_nic(struct s2io_nic *nic)
val64
=
readq
(
&
bar0
->
prc_ctrl_n
[
i
]);
val64
=
readq
(
&
bar0
->
prc_ctrl_n
[
i
]);
if
(
nic
->
config
.
bimodal
)
if
(
nic
->
config
.
bimodal
)
val64
|=
PRC_CTRL_BIMODAL_INTERRUPT
;
val64
|=
PRC_CTRL_BIMODAL_INTERRUPT
;
#ifndef CONFIG_2BUFF_MODE
if
(
nic
->
rxd_mode
==
RXD_MODE_1
)
val64
|=
PRC_CTRL_RC_ENABLED
;
val64
|=
PRC_CTRL_RC_ENABLED
;
#
else
else
val64
|=
PRC_CTRL_RC_ENABLED
|
PRC_CTRL_RING_MODE_3
;
val64
|=
PRC_CTRL_RC_ENABLED
|
PRC_CTRL_RING_MODE_3
;
#endif
writeq
(
val64
,
&
bar0
->
prc_ctrl_n
[
i
]);
writeq
(
val64
,
&
bar0
->
prc_ctrl_n
[
i
]);
}
}
#ifdef CONFIG_2BUFF_MODE
if
(
nic
->
rxd_mode
==
RXD_MODE_3B
)
{
/* Enabling 2 buffer mode by writing into Rx_pa_cfg reg. */
/* Enabling 2 buffer mode by writing into Rx_pa_cfg reg. */
val64
=
readq
(
&
bar0
->
rx_pa_cfg
);
val64
=
readq
(
&
bar0
->
rx_pa_cfg
);
val64
|=
RX_PA_CFG_IGNORE_L2_ERR
;
val64
|=
RX_PA_CFG_IGNORE_L2_ERR
;
writeq
(
val64
,
&
bar0
->
rx_pa_cfg
);
writeq
(
val64
,
&
bar0
->
rx_pa_cfg
);
#endif
}
/*
/*
* Enabling MC-RLDRAM. After enabling the device, we timeout
* Enabling MC-RLDRAM. After enabling the device, we timeout
...
@@ -2090,6 +2106,41 @@ static void stop_nic(struct s2io_nic *nic)
...
@@ -2090,6 +2106,41 @@ static void stop_nic(struct s2io_nic *nic)
}
}
}
}
int
fill_rxd_3buf
(
nic_t
*
nic
,
RxD_t
*
rxdp
,
struct
sk_buff
*
skb
)
{
struct
net_device
*
dev
=
nic
->
dev
;
struct
sk_buff
*
frag_list
;
u64
tmp
;
/* Buffer-1 receives L3/L4 headers */
((
RxD3_t
*
)
rxdp
)
->
Buffer1_ptr
=
pci_map_single
(
nic
->
pdev
,
skb
->
data
,
l3l4hdr_size
+
4
,
PCI_DMA_FROMDEVICE
);
/* skb_shinfo(skb)->frag_list will have L4 data payload */
skb_shinfo
(
skb
)
->
frag_list
=
dev_alloc_skb
(
dev
->
mtu
+
ALIGN_SIZE
);
if
(
skb_shinfo
(
skb
)
->
frag_list
==
NULL
)
{
DBG_PRINT
(
ERR_DBG
,
"%s: dev_alloc_skb failed
\n
"
,
dev
->
name
);
return
-
ENOMEM
;
}
frag_list
=
skb_shinfo
(
skb
)
->
frag_list
;
frag_list
->
next
=
NULL
;
tmp
=
(
u64
)
frag_list
->
data
;
tmp
+=
ALIGN_SIZE
;
tmp
&=
~
ALIGN_SIZE
;
frag_list
->
data
=
(
void
*
)
tmp
;
frag_list
->
tail
=
(
void
*
)
tmp
;
/* Buffer-2 receives L4 data payload */
((
RxD3_t
*
)
rxdp
)
->
Buffer2_ptr
=
pci_map_single
(
nic
->
pdev
,
frag_list
->
data
,
dev
->
mtu
,
PCI_DMA_FROMDEVICE
);
rxdp
->
Control_2
|=
SET_BUFFER1_SIZE_3
(
l3l4hdr_size
+
4
);
rxdp
->
Control_2
|=
SET_BUFFER2_SIZE_3
(
dev
->
mtu
);
return
SUCCESS
;
}
/**
/**
* fill_rx_buffers - Allocates the Rx side skbs
* fill_rx_buffers - Allocates the Rx side skbs
* @nic: device private variable
* @nic: device private variable
...
@@ -2117,18 +2168,12 @@ int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
...
@@ -2117,18 +2168,12 @@ int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
struct
sk_buff
*
skb
;
struct
sk_buff
*
skb
;
RxD_t
*
rxdp
;
RxD_t
*
rxdp
;
int
off
,
off1
,
size
,
block_no
,
block_no1
;
int
off
,
off1
,
size
,
block_no
,
block_no1
;
int
offset
,
offset1
;
u32
alloc_tab
=
0
;
u32
alloc_tab
=
0
;
u32
alloc_cnt
;
u32
alloc_cnt
;
mac_info_t
*
mac_control
;
mac_info_t
*
mac_control
;
struct
config_param
*
config
;
struct
config_param
*
config
;
#ifdef CONFIG_2BUFF_MODE
RxD_t
*
rxdpnext
;
int
nextblk
;
u64
tmp
;
u64
tmp
;
buffAdd_t
*
ba
;
buffAdd_t
*
ba
;
dma_addr_t
rxdpphys
;
#endif
#ifndef CONFIG_S2IO_NAPI
#ifndef CONFIG_S2IO_NAPI
unsigned
long
flags
;
unsigned
long
flags
;
#endif
#endif
...
@@ -2138,8 +2183,6 @@ int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
...
@@ -2138,8 +2183,6 @@ int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
config
=
&
nic
->
config
;
config
=
&
nic
->
config
;
alloc_cnt
=
mac_control
->
rings
[
ring_no
].
pkt_cnt
-
alloc_cnt
=
mac_control
->
rings
[
ring_no
].
pkt_cnt
-
atomic_read
(
&
nic
->
rx_bufs_left
[
ring_no
]);
atomic_read
(
&
nic
->
rx_bufs_left
[
ring_no
]);
size
=
dev
->
mtu
+
HEADER_ETHERNET_II_802_3_SIZE
+
HEADER_802_2_SIZE
+
HEADER_SNAP_SIZE
;
while
(
alloc_tab
<
alloc_cnt
)
{
while
(
alloc_tab
<
alloc_cnt
)
{
block_no
=
mac_control
->
rings
[
ring_no
].
rx_curr_put_info
.
block_no
=
mac_control
->
rings
[
ring_no
].
rx_curr_put_info
.
...
@@ -2148,159 +2191,145 @@ int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
...
@@ -2148,159 +2191,145 @@ int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
block_index
;
block_index
;
off
=
mac_control
->
rings
[
ring_no
].
rx_curr_put_info
.
offset
;
off
=
mac_control
->
rings
[
ring_no
].
rx_curr_put_info
.
offset
;
off1
=
mac_control
->
rings
[
ring_no
].
rx_curr_get_info
.
offset
;
off1
=
mac_control
->
rings
[
ring_no
].
rx_curr_get_info
.
offset
;
#ifndef CONFIG_2BUFF_MODE
offset
=
block_no
*
(
MAX_RXDS_PER_BLOCK
+
1
)
+
off
;
offset1
=
block_no1
*
(
MAX_RXDS_PER_BLOCK
+
1
)
+
off1
;
#else
offset
=
block_no
*
(
MAX_RXDS_PER_BLOCK
)
+
off
;
offset1
=
block_no1
*
(
MAX_RXDS_PER_BLOCK
)
+
off1
;
#endif
rxdp
=
mac_control
->
rings
[
ring_no
].
rx_blocks
[
block_no
].
rxdp
=
mac_control
->
rings
[
ring_no
].
block_virt_addr
+
off
;
rx_blocks
[
block_no
].
rxds
[
off
].
virt_addr
;
if
((
offset
==
offset1
)
&&
(
rxdp
->
Host_Control
))
{
DBG_PRINT
(
INTR_DBG
,
"%s: Get and Put"
,
dev
->
name
);
if
((
block_no
==
block_no1
)
&&
(
off
==
off1
)
&&
(
rxdp
->
Host_Control
))
{
DBG_PRINT
(
INTR_DBG
,
"%s: Get and Put"
,
dev
->
name
);
DBG_PRINT
(
INTR_DBG
,
" info equated
\n
"
);
DBG_PRINT
(
INTR_DBG
,
" info equated
\n
"
);
goto
end
;
goto
end
;
}
}
-#ifndef CONFIG_2BUFF_MODE
-		if (rxdp->Control_1 == END_OF_BLOCK) {
+		if (off && (off == rxd_count[nic->rxd_mode])) {
 			mac_control->rings[ring_no].rx_curr_put_info.block_index++;
-			mac_control->rings[ring_no].rx_curr_put_info.block_index %=
-			    mac_control->rings[ring_no].block_count;
-			block_no = mac_control->rings[ring_no].rx_curr_put_info.block_index;
-			off++;
-			off %= (MAX_RXDS_PER_BLOCK + 1);
-			mac_control->rings[ring_no].rx_curr_put_info.offset = off;
-			rxdp = (RxD_t *) ((unsigned long) rxdp->Control_2);
+			if (mac_control->rings[ring_no].rx_curr_put_info.block_index ==
+			    mac_control->rings[ring_no].block_count)
+				mac_control->rings[ring_no].rx_curr_put_info.block_index = 0;
+			block_no = mac_control->rings[ring_no].rx_curr_put_info.block_index;
+			if (off == rxd_count[nic->rxd_mode])
+				off = 0;
+			mac_control->rings[ring_no].rx_curr_put_info.offset = off;
+			rxdp = mac_control->rings[ring_no].rx_blocks[block_no].block_virt_addr;
 			DBG_PRINT(INTR_DBG, "%s: Next block at: %p\n",
 				  dev->name, rxdp);
 		}
 #ifndef CONFIG_S2IO_NAPI
 		spin_lock_irqsave(&nic->put_lock, flags);
 		mac_control->rings[ring_no].put_pos =
-		    (block_no * (MAX_RXDS_PER_BLOCK + 1)) + off;
+		    (block_no * (rxd_count[nic->rxd_mode] + 1)) + off;
 		spin_unlock_irqrestore(&nic->put_lock, flags);
 #endif
-#else
-		if (rxdp->Host_Control == END_OF_BLOCK) {
-			mac_control->rings[ring_no].rx_curr_put_info.block_index++;
-			mac_control->rings[ring_no].rx_curr_put_info.block_index %=
-			    mac_control->rings[ring_no].block_count;
-			block_no = mac_control->rings[ring_no].rx_curr_put_info.block_index;
-			off = 0;
-			DBG_PRINT(INTR_DBG, "%s: block%d at: 0x%llx\n",
-				  dev->name, block_no,
-				  (unsigned long long) rxdp->Control_1);
-			mac_control->rings[ring_no].rx_curr_put_info.offset = off;
-			rxdp = mac_control->rings[ring_no].rx_blocks[block_no].block_virt_addr;
-		}
-#ifndef CONFIG_S2IO_NAPI
-		spin_lock_irqsave(&nic->put_lock, flags);
-		mac_control->rings[ring_no].put_pos =
-		    (block_no * (MAX_RXDS_PER_BLOCK + 1)) + off;
-		spin_unlock_irqrestore(&nic->put_lock, flags);
-#endif
-#endif
-#ifndef CONFIG_2BUFF_MODE
-		if (rxdp->Control_1 & RXD_OWN_XENA)
-#else
-		if (rxdp->Control_2 & BIT(0))
-#endif
-		{
+		if ((rxdp->Control_1 & RXD_OWN_XENA) &&
+		    ((nic->rxd_mode >= RXD_MODE_3A) &&
+		     (rxdp->Control_2 & BIT(0)))) {
 			mac_control->rings[ring_no].rx_curr_put_info.offset = off;
 			goto end;
 		}
-#ifdef CONFIG_2BUFF_MODE
-		/*
-		 * RxDs Spanning cache lines will be replenished only
-		 * if the succeeding RxD is also owned by Host. It
-		 * will always be the ((8*i)+3) and ((8*i)+6)
-		 * descriptors for the 48 byte descriptor. The offending
-		 * decsriptor is of-course the 3rd descriptor.
-		 */
-		rxdpphys = mac_control->rings[ring_no].rx_blocks[block_no].
-		    block_dma_addr + (off * sizeof(RxD_t));
-		if (((u64) (rxdpphys)) % 128 > 80) {
-			rxdpnext = mac_control->rings[ring_no].rx_blocks[block_no].
-			    block_virt_addr + (off + 1);
-			if (rxdpnext->Host_Control == END_OF_BLOCK) {
-				nextblk = (block_no + 1) %
-				    (mac_control->rings[ring_no].block_count);
-				rxdpnext = mac_control->rings[ring_no].
-				    rx_blocks[nextblk].block_virt_addr;
-			}
-			if (rxdpnext->Control_2 & BIT(0))
-				goto end;
-		}
-#endif
+		/* calculate size of skb based on ring mode */
+		size = dev->mtu + HEADER_ETHERNET_II_802_3_SIZE +
+		    HEADER_802_2_SIZE + HEADER_SNAP_SIZE;
+		if (nic->rxd_mode == RXD_MODE_1)
+			size += NET_IP_ALIGN;
+		else if (nic->rxd_mode == RXD_MODE_3B)
+			size = dev->mtu + ALIGN_SIZE + BUF0_LEN + 4;
+		else
+			size = l3l4hdr_size + ALIGN_SIZE + BUF0_LEN + 4;
-#ifndef CONFIG_2BUFF_MODE
-		skb = dev_alloc_skb(size + NET_IP_ALIGN);
-#else
-		skb = dev_alloc_skb(dev->mtu + ALIGN_SIZE + BUF0_LEN + 4);
-#endif
+		/* allocate skb */
+		skb = dev_alloc_skb(size);
 		if (!skb) {
 			DBG_PRINT(ERR_DBG, "%s: Out of ", dev->name);
 			DBG_PRINT(ERR_DBG, "memory to allocate SKBs\n");
 			if (first_rxdp) {
 				wmb();
 				first_rxdp->Control_1 |= RXD_OWN_XENA;
 			}
 			return -ENOMEM;
 		}
-#ifndef CONFIG_2BUFF_MODE
-		skb_reserve(skb, NET_IP_ALIGN);
-		memset(rxdp, 0, sizeof(RxD_t));
-		rxdp->Buffer0_ptr = pci_map_single
-		    (nic->pdev, skb->data, size, PCI_DMA_FROMDEVICE);
-		rxdp->Control_2 &= (~MASK_BUFFER0_SIZE);
-		rxdp->Control_2 |= SET_BUFFER0_SIZE(size);
-		rxdp->Host_Control = (unsigned long) (skb);
-		if (alloc_tab & ((1 << rxsync_frequency) - 1))
-			rxdp->Control_1 |= RXD_OWN_XENA;
-		off++;
-		off %= (MAX_RXDS_PER_BLOCK + 1);
-		mac_control->rings[ring_no].rx_curr_put_info.offset = off;
-#else
-		ba = &mac_control->rings[ring_no].ba[block_no][off];
-		skb_reserve(skb, BUF0_LEN);
-		tmp = ((unsigned long) skb->data & ALIGN_SIZE);
-		if (tmp)
-			skb_reserve(skb, (ALIGN_SIZE + 1) - tmp);
-		memset(rxdp, 0, sizeof(RxD_t));
-		rxdp->Buffer2_ptr = pci_map_single
-		    (nic->pdev, skb->data, dev->mtu + BUF0_LEN + 4,
-		     PCI_DMA_FROMDEVICE);
-		rxdp->Buffer0_ptr =
-		    pci_map_single(nic->pdev, ba->ba_0, BUF0_LEN,
-				   PCI_DMA_FROMDEVICE);
-		rxdp->Buffer1_ptr =
-		    pci_map_single(nic->pdev, ba->ba_1, BUF1_LEN,
-				   PCI_DMA_FROMDEVICE);
-		rxdp->Control_2 = SET_BUFFER2_SIZE(dev->mtu + 4);
-		rxdp->Control_2 |= SET_BUFFER0_SIZE(BUF0_LEN);
-		rxdp->Control_2 |= SET_BUFFER1_SIZE(1);	/* dummy. */
-		rxdp->Control_2 |= BIT(0);	/* Set Buffer_Empty bit. */
-		rxdp->Host_Control = (u64) ((unsigned long) (skb));
+		if (nic->rxd_mode == RXD_MODE_1) {
+			/* 1 buffer mode - normal operation mode */
+			memset(rxdp, 0, sizeof(RxD1_t));
+			skb_reserve(skb, NET_IP_ALIGN);
+			((RxD1_t *)rxdp)->Buffer0_ptr = pci_map_single
+			    (nic->pdev, skb->data, size, PCI_DMA_FROMDEVICE);
+			rxdp->Control_2 &= (~MASK_BUFFER0_SIZE_1);
+			rxdp->Control_2 |= SET_BUFFER0_SIZE_1(size);
+		} else if (nic->rxd_mode >= RXD_MODE_3A) {
+			/*
+			 * 2 or 3 buffer mode -
+			 * Both 2 buffer mode and 3 buffer mode provides 128
+			 * byte aligned receive buffers.
+			 *
+			 * 3 buffer mode provides header separation where in
+			 * skb->data will have L3/L4 headers where as
+			 * skb_shinfo(skb)->frag_list will have the L4 data
+			 * payload
+			 */
+			memset(rxdp, 0, sizeof(RxD3_t));
+			ba = &mac_control->rings[ring_no].ba[block_no][off];
+			skb_reserve(skb, BUF0_LEN);
+			tmp = (u64)(unsigned long) skb->data;
+			tmp += ALIGN_SIZE;
+			tmp &= ~ALIGN_SIZE;
+			skb->data = (void *) (unsigned long)tmp;
+			skb->tail = (void *) (unsigned long)tmp;
+			((RxD3_t *)rxdp)->Buffer0_ptr =
+			    pci_map_single(nic->pdev, ba->ba_0, BUF0_LEN,
+					   PCI_DMA_FROMDEVICE);
+			rxdp->Control_2 = SET_BUFFER0_SIZE_3(BUF0_LEN);
+			if (nic->rxd_mode == RXD_MODE_3B) {
+				/* Two buffer mode */
+				/*
+				 * Buffer2 will have L3/L4 header plus
+				 * L4 payload
+				 */
+				((RxD3_t *)rxdp)->Buffer2_ptr = pci_map_single
+				    (nic->pdev, skb->data, dev->mtu + 4,
+				     PCI_DMA_FROMDEVICE);
+				/* Buffer-1 will be dummy buffer not used */
+				((RxD3_t *)rxdp)->Buffer1_ptr =
+				    pci_map_single(nic->pdev, ba->ba_1, BUF1_LEN,
+						   PCI_DMA_FROMDEVICE);
+				rxdp->Control_2 |= SET_BUFFER1_SIZE_3(1);
+				rxdp->Control_2 |= SET_BUFFER2_SIZE_3
+				    (dev->mtu + 4);
+			} else {
+				/* 3 buffer mode */
+				if (fill_rxd_3buf(nic, rxdp, skb) == -ENOMEM) {
+					dev_kfree_skb_irq(skb);
+					if (first_rxdp) {
+						wmb();
+						first_rxdp->Control_1 |=
+						    RXD_OWN_XENA;
+					}
+					return -ENOMEM;
+				}
+			}
+			rxdp->Control_2 |= BIT(0);
+		}
+		rxdp->Host_Control = (unsigned long) (skb);
 		if (alloc_tab & ((1 << rxsync_frequency) - 1))
 			rxdp->Control_1 |= RXD_OWN_XENA;
 		off++;
+		if (off == (rxd_count[nic->rxd_mode] + 1))
+			off = 0;
 		mac_control->rings[ring_no].rx_curr_put_info.offset = off;
-#endif
 		rxdp->Control_2 |= SET_RXD_MARKER;
 		if (!(alloc_tab & ((1 << rxsync_frequency) - 1))) {
 			if (first_rxdp) {
 				wmb();
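The pointer arithmetic in the 2/3 buffer path above ("tmp += ALIGN_SIZE; tmp &= ~ALIGN_SIZE") simply rounds skb->data up to the next 128-byte boundary. A minimal userspace sketch of that rounding, assuming ALIGN_SIZE is the 127-byte mask the arithmetic implies (check s2io.h for the authoritative value):

#include <stdio.h>
#include <stdint.h>

#define ALIGN_SIZE 127ULL	/* assumed 128-byte alignment mask, as in the driver */

static uint64_t align_up_128(uint64_t addr)
{
	addr += ALIGN_SIZE;	/* step past the current 128-byte slot ... */
	addr &= ~ALIGN_SIZE;	/* ... and clear the low 7 bits */
	return addr;
}

int main(void)
{
	/* already-aligned addresses stay put, everything else moves up */
	printf("%#llx -> %#llx\n", 0x1000ULL, (unsigned long long)align_up_128(0x1000));	/* 0x1000 */
	printf("%#llx -> %#llx\n", 0x1001ULL, (unsigned long long)align_up_128(0x1001));	/* 0x1080 */
	printf("%#llx -> %#llx\n", 0x107fULL, (unsigned long long)align_up_128(0x107f));	/* 0x1080 */
	return 0;
}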
...
@@ -2325,89 +2354,90 @@ int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
 	return SUCCESS;
 }

-/**
- *  free_rx_buffers - Frees all Rx buffers
- *  @sp: device private variable.
- *  Description:
- *  This function will free all Rx buffers allocated by host.
- *  Return Value:
- *  NONE.
- */
-static void free_rx_buffers(struct s2io_nic *sp)
+static void free_rxd_blk(struct s2io_nic *sp, int ring_no, int blk)
 {
 	struct net_device *dev = sp->dev;
-	int i, j, blk = 0, off, buf_cnt = 0;
-	RxD_t *rxdp;
+	int j;
 	struct sk_buff *skb;
+	RxD_t *rxdp;
 	mac_info_t *mac_control;
-	struct config_param *config;
-#ifdef CONFIG_2BUFF_MODE
 	buffAdd_t *ba;
-#endif

 	mac_control = &sp->mac_control;
-	config = &sp->config;
-
-	for (i = 0; i < config->rx_ring_num; i++) {
-		for (j = 0, blk = 0; j < config->rx_cfg[i].num_rxd; j++) {
-			off = j % (MAX_RXDS_PER_BLOCK + 1);
-			rxdp = mac_control->rings[i].rx_blocks[blk].
-			    block_virt_addr + off;
-#ifndef CONFIG_2BUFF_MODE
-			if (rxdp->Control_1 == END_OF_BLOCK) {
-				rxdp = (RxD_t *) ((unsigned long) rxdp->Control_2);
-				j++;
-				blk++;
-			}
-#else
-			if (rxdp->Host_Control == END_OF_BLOCK) {
-				blk++;
-				continue;
-			}
-#endif
-			if (!(rxdp->Control_1 & RXD_OWN_XENA)) {
-				memset(rxdp, 0, sizeof(RxD_t));
-				continue;
-			}
+	for (j = 0; j < rxd_count[sp->rxd_mode]; j++) {
+		rxdp = mac_control->rings[ring_no].
+		    rx_blocks[blk].rxds[j].virt_addr;
+		skb = (struct sk_buff *)
+		    ((unsigned long) rxdp->Host_Control);
+		if (!skb) {
+			continue;
+		}
-		skb = (struct sk_buff *) ((unsigned long) rxdp->Host_Control);
-		if (skb) {
-#ifndef CONFIG_2BUFF_MODE
-			pci_unmap_single(sp->pdev, (dma_addr_t)
-					 rxdp->Buffer0_ptr,
-					 dev->mtu +
-					 HEADER_ETHERNET_II_802_3_SIZE
-					 + HEADER_802_2_SIZE +
-					 HEADER_SNAP_SIZE,
-					 PCI_DMA_FROMDEVICE);
-#else
-			ba = &mac_control->rings[i].ba[blk][off];
-			pci_unmap_single(sp->pdev, (dma_addr_t)
-					 rxdp->Buffer0_ptr,
-					 BUF0_LEN,
-					 PCI_DMA_FROMDEVICE);
-			pci_unmap_single(sp->pdev, (dma_addr_t)
-					 rxdp->Buffer1_ptr,
-					 BUF1_LEN,
-					 PCI_DMA_FROMDEVICE);
-			pci_unmap_single(sp->pdev, (dma_addr_t)
-					 rxdp->Buffer2_ptr,
-					 dev->mtu + BUF0_LEN + 4,
-					 PCI_DMA_FROMDEVICE);
-#endif
-			dev_kfree_skb(skb);
-			atomic_dec(&sp->rx_bufs_left[i]);
-			buf_cnt++;
-		}
-		memset(rxdp, 0, sizeof(RxD_t));
+		if (sp->rxd_mode == RXD_MODE_1) {
+			pci_unmap_single(sp->pdev, (dma_addr_t)
+				 ((RxD1_t *)rxdp)->Buffer0_ptr,
+				 dev->mtu +
+				 HEADER_ETHERNET_II_802_3_SIZE
+				 + HEADER_802_2_SIZE +
+				 HEADER_SNAP_SIZE,
+				 PCI_DMA_FROMDEVICE);
+			memset(rxdp, 0, sizeof(RxD1_t));
+		} else if (sp->rxd_mode == RXD_MODE_3B) {
+			ba = &mac_control->rings[ring_no].ba[blk][j];
+			pci_unmap_single(sp->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer0_ptr,
+				 BUF0_LEN,
+				 PCI_DMA_FROMDEVICE);
+			pci_unmap_single(sp->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer1_ptr,
+				 BUF1_LEN,
+				 PCI_DMA_FROMDEVICE);
+			pci_unmap_single(sp->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer2_ptr,
+				 dev->mtu + 4,
+				 PCI_DMA_FROMDEVICE);
+			memset(rxdp, 0, sizeof(RxD3_t));
+		} else {
+			pci_unmap_single(sp->pdev, (dma_addr_t)
+				((RxD3_t *)rxdp)->Buffer0_ptr, BUF0_LEN,
+				PCI_DMA_FROMDEVICE);
+			pci_unmap_single(sp->pdev, (dma_addr_t)
+				((RxD3_t *)rxdp)->Buffer1_ptr,
+				l3l4hdr_size + 4,
+				PCI_DMA_FROMDEVICE);
+			pci_unmap_single(sp->pdev, (dma_addr_t)
+				((RxD3_t *)rxdp)->Buffer2_ptr, dev->mtu,
+				PCI_DMA_FROMDEVICE);
+			memset(rxdp, 0, sizeof(RxD3_t));
+		}
+		dev_kfree_skb(skb);
+		atomic_dec(&sp->rx_bufs_left[ring_no]);
 	}
 }
+
+/**
+ *  free_rx_buffers - Frees all Rx buffers
+ *  @sp: device private variable.
+ *  Description:
+ *  This function will free all Rx buffers allocated by host.
+ *  Return Value:
+ *  NONE.
+ */
+
+static void free_rx_buffers(struct s2io_nic *sp)
+{
+	struct net_device *dev = sp->dev;
+	int i, blk = 0, buf_cnt = 0;
+	mac_info_t *mac_control;
+	struct config_param *config;
+
+	mac_control = &sp->mac_control;
+	config = &sp->config;
+
+	for (i = 0; i < config->rx_ring_num; i++) {
+		for (blk = 0; blk < rx_ring_sz[i]; blk++)
+			free_rxd_blk(sp, i, blk);
+
 		mac_control->rings[i].rx_curr_put_info.block_index = 0;
 		mac_control->rings[i].rx_curr_get_info.block_index = 0;
 		mac_control->rings[i].rx_curr_put_info.offset = 0;
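A quick sanity check on the per-block descriptor counts that free_rxd_blk() walks (rxd_count[sp->rxd_mode]): from the field layouts visible in s2io.h further down, a 1-buffer RxD is four u64s (32 bytes) and a 2/3-buffer RxD is six u64s (48 bytes), so a 4096-byte block holds 128 or 85 descriptor slots; in 1-buffer mode the last slot carries the end-of-block marker and next-block pointer, which is where MAX_RXDS_PER_BLOCK_1 = 127 comes from. Illustration only, the driver keeps the real table in rxd_count[] in s2io.c:

#include <stdio.h>
#include <stdint.h>

/* field counts taken from the RxD1_t/RxD3_t definitions in s2io.h */
struct rxd1 { uint64_t host_control, control_1, control_2, buffer0; };				/* 32 bytes */
struct rxd3 { uint64_t host_control, control_1, control_2, buffer0, buffer1, buffer2; };	/* 48 bytes */

int main(void)
{
	printf("slots per 4096-byte block, 1-buffer mode : %zu\n", (size_t)4096 / sizeof(struct rxd1));	/* 128 */
	printf("slots per 4096-byte block, 2/3-buffer mode: %zu\n", (size_t)4096 / sizeof(struct rxd3));	/* 85 */
	return 0;
}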
...
@@ -2513,7 +2543,7 @@ static void rx_intr_handler(ring_info_t *ring_data)
 {
 	nic_t *nic = ring_data->nic;
 	struct net_device *dev = (struct net_device *) nic->dev;
-	int get_block, get_offset, put_block, put_offset, ring_bufs;
+	int get_block, put_block, put_offset;
 	rx_curr_get_info_t get_info, put_info;
 	RxD_t *rxdp;
 	struct sk_buff *skb;
...
@@ -2532,21 +2562,22 @@ static void rx_intr_handler(ring_info_t *ring_data)
 	get_block = get_info.block_index;
 	put_info = ring_data->rx_curr_put_info;
 	put_block = put_info.block_index;
-	ring_bufs = get_info.ring_len + 1;
-	rxdp = ring_data->rx_blocks[get_block].block_virt_addr +
-	    get_info.offset;
-	get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) +
-	    get_info.offset;
+	rxdp = ring_data->rx_blocks[get_block].
+	    rxds[get_info.offset].virt_addr;
 #ifndef CONFIG_S2IO_NAPI
 	spin_lock(&nic->put_lock);
 	put_offset = ring_data->put_pos;
 	spin_unlock(&nic->put_lock);
 #else
-	put_offset = (put_block * (MAX_RXDS_PER_BLOCK + 1)) +
-	    put_info.offset;
+	put_offset = (put_block * (rxd_count[nic->rxd_mode] + 1)) +
+	    put_info.offset;
 #endif
-	while (RXD_IS_UP2DT(rxdp) &&
-	       (((get_offset + 1) % ring_bufs) != put_offset)) {
+	while (RXD_IS_UP2DT(rxdp)) {
+		/* If your are next to put index then it's FIFO full condition */
+		if ((get_block == put_block) &&
+		    (get_info.offset + 1) == put_info.offset) {
+			DBG_PRINT(ERR_DBG, "%s: Ring Full\n", dev->name);
+			break;
+		}
 		skb = (struct sk_buff *) ((unsigned long) rxdp->Host_Control);
 		if (skb == NULL) {
 			DBG_PRINT(ERR_DBG, "%s: The skb is ",
...
@@ -2555,46 +2586,52 @@ static void rx_intr_handler(ring_info_t *ring_data)
 			spin_unlock(&nic->rx_lock);
 			return;
 		}
-#ifndef CONFIG_2BUFF_MODE
-		pci_unmap_single(nic->pdev, (dma_addr_t)
-				 rxdp->Buffer0_ptr,
-				 dev->mtu +
-				 HEADER_ETHERNET_II_802_3_SIZE +
-				 HEADER_802_2_SIZE +
-				 HEADER_SNAP_SIZE,
-				 PCI_DMA_FROMDEVICE);
-#else
-		pci_unmap_single(nic->pdev, (dma_addr_t)
-				 rxdp->Buffer0_ptr,
-				 BUF0_LEN, PCI_DMA_FROMDEVICE);
-		pci_unmap_single(nic->pdev, (dma_addr_t)
-				 rxdp->Buffer1_ptr,
-				 BUF1_LEN, PCI_DMA_FROMDEVICE);
-		pci_unmap_single(nic->pdev, (dma_addr_t)
-				 rxdp->Buffer2_ptr,
-				 dev->mtu + BUF0_LEN + 4,
-				 PCI_DMA_FROMDEVICE);
-#endif
+		if (nic->rxd_mode == RXD_MODE_1) {
+			pci_unmap_single(nic->pdev, (dma_addr_t)
+				 ((RxD1_t *)rxdp)->Buffer0_ptr,
+				 dev->mtu +
+				 HEADER_ETHERNET_II_802_3_SIZE +
+				 HEADER_802_2_SIZE +
+				 HEADER_SNAP_SIZE,
+				 PCI_DMA_FROMDEVICE);
+		} else if (nic->rxd_mode == RXD_MODE_3B) {
+			pci_unmap_single(nic->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer0_ptr,
+				 BUF0_LEN, PCI_DMA_FROMDEVICE);
+			pci_unmap_single(nic->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer1_ptr,
+				 BUF1_LEN, PCI_DMA_FROMDEVICE);
+			pci_unmap_single(nic->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer2_ptr,
+				 dev->mtu + 4,
+				 PCI_DMA_FROMDEVICE);
+		} else {
+			pci_unmap_single(nic->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer0_ptr, BUF0_LEN,
+				 PCI_DMA_FROMDEVICE);
+			pci_unmap_single(nic->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer1_ptr,
+				 l3l4hdr_size + 4, PCI_DMA_FROMDEVICE);
+			pci_unmap_single(nic->pdev, (dma_addr_t)
+				 ((RxD3_t *)rxdp)->Buffer2_ptr,
+				 dev->mtu, PCI_DMA_FROMDEVICE);
+		}
 		rx_osm_handler(ring_data, rxdp);
 		get_info.offset++;
 		ring_data->rx_curr_get_info.offset = get_info.offset;
-		rxdp = ring_data->rx_blocks[get_block].block_virt_addr +
-		    get_info.offset;
-		if (get_info.offset &&
-		    (!(get_info.offset % MAX_RXDS_PER_BLOCK))) {
+		rxdp = ring_data->rx_blocks[get_block].
+		    rxds[get_info.offset].virt_addr;
+		if (get_info.offset == rxd_count[nic->rxd_mode]) {
 			get_info.offset = 0;
 			ring_data->rx_curr_get_info.offset = get_info.offset;
 			get_block++;
-			get_block %= ring_data->block_count;
-			ring_data->rx_curr_get_info.block_index = get_block;
+			if (get_block == ring_data->block_count)
+				get_block = 0;
+			ring_data->rx_curr_get_info.block_index = get_block;
 			rxdp = ring_data->rx_blocks[get_block].block_virt_addr;
 		}
-		get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) +
-		    get_info.offset;
 #ifdef CONFIG_S2IO_NAPI
 		nic->pkts_to_process -= 1;
 		if (!nic->pkts_to_process)
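Two pieces of bookkeeping in rx_intr_handler() are easy to miss: the put position is kept as a single linear index, put_block * (rxd_count + 1) + offset, and the ring is treated as full when the get pointer sits one descriptor behind the put pointer within the same block. A small standalone illustration of both (numbers are arbitrary):

#include <stdio.h>

int main(void)
{
	int per_blk = 85;	/* rxd_count for 2/3 buffer mode; 127 in 1-buffer mode */
	int put_block = 2, put_off = 10;

	/* same linearisation as put_pos / put_offset above */
	int put_offset = put_block * (per_blk + 1) + put_off;

	/* "Ring Full": the get pointer is one slot behind the put pointer */
	int get_block = 2, get_off = 9;
	int full = (get_block == put_block) && (get_off + 1 == put_off);

	printf("put_offset=%d full=%d\n", put_offset, full);	/* 182 1 */
	return 0;
}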
...
@@ -3044,7 +3081,7 @@ int s2io_set_swapper(nic_t * sp)
 int wait_for_msix_trans(nic_t *nic, int i)
 {
-	XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0;
+	XENA_dev_config_t __iomem *bar0 = nic->bar0;
 	u64 val64;
 	int ret = 0, cnt = 0;
...
@@ -3065,7 +3102,7 @@ int wait_for_msix_trans(nic_t *nic, int i)
 void restore_xmsi_data(nic_t *nic)
 {
-	XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0;
+	XENA_dev_config_t __iomem *bar0 = nic->bar0;
 	u64 val64;
 	int i;
...
@@ -3083,7 +3120,7 @@ void restore_xmsi_data(nic_t *nic)
 void store_xmsi_data(nic_t *nic)
 {
-	XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0;
+	XENA_dev_config_t __iomem *bar0 = nic->bar0;
 	u64 val64, addr, data;
 	int i;
...
@@ -3106,7 +3143,7 @@ void store_xmsi_data(nic_t *nic)
 int s2io_enable_msi(nic_t *nic)
 {
-	XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0;
+	XENA_dev_config_t __iomem *bar0 = nic->bar0;
 	u16 msi_ctrl, msg_val;
 	struct config_param *config = &nic->config;
 	struct net_device *dev = nic->dev;
...
@@ -3156,7 +3193,7 @@ int s2io_enable_msi(nic_t *nic)
 int s2io_enable_msi_x(nic_t *nic)
 {
-	XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0;
+	XENA_dev_config_t __iomem *bar0 = nic->bar0;
 	u64 tx_mat, rx_mat;
 	u16 msi_control;	/* Temp variable */
 	int ret, i, j, msix_indx = 1;
...
@@ -5537,16 +5574,7 @@ static int rx_osm_handler(ring_info_t *ring_data, RxD_t * rxdp)
 	    ((unsigned long) rxdp->Host_Control);
 	int ring_no = ring_data->ring_no;
 	u16 l3_csum, l4_csum;
-#ifdef CONFIG_2BUFF_MODE
-	int buf0_len = RXD_GET_BUFFER0_SIZE(rxdp->Control_2);
-	int buf2_len = RXD_GET_BUFFER2_SIZE(rxdp->Control_2);
-	int get_block = ring_data->rx_curr_get_info.block_index;
-	int get_off = ring_data->rx_curr_get_info.offset;
-	buffAdd_t *ba = &ring_data->ba[get_block][get_off];
-	unsigned char *buff;
-#else
-	u16 len = (u16) ((RXD_GET_BUFFER0_SIZE(rxdp->Control_2)) >> 48);;
-#endif
 	skb->dev = dev;
 	if (rxdp->Control_1 & RXD_T_CODE) {
 		unsigned long long err = rxdp->Control_1 & RXD_T_CODE;
...
@@ -5563,19 +5591,36 @@ static int rx_osm_handler(ring_info_t *ring_data, RxD_t * rxdp)
 	rxdp->Host_Control = 0;
 	sp->rx_pkt_count++;
 	sp->stats.rx_packets++;
-#ifndef CONFIG_2BUFF_MODE
-	sp->stats.rx_bytes += len;
-#else
-	sp->stats.rx_bytes += buf0_len + buf2_len;
-#endif
-#ifndef CONFIG_2BUFF_MODE
-	skb_put(skb, len);
-#else
-	buff = skb_push(skb, buf0_len);
-	memcpy(buff, ba->ba_0, buf0_len);
-	skb_put(skb, buf2_len);
-#endif
+	if (sp->rxd_mode == RXD_MODE_1) {
+		int len = RXD_GET_BUFFER0_SIZE_1(rxdp->Control_2);
+
+		sp->stats.rx_bytes += len;
+		skb_put(skb, len);
+
+	} else if (sp->rxd_mode >= RXD_MODE_3A) {
+		int get_block = ring_data->rx_curr_get_info.block_index;
+		int get_off = ring_data->rx_curr_get_info.offset;
+		int buf0_len = RXD_GET_BUFFER0_SIZE_3(rxdp->Control_2);
+		int buf2_len = RXD_GET_BUFFER2_SIZE_3(rxdp->Control_2);
+		unsigned char *buff = skb_push(skb, buf0_len);
+
+		buffAdd_t *ba = &ring_data->ba[get_block][get_off];
+		sp->stats.rx_bytes += buf0_len + buf2_len;
+		memcpy(buff, ba->ba_0, buf0_len);
+
+		if (sp->rxd_mode == RXD_MODE_3A) {
+			int buf1_len = RXD_GET_BUFFER1_SIZE_3(rxdp->Control_2);
+
+			skb_put(skb, buf1_len);
+			skb->len += buf2_len;
+			skb->data_len += buf2_len;
+			skb->truesize += buf2_len;
+			skb_put(skb_shinfo(skb)->frag_list, buf2_len);
+			sp->stats.rx_bytes += buf1_len;
+
+		} else
+			skb_put(skb, buf2_len);
+	}

 	if ((rxdp->Control_1 & TCP_OR_UDP_FRAME) &&
 	    (sp->rx_csum)) {
...
@@ -5711,6 +5756,7 @@ MODULE_VERSION(DRV_VERSION);
 module_param(tx_fifo_num, int, 0);
 module_param(rx_ring_num, int, 0);
+module_param(rx_ring_mode, int, 0);
 module_param_array(tx_fifo_len, uint, NULL, 0);
 module_param_array(rx_ring_sz, uint, NULL, 0);
 module_param_array(rts_frm_len, uint, NULL, 0);
...
@@ -5722,6 +5768,7 @@ module_param(shared_splits, int, 0);
 module_param(tmac_util_period, int, 0);
 module_param(rmac_util_period, int, 0);
 module_param(bimodal, bool, 0);
+module_param(l3l4hdr_size, int, 0);
 #ifndef CONFIG_S2IO_NAPI
 module_param(indicate_max_pkts, int, 0);
 #endif
...
@@ -5843,6 +5890,13 @@ Defaulting to INTA\n");
 	sp->pdev = pdev;
 	sp->high_dma_flag = dma_flag;
 	sp->device_enabled_once = FALSE;
+	if (rx_ring_mode == 1)
+		sp->rxd_mode = RXD_MODE_1;
+	if (rx_ring_mode == 2)
+		sp->rxd_mode = RXD_MODE_3B;
+	if (rx_ring_mode == 3)
+		sp->rxd_mode = RXD_MODE_3A;
+
 	sp->intr_type = dev_intr_type;
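For reference, the rx_ring_mode values added here map onto the three receive layouts described in the fill_rx_buffers() comments above. A tiny illustrative sketch of that mapping (the RXD_MODE_* constants themselves live in s2io.h; the fallback case is purely illustrative):

#include <stdio.h>

enum { RXD_MODE_1 = 0, RXD_MODE_3A = 1, RXD_MODE_3B = 2 };	/* values from s2io.h */

static int rxd_mode_from_param(int rx_ring_mode)
{
	switch (rx_ring_mode) {
	case 1: return RXD_MODE_1;	/* single buffer per RxD */
	case 2: return RXD_MODE_3B;	/* two buffers: 40-byte BUF0 + mtu-sized data buffer */
	case 3: return RXD_MODE_3A;	/* three buffers: header/payload split via frag_list */
	default: return RXD_MODE_1;	/* illustrative fallback only */
	}
}

int main(void)
{
	for (int p = 1; p <= 3; p++)
		printf("rx_ring_mode=%d -> rxd_mode=%d\n", p, rxd_mode_from_param(p));
	return 0;
}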
 	if ((pdev->device == PCI_DEVICE_ID_HERC_WIN) ||
...
@@ -5895,7 +5949,7 @@ Defaulting to INTA\n");
 	config->rx_ring_num = rx_ring_num;
 	for (i = 0; i < MAX_RX_RINGS; i++) {
 		config->rx_cfg[i].num_rxd = rx_ring_sz[i] *
-		    (MAX_RXDS_PER_BLOCK + 1);
+		    (rxd_count[sp->rxd_mode] + 1);
 		config->rx_cfg[i].ring_priority = i;
 	}
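Worked example of the sizing above: with the usual default of SMALL_BLK_CNT (30) blocks per ring and 2-buffer mode (85 RxDs plus one link slot per block), num_rxd comes out to 30 * 86 = 2580 slots. A one-liner to check the arithmetic (illustration only):

#include <stdio.h>

int main(void)
{
	int rx_ring_sz_i = 30;	/* SMALL_BLK_CNT, the usual default block count */
	int rxd_count = 85;	/* per-block RxDs for 2/3-buffer mode; 127 for 1-buffer */
	printf("num_rxd = %d\n", rx_ring_sz_i * (rxd_count + 1));	/* 2580 */
	return 0;
}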
...
@@ -6090,9 +6144,6 @@ Defaulting to INTA\n");
 	DBG_PRINT(ERR_DBG, "(rev %d), Version %s",
 		  get_xena_rev_id(sp->pdev),
 		  s2io_driver_version);
-#ifdef CONFIG_2BUFF_MODE
-	DBG_PRINT(ERR_DBG, ", Buffer mode %d", 2);
-#endif
 	switch(sp->intr_type) {
 		case INTA:
 			DBG_PRINT(ERR_DBG, ", Intr type INTA");
...
@@ -6125,9 +6176,6 @@ Defaulting to INTA\n");
 	DBG_PRINT(ERR_DBG, "(rev %d), Version %s",
 		  get_xena_rev_id(sp->pdev),
 		  s2io_driver_version);
-#ifdef CONFIG_2BUFF_MODE
-	DBG_PRINT(ERR_DBG, ", Buffer mode %d", 2);
-#endif
 	switch(sp->intr_type) {
 		case INTA:
 			DBG_PRINT(ERR_DBG, ", Intr type INTA");
...
@@ -6148,6 +6196,12 @@ Defaulting to INTA\n");
 		  sp->def_mac_addr[0].mac_addr[4],
 		  sp->def_mac_addr[0].mac_addr[5]);
 	}
+	if (sp->rxd_mode == RXD_MODE_3B)
+		DBG_PRINT(ERR_DBG, "%s: 2-Buffer mode support has been "
+			  "enabled\n", dev->name);
+	if (sp->rxd_mode == RXD_MODE_3A)
+		DBG_PRINT(ERR_DBG, "%s: 3-Buffer mode support has been "
+			  "enabled\n", dev->name);

 	/* Initialize device name */
 	strcpy(sp->name, dev->name);
...

drivers/net/s2io.h
View file @ 70d9d825
...
@@ -418,7 +418,7 @@ typedef struct list_info_hold {
 	void *list_virt_addr;
 } list_info_hold_t;

-/* Rx descriptor structure */
+/* Rx descriptor structure for 1 buffer mode */
 typedef struct _RxD_t {
 	u64 Host_Control;	/* reserved for host */
 	u64 Control_1;
...
@@ -439,49 +439,54 @@ typedef struct _RxD_t {
 #define SET_RXD_MARKER vBIT(THE_RXD_MARK, 0, 2)
 #define GET_RXD_MARKER(ctrl) ((ctrl & SET_RXD_MARKER) >> 62)

-#ifndef CONFIG_2BUFF_MODE
-#define MASK_BUFFER0_SIZE       vBIT(0x3FFF,2,14)
-#define SET_BUFFER0_SIZE(val)   vBIT(val,2,14)
-#else
-#define MASK_BUFFER0_SIZE       vBIT(0xFF,2,14)
-#define MASK_BUFFER1_SIZE       vBIT(0xFFFF,16,16)
-#define MASK_BUFFER2_SIZE       vBIT(0xFFFF,32,16)
-#define SET_BUFFER0_SIZE(val)   vBIT(val,8,8)
-#define SET_BUFFER1_SIZE(val)   vBIT(val,16,16)
-#define SET_BUFFER2_SIZE(val)   vBIT(val,32,16)
-#endif
-
 #define MASK_VLAN_TAG           vBIT(0xFFFF,48,16)
 #define SET_VLAN_TAG(val)       vBIT(val,48,16)
 #define SET_NUM_TAG(val)        vBIT(val,16,32)

-#ifndef CONFIG_2BUFF_MODE
-#define RXD_GET_BUFFER0_SIZE(Control_2) (u64)((Control_2 & vBIT(0x3FFF,2,14)))
-#else
-#define RXD_GET_BUFFER0_SIZE(Control_2) (u8)((Control_2 & MASK_BUFFER0_SIZE) \
-							>> 48)
-#define RXD_GET_BUFFER1_SIZE(Control_2) (u16)((Control_2 & MASK_BUFFER1_SIZE) \
-							>> 32)
-#define RXD_GET_BUFFER2_SIZE(Control_2) (u16)((Control_2 & MASK_BUFFER2_SIZE) \
-							>> 16)
-#define BUF0_LEN	40
-#define BUF1_LEN	1
-#endif
-	u64 Buffer0_ptr;
-#ifdef CONFIG_2BUFF_MODE
-	u64 Buffer1_ptr;
-	u64 Buffer2_ptr;
-#endif
-} RxD_t;
+} RxD_t;
+
+/* Rx descriptor structure for 1 buffer mode */
+typedef struct _RxD1_t {
+	struct _RxD_t h;
+
+#define MASK_BUFFER0_SIZE_1       vBIT(0x3FFF,2,14)
+#define SET_BUFFER0_SIZE_1(val)   vBIT(val,2,14)
+#define RXD_GET_BUFFER0_SIZE_1(_Control_2) \
+	(u16)((_Control_2 & MASK_BUFFER0_SIZE_1) >> 48)
+	u64 Buffer0_ptr;
+} RxD1_t;
+
+/* Rx descriptor structure for 3 or 2 buffer mode */
+typedef struct _RxD3_t {
+	struct _RxD_t h;
+
+#define MASK_BUFFER0_SIZE_3       vBIT(0xFF,2,14)
+#define MASK_BUFFER1_SIZE_3       vBIT(0xFFFF,16,16)
+#define MASK_BUFFER2_SIZE_3       vBIT(0xFFFF,32,16)
+#define SET_BUFFER0_SIZE_3(val)   vBIT(val,8,8)
+#define SET_BUFFER1_SIZE_3(val)   vBIT(val,16,16)
+#define SET_BUFFER2_SIZE_3(val)   vBIT(val,32,16)
+#define RXD_GET_BUFFER0_SIZE_3(Control_2) \
+	(u8)((Control_2 & MASK_BUFFER0_SIZE_3) >> 48)
+#define RXD_GET_BUFFER1_SIZE_3(Control_2) \
+	(u16)((Control_2 & MASK_BUFFER1_SIZE_3) >> 32)
+#define RXD_GET_BUFFER2_SIZE_3(Control_2) \
+	(u16)((Control_2 & MASK_BUFFER2_SIZE_3) >> 16)
+
+#define BUF0_LEN	40
+#define BUF1_LEN	1
+
+	u64 Buffer0_ptr;
+	u64 Buffer1_ptr;
+	u64 Buffer2_ptr;
+} RxD3_t;
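The *_SIZE_1/*_SIZE_3 helpers above all pack fields into Control_2 through vBIT(). Assuming the usual s2io.h definition, vBIT(val, loc, sz) = (u64)val << (64 - loc - sz), a set/get pair round-trips like this (userspace sketch, not driver code; check the header for the authoritative macro):

#include <stdio.h>
#include <stdint.h>

#define vBIT(val, loc, sz)	(((uint64_t)(val)) << (64 - (loc) - (sz)))	/* assumed definition */
#define MASK_BUFFER0_SIZE_3	vBIT(0xFF, 2, 14)
#define SET_BUFFER0_SIZE_3(val)	vBIT(val, 8, 8)
#define RXD_GET_BUFFER0_SIZE_3(c2) \
	((uint8_t)(((c2) & MASK_BUFFER0_SIZE_3) >> 48))

int main(void)
{
	uint64_t control_2 = SET_BUFFER0_SIZE_3(40);	/* BUF0_LEN */
	printf("Control_2 = %#llx, buf0 size = %u\n",
	       (unsigned long long)control_2,
	       RXD_GET_BUFFER0_SIZE_3(control_2));	/* size = 40 */
	return 0;
}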
 /* Structure that represents the Rx descriptor block which contains
  * 128 Rx descriptors.
  */
-#ifndef CONFIG_2BUFF_MODE
 typedef struct _RxD_block {
-#define MAX_RXDS_PER_BLOCK	127
-	RxD_t rxd[MAX_RXDS_PER_BLOCK];
+#define MAX_RXDS_PER_BLOCK_1	127
+	RxD1_t rxd[MAX_RXDS_PER_BLOCK_1];

 	u64 reserved_0;
 #define END_OF_BLOCK    0xFEFFFFFFFFFFFFFFULL
...
@@ -492,18 +497,13 @@ typedef struct _RxD_block {
 					 * the upper 32 bits should
 					 * be 0 */
 } RxD_block_t;
-#else
-typedef struct _RxD_block {
-#define MAX_RXDS_PER_BLOCK 85
-	RxD_t rxd[MAX_RXDS_PER_BLOCK];
-
-#define END_OF_BLOCK    0xFEFFFFFFFFFFFFFFULL
-	u64 reserved_1;		/* 0xFEFFFFFFFFFFFFFF to mark last Rxd
-				 * in this blk */
-	u64 pNext_RxD_Blk_physical;	/* Phy ponter to next blk. */
-} RxD_block_t;

 #define SIZE_OF_BLOCK	4096

+#define RXD_MODE_1	0
+#define RXD_MODE_3A	1
+#define RXD_MODE_3B	2
+
 /* Structure to hold virtual addresses of Buf0 and Buf1 in
  * 2buf mode. */
 typedef struct bufAdd {
...
@@ -512,7 +512,6 @@ typedef struct bufAdd {
 	void *ba_0;
 	void *ba_1;
 } buffAdd_t;
-#endif

 /* Structure which stores all the MAC control parameters */
...
@@ -539,10 +538,17 @@ typedef struct {
 typedef tx_curr_get_info_t tx_curr_put_info_t;

+typedef struct rxd_info {
+	void *virt_addr;
+	dma_addr_t dma_addr;
+} rxd_info_t;
+
 /* Structure that holds the Phy and virt addresses of the Blocks */
 typedef struct rx_block_info {
-	RxD_t *block_virt_addr;
+	void *block_virt_addr;
 	dma_addr_t block_dma_addr;
+	rxd_info_t *rxds;
 } rx_block_info_t;

 /* pre declaration of the nic structure */
...
@@ -578,10 +584,8 @@ typedef struct ring_info {
 	int put_pos;
 #endif
-#ifdef CONFIG_2BUFF_MODE
 	/* Buffer Address store. */
 	buffAdd_t **ba;
-#endif
 	nic_t *nic;
 } ring_info_t;
...
@@ -647,8 +651,6 @@ typedef struct {
 /* Default Tunable parameters of the NIC. */
 #define DEFAULT_FIFO_LEN	4096
-#define SMALL_RXD_CNT	30 * (MAX_RXDS_PER_BLOCK+1)
-#define LARGE_RXD_CNT	100 * (MAX_RXDS_PER_BLOCK+1)
 #define SMALL_BLK_CNT	30
 #define LARGE_BLK_CNT	100
...
@@ -678,6 +680,7 @@ struct msix_info_st {
 /* Structure representing one instance of the NIC */
 struct s2io_nic {
+	int rxd_mode;
 #ifdef CONFIG_S2IO_NAPI
 	/*
 	 * Count of packets to be processed in a given iteration, it will be indicated
...

drivers/net/wireless/airo.c
View file @ 70d9d825
...
@@ -2040,7 +2040,7 @@ static int mpi_send_packet (struct net_device *dev)
 	return 1;
 }

-static void get_tx_error(struct airo_info *ai, u32 fid)
+static void get_tx_error(struct airo_info *ai, s32 fid)
 {
 	u16 status;
...

include/linux/phy.h
View file @ 70d9d825
...
@@ -72,6 +72,9 @@ struct mii_bus {
 	/* list of all PHYs on bus */
 	struct phy_device *phy_map[PHY_MAX_ADDR];

+	/* Phy addresses to be ignored when probing */
+	u32 phy_mask;
+
 	/* Pointer to an array of interrupts, each PHY's
 	 * interrupt at the index matching its address */
 	int *irq;
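phy_mask is consumed during bus scan in drivers/net/phy/mdio_bus.c: any PHY address whose bit is set is skipped, per the comment added above. A userspace illustration of that mask test (a MAC driver would simply set bus->phy_mask before registering its mii_bus):

#include <stdio.h>
#include <stdint.h>

#define PHY_MAX_ADDR 32

int main(void)
{
	/* probe only addresses 0 and 1: set a bit for every address to ignore */
	uint32_t phy_mask = ~(uint32_t)((1u << 0) | (1u << 1));

	for (int i = 0; i < PHY_MAX_ADDR; i++)
		if ((phy_mask & (1u << i)) == 0)
			printf("would probe PHY at address %d\n", i);
	return 0;
}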
...