Commit 0ab2d668 authored by Andrew Morton's avatar Andrew Morton Committed by Linus Torvalds

[PATCH] IPMI driver updates

From: Corey Minyard <minyard@acm.org>

- Add support for messaging through an IPMI LAN interface, which is
  required for some system software that already exists on other IPMI
  drivers.  It also does some renaming and a lot of little cleanups.

- Add the "System Interface" driver.  The previous driver for system
  interfaces only supported the KCS interface, this driver supports all
  system interfaces defined in the IPMI standard.  It also does a much better
  job of handling ACPI and SMBIOS tables for detecting IPMI system
  interfaces.
parent 87c22e84
......@@ -22,6 +22,58 @@ are not familiar with IPMI itself, see the web site at
http://www.intel.com/design/servers/ipmi/index.htm. IPMI is a big
subject and I can't cover it all here!
Configuration
-------------
The Linux IPMI driver is modular, which means you have to pick several
things to have it work right, depending on your hardware. Most of
these are available in the 'Character Devices' menu.
No matter what, you must pick 'IPMI top-level message handler' to use
IPMI. What you do beyond that depends on your needs and hardware.
The message handler does not provide any user-level interfaces.
Kernel code (like the watchdog) can still use it. If you need access
from userland, you need to select 'Device interface for IPMI' if you
want access through a device driver. Another interface is also
available; you may select 'IPMI sockets' in the 'Networking Support'
main menu. This provides a socket interface to IPMI. You may select
both of these at the same time; they will both work together.
The driver interface depends on your hardware. If you have a board
with a standard interface (these will generally be either "KCS",
"SMIC", or "BT"; consult your hardware manual), choose the 'IPMI SI
handler' option. A driver also exists for direct I2C access to the
IPMI management controller. Some boards support this, but it is
unknown if it will work on every board. For this, choose 'IPMI SMBus
handler', but be ready to try to do some figuring to see if it will
work.
There is also a KCS-only driver interface supplied, but it is
deprecated in favor of the SI interface.
You should generally enable ACPI on your system, as systems with IPMI
should have ACPI tables describing them.
If you have a standard interface and the board manufacturer has done
their job correctly, the IPMI controller should be automatically
detected (via ACPI or SMBIOS tables) and should just work. Sadly, many
boards do not have this information. The driver attempts standard
defaults, but they may not work. If you fall into this situation, you
need to read the section below named 'The SI Driver' on how to
hand-configure your system.
IPMI defines a standard watchdog timer. You can enable this with the
'IPMI Watchdog Timer' config option. If you compile the driver into
the kernel, then via a kernel command-line option you can have the
watchdog timer start as soon as it initializes. It also has a lot
of other options; see the 'Watchdog' section below for more details.
Note that you can also have the watchdog continue to run if it is
closed (by default it is disabled on close). Go into the 'Watchdog
Cards' menu, enable 'Watchdog Timer Support', and enable the option
'Disable watchdog shutdown on close'.
Basic Design
------------
......@@ -41,18 +93,30 @@ ipmi_devintf - This provides a userland IOCTL interface for the IPMI
driver, each open file for this device ties in to the message handler
as an IPMI user.
ipmi_kcs_drv - A driver for the KCS SMI. Most systems have a KCS
interface for IPMI.
ipmi_si - A driver for various system interfaces. This supports
KCS, SMIC, and may support BT in the future. Unless you have your own
custom interface, you probably need to use this.
ipmi_smb - A driver for accessing BMCs on the SMBus. It uses the
I2C kernel driver's SMBus interfaces to send and receive IPMI messages
over the SMBus.
af_ipmi - A network socket interface to IPMI. This doesn't take up
a character device in your system.
Note that the KCS-only interface has been removed.
Much documentation for the interface is in the include files. The
IPMI include files are:
ipmi.h - Contains the user interface and IOCTL interface for IPMI.
net/af_ipmi.h - Contains the socket interface.
ipmi_smi.h - Contains the interface for SMI drivers to use.
linux/ipmi.h - Contains the user interface and IOCTL interface for IPMI.
ipmi_msgdefs.h - General definitions for base IPMI messaging.
linux/ipmi_smi.h - Contains the interface for system management interfaces
(things that interface to IPMI controllers) to use.
linux/ipmi_msgdefs.h - General definitions for base IPMI messaging.
Addressing
......@@ -260,70 +324,131 @@ they register with the message handler. They are generally assigned
in the order they register, although if an SMI unregisters and then
another one registers, all bets are off.
The ipmi_smi.h defines the interface for SMIs, see that for more
details.
The ipmi_smi.h defines the interface for management interfaces, see
that for more details.
The KCS Driver
--------------
The SI Driver
-------------
The KCS driver allows up to 4 KCS interfaces to be configured in the
system. By default, the driver will register one KCS interface at the
spec-specified I/O port 0xca2 without interrupts. You can change this
at module load time (for a module) with:
The SI driver allows up to 4 KCS or SMIC interfaces to be configured
in the system. By default, the driver scans the ACPI tables for
interfaces, and if it doesn't find any it will attempt to register one
KCS interface at the spec-specified I/O port 0xca2 without interrupts.
You can change this at module load time (for a module) with:
modprobe ipmi_si.o type=<type1>,<type2>....
ports=<port1>,<port2>... addrs=<addr1>,<addr2>...
irqs=<irq1>,<irq2>... trydefaults=[0|1]
Each of these except trydefaults is a list: the first item for the
first interface, the second item for the second interface, etc.
insmod ipmi_kcs_drv.o kcs_ports=<port1>,<port2>... kcs_addrs=<addr1>,<addr2>
kcs_irqs=<irq1>,<irq2>... kcs_trydefaults=[0|1]
The type may be either "kcs", "smic", or "bt". If you leave it blank, it
defaults to "kcs".
The KCS driver supports two types of interfaces, ports (for I/O port
based KCS interfaces) and memory addresses (for KCS interfaces in
memory). The driver will support both of them simultaneously; setting
the port to zero (or just not specifying it) will allow the memory
address to be used. The port will override the memory address if it
is specified and non-zero. kcs_trydefaults sets whether the standard
IPMI interface at 0xca2 and any interfaces specified by ACPI are
tried. By default, the driver tries them; set this value to zero to
turn this off.
If you specify addrs as non-zero for an interface, the driver will
use the memory address given as the address of the device. This
overrides ports.
If you specify ports as non-zero for an interface, the driver will
use the I/O port given as the device address.
If you specify irqs as non-zero for an interface, the driver will
attempt to use the given interrupt for the device.
trydefaults sets whether the standard IPMI interface at 0xca2 and
any interfaces specified by ACPI are tried. By default, the driver
tries them; set this value to zero to turn this off.
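As a concrete example of the hand-configuration described above (the
values here are illustrative, not from the source), a single polled KCS
interface at the standard port with default/ACPI probing disabled could
be set up with:

```shell
# Illustrative values only: one KCS interface at the spec-specified
# I/O port, polled (no irq), with default/ACPI probing disabled.
modprobe ipmi_si type=kcs ports=0xca2 irqs=0 trydefaults=0
```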
When compiled into the kernel, the addresses can be specified on the
kernel command line as:
ipmi_kcs=<bmc1>:<irq1>,<bmc2>:<irq2>....,[nodefault]
ipmi_si.type=<type1>,<type2>...
ipmi_si.ports=<port1>,<port2>... ipmi_si.addrs=<addr1>,<addr2>...
ipmi_si.irqs=<irq1>,<irq2>... ipmi_si.trydefaults=[0|1]
Each <bmcx> value is either "p<port>" or "m<addr>" for port or memory
addresses. So for instance, a KCS interface at port 0xca2 using
interrupt 9 and a memory interface at address 0xf9827341 with no
interrupt would be specified "ipmi_kcs=p0xca2:9,m0xf9827341".
If you specify zero for an irq or don't specify it, the driver will
run polled unless the software can detect the interrupt to use in the
ACPI tables.
It works the same as the module parameters of the same names.
By default, the driver will attempt to detect a KCS device at the
spec-specified 0xca2 address and any address specified by ACPI. If
you want to turn this off, use the "nodefault" option.
By default, the driver will attempt to detect any device specified by
ACPI, and if it finds none of those, a KCS device at the spec-specified
0xca2. If you want to turn this off, set the "trydefaults" option to
false.
If you have high-res timers compiled into the kernel, the driver will
use them to provide much better performance. Note that if you do not
have high-res timers enabled in the kernel and you don't have
interrupts enabled, the driver will run VERY slowly. Don't blame me,
the KCS interface sucks.
these interfaces suck.
The SMBus Driver
----------------
The SMBus driver allows up to 4 SMBus devices to be configured in the
system. By default, the driver will register any SMBus interfaces it finds
in the I2C address range of 0x20 to 0x4f on any adapter. You can change this
at module load time (for a module) with:
modprobe ipmi_smb.o
addr=<adapter1>,<i2caddr1>[,<adapter2>,<i2caddr2>[,...]]
dbg=<flags1>,<flags2>...
[defaultprobe=0] [dbg_probe=1]
The addresses are specified in pairs, the first is the adapter ID and the
second is the I2C address on that adapter.
The debug flags are bit flags for each BMC found, they are:
IPMI messages: 1, driver state: 2, timing: 4, I2C probe: 8
Setting defaultprobe to zero disables the default probing of SMBus
interfaces at address range 0x20 to 0x4f. This means that only the
BMCs specified on the addr line will be detected.
Setting dbg_probe to 1 will enable debugging of the probing and
detection process for BMCs on the SMBusses.
Discovering the IPMI compliant BMC on the SMBus can cause devices
on the I2C bus to fail. The SMBus driver writes a "Get Device ID" IPMI
message as a block write to the I2C bus and waits for a response.
This action can be detrimental to some I2C devices. It is highly recommended
that the known I2C address be given to the SMBus driver in the addr
parameter. The default address range will not be used when an addr
parameter is provided.
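Following that recommendation, a hand-configured load naming the known
BMC address (illustrative values, not from the source) might look like:

```shell
# Illustrative values only: probe exactly one BMC at I2C address 0x42
# on adapter 0, and skip the default 0x20-0x4f scan.
modprobe ipmi_smb addr=0,0x42 defaultprobe=0
```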
When compiled into the kernel, the addresses can be specified on the
kernel command line as:
ipmi_smb.addr=<adapter1>,<i2caddr1>[,<adapter2>,<i2caddr2>[,...]]
ipmi_smb.dbg=<flags1>,<flags2>...
ipmi_smb.defaultprobe=0 ipmi_smb.dbg_probe=1
These are the same options as on the module command line.
Note that you might need some I2C changes if CONFIG_IPMI_PANIC_EVENT
is enabled along with this, so the I2C driver knows to run to
completion when sending a panic event.
Other Pieces
------------
Watchdog
--------
A watchdog timer is provided that implements the Linux-standard
watchdog timer interface. It has three module parameters that can be
used to control it:
insmod ipmi_watchdog timeout=<t> pretimeout=<t> action=<action type>
preaction=<preaction type> preop=<preop type>
modprobe ipmi_watchdog timeout=<t> pretimeout=<t> action=<action type>
preaction=<preaction type> preop=<preop type> start_now=x
The timeout is the number of seconds to the action, and the pretimeout
is the number of seconds before the reset that the pre-timeout panic will
occur (if pretimeout is zero, then pretimeout will not be enabled).
occur (if pretimeout is zero, then pretimeout will not be enabled). Note
that the pretimeout is the time before the final timeout. So if the
timeout is 50 seconds and the pretimeout is 10 seconds, then the pretimeout
will occur at 40 seconds (10 seconds before the timeout).
The action may be "reset", "power_cycle", or "power_off", and
specifies what to do when the timer times out, and defaults to
......@@ -344,16 +469,19 @@ When preop is set to "preop_give_data", one byte comes ready to read
on the device when the pretimeout occurs. Select and fasync work on
the device, as well.
If start_now is set to 1, the watchdog timer will start running as
soon as the driver is loaded.
When compiled into the kernel, the kernel command line is available
for configuring the watchdog:
ipmi_wdog=<timeout>[,<pretimeout>[,<option>[,<options>....]]]
ipmi_watchdog.timeout=<t> ipmi_watchdog.pretimeout=<t>
ipmi_watchdog.action=<action type>
ipmi_watchdog.preaction=<preaction type>
ipmi_watchdog.preop=<preop type>
ipmi_watchdog.start_now=x
The options are the actions and preactions above (if an option
controlling the same thing is specified twice, the last is taken). An
option "start_now" is also available; if included, the watchdog will
start running immediately when all the drivers are ready; it doesn't
need a user hooked up to start it.
The options are the same as the module parameter options.
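For instance (an illustrative combination, not from the source), a
compiled-in watchdog set to reset after 60 seconds with a 10-second
pretimeout, started immediately, could be configured with a kernel
command line fragment like:

```shell
# Hypothetical kernel command line fragment; parameter names as above.
ipmi_watchdog.timeout=60 ipmi_watchdog.pretimeout=10 ipmi_watchdog.action=reset ipmi_watchdog.start_now=1
```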
The watchdog will panic and start a 120 second reset timeout if it
gets a pre-action. During a panic or a reboot, the watchdog will
......
......@@ -43,11 +43,13 @@ config IPMI_DEVICE_INTERFACE
This provides an IOCTL interface to the IPMI message handler so
userland processes may use IPMI. It supports poll() and select().
config IPMI_KCS
tristate 'IPMI KCS handler'
config IPMI_SI
tristate 'IPMI System Interface handler'
depends on IPMI_HANDLER
help
Provides a driver for a KCS-style interface to a BMC.
Provides a driver for System Interfaces (KCS, SMIC, BT).
Currently, only KCS and SMIC are supported. If
you are using IPMI, you should probably say "y" here.
config IPMI_WATCHDOG
tristate 'IPMI Watchdog Timer'
......
......@@ -2,12 +2,13 @@
# Makefile for the ipmi drivers.
#
ipmi_kcs_drv-objs := ipmi_kcs_sm.o ipmi_kcs_intf.o
ipmi_si-objs := ipmi_si_intf.o ipmi_kcs_sm.o ipmi_smic_sm.o ipmi_bt_sm.o
obj-$(CONFIG_IPMI_HANDLER) += ipmi_msghandler.o
obj-$(CONFIG_IPMI_DEVICE_INTERFACE) += ipmi_devintf.o
obj-$(CONFIG_IPMI_KCS) += ipmi_kcs_drv.o
obj-$(CONFIG_IPMI_SI) += ipmi_si.o
obj-$(CONFIG_IPMI_WATCHDOG) += ipmi_watchdog.o
ipmi_kcs_drv.o: $(ipmi_kcs_drv-objs)
$(LD) -r -o $@ $(ipmi_kcs_drv-objs)
ipmi_si.o: $(ipmi_si-objs)
$(LD) -r -o $@ $(ipmi_si-objs)
/*
* ipmi_bt_sm.c
*
* The state machine for an Open IPMI BT sub-driver under ipmi_si.c, part
* of the driver architecture at http://sourceforge.net/project/openipmi
*
* Author: Rocky Craig <first.last@hp.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
* OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA. */
#include <linux/kernel.h> /* For printk. */
#include <linux/string.h>
#include <linux/ipmi_msgdefs.h> /* for completion codes */
#include "ipmi_si_sm.h"
#define IPMI_BT_VERSION "v31"
static int bt_debug = 0x00; /* Production value 0, see following flags */
#define BT_DEBUG_ENABLE 1
#define BT_DEBUG_MSG 2
#define BT_DEBUG_STATES 4
/* Typical "Get BT Capabilities" values are 2-3 retries, 5-10 seconds,
and 64 byte buffers. However, one HP implementation wants 255 bytes of
buffer (with a documented message of 160 bytes) so go for the max.
Since the Open IPMI architecture is single-message oriented at this
stage, the queue depth of BT is of no concern. */
#define BT_NORMAL_TIMEOUT 2000000 /* 2 seconds, in microseconds */
#define BT_RETRY_LIMIT 2
#define BT_RESET_DELAY 6000000 /* 6 seconds after warm reset */
enum bt_states {
BT_STATE_IDLE,
BT_STATE_XACTION_START,
BT_STATE_WRITE_BYTES,
BT_STATE_WRITE_END,
BT_STATE_WRITE_CONSUME,
BT_STATE_B2H_WAIT,
BT_STATE_READ_END,
BT_STATE_RESET1, /* These must come last */
BT_STATE_RESET2,
BT_STATE_RESET3,
BT_STATE_RESTART,
BT_STATE_HOSED
};
struct si_sm_data {
enum bt_states state;
enum bt_states last_state; /* assist printing and resets */
unsigned char seq; /* BT sequence number */
struct si_sm_io *io;
unsigned char write_data[IPMI_MAX_MSG_LENGTH];
int write_count;
unsigned char read_data[IPMI_MAX_MSG_LENGTH];
int read_count;
int truncated;
long timeout;
unsigned int error_retries; /* end of "common" fields */
int nonzero_status; /* hung BMCs stay all 0 */
};
#define BT_CLR_WR_PTR 0x01 /* See IPMI 1.5 table 11.6.4 */
#define BT_CLR_RD_PTR 0x02
#define BT_H2B_ATN 0x04
#define BT_B2H_ATN 0x08
#define BT_SMS_ATN 0x10
#define BT_OEM0 0x20
#define BT_H_BUSY 0x40
#define BT_B_BUSY 0x80
/* Some bits are toggled on each write: write once to set it, once
more to clear it; writing a zero does nothing. To absolutely
clear it, check its state and write if set. This avoids the "get
current then use as mask" scheme to modify one bit. Note that the
variable "bt" is hardcoded into these macros. */
#define BT_STATUS bt->io->inputb(bt->io, 0)
#define BT_CONTROL(x) bt->io->outputb(bt->io, 0, x)
#define BMC2HOST bt->io->inputb(bt->io, 1)
#define HOST2BMC(x) bt->io->outputb(bt->io, 1, x)
#define BT_INTMASK_R bt->io->inputb(bt->io, 2)
#define BT_INTMASK_W(x) bt->io->outputb(bt->io, 2, x)
/* Convenience routines for debugging. These are not multi-open safe!
Note the macros have hardcoded variables in them. */
static char *state2txt(unsigned char state)
{
switch (state) {
case BT_STATE_IDLE: return("IDLE");
case BT_STATE_XACTION_START: return("XACTION");
case BT_STATE_WRITE_BYTES: return("WR_BYTES");
case BT_STATE_WRITE_END: return("WR_END");
case BT_STATE_WRITE_CONSUME: return("WR_CONSUME");
case BT_STATE_B2H_WAIT: return("B2H_WAIT");
case BT_STATE_READ_END: return("RD_END");
case BT_STATE_RESET1: return("RESET1");
case BT_STATE_RESET2: return("RESET2");
case BT_STATE_RESET3: return("RESET3");
case BT_STATE_RESTART: return("RESTART");
case BT_STATE_HOSED: return("HOSED");
}
return("BAD STATE");
}
#define STATE2TXT state2txt(bt->state)
static char *status2txt(unsigned char status, char *buf)
{
strcpy(buf, "[ ");
if (status & BT_B_BUSY) strcat(buf, "B_BUSY ");
if (status & BT_H_BUSY) strcat(buf, "H_BUSY ");
if (status & BT_OEM0) strcat(buf, "OEM0 ");
if (status & BT_SMS_ATN) strcat(buf, "SMS ");
if (status & BT_B2H_ATN) strcat(buf, "B2H ");
if (status & BT_H2B_ATN) strcat(buf, "H2B ");
strcat(buf, "]");
return buf;
}
#define STATUS2TXT(buf) status2txt(status, buf)
/* This will be called from within this module on a hosed condition */
#define FIRST_SEQ 0
static unsigned int bt_init_data(struct si_sm_data *bt, struct si_sm_io *io)
{
bt->state = BT_STATE_IDLE;
bt->last_state = BT_STATE_IDLE;
bt->seq = FIRST_SEQ;
bt->io = io;
bt->write_count = 0;
bt->read_count = 0;
bt->error_retries = 0;
bt->nonzero_status = 0;
bt->truncated = 0;
bt->timeout = BT_NORMAL_TIMEOUT;
return 3; /* We claim 3 bytes of space; ought to check SPMI table */
}
static int bt_start_transaction(struct si_sm_data *bt,
unsigned char *data,
unsigned int size)
{
unsigned int i;
if ((size < 2) || (size > IPMI_MAX_MSG_LENGTH)) return -1;
if ((bt->state != BT_STATE_IDLE) && (bt->state != BT_STATE_HOSED))
return -2;
if (bt_debug & BT_DEBUG_MSG) {
printk(KERN_WARNING "+++++++++++++++++++++++++++++++++++++\n");
printk(KERN_WARNING "BT: write seq=0x%02X:", bt->seq);
for (i = 0; i < size; i ++) printk (" %02x", data[i]);
printk("\n");
}
bt->write_data[0] = size + 1; /* all data plus seq byte */
bt->write_data[1] = *data; /* NetFn/LUN */
bt->write_data[2] = bt->seq;
memcpy(bt->write_data + 3, data + 1, size - 1);
bt->write_count = size + 2;
bt->error_retries = 0;
bt->nonzero_status = 0;
bt->read_count = 0;
bt->truncated = 0;
bt->state = BT_STATE_XACTION_START;
bt->last_state = BT_STATE_IDLE;
bt->timeout = BT_NORMAL_TIMEOUT;
return 0;
}
/* After the upper state machine has been told SI_SM_TRANSACTION_COMPLETE
it calls this. Strip out the length and seq bytes. */
static int bt_get_result(struct si_sm_data *bt,
unsigned char *data,
unsigned int length)
{
int i, msg_len;
msg_len = bt->read_count - 2; /* account for length & seq */
/* Always NetFn, Cmd, cCode */
if (msg_len < 3 || msg_len > IPMI_MAX_MSG_LENGTH) {
printk(KERN_WARNING "BT results: bad msg_len = %d\n", msg_len);
data[0] = bt->write_data[1] | 0x4; /* Kludge a response */
data[1] = bt->write_data[3];
data[2] = IPMI_ERR_UNSPECIFIED;
msg_len = 3;
} else {
data[0] = bt->read_data[1];
data[1] = bt->read_data[3];
if (length < msg_len) bt->truncated = 1;
if (bt->truncated) { /* can be set in read_all_bytes() */
data[2] = IPMI_ERR_MSG_TRUNCATED;
msg_len = 3;
} else memcpy(data + 2, bt->read_data + 4, msg_len - 2);
if (bt_debug & BT_DEBUG_MSG) {
printk (KERN_WARNING "BT: res (raw)");
for (i = 0; i < msg_len; i++) printk(" %02x", data[i]);
printk ("\n");
}
}
bt->read_count = 0; /* paranoia */
return msg_len;
}
/* This bit's functionality is optional */
#define BT_BMC_HWRST 0x80
static void reset_flags(struct si_sm_data *bt)
{
if (BT_STATUS & BT_H_BUSY) BT_CONTROL(BT_H_BUSY);
if (BT_STATUS & BT_B_BUSY) BT_CONTROL(BT_B_BUSY);
BT_CONTROL(BT_CLR_WR_PTR);
BT_CONTROL(BT_SMS_ATN);
BT_INTMASK_W(BT_BMC_HWRST);
#ifdef DEVELOPMENT_ONLY_NOT_FOR_PRODUCTION
if (BT_STATUS & BT_B2H_ATN) {
int i;
BT_CONTROL(BT_H_BUSY);
BT_CONTROL(BT_B2H_ATN);
BT_CONTROL(BT_CLR_RD_PTR);
for (i = 0; i < IPMI_MAX_MSG_LENGTH + 2; i++) BMC2HOST;
BT_CONTROL(BT_H_BUSY);
}
#endif
}
static inline void write_all_bytes(struct si_sm_data *bt)
{
int i;
if (bt_debug & BT_DEBUG_MSG) {
printk(KERN_WARNING "BT: write %d bytes seq=0x%02X",
bt->write_count, bt->seq);
for (i = 0; i < bt->write_count; i++)
printk (" %02x", bt->write_data[i]);
printk ("\n");
}
for (i = 0; i < bt->write_count; i++) HOST2BMC(bt->write_data[i]);
}
static inline int read_all_bytes(struct si_sm_data *bt)
{
unsigned char i;
bt->read_data[0] = BMC2HOST;
bt->read_count = bt->read_data[0];
if (bt_debug & BT_DEBUG_MSG)
printk(KERN_WARNING "BT: read %d bytes:", bt->read_count);
/* minimum: length, NetFn, Seq, Cmd, cCode == 5 total, or 4 more
following the length byte. */
if (bt->read_count < 4 || bt->read_count >= IPMI_MAX_MSG_LENGTH) {
if (bt_debug & BT_DEBUG_MSG)
printk("bad length %d\n", bt->read_count);
bt->truncated = 1;
return 1; /* let next XACTION START clean it up */
}
for (i = 1; i <= bt->read_count; i++) bt->read_data[i] = BMC2HOST;
bt->read_count++; /* account for the length byte */
if (bt_debug & BT_DEBUG_MSG) {
for (i = 0; i < bt->read_count; i++)
printk (" %02x", bt->read_data[i]);
printk ("\n");
}
if (bt->seq != bt->write_data[2]) /* idiot check */
printk(KERN_WARNING "BT: internal error: sequence mismatch\n");
/* per the spec, the (NetFn, Seq, Cmd) tuples should match */
if ((bt->read_data[3] == bt->write_data[3]) && /* Cmd */
(bt->read_data[2] == bt->write_data[2]) && /* Sequence */
((bt->read_data[1] & 0xF8) == (bt->write_data[1] & 0xF8)))
return 1;
if (bt_debug & BT_DEBUG_MSG) printk(KERN_WARNING "BT: bad packet: "
"want 0x(%02X, %02X, %02X) got (%02X, %02X, %02X)\n",
bt->write_data[1], bt->write_data[2], bt->write_data[3],
bt->read_data[1], bt->read_data[2], bt->read_data[3]);
return 0;
}
/* Modifies bt->state appropriately, need to get into the bt_event() switch */
static void error_recovery(struct si_sm_data *bt, char *reason)
{
unsigned char status;
char buf[40]; /* For getting status */
bt->timeout = BT_NORMAL_TIMEOUT; /* various places want to retry */
status = BT_STATUS;
printk(KERN_WARNING "BT: %s in %s %s ", reason, STATE2TXT,
STATUS2TXT(buf));
(bt->error_retries)++;
if (bt->error_retries > BT_RETRY_LIMIT) {
printk("retry limit (%d) exceeded\n", BT_RETRY_LIMIT);
bt->state = BT_STATE_HOSED;
if (!bt->nonzero_status)
printk(KERN_ERR "IPMI: BT stuck, try power cycle\n");
else if (bt->seq == FIRST_SEQ + BT_RETRY_LIMIT) {
/* most likely during insmod */
printk(KERN_WARNING "IPMI: BT reset (takes 5 secs)\n");
bt->state = BT_STATE_RESET1;
}
return;
}
/* Sometimes the BMC queues get in an "off-by-one" state...*/
if ((bt->state == BT_STATE_B2H_WAIT) && (status & BT_B2H_ATN)) {
printk("retry B2H_WAIT\n");
return;
}
printk("restart command\n");
bt->state = BT_STATE_RESTART;
}
/* Check the status and (possibly) advance the BT state machine. The
default return is SI_SM_CALL_WITH_DELAY. */
static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
{
unsigned char status;
char buf[40]; /* For getting status */
int i;
status = BT_STATUS;
bt->nonzero_status |= status;
if ((bt_debug & BT_DEBUG_STATES) && (bt->state != bt->last_state))
printk(KERN_WARNING "BT: %s %s TO=%ld - %ld \n",
STATE2TXT,
STATUS2TXT(buf),
bt->timeout,
time);
bt->last_state = bt->state;
if (bt->state == BT_STATE_HOSED) return SI_SM_HOSED;
if (bt->state != BT_STATE_IDLE) { /* do timeout test */
/* Certain states, on error conditions, can lock up a CPU
because they are effectively in an infinite loop with
CALL_WITHOUT_DELAY (right back here with time == 0).
Prevent infinite lockup by ALWAYS decrementing timeout. */
/* FIXME: bt_event is sometimes called with time > BT_NORMAL_TIMEOUT
(noticed in ipmi_smic_sm.c January 2004) */
if ((time <= 0) || (time >= BT_NORMAL_TIMEOUT)) time = 100;
bt->timeout -= time;
if ((bt->timeout < 0) && (bt->state < BT_STATE_RESET1)) {
error_recovery(bt, "timed out");
return SI_SM_CALL_WITHOUT_DELAY;
}
}
switch (bt->state) {
case BT_STATE_IDLE: /* check for asynchronous messages */
if (status & BT_SMS_ATN) {
BT_CONTROL(BT_SMS_ATN); /* clear it */
return SI_SM_ATTN;
}
return SI_SM_IDLE;
case BT_STATE_XACTION_START:
if (status & BT_H_BUSY) {
BT_CONTROL(BT_H_BUSY);
break;
}
if (status & BT_B2H_ATN) break;
bt->state = BT_STATE_WRITE_BYTES;
return SI_SM_CALL_WITHOUT_DELAY; /* for logging */
case BT_STATE_WRITE_BYTES:
if (status & (BT_B_BUSY | BT_H2B_ATN)) break;
BT_CONTROL(BT_CLR_WR_PTR);
write_all_bytes(bt);
BT_CONTROL(BT_H2B_ATN); /* clears too fast to catch? */
bt->state = BT_STATE_WRITE_CONSUME;
return SI_SM_CALL_WITHOUT_DELAY; /* it MIGHT sail through */
case BT_STATE_WRITE_CONSUME: /* BMCs usually blow right thru here */
if (status & (BT_H2B_ATN | BT_B_BUSY)) break;
bt->state = BT_STATE_B2H_WAIT;
/* fall through with status */
/* Stay in BT_STATE_B2H_WAIT until a packet matches. However, spinning
hard here, constantly reading status, seems to hold off the
generation of B2H_ATN so ALWAYS return CALL_WITH_DELAY. */
case BT_STATE_B2H_WAIT:
if (!(status & BT_B2H_ATN)) break;
/* Assume ordered, uncached writes: no need to wait */
if (!(status & BT_H_BUSY)) BT_CONTROL(BT_H_BUSY); /* set */
BT_CONTROL(BT_B2H_ATN); /* clear it, ACK to the BMC */
BT_CONTROL(BT_CLR_RD_PTR); /* reset the queue */
i = read_all_bytes(bt);
BT_CONTROL(BT_H_BUSY); /* clear */
if (!i) break; /* Try this state again */
bt->state = BT_STATE_READ_END;
return SI_SM_CALL_WITHOUT_DELAY; /* for logging */
case BT_STATE_READ_END:
/* I could wait on BT_H_BUSY to go clear for a truly clean
exit. However, this is already done in XACTION_START
and the (possible) extra loop/status/possible wait affects
performance. So, as long as it works, just ignore H_BUSY */
#ifdef MAKE_THIS_TRUE_IF_NECESSARY
if (status & BT_H_BUSY) break;
#endif
bt->seq++;
bt->state = BT_STATE_IDLE;
return SI_SM_TRANSACTION_COMPLETE;
case BT_STATE_RESET1:
reset_flags(bt);
bt->timeout = BT_RESET_DELAY;
bt->state = BT_STATE_RESET2;
break;
case BT_STATE_RESET2: /* Send a soft reset */
BT_CONTROL(BT_CLR_WR_PTR);
HOST2BMC(3); /* number of bytes following */
HOST2BMC(0x18); /* NetFn/LUN == Application, LUN 0 */
HOST2BMC(42); /* Sequence number */
HOST2BMC(3); /* Cmd == Soft reset */
BT_CONTROL(BT_H2B_ATN);
bt->state = BT_STATE_RESET3;
break;
case BT_STATE_RESET3:
if (bt->timeout > 0) return SI_SM_CALL_WITH_DELAY;
bt->state = BT_STATE_RESTART; /* printk in debug modes */
break;
case BT_STATE_RESTART: /* don't reset retries! */
bt->write_data[2] = ++bt->seq;
bt->read_count = 0;
bt->nonzero_status = 0;
bt->timeout = BT_NORMAL_TIMEOUT;
bt->state = BT_STATE_XACTION_START;
break;
default: /* HOSED is supposed to be caught much earlier */
error_recovery(bt, "internal logic error");
break;
}
return SI_SM_CALL_WITH_DELAY;
}
static int bt_detect(struct si_sm_data *bt)
{
/* It's impossible for the BT status and interrupt registers to be
all 1's, (assuming a properly functioning, self-initialized BMC)
but that's what you get from reading a bogus address, so we
test that first. The calling routine uses negative logic. */
if ((BT_STATUS == 0xFF) && (BT_INTMASK_R == 0xFF)) return 1;
reset_flags(bt);
return 0;
}
static void bt_cleanup(struct si_sm_data *bt)
{
}
static int bt_size(void)
{
return sizeof(struct si_sm_data);
}
struct si_sm_handlers bt_smi_handlers =
{
.version = IPMI_BT_VERSION,
.init_data = bt_init_data,
.start_transaction = bt_start_transaction,
.get_result = bt_get_result,
.event = bt_event,
.detect = bt_detect,
.cleanup = bt_cleanup,
.size = bt_size,
};
......@@ -33,6 +33,7 @@
#include <linux/config.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/errno.h>
#include <asm/system.h>
#include <linux/sched.h>
......@@ -44,6 +45,8 @@
#include <asm/semaphore.h>
#include <linux/init.h>
#define IPMI_DEVINTF_VERSION "v31"
struct ipmi_file_private
{
ipmi_user_t user;
......@@ -53,6 +56,8 @@ struct ipmi_file_private
struct fasync_struct *fasync_queue;
wait_queue_head_t wait;
struct semaphore recv_sem;
int default_retries;
unsigned int default_retry_time_ms;
};
static void file_receive_handler(struct ipmi_recv_msg *msg,
......@@ -138,6 +143,10 @@ static int ipmi_open(struct inode *inode, struct file *file)
priv->fasync_queue = NULL;
sema_init(&(priv->recv_sem), 1);
/* Use the low-level defaults. */
priv->default_retries = -1;
priv->default_retry_time_ms = 0;
return 0;
}
......@@ -158,6 +167,63 @@ static int ipmi_release(struct inode *inode, struct file *file)
return 0;
}
static int handle_send_req(ipmi_user_t user,
struct ipmi_req *req,
int retries,
unsigned int retry_time_ms)
{
int rv;
struct ipmi_addr addr;
unsigned char *msgdata;
if (req->addr_len > sizeof(struct ipmi_addr))
return -EINVAL;
if (copy_from_user(&addr, req->addr, req->addr_len))
return -EFAULT;
msgdata = kmalloc(IPMI_MAX_MSG_LENGTH, GFP_KERNEL);
if (!msgdata)
return -ENOMEM;
/* From here out we cannot return, we must jump to "out" for
error exits to free msgdata. */
rv = ipmi_validate_addr(&addr, req->addr_len);
if (rv)
goto out;
if (req->msg.data != NULL) {
if (req->msg.data_len > IPMI_MAX_MSG_LENGTH) {
rv = -EMSGSIZE;
goto out;
}
if (copy_from_user(msgdata,
req->msg.data,
req->msg.data_len))
{
rv = -EFAULT;
goto out;
}
} else {
req->msg.data_len = 0;
}
req->msg.data = msgdata;
rv = ipmi_request_settime(user,
&addr,
req->msgid,
&(req->msg),
NULL,
0,
retries,
retry_time_ms);
out:
kfree(msgdata);
return rv;
}
static int ipmi_ioctl(struct inode *inode,
struct file *file,
unsigned int cmd,
......@@ -170,54 +236,33 @@ static int ipmi_ioctl(struct inode *inode,
{
case IPMICTL_SEND_COMMAND:
{
struct ipmi_req req;
if (copy_from_user(&req, (void *) data, sizeof(req))) {
rv = -EFAULT;
break;
}
rv = handle_send_req(priv->user,
&req,
priv->default_retries,
priv->default_retry_time_ms);
break;
}
case IPMICTL_SEND_COMMAND_SETTIME:
{
struct ipmi_req_settime req;
if (copy_from_user(&req, (void *) data, sizeof(req))) {
rv = -EFAULT;
break;
}
rv = handle_send_req(priv->user,
&req.req,
req.retries,
req.retry_time_ms);
break;
}
......@@ -416,7 +461,36 @@ static int ipmi_ioctl(struct inode *inode,
rv = 0;
break;
}
case IPMICTL_SET_TIMING_PARMS_CMD:
{
struct ipmi_timing_parms parms;
if (copy_from_user(&parms, (void *) data, sizeof(parms))) {
rv = -EFAULT;
break;
}
priv->default_retries = parms.retries;
priv->default_retry_time_ms = parms.retry_time_ms;
rv = 0;
break;
}
case IPMICTL_GET_TIMING_PARMS_CMD:
{
struct ipmi_timing_parms parms;
parms.retries = priv->default_retries;
parms.retry_time_ms = priv->default_retry_time_ms;
if (copy_to_user((void *) data, &parms, sizeof(parms))) {
rv = -EFAULT;
break;
}
rv = 0;
break;
}
}
return rv;
......@@ -435,29 +509,30 @@ static struct file_operations ipmi_fops = {
#define DEVICE_NAME "ipmidev"
static int ipmi_major = 0;
module_param(ipmi_major, int, 0);
MODULE_PARM_DESC(ipmi_major, "Sets the major number of the IPMI device. By"
" default, or if you set it to zero, it will choose the next"
" available device. Setting it to -1 will disable the"
" interface. Other values will set the major device number"
" to that value.");
static void ipmi_new_smi(int if_num)
{
devfs_mk_cdev(MKDEV(ipmi_major, if_num),
S_IFCHR | S_IRUSR | S_IWUSR,
"ipmidev/%d", if_num);
}
static void ipmi_smi_gone(int if_num)
{
devfs_remove("ipmidev/%d", if_num);
}
}
static struct ipmi_smi_watcher smi_watcher =
{
.owner = THIS_MODULE,
.new_smi = ipmi_new_smi,
.smi_gone = ipmi_smi_gone,
};
static __init int init_ipmi_devintf(void)
......@@ -467,6 +542,9 @@ static __init int init_ipmi_devintf(void)
if (ipmi_major < 0)
return -EINVAL;
printk(KERN_INFO "ipmi device interface version "
IPMI_DEVINTF_VERSION "\n");
rv = register_chrdev(ipmi_major, DEVICE_NAME, &ipmi_fops);
if (rv < 0) {
printk(KERN_ERR "ipmi: can't get major %d\n", ipmi_major);
......@@ -482,13 +560,10 @@ static __init int init_ipmi_devintf(void)
rv = ipmi_smi_watcher_register(&smi_watcher);
if (rv) {
unregister_chrdev(ipmi_major, DEVICE_NAME);
printk(KERN_WARNING "ipmi: can't register smi watcher\n");
return rv;
}
printk(KERN_INFO "ipmi: device interface at char major %d\n",
ipmi_major);
return 0;
}
module_init(init_ipmi_devintf);
......@@ -500,21 +575,5 @@ static __exit void cleanup_ipmi(void)
unregister_chrdev(ipmi_major, DEVICE_NAME);
}
module_exit(cleanup_ipmi);
#ifndef MODULE
static __init int ipmi_setup (char *str)
{
int x;
if (get_option (&str, &x)) {
/* ipmi=x sets the major number to x. */
ipmi_major = x;
} else if (!strcmp(str, "off")) {
ipmi_major = -1;
}
return 1;
}
__setup("ipmi=", ipmi_setup);
#endif
MODULE_LICENSE("GPL");
/*
* ipmi_kcs_intf.c
*
* The interface to the IPMI driver for the KCS.
*
* Author: MontaVista Software, Inc.
* Corey Minyard <minyard@mvista.com>
* source@mvista.com
*
* Copyright 2002 MontaVista Software Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
* OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* This file holds the "policy" for the interface to the KCS state
* machine. It does the configuration, handles timers and interrupts,
* and drives the real KCS state machine.
*/
#include <linux/config.h>
#include <linux/module.h>
#include <asm/system.h>
#include <linux/sched.h>
#include <linux/timer.h>
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/list.h>
#include <linux/ioport.h>
#ifdef CONFIG_HIGH_RES_TIMERS
#include <linux/hrtime.h>
#endif
#include <linux/interrupt.h>
#include <linux/rcupdate.h>
#include <linux/ipmi_smi.h>
#include <asm/io.h>
#include <asm/irq.h>
#include "ipmi_kcs_sm.h"
#include <linux/init.h>
/* Measure times between events in the driver. */
#undef DEBUG_TIMING
/* Timing parameters. Call every 10 ms when not doing anything,
otherwise call every KCS_SHORT_TIMEOUT_USEC microseconds. */
#define KCS_TIMEOUT_TIME_USEC 10000
#define KCS_USEC_PER_JIFFY (1000000/HZ)
#define KCS_TIMEOUT_JIFFIES (KCS_TIMEOUT_TIME_USEC/KCS_USEC_PER_JIFFY)
#define KCS_SHORT_TIMEOUT_USEC 250 /* .25ms when the SM requests a
short timeout */
#ifdef CONFIG_IPMI_KCS
/* This forces a dependency to the config file for this option. */
#endif
enum kcs_intf_state {
KCS_NORMAL,
KCS_GETTING_FLAGS,
KCS_GETTING_EVENTS,
KCS_CLEARING_FLAGS,
KCS_CLEARING_FLAGS_THEN_SET_IRQ,
KCS_GETTING_MESSAGES,
KCS_ENABLE_INTERRUPTS1,
KCS_ENABLE_INTERRUPTS2
/* FIXME - add watchdog stuff. */
};
struct kcs_info
{
ipmi_smi_t intf;
struct kcs_data *kcs_sm;
spinlock_t kcs_lock;
spinlock_t msg_lock;
struct list_head xmit_msgs;
struct list_head hp_xmit_msgs;
struct ipmi_smi_msg *curr_msg;
enum kcs_intf_state kcs_state;
/* Flags from the last GET_MSG_FLAGS command, used when an ATTN
is set to hold the flags until we are done handling everything
from the flags. */
#define RECEIVE_MSG_AVAIL 0x01
#define EVENT_MSG_BUFFER_FULL 0x02
#define WDT_PRE_TIMEOUT_INT 0x08
unsigned char msg_flags;
/* If set to true, this will request events the next time the
state machine is idle. */
atomic_t req_events;
/* If true, run the state machine to completion on every send
call. Generally used after a panic to make sure stuff goes
out. */
int run_to_completion;
/* The I/O port of a KCS interface. */
int port;
/* zero if no irq; */
int irq;
/* The physical and remapped memory addresses of a KCS interface. */
unsigned long physaddr;
unsigned char *addr;
/* The timer for this kcs. */
struct timer_list kcs_timer;
/* The time (in jiffies) the last timeout occurred at. */
unsigned long last_timeout_jiffies;
/* Used to gracefully stop the timer without race conditions. */
volatile int stop_operation;
volatile int timer_stopped;
/* The driver will disable interrupts when it gets into a
situation where it cannot handle messages due to lack of
memory. Once that situation clears up, it will re-enable
interrupts. */
int interrupt_disabled;
};
static void kcs_restart_short_timer(struct kcs_info *kcs_info);
static void deliver_recv_msg(struct kcs_info *kcs_info, struct ipmi_smi_msg *msg)
{
/* Deliver the message to the upper layer with the lock
released. */
spin_unlock(&(kcs_info->kcs_lock));
ipmi_smi_msg_received(kcs_info->intf, msg);
spin_lock(&(kcs_info->kcs_lock));
}
static void return_hosed_msg(struct kcs_info *kcs_info)
{
struct ipmi_smi_msg *msg = kcs_info->curr_msg;
/* Make it a response */
msg->rsp[0] = msg->data[0] | 4;
msg->rsp[1] = msg->data[1];
msg->rsp[2] = 0xFF; /* Unknown error. */
msg->rsp_size = 3;
kcs_info->curr_msg = NULL;
deliver_recv_msg(kcs_info, msg);
}
static enum kcs_result start_next_msg(struct kcs_info *kcs_info)
{
int rv;
struct list_head *entry = NULL;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
/* No need to save flags, we already have interrupts off and we
already hold the KCS lock. */
spin_lock(&(kcs_info->msg_lock));
/* Pick the high priority queue first. */
if (! list_empty(&(kcs_info->hp_xmit_msgs))) {
entry = kcs_info->hp_xmit_msgs.next;
} else if (! list_empty(&(kcs_info->xmit_msgs))) {
entry = kcs_info->xmit_msgs.next;
}
if (!entry) {
kcs_info->curr_msg = NULL;
rv = KCS_SM_IDLE;
} else {
int err;
list_del(entry);
kcs_info->curr_msg = list_entry(entry,
struct ipmi_smi_msg,
link);
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Start2: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
err = start_kcs_transaction(kcs_info->kcs_sm,
kcs_info->curr_msg->data,
kcs_info->curr_msg->data_size);
if (err) {
return_hosed_msg(kcs_info);
}
rv = KCS_CALL_WITHOUT_DELAY;
}
spin_unlock(&(kcs_info->msg_lock));
return rv;
}
static void start_enable_irq(struct kcs_info *kcs_info)
{
unsigned char msg[2];
/* If we are enabling interrupts, we have to tell the
BMC to use them. */
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
start_kcs_transaction(kcs_info->kcs_sm, msg, 2);
kcs_info->kcs_state = KCS_ENABLE_INTERRUPTS1;
}
static void start_clear_flags(struct kcs_info *kcs_info)
{
unsigned char msg[3];
/* Make sure the watchdog pre-timeout flag is not set at startup. */
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_CLEAR_MSG_FLAGS_CMD;
msg[2] = WDT_PRE_TIMEOUT_INT;
start_kcs_transaction(kcs_info->kcs_sm, msg, 3);
kcs_info->kcs_state = KCS_CLEARING_FLAGS;
}
/* When we have a situation where we run out of memory and cannot
allocate messages, we just leave them in the BMC and run the system
polled until we can allocate some memory. Once we have some
memory, we will re-enable the interrupt. */
static inline void disable_kcs_irq(struct kcs_info *kcs_info)
{
if ((kcs_info->irq) && (!kcs_info->interrupt_disabled)) {
disable_irq_nosync(kcs_info->irq);
kcs_info->interrupt_disabled = 1;
}
}
static inline void enable_kcs_irq(struct kcs_info *kcs_info)
{
if ((kcs_info->irq) && (kcs_info->interrupt_disabled)) {
enable_irq(kcs_info->irq);
kcs_info->interrupt_disabled = 0;
}
}
static void handle_flags(struct kcs_info *kcs_info)
{
if (kcs_info->msg_flags & WDT_PRE_TIMEOUT_INT) {
/* Watchdog pre-timeout */
start_clear_flags(kcs_info);
kcs_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT;
spin_unlock(&(kcs_info->kcs_lock));
ipmi_smi_watchdog_pretimeout(kcs_info->intf);
spin_lock(&(kcs_info->kcs_lock));
} else if (kcs_info->msg_flags & RECEIVE_MSG_AVAIL) {
/* Messages available. */
kcs_info->curr_msg = ipmi_alloc_smi_msg();
if (!kcs_info->curr_msg) {
disable_kcs_irq(kcs_info);
kcs_info->kcs_state = KCS_NORMAL;
return;
}
enable_kcs_irq(kcs_info);
kcs_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
kcs_info->curr_msg->data[1] = IPMI_GET_MSG_CMD;
kcs_info->curr_msg->data_size = 2;
start_kcs_transaction(kcs_info->kcs_sm,
kcs_info->curr_msg->data,
kcs_info->curr_msg->data_size);
kcs_info->kcs_state = KCS_GETTING_MESSAGES;
} else if (kcs_info->msg_flags & EVENT_MSG_BUFFER_FULL) {
/* Events available. */
kcs_info->curr_msg = ipmi_alloc_smi_msg();
if (!kcs_info->curr_msg) {
disable_kcs_irq(kcs_info);
kcs_info->kcs_state = KCS_NORMAL;
return;
}
enable_kcs_irq(kcs_info);
kcs_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
kcs_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
kcs_info->curr_msg->data_size = 2;
start_kcs_transaction(kcs_info->kcs_sm,
kcs_info->curr_msg->data,
kcs_info->curr_msg->data_size);
kcs_info->kcs_state = KCS_GETTING_EVENTS;
} else {
kcs_info->kcs_state = KCS_NORMAL;
}
}
static void handle_transaction_done(struct kcs_info *kcs_info)
{
struct ipmi_smi_msg *msg;
#ifdef DEBUG_TIMING
struct timeval t;
do_gettimeofday(&t);
printk("**Done: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
switch (kcs_info->kcs_state) {
case KCS_NORMAL:
if (!kcs_info->curr_msg)
break;
kcs_info->curr_msg->rsp_size
= kcs_get_result(kcs_info->kcs_sm,
kcs_info->curr_msg->rsp,
IPMI_MAX_MSG_LENGTH);
/* Do this here because deliver_recv_msg() releases the
lock, and a new message can be put in during the
time the lock is released. */
msg = kcs_info->curr_msg;
kcs_info->curr_msg = NULL;
deliver_recv_msg(kcs_info, msg);
break;
case KCS_GETTING_FLAGS:
{
unsigned char msg[4];
unsigned int len;
/* We got the flags from the KCS, now handle them. */
len = kcs_get_result(kcs_info->kcs_sm, msg, 4);
if (msg[2] != 0) {
/* Error fetching flags, just give up for
now. */
kcs_info->kcs_state = KCS_NORMAL;
} else if (len < 3) {
/* Hmm, no flags. That's technically illegal, but
don't use uninitialized data. */
kcs_info->kcs_state = KCS_NORMAL;
} else {
kcs_info->msg_flags = msg[3];
handle_flags(kcs_info);
}
break;
}
case KCS_CLEARING_FLAGS:
case KCS_CLEARING_FLAGS_THEN_SET_IRQ:
{
unsigned char msg[3];
/* We cleared the flags. */
kcs_get_result(kcs_info->kcs_sm, msg, 3);
if (msg[2] != 0) {
/* Error clearing flags */
printk(KERN_WARNING
"ipmi_kcs: Error clearing flags: %2.2x\n",
msg[2]);
}
if (kcs_info->kcs_state == KCS_CLEARING_FLAGS_THEN_SET_IRQ)
start_enable_irq(kcs_info);
else
kcs_info->kcs_state = KCS_NORMAL;
break;
}
case KCS_GETTING_EVENTS:
{
kcs_info->curr_msg->rsp_size
= kcs_get_result(kcs_info->kcs_sm,
kcs_info->curr_msg->rsp,
IPMI_MAX_MSG_LENGTH);
/* Do this here because deliver_recv_msg() releases the
lock, and a new message can be put in during the
time the lock is released. */
msg = kcs_info->curr_msg;
kcs_info->curr_msg = NULL;
if (msg->rsp[2] != 0) {
/* Error getting event, probably done. */
msg->done(msg);
/* Take off the event flag. */
kcs_info->msg_flags &= ~EVENT_MSG_BUFFER_FULL;
} else {
deliver_recv_msg(kcs_info, msg);
}
handle_flags(kcs_info);
break;
}
case KCS_GETTING_MESSAGES:
{
kcs_info->curr_msg->rsp_size
= kcs_get_result(kcs_info->kcs_sm,
kcs_info->curr_msg->rsp,
IPMI_MAX_MSG_LENGTH);
/* Do this here because deliver_recv_msg() releases the
lock, and a new message can be put in during the
time the lock is released. */
msg = kcs_info->curr_msg;
kcs_info->curr_msg = NULL;
if (msg->rsp[2] != 0) {
/* Error getting message, probably done. */
msg->done(msg);
/* Take off the msg flag. */
kcs_info->msg_flags &= ~RECEIVE_MSG_AVAIL;
} else {
deliver_recv_msg(kcs_info, msg);
}
handle_flags(kcs_info);
break;
}
case KCS_ENABLE_INTERRUPTS1:
{
unsigned char msg[4];
/* We got the flags from the KCS, now handle them. */
kcs_get_result(kcs_info->kcs_sm, msg, 4);
if (msg[2] != 0) {
printk(KERN_WARNING
"ipmi_kcs: Could not enable interrupts"
", failed get, using polled mode.\n");
kcs_info->kcs_state = KCS_NORMAL;
} else {
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
msg[2] = msg[3] | 1; /* enable msg queue int */
start_kcs_transaction(kcs_info->kcs_sm, msg,3);
kcs_info->kcs_state = KCS_ENABLE_INTERRUPTS2;
}
break;
}
case KCS_ENABLE_INTERRUPTS2:
{
unsigned char msg[4];
/* We got the flags from the KCS, now handle them. */
kcs_get_result(kcs_info->kcs_sm, msg, 4);
if (msg[2] != 0) {
printk(KERN_WARNING
"ipmi_kcs: Could not enable interrupts"
", failed set, using polled mode.\n");
}
kcs_info->kcs_state = KCS_NORMAL;
break;
}
}
}
/* Called on timeouts and events. Timeouts should pass the elapsed
time, interrupts should pass in zero. */
static enum kcs_result kcs_event_handler(struct kcs_info *kcs_info, int time)
{
enum kcs_result kcs_result;
restart:
/* There used to be a loop here that waited a little while
(around 25us) before giving up. That turned out to be
pointless, the minimum delays I was seeing were in the 300us
range, which is far too long to wait in an interrupt. So
we just run until the state machine tells us something
happened or it needs a delay. */
kcs_result = kcs_event(kcs_info->kcs_sm, time);
time = 0;
while (kcs_result == KCS_CALL_WITHOUT_DELAY)
{
kcs_result = kcs_event(kcs_info->kcs_sm, 0);
}
if (kcs_result == KCS_TRANSACTION_COMPLETE)
{
handle_transaction_done(kcs_info);
kcs_result = kcs_event(kcs_info->kcs_sm, 0);
}
else if (kcs_result == KCS_SM_HOSED)
{
if (kcs_info->curr_msg != NULL) {
/* If we were handling a user message, format
a response to send to the upper layer to
tell it about the error. */
return_hosed_msg(kcs_info);
}
kcs_result = kcs_event(kcs_info->kcs_sm, 0);
kcs_info->kcs_state = KCS_NORMAL;
}
/* We prefer handling attn over new messages. */
if (kcs_result == KCS_ATTN)
{
unsigned char msg[2];
/* Got an attn, send down a get message flags to see
what's causing it. It would be better to handle
this in the upper layer, but due to the way
interrupts work with the KCS, that's not really
possible. */
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_MSG_FLAGS_CMD;
start_kcs_transaction(kcs_info->kcs_sm, msg, 2);
kcs_info->kcs_state = KCS_GETTING_FLAGS;
goto restart;
}
/* If we are currently idle, try to start the next message. */
if (kcs_result == KCS_SM_IDLE) {
kcs_result = start_next_msg(kcs_info);
if (kcs_result != KCS_SM_IDLE)
goto restart;
}
if ((kcs_result == KCS_SM_IDLE)
&& (atomic_read(&kcs_info->req_events)))
{
/* We are idle and the upper layer requested that I fetch
events, so do so. */
unsigned char msg[2];
atomic_set(&kcs_info->req_events, 0);
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_MSG_FLAGS_CMD;
start_kcs_transaction(kcs_info->kcs_sm, msg, 2);
kcs_info->kcs_state = KCS_GETTING_FLAGS;
goto restart;
}
return kcs_result;
}
static void sender(void *send_info,
struct ipmi_smi_msg *msg,
int priority)
{
struct kcs_info *kcs_info = (struct kcs_info *) send_info;
enum kcs_result result;
unsigned long flags;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
spin_lock_irqsave(&(kcs_info->msg_lock), flags);
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Enqueue: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
if (kcs_info->run_to_completion) {
/* If we are running to completion, then throw it in
the list and run transactions until everything is
clear. Priority doesn't matter here. */
list_add_tail(&(msg->link), &(kcs_info->xmit_msgs));
/* We have to release the msg lock and claim the kcs
lock in this case, because of race conditions. */
spin_unlock_irqrestore(&(kcs_info->msg_lock), flags);
spin_lock_irqsave(&(kcs_info->kcs_lock), flags);
result = kcs_event_handler(kcs_info, 0);
while (result != KCS_SM_IDLE) {
udelay(KCS_SHORT_TIMEOUT_USEC);
result = kcs_event_handler(kcs_info,
KCS_SHORT_TIMEOUT_USEC);
}
spin_unlock_irqrestore(&(kcs_info->kcs_lock), flags);
return;
} else {
if (priority > 0) {
list_add_tail(&(msg->link), &(kcs_info->hp_xmit_msgs));
} else {
list_add_tail(&(msg->link), &(kcs_info->xmit_msgs));
}
}
spin_unlock_irqrestore(&(kcs_info->msg_lock), flags);
spin_lock_irqsave(&(kcs_info->kcs_lock), flags);
if ((kcs_info->kcs_state == KCS_NORMAL)
&& (kcs_info->curr_msg == NULL))
{
start_next_msg(kcs_info);
kcs_restart_short_timer(kcs_info);
}
spin_unlock_irqrestore(&(kcs_info->kcs_lock), flags);
}
static void set_run_to_completion(void *send_info, int i_run_to_completion)
{
struct kcs_info *kcs_info = (struct kcs_info *) send_info;
enum kcs_result result;
unsigned long flags;
spin_lock_irqsave(&(kcs_info->kcs_lock), flags);
kcs_info->run_to_completion = i_run_to_completion;
if (i_run_to_completion) {
result = kcs_event_handler(kcs_info, 0);
while (result != KCS_SM_IDLE) {
udelay(KCS_SHORT_TIMEOUT_USEC);
result = kcs_event_handler(kcs_info,
KCS_SHORT_TIMEOUT_USEC);
}
}
spin_unlock_irqrestore(&(kcs_info->kcs_lock), flags);
}
static void request_events(void *send_info)
{
struct kcs_info *kcs_info = (struct kcs_info *) send_info;
atomic_set(&kcs_info->req_events, 1);
}
static int initialized = 0;
/* Must be called with interrupts off and with the kcs_lock held. */
static void kcs_restart_short_timer(struct kcs_info *kcs_info)
{
if (del_timer(&(kcs_info->kcs_timer))) {
#ifdef CONFIG_HIGH_RES_TIMERS
unsigned long jiffies_now;
/* If we don't delete the timer, then it will go off
immediately, anyway. So we only process if we
actually delete the timer. */
/* We already have irqsave on, so no need for it
here. */
read_lock(&xtime_lock);
jiffies_now = jiffies;
kcs_info->kcs_timer.expires = jiffies_now;
kcs_info->kcs_timer.sub_expires
= quick_update_jiffies_sub(jiffies_now);
read_unlock(&xtime_lock);
kcs_info->kcs_timer.sub_expires
+= usec_to_arch_cycles(KCS_SHORT_TIMEOUT_USEC);
while (kcs_info->kcs_timer.sub_expires >= cycles_per_jiffies) {
kcs_info->kcs_timer.expires++;
kcs_info->kcs_timer.sub_expires -= cycles_per_jiffies;
}
#else
kcs_info->kcs_timer.expires = jiffies + 1;
#endif
add_timer(&(kcs_info->kcs_timer));
}
}
static void kcs_timeout(unsigned long data)
{
struct kcs_info *kcs_info = (struct kcs_info *) data;
enum kcs_result kcs_result;
unsigned long flags;
unsigned long jiffies_now;
unsigned long time_diff;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
if (kcs_info->stop_operation) {
kcs_info->timer_stopped = 1;
return;
}
spin_lock_irqsave(&(kcs_info->kcs_lock), flags);
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Timer: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
jiffies_now = jiffies;
time_diff = ((jiffies_now - kcs_info->last_timeout_jiffies)
* KCS_USEC_PER_JIFFY);
kcs_result = kcs_event_handler(kcs_info, time_diff);
kcs_info->last_timeout_jiffies = jiffies_now;
if ((kcs_info->irq) && (! kcs_info->interrupt_disabled)) {
/* Running with interrupts, only do long timeouts. */
kcs_info->kcs_timer.expires = jiffies + KCS_TIMEOUT_JIFFIES;
goto do_add_timer;
}
/* If the state machine asks for a short delay, then shorten
the timer timeout. */
#ifdef CONFIG_HIGH_RES_TIMERS
if (kcs_result == KCS_CALL_WITH_DELAY) {
kcs_info->kcs_timer.sub_expires
+= usec_to_arch_cycles(KCS_SHORT_TIMEOUT_USEC);
while (kcs_info->kcs_timer.sub_expires >= cycles_per_jiffies) {
kcs_info->kcs_timer.expires++;
kcs_info->kcs_timer.sub_expires -= cycles_per_jiffies;
}
} else {
kcs_info->kcs_timer.expires = jiffies + KCS_TIMEOUT_JIFFIES;
kcs_info->kcs_timer.sub_expires = 0;
}
#else
/* If requested, take the shortest delay possible */
if (kcs_result == KCS_CALL_WITH_DELAY) {
kcs_info->kcs_timer.expires = jiffies + 1;
} else {
kcs_info->kcs_timer.expires = jiffies + KCS_TIMEOUT_JIFFIES;
}
#endif
do_add_timer:
add_timer(&(kcs_info->kcs_timer));
spin_unlock_irqrestore(&(kcs_info->kcs_lock), flags);
}
static irqreturn_t kcs_irq_handler(int irq, void *data, struct pt_regs *regs)
{
struct kcs_info *kcs_info = (struct kcs_info *) data;
unsigned long flags;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
spin_lock_irqsave(&(kcs_info->kcs_lock), flags);
if (kcs_info->stop_operation)
goto out;
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Interrupt: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
kcs_event_handler(kcs_info, 0);
out:
spin_unlock_irqrestore(&(kcs_info->kcs_lock), flags);
return IRQ_HANDLED;
}
static struct ipmi_smi_handlers handlers =
{
.owner = THIS_MODULE,
.sender = sender,
.request_events = request_events,
.set_run_to_completion = set_run_to_completion,
};
static unsigned char ipmi_kcs_dev_rev;
static unsigned char ipmi_kcs_fw_rev_major;
static unsigned char ipmi_kcs_fw_rev_minor;
static unsigned char ipmi_version_major;
static unsigned char ipmi_version_minor;
extern int kcs_dbg;
static int ipmi_kcs_detect_hardware(unsigned int port,
unsigned char *addr,
struct kcs_data *data)
{
unsigned char msg[2];
unsigned char resp[IPMI_MAX_MSG_LENGTH];
unsigned long resp_len;
enum kcs_result kcs_result;
/* It's impossible for the KCS status register to be all 1's
(assuming a properly functioning, self-initialized BMC),
but that's what you get from reading a bogus address, so we
test that first. */
if (port) {
if (inb(port+1) == 0xff) return -ENODEV;
} else {
if (readb(addr+1) == 0xff) return -ENODEV;
}
/* Do a Get Device ID command, since it comes back with some
useful info. */
msg[0] = IPMI_NETFN_APP_REQUEST << 2;
msg[1] = IPMI_GET_DEVICE_ID_CMD;
start_kcs_transaction(data, msg, 2);
kcs_result = kcs_event(data, 0);
for (;;)
{
if (kcs_result == KCS_CALL_WITH_DELAY) {
udelay(100);
kcs_result = kcs_event(data, 100);
}
else if (kcs_result == KCS_CALL_WITHOUT_DELAY)
{
kcs_result = kcs_event(data, 0);
}
else
break;
}
if (kcs_result == KCS_SM_HOSED) {
/* We couldn't get the state machine to run, so whatever's at
the port is probably not an IPMI KCS interface. */
return -ENODEV;
}
/* Otherwise, we got some data. */
resp_len = kcs_get_result(data, resp, IPMI_MAX_MSG_LENGTH);
if (resp_len < 6)
/* That's odd, it should be longer. */
return -EINVAL;
if ((resp[1] != IPMI_GET_DEVICE_ID_CMD) || (resp[2] != 0))
/* That's odd, it shouldn't be able to fail. */
return -EINVAL;
ipmi_kcs_dev_rev = resp[4] & 0xf;
ipmi_kcs_fw_rev_major = resp[5] & 0x7f;
ipmi_kcs_fw_rev_minor = resp[6];
ipmi_version_major = resp[7] & 0xf;
ipmi_version_minor = resp[7] >> 4;
return 0;
}
/* There can be 4 IO ports passed in (with or without IRQs), 4 addresses,
a default IO port, and 1 ACPI/SPMI address. That sets KCS_MAX_DRIVERS */
#define KCS_MAX_PARMS 4
#define KCS_MAX_DRIVERS ((KCS_MAX_PARMS * 2) + 2)
static struct kcs_info *kcs_infos[KCS_MAX_DRIVERS] =
{ NULL, NULL, NULL, NULL };
#define DEVICE_NAME "ipmi_kcs"
#define DEFAULT_IO_PORT 0xca2
static int kcs_trydefaults = 1;
static unsigned long kcs_addrs[KCS_MAX_PARMS] = { 0, 0, 0, 0 };
static int kcs_ports[KCS_MAX_PARMS] = { 0, 0, 0, 0 };
static int kcs_irqs[KCS_MAX_PARMS] = { 0, 0, 0, 0 };
MODULE_PARM(kcs_trydefaults, "i");
MODULE_PARM(kcs_addrs, "1-4l");
MODULE_PARM(kcs_irqs, "1-4i");
MODULE_PARM(kcs_ports, "1-4i");
/* Returns 0 if initialized, or negative on an error. */
static int init_one_kcs(int kcs_port,
int irq,
unsigned long kcs_physaddr,
struct kcs_info **kcs)
{
int rv;
struct kcs_info *new_kcs;
/* Did anything get passed in at all? Both == zero disables the
driver. */
if (!(kcs_port || kcs_physaddr))
return -ENODEV;
/* Only initialize a port OR a physical address on this call.
Also, IRQs can go with either ports or addresses. */
if (kcs_port && kcs_physaddr)
return -EINVAL;
new_kcs = kmalloc(sizeof(*new_kcs), GFP_KERNEL);
if (!new_kcs) {
printk(KERN_ERR "ipmi_kcs: out of memory\n");
return -ENOMEM;
}
/* So we know not to free it unless we have allocated one. */
new_kcs->kcs_sm = NULL;
new_kcs->addr = NULL;
new_kcs->physaddr = kcs_physaddr;
new_kcs->port = kcs_port;
if (kcs_port) {
if (request_region(kcs_port, 2, DEVICE_NAME) == NULL) {
kfree(new_kcs);
printk(KERN_ERR
"ipmi_kcs: can't reserve port @ 0x%4.4x\n",
kcs_port);
return -EIO;
}
} else {
if (request_mem_region(kcs_physaddr, 2, DEVICE_NAME) == NULL) {
kfree(new_kcs);
printk(KERN_ERR
"ipmi_kcs: can't reserve memory @ 0x%lx\n",
kcs_physaddr);
return -EIO;
}
if ((new_kcs->addr = ioremap(kcs_physaddr, 2)) == NULL) {
kfree(new_kcs);
printk(KERN_ERR
"ipmi_kcs: can't remap memory at 0x%lx\n",
kcs_physaddr);
return -EIO;
}
}
new_kcs->kcs_sm = kmalloc(kcs_size(), GFP_KERNEL);
if (!new_kcs->kcs_sm) {
printk(KERN_ERR "ipmi_kcs: out of memory\n");
rv = -ENOMEM;
goto out_err;
}
init_kcs_data(new_kcs->kcs_sm, kcs_port, new_kcs->addr);
spin_lock_init(&(new_kcs->kcs_lock));
spin_lock_init(&(new_kcs->msg_lock));
rv = ipmi_kcs_detect_hardware(kcs_port, new_kcs->addr, new_kcs->kcs_sm);
if (rv) {
if (kcs_port)
printk(KERN_ERR
"ipmi_kcs: No KCS @ port 0x%4.4x\n",
kcs_port);
else
printk(KERN_ERR
"ipmi_kcs: No KCS @ addr 0x%lx\n",
kcs_physaddr);
goto out_err;
}
if (irq != 0) {
rv = request_irq(irq,
kcs_irq_handler,
SA_INTERRUPT,
DEVICE_NAME,
new_kcs);
if (rv) {
printk(KERN_WARNING
"ipmi_kcs: %s unable to claim interrupt %d,"
" running polled\n",
DEVICE_NAME, irq);
irq = 0;
}
}
new_kcs->irq = irq;
INIT_LIST_HEAD(&(new_kcs->xmit_msgs));
INIT_LIST_HEAD(&(new_kcs->hp_xmit_msgs));
new_kcs->curr_msg = NULL;
atomic_set(&new_kcs->req_events, 0);
new_kcs->run_to_completion = 0;
start_clear_flags(new_kcs);
if (irq) {
new_kcs->kcs_state = KCS_CLEARING_FLAGS_THEN_SET_IRQ;
printk(KERN_INFO
"ipmi_kcs: Acquiring BMC @ port=0x%x irq=%d\n",
kcs_port, irq);
} else {
if (kcs_port)
printk(KERN_INFO
"ipmi_kcs: Acquiring BMC @ port=0x%x\n",
kcs_port);
else
printk(KERN_INFO
"ipmi_kcs: Acquiring BMC @ addr=0x%lx\n",
kcs_physaddr);
}
rv = ipmi_register_smi(&handlers,
new_kcs,
ipmi_version_major,
ipmi_version_minor,
&(new_kcs->intf));
if (rv) {
if (irq)
free_irq(irq, new_kcs);
printk(KERN_ERR
"ipmi_kcs: Unable to register device: error %d\n",
rv);
goto out_err;
}
new_kcs->interrupt_disabled = 0;
new_kcs->timer_stopped = 0;
new_kcs->stop_operation = 0;
init_timer(&(new_kcs->kcs_timer));
new_kcs->kcs_timer.data = (long) new_kcs;
new_kcs->kcs_timer.function = kcs_timeout;
new_kcs->last_timeout_jiffies = jiffies;
new_kcs->kcs_timer.expires = jiffies + KCS_TIMEOUT_JIFFIES;
add_timer(&(new_kcs->kcs_timer));
*kcs = new_kcs;
return 0;
out_err:
if (kcs_port)
release_region (kcs_port, 2);
if (new_kcs->addr)
iounmap(new_kcs->addr);
if (kcs_physaddr)
release_mem_region(kcs_physaddr, 2);
if (new_kcs->kcs_sm)
kfree(new_kcs->kcs_sm);
kfree(new_kcs);
return rv;
}
#ifdef CONFIG_ACPI_INTERPRETER
#include <linux/acpi.h>
struct SPMITable {
s8 Signature[4];
u32 Length;
u8 Revision;
u8 Checksum;
s8 OEMID[6];
s8 OEMTableID[8];
s8 OEMRevision[4];
s8 CreatorID[4];
s8 CreatorRevision[4];
u8 InterfaceType[2];
s16 SpecificationRevision;
/*
* Bit 0 - SCI interrupt supported
* Bit 1 - I/O APIC/SAPIC
*/
u8 InterruptType;
/* If bit 0 of InterruptType is set, then this is the SCI
interrupt in the GPEx_STS register. */
u8 GPE;
s16 Reserved;
/* If bit 1 of InterruptType is set, then this is the I/O
APIC/SAPIC interrupt. */
u32 GlobalSystemInterrupt;
/* The actual register address. */
struct acpi_generic_address addr;
u8 UID[4];
s8 spmi_id[1]; /* A '\0' terminated array starts here. */
};
static int acpi_find_bmc(unsigned long *physaddr, int *port)
{
acpi_status status;
struct SPMITable *spmi;
status = acpi_get_firmware_table("SPMI", 1,
ACPI_LOGICAL_ADDRESSING,
(struct acpi_table_header **) &spmi);
if (status != AE_OK)
goto not_found;
if (spmi->InterfaceType[0] != 1)
/* Not IPMI. */
goto not_found;
if (spmi->InterfaceType[1] != 1)
/* Not KCS. */
goto not_found;
if (spmi->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
*physaddr = spmi->addr.address;
printk("ipmi_kcs_intf: Found ACPI-specified state machine"
" at memory address 0x%lx\n",
(unsigned long) spmi->addr.address);
} else if (spmi->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
*port = spmi->addr.address;
printk("ipmi_kcs_intf: Found ACPI-specified state machine"
" at I/O address 0x%lx\n",
(unsigned long) spmi->addr.address);
} else
goto not_found; /* Not an address type we recognise. */
return 0;
not_found:
return -ENODEV;
}
#endif
static __init int init_ipmi_kcs(void)
{
int rv = 0;
int pos = 0;
int i = 0;
#ifdef CONFIG_ACPI_INTERPRETER
unsigned long physaddr = 0;
int port = 0;
#endif
if (initialized)
return 0;
initialized = 1;
/* First do the "command-line" parameters */
for (i=0; i < KCS_MAX_PARMS; i++) {
rv = init_one_kcs(kcs_ports[i],
kcs_irqs[i],
0,
&(kcs_infos[pos]));
if (rv == 0)
pos++;
rv = init_one_kcs(0,
kcs_irqs[i],
kcs_addrs[i],
&(kcs_infos[pos]));
if (rv == 0)
pos++;
}
/* Only try the defaults if enabled and resources are available
(because they weren't already specified above). */
if (kcs_trydefaults && (pos == 0)) {
rv = -EINVAL;
#ifdef CONFIG_ACPI_INTERPRETER
if (rv && (acpi_find_bmc(&physaddr, &port) == 0)) {
rv = init_one_kcs(port,
0,
physaddr,
&(kcs_infos[pos]));
if (rv == 0)
pos++;
}
#endif
if (rv) {
rv = init_one_kcs(DEFAULT_IO_PORT,
0,
0,
&(kcs_infos[pos]));
if (rv == 0)
pos++;
}
}
if (kcs_infos[0] == NULL) {
printk("ipmi_kcs: Unable to find any KCS interfaces\n");
return -ENODEV;
}
return 0;
}
module_init(init_ipmi_kcs);
#ifdef MODULE
void __exit cleanup_one_kcs(struct kcs_info *to_clean)
{
int rv;
unsigned long flags;
if (! to_clean)
return;
/* Tell the timer and interrupt handlers that we are shutting
down. */
spin_lock_irqsave(&(to_clean->kcs_lock), flags);
spin_lock(&(to_clean->msg_lock));
to_clean->stop_operation = 1;
if (to_clean->irq != 0)
free_irq(to_clean->irq, to_clean);
if (to_clean->port) {
printk(KERN_INFO
"ipmi_kcs: Releasing BMC @ port=0x%x\n",
to_clean->port);
release_region (to_clean->port, 2);
}
if (to_clean->addr) {
printk(KERN_INFO
"ipmi_kcs: Releasing BMC @ addr=0x%lx\n",
to_clean->physaddr);
iounmap(to_clean->addr);
release_mem_region(to_clean->physaddr, 2);
}
spin_unlock(&(to_clean->msg_lock));
spin_unlock_irqrestore(&(to_clean->kcs_lock), flags);
/* Wait until we know that we are out of any interrupt
handlers that might have been running before we freed the
interrupt. */
synchronize_kernel();
/* Wait for the timer to stop. This avoids problems with race
conditions removing the timer here. */
while (!to_clean->timer_stopped) {
schedule_timeout(1);
}
rv = ipmi_unregister_smi(to_clean->intf);
if (rv) {
printk(KERN_ERR
"ipmi_kcs: Unable to unregister device: errno=%d\n",
rv);
}
initialized = 0;
kfree(to_clean->kcs_sm);
kfree(to_clean);
}
static __exit void cleanup_ipmi_kcs(void)
{
int i;
if (!initialized)
return;
for (i=0; i<KCS_MAX_DRIVERS; i++) {
cleanup_one_kcs(kcs_infos[i]);
}
}
module_exit(cleanup_ipmi_kcs);
#else
/* Unfortunately, cmdline::get_options() only returns integers, not
longs. Since we need ulongs (64-bit physical addresses) parse the
comma-separated list manually. Arguments can be one of these forms:
m0xaabbccddeeff A physical memory address without an IRQ
m0xaabbccddeeff:cc A physical memory address with an IRQ
p0xaabb An IO port without an IRQ
p0xaabb:cc An IO port with an IRQ
nodefaults Suppress trying the default IO port or ACPI address
For example, to pass one IO port with an IRQ, one address, and
suppress the use of the default IO port and ACPI address,
use this option string: ipmi_kcs=p0xCA2:5,m0xFF5B0022,nodefaults
Remember, ipmi_kcs_setup() is passed the string after the equal sign. */
static int __init ipmi_kcs_setup(char *str)
{
unsigned long val;
char *cur, *colon;
int pos;
pos = 0;
cur = strsep(&str, ",");
while ((cur) && (*cur) && (pos < KCS_MAX_PARMS)) {
switch (*cur) {
case 'n':
if (strcmp(cur, "nodefaults") == 0)
kcs_trydefaults = 0;
else
printk(KERN_INFO
"ipmi_kcs: bad parameter value %s\n",
cur);
break;
case 'm':
case 'p':
val = simple_strtoul(cur + 1,
&colon,
0);
if (*cur == 'p')
kcs_ports[pos] = val;
else
kcs_addrs[pos] = val;
if (*colon == ':') {
val = simple_strtoul(colon + 1,
&colon,
0);
kcs_irqs[pos] = val;
}
pos++;
break;
default:
printk(KERN_INFO
"ipmi_kcs: bad parameter value %s\n",
cur);
}
cur = strsep(&str, ",");
}
return 1;
}
__setup("ipmi_kcs=", ipmi_kcs_setup);
#endif
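The "ipmi_kcs=" grammar documented above can be exercised outside the kernel with a small userspace sketch; `strtoul()` stands in for the kernel's `simple_strtoul()` and `strsep()` splits the comma-separated list the same way. `KCS_MAX_PARMS` and the array names here are illustrative stand-ins for the driver's own statics.

```c
#define _DEFAULT_SOURCE   /* for strsep() in glibc */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define KCS_MAX_PARMS 4   /* assumed limit, mirrors the kernel's array size */

static unsigned long ports[KCS_MAX_PARMS], addrs[KCS_MAX_PARMS];
static int irqs[KCS_MAX_PARMS], trydefaults = 1;

/* Parse "p0xaabb[:irq]", "m0xaabbccddeeff[:irq]" and "nodefaults"
   entries; returns how many port/address slots were filled. */
static int parse_ipmi_kcs(char *str)
{
	char *cur, *colon;
	int pos = 0;

	while ((cur = strsep(&str, ",")) != NULL && *cur && pos < KCS_MAX_PARMS) {
		switch (*cur) {
		case 'n':
			if (strcmp(cur, "nodefaults") == 0)
				trydefaults = 0;
			break;
		case 'm':
		case 'p': {
			unsigned long val = strtoul(cur + 1, &colon, 0);
			if (*cur == 'p')
				ports[pos] = val;
			else
				addrs[pos] = val;
			if (*colon == ':')
				irqs[pos] = (int)strtoul(colon + 1, NULL, 0);
			pos++;
			break;
		}
		default:
			fprintf(stderr, "bad parameter value %s\n", cur);
		}
	}
	return pos;
}
```

Feeding it the example string from the comment, "p0xCA2:5,m0xFF5B0022,nodefaults", fills one port with IRQ 5, one memory address, and clears the try-defaults flag.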
MODULE_LICENSE("GPL");
......@@ -37,13 +37,12 @@
* that document.
*/
#include <linux/types.h>
#include <linux/kernel.h> /* For printk. */
#include <linux/string.h>
#include <linux/ipmi_msgdefs.h> /* for completion codes */
#include "ipmi_si_sm.h"
#include <asm/io.h>
#include <asm/string.h> /* Gets rid of memcpy warning */
#include <asm/system.h>
#include "ipmi_kcs_sm.h"
#define IPMI_KCS_VERSION "v31"
/* Set this if you want a printout of why the state machine was hosed
when it gets hosed. */
......@@ -95,32 +94,28 @@ enum kcs_states {
#define OBF_RETRY_TIMEOUT 1000000
#define MAX_ERROR_RETRIES 10
#define IPMI_ERR_MSG_TRUNCATED 0xc6
#define IPMI_ERR_UNSPECIFIED 0xff
struct kcs_data
struct si_sm_data
{
enum kcs_states state;
unsigned int port;
unsigned char *addr;
unsigned char write_data[MAX_KCS_WRITE_SIZE];
int write_pos;
int write_count;
int orig_write_count;
unsigned char read_data[MAX_KCS_READ_SIZE];
int read_pos;
int truncated;
enum kcs_states state;
struct si_sm_io *io;
unsigned char write_data[MAX_KCS_WRITE_SIZE];
int write_pos;
int write_count;
int orig_write_count;
unsigned char read_data[MAX_KCS_READ_SIZE];
int read_pos;
int truncated;
unsigned int error_retries;
long ibf_timeout;
long obf_timeout;
};
void init_kcs_data(struct kcs_data *kcs, unsigned int port, unsigned char *addr)
static unsigned int init_kcs_data(struct si_sm_data *kcs,
struct si_sm_io *io)
{
kcs->state = KCS_IDLE;
kcs->port = port;
kcs->addr = addr;
kcs->io = io;
kcs->write_pos = 0;
kcs->write_count = 0;
kcs->orig_write_count = 0;
......@@ -129,40 +124,29 @@ void init_kcs_data(struct kcs_data *kcs, unsigned int port, unsigned char *addr)
kcs->truncated = 0;
kcs->ibf_timeout = IBF_RETRY_TIMEOUT;
kcs->obf_timeout = OBF_RETRY_TIMEOUT;
}
/* Remember, init_one_kcs() ensured port and addr can't both be set */
/* Reserve 2 I/O bytes. */
return 2;
}
static inline unsigned char read_status(struct kcs_data *kcs)
static inline unsigned char read_status(struct si_sm_data *kcs)
{
if (kcs->port)
return inb(kcs->port + 1);
else
return readb(kcs->addr + 1);
return kcs->io->inputb(kcs->io, 1);
}
static inline unsigned char read_data(struct kcs_data *kcs)
static inline unsigned char read_data(struct si_sm_data *kcs)
{
if (kcs->port)
return inb(kcs->port + 0);
else
return readb(kcs->addr + 0);
return kcs->io->inputb(kcs->io, 0);
}
static inline void write_cmd(struct kcs_data *kcs, unsigned char data)
static inline void write_cmd(struct si_sm_data *kcs, unsigned char data)
{
if (kcs->port)
outb(data, kcs->port + 1);
else
writeb(data, kcs->addr + 1);
kcs->io->outputb(kcs->io, 1, data);
}
static inline void write_data(struct kcs_data *kcs, unsigned char data)
static inline void write_data(struct si_sm_data *kcs, unsigned char data)
{
if (kcs->port)
outb(data, kcs->port + 0);
else
writeb(data, kcs->addr + 0);
kcs->io->outputb(kcs->io, 0, data);
}
/* Control codes. */
......@@ -182,14 +166,14 @@ static inline void write_data(struct kcs_data *kcs, unsigned char data)
#define GET_STATUS_OBF(status) ((status) & 0x01)
static inline void write_next_byte(struct kcs_data *kcs)
static inline void write_next_byte(struct si_sm_data *kcs)
{
write_data(kcs, kcs->write_data[kcs->write_pos]);
(kcs->write_pos)++;
(kcs->write_count)--;
}
static inline void start_error_recovery(struct kcs_data *kcs, char *reason)
static inline void start_error_recovery(struct si_sm_data *kcs, char *reason)
{
(kcs->error_retries)++;
if (kcs->error_retries > MAX_ERROR_RETRIES) {
......@@ -202,7 +186,7 @@ static inline void start_error_recovery(struct kcs_data *kcs, char *reason)
}
}
static inline void read_next_byte(struct kcs_data *kcs)
static inline void read_next_byte(struct si_sm_data *kcs)
{
if (kcs->read_pos >= MAX_KCS_READ_SIZE) {
/* Throw the data away and mark it truncated. */
......@@ -215,9 +199,8 @@ static inline void read_next_byte(struct kcs_data *kcs)
write_data(kcs, KCS_READ_BYTE);
}
static inline int check_ibf(struct kcs_data *kcs,
unsigned char status,
long time)
static inline int check_ibf(struct si_sm_data *kcs, unsigned char status,
long time)
{
if (GET_STATUS_IBF(status)) {
kcs->ibf_timeout -= time;
......@@ -232,9 +215,8 @@ static inline int check_ibf(struct kcs_data *kcs,
return 1;
}
static inline int check_obf(struct kcs_data *kcs,
unsigned char status,
long time)
static inline int check_obf(struct si_sm_data *kcs, unsigned char status,
long time)
{
if (! GET_STATUS_OBF(status)) {
kcs->obf_timeout -= time;
......@@ -248,13 +230,13 @@ static inline int check_obf(struct kcs_data *kcs,
return 1;
}
static void clear_obf(struct kcs_data *kcs, unsigned char status)
static void clear_obf(struct si_sm_data *kcs, unsigned char status)
{
if (GET_STATUS_OBF(status))
read_data(kcs);
}
static void restart_kcs_transaction(struct kcs_data *kcs)
static void restart_kcs_transaction(struct si_sm_data *kcs)
{
kcs->write_count = kcs->orig_write_count;
kcs->write_pos = 0;
......@@ -265,7 +247,8 @@ static void restart_kcs_transaction(struct kcs_data *kcs)
write_cmd(kcs, KCS_WRITE_START);
}
int start_kcs_transaction(struct kcs_data *kcs, char *data, unsigned int size)
static int start_kcs_transaction(struct si_sm_data *kcs, unsigned char *data,
unsigned int size)
{
if ((size < 2) || (size > MAX_KCS_WRITE_SIZE)) {
return -1;
......@@ -287,7 +270,8 @@ int start_kcs_transaction(struct kcs_data *kcs, char *data, unsigned int size)
return 0;
}
int kcs_get_result(struct kcs_data *kcs, unsigned char *data, int length)
static int get_kcs_result(struct si_sm_data *kcs, unsigned char *data,
unsigned int length)
{
if (length < kcs->read_pos) {
kcs->read_pos = length;
......@@ -316,7 +300,7 @@ int kcs_get_result(struct kcs_data *kcs, unsigned char *data, int length)
/* This implements the state machine defined in the IPMI manual, see
that for details on how this works. Divide that flowchart into
sections delimited by "Wait for IBF" and this will become clear. */
enum kcs_result kcs_event(struct kcs_data *kcs, long time)
static enum si_sm_result kcs_event(struct si_sm_data *kcs, long time)
{
unsigned char status;
unsigned char state;
......@@ -328,7 +312,7 @@ enum kcs_result kcs_event(struct kcs_data *kcs, long time)
#endif
/* All states wait for ibf, so just do it here. */
if (!check_ibf(kcs, status, time))
return KCS_CALL_WITH_DELAY;
return SI_SM_CALL_WITH_DELAY;
/* Just about everything looks at the KCS state, so grab that, too. */
state = GET_STATUS_STATE(status);
......@@ -339,9 +323,9 @@ enum kcs_result kcs_event(struct kcs_data *kcs, long time)
clear_obf(kcs, status);
if (GET_STATUS_ATN(status))
return KCS_ATTN;
return SI_SM_ATTN;
else
return KCS_SM_IDLE;
return SI_SM_IDLE;
case KCS_START_OP:
if (state != KCS_IDLE) {
......@@ -408,7 +392,7 @@ enum kcs_result kcs_event(struct kcs_data *kcs, long time)
if (state == KCS_READ_STATE) {
if (! check_obf(kcs, status, time))
return KCS_CALL_WITH_DELAY;
return SI_SM_CALL_WITH_DELAY;
read_next_byte(kcs);
} else {
/* We don't implement this exactly like the state
......@@ -421,7 +405,7 @@ enum kcs_result kcs_event(struct kcs_data *kcs, long time)
clear_obf(kcs, status);
kcs->orig_write_count = 0;
kcs->state = KCS_IDLE;
return KCS_TRANSACTION_COMPLETE;
return SI_SM_TRANSACTION_COMPLETE;
}
break;
......@@ -444,7 +428,7 @@ enum kcs_result kcs_event(struct kcs_data *kcs, long time)
break;
}
if (! check_obf(kcs, status, time))
return KCS_CALL_WITH_DELAY;
return SI_SM_CALL_WITH_DELAY;
clear_obf(kcs, status);
write_data(kcs, KCS_READ_BYTE);
......@@ -459,14 +443,14 @@ enum kcs_result kcs_event(struct kcs_data *kcs, long time)
}
if (! check_obf(kcs, status, time))
return KCS_CALL_WITH_DELAY;
return SI_SM_CALL_WITH_DELAY;
clear_obf(kcs, status);
if (kcs->orig_write_count) {
restart_kcs_transaction(kcs);
} else {
kcs->state = KCS_IDLE;
return KCS_TRANSACTION_COMPLETE;
return SI_SM_TRANSACTION_COMPLETE;
}
break;
......@@ -475,14 +459,42 @@ enum kcs_result kcs_event(struct kcs_data *kcs, long time)
}
if (kcs->state == KCS_HOSED) {
init_kcs_data(kcs, kcs->port, kcs->addr);
return KCS_SM_HOSED;
init_kcs_data(kcs, kcs->io);
return SI_SM_HOSED;
}
return KCS_CALL_WITHOUT_DELAY;
return SI_SM_CALL_WITHOUT_DELAY;
}
int kcs_size(void)
static int kcs_size(void)
{
return sizeof(struct kcs_data);
return sizeof(struct si_sm_data);
}
static int kcs_detect(struct si_sm_data *kcs)
{
/* It's impossible for the KCS status register to be all 1's
(assuming a properly functioning, self-initialized BMC),
but that's what you get from reading a bogus address, so we
test that first. */
if (read_status(kcs) == 0xff)
return 1;
return 0;
}
static void kcs_cleanup(struct si_sm_data *kcs)
{
}
struct si_sm_handlers kcs_smi_handlers =
{
.version = IPMI_KCS_VERSION,
.init_data = init_kcs_data,
.start_transaction = start_kcs_transaction,
.get_result = get_kcs_result,
.event = kcs_event,
.detect = kcs_detect,
.cleanup = kcs_cleanup,
.size = kcs_size,
};
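The handler table above is the point of the KCS-to-generic conversion: the SI layer only ever calls through `si_sm_handlers` function pointers, so KCS, SMIC and BT can each plug in their own state machine. A minimal sketch of that dispatch pattern (the struct and names here are illustrative, not the kernel's):

```c
/* Stripped-down function-pointer table in the style of si_sm_handlers. */
struct sm_handlers {
	const char *version;
	int (*size)(void);     /* how much state-machine data to allocate */
};

/* One concrete "state machine" implementation. */
static int fake_size(void) { return 42; }

static const struct sm_handlers fake_handlers = {
	.version = "v31",
	.size = fake_size,
};

/* Generic code never names the implementation, it just dispatches. */
static int sm_query_size(const struct sm_handlers *h)
{
	return h->size();
}
```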
......@@ -44,16 +44,21 @@
#include <linux/ipmi_smi.h>
#include <linux/notifier.h>
#include <linux/init.h>
#include <linux/proc_fs.h>
#define IPMI_MSGHANDLER_VERSION "v31"
struct ipmi_recv_msg *ipmi_alloc_recv_msg(void);
static int ipmi_init_msghandler(void);
static int initialized = 0;
static struct proc_dir_entry *proc_ipmi_root = NULL;
#define MAX_EVENTS_IN_QUEUE 25
/* Don't let a message sit in a queue forever, always time it with at least
the max message timer. */
the max message timer. This is in milliseconds. */
#define MAX_MSG_TIMEOUT 60000
struct ipmi_user
......@@ -82,7 +87,8 @@ struct cmd_rcvr
struct seq_table
{
int inuse : 1;
unsigned int inuse : 1;
unsigned int broadcast : 1;
unsigned long timeout;
unsigned long orig_timeout;
......@@ -111,10 +117,19 @@ struct seq_table
#define NEXT_SEQID(seqid) (((seqid) + 1) & 0x3fffff)
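The sequence id is kept in 22 bits, so `NEXT_SEQID` wraps to zero after 0x3fffff; a 6-bit table index (64 entries) then fits alongside it in one msgid. The packing functions below are hypothetical (the real `STORE_SEQ_IN_MSGID`/`GET_SEQ_FROM_MSGID` macros are outside this hunk); only the `NEXT_SEQID` macro is copied from the driver.

```c
/* Same 22-bit increment as the driver's macro. */
#define NEXT_SEQID(seqid) (((seqid) + 1) & 0x3fffff)

/* Hypothetical packing: 6-bit table index above a 22-bit seqid. */
static long store_seq_in_msgid(unsigned char seq, long seqid)
{
	return ((long)(seq & 0x3f) << 22) | (seqid & 0x3fffff);
}

static void get_seq_from_msgid(long msgid, unsigned char *seq, long *seqid)
{
	*seq = (msgid >> 22) & 0x3f;
	*seqid = msgid & 0x3fffff;
}
```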
struct ipmi_channel
{
unsigned char medium;
unsigned char protocol;
};
#define IPMI_IPMB_NUM_SEQ 64
#define IPMI_MAX_CHANNELS 8
struct ipmi_smi
{
/* What interface number are we? */
int intf_num;
/* The list of upper layers that are using me. We read-lock
this when delivering messages to the upper layer to keep
the user from going away while we are processing the
......@@ -123,6 +138,9 @@ struct ipmi_smi
rwlock_t users_lock;
struct list_head users;
/* Used for wake ups at startup. */
wait_queue_head_t waitq;
/* The IPMI version of the BMC on the other end. */
unsigned char version_major;
unsigned char version_minor;
......@@ -182,6 +200,86 @@ struct ipmi_smi
it. Note that the message will still be freed by the
caller. This only works on the system interface. */
void (*null_user_handler)(ipmi_smi_t intf, struct ipmi_smi_msg *msg);
/* When we are scanning the channels for an SMI, this will
tell which channel we are scanning. */
int curr_channel;
/* Channel information */
struct ipmi_channel channels[IPMI_MAX_CHANNELS];
/* Proc FS stuff. */
struct proc_dir_entry *proc_dir;
char proc_dir_name[10];
spinlock_t counter_lock; /* For making counters atomic. */
/* Commands we got that were invalid. */
unsigned int sent_invalid_commands;
/* Commands we sent to the MC. */
unsigned int sent_local_commands;
/* Responses from the MC that were delivered to a user. */
unsigned int handled_local_responses;
/* Responses from the MC that were not delivered to a user. */
unsigned int unhandled_local_responses;
/* Commands we sent out to the IPMB bus. */
unsigned int sent_ipmb_commands;
/* Commands sent on the IPMB that had errors on the SEND CMD */
unsigned int sent_ipmb_command_errs;
/* Each retransmit increments this count. */
unsigned int retransmitted_ipmb_commands;
/* When a message times out (runs out of retransmits) this is
incremented. */
unsigned int timed_out_ipmb_commands;
/* This is like above, but for broadcasts. Broadcasts are
*not* included in the above count (they are expected to
time out). */
unsigned int timed_out_ipmb_broadcasts;
/* Responses I have sent to the IPMB bus. */
unsigned int sent_ipmb_responses;
/* The response was delivered to the user. */
unsigned int handled_ipmb_responses;
/* The response had invalid data in it. */
unsigned int invalid_ipmb_responses;
/* The response didn't have anyone waiting for it. */
unsigned int unhandled_ipmb_responses;
/* Commands we sent out on the LAN. */
unsigned int sent_lan_commands;
/* Commands sent on the LAN that had errors on the SEND CMD */
unsigned int sent_lan_command_errs;
/* Each retransmit increments this count. */
unsigned int retransmitted_lan_commands;
/* When a message times out (runs out of retransmits) this is
incremented. */
unsigned int timed_out_lan_commands;
/* Responses I have sent on the LAN. */
unsigned int sent_lan_responses;
/* The response was delivered to the user. */
unsigned int handled_lan_responses;
/* The response had invalid data in it. */
unsigned int invalid_lan_responses;
/* The response didn't have anyone waiting for it. */
unsigned int unhandled_lan_responses;
/* The command was delivered to the user. */
unsigned int handled_commands;
/* The command had invalid data in it. */
unsigned int invalid_commands;
/* The command didn't have anyone waiting for it. */
unsigned int unhandled_commands;
/* Invalid data in an event. */
unsigned int invalid_events;
/* Events that were received with the proper format. */
unsigned int events;
};
int
......@@ -264,6 +362,21 @@ int ipmi_smi_watcher_unregister(struct ipmi_smi_watcher *watcher)
return 0;
}
static void
call_smi_watchers(int i)
{
struct ipmi_smi_watcher *w;
down_read(&smi_watchers_sem);
list_for_each_entry(w, &smi_watchers, link) {
if (try_module_get(w->owner)) {
w->new_smi(i);
module_put(w->owner);
}
}
up_read(&smi_watchers_sem);
}
int
ipmi_addr_equal(struct ipmi_addr *addr1, struct ipmi_addr *addr2)
{
......@@ -293,6 +406,19 @@ ipmi_addr_equal(struct ipmi_addr *addr1, struct ipmi_addr *addr2)
&& (ipmb_addr1->lun == ipmb_addr2->lun));
}
if (addr1->addr_type == IPMI_LAN_ADDR_TYPE) {
struct ipmi_lan_addr *lan_addr1
= (struct ipmi_lan_addr *) addr1;
struct ipmi_lan_addr *lan_addr2
= (struct ipmi_lan_addr *) addr2;
return ((lan_addr1->remote_SWID == lan_addr2->remote_SWID)
&& (lan_addr1->local_SWID == lan_addr2->local_SWID)
&& (lan_addr1->session_handle
== lan_addr2->session_handle)
&& (lan_addr1->lun == lan_addr2->lun));
}
return 1;
}
......@@ -322,6 +448,13 @@ int ipmi_validate_addr(struct ipmi_addr *addr, int len)
return 0;
}
if (addr->addr_type == IPMI_LAN_ADDR_TYPE) {
if (len < sizeof(struct ipmi_lan_addr)) {
return -EINVAL;
}
return 0;
}
return -EINVAL;
}
......@@ -341,7 +474,7 @@ unsigned int ipmi_addr_length(int addr_type)
static void deliver_response(struct ipmi_recv_msg *msg)
{
msg->user->handler->ipmi_recv_hndl(msg, msg->user->handler_data);
msg->user->handler->ipmi_recv_hndl(msg, msg->user->handler_data);
}
/* Find the next sequence number not being used and add the given
......@@ -351,6 +484,7 @@ static int intf_next_seq(ipmi_smi_t intf,
struct ipmi_recv_msg *recv_msg,
unsigned long timeout,
int retries,
int broadcast,
unsigned char *seq,
long *seqid)
{
......@@ -373,6 +507,7 @@ static int intf_next_seq(ipmi_smi_t intf,
intf->seq_table[i].timeout = MAX_MSG_TIMEOUT;
intf->seq_table[i].orig_timeout = timeout;
intf->seq_table[i].retries_left = retries;
intf->seq_table[i].broadcast = broadcast;
intf->seq_table[i].inuse = 1;
intf->seq_table[i].seqid = NEXT_SEQID(intf->seq_table[i].seqid);
*seq = i;
......@@ -425,8 +560,8 @@ static int intf_find_seq(ipmi_smi_t intf,
/* Start the timer for a specific sequence table entry. */
static int intf_start_seq_timer(ipmi_smi_t intf,
long msgid)
static int intf_start_seq_timer(ipmi_smi_t intf,
long msgid)
{
int rv = -ENODEV;
unsigned long flags;
......@@ -451,6 +586,46 @@ static int intf_start_seq_timer(ipmi_smi_t intf,
return rv;
}
/* Got an error for the send message for a specific sequence number. */
static int intf_err_seq(ipmi_smi_t intf,
long msgid,
unsigned int err)
{
int rv = -ENODEV;
unsigned long flags;
unsigned char seq;
unsigned long seqid;
struct ipmi_recv_msg *msg = NULL;
GET_SEQ_FROM_MSGID(msgid, seq, seqid);
spin_lock_irqsave(&(intf->seq_lock), flags);
/* We do this verification because the user can be deleted
while a message is outstanding. */
if ((intf->seq_table[seq].inuse)
&& (intf->seq_table[seq].seqid == seqid))
{
struct seq_table *ent = &(intf->seq_table[seq]);
ent->inuse = 0;
msg = ent->recv_msg;
rv = 0;
}
spin_unlock_irqrestore(&(intf->seq_lock), flags);
if (msg) {
msg->recv_type = IPMI_RESPONSE_RECV_TYPE;
msg->msg_data[0] = err;
msg->msg.netfn |= 1; /* Convert to a response. */
msg->msg.data_len = 1;
msg->msg.data = msg->msg_data;
deliver_response(msg);
}
return rv;
}
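The `msg->msg.netfn |= 1` line above works because IPMI pairs each even request network function with the odd response netfn one above it, so setting the low bit turns the stored request into a synthetic error response. A minimal illustration:

```c
/* Even netfn = request, netfn|1 = matching response (per IPMI). */
static unsigned char netfn_to_response(unsigned char netfn)
{
	return netfn | 1;
}

static int netfn_is_response(unsigned char netfn)
{
	return netfn & 1;
}
```

For example, the App request netfn 0x06 maps to the App response netfn 0x07.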
int ipmi_create_user(unsigned int if_num,
struct ipmi_user_hndl *handler,
......@@ -523,15 +698,14 @@ static int ipmi_destroy_user_nolock(ipmi_user_t user)
{
int rv = -ENODEV;
ipmi_user_t t_user;
struct list_head *entry, *entry2;
struct cmd_rcvr *rcvr, *rcvr2;
int i;
unsigned long flags;
/* Find the user and delete them from the list. */
list_for_each(entry, &(user->intf->users)) {
t_user = list_entry(entry, struct ipmi_user, link);
list_for_each_entry(t_user, &(user->intf->users), link) {
if (t_user == user) {
list_del(entry);
list_del(&t_user->link);
rv = 0;
break;
}
......@@ -554,11 +728,9 @@ static int ipmi_destroy_user_nolock(ipmi_user_t user)
/* Remove the user from the command receiver's table. */
write_lock_irqsave(&(user->intf->cmd_rcvr_lock), flags);
list_for_each_safe(entry, entry2, &(user->intf->cmd_rcvrs)) {
struct cmd_rcvr *rcvr;
rcvr = list_entry(entry, struct cmd_rcvr, link);
list_for_each_entry_safe(rcvr, rcvr2, &(user->intf->cmd_rcvrs), link) {
if (rcvr->user == user) {
list_del(entry);
list_del(&rcvr->link);
kfree(rcvr);
}
}
......@@ -621,8 +793,7 @@ unsigned char ipmi_get_my_LUN(ipmi_user_t user)
int ipmi_set_gets_events(ipmi_user_t user, int val)
{
unsigned long flags;
struct list_head *e, *e2;
struct ipmi_recv_msg *msg;
struct ipmi_recv_msg *msg, *msg2;
read_lock(&(user->intf->users_lock));
spin_lock_irqsave(&(user->intf->events_lock), flags);
......@@ -630,9 +801,8 @@ int ipmi_set_gets_events(ipmi_user_t user, int val)
if (val) {
/* Deliver any queued events. */
list_for_each_safe(e, e2, &(user->intf->waiting_events)) {
msg = list_entry(e, struct ipmi_recv_msg, link);
list_del(e);
list_for_each_entry_safe(msg, msg2, &(user->intf->waiting_events), link) {
list_del(&msg->link);
msg->user = user;
deliver_response(msg);
}
......@@ -648,7 +818,7 @@ int ipmi_register_for_cmd(ipmi_user_t user,
unsigned char netfn,
unsigned char cmd)
{
struct list_head *entry;
struct cmd_rcvr *cmp;
unsigned long flags;
struct cmd_rcvr *rcvr;
int rv = 0;
......@@ -666,9 +836,7 @@ int ipmi_register_for_cmd(ipmi_user_t user,
}
/* Make sure the command/netfn is not already registered. */
list_for_each(entry, &(user->intf->cmd_rcvrs)) {
struct cmd_rcvr *cmp;
cmp = list_entry(entry, struct cmd_rcvr, link);
list_for_each_entry(cmp, &(user->intf->cmd_rcvrs), link) {
if ((cmp->netfn == netfn) && (cmp->cmd == cmd)) {
rv = -EBUSY;
break;
......@@ -695,7 +863,6 @@ int ipmi_unregister_for_cmd(ipmi_user_t user,
unsigned char netfn,
unsigned char cmd)
{
struct list_head *entry;
unsigned long flags;
struct cmd_rcvr *rcvr;
int rv = -ENOENT;
......@@ -703,11 +870,10 @@ int ipmi_unregister_for_cmd(ipmi_user_t user,
read_lock(&(user->intf->users_lock));
write_lock_irqsave(&(user->intf->cmd_rcvr_lock), flags);
/* Make sure the command/netfn is not already registered. */
list_for_each(entry, &(user->intf->cmd_rcvrs)) {
rcvr = list_entry(entry, struct cmd_rcvr, link);
list_for_each_entry(rcvr, &(user->intf->cmd_rcvrs), link) {
if ((rcvr->netfn == netfn) && (rcvr->cmd == cmd)) {
rv = 0;
list_del(entry);
list_del(&rcvr->link);
kfree(rcvr);
break;
}
......@@ -771,6 +937,43 @@ static inline void format_ipmb_msg(struct ipmi_smi_msg *smi_msg,
smi_msg->msgid = msgid;
}
static inline void format_lan_msg(struct ipmi_smi_msg *smi_msg,
struct ipmi_msg *msg,
struct ipmi_lan_addr *lan_addr,
long msgid,
unsigned char ipmb_seq,
unsigned char source_lun)
{
/* Format the LAN header data. */
smi_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
smi_msg->data[1] = IPMI_SEND_MSG_CMD;
smi_msg->data[2] = lan_addr->channel;
smi_msg->data[3] = lan_addr->session_handle;
smi_msg->data[4] = lan_addr->remote_SWID;
smi_msg->data[5] = (msg->netfn << 2) | (lan_addr->lun & 0x3);
smi_msg->data[6] = ipmb_checksum(&(smi_msg->data[4]), 2);
smi_msg->data[7] = lan_addr->local_SWID;
smi_msg->data[8] = (ipmb_seq << 2) | source_lun;
smi_msg->data[9] = msg->cmd;
/* Now tack on the data to the message. */
if (msg->data_len > 0)
memcpy(&(smi_msg->data[10]), msg->data,
msg->data_len);
smi_msg->data_size = msg->data_len + 10;
/* Now calculate the checksum and tack it on. */
smi_msg->data[smi_msg->data_size]
= ipmb_checksum(&(smi_msg->data[7]),
smi_msg->data_size-7);
/* Add on the checksum size and the offset from the
broadcast. */
smi_msg->data_size += 1;
smi_msg->msgid = msgid;
}
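The trailing byte appended above is the IPMI 2's-complement checksum: the byte that makes the covered range plus the checksum sum to zero mod 256. A standalone sketch with that property (mirroring what the driver's `ipmb_checksum()` must compute, though its exact implementation is outside this hunk):

```c
#include <stddef.h>

/* IPMI 2's-complement checksum over data[0..len-1]. */
static unsigned char ipmb_checksum_sketch(const unsigned char *data, size_t len)
{
	unsigned char sum = 0;
	size_t i;

	for (i = 0; i < len; i++)
		sum += data[i];
	return (unsigned char)-sum;   /* sum + checksum == 0 mod 256 */
}
```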
/* Separate from ipmi_request so that the user does not have to be
supplied in certain circumstances (mainly at panic time). If
messages are supplied, they will be freed, even if an error
......@@ -780,11 +983,14 @@ static inline int i_ipmi_request(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
void *supplied_smi,
struct ipmi_recv_msg *supplied_recv,
int priority,
unsigned char source_address,
unsigned char source_lun)
unsigned char source_lun,
int retries,
unsigned int retry_time_ms)
{
int rv = 0;
struct ipmi_smi_msg *smi_msg;
......@@ -800,6 +1006,7 @@ static inline int i_ipmi_request(ipmi_user_t user,
return -ENOMEM;
}
}
recv_msg->user_msg_data = user_msg_data;
if (supplied_smi) {
smi_msg = (struct ipmi_smi_msg *) supplied_smi;
......@@ -811,11 +1018,6 @@ static inline int i_ipmi_request(ipmi_user_t user,
}
}
if (addr->channel > IPMI_NUM_CHANNELS) {
rv = -EINVAL;
goto out_err;
}
recv_msg->user = user;
recv_msg->msgid = msgid;
/* Store the message to send in the receive message so timeout
......@@ -825,10 +1027,20 @@ static inline int i_ipmi_request(ipmi_user_t user,
if (addr->addr_type == IPMI_SYSTEM_INTERFACE_ADDR_TYPE) {
struct ipmi_system_interface_addr *smi_addr;
if (msg->netfn & 1) {
/* Responses are not allowed to the SMI. */
rv = -EINVAL;
goto out_err;
}
smi_addr = (struct ipmi_system_interface_addr *) addr;
if (smi_addr->lun > 3)
return -EINVAL;
if (smi_addr->lun > 3) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
memcpy(&recv_msg->addr, smi_addr, sizeof(*smi_addr));
......@@ -839,11 +1051,17 @@ static inline int i_ipmi_request(ipmi_user_t user,
{
/* We don't let the user do these, since we manage
the sequence numbers. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
if ((msg->data_len + 2) > IPMI_MAX_MSG_LENGTH) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EMSGSIZE;
goto out_err;
}
......@@ -855,41 +1073,69 @@ static inline int i_ipmi_request(ipmi_user_t user,
if (msg->data_len > 0)
memcpy(&(smi_msg->data[2]), msg->data, msg->data_len);
smi_msg->data_size = msg->data_len + 2;
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_local_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
} else if ((addr->addr_type == IPMI_IPMB_ADDR_TYPE)
|| (addr->addr_type == IPMI_IPMB_BROADCAST_ADDR_TYPE))
{
struct ipmi_ipmb_addr *ipmb_addr;
unsigned char ipmb_seq;
long seqid;
int broadcast;
int retries;
int broadcast = 0;
if (addr->channel > IPMI_NUM_CHANNELS) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
if (addr == NULL) {
if (intf->channels[addr->channel].medium
!= IPMI_CHANNEL_MEDIUM_IPMB)
{
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
if (retries < 0) {
if (addr->addr_type == IPMI_IPMB_BROADCAST_ADDR_TYPE)
retries = 0; /* Don't retry broadcasts. */
else
retries = 4;
}
if (addr->addr_type == IPMI_IPMB_BROADCAST_ADDR_TYPE) {
/* Broadcasts add a zero at the beginning of the
message, but otherwise is the same as an IPMB
address. */
addr->addr_type = IPMI_IPMB_ADDR_TYPE;
broadcast = 1;
retries = 0; /* Don't retry broadcasts. */
} else {
broadcast = 0;
retries = 4;
}
/* Default to 1 second retries. */
if (retry_time_ms == 0)
retry_time_ms = 1000;
/* 9 for the header and 1 for the checksum, plus
possibly one for the broadcast. */
if ((msg->data_len + 10 + broadcast) > IPMI_MAX_MSG_LENGTH) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EMSGSIZE;
goto out_err;
}
ipmb_addr = (struct ipmi_ipmb_addr *) addr;
if (ipmb_addr->lun > 3) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
......@@ -899,21 +1145,32 @@ static inline int i_ipmi_request(ipmi_user_t user,
if (recv_msg->msg.netfn & 0x1) {
/* It's a response, so use the user's sequence
from msgid. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_ipmb_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
format_ipmb_msg(smi_msg, msg, ipmb_addr, msgid,
msgid, broadcast,
source_address, source_lun);
/* Save the receive message so we can use it
to deliver the response. */
smi_msg->user_data = recv_msg;
} else {
/* It's a command, so get a sequence for it. */
spin_lock_irqsave(&(intf->seq_lock), flags);
spin_lock(&intf->counter_lock);
intf->sent_ipmb_commands++;
spin_unlock(&intf->counter_lock);
/* Create an entry in the sequence table with the
caller's timeout and retry count. */
rv = intf_next_seq(intf,
recv_msg,
1000,
retry_time_ms,
retries,
broadcast,
&ipmb_seq,
&seqid);
if (rv) {
......@@ -939,6 +1196,117 @@ static inline int i_ipmi_request(ipmi_user_t user,
recv_msg->msg.data = recv_msg->msg_data;
recv_msg->msg.data_len = smi_msg->data_size;
/* We don't unlock until here, because we need
to copy the completed message into the
recv_msg before we release the lock.
Otherwise, race conditions may bite us. I
know that's pretty paranoid, but I prefer
to be correct. */
spin_unlock_irqrestore(&(intf->seq_lock), flags);
}
} else if (addr->addr_type == IPMI_LAN_ADDR_TYPE) {
struct ipmi_lan_addr *lan_addr;
unsigned char ipmb_seq;
long seqid;
if (addr->channel > IPMI_NUM_CHANNELS) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
if ((intf->channels[addr->channel].medium
!= IPMI_CHANNEL_MEDIUM_8023LAN)
&& (intf->channels[addr->channel].medium
!= IPMI_CHANNEL_MEDIUM_ASYNC))
{
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
retries = 4;
/* Default to 1 second retries. */
if (retry_time_ms == 0)
retry_time_ms = 1000;
/* 11 for the header and 1 for the checksum. */
if ((msg->data_len + 12) > IPMI_MAX_MSG_LENGTH) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EMSGSIZE;
goto out_err;
}
lan_addr = (struct ipmi_lan_addr *) addr;
if (lan_addr->lun > 3) {
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
memcpy(&recv_msg->addr, lan_addr, sizeof(*lan_addr));
if (recv_msg->msg.netfn & 0x1) {
/* It's a response, so use the user's sequence
from msgid. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_lan_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
format_lan_msg(smi_msg, msg, lan_addr, msgid,
msgid, source_lun);
/* Save the receive message so we can use it
to deliver the response. */
smi_msg->user_data = recv_msg;
} else {
/* It's a command, so get a sequence for it. */
spin_lock_irqsave(&(intf->seq_lock), flags);
spin_lock(&intf->counter_lock);
intf->sent_lan_commands++;
spin_unlock(&intf->counter_lock);
/* Create a sequence number with a 1 second
timeout and 4 retries. */
rv = intf_next_seq(intf,
recv_msg,
retry_time_ms,
retries,
0,
&ipmb_seq,
&seqid);
if (rv) {
/* We have used up all the sequence numbers,
probably, so abort. */
spin_unlock_irqrestore(&(intf->seq_lock),
flags);
goto out_err;
}
/* Store the sequence number in the message,
so that when the send message response
comes back we can start the timer. */
format_lan_msg(smi_msg, msg, lan_addr,
STORE_SEQ_IN_MSGID(ipmb_seq, seqid),
ipmb_seq, source_lun);
/* Copy the message into the recv message data, so we
can retransmit it later if necessary. */
memcpy(recv_msg->msg_data, smi_msg->data,
smi_msg->data_size);
recv_msg->msg.data = recv_msg->msg_data;
recv_msg->msg.data_len = smi_msg->data_size;
/* We don't unlock until here, because we need
to copy the completed message into the
recv_msg before we release the lock.
......@@ -949,16 +1317,19 @@ static inline int i_ipmi_request(ipmi_user_t user,
}
} else {
/* Unknown address type. */
rv = -EINVAL;
goto out_err;
spin_lock_irqsave(&intf->counter_lock, flags);
intf->sent_invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = -EINVAL;
goto out_err;
}
#if DEBUG_MSGING
{
int m;
for (m=0; m<smi_msg->data_size; m++)
printk(" %2.2x", smi_msg->data[m]);
printk("\n");
int m;
for (m=0; m<smi_msg->data_size; m++)
printk(" %2.2x", smi_msg->data[m]);
printk("\n");
}
#endif
intf->handlers->sender(intf->send_info, smi_msg, priority);
@@ -975,6 +1346,7 @@ int ipmi_request(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
int priority)
{
return i_ipmi_request(user,
@@ -982,16 +1354,42 @@ int ipmi_request(ipmi_user_t user,
addr,
msgid,
msg,
user_msg_data,
NULL, NULL,
priority,
user->intf->my_address,
user->intf->my_lun,
-1, 0);
}
int ipmi_request_settime(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
int priority,
int retries,
unsigned int retry_time_ms)
{
return i_ipmi_request(user,
user->intf,
addr,
msgid,
msg,
user_msg_data,
NULL, NULL,
priority,
user->intf->my_address,
user->intf->my_lun);
user->intf->my_lun,
retries,
retry_time_ms);
}
int ipmi_request_supply_msgs(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
void *supplied_smi,
struct ipmi_recv_msg *supplied_recv,
int priority)
@@ -1001,17 +1399,20 @@ int ipmi_request_supply_msgs(ipmi_user_t user,
addr,
msgid,
msg,
user_msg_data,
supplied_smi,
supplied_recv,
priority,
user->intf->my_address,
user->intf->my_lun);
user->intf->my_lun,
-1, 0);
}
int ipmi_request_with_source(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
int priority,
unsigned char source_address,
unsigned char source_lun)
@@ -1021,10 +1422,215 @@ int ipmi_request_with_source(ipmi_user_t user,
addr,
msgid,
msg,
user_msg_data,
NULL, NULL,
priority,
source_address,
source_lun);
source_lun,
-1, 0);
}
static int ipmb_file_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
char *out = (char *) page;
ipmi_smi_t intf = data;
return sprintf(out, "%x\n", intf->my_address);
}
static int version_file_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
char *out = (char *) page;
ipmi_smi_t intf = data;
return sprintf(out, "%d.%d\n",
intf->version_major, intf->version_minor);
}
static int stat_file_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
char *out = (char *) page;
ipmi_smi_t intf = data;
out += sprintf(out, "sent_invalid_commands: %d\n",
intf->sent_invalid_commands);
out += sprintf(out, "sent_local_commands: %d\n",
intf->sent_local_commands);
out += sprintf(out, "handled_local_responses: %d\n",
intf->handled_local_responses);
out += sprintf(out, "unhandled_local_responses: %d\n",
intf->unhandled_local_responses);
out += sprintf(out, "sent_ipmb_commands: %d\n",
intf->sent_ipmb_commands);
out += sprintf(out, "sent_ipmb_command_errs: %d\n",
intf->sent_ipmb_command_errs);
out += sprintf(out, "retransmitted_ipmb_commands: %d\n",
intf->retransmitted_ipmb_commands);
out += sprintf(out, "timed_out_ipmb_commands: %d\n",
intf->timed_out_ipmb_commands);
out += sprintf(out, "timed_out_ipmb_broadcasts: %d\n",
intf->timed_out_ipmb_broadcasts);
out += sprintf(out, "sent_ipmb_responses: %d\n",
intf->sent_ipmb_responses);
out += sprintf(out, "handled_ipmb_responses: %d\n",
intf->handled_ipmb_responses);
out += sprintf(out, "invalid_ipmb_responses: %d\n",
intf->invalid_ipmb_responses);
out += sprintf(out, "unhandled_ipmb_responses: %d\n",
intf->unhandled_ipmb_responses);
out += sprintf(out, "sent_lan_commands: %d\n",
intf->sent_lan_commands);
out += sprintf(out, "sent_lan_command_errs: %d\n",
intf->sent_lan_command_errs);
out += sprintf(out, "retransmitted_lan_commands: %d\n",
intf->retransmitted_lan_commands);
out += sprintf(out, "timed_out_lan_commands: %d\n",
intf->timed_out_lan_commands);
out += sprintf(out, "sent_lan_responses: %d\n",
intf->sent_lan_responses);
out += sprintf(out, "handled_lan_responses: %d\n",
intf->handled_lan_responses);
out += sprintf(out, "invalid_lan_responses: %d\n",
intf->invalid_lan_responses);
out += sprintf(out, "unhandled_lan_responses: %d\n",
intf->unhandled_lan_responses);
out += sprintf(out, "handled_commands: %d\n",
intf->handled_commands);
out += sprintf(out, "invalid_commands: %d\n",
intf->invalid_commands);
out += sprintf(out, "unhandled_commands: %d\n",
intf->unhandled_commands);
out += sprintf(out, "invalid_events: %d\n",
intf->invalid_events);
out += sprintf(out, "events: %d\n",
intf->events);
return (out - ((char *) page));
}
int ipmi_smi_add_proc_entry(ipmi_smi_t smi, char *name,
read_proc_t *read_proc, write_proc_t *write_proc,
void *data, struct module *owner)
{
struct proc_dir_entry *file;
int rv = 0;
file = create_proc_entry(name, 0, smi->proc_dir);
if (!file)
rv = -ENOMEM;
else {
file->nlink = 1;
file->data = data;
file->read_proc = read_proc;
file->write_proc = write_proc;
file->owner = owner;
}
return rv;
}
static int add_proc_entries(ipmi_smi_t smi, int num)
{
int rv = 0;
sprintf(smi->proc_dir_name, "%d", num);
smi->proc_dir = proc_mkdir(smi->proc_dir_name, proc_ipmi_root);
if (!smi->proc_dir)
rv = -ENOMEM;
else {
smi->proc_dir->owner = THIS_MODULE;
}
if (rv == 0)
rv = ipmi_smi_add_proc_entry(smi, "stats",
stat_file_read_proc, NULL,
smi, THIS_MODULE);
if (rv == 0)
rv = ipmi_smi_add_proc_entry(smi, "ipmb",
ipmb_file_read_proc, NULL,
smi, THIS_MODULE);
if (rv == 0)
rv = ipmi_smi_add_proc_entry(smi, "version",
version_file_read_proc, NULL,
smi, THIS_MODULE);
return rv;
}
static int
send_channel_info_cmd(ipmi_smi_t intf, int chan)
{
struct ipmi_msg msg;
unsigned char data[1];
struct ipmi_system_interface_addr si;
si.addr_type = IPMI_SYSTEM_INTERFACE_ADDR_TYPE;
si.channel = IPMI_BMC_CHANNEL;
si.lun = 0;
msg.netfn = IPMI_NETFN_APP_REQUEST;
msg.cmd = IPMI_GET_CHANNEL_INFO_CMD;
msg.data = data;
msg.data_len = 1;
data[0] = chan;
return i_ipmi_request(NULL,
intf,
(struct ipmi_addr *) &si,
0,
&msg,
NULL,
NULL,
NULL,
0,
intf->my_address,
intf->my_lun,
-1, 0);
}
static void
channel_handler(ipmi_smi_t intf, struct ipmi_smi_msg *msg)
{
int rv = 0;
int chan;
if ((msg->rsp[0] == (IPMI_NETFN_APP_RESPONSE << 2))
&& (msg->rsp[1] == IPMI_GET_CHANNEL_INFO_CMD))
{
/* It's the one we want */
if (msg->rsp[2] != 0) {
/* Got an error from the channel, just go on. */
goto next_channel;
}
if (msg->rsp_size < 6) {
/* Message not big enough, just go on. */
goto next_channel;
}
chan = intf->curr_channel;
intf->channels[chan].medium = msg->rsp[4] & 0x7f;
intf->channels[chan].protocol = msg->rsp[5] & 0x1f;
next_channel:
intf->curr_channel++;
if (intf->curr_channel >= IPMI_MAX_CHANNELS)
wake_up(&intf->waitq);
else
rv = send_channel_info_cmd(intf, intf->curr_channel);
if (rv) {
/* Got an error somehow, just give up. */
intf->curr_channel = IPMI_MAX_CHANNELS;
wake_up(&intf->waitq);
printk(KERN_WARNING "ipmi_msghandler: Error sending "
"channel information: 0x%x\n",
rv);
}
}
}
int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
@@ -1036,7 +1642,6 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
int i, j;
int rv;
ipmi_smi_t new_intf;
struct list_head *entry;
unsigned long flags;
@@ -1055,12 +1660,16 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
new_intf = kmalloc(sizeof(*new_intf), GFP_KERNEL);
if (!new_intf)
return -ENOMEM;
memset(new_intf, 0, sizeof(*new_intf));
new_intf->proc_dir = NULL;
rv = -ENOMEM;
down_write(&interfaces_sem);
for (i=0; i<MAX_IPMI_INTERFACES; i++) {
if (ipmi_interfaces[i] == NULL) {
new_intf->intf_num = i;
new_intf->version_major = version_major;
new_intf->version_minor = version_minor;
new_intf->my_address = IPMI_BMC_SLAVE_ADDR;
@@ -1081,9 +1690,12 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
INIT_LIST_HEAD(&(new_intf->waiting_events));
new_intf->waiting_events_count = 0;
rwlock_init(&(new_intf->cmd_rcvr_lock));
init_waitqueue_head(&new_intf->waitq);
INIT_LIST_HEAD(&(new_intf->cmd_rcvrs));
new_intf->all_cmd_rcvr = NULL;
spin_lock_init(&(new_intf->counter_lock));
spin_lock_irqsave(&interfaces_lock, flags);
ipmi_interfaces[i] = new_intf;
spin_unlock_irqrestore(&interfaces_lock, flags);
@@ -1096,46 +1708,71 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
downgrade_write(&interfaces_sem);
if (rv == 0)
rv = add_proc_entries(*intf, i);
if (rv == 0) {
/* Call all the watcher interfaces to tell them that a
new interface is available. */
down_read(&smi_watchers_sem);
list_for_each(entry, &smi_watchers) {
struct ipmi_smi_watcher *w;
w = list_entry(entry, struct ipmi_smi_watcher, link);
w->new_smi(i);
}
up_read(&smi_watchers_sem);
if ((version_major > 1)
|| ((version_major == 1) && (version_minor >= 5)))
{
/* Start scanning the channels to see what is
available. */
(*intf)->null_user_handler = channel_handler;
(*intf)->curr_channel = 0;
rv = send_channel_info_cmd(*intf, 0);
if (rv)
goto out;
/* Wait for the channel info to be read. */
up_read(&interfaces_sem);
wait_event((*intf)->waitq,
((*intf)->curr_channel>=IPMI_MAX_CHANNELS));
down_read(&interfaces_sem);
if (ipmi_interfaces[i] != new_intf)
/* Well, it went away. Just return. */
goto out;
} else {
/* Assume a single IPMB channel at zero. */
(*intf)->channels[0].medium = IPMI_CHANNEL_MEDIUM_IPMB;
(*intf)->channels[0].protocol
= IPMI_CHANNEL_PROTOCOL_IPMB;
}
/* Call all the watcher interfaces to tell
them that a new interface is available. */
call_smi_watchers(i);
}
out:
up_read(&interfaces_sem);
if (rv)
if (rv) {
if (new_intf->proc_dir)
remove_proc_entry(new_intf->proc_dir_name,
proc_ipmi_root);
kfree(new_intf);
}
return rv;
}
static void free_recv_msg_list(struct list_head *q)
{
struct list_head *entry, *entry2;
struct ipmi_recv_msg *msg;
struct ipmi_recv_msg *msg, *msg2;
list_for_each_safe(entry, entry2, q) {
msg = list_entry(entry, struct ipmi_recv_msg, link);
list_del(entry);
list_for_each_entry_safe(msg, msg2, q, link) {
list_del(&msg->link);
ipmi_free_recv_msg(msg);
}
}
static void free_cmd_rcvr_list(struct list_head *q)
{
struct list_head *entry, *entry2;
struct cmd_rcvr *rcvr;
struct cmd_rcvr *rcvr, *rcvr2;
list_for_each_safe(entry, entry2, q) {
rcvr = list_entry(entry, struct cmd_rcvr, link);
list_del(entry);
list_for_each_entry_safe(rcvr, rcvr2, q, link) {
list_del(&rcvr->link);
kfree(rcvr);
}
}
@@ -1159,16 +1796,18 @@ static void clean_up_interface_data(ipmi_smi_t intf)
int ipmi_unregister_smi(ipmi_smi_t intf)
{
int rv = -ENODEV;
int i;
struct list_head *entry;
unsigned long flags;
int rv = -ENODEV;
int i;
struct ipmi_smi_watcher *w;
unsigned long flags;
down_write(&interfaces_sem);
if (list_empty(&(intf->users)))
{
for (i=0; i<MAX_IPMI_INTERFACES; i++) {
if (ipmi_interfaces[i] == intf) {
remove_proc_entry(intf->proc_dir_name,
proc_ipmi_root);
spin_lock_irqsave(&interfaces_lock, flags);
ipmi_interfaces[i] = NULL;
clean_up_interface_data(intf);
@@ -1191,11 +1830,7 @@ int ipmi_unregister_smi(ipmi_smi_t intf)
/* Call all the watcher interfaces to tell them that
an interface is gone. */
down_read(&smi_watchers_sem);
list_for_each(entry, &smi_watchers) {
struct ipmi_smi_watcher *w;
w = list_entry(entry,
struct ipmi_smi_watcher,
link);
list_for_each_entry(w, &smi_watchers, link) {
w->smi_gone(i);
}
up_read(&smi_watchers_sem);
@@ -1203,20 +1838,28 @@ int ipmi_unregister_smi(ipmi_smi_t intf)
return 0;
}
static int handle_get_msg_rsp(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
static int handle_ipmb_get_msg_rsp(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
{
struct ipmi_ipmb_addr ipmb_addr;
struct ipmi_recv_msg *recv_msg;
unsigned long flags;
if (msg->rsp_size < 11)
/* This is 11, not 10, because the response must contain a
* completion code. */
if (msg->rsp_size < 11) {
/* Message not big enough, just ignore it. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->invalid_ipmb_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
return 0;
}
if (msg->rsp[2] != 0)
if (msg->rsp[2] != 0) {
/* An error getting the response, just ignore it. */
return 0;
}
ipmb_addr.addr_type = IPMI_IPMB_ADDR_TYPE;
ipmb_addr.slave_addr = msg->rsp[6];
@@ -1235,6 +1878,9 @@ static int handle_get_msg_rsp(ipmi_smi_t intf,
{
/* We were unable to find the sequence number,
so just nuke the message. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->unhandled_ipmb_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
return 0;
}
@@ -1248,26 +1894,33 @@ static int handle_get_msg_rsp(ipmi_smi_t intf,
recv_msg->msg.data = recv_msg->msg_data;
recv_msg->msg.data_len = msg->rsp_size - 10;
recv_msg->recv_type = IPMI_RESPONSE_RECV_TYPE;
spin_lock_irqsave(&intf->counter_lock, flags);
intf->handled_ipmb_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
deliver_response(recv_msg);
return 0;
}
static int handle_get_msg_cmd(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
static int handle_ipmb_get_msg_cmd(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
{
struct list_head *entry;
struct cmd_rcvr *rcvr;
int rv = 0;
unsigned char netfn;
unsigned char cmd;
ipmi_user_t user = NULL;
int rv = 0;
unsigned char netfn;
unsigned char cmd;
ipmi_user_t user = NULL;
struct ipmi_ipmb_addr *ipmb_addr;
struct ipmi_recv_msg *recv_msg;
unsigned long flags;
if (msg->rsp_size < 10)
if (msg->rsp_size < 10) {
/* Message not big enough, just ignore it. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
return 0;
}
if (msg->rsp[2] != 0) {
/* An error getting the response, just ignore it. */
@@ -1283,8 +1936,7 @@ static int handle_get_msg_cmd(ipmi_smi_t intf,
user = intf->all_cmd_rcvr;
} else {
/* Find the command/netfn. */
list_for_each(entry, &(intf->cmd_rcvrs)) {
rcvr = list_entry(entry, struct cmd_rcvr, link);
list_for_each_entry(rcvr, &(intf->cmd_rcvrs), link) {
if ((rcvr->netfn == netfn) && (rcvr->cmd == cmd)) {
user = rcvr->user;
break;
@@ -1295,6 +1947,10 @@ static int handle_get_msg_cmd(ipmi_smi_t intf,
if (user == NULL) {
/* We didn't find a user, deliver an error response. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->unhandled_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg->data[1] = IPMI_SEND_MSG_CMD;
msg->data[2] = msg->rsp[3];
@@ -1309,12 +1965,25 @@ static int handle_get_msg_cmd(ipmi_smi_t intf,
msg->data[10] = ipmb_checksum(&(msg->data[6]), 4);
msg->data_size = 11;
#if DEBUG_MSGING
{
int m;
printk("Invalid command:");
for (m=0; m<msg->data_size; m++)
printk(" %2.2x", msg->data[m]);
printk("\n");
}
#endif
intf->handlers->sender(intf->send_info, msg, 0);
rv = -1; /* We used the message, so return the value that
causes it to not be freed or queued. */
} else {
/* Deliver the message to the user. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->handled_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
recv_msg = ipmi_alloc_recv_msg();
if (! recv_msg) {
/* We couldn't allocate memory for the
@@ -1322,18 +1991,24 @@ static int handle_get_msg_cmd(ipmi_smi_t intf,
later. */
rv = 1;
} else {
/* Extract the source address from the data. */
ipmb_addr = (struct ipmi_ipmb_addr *) &recv_msg->addr;
ipmb_addr->addr_type = IPMI_IPMB_ADDR_TYPE;
ipmb_addr->slave_addr = msg->rsp[6];
ipmb_addr->lun = msg->rsp[7] & 3;
ipmb_addr->channel = msg->rsp[3];
ipmb_addr->channel = msg->rsp[3] & 0xf;
/* Extract the rest of the message information
   from the IPMB header. */
recv_msg->user = user;
recv_msg->recv_type = IPMI_CMD_RECV_TYPE;
recv_msg->msgid = msg->rsp[7] >> 2;
recv_msg->msg.netfn = msg->rsp[4] >> 2;
recv_msg->msg.cmd = msg->rsp[8];
recv_msg->msg.data = recv_msg->msg_data;
/* We chop off 10, not 9 bytes because the checksum
at the end also needs to be removed. */
recv_msg->msg.data_len = msg->rsp_size - 10;
memcpy(recv_msg->msg_data,
&(msg->rsp[9]),
@@ -1345,6 +2020,169 @@ static int handle_get_msg_cmd(ipmi_smi_t intf,
return rv;
}
static int handle_lan_get_msg_rsp(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
{
struct ipmi_lan_addr lan_addr;
struct ipmi_recv_msg *recv_msg;
unsigned long flags;
/* This is 13, not 12, because the response must contain a
* completion code. */
if (msg->rsp_size < 13) {
/* Message not big enough, just ignore it. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->invalid_lan_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
return 0;
}
if (msg->rsp[2] != 0) {
/* An error getting the response, just ignore it. */
return 0;
}
lan_addr.addr_type = IPMI_LAN_ADDR_TYPE;
lan_addr.session_handle = msg->rsp[4];
lan_addr.remote_SWID = msg->rsp[8];
lan_addr.local_SWID = msg->rsp[5];
lan_addr.channel = msg->rsp[3] & 0x0f;
lan_addr.privilege = msg->rsp[3] >> 4;
lan_addr.lun = msg->rsp[9] & 3;
/* It's a response from a remote entity. Look up the sequence
number and handle the response. */
if (intf_find_seq(intf,
msg->rsp[9] >> 2,
msg->rsp[3] & 0x0f,
msg->rsp[10],
(msg->rsp[6] >> 2) & (~1),
(struct ipmi_addr *) &(lan_addr),
&recv_msg))
{
/* We were unable to find the sequence number,
so just nuke the message. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->unhandled_lan_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
return 0;
}
memcpy(recv_msg->msg_data,
&(msg->rsp[11]),
msg->rsp_size - 11);
/* The other fields matched, so no need to set them, except
for netfn, which needs to be the response that was
returned, not the request value. */
recv_msg->msg.netfn = msg->rsp[6] >> 2;
recv_msg->msg.data = recv_msg->msg_data;
recv_msg->msg.data_len = msg->rsp_size - 12;
recv_msg->recv_type = IPMI_RESPONSE_RECV_TYPE;
spin_lock_irqsave(&intf->counter_lock, flags);
intf->handled_lan_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
deliver_response(recv_msg);
return 0;
}
static int handle_lan_get_msg_cmd(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
{
struct cmd_rcvr *rcvr;
int rv = 0;
unsigned char netfn;
unsigned char cmd;
ipmi_user_t user = NULL;
struct ipmi_lan_addr *lan_addr;
struct ipmi_recv_msg *recv_msg;
unsigned long flags;
if (msg->rsp_size < 12) {
/* Message not big enough, just ignore it. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->invalid_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
return 0;
}
if (msg->rsp[2] != 0) {
/* An error getting the response, just ignore it. */
return 0;
}
netfn = msg->rsp[6] >> 2;
cmd = msg->rsp[10];
read_lock(&(intf->cmd_rcvr_lock));
if (intf->all_cmd_rcvr) {
user = intf->all_cmd_rcvr;
} else {
/* Find the command/netfn. */
list_for_each_entry(rcvr, &(intf->cmd_rcvrs), link) {
if ((rcvr->netfn == netfn) && (rcvr->cmd == cmd)) {
user = rcvr->user;
break;
}
}
}
read_unlock(&(intf->cmd_rcvr_lock));
if (user == NULL) {
/* We didn't find a user, deliver an error response. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->unhandled_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
rv = 0; /* Don't do anything with these messages, just
allow them to be freed. */
} else {
/* Deliver the message to the user. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->handled_commands++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
recv_msg = ipmi_alloc_recv_msg();
if (! recv_msg) {
/* We couldn't allocate memory for the
message, so requeue it for handling
later. */
rv = 1;
} else {
/* Extract the source address from the data. */
lan_addr = (struct ipmi_lan_addr *) &recv_msg->addr;
lan_addr->addr_type = IPMI_LAN_ADDR_TYPE;
lan_addr->session_handle = msg->rsp[4];
lan_addr->remote_SWID = msg->rsp[8];
lan_addr->local_SWID = msg->rsp[5];
lan_addr->lun = msg->rsp[9] & 3;
lan_addr->channel = msg->rsp[3] & 0xf;
lan_addr->privilege = msg->rsp[3] >> 4;
/* Extract the rest of the message information
   from the LAN header. */
recv_msg->user = user;
recv_msg->recv_type = IPMI_CMD_RECV_TYPE;
recv_msg->msgid = msg->rsp[9] >> 2;
recv_msg->msg.netfn = msg->rsp[6] >> 2;
recv_msg->msg.cmd = msg->rsp[10];
recv_msg->msg.data = recv_msg->msg_data;
/* We chop off 12, not 11 bytes because the checksum
at the end also needs to be removed. */
recv_msg->msg.data_len = msg->rsp_size - 12;
memcpy(recv_msg->msg_data,
&(msg->rsp[11]),
msg->rsp_size - 12);
deliver_response(recv_msg);
}
}
return rv;
}
static void copy_event_into_recv_msg(struct ipmi_recv_msg *recv_msg,
struct ipmi_smi_msg *msg)
{
@@ -1368,9 +2206,8 @@ static void copy_event_into_recv_msg(struct ipmi_recv_msg *recv_msg,
static int handle_read_event_rsp(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
{
struct ipmi_recv_msg *recv_msg;
struct ipmi_recv_msg *recv_msg, *recv_msg2;
struct list_head msgs;
struct list_head *entry, *entry2;
ipmi_user_t user;
int rv = 0;
int deliver_count = 0;
@@ -1378,6 +2215,9 @@ static int handle_read_event_rsp(ipmi_smi_t intf,
if (msg->rsp_size < 19) {
/* Message is too small to be an IPMB event. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->invalid_events++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
return 0;
}
@@ -1390,21 +2230,20 @@ static int handle_read_event_rsp(ipmi_smi_t intf,
spin_lock_irqsave(&(intf->events_lock), flags);
spin_lock(&intf->counter_lock);
intf->events++;
spin_unlock(&intf->counter_lock);
/* Allocate and fill in one message for every user that is getting
events. */
list_for_each(entry, &(intf->users)) {
user = list_entry(entry, struct ipmi_user, link);
list_for_each_entry(user, &(intf->users), link) {
if (! user->gets_events)
continue;
recv_msg = ipmi_alloc_recv_msg();
if (! recv_msg) {
list_for_each_safe(entry, entry2, &msgs) {
recv_msg = list_entry(entry,
struct ipmi_recv_msg,
link);
list_del(entry);
list_for_each_entry_safe(recv_msg, recv_msg2, &msgs, link) {
list_del(&recv_msg->link);
ipmi_free_recv_msg(recv_msg);
}
/* We couldn't allocate memory for the
@@ -1423,11 +2262,8 @@ static int handle_read_event_rsp(ipmi_smi_t intf,
if (deliver_count) {
/* Now deliver all the messages. */
list_for_each_safe(entry, entry2, &msgs) {
recv_msg = list_entry(entry,
struct ipmi_recv_msg,
link);
list_del(entry);
list_for_each_entry_safe(recv_msg, recv_msg2, &msgs, link) {
list_del(&recv_msg->link);
deliver_response(recv_msg);
}
} else if (intf->waiting_events_count < MAX_EVENTS_IN_QUEUE) {
@@ -1462,15 +2298,14 @@ static int handle_bmc_rsp(ipmi_smi_t intf,
{
struct ipmi_recv_msg *recv_msg;
int found = 0;
struct list_head *entry;
struct ipmi_user *user;
unsigned long flags;
recv_msg = (struct ipmi_recv_msg *) msg->user_data;
/* Make sure the user still exists. */
list_for_each(entry, &(intf->users)) {
if (list_entry(entry, struct ipmi_user, link)
== recv_msg->user)
{
list_for_each_entry(user, &(intf->users), link) {
if (user == recv_msg->user) {
/* Found it, so we can deliver it */
found = 1;
break;
@@ -1482,10 +2317,16 @@ static int handle_bmc_rsp(ipmi_smi_t intf,
if (!recv_msg->user && intf->null_user_handler)
intf->null_user_handler(intf, msg);
/* The user for the message went away, so give up. */
spin_lock_irqsave(&intf->counter_lock, flags);
intf->unhandled_local_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
ipmi_free_recv_msg(recv_msg);
} else {
struct ipmi_system_interface_addr *smi_addr;
spin_lock_irqsave(&intf->counter_lock, flags);
intf->handled_local_responses++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
recv_msg->recv_type = IPMI_RESPONSE_RECV_TYPE;
recv_msg->msgid = msg->msgid;
smi_addr = ((struct ipmi_system_interface_addr *)
@@ -1513,28 +2354,86 @@ static int handle_new_recv_msg(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
{
int requeue;
int chan;
#if DEBUG_MSGING
int m;
printk("Recv:");
for (m=0; m<msg->rsp_size; m++)
printk(" %2.2x", msg->rsp[m]);
printk("\n");
#endif
if (msg->rsp_size < 2) {
/* Message is too small to be correct. */
requeue = 0;
} else if (msg->rsp[1] == IPMI_GET_MSG_CMD) {
#if DEBUG_MSGING
int m;
printk("Response:");
for (m=0; m<msg->rsp_size; m++)
printk(" %2.2x", msg->rsp[m]);
printk("\n");
#endif
} else if ((msg->rsp[0] == ((IPMI_NETFN_APP_REQUEST|1) << 2))
&& (msg->rsp[1] == IPMI_SEND_MSG_CMD)
&& (msg->user_data != NULL))
{
/* It's a response to a response we sent. For this we
deliver a send message response to the user. */
struct ipmi_recv_msg *recv_msg = msg->user_data;
requeue = 0;
if (msg->rsp_size < 2)
/* Message is too small to be correct. */
goto out;
chan = msg->data[2] & 0x0f;
if (chan >= IPMI_MAX_CHANNELS)
/* Invalid channel number */
goto out;
if (recv_msg) {
recv_msg->recv_type = IPMI_RESPONSE_RESPONSE_TYPE;
recv_msg->msg.data = recv_msg->msg_data;
recv_msg->msg.data_len = 1;
recv_msg->msg_data[0] = msg->rsp[2];
deliver_response(recv_msg);
}
} else if ((msg->rsp[0] == ((IPMI_NETFN_APP_REQUEST|1) << 2))
&& (msg->rsp[1] == IPMI_GET_MSG_CMD))
{
/* It's from the receive queue. */
if (msg->rsp[4] & 0x04) {
/* It's a response, so find the
requesting message and send it up. */
requeue = handle_get_msg_rsp(intf, msg);
} else {
/* It's a command to the SMS from some other
entity. Handle that. */
requeue = handle_get_msg_cmd(intf, msg);
chan = msg->rsp[3] & 0xf;
if (chan >= IPMI_MAX_CHANNELS) {
/* Invalid channel number */
requeue = 0;
goto out;
}
switch (intf->channels[chan].medium) {
case IPMI_CHANNEL_MEDIUM_IPMB:
if (msg->rsp[4] & 0x04) {
/* It's a response, so find the
requesting message and send it up. */
requeue = handle_ipmb_get_msg_rsp(intf, msg);
} else {
/* It's a command to the SMS from some other
entity. Handle that. */
requeue = handle_ipmb_get_msg_cmd(intf, msg);
}
break;
case IPMI_CHANNEL_MEDIUM_8023LAN:
case IPMI_CHANNEL_MEDIUM_ASYNC:
if (msg->rsp[6] & 0x04) {
/* It's a response, so find the
requesting message and send it up. */
requeue = handle_lan_get_msg_rsp(intf, msg);
} else {
/* It's a command to the SMS from some other
entity. Handle that. */
requeue = handle_lan_get_msg_cmd(intf, msg);
}
break;
default:
/* We don't handle the channel type, so just
* free the message. */
requeue = 0;
}
} else if (msg->rsp[1] == IPMI_READ_EVENT_MSG_BUFFER_CMD) {
/* It's an asynchronous event. */
requeue = handle_read_event_rsp(intf, msg);
......@@ -1543,6 +2442,7 @@ static int handle_new_recv_msg(ipmi_smi_t intf,
requeue = handle_bmc_rsp(intf, msg);
}
out:
return requeue;
}
@@ -1558,10 +2458,43 @@ void ipmi_smi_msg_received(ipmi_smi_t intf,
working on it. */
read_lock(&(intf->users_lock));
if ((msg->data_size >= 2) && (msg->data[1] == IPMI_SEND_MSG_CMD)) {
/* This is the local response to a send, start the
timer for these. */
intf_start_seq_timer(intf, msg->msgid);
if ((msg->data_size >= 2)
&& (msg->data[0] == (IPMI_NETFN_APP_REQUEST << 2))
&& (msg->data[1] == IPMI_SEND_MSG_CMD)
&& (msg->user_data == NULL)) {
/* This is the local response to a command send, start
the timer for these. The user_data will not be
NULL if this is a response send, and we will let
response sends just go through. */
/* Check for errors, if we get certain errors (ones
that mean basically we can try again later), we
ignore them and start the timer. Otherwise we
report the error immediately. */
if ((msg->rsp_size >= 3) && (msg->rsp[2] != 0)
&& (msg->rsp[2] != IPMI_NODE_BUSY_ERR)
&& (msg->rsp[2] != IPMI_LOST_ARBITRATION_ERR))
{
int chan = msg->rsp[3] & 0xf;
/* Got an error sending the message, handle it. */
spin_lock_irqsave(&intf->counter_lock, flags);
if (chan >= IPMI_MAX_CHANNELS)
; /* This shouldn't happen */
else if ((intf->channels[chan].medium
== IPMI_CHANNEL_MEDIUM_8023LAN)
|| (intf->channels[chan].medium
== IPMI_CHANNEL_MEDIUM_ASYNC))
intf->sent_lan_command_errs++;
else
intf->sent_ipmb_command_errs++;
spin_unlock_irqrestore(&intf->counter_lock, flags);
intf_err_seq(intf, msg->msgid, msg->rsp[2]);
} else {
/* The message was sent, start the timer. */
intf_start_seq_timer(intf, msg->msgid);
}
ipmi_free_smi_msg(msg);
goto out_unlock;
}
@@ -1593,13 +2526,10 @@ void ipmi_smi_msg_received(ipmi_smi_t intf,
void ipmi_smi_watchdog_pretimeout(ipmi_smi_t intf)
{
struct list_head *entry;
ipmi_user_t user;
ipmi_user_t user;
read_lock(&(intf->users_lock));
list_for_each(entry, &(intf->users)) {
user = list_entry(entry, struct ipmi_user, link);
list_for_each_entry(user, &(intf->users), link) {
if (! user->handler->ipmi_watchdog_pretimeout)
continue;
@@ -1657,10 +2587,9 @@
{
ipmi_smi_t intf;
struct list_head timeouts;
struct ipmi_recv_msg *msg;
struct ipmi_smi_msg *smi_msg;
struct ipmi_recv_msg *msg, *msg2;
struct ipmi_smi_msg *smi_msg, *smi_msg2;
unsigned long flags;
struct list_head *entry, *entry2;
int i, j;
INIT_LIST_HEAD(&timeouts);
@@ -1675,10 +2604,9 @@ ipmi_timeout_handler(long timeout_period)
/* See if any waiting messages need to be processed. */
spin_lock_irqsave(&(intf->waiting_msgs_lock), flags);
list_for_each_safe(entry, entry2, &(intf->waiting_msgs)) {
smi_msg = list_entry(entry, struct ipmi_smi_msg, link);
list_for_each_entry_safe(smi_msg, smi_msg2, &(intf->waiting_msgs), link) {
if (! handle_new_recv_msg(intf, smi_msg)) {
list_del(entry);
list_del(&smi_msg->link);
ipmi_free_smi_msg(smi_msg);
} else {
/* To preserve message order, quit if we
@@ -1706,6 +2634,15 @@ ipmi_timeout_handler(long timeout_period)
ent->inuse = 0;
msg = ent->recv_msg;
list_add_tail(&(msg->link), &timeouts);
spin_lock(&intf->counter_lock);
if (ent->broadcast)
intf->timed_out_ipmb_broadcasts++;
else if (ent->recv_msg->addr.addr_type
== IPMI_LAN_ADDR_TYPE)
intf->timed_out_lan_commands++;
else
intf->timed_out_ipmb_commands++;
spin_unlock(&intf->counter_lock);
} else {
/* More retries, send again. */
@@ -1715,12 +2652,18 @@ ipmi_timeout_handler(long timeout_period)
ent->retries_left--;
send_from_recv_msg(intf, ent->recv_msg, NULL,
j, ent->seqid);
spin_lock(&intf->counter_lock);
if (ent->recv_msg->addr.addr_type
== IPMI_LAN_ADDR_TYPE)
intf->retransmitted_lan_commands++;
else
intf->retransmitted_ipmb_commands++;
spin_unlock(&intf->counter_lock);
}
}
spin_unlock_irqrestore(&(intf->seq_lock), flags);
list_for_each_safe(entry, entry2, &timeouts) {
msg = list_entry(entry, struct ipmi_recv_msg, link);
list_for_each_entry_safe(msg, msg2, &timeouts, link) {
handle_msg_timeout(msg);
}
@@ -1747,13 +2690,16 @@ static void ipmi_request_event(void)
static struct timer_list ipmi_timer;
/* Call every 100 ms. */
/* Call every ~100 ms. */
#define IPMI_TIMEOUT_TIME 100
#define IPMI_TIMEOUT_JIFFIES ((IPMI_TIMEOUT_TIME * HZ)/1000)
/* Request events from the queue every second. Hopefully, in the
future, IPMI will add a way to know immediately if an event is
in the queue. */
/* How many jiffies does it take to get to the timeout time. */
#define IPMI_TIMEOUT_JIFFIES ((IPMI_TIMEOUT_TIME * HZ) / 1000)
/* Request events from the queue every second (this is the number of
IPMI_TIMEOUT_TIMES between event requests). Hopefully, in the
future, IPMI will add a way to know immediately if an event is in
the queue and this silliness can go away. */
#define IPMI_REQUEST_EV_TIME (1000 / (IPMI_TIMEOUT_TIME))
static volatile int stop_operation = 0;
@@ -1796,6 +2742,7 @@ struct ipmi_smi_msg *ipmi_alloc_smi_msg(void)
rv = kmalloc(sizeof(struct ipmi_smi_msg), GFP_ATOMIC);
if (rv) {
rv->done = free_smi_msg;
rv->user_data = NULL;
atomic_inc(&smi_msg_inuse_count);
}
return rv;
@@ -1907,11 +2854,13 @@ static void send_panic_events(char *str)
&addr,
0,
&msg,
NULL,
&smi_msg,
&recv_msg,
0,
intf->my_address,
intf->my_lun);
intf->my_lun,
0, 1); /* Don't retry, and don't wait. */
}
#ifdef CONFIG_IPMI_PANIC_STRING
@@ -1951,11 +2900,13 @@ static void send_panic_events(char *str)
&addr,
0,
&msg,
NULL,
&smi_msg,
&recv_msg,
0,
intf->my_address,
intf->my_lun);
intf->my_lun,
0, 1); /* Don't retry, and don't wait. */
if (intf->local_event_generator) {
/* Request the event receiver from the local MC. */
@@ -1969,11 +2920,13 @@ static void send_panic_events(char *str)
&addr,
0,
&msg,
NULL,
&smi_msg,
&recv_msg,
0,
intf->my_address,
intf->my_lun);
intf->my_lun,
0, 1); /* no retry, and no wait. */
}
intf->null_user_handler = NULL;
@@ -2029,11 +2982,13 @@ static void send_panic_events(char *str)
&addr,
0,
&msg,
NULL,
&smi_msg,
&recv_msg,
0,
intf->my_address,
intf->my_lun);
intf->my_lun,
0, 1); /* no retry, and no wait. */
}
}
#endif /* CONFIG_IPMI_PANIC_STRING */
@@ -2075,7 +3030,6 @@ static struct notifier_block panic_block = {
200 /* priority: INT_MAX >= x >= 0 */
};
static __init int ipmi_init_msghandler(void)
{
int i;
@@ -2083,10 +3037,21 @@ static __init int ipmi_init_msghandler(void)
if (initialized)
return 0;
printk(KERN_INFO "ipmi message handler version "
IPMI_MSGHANDLER_VERSION "\n");
for (i=0; i<MAX_IPMI_INTERFACES; i++) {
ipmi_interfaces[i] = NULL;
}
proc_ipmi_root = proc_mkdir("ipmi", 0);
if (!proc_ipmi_root) {
printk(KERN_ERR "Unable to create IPMI proc dir\n");
return -ENOMEM;
}
proc_ipmi_root->owner = THIS_MODULE;
init_timer(&ipmi_timer);
ipmi_timer.data = 0;
ipmi_timer.function = ipmi_timeout;
@@ -2097,8 +3062,6 @@ static __init int ipmi_init_msghandler(void)
initialized = 1;
printk(KERN_INFO "ipmi: message handler initialized\n");
return 0;
}
@@ -2118,9 +3081,12 @@ static __exit void cleanup_ipmi(void)
problems with race conditions removing the timer here. */
stop_operation = 1;
while (!timer_stopped) {
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
}
remove_proc_entry(proc_ipmi_root->name, &proc_root);
initialized = 0;
/* Check for buffer leaks. */
@@ -2143,6 +3109,7 @@ EXPORT_SYMBOL(ipmi_create_user);
EXPORT_SYMBOL(ipmi_destroy_user);
EXPORT_SYMBOL(ipmi_get_version);
EXPORT_SYMBOL(ipmi_request);
EXPORT_SYMBOL(ipmi_request_settime);
EXPORT_SYMBOL(ipmi_request_supply_msgs);
EXPORT_SYMBOL(ipmi_request_with_source);
EXPORT_SYMBOL(ipmi_register_smi);
@@ -2164,3 +3131,4 @@ EXPORT_SYMBOL(ipmi_set_my_address);
EXPORT_SYMBOL(ipmi_get_my_address);
EXPORT_SYMBOL(ipmi_set_my_LUN);
EXPORT_SYMBOL(ipmi_get_my_LUN);
EXPORT_SYMBOL(ipmi_smi_add_proc_entry);
/*
* ipmi_si.c
*
* The interface to the IPMI driver for the system interfaces (KCS, SMIC,
* BT).
*
* Author: MontaVista Software, Inc.
* Corey Minyard <minyard@mvista.com>
* source@mvista.com
*
* Copyright 2002 MontaVista Software Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
* OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* This file holds the "policy" for the interface to the SMI state
* machine. It does the configuration, handles timers and interrupts,
* and drives the real SMI state machine.
*/
#include <linux/config.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <asm/system.h>
#include <linux/sched.h>
#include <linux/timer.h>
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/list.h>
#include <linux/pci.h>
#include <linux/ioport.h>
#ifdef CONFIG_HIGH_RES_TIMERS
#include <linux/hrtime.h>
# if defined(schedule_next_int)
/* Old high-res timer code, do translations. */
# define get_arch_cycles(a) quick_update_jiffies_sub(a)
# define arch_cycles_per_jiffy cycles_per_jiffies
# endif
static inline void add_usec_to_timer(struct timer_list *t, long v)
{
t->sub_expires += nsec_to_arch_cycle(v * 1000);
while (t->sub_expires >= arch_cycles_per_jiffy)
{
t->expires++;
t->sub_expires -= arch_cycles_per_jiffy;
}
}
#endif
#include <linux/interrupt.h>
#include <linux/rcupdate.h>
#include <linux/ipmi_smi.h>
#include <asm/io.h>
#include "ipmi_si_sm.h"
#include <linux/init.h>
#define IPMI_SI_VERSION "v31"
/* Measure times between events in the driver. */
#undef DEBUG_TIMING
/* Call every 10 ms. */
#define SI_TIMEOUT_TIME_USEC 10000
#define SI_USEC_PER_JIFFY (1000000/HZ)
#define SI_TIMEOUT_JIFFIES (SI_TIMEOUT_TIME_USEC/SI_USEC_PER_JIFFY)
#define SI_SHORT_TIMEOUT_USEC 250 /* .25ms when the SM requests a
short timeout */
enum si_intf_state {
SI_NORMAL,
SI_GETTING_FLAGS,
SI_GETTING_EVENTS,
SI_CLEARING_FLAGS,
SI_CLEARING_FLAGS_THEN_SET_IRQ,
SI_GETTING_MESSAGES,
SI_ENABLE_INTERRUPTS1,
SI_ENABLE_INTERRUPTS2
/* FIXME - add watchdog stuff. */
};
enum si_type {
SI_KCS, SI_SMIC, SI_BT
};
struct smi_info
{
ipmi_smi_t intf;
struct si_sm_data *si_sm;
struct si_sm_handlers *handlers;
enum si_type si_type;
spinlock_t si_lock;
spinlock_t msg_lock;
struct list_head xmit_msgs;
struct list_head hp_xmit_msgs;
struct ipmi_smi_msg *curr_msg;
enum si_intf_state si_state;
/* Used to handle the various types of I/O that can occur with
IPMI */
struct si_sm_io io;
int (*io_setup)(struct smi_info *info);
void (*io_cleanup)(struct smi_info *info);
int (*irq_setup)(struct smi_info *info);
void (*irq_cleanup)(struct smi_info *info);
unsigned int io_size;
/* Flags from the last GET_MSG_FLAGS command, used when an ATTN
is set to hold the flags until we are done handling everything
from the flags. */
#define RECEIVE_MSG_AVAIL 0x01
#define EVENT_MSG_BUFFER_FULL 0x02
#define WDT_PRE_TIMEOUT_INT 0x08
unsigned char msg_flags;
/* If set to true, this will request events the next time the
state machine is idle. */
atomic_t req_events;
/* If true, run the state machine to completion on every send
call. Generally used after a panic to make sure stuff goes
out. */
int run_to_completion;
/* The I/O port of an SI interface. */
int port;
/* zero if no irq; */
int irq;
/* The timer for this si. */
struct timer_list si_timer;
/* The time (in jiffies) the last timeout occurred at. */
unsigned long last_timeout_jiffies;
/* Used to gracefully stop the timer without race conditions. */
volatile int stop_operation;
volatile int timer_stopped;
/* The driver will disable interrupts when it gets into a
situation where it cannot handle messages due to lack of
memory. Once that situation clears up, it will re-enable
interrupts. */
int interrupt_disabled;
unsigned char ipmi_si_dev_rev;
unsigned char ipmi_si_fw_rev_major;
unsigned char ipmi_si_fw_rev_minor;
unsigned char ipmi_version_major;
unsigned char ipmi_version_minor;
/* Counters and things for the proc filesystem. */
spinlock_t count_lock;
unsigned long short_timeouts;
unsigned long long_timeouts;
unsigned long timeout_restarts;
unsigned long idles;
unsigned long interrupts;
unsigned long attentions;
unsigned long flag_fetches;
unsigned long hosed_count;
unsigned long complete_transactions;
unsigned long events;
unsigned long watchdog_pretimeouts;
unsigned long incoming_messages;
};
static void si_restart_short_timer(struct smi_info *smi_info);
static void deliver_recv_msg(struct smi_info *smi_info,
struct ipmi_smi_msg *msg)
{
/* Deliver the message to the upper layer with the lock
released. */
spin_unlock(&(smi_info->si_lock));
ipmi_smi_msg_received(smi_info->intf, msg);
spin_lock(&(smi_info->si_lock));
}
static void return_hosed_msg(struct smi_info *smi_info)
{
struct ipmi_smi_msg *msg = smi_info->curr_msg;
/* Make it a response */
msg->rsp[0] = msg->data[0] | 4;
msg->rsp[1] = msg->data[1];
msg->rsp[2] = 0xFF; /* Unknown error. */
msg->rsp_size = 3;
smi_info->curr_msg = NULL;
deliver_recv_msg(smi_info, msg);
}
static enum si_sm_result start_next_msg(struct smi_info *smi_info)
{
int rv;
struct list_head *entry = NULL;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
/* No need to save flags, we already have interrupts off and we
already hold the SMI lock. */
spin_lock(&(smi_info->msg_lock));
/* Pick the high priority queue first. */
if (! list_empty(&(smi_info->hp_xmit_msgs))) {
entry = smi_info->hp_xmit_msgs.next;
} else if (! list_empty(&(smi_info->xmit_msgs))) {
entry = smi_info->xmit_msgs.next;
}
if (!entry) {
smi_info->curr_msg = NULL;
rv = SI_SM_IDLE;
} else {
int err;
list_del(entry);
smi_info->curr_msg = list_entry(entry,
struct ipmi_smi_msg,
link);
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Start2: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
err = smi_info->handlers->start_transaction(
smi_info->si_sm,
smi_info->curr_msg->data,
smi_info->curr_msg->data_size);
if (err) {
return_hosed_msg(smi_info);
}
rv = SI_SM_CALL_WITHOUT_DELAY;
}
spin_unlock(&(smi_info->msg_lock));
return rv;
}
static void start_enable_irq(struct smi_info *smi_info)
{
unsigned char msg[2];
/* If we are enabling interrupts, we have to tell the
BMC to use them. */
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
smi_info->si_state = SI_ENABLE_INTERRUPTS1;
}
static void start_clear_flags(struct smi_info *smi_info)
{
unsigned char msg[3];
/* Make sure the watchdog pre-timeout flag is not set at startup. */
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_CLEAR_MSG_FLAGS_CMD;
msg[2] = WDT_PRE_TIMEOUT_INT;
smi_info->handlers->start_transaction(smi_info->si_sm, msg, 3);
smi_info->si_state = SI_CLEARING_FLAGS;
}
/* When we have a situation where we run out of memory and cannot
allocate messages, we just leave them in the BMC and run the system
polled until we can allocate some memory. Once we have some
memory, we will re-enable the interrupt. */
static inline void disable_si_irq(struct smi_info *smi_info)
{
if ((smi_info->irq) && (!smi_info->interrupt_disabled)) {
disable_irq_nosync(smi_info->irq);
smi_info->interrupt_disabled = 1;
}
}
static inline void enable_si_irq(struct smi_info *smi_info)
{
if ((smi_info->irq) && (smi_info->interrupt_disabled)) {
enable_irq(smi_info->irq);
smi_info->interrupt_disabled = 0;
}
}
static void handle_flags(struct smi_info *smi_info)
{
if (smi_info->msg_flags & WDT_PRE_TIMEOUT_INT) {
/* Watchdog pre-timeout */
spin_lock(&smi_info->count_lock);
smi_info->watchdog_pretimeouts++;
spin_unlock(&smi_info->count_lock);
start_clear_flags(smi_info);
smi_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT;
spin_unlock(&(smi_info->si_lock));
ipmi_smi_watchdog_pretimeout(smi_info->intf);
spin_lock(&(smi_info->si_lock));
} else if (smi_info->msg_flags & RECEIVE_MSG_AVAIL) {
/* Messages available. */
smi_info->curr_msg = ipmi_alloc_smi_msg();
if (!smi_info->curr_msg) {
disable_si_irq(smi_info);
smi_info->si_state = SI_NORMAL;
return;
}
enable_si_irq(smi_info);
smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
smi_info->curr_msg->data[1] = IPMI_GET_MSG_CMD;
smi_info->curr_msg->data_size = 2;
smi_info->handlers->start_transaction(
smi_info->si_sm,
smi_info->curr_msg->data,
smi_info->curr_msg->data_size);
smi_info->si_state = SI_GETTING_MESSAGES;
} else if (smi_info->msg_flags & EVENT_MSG_BUFFER_FULL) {
/* Events available. */
smi_info->curr_msg = ipmi_alloc_smi_msg();
if (!smi_info->curr_msg) {
disable_si_irq(smi_info);
smi_info->si_state = SI_NORMAL;
return;
}
enable_si_irq(smi_info);
smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
smi_info->curr_msg->data_size = 2;
smi_info->handlers->start_transaction(
smi_info->si_sm,
smi_info->curr_msg->data,
smi_info->curr_msg->data_size);
smi_info->si_state = SI_GETTING_EVENTS;
} else {
smi_info->si_state = SI_NORMAL;
}
}
static void handle_transaction_done(struct smi_info *smi_info)
{
struct ipmi_smi_msg *msg;
#ifdef DEBUG_TIMING
struct timeval t;
do_gettimeofday(&t);
printk("**Done: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
switch (smi_info->si_state) {
case SI_NORMAL:
if (!smi_info->curr_msg)
break;
smi_info->curr_msg->rsp_size
= smi_info->handlers->get_result(
smi_info->si_sm,
smi_info->curr_msg->rsp,
IPMI_MAX_MSG_LENGTH);
/* Do this here because deliver_recv_msg() releases the
lock, and a new message can be put in during the
time the lock is released. */
msg = smi_info->curr_msg;
smi_info->curr_msg = NULL;
deliver_recv_msg(smi_info, msg);
break;
case SI_GETTING_FLAGS:
{
unsigned char msg[4];
unsigned int len;
/* We got the flags from the SMI, now handle them. */
len = smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
if (msg[2] != 0) {
/* Error fetching flags, just give up for
now. */
smi_info->si_state = SI_NORMAL;
} else if (len < 3) {
/* Hmm, no flags. That's technically illegal, but
don't use uninitialized data. */
smi_info->si_state = SI_NORMAL;
} else {
smi_info->msg_flags = msg[3];
handle_flags(smi_info);
}
break;
}
case SI_CLEARING_FLAGS:
case SI_CLEARING_FLAGS_THEN_SET_IRQ:
{
unsigned char msg[3];
/* We cleared the flags. */
smi_info->handlers->get_result(smi_info->si_sm, msg, 3);
if (msg[2] != 0) {
/* Error clearing flags */
printk(KERN_WARNING
"ipmi_si: Error clearing flags: %2.2x\n",
msg[2]);
}
if (smi_info->si_state == SI_CLEARING_FLAGS_THEN_SET_IRQ)
start_enable_irq(smi_info);
else
smi_info->si_state = SI_NORMAL;
break;
}
case SI_GETTING_EVENTS:
{
smi_info->curr_msg->rsp_size
= smi_info->handlers->get_result(
smi_info->si_sm,
smi_info->curr_msg->rsp,
IPMI_MAX_MSG_LENGTH);
/* Do this here because deliver_recv_msg() releases the
lock, and a new message can be put in during the
time the lock is released. */
msg = smi_info->curr_msg;
smi_info->curr_msg = NULL;
if (msg->rsp[2] != 0) {
/* Error getting event, probably done. */
msg->done(msg);
/* Take off the event flag. */
smi_info->msg_flags &= ~EVENT_MSG_BUFFER_FULL;
} else {
spin_lock(&smi_info->count_lock);
smi_info->events++;
spin_unlock(&smi_info->count_lock);
deliver_recv_msg(smi_info, msg);
}
handle_flags(smi_info);
break;
}
case SI_GETTING_MESSAGES:
{
smi_info->curr_msg->rsp_size
= smi_info->handlers->get_result(
smi_info->si_sm,
smi_info->curr_msg->rsp,
IPMI_MAX_MSG_LENGTH);
/* Do this here because deliver_recv_msg() releases the
lock, and a new message can be put in during the
time the lock is released. */
msg = smi_info->curr_msg;
smi_info->curr_msg = NULL;
if (msg->rsp[2] != 0) {
/* Error getting message, probably done. */
msg->done(msg);
/* Take off the msg flag. */
smi_info->msg_flags &= ~RECEIVE_MSG_AVAIL;
} else {
spin_lock(&smi_info->count_lock);
smi_info->incoming_messages++;
spin_unlock(&smi_info->count_lock);
deliver_recv_msg(smi_info, msg);
}
handle_flags(smi_info);
break;
}
case SI_ENABLE_INTERRUPTS1:
{
unsigned char msg[4];
/* We got the flags from the SMI, now handle them. */
smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
if (msg[2] != 0) {
printk(KERN_WARNING
"ipmi_si: Could not enable interrupts"
", failed get, using polled mode.\n");
smi_info->si_state = SI_NORMAL;
} else {
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
msg[2] = msg[3] | 1; /* enable msg queue int */
smi_info->handlers->start_transaction(
smi_info->si_sm, msg, 3);
smi_info->si_state = SI_ENABLE_INTERRUPTS2;
}
break;
}
case SI_ENABLE_INTERRUPTS2:
{
unsigned char msg[4];
/* We got the flags from the SMI, now handle them. */
smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
if (msg[2] != 0) {
printk(KERN_WARNING
"ipmi_si: Could not enable interrupts"
", failed set, using polled mode.\n");
}
smi_info->si_state = SI_NORMAL;
break;
}
}
}
/* Called on timeouts and events. Timeouts should pass the elapsed
time, interrupts should pass in zero. */
static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
int time)
{
enum si_sm_result si_sm_result;
restart:
/* There used to be a loop here that waited a little while
(around 25us) before giving up. That turned out to be
pointless, the minimum delays I was seeing were in the 300us
range, which is far too long to wait in an interrupt. So
we just run until the state machine tells us something
happened or it needs a delay. */
si_sm_result = smi_info->handlers->event(smi_info->si_sm, time);
time = 0;
while (si_sm_result == SI_SM_CALL_WITHOUT_DELAY)
{
si_sm_result = smi_info->handlers->event(smi_info->si_sm, 0);
}
if (si_sm_result == SI_SM_TRANSACTION_COMPLETE)
{
spin_lock(&smi_info->count_lock);
smi_info->complete_transactions++;
spin_unlock(&smi_info->count_lock);
handle_transaction_done(smi_info);
si_sm_result = smi_info->handlers->event(smi_info->si_sm, 0);
}
else if (si_sm_result == SI_SM_HOSED)
{
spin_lock(&smi_info->count_lock);
smi_info->hosed_count++;
spin_unlock(&smi_info->count_lock);
if (smi_info->curr_msg != NULL) {
/* If we were handling a user message, format
a response to send to the upper layer to
tell it about the error. */
return_hosed_msg(smi_info);
}
si_sm_result = smi_info->handlers->event(smi_info->si_sm, 0);
smi_info->si_state = SI_NORMAL;
}
/* We prefer handling attn over new messages. */
if (si_sm_result == SI_SM_ATTN)
{
unsigned char msg[2];
spin_lock(&smi_info->count_lock);
smi_info->attentions++;
spin_unlock(&smi_info->count_lock);
/* Got an attn, send down a get message flags to see
what's causing it. It would be better to handle
this in the upper layer, but due to the way
interrupts work with the SMI, that's not really
possible. */
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_MSG_FLAGS_CMD;
smi_info->handlers->start_transaction(
smi_info->si_sm, msg, 2);
smi_info->si_state = SI_GETTING_FLAGS;
goto restart;
}
/* If we are currently idle, try to start the next message. */
if (si_sm_result == SI_SM_IDLE) {
spin_lock(&smi_info->count_lock);
smi_info->idles++;
spin_unlock(&smi_info->count_lock);
si_sm_result = start_next_msg(smi_info);
if (si_sm_result != SI_SM_IDLE)
goto restart;
}
if ((si_sm_result == SI_SM_IDLE)
&& (atomic_read(&smi_info->req_events)))
{
/* We are idle and the upper layer requested that I fetch
events, so do so. */
unsigned char msg[2];
spin_lock(&smi_info->count_lock);
smi_info->flag_fetches++;
spin_unlock(&smi_info->count_lock);
atomic_set(&smi_info->req_events, 0);
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_MSG_FLAGS_CMD;
smi_info->handlers->start_transaction(
smi_info->si_sm, msg, 2);
smi_info->si_state = SI_GETTING_FLAGS;
goto restart;
}
return si_sm_result;
}
static void sender(void *send_info,
struct ipmi_smi_msg *msg,
int priority)
{
struct smi_info *smi_info = send_info;
enum si_sm_result result;
unsigned long flags;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
spin_lock_irqsave(&(smi_info->msg_lock), flags);
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Enqueue: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
if (smi_info->run_to_completion) {
/* If we are running to completion, then throw it in
the list and run transactions until everything is
clear. Priority doesn't matter here. */
list_add_tail(&(msg->link), &(smi_info->xmit_msgs));
/* We have to release the msg lock and claim the smi
lock in this case, because of race conditions. */
spin_unlock_irqrestore(&(smi_info->msg_lock), flags);
spin_lock_irqsave(&(smi_info->si_lock), flags);
result = smi_event_handler(smi_info, 0);
while (result != SI_SM_IDLE) {
udelay(SI_SHORT_TIMEOUT_USEC);
result = smi_event_handler(smi_info,
SI_SHORT_TIMEOUT_USEC);
}
spin_unlock_irqrestore(&(smi_info->si_lock), flags);
return;
} else {
if (priority > 0) {
list_add_tail(&(msg->link), &(smi_info->hp_xmit_msgs));
} else {
list_add_tail(&(msg->link), &(smi_info->xmit_msgs));
}
}
spin_unlock_irqrestore(&(smi_info->msg_lock), flags);
spin_lock_irqsave(&(smi_info->si_lock), flags);
if ((smi_info->si_state == SI_NORMAL)
&& (smi_info->curr_msg == NULL))
{
start_next_msg(smi_info);
si_restart_short_timer(smi_info);
}
spin_unlock_irqrestore(&(smi_info->si_lock), flags);
}
static void set_run_to_completion(void *send_info, int i_run_to_completion)
{
struct smi_info *smi_info = send_info;
enum si_sm_result result;
unsigned long flags;
spin_lock_irqsave(&(smi_info->si_lock), flags);
smi_info->run_to_completion = i_run_to_completion;
if (i_run_to_completion) {
result = smi_event_handler(smi_info, 0);
while (result != SI_SM_IDLE) {
udelay(SI_SHORT_TIMEOUT_USEC);
result = smi_event_handler(smi_info,
SI_SHORT_TIMEOUT_USEC);
}
}
spin_unlock_irqrestore(&(smi_info->si_lock), flags);
}
static void request_events(void *send_info)
{
struct smi_info *smi_info = send_info;
atomic_set(&smi_info->req_events, 1);
}
static int initialized = 0;
/* Must be called with interrupts off and with the si_lock held. */
static void si_restart_short_timer(struct smi_info *smi_info)
{
#if defined(CONFIG_HIGH_RES_TIMERS)
unsigned long flags;
unsigned long jiffies_now;
if (del_timer(&(smi_info->si_timer))) {
/* If we don't delete the timer, then it will go off
immediately, anyway. So we only process if we
actually delete the timer. */
/* We already have irqsave on, so no need for it
here. */
read_lock(&xtime_lock);
jiffies_now = jiffies;
smi_info->si_timer.expires = jiffies_now;
smi_info->si_timer.sub_expires = get_arch_cycles(jiffies_now);
read_unlock(&xtime_lock);
add_usec_to_timer(&smi_info->si_timer, SI_SHORT_TIMEOUT_USEC);
add_timer(&(smi_info->si_timer));
spin_lock_irqsave(&smi_info->count_lock, flags);
smi_info->timeout_restarts++;
spin_unlock_irqrestore(&smi_info->count_lock, flags);
}
#endif
}
static void smi_timeout(unsigned long data)
{
struct smi_info *smi_info = (struct smi_info *) data;
enum si_sm_result smi_result;
unsigned long flags;
unsigned long jiffies_now;
unsigned long time_diff;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
if (smi_info->stop_operation) {
smi_info->timer_stopped = 1;
return;
}
spin_lock_irqsave(&(smi_info->si_lock), flags);
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Timer: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
jiffies_now = jiffies;
time_diff = ((jiffies_now - smi_info->last_timeout_jiffies)
* SI_USEC_PER_JIFFY);
smi_result = smi_event_handler(smi_info, time_diff);
spin_unlock_irqrestore(&(smi_info->si_lock), flags);
smi_info->last_timeout_jiffies = jiffies_now;
if ((smi_info->irq) && (! smi_info->interrupt_disabled)) {
/* Running with interrupts, only do long timeouts. */
smi_info->si_timer.expires = jiffies + SI_TIMEOUT_JIFFIES;
spin_lock_irqsave(&smi_info->count_lock, flags);
smi_info->long_timeouts++;
spin_unlock_irqrestore(&smi_info->count_lock, flags);
goto do_add_timer;
}
/* If the state machine asks for a short delay, then shorten
the timer timeout. */
if (smi_result == SI_SM_CALL_WITH_DELAY) {
spin_lock_irqsave(&smi_info->count_lock, flags);
smi_info->short_timeouts++;
spin_unlock_irqrestore(&smi_info->count_lock, flags);
#if defined(CONFIG_HIGH_RES_TIMERS)
read_lock(&xtime_lock);
smi_info->si_timer.expires = jiffies;
smi_info->si_timer.sub_expires
= get_arch_cycles(smi_info->si_timer.expires);
read_unlock(&xtime_lock);
add_usec_to_timer(&smi_info->si_timer, SI_SHORT_TIMEOUT_USEC);
#else
smi_info->si_timer.expires = jiffies + 1;
#endif
} else {
spin_lock_irqsave(&smi_info->count_lock, flags);
smi_info->long_timeouts++;
spin_unlock_irqrestore(&smi_info->count_lock, flags);
smi_info->si_timer.expires = jiffies + SI_TIMEOUT_JIFFIES;
#if defined(CONFIG_HIGH_RES_TIMERS)
smi_info->si_timer.sub_expires = 0;
#endif
}
do_add_timer:
add_timer(&(smi_info->si_timer));
}
static irqreturn_t si_irq_handler(int irq, void *data, struct pt_regs *regs)
{
struct smi_info *smi_info = data;
unsigned long flags;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
spin_lock_irqsave(&(smi_info->si_lock), flags);
spin_lock(&smi_info->count_lock);
smi_info->interrupts++;
spin_unlock(&smi_info->count_lock);
if (smi_info->stop_operation)
goto out;
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**Interrupt: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
smi_event_handler(smi_info, 0);
out:
spin_unlock_irqrestore(&(smi_info->si_lock), flags);
return IRQ_HANDLED;
}
static struct ipmi_smi_handlers handlers =
{
.owner = THIS_MODULE,
.sender = sender,
.request_events = request_events,
.set_run_to_completion = set_run_to_completion
};
/* There can be 4 IO ports passed in (with or without IRQs), 4 addresses,
a default IO port, and 1 ACPI/SPMI address. That sets SI_MAX_DRIVERS */
#define SI_MAX_PARMS 4
#define SI_MAX_DRIVERS ((SI_MAX_PARMS * 2) + 2)
static struct smi_info *smi_infos[SI_MAX_DRIVERS] =
{ NULL, NULL, NULL, NULL };
#define DEVICE_NAME "ipmi_si"
#define DEFAULT_KCS_IO_PORT 0xca2
#define DEFAULT_SMIC_IO_PORT 0xca9
#define DEFAULT_BT_IO_PORT 0xe4
static int si_trydefaults = 1;
static char *si_type[SI_MAX_PARMS] = { NULL, NULL, NULL, NULL };
#define MAX_SI_TYPE_STR 30
static char si_type_str[MAX_SI_TYPE_STR];
static unsigned long addrs[SI_MAX_PARMS] = { 0, 0, 0, 0 };
static int num_addrs = 0;
static unsigned int ports[SI_MAX_PARMS] = { 0, 0, 0, 0 };
static int num_ports = 0;
static int irqs[SI_MAX_PARMS] = { 0, 0, 0, 0 };
static int num_irqs = 0;
module_param_named(trydefaults, si_trydefaults, bool, 0);
MODULE_PARM_DESC(trydefaults, "Setting this to 'false' will disable the"
" default scan of the KCS and SMIC interfaces at the standard"
" addresses");
module_param_string(type, si_type_str, MAX_SI_TYPE_STR, 0);
MODULE_PARM_DESC(type, "Defines the type of each interface, each"
" interface separated by commas. The types are 'kcs',"
" 'smic', and 'bt'. For example si_type=kcs,bt will set"
" the first interface to kcs and the second to bt");
module_param_array(addrs, long, num_addrs, 0);
MODULE_PARM_DESC(addrs, "Sets the memory address of each interface, the"
" addresses separated by commas. Only use if an interface"
" is in memory. Otherwise, set it to zero or leave"
" it blank.");
module_param_array(ports, int, num_ports, 0);
MODULE_PARM_DESC(ports, "Sets the port address of each interface, the"
" addresses separated by commas. Only use if an interface"
" is a port. Otherwise, set it to zero or leave"
" it blank.");
module_param_array(irqs, int, num_irqs, 0);
MODULE_PARM_DESC(irqs, "Sets the interrupt of each interface, the"
" addresses separated by commas. Only use if an interface"
" has an interrupt. Otherwise, set it to zero or leave"
" it blank.");
#if defined(CONFIG_ACPI_INTERPRETER) || defined(CONFIG_X86) || defined(CONFIG_PCI)
#define IPMI_MEM_ADDR_SPACE 1
#define IPMI_IO_ADDR_SPACE 2
static int is_new_interface(int intf, u8 addr_space, unsigned long base_addr)
{
int i;
for (i = 0; i < SI_MAX_PARMS; ++i) {
/* Don't check our address. */
if (i == intf)
continue;
if (si_type[i] != NULL) {
if ((addr_space == IPMI_MEM_ADDR_SPACE &&
base_addr == addrs[i]) ||
(addr_space == IPMI_IO_ADDR_SPACE &&
base_addr == ports[i]))
return 0;
}
else
break;
}
return 1;
}
#endif
static int std_irq_setup(struct smi_info *info)
{
int rv;
if (!info->irq)
return 0;
rv = request_irq(info->irq,
si_irq_handler,
SA_INTERRUPT,
DEVICE_NAME,
info);
if (rv) {
printk(KERN_WARNING
"ipmi_si: %s unable to claim interrupt %d,"
" running polled\n",
DEVICE_NAME, info->irq);
info->irq = 0;
} else {
printk(" Using irq %d\n", info->irq);
}
return rv;
}
static void std_irq_cleanup(struct smi_info *info)
{
if (!info->irq)
return;
free_irq(info->irq, info);
}
static unsigned char port_inb(struct si_sm_io *io, unsigned int offset)
{
unsigned int *addr = io->info;
return inb((*addr)+offset);
}
static void port_outb(struct si_sm_io *io, unsigned int offset,
unsigned char b)
{
unsigned int *addr = io->info;
outb(b, (*addr)+offset);
}
static int port_setup(struct smi_info *info)
{
unsigned int *addr = info->io.info;
if (!addr || (!*addr))
return -ENODEV;
if (request_region(*addr, info->io_size, DEVICE_NAME) == NULL)
return -EIO;
return 0;
}
static void port_cleanup(struct smi_info *info)
{
unsigned int *addr = info->io.info;
if (addr && (*addr))
release_region (*addr, info->io_size);
kfree(info);
}
static int try_init_port(int intf_num, struct smi_info **new_info)
{
struct smi_info *info;
if (!ports[intf_num])
return -ENODEV;
if (!is_new_interface(intf_num, IPMI_IO_ADDR_SPACE,
ports[intf_num]))
return -ENODEV;
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
printk(KERN_ERR "ipmi_si: Could not allocate SI data (1)\n");
return -ENOMEM;
}
memset(info, 0, sizeof(*info));
info->io_setup = port_setup;
info->io_cleanup = port_cleanup;
info->io.inputb = port_inb;
info->io.outputb = port_outb;
info->io.info = &(ports[intf_num]);
info->io.addr = NULL;
info->irq = 0;
info->irq_setup = NULL;
*new_info = info;
if (si_type[intf_num] == NULL)
si_type[intf_num] = "kcs";
printk("ipmi_si: Trying \"%s\" at I/O port 0x%x\n",
si_type[intf_num], ports[intf_num]);
return 0;
}
static unsigned char mem_inb(struct si_sm_io *io, unsigned int offset)
{
return readb((io->addr)+offset);
}
static void mem_outb(struct si_sm_io *io, unsigned int offset,
unsigned char b)
{
writeb(b, (io->addr)+offset);
}
static int mem_setup(struct smi_info *info)
{
unsigned long *addr = info->io.info;
if (!addr || (!*addr))
return -ENODEV;
if (request_mem_region(*addr, info->io_size, DEVICE_NAME) == NULL)
return -EIO;
info->io.addr = ioremap(*addr, info->io_size);
if (info->io.addr == NULL) {
release_mem_region(*addr, info->io_size);
return -EIO;
}
return 0;
}
static void mem_cleanup(struct smi_info *info)
{
unsigned long *addr = info->io.info;
if (info->io.addr) {
iounmap(info->io.addr);
release_mem_region(*addr, info->io_size);
}
kfree(info);
}
static int try_init_mem(int intf_num, struct smi_info **new_info)
{
struct smi_info *info;
if (!addrs[intf_num])
return -ENODEV;
if (!is_new_interface(intf_num, IPMI_MEM_ADDR_SPACE,
addrs[intf_num]))
return -ENODEV;
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
printk(KERN_ERR "ipmi_si: Could not allocate SI data (2)\n");
return -ENOMEM;
}
memset(info, 0, sizeof(*info));
info->io_setup = mem_setup;
info->io_cleanup = mem_cleanup;
info->io.inputb = mem_inb;
info->io.outputb = mem_outb;
info->io.info = &(addrs[intf_num]);
info->io.addr = NULL;
info->irq = 0;
info->irq_setup = NULL;
*new_info = info;
if (si_type[intf_num] == NULL)
si_type[intf_num] = "kcs";
printk("ipmi_si: Trying \"%s\" at memory address 0x%lx\n",
si_type[intf_num], addrs[intf_num]);
return 0;
}
#ifdef CONFIG_ACPI_INTERPRETER
#include <linux/acpi.h>
/* Once we get an ACPI failure, we don't try any more, because we go
through the tables sequentially. Once we don't find a table, there
are no more. */
static int acpi_failure = 0;
/* For GPE-type interrupts. */
void ipmi_acpi_gpe(void *context)
{
struct smi_info *smi_info = context;
unsigned long flags;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
spin_lock_irqsave(&(smi_info->si_lock), flags);
spin_lock(&smi_info->count_lock);
smi_info->interrupts++;
spin_unlock(&smi_info->count_lock);
if (smi_info->stop_operation)
goto out;
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk("**ACPI_GPE: %d.%9.9d\n", t.tv_sec, t.tv_usec);
#endif
smi_event_handler(smi_info, 0);
out:
spin_unlock_irqrestore(&(smi_info->si_lock), flags);
}
static int acpi_gpe_irq_setup(struct smi_info *info)
{
acpi_status status;
if (!info->irq)
return 0;
/* FIXME - is level triggered right? */
status = acpi_install_gpe_handler(NULL,
info->irq,
ACPI_GPE_LEVEL_TRIGGERED,
ipmi_acpi_gpe,
info);
if (status != AE_OK) {
printk(KERN_WARNING
"ipmi_si: %s unable to claim ACPI GPE %d,"
" running polled\n",
DEVICE_NAME, info->irq);
info->irq = 0;
return -EINVAL;
} else {
printk(" Using ACPI GPE %d\n", info->irq);
return 0;
}
}
static void acpi_gpe_irq_cleanup(struct smi_info *info)
{
if (!info->irq)
return;
acpi_remove_gpe_handler(NULL, info->irq, ipmi_acpi_gpe);
}
/*
* Defined at
* http://h21007.www2.hp.com/dspp/files/unprotected/devresource/Docs/TechPapers/IA64/hpspmi.pdf
*/
struct SPMITable {
s8 Signature[4];
u32 Length;
u8 Revision;
u8 Checksum;
s8 OEMID[6];
s8 OEMTableID[8];
s8 OEMRevision[4];
s8 CreatorID[4];
s8 CreatorRevision[4];
u8 InterfaceType;
u8 IPMIlegacy;
s16 SpecificationRevision;
/*
* Bit 0 - SCI interrupt supported
* Bit 1 - I/O APIC/SAPIC
*/
u8 InterruptType;
/* If bit 0 of InterruptType is set, then this is the SCI
interrupt in the GPEx_STS register. */
u8 GPE;
s16 Reserved;
/* If bit 1 of InterruptType is set, then this is the I/O
APIC/SAPIC interrupt. */
u32 GlobalSystemInterrupt;
/* The actual register address. */
struct acpi_generic_address addr;
u8 UID[4];
s8 spmi_id[1]; /* A '\0' terminated array starts here. */
};
static int try_init_acpi(int intf_num, struct smi_info **new_info)
{
struct smi_info *info;
acpi_status status;
struct SPMITable *spmi;
char *io_type;
u8 addr_space;
if (acpi_failure)
return -ENODEV;
status = acpi_get_firmware_table("SPMI", intf_num+1,
ACPI_LOGICAL_ADDRESSING,
(struct acpi_table_header **) &spmi);
if (status != AE_OK) {
acpi_failure = 1;
return -ENODEV;
}
if (spmi->IPMIlegacy != 1) {
printk(KERN_INFO "IPMI: Bad SPMI legacy %d\n", spmi->IPMIlegacy);
return -ENODEV;
}
if (spmi->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
addr_space = IPMI_MEM_ADDR_SPACE;
else
addr_space = IPMI_IO_ADDR_SPACE;
if (!is_new_interface(-1, addr_space, spmi->addr.address))
return -ENODEV;
/* Figure out the interface type. */
switch (spmi->InterfaceType)
{
case 1: /* KCS */
si_type[intf_num] = "kcs";
break;
case 2: /* SMIC */
si_type[intf_num] = "smic";
break;
case 3: /* BT */
si_type[intf_num] = "bt";
break;
default:
printk(KERN_INFO "ipmi_si: Unknown ACPI/SPMI SI type %d\n",
spmi->InterfaceType);
return -EIO;
}
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
printk(KERN_ERR "ipmi_si: Could not allocate SI data (3)\n");
return -ENOMEM;
}
memset(info, 0, sizeof(*info));
if (spmi->InterruptType & 1) {
/* We've got a GPE interrupt. */
info->irq = spmi->GPE;
info->irq_setup = acpi_gpe_irq_setup;
info->irq_cleanup = acpi_gpe_irq_cleanup;
} else if (spmi->InterruptType & 2) {
/* We've got an APIC/SAPIC interrupt. */
info->irq = spmi->GlobalSystemInterrupt;
info->irq_setup = std_irq_setup;
info->irq_cleanup = std_irq_cleanup;
} else {
/* Use the default interrupt setting. */
info->irq = 0;
info->irq_setup = NULL;
}
if (spmi->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
io_type = "memory";
info->io_setup = mem_setup;
info->io_cleanup = mem_cleanup;
addrs[intf_num] = spmi->addr.address;
info->io.inputb = mem_inb;
info->io.outputb = mem_outb;
info->io.info = &(addrs[intf_num]);
} else if (spmi->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
io_type = "I/O";
info->io_setup = port_setup;
info->io_cleanup = port_cleanup;
ports[intf_num] = spmi->addr.address;
info->io.inputb = port_inb;
info->io.outputb = port_outb;
info->io.info = &(ports[intf_num]);
} else {
kfree(info);
printk("ipmi_si: Unknown ACPI I/O Address type\n");
return -EIO;
}
*new_info = info;
printk("ipmi_si: ACPI/SPMI specifies \"%s\" %s SI @ 0x%lx\n",
si_type[intf_num], io_type, (unsigned long) spmi->addr.address);
return 0;
}
#endif
#ifdef CONFIG_X86
typedef struct dmi_ipmi_data
{
u8 type;
u8 addr_space;
unsigned long base_addr;
u8 irq;
} dmi_ipmi_data_t;
typedef struct dmi_header
{
u8 type;
u8 length;
u16 handle;
} dmi_header_t;
static int decode_dmi(dmi_header_t *dm, dmi_ipmi_data_t *ipmi_data)
{
u8 *data = (u8 *)dm;
unsigned long base_addr;
ipmi_data->type = data[0x04];
memcpy(&base_addr, &data[0x08], sizeof(unsigned long));
if (base_addr & 1) {
/* I/O */
base_addr &= 0xFFFE;
ipmi_data->addr_space = IPMI_IO_ADDR_SPACE;
}
else {
/* Memory */
ipmi_data->addr_space = IPMI_MEM_ADDR_SPACE;
}
ipmi_data->base_addr = base_addr;
ipmi_data->irq = data[0x11];
if (is_new_interface(-1, ipmi_data->addr_space, ipmi_data->base_addr))
return 0;
memset(ipmi_data, 0, sizeof(dmi_ipmi_data_t));
return -1;
}
static int dmi_table(u32 base, int len, int num,
dmi_ipmi_data_t *ipmi_data)
{
u8 *buf;
struct dmi_header *dm;
u8 *data;
int i = 1;
int status = -1;
buf = ioremap(base, len);
if (buf == NULL)
return -1;
data = buf;
while (i < num && (data - buf) < len) {
dm = (dmi_header_t *) data;
if ((data - buf + dm->length) >= len)
break;
if (dm->type == 38) {
if (decode_dmi(dm, ipmi_data) == 0) {
status = 0;
break;
}
}
data += dm->length;
while ((data - buf) < len && (*data || data[1]))
data++;
data += 2;
i++;
}
iounmap(buf);
return status;
}
static inline int dmi_checksum(u8 *buf)
{
u8 sum = 0;
int a;
for (a = 0; a < 15; a++)
sum += buf[a];
return (sum == 0);
}
static int dmi_iterator(dmi_ipmi_data_t *ipmi_data)
{
u8 buf[15];
u32 fp=0xF0000;
#ifdef CONFIG_SIMNOW
return -1;
#endif
while (fp < 0xFFFFF) {
isa_memcpy_fromio(buf, fp, 15);
if (memcmp(buf, "_DMI_", 5) == 0 && dmi_checksum(buf)) {
u16 num = buf[13] << 8 | buf[12];
u16 len = buf[7] << 8 | buf[6];
u32 base = buf[11] << 24 | buf[10] << 16 | buf[9] << 8 | buf[8];
if (dmi_table(base, len, num, ipmi_data) == 0)
return 0;
}
fp += 16;
}
return -1;
}
static int try_init_smbios(int intf_num, struct smi_info **new_info)
{
struct smi_info *info;
dmi_ipmi_data_t ipmi_data;
char *io_type;
int status;
status = dmi_iterator(&ipmi_data);
if (status < 0)
return -ENODEV;
switch(ipmi_data.type) {
case 0x01: /* KCS */
si_type[intf_num] = "kcs";
break;
case 0x02: /* SMIC */
si_type[intf_num] = "smic";
break;
case 0x03: /* BT */
si_type[intf_num] = "bt";
break;
default:
printk("ipmi_si: Unknown SMBIOS SI type.\n");
return -EIO;
}
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
printk(KERN_ERR "ipmi_si: Could not allocate SI data (4)\n");
return -ENOMEM;
}
memset(info, 0, sizeof(*info));
if (ipmi_data.addr_space == 1) {
io_type = "memory";
info->io_setup = mem_setup;
info->io_cleanup = mem_cleanup;
addrs[intf_num] = ipmi_data.base_addr;
info->io.inputb = mem_inb;
info->io.outputb = mem_outb;
info->io.info = &(addrs[intf_num]);
} else if (ipmi_data.addr_space == 2) {
io_type = "I/O";
info->io_setup = port_setup;
info->io_cleanup = port_cleanup;
ports[intf_num] = ipmi_data.base_addr;
info->io.inputb = port_inb;
info->io.outputb = port_outb;
info->io.info = &(ports[intf_num]);
} else {
kfree(info);
printk("ipmi_si: Unknown SMBIOS I/O Address type.\n");
return -EIO;
}
irqs[intf_num] = ipmi_data.irq;
*new_info = info;
printk("ipmi_si: Found SMBIOS-specified state machine at %s"
" address 0x%lx\n",
io_type, (unsigned long)ipmi_data.base_addr);
return 0;
}
#endif /* CONFIG_X86 */
#ifdef CONFIG_PCI
#define PCI_ERMC_CLASSCODE 0x0C0700
#define PCI_HP_VENDOR_ID 0x103C
#define PCI_MMC_DEVICE_ID 0x121A
#define PCI_MMC_ADDR_CW 0x10
/* Avoid more than one attempt to probe pci smic. */
static int pci_smic_checked = 0;
static int find_pci_smic(int intf_num, struct smi_info **new_info)
{
struct smi_info *info;
int error;
struct pci_dev *pci_dev = NULL;
u16 base_addr;
int fe_rmc = 0;
if (pci_smic_checked)
return -ENODEV;
pci_smic_checked = 1;
if ((pci_dev = pci_find_device(PCI_HP_VENDOR_ID, PCI_MMC_DEVICE_ID,
NULL)))
;
else if ((pci_dev = pci_find_class(PCI_ERMC_CLASSCODE, NULL)) &&
pci_dev->subsystem_vendor == PCI_HP_VENDOR_ID)
fe_rmc = 1;
else
return -ENODEV;
error = pci_read_config_word(pci_dev, PCI_MMC_ADDR_CW, &base_addr);
if (error)
{
printk(KERN_ERR
"ipmi_si: pci_read_config_word() failed (%d).\n",
error);
return -ENODEV;
}
/* Bit 0: 1 specifies programmed I/O, 0 specifies memory mapped I/O */
if (!(base_addr & 0x0001))
{
printk(KERN_ERR
"ipmi_si: memory mapped I/O not supported for PCI"
" smic.\n");
return -ENODEV;
}
base_addr &= 0xFFFE;
if (!fe_rmc)
/* Data register starts at base address + 1 in eRMC */
++base_addr;
if (!is_new_interface(-1, IPMI_IO_ADDR_SPACE, base_addr))
return -ENODEV;
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
printk(KERN_ERR "ipmi_si: Could not allocate SI data (5)\n");
return -ENOMEM;
}
memset(info, 0, sizeof(*info));
info->io_setup = port_setup;
info->io_cleanup = port_cleanup;
ports[intf_num] = base_addr;
info->io.inputb = port_inb;
info->io.outputb = port_outb;
info->io.info = &(ports[intf_num]);
*new_info = info;
irqs[intf_num] = pci_dev->irq;
si_type[intf_num] = "smic";
printk("ipmi_si: Found PCI SMIC at I/O address 0x%lx\n",
(unsigned long) base_addr);
return 0;
}
#endif /* CONFIG_PCI */
static int try_init_plug_and_play(int intf_num, struct smi_info **new_info)
{
#ifdef CONFIG_PCI
if (find_pci_smic(intf_num, new_info)==0)
return 0;
#endif
/* Include other methods here. */
return -ENODEV;
}
static int try_get_dev_id(struct smi_info *smi_info)
{
unsigned char msg[2];
unsigned char *resp;
unsigned long resp_len;
enum si_sm_result smi_result;
int rv = 0;
resp = kmalloc(IPMI_MAX_MSG_LENGTH, GFP_KERNEL);
if (!resp)
return -ENOMEM;
/* Do a Get Device ID command, since it comes back with some
useful info. */
msg[0] = IPMI_NETFN_APP_REQUEST << 2;
msg[1] = IPMI_GET_DEVICE_ID_CMD;
smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
smi_result = smi_info->handlers->event(smi_info->si_sm, 0);
for (;;)
{
if (smi_result == SI_SM_CALL_WITH_DELAY) {
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
smi_result = smi_info->handlers->event(
smi_info->si_sm, 100);
}
else if (smi_result == SI_SM_CALL_WITHOUT_DELAY)
{
smi_result = smi_info->handlers->event(
smi_info->si_sm, 0);
}
else
break;
}
if (smi_result == SI_SM_HOSED) {
/* We couldn't get the state machine to run, so whatever's at
the port is probably not an IPMI SMI interface. */
rv = -ENODEV;
goto out;
}
/* Otherwise, we got some data. */
resp_len = smi_info->handlers->get_result(smi_info->si_sm,
resp, IPMI_MAX_MSG_LENGTH);
if (resp_len < 6) {
/* That's odd, it should be longer. */
rv = -EINVAL;
goto out;
}
if ((resp[1] != IPMI_GET_DEVICE_ID_CMD) || (resp[2] != 0)) {
/* That's odd, it shouldn't be able to fail. */
rv = -EINVAL;
goto out;
}
/* Record info from the get device id, in case we need it. */
smi_info->ipmi_si_dev_rev = resp[4] & 0xf;
smi_info->ipmi_si_fw_rev_major = resp[5] & 0x7f;
smi_info->ipmi_si_fw_rev_minor = resp[6];
smi_info->ipmi_version_major = resp[7] & 0xf;
smi_info->ipmi_version_minor = resp[7] >> 4;
out:
kfree(resp);
return rv;
}
static int type_file_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
char *out = (char *) page;
struct smi_info *smi = data;
switch (smi->si_type) {
case SI_KCS:
return sprintf(out, "kcs\n");
case SI_SMIC:
return sprintf(out, "smic\n");
case SI_BT:
return sprintf(out, "bt\n");
default:
return 0;
}
}
static int stat_file_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
char *out = (char *) page;
struct smi_info *smi = data;
out += sprintf(out, "interrupts_enabled: %d\n",
smi->irq && !smi->interrupt_disabled);
out += sprintf(out, "short_timeouts: %ld\n",
smi->short_timeouts);
out += sprintf(out, "long_timeouts: %ld\n",
smi->long_timeouts);
out += sprintf(out, "timeout_restarts: %ld\n",
smi->timeout_restarts);
out += sprintf(out, "idles: %ld\n",
smi->idles);
out += sprintf(out, "interrupts: %ld\n",
smi->interrupts);
out += sprintf(out, "attentions: %ld\n",
smi->attentions);
out += sprintf(out, "flag_fetches: %ld\n",
smi->flag_fetches);
out += sprintf(out, "hosed_count: %ld\n",
smi->hosed_count);
out += sprintf(out, "complete_transactions: %ld\n",
smi->complete_transactions);
out += sprintf(out, "events: %ld\n",
smi->events);
out += sprintf(out, "watchdog_pretimeouts: %ld\n",
smi->watchdog_pretimeouts);
out += sprintf(out, "incoming_messages: %ld\n",
smi->incoming_messages);
return (out - ((char *) page));
}
/* Returns 0 if initialized, or negative on an error. */
static int init_one_smi(int intf_num, struct smi_info **smi)
{
int rv;
struct smi_info *new_smi;
rv = try_init_mem(intf_num, &new_smi);
if (rv)
rv = try_init_port(intf_num, &new_smi);
#ifdef CONFIG_ACPI_INTERPRETER
if ((rv) && (si_trydefaults)) {
rv = try_init_acpi(intf_num, &new_smi);
}
#endif
#ifdef CONFIG_X86
if ((rv) && (si_trydefaults)) {
rv = try_init_smbios(intf_num, &new_smi);
}
#endif
if ((rv) && (si_trydefaults)) {
rv = try_init_plug_and_play(intf_num, &new_smi);
}
if (rv)
return rv;
/* So we know not to free it unless we have allocated one. */
new_smi->intf = NULL;
new_smi->si_sm = NULL;
new_smi->handlers = NULL;
if (!new_smi->irq_setup) {
new_smi->irq = irqs[intf_num];
new_smi->irq_setup = std_irq_setup;
new_smi->irq_cleanup = std_irq_cleanup;
}
/* Default to KCS if no type is specified. */
if (si_type[intf_num] == NULL) {
if (si_trydefaults)
si_type[intf_num] = "kcs";
else {
rv = -EINVAL;
goto out_err;
}
}
/* Set up the state machine to use. */
if (strcmp(si_type[intf_num], "kcs") == 0) {
new_smi->handlers = &kcs_smi_handlers;
new_smi->si_type = SI_KCS;
} else if (strcmp(si_type[intf_num], "smic") == 0) {
new_smi->handlers = &smic_smi_handlers;
new_smi->si_type = SI_SMIC;
} else if (strcmp(si_type[intf_num], "bt") == 0) {
new_smi->handlers = &bt_smi_handlers;
new_smi->si_type = SI_BT;
} else {
/* No support for anything else yet. */
rv = -EIO;
goto out_err;
}
/* Allocate the state machine's data and initialize it. */
new_smi->si_sm = kmalloc(new_smi->handlers->size(), GFP_KERNEL);
if (!new_smi->si_sm) {
printk(" Could not allocate state machine memory\n");
rv = -ENOMEM;
goto out_err;
}
new_smi->io_size = new_smi->handlers->init_data(new_smi->si_sm,
&new_smi->io);
/* Now that we know the I/O size, we can set up the I/O. */
rv = new_smi->io_setup(new_smi);
if (rv) {
printk(" Could not set up I/O space\n");
goto out_err;
}
spin_lock_init(&(new_smi->si_lock));
spin_lock_init(&(new_smi->msg_lock));
spin_lock_init(&(new_smi->count_lock));
/* Do low-level detection first. */
if (new_smi->handlers->detect(new_smi->si_sm)) {
rv = -ENODEV;
goto out_err;
}
/* Attempt a get device id command. If it fails, we probably
don't have a SMI here. */
rv = try_get_dev_id(new_smi);
if (rv)
goto out_err;
/* Try to claim any interrupts. */
new_smi->irq_setup(new_smi);
INIT_LIST_HEAD(&(new_smi->xmit_msgs));
INIT_LIST_HEAD(&(new_smi->hp_xmit_msgs));
new_smi->curr_msg = NULL;
atomic_set(&new_smi->req_events, 0);
new_smi->run_to_completion = 0;
rv = ipmi_register_smi(&handlers,
new_smi,
new_smi->ipmi_version_major,
new_smi->ipmi_version_minor,
&(new_smi->intf));
if (rv) {
printk(KERN_ERR
"ipmi_si: Unable to register device: error %d\n",
rv);
goto out_err;
}
rv = ipmi_smi_add_proc_entry(new_smi->intf, "type",
type_file_read_proc, NULL,
new_smi, THIS_MODULE);
if (rv) {
printk(KERN_ERR
"ipmi_si: Unable to create proc entry: %d\n",
rv);
goto out_err;
}
rv = ipmi_smi_add_proc_entry(new_smi->intf, "si_stats",
stat_file_read_proc, NULL,
new_smi, THIS_MODULE);
if (rv) {
printk(KERN_ERR
"ipmi_si: Unable to create proc entry: %d\n",
rv);
goto out_err;
}
start_clear_flags(new_smi);
/* IRQ is defined to be set when non-zero. */
if (new_smi->irq)
new_smi->si_state = SI_CLEARING_FLAGS_THEN_SET_IRQ;
new_smi->interrupt_disabled = 0;
new_smi->timer_stopped = 0;
new_smi->stop_operation = 0;
init_timer(&(new_smi->si_timer));
new_smi->si_timer.data = (long) new_smi;
new_smi->si_timer.function = smi_timeout;
new_smi->last_timeout_jiffies = jiffies;
new_smi->si_timer.expires = jiffies + SI_TIMEOUT_JIFFIES;
add_timer(&(new_smi->si_timer));
*smi = new_smi;
printk(" IPMI %s interface initialized\n", si_type[intf_num]);
return 0;
out_err:
if (new_smi->intf)
ipmi_unregister_smi(new_smi->intf);
new_smi->irq_cleanup(new_smi);
if (new_smi->si_sm) {
if (new_smi->handlers)
new_smi->handlers->cleanup(new_smi->si_sm);
kfree(new_smi->si_sm);
}
new_smi->io_cleanup(new_smi);
return rv;
}
static __init int init_ipmi_si(void)
{
int rv = 0;
int pos = 0;
int i;
char *str;
if (initialized)
return 0;
initialized = 1;
/* Parse out the si_type string into its components. */
str = si_type_str;
if (*str != '\0') {
for (i=0; (i<SI_MAX_PARMS) && (*str != '\0'); i++) {
si_type[i] = str;
str = strchr(str, ',');
if (str) {
*str = '\0';
str++;
} else {
break;
}
}
}
printk(KERN_INFO "IPMI System Interface driver version "
IPMI_SI_VERSION);
if (kcs_smi_handlers.version)
printk(", KCS version %s", kcs_smi_handlers.version);
if (smic_smi_handlers.version)
printk(", SMIC version %s", smic_smi_handlers.version);
if (bt_smi_handlers.version)
printk(", BT version %s", bt_smi_handlers.version);
printk("\n");
rv = init_one_smi(0, &(smi_infos[pos]));
if (rv && !ports[0] && si_trydefaults) {
/* If we are trying defaults and the initial port is
not set, then set it. */
si_type[0] = "kcs";
ports[0] = DEFAULT_KCS_IO_PORT;
rv = init_one_smi(0, &(smi_infos[pos]));
if (rv) {
/* No KCS - try SMIC */
si_type[0] = "smic";
ports[0] = DEFAULT_SMIC_IO_PORT;
rv = init_one_smi(0, &(smi_infos[pos]));
}
if (rv) {
/* No SMIC - try BT */
si_type[0] = "bt";
ports[0] = DEFAULT_BT_IO_PORT;
rv = init_one_smi(0, &(smi_infos[pos]));
}
}
if (rv == 0)
pos++;
for (i=1; i < SI_MAX_PARMS; i++) {
rv = init_one_smi(i, &(smi_infos[pos]));
if (rv == 0)
pos++;
}
if (smi_infos[0] == NULL) {
printk("ipmi_si: Unable to find any System Interface(s)\n");
return -ENODEV;
}
return 0;
}
module_init(init_ipmi_si);
void __exit cleanup_one_si(struct smi_info *to_clean)
{
int rv;
unsigned long flags;
if (!to_clean)
return;
/* Tell the timer and interrupt handlers that we are shutting
down. */
spin_lock_irqsave(&(to_clean->si_lock), flags);
spin_lock(&(to_clean->msg_lock));
to_clean->stop_operation = 1;
to_clean->irq_cleanup(to_clean);
spin_unlock(&(to_clean->msg_lock));
spin_unlock_irqrestore(&(to_clean->si_lock), flags);
/* Wait until any interrupt handlers that might have been
running when we freed the interrupt have finished. */
synchronize_kernel();
/* Wait for the timer to stop. This avoids problems with race
conditions removing the timer here. */
while (!to_clean->timer_stopped) {
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
}
rv = ipmi_unregister_smi(to_clean->intf);
if (rv) {
printk(KERN_ERR
"ipmi_si: Unable to unregister device: errno=%d\n",
rv);
}
to_clean->handlers->cleanup(to_clean->si_sm);
kfree(to_clean->si_sm);
to_clean->io_cleanup(to_clean);
}
static __exit void cleanup_ipmi_si(void)
{
int i;
if (!initialized)
return;
for (i=0; i<SI_MAX_DRIVERS; i++) {
cleanup_one_si(smi_infos[i]);
}
}
module_exit(cleanup_ipmi_si);
MODULE_LICENSE("GPL");
/*
* ipmi_si_sm.h
*
* State machine interface for low-level IPMI system management
* interface state machines. This code is the interface between
* the ipmi_smi code (that handles the policy of a KCS, SMIC, or
* BT interface) and the actual low-level state machine.
*
* Author: MontaVista Software, Inc.
* Corey Minyard <minyard@mvista.com>
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/* This is defined by the state machines themselves, it is an opaque
data type for them to use. */
struct si_sm_data;
/* The structure for doing I/O in the state machine. The state
machine doesn't have the actual I/O routines, they are done through
this interface. */
struct si_sm_io
{
unsigned char (*inputb)(struct si_sm_io *io, unsigned int offset);
void (*outputb)(struct si_sm_io *io,
unsigned int offset,
unsigned char b);
/* Generic info used by the actual handling routines, the
state machine shouldn't touch these. */
void *info;
void *addr;
};
/* Results of SMI events. */
enum si_sm_result
{
SI_SM_CALL_WITHOUT_DELAY, /* Call the driver again immediately */
SI_SM_CALL_WITH_DELAY, /* Delay some before calling again. */
SI_SM_TRANSACTION_COMPLETE, /* A transaction is finished. */
SI_SM_IDLE, /* The SM is in idle state. */
SI_SM_HOSED, /* The hardware violated the state machine. */
SI_SM_ATTN /* The hardware is asserting attn and the
state machine is idle. */
};
/* Handlers for the SMI state machine. */
struct si_sm_handlers
{
/* Put the version number of the state machine here so the
upper layer can print it. */
char *version;
/* Initialize the data and return the amount of I/O space to
reserve for the interface. */
unsigned int (*init_data)(struct si_sm_data *smi,
struct si_sm_io *io);
/* Start a new transaction in the state machine. This will
return -2 if the state machine is not idle, -1 if the size
is invalid (too large or too small), or 0 if the transaction
is successfully completed. */
int (*start_transaction)(struct si_sm_data *smi,
unsigned char *data, unsigned int size);
/* Return the results after the transaction. This will return
-1 if the buffer is too small, zero if no transaction is
present, or the actual length of the result data. */
int (*get_result)(struct si_sm_data *smi,
unsigned char *data, unsigned int length);
/* Call this periodically (for a polled interface) or upon
receiving an interrupt (for an interrupt-driven interface).
If interrupt driven, you should probably poll this
periodically when not in idle state. This should be called
with the time that passed since the last call, if it is
significant. Time is in microseconds. */
enum si_sm_result (*event)(struct si_sm_data *smi, long time);
/* Attempt to detect an SMI. Returns 0 on success or nonzero
on failure. */
int (*detect)(struct si_sm_data *smi);
/* The interface is shutting down, so clean it up. */
void (*cleanup)(struct si_sm_data *smi);
/* Return the size of the SMI structure in bytes. */
int (*size)(void);
};
/* Current state machines that we can use. */
extern struct si_sm_handlers kcs_smi_handlers;
extern struct si_sm_handlers smic_smi_handlers;
extern struct si_sm_handlers bt_smi_handlers;
/*
* ipmi_smic_sm.c
*
* The low-level state machine for an IPMI SMIC interface
*
* It started as a copy of Corey Minyard's driver for the KCS interface
* and the kernel patch "mmcdev-patch-245" by HP
*
* modified by: Hannes Schulz <schulz@schwaar.com>
* ipmi@schwaar.com
*
*
* Corey Minyard's driver for the KCS interface has the following
* copyright notice:
* Copyright 2002 MontaVista Software Inc.
*
* the kernel patch "mmcdev-patch-245" by HP has the following
* copyright notice:
* (c) Copyright 2001 Grant Grundler (c) Copyright
* 2001 Hewlett-Packard Company
*
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
* OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA. */
#include <linux/kernel.h> /* For printk. */
#include <linux/string.h>
#include <linux/ipmi_msgdefs.h> /* for completion codes */
#include "ipmi_si_sm.h"
#define IPMI_SMIC_VERSION "v31"
/* smic_debug is a bit-field
* SMIC_DEBUG_ENABLE - turned on for now
* SMIC_DEBUG_MSG - commands and their responses
* SMIC_DEBUG_STATES - state machine
*/
#define SMIC_DEBUG_STATES 4
#define SMIC_DEBUG_MSG 2
#define SMIC_DEBUG_ENABLE 1
static int smic_debug = 1;
enum smic_states {
SMIC_IDLE,
SMIC_START_OP,
SMIC_OP_OK,
SMIC_WRITE_START,
SMIC_WRITE_NEXT,
SMIC_WRITE_END,
SMIC_WRITE2READ,
SMIC_READ_START,
SMIC_READ_NEXT,
SMIC_READ_END,
SMIC_HOSED
};
#define MAX_SMIC_READ_SIZE 80
#define MAX_SMIC_WRITE_SIZE 80
#define SMIC_MAX_ERROR_RETRIES 3
/* Timeouts in microseconds. */
#define SMIC_RETRY_TIMEOUT 100000
/* SMIC Flags Register Bits */
#define SMIC_RX_DATA_READY 0x80
#define SMIC_TX_DATA_READY 0x40
#define SMIC_SMI 0x10
#define SMIC_EVM_DATA_AVAIL 0x08
#define SMIC_SMS_DATA_AVAIL 0x04
#define SMIC_FLAG_BSY 0x01
/* SMIC Error Codes */
#define EC_NO_ERROR 0x00
#define EC_ABORTED 0x01
#define EC_ILLEGAL_CONTROL 0x02
#define EC_NO_RESPONSE 0x03
#define EC_ILLEGAL_COMMAND 0x04
#define EC_BUFFER_FULL 0x05
struct si_sm_data
{
enum smic_states state;
struct si_sm_io *io;
unsigned char write_data[MAX_SMIC_WRITE_SIZE];
int write_pos;
int write_count;
int orig_write_count;
unsigned char read_data[MAX_SMIC_READ_SIZE];
int read_pos;
int truncated;
unsigned int error_retries;
long smic_timeout;
};
static unsigned int init_smic_data (struct si_sm_data *smic,
struct si_sm_io *io)
{
smic->state = SMIC_IDLE;
smic->io = io;
smic->write_pos = 0;
smic->write_count = 0;
smic->orig_write_count = 0;
smic->read_pos = 0;
smic->error_retries = 0;
smic->truncated = 0;
smic->smic_timeout = SMIC_RETRY_TIMEOUT;
/* We use 3 bytes of I/O. */
return 3;
}
static int start_smic_transaction(struct si_sm_data *smic,
unsigned char *data, unsigned int size)
{
unsigned int i;
if ((size < 2) || (size > MAX_SMIC_WRITE_SIZE)) {
return -1;
}
if ((smic->state != SMIC_IDLE) && (smic->state != SMIC_HOSED)) {
return -2;
}
if (smic_debug & SMIC_DEBUG_MSG) {
printk(KERN_INFO "start_smic_transaction -");
for (i = 0; i < size; i ++) {
printk (" %02x", (unsigned char) (data [i]));
}
printk ("\n");
}
smic->error_retries = 0;
memcpy(smic->write_data, data, size);
smic->write_count = size;
smic->orig_write_count = size;
smic->write_pos = 0;
smic->read_pos = 0;
smic->state = SMIC_START_OP;
smic->smic_timeout = SMIC_RETRY_TIMEOUT;
return 0;
}
static int smic_get_result(struct si_sm_data *smic,
unsigned char *data, unsigned int length)
{
int i;
if (smic_debug & SMIC_DEBUG_MSG) {
printk (KERN_INFO "smic_get result -");
for (i = 0; i < smic->read_pos; i ++) {
printk (" %02x", (smic->read_data [i]));
}
printk ("\n");
}
if (length < smic->read_pos) {
smic->read_pos = length;
smic->truncated = 1;
}
memcpy(data, smic->read_data, smic->read_pos);
if ((length >= 3) && (smic->read_pos < 3)) {
data[2] = IPMI_ERR_UNSPECIFIED;
smic->read_pos = 3;
}
if (smic->truncated) {
data[2] = IPMI_ERR_MSG_TRUNCATED;
smic->truncated = 0;
}
return smic->read_pos;
}
static inline unsigned char read_smic_flags(struct si_sm_data *smic)
{
return smic->io->inputb(smic->io, 2);
}
static inline unsigned char read_smic_status(struct si_sm_data *smic)
{
return smic->io->inputb(smic->io, 1);
}
static inline unsigned char read_smic_data(struct si_sm_data *smic)
{
return smic->io->inputb(smic->io, 0);
}
static inline void write_smic_flags(struct si_sm_data *smic,
unsigned char flags)
{
smic->io->outputb(smic->io, 2, flags);
}
static inline void write_smic_control(struct si_sm_data *smic,
unsigned char control)
{
smic->io->outputb(smic->io, 1, control);
}
static inline void write_si_sm_data (struct si_sm_data *smic,
unsigned char data)
{
smic->io->outputb(smic->io, 0, data);
}
static inline void start_error_recovery(struct si_sm_data *smic, char *reason)
{
(smic->error_retries)++;
if (smic->error_retries > SMIC_MAX_ERROR_RETRIES) {
if (smic_debug & SMIC_DEBUG_ENABLE) {
printk(KERN_WARNING
"ipmi_smic_drv: smic hosed: %s\n", reason);
}
smic->state = SMIC_HOSED;
} else {
smic->write_count = smic->orig_write_count;
smic->write_pos = 0;
smic->read_pos = 0;
smic->state = SMIC_START_OP;
smic->smic_timeout = SMIC_RETRY_TIMEOUT;
}
}
static inline void write_next_byte(struct si_sm_data *smic)
{
write_si_sm_data(smic, smic->write_data[smic->write_pos]);
(smic->write_pos)++;
(smic->write_count)--;
}
static inline void read_next_byte (struct si_sm_data *smic)
{
if (smic->read_pos >= MAX_SMIC_READ_SIZE) {
read_smic_data (smic);
smic->truncated = 1;
} else {
smic->read_data[smic->read_pos] = read_smic_data(smic);
(smic->read_pos)++;
}
}
/* SMIC Control/Status Code Components */
#define SMIC_GET_STATUS 0x00 /* Control form's name */
#define SMIC_READY 0x00 /* Status form's name */
#define SMIC_WR_START 0x01 /* Unified Control/Status names... */
#define SMIC_WR_NEXT 0x02
#define SMIC_WR_END 0x03
#define SMIC_RD_START 0x04
#define SMIC_RD_NEXT 0x05
#define SMIC_RD_END 0x06
#define SMIC_CODE_MASK 0x0f
#define SMIC_CONTROL 0x00
#define SMIC_STATUS 0x80
#define SMIC_CS_MASK 0x80
#define SMIC_SMS 0x40
#define SMIC_SMM 0x60
#define SMIC_STREAM_MASK 0x60
/* SMIC Control Codes */
#define SMIC_CC_SMS_GET_STATUS (SMIC_CONTROL|SMIC_SMS|SMIC_GET_STATUS)
#define SMIC_CC_SMS_WR_START (SMIC_CONTROL|SMIC_SMS|SMIC_WR_START)
#define SMIC_CC_SMS_WR_NEXT (SMIC_CONTROL|SMIC_SMS|SMIC_WR_NEXT)
#define SMIC_CC_SMS_WR_END (SMIC_CONTROL|SMIC_SMS|SMIC_WR_END)
#define SMIC_CC_SMS_RD_START (SMIC_CONTROL|SMIC_SMS|SMIC_RD_START)
#define SMIC_CC_SMS_RD_NEXT (SMIC_CONTROL|SMIC_SMS|SMIC_RD_NEXT)
#define SMIC_CC_SMS_RD_END (SMIC_CONTROL|SMIC_SMS|SMIC_RD_END)
#define SMIC_CC_SMM_GET_STATUS (SMIC_CONTROL|SMIC_SMM|SMIC_GET_STATUS)
#define SMIC_CC_SMM_WR_START (SMIC_CONTROL|SMIC_SMM|SMIC_WR_START)
#define SMIC_CC_SMM_WR_NEXT (SMIC_CONTROL|SMIC_SMM|SMIC_WR_NEXT)
#define SMIC_CC_SMM_WR_END (SMIC_CONTROL|SMIC_SMM|SMIC_WR_END)
#define SMIC_CC_SMM_RD_START (SMIC_CONTROL|SMIC_SMM|SMIC_RD_START)
#define SMIC_CC_SMM_RD_NEXT (SMIC_CONTROL|SMIC_SMM|SMIC_RD_NEXT)
#define SMIC_CC_SMM_RD_END (SMIC_CONTROL|SMIC_SMM|SMIC_RD_END)
/* SMIC Status Codes */
#define SMIC_SC_SMS_READY (SMIC_STATUS|SMIC_SMS|SMIC_READY)
#define SMIC_SC_SMS_WR_START (SMIC_STATUS|SMIC_SMS|SMIC_WR_START)
#define SMIC_SC_SMS_WR_NEXT (SMIC_STATUS|SMIC_SMS|SMIC_WR_NEXT)
#define SMIC_SC_SMS_WR_END (SMIC_STATUS|SMIC_SMS|SMIC_WR_END)
#define SMIC_SC_SMS_RD_START (SMIC_STATUS|SMIC_SMS|SMIC_RD_START)
#define SMIC_SC_SMS_RD_NEXT (SMIC_STATUS|SMIC_SMS|SMIC_RD_NEXT)
#define SMIC_SC_SMS_RD_END (SMIC_STATUS|SMIC_SMS|SMIC_RD_END)
#define SMIC_SC_SMM_READY (SMIC_STATUS|SMIC_SMM|SMIC_READY)
#define SMIC_SC_SMM_WR_START (SMIC_STATUS|SMIC_SMM|SMIC_WR_START)
#define SMIC_SC_SMM_WR_NEXT (SMIC_STATUS|SMIC_SMM|SMIC_WR_NEXT)
#define SMIC_SC_SMM_WR_END (SMIC_STATUS|SMIC_SMM|SMIC_WR_END)
#define SMIC_SC_SMM_RD_START (SMIC_STATUS|SMIC_SMM|SMIC_RD_START)
#define SMIC_SC_SMM_RD_NEXT (SMIC_STATUS|SMIC_SMM|SMIC_RD_NEXT)
#define SMIC_SC_SMM_RD_END (SMIC_STATUS|SMIC_SMM|SMIC_RD_END)
/* these are the control/status codes we actually use
SMIC_CC_SMS_GET_STATUS 0x40
SMIC_CC_SMS_WR_START 0x41
SMIC_CC_SMS_WR_NEXT 0x42
SMIC_CC_SMS_WR_END 0x43
SMIC_CC_SMS_RD_START 0x44
SMIC_CC_SMS_RD_NEXT 0x45
SMIC_CC_SMS_RD_END 0x46
SMIC_SC_SMS_READY 0xC0
SMIC_SC_SMS_WR_START 0xC1
SMIC_SC_SMS_WR_NEXT 0xC2
SMIC_SC_SMS_WR_END 0xC3
SMIC_SC_SMS_RD_START 0xC4
SMIC_SC_SMS_RD_NEXT 0xC5
SMIC_SC_SMS_RD_END 0xC6
*/
static enum si_sm_result smic_event (struct si_sm_data *smic, long time)
{
unsigned char status;
unsigned char flags;
unsigned char data;
if (smic->state == SMIC_HOSED) {
init_smic_data(smic, smic->io);
return SI_SM_HOSED;
}
if (smic->state != SMIC_IDLE) {
if (smic_debug & SMIC_DEBUG_STATES) {
printk(KERN_INFO
"smic_event - smic->smic_timeout = %ld,"
" time = %ld\n",
smic->smic_timeout, time);
}
/* FIXME: smic_event is sometimes called with time > SMIC_RETRY_TIMEOUT */
if (time < SMIC_RETRY_TIMEOUT) {
smic->smic_timeout -= time;
if (smic->smic_timeout < 0) {
start_error_recovery(smic, "smic timed out.");
return SI_SM_CALL_WITH_DELAY;
}
}
}
flags = read_smic_flags(smic);
if (flags & SMIC_FLAG_BSY)
return SI_SM_CALL_WITH_DELAY;
status = read_smic_status (smic);
if (smic_debug & SMIC_DEBUG_STATES)
printk(KERN_INFO
"smic_event - state = %d, flags = 0x%02x,"
" status = 0x%02x\n",
smic->state, flags, status);
switch (smic->state) {
case SMIC_IDLE:
/* in IDLE we check for available messages */
if (flags & (SMIC_SMI |
SMIC_EVM_DATA_AVAIL | SMIC_SMS_DATA_AVAIL))
{
return SI_SM_ATTN;
}
return SI_SM_IDLE;
case SMIC_START_OP:
/* sanity check whether smic is really idle */
write_smic_control(smic, SMIC_CC_SMS_GET_STATUS);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
smic->state = SMIC_OP_OK;
break;
case SMIC_OP_OK:
if (status != SMIC_SC_SMS_READY) {
/* this should not happen */
start_error_recovery(smic,
"state = SMIC_OP_OK,"
" status != SMIC_SC_SMS_READY");
return SI_SM_CALL_WITH_DELAY;
}
/* OK so far; smic is idle, let us start ... */
write_smic_control(smic, SMIC_CC_SMS_WR_START);
write_next_byte(smic);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
smic->state = SMIC_WRITE_START;
break;
case SMIC_WRITE_START:
if (status != SMIC_SC_SMS_WR_START) {
start_error_recovery(smic,
"state = SMIC_WRITE_START, "
"status != SMIC_SC_SMS_WR_START");
return SI_SM_CALL_WITH_DELAY;
}
/* we must not issue WR_(NEXT|END) unless
TX_DATA_READY is set */
if (flags & SMIC_TX_DATA_READY) {
if (smic->write_count == 1) {
/* last byte */
write_smic_control(smic, SMIC_CC_SMS_WR_END);
smic->state = SMIC_WRITE_END;
} else {
write_smic_control(smic, SMIC_CC_SMS_WR_NEXT);
smic->state = SMIC_WRITE_NEXT;
}
write_next_byte(smic);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
}
else {
return SI_SM_CALL_WITH_DELAY;
}
break;
case SMIC_WRITE_NEXT:
if (status != SMIC_SC_SMS_WR_NEXT) {
start_error_recovery(smic,
"state = SMIC_WRITE_NEXT, "
"status != SMIC_SC_SMS_WR_NEXT");
return SI_SM_CALL_WITH_DELAY;
}
/* this is the same code as in SMIC_WRITE_START */
if (flags & SMIC_TX_DATA_READY) {
if (smic->write_count == 1) {
write_smic_control(smic, SMIC_CC_SMS_WR_END);
smic->state = SMIC_WRITE_END;
}
else {
write_smic_control(smic, SMIC_CC_SMS_WR_NEXT);
smic->state = SMIC_WRITE_NEXT;
}
write_next_byte(smic);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
}
else {
return SI_SM_CALL_WITH_DELAY;
}
break;
case SMIC_WRITE_END:
if (status != SMIC_SC_SMS_WR_END) {
start_error_recovery (smic,
"state = SMIC_WRITE_END, "
"status != SMIC_SC_SMS_WR_END");
return SI_SM_CALL_WITH_DELAY;
}
/* data register holds an error code */
data = read_smic_data(smic);
if (data != 0) {
if (smic_debug & SMIC_DEBUG_ENABLE) {
printk(KERN_INFO
"SMIC_WRITE_END: data = %02x\n", data);
}
start_error_recovery(smic,
"state = SMIC_WRITE_END, "
"data != SUCCESS");
return SI_SM_CALL_WITH_DELAY;
} else {
smic->state = SMIC_WRITE2READ;
}
break;
case SMIC_WRITE2READ:
/* we must wait for RX_DATA_READY to be set before we
can continue */
if (flags & SMIC_RX_DATA_READY) {
write_smic_control(smic, SMIC_CC_SMS_RD_START);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
smic->state = SMIC_READ_START;
} else {
return SI_SM_CALL_WITH_DELAY;
}
break;
case SMIC_READ_START:
if (status != SMIC_SC_SMS_RD_START) {
start_error_recovery(smic,
"state = SMIC_READ_START, "
"status != SMIC_SC_SMS_RD_START");
return SI_SM_CALL_WITH_DELAY;
}
if (flags & SMIC_RX_DATA_READY) {
read_next_byte(smic);
write_smic_control(smic, SMIC_CC_SMS_RD_NEXT);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
smic->state = SMIC_READ_NEXT;
} else {
return SI_SM_CALL_WITH_DELAY;
}
break;
case SMIC_READ_NEXT:
switch (status) {
/* smic tells us that this is the last byte to be read
--> clean up */
case SMIC_SC_SMS_RD_END:
read_next_byte(smic);
write_smic_control(smic, SMIC_CC_SMS_RD_END);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
smic->state = SMIC_READ_END;
break;
case SMIC_SC_SMS_RD_NEXT:
if (flags & SMIC_RX_DATA_READY) {
read_next_byte(smic);
write_smic_control(smic, SMIC_CC_SMS_RD_NEXT);
write_smic_flags(smic, flags | SMIC_FLAG_BSY);
smic->state = SMIC_READ_NEXT;
} else {
return SI_SM_CALL_WITH_DELAY;
}
break;
default:
start_error_recovery(
smic,
"state = SMIC_READ_NEXT, "
"status != SMIC_SC_SMS_RD_(NEXT|END)");
return SI_SM_CALL_WITH_DELAY;
}
break;
case SMIC_READ_END:
if (status != SMIC_SC_SMS_READY) {
start_error_recovery(smic,
"state = SMIC_READ_END, "
"status != SMIC_SC_SMS_READY");
return SI_SM_CALL_WITH_DELAY;
}
data = read_smic_data(smic);
/* data register holds an error code */
if (data != 0) {
if (smic_debug & SMIC_DEBUG_ENABLE) {
printk(KERN_INFO
"SMIC_READ_END: data = %02x\n", data);
}
start_error_recovery(smic,
"state = SMIC_READ_END, "
"data != SUCCESS");
return SI_SM_CALL_WITH_DELAY;
} else {
smic->state = SMIC_IDLE;
return SI_SM_TRANSACTION_COMPLETE;
}
case SMIC_HOSED:
init_smic_data(smic, smic->io);
return SI_SM_HOSED;
default:
if (smic_debug & SMIC_DEBUG_ENABLE) {
printk(KERN_WARNING "smic->state = %d\n", smic->state);
start_error_recovery(smic, "state = UNKNOWN");
return SI_SM_CALL_WITH_DELAY;
}
}
smic->smic_timeout = SMIC_RETRY_TIMEOUT;
return SI_SM_CALL_WITHOUT_DELAY;
}
static int smic_detect(struct si_sm_data *smic)
{
/* It's impossible for the SMIC flags register to be all 1's,
(assuming a properly functioning, self-initialized BMC)
but that's what you get from reading a bogus address, so we
test that first. */
if (read_smic_flags(smic) == 0xff)
return 1;
return 0;
}
static void smic_cleanup(struct si_sm_data *smic)
{
}
static int smic_size(void)
{
return sizeof(struct si_sm_data);
}
struct si_sm_handlers smic_smi_handlers =
{
.version = IPMI_SMIC_VERSION,
.init_data = init_smic_data,
.start_transaction = start_smic_transaction,
.get_result = smic_get_result,
.event = smic_event,
.detect = smic_detect,
.cleanup = smic_cleanup,
.size = smic_size,
};
......@@ -33,6 +33,7 @@
#include <linux/config.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/ipmi.h>
#include <linux/ipmi_smi.h>
#include <linux/watchdog.h>
......@@ -50,6 +51,8 @@
#include <asm/apic.h>
#endif
#define IPMI_WATCHDOG_VERSION "v31"
/*
* The IPMI command/response information for the watchdog timer.
*/
......@@ -137,26 +140,41 @@ static int pretimeout = 0;
/* Default action is to reset the board on a timeout. */
static unsigned char action_val = WDOG_TIMEOUT_RESET;
static char *action = "reset";
static char action[16] = "reset";
static unsigned char preaction_val = WDOG_PRETIMEOUT_NONE;
static char *preaction = "pre_none";
static char preaction[16] = "pre_none";
static unsigned char preop_val = WDOG_PREOP_NONE;
static char *preop = "preop_none";
static char preop[16] = "preop_none";
static spinlock_t ipmi_read_lock = SPIN_LOCK_UNLOCKED;
static char data_to_read = 0;
static DECLARE_WAIT_QUEUE_HEAD(read_q);
static struct fasync_struct *fasync_q = NULL;
static char pretimeout_since_last_heartbeat = 0;
MODULE_PARM(timeout, "i");
MODULE_PARM(pretimeout, "i");
MODULE_PARM(action, "s");
MODULE_PARM(preaction, "s");
MODULE_PARM(preop, "s");
/* If true, the driver will start running as soon as it is configured
and ready. */
static int start_now = 0;
module_param(timeout, int, 0);
MODULE_PARM_DESC(timeout, "Timeout value in seconds.");
module_param(pretimeout, int, 0);
MODULE_PARM_DESC(pretimeout, "Pretimeout value in seconds.");
module_param_string(action, action, sizeof(action), 0);
MODULE_PARM_DESC(action, "Timeout action. One of: "
"reset, none, power_cycle, power_off.");
module_param_string(preaction, preaction, sizeof(preaction), 0);
MODULE_PARM_DESC(preaction, "Pretimeout action. One of: "
"pre_none, pre_smi, pre_nmi, pre_int.");
module_param_string(preop, preop, sizeof(preop), 0);
MODULE_PARM_DESC(preop, "Pretimeout driver operation. One of: "
"preop_none, preop_panic, preop_give_data.");
module_param(start_now, int, 0);
MODULE_PARM_DESC(start_now, "Set to 1 to start the watchdog as "
		 "soon as the driver is loaded.");
/* Default state of the timer. */
static unsigned char ipmi_watchdog_state = WDOG_TIMEOUT_NONE;
......@@ -167,10 +185,6 @@ static int ipmi_ignore_heartbeat = 0;
/* Is someone using the watchdog? Only one user is allowed. */
static int ipmi_wdog_open = 0;
/* If true, the driver will start running as soon as it is configured
and ready. */
static int start_now = 0;
/* If set to 1, the heartbeat command will set the state to reset and
start the timer. The timer doesn't normally run when the driver is
first opened until the heartbeat is set the first time, this
......@@ -260,6 +274,7 @@ static int i_ipmi_set_timeout(struct ipmi_smi_msg *smi_msg,
(struct ipmi_addr *) &addr,
0,
&msg,
NULL,
smi_msg,
recv_msg,
1);
......@@ -435,6 +450,7 @@ static int ipmi_heartbeat(void)
(struct ipmi_addr *) &addr,
0,
&msg,
NULL,
&heartbeat_smi_msg,
&heartbeat_recv_msg,
1);
......@@ -483,6 +499,7 @@ static void panic_halt_ipmi_heartbeat(void)
(struct ipmi_addr *) &addr,
0,
&msg,
NULL,
&panic_halt_heartbeat_smi_msg,
&panic_halt_heartbeat_recv_msg,
1);
......@@ -903,6 +920,7 @@ static void ipmi_smi_gone(int if_num)
static struct ipmi_smi_watcher smi_watcher =
{
.owner = THIS_MODULE,
.new_smi = ipmi_new_smi,
.smi_gone = ipmi_smi_gone
};
......@@ -911,6 +929,9 @@ static int __init ipmi_wdog_init(void)
{
int rv;
printk(KERN_INFO "IPMI watchdog driver version "
IPMI_WATCHDOG_VERSION "\n");
if (strcmp(action, "reset") == 0) {
action_val = WDOG_TIMEOUT_RESET;
} else if (strcmp(action, "none") == 0) {
......@@ -999,14 +1020,10 @@ static int __init ipmi_wdog_init(void)
register_reboot_notifier(&wdog_reboot_notifier);
notifier_chain_register(&panic_notifier_list, &wdog_panic_notifier);
printk(KERN_INFO "IPMI watchdog by "
"Corey Minyard (minyard@mvista.com)\n");
return 0;
}
#ifdef MODULE
static void ipmi_unregister_watchdog(void)
static __exit void ipmi_unregister_watchdog(void)
{
int rv;
......@@ -1034,6 +1051,7 @@ static void ipmi_unregister_watchdog(void)
pointers to our buffers, we want to make sure they are done before
we release our memory. */
while (atomic_read(&set_timeout_tofree)) {
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
}
......@@ -1056,76 +1074,6 @@ static void __exit ipmi_wdog_exit(void)
ipmi_unregister_watchdog();
}
module_exit(ipmi_wdog_exit);
#else
static int __init ipmi_wdog_setup(char *str)
{
int val;
int rv;
char *option;
rv = get_option(&str, &val);
if (rv == 0)
return 1;
if (val > 0)
timeout = val;
if (rv == 1)
return 1;
rv = get_option(&str, &val);
if (rv == 0)
return 1;
if (val >= 0)
pretimeout = val;
if (rv == 1)
return 1;
while ((option = strsep(&str, ",")) != NULL) {
if (strcmp(option, "reset") == 0) {
action = "reset";
}
else if (strcmp(option, "none") == 0) {
action = "none";
}
else if (strcmp(option, "power_cycle") == 0) {
action = "power_cycle";
}
else if (strcmp(option, "power_off") == 0) {
action = "power_off";
}
else if (strcmp(option, "pre_none") == 0) {
preaction = "pre_none";
}
else if (strcmp(option, "pre_smi") == 0) {
preaction = "pre_smi";
}
#ifdef HAVE_NMI_HANDLER
else if (strcmp(option, "pre_nmi") == 0) {
preaction = "pre_nmi";
}
#endif
else if (strcmp(option, "pre_int") == 0) {
preaction = "pre_int";
}
else if (strcmp(option, "start_now") == 0) {
start_now = 1;
}
else if (strcmp(option, "preop_none") == 0) {
preop = "preop_none";
}
else if (strcmp(option, "preop_panic") == 0) {
preop = "preop_panic";
}
else if (strcmp(option, "preop_give_data") == 0) {
preop = "preop_give_data";
} else {
printk("Unknown IPMI watchdog option: '%s'\n", option);
}
}
return 1;
}
__setup("ipmi_wdog=", ipmi_wdog_setup);
#endif
EXPORT_SYMBOL(ipmi_delayed_shutdown);
......
......@@ -109,6 +109,35 @@ struct ipmi_ipmb_addr
unsigned char lun;
};
/*
* A LAN Address. This is an address to/from a LAN interface bridged
* by the BMC, not an address actually out on the LAN.
*
* A conscious decision was made here to deviate slightly from the IPMI
* spec. We do not use rqSWID and rsSWID like it shows in the
* message. Instead, we use remote_SWID and local_SWID. This means
* that any message (a request or response) from another device will
* always have exactly the same address. If you didn't do this,
* requests and responses from the same device would have different
* addresses, and that's not too cool.
*
* In this address, the remote_SWID is always the SWID the remote
* message came from, or the SWID we are sending the message to.
* local_SWID is always our SWID. Note that having our SWID in the
* message is a little weird, but this is required.
*/
#define IPMI_LAN_ADDR_TYPE 0x04
struct ipmi_lan_addr
{
int addr_type;
short channel;
unsigned char privilege;
unsigned char session_handle;
unsigned char remote_SWID;
unsigned char local_SWID;
unsigned char lun;
};
/*
* Channel for talking directly with the BMC. When using this
......@@ -145,10 +174,20 @@ struct ipmi_msg
* Receive types for messages coming from the receive interface. This
* is used for the receive in-kernel interface and in the receive
* IOCTL.
*
* The "IPMI_RESPONSE_RESPONSE_TYPE" is a little strange sounding, but
* it allows you to get the message results when you send a response
* message.
*/
#define IPMI_RESPONSE_RECV_TYPE 1 /* A response to a command */
#define IPMI_ASYNC_EVENT_RECV_TYPE 2 /* Something from the event queue */
#define IPMI_CMD_RECV_TYPE 3 /* A command from somewhere else */
#define IPMI_RESPONSE_RESPONSE_TYPE 4 /* The response for
a sent response, giving any
error status for sending the
response. When you send a
response message, this will
be returned. */
/* Note that async events and received commands do not have a completion
code as the first byte of the incoming data, unlike a response. */
......@@ -160,6 +199,7 @@ struct ipmi_msg
* The in-kernel interface.
*/
#include <linux/list.h>
#include <linux/module.h>
/* Opaque type for a IPMI message user. One of these is needed to
send and receive messages. */
......@@ -185,6 +225,12 @@ struct ipmi_recv_msg
long msgid;
struct ipmi_msg msg;
/* The user_msg_data is the data supplied when a message was
sent, if this is a response to a sent message. If this is
not a response to a sent message, then user_msg_data will
be NULL. */
void *user_msg_data;
/* Call this when done with the message. It will presumably free
the message and do any other necessary cleanup. */
void (*done)(struct ipmi_recv_msg *msg);
......@@ -206,9 +252,10 @@ struct ipmi_user_hndl
/* Routine type to call when a message needs to be routed to
the upper layer. This will be called with some locks held,
the only IPMI routines that can be called are ipmi_request
and the alloc/free operations. */
and the alloc/free operations. The handler_data is the
variable supplied when the receive handler was registered. */
void (*ipmi_recv_hndl)(struct ipmi_recv_msg *msg,
void *handler_data);
void *user_msg_data);
/* Called when the interface detects a watchdog pre-timeout. If
this is NULL, it will be ignored for the user. */
......@@ -221,7 +268,12 @@ int ipmi_create_user(unsigned int if_num,
void *handler_data,
ipmi_user_t *user);
/* Destroy the given user of the IPMI layer. */
/* Destroy the given user of the IPMI layer. Note that after this
function returns, the system is guaranteed to not call any
callbacks for the user. Thus as long as you destroy all the users
before you unload a module, you will be safe. And if you destroy
the users before you destroy the callback structures, it should be
safe, too. */
int ipmi_destroy_user(ipmi_user_t user);
/* Get the IPMI version of the BMC we are talking to. */
......@@ -253,13 +305,43 @@ unsigned char ipmi_get_my_LUN(ipmi_user_t user);
* in the msgid field of the received command. If the priority is >
* 0, the message will go into a high-priority queue and be sent
* first. Otherwise, it goes into a normal-priority queue.
* The user_msg_data field will be returned in any response to this
* message.
*
* Note that if you send a response (with the netfn lower bit set),
* you *will* get back a SEND_MSG response telling you what happened
* when the response was sent. You will not get back a response to
* the message itself.
*/
int ipmi_request(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
int priority);
/*
* Like ipmi_request, but lets you specify the number of retries and
* the retry time. The retries is the number of times the message
* will be resent if no reply is received. If set to -1, the default
* value will be used. The retry time is the time in milliseconds
* between retries. If set to zero, the default value will be
* used.
*
* Don't use this unless you *really* have to. It's primarily for the
* IPMI over LAN converter; since the LAN stuff does its own retries,
* it makes no sense to do it here. However, this can be used if you
* have unusual requirements.
*/
int ipmi_request_settime(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
int priority,
int max_retries,
unsigned int retry_time_ms);
/*
* Like ipmi_request, but lets you specify the slave return address.
*/
......@@ -267,6 +349,7 @@ int ipmi_request_with_source(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
int priority,
unsigned char source_address,
unsigned char source_lun);
......@@ -284,6 +367,7 @@ int ipmi_request_supply_msgs(ipmi_user_t user,
struct ipmi_addr *addr,
long msgid,
struct ipmi_msg *msg,
void *user_msg_data,
void *supplied_smi,
struct ipmi_recv_msg *supplied_recv,
int priority);
......@@ -331,6 +415,10 @@ struct ipmi_smi_watcher
{
struct list_head link;
/* You must set the owner to the current module, if you are in
a module (generally just set it to "THIS_MODULE"). */
struct module *owner;
/* These two are called with read locks held for the interface
the watcher list. So you can add and remove users from the
IPMI interface, send messages, etc., but you cannot add
......@@ -422,6 +510,29 @@ struct ipmi_req
#define IPMICTL_SEND_COMMAND _IOR(IPMI_IOC_MAGIC, 13, \
struct ipmi_req)
/* Messages sent to the interface with timing parameters are this
format. */
struct ipmi_req_settime
{
struct ipmi_req req;
/* See ipmi_request_settime() above for details on these
values. */
int retries;
unsigned int retry_time_ms;
};
/*
* Send a message to the interface with timing parameters. Error values
* are:
* - EFAULT - an address supplied was invalid.
* - EINVAL - The address supplied was not valid, or the command
* was not allowed.
* - EMSGSIZE - The message was too large.
* - ENOMEM - Buffers could not be allocated for the command.
*/
#define IPMICTL_SEND_COMMAND_SETTIME _IOR(IPMI_IOC_MAGIC, 21, \
struct ipmi_req_settime)
/* Messages received from the interface are this format. */
struct ipmi_recv
{
......@@ -513,4 +624,18 @@ struct ipmi_cmdspec
#define IPMICTL_SET_MY_LUN_CMD _IOR(IPMI_IOC_MAGIC, 19, unsigned int)
#define IPMICTL_GET_MY_LUN_CMD _IOR(IPMI_IOC_MAGIC, 20, unsigned int)
/*
* Get/set the default timing values for an interface. You shouldn't
* generally mess with these.
*/
struct ipmi_timing_parms
{
int retries;
unsigned int retry_time_ms;
};
#define IPMICTL_SET_TIMING_PARMS_CMD _IOR(IPMI_IOC_MAGIC, 22, \
struct ipmi_timing_parms)
#define IPMICTL_GET_TIMING_PARMS_CMD _IOR(IPMI_IOC_MAGIC, 23, \
struct ipmi_timing_parms)
#endif /* __LINUX_IPMI_H */
......@@ -53,6 +53,7 @@
#define IPMI_SET_BMC_GLOBAL_ENABLES_CMD 0x2e
#define IPMI_GET_BMC_GLOBAL_ENABLES_CMD 0x2f
#define IPMI_READ_EVENT_MSG_BUFFER_CMD 0x35
#define IPMI_GET_CHANNEL_INFO_CMD 0x42
#define IPMI_NETFN_STORAGE_REQUEST 0x0a
#define IPMI_NETFN_STORAGE_RESPONSE 0x0b
......@@ -61,8 +62,39 @@
/* The default slave address */
#define IPMI_BMC_SLAVE_ADDR 0x20
#define IPMI_MAX_MSG_LENGTH 80
/* The BT interface on high-end HP systems supports up to 255 bytes in
* one transfer. Its "virtual" BMC supports some commands that are longer
* than 128 bytes. Use the full 256, plus NetFn/LUN, Cmd, cCode, plus
* some overhead. It would be nice to base this on the "BT Capabilities"
* but that's too hard to propagate to the rest of the driver. */
#define IPMI_MAX_MSG_LENGTH 272 /* multiple of 16 */
#define IPMI_CC_NO_ERROR 0
#define IPMI_CC_NO_ERROR 0x00
#define IPMI_NODE_BUSY_ERR 0xc0
#define IPMI_ERR_MSG_TRUNCATED 0xc6
#define IPMI_LOST_ARBITRATION_ERR 0x81
#define IPMI_ERR_UNSPECIFIED 0xff
#define IPMI_CHANNEL_PROTOCOL_IPMB 1
#define IPMI_CHANNEL_PROTOCOL_ICMB 2
#define IPMI_CHANNEL_PROTOCOL_SMBUS 4
#define IPMI_CHANNEL_PROTOCOL_KCS 5
#define IPMI_CHANNEL_PROTOCOL_SMIC 6
#define IPMI_CHANNEL_PROTOCOL_BT10 7
#define IPMI_CHANNEL_PROTOCOL_BT15 8
#define IPMI_CHANNEL_PROTOCOL_TMODE 9
#define IPMI_CHANNEL_MEDIUM_IPMB 1
#define IPMI_CHANNEL_MEDIUM_ICMB10 2
#define IPMI_CHANNEL_MEDIUM_ICMB09 3
#define IPMI_CHANNEL_MEDIUM_8023LAN 4
#define IPMI_CHANNEL_MEDIUM_ASYNC 5
#define IPMI_CHANNEL_MEDIUM_OTHER_LAN 6
#define IPMI_CHANNEL_MEDIUM_PCI_SMBUS 7
#define IPMI_CHANNEL_MEDIUM_SMBUS1 8
#define IPMI_CHANNEL_MEDIUM_SMBUS2 9
#define IPMI_CHANNEL_MEDIUM_USB1 10
#define IPMI_CHANNEL_MEDIUM_USB2 11
#define IPMI_CHANNEL_MEDIUM_SYSINTF 12
#endif /* __LINUX_IPMI_MSGDEFS_H */
......@@ -35,6 +35,8 @@
#define __LINUX_IPMI_SMI_H
#include <linux/ipmi_msgdefs.h>
#include <linux/proc_fs.h>
#include <linux/module.h>
/* This file describes the interface for IPMI system management interface
drivers to bind into the IPMI message handler. */
......@@ -48,7 +50,7 @@ typedef struct ipmi_smi *ipmi_smi_t;
* been received, it will report this same data structure back up to
* the upper layer. If an error occurs, it should fill in the
* response with an error code in the completion code location. When
* asyncronous data is received, one of these is allocated, the
* asynchronous data is received, one of these is allocated, the
* data_size is set to zero and the response holds the data from the
* get message or get event command that the interface initiated.
Note that it is the interface's responsibility to detect
......@@ -62,9 +64,6 @@ struct ipmi_smi_msg
long msgid;
void *user_data;
/* If 0, add to the end of the queue. If 1, add to the beginning. */
int prio;
int data_size;
unsigned char data[IPMI_MAX_MSG_LENGTH];
......@@ -134,4 +133,11 @@ static inline void ipmi_free_smi_msg(struct ipmi_smi_msg *msg)
msg->done(msg);
}
/* Allow the lower layer to add things to the proc filesystem
directory for this interface. Note that the entry will
automatically be destroyed when the interface is destroyed. */
int ipmi_smi_add_proc_entry(ipmi_smi_t smi, char *name,
read_proc_t *read_proc, write_proc_t *write_proc,
void *data, struct module *owner);
#endif /* __LINUX_IPMI_SMI_H */