Commit 9d41ab4c authored by Anton Altaparmakov's avatar Anton Altaparmakov

Merge cantab.net:/home/src/bklinux-2.6

into cantab.net:/home/src/ntfs-2.6
parents 83afa5cb 3d29f7cb
......@@ -62,6 +62,8 @@ scsi_mid_low_api.txt
- info on API between SCSI layer and low level drivers
st.txt
- info on scsi tape driver
sym53c500_cs.txt
- info on PCMCIA driver for Symbios Logic 53c500 based adapters
sym53c8xx_2.txt
- info on second generation driver for sym53c8xx based adapters
tmscsim.txt
......
The sym53c500_cs driver originated as an add-on to David Hinds' pcmcia-cs
package, and was written by Tom Corner (tcorner@via.at). A rewrite was
long overdue, and the current version addresses the following concerns:
(1) the extensive kernel changes between 2.4 and 2.6, and
(2) the deprecation of PCMCIA support maintained outside the kernel.
All the USE_BIOS code has been ripped out. It was never used, and could
not have worked anyway. The USE_DMA code is likewise gone. Many thanks
to YOKOTA Hiroshi (nsp_cs driver) and David Hinds (qlogic_cs driver) for
the code fragments I shamelessly adapted for this work. Thanks also to
Christoph Hellwig for his patient tutelage while I stumbled about.
The Symbios Logic 53c500 chip was used in the "newer" (circa 1997) version
of the New Media Bus Toaster PCMCIA SCSI controller. Presumably there are
other products using this chip, but I've never laid eyes (much less hands)
on one.
Through the years, there have been a number of downloads of the pcmcia-cs
version of this driver, and I guess it worked for those users. It worked
for Tom Corner, and it works for me. Your mileage will probably vary.
--Bob Tracy (rct@frus.com)
......@@ -168,10 +168,8 @@ S: Supported
AACRAID SCSI RAID DRIVER
P: Adaptec OEM Raid Solutions
M: linux-aacraid-devel@dell.com
L: linux-aacraid-devel@dell.com
L: linux-aacraid-announce@dell.com
W: http://domsch.com/linux
L: linux-scsi@vger.kernel.org
W: http://linux.dell.com/storage.shtml
S: Supported
ACPI
......@@ -974,6 +972,11 @@ M: langa2@kph.uni-mainz.de
W: http://www.uni-mainz.de/~langm000/linux.html
S: Maintained
IBM Power Linux RAID adapter
P: Brian King
M: brking@us.ibm.com
S: Supported
IBM ServeRAID RAID DRIVER
P: Jack Hammer
P: Dave Jeffery
......@@ -1801,6 +1804,12 @@ L: selinux@tycho.nsa.gov (general discussion)
W: http://www.nsa.gov/selinux
S: Supported
SERIAL ATA (SATA) SUBSYSTEM:
P: Jeff Garzik
M: jgarzik@pobox.com
L: linux-ide@vger.kernel.org
S: Supported
SGI SN-IA64 (Altix) SERIAL CONSOLE DRIVER
P: Pat Gefre
M: pfg@sgi.com
......
......@@ -70,6 +70,10 @@ void __init config_sun3x(void)
mach_get_model = sun3_get_model;
mach_get_hardware_list = sun3x_get_hardware_list;
#if defined(CONFIG_DUMMY_CONSOLE)
conswitchp = &dummy_con;
#endif
sun3_intreg = (unsigned char *)SUN3X_INTREG;
/* only the serial console is known to work anyway... */
......
......@@ -51,25 +51,34 @@ choice
config PA7000
bool "PA7000/PA7100"
---help---
This is the processor type of your CPU. This information is used for
optimizing purposes. In order to compile a kernel that can run on
all PA CPUs (albeit not optimally fast), you can specify "PA7000"
here.
This is the processor type of your CPU. This information is
used for optimizing purposes. In order to compile a kernel
that can run on all 32-bit PA CPUs (albeit not optimally fast),
you can specify "PA7000" here.
Specifying "PA8000" here will allow you to select a 64-bit kernel
which is required on some machines.
config PA7100LC
bool "PA7100LC/PA7300LC"
bool "PA7100LC"
help
Select this option for a 7100LC or 7300LC processor, as used
in the 712, 715/Mirage, A180, B132, C160L and some other machines.
Select this option for the PCX-L processor, as used in the
712, 715/64, 715/80, 715/100, 715/100XC, 725/100, 743, 748,
D200, D210, D300, D310 and E-class
config PA7200
bool "PA7200"
help
Select this option for the PCX-T' processor, as used in C110, D100
and similar machines.
Select this option for the PCX-T' processor, as used in the
C100, C110, J100, J110, J210XC, D250, D260, D350, D360,
K100, K200, K210, K220, K400, K410 and K420
config PA7300LC
bool "PA7300LC"
help
Select this option for the PCX-L2 processor, as used in the
744, A180, B132L, B160L, B180L, C132L, C160L, C180L,
D220, D230, D320 and D330.
config PA8X00
bool "PA8000 and up"
......@@ -81,14 +90,16 @@ endchoice
# Define implied options from the CPU selection here
config PA20
bool
def_bool y
depends on PA8X00
default y
config PA11
bool
depends on PA7000 || PA7100LC || PA7200
default y
def_bool y
depends on PA7000 || PA7100LC || PA7200 || PA7300LC
config PREFETCH
def_bool y
depends on PA8X00
config PARISC64
bool "64-bit kernel"
......@@ -106,18 +117,6 @@ config PARISC64
config 64BIT
def_bool PARISC64
config PDC_NARROW
bool "32-bit firmware"
depends on PARISC64
help
This option will enable owners of C160, C180, C200, C240, C360, J280,
J282, J2240 and some D/K/R class to run a 64bit kernel with their
32bit PDC firmware.
Nobody should try this option unless they know what they are doing.
If unsure, say N.
config SMP
bool "Symmetric multi-processing support"
---help---
......
......@@ -16,7 +16,7 @@
# Modified for PA-RISC Linux by Paul Lahaie, Alex deVries,
# Mike Shaver, Helge Deller and Martin K. Petersen
#
NM = sh arch/parisc/nm
NM = sh $(srctree)/arch/parisc/nm
ifdef CONFIG_PARISC64
CROSS_COMPILE := hppa64-linux-
UTS_MACHINE := parisc64
......@@ -48,6 +48,7 @@ cflags-y += -ffunction-sections
cflags-$(CONFIG_PA7100) += -march=1.1 -mschedule=7100
cflags-$(CONFIG_PA7200) += -march=1.1 -mschedule=7200
cflags-$(CONFIG_PA7100LC) += -march=1.1 -mschedule=7100LC
cflags-$(CONFIG_PA7300LC) += -march=1.1 -mschedule=7300
cflags-$(CONFIG_PA8X00) += -march=2.0 -mschedule=8000
head-y := arch/parisc/kernel/head.o
......
......@@ -141,6 +141,12 @@ CONFIG_CHR_DEV_SG=y
# CONFIG_SCSI_CONSTANTS is not set
# CONFIG_SCSI_LOGGING is not set
#
# SCSI Transport Attributes
#
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
#
# SCSI low-level drivers
#
......@@ -179,10 +185,6 @@ CONFIG_MD_RAID5=y
# I2O device support
#
#
# Macintosh device drivers
#
#
# Networking support
#
......@@ -206,7 +208,6 @@ CONFIG_IP_PNP_BOOTP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
CONFIG_INET_ECN=y
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
......@@ -290,6 +291,8 @@ CONFIG_NET_RADIO=y
# Bluetooth support
#
# CONFIG_BT is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
......@@ -335,6 +338,7 @@ CONFIG_SERIO_GSCPS2=y
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ATKBD is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_HIL_OLD is not set
......@@ -342,6 +346,7 @@ CONFIG_INPUT_KEYBOARD=y
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_HIL is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
......@@ -385,11 +390,6 @@ CONFIG_PRINTER=y
# CONFIG_LP_CONSOLE is not set
# CONFIG_PPDEV is not set
# CONFIG_TIPAR is not set
#
# Mice
#
# CONFIG_BUSMOUSE is not set
# CONFIG_QIC02_TAPE is not set
#
......@@ -401,7 +401,6 @@ CONFIG_PRINTER=y
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_NVRAM is not set
CONFIG_GEN_RTC=y
# CONFIG_GEN_RTC_X is not set
# CONFIG_DTLK is not set
......@@ -635,6 +634,7 @@ CONFIG_CRYPTO=y
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_DEFLATE is not set
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_TEST is not set
#
......
......@@ -27,7 +27,7 @@ CONFIG_HOTPLUG=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_EMBEDDED=y
# CONFIG_KALLSYMS is not set
CONFIG_KALLSYMS=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_IOSCHED_NOOP=y
......@@ -78,6 +78,7 @@ CONFIG_CHASSIS_LCD_LED=y
# PCMCIA/CardBus support
#
CONFIG_PCMCIA=m
CONFIG_PCMCIA_DEBUG=y
CONFIG_YENTA=m
CONFIG_CARDBUS=y
# CONFIG_I82092 is not set
......@@ -129,6 +130,7 @@ CONFIG_BLK_DEV_UMEM=m
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_CARMEL is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=6144
CONFIG_BLK_DEV_INITRD=y
......@@ -162,6 +164,12 @@ CONFIG_SCSI_REPORT_LUNS=y
# CONFIG_SCSI_CONSTANTS is not set
# CONFIG_SCSI_LOGGING is not set
#
# SCSI Transport Attributes
#
CONFIG_SCSI_SPI_ATTRS=y
# CONFIG_SCSI_FC_ATTRS is not set
#
# SCSI low-level drivers
#
......@@ -242,10 +250,6 @@ CONFIG_FUSION_CTL=m
# I2O device support
#
#
# Macintosh device drivers
#
#
# Networking support
#
......@@ -270,7 +274,6 @@ CONFIG_IP_PNP_BOOTP=y
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
# CONFIG_ARPD is not set
# CONFIG_INET_ECN is not set
# CONFIG_SYN_COOKIES is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
......@@ -348,7 +351,6 @@ CONFIG_XFRM_USER=m
#
# SCTP Configuration (EXPERIMENTAL)
#
CONFIG_IPV6_SCTP__=y
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_VLAN_8021Q is not set
......@@ -504,6 +506,11 @@ CONFIG_PCI_HERMES=m
CONFIG_PCMCIA_HERMES=m
CONFIG_AIRO_CS=m
# CONFIG_PCMCIA_WL3501 is not set
#
# Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support
#
# CONFIG_PRISM54 is not set
CONFIG_NET_WIRELESS=y
#
......@@ -512,6 +519,7 @@ CONFIG_NET_WIRELESS=y
# CONFIG_TR is not set
# CONFIG_NET_FC is not set
# CONFIG_SHAPER is not set
# CONFIG_NETCONSOLE is not set
#
# Wan interfaces
......@@ -545,6 +553,8 @@ CONFIG_PCMCIA_XIRC2PS=m
# Bluetooth support
#
# CONFIG_BT is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
......@@ -617,11 +627,6 @@ CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
#
# Mice
#
# CONFIG_BUSMOUSE is not set
# CONFIG_QIC02_TAPE is not set
#
......@@ -633,7 +638,6 @@ CONFIG_UNIX98_PTYS=y
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_NVRAM is not set
CONFIG_GEN_RTC=y
CONFIG_GEN_RTC_X=y
# CONFIG_DTLK is not set
......@@ -682,7 +686,6 @@ CONFIG_MAX_RAW_DEVS=256
# Console display driver support
#
# CONFIG_MDA_CONSOLE is not set
# CONFIG_STI_CONSOLE is not set
CONFIG_DUMMY_CONSOLE_COLUMNS=160
CONFIG_DUMMY_CONSOLE_ROWS=64
CONFIG_DUMMY_CONSOLE=y
......@@ -788,7 +791,8 @@ CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=m
CONFIG_SUNRPC=m
# CONFIG_SUNRPC_GSS is not set
CONFIG_SUNRPC_GSS=m
CONFIG_RPCSEC_GSS_KRB5=m
CONFIG_SMB_FS=m
CONFIG_SMB_NLS_DEFAULT=y
CONFIG_SMB_NLS_REMOTE="cp437"
......@@ -887,6 +891,7 @@ CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST6=m
# CONFIG_CRYPTO_ARC4 is not set
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
CONFIG_CRYPTO_TEST=m
#
......
......@@ -121,6 +121,7 @@ CONFIG_PARPORT_GSC=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=y
# CONFIG_BLK_DEV_NBD is not set
CONFIG_BLK_DEV_CARMEL=y
# CONFIG_BLK_DEV_RAM is not set
#
......@@ -152,6 +153,12 @@ CONFIG_CHR_DEV_SG=y
# CONFIG_SCSI_CONSTANTS is not set
# CONFIG_SCSI_LOGGING is not set
#
# SCSI Transport Attributes
#
CONFIG_SCSI_SPI_ATTRS=y
# CONFIG_SCSI_FC_ATTRS is not set
#
# SCSI low-level drivers
#
......@@ -244,10 +251,6 @@ CONFIG_MD_RAID5=y
#
# CONFIG_I2O is not set
#
# Macintosh device drivers
#
#
# Networking support
#
......@@ -271,7 +274,6 @@ CONFIG_IP_PNP_BOOTP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
CONFIG_INET_ECN=y
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
......@@ -376,6 +378,10 @@ CONFIG_NET_RADIO=y
#
# CONFIG_AIRO is not set
# CONFIG_HERMES is not set
#
# Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support
#
CONFIG_NET_WIRELESS=y
#
......@@ -403,6 +409,8 @@ CONFIG_NET_WIRELESS=y
# Bluetooth support
#
# CONFIG_BT is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
......@@ -449,6 +457,7 @@ CONFIG_SERIO_GSCPS2=y
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ATKBD is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_HIL_OLD is not set
......@@ -459,6 +468,7 @@ CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_INPORT is not set
# CONFIG_MOUSE_LOGIBM is not set
# CONFIG_MOUSE_PC110PAD is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_HIL is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
......@@ -502,11 +512,6 @@ CONFIG_PRINTER=y
# CONFIG_LP_CONSOLE is not set
# CONFIG_PPDEV is not set
# CONFIG_TIPAR is not set
#
# Mice
#
# CONFIG_BUSMOUSE is not set
# CONFIG_QIC02_TAPE is not set
#
......@@ -518,7 +523,6 @@ CONFIG_PRINTER=y
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_NVRAM is not set
CONFIG_GEN_RTC=y
# CONFIG_GEN_RTC_X is not set
# CONFIG_DTLK is not set
......@@ -768,6 +772,7 @@ CONFIG_CRYPTO=y
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_DEFLATE is not set
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_TEST is not set
#
......
......@@ -77,6 +77,7 @@ CONFIG_SUPERIO=y
# PCMCIA/CardBus support
#
CONFIG_PCMCIA=m
CONFIG_PCMCIA_DEBUG=y
CONFIG_YENTA=m
CONFIG_CARDBUS=y
# CONFIG_I82092 is not set
......@@ -128,6 +129,7 @@ CONFIG_BLK_DEV_UMEM=m
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_CARMEL is not set
# CONFIG_BLK_DEV_RAM is not set
#
......@@ -212,6 +214,12 @@ CONFIG_SCSI_REPORT_LUNS=y
# CONFIG_SCSI_CONSTANTS is not set
# CONFIG_SCSI_LOGGING is not set
#
# SCSI Transport Attributes
#
CONFIG_SCSI_SPI_ATTRS=y
# CONFIG_SCSI_FC_ATTRS is not set
#
# SCSI low-level drivers
#
......@@ -230,6 +238,7 @@ CONFIG_SCSI_ATA_PIIX=m
CONFIG_SCSI_SATA_PROMISE=m
CONFIG_SCSI_SATA_SIL=m
CONFIG_SCSI_SATA_VIA=m
# CONFIG_SCSI_SATA_VITESSE is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_CPQFCTS is not set
# CONFIG_SCSI_DMX3191D is not set
......@@ -303,10 +312,6 @@ CONFIG_FUSION_CTL=m
#
# CONFIG_I2O is not set
#
# Macintosh device drivers
#
#
# Networking support
#
......@@ -331,7 +336,6 @@ CONFIG_IP_PNP_BOOTP=y
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
# CONFIG_ARPD is not set
# CONFIG_INET_ECN is not set
# CONFIG_SYN_COOKIES is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
......@@ -409,7 +413,6 @@ CONFIG_XFRM_USER=m
#
# SCTP Configuration (EXPERIMENTAL)
#
CONFIG_IPV6_SCTP__=y
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_VLAN_8021Q is not set
......@@ -543,6 +546,7 @@ CONFIG_PPPOE=m
# CONFIG_NET_FC is not set
# CONFIG_RCPCI is not set
# CONFIG_SHAPER is not set
# CONFIG_NETCONSOLE is not set
#
# Wan interfaces
......@@ -576,6 +580,8 @@ CONFIG_PCMCIA_AXNET=m
# Bluetooth support
#
# CONFIG_BT is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
......@@ -619,6 +625,7 @@ CONFIG_SERIO_SERPORT=m
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ATKBD is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_HIL_OLD is not set
......@@ -626,6 +633,7 @@ CONFIG_INPUT_KEYBOARD=y
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_HIL is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
......@@ -663,11 +671,6 @@ CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
#
# Mice
#
# CONFIG_BUSMOUSE is not set
# CONFIG_QIC02_TAPE is not set
#
......@@ -679,7 +682,6 @@ CONFIG_LEGACY_PTY_COUNT=256
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
# CONFIG_NVRAM is not set
CONFIG_GEN_RTC=y
CONFIG_GEN_RTC_X=y
# CONFIG_DTLK is not set
......@@ -831,7 +833,9 @@ CONFIG_USB_AIPTEK=m
CONFIG_USB_WACOM=m
CONFIG_USB_KBTAB=m
# CONFIG_USB_POWERMATE is not set
# CONFIG_USB_MTOUCH is not set
# CONFIG_USB_XPAD is not set
# CONFIG_USB_ATI_REMOTE is not set
#
# USB Imaging devices
......@@ -968,7 +972,7 @@ CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=y
CONFIG_SUNRPC=y
# CONFIG_SUNRPC_GSS is not set
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_SMB_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
......@@ -1065,6 +1069,7 @@ CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST6=m
# CONFIG_CRYPTO_ARC4 is not set
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
CONFIG_CRYPTO_TEST=m
#
......
......@@ -14,7 +14,7 @@ obj-y := cache.o pacache.o setup.o traps.o time.o irq.o \
pa7300lc.o syscall.o entry.o sys_parisc.o firmware.o \
ptrace.o hardware.o inventory.o drivers.o semaphore.o \
signal.o hpmc.o real2.o parisc_ksyms.o unaligned.o \
process.o processor.o pdc_cons.o pdc_chassis.o
process.o processor.o pdc_cons.o pdc_chassis.o unwind.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_PA11) += pci-dma.o
......
......@@ -32,6 +32,7 @@
#include <linux/thread_info.h>
#include <linux/version.h>
#include <linux/ptrace.h>
#include <asm/pgtable.h>
#include <asm/ptrace.h>
#include <asm/processor.h>
......@@ -276,5 +277,19 @@ int main(void)
DEFINE(PA_BLOCKSTEP_BIT, 31-PT_BLOCKSTEP_BIT);
DEFINE(PA_SINGLESTEP_BIT, 31-PT_SINGLESTEP_BIT);
BLANK();
DEFINE(ASM_PMD_SHIFT, PMD_SHIFT);
DEFINE(ASM_PGDIR_SHIFT, PGDIR_SHIFT);
DEFINE(ASM_BITS_PER_PGD, BITS_PER_PGD);
DEFINE(ASM_BITS_PER_PMD, BITS_PER_PMD);
DEFINE(ASM_BITS_PER_PTE, BITS_PER_PTE);
DEFINE(ASM_PGD_PMD_OFFSET, -(PAGE_SIZE << PGD_ORDER));
DEFINE(ASM_PMD_ENTRY, ((PAGE_OFFSET & PMD_MASK) >> PMD_SHIFT));
DEFINE(ASM_PGD_ENTRY, PAGE_OFFSET >> PGDIR_SHIFT);
DEFINE(ASM_PGD_ENTRY_SIZE, PGD_ENTRY_SIZE);
DEFINE(ASM_PMD_ENTRY_SIZE, PMD_ENTRY_SIZE);
DEFINE(ASM_PTE_ENTRY_SIZE, PTE_ENTRY_SIZE);
DEFINE(ASM_PT_INITIAL, PT_INITIAL);
DEFINE(ASM_PAGE_SIZE, PAGE_SIZE);
BLANK();
return 0;
}
......@@ -230,27 +230,22 @@ void disable_sr_hashing(void)
void __flush_dcache_page(struct page *page)
{
struct address_space *mapping = page_mapping(page);
struct mm_struct *mm = current->active_mm;
struct list_head *l;
flush_kernel_dcache_page(page_address(page));
if (!mapping)
return;
/* check shared list first if it's not empty...it's usually
* the shortest */
/* We have ensured in arch_get_unmapped_area() that all shared
* mappings are mapped at equivalent addresses, so we only need
* to flush one for them all to become coherent */
list_for_each(l, &mapping->i_mmap_shared) {
struct vm_area_struct *mpnt;
unsigned long off;
unsigned long off, addr;
mpnt = list_entry(l, struct vm_area_struct, shared);
/*
* If this VMA is not in our MM, we can ignore it.
*/
if (mpnt->vm_mm != mm)
continue;
if (page->index < mpnt->vm_pgoff)
continue;
......@@ -258,26 +253,35 @@ void __flush_dcache_page(struct page *page)
if (off >= (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT)
continue;
flush_cache_page(mpnt, mpnt->vm_start + (off << PAGE_SHIFT));
addr = mpnt->vm_start + (off << PAGE_SHIFT);
/* All user shared mappings should be equivalently mapped,
* so once we've flushed one we should be ok
*/
/* flush instructions produce non access tlb misses.
* On PA, we nullify these instructions rather than
* taking a page fault if the pte doesn't exist, so we
* have to find a congruent address with an existing
* translation */
if (!translation_exists(mpnt, addr))
continue;
__flush_cache_page(mpnt, addr);
/* If we find an address to flush, that will also
* bring all the private mappings up to date (see
* comment below) */
return;
}
/* then check private mapping list for read only shared mappings
* which are flagged by VM_MAYSHARE */
/* we have carefully arranged in arch_get_unmapped_area() that
* *any* mappings of a file are always congruently mapped (whether
* declared as MAP_PRIVATE or MAP_SHARED), so we only need
* to flush one address here too */
list_for_each(l, &mapping->i_mmap) {
struct vm_area_struct *mpnt;
unsigned long off;
unsigned long off, addr;
mpnt = list_entry(l, struct vm_area_struct, shared);
if (mpnt->vm_mm != mm || !(mpnt->vm_flags & VM_MAYSHARE))
continue;
if (page->index < mpnt->vm_pgoff)
continue;
......@@ -285,12 +289,17 @@ void __flush_dcache_page(struct page *page)
if (off >= (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT)
continue;
flush_cache_page(mpnt, mpnt->vm_start + (off << PAGE_SHIFT));
addr = mpnt->vm_start + (off << PAGE_SHIFT);
/* All user shared mappings should be equivalently mapped,
* so once we've flushed one we should be ok
*/
break;
/* This is just for speed. If the page translation isn't
* there there's no point exciting the nadtlb handler into
* a nullification frenzy */
if(!translation_exists(mpnt, addr))
continue;
__flush_cache_page(mpnt, addr);
return;
}
}
EXPORT_SYMBOL(__flush_dcache_page);
......
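The comments in the __flush_dcache_page() hunk above lean on arch_get_unmapped_area() placing every mapping of a file at equivalent (congruent) addresses, so flushing through a single user address is enough. A minimal sketch of what "equivalently mapped" means, assuming the usual parisc 4 MB data-cache alias boundary (an assumption on my part; the boundary is not stated in this hunk):

/* Two virtual addresses hit the same cache lines when they agree
 * modulo the alias boundary; flushing through one of them therefore
 * brings the data seen through all of them up to date. */
#define ALIAS_BOUNDARY 0x00400000UL    /* assumed 4 MB, see above */

static int equivalently_mapped(unsigned long a, unsigned long b)
{
        return ((a ^ b) & (ALIAS_BOUNDARY - 1)) == 0;
}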
......@@ -10,6 +10,7 @@
* Copyright 1999 SuSE GmbH Nuernberg (Philipp Rumpf, prumpf@tux.org)
* Copyright 1999 The Puffin Group, (Alex deVries, David Kennedy)
* Copyright 2003 Grant Grundler <grundler parisc-linux org>
* Copyright 2003,2004 Ryan Bradetich <rbrad@parisc-linux.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
......@@ -71,6 +72,15 @@ static spinlock_t pdc_lock = SPIN_LOCK_UNLOCKED;
static unsigned long pdc_result[32] __attribute__ ((aligned (8)));
static unsigned long pdc_result2[32] __attribute__ ((aligned (8)));
#ifdef __LP64__
#define WIDE_FIRMWARE 0x1
#define NARROW_FIRMWARE 0x2
/* Firmware needs to be initially set to narrow to determine the
* actual firmware width. */
int parisc_narrow_firmware = 1;
#endif
/* on all currently-supported platforms, IODC I/O calls are always
* 32-bit calls, and MEM_PDC calls are always the same width as the OS.
* This means Cxxx boxes can't run wide kernels right now. -PB
......@@ -87,11 +97,11 @@ long real64_call(unsigned long function, ...);
#endif
long real32_call(unsigned long function, ...);
#if defined(__LP64__) && ! defined(CONFIG_PDC_NARROW)
#define MEM_PDC (unsigned long)(PAGE0->mem_pdc_hi) << 32 | PAGE0->mem_pdc
# define mem_pdc_call(args...) real64_call(MEM_PDC, args)
#ifdef __LP64__
# define MEM_PDC (unsigned long)(PAGE0->mem_pdc_hi) << 32 | PAGE0->mem_pdc
# define mem_pdc_call(args...) unlikely(parisc_narrow_firmware) ? real32_call(MEM_PDC, args) : real64_call(MEM_PDC, args)
#else
#define MEM_PDC (unsigned long)PAGE0->mem_pdc
# define MEM_PDC (unsigned long)PAGE0->mem_pdc
# define mem_pdc_call(args...) real32_call(MEM_PDC, args)
#endif
......@@ -105,12 +115,14 @@ long real32_call(unsigned long function, ...);
*/
static unsigned long f_extend(unsigned long address)
{
#ifdef CONFIG_PDC_NARROW
#ifdef __LP64__
if(unlikely(parisc_narrow_firmware)) {
if((address & 0xff000000) == 0xf0000000)
return 0xf0f0f0f000000000 | (u32)address;
if((address & 0xf0000000) == 0xf0000000)
return 0xffffffff00000000 | (u32)address;
}
#endif
return address;
}
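A hand-check of the widening rules above, valid only when parisc_narrow_firmware is set; this sketch is mine (it would have to live in firmware.c to reach the static f_extend()) and is not part of the patch:

static void __init f_extend_selftest(void)
{
        /* PDC I/O space, 0xf0xxxxxx: upper half becomes 0xf0f0f0f0 */
        BUG_ON(f_extend(0xf0001000UL) != 0xf0f0f0f0f0001000UL);
        /* other 0xfxxxxxxx addresses: upper half becomes 0xffffffff */
        BUG_ON(f_extend(0xf4001000UL) != 0xfffffffff4001000UL);
        /* everything else passes through unchanged */
        BUG_ON(f_extend(0x10001000UL) != 0x0000000010001000UL);
}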
......@@ -125,11 +137,34 @@ static unsigned long f_extend(unsigned long address)
*/
static void convert_to_wide(unsigned long *addr)
{
#ifdef CONFIG_PDC_NARROW
#ifdef __LP64__
int i;
unsigned *p = (unsigned int *)addr;
unsigned int *p = (unsigned int *)addr;
if(unlikely(parisc_narrow_firmware)) {
for(i = 31; i >= 0; --i)
addr[i] = p[i];
}
#endif
}
/**
* set_firmware_width - Determine if the firmware is wide or narrow.
*
* This function must be called before any pdc_* function that uses the convert_to_wide
* function.
*/
void __init set_firmware_width(void)
{
#ifdef __LP64__
int retval;
spin_lock_irq(&pdc_lock);
retval = mem_pdc_call(PDC_MODEL, PDC_MODEL_CAPABILITIES, __pa(pdc_result), 0);
convert_to_wide(pdc_result);
if(pdc_result[0] != NARROW_FIRMWARE)
parisc_narrow_firmware = 0;
spin_unlock_irq(&pdc_lock);
#endif
}
......@@ -582,6 +617,7 @@ int pdc_get_initiator(struct hardware_path *hwpath, unsigned char *scsi_id,
case 10: *period = 1000; break;
case 20: *period = 500; break;
case 40: *period = 250; break;
case 80: *period = 125; break;
default: /* Do nothing */ break;
}
......
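The added case simply continues the arithmetic of the existing table: each reported rate maps to 10000 divided by that rate (1000, 500, 250, and now 125). A hypothetical equivalent of the switch, for illustration only; the units of *period are whatever callers of pdc_get_initiator() expect, which I have not verified here:

static unsigned int pdc_rate_to_period(unsigned int rate)
{
        switch (rate) {
        case 10: case 20: case 40: case 80:
                return 10000 / rate;    /* 1000, 500, 250, 125 */
        default:
                return 0;               /* the original leaves *period untouched */
        }
}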
......@@ -1273,8 +1273,8 @@ static struct hp_cpu_type_mask {
{ 0x05e6, 0x0ffe, pcxw2 }, /* 0x05e6 - 0x05e7 */
{ 0x05e8, 0x0ff8, pcxw2 }, /* 0x05e8 - 0x05ef */
{ 0x05f0, 0x0ff0, pcxw2 }, /* 0x05f0 - 0x05ff */
{ 0x0600, 0x0ff0, pcxl }, /* 0x0600 - 0x060f */
{ 0x0610, 0x0ff0, pcxl }, /* 0x0610 - 0x061f */
{ 0x0600, 0x0fe0, pcxl }, /* 0x0600 - 0x061f */
{ 0x0880, 0x0ff0, mako }, /* 0x0880 - 0x088f */
{ 0x0000, 0x0000, pcx } /* terminate table */
};
......@@ -1289,7 +1289,8 @@ char *cpu_name_version[][2] = {
[pcxu_] { "PA8200 (PCX-U+)", "2.0" },
[pcxw] { "PA8500 (PCX-W)", "2.0" },
[pcxw_] { "PA8600 (PCX-W+)", "2.0" },
[pcxw2] { "PA8700 (PCX-W2)", "2.0" }
[pcxw2] { "PA8700 (PCX-W2)", "2.0" },
[mako] { "PA8800 (MAKO)", "2.0" }
};
const char * __init
......
......@@ -82,19 +82,21 @@ $bss_loop:
ldo R%PA(swapper_pg_dir)(%r4),%r4
mtctl %r4,%cr24 /* Initialize kernel root pointer */
mtctl %r4,%cr25 /* Initialize user root pointer */
ldi ASM_PT_INITIAL,%r1
ldo ASM_PGD_ENTRY*ASM_PGD_ENTRY_SIZE(%r4),%r4
1:
stw %r3,0(%r4)
ldo ASM_PAGE_SIZE(%r3),%r3
addib,> -1,%r1,1b
ldo ASM_PGD_ENTRY_SIZE(%r4),%r4
#if (__PAGE_OFFSET != 0x10000000UL)
Error! Code below (the next two stw's) needs to be changed
#endif
stw %r3,0x100(%r4) /* Hardwired 0x1... kernel Vaddr start*/
ldo 0x1000(%r3),%r3
stw %r3,0x104(%r4)
ldo _PAGE_KERNEL(%r0),%r3 /* Hardwired 0 phys addr start */
ldil L%PA(pg0),%r1
ldo R%PA(pg0)(%r1),%r1
$pgt_fill_loop:
stwm %r3,4(%r1)
ldo 0x1000(%r3),%r3
bb,>= %r3,8,$pgt_fill_loop
stwm %r3,ASM_PTE_ENTRY_SIZE(%r1)
ldo ASM_PAGE_SIZE(%r3),%r3
bb,>= %r3,31-KERNEL_INITIAL_ORDER,$pgt_fill_loop
nop
......
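A C-level reading of the new 32-bit setup loop above (my paraphrase, not code from the patch): instead of two hard-wired stores, the loop installs ASM_PT_INITIAL consecutive pgd entries, starting at the entry that covers PAGE_OFFSET, each pointing at the next page of the initial page-table area pg0; the second loop then fills pg0 with kernel PTEs for the first physical pages.

/* pseudo-C sketch of the assembly; pg0_entry stands for %r3, which the
 * surrounding code (not shown here) loads with PA(pg0) plus the
 * page-table protection bits */
for (i = 0; i < ASM_PT_INITIAL; i++)
        swapper_pg_dir[ASM_PGD_ENTRY + i] = pg0_entry + i * ASM_PAGE_SIZE;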
......@@ -88,26 +88,25 @@ $bss_loop:
mtctl %r4,%cr24 /* Initialize kernel root pointer */
mtctl %r4,%cr25 /* Initialize user root pointer */
#if (__PAGE_OFFSET != 0x10000000UL)
Error! Code below (the next five std's) needs to be changed
#endif
std %r3,0x00(%r4) /* Hardwired 0x1... kernel Vaddr start*/
stw %r3,ASM_PGD_ENTRY*ASM_PGD_ENTRY_SIZE(%r4)
ldo _PAGE_TABLE(%r1),%r3
std %r3,0x400(%r5) /* Hardwired 0x1... kernel Vaddr start*/
ldo 0x1000(%r3),%r3
std %r3,0x408(%r5)
ldo 0x1000(%r3),%r3
std %r3,0x410(%r5)
ldo 0x1000(%r3),%r3
std %r3,0x418(%r5)
ldo ASM_PMD_ENTRY*ASM_PMD_ENTRY_SIZE(%r5),%r5
ldi ASM_PT_INITIAL,%r1
1:
stw %r3,0(%r5)
ldo ASM_PAGE_SIZE(%r3),%r3
addib,> -1,%r1,1b
ldo ASM_PMD_ENTRY_SIZE(%r5),%r5
ldo _PAGE_KERNEL(%r0),%r3 /* Hardwired 0 phys addr start */
ldil L%PA(pg0),%r1
ldo R%PA(pg0)(%r1),%r1
$pgt_fill_loop:
std,ma %r3,8(%r1)
ldo 0x1000(%r3),%r3
bb,>= %r3,8,$pgt_fill_loop
std,ma %r3,ASM_PTE_ENTRY_SIZE(%r1)
ldo ASM_PAGE_SIZE(%r3),%r3
bb,>= %r3,31-KERNEL_INITIAL_ORDER,$pgt_fill_loop
nop
/* And the RFI Target address too */
......@@ -169,7 +168,6 @@ common_stext:
tophys_r1 %r10
std %r11, TASK_PT_GR11(%r10)
#ifndef CONFIG_PDC_NARROW
/* Switch to wide mode; Superdome doesn't support narrow PDC
** calls.
*/
......@@ -179,7 +177,6 @@ common_stext:
bv (%rp)
ssm PSW_SM_W,%r0
2:
#endif /* CONFIG_PDC_NARROW */
/* Set Wide mode as the "Default" (eg for traps)
** First trap occurs *right* after (or part of) rfi for slave CPUs.
......
......@@ -52,11 +52,13 @@ union thread_union init_thread_union
__attribute__((aligned(128))) __attribute__((__section__(".data.init_task"))) =
{ INIT_THREAD_INFO(init_task) };
pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__ ((aligned(4096))) = { {0}, };
#ifdef __LP64__
unsigned long pmd0[PTRS_PER_PMD] __attribute__ ((aligned(4096))) = { 0, };
/* NOTE: This layout exactly conforms to the hybrid L2/L3 page table layout
* with the first pmd adjacent to the pgd and below it */
pmd_t pmd0[PTRS_PER_PMD] __attribute__ ((aligned(PAGE_SIZE))) = { {0}, };
#endif
unsigned long pg0[PT_INITIAL * PTRS_PER_PTE] __attribute__ ((aligned(4096))) = { 0, };
pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__ ((aligned(PAGE_SIZE))) = { {0}, };
pte_t pg0[PT_INITIAL * PTRS_PER_PTE] __attribute__ ((aligned(PAGE_SIZE))) = { {0}, };
/*
* Initial task structure.
......
......@@ -350,10 +350,6 @@ copy_user_page_asm:
.procend
#if (TMPALIAS_MAP_START >= 0x80000000UL)
Warning TMPALIAS_MAP_START changed. If > 2 Gb, code in pacache.S is bogus
#endif
/*
* NOTE: Code in clear_user_page has a hard coded dependency on the
* maximum alias boundary being 4 Mb. We've been assured by the
......@@ -490,6 +486,9 @@ clear_user_page_asm:
ldil L%(TMPALIAS_MAP_START),%r28
#ifdef __LP64__
#if (TMPALIAS_MAP_START >= 0x80000000)
depdi 0,31,32,%r28 /* clear any sign extension */
#endif
extrd,u %r26,56,32,%r26 /* convert phys addr to tlb insert format */
depd %r25,63,22,%r28 /* Form aliased virtual address 'to' */
depdi 0,63,12,%r28 /* Clear any offset bits */
......@@ -575,6 +574,95 @@ flush_kernel_dcache_page:
.procend
.export flush_user_dcache_page
flush_user_dcache_page:
.proc
.callinfo NO_CALLS
.entry
ldil L%dcache_stride,%r1
ldw R%dcache_stride(%r1),%r23
#ifdef __LP64__
depdi,z 1,63-PAGE_SHIFT,1,%r25
#else
depwi,z 1,31-PAGE_SHIFT,1,%r25
#endif
add %r26,%r25,%r25
sub %r25,%r23,%r25
1: fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
fdc,m %r23(%sr3,%r26)
CMPB<< %r26,%r25,1b
fdc,m %r23(%sr3,%r26)
sync
bv %r0(%r2)
nop
.exit
.procend
.export flush_user_icache_page
flush_user_icache_page:
.proc
.callinfo NO_CALLS
.entry
ldil L%dcache_stride,%r1
ldw R%dcache_stride(%r1),%r23
#ifdef __LP64__
depdi,z 1,63-PAGE_SHIFT,1,%r25
#else
depwi,z 1,31-PAGE_SHIFT,1,%r25
#endif
add %r26,%r25,%r25
sub %r25,%r23,%r25
1: fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
fic,m %r23(%sr3,%r26)
CMPB<< %r26,%r25,1b
fic,m %r23(%sr3,%r26)
sync
bv %r0(%r2)
nop
.exit
.procend
.export purge_kernel_dcache_page
purge_kernel_dcache_page:
......
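Both routines added above, flush_user_dcache_page and flush_user_icache_page, do the same walk; one issues fdc (data cache), the other fic (instruction cache), both through space register %sr3 and both stepping by dcache_stride, with a sync at the end. A C-like paraphrase (mine, with a hypothetical flush_line() standing in for one unrolled fdc,m or fic,m):

static void flush_user_page_sketch(unsigned long uvaddr)
{
        unsigned long addr, end = uvaddr + PAGE_SIZE;

        for (addr = uvaddr; addr < end; addr += dcache_stride)
                flush_line(addr);       /* one fdc,m or fic,m via %sr3 */
        /* followed by a sync, as in the assembly */
}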
......@@ -539,10 +539,10 @@ struct hppa_dma_ops pcx_dma_ops = {
.unmap_single = pa11_dma_unmap_single,
.map_sg = pa11_dma_map_sg,
.unmap_sg = pa11_dma_unmap_sg,
.dma_sync_single_cpu = pa11_dma_sync_single_cpu,
.dma_sync_single_device = pa11_dma_sync_single_device,
.dma_sync_sg_cpu = pa11_dma_sync_sg_cpu,
.dma_sync_sg_device = pa11_dma_sync_sg_device,
.dma_sync_single_for_cpu = pa11_dma_sync_single_for_cpu,
.dma_sync_single_for_device = pa11_dma_sync_single_for_device,
.dma_sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu,
.dma_sync_sg_for_device = pa11_dma_sync_sg_for_device,
};
......
......@@ -506,9 +506,11 @@ static int __init perf_init(void)
perf_processor_interface = ONYX_INTF;
} else if (boot_cpu_data.cpu_type == pcxw ||
boot_cpu_data.cpu_type == pcxw_ ||
boot_cpu_data.cpu_type == pcxw2) {
boot_cpu_data.cpu_type == pcxw2 ||
boot_cpu_data.cpu_type == mako) {
perf_processor_interface = CUDA_INTF;
if (boot_cpu_data.cpu_type == pcxw2)
if (boot_cpu_data.cpu_type == pcxw2 ||
boot_cpu_data.cpu_type == mako)
bitmask_array = perf_bitmasks_piranha;
} else {
perf_processor_interface = UNKNOWN_INTF;
......
......@@ -44,6 +44,7 @@
#include <linux/sched.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/kallsyms.h>
#include <asm/io.h>
#include <asm/offsets.h>
......@@ -51,6 +52,7 @@
#include <asm/pdc_chassis.h>
#include <asm/pgalloc.h>
#include <asm/uaccess.h>
#include <asm/unwind.h>
int hlt_counter;
......@@ -368,3 +370,28 @@ asmlinkage int sys_execve(struct pt_regs *regs)
return error;
}
unsigned long
get_wchan(struct task_struct *p)
{
struct unwind_frame_info info;
unsigned long ip;
int count = 0;
/*
* These bracket the sleeping functions..
*/
# define first_sched ((unsigned long) scheduling_functions_start_here)
# define last_sched ((unsigned long) scheduling_functions_end_here)
unwind_frame_init_from_blocked_task(&info, p);
do {
if (unwind_once(&info) < 0)
return 0;
ip = info.ip;
if (ip < first_sched || ip >= last_sched)
return ip;
} while (count++ < 16);
return 0;
# undef first_sched
# undef last_sched
}
......@@ -231,9 +231,7 @@ void __init collect_boot_cpu_data(void)
boot_cpu_data.hversion = boot_cpu_data.pdc.model.hversion;
boot_cpu_data.sversion = boot_cpu_data.pdc.model.sversion;
boot_cpu_data.cpu_type =
parisc_get_cpu_type(boot_cpu_data.hversion);
boot_cpu_data.cpu_type = parisc_get_cpu_type(boot_cpu_data.hversion);
boot_cpu_data.cpu_name = cpu_name_version[boot_cpu_data.cpu_type][0];
boot_cpu_data.family_name = cpu_name_version[boot_cpu_data.cpu_type][1];
}
......@@ -276,6 +274,7 @@ int __init init_per_cpu(int cpunum)
int ret;
struct pdc_coproc_cfg coproc_cfg;
set_firmware_width();
ret = pdc_coproc_cfg(&coproc_cfg);
if(ret >= 0 && coproc_cfg.ccr_functional) {
......
......@@ -26,6 +26,7 @@ real_stack:
save_cr_space:
.block REG_SZ * N_SAVED_REGS
save_cr_end:
/************************ 32-bit real-mode calls ***********************/
......@@ -123,7 +124,7 @@ save_control_regs:
nop
restore_control_regs:
load32 PA(save_cr_space+(N_SAVED_REGS*REG_SZ)), %r26
load32 PA(save_cr_end), %r26
POP_CR(%cr15, %r26)
POP_CR(%cr31, %r26)
POP_CR(%cr30, %r26)
......
......@@ -121,8 +121,11 @@ void __init setup_arch(char **cmdline_p)
pdc_console_init();
#ifdef CONFIG_PDC_NARROW
#ifdef __LP64__
extern int parisc_narrow_firmware;
if(parisc_narrow_firmware) {
printk(KERN_INFO "Kernel is using PDC in 32-bit mode.\n");
}
#endif
setup_pdc();
setup_cmdline(cmdline_p);
......@@ -204,6 +207,7 @@ static void __init parisc_proc_mkdir(void)
case pcxw:
case pcxw_:
case pcxw2:
case mako: /* XXX : this is really mckinley bus */
if (NULL == proc_runway_root)
{
proc_runway_root = proc_mkdir("bus/runway", 0);
......
......@@ -353,12 +353,17 @@ setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
goto give_sigsegv;
/* Set up to return from userspace. If provided, use a stub
already in userspace. */
already in userspace. The first words of tramp are used to
save the previous sigrestartblock trampoline that might be
on the stack. We start the sigreturn trampoline at
SIGRESTARTBLOCK_TRAMP+X. */
err |= __put_user(in_syscall ? INSN_LDI_R25_1 : INSN_LDI_R25_0,
&frame->tramp[SIGRETURN_TRAMP+0]);
err |= __put_user(INSN_LDI_R20, &frame->tramp[SIGRETURN_TRAMP+1]);
err |= __put_user(INSN_BLE_SR2_R0, &frame->tramp[SIGRETURN_TRAMP+2]);
err |= __put_user(INSN_NOP, &frame->tramp[SIGRETURN_TRAMP+3]);
&frame->tramp[SIGRESTARTBLOCK_TRAMP+0]);
err |= __put_user(INSN_LDI_R20,
&frame->tramp[SIGRESTARTBLOCK_TRAMP+1]);
err |= __put_user(INSN_BLE_SR2_R0,
&frame->tramp[SIGRESTARTBLOCK_TRAMP+2]);
err |= __put_user(INSN_NOP, &frame->tramp[SIGRESTARTBLOCK_TRAMP+3]);
#if DEBUG_SIG
/* Assert that we're flushing in the correct space... */
......@@ -370,12 +375,16 @@ setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
}
#endif
flush_user_dcache_range((unsigned long) &frame->tramp[SIGRETURN_TRAMP],
flush_user_dcache_range((unsigned long) &frame->tramp[0],
(unsigned long) &frame->tramp[TRAMP_SIZE]);
flush_user_icache_range((unsigned long) &frame->tramp[SIGRETURN_TRAMP],
flush_user_icache_range((unsigned long) &frame->tramp[0],
(unsigned long) &frame->tramp[TRAMP_SIZE]);
rp = (unsigned long) &frame->tramp[SIGRETURN_TRAMP];
/* TRAMP Words 0-4, Length 5 = SIGRESTARTBLOCK_TRAMP
* TRAMP Words 5-8, Length 4 = SIGRETURN_TRAMP
* So the SIGRETURN_TRAMP is at the end of SIGRESTARTBLOCK_TRAMP
*/
rp = (unsigned long) &frame->tramp[SIGRESTARTBLOCK_TRAMP];
if (err)
goto give_sigsegv;
......
......@@ -3,7 +3,7 @@
**
** Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
** Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
** Copyright (C) 2001 Grant Grundler <grundler@parisc-linux.org>
** Copyright (C) 2001,2004 Grant Grundler <grundler@parisc-linux.org>
**
** Lots of stuff stolen from arch/alpha/kernel/smp.c
** ...and then parisc stole from arch/ia64/kernel/smp.c. Thanks David! :^)
......@@ -60,20 +60,16 @@ spinlock_t smp_lock = SPIN_LOCK_UNLOCKED;
volatile struct task_struct *smp_init_current_idle_task;
static volatile int smp_commenced = 0; /* Set when the idlers are all forked */
static volatile int cpu_now_booting = 0; /* track which CPU is booting */
static int parisc_max_cpus = -1; /* Command line */
unsigned long cache_decay_ticks; /* declared by include/linux/sched.h */
cpumask_t cpu_online_map = CPU_MASK_NONE; /* Bitmap of online CPUs */
#define IS_LOGGED_IN(cpunum) (cpu_isset(cpunum, cpu_online_map))
cpumask_t cpu_possible_map = CPU_MASK_NONE; /* Bitmap of Present CPUs */
EXPORT_SYMBOL(cpu_online_map);
EXPORT_SYMBOL(cpu_possible_map);
int smp_num_cpus = 1;
int smp_threads_ready = 0;
unsigned long cache_decay_ticks;
static int max_cpus = -1; /* Command line */
cpumask_t cpu_present_mask;
EXPORT_SYMBOL(cpu_present_mask);
struct smp_call_struct {
void (*func) (void *info);
......@@ -114,7 +110,7 @@ ipi_init(int cpuid)
#error verify IRQ_OFFSET(IPI_IRQ) is ipi_interrupt() in new IRQ region
if(IS_LOGGED_IN(cpuid) )
if(cpu_online(cpuid) )
{
switch_to_idle_task(current);
}
......@@ -293,12 +289,13 @@ send_IPI_allbutself(enum ipi_message_type op)
{
int i;
for (i = 0; i < smp_num_cpus; i++) {
if (i != smp_processor_id())
for (i = 0; i < parisc_max_cpus; i++) {
if (cpu_online(i) && i != smp_processor_id())
send_IPI_single(i, op);
}
}
inline void
smp_send_stop(void) { send_IPI_allbutself(IPI_CPU_STOP); }
......@@ -334,8 +331,8 @@ smp_call_function (void (*func) (void *info), void *info, int retry, int wait)
data.func = func;
data.info = info;
data.wait = wait;
atomic_set(&data.unstarted_count, smp_num_cpus - 1);
atomic_set(&data.unfinished_count, smp_num_cpus - 1);
atomic_set(&data.unstarted_count, num_online_cpus() - 1);
atomic_set(&data.unfinished_count, num_online_cpus() - 1);
if (retry) {
spin_lock (&lock);
......@@ -395,7 +392,7 @@ EXPORT_SYMBOL(smp_call_function);
static int __init nosmp(char *str)
{
max_cpus = 0;
parisc_max_cpus = 0;
return 1;
}
......@@ -403,7 +400,7 @@ __setup("nosmp", nosmp);
static int __init maxcpus(char *str)
{
get_option(&str, &max_cpus);
get_option(&str, &parisc_max_cpus);
return 1;
}
......@@ -499,17 +496,13 @@ void __init smp_callin(void)
local_irq_enable(); /* Interrupts have been off until now */
/* Slaves wait here until Big Poppa daddy say "jump" */
mb(); /* PARANOID */
while (!smp_commenced) ;
mb(); /* PARANOID */
cpu_idle(); /* Wait for timer to schedule some work */
/* NOTREACHED */
panic("smp_callin() AAAAaaaaahhhh....\n");
}
#if 0
/*
* Create the idle task for a new Slave CPU. DO NOT use kernel_thread()
* because that could end up calling schedule(). If it did, the new idle
......@@ -531,7 +524,7 @@ static struct task_struct *fork_by_hand(void)
/*
* Bring one cpu online.
*/
static int __init smp_boot_one_cpu(int cpuid, int cpunum)
int __init smp_boot_one_cpu(int cpuid, int cpunum)
{
struct task_struct *idle;
long timeout;
......@@ -576,12 +569,11 @@ static int __init smp_boot_one_cpu(int cpuid, int cpunum)
/*
* OK, wait a bit for that CPU to finish staggering about.
* Slave will set a bit when it reaches smp_cpu_init() and then
* wait for smp_commenced to be 1.
* Once we see the bit change, we can move on.
* Slave will set a bit when it reaches smp_cpu_init().
* Once the "monarch CPU" sees the bit change, it can move on.
*/
for (timeout = 0; timeout < 10000; timeout++) {
if(IS_LOGGED_IN(cpunum)) {
if(cpu_online(cpunum)) {
/* Which implies Slave has started up */
cpu_now_booting = 0;
smp_init_current_idle_task = NULL;
......@@ -608,120 +600,56 @@ static int __init smp_boot_one_cpu(int cpuid, int cpunum)
#endif
return 0;
}
#endif
/*
** inventory.c:do_inventory() has already 'discovered' the additional CPU's.
** We are ready to wrest them from PDC's control now.
** Called by smp_init to bring all the secondaries online and hold them.
**
** o Setup of the IPI irq handler is done in irq.c.
** o MEM_RENDEZ is initialized in head.S:stext()
**
*/
void __init smp_boot_cpus(void)
void __devinit smp_prepare_boot_cpu(void)
{
int i, cpu_count = 1;
unsigned long bogosum = cpu_data[0].loops_per_jiffy; /* Count Monarch */
/* REVISIT - assumes first CPU reported by PAT PDC is BSP */
int bootstrap_processor=cpu_data[0].cpuid; /* CPU ID of BSP */
/* Setup BSP mappings */
printk(KERN_DEBUG "SMP: bootstrap CPU ID is %d\n",bootstrap_processor);
init_task.thread_info->cpu = bootstrap_processor;
current->thread_info->cpu = bootstrap_processor;
/* Mark Bootstrap processor as present */
cpu_online_map = cpumask_of_cpu(bootstrap_processor);
current->active_mm = &init_mm;
#ifdef ENTRY_SYS_CPUS
cpu_data[0].state = STATE_RUNNING;
#endif
cpu_present_mask = cpumask_of_cpu(bootstrap_processor);
/* Nothing to do when told not to. */
if (max_cpus == 0) {
printk(KERN_INFO "SMP mode deactivated.\n");
return;
}
if (max_cpus != -1)
printk(KERN_INFO "Limiting CPUs to %d\n", max_cpus);
/* Setup BSP mappings */
printk(KERN_DEBUG "SMP: bootstrap CPU ID is %d\n",bootstrap_processor);
init_task.thread_info->cpu = bootstrap_processor;
current->thread_info->cpu = bootstrap_processor;
/* We found more than one CPU.... */
if (boot_cpu_data.cpu_count > 1) {
cpu_set(bootstrap_processor, cpu_online_map);
cpu_set(bootstrap_processor, cpu_possible_map);
for (i = 0; i < NR_CPUS; i++) {
if (cpu_data[i].cpuid == NO_PROC_ID ||
cpu_data[i].cpuid == bootstrap_processor)
continue;
/* Mark Bootstrap processor as present */
current->active_mm = &init_mm;
if (smp_boot_one_cpu(cpu_data[i].cpuid, cpu_count) < 0)
continue;
cache_decay_ticks = HZ/100; /* FIXME very rough. */
}
bogosum += cpu_data[i].loops_per_jiffy;
cpu_count++; /* Count good CPUs only... */
cpu_present_mask |= 1UL << i;
/* Bail when we've started as many CPUS as told to */
if (cpu_count == max_cpus)
break;
}
}
if (cpu_count == 1) {
printk(KERN_INFO "SMP: Bootstrap processor only.\n");
}
/*
** inventory.c:do_inventory() hasn't yet been run and thus we
** don't 'discover' the additional CPU's until later.
*/
void __init smp_prepare_cpus(unsigned int max_cpus)
{
/*
* FIXME very rough.
*/
cache_decay_ticks = HZ/100;
if (max_cpus != -1)
printk(KERN_INFO "SMP: Limited to %d CPUs\n", max_cpus);
printk(KERN_INFO "SMP: Total %d of %d processors activated "
"(%lu.%02lu BogoMIPS noticed) (Present Mask: %lu).\n",
cpu_count, boot_cpu_data.cpu_count, (bogosum + 25) / 5000,
((bogosum + 25) / 50) % 100, cpu_present_mask);
printk(KERN_INFO "SMP: Monarch CPU activated (%lu.%02lu BogoMIPS)\n",
(cpu_data[0].loops_per_jiffy + 25) / 5000,
((cpu_data[0].loops_per_jiffy + 25) / 50) % 100);
smp_num_cpus = cpu_count;
#ifdef PER_CPU_IRQ_REGION
ipi_init();
#endif
return;
}
/*
* Called from main.c by Monarch Processor.
* After this, any CPU can schedule any task.
*/
void smp_commence(void)
{
smp_commenced = 1;
mb();
return;
}
/*
* XXX FIXME : do nothing
*/
void smp_cpus_done(unsigned int cpu_max)
{
smp_threads_ready = 1;
}
void __init smp_prepare_cpus(unsigned int max_cpus)
{
smp_boot_cpus();
return;
}
void __devinit smp_prepare_boot_cpu(void)
{
cpu_set(smp_processor_id(), cpu_online_map);
cpu_set(smp_processor_id(), cpu_present_mask);
}
int __devinit __cpu_up(unsigned int cpu)
{
......@@ -748,7 +676,7 @@ int sys_cpus(int argc, char **argv)
#ifdef DUMP_MORE_STATE
for(i=0; i<NR_CPUS; i++) {
int cpus_per_line = 4;
if(IS_LOGGED_IN(i)) {
if(cpu_online(i)) {
if (j++ % cpus_per_line)
printk(" %3d",i);
else
......@@ -763,7 +691,7 @@ int sys_cpus(int argc, char **argv)
printk("\nCPUSTATE TASK CPUNUM CPUID HARDCPU(HPA)\n");
#ifdef DUMP_MORE_STATE
for(i=0;i<NR_CPUS;i++) {
if (!IS_LOGGED_IN(i))
if (!cpu_online(i))
continue;
if (cpu_data[i].cpuid != NO_PROC_ID) {
switch(cpu_data[i].state) {
......@@ -783,7 +711,7 @@ int sys_cpus(int argc, char **argv)
printk("%08x?", cpu_data[i].state);
break;
}
if(IS_LOGGED_IN(i)) {
if(cpu_online(i)) {
printk(" %4d",current_pid(i));
}
printk(" %6d",cpu_number_map(i));
......@@ -799,7 +727,7 @@ int sys_cpus(int argc, char **argv)
#ifdef DUMP_MORE_STATE
printk("\nCPUSTATE CPUID\n");
for (i=0;i<NR_CPUS;i++) {
if (!IS_LOGGED_IN(i))
if (!cpu_online(i))
continue;
if (cpu_data[i].cpuid != NO_PROC_ID) {
switch(cpu_data[i].state) {
......
......@@ -93,7 +93,7 @@ static unsigned long get_shared_area(struct address_space *mapping,
unsigned long addr, unsigned long len, unsigned long pgoff)
{
struct vm_area_struct *vma;
int offset = get_offset(mapping);
int offset = mapping ? get_offset(mapping) : 0;
addr = DCACHE_ALIGN(addr - offset) + offset;
......@@ -117,8 +117,10 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
if (!addr)
addr = TASK_UNMAPPED_BASE;
if (filp && (flags & MAP_SHARED)) {
if (filp) {
addr = get_shared_area(filp->f_mapping, addr, len, pgoff);
} else if(flags & MAP_SHARED) {
addr = get_shared_area(NULL, addr, len, pgoff);
} else {
addr = get_unshared_area(addr, len);
}
......
......@@ -155,9 +155,10 @@ linux_gateway_entry:
stw %r21, -56(%r30) /* 6th argument */
#endif
/* Are we being ptraced? */
mfctl %cr30, %r1
LDREG TI_FLAGS(%r1), %r19
bb,<,n %r19,31-TIF_SYSCALL_TRACE,.Ltracesys
LDREG TASK_PTRACE(%r1), %r1
bb,<,n %r1,31,.Ltracesys
/* Note! We cannot use the syscall table that is mapped
nearby since the gateway page is mapped execute-only. */
......
......@@ -334,3 +334,12 @@
ENTRY_SAME(epoll_ctl) /* 225 */
ENTRY_SAME(epoll_wait)
ENTRY_SAME(remap_file_pages)
ENTRY_SAME(semtimedop)
ENTRY_SAME(mq_open)
ENTRY_SAME(mq_unlink) /* 230 */
ENTRY_SAME(mq_timedsend)
ENTRY_SAME(mq_timedreceive)
ENTRY_SAME(mq_notify)
ENTRY_SAME(mq_getsetattr)
/* Nothing yet */ /* 235 */
/*
* Kernel unwinding support
*
* (c) 2002-2004 Randolph Chung <tausq@debian.org>
*
* Derived partially from the IA64 implementation. The PA-RISC
* Runtime Architecture Document is also a useful reference to
* understand what is happening here
*/
/*
* J. David Anglin writes:
*
* "You have to adjust the current sp to that at the begining of the function.
* There can be up to two stack additions to allocate the frame in the
* prologue. Similar things happen in the epilogue. In the presence of
* interrupts, you have to be concerned about where you are in the function
* and what stack adjustments have taken place."
*
* For now these cases are not handled, but they should be!
*/
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <asm/uaccess.h>
#include <asm/unwind.h>
/* #define DEBUG 1 */
#ifdef DEBUG
#define dbg(x...) printk(x)
#else
#define dbg(x...)
#endif
extern const struct unwind_table_entry __start___unwind[];
extern const struct unwind_table_entry __stop___unwind[];
static spinlock_t unwind_lock;
/*
* the kernel unwind block is not dynamically allocated so that
* we can call unwind_init as early in the bootup process as
* possible (before the slab allocator is initialized)
*/
static struct unwind_table kernel_unwind_table;
static struct unwind_table *unwind_tables, *unwind_tables_end;
static inline const struct unwind_table_entry *
find_unwind_entry_in_table(const struct unwind_table *table, unsigned long addr)
{
const struct unwind_table_entry *e = 0;
unsigned long lo, hi, mid;
addr -= table->base_addr;
for (lo = 0, hi = table->length; lo < hi; )
{
mid = (lo + hi) / 2;
e = &table->table[mid];
if (addr < e->region_start)
hi = mid;
else if (addr > e->region_end)
lo = mid + 1;
else
break;
}
return e;
}
static inline const struct unwind_table_entry *
find_unwind_entry(unsigned long addr)
{
struct unwind_table *table = unwind_tables;
const struct unwind_table_entry *e = NULL;
if (addr >= kernel_unwind_table.start &&
addr <= kernel_unwind_table.end)
e = find_unwind_entry_in_table(&kernel_unwind_table, addr);
else
for (; table; table = table->next)
{
if (addr >= table->start &&
addr <= table->end)
e = find_unwind_entry_in_table(table, addr);
if (e)
break;
}
return e;
}
static void
unwind_table_init(struct unwind_table *table, const char *name,
unsigned long base_addr, unsigned long gp,
const void *table_start, const void *table_end)
{
const struct unwind_table_entry *start = table_start;
const struct unwind_table_entry *end = table_end - 1;
table->name = name;
table->base_addr = base_addr;
table->gp = gp;
table->start = base_addr + start->region_start;
table->end = base_addr + end->region_end;
table->table = (struct unwind_table_entry *)table_start;
table->length = end - start;
table->next = NULL;
}
void *
unwind_table_add(const char *name, unsigned long base_addr,
unsigned long gp,
const void *start, const void *end)
{
struct unwind_table *table;
unsigned long flags;
table = kmalloc(sizeof(struct unwind_table), GFP_USER);
if (table == NULL)
return 0;
unwind_table_init(table, name, base_addr, gp, start, end);
spin_lock_irqsave(&unwind_lock, flags);
if (unwind_tables)
{
unwind_tables_end->next = table;
unwind_tables_end = table;
}
else
{
unwind_tables = unwind_tables_end = table;
}
spin_unlock_irqrestore(&unwind_lock, flags);
return table;
}
/* Called from setup_arch to import the kernel unwind info */
static int unwind_init(void)
{
long start, stop;
register unsigned long gp __asm__ ("r27");
start = (long)&__start___unwind[0];
stop = (long)&__stop___unwind[0];
printk("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n",
start, stop,
(stop - start) / sizeof(struct unwind_table_entry));
unwind_table_init(&kernel_unwind_table, "kernel", KERNEL_START,
gp,
&__start___unwind[0], &__stop___unwind[0]);
#if 0
{
int i;
for (i = 0; i < 10; i++)
{
printk("region 0x%x-0x%x\n",
__start___unwind[i].region_start,
__start___unwind[i].region_end);
}
}
#endif
return 0;
}
static void unwind_frame_regs(struct unwind_frame_info *info)
{
const struct unwind_table_entry *e;
unsigned long npc;
unsigned int insn;
long frame_size = 0;
int looking_for_rp, rpoffset = 0;
e = find_unwind_entry(info->ip);
if (!e) {
unsigned long sp;
extern char _stext[], _etext[];
dbg("Cannot find unwind entry for 0x%lx; forced unwinding\n", info->ip);
/* Since we are doing the unwinding blind, we don't know if
we are adjusting the stack correctly or extracting the rp
correctly. The rp is checked to see if it belongs to the
kernel text section, if not we assume we don't have a
correct stack frame and we continue to unwind the stack.
This is not quite correct, and will fail for loadable
modules. */
sp = info->sp & ~63;
do {
info->prev_sp = sp - 64;
/* FIXME: what happens if we unwind too far so that
sp no longer falls in a mapped kernel page? */
#ifndef __LP64__
info->prev_ip = *(unsigned long *)(info->prev_sp - 20);
#else
info->prev_ip = *(unsigned long *)(info->prev_sp - 16);
#endif
sp = info->prev_sp;
} while (info->prev_ip < (unsigned long)_stext ||
info->prev_ip > (unsigned long)_etext);
} else {
dbg("e->start = 0x%x, e->end = 0x%x, Save_SP = %d, Save_RP = %d size = %u\n",
e->region_start, e->region_end, e->Save_SP, e->Save_RP, e->Total_frame_size);
looking_for_rp = e->Save_RP;
for (npc = e->region_start;
(frame_size < (e->Total_frame_size << 3) || looking_for_rp) &&
npc < info->ip;
npc += 4) {
insn = *(unsigned int *)npc;
if ((insn & 0xffffc000) == 0x37de0000 ||
(insn & 0xffe00000) == 0x6fc00000) {
/* ldo X(sp), sp, or stwm X,D(sp) */
frame_size += (insn & 0x1 ? -1 << 13 : 0) |
((insn & 0x3fff) >> 1);
} else if ((insn & 0xffe00008) == 0x7ec00008) {
/* std,ma X,D(sp) */
frame_size += (insn & 0x1 ? -1 << 13 : 0) |
(((insn >> 4) & 0x3ff) << 3);
} else if (insn == 0x6bc23fd9) {
/* stw rp,-20(sp) */
rpoffset = 20;
looking_for_rp = 0;
} else if (insn == 0x0fc212c1) {
/* std rp,-16(sr0,sp) */
rpoffset = 16;
looking_for_rp = 0;
}
}
info->prev_sp = info->sp - frame_size;
if (rpoffset)
info->prev_ip = *(unsigned long *)(info->prev_sp - rpoffset);
}
}
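An illustrative helper (my own sketch, not part of the patch) showing how the loop above extracts the signed im14 displacement from an "ldo X(%sp),%sp" frame-setup instruction; for example the word 0x37de0080 encodes ldo 64(%sp),%sp and yields 64:

static long ldo_sp_frame_adjust(unsigned int insn)
{
        /* bit 0 is the sign, the remaining 13 low bits hold the
         * magnitude shifted left by one -- the same expression used
         * in unwind_frame_regs() above */
        return (insn & 0x1 ? -1 << 13 : 0) | ((insn & 0x3fff) >> 1);
}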
void unwind_frame_init(struct unwind_frame_info *info, struct task_struct *t,
struct pt_regs *regs)
{
memset(info, 0, sizeof(struct unwind_frame_info));
info->t = t;
info->sp = regs->ksp;
info->ip = regs->kpc;
dbg("(%d) Start unwind from sp=%08lx ip=%08lx\n", (int)t->pid, info->sp, info->ip);
}
void unwind_frame_init_from_blocked_task(struct unwind_frame_info *info, struct task_struct *t)
{
struct pt_regs *regs = &t->thread.regs;
unwind_frame_init(info, t, regs);
}
int unwind_once(struct unwind_frame_info *next_frame)
{
unwind_frame_regs(next_frame);
if (next_frame->prev_sp == 0 ||
next_frame->prev_ip == 0)
return -1;
next_frame->sp = next_frame->prev_sp;
next_frame->ip = next_frame->prev_ip;
next_frame->prev_sp = 0;
next_frame->prev_ip = 0;
dbg("(%d) Continue unwind to sp=%08lx ip=%08lx\n", (int)next_frame->t->pid, next_frame->sp, next_frame->ip);
return 0;
}
int unwind_to_user(struct unwind_frame_info *info)
{
int ret;
do {
ret = unwind_once(info);
} while (!ret && !(info->ip & 3));
return ret;
}
module_init(unwind_init);
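A sketch of a typical consumer of the new unwinder (illustration only, not part of the patch): walk the kernel stack of a blocked task and print each return address, much as the get_wchan() addition earlier in this commit does. It uses only the interfaces defined above:

static void show_backtrace_of(struct task_struct *t)
{
        struct unwind_frame_info info;
        int i;

        unwind_frame_init_from_blocked_task(&info, t);
        for (i = 0; i < 32; i++) {
                if (unwind_once(&info) < 0 || info.ip == 0)
                        break;
                printk("  [<%08lx>]\n", info.ip);
        }
}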
......@@ -26,6 +26,7 @@
#include <asm-generic/vmlinux.lds.h>
/* needed for the processor specific cache alignment size */
#include <asm/cache.h>
#include <asm/page.h>
/* ld script to make hppa Linux kernel */
#ifndef CONFIG_PARISC64
......@@ -45,13 +46,17 @@ jiffies = jiffies_64;
SECTIONS
{
. = 0x10100000;
. = KERNEL_BINARY_TEXT_START;
_text = .; /* Text and read-only data */
.text ALIGN(16) : {
*(.text*)
*(.text)
SCHED_TEXT
*(.PARISC.unwind)
*(.text.do_softirq)
*(.text.sys_exit)
*(.text.do_sigaltstack)
*(.text.do_fork)
*(.text.*)
*(.fixup)
*(.lock.text) /* out-of-line lock text */
*(.gnu.warning)
......@@ -72,6 +77,10 @@ SECTIONS
__ex_table : { *(__ex_table) }
__stop___ex_table = .;
__start___unwind = .; /* unwind info */
.PARISC.unwind : { *(.PARISC.unwind) }
__stop___unwind = .;
.data : { /* Data */
*(.data)
CONSTRUCTORS
......@@ -88,6 +97,10 @@ SECTIONS
. = ALIGN(L1_CACHE_BYTES);
.data.cacheline_aligned : { *(.data.cacheline_aligned) }
/* PA-RISC locks requires 16-byte alignment */
. = ALIGN(16);
.data.lock_aligned : { *(.data.lock_aligned) }
_edata = .; /* End of data section */
. = ALIGN(16384); /* init_task */
......
......@@ -13,22 +13,20 @@
#include <asm/atomic.h>
#ifdef CONFIG_SMP
spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] = {
[0 ... (ATOMIC_HASH_SIZE-1)] = SPIN_LOCK_UNLOCKED
atomic_lock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned = {
[0 ... (ATOMIC_HASH_SIZE-1)] = (atomic_lock_t) { { 1, 1, 1, 1 } }
};
#endif
spinlock_t __atomic_lock = SPIN_LOCK_UNLOCKED;
#ifdef __LP64__
unsigned long __xchg64(unsigned long x, unsigned long *ptr)
{
unsigned long temp, flags;
SPIN_LOCK_IRQSAVE(ATOMIC_HASH(ptr), flags);
atomic_spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
temp = *ptr;
*ptr = x;
SPIN_UNLOCK_IRQRESTORE(ATOMIC_HASH(ptr), flags);
atomic_spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
return temp;
}
#endif
......@@ -38,10 +36,10 @@ unsigned long __xchg32(int x, int *ptr)
unsigned long flags;
unsigned long temp;
SPIN_LOCK_IRQSAVE(ATOMIC_HASH(ptr), flags);
atomic_spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
(long) temp = (long) *ptr; /* XXX - sign extension wanted? */
*ptr = x;
SPIN_UNLOCK_IRQRESTORE(ATOMIC_HASH(ptr), flags);
atomic_spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
return temp;
}
......@@ -51,10 +49,10 @@ unsigned long __xchg8(char x, char *ptr)
unsigned long flags;
unsigned long temp;
SPIN_LOCK_IRQSAVE(ATOMIC_HASH(ptr), flags);
atomic_spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
(long) temp = (long) *ptr; /* XXX - sign extension wanted? */
*ptr = x;
SPIN_UNLOCK_IRQRESTORE(ATOMIC_HASH(ptr), flags);
atomic_spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
return temp;
}
......@@ -65,10 +63,10 @@ unsigned long __cmpxchg_u64(volatile unsigned long *ptr, unsigned long old, unsi
unsigned long flags;
unsigned long prev;
SPIN_LOCK_IRQSAVE(ATOMIC_HASH(ptr), flags);
atomic_spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
if ((prev = *ptr) == old)
*ptr = new;
SPIN_UNLOCK_IRQRESTORE(ATOMIC_HASH(ptr), flags);
atomic_spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
return prev;
}
#endif
......@@ -78,9 +76,9 @@ unsigned long __cmpxchg_u32(volatile unsigned int *ptr, unsigned int old, unsign
unsigned long flags;
unsigned int prev;
SPIN_LOCK_IRQSAVE(ATOMIC_HASH(ptr), flags);
atomic_spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
if ((prev = *ptr) == old)
*ptr = new;
SPIN_UNLOCK_IRQRESTORE(ATOMIC_HASH(ptr), flags);
atomic_spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
return (unsigned long)prev;
}
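On the { { 1, 1, 1, 1 } } initialiser above: parisc's only atomic memory primitive, ldcw, reads a word and writes zero to it in one step, so a non-zero word means the lock is free, and the instruction requires a 16-byte aligned operand. The lock is therefore a 16-byte area of four words all set to the unlocked value, and the lock code picks whichever word inside it is suitably aligned. A rough sketch of that selection, assuming the usual __ldcw_align idiom (my assumption; the header side of this change is not shown in the diff):

/* return the 16-byte aligned word inside a 16-byte atomic_lock_t */
static inline volatile unsigned int *ldcw_align_sketch(void *l)
{
        unsigned long a = ((unsigned long)l + 15UL) & ~15UL;

        return (volatile unsigned int *)a;
}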
......@@ -424,7 +424,12 @@ void free_initmem(void)
* a hole of 4kB between each vmalloced area for the same reason.
*/
#define MAP_START 0x4000 /* Leave room for gateway page expansion */
/* Leave room for gateway page expansion */
#if KERNEL_MAP_START < GATEWAY_PAGE_SIZE
#error KERNEL_MAP_START is in gateway reserved region
#endif
#define MAP_START (KERNEL_MAP_START)
#define VM_MAP_OFFSET (32*1024)
#define SET_MAP_OFFSET(x) ((void *)(((unsigned long)(x) + VM_MAP_OFFSET) \
& ~(VM_MAP_OFFSET-1)))
......@@ -545,7 +550,7 @@ static void __init map_pages(unsigned long start_vaddr, unsigned long start_padd
*/
if (!pmd) {
pmd = (pmd_t *) alloc_bootmem_low_pages_node(NODE_DATA(0),PAGE_SIZE);
pmd = (pmd_t *) alloc_bootmem_low_pages_node(NODE_DATA(0),PAGE_SIZE << PMD_ORDER);
pmd = (pmd_t *) __pa(pmd);
}
......
......@@ -46,8 +46,8 @@
#define DRM_IOC_WRITE _IOC_WRITE
#define DRM_IOC_READWRITE _IOC_READ|_IOC_WRITE
#define DRM_IOC(dir, group, nr, size) _IOC(dir, group, nr, size)
#elif defined(__FreeBSD__) || defined(__NetBSD__)
#if defined(__FreeBSD__) && defined(XFree86Server)
#elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
#if defined(__FreeBSD__) && defined(IN_MODULE)
/* Prevent name collision when including sys/ioccom.h */
#undef ioctl
#include <sys/ioccom.h>
......@@ -130,6 +130,18 @@ typedef struct drm_tex_region {
unsigned int age;
} drm_tex_region_t;
/**
* Hardware lock.
*
* The lock structure is a simple cache-line aligned integer. To avoid
* processor bus contention on a multiprocessor system, there should not be any
* other data stored in the same cache line.
*/
typedef struct drm_hw_lock {
__volatile__ unsigned int lock; /**< lock variable */
char padding[60]; /**< Pad to cache line */
} drm_hw_lock_t;
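As an aside, not part of the patch: the padding rationale above can be illustrated with a toy spin acquire over this structure. The GCC __sync builtins and the bare spin loop below are assumptions made purely for demonstration; they are not the DRM locking protocol, which encodes a context handle in the lock word and goes through the lock ioctl.

/* Illustrative sketch only: spin on a padded lock word so that waiters
 * bounce a single cache line and nothing else. The __sync builtins are
 * an assumption; the real DRM lock protocol is not shown here. */
static void toy_hw_lock(drm_hw_lock_t *hw)
{
	while (__sync_lock_test_and_set(&hw->lock, 1))
		;	/* spin: only this cache line is contended */
}

static void toy_hw_unlock(drm_hw_lock_t *hw)
{
	__sync_lock_release(&hw->lock);	/* writes 0 with release semantics */
}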
/**
* DRM_IOCTL_VERSION ioctl argument type.
......@@ -580,6 +592,16 @@ typedef struct drm_scatter_gather {
unsigned long handle; /**< Used for mapping / unmapping */
} drm_scatter_gather_t;
/**
* DRM_IOCTL_SET_VERSION ioctl argument type.
*/
typedef struct drm_set_version {
int drm_di_major;
int drm_di_minor;
int drm_dd_major;
int drm_dd_minor;
} drm_set_version_t;
#define DRM_IOCTL_BASE 'd'
#define DRM_IO(nr) _IO(DRM_IOCTL_BASE,nr)
......@@ -594,6 +616,7 @@ typedef struct drm_scatter_gather {
#define DRM_IOCTL_GET_MAP DRM_IOWR(0x04, drm_map_t)
#define DRM_IOCTL_GET_CLIENT DRM_IOWR(0x05, drm_client_t)
#define DRM_IOCTL_GET_STATS DRM_IOR( 0x06, drm_stats_t)
#define DRM_IOCTL_SET_VERSION DRM_IOWR(0x07, drm_set_version_t)
#define DRM_IOCTL_SET_UNIQUE DRM_IOW( 0x10, drm_unique_t)
#define DRM_IOCTL_AUTH_MAGIC DRM_IOW( 0x11, drm_auth_t)
......
......@@ -92,8 +92,8 @@
#ifndef __HAVE_DMA
#define __HAVE_DMA 0
#endif
#ifndef __HAVE_DMA_IRQ
#define __HAVE_DMA_IRQ 0
#ifndef __HAVE_IRQ
#define __HAVE_IRQ 0
#endif
#ifndef __HAVE_DMA_WAITLIST
#define __HAVE_DMA_WAITLIST 0
......@@ -148,6 +148,7 @@
#define DRM_MEM_CTXBITMAP 18
#define DRM_MEM_STUB 19
#define DRM_MEM_SGLISTS 20
#define DRM_MEM_CTXLIST 21
#define DRM_MAX_CTXBITMAP (PAGE_SIZE * 8)
......@@ -324,6 +325,7 @@ do { \
#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x))
#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist)
#define DRM_IF_VERSION(maj, min) (maj << 16 | min)
/**
* Get the private SAREA mapping.
*
......@@ -362,11 +364,6 @@ do { \
typedef int drm_ioctl_t( struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg );
typedef struct drm_pci_list {
u16 vendor;
u16 device;
} drm_pci_list_t;
typedef struct drm_ioctl_desc {
drm_ioctl_t *func;
int auth_needed;
......@@ -463,18 +460,6 @@ typedef struct drm_buf_entry {
drm_freelist_t freelist;
} drm_buf_entry_t;
/**
* Hardware lock.
*
* The lock structure is a simple cache-line aligned integer. To avoid
* processor bus contention on a multiprocessor system, there should not be any
* other data stored in the same cache line.
*/
typedef struct drm_hw_lock {
__volatile__ unsigned int lock; /**< lock variable */
char padding[60]; /**< Pad to cache line */
} drm_hw_lock_t;
/** File private data */
typedef struct drm_file {
int authenticated;
......@@ -488,6 +473,9 @@ typedef struct drm_file {
struct drm_device *dev;
int remove_auth_on_close;
unsigned long lock_count;
#ifdef DRIVER_FILE_FIELDS
DRIVER_FILE_FIELDS;
#endif
} drm_file_t;
/** Wait queue */
......@@ -602,6 +590,15 @@ typedef struct drm_map_list {
typedef drm_map_t drm_local_map_t;
/**
* Context handle list
*/
typedef struct drm_ctx_list {
struct list_head head; /**< list head */
drm_context_t handle; /**< context handle */
drm_file_t *tag; /**< associated fd private data */
} drm_ctx_list_t;
#if __HAVE_VBL_IRQ
typedef struct drm_vbl_sig {
......@@ -622,6 +619,8 @@ typedef struct drm_device {
int unique_len; /**< Length of unique field */
dev_t device; /**< Device number for mknod */
char *devname; /**< For /proc/interrupts */
int minor; /**< Minor device number */
int if_version; /**< Highest interface version set */
int blocked; /**< Blocked due to VC switch? */
struct proc_dir_entry *root; /**< Root for this device's entries */
......@@ -660,6 +659,12 @@ typedef struct drm_device {
drm_map_list_t *maplist; /**< Linked list of regions */
int map_count; /**< Number of mappable regions */
/** \name Context handle management */
/*@{*/
drm_ctx_list_t *ctxlist; /**< Linked list of context handles */
int ctx_count; /**< Number of context handles */
struct semaphore ctxlist_sem; /**< For ctxlist */
drm_map_t **context_sareas; /**< per-context SAREA's */
int max_context;
......@@ -679,6 +684,7 @@ typedef struct drm_device {
/** \name Context support */
/*@{*/
int irq; /**< Interrupt used by board */
int irq_enabled; /**< True if irq handler is enabled */
__volatile__ long context_flag; /**< Context swapping flag */
__volatile__ long interrupt_flag; /**< Interruption handler flag */
__volatile__ long dma_flag; /**< DMA dispatch flag */
......@@ -714,7 +720,12 @@ typedef struct drm_device {
#if __REALLY_HAVE_AGP
drm_agp_head_t *agp; /**< AGP data */
#endif
struct pci_dev *pdev; /**< PCI device structure */
int pci_domain; /**< PCI bus domain number */
int pci_bus; /**< PCI bus number */
int pci_slot; /**< PCI slot number */
int pci_func; /**< PCI function number */
#ifdef __alpha__
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,3)
struct pci_controler *hose;
......@@ -758,18 +769,6 @@ extern int DRM(flush)(struct file *filp);
extern int DRM(fasync)(int fd, struct file *filp, int on);
/* Mapping support (drm_vm.h) */
extern struct page *DRM(vm_nopage)(struct vm_area_struct *vma,
unsigned long address,
int *type);
extern struct page *DRM(vm_shm_nopage)(struct vm_area_struct *vma,
unsigned long address,
int *type);
extern struct page *DRM(vm_dma_nopage)(struct vm_area_struct *vma,
unsigned long address,
int *type);
extern struct page *DRM(vm_sg_nopage)(struct vm_area_struct *vma,
unsigned long address,
int *type);
extern void DRM(vm_open)(struct vm_area_struct *vma);
extern void DRM(vm_close)(struct vm_area_struct *vma);
extern void DRM(vm_shm_close)(struct vm_area_struct *vma);
......@@ -804,7 +803,7 @@ extern int DRM(unbind_agp)(DRM_AGP_MEM *handle);
#endif
/* Misc. IOCTL support (drm_ioctl.h) */
extern int DRM(irq_busid)(struct inode *inode, struct file *filp,
extern int DRM(irq_by_busid)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
extern int DRM(getunique)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
......@@ -816,6 +815,8 @@ extern int DRM(getclient)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
extern int DRM(getstats)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
extern int DRM(setversion)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
/* Context IOCTL support (drm_context.h) */
extern int DRM(resctx)( struct inode *inode, struct file *filp,
......@@ -900,12 +901,17 @@ extern int DRM(dma_setup)(drm_device_t *dev);
extern void DRM(dma_takedown)(drm_device_t *dev);
extern void DRM(free_buffer)(drm_device_t *dev, drm_buf_t *buf);
extern void DRM(reclaim_buffers)( struct file *filp );
#if __HAVE_DMA_IRQ
#endif /* __HAVE_DMA */
/* IRQ support (drm_irq.h) */
#if __HAVE_IRQ || __HAVE_DMA
extern int DRM(control)( struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg );
extern int DRM(irq_install)( drm_device_t *dev, int irq );
#endif
#if __HAVE_IRQ
extern int DRM(irq_install)( drm_device_t *dev );
extern int DRM(irq_uninstall)( drm_device_t *dev );
extern irqreturn_t DRM(dma_service)( DRM_IRQ_ARGS );
extern irqreturn_t DRM(irq_handler)( DRM_IRQ_ARGS );
extern void DRM(driver_irq_preinstall)( drm_device_t *dev );
extern void DRM(driver_irq_postinstall)( drm_device_t *dev );
extern void DRM(driver_irq_uninstall)( drm_device_t *dev );
......@@ -915,12 +921,11 @@ extern int DRM(wait_vblank)(struct inode *inode, struct file *filp,
extern int DRM(vblank_wait)(drm_device_t *dev, unsigned int *vbl_seq);
extern void DRM(vbl_send_signals)( drm_device_t *dev );
#endif
#if __HAVE_DMA_IRQ_BH
extern void DRM(dma_immediate_bh)( void *dev );
#if __HAVE_IRQ_BH
extern void DRM(irq_immediate_bh)( void *dev );
#endif
#endif
#endif /* __HAVE_DMA */
#if __REALLY_HAVE_AGP
/* AGP/GART support (drm_agpsupport.h) */
......
......@@ -103,7 +103,13 @@ int DRM(agp_acquire)(struct inode *inode, struct file *filp,
drm_device_t *dev = priv->dev;
int retcode;
if (!dev->agp || dev->agp->acquired || !drm_agp->acquire)
if (!dev->agp)
return -ENODEV;
if (dev->agp->acquired)
return -EBUSY;
if (!drm_agp->acquire)
return -EINVAL;
if ( dev->agp->cant_use_aperture )
return -EINVAL;
if ((retcode = drm_agp->acquire()))
return retcode;
......
......@@ -147,7 +147,9 @@ int DRM(addmap)( struct inode *inode, struct file *filp,
MTRR_TYPE_WRCOMB, 1 );
}
#endif
map->handle = DRM(ioremap)( map->offset, map->size, dev );
if (map->type == _DRM_REGISTERS)
map->handle = DRM(ioremap)( map->offset, map->size,
dev );
break;
case _DRM_SHM:
......@@ -160,6 +162,12 @@ int DRM(addmap)( struct inode *inode, struct file *filp,
}
map->offset = (unsigned long)map->handle;
if ( map->flags & _DRM_CONTAINS_LOCK ) {
/* Prevent a 2nd X Server from creating a 2nd lock */
if (dev->lock.hw_lock != NULL) {
vfree( map->handle );
DRM(free)( map, sizeof(*map), DRM_MEM_MAPS );
return -EBUSY;
}
dev->sigdata.lock =
dev->lock.hw_lock = map->handle; /* Pointer to lock */
}
......@@ -767,7 +775,7 @@ int DRM(addbufs_pci)( struct inode *inode, struct file *filp,
}
#endif /* __HAVE_PCI_DMA */
#ifdef __HAVE_SG
#if __HAVE_SG
int DRM(addbufs_sg)( struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg )
{
......
......@@ -401,6 +401,7 @@ int DRM(addctx)( struct inode *inode, struct file *filp,
{
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_ctx_list_t * ctx_entry;
drm_ctx_t ctx;
if ( copy_from_user( &ctx, (drm_ctx_t *)arg, sizeof(ctx) ) )
......@@ -421,6 +422,20 @@ int DRM(addctx)( struct inode *inode, struct file *filp,
if ( ctx.handle != DRM_KERNEL_CONTEXT )
DRIVER_CTX_CTOR(ctx.handle); /* XXX: also pass dev ? */
#endif
ctx_entry = DRM(alloc)( sizeof(*ctx_entry), DRM_MEM_CTXLIST );
if ( !ctx_entry ) {
DRM_DEBUG("out of memory\n");
return -ENOMEM;
}
INIT_LIST_HEAD( &ctx_entry->head );
ctx_entry->handle = ctx.handle;
ctx_entry->tag = priv;
down( &dev->ctxlist_sem );
list_add( &ctx_entry->head, &dev->ctxlist->head );
++dev->ctx_count;
up( &dev->ctxlist_sem );
if ( copy_to_user( (drm_ctx_t *)arg, &ctx, sizeof(ctx) ) )
return -EFAULT;
......@@ -543,6 +558,20 @@ int DRM(rmctx)( struct inode *inode, struct file *filp,
DRM(ctxbitmap_free)( dev, ctx.handle );
}
down( &dev->ctxlist_sem );
if ( !list_empty( &dev->ctxlist->head ) ) {
drm_ctx_list_t *pos, *n;
list_for_each_entry_safe( pos, n, &dev->ctxlist->head, head ) {
if ( pos->handle == ctx.handle ) {
list_del( &pos->head );
DRM(free)( pos, sizeof(*pos), DRM_MEM_CTXLIST );
--dev->ctx_count;
}
}
}
up( &dev->ctxlist_sem );
return 0;
}
......
......@@ -35,7 +35,6 @@
#include "drmP.h"
#include <linux/interrupt.h> /* For task queue support */
#ifndef __HAVE_DMA_WAITQUEUE
#define __HAVE_DMA_WAITQUEUE 0
......@@ -43,15 +42,6 @@
#ifndef __HAVE_DMA_RECLAIM
#define __HAVE_DMA_RECLAIM 0
#endif
#ifndef __HAVE_SHARED_IRQ
#define __HAVE_SHARED_IRQ 0
#endif
#if __HAVE_SHARED_IRQ
#define DRM_IRQ_TYPE SA_SHIRQ
#else
#define DRM_IRQ_TYPE 0
#endif
#if __HAVE_DMA
......@@ -214,293 +204,11 @@ void DRM(reclaim_buffers)( struct file *filp )
}
#endif
#if __HAVE_DMA_IRQ
/**
* Install IRQ handler.
*
* \param dev DRM device.
* \param irq IRQ number.
*
* Initializes the IRQ-related data and sets up drm_device::vbl_queue. Installs the handler, calling the driver
* \c DRM(driver_irq_preinstall)() and \c DRM(driver_irq_postinstall)() functions
* before and after the installation.
*/
int DRM(irq_install)( drm_device_t *dev, int irq )
{
int ret;
if ( !irq )
return -EINVAL;
down( &dev->struct_sem );
/* Driver must have been initialized */
if ( !dev->dev_private ) {
up( &dev->struct_sem );
return -EINVAL;
}
if ( dev->irq ) {
up( &dev->struct_sem );
return -EBUSY;
}
dev->irq = irq;
up( &dev->struct_sem );
DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, irq );
dev->context_flag = 0;
dev->interrupt_flag = 0;
dev->dma_flag = 0;
dev->dma->next_buffer = NULL;
dev->dma->next_queue = NULL;
dev->dma->this_buffer = NULL;
#if __HAVE_DMA_IRQ_BH
INIT_WORK(&dev->work, DRM(dma_immediate_bh), dev);
#endif
#if __HAVE_VBL_IRQ
init_waitqueue_head(&dev->vbl_queue);
spin_lock_init( &dev->vbl_lock );
INIT_LIST_HEAD( &dev->vbl_sigs.head );
dev->vbl_pending = 0;
#endif
/* Before installing handler */
DRM(driver_irq_preinstall)(dev);
/* Install handler */
ret = request_irq( dev->irq, DRM(dma_service),
DRM_IRQ_TYPE, dev->devname, dev );
if ( ret < 0 ) {
down( &dev->struct_sem );
dev->irq = 0;
up( &dev->struct_sem );
return ret;
}
/* After installing handler */
DRM(driver_irq_postinstall)(dev);
return 0;
}
/**
* Uninstall the IRQ handler.
*
* \param dev DRM device.
*
* Calls the driver's \c DRM(driver_irq_uninstall)() function, and stops the irq.
*/
int DRM(irq_uninstall)( drm_device_t *dev )
{
int irq;
down( &dev->struct_sem );
irq = dev->irq;
dev->irq = 0;
up( &dev->struct_sem );
if ( !irq )
return -EINVAL;
DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, irq );
DRM(driver_irq_uninstall)( dev );
free_irq( irq, dev );
return 0;
}
/**
* IRQ control ioctl.
*
* \param inode device inode.
* \param filp file pointer.
* \param cmd command.
* \param arg user argument, pointing to a drm_control structure.
* \return zero on success or a negative number on failure.
*
* Calls irq_install() or irq_uninstall() according to \p arg.
#if !__HAVE_IRQ
/* This stub DRM_IOCTL_CONTROL handler is for the drivers that used to require
* IRQs for DMA but no longer do. It maintains compatibility with the X Servers
* that try to use the control ioctl by simply returning success.
*/
int DRM(control)( struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg )
{
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_control_t ctl;
if ( copy_from_user( &ctl, (drm_control_t *)arg, sizeof(ctl) ) )
return -EFAULT;
switch ( ctl.func ) {
case DRM_INST_HANDLER:
return DRM(irq_install)( dev, ctl.irq );
case DRM_UNINST_HANDLER:
return DRM(irq_uninstall)( dev );
default:
return -EINVAL;
}
}
#if __HAVE_VBL_IRQ
/**
* Wait for VBLANK.
*
* \param inode device inode.
* \param filp file pointer.
* \param cmd command.
* \param data user argument, pointing to a drm_wait_vblank structure.
* \return zero on success or a negative number on failure.
*
* Verifies the IRQ is installed.
*
* If a signal is requested checks if this task has already scheduled the same signal
* for the same vblank sequence number - nothing to be done in
* that case. If the number of tasks waiting for the interrupt exceeds 100 the
* function fails. Otherwise adds a new entry to drm_device::vbl_sigs for this
* task.
*
* If a signal is not requested, then calls vblank_wait().
*/
int DRM(wait_vblank)( DRM_IOCTL_ARGS )
{
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_wait_vblank_t vblwait;
struct timeval now;
int ret = 0;
unsigned int flags;
if (!dev->irq)
return -EINVAL;
DRM_COPY_FROM_USER_IOCTL( vblwait, (drm_wait_vblank_t *)data,
sizeof(vblwait) );
switch ( vblwait.request.type & ~_DRM_VBLANK_FLAGS_MASK ) {
case _DRM_VBLANK_RELATIVE:
vblwait.request.sequence += atomic_read( &dev->vbl_received );
vblwait.request.type &= ~_DRM_VBLANK_RELATIVE;
case _DRM_VBLANK_ABSOLUTE:
break;
default:
return -EINVAL;
}
flags = vblwait.request.type & _DRM_VBLANK_FLAGS_MASK;
if ( flags & _DRM_VBLANK_SIGNAL ) {
unsigned long irqflags;
drm_vbl_sig_t *vbl_sig;
vblwait.reply.sequence = atomic_read( &dev->vbl_received );
spin_lock_irqsave( &dev->vbl_lock, irqflags );
/* Check if this task has already scheduled the same signal
* for the same vblank sequence number; nothing to be done in
* that case
*/
list_for_each_entry( vbl_sig, &dev->vbl_sigs.head, head ) {
if (vbl_sig->sequence == vblwait.request.sequence
&& vbl_sig->info.si_signo == vblwait.request.signal
&& vbl_sig->task == current)
{
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
goto done;
}
}
if ( dev->vbl_pending >= 100 ) {
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
return -EBUSY;
}
dev->vbl_pending++;
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
if ( !( vbl_sig = DRM_MALLOC( sizeof( drm_vbl_sig_t ) ) ) ) {
return -ENOMEM;
}
memset( (void *)vbl_sig, 0, sizeof(*vbl_sig) );
vbl_sig->sequence = vblwait.request.sequence;
vbl_sig->info.si_signo = vblwait.request.signal;
vbl_sig->task = current;
spin_lock_irqsave( &dev->vbl_lock, irqflags );
list_add_tail( (struct list_head *) vbl_sig, &dev->vbl_sigs.head );
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
} else {
ret = DRM(vblank_wait)( dev, &vblwait.request.sequence );
do_gettimeofday( &now );
vblwait.reply.tval_sec = now.tv_sec;
vblwait.reply.tval_usec = now.tv_usec;
}
done:
DRM_COPY_TO_USER_IOCTL( (drm_wait_vblank_t *)data, vblwait,
sizeof(vblwait) );
return ret;
}
/**
* Send the VBLANK signals.
*
* \param dev DRM device.
*
* Sends a signal for each task in drm_device::vbl_sigs and empties the list.
*
* If a signal is not requested, then calls vblank_wait().
*/
void DRM(vbl_send_signals)( drm_device_t *dev )
{
struct list_head *list, *tmp;
drm_vbl_sig_t *vbl_sig;
unsigned int vbl_seq = atomic_read( &dev->vbl_received );
unsigned long flags;
spin_lock_irqsave( &dev->vbl_lock, flags );
list_for_each_safe( list, tmp, &dev->vbl_sigs.head ) {
vbl_sig = list_entry( list, drm_vbl_sig_t, head );
if ( ( vbl_seq - vbl_sig->sequence ) <= (1<<23) ) {
vbl_sig->info.si_code = vbl_seq;
send_sig_info( vbl_sig->info.si_signo, &vbl_sig->info, vbl_sig->task );
list_del( list );
DRM_FREE( vbl_sig, sizeof(*vbl_sig) );
dev->vbl_pending--;
}
}
spin_unlock_irqrestore( &dev->vbl_lock, flags );
}
#endif /* __HAVE_VBL_IRQ */
#else
int DRM(control)( struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg )
{
......@@ -517,7 +225,6 @@ int DRM(control)( struct inode *inode, struct file *filp,
return -EINVAL;
}
}
#endif /* __HAVE_DMA_IRQ */
#endif
#endif /* __HAVE_DMA */
......@@ -72,6 +72,8 @@ int DRM(open_helper)(struct inode *inode, struct file *filp, drm_device_t *dev)
priv->authenticated = capable(CAP_SYS_ADMIN);
priv->lock_count = 0;
DRIVER_OPEN_HELPER( priv, dev );
down(&dev->struct_sem);
if (!dev->file_last) {
priv->next = NULL;
......
......@@ -35,69 +35,7 @@
#include "drmP.h"
/**
* Get interrupt from bus id.
*
* \param inode device inode.
* \param filp file pointer.
* \param cmd command.
* \param arg user argument, pointing to a drm_irq_busid structure.
* \return zero on success or a negative number on failure.
*
* Finds the PCI device with the specified bus id and gets its IRQ number.
*/
int DRM(irq_busid)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
{
drm_irq_busid_t p;
struct pci_dev *dev;
if (copy_from_user(&p, (drm_irq_busid_t *)arg, sizeof(p)))
return -EFAULT;
#ifdef __alpha__
{
int domain = p.busnum >> 8;
p.busnum &= 0xff;
/*
* Find the hose the device is on (the domain number is the
* hose index) and offset the bus by the root bus of that
* hose.
*/
for(dev = pci_find_device(PCI_ANY_ID,PCI_ANY_ID,NULL);
dev;
dev = pci_find_device(PCI_ANY_ID,PCI_ANY_ID,dev)) {
struct pci_controller *hose = dev->sysdata;
if (hose->index == domain) {
p.busnum += hose->bus->number;
break;
}
}
}
#endif
dev = pci_find_slot(p.busnum, PCI_DEVFN(p.devnum, p.funcnum));
if (!dev) {
DRM_ERROR("pci_find_slot failed for %d:%d:%d\n",
p.busnum, p.devnum, p.funcnum);
p.irq = 0;
goto out;
}
if (pci_enable_device(dev) != 0) {
DRM_ERROR("pci_enable_device failed for %d:%d:%d\n",
p.busnum, p.devnum, p.funcnum);
p.irq = 0;
goto out;
}
p.irq = dev->irq;
out:
DRM_DEBUG("%d:%d:%d => IRQ %d\n",
p.busnum, p.devnum, p.funcnum, p.irq);
if (copy_to_user((drm_irq_busid_t *)arg, &p, sizeof(p)))
return -EFAULT;
return 0;
}
#include "linux/pci.h"
/**
* Get the bus id.
......@@ -138,8 +76,10 @@ int DRM(getunique)(struct inode *inode, struct file *filp,
* \param arg user argument, pointing to a drm_unique structure.
* \return zero on success or a negative number on failure.
*
* Copies the bus id from userspace into drm_device::unique, and searches for
* the respective PCI device, updating drm_device::pdev.
* Copies the bus id from userspace into drm_device::unique, and verifies that
* it matches the device this DRM is attached to (EINVAL otherwise). Deprecated
* in interface version 1.1 and will return EBUSY when setversion has requested
* version 1.1 or greater.
*/
int DRM(setunique)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
......@@ -147,6 +87,7 @@ int DRM(setunique)(struct inode *inode, struct file *filp,
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_unique_t u;
int domain, bus, slot, func, ret;
if (dev->unique_len || dev->unique) return -EBUSY;
......@@ -164,55 +105,42 @@ int DRM(setunique)(struct inode *inode, struct file *filp,
dev->devname = DRM(alloc)(strlen(dev->name) + strlen(dev->unique) + 2,
DRM_MEM_DRIVER);
if(!dev->devname) {
DRM(free)(dev->devname, sizeof(*dev->devname), DRM_MEM_DRIVER);
if (!dev->devname)
return -ENOMEM;
}
sprintf(dev->devname, "%s@%s", dev->name, dev->unique);
do {
struct pci_dev *pci_dev;
int domain, b, d, f;
char *p;
/* Return error if the busid submitted doesn't match the device's actual
* busid.
*/
ret = sscanf(dev->unique, "PCI:%d:%d:%d", &bus, &slot, &func);
if (ret != 3)
return DRM_ERR(EINVAL);
domain = bus >> 8;
bus &= 0xff;
if ((domain != dev->pci_domain) ||
(bus != dev->pci_bus) ||
(slot != dev->pci_slot) ||
(func != dev->pci_func))
return -EINVAL;
for(p = dev->unique; p && *p && *p != ':'; p++);
if (!p || !*p) break;
b = (int)simple_strtoul(p+1, &p, 10);
if (*p != ':') break;
d = (int)simple_strtoul(p+1, &p, 10);
if (*p != ':') break;
f = (int)simple_strtoul(p+1, &p, 10);
if (*p) break;
return 0;
}
domain = b >> 8;
b &= 0xff;
static int
DRM(set_busid)(drm_device_t *dev)
{
if (dev->unique != NULL)
return EBUSY;
#ifdef __alpha__
/*
* Find the hose the device is on (the domain number is the
* hose index) and offset the bus by the root bus of that
* hose.
*/
for(pci_dev = pci_find_device(PCI_ANY_ID,PCI_ANY_ID,NULL);
pci_dev;
pci_dev = pci_find_device(PCI_ANY_ID,PCI_ANY_ID,pci_dev)) {
struct pci_controller *hose = pci_dev->sysdata;
dev->unique_len = 20;
dev->unique = DRM(alloc)(dev->unique_len + 1, DRM_MEM_DRIVER);
if (dev->unique == NULL)
return ENOMEM;
if (hose->index == domain) {
b += hose->bus->number;
break;
}
}
#endif
pci_dev = pci_find_slot(b, PCI_DEVFN(d,f));
if (pci_dev) {
dev->pdev = pci_dev;
#ifdef __alpha__
dev->hose = pci_dev->sysdata;
#endif
}
} while(0);
snprintf(dev->unique, dev->unique_len, "pci:%04x:%02x:%02x.%d",
dev->pci_domain, dev->pci_bus, dev->pci_slot, dev->pci_func);
return 0;
}
......@@ -363,3 +291,47 @@ int DRM(getstats)( struct inode *inode, struct file *filp,
return -EFAULT;
return 0;
}
#define DRM_IF_MAJOR 1
#define DRM_IF_MINOR 2
int DRM(setversion)(DRM_IOCTL_ARGS)
{
DRM_DEVICE;
drm_set_version_t sv;
drm_set_version_t retv;
int if_version;
DRM_COPY_FROM_USER_IOCTL(sv, (drm_set_version_t *)data, sizeof(sv));
retv.drm_di_major = DRM_IF_MAJOR;
retv.drm_di_minor = DRM_IF_MINOR;
retv.drm_dd_major = DRIVER_MAJOR;
retv.drm_dd_minor = DRIVER_MINOR;
DRM_COPY_TO_USER_IOCTL((drm_set_version_t *)data, retv, sizeof(sv));
if (sv.drm_di_major != -1) {
if (sv.drm_di_major != DRM_IF_MAJOR ||
sv.drm_di_minor < 0 || sv.drm_di_minor > DRM_IF_MINOR)
return EINVAL;
if_version = DRM_IF_VERSION(sv.drm_di_major, sv.drm_dd_minor);
dev->if_version = DRM_MAX(if_version, dev->if_version);
if (sv.drm_di_minor >= 1) {
/*
* Version 1.1 includes tying of DRM to a specific device
*/
DRM(set_busid)(dev);
}
}
if (sv.drm_dd_major != -1) {
if (sv.drm_dd_major != DRIVER_MAJOR ||
sv.drm_dd_minor < 0 || sv.drm_dd_minor > DRIVER_MINOR)
return EINVAL;
#ifdef DRIVER_SETVERSION
DRIVER_SETVERSION(dev, &sv);
#endif
}
return 0;
}
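Purely for illustration, and not part of this commit: a minimal userspace sketch of negotiating the new interface version through DRM_IOCTL_SET_VERSION. The device path, the header include and the error handling are assumptions.

/* Illustrative sketch only. Assumes the drm_set_version_t layout and the
 * DRM_IOCTL_SET_VERSION number introduced in this patch are visible via a
 * userspace copy of drm.h; /dev/dri/card0 is an assumed device path. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include "drm.h"

int main(void)
{
	drm_set_version_t sv;
	int fd = open("/dev/dri/card0", O_RDWR);

	if (fd < 0)
		return 1;

	sv.drm_di_major = 1;	/* request DRM interface 1.1 ... */
	sv.drm_di_minor = 1;	/* ... which ties the node to its PCI device */
	sv.drm_dd_major = -1;	/* -1 means "don't care" for the driver version */
	sv.drm_dd_minor = -1;

	if (ioctl(fd, DRM_IOCTL_SET_VERSION, &sv) == 0)
		printf("interface %d.%d, driver %d.%d\n",
		       sv.drm_di_major, sv.drm_di_minor,
		       sv.drm_dd_major, sv.drm_dd_minor);
	return 0;
}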
/**
* \file drm_irq.h
* IRQ support
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Created: Fri Mar 19 14:30:16 1999 by faith@valinux.com
*
* Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include "drmP.h"
#include <linux/interrupt.h> /* For task queue support */
#ifndef __HAVE_SHARED_IRQ
#define __HAVE_SHARED_IRQ 0
#endif
#if __HAVE_SHARED_IRQ
#define DRM_IRQ_TYPE SA_SHIRQ
#else
#define DRM_IRQ_TYPE 0
#endif
/**
* Get interrupt from bus id.
*
* \param inode device inode.
* \param filp file pointer.
* \param cmd command.
* \param arg user argument, pointing to a drm_irq_busid structure.
* \return zero on success or a negative number on failure.
*
* Finds the PCI device with the specified bus id and gets its IRQ number.
* This IOCTL is deprecated, and will now return EINVAL for any busid not equal
* to that of the device that this DRM instance attached to.
*/
int DRM(irq_by_busid)(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
{
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_irq_busid_t p;
if (copy_from_user(&p, (drm_irq_busid_t *)arg, sizeof(p)))
return -EFAULT;
if ((p.busnum >> 8) != dev->pci_domain ||
(p.busnum & 0xff) != dev->pci_bus ||
p.devnum != dev->pci_slot ||
p.funcnum != dev->pci_func)
return -EINVAL;
p.irq = dev->irq;
DRM_DEBUG("%d:%d:%d => IRQ %d\n",
p.busnum, p.devnum, p.funcnum, p.irq);
if (copy_to_user((drm_irq_busid_t *)arg, &p, sizeof(p)))
return -EFAULT;
return 0;
}
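For context, not part of this diff: a legacy client obtains the IRQ roughly as in the sketch below before handing it to the control ioctl. The open file descriptor and the PCI location values are assumptions; with this patch, any bus id other than that of the attached device now fails with EINVAL.

/* Illustrative sketch only: legacy IRQ lookup by bus id. The busnum,
 * devnum and funcnum values are assumptions and must now match the
 * device this DRM node is attached to (domain in the high byte). */
drm_irq_busid_t bid;

bid.busnum  = (0 << 8) | 1;	/* domain 0, bus 1: assumed location */
bid.devnum  = 0;
bid.funcnum = 0;
if (ioctl(fd, DRM_IOCTL_IRQ_BUSID, &bid) == 0)
	printf("IRQ %d\n", bid.irq);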
#if __HAVE_IRQ
/**
* Install IRQ handler.
*
* \param dev DRM device.
* \param irq IRQ number.
*
* Initializes the IRQ-related data and sets up drm_device::vbl_queue. Installs the handler, calling the driver
* \c DRM(driver_irq_preinstall)() and \c DRM(driver_irq_postinstall)() functions
* before and after the installation.
*/
int DRM(irq_install)( drm_device_t *dev )
{
int ret;
if ( dev->irq == 0 )
return -EINVAL;
down( &dev->struct_sem );
/* Driver must have been initialized */
if ( !dev->dev_private ) {
up( &dev->struct_sem );
return -EINVAL;
}
if ( dev->irq_enabled ) {
up( &dev->struct_sem );
return -EBUSY;
}
dev->irq_enabled = 1;
up( &dev->struct_sem );
DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
#if __HAVE_DMA
dev->dma->next_buffer = NULL;
dev->dma->next_queue = NULL;
dev->dma->this_buffer = NULL;
#endif
#if __HAVE_IRQ_BH
INIT_WORK(&dev->work, DRM(irq_immediate_bh), dev);
#endif
#if __HAVE_VBL_IRQ
init_waitqueue_head(&dev->vbl_queue);
spin_lock_init( &dev->vbl_lock );
INIT_LIST_HEAD( &dev->vbl_sigs.head );
dev->vbl_pending = 0;
#endif
/* Before installing handler */
DRM(driver_irq_preinstall)(dev);
/* Install handler */
ret = request_irq( dev->irq, DRM(irq_handler),
DRM_IRQ_TYPE, dev->devname, dev );
if ( ret < 0 ) {
down( &dev->struct_sem );
dev->irq_enabled = 0;
up( &dev->struct_sem );
return ret;
}
/* After installing handler */
DRM(driver_irq_postinstall)(dev);
return 0;
}
/**
* Uninstall the IRQ handler.
*
* \param dev DRM device.
*
* Calls the driver's \c DRM(driver_irq_uninstall)() function, and stops the irq.
*/
int DRM(irq_uninstall)( drm_device_t *dev )
{
int irq_enabled;
down( &dev->struct_sem );
irq_enabled = dev->irq_enabled;
dev->irq_enabled = 0;
up( &dev->struct_sem );
if ( !irq_enabled )
return -EINVAL;
DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
DRM(driver_irq_uninstall)( dev );
free_irq( dev->irq, dev );
return 0;
}
/**
* IRQ control ioctl.
*
* \param inode device inode.
* \param filp file pointer.
* \param cmd command.
* \param arg user argument, pointing to a drm_control structure.
* \return zero on success or a negative number on failure.
*
* Calls irq_install() or irq_uninstall() according to \p arg.
*/
int DRM(control)( struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg )
{
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_control_t ctl;
if ( copy_from_user( &ctl, (drm_control_t *)arg, sizeof(ctl) ) )
return -EFAULT;
switch ( ctl.func ) {
case DRM_INST_HANDLER:
if (dev->if_version < DRM_IF_VERSION(1, 2) &&
ctl.irq != dev->irq)
return -EINVAL;
return DRM(irq_install)( dev );
case DRM_UNINST_HANDLER:
return DRM(irq_uninstall)( dev );
default:
return -EINVAL;
}
}
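Again only as an illustrative sketch, not part of this diff: driving the control ioctl from userspace under the new scheme. The open DRM fd is assumed; a client that has not negotiated interface 1.2 via setversion must still pass the IRQ reported by DRM_IOCTL_IRQ_BUSID in ctl.irq.

/* Illustrative sketch only. ctl.irq is ignored once the negotiated
 * interface version is >= 1.2; older clients must pass the device IRQ. */
drm_control_t ctl;

ctl.func = DRM_INST_HANDLER;
ctl.irq  = 0;			/* kernel uses the PCI device's IRQ */
if (ioctl(fd, DRM_IOCTL_CONTROL, &ctl) != 0)
	perror("DRM_IOCTL_CONTROL");

/* ... rendering ... */

ctl.func = DRM_UNINST_HANDLER;
ioctl(fd, DRM_IOCTL_CONTROL, &ctl);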
#if __HAVE_VBL_IRQ
/**
* Wait for VBLANK.
*
* \param inode device inode.
* \param filp file pointer.
* \param cmd command.
* \param data user argument, pointing to a drm_wait_vblank structure.
* \return zero on success or a negative number on failure.
*
* Verifies the IRQ is installed.
*
* If a signal is requested checks if this task has already scheduled the same signal
* for the same vblank sequence number - nothing to be done in
* that case. If the number of tasks waiting for the interrupt exceeds 100 the
* function fails. Otherwise adds a new entry to drm_device::vbl_sigs for this
* task.
*
* If a signal is not requested, then calls vblank_wait().
*/
int DRM(wait_vblank)( DRM_IOCTL_ARGS )
{
drm_file_t *priv = filp->private_data;
drm_device_t *dev = priv->dev;
drm_wait_vblank_t vblwait;
struct timeval now;
int ret = 0;
unsigned int flags;
if (!dev->irq)
return -EINVAL;
DRM_COPY_FROM_USER_IOCTL( vblwait, (drm_wait_vblank_t *)data,
sizeof(vblwait) );
switch ( vblwait.request.type & ~_DRM_VBLANK_FLAGS_MASK ) {
case _DRM_VBLANK_RELATIVE:
vblwait.request.sequence += atomic_read( &dev->vbl_received );
vblwait.request.type &= ~_DRM_VBLANK_RELATIVE;
case _DRM_VBLANK_ABSOLUTE:
break;
default:
return -EINVAL;
}
flags = vblwait.request.type & _DRM_VBLANK_FLAGS_MASK;
if ( flags & _DRM_VBLANK_SIGNAL ) {
unsigned long irqflags;
drm_vbl_sig_t *vbl_sig;
vblwait.reply.sequence = atomic_read( &dev->vbl_received );
spin_lock_irqsave( &dev->vbl_lock, irqflags );
/* Check if this task has already scheduled the same signal
* for the same vblank sequence number; nothing to be done in
* that case
*/
list_for_each_entry( vbl_sig, &dev->vbl_sigs.head, head ) {
if (vbl_sig->sequence == vblwait.request.sequence
&& vbl_sig->info.si_signo == vblwait.request.signal
&& vbl_sig->task == current)
{
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
goto done;
}
}
if ( dev->vbl_pending >= 100 ) {
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
return -EBUSY;
}
dev->vbl_pending++;
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
if ( !( vbl_sig = DRM_MALLOC( sizeof( drm_vbl_sig_t ) ) ) ) {
return -ENOMEM;
}
memset( (void *)vbl_sig, 0, sizeof(*vbl_sig) );
vbl_sig->sequence = vblwait.request.sequence;
vbl_sig->info.si_signo = vblwait.request.signal;
vbl_sig->task = current;
spin_lock_irqsave( &dev->vbl_lock, irqflags );
list_add_tail( (struct list_head *) vbl_sig, &dev->vbl_sigs.head );
spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
} else {
ret = DRM(vblank_wait)( dev, &vblwait.request.sequence );
do_gettimeofday( &now );
vblwait.reply.tval_sec = now.tv_sec;
vblwait.reply.tval_usec = now.tv_usec;
}
done:
DRM_COPY_TO_USER_IOCTL( (drm_wait_vblank_t *)data, vblwait,
sizeof(vblwait) );
return ret;
}
/**
* Send the VBLANK signals.
*
* \param dev DRM device.
*
* Sends a signal for each task in drm_device::vbl_sigs and empties the list.
*
* If a signal is not requested, then calls vblank_wait().
*/
void DRM(vbl_send_signals)( drm_device_t *dev )
{
struct list_head *list, *tmp;
drm_vbl_sig_t *vbl_sig;
unsigned int vbl_seq = atomic_read( &dev->vbl_received );
unsigned long flags;
spin_lock_irqsave( &dev->vbl_lock, flags );
list_for_each_safe( list, tmp, &dev->vbl_sigs.head ) {
vbl_sig = list_entry( list, drm_vbl_sig_t, head );
if ( ( vbl_seq - vbl_sig->sequence ) <= (1<<23) ) {
vbl_sig->info.si_code = vbl_seq;
send_sig_info( vbl_sig->info.si_signo, &vbl_sig->info, vbl_sig->task );
list_del( list );
DRM_FREE( vbl_sig, sizeof(*vbl_sig) );
dev->vbl_pending--;
}
}
spin_unlock_irqrestore( &dev->vbl_lock, flags );
}
#endif /* __HAVE_VBL_IRQ */
#endif /* __HAVE_IRQ */
......@@ -67,6 +67,7 @@ static drm_mem_stats_t DRM(mem_stats)[] = {
[DRM_MEM_TOTALAGP] = { "totalagp" },
[DRM_MEM_BOUNDAGP] = { "boundagp" },
[DRM_MEM_CTXBITMAP] = { "ctxbitmap"},
[DRM_MEM_CTXLIST] = { "ctxlist" },
[DRM_MEM_STUB] = { "stub" },
{ NULL, 0, } /* Last entry must be null */
};
......
......@@ -62,8 +62,12 @@
verify_area( VERIFY_READ, uaddr, size )
#define DRM_COPY_FROM_USER_UNCHECKED(arg1, arg2, arg3) \
__copy_from_user(arg1, arg2, arg3)
#define DRM_COPY_TO_USER_UNCHECKED(arg1, arg2, arg3) \
__copy_to_user(arg1, arg2, arg3)
#define DRM_GET_USER_UNCHECKED(val, uaddr) \
__get_user(val, uaddr)
#define DRM_PUT_USER_UNCHECKED(uaddr, val) \
__put_user(val, uaddr)
/** 'malloc' without the overhead of DRM(alloc)() */
......@@ -71,6 +75,8 @@
/** 'free' without the overhead of DRM(free)() */
#define DRM_FREE(x,size) kfree(x)
#define DRM_GET_PRIV_WITH_RETURN(_priv, _filp) _priv = _filp->private_data
/**
* Get the pointer to the SAREA.
*
......
/*
This file is auto-generated from the drm_pciids.txt in the DRM CVS
Please contact dri-devel@lists.sf.net to add new cards to this list
*/
#define radeon_PCI_IDS \
{0x1002, 0x4136, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4137, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4237, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4336, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4337, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4437, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4965, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4966, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4967, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C58, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C59, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C5A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C65, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5148, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5149, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x514A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x514B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x514C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x514D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x514E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x514F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5158, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5159, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x515A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5168, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x516A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x516B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x516C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5834, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5835, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5836, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5837, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5960, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5961, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5962, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5963, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5968, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5c64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define r128_PCI_IDS \
{0x1002, 0x4c45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4d46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4d4c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5041, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5042, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5043, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5044, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5045, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5046, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5047, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5048, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5049, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x504A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x504B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x504C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x504D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x504E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x504F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5052, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5053, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5054, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5245, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5246, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5247, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x524b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x524c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x534d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5446, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x544C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x5452, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define mga_PCI_IDS \
{0x102b, 0x0521, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x102b, 0x0525, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x102b, 0x2527, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define mach64_PCI_IDS \
{0x1002, 0x4749, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4750, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4751, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4742, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4744, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c49, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c50, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c51, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c42, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x474c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x474f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4752, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4753, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x474d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x474e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c52, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c53, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c4d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1002, 0x4c4e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define sisdrv_PCI_IDS \
{0x1039, 0x0300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1039, 0x5300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1039, 0x6300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1039, 0x7300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define tdfx_PCI_IDS \
{0x121a, 0x0003, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x121a, 0x0004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x121a, 0x0005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x121a, 0x0007, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x121a, 0x0009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x121a, 0x000b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define viadrv_PCI_IDS \
{0x1106, 0x3022, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1106, 0x3122, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1106, 0x7205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x1106, 0x7204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define i810_PCI_IDS \
{0x8086, 0x7121, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x7123, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x7125, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x1132, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define i830_PCI_IDS \
{0x8086, 0x3577, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x2562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x3582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x2572, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define gamma_PCI_IDS \
{0x3d3d, 0x0008, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define savage_PCI_IDS \
{0x5333, 0x8a22, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8a23, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c10, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c11, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c12, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c13, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c20, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c22, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c24, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c26, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c2a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c2b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c2c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c2d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c2e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8c2f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8a25, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8a26, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8d01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8d02, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x5333, 0x8d04, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
#define ffb_PCI_IDS \
{0, 0, 0}
......@@ -32,9 +32,23 @@
#ifndef _DRM_SAREA_H_
#define _DRM_SAREA_H_
#include "drm.h"
/* SAREA area needs to be at least a page */
#if defined(__alpha__)
#define SAREA_MAX 0x2000
#elif defined(__ia64__)
#define SAREA_MAX 0x10000 /* 64kB */
#else
/* Intel 830M driver needs at least 8k SAREA */
#define SAREA_MAX 0x2000
#endif
/** Maximum number of drawables in the SAREA */
#define SAREA_MAX_DRAWABLES 256
#define SAREA_DRAWABLE_CLAIMED_ENTRY 0x80000000
/** SAREA drawable */
typedef struct drm_sarea_drawable {
unsigned int stamp;
......
......@@ -13,3 +13,4 @@
#define __HAVE_KERNEL_CTX_SWITCH 1
#define __HAVE_RELEASE 1
#endif
......@@ -104,8 +104,8 @@
return 0; \
} while (0)
#define __HAVE_DMA_IRQ 1
#define __HAVE_DMA_IRQ_BH 1
#define __HAVE_IRQ 1
#define __HAVE_IRQ_BH 1
#define DRIVER_AGP_BUFFERS_MAP( dev ) \
((drm_gamma_private_t *)((dev)->dev_private))->buffers
......
......@@ -116,7 +116,7 @@ static inline int gamma_dma_is_ready(drm_device_t *dev)
return (!GAMMA_READ(GAMMA_DMACOUNT));
}
irqreturn_t gamma_dma_service( DRM_IRQ_ARGS )
irqreturn_t gamma_irq_handler( DRM_IRQ_ARGS )
{
drm_device_t *dev = (drm_device_t *)arg;
drm_device_dma_t *dma = dev->dma;
......@@ -262,7 +262,7 @@ static void gamma_dma_timer_bh(unsigned long dev)
gamma_dma_schedule((drm_device_t *)dev, 0);
}
void gamma_dma_immediate_bh(void *dev)
void gamma_irq_immediate_bh(void *dev)
{
gamma_dma_schedule(dev, 0);
}
......@@ -656,12 +656,12 @@ int gamma_do_cleanup_dma( drm_device_t *dev )
{
DRM_DEBUG( "%s\n", __FUNCTION__ );
#if _HAVE_DMA_IRQ
#if __HAVE_IRQ
/* Make sure interrupts are disabled here because the uninstall ioctl
* may not have been called from userspace and after dev_private
* is freed, it's too late.
*/
if ( dev->irq ) DRM(irq_uninstall)(dev);
if ( dev->irq_enabled ) DRM(irq_uninstall)(dev);
#endif
if ( dev->dev_private ) {
......
......@@ -48,6 +48,7 @@
#include "drm_fops.h"
#include "drm_init.h"
#include "drm_ioctl.h"
#include "drm_irq.h"
#include "gamma_lists.h" /* NOTE */
#include "drm_lock.h"
#include "gamma_lock.h" /* NOTE */
......
......@@ -78,7 +78,6 @@
[DRM_IOCTL_NR(DRM_IOCTL_I810_RSTATUS)] = { i810_rstatus, 1, 0 }, \
[DRM_IOCTL_NR(DRM_IOCTL_I810_FLIP)] = { i810_flip_bufs, 1, 0 }
#define __HAVE_COUNTERS 4
#define __HAVE_COUNTER6 _DRM_STAT_IRQ
#define __HAVE_COUNTER7 _DRM_STAT_PRIMARY
......@@ -112,7 +111,7 @@
* a noop stub is generated for compatibility.
*/
/* XXX: Add vblank support? */
#define __HAVE_DMA_IRQ 0
#define __HAVE_IRQ 0
/* Buffer customization:
*/
......
......@@ -232,12 +232,12 @@ int i810_dma_cleanup(drm_device_t *dev)
{
drm_device_dma_t *dma = dev->dma;
#if _HAVE_DMA_IRQ
#if __HAVE_IRQ
/* Make sure interrupts are disabled here because the uninstall ioctl
* may not have been called from userspace and after dev_private
* is freed, it's too late.
*/
if (dev->irq) DRM(irq_uninstall)(dev);
if ( dev->irq_enabled ) DRM(irq_uninstall)(dev);
#endif
if (dev->dev_private) {
......
......@@ -115,10 +115,10 @@
#define USE_IRQS 0
#if USE_IRQS
#define __HAVE_DMA_IRQ 1
#define __HAVE_IRQ 1
#define __HAVE_SHARED_IRQ 1
#else
#define __HAVE_DMA_IRQ 0
#define __HAVE_IRQ 0
#endif
......
......@@ -50,6 +50,7 @@
#include "drm_fops.h"
#include "drm_init.h"
#include "drm_ioctl.h"
#include "drm_irq.h"
#include "drm_lock.h"
#include "drm_memory.h"
#include "drm_proc.h"
......