- 07 Apr, 2008 6 commits
-
-
Darrick J. Wong authored
Provide a facility to use the request_firmware() interface to get a SAS address from userspace. This can be used by SAS LLDDs that cannot obtain the address from the host adapter.

Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
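For orientation, a minimal sketch of how an LLDD could pull a SAS address from userspace via request_firmware(); the helper name and the "sas_addr" file name are invented for the example, not the exact interface this commit adds.

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/string.h>

/* Hypothetical helper: ask userspace (through the firmware loader) for an
 * 8-byte SAS address and copy it into 'addr'. */
static int example_request_sas_addr(struct device *dev, u8 addr[8])
{
	const struct firmware *fw;
	int err;

	err = request_firmware(&fw, "sas_addr", dev);
	if (err)
		return err;

	if (fw->size < 8) {
		release_firmware(fw);
		return -EINVAL;
	}

	memcpy(addr, fw->data, 8);
	release_firmware(fw);
	return 0;
}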
-
FUJITA Tomonori authored
I overlooked ips_scmd_buf_write and ips_scmd_buf_read when I converted ips to use the data buffer accessors. ips is unlikely to use sg chaining (especially in this path) since a) this path is used only for non I/O commands (with little data transfer), b) ips's sg_tablesize is set to just 17.

Thanks to Tim Pepper for testing this patch.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Mark Salyzyn <Mark_Salyzyn@adaptec.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
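For context, a hedged sketch (not the ips code itself) of what "using the data buffer accessors" looks like: walking the command's scatterlist with scsi_for_each_sg() instead of touching a flat request buffer; the function name is illustrative and highmem mapping concerns are ignored.

#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/string.h>
#include <scsi/scsi_cmnd.h>

/* Copy up to 'len' bytes of a command's data buffer into 'dst' by walking
 * its scatterlist. */
static void example_copy_from_scmd(struct scsi_cmnd *cmd, char *dst, size_t len)
{
	struct scatterlist *sg;
	size_t copied = 0;
	int i;

	scsi_for_each_sg(cmd, sg, scsi_sg_count(cmd), i) {
		size_t n = min_t(size_t, len - copied, sg->length);

		memcpy(dst + copied, sg_virt(sg), n);
		copied += n;
		if (copied >= len)
			break;
	}
}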
-
Jeff Garzik authored
- remove PCI device sort, which greatly simplifies PCI probe, permitting direct, per-HBA function calls rather than an indirect route to the same end result.
- remove need for pcistr[]

Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
Jeff Garzik authored
- Reduce uses of gdth_pci_str::pdev, preferring a local variable (or function arg) 'pdev' instead.
- Reduce uses of gdth_pcistr array, preferring local variable (or function arg) 'pcistr' instead.
- Eliminate lone use of gdth_pci_str::irq, using equivalent pdev->irq instead.
- Eliminate assign-only gdth_pci_str::io_mm.

Note: If the indentation seems weird, that's because a line was converted from spaces to tabs when it was modified.

Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6:
  fix endian lossage in forcedeth
  net/tokenring/olympic.c section fixes
  net: marvell.c fix sparse shadowed variable warning
  [VLAN]: Fix egress priority mappings leak.
  [TG3]: Add PHY workaround for 5784
  [NET]: srandom32 fixes for networking v2
  [IPV6]: Fix refcounting for anycast dst entries.
  [IPV6]: inet6_dev on loopback should be kept until namespace stop.
  [IPV6]: Event type in addrconf_ifdown is mis-used.
  [ICMP]: Ensure that ICMP relookup maintains status quo
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  [SPARC64]: Fix user accesses in regset code.
  [SPARC64]: Fix FPU saving in 64-bit signal handling.
-
- 06 Apr, 2008 12 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/v4l-dvb
Linus Torvalds authored
* 'pci_id_updates' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/v4l-dvb:
  V4L/DVB (7497): pvrusb2: add new usb pid for 73xxx models
  V4L/DVB (7496): pvrusb2: add new usb pid for 75xxx models
-
git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/v4l-dvb
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/v4l-dvb:
  V4L/DVB (7499): v4l/dvb Kconfig: Fix bugzilla #10067
  V4L/DVB (7495): s5h1409: fix blown-away bit in function s5h1409_set_gpio
  V4L/DVB (7460): bttv: Bt832 - fix possible NULL pointer deref
-
git://git.kernel.org/pub/scm/linux/kernel/git/wim/linux-2.6-watchdog
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/wim/linux-2.6-watchdog:
  [WATCHDOG] it8712f_wdt Zero MSB timeout byte when disabling watchdog
-
Rusty Russell authored
We handle a broken TSC these days, so there is no need to panic. We clear the TSC bit when tsc_init decides it's unreliable (e.g. under lguest w/ a bad host TSC), which then led to a bogus panic.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jesse Barnes authored
Now that we're mapping registers in the DRM driver at load time, the driver actually checks the PCI ID, so we need to make sure the macros have all the right bits (and longer term use the DRM headers as the sole copy of the PCI & register definitions).

This patch adds 945GME support to the DRM headers, fixing a regression reported in http://bugzilla.kernel.org/show_bug.cgi?id=10395.

Tested-by: Alexander Oltu <alexander@all-2.com>
Signed-off-by: Jesse Barnes <jesse.barnes@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Hugh Dickins authored
Since 2.6.25-rc7, I've been seeing an occasional livelock on one x86_64 machine, copying kernel trees to tmpfs, paging out to swap. Signature: 6000 pages under writeback but never getting written; most tasks of interest trying to reclaim, but each get_swap_bio waiting for a bio in mempool_alloc's io_schedule_timeout(5*HZ); every five seconds an atomic page allocation failure report from kblockd failing to allocate a sense_buffer in __scsi_get_command.

__scsi_get_command has a (one item) free_list to protect against this, but rc1's [SCSI] use dynamically allocated sense buffer de25deb1 upset that slightly. When it fails to allocate from the separate sense_slab, instead of giving up, it must fall back to the command free_list, which is sure to have a sense_buffer attached.

Either my earlier -rc testing missed this, or there's some recent contributory factor. One very significant factor is SLUB, which merges slab caches when it can, and on 64-bit happens to merge both bio cache and sense_slab cache into kmalloc's 128-byte cache: so that under this swapping load, bios above are liable to gobble up all the slots needed for scsi_cmnd sense_buffers below.

That's disturbing behaviour, and I tried a few things to fix it. Adding a no-op constructor to the sense_slab inhibits SLUB from merging it, and stops all the allocation failures I was seeing; but it's rather a hack, and perhaps in different configurations we have other caches on the swapout path which are ill-merged.

Another alternative is to revert the separate sense_slab, using cache-line-aligned sense_buffer allocated beyond scsi_cmnd from the one kmem_cache; but that might waste more memory, and is only a way of diverting around the known problem.

While I don't like seeing the allocation failures, and hate the idea of all those bios piled up above a scsi host working one by one, it does seem to emerge fairly soon with the livelock fix. So lacking better ideas, stick with that one clear fix for now.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
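To make the idea concrete, a hedged sketch of the shape of the change in __scsi_get_command(): on allocation failure, fall back to the host's free_list rather than returning NULL. The allocation helper is a stand-in, and field names only approximate the 2.6.25-era code.

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Stand-in for allocating a scsi_cmnd plus its sense buffer from the
 * separate slabs; may fail under memory pressure. */
static struct scsi_cmnd *example_alloc_cmd_and_sense(struct Scsi_Host *shost,
						     gfp_t gfp_mask);

static struct scsi_cmnd *example_get_command(struct Scsi_Host *shost,
					     gfp_t gfp_mask)
{
	struct scsi_cmnd *cmd = example_alloc_cmd_and_sense(shost, gfp_mask);

	if (unlikely(!cmd)) {
		unsigned long flags;

		/* Fall back to the reserved entry, which is guaranteed to
		 * have a sense_buffer attached, instead of giving up. */
		spin_lock_irqsave(&shost->free_list_lock, flags);
		if (!list_empty(&shost->free_list)) {
			cmd = list_entry(shost->free_list.next,
					 struct scsi_cmnd, list);
			list_del_init(&cmd->list);
		}
		spin_unlock_irqrestore(&shost->free_list_lock, flags);
	}
	return cmd;
}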
-
Michael Krufky authored
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Michael Krufky authored
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Mauro Carvalho Chehab authored
tda8290 breaks if tuner is selected, but CONFIG_DVB=n.

Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Michael Krufky authored
Preserve all other bits when setting gpio.

Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Steven Toth <stoth@hauppauge.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
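As a generic illustration of "preserve all other bits" (not the s5h1409 register interface itself), the usual read-modify-write pattern is:

#include <linux/io.h>

/* Update only the bits selected by 'mask'; everything else in the register
 * keeps its current value (register layout is hypothetical). */
static void example_set_gpio_bits(void __iomem *reg, u32 mask, u32 value)
{
	u32 cur = readl(reg);	/* read the current contents */

	cur &= ~mask;		/* clear only the GPIO field */
	cur |= value & mask;	/* set the requested bits */
	writel(cur, reg);
}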
-
Cyrill Gorcunov authored
This patch fixes a potential NULL pointer dereference.

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Andrew Paprocki authored
I noticed this while testing the latest code. I'm not sure if it is required, but the normal (or LSB) timeout value is set to zero, so the MSB should be as well to stay consistent.

If the chip revision is >= 8, set MSB of the 16-bit timeout value to zero when disabling the watchdog in it8712f_wdt_disable().

Signed-off-by: Andrew Paprocki <andrew@ishiboo.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 05 Apr, 2008 1 commit
-
-
Linus Torvalds authored
This reverts commit 7c0ea45b which caused a regression with the backlight being set to off when a laptop doesn't have a _BQC entry to query the actual backlight value. The code blindly then falls back on a value of 0.

See
  http://bugzilla.kernel.org/show_bug.cgi?id=10387
  http://lkml.org/lkml/2008/4/2/366
for details.

Bisected-and-reported-by: Andrey Borzenkov <arvidjaar@mail.ru>
Cc: Zhao Yakui <yakui.zhao@intel.com>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 04 Apr, 2008 21 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/ralf/upstream-linus
Linus Torvalds authored
* 'upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/ralf/upstream-linus:
  [MIPS] Make KGDB compile on UP
  [MIPS] Pb1200: Fix header breakage
-
David S. Miller authored
-
Carol Hebert authored
In 2.6.14 a patch was merged which switched the order of the ipmi device naming from in-order-of-discovery over to reverse-order-of-discovery.

So on systems with multiple BMC interfaces, the ipmi device names are being created in reverse order relative to how they are discovered on the system (e.g. on an IBM x3950 multinode server with N nodes, the device name for the BMC in the first node is /dev/ipmiN-1 and the device name for the BMC in the last node is /dev/ipmi0, etc.).

The problem is caused by the list handling routines chosen in dmi_scan.c. Using list_add() causes the multiple ipmi devices to be added to the device list using a stack-paradigm and so the ipmi driver subsequently pulls them off during initialization in LIFO order. This patch changes the dmi_save_ipmi_device() list handling paradigm to a queue, thereby allowing the ipmi driver to build the ipmi device names in the order in which they are found on the system.

Signed-off-by: Carol Hebert <cah@us.ibm.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
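A toy sketch of the list-handling difference (structure and list names are invented): list_add() pushes new entries at the head, so a later walk sees them in reverse discovery order, while list_add_tail() queues them and preserves discovery order.

#include <linux/list.h>

struct example_ipmi_dev {
	struct list_head list;
	int instance;
};

static LIST_HEAD(example_ipmi_devices);

static void example_save_ipmi_device(struct example_ipmi_dev *dev)
{
	/* Previously: list_add(&dev->list, &example_ipmi_devices);
	 * which builds the list LIFO (last discovered ends up first). */
	list_add_tail(&dev->list, &example_ipmi_devices);	/* FIFO */
}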
-
Alexey Korolev authored
The CFI driver in the 2.6.24 kernel is broken. Not so intensive read/write operations cause incomplete writes which lead to kernel panics in JFFS2.

We investigated the issue - it is caused by a bug in the FL_SHUTDOWN parsing code. Sometimes the chip returns -EIO, as if it were in the FL_SHUTDOWN state, when it should wait in FL_POINT (error in the order of conditions).

The following patch fixes the bug in the state parsing code of CFI. Also I've added comments to notify developers if they want to add a new case in future.

Signed-off-by: Alexey Korolev <akorolev@infradead.org>
Reviewed-by: Joern Engel <joern@logfs.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Balbir Singh authored
A boot option for the memory controller was discussed on lkml. It is a good idea to add it, since it saves memory for people who want to turn off the memory controller.

By default the option is on for the following two reasons:

1. It provides compatibility with the current scheme where the memory controller turns on if the config option is enabled.
2. It allows for wider testing of the memory controller, once the config option is enabled.

We still allow the create, destroy callbacks to succeed, since they are not aware of boot options. We do not populate the directory with memory resource controller specific files.

Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Sudhir Kumar <skumar@linux.vnet.ibm.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Paul Menage authored
The effects of cgroup_disable=foo are:

- foo isn't auto-mounted if you mount all cgroups in a single hierarchy
- foo isn't visible as an individually mountable subsystem

As a result there will only ever be one call to foo->create(), at init time; all processes will stay in this group, and the group will never be mounted on a visible hierarchy. Any additional effects (e.g. not allocating metadata) are up to the foo subsystem.

This doesn't handle early_init subsystems (their "disabled" bit isn't set), but it could easily be extended to do so if any of the early_init systems wanted it - I think it would just involve some nastier parameter processing since it would occur before the command-line argument parser had been run.

Hugh said:

  Ballpark figures, I'm trying to get this question out rather than processing the exact numbers: CONFIG_CGROUP_MEM_RES_CTLR adds 15% overhead to the affected paths, booting with cgroup_disable=memory cuts that back to 1% overhead (due to slightly bigger struct page).

  I'm no expert on distros, they may have no interest whatever in CONFIG_CGROUP_MEM_RES_CTLR=y; and the rest of us can easily build with or without it, or apply the cgroup_disable=memory patches.

Unix bench's execl test result on x86_64 was

  == just after boot without mounting any cgroup fs. ==
  mem_cgroup=off : Execl Throughput 43.0 3150.1 732.6
  mem_cgroup=on  : Execl Throughput 43.0 2932.6 682.0
  ==

[lizf@cn.fujitsu.com: fix boot option parsing]
Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Sudhir Kumar <skumar@linux.vnet.ibm.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
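A compressed, hedged sketch of how a boot option like this can be wired up with __setup(); the real cgroup code walks all registered subsystems and flips their "disabled" flag and accepts a comma-separated list, which is omitted here, and the names are invented.

#include <linux/init.h>
#include <linux/string.h>

static int example_mem_cgroup_disabled;

/* Handle "cgroup_disable=memory" on the kernel command line (simplified). */
static int __init example_cgroup_disable(char *str)
{
	if (str && !strcmp(str, "memory"))
		example_mem_cgroup_disabled = 1;
	return 1;
}
__setup("cgroup_disable=", example_cgroup_disable);

Booting with cgroup_disable=memory would then leave the controller compiled in but inert, which is the compatibility/testing trade-off described above.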
-
Sergei Shtylyov authored
Building UP kernel with KGDB enabled produces the following errors and warning (fatal due to -Werror in arch/mips/kernel/Makefile):

  In file included from arch/mips/kernel/gdb-stub.c:142:
  include/asm/smp.h:25:1: "raw_smp_processor_id" redefined
  In file included from include/linux/sched.h:69,
                   from arch/mips/kernel/gdb-stub.c:126:
  include/linux/smp.h:88:1: this is the location of the previous definition
  In file included from arch/mips/kernel/gdb-stub.c:142:
  include/asm/smp.h:62: error: redefinition of 'smp_send_reschedule'
  include/linux/smp.h:102: error: previous definition of 'smp_send_reschedule' was here
  include/asm/smp.h: In function `smp_send_reschedule':
  include/asm/smp.h:65: error: dereferencing pointer to incomplete type
  arch/mips/kernel/gdb-stub.c: At top level:
  arch/mips/kernel/gdb-stub.c:660: warning: 'kgdb_wait' defined but not used

Fix the errors by not directly including <asm/smp.h> (which is already included by <linux/smp.h>) and the warning by enclosing kgdb_wait() in #ifdef CONFIG_SMP.

Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
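Reduced to a skeleton, the shape of that kind of fix looks roughly like this (names and body are placeholders, not the gdb-stub.c code):

#include <linux/smp.h>	/* pulls in <asm/smp.h> itself when appropriate */

#ifdef CONFIG_SMP
/* Only meaningful with more than one CPU: the other CPUs park here while
 * the debugger owns the machine (body omitted in this sketch). */
static void example_kgdb_wait(void *arg)
{
}
#endif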
-
Sergei Shtylyov authored
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86
Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86:
  x86: revert assign IRQs to hpet timer
  x86: tsc prevent time going backwards
  xen: Clear PG_pinned in release_{pt,pd}()
  xen: Do not pin/unpin PMD pages
  xen: refactor xen_{alloc,release}_{pt,pd}()
  x86, agpgart: scary messages are fortunately obsolete
  xen: fix grant table bug
  x86: fix breakage of vSMP irq operations
  x86: print message if nmi_watchdog=2 cannot be enabled
  x86: fix nmi_watchdog=2 on Pentium-D CPUs
-
Geert Uytterhoeven authored
Long overdue update of the m68k defconfigs.

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Adrian Bunk authored
The default defconfig should be one from arch/m68k/configs/. arch/m68k/defconfig was not exactly identical to amiga_defconfig, but considering how long both have gone without any update, that doesn't seem to have been on purpose.

Signed-off-by: Adrian Bunk <adrian.bunk@movial.fi>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev
Linus Torvalds authored
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
  pata_ali: disable ATAPI DMA
  libata: ATA_12/16 doesn't fall into ATAPI_MISC
  libata: uninline atapi_cmd_type()
  libata: fix IDENTIFY order in ata_bus_probe()
-
Linus Torvalds authored
Mikulas Patocka noted that the optimization where we check if a buffer was already dirty (and we avoid re-dirtying it) was not really SMP-safe. Since the read of the old status was not synchronized with anything, an aggressive CPU re-ordering of memory accesses might have moved that read up to before the data was even written to the buffer, and another CPU might have cleaned the buffer again in between, causing the newly dirty state to never actually hit the disk.

Admittedly this would probably never trigger in practice, but it's still wrong.

Mikulas sent a patch that fixed the problem, but I dislike the subtlety of the whole optimization, so this is an alternate fix that is more explicit about the particular SMP ordering for the optimization, and separates out the speculative reads of the buffer state into their own conditional (and makes the memory barrier only happen if we are likely to actually hit the optimized case in the first place).

I considered removing the optimization entirely, but Andrew argued for its continued existence. I'm a push-over.

Cc: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
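Approximately what the resulting logic looks like, as a simplified sketch (the slow path that propagates the dirty state to the page and inode is elided):

#include <linux/buffer_head.h>

static void example_mark_buffer_dirty(struct buffer_head *bh)
{
	/* Speculative fast path: only trust an "already dirty" reading after
	 * a barrier has ordered it against the preceding data stores. */
	if (buffer_dirty(bh)) {
		smp_mb();
		if (buffer_dirty(bh))
			return;
	}

	/* Slow path: atomically set the bit; bail if someone beat us to it. */
	if (test_set_buffer_dirty(bh))
		return;

	/* We set the bit: go on to mark the page and inode dirty (omitted). */
}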
-
Linus Torvalds authored
Commit f63fd7e2 ("parport_pc: detection for SuperIO IT87XX POST") only released the IO port region on success, not when the probe for the IT87XX chip failed. That caused not only a reserved region to leak, but also caused an oops when the driver module was unloaded and somebody tried to cat /proc/ioports - because the string that was assigned to the IO port region was a static string in the module virtual address area.

Reported-by: Lubos Lunak <l.lunak@suse.cz>
Cc: Jan Kara <jack@suse.cz>
Cc: Petr Cvek <petr.cvek@tul.cz>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
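In outline, the bug and fix look like this (port numbers and the probe helper are placeholders, not the parport_pc code):

#include <linux/errno.h>
#include <linux/ioport.h>

static int example_chip_present(void);	/* stand-in for the IT87XX probe */

static int example_probe_superio(void)
{
	if (!request_region(0x2e, 2, "parport_pc"))
		return -EBUSY;

	if (!example_chip_present()) {
		/* The missing piece: give the region back on failure, so the
		 * name string never outlives the module in /proc/ioports. */
		release_region(0x2e, 2);
		return -ENODEV;
	}

	return 0;	/* keep the region while the chip is driven */
}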
-
Al Viro authored
a) if you initialize something with le32_to_cpu(...), then |= it with host-endian and feed to cpu_to_le32(), it's most definitely *not* __le32. As sparse would've told you...
b) the whole sequence is equivalent to |= cpu_to_le32(host-endian constant)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
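The point in miniature (an illustrative function, not the driver's code): keep the value host-endian while manipulating it and convert exactly once, or OR in an already-converted constant; mixing the two yields a value that is neither.

#include <linux/types.h>
#include <asm/byteorder.h>

static __le32 example_update_flags(__le32 raw, u32 host_bits)
{
	u32 v = le32_to_cpu(raw);	/* host-endian working copy */

	v |= host_bits;			/* manipulate in host order */
	return cpu_to_le32(v);		/* convert back exactly once */
}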
-
Adrian Bunk authored
My previous section fix only turned one section problem into another section problem. This patch fixes it for real.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Harvey Harrison authored
The other if blocks don't redeclare temp, remove the redeclaration in the final if() block.

  drivers/net/phy/marvell.c:214:7: warning: symbol 'temp' shadows an earlier one
  drivers/net/phy/marvell.c:160:6: originally declared here

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
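Reduced to a toy (not the marvell.c code), the warning is about this pattern: the inner declaration hides the outer 'temp', so nothing assigned inside the block survives it.

static int example_shadow(int reg)
{
	int temp = reg;

	if (reg & 0x1) {
		int temp = reg >> 1;	/* shadows the outer 'temp' (the bug) */

		temp |= 0x2;		/* the outer variable never sees this */
	}
	return temp;
}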
-
Pavel Emelyanov authored
These entries are allocated in vlan_dev_set_egress_priority, but are never released, so they leak on vlan device removal. Drop them in the vlan's ->uninit callback - after the device is brought down and everyone has been notified that it is going to be unregistered.

Found during testing of the vlan netnsization patchset.

Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Gleixner authored
The commits:

  commit 37a47db8
  Author: Balaji Rao <balajirrao@gmail.com>
  Date: Wed Jan 30 13:30:03 2008 +0100
  x86: assign IRQs to HPET timers, fix

and

  commit e3f37a54
  Author: Balaji Rao <balajirrao@gmail.com>
  Date: Wed Jan 30 13:30:03 2008 +0100
  x86: assign IRQs to HPET timers

have been identified to cause a regression on some platforms due to the assignment of legacy IRQs, which makes the legacy devices connected to those IRQs dysfunctional. Revert them.

This fixes http://bugzilla.kernel.org/show_bug.cgi?id=10382

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
We already catch most of the TSC problems by sanity checks, but there is a subtle bug which has been in the code forever. This can cause time jumps in the range of hours.

This was reported in:
  http://lkml.org/lkml/2007/8/23/96
and
  http://lkml.org/lkml/2008/3/31/23

I was able to reproduce the problem with a gettimeofday loop test on a dual core and a quad core machine which both have synchronized TSCs. The TSCs seem not to be perfectly in sync though, but the kernel is not able to detect the slight delta in the sync check. Still there exists an extremely small window where this delta can be observed with a real big time jump. So far I was only able to reproduce this with the vsyscall gettimeofday implementation, but in theory this might be observable with the syscall based version as well.

CPU 0 updates the clock source variables under the xtime/vsyscall lock, and CPU 1, where the TSC is slightly behind CPU 0, reads the time right after the seqlock was unlocked. The clocksource reference data was updated with the TSC from CPU 0, and the value which is read from the TSC on CPU 1 is less than the reference data. This results in a huge delta value due to the unsigned subtraction of the TSC value and the reference value. This algorithm cannot be changed due to the support of wrapping clock sources like the pm timer.

The huge delta is converted to nanoseconds and added to xtime, which is then observable by the caller. The next gettimeofday call on CPU 1 will show the correct time again, as the TSC has now advanced above the reference value.

To prevent this TSC-specific wreckage we need to compare the TSC value against the reference value and return the latter when it is larger than the actual TSC value. I pondered marking the TSC unstable when the readout is smaller than the reference value, but this would render an otherwise good and fast clocksource unusable without a real good reason.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
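The guts of the fix, sketched with simplified names (the real change sits in the x86 TSC read path, not in a standalone helper like this):

#include <linux/types.h>
#include <asm/msr.h>

/* Clamp the raw TSC readout so that a CPU slightly behind the one that
 * updated the clocksource reference can never produce a huge wrapped
 * unsigned delta. */
static inline u64 example_tsc_read(u64 reference_cycles)
{
	u64 now = native_read_tsc();

	return now >= reference_cycles ? now : reference_cycles;
}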
-
Mark McLoughlin authored
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Cc: xen-devel@lists.xensource.com
Cc: Mark McLoughlin <markmc@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-