- 30 Aug, 2012 1 commit
-
David S. Miller authored
The AES opcodes have a 3-cycle latency, so by doing 32 bytes at a time we avoid a pipeline bubble between every round. For the 256-bit key case, it looks like we're doing more work in order to reload the KEY registers during the loop to make space for scarce temporaries. But the loads dual-issue with the AES operations, so we get the KEY reloads essentially for free.

Before:

testing speed of ecb(aes) encryption
test 0 (128 bit key, 16 byte blocks): 1 operation in 264 cycles (16 bytes)
test 1 (128 bit key, 64 byte blocks): 1 operation in 231 cycles (64 bytes)
test 2 (128 bit key, 256 byte blocks): 1 operation in 329 cycles (256 bytes)
test 3 (128 bit key, 1024 byte blocks): 1 operation in 715 cycles (1024 bytes)
test 4 (128 bit key, 8192 byte blocks): 1 operation in 4248 cycles (8192 bytes)
test 5 (192 bit key, 16 byte blocks): 1 operation in 221 cycles (16 bytes)
test 6 (192 bit key, 64 byte blocks): 1 operation in 234 cycles (64 bytes)
test 7 (192 bit key, 256 byte blocks): 1 operation in 359 cycles (256 bytes)
test 8 (192 bit key, 1024 byte blocks): 1 operation in 803 cycles (1024 bytes)
test 9 (192 bit key, 8192 byte blocks): 1 operation in 5366 cycles (8192 bytes)
test 10 (256 bit key, 16 byte blocks): 1 operation in 209 cycles (16 bytes)
test 11 (256 bit key, 64 byte blocks): 1 operation in 255 cycles (64 bytes)
test 12 (256 bit key, 256 byte blocks): 1 operation in 379 cycles (256 bytes)
test 13 (256 bit key, 1024 byte blocks): 1 operation in 938 cycles (1024 bytes)
test 14 (256 bit key, 8192 byte blocks): 1 operation in 6041 cycles (8192 bytes)

After:

testing speed of ecb(aes) encryption
test 0 (128 bit key, 16 byte blocks): 1 operation in 266 cycles (16 bytes)
test 1 (128 bit key, 64 byte blocks): 1 operation in 256 cycles (64 bytes)
test 2 (128 bit key, 256 byte blocks): 1 operation in 305 cycles (256 bytes)
test 3 (128 bit key, 1024 byte blocks): 1 operation in 676 cycles (1024 bytes)
test 4 (128 bit key, 8192 byte blocks): 1 operation in 3981 cycles (8192 bytes)
test 5 (192 bit key, 16 byte blocks): 1 operation in 210 cycles (16 bytes)
test 6 (192 bit key, 64 byte blocks): 1 operation in 233 cycles (64 bytes)
test 7 (192 bit key, 256 byte blocks): 1 operation in 340 cycles (256 bytes)
test 8 (192 bit key, 1024 byte blocks): 1 operation in 766 cycles (1024 bytes)
test 9 (192 bit key, 8192 byte blocks): 1 operation in 5136 cycles (8192 bytes)
test 10 (256 bit key, 16 byte blocks): 1 operation in 206 cycles (16 bytes)
test 11 (256 bit key, 64 byte blocks): 1 operation in 268 cycles (64 bytes)
test 12 (256 bit key, 256 byte blocks): 1 operation in 368 cycles (256 bytes)
test 13 (256 bit key, 1024 byte blocks): 1 operation in 890 cycles (1024 bytes)
test 14 (256 bit key, 8192 byte blocks): 1 operation in 5718 cycles (8192 bytes)

Signed-off-by: David S. Miller <davem@davemloft.net>
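The real implementation is SPARC assembler using the chip's AES opcodes; the sketch below only illustrates the scheduling idea, with aes_round() as a hypothetical placeholder (a plain XOR here) for the 3-cycle-latency hardware round.

```c
#include <stdint.h>

/* Hypothetical placeholder for the hardware AES round opcode, which has a
 * 3-cycle result latency on SPARC-T4.  A plain XOR is used only so the
 * sketch is self-contained; the real work is done by the crypto unit. */
static inline void aes_round(uint64_t k0, uint64_t k1, uint64_t blk[2])
{
	blk[0] ^= k0;
	blk[1] ^= k1;
}

/* Encrypt two independent 16-byte blocks (32 bytes) per iteration.  Because
 * blk_b does not depend on blk_a, its round can issue while blk_a's result
 * is still in flight, hiding the opcode latency instead of stalling the
 * pipeline between rounds. */
static void ecb_encrypt_two_blocks(const uint64_t *round_keys, int rounds,
				   uint64_t blk_a[2], uint64_t blk_b[2])
{
	for (int r = 0; r < rounds; r++) {
		aes_round(round_keys[2 * r], round_keys[2 * r + 1], blk_a);
		aes_round(round_keys[2 * r], round_keys[2 * r + 1], blk_b);
	}
}
```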
-
- 29 Aug, 2012 4 commits
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Instead of testing and branching off of the key size on every encrypt/decrypt call, use method ops assigned at key set time. Reverse the order of float registers used for decryption to make future changes easier. Align all assembler routines on a 32-byte boundary. Signed-off-by: David S. Miller <davem@davemloft.net>
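A minimal C sketch of the "choose the routines once at key-set time" pattern; the struct, function, and variable names are illustrative, not the sparc64 driver's actual symbols (the 192-bit case is omitted for brevity):

```c
#include <string.h>

struct aes_ops {
	void (*encrypt)(const void *key, const unsigned char *in, unsigned char *out);
	void (*decrypt)(const void *key, const unsigned char *in, unsigned char *out);
};

/* Dummy per-key-size routines; in the real driver these would be the
 * hand-written assembler entry points for each key size. */
static void enc128(const void *key, const unsigned char *in, unsigned char *out) { (void)key; (void)in; (void)out; }
static void dec128(const void *key, const unsigned char *in, unsigned char *out) { (void)key; (void)in; (void)out; }
static void enc256(const void *key, const unsigned char *in, unsigned char *out) { (void)key; (void)in; (void)out; }
static void dec256(const void *key, const unsigned char *in, unsigned char *out) { (void)key; (void)in; (void)out; }

static const struct aes_ops aes128_ops = { enc128, dec128 };
static const struct aes_ops aes256_ops = { enc256, dec256 };

struct aes_ctx {
	const struct aes_ops *ops;	/* chosen once, at key-set time */
	unsigned char key[32];
};

static int aes_setkey(struct aes_ctx *ctx, const unsigned char *key, unsigned int len)
{
	/* The key size is examined exactly once, here ... */
	ctx->ops = (len == 16) ? &aes128_ops : &aes256_ops;
	memcpy(ctx->key, key, len);
	return 0;
}

static void aes_encrypt(struct aes_ctx *ctx, const unsigned char *in, unsigned char *out)
{
	/* ... so the hot path is a plain indirect call instead of testing
	 * and branching on the key size for every block. */
	ctx->ops->encrypt(ctx->key, in, out);
}
```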
-
David S. Miller authored
On SPARC-T4 fsrc2 has 1 cycle of latency, whereas fsrc1 has 11 cycles. True story. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 Aug, 2012 1 commit
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Aug, 2012 1 commit
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 Aug, 2012 1 commit
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Aug, 2012 1 commit
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 20 Aug, 2012 4 commits
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 19 Aug, 2012 17 commits
-
David S. Miller authored
Describe how we support two types of PMU setups: one with a single control register and two counters stored in a single register, and another with one control register per counter and each counter living in its own register. Signed-off-by: David S. Miller <davem@davemloft.net>
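Purely illustrative sketch of the two counter layouts the code now has to describe:

```c
#include <stdint.h>

/* Layout A (older chips): one control register and both 32-bit counters
 * packed into a single 64-bit PIC register; the index selects a half. */
static uint32_t read_counter_packed(uint64_t pic, int idx)
{
	return (uint32_t)(pic >> (idx ? 32 : 0));
}

/* Layout B (newer chips): one control register per counter and each counter
 * living in its own register, so the index selects a whole register. */
static uint64_t read_counter_separate(const uint64_t *pics, int idx)
{
	return pics[idx];
}
```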
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
When cpuc->n_events is zero, we actually don't do anything and we just write the cpuc->pcr[0] value as-is without any modifications. The "pcr = 0;" assignment there was just useless and confusing. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Make the per-cpu pcr save area an array instead of one u64. Describe how many PCR and PIC registers the chip has in the sparc_pmu descriptor. Signed-off-by: David S. Miller <davem@davemloft.net>
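An illustrative sketch of the data-structure shape this describes; the field names and the MAX_HWEVENTS bound are assumptions based on the log, not necessarily the kernel's exact definitions:

```c
#define MAX_HWEVENTS 4	/* assumed upper bound across supported chips */

struct cpu_hw_events_sketch {
	/* Previously a single u64; now one saved PCR value per control
	 * register, so chips with per-counter PCRs fit the same code. */
	unsigned long long pcr[MAX_HWEVENTS];
	int n_events;
};

struct sparc_pmu_sketch {
	/* The chip descriptor states how many performance control (PCR)
	 * and counter (PIC) registers this CPU actually has. */
	int num_pcrs;
	int num_pic;
};
```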
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Now specified in sparc_pmu descriptor. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Starting with SPARC-T4 we have a separate PCR control register for each performance counter, and there are absolutely no restrictions on what events can run on which counters. Add flags that we can use to elide the conflict and dependency logic used to handle older chips. Signed-off-by: David S. Miller <davem@davemloft.net>
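A sketch of how such capability flags can gate the scheduling checks; the flag and function names are assumptions for illustration:

```c
/* Illustrative capability flags, not necessarily the kernel's identifiers. */
#define PMU_FLAG_ALL_EXCLUDES_SAME	0x1
#define PMU_FLAG_HAS_CONFLICTS		0x2

struct pmu_caps {
	unsigned int flags;
};

static int events_schedulable(const struct pmu_caps *pmu)
{
	/* SPARC-T4 sets neither flag: any event can run on any counter,
	 * so the conflict/dependency scan needed on older chips is skipped. */
	if (!(pmu->flags & PMU_FLAG_HAS_CONFLICTS))
		return 1;
	/* ... legacy per-counter restriction checks would run here ... */
	return 1;
}
```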
-
David S. Miller authored
This is enough to get the NMIs working, more work is needed for perf events. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
We assumed PCR_PIC_PRIV can always be used to disable it, but that won't be true for SPARC-T4. This allows us also to get rid of some messy defines used in only one location. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
And, like for the PCR, allow indexing of different PIC register numbers. This also removes all of the non-__KERNEL__ bits from asm/perfctr.h; nothing kernel-side should include it any more. Signed-off-by: David S. Miller <davem@davemloft.net>
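A sketch of the indexed-accessor idea; the ops-table and function names are assumptions:

```c
/* An ops table hides whether the chip exposes one shared PIC or one PIC per
 * counter; callers always pass a register index. */
struct pic_ops {
	unsigned long long (*read_pic)(int idx);
	void (*write_pic)(int idx, unsigned long long val);
};

static void reset_counters(const struct pic_ops *ops, int num_pic)
{
	for (int i = 0; i < num_pic; i++)
		ops->write_pic(i, 0);	/* each counter addressed by index */
}
```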
-
David S. Miller authored
SPARC-T4 and later have multiple PCR registers, one for each PIC counter. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Unlike on previous chips, accesses to the perf-counter control registers are all hyper-privileged. Therefore, access to them must go through a hypervisor interface. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Compare and branch, pause, and the various new cryptographic opcodes. We advertise the crypto opcodes to userspace using one hwcap bit, HWCAP_SPARC_CRYPTO. This essentially indicates that the %cfr register can be interrogated and used to determine exactly which crypto opcodes are available on the current cpu. We use the %cfr register to report all of the crypto opcodes available in the bootup CPU caps log message, and via /proc/cpuinfo. Signed-off-by: David S. Miller <davem@davemloft.net>
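A hedged userspace sketch of testing the hwcap bit before probing %cfr; the fallback HWCAP_SPARC_CRYPTO value below is an assumption, so take the real definition from the kernel or libc headers:

```c
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_SPARC_CRYPTO
#define HWCAP_SPARC_CRYPTO 0x04000000UL	/* assumed bit value; verify against the kernel headers */
#endif

int main(void)
{
	unsigned long hwcap = getauxval(AT_HWCAP);

	if (hwcap & HWCAP_SPARC_CRYPTO)
		puts("crypto opcodes present; %cfr reports exactly which ones");
	else
		puts("no crypto opcodes on this cpu");
	return 0;
}
```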
-
- 18 Aug, 2012 4 commits
-
Linus Torvalds authored (git://git.linaro.org/people/rmk/linux-arm)
Pull ARM fixes from Russell King:
 "The largest thing in this set of changes is bringing back some of the ARMv3 code to fix a compile problem noticed on RiscPC, which we still support, even though we only support ARMv4 there. (The reason is that the system bus doesn't support ARMv4 half-word accesses, so we need the ARMv3 library code for this platform.) The rest are all quite minor fixes."

* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
  ARM: 7490/1: Drop duplicate select for GENERIC_IRQ_PROBE
  ARM: Bring back ARMv3 IO and user access code
  ARM: 7489/1: errata: fix workaround for erratum #720789 on UP systems
  ARM: 7488/1: mm: use 5 bits for swapfile type encoding
  ARM: 7487/1: mm: avoid setting nG bit for user mappings that aren't present
  ARM: 7486/1: sched_clock: update epoch_cyc on resume
  ARM: 7484/1: Don't enable GENERIC_LOCKBREAK with ticket spinlocks
  ARM: 7483/1: vfp: only advertise VFPv4 in hwcaps if CONFIG_VFPv3 is enabled
  ARM: 7482/1: topology: fix section mismatch warning for init_cpu_topology
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm)
Pull power management fixes from Rafael J. Wysocki:
 - Fixes for three obscure problems in the runtime PM core code found recently.
 - Two fixes for the new "coupled" cpuidle code from Colin Cross and Jon Medhurst.
 - intel_idle driver fix from Konrad Rzeszutek Wilk.

* tag 'pm-for-3.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  intel_idle: Check cpu_idle_get_driver() for NULL before dereferencing it.
  cpuidle: Prevent null pointer dereference in cpuidle_coupled_cpu_notify
  cpuidle: coupled: fix sleeping while atomic in cpu notifier
  PM / Runtime: Check device PM QoS setting before "no callbacks" check
  PM / Runtime: Clear power.deferred_resume on success in rpm_suspend()
  PM / Runtime: Fix rpm_resume() return value for power.no_callbacks set
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs)
Pull vfs fixes from Miklos Szeredi.

This mainly fixes some confusion about whether the open 'mode' variable passed around should contain the full file type (S_IFREG etc) information or just the permission mode. In particular, the lack of proper file type information had confused fuse.

* 'vfs-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs:
  vfs: fix propagation of atomic_open create error on negative dentry
  fuse: check create mode in atomic open
  vfs: pass right create mode to may_o_create()
  vfs: atomic_open(): fix create mode usage
  vfs: canonicalize create mode in build_open_flags()
-
Linus Torvalds authored (git://neil.brown.name/md)
Pull md fixes from NeilBrown:
 "2 fixes for md, tagged for -stable"

* tag 'md-3.6-fixes' of git://neil.brown.name/md:
  md/raid10: fix problem with on-stack allocation of r10bio structure.
  md: Don't truncate size at 4TB for RAID0 and Linear
-
- 17 Aug, 2012 6 commits
-
NeilBrown authored
A 'struct r10bio' has an array of per-copy information at the end. This array is declared with size [0], and r10bio_pool_alloc allocates enough extra space to store the per-copy information depending on the number of copies needed. So declaring a 'struct r10bio' on the stack isn't going to work: it won't allocate enough space, and memory corruption will ensue. So in the two places where this is done, declare a sufficiently large structure and use that instead. The two call-sites of this bug were introduced in 3.4 and 3.5, so this is suitable for both those kernels. The patch will have to be modified for 3.4 as it only has one bug.

Cc: stable@vger.kernel.org
Reported-by: Ivan Vasilyev <ivan.vasilyev@gmail.com>
Tested-by: Ivan Vasilyev <ivan.vasilyev@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
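A simplified sketch of the fix pattern (not the md code itself): a structure that ends in a zero-length array cannot be declared directly on the stack, so wrap it in a container that explicitly reserves room for the trailing per-copy entries.

```c
#define MAX_COPIES 4	/* illustrative upper bound on copies */

struct copy_info {
	void *bio;
};

struct r10bio_like {
	int sectors;
	struct copy_info devs[0];	/* normally sized by the pool allocator */
};

static void handle_request(void)
{
	/* Reserve space for the trailing array explicitly; a bare
	 * 'struct r10bio_like' on the stack would have no room for
	 * devs[] and writes to it would corrupt the stack. */
	struct {
		struct r10bio_like r10_bio;
		struct copy_info devs[MAX_COPIES];
	} on_stack;
	struct r10bio_like *r10b = &on_stack.r10_bio;

	for (int i = 0; i < MAX_COPIES; i++)
		r10b->devs[i].bio = 0;	/* safe: backed by on_stack.devs[] */
}
```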
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband)
Pull infiniband/rdma fixes from Roland Dreier:
 "Grab bag of InfiniBand/RDMA fixes:
  - IPoIB fixes for regressions introduced by path database conversion
  - mlx4 fixes for bugs with large memory systems and regressions from SR-IOV patches
  - RDMA CM fix for passing bad event up to userspace
  - Other minor fixes"

* tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband:
  IB/mlx4: Check iboe netdev pointer before dereferencing it
  mlx4_core: Clean up buddy bitmap allocation
  mlx4_core: Fix integer overflow issues around MTT table
  mlx4_core: Allow large mlx4_buddy bitmaps
  IB/srp: Fix a race condition
  IB/qib: Fix error return code in qib_init_7322_variables()
  IB: Fix typos in infiniband drivers
  IB/ipoib: Fix RCU pointer dereference of wrong object
  IB/ipoib: Add missing locking when CM object is deleted
  RDMA/ucma.c: Fix for events with wrong context on iWARP
  RDMA/ocrdma: Don't call vlan_dev_real_dev() for non-VLAN netdevs
  IB/mlx4: Fix possible deadlock on sm_lock spinlock
-
Linus Torvalds authored (git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty)
Pull TTY fixes from Greg Kroah-Hartman:
 "Here are 4 tiny patches, each fixing a serial driver problem that people have reported.
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>"

* tag 'tty-3.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
  pmac_zilog,kdb: Fix console poll hook to return instead of loop
  serial: mxs-auart: fix the wrong RTS hardware flow control
  serial: ifx6x60: fix paging fault on spi_register_driver
  serial: Change Kconfig entry for CLPS711X-target
-
Konrad Rzeszutek Wilk authored
If the machine is booted without any cpu_idle driver set (because disable_cpuidle() has been called), we should follow other users of the cpu_idle API and check the return value for NULL before using it.

Reported-and-tested-by: Mark van Dijk <mark@internecto.net>
Suggested-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
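Roughly the guard being described, as a sketch against the mainline cpuidle_get_driver() helper rather than the driver's exact code:

```c
#include <linux/cpuidle.h>
#include <linux/string.h>

/* Sketch of the guard (kernel context, not the intel_idle patch itself). */
static bool intel_idle_driver_active(void)
{
	struct cpuidle_driver *drv = cpuidle_get_driver();

	/* drv is NULL when no cpuidle driver has been set, e.g. after
	 * disable_cpuidle(); bail out instead of dereferencing it. */
	return drv && !strcmp(drv->name, "intel_idle");
}
```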
-
Jon Medhurst (Tixy) authored
When a kernel is built to support multiple hardware types, it's possible that CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is set but the hardware the kernel is run on doesn't support cpuidle and therefore doesn't load a driver for it. In this case, when the system is shut down, cpuidle_coupled_cpu_notify() gets called with cpuidle_devices set to NULL. There are quite possibly other circumstances where this situation can also occur, and we should check for it.

Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Colin Cross authored
The cpu hotplug notifier gets called in both atomic and non-atomic contexts, so it is not always safe to lock a mutex. Filter out all events except the six necessary ones, which are all sleepable, before taking the mutex.

Signed-off-by: Colin Cross <ccross@android.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
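A sketch of the filtering approach, written against the pre-4.10 CPU notifier constants; the mutex name and the exact set of cases are illustrative:

```c
#include <linux/cpu.h>
#include <linux/mutex.h>
#include <linux/notifier.h>

static DEFINE_MUTEX(coupled_lock);	/* illustrative name */

static int coupled_cpu_notify_sketch(struct notifier_block *nb,
				     unsigned long action, void *hcpu)
{
	/* Only these notifications are delivered from sleepable context;
	 * everything else may arrive in atomic context, so return before
	 * touching the mutex. */
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_UP_PREPARE:
	case CPU_UP_CANCELED:
	case CPU_ONLINE:
	case CPU_DEAD:
	case CPU_DOWN_PREPARE:
	case CPU_DOWN_FAILED:
		break;
	default:
		return NOTIFY_OK;
	}

	mutex_lock(&coupled_lock);
	/* ... update the coupled-idle bookkeeping here ... */
	mutex_unlock(&coupled_lock);

	return NOTIFY_OK;
}
```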
-