Commit 18404756 authored by Max Krasnyansky, committed by Thomas Gleixner

genirq: Expose default irq affinity mask (take 3)

The current IRQ affinity interface does not provide a way to set affinity
for IRQs that will be allocated/activated in the future.
This patch creates /proc/irq/default_smp_affinity, which lets users set a
default affinity mask for newly allocated IRQs. Changing the default
does not affect the affinity masks of currently active IRQs; those
have to be changed explicitly.
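As a sketch of the intended workflow (the /proc write needs root and a kernel carrying this patch, so it is shown only as a comment; the mask arithmetic itself is ordinary shell):

```shell
# Bit N of the affinity mask corresponds to CPU N, so a mask covering
# CPUs 0-3 is (1 << 4) - 1 = 0x0f.
mask=$(printf '%02x' $(( (1 << 4) - 1 )))
echo "$mask"    # prints 0f

# On a patched kernel this would make 0f the default for IRQs
# allocated from now on; already-active IRQs keep their masks:
#   echo "$mask" > /proc/irq/default_smp_affinity
```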

Updated based on Paul J's comments and added some more documentation.
Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
Cc: pj@sgi.com
Cc: a.p.zijlstra@chello.nl
Cc: tglx@linutronix.de
Cc: rdunlap@xenotime.net
Cc: mingo@elte.hu
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent c3b25b32
ChangeLog:
Started by Ingo Molnar <mingo@redhat.com>
Update by Max Krasnyansky <maxk@qualcomm.com>
SMP IRQ affinity

/proc/irq/IRQ#/smp_affinity specifies which target CPUs are permitted
for a given IRQ source. It's a bitmask of allowed CPUs. It's not allowed
to turn off all CPUs, and if an IRQ controller does not support IRQ
affinity then the value will not change from the default 0xffffffff.

/proc/irq/default_smp_affinity specifies the default affinity mask that applies
to all non-active IRQs. Once an IRQ is allocated/activated, its affinity bitmask
will be set to the default mask. It can then be changed as described above.
The default mask is 0xffffffff.

Here is an example of restricting IRQ44 (eth1) to CPU0-3 then restricting
it to CPU4-7 (this is an 8-CPU SMP box):
[root@moon 44]# cd /proc/irq/44
[root@moon 44]# cat smp_affinity
ffffffff
[root@moon 44]# echo 0f > smp_affinity
[root@moon 44]# cat smp_affinity
0000000f
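The session above writes 0f and reads it back as 0000000f. To decode such a mask by hand, here is a small helper; the function name is made up for illustration and is not part of the patch:

```shell
# List the CPU numbers whose bits are set in a hex affinity mask.
mask_to_cpus() {
    local mask=$(( 0x$1 )) cpu=0 out=""
    while [ "$mask" -ne 0 ]; do
        [ $(( mask & 1 )) -eq 1 ] && out="$out $cpu"
        mask=$(( mask >> 1 ))
        cpu=$(( cpu + 1 ))
    done
    echo "${out# }"
}

mask_to_cpus 0f    # prints 0 1 2 3
mask_to_cpus f0    # prints 4 5 6 7
```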
@@ -21,17 +30,27 @@ PING hell (195.4.7.3): 56 data bytes

--- hell ping statistics ---
6029 packets transmitted, 6027 packets received, 0% packet loss
round-trip min/avg/max = 0.1/0.1/0.4 ms
[root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
           CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
 44:       1068       1785       1785       1783          0          0          0          0   IO-APIC-level  eth1
As can be seen from the line above, IRQ44 was delivered only to the first four
processors (0-3).
Now let's restrict that IRQ to CPU4-7.
[root@moon 44]# echo f0 > smp_affinity
[root@moon 44]# cat smp_affinity
000000f0
[root@moon 44]# ping -f h
PING hell (195.4.7.3): 56 data bytes
..
--- hell ping statistics ---
2779 packets transmitted, 2777 packets received, 0% packet loss
round-trip min/avg/max = 0.1/0.5/585.4 ms
[root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
           CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
 44:       1068       1785       1785       1783       1784       1069       1070       1069   IO-APIC-level  eth1
[root@moon 44]#
This time around IRQ44 was delivered only to the last four processors,
i.e. the counters for CPU0-3 did not change.
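To verify claims like "the counters for CPU0-3 did not change", the per-CPU columns can be pulled out with awk. A sketch against a captured sample line, assuming the 8-CPU layout shown above; on a live box the input would come from `grep '44:' /proc/interrupts`:

```shell
# Sum the eight per-CPU counters in one /proc/interrupts row.
# Fields: $1 is "44:", $2-$9 are per-CPU counts (8-CPU box assumed).
sample=' 44:       1068       1785       1785       1783       1784       1069       1070       1069   IO-APIC-level  eth1'
echo "$sample" | awk '{ total = 0; for (i = 2; i <= 9; i++) total += $i; print total }'
# prints 11413
```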
@@ -380,28 +380,35 @@ i386 and x86_64 platforms support the new IRQ vector displays.

Of some interest is the introduction of the /proc/irq directory to 2.4.
It could be used to set IRQ to CPU affinity; this means that you can "hook" an
IRQ to only one CPU, or exclude a CPU from handling IRQs. The contents of the
irq subdir is one subdir for each IRQ, and two files; default_smp_affinity and
prof_cpu_mask.

For example
> ls /proc/irq/
0 10 12 14 16 18 2 4 6 8 prof_cpu_mask
1 11 13 15 17 19 3 5 7 9 default_smp_affinity
> ls /proc/irq/0/
smp_affinity
smp_affinity is a bitmask in which you can specify which CPUs can handle the
IRQ. You can set it by doing:

> echo 1 > /proc/irq/10/smp_affinity

This means that only the first CPU will handle the IRQ, but you can also echo
5, which means that only the first and third CPU can handle the IRQ.

The contents of each smp_affinity file is the same by default:

> cat /proc/irq/0/smp_affinity
ffffffff

The default_smp_affinity mask applies to all non-active IRQs, which are the
IRQs which have not yet been allocated/activated, and hence which lack a
/proc/irq/[0-9]* directory.

prof_cpu_mask specifies which CPUs are to be profiled by the system-wide
profiler. Default value is ffffffff (all CPUs).

The way IRQs are routed is handled by the IO-APIC, and it's round robin
between all the CPUs which are allowed to handle it. As usual the kernel has
......
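Going the other way, bit N of the mask selects CPU N, so a mask of 5 (binary 101) covers CPU0 and CPU2. A helper that builds the hex mask from a CPU list (the function name is invented for this sketch):

```shell
# Build a hex affinity mask from a list of CPU numbers.
cpus_to_mask() {
    local mask=0 cpu
    for cpu in "$@"; do
        mask=$(( mask | (1 << cpu) ))
    done
    printf '%x\n' "$mask"
}

cpus_to_mask 0 2      # prints 5  (CPU0 and CPU2)
cpus_to_mask 4 5 6 7  # prints f0 (CPU4-7, as in the session above)
```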
@@ -42,8 +42,7 @@ void ack_bad_irq(unsigned int irq)

#ifdef CONFIG_SMP
static char irq_user_affinity[NR_IRQS];

int irq_select_affinity(unsigned int irq)
{
	static int last_cpu;
	int cpu = last_cpu + 1;
@@ -51,7 +50,7 @@ select_smp_affinity(unsigned int irq)
	if (!irq_desc[irq].chip->set_affinity || irq_user_affinity[irq])
		return 1;

	while (!cpu_possible(cpu) || !cpu_isset(cpu, irq_default_affinity))
		cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
	last_cpu = cpu;
......
@@ -104,8 +104,11 @@ extern void enable_irq(unsigned int irq);

#if defined(CONFIG_SMP) && defined(CONFIG_GENERIC_HARDIRQS)

extern cpumask_t irq_default_affinity;

extern int irq_set_affinity(unsigned int irq, cpumask_t cpumask);
extern int irq_can_set_affinity(unsigned int irq);
extern int irq_select_affinity(unsigned int irq);

#else /* CONFIG_SMP */
@@ -119,6 +122,8 @@ static inline int irq_can_set_affinity(unsigned int irq)
	return 0;
}
static inline int irq_select_affinity(unsigned int irq) { return 0; }
#endif /* CONFIG_SMP && CONFIG_GENERIC_HARDIRQS */

#ifdef CONFIG_GENERIC_HARDIRQS
......
@@ -244,15 +244,6 @@ static inline void set_balance_irq_affinity(unsigned int irq, cpumask_t mask)
}
#endif
#ifdef CONFIG_AUTO_IRQ_AFFINITY
extern int select_smp_affinity(unsigned int irq);
#else
static inline int select_smp_affinity(unsigned int irq)
{
	return 1;
}
#endif
extern int no_irq_affinity;

static inline int irq_balancing_disabled(unsigned int irq)
......
@@ -17,6 +17,8 @@

#ifdef CONFIG_SMP

cpumask_t irq_default_affinity = CPU_MASK_ALL;

/**
 * synchronize_irq - wait for pending IRQ handlers (on other CPUs)
 * @irq: interrupt number to wait for
@@ -95,6 +97,27 @@ int irq_set_affinity(unsigned int irq, cpumask_t cpumask)
	return 0;
}
#ifndef CONFIG_AUTO_IRQ_AFFINITY
/*
 * Generic version of the affinity autoselector.
 */
int irq_select_affinity(unsigned int irq)
{
	cpumask_t mask;

	if (!irq_can_set_affinity(irq))
		return 0;

	cpus_and(mask, cpu_online_map, irq_default_affinity);

	irq_desc[irq].affinity = mask;
	irq_desc[irq].chip->set_affinity(irq, mask);

	set_balance_irq_affinity(irq, mask);
	return 0;
}
#endif
#endif

/**
@@ -382,6 +405,9 @@ int setup_irq(unsigned int irq, struct irqaction *new)
	} else
		/* Undo nested disables: */
		desc->depth = 1;

	/* Set default affinity mask once everything is setup */
	irq_select_affinity(irq);
}

	/* Reset broken irq detection when installing new handler */
	desc->irq_count = 0;
@@ -571,8 +597,6 @@ int request_irq(unsigned int irq, irq_handler_t handler,
	action->next = NULL;
	action->dev_id = dev_id;
select_smp_affinity(irq);
#ifdef CONFIG_DEBUG_SHIRQ
	if (irqflags & IRQF_SHARED) {
		/*
......
@@ -44,7 +44,7 @@ static int irq_affinity_write_proc(struct file *file, const char __user *buffer,
				   unsigned long count, void *data)
{
	unsigned int irq = (int)(long)data, full_count = count, err;
	cpumask_t new_value;

	if (!irq_desc[irq].chip->set_affinity || no_irq_affinity ||
	    irq_balancing_disabled(irq))
@@ -62,17 +62,51 @@ static int irq_affinity_write_proc(struct file *file, const char __user *buffer,
	 * way to make the system unusable accidentally :-) At least
	 * one online CPU still has to be targeted.
	 */
	if (!cpus_intersects(new_value, cpu_online_map))
		/* Special case for empty set - allow the architecture
		   code to set default SMP affinity. */
		return irq_select_affinity(irq) ? -EINVAL : full_count;

	irq_set_affinity(irq, new_value);

	return full_count;
}
static int default_affinity_read(char *page, char **start, off_t off,
				 int count, int *eof, void *data)
{
	int len = cpumask_scnprintf(page, count, irq_default_affinity);

	if (count - len < 2)
		return -EINVAL;
	len += sprintf(page + len, "\n");
	return len;
}

static int default_affinity_write(struct file *file, const char __user *buffer,
				  unsigned long count, void *data)
{
	unsigned int full_count = count, err;
	cpumask_t new_value;

	err = cpumask_parse_user(buffer, count, new_value);
	if (err)
		return err;

	if (!is_affinity_mask_valid(new_value))
		return -EINVAL;

	/*
	 * Do not allow disabling IRQs completely - it's a too easy
	 * way to make the system unusable accidentally :-) At least
	 * one online CPU still has to be targeted.
	 */
	if (!cpus_intersects(new_value, cpu_online_map))
		return -EINVAL;

	irq_default_affinity = new_value;

	return full_count;
}
#endif

static int irq_spurious_read(char *page, char **start, off_t off,
@@ -171,6 +205,21 @@ void unregister_handler_proc(unsigned int irq, struct irqaction *action)
		remove_proc_entry(action->dir->name, irq_desc[irq].dir);
}
void register_default_affinity_proc(void)
{
#ifdef CONFIG_SMP
	struct proc_dir_entry *entry;

	/* create /proc/irq/default_smp_affinity */
	entry = create_proc_entry("default_smp_affinity", 0600, root_irq_dir);
	if (entry) {
		entry->data = NULL;
		entry->read_proc = default_affinity_read;
		entry->write_proc = default_affinity_write;
	}
#endif
}
void init_irq_proc(void)
{
	int i;
@@ -180,6 +229,8 @@ void init_irq_proc(void)
	if (!root_irq_dir)
		return;

	register_default_affinity_proc();

	/*
	 * Create entries for all existing IRQs.
	 */
......