- 13 Mar, 2015 21 commits
-
Hante Meuleman authored
The watchdog thread is used to put the SDIO bus to sleep when the system is idling. This patch simplifies the way it is determined when sleep can be entered. Reviewed-by: Arend Van Spriel <arend@broadcom.com> Reviewed-by: Franky (Zhenhui) Lin <frankyl@broadcom.com> Reviewed-by: Pieter-Paul Giesberts <pieterpg@broadcom.com> Reviewed-by: Daniel (Deognyoun) Kim <dekim@broadcom.com> Signed-off-by: Hante Meuleman <meuleman@broadcom.com> Signed-off-by: Arend van Spriel <arend@broadcom.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-
Hante Meuleman authored
On removal of an SDIO card, both functions of the card will get a remove call. When the first is hanging in ctrl frame xmit, the second will cause an oops. This patch fixes the xmit ctrl handling in case of serious errors and also limits the remove handling to function 1 only. Reviewed-by: Arend Van Spriel <arend@broadcom.com> Reviewed-by: Franky (Zhenhui) Lin <frankyl@broadcom.com> Reviewed-by: Pieter-Paul Giesberts <pieterpg@broadcom.com> Reviewed-by: Daniel (Deognyoun) Kim <dekim@broadcom.com> Signed-off-by: Hante Meuleman <meuleman@broadcom.com> Signed-off-by: Arend van Spriel <arend@broadcom.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-
Taehee Yoo authored
Remove duplicated prototype in base.h Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-
Sergey Ryazanov authored
To prepare for a reset, ath5k should finish all asynchronous tasks. First, it disables interrupt generation, then it waits for the interrupt handler and tasklets to complete, and then proceeds to the HW configuration update. But it does not consider that the interrupt handler or a tasklet may re-enable interrupt generation, and we fall into a situation where ath5k assumes that interrupts are disabled when they are not. This can lead to different consequences, such as reception of a frame when we do not expect it. Under certain circumstances, this can lead to the following warning: WARNING: at ath5k/base.c:589 ath5k_tasklet_rx+0x318/0x6ec [ath5k]() invalid hw_rix: 1a [..] Call Trace: [<802656a8>] show_stack+0x48/0x70 [<802dd92c>] warn_slowpath_common+0x88/0xbc [<802dd98c>] warn_slowpath_fmt+0x2c/0x38 [<81b51be8>] ath5k_tasklet_rx+0x318/0x6ec [ath5k] [<8028ac64>] tasklet_action+0x8c/0xf0 [<80075804>] __do_softirq+0x180/0x32c [<80196ce8>] irq_exit+0x54/0x70 [<80041848>] ret_from_irq+0x0/0x4 [<80182fdc>] ioread32+0x4/0xc [<81b4c42c>] ath5k_hw_set_sleep_clock+0x2ec/0x474 [ath5k] [<81b4cf28>] ath5k_hw_reset+0x50/0xeb8 [ath5k] [<81b50900>] ath5k_reset+0xd4/0x310 [ath5k] [<81b557e8>] ath5k_config+0x4c/0x104 [ath5k] [<80d01770>] ieee80211_hw_config+0x2f4/0x35c [mac80211] [<80d09aa8>] ieee80211_scan_work+0x2e4/0x414 [mac80211] [<8022c3f4>] process_one_work+0x28c/0x400 [<802df8f8>] worker_thread+0x258/0x3c0 [<801b5710>] kthread+0xe0/0xec [<800418a8>] ret_from_kernel_thread+0x14/0x1c Fix this issue by adding a new status flag, which forbids re-enabling interrupt generation until the HW configuration is completed. Note: the previous patch, which reorders the Rx disable code, helps to avoid the above warning, but does not fix the root cause of the unexpected frame reception. CC: Jiri Slaby <jirislaby@gmail.com> CC: Nick Kossifidis <mickflemm@gmail.com> CC: Luis R. Rodriguez <mcgrof@do-not-panic.com> Reported-by: Christophe Prevotaux <cprevotaux@nltinc.com> Tested-by: Christophe Prevotaux <cprevotaux@nltinc.com> Tested-by: Eric Bree <ebree@nltinc.com> Signed-off-by: Sergey Ryazanov <ryazanov.s.a@gmail.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-
Sergey Ryazanov authored
ath5k updates the channel pointer and after that stops the Rx logic and applies the channel to the HW. In case of a channel switch, this sequence creates a small window in which a frame received on the old channel is considered as received on the new one. The most notable consequence of this situation occurs during the switch from the 2 GHz band (CCK+OFDM) to the 5 GHz band (OFDM-only). A frame received at a CCK rate, e.g. a beacon received at 1 Mbps, causes the following warning: WARNING: at ath5k/base.c:589 ath5k_tasklet_rx+0x318/0x6ec [ath5k]() invalid hw_rix: 1a [..] Call Trace: [<802656a8>] show_stack+0x48/0x70 [<802dd92c>] warn_slowpath_common+0x88/0xbc [<802dd98c>] warn_slowpath_fmt+0x2c/0x38 [<81b51be8>] ath5k_tasklet_rx+0x318/0x6ec [ath5k] [<8028ac64>] tasklet_action+0x8c/0xf0 [<80075804>] __do_softirq+0x180/0x32c [<80196ce8>] irq_exit+0x54/0x70 [<80041848>] ret_from_irq+0x0/0x4 [<80182fdc>] ioread32+0x4/0xc [<81b4c42c>] ath5k_hw_set_sleep_clock+0x2ec/0x474 [ath5k] [<81b4cf28>] ath5k_hw_reset+0x50/0xeb8 [ath5k] [<81b50900>] ath5k_reset+0xd4/0x310 [ath5k] [<81b557e8>] ath5k_config+0x4c/0x104 [ath5k] [<80d01770>] ieee80211_hw_config+0x2f4/0x35c [mac80211] [<80d09aa8>] ieee80211_scan_work+0x2e4/0x414 [mac80211] [<8022c3f4>] process_one_work+0x28c/0x400 [<802df8f8>] worker_thread+0x258/0x3c0 [<801b5710>] kthread+0xe0/0xec [<800418a8>] ret_from_kernel_thread+0x14/0x1c The easiest way to reproduce this warning is to run a scan with a dualband NIC in a noisy environment where channel 11 hosts multiple APs. In my tests, if the number of APs is >= 12, the warning appears in the first few seconds of scanning. In order to fix this, the Rx disable code is moved to a higher level and placed before the channel pointer update. This also makes the code a bit more symmetrical, since we disable and enable the Rx in the same function. In fact, at the pointer update time new frames should not appear, because interrupt generation at this point should already be disabled. The next patch should address this issue. CC: Jiri Slaby <jirislaby@gmail.com> CC: Nick Kossifidis <mickflemm@gmail.com> CC: Luis R. Rodriguez <mcgrof@do-not-panic.com> Reported-by: Christophe Prevotaux <cprevotaux@nltinc.com> Tested-by: Christophe Prevotaux <cprevotaux@nltinc.com> Tested-by: Eric Bree <ebree@nltinc.com> Signed-off-by: Sergey Ryazanov <ryazanov.s.a@gmail.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-
Kalle Valo authored
Merge tag 'iwlwifi-next-for-kalle-2015-03-12' of https://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi-next * Location Aware Regulatory was added by Arik * 8000 device family work * Update to the BT Coex firmware API
-
Simon Horman authored
In the case of AF_INET, s_addr was set to INADDR_ANY (0). Dropping this assignment is both symmetric with the AF_INET6 case, where s_addr is not set, and harmless, as udp_conf is zeroed out earlier in the same function anyway. I suspect this change does not have any run-time effect due to compiler optimisations. But it does make the code a little easier on the/my eyes. Cc: Tom Herbert <therbert@google.com> Signed-off-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric W. Biederman authored
Robert Shearman noticed that mpls_egress is failing to verify that the bytes to be examined are in fact present in the packet before mpls_egress reads those bytes. As suggested by David Miller, reduce this to a single pskb_may_pull call so that we don't do unnecessary work in the fast path. Reported-by: Robert Shearman <rshearma@brocade.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
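For illustration only, a minimal sketch of the pskb_may_pull() pattern the fix relies on; the helper name and the specific header pulled here are assumptions, not the actual mpls_egress() diff:

    #include <linux/skbuff.h>
    #include <net/ip.h>

    /* Hypothetical helper: one pskb_may_pull() up front guarantees the
     * header bytes are in the linear skb area before they are read. */
    static bool payload_ttl_ok(struct sk_buff *skb)
    {
            struct iphdr *hdr;

            if (!pskb_may_pull(skb, sizeof(*hdr)))  /* bytes really present? */
                    return false;
            hdr = ip_hdr(skb);
            return hdr->ttl > 1;                    /* now safe to examine */
    }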
-
Jaeden Amero authored
The PHY state machine (in drivers/net/phy/phy.c) will unconditionally call phydev->adjust_link (macb_handle_link_change) when polling in the PHY_CHANGELINK state. As currently written, macb always ends up requesting a new tx_clk frequency in macb_handle_link_change. It is a waste of time to request a new tx_clk frequency if the link state hasn't changed, as the tx_clk will already be configured properly. Let's only request a new tx_clk clock frequency when necessary. Signed-off-by: Jaeden Amero <jaeden.amero@ni.com> Cc: Josh Cartwright <joshc@ni.com> Cc: Soren Brinkmann <soren.brinkmann@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
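A rough sketch of the idea, not the actual macb diff; the status_change bookkeeping shown here is a simplifying assumption:

    /* Only touch tx_clk when the PHY actually reported a change in link
     * parameters; otherwise the clock is already configured correctly. */
    if (status_change) {
            if (phydev->link)
                    macb_set_tx_clk(bp->tx_clk, phydev->speed, dev);
            /* ... carrier on/off handling ... */
    }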
-
Herbert Xu authored
This patch fixes a typo in rhashtable_lookup_compare where we fail to recompute the hash when looking up the new table. This causes elements to be missed and potentially a crash during a resize. Reported-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
Commit c0c09bfd ("rhashtable: avoid unnecessary wakeup for worker queue") changed ht->shift to be atomic, which is actually unnecessary. Instead of leaving the current shift in the core rhashtable structure, it can be cached inside the individual bucket tables. There, it will only be initialized once during a new table allocation in the shrink/expansion slow path, and from then onward it stays immutable for the rest of the bucket table lifetime. That allows shift to be non-atomic. The patch also moves hash_rnd management into the table setup. The rhashtable structure now consumes 3 instead of 4 cachelines. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Ying Xue <ying.xue@windriver.com> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
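A simplified view of the resulting layout, with abbreviated field names (a sketch, not the real lib/rhashtable definitions):

    /* Sketch only: shift lives in the bucket table, written once when
     * the table is allocated in the shrink/expand slow path and
     * immutable afterwards, so reading it needs no atomic ops. */
    struct bucket_table {
            size_t                  size;     /* number of buckets */
            u32                     shift;    /* log2 of size, set at alloc time */
            struct rhash_head __rcu *buckets[];
    };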
-
Herbert Xu authored
There is a potential race condition between readers and the rehasher. In particular, the rehasher could have started a rehash while the reader finishes a scan of the old table but fails to see the new table pointer. This patch closes this window by adding smp_wmb/smp_rmb. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
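Conceptually, the barrier pairing looks like the fragment below (illustrative only; the helper names and the future_tbl access are assumptions, not the exact rhashtable change):

    /* Writer (rehasher): make the moved entry visible before the new
     * table pointer is published. */
    link_entry_into(new_tbl, obj);          /* hypothetical helper */
    smp_wmb();
    WRITE_ONCE(ht->future_tbl, new_tbl);

    /* Reader: order the (possibly failed) scan of the old table before
     * re-checking for a new table, pairing with the writer's smp_wmb(). */
    obj = lookup_in(old_tbl, key);          /* hypothetical helper */
    smp_rmb();
    new_tbl = READ_ONCE(ht->future_tbl);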
-
David S. Miller authored
Eric Dumazet says: ==================== inet: tcp listener refactoring, part 8 These patches prepare for request socks being hashed into the general ehash table: we declare 3 aliases (ireq_state, ireq_refcnt, ireq_family). Note that refcnt is not yet handled; this will be done later. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Before inserting request socks into general hash table, fill their socket family. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
ireq->ir_num contains the local port, use it. Also, get_openreq4() dumping listen_sk->refcnt makes little sense. inet_diag_fill_req() can also use ireq->ir_num. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
sock_edemux() & sock_gen_put() should be ready to cope with request socks. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Make proto_register() & proto_unregister() a bit nicer. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Once request socks are in the ehash table, they'll need to be refcounted. This patch adds the rsk_refcnt/ireq_refcnt macros and a reqsk_put() function, but nothing uses them yet. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
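The helper presumably boils down to the usual refcount idiom, roughly as sketched here (simplified; not necessarily the exact definition):

    static inline void reqsk_put(struct request_sock *req)
    {
            /* Drop one reference; free the request sock on the last put. */
            if (atomic_dec_and_test(&req->rsk_refcnt))
                    reqsk_free(req);
    }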
-
Eric Dumazet authored
We need to identify request socks when they'll be visible in the global ehash table. ireq_state is an alias to req.__req_common.skc_state. Its value is set to TCP_NEW_SYN_RECV. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
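Such an alias is just a #define onto the common socket fields embedded in the request sock; a condensed sketch (layout simplified, other fields elided):

    struct inet_request_sock {
            struct request_sock     req;
    /* alias so request socks expose the same state field as full socks */
    #define ireq_state              req.__req_common.skc_state
            /* ... remaining fields ... */
    };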
-
Eric Dumazet authored
The TCP_SYN_RECV state is currently used by fast open sockets. Initial TCP requests (the pseudo sockets created when a SYN is received) are not yet associated with a state. They are attached to their parent, and the parent is in TCP_LISTEN state. This commit adds the TCP_NEW_SYN_RECV state, so that we can convert the TCP stack to a different scheme gradually. This state is not exported to user space. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
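For reference, the standard TCP state enum with the new value appended at the end, before TCP_MAX_STATES (condensed; as the commit notes, the state is not exposed to user space):

    enum {
            TCP_ESTABLISHED = 1,
            TCP_SYN_SENT,
            TCP_SYN_RECV,
            TCP_FIN_WAIT1,
            TCP_FIN_WAIT2,
            TCP_TIME_WAIT,
            TCP_CLOSE,
            TCP_CLOSE_WAIT,
            TCP_LAST_ACK,
            TCP_LISTEN,
            TCP_CLOSING,
            TCP_NEW_SYN_RECV,       /* internal: embryonic request socks */
            TCP_MAX_STATES          /* keep at the end */
    };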
-
Eric Dumazet authored
I forgot to update dccp_v6_conn_request() & cookie_v6_check(). They both need to set ireq->ireq_net and ireq->ir_cookie. Let's clear ireq->ir_cookie in inet_reqsk_alloc(). Signed-off-by: Eric Dumazet <edumazet@google.com> Fixes: 33cf7c90 ("net: add real socket cookies") Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Mar, 2015 19 commits
-
Daniel Borkmann authored
Currently, it is possible in cls_bpf to access eBPF maps only under rcu_read_lock_bh() variants: while on the ingress side, that is, handle_ing(), the classifier would be called from __netif_receive_skb_core() under rcu_read_lock(); on the egress side, however, it's rcu_read_lock_bh() via __dev_queue_xmit(). This rcu/rcu_bh mix doesn't work together with eBPF maps, as they are required to be called solely under rcu_read_lock(). eBPF maps could also be shared among various other eBPF programs (possibly even with other eBPF program types, e.g. tracing) and user space processes, so any context is assumed. Therefore, a possible fix for cls_bpf is to wrap/nest the eBPF program invocation under the non-bh RCU lock variant. Fixes: e2e9b654 ("cls_bpf: add initial eBPF support for programmable classifiers") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
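A sketch of what such wrapping looks like in the classifier fast path (a fragment only; the exact placement inside cls_bpf_classify() and the surrounding list walk are omitted):

    int filter_res;

    /* Run the eBPF program under plain rcu_read_lock() so that any map
     * lookups it performs are always under the non-bh RCU variant,
     * regardless of whether the caller held rcu or rcu_bh. */
    rcu_read_lock();
    filter_res = BPF_PROG_RUN(prog->filter, skb);
    rcu_read_unlock();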
-
David S. Miller authored
Alexander Duyck says: ==================== fib_trie: Minor fixes for table merge This patch set addresses two issues reported with the tables merged, the first is a NULL pointer dereference, and the other is to remove a WARN_ON and set the ordering for aliases from different tables with the same slen values. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This change makes it so that we should always have a deterministic ordering for the main and local aliases within the merged table when two leaves overlap. So, for example, if we have a leaf with a key of 192.168.254.0 and we previously added two aliases with a prefix length of 24 from both local and main, the first one added would come first and the second would come second. When I was coding this I had added a WARN_ON should such a situation occur, as I wasn't sure how likely it would be. However, this WARN_ON has been triggered, so this is something that should be addressed. With this patch the ordering of the aliases is as follows: first they are sorted on prefix length, then on their table ID, then tos, and finally priority. This way what we end up doing is essentially interleaving the two tables on what used to be leaf_info structure boundaries. Fixes: 0ddcf43d ("ipv4: FIB Local/MAIN table collapse") Reported-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
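The resulting sort keys can be pictured with a comparison helper like the one below (purely illustrative; the helper itself is hypothetical, the field names approximate struct fib_alias after the table merge, and the comparison direction is arbitrary — only the key order matters here):

    /* Order aliases by prefix length, then table ID, then tos, then
     * priority, so local/main entries interleave deterministically. */
    static bool fib_alias_sorts_before(const struct fib_alias *a,
                                       const struct fib_alias *b)
    {
            if (a->fa_slen != b->fa_slen)
                    return a->fa_slen < b->fa_slen;
            if (a->tb_id != b->tb_id)
                    return a->tb_id < b->tb_id;
            if (a->fa_tos != b->fa_tos)
                    return a->fa_tos < b->fa_tos;
            return a->fa_info->fib_priority < b->fa_info->fib_priority;
    }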
-
Alexander Duyck authored
The function fib_unmerge assumed the local table had already been allocated. If that is not the case, however, such as when custom rules are applied, this can result in a NULL pointer dereference. In order to prevent this we must check the value of the local table pointer and, if it is NULL, simply return 0, as there is no local table to separate from the main table. Fixes: 0ddcf43d ("ipv4: FIB Local/MAIN table collapse") Reported-by: Madhu Challa <challa@noironetworks.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
I noticed that a helper function with argument type ARG_ANYTHING does not need to have an initialized value (register). This can, in the worst case, lead to unintended stack memory leakage in future helper functions if they are not carefully designed, or to unintended application behaviour in case the application developer was not careful enough to match a correct helper function signature in the API. The underlying issue is that ARG_ANYTHING should actually be split into two different semantics: 1) ARG_DONTCARE for function arguments that the helper function does not care about (in other words: the default for unused function arguments), and 2) ARG_ANYTHING for an argument actually being used by a helper function and *guaranteed* to be an initialized register. The current risk is low: ARG_ANYTHING is only used for the 'flags' argument (r4) in bpf_map_update_elem() that internally does strict checking. Fixes: 17a52670 ("bpf: verifier (add verifier core)") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
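The split presumably makes the verifier's argument check look roughly like this approximate fragment (not the literal diff; the reg/regno/verbose names follow the verifier's conventions but are assumptions here):

    if (arg_type == ARG_DONTCARE)
            return 0;               /* unused argument: nothing to verify */

    if (reg->type == NOT_INIT) {
            verbose("R%d !read_ok\n", regno);
            return -EACCES;         /* ARG_ANYTHING now demands an initialized register */
    }

    if (arg_type == ARG_ANYTHING)
            return 0;               /* any initialized value is acceptable */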
-
David S. Miller authored
Eric W. Biederman says: ==================== Introduce possible_net_t The current usage of write_pnet and read_pnet is a little laborious and error prone, as you only notice that you failed to include them if you are compiling with network namespaces enabled. possible_net_t remedies that by using a type that is 0 bytes when network namespaces are disabled and can only be read and written to with read_pnet and write_pnet. Aka less work and safer for the same effect. I kill hold_net and release_net first, as they haven't been used since 2008 and are noise at the points where write_pnet and read_pnet are used. I have folded in Eric Dumazet's suggestions to improve the killing of hold_net and release_net, and respun. I had to respin anyway, as there were enough changes elsewhere in the tree that the previous version of these patches did not quite apply cleanly. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric W. Biederman authored
Having to say > #ifdef CONFIG_NET_NS > struct net *net; > #endif in structures is a little bit wordy and a little bit error prone. Instead it is possible to say: > typedef struct { > #ifdef CONFIG_NET_NS > struct net *net; > #endif > } possible_net_t; And then in a header say: > possible_net_t net; Which is cleaner and easier to use and easier to test, as the possible_net_t is always there no matter what the compile options. Further this allows read_pnet and write_pnet to be functions in all cases, which is better at catching typos. This change adds possible_net_t, updates the definitions of read_pnet and write_pnet, updates the optional struct net * variables that write_pnet is used on to have the type possible_net_t, and finally fixes up the b0rked users of read_pnet and write_pnet. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
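Condensed from the description above, the resulting type and accessors look roughly like this (a sketch; the &init_net fallback is how read_pnet keeps returning something sensible when namespaces are compiled out):

    typedef struct {
    #ifdef CONFIG_NET_NS
            struct net *net;        /* the struct is empty (0 bytes) without CONFIG_NET_NS */
    #endif
    } possible_net_t;

    static inline void write_pnet(possible_net_t *pnet, struct net *net)
    {
    #ifdef CONFIG_NET_NS
            pnet->net = net;
    #endif
    }

    static inline struct net *read_pnet(const possible_net_t *pnet)
    {
    #ifdef CONFIG_NET_NS
            return pnet->net;
    #else
            return &init_net;
    #endif
    }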
-
Eric W. Biederman authored
hold_net and release_net were an idea that turned out to be useless. The code has been disabled since 2008. Kill the code; it is long past due. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Herbert Xu says: ==================== rhashtable hash cleanups This is a rebase on top of the nested lock annotation fix. Nothing to see here, just a bunch of simple clean-ups before I move onto something more substantial (hopefully). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
Now that the only caller of obj_raw_hashfn is head_hashfn, we can simply kill it and fold it into the latter. This patch also moves the common shift from head_hashfn/key_hashfn into rht_bucket_index. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
key_hashfn has only one caller and it doesn't really need to supply the key length as an extra parameter. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
Now that we don't have cross-table hashes, we no longer need to keep the entire hash value so all users of obj_raw_hashfn can use head_hashfn instead. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
This patch reverts commit c88455ce ("rhashtable: key_hashfn() must return full hash value") because the only user of it always masks the hash value. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Emmanuel Grumbach authored
-
Johannes Berg authored
The return value in iwl_mvm_get_wakeup_status() is a bit unclear in that it's not obvious that we don't leak fw_status in some cases. Use fw_status directly with ERR_PTR() and return only it; that way the compiler has a chance of proving that it's uninitialized (if it ever is, due to new changes). Additionally, this removes a smatch warning, since smatch couldn't figure out that fw_status can't, in fact, leak here. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Luciano Coelho authored
When IWLWIFI_DEBUGFS is not set, we should not unlock the mutex after calling iwl_mvm_query_wakeup_reasons(), because this function unlocks it already. Move the goto out_iterate outside the #ifdef. Change-Id: I13d86402aecf0eeec44b1abbe2b244fbc706a5eb Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
-
Johannes Berg authored
The code here is a little confusing: iwl_mvm_te_check_disconnect() will check that the interface is a station, but going into it after already having processed the time event end for P2P seems strange at first look. Put a switch statement there to distinguish the interface types and make this more readable. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Eran Harary authored
We previously enabled the smart FIFO (SF) in BSS only after association. This caused interrupt latency on P2P on certain devices. Change the working model to enable the SF all the time and play with the timeout values based on the association state. This change was not tested on older firmwares, so make it happen only on -13.ucode and up. Signed-off-by: Eran Harary <eran.harary@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Emmanuel Grumbach authored
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-