- 17 Jul, 2017 40 commits
-
Logan Gunthorpe authored
Presently, the order of the block devices listed in /proc/devices is not entirely sequential. If a block device has a major number greater than BLKDEV_MAJOR_HASH_SIZE (255), it will be ordered as if its major were modulo 255. For example, 511 appears after 1. This patch cleans that up and prints each major number in the correct order, regardless of where it is stored in the hash table. In order to do this, we introduce BLKDEV_MAJOR_MAX as an artificial limit (chosen to be 512). All devices are then printed in major number order, from 0 to the maximum. Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Jeff Layton <jlayton@poochiereds.net> Cc: "J. Bruce Fields" <bfields@fieldses.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
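A minimal sketch of the resulting scheme, assuming genhd's hash-table internals (major_names[], struct blk_major_name) and a hypothetical helper name: the /proc/devices iterator calls this once per major, counting up from 0 to BLKDEV_MAJOR_MAX - 1, so output comes out in ascending major order even though several majors share a bucket.

    #define BLKDEV_MAJOR_MAX 512

    /* print the entry for exactly this major, if one is registered;
     * the bucket is major % BLKDEV_MAJOR_HASH_SIZE, so it may also
     * hold other majors, which are skipped here */
    static void show_one_block_major(struct seq_file *seqf, unsigned int major)
    {
            struct blk_major_name *dp;

            for (dp = major_names[major % BLKDEV_MAJOR_HASH_SIZE]; dp; dp = dp->next)
                    if (dp->major == major)
                            seq_printf(seqf, "%3d %s\n", dp->major, dp->name);
    }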
-
Logan Gunthorpe authored
Presently, the order of the char devices listed in /proc/devices is not entirely sequential. If a char device has a major number greater than CHRDEV_MAJOR_HASH_SIZE (255), it will be ordered as if its major were modulo 255. For example, 511 appears after 1. This patch cleans that up and prints each major number in the correct order, regardless of where it is stored in the hash table. In order to do this, we introduce CHRDEV_MAJOR_MAX as an artificial limit (chosen to be 511). All devices are then printed in major number order, from 0 to the maximum. Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Alan Cox <alan@linux.intel.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Logan Gunthorpe authored
We've run into problems with running out of dynamically assigned char device majors, particularly on automated test systems with all-yes-configs. Roughly 40 dynamic assignments can be made with such kernels at this time, while space is reserved for only 20. Currently, the kernel only prints a warning when dynamic allocation overflows the reserved region, and when this happens drivers that have fixed assignments can randomly fail depending on the order of initialization of other drivers. Thus, adding a new char device can cause unexpected failures in completely unrelated parts of the kernel. This patch solves the problem by extending dynamic major number allocations down from 511 once the 234-254 region fills up. Fixed majors already exist above 255, so the infrastructure to support high-numbered majors is already in place. The patch reserves an additional 128 major numbers, which should hopefully last us a while. Kernels that don't require more than 20 dynamically assigned majors (which is pretty typical) should not be affected by this change. Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Alan Cox <alan@linux.intel.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Linus Walleij <linus.walleij@linaro.org> Link: https://lkml.org/lkml/2017/6/4/107 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
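A sketch of the extended search under the figures given above (234-254 classic region, 128 extra majors below 511); the helper and structure names follow fs/char_dev.c conventions but are presented here as assumptions:

    static int find_dynamic_major(void)
    {
            int i;
            struct char_device_struct *cd;

            /* classic dynamic region: 254 down to 234 */
            for (i = 254; i >= 234; i--)
                    if (chrdevs[major_to_index(i)] == NULL)
                            return i;

            /* extended region: 511 down to 384 (128 majors), skipping
             * majors that already carry a fixed assignment */
            for (i = 511; i >= 384; i--) {
                    for (cd = chrdevs[major_to_index(i)]; cd; cd = cd->next)
                            if (cd->major == i)
                                    break;
                    if (cd == NULL)
                            return i;
            }

            return -EBUSY;
    }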
-
Tomas Winkler authored
The device 'new_id' interface is useful for testing not-yet-published hardware on older kernels and for internally used device ids on simulation platforms. However, with the device configuration currently held in the device_id driver data as a pointer to a mei_cfg structure, this is hard, as one needs to locate the address of the correct structure. A recommended way of doing this is to use an index instead of a pointer. This patch adds a new list of configurations, mei_cfg_list[], indexed via enum mei_cfg_idx. In addition, it cleans up the ICH platform naming: the legacy generation is renamed to ich, and what was previously ich becomes ich10. Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
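An illustrative sketch of the indexed layout (the enum member and config-structure names are assumptions based on the renaming described above, not verified identifiers):

    enum mei_cfg_idx {
            MEI_ME_ICH_CFG,         /* was the "legacy" generation */
            MEI_ME_ICH10_CFG,       /* was "ich" before the rename */
            MEI_ME_PCH_CFG,
            MEI_ME_NUM_CFG,
    };

    /* driver_data in the device_id table now carries a small enum index
     * instead of a structure address, so a new_id user can name a
     * configuration without knowing any kernel addresses */
    static const struct mei_cfg *const mei_cfg_list[MEI_ME_NUM_CFG] = {
            [MEI_ME_ICH_CFG]   = &mei_me_ich_cfg,
            [MEI_ME_ICH10_CFG] = &mei_me_ich10_cfg,
            [MEI_ME_PCH_CFG]   = &mei_me_pch_cfg,
    };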
-
Alexander Usyskin authored
On some platforms, currently Broxton, Apollo Lake and Kaby Lake, the ME FW may be busy with internal bookkeeping and answer late to the start message. As a mitigation, the driver requests asynchronous probing to prevent stalling the overall boot process. For example, on an Apollo Lake platform the overall boot time was reduced from ~0.9 to ~0.6 seconds on average. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
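The driver core's generic knob for this is the probe_type field; a minimal sketch of a PCI driver opting in (struct abridged, other members assumed):

    static struct pci_driver mei_me_driver = {
            .name = KBUILD_MODNAME,
            /* ... .id_table, .probe, .remove ... */
            /* probe from a worker so a slow-to-answer ME FW does not
             * hold up the rest of the boot sequence */
            .driver.probe_type = PROBE_PREFER_ASYNCHRONOUS,
    };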
-
Dmitry Safonov authored
It has never been used since the addition of binder to the mainline Linux tree. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: "Arve Hjønnevåg" <arve@android.com> Cc: Riley Andrews <riandrews@android.com> Cc: devel@driverdev.osuosl.org Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Krzysztof Opasiak authored
Use the rlimit() helper instead of manually open-coding the whole chain from the current task down to rlim_cur. Signed-off-by: Krzysztof Opasiak <k.opasiak@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
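For illustration, the before/after shape of such a change, assuming a binder_set_nice()-style call site using RLIMIT_NICE:

    /* before: open-coding the walk through the signal struct */
    min_nice = rlimit_to_nice(current->signal->rlim[RLIMIT_NICE].rlim_cur);

    /* after: rlimit() reads the current task's soft limit */
    min_nice = rlimit_to_nice(rlimit(RLIMIT_NICE));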
-
Todd Kjos authored
Remove global mutex and rely on fine-grained locking Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Martijn Coenen authored
A race existed where one thread could register a death notification for a node, while another thread was cleaning up that node and sending out death notifications for its references, causing simultaneous access to ref->death because different locks were held. Signed-off-by: Martijn Coenen <maco@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
When printing transactions, there were several race conditions that could cause a stale pointer to be dereferenced. Fixed by reading the pointer once and using it if valid (which is safe). The transaction buffer also needed protection via the proc lock, so it is only printed if we are holding the correct lock. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Use proc->outer_lock to protect the binder_ref structure. The outer lock allows functions operating on the binder_ref to do nested acquires of node and inner locks as necessary to attach refs to nodes atomically. Binder refs must never be accessed without holding the outer lock. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Use the inner lock to protect thread accounting fields in proc structure: max_threads, requested_threads, requested_threads_started and ready_threads. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Martijn Coenen authored
This makes future changes to priority inheritance easier, since we want to be able to look at a thread's transaction stack when selecting a thread to inherit priority for. It also allows us to take just a single lock in a few paths, where we used to take two in succession. Signed-off-by: Martijn Coenen <maco@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
proc->threads will need to be accessed with higher locks of other processes held, so use proc->inner_lock to protect it. proc->tmp_ref now also needs to be protected by proc->inner_lock. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
When locks for binder_ref handling are added, proc->nodes will need to be modified while holding the outer lock Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
node->node_lock is used to protect elements of node. No need to acquire for fields that are invariant: debug_id, ptr, cookie. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
The todo lists in the proc, thread, and node structures are accessed by other procs/threads to place work items on the queue. The todo lists are protected by the new proc->inner_lock. No locks should ever be nested under these locks. As the name suggests, an outer lock will be introduced in a later patch. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
For correct behavior we need to hold the inner lock when dequeuing and processing node work in binder_thread_read. We now hold the inner lock when we enter the switch statement and release it after processing anything that might be affected by other threads. We also need to hold the inner lock to protect the node weak/strong ref tracking fields as long as node->proc is non-NULL (if it is NULL then we are guaranteed that we don't have any node work queued). This means that other functions that manipulate these fields must hold the inner lock. Refactored these functions to use the inner lock. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
There are 3 main spinlocks which must be acquired in this order:

1) proc->outer_lock : protects most fields of the binder_proc, binder_thread, and binder_ref structures. binder_proc_lock() and binder_proc_unlock() are used to acq/rel.
2) node->lock : protects most fields of binder_node. binder_node_lock() and binder_node_unlock() are used to acq/rel.
3) proc->inner_lock : protects the thread and node lists (proc->threads, proc->nodes) and all todo lists associated with the binder_proc (proc->todo, thread->todo, proc->delivered_death and node->async_todo). binder_inner_proc_lock() and binder_inner_proc_unlock() are used to acq/rel.

Any lock under procA must never be nested under any lock at the same level or below on procB.

Functions that require a lock held on entry indicate which lock in the suffix of the function name:

foo_olocked() : requires proc->outer_lock
foo_nlocked() : requires node->lock
foo_ilocked() : requires proc->inner_lock
foo_iolocked(): requires proc->outer_lock and proc->inner_lock
foo_nilocked(): requires node->lock and proc->inner_lock

Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
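In practice the helpers compose as sketched below, taking locks in the documented order and releasing them in reverse (the body is illustrative only):

    binder_proc_lock(proc);                 /* 1) proc->outer_lock */
    binder_node_lock(node);                 /* 2) node->lock */
    binder_inner_proc_lock(proc);           /* 3) proc->inner_lock */

    /* safe to touch proc->threads, proc->nodes, the todo lists
     * and node state here */

    binder_inner_proc_unlock(proc);
    binder_node_unlock(node);
    binder_proc_unlock(proc);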
-
Todd Kjos authored
When obtaining a node via binder_get_node(), binder_get_node_from_ref() or binder_new_node(), increment node->tmp_refs to take a temporary reference on the node to ensure the node persists while being used. binder_put_node() must be called to remove the temporary reference. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
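The resulting usage pattern, sketched with a hypothetical caller body (use_node() is not a real function):

    struct binder_node *node;

    node = binder_get_node(proc, ptr);      /* increments node->tmp_refs */
    if (node) {
            /* node cannot be freed while the temporary ref is held */
            use_node(node);
            binder_put_node(node);          /* drops the temporary ref */
    }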
-
Todd Kjos authored
Once locks are added, a binder_ref will only be accessed safely with the proc lock held. Refactor the inc/dec paths to make them atomic with the binder_get_ref* paths and node inc/dec. For example, instead of:

    ref = binder_get_ref(proc, handle, strong);
    ...
    binder_dec_ref(ref, strong);

we now have:

    ret = binder_dec_ref_for_handle(proc, handle, strong, &rdata);

Since the actual ref is no longer exposed to callers, a new struct binder_ref_data is introduced which can be used to return a copy of the ref state. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
binder_thread and binder_proc may be accessed by other threads while a transaction is being processed. Therefore they must be prevented from being freed while a transaction that references them is in progress. This is done by introducing a temporary reference counter for threads and procs that indicates that the object is in use and must not be freed. binder_thread_dec_tmpref() and binder_proc_dec_tmpref() are used to decrement the temporary reference. It is safe to free a binder_thread if there is no reference and it has been released (indicated by thread->is_dead). It is safe to free a binder_proc if it has no remaining threads and no reference. A spinlock is added to the binder_transaction to safely access and set references for t->from, and for debug code to safely access t->to_thread and t->to_proc. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
When initiating a transaction, the target_node must have a strong ref on it. Then we take a second strong ref to make sure the node survives until the transaction is complete. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Since errors are tracked in the return_error/return_error2 fields of the binder_thread object and BR_TRANSACTION_COMPLETEs can be tracked either in those fields or via the thread todo work list, it is possible for errors to be reported ahead of the associated txn complete. Use the thread todo work list for errors to guarantee order. Also changed binder_send_failed_reply to pop the transaction even if it failed to send a reply. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
binder_pop_transaction needs to be split into two pieces to allow the proc lock to be held on entry to dequeue the transaction stack, but no lock when kfree'ing the transaction. Split into binder_pop_transaction_locked and binder_free_transaction (the actual locks are still to be added). Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
The log->next index for the transaction log was not protected when incremented. This led to a case where log->next++ resulted in an index larger than ARRAY_SIZE(log->entry) and eventually a bad access to memory. Fixed by making the log index an atomic64 and converting it to an array index using "% ARRAY_SIZE(log->entry)". Also added a "complete" field to the log entry, which is written last, to tell the print code whether the entry is complete. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
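A sketch of the fixed scheme (field names assumed): the counter only ever grows and is reduced to an array slot at the point of use, so racing increments can no longer yield an out-of-range index:

    static struct binder_transaction_log_entry *
    binder_transaction_log_add(struct binder_transaction_log *log)
    {
            struct binder_transaction_log_entry *e;
            u64 cur = atomic64_inc_return(&log->next) - 1;

            e = &log->entry[cur % ARRAY_SIZE(log->entry)];
            e->complete = false;    /* the caller fills the entry and sets
                                     * complete = true last, so the print
                                     * code can skip half-written entries */
            return e;
    }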
-
Martijn Coenen authored
Display information about allocated/free space whenever binder buffer allocation fails on synchronous transactions. Signed-off-by: Martijn Coenen <maco@android.com> Signed-off-by: Siqi Lin <siqilin@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Adds protection against malicious user code freeing the same buffer at the same time, which could cause a crash. This cannot happen under normal use. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
node is always non-NULL in binder_get_ref_for_node so the conditional and else clause are not needed Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
The looper member of struct binder_thread is a bitmask of control bits. All of the existing bits are modified by the affected thread except for BINDER_LOOPER_STATE_NEED_RETURN which can be modified in binder_deferred_flush() by another thread. To avoid adding a spinlock around all read-mod-writes to modify a bit, the BINDER_LOOPER_STATE_NEED_RETURN flag is replaced by a separate field in struct binder_thread. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
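Sketched as a struct change (abridged; the new field name is an assumption consistent with the description):

    struct binder_thread {
            /* ... */
            int looper;                 /* bitmask, written only by its
                                         * own thread */
            bool looper_need_return;    /* may be set by another thread in
                                         * binder_deferred_flush(); a plain
                                         * bool avoids a racy read-mod-write
                                         * on the shared bitmask */
    };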
-
Todd Kjos authored
Currently, the transaction complete work item is queued after the transaction. This means that it is possible for the transaction to be handled and a reply to be enqueued in the current thread before the transaction complete is enqueued, which violates the protocol with userspace, which may not expect the transaction complete. Fixed by always enqueuing the transaction complete first. Also, once the transaction is enqueued, it is unsafe to access it since it might be freed. Currently, t->flags is accessed to determine whether a sync wake is needed. Changed to access tr->flags instead. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
In binder_thread_read, the BINDER_WORK_NODE command is used to communicate the references on the node to userspace. It can take a couple of iterations in the loop to construct the list of commands for userspace. When locking is added, the lock would need to be released on each iteration, which means the state could change. The work item is not dequeued during this process, which prevents a simpler queue management that could just dequeue up front and handle the work item. Fixed by changing the BINDER_WORK_NODE algorithm in binder_thread_read to determine which commands to send to userspace atomically in one pass, so it stays consistent with the kernel view. The work item is now dequeued immediately since only one pass is needed. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Add additional information to determine the cause of binder failures. Adds the following to the failed transaction log and kernel messages:

return_error : value returned for the transaction
return_error_param : errno returned by the binder allocator
return_error_line : line number where the error was detected

Also, return BR_DEAD_REPLY if an allocation error indicates a dead proc (-ESRCH). Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Use an atomic for binder_last_id to avoid locking it Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
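Sketch of the pattern: atomic_inc_return() hands out a unique id with no lock held.

    static atomic_t binder_last_id;

    /* e.g. when creating a transaction */
    t->debug_id = atomic_inc_return(&binder_last_id);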
-
Badhri Jagan Sridharan authored
Use atomics for stats to avoid needing to lock for increments/decrements Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Add binder_dead_nodes_lock, binder_procs_lock, and binder_context_mgr_node_lock to protect the associated global lists Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
With the global lock, there was a mechanism to access binder driver debugging information with the global lock disabled to debug deadlocks or other issues. This mechanism is rarely (if ever) used anymore and wasn't needed during the development of fine-grained locking in the binder driver. Removing it. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Move the binder allocator functionality to its own file. Continuation of splitting the binder allocator from the binder driver. Split binder_alloc functions from normal binder functions. Add kernel-doc comments to functions declared extern in binder_alloc.h. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
Continuation of splitting the binder allocator from the binder driver. Separate binder_alloc functions from normal binder functions. Protect the allocator with a separate mutex. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Todd Kjos authored
The buffer's transaction has already been freed before binder_deferred_release. No need to do it again. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-