- 04 May, 2017 30 commits
-
-
Alexander Graf authored
The file open flags (O_foo) are platform specific and should never go out to an interface that is not local to the system. Unfortunately these flags have leaked out onto the wire in the cephfs implementation. That led to bogus flags getting transmitted on ppc64. This patch converts the kernel view of flags to the ceph view of file open flags. Fixes: 124e68e7 ("ceph: file operations") Signed-off-by: Alexander Graf <agraf@suse.de> Reviewed-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
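To make the fix concrete, below is a hedged, kernel-style sketch of the kind of translation helper the description implies; the ceph_flags_sys2wire() name and the CEPH_O_* values are assumptions inferred from the commit message, not a copy of the actual patch.

    /* Hedged sketch: map platform-specific O_* flags to fixed on-wire values
     * so the encoding no longer depends on the architecture (e.g. ppc64).
     * The CEPH_O_* values below are illustrative assumptions. */
    #include <linux/types.h>
    #include <linux/fcntl.h>

    #define CEPH_O_RDONLY    00000000
    #define CEPH_O_WRONLY    00000001
    #define CEPH_O_RDWR      00000002
    #define CEPH_O_CREAT     00000100
    #define CEPH_O_EXCL      00000200
    #define CEPH_O_TRUNC     00001000
    #define CEPH_O_DIRECTORY 00200000

    static u32 ceph_flags_sys2wire(u32 flags)
    {
            u32 wire_flags = 0;

            switch (flags & O_ACCMODE) {
            case O_RDONLY:
                    wire_flags |= CEPH_O_RDONLY;
                    break;
            case O_WRONLY:
                    wire_flags |= CEPH_O_WRONLY;
                    break;
            case O_RDWR:
                    wire_flags |= CEPH_O_RDWR;
                    break;
            }

            if (flags & O_CREAT)
                    wire_flags |= CEPH_O_CREAT;
            if (flags & O_EXCL)
                    wire_flags |= CEPH_O_EXCL;
            if (flags & O_TRUNC)
                    wire_flags |= CEPH_O_TRUNC;
            if (flags & O_DIRECTORY)
                    wire_flags |= CEPH_O_DIRECTORY;

            return wire_flags;
    }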
-
Yan, Zheng authored
The dirfragtree is lazily updated, so it's not always accurate. An infinite loop happens in the following circumstance. - client sends a request to read frag A - frag A has been fragmented into frags B and C, so the mds fills the reply with the contents of frag B - client wants to read the next frag, C, but ceph_choose_frag(frag value of C) returns frag A. The fix is to use the previous readdir reply to calculate the next readdir frag when possible. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Ilya Dryomov authored
Support disabling automatic exclusive lock transfers to allow users to be in charge of which node should own the lock while being able to reuse exclusive lock's built-in blacklist/break-lock functionality. Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Ilya Dryomov authored
Right now it's just 0, but "no automatic exclusive lock transfers" mode code will need -EROFS. Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Ilya Dryomov authored
Currently the exclusive lock is acquired only if the mapping is writable, i.e. an image HEAD mapped in rw mode. This means that we don't acquire the lock for executing a read from a snapshot or an image HEAD mapped in ro mode, even if lock_on_read is set. This is somewhat weird and inconsistent with "no automatic exclusive lock transfers" mode, where the lock is acquired unconditionally. Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Ilya Dryomov authored
As we no longer release the lock before potentially raising BLACKLISTED in rbd_reregister_watch(), the "either locked or blacklisted" assert in rbd_queue_workfn() needs to go: we can be both locked and blacklisted at that point now. Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Ilya Dryomov authored
In preparation for supporting the set_cookie method (or rather a set_cookie fallback for older OSDs), store the lock cookie on lock and use it on unlock instead of recalculating it from rbd_dev->watch_cookie. Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Ilya Dryomov authored
Currently the lock_state is set to UNLOCKED (preventing further I/O), but the RELEASED_LOCK notification isn't sent. Be consistent with userspace and treat ceph_cls_unlock() errors as if the image were unlocked. Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Ilya Dryomov authored
add_disk() takes an extra reference on disk->queue, which is put in put_disk() -> disk_release(). Avoiding blk_cleanup_queue() (which also puts the queue) until add_disk() sets GENHD_FL_UP works for the queue itself, but leaks various queue internals. Conditioning tag_set freeing on GENHD_FL_UP is wrong too: all error paths after rbd_init_disk() leak the tag_set. Move the final "announce" steps out of rbd_dev_device_setup() so that it can be unwound like any other function. Leave "announce" steps to do_rbd_add/remove(). Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
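A hedged sketch of the ordering described above (assumes drivers/block/rbd.c context; function names here are illustrative, not the actual patch): everything that can fail stays inside a setup/teardown pair that is symmetric, and the one step that cannot be unwound, add_disk(), moves to the very end of do_rbd_add().

    /* Hedged sketch: announce the device only after nothing can fail. */
    static int do_rbd_add_sketch(struct rbd_device *rbd_dev)
    {
            int ret;

            ret = rbd_dev_device_setup(rbd_dev);      /* queue, tag_set, disk, sysfs */
            if (ret)
                    return ret;

            add_disk(rbd_dev->disk);                  /* final "announce" step */
            return 0;
    }

    static void rbd_dev_device_teardown_sketch(struct rbd_device *rbd_dev)
    {
            /* usable from every error path as well as from do_rbd_remove();
             * once the disk has been announced, del_gendisk() would come first */
            blk_cleanup_queue(rbd_dev->disk->queue);
            blk_mq_free_tag_set(&rbd_dev->tag_set);
            put_disk(rbd_dev->disk);
    }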
-
Ilya Dryomov authored
rbd_dev->disk tear down vs rbd_watch_cb() race shouldn't be a problem anymore thanks to EXISTS and REMOVING checks in rbd_dev_update_size(). A similar race could occur on "rbd map", see commit 811c6688 ("rbd: fix rbd map vs notify races"). Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Ilya Dryomov authored
... to simplify error handling in do_rbd_add(). Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Jason Dillaman <dillaman@redhat.com>
-
Jeff Layton authored
Currently, we don't have a real feedback mechanism in place for when we start seeing buffered writeback errors. If writeback is failing, there is nothing that prevents an application from continuing to dirty pages that aren't being cleaned. In the event that we're seeing write errors of any sort occur on an inode, have the callback set a flag to force further writes to be synchronous. When the next write succeeds, clear the flag to allow buffered writeback to continue. Since this is just a hint to the write submission mechanism, we only take the i_ceph_lock when a lockless check shows that the flag needs to be changed. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
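A hedged sketch of that lockless-check-then-lock pattern; the CEPH_I_ERROR_WRITE flag name and bit value are assumptions based on the description, while i_ceph_flags and i_ceph_lock are the existing ceph_inode_info fields (assumes fs/ceph/super.h context).

    #define CEPH_I_ERROR_WRITE (1 << 12)   /* illustrative bit value */

    /* Hedged sketch: the flag is only a hint, so take i_ceph_lock only when a
     * lockless test says it actually has to change. */
    static void ceph_set_error_write(struct ceph_inode_info *ci)
    {
            if (!(READ_ONCE(ci->i_ceph_flags) & CEPH_I_ERROR_WRITE)) {
                    spin_lock(&ci->i_ceph_lock);
                    ci->i_ceph_flags |= CEPH_I_ERROR_WRITE;   /* writeback error seen */
                    spin_unlock(&ci->i_ceph_lock);
            }
    }

    static void ceph_clear_error_write(struct ceph_inode_info *ci)
    {
            if (READ_ONCE(ci->i_ceph_flags) & CEPH_I_ERROR_WRITE) {
                    spin_lock(&ci->i_ceph_lock);
                    ci->i_ceph_flags &= ~CEPH_I_ERROR_WRITE;  /* a write succeeded again */
                    spin_unlock(&ci->i_ceph_lock);
            }
    }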
-
Jeff Layton authored
This reverts commit b109eec6. If I'm filling up a filesystem with this sort of command: $ dd if=/dev/urandom of=/mnt/cephfs/fillfile bs=2M oflag=sync ...then I'll eventually get back EIO on a write. Further calls will give us ENOSPC. I'm not sure what prompted this change, but I don't think it's what we want to do. If writepages failed, we will have already set the mapping error appropriately, and that's what gets reported by fsync() or close(). __filemap_fdatawait_range(), however, does this: wait_on_page_writeback(page); if (TestClearPageError(page)) ret = -EIO; ...and that -EIO ends up trumping the mapping's error if one exists. When writepages fails, we only want to set the error in the mapping, and not flag the individual pages. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
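A hedged illustration of the intended error propagation (the helper name is mine, not the reverted code): the writepages error path records the error on the mapping only, so fsync()/close() report it via the mapping, rather than tagging individual pages with PageError.

    #include <linux/pagemap.h>

    /* Hedged sketch: record a writeback failure on the mapping, not on pages. */
    static void writepages_record_error(struct address_space *mapping, int err)
    {
            if (err)
                    mapping_set_error(mapping, err);  /* surfaced by filemap_check_errors() */
            /* deliberately no SetPageError(page) here -- see the revert rationale above */
    }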
-
Jeff Layton authored
Have the client store and update the osdc epoch_barrier when a cap message comes in with one. When sending cap messages, send the epoch barrier as well. This allows clients to inform servers that their released caps may not be used until a particular OSD map epoch. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Jeff Layton authored
Cephfs can get cap update requests that contain a new epoch barrier in them. When that happens we want to pause all OSD traffic until the right map epoch arrives. Add an epoch_barrier field to ceph_osd_client that is protected by the osdc->lock rwsem. When the barrier is set, and the current OSD map epoch is below that, pause the request target when submitting the request or when revisiting it. Add a way for upper layers (cephfs) to update the epoch_barrier as well. If we get a new map, compare the new epoch against the barrier before kicking requests and request another map if the map epoch is still lower than the one we want. If we get a map with a full pool, or at quota condition, then set the barrier to the current epoch value. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
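A hedged sketch of the barrier update under the osdc->lock rwsem; epoch_barrier is the field this patch adds, and the double-checked upgrade from read to write lock is how I read the description, not necessarily the exact code.

    #include <linux/ceph/osd_client.h>

    /* Hedged sketch: the barrier only ever moves forward; submit/revisit paths
     * pause a request while osdc->osdmap->epoch < osdc->epoch_barrier. */
    void ceph_osdc_update_epoch_barrier(struct ceph_osd_client *osdc, u32 eb)
    {
            down_read(&osdc->lock);
            if (unlikely(eb > osdc->epoch_barrier)) {
                    up_read(&osdc->lock);
                    down_write(&osdc->lock);
                    if (eb > osdc->epoch_barrier)     /* re-check after dropping the lock */
                            osdc->epoch_barrier = eb;
                    up_write(&osdc->lock);
            } else {
                    up_read(&osdc->lock);
            }
    }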
-
Jeff Layton authored
When a Ceph volume hits capacity, a flag is set in the OSD map to indicate that, and a new map is sprayed around the cluster. With cephfs we want it to shut down any abortable requests that are in progress with an -ENOSPC error as they'd just hang otherwise. Add a new ceph_osdc_abort_on_full helper function to handle this. It will first check whether there is an out-of-space condition in the cluster and then walk the tree and abort any request that has r_abort_on_full set with a -ENOSPC error. Call this new function directly whenever we get a new OSD map. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
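A hedged sketch of the per-request policy only; the real helper walks libceph's per-OSD request trees (omitted here), complete_request_sketch() is a hypothetical stand-in for the completion path, r_abort_on_full is the new field this series adds, and the pool-quota check is left out.

    #include <linux/ceph/osd_client.h>
    #include <linux/ceph/rados.h>

    void complete_request_sketch(struct ceph_osd_request *req, int err);  /* hypothetical */

    /* Hedged sketch: fail opted-in requests with -ENOSPC instead of stalling. */
    static void maybe_abort_on_full(struct ceph_osd_client *osdc,
                                    struct ceph_osd_request *req)
    {
            if (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) &&   /* cluster out of space */
                req->r_abort_on_full)                         /* caller asked not to block */
                    complete_request_sketch(req, -ENOSPC);
    }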
-
Jeff Layton authored
Usually, when the osd map is flagged as full or the pool is at quota, write requests just hang. This is not what we want for cephfs, where it would be better to simply report -ENOSPC back to userland instead of stalling. If the caller knows that it will want an immediate error return instead of blocking on a full or at-quota error condition, then allow it to set a flag to request that behavior. Set that flag in ceph_osdc_new_request (since ceph.ko is the only caller), and on any other write request from ceph.ko. A later patch will deal with requests that were submitted before the new map showing the full condition came in. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Jeff Layton authored
Nothing uses this anymore with the removal of the ack vs. commit code. Remove the field and just encode zeroes into place in the request encoding. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Yan, Zheng authored
The current cephfs client uses a string to indicate the start position of readdir; the string is the last entry of the previous readdir reply. This approach does not work for seeky readdir, because we cannot easily convert the new position to a string. For seeky readdir, the mds needs to return dentries from the beginning, and the client keeps retrying if the reply does not contain the dentry it wants. In the current version of ceph, the mds sorts CDentry objects in its cache in hash order, and the client also uses the dentry hash to compose the dir position. For seeky readdir, if the client passes the hash part of the dir position to the mds, the mds can avoid replying with useless dentries. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Yan, Zheng authored
If an mds has stopped, close its session and clean up its session requests/caps. The process is similar to handling SESSION_CLOSE initiated by the mds. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Yan, Zheng authored
__unregister_session() frees the session if it drops the last reference. We should grab an extra reference if we want to use the session after __unregister_session(). Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Yan, Zheng authored
mdsmap::m_max_mds is the expected count of active mds, not the max rank of active mds. A user can decrease mdsmap::m_max_mds, but that does not stop mds whose rank >= mdsmap::m_max_mds. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Yan, Zheng authored
Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Elena Reshetova authored
The refcount_t type and corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations. Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David Windsor <dwindsor@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
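A minimal, generic illustration of the conversion pattern this series applies (not the specific libceph/ceph structures touched by these commits): refcount_t saturates and WARNs where atomic_t would silently wrap.

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct foo {
            refcount_t refcount;                      /* was: atomic_t refcount; */
    };

    static void foo_get(struct foo *f)
    {
            refcount_inc(&f->refcount);               /* was: atomic_inc(&f->refcount); */
    }

    static void foo_put(struct foo *f)
    {
            if (refcount_dec_and_test(&f->refcount))  /* was: atomic_dec_and_test() */
                    kfree(f);
    }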
-
Elena Reshetova authored
The refcount_t type and corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations. Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David Windsor <dwindsor@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Elena Reshetova authored
The refcount_t type and corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations. Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David Windsor <dwindsor@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Elena Reshetova authored
The refcount_t type and corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations. Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David Windsor <dwindsor@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Elena Reshetova authored
The refcount_t type and corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations. Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David Windsor <dwindsor@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
Ilya Dryomov authored
Add a read-only module parameter, exported to sysfs, so that userspace can generate meaningful error messages. It's a bit funky, but there is no other libceph-specific place. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
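A hedged sketch of such a read-only, sysfs-visible module parameter; the parameter name and type here are illustrative, not the ones the patch actually exports.

    #include <linux/module.h>

    /* Hedged sketch: 0444 makes the parameter world-readable under
     * /sys/module/<module>/parameters/ but never writable. */
    static unsigned int supported_features;           /* illustrative name and type */
    module_param(supported_features, uint, 0444);
    MODULE_PARM_DESC(supported_features, "feature bits supported by this kernel (read-only)");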
-
Ilya Dryomov authored
No reason to hide CephFS-specific features in the rbd case. Recent feature bits mix RADOS and CephFS-specific stuff together anyway. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
-
- 01 May, 2017 1 commit
-
-
Linus Torvalds authored
-
- 30 Apr, 2017 3 commits
-
-
Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Thomas Gleixner: "The final fixes for 4.11: - prevent a triple fault with function graph tracing triggered via suspend to ram - prevent optimizing for size when function graph tracing is enabled and the compiler does not support -mfentry - prevent mwaitx() being called with a zero timeout as mwaitx() might never return. Observed on the new Ryzen CPUs" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: Prevent timer value 0 for MWAITX x86/build: convert function graph '-Os' error to warning ftrace/x86: Fix triple fault with graph tracing and suspend-to-ram
-
Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull scheduler fix from Thomas Gleixner: "A single fix for a cputime accounting regression which got introduced in the 4.11 cycle" * 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: sched/cputime: Fix ksoftirqd cputime accounting regression
-
Janakarajan Natarajan authored
Newer hardware has uncovered a bug in the software implementation of using MWAITX for the delay function. A value of 0 for the timer is meant to indicate that a timeout will not be used to exit MWAITX. On newer hardware this can result in MWAITX never returning, resulting in NMI soft lockup messages being printed. On older hardware, some of the other conditions under which MWAITX can exit masked this issue. The AMD APM does not currently document this and will be updated. Please refer to http://marc.info/?l=kvm&m=148950623231140 for information regarding NMI soft lockup messages on an AMD Ryzen 1800X. This has been root-caused as a 0 passed to MWAITX causing it to wait indefinitely. This change has the added benefit of avoiding the unnecessary setup of MONITORX/MWAITX when the delay value is zero. Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com> Link: http://lkml.kernel.org/r/1493156643-29366-1-git-send-email-Janakarajan.Natarajan@amd.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
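A hedged, simplified sketch of the guard the description implies (the real change is in the MWAITX delay path in arch/x86/lib/delay.c; this is not its exact body):

    /* Hedged sketch: never hand MWAITX a timer value of 0, and skip the
     * MONITORX/MWAITX setup entirely when there is nothing to wait for. */
    static void delay_mwaitx_sketch(unsigned long loops)
    {
            if (!loops)
                    return;   /* a 0 timer means "no timeout" -- MWAITX might never return */

            /* MONITORX/MWAITX delay loop with a non-zero timer value goes here */
    }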
-
- 29 Apr, 2017 3 commits
-
-
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds authored
Pull iov iter fix from Al Viro. * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: fix a braino in ITER_PIPE iov_iter_revert()
-
Al Viro authored
Fixes: 27c0e374 Tested-by: Dave Jones <davej@codemonkey.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Linus Torvalds authored
Pull clk fix from Stephen Boyd: "One odd config build fix for a recent Allwinner clock driver change that got merged. The common code called code in another file that wasn't always built. This just forces it on so people don't run into this bad configuration" * tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux: clk: sunxi-ng: always select CCU_GATE
-
- 28 Apr, 2017 3 commits
-
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Linus Torvalds authored
Pull networking fixes from David Miller: "Just a couple more stragglers, I really hope this is it. 1) Don't let frags slip down into the GRO segmentation handlers, from Steffen Klassert. 2) Truesize under-estimation triggers warnings in TCP over loopback with socket filters, 2 part fix from Eric Dumazet. 3) Fix undesirable reset of bonding MTU to ETH_HLEN on slave removal, from Paolo Abeni. 4) If we flush the XFRM policy after garbage collection, it doesn't work because stray entries can be created afterwards. Fix from Xin Long. 5) Hung socket connection fixes in TIPC from Parthasarathy Bhuvaragan. 6) Fix GRO regression with IPSEC when netfilter is disabled, from Sabrina Dubroca. 7) Fix cpsw driver Kconfig dependency regression, from Arnd Bergmann" * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: net: hso: register netdev later to avoid a race condition net: adjust skb->truesize in ___pskb_trim() tcp: do not underestimate skb->truesize in tcp_trim_head() bonding: avoid defaulting hard_header_len to ETH_HLEN on slave removal ipv4: Don't pass IP fragments to upper layer GRO handlers. cpsw/netcp: refine cpts dependency tipc: close the connection if protocol messages contain errors tipc: improve error validations for sockets in CONNECTING state tipc: Fix missing connection request handling xfrm: fix GRO for !CONFIG_NETFILTER xfrm: do the garbage collection after flushing policy
-
Andreas Kemnade authored
If the netdev is accessed before the urbs are initialized, there will be NULL pointer dereferences. That is avoided by registering it when it is fully initialized. This case occurs e.g. if dhcpcd is running in the background and the device is probed, either after insmod hso or when the device appears on the usb bus. A backtrace is the following: [ 1357.356048] usb 1-2: new high-speed USB device number 12 using ehci-omap [ 1357.551177] usb 1-2: New USB device found, idVendor=0af0, idProduct=8800 [ 1357.558654] usb 1-2: New USB device strings: Mfr=3, Product=2, SerialNumber=0 [ 1357.568572] usb 1-2: Product: Globetrotter HSUPA Modem [ 1357.574096] usb 1-2: Manufacturer: Option N.V. [ 1357.685882] hso 1-2:1.5: Not our interface [ 1460.886352] hso: unloaded [ 1460.889984] usbcore: deregistering interface driver hso [ 1513.769134] hso: ../drivers/net/usb/hso.c: Option Wireless [ 1513.846771] Unable to handle kernel NULL pointer dereference at virtual address 00000030 [ 1513.887664] hso 1-2:1.5: Not our interface [ 1513.906890] usbcore: registered new interface driver hso [ 1513.937988] pgd = ecdec000 [ 1513.949890] [00000030] *pgd=acd15831, *pte=00000000, *ppte=00000000 [ 1513.956573] Internal error: Oops: 817 [#1] PREEMPT SMP ARM [ 1513.962371] Modules linked in: hso usb_f_ecm omap2430 bnep bluetooth g_ether usb_f_rndis u_ether libcomposite configfs ipv6 arc4 wl18xx wlcore mac80211 cfg80211 bq27xxx_battery panel_tpo_td028ttec1 omapdrm drm_kms_helper cfbfillrect snd_soc_simple_card syscopyarea cfbimgblt snd_soc_simple_card_utils sysfillrect sysimgblt fb_sys_fops snd_soc_omap_twl4030 cfbcopyarea encoder_opa362 drm twl4030_madc_hwmon wwan_on_off snd_soc_gtm601 pwm_omap_dmtimer generic_adc_battery connector_analog_tv pwm_bl extcon_gpio omap3_isp wlcore_sdio videobuf2_dma_contig videobuf2_memops w1_bq27000 videobuf2_v4l2 videobuf2_core omap_hdq snd_soc_omap_mcbsp ov9650 snd_soc_omap bmp280_i2c bmg160_i2c v4l2_common snd_pcm_dmaengine bmp280 bmg160_core at24 bmc150_magn_i2c nvmem_core videodev phy_twl4030_usb bmc150_accel_i2c tsc2007 [ 1514.037384] bmc150_magn bmc150_accel_core media leds_tca6507 bno055 industrialio_triggered_buffer kfifo_buf gpio_twl4030 musb_hdrc snd_soc_twl4030 twl4030_vibra twl4030_madc twl4030_pwrbutton twl4030_charger industrialio w2sg0004 ehci_omap omapdss [last unloaded: hso] [ 1514.062622] CPU: 0 PID: 3433 Comm: dhcpcd Tainted: G W 4.11.0-rc8-letux+ #1 [ 1514.071136] Hardware name: Generic OMAP36xx (Flattened Device Tree) [ 1514.077758] task: ee748240 task.stack: ecdd6000 [ 1514.082580] PC is at hso_start_net_device+0x50/0xc0 [hso] [ 1514.088287] LR is at hso_net_open+0x68/0x84 [hso] [ 1514.093231] pc : [<bf79c304>] lr : [<bf79ced8>] psr: a00f0013 sp : ecdd7e20 ip : 00000000 fp : ffffffff [ 1514.105316] r10: 00000000 r9 : ed0e080c r8 : ecd8fe2c [ 1514.110839] r7 : bf79cef4 r6 : ecd8fe00 r5 : 00000000 r4 : ed0dbd80 [ 1514.117706] r3 : 00000000 r2 : c0020c80 r1 : 00000000 r0 : ecdb7800 [ 1514.124572] Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none [ 1514.132110] Control: 10c5387d Table: acdec019 DAC: 00000051 [ 1514.138153] Process dhcpcd (pid: 3433, stack limit = 0xecdd6218) [ 1514.144470] Stack: (0xecdd7e20 to 0xecdd8000) [ 1514.149078] 7e20: ed0dbd80 ecd8fe98 00000001 00000000 ecd8f800 ecd8fe00 ecd8fe60 00000000 [ 1514.157714] 7e40: ed0e080c bf79ced8 bf79ce70 ecd8f800 00000001 bf7a0258 ecd8f830 c068d958 [ 1514.166320] 7e60: c068d8b8 ecd8f800 00000001 00001091 00001090 c068dba4 ecd8f800 00001090 [ 1514.174926] 7e80: ecd8f940 ecd8f800 00000000 c068dc60 00000000 
00000001 ed0e0800 ecd8f800 [ 1514.183563] 7ea0: 00000000 c06feaa8 c0ca39c2 beea57dc 00000020 00000000 306f7368 00000000 [ 1514.192169] 7ec0: 00000000 00000000 00001091 00000000 00000000 00000000 00000000 00008914 [ 1514.200805] 7ee0: eaa9ab60 beea57dc c0c9bfc0 eaa9ab40 00000006 00000000 00046858 c066a948 [ 1514.209411] 7f00: beea57dc eaa9ab60 ecc6b0c0 c02837b0 00000006 c0282c90 0000c000 c0283654 [ 1514.218017] 7f20: c09b0c00 c098bc31 00000001 c0c5e513 c0c5e513 00000000 c0151354 c01a20c0 [ 1514.226654] 7f40: c0c5e513 c01a3134 ecdd6000 c01a3160 ee7487f0 600f0013 00000000 ee748240 [ 1514.235260] 7f60: ee748734 00000000 ecc6b0c0 ecc6b0c0 beea57dc 00008914 00000006 00000000 [ 1514.243896] 7f80: 00046858 c02837b0 00001091 0003a1f0 00046608 0003a248 00000036 c01071e4 [ 1514.252502] 7fa0: ecdd6000 c0107040 0003a1f0 00046608 00000006 00008914 beea57dc 00001091 [ 1514.261108] 7fc0: 0003a1f0 00046608 0003a248 00000036 0003ac0c 00046608 00046610 00046858 [ 1514.269744] 7fe0: 0003a0ac beea57d4 000167eb b6f23106 400f0030 00000006 00000000 00000000 [ 1514.278411] [<bf79c304>] (hso_start_net_device [hso]) from [<bf79ced8>] (hso_net_open+0x68/0x84 [hso]) [ 1514.288238] [<bf79ced8>] (hso_net_open [hso]) from [<c068d958>] (__dev_open+0xa0/0xf4) [ 1514.296600] [<c068d958>] (__dev_open) from [<c068dba4>] (__dev_change_flags+0x8c/0x130) [ 1514.305023] [<c068dba4>] (__dev_change_flags) from [<c068dc60>] (dev_change_flags+0x18/0x48) [ 1514.313934] [<c068dc60>] (dev_change_flags) from [<c06feaa8>] (devinet_ioctl+0x348/0x714) [ 1514.322540] [<c06feaa8>] (devinet_ioctl) from [<c066a948>] (sock_ioctl+0x2b0/0x308) [ 1514.330627] [<c066a948>] (sock_ioctl) from [<c0282c90>] (vfs_ioctl+0x20/0x34) [ 1514.338165] [<c0282c90>] (vfs_ioctl) from [<c0283654>] (do_vfs_ioctl+0x82c/0x93c) [ 1514.346038] [<c0283654>] (do_vfs_ioctl) from [<c02837b0>] (SyS_ioctl+0x4c/0x74) [ 1514.353759] [<c02837b0>] (SyS_ioctl) from [<c0107040>] (ret_fast_syscall+0x0/0x1c) [ 1514.361755] Code: e3822103 e3822080 e1822781 e5981014 (e5832030) [ 1514.510833] ---[ end trace dfb3e53c657f34a0 ]--- Reported-by: H. Nikolaus Schaller <hns@goldelico.com> Signed-off-by: Andreas Kemnade <andreas@kemnade.info> Reviewed-by: Johan Hovold <johan@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
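A hedged, generic sketch of the probe ordering described at the top of this commit message (not the actual hso code): register_netdev() publishes the interface to userspace, so it must come only after the URBs and private state are fully set up; init_urbs_sketch()/free_urbs_sketch() are hypothetical helpers.

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    int init_urbs_sketch(struct net_device *net);     /* hypothetical */
    void free_urbs_sketch(struct net_device *net);    /* hypothetical */

    static int probe_sketch(void)
    {
            struct net_device *net = alloc_etherdev(0);   /* no private data in this sketch */
            int ret;

            if (!net)
                    return -ENOMEM;

            ret = init_urbs_sketch(net);      /* hypothetical: set up all URBs first */
            if (ret)
                    goto err_free;

            ret = register_netdev(net);       /* only now may ndo_open (e.g. via dhcpcd) run */
            if (ret)
                    goto err_urbs;
            return 0;

    err_urbs:
            free_urbs_sketch(net);            /* hypothetical cleanup */
    err_free:
            free_netdev(net);
            return ret;
    }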
-
Eric Dumazet authored
Andrey found a way to trigger the WARN_ON_ONCE(delta < len) in skb_try_coalesce() using syzkaller and a filter attached to a TCP socket. As we did recently in commit 158f323b ("net: adjust skb->truesize in pskb_expand_head()") we can adjust skb->truesize from ___pskb_trim(), via a call to skb_condense(). If all frags were freed, then skb->truesize can be recomputed. This call can be done if skb is not yet owned, or destructor is sock_edemux(). Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Andrey Konovalov <andreyknvl@google.com> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
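A hedged sketch of the condition described above (the real change lives inside ___pskb_trim() in net/core/skbuff.c; this is a simplified stand-alone helper): once the frags are gone, truesize may be recomputed only if the skb is not yet charged to a socket, or uses the early-demux destructor.

    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* Hedged sketch: shrink truesize via skb_condense() only when it is safe. */
    static void trim_adjust_truesize(struct sk_buff *skb)
    {
            if (!skb->sk || skb->destructor == sock_edemux)
                    skb_condense(skb);   /* recomputes truesize now that frags were freed */
    }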
-