Commit 2938651d authored by Aurelien Aptel, committed by Greg Kroah-Hartman

CIFS: fix POSIX lock leak and invalid ptr deref

[ Upstream commit bc31d0cd ]

We have a customer reporting crashes in lock_get_status() with many
"Leaked POSIX lock" messages preceeding the crash.

 Leaked POSIX lock on dev=0x0:0x56 ...
 Leaked POSIX lock on dev=0x0:0x56 ...
 Leaked POSIX lock on dev=0x0:0x56 ...
 Leaked POSIX lock on dev=0x0:0x53 ...
 Leaked POSIX lock on dev=0x0:0x53 ...
 Leaked POSIX lock on dev=0x0:0x53 ...
 Leaked POSIX lock on dev=0x0:0x53 ...
 POSIX: fl_owner=ffff8900e7b79380 fl_flags=0x1 fl_type=0x1 fl_pid=20709
 Leaked POSIX lock on dev=0x0:0x4b ino...
 Leaked locks on dev=0x0:0x4b ino=0xf911400000029:
 POSIX: fl_owner=ffff89f41c870e00 fl_flags=0x1 fl_type=0x1 fl_pid=19592
 stack segment: 0000 [#1] SMP
 Modules linked in: binfmt_misc msr tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag rpcsec_gss_krb5 arc4 ecb auth_rpcgss nfsv4 md4 nfs nls_utf8 lockd grace cifs sunrpc ccm dns_resolver fscache af_packet iscsi_ibft iscsi_boot_sysfs vmw_vsock_vmci_transport vsock xfs libcrc32c sb_edac edac_core crct10dif_pclmul crc32_pclmul ghash_clmulni_intel drbg ansi_cprng vmw_balloon aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd joydev pcspkr vmxnet3 i2c_piix4 vmw_vmci shpchp fjes processor button ac btrfs xor raid6_pq sr_mod cdrom ata_generic sd_mod ata_piix vmwgfx crc32c_intel drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm serio_raw ahci libahci drm libata vmw_pvscsi sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4

 Supported: Yes
 CPU: 6 PID: 28250 Comm: lsof Not tainted 4.4.156-94.64-default #1
 Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/05/2016
 task: ffff88a345f28740 ti: ffff88c74005c000 task.ti: ffff88c74005c000
 RIP: 0010:[<ffffffff8125dcab>]  [<ffffffff8125dcab>] lock_get_status+0x9b/0x3b0
 RSP: 0018:ffff88c74005fd90  EFLAGS: 00010202
 RAX: ffff89bde83e20ae RBX: ffff89e870003d18 RCX: 0000000049534f50
 RDX: ffffffff81a3541f RSI: ffffffff81a3544e RDI: ffff89bde83e20ae
 RBP: 0026252423222120 R08: 0000000020584953 R09: 000000000000ffff
 R10: 0000000000000000 R11: ffff88c74005fc70 R12: ffff89e5ca7b1340
 R13: 00000000000050e5 R14: ffff89e870003d30 R15: ffff89e5ca7b1340
 FS:  00007fafd64be800(0000) GS:ffff89f41fd00000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000000001c80018 CR3: 000000a522048000 CR4: 0000000000360670
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Stack:
  0000000000000208 ffffffff81a3d6b6 ffff89e870003d30 ffff89e870003d18
  ffff89e5ca7b1340 ffff89f41738d7c0 ffff89e870003d30 ffff89e5ca7b1340
  ffffffff8125e08f 0000000000000000 ffff89bc22b67d00 ffff88c74005ff28
 Call Trace:
  [<ffffffff8125e08f>] locks_show+0x2f/0x70
  [<ffffffff81230ad1>] seq_read+0x251/0x3a0
  [<ffffffff81275bbc>] proc_reg_read+0x3c/0x70
  [<ffffffff8120e456>] __vfs_read+0x26/0x140
  [<ffffffff8120e9da>] vfs_read+0x7a/0x120
  [<ffffffff8120faf2>] SyS_read+0x42/0xa0
  [<ffffffff8161cbc3>] entry_SYSCALL_64_fastpath+0x1e/0xb7

When Linux closes a FD (close(), close-on-exec, dup2(), ...) it calls
filp_close(), which also removes all POSIX locks held on the file.

The lock struct is initialized like so in locks_remove_posix(), called
from filp_close(), and passed down to cifs:

	...
        lock.fl_type = F_UNLCK;
        lock.fl_flags = FL_POSIX | FL_CLOSE;
        lock.fl_start = 0;
        lock.fl_end = OFFSET_MAX;
	...

Note the FL_CLOSE flag, which the VFS sets to hint that this unlock is
being done as part of closing the fd.
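
A minimal sketch, using nothing beyond the flags shown above, of how a
filesystem's ->lock() handler could recognize such a close-time unlock
(the helper name is made up for illustration and is not part of the patch):

        #include <linux/fs.h>   /* struct file_lock, FL_POSIX, FL_CLOSE, F_UNLCK */

        /*
         * Illustrative helper only: a close-time unlock is a POSIX F_UNLCK
         * request carrying the FL_CLOSE hint set by the VFS.
         */
        static bool is_close_time_unlock(const struct file_lock *fl)
        {
                return (fl->fl_flags & (FL_POSIX | FL_CLOSE)) == (FL_POSIX | FL_CLOSE) &&
                       fl->fl_type == F_UNLCK;
        }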

filp_close()
  locks_remove_posix(filp, id);
    vfs_lock_file(filp, F_SETLK, &lock, NULL);
      return filp->f_op->lock(filp, cmd, fl) => cifs_lock()
        rc = cifs_setlk(file, flock, type, wait_flag, posix_lck, lock, unlock, xid);
          rc = server->ops->mand_unlock_range(cfile, flock, xid);
          if (flock->fl_flags & FL_POSIX && !rc)
                  rc = locks_lock_file_wait(file, flock)

Notice that locks_lock_file_wait(), which does the generic VFS
lock/unlock/wait work on the inode, is not called when rc != 0.
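
For reference, the pre-patch tail of cifs_setlk() looked like this
(abridged from the diff at the end of this message); any non-zero rc from
the server-side unlock skipped the VFS bookkeeping and left a stale lock
record on the inode:

        out:
                /* pre-patch: the VFS lock record is only released when the
                 * server-side unlock succeeded */
                if (flock->fl_flags & FL_POSIX && !rc)
                        rc = locks_lock_file_wait(file, flock);
                return rc;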

If we are closing the handle, the SMB server is supposed to remove any
locks associated with it. Similarly, cifs.ko frees its lock records and
wakes up any lock waiters when closing the file:

cifs_close()
  cifsFileInfo_put(file->private_data)
	/*
	 * Delete any outstanding lock records. We'll lose them when the file
	 * is closed anyway.
	 */
	down_write(&cifsi->lock_sem);
	list_for_each_entry_safe(li, tmp, &cifs_file->llist->locks, llist) {
		list_del(&li->llist);
		cifs_del_lock_waiters(li);
		kfree(li);
	}
	list_del(&cifs_file->llist->llist);
	kfree(cifs_file->llist);
	up_write(&cifsi->lock_sem);

So we can safely ignore unlocking failures in cifs_lock() when they
happen with the FL_CLOSE hint set, as both the server and the client
clean up the locks during the actual close.
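
Concretely, with the patch cifs_setlk() logs a failed unlock and only
returns the error when the request is not part of a close; otherwise it
falls through to locks_lock_file_wait() so the VFS lock record is always
released (condensed from the diff below):

        if (flock->fl_flags & FL_POSIX) {
                if (rc) {
                        cifs_dbg(VFS, "%s failed rc=%d\n", __func__, rc);
                        if (!(flock->fl_flags & FL_CLOSE))
                                return rc;
                }
                rc = locks_lock_file_wait(file, flock);
        }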

This is not a proper fix for the unlocking failure, but it is safe, and
it seems to prevent the lock leaks and crashes the customer is
experiencing.

Signed-off-by: Aurelien Aptel <aaptel@suse.com>
Signed-off-by: NeilBrown <neil@brown.name>
Signed-off-by: Steve French <stfrench@microsoft.com>
Acked-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 442d5d17
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1631,8 +1631,20 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
                 rc = server->ops->mand_unlock_range(cfile, flock, xid);
 
 out:
-        if (flock->fl_flags & FL_POSIX && !rc)
+        if (flock->fl_flags & FL_POSIX) {
+                /*
+                 * If this is a request to remove all locks because we
+                 * are closing the file, it doesn't matter if the
+                 * unlocking failed as both cifs.ko and the SMB server
+                 * remove the lock on file close
+                 */
+                if (rc) {
+                        cifs_dbg(VFS, "%s failed rc=%d\n", __func__, rc);
+                        if (!(flock->fl_flags & FL_CLOSE))
+                                return rc;
+                }
                 rc = locks_lock_file_wait(file, flock);
+        }
         return rc;
 }