03 Feb, 2018 (21 commits)
  31 Jan, 2018 (19 commits)
    • Linux 4.14.16 · 6c700766
      Greg Kroah-Hartman authored
    • nfsd: auth: Fix gid sorting when rootsquash enabled · 54e67ba7
      Ben Hutchings authored
      commit 19952667 upstream.
      
      Commit bdcf0a42 ("kernel: make groups_sort calling a responsibility
      group_info allocators") appears to break nfsd rootsquash in a pretty
      major way.
      
      It adds a call to groups_sort() inside the loop that copies/squashes
      gids, which means the valid gids are sorted along with the following
      garbage.  The net result is that the highest numbered valid gids are
      replaced with any lower-valued garbage gids, possibly including 0.
      
      We should sort only once, after filling in all the gids.
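
      The corrected shape, roughly (a sketch based on the description
      above, not necessarily the exact fs/nfsd/auth.c hunk):

        for (i = 0; i < rqgi->ngroups; i++) {
                if (gid_eq(GLOBAL_ROOT_GID, rqgi->gid[i]))
                        gi->gid[i] = exp->ex_anon_gid;
                else
                        gi->gid[i] = rqgi->gid[i];
                /* no groups_sort() here: it would sort the valid
                 * gids together with the garbage in the tail */
        }
        groups_sort(gi);        /* sort once, after all gids are set */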
      
      Fixes: bdcf0a42 ("kernel: make groups_sort calling a responsibility ...")
      Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Acked-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Wolfgang Walter <linux@stwm.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • cpufreq: governor: Ensure sufficiently large sampling intervals · c83189ed
      Rafael J. Wysocki authored
      commit 56026645 upstream.
      
      After commit aa7519af (cpufreq: Use transition_delay_us for legacy
      governors as well) the sampling_rate field of struct dbs_data may
      be less than the tick period, which causes dbs_update() to produce
      incorrect results, so make the code ensure that the value of that
      field will always be sufficiently large.
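
      The idea, sketched (dbs_data->sampling_rate, max_t() and
      jiffies_to_usecs() are real; requested_rate and the exact clamp
      expression and placement are illustrative, not the upstream hunk):

        /* never let the sampling interval drop below the tick period */
        dbs_data->sampling_rate = max_t(unsigned int, requested_rate,
                                        jiffies_to_usecs(1));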
      
      Fixes: aa7519af (cpufreq: Use transition_delay_us for legacy governors as well)
      Reported-by: Andy Tang <andy.tang@nxp.com>
      Reported-by: Doug Smythies <dsmythies@telus.net>
      Tested-by: Andy Tang <andy.tang@nxp.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf, arm64: fix stack_depth tracking in combination with tail calls · c43db1a3
      Daniel Borkmann authored
      [ upstream commit a2284d91 ]
      
      Using dynamic stack_depth tracking in the arm64 JIT is currently
      broken in combination with tail calls. In the prologue, we cache
      ctx->stack_size and adjust the SP register to set up the function
      call stack, tearing it down again in the epilogue. The problem is
      that when doing a tail call, the cached ctx->stack_size might not
      be the same as that of the program we jump into.
      
      One way to fix the problem with minimal overhead is to re-adjust SP in
      emit_bpf_tail_call() and properly adjust it to the current program's
      ctx->stack_size. Tested on Cavium ThunderX ARMv8.
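
      Conceptually, the tail-call path gets an SP fixup before the
      branch (a sketch; emit(), A64_ADD_I and A64_BR follow the arm64
      JIT's style, but the exact emitted sequence is illustrative):

        /* in emit_bpf_tail_call(), before branching into the target:
         * undo the current program's stack frame; the target re-does
         * the SUB with its own ctx->stack_size past the prologue */
        emit(A64_ADD_I(1, A64_SP, A64_SP, ctx->stack_size), ctx);
        emit(A64_BR(tmp), ctx);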
      
      Fixes: f1c9eed7 ("bpf, arm64: take advantage of stack_depth tracking")
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: reject stores into ctx via st and xadd · a1753674
      Daniel Borkmann authored
      [ upstream commit f37a8cb8 ]
      
      Alexei found that the verifier does not reject stores into the
      context via BPF_ST instead of BPF_STX. While looking at it, we
      should also not allow the XADD variant of BPF_STX.
      
      The context rewriter assumes only BPF_LDX_MEM- or BPF_STX_MEM-type
      operations, so reject anything other than that to ensure the
      rewriter's assumptions properly hold. Add test cases to the BPF
      selftests as well.
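
      The verifier-side check, sketched (is_ctx_reg is the helper name
      from the upstream patch; the surrounding code is condensed):

        /* for BPF_ST, and likewise for BPF_STX | BPF_XADD */
        if (is_ctx_reg(env, insn->dst_reg))
                return -EACCES; /* stores into ctx are not allowed */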
      
      Fixes: d691f9e8 ("bpf: allow programs to write to certain skb fields")
      Reported-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: fix 32-bit divide by zero · ca0a0967
      Alexei Starovoitov authored
      [ upstream commit 68fda450 ]
      
      Some JITs perform the if (src_reg == 0) check in 64-bit mode, so
      for 32-bit div/mod operations, mask the upper 32 bits of the src
      register before doing the check.
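
      The fixup boils down to patching in a seemingly redundant 32-bit
      move, which zero-extends on eBPF and thus clears the upper half
      before the JIT's 64-bit test (a sketch; BPF_MOV32_REG is the real
      macro, the surrounding patching loop is omitted):

        BPF_MOV32_REG(insn->src_reg, insn->src_reg),
        /* ...followed by the original BPF_ALU | BPF_DIV/BPF_MOD insn */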
      
      Fixes: 62258278 ("net: filter: x86: internal BPF JIT")
      Fixes: 7a12b503 ("sparc64: Add eBPF JIT.")
      Reported-by: syzbot+48340bb518e88849e2e3@syzkaller.appspotmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: fix divides by zero · 6eca013b
      Eric Dumazet authored
      [ upstream commit c366287e ]
      
      Divides by zero are not nice, let's avoid them if possible.
      
      Also, do_div() does not seem to be needed when dealing with 32-bit
      operands, but this is a minor detail.
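
      The interpreter-side pattern, sketched (illustrative, not the
      verbatim kernel/bpf/core.c hunk):

        ALU_MOD_X:
                if (unlikely((u32)SRC == 0))    /* test the 32-bit view */
                        return 0;
                DST = (u32) DST % (u32) SRC;    /* no do_div() needed */
                CONT;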
      
      Fixes: bd4cf0ed ("net: filter: rework/optimize internal BPF interpreter's instruction set")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: avoid false sharing of map refcount with max_entries · 3ea4247e
      Daniel Borkmann authored
      [ upstream commit be95a845 ]
      
      In addition to commit b2157399 ("bpf: prevent out-of-bounds
      speculation"), also change the layout of struct bpf_map such that
      false sharing of fast-path members like max_entries is avoided
      when the map's reference counter is altered. Therefore enforce
      that they are placed into separate cachelines.
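
      The pattern boils down to cacheline-aligning the two groups
      (illustrative, not the full struct definition):

        struct bpf_map {
                /* 1st cacheline: read-mostly, set once at creation */
                const struct bpf_map_ops *ops ____cacheline_aligned;
                u32 max_entries;
                /* 2nd cacheline: control-path members that get dirtied */
                struct user_struct *user ____cacheline_aligned;
                atomic_t refcnt;
                atomic_t usercnt;
        };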
      
      pahole dump after change:
      
        struct bpf_map {
              const struct bpf_map_ops  * ops;                 /*     0     8 */
              struct bpf_map *           inner_map_meta;       /*     8     8 */
              void *                     security;             /*    16     8 */
              enum bpf_map_type          map_type;             /*    24     4 */
              u32                        key_size;             /*    28     4 */
              u32                        value_size;           /*    32     4 */
              u32                        max_entries;          /*    36     4 */
              u32                        map_flags;            /*    40     4 */
              u32                        pages;                /*    44     4 */
              u32                        id;                   /*    48     4 */
              int                        numa_node;            /*    52     4 */
              bool                       unpriv_array;         /*    56     1 */
      
              /* XXX 7 bytes hole, try to pack */
      
              /* --- cacheline 1 boundary (64 bytes) --- */
              struct user_struct *       user;                 /*    64     8 */
              atomic_t                   refcnt;               /*    72     4 */
              atomic_t                   usercnt;              /*    76     4 */
              struct work_struct         work;                 /*    80    32 */
              char                       name[16];             /*   112    16 */
              /* --- cacheline 2 boundary (128 bytes) --- */
      
              /* size: 128, cachelines: 2, members: 17 */
              /* sum members: 121, holes: 1, sum holes: 7 */
        };
      
      Now all entries in the first cacheline are read-only throughout
      the lifetime of the map and are set up once during map creation.
      Overall struct size and number of cachelines don't change from
      the reordering. struct bpf_map is usually the first member,
      embedded in the map structs of specific map implementations, so
      also avoid letting those members sit at the end, where they could
      share a cacheline with the first map values, e.g. in the array:
      remote CPUs could trigger map updates just as well for those
      (easily dirtying members like max_entries intentionally as well)
      while having subsequent values in cache.
      
      Quoting from Google's Project Zero blog [1]:
      
        Additionally, at least on the Intel machine on which this was
        tested, bouncing modified cache lines between cores is slow,
        apparently because the MESI protocol is used for cache coherence
        [8]. Changing the reference counter of an eBPF array on one
        physical CPU core causes the cache line containing the reference
        counter to be bounced over to that CPU core, making reads of the
        reference counter on all other CPU cores slow until the changed
        reference counter has been written back to memory. Because the
        length and the reference counter of an eBPF array are stored in
        the same cache line, this also means that changing the reference
        counter on one physical CPU core causes reads of the eBPF array's
        length to be slow on other physical CPU cores (intentional false
        sharing).
      
      While this doesn't 'control' the out-of-bounds speculation through
      masking the index as in commit b2157399, triggering a manipulation
      of the map's reference counter is really trivial, so let's not
      allow max_entries to be affected so easily through it.
      
      Splitting into separate cachelines also generally makes sense from
      a performance perspective anyway, in that the fast path won't take
      a cache miss if the map gets pinned, reused in other progs, etc.
      out of the control path, which also avoids unintentional false
      sharing.
      
        [1] https://googleprojectzero.blogspot.ch/2018/01/reading-privileged-memory-with-side.html

      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: introduce BPF_JIT_ALWAYS_ON config · 6fde36d5
      Alexei Starovoitov authored
      [ upstream commit 290af866 ]
      
      The BPF interpreter has been used as part of the Spectre v2 attack
      (CVE-2017-5715).
      
      A quote from the Google Project Zero blog:
      "At this point, it would normally be necessary to locate gadgets in
      the host kernel code that can be used to actually leak data by reading
      from an attacker-controlled location, shifting and masking the result
      appropriately and then using the result of that as offset to an
      attacker-controlled address for a load. But piecing gadgets together
      and figuring out which ones work in a speculation context seems annoying.
      So instead, we decided to use the eBPF interpreter, which is built into
      the host kernel - while there is no legitimate way to invoke it from inside
      a VM, the presence of the code in the host kernel's text section is sufficient
      to make it usable for the attack, just like with ordinary ROP gadgets."
      
      To make the attacker's job harder, introduce the BPF_JIT_ALWAYS_ON
      config option, which removes the interpreter from the kernel in
      favor of JIT-only mode. So far the eBPF JIT is supported by:
      x64, arm64, arm32, sparc64, s390, powerpc64, mips64
      
      The start of a JITed program is randomized and the code page is
      marked as read-only. In addition, "constant blinding" can be
      turned on with net.core.bpf_jit_harden.
      
      v2->v3:
      - move __bpf_prog_ret0 under ifdef (Daniel)
      
      v1->v2:
      - fix init order, test_bpf and cBPF (Daniel's feedback)
      - fix offloaded bpf (Jakub's feedback)
      - add 'return 0' dummy in case something can invoke prog->bpf_func
      - retarget bpf tree. For bpf-next the patch would need one extra hunk.
        It will be sent when the trees are merged back to net-next
      
      Considered doing:
        int bpf_jit_enable __read_mostly = BPF_EBPF_JIT_DEFAULT;
      but it seems better to land the patch as-is and in bpf-next remove
      bpf_jit_enable global variable from all JITs, consolidate in one place
      and remove this jit_init() function.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • hrtimer: Reset hrtimer cpu base proper on CPU hotplug · fdd88d75
      Thomas Gleixner authored
      commit d5421ea4 upstream.
      
      The hrtimer interrupt code contains a hang detection and
      mitigation mechanism, which prevents a long-delayed hrtimer
      interrupt from causing continuous retriggering of interrupts that
      keep the system from making progress. If a hang is detected then
      the timer hardware is programmed with
      a certain delay into the future and a flag is set in the hrtimer cpu base
      which prevents newly enqueued timers from reprogramming the timer hardware
      prior to the chosen delay. The subsequent hrtimer interrupt after the delay
      clears the flag and resumes normal operation.
      
      If such a hang happens in the last hrtimer interrupt before a CPU is
      unplugged then the hang_detected flag is set and stays that way when the
      CPU is plugged in again. At that point the timer hardware is not armed and
      it cannot be armed because the hang_detected flag is still active, so
      nothing clears that flag. As a consequence the CPU does not receive hrtimer
      interrupts and no timers expire on that CPU which results in RCU stalls and
      other malfunctions.
      
      Clear the flag along with some other less critical members of the hrtimer
      cpu base to ensure starting from a clean state when a CPU is plugged in.
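
      Sketch of the reset when the CPU comes up (the member names exist
      in struct hrtimer_cpu_base; the exact set cleared here follows
      the description above, not necessarily the patch verbatim):

        int hrtimers_prepare_cpu(unsigned int cpu)
        {
                struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);

                /* ... per-clock-base init as before ... */
                cpu_base->active_bases = 0;
                cpu_base->hres_active = 0;
                cpu_base->hang_detected = 0;    /* the critical one */
                cpu_base->expires_next = KTIME_MAX;
                return 0;
        }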
      
      Thanks to Paul, Sebastian and Anna-Maria for their help to get down to the
      root cause of that hard to reproduce heisenbug. Once understood it's
      trivial and certainly justifies a brown paperbag.
      
      Fixes: 41d2e494 ("hrtimer: Tune hrtimer_interrupt hang logic")
      Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Sewior <bigeasy@linutronix.de>
      Cc: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801261447590.2067@nanos
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/mm/64: Fix vmapped stack syncing on very-large-memory 4-level systems · ba07aba7
      Andy Lutomirski authored
      commit 5beda7d5 upstream.
      
      Neil Berrington reported a double-fault on a VM with 768GB of RAM that uses
      large amounts of vmalloc space with PTI enabled.
      
      The cause is that load_new_mm_cr3() was never fixed to take the 5-level pgd
      folding code into account, so, on a 4-level kernel, the pgd synchronization
      logic compiles away to exactly nothing.
      
      Interestingly, the problem doesn't trigger with nopti.  I assume this is
      because the kernel is mapped with global pages if we boot with nopti.  The
      sequence of operations when we create a new task is that we first load its
      mm while still running on the old stack (which crashes if the old stack is
      unmapped in the new mm unless the TLB saves us), then we call
      prepare_switch_to(), and then we switch to the new stack.
      prepare_switch_to() pokes the new stack directly, which will populate the
      mapping through vmalloc_fault().  I assume that we're getting lucky on
      non-PTI systems -- the old stack's TLB entry stays alive long enough to
      make it all the way through prepare_switch_to() and switch_to() so that we
      make it to a valid stack.
      
      Fixes: b50858ce ("x86/mm/vmalloc: Add 5-level paging support")
      Reported-and-tested-by: Neil Berrington <neil.berrington@datacore.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/346541c56caed61abbe693d7d2742b4a380c5001.1516914529.git.luto@kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/microcode: Fix again accessing initrd after having been freed · cbfb351b
      Borislav Petkov authored
      commit 1d080f09 upstream.
      
      Commit 24c25032 ("x86/microcode: Do not access the initrd after it has
      been freed") fixed attempts to access initrd from the microcode loader
      after it has been freed. However, a similar KASAN warning was reported
      (stack trace edited):
      
        smpboot: Booting Node 0 Processor 1 APIC 0x11
        ==================================================================
        BUG: KASAN: use-after-free in find_cpio_data+0x9b5/0xa50
        Read of size 1 at addr ffff880035ffd000 by task swapper/1/0
      
        CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.14.8-slack #7
        Hardware name: System manufacturer System Product Name/A88X-PLUS, BIOS 3003 03/10/2016
        Call Trace:
         dump_stack
         print_address_description
         kasan_report
         ? find_cpio_data
         __asan_report_load1_noabort
         find_cpio_data
         find_microcode_in_initrd
         __load_ucode_amd
         load_ucode_amd_ap
            load_ucode_ap
      
      After some investigation, it turned out that a merge was done using the
      wrong side to resolve, leading to picking up the previous state, before
      the 24c25032 fix. Therefore the Fixes tag below contains a merge
      commit.
      
      Revert the mismerge by catching the save_microcode_in_initrd_amd()
      retval and thus letting the function exit with the last return statement
      so that initrd_gone can be set to true.
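
      Sketch of the reinstated flow in save_microcode_in_initrd()
      (condensed from the description above):

        case X86_VENDOR_AMD:
                if (c->x86 >= 0x10)
                        ret = save_microcode_in_initrd_amd(cpuid_eax(1));
                break;  /* was a direct return, which skipped the
                         * function's final initrd_gone = true */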
      
      Fixes: f26483ea ("Merge branch 'x86/urgent' into x86/microcode, to resolve conflicts")
      Reported-by: <higuita@gmx.net>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=198295
      Link: https://lkml.kernel.org/r/20180123104133.918-2-bp@alien8.de
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/microcode/intel: Extend BDW late-loading further with LLC size check · ac2cc887
      Jia Zhang authored
      commit 7e702d17 upstream.
      
      Commit b94b7373 ("x86/microcode/intel: Extend BDW late-loading with a
      revision check") reduced the impact of erratum BDF90 for Broadwell model
      79.
      
      The impact can be reduced further by checking the size of the last level
      cache portion per core.
      
      Tony: "The erratum says the problem only occurs on the large-cache SKUs.
      So we only need to avoid the update if we are on a big cache SKU that is
      also running old microcode."
      
      For more details, see erratum BDF90 in document #334165 (Intel Xeon
      Processor E7-8800/4800 v4 Product Family Specification Update) from
      September 2017.
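
      Sketch of the extended blacklist condition (the threshold and
      revision values are as I understand erratum BDF90; treat the
      exact numbers as illustrative):

        if (c->x86 == 6 &&
            c->x86_model == INTEL_FAM6_BROADWELL_X &&
            c->x86_mask == 0x01 &&
            llc_size_per_core > 2621440 &&      /* > 2.5 MiB LLC/core */
            c->microcode < 0x0b000021)
                return true;    /* skip late loading on this config */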
      
      Fixes: b94b7373 ("x86/microcode/intel: Extend BDW late-loading with a revision check")
      Signed-off-by: Jia Zhang <zhang.jia@linux.alibaba.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Link: https://lkml.kernel.org/r/1516321542-31161-1-git-send-email-zhang.jia@linux.alibaba.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • perf/x86/amd/power: Do not load AMD power module on !AMD platforms · 34c1acc2
      Xiao Liang authored
      commit 40d4071c upstream.
      
      The AMD power module can be loaded on non-AMD platforms, but unload
      fails with the following Oops:
      
       BUG: unable to handle kernel NULL pointer dereference at           (null)
       IP: __list_del_entry_valid+0x29/0x90
       Call Trace:
        perf_pmu_unregister+0x25/0xf0
        amd_power_pmu_exit+0x1c/0xd23 [power]
        SyS_delete_module+0x1a8/0x2b0
        ? exit_to_usermode_loop+0x8f/0xb0
        entry_SYSCALL_64_fastpath+0x20/0x83
      
      Return -ENODEV instead of 0 from the module init function if the CPU does
      not match.
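
      Sketch of the module-init check (the name of the x86_cpu_id match
      table is an assumption here):

        static int __init amd_power_pmu_init(void)
        {
                if (!x86_match_cpu(cpu_match))
                        return -ENODEV; /* was: return 0, letting the
                                         * module load without ever
                                         * registering the PMU */
                /* ... probe and perf_pmu_register() as before ... */
        }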
      
      Fixes: c7ab62bf ("perf/x86/amd/power: Add AMD accumulated power reporting mechanism")
      Signed-off-by: Xiao Liang <xiliang@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20180122061252.6394-1-xiliang@redhat.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • vmxnet3: repair memory leak · 74026a18
      Neil Horman authored
      
      [ Upstream commit 848b1598 ]
      
      With the introduction of commit
      b0eb57cb, it appears that rq->buf_info
      is improperly handled.  While it is heap allocated when an rx queue is
      set up, and freed when torn down, an old line of code in
      vmxnet3_rq_destroy was not properly removed, leading to rq->buf_info[0]
      being set to NULL prior to its being freed, causing a memory leak which
      eventually exhausts the system on repeated create/destroy operations
      (for example, when the mtu of a vmxnet3 interface is changed
      frequently).
      
      The fix is pretty straightforward: just move the NULL assignment
      to after the free.
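
      Sketch of the corrected order in vmxnet3_rq_destroy()
      (illustrative; the buffer is dma-coherent in this driver):

        dma_free_coherent(&adapter->pdev->dev, sz, rq->buf_info[0],
                          rq->buf_info_pa);
        rq->buf_info[0] = rq->buf_info[1] = NULL;   /* clear after free */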
      
      Tested by myself with successful results
      
      Applies to net, and should likely be queued for stable, please
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Reported-by: boyang@redhat.com
      CC: boyang@redhat.com
      CC: Shrikrishna Khare <skhare@vmware.com>
      CC: "VMware, Inc." <pv-drivers@vmware.com>
      CC: David S. Miller <davem@davemloft.net>
      Acked-by: Shrikrishna Khare <skhare@vmware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • net: ipv4: Make "ip route get" match iif lo rules again. · c2fd0b21
      Lorenzo Colitti authored
      
      [ Upstream commit 6503a304 ]
      
      Commit 3765d35e ("net: ipv4: Convert inet_rtm_getroute to rcu
      versions of route lookup") broke "ip route get" in the presence
      of rules that specify iif lo.
      
      Host-originated traffic always has iif lo, because
      ip_route_output_key_hash and ip6_route_output_flags set the flow
      iif to LOOPBACK_IFINDEX. Thus, putting "iif lo" in an ip rule is a
      convenient way to select only originated traffic and not forwarded
      traffic.
      
      inet_rtm_getroute used to match these rules correctly because
      even though it sets the flow iif to 0, it called
      ip_route_output_key which overwrites iif with LOOPBACK_IFINDEX.
      But now that it calls ip_route_output_key_hash_rcu, the ifindex
      will remain 0 and not match the iif lo in the rule. As a result,
      "ip route get" will return ENETUNREACH.
      
      Fixes: 3765d35e ("net: ipv4: Convert inet_rtm_getroute to rcu versions of route lookup")
      Tested: https://android.googlesource.com/kernel/tests/+/master/net/test/multinetwork_test.py passes again
      Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
      Acked-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • tls: reset crypto_info when do_tls_setsockopt_tx fails · ed10b9af
      Sabrina Dubroca authored
      
      [ Upstream commit 6db959c8 ]
      
      The current code copies directly from userspace to ctx->crypto_send,
      but doesn't always reinitialize it to 0 on failure. This causes any
      subsequent attempt to use this setsockopt to fail because of the
      TLS_CRYPTO_INFO_READY check, even though crypto_info is not actually
      ready.
      
      This should result in a correctly set up socket after the 3rd call, but
      currently it does not:
      
          size_t s = sizeof(struct tls12_crypto_info_aes_gcm_128);
          struct tls12_crypto_info_aes_gcm_128 crypto_good = {
              .info.version = TLS_1_2_VERSION,
              .info.cipher_type = TLS_CIPHER_AES_GCM_128,
          };
      
          struct tls12_crypto_info_aes_gcm_128 crypto_bad_type = crypto_good;
          crypto_bad_type.info.cipher_type = 42;
      
          setsockopt(sock, SOL_TLS, TLS_TX, &crypto_bad_type, s);  /* fails: bad cipher */
          setsockopt(sock, SOL_TLS, TLS_TX, &crypto_good, s - 1);  /* fails: short optlen */
          setsockopt(sock, SOL_TLS, TLS_TX, &crypto_good, s);      /* should succeed, but
                                                                      currently fails */
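
      Sketch of the fix (illustrative): zero the partially-copied state
      on the error path so the TLS_CRYPTO_INFO_READY check can pass on
      a later, valid attempt:

        err_crypto_info:
                memset(crypto_info, 0, sizeof(*crypto_info));
                return rc;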
      
      Fixes: 3c4d7559 ("tls: kernel TLS support")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • tls: return -EBUSY if crypto_info is already set · 2f54941c
      Sabrina Dubroca authored
      
      [ Upstream commit 877d17c7 ]
      
      do_tls_setsockopt_tx returns 0 without doing anything when crypto_info
      is already set. Silent failure is confusing for users.
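
      Sketch (illustrative; TLS_CRYPTO_INFO_READY is the real check):

        if (TLS_CRYPTO_INFO_READY(crypto_info)) {
                rc = -EBUSY;    /* was: rc = 0, a silent no-op */
                goto out;
        }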
      
      Fixes: 3c4d7559 ("tls: kernel TLS support")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • tls: fix sw_ctx leak · 3a28f04b
      Sabrina Dubroca authored
      
      [ Upstream commit cf6d43ef ]
      
      During setsockopt(SOL_TCP, TLS_TX), if initialization of the software
      context fails in tls_set_sw_offload(), we leak sw_ctx. We also don't
      reassign ctx->priv_ctx to NULL, so we can't even do another attempt to
      set it up on the same socket, as it will fail with -EEXIST.
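
      Sketch of the error path (illustrative):

        free_priv:
                kfree(ctx->priv_ctx);
                ctx->priv_ctx = NULL;   /* allow a later retry instead
                                         * of failing with -EEXIST */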
      
      Fixes: 3c4d7559 ('tls: kernel TLS support')
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>