Commit 06b19fe9 authored by David S. Miller

Merge branch 'chelsio-inline-tls'

Atul Gupta says:

====================
Chelsio Inline TLS

Series for Chelsio Inline TLS driver (chtls)

Use the tls ULP infrastructure to register chtls as an Inline TLS driver.
chtls uses TCP sockets to Tx/Rx TLS records.
The TCP sk_proto APIs are enhanced to offload TLS records.

T6 adapter provides the following features:
        -TLS record offload, TLS header, encrypt, digest and transmit
        -TLS record receive and decrypt
        -TLS key store
        -TCP/IP engine
        -TLS engine
        -GCM crypto engine [CBC is also supported]

TLS provides security at the transport layer. It uses TCP to provide
reliable end-to-end transport of application data.
It relies on TCP for any retransmission.
A TLS session comprises three parts:
a. TCP/IP connection
b. TLS handshake
c. Record layer processing

The TLS handshake state machine is executed on the host (refer to a
standard implementation, e.g. OpenSSL). setsockopt [SOL_TCP, TCP_ULP]
initializes the TCP proto-ops for Chelsio inline TLS support.
setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));

Tx and Rx keys are decided during the handshake and programmed on
the chip after the CCS (ChangeCipherSpec) is exchanged.
struct tls12_crypto_info_aes_gcm_128 crypto_info
setsockopt(sock, SOL_TLS, TLS_TX, &crypto_info, sizeof(crypto_info))
Finished is the first message encrypted/decrypted inline on Tx/Rx.

On the Tx path, the TLS engine receives plain text from OpenSSL, inserts
the IV, fetches the Tx key, creates cipher-text records, and generates the MAC.

The TLS header is added to the cipher text, which is forwarded to the
TCP/IP engine for transport-layer processing and transmission on the wire.
TX PATH:
Apps--openssl--chtls---TLS engine---encrypt/auth---TCP/IP engine---wire

On the Rx side, received data is PDU-aligned at record boundaries.
TLS processes only complete records. If the Rx key was programmed on
CCS receive, data is decrypted and the plain text is posted to the host.
RX PATH:
Wire--cipher-text--TCP/IP engine [PDU align]---TLS engine---
decrypt/auth---plain-text--chtls--openssl--application

v15: indent fix in mark_urg
     -removed unwanted checks in sendmsg, sendpage, recvmsg,
      close, disconnect, shutdown, destroy sock [Sabrina]
     - removed unused chtls_free_kmap [chtls.h]
     - rebase to top of net-next

v14: -Reverse christmas tree style for variable declarations for
     various functions in chtls_hw.c, chtls_io.c [Stefano Brivio]
     - replaced break with return in tcp_state_to_flowc_state
       [Stefano Brivio]
     - renamed tlstx_seq_number to tlstx_incr_seqnum [Stefano Brivio]
     - use bool for corked, should_push and send_should_push
       [Stefano Brivio]
     - removed "Reviewed-by" tag for Stefano, Sabrina, Dave Watson

v13: handle clean ctx free for HW_RECORD in tls_sk_proto_close
    -removed SOCK_INLINE [chtls.h], using csk_conn_inline instead
     in send_abort_rpl,chtls_send_abort_rpl,chtls_sendmsg,chtls_sendpage
    -removed sk_no_receive [chtls_io.c] replaced with sk_shutdown &
     RCV_SHUTDOWN in chtls_pt_recvmsg, peekmsg and chtls_recvmsg
    -cleaned chtls_expansion_size [Stefano Brivio]
    - u8 conf:3 in tls_sw_context to add TLS_HW_RECORD
    -removed is_tls_skb, using tls_skb_inline [Stefano Brivio]
    -reverse christmas tree formatting in chtls_io.c, chtls_cm.c
     [Stefano Brivio]
    -fixed build warning reported by kbuild robot
    -retained ctx conf enum in chtls_main vs earlier versions, tls_prots
     not used in chtls.
    -cleanup [removed syn_sent, base_prot, added synq] [Michael Werner]
    - passing struct fw_wr_hdr * to ofldtxq_stop [Casey]
    - rebased on top of the current net-next

v12: patch against net-next
    -fixed build error [reported by Julia]
    -replace set_queue with skb_set_queue_mapping [Sabrina]
    -copyright year correction [chtls]

v11: formatting and cleanup, few function rename and error
     handling [Stefano Brivio]
     - ctx freed later for TLS_HW_RECORD
     - split tx and rx in different patch

v10: fixed following based on the review comments of Sabrina Dubroca
     -docs header added for struct tls_device [tls.h]
     -changed TLS_FULL_HW to TLS_HW_RECORD
     -similarly using tls-hw-record instead of tls-inline for
     ethtool feature config
     -added more description to patch sets
     -replaced kmalloc/vmalloc/kfree with kvzalloc/kvfree
     -reordered the patch sequence
     -formatted entire patch for func return values

v9: corrected __u8 and similar usage
    -create_ctx to alloc tls_context
    -tls_hw_prot before sk !establish check

v8: tls_main.c cleanup comment [Dave Watson]

v7: func name change, use sk->sk_prot where required

v6: modify prot only for FULL_HW
   -corrected commit message for patch 11

v5: set TLS_FULL_HW for registered inline tls drivers
   -set TLS_FULL_HW prot for offload connection else move
    to TLS_SW_TX
   -Case handled for interface with same IP [Dave Miller]
   -Removed Specific IP and INADDR_ANY handling [v4]

v4: removed chtls ULP type, retained tls ULP
   -registered chtls with net tls
   -defined struct tls_device to register the Inline drivers
   -ethtool interface tls-inline to enable Inline TLS for interface
   -prot update to support inline TLS

v3: fixed the kbuild test issues
   -made a few functions static
   -initialized a few variables

v2: fixed the following based on the review comments of Stephan Mueller,
    Stefano Brivio and Hannes Frederic
    -Added more details in cover letter
    -Fixed indentation and formatting issues
    -Using aes instead of aes-generic
    -memset key info after programming the key on chip
    -reordered the patch sequence
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents d4069fe6 bd7f4857
@@ -29,3 +29,14 @@ config CHELSIO_IPSEC_INLINE
default n
---help---
Enable support for IPSec Tx Inline.
config CRYPTO_DEV_CHELSIO_TLS
tristate "Chelsio Crypto Inline TLS Driver"
depends on CHELSIO_T4
depends on TLS
select CRYPTO_DEV_CHELSIO
---help---
Support Chelsio Inline TLS with Chelsio crypto accelerator.
To compile this driver as a module, choose M here: the module
will be called chtls.
@@ -3,3 +3,4 @@ ccflags-y := -Idrivers/net/ethernet/chelsio/cxgb4
obj-$(CONFIG_CRYPTO_DEV_CHELSIO) += chcr.o
chcr-objs := chcr_core.o chcr_algo.o
chcr-$(CONFIG_CHELSIO_IPSEC_INLINE) += chcr_ipsec.o
obj-$(CONFIG_CRYPTO_DEV_CHELSIO_TLS) += chtls/
@@ -86,6 +86,39 @@
KEY_CONTEXT_OPAD_PRESENT_M)
#define KEY_CONTEXT_OPAD_PRESENT_F KEY_CONTEXT_OPAD_PRESENT_V(1U)
#define TLS_KEYCTX_RXFLIT_CNT_S 24
#define TLS_KEYCTX_RXFLIT_CNT_V(x) ((x) << TLS_KEYCTX_RXFLIT_CNT_S)
#define TLS_KEYCTX_RXPROT_VER_S 20
#define TLS_KEYCTX_RXPROT_VER_M 0xf
#define TLS_KEYCTX_RXPROT_VER_V(x) ((x) << TLS_KEYCTX_RXPROT_VER_S)
#define TLS_KEYCTX_RXCIPH_MODE_S 16
#define TLS_KEYCTX_RXCIPH_MODE_M 0xf
#define TLS_KEYCTX_RXCIPH_MODE_V(x) ((x) << TLS_KEYCTX_RXCIPH_MODE_S)
#define TLS_KEYCTX_RXAUTH_MODE_S 12
#define TLS_KEYCTX_RXAUTH_MODE_M 0xf
#define TLS_KEYCTX_RXAUTH_MODE_V(x) ((x) << TLS_KEYCTX_RXAUTH_MODE_S)
#define TLS_KEYCTX_RXCIAU_CTRL_S 11
#define TLS_KEYCTX_RXCIAU_CTRL_V(x) ((x) << TLS_KEYCTX_RXCIAU_CTRL_S)
#define TLS_KEYCTX_RX_SEQCTR_S 9
#define TLS_KEYCTX_RX_SEQCTR_M 0x3
#define TLS_KEYCTX_RX_SEQCTR_V(x) ((x) << TLS_KEYCTX_RX_SEQCTR_S)
#define TLS_KEYCTX_RX_VALID_S 8
#define TLS_KEYCTX_RX_VALID_V(x) ((x) << TLS_KEYCTX_RX_VALID_S)
#define TLS_KEYCTX_RXCK_SIZE_S 3
#define TLS_KEYCTX_RXCK_SIZE_M 0x7
#define TLS_KEYCTX_RXCK_SIZE_V(x) ((x) << TLS_KEYCTX_RXCK_SIZE_S)
#define TLS_KEYCTX_RXMK_SIZE_S 0
#define TLS_KEYCTX_RXMK_SIZE_M 0x7
#define TLS_KEYCTX_RXMK_SIZE_V(x) ((x) << TLS_KEYCTX_RXMK_SIZE_S)
#define CHCR_HASH_MAX_DIGEST_SIZE 64
#define CHCR_MAX_SHA_DIGEST_SIZE 64
@@ -176,6 +209,15 @@
KEY_CONTEXT_SALT_PRESENT_V(1) | \
KEY_CONTEXT_CTX_LEN_V((ctx_len)))
#define FILL_KEY_CRX_HDR(ck_size, mk_size, d_ck, opad, ctx_len) \
htonl(TLS_KEYCTX_RXMK_SIZE_V(mk_size) | \
TLS_KEYCTX_RXCK_SIZE_V(ck_size) | \
TLS_KEYCTX_RX_VALID_V(1) | \
TLS_KEYCTX_RX_SEQCTR_V(3) | \
TLS_KEYCTX_RXAUTH_MODE_V(4) | \
TLS_KEYCTX_RXCIPH_MODE_V(2) | \
TLS_KEYCTX_RXFLIT_CNT_V((ctx_len)))
#define FILL_WR_OP_CCTX_SIZE \
htonl( \
FW_CRYPTO_LOOKASIDE_WR_OPCODE_V( \
......
@@ -65,10 +65,58 @@ struct uld_ctx;
struct _key_ctx {
__be32 ctx_hdr;
u8 salt[MAX_SALT];
__be64 reserverd;
__be64 iv_to_auth;
unsigned char key[0];
};
#define KEYCTX_TX_WR_IV_S 55
#define KEYCTX_TX_WR_IV_M 0x1ffULL
#define KEYCTX_TX_WR_IV_V(x) ((x) << KEYCTX_TX_WR_IV_S)
#define KEYCTX_TX_WR_IV_G(x) \
(((x) >> KEYCTX_TX_WR_IV_S) & KEYCTX_TX_WR_IV_M)
#define KEYCTX_TX_WR_AAD_S 47
#define KEYCTX_TX_WR_AAD_M 0xffULL
#define KEYCTX_TX_WR_AAD_V(x) ((x) << KEYCTX_TX_WR_AAD_S)
#define KEYCTX_TX_WR_AAD_G(x) (((x) >> KEYCTX_TX_WR_AAD_S) & \
KEYCTX_TX_WR_AAD_M)
#define KEYCTX_TX_WR_AADST_S 39
#define KEYCTX_TX_WR_AADST_M 0xffULL
#define KEYCTX_TX_WR_AADST_V(x) ((x) << KEYCTX_TX_WR_AADST_S)
#define KEYCTX_TX_WR_AADST_G(x) \
(((x) >> KEYCTX_TX_WR_AADST_S) & KEYCTX_TX_WR_AADST_M)
#define KEYCTX_TX_WR_CIPHER_S 30
#define KEYCTX_TX_WR_CIPHER_M 0x1ffULL
#define KEYCTX_TX_WR_CIPHER_V(x) ((x) << KEYCTX_TX_WR_CIPHER_S)
#define KEYCTX_TX_WR_CIPHER_G(x) \
(((x) >> KEYCTX_TX_WR_CIPHER_S) & KEYCTX_TX_WR_CIPHER_M)
#define KEYCTX_TX_WR_CIPHERST_S 23
#define KEYCTX_TX_WR_CIPHERST_M 0x7f
#define KEYCTX_TX_WR_CIPHERST_V(x) ((x) << KEYCTX_TX_WR_CIPHERST_S)
#define KEYCTX_TX_WR_CIPHERST_G(x) \
(((x) >> KEYCTX_TX_WR_CIPHERST_S) & KEYCTX_TX_WR_CIPHERST_M)
#define KEYCTX_TX_WR_AUTH_S 14
#define KEYCTX_TX_WR_AUTH_M 0x1ff
#define KEYCTX_TX_WR_AUTH_V(x) ((x) << KEYCTX_TX_WR_AUTH_S)
#define KEYCTX_TX_WR_AUTH_G(x) \
(((x) >> KEYCTX_TX_WR_AUTH_S) & KEYCTX_TX_WR_AUTH_M)
#define KEYCTX_TX_WR_AUTHST_S 7
#define KEYCTX_TX_WR_AUTHST_M 0x7f
#define KEYCTX_TX_WR_AUTHST_V(x) ((x) << KEYCTX_TX_WR_AUTHST_S)
#define KEYCTX_TX_WR_AUTHST_G(x) \
(((x) >> KEYCTX_TX_WR_AUTHST_S) & KEYCTX_TX_WR_AUTHST_M)
#define KEYCTX_TX_WR_AUTHIN_S 0
#define KEYCTX_TX_WR_AUTHIN_M 0x7f
#define KEYCTX_TX_WR_AUTHIN_V(x) ((x) << KEYCTX_TX_WR_AUTHIN_S)
#define KEYCTX_TX_WR_AUTHIN_G(x) \
(((x) >> KEYCTX_TX_WR_AUTHIN_S) & KEYCTX_TX_WR_AUTHIN_M)
struct chcr_wr {
struct fw_crypto_lookaside_wr wreq;
struct ulp_txpkt ulptx;
@@ -90,6 +138,11 @@ struct uld_ctx {
struct chcr_dev *dev;
};
struct sge_opaque_hdr {
void *dev;
dma_addr_t addr[MAX_SKB_FRAGS + 1];
};
struct chcr_ipsec_req {
struct ulp_txpkt ulptx;
struct ulptx_idata sc_imm;
......
ccflags-y := -Idrivers/net/ethernet/chelsio/cxgb4 -Idrivers/crypto/chelsio/
obj-$(CONFIG_CRYPTO_DEV_CHELSIO_TLS) += chtls.o
chtls-objs := chtls_main.o chtls_cm.o chtls_io.o chtls_hw.o
/*
* Copyright (c) 2018 Chelsio Communications, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __CHTLS_CM_H__
#define __CHTLS_CM_H__
/*
* TCB settings
*/
/* 3:0 */
#define TCB_ULP_TYPE_W 0
#define TCB_ULP_TYPE_S 0
#define TCB_ULP_TYPE_M 0xfULL
#define TCB_ULP_TYPE_V(x) ((x) << TCB_ULP_TYPE_S)
/* 11:4 */
#define TCB_ULP_RAW_W 0
#define TCB_ULP_RAW_S 4
#define TCB_ULP_RAW_M 0xffULL
#define TCB_ULP_RAW_V(x) ((x) << TCB_ULP_RAW_S)
#define TF_TLS_KEY_SIZE_S 7
#define TF_TLS_KEY_SIZE_V(x) ((x) << TF_TLS_KEY_SIZE_S)
#define TF_TLS_CONTROL_S 2
#define TF_TLS_CONTROL_V(x) ((x) << TF_TLS_CONTROL_S)
#define TF_TLS_ACTIVE_S 1
#define TF_TLS_ACTIVE_V(x) ((x) << TF_TLS_ACTIVE_S)
#define TF_TLS_ENABLE_S 0
#define TF_TLS_ENABLE_V(x) ((x) << TF_TLS_ENABLE_S)
#define TF_RX_QUIESCE_S 15
#define TF_RX_QUIESCE_V(x) ((x) << TF_RX_QUIESCE_S)
/*
* Max receive window supported by HW in bytes. Only a small part of it can
* be set through option0, the rest needs to be set through RX_DATA_ACK.
*/
#define MAX_RCV_WND ((1U << 27) - 1)
#define MAX_MSS 65536
/*
* Min receive window. We want it to be large enough to accommodate receive
* coalescing, handle jumbo frames, and not trigger sender SWS avoidance.
*/
#define MIN_RCV_WND (24 * 1024U)
#define LOOPBACK(x) (((x) & htonl(0xff000000)) == htonl(0x7f000000))
/* ulp_mem_io + ulptx_idata + payload + padding */
#define MAX_IMM_ULPTX_WR_LEN (32 + 8 + 256 + 8)
/* for TX: a skb must have a headroom of at least TX_HEADER_LEN bytes */
#define TX_HEADER_LEN \
(sizeof(struct fw_ofld_tx_data_wr) + sizeof(struct sge_opaque_hdr))
#define TX_TLSHDR_LEN \
(sizeof(struct fw_tlstx_data_wr) + sizeof(struct cpl_tx_tls_sfo) + \
sizeof(struct sge_opaque_hdr))
#define TXDATA_SKB_LEN 128
enum {
CPL_TX_TLS_SFO_TYPE_CCS,
CPL_TX_TLS_SFO_TYPE_ALERT,
CPL_TX_TLS_SFO_TYPE_HANDSHAKE,
CPL_TX_TLS_SFO_TYPE_DATA,
CPL_TX_TLS_SFO_TYPE_HEARTBEAT,
};
enum {
TLS_HDR_TYPE_CCS = 20,
TLS_HDR_TYPE_ALERT,
TLS_HDR_TYPE_HANDSHAKE,
TLS_HDR_TYPE_RECORD,
TLS_HDR_TYPE_HEARTBEAT,
};
typedef void (*defer_handler_t)(struct chtls_dev *dev, struct sk_buff *skb);
extern struct request_sock_ops chtls_rsk_ops;
struct deferred_skb_cb {
defer_handler_t handler;
struct chtls_dev *dev;
};
#define DEFERRED_SKB_CB(skb) ((struct deferred_skb_cb *)(skb)->cb)
#define failover_flowc_wr_len offsetof(struct fw_flowc_wr, mnemval[3])
#define WR_SKB_CB(skb) ((struct wr_skb_cb *)(skb)->cb)
#define ACCEPT_QUEUE(sk) (&inet_csk(sk)->icsk_accept_queue.rskq_accept_head)
#define SND_WSCALE(tp) ((tp)->rx_opt.snd_wscale)
#define RCV_WSCALE(tp) ((tp)->rx_opt.rcv_wscale)
#define USER_MSS(tp) ((tp)->rx_opt.user_mss)
#define TS_RECENT_STAMP(tp) ((tp)->rx_opt.ts_recent_stamp)
#define WSCALE_OK(tp) ((tp)->rx_opt.wscale_ok)
#define TSTAMP_OK(tp) ((tp)->rx_opt.tstamp_ok)
#define SACK_OK(tp) ((tp)->rx_opt.sack_ok)
#define INC_ORPHAN_COUNT(sk) percpu_counter_inc((sk)->sk_prot->orphan_count)
/* TLS SKB */
#define skb_ulp_tls_inline(skb) (ULP_SKB_CB(skb)->ulp.tls.ofld)
#define skb_ulp_tls_iv_imm(skb) (ULP_SKB_CB(skb)->ulp.tls.iv)
void chtls_defer_reply(struct sk_buff *skb, struct chtls_dev *dev,
defer_handler_t handler);
/*
* Returns true if the socket is in one of the supplied states.
*/
static inline unsigned int sk_in_state(const struct sock *sk,
unsigned int states)
{
return states & (1 << sk->sk_state);
}
static void chtls_rsk_destructor(struct request_sock *req)
{
/* do nothing */
}
static inline void chtls_init_rsk_ops(struct proto *chtls_tcp_prot,
struct request_sock_ops *chtls_tcp_ops,
struct proto *tcp_prot, int family)
{
memset(chtls_tcp_ops, 0, sizeof(*chtls_tcp_ops));
chtls_tcp_ops->family = family;
chtls_tcp_ops->obj_size = sizeof(struct tcp_request_sock);
chtls_tcp_ops->destructor = chtls_rsk_destructor;
chtls_tcp_ops->slab = tcp_prot->rsk_prot->slab;
chtls_tcp_prot->rsk_prot = chtls_tcp_ops;
}
static inline void chtls_reqsk_free(struct request_sock *req)
{
if (req->rsk_listener)
sock_put(req->rsk_listener);
kmem_cache_free(req->rsk_ops->slab, req);
}
#define DECLARE_TASK_FUNC(task, task_param) \
static void task(struct work_struct *task_param)
static inline void sk_wakeup_sleepers(struct sock *sk, bool interruptable)
{
struct socket_wq *wq;
rcu_read_lock();
wq = rcu_dereference(sk->sk_wq);
if (skwq_has_sleeper(wq)) {
if (interruptable)
wake_up_interruptible(sk_sleep(sk));
else
wake_up_all(sk_sleep(sk));
}
rcu_read_unlock();
}
static inline void chtls_set_req_port(struct request_sock *oreq,
__be16 source, __be16 dest)
{
inet_rsk(oreq)->ir_rmt_port = source;
inet_rsk(oreq)->ir_num = ntohs(dest);
}
static inline void chtls_set_req_addr(struct request_sock *oreq,
__be32 local_ip, __be32 peer_ip)
{
inet_rsk(oreq)->ir_loc_addr = local_ip;
inet_rsk(oreq)->ir_rmt_addr = peer_ip;
}
static inline void chtls_free_skb(struct sock *sk, struct sk_buff *skb)
{
skb_dst_set(skb, NULL);
__skb_unlink(skb, &sk->sk_receive_queue);
__kfree_skb(skb);
}
static inline void chtls_kfree_skb(struct sock *sk, struct sk_buff *skb)
{
skb_dst_set(skb, NULL);
__skb_unlink(skb, &sk->sk_receive_queue);
kfree_skb(skb);
}
static inline void enqueue_wr(struct chtls_sock *csk, struct sk_buff *skb)
{
WR_SKB_CB(skb)->next_wr = NULL;
skb_get(skb);
if (!csk->wr_skb_head)
csk->wr_skb_head = skb;
else
WR_SKB_CB(csk->wr_skb_tail)->next_wr = skb;
csk->wr_skb_tail = skb;
}
#endif
@@ -4549,18 +4549,32 @@ static int adap_init0(struct adapter *adap)
adap->num_ofld_uld += 2;
}
if (caps_cmd.cryptocaps) {
/* Should query params here...TODO */
params[0] = FW_PARAM_PFVF(NCRYPTO_LOOKASIDE);
ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 2,
params, val);
if (ret < 0) {
if (ret != -EINVAL)
if (ntohs(caps_cmd.cryptocaps) &
FW_CAPS_CONFIG_CRYPTO_LOOKASIDE) {
params[0] = FW_PARAM_PFVF(NCRYPTO_LOOKASIDE);
ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
2, params, val);
if (ret < 0) {
if (ret != -EINVAL)
goto bye;
} else {
adap->vres.ncrypto_fc = val[0];
}
adap->num_ofld_uld += 1;
}
if (ntohs(caps_cmd.cryptocaps) &
FW_CAPS_CONFIG_TLS_INLINE) {
params[0] = FW_PARAM_PFVF(TLS_START);
params[1] = FW_PARAM_PFVF(TLS_END);
ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
2, params, val);
if (ret < 0)
goto bye;
} else {
adap->vres.ncrypto_fc = val[0];
adap->vres.key.start = val[0];
adap->vres.key.size = val[1] - val[0] + 1;
adap->num_uld += 1;
}
adap->params.crypto = ntohs(caps_cmd.cryptocaps);
adap->num_uld += 1;
}
#undef FW_PARAM_PFVF
#undef FW_PARAM_DEV
......
@@ -237,6 +237,7 @@ enum cxgb4_uld {
CXGB4_ULD_ISCSI,
CXGB4_ULD_ISCSIT,
CXGB4_ULD_CRYPTO,
CXGB4_ULD_TLS,
CXGB4_ULD_MAX
};
@@ -289,6 +290,7 @@ struct cxgb4_virt_res { /* virtualized HW resources */
struct cxgb4_range qp;
struct cxgb4_range cq;
struct cxgb4_range ocq;
struct cxgb4_range key;
unsigned int ncrypto_fc;
};
@@ -300,6 +302,9 @@ struct chcr_stats_debug {
atomic_t error;
atomic_t fallback;
atomic_t ipsec_cnt;
atomic_t tls_pdu_tx;
atomic_t tls_pdu_rx;
atomic_t tls_key;
};
#define OCQ_WIN_OFFSET(pdev, vres) \
@@ -382,6 +387,8 @@ struct cxgb4_uld_info {
int cxgb4_register_uld(enum cxgb4_uld type, const struct cxgb4_uld_info *p);
int cxgb4_unregister_uld(enum cxgb4_uld type);
int cxgb4_ofld_send(struct net_device *dev, struct sk_buff *skb);
int cxgb4_immdata_send(struct net_device *dev, unsigned int idx,
const void *src, unsigned int len);
int cxgb4_crypto_send(struct net_device *dev, struct sk_buff *skb);
unsigned int cxgb4_dbfifo_count(const struct net_device *dev, int lpfifo);
unsigned int cxgb4_port_chan(const struct net_device *dev);
......
@@ -1019,8 +1019,8 @@ EXPORT_SYMBOL(cxgb4_ring_tx_db);
void cxgb4_inline_tx_skb(const struct sk_buff *skb,
const struct sge_txq *q, void *pos)
{
u64 *p;
int left = (void *)q->stat - pos;
u64 *p;
if (likely(skb->len <= left)) {
if (likely(!skb->data_len))
@@ -1735,15 +1735,13 @@ static void txq_stop_maperr(struct sge_uld_txq *q)
/**
* ofldtxq_stop - stop an offload Tx queue that has become full
* @q: the queue to stop
* @skb: the packet causing the queue to become full
* @wr: the Work Request causing the queue to become full
*
* Stops an offload Tx queue that has become full and modifies the packet
* being written to request a wakeup.
*/
static void ofldtxq_stop(struct sge_uld_txq *q, struct sk_buff *skb)
static void ofldtxq_stop(struct sge_uld_txq *q, struct fw_wr_hdr *wr)
{
struct fw_wr_hdr *wr = (struct fw_wr_hdr *)skb->data;
wr->lo |= htonl(FW_WR_EQUEQ_F | FW_WR_EQUIQ_F);
q->q.stops++;
q->full = 1;
@@ -1804,7 +1802,7 @@ static void service_ofldq(struct sge_uld_txq *q)
credits = txq_avail(&q->q) - ndesc;
BUG_ON(credits < 0);
if (unlikely(credits < TXQ_STOP_THRES))
ofldtxq_stop(q, skb);
ofldtxq_stop(q, (struct fw_wr_hdr *)skb->data);
pos = (u64 *)&q->q.desc[q->q.pidx];
if (is_ofld_imm(skb))
@@ -2005,6 +2003,103 @@ int cxgb4_ofld_send(struct net_device *dev, struct sk_buff *skb)
}
EXPORT_SYMBOL(cxgb4_ofld_send);
static void *inline_tx_header(const void *src,
const struct sge_txq *q,
void *pos, int length)
{
int left = (void *)q->stat - pos;
u64 *p;
if (likely(length <= left)) {
memcpy(pos, src, length);
pos += length;
} else {
memcpy(pos, src, left);
memcpy(q->desc, src + left, length - left);
pos = (void *)q->desc + (length - left);
}
/* 0-pad to multiple of 16 */
p = PTR_ALIGN(pos, 8);
if ((uintptr_t)p & 8) {
*p = 0;
return p + 1;
}
return p;
}
/**
* ofld_xmit_direct - copy a WR into offload queue
* @q: the Tx offload queue
* @src: location of WR
* @len: WR length
*
* Copy an immediate WR into an uncontended SGE offload queue.
*/
static int ofld_xmit_direct(struct sge_uld_txq *q, const void *src,
unsigned int len)
{
unsigned int ndesc;
int credits;
u64 *pos;
/* Use the lower limit as the cut-off */
if (len > MAX_IMM_OFLD_TX_DATA_WR_LEN) {
WARN_ON(1);
return NET_XMIT_DROP;
}
/* Don't return NET_XMIT_CN here as the current
* implementation doesn't queue the request
* using an skb when the following conditions not met
*/
if (!spin_trylock(&q->sendq.lock))
return NET_XMIT_DROP;
if (q->full || !skb_queue_empty(&q->sendq) ||
q->service_ofldq_running) {
spin_unlock(&q->sendq.lock);
return NET_XMIT_DROP;
}
ndesc = flits_to_desc(DIV_ROUND_UP(len, 8));
credits = txq_avail(&q->q) - ndesc;
pos = (u64 *)&q->q.desc[q->q.pidx];
/* ofldtxq_stop modifies WR header in-situ */
inline_tx_header(src, &q->q, pos, len);
if (unlikely(credits < TXQ_STOP_THRES))
ofldtxq_stop(q, (struct fw_wr_hdr *)pos);
txq_advance(&q->q, ndesc);
cxgb4_ring_tx_db(q->adap, &q->q, ndesc);
spin_unlock(&q->sendq.lock);
return NET_XMIT_SUCCESS;
}
int cxgb4_immdata_send(struct net_device *dev, unsigned int idx,
const void *src, unsigned int len)
{
struct sge_uld_txq_info *txq_info;
struct sge_uld_txq *txq;
struct adapter *adap;
int ret;
adap = netdev2adap(dev);
local_bh_disable();
txq_info = adap->sge.uld_txq_info[CXGB4_TX_OFLD];
if (unlikely(!txq_info)) {
WARN_ON(true);
local_bh_enable();
return NET_XMIT_DROP;
}
txq = &txq_info->uldtxq[idx];
ret = ofld_xmit_direct(txq, src, len);
local_bh_enable();
return net_xmit_eval(ret);
}
EXPORT_SYMBOL(cxgb4_immdata_send);
/**
* t4_crypto_send - send crypto packet
* @adap: the adapter
......
@@ -82,13 +82,15 @@ enum {
CPL_RX_ISCSI_CMP = 0x45,
CPL_TRACE_PKT_T5 = 0x48,
CPL_RX_ISCSI_DDP = 0x49,
CPL_RX_TLS_CMP = 0x4E,
CPL_RDMA_READ_REQ = 0x60,
CPL_PASS_OPEN_REQ6 = 0x81,
CPL_ACT_OPEN_REQ6 = 0x83,
CPL_TX_TLS_PDU = 0x88,
CPL_TX_TLS_PDU = 0x88,
CPL_TX_TLS_SFO = 0x89,
CPL_TX_SEC_PDU = 0x8A,
CPL_TX_TLS_ACK = 0x8B,
@@ -98,6 +100,7 @@ enum {
CPL_RX_MPS_PKT = 0xAF,
CPL_TRACE_PKT = 0xB0,
CPL_TLS_DATA = 0xB1,
CPL_ISCSI_DATA = 0xB2,
CPL_FW4_MSG = 0xC0,
@@ -155,6 +158,7 @@ enum {
ULP_MODE_RDMA = 4,
ULP_MODE_TCPDDP = 5,
ULP_MODE_FCOE = 6,
ULP_MODE_TLS = 8,
};
enum {
@@ -1445,6 +1449,14 @@ struct cpl_tx_data {
#define T6_TX_FORCE_V(x) ((x) << T6_TX_FORCE_S)
#define T6_TX_FORCE_F T6_TX_FORCE_V(1U)
#define TX_SHOVE_S 14
#define TX_SHOVE_V(x) ((x) << TX_SHOVE_S)
#define TX_ULP_MODE_S 10
#define TX_ULP_MODE_M 0x7
#define TX_ULP_MODE_V(x) ((x) << TX_ULP_MODE_S)
#define TX_ULP_MODE_G(x) (((x) >> TX_ULP_MODE_S) & TX_ULP_MODE_M)
enum {
ULP_TX_MEM_READ = 2,
ULP_TX_MEM_WRITE = 3,
@@ -1455,12 +1467,21 @@ enum {
ULP_TX_SC_NOOP = 0x80,
ULP_TX_SC_IMM = 0x81,
ULP_TX_SC_DSGL = 0x82,
ULP_TX_SC_ISGL = 0x83
ULP_TX_SC_ISGL = 0x83,
ULP_TX_SC_MEMRD = 0x86
};
#define ULPTX_CMD_S 24
#define ULPTX_CMD_V(x) ((x) << ULPTX_CMD_S)
#define ULPTX_LEN16_S 0
#define ULPTX_LEN16_M 0xFF
#define ULPTX_LEN16_V(x) ((x) << ULPTX_LEN16_S)
#define ULP_TX_SC_MORE_S 23
#define ULP_TX_SC_MORE_V(x) ((x) << ULP_TX_SC_MORE_S)
#define ULP_TX_SC_MORE_F ULP_TX_SC_MORE_V(1U)
struct ulptx_sge_pair {
__be32 len[2];
__be64 addr[2];
@@ -2183,4 +2204,101 @@ struct cpl_srq_table_rpl {
#define SRQT_IDX_V(x) ((x) << SRQT_IDX_S)
#define SRQT_IDX_G(x) (((x) >> SRQT_IDX_S) & SRQT_IDX_M)
struct cpl_tx_tls_sfo {
__be32 op_to_seg_len;
__be32 pld_len;
__be32 type_protover;
__be32 r1_lo;
__be32 seqno_numivs;
__be32 ivgen_hdrlen;
__be64 scmd1;
};
/* cpl_tx_tls_sfo macros */
#define CPL_TX_TLS_SFO_OPCODE_S 24
#define CPL_TX_TLS_SFO_OPCODE_V(x) ((x) << CPL_TX_TLS_SFO_OPCODE_S)
#define CPL_TX_TLS_SFO_DATA_TYPE_S 20
#define CPL_TX_TLS_SFO_DATA_TYPE_V(x) ((x) << CPL_TX_TLS_SFO_DATA_TYPE_S)
#define CPL_TX_TLS_SFO_CPL_LEN_S 16
#define CPL_TX_TLS_SFO_CPL_LEN_V(x) ((x) << CPL_TX_TLS_SFO_CPL_LEN_S)
#define CPL_TX_TLS_SFO_SEG_LEN_S 0
#define CPL_TX_TLS_SFO_SEG_LEN_M 0xffff
#define CPL_TX_TLS_SFO_SEG_LEN_V(x) ((x) << CPL_TX_TLS_SFO_SEG_LEN_S)
#define CPL_TX_TLS_SFO_SEG_LEN_G(x) \
(((x) >> CPL_TX_TLS_SFO_SEG_LEN_S) & CPL_TX_TLS_SFO_SEG_LEN_M)
#define CPL_TX_TLS_SFO_TYPE_S 24
#define CPL_TX_TLS_SFO_TYPE_M 0xff
#define CPL_TX_TLS_SFO_TYPE_V(x) ((x) << CPL_TX_TLS_SFO_TYPE_S)
#define CPL_TX_TLS_SFO_TYPE_G(x) \
(((x) >> CPL_TX_TLS_SFO_TYPE_S) & CPL_TX_TLS_SFO_TYPE_M)
#define CPL_TX_TLS_SFO_PROTOVER_S 8
#define CPL_TX_TLS_SFO_PROTOVER_M 0xffff
#define CPL_TX_TLS_SFO_PROTOVER_V(x) ((x) << CPL_TX_TLS_SFO_PROTOVER_S)
#define CPL_TX_TLS_SFO_PROTOVER_G(x) \
(((x) >> CPL_TX_TLS_SFO_PROTOVER_S) & CPL_TX_TLS_SFO_PROTOVER_M)
struct cpl_tls_data {
struct rss_header rsshdr;
union opcode_tid ot;
__be32 length_pkd;
__be32 seq;
__be32 r1;
};
#define CPL_TLS_DATA_OPCODE_S 24
#define CPL_TLS_DATA_OPCODE_M 0xff
#define CPL_TLS_DATA_OPCODE_V(x) ((x) << CPL_TLS_DATA_OPCODE_S)
#define CPL_TLS_DATA_OPCODE_G(x) \
(((x) >> CPL_TLS_DATA_OPCODE_S) & CPL_TLS_DATA_OPCODE_M)
#define CPL_TLS_DATA_TID_S 0
#define CPL_TLS_DATA_TID_M 0xffffff
#define CPL_TLS_DATA_TID_V(x) ((x) << CPL_TLS_DATA_TID_S)
#define CPL_TLS_DATA_TID_G(x) \
(((x) >> CPL_TLS_DATA_TID_S) & CPL_TLS_DATA_TID_M)
#define CPL_TLS_DATA_LENGTH_S 0
#define CPL_TLS_DATA_LENGTH_M 0xffff
#define CPL_TLS_DATA_LENGTH_V(x) ((x) << CPL_TLS_DATA_LENGTH_S)
#define CPL_TLS_DATA_LENGTH_G(x) \
(((x) >> CPL_TLS_DATA_LENGTH_S) & CPL_TLS_DATA_LENGTH_M)
struct cpl_rx_tls_cmp {
struct rss_header rsshdr;
union opcode_tid ot;
__be32 pdulength_length;
__be32 seq;
__be32 ddp_report;
__be32 r;
__be32 ddp_valid;
};
#define CPL_RX_TLS_CMP_OPCODE_S 24
#define CPL_RX_TLS_CMP_OPCODE_M 0xff
#define CPL_RX_TLS_CMP_OPCODE_V(x) ((x) << CPL_RX_TLS_CMP_OPCODE_S)
#define CPL_RX_TLS_CMP_OPCODE_G(x) \
(((x) >> CPL_RX_TLS_CMP_OPCODE_S) & CPL_RX_TLS_CMP_OPCODE_M)
#define CPL_RX_TLS_CMP_TID_S 0
#define CPL_RX_TLS_CMP_TID_M 0xffffff
#define CPL_RX_TLS_CMP_TID_V(x) ((x) << CPL_RX_TLS_CMP_TID_S)
#define CPL_RX_TLS_CMP_TID_G(x) \
(((x) >> CPL_RX_TLS_CMP_TID_S) & CPL_RX_TLS_CMP_TID_M)
#define CPL_RX_TLS_CMP_PDULENGTH_S 16
#define CPL_RX_TLS_CMP_PDULENGTH_M 0xffff
#define CPL_RX_TLS_CMP_PDULENGTH_V(x) ((x) << CPL_RX_TLS_CMP_PDULENGTH_S)
#define CPL_RX_TLS_CMP_PDULENGTH_G(x) \
(((x) >> CPL_RX_TLS_CMP_PDULENGTH_S) & CPL_RX_TLS_CMP_PDULENGTH_M)
#define CPL_RX_TLS_CMP_LENGTH_S 0
#define CPL_RX_TLS_CMP_LENGTH_M 0xffff
#define CPL_RX_TLS_CMP_LENGTH_V(x) ((x) << CPL_RX_TLS_CMP_LENGTH_S)
#define CPL_RX_TLS_CMP_LENGTH_G(x) \
(((x) >> CPL_RX_TLS_CMP_LENGTH_S) & CPL_RX_TLS_CMP_LENGTH_M)
#endif /* __T4_MSG_H */
@@ -2775,6 +2775,8 @@
#define ULP_RX_LA_RDPTR_A 0x19240
#define ULP_RX_LA_RDDATA_A 0x19244
#define ULP_RX_LA_WRPTR_A 0x19248
#define ULP_RX_TLS_KEY_LLIMIT_A 0x192ac
#define ULP_RX_TLS_KEY_ULIMIT_A 0x192b0
#define HPZ3_S 24
#define HPZ3_V(x) ((x) << HPZ3_S)
......
@@ -105,6 +105,7 @@ enum fw_wr_opcodes {
FW_RI_INV_LSTAG_WR = 0x1a,
FW_ISCSI_TX_DATA_WR = 0x45,
FW_PTP_TX_PKT_WR = 0x46,
FW_TLSTX_DATA_WR = 0x68,
FW_CRYPTO_LOOKASIDE_WR = 0X6d,
FW_LASTC2E_WR = 0x70,
FW_FILTER2_WR = 0x77
@@ -635,6 +636,30 @@ struct fw_ofld_connection_wr {
#define FW_OFLD_CONNECTION_WR_CPLPASSACCEPTRPL_F \
FW_OFLD_CONNECTION_WR_CPLPASSACCEPTRPL_V(1U)
enum fw_flowc_mnem_tcpstate {
FW_FLOWC_MNEM_TCPSTATE_CLOSED = 0, /* illegal */
FW_FLOWC_MNEM_TCPSTATE_LISTEN = 1, /* illegal */
FW_FLOWC_MNEM_TCPSTATE_SYNSENT = 2, /* illegal */
FW_FLOWC_MNEM_TCPSTATE_SYNRECEIVED = 3, /* illegal */
FW_FLOWC_MNEM_TCPSTATE_ESTABLISHED = 4, /* default */
FW_FLOWC_MNEM_TCPSTATE_CLOSEWAIT = 5, /* got peer close already */
FW_FLOWC_MNEM_TCPSTATE_FINWAIT1 = 6, /* haven't gotten ACK for FIN and
* will resend FIN - equiv ESTAB
*/
FW_FLOWC_MNEM_TCPSTATE_CLOSING = 7, /* haven't gotten ACK for FIN and
* will resend FIN but have
* received FIN
*/
FW_FLOWC_MNEM_TCPSTATE_LASTACK = 8, /* haven't gotten ACK for FIN and
* will resend FIN but have
* received FIN
*/
FW_FLOWC_MNEM_TCPSTATE_FINWAIT2 = 9, /* sent FIN and got FIN + ACK,
* waiting for FIN
*/
FW_FLOWC_MNEM_TCPSTATE_TIMEWAIT = 10, /* not expected */
};
enum fw_flowc_mnem {
FW_FLOWC_MNEM_PFNVFN, /* PFN [15:8] VFN [7:0] */
FW_FLOWC_MNEM_CH,
@@ -651,6 +676,8 @@ enum fw_flowc_mnem {
FW_FLOWC_MNEM_DCBPRIO,
FW_FLOWC_MNEM_SND_SCALE,
FW_FLOWC_MNEM_RCV_SCALE,
FW_FLOWC_MNEM_ULD_MODE,
FW_FLOWC_MNEM_MAX,
};
struct fw_flowc_mnemval {
@@ -675,6 +702,14 @@ struct fw_ofld_tx_data_wr {
__be32 tunnel_to_proxy;
};
#define FW_OFLD_TX_DATA_WR_ALIGNPLD_S 30
#define FW_OFLD_TX_DATA_WR_ALIGNPLD_V(x) ((x) << FW_OFLD_TX_DATA_WR_ALIGNPLD_S)
#define FW_OFLD_TX_DATA_WR_ALIGNPLD_F FW_OFLD_TX_DATA_WR_ALIGNPLD_V(1U)
#define FW_OFLD_TX_DATA_WR_SHOVE_S 29
#define FW_OFLD_TX_DATA_WR_SHOVE_V(x) ((x) << FW_OFLD_TX_DATA_WR_SHOVE_S)
#define FW_OFLD_TX_DATA_WR_SHOVE_F FW_OFLD_TX_DATA_WR_SHOVE_V(1U)
#define FW_OFLD_TX_DATA_WR_TUNNEL_S 19
#define FW_OFLD_TX_DATA_WR_TUNNEL_V(x) ((x) << FW_OFLD_TX_DATA_WR_TUNNEL_S)
@@ -691,10 +726,6 @@ struct fw_ofld_tx_data_wr {
#define FW_OFLD_TX_DATA_WR_MORE_S 15
#define FW_OFLD_TX_DATA_WR_MORE_V(x) ((x) << FW_OFLD_TX_DATA_WR_MORE_S)
#define FW_OFLD_TX_DATA_WR_SHOVE_S 14
#define FW_OFLD_TX_DATA_WR_SHOVE_V(x) ((x) << FW_OFLD_TX_DATA_WR_SHOVE_S)
#define FW_OFLD_TX_DATA_WR_SHOVE_F FW_OFLD_TX_DATA_WR_SHOVE_V(1U)
#define FW_OFLD_TX_DATA_WR_ULPMODE_S 10
#define FW_OFLD_TX_DATA_WR_ULPMODE_V(x) ((x) << FW_OFLD_TX_DATA_WR_ULPMODE_S)
@@ -1121,6 +1152,12 @@ enum fw_caps_config_iscsi {
FW_CAPS_CONFIG_ISCSI_TARGET_CNXOFLD = 0x00000008,
};
enum fw_caps_config_crypto {
FW_CAPS_CONFIG_CRYPTO_LOOKASIDE = 0x00000001,
FW_CAPS_CONFIG_TLS_INLINE = 0x00000002,
FW_CAPS_CONFIG_IPSEC_INLINE = 0x00000004,
};
enum fw_caps_config_fcoe {
FW_CAPS_CONFIG_FCOE_INITIATOR = 0x00000001,
FW_CAPS_CONFIG_FCOE_TARGET = 0x00000002,
@@ -1266,6 +1303,8 @@ enum fw_params_param_pfvf {
FW_PARAMS_PARAM_PFVF_CPLFW4MSG_ENCAP = 0x31,
FW_PARAMS_PARAM_PFVF_HPFILTER_START = 0x32,
FW_PARAMS_PARAM_PFVF_HPFILTER_END = 0x33,
FW_PARAMS_PARAM_PFVF_TLS_START = 0x34,
FW_PARAMS_PARAM_PFVF_TLS_END = 0x35,
FW_PARAMS_PARAM_PFVF_NCRYPTO_LOOKASIDE = 0x39,
FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A,
};
@@ -3839,4 +3878,122 @@ struct fw_crypto_lookaside_wr {
(((x) >> FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_S) & \
FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_M)
struct fw_tlstx_data_wr {
__be32 op_to_immdlen;
__be32 flowid_len16;
__be32 plen;
__be32 lsodisable_to_flags;
__be32 r5;
__be32 ctxloc_to_exp;
__be16 mfs;
__be16 adjustedplen_pkd;
__be16 expinplenmax_pkd;
u8 pdusinplenmax_pkd;
u8 r10;
};
#define FW_TLSTX_DATA_WR_OPCODE_S 24
#define FW_TLSTX_DATA_WR_OPCODE_M 0xff
#define FW_TLSTX_DATA_WR_OPCODE_V(x) ((x) << FW_TLSTX_DATA_WR_OPCODE_S)
#define FW_TLSTX_DATA_WR_OPCODE_G(x) \
(((x) >> FW_TLSTX_DATA_WR_OPCODE_S) & FW_TLSTX_DATA_WR_OPCODE_M)
#define FW_TLSTX_DATA_WR_COMPL_S 21
#define FW_TLSTX_DATA_WR_COMPL_M 0x1
#define FW_TLSTX_DATA_WR_COMPL_V(x) ((x) << FW_TLSTX_DATA_WR_COMPL_S)
#define FW_TLSTX_DATA_WR_COMPL_G(x) \
(((x) >> FW_TLSTX_DATA_WR_COMPL_S) & FW_TLSTX_DATA_WR_COMPL_M)
#define FW_TLSTX_DATA_WR_COMPL_F FW_TLSTX_DATA_WR_COMPL_V(1U)
#define FW_TLSTX_DATA_WR_IMMDLEN_S 0
#define FW_TLSTX_DATA_WR_IMMDLEN_M 0xff
#define FW_TLSTX_DATA_WR_IMMDLEN_V(x) ((x) << FW_TLSTX_DATA_WR_IMMDLEN_S)
#define FW_TLSTX_DATA_WR_IMMDLEN_G(x) \
(((x) >> FW_TLSTX_DATA_WR_IMMDLEN_S) & FW_TLSTX_DATA_WR_IMMDLEN_M)
#define FW_TLSTX_DATA_WR_FLOWID_S 8
#define FW_TLSTX_DATA_WR_FLOWID_M 0xfffff
#define FW_TLSTX_DATA_WR_FLOWID_V(x) ((x) << FW_TLSTX_DATA_WR_FLOWID_S)
#define FW_TLSTX_DATA_WR_FLOWID_G(x) \
(((x) >> FW_TLSTX_DATA_WR_FLOWID_S) & FW_TLSTX_DATA_WR_FLOWID_M)
#define FW_TLSTX_DATA_WR_LEN16_S 0
#define FW_TLSTX_DATA_WR_LEN16_M 0xff
#define FW_TLSTX_DATA_WR_LEN16_V(x) ((x) << FW_TLSTX_DATA_WR_LEN16_S)
#define FW_TLSTX_DATA_WR_LEN16_G(x) \
(((x) >> FW_TLSTX_DATA_WR_LEN16_S) & FW_TLSTX_DATA_WR_LEN16_M)
#define FW_TLSTX_DATA_WR_LSODISABLE_S 31
#define FW_TLSTX_DATA_WR_LSODISABLE_M 0x1
#define FW_TLSTX_DATA_WR_LSODISABLE_V(x) \
((x) << FW_TLSTX_DATA_WR_LSODISABLE_S)
#define FW_TLSTX_DATA_WR_LSODISABLE_G(x) \
(((x) >> FW_TLSTX_DATA_WR_LSODISABLE_S) & FW_TLSTX_DATA_WR_LSODISABLE_M)
#define FW_TLSTX_DATA_WR_LSODISABLE_F FW_TLSTX_DATA_WR_LSODISABLE_V(1U)
#define FW_TLSTX_DATA_WR_ALIGNPLD_S 30
#define FW_TLSTX_DATA_WR_ALIGNPLD_M 0x1
#define FW_TLSTX_DATA_WR_ALIGNPLD_V(x) ((x) << FW_TLSTX_DATA_WR_ALIGNPLD_S)
#define FW_TLSTX_DATA_WR_ALIGNPLD_G(x) \
(((x) >> FW_TLSTX_DATA_WR_ALIGNPLD_S) & FW_TLSTX_DATA_WR_ALIGNPLD_M)
#define FW_TLSTX_DATA_WR_ALIGNPLD_F FW_TLSTX_DATA_WR_ALIGNPLD_V(1U)
#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_S 29
#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_M 0x1
#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_V(x) \
((x) << FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_S)
#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_G(x) \
(((x) >> FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_S) & \
FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_M)
#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_F FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_V(1U)
#define FW_TLSTX_DATA_WR_FLAGS_S 0
#define FW_TLSTX_DATA_WR_FLAGS_M 0xfffffff
#define FW_TLSTX_DATA_WR_FLAGS_V(x) ((x) << FW_TLSTX_DATA_WR_FLAGS_S)
#define FW_TLSTX_DATA_WR_FLAGS_G(x) \
(((x) >> FW_TLSTX_DATA_WR_FLAGS_S) & FW_TLSTX_DATA_WR_FLAGS_M)
#define FW_TLSTX_DATA_WR_CTXLOC_S 30
#define FW_TLSTX_DATA_WR_CTXLOC_M 0x3
#define FW_TLSTX_DATA_WR_CTXLOC_V(x) ((x) << FW_TLSTX_DATA_WR_CTXLOC_S)
#define FW_TLSTX_DATA_WR_CTXLOC_G(x) \
(((x) >> FW_TLSTX_DATA_WR_CTXLOC_S) & FW_TLSTX_DATA_WR_CTXLOC_M)
#define FW_TLSTX_DATA_WR_IVDSGL_S 29
#define FW_TLSTX_DATA_WR_IVDSGL_M 0x1
#define FW_TLSTX_DATA_WR_IVDSGL_V(x) ((x) << FW_TLSTX_DATA_WR_IVDSGL_S)
#define FW_TLSTX_DATA_WR_IVDSGL_G(x) \
(((x) >> FW_TLSTX_DATA_WR_IVDSGL_S) & FW_TLSTX_DATA_WR_IVDSGL_M)
#define FW_TLSTX_DATA_WR_IVDSGL_F FW_TLSTX_DATA_WR_IVDSGL_V(1U)
#define FW_TLSTX_DATA_WR_KEYSIZE_S 24
#define FW_TLSTX_DATA_WR_KEYSIZE_M 0x1f
#define FW_TLSTX_DATA_WR_KEYSIZE_V(x) ((x) << FW_TLSTX_DATA_WR_KEYSIZE_S)
#define FW_TLSTX_DATA_WR_KEYSIZE_G(x) \
(((x) >> FW_TLSTX_DATA_WR_KEYSIZE_S) & FW_TLSTX_DATA_WR_KEYSIZE_M)
#define FW_TLSTX_DATA_WR_NUMIVS_S 14
#define FW_TLSTX_DATA_WR_NUMIVS_M 0xff
#define FW_TLSTX_DATA_WR_NUMIVS_V(x) ((x) << FW_TLSTX_DATA_WR_NUMIVS_S)
#define FW_TLSTX_DATA_WR_NUMIVS_G(x) \
(((x) >> FW_TLSTX_DATA_WR_NUMIVS_S) & FW_TLSTX_DATA_WR_NUMIVS_M)
#define FW_TLSTX_DATA_WR_EXP_S 0
#define FW_TLSTX_DATA_WR_EXP_M 0x3fff
#define FW_TLSTX_DATA_WR_EXP_V(x) ((x) << FW_TLSTX_DATA_WR_EXP_S)
#define FW_TLSTX_DATA_WR_EXP_G(x) \
(((x) >> FW_TLSTX_DATA_WR_EXP_S) & FW_TLSTX_DATA_WR_EXP_M)
#define FW_TLSTX_DATA_WR_ADJUSTEDPLEN_S 1
#define FW_TLSTX_DATA_WR_ADJUSTEDPLEN_V(x) \
((x) << FW_TLSTX_DATA_WR_ADJUSTEDPLEN_S)
#define FW_TLSTX_DATA_WR_EXPINPLENMAX_S 4
#define FW_TLSTX_DATA_WR_EXPINPLENMAX_V(x) \
((x) << FW_TLSTX_DATA_WR_EXPINPLENMAX_S)
#define FW_TLSTX_DATA_WR_PDUSINPLENMAX_S 2
#define FW_TLSTX_DATA_WR_PDUSINPLENMAX_V(x) \
((x) << FW_TLSTX_DATA_WR_PDUSINPLENMAX_S)
#endif /* _T4FW_INTERFACE_H_ */
@@ -79,6 +79,7 @@ enum {
NETIF_F_RX_UDP_TUNNEL_PORT_BIT, /* Offload of RX port for UDP tunnels */
NETIF_F_GRO_HW_BIT, /* Hardware Generic receive offload */
NETIF_F_HW_TLS_RECORD_BIT, /* Offload TLS record */
/*
* Add your fresh new feature above and remember to update
@@ -145,6 +146,7 @@ enum {
#define NETIF_F_HW_ESP __NETIF_F(HW_ESP)
#define NETIF_F_HW_ESP_TX_CSUM __NETIF_F(HW_ESP_TX_CSUM)
#define NETIF_F_RX_UDP_TUNNEL_PORT __NETIF_F(RX_UDP_TUNNEL_PORT)
#define NETIF_F_HW_TLS_RECORD __NETIF_F(HW_TLS_RECORD)
#define for_each_netdev_feature(mask_addr, bit) \
for_each_set_bit(bit, (unsigned long *)mask_addr, NETDEV_FEATURE_COUNT)
......
@@ -56,6 +56,32 @@
#define TLS_RECORD_TYPE_DATA 0x17
#define TLS_AAD_SPACE_SIZE 13
#define TLS_DEVICE_NAME_MAX 32
/*
* This structure defines the routines for the Inline TLS driver.
* The following routines are optional and are set to a NULL
* pointer if not defined.
*
* @name: The name of the registered Inline TLS device
* @dev_list: Inline TLS device list
* int (*feature)(struct tls_device *device);
*     Called to return the Inline TLS driver's capabilities
*
* int (*hash)(struct tls_device *device, struct sock *sk);
*     Sets up the Inline TLS driver for a listening socket and
*     programs device-specific functionality as required
*
* void (*unhash)(struct tls_device *device, struct sock *sk);
*     Cleans up the listen state set by the Inline TLS driver
*/
struct tls_device {
char name[TLS_DEVICE_NAME_MAX];
struct list_head dev_list;
int (*feature)(struct tls_device *device);
int (*hash)(struct tls_device *device, struct sock *sk);
void (*unhash)(struct tls_device *device, struct sock *sk);
};
struct tls_sw_context {
struct crypto_aead *aead_send;
@@ -114,7 +140,7 @@ struct tls_context {
void *priv_ctx;
-	u8 conf:2;
+	u8 conf:3;
struct cipher_context tx;
struct cipher_context rx;
@@ -135,6 +161,8 @@ struct tls_context {
int (*getsockopt)(struct sock *sk, int level,
int optname, char __user *optval,
int __user *optlen);
int (*hash)(struct sock *sk);
void (*unhash)(struct sock *sk);
};
int wait_on_pending_writer(struct sock *sk, long *timeo);
@@ -283,5 +311,7 @@ static inline struct tls_offload_context *tls_offload_ctx(
int tls_proccess_cmsg(struct sock *sk, struct msghdr *msg,
unsigned char *record_type);
void tls_register_device(struct tls_device *device);
void tls_unregister_device(struct tls_device *device);
#endif /* _TLS_OFFLOAD_H */
@@ -108,6 +108,7 @@ static const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN]
[NETIF_F_HW_ESP_BIT] = "esp-hw-offload",
[NETIF_F_HW_ESP_TX_CSUM_BIT] = "esp-tx-csum-hw-offload",
[NETIF_F_RX_UDP_TUNNEL_PORT_BIT] = "rx-udp_tunnel-port-offload",
[NETIF_F_HW_TLS_RECORD_BIT] = "tls-hw-record",
};
static const char
......
@@ -332,6 +332,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
tcp_update_metrics(sk);
tcp_done(sk);
}
EXPORT_SYMBOL(tcp_time_wait);
void tcp_twsk_destructor(struct sock *sk)
{
......
@@ -38,6 +38,7 @@
#include <linux/highmem.h>
#include <linux/netdevice.h>
#include <linux/sched/signal.h>
#include <linux/inetdevice.h>
#include <net/tls.h>
@@ -56,11 +57,14 @@ enum {
TLS_SW_TX,
TLS_SW_RX,
TLS_SW_RXTX,
TLS_HW_RECORD,
TLS_NUM_CONFIG,
};
static struct proto *saved_tcpv6_prot;
static DEFINE_MUTEX(tcpv6_prot_mutex);
static LIST_HEAD(device_list);
static DEFINE_MUTEX(device_mutex);
static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG];
static struct proto_ops tls_sw_proto_ops;
@@ -241,8 +245,12 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
lock_sock(sk);
sk_proto_close = ctx->sk_proto_close;
if (ctx->conf == TLS_HW_RECORD)
goto skip_tx_cleanup;
if (ctx->conf == TLS_BASE) {
kfree(ctx);
ctx = NULL;
goto skip_tx_cleanup;
}
@@ -276,6 +284,11 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
skip_tx_cleanup:
release_sock(sk);
sk_proto_close(sk, timeout);
/* free ctx for TLS_HW_RECORD, used by tcp_set_state
* for sk->sk_prot->unhash [tls_hw_unhash]
*/
if (ctx && ctx->conf == TLS_HW_RECORD)
kfree(ctx);
}
static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,
@@ -493,6 +506,79 @@ static int tls_setsockopt(struct sock *sk, int level, int optname,
return do_tls_setsockopt(sk, optname, optval, optlen);
}
static struct tls_context *create_ctx(struct sock *sk)
{
struct inet_connection_sock *icsk = inet_csk(sk);
struct tls_context *ctx;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return NULL;
icsk->icsk_ulp_data = ctx;
return ctx;
}
static int tls_hw_prot(struct sock *sk)
{
struct tls_context *ctx;
struct tls_device *dev;
int rc = 0;
mutex_lock(&device_mutex);
list_for_each_entry(dev, &device_list, dev_list) {
if (dev->feature && dev->feature(dev)) {
ctx = create_ctx(sk);
if (!ctx)
goto out;
ctx->hash = sk->sk_prot->hash;
ctx->unhash = sk->sk_prot->unhash;
ctx->sk_proto_close = sk->sk_prot->close;
ctx->conf = TLS_HW_RECORD;
update_sk_prot(sk, ctx);
rc = 1;
break;
}
}
out:
mutex_unlock(&device_mutex);
return rc;
}
static void tls_hw_unhash(struct sock *sk)
{
struct tls_context *ctx = tls_get_ctx(sk);
struct tls_device *dev;
mutex_lock(&device_mutex);
list_for_each_entry(dev, &device_list, dev_list) {
if (dev->unhash)
dev->unhash(dev, sk);
}
mutex_unlock(&device_mutex);
ctx->unhash(sk);
}
static int tls_hw_hash(struct sock *sk)
{
struct tls_context *ctx = tls_get_ctx(sk);
struct tls_device *dev;
int err;
err = ctx->hash(sk);
mutex_lock(&device_mutex);
list_for_each_entry(dev, &device_list, dev_list) {
if (dev->hash)
err |= dev->hash(dev, sk);
}
mutex_unlock(&device_mutex);
if (err)
tls_hw_unhash(sk);
return err;
}
static void build_protos(struct proto *prot, struct proto *base)
{
prot[TLS_BASE] = *base;
@@ -511,15 +597,22 @@ static void build_protos(struct proto *prot, struct proto *base)
prot[TLS_SW_RXTX] = prot[TLS_SW_TX];
prot[TLS_SW_RXTX].recvmsg = tls_sw_recvmsg;
prot[TLS_SW_RXTX].close = tls_sk_proto_close;
prot[TLS_HW_RECORD] = *base;
prot[TLS_HW_RECORD].hash = tls_hw_hash;
prot[TLS_HW_RECORD].unhash = tls_hw_unhash;
prot[TLS_HW_RECORD].close = tls_sk_proto_close;
}
static int tls_init(struct sock *sk)
{
int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;
struct inet_connection_sock *icsk = inet_csk(sk);
struct tls_context *ctx;
int rc = 0;
if (tls_hw_prot(sk))
goto out;
/* The TLS ulp is currently supported only for TCP sockets
* in ESTABLISHED state.
* Supporting sockets in LISTEN state will require us
@@ -530,12 +623,11 @@ static int tls_init(struct sock *sk)
return -ENOTSUPP;
/* allocate tls context */
-	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	ctx = create_ctx(sk);
if (!ctx) {
rc = -ENOMEM;
goto out;
}
-	icsk->icsk_ulp_data = ctx;
ctx->setsockopt = sk->sk_prot->setsockopt;
ctx->getsockopt = sk->sk_prot->getsockopt;
ctx->sk_proto_close = sk->sk_prot->close;
@@ -557,6 +649,22 @@ static int tls_init(struct sock *sk)
return rc;
}
void tls_register_device(struct tls_device *device)
{
mutex_lock(&device_mutex);
list_add_tail(&device->dev_list, &device_list);
mutex_unlock(&device_mutex);
}
EXPORT_SYMBOL(tls_register_device);
void tls_unregister_device(struct tls_device *device)
{
mutex_lock(&device_mutex);
list_del(&device->dev_list);
mutex_unlock(&device_mutex);
}
EXPORT_SYMBOL(tls_unregister_device);
static struct tcp_ulp_ops tcp_tls_ulp_ops __read_mostly = {
.name = "tls",
.uid = TCP_ULP_TLS,
......