Commit 05ef8b97 authored by Linus Torvalds

Merge tag 'docs-5.6' of git://git.lwn.net/linux

Pull documentation updates from Jonathan Corbet:
 "It has been a relatively quiet cycle for documentation, but there's
  still a couple of things of note:

   - Conversion of the NFS documentation to RST

   - A new document on how to help with documentation (and a maintainer
     profile entry too)

  Plus the usual collection of typo fixes, etc"

* tag 'docs-5.6' of git://git.lwn.net/linux: (40 commits)
  docs: filesystems: add overlayfs to index.rst
  docs: usb: remove some broken references
  scripts/find-unused-docs: Fix massive false positives
  docs: nvdimm: use ReST notation for subsection
  zram: correct documentation about sysfs node of huge page writeback
  Documentation: zram: various fixes in zram.rst
  Add a maintainer entry profile for documentation
  Add a document on how to contribute to the documentation
  docs: Keep up with the location of NoUri
  Documentation: Call out example SYM_FUNC_* usage as x86-specific
  Documentation: nfs: fault_injection: convert to ReST
  Documentation: nfs: pnfs-scsi-server: convert to ReST
  Documentation: nfs: convert pnfs-block-server to ReST
  Documentation: nfs: idmapper: convert to ReST
  Documentation: convert nfsd-admin-interfaces to ReST
  Documentation: nfs-rdma: convert to ReST
  Documentation: nfsroot.rst: COSMETIC: refill a paragraph
  Documentation: nfsroot.txt: convert to ReST
  Documentation: convert nfs.txt to ReST
  Documentation: filesystems: convert vfat.txt to RST
  ...
parents 08a3ef8f 77ce1a47
========================================
zram: Compressed RAM based block devices
zram: Compressed RAM-based block devices
========================================
Introduction
============
The zram module creates RAM based block devices named /dev/zram<id>
The zram module creates RAM-based block devices named /dev/zram<id>
(<id> = 0, 1, ...). Pages written to these disks are compressed and stored
in memory itself. These disks allow very fast I/O and compression provides
good amounts of memory savings. Some of the usecases include /tmp storage,
use as swap disks, various caches under /var and maybe many more :)
good amounts of memory savings. Some of the use cases include /tmp storage,
use as swap disks, various caches under /var and maybe many more. :)
Statistics for individual zram devices are exported through sysfs nodes at
/sys/block/zram<id>/
......@@ -43,17 +43,17 @@ The list of possible return codes:
======== =============================================================
-EBUSY an attempt to modify an attribute that cannot be changed once
the device has been initialised. Please reset device first;
the device has been initialised. Please reset device first.
-ENOMEM zram was not able to allocate enough memory to fulfil your
needs;
needs.
-EINVAL invalid input has been provided.
======== =============================================================
If you use 'echo', the returned value that is changed by 'echo' utility,
If you use 'echo', the returned value is set by the 'echo' utility,
and, in the general case, something like the following should be used::
echo 3 > /sys/block/zram0/max_comp_streams
if [ $? -ne 0 ];
if [ $? -ne 0 ]; then
handle_error
fi
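
Building on the snippet above, a small wrapper makes the check reusable.
This is only a sketch; the helper name, device and value are illustrative::

	zram_set()
	{
		# write value $2 to sysfs attribute $1 and report failures
		echo "$2" > "$1" || echo "write to $1 failed" >&2
	}

	zram_set /sys/block/zram0/max_comp_streams 3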
......@@ -65,7 +65,8 @@ should suffice.
::
modprobe zram num_devices=4
This creates 4 devices: /dev/zram{0,1,2,3}
num_devices parameter is optional and tells zram how many devices should be
pre-created. Default: 1.
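
If the module loaded as above, the expected nodes can be listed; the
output shown is what one would expect for num_devices=4::

	ls /dev/zram*
	/dev/zram0  /dev/zram1  /dev/zram2  /dev/zram3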
......@@ -73,12 +74,12 @@ pre-created. Default: 1.
2) Set max number of compression streams
========================================
Regardless the value passed to this attribute, ZRAM will always
allocate multiple compression streams - one per online CPUs - thus
Regardless of the value passed to this attribute, ZRAM will always
allocate multiple compression streams - one per online CPU - thus
allowing several concurrent compression operations. The number of
allocated compression streams goes down when some of the CPUs
become offline. There is no single-compression-stream mode anymore,
unless you are running a UP system or has only 1 CPU online.
unless you are running a UP system or have only 1 CPU online.
To find out how many streams are currently available::
......@@ -89,7 +90,7 @@ To find out how many streams are currently available::
Using comp_algorithm device attribute one can see available and
currently selected (shown in square brackets) compression algorithms,
change selected compression algorithm (once the device is initialised
or change the selected compression algorithm (once the device is initialised
there is no way to change compression algorithm).
Examples::
......@@ -167,9 +168,9 @@ Examples::
zram provides a control interface, which enables dynamic (on-demand) device
addition and removal.
In order to add a new /dev/zramX device, perform read operation on hot_add
attribute. This will return either new device's device id (meaning that you
can use /dev/zram<id>) or error code.
In order to add a new /dev/zramX device, perform a read operation on the hot_add
attribute. This will return either the new device's device id (meaning that you
can use /dev/zram<id>) or an error code.
Example::
......@@ -186,8 +187,8 @@ execute::
Per-device statistics are exported as various nodes under /sys/block/zram<id>/
A brief description of exported device attributes. For more details please
read Documentation/ABI/testing/sysfs-block-zram.
A brief description of exported device attributes follows. For more details
please read Documentation/ABI/testing/sysfs-block-zram.
====================== ====== ===============================================
Name access description
......@@ -245,7 +246,7 @@ whitespace:
File /sys/block/zram<id>/mm_stat
The stat file represents device's mm statistics. It consists of a single
The mm_stat file represents the device's mm statistics. It consists of a single
line of text and contains the following stats separated by whitespace:
================ =============================================================
......@@ -261,7 +262,7 @@ line of text and contains the following stats separated by whitespace:
Unit: bytes
mem_limit the maximum amount of memory ZRAM can use to store
the compressed data
mem_used_max the maximum amount of memory zram have consumed to
mem_used_max the maximum amount of memory zram has consumed to
store the data
same_pages the number of same element filled pages written to this disk.
No memory is allocated for such pages.
......@@ -271,7 +272,7 @@ line of text and contains the following stats separated by whitespace:
File /sys/block/zram<id>/bd_stat
The stat file represents device's backing device statistics. It consists of
The bd_stat file represents a device's backing device statistics. It consists of
a single line of text and contains the following stats separated by whitespace:
============== =============================================================
......@@ -316,9 +317,9 @@ To use the feature, admin should set up backing device via::
echo /dev/sda5 > /sys/block/zramX/backing_dev
before disksize setting. It supports only partition at this moment.
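
A sketch of the full ordering (the partition name is illustrative; the
backing device must be set before disksize)::

	echo 1 > /sys/block/zram0/reset
	echo /dev/sda5 > /sys/block/zram0/backing_dev
	echo 1G > /sys/block/zram0/disksize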
If admin want to use incompressible page writeback, they could do via::
If admin wants to use incompressible page writeback, they could do so via::
echo huge > /sys/block/zramX/write
echo huge > /sys/block/zramX/writeback
To use idle page writeback, the user first needs to declare zram pages
as idle::
......@@ -326,7 +327,7 @@ as idle::
echo all > /sys/block/zramX/idle
From now on, any pages on zram are idle pages. The idle mark
will be removed until someone request access of the block.
will be removed when someone requests access to the block.
IOW, unless there is an access request, those pages remain idle pages.
Admin can request writeback of those idle pages at the right time via::
......@@ -341,16 +342,16 @@ to guarantee storage health for entire product life.
To overcome the concern, zram supports "writeback_limit" feature.
The "writeback_limit_enable"'s default value is 0 so that it doesn't limit
any writeback. IOW, if admin want to apply writeback budget, he should
any writeback. IOW, if admin wants to apply a writeback budget, he should
enable writeback_limit_enable via::
$ echo 1 > /sys/block/zramX/writeback_limit_enable
Once writeback_limit_enable is set, zram doesn't allow any writeback
until admin set the budget via /sys/block/zramX/writeback_limit.
until admin sets the budget via /sys/block/zramX/writeback_limit.
(If admin doesn't enable writeback_limit_enable, writeback_limit's value
assigned via /sys/block/zramX/writeback_limit is meaninless.)
assigned via /sys/block/zramX/writeback_limit is meaningless.)
If admin wants to limit writeback to 400M per day, he could do it
like below::
......@@ -361,13 +362,13 @@ like below::
/sys/block/zram0/writeback_limit.
$ echo 1 > /sys/block/zram0/writeback_limit_enable
If admin want to allow further write again once the bugdet is exausted,
If admin wants to allow further writes again once the budget is exhausted,
he could do it like below::
$ echo $((400<<MB_SHIFT>>4K_SHIFT)) > \
/sys/block/zram0/writeback_limit
If admin want to see remaining writeback budget since he set::
If admin wants to see the remaining writeback budget since it was last set::
$ cat /sys/block/zramX/writeback_limit
......@@ -375,12 +376,12 @@ If admin want to disable writeback limit, he could do::
$ echo 0 > /sys/block/zramX/writeback_limit_enable
The writeback_limit count will reset whenever you reset zram(e.g.,
The writeback_limit count will reset whenever you reset zram (e.g.,
system reboot, echo 1 > /sys/block/zramX/reset), so it is the user's job
to track how much writeback has happened since the last reset and to
allocate extra writeback budget accordingly in the next setting.
If admin want to measure writeback count in a certain period, he could
If admin wants to measure writeback count in a certain period, he could
read it via /sys/block/zram0/bd_stat's 3rd column.
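
For example, a rough sketch of measuring one day's writeback (assuming
the 3rd column of bd_stat as described, counted in 4K units)::

	before=$(awk '{print $3}' /sys/block/zram0/bd_stat)
	sleep 86400
	after=$(awk '{print $3}' /sys/block/zram0/bd_stat)
	echo "writeback during the period: $((after - before))"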
memory tracking
......
......@@ -76,6 +76,7 @@ configure specific aspects of kernel behavior to your liking.
device-mapper/index
efi-stub
ext4
nfs/index
gpio/index
highuid
hw_random
......
===================
NFS Fault Injection
===================
Fault Injection
===============
Fault injection is a method for forcing errors that may not normally occur, or
may be difficult to reproduce. Forcing these errors in a controlled environment
can help the developer find and fix bugs before their code is shipped in a
......
=============
NFS
=============
.. toctree::
:maxdepth: 1
nfs-client
nfsroot
nfs-rdma
nfsd-admin-interfaces
nfs-idmapper
pnfs-block-server
pnfs-scsi-server
fault_injection
==========
NFS Client
==========
The NFS client
==============
......@@ -59,10 +62,11 @@ The DNS resolver
NFSv4 allows for one server to refer the NFS client to data that has been
migrated onto another server by means of the special "fs_locations"
attribute. See
http://tools.ietf.org/html/rfc3530#section-6
and
http://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00
attribute. See `RFC3530 Section 6: Filesystem Migration and Replication`_ and
`Implementation Guide for Referrals in NFSv4`_.
.. _RFC3530 Section 6\: Filesystem Migration and Replication: http://tools.ietf.org/html/rfc3530#section-6
.. _Implementation Guide for Referrals in NFSv4: http://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00
The fs_locations information can take the form of either an ip address and
a path, or a DNS hostname and a path. The latter requires the NFS client to
......@@ -78,8 +82,8 @@ Assuming that the user has the 'rpc_pipefs' filesystem mounted in the usual
(2) If no valid entry exists, the helper script '/sbin/nfs_cache_getent'
(may be changed using the 'nfs.cache_getent' kernel boot parameter)
is run, with two arguments:
- the cache name, "dns_resolve"
- the hostname to resolve
- the cache name, "dns_resolve"
- the hostname to resolve
(3) After looking up the corresponding ip address, the helper script
writes the result into the rpc_pipefs pseudo-file
......@@ -94,43 +98,44 @@ Assuming that the user has the 'rpc_pipefs' filesystem mounted in the usual
script, and <ttl> is the 'time to live' of this cache entry (in
units of seconds).
Note: If <ip address> is invalid, say the string "0", then a negative
entry is created, which will cause the kernel to treat the hostname
as having no valid DNS translation.
.. note::
If <ip address> is invalid, say the string "0", then a negative
entry is created, which will cause the kernel to treat the hostname
as having no valid DNS translation.
A basic sample /sbin/nfs_cache_getent
=====================================
#!/bin/bash
#
ttl=600
#
cut=/usr/bin/cut
getent=/usr/bin/getent
rpc_pipefs=/var/lib/nfs/rpc_pipefs
#
die()
{
echo "Usage: $0 cache_name entry_name"
exit 1
}
[ $# -lt 2 ] && die
cachename="$1"
cache_path=${rpc_pipefs}/cache/${cachename}/channel
case "${cachename}" in
dns_resolve)
name="$2"
result="$(${getent} hosts ${name} | ${cut} -f1 -d\ )"
[ -z "${result}" ] && result="0"
;;
*)
die
;;
esac
echo "${result} ${name} ${ttl}" >${cache_path}
.. code-block:: sh
#!/bin/bash
#
ttl=600
#
cut=/usr/bin/cut
getent=/usr/bin/getent
rpc_pipefs=/var/lib/nfs/rpc_pipefs
#
die()
{
echo "Usage: $0 cache_name entry_name"
exit 1
}
[ $# -lt 2 ] && die
cachename="$1"
cache_path=${rpc_pipefs}/cache/${cachename}/channel
case "${cachename}" in
dns_resolve)
name="$2"
result="$(${getent} hosts ${name} | ${cut} -f1 -d\ )"
[ -z "${result}" ] && result="0"
;;
*)
die
;;
esac
echo "${result} ${name} ${ttl}" >${cache_path}
=============
NFS ID Mapper
=============
=========
ID Mapper
=========
Id mapper is used by NFS to translate user and group ids into names, and to
translate user and group names into ids. Part of this translation involves
performing an upcall to userspace to request the information. There are two
......@@ -20,22 +20,24 @@ legacy rpc.idmap daemon for the id mapping. This result will be stored
in a custom NFS idmap cache.
===========
Configuring
===========
The file /etc/request-key.conf will need to be modified so /sbin/request-key can
direct the upcall. The following line should be added:
#OP TYPE DESCRIPTION CALLOUT INFO PROGRAM ARG1 ARG2 ARG3 ...
#====== ======= =============== =============== ===============================
create id_resolver * * /usr/sbin/nfs.idmap %k %d 600
``#OP TYPE DESCRIPTION CALLOUT INFO PROGRAM ARG1 ARG2 ARG3 ...``
``#====== ======= =============== =============== ===============================``
``create id_resolver * * /usr/sbin/nfs.idmap %k %d 600``
This will direct all id_resolver requests to the program /usr/sbin/nfs.idmap.
The last parameter, 600, defines how many seconds into the future the key will
expire. This parameter is optional for /usr/sbin/nfs.idmap. When the timeout
is not specified, nfs.idmap will default to 600 seconds.
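
Once request-key is configured, a quick sanity check is to look for
cached id_resolver keys after an NFSv4 mount (the output format varies
by kernel version)::

	$ grep id_resolver /proc/keys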
id mapper uses for key descriptions:
id mapper uses for key descriptions::
uid: Find the UID for the given user
gid: Find the GID for the given group
user: Find the user name for the given UID
......@@ -45,23 +47,24 @@ You can handle any of these individually, rather than using the generic upcall
program. If you would like to use your own program for a uid lookup then you
would edit your request-key.conf so it looks similar to this:
#OP TYPE DESCRIPTION CALLOUT INFO PROGRAM ARG1 ARG2 ARG3 ...
#====== ======= =============== =============== ===============================
create id_resolver uid:* * /some/other/program %k %d 600
create id_resolver * * /usr/sbin/nfs.idmap %k %d 600
``#OP TYPE DESCRIPTION CALLOUT INFO PROGRAM ARG1 ARG2 ARG3 ...``
``#====== ======= =============== =============== ===============================``
``create id_resolver uid:* * /some/other/program %k %d 600``
``create id_resolver * * /usr/sbin/nfs.idmap %k %d 600``
Notice that the new line was added above the line for the generic program.
request-key will find the first matching line and corresponding program. In
this case, /some/other/program will handle all uid lookups and
/usr/sbin/nfs.idmap will handle gid, user, and group lookups.
See <file:Documentation/security/keys/request-key.rst> for more information
See Documentation/security/keys/request-key.rst for more information
about the request-key function.
=========
nfs.idmap
=========
nfs.idmap is designed to be called by request-key, and should not be run "by
hand". This program takes two arguments, a serialized key and a key
description. The serialized key is first converted into a key_serial_t, and
......
===================
Setting up NFS/RDMA
===================
:Author:
NetApp and Open Grid Computing (May 29, 2008)
.. warning::
This document is probably obsolete.
Overview
========
This document describes how to install and setup the Linux NFS/RDMA client
and server software.
The NFS/RDMA client was first included in Linux 2.6.24. The NFS/RDMA server
was first included in the following release, Linux 2.6.25.
In our testing, we have obtained excellent performance results (full 10Gbit
wire bandwidth at minimal client CPU) under many workloads. The code passes
the full Connectathon test suite and operates over both Infiniband and iWARP
RDMA adapters.
Getting Help
============
If you get stuck, you can ask questions on the
nfs-rdma-devel@lists.sourceforge.net mailing list.
Installation
============
These instructions are a step by step guide to building a machine for
use with NFS/RDMA.
- Install an RDMA device
Any device supported by the drivers in drivers/infiniband/hw is acceptable.
Testing has been performed using several Mellanox-based IB cards, the
Ammasso AMS1100 iWARP adapter, and the Chelsio cxgb3 iWARP adapter.
- Install a Linux distribution and tools
The first kernel release to contain both the NFS/RDMA client and server was
Linux 2.6.25. Therefore, a distribution compatible with this and subsequent
Linux kernel releases should be installed.
The procedures described in this document have been tested with
distributions from Red Hat's Fedora Project (http://fedora.redhat.com/).
- Install nfs-utils-1.1.2 or greater on the client
An NFS/RDMA mount point can be obtained by using the mount.nfs command in
nfs-utils-1.1.2 or greater (nfs-utils-1.1.1 was the first nfs-utils
version with support for NFS/RDMA mounts, but for various reasons we
recommend using nfs-utils-1.1.2 or greater). To see which version of
mount.nfs you are using, type:
.. code-block:: sh
$ /sbin/mount.nfs -V
If the version is less than 1.1.2 or the command does not exist,
you should install the latest version of nfs-utils.
Download the latest package from: http://www.kernel.org/pub/linux/utils/nfs
Uncompress the package and follow the installation instructions.
If you will not need the idmapper and gssd executables (you do not need
these to create an NFS/RDMA enabled mount command), the installation
process can be simplified by disabling these features when running
configure:
.. code-block:: sh
$ ./configure --disable-gss --disable-nfsv4
To build nfs-utils you will need the tcp_wrappers package installed. For
more information on this see the package's README and INSTALL files.
After building the nfs-utils package, there will be a mount.nfs binary in
the utils/mount directory. This binary can be used to initiate NFS v2, v3,
or v4 mounts. To initiate a v4 mount, the binary must be called
mount.nfs4. The standard technique is to create a symlink called
mount.nfs4 to mount.nfs.
This mount.nfs binary should be installed at /sbin/mount.nfs as follows:
.. code-block:: sh
$ sudo cp utils/mount/mount.nfs /sbin/mount.nfs
In this location, mount.nfs will be invoked automatically for NFS mounts
by the system mount command.
.. note::
mount.nfs and therefore nfs-utils-1.1.2 or greater is only needed
on the NFS client machine. You do not need this specific version of
nfs-utils on the server. Furthermore, only the mount.nfs command from
nfs-utils-1.1.2 is needed on the client.
- Install a Linux kernel with NFS/RDMA
The NFS/RDMA client and server are both included in the mainline Linux
kernel version 2.6.25 and later. This and other versions of the Linux
kernel can be found at: https://www.kernel.org/pub/linux/kernel/
Download the sources and place them in an appropriate location.
- Configure the RDMA stack
Make sure your kernel configuration has RDMA support enabled. Under
Device Drivers -> InfiniBand support, update the kernel configuration
to enable InfiniBand support [NOTE: the option name is misleading. Enabling
InfiniBand support is required for all RDMA devices (IB, iWARP, etc.)].
Enable the appropriate IB HCA support (mlx4, mthca, ehca, ipath, etc.) or
iWARP adapter support (amso, cxgb3, etc.).
If you are using InfiniBand, be sure to enable IP-over-InfiniBand support.
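
As a sketch for the Mellanox mthca plus IPoIB case used in the examples
below (symbol names will differ for other adapters), the resulting
configuration would contain::

	CONFIG_INFINIBAND=m
	CONFIG_INFINIBAND_MTHCA=m
	CONFIG_INFINIBAND_IPOIB=m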
- Configure the NFS client and server
Your kernel configuration must also have NFS file system support and/or
NFS server support enabled. These and other NFS related configuration
options can be found under File Systems -> Network File Systems.
- Build, install, reboot
The NFS/RDMA code will be enabled automatically if NFS and RDMA
are turned on. The NFS/RDMA client and server are configured via the hidden
SUNRPC_XPRT_RDMA config option that depends on SUNRPC and INFINIBAND. The
value of SUNRPC_XPRT_RDMA will be:
#. N if either SUNRPC or INFINIBAND are N, in this case the NFS/RDMA client
and server will not be built
#. M if both SUNRPC and INFINIBAND are on (M or Y) and at least one is M,
in this case the NFS/RDMA client and server will be built as modules
#. Y if both SUNRPC and INFINIBAND are Y, in this case the NFS/RDMA client
and server will be built into the kernel
Therefore, if you have followed the steps above and turned on NFS and RDMA,
the NFS/RDMA client and server will be built.
Build a new kernel, install it, boot it.
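
After booting the new kernel, one way to double-check that the option was
set as expected (the config file path varies by distribution):

.. code-block:: sh

    $ grep SUNRPC_XPRT_RDMA /boot/config-$(uname -r)
    CONFIG_SUNRPC_XPRT_RDMA=m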
Check RDMA and NFS Setup
========================
Before configuring the NFS/RDMA software, it is a good idea to test
your new kernel to ensure that the kernel is working correctly.
In particular, it is a good idea to verify that the RDMA stack
is functioning as expected and standard NFS over TCP/IP and/or UDP/IP
is working properly.
- Check RDMA Setup
If you built the RDMA components as modules, load them at
this time. For example, if you are using a Mellanox Tavor/Sinai/Arbel
card:
.. code-block:: sh
$ modprobe ib_mthca
$ modprobe ib_ipoib
If you are using InfiniBand, make sure there is a Subnet Manager (SM)
running on the network. If your IB switch has an embedded SM, you can
use it. Otherwise, you will need to run an SM, such as OpenSM, on one
of your end nodes.
If an SM is running on your network, you should see the following:
.. code-block:: sh
$ cat /sys/class/infiniband/driverX/ports/1/state
4: ACTIVE
where driverX is mthca0, ipath5, ehca3, etc.
To further test the InfiniBand software stack, use IPoIB (this
assumes you have two IB hosts named host1 and host2):
.. code-block:: sh
host1$ ip link set dev ib0 up
host1$ ip address add dev ib0 a.b.c.x
host2$ ip link set dev ib0 up
host2$ ip address add dev ib0 a.b.c.y
host1$ ping a.b.c.y
host2$ ping a.b.c.x
For other device types, follow the appropriate procedures.
- Check NFS Setup
For the NFS components enabled above (client and/or server),
test their functionality over standard Ethernet using TCP/IP or UDP/IP.
NFS/RDMA Setup
==============
We recommend that you use two machines, one to act as the client and
one to act as the server.
One time configuration:
-----------------------
- On the server system, configure the /etc/exports file and start the NFS/RDMA server.
Exports entries with the following formats have been tested::
/vol0 192.168.0.47(fsid=0,rw,async,insecure,no_root_squash)
/vol0 192.168.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)
The IP address(es) is(are) the client's IPoIB address for an InfiniBand
HCA or the client's iWARP address(es) for an RNIC.
.. note::
The "insecure" option must be used because the NFS/RDMA client does
not use a reserved port.
Each time a machine boots:
--------------------------
- Load and configure the RDMA drivers
For InfiniBand using a Mellanox adapter:
.. code-block:: sh
$ modprobe ib_mthca
$ modprobe ib_ipoib
$ ip li set dev ib0 up
$ ip addr add dev ib0 a.b.c.d
.. note::
Please use unique addresses for the client and server!
- Start the NFS server
If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
kernel config), load the RDMA transport module:
.. code-block:: sh
$ modprobe svcrdma
Regardless of how the server was built (module or built-in), start the
server:
.. code-block:: sh
$ /etc/init.d/nfs start
or
.. code-block:: sh
$ service nfs start
Instruct the server to listen on the RDMA transport:
.. code-block:: sh
$ echo rdma 20049 > /proc/fs/nfsd/portlist
- On the client system
If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
kernel config), load the RDMA client module:
.. code-block:: sh
$ modprobe xprtrdma
Regardless of how the client was built (module or built-in), use this
command to mount the NFS/RDMA server:
.. code-block:: sh
$ mount -o rdma,port=20049 <IPoIB-server-name-or-address>:/<export> /mnt
To verify that the mount is using RDMA, run "cat /proc/mounts" and check
the "proto" field for the given mount.
Congratulations! You're using NFS/RDMA!
==================================
Administrative interfaces for nfsd
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
==================================
Note that normally these interfaces are used only by the utilities in
nfs-utils.
......@@ -13,18 +14,16 @@ nfsd/threads.
Before doing that, NFSD can be told which sockets to listen on by
writing to nfsd/portlist; that write may be:
- an ascii-encoded file descriptor, which should refer to a
bound (and listening, for tcp) socket, or
- "transportname port", where transportname is currently either
"udp", "tcp", or "rdma".
- an ascii-encoded file descriptor, which should refer to a
bound (and listening, for tcp) socket, or
- "transportname port", where transportname is currently either
"udp", "tcp", or "rdma".
If nfsd is started without doing any of these, then it will create one
udp and one tcp listener at port 2049 (see nfsd_init_socks).
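
For example, assuming the nfsd filesystem is mounted at /proc/fs/nfsd as
in the examples elsewhere in this tree, an additional TCP listener on an
illustrative port could be requested with::

	echo tcp 2050 > /proc/fs/nfsd/portlist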
On startup, nfsd and lockd grace periods start.
nfsd is shut down by a write of 0 to nfsd/threads. All locks and state
are thrown away at that point.
On startup, nfsd and lockd grace periods start. nfsd is shut down by a write of
0 to nfsd/threads. All locks and state are thrown away at that point.
Between startup and shutdown, the number of threads may be adjusted up
or down by additional writes to nfsd/threads or by writes to
......@@ -34,7 +33,7 @@ For more detail about files under nfsd/ and what they control, see
fs/nfsd/nfsctl.c; most of them have detailed comments.
Implementation notes
^^^^^^^^^^^^^^^^^^^^
====================
Note that the rpc server requires the caller to serialize addition and
removal of listening sockets, and startup and shutdown of the server.
......
===================================
pNFS block layout server user guide
===================================
The Linux NFS server now supports the pNFS block layout extension. In this
case the NFS server acts as Metadata Server (MDS) for pNFS, which in addition
......@@ -22,16 +24,19 @@ If the nfsd server needs to fence a non-responding client it calls
/sbin/nfsd-recall-failed with the first argument set to the IP address of
the client, and the second argument set to the device node without the /dev
prefix for the file system to be fenced. Below is an example file that shows
how to translate the device into a serial number from SCSI EVPD 0x80:
how to translate the device into a serial number from SCSI EVPD 0x80::
    cat > /sbin/nfsd-recall-failed << EOF

.. code-block:: sh

    #!/bin/sh

    CLIENT="$1"
    DEV="/dev/$2"
    EVPD=`sg_inq --page=0x80 ${DEV} | \
        grep "Unit serial number:" | \
        awk -F ': ' '{print $2}'`

    echo "fencing client ${CLIENT} serial ${EVPD}" >> /var/log/pnfsd-fence.log
    EOF
==================================
pNFS SCSI layout server user guide
==================================
......
......@@ -73,10 +73,11 @@ The new macros are prefixed with the ``SYM_`` prefix and can be divided into
three main groups:
1. ``SYM_FUNC_*`` -- to annotate C-like functions. This means functions with
standard C calling conventions, i.e. the stack contains a return address at
the predefined place and a return from the function can happen in a
standard way. When frame pointers are enabled, save/restore of frame
pointer shall happen at the start/end of a function, respectively, too.
standard C calling conventions. For example, on x86, this means that the
stack contains a return address at the predefined place and a return from
the function can happen in a standard way. When frame pointers are enabled,
save/restore of frame pointer shall happen at the start/end of a function,
respectively, too.
Checking tools like ``objtool`` should ensure such marked functions conform
to these rules. The tools can also easily annotate these functions with
......
......@@ -47,7 +47,7 @@ Having a real iterator, and making biovecs immutable, has a number of
advantages:
* Before, iterating over bios was very awkward when you weren't processing
exactly one bvec at a time - for example, bio_copy_data() in fs/bio.c,
exactly one bvec at a time - for example, bio_copy_data() in block/bio.c,
which copies the contents of one bio into another. Because the biovecs
wouldn't necessarily be the same size, the old code was tricky and convoluted -
it had to walk two different bios at the same time, keeping both bi_idx and
......
This diff is collapsed.
......@@ -10,6 +10,8 @@ How to write kernel documentation
sphinx
kernel-doc
parse-headers
contributing
maintainer-profile
.. only:: subproject and html
......
.. SPDX-License-Identifier: GPL-2.0
Documentation subsystem maintainer entry profile
================================================
The documentation "subsystem" is the central coordinating point for the
kernel's documentation and associated infrastructure. It covers the
hierarchy under Documentation/ (with the exception of
Documentation/device-tree), various utilities under scripts/ and, at least
some of the time, LICENSES/.
It's worth noting, though, that the boundaries of this subsystem are rather
fuzzier than normal. Many other subsystem maintainers like to keep control
of portions of Documentation/, and many more freely apply changes there
when it is convenient. Beyond that, much of the kernel's documentation is
found in the source as kerneldoc comments; those are usually (but not
always) maintained by the relevant subsystem maintainer.
The mailing list for documentation is linux-doc@vger.kernel.org. Patches
should be made against the docs-next tree whenever possible.
Submit checklist addendum
-------------------------
When making documentation changes, you should actually build the
documentation and ensure that no new errors or warnings have been
introduced. Generating HTML documents and looking at the result will help
to avoid unsightly misunderstandings about how things will be rendered.
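
For example, the HTML build referred to above is normally run from the
top of the kernel tree with::

	$ make htmldocs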
Key cycle dates
---------------
Patches can be sent anytime, but response will be slower than usual during
the merge window. The docs tree tends to close late before the merge
window opens, since the risk of regressions from documentation patches is
low.
Review cadence
--------------
I am the sole maintainer for the documentation subsystem, and I am doing
the work on my own time, so the response to patches will occasionally be
slow. I try to always send out a notification when a patch is merged (or
when I decide that one cannot be). Do not hesitate to send a ping if you
have not heard back within a week of sending a patch.
......@@ -9,7 +9,7 @@ also be requested by userspace.
IN-KERNEL AUTOMOUNTING
======================
See section "Mount Traps" of Documentation/filesystems/autofs.txt
See section "Mount Traps" of Documentation/filesystems/autofs.rst
Then from userspace, you can just do something like:
......
......@@ -47,4 +47,6 @@ Documentation for filesystem implementations.
:maxdepth: 2
autofs
overlayfs
virtiofs
vfat
################################################################################
# #
# NFS/RDMA README #
# #
################################################################################
Author: NetApp and Open Grid Computing
Date: May 29, 2008
Table of Contents
~~~~~~~~~~~~~~~~~
- Overview
- Getting Help
- Installation
- Check RDMA and NFS Setup
- NFS/RDMA Setup
Overview
~~~~~~~~
This document describes how to install and setup the Linux NFS/RDMA client
and server software.
The NFS/RDMA client was first included in Linux 2.6.24. The NFS/RDMA server
was first included in the following release, Linux 2.6.25.
In our testing, we have obtained excellent performance results (full 10Gbit
wire bandwidth at minimal client CPU) under many workloads. The code passes
the full Connectathon test suite and operates over both Infiniband and iWARP
RDMA adapters.
Getting Help
~~~~~~~~~~~~
If you get stuck, you can ask questions on the
nfs-rdma-devel@lists.sourceforge.net
mailing list.
Installation
~~~~~~~~~~~~
These instructions are a step by step guide to building a machine for
use with NFS/RDMA.
- Install an RDMA device
Any device supported by the drivers in drivers/infiniband/hw is acceptable.
Testing has been performed using several Mellanox-based IB cards, the
Ammasso AMS1100 iWARP adapter, and the Chelsio cxgb3 iWARP adapter.
- Install a Linux distribution and tools
The first kernel release to contain both the NFS/RDMA client and server was
Linux 2.6.25 Therefore, a distribution compatible with this and subsequent
Linux kernel release should be installed.
The procedures described in this document have been tested with
distributions from Red Hat's Fedora Project (http://fedora.redhat.com/).
- Install nfs-utils-1.1.2 or greater on the client
An NFS/RDMA mount point can be obtained by using the mount.nfs command in
nfs-utils-1.1.2 or greater (nfs-utils-1.1.1 was the first nfs-utils
version with support for NFS/RDMA mounts, but for various reasons we
recommend using nfs-utils-1.1.2 or greater). To see which version of
mount.nfs you are using, type:
$ /sbin/mount.nfs -V
If the version is less than 1.1.2 or the command does not exist,
you should install the latest version of nfs-utils.
Download the latest package from:
http://www.kernel.org/pub/linux/utils/nfs
Uncompress the package and follow the installation instructions.
If you will not need the idmapper and gssd executables (you do not need
these to create an NFS/RDMA enabled mount command), the installation
process can be simplified by disabling these features when running
configure:
$ ./configure --disable-gss --disable-nfsv4
To build nfs-utils you will need the tcp_wrappers package installed. For
more information on this see the package's README and INSTALL files.
After building the nfs-utils package, there will be a mount.nfs binary in
the utils/mount directory. This binary can be used to initiate NFS v2, v3,
or v4 mounts. To initiate a v4 mount, the binary must be called
mount.nfs4. The standard technique is to create a symlink called
mount.nfs4 to mount.nfs.
This mount.nfs binary should be installed at /sbin/mount.nfs as follows:
$ sudo cp utils/mount/mount.nfs /sbin/mount.nfs
In this location, mount.nfs will be invoked automatically for NFS mounts
by the system mount command.
NOTE: mount.nfs and therefore nfs-utils-1.1.2 or greater is only needed
on the NFS client machine. You do not need this specific version of
nfs-utils on the server. Furthermore, only the mount.nfs command from
nfs-utils-1.1.2 is needed on the client.
- Install a Linux kernel with NFS/RDMA
The NFS/RDMA client and server are both included in the mainline Linux
kernel version 2.6.25 and later. This and other versions of the Linux
kernel can be found at:
https://www.kernel.org/pub/linux/kernel/
Download the sources and place them in an appropriate location.
- Configure the RDMA stack
Make sure your kernel configuration has RDMA support enabled. Under
Device Drivers -> InfiniBand support, update the kernel configuration
to enable InfiniBand support [NOTE: the option name is misleading. Enabling
InfiniBand support is required for all RDMA devices (IB, iWARP, etc.)].
Enable the appropriate IB HCA support (mlx4, mthca, ehca, ipath, etc.) or
iWARP adapter support (amso, cxgb3, etc.).
If you are using InfiniBand, be sure to enable IP-over-InfiniBand support.
- Configure the NFS client and server
Your kernel configuration must also have NFS file system support and/or
NFS server support enabled. These and other NFS related configuration
options can be found under File Systems -> Network File Systems.
- Build, install, reboot
The NFS/RDMA code will be enabled automatically if NFS and RDMA
are turned on. The NFS/RDMA client and server are configured via the hidden
SUNRPC_XPRT_RDMA config option that depends on SUNRPC and INFINIBAND. The
value of SUNRPC_XPRT_RDMA will be:
- N if either SUNRPC or INFINIBAND are N, in this case the NFS/RDMA client
and server will not be built
- M if both SUNRPC and INFINIBAND are on (M or Y) and at least one is M,
in this case the NFS/RDMA client and server will be built as modules
- Y if both SUNRPC and INFINIBAND are Y, in this case the NFS/RDMA client
and server will be built into the kernel
Therefore, if you have followed the steps above and turned no NFS and RDMA,
the NFS/RDMA client and server will be built.
Build a new kernel, install it, boot it.
Check RDMA and NFS Setup
~~~~~~~~~~~~~~~~~~~~~~~~
Before configuring the NFS/RDMA software, it is a good idea to test
your new kernel to ensure that the kernel is working correctly.
In particular, it is a good idea to verify that the RDMA stack
is functioning as expected and standard NFS over TCP/IP and/or UDP/IP
is working properly.
- Check RDMA Setup
If you built the RDMA components as modules, load them at
this time. For example, if you are using a Mellanox Tavor/Sinai/Arbel
card:
$ modprobe ib_mthca
$ modprobe ib_ipoib
If you are using InfiniBand, make sure there is a Subnet Manager (SM)
running on the network. If your IB switch has an embedded SM, you can
use it. Otherwise, you will need to run an SM, such as OpenSM, on one
of your end nodes.
If an SM is running on your network, you should see the following:
$ cat /sys/class/infiniband/driverX/ports/1/state
4: ACTIVE
where driverX is mthca0, ipath5, ehca3, etc.
To further test the InfiniBand software stack, use IPoIB (this
assumes you have two IB hosts named host1 and host2):
host1$ ip link set dev ib0 up
host1$ ip address add dev ib0 a.b.c.x
host2$ ip link set dev ib0 up
host2$ ip address add dev ib0 a.b.c.y
host1$ ping a.b.c.y
host2$ ping a.b.c.x
For other device types, follow the appropriate procedures.
- Check NFS Setup
For the NFS components enabled above (client and/or server),
test their functionality over standard Ethernet using TCP/IP or UDP/IP.
NFS/RDMA Setup
~~~~~~~~~~~~~~
We recommend that you use two machines, one to act as the client and
one to act as the server.
One time configuration:
- On the server system, configure the /etc/exports file and
start the NFS/RDMA server.
Exports entries with the following formats have been tested:
/vol0 192.168.0.47(fsid=0,rw,async,insecure,no_root_squash)
/vol0 192.168.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)
The IP address(es) is(are) the client's IPoIB address for an InfiniBand
HCA or the client's iWARP address(es) for an RNIC.
NOTE: The "insecure" option must be used because the NFS/RDMA client does
not use a reserved port.
Each time a machine boots:
- Load and configure the RDMA drivers
For InfiniBand using a Mellanox adapter:
$ modprobe ib_mthca
$ modprobe ib_ipoib
$ ip li set dev ib0 up
$ ip addr add dev ib0 a.b.c.d
NOTE: use unique addresses for the client and server
- Start the NFS server
If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
kernel config), load the RDMA transport module:
$ modprobe svcrdma
Regardless of how the server was built (module or built-in), start the
server:
$ /etc/init.d/nfs start
or
$ service nfs start
Instruct the server to listen on the RDMA transport:
$ echo rdma 20049 > /proc/fs/nfsd/portlist
- On the client system
If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
kernel config), load the RDMA client module:
$ modprobe xprtrdma.ko
Regardless of how the client was built (module or built-in), use this
command to mount the NFS/RDMA server:
$ mount -o rdma,port=20049 <IPoIB-server-name-or-address>:/<export> /mnt
To verify that the mount is using RDMA, run "cat /proc/mounts" and check
the "proto" field for the given mount.
Congratulations! You're using NFS/RDMA!
......@@ -601,7 +601,7 @@ Defined in ``include/linux/export.h``
This is the variant of `EXPORT_SYMBOL()` that allows specifying a symbol
namespace. Symbol Namespaces are documented in
``Documentation/kbuild/namespaces.rst``.
``Documentation/core-api/symbol-namespaces.rst``.
:c:func:`EXPORT_SYMBOL_NS_GPL()`
--------------------------------
......@@ -610,7 +610,7 @@ Defined in ``include/linux/export.h``
This is the variant of `EXPORT_SYMBOL_GPL()` that allows specifying a symbol
namespace. Symbol Namespaces are documented in
``Documentation/kbuild/namespaces.rst``.
``Documentation/core-api/symbol-namespaces.rst``.
Routines and Conventions
========================
......
......@@ -103,8 +103,7 @@ stat_interval
Number of seconds between statistics-related printk()s.
By default, locktorture will report stats every 60 seconds.
Setting the interval to zero causes the statistics to
be printed -only- when the module is unloaded, and this
is the default.
be printed -only- when the module is unloaded.
stutter
The length of time to run the test before pausing for this
......
......@@ -99,4 +99,5 @@ to do something different in the near future.
.. toctree::
:maxdepth: 1
../doc-guide/maintainer-profile
../nvdimm/maintainer-entry-profile
.. SPDX-License-Identifier: GPL-2.0+
====================
Xilinx SD-FEC Driver
====================
......
......@@ -33,7 +33,8 @@ Those tests need to be passed before the patches go upstream, but not
necessarily before initial posting. Contact the list if you need help
getting the test environment set up.
### ACPI Device Specific Methods (_DSM)
ACPI Device Specific Methods (_DSM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before patches enabling for a new _DSM family will be considered it must
be assigned a format-interface-code from the NVDIMM Sub-team of the ACPI
Specification Working Group. In general, the stance of the subsystem is
......
.. _embargoed_hardware_issues:
Embargoed hardware issues
=========================
......@@ -36,7 +38,10 @@ issue according to our documented process.
The list is encrypted and email to the list can be sent by either PGP or
S/MIME encrypted and must be signed with the reporter's PGP key or S/MIME
certificate. The list's PGP key and S/MIME certificate are available from
https://www.kernel.org/....
the following URLs:
- PGP: https://www.kernel.org/static/files/hardware-security.asc
- S/MIME: https://www.kernel.org/static/files/hardware-security.crt
While hardware security issues are often handled by the affected hardware
vendor, we welcome contact from researchers or individuals who have
......@@ -55,14 +60,14 @@ Operation of mailing-lists
^^^^^^^^^^^^^^^^^^^^^^^^^^
The encrypted mailing-lists which are used in our process are hosted on
Linux Foundation's IT infrastructure. By providing this service Linux
Foundation's director of IT Infrastructure security technically has the
ability to access the embargoed information, but is obliged to
confidentiality by his employment contract. Linux Foundation's director of
IT Infrastructure security is also responsible for the kernel.org
infrastructure.
The Linux Foundation's current director of IT Infrastructure security is
Linux Foundation's IT infrastructure. By providing this service, members
of Linux Foundation's IT operations personnel technically have the
ability to access the embargoed information, but are obliged to
confidentiality by their employment contract. Linux Foundation IT
personnel are also responsible for operating and managing the rest of
kernel.org infrastructure.
The Linux Foundation's current director of IT Project infrastructure is
Konstantin Ryabitsev.
......@@ -274,7 +279,7 @@ software decrypts the email and re-encrypts it individually for each
subscriber with the subscriber's PGP key or S/MIME certificate. Details
about the mailing-list software and the setup which is used to ensure the
security of the lists and protection of the data can be found here:
https://www.kernel.org/....
https://korg.wiki.kernel.org/userdoc/remail.
List keys
^^^^^^^^^
......
......@@ -22,7 +22,7 @@ The following 64-byte header is present in decompressed Linux kernel image::
u64 res2 = 0; /* Reserved */
u64 magic = 0x5643534952; /* Magic number, little endian, "RISCV" */
u32 magic2 = 0x05435352; /* Magic number 2, little endian, "RSC\x05" */
u32 res4; /* Reserved for PE COFF offset */
u32 res3; /* Reserved for PE COFF offset */
This header format is compliant with PE/COFF header and largely inspired from
ARM64 header. Thus, both ARM64 & RISC-V header can be combined into one common
......@@ -34,7 +34,7 @@ Notes
- This header can also be reused to support EFI stub for RISC-V in future. EFI
specification needs PE/COFF image header in the beginning of the kernel image
in order to load it as an EFI application. In order to support EFI stub,
code0 should be replaced with "MZ" magic string and res5(at offset 0x3c) should
code0 should be replaced with "MZ" magic string and res3(at offset 0x3c) should
point to the rest of the PE/COFF header.
- version field indicate header version number
......
......@@ -5,8 +5,13 @@
# has been done.
#
from docutils import nodes
import sphinx
from sphinx import addnodes
from sphinx.environment import NoUri
if sphinx.version_info[0] < 2 or \
   sphinx.version_info[0] == 2 and sphinx.version_info[1] < 1:
    from sphinx.environment import NoUri
else:
    from sphinx.errors import NoUri
import re
#
......
......@@ -95,7 +95,8 @@ of ftrace. Here is a list of some of the key files:
current_tracer:
This is used to set or display the current tracer
that is configured.
that is configured. Changing the current tracer clears
the ring buffer content as well as the "snapshot" buffer.
available_tracers:
......@@ -126,7 +127,8 @@ of ftrace. Here is a list of some of the key files:
This file holds the output of the trace in a human
readable format (described below). Note, tracing is temporarily
disabled when the file is open for reading. Once all readers
are closed, tracing is re-enabled.
are closed, tracing is re-enabled. Opening this file for
writing with the O_TRUNC flag clears the ring buffer content.
trace_pipe:
......@@ -185,7 +187,8 @@ of ftrace. Here is a list of some of the key files:
CPU buffer and not total size of all buffers. The
trace buffers are allocated in pages (blocks of memory
that the kernel uses for allocation, usually 4 KB in size).
If the last page allocated has room for more bytes
A few extra pages may be allocated to accommodate buffer management
meta-data. If the last page allocated has room for more bytes
than requested, the rest of the page will be used,
making the actual allocation bigger than requested or shown.
( Note, the size may not be a multiple of the page size
......@@ -235,7 +238,7 @@ of ftrace. Here is a list of some of the key files:
This interface also allows for commands to be used. See the
"Filter commands" section for more details.
As a speed up, since processing strings can't be quite expensive
As a speed up, since processing strings can be quite expensive
and requires a check of all functions registered to tracing, instead
an index can be written into this file. A number (starting with "1")
written will instead select the same corresponding at the line position
......@@ -382,7 +385,7 @@ of ftrace. Here is a list of some of the key files:
By default, 128 comms are saved (see "saved_cmdlines" above). To
increase or decrease the amount of comms that are cached, echo
in a the number of comms to cache, into this file.
the number of comms to cache into this file.
saved_tgids:
......@@ -490,6 +493,9 @@ of ftrace. Here is a list of some of the key files:
# echo global > trace_clock
Setting a clock clears the ring buffer content as well as the
"snapshot" buffer.
trace_marker:
This is a very useful file for synchronizing user space
......@@ -3324,7 +3330,7 @@ directories after it is created.
As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that the buffer and
events are agnostic from the main director, or from any other
events are agnostic from the main directory, or from any other
instances that are created.
The files in the new directory work just like the files with the
......
......@@ -37,7 +37,7 @@ commit_page - a pointer to the page with the last finished non-nested write.
cmpxchg - hardware-assisted atomic transaction that performs the following:
A = B iff previous A == C
A = B if previous A == C
R = cmpxchg(A, C, B) is saying that we replace A with B if and only if
current A is equal to C, and we put the old (current) A into R
......
......@@ -2413,7 +2413,7 @@ _않습니다_.
알고 있는, - inb() 나 writel() 과 같은 - 적절한 액세스 루틴을 통해 이루어져야만
합니다. 이것들은 대부분의 경우에는 명시적 메모리 배리어 와 함께 사용될 필요가
없습니다만, 완화된 메모리 액세스 속성으로 I/O 메모리 윈도우로의 참조를 위해
액세스 함수가 사용된다면 순서를 강제하기 위해 _madatory_ 메모리 배리어가
액세스 함수가 사용된다면 순서를 강제하기 위해 _mandatory_ 메모리 배리어가
필요합니다.
더 많은 정보를 위해선 Documentation/driver-api/device-io.rst 를 참고하십시오.
......@@ -2528,7 +2528,7 @@ I/O 액세스를 통한 주변장치와의 통신은 아키텍쳐와 기기에
이것들은 readX() 와 writeX() 랑 비슷하지만, 더 완화된 메모리 순서
보장을 제공합니다. 구체적으로, 이것들은 일반적 메모리 액세스나 delay()
루프 (예:앞의 2-5 항목) 에 대해 순서를 보장하지 않습니다만 디폴트 I/O
기능으로 매핑된 __iomem 포인터에 대해 동작할 때, 같은 CPU 쓰레드에 의
기능으로 매핑된 __iomem 포인터에 대해 동작할 때, 같은 CPU 쓰레드에 의
같은 주변장치로의 액세스에는 순서가 맞춰질 것이 보장됩니다.
(*) readsX(), writesX():
......
......@@ -31,6 +31,8 @@
development-process
email-clients
license-rules
kernel-enforcement-statement
kernel-driver-statement
其它大多数开发人员感兴趣的社区指南:
......@@ -43,6 +45,7 @@
stable-api-nonsense
stable-kernel-rules
management-style
embargoed-hardware-issues
这些是一些总体技术指南,由于缺乏更好的地方,现在已经放在这里
......
.. _cn_process_statement_driver:
.. include:: ../disclaimer-zh_CN.rst
:Original: :ref:`Documentation/process/kernel-driver-statement.rst <process_statement_driver>`
:Translator: Alex Shi <alex.shi@linux.alibaba.com>
内核驱动声明
------------
关于Linux内核模块的立场声明
===========================
我们,以下署名的Linux内核开发人员,认为任何封闭源Linux内核模块或驱动程序都是
有害的和不可取的。我们已经一再发现它们对Linux用户,企业和更大的Linux生态系统
有害。这样的模块否定了Linux开发模型的开放性,稳定性,灵活性和可维护性,并使
他们的用户无法使用Linux社区的专业知识。提供闭源内核模块的供应商迫使其客户
放弃Linux的主要优势或选择新的供应商。因此,为了充分利用开源所提供的成本节省和
共享支持优势,我们敦促供应商采取措施,以开源内核代码在Linux上为其客户提供支持。
我们只为自己说话,而不是我们今天可能会为之工作,过去或将来会为之工作的任何公司。
- Dave Airlie
- Nick Andrew
- Jens Axboe
- Ralf Baechle
- Felipe Balbi
- Ohad Ben-Cohen
- Muli Ben-Yehuda
- Jiri Benc
- Arnd Bergmann
- Thomas Bogendoerfer
- Vitaly Bordug
- James Bottomley
- Josh Boyer
- Neil Brown
- Mark Brown
- David Brownell
- Michael Buesch
- Franck Bui-Huu
- Adrian Bunk
- François Cami
- Ralph Campbell
- Luiz Fernando N. Capitulino
- Mauro Carvalho Chehab
- Denis Cheng
- Jonathan Corbet
- Glauber Costa
- Alan Cox
- Magnus Damm
- Ahmed S. Darwish
- Robert P. J. Day
- Hans de Goede
- Arnaldo Carvalho de Melo
- Helge Deller
- Jean Delvare
- Mathieu Desnoyers
- Sven-Thorsten Dietrich
- Alexey Dobriyan
- Daniel Drake
- Alex Dubov
- Randy Dunlap
- Michael Ellerman
- Pekka Enberg
- Jan Engelhardt
- Mark Fasheh
- J. Bruce Fields
- Larry Finger
- Jeremy Fitzhardinge
- Mike Frysinger
- Kumar Gala
- Robin Getz
- Liam Girdwood
- Jan-Benedict Glaw
- Thomas Gleixner
- Brice Goglin
- Cyrill Gorcunov
- Andy Gospodarek
- Thomas Graf
- Krzysztof Halasa
- Harvey Harrison
- Stephen Hemminger
- Michael Hennerich
- Tejun Heo
- Benjamin Herrenschmidt
- Kristian Høgsberg
- Henrique de Moraes Holschuh
- Marcel Holtmann
- Mike Isely
- Takashi Iwai
- Olof Johansson
- Dave Jones
- Jesper Juhl
- Matthias Kaehlcke
- Kenji Kaneshige
- Jan Kara
- Jeremy Kerr
- Russell King
- Olaf Kirch
- Roel Kluin
- Hans-Jürgen Koch
- Auke Kok
- Peter Korsgaard
- Jiri Kosina
- Aaro Koskinen
- Mariusz Kozlowski
- Greg Kroah-Hartman
- Michael Krufky
- Aneesh Kumar
- Clemens Ladisch
- Christoph Lameter
- Gunnar Larisch
- Anders Larsen
- Grant Likely
- John W. Linville
- Yinghai Lu
- Tony Luck
- Pavel Machek
- Matt Mackall
- Paul Mackerras
- Roland McGrath
- Patrick McHardy
- Kyle McMartin
- Paul Menage
- Thierry Merle
- Eric Miao
- Akinobu Mita
- Ingo Molnar
- James Morris
- Andrew Morton
- Paul Mundt
- Oleg Nesterov
- Luca Olivetti
- S.Çağlar Onur
- Pierre Ossman
- Keith Owens
- Venkatesh Pallipadi
- Nick Piggin
- Nicolas Pitre
- Evgeniy Polyakov
- Richard Purdie
- Mike Rapoport
- Sam Ravnborg
- Gerrit Renker
- Stefan Richter
- David Rientjes
- Luis R. Rodriguez
- Stefan Roese
- Francois Romieu
- Rami Rosen
- Stephen Rothwell
- Maciej W. Rozycki
- Mark Salyzyn
- Yoshinori Sato
- Deepak Saxena
- Holger Schurig
- Amit Shah
- Yoshihiro Shimoda
- Sergei Shtylyov
- Kay Sievers
- Sebastian Siewior
- Rik Snel
- Jes Sorensen
- Alexey Starikovskiy
- Alan Stern
- Timur Tabi
- Hirokazu Takata
- Eliezer Tamir
- Eugene Teo
- Doug Thompson
- FUJITA Tomonori
- Dmitry Torokhov
- Marcelo Tosatti
- Steven Toth
- Theodore Tso
- Matthias Urlichs
- Geert Uytterhoeven
- Arjan van de Ven
- Ivo van Doorn
- Rik van Riel
- Wim Van Sebroeck
- Hans Verkuil
- Horst H. von Brand
- Dmitri Vorobiev
- Anton Vorontsov
- Daniel Walker
- Johannes Weiner
- Harald Welte
- Matthew Wilcox
- Dan J. Williams
- Darrick J. Wong
- David Woodhouse
- Chris Wright
- Bryan Wu
- Rafael J. Wysocki
- Herbert Xu
- Vlad Yasevich
- Peter Zijlstra
- Bartlomiej Zolnierkiewicz
.. _cn_process_statement_kernel:
.. include:: ../disclaimer-zh_CN.rst
:Original: :ref:`Documentation/process/kernel-enforcement-statement.rst <process_statement_kernel>`
:Translator: Alex Shi <alex.shi@linux.alibaba.com>
Linux 内核执行声明
------------------
作为Linux内核的开发人员,我们对如何使用我们的软件以及如何实施软件许可证有着
浓厚的兴趣。遵守GPL-2.0的互惠共享义务对我们软件和社区的长期可持续性至关重要。
虽然有权强制执行对我们社区的贡献中的单独版权权益,但我们有共同的利益,即确保
个人强制执行行动的方式有利于我们的社区,不会对我们软件生态系统的健康和增长
产生意外的负面影响。为了阻止无益的执法行动,我们同意代表我们自己和我们版权
利益的任何继承人对Linux内核用户作出以下符合我们开发社区最大利益的承诺:
尽管有GPL-2.0的终止条款,我们同意,采用以下GPL-3.0条款作为我们许可证下的
附加许可,作为任何对许可证下权利的非防御性主张,这符合我们开发社区的最佳
利益。
但是,如果您停止所有违反本许可证的行为,则您从特定版权持有人处获得的
许可证将被恢复:(a)暂时恢复,除非版权持有人明确并最终终止您的许可证;
以及(b)永久恢复, 如果版权持有人未能在你终止违反后60天内以合理方式
通知您违反本许可证的行为,则永久恢复您的许可证。
此外,如果版权所有者以某种合理的方式通知您违反了本许可,这是您第一次
从该版权所有者处收到违反本许可的通知(对于任何作品),并且您在收到通知
后的30天内纠正违规行为。则您从特定版权所有者处获得的许可将永久恢复.
我们提供这些保证的目的是鼓励更多地使用该软件。我们希望公司和个人使用、修改和
分发此软件。我们希望以公开和透明的方式与用户合作,以消除我们对法规遵从性或强制
执行的任何不确定性,这些不确定性可能会限制我们软件的采用。我们将法律行动视为
最后手段,只有在其他社区努力未能解决这一问题时才采取行动。
最后,一旦一个不合规问题得到解决,我们希望用户会感到欢迎,加入我们为之努力的
这个项目。共同努力,我们会更强大。
除了下面提到的以外,我们只为自己说话,而不是为今天、过去或将来可能为之工作的
任何公司说话。
- Laura Abbott
- Bjorn Andersson (Linaro)
- Andrea Arcangeli
- Neil Armstrong
- Jens Axboe
- Pablo Neira Ayuso
- Khalid Aziz
- Ralf Baechle
- Felipe Balbi
- Arnd Bergmann
- Ard Biesheuvel
- Tim Bird
- Paolo Bonzini
- Christian Borntraeger
- Mark Brown (Linaro)
- Paul Burton
- Javier Martinez Canillas
- Rob Clark
- Kees Cook (Google)
- Jonathan Corbet
- Dennis Dalessandro
- Vivien Didelot (Savoir-faire Linux)
- Hans de Goede
- Mel Gorman (SUSE)
- Sven Eckelmann
- Alex Elder (Linaro)
- Fabio Estevam
- Larry Finger
- Bhumika Goyal
- Andy Gross
- Juergen Gross
- Shawn Guo
- Ulf Hansson
- Stephen Hemminger (Microsoft)
- Tejun Heo
- Rob Herring
- Masami Hiramatsu
- Michal Hocko
- Simon Horman
- Johan Hovold (Hovold Consulting AB)
- Christophe JAILLET
- Olof Johansson
- Lee Jones (Linaro)
- Heiner Kallweit
- Srinivas Kandagatla
- Jan Kara
- Shuah Khan (Samsung)
- David Kershner
- Jaegeuk Kim
- Namhyung Kim
- Colin Ian King
- Jeff Kirsher
- Greg Kroah-Hartman (Linux Foundation)
- Christian König
- Vinod Koul
- Krzysztof Kozlowski
- Viresh Kumar
- Aneesh Kumar K.V
- Julia Lawall
- Doug Ledford
- Chuck Lever (Oracle)
- Daniel Lezcano
- Shaohua Li
- Xin Long
- Tony Luck
- Catalin Marinas (Arm Ltd)
- Mike Marshall
- Chris Mason
- Paul E. McKenney
- Arnaldo Carvalho de Melo
- David S. Miller
- Ingo Molnar
- Kuninori Morimoto
- Trond Myklebust
- Martin K. Petersen (Oracle)
- Borislav Petkov
- Jiri Pirko
- Josh Poimboeuf
- Sebastian Reichel (Collabora)
- Guenter Roeck
- Joerg Roedel
- Leon Romanovsky
- Steven Rostedt (VMware)
- Frank Rowand
- Ivan Safonov
- Anna Schumaker
- Jes Sorensen
- K.Y. Srinivasan
- David Sterba (SUSE)
- Heiko Stuebner
- Jiri Kosina (SUSE)
- Willy Tarreau
- Dmitry Torokhov
- Linus Torvalds
- Thierry Reding
- Rik van Riel
- Luis R. Rodriguez
- Geert Uytterhoeven (Glider bvba)
- Eduardo Valentin (Amazon.com)
- Daniel Vetter
- Linus Walleij
- Richard Weinberger
- Dan Williams
- Rafael J. Wysocki
- Arvind Yadav
- Masahiro Yamada
- Wei Yongjun
- Lv Zheng
- Marc Zyngier (Arm Ltd)
......@@ -22,11 +22,9 @@ USB support
misc_usbsevseg
mtouchusb
ohci
rio
usbip_protocol
usbmon
usb-serial
wusb-design-overview
usb-help
text_files
......
......@@ -16,12 +16,6 @@ USB devfs drop permissions source
.. literalinclude:: usbdevfs-drop-permissions.c
:language: c
WUSB command line script to manipulate auth credentials
-------------------------------------------------------
.. literalinclude:: wusb-cbaf
:language: shell
Credits
-------
......
......@@ -44,7 +44,7 @@ that the ID used should be same for both master and slave driver loading.
e.g::
insmod omap_hdq.ko W1_ID=2
inamod w1_bq27000.ko F_ID=2
insmod w1_bq27000.ko F_ID=2
The driver also supports 1-wire mode. In this mode, there is no need to
pass slave ID as parameter. The driver will auto-detect slaves connected
......
......@@ -69,11 +69,12 @@ Protocol 2.13 (Kernel 3.14) Support 32- and 64-bit flags being set in
xloadflags to support booting a 64-bit kernel from 32-bit
EFI
Protocol 2.14: BURNT BY INCORRECT COMMIT ae7e1238e68f2a472a125673ab506d49158c1889
Protocol 2.14 BURNT BY INCORRECT COMMIT
ae7e1238e68f2a472a125673ab506d49158c1889
(x86/boot: Add ACPI RSDP address to setup_header)
DO NOT USE!!! ASSUME SAME AS 2.13.
Protocol 2.15: (Kernel 5.5) Added the kernel_info and kernel_info.setup_type_max.
Protocol 2.15 (Kernel 5.5) Added the kernel_info and kernel_info.setup_type_max.
============= ============================================================
.. note::
......@@ -834,14 +835,14 @@ Protocol: 2.09+
chunks of memory are occupied by kernel data.
Thus setup_indirect struct and SETUP_INDIRECT type were introduced in
protocol 2.15.
protocol 2.15::
struct setup_indirect {
__u32 type;
__u32 reserved; /* Reserved, must be set to zero. */
__u64 len;
__u64 addr;
};
struct setup_indirect {
__u32 type;
__u32 reserved; /* Reserved, must be set to zero. */
__u64 len;
__u64 addr;
};
The type member is a SETUP_INDIRECT | SETUP_* type. However, it cannot be
SETUP_INDIRECT itself since making the setup_indirect a tree structure
......@@ -849,19 +850,19 @@ Protocol: 2.09+
and stack space can be limited in boot contexts.
Let's give an example of how to point to SETUP_E820_EXT data using setup_indirect.
In this case setup_data and setup_indirect will look like this:
struct setup_data {
__u64 next = 0 or <addr_of_next_setup_data_struct>;
__u32 type = SETUP_INDIRECT;
__u32 len = sizeof(setup_data);
__u8 data[sizeof(setup_indirect)] = struct setup_indirect {
__u32 type = SETUP_INDIRECT | SETUP_E820_EXT;
__u32 reserved = 0;
__u64 len = <len_of_SETUP_E820_EXT_data>;
__u64 addr = <addr_of_SETUP_E820_EXT_data>;
In this case setup_data and setup_indirect will look like this::
struct setup_data {
__u64 next = 0 or <addr_of_next_setup_data_struct>;
__u32 type = SETUP_INDIRECT;
__u32 len = sizeof(setup_data);
__u8 data[sizeof(setup_indirect)] = struct setup_indirect {
__u32 type = SETUP_INDIRECT | SETUP_E820_EXT;
__u32 reserved = 0;
__u64 len = <len_of_SETUP_E820_EXT_data>;
__u64 addr = <addr_of_SETUP_E820_EXT_data>;
}
}
}
.. note::
SETUP_INDIRECT | SETUP_NONE objects cannot be properly distinguished
......@@ -964,7 +965,7 @@ expected to copy into a setup_data chunk.
All kernel_info data should be part of this structure. Fixed size data have to
be put before kernel_info_var_len_data label. Variable size data have to be put
after kernel_info_var_len_data label. Each chunk of variable size data has to
be prefixed with header/magic and its size, e.g.:
be prefixed with header/magic and its size, e.g.::
kernel_info:
.ascii "LToP" /* Header, Linux top (structure). */
......
.. SPDX-License-Identifier: GPL-2.0
================
Memory Managment
================
=================
Memory Management
=================
Complete virtual memory map with 4-level page tables
====================================================
......
......@@ -17511,7 +17511,7 @@ F: drivers/mtd/nand/raw/vf610_nfc.c
VFAT/FAT/MSDOS FILESYSTEM
M: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
S: Maintained
F: Documentation/filesystems/vfat.txt
F: Documentation/filesystems/vfat.rst
F: fs/fat/
VFIO DRIVER
......
......@@ -42,7 +42,7 @@
* @res2: reserved
* @magic: Magic number (RISC-V specific; deprecated)
* @magic2: Magic number 2 (to match the ARM64 'magic' field pos)
* @res4: reserved (will be used for PE COFF offset)
* @res3: reserved (will be used for PE COFF offset)
*
* The intention is for this header format to be shared between multiple
* architectures to avoid a proliferation of image header formats.
......@@ -59,7 +59,7 @@ struct riscv_image_header {
u64 res2;
u64 magic;
u32 magic2;
u32 res4;
u32 res3;
};
#endif /* __ASSEMBLY__ */
#endif /* _ASM_RISCV_IMAGE_H */
......@@ -54,7 +54,7 @@ for file in `find $1 -name '*.c'`; do
if [[ ${FILES_INCLUDED[$file]+_} ]]; then
continue;
fi
str=$(scripts/kernel-doc -text -export "$file" 2>/dev/null)
str=$(scripts/kernel-doc -export "$file" 2>/dev/null)
if [[ -n "$str" ]]; then
echo "$file"
fi
......