Diffstat (limited to 'Documentation/filesystems/nfs')
-rw-r--r--  Documentation/filesystems/nfs/client-identifier.rst | 216
-rw-r--r--  Documentation/filesystems/nfs/exporting.rst | 94
-rw-r--r--  Documentation/filesystems/nfs/fault_injection.txt | 69
-rw-r--r--  Documentation/filesystems/nfs/idmapper.txt | 75
-rw-r--r--  Documentation/filesystems/nfs/index.rst | 16
-rw-r--r--  Documentation/filesystems/nfs/knfsd-stats.rst (renamed from Documentation/filesystems/nfs/knfsd-stats.txt) | 17
-rw-r--r--  Documentation/filesystems/nfs/nfs-rdma.txt | 274
-rw-r--r--  Documentation/filesystems/nfs/nfs.txt | 136
-rw-r--r--  Documentation/filesystems/nfs/nfs41-server.rst | 256
-rw-r--r--  Documentation/filesystems/nfs/nfs41-server.txt | 173
-rw-r--r--  Documentation/filesystems/nfs/nfsd-admin-interfaces.txt | 41
-rw-r--r--  Documentation/filesystems/nfs/nfsroot.txt | 355
-rw-r--r--  Documentation/filesystems/nfs/pnfs-block-server.txt | 37
-rw-r--r--  Documentation/filesystems/nfs/pnfs-scsi-server.txt | 23
-rw-r--r--  Documentation/filesystems/nfs/pnfs.rst (renamed from Documentation/filesystems/nfs/pnfs.txt) | 25
-rw-r--r--  Documentation/filesystems/nfs/reexport.rst | 113
-rw-r--r--  Documentation/filesystems/nfs/rpc-cache.rst (renamed from Documentation/filesystems/nfs/rpc-cache.txt) | 136
-rw-r--r--  Documentation/filesystems/nfs/rpc-server-gss.rst (renamed from Documentation/filesystems/nfs/rpc-server-gss.txt) | 30
18 files changed, 805 insertions(+), 1281 deletions(-)
diff --git a/Documentation/filesystems/nfs/client-identifier.rst b/Documentation/filesystems/nfs/client-identifier.rst
new file mode 100644
index 000000000000..4804441155f5
--- /dev/null
+++ b/Documentation/filesystems/nfs/client-identifier.rst
@@ -0,0 +1,216 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=======================
+NFSv4 client identifier
+=======================
+
+This document explains how the NFSv4 protocol identifies client
+instances in order to maintain file open and lock state during
+system restarts. A special identifier and principal are maintained
+on each client. These can be set by administrators, scripts
+provided by site administrators, or tools provided by Linux
+distributors.
+
+There are risks if a client's NFSv4 identifier and its principal
+are not chosen carefully.
+
+
+Introduction
+------------
+
+The NFSv4 protocol uses "lease-based file locking". Leases help
+NFSv4 servers provide file lock guarantees and manage their
+resources.
+
+Simply put, an NFSv4 server creates a lease for each NFSv4 client.
+The server collects each client's file open and lock state under
+the lease for that client.
+
+The client is responsible for periodically renewing its leases.
+While a lease remains valid, the server holding that lease
+guarantees the file locks the client has created remain in place.
+
+If a client stops renewing its lease (for example, if it crashes),
+the NFSv4 protocol allows the server to remove the client's open
+and lock state after a certain period of time. When a client
+restarts, it indicates to servers that open and lock state
+associated with its previous leases is no longer valid and can be
+destroyed immediately.
+
+In addition, each NFSv4 server manages a persistent list of client
+leases. When the server restarts and clients attempt to recover
+their state, the server uses this list to distinguish amongst
+clients that held state before the server restarted and clients
+sending fresh OPEN and LOCK requests. This enables file locks to
+persist safely across server restarts.
+
+NFSv4 client identifiers
+------------------------
+
+Each NFSv4 client presents an identifier to NFSv4 servers so that
+they can associate the client with its lease. Each client's
+identifier consists of two elements:
+
+ - co_ownerid: An arbitrary but fixed string.
+
+ - boot verifier: A 64-bit incarnation verifier that enables a
+ server to distinguish successive boot epochs of the same client.
+
+The NFSv4.0 specification refers to these two items as an
+"nfs_client_id4". The NFSv4.1 specification refers to these two
+items as a "client_owner4".
+
+NFSv4 servers tie this identifier to the principal and security
+flavor that the client used when presenting it. Servers use this
+principal to authorize subsequent lease modification operations
+sent by the client. Effectively this principal is a third element of
+the identifier.
+
+As part of the identity presented to servers, a good
+"co_ownerid" string has several important properties:
+
+ - The "co_ownerid" string identifies the client during reboot
+ recovery, therefore the string is persistent across client
+ reboots.
+ - The "co_ownerid" string helps servers distinguish the client
+ from others, therefore the string is globally unique. Note
+ that there is no central authority that assigns "co_ownerid"
+ strings.
+ - Because it often appears on the network in the clear, the
+ "co_ownerid" string does not reveal private information about
+ the client itself.
+ - The content of the "co_ownerid" string is set and unchanging
+ before the client attempts NFSv4 mounts after a restart.
+ - The NFSv4 protocol places a 1024-byte limit on the size of the
+ "co_ownerid" string.
+
+Protecting NFSv4 lease state
+----------------------------
+
+NFSv4 servers utilize the "client_owner4" as described above to
+assign a unique lease to each client. Under this scheme, there are
+circumstances where clients can interfere with each other. This is
+referred to as "lease stealing".
+
+If distinct clients present the same "co_ownerid" string and use
+the same principal (for example, AUTH_SYS and UID 0), a server is
+unable to tell that the clients are not the same. Each distinct
+client presents a different boot verifier, so it appears to the
+server as if there is one client that is rebooting frequently.
+Neither client can maintain open or lock state in this scenario.
+
+If distinct clients present the same "co_ownerid" string and use
+distinct principals, the server is likely to allow the first client
+to operate normally but reject subsequent clients with the same
+"co_ownerid" string.
+
+If a client's "co_ownerid" string or principal are not stable,
+state recovery after a server or client reboot is not guaranteed.
+If a client unexpectedly restarts but presents a different
+"co_ownerid" string or principal to the server, the server orphans
+the client's previous open and lock state. This blocks access to
+locked files until the server removes the orphaned state.
+
+If the server restarts and a client presents a changed "co_ownerid"
+string or principal to the server, the server will not allow the
+client to reclaim its open and lock state, and may give those locks
+to other clients in the meantime. This is referred to as "lock
+stealing".
+
+Lease stealing and lock stealing increase the potential for denial
+of service and in rare cases even data corruption.
+
+Selecting an appropriate client identifier
+------------------------------------------
+
+By default, the Linux NFSv4 client implementation constructs its
+"co_ownerid" string starting with the words "Linux NFS" followed by
+the client's UTS node name (the same node name, incidentally, that
+is used as the "machine name" in an AUTH_SYS credential). In small
+deployments, this construction is usually adequate. Often, however,
+the node name by itself is not adequately unique, and can change
+unexpectedly. Problematic situations include:
+
+ - NFS-root (diskless) clients, where the local DHCP server (or
+ equivalent) does not provide a unique host name.
+
+ - "Containers" within a single Linux host. If each container has
+ a separate network namespace, but does not use the UTS namespace
+ to provide a unique host name, then there can be multiple NFS
+ client instances with the same host name.
+
+ - Clients across multiple administrative domains that access a
+ common NFS server. If hostnames are not assigned centrally
+ then uniqueness cannot be guaranteed unless a domain name is
+ included in the hostname.
+
+Linux provides two mechanisms to add uniqueness to its "co_ownerid"
+string:
+
+ nfs.nfs4_unique_id
+ This module parameter can set an arbitrary uniquifier string
+ via the kernel command line, or when the "nfs" module is
+ loaded.
+
+ /sys/fs/nfs/net/nfs_client/identifier
+ This virtual file, available since Linux 5.3, is local to the
+ network namespace in which it is accessed and so can provide
+ distinction between network namespaces (containers) when the
+ hostname remains uniform.
+
+Note that this file is empty on name-space creation. If the
+container system has access to some sort of per-container identity
+then that uniquifier can be used. For example, a uniquifier might
+be formed at boot using the container's internal identifier:
+
+ sha256sum /etc/machine-id | awk '{print $1}' \\
+ > /sys/fs/nfs/net/nfs_client/identifier
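+
+For the module parameter mechanism, a persistent uniquifier might be
+recorded once at installation time, for example (the file name and the
+use of uuidgen are only illustrative)::
+
+  echo "options nfs nfs4_unique_id=$(uuidgen)" \
+      > /etc/modprobe.d/nfs-client-id.conf
+
+The same value can instead be passed on the kernel command line as
+nfs.nfs4_unique_id=<uuid>.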
+
+Security considerations
+-----------------------
+
+The use of cryptographic security for lease management operations
+is strongly encouraged.
+
+If NFS with Kerberos is not configured, a Linux NFSv4 client uses
+AUTH_SYS and UID 0 as the principal part of its client identity.
+This configuration is not only insecure, it increases the risk of
+lease and lock stealing. However, it might be the only choice for
+client configurations that have no local persistent storage.
+"co_ownerid" string uniqueness and persistence is critical in this
+case.
+
+When a Kerberos keytab is present on a Linux NFS client, the client
+attempts to use one of the principals in that keytab when
+identifying itself to servers. The "sec=" mount option does not
+control this behavior. Alternately, a single-user client with a
+Kerberos principal can use that principal in place of the client's
+host principal.
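+
+For example, the principals available in a client's keytab can be
+listed with the following command (the keytab path shown is the
+conventional default and may differ)::
+
+  klist -k /etc/krb5.keytab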
+
+Using Kerberos for this purpose enables the client and server to
+use the same lease for operations covered by all "sec=" settings.
+Additionally, the Linux NFS client uses the RPCSEC_GSS security
+flavor with Kerberos and the integrity QOS to prevent in-transit
+modification of lease modification requests.
+
+Additional notes
+----------------
+
+The Linux NFSv4 client establishes a single lease on each NFSv4
+server it accesses. NFSv4 mounts from a Linux NFSv4 client of a
+particular server then share that lease.
+
+Once a client establishes open and lock state, the NFSv4 protocol
+enables lease state to transition to other servers, following data
+that has been migrated. This hides data migration completely from
+running applications. The Linux NFSv4 client facilitates state
+migration by presenting the same "client_owner4" to all servers it
+encounters.
+
+See Also
+--------
+
+ - nfs(5)
+ - kerberos(7)
+ - RFC 7530 for the NFSv4.0 specification
+ - RFC 8881 for the NFSv4.1 specification
diff --git a/Documentation/filesystems/nfs/exporting.rst b/Documentation/filesystems/nfs/exporting.rst
index 33d588a01ace..f04ce1215a03 100644
--- a/Documentation/filesystems/nfs/exporting.rst
+++ b/Documentation/filesystems/nfs/exporting.rst
@@ -122,12 +122,9 @@ are exportable by setting the s_export_op field in the struct
super_block. This field must point to a "struct export_operations"
struct which has the following members:
- encode_fh (optional)
- Takes a dentry and creates a filehandle fragment which can later be used
- to find or create a dentry for the same object. The default
- implementation creates a filehandle fragment that encodes a 32bit inode
- and generation number for the inode encoded, and if necessary the
- same information for the parent.
+ encode_fh (mandatory)
+ Takes a dentry and creates a filehandle fragment which may later be used
+ to find or create a dentry for the same object.
fh_to_dentry (mandatory)
Given a filehandle fragment, this should find the implied object and
@@ -154,6 +151,11 @@ struct which has the following members:
to find potential names, and matches inode numbers to find the correct
match.
+ flags
+ Some filesystems may need to be handled differently than others. The
+ export_operations struct also includes a flags field that allows the
+ filesystem to communicate such information to nfsd. See the Export
+ Operations Flags section below for more explanation.
A filehandle fragment consists of an array of 1 or more 4byte words,
together with a one byte "type".
@@ -163,3 +165,83 @@ generated by encode_fh, in which case it will have been padded with
nuls. Rather, the encode_fh routine should choose a "type" which
indicates the decode_fh how much of the filehandle is valid, and how
it should be interpreted.
+
+Export Operations Flags
+-----------------------
+In addition to the operation vector pointers, struct export_operations also
+contains a "flags" field that allows the filesystem to communicate to nfsd
+that it may want to do things differently when dealing with it. The
+following flags are defined:
+
+ EXPORT_OP_NOWCC - disable NFSv3 WCC attributes on this filesystem
+ RFC 1813 recommends that servers always send weak cache consistency
+ (WCC) data to the client after each operation. The server should
+ atomically collect attributes about the inode, do an operation on it,
+ and then collect the attributes afterward. This allows the client to
+ skip issuing GETATTRs in some situations but means that the server
+ is calling vfs_getattr for almost all RPCs. On some filesystems
+ (particularly those that are clustered or networked) this is expensive
+ and atomicity is difficult to guarantee. This flag indicates to nfsd
+ that it should skip providing WCC attributes to the client in NFSv3
+ replies when doing operations on this filesystem. Consider enabling
+ this on filesystems that have an expensive ->getattr inode operation,
+ or when atomicity between pre and post operation attribute collection
+ is impossible to guarantee.
+
+ EXPORT_OP_NOSUBTREECHK - disallow subtree checking on this fs
+ Many NFS operations deal with filehandles, which the server must then
+ vet to ensure that they live inside of an exported tree. When the
+ export consists of an entire filesystem, this is trivial. nfsd can just
+ ensure that the filehandle lives on the filesystem. When only part of a
+ filesystem is exported, however, nfsd must walk the ancestors of the
+ inode to ensure that it's within an exported subtree. This is an
+ expensive operation and not all filesystems can support it properly.
+ This flag exempts the filesystem from subtree checking and causes
+ exportfs to receive an error if it tries to enable subtree checking
+ on it (see the example at the end of this section).
+
+ EXPORT_OP_CLOSE_BEFORE_UNLINK - always close cached files before unlinking
+ On some exportable filesystems (such as NFS) unlinking a file that
+ is still open can cause a fair bit of extra work. For instance,
+ the NFS client will do a "sillyrename" to ensure that the file
+ sticks around while it's still open. When reexporting, that open
+ file is held by nfsd so we usually end up doing a sillyrename, and
+ then immediately deleting the sillyrenamed file just afterward when
+ the link count actually goes to zero. Sometimes this delete can race
+ with other operations (for instance an rmdir of the parent directory).
+ This flag causes nfsd to close any open files for this inode _before_
+ calling into the vfs to do an unlink or a rename that would replace
+ an existing file.
+
+ EXPORT_OP_REMOTE_FS - Backing storage for this filesystem is remote
+ PF_LOCAL_THROTTLE exists for loopback NFSD, where a thread needs to
+ write to one bdi (the final bdi) in order to free up writes queued
+ to another bdi (the client bdi). Such threads get a private balance
+ of dirty pages so that dirty pages for the client bdi do not impact
+ the daemon writing to the final bdi. For filesystems whose durable
+ storage is not local (such as exported NFS filesystems), this
+ constraint has negative consequences. EXPORT_OP_REMOTE_FS enables
+ an export to disable writeback throttling.
+
+ EXPORT_OP_NOATOMIC_ATTR - Filesystem does not update attributes atomically
+ EXPORT_OP_NOATOMIC_ATTR indicates that the exported filesystem
+ cannot provide the semantics required by the "atomic" boolean in
+ NFSv4's change_info4. This boolean indicates to a client whether the
+ returned before and after change attributes were obtained atomically
+ with respect to the requested metadata operation (UNLINK,
+ OPEN/CREATE, MKDIR, etc).
+
+ EXPORT_OP_FLUSH_ON_CLOSE - Filesystem flushes file data on close(2)
+ On most filesystems, inodes can remain under writeback after the
+ file is closed. NFSD relies on client activity or local flusher
+ threads to handle writeback. Certain filesystems, such as NFS, flush
+ all of an inode's dirty data on last close. Exports that behave this
+ way should set EXPORT_OP_FLUSH_ON_CLOSE so that NFSD knows to skip
+ waiting for writeback when closing such files.
+
+ EXPORT_OP_ASYNC_LOCK - Indicates that the filesystem can handle async lock
+ requests from lockd. Only set EXPORT_OP_ASYNC_LOCK if the filesystem has
+ its own ->lock() functionality, as the core posix_lock_file() implementation
+ has no async lock request handling yet. For more information about how to
+ indicate an async lock request from a ->lock() file_operations struct, see
+ fs/locks.c and the comment for the function vfs_lock_file().
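+
+As a concrete illustration of the EXPORT_OP_NOSUBTREECHK behaviour
+described above, an export of such a filesystem has to be defined with
+subtree checking disabled. The export path and client specification
+below are only examples::
+
+  # /etc/exports entry for a filesystem that sets EXPORT_OP_NOSUBTREECHK
+  /srv/export  192.168.0.0/24(rw,no_subtree_check)
+
+  # Asking for subtree checking instead gets an error back when the
+  # export is loaded, as described above:
+  exportfs -o rw,subtree_check 192.168.0.0/24:/srv/export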
diff --git a/Documentation/filesystems/nfs/fault_injection.txt b/Documentation/filesystems/nfs/fault_injection.txt
deleted file mode 100644
index f3a5b0a8ac05..000000000000
--- a/Documentation/filesystems/nfs/fault_injection.txt
+++ /dev/null
@@ -1,69 +0,0 @@
-
-Fault Injection
-===============
-Fault injection is a method for forcing errors that may not normally occur, or
-may be difficult to reproduce. Forcing these errors in a controlled environment
-can help the developer find and fix bugs before their code is shipped in a
-production system. Injecting an error on the Linux NFS server will allow us to
-observe how the client reacts and if it manages to recover its state correctly.
-
-NFSD_FAULT_INJECTION must be selected when configuring the kernel to use this
-feature.
-
-
-Using Fault Injection
-=====================
-On the client, mount the fault injection server through NFS v4.0+ and do some
-work over NFS (open files, take locks, ...).
-
-On the server, mount the debugfs filesystem to <debug_dir> and ls
-<debug_dir>/nfsd. This will show a list of files that will be used for
-injecting faults on the NFS server. As root, write a number n to the file
-corresponding to the action you want the server to take. The server will then
-process the first n items it finds. So if you want to forget 5 locks, echo '5'
-to <debug_dir>/nfsd/forget_locks. A value of 0 will tell the server to forget
-all corresponding items. A log message will be created containing the number
-of items forgotten (check dmesg).
-
-Go back to work on the client and check if the client recovered from the error
-correctly.
-
-
-Available Faults
-================
-forget_clients:
- The NFS server keeps a list of clients that have placed a mount call. If
- this list is cleared, the server will have no knowledge of who the client
- is, forcing the client to reauthenticate with the server.
-
-forget_openowners:
- The NFS server keeps a list of what files are currently opened and who
- they were opened by. Clearing this list will force the client to reopen
- its files.
-
-forget_locks:
- The NFS server keeps a list of what files are currently locked in the VFS.
- Clearing this list will force the client to reclaim its locks (files are
- unlocked through the VFS as they are cleared from this list).
-
-forget_delegations:
- A delegation is used to assure the client that a file, or part of a file,
- has not changed since the delegation was awarded. Clearing this list will
- force the client to reacquire its delegation before accessing the file
- again.
-
-recall_delegations:
- Delegations can be recalled by the server when another client attempts to
- access a file. This test will notify the client that its delegation has
- been revoked, forcing the client to reacquire the delegation before using
- the file again.
-
-
-tools/nfs/inject_faults.sh script
-=================================
-This script has been created to ease the fault injection process. This script
-will detect the mounted debugfs directory and write to the files located there
-based on the arguments passed by the user. For example, running
-`inject_faults.sh forget_locks 1` as root will instruct the server to forget
-one lock. Running `inject_faults forget_locks` will instruct the server to
-forgetall locks.
diff --git a/Documentation/filesystems/nfs/idmapper.txt b/Documentation/filesystems/nfs/idmapper.txt
deleted file mode 100644
index b86831acd583..000000000000
--- a/Documentation/filesystems/nfs/idmapper.txt
+++ /dev/null
@@ -1,75 +0,0 @@
-
-=========
-ID Mapper
-=========
-Id mapper is used by NFS to translate user and group ids into names, and to
-translate user and group names into ids. Part of this translation involves
-performing an upcall to userspace to request the information. There are two
-ways NFS could obtain this information: placing a call to /sbin/request-key
-or by placing a call to the rpc.idmap daemon.
-
-NFS will attempt to call /sbin/request-key first. If this succeeds, the
-result will be cached using the generic request-key cache. This call should
-only fail if /etc/request-key.conf is not configured for the id_resolver key
-type, see the "Configuring" section below if you wish to use the request-key
-method.
-
-If the call to /sbin/request-key fails (if /etc/request-key.conf is not
-configured with the id_resolver key type), then the idmapper will ask the
-legacy rpc.idmap daemon for the id mapping. This result will be stored
-in a custom NFS idmap cache.
-
-
-===========
-Configuring
-===========
-The file /etc/request-key.conf will need to be modified so /sbin/request-key can
-direct the upcall. The following line should be added:
-
-#OP TYPE DESCRIPTION CALLOUT INFO PROGRAM ARG1 ARG2 ARG3 ...
-#====== ======= =============== =============== ===============================
-create id_resolver * * /usr/sbin/nfs.idmap %k %d 600
-
-This will direct all id_resolver requests to the program /usr/sbin/nfs.idmap.
-The last parameter, 600, defines how many seconds into the future the key will
-expire. This parameter is optional for /usr/sbin/nfs.idmap. When the timeout
-is not specified, nfs.idmap will default to 600 seconds.
-
-id mapper uses for key descriptions:
- uid: Find the UID for the given user
- gid: Find the GID for the given group
- user: Find the user name for the given UID
- group: Find the group name for the given GID
-
-You can handle any of these individually, rather than using the generic upcall
-program. If you would like to use your own program for a uid lookup then you
-would edit your request-key.conf so it look similar to this:
-
-#OP TYPE DESCRIPTION CALLOUT INFO PROGRAM ARG1 ARG2 ARG3 ...
-#====== ======= =============== =============== ===============================
-create id_resolver uid:* * /some/other/program %k %d 600
-create id_resolver * * /usr/sbin/nfs.idmap %k %d 600
-
-Notice that the new line was added above the line for the generic program.
-request-key will find the first matching line and corresponding program. In
-this case, /some/other/program will handle all uid lookups and
-/usr/sbin/nfs.idmap will handle gid, user, and group lookups.
-
-See <file:Documentation/security/keys/request-key.rst> for more information
-about the request-key function.
-
-
-=========
-nfs.idmap
-=========
-nfs.idmap is designed to be called by request-key, and should not be run "by
-hand". This program takes two arguments, a serialized key and a key
-description. The serialized key is first converted into a key_serial_t, and
-then passed as an argument to keyctl_instantiate (both are part of keyutils.h).
-
-The actual lookups are performed by functions found in nfsidmap.h. nfs.idmap
-determines the correct function to call by looking at the first part of the
-description string. For example, a uid lookup description will appear as
-"uid:user@domain".
-
-nfs.idmap will return 0 if the key was instantiated, and non-zero otherwise.
diff --git a/Documentation/filesystems/nfs/index.rst b/Documentation/filesystems/nfs/index.rst
new file mode 100644
index 000000000000..8536134f31fd
--- /dev/null
+++ b/Documentation/filesystems/nfs/index.rst
@@ -0,0 +1,16 @@
+===============================
+NFS
+===============================
+
+
+.. toctree::
+ :maxdepth: 1
+
+ client-identifier
+ exporting
+ pnfs
+ rpc-cache
+ rpc-server-gss
+ nfs41-server
+ knfsd-stats
+ reexport
diff --git a/Documentation/filesystems/nfs/knfsd-stats.txt b/Documentation/filesystems/nfs/knfsd-stats.rst
index 1a5d82180b84..80bcf13550de 100644
--- a/Documentation/filesystems/nfs/knfsd-stats.txt
+++ b/Documentation/filesystems/nfs/knfsd-stats.rst
@@ -1,7 +1,9 @@
-
+============================
Kernel NFS Server Statistics
============================
+:Authors: Greg Banks <gnb@sgi.com> - 26 Mar 2009
+
This document describes the format and semantics of the statistics
which the kernel NFS server makes available to userspace. These
statistics are available in several text form pseudo files, each of
@@ -18,7 +20,7 @@ by parsing routines. All other lines contain a sequence of fields
separated by whitespace.
/proc/fs/nfsd/pool_stats
-------------------------
+========================
This file is available in kernels from 2.6.30 onwards, if the
/proc/fs/nfsd filesystem is mounted (it almost always should be).
@@ -109,15 +111,12 @@ this case), or the transport can be enqueued for later attention
(sockets-enqueued counts this case), or the packet can be temporarily
deferred because the transport is currently being used by an nfsd
thread. This last case is not very interesting and is not explicitly
-counted, but can be inferred from the other counters thus:
+counted, but can be inferred from the other counters thus::
-packets-deferred = packets-arrived - ( sockets-enqueued + threads-woken )
+ packets-deferred = packets-arrived - ( sockets-enqueued + threads-woken )
More
-----
-Descriptions of the other statistics file should go here.
-
+====
-Greg Banks <gnb@sgi.com>
-26 Mar 2009
+Descriptions of the other statistics file should go here.
diff --git a/Documentation/filesystems/nfs/nfs-rdma.txt b/Documentation/filesystems/nfs/nfs-rdma.txt
deleted file mode 100644
index 22dc0dd6889c..000000000000
--- a/Documentation/filesystems/nfs/nfs-rdma.txt
+++ /dev/null
@@ -1,274 +0,0 @@
-################################################################################
-# #
-# NFS/RDMA README #
-# #
-################################################################################
-
- Author: NetApp and Open Grid Computing
- Date: May 29, 2008
-
-Table of Contents
-~~~~~~~~~~~~~~~~~
- - Overview
- - Getting Help
- - Installation
- - Check RDMA and NFS Setup
- - NFS/RDMA Setup
-
-Overview
-~~~~~~~~
-
- This document describes how to install and setup the Linux NFS/RDMA client
- and server software.
-
- The NFS/RDMA client was first included in Linux 2.6.24. The NFS/RDMA server
- was first included in the following release, Linux 2.6.25.
-
- In our testing, we have obtained excellent performance results (full 10Gbit
- wire bandwidth at minimal client CPU) under many workloads. The code passes
- the full Connectathon test suite and operates over both Infiniband and iWARP
- RDMA adapters.
-
-Getting Help
-~~~~~~~~~~~~
-
- If you get stuck, you can ask questions on the
-
- nfs-rdma-devel@lists.sourceforge.net
-
- mailing list.
-
-Installation
-~~~~~~~~~~~~
-
- These instructions are a step by step guide to building a machine for
- use with NFS/RDMA.
-
- - Install an RDMA device
-
- Any device supported by the drivers in drivers/infiniband/hw is acceptable.
-
- Testing has been performed using several Mellanox-based IB cards, the
- Ammasso AMS1100 iWARP adapter, and the Chelsio cxgb3 iWARP adapter.
-
- - Install a Linux distribution and tools
-
- The first kernel release to contain both the NFS/RDMA client and server was
- Linux 2.6.25 Therefore, a distribution compatible with this and subsequent
- Linux kernel release should be installed.
-
- The procedures described in this document have been tested with
- distributions from Red Hat's Fedora Project (http://fedora.redhat.com/).
-
- - Install nfs-utils-1.1.2 or greater on the client
-
- An NFS/RDMA mount point can be obtained by using the mount.nfs command in
- nfs-utils-1.1.2 or greater (nfs-utils-1.1.1 was the first nfs-utils
- version with support for NFS/RDMA mounts, but for various reasons we
- recommend using nfs-utils-1.1.2 or greater). To see which version of
- mount.nfs you are using, type:
-
- $ /sbin/mount.nfs -V
-
- If the version is less than 1.1.2 or the command does not exist,
- you should install the latest version of nfs-utils.
-
- Download the latest package from:
-
- http://www.kernel.org/pub/linux/utils/nfs
-
- Uncompress the package and follow the installation instructions.
-
- If you will not need the idmapper and gssd executables (you do not need
- these to create an NFS/RDMA enabled mount command), the installation
- process can be simplified by disabling these features when running
- configure:
-
- $ ./configure --disable-gss --disable-nfsv4
-
- To build nfs-utils you will need the tcp_wrappers package installed. For
- more information on this see the package's README and INSTALL files.
-
- After building the nfs-utils package, there will be a mount.nfs binary in
- the utils/mount directory. This binary can be used to initiate NFS v2, v3,
- or v4 mounts. To initiate a v4 mount, the binary must be called
- mount.nfs4. The standard technique is to create a symlink called
- mount.nfs4 to mount.nfs.
-
- This mount.nfs binary should be installed at /sbin/mount.nfs as follows:
-
- $ sudo cp utils/mount/mount.nfs /sbin/mount.nfs
-
- In this location, mount.nfs will be invoked automatically for NFS mounts
- by the system mount command.
-
- NOTE: mount.nfs and therefore nfs-utils-1.1.2 or greater is only needed
- on the NFS client machine. You do not need this specific version of
- nfs-utils on the server. Furthermore, only the mount.nfs command from
- nfs-utils-1.1.2 is needed on the client.
-
- - Install a Linux kernel with NFS/RDMA
-
- The NFS/RDMA client and server are both included in the mainline Linux
- kernel version 2.6.25 and later. This and other versions of the Linux
- kernel can be found at:
-
- https://www.kernel.org/pub/linux/kernel/
-
- Download the sources and place them in an appropriate location.
-
- - Configure the RDMA stack
-
- Make sure your kernel configuration has RDMA support enabled. Under
- Device Drivers -> InfiniBand support, update the kernel configuration
- to enable InfiniBand support [NOTE: the option name is misleading. Enabling
- InfiniBand support is required for all RDMA devices (IB, iWARP, etc.)].
-
- Enable the appropriate IB HCA support (mlx4, mthca, ehca, ipath, etc.) or
- iWARP adapter support (amso, cxgb3, etc.).
-
- If you are using InfiniBand, be sure to enable IP-over-InfiniBand support.
-
- - Configure the NFS client and server
-
- Your kernel configuration must also have NFS file system support and/or
- NFS server support enabled. These and other NFS related configuration
- options can be found under File Systems -> Network File Systems.
-
- - Build, install, reboot
-
- The NFS/RDMA code will be enabled automatically if NFS and RDMA
- are turned on. The NFS/RDMA client and server are configured via the hidden
- SUNRPC_XPRT_RDMA config option that depends on SUNRPC and INFINIBAND. The
- value of SUNRPC_XPRT_RDMA will be:
-
- - N if either SUNRPC or INFINIBAND are N, in this case the NFS/RDMA client
- and server will not be built
- - M if both SUNRPC and INFINIBAND are on (M or Y) and at least one is M,
- in this case the NFS/RDMA client and server will be built as modules
- - Y if both SUNRPC and INFINIBAND are Y, in this case the NFS/RDMA client
- and server will be built into the kernel
-
- Therefore, if you have followed the steps above and turned no NFS and RDMA,
- the NFS/RDMA client and server will be built.
-
- Build a new kernel, install it, boot it.
-
-Check RDMA and NFS Setup
-~~~~~~~~~~~~~~~~~~~~~~~~
-
- Before configuring the NFS/RDMA software, it is a good idea to test
- your new kernel to ensure that the kernel is working correctly.
- In particular, it is a good idea to verify that the RDMA stack
- is functioning as expected and standard NFS over TCP/IP and/or UDP/IP
- is working properly.
-
- - Check RDMA Setup
-
- If you built the RDMA components as modules, load them at
- this time. For example, if you are using a Mellanox Tavor/Sinai/Arbel
- card:
-
- $ modprobe ib_mthca
- $ modprobe ib_ipoib
-
- If you are using InfiniBand, make sure there is a Subnet Manager (SM)
- running on the network. If your IB switch has an embedded SM, you can
- use it. Otherwise, you will need to run an SM, such as OpenSM, on one
- of your end nodes.
-
- If an SM is running on your network, you should see the following:
-
- $ cat /sys/class/infiniband/driverX/ports/1/state
- 4: ACTIVE
-
- where driverX is mthca0, ipath5, ehca3, etc.
-
- To further test the InfiniBand software stack, use IPoIB (this
- assumes you have two IB hosts named host1 and host2):
-
- host1$ ip link set dev ib0 up
- host1$ ip address add dev ib0 a.b.c.x
- host2$ ip link set dev ib0 up
- host2$ ip address add dev ib0 a.b.c.y
- host1$ ping a.b.c.y
- host2$ ping a.b.c.x
-
- For other device types, follow the appropriate procedures.
-
- - Check NFS Setup
-
- For the NFS components enabled above (client and/or server),
- test their functionality over standard Ethernet using TCP/IP or UDP/IP.
-
-NFS/RDMA Setup
-~~~~~~~~~~~~~~
-
- We recommend that you use two machines, one to act as the client and
- one to act as the server.
-
- One time configuration:
-
- - On the server system, configure the /etc/exports file and
- start the NFS/RDMA server.
-
- Exports entries with the following formats have been tested:
-
- /vol0 192.168.0.47(fsid=0,rw,async,insecure,no_root_squash)
- /vol0 192.168.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)
-
- The IP address(es) is(are) the client's IPoIB address for an InfiniBand
- HCA or the client's iWARP address(es) for an RNIC.
-
- NOTE: The "insecure" option must be used because the NFS/RDMA client does
- not use a reserved port.
-
- Each time a machine boots:
-
- - Load and configure the RDMA drivers
-
- For InfiniBand using a Mellanox adapter:
-
- $ modprobe ib_mthca
- $ modprobe ib_ipoib
- $ ip li set dev ib0 up
- $ ip addr add dev ib0 a.b.c.d
-
- NOTE: use unique addresses for the client and server
-
- - Start the NFS server
-
- If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
- kernel config), load the RDMA transport module:
-
- $ modprobe svcrdma
-
- Regardless of how the server was built (module or built-in), start the
- server:
-
- $ /etc/init.d/nfs start
-
- or
-
- $ service nfs start
-
- Instruct the server to listen on the RDMA transport:
-
- $ echo rdma 20049 > /proc/fs/nfsd/portlist
-
- - On the client system
-
- If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
- kernel config), load the RDMA client module:
-
- $ modprobe xprtrdma.ko
-
- Regardless of how the client was built (module or built-in), use this
- command to mount the NFS/RDMA server:
-
- $ mount -o rdma,port=20049 <IPoIB-server-name-or-address>:/<export> /mnt
-
- To verify that the mount is using RDMA, run "cat /proc/mounts" and check
- the "proto" field for the given mount.
-
- Congratulations! You're using NFS/RDMA!
diff --git a/Documentation/filesystems/nfs/nfs.txt b/Documentation/filesystems/nfs/nfs.txt
deleted file mode 100644
index f2571c8bef74..000000000000
--- a/Documentation/filesystems/nfs/nfs.txt
+++ /dev/null
@@ -1,136 +0,0 @@
-
-The NFS client
-==============
-
-The NFS version 2 protocol was first documented in RFC1094 (March 1989).
-Since then two more major releases of NFS have been published, with NFSv3
-being documented in RFC1813 (June 1995), and NFSv4 in RFC3530 (April
-2003).
-
-The Linux NFS client currently supports all the above published versions,
-and work is in progress on adding support for minor version 1 of the NFSv4
-protocol.
-
-The purpose of this document is to provide information on some of the
-special features of the NFS client that can be configured by system
-administrators.
-
-
-The nfs4_unique_id parameter
-============================
-
-NFSv4 requires clients to identify themselves to servers with a unique
-string. File open and lock state shared between one client and one server
-is associated with this identity. To support robust NFSv4 state recovery
-and transparent state migration, this identity string must not change
-across client reboots.
-
-Without any other intervention, the Linux client uses a string that contains
-the local system's node name. System administrators, however, often do not
-take care to ensure that node names are fully qualified and do not change
-over the lifetime of a client system. Node names can have other
-administrative requirements that require particular behavior that does not
-work well as part of an nfs_client_id4 string.
-
-The nfs.nfs4_unique_id boot parameter specifies a unique string that can be
-used instead of a system's node name when an NFS client identifies itself to
-a server. Thus, if the system's node name is not unique, or it changes, its
-nfs.nfs4_unique_id stays the same, preventing collision with other clients
-or loss of state during NFS reboot recovery or transparent state migration.
-
-The nfs.nfs4_unique_id string is typically a UUID, though it can contain
-anything that is believed to be unique across all NFS clients. An
-nfs4_unique_id string should be chosen when a client system is installed,
-just as a system's root file system gets a fresh UUID in its label at
-install time.
-
-The string should remain fixed for the lifetime of the client. It can be
-changed safely if care is taken that the client shuts down cleanly and all
-outstanding NFSv4 state has expired, to prevent loss of NFSv4 state.
-
-This string can be stored in an NFS client's grub.conf, or it can be provided
-via a net boot facility such as PXE. It may also be specified as an nfs.ko
-module parameter. Specifying a uniquifier string is not support for NFS
-clients running in containers.
-
-
-The DNS resolver
-================
-
-NFSv4 allows for one server to refer the NFS client to data that has been
-migrated onto another server by means of the special "fs_locations"
-attribute. See
- http://tools.ietf.org/html/rfc3530#section-6
-and
- http://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00
-
-The fs_locations information can take the form of either an ip address and
-a path, or a DNS hostname and a path. The latter requires the NFS client to
-do a DNS lookup in order to mount the new volume, and hence the need for an
-upcall to allow userland to provide this service.
-
-Assuming that the user has the 'rpc_pipefs' filesystem mounted in the usual
-/var/lib/nfs/rpc_pipefs, the upcall consists of the following steps:
-
- (1) The process checks the dns_resolve cache to see if it contains a
- valid entry. If so, it returns that entry and exits.
-
- (2) If no valid entry exists, the helper script '/sbin/nfs_cache_getent'
- (may be changed using the 'nfs.cache_getent' kernel boot parameter)
- is run, with two arguments:
- - the cache name, "dns_resolve"
- - the hostname to resolve
-
- (3) After looking up the corresponding ip address, the helper script
- writes the result into the rpc_pipefs pseudo-file
- '/var/lib/nfs/rpc_pipefs/cache/dns_resolve/channel'
- in the following (text) format:
-
- "<ip address> <hostname> <ttl>\n"
-
- Where <ip address> is in the usual IPv4 (123.456.78.90) or IPv6
- (ffee:ddcc:bbaa:9988:7766:5544:3322:1100, ffee::1100, ...) format.
- <hostname> is identical to the second argument of the helper
- script, and <ttl> is the 'time to live' of this cache entry (in
- units of seconds).
-
- Note: If <ip address> is invalid, say the string "0", then a negative
- entry is created, which will cause the kernel to treat the hostname
- as having no valid DNS translation.
-
-
-
-
-A basic sample /sbin/nfs_cache_getent
-=====================================
-
-#!/bin/bash
-#
-ttl=600
-#
-cut=/usr/bin/cut
-getent=/usr/bin/getent
-rpc_pipefs=/var/lib/nfs/rpc_pipefs
-#
-die()
-{
- echo "Usage: $0 cache_name entry_name"
- exit 1
-}
-
-[ $# -lt 2 ] && die
-cachename="$1"
-cache_path=${rpc_pipefs}/cache/${cachename}/channel
-
-case "${cachename}" in
- dns_resolve)
- name="$2"
- result="$(${getent} hosts ${name} | ${cut} -f1 -d\ )"
- [ -z "${result}" ] && result="0"
- ;;
- *)
- die
- ;;
-esac
-echo "${result} ${name} ${ttl}" >${cache_path}
-
diff --git a/Documentation/filesystems/nfs/nfs41-server.rst b/Documentation/filesystems/nfs/nfs41-server.rst
new file mode 100644
index 000000000000..16b5f02f81c3
--- /dev/null
+++ b/Documentation/filesystems/nfs/nfs41-server.rst
@@ -0,0 +1,256 @@
+=============================
+NFSv4.1 Server Implementation
+=============================
+
+Server support for minorversion 1 can be controlled using the
+/proc/fs/nfsd/versions control file. The string output returned
+by reading this file will contain either "+4.1" or "-4.1"
+correspondingly.
+
+Currently, server support for minorversion 1 is enabled by default.
+It can be disabled at run time by writing the string "-4.1" to
+the /proc/fs/nfsd/versions control file. Note that to write this
+control file, the nfsd service must be taken down. You can use rpc.nfsd
+for this; see rpc.nfsd(8).
+
+(Warning: older servers will interpret "+4.1" and "-4.1" as "+4" and
+"-4", respectively. Therefore, code meant to work on both new and old
+kernels must turn 4.1 on or off *before* turning support for version 4
+on or off; rpc.nfsd does this correctly.)
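+
+For example, minorversion 1 support might be toggled at run time as
+follows (a sketch; the nfsd thread count of 8 is arbitrary)::
+
+  rpc.nfsd 0                            # take the nfsd service down
+  echo "-4.1" > /proc/fs/nfsd/versions  # disable NFSv4.1
+  rpc.nfsd 8                            # restart the nfsd threads
+  cat /proc/fs/nfsd/versions            # the output now contains "-4.1"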
+
+The NFSv4 minorversion 1 (NFSv4.1) implementation in nfsd is based
+on RFC 5661.
+
+Of the many new features in NFSv4.1, the current implementation
+focuses on the mandatory-to-implement NFSv4.1 Sessions, providing
+"exactly once" semantics and better control and throttling of the
+resources allocated for each client.
+
+The table below, taken from the NFSv4.1 document, lists
+the operations that are mandatory to implement (REQ), optional
+(OPT), and NFSv4.0 operations that are required not to implement (MNI)
+in minor version 1. The first column indicates the operations that
+are not supported yet by the linux server implementation.
+
+The OPTIONAL features identified and their abbreviations are as follows:
+
+- **pNFS** Parallel NFS
+- **FDELG** File Delegations
+- **DDELG** Directory Delegations
+
+The following abbreviations indicate the linux server implementation status.
+
+- **I** Implemented NFSv4.1 operations.
+- **NS** Not Supported.
+- **NS\*** Unimplemented optional feature.
+
+Operations
+==========
+
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| Implementation status | Operation            | REQ,REC, OPT or MNI | Feature (REQ, REC or OPT) | Definition     |
++=======================+======================+=====================+===========================+================+
+| | ACCESS | REQ | | Section 18.1 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | BACKCHANNEL_CTL | REQ | | Section 18.33 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | BIND_CONN_TO_SESSION | REQ | | Section 18.34 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | CLOSE | REQ | | Section 18.2 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | COMMIT | REQ | | Section 18.3 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | CREATE | REQ | | Section 18.4 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | CREATE_SESSION | REQ | | Section 18.36 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| NS* | DELEGPURGE | OPT | FDELG (REQ) | Section 18.5 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | DELEGRETURN | OPT | FDELG, | Section 18.6 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | | | DDELG, pNFS | |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | | | (REQ) | |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | DESTROY_CLIENTID | REQ | | Section 18.50 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | DESTROY_SESSION | REQ | | Section 18.37 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | EXCHANGE_ID | REQ | | Section 18.35 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | FREE_STATEID | REQ | | Section 18.38 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | GETATTR | REQ | | Section 18.7 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | GETDEVICEINFO | OPT | pNFS (REQ) | Section 18.40 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| NS* | GETDEVICELIST | OPT | pNFS (OPT) | Section 18.41 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | GETFH | REQ | | Section 18.8 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| NS* | GET_DIR_DELEGATION | OPT | DDELG (REQ) | Section 18.39 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | LAYOUTCOMMIT | OPT | pNFS (REQ) | Section 18.42 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | LAYOUTGET | OPT | pNFS (REQ) | Section 18.43 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | LAYOUTRETURN | OPT | pNFS (REQ) | Section 18.44 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | LINK | OPT | | Section 18.9 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | LOCK | REQ | | Section 18.10 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | LOCKT | REQ | | Section 18.11 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | LOCKU | REQ | | Section 18.12 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | LOOKUP | REQ | | Section 18.13 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | LOOKUPP | REQ | | Section 18.14 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | NVERIFY | REQ | | Section 18.15 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | OPEN | REQ | | Section 18.16 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| NS* | OPENATTR | OPT | | Section 18.17 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | OPEN_CONFIRM | MNI | | N/A |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | OPEN_DOWNGRADE | REQ | | Section 18.18 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | PUTFH | REQ | | Section 18.19 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | PUTPUBFH | REQ | | Section 18.20 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | PUTROOTFH | REQ | | Section 18.21 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | READ | REQ | | Section 18.22 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | READDIR | REQ | | Section 18.23 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | READLINK | OPT | | Section 18.24 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | RECLAIM_COMPLETE | REQ | | Section 18.51 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | RELEASE_LOCKOWNER | MNI | | N/A |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | REMOVE | REQ | | Section 18.25 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | RENAME | REQ | | Section 18.26 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | RENEW | MNI | | N/A |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | RESTOREFH | REQ | | Section 18.27 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | SAVEFH | REQ | | Section 18.28 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | SECINFO | REQ | | Section 18.29 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | SECINFO_NO_NAME | REC | pNFS files | Section 18.45, |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | | | layout (REQ) | Section 13.12 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | SEQUENCE | REQ | | Section 18.46 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | SETATTR | REQ | | Section 18.30 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | SETCLIENTID | MNI | | N/A |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | SETCLIENTID_CONFIRM | MNI | | N/A |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| NS | SET_SSV | REQ | | Section 18.47 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| I | TEST_STATEID | REQ | | Section 18.48 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | VERIFY | REQ | | Section 18.31 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| NS* | WANT_DELEGATION | OPT | FDELG (OPT) | Section 18.49 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+| | WRITE | REQ | | Section 18.32 |
++-----------------------+----------------------+---------------------+---------------------------+----------------+
+
+
+Callback Operations
+===================
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| Implementation status | Operation               | REQ,REC, OPT or MNI | Feature (REQ, REC or OPT) | Definition    |
++=======================+=========================+=====================+===========================+===============+
+| | CB_GETATTR | OPT | FDELG (REQ) | Section 20.1 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| I | CB_LAYOUTRECALL | OPT | pNFS (REQ) | Section 20.3 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS* | CB_NOTIFY | OPT | DDELG (REQ) | Section 20.4 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS* | CB_NOTIFY_DEVICEID | OPT | pNFS (OPT) | Section 20.12 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS* | CB_NOTIFY_LOCK | OPT | | Section 20.11 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS* | CB_PUSH_DELEG | OPT | FDELG (OPT) | Section 20.5 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | CB_RECALL | OPT | FDELG, | Section 20.2 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | DDELG, pNFS | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | (REQ) | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS* | CB_RECALL_ANY | OPT | FDELG, | Section 20.6 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | DDELG, pNFS | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | (REQ) | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS | CB_RECALL_SLOT | REQ | | Section 20.8 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS* | CB_RECALLABLE_OBJ_AVAIL | OPT | DDELG, pNFS | Section 20.7 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | (REQ) | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| I | CB_SEQUENCE | OPT | FDELG, | Section 20.9 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | DDELG, pNFS | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | (REQ) | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| NS* | CB_WANTS_CANCELLED | OPT | FDELG, | Section 20.10 |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | DDELG, pNFS | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+| | | | (REQ) | |
++-----------------------+-------------------------+---------------------+---------------------------+---------------+
+
+
+Implementation notes:
+=====================
+
+SSV:
+ The spec claims this is mandatory, but we don't actually know of any
+ implementations, so we're ignoring it for now. The server returns
+ NFS4ERR_ENCR_ALG_UNSUPP on EXCHANGE_ID, which should be future-proof.
+
+GSS on the backchannel:
+ Again, theoretically required but not widely implemented (in
+ particular, the current Linux client doesn't request it). We return
+ NFS4ERR_ENCR_ALG_UNSUPP on CREATE_SESSION.
+
+DELEGPURGE:
+ mandatory only for servers that support CLAIM_DELEGATE_PREV and/or
+ CLAIM_DELEG_PREV_FH (which allows clients to keep delegations that
+ persist across client reboots). Thus we need not implement this for
+ now.
+
+EXCHANGE_ID:
+ implementation ids are ignored
+
+CREATE_SESSION:
+ backchannel attributes are ignored
+
+SEQUENCE:
+ no support for dynamic slot table renegotiation (optional)
+
+Nonstandard compound limitations:
+ No support for a session's fore channel RPC compound that requires both a
+ ca_maxrequestsize request and a ca_maxresponsesize reply, so we may
+ fail to live up to the promise we made in CREATE_SESSION fore channel
+ negotiation.
+
+See also http://wiki.linux-nfs.org/wiki/index.php/Server_4.0_and_4.1_issues.
diff --git a/Documentation/filesystems/nfs/nfs41-server.txt b/Documentation/filesystems/nfs/nfs41-server.txt
deleted file mode 100644
index 682a59fabe3f..000000000000
--- a/Documentation/filesystems/nfs/nfs41-server.txt
+++ /dev/null
@@ -1,173 +0,0 @@
-NFSv4.1 Server Implementation
-
-Server support for minorversion 1 can be controlled using the
-/proc/fs/nfsd/versions control file. The string output returned
-by reading this file will contain either "+4.1" or "-4.1"
-correspondingly.
-
-Currently, server support for minorversion 1 is enabled by default.
-It can be disabled at run time by writing the string "-4.1" to
-the /proc/fs/nfsd/versions control file. Note that to write this
-control file, the nfsd service must be taken down. You can use rpc.nfsd
-for this; see rpc.nfsd(8).
-
-(Warning: older servers will interpret "+4.1" and "-4.1" as "+4" and
-"-4", respectively. Therefore, code meant to work on both new and old
-kernels must turn 4.1 on or off *before* turning support for version 4
-on or off; rpc.nfsd does this correctly.)
-
-The NFSv4 minorversion 1 (NFSv4.1) implementation in nfsd is based
-on RFC 5661.
-
-From the many new features in NFSv4.1 the current implementation
-focuses on the mandatory-to-implement NFSv4.1 Sessions, providing
-"exactly once" semantics and better control and throttling of the
-resources allocated for each client.
-
-The table below, taken from the NFSv4.1 document, lists
-the operations that are mandatory to implement (REQ), optional
-(OPT), and NFSv4.0 operations that are required not to implement (MNI)
-in minor version 1. The first column indicates the operations that
-are not supported yet by the linux server implementation.
-
-The OPTIONAL features identified and their abbreviations are as follows:
- pNFS Parallel NFS
- FDELG File Delegations
- DDELG Directory Delegations
-
-The following abbreviations indicate the linux server implementation status.
- I Implemented NFSv4.1 operations.
- NS Not Supported.
- NS* Unimplemented optional feature.
-
-Operations
-
- +----------------------+------------+--------------+----------------+
- | Operation | REQ, REC, | Feature | Definition |
- | | OPT, or | (REQ, REC, | |
- | | MNI | or OPT) | |
- +----------------------+------------+--------------+----------------+
- | ACCESS | REQ | | Section 18.1 |
-I | BACKCHANNEL_CTL | REQ | | Section 18.33 |
-I | BIND_CONN_TO_SESSION | REQ | | Section 18.34 |
- | CLOSE | REQ | | Section 18.2 |
- | COMMIT | REQ | | Section 18.3 |
- | CREATE | REQ | | Section 18.4 |
-I | CREATE_SESSION | REQ | | Section 18.36 |
-NS*| DELEGPURGE | OPT | FDELG (REQ) | Section 18.5 |
- | DELEGRETURN | OPT | FDELG, | Section 18.6 |
- | | | DDELG, pNFS | |
- | | | (REQ) | |
-I | DESTROY_CLIENTID | REQ | | Section 18.50 |
-I | DESTROY_SESSION | REQ | | Section 18.37 |
-I | EXCHANGE_ID | REQ | | Section 18.35 |
-I | FREE_STATEID | REQ | | Section 18.38 |
- | GETATTR | REQ | | Section 18.7 |
-I | GETDEVICEINFO | OPT | pNFS (REQ) | Section 18.40 |
-NS*| GETDEVICELIST | OPT | pNFS (OPT) | Section 18.41 |
- | GETFH | REQ | | Section 18.8 |
-NS*| GET_DIR_DELEGATION | OPT | DDELG (REQ) | Section 18.39 |
-I | LAYOUTCOMMIT | OPT | pNFS (REQ) | Section 18.42 |
-I | LAYOUTGET | OPT | pNFS (REQ) | Section 18.43 |
-I | LAYOUTRETURN | OPT | pNFS (REQ) | Section 18.44 |
- | LINK | OPT | | Section 18.9 |
- | LOCK | REQ | | Section 18.10 |
- | LOCKT | REQ | | Section 18.11 |
- | LOCKU | REQ | | Section 18.12 |
- | LOOKUP | REQ | | Section 18.13 |
- | LOOKUPP | REQ | | Section 18.14 |
- | NVERIFY | REQ | | Section 18.15 |
- | OPEN | REQ | | Section 18.16 |
-NS*| OPENATTR | OPT | | Section 18.17 |
- | OPEN_CONFIRM | MNI | | N/A |
- | OPEN_DOWNGRADE | REQ | | Section 18.18 |
- | PUTFH | REQ | | Section 18.19 |
- | PUTPUBFH | REQ | | Section 18.20 |
- | PUTROOTFH | REQ | | Section 18.21 |
- | READ | REQ | | Section 18.22 |
- | READDIR | REQ | | Section 18.23 |
- | READLINK | OPT | | Section 18.24 |
- | RECLAIM_COMPLETE | REQ | | Section 18.51 |
- | RELEASE_LOCKOWNER | MNI | | N/A |
- | REMOVE | REQ | | Section 18.25 |
- | RENAME | REQ | | Section 18.26 |
- | RENEW | MNI | | N/A |
- | RESTOREFH | REQ | | Section 18.27 |
- | SAVEFH | REQ | | Section 18.28 |
- | SECINFO | REQ | | Section 18.29 |
-I | SECINFO_NO_NAME | REC | pNFS files | Section 18.45, |
- | | | layout (REQ) | Section 13.12 |
-I | SEQUENCE | REQ | | Section 18.46 |
- | SETATTR | REQ | | Section 18.30 |
- | SETCLIENTID | MNI | | N/A |
- | SETCLIENTID_CONFIRM | MNI | | N/A |
-NS | SET_SSV | REQ | | Section 18.47 |
-I | TEST_STATEID | REQ | | Section 18.48 |
- | VERIFY | REQ | | Section 18.31 |
-NS*| WANT_DELEGATION | OPT | FDELG (OPT) | Section 18.49 |
- | WRITE | REQ | | Section 18.32 |
-
-Callback Operations
-
- +-------------------------+-----------+-------------+---------------+
- | Operation | REQ, REC, | Feature | Definition |
- | | OPT, or | (REQ, REC, | |
- | | MNI | or OPT) | |
- +-------------------------+-----------+-------------+---------------+
- | CB_GETATTR | OPT | FDELG (REQ) | Section 20.1 |
-I | CB_LAYOUTRECALL | OPT | pNFS (REQ) | Section 20.3 |
-NS*| CB_NOTIFY | OPT | DDELG (REQ) | Section 20.4 |
-NS*| CB_NOTIFY_DEVICEID | OPT | pNFS (OPT) | Section 20.12 |
-NS*| CB_NOTIFY_LOCK | OPT | | Section 20.11 |
-NS*| CB_PUSH_DELEG | OPT | FDELG (OPT) | Section 20.5 |
- | CB_RECALL | OPT | FDELG, | Section 20.2 |
- | | | DDELG, pNFS | |
- | | | (REQ) | |
-NS*| CB_RECALL_ANY | OPT | FDELG, | Section 20.6 |
- | | | DDELG, pNFS | |
- | | | (REQ) | |
-NS | CB_RECALL_SLOT | REQ | | Section 20.8 |
-NS*| CB_RECALLABLE_OBJ_AVAIL | OPT | DDELG, pNFS | Section 20.7 |
- | | | (REQ) | |
-I | CB_SEQUENCE | OPT | FDELG, | Section 20.9 |
- | | | DDELG, pNFS | |
- | | | (REQ) | |
-NS*| CB_WANTS_CANCELLED | OPT | FDELG, | Section 20.10 |
- | | | DDELG, pNFS | |
- | | | (REQ) | |
- +-------------------------+-----------+-------------+---------------+
-
-Implementation notes:
-
-SSV:
-* The spec claims this is mandatory, but we don't actually know of any
- implementations, so we're ignoring it for now. The server returns
- NFS4ERR_ENCR_ALG_UNSUPP on EXCHANGE_ID, which should be future-proof.
-
-GSS on the backchannel:
-* Again, theoretically required but not widely implemented (in
- particular, the current Linux client doesn't request it). We return
- NFS4ERR_ENCR_ALG_UNSUPP on CREATE_SESSION.
-
-DELEGPURGE:
-* mandatory only for servers that support CLAIM_DELEGATE_PREV and/or
- CLAIM_DELEG_PREV_FH (which allows clients to keep delegations that
- persist across client reboots). Thus we need not implement this for
- now.
-
-EXCHANGE_ID:
-* implementation ids are ignored
-
-CREATE_SESSION:
-* backchannel attributes are ignored
-
-SEQUENCE:
-* no support for dynamic slot table renegotiation (optional)
-
-Nonstandard compound limitations:
-* No support for a sessions fore channel RPC compound that requires both a
- ca_maxrequestsize request and a ca_maxresponsesize reply, so we may
- fail to live up to the promise we made in CREATE_SESSION fore channel
- negotiation.
-
-See also http://wiki.linux-nfs.org/wiki/index.php/Server_4.0_and_4.1_issues.
diff --git a/Documentation/filesystems/nfs/nfsd-admin-interfaces.txt b/Documentation/filesystems/nfs/nfsd-admin-interfaces.txt
deleted file mode 100644
index 56a96fb08a73..000000000000
--- a/Documentation/filesystems/nfs/nfsd-admin-interfaces.txt
+++ /dev/null
@@ -1,41 +0,0 @@
-Administrative interfaces for nfsd
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Note that normally these interfaces are used only by the utilities in
-nfs-utils.
-
-nfsd is controlled mainly by pseudofiles under the "nfsd" filesystem,
-which is normally mounted at /proc/fs/nfsd/.
-
-The server is always started by the first write of a nonzero value to
-nfsd/threads.
-
-Before doing that, NFSD can be told which sockets to listen on by
-writing to nfsd/portlist; that write may be:
-
- - an ascii-encoded file descriptor, which should refer to a
- bound (and listening, for tcp) socket, or
- - "transportname port", where transportname is currently either
- "udp", "tcp", or "rdma".
-
-If nfsd is started without doing any of these, then it will create one
-udp and one tcp listener at port 2049 (see nfsd_init_socks).
-
-On startup, nfsd and lockd grace periods start.
-
-nfsd is shut down by a write of 0 to nfsd/threads. All locks and state
-are thrown away at that point.
-
-Between startup and shutdown, the number of threads may be adjusted up
-or down by additional writes to nfsd/threads or by writes to
-nfsd/pool_threads.
-
-For more detail about files under nfsd/ and what they control, see
-fs/nfsd/nfsctl.c; most of them have detailed comments.
-
-Implementation notes
-^^^^^^^^^^^^^^^^^^^^
-
-Note that the rpc server requires the caller to serialize addition and
-removal of listening sockets, and startup and shutdown of the server.
-For nfsd this is done using nfsd_mutex.
diff --git a/Documentation/filesystems/nfs/nfsroot.txt b/Documentation/filesystems/nfs/nfsroot.txt
deleted file mode 100644
index ae4332464560..000000000000
--- a/Documentation/filesystems/nfs/nfsroot.txt
+++ /dev/null
@@ -1,355 +0,0 @@
-Mounting the root filesystem via NFS (nfsroot)
-===============================================
-
-Written 1996 by Gero Kuhlmann <gero@gkminix.han.de>
-Updated 1997 by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
-Updated 2006 by Nico Schottelius <nico-kernel-nfsroot@schottelius.org>
-Updated 2006 by Horms <horms@verge.net.au>
-Updated 2018 by Chris Novakovic <chris@chrisn.me.uk>
-
-
-
-In order to use a diskless system, such as an X-terminal or printer server
-for example, it is necessary for the root filesystem to be present on a
-non-disk device. This may be an initramfs (see Documentation/filesystems/
-ramfs-rootfs-initramfs.txt), a ramdisk (see Documentation/admin-guide/initrd.rst) or a
-filesystem mounted via NFS. The following text describes on how to use NFS
-for the root filesystem. For the rest of this text 'client' means the
-diskless system, and 'server' means the NFS server.
-
-
-
-
-1.) Enabling nfsroot capabilities
- -----------------------------
-
-In order to use nfsroot, NFS client support needs to be selected as
-built-in during configuration. Once this has been selected, the nfsroot
-option will become available, which should also be selected.
-
-In the networking options, kernel level autoconfiguration can be selected,
-along with the types of autoconfiguration to support. Selecting all of
-DHCP, BOOTP and RARP is safe.
-
-
-
-
-2.) Kernel command line
- -------------------
-
-When the kernel has been loaded by a boot loader (see below) it needs to be
-told what root fs device to use. And in the case of nfsroot, where to find
-both the server and the name of the directory on the server to mount as root.
-This can be established using the following kernel command line parameters:
-
-
-root=/dev/nfs
-
- This is necessary to enable the pseudo-NFS-device. Note that it's not a
- real device but just a synonym to tell the kernel to use NFS instead of
- a real device.
-
-
-nfsroot=[<server-ip>:]<root-dir>[,<nfs-options>]
-
- If the `nfsroot' parameter is NOT given on the command line,
- the default "/tftpboot/%s" will be used.
-
- <server-ip> Specifies the IP address of the NFS server.
- The default address is determined by the `ip' parameter
- (see below). This parameter allows the use of different
- servers for IP autoconfiguration and NFS.
-
- <root-dir> Name of the directory on the server to mount as root.
- If there is a "%s" token in the string, it will be
- replaced by the ASCII-representation of the client's
- IP address.
-
- <nfs-options> Standard NFS options. All options are separated by commas.
- The following defaults are used:
- port = as given by server portmap daemon
- rsize = 4096
- wsize = 4096
- timeo = 7
- retrans = 3
- acregmin = 3
- acregmax = 60
- acdirmin = 30
- acdirmax = 60
- flags = hard, nointr, noposix, cto, ac
-
-
-ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:
- <dns0-ip>:<dns1-ip>:<ntp0-ip>
-
- This parameter tells the kernel how to configure IP addresses of devices
- and also how to set up the IP routing table. It was originally called
- `nfsaddrs', but now the boot-time IP configuration works independently of
- NFS, so it was renamed to `ip' and the old name remained as an alias for
- compatibility reasons.
-
- If this parameter is missing from the kernel command line, all fields are
- assumed to be empty, and the defaults mentioned below apply. In general
- this means that the kernel tries to configure everything using
- autoconfiguration.
-
- The <autoconf> parameter can appear alone as the value to the `ip'
- parameter (without all the ':' characters before). If the value is
- "ip=off" or "ip=none", no autoconfiguration will take place, otherwise
- autoconfiguration will take place. The most common way to use this
- is "ip=dhcp".
-
- <client-ip> IP address of the client.
-
- Default: Determined using autoconfiguration.
-
- <server-ip> IP address of the NFS server. If RARP is used to determine
- the client address and this parameter is NOT empty only
- replies from the specified server are accepted.
-
- Only required for NFS root. That is autoconfiguration
- will not be triggered if it is missing and NFS root is not
- in operation.
-
- Value is exported to /proc/net/pnp with the prefix "bootserver "
- (see below).
-
- Default: Determined using autoconfiguration.
- The address of the autoconfiguration server is used.
-
- <gw-ip> IP address of a gateway if the server is on a different subnet.
-
- Default: Determined using autoconfiguration.
-
- <netmask> Netmask for local network interface. If unspecified
- the netmask is derived from the client IP address assuming
- classful addressing.
-
- Default: Determined using autoconfiguration.
-
- <hostname> Name of the client. If a '.' character is present, anything
- before the first '.' is used as the client's hostname, and anything
- after it is used as its NIS domain name. May be supplied by
- autoconfiguration, but its absence will not trigger autoconfiguration.
- If specified and DHCP is used, the user-provided hostname (and NIS
- domain name, if present) will be carried in the DHCP request; this
- may cause a DNS record to be created or updated for the client.
-
- Default: Client IP address is used in ASCII notation.
-
- <device> Name of network device to use.
-
- Default: If the host only has one device, it is used.
- Otherwise the device is determined using
- autoconfiguration. This is done by sending
- autoconfiguration requests out of all devices,
- and using the device that received the first reply.
-
- <autoconf> Method to use for autoconfiguration. In the case of options
- which specify multiple autoconfiguration protocols,
- requests are sent using all protocols, and the first one
- to reply is used.
-
- Only autoconfiguration protocols that have been compiled
- into the kernel will be used, regardless of the value of
- this option.
-
- off or none: don't use autoconfiguration
- (do static IP assignment instead)
- on or any: use any protocol available in the kernel
- (default)
- dhcp: use DHCP
- bootp: use BOOTP
- rarp: use RARP
- both: use both BOOTP and RARP but not DHCP
- (old option kept for backwards compatibility)
-
- if dhcp is used, the client identifier can be used by following
- format "ip=dhcp,client-id-type,client-id-value"
-
- Default: any
-
- <dns0-ip> IP address of primary nameserver.
- Value is exported to /proc/net/pnp with the prefix "nameserver "
- (see below).
-
- Default: None if not using autoconfiguration; determined
- automatically if using autoconfiguration.
-
- <dns1-ip> IP address of secondary nameserver.
- See <dns0-ip>.
-
- <ntp0-ip> IP address of a Network Time Protocol (NTP) server.
- Value is exported to /proc/net/ipconfig/ntp_servers, but is
- otherwise unused (see below).
-
- Default: None if not using autoconfiguration; determined
- automatically if using autoconfiguration.
-
- After configuration (whether manual or automatic) is complete, two files
- are created in the following format; lines are omitted if their respective
- value is empty following configuration:
-
- - /proc/net/pnp:
-
- #PROTO: <DHCP|BOOTP|RARP|MANUAL> (depending on configuration method)
- domain <dns-domain> (if autoconfigured, the DNS domain)
- nameserver <dns0-ip> (primary name server IP)
- nameserver <dns1-ip> (secondary name server IP)
- nameserver <dns2-ip> (tertiary name server IP)
- bootserver <server-ip> (NFS server IP)
-
- - /proc/net/ipconfig/ntp_servers:
-
- <ntp0-ip> (NTP server IP)
- <ntp1-ip> (NTP server IP)
- <ntp2-ip> (NTP server IP)
-
- <dns-domain> and <dns2-ip> (in /proc/net/pnp) and <ntp1-ip> and <ntp2-ip>
- (in /proc/net/ipconfig/ntp_servers) are requested during autoconfiguration;
- they cannot be specified as part of the "ip=" kernel command line parameter.
-
- Because the "domain" and "nameserver" options are recognised by DNS
- resolvers, /etc/resolv.conf is often linked to /proc/net/pnp on systems
- that use an NFS root filesystem.
-
- Note that the kernel will not synchronise the system time with any NTP
- servers it discovers; this is the responsibility of a user space process
- (e.g. an initrd/initramfs script that passes the IP addresses listed in
- /proc/net/ipconfig/ntp_servers to an NTP client before mounting the real
- root filesystem if it is on NFS).
-
-
-nfsrootdebug
-
- This parameter enables debugging messages to appear in the kernel
- log at boot time so that administrators can verify that the correct
- NFS mount options, server address, and root path are passed to the
- NFS client.
-
-
-rdinit=<executable file>
-
- To specify which file contains the program that starts system
- initialization, administrators can use this command line parameter.
- The default value of this parameter is "/init". If the specified
- file exists and the kernel can execute it, root filesystem related
- kernel command line parameters, including `nfsroot=', are ignored.
-
- A description of the process of mounting the root file system can be
- found in:
-
- Documentation/driver-api/early-userspace/early_userspace_support.rst
-
-
-
-
-3.) Boot Loader
- ----------
-
-To get the kernel into memory different approaches can be used.
-They depend on various facilities being available:
-
-
-3.1) Booting from a floppy using syslinux
-
- When building kernels, an easy way to create a boot floppy that uses
- syslinux is to use the zdisk or bzdisk make targets which use zimage
- and bzimage images respectively. Both targets accept the
- FDARGS parameter which can be used to set the kernel command line.
-
- e.g.
- make bzdisk FDARGS="root=/dev/nfs"
-
- Note that the user running this command will need to have
- access to the floppy drive device, /dev/fd0
-
- For more information on syslinux, including how to create bootdisks
- for prebuilt kernels, see http://syslinux.zytor.com/
-
- N.B: Previously it was possible to write a kernel directly to
- a floppy using dd, configure the boot device using rdev, and
- boot using the resulting floppy. Linux no longer supports this
- method of booting.
-
-3.2) Booting from a cdrom using isolinux
-
- When building kernels, an easy way to create a bootable cdrom that
- uses isolinux is to use the isoimage target which uses a bzimage
- image. Like zdisk and bzdisk, this target accepts the FDARGS
- parameter which can be used to set the kernel command line.
-
- e.g.
- make isoimage FDARGS="root=/dev/nfs"
-
- The resulting iso image will be arch/<ARCH>/boot/image.iso
- This can be written to a cdrom using a variety of tools including
- cdrecord.
-
- e.g.
- cdrecord dev=ATAPI:1,0,0 arch/x86/boot/image.iso
-
- For more information on isolinux, including how to create bootdisks
- for prebuilt kernels, see http://syslinux.zytor.com/
-
-3.2) Using LILO
- When using LILO all the necessary command line parameters may be
- specified using the 'append=' directive in the LILO configuration
- file.
-
- However, to use the 'root=' directive you also need to create
- a dummy root device, which may be removed after LILO is run.
-
- mknod /dev/boot255 c 0 255
-
- For information on configuring LILO, please refer to its documentation.
-
-3.3) Using GRUB
- When using GRUB, kernel parameter are simply appended after the kernel
- specification: kernel <kernel> <parameters>
-
-3.4) Using loadlin
- loadlin may be used to boot Linux from a DOS command prompt without
- requiring a local hard disk to mount as root. This has not been
- thoroughly tested by the authors of this document, but in general
- it should be possible configure the kernel command line similarly
- to the configuration of LILO.
-
- Please refer to the loadlin documentation for further information.
-
-3.5) Using a boot ROM
- This is probably the most elegant way of booting a diskless client.
- With a boot ROM the kernel is loaded using the TFTP protocol. The
- authors of this document are not aware of any no commercial boot
- ROMs that support booting Linux over the network. However, there
- are two free implementations of a boot ROM, netboot-nfs and
- etherboot, both of which are available on sunsite.unc.edu, and both
- of which contain everything you need to boot a diskless Linux client.
-
-3.6) Using pxelinux
- Pxelinux may be used to boot linux using the PXE boot loader
- which is present on many modern network cards.
-
- When using pxelinux, the kernel image is specified using
- "kernel <relative-path-below /tftpboot>". The nfsroot parameters
- are passed to the kernel by adding them to the "append" line.
- It is common to use serial console in conjunction with pxeliunx,
- see Documentation/admin-guide/serial-console.rst for more information.
-
- For more information on isolinux, including how to create bootdisks
- for prebuilt kernels, see http://syslinux.zytor.com/
-
-
-
-
-4.) Credits
- -------
-
- The nfsroot code in the kernel and the RARP support have been written
- by Gero Kuhlmann <gero@gkminix.han.de>.
-
- The rest of the IP layer autoconfiguration code has been written
- by Martin Mares <mj@atrey.karlin.mff.cuni.cz>.
-
- In order to write the initial version of nfsroot I would like to thank
- Jens-Uwe Mager <jum@anubis.han.de> for his help.
diff --git a/Documentation/filesystems/nfs/pnfs-block-server.txt b/Documentation/filesystems/nfs/pnfs-block-server.txt
deleted file mode 100644
index 2143673cf154..000000000000
--- a/Documentation/filesystems/nfs/pnfs-block-server.txt
+++ /dev/null
@@ -1,37 +0,0 @@
-pNFS block layout server user guide
-
-The Linux NFS server now supports the pNFS block layout extension. In this
-case the NFS server acts as Metadata Server (MDS) for pNFS, which in addition
-to handling all the metadata access to the NFS export also hands out layouts
-to the clients to directly access the underlying block devices that are
-shared with the client.
-
-To use pNFS block layouts with with the Linux NFS server the exported file
-system needs to support the pNFS block layouts (currently just XFS), and the
-file system must sit on shared storage (typically iSCSI) that is accessible
-to the clients in addition to the MDS. As of now the file system needs to
-sit directly on the exported volume, striping or concatenation of
-volumes on the MDS and clients is not supported yet.
-
-On the server, pNFS block volume support is automatically if the file system
-support it. On the client make sure the kernel has the CONFIG_PNFS_BLOCK
-option enabled, the blkmapd daemon from nfs-utils is running, and the
-file system is mounted using the NFSv4.1 protocol version (mount -o vers=4.1).
-
-If the nfsd server needs to fence a non-responding client it calls
-/sbin/nfsd-recall-failed with the first argument set to the IP address of
-the client, and the second argument set to the device node without the /dev
-prefix for the file system to be fenced. Below is an example file that shows
-how to translate the device into a serial number from SCSI EVPD 0x80:
-
-cat > /sbin/nfsd-recall-failed << EOF
-#!/bin/sh
-
-CLIENT="$1"
-DEV="/dev/$2"
-EVPD=`sg_inq --page=0x80 ${DEV} | \
- grep "Unit serial number:" | \
- awk -F ': ' '{print $2}'`
-
-echo "fencing client ${CLIENT} serial ${EVPD}" >> /var/log/pnfsd-fence.log
-EOF
diff --git a/Documentation/filesystems/nfs/pnfs-scsi-server.txt b/Documentation/filesystems/nfs/pnfs-scsi-server.txt
deleted file mode 100644
index 5bef7268bd9f..000000000000
--- a/Documentation/filesystems/nfs/pnfs-scsi-server.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-
-pNFS SCSI layout server user guide
-==================================
-
-This document describes support for pNFS SCSI layouts in the Linux NFS server.
-With pNFS SCSI layouts, the NFS server acts as Metadata Server (MDS) for pNFS,
-which in addition to handling all the metadata access to the NFS export,
-also hands out layouts to the clients so that they can directly access the
-underlying SCSI LUNs that are shared with the client.
-
-To use pNFS SCSI layouts with with the Linux NFS server, the exported file
-system needs to support the pNFS SCSI layouts (currently just XFS), and the
-file system must sit on a SCSI LUN that is accessible to the clients in
-addition to the MDS. As of now the file system needs to sit directly on the
-exported LUN, striping or concatenation of LUNs on the MDS and clients
-is not supported yet.
-
-On a server built with CONFIG_NFSD_SCSI, the pNFS SCSI volume support is
-automatically enabled if the file system is exported using the "pnfs"
-option and the underlying SCSI device support persistent reservations.
-On the client make sure the kernel has the CONFIG_PNFS_BLOCK option
-enabled, and the file system is mounted using the NFSv4.1 protocol
-version (mount -o vers=4.1).
diff --git a/Documentation/filesystems/nfs/pnfs.txt b/Documentation/filesystems/nfs/pnfs.rst
index 80dc0bdc302a..7c470ecdc3a9 100644
--- a/Documentation/filesystems/nfs/pnfs.txt
+++ b/Documentation/filesystems/nfs/pnfs.rst
@@ -1,15 +1,17 @@
-Reference counting in pnfs:
+==========================
+Reference counting in pnfs
==========================
There are several inter-related caches. We have layouts which can
reference multiple devices, each of which can reference multiple data servers.
Each data server can be referenced by multiple devices. Each device
-can be referenced by multiple layouts. To keep all of this straight,
+can be referenced by multiple layouts. To keep all of this straight,
we need to reference count.
struct pnfs_layout_hdr
-----------------------
+======================
+
The on-the-wire command LAYOUTGET corresponds to struct
pnfs_layout_segment, usually referred to by the variable name lseg.
Each nfs_inode may hold a pointer to a cache of these layout
@@ -25,7 +27,8 @@ the reference count, as the layout is kept around by the lseg that
keeps it in the list.
deviceid_cache
---------------
+==============
+
lsegs reference device ids, which are resolved per nfs_client and
layout driver type. The device ids are held in a RCU cache (struct
nfs4_deviceid_cache). The cache itself is referenced across each
@@ -38,24 +41,26 @@ justification, but seems reasonable given that we can have multiple
deviceids per filesystem, and multiple filesystems per nfs_client.
The hash code is copied from the nfsd code base. A discussion of
-hashing and variations of this algorithm can be found at:
-http://groups.google.com/group/comp.lang.c/browse_thread/thread/9522965e2b8d3809
+hashing and variations of this algorithm can be found `here
+<http://groups.google.com/group/comp.lang.c/browse_thread/thread/9522965e2b8d3809>`_.
data server cache
------------------
+=================
+
file driver devices refer to data servers, which are kept in a module
level cache. Its reference is held over the lifetime of the deviceid
pointing to it.
lseg
-----
+====
+
lseg maintains an extra reference corresponding to the NFS_LSEG_VALID
bit which holds it in the pnfs_layout_hdr's list. When the final lseg
is removed from the pnfs_layout_hdr's list, the NFS_LAYOUT_DESTROYED
bit is set, preventing any new lsegs from being added.
layout drivers
---------------
+==============
pNFS utilizes what are called layout drivers. The standard defines 4 basic
layout types: "files", "objects", "blocks", and "flexfiles". For each
@@ -68,6 +73,6 @@ Blocks-layout-driver code is in: fs/nfs/blocklayout/.. directory
Flexfiles-layout-driver code is in: fs/nfs/flexfilelayout/.. directory
blocks-layout setup
--------------------
+===================
TODO: Document the setup needs of the blocks layout driver
diff --git a/Documentation/filesystems/nfs/reexport.rst b/Documentation/filesystems/nfs/reexport.rst
new file mode 100644
index 000000000000..ff9ae4a46530
--- /dev/null
+++ b/Documentation/filesystems/nfs/reexport.rst
@@ -0,0 +1,113 @@
+Reexporting NFS filesystems
+===========================
+
+Overview
+--------
+
+It is possible to reexport an NFS filesystem over NFS. However, this
+feature comes with a number of limitations. Before trying it, we
+recommend some careful research to determine whether it will work for
+your purposes.
+
+A discussion of current known limitations follows.
+
+"fsid=" required, crossmnt broken
+---------------------------------
+
+We require the "fsid=" export option on any reexport of an NFS
+filesystem. You can use "uuidgen -r" to generate a unique argument.
+
+The "crossmnt" export does not propagate "fsid=", so it will not allow
+traversing into further nfs filesystems; if you wish to export nfs
+filesystems mounted under the exported filesystem, you'll need to export
+them explicitly, assigning each its own unique "fsid= option.
+
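+For example, a reexport server that has an NFS filesystem mounted at
+/srv/nfs_reexport might export it to a local subnet like this (the
+mount point, client range, and UUID here are purely illustrative)::
+
+    # uuidgen -r
+    6f2d1a4e-9b3c-4c1d-8a5e-2f7b9c0d1e23
+
+    # /etc/exports on the reexport server
+    /srv/nfs_reexport  192.168.1.0/24(rw,no_subtree_check,fsid=6f2d1a4e-9b3c-4c1d-8a5e-2f7b9c0d1e23)
+
+Any NFS filesystem mounted below /srv/nfs_reexport would need its own
+export line with its own unique "fsid=" value.
+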
+Reboot recovery
+---------------
+
+The NFS protocol's normal reboot recovery mechanisms don't work for the
+case when the reexport server reboots. Clients will lose any locks
+they held before the reboot, and further IO will result in errors.
+Closing and reopening files should clear the errors.
+
+Filehandle limits
+-----------------
+
+If the original server uses an X byte filehandle for a given object, the
+reexport server's filehandle for the reexported object will be X+22
+bytes, rounded up to the nearest multiple of four bytes.
+
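+As a quick sketch of that arithmetic (the shell helper below is purely
+illustrative; 28 bytes is the typical ext4 filehandle length from the
+table further down)::
+
+    # size of the reexported filehandle, given the original size in bytes
+    fh_after_reexport() { echo $(( ($1 + 22 + 3) / 4 * 4 )); }
+
+    fh_after_reexport 28    # -> 52
+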
+The result must fit into the RFC-mandated filehandle size limits:
+
++-------+-----------+
+| NFSv2 | 32 bytes |
++-------+-----------+
+| NFSv3 | 64 bytes |
++-------+-----------+
+| NFSv4 | 128 bytes |
++-------+-----------+
+
+So, for example, you will only be able to reexport a filesystem over
+NFSv2 if the original server gives you filehandles that fit in 10
+bytes--which is unlikely.
+
+In general there's no way to know the maximum filehandle size given out
+by an NFS server without asking the server vendor.
+
+But the following table gives a few examples. The first column is the
+typical length of the filehandle from a Linux server exporting the given
+filesystem, the second is the length after that nfs export is reexported
+by another Linux host:
+
++--------+-------------------+----------------+
+| | filehandle length | after reexport |
++========+===================+================+
+| ext4: | 28 bytes | 52 bytes |
++--------+-------------------+----------------+
+| xfs: | 32 bytes | 56 bytes |
++--------+-------------------+----------------+
+| btrfs: | 40 bytes | 64 bytes |
++--------+-------------------+----------------+
+
+All will therefore fit in an NFSv3 or NFSv4 filehandle after reexport,
+but none are reexportable over NFSv2.
+
+Linux server filehandles are a bit more complicated than this, though;
+for example:
+
+ - The (non-default) "subtree_check" export option generally
+ requires another 4 to 8 bytes in the filehandle.
+ - If you export a subdirectory of a filesystem (instead of
+ exporting the filesystem root), that also usually adds 4 to 8
+ bytes.
+ - If you export over NFSv2, knfsd usually uses a shorter
+ filesystem identifier that saves 8 bytes.
+ - The root directory of an export uses a filehandle that is
+ shorter.
+
+As you can see, the 128-byte NFSv4 filehandle is large enough that
+you're unlikely to have trouble using NFSv4 to reexport any filesystem
+exported from a Linux server. In general, if the original server is
+something that also supports NFSv3, you're *probably* OK. Re-exporting
+over NFSv3 may be dicier, and reexporting over NFSv2 will probably
+never work.
+
+For more details of Linux filehandle structure, the best reference is
+the source code and comments; see in particular:
+
+ - include/linux/exportfs.h:enum fid_type
+ - include/uapi/linux/nfsd/nfsfh.h:struct nfs_fhbase_new
+ - fs/nfsd/nfsfh.c:set_version_and_fsid_type
+ - fs/nfs/export.c:nfs_encode_fh
+
+Open DENY bits ignored
+----------------------
+
+Since NFSv4, NFS supports ALLOW and DENY bits taken from Windows, which
+allow you, for example, to open a file in a mode that forbids other
+read opens or write opens. The Linux client doesn't use them, and the
+server's support has always been incomplete: they are enforced only
+against other NFS users, not against processes accessing the exported
+filesystem locally. A reexport server will also not pass them along to
+the original server, so they will not be enforced between clients of
+different reexport servers.
diff --git a/Documentation/filesystems/nfs/rpc-cache.txt b/Documentation/filesystems/nfs/rpc-cache.rst
index c4dac829db0f..339efd75016a 100644
--- a/Documentation/filesystems/nfs/rpc-cache.txt
+++ b/Documentation/filesystems/nfs/rpc-cache.rst
@@ -1,9 +1,14 @@
- This document gives a brief introduction to the caching
+=========
+RPC Cache
+=========
+
+This document gives a brief introduction to the caching
mechanisms in the sunrpc layer that is used, in particular,
for NFS authentication.
-CACHES
+Caches
======
+
The caching replaces the old exports table and allows for
a wide variety of values to be cached.
@@ -12,6 +17,7 @@ quite possibly very different in content and use. There is a corpus
of common code for managing these caches.
Examples of caches that are likely to be needed are:
+
- mapping from IP address to client name
- mapping from client name and filesystem to export options
- mapping from UID to list of GIDs, to work around NFS's limitation
@@ -21,6 +27,7 @@ Examples of caches that are likely to be needed are:
- mapping from network identity to public key for crypto authentication.
The common code handles such things as:
+
- general cache lookup with correct locking
- supporting 'NEGATIVE' as well as positive entries
- allowing an EXPIRED time on cache items, and removing
@@ -35,60 +42,66 @@ The common code handles such things as:
Creating a Cache
----------------
-1/ A cache needs a datum to store. This is in the form of a
- structure definition that must contain a
- struct cache_head
+- A cache needs a datum to store. This is in the form of a
+ structure definition that must contain a struct cache_head
as an element, usually the first.
It will also contain a key and some content.
Each cache element is reference counted and contains
expiry and update times for use in cache management.
-2/ A cache needs a "cache_detail" structure that
+- A cache needs a "cache_detail" structure that
describes the cache. This stores the hash table, some
parameters for cache management, and some operations detailing how
to work with particular cache items.
- The operations requires are:
- struct cache_head *alloc(void)
- This simply allocates appropriate memory and returns
- a pointer to the cache_detail embedded within the
- structure
- void cache_put(struct kref *)
- This is called when the last reference to an item is
- dropped. The pointer passed is to the 'ref' field
- in the cache_head. cache_put should release any
- references create by 'cache_init' and, if CACHE_VALID
- is set, any references created by cache_update.
- It should then release the memory allocated by
- 'alloc'.
- int match(struct cache_head *orig, struct cache_head *new)
- test if the keys in the two structures match. Return
- 1 if they do, 0 if they don't.
- void init(struct cache_head *orig, struct cache_head *new)
- Set the 'key' fields in 'new' from 'orig'. This may
- include taking references to shared objects.
- void update(struct cache_head *orig, struct cache_head *new)
- Set the 'content' fileds in 'new' from 'orig'.
- int cache_show(struct seq_file *m, struct cache_detail *cd,
- struct cache_head *h)
- Optional. Used to provide a /proc file that lists the
- contents of a cache. This should show one item,
- usually on just one line.
- int cache_request(struct cache_detail *cd, struct cache_head *h,
- char **bpp, int *blen)
- Format a request to be send to user-space for an item
- to be instantiated. *bpp is a buffer of size *blen.
- bpp should be moved forward over the encoded message,
- and *blen should be reduced to show how much free
- space remains. Return 0 on success or <0 if not
- enough room or other problem.
- int cache_parse(struct cache_detail *cd, char *buf, int len)
- A message from user space has arrived to fill out a
- cache entry. It is in 'buf' of length 'len'.
- cache_parse should parse this, find the item in the
- cache with sunrpc_cache_lookup_rcu, and update the item
- with sunrpc_cache_update.
-
-
-3/ A cache needs to be registered using cache_register(). This
+
+ The operations are:
+
+ struct cache_head \*alloc(void)
+ This simply allocates appropriate memory and returns
+ a pointer to the cache_head embedded within the
+ structure.
+
+ void cache_put(struct kref \*)
+ This is called when the last reference to an item is
+ dropped. The pointer passed is to the 'ref' field
+ in the cache_head. cache_put should release any
+ references created by 'cache_init' and, if CACHE_VALID
+ is set, any references created by cache_update.
+ It should then release the memory allocated by
+ 'alloc'.
+
+ int match(struct cache_head \*orig, struct cache_head \*new)
+ test if the keys in the two structures match. Return
+ 1 if they do, 0 if they don't.
+
+ void init(struct cache_head \*orig, struct cache_head \*new)
+ Set the 'key' fields in 'new' from 'orig'. This may
+ include taking references to shared objects.
+
+ void update(struct cache_head \*orig, struct cache_head \*new)
+ Set the 'content' fields in 'new' from 'orig'.
+
+ int cache_show(struct seq_file \*m, struct cache_detail \*cd, struct cache_head \*h)
+ Optional. Used to provide a /proc file that lists the
+ contents of a cache. This should show one item,
+ usually on just one line.
+
+ int cache_request(struct cache_detail \*cd, struct cache_head \*h, char \*\*bpp, int \*blen)
+ Format a request to be sent to user-space for an item
+ to be instantiated. \*bpp is a buffer of size \*blen.
+ bpp should be moved forward over the encoded message,
+ and \*blen should be reduced to show how much free
+ space remains. Return 0 on success or <0 if not
+ enough room or other problem.
+
+ int cache_parse(struct cache_detail \*cd, char \*buf, int len)
+ A message from user space has arrived to fill out a
+ cache entry. It is in 'buf' of length 'len'.
+ cache_parse should parse this, find the item in the
+ cache with sunrpc_cache_lookup_rcu, and update the item
+ with sunrpc_cache_update.
+
+
+- A cache needs to be registered using cache_register(). This
includes it on a list of caches that will be regularly
cleaned to discard old data.
@@ -107,7 +120,7 @@ cache_check will return -ENOENT in the entry is negative or if an up
call is needed but not possible, -EAGAIN if an upcall is pending,
or 0 if the data is valid;
-cache_check can be passed a "struct cache_req *". This structure is
+cache_check can be passed a "struct cache_req\*". This structure is
typically embedded in the actual request and can be used to create a
deferred copy of the request (struct cache_deferred_req). This is
done when the found cache item is not uptodate, but there is reason to
@@ -139,9 +152,11 @@ The 'channel' works a bit like a datagram socket. Each 'write' is
passed as a whole to the cache for parsing and interpretation.
Each cache can treat the write requests differently, but it is
expected that a message written will contain:
+
- a key
- an expiry time
- a content.
+
with the intention that an item in the cache with the given key
should be created or updated to have the given content, and the
expiry time should be set on that item.
@@ -156,7 +171,8 @@ If there are no more requests to return, read will return EOF, but a
select or poll for read will block waiting for another request to be
added.
-Thus a user-space helper is likely to:
+Thus a user-space helper is likely to::
+
open the channel.
select for readable
read a request
@@ -175,12 +191,13 @@ Each cache should also define a "cache_request" method which
takes a cache item and encodes a request into the buffer
provided.
-Note: If a cache has no active readers on the channel, and has had not
-active readers for more than 60 seconds, further requests will not be
-added to the channel but instead all lookups that do not find a valid
-entry will fail. This is partly for backward compatibility: The
-previous nfs exports table was deemed to be authoritative and a
-failed lookup meant a definite 'no'.
+.. note::
+ If a cache has no active readers on the channel, and has had no
+ active readers for more than 60 seconds, further requests will not be
+ added to the channel but instead all lookups that do not find a valid
+ entry will fail. This is partly for backward compatibility: The
+ previous nfs exports table was deemed to be authoritative and a
+ failed lookup meant a definite 'no'.
request/response format
-----------------------
@@ -193,10 +210,11 @@ with precisely one newline character which should be at the end.
Fields within the record should be separated by spaces, normally one.
If spaces, newlines, or nul characters are needed in a field they
must be quoted. Two mechanisms are available:
-1/ If a field begins '\x' then it must contain an even number of
+
+- If a field begins '\x' then it must contain an even number of
hex digits, and pairs of these digits provide the bytes in the
field.
-2/ otherwise a \ in the field must be followed by 3 octal digits
+- otherwise a \ in the field must be followed by 3 octal digits
which give the code for a byte. Other characters are treated
as themselves. At the very least, space, newline, nul, and
'\' must be quoted in this way.
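+
+For example, a field whose value is the two words "my field" (an
+illustrative value only) could be written using either mechanism::
+
+    \x6d79206669656c64
+    my\040field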
diff --git a/Documentation/filesystems/nfs/rpc-server-gss.txt b/Documentation/filesystems/nfs/rpc-server-gss.rst
index 310bbbaf9080..5c1a1c58fc27 100644
--- a/Documentation/filesystems/nfs/rpc-server-gss.txt
+++ b/Documentation/filesystems/nfs/rpc-server-gss.rst
@@ -1,4 +1,4 @@
-
+=========================================
rpcsec_gss support for kernel RPC servers
=========================================
@@ -9,14 +9,16 @@ NFSv4.1 and higher don't require the client to act as a server for the
purposes of authentication.)
RPCGSS is specified in a few IETF documents:
- - RFC2203 v1: http://tools.ietf.org/rfc/rfc2203.txt
- - RFC5403 v2: http://tools.ietf.org/rfc/rfc5403.txt
-and there is a 3rd version being proposed:
- - http://tools.ietf.org/id/draft-williams-rpcsecgssv3.txt
- (At draft n. 02 at the time of writing)
+
+ - RFC2203 v1: https://tools.ietf.org/rfc/rfc2203.txt
+ - RFC5403 v2: https://tools.ietf.org/rfc/rfc5403.txt
+
+There is a third version that we don't currently implement:
+
+ - RFC7861 v3: https://tools.ietf.org/rfc/rfc7861.txt
Background
-----------
+==========
The RPCGSS Authentication method describes a way to perform GSSAPI
Authentication for NFS. Although GSSAPI is itself completely mechanism
@@ -27,8 +29,9 @@ The Linux kernel, at the moment, supports only the KRB5 mechanism, and
depends on GSSAPI extensions that are KRB5 specific.
GSSAPI is a complex library, and implementing it completely in kernel is
-unwarranted. However GSSAPI operations are fundementally separable in 2
+unwarranted. However, GSSAPI operations are fundamentally separable into two
parts:
+
- initial context establishment
- integrity/privacy protection (signing and encrypting of individual
packets)
@@ -41,7 +44,7 @@ kernel, but leave the initial context establishment to userspace. We
need upcalls to request userspace to perform context establishment.
NFS Server Legacy Upcall Mechanism
-----------------------------------
+==================================
The classic upcall mechanism uses a custom text-based protocol
to talk to a custom daemon called rpc.svcgssd that is provided by the
@@ -62,21 +65,20 @@ groups) due to limitation on the size of the buffer that can be send
back to the kernel (4KiB).
NFS Server New RPC Upcall Mechanism
------------------------------------
+===================================
The newer upcall mechanism uses RPC over a unix socket to a daemon
called gss-proxy, implemented by a userspace program called Gssproxy.
-The gss_proxy RPC protocol is currently documented here:
-
- https://fedorahosted.org/gss-proxy/wiki/ProtocolDocumentation
+The gss_proxy RPC protocol is currently documented `here
+<https://fedorahosted.org/gss-proxy/wiki/ProtocolDocumentation>`_.
This upcall mechanism uses the kernel rpc client and connects to the gssproxy
userspace program over a regular unix socket. The gssproxy protocol does not
suffer from the size limitations of the legacy protocol.
Negotiating Upcall Mechanisms
------------------------------
+=============================
To provide backward compatibility, the kernel defaults to using the
legacy mechanism. To switch to the new mechanism, gss-proxy must bind