author     Linus Torvalds <torvalds@linux-foundation.org>  2008-07-21 00:21:46 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2008-07-21 00:21:46 -0400
commit     14b395e35d1afdd8019d11b92e28041fad591b71 (patch)
tree       cff7ba9bed7a38300b19a5bacc632979d64fd9c8 /Documentation
parent     734b397cd14f3340394a8dd3266bec97d01f034b (diff)
parent     5108b27651727b5aba0826e8fd7be71b42428701 (diff)
Merge branch 'for-2.6.27' of git://linux-nfs.org/~bfields/linux
* 'for-2.6.27' of git://linux-nfs.org/~bfields/linux: (51 commits)
  nfsd: nfs4xdr.c do-while is not a compound statement
  nfsd: Use C99 initializers in fs/nfsd/nfs4xdr.c
  lockd: Pass "struct sockaddr *" to new failover-by-IP function
  lockd: get host reference in nlmsvc_create_block() instead of callers
  lockd: minor svclock.c style fixes
  lockd: eliminate duplicate nlmsvc_lookup_host call from nlmsvc_lock
  lockd: eliminate duplicate nlmsvc_lookup_host call from nlmsvc_testlock
  lockd: nlm_release_host() checks for NULL, caller needn't
  file lock: reorder struct file_lock to save space on 64 bit builds
  nfsd: take file and mnt write in nfs4_upgrade_open
  nfsd: document open share bit tracking
  nfsd: tabulate nfs4 xdr encoding functions
  nfsd: dprint operation names
  svcrdma: Change WR context get/put to use the kmem cache
  svcrdma: Create a kmem cache for the WR contexts
  svcrdma: Add flush_scheduled_work to module exit function
  svcrdma: Limit ORD based on client's advertised IRD
  svcrdma: Remove unused wait q from svcrdma_xprt structure
  svcrdma: Remove unneeded spin locks from __svc_rdma_free
  svcrdma: Add dma map count and WARN_ON
  ...
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/filesystems/nfs-rdma.txt | 103
1 file changed, 59 insertions(+), 44 deletions(-)
diff --git a/Documentation/filesystems/nfs-rdma.txt b/Documentation/filesystems/nfs-rdma.txt
index d0ec45ae4e7d..44bd766f2e5d 100644
--- a/Documentation/filesystems/nfs-rdma.txt
+++ b/Documentation/filesystems/nfs-rdma.txt
@@ -5,7 +5,7 @@
 ################################################################################

 Author: NetApp and Open Grid Computing
-Date: April 15, 2008
+Date: May 29, 2008

 Table of Contents
 ~~~~~~~~~~~~~~~~~
@@ -60,16 +60,18 @@ Installation
 The procedures described in this document have been tested with
 distributions from Red Hat's Fedora Project (http://fedora.redhat.com/).

-- Install nfs-utils-1.1.1 or greater on the client
+- Install nfs-utils-1.1.2 or greater on the client

-An NFS/RDMA mount point can only be obtained by using the mount.nfs
-command in nfs-utils-1.1.1 or greater. To see which version of mount.nfs
-you are using, type:
+An NFS/RDMA mount point can be obtained by using the mount.nfs command in
+nfs-utils-1.1.2 or greater (nfs-utils-1.1.1 was the first nfs-utils
+version with support for NFS/RDMA mounts, but for various reasons we
+recommend using nfs-utils-1.1.2 or greater). To see which version of
+mount.nfs you are using, type:

-> /sbin/mount.nfs -V
+$ /sbin/mount.nfs -V

-If the version is less than 1.1.1 or the command does not exist,
-then you will need to install the latest version of nfs-utils.
+If the version is less than 1.1.2 or the command does not exist,
+you should install the latest version of nfs-utils.

 Download the latest package from:

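For reference, a successful version check prints a single line naming the
installed nfs-utils release. The exact wording varies between releases, so
the output below is illustrative only, but it should resemble:

$ /sbin/mount.nfs -V
mount.nfs (linux nfs-utils 1.1.2)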
@@ -77,22 +79,33 @@ Installation

 Uncompress the package and follow the installation instructions.

-If you will not be using GSS and NFSv4, the installation process
-can be simplified by disabling these features when running configure:
+If you will not need the idmapper and gssd executables (you do not need
+these to create an NFS/RDMA enabled mount command), the installation
+process can be simplified by disabling these features when running
+configure:

-> ./configure --disable-gss --disable-nfsv4
+$ ./configure --disable-gss --disable-nfsv4

-For more information on this see the package's README and INSTALL files.
+To build nfs-utils you will need the tcp_wrappers package installed. For
+more information on this see the package's README and INSTALL files.

 After building the nfs-utils package, there will be a mount.nfs binary in
 the utils/mount directory. This binary can be used to initiate NFS v2, v3,
-or v4 mounts. To initiate a v4 mount, the binary must be called mount.nfs4.
-The standard technique is to create a symlink called mount.nfs4 to mount.nfs.
+or v4 mounts. To initiate a v4 mount, the binary must be called
+mount.nfs4. The standard technique is to create a symlink called
+mount.nfs4 to mount.nfs.

-NOTE: mount.nfs and therefore nfs-utils-1.1.1 or greater is only needed
+This mount.nfs binary should be installed at /sbin/mount.nfs as follows:
+
+$ sudo cp utils/mount/mount.nfs /sbin/mount.nfs
+
+In this location, mount.nfs will be invoked automatically for NFS mounts
+by the system mount command.
+
+NOTE: mount.nfs and therefore nfs-utils-1.1.2 or greater is only needed
 on the NFS client machine. You do not need this specific version of
 nfs-utils on the server. Furthermore, only the mount.nfs command from
-nfs-utils-1.1.1 is needed on the client.
+nfs-utils-1.1.2 is needed on the client.

 - Install a Linux kernel with NFS/RDMA

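Assuming mount.nfs was installed at /sbin/mount.nfs as shown above, one
common way to create the mount.nfs4 symlink the text describes is:

$ sudo ln -s /sbin/mount.nfs /sbin/mount.nfs4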
@@ -156,8 +169,8 @@ Check RDMA and NFS Setup
 this time. For example, if you are using a Mellanox Tavor/Sinai/Arbel
 card:

-> modprobe ib_mthca
-> modprobe ib_ipoib
+$ modprobe ib_mthca
+$ modprobe ib_ipoib

 If you are using InfiniBand, make sure there is a Subnet Manager (SM)
 running on the network. If your IB switch has an embedded SM, you can
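One quick way to confirm that the driver modules actually loaded is to
list them (module names here match the Mellanox example above; substitute
your own driver's module names as appropriate):

$ lsmod | grep -E 'ib_mthca|ib_ipoib'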
@@ -166,7 +179,7 @@ Check RDMA and NFS Setup

 If an SM is running on your network, you should see the following:

-> cat /sys/class/infiniband/driverX/ports/1/state
+$ cat /sys/class/infiniband/driverX/ports/1/state
 4: ACTIVE

 where driverX is mthca0, ipath5, ehca3, etc.
@@ -174,10 +187,10 @@ Check RDMA and NFS Setup
 To further test the InfiniBand software stack, use IPoIB (this
 assumes you have two IB hosts named host1 and host2):

-host1> ifconfig ib0 a.b.c.x
-host2> ifconfig ib0 a.b.c.y
-host1> ping a.b.c.y
-host2> ping a.b.c.x
+host1$ ifconfig ib0 a.b.c.x
+host2$ ifconfig ib0 a.b.c.y
+host1$ ping a.b.c.y
+host2$ ping a.b.c.x

 For other device types, follow the appropriate procedures.

@@ -202,11 +215,11 @@ NFS/RDMA Setup
 /vol0 192.168.0.47(fsid=0,rw,async,insecure,no_root_squash)
 /vol0 192.168.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)

-The IP address(es) is(are) the client's IPoIB address for an InfiniBand HCA or the
-cleint's iWARP address(es) for an RNIC.
+The IP address(es) is(are) the client's IPoIB address for an InfiniBand
+HCA or the client's iWARP address(es) for an RNIC.

-NOTE: The "insecure" option must be used because the NFS/RDMA client does not
-use a reserved port.
+NOTE: The "insecure" option must be used because the NFS/RDMA client does
+not use a reserved port.

 Each time a machine boots:

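If you edit /etc/exports while the NFS server is already running, the
export list can be refreshed without restarting the server, using the
exportfs tool shipped with nfs-utils:

$ sudo exportfs -r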
@@ -214,43 +227,45 @@ NFS/RDMA Setup

 For InfiniBand using a Mellanox adapter:

-> modprobe ib_mthca
-> modprobe ib_ipoib
-> ifconfig ib0 a.b.c.d
+$ modprobe ib_mthca
+$ modprobe ib_ipoib
+$ ifconfig ib0 a.b.c.d

 NOTE: use unique addresses for the client and server

 - Start the NFS server

-If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in kernel config),
-load the RDMA transport module:
+If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
+kernel config), load the RDMA transport module:

-> modprobe svcrdma
+$ modprobe svcrdma

-Regardless of how the server was built (module or built-in), start the server:
+Regardless of how the server was built (module or built-in), start the
+server:

-> /etc/init.d/nfs start
+$ /etc/init.d/nfs start

 or

-> service nfs start
+$ service nfs start

 Instruct the server to listen on the RDMA transport:

-> echo rdma 2050 > /proc/fs/nfsd/portlist
+$ echo rdma 2050 > /proc/fs/nfsd/portlist

 - On the client system

-If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in kernel config),
-load the RDMA client module:
+If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
+kernel config), load the RDMA client module:

-> modprobe xprtrdma.ko
+$ modprobe xprtrdma

-Regardless of how the client was built (module or built-in), issue the mount.nfs command:
+Regardless of how the client was built (module or built-in), use this
+command to mount the NFS/RDMA server:

-> /path/to/your/mount.nfs <IPoIB-server-name-or-address>:/<export> /mnt -i -o rdma,port=2050
+$ mount -o rdma,port=2050 <IPoIB-server-name-or-address>:/<export> /mnt

-To verify that the mount is using RDMA, run "cat /proc/mounts" and check the
-"proto" field for the given mount.
+To verify that the mount is using RDMA, run "cat /proc/mounts" and check
+the "proto" field for the given mount.

 Congratulations! You're using NFS/RDMA!
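As an illustration, for a mount on /mnt the relevant /proc/mounts entry
should show rdma in its "proto" option. The fields below are abbreviated
and the exact set of options varies by kernel version:

$ grep /mnt /proc/mounts
<server>:/<export> /mnt nfs rw,...,proto=rdma,port=2050,... 0 0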