                               SCSI FC Transport
                 =============================================

Date:  4/12/2007
Kernel Revisions for features:
  rports  : <<TBS>>
  vports  : 2.6.22 (? TBD)


Introduction
============
This file documents the features and components of the SCSI FC Transport.
It also documents the API between the transport and FC LLDDs.
The FC transport can be found at:
  drivers/scsi/scsi_transport_fc.c
  include/scsi/scsi_transport_fc.h
  include/scsi/scsi_netlink_fc.h

This file is found at Documentation/scsi/scsi_fc_transport.txt


FC Remote Ports (rports)
========================================================================
<< To Be Supplied >>


FC Virtual Ports (vports)
========================================================================

Overview:
-------------------------------

  New FC standards have defined mechanisms which allow a single physical
  port to appear as multiple communication ports. Using the N_Port ID
  Virtualization (NPIV) mechanism, a point-to-point connection to a Fabric
  can be assigned more than one N_Port_ID.  Each N_Port_ID appears as a
  separate port to other endpoints on the fabric, even though it shares one
  physical link to the switch for communication. Each N_Port_ID can have a
  unique view of the fabric based on fabric zoning and array lun-masking
  (just like a normal non-NPIV adapter).  Using the Virtual Fabric (VF)
  mechanism, adding a fabric header to each frame allows the port to
  interact with the Fabric Port and join multiple fabrics. The port will
  obtain an N_Port_ID on each fabric it joins. Each fabric will have its
  own unique view of endpoints and configuration parameters.  NPIV may be
  used together with VF so that the port can obtain multiple N_Port_IDs
  on each virtual fabric.

  The FC transport now recognizes a new object - a vport.  A vport is
  an entity that has a world-wide unique World Wide Port Name (wwpn) and
  World Wide Node Name (wwnn). The transport also allows the FC4 roles to
  be specified for the vport, with FCP_Initiator being the primary role
  expected. Once instantiated by one of the above methods, it will have a
  distinct N_Port_ID and view of fabric endpoints and storage entities.
  The fc_host associated with the physical adapter will export the ability
  to create vports. The transport will create the vport object within the
  Linux device tree, and instruct the fc_host's driver to instantiate the
  virtual port. Typically, the driver will create a new scsi_host instance
  on the vport, resulting in a unique <H,C,T,L> namespace for the vport.
  Thus, whether an FC port is based on a physical port or on a virtual port,
  each will appear as a unique scsi_host with its own target and lun space.

  Note: At this time, the transport is written to create only NPIV-based
    vports. However, consideration was given to VF-based vports and it
    should be a minor change to add support if needed.  The remaining
    discussion will concentrate on NPIV.

  Note: World Wide Name assignment (and uniqueness guarantees) is left
    up to an administrative entity controlling the vport. For example,
    if vports are to be associated with virtual machines, a Xen management
    utility would be responsible for creating wwpn/wwnns for the vport,
    using its own naming authority and OUI. (Note: it already does this
    for virtual MAC addresses).


Device Trees and Vport Objects:
-------------------------------

  Today, the device tree typically contains the scsi_host object,
  with rports and scsi target objects underneath it. Currently the FC
  transport creates the vport object and places it under the scsi_host
  object corresponding to the physical adapter.  The LLDD will allocate
  a new scsi_host for the vport and link its object under the vport.
  The remainder of the tree under the vport's scsi_host is the same
  as the non-NPIV case. The transport is currently written to easily
  allow the parent of the vport to be something other than the scsi_host.
  This could be used in the future to link the object onto a vm-specific
  device tree. If the vport's parent is not the physical port's scsi_host,
  a symbolic link to the vport object will be placed in the physical
  port's scsi_host.

  Here's what to expect in the device tree :
   The typical Physical Port's Scsi_Host:
     /sys/devices/.../host17/
   and it has the typical descendant tree:
     /sys/devices/.../host17/rport-17:0-0/target17:0:0/17:0:0:0:
   and then the vport is created on the Physical Port:
     /sys/devices/.../host17/vport-17:0-0
   and the vport's Scsi_Host is then created:
     /sys/devices/.../host17/vport-17:0-0/host18
   and then the rest of the tree progresses, such as:
     /sys/devices/.../host17/vport-17:0-0/host18/rport-18:0-0/target18:0:0/18:0:0:0:

  Here's what to expect in the sysfs tree :
   scsi_hosts:
     /sys/class/scsi_host/host17                   physical port's scsi_host
     /sys/class/scsi_host/host18                   vport's scsi_host
   fc_hosts:
     /sys/class/fc_host/host17                     physical port's fc_host
     /sys/class/fc_host/host18                     vport's fc_host
   fc_vports:
     /sys/class/fc_vports/vport-17:0-0             the vport's fc_vport
   fc_rports:
     /sys/class/fc_remote_ports/rport-17:0-0       rport on the physical port
     /sys/class/fc_remote_ports/rport-18:0-0       rport on the vport


Vport Attributes:
-------------------------------

  The new fc_vport class object has the following attributes

     node_name:                                              Read_Only
       The WWNN of the vport

     port_name:                                              Read_Only
       The WWPN of the vport

     roles:                                                  Read_Only
       Indicates the FC4 roles enabled on the vport.

     symbolic_name:                                          Read_Write
       A string, appended to the driver's symbolic port name string, which
       is registered with the switch to identify the vport. For example,
       a hypervisor could set this string to "Xen Domain 2 VM 5 Vport 2",
       and this set of identifiers can be seen on switch management screens
       to identify the port.

     vport_delete:                                           Write_Only
       When written with a "1", will tear down the vport.

     vport_disable:                                          Write_Only
       When written with a "1", will transition the vport to a disabled
       state.  The vport will still be instantiated with the Linux kernel,
       but it will not be active on the FC link.
       When written with a "0", will enable the vport.
       (See the example following this attribute list.)

     vport_last_state:                                       Read_Only
       Indicates the previous state of the vport.  See the section below on
       "Vport States".

     vport_state:                                            Read_Only
       Indicates the state of the vport.  See the section below on
       "Vport States".

     vport_type:                                             Read_Only
       Reflects the FC mechanism used to create the virtual port.
       Only NPIV is supported currently.

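  The attributes above are ordinary sysfs files, so any tooling that can
  write to sysfs can drive them.  As an illustration only (the vport name
  and path are hypothetical examples taken from the trees above), a minimal
  userspace C sketch that disables and then re-enables a vport might look
  like:

      /* Hypothetical example: toggle a vport's vport_disable attribute.
       * The sysfs path is illustrative; substitute the real vport name.
       */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      static int write_sysfs(const char *path, const char *val)
      {
          int fd = open(path, O_WRONLY);

          if (fd < 0) {
              perror(path);
              return -1;
          }
          /* the attribute takes a simple string: "1" disables, "0" enables */
          if (write(fd, val, strlen(val)) < 0) {
              perror("write");
              close(fd);
              return -1;
          }
          close(fd);
          return 0;
      }

      int main(void)
      {
          const char *attr = "/sys/class/fc_vports/vport-17:0-0/vport_disable";

          write_sysfs(attr, "1");   /* take the vport off the FC link */
          write_sysfs(attr, "0");   /* re-instantiate it on the link */
          return 0;
      }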

  For the fc_host class object, the following attributes are added for vports:

     max_npiv_vports:                                        Read_Only
       Indicates the maximum number of NPIV-based vports that the
       driver/adapter can support on the fc_host.

     npiv_vports_inuse:                                      Read_Only
       Indicates how many NPIV-based vports have been instantiated on the
       fc_host.

     vport_create:                                           Write_Only
       A "simple" create interface to instantiate a vport on an fc_host.
       A "<WWPN>:<WWNN>" string is written to the attribute. The transport
       then instantiates the vport object and calls the LLDD to create the
       vport with the role of FCP_Initiator.  Each WWN is specified as 16
       hex characters and may *not* contain any prefixes (e.g. 0x, x, etc).
       (See the example following this list.)

     vport_delete:                                           Write_Only
       A "simple" delete interface to tear down a vport.  A "<WWPN>:<WWNN>"
       string is written to the attribute. The transport will locate the
       vport on the fc_host with the same WWNs and tear it down.  Each WWN
       is specified as 16 hex characters and may *not* contain any prefixes
       (e.g. 0x, x, etc).

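  For instance, creating a vport through the fc_host's vport_create
  attribute is just a write of the "<WWPN>:<WWNN>" string to the
  corresponding sysfs file.  A minimal userspace C sketch (the host number
  and WWN values below are made-up placeholders):

      /* Hypothetical example: create a vport on fc_host "host17" via sysfs.
       * The WWPN/WWNN values are placeholders - 16 hex digits each, no "0x".
       */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void)
      {
          const char *attr = "/sys/class/fc_host/host17/vport_create";
          const char *wwns = "2001000c29a1b2c3:2000000c29a1b2c3"; /* <WWPN>:<WWNN> */
          int fd = open(attr, O_WRONLY);

          if (fd < 0) {
              perror(attr);
              return 1;
          }
          if (write(fd, wwns, strlen(wwns)) < 0)
              perror("write");   /* fails if, e.g., max_npiv_vports is exhausted */
          close(fd);
          return 0;
      }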

Vport States:
-------------------------------

  Vport instantiation consists of two parts:
    - Creation with the kernel and LLDD. This means all transport and
      driver data structures are built up, and device objects created.
      This is equivalent to a driver "attach" on an adapter, which is
      independent of the adapter's link state.
    - Instantiation of the vport on the FC link via ELS traffic, etc.
      This is equivalent to a "link up" and successful link initialization.
  Further information can be found in the interfaces section below for
  Vport Creation.

  Once a vport has been instantiated with the kernel/LLDD, a vport state
  can be reported via the sysfs attribute. The following states exist:

    FC_VPORT_UNKNOWN            - Unknown
      A temporary state, typically set only while the vport is being
      instantiated with the kernel and LLDD.

    FC_VPORT_ACTIVE             - Active
      The vport has been successfully created on the FC link.
      It is fully functional.

    FC_VPORT_DISABLED           - Disabled
      The vport is instantiated, but "disabled".  The vport is not
      instantiated on the FC link.  This is equivalent to a physical port
      with the link "down".

    FC_VPORT_LINKDOWN           - Linkdown
      The vport is not operational as the physical link is not operational.

    FC_VPORT_INITIALIZING       - Initializing
      The vport is in the process of instantiating on the FC link.
      The LLDD will set this state just prior to starting the ELS traffic
      to create the vport. This state will persist until the vport is
      successfully created (state becomes FC_VPORT_ACTIVE) or it fails
      (state is one of the values below).  As this state is transitory,
      it will not be preserved in the "vport_last_state".

    FC_VPORT_NO_FABRIC_SUPP     - No Fabric Support
      The vport is not operational.  One of the following conditions was
      encountered:
      - The FC topology is not Point-to-Point
      - The FC port is not connected to an F_Port
      - The F_Port has indicated that NPIV is not supported.

    FC_VPORT_NO_FABRIC_RSCS     - No Fabric Resources
      The vport is not operational.  The Fabric failed FDISC with a status
      indicating that it does not have sufficient resources to complete
      the operation.

    FC_VPORT_FABRIC_LOGOUT      - Fabric Logout
      The vport is not operational.  The Fabric has LOGO'd the N_Port_ID
      associated with the vport.

    FC_VPORT_FABRIC_REJ_WWN     - Fabric Rejected WWN
      The vport is not operational.  The Fabric failed FDISC with a status
      indicating that the WWNs are not valid.

    FC_VPORT_FAILED             - VPort Failed
      The vport is not operational.  This is a catchall for all other
      error conditions.

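  For orientation, these states correspond to the fc_vport_state
  enumeration seen by LLDDs.  The sketch below is for reference only; see
  include/scsi/scsi_transport_fc.h for the authoritative definition:

      /* Vport states as seen by LLDDs (reference sketch; the header is
       * authoritative).
       */
      enum fc_vport_state {
          FC_VPORT_UNKNOWN,           /* being instantiated w/ kernel/LLDD */
          FC_VPORT_ACTIVE,            /* created on the FC link, functional */
          FC_VPORT_DISABLED,          /* instantiated, but not on the link */
          FC_VPORT_LINKDOWN,          /* physical link is down */
          FC_VPORT_INITIALIZING,      /* FDISC in progress */
          FC_VPORT_NO_FABRIC_SUPP,    /* loop, no fabric, or no NPIV support */
          FC_VPORT_NO_FABRIC_RSCS,    /* fabric out of resources */
          FC_VPORT_FABRIC_LOGOUT,     /* fabric LOGO'd the N_Port_ID */
          FC_VPORT_FABRIC_REJ_WWN,    /* fabric rejected the WWNs */
          FC_VPORT_FAILED,            /* catchall for other errors */
      };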

  The following state table indicates the different state transitions:

    State              Event                            New State
    --------------------------------------------------------------------
     n/a                Initialization                   Unknown
     Unknown:           Link Down                        Linkdown
                        Link Up & Loop                   No Fabric Support
                        Link Up & no Fabric              No Fabric Support
                        Link Up & FLOGI response         No Fabric Support
                          indicates no NPIV support
                        Link Up & FDISC being sent       Initializing
                        Disable request                  Disabled
     Linkdown:          Link Up                          Unknown
     Initializing:      FDISC ACC                        Active
                        FDISC LS_RJT w/ no resources     No Fabric Resources
                        FDISC LS_RJT w/ invalid          Fabric Rejected WWN
                          pname or invalid nport_id
                        FDISC LS_RJT failed for          Vport Failed
                          other reasons
                        Link Down                        Linkdown
                        Disable request                  Disabled
     Disabled:          Enable request                   Unknown
     Active:            LOGO received from fabric        Fabric Logout
                        Link Down                        Linkdown
                        Disable request                  Disabled
     Fabric Logout:     Link still up                    Unknown

     The following 4 error states all have the same transitions:
     No Fabric Support:
     No Fabric Resources:
     Fabric Rejected WWN:
     Vport Failed:
                        Disable request                  Disabled
                        Link goes down                   Linkdown


Transport <-> LLDD Interfaces :
-------------------------------

Vport support by LLDD:

  The LLDD indicates support for vports by supplying a vport_create()
  function in the transport template.  The presence of this function will
  cause the creation of the new attributes on the fc_host.  As part of
  the physical port completing its initialization relative to the
  transport, it should set the max_npiv_vports attribute to indicate the
  maximum number of vports the driver and/or adapter supports.
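
  As a rough sketch of what this looks like in an LLDD (the my_lldd_*
  names are hypothetical, not from any particular driver), the physical
  port's template carries the vport entry points, while a template used
  for vport-based fc_hosts would simply omit them:

      /* Illustrative fragment - my_lldd_* handlers are hypothetical. */
      #include <scsi/scsi_transport_fc.h>

      static int my_lldd_vport_create(struct fc_vport *vport, bool disable);
      static int my_lldd_vport_disable(struct fc_vport *vport, bool disable);
      static int my_lldd_vport_delete(struct fc_vport *vport);

      static struct fc_function_template my_physical_port_fc_functions = {
          /* ... the usual fc_host/rport attribute support ... */
          .vport_create   = my_lldd_vport_create,
          .vport_disable  = my_lldd_vport_disable,
          .vport_delete   = my_lldd_vport_delete,
      };

  The maximum vport count is typically published during physical port
  initialization via the fc_host_max_npiv_vports() accessor on the
  physical port's Scsi_Host.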


Vport Creation:

  The LLDD vport_create() syntax is:

      int vport_create(struct fc_vport *vport, bool disable)

    where:
      vport:    Is the newly allocated vport object
      disable:  If "true", the vport is to be created in a disabled state.
                If "false", the vport is to be enabled upon creation.

  When a request is made to create a new vport (via sgio/netlink, or the
  vport_create fc_host attribute), the transport will validate that the LLDD
  can support another vport (e.g. max_npiv_vports > npiv_vports_inuse).
  If not, the create request will be failed.  If space remains, the transport
  will increment the vport count, create the vport object, and then call the
  LLDD's vport_create() function with the newly allocated vport object.

  As mentioned above, vport creation is divided into two parts:
    - Creation with the kernel and LLDD. This means all transport and
      driver data structures are built up, and device objects created.
      This is equivalent to a driver "attach" on an adapter, which is
      independent of the adapter's link state.
    - Instantiation of the vport on the FC link via ELS traffic, etc.
      This is equivalent to a "link up" and successful link initialization.

  The LLDD's vport_create() function will not synchronously wait for both
  parts to be fully completed before returning.  It must validate that the
  infrastructure exists to support NPIV, and complete the first part of
  vport creation (data structure build up) before returning.  We do not
  hinge vport_create() on the link-side operation mainly because:
    - The link may be down. It is not a failure if it is. It simply
      means the vport is in an inoperable state until the link comes up.
      This is consistent with the link bouncing post vport creation.
    - The vport may be created in a disabled state.
    - This is consistent with a model where the vport equates to an
      FC adapter. The vport_create is synonymous with driver attachment
      to the adapter, which is independent of link state.

  Note: special error codes have been defined to delineate infrastructure
  failure cases for quicker resolution.

  The expected behavior for the LLDD's vport_create() function is
  (a sketch follows this list):
    - Validate Infrastructure:
        - If the driver or adapter cannot support another vport, whether
          due to improper firmware, an overstated max_npiv_vports value,
          or a lack of some other resource - return VPCERR_UNSUPPORTED.
        - If the driver validates the WWNs against those already active on
          the adapter and detects an overlap - return VPCERR_BAD_WWN.
        - If the driver detects the topology is loop, non-fabric, or the
          FLOGI did not support NPIV - return VPCERR_NO_FABRIC_SUPP.
    - Allocate data structures.  If errors are encountered, such as out
      of memory conditions, return the respective negative Exxx error code.
    - If the role is FCP Initiator, the LLDD is to:
        - Call scsi_host_alloc() to allocate a scsi_host for the vport.
        - Call scsi_add_host(new_shost, &vport->dev) to start the scsi_host
          and bind it as a child of the vport device.
        - Initialize the fc_host attribute values.
    - Kick off further vport state transitions based on the disable flag and
      link state - and return success (zero).
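
  Putting the above together, a skeleton might be shaped as follows.  This
  is illustrative only: the my_/MY_ names, the adapter structure, and the
  helper functions are hypothetical, while the transport calls and error
  codes come from include/scsi/scsi_transport_fc.h.

      #include <scsi/scsi_host.h>
      #include <scsi/scsi_transport_fc.h>

      static int my_lldd_vport_create(struct fc_vport *vport, bool disable)
      {
          struct Scsi_Host *phys_shost = vport_to_shost(vport); /* physical port */
          struct my_adapter *adapter = shost_priv(phys_shost);  /* hypothetical */
          struct Scsi_Host *new_shost;

          /* 1) Validate infrastructure */
          if (!my_adapter_supports_npiv(adapter))
              return VPCERR_UNSUPPORTED;
          if (my_wwn_in_use(adapter, vport->port_name, vport->node_name))
              return VPCERR_BAD_WWN;
          if (!my_fabric_supports_npiv(adapter))
              return VPCERR_NO_FABRIC_SUPP;

          /* 2) Build up driver data structures and the vport's scsi_host */
          new_shost = scsi_host_alloc(&my_vport_sht, sizeof(struct my_vport_priv));
          if (!new_shost)
              return -ENOMEM;
          if (scsi_add_host(new_shost, &vport->dev)) {  /* child of the vport */
              scsi_host_put(new_shost);
              return -EIO;
          }

          /* 3) Initialize the vport's fc_host attribute values */
          fc_host_node_name(new_shost) = vport->node_name;
          fc_host_port_name(new_shost) = vport->port_name;
          fc_host_port_type(new_shost) = FC_PORTTYPE_NPIV;

          /* 4) Kick off link-side instantiation (FDISC) unless disabled */
          if (disable)
              fc_vport_set_state(vport, FC_VPORT_DISABLED);
          else
              my_start_fdisc(adapter, vport);  /* sets INITIALIZING, etc. */

          return 0;
      }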

  LLDD Implementers Notes:
    - It is suggested that there be a different fc_function_template for
      the physical port and the virtual port.  The physical port's template
      would have the vport_create, vport_delete, and vport_disable functions,
      while the vports' template would not.
    - It is suggested that there be different scsi_host_templates
      for the physical port and virtual port. Likely, there are driver
      attributes, embedded into the scsi_host_template, that are applicable
      for the physical port only (link speed, topology setting, etc). This
      ensures that the attributes are applicable to the respective scsi_host.


Vport Disable/Enable:

  The LLDD vport_disable() syntax is:

      int vport_disable(struct fc_vport *vport, bool disable)

    where:
      vport:    Is the vport to be enabled or disabled
      disable:  If "true", the vport is to be disabled.
                If "false", the vport is to be enabled.

  When a request is made to change the disabled state on a vport, the
  transport will validate the request against the existing vport state.
  If the request is to disable and the vport is already disabled, the
  request will fail.  Similarly, if the request is to enable, and the
  vport is not in a disabled state, the request will fail.  If the request
  is valid for the vport state, the transport will call the LLDD to
  change the vport's state.

  Within the LLDD, if a vport is disabled, it remains instantiated with
  the kernel and LLDD, but it is not active or visible on the FC link in
  any way (see Vport Creation and the 2 part instantiation discussion).
  The vport will remain in this state until it is deleted or re-enabled.
  When enabling a vport, the LLDD reinstantiates the vport on the FC
  link - essentially restarting the LLDD state machine (see Vport States
  above).
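
  As a rough sketch (again with hypothetical my_* helpers standing in for
  driver-internal code), the handler mostly dispatches on the flag:

      /* Illustrative skeleton of an LLDD vport_disable() handler. */
      static int my_lldd_vport_disable(struct fc_vport *vport, bool disable)
      {
          if (disable) {
              /* Log the N_Port_ID out of the fabric and quiesce I/O,
               * but keep all kernel/LLDD data structures intact.
               */
              my_logout_vport(vport);
              fc_vport_set_state(vport, FC_VPORT_DISABLED);
          } else {
              /* Restart link-side instantiation, as if newly created */
              fc_vport_set_state(vport, FC_VPORT_UNKNOWN);
              my_start_fdisc_for_vport(vport);
          }
          return 0;
      }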


Vport Deletion:

  The LLDD vport_delete() syntax is:

      int vport_delete(struct fc_vport *vport)

    where:
      vport:    Is the vport to be deleted

  When a request is made to delete a vport (via sgio/netlink, or via the
  fc_host or fc_vport vport_delete attributes), the transport will call
  the LLDD to terminate the vport on the FC link, and tear down all other
  data structures and references.  If the LLDD completes successfully,
  the transport will tear down the vport objects and complete the vport
  removal.  If the LLDD delete request fails, the vport object will remain,
  but will be in an indeterminate state.

  Within the LLDD, the normal code paths for a scsi_host teardown should
  be followed. E.g. if the vport has a FCP Initiator role, the LLDD
  will call fc_remove_host() for the vport's scsi_host, followed by
  scsi_remove_host() and scsi_host_put() for the vport's scsi_host.
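
  A sketch of that teardown ordering follows.  The my_* helpers are
  hypothetical, and it is assumed (purely for illustration) that the driver
  stashed the vport's scsi_host in the vport's dd_data at create time:

      /* Illustrative skeleton of an LLDD vport_delete() handler. */
      static int my_lldd_vport_delete(struct fc_vport *vport)
      {
          struct Scsi_Host *shost = vport->dd_data;  /* assumed usage */

          my_logout_vport(vport);        /* terminate the vport on the FC link */

          fc_remove_host(shost);         /* tear down rports under the vport */
          scsi_remove_host(shost);
          scsi_host_put(shost);

          my_free_vport_resources(vport);  /* driver-private cleanup */
          return 0;
      }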


Other:
  fc_host port_type attribute:
    There is a new fc_host port_type value - FC_PORTTYPE_NPIV. This value
    must be set on all vport-based fc_hosts.  Normally, on a physical port,
    the port_type attribute would be set to NPORT, NLPORT, etc based on the
    topology type and existence of the fabric.  As this is not applicable to
    a vport, it makes more sense to report the FC mechanism used to create
    the vport.
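
    In the LLDD this is typically just an assignment when the vport's
    fc_host attributes are initialized, e.g. (sketch; vport_shost is the
    scsi_host the LLDD created for the vport):

      fc_host_port_type(vport_shost) = FC_PORTTYPE_NPIV;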

  Driver unload:
    FC drivers are required to call fc_remove_host() prior to calling
    scsi_remove_host().  This allows the fc_host to tear down all remote
    ports prior to the scsi_host being torn down.  The fc_remove_host()
    call was updated to remove all vports for the fc_host as well.
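
    In other words, the physical port's remove path looks roughly like
    (sketch; shost here is the physical port's scsi_host):

      fc_remove_host(shost);     /* removes rports and any remaining vports */
      scsi_remove_host(shost);
      scsi_host_put(shost);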


Credits
=======
The following people have contributed to this document:

James Smart
james.smart@emulex.com
