author	Linus Torvalds <torvalds@linux-foundation.org>	2019-07-11 18:14:01 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2019-07-11 18:14:01 -0400
commit	ba6d10ab8014ac10d25ca513352b6665e73b5785 (patch)
tree	3b7aaa3f2d76d0c0e9612bc87e1da45577465528
parent	64b08df460cfdfc2b010263043a057cdd33500ed (diff)
parent	baf23eddbf2a4ba9bf2bdb342686c71a8042e39b (diff)
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI updates from James Bottomley:
 "This is mostly update of the usual drivers: qla2xxx, hpsa, lpfc, ufs,
  mpt3sas, ibmvscsi, megaraid_sas, bnx2fc and hisi_sas as well as the
  removal of the osst driver (I heard from Willem privately that he
  would like the driver removed because all his test hardware has
  failed). Plus number of minor changes, spelling fixes and other
  trivia.

  The big merge conflict this time around is the SPDX licence tags.
  Following discussion on linux-next, we believe our version to be more
  accurate than the one in the tree, so the resolution is to take our
  version for all the SPDX conflicts"

Note on the SPDX license tag conversion conflicts: the SCSI tree had
done its own SPDX conversion, which in some cases conflicted with the
treewide ones done by Thomas & co. In almost all cases, the conflicts
were purely syntactic: the SCSI tree used the old-style SPDX tags
("GPL-2.0" and "GPL-2.0+") while the treewide conversion had used the
new-style ones ("GPL-2.0-only" and "GPL-2.0-or-later").

In these cases I picked the new-style one.

In a few cases, the SPDX conversion was actually different, though. As
explained by James above, and in more detail in a pre-pull-request
thread:

 "The other problem is actually substantive: In the libsas code Luben
  Tuikov originally specified gpl 2.0 only by dint of stating:

  * This file is licensed under GPLv2.

  In all the libsas files, but then muddied the water by quoting GPLv2
  verbatim (which includes the or later than language). So for these
  files Christoph did the conversion to v2 only SPDX tags and Thomas
  converted to v2 or later tags"

So in those cases, where the spdx tag substantially mattered, I took
the SCSI tree conversion of it, but then also took the opportunity to
turn the old-style "GPL-2.0" into a new-style "GPL-2.0-only" tag.

Similarly, when there were whitespace differences or other differences
to the comments around the copyright notices, I took the version from
the SCSI tree as being the more specific conversion.

Finally, in the spdx conversions that had no conflicts (because the
treewide ones hadn't been done for those files), I just took the SCSI
tree version as-is, even if it was old-style. The old-style conversions
are perfectly valid, even if the "-only" and "-or-later" versions are
perhaps more descriptive.

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (185 commits)
  scsi: qla2xxx: move IO flush to the front of NVME rport unregistration
  scsi: qla2xxx: Fix NVME cmd and LS cmd timeout race condition
  scsi: qla2xxx: on session delete, return nvme cmd
  scsi: qla2xxx: Fix kernel crash after disconnecting NVMe devices
  scsi: megaraid_sas: Update driver version to 07.710.06.00-rc1
  scsi: megaraid_sas: Introduce various Aero performance modes
  scsi: megaraid_sas: Use high IOPS queues based on IO workload
  scsi: megaraid_sas: Set affinity for high IOPS reply queues
  scsi: megaraid_sas: Enable coalescing for high IOPS queues
  scsi: megaraid_sas: Add support for High IOPS queues
  scsi: megaraid_sas: Add support for MPI toolbox commands
  scsi: megaraid_sas: Offload Aero RAID5/6 division calculations to driver
  scsi: megaraid_sas: RAID1 PCI bandwidth limit algorithm is applicable for only Ventura
  scsi: megaraid_sas: megaraid_sas: Add check for count returned by HOST_DEVICE_LIST DCMD
  scsi: megaraid_sas: Handle sequence JBOD map failure at driver level
  scsi: megaraid_sas: Don't send FPIO to RL Bypass queue
  scsi: megaraid_sas: In probe context, retry IOC INIT once if firmware is in fault
  scsi: megaraid_sas: Release Mutex lock before OCR in case of DCMD timeout
  scsi: megaraid_sas: Call disable_irq from process IRQ poll
  scsi: megaraid_sas: Remove few debug counters from IO path
  ...
-rw-r--r-- Documentation/scsi/osst.txt | 218
-rw-r--r-- Documentation/scsi/ufs.txt | 7
-rw-r--r-- MAINTAINERS | 13
-rw-r--r-- arch/m68k/mac/config.c | 10
-rw-r--r-- drivers/infiniband/ulp/srp/ib_srp.c | 21
-rw-r--r-- drivers/message/fusion/mptbase.c | 3
-rw-r--r-- drivers/scsi/Kconfig | 57
-rw-r--r-- drivers/scsi/Makefile | 4
-rw-r--r-- drivers/scsi/NCR5380.c | 18
-rw-r--r-- drivers/scsi/NCR5380.h | 2
-rw-r--r-- drivers/scsi/aic7xxx/aic7xxx.reg | 2
-rw-r--r-- drivers/scsi/aic94xx/aic94xx_dev.c | 4
-rw-r--r-- drivers/scsi/bnx2fc/bnx2fc.h | 14
-rw-r--r-- drivers/scsi/bnx2fc/bnx2fc_els.c | 60
-rw-r--r-- drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 3
-rw-r--r-- drivers/scsi/bnx2fc/bnx2fc_io.c | 116
-rw-r--r-- drivers/scsi/bnx2fc/bnx2fc_tgt.c | 10
-rw-r--r-- drivers/scsi/cxgbi/cxgb4i/cxgb4i.c | 9
-rw-r--r-- drivers/scsi/fdomain.c | 597
-rw-r--r-- drivers/scsi/fdomain.h | 114
-rw-r--r-- drivers/scsi/fdomain_isa.c | 222
-rw-r--r-- drivers/scsi/fdomain_pci.c | 68
-rw-r--r-- drivers/scsi/hisi_sas/hisi_sas.h | 8
-rw-r--r-- drivers/scsi/hisi_sas/hisi_sas_main.c | 16
-rw-r--r-- drivers/scsi/hisi_sas/hisi_sas_v2_hw.c | 50
-rw-r--r-- drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 50
-rw-r--r-- drivers/scsi/hpsa.c | 280
-rw-r--r-- drivers/scsi/hpsa.h | 6
-rw-r--r-- drivers/scsi/hpsa_cmd.h | 2
-rw-r--r-- drivers/scsi/ibmvscsi/ibmvscsi.c | 77
-rw-r--r-- drivers/scsi/ibmvscsi/ibmvscsi.h | 10
-rw-r--r-- drivers/scsi/isci/remote_device.c | 4
-rw-r--r-- drivers/scsi/isci/remote_device.h | 5
-rw-r--r-- drivers/scsi/isci/request.c | 8
-rw-r--r-- drivers/scsi/isci/task.c | 2
-rw-r--r-- drivers/scsi/libiscsi_tcp.c | 2
-rw-r--r-- drivers/scsi/libsas/sas_discover.c | 23
-rw-r--r-- drivers/scsi/libsas/sas_event.c | 18
-rw-r--r-- drivers/scsi/libsas/sas_expander.c | 71
-rw-r--r-- drivers/scsi/libsas/sas_init.c | 2
-rw-r--r-- drivers/scsi/libsas/sas_internal.h | 2
-rw-r--r-- drivers/scsi/libsas/sas_phy.c | 18
-rw-r--r-- drivers/scsi/libsas/sas_port.c | 24
-rw-r--r-- drivers/scsi/libsas/sas_scsi_host.c | 2
-rw-r--r-- drivers/scsi/lpfc/lpfc_attr.c | 34
-rw-r--r-- drivers/scsi/lpfc/lpfc_bsg.c | 2
-rw-r--r-- drivers/scsi/lpfc/lpfc_crtn.h | 3
-rw-r--r-- drivers/scsi/lpfc/lpfc_ct.c | 14
-rw-r--r-- drivers/scsi/lpfc/lpfc_els.c | 1
-rw-r--r-- drivers/scsi/lpfc/lpfc_init.c | 512
-rw-r--r-- drivers/scsi/lpfc/lpfc_nvme.c | 16
-rw-r--r-- drivers/scsi/lpfc/lpfc_nvmet.c | 332
-rw-r--r-- drivers/scsi/lpfc/lpfc_nvmet.h | 1
-rw-r--r-- drivers/scsi/lpfc/lpfc_scsi.c | 16
-rw-r--r-- drivers/scsi/lpfc/lpfc_sli.c | 76
-rw-r--r-- drivers/scsi/lpfc/lpfc_sli4.h | 11
-rw-r--r-- drivers/scsi/lpfc/lpfc_version.h | 2
-rw-r--r-- drivers/scsi/mac_scsi.c | 421
-rw-r--r-- drivers/scsi/megaraid/Kconfig.megaraid | 1
-rw-r--r-- drivers/scsi/megaraid/Makefile | 2
-rw-r--r-- drivers/scsi/megaraid/megaraid_sas.h | 101
-rw-r--r-- drivers/scsi/megaraid/megaraid_sas_base.c | 712
-rw-r--r-- drivers/scsi/megaraid/megaraid_sas_debugfs.c | 179
-rw-r--r-- drivers/scsi/megaraid/megaraid_sas_fp.c | 82
-rw-r--r-- drivers/scsi/megaraid/megaraid_sas_fusion.c | 551
-rw-r--r-- drivers/scsi/megaraid/megaraid_sas_fusion.h | 33
-rw-r--r-- drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h | 2
-rw-r--r-- drivers/scsi/mpt3sas/mpt3sas_base.c | 497
-rw-r--r-- drivers/scsi/mpt3sas/mpt3sas_base.h | 35
-rw-r--r-- drivers/scsi/mpt3sas/mpt3sas_config.c | 73
-rw-r--r-- drivers/scsi/mpt3sas/mpt3sas_ctl.c | 234
-rw-r--r-- drivers/scsi/mpt3sas/mpt3sas_scsih.c | 52
-rw-r--r-- drivers/scsi/mpt3sas/mpt3sas_transport.c | 8
-rw-r--r-- drivers/scsi/mvsas/mv_sas.c | 2
-rw-r--r-- drivers/scsi/mvsas/mv_sas.h | 3
-rw-r--r-- drivers/scsi/osst.c | 6108
-rw-r--r-- drivers/scsi/osst.h | 651
-rw-r--r-- drivers/scsi/osst_detect.h | 7
-rw-r--r-- drivers/scsi/osst_options.h | 107
-rw-r--r-- drivers/scsi/pcmcia/Kconfig | 10
-rw-r--r-- drivers/scsi/pcmcia/Makefile | 1
-rw-r--r-- drivers/scsi/pcmcia/fdomain_cs.c | 95
-rw-r--r-- drivers/scsi/pm8001/pm8001_ctl.c | 52
-rw-r--r-- drivers/scsi/pm8001/pm8001_hwi.c | 4
-rw-r--r-- drivers/scsi/pm8001/pm8001_sas.c | 4
-rw-r--r-- drivers/scsi/pm8001/pm8001_sas.h | 1
-rw-r--r-- drivers/scsi/pm8001/pm80xx_hwi.c | 4
-rw-r--r-- drivers/scsi/qla2xxx/qla_def.h | 5
-rw-r--r-- drivers/scsi/qla2xxx/qla_gbl.h | 2
-rw-r--r-- drivers/scsi/qla2xxx/qla_init.c | 1
-rw-r--r-- drivers/scsi/qla2xxx/qla_nvme.c | 236
-rw-r--r-- drivers/scsi/qla2xxx/qla_nvme.h | 2
-rw-r--r-- drivers/scsi/qla2xxx/qla_os.c | 1
-rw-r--r-- drivers/scsi/qla2xxx/qla_target.c | 16
-rw-r--r-- drivers/scsi/scsi.c | 12
-rw-r--r-- drivers/scsi/scsi_debugfs.h | 1
-rw-r--r-- drivers/scsi/scsi_error.c | 26
-rw-r--r-- drivers/scsi/scsi_lib.c | 4
-rw-r--r-- drivers/scsi/scsi_pm.c | 6
-rw-r--r-- drivers/scsi/scsi_priv.h | 1
-rw-r--r-- drivers/scsi/scsi_sysfs.c | 7
-rw-r--r-- drivers/scsi/scsi_transport_fc.c | 3
-rw-r--r-- drivers/scsi/sd.c | 111
-rw-r--r-- drivers/scsi/ses.c | 7
-rw-r--r-- drivers/scsi/st.c | 6
-rw-r--r-- drivers/scsi/storvsc_drv.c | 11
-rw-r--r-- drivers/scsi/ufs/ufs-qcom.c | 23
-rw-r--r-- drivers/scsi/ufs/ufs-sysfs.c | 6
-rw-r--r-- drivers/scsi/ufs/ufs_bsg.c | 6
-rw-r--r-- drivers/scsi/ufs/ufshcd-pci.c | 2
-rw-r--r-- drivers/scsi/ufs/ufshcd.c | 35
-rw-r--r-- drivers/scsi/ufs/ufshcd.h | 5
-rw-r--r-- drivers/scsi/ufs/ufshci.h | 6
-rw-r--r-- drivers/scsi/virtio_scsi.c | 3
-rw-r--r-- drivers/scsi/wd719x.c | 42
-rw-r--r-- drivers/target/iscsi/iscsi_target_nego.c | 15
-rw-r--r-- drivers/target/target_core_user.c | 16
-rw-r--r-- include/scsi/fc/fc_fip.h | 14
-rw-r--r-- include/scsi/fc/fc_ms.h | 3
-rw-r--r-- include/scsi/iscsi_if.h | 2
-rw-r--r-- include/scsi/iscsi_proto.h | 2
-rw-r--r-- include/scsi/libiscsi_tcp.h | 2
-rw-r--r-- include/scsi/libsas.h | 5
-rw-r--r-- include/scsi/sas.h | 2
-rw-r--r-- include/scsi/scsi_transport.h | 2
-rw-r--r-- include/scsi/scsi_transport_fc.h | 3
-rw-r--r-- include/uapi/scsi/fc/fc_els.h | 13
-rw-r--r-- include/uapi/scsi/fc/fc_fs.h | 13
-rw-r--r-- include/uapi/scsi/fc/fc_gs.h | 13
-rw-r--r-- include/uapi/scsi/fc/fc_ns.h | 13
-rw-r--r-- include/uapi/scsi/scsi_bsg_fc.h | 15
-rw-r--r-- include/uapi/scsi/scsi_netlink.h | 15
-rw-r--r-- include/uapi/scsi/scsi_netlink_fc.h | 15
133 files changed, 5179 insertions, 8874 deletions
diff --git a/Documentation/scsi/osst.txt b/Documentation/scsi/osst.txt
deleted file mode 100644
index 00c8ebb2fd18..000000000000
--- a/Documentation/scsi/osst.txt
+++ /dev/null
@@ -1,218 +0,0 @@
-README file for the osst driver
-===============================
-(w) Kurt Garloff <garloff@suse.de> 12/2000
-
-This file describes the osst driver as of version 0.8.x/0.9.x, the released
-version of the osst driver.
-It is intended to help advanced users to understand the role of osst and to
-get them started using (and maybe debugging) it.
-It won't address issues like "How do I compile a kernel?" or "How do I load
-a module?", as these are too basic.
-Once the OnStream got merged into the official kernel, the distro makers
-will provide the OnStream support for those who are not familiar with
-hacking their kernels.
-
-
-Purpose
--------
-The osst driver was developed, because the standard SCSI tape driver in
-Linux, st, does not support the OnStream SC-x0 SCSI tape. The st is not to
-blame for that, as the OnStream tape drives do not support the standard SCSI
-command set for Serial Access Storage Devices (SASDs), which basically
-corresponds to the QIC-157 spec.
-Nevertheless, the OnStream tapes are nice pieces of hardware and therefore
-the osst driver has been written to make these tape devs supported by Linux.
-The driver is free software. It's released under the GNU GPL and planned to
-be integrated into the mainstream kernel.
-
-
-Implementation
---------------
-The osst is a new high-level SCSI driver, just like st, sr, sd and sg. It
-can be compiled into the kernel or loaded as a module.
-As it represents a new device, it got assigned a new device node: /dev/osstX
-are character devices with major no 206 and minor numbers like the /dev/stX
-devices. If those are not present, you may create them by calling
-Makedevs.sh as root (see below).
-The driver started being a copy of st and as such, the osst devices'
-behavior looks very much the same as st to the userspace applications.
-
-
-History
--------
-In the first place, osst shared its identity very much with st. That meant
-that it used the same kernel structures and the same device node as st.
-So you could only have either of them being present in the kernel. This has
-been fixed by registering an own device, now.
-st and osst can coexist, each only accessing the devices it can support by
-themselves.
-
-
-Installation
-------------
-osst got integrated into the linux kernel. Select it during kernel
-configuration as module or compile statically into the kernel.
-Compile your kernel and install the modules.
-
-Now, your osst driver is inside the kernel or available as a module,
-depending on your choice during kernel config. You may still need to create
-the device nodes by calling the Makedevs.sh script (see below) manually.
-
-To load your module, you may use the command
-modprobe osst
-as root. dmesg should show you, whether your OnStream tapes have been
-recognized.
-
-If you want to have the module autoloaded on access to /dev/osst, you may
-add something like
-alias char-major-206 osst
-to a file under /etc/modprobe.d/ directory.
-
-You may find it convenient to create a symbolic link
-ln -s nosst0 /dev/tape
-to make programs assuming a default name of /dev/tape more convenient to
-use.
-
-The device nodes for osst have to be created. Use the Makedevs.sh script
-attached to this file.
-
-
-Using it
---------
-You may use the OnStream tape driver with your standard backup software,
-which may be tar, cpio, amanda, arkeia, BRU, Lone Tar, ...
-by specifying /dev/(n)osst0 as the tape device to use or using the above
-symlink trick. The IOCTLs to control tape operation are also mostly
-supported and you may try the mt (or mt_st) program to jump between
-filemarks, eject the tape, ...
-
-There's one limitation: You need to use a block size of 32kB.
-
-(This limitation is worked on and will be fixed in version 0.8.8 of
- this driver.)
-
-If you just want to get started with standard software, here is an example
-for creating and restoring a full backup:
-# Backup
-tar cvf - / --exclude /proc | buffer -s 32k -m 24M -B -t -o /dev/nosst0
-# Restore
-buffer -s 32k -m 8M -B -t -i /dev/osst0 | tar xvf - -C /
-
-The buffer command has been used to buffer the data before it goes to the
-tape (or the file system) in order to smooth out the data stream and prevent
-the tape from needing to stop and rewind. The OnStream does have an internal
-buffer and a variable speed which help this, but especially on writing, the
-buffering still proves useful in most cases. It also pads the data to
-guarantees the block size of 32k. (Otherwise you may pass the -b64 option to
-tar.)
-Expect something like 1.8MB/s for the SC-x0 drives and 0.9MB/s for the DI-30.
-The USB drive will give you about 0.7MB/s.
-On a fast machine, you may profit from software data compression (z flag for
-tar).
-
-
-USB and IDE
------------
-Via the SCSI emulation layers usb-storage and ide-scsi, you can also use the
-osst driver to drive the USB-30 and the DI-30 drives. (Unfortunately, there
-is no such layer for the parallel port, otherwise the DP-30 would work as
-well.) For the USB support, you need the latest 2.4.0-test kernels and the
-latest usb-storage driver from
-http://www.linux-usb.org/
-http://sourceforge.net/cvs/?group_id=3581
-
-Note that the ide-tape driver as of 1.16f uses a slightly outdated on-tape
-format and therefore is not completely interoperable with osst tapes.
-
-The ADR-x0 line is fully SCSI-2 compliant and is supported by st, not osst.
-The on-tape format is supposed to be compatible with the one used by osst.
-
-
-Feedback and updates
---------------------
-The driver development is coordinated through a mailing list
-<osst@linux1.onstream.nl>
-a CVS repository and some web pages.
-The tester's pages which contain recent news and updated drivers to download
-can be found on
-http://sourceforge.net/projects/osst/
-
-If you find any problems, please have a look at the tester's page in order
-to see whether the problem is already known and solved. Otherwise, please
-report it to the mailing list. Your feedback is welcome. (This holds also
-for reports of successful usage, of course.)
-In case of trouble, please do always provide the following info:
-* driver and kernel version used (see syslog)
-* driver messages (syslog)
-* SCSI config and OnStream Firmware (/proc/scsi/scsi)
-* description of error. Is it reproducible?
-* software and commands used
-
-You may subscribe to the mailing list, BTW, it's a majordomo list.
-
-
-Status
-------
-0.8.0 was the first widespread BETA release. Since then a lot of reports
-have been sent, but mostly reported success or only minor trouble.
-All the issues have been addressed.
-Check the web pages for more info about the current developments.
-0.9.x is the tree for the 2.3/2.4 kernel.
-
-
-Acknowledgments
----------------
-The driver has been started by making a copy of Kai Makisara's st driver.
-Most of the development has been done by Willem Riede. The presence of the
-userspace program osg (onstreamsg) from Terry Hardie has been rather
-helpful. The same holds for Gadi Oxman's ide-tape support for the DI-30.
-I did add some patches to those drivers as well and coordinated things a
-little bit.
-Note that most of them did mostly spend their spare time for the creation of
-this driver.
-The people from OnStream, especially Jack Bombeeck did support this project
-and always tried to answer HW or FW related questions. Furthermore, he
-pushed the FW developers to do the right things.
-SuSE did support this project by allowing me to work on it during my working
-time for them and by integrating the driver into their distro.
-
-More people did help by sending useful comments. Sorry to those who have
-been forgotten. Thanks to all the GNU/FSF and Linux developers who made this
-platform such an interesting, nice and stable platform.
-Thanks go to those who tested the drivers and did send useful reports. Your
-help is needed!
-
-
-Makedevs.sh
------------
-#!/bin/sh
-# Script to create OnStream SC-x0 device nodes (major 206)
-# Usage: Makedevs.sh [nos [path to dev]]
-# $Id: README.osst.kernel,v 1.4 2000/12/20 14:13:15 garloff Exp $
-major=206
-nrs=4
-dir=/dev
-test -z "$1" || nrs=$1
-test -z "$2" || dir=$2
-declare -i nr
-nr=0
-test -d $dir || mkdir -p $dir
-while test $nr -lt $nrs; do
- mknod $dir/osst$nr c $major $nr
- chown 0.disk $dir/osst$nr; chmod 660 $dir/osst$nr;
- mknod $dir/nosst$nr c $major $[nr+128]
- chown 0.disk $dir/nosst$nr; chmod 660 $dir/nosst$nr;
- mknod $dir/osst${nr}l c $major $[nr+32]
- chown 0.disk $dir/osst${nr}l; chmod 660 $dir/osst${nr}l;
- mknod $dir/nosst${nr}l c $major $[nr+160]
- chown 0.disk $dir/nosst${nr}l; chmod 660 $dir/nosst${nr}l;
- mknod $dir/osst${nr}m c $major $[nr+64]
- chown 0.disk $dir/osst${nr}m; chmod 660 $dir/osst${nr}m;
- mknod $dir/nosst${nr}m c $major $[nr+192]
- chown 0.disk $dir/nosst${nr}m; chmod 660 $dir/nosst${nr}m;
- mknod $dir/osst${nr}a c $major $[nr+96]
- chown 0.disk $dir/osst${nr}a; chmod 660 $dir/osst${nr}a;
- mknod $dir/nosst${nr}a c $major $[nr+224]
- chown 0.disk $dir/nosst${nr}a; chmod 660 $dir/nosst${nr}a;
- let nr+=1
-done
diff --git a/Documentation/scsi/ufs.txt b/Documentation/scsi/ufs.txt
index 1769f71c4c20..81842ec3e116 100644
--- a/Documentation/scsi/ufs.txt
+++ b/Documentation/scsi/ufs.txt
@@ -158,6 +158,13 @@ send SG_IO with the applicable sg_io_v4:
 If you wish to read or write a descriptor, use the appropriate xferp of
 sg_io_v4.
 
+The userspace tool that interacts with the ufs-bsg endpoint and uses its
+upiu-based protocol is available at:
+
+	https://github.com/westerndigitalcorporation/ufs-tool
+
+For more detailed information about the tool and its supported
+features, please see the tool's README.
 
 UFS Specifications can be found at,
 UFS - http://www.jedec.org/sites/default/files/docs/JESD220.pdf
diff --git a/MAINTAINERS b/MAINTAINERS
index 618e4979960b..f2ae76355d95 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11779,16 +11779,6 @@ S: Maintained
 F:	drivers/mtd/nand/onenand/
 F:	include/linux/mtd/onenand*.h
 
-ONSTREAM SCSI TAPE DRIVER
-M:	Willem Riede <osst@riede.org>
-L:	osst-users@lists.sourceforge.net
-L:	linux-scsi@vger.kernel.org
-S:	Maintained
-F:	Documentation/scsi/osst.txt
-F:	drivers/scsi/osst.*
-F:	drivers/scsi/osst_*.h
-F:	drivers/scsi/st.h
-
 OP-TEE DRIVER
 M:	Jens Wiklander <jens.wiklander@linaro.org>
 S:	Maintained
@@ -12680,8 +12670,7 @@ S: Orphan
 F:	drivers/scsi/pmcraid.*
 
 PMC SIERRA PM8001 DRIVER
-M:	Jack Wang <jinpu.wang@profitbricks.com>
-M:	lindar_liu@usish.com
+M:	Jack Wang <jinpu.wang@cloud.ionos.com>
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	drivers/scsi/pm8001/
diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c
index 11be08f4f750..205ac75da13d 100644
--- a/arch/m68k/mac/config.c
+++ b/arch/m68k/mac/config.c
@@ -911,6 +911,10 @@ static const struct resource mac_scsi_iifx_rsrc[] __initconst = {
 		.flags = IORESOURCE_MEM,
 		.start = 0x50008000,
 		.end   = 0x50009FFF,
+	}, {
+		.flags = IORESOURCE_MEM,
+		.start = 0x50008000,
+		.end   = 0x50009FFF,
 	},
 };
 
@@ -1012,10 +1016,12 @@ int __init mac_platform_init(void)
 	case MAC_SCSI_IIFX:
 		/* Addresses from The Guide to Mac Family Hardware.
 		 * $5000 8000 - $5000 9FFF: SCSI DMA
+		 * $5000 A000 - $5000 BFFF: Alternate SCSI
 		 * $5000 C000 - $5000 DFFF: Alternate SCSI (DMA)
 		 * $5000 E000 - $5000 FFFF: Alternate SCSI (Hsk)
-		 * The SCSI DMA custom IC embeds the 53C80 core. mac_scsi does
-		 * not make use of its DMA or hardware handshaking logic.
+		 * The A/UX header file sys/uconfig.h says $50F0 8000.
+		 * The "SCSI DMA" custom IC embeds the 53C80 core and
+		 * supports Programmed IO, DMA and PDMA (hardware handshake).
 		 */
 		platform_device_register_simple("mac_scsi", 0,
 			mac_scsi_iifx_rsrc, ARRAY_SIZE(mac_scsi_iifx_rsrc));
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 4305da2c9037..d5cbad2c61e4 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -2340,7 +2340,6 @@ static void srp_handle_qp_err(struct ib_cq *cq, struct ib_wc *wc,
 static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
 {
 	struct srp_target_port *target = host_to_target(shost);
-	struct srp_rport *rport = target->rport;
 	struct srp_rdma_ch *ch;
 	struct srp_request *req;
 	struct srp_iu *iu;
@@ -2350,16 +2349,6 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
 	u32 tag;
 	u16 idx;
 	int len, ret;
-	const bool in_scsi_eh = !in_interrupt() && current == shost->ehandler;
-
-	/*
-	 * The SCSI EH thread is the only context from which srp_queuecommand()
-	 * can get invoked for blocked devices (SDEV_BLOCK /
-	 * SDEV_CREATED_BLOCK). Avoid racing with srp_reconnect_rport() by
-	 * locking the rport mutex if invoked from inside the SCSI EH.
-	 */
-	if (in_scsi_eh)
-		mutex_lock(&rport->mutex);
 
 	scmnd->result = srp_chkready(target->rport);
 	if (unlikely(scmnd->result))
@@ -2428,13 +2417,7 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
 		goto err_unmap;
 	}
 
-	ret = 0;
-
-unlock_rport:
-	if (in_scsi_eh)
-		mutex_unlock(&rport->mutex);
-
-	return ret;
+	return 0;
 
 err_unmap:
 	srp_unmap_data(scmnd, ch, req);
@@ -2456,7 +2439,7 @@ err:
 		ret = SCSI_MLQUEUE_HOST_BUSY;
 	}
 
-	goto unlock_rport;
+	return ret;
 }
 
 /*
diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
index d8882b0a1338..c2dd322691d1 100644
--- a/drivers/message/fusion/mptbase.c
+++ b/drivers/message/fusion/mptbase.c
@@ -6001,13 +6001,12 @@ mpt_findImVolumes(MPT_ADAPTER *ioc)
 	if (mpt_config(ioc, &cfg) != 0)
 		goto out;
 
-	mem = kmalloc(iocpage2sz, GFP_KERNEL);
+	mem = kmemdup(pIoc2, iocpage2sz, GFP_KERNEL);
 	if (!mem) {
 		rc = -ENOMEM;
 		goto out;
 	}
 
-	memcpy(mem, (u8 *)pIoc2, iocpage2sz);
 	ioc->raid_data.pIocPg2 = (IOCPage2_t *) mem;
 
 	mpt_read_ioc_pg_3(ioc);
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index f31b6b780eaf..75f66f8ad3ea 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -99,28 +99,6 @@ config CHR_DEV_ST
 	  To compile this driver as a module, choose M here and read
 	  <file:Documentation/scsi/scsi.txt>. The module will be called st.
 
-config CHR_DEV_OSST
-	tristate "SCSI OnStream SC-x0 tape support"
-	depends on SCSI
-	---help---
-	  The OnStream SC-x0 SCSI tape drives cannot be driven by the
-	  standard st driver, but instead need this special osst driver and
-	  use the /dev/osstX char device nodes (major 206). Via usb-storage,
-	  you may be able to drive the USB-x0 and DI-x0 drives as well.
-	  Note that there is also a second generation of OnStream
-	  tape drives (ADR-x0) that supports the standard SCSI-2 commands for
-	  tapes (QIC-157) and can be driven by the standard driver st.
-	  For more information, you may have a look at the SCSI-HOWTO
-	  <http://www.tldp.org/docs.html#howto> and
-	  <file:Documentation/scsi/osst.txt> in the kernel source.
-	  More info on the OnStream driver may be found on
-	  <http://sourceforge.net/projects/osst/>
-	  Please also have a look at the standard st docu, as most of it
-	  applies to osst as well.
-
-	  To compile this driver as a module, choose M here and read
-	  <file:Documentation/scsi/scsi.txt>. The module will be called osst.
-
 config BLK_DEV_SR
 	tristate "SCSI CDROM support"
 	depends on SCSI && BLK_DEV
@@ -664,6 +642,41 @@ config SCSI_DMX3191D
 	  To compile this driver as a module, choose M here: the
 	  module will be called dmx3191d.
 
+config SCSI_FDOMAIN
+	tristate
+	depends on SCSI
+
+config SCSI_FDOMAIN_PCI
+	tristate "Future Domain TMC-3260/AHA-2920A PCI SCSI support"
+	depends on PCI && SCSI
+	select SCSI_FDOMAIN
+	help
+	  This is support for Future Domain's PCI SCSI host adapters (TMC-3260)
+	  and other adapters with PCI bus based on the Future Domain chipsets
+	  (Adaptec AHA-2920A).
+
+	  NOTE: Newer Adaptec AHA-2920C boards use the Adaptec AIC-7850 chip
+	  and should use the aic7xxx driver ("Adaptec AIC7xxx chipset SCSI
+	  controller support"). This Future Domain driver works with the older
+	  Adaptec AHA-2920A boards with a Future Domain chip on them.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called fdomain_pci.
+
+config SCSI_FDOMAIN_ISA
+	tristate "Future Domain 16xx ISA SCSI support"
+	depends on ISA && SCSI
+	select CHECK_SIGNATURE
+	select SCSI_FDOMAIN
+	help
+	  This is support for Future Domain's 16-bit SCSI host adapters
+	  (TMC-1660/1680, TMC-1650/1670, TMC-1610M/MER/MEX) and other adapters
+	  with ISA bus based on the Future Domain chipsets (Quantum ISA-200S,
+	  ISA-250MG; and at least one IBM board).
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called fdomain_isa.
+
 config SCSI_GDTH
 	tristate "Intel/ICP (former GDT SCSI Disk Array) RAID Controller support"
 	depends on PCI && SCSI
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 8826111fdf4a..aeda53901064 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -76,6 +76,9 @@ obj-$(CONFIG_SCSI_AIC94XX) += aic94xx/
 obj-$(CONFIG_SCSI_PM8001)	+= pm8001/
 obj-$(CONFIG_SCSI_ISCI)		+= isci/
 obj-$(CONFIG_SCSI_IPS)		+= ips.o
+obj-$(CONFIG_SCSI_FDOMAIN)	+= fdomain.o
+obj-$(CONFIG_SCSI_FDOMAIN_PCI)	+= fdomain_pci.o
+obj-$(CONFIG_SCSI_FDOMAIN_ISA)	+= fdomain_isa.o
 obj-$(CONFIG_SCSI_GENERIC_NCR5380) += g_NCR5380.o
 obj-$(CONFIG_SCSI_QLOGIC_FAS)	+= qlogicfas408.o qlogicfas.o
 obj-$(CONFIG_PCMCIA_QLOGIC)	+= qlogicfas408.o
@@ -143,7 +146,6 @@ obj-$(CONFIG_SCSI_WD719X) += wd719x.o
 obj-$(CONFIG_ARM)		+= arm/
 
 obj-$(CONFIG_CHR_DEV_ST)	+= st.o
-obj-$(CONFIG_CHR_DEV_OSST)	+= osst.o
 obj-$(CONFIG_BLK_DEV_SD)	+= sd_mod.o
 obj-$(CONFIG_BLK_DEV_SR)	+= sr_mod.o
 obj-$(CONFIG_CHR_DEV_SG)	+= sg.o
diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
index fe0535affc14..d9fa9cf2fd8b 100644
--- a/drivers/scsi/NCR5380.c
+++ b/drivers/scsi/NCR5380.c
@@ -709,6 +709,8 @@ static void NCR5380_main(struct work_struct *work)
 			NCR5380_information_transfer(instance);
 			done = 0;
 		}
+		if (!hostdata->connected)
+			NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
 		spin_unlock_irq(&hostdata->lock);
 		if (!done)
 			cond_resched();
@@ -1110,8 +1112,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
 		spin_lock_irq(&hostdata->lock);
 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
 		NCR5380_reselect(instance);
-		if (!hostdata->connected)
-			NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
 		shost_printk(KERN_ERR, instance, "reselection after won arbitration?\n");
 		goto out;
 	}
@@ -1119,7 +1119,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
 	if (err < 0) {
 		spin_lock_irq(&hostdata->lock);
 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
-		NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
 
 		/* Can't touch cmd if it has been reclaimed by the scsi ML */
 		if (!hostdata->selecting)
@@ -1157,7 +1156,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
 	if (err < 0) {
 		shost_printk(KERN_ERR, instance, "select: REQ timeout\n");
 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
-		NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
 		goto out;
 	}
 	if (!hostdata->selecting) {
@@ -1763,10 +1761,8 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 					scmd_printk(KERN_INFO, cmd,
 						    "switching to slow handshake\n");
 					cmd->device->borken = 1;
-					sink = 1;
-					do_abort(instance);
-					cmd->result = DID_ERROR << 16;
-					/* XXX - need to source or sink data here, as appropriate */
+					do_reset(instance);
+					bus_reset_cleanup(instance);
 				}
 			} else {
 				/* Transfer a small chunk so that the
@@ -1826,9 +1822,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 				 */
 				NCR5380_write(TARGET_COMMAND_REG, 0);
 
-				/* Enable reselect interrupts */
-				NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
-
 				maybe_release_dma_irq(instance);
 				return;
 			case MESSAGE_REJECT:
@@ -1860,8 +1853,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 				 */
 				NCR5380_write(TARGET_COMMAND_REG, 0);
 
-				/* Enable reselect interrupts */
-				NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
 #ifdef SUN3_SCSI_VME
 				dregs->csr |= CSR_DMA_ENABLE;
 #endif
@@ -1964,7 +1955,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 			cmd->result = DID_ERROR << 16;
 			complete_cmd(instance, cmd);
 			maybe_release_dma_irq(instance);
-			NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
 			return;
 		}
 		msgout = NOP;
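The NCR5380.c hunks above delete the scattered SELECT_ENABLE_REG writes from the error and completion paths and instead re-arm reselection interrupts at a single point in NCR5380_main(), guarded by `!hostdata->connected`. A minimal sketch of that pattern follows; the register and state names here are hypothetical mocks, not the driver's real API:

```c
#include <assert.h>
#include <stdbool.h>

/* Mock of the chip's select-enable register (hypothetical). */
static int select_enable_reg;

struct host_state {
	bool connected;	/* a target currently owns the bus */
	int id_mask;	/* this initiator's ID bit */
};

/*
 * One pass of the service loop: reselection interrupts are re-armed
 * at a single site, and only while the bus is disconnected, rather
 * than at every exit path of selection and information transfer.
 */
static void main_loop_iteration(struct host_state *h)
{
	/* ... process queued commands, run information transfer ... */
	if (!h->connected)
		select_enable_reg = h->id_mask;
}
```

Consolidating the write makes it impossible for an error path to forget (or double) the re-enable, which is the point of the cleanup.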
diff --git a/drivers/scsi/NCR5380.h b/drivers/scsi/NCR5380.h
index efca509b92b0..5935fd6d1a05 100644
--- a/drivers/scsi/NCR5380.h
+++ b/drivers/scsi/NCR5380.h
@@ -235,7 +235,7 @@ struct NCR5380_cmd {
 #define NCR5380_PIO_CHUNK_SIZE		256
 
 /* Time limit (ms) to poll registers when IRQs are disabled, e.g. during PDMA */
-#define NCR5380_REG_POLL_TIME		15
+#define NCR5380_REG_POLL_TIME		10
 
 static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr)
 {
diff --git a/drivers/scsi/aic7xxx/aic7xxx.reg b/drivers/scsi/aic7xxx/aic7xxx.reg
index ba0b411d03e2..00fde2243e48 100644
--- a/drivers/scsi/aic7xxx/aic7xxx.reg
+++ b/drivers/scsi/aic7xxx/aic7xxx.reg
@@ -1666,7 +1666,7 @@ scratch_ram {
 	size		6
 	/*
 	 * These are reserved registers in the card's scratch ram on the 2742.
-	 * The EISA configuraiton chip is mapped here.  On Rev E. of the
+	 * The EISA configuration chip is mapped here.  On Rev E. of the
 	 * aic7770, the sequencer can use this area for scratch, but the
 	 * host cannot directly access these registers.  On later chips, this
 	 * area can be read and written by both the host and the sequencer.
diff --git a/drivers/scsi/aic94xx/aic94xx_dev.c b/drivers/scsi/aic94xx/aic94xx_dev.c
index 730b35e7c1ba..604a5331f639 100644
--- a/drivers/scsi/aic94xx/aic94xx_dev.c
+++ b/drivers/scsi/aic94xx/aic94xx_dev.c
@@ -170,9 +170,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
 		}
 	} else {
 		flags |= CONCURRENT_CONN_SUPP;
-		if (!dev->parent &&
-		    (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
-		     dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE))
+		if (!dev->parent && dev_is_expander(dev->dev_type))
 			asd_ddbsite_write_byte(asd_ha, ddb, MAX_CCONN,
 					       4);
 		else
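The aic94xx hunk replaces an open-coded two-way comparison with the libsas `dev_is_expander()` helper. The predicate it stands for can be sketched as below; the enum values here are illustrative stand-ins, not the kernel's actual numeric encodings:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the SAS device-type enum; values are illustrative. */
enum sas_device_type {
	SAS_END_DEVICE,
	SAS_EDGE_EXPANDER_DEVICE,
	SAS_FANOUT_EXPANDER_DEVICE,
};

/*
 * Equivalent of the dev_is_expander() helper the hunk switches to:
 * true for either expander variant, false for end devices.
 */
static bool dev_is_expander(enum sas_device_type t)
{
	return t == SAS_EDGE_EXPANDER_DEVICE ||
	       t == SAS_FANOUT_EXPANDER_DEVICE;
}
```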
diff --git a/drivers/scsi/bnx2fc/bnx2fc.h b/drivers/scsi/bnx2fc/bnx2fc.h
index 901a31632493..3b84db8d13a9 100644
--- a/drivers/scsi/bnx2fc/bnx2fc.h
+++ b/drivers/scsi/bnx2fc/bnx2fc.h
@@ -66,7 +66,7 @@
 #include "bnx2fc_constants.h"
 
 #define BNX2FC_NAME		"bnx2fc"
-#define BNX2FC_VERSION		"2.11.8"
+#define BNX2FC_VERSION		"2.12.10"
 
 #define PFX			"bnx2fc: "
 
@@ -75,8 +75,9 @@
 #define BNX2X_DOORBELL_PCI_BAR		2
 
 #define BNX2FC_MAX_BD_LEN		0xffff
-#define BNX2FC_BD_SPLIT_SZ		0x8000
-#define BNX2FC_MAX_BDS_PER_CMD		256
+#define BNX2FC_BD_SPLIT_SZ		0xffff
+#define BNX2FC_MAX_BDS_PER_CMD		255
+#define BNX2FC_FW_MAX_BDS_PER_CMD	255
 
 #define BNX2FC_SQ_WQES_MAX	256
 
@@ -433,8 +434,10 @@ struct bnx2fc_cmd {
 	void (*cb_func)(struct bnx2fc_els_cb_arg *cb_arg);
 	struct bnx2fc_els_cb_arg *cb_arg;
 	struct delayed_work timeout_work; /* timer for ULP timeouts */
-	struct completion tm_done;
-	int wait_for_comp;
+	struct completion abts_done;
+	struct completion cleanup_done;
+	int wait_for_abts_comp;
+	int wait_for_cleanup_comp;
 	u16 xid;
 	struct fcoe_err_report_entry err_entry;
 	struct fcoe_task_ctx_entry *task;
@@ -455,6 +458,7 @@ struct bnx2fc_cmd {
 #define BNX2FC_FLAG_ELS_TIMEOUT		0xb
 #define BNX2FC_FLAG_CMD_LOST		0xc
 #define BNX2FC_FLAG_SRR_SENT		0xd
+#define BNX2FC_FLAG_ISSUE_CLEANUP_REQ	0xe
 	u8 rec_retry;
 	u8 srr_retry;
 	u32 srr_offset;
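The struct change above splits the single shared `tm_done` completion into separate `abts_done` and `cleanup_done` events, so a cleanup completion can no longer wake a waiter that is actually blocked on an ABTS response. A toy illustration of why the split matters, using a latched-event stand-in for `struct completion` (names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for struct completion: a latched event. */
struct event {
	bool done;
};

static void complete(struct event *e)
{
	e->done = true;
}

/*
 * With one shared event, a cleanup completion would also satisfy a
 * waiter expecting an ABTS completion. Keeping two events, as the
 * bnx2fc_cmd change does with abts_done/cleanup_done, makes the two
 * waits independent.
 */
struct cmd {
	struct event abts_done;
	struct event cleanup_done;
};
```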
diff --git a/drivers/scsi/bnx2fc/bnx2fc_els.c b/drivers/scsi/bnx2fc/bnx2fc_els.c
index 76e65a32f38c..754f2e82d955 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_els.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_els.c
@@ -610,7 +610,6 @@ int bnx2fc_send_rec(struct bnx2fc_cmd *orig_io_req)
 	rc = bnx2fc_initiate_els(tgt, ELS_REC, &rec, sizeof(rec),
 				 bnx2fc_rec_compl, cb_arg,
 				 r_a_tov);
-rec_err:
 	if (rc) {
 		BNX2FC_IO_DBG(orig_io_req, "REC failed - release\n");
 		spin_lock_bh(&tgt->tgt_lock);
@@ -618,6 +617,7 @@ rec_err:
 		spin_unlock_bh(&tgt->tgt_lock);
 		kfree(cb_arg);
 	}
+rec_err:
 	return rc;
 }
 
@@ -654,7 +654,6 @@ int bnx2fc_send_srr(struct bnx2fc_cmd *orig_io_req, u32 offset, u8 r_ctl)
 	rc = bnx2fc_initiate_els(tgt, ELS_SRR, &srr, sizeof(srr),
 				 bnx2fc_srr_compl, cb_arg,
 				 r_a_tov);
-srr_err:
 	if (rc) {
 		BNX2FC_IO_DBG(orig_io_req, "SRR failed - release\n");
 		spin_lock_bh(&tgt->tgt_lock);
@@ -664,6 +663,7 @@ srr_err:
 	} else
 		set_bit(BNX2FC_FLAG_SRR_SENT, &orig_io_req->req_flags);
 
+srr_err:
 	return rc;
 }
 
@@ -854,33 +854,57 @@ void bnx2fc_process_els_compl(struct bnx2fc_cmd *els_req,
 	kref_put(&els_req->refcount, bnx2fc_cmd_release);
 }
 
+#define BNX2FC_FCOE_MAC_METHOD_GRANGED_MAC	1
+#define BNX2FC_FCOE_MAC_METHOD_FCF_MAP		2
+#define BNX2FC_FCOE_MAC_METHOD_FCOE_SET_MAC	3
 static void bnx2fc_flogi_resp(struct fc_seq *seq, struct fc_frame *fp,
 			      void *arg)
 {
 	struct fcoe_ctlr *fip = arg;
 	struct fc_exch *exch = fc_seq_exch(seq);
 	struct fc_lport *lport = exch->lp;
-	u8 *mac;
-	u8 op;
+
+	struct fc_frame_header *fh;
+	u8 *granted_mac;
+	u8 fcoe_mac[6];
+	u8 fc_map[3];
+	int method;
 
 	if (IS_ERR(fp))
 		goto done;
 
-	mac = fr_cb(fp)->granted_mac;
-	if (is_zero_ether_addr(mac)) {
-		op = fc_frame_payload_op(fp);
-		if (lport->vport) {
-			if (op == ELS_LS_RJT) {
-				printk(KERN_ERR PFX "bnx2fc_flogi_resp is LS_RJT\n");
-				fc_vport_terminate(lport->vport);
-				fc_frame_free(fp);
-				return;
-			}
-		}
-		fcoe_ctlr_recv_flogi(fip, lport, fp);
+	fh = fc_frame_header_get(fp);
+	granted_mac = fr_cb(fp)->granted_mac;
+
+	/*
+	 * We set the source MAC for FCoE traffic based on the Granted MAC
+	 * address from the switch.
+	 *
+	 * If granted_mac is non-zero, we use that.
+	 * If the granted_mac is zeroed out, create the FCoE MAC based on
+	 * the sel_fcf->fc_map and the d_id fo the FLOGI frame.
+	 * If sel_fcf->fc_map is 0, then we use the default FCF-MAC plus the
+	 * d_id of the FLOGI frame.
+	 */
+	if (!is_zero_ether_addr(granted_mac)) {
+		ether_addr_copy(fcoe_mac, granted_mac);
+		method = BNX2FC_FCOE_MAC_METHOD_GRANGED_MAC;
+	} else if (fip->sel_fcf && fip->sel_fcf->fc_map != 0) {
+		hton24(fc_map, fip->sel_fcf->fc_map);
+		fcoe_mac[0] = fc_map[0];
+		fcoe_mac[1] = fc_map[1];
+		fcoe_mac[2] = fc_map[2];
+		fcoe_mac[3] = fh->fh_d_id[0];
+		fcoe_mac[4] = fh->fh_d_id[1];
+		fcoe_mac[5] = fh->fh_d_id[2];
+		method = BNX2FC_FCOE_MAC_METHOD_FCF_MAP;
+	} else {
+		fc_fcoe_set_mac(fcoe_mac, fh->fh_d_id);
+		method = BNX2FC_FCOE_MAC_METHOD_FCOE_SET_MAC;
 	}
-	if (!is_zero_ether_addr(mac))
-		fip->update_mac(lport, mac);
+
+	BNX2FC_HBA_DBG(lport, "fcoe_mac=%pM method=%d\n", fcoe_mac, method);
+	fip->update_mac(lport, fcoe_mac);
 done:
 	fc_lport_flogi_resp(seq, fp, lport);
 }
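In the FCF-MAP branch of the new bnx2fc_flogi_resp(), the FCoE source MAC is a fabric-provided MAC address (FPMA): the 24-bit FC-MAP in the upper three bytes, the FLOGI frame's 24-bit D_ID in the lower three. A hedged sketch of just that byte layout (the helper name is made up for illustration; 0x0EFC00 is the well-known default FC-MAP):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Build a fabric-provided MAC: 24-bit FC-MAP in the upper three
 * bytes, 24-bit destination FC ID (D_ID) in the lower three.
 * This mirrors the fcoe_mac[0..5] assignments in the hunk above.
 */
static void fcoe_mac_from_fc_map(uint8_t mac[6], uint32_t fc_map,
				 const uint8_t d_id[3])
{
	mac[0] = (fc_map >> 16) & 0xff;
	mac[1] = (fc_map >> 8) & 0xff;
	mac[2] = fc_map & 0xff;
	mac[3] = d_id[0];
	mac[4] = d_id[1];
	mac[5] = d_id[2];
}
```

The third branch, fc_fcoe_set_mac(), does the same construction with the default FC-MAP when the selected FCF advertises none.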
diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index a75e74ad1698..7796799bf04a 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -2971,7 +2971,8 @@ static struct scsi_host_template bnx2fc_shost_template = {
 	.this_id		= -1,
 	.cmd_per_lun		= 3,
 	.sg_tablesize		= BNX2FC_MAX_BDS_PER_CMD,
-	.max_sectors		= 1024,
+	.dma_boundary		= 0x7fff,
+	.max_sectors		= 0x3fbf,
 	.track_queue_depth	= 1,
 	.slave_configure	= bnx2fc_slave_configure,
 	.shost_attrs		= bnx2fc_host_attrs,
diff --git a/drivers/scsi/bnx2fc/bnx2fc_io.c b/drivers/scsi/bnx2fc/bnx2fc_io.c
index 8def63c0755f..9e50e5b53763 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_io.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_io.c
@@ -70,7 +70,7 @@ static void bnx2fc_cmd_timeout(struct work_struct *work)
 			  &io_req->req_flags)) {
 		/* Handle eh_abort timeout */
 		BNX2FC_IO_DBG(io_req, "eh_abort timed out\n");
-		complete(&io_req->tm_done);
+		complete(&io_req->abts_done);
 	} else if (test_bit(BNX2FC_FLAG_ISSUE_ABTS,
 			    &io_req->req_flags)) {
 		/* Handle internally generated ABTS timeout */
@@ -775,31 +775,32 @@ retry_tmf:
 	io_req->on_tmf_queue = 1;
 	list_add_tail(&io_req->link, &tgt->active_tm_queue);
 
-	init_completion(&io_req->tm_done);
-	io_req->wait_for_comp = 1;
+	init_completion(&io_req->abts_done);
+	io_req->wait_for_abts_comp = 1;
 
 	/* Ring doorbell */
 	bnx2fc_ring_doorbell(tgt);
 	spin_unlock_bh(&tgt->tgt_lock);
 
-	rc = wait_for_completion_timeout(&io_req->tm_done,
+	rc = wait_for_completion_timeout(&io_req->abts_done,
 					 interface->tm_timeout * HZ);
 	spin_lock_bh(&tgt->tgt_lock);
 
-	io_req->wait_for_comp = 0;
+	io_req->wait_for_abts_comp = 0;
 	if (!(test_bit(BNX2FC_FLAG_TM_COMPL, &io_req->req_flags))) {
 		set_bit(BNX2FC_FLAG_TM_TIMEOUT, &io_req->req_flags);
 		if (io_req->on_tmf_queue) {
 			list_del_init(&io_req->link);
 			io_req->on_tmf_queue = 0;
 		}
-		io_req->wait_for_comp = 1;
+		io_req->wait_for_cleanup_comp = 1;
+		init_completion(&io_req->cleanup_done);
 		bnx2fc_initiate_cleanup(io_req);
 		spin_unlock_bh(&tgt->tgt_lock);
-		rc = wait_for_completion_timeout(&io_req->tm_done,
+		rc = wait_for_completion_timeout(&io_req->cleanup_done,
 						 BNX2FC_FW_TIMEOUT);
 		spin_lock_bh(&tgt->tgt_lock);
-		io_req->wait_for_comp = 0;
+		io_req->wait_for_cleanup_comp = 0;
 		if (!rc)
 			kref_put(&io_req->refcount, bnx2fc_cmd_release);
 	}
@@ -1047,6 +1048,9 @@ int bnx2fc_initiate_cleanup(struct bnx2fc_cmd *io_req)
 	/* Obtain free SQ entry */
 	bnx2fc_add_2_sq(tgt, xid);
 
+	/* Set flag that cleanup request is pending with the firmware */
+	set_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags);
+
 	/* Ring doorbell */
 	bnx2fc_ring_doorbell(tgt);
 
@@ -1085,7 +1089,8 @@ static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
 	struct bnx2fc_rport *tgt = io_req->tgt;
 	unsigned int time_left;
 
-	io_req->wait_for_comp = 1;
+	init_completion(&io_req->cleanup_done);
+	io_req->wait_for_cleanup_comp = 1;
 	bnx2fc_initiate_cleanup(io_req);
 
 	spin_unlock_bh(&tgt->tgt_lock);
@@ -1094,21 +1099,21 @@ static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
 	 * Can't wait forever on cleanup response lest we let the SCSI error
 	 * handler wait forever
 	 */
-	time_left = wait_for_completion_timeout(&io_req->tm_done,
+	time_left = wait_for_completion_timeout(&io_req->cleanup_done,
 						BNX2FC_FW_TIMEOUT);
-	io_req->wait_for_comp = 0;
-	if (!time_left)
+	if (!time_left) {
 		BNX2FC_IO_DBG(io_req, "%s(): Wait for cleanup timed out.\n",
 			      __func__);
 
-	/*
-	 * Release reference held by SCSI command the cleanup completion
-	 * hits the BNX2FC_CLEANUP case in bnx2fc_process_cq_compl() and
-	 * thus the SCSI command is not returnedi by bnx2fc_scsi_done().
-	 */
-	kref_put(&io_req->refcount, bnx2fc_cmd_release);
+		/*
+		 * Put the extra reference to the SCSI command since it would
+		 * not have been returned in this case.
+		 */
+		kref_put(&io_req->refcount, bnx2fc_cmd_release);
+	}
 
 	spin_lock_bh(&tgt->tgt_lock);
+	io_req->wait_for_cleanup_comp = 0;
 	return SUCCESS;
 }
 
@@ -1197,7 +1202,8 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
 	/* Move IO req to retire queue */
 	list_add_tail(&io_req->link, &tgt->io_retire_queue);
 
-	init_completion(&io_req->tm_done);
+	init_completion(&io_req->abts_done);
+	init_completion(&io_req->cleanup_done);
 
 	if (test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags)) {
 		printk(KERN_ERR PFX "eh_abort: io_req (xid = 0x%x) "
@@ -1225,26 +1231,28 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
 	kref_put(&io_req->refcount,
 		 bnx2fc_cmd_release); /* drop timer hold */
 	set_bit(BNX2FC_FLAG_EH_ABORT, &io_req->req_flags);
-	io_req->wait_for_comp = 1;
+	io_req->wait_for_abts_comp = 1;
 	rc = bnx2fc_initiate_abts(io_req);
 	if (rc == FAILED) {
+		io_req->wait_for_cleanup_comp = 1;
 		bnx2fc_initiate_cleanup(io_req);
 		spin_unlock_bh(&tgt->tgt_lock);
-		wait_for_completion(&io_req->tm_done);
+		wait_for_completion(&io_req->cleanup_done);
 		spin_lock_bh(&tgt->tgt_lock);
-		io_req->wait_for_comp = 0;
+		io_req->wait_for_cleanup_comp = 0;
 		goto done;
 	}
 	spin_unlock_bh(&tgt->tgt_lock);
 
 	/* Wait 2 * RA_TOV + 1 to be sure timeout function hasn't fired */
-	time_left = wait_for_completion_timeout(&io_req->tm_done,
+	time_left = wait_for_completion_timeout(&io_req->abts_done,
 						(2 * rp->r_a_tov + 1) * HZ);
 	if (time_left)
-		BNX2FC_IO_DBG(io_req, "Timed out in eh_abort waiting for tm_done");
+		BNX2FC_IO_DBG(io_req,
+			      "Timed out in eh_abort waiting for abts_done");
 
 	spin_lock_bh(&tgt->tgt_lock);
-	io_req->wait_for_comp = 0;
+	io_req->wait_for_abts_comp = 0;
 	if (test_bit(BNX2FC_FLAG_IO_COMPL, &io_req->req_flags)) {
 		BNX2FC_IO_DBG(io_req, "IO completed in a different context\n");
 		rc = SUCCESS;
@@ -1319,10 +1327,29 @@ void bnx2fc_process_cleanup_compl(struct bnx2fc_cmd *io_req,
 	BNX2FC_IO_DBG(io_req, "Entered process_cleanup_compl "
 		      "refcnt = %d, cmd_type = %d\n",
 		      kref_read(&io_req->refcount), io_req->cmd_type);
+	/*
+	 * Test whether there is a cleanup request pending. If not just
+	 * exit.
+	 */
+	if (!test_and_clear_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ,
+				&io_req->req_flags))
+		return;
+	/*
+	 * If we receive a cleanup completion for this request then the
+	 * firmware will not give us an abort completion for this request
+	 * so clear any ABTS pending flags.
+	 */
+	if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags) &&
+	    !test_bit(BNX2FC_FLAG_ABTS_DONE, &io_req->req_flags)) {
+		set_bit(BNX2FC_FLAG_ABTS_DONE, &io_req->req_flags);
+		if (io_req->wait_for_abts_comp)
+			complete(&io_req->abts_done);
+	}
+
 	bnx2fc_scsi_done(io_req, DID_ERROR);
 	kref_put(&io_req->refcount, bnx2fc_cmd_release);
-	if (io_req->wait_for_comp)
-		complete(&io_req->tm_done);
+	if (io_req->wait_for_cleanup_comp)
+		complete(&io_req->cleanup_done);
 }
 
 void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req,
@@ -1346,6 +1373,16 @@ void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req,
 		return;
 	}
 
+	/*
+	 * If we receive an ABTS completion here then we will not receive
+	 * a cleanup completion so clear any cleanup pending flags.
+	 */
+	if (test_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags)) {
+		clear_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags);
+		if (io_req->wait_for_cleanup_comp)
+			complete(&io_req->cleanup_done);
+	}
+
 	/* Do not issue RRQ as this IO is already cleanedup */
 	if (test_and_set_bit(BNX2FC_FLAG_IO_CLEANUP,
 			     &io_req->req_flags))
@@ -1390,10 +1427,10 @@ void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req,
 		bnx2fc_cmd_timer_set(io_req, r_a_tov);
 
 io_compl:
-	if (io_req->wait_for_comp) {
+	if (io_req->wait_for_abts_comp) {
 		if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT,
 				       &io_req->req_flags))
-			complete(&io_req->tm_done);
+			complete(&io_req->abts_done);
 	} else {
 		/*
 		 * We end up here when ABTS is issued as
@@ -1577,9 +1614,9 @@ void bnx2fc_process_tm_compl(struct bnx2fc_cmd *io_req,
 	sc_cmd->scsi_done(sc_cmd);
 
 	kref_put(&io_req->refcount, bnx2fc_cmd_release);
-	if (io_req->wait_for_comp) {
+	if (io_req->wait_for_abts_comp) {
 		BNX2FC_IO_DBG(io_req, "tm_compl - wake up the waiter\n");
-		complete(&io_req->tm_done);
+		complete(&io_req->abts_done);
 	}
 }
 
@@ -1623,6 +1660,7 @@ static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req)
 	u64 addr;
 	int i;
 
+	WARN_ON(scsi_sg_count(sc) > BNX2FC_MAX_BDS_PER_CMD);
 	/*
 	 * Use dma_map_sg directly to ensure we're using the correct
 	 * dev struct off of pcidev.
@@ -1670,6 +1708,16 @@ static int bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req)
 	}
 	io_req->bd_tbl->bd_valid = bd_count;
 
+	/*
+	 * Return the command to ML if BD count exceeds the max number
+	 * that can be handled by FW.
+	 */
+	if (bd_count > BNX2FC_FW_MAX_BDS_PER_CMD) {
+		pr_err("bd_count = %d exceeded FW supported max BD(255), task_id = 0x%x\n",
+		       bd_count, io_req->xid);
+		return -ENOMEM;
+	}
+
 	return 0;
 }
 
@@ -1926,10 +1974,10 @@ void bnx2fc_process_scsi_cmd_compl(struct bnx2fc_cmd *io_req,
 		 * between command abort and (late) completion.
 		 */
 		BNX2FC_IO_DBG(io_req, "xid not on active_cmd_queue\n");
-		if (io_req->wait_for_comp)
+		if (io_req->wait_for_abts_comp)
 			if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT,
 					       &io_req->req_flags))
-				complete(&io_req->tm_done);
+				complete(&io_req->abts_done);
 	}
 
 	bnx2fc_unmap_sg_list(io_req);
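The bnx2fc_io.c changes also tighten scatter-gather accounting: bnx2fc_build_bd_list_from_sg() now fails commands whose buffer-descriptor count exceeds the firmware limit (BNX2FC_FW_MAX_BDS_PER_CMD, 255), complementing the raised BD split size of 0xffff. The per-element BD math reduces to a ceiling division, sketched here with hypothetical helper and constant names:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_BD_LEN		0xffff	/* largest single BD the HW takes */
#define FW_MAX_BDS_PER_CMD	255	/* firmware per-command BD limit */

/*
 * Count the buffer descriptors needed for one scatter-gather element:
 * each BD covers at most MAX_BD_LEN bytes, so this is a ceiling
 * division of the element length by the BD size.
 */
static size_t bds_for_sg_entry(size_t len)
{
	return (len + MAX_BD_LEN - 1) / MAX_BD_LEN;
}
```

Summing this over all sg elements and comparing against FW_MAX_BDS_PER_CMD is, in spirit, the check the new hunk performs before handing the command to the firmware.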
diff --git a/drivers/scsi/bnx2fc/bnx2fc_tgt.c b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
index d735e87e416a..50384b4a817c 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_tgt.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
@@ -187,7 +187,7 @@ void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
 			/* Handle eh_abort timeout */
 			BNX2FC_IO_DBG(io_req, "eh_abort for IO "
 				      "cleaned up\n");
-			complete(&io_req->tm_done);
+			complete(&io_req->abts_done);
 		}
 		kref_put(&io_req->refcount,
 			 bnx2fc_cmd_release); /* drop timer hold */
@@ -210,8 +210,8 @@ void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
 		list_del_init(&io_req->link);
 		io_req->on_tmf_queue = 0;
 		BNX2FC_IO_DBG(io_req, "tm_queue cleanup\n");
-		if (io_req->wait_for_comp)
-			complete(&io_req->tm_done);
+		if (io_req->wait_for_abts_comp)
+			complete(&io_req->abts_done);
 	}
 
 	list_for_each_entry_safe(io_req, tmp, &tgt->els_queue, link) {
@@ -251,8 +251,8 @@ void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
 			/* Handle eh_abort timeout */
 			BNX2FC_IO_DBG(io_req, "eh_abort for IO "
 				      "in retire_q\n");
-			if (io_req->wait_for_comp)
-				complete(&io_req->tm_done);
+			if (io_req->wait_for_abts_comp)
+				complete(&io_req->abts_done);
 		}
 		kref_put(&io_req->refcount, bnx2fc_cmd_release);
 	}
diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
index 66d6e1f4b3c3..da50e87921bc 100644
--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
+++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
@@ -1665,8 +1665,12 @@ static u8 get_iscsi_dcb_priority(struct net_device *ndev)
 		return 0;
 
 	if (caps & DCB_CAP_DCBX_VER_IEEE) {
-		iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_ANY;
+		iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_STREAM;
 		rv = dcb_ieee_getapp_mask(ndev, &iscsi_dcb_app);
+		if (!rv) {
+			iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_ANY;
+			rv = dcb_ieee_getapp_mask(ndev, &iscsi_dcb_app);
+		}
 	} else if (caps & DCB_CAP_DCBX_VER_CEE) {
 		iscsi_dcb_app.selector = DCB_APP_IDTYPE_PORTNUM;
 		rv = dcb_getapp(ndev, &iscsi_dcb_app);
@@ -2260,7 +2264,8 @@ cxgb4_dcb_change_notify(struct notifier_block *self, unsigned long val,
 	u8 priority;
 
 	if (iscsi_app->dcbx & DCB_CAP_DCBX_VER_IEEE) {
-		if (iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_ANY)
+		if ((iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_STREAM) &&
+		    (iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_ANY))
 			return NOTIFY_DONE;
 
 		priority = iscsi_app->app.priority;
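The cxgb4i hunk above changes the IEEE DCBX lookup to query the STREAM selector first and fall back to the wildcard ANY selector only when no stream entry is configured. The fallback pattern can be sketched as below, with a mocked-up lookup table standing in for dcb_ieee_getapp_mask() (all names here are illustrative):

```c
#include <assert.h>

enum sel { SEL_STREAM, SEL_ANY };

/*
 * Mock of the DCB app lookup: returns the priority mask configured
 * for a selector, or 0 when nothing is set for it.
 */
static int getapp_mask(const int table[2], enum sel s)
{
	return table[s];
}

/*
 * Query the stream selector first; fall back to the wildcard
 * selector only when no stream entry exists, mirroring the new
 * get_iscsi_dcb_priority() flow.
 */
static int iscsi_priority_mask(const int table[2])
{
	int rv = getapp_mask(table, SEL_STREAM);

	if (!rv)
		rv = getapp_mask(table, SEL_ANY);
	return rv;
}
```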
diff --git a/drivers/scsi/fdomain.c b/drivers/scsi/fdomain.c
new file mode 100644
index 000000000000..b5e66971b6d9
--- /dev/null
+++ b/drivers/scsi/fdomain.c
@@ -0,0 +1,597 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * Driver for Future Domain TMC-16x0 and TMC-3260 SCSI host adapters
4 * Copyright 2019 Ondrej Zary
5 *
6 * Original driver by
7 * Rickard E. Faith, faith@cs.unc.edu
8 *
9 * Future Domain BIOS versions supported for autodetect:
10 * 2.0, 3.0, 3.2, 3.4 (1.0), 3.5 (2.0), 3.6, 3.61
11 * Chips supported:
12 * TMC-1800, TMC-18C50, TMC-18C30, TMC-36C70
13 * Boards supported:
14 * Future Domain TMC-1650, TMC-1660, TMC-1670, TMC-1680, TMC-1610M/MER/MEX
15 * Future Domain TMC-3260 (PCI)
16 * Quantum ISA-200S, ISA-250MG
17 * Adaptec AHA-2920A (PCI) [BUT *NOT* AHA-2920C -- use aic7xxx instead]
18 * IBM ?
19 *
20 * NOTE:
21 *
22 * The Adaptec AHA-2920C has an Adaptec AIC-7850 chip on it.
23 * Use the aic7xxx driver for this board.
24 *
25 * The Adaptec AHA-2920A has a Future Domain chip on it, so this is the right
26 * driver for that card. Unfortunately, the boxes will probably just say
27 * "2920", so you'll have to look on the card for a Future Domain logo, or a
28 * letter after the 2920.
29 *
30 * If you have a TMC-8xx or TMC-9xx board, then this is not the driver for
31 * your board.
32 *
33 * DESCRIPTION:
34 *
35 * This is the Linux low-level SCSI driver for Future Domain TMC-1660/1680
36 * TMC-1650/1670, and TMC-3260 SCSI host adapters. The 1650 and 1670 have a
37 * 25-pin external connector, whereas the 1660 and 1680 have a SCSI-2 50-pin
38 * high-density external connector. The 1670 and 1680 have floppy disk
39 * controllers built in. The TMC-3260 is a PCI bus card.
40 *
41 * Future Domain's older boards are based on the TMC-1800 chip, and this
42 * driver was originally written for a TMC-1680 board with the TMC-1800 chip.
43 * More recently, boards are being produced with the TMC-18C50 and TMC-18C30
44 * chips.
45 *
46 * Please note that the drive ordering that Future Domain implemented in BIOS
47 * versions 3.4 and 3.5 is the opposite of the order (currently) used by the
48 * rest of the SCSI industry.
49 *
50 *
51 * REFERENCES USED:
52 *
53 * "TMC-1800 SCSI Chip Specification (FDC-1800T)", Future Domain Corporation,
54 * 1990.
55 *
56 * "Technical Reference Manual: 18C50 SCSI Host Adapter Chip", Future Domain
57 * Corporation, January 1992.
58 *
59 * "LXT SCSI Products: Specifications and OEM Technical Manual (Revision
60 * B/September 1991)", Maxtor Corporation, 1991.
61 *
62 * "7213S product Manual (Revision P3)", Maxtor Corporation, 1992.
63 *
64 * "Draft Proposed American National Standard: Small Computer System
65 * Interface - 2 (SCSI-2)", Global Engineering Documents. (X3T9.2/86-109,
66 * revision 10h, October 17, 1991)
67 *
68 * Private communications, Drew Eckhardt (drew@cs.colorado.edu) and Eric
69 * Youngdale (ericy@cais.com), 1992.
70 *
71 * Private communication, Tuong Le (Future Domain Engineering department),
72 * 1994. (Disk geometry computations for Future Domain BIOS version 3.4, and
73 * TMC-18C30 detection.)
74 *
75 * Hogan, Thom. The Programmer's PC Sourcebook. Microsoft Press, 1988. Page
76 * 60 (2.39: Disk Partition Table Layout).
77 *
78 * "18C30 Technical Reference Manual", Future Domain Corporation, 1993, page
79 * 6-1.
80 */
81
82#include <linux/module.h>
83#include <linux/interrupt.h>
84#include <linux/delay.h>
85#include <linux/pci.h>
86#include <linux/workqueue.h>
87#include <scsi/scsicam.h>
88#include <scsi/scsi_cmnd.h>
89#include <scsi/scsi_device.h>
90#include <scsi/scsi_host.h>
91#include "fdomain.h"
92
93/*
 94 * FIFO_COUNT: The host adapter has an 8K cache (host adapters based on the
 95 * 18C30 chip have a 2K cache). When this many 512-byte blocks are filled by
 96 * the SCSI device, an interrupt will be raised. Therefore, this could be as
97 * low as 0, or as high as 16. Note, however, that values which are too high
98 * or too low seem to prevent any interrupts from occurring, and thereby lock
99 * up the machine.
100 */
101#define FIFO_COUNT 2 /* Number of 512 byte blocks before INTR */
102#define PARITY_MASK ACTL_PAREN /* Parity enabled, 0 = disabled */
103
104enum chip_type {
105 unknown = 0x00,
106 tmc1800 = 0x01,
107 tmc18c50 = 0x02,
108 tmc18c30 = 0x03,
109};
110
111struct fdomain {
112 int base;
113 struct scsi_cmnd *cur_cmd;
114 enum chip_type chip;
115 struct work_struct work;
116};
117
118static inline void fdomain_make_bus_idle(struct fdomain *fd)
119{
120 outb(0, fd->base + REG_BCTL);
121 outb(0, fd->base + REG_MCTL);
122 if (fd->chip == tmc18c50 || fd->chip == tmc18c30)
123 /* Clear forced intr. */
124 outb(ACTL_RESET | ACTL_CLRFIRQ | PARITY_MASK,
125 fd->base + REG_ACTL);
126 else
127 outb(ACTL_RESET | PARITY_MASK, fd->base + REG_ACTL);
128}
129
130static enum chip_type fdomain_identify(int port)
131{
132 u16 id = inb(port + REG_ID_LSB) | inb(port + REG_ID_MSB) << 8;
133
134 switch (id) {
135 case 0x6127:
136 return tmc1800;
137 case 0x60e9: /* 18c50 or 18c30 */
138 break;
139 default:
140 return unknown;
141 }
142
143 /* Try to toggle 32-bit mode. This only works on an 18c30 chip. */
144 outb(CFG2_32BIT, port + REG_CFG2);
145 if ((inb(port + REG_CFG2) & CFG2_32BIT)) {
146 outb(0, port + REG_CFG2);
147 if ((inb(port + REG_CFG2) & CFG2_32BIT) == 0)
148 return tmc18c30;
149 }
150 /* If that failed, we are an 18c50. */
151 return tmc18c50;
152}
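The 18c30/18c50 discrimination above relies on CFG2 bit 7 being writable only on the 18c30. A minimal userspace sketch of that probe, using a hypothetical fake-register model (`fake_chip`, `fake_outb`, `fake_inb` are illustration-only names) in place of real port I/O:

```c
#include <assert.h>
#include <stdint.h>

#define FAKE_CFG2_32BIT 0x80	/* mirrors CFG2_32BIT */

/* Hypothetical register model: an 18c30 latches writes to CFG2 bit 7,
 * an 18c50 ignores them. */
struct fake_chip {
	int is_18c30;
	uint8_t cfg2;
};

void fake_outb(struct fake_chip *c, uint8_t v)
{
	if (c->is_18c30)
		c->cfg2 = v;	/* bit is writable only on the 18c30 */
}

uint8_t fake_inb(const struct fake_chip *c)
{
	return c->cfg2;
}

/* Same toggle sequence as fdomain_identify(): set the bit, verify it
 * stuck, clear it, verify it cleared.  Returns 1 for 18c30, 0 for 18c50. */
int identify_18c30(struct fake_chip *c)
{
	fake_outb(c, FAKE_CFG2_32BIT);
	if (fake_inb(c) & FAKE_CFG2_32BIT) {
		fake_outb(c, 0);
		if ((fake_inb(c) & FAKE_CFG2_32BIT) == 0)
			return 1;
	}
	return 0;
}
```

The two-step toggle matters: a register that reads back 0x80 for some unrelated reason would still fail the clear-and-recheck step.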
153
154static int fdomain_test_loopback(int base)
155{
156 int i;
157
158 for (i = 0; i < 255; i++) {
159 outb(i, base + REG_LOOPBACK);
160 if (inb(base + REG_LOOPBACK) != i)
161 return 1;
162 }
163
164 return 0;
165}
166
167static void fdomain_reset(int base)
168{
169 outb(1, base + REG_BCTL);
170 mdelay(20);
171 outb(0, base + REG_BCTL);
172 mdelay(1150);
173 outb(0, base + REG_MCTL);
174 outb(PARITY_MASK, base + REG_ACTL);
175}
176
177static int fdomain_select(struct Scsi_Host *sh, int target)
178{
179 int status;
180 unsigned long timeout;
181 struct fdomain *fd = shost_priv(sh);
182
183 outb(BCTL_BUSEN | BCTL_SEL, fd->base + REG_BCTL);
184 outb(BIT(sh->this_id) | BIT(target), fd->base + REG_SCSI_DATA_NOACK);
185
186 /* Stop arbitration and enable parity */
187 outb(PARITY_MASK, fd->base + REG_ACTL);
188
189 timeout = 350; /* 350 msec */
190
191 do {
192 status = inb(fd->base + REG_BSTAT);
193 if (status & BSTAT_BSY) {
194 /* Enable SCSI Bus */
195 /* (on error, should make bus idle with 0) */
196 outb(BCTL_BUSEN, fd->base + REG_BCTL);
197 return 0;
198 }
199 mdelay(1);
200 } while (--timeout);
201 fdomain_make_bus_idle(fd);
202 return 1;
203}
204
205static void fdomain_finish_cmd(struct fdomain *fd, int result)
206{
207 outb(0, fd->base + REG_ICTL);
208 fdomain_make_bus_idle(fd);
209 fd->cur_cmd->result = result;
210 fd->cur_cmd->scsi_done(fd->cur_cmd);
211 fd->cur_cmd = NULL;
212}
213
214static void fdomain_read_data(struct scsi_cmnd *cmd)
215{
216 struct fdomain *fd = shost_priv(cmd->device->host);
217 unsigned char *virt, *ptr;
218 size_t offset, len;
219
220 while ((len = inw(fd->base + REG_FIFO_COUNT)) > 0) {
221 offset = scsi_bufflen(cmd) - scsi_get_resid(cmd);
222 virt = scsi_kmap_atomic_sg(scsi_sglist(cmd), scsi_sg_count(cmd),
223 &offset, &len);
224 ptr = virt + offset;
225 if (len & 1)
226 *ptr++ = inb(fd->base + REG_FIFO);
227 if (len > 1)
228 insw(fd->base + REG_FIFO, ptr, len >> 1);
229 scsi_set_resid(cmd, scsi_get_resid(cmd) - len);
230 scsi_kunmap_atomic_sg(virt);
231 }
232}
233
234static void fdomain_write_data(struct scsi_cmnd *cmd)
235{
236 struct fdomain *fd = shost_priv(cmd->device->host);
237 /* 8k FIFO for pre-tmc18c30 chips, 2k FIFO for tmc18c30 */
238 int FIFO_Size = fd->chip == tmc18c30 ? 0x800 : 0x2000;
239 unsigned char *virt, *ptr;
240 size_t offset, len;
241
242 while ((len = FIFO_Size - inw(fd->base + REG_FIFO_COUNT)) > 512) {
243 offset = scsi_bufflen(cmd) - scsi_get_resid(cmd);
244 if (len + offset > scsi_bufflen(cmd)) {
245 len = scsi_bufflen(cmd) - offset;
246 if (len == 0)
247 break;
248 }
249 virt = scsi_kmap_atomic_sg(scsi_sglist(cmd), scsi_sg_count(cmd),
250 &offset, &len);
251 ptr = virt + offset;
252 if (len & 1)
253 outb(*ptr++, fd->base + REG_FIFO);
254 if (len > 1)
255 outsw(fd->base + REG_FIFO, ptr, len >> 1);
256 scsi_set_resid(cmd, scsi_get_resid(cmd) - len);
257 scsi_kunmap_atomic_sg(virt);
258 }
259}
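The length clamp at the top of fdomain_write_data() keeps the driver from pushing more bytes into the FIFO than remain in the buffer. A hedged sketch of just that arithmetic, outside the kernel (`clamp_len` is a hypothetical name, not part of the driver):

```c
#include <assert.h>
#include <stddef.h>

/* fifo_room: free bytes in the FIFO; bufflen: total transfer size;
 * resid: bytes not yet transferred.  Returns how many bytes may be
 * written in this pass, mirroring the clamp in fdomain_write_data(). */
size_t clamp_len(size_t fifo_room, size_t bufflen, size_t resid)
{
	size_t offset = bufflen - resid;	/* bytes already sent */
	size_t len = fifo_room;

	if (len + offset > bufflen)
		len = bufflen - offset;		/* clamp to what is left */
	return len;
}
```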
260
261static void fdomain_work(struct work_struct *work)
262{
263 struct fdomain *fd = container_of(work, struct fdomain, work);
264 struct Scsi_Host *sh = container_of((void *)fd, struct Scsi_Host,
265 hostdata);
266 struct scsi_cmnd *cmd = fd->cur_cmd;
267 unsigned long flags;
268 int status;
269 int done = 0;
270
271 spin_lock_irqsave(sh->host_lock, flags);
272
273 if (cmd->SCp.phase & in_arbitration) {
274 status = inb(fd->base + REG_ASTAT);
275 if (!(status & ASTAT_ARB)) {
276 fdomain_finish_cmd(fd, DID_BUS_BUSY << 16);
277 goto out;
278 }
279 cmd->SCp.phase = in_selection;
280
281 outb(ICTL_SEL | FIFO_COUNT, fd->base + REG_ICTL);
282 outb(BCTL_BUSEN | BCTL_SEL, fd->base + REG_BCTL);
283 outb(BIT(cmd->device->host->this_id) | BIT(scmd_id(cmd)),
284 fd->base + REG_SCSI_DATA_NOACK);
285 /* Stop arbitration and enable parity */
286 outb(ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL);
287 goto out;
288 } else if (cmd->SCp.phase & in_selection) {
289 status = inb(fd->base + REG_BSTAT);
290 if (!(status & BSTAT_BSY)) {
291 /* Try again, for slow devices */
292 if (fdomain_select(cmd->device->host, scmd_id(cmd))) {
293 fdomain_finish_cmd(fd, DID_NO_CONNECT << 16);
294 goto out;
295 }
296 /* Stop arbitration and enable parity */
297 outb(ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL);
298 }
299 cmd->SCp.phase = in_other;
300 outb(ICTL_FIFO | ICTL_REQ | FIFO_COUNT, fd->base + REG_ICTL);
301 outb(BCTL_BUSEN, fd->base + REG_BCTL);
302 goto out;
303 }
304
305 /* cur_cmd->SCp.phase == in_other: this is the body of the routine */
306 status = inb(fd->base + REG_BSTAT);
307
308 if (status & BSTAT_REQ) {
309 switch (status & 0x0e) {
310 case BSTAT_CMD: /* COMMAND OUT */
311 outb(cmd->cmnd[cmd->SCp.sent_command++],
312 fd->base + REG_SCSI_DATA);
313 break;
314 case 0: /* DATA OUT -- tmc18c50/tmc18c30 only */
315 if (fd->chip != tmc1800 && !cmd->SCp.have_data_in) {
316 cmd->SCp.have_data_in = -1;
317 outb(ACTL_IRQEN | ACTL_FIFOWR | ACTL_FIFOEN |
318 PARITY_MASK, fd->base + REG_ACTL);
319 }
320 break;
321 case BSTAT_IO: /* DATA IN -- tmc18c50/tmc18c30 only */
322 if (fd->chip != tmc1800 && !cmd->SCp.have_data_in) {
323 cmd->SCp.have_data_in = 1;
324 outb(ACTL_IRQEN | ACTL_FIFOEN | PARITY_MASK,
325 fd->base + REG_ACTL);
326 }
327 break;
328 case BSTAT_CMD | BSTAT_IO: /* STATUS IN */
329 cmd->SCp.Status = inb(fd->base + REG_SCSI_DATA);
330 break;
331 case BSTAT_MSG | BSTAT_CMD: /* MESSAGE OUT */
332 outb(MESSAGE_REJECT, fd->base + REG_SCSI_DATA);
333 break;
334 case BSTAT_MSG | BSTAT_IO | BSTAT_CMD: /* MESSAGE IN */
335 cmd->SCp.Message = inb(fd->base + REG_SCSI_DATA);
336 if (!cmd->SCp.Message)
337 ++done;
338 break;
339 }
340 }
341
342 if (fd->chip == tmc1800 && !cmd->SCp.have_data_in &&
343 cmd->SCp.sent_command >= cmd->cmd_len) {
344 if (cmd->sc_data_direction == DMA_TO_DEVICE) {
345 cmd->SCp.have_data_in = -1;
346 outb(ACTL_IRQEN | ACTL_FIFOWR | ACTL_FIFOEN |
347 PARITY_MASK, fd->base + REG_ACTL);
348 } else {
349 cmd->SCp.have_data_in = 1;
350 outb(ACTL_IRQEN | ACTL_FIFOEN | PARITY_MASK,
351 fd->base + REG_ACTL);
352 }
353 }
354
355 if (cmd->SCp.have_data_in == -1) /* DATA OUT */
356 fdomain_write_data(cmd);
357
358 if (cmd->SCp.have_data_in == 1) /* DATA IN */
359 fdomain_read_data(cmd);
360
361 if (done) {
362 fdomain_finish_cmd(fd, (cmd->SCp.Status & 0xff) |
363 ((cmd->SCp.Message & 0xff) << 8) |
364 (DID_OK << 16));
365 } else {
366 if (cmd->SCp.phase & disconnect) {
367 outb(ICTL_FIFO | ICTL_SEL | ICTL_REQ | FIFO_COUNT,
368 fd->base + REG_ICTL);
369 outb(0, fd->base + REG_BCTL);
370 } else
371 outb(ICTL_FIFO | ICTL_REQ | FIFO_COUNT,
372 fd->base + REG_ICTL);
373 }
374out:
375 spin_unlock_irqrestore(sh->host_lock, flags);
376}
377
378static irqreturn_t fdomain_irq(int irq, void *dev_id)
379{
380 struct fdomain *fd = dev_id;
381
382 /* Is it our IRQ? */
383 if ((inb(fd->base + REG_ASTAT) & ASTAT_IRQ) == 0)
384 return IRQ_NONE;
385
386 outb(0, fd->base + REG_ICTL);
387
388 /* We usually have one spurious interrupt after each command. */
389 if (!fd->cur_cmd) /* Spurious interrupt */
390 return IRQ_NONE;
391
392 schedule_work(&fd->work);
393
394 return IRQ_HANDLED;
395}
396
397static int fdomain_queue(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
398{
399 struct fdomain *fd = shost_priv(cmd->device->host);
400 unsigned long flags;
401
402 cmd->SCp.Status = 0;
403 cmd->SCp.Message = 0;
404 cmd->SCp.have_data_in = 0;
405 cmd->SCp.sent_command = 0;
406 cmd->SCp.phase = in_arbitration;
407 scsi_set_resid(cmd, scsi_bufflen(cmd));
408
409 spin_lock_irqsave(sh->host_lock, flags);
410
411 fd->cur_cmd = cmd;
412
413 fdomain_make_bus_idle(fd);
414
415 /* Start arbitration */
416 outb(0, fd->base + REG_ICTL);
417 outb(0, fd->base + REG_BCTL); /* Disable data drivers */
418 /* Set our id bit */
419 outb(BIT(cmd->device->host->this_id), fd->base + REG_SCSI_DATA_NOACK);
420 outb(ICTL_ARB, fd->base + REG_ICTL);
421 /* Start arbitration */
422 outb(ACTL_ARB | ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL);
423
424 spin_unlock_irqrestore(sh->host_lock, flags);
425
426 return 0;
427}
428
429static int fdomain_abort(struct scsi_cmnd *cmd)
430{
431 struct Scsi_Host *sh = cmd->device->host;
432 struct fdomain *fd = shost_priv(sh);
433 unsigned long flags;
434
435 if (!fd->cur_cmd)
436 return FAILED;
437
438 spin_lock_irqsave(sh->host_lock, flags);
439
440 fdomain_make_bus_idle(fd);
441 fd->cur_cmd->SCp.phase |= aborted;
442 fd->cur_cmd->result = DID_ABORT << 16;
443
444 /* Aborts are not done well. . . */
445 fdomain_finish_cmd(fd, DID_ABORT << 16);
446 spin_unlock_irqrestore(sh->host_lock, flags);
447 return SUCCESS;
448}
449
450static int fdomain_host_reset(struct scsi_cmnd *cmd)
451{
452 struct Scsi_Host *sh = cmd->device->host;
453 struct fdomain *fd = shost_priv(sh);
454 unsigned long flags;
455
456 spin_lock_irqsave(sh->host_lock, flags);
457 fdomain_reset(fd->base);
458 spin_unlock_irqrestore(sh->host_lock, flags);
459 return SUCCESS;
460}
461
462static int fdomain_biosparam(struct scsi_device *sdev,
463 struct block_device *bdev, sector_t capacity,
464 int geom[])
465{
466 unsigned char *p = scsi_bios_ptable(bdev);
467
468 if (p && p[65] == 0xaa && p[64] == 0x55 /* Partition table valid */
469 && p[4]) { /* Partition type */
470 geom[0] = p[5] + 1; /* heads */
471 geom[1] = p[6] & 0x3f; /* sectors */
472 } else {
473 if (capacity >= 0x7e0000) {
474 geom[0] = 255; /* heads */
475 geom[1] = 63; /* sectors */
476 } else if (capacity >= 0x200000) {
477 geom[0] = 128; /* heads */
478 geom[1] = 63; /* sectors */
479 } else {
480 geom[0] = 64; /* heads */
481 geom[1] = 32; /* sectors */
482 }
483 }
484 geom[2] = sector_div(capacity, geom[0] * geom[1]);
485 kfree(p);
486
487 return 0;
488}
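When no valid partition table is found, the fallback branch above picks a heads/sectors pair from the raw capacity and derives cylinders. A stand-alone sketch of that heuristic (`fallback_geom` is a hypothetical name; the kernel driver divides with sector_div(), plain 64-bit division stands in here):

```c
#include <assert.h>
#include <stdint.h>

/* capacity is in 512-byte sectors; geom receives heads, sectors,
 * cylinders -- the same thresholds as fdomain_biosparam(). */
void fallback_geom(uint64_t capacity, int geom[3])
{
	if (capacity >= 0x7e0000) {		/* roughly 4 GB and up */
		geom[0] = 255;			/* heads */
		geom[1] = 63;			/* sectors */
	} else if (capacity >= 0x200000) {	/* roughly 1 GB and up */
		geom[0] = 128;
		geom[1] = 63;
	} else {
		geom[0] = 64;
		geom[1] = 32;
	}
	geom[2] = (int)(capacity / ((uint64_t)geom[0] * geom[1]));
}
```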
489
490static struct scsi_host_template fdomain_template = {
491 .module = THIS_MODULE,
492 .name = "Future Domain TMC-16x0",
493 .proc_name = "fdomain",
494 .queuecommand = fdomain_queue,
495 .eh_abort_handler = fdomain_abort,
496 .eh_host_reset_handler = fdomain_host_reset,
497 .bios_param = fdomain_biosparam,
498 .can_queue = 1,
499 .this_id = 7,
500 .sg_tablesize = 64,
501 .dma_boundary = PAGE_SIZE - 1,
502};
503
504struct Scsi_Host *fdomain_create(int base, int irq, int this_id,
505 struct device *dev)
506{
507 struct Scsi_Host *sh;
508 struct fdomain *fd;
509 enum chip_type chip;
510 static const char * const chip_names[] = {
511 "Unknown", "TMC-1800", "TMC-18C50", "TMC-18C30"
512 };
513 unsigned long irq_flags = 0;
514
515 chip = fdomain_identify(base);
516 if (!chip)
517 return NULL;
518
519 fdomain_reset(base);
520
521 if (fdomain_test_loopback(base))
522 return NULL;
523
524 if (!irq) {
525 dev_err(dev, "card has no IRQ assigned");
526 return NULL;
527 }
528
529 sh = scsi_host_alloc(&fdomain_template, sizeof(struct fdomain));
530 if (!sh)
531 return NULL;
532
533 if (this_id)
534 sh->this_id = this_id & 0x07;
535
536 sh->irq = irq;
537 sh->io_port = base;
538 sh->n_io_port = FDOMAIN_REGION_SIZE;
539
540 fd = shost_priv(sh);
541 fd->base = base;
542 fd->chip = chip;
543 INIT_WORK(&fd->work, fdomain_work);
544
545 if (dev_is_pci(dev) || !strcmp(dev->bus->name, "pcmcia"))
546 irq_flags = IRQF_SHARED;
547
548 if (request_irq(irq, fdomain_irq, irq_flags, "fdomain", fd))
549 goto fail_put;
550
551 shost_printk(KERN_INFO, sh, "%s chip at 0x%x irq %d SCSI ID %d\n",
552 dev_is_pci(dev) ? "TMC-36C70 (PCI bus)" : chip_names[chip],
553 base, irq, sh->this_id);
554
555 if (scsi_add_host(sh, dev))
556 goto fail_free_irq;
557
558 scsi_scan_host(sh);
559
560 return sh;
561
562fail_free_irq:
563 free_irq(irq, fd);
564fail_put:
565 scsi_host_put(sh);
566 return NULL;
567}
568EXPORT_SYMBOL_GPL(fdomain_create);
569
570int fdomain_destroy(struct Scsi_Host *sh)
571{
572 struct fdomain *fd = shost_priv(sh);
573
574 cancel_work_sync(&fd->work);
575 scsi_remove_host(sh);
576 if (sh->irq)
577 free_irq(sh->irq, fd);
578 scsi_host_put(sh);
579 return 0;
580}
581EXPORT_SYMBOL_GPL(fdomain_destroy);
582
583#ifdef CONFIG_PM_SLEEP
584static int fdomain_resume(struct device *dev)
585{
586 struct fdomain *fd = shost_priv(dev_get_drvdata(dev));
587
588 fdomain_reset(fd->base);
589 return 0;
590}
591
592static SIMPLE_DEV_PM_OPS(fdomain_pm_ops, NULL, fdomain_resume);
593#endif /* CONFIG_PM_SLEEP */
594
595MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
596MODULE_DESCRIPTION("Future Domain TMC-16x0/TMC-3260 SCSI driver");
597MODULE_LICENSE("GPL");
diff --git a/drivers/scsi/fdomain.h b/drivers/scsi/fdomain.h
new file mode 100644
index 000000000000..6f63fc6b0d12
--- /dev/null
+++ b/drivers/scsi/fdomain.h
@@ -0,0 +1,114 @@
1/* SPDX-License-Identifier: GPL-2.0 */
2
3#define FDOMAIN_REGION_SIZE 0x10
4#define FDOMAIN_BIOS_SIZE 0x2000
5
6enum {
7 in_arbitration = 0x02,
8 in_selection = 0x04,
9 in_other = 0x08,
10 disconnect = 0x10,
11 aborted = 0x20,
12 sent_ident = 0x40,
13};
14
15/* (@) = not present on TMC1800, (#) = not present on TMC1800 and TMC18C50 */
16#define REG_SCSI_DATA 0 /* R/W: SCSI Data (with ACK) */
17#define REG_BSTAT 1 /* R: SCSI Bus Status */
18#define BSTAT_BSY BIT(0) /* Busy */
19#define BSTAT_MSG BIT(1) /* Message */
20#define BSTAT_IO BIT(2) /* Input/Output */
21#define BSTAT_CMD BIT(3) /* Command/Data */
22#define BSTAT_REQ BIT(4) /* Request and Not Ack */
23#define BSTAT_SEL BIT(5) /* Select */
24#define BSTAT_ACK BIT(6) /* Acknowledge and Request */
25#define BSTAT_ATN BIT(7) /* Attention */
26#define REG_BCTL 1 /* W: SCSI Bus Control */
27#define BCTL_RST BIT(0) /* Bus Reset */
28#define BCTL_SEL BIT(1) /* Select */
29#define BCTL_BSY BIT(2) /* Busy */
30#define BCTL_ATN BIT(3) /* Attention */
31#define BCTL_IO BIT(4) /* Input/Output */
32#define BCTL_CMD BIT(5) /* Command/Data */
33#define BCTL_MSG BIT(6) /* Message */
34#define BCTL_BUSEN BIT(7) /* Enable bus drivers */
35#define REG_ASTAT 2 /* R: Adapter Status 1 */
36#define ASTAT_IRQ BIT(0) /* Interrupt active */
37#define ASTAT_ARB BIT(1) /* Arbitration complete */
38#define ASTAT_PARERR BIT(2) /* Parity error */
39#define ASTAT_RST BIT(3) /* SCSI reset occurred */
40#define ASTAT_FIFODIR BIT(4) /* FIFO direction */
41#define ASTAT_FIFOEN BIT(5) /* FIFO enabled */
42#define ASTAT_PAREN BIT(6) /* Parity enabled */
43#define ASTAT_BUSEN BIT(7) /* Bus drivers enabled */
44#define REG_ICTL 2 /* W: Interrupt Control */
45#define ICTL_FIFO_MASK 0x0f /* FIFO threshold, 1/16 FIFO size */
46#define ICTL_FIFO BIT(4) /* Int. on FIFO count */
47#define ICTL_ARB BIT(5) /* Int. on Arbitration complete */
48#define ICTL_SEL BIT(6) /* Int. on SCSI Select */
49#define ICTL_REQ BIT(7) /* Int. on SCSI Request */
50#define REG_FSTAT 3 /* R: Adapter Status 2 (FIFO) - (@) */
51#define FSTAT_ONOTEMPTY BIT(0) /* Output FIFO not empty */
52#define FSTAT_INOTEMPTY BIT(1) /* Input FIFO not empty */
53#define FSTAT_NOTEMPTY BIT(2) /* Main FIFO not empty */
54#define FSTAT_NOTFULL BIT(3) /* Main FIFO not full */
55#define REG_MCTL 3 /* W: SCSI Data Mode Control */
56#define MCTL_ACK_MASK 0x0f /* Acknowledge period */
57#define MCTL_ACTDEASS BIT(4) /* Active deassert of REQ and ACK */
58#define MCTL_TARGET BIT(5) /* Enable target mode */
59#define MCTL_FASTSYNC BIT(6) /* Enable Fast Synchronous */
60#define MCTL_SYNC BIT(7) /* Enable Synchronous */
61#define REG_INTCOND 4 /* R: Interrupt Condition - (@) */
62#define IRQ_FIFO BIT(1) /* FIFO interrupt */
63#define IRQ_REQ BIT(2) /* SCSI Request interrupt */
64#define IRQ_SEL BIT(3) /* SCSI Select interrupt */
65#define IRQ_ARB BIT(4) /* SCSI Arbitration interrupt */
66#define IRQ_RST BIT(5) /* SCSI Reset interrupt */
67#define IRQ_FORCED BIT(6) /* Forced interrupt */
68#define IRQ_TIMEOUT BIT(7) /* Bus timeout */
69#define REG_ACTL 4 /* W: Adapter Control 1 */
70#define ACTL_RESET BIT(0) /* Reset FIFO, parity, reset int. */
71#define ACTL_FIRQ BIT(1) /* Set Forced interrupt */
72#define ACTL_ARB BIT(2) /* Initiate Bus Arbitration */
73#define ACTL_PAREN BIT(3) /* Enable SCSI Parity */
74#define ACTL_IRQEN BIT(4) /* Enable interrupts */
75#define ACTL_CLRFIRQ BIT(5) /* Clear Forced interrupt */
76#define ACTL_FIFOWR BIT(6) /* FIFO Direction (1=write) */
77#define ACTL_FIFOEN BIT(7) /* Enable FIFO */
78#define REG_ID_LSB 5 /* R: ID Code (LSB) */
79#define REG_ACTL2 5 /* Adapter Control 2 - (@) */
80#define ACTL2_RAMOVRLY BIT(0) /* Enable RAM overlay */
81#define ACTL2_SLEEP BIT(7) /* Sleep mode */
82#define REG_ID_MSB 6 /* R: ID Code (MSB) */
83#define REG_LOOPBACK 7 /* R/W: Loopback */
84#define REG_SCSI_DATA_NOACK 8 /* R/W: SCSI Data (no ACK) */
85#define REG_ASTAT3 9 /* R: Adapter Status 3 */
86#define ASTAT3_ACTDEASS BIT(0) /* Active deassert enabled */
87#define ASTAT3_RAMOVRLY BIT(1) /* RAM overlay enabled */
88#define ASTAT3_TARGERR BIT(2) /* Target error */
89#define ASTAT3_IRQEN BIT(3) /* Interrupts enabled */
90#define ASTAT3_IRQMASK 0xf0 /* Enabled interrupts mask */
91#define REG_CFG1 10 /* R: Configuration Register 1 */
92#define CFG1_BUS BIT(0) /* 0 = ISA */
93#define CFG1_IRQ_MASK 0x0e /* IRQ jumpers */
94#define CFG1_IO_MASK 0x30 /* I/O base jumpers */
95#define CFG1_BIOS_MASK 0xc0 /* BIOS base jumpers */
96#define REG_CFG2 11 /* R/W: Configuration Register 2 (@) */
97#define CFG2_ROMDIS BIT(0) /* ROM disabled */
98#define CFG2_RAMDIS BIT(1) /* RAM disabled */
99#define CFG2_IRQEDGE BIT(2) /* Edge-triggered interrupts */
100#define CFG2_NOWS BIT(3) /* No wait states */
101#define CFG2_32BIT BIT(7) /* 32-bit mode */
102#define REG_FIFO 12 /* R/W: FIFO */
103#define REG_FIFO_COUNT 14 /* R: FIFO Data Count */
104
105#ifdef CONFIG_PM_SLEEP
106static const struct dev_pm_ops fdomain_pm_ops;
107#define FDOMAIN_PM_OPS (&fdomain_pm_ops)
108#else
109#define FDOMAIN_PM_OPS NULL
110#endif /* CONFIG_PM_SLEEP */
111
112struct Scsi_Host *fdomain_create(int base, int irq, int this_id,
113 struct device *dev);
114int fdomain_destroy(struct Scsi_Host *sh);
diff --git a/drivers/scsi/fdomain_isa.c b/drivers/scsi/fdomain_isa.c
new file mode 100644
index 000000000000..28639adf8219
--- /dev/null
+++ b/drivers/scsi/fdomain_isa.c
@@ -0,0 +1,222 @@
1// SPDX-License-Identifier: GPL-2.0
2
3#include <linux/module.h>
4#include <linux/io.h>
5#include <linux/isa.h>
6#include <scsi/scsi_host.h>
7#include "fdomain.h"
8
9#define MAXBOARDS_PARAM 4
10static int io[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
11module_param_hw_array(io, int, ioport, NULL, 0);
12MODULE_PARM_DESC(io, "base I/O address of controller (0x140, 0x150, 0x160, 0x170)");
13
14static int irq[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
15module_param_hw_array(irq, int, irq, NULL, 0);
16MODULE_PARM_DESC(irq, "IRQ of controller (0=auto [default])");
17
18static int scsi_id[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
19module_param_hw_array(scsi_id, int, other, NULL, 0);
20MODULE_PARM_DESC(scsi_id, "SCSI ID of controller (default = 7)");
21
22static unsigned long addresses[] = {
23 0xc8000,
24 0xca000,
25 0xce000,
26 0xde000,
27};
28#define ADDRESS_COUNT ARRAY_SIZE(addresses)
29
30static unsigned short ports[] = { 0x140, 0x150, 0x160, 0x170 };
31#define PORT_COUNT ARRAY_SIZE(ports)
32
33static unsigned short irqs[] = { 3, 5, 10, 11, 12, 14, 15, 0 };
34
35/* This driver works *ONLY* for Future Domain cards using the TMC-1800,
36 * TMC-18C50, or TMC-18C30 chip. This includes models TMC-1650, 1660, 1670,
37 * and 1680. These are all 16-bit cards.
 38 * BIOS versions prior to 3.2 assigned SCSI ID 6 to the SCSI adapter.
39 *
 40 * The following BIOS signatures are for boards which do *NOT*
41 * work with this driver (these TMC-8xx and TMC-9xx boards may work with the
42 * Seagate driver):
43 *
44 * FUTURE DOMAIN CORP. (C) 1986-1988 V4.0I 03/16/88
45 * FUTURE DOMAIN CORP. (C) 1986-1989 V5.0C2/14/89
46 * FUTURE DOMAIN CORP. (C) 1986-1989 V6.0A7/28/89
47 * FUTURE DOMAIN CORP. (C) 1986-1990 V6.0105/31/90
48 * FUTURE DOMAIN CORP. (C) 1986-1990 V6.0209/18/90
49 * FUTURE DOMAIN CORP. (C) 1986-1990 V7.009/18/90
50 * FUTURE DOMAIN CORP. (C) 1992 V8.00.004/02/92
51 *
52 * (The cards which do *NOT* work are all 8-bit cards -- although some of
53 * them have a 16-bit form-factor, the upper 8-bits are used only for IRQs
54 * and are *NOT* used for data. You can tell the difference by following
55 * the tracings on the circuit board -- if only the IRQ lines are involved,
 56 * you have an "8-bit" card, and should *NOT* use this driver.)
57 */
58
59static struct signature {
60 const char *signature;
61 int offset;
62 int length;
63 int this_id;
64 int base_offset;
65} signatures[] = {
66/* 1 2 3 4 5 6 */
67/* 123456789012345678901234567890123456789012345678901234567890 */
68{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.07/28/89", 5, 50, 6, 0x1fcc },
69{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V1.07/28/89", 5, 50, 6, 0x1fcc },
70{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.07/28/89", 72, 50, 6, 0x1fa2 },
71{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.0", 73, 43, 6, 0x1fa2 },
72{ "FUTURE DOMAIN CORP. (C) 1991 1800-V2.0.", 72, 39, 6, 0x1fa3 },
73{ "FUTURE DOMAIN CORP. (C) 1992 V3.00.004/02/92", 5, 44, 6, 0 },
74{ "FUTURE DOMAIN TMC-18XX (C) 1993 V3.203/12/93", 5, 44, 7, 0 },
75{ "IBM F1 P2 BIOS v1.0011/09/92", 5, 28, 7, 0x1ff3 },
76{ "IBM F1 P2 BIOS v1.0104/29/93", 5, 28, 7, 0 },
77{ "Future Domain Corp. V1.0008/18/93", 5, 33, 7, 0 },
78{ "Future Domain Corp. V2.0108/18/93", 5, 33, 7, 0 },
79{ "FUTURE DOMAIN CORP. V3.5008/18/93", 5, 34, 7, 0 },
80{ "FUTURE DOMAIN 18c30/18c50/1800 (C) 1994 V3.5", 5, 44, 7, 0 },
81{ "FUTURE DOMAIN CORP. V3.6008/18/93", 5, 34, 7, 0 },
82{ "FUTURE DOMAIN CORP. V3.6108/18/93", 5, 34, 7, 0 },
83};
84#define SIGNATURE_COUNT ARRAY_SIZE(signatures)
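Each table entry above pins a string at a fixed offset inside the 8 KiB BIOS image, so detection is a straight byte compare. A minimal sketch of that check (`bios_matches` is a hypothetical helper; the kernel does this with check_signature() on ioremap()ed memory):

```c
#include <assert.h>
#include <string.h>

/* Compare `length` bytes of the BIOS image at `offset` against `sig`,
 * as the signature scan in fdomain_isa_match() does per table entry. */
int bios_matches(const unsigned char *bios, const char *sig,
		 int offset, int length)
{
	return memcmp(bios + offset, sig, length) == 0;
}
```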
85
86static int fdomain_isa_match(struct device *dev, unsigned int ndev)
87{
88 struct Scsi_Host *sh;
89 int i, base = 0, irq = 0;
90 unsigned long bios_base = 0;
91 struct signature *sig = NULL;
92 void __iomem *p;
93 static struct signature *saved_sig;
94 int this_id = 7;
95
96 if (ndev < ADDRESS_COUNT) { /* scan supported ISA BIOS addresses */
97 p = ioremap(addresses[ndev], FDOMAIN_BIOS_SIZE);
98 if (!p)
99 return 0;
100 for (i = 0; i < SIGNATURE_COUNT; i++)
101 if (check_signature(p + signatures[i].offset,
102 signatures[i].signature,
103 signatures[i].length))
104 break;
105 if (i == SIGNATURE_COUNT) /* no signature found */
106 goto fail_unmap;
107 sig = &signatures[i];
108 bios_base = addresses[ndev];
109 /* read I/O base from BIOS area */
110 if (sig->base_offset)
111 base = readb(p + sig->base_offset) +
112 (readb(p + sig->base_offset + 1) << 8);
113 iounmap(p);
114 if (base)
115 dev_info(dev, "BIOS at 0x%lx specifies I/O base 0x%x\n",
116 bios_base, base);
117 else
118 dev_info(dev, "BIOS at 0x%lx\n", bios_base);
119 if (!base) { /* no I/O base in BIOS area */
120 /* save BIOS signature for later use in port probing */
121 saved_sig = sig;
122 return 0;
123 }
124 } else /* scan supported I/O ports */
125 base = ports[ndev - ADDRESS_COUNT];
126
127 /* use saved BIOS signature if present */
128 if (!sig && saved_sig)
129 sig = saved_sig;
130
131 if (!request_region(base, FDOMAIN_REGION_SIZE, "fdomain_isa"))
132 return 0;
133
134 irq = irqs[(inb(base + REG_CFG1) & 0x0e) >> 1];
135
137 if (sig)
138 this_id = sig->this_id;
139
140 sh = fdomain_create(base, irq, this_id, dev);
141 if (!sh) {
142 release_region(base, FDOMAIN_REGION_SIZE);
143 return 0;
144 }
145
146 dev_set_drvdata(dev, sh);
147 return 1;
148fail_unmap:
149 iounmap(p);
150 return 0;
151}
152
153static int fdomain_isa_param_match(struct device *dev, unsigned int ndev)
154{
155 struct Scsi_Host *sh;
156 int irq_ = irq[ndev];
157
158 if (!io[ndev])
159 return 0;
160
161 if (!request_region(io[ndev], FDOMAIN_REGION_SIZE, "fdomain_isa")) {
162 dev_err(dev, "base 0x%x already in use", io[ndev]);
163 return 0;
164 }
165
166 if (irq_ <= 0)
167 irq_ = irqs[(inb(io[ndev] + REG_CFG1) & 0x0e) >> 1];
168
169 sh = fdomain_create(io[ndev], irq_, scsi_id[ndev], dev);
170 if (!sh) {
171 dev_err(dev, "controller not found at base 0x%x", io[ndev]);
172 release_region(io[ndev], FDOMAIN_REGION_SIZE);
173 return 0;
174 }
175
176 dev_set_drvdata(dev, sh);
177 return 1;
178}
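Both probe paths above decode the board's IRQ from CFG1 jumper bits 1-3, indexing the irqs[] table. The decode in isolation (table values copied from above; `decode_irq` and `fd_irqs` are illustration-only names):

```c
#include <assert.h>

/* Same table as irqs[] in fdomain_isa.c; the trailing 0 means the
 * jumper setting assigns no IRQ. */
unsigned short fd_irqs[] = { 3, 5, 10, 11, 12, 14, 15, 0 };

/* Bits 1-3 of CFG1 select one of the eight jumper settings. */
int decode_irq(unsigned char cfg1)
{
	return fd_irqs[(cfg1 & 0x0e) >> 1];
}
```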
179
180static int fdomain_isa_remove(struct device *dev, unsigned int ndev)
181{
182 struct Scsi_Host *sh = dev_get_drvdata(dev);
183 int base = sh->io_port;
184
185 fdomain_destroy(sh);
186 release_region(base, FDOMAIN_REGION_SIZE);
187 dev_set_drvdata(dev, NULL);
188 return 0;
189}
190
191static struct isa_driver fdomain_isa_driver = {
192 .match = fdomain_isa_match,
193 .remove = fdomain_isa_remove,
194 .driver = {
195 .name = "fdomain_isa",
196 .pm = FDOMAIN_PM_OPS,
197 },
198};
199
200static int __init fdomain_isa_init(void)
201{
202 int isa_probe_count = ADDRESS_COUNT + PORT_COUNT;
203
204 if (io[0]) { /* use module parameters if present */
205 fdomain_isa_driver.match = fdomain_isa_param_match;
206 isa_probe_count = MAXBOARDS_PARAM;
207 }
208
209 return isa_register_driver(&fdomain_isa_driver, isa_probe_count);
210}
211
212static void __exit fdomain_isa_exit(void)
213{
214 isa_unregister_driver(&fdomain_isa_driver);
215}
216
217module_init(fdomain_isa_init);
218module_exit(fdomain_isa_exit);
219
220MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
221MODULE_DESCRIPTION("Future Domain TMC-16x0 ISA SCSI driver");
222MODULE_LICENSE("GPL");
diff --git a/drivers/scsi/fdomain_pci.c b/drivers/scsi/fdomain_pci.c
new file mode 100644
index 000000000000..3e05ce7b89e5
--- /dev/null
+++ b/drivers/scsi/fdomain_pci.c
@@ -0,0 +1,68 @@
1// SPDX-License-Identifier: GPL-2.0
2
3#include <linux/module.h>
4#include <linux/pci.h>
5#include "fdomain.h"
6
7static int fdomain_pci_probe(struct pci_dev *pdev,
8 const struct pci_device_id *d)
9{
10 int err;
11 struct Scsi_Host *sh;
12
13 err = pci_enable_device(pdev);
14 if (err)
15 goto fail;
16
17 err = pci_request_regions(pdev, "fdomain_pci");
18 if (err)
19 goto disable_device;
20
21 err = -ENODEV;
22 if (pci_resource_len(pdev, 0) == 0)
23 goto release_region;
24
25 sh = fdomain_create(pci_resource_start(pdev, 0), pdev->irq, 7,
26 &pdev->dev);
27 if (!sh)
28 goto release_region;
29
30 pci_set_drvdata(pdev, sh);
31 return 0;
32
33release_region:
34 pci_release_regions(pdev);
35disable_device:
36 pci_disable_device(pdev);
37fail:
38 return err;
39}
40
41static void fdomain_pci_remove(struct pci_dev *pdev)
42{
43 struct Scsi_Host *sh = pci_get_drvdata(pdev);
44
45 fdomain_destroy(sh);
46 pci_release_regions(pdev);
47 pci_disable_device(pdev);
48}
49
50static struct pci_device_id fdomain_pci_table[] = {
51 { PCI_DEVICE(PCI_VENDOR_ID_FD, PCI_DEVICE_ID_FD_36C70) },
52 {}
53};
54MODULE_DEVICE_TABLE(pci, fdomain_pci_table);
55
56static struct pci_driver fdomain_pci_driver = {
57 .name = "fdomain_pci",
58 .id_table = fdomain_pci_table,
59 .probe = fdomain_pci_probe,
60 .remove = fdomain_pci_remove,
61 .driver.pm = FDOMAIN_PM_OPS,
62};
63
64module_pci_driver(fdomain_pci_driver);
65
66MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
67MODULE_DESCRIPTION("Future Domain TMC-3260 PCI SCSI driver");
68MODULE_LICENSE("GPL");
diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
index 8d9a8fb2dd32..42a02cc47a60 100644
--- a/drivers/scsi/hisi_sas/hisi_sas.h
+++ b/drivers/scsi/hisi_sas/hisi_sas.h
@@ -61,10 +61,6 @@
 #define HISI_SAS_MAX_SMP_RESP_SZ 1028
 #define HISI_SAS_MAX_STP_RESP_SZ 28
 
-#define DEV_IS_EXPANDER(type) \
-	((type == SAS_EDGE_EXPANDER_DEVICE) || \
-	 (type == SAS_FANOUT_EXPANDER_DEVICE))
-
 #define HISI_SAS_SATA_PROTOCOL_NONDATA 0x1
 #define HISI_SAS_SATA_PROTOCOL_PIO 0x2
 #define HISI_SAS_SATA_PROTOCOL_DMA 0x4
@@ -479,12 +475,12 @@ struct hisi_sas_command_table_stp {
 	u8 atapi_cdb[ATAPI_CDB_LEN];
 };
 
-#define HISI_SAS_SGE_PAGE_CNT SG_CHUNK_SIZE
+#define HISI_SAS_SGE_PAGE_CNT (124)
 struct hisi_sas_sge_page {
 	struct hisi_sas_sge sge[HISI_SAS_SGE_PAGE_CNT];
 } __aligned(16);
 
-#define HISI_SAS_SGE_DIF_PAGE_CNT SG_CHUNK_SIZE
+#define HISI_SAS_SGE_DIF_PAGE_CNT HISI_SAS_SGE_PAGE_CNT
 struct hisi_sas_sge_dif_page {
 	struct hisi_sas_sge sge[HISI_SAS_SGE_DIF_PAGE_CNT];
 } __aligned(16);
diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
index 5879771d82b2..cb746cfc2fa8 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
@@ -803,7 +803,7 @@ static int hisi_sas_dev_found(struct domain_device *device)
803 device->lldd_dev = sas_dev; 803 device->lldd_dev = sas_dev;
804 hisi_hba->hw->setup_itct(hisi_hba, sas_dev); 804 hisi_hba->hw->setup_itct(hisi_hba, sas_dev);
805 805
806 if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) { 806 if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
807 int phy_no; 807 int phy_no;
808 u8 phy_num = parent_dev->ex_dev.num_phys; 808 u8 phy_num = parent_dev->ex_dev.num_phys;
809 struct ex_phy *phy; 809 struct ex_phy *phy;
@@ -1446,7 +1446,7 @@ static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state,
 
 		_sas_port = sas_port;
 
-		if (DEV_IS_EXPANDER(dev->dev_type))
+		if (dev_is_expander(dev->dev_type))
 			sas_ha->notify_port_event(sas_phy,
 					PORTE_BROADCAST_RCVD);
 	}
@@ -1533,7 +1533,7 @@ static void hisi_sas_terminate_stp_reject(struct hisi_hba *hisi_hba)
 		struct domain_device *port_dev = sas_port->port_dev;
 		struct domain_device *device;
 
-		if (!port_dev || !DEV_IS_EXPANDER(port_dev->dev_type))
+		if (!port_dev || !dev_is_expander(port_dev->dev_type))
 			continue;
 
 		/* Try to find a SATA device */
@@ -1903,7 +1903,7 @@ static int hisi_sas_clear_nexus_ha(struct sas_ha_struct *sas_ha)
 		struct domain_device *device = sas_dev->sas_device;
 
 		if ((sas_dev->dev_type == SAS_PHY_UNUSED) || !device ||
-			DEV_IS_EXPANDER(device->dev_type))
+			dev_is_expander(device->dev_type))
 			continue;
 
 		rc = hisi_sas_debug_I_T_nexus_reset(device);
@@ -2475,6 +2475,14 @@ EXPORT_SYMBOL_GPL(hisi_sas_alloc);
 
 void hisi_sas_free(struct hisi_hba *hisi_hba)
 {
+	int i;
+
+	for (i = 0; i < hisi_hba->n_phy; i++) {
+		struct hisi_sas_phy *phy = &hisi_hba->phy[i];
+
+		del_timer_sync(&phy->timer);
+	}
+
 	if (hisi_hba->wq)
 		destroy_workqueue(hisi_hba->wq);
 }
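The hisi_sas_free() hunk above stops every per-phy timer with del_timer_sync() before destroying the workqueue, so no timer callback can fire against freed resources. A toy user-space model of that teardown ordering (all structures and functions here are invented stand-ins, not driver APIs):

```c
#include <assert.h>

#define N_PHY 8

/* Toy stand-ins for the driver structures: each phy owns a timer that
 * must be quiesced before shared resources (the workqueue) go away. */
struct toy_phy { int timer_armed; };
struct toy_hba { int n_phy; struct toy_phy phy[N_PHY]; int wq_alive; };

/* Stand-in for del_timer_sync(): after it returns, the timer is
 * guaranteed not to be pending or running. */
static void toy_del_timer_sync(struct toy_phy *phy)
{
	phy->timer_armed = 0;
}

/* Mirrors the ordering in hisi_sas_free(): stop every per-phy timer
 * first, then tear down the workqueue they might have queued work on. */
static void toy_hba_free(struct toy_hba *hba)
{
	int i;

	for (i = 0; i < hba->n_phy; i++)
		toy_del_timer_sync(&hba->phy[i]);
	hba->wq_alive = 0;	/* models destroy_workqueue() */
}
```

The point of the ordering is that reversing it would leave a window where a still-armed timer could reference the destroyed workqueue.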
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
index d99086ef6244..e9b15d45f98f 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@@ -422,70 +422,70 @@ static const struct hisi_sas_hw_error one_bit_ecc_errors[] = {
 		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_1B_OFF),
 		.msk = HGC_DQE_ECC_1B_ADDR_MSK,
 		.shift = HGC_DQE_ECC_1B_ADDR_OFF,
-		.msg = "hgc_dqe_acc1b_intr found: Ram address is 0x%08X\n",
+		.msg = "hgc_dqe_ecc1b_intr",
 		.reg = HGC_DQE_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_1B_OFF),
 		.msk = HGC_IOST_ECC_1B_ADDR_MSK,
 		.shift = HGC_IOST_ECC_1B_ADDR_OFF,
-		.msg = "hgc_iost_acc1b_intr found: Ram address is 0x%08X\n",
+		.msg = "hgc_iost_ecc1b_intr",
 		.reg = HGC_IOST_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_1B_OFF),
 		.msk = HGC_ITCT_ECC_1B_ADDR_MSK,
 		.shift = HGC_ITCT_ECC_1B_ADDR_OFF,
-		.msg = "hgc_itct_acc1b_intr found: am address is 0x%08X\n",
+		.msg = "hgc_itct_ecc1b_intr",
 		.reg = HGC_ITCT_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF),
 		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
 		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
-		.msg = "hgc_iostl_acc1b_intr found: memory address is 0x%08X\n",
+		.msg = "hgc_iostl_ecc1b_intr",
 		.reg = HGC_LM_DFX_STATUS2,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF),
 		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
 		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
-		.msg = "hgc_itctl_acc1b_intr found: memory address is 0x%08X\n",
+		.msg = "hgc_itctl_ecc1b_intr",
 		.reg = HGC_LM_DFX_STATUS2,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_1B_OFF),
 		.msk = HGC_CQE_ECC_1B_ADDR_MSK,
 		.shift = HGC_CQE_ECC_1B_ADDR_OFF,
-		.msg = "hgc_cqe_acc1b_intr found: Ram address is 0x%08X\n",
+		.msg = "hgc_cqe_ecc1b_intr",
 		.reg = HGC_CQE_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
-		.msg = "rxm_mem0_acc1b_intr found: memory address is 0x%08X\n",
+		.msg = "rxm_mem0_ecc1b_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
-		.msg = "rxm_mem1_acc1b_intr found: memory address is 0x%08X\n",
+		.msg = "rxm_mem1_ecc1b_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
-		.msg = "rxm_mem2_acc1b_intr found: memory address is 0x%08X\n",
+		.msg = "rxm_mem2_ecc1b_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF),
 		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
 		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
-		.msg = "rxm_mem3_acc1b_intr found: memory address is 0x%08X\n",
+		.msg = "rxm_mem3_ecc1b_intr",
 		.reg = HGC_RXM_DFX_STATUS15,
 	},
 };
@@ -495,70 +495,70 @@ static const struct hisi_sas_hw_error multi_bit_ecc_errors[] = {
 		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
 		.msk = HGC_DQE_ECC_MB_ADDR_MSK,
 		.shift = HGC_DQE_ECC_MB_ADDR_OFF,
-		.msg = "hgc_dqe_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+		.msg = "hgc_dqe_eccbad_intr",
 		.reg = HGC_DQE_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
 		.msk = HGC_IOST_ECC_MB_ADDR_MSK,
 		.shift = HGC_IOST_ECC_MB_ADDR_OFF,
-		.msg = "hgc_iost_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+		.msg = "hgc_iost_eccbad_intr",
 		.reg = HGC_IOST_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
 		.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
 		.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
-		.msg = "hgc_itct_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+		.msg = "hgc_itct_eccbad_intr",
 		.reg = HGC_ITCT_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
 		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
 		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
-		.msg = "hgc_iostl_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+		.msg = "hgc_iostl_eccbad_intr",
 		.reg = HGC_LM_DFX_STATUS2,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
 		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
 		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
-		.msg = "hgc_itctl_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+		.msg = "hgc_itctl_eccbad_intr",
 		.reg = HGC_LM_DFX_STATUS2,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
 		.msk = HGC_CQE_ECC_MB_ADDR_MSK,
 		.shift = HGC_CQE_ECC_MB_ADDR_OFF,
-		.msg = "hgc_cqe_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+		.msg = "hgc_cqe_eccbad_intr",
 		.reg = HGC_CQE_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
-		.msg = "rxm_mem0_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+		.msg = "rxm_mem0_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
-		.msg = "rxm_mem1_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+		.msg = "rxm_mem1_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
-		.msg = "rxm_mem2_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+		.msg = "rxm_mem2_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
 		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
-		.msg = "rxm_mem3_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+		.msg = "rxm_mem3_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS15,
 	},
 };
@@ -944,7 +944,7 @@ static void setup_itct_v2_hw(struct hisi_hba *hisi_hba,
 		break;
 	case SAS_SATA_DEV:
 	case SAS_SATA_PENDING:
-		if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+		if (parent_dev && dev_is_expander(parent_dev->dev_type))
 			qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF;
 		else
 			qw0 = HISI_SAS_DEV_TYPE_SATA << ITCT_HDR_DEV_TYPE_OFF;
@@ -2526,7 +2526,7 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
 	/* create header */
 	/* dw0 */
 	dw0 = port->id << CMD_HDR_PORT_OFF;
-	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+	if (parent_dev && dev_is_expander(parent_dev->dev_type))
 		dw0 |= 3 << CMD_HDR_CMD_OFF;
 	else
 		dw0 |= 4 << CMD_HDR_CMD_OFF;
@@ -2973,7 +2973,8 @@ one_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba, u32 irq_value)
 			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
 			val &= ecc_error->msk;
 			val >>= ecc_error->shift;
-			dev_warn(dev, ecc_error->msg, val);
+			dev_warn(dev, "%s found: mem addr is 0x%08X\n",
+				 ecc_error->msg, val);
 		}
 	}
 }
@@ -2992,7 +2993,8 @@ static void multi_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba,
 			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
 			val &= ecc_error->msk;
 			val >>= ecc_error->shift;
-			dev_err(dev, ecc_error->msg, irq_value, val);
+			dev_err(dev, "%s (0x%x) found: mem addr is 0x%08X\n",
+				ecc_error->msg, irq_value, val);
 			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
 		}
 	}
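The hunks above shrink the table's .msg strings to bare interrupt names and move the shared "found: ... addr" formatting into the single dev_warn()/dev_err() call site, so every entry prints consistently and no format string hides inside table data. A user-space sketch of the pattern (the struct layout and register values here are invented for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Table entries carry only the interrupt name plus decoded data; the
 * printf format lives at the one call site that uses it. */
struct ecc_err {
	const char *msg;
	unsigned int val;	/* decoded RAM address, invented here */
};

static const struct ecc_err errs[] = {
	{ "hgc_dqe_eccbad_intr", 0x10 },
	{ "hgc_iost_eccbad_intr", 0x20 },
};

/* Renders one report the way multi_bit_ecc_error_process_v2_hw() now
 * does: shared format string, per-entry name. */
static const char *report(const struct ecc_err *e, unsigned int irq_value)
{
	static char buf[128];

	snprintf(buf, sizeof(buf), "%s (0x%x) found: mem addr is 0x%08X\n",
		 e->msg, irq_value, e->val);
	return buf;
}
```

Keeping the format out of the table also lets the compiler check the format/argument match at the call site, which it cannot do for a format fetched from data.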
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index 0efd55baacd3..5f0f6df11adf 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -23,6 +23,7 @@
 #define ITCT_CLR_EN_MSK (0x1 << ITCT_CLR_EN_OFF)
 #define ITCT_DEV_OFF 0
 #define ITCT_DEV_MSK (0x7ff << ITCT_DEV_OFF)
+#define SAS_AXI_USER3 0x50
 #define IO_SATA_BROKEN_MSG_ADDR_LO 0x58
 #define IO_SATA_BROKEN_MSG_ADDR_HI 0x5c
 #define SATA_INITI_D2H_STORE_ADDR_LO 0x60
@@ -549,6 +550,7 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 	/* Global registers init */
 	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE,
 			 (u32)((1ULL << hisi_hba->queue_count) - 1));
+	hisi_sas_write32(hisi_hba, SAS_AXI_USER3, 0);
 	hisi_sas_write32(hisi_hba, CFG_MAX_TAG, 0xfff0400);
 	hisi_sas_write32(hisi_hba, HGC_SAS_TXFAIL_RETRY_CTRL, 0x108);
 	hisi_sas_write32(hisi_hba, CFG_AGING_TIME, 0x1);
@@ -752,7 +754,7 @@ static void setup_itct_v3_hw(struct hisi_hba *hisi_hba,
 		break;
 	case SAS_SATA_DEV:
 	case SAS_SATA_PENDING:
-		if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+		if (parent_dev && dev_is_expander(parent_dev->dev_type))
 			qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF;
 		else
 			qw0 = HISI_SAS_DEV_TYPE_SATA << ITCT_HDR_DEV_TYPE_OFF;
@@ -906,8 +908,14 @@ static void enable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
 static void disable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
 {
 	u32 cfg = hisi_sas_phy_read32(hisi_hba, phy_no, PHY_CFG);
+	u32 irq_msk = hisi_sas_phy_read32(hisi_hba, phy_no, CHL_INT2_MSK);
+	static const u32 msk = BIT(CHL_INT2_RX_DISP_ERR_OFF) |
+			BIT(CHL_INT2_RX_CODE_ERR_OFF) |
+			BIT(CHL_INT2_RX_INVLD_DW_OFF);
 	u32 state;
 
+	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2_MSK, msk | irq_msk);
+
 	cfg &= ~PHY_CFG_ENA_MSK;
 	hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg);
 
@@ -918,6 +926,15 @@ static void disable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
 		cfg |= PHY_CFG_PHY_RST_MSK;
 		hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg);
 	}
+
+	udelay(1);
+
+	hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_INVLD_DW);
+	hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_DISP_ERR);
+	hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_CODE_ERR);
+
+	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2, msk);
+	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2_MSK, irq_msk);
 }
 
 static void start_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
@@ -1336,10 +1353,10 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba,
 	u32 dw1 = 0, dw2 = 0;
 
 	hdr->dw0 = cpu_to_le32(port->id << CMD_HDR_PORT_OFF);
-	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+	if (parent_dev && dev_is_expander(parent_dev->dev_type))
 		hdr->dw0 |= cpu_to_le32(3 << CMD_HDR_CMD_OFF);
 	else
-		hdr->dw0 |= cpu_to_le32(4 << CMD_HDR_CMD_OFF);
+		hdr->dw0 |= cpu_to_le32(4U << CMD_HDR_CMD_OFF);
 
 	switch (task->data_dir) {
 	case DMA_TO_DEVICE:
@@ -1407,7 +1424,7 @@ static void prep_abort_v3_hw(struct hisi_hba *hisi_hba,
 	struct hisi_sas_port *port = slot->port;
 
 	/* dw0 */
-	hdr->dw0 = cpu_to_le32((5 << CMD_HDR_CMD_OFF) | /*abort*/
+	hdr->dw0 = cpu_to_le32((5U << CMD_HDR_CMD_OFF) | /*abort*/
 			       (port->id << CMD_HDR_PORT_OFF) |
 			       (dev_is_sata(dev)
 				<< CMD_HDR_ABORT_DEVICE_TYPE_OFF) |
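The 4U/5U suffixes in the two hunks above are not cosmetic: if CMD_HDR_CMD_OFF is large enough that the shifted value reaches bit 31, shifting a plain int literal overflows a signed 32-bit int, which is undefined behaviour in C; an unsigned literal makes the shift well defined. A compact illustration (the offset value 29 is assumed here for illustration, not taken from the driver headers):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed offset: high enough that the shifted command code reaches
 * bit 31 of the 32-bit header word. */
#define CMD_HDR_CMD_OFF 29

/* With a plain 'int' literal, 5 << 29 would overflow a signed 32-bit
 * int (undefined behaviour); 5U keeps the whole expression unsigned
 * and well defined. */
static uint32_t abort_dw0(uint32_t port_id)
{
	return (5U << CMD_HDR_CMD_OFF) | port_id;
}
```

The same reasoning applies to the `4U << CMD_HDR_CMD_OFF` change in prep_ata_v3_hw().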
@@ -1826,77 +1843,77 @@ static const struct hisi_sas_hw_error multi_bit_ecc_errors[] = {
 		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
 		.msk = HGC_DQE_ECC_MB_ADDR_MSK,
 		.shift = HGC_DQE_ECC_MB_ADDR_OFF,
-		.msg = "hgc_dqe_eccbad_intr found: ram addr is 0x%08X\n",
+		.msg = "hgc_dqe_eccbad_intr",
 		.reg = HGC_DQE_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
 		.msk = HGC_IOST_ECC_MB_ADDR_MSK,
 		.shift = HGC_IOST_ECC_MB_ADDR_OFF,
-		.msg = "hgc_iost_eccbad_intr found: ram addr is 0x%08X\n",
+		.msg = "hgc_iost_eccbad_intr",
 		.reg = HGC_IOST_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
 		.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
 		.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
-		.msg = "hgc_itct_eccbad_intr found: ram addr is 0x%08X\n",
+		.msg = "hgc_itct_eccbad_intr",
 		.reg = HGC_ITCT_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
 		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
 		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
-		.msg = "hgc_iostl_eccbad_intr found: mem addr is 0x%08X\n",
+		.msg = "hgc_iostl_eccbad_intr",
 		.reg = HGC_LM_DFX_STATUS2,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
 		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
 		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
-		.msg = "hgc_itctl_eccbad_intr found: mem addr is 0x%08X\n",
+		.msg = "hgc_itctl_eccbad_intr",
 		.reg = HGC_LM_DFX_STATUS2,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
 		.msk = HGC_CQE_ECC_MB_ADDR_MSK,
 		.shift = HGC_CQE_ECC_MB_ADDR_OFF,
-		.msg = "hgc_cqe_eccbad_intr found: ram address is 0x%08X\n",
+		.msg = "hgc_cqe_eccbad_intr",
 		.reg = HGC_CQE_ECC_ADDR,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
-		.msg = "rxm_mem0_eccbad_intr found: mem addr is 0x%08X\n",
+		.msg = "rxm_mem0_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
-		.msg = "rxm_mem1_eccbad_intr found: mem addr is 0x%08X\n",
+		.msg = "rxm_mem1_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
 		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
-		.msg = "rxm_mem2_eccbad_intr found: mem addr is 0x%08X\n",
+		.msg = "rxm_mem2_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS14,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
 		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
 		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
-		.msg = "rxm_mem3_eccbad_intr found: mem addr is 0x%08X\n",
+		.msg = "rxm_mem3_eccbad_intr",
 		.reg = HGC_RXM_DFX_STATUS15,
 	},
 	{
 		.irq_msk = BIT(SAS_ECC_INTR_OOO_RAM_ECC_MB_OFF),
 		.msk = AM_ROB_ECC_ERR_ADDR_MSK,
 		.shift = AM_ROB_ECC_ERR_ADDR_OFF,
-		.msg = "ooo_ram_eccbad_intr found: ROB_ECC_ERR_ADDR=0x%08X\n",
+		.msg = "ooo_ram_eccbad_intr",
 		.reg = AM_ROB_ECC_ERR_ADDR,
 	},
 };
@@ -1915,7 +1932,8 @@ static void multi_bit_ecc_error_process_v3_hw(struct hisi_hba *hisi_hba,
 			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
 			val &= ecc_error->msk;
 			val >>= ecc_error->shift;
-			dev_err(dev, ecc_error->msg, irq_value, val);
+			dev_err(dev, "%s (0x%x) found: mem addr is 0x%08X\n",
+				ecc_error->msg, irq_value, val);
 			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
 		}
 	}
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index ffd7e9506570..43a6b5350775 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -60,7 +60,7 @@
  * HPSA_DRIVER_VERSION must be 3 byte values (0-255) separated by '.'
  * with an optional trailing '-' followed by a byte value (0-255).
  */
-#define HPSA_DRIVER_VERSION "3.4.20-160"
+#define HPSA_DRIVER_VERSION "3.4.20-170"
 #define DRIVER_NAME "HP HPSA Driver (v " HPSA_DRIVER_VERSION ")"
 #define HPSA "hpsa"
 
@@ -73,6 +73,8 @@
 
 /*define how many times we will try a command because of bus resets */
 #define MAX_CMD_RETRIES 3
+/* How long to wait before giving up on a command */
+#define HPSA_EH_PTRAID_TIMEOUT (240 * HZ)
 
 /* Embedded module documentation macros - see modules.h */
 MODULE_AUTHOR("Hewlett-Packard Company");
@@ -344,11 +346,6 @@ static inline bool hpsa_is_cmd_idle(struct CommandList *c)
 	return c->scsi_cmd == SCSI_CMD_IDLE;
 }
 
-static inline bool hpsa_is_pending_event(struct CommandList *c)
-{
-	return c->reset_pending;
-}
-
 /* extract sense key, asc, and ascq from sense data. -1 means invalid. */
 static void decode_sense_data(const u8 *sense_data, int sense_data_len,
 			u8 *sense_key, u8 *asc, u8 *ascq)
@@ -1144,6 +1141,8 @@ static void __enqueue_cmd_and_start_io(struct ctlr_info *h,
 {
 	dial_down_lockup_detection_during_fw_flash(h, c);
 	atomic_inc(&h->commands_outstanding);
+	if (c->device)
+		atomic_inc(&c->device->commands_outstanding);
 
 	reply_queue = h->reply_map[raw_smp_processor_id()];
 	switch (c->cmd_type) {
@@ -1167,9 +1166,6 @@ static void __enqueue_cmd_and_start_io(struct ctlr_info *h,
 
 static void enqueue_cmd_and_start_io(struct ctlr_info *h, struct CommandList *c)
 {
-	if (unlikely(hpsa_is_pending_event(c)))
-		return finish_cmd(c);
-
 	__enqueue_cmd_and_start_io(h, c, DEFAULT_REPLY_QUEUE);
 }
 
@@ -1842,25 +1838,33 @@ static int hpsa_find_outstanding_commands_for_dev(struct ctlr_info *h,
 	return count;
 }
 
+#define NUM_WAIT 20
 static void hpsa_wait_for_outstanding_commands_for_dev(struct ctlr_info *h,
 				struct hpsa_scsi_dev_t *device)
 {
 	int cmds = 0;
 	int waits = 0;
+	int num_wait = NUM_WAIT;
+
+	if (device->external)
+		num_wait = HPSA_EH_PTRAID_TIMEOUT;
 
 	while (1) {
 		cmds = hpsa_find_outstanding_commands_for_dev(h, device);
 		if (cmds == 0)
 			break;
-		if (++waits > 20)
+		if (++waits > num_wait)
 			break;
 		msleep(1000);
 	}
 
-	if (waits > 20)
+	if (waits > num_wait) {
 		dev_warn(&h->pdev->dev,
-			"%s: removing device with %d outstanding commands!\n",
-			__func__, cmds);
+			"%s: removing device [%d:%d:%d:%d] with %d outstanding commands!\n",
+			__func__,
+			h->scsi_host->host_no,
+			device->bus, device->target, device->lun, cmds);
+	}
 }
 
 static void hpsa_remove_device(struct ctlr_info *h,
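The hunk above parameterizes the drain loop's budget: NUM_WAIT one-second polls for local devices, a much larger budget for external (pass-through RAID) ones. A user-space sketch of that bounded-poll shape, with the sleep elided and the poll callback as a stand-in for hpsa_find_outstanding_commands_for_dev():

```c
#include <assert.h>

/* Returns how many polls were consumed before outstanding() reached
 * zero, or -1 if the budget (num_wait) ran out first.  Mirrors the
 * shape of hpsa_wait_for_outstanding_commands_for_dev(), minus the
 * msleep(1000) between polls. */
static int drain_poll(int (*outstanding)(void *), void *dev, int num_wait)
{
	int waits = 0;

	while (1) {
		if (outstanding(dev) == 0)
			return waits;
		if (++waits > num_wait)
			return -1;
		/* real code sleeps ~1s here */
	}
}

/* Toy workload: each poll observes the counter, then one command
 * "completes". */
static int toy_outstanding(void *p)
{
	int *cmds = p;

	return *cmds > 0 ? (*cmds)-- : 0;
}
```

The per-device budget matters because external arrays can legitimately take minutes to complete commands, while 20 seconds is plenty for local devices.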
@@ -2131,11 +2135,16 @@ static int hpsa_slave_configure(struct scsi_device *sdev)
 	sdev->no_uld_attach = !sd || !sd->expose_device;
 
 	if (sd) {
-		if (sd->external)
+		sd->was_removed = 0;
+		if (sd->external) {
 			queue_depth = EXTERNAL_QD;
-		else
+			sdev->eh_timeout = HPSA_EH_PTRAID_TIMEOUT;
+			blk_queue_rq_timeout(sdev->request_queue,
+					     HPSA_EH_PTRAID_TIMEOUT);
+		} else {
 			queue_depth = sd->queue_depth != 0 ?
 					sd->queue_depth : sdev->host->can_queue;
+		}
 	} else
 		queue_depth = sdev->host->can_queue;
 
@@ -2146,7 +2155,12 @@ static int hpsa_slave_configure(struct scsi_device *sdev)
 
 static void hpsa_slave_destroy(struct scsi_device *sdev)
 {
-	/* nothing to do. */
+	struct hpsa_scsi_dev_t *hdev = NULL;
+
+	hdev = sdev->hostdata;
+
+	if (hdev)
+		hdev->was_removed = 1;
 }
 
 static void hpsa_free_ioaccel2_sg_chain_blocks(struct ctlr_info *h)
@@ -2414,13 +2428,16 @@ static int handle_ioaccel_mode2_error(struct ctlr_info *h,
 		break;
 	}
 
+	if (dev->in_reset)
+		retry = 0;
+
 	return retry; /* retry on raid path? */
 }
 
 static void hpsa_cmd_resolve_events(struct ctlr_info *h,
 		struct CommandList *c)
 {
-	bool do_wake = false;
+	struct hpsa_scsi_dev_t *dev = c->device;
 
 	/*
 	 * Reset c->scsi_cmd here so that the reset handler will know
@@ -2429,25 +2446,12 @@ static void hpsa_cmd_resolve_events(struct ctlr_info *h,
 	 */
 	c->scsi_cmd = SCSI_CMD_IDLE;
 	mb(); /* Declare command idle before checking for pending events. */
-	if (c->reset_pending) {
-		unsigned long flags;
-		struct hpsa_scsi_dev_t *dev;
-
-		/*
-		 * There appears to be a reset pending; lock the lock and
-		 * reconfirm. If so, then decrement the count of outstanding
-		 * commands and wake the reset command if this is the last one.
-		 */
-		spin_lock_irqsave(&h->lock, flags);
-		dev = c->reset_pending; /* Re-fetch under the lock. */
-		if (dev && atomic_dec_and_test(&dev->reset_cmds_out))
-			do_wake = true;
-		c->reset_pending = NULL;
-		spin_unlock_irqrestore(&h->lock, flags);
+	if (dev) {
+		atomic_dec(&dev->commands_outstanding);
+		if (dev->in_reset &&
+			atomic_read(&dev->commands_outstanding) <= 0)
+			wake_up_all(&h->event_sync_wait_queue);
 	}
-
-	if (do_wake)
-		wake_up_all(&h->event_sync_wait_queue);
 }
 
 static void hpsa_cmd_resolve_and_free(struct ctlr_info *h,
@@ -2496,6 +2500,11 @@ static void process_ioaccel2_completion(struct ctlr_info *h,
 			dev->offload_to_be_enabled = 0;
 		}
 
+		if (dev->in_reset) {
+			cmd->result = DID_RESET << 16;
+			return hpsa_cmd_free_and_done(h, c, cmd);
+		}
+
 		return hpsa_retry_cmd(h, c);
 	}
 
@@ -2574,6 +2583,12 @@ static void complete_scsi_command(struct CommandList *cp)
 	cmd->result = (DID_OK << 16);		/* host byte */
 	cmd->result |= (COMMAND_COMPLETE << 8);	/* msg byte */
 
+	/* SCSI command has already been cleaned up in SML */
+	if (dev->was_removed) {
+		hpsa_cmd_resolve_and_free(h, cp);
+		return;
+	}
+
 	if (cp->cmd_type == CMD_IOACCEL2 || cp->cmd_type == CMD_IOACCEL1) {
 		if (dev->physical_device && dev->expose_device &&
 			dev->removed) {
@@ -2595,10 +2610,6 @@ static void complete_scsi_command(struct CommandList *cp)
 		return hpsa_cmd_free_and_done(h, cp, cmd);
 	}
 
-	if ((unlikely(hpsa_is_pending_event(cp))))
-		if (cp->reset_pending)
-			return hpsa_cmd_free_and_done(h, cp, cmd);
-
 	if (cp->cmd_type == CMD_IOACCEL2)
 		return process_ioaccel2_completion(h, cp, cmd, dev);
 
@@ -3048,7 +3059,7 @@ out:
 	return rc;
 }
 
-static int hpsa_send_reset(struct ctlr_info *h, unsigned char *scsi3addr,
+static int hpsa_send_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
 	u8 reset_type, int reply_queue)
 {
 	int rc = IO_OK;
@@ -3056,11 +3067,10 @@ static int hpsa_send_reset(struct ctlr_info *h, unsigned char *scsi3addr,
 	struct ErrorInfo *ei;
 
 	c = cmd_alloc(h);
-
+	c->device = dev;
 
 	/* fill_cmd can't fail here, no data buffer to map. */
-	(void) fill_cmd(c, reset_type, h, NULL, 0, 0,
-			scsi3addr, TYPE_MSG);
+	(void) fill_cmd(c, reset_type, h, NULL, 0, 0, dev->scsi3addr, TYPE_MSG);
 	rc = hpsa_scsi_do_simple_cmd(h, c, reply_queue, NO_TIMEOUT);
 	if (rc) {
 		dev_warn(&h->pdev->dev, "Failed to send reset command\n");
@@ -3138,9 +3148,8 @@ static bool hpsa_cmd_dev_match(struct ctlr_info *h, struct CommandList *c,
 }
 
 static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
-	unsigned char *scsi3addr, u8 reset_type, int reply_queue)
+	u8 reset_type, int reply_queue)
 {
-	int i;
 	int rc = 0;
 
 	/* We can really only handle one reset at a time */
@@ -3149,38 +3158,14 @@ static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
 		return -EINTR;
 	}
 
-	BUG_ON(atomic_read(&dev->reset_cmds_out) != 0);
-
-	for (i = 0; i < h->nr_cmds; i++) {
-		struct CommandList *c = h->cmd_pool + i;
-		int refcount = atomic_inc_return(&c->refcount);
-
-		if (refcount > 1 && hpsa_cmd_dev_match(h, c, dev, scsi3addr)) {
-			unsigned long flags;
-
-			/*
-			 * Mark the target command as having a reset pending,
-			 * then lock a lock so that the command cannot complete
-			 * while we're considering it.  If the command is not
-			 * idle then count it; otherwise revoke the event.
-			 */
-			c->reset_pending = dev;
-			spin_lock_irqsave(&h->lock, flags);	/* Implied MB */
-			if (!hpsa_is_cmd_idle(c))
-				atomic_inc(&dev->reset_cmds_out);
-			else
-				c->reset_pending = NULL;
-			spin_unlock_irqrestore(&h->lock, flags);
-		}
-
-		cmd_free(h, c);
-	}
-
-	rc = hpsa_send_reset(h, scsi3addr, reset_type, reply_queue);
-	if (!rc)
-		wait_event(h->event_sync_wait_queue,
-			atomic_read(&dev->reset_cmds_out) == 0 ||
-			lockup_detected(h));
+	rc = hpsa_send_reset(h, dev, reset_type, reply_queue);
+	if (!rc) {
+		/* incremented by sending the reset request */
+		atomic_dec(&dev->commands_outstanding);
+		wait_event(h->event_sync_wait_queue,
+			atomic_read(&dev->commands_outstanding) <= 0 ||
+			lockup_detected(h));
+	}
 
 	if (unlikely(lockup_detected(h))) {
 		dev_warn(&h->pdev->dev,
@@ -3188,10 +3173,8 @@ static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
 		rc = -ENODEV;
 	}
 
-	if (unlikely(rc))
-		atomic_set(&dev->reset_cmds_out, 0);
-	else
-		rc = wait_for_device_to_become_ready(h, scsi3addr, 0);
+	if (!rc)
+		rc = wait_for_device_to_become_ready(h, dev->scsi3addr, 0);
 
 	mutex_unlock(&h->reset_mutex);
 	return rc;
@@ -4820,6 +4803,9 @@ static int hpsa_scsi_ioaccel_direct_map(struct ctlr_info *h,
 
 	c->phys_disk = dev;
 
+	if (dev->in_reset)
+		return -1;
+
 	return hpsa_scsi_ioaccel_queue_command(h, c, dev->ioaccel_handle,
 		cmd->cmnd, cmd->cmd_len, dev->scsi3addr, dev);
 }
@@ -5010,6 +4996,11 @@ static int hpsa_scsi_ioaccel2_queue_command(struct ctlr_info *h,
 	} else
 		cp->sg_count = (u8) use_sg;
 
+	if (phys_disk->in_reset) {
+		cmd->result = DID_RESET << 16;
+		return -1;
+	}
+
 	enqueue_cmd_and_start_io(h, c);
 	return 0;
 }
@@ -5027,6 +5018,9 @@ static int hpsa_scsi_ioaccel_queue_command(struct ctlr_info *h,
 	if (!c->scsi_cmd->device->hostdata)
 		return -1;
 
+	if (phys_disk->in_reset)
+		return -1;
+
 	/* Try to honor the device's queue depth */
 	if (atomic_inc_return(&phys_disk->ioaccel_cmds_out) >
 		phys_disk->queue_depth) {
@@ -5110,6 +5104,9 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h,
 	if (!dev)
 		return -1;
 
+	if (dev->in_reset)
+		return -1;
+
 	/* check for valid opcode, get LBA and block count */
 	switch (cmd->cmnd[0]) {
 	case WRITE_6:
@@ -5414,13 +5411,13 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h,
  */
 static int hpsa_ciss_submit(struct ctlr_info *h,
 	struct CommandList *c, struct scsi_cmnd *cmd,
-	unsigned char scsi3addr[])
+	struct hpsa_scsi_dev_t *dev)
 {
 	cmd->host_scribble = (unsigned char *) c;
 	c->cmd_type = CMD_SCSI;
 	c->scsi_cmd = cmd;
 	c->Header.ReplyQueue = 0;  /* unused in simple mode */
-	memcpy(&c->Header.LUN.LunAddrBytes[0], &scsi3addr[0], 8);
+	memcpy(&c->Header.LUN.LunAddrBytes[0], &dev->scsi3addr[0], 8);
 	c->Header.tag = cpu_to_le64((c->cmdindex << DIRECT_LOOKUP_SHIFT));
 
 	/* Fill in the request block... */
@@ -5471,6 +5468,12 @@ static int hpsa_ciss_submit(struct ctlr_info *h,
 		hpsa_cmd_resolve_and_free(h, c);
 		return SCSI_MLQUEUE_HOST_BUSY;
 	}
+
+	if (dev->in_reset) {
+		hpsa_cmd_resolve_and_free(h, c);
+		return SCSI_MLQUEUE_HOST_BUSY;
+	}
+
 	enqueue_cmd_and_start_io(h, c);
 	/* the cmd'll come back via intr handler in complete_scsi_command() */
 	return 0;
@@ -5522,8 +5525,7 @@ static inline void hpsa_cmd_partial_init(struct ctlr_info *h, int index,
 }
 
 static int hpsa_ioaccel_submit(struct ctlr_info *h,
-	struct CommandList *c, struct scsi_cmnd *cmd,
-	unsigned char *scsi3addr)
+	struct CommandList *c, struct scsi_cmnd *cmd)
 {
 	struct hpsa_scsi_dev_t *dev = cmd->device->hostdata;
 	int rc = IO_ACCEL_INELIGIBLE;
@@ -5531,6 +5533,12 @@ static int hpsa_ioaccel_submit(struct ctlr_info *h,
 	if (!dev)
 		return SCSI_MLQUEUE_HOST_BUSY;
 
+	if (dev->in_reset)
+		return SCSI_MLQUEUE_HOST_BUSY;
+
+	if (hpsa_simple_mode)
+		return IO_ACCEL_INELIGIBLE;
+
 	cmd->host_scribble = (unsigned char *) c;
 
 	if (dev->offload_enabled) {
@@ -5563,8 +5571,12 @@ static void hpsa_command_resubmit_worker(struct work_struct *work)
 		cmd->result = DID_NO_CONNECT << 16;
 		return hpsa_cmd_free_and_done(c->h, c, cmd);
 	}
-	if (c->reset_pending)
-		return hpsa_cmd_free_and_done(c->h, c, cmd);
+
+	if (dev->in_reset) {
+		cmd->result = DID_RESET << 16;
+		return hpsa_cmd_free_and_done(c->h, c, cmd);
+	}
+
 	if (c->cmd_type == CMD_IOACCEL2) {
 		struct ctlr_info *h = c->h;
 		struct io_accel2_cmd *c2 = &h->ioaccel2_cmd_pool[c->cmdindex];
@@ -5572,7 +5584,7 @@ static void hpsa_command_resubmit_worker(struct work_struct *work)
 
 		if (c2->error_data.serv_response ==
 				IOACCEL2_STATUS_SR_TASK_COMP_SET_FULL) {
-			rc = hpsa_ioaccel_submit(h, c, cmd, dev->scsi3addr);
+			rc = hpsa_ioaccel_submit(h, c, cmd);
 			if (rc == 0)
 				return;
 			if (rc == SCSI_MLQUEUE_HOST_BUSY) {
@@ -5588,7 +5600,7 @@ static void hpsa_command_resubmit_worker(struct work_struct *work)
 		}
 	}
 	hpsa_cmd_partial_init(c->h, c->cmdindex, c);
-	if (hpsa_ciss_submit(c->h, c, cmd, dev->scsi3addr)) {
+	if (hpsa_ciss_submit(c->h, c, cmd, dev)) {
 		/*
 		 * If we get here, it means dma mapping failed. Try
 		 * again via scsi mid layer, which will then get
@@ -5607,7 +5619,6 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
 {
 	struct ctlr_info *h;
 	struct hpsa_scsi_dev_t *dev;
-	unsigned char scsi3addr[8];
 	struct CommandList *c;
 	int rc = 0;
 
@@ -5629,14 +5640,18 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
 		return 0;
 	}
 
-	memcpy(scsi3addr, dev->scsi3addr, sizeof(scsi3addr));
-
 	if (unlikely(lockup_detected(h))) {
 		cmd->result = DID_NO_CONNECT << 16;
 		cmd->scsi_done(cmd);
 		return 0;
 	}
+
+	if (dev->in_reset)
+		return SCSI_MLQUEUE_DEVICE_BUSY;
+
 	c = cmd_tagged_alloc(h, cmd);
+	if (c == NULL)
+		return SCSI_MLQUEUE_DEVICE_BUSY;
 
 	/*
 	 * Call alternate submit routine for I/O accelerated commands.
@@ -5645,7 +5660,7 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
 	if (likely(cmd->retries == 0 &&
 			!blk_rq_is_passthrough(cmd->request) &&
 			h->acciopath_status)) {
-		rc = hpsa_ioaccel_submit(h, c, cmd, scsi3addr);
+		rc = hpsa_ioaccel_submit(h, c, cmd);
 		if (rc == 0)
 			return 0;
 		if (rc == SCSI_MLQUEUE_HOST_BUSY) {
@@ -5653,7 +5668,7 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
 			return SCSI_MLQUEUE_HOST_BUSY;
 		}
 	}
-	return hpsa_ciss_submit(h, c, cmd, scsi3addr);
+	return hpsa_ciss_submit(h, c, cmd, dev);
 }
 
 static void hpsa_scan_complete(struct ctlr_info *h)
@@ -5935,8 +5950,9 @@ static int wait_for_device_to_become_ready(struct ctlr_info *h,
 static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 {
 	int rc = SUCCESS;
+	int i;
 	struct ctlr_info *h;
-	struct hpsa_scsi_dev_t *dev;
+	struct hpsa_scsi_dev_t *dev = NULL;
 	u8 reset_type;
 	char msg[48];
 	unsigned long flags;
@@ -6002,9 +6018,19 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 		reset_type == HPSA_DEVICE_RESET_MSG ? "logical " : "physical ");
 	hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
 
+	/*
+	 * wait to see if any commands will complete before sending reset
+	 */
+	dev->in_reset = true; /* block any new cmds from OS for this device */
+	for (i = 0; i < 10; i++) {
+		if (atomic_read(&dev->commands_outstanding) > 0)
+			msleep(1000);
+		else
+			break;
+	}
+
 	/* send a reset to the SCSI LUN which the command was sent to */
-	rc = hpsa_do_reset(h, dev, dev->scsi3addr, reset_type,
-			   DEFAULT_REPLY_QUEUE);
+	rc = hpsa_do_reset(h, dev, reset_type, DEFAULT_REPLY_QUEUE);
 	if (rc == 0)
 		rc = SUCCESS;
 	else
@@ -6018,6 +6044,8 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 return_reset_status:
 	spin_lock_irqsave(&h->reset_lock, flags);
 	h->reset_in_progress = 0;
+	if (dev)
+		dev->in_reset = false;
 	spin_unlock_irqrestore(&h->reset_lock, flags);
 	return rc;
 }
@@ -6043,7 +6071,6 @@ static struct CommandList *cmd_tagged_alloc(struct ctlr_info *h,
 		BUG();
 	}
 
-	atomic_inc(&c->refcount);
 	if (unlikely(!hpsa_is_cmd_idle(c))) {
 		/*
 		 * We expect that the SCSI layer will hand us a unique tag
@@ -6051,14 +6078,20 @@ static struct CommandList *cmd_tagged_alloc(struct ctlr_info *h,
 		 * two requests...because if the selected command isn't idle
 		 * then someone is going to be very disappointed.
 		 */
-		dev_err(&h->pdev->dev,
-			"tag collision (tag=%d) in cmd_tagged_alloc().\n",
-			idx);
-		if (c->scsi_cmd != NULL)
-			scsi_print_command(c->scsi_cmd);
-		scsi_print_command(scmd);
+		if (idx != h->last_collision_tag) { /* Print once per tag */
+			dev_warn(&h->pdev->dev,
+				"%s: tag collision (tag=%d)\n", __func__, idx);
+			if (c->scsi_cmd != NULL)
+				scsi_print_command(c->scsi_cmd);
+			if (scmd)
+				scsi_print_command(scmd);
+			h->last_collision_tag = idx;
+		}
+		return NULL;
 	}
 
+	atomic_inc(&c->refcount);
+
 	hpsa_cmd_partial_init(h, idx, c);
 	return c;
 }
@@ -6126,6 +6159,7 @@ static struct CommandList *cmd_alloc(struct ctlr_info *h)
 		break; /* it's ours now. */
 	}
 	hpsa_cmd_partial_init(h, i, c);
+	c->device = NULL;
 	return c;
 }
 
@@ -6579,8 +6613,7 @@ static int hpsa_ioctl(struct scsi_device *dev, unsigned int cmd,
 	}
 }
 
-static void hpsa_send_host_reset(struct ctlr_info *h, unsigned char *scsi3addr,
-	u8 reset_type)
+static void hpsa_send_host_reset(struct ctlr_info *h, u8 reset_type)
 {
 	struct CommandList *c;
 
@@ -7983,10 +8016,15 @@ clean_up:
 static void hpsa_free_irqs(struct ctlr_info *h)
 {
 	int i;
+	int irq_vector = 0;
+
+	if (hpsa_simple_mode)
+		irq_vector = h->intr_mode;
 
 	if (!h->msix_vectors || h->intr_mode != PERF_MODE_INT) {
 		/* Single reply queue, only one irq to free */
-		free_irq(pci_irq_vector(h->pdev, 0), &h->q[h->intr_mode]);
+		free_irq(pci_irq_vector(h->pdev, irq_vector),
+				&h->q[h->intr_mode]);
 		h->q[h->intr_mode] = 0;
 		return;
 	}
@@ -8005,6 +8043,10 @@ static int hpsa_request_irqs(struct ctlr_info *h,
 	irqreturn_t (*intxhandler)(int, void *))
 {
 	int rc, i;
+	int irq_vector = 0;
+
+	if (hpsa_simple_mode)
+		irq_vector = h->intr_mode;
 
 	/*
 	 * initialize h->q[x] = x so that interrupt handlers know which
@@ -8040,14 +8082,14 @@ static int hpsa_request_irqs(struct ctlr_info *h,
 	if (h->msix_vectors > 0 || h->pdev->msi_enabled) {
 		sprintf(h->intrname[0], "%s-msi%s", h->devname,
 			h->msix_vectors ? "x" : "");
-		rc = request_irq(pci_irq_vector(h->pdev, 0),
+		rc = request_irq(pci_irq_vector(h->pdev, irq_vector),
 			msixhandler, 0,
 			h->intrname[0],
 			&h->q[h->intr_mode]);
 	} else {
 		sprintf(h->intrname[h->intr_mode],
 			"%s-intx", h->devname);
-		rc = request_irq(pci_irq_vector(h->pdev, 0),
+		rc = request_irq(pci_irq_vector(h->pdev, irq_vector),
 			intxhandler, IRQF_SHARED,
 			h->intrname[0],
 			&h->q[h->intr_mode]);
@@ -8055,7 +8097,7 @@ static int hpsa_request_irqs(struct ctlr_info *h,
 	}
 	if (rc) {
 		dev_err(&h->pdev->dev, "failed to get irq %d for %s\n",
-			pci_irq_vector(h->pdev, 0), h->devname);
+			pci_irq_vector(h->pdev, irq_vector), h->devname);
 		hpsa_free_irqs(h);
 		return -ENODEV;
 	}
@@ -8065,7 +8107,7 @@ static int hpsa_request_irqs(struct ctlr_info *h,
 static int hpsa_kdump_soft_reset(struct ctlr_info *h)
 {
 	int rc;
-	hpsa_send_host_reset(h, RAID_CTLR_LUNID, HPSA_RESET_TYPE_CONTROLLER);
+	hpsa_send_host_reset(h, HPSA_RESET_TYPE_CONTROLLER);
 
 	dev_info(&h->pdev->dev, "Waiting for board to soft reset.\n");
 	rc = hpsa_wait_for_board_state(h->pdev, h->vaddr, BOARD_NOT_READY);
@@ -8121,6 +8163,11 @@ static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
 		destroy_workqueue(h->rescan_ctlr_wq);
 		h->rescan_ctlr_wq = NULL;
 	}
+	if (h->monitor_ctlr_wq) {
+		destroy_workqueue(h->monitor_ctlr_wq);
+		h->monitor_ctlr_wq = NULL;
+	}
+
 	kfree(h);			/* init_one 1 */
 }
 
@@ -8456,8 +8503,8 @@ static void hpsa_event_monitor_worker(struct work_struct *work)
 
 	spin_lock_irqsave(&h->lock, flags);
 	if (!h->remove_in_progress)
-		schedule_delayed_work(&h->event_monitor_work,
-					HPSA_EVENT_MONITOR_INTERVAL);
+		queue_delayed_work(h->monitor_ctlr_wq, &h->event_monitor_work,
+				HPSA_EVENT_MONITOR_INTERVAL);
 	spin_unlock_irqrestore(&h->lock, flags);
 }
 
@@ -8502,7 +8549,7 @@ static void hpsa_monitor_ctlr_worker(struct work_struct *work)
 
 	spin_lock_irqsave(&h->lock, flags);
 	if (!h->remove_in_progress)
-		schedule_delayed_work(&h->monitor_ctlr_work,
-				h->heartbeat_sample_interval);
+		queue_delayed_work(h->monitor_ctlr_wq, &h->monitor_ctlr_work,
+				h->heartbeat_sample_interval);
 	spin_unlock_irqrestore(&h->lock, flags);
 }
@@ -8670,6 +8717,12 @@ reinit_after_soft_reset:
 		goto clean7;	/* aer/h */
 	}
 
+	h->monitor_ctlr_wq = hpsa_create_controller_wq(h, "monitor");
+	if (!h->monitor_ctlr_wq) {
+		rc = -ENOMEM;
+		goto clean7;
+	}
+
 	/*
 	 * At this point, the controller is ready to take commands.
 	 * Now, if reset_devices and the hard reset didn't work, try
@@ -8799,6 +8852,10 @@ clean1: /* wq/aer/h */
 		destroy_workqueue(h->rescan_ctlr_wq);
 		h->rescan_ctlr_wq = NULL;
 	}
+	if (h->monitor_ctlr_wq) {
+		destroy_workqueue(h->monitor_ctlr_wq);
+		h->monitor_ctlr_wq = NULL;
+	}
 	kfree(h);
 	return rc;
 }
@@ -8946,6 +9003,7 @@ static void hpsa_remove_one(struct pci_dev *pdev)
 	cancel_delayed_work_sync(&h->event_monitor_work);
 	destroy_workqueue(h->rescan_ctlr_wq);
 	destroy_workqueue(h->resubmit_wq);
+	destroy_workqueue(h->monitor_ctlr_wq);
 
 	hpsa_delete_sas_host(h);
 
diff --git a/drivers/scsi/hpsa.h b/drivers/scsi/hpsa.h
index 59e023696fff..f8c88fc7b80a 100644
--- a/drivers/scsi/hpsa.h
+++ b/drivers/scsi/hpsa.h
@@ -65,6 +65,7 @@ struct hpsa_scsi_dev_t {
 	u8 physical_device : 1;
 	u8 expose_device;
 	u8 removed : 1;			/* device is marked for death */
+	u8 was_removed : 1;		/* device actually removed */
 #define RAID_CTLR_LUNID "\0\0\0\0\0\0\0\0"
 	unsigned char device_id[16];    /* from inquiry pg. 0x83 */
 	u64 sas_address;
@@ -75,11 +76,12 @@ struct hpsa_scsi_dev_t {
 	unsigned char raid_level;	/* from inquiry page 0xC1 */
 	unsigned char volume_offline;	/* discovered via TUR or VPD */
 	u16 queue_depth;		/* max queue_depth for this device */
-	atomic_t reset_cmds_out;	/* Count of commands to-be affected */
+	atomic_t commands_outstanding;	/* track commands sent to device */
 	atomic_t ioaccel_cmds_out;	/* Only used for physical devices
 					 * counts commands sent to physical
 					 * device via "ioaccel" path.
 					 */
+	bool in_reset;
 	u32 ioaccel_handle;
 	u8 active_path_index;
 	u8 path_map;
@@ -174,6 +176,7 @@ struct ctlr_info {
 	struct CfgTable __iomem *cfgtable;
 	int	interrupts_enabled;
 	int	max_commands;
+	int	last_collision_tag; /* tags are global */
 	atomic_t commands_outstanding;
 #	define PERF_MODE_INT	0
 #	define DOORBELL_INT	1
@@ -300,6 +303,7 @@ struct ctlr_info {
 	int	needs_abort_tags_swizzled;
 	struct workqueue_struct *resubmit_wq;
 	struct workqueue_struct *rescan_ctlr_wq;
+	struct workqueue_struct *monitor_ctlr_wq;
 	atomic_t abort_cmds_available;
 	wait_queue_head_t event_sync_wait_queue;
 	struct mutex reset_mutex;
diff --git a/drivers/scsi/hpsa_cmd.h b/drivers/scsi/hpsa_cmd.h
index f6afca4b2319..7825cbfea4dc 100644
--- a/drivers/scsi/hpsa_cmd.h
+++ b/drivers/scsi/hpsa_cmd.h
@@ -448,7 +448,7 @@ struct CommandList {
 	struct hpsa_scsi_dev_t *phys_disk;
 
 	int abort_pending;
-	struct hpsa_scsi_dev_t *reset_pending;
+	struct hpsa_scsi_dev_t *device;
 	atomic_t refcount; /* Must be last to avoid memset in hpsa_cmd_init() */
 } __aligned(COMMANDLIST_ALIGNMENT);
 
diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
index 4aea97ee4b24..7f66a7783209 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
@@ -814,7 +814,7 @@ static void ibmvscsi_reset_host(struct ibmvscsi_host_data *hostdata)
 	atomic_set(&hostdata->request_limit, 0);
 
 	purge_requests(hostdata, DID_ERROR);
-	hostdata->reset_crq = 1;
+	hostdata->action = IBMVSCSI_HOST_ACTION_RESET;
 	wake_up(&hostdata->work_wait_q);
 }
 
@@ -1165,7 +1165,8 @@ static void login_rsp(struct srp_event_struct *evt_struct)
 		be32_to_cpu(evt_struct->xfer_iu->srp.login_rsp.req_lim_delta));
 
 	/* If we had any pending I/Os, kick them */
-	scsi_unblock_requests(hostdata->host);
+	hostdata->action = IBMVSCSI_HOST_ACTION_UNBLOCK;
+	wake_up(&hostdata->work_wait_q);
 }
 
 /**
@@ -1783,7 +1784,7 @@ static void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 		/* We need to re-setup the interpartition connection */
 		dev_info(hostdata->dev, "Re-enabling adapter!\n");
 		hostdata->client_migrated = 1;
-		hostdata->reenable_crq = 1;
+		hostdata->action = IBMVSCSI_HOST_ACTION_REENABLE;
 		purge_requests(hostdata, DID_REQUEUE);
 		wake_up(&hostdata->work_wait_q);
 	} else {
@@ -2036,6 +2037,16 @@ static struct device_attribute ibmvscsi_host_config = {
 	.show = show_host_config,
 };
 
+static int ibmvscsi_host_reset(struct Scsi_Host *shost, int reset_type)
+{
+	struct ibmvscsi_host_data *hostdata = shost_priv(shost);
+
+	dev_info(hostdata->dev, "Initiating adapter reset!\n");
+	ibmvscsi_reset_host(hostdata);
+
+	return 0;
+}
+
 static struct device_attribute *ibmvscsi_attrs[] = {
 	&ibmvscsi_host_vhost_loc,
 	&ibmvscsi_host_vhost_name,
@@ -2062,6 +2073,7 @@ static struct scsi_host_template driver_template = {
 	.eh_host_reset_handler = ibmvscsi_eh_host_reset_handler,
 	.slave_configure = ibmvscsi_slave_configure,
 	.change_queue_depth = ibmvscsi_change_queue_depth,
+	.host_reset = ibmvscsi_host_reset,
 	.cmd_per_lun = IBMVSCSI_CMDS_PER_LUN_DEFAULT,
 	.can_queue = IBMVSCSI_MAX_REQUESTS_DEFAULT,
 	.this_id = -1,
@@ -2091,48 +2103,75 @@ static unsigned long ibmvscsi_get_desired_dma(struct vio_dev *vdev)
 
 static void ibmvscsi_do_work(struct ibmvscsi_host_data *hostdata)
 {
+	unsigned long flags;
 	int rc;
 	char *action = "reset";
 
-	if (hostdata->reset_crq) {
-		smp_rmb();
-		hostdata->reset_crq = 0;
-
+	spin_lock_irqsave(hostdata->host->host_lock, flags);
+	switch (hostdata->action) {
+	case IBMVSCSI_HOST_ACTION_UNBLOCK:
+		rc = 0;
+		break;
+	case IBMVSCSI_HOST_ACTION_RESET:
+		spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 		rc = ibmvscsi_reset_crq_queue(&hostdata->queue, hostdata);
+		spin_lock_irqsave(hostdata->host->host_lock, flags);
 		if (!rc)
 			rc = ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0);
 		vio_enable_interrupts(to_vio_dev(hostdata->dev));
-	} else if (hostdata->reenable_crq) {
-		smp_rmb();
+		break;
+	case IBMVSCSI_HOST_ACTION_REENABLE:
 		action = "enable";
+		spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 		rc = ibmvscsi_reenable_crq_queue(&hostdata->queue, hostdata);
-		hostdata->reenable_crq = 0;
+		spin_lock_irqsave(hostdata->host->host_lock, flags);
 		if (!rc)
 			rc = ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0);
-	} else
+		break;
+	case IBMVSCSI_HOST_ACTION_NONE:
+	default:
+		spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 		return;
+	}
+
+	hostdata->action = IBMVSCSI_HOST_ACTION_NONE;
 
 	if (rc) {
 		atomic_set(&hostdata->request_limit, -1);
 		dev_err(hostdata->dev, "error after %s\n", action);
 	}
+	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 
 	scsi_unblock_requests(hostdata->host);
 }
 
-static int ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata)
+static int __ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata)
2124{ 2149{
2125 if (kthread_should_stop()) 2150 if (kthread_should_stop())
2126 return 1; 2151 return 1;
2127 else if (hostdata->reset_crq) { 2152 switch (hostdata->action) {
2128 smp_rmb(); 2153 case IBMVSCSI_HOST_ACTION_NONE:
2129 return 1; 2154 return 0;
2130 } else if (hostdata->reenable_crq) { 2155 case IBMVSCSI_HOST_ACTION_RESET:
2131 smp_rmb(); 2156 case IBMVSCSI_HOST_ACTION_REENABLE:
2132 return 1; 2157 case IBMVSCSI_HOST_ACTION_UNBLOCK:
2158 default:
2159 break;
2133 } 2160 }
2134 2161
2135 return 0; 2162 return 1;
2163}
2164
2165static int ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata)
2166{
2167 unsigned long flags;
2168 int rc;
2169
2170 spin_lock_irqsave(hostdata->host->host_lock, flags);
2171 rc = __ibmvscsi_work_to_do(hostdata);
2172 spin_unlock_irqrestore(hostdata->host->host_lock, flags);
2173
2174 return rc;
2136} 2175}
2137 2176
2138static int ibmvscsi_work(void *data) 2177static int ibmvscsi_work(void *data)
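The rework above replaces the two independent flag words (`reset_crq`, `reenable_crq`) that were sampled with `smp_rmb()` by a single `action` enum that is only read and updated under `host_lock`. A user-space sketch of the same pattern, with the lock mapped onto a pthread mutex (names and locking primitive are illustrative, not the driver's):

```c
#include <assert.h>
#include <pthread.h>

/* One enum field guarded by a lock replaces two racy flag words. */
enum host_action { ACTION_NONE, ACTION_RESET, ACTION_REENABLE, ACTION_UNBLOCK };

struct host_state {
	pthread_mutex_t lock;      /* stands in for host->host_lock */
	enum host_action action;
};

/* Mirrors ibmvscsi_work_to_do(): nonzero when the worker should run. */
static int work_to_do(struct host_state *hs)
{
	int rc;

	pthread_mutex_lock(&hs->lock);
	rc = (hs->action != ACTION_NONE);
	pthread_mutex_unlock(&hs->lock);
	return rc;
}

/* Mirrors the consume step of ibmvscsi_do_work(): the pending action is
 * read and cleared back to ACTION_NONE under the same lock. */
static enum host_action take_action(struct host_state *hs)
{
	enum host_action a;

	pthread_mutex_lock(&hs->lock);
	a = hs->action;
	hs->action = ACTION_NONE;
	pthread_mutex_unlock(&hs->lock);
	return a;
}
```

The real `ibmvscsi_do_work()` additionally drops the lock around the CRQ reset/reenable calls themselves, since those can sleep; only the state-machine field stays lock-protected.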
diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.h b/drivers/scsi/ibmvscsi/ibmvscsi.h
index 6ebd1410488d..e60916ef7a49 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.h
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.h
@@ -74,13 +74,19 @@ struct event_pool {
74 dma_addr_t iu_token; 74 dma_addr_t iu_token;
75}; 75};
76 76
77enum ibmvscsi_host_action {
78 IBMVSCSI_HOST_ACTION_NONE = 0,
79 IBMVSCSI_HOST_ACTION_RESET,
80 IBMVSCSI_HOST_ACTION_REENABLE,
81 IBMVSCSI_HOST_ACTION_UNBLOCK,
82};
83
77/* all driver data associated with a host adapter */ 84/* all driver data associated with a host adapter */
78struct ibmvscsi_host_data { 85struct ibmvscsi_host_data {
79 struct list_head host_list; 86 struct list_head host_list;
80 atomic_t request_limit; 87 atomic_t request_limit;
81 int client_migrated; 88 int client_migrated;
82 int reset_crq; 89 enum ibmvscsi_host_action action;
83 int reenable_crq;
84 struct device *dev; 90 struct device *dev;
85 struct event_pool pool; 91 struct event_pool pool;
86 struct crq_queue queue; 92 struct crq_queue queue;
diff --git a/drivers/scsi/isci/remote_device.c b/drivers/scsi/isci/remote_device.c
index 9d29edb9f590..49aa4e657c44 100644
--- a/drivers/scsi/isci/remote_device.c
+++ b/drivers/scsi/isci/remote_device.c
@@ -1087,7 +1087,7 @@ static void sci_remote_device_ready_state_enter(struct sci_base_state_machine *s
1087 1087
1088 if (dev->dev_type == SAS_SATA_DEV || (dev->tproto & SAS_PROTOCOL_SATA)) { 1088 if (dev->dev_type == SAS_SATA_DEV || (dev->tproto & SAS_PROTOCOL_SATA)) {
1089 sci_change_state(&idev->sm, SCI_STP_DEV_IDLE); 1089 sci_change_state(&idev->sm, SCI_STP_DEV_IDLE);
1090 } else if (dev_is_expander(dev)) { 1090 } else if (dev_is_expander(dev->dev_type)) {
1091 sci_change_state(&idev->sm, SCI_SMP_DEV_IDLE); 1091 sci_change_state(&idev->sm, SCI_SMP_DEV_IDLE);
1092 } else 1092 } else
1093 isci_remote_device_ready(ihost, idev); 1093 isci_remote_device_ready(ihost, idev);
@@ -1478,7 +1478,7 @@ static enum sci_status isci_remote_device_construct(struct isci_port *iport,
1478 struct domain_device *dev = idev->domain_dev; 1478 struct domain_device *dev = idev->domain_dev;
1479 enum sci_status status; 1479 enum sci_status status;
1480 1480
1481 if (dev->parent && dev_is_expander(dev->parent)) 1481 if (dev->parent && dev_is_expander(dev->parent->dev_type))
1482 status = sci_remote_device_ea_construct(iport, idev); 1482 status = sci_remote_device_ea_construct(iport, idev);
1483 else 1483 else
1484 status = sci_remote_device_da_construct(iport, idev); 1484 status = sci_remote_device_da_construct(iport, idev);
diff --git a/drivers/scsi/isci/remote_device.h b/drivers/scsi/isci/remote_device.h
index 47a013fffae7..3ad681c4c20a 100644
--- a/drivers/scsi/isci/remote_device.h
+++ b/drivers/scsi/isci/remote_device.h
@@ -295,11 +295,6 @@ static inline struct isci_remote_device *rnc_to_dev(struct sci_remote_node_conte
295 return idev; 295 return idev;
296} 296}
297 297
298static inline bool dev_is_expander(struct domain_device *dev)
299{
300 return dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE;
301}
302
303static inline void sci_remote_device_decrement_request_count(struct isci_remote_device *idev) 298static inline void sci_remote_device_decrement_request_count(struct isci_remote_device *idev)
304{ 299{
305 /* XXX delete this voodoo when converting to the top-level device 300 /* XXX delete this voodoo when converting to the top-level device
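The isci-local `dev_is_expander(struct domain_device *)` removed above gives way to a libsas-wide helper that takes the device type enum directly, which is why every caller in this series now passes `...->dev_type`. A standalone model of the promoted predicate (the enum values here are illustrative, not the kernel's actual `enum sas_device_type` encoding):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative subset of the SAS device-type enum. */
enum sas_device_type {
	SAS_END_DEVICE,
	SAS_EDGE_EXPANDER_DEVICE,
	SAS_FANOUT_EXPANDER_DEVICE,
	SAS_SATA_DEV,
};

/* The consolidated helper: true for either expander flavor, replacing
 * the open-coded two-way comparisons scattered through libsas/isci. */
static inline bool dev_is_expander(enum sas_device_type type)
{
	return type == SAS_EDGE_EXPANDER_DEVICE ||
	       type == SAS_FANOUT_EXPANDER_DEVICE;
}
```

Taking the enum rather than `struct domain_device *` also lets call sites like `sas_find_sub_addr()` test `phy->attached_dev_type`, where no `domain_device` exists yet.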
diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
index 1b18cf55167e..343d24c7e788 100644
--- a/drivers/scsi/isci/request.c
+++ b/drivers/scsi/isci/request.c
@@ -224,7 +224,7 @@ static void scu_ssp_request_construct_task_context(
224 idev = ireq->target_device; 224 idev = ireq->target_device;
225 iport = idev->owning_port; 225 iport = idev->owning_port;
226 226
227 /* Fill in the TC with the its required data */ 227 /* Fill in the TC with its required data */
228 task_context->abort = 0; 228 task_context->abort = 0;
229 task_context->priority = 0; 229 task_context->priority = 0;
230 task_context->initiator_request = 1; 230 task_context->initiator_request = 1;
@@ -506,7 +506,7 @@ static void scu_sata_request_construct_task_context(
506 idev = ireq->target_device; 506 idev = ireq->target_device;
507 iport = idev->owning_port; 507 iport = idev->owning_port;
508 508
509 /* Fill in the TC with the its required data */ 509 /* Fill in the TC with its required data */
510 task_context->abort = 0; 510 task_context->abort = 0;
511 task_context->priority = SCU_TASK_PRIORITY_NORMAL; 511 task_context->priority = SCU_TASK_PRIORITY_NORMAL;
512 task_context->initiator_request = 1; 512 task_context->initiator_request = 1;
@@ -3101,7 +3101,7 @@ sci_io_request_construct(struct isci_host *ihost,
3101 /* pass */; 3101 /* pass */;
3102 else if (dev_is_sata(dev)) 3102 else if (dev_is_sata(dev))
3103 memset(&ireq->stp.cmd, 0, sizeof(ireq->stp.cmd)); 3103 memset(&ireq->stp.cmd, 0, sizeof(ireq->stp.cmd));
3104 else if (dev_is_expander(dev)) 3104 else if (dev_is_expander(dev->dev_type))
3105 /* pass */; 3105 /* pass */;
3106 else 3106 else
3107 return SCI_FAILURE_UNSUPPORTED_PROTOCOL; 3107 return SCI_FAILURE_UNSUPPORTED_PROTOCOL;
@@ -3235,7 +3235,7 @@ sci_io_request_construct_smp(struct device *dev,
3235 iport = idev->owning_port; 3235 iport = idev->owning_port;
3236 3236
3237 /* 3237 /*
3238 * Fill in the TC with the its required data 3238 * Fill in the TC with its required data
3239 * 00h 3239 * 00h
3240 */ 3240 */
3241 task_context->priority = 0; 3241 task_context->priority = 0;
diff --git a/drivers/scsi/isci/task.c b/drivers/scsi/isci/task.c
index fb6eba331ac6..26fa1a4d1e6b 100644
--- a/drivers/scsi/isci/task.c
+++ b/drivers/scsi/isci/task.c
@@ -511,7 +511,7 @@ int isci_task_abort_task(struct sas_task *task)
511 "%s: dev = %p (%s%s), task = %p, old_request == %p\n", 511 "%s: dev = %p (%s%s), task = %p, old_request == %p\n",
512 __func__, idev, 512 __func__, idev,
513 (dev_is_sata(task->dev) ? "STP/SATA" 513 (dev_is_sata(task->dev) ? "STP/SATA"
514 : ((dev_is_expander(task->dev)) 514 : ((dev_is_expander(task->dev->dev_type))
515 ? "SMP" 515 ? "SMP"
516 : "SSP")), 516 : "SSP")),
517 ((idev) ? ((test_bit(IDEV_GONE, &idev->flags)) 517 ((idev) ? ((test_bit(IDEV_GONE, &idev->flags))
diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index 719e57685dd5..6ef93c7af954 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -8,8 +8,6 @@
8 * Copyright (C) 2006 Red Hat, Inc. All rights reserved. 8 * Copyright (C) 2006 Red Hat, Inc. All rights reserved.
9 * maintained by open-iscsi@googlegroups.com 9 * maintained by open-iscsi@googlegroups.com
10 * 10 *
11 * See the file COPYING included with this distribution for more details.
12 *
13 * Credits: 11 * Credits:
14 * Christoph Hellwig 12 * Christoph Hellwig
15 * FUJITA Tomonori 13 * FUJITA Tomonori
diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c
index 726ada9b8c79..abcad097ff2f 100644
--- a/drivers/scsi/libsas/sas_discover.c
+++ b/drivers/scsi/libsas/sas_discover.c
@@ -1,25 +1,9 @@
1// SPDX-License-Identifier: GPL-2.0
1/* 2/*
2 * Serial Attached SCSI (SAS) Discover process 3 * Serial Attached SCSI (SAS) Discover process
3 * 4 *
4 * Copyright (C) 2005 Adaptec, Inc. All rights reserved. 5 * Copyright (C) 2005 Adaptec, Inc. All rights reserved.
5 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 6 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
6 *
7 * This file is licensed under GPLv2.
8 *
9 * This program is free software; you can redistribute it and/or
10 * modify it under the terms of the GNU General Public License as
11 * published by the Free Software Foundation; either version 2 of the
12 * License, or (at your option) any later version.
13 *
14 * This program is distributed in the hope that it will be useful, but
15 * WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
17 * General Public License for more details.
18 *
19 * You should have received a copy of the GNU General Public License
20 * along with this program; if not, write to the Free Software
21 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
22 *
23 */ 7 */
24 8
25#include <linux/scatterlist.h> 9#include <linux/scatterlist.h>
@@ -309,7 +293,7 @@ void sas_free_device(struct kref *kref)
309 dev->phy = NULL; 293 dev->phy = NULL;
310 294
311 /* remove the phys and ports, everything else should be gone */ 295 /* remove the phys and ports, everything else should be gone */
312 if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) 296 if (dev_is_expander(dev->dev_type))
313 kfree(dev->ex_dev.ex_phy); 297 kfree(dev->ex_dev.ex_phy);
314 298
315 if (dev_is_sata(dev) && dev->sata_dev.ap) { 299 if (dev_is_sata(dev) && dev->sata_dev.ap) {
@@ -519,8 +503,7 @@ static void sas_revalidate_domain(struct work_struct *work)
519 pr_debug("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, 503 pr_debug("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id,
520 task_pid_nr(current)); 504 task_pid_nr(current));
521 505
522 if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE || 506 if (ddev && dev_is_expander(ddev->dev_type))
523 ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE))
524 res = sas_ex_revalidate_domain(ddev); 507 res = sas_ex_revalidate_domain(ddev);
525 508
526 pr_debug("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n", 509 pr_debug("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n",
diff --git a/drivers/scsi/libsas/sas_event.c b/drivers/scsi/libsas/sas_event.c
index b1e0f7d2b396..a1852f6c042b 100644
--- a/drivers/scsi/libsas/sas_event.c
+++ b/drivers/scsi/libsas/sas_event.c
@@ -1,25 +1,9 @@
1// SPDX-License-Identifier: GPL-2.0
1/* 2/*
2 * Serial Attached SCSI (SAS) Event processing 3 * Serial Attached SCSI (SAS) Event processing
3 * 4 *
4 * Copyright (C) 2005 Adaptec, Inc. All rights reserved. 5 * Copyright (C) 2005 Adaptec, Inc. All rights reserved.
5 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 6 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
6 *
7 * This file is licensed under GPLv2.
8 *
9 * This program is free software; you can redistribute it and/or
10 * modify it under the terms of the GNU General Public License as
11 * published by the Free Software Foundation; either version 2 of the
12 * License, or (at your option) any later version.
13 *
14 * This program is distributed in the hope that it will be useful, but
15 * WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
17 * General Public License for more details.
18 *
19 * You should have received a copy of the GNU General Public License
20 * along with this program; if not, write to the Free Software
21 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
22 *
23 */ 7 */
24 8
25#include <linux/export.h> 9#include <linux/export.h>
diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
index 9f7e2457360e..9fdb9c9fbda4 100644
--- a/drivers/scsi/libsas/sas_expander.c
+++ b/drivers/scsi/libsas/sas_expander.c
@@ -1,3 +1,4 @@
1// SPDX-License-Identifier: GPL-2.0
1/* 2/*
2 * Serial Attached SCSI (SAS) Expander discovery and configuration 3 * Serial Attached SCSI (SAS) Expander discovery and configuration
3 * 4 *
@@ -5,21 +6,6 @@
5 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 6 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
6 * 7 *
7 * This file is licensed under GPLv2. 8 * This file is licensed under GPLv2.
8 *
9 * This program is free software; you can redistribute it and/or
10 * modify it under the terms of the GNU General Public License as
11 * published by the Free Software Foundation; either version 2 of the
12 * License, or (at your option) any later version.
13 *
14 * This program is distributed in the hope that it will be useful, but
15 * WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
17 * General Public License for more details.
18 *
19 * You should have received a copy of the GNU General Public License
20 * along with this program; if not, write to the Free Software
21 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
22 *
23 */ 9 */
24 10
25#include <linux/scatterlist.h> 11#include <linux/scatterlist.h>
@@ -1106,7 +1092,7 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
1106 SAS_ADDR(dev->sas_addr), 1092 SAS_ADDR(dev->sas_addr),
1107 phy_id); 1093 phy_id);
1108 sas_ex_disable_phy(dev, phy_id); 1094 sas_ex_disable_phy(dev, phy_id);
1109 break; 1095 return res;
1110 } else 1096 } else
1111 memcpy(dev->port->disc.fanout_sas_addr, 1097 memcpy(dev->port->disc.fanout_sas_addr,
1112 ex_phy->attached_sas_addr, SAS_ADDR_SIZE); 1098 ex_phy->attached_sas_addr, SAS_ADDR_SIZE);
@@ -1118,27 +1104,9 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
1118 break; 1104 break;
1119 } 1105 }
1120 1106
1121 if (child) { 1107 if (!child)
1122 int i; 1108 pr_notice("ex %016llx phy%02d failed to discover\n",
1123 1109 SAS_ADDR(dev->sas_addr), phy_id);
1124 for (i = 0; i < ex->num_phys; i++) {
1125 if (ex->ex_phy[i].phy_state == PHY_VACANT ||
1126 ex->ex_phy[i].phy_state == PHY_NOT_PRESENT)
1127 continue;
1128 /*
1129 * Due to races, the phy might not get added to the
1130 * wide port, so we add the phy to the wide port here.
1131 */
1132 if (SAS_ADDR(ex->ex_phy[i].attached_sas_addr) ==
1133 SAS_ADDR(child->sas_addr)) {
1134 ex->ex_phy[i].phy_state= PHY_DEVICE_DISCOVERED;
1135 if (sas_ex_join_wide_port(dev, i))
1136 pr_debug("Attaching ex phy%02d to wide port %016llx\n",
1137 i, SAS_ADDR(ex->ex_phy[i].attached_sas_addr));
1138 }
1139 }
1140 }
1141
1142 return res; 1110 return res;
1143} 1111}
1144 1112
@@ -1154,8 +1122,7 @@ static int sas_find_sub_addr(struct domain_device *dev, u8 *sub_addr)
1154 phy->phy_state == PHY_NOT_PRESENT) 1122 phy->phy_state == PHY_NOT_PRESENT)
1155 continue; 1123 continue;
1156 1124
1157 if ((phy->attached_dev_type == SAS_EDGE_EXPANDER_DEVICE || 1125 if (dev_is_expander(phy->attached_dev_type) &&
1158 phy->attached_dev_type == SAS_FANOUT_EXPANDER_DEVICE) &&
1159 phy->routing_attr == SUBTRACTIVE_ROUTING) { 1126 phy->routing_attr == SUBTRACTIVE_ROUTING) {
1160 1127
1161 memcpy(sub_addr, phy->attached_sas_addr, SAS_ADDR_SIZE); 1128 memcpy(sub_addr, phy->attached_sas_addr, SAS_ADDR_SIZE);
@@ -1173,8 +1140,7 @@ static int sas_check_level_subtractive_boundary(struct domain_device *dev)
1173 u8 sub_addr[SAS_ADDR_SIZE] = {0, }; 1140 u8 sub_addr[SAS_ADDR_SIZE] = {0, };
1174 1141
1175 list_for_each_entry(child, &ex->children, siblings) { 1142 list_for_each_entry(child, &ex->children, siblings) {
1176 if (child->dev_type != SAS_EDGE_EXPANDER_DEVICE && 1143 if (!dev_is_expander(child->dev_type))
1177 child->dev_type != SAS_FANOUT_EXPANDER_DEVICE)
1178 continue; 1144 continue;
1179 if (sub_addr[0] == 0) { 1145 if (sub_addr[0] == 0) {
1180 sas_find_sub_addr(child, sub_addr); 1146 sas_find_sub_addr(child, sub_addr);
@@ -1259,8 +1225,7 @@ static int sas_check_ex_subtractive_boundary(struct domain_device *dev)
1259 phy->phy_state == PHY_NOT_PRESENT) 1225 phy->phy_state == PHY_NOT_PRESENT)
1260 continue; 1226 continue;
1261 1227
1262 if ((phy->attached_dev_type == SAS_FANOUT_EXPANDER_DEVICE || 1228 if (dev_is_expander(phy->attached_dev_type) &&
1263 phy->attached_dev_type == SAS_EDGE_EXPANDER_DEVICE) &&
1264 phy->routing_attr == SUBTRACTIVE_ROUTING) { 1229 phy->routing_attr == SUBTRACTIVE_ROUTING) {
1265 1230
1266 if (!sub_sas_addr) 1231 if (!sub_sas_addr)
@@ -1356,8 +1321,7 @@ static int sas_check_parent_topology(struct domain_device *child)
1356 if (!child->parent) 1321 if (!child->parent)
1357 return 0; 1322 return 0;
1358 1323
1359 if (child->parent->dev_type != SAS_EDGE_EXPANDER_DEVICE && 1324 if (!dev_is_expander(child->parent->dev_type))
1360 child->parent->dev_type != SAS_FANOUT_EXPANDER_DEVICE)
1361 return 0; 1325 return 0;
1362 1326
1363 parent_ex = &child->parent->ex_dev; 1327 parent_ex = &child->parent->ex_dev;
@@ -1653,8 +1617,7 @@ static int sas_ex_level_discovery(struct asd_sas_port *port, const int level)
1653 struct domain_device *dev; 1617 struct domain_device *dev;
1654 1618
1655 list_for_each_entry(dev, &port->dev_list, dev_list_node) { 1619 list_for_each_entry(dev, &port->dev_list, dev_list_node) {
1656 if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || 1620 if (dev_is_expander(dev->dev_type)) {
1657 dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
1658 struct sas_expander_device *ex = 1621 struct sas_expander_device *ex =
1659 rphy_to_expander_device(dev->rphy); 1622 rphy_to_expander_device(dev->rphy);
1660 1623
@@ -1886,7 +1849,7 @@ static int sas_find_bcast_dev(struct domain_device *dev,
1886 SAS_ADDR(dev->sas_addr)); 1849 SAS_ADDR(dev->sas_addr));
1887 } 1850 }
1888 list_for_each_entry(ch, &ex->children, siblings) { 1851 list_for_each_entry(ch, &ex->children, siblings) {
1889 if (ch->dev_type == SAS_EDGE_EXPANDER_DEVICE || ch->dev_type == SAS_FANOUT_EXPANDER_DEVICE) { 1852 if (dev_is_expander(ch->dev_type)) {
1890 res = sas_find_bcast_dev(ch, src_dev); 1853 res = sas_find_bcast_dev(ch, src_dev);
1891 if (*src_dev) 1854 if (*src_dev)
1892 return res; 1855 return res;
@@ -1903,8 +1866,7 @@ static void sas_unregister_ex_tree(struct asd_sas_port *port, struct domain_devi
1903 1866
1904 list_for_each_entry_safe(child, n, &ex->children, siblings) { 1867 list_for_each_entry_safe(child, n, &ex->children, siblings) {
1905 set_bit(SAS_DEV_GONE, &child->state); 1868 set_bit(SAS_DEV_GONE, &child->state);
1906 if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE || 1869 if (dev_is_expander(child->dev_type))
1907 child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
1908 sas_unregister_ex_tree(port, child); 1870 sas_unregister_ex_tree(port, child);
1909 else 1871 else
1910 sas_unregister_dev(port, child); 1872 sas_unregister_dev(port, child);
@@ -1924,8 +1886,7 @@ static void sas_unregister_devs_sas_addr(struct domain_device *parent,
1924 if (SAS_ADDR(child->sas_addr) == 1886 if (SAS_ADDR(child->sas_addr) ==
1925 SAS_ADDR(phy->attached_sas_addr)) { 1887 SAS_ADDR(phy->attached_sas_addr)) {
1926 set_bit(SAS_DEV_GONE, &child->state); 1888 set_bit(SAS_DEV_GONE, &child->state);
1927 if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE || 1889 if (dev_is_expander(child->dev_type))
1928 child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
1929 sas_unregister_ex_tree(parent->port, child); 1890 sas_unregister_ex_tree(parent->port, child);
1930 else 1891 else
1931 sas_unregister_dev(parent->port, child); 1892 sas_unregister_dev(parent->port, child);
@@ -1954,8 +1915,7 @@ static int sas_discover_bfs_by_root_level(struct domain_device *root,
1954 int res = 0; 1915 int res = 0;
1955 1916
1956 list_for_each_entry(child, &ex_root->children, siblings) { 1917 list_for_each_entry(child, &ex_root->children, siblings) {
1957 if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE || 1918 if (dev_is_expander(child->dev_type)) {
1958 child->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
1959 struct sas_expander_device *ex = 1919 struct sas_expander_device *ex =
1960 rphy_to_expander_device(child->rphy); 1920 rphy_to_expander_device(child->rphy);
1961 1921
@@ -2008,8 +1968,7 @@ static int sas_discover_new(struct domain_device *dev, int phy_id)
2008 list_for_each_entry(child, &dev->ex_dev.children, siblings) { 1968 list_for_each_entry(child, &dev->ex_dev.children, siblings) {
2009 if (SAS_ADDR(child->sas_addr) == 1969 if (SAS_ADDR(child->sas_addr) ==
2010 SAS_ADDR(ex_phy->attached_sas_addr)) { 1970 SAS_ADDR(ex_phy->attached_sas_addr)) {
2011 if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE || 1971 if (dev_is_expander(child->dev_type))
2012 child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
2013 res = sas_discover_bfs_by_root(child); 1972 res = sas_discover_bfs_by_root(child);
2014 break; 1973 break;
2015 } 1974 }
diff --git a/drivers/scsi/libsas/sas_init.c b/drivers/scsi/libsas/sas_init.c
index d50810da53a9..21c43b18d5d5 100644
--- a/drivers/scsi/libsas/sas_init.c
+++ b/drivers/scsi/libsas/sas_init.c
@@ -1,4 +1,4 @@
1// SPDX-License-Identifier: GPL-2.0-or-later 1// SPDX-License-Identifier: GPL-2.0-only
2/* 2/*
3 * Serial Attached SCSI (SAS) Transport Layer initialization 3 * Serial Attached SCSI (SAS) Transport Layer initialization
4 * 4 *
diff --git a/drivers/scsi/libsas/sas_internal.h b/drivers/scsi/libsas/sas_internal.h
index 1f1e07e98477..01f1738ce6df 100644
--- a/drivers/scsi/libsas/sas_internal.h
+++ b/drivers/scsi/libsas/sas_internal.h
@@ -1,4 +1,4 @@
1/* SPDX-License-Identifier: GPL-2.0-or-later */ 1/* SPDX-License-Identifier: GPL-2.0-only */
2/* 2/*
3 * Serial Attached SCSI (SAS) class internal header file 3 * Serial Attached SCSI (SAS) class internal header file
4 * 4 *
diff --git a/drivers/scsi/libsas/sas_phy.c b/drivers/scsi/libsas/sas_phy.c
index b71f5ac6c7dc..4ca4b1f30bd0 100644
--- a/drivers/scsi/libsas/sas_phy.c
+++ b/drivers/scsi/libsas/sas_phy.c
@@ -1,25 +1,9 @@
1// SPDX-License-Identifier: GPL-2.0
1/* 2/*
2 * Serial Attached SCSI (SAS) Phy class 3 * Serial Attached SCSI (SAS) Phy class
3 * 4 *
4 * Copyright (C) 2005 Adaptec, Inc. All rights reserved. 5 * Copyright (C) 2005 Adaptec, Inc. All rights reserved.
5 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 6 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
6 *
7 * This file is licensed under GPLv2.
8 *
9 * This program is free software; you can redistribute it and/or
10 * modify it under the terms of the GNU General Public License as
11 * published by the Free Software Foundation; either version 2 of the
12 * License, or (at your option) any later version.
13 *
14 * This program is distributed in the hope that it will be useful, but
15 * WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
17 * General Public License for more details.
18 *
19 * You should have received a copy of the GNU General Public License
20 * along with this program; if not, write to the Free Software
21 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
22 *
23 */ 7 */
24 8
25#include "sas_internal.h" 9#include "sas_internal.h"
diff --git a/drivers/scsi/libsas/sas_port.c b/drivers/scsi/libsas/sas_port.c
index 38a10478605c..7c86fd248129 100644
--- a/drivers/scsi/libsas/sas_port.c
+++ b/drivers/scsi/libsas/sas_port.c
@@ -1,25 +1,9 @@
1// SPDX-License-Identifier: GPL-2.0
1/* 2/*
2 * Serial Attached SCSI (SAS) Port class 3 * Serial Attached SCSI (SAS) Port class
3 * 4 *
4 * Copyright (C) 2005 Adaptec, Inc. All rights reserved. 5 * Copyright (C) 2005 Adaptec, Inc. All rights reserved.
5 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 6 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
6 *
7 * This file is licensed under GPLv2.
8 *
9 * This program is free software; you can redistribute it and/or
10 * modify it under the terms of the GNU General Public License as
11 * published by the Free Software Foundation; either version 2 of the
12 * License, or (at your option) any later version.
13 *
14 * This program is distributed in the hope that it will be useful, but
15 * WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
17 * General Public License for more details.
18 *
19 * You should have received a copy of the GNU General Public License
20 * along with this program; if not, write to the Free Software
21 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
22 *
23 */ 7 */
24 8
25#include "sas_internal.h" 9#include "sas_internal.h"
@@ -70,7 +54,7 @@ static void sas_resume_port(struct asd_sas_phy *phy)
70 continue; 54 continue;
71 } 55 }
72 56
73 if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) { 57 if (dev_is_expander(dev->dev_type)) {
74 dev->ex_dev.ex_change_count = -1; 58 dev->ex_dev.ex_change_count = -1;
75 for (i = 0; i < dev->ex_dev.num_phys; i++) { 59 for (i = 0; i < dev->ex_dev.num_phys; i++) {
76 struct ex_phy *phy = &dev->ex_dev.ex_phy[i]; 60 struct ex_phy *phy = &dev->ex_dev.ex_phy[i];
@@ -195,7 +179,7 @@ static void sas_form_port(struct asd_sas_phy *phy)
195 179
196 sas_discover_event(phy->port, DISCE_DISCOVER_DOMAIN); 180 sas_discover_event(phy->port, DISCE_DISCOVER_DOMAIN);
197 /* Only insert a revalidate event after initial discovery */ 181 /* Only insert a revalidate event after initial discovery */
198 if (port_dev && sas_dev_type_is_expander(port_dev->dev_type)) { 182 if (port_dev && dev_is_expander(port_dev->dev_type)) {
199 struct expander_device *ex_dev = &port_dev->ex_dev; 183 struct expander_device *ex_dev = &port_dev->ex_dev;
200 184
201 ex_dev->ex_change_count = -1; 185 ex_dev->ex_change_count = -1;
@@ -264,7 +248,7 @@ void sas_deform_port(struct asd_sas_phy *phy, int gone)
264 spin_unlock_irqrestore(&sas_ha->phy_port_lock, flags); 248 spin_unlock_irqrestore(&sas_ha->phy_port_lock, flags);
265 249
266 /* Only insert revalidate event if the port still has members */ 250 /* Only insert revalidate event if the port still has members */
267 if (port->port && dev && sas_dev_type_is_expander(dev->dev_type)) { 251 if (port->port && dev && dev_is_expander(dev->dev_type)) {
268 struct expander_device *ex_dev = &dev->ex_dev; 252 struct expander_device *ex_dev = &dev->ex_dev;
269 253
270 ex_dev->ex_change_count = -1; 254 ex_dev->ex_change_count = -1;
diff --git a/drivers/scsi/libsas/sas_scsi_host.c b/drivers/scsi/libsas/sas_scsi_host.c
index ede0674d8399..4f339f939a51 100644
--- a/drivers/scsi/libsas/sas_scsi_host.c
+++ b/drivers/scsi/libsas/sas_scsi_host.c
@@ -1,4 +1,4 @@
1// SPDX-License-Identifier: GPL-2.0-or-later 1// SPDX-License-Identifier: GPL-2.0-only
2/* 2/*
3 * Serial Attached SCSI (SAS) class SCSI Host glue. 3 * Serial Attached SCSI (SAS) class SCSI Host glue.
4 * 4 *
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 2bd1e014103b..ea62322ffe2b 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -4097,9 +4097,9 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr,
4097 } 4097 }
4098 if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC || 4098 if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC ||
4099 phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) && 4099 phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) &&
4100 val != FLAGS_TOPOLOGY_MODE_PT_PT) { 4100 val == 4) {
4101 lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 4101 lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
4102 "3114 Only non-FC-AL mode is supported\n"); 4102 "3114 Loop mode not supported\n");
4103 return -EINVAL; 4103 return -EINVAL;
4104 } 4104 }
4105 phba->cfg_topology = val; 4105 phba->cfg_topology = val;
@@ -5180,7 +5180,8 @@ lpfc_cq_max_proc_limit_store(struct device *dev, struct device_attribute *attr,
 
 	/* set the values on the cq's */
 	for (i = 0; i < phba->cfg_irq_chann; i++) {
-		eq = phba->sli4_hba.hdwq[i].hba_eq;
+		/* Get the EQ corresponding to the IRQ vector */
+		eq = phba->sli4_hba.hba_eq_hdl[i].eq;
 		if (!eq)
 			continue;
 
@@ -5301,35 +5302,44 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
 			len += scnprintf(
 				buf + len, PAGE_SIZE - len,
 				"CPU %02d hdwq None "
-				"physid %d coreid %d ht %d\n",
+				"physid %d coreid %d ht %d ua %d\n",
 				phba->sli4_hba.curr_disp_cpu,
-				cpup->phys_id,
-				cpup->core_id, cpup->hyper);
+				cpup->phys_id, cpup->core_id,
+				(cpup->flag & LPFC_CPU_MAP_HYPER),
+				(cpup->flag & LPFC_CPU_MAP_UNASSIGN));
 		else
 			len += scnprintf(
 				buf + len, PAGE_SIZE - len,
 				"CPU %02d EQ %04d hdwq %04d "
-				"physid %d coreid %d ht %d\n",
+				"physid %d coreid %d ht %d ua %d\n",
 				phba->sli4_hba.curr_disp_cpu,
 				cpup->eq, cpup->hdwq, cpup->phys_id,
-				cpup->core_id, cpup->hyper);
+				cpup->core_id,
+				(cpup->flag & LPFC_CPU_MAP_HYPER),
+				(cpup->flag & LPFC_CPU_MAP_UNASSIGN));
 	} else {
 		if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
 			len += scnprintf(
 				buf + len, PAGE_SIZE - len,
 				"CPU %02d hdwq None "
-				"physid %d coreid %d ht %d IRQ %d\n",
+				"physid %d coreid %d ht %d ua %d IRQ %d\n",
 				phba->sli4_hba.curr_disp_cpu,
 				cpup->phys_id,
-				cpup->core_id, cpup->hyper, cpup->irq);
+				cpup->core_id,
+				(cpup->flag & LPFC_CPU_MAP_HYPER),
+				(cpup->flag & LPFC_CPU_MAP_UNASSIGN),
+				cpup->irq);
 		else
 			len += scnprintf(
 				buf + len, PAGE_SIZE - len,
 				"CPU %02d EQ %04d hdwq %04d "
-				"physid %d coreid %d ht %d IRQ %d\n",
+				"physid %d coreid %d ht %d ua %d IRQ %d\n",
 				phba->sli4_hba.curr_disp_cpu,
 				cpup->eq, cpup->hdwq, cpup->phys_id,
-				cpup->core_id, cpup->hyper, cpup->irq);
+				cpup->core_id,
+				(cpup->flag & LPFC_CPU_MAP_HYPER),
+				(cpup->flag & LPFC_CPU_MAP_UNASSIGN),
+				cpup->irq);
 	}
 
 	phba->sli4_hba.curr_disp_cpu++;
diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c
index b0202bc0aa62..b7216d694bff 100644
--- a/drivers/scsi/lpfc/lpfc_bsg.c
+++ b/drivers/scsi/lpfc/lpfc_bsg.c
@@ -5741,7 +5741,7 @@ lpfc_get_trunk_info(struct bsg_job *job)
 
 	event_reply->port_speed = phba->sli4_hba.link_state.speed / 1000;
 	event_reply->logical_speed =
-		phba->sli4_hba.link_state.logical_speed / 100;
+		phba->sli4_hba.link_state.logical_speed / 1000;
 job_error:
 	bsg_reply->result = rc;
 	bsg_job_done(job, bsg_reply->result,
diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index 866374801140..68e9f96242d3 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -572,7 +572,8 @@ void lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba);
 void lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba,
 			struct lpfc_sli_ring *pring, struct lpfc_iocbq *piocb);
 void lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba, uint32_t idx,
-			struct rqb_dmabuf *nvmebuf, uint64_t isr_ts);
+			struct rqb_dmabuf *nvmebuf, uint64_t isr_ts,
+			uint8_t cqflag);
 void lpfc_nvme_mod_param_dep(struct lpfc_hba *phba);
 void lpfc_nvme_abort_fcreq_cmpl(struct lpfc_hba *phba,
 			struct lpfc_iocbq *cmdiocb,
diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
index 4812bbbf43cc..ec72c39997d2 100644
--- a/drivers/scsi/lpfc/lpfc_ct.c
+++ b/drivers/scsi/lpfc/lpfc_ct.c
@@ -2358,6 +2358,7 @@ static int
 lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
 			    struct lpfc_fdmi_attr_def *ad)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
@@ -2366,9 +2367,13 @@ lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
 
 	ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
 	ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
-	if (vport->nvmei_support || vport->phba->nvmet_support)
-		ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
 	ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
+
+	/* Check to see if Firmware supports NVME and on physical port */
+	if ((phba->sli_rev == LPFC_SLI_REV4) && (vport == phba->pport) &&
+	    phba->sli4_hba.pc_sli4_params.nvme)
+		ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
+
 	size = FOURBYTES + 32;
 	ad->AttrLen = cpu_to_be16(size);
 	ad->AttrType = cpu_to_be16(RPRT_SUPPORTED_FC4_TYPES);
@@ -2680,9 +2685,12 @@ lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
 
 	ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
 	ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
+	ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
+
+	/* Check to see if NVME is configured or not */
 	if (vport->phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
 		ae->un.AttrTypes[6] = 0x1; /* Type 0x28 - NVME */
-	ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
+
 	size = FOURBYTES + 32;
 	ad->AttrLen = cpu_to_be16(size);
 	ad->AttrType = cpu_to_be16(RPRT_ACTIVE_FC4_TYPES);
diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index 968ed0fd37f7..f12780f4cfbb 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -4308,6 +4308,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 	if ((rspiocb->iocb.ulpStatus == 0)
 	    && (ndlp->nlp_flag & NLP_ACC_REGLOGIN)) {
 		if (!lpfc_unreg_rpi(vport, ndlp) &&
+		    (!(vport->fc_flag & FC_PT2PT)) &&
 		    (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
 		     ndlp->nlp_state == NLP_STE_REG_LOGIN_ISSUE)) {
 			lpfc_printf_vlog(vport, KERN_INFO,
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index eaaef682de25..6d6b14295734 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -72,7 +72,7 @@ unsigned long _dump_buf_dif_order;
 spinlock_t _dump_buf_lock;
 
 /* Used when mapping IRQ vectors in a driver centric manner */
-uint32_t lpfc_present_cpu;
+static uint32_t lpfc_present_cpu;
 
 static void lpfc_get_hba_model_desc(struct lpfc_hba *, uint8_t *, uint8_t *);
 static int lpfc_post_rcv_buf(struct lpfc_hba *);
@@ -93,8 +93,8 @@ static void lpfc_sli4_cq_event_release_all(struct lpfc_hba *);
 static void lpfc_sli4_disable_intr(struct lpfc_hba *);
 static uint32_t lpfc_sli4_enable_intr(struct lpfc_hba *, uint32_t);
 static void lpfc_sli4_oas_verify(struct lpfc_hba *phba);
-static uint16_t lpfc_find_eq_handle(struct lpfc_hba *, uint16_t);
 static uint16_t lpfc_find_cpu_handle(struct lpfc_hba *, uint16_t, int);
+static void lpfc_setup_bg(struct lpfc_hba *, struct Scsi_Host *);
 
 static struct scsi_transport_template *lpfc_transport_template = NULL;
 static struct scsi_transport_template *lpfc_vport_transport_template = NULL;
@@ -1274,8 +1274,10 @@ lpfc_hb_eq_delay_work(struct work_struct *work)
 	if (!eqcnt)
 		goto requeue;
 
+	/* Loop thru all IRQ vectors */
 	for (i = 0; i < phba->cfg_irq_chann; i++) {
-		eq = phba->sli4_hba.hdwq[i].hba_eq;
+		/* Get the EQ corresponding to the IRQ vector */
+		eq = phba->sli4_hba.hba_eq_hdl[i].eq;
 		if (eq && eqcnt[eq->last_cpu] < 2)
 			eqcnt[eq->last_cpu]++;
 		continue;
@@ -4114,14 +4116,13 @@ lpfc_new_io_buf(struct lpfc_hba *phba, int num_to_alloc)
 		 * pci bus space for an I/O. The DMA buffer includes the
 		 * number of SGE's necessary to support the sg_tablesize.
 		 */
-		lpfc_ncmd->data = dma_pool_alloc(phba->lpfc_sg_dma_buf_pool,
-						 GFP_KERNEL,
-						 &lpfc_ncmd->dma_handle);
+		lpfc_ncmd->data = dma_pool_zalloc(phba->lpfc_sg_dma_buf_pool,
+						  GFP_KERNEL,
+						  &lpfc_ncmd->dma_handle);
 		if (!lpfc_ncmd->data) {
 			kfree(lpfc_ncmd);
 			break;
 		}
-		memset(lpfc_ncmd->data, 0, phba->cfg_sg_dma_buf_size);
 
 		/*
 		 * 4K Page alignment is CRITICAL to BlockGuard, double check
@@ -4347,6 +4348,9 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
 
 	timer_setup(&vport->delayed_disc_tmo, lpfc_delayed_disc_tmo, 0);
 
+	if (phba->sli3_options & LPFC_SLI3_BG_ENABLED)
+		lpfc_setup_bg(phba, shost);
+
 	error = scsi_add_host_with_dma(shost, dev, &phba->pcidev->dev);
 	if (error)
 		goto out_put_shost;
@@ -5055,7 +5059,7 @@ lpfc_update_trunk_link_status(struct lpfc_hba *phba,
 			bf_get(lpfc_acqe_fc_la_speed, acqe_fc));
 
 	phba->sli4_hba.link_state.logical_speed =
-				bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc);
+				bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc) * 10;
 	/* We got FC link speed, convert to fc_linkspeed (READ_TOPOLOGY) */
 	phba->fc_linkspeed =
 		 lpfc_async_link_speed_to_read_top(
@@ -5158,8 +5162,14 @@ lpfc_sli4_async_fc_evt(struct lpfc_hba *phba, struct lpfc_acqe_fc_la *acqe_fc)
 				bf_get(lpfc_acqe_fc_la_port_number, acqe_fc);
 	phba->sli4_hba.link_state.fault =
 				bf_get(lpfc_acqe_link_fault, acqe_fc);
-	phba->sli4_hba.link_state.logical_speed =
+
+	if (bf_get(lpfc_acqe_fc_la_att_type, acqe_fc) ==
+	    LPFC_FC_LA_TYPE_LINK_DOWN)
+		phba->sli4_hba.link_state.logical_speed = 0;
+	else if (!phba->sli4_hba.conf_trunk)
+		phba->sli4_hba.link_state.logical_speed =
 				bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc) * 10;
+
 	lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
 			"2896 Async FC event - Speed:%dGBaud Topology:x%x "
 			"LA Type:x%x Port Type:%d Port Number:%d Logical speed:"
@@ -6551,6 +6561,8 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 		spin_lock_init(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_nvmet_ctx_list);
 		INIT_LIST_HEAD(&phba->sli4_hba.lpfc_nvmet_io_wait_list);
+		spin_lock_init(&phba->sli4_hba.t_active_list_lock);
+		INIT_LIST_HEAD(&phba->sli4_hba.t_active_ctx_list);
 	}
 
 	/* This abort list used by worker thread */
@@ -7660,8 +7672,6 @@ lpfc_post_init_setup(struct lpfc_hba *phba)
 	 */
 	shost = pci_get_drvdata(phba->pcidev);
 	shost->can_queue = phba->cfg_hba_queue_depth - 10;
-	if (phba->sli3_options & LPFC_SLI3_BG_ENABLED)
-		lpfc_setup_bg(phba, shost);
 
 	lpfc_host_attrib_init(shost);
 
@@ -8740,8 +8750,10 @@ int
 lpfc_sli4_queue_create(struct lpfc_hba *phba)
 {
 	struct lpfc_queue *qdesc;
-	int idx, eqidx, cpu;
+	int idx, cpu, eqcpu;
 	struct lpfc_sli4_hdw_queue *qp;
+	struct lpfc_vector_map_info *cpup;
+	struct lpfc_vector_map_info *eqcpup;
 	struct lpfc_eq_intr_info *eqi;
 
 	/*
@@ -8826,40 +8838,60 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	INIT_LIST_HEAD(&phba->sli4_hba.lpfc_wq_list);
 
 	/* Create HBA Event Queues (EQs) */
-	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
-		/* determine EQ affinity */
-		eqidx = lpfc_find_eq_handle(phba, idx);
-		cpu = lpfc_find_cpu_handle(phba, eqidx, LPFC_FIND_BY_EQ);
-		/*
-		 * If there are more Hardware Queues than available
-		 * EQs, multiple Hardware Queues may share a common EQ.
+	for_each_present_cpu(cpu) {
+		/* We only want to create 1 EQ per vector, even though
+		 * multiple CPUs might be using that vector. so only
+		 * selects the CPUs that are LPFC_CPU_FIRST_IRQ.
 		 */
-		if (idx >= phba->cfg_irq_chann) {
-			/* Share an existing EQ */
-			phba->sli4_hba.hdwq[idx].hba_eq =
-				phba->sli4_hba.hdwq[eqidx].hba_eq;
+		cpup = &phba->sli4_hba.cpu_map[cpu];
+		if (!(cpup->flag & LPFC_CPU_FIRST_IRQ))
 			continue;
-		}
-		/* Create an EQ */
+
+		/* Get a ptr to the Hardware Queue associated with this CPU */
+		qp = &phba->sli4_hba.hdwq[cpup->hdwq];
+
+		/* Allocate an EQ */
 		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 					      phba->sli4_hba.eq_esize,
 					      phba->sli4_hba.eq_ecount, cpu);
 		if (!qdesc) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-					"0497 Failed allocate EQ (%d)\n", idx);
+					"0497 Failed allocate EQ (%d)\n",
+					cpup->hdwq);
 			goto out_error;
 		}
 		qdesc->qe_valid = 1;
-		qdesc->hdwq = idx;
-
-		/* Save the CPU this EQ is affinitised to */
-		qdesc->chann = cpu;
-		phba->sli4_hba.hdwq[idx].hba_eq = qdesc;
+		qdesc->hdwq = cpup->hdwq;
+		qdesc->chann = cpu; /* First CPU this EQ is affinitised to */
 		qdesc->last_cpu = qdesc->chann;
+
+		/* Save the allocated EQ in the Hardware Queue */
+		qp->hba_eq = qdesc;
+
 		eqi = per_cpu_ptr(phba->sli4_hba.eq_info, qdesc->last_cpu);
 		list_add(&qdesc->cpu_list, &eqi->list);
 	}
 
+	/* Now we need to populate the other Hardware Queues, that share
+	 * an IRQ vector, with the associated EQ ptr.
+	 */
+	for_each_present_cpu(cpu) {
+		cpup = &phba->sli4_hba.cpu_map[cpu];
+
+		/* Check for EQ already allocated in previous loop */
+		if (cpup->flag & LPFC_CPU_FIRST_IRQ)
+			continue;
+
+		/* Check for multiple CPUs per hdwq */
+		qp = &phba->sli4_hba.hdwq[cpup->hdwq];
+		if (qp->hba_eq)
+			continue;
+
+		/* We need to share an EQ for this hdwq */
+		eqcpu = lpfc_find_cpu_handle(phba, cpup->eq, LPFC_FIND_BY_EQ);
+		eqcpup = &phba->sli4_hba.cpu_map[eqcpu];
+		qp->hba_eq = phba->sli4_hba.hdwq[eqcpup->hdwq].hba_eq;
+	}
 
 	/* Allocate SCSI SLI4 CQ/WQs */
 	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
@@ -9122,23 +9154,31 @@ static inline void
 lpfc_sli4_release_hdwq(struct lpfc_hba *phba)
 {
 	struct lpfc_sli4_hdw_queue *hdwq;
+	struct lpfc_queue *eq;
 	uint32_t idx;
 
 	hdwq = phba->sli4_hba.hdwq;
-	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
-		if (idx < phba->cfg_irq_chann)
-			lpfc_sli4_queue_free(hdwq[idx].hba_eq);
-		hdwq[idx].hba_eq = NULL;
 
+	/* Loop thru all Hardware Queues */
+	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
+		/* Free the CQ/WQ corresponding to the Hardware Queue */
 		lpfc_sli4_queue_free(hdwq[idx].fcp_cq);
 		lpfc_sli4_queue_free(hdwq[idx].nvme_cq);
 		lpfc_sli4_queue_free(hdwq[idx].fcp_wq);
 		lpfc_sli4_queue_free(hdwq[idx].nvme_wq);
+		hdwq[idx].hba_eq = NULL;
 		hdwq[idx].fcp_cq = NULL;
 		hdwq[idx].nvme_cq = NULL;
 		hdwq[idx].fcp_wq = NULL;
 		hdwq[idx].nvme_wq = NULL;
 	}
+	/* Loop thru all IRQ vectors */
+	for (idx = 0; idx < phba->cfg_irq_chann; idx++) {
+		/* Free the EQ corresponding to the IRQ vector */
+		eq = phba->sli4_hba.hba_eq_hdl[idx].eq;
+		lpfc_sli4_queue_free(eq);
+		phba->sli4_hba.hba_eq_hdl[idx].eq = NULL;
+	}
 }
 
 /**
@@ -9316,16 +9356,17 @@ static void
 lpfc_setup_cq_lookup(struct lpfc_hba *phba)
 {
 	struct lpfc_queue *eq, *childq;
-	struct lpfc_sli4_hdw_queue *qp;
 	int qidx;
 
-	qp = phba->sli4_hba.hdwq;
 	memset(phba->sli4_hba.cq_lookup, 0,
 	       (sizeof(struct lpfc_queue *) * (phba->sli4_hba.cq_max + 1)));
+	/* Loop thru all IRQ vectors */
 	for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
-		eq = qp[qidx].hba_eq;
+		/* Get the EQ corresponding to the IRQ vector */
+		eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
 		if (!eq)
 			continue;
+		/* Loop through all CQs associated with that EQ */
 		list_for_each_entry(childq, &eq->child_list, list) {
 			if (childq->queue_id > phba->sli4_hba.cq_max)
 				continue;
@@ -9354,9 +9395,10 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
 {
 	uint32_t shdr_status, shdr_add_status;
 	union lpfc_sli4_cfg_shdr *shdr;
+	struct lpfc_vector_map_info *cpup;
 	struct lpfc_sli4_hdw_queue *qp;
 	LPFC_MBOXQ_t *mboxq;
-	int qidx;
+	int qidx, cpu;
 	uint32_t length, usdelay;
 	int rc = -ENOMEM;
 
@@ -9417,32 +9459,55 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
 		rc = -ENOMEM;
 		goto out_error;
 	}
+
+	/* Loop thru all IRQ vectors */
 	for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
-		if (!qp[qidx].hba_eq) {
-			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-					"0522 Fast-path EQ (%d) not "
-					"allocated\n", qidx);
-			rc = -ENOMEM;
-			goto out_destroy;
-		}
-		rc = lpfc_eq_create(phba, qp[qidx].hba_eq,
-				    phba->cfg_fcp_imax);
-		if (rc) {
-			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-					"0523 Failed setup of fast-path EQ "
-					"(%d), rc = 0x%x\n", qidx,
-					(uint32_t)rc);
-			goto out_destroy;
+		/* Create HBA Event Queues (EQs) in order */
+		for_each_present_cpu(cpu) {
+			cpup = &phba->sli4_hba.cpu_map[cpu];
+
+			/* Look for the CPU thats using that vector with
+			 * LPFC_CPU_FIRST_IRQ set.
+			 */
+			if (!(cpup->flag & LPFC_CPU_FIRST_IRQ))
+				continue;
+			if (qidx != cpup->eq)
+				continue;
+
+			/* Create an EQ for that vector */
+			rc = lpfc_eq_create(phba, qp[cpup->hdwq].hba_eq,
+					    phba->cfg_fcp_imax);
+			if (rc) {
+				lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+						"0523 Failed setup of fast-path"
+						" EQ (%d), rc = 0x%x\n",
+						cpup->eq, (uint32_t)rc);
+				goto out_destroy;
+			}
+
+			/* Save the EQ for that vector in the hba_eq_hdl */
+			phba->sli4_hba.hba_eq_hdl[cpup->eq].eq =
+				qp[cpup->hdwq].hba_eq;
+
+			lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+					"2584 HBA EQ setup: queue[%d]-id=%d\n",
+					cpup->eq,
+					qp[cpup->hdwq].hba_eq->queue_id);
 		}
-		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-				"2584 HBA EQ setup: queue[%d]-id=%d\n", qidx,
-				qp[qidx].hba_eq->queue_id);
 	}
 
+	/* Loop thru all Hardware Queues */
 	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) {
 		for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
+			cpu = lpfc_find_cpu_handle(phba, qidx,
+						   LPFC_FIND_BY_HDWQ);
+			cpup = &phba->sli4_hba.cpu_map[cpu];
+
+			/* Create the CQ/WQ corresponding to the
+			 * Hardware Queue
+			 */
 			rc = lpfc_create_wq_cq(phba,
-					qp[qidx].hba_eq,
+					phba->sli4_hba.hdwq[cpup->hdwq].hba_eq,
 					qp[qidx].nvme_cq,
 					qp[qidx].nvme_wq,
 					&phba->sli4_hba.hdwq[qidx].nvme_cq_map,
@@ -9458,8 +9523,12 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
 	}
 
 	for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
+		cpu = lpfc_find_cpu_handle(phba, qidx, LPFC_FIND_BY_HDWQ);
+		cpup = &phba->sli4_hba.cpu_map[cpu];
+
+		/* Create the CQ/WQ corresponding to the Hardware Queue */
 		rc = lpfc_create_wq_cq(phba,
-				qp[qidx].hba_eq,
+				phba->sli4_hba.hdwq[cpup->hdwq].hba_eq,
 				qp[qidx].fcp_cq,
 				qp[qidx].fcp_wq,
 				&phba->sli4_hba.hdwq[qidx].fcp_cq_map,
@@ -9711,6 +9780,7 @@ void
 lpfc_sli4_queue_unset(struct lpfc_hba *phba)
 {
 	struct lpfc_sli4_hdw_queue *qp;
+	struct lpfc_queue *eq;
 	int qidx;
 
 	/* Unset mailbox command work queue */
@@ -9762,14 +9832,20 @@ lpfc_sli4_queue_unset(struct lpfc_hba *phba)
 
 	/* Unset fast-path SLI4 queues */
 	if (phba->sli4_hba.hdwq) {
+		/* Loop thru all Hardware Queues */
 		for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
+			/* Destroy the CQ/WQ corresponding to Hardware Queue */
 			qp = &phba->sli4_hba.hdwq[qidx];
 			lpfc_wq_destroy(phba, qp->fcp_wq);
 			lpfc_wq_destroy(phba, qp->nvme_wq);
 			lpfc_cq_destroy(phba, qp->fcp_cq);
 			lpfc_cq_destroy(phba, qp->nvme_cq);
-			if (qidx < phba->cfg_irq_chann)
-				lpfc_eq_destroy(phba, qp->hba_eq);
+		}
+		/* Loop thru all IRQ vectors */
+		for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
+			/* Destroy the EQ corresponding to the IRQ vector */
+			eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
+			lpfc_eq_destroy(phba, eq);
 		}
 	}
 
@@ -10559,11 +10635,12 @@ lpfc_sli_disable_intr(struct lpfc_hba *phba)
 }
 
 /**
- * lpfc_find_cpu_handle - Find the CPU that corresponds to the specified EQ
+ * lpfc_find_cpu_handle - Find the CPU that corresponds to the specified Queue
  * @phba: pointer to lpfc hba data structure.
  * @id: EQ vector index or Hardware Queue index
 * @match: LPFC_FIND_BY_EQ = match by EQ
 *         LPFC_FIND_BY_HDWQ = match by Hardware Queue
+ * Return the CPU that matches the selection criteria
 */
 static uint16_t
 lpfc_find_cpu_handle(struct lpfc_hba *phba, uint16_t id, int match)
@@ -10571,40 +10648,27 @@ lpfc_find_cpu_handle(struct lpfc_hba *phba, uint16_t id, int match)
 	struct lpfc_vector_map_info *cpup;
 	int cpu;
 
-	/* Find the desired phys_id for the specified EQ */
+	/* Loop through all CPUs */
 	for_each_present_cpu(cpu) {
 		cpup = &phba->sli4_hba.cpu_map[cpu];
+
+		/* If we are matching by EQ, there may be multiple CPUs using
+		 * using the same vector, so select the one with
+		 * LPFC_CPU_FIRST_IRQ set.
+		 */
 		if ((match == LPFC_FIND_BY_EQ) &&
+		    (cpup->flag & LPFC_CPU_FIRST_IRQ) &&
 		    (cpup->irq != LPFC_VECTOR_MAP_EMPTY) &&
 		    (cpup->eq == id))
 			return cpu;
+
+		/* If matching by HDWQ, select the first CPU that matches */
 		if ((match == LPFC_FIND_BY_HDWQ) && (cpup->hdwq == id))
 			return cpu;
 	}
 	return 0;
 }
 
-/**
- * lpfc_find_eq_handle - Find the EQ that corresponds to the specified
- *                       Hardware Queue
- * @phba: pointer to lpfc hba data structure.
- * @hdwq: Hardware Queue index
- */
-static uint16_t
-lpfc_find_eq_handle(struct lpfc_hba *phba, uint16_t hdwq)
-{
-	struct lpfc_vector_map_info *cpup;
-	int cpu;
-
-	/* Find the desired phys_id for the specified EQ */
-	for_each_present_cpu(cpu) {
-		cpup = &phba->sli4_hba.cpu_map[cpu];
-		if (cpup->hdwq == hdwq)
-			return cpup->eq;
-	}
-	return 0;
-}
-
 #ifdef CONFIG_X86
 /**
  * lpfc_find_hyper - Determine if the CPU map entry is hyper-threaded
@@ -10645,24 +10709,31 @@ lpfc_find_hyper(struct lpfc_hba *phba, int cpu,
10645static void 10709static void
10646lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors) 10710lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
10647{ 10711{
10648 int i, cpu, idx; 10712 int i, cpu, idx, new_cpu, start_cpu, first_cpu;
10649 int max_phys_id, min_phys_id; 10713 int max_phys_id, min_phys_id;
10650 int max_core_id, min_core_id; 10714 int max_core_id, min_core_id;
10651 struct lpfc_vector_map_info *cpup; 10715 struct lpfc_vector_map_info *cpup;
10716 struct lpfc_vector_map_info *new_cpup;
10652 const struct cpumask *maskp; 10717 const struct cpumask *maskp;
10653#ifdef CONFIG_X86 10718#ifdef CONFIG_X86
10654 struct cpuinfo_x86 *cpuinfo; 10719 struct cpuinfo_x86 *cpuinfo;
10655#endif 10720#endif
10656 10721
10657 /* Init cpu_map array */ 10722 /* Init cpu_map array */
10658 memset(phba->sli4_hba.cpu_map, 0xff, 10723 for_each_possible_cpu(cpu) {
10659 (sizeof(struct lpfc_vector_map_info) * 10724 cpup = &phba->sli4_hba.cpu_map[cpu];
10660 phba->sli4_hba.num_possible_cpu)); 10725 cpup->phys_id = LPFC_VECTOR_MAP_EMPTY;
10726 cpup->core_id = LPFC_VECTOR_MAP_EMPTY;
10727 cpup->hdwq = LPFC_VECTOR_MAP_EMPTY;
10728 cpup->eq = LPFC_VECTOR_MAP_EMPTY;
10729 cpup->irq = LPFC_VECTOR_MAP_EMPTY;
10730 cpup->flag = 0;
10731 }
10661 10732
10662 max_phys_id = 0; 10733 max_phys_id = 0;
10663 min_phys_id = 0xffff; 10734 min_phys_id = LPFC_VECTOR_MAP_EMPTY;
10664 max_core_id = 0; 10735 max_core_id = 0;
10665 min_core_id = 0xffff; 10736 min_core_id = LPFC_VECTOR_MAP_EMPTY;
10666 10737
10667 /* Update CPU map with physical id and core id of each CPU */ 10738 /* Update CPU map with physical id and core id of each CPU */
10668 for_each_present_cpu(cpu) { 10739 for_each_present_cpu(cpu) {
@@ -10671,13 +10742,12 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
10671 cpuinfo = &cpu_data(cpu); 10742 cpuinfo = &cpu_data(cpu);
10672 cpup->phys_id = cpuinfo->phys_proc_id; 10743 cpup->phys_id = cpuinfo->phys_proc_id;
10673 cpup->core_id = cpuinfo->cpu_core_id; 10744 cpup->core_id = cpuinfo->cpu_core_id;
10674 cpup->hyper = lpfc_find_hyper(phba, cpu, 10745 if (lpfc_find_hyper(phba, cpu, cpup->phys_id, cpup->core_id))
10675 cpup->phys_id, cpup->core_id); 10746 cpup->flag |= LPFC_CPU_MAP_HYPER;
10676#else 10747#else
10677 /* No distinction between CPUs for other platforms */ 10748 /* No distinction between CPUs for other platforms */
10678 cpup->phys_id = 0; 10749 cpup->phys_id = 0;
10679 cpup->core_id = cpu; 10750 cpup->core_id = cpu;
10680 cpup->hyper = 0;
10681#endif 10751#endif
10682 10752
10683 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10753 lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
@@ -10703,23 +10773,216 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
10703 eqi->icnt = 0; 10773 eqi->icnt = 0;
10704 } 10774 }
10705 10775
10776 /* This loop sets up all CPUs that are affinitized with a
10777 * irq vector assigned to the driver. All affinitized CPUs
10778 * will get a link to that vectors IRQ and EQ.
10779 */
10706 for (idx = 0; idx < phba->cfg_irq_chann; idx++) { 10780 for (idx = 0; idx < phba->cfg_irq_chann; idx++) {
10781 /* Get a CPU mask for all CPUs affinitized to this vector */
10707 maskp = pci_irq_get_affinity(phba->pcidev, idx); 10782 maskp = pci_irq_get_affinity(phba->pcidev, idx);
10708 if (!maskp) 10783 if (!maskp)
10709 continue; 10784 continue;
10710 10785
10786 i = 0;
10787 /* Loop through all CPUs associated with vector idx */
10711 for_each_cpu_and(cpu, maskp, cpu_present_mask) { 10788 for_each_cpu_and(cpu, maskp, cpu_present_mask) {
10789 /* Set the EQ index and IRQ for that vector */
10712 cpup = &phba->sli4_hba.cpu_map[cpu]; 10790 cpup = &phba->sli4_hba.cpu_map[cpu];
10713 cpup->eq = idx; 10791 cpup->eq = idx;
10714 cpup->hdwq = idx;
10715 cpup->irq = pci_irq_vector(phba->pcidev, idx); 10792 cpup->irq = pci_irq_vector(phba->pcidev, idx);
10716 10793
10717 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 10794 lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
10718 "3336 Set Affinity: CPU %d " 10795 "3336 Set Affinity: CPU %d "
10719 "hdwq %d irq %d\n", 10796 "irq %d eq %d\n",
10720 cpu, cpup->hdwq, cpup->irq); 10797 cpu, cpup->irq, cpup->eq);
10798
 10799 /* If this is the first CPU that's assigned to this
10800 * vector, set LPFC_CPU_FIRST_IRQ.
10801 */
10802 if (!i)
10803 cpup->flag |= LPFC_CPU_FIRST_IRQ;
10804 i++;
10721 } 10805 }
10722 } 10806 }
10807
 10808 /* After looking at each irq vector assigned to this pcidev, it's
 10809 * possible to see that not ALL CPUs have been accounted for.
 10810 * Next we will set any unassigned (unaffinitized) cpu map
 10811 * entries to an IRQ on the same phys_id.
10812 */
10813 first_cpu = cpumask_first(cpu_present_mask);
10814 start_cpu = first_cpu;
10815
10816 for_each_present_cpu(cpu) {
10817 cpup = &phba->sli4_hba.cpu_map[cpu];
10818
10819 /* Is this CPU entry unassigned */
10820 if (cpup->eq == LPFC_VECTOR_MAP_EMPTY) {
10821 /* Mark CPU as IRQ not assigned by the kernel */
10822 cpup->flag |= LPFC_CPU_MAP_UNASSIGN;
10823
 10824 /* If so, find a new_cpup that's on the SAME
 10825 * phys_id as cpup. start_cpu will start where we
 10826 * left off so all unassigned entries don't get assigned
10827 * the IRQ of the first entry.
10828 */
10829 new_cpu = start_cpu;
10830 for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
10831 new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
10832 if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) &&
10833 (new_cpup->irq != LPFC_VECTOR_MAP_EMPTY) &&
10834 (new_cpup->phys_id == cpup->phys_id))
10835 goto found_same;
10836 new_cpu = cpumask_next(
10837 new_cpu, cpu_present_mask);
10838 if (new_cpu == nr_cpumask_bits)
10839 new_cpu = first_cpu;
10840 }
10841 /* At this point, we leave the CPU as unassigned */
10842 continue;
10843found_same:
10844 /* We found a matching phys_id, so copy the IRQ info */
10845 cpup->eq = new_cpup->eq;
10846 cpup->irq = new_cpup->irq;
10847
 10848 /* Bump start_cpu to the next slot to minimize the
10849 * chance of having multiple unassigned CPU entries
10850 * selecting the same IRQ.
10851 */
10852 start_cpu = cpumask_next(new_cpu, cpu_present_mask);
10853 if (start_cpu == nr_cpumask_bits)
10854 start_cpu = first_cpu;
10855
10856 lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
10857 "3337 Set Affinity: CPU %d "
10858 "irq %d from id %d same "
10859 "phys_id (%d)\n",
10860 cpu, cpup->irq, new_cpu, cpup->phys_id);
10861 }
10862 }
10863
 10864 /* Set any unassigned cpu map entries to an IRQ on any phys_id */
10865 start_cpu = first_cpu;
10866
10867 for_each_present_cpu(cpu) {
10868 cpup = &phba->sli4_hba.cpu_map[cpu];
10869
10870 /* Is this entry unassigned */
10871 if (cpup->eq == LPFC_VECTOR_MAP_EMPTY) {
10872 /* Mark it as IRQ not assigned by the kernel */
10873 cpup->flag |= LPFC_CPU_MAP_UNASSIGN;
10874
 10875 /* If so, find a new_cpup that's on ANY phys_id
10876 * as the cpup. start_cpu will start where we
10877 * left off so all unassigned entries don't get
10878 * assigned the IRQ of the first entry.
10879 */
10880 new_cpu = start_cpu;
10881 for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
10882 new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
10883 if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) &&
10884 (new_cpup->irq != LPFC_VECTOR_MAP_EMPTY))
10885 goto found_any;
10886 new_cpu = cpumask_next(
10887 new_cpu, cpu_present_mask);
10888 if (new_cpu == nr_cpumask_bits)
10889 new_cpu = first_cpu;
10890 }
10891 /* We should never leave an entry unassigned */
10892 lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
10893 "3339 Set Affinity: CPU %d "
10894 "irq %d UNASSIGNED\n",
10895 cpup->hdwq, cpup->irq);
10896 continue;
10897found_any:
10898 /* We found an available entry, copy the IRQ info */
10899 cpup->eq = new_cpup->eq;
10900 cpup->irq = new_cpup->irq;
10901
 10902 /* Bump start_cpu to the next slot to minimize the
10903 * chance of having multiple unassigned CPU entries
10904 * selecting the same IRQ.
10905 */
10906 start_cpu = cpumask_next(new_cpu, cpu_present_mask);
10907 if (start_cpu == nr_cpumask_bits)
10908 start_cpu = first_cpu;
10909
10910 lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
10911 "3338 Set Affinity: CPU %d "
10912 "irq %d from id %d (%d/%d)\n",
10913 cpu, cpup->irq, new_cpu,
10914 new_cpup->phys_id, new_cpup->core_id);
10915 }
10916 }
10917
 10918 /* Finally we need to associate an hdwq with each cpu_map entry.
 10919 * This will be 1 to 1 - hdwq to cpu, unless there are fewer
 10920 * hardware queues than CPUs. For that case we will just round-robin
10921 * the available hardware queues as they get assigned to CPUs.
10922 */
10923 idx = 0;
10924 start_cpu = 0;
10925 for_each_present_cpu(cpu) {
10926 cpup = &phba->sli4_hba.cpu_map[cpu];
10927 if (idx >= phba->cfg_hdw_queue) {
10928 /* We need to reuse a Hardware Queue for another CPU,
10929 * so be smart about it and pick one that has its
 10930 * IRQ/EQ mapped to the same phys_id (CPU package)
 10931 * and core_id.
10932 */
10933 new_cpu = start_cpu;
10934 for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
10935 new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
10936 if ((new_cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) &&
10937 (new_cpup->phys_id == cpup->phys_id) &&
10938 (new_cpup->core_id == cpup->core_id))
10939 goto found_hdwq;
10940 new_cpu = cpumask_next(
10941 new_cpu, cpu_present_mask);
10942 if (new_cpu == nr_cpumask_bits)
10943 new_cpu = first_cpu;
10944 }
10945
10946 /* If we can't match both phys_id and core_id,
10947 * settle for just a phys_id match.
10948 */
10949 new_cpu = start_cpu;
10950 for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
10951 new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
10952 if ((new_cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) &&
10953 (new_cpup->phys_id == cpup->phys_id))
10954 goto found_hdwq;
10955 new_cpu = cpumask_next(
10956 new_cpu, cpu_present_mask);
10957 if (new_cpu == nr_cpumask_bits)
10958 new_cpu = first_cpu;
10959 }
10960
10961 /* Otherwise just round robin on cfg_hdw_queue */
10962 cpup->hdwq = idx % phba->cfg_hdw_queue;
10963 goto logit;
10964found_hdwq:
10965 /* We found an available entry, copy the IRQ info */
10966 start_cpu = cpumask_next(new_cpu, cpu_present_mask);
10967 if (start_cpu == nr_cpumask_bits)
10968 start_cpu = first_cpu;
10969 cpup->hdwq = new_cpup->hdwq;
10970 } else {
10971 /* 1 to 1, CPU to hdwq */
10972 cpup->hdwq = idx;
10973 }
10974logit:
10975 lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
10976 "3335 Set Affinity: CPU %d (phys %d core %d): "
10977 "hdwq %d eq %d irq %d flg x%x\n",
10978 cpu, cpup->phys_id, cpup->core_id,
10979 cpup->hdwq, cpup->eq, cpup->irq, cpup->flag);
10980 idx++;
10981 }
10982
10983 /* The cpu_map array will be used later during initialization
10984 * when EQ / CQ / WQs are allocated and configured.
10985 */
10723 return; 10986 return;
10724} 10987}
10725 10988
@@ -11331,24 +11594,43 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
11331 mbx_sli4_parameters); 11594 mbx_sli4_parameters);
11332 phba->sli4_hba.extents_in_use = bf_get(cfg_ext, mbx_sli4_parameters); 11595 phba->sli4_hba.extents_in_use = bf_get(cfg_ext, mbx_sli4_parameters);
11333 phba->sli4_hba.rpi_hdrs_in_use = bf_get(cfg_hdrr, mbx_sli4_parameters); 11596 phba->sli4_hba.rpi_hdrs_in_use = bf_get(cfg_hdrr, mbx_sli4_parameters);
11334 phba->nvme_support = (bf_get(cfg_nvme, mbx_sli4_parameters) && 11597
11335 bf_get(cfg_xib, mbx_sli4_parameters)); 11598 /* Check for firmware nvme support */
11336 11599 rc = (bf_get(cfg_nvme, mbx_sli4_parameters) &&
11337 if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP) || 11600 bf_get(cfg_xib, mbx_sli4_parameters));
11338 !phba->nvme_support) { 11601
11339 phba->nvme_support = 0; 11602 if (rc) {
11340 phba->nvmet_support = 0; 11603 /* Save this to indicate the Firmware supports NVME */
11341 phba->cfg_nvmet_mrq = 0; 11604 sli4_params->nvme = 1;
11342 lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME, 11605
11343 "6101 Disabling NVME support: " 11606 /* Firmware NVME support, check driver FC4 NVME support */
11344 "Not supported by firmware: %d %d\n", 11607 if (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP) {
11345 bf_get(cfg_nvme, mbx_sli4_parameters), 11608 lpfc_printf_log(phba, KERN_INFO, LOG_INIT | LOG_NVME,
11346 bf_get(cfg_xib, mbx_sli4_parameters)); 11609 "6133 Disabling NVME support: "
11347 11610 "FC4 type not supported: x%x\n",
11348 /* If firmware doesn't support NVME, just use SCSI support */ 11611 phba->cfg_enable_fc4_type);
11349 if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP)) 11612 goto fcponly;
11350 return -ENODEV; 11613 }
11351 phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP; 11614 } else {
11615 /* No firmware NVME support, check driver FC4 NVME support */
11616 sli4_params->nvme = 0;
11617 if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) {
11618 lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME,
11619 "6101 Disabling NVME support: Not "
11620 "supported by firmware (%d %d) x%x\n",
11621 bf_get(cfg_nvme, mbx_sli4_parameters),
11622 bf_get(cfg_xib, mbx_sli4_parameters),
11623 phba->cfg_enable_fc4_type);
11624fcponly:
11625 phba->nvme_support = 0;
11626 phba->nvmet_support = 0;
11627 phba->cfg_nvmet_mrq = 0;
11628
11629 /* If no FC4 type support, move to just SCSI support */
11630 if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
11631 return -ENODEV;
11632 phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP;
11633 }
11352 } 11634 }
11353 11635
11354 /* Only embed PBDE for if_type 6, PBDE support requires xib be set */ 11636 /* Only embed PBDE for if_type 6, PBDE support requires xib be set */
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index fdd16d9f55a1..946642cee3df 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -2143,7 +2143,9 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
2143 struct completion *lport_unreg_cmp) 2143 struct completion *lport_unreg_cmp)
2144{ 2144{
2145 u32 wait_tmo; 2145 u32 wait_tmo;
2146 int ret; 2146 int ret, i, pending = 0;
2147 struct lpfc_sli_ring *pring;
2148 struct lpfc_hba *phba = vport->phba;
2147 2149
2148 /* Host transport has to clean up and confirm requiring an indefinite 2150 /* Host transport has to clean up and confirm requiring an indefinite
2149 * wait. Print a message if a 10 second wait expires and renew the 2151 * wait. Print a message if a 10 second wait expires and renew the
@@ -2153,10 +2155,18 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
2153 while (true) { 2155 while (true) {
2154 ret = wait_for_completion_timeout(lport_unreg_cmp, wait_tmo); 2156 ret = wait_for_completion_timeout(lport_unreg_cmp, wait_tmo);
2155 if (unlikely(!ret)) { 2157 if (unlikely(!ret)) {
2158 pending = 0;
2159 for (i = 0; i < phba->cfg_hdw_queue; i++) {
2160 pring = phba->sli4_hba.hdwq[i].nvme_wq->pring;
2161 if (!pring)
2162 continue;
2163 if (pring->txcmplq_cnt)
2164 pending += pring->txcmplq_cnt;
2165 }
2156 lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR, 2166 lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR,
2157 "6176 Lport %p Localport %p wait " 2167 "6176 Lport %p Localport %p wait "
2158 "timed out. Renewing.\n", 2168 "timed out. Pending %d. Renewing.\n",
2159 lport, vport->localport); 2169 lport, vport->localport, pending);
2160 continue; 2170 continue;
2161 } 2171 }
2162 break; 2172 break;
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 06170824a69b..d5812719de2b 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -220,19 +220,68 @@ lpfc_nvmet_cmd_template(void)
220 /* Word 12, 13, 14, 15 - is zero */ 220 /* Word 12, 13, 14, 15 - is zero */
221} 221}
222 222
223#if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
224static struct lpfc_nvmet_rcv_ctx *
225lpfc_nvmet_get_ctx_for_xri(struct lpfc_hba *phba, u16 xri)
226{
227 struct lpfc_nvmet_rcv_ctx *ctxp;
228 unsigned long iflag;
229 bool found = false;
230
231 spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
232 list_for_each_entry(ctxp, &phba->sli4_hba.t_active_ctx_list, list) {
233 if (ctxp->ctxbuf->sglq->sli4_xritag != xri)
234 continue;
235
236 found = true;
237 break;
238 }
239 spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
240 if (found)
241 return ctxp;
242
243 return NULL;
244}
245
246static struct lpfc_nvmet_rcv_ctx *
247lpfc_nvmet_get_ctx_for_oxid(struct lpfc_hba *phba, u16 oxid, u32 sid)
248{
249 struct lpfc_nvmet_rcv_ctx *ctxp;
250 unsigned long iflag;
251 bool found = false;
252
253 spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
254 list_for_each_entry(ctxp, &phba->sli4_hba.t_active_ctx_list, list) {
255 if (ctxp->oxid != oxid || ctxp->sid != sid)
256 continue;
257
258 found = true;
259 break;
260 }
261 spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
262 if (found)
263 return ctxp;
264
265 return NULL;
266}
267#endif
268
223static void 269static void
224lpfc_nvmet_defer_release(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp) 270lpfc_nvmet_defer_release(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp)
225{ 271{
226 lockdep_assert_held(&ctxp->ctxlock); 272 lockdep_assert_held(&ctxp->ctxlock);
227 273
228 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 274 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
229 "6313 NVMET Defer ctx release xri x%x flg x%x\n", 275 "6313 NVMET Defer ctx release oxid x%x flg x%x\n",
230 ctxp->oxid, ctxp->flag); 276 ctxp->oxid, ctxp->flag);
231 277
232 if (ctxp->flag & LPFC_NVMET_CTX_RLS) 278 if (ctxp->flag & LPFC_NVMET_CTX_RLS)
233 return; 279 return;
234 280
235 ctxp->flag |= LPFC_NVMET_CTX_RLS; 281 ctxp->flag |= LPFC_NVMET_CTX_RLS;
282 spin_lock(&phba->sli4_hba.t_active_list_lock);
283 list_del(&ctxp->list);
284 spin_unlock(&phba->sli4_hba.t_active_list_lock);
236 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 285 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
237 list_add_tail(&ctxp->list, &phba->sli4_hba.lpfc_abts_nvmet_ctx_list); 286 list_add_tail(&ctxp->list, &phba->sli4_hba.lpfc_abts_nvmet_ctx_list);
238 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 287 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
@@ -343,16 +392,23 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
343 } 392 }
344 393
345 if (ctxp->rqb_buffer) { 394 if (ctxp->rqb_buffer) {
346 nvmebuf = ctxp->rqb_buffer;
347 spin_lock_irqsave(&ctxp->ctxlock, iflag); 395 spin_lock_irqsave(&ctxp->ctxlock, iflag);
348 ctxp->rqb_buffer = NULL; 396 nvmebuf = ctxp->rqb_buffer;
349 if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) { 397 /* check if freed in another path whilst acquiring lock */
350 ctxp->flag &= ~LPFC_NVMET_CTX_REUSE_WQ; 398 if (nvmebuf) {
351 spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 399 ctxp->rqb_buffer = NULL;
352 nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf); 400 if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) {
401 ctxp->flag &= ~LPFC_NVMET_CTX_REUSE_WQ;
402 spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
403 nvmebuf->hrq->rqbp->rqb_free_buffer(phba,
404 nvmebuf);
405 } else {
406 spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
407 /* repost */
408 lpfc_rq_buf_free(phba, &nvmebuf->hbuf);
409 }
353 } else { 410 } else {
354 spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 411 spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
355 lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
356 } 412 }
357 } 413 }
358 ctxp->state = LPFC_NVMET_STE_FREE; 414 ctxp->state = LPFC_NVMET_STE_FREE;
@@ -388,8 +444,9 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
388 spin_lock_init(&ctxp->ctxlock); 444 spin_lock_init(&ctxp->ctxlock);
389 445
390#ifdef CONFIG_SCSI_LPFC_DEBUG_FS 446#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 391 if (ctxp->ts_cmd_nvme) { 447 /* NOTE: isr time stamp is stale when context is re-assigned */
392 ctxp->ts_cmd_nvme = ktime_get_ns(); 448 if (ctxp->ts_isr_cmd) {
449 ctxp->ts_cmd_nvme = 0;
393 ctxp->ts_nvme_data = 0; 450 ctxp->ts_nvme_data = 0;
394 ctxp->ts_data_wqput = 0; 451 ctxp->ts_data_wqput = 0;
395 ctxp->ts_isr_data = 0; 452 ctxp->ts_isr_data = 0;
@@ -402,9 +459,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
402#endif 459#endif
403 atomic_inc(&tgtp->rcv_fcp_cmd_in); 460 atomic_inc(&tgtp->rcv_fcp_cmd_in);
404 461
405 /* flag new work queued, replacement buffer has already 462 /* Indicate that a replacement buffer has been posted */
406 * been reposted
407 */
408 spin_lock_irqsave(&ctxp->ctxlock, iflag); 463 spin_lock_irqsave(&ctxp->ctxlock, iflag);
409 ctxp->flag |= LPFC_NVMET_CTX_REUSE_WQ; 464 ctxp->flag |= LPFC_NVMET_CTX_REUSE_WQ;
410 spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 465 spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
@@ -433,6 +488,9 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
433 * Use the CPU context list, from the MRQ the IO was received on 488 * Use the CPU context list, from the MRQ the IO was received on
434 * (ctxp->idx), to save context structure. 489 * (ctxp->idx), to save context structure.
435 */ 490 */
491 spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
492 list_del_init(&ctxp->list);
493 spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
436 cpu = raw_smp_processor_id(); 494 cpu = raw_smp_processor_id();
437 infop = lpfc_get_ctx_list(phba, cpu, ctxp->idx); 495 infop = lpfc_get_ctx_list(phba, cpu, ctxp->idx);
438 spin_lock_irqsave(&infop->nvmet_ctx_list_lock, iflag); 496 spin_lock_irqsave(&infop->nvmet_ctx_list_lock, iflag);
@@ -700,8 +758,10 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
700 } 758 }
701 759
702 lpfc_printf_log(phba, KERN_INFO, logerr, 760 lpfc_printf_log(phba, KERN_INFO, logerr,
703 "6315 IO Error Cmpl xri x%x: %x/%x XBUSY:x%x\n", 761 "6315 IO Error Cmpl oxid: x%x xri: x%x %x/%x "
704 ctxp->oxid, status, result, ctxp->flag); 762 "XBUSY:x%x\n",
763 ctxp->oxid, ctxp->ctxbuf->sglq->sli4_xritag,
764 status, result, ctxp->flag);
705 765
706 } else { 766 } else {
707 rsp->fcp_error = NVME_SC_SUCCESS; 767 rsp->fcp_error = NVME_SC_SUCCESS;
@@ -849,7 +909,6 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
849 * before freeing ctxp and iocbq. 909 * before freeing ctxp and iocbq.
850 */ 910 */
851 lpfc_in_buf_free(phba, &nvmebuf->dbuf); 911 lpfc_in_buf_free(phba, &nvmebuf->dbuf);
852 ctxp->rqb_buffer = 0;
853 atomic_inc(&nvmep->xmt_ls_rsp); 912 atomic_inc(&nvmep->xmt_ls_rsp);
854 return 0; 913 return 0;
855 } 914 }
@@ -922,7 +981,7 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
922 (ctxp->state == LPFC_NVMET_STE_ABORT)) { 981 (ctxp->state == LPFC_NVMET_STE_ABORT)) {
923 atomic_inc(&lpfc_nvmep->xmt_fcp_drop); 982 atomic_inc(&lpfc_nvmep->xmt_fcp_drop);
924 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 983 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
925 "6102 IO xri x%x aborted\n", 984 "6102 IO oxid x%x aborted\n",
926 ctxp->oxid); 985 ctxp->oxid);
927 rc = -ENXIO; 986 rc = -ENXIO;
928 goto aerr; 987 goto aerr;
@@ -1022,7 +1081,7 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
1022 ctxp->hdwq = &phba->sli4_hba.hdwq[0]; 1081 ctxp->hdwq = &phba->sli4_hba.hdwq[0];
1023 1082
1024 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1083 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
1025 "6103 NVMET Abort op: oxri x%x flg x%x ste %d\n", 1084 "6103 NVMET Abort op: oxid x%x flg x%x ste %d\n",
1026 ctxp->oxid, ctxp->flag, ctxp->state); 1085 ctxp->oxid, ctxp->flag, ctxp->state);
1027 1086
1028 lpfc_nvmeio_data(phba, "NVMET FCP ABRT: xri x%x flg x%x ste x%x\n", 1087 lpfc_nvmeio_data(phba, "NVMET FCP ABRT: xri x%x flg x%x ste x%x\n",
@@ -1035,7 +1094,7 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
1035 /* Since iaab/iaar are NOT set, we need to check 1094 /* Since iaab/iaar are NOT set, we need to check
1036 * if the firmware is in process of aborting IO 1095 * if the firmware is in process of aborting IO
1037 */ 1096 */
1038 if (ctxp->flag & LPFC_NVMET_XBUSY) { 1097 if (ctxp->flag & (LPFC_NVMET_XBUSY | LPFC_NVMET_ABORT_OP)) {
1039 spin_unlock_irqrestore(&ctxp->ctxlock, flags); 1098 spin_unlock_irqrestore(&ctxp->ctxlock, flags);
1040 return; 1099 return;
1041 } 1100 }
@@ -1098,6 +1157,7 @@ lpfc_nvmet_xmt_fcp_release(struct nvmet_fc_target_port *tgtport,
1098 ctxp->state, aborting); 1157 ctxp->state, aborting);
1099 1158
1100 atomic_inc(&lpfc_nvmep->xmt_fcp_release); 1159 atomic_inc(&lpfc_nvmep->xmt_fcp_release);
1160 ctxp->flag &= ~LPFC_NVMET_TNOTIFY;
1101 1161
1102 if (aborting) 1162 if (aborting)
1103 return; 1163 return;
@@ -1122,7 +1182,7 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
1122 1182
1123 if (!nvmebuf) { 1183 if (!nvmebuf) {
1124 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR, 1184 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
1125 "6425 Defer rcv: no buffer xri x%x: " 1185 "6425 Defer rcv: no buffer oxid x%x: "
1126 "flg %x ste %x\n", 1186 "flg %x ste %x\n",
1127 ctxp->oxid, ctxp->flag, ctxp->state); 1187 ctxp->oxid, ctxp->flag, ctxp->state);
1128 return; 1188 return;
@@ -1514,10 +1574,12 @@ void
1514lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba, 1574lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
1515 struct sli4_wcqe_xri_aborted *axri) 1575 struct sli4_wcqe_xri_aborted *axri)
1516{ 1576{
1577#if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
1517 uint16_t xri = bf_get(lpfc_wcqe_xa_xri, axri); 1578 uint16_t xri = bf_get(lpfc_wcqe_xa_xri, axri);
1518 uint16_t rxid = bf_get(lpfc_wcqe_xa_remote_xid, axri); 1579 uint16_t rxid = bf_get(lpfc_wcqe_xa_remote_xid, axri);
1519 struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp; 1580 struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
1520 struct lpfc_nvmet_tgtport *tgtp; 1581 struct lpfc_nvmet_tgtport *tgtp;
1582 struct nvmefc_tgt_fcp_req *req = NULL;
1521 struct lpfc_nodelist *ndlp; 1583 struct lpfc_nodelist *ndlp;
1522 unsigned long iflag = 0; 1584 unsigned long iflag = 0;
1523 int rrq_empty = 0; 1585 int rrq_empty = 0;
@@ -1548,7 +1610,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
1548 */ 1610 */
1549 if (ctxp->flag & LPFC_NVMET_CTX_RLS && 1611 if (ctxp->flag & LPFC_NVMET_CTX_RLS &&
1550 !(ctxp->flag & LPFC_NVMET_ABORT_OP)) { 1612 !(ctxp->flag & LPFC_NVMET_ABORT_OP)) {
1551 list_del(&ctxp->list); 1613 list_del_init(&ctxp->list);
1552 released = true; 1614 released = true;
1553 } 1615 }
1554 ctxp->flag &= ~LPFC_NVMET_XBUSY; 1616 ctxp->flag &= ~LPFC_NVMET_XBUSY;
@@ -1568,7 +1630,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
1568 } 1630 }
1569 1631
1570 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1632 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
1571 "6318 XB aborted oxid %x flg x%x (%x)\n", 1633 "6318 XB aborted oxid x%x flg x%x (%x)\n",
1572 ctxp->oxid, ctxp->flag, released); 1634 ctxp->oxid, ctxp->flag, released);
1573 if (released) 1635 if (released)
1574 lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); 1636 lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
@@ -1579,6 +1641,33 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
1579 } 1641 }
1580 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1642 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
1581 spin_unlock_irqrestore(&phba->hbalock, iflag); 1643 spin_unlock_irqrestore(&phba->hbalock, iflag);
1644
1645 ctxp = lpfc_nvmet_get_ctx_for_xri(phba, xri);
1646 if (ctxp) {
1647 /*
1648 * Abort already done by FW, so BA_ACC sent.
1649 * However, the transport may be unaware.
1650 */
1651 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
1652 "6323 NVMET Rcv ABTS xri x%x ctxp state x%x "
1653 "flag x%x oxid x%x rxid x%x\n",
1654 xri, ctxp->state, ctxp->flag, ctxp->oxid,
1655 rxid);
1656
1657 spin_lock_irqsave(&ctxp->ctxlock, iflag);
1658 ctxp->flag |= LPFC_NVMET_ABTS_RCV;
1659 ctxp->state = LPFC_NVMET_STE_ABORT;
1660 spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
1661
1662 lpfc_nvmeio_data(phba,
1663 "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
1664 xri, raw_smp_processor_id(), 0);
1665
1666 req = &ctxp->ctx.fcp_req;
1667 if (req)
1668 nvmet_fc_rcv_fcp_abort(phba->targetport, req);
1669 }
1670#endif
1582} 1671}
1583 1672
1584int 1673int
@@ -1589,19 +1678,23 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
1589 struct lpfc_hba *phba = vport->phba; 1678 struct lpfc_hba *phba = vport->phba;
1590 struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp; 1679 struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
1591 struct nvmefc_tgt_fcp_req *rsp; 1680 struct nvmefc_tgt_fcp_req *rsp;
1592 uint16_t xri; 1681 uint32_t sid;
1682 uint16_t oxid, xri;
1593 unsigned long iflag = 0; 1683 unsigned long iflag = 0;
1594 1684
1595 xri = be16_to_cpu(fc_hdr->fh_ox_id); 1685 sid = sli4_sid_from_fc_hdr(fc_hdr);
1686 oxid = be16_to_cpu(fc_hdr->fh_ox_id);
1596 1687
1597 spin_lock_irqsave(&phba->hbalock, iflag); 1688 spin_lock_irqsave(&phba->hbalock, iflag);
1598 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1689 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
1599 list_for_each_entry_safe(ctxp, next_ctxp, 1690 list_for_each_entry_safe(ctxp, next_ctxp,
1600 &phba->sli4_hba.lpfc_abts_nvmet_ctx_list, 1691 &phba->sli4_hba.lpfc_abts_nvmet_ctx_list,
1601 list) { 1692 list) {
1602 if (ctxp->ctxbuf->sglq->sli4_xritag != xri) 1693 if (ctxp->oxid != oxid || ctxp->sid != sid)
1603 continue; 1694 continue;
1604 1695
1696 xri = ctxp->ctxbuf->sglq->sli4_xritag;
1697
1605 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1698 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
1606 spin_unlock_irqrestore(&phba->hbalock, iflag); 1699 spin_unlock_irqrestore(&phba->hbalock, iflag);
1607 1700
@@ -1626,11 +1719,93 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
1626 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1719 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
1627 spin_unlock_irqrestore(&phba->hbalock, iflag); 1720 spin_unlock_irqrestore(&phba->hbalock, iflag);
1628 1721
1629 lpfc_nvmeio_data(phba, "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n", 1722 /* check the wait list */
1630 xri, raw_smp_processor_id(), 1); 1723 if (phba->sli4_hba.nvmet_io_wait_cnt) {
1724 struct rqb_dmabuf *nvmebuf;
1725 struct fc_frame_header *fc_hdr_tmp;
1726 u32 sid_tmp;
1727 u16 oxid_tmp;
1728 bool found = false;
1729
1730 spin_lock_irqsave(&phba->sli4_hba.nvmet_io_wait_lock, iflag);
1731
1732 /* match by oxid and s_id */
1733 list_for_each_entry(nvmebuf,
1734 &phba->sli4_hba.lpfc_nvmet_io_wait_list,
1735 hbuf.list) {
1736 fc_hdr_tmp = (struct fc_frame_header *)
1737 (nvmebuf->hbuf.virt);
1738 oxid_tmp = be16_to_cpu(fc_hdr_tmp->fh_ox_id);
1739 sid_tmp = sli4_sid_from_fc_hdr(fc_hdr_tmp);
1740 if (oxid_tmp != oxid || sid_tmp != sid)
1741 continue;
1742
1743 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
1744 "6321 NVMET Rcv ABTS oxid x%x from x%x "
1745 "is waiting for a ctxp\n",
1746 oxid, sid);
1747
1748 list_del_init(&nvmebuf->hbuf.list);
1749 phba->sli4_hba.nvmet_io_wait_cnt--;
1750 found = true;
1751 break;
1752 }
1753 spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_wait_lock,
1754 iflag);
1755
1756 /* free buffer since already posted a new DMA buffer to RQ */
1757 if (found) {
1758 nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
1759 /* Respond with BA_ACC accordingly */
1760 lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 1);
1761 return 0;
1762 }
1763 }
1764
1765 /* check active list */
1766 ctxp = lpfc_nvmet_get_ctx_for_oxid(phba, oxid, sid);
1767 if (ctxp) {
1768 xri = ctxp->ctxbuf->sglq->sli4_xritag;
1769
1770 spin_lock_irqsave(&ctxp->ctxlock, iflag);
1771 ctxp->flag |= (LPFC_NVMET_ABTS_RCV | LPFC_NVMET_ABORT_OP);
1772 spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
1773
1774 lpfc_nvmeio_data(phba,
1775 "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
1776 xri, raw_smp_processor_id(), 0);
1777
1778 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
1779 "6322 NVMET Rcv ABTS:acc oxid x%x xri x%x "
1780 "flag x%x state x%x\n",
1781 ctxp->oxid, xri, ctxp->flag, ctxp->state);
1782
1783 if (ctxp->flag & LPFC_NVMET_TNOTIFY) {
1784 /* Notify the transport */
1785 nvmet_fc_rcv_fcp_abort(phba->targetport,
1786 &ctxp->ctx.fcp_req);
1787 } else {
1788 cancel_work_sync(&ctxp->ctxbuf->defer_work);
1789 spin_lock_irqsave(&ctxp->ctxlock, iflag);
1790 lpfc_nvmet_defer_release(phba, ctxp);
1791 spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
1792 }
1793 if (ctxp->state == LPFC_NVMET_STE_RCV)
1794 lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, ctxp->sid,
1795 ctxp->oxid);
1796 else
1797 lpfc_nvmet_sol_fcp_issue_abort(phba, ctxp, ctxp->sid,
1798 ctxp->oxid);
1799
1800 lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 1);
1801 return 0;
1802 }
1803
1804 lpfc_nvmeio_data(phba, "NVMET ABTS RCV: oxid x%x CPU %02x rjt %d\n",
1805 oxid, raw_smp_processor_id(), 1);
1631 1806
1632 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1807 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
1633 "6320 NVMET Rcv ABTS:rjt xri x%x\n", xri); 1808 "6320 NVMET Rcv ABTS:rjt oxid x%x\n", oxid);
1634 1809
1635 /* Respond with BA_RJT accordingly */ 1810 /* Respond with BA_RJT accordingly */
1636 lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 0); 1811 lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 0);
@@ -1714,6 +1889,18 @@ lpfc_nvmet_wqfull_process(struct lpfc_hba *phba,
1714 spin_unlock_irqrestore(&pring->ring_lock, iflags); 1889 spin_unlock_irqrestore(&pring->ring_lock, iflags);
1715 return; 1890 return;
1716 } 1891 }
1892 if (rc == WQE_SUCCESS) {
1893#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
1894 if (ctxp->ts_cmd_nvme) {
1895 if (ctxp->ctx.fcp_req.op == NVMET_FCOP_RSP)
1896 ctxp->ts_status_wqput = ktime_get_ns();
1897 else
1898 ctxp->ts_data_wqput = ktime_get_ns();
1899 }
1900#endif
1901 } else {
1902 WARN_ON(rc);
1903 }
1717 } 1904 }
1718 wq->q_flag &= ~HBA_NVMET_WQFULL; 1905 wq->q_flag &= ~HBA_NVMET_WQFULL;
1719 spin_unlock_irqrestore(&pring->ring_lock, iflags); 1906 spin_unlock_irqrestore(&pring->ring_lock, iflags);
@@ -1879,8 +2066,20 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
1879 return; 2066 return;
1880 } 2067 }
1881 2068
2069 if (ctxp->flag & LPFC_NVMET_ABTS_RCV) {
2070 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
2071 "6324 IO oxid x%x aborted\n",
2072 ctxp->oxid);
2073 return;
2074 }
2075
 	payload = (uint32_t *)(nvmebuf->dbuf.virt);
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+	ctxp->flag |= LPFC_NVMET_TNOTIFY;
+#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+	if (ctxp->ts_isr_cmd)
+		ctxp->ts_cmd_nvme = ktime_get_ns();
+#endif
 	/*
 	 * The calling sequence should be:
 	 * nvmet_fc_rcv_fcp_req->lpfc_nvmet_xmt_fcp_op/cmp- req->done
@@ -1930,6 +2129,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 			phba->sli4_hba.nvmet_mrq_data[qno], 1, qno);
 		return;
 	}
+	ctxp->flag &= ~LPFC_NVMET_TNOTIFY;
 	atomic_inc(&tgtp->rcv_fcp_cmd_drop);
 	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 			"2582 FCP Drop IO x%x: err x%x: x%x x%x x%x\n",
@@ -2019,6 +2219,8 @@ lpfc_nvmet_replenish_context(struct lpfc_hba *phba,
  * @phba: pointer to lpfc hba data structure.
  * @idx: relative index of MRQ vector
  * @nvmebuf: pointer to lpfc nvme command HBQ data structure.
+ * @isr_timestamp: in jiffies.
+ * @cqflag: cq processing information regarding workload.
  *
  * This routine is used for processing the WQE associated with a unsolicited
  * event. It first determines whether there is an existing ndlp that matches
@@ -2031,7 +2233,8 @@ static void
 lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 			    uint32_t idx,
 			    struct rqb_dmabuf *nvmebuf,
-			    uint64_t isr_timestamp)
+			    uint64_t isr_timestamp,
+			    uint8_t cqflag)
 {
 	struct lpfc_nvmet_rcv_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
@@ -2118,6 +2321,9 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	sid = sli4_sid_from_fc_hdr(fc_hdr);
 
 	ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context;
+	spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
+	list_add_tail(&ctxp->list, &phba->sli4_hba.t_active_ctx_list);
+	spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
 	if (ctxp->state != LPFC_NVMET_STE_FREE) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6414 NVMET Context corrupt %d %d oxid x%x\n",
@@ -2140,24 +2346,41 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	spin_lock_init(&ctxp->ctxlock);
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
-	if (isr_timestamp) {
+	if (isr_timestamp)
 		ctxp->ts_isr_cmd = isr_timestamp;
-	ctxp->ts_cmd_nvme = ktime_get_ns();
+	ctxp->ts_cmd_nvme = 0;
 	ctxp->ts_nvme_data = 0;
 	ctxp->ts_data_wqput = 0;
 	ctxp->ts_isr_data = 0;
 	ctxp->ts_data_nvme = 0;
 	ctxp->ts_nvme_status = 0;
 	ctxp->ts_status_wqput = 0;
 	ctxp->ts_isr_status = 0;
 	ctxp->ts_status_nvme = 0;
-	} else {
-		ctxp->ts_cmd_nvme = 0;
-	}
 #endif
 
 	atomic_inc(&tgtp->rcv_fcp_cmd_in);
-	lpfc_nvmet_process_rcv_fcp_req(ctx_buf);
+	/* check for cq processing load */
+	if (!cqflag) {
+		lpfc_nvmet_process_rcv_fcp_req(ctx_buf);
+		return;
+	}
+
+	if (!queue_work(phba->wq, &ctx_buf->defer_work)) {
+		atomic_inc(&tgtp->rcv_fcp_cmd_drop);
+		lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
+				"6325 Unable to queue work for oxid x%x. "
+				"FCP Drop IO [x%x x%x x%x]\n",
+				ctxp->oxid,
+				atomic_read(&tgtp->rcv_fcp_cmd_in),
+				atomic_read(&tgtp->rcv_fcp_cmd_out),
+				atomic_read(&tgtp->xmt_fcp_release));
+
+		spin_lock_irqsave(&ctxp->ctxlock, iflag);
+		lpfc_nvmet_defer_release(phba, ctxp);
+		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
+		lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, sid, oxid);
+	}
 }
 
 /**
@@ -2194,6 +2417,8 @@ lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
  * @phba: pointer to lpfc hba data structure.
  * @idx: relative index of MRQ vector
  * @nvmebuf: pointer to received nvme data structure.
+ * @isr_timestamp: in jiffies.
+ * @cqflag: cq processing information regarding workload.
  *
  * This routine is used to process an unsolicited event received from a SLI
  * (Service Level Interface) ring. The actual processing of the data buffer
@@ -2205,14 +2430,14 @@ void
 lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba,
 			   uint32_t idx,
 			   struct rqb_dmabuf *nvmebuf,
-			   uint64_t isr_timestamp)
+			   uint64_t isr_timestamp,
+			   uint8_t cqflag)
 {
 	if (phba->nvmet_support == 0) {
 		lpfc_rq_buf_free(phba, &nvmebuf->hbuf);
 		return;
 	}
-	lpfc_nvmet_unsol_fcp_buffer(phba, idx, nvmebuf,
-				    isr_timestamp);
+	lpfc_nvmet_unsol_fcp_buffer(phba, idx, nvmebuf, isr_timestamp, cqflag);
 }
 
 /**
@@ -2750,7 +2975,7 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	if ((ctxp->flag & LPFC_NVMET_CTX_RLS) &&
 	    !(ctxp->flag & LPFC_NVMET_XBUSY)) {
 		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
-		list_del(&ctxp->list);
+		list_del_init(&ctxp->list);
 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		released = true;
 	}
@@ -2759,7 +2984,7 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	atomic_inc(&tgtp->xmt_abort_rsp);
 
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
-			"6165 ABORT cmpl: xri x%x flg x%x (%d) "
+			"6165 ABORT cmpl: oxid x%x flg x%x (%d) "
 			"WCQE: %08x %08x %08x %08x\n",
 			ctxp->oxid, ctxp->flag, released,
 			wcqe->word0, wcqe->total_data_placed,
@@ -2834,7 +3059,7 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	if ((ctxp->flag & LPFC_NVMET_CTX_RLS) &&
 	    !(ctxp->flag & LPFC_NVMET_XBUSY)) {
 		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
-		list_del(&ctxp->list);
+		list_del_init(&ctxp->list);
 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		released = true;
 	}
@@ -2843,7 +3068,7 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	atomic_inc(&tgtp->xmt_abort_rsp);
 
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
-			"6316 ABTS cmpl xri x%x flg x%x (%x) "
+			"6316 ABTS cmpl oxid x%x flg x%x (%x) "
 			"WCQE: %08x %08x %08x %08x\n",
 			ctxp->oxid, ctxp->flag, released,
 			wcqe->word0, wcqe->total_data_placed,
@@ -3214,7 +3439,7 @@ aerr:
 	spin_lock_irqsave(&ctxp->ctxlock, flags);
 	if (ctxp->flag & LPFC_NVMET_CTX_RLS) {
 		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
-		list_del(&ctxp->list);
+		list_del_init(&ctxp->list);
 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		released = true;
 	}
@@ -3223,8 +3448,9 @@ aerr:
 
 	atomic_inc(&tgtp->xmt_abort_rsp_error);
 	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_ABTS,
-			"6135 Failed to Issue ABTS for oxid x%x. Status x%x\n",
-			ctxp->oxid, rc);
+			"6135 Failed to Issue ABTS for oxid x%x. Status x%x "
+			"(%x)\n",
+			ctxp->oxid, rc, released);
 	if (released)
 		lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
 	return 1;
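The hunks above change command dispatch so that, when the completion queue is under load (cqflag set), the received FCP command is deferred to a workqueue, and the I/O is dropped and aborted if queueing fails. This is not lpfc code, but a minimal userspace sketch of that pattern; the `toy_wq` type, its capacity, and `dispatch_cmd` are invented for illustration:

```c
#include <stdbool.h>

/* Toy bounded work queue standing in for the kernel workqueue. */
#define WQ_CAP 2

struct toy_wq {
	int depth;	/* jobs currently queued */
};

/* Mirrors queue_work()'s contract: returns false when the work
 * cannot be queued. */
static bool toy_queue_work(struct toy_wq *wq)
{
	if (wq->depth >= WQ_CAP)
		return false;
	wq->depth++;
	return true;
}

/* Dispatch one received command: process inline when the CQ is lightly
 * loaded (cqflag == 0), otherwise defer to the queue; when the queue
 * refuses, the caller must drop and abort the exchange (returns -1). */
int dispatch_cmd(struct toy_wq *wq, int cqflag)
{
	if (!cqflag)
		return 0;	/* processed inline */
	if (toy_queue_work(wq))
		return 1;	/* deferred to worker */
	return -1;		/* dropped: abort the exchange */
}
```

The key design point mirrored here is that `queue_work()` returning false is not fatal corruption but an overload signal, so the fallback path releases the context and aborts rather than leaving the exchange dangling.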
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
index 2f3f603d94c4..8ff67deac10a 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ b/drivers/scsi/lpfc/lpfc_nvmet.h
@@ -140,6 +140,7 @@ struct lpfc_nvmet_rcv_ctx {
 #define LPFC_NVMET_ABTS_RCV		0x10  /* ABTS received on exchange */
 #define LPFC_NVMET_CTX_REUSE_WQ		0x20  /* ctx reused via WQ */
 #define LPFC_NVMET_DEFER_WQFULL		0x40  /* Waiting on a free WQE */
+#define LPFC_NVMET_TNOTIFY		0x80  /* notify transport of abts */
 	struct rqb_dmabuf *rqb_buffer;
 	struct lpfc_nvmet_ctxbuf *ctxbuf;
 	struct lpfc_sli4_hdw_queue *hdwq;
diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
index ba996fbde89b..f9df800e7067 100644
--- a/drivers/scsi/lpfc/lpfc_scsi.c
+++ b/drivers/scsi/lpfc/lpfc_scsi.c
@@ -3879,10 +3879,8 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 	 */
 	spin_lock(&lpfc_cmd->buf_lock);
 	lpfc_cmd->cur_iocbq.iocb_flag &= ~LPFC_DRIVER_ABORTED;
-	if (lpfc_cmd->waitq) {
+	if (lpfc_cmd->waitq)
 		wake_up(lpfc_cmd->waitq);
-		lpfc_cmd->waitq = NULL;
-	}
 	spin_unlock(&lpfc_cmd->buf_lock);
 
 	lpfc_release_scsi_buf(phba, lpfc_cmd);
@@ -4718,6 +4716,9 @@ wait_for_cmpl:
 			 iocb->sli4_xritag, ret,
 			 cmnd->device->id, cmnd->device->lun);
 	}
+
+	lpfc_cmd->waitq = NULL;
+
 	spin_unlock(&lpfc_cmd->buf_lock);
 	goto out;
 
@@ -4797,7 +4798,12 @@ lpfc_check_fcp_rsp(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd)
 			 rsp_info,
 			 rsp_len, rsp_info_code);
 
-	if ((fcprsp->rspStatus2&RSP_LEN_VALID) && (rsp_len == 8)) {
+	/* If FCP_RSP_LEN_VALID bit is one, then the FCP_RSP_LEN
+	 * field specifies the number of valid bytes of FCP_RSP_INFO.
+	 * The FCP_RSP_LEN field shall be set to 0x04 or 0x08
+	 */
+	if ((fcprsp->rspStatus2 & RSP_LEN_VALID) &&
+	    ((rsp_len == 8) || (rsp_len == 4))) {
 		switch (rsp_info_code) {
 		case RSP_NO_FAILURE:
 			lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
@@ -5741,7 +5747,7 @@ lpfc_enable_oas_lun(struct lpfc_hba *phba, struct lpfc_name *vport_wwpn,
 
 	/* Create an lun info structure and add to list of luns */
 	lun_info = lpfc_create_device_data(phba, vport_wwpn, target_wwpn, lun,
-					   pri, false);
+					   pri, true);
 	if (lun_info) {
 		lun_info->oas_enabled = true;
 		lun_info->priority = pri;
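The `lpfc_check_fcp_rsp` hunk above widens the accepted FCP_RSP_LEN from exactly 8 bytes to either 4 or 8, the two lengths the FCP specification permits when FCP_RSP_LEN_VALID is set. A minimal standalone predicate capturing that check (the `RSP_LEN_VALID` value here is illustrative, not the driver's define):

```c
#include <stdbool.h>
#include <stdint.h>

#define RSP_LEN_VALID 0x01	/* illustrative bit value for the sketch */

/* FCP_RSP_INFO is only meaningful when FCP_RSP_LEN_VALID is set and the
 * advertised FCP_RSP_LEN is one of the two sizes the spec allows. */
static bool fcp_rsp_info_usable(uint8_t rsp_status2, uint32_t rsp_len)
{
	return (rsp_status2 & RSP_LEN_VALID) &&
	       (rsp_len == 4 || rsp_len == 8);
}
```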
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index 4329cc44bb55..f9e6a135d656 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -108,7 +108,7 @@ lpfc_get_iocb_from_iocbq(struct lpfc_iocbq *iocbq)
  * endianness. This function can be called with or without
  * lock.
  **/
-void
+static void
 lpfc_sli4_pcimem_bcopy(void *srcp, void *destp, uint32_t cnt)
 {
 	uint64_t *src = srcp;
@@ -5571,6 +5571,7 @@ lpfc_sli4_arm_cqeq_intr(struct lpfc_hba *phba)
 	int qidx;
 	struct lpfc_sli4_hba *sli4_hba = &phba->sli4_hba;
 	struct lpfc_sli4_hdw_queue *qp;
+	struct lpfc_queue *eq;
 
 	sli4_hba->sli4_write_cq_db(phba, sli4_hba->mbx_cq, 0, LPFC_QUEUE_REARM);
 	sli4_hba->sli4_write_cq_db(phba, sli4_hba->els_cq, 0, LPFC_QUEUE_REARM);
@@ -5578,18 +5579,24 @@ lpfc_sli4_arm_cqeq_intr(struct lpfc_hba *phba)
 	sli4_hba->sli4_write_cq_db(phba, sli4_hba->nvmels_cq, 0,
 				   LPFC_QUEUE_REARM);
 
-	qp = sli4_hba->hdwq;
 	if (sli4_hba->hdwq) {
+		/* Loop thru all Hardware Queues */
 		for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
-			sli4_hba->sli4_write_cq_db(phba, qp[qidx].fcp_cq, 0,
+			qp = &sli4_hba->hdwq[qidx];
+			/* ARM the corresponding CQ */
+			sli4_hba->sli4_write_cq_db(phba, qp->fcp_cq, 0,
						   LPFC_QUEUE_REARM);
-			sli4_hba->sli4_write_cq_db(phba, qp[qidx].nvme_cq, 0,
+			sli4_hba->sli4_write_cq_db(phba, qp->nvme_cq, 0,
						   LPFC_QUEUE_REARM);
 		}
 
-		for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++)
-			sli4_hba->sli4_write_eq_db(phba, qp[qidx].hba_eq,
-						   0, LPFC_QUEUE_REARM);
+		/* Loop thru all IRQ vectors */
+		for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
+			eq = sli4_hba->hba_eq_hdl[qidx].eq;
+			/* ARM the corresponding EQ */
+			sli4_hba->sli4_write_eq_db(phba, eq,
+						   0, LPFC_QUEUE_REARM);
+		}
 	}
 
 	if (phba->nvmet_support) {
@@ -7875,26 +7882,28 @@ lpfc_sli4_mbox_completions_pending(struct lpfc_hba *phba)
 * and will process all the completions associated with the eq for the
 * mailbox completion queue.
 **/
-bool
+static bool
 lpfc_sli4_process_missed_mbox_completions(struct lpfc_hba *phba)
 {
 	struct lpfc_sli4_hba *sli4_hba = &phba->sli4_hba;
 	uint32_t eqidx;
 	struct lpfc_queue *fpeq = NULL;
+	struct lpfc_queue *eq;
 	bool mbox_pending;
 
 	if (unlikely(!phba) || (phba->sli_rev != LPFC_SLI_REV4))
 		return false;
 
-	/* Find the eq associated with the mcq */
-
-	if (sli4_hba->hdwq)
-		for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++)
-			if (sli4_hba->hdwq[eqidx].hba_eq->queue_id ==
-			    sli4_hba->mbx_cq->assoc_qid) {
-				fpeq = sli4_hba->hdwq[eqidx].hba_eq;
+	/* Find the EQ associated with the mbox CQ */
+	if (sli4_hba->hdwq) {
+		for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++) {
+			eq = phba->sli4_hba.hba_eq_hdl[eqidx].eq;
+			if (eq->queue_id == sli4_hba->mbx_cq->assoc_qid) {
+				fpeq = eq;
 				break;
 			}
+		}
+	}
 	if (!fpeq)
 		return false;
 
@@ -13605,14 +13614,9 @@ __lpfc_sli4_process_cq(struct lpfc_hba *phba, struct lpfc_queue *cq,
 		goto rearm_and_exit;
 
 	/* Process all the entries to the CQ */
+	cq->q_flag = 0;
 	cqe = lpfc_sli4_cq_get(cq);
 	while (cqe) {
-#if defined(CONFIG_SCSI_LPFC_DEBUG_FS) && defined(BUILD_NVME)
-		if (phba->ktime_on)
-			cq->isr_timestamp = ktime_get_ns();
-		else
-			cq->isr_timestamp = 0;
-#endif
 		workposted |= handler(phba, cq, cqe);
 		__lpfc_sli4_consume_cqe(phba, cq, cqe);
 
@@ -13626,6 +13630,9 @@ __lpfc_sli4_process_cq(struct lpfc_hba *phba, struct lpfc_queue *cq,
 			consumed = 0;
 		}
 
+		if (count == LPFC_NVMET_CQ_NOTIFY)
+			cq->q_flag |= HBA_NVMET_CQ_NOTIFY;
+
 		cqe = lpfc_sli4_cq_get(cq);
 	}
 	if (count >= phba->cfg_cq_poll_threshold) {
@@ -13941,10 +13948,10 @@ lpfc_sli4_nvmet_handle_rcqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
 		goto drop;
 
 	if (fc_hdr->fh_type == FC_TYPE_FCP) {
 		dma_buf->bytes_recv = bf_get(lpfc_rcqe_length, rcqe);
 		lpfc_nvmet_unsol_fcp_event(
-			phba, idx, dma_buf,
-			cq->isr_timestamp);
+			phba, idx, dma_buf, cq->isr_timestamp,
+			cq->q_flag & HBA_NVMET_CQ_NOTIFY);
 		return false;
 	}
 drop:
@@ -14110,6 +14117,12 @@ process_cq:
 	}
 
 work_cq:
+#if defined(CONFIG_SCSI_LPFC_DEBUG_FS)
+	if (phba->ktime_on)
+		cq->isr_timestamp = ktime_get_ns();
+	else
+		cq->isr_timestamp = 0;
+#endif
 	if (!queue_work_on(cq->chann, phba->wq, &cq->irqwork))
 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
 				"0363 Cannot schedule soft IRQ "
@@ -14236,7 +14249,7 @@ lpfc_sli4_hba_intr_handler(int irq, void *dev_id)
 		return IRQ_NONE;
 
 	/* Get to the EQ struct associated with this vector */
-	fpeq = phba->sli4_hba.hdwq[hba_eqidx].hba_eq;
+	fpeq = phba->sli4_hba.hba_eq_hdl[hba_eqidx].eq;
 	if (unlikely(!fpeq))
 		return IRQ_NONE;
 
@@ -14521,7 +14534,7 @@ lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq,
 	/* set values by EQ_DELAY register if supported */
 	if (phba->sli.sli_flag & LPFC_SLI_USE_EQDR) {
 		for (qidx = startq; qidx < phba->cfg_irq_chann; qidx++) {
-			eq = phba->sli4_hba.hdwq[qidx].hba_eq;
+			eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
 			if (!eq)
 				continue;
 
@@ -14530,7 +14543,6 @@ lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq,
 			if (++cnt >= numq)
 				break;
 		}
-
 		return;
 	}
 
@@ -14558,7 +14570,7 @@ lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq,
 		dmult = LPFC_DMULT_MAX;
 
 	for (qidx = startq; qidx < phba->cfg_irq_chann; qidx++) {
-		eq = phba->sli4_hba.hdwq[qidx].hba_eq;
+		eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
 		if (!eq)
 			continue;
 		eq->q_mode = usdelay;
@@ -14660,8 +14672,10 @@ lpfc_eq_create(struct lpfc_hba *phba, struct lpfc_queue *eq, uint32_t imax)
 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
 				"0360 Unsupported EQ count. (%d)\n",
 				eq->entry_count);
-		if (eq->entry_count < 256)
-			return -EINVAL;
+		if (eq->entry_count < 256) {
+			status = -EINVAL;
+			goto out;
+		}
 		/* fall through - otherwise default to smallest count */
 	case 256:
 		bf_set(lpfc_eq_context_count, &eq_create->u.request.context,
@@ -14713,7 +14727,7 @@ lpfc_eq_create(struct lpfc_hba *phba, struct lpfc_queue *eq, uint32_t imax)
 	eq->host_index = 0;
 	eq->notify_interval = LPFC_EQ_NOTIFY_INTRVL;
 	eq->max_proc_limit = LPFC_EQ_MAX_PROC_LIMIT;
-
+out:
 	mempool_free(mbox, phba->mbox_mem_pool);
 	return status;
 }
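Several hunks above replace `hdwq[idx].hba_eq` lookups with the per-vector `hba_eq_hdl[idx].eq` handle, and `lpfc_sli4_process_missed_mbox_completions` now scans those handles for the EQ whose `queue_id` matches the mailbox CQ's `assoc_qid`. A minimal standalone model of that scan (the `toy_eq` type and `find_eq` are invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

struct toy_eq {
	uint16_t queue_id;
};

/* Scan the per-vector EQ table for the EQ servicing the given CQ
 * association id, as the reworked lookup does; NULL when no IRQ
 * vector's EQ matches. */
static struct toy_eq *find_eq(struct toy_eq *eqs, int nvec,
			      uint16_t assoc_qid)
{
	for (int i = 0; i < nvec; i++)
		if (eqs[i].queue_id == assoc_qid)
			return &eqs[i];
	return NULL;
}
```

The design point is that EQ ownership moved from the hardware-queue array to the IRQ-vector handles, so anything that needs "the EQ for vector i" indexes the vector table directly instead of going through a hardware queue.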
diff --git a/drivers/scsi/lpfc/lpfc_sli4.h b/drivers/scsi/lpfc/lpfc_sli4.h
index 8e4fd1a98023..3aeca387b22a 100644
--- a/drivers/scsi/lpfc/lpfc_sli4.h
+++ b/drivers/scsi/lpfc/lpfc_sli4.h
@@ -197,6 +197,8 @@ struct lpfc_queue {
 #define LPFC_DB_LIST_FORMAT	0x02
 	uint8_t q_flag;
 #define HBA_NVMET_WQFULL	0x1 /* We hit WQ Full condition for NVMET */
+#define HBA_NVMET_CQ_NOTIFY	0x1 /* LPFC_NVMET_CQ_NOTIFY CQEs this EQE */
+#define LPFC_NVMET_CQ_NOTIFY	4
 	void __iomem *db_regaddr;
 	uint16_t dpp_enable;
 	uint16_t dpp_id;
@@ -450,6 +452,7 @@ struct lpfc_hba_eq_hdl {
 	uint32_t idx;
 	char handler_name[LPFC_SLI4_HANDLER_NAME_SZ];
 	struct lpfc_hba *phba;
+	struct lpfc_queue *eq;
 };
 
 /*BB Credit recovery value*/
@@ -512,6 +515,7 @@ struct lpfc_pc_sli4_params {
 #define LPFC_WQ_SZ64_SUPPORT	1
 #define LPFC_WQ_SZ128_SUPPORT	2
 	uint8_t wqpcnt;
+	uint8_t nvme;
 };
 
 #define LPFC_CQ_4K_PAGE_SZ	0x1
@@ -546,7 +550,10 @@ struct lpfc_vector_map_info {
 	uint16_t	irq;
 	uint16_t	eq;
 	uint16_t	hdwq;
-	uint16_t	hyper;
+	uint16_t	flag;
+#define LPFC_CPU_MAP_HYPER	0x1
+#define LPFC_CPU_MAP_UNASSIGN	0x2
+#define LPFC_CPU_FIRST_IRQ	0x4
 };
 #define LPFC_VECTOR_MAP_EMPTY	0xffff
 
@@ -843,6 +850,8 @@ struct lpfc_sli4_hba {
 	struct list_head lpfc_nvmet_sgl_list;
 	spinlock_t abts_nvmet_buf_list_lock; /* list of aborted NVMET IOs */
 	struct list_head lpfc_abts_nvmet_ctx_list;
+	spinlock_t t_active_list_lock; /* list of active NVMET IOs */
+	struct list_head t_active_ctx_list;
 	struct list_head lpfc_nvmet_io_wait_list;
 	struct lpfc_nvmet_ctx_info *nvmet_ctx_info;
 	struct lpfc_sglq **lpfc_sglq_active_list;
diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
index 220a932fe943..f7e93aaf1e00 100644
--- a/drivers/scsi/lpfc/lpfc_version.h
+++ b/drivers/scsi/lpfc/lpfc_version.h
@@ -20,7 +20,7 @@
  * included with this package.                                     *
  *******************************************************************/
 
-#define LPFC_DRIVER_VERSION "12.2.0.2"
+#define LPFC_DRIVER_VERSION "12.2.0.3"
 #define LPFC_DRIVER_NAME		"lpfc"
 
 /* Used for SLI 2/3 */
diff --git a/drivers/scsi/mac_scsi.c b/drivers/scsi/mac_scsi.c
index dba9517d9553..9c5566217ef6 100644
--- a/drivers/scsi/mac_scsi.c
+++ b/drivers/scsi/mac_scsi.c
@@ -4,6 +4,8 @@
4 * 4 *
5 * Copyright 1998, Michael Schmitz <mschmitz@lbl.gov> 5 * Copyright 1998, Michael Schmitz <mschmitz@lbl.gov>
6 * 6 *
7 * Copyright 2019 Finn Thain
8 *
7 * derived in part from: 9 * derived in part from:
8 */ 10 */
9/* 11/*
@@ -12,6 +14,7 @@
12 * Copyright 1995, Russell King 14 * Copyright 1995, Russell King
13 */ 15 */
14 16
17#include <linux/delay.h>
15#include <linux/types.h> 18#include <linux/types.h>
16#include <linux/module.h> 19#include <linux/module.h>
17#include <linux/ioport.h> 20#include <linux/ioport.h>
@@ -22,6 +25,7 @@
22 25
23#include <asm/hwtest.h> 26#include <asm/hwtest.h>
24#include <asm/io.h> 27#include <asm/io.h>
28#include <asm/macintosh.h>
25#include <asm/macints.h> 29#include <asm/macints.h>
26#include <asm/setup.h> 30#include <asm/setup.h>
27 31
@@ -53,7 +57,7 @@ static int setup_cmd_per_lun = -1;
53module_param(setup_cmd_per_lun, int, 0); 57module_param(setup_cmd_per_lun, int, 0);
54static int setup_sg_tablesize = -1; 58static int setup_sg_tablesize = -1;
55module_param(setup_sg_tablesize, int, 0); 59module_param(setup_sg_tablesize, int, 0);
56static int setup_use_pdma = -1; 60static int setup_use_pdma = 512;
57module_param(setup_use_pdma, int, 0); 61module_param(setup_use_pdma, int, 0);
58static int setup_hostid = -1; 62static int setup_hostid = -1;
59module_param(setup_hostid, int, 0); 63module_param(setup_hostid, int, 0);
@@ -90,223 +94,318 @@ static int __init mac_scsi_setup(char *str)
90__setup("mac5380=", mac_scsi_setup); 94__setup("mac5380=", mac_scsi_setup);
91#endif /* !MODULE */ 95#endif /* !MODULE */
92 96
93/* Pseudo DMA asm originally by Ove Edlund */ 97/*
94 98 * According to "Inside Macintosh: Devices", Mac OS requires disk drivers to
95#define CP_IO_TO_MEM(s,d,n) \ 99 * specify the number of bytes between the delays expected from a SCSI target.
96__asm__ __volatile__ \ 100 * This allows the operating system to "prevent bus errors when a target fails
97 (" cmp.w #4,%2\n" \ 101 * to deliver the next byte within the processor bus error timeout period."
98 " bls 8f\n" \ 102 * Linux SCSI drivers lack knowledge of the timing behaviour of SCSI targets
99 " move.w %1,%%d0\n" \ 103 * so bus errors are unavoidable.
100 " neg.b %%d0\n" \ 104 *
101 " and.w #3,%%d0\n" \ 105 * If a MOVE.B instruction faults, we assume that zero bytes were transferred
102 " sub.w %%d0,%2\n" \ 106 * and simply retry. That assumption probably depends on target behaviour but
103 " bra 2f\n" \ 107 * seems to hold up okay. The NOP provides synchronization: without it the
104 " 1: move.b (%0),(%1)+\n" \ 108 * fault can sometimes occur after the program counter has moved past the
105 " 2: dbf %%d0,1b\n" \ 109 * offending instruction. Post-increment addressing can't be used.
106 " move.w %2,%%d0\n" \ 110 */
107 " lsr.w #5,%%d0\n" \ 111
108 " bra 4f\n" \ 112#define MOVE_BYTE(operands) \
109 " 3: move.l (%0),(%1)+\n" \ 113 asm volatile ( \
110 "31: move.l (%0),(%1)+\n" \ 114 "1: moveb " operands " \n" \
111 "32: move.l (%0),(%1)+\n" \ 115 "11: nop \n" \
112 "33: move.l (%0),(%1)+\n" \ 116 " addq #1,%0 \n" \
113 "34: move.l (%0),(%1)+\n" \ 117 " subq #1,%1 \n" \
114 "35: move.l (%0),(%1)+\n" \ 118 "40: \n" \
115 "36: move.l (%0),(%1)+\n" \ 119 " \n" \
116 "37: move.l (%0),(%1)+\n" \ 120 ".section .fixup,\"ax\" \n" \
117 " 4: dbf %%d0,3b\n" \ 121 ".even \n" \
118 " move.w %2,%%d0\n" \ 122 "90: movel #1, %2 \n" \
119 " lsr.w #2,%%d0\n" \ 123 " jra 40b \n" \
120 " and.w #7,%%d0\n" \ 124 ".previous \n" \
121 " bra 6f\n" \ 125 " \n" \
122 " 5: move.l (%0),(%1)+\n" \ 126 ".section __ex_table,\"a\" \n" \
123 " 6: dbf %%d0,5b\n" \ 127 ".align 4 \n" \
124 " and.w #3,%2\n" \ 128 ".long 1b,90b \n" \
125 " bra 8f\n" \ 129 ".long 11b,90b \n" \
126 " 7: move.b (%0),(%1)+\n" \ 130 ".previous \n" \
127 " 8: dbf %2,7b\n" \ 131 : "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
128 " moveq.l #0, %2\n" \ 132
129 " 9: \n" \ 133/*
130 ".section .fixup,\"ax\"\n" \ 134 * If a MOVE.W (or MOVE.L) instruction faults, it cannot be retried because
131 " .even\n" \ 135 * the residual byte count would be uncertain. In that situation the MOVE_WORD
132 "91: moveq.l #1, %2\n" \ 136 * macro clears n in the fixup section to abort the transfer.
133 " jra 9b\n" \ 137 */
134 "94: moveq.l #4, %2\n" \ 138
135 " jra 9b\n" \ 139#define MOVE_WORD(operands) \
136 ".previous\n" \ 140 asm volatile ( \
137 ".section __ex_table,\"a\"\n" \ 141 "1: movew " operands " \n" \
138 " .align 4\n" \ 142 "11: nop \n" \
139 " .long 1b,91b\n" \ 143 " subq #2,%1 \n" \
140 " .long 3b,94b\n" \ 144 "40: \n" \
141 " .long 31b,94b\n" \ 145 " \n" \
142 " .long 32b,94b\n" \ 146 ".section .fixup,\"ax\" \n" \
143 " .long 33b,94b\n" \ 147 ".even \n" \
144 " .long 34b,94b\n" \ 148 "90: movel #0, %1 \n" \
145 " .long 35b,94b\n" \ 149 " movel #2, %2 \n" \
146 " .long 36b,94b\n" \ 150 " jra 40b \n" \
147 " .long 37b,94b\n" \ 151 ".previous \n" \
148 " .long 5b,94b\n" \ 152 " \n" \
149 " .long 7b,91b\n" \ 153 ".section __ex_table,\"a\" \n" \
150 ".previous" \ 154 ".align 4 \n" \
151 : "=a"(s), "=a"(d), "=d"(n) \ 155 ".long 1b,90b \n" \
152 : "0"(s), "1"(d), "2"(n) \ 156 ".long 11b,90b \n" \
153 : "d0") 157 ".previous \n" \
158 : "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
159
160#define MOVE_16_WORDS(operands) \
161 asm volatile ( \
162 "1: movew " operands " \n" \
163 "2: movew " operands " \n" \
164 "3: movew " operands " \n" \
165 "4: movew " operands " \n" \
166 "5: movew " operands " \n" \
167 "6: movew " operands " \n" \
168 "7: movew " operands " \n" \
169 "8: movew " operands " \n" \
170 "9: movew " operands " \n" \
171 "10: movew " operands " \n" \
172 "11: movew " operands " \n" \
173 "12: movew " operands " \n" \
174 "13: movew " operands " \n" \
175 "14: movew " operands " \n" \
176 "15: movew " operands " \n" \
177 "16: movew " operands " \n" \
178 "17: nop \n" \
179 " subl #32,%1 \n" \
180 "40: \n" \
181 " \n" \
182 ".section .fixup,\"ax\" \n" \
183 ".even \n" \
184 "90: movel #0, %1 \n" \
185 " movel #2, %2 \n" \
186 " jra 40b \n" \
187 ".previous \n" \
188 " \n" \
189 ".section __ex_table,\"a\" \n" \
190 ".align 4 \n" \
191 ".long 1b,90b \n" \
192 ".long 2b,90b \n" \
193 ".long 3b,90b \n" \
194 ".long 4b,90b \n" \
195 ".long 5b,90b \n" \
196 ".long 6b,90b \n" \
197 ".long 7b,90b \n" \
198 ".long 8b,90b \n" \
199 ".long 9b,90b \n" \
200 ".long 10b,90b \n" \
201 ".long 11b,90b \n" \
202 ".long 12b,90b \n" \
203 ".long 13b,90b \n" \
204 ".long 14b,90b \n" \
205 ".long 15b,90b \n" \
206 ".long 16b,90b \n" \
207 ".long 17b,90b \n" \
208 ".previous \n" \
209 : "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
210
211#define MAC_PDMA_DELAY 32
212
213static inline int mac_pdma_recv(void __iomem *io, unsigned char *start, int n)
214{
215 unsigned char *addr = start;
216 int result = 0;
217
218 if (n >= 1) {
219 MOVE_BYTE("%3@,%0@");
220 if (result)
221 goto out;
222 }
223 if (n >= 1 && ((unsigned long)addr & 1)) {
224 MOVE_BYTE("%3@,%0@");
225 if (result)
226 goto out;
227 }
228 while (n >= 32)
229 MOVE_16_WORDS("%3@,%0@+");
230 while (n >= 2)
231 MOVE_WORD("%3@,%0@+");
232 if (result)
233 return start - addr; /* Negated to indicate uncertain length */
234 if (n == 1)
235 MOVE_BYTE("%3@,%0@");
236out:
237 return addr - start;
238}
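The return convention of mac_pdma_recv() (and mac_pdma_send() below) is worth spelling out: a positive value is the byte count transferred, zero means no progress, and a negative value is the negated count of bytes known to have moved before a bus fault, since the exact length past that point is uncertain. A minimal userspace model of that convention (the function name and simulated fault are illustrative, not driver code):

```c
#include <assert.h>

/* Model of the PDMA copy return convention: returns n on success, or a
 * negated known-good count when a simulated bus fault hits at index
 * fault_at (pass a negative fault_at for no fault). */
static int pdma_copy_model(unsigned char *dst, const unsigned char *src,
			   int n, int fault_at)
{
	int i;

	for (i = 0; i < n; i++) {
		if (i == fault_at)
			return -i;	/* negated: remaining length uncertain */
		dst[i] = src[i];
	}
	return n;			/* all n bytes transferred */
}
```

The callers below follow the same reading: bytes > 0 is progress, bytes == 0 triggers a udelay() and retry, bytes < 0 is treated as a bus error.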
239
240static inline int mac_pdma_send(unsigned char *start, void __iomem *io, int n)
241{
242 unsigned char *addr = start;
243 int result = 0;
244
245 if (n >= 1) {
246 MOVE_BYTE("%0@,%3@");
247 if (result)
248 goto out;
249 }
250 if (n >= 1 && ((unsigned long)addr & 1)) {
251 MOVE_BYTE("%0@,%3@");
252 if (result)
253 goto out;
254 }
255 while (n >= 32)
256 MOVE_16_WORDS("%0@+,%3@");
257 while (n >= 2)
258 MOVE_WORD("%0@+,%3@");
259 if (result)
260 return start - addr; /* Negated to indicate uncertain length */
261 if (n == 1)
262 MOVE_BYTE("%0@,%3@");
263out:
264 return addr - start;
265}
266
267/* The "SCSI DMA" chip on the IIfx implements this register. */
268#define CTRL_REG 0x8
269#define CTRL_INTERRUPTS_ENABLE BIT(1)
270#define CTRL_HANDSHAKE_MODE BIT(3)
271
272static inline void write_ctrl_reg(struct NCR5380_hostdata *hostdata, u32 value)
273{
274 out_be32(hostdata->io + (CTRL_REG << 4), value);
275}
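The `<< 4` in write_ctrl_reg() reflects the 16-byte register stride this driver uses everywhere (the same shift appears with INPUT_DATA_REG and OUTPUT_DATA_REG below), and BIT(n) is the kernel's (1UL << n). A small sketch of the resulting offsets and bit values (the base address here is made up):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)			(1UL << (n))
#define CTRL_REG		0x8
#define CTRL_INTERRUPTS_ENABLE	BIT(1)
#define CTRL_HANDSHAKE_MODE	BIT(3)

/* Registers sit 16 bytes apart, so index `reg` lives at base + reg * 16. */
static uintptr_t reg_addr(uintptr_t base, unsigned int reg)
{
	return base + (reg << 4);
}
```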
154 276
155static inline int macscsi_pread(struct NCR5380_hostdata *hostdata, 277static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
156 unsigned char *dst, int len) 278 unsigned char *dst, int len)
157{ 279{
158 u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4); 280 u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4);
159 unsigned char *d = dst; 281 unsigned char *d = dst;
160 int n = len; 282 int result = 0;
161 int transferred; 283
284 hostdata->pdma_residual = len;
162 285
163 while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG, 286 while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
164 BASR_DRQ | BASR_PHASE_MATCH, 287 BASR_DRQ | BASR_PHASE_MATCH,
165 BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) { 288 BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
166 CP_IO_TO_MEM(s, d, n); 289 int bytes;
290
291 if (macintosh_config->ident == MAC_MODEL_IIFX)
292 write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
293 CTRL_INTERRUPTS_ENABLE);
167 294
168 transferred = d - dst - n; 295 bytes = mac_pdma_recv(s, d, min(hostdata->pdma_residual, 512));
169 hostdata->pdma_residual = len - transferred;
170 296
171 /* No bus error. */ 297 if (bytes > 0) {
172 if (n == 0) 298 d += bytes;
173 return 0; 299 hostdata->pdma_residual -= bytes;
300 }
301
302 if (hostdata->pdma_residual == 0)
303 goto out;
174 304
175 /* Target changed phase early? */
176 if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, 305 if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
177 BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0) 306 BUS_AND_STATUS_REG, BASR_ACK,
178 scmd_printk(KERN_ERR, hostdata->connected, 307 BASR_ACK, HZ / 64) < 0)
308 scmd_printk(KERN_DEBUG, hostdata->connected,
179 "%s: !REQ and !ACK\n", __func__); 309 "%s: !REQ and !ACK\n", __func__);
180 if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) 310 if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
181 return 0; 311 goto out;
312
313 if (bytes == 0)
314 udelay(MAC_PDMA_DELAY);
315
316 if (bytes >= 0)
317 continue;
182 318
183 dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host, 319 dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
184 "%s: bus error (%d/%d)\n", __func__, transferred, len); 320 "%s: bus error (%d/%d)\n", __func__, d - dst, len);
185 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 321 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
186 d = dst + transferred; 322 result = -1;
187 n = len - transferred; 323 goto out;
188 } 324 }
189 325
190 scmd_printk(KERN_ERR, hostdata->connected, 326 scmd_printk(KERN_ERR, hostdata->connected,
191 "%s: phase mismatch or !DRQ\n", __func__); 327 "%s: phase mismatch or !DRQ\n", __func__);
192 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 328 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
193 return -1; 329 result = -1;
330out:
331 if (macintosh_config->ident == MAC_MODEL_IIFX)
332 write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
333 return result;
194} 334}
195 335
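Each pass of the loop above transfers at most min(pdma_residual, 512) bytes before re-checking DRQ and the phase, so a long transfer becomes a series of capped chunks with the residual shrinking each time. The chunking arithmetic in isolation (a sketch; the 512-byte cap is the only value taken from the code above):

```c
#include <assert.h>

/* Count how many fault-free 512-byte-capped iterations a transfer of
 * `len` bytes needs, mirroring the residual bookkeeping above. */
static int pdma_iterations(int len)
{
	int residual = len, iters = 0;

	while (residual > 0) {
		int chunk = residual < 512 ? residual : 512;	/* min() */

		residual -= chunk;	/* assume the whole chunk moved */
		iters++;
	}
	return iters;
}
```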
196
197#define CP_MEM_TO_IO(s,d,n) \
198__asm__ __volatile__ \
199 (" cmp.w #4,%2\n" \
200 " bls 8f\n" \
201 " move.w %0,%%d0\n" \
202 " neg.b %%d0\n" \
203 " and.w #3,%%d0\n" \
204 " sub.w %%d0,%2\n" \
205 " bra 2f\n" \
206 " 1: move.b (%0)+,(%1)\n" \
207 " 2: dbf %%d0,1b\n" \
208 " move.w %2,%%d0\n" \
209 " lsr.w #5,%%d0\n" \
210 " bra 4f\n" \
211 " 3: move.l (%0)+,(%1)\n" \
212 "31: move.l (%0)+,(%1)\n" \
213 "32: move.l (%0)+,(%1)\n" \
214 "33: move.l (%0)+,(%1)\n" \
215 "34: move.l (%0)+,(%1)\n" \
216 "35: move.l (%0)+,(%1)\n" \
217 "36: move.l (%0)+,(%1)\n" \
218 "37: move.l (%0)+,(%1)\n" \
219 " 4: dbf %%d0,3b\n" \
220 " move.w %2,%%d0\n" \
221 " lsr.w #2,%%d0\n" \
222 " and.w #7,%%d0\n" \
223 " bra 6f\n" \
224 " 5: move.l (%0)+,(%1)\n" \
225 " 6: dbf %%d0,5b\n" \
226 " and.w #3,%2\n" \
227 " bra 8f\n" \
228 " 7: move.b (%0)+,(%1)\n" \
229 " 8: dbf %2,7b\n" \
230 " moveq.l #0, %2\n" \
231 " 9: \n" \
232 ".section .fixup,\"ax\"\n" \
233 " .even\n" \
234 "91: moveq.l #1, %2\n" \
235 " jra 9b\n" \
236 "94: moveq.l #4, %2\n" \
237 " jra 9b\n" \
238 ".previous\n" \
239 ".section __ex_table,\"a\"\n" \
240 " .align 4\n" \
241 " .long 1b,91b\n" \
242 " .long 3b,94b\n" \
243 " .long 31b,94b\n" \
244 " .long 32b,94b\n" \
245 " .long 33b,94b\n" \
246 " .long 34b,94b\n" \
247 " .long 35b,94b\n" \
248 " .long 36b,94b\n" \
249 " .long 37b,94b\n" \
250 " .long 5b,94b\n" \
251 " .long 7b,91b\n" \
252 ".previous" \
253 : "=a"(s), "=a"(d), "=d"(n) \
254 : "0"(s), "1"(d), "2"(n) \
255 : "d0")
256
257static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata, 336static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
258 unsigned char *src, int len) 337 unsigned char *src, int len)
259{ 338{
260 unsigned char *s = src; 339 unsigned char *s = src;
261 u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4); 340 u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4);
262 int n = len; 341 int result = 0;
263 int transferred; 342
343 hostdata->pdma_residual = len;
264 344
265 while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG, 345 while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
266 BASR_DRQ | BASR_PHASE_MATCH, 346 BASR_DRQ | BASR_PHASE_MATCH,
267 BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) { 347 BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
268 CP_MEM_TO_IO(s, d, n); 348 int bytes;
269 349
270 transferred = s - src - n; 350 if (macintosh_config->ident == MAC_MODEL_IIFX)
271 hostdata->pdma_residual = len - transferred; 351 write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
352 CTRL_INTERRUPTS_ENABLE);
272 353
273 /* Target changed phase early? */ 354 bytes = mac_pdma_send(s, d, min(hostdata->pdma_residual, 512));
274 if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, 355
275 BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0) 356 if (bytes > 0) {
276 scmd_printk(KERN_ERR, hostdata->connected, 357 s += bytes;
277 "%s: !REQ and !ACK\n", __func__); 358 hostdata->pdma_residual -= bytes;
278 if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) 359 }
279 return 0;
280 360
281 /* No bus error. */ 361 if (hostdata->pdma_residual == 0) {
282 if (n == 0) {
283 if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG, 362 if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
284 TCR_LAST_BYTE_SENT, 363 TCR_LAST_BYTE_SENT,
285 TCR_LAST_BYTE_SENT, HZ / 64) < 0) 364 TCR_LAST_BYTE_SENT,
365 HZ / 64) < 0) {
286 scmd_printk(KERN_ERR, hostdata->connected, 366 scmd_printk(KERN_ERR, hostdata->connected,
287 "%s: Last Byte Sent timeout\n", __func__); 367 "%s: Last Byte Sent timeout\n", __func__);
288 return 0; 368 result = -1;
369 }
370 goto out;
289 } 371 }
290 372
373 if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
374 BUS_AND_STATUS_REG, BASR_ACK,
375 BASR_ACK, HZ / 64) < 0)
376 scmd_printk(KERN_DEBUG, hostdata->connected,
377 "%s: !REQ and !ACK\n", __func__);
378 if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
379 goto out;
380
381 if (bytes == 0)
382 udelay(MAC_PDMA_DELAY);
383
384 if (bytes >= 0)
385 continue;
386
291 dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host, 387 dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
292 "%s: bus error (%d/%d)\n", __func__, transferred, len); 388 "%s: bus error (%d/%d)\n", __func__, s - src, len);
293 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 389 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
294 s = src + transferred; 390 result = -1;
295 n = len - transferred; 391 goto out;
296 } 392 }
297 393
298 scmd_printk(KERN_ERR, hostdata->connected, 394 scmd_printk(KERN_ERR, hostdata->connected,
299 "%s: phase mismatch or !DRQ\n", __func__); 395 "%s: phase mismatch or !DRQ\n", __func__);
300 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 396 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
301 397 result = -1;
302 return -1; 398out:
399 if (macintosh_config->ident == MAC_MODEL_IIFX)
400 write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
401 return result;
303} 402}
304 403
305static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata, 404static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
306 struct scsi_cmnd *cmd) 405 struct scsi_cmnd *cmd)
307{ 406{
308 if (hostdata->flags & FLAG_NO_PSEUDO_DMA || 407 if (hostdata->flags & FLAG_NO_PSEUDO_DMA ||
309 cmd->SCp.this_residual < 16) 408 cmd->SCp.this_residual < setup_use_pdma)
310 return 0; 409 return 0;
311 410
312 return cmd->SCp.this_residual; 411 return cmd->SCp.this_residual;
diff --git a/drivers/scsi/megaraid/Kconfig.megaraid b/drivers/scsi/megaraid/Kconfig.megaraid
index e630e41dc843..2adc2afd9f91 100644
--- a/drivers/scsi/megaraid/Kconfig.megaraid
+++ b/drivers/scsi/megaraid/Kconfig.megaraid
@@ -79,6 +79,7 @@ config MEGARAID_LEGACY
79config MEGARAID_SAS 79config MEGARAID_SAS
80 tristate "LSI Logic MegaRAID SAS RAID Module" 80 tristate "LSI Logic MegaRAID SAS RAID Module"
81 depends on PCI && SCSI 81 depends on PCI && SCSI
82 select IRQ_POLL
82 help 83 help
83 Module for LSI Logic's SAS based RAID controllers. 84 Module for LSI Logic's SAS based RAID controllers.
84 To compile this driver as a module, choose 'm' here. 85 To compile this driver as a module, choose 'm' here.
diff --git a/drivers/scsi/megaraid/Makefile b/drivers/scsi/megaraid/Makefile
index 6e74d21227a5..12177e4cae65 100644
--- a/drivers/scsi/megaraid/Makefile
+++ b/drivers/scsi/megaraid/Makefile
@@ -3,4 +3,4 @@ obj-$(CONFIG_MEGARAID_MM) += megaraid_mm.o
3obj-$(CONFIG_MEGARAID_MAILBOX) += megaraid_mbox.o 3obj-$(CONFIG_MEGARAID_MAILBOX) += megaraid_mbox.o
4obj-$(CONFIG_MEGARAID_SAS) += megaraid_sas.o 4obj-$(CONFIG_MEGARAID_SAS) += megaraid_sas.o
5megaraid_sas-objs := megaraid_sas_base.o megaraid_sas_fusion.o \ 5megaraid_sas-objs := megaraid_sas_base.o megaraid_sas_fusion.o \
6 megaraid_sas_fp.o 6 megaraid_sas_fp.o megaraid_sas_debugfs.o
diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index fe9a785b7b6f..ca724fe91b8d 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -21,8 +21,8 @@
21/* 21/*
22 * MegaRAID SAS Driver meta data 22 * MegaRAID SAS Driver meta data
23 */ 23 */
24#define MEGASAS_VERSION "07.707.51.00-rc1" 24#define MEGASAS_VERSION "07.710.06.00-rc1"
25#define MEGASAS_RELDATE "February 7, 2019" 25#define MEGASAS_RELDATE "June 18, 2019"
26 26
27/* 27/*
28 * Device IDs 28 * Device IDs
@@ -52,6 +52,10 @@
52#define PCI_DEVICE_ID_LSI_AERO_10E2 0x10e2 52#define PCI_DEVICE_ID_LSI_AERO_10E2 0x10e2
53#define PCI_DEVICE_ID_LSI_AERO_10E5 0x10e5 53#define PCI_DEVICE_ID_LSI_AERO_10E5 0x10e5
54#define PCI_DEVICE_ID_LSI_AERO_10E6 0x10e6 54#define PCI_DEVICE_ID_LSI_AERO_10E6 0x10e6
55#define PCI_DEVICE_ID_LSI_AERO_10E0 0x10e0
56#define PCI_DEVICE_ID_LSI_AERO_10E3 0x10e3
57#define PCI_DEVICE_ID_LSI_AERO_10E4 0x10e4
58#define PCI_DEVICE_ID_LSI_AERO_10E7 0x10e7
55 59
56/* 60/*
57 * Intel HBA SSDIDs 61 * Intel HBA SSDIDs
@@ -123,6 +127,8 @@
123#define MFI_RESET_ADAPTER 0x00000002 127#define MFI_RESET_ADAPTER 0x00000002
124#define MEGAMFI_FRAME_SIZE 64 128#define MEGAMFI_FRAME_SIZE 64
125 129
130#define MFI_STATE_FAULT_CODE 0x0FFF0000
131#define MFI_STATE_FAULT_SUBCODE 0x0000FF00
126/* 132/*
127 * During FW init, clear pending cmds & reset state using inbound_msg_0 133 * During FW init, clear pending cmds & reset state using inbound_msg_0
128 * 134 *
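The two new MFI_STATE masks carve a 12-bit fault code and an 8-bit subcode out of the firmware state word. Extracting them would look like this (the shift amounts are inferred from the masks; a sketch, not driver code):

```c
#include <assert.h>
#include <stdint.h>

#define MFI_STATE_FAULT_CODE	0x0FFF0000
#define MFI_STATE_FAULT_SUBCODE	0x0000FF00

static uint32_t mfi_fault_code(uint32_t fw_state)
{
	return (fw_state & MFI_STATE_FAULT_CODE) >> 16;
}

static uint32_t mfi_fault_subcode(uint32_t fw_state)
{
	return (fw_state & MFI_STATE_FAULT_SUBCODE) >> 8;
}
```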
@@ -190,6 +196,7 @@ enum MFI_CMD_OP {
190 MFI_CMD_SMP = 0x7, 196 MFI_CMD_SMP = 0x7,
191 MFI_CMD_STP = 0x8, 197 MFI_CMD_STP = 0x8,
192 MFI_CMD_NVME = 0x9, 198 MFI_CMD_NVME = 0x9,
199 MFI_CMD_TOOLBOX = 0xa,
193 MFI_CMD_OP_COUNT, 200 MFI_CMD_OP_COUNT,
194 MFI_CMD_INVALID = 0xff 201 MFI_CMD_INVALID = 0xff
195}; 202};
@@ -1449,7 +1456,39 @@ struct megasas_ctrl_info {
1449 1456
1450 u8 reserved6[64]; 1457 u8 reserved6[64];
1451 1458
1452 u32 rsvdForAdptOp[64]; 1459 struct {
1460 #if defined(__BIG_ENDIAN_BITFIELD)
1461 u32 reserved:19;
1462 u32 support_pci_lane_margining: 1;
1463 u32 support_psoc_update:1;
1464 u32 support_force_personality_change:1;
1465 u32 support_fde_type_mix:1;
1466 u32 support_snap_dump:1;
1467 u32 support_nvme_tm:1;
1468 u32 support_oce_only:1;
1469 u32 support_ext_mfg_vpd:1;
1470 u32 support_pcie:1;
1471 u32 support_cvhealth_info:1;
1472 u32 support_profile_change:2;
1473 u32 mr_config_ext2_supported:1;
1474 #else
1475 u32 mr_config_ext2_supported:1;
1476 u32 support_profile_change:2;
1477 u32 support_cvhealth_info:1;
1478 u32 support_pcie:1;
1479 u32 support_ext_mfg_vpd:1;
1480 u32 support_oce_only:1;
1481 u32 support_nvme_tm:1;
1482 u32 support_snap_dump:1;
1483 u32 support_fde_type_mix:1;
1484 u32 support_force_personality_change:1;
1485 u32 support_psoc_update:1;
1486 u32 support_pci_lane_margining: 1;
1487 u32 reserved:19;
1488 #endif
1489 } adapter_operations5;
1490
1491 u32 rsvdForAdptOp[63];
1453 1492
1454 u8 reserved7[3]; 1493 u8 reserved7[3];
1455 1494
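Note that the hunk above shrinks rsvdForAdptOp from 64 to 63 words while adding one 32-bit bitfield word, so the total size, and therefore every later field's offset, is unchanged. A compile-time sanity check of that bookkeeping (field names copied from the little-endian arm of the diff; typical ABI bitfield packing assumed):

```c
#include <assert.h>
#include <stdint.h>

struct adapter_operations5 {
	uint32_t mr_config_ext2_supported:1;
	uint32_t support_profile_change:2;
	uint32_t support_cvhealth_info:1;
	uint32_t support_pcie:1;
	uint32_t support_ext_mfg_vpd:1;
	uint32_t support_oce_only:1;
	uint32_t support_nvme_tm:1;
	uint32_t support_snap_dump:1;
	uint32_t support_fde_type_mix:1;
	uint32_t support_force_personality_change:1;
	uint32_t support_psoc_update:1;
	uint32_t support_pci_lane_margining:1;
	uint32_t reserved:19;		/* 13 used bits + 19 reserved = 32 */
};

/* One bitfield word plus 63 reserved words == the old 64 words. */
struct adp_op_region {
	struct adapter_operations5 adapter_operations5;
	uint32_t rsvdForAdptOp[63];
};
```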
@@ -1483,7 +1522,9 @@ struct megasas_ctrl_info {
1483#define MEGASAS_FW_BUSY 1 1522#define MEGASAS_FW_BUSY 1
1484 1523
1485/* Driver's internal Logging levels*/ 1524/* Driver's internal Logging levels*/
1486#define OCR_LOGS (1 << 0) 1525#define OCR_DEBUG (1 << 0)
1526#define TM_DEBUG (1 << 1)
1527#define LD_PD_DEBUG (1 << 2)
1487 1528
1488#define SCAN_PD_CHANNEL 0x1 1529#define SCAN_PD_CHANNEL 0x1
1489#define SCAN_VD_CHANNEL 0x2 1530#define SCAN_VD_CHANNEL 0x2
@@ -1559,6 +1600,7 @@ enum FW_BOOT_CONTEXT {
1559#define MFI_IO_TIMEOUT_SECS 180 1600#define MFI_IO_TIMEOUT_SECS 180
1560#define MEGASAS_SRIOV_HEARTBEAT_INTERVAL_VF (5 * HZ) 1601#define MEGASAS_SRIOV_HEARTBEAT_INTERVAL_VF (5 * HZ)
1561#define MEGASAS_OCR_SETTLE_TIME_VF (1000 * 30) 1602#define MEGASAS_OCR_SETTLE_TIME_VF (1000 * 30)
1603#define MEGASAS_SRIOV_MAX_RESET_TRIES_VF 1
1562#define MEGASAS_ROUTINE_WAIT_TIME_VF 300 1604#define MEGASAS_ROUTINE_WAIT_TIME_VF 300
1563#define MFI_REPLY_1078_MESSAGE_INTERRUPT 0x80000000 1605#define MFI_REPLY_1078_MESSAGE_INTERRUPT 0x80000000
1564#define MFI_REPLY_GEN2_MESSAGE_INTERRUPT 0x00000001 1606#define MFI_REPLY_GEN2_MESSAGE_INTERRUPT 0x00000001
@@ -1583,7 +1625,10 @@ enum FW_BOOT_CONTEXT {
1583 1625
1584#define MR_CAN_HANDLE_SYNC_CACHE_OFFSET 0X01000000 1626#define MR_CAN_HANDLE_SYNC_CACHE_OFFSET 0X01000000
1585 1627
1628#define MR_ATOMIC_DESCRIPTOR_SUPPORT_OFFSET (1 << 24)
1629
1586#define MR_CAN_HANDLE_64_BIT_DMA_OFFSET (1 << 25) 1630#define MR_CAN_HANDLE_64_BIT_DMA_OFFSET (1 << 25)
1631#define MR_INTR_COALESCING_SUPPORT_OFFSET (1 << 26)
1587 1632
1588#define MEGASAS_WATCHDOG_THREAD_INTERVAL 1000 1633#define MEGASAS_WATCHDOG_THREAD_INTERVAL 1000
1589#define MEGASAS_WAIT_FOR_NEXT_DMA_MSECS 20 1634#define MEGASAS_WAIT_FOR_NEXT_DMA_MSECS 20
@@ -1762,7 +1807,7 @@ struct megasas_init_frame {
1762 __le32 pad_0; /*0Ch */ 1807 __le32 pad_0; /*0Ch */
1763 1808
1764 __le16 flags; /*10h */ 1809 __le16 flags; /*10h */
1765 __le16 reserved_3; /*12h */ 1810 __le16 replyqueue_mask; /*12h */
1766 __le32 data_xfer_len; /*14h */ 1811 __le32 data_xfer_len; /*14h */
1767 1812
1768 __le32 queue_info_new_phys_addr_lo; /*18h */ 1813 __le32 queue_info_new_phys_addr_lo; /*18h */
@@ -2160,6 +2205,10 @@ struct megasas_aen_event {
2160struct megasas_irq_context { 2205struct megasas_irq_context {
2161 struct megasas_instance *instance; 2206 struct megasas_instance *instance;
2162 u32 MSIxIndex; 2207 u32 MSIxIndex;
2208 u32 os_irq;
2209 struct irq_poll irqpoll;
2210 bool irq_poll_scheduled;
2211 bool irq_line_enable;
2163}; 2212};
2164 2213
2165struct MR_DRV_SYSTEM_INFO { 2214struct MR_DRV_SYSTEM_INFO {
@@ -2190,6 +2239,23 @@ enum MR_PD_TYPE {
2190#define MR_DEFAULT_NVME_MDTS_KB 128 2239#define MR_DEFAULT_NVME_MDTS_KB 128
2191#define MR_NVME_PAGE_SIZE_MASK 0x000000FF 2240#define MR_NVME_PAGE_SIZE_MASK 0x000000FF
2192 2241
2242/*Aero performance parameters*/
2243#define MR_HIGH_IOPS_QUEUE_COUNT 8
2244#define MR_DEVICE_HIGH_IOPS_DEPTH 8
2245#define MR_HIGH_IOPS_BATCH_COUNT 16
2246
2247enum MR_PERF_MODE {
2248 MR_BALANCED_PERF_MODE = 0,
2249 MR_IOPS_PERF_MODE = 1,
2250 MR_LATENCY_PERF_MODE = 2,
2251};
2252
2253#define MEGASAS_PERF_MODE_2STR(mode) \
2254 ((mode) == MR_BALANCED_PERF_MODE ? "Balanced" : \
2255 (mode) == MR_IOPS_PERF_MODE ? "IOPS" : \
2256 (mode) == MR_LATENCY_PERF_MODE ? "Latency" : \
2257 "Unknown")
2258
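MEGASAS_PERF_MODE_2STR is a chained conditional with an "Unknown" fallback for values outside the enum; reproduced standalone here (values copied from the hunk above) so the mapping can be exercised:

```c
#include <assert.h>
#include <string.h>

enum MR_PERF_MODE {
	MR_BALANCED_PERF_MODE = 0,
	MR_IOPS_PERF_MODE = 1,
	MR_LATENCY_PERF_MODE = 2,
};

#define MEGASAS_PERF_MODE_2STR(mode) \
	((mode) == MR_BALANCED_PERF_MODE ? "Balanced" : \
	 (mode) == MR_IOPS_PERF_MODE ? "IOPS" : \
	 (mode) == MR_LATENCY_PERF_MODE ? "Latency" : \
	 "Unknown")
```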
2193struct megasas_instance { 2259struct megasas_instance {
2194 2260
2195 unsigned int *reply_map; 2261 unsigned int *reply_map;
@@ -2246,6 +2312,7 @@ struct megasas_instance {
2246 u32 secure_jbod_support; 2312 u32 secure_jbod_support;
2247 u32 support_morethan256jbod; /* FW support for more than 256 PD/JBOD */ 2313 u32 support_morethan256jbod; /* FW support for more than 256 PD/JBOD */
2248 bool use_seqnum_jbod_fp; /* Added for PD sequence */ 2314 bool use_seqnum_jbod_fp; /* Added for PD sequence */
2315 bool smp_affinity_enable;
2249 spinlock_t crashdump_lock; 2316 spinlock_t crashdump_lock;
2250 2317
2251 struct megasas_register_set __iomem *reg_set; 2318 struct megasas_register_set __iomem *reg_set;
@@ -2263,6 +2330,7 @@ struct megasas_instance {
2263 u16 ldio_threshold; 2330 u16 ldio_threshold;
2264 u16 cur_can_queue; 2331 u16 cur_can_queue;
2265 u32 max_sectors_per_req; 2332 u32 max_sectors_per_req;
2333 bool msix_load_balance;
2266 struct megasas_aen_event *ev; 2334 struct megasas_aen_event *ev;
2267 2335
2268 struct megasas_cmd **cmd_list; 2336 struct megasas_cmd **cmd_list;
@@ -2290,15 +2358,13 @@ struct megasas_instance {
2290 struct pci_dev *pdev; 2358 struct pci_dev *pdev;
2291 u32 unique_id; 2359 u32 unique_id;
2292 u32 fw_support_ieee; 2360 u32 fw_support_ieee;
2361 u32 threshold_reply_count;
2293 2362
2294 atomic_t fw_outstanding; 2363 atomic_t fw_outstanding;
2295 atomic_t ldio_outstanding; 2364 atomic_t ldio_outstanding;
2296 atomic_t fw_reset_no_pci_access; 2365 atomic_t fw_reset_no_pci_access;
2297 atomic_t ieee_sgl; 2366 atomic64_t total_io_count;
2298 atomic_t prp_sgl; 2367 atomic64_t high_iops_outstanding;
2299 atomic_t sge_holes_type1;
2300 atomic_t sge_holes_type2;
2301 atomic_t sge_holes_type3;
2302 2368
2303 struct megasas_instance_template *instancet; 2369 struct megasas_instance_template *instancet;
2304 struct tasklet_struct isr_tasklet; 2370 struct tasklet_struct isr_tasklet;
@@ -2366,8 +2432,18 @@ struct megasas_instance {
2366 u8 task_abort_tmo; 2432 u8 task_abort_tmo;
2367 u8 max_reset_tmo; 2433 u8 max_reset_tmo;
2368 u8 snapdump_wait_time; 2434 u8 snapdump_wait_time;
2435#ifdef CONFIG_DEBUG_FS
2436 struct dentry *debugfs_root;
2437 struct dentry *raidmap_dump;
2438#endif
2369 u8 enable_fw_dev_list; 2439 u8 enable_fw_dev_list;
2440 bool atomic_desc_support;
2441 bool support_seqnum_jbod_fp;
2442 bool support_pci_lane_margining;
2443 u8 low_latency_index_start;
2444 int perf_mode;
2370}; 2445};
2446
2371struct MR_LD_VF_MAP { 2447struct MR_LD_VF_MAP {
2372 u32 size; 2448 u32 size;
2373 union MR_LD_REF ref; 2449 union MR_LD_REF ref;
@@ -2623,4 +2699,9 @@ void megasas_fusion_stop_watchdog(struct megasas_instance *instance);
2623void megasas_set_dma_settings(struct megasas_instance *instance, 2699void megasas_set_dma_settings(struct megasas_instance *instance,
2624 struct megasas_dcmd_frame *dcmd, 2700 struct megasas_dcmd_frame *dcmd,
2625 dma_addr_t dma_addr, u32 dma_len); 2701 dma_addr_t dma_addr, u32 dma_len);
2702int megasas_adp_reset_wait_for_ready(struct megasas_instance *instance,
2703 bool do_adp_reset,
2704 int ocr_context);
2705int megasas_irqpoll(struct irq_poll *irqpoll, int budget);
2706void megasas_dump_fusion_io(struct scsi_cmnd *scmd);
2626#endif /*LSI_MEGARAID_SAS_H */ 2707#endif /*LSI_MEGARAID_SAS_H */
diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
index 3dd1df472dc6..80ab9700f1de 100644
--- a/drivers/scsi/megaraid/megaraid_sas_base.c
+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
@@ -36,12 +36,14 @@
36#include <linux/mutex.h> 36#include <linux/mutex.h>
37#include <linux/poll.h> 37#include <linux/poll.h>
38#include <linux/vmalloc.h> 38#include <linux/vmalloc.h>
39#include <linux/irq_poll.h>
39 40
40#include <scsi/scsi.h> 41#include <scsi/scsi.h>
41#include <scsi/scsi_cmnd.h> 42#include <scsi/scsi_cmnd.h>
42#include <scsi/scsi_device.h> 43#include <scsi/scsi_device.h>
43#include <scsi/scsi_host.h> 44#include <scsi/scsi_host.h>
44#include <scsi/scsi_tcq.h> 45#include <scsi/scsi_tcq.h>
46#include <scsi/scsi_dbg.h>
45#include "megaraid_sas_fusion.h" 47#include "megaraid_sas_fusion.h"
46#include "megaraid_sas.h" 48#include "megaraid_sas.h"
47 49
@@ -50,47 +52,59 @@
50 * Will be set in megasas_init_mfi if user does not provide 52 * Will be set in megasas_init_mfi if user does not provide
51 */ 53 */
52static unsigned int max_sectors; 54static unsigned int max_sectors;
53module_param_named(max_sectors, max_sectors, int, 0); 55module_param_named(max_sectors, max_sectors, int, 0444);
54MODULE_PARM_DESC(max_sectors, 56MODULE_PARM_DESC(max_sectors,
55 "Maximum number of sectors per IO command"); 57 "Maximum number of sectors per IO command");
56 58
57static int msix_disable; 59static int msix_disable;
58module_param(msix_disable, int, S_IRUGO); 60module_param(msix_disable, int, 0444);
59MODULE_PARM_DESC(msix_disable, "Disable MSI-X interrupt handling. Default: 0"); 61MODULE_PARM_DESC(msix_disable, "Disable MSI-X interrupt handling. Default: 0");
60 62
61static unsigned int msix_vectors; 63static unsigned int msix_vectors;
62module_param(msix_vectors, int, S_IRUGO); 64module_param(msix_vectors, int, 0444);
63MODULE_PARM_DESC(msix_vectors, "MSI-X max vector count. Default: Set by FW"); 65MODULE_PARM_DESC(msix_vectors, "MSI-X max vector count. Default: Set by FW");
64 66
65static int allow_vf_ioctls; 67static int allow_vf_ioctls;
66module_param(allow_vf_ioctls, int, S_IRUGO); 68module_param(allow_vf_ioctls, int, 0444);
67MODULE_PARM_DESC(allow_vf_ioctls, "Allow ioctls in SR-IOV VF mode. Default: 0"); 69MODULE_PARM_DESC(allow_vf_ioctls, "Allow ioctls in SR-IOV VF mode. Default: 0");
68 70
69static unsigned int throttlequeuedepth = MEGASAS_THROTTLE_QUEUE_DEPTH; 71static unsigned int throttlequeuedepth = MEGASAS_THROTTLE_QUEUE_DEPTH;
70module_param(throttlequeuedepth, int, S_IRUGO); 72module_param(throttlequeuedepth, int, 0444);
71MODULE_PARM_DESC(throttlequeuedepth, 73MODULE_PARM_DESC(throttlequeuedepth,
72 "Adapter queue depth when throttled due to I/O timeout. Default: 16"); 74 "Adapter queue depth when throttled due to I/O timeout. Default: 16");
73 75
74unsigned int resetwaittime = MEGASAS_RESET_WAIT_TIME; 76unsigned int resetwaittime = MEGASAS_RESET_WAIT_TIME;
75module_param(resetwaittime, int, S_IRUGO); 77module_param(resetwaittime, int, 0444);
76MODULE_PARM_DESC(resetwaittime, "Wait time in (1-180s) after I/O timeout before resetting adapter. Default: 180s"); 78MODULE_PARM_DESC(resetwaittime, "Wait time in (1-180s) after I/O timeout before resetting adapter. Default: 180s");
77 79
78int smp_affinity_enable = 1; 80int smp_affinity_enable = 1;
79module_param(smp_affinity_enable, int, S_IRUGO); 81module_param(smp_affinity_enable, int, 0444);
80MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Default: enable(1)"); 82MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Default: enable(1)");
81 83
82int rdpq_enable = 1; 84int rdpq_enable = 1;
83module_param(rdpq_enable, int, S_IRUGO); 85module_param(rdpq_enable, int, 0444);
84MODULE_PARM_DESC(rdpq_enable, "Allocate reply queue in chunks for large queue depth enable/disable Default: enable(1)"); 86MODULE_PARM_DESC(rdpq_enable, "Allocate reply queue in chunks for large queue depth enable/disable Default: enable(1)");
85 87
86unsigned int dual_qdepth_disable; 88unsigned int dual_qdepth_disable;
87module_param(dual_qdepth_disable, int, S_IRUGO); 89module_param(dual_qdepth_disable, int, 0444);
88MODULE_PARM_DESC(dual_qdepth_disable, "Disable dual queue depth feature. Default: 0"); 90MODULE_PARM_DESC(dual_qdepth_disable, "Disable dual queue depth feature. Default: 0");
89 91
90unsigned int scmd_timeout = MEGASAS_DEFAULT_CMD_TIMEOUT; 92unsigned int scmd_timeout = MEGASAS_DEFAULT_CMD_TIMEOUT;
91module_param(scmd_timeout, int, S_IRUGO); 93module_param(scmd_timeout, int, 0444);
92MODULE_PARM_DESC(scmd_timeout, "scsi command timeout (10-90s), default 90s. See megasas_reset_timer."); 94MODULE_PARM_DESC(scmd_timeout, "scsi command timeout (10-90s), default 90s. See megasas_reset_timer.");
93 95
96int perf_mode = -1;
97module_param(perf_mode, int, 0444);
98MODULE_PARM_DESC(perf_mode, "Performance mode (only for Aero adapters), options:\n\t\t"
99 "0 - balanced: High iops and low latency queues are allocated &\n\t\t"
100 "interrupt coalescing is enabled only on high iops queues\n\t\t"
101 "1 - iops: High iops queues are not allocated &\n\t\t"
102 "interrupt coalescing is enabled on all queues\n\t\t"
103 "2 - latency: High iops queues are not allocated &\n\t\t"
104 "interrupt coalescing is disabled on all queues\n\t\t"
105 "default mode is 'balanced'"
106 );
107
94MODULE_LICENSE("GPL"); 108MODULE_LICENSE("GPL");
95MODULE_VERSION(MEGASAS_VERSION); 109MODULE_VERSION(MEGASAS_VERSION);
96MODULE_AUTHOR("megaraidlinux.pdl@broadcom.com"); 110MODULE_AUTHOR("megaraidlinux.pdl@broadcom.com");
@@ -154,6 +168,10 @@ static struct pci_device_id megasas_pci_table[] = {
154 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E2)}, 168 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E2)},
155 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E5)}, 169 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E5)},
156 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E6)}, 170 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E6)},
171 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E0)},
172 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E3)},
173 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E4)},
174 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E7)},
157 {} 175 {}
158}; 176};
159 177
@@ -170,10 +188,17 @@ static u32 support_poll_for_event;
170u32 megasas_dbg_lvl; 188u32 megasas_dbg_lvl;
171static u32 support_device_change; 189static u32 support_device_change;
172static bool support_nvme_encapsulation; 190static bool support_nvme_encapsulation;
191static bool support_pci_lane_margining;
173 192
174/* define lock for aen poll */ 193/* define lock for aen poll */
175spinlock_t poll_aen_lock; 194spinlock_t poll_aen_lock;
176 195
196extern struct dentry *megasas_debugfs_root;
197extern void megasas_init_debugfs(void);
198extern void megasas_exit_debugfs(void);
199extern void megasas_setup_debugfs(struct megasas_instance *instance);
200extern void megasas_destroy_debugfs(struct megasas_instance *instance);
201
177void 202void
178megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd, 203megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
179 u8 alt_status); 204 u8 alt_status);
@@ -1098,8 +1123,9 @@ megasas_issue_blocked_cmd(struct megasas_instance *instance,
1098 ret = wait_event_timeout(instance->int_cmd_wait_q, 1123 ret = wait_event_timeout(instance->int_cmd_wait_q,
1099 cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ); 1124 cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ);
1100 if (!ret) { 1125 if (!ret) {
1101 dev_err(&instance->pdev->dev, "Failed from %s %d DCMD Timed out\n", 1126 dev_err(&instance->pdev->dev,
1102 __func__, __LINE__); 1127 "DCMD(opcode: 0x%x) is timed out, func:%s\n",
1128 cmd->frame->dcmd.opcode, __func__);
1103 return DCMD_TIMEOUT; 1129 return DCMD_TIMEOUT;
1104 } 1130 }
1105 } else 1131 } else
@@ -1128,6 +1154,7 @@ megasas_issue_blocked_abort_cmd(struct megasas_instance *instance,
1128 struct megasas_cmd *cmd; 1154 struct megasas_cmd *cmd;
1129 struct megasas_abort_frame *abort_fr; 1155 struct megasas_abort_frame *abort_fr;
1130 int ret = 0; 1156 int ret = 0;
1157 u32 opcode;
1131 1158
1132 cmd = megasas_get_cmd(instance); 1159 cmd = megasas_get_cmd(instance);
1133 1160
@@ -1163,8 +1190,10 @@ megasas_issue_blocked_abort_cmd(struct megasas_instance *instance,
1163 ret = wait_event_timeout(instance->abort_cmd_wait_q, 1190 ret = wait_event_timeout(instance->abort_cmd_wait_q,
1164 cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ); 1191 cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ);
1165 if (!ret) { 1192 if (!ret) {
1166 dev_err(&instance->pdev->dev, "Failed from %s %d Abort Timed out\n", 1193 opcode = cmd_to_abort->frame->dcmd.opcode;
1167 __func__, __LINE__); 1194 dev_err(&instance->pdev->dev,
1195 "Abort(to be aborted DCMD opcode: 0x%x) is timed out func:%s\n",
1196 opcode, __func__);
1168 return DCMD_TIMEOUT; 1197 return DCMD_TIMEOUT;
1169 } 1198 }
1170 } else 1199 } else
@@ -1918,7 +1947,6 @@ megasas_set_nvme_device_properties(struct scsi_device *sdev, u32 max_io_size)
1918static void megasas_set_static_target_properties(struct scsi_device *sdev, 1947static void megasas_set_static_target_properties(struct scsi_device *sdev,
1919 bool is_target_prop) 1948 bool is_target_prop)
1920{ 1949{
1921 u16 target_index = 0;
1922 u8 interface_type; 1950 u8 interface_type;
1923 u32 device_qd = MEGASAS_DEFAULT_CMD_PER_LUN; 1951 u32 device_qd = MEGASAS_DEFAULT_CMD_PER_LUN;
1924 u32 max_io_size_kb = MR_DEFAULT_NVME_MDTS_KB; 1952 u32 max_io_size_kb = MR_DEFAULT_NVME_MDTS_KB;
@@ -1935,8 +1963,6 @@ static void megasas_set_static_target_properties(struct scsi_device *sdev,
1935 */ 1963 */
1936 blk_queue_rq_timeout(sdev->request_queue, scmd_timeout * HZ); 1964 blk_queue_rq_timeout(sdev->request_queue, scmd_timeout * HZ);
1937 1965
1938 target_index = (sdev->channel * MEGASAS_MAX_DEV_PER_CHANNEL) + sdev->id;
1939
1940 switch (interface_type) { 1966 switch (interface_type) {
1941 case SAS_PD: 1967 case SAS_PD:
1942 device_qd = MEGASAS_SAS_QD; 1968 device_qd = MEGASAS_SAS_QD;
@@ -2822,21 +2848,108 @@ blk_eh_timer_return megasas_reset_timer(struct scsi_cmnd *scmd)
2822} 2848}
2823 2849
2824/** 2850/**
2825 * megasas_dump_frame - This function will dump MPT/MFI frame 2851 * megasas_dump - This function will print hexdump of provided buffer.
2852 * @buf: Buffer to be dumped
2853 * @sz: Size in bytes
2854 * @format: Different formats of dumping e.g. format=n will
2855 * cause only 'n' 32 bit words to be dumped in a single
2856 * line.
2826 */ 2857 */
2827static inline void 2858inline void
2828megasas_dump_frame(void *mpi_request, int sz) 2859megasas_dump(void *buf, int sz, int format)
2829{ 2860{
2830 int i; 2861 int i;
2831 __le32 *mfp = (__le32 *)mpi_request; 2862 __le32 *buf_loc = (__le32 *)buf;
2863
2864 for (i = 0; i < (sz / sizeof(__le32)); i++) {
2865 if ((i % format) == 0) {
2866 if (i != 0)
2867 printk(KERN_CONT "\n");
2868 printk(KERN_CONT "%08x: ", (i * 4));
2869 }
2870 printk(KERN_CONT "%08x ", le32_to_cpu(buf_loc[i]));
2871 }
2872 printk(KERN_CONT "\n");
2873}
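megasas_dump() groups the buffer into `format` 32-bit words per line, each line prefixed with its byte offset. A userspace sketch that writes into a string instead of printk (and omits the kernel's le32_to_cpu byte-swap) so the layout can be checked; the caller supplies a sufficiently large output buffer:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Format `sz` bytes as 32-bit words, `format` words per line, each line
 * prefixed with its starting byte offset, into `out`. */
static void dump_words(const uint32_t *w, int sz, int format,
		       char *out, size_t outlen)
{
	int i, nwords = sz / (int)sizeof(uint32_t);
	size_t used = 0;

	for (i = 0; i < nwords; i++) {
		if ((i % format) == 0)
			used += snprintf(out + used, outlen - used,
					 "%s%08x: ", i ? "\n" : "", i * 4);
		used += snprintf(out + used, outlen - used, "%08x ", w[i]);
	}
	snprintf(out + used, outlen - used, "\n");
}
```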
2874
+/**
+ * megasas_dump_reg_set -	This function will print hexdump of register set
+ * @reg_set:			Register set to be dumped
+ */
+inline void
+megasas_dump_reg_set(void __iomem *reg_set)
+{
+	unsigned int i, sz = 256;
+	u32 __iomem *reg = (u32 __iomem *)reg_set;
+
+	for (i = 0; i < (sz / sizeof(u32)); i++)
+		printk("%08x: %08x\n", (i * 4), readl(&reg[i]));
+}
+
+/**
+ * megasas_dump_fusion_io -	This function will print key details
+ *				of SCSI IO
+ * @scmd:			SCSI command pointer of SCSI IO
+ */
+void
+megasas_dump_fusion_io(struct scsi_cmnd *scmd)
+{
+	struct megasas_cmd_fusion *cmd;
+	union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc;
+	struct megasas_instance *instance;
+
+	cmd = (struct megasas_cmd_fusion *)scmd->SCp.ptr;
+	instance = (struct megasas_instance *)scmd->device->host->hostdata;
+
+	scmd_printk(KERN_INFO, scmd,
+		    "scmd: (0x%p) retries: 0x%x allowed: 0x%x\n",
+		    scmd, scmd->retries, scmd->allowed);
+	scsi_print_command(scmd);
+
+	if (cmd) {
+		req_desc = (union MEGASAS_REQUEST_DESCRIPTOR_UNION *)cmd->request_desc;
+		scmd_printk(KERN_INFO, scmd, "Request descriptor details:\n");
+		scmd_printk(KERN_INFO, scmd,
+			    "RequestFlags:0x%x MSIxIndex:0x%x SMID:0x%x LMID:0x%x DevHandle:0x%x\n",
+			    req_desc->SCSIIO.RequestFlags,
+			    req_desc->SCSIIO.MSIxIndex, req_desc->SCSIIO.SMID,
+			    req_desc->SCSIIO.LMID, req_desc->SCSIIO.DevHandle);
+
+		printk(KERN_INFO "IO request frame:\n");
+		megasas_dump(cmd->io_request,
+			     MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE, 8);
+		printk(KERN_INFO "Chain frame:\n");
+		megasas_dump(cmd->sg_frame,
+			     instance->max_chain_frame_sz, 8);
+	}
+
+}
+
+/*
+ * megasas_dump_sys_regs - This function will dump system registers through
+ *			    sysfs.
+ * @reg_set:		    Pointer to System register set.
+ * @buf:		    Buffer to which output is to be written.
+ * @return:		    Number of bytes written to buffer.
+ */
+static inline ssize_t
+megasas_dump_sys_regs(void __iomem *reg_set, char *buf)
+{
+	unsigned int i, sz = 256;
+	int bytes_wrote = 0;
+	char *loc = (char *)buf;
+	u32 __iomem *reg = (u32 __iomem *)reg_set;
 
-	printk(KERN_INFO "IO request frame:\n\t");
-	for (i = 0; i < sz / sizeof(__le32); i++) {
-		if (i && ((i % 8) == 0))
-			printk("\n\t");
-		printk("%08x ", le32_to_cpu(mfp[i]));
-	}
-	printk("\n");
+	for (i = 0; i < sz / sizeof(u32); i++) {
+		bytes_wrote += snprintf(loc + bytes_wrote, PAGE_SIZE,
+					"%08x: %08x\n", (i * 4),
+					readl(&reg[i]));
+	}
+	return bytes_wrote;
 }
 
 /**
@@ -2850,24 +2963,20 @@ static int megasas_reset_bus_host(struct scsi_cmnd *scmd)
 	instance = (struct megasas_instance *)scmd->device->host->hostdata;
 
 	scmd_printk(KERN_INFO, scmd,
-		"Controller reset is requested due to IO timeout\n"
-		"SCSI command pointer: (%p)\t SCSI host state: %d\t"
-		" SCSI host busy: %d\t FW outstanding: %d\n",
-		scmd, scmd->device->host->shost_state,
+		    "OCR is requested due to IO timeout!!\n");
+
+	scmd_printk(KERN_INFO, scmd,
+		    "SCSI host state: %d SCSI host busy: %d FW outstanding: %d\n",
+		    scmd->device->host->shost_state,
 		    scsi_host_busy(scmd->device->host),
 		    atomic_read(&instance->fw_outstanding));
-
 	/*
 	 * First wait for all commands to complete
 	 */
 	if (instance->adapter_type == MFI_SERIES) {
 		ret = megasas_generic_reset(scmd);
 	} else {
-		struct megasas_cmd_fusion *cmd;
-		cmd = (struct megasas_cmd_fusion *)scmd->SCp.ptr;
-		if (cmd)
-			megasas_dump_frame(cmd->io_request,
-				MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE);
+		megasas_dump_fusion_io(scmd);
 		ret = megasas_reset_fusion(scmd->device->host,
-				SCSIIO_TIMEOUT_OCR);
+					   SCSIIO_TIMEOUT_OCR);
 	}
@@ -3017,7 +3126,7 @@ megasas_service_aen(struct megasas_instance *instance, struct megasas_cmd *cmd)
 }
 
 static ssize_t
-megasas_fw_crash_buffer_store(struct device *cdev,
+fw_crash_buffer_store(struct device *cdev,
 	struct device_attribute *attr, const char *buf, size_t count)
 {
 	struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3036,14 +3145,13 @@ megasas_fw_crash_buffer_store(struct device *cdev,
 }
 
 static ssize_t
-megasas_fw_crash_buffer_show(struct device *cdev,
+fw_crash_buffer_show(struct device *cdev,
 	struct device_attribute *attr, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(cdev);
 	struct megasas_instance *instance =
 		(struct megasas_instance *) shost->hostdata;
 	u32 size;
-	unsigned long buff_addr;
 	unsigned long dmachunk = CRASH_DMA_BUF_SIZE;
 	unsigned long src_addr;
 	unsigned long flags;
@@ -3060,8 +3168,6 @@ megasas_fw_crash_buffer_show(struct device *cdev,
 		return -EINVAL;
 	}
 
-	buff_addr = (unsigned long) buf;
-
 	if (buff_offset > (instance->fw_crash_buffer_size * dmachunk)) {
 		dev_err(&instance->pdev->dev,
 			"Firmware crash dump offset is out of range\n");
@@ -3081,7 +3187,7 @@ megasas_fw_crash_buffer_show(struct device *cdev,
 }
 
 static ssize_t
-megasas_fw_crash_buffer_size_show(struct device *cdev,
+fw_crash_buffer_size_show(struct device *cdev,
 	struct device_attribute *attr, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3093,7 +3199,7 @@ megasas_fw_crash_buffer_size_show(struct device *cdev,
 }
 
 static ssize_t
-megasas_fw_crash_state_store(struct device *cdev,
+fw_crash_state_store(struct device *cdev,
 	struct device_attribute *attr, const char *buf, size_t count)
 {
 	struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3128,7 +3234,7 @@ megasas_fw_crash_state_store(struct device *cdev,
 }
 
 static ssize_t
-megasas_fw_crash_state_show(struct device *cdev,
+fw_crash_state_show(struct device *cdev,
 	struct device_attribute *attr, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3139,14 +3245,14 @@ megasas_fw_crash_state_show(struct device *cdev,
 }
 
 static ssize_t
-megasas_page_size_show(struct device *cdev,
+page_size_show(struct device *cdev,
 	struct device_attribute *attr, char *buf)
 {
 	return snprintf(buf, PAGE_SIZE, "%ld\n", (unsigned long)PAGE_SIZE - 1);
 }
 
 static ssize_t
-megasas_ldio_outstanding_show(struct device *cdev, struct device_attribute *attr,
+ldio_outstanding_show(struct device *cdev, struct device_attribute *attr,
 	char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3156,7 +3262,7 @@ megasas_ldio_outstanding_show(struct device *cdev, struct device_attribute *attr
 }
 
 static ssize_t
-megasas_fw_cmds_outstanding_show(struct device *cdev,
+fw_cmds_outstanding_show(struct device *cdev,
 	struct device_attribute *attr, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3165,18 +3271,37 @@ megasas_fw_cmds_outstanding_show(struct device *cdev,
 	return snprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&instance->fw_outstanding));
 }
 
-static DEVICE_ATTR(fw_crash_buffer, S_IRUGO | S_IWUSR,
-	megasas_fw_crash_buffer_show, megasas_fw_crash_buffer_store);
-static DEVICE_ATTR(fw_crash_buffer_size, S_IRUGO,
-	megasas_fw_crash_buffer_size_show, NULL);
-static DEVICE_ATTR(fw_crash_state, S_IRUGO | S_IWUSR,
-	megasas_fw_crash_state_show, megasas_fw_crash_state_store);
-static DEVICE_ATTR(page_size, S_IRUGO,
-	megasas_page_size_show, NULL);
-static DEVICE_ATTR(ldio_outstanding, S_IRUGO,
-	megasas_ldio_outstanding_show, NULL);
-static DEVICE_ATTR(fw_cmds_outstanding, S_IRUGO,
-	megasas_fw_cmds_outstanding_show, NULL);
+static ssize_t
+dump_system_regs_show(struct device *cdev,
+		      struct device_attribute *attr, char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct megasas_instance *instance =
+		(struct megasas_instance *)shost->hostdata;
+
+	return megasas_dump_sys_regs(instance->reg_set, buf);
+}
+
+static ssize_t
+raid_map_id_show(struct device *cdev, struct device_attribute *attr,
+		 char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct megasas_instance *instance =
+		(struct megasas_instance *)shost->hostdata;
+
+	return snprintf(buf, PAGE_SIZE, "%ld\n",
+			(unsigned long)instance->map_id);
+}
+
+static DEVICE_ATTR_RW(fw_crash_buffer);
+static DEVICE_ATTR_RO(fw_crash_buffer_size);
+static DEVICE_ATTR_RW(fw_crash_state);
+static DEVICE_ATTR_RO(page_size);
+static DEVICE_ATTR_RO(ldio_outstanding);
+static DEVICE_ATTR_RO(fw_cmds_outstanding);
+static DEVICE_ATTR_RO(dump_system_regs);
+static DEVICE_ATTR_RO(raid_map_id);
 
 struct device_attribute *megaraid_host_attrs[] = {
 	&dev_attr_fw_crash_buffer_size,
@@ -3185,6 +3310,8 @@ struct device_attribute *megaraid_host_attrs[] = {
 	&dev_attr_page_size,
 	&dev_attr_ldio_outstanding,
 	&dev_attr_fw_cmds_outstanding,
+	&dev_attr_dump_system_regs,
+	&dev_attr_raid_map_id,
 	NULL,
 };
 
@@ -3368,6 +3495,7 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
 	case MFI_CMD_SMP:
 	case MFI_CMD_STP:
 	case MFI_CMD_NVME:
+	case MFI_CMD_TOOLBOX:
 		megasas_complete_int_cmd(instance, cmd);
 		break;
 
@@ -3776,7 +3904,6 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
 	int i;
 	u8 max_wait;
 	u32 fw_state;
-	u32 cur_state;
 	u32 abs_state, curr_abs_state;
 
 	abs_state = instance->instancet->read_fw_status_reg(instance);
@@ -3791,13 +3918,18 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
 	switch (fw_state) {
 
 	case MFI_STATE_FAULT:
-		dev_printk(KERN_DEBUG, &instance->pdev->dev, "FW in FAULT state!!\n");
+		dev_printk(KERN_ERR, &instance->pdev->dev,
+			   "FW in FAULT state, Fault code:0x%x subcode:0x%x func:%s\n",
+			   abs_state & MFI_STATE_FAULT_CODE,
+			   abs_state & MFI_STATE_FAULT_SUBCODE, __func__);
 		if (ocr) {
 			max_wait = MEGASAS_RESET_WAIT_TIME;
-			cur_state = MFI_STATE_FAULT;
 			break;
-		} else
+		} else {
+			dev_printk(KERN_DEBUG, &instance->pdev->dev, "System Register set:\n");
+			megasas_dump_reg_set(instance->reg_set);
 			return -ENODEV;
+		}
 
 	case MFI_STATE_WAIT_HANDSHAKE:
 		/*
@@ -3817,7 +3949,6 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
 			       &instance->reg_set->inbound_doorbell);
 
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_WAIT_HANDSHAKE;
 		break;
 
 	case MFI_STATE_BOOT_MESSAGE_PENDING:
@@ -3833,7 +3964,6 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
 			       &instance->reg_set->inbound_doorbell);
 
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_BOOT_MESSAGE_PENDING;
 		break;
 
 	case MFI_STATE_OPERATIONAL:
@@ -3866,7 +3996,6 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
 			       &instance->reg_set->inbound_doorbell);
 
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_OPERATIONAL;
 		break;
 
 	case MFI_STATE_UNDEFINED:
@@ -3874,37 +4003,33 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
 		/*
 		 * This state should not last for more than 2 seconds
 		 */
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_UNDEFINED;
 		break;
 
 	case MFI_STATE_BB_INIT:
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_BB_INIT;
 		break;
 
 	case MFI_STATE_FW_INIT:
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_FW_INIT;
 		break;
 
 	case MFI_STATE_FW_INIT_2:
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_FW_INIT_2;
 		break;
 
 	case MFI_STATE_DEVICE_SCAN:
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_DEVICE_SCAN;
 		break;
 
 	case MFI_STATE_FLUSH_CACHE:
 		max_wait = MEGASAS_RESET_WAIT_TIME;
-		cur_state = MFI_STATE_FLUSH_CACHE;
 		break;
 
 	default:
 		dev_printk(KERN_DEBUG, &instance->pdev->dev, "Unknown state 0x%x\n",
 			   fw_state);
+		dev_printk(KERN_DEBUG, &instance->pdev->dev, "System Register set:\n");
+		megasas_dump_reg_set(instance->reg_set);
 		return -ENODEV;
 	}
 
@@ -3927,6 +4052,8 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
 		if (curr_abs_state == abs_state) {
 			dev_printk(KERN_DEBUG, &instance->pdev->dev, "FW state [%d] hasn't changed "
 				   "in %d secs\n", fw_state, max_wait);
+			dev_printk(KERN_DEBUG, &instance->pdev->dev, "System Register set:\n");
+			megasas_dump_reg_set(instance->reg_set);
 			return -ENODEV;
 		}
 
@@ -3990,23 +4117,12 @@ static int megasas_create_frame_pool(struct megasas_instance *instance)
 {
 	int i;
 	u16 max_cmd;
-	u32 sge_sz;
 	u32 frame_count;
 	struct megasas_cmd *cmd;
 
 	max_cmd = instance->max_mfi_cmds;
 
 	/*
-	 * Size of our frame is 64 bytes for MFI frame, followed by max SG
-	 * elements and finally SCSI_SENSE_BUFFERSIZE bytes for sense buffer
-	 */
-	sge_sz = (IS_DMA64) ? sizeof(struct megasas_sge64) :
-			sizeof(struct megasas_sge32);
-
-	if (instance->flag_ieee)
-		sge_sz = sizeof(struct megasas_sge_skinny);
-
-	/*
 	 * For MFI controllers.
 	 * max_num_sge = 60
 	 * max_sge_sz = 16 byte (sizeof megasas_sge_skinny)
@@ -4255,8 +4371,10 @@ megasas_get_pd_info(struct megasas_instance *instance, struct scsi_device *sdev)
 		switch (dcmd_timeout_ocr_possible(instance)) {
 		case INITIATE_OCR:
 			cmd->flags |= DRV_DCMD_SKIP_REFIRE;
+			mutex_unlock(&instance->reset_mutex);
 			megasas_reset_fusion(instance->host,
 					     MFI_IO_TIMEOUT_OCR);
+			mutex_lock(&instance->reset_mutex);
 			break;
 		case KILL_ADAPTER:
 			megaraid_sas_kill_hba(instance);
@@ -4292,7 +4410,6 @@ megasas_get_pd_list(struct megasas_instance *instance)
 	struct megasas_dcmd_frame *dcmd;
 	struct MR_PD_LIST *ci;
 	struct MR_PD_ADDRESS *pd_addr;
-	dma_addr_t ci_h = 0;
 
 	if (instance->pd_list_not_supported) {
 		dev_info(&instance->pdev->dev, "MR_DCMD_PD_LIST_QUERY "
@@ -4301,7 +4418,6 @@ megasas_get_pd_list(struct megasas_instance *instance)
 	}
 
 	ci = instance->pd_list_buf;
-	ci_h = instance->pd_list_buf_h;
 
 	cmd = megasas_get_cmd(instance);
 
@@ -4374,6 +4490,9 @@ megasas_get_pd_list(struct megasas_instance *instance)
 
 	case DCMD_SUCCESS:
 		pd_addr = ci->addr;
+		if (megasas_dbg_lvl & LD_PD_DEBUG)
+			dev_info(&instance->pdev->dev, "%s, sysPD count: 0x%x\n",
+				 __func__, le32_to_cpu(ci->count));
 
 		if ((le32_to_cpu(ci->count) >
 			(MEGASAS_MAX_PD_CHANNELS * MEGASAS_MAX_DEV_PER_CHANNEL)))
@@ -4389,6 +4508,11 @@ megasas_get_pd_list(struct megasas_instance *instance)
 				pd_addr->scsiDevType;
 			instance->local_pd_list[le16_to_cpu(pd_addr->deviceId)].driveState	=
 				MR_PD_STATE_SYSTEM;
+			if (megasas_dbg_lvl & LD_PD_DEBUG)
+				dev_info(&instance->pdev->dev,
+					 "PD%d: targetID: 0x%03x deviceType:0x%x\n",
+					 pd_index, le16_to_cpu(pd_addr->deviceId),
+					 pd_addr->scsiDevType);
 			pd_addr++;
 		}
 
@@ -4492,6 +4616,10 @@ megasas_get_ld_list(struct megasas_instance *instance)
 		break;
 
 	case DCMD_SUCCESS:
+		if (megasas_dbg_lvl & LD_PD_DEBUG)
+			dev_info(&instance->pdev->dev, "%s, LD count: 0x%x\n",
+				 __func__, ld_count);
+
 		if (ld_count > instance->fw_supported_vd_count)
 			break;
 
@@ -4501,6 +4629,10 @@ megasas_get_ld_list(struct megasas_instance *instance)
 		if (ci->ldList[ld_index].state != 0) {
 			ids = ci->ldList[ld_index].ref.targetId;
 			instance->ld_ids[ids] = ci->ldList[ld_index].ref.targetId;
+			if (megasas_dbg_lvl & LD_PD_DEBUG)
+				dev_info(&instance->pdev->dev,
+					 "LD%d: targetID: 0x%03x\n",
+					 ld_index, ids);
 		}
 	}
 
@@ -4604,6 +4736,10 @@ megasas_ld_list_query(struct megasas_instance *instance, u8 query_type)
 	case DCMD_SUCCESS:
 		tgtid_count = le32_to_cpu(ci->count);
 
+		if (megasas_dbg_lvl & LD_PD_DEBUG)
+			dev_info(&instance->pdev->dev, "%s, LD count: 0x%x\n",
+				 __func__, tgtid_count);
+
 		if ((tgtid_count > (instance->fw_supported_vd_count)))
 			break;
 
@@ -4611,6 +4747,9 @@ megasas_ld_list_query(struct megasas_instance *instance, u8 query_type)
 		for (ld_index = 0; ld_index < tgtid_count; ld_index++) {
 			ids = ci->targetId[ld_index];
 			instance->ld_ids[ids] = ci->targetId[ld_index];
+			if (megasas_dbg_lvl & LD_PD_DEBUG)
+				dev_info(&instance->pdev->dev, "LD%d: targetID: 0x%03x\n",
+					 ld_index, ci->targetId[ld_index]);
 		}
 
 		break;
@@ -4690,6 +4829,13 @@ megasas_host_device_list_query(struct megasas_instance *instance,
 		 */
 		count = le32_to_cpu(ci->count);
 
+		if (count > (MEGASAS_MAX_PD + MAX_LOGICAL_DRIVES_EXT))
+			break;
+
+		if (megasas_dbg_lvl & LD_PD_DEBUG)
+			dev_info(&instance->pdev->dev, "%s, Device count: 0x%x\n",
+				 __func__, count);
+
 		memset(instance->local_pd_list, 0,
 		       MEGASAS_MAX_PD * sizeof(struct megasas_pd_list));
 		memset(instance->ld_ids, 0xff, MAX_LOGICAL_DRIVES_EXT);
@@ -4701,8 +4847,16 @@ megasas_host_device_list_query(struct megasas_instance *instance,
 					ci->host_device_list[i].scsi_type;
 				instance->local_pd_list[target_id].driveState =
 					MR_PD_STATE_SYSTEM;
+				if (megasas_dbg_lvl & LD_PD_DEBUG)
+					dev_info(&instance->pdev->dev,
+						 "Device %d: PD targetID: 0x%03x deviceType:0x%x\n",
+						 i, target_id, ci->host_device_list[i].scsi_type);
 			} else {
 				instance->ld_ids[target_id] = target_id;
+				if (megasas_dbg_lvl & LD_PD_DEBUG)
+					dev_info(&instance->pdev->dev,
+						 "Device %d: LD targetID: 0x%03x\n",
+						 i, target_id);
 			}
 		}
 
@@ -4714,8 +4868,10 @@ megasas_host_device_list_query(struct megasas_instance *instance,
 		switch (dcmd_timeout_ocr_possible(instance)) {
 		case INITIATE_OCR:
 			cmd->flags |= DRV_DCMD_SKIP_REFIRE;
+			mutex_unlock(&instance->reset_mutex);
 			megasas_reset_fusion(instance->host,
 					     MFI_IO_TIMEOUT_OCR);
+			mutex_lock(&instance->reset_mutex);
 			break;
 		case KILL_ADAPTER:
 			megaraid_sas_kill_hba(instance);
@@ -4863,8 +5019,10 @@ void megasas_get_snapdump_properties(struct megasas_instance *instance)
 		switch (dcmd_timeout_ocr_possible(instance)) {
 		case INITIATE_OCR:
 			cmd->flags |= DRV_DCMD_SKIP_REFIRE;
+			mutex_unlock(&instance->reset_mutex);
 			megasas_reset_fusion(instance->host,
 					     MFI_IO_TIMEOUT_OCR);
+			mutex_lock(&instance->reset_mutex);
 			break;
 		case KILL_ADAPTER:
 			megaraid_sas_kill_hba(instance);
@@ -4943,6 +5101,7 @@ megasas_get_ctrl_info(struct megasas_instance *instance)
 	le32_to_cpus((u32 *)&ci->adapterOperations2);
 	le32_to_cpus((u32 *)&ci->adapterOperations3);
 	le16_to_cpus((u16 *)&ci->adapter_operations4);
+	le32_to_cpus((u32 *)&ci->adapter_operations5);
 
 	/* Update the latest Ext VD info.
 	 * From Init path, store current firmware details.
@@ -4950,12 +5109,14 @@ megasas_get_ctrl_info(struct megasas_instance *instance)
 	 * in case of Firmware upgrade without system reboot.
 	 */
 	megasas_update_ext_vd_details(instance);
-	instance->use_seqnum_jbod_fp =
+	instance->support_seqnum_jbod_fp =
 		ci->adapterOperations3.useSeqNumJbodFP;
 	instance->support_morethan256jbod =
 		ci->adapter_operations4.support_pd_map_target_id;
 	instance->support_nvme_passthru =
 		ci->adapter_operations4.support_nvme_passthru;
+	instance->support_pci_lane_margining =
+		ci->adapter_operations5.support_pci_lane_margining;
 	instance->task_abort_tmo = ci->TaskAbortTO;
 	instance->max_reset_tmo = ci->MaxResetTO;
 
@@ -4987,6 +5148,10 @@ megasas_get_ctrl_info(struct megasas_instance *instance)
 		dev_info(&instance->pdev->dev,
 			 "FW provided TM TaskAbort/Reset timeout\t: %d secs/%d secs\n",
 			 instance->task_abort_tmo, instance->max_reset_tmo);
+		dev_info(&instance->pdev->dev, "JBOD sequence map support\t: %s\n",
+			 instance->support_seqnum_jbod_fp ? "Yes" : "No");
+		dev_info(&instance->pdev->dev, "PCI Lane Margining support\t: %s\n",
+			 instance->support_pci_lane_margining ? "Yes" : "No");
 
 		break;
 
@@ -4994,8 +5159,10 @@ megasas_get_ctrl_info(struct megasas_instance *instance)
 		switch (dcmd_timeout_ocr_possible(instance)) {
 		case INITIATE_OCR:
 			cmd->flags |= DRV_DCMD_SKIP_REFIRE;
+			mutex_unlock(&instance->reset_mutex);
 			megasas_reset_fusion(instance->host,
 					     MFI_IO_TIMEOUT_OCR);
+			mutex_lock(&instance->reset_mutex);
 			break;
 		case KILL_ADAPTER:
 			megaraid_sas_kill_hba(instance);
@@ -5262,6 +5429,25 @@ fail_alloc_cmds:
 	return 1;
 }
 
+static
+void megasas_setup_irq_poll(struct megasas_instance *instance)
+{
+	struct megasas_irq_context *irq_ctx;
+	u32 count, i;
+
+	count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
+
+	/* Initialize IRQ poll */
+	for (i = 0; i < count; i++) {
+		irq_ctx = &instance->irq_context[i];
+		irq_ctx->os_irq = pci_irq_vector(instance->pdev, i);
+		irq_ctx->irq_poll_scheduled = false;
+		irq_poll_init(&irq_ctx->irqpoll,
+			      instance->threshold_reply_count,
+			      megasas_irqpoll);
+	}
+}
+
 /*
  * megasas_setup_irqs_ioapic -		register legacy interrupts.
  * @instance:				Adapter soft state
@@ -5286,6 +5472,8 @@ megasas_setup_irqs_ioapic(struct megasas_instance *instance)
 			__func__, __LINE__);
 		return -1;
 	}
+	instance->perf_mode = MR_LATENCY_PERF_MODE;
+	instance->low_latency_index_start = 0;
 	return 0;
 }
 
@@ -5320,6 +5508,7 @@ megasas_setup_irqs_msix(struct megasas_instance *instance, u8 is_probe)
 					     &instance->irq_context[j]);
 			/* Retry irq register for IO_APIC*/
 			instance->msix_vectors = 0;
+			instance->msix_load_balance = false;
 			if (is_probe) {
 				pci_free_irq_vectors(instance->pdev);
 				return megasas_setup_irqs_ioapic(instance);
@@ -5328,6 +5517,7 @@ megasas_setup_irqs_msix(struct megasas_instance *instance, u8 is_probe)
 			}
 		}
 	}
+
 	return 0;
 }
 
@@ -5340,6 +5530,16 @@ static void
 megasas_destroy_irqs(struct megasas_instance *instance) {
 
 	int i;
+	int count;
+	struct megasas_irq_context *irq_ctx;
+
+	count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
+	if (instance->adapter_type != MFI_SERIES) {
+		for (i = 0; i < count; i++) {
+			irq_ctx = &instance->irq_context[i];
+			irq_poll_disable(&irq_ctx->irqpoll);
+		}
+	}
 
 	if (instance->msix_vectors)
 		for (i = 0; i < instance->msix_vectors; i++) {
@@ -5368,10 +5568,12 @@ megasas_setup_jbod_map(struct megasas_instance *instance)
5368 pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) + 5568 pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) +
5369 (sizeof(struct MR_PD_CFG_SEQ) * (MAX_PHYSICAL_DEVICES - 1)); 5569 (sizeof(struct MR_PD_CFG_SEQ) * (MAX_PHYSICAL_DEVICES - 1));
5370 5570
5571 instance->use_seqnum_jbod_fp =
5572 instance->support_seqnum_jbod_fp;
5371 if (reset_devices || !fusion || 5573 if (reset_devices || !fusion ||
5372 !instance->ctrl_info_buf->adapterOperations3.useSeqNumJbodFP) { 5574 !instance->support_seqnum_jbod_fp) {
5373 dev_info(&instance->pdev->dev, 5575 dev_info(&instance->pdev->dev,
5374 "Jbod map is not supported %s %d\n", 5576 "JBOD sequence map is disabled %s %d\n",
5375 __func__, __LINE__); 5577 __func__, __LINE__);
5376 instance->use_seqnum_jbod_fp = false; 5578 instance->use_seqnum_jbod_fp = false;
5377 return; 5579 return;
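The pd_seq_map_sz computation in this hunk sizes a buffer for MAX_PHYSICAL_DEVICES entries using the old one-element-trailing-array idiom: the header struct already embeds the first MR_PD_CFG_SEQ, so only MAX_PHYSICAL_DEVICES - 1 further entries are added. A userspace sketch with stand-in struct layouts (these are illustrative, not the driver's real definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for MR_PD_CFG_SEQ / MR_PD_CFG_SEQ_NUM_SYNC;
 * field layout here is NOT the driver's real definition. */
struct pd_cfg_seq { unsigned short seq; unsigned char pd_target_id[6]; };
struct pd_cfg_seq_num_sync {
	unsigned int size;
	unsigned int count;
	struct pd_cfg_seq seq[1];	/* old-style one-element trailing array */
};

#define MAX_PHYSICAL_DEVICES 256

/* Same shape as the driver's pd_seq_map_sz computation: header (which
 * already holds one entry) plus the remaining MAX - 1 entries. */
static size_t pd_seq_map_sz(void)
{
	return sizeof(struct pd_cfg_seq_num_sync) +
	       sizeof(struct pd_cfg_seq) * (MAX_PHYSICAL_DEVICES - 1);
}
```

The `- 1` is what prevents double-counting the entry embedded in the header struct.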
@@ -5410,9 +5612,11 @@ skip_alloc:
5410static void megasas_setup_reply_map(struct megasas_instance *instance) 5612static void megasas_setup_reply_map(struct megasas_instance *instance)
5411{ 5613{
5412 const struct cpumask *mask; 5614 const struct cpumask *mask;
5413 unsigned int queue, cpu; 5615 unsigned int queue, cpu, low_latency_index_start;
5414 5616
5415 for (queue = 0; queue < instance->msix_vectors; queue++) { 5617 low_latency_index_start = instance->low_latency_index_start;
5618
5619 for (queue = low_latency_index_start; queue < instance->msix_vectors; queue++) {
5416 mask = pci_irq_get_affinity(instance->pdev, queue); 5620 mask = pci_irq_get_affinity(instance->pdev, queue);
5417 if (!mask) 5621 if (!mask)
5418 goto fallback; 5622 goto fallback;
@@ -5423,8 +5627,14 @@ static void megasas_setup_reply_map(struct megasas_instance *instance)
5423 return; 5627 return;
5424 5628
5425fallback: 5629fallback:
5426 for_each_possible_cpu(cpu) 5630 queue = low_latency_index_start;
5427 instance->reply_map[cpu] = cpu % instance->msix_vectors; 5631 for_each_possible_cpu(cpu) {
5632 instance->reply_map[cpu] = queue;
5633 if (queue == (instance->msix_vectors - 1))
5634 queue = low_latency_index_start;
5635 else
5636 queue++;
5637 }
5428} 5638}
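The reworked fallback above no longer maps `cpu % msix_vectors`; it round-robins CPUs over the queues at and past low_latency_index_start, so completions never land on the reserved (high IOPS or management) queues. A userspace sketch of that assignment (the function name is ours, not the driver's):

```c
#include <assert.h>

/* Sketch of the driver's fallback reply-queue mapping: CPUs are
 * assigned queues round-robin over [low_latency_index_start,
 * msix_vectors - 1], wrapping back past the reserved queues. */
static void reply_map_fallback(unsigned int *reply_map, unsigned int ncpus,
			       unsigned int msix_vectors,
			       unsigned int low_latency_index_start)
{
	unsigned int cpu, queue = low_latency_index_start;

	for (cpu = 0; cpu < ncpus; cpu++) {
		reply_map[cpu] = queue;
		if (queue == msix_vectors - 1)
			queue = low_latency_index_start;	/* wrap */
		else
			queue++;
	}
}
```

With 4 vectors and low_latency_index_start = 1, CPUs cycle through queues 1, 2, 3, 1, 2, ... and queue 0 stays dedicated.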
5429 5639
5430/** 5640/**
@@ -5461,6 +5671,89 @@ int megasas_get_device_list(struct megasas_instance *instance)
5461 5671
5462 return SUCCESS; 5672 return SUCCESS;
5463} 5673}
5674
5675/**
5676 * megasas_set_high_iops_queue_affinity_hint - Set affinity hint for high IOPS queues
5677 * @instance: Adapter soft state
5678 * return: void
5679 */
5680static inline void
5681megasas_set_high_iops_queue_affinity_hint(struct megasas_instance *instance)
5682{
5683 int i;
5684 int local_numa_node;
5685
5686 if (instance->perf_mode == MR_BALANCED_PERF_MODE) {
5687 local_numa_node = dev_to_node(&instance->pdev->dev);
5688
5689 for (i = 0; i < instance->low_latency_index_start; i++)
5690 irq_set_affinity_hint(pci_irq_vector(instance->pdev, i),
5691 cpumask_of_node(local_numa_node));
5692 }
5693}
5694
5695static int
5696__megasas_alloc_irq_vectors(struct megasas_instance *instance)
5697{
5698 int i, irq_flags;
5699 struct irq_affinity desc = { .pre_vectors = instance->low_latency_index_start };
5700 struct irq_affinity *descp = &desc;
5701
5702 irq_flags = PCI_IRQ_MSIX;
5703
5704 if (instance->smp_affinity_enable)
5705 irq_flags |= PCI_IRQ_AFFINITY;
5706 else
5707 descp = NULL;
5708
5709 i = pci_alloc_irq_vectors_affinity(instance->pdev,
5710 instance->low_latency_index_start,
5711 instance->msix_vectors, irq_flags, descp);
5712
5713 return i;
5714}
5715
5716/**
5717 * megasas_alloc_irq_vectors - Allocate IRQ vectors/enable MSI-x vectors
5718 * @instance: Adapter soft state
5719 * return: void
5720 */
5721static void
5722megasas_alloc_irq_vectors(struct megasas_instance *instance)
5723{
5724 int i;
5725 unsigned int num_msix_req;
5726
5727 i = __megasas_alloc_irq_vectors(instance);
5728
5729 if ((instance->perf_mode == MR_BALANCED_PERF_MODE) &&
5730 (i != instance->msix_vectors)) {
5731 if (instance->msix_vectors)
5732 pci_free_irq_vectors(instance->pdev);
5733 /* Disable Balanced IOPS mode and try realloc vectors */
5734 instance->perf_mode = MR_LATENCY_PERF_MODE;
5735 instance->low_latency_index_start = 1;
5736 num_msix_req = num_online_cpus() + instance->low_latency_index_start;
5737
5738 instance->msix_vectors = min(num_msix_req,
5739 instance->msix_vectors);
5740
5741 i = __megasas_alloc_irq_vectors(instance);
5742
5743 }
5744
5745 dev_info(&instance->pdev->dev,
5746 "requested/available msix %d/%d\n", instance->msix_vectors, i);
5747
5748 if (i > 0)
5749 instance->msix_vectors = i;
5750 else
5751 instance->msix_vectors = 0;
5752
5753 if (instance->smp_affinity_enable)
5754 megasas_set_high_iops_queue_affinity_hint(instance);
5755}
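When balanced mode cannot obtain its full vector complement, megasas_alloc_irq_vectors above frees the partial allocation, drops to latency mode, and retries with a capped request. A sketch of the recount, assuming one reserved vector in latency mode (as the hunk sets low_latency_index_start = 1):

```c
#include <assert.h>

/* Sketch of the latency-mode retry sizing: one reserved low-latency/
 * management vector, and the request capped at online CPUs plus that
 * reservation (never more than firmware advertises). */
static unsigned int latency_mode_msix(unsigned int online_cpus,
				      unsigned int fw_msix_vectors)
{
	unsigned int low_latency_index_start = 1;	/* latency perf mode */
	unsigned int num_msix_req = online_cpus + low_latency_index_start;

	return num_msix_req < fw_msix_vectors ? num_msix_req : fw_msix_vectors;
}
```

So an 8-CPU host asks for at most 9 vectors even if firmware exposes 128, while a 64-CPU host is limited by a 16-vector controller.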
5756
5464/** 5757/**
5465 * megasas_init_fw - Initializes the FW 5758 * megasas_init_fw - Initializes the FW
5466 * @instance: Adapter soft state 5759 * @instance: Adapter soft state
@@ -5474,12 +5767,15 @@ static int megasas_init_fw(struct megasas_instance *instance)
5474 u32 max_sectors_2, tmp_sectors, msix_enable; 5767 u32 max_sectors_2, tmp_sectors, msix_enable;
5475 u32 scratch_pad_1, scratch_pad_2, scratch_pad_3, status_reg; 5768 u32 scratch_pad_1, scratch_pad_2, scratch_pad_3, status_reg;
5476 resource_size_t base_addr; 5769 resource_size_t base_addr;
5770 void *base_addr_phys;
5477 struct megasas_ctrl_info *ctrl_info = NULL; 5771 struct megasas_ctrl_info *ctrl_info = NULL;
5478 unsigned long bar_list; 5772 unsigned long bar_list;
5479 int i, j, loop, fw_msix_count = 0; 5773 int i, j, loop;
5480 struct IOV_111 *iovPtr; 5774 struct IOV_111 *iovPtr;
5481 struct fusion_context *fusion; 5775 struct fusion_context *fusion;
5482 bool do_adp_reset = true; 5776 bool intr_coalescing;
5777 unsigned int num_msix_req;
5778 u16 lnksta, speed;
5483 5779
5484 fusion = instance->ctrl_context; 5780 fusion = instance->ctrl_context;
5485 5781
@@ -5500,6 +5796,11 @@ static int megasas_init_fw(struct megasas_instance *instance)
5500 goto fail_ioremap; 5796 goto fail_ioremap;
5501 } 5797 }
5502 5798
5799 base_addr_phys = &base_addr;
5800 dev_printk(KERN_DEBUG, &instance->pdev->dev,
5801 "BAR:0x%lx BAR's base_addr(phys):%pa mapped virt_addr:0x%p\n",
5802 instance->bar, base_addr_phys, instance->reg_set);
5803
5503 if (instance->adapter_type != MFI_SERIES) 5804 if (instance->adapter_type != MFI_SERIES)
5504 instance->instancet = &megasas_instance_template_fusion; 5805 instance->instancet = &megasas_instance_template_fusion;
5505 else { 5806 else {
@@ -5526,29 +5827,35 @@ static int megasas_init_fw(struct megasas_instance *instance)
5526 } 5827 }
5527 5828
5528 if (megasas_transition_to_ready(instance, 0)) { 5829 if (megasas_transition_to_ready(instance, 0)) {
5529 if (instance->adapter_type >= INVADER_SERIES) { 5830 dev_info(&instance->pdev->dev,
5831 "Failed to transition controller to ready from %s!\n",
5832 __func__);
5833 if (instance->adapter_type != MFI_SERIES) {
5530 status_reg = instance->instancet->read_fw_status_reg( 5834 status_reg = instance->instancet->read_fw_status_reg(
5531 instance); 5835 instance);
5532 do_adp_reset = status_reg & MFI_RESET_ADAPTER; 5836 if (status_reg & MFI_RESET_ADAPTER) {
5533 } 5837 if (megasas_adp_reset_wait_for_ready
5534 5838 (instance, true, 0) == FAILED)
5535 if (do_adp_reset) { 5839 goto fail_ready_state;
5840 } else {
5841 goto fail_ready_state;
5842 }
5843 } else {
5536 atomic_set(&instance->fw_reset_no_pci_access, 1); 5844 atomic_set(&instance->fw_reset_no_pci_access, 1);
5537 instance->instancet->adp_reset 5845 instance->instancet->adp_reset
5538 (instance, instance->reg_set); 5846 (instance, instance->reg_set);
5539 atomic_set(&instance->fw_reset_no_pci_access, 0); 5847 atomic_set(&instance->fw_reset_no_pci_access, 0);
5540 dev_info(&instance->pdev->dev,
5541 "FW restarted successfully from %s!\n",
5542 __func__);
5543 5848
5544 /* waiting for about 30 seconds before retry */ 5849
5545 ssleep(30); 5850 ssleep(30);
5546 5851
5547 if (megasas_transition_to_ready(instance, 0)) 5852 if (megasas_transition_to_ready(instance, 0))
5548 goto fail_ready_state; 5853 goto fail_ready_state;
5549 } else {
5550 goto fail_ready_state;
5551 } 5854 }
5855
5856 dev_info(&instance->pdev->dev,
5857 "FW restarted successfully from %s!\n",
5858 __func__);
5552 } 5859 }
5553 5860
5554 megasas_init_ctrl_params(instance); 5861 megasas_init_ctrl_params(instance);
@@ -5573,11 +5880,21 @@ static int megasas_init_fw(struct megasas_instance *instance)
5573 MR_MAX_RAID_MAP_SIZE_MASK); 5880 MR_MAX_RAID_MAP_SIZE_MASK);
5574 } 5881 }
5575 5882
5883 switch (instance->adapter_type) {
5884 case VENTURA_SERIES:
5885 fusion->pcie_bw_limitation = true;
5886 break;
5887 case AERO_SERIES:
5888 fusion->r56_div_offload = true;
5889 break;
5890 default:
5891 break;
5892 }
5893
5576 /* Check if MSI-X is supported while in ready state */ 5894 /* Check if MSI-X is supported while in ready state */
5577 msix_enable = (instance->instancet->read_fw_status_reg(instance) & 5895 msix_enable = (instance->instancet->read_fw_status_reg(instance) &
5578 0x4000000) >> 0x1a; 5896 0x4000000) >> 0x1a;
5579 if (msix_enable && !msix_disable) { 5897 if (msix_enable && !msix_disable) {
5580 int irq_flags = PCI_IRQ_MSIX;
5581 5898
5582 scratch_pad_1 = megasas_readl 5899 scratch_pad_1 = megasas_readl
5583 (instance, &instance->reg_set->outbound_scratch_pad_1); 5900 (instance, &instance->reg_set->outbound_scratch_pad_1);
@@ -5587,7 +5904,6 @@ static int megasas_init_fw(struct megasas_instance *instance)
5587 /* Thunderbolt Series*/ 5904 /* Thunderbolt Series*/
5588 instance->msix_vectors = (scratch_pad_1 5905 instance->msix_vectors = (scratch_pad_1
5589 & MR_MAX_REPLY_QUEUES_OFFSET) + 1; 5906 & MR_MAX_REPLY_QUEUES_OFFSET) + 1;
5590 fw_msix_count = instance->msix_vectors;
5591 } else { 5907 } else {
5592 instance->msix_vectors = ((scratch_pad_1 5908 instance->msix_vectors = ((scratch_pad_1
5593 & MR_MAX_REPLY_QUEUES_EXT_OFFSET) 5909 & MR_MAX_REPLY_QUEUES_EXT_OFFSET)
@@ -5616,7 +5932,12 @@ static int megasas_init_fw(struct megasas_instance *instance)
5616 if (rdpq_enable) 5932 if (rdpq_enable)
5617 instance->is_rdpq = (scratch_pad_1 & MR_RDPQ_MODE_OFFSET) ? 5933 instance->is_rdpq = (scratch_pad_1 & MR_RDPQ_MODE_OFFSET) ?
5618 1 : 0; 5934 1 : 0;
5619 fw_msix_count = instance->msix_vectors; 5935
5936 if (!instance->msix_combined) {
5937 instance->msix_load_balance = true;
5938 instance->smp_affinity_enable = false;
5939 }
5940
5620 /* Save 1-15 reply post index address to local memory 5941 /* Save 1-15 reply post index address to local memory
5621 * Index 0 is already saved from reg offset 5942 * Index 0 is already saved from reg offset
5622 * MPI2_REPLY_POST_HOST_INDEX_OFFSET 5943 * MPI2_REPLY_POST_HOST_INDEX_OFFSET
@@ -5629,22 +5950,91 @@ static int megasas_init_fw(struct megasas_instance *instance)
5629 + (loop * 0x10)); 5950 + (loop * 0x10));
5630 } 5951 }
5631 } 5952 }
5953
5954 dev_info(&instance->pdev->dev,
5955 "firmware supports msix\t: (%d)",
5956 instance->msix_vectors);
5632 if (msix_vectors) 5957 if (msix_vectors)
5633 instance->msix_vectors = min(msix_vectors, 5958 instance->msix_vectors = min(msix_vectors,
5634 instance->msix_vectors); 5959 instance->msix_vectors);
5635 } else /* MFI adapters */ 5960 } else /* MFI adapters */
5636 instance->msix_vectors = 1; 5961 instance->msix_vectors = 1;
5637 /* Don't bother allocating more MSI-X vectors than cpus */ 5962
5638 instance->msix_vectors = min(instance->msix_vectors, 5963
5639 (unsigned int)num_online_cpus()); 5964 /*
5640 if (smp_affinity_enable) 5965 * For Aero (if some conditions are met), driver will configure a
5641 irq_flags |= PCI_IRQ_AFFINITY; 5966 * few additional reply queues with interrupt coalescing enabled.
5642 i = pci_alloc_irq_vectors(instance->pdev, 1, 5967 * These queues with interrupt coalescing enabled are called
5643 instance->msix_vectors, irq_flags); 5968 * High IOPS queues and rest of reply queues (based on number of
5644 if (i > 0) 5969 * logical CPUs) are termed as Low latency queues.
5645 instance->msix_vectors = i; 5970 *
5971 * Total Number of reply queues = High IOPS queues + low latency queues
5972 *
5973 * For the remaining fusion adapters, one additional reply queue is
5974 * reserved for management commands, and the rest of the reply queues
5975 * (based on the number of logical CPUs) are used for IOs and
5976 * referenced as IO queues.
5977 * Total number of reply queues = 1 + IO queues
5978 *
5979 * MFI adapters support a single MSI-x vector, so a single reply queue
5980 * is used for both IO and management commands.
5981 */
5982
5983 intr_coalescing = (scratch_pad_1 & MR_INTR_COALESCING_SUPPORT_OFFSET) ?
5984 true : false;
5985 if (intr_coalescing &&
5986 (num_online_cpus() >= MR_HIGH_IOPS_QUEUE_COUNT) &&
5987 (instance->msix_vectors == MEGASAS_MAX_MSIX_QUEUES))
5988 instance->perf_mode = MR_BALANCED_PERF_MODE;
5646 else 5989 else
5647 instance->msix_vectors = 0; 5990 instance->perf_mode = MR_LATENCY_PERF_MODE;
5991
5992
5993 if (instance->adapter_type == AERO_SERIES) {
5994 pcie_capability_read_word(instance->pdev, PCI_EXP_LNKSTA, &lnksta);
5995 speed = lnksta & PCI_EXP_LNKSTA_CLS;
5996
5997 /*
5998 * For Aero, if PCIe link speed is <16 GT/s, then driver should operate
5999 * in latency perf mode and enable R1 PCI bandwidth algorithm
6000 */
6001 if (speed < 0x4) {
6002 instance->perf_mode = MR_LATENCY_PERF_MODE;
6003 fusion->pcie_bw_limitation = true;
6004 }
6005
6006 /*
6007 * Performance mode settings provided through the module parameter perf_mode
6008 * take effect only for:
6009 * 1. The Aero family of adapters.
6010 * 2. Values of the module parameter perf_mode in the range 0-2.
6011 */
6012 if ((perf_mode >= MR_BALANCED_PERF_MODE) &&
6013 (perf_mode <= MR_LATENCY_PERF_MODE))
6014 instance->perf_mode = perf_mode;
6015 /*
6016 * If intr coalescing is not supported by controller FW, then IOPS
6017 * and Balanced modes are not feasible.
6018 */
6019 if (!intr_coalescing)
6020 instance->perf_mode = MR_LATENCY_PERF_MODE;
6021
6022 }
6023
6024 if (instance->perf_mode == MR_BALANCED_PERF_MODE)
6025 instance->low_latency_index_start =
6026 MR_HIGH_IOPS_QUEUE_COUNT;
6027 else
6028 instance->low_latency_index_start = 1;
6029
6030 num_msix_req = num_online_cpus() + instance->low_latency_index_start;
6031
6032 instance->msix_vectors = min(num_msix_req,
6033 instance->msix_vectors);
6034
6035 megasas_alloc_irq_vectors(instance);
6036 if (!instance->msix_vectors)
6037 instance->msix_load_balance = false;
5648 } 6038 }
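The perf-mode selection spread over the hunk above condenses to a pure predicate: balanced mode needs firmware interrupt coalescing, enough online CPUs, and the full MSI-X complement, while a slow Aero PCIe link (below 16 GT/s) or missing coalescing forces latency mode. A hedged sketch (the enum values and helper are illustrative, not the driver's definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative mode values; the driver's real constants live in
 * megaraid_sas.h and may differ. */
enum { MR_BALANCED_PERF_MODE = 0, MR_LATENCY_PERF_MODE = 2 };

/* Sketch of the decision: balanced only when coalescing is supported,
 * CPU count reaches the high-IOPS threshold, and firmware granted the
 * maximum MSI-X count; a slow Aero link or no coalescing overrides. */
static int pick_perf_mode(bool intr_coalescing, unsigned int online_cpus,
			  unsigned int msix_vectors, unsigned int max_msix,
			  unsigned int high_iops_queue_count,
			  bool aero_link_below_16gt)
{
	int mode;

	if (intr_coalescing && online_cpus >= high_iops_queue_count &&
	    msix_vectors == max_msix)
		mode = MR_BALANCED_PERF_MODE;
	else
		mode = MR_LATENCY_PERF_MODE;

	if (aero_link_below_16gt || !intr_coalescing)
		mode = MR_LATENCY_PERF_MODE;

	return mode;
}
```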
5649 /* 6039 /*
5650 * MSI-X host index 0 is common for all adapter. 6040 * MSI-X host index 0 is common for all adapter.
@@ -5669,8 +6059,6 @@ static int megasas_init_fw(struct megasas_instance *instance)
5669 megasas_setup_reply_map(instance); 6059 megasas_setup_reply_map(instance);
5670 6060
5671 dev_info(&instance->pdev->dev, 6061 dev_info(&instance->pdev->dev,
5672 "firmware supports msix\t: (%d)", fw_msix_count);
5673 dev_info(&instance->pdev->dev,
5674 "current msix/online cpus\t: (%d/%d)\n", 6062 "current msix/online cpus\t: (%d/%d)\n",
5675 instance->msix_vectors, (unsigned int)num_online_cpus()); 6063 instance->msix_vectors, (unsigned int)num_online_cpus());
5676 dev_info(&instance->pdev->dev, 6064 dev_info(&instance->pdev->dev,
@@ -5707,6 +6095,9 @@ static int megasas_init_fw(struct megasas_instance *instance)
5707 megasas_setup_irqs_ioapic(instance)) 6095 megasas_setup_irqs_ioapic(instance))
5708 goto fail_init_adapter; 6096 goto fail_init_adapter;
5709 6097
6098 if (instance->adapter_type != MFI_SERIES)
6099 megasas_setup_irq_poll(instance);
6100
5710 instance->instancet->enable_intr(instance); 6101 instance->instancet->enable_intr(instance);
5711 6102
5712 dev_info(&instance->pdev->dev, "INIT adapter done\n"); 6103 dev_info(&instance->pdev->dev, "INIT adapter done\n");
@@ -5833,8 +6224,8 @@ static int megasas_init_fw(struct megasas_instance *instance)
5833 instance->UnevenSpanSupport ? "yes" : "no"); 6224 instance->UnevenSpanSupport ? "yes" : "no");
5834 dev_info(&instance->pdev->dev, "firmware crash dump : %s\n", 6225 dev_info(&instance->pdev->dev, "firmware crash dump : %s\n",
5835 instance->crash_dump_drv_support ? "yes" : "no"); 6226 instance->crash_dump_drv_support ? "yes" : "no");
5836 dev_info(&instance->pdev->dev, "jbod sync map : %s\n", 6227 dev_info(&instance->pdev->dev, "JBOD sequence map : %s\n",
5837 instance->use_seqnum_jbod_fp ? "yes" : "no"); 6228 instance->use_seqnum_jbod_fp ? "enabled" : "disabled");
5838 6229
5839 instance->max_sectors_per_req = instance->max_num_sge * 6230 instance->max_sectors_per_req = instance->max_num_sge *
5840 SGE_BUFFER_SIZE / 512; 6231 SGE_BUFFER_SIZE / 512;
@@ -6197,8 +6588,10 @@ megasas_get_target_prop(struct megasas_instance *instance,
6197 switch (dcmd_timeout_ocr_possible(instance)) { 6588 switch (dcmd_timeout_ocr_possible(instance)) {
6198 case INITIATE_OCR: 6589 case INITIATE_OCR:
6199 cmd->flags |= DRV_DCMD_SKIP_REFIRE; 6590 cmd->flags |= DRV_DCMD_SKIP_REFIRE;
6591 mutex_unlock(&instance->reset_mutex);
6200 megasas_reset_fusion(instance->host, 6592 megasas_reset_fusion(instance->host,
6201 MFI_IO_TIMEOUT_OCR); 6593 MFI_IO_TIMEOUT_OCR);
6594 mutex_lock(&instance->reset_mutex);
6202 break; 6595 break;
6203 case KILL_ADAPTER: 6596 case KILL_ADAPTER:
6204 megaraid_sas_kill_hba(instance); 6597 megaraid_sas_kill_hba(instance);
@@ -6748,6 +7141,7 @@ static inline void megasas_init_ctrl_params(struct megasas_instance *instance)
6748 INIT_LIST_HEAD(&instance->internal_reset_pending_q); 7141 INIT_LIST_HEAD(&instance->internal_reset_pending_q);
6749 7142
6750 atomic_set(&instance->fw_outstanding, 0); 7143 atomic_set(&instance->fw_outstanding, 0);
7144 atomic64_set(&instance->total_io_count, 0);
6751 7145
6752 init_waitqueue_head(&instance->int_cmd_wait_q); 7146 init_waitqueue_head(&instance->int_cmd_wait_q);
6753 init_waitqueue_head(&instance->abort_cmd_wait_q); 7147 init_waitqueue_head(&instance->abort_cmd_wait_q);
@@ -6770,6 +7164,8 @@ static inline void megasas_init_ctrl_params(struct megasas_instance *instance)
6770 instance->last_time = 0; 7164 instance->last_time = 0;
6771 instance->disableOnlineCtrlReset = 1; 7165 instance->disableOnlineCtrlReset = 1;
6772 instance->UnevenSpanSupport = 0; 7166 instance->UnevenSpanSupport = 0;
7167 instance->smp_affinity_enable = smp_affinity_enable ? true : false;
7168 instance->msix_load_balance = false;
6773 7169
6774 if (instance->adapter_type != MFI_SERIES) 7170 if (instance->adapter_type != MFI_SERIES)
6775 INIT_WORK(&instance->work_init, megasas_fusion_ocr_wq); 7171 INIT_WORK(&instance->work_init, megasas_fusion_ocr_wq);
@@ -6791,6 +7187,12 @@ static int megasas_probe_one(struct pci_dev *pdev,
6791 u16 control = 0; 7187 u16 control = 0;
6792 7188
6793 switch (pdev->device) { 7189 switch (pdev->device) {
7190 case PCI_DEVICE_ID_LSI_AERO_10E0:
7191 case PCI_DEVICE_ID_LSI_AERO_10E3:
7192 case PCI_DEVICE_ID_LSI_AERO_10E4:
7193 case PCI_DEVICE_ID_LSI_AERO_10E7:
7194 dev_err(&pdev->dev, "Adapter is in non secure mode\n");
7195 return 1;
6794 case PCI_DEVICE_ID_LSI_AERO_10E1: 7196 case PCI_DEVICE_ID_LSI_AERO_10E1:
6795 case PCI_DEVICE_ID_LSI_AERO_10E5: 7197 case PCI_DEVICE_ID_LSI_AERO_10E5:
6796 dev_info(&pdev->dev, "Adapter is in configurable secure mode\n"); 7198 dev_info(&pdev->dev, "Adapter is in configurable secure mode\n");
@@ -6910,6 +7312,8 @@ static int megasas_probe_one(struct pci_dev *pdev,
6910 goto fail_start_aen; 7312 goto fail_start_aen;
6911 } 7313 }
6912 7314
7315 megasas_setup_debugfs(instance);
7316
6913 /* Get current SR-IOV LD/VF affiliation */ 7317 /* Get current SR-IOV LD/VF affiliation */
6914 if (instance->requestorId) 7318 if (instance->requestorId)
6915 megasas_get_ld_vf_affiliation(instance, 1); 7319 megasas_get_ld_vf_affiliation(instance, 1);
@@ -7041,13 +7445,17 @@ static void megasas_shutdown_controller(struct megasas_instance *instance,
7041static int 7445static int
7042megasas_suspend(struct pci_dev *pdev, pm_message_t state) 7446megasas_suspend(struct pci_dev *pdev, pm_message_t state)
7043{ 7447{
7044 struct Scsi_Host *host;
7045 struct megasas_instance *instance; 7448 struct megasas_instance *instance;
7046 7449
7047 instance = pci_get_drvdata(pdev); 7450 instance = pci_get_drvdata(pdev);
7048 host = instance->host; 7451
7452 if (!instance)
7453 return 0;
7454
7049 instance->unload = 1; 7455 instance->unload = 1;
7050 7456
7457 dev_info(&pdev->dev, "%s is called\n", __func__);
7458
7051 /* Shutdown SR-IOV heartbeat timer */ 7459 /* Shutdown SR-IOV heartbeat timer */
7052 if (instance->requestorId && !instance->skip_heartbeat_timer_del) 7460 if (instance->requestorId && !instance->skip_heartbeat_timer_del)
7053 del_timer_sync(&instance->sriov_heartbeat_timer); 7461 del_timer_sync(&instance->sriov_heartbeat_timer);
@@ -7097,11 +7505,16 @@ megasas_resume(struct pci_dev *pdev)
7097 int irq_flags = PCI_IRQ_LEGACY; 7505 int irq_flags = PCI_IRQ_LEGACY;
7098 7506
7099 instance = pci_get_drvdata(pdev); 7507 instance = pci_get_drvdata(pdev);
7508
7509 if (!instance)
7510 return 0;
7511
7100 host = instance->host; 7512 host = instance->host;
7101 pci_set_power_state(pdev, PCI_D0); 7513 pci_set_power_state(pdev, PCI_D0);
7102 pci_enable_wake(pdev, PCI_D0, 0); 7514 pci_enable_wake(pdev, PCI_D0, 0);
7103 pci_restore_state(pdev); 7515 pci_restore_state(pdev);
7104 7516
7517 dev_info(&pdev->dev, "%s is called\n", __func__);
7105 /* 7518 /*
7106 * PCI prepping: enable device set bus mastering and dma mask 7519 * PCI prepping: enable device set bus mastering and dma mask
7107 */ 7520 */
@@ -7133,7 +7546,7 @@ megasas_resume(struct pci_dev *pdev)
7133 /* Now re-enable MSI-X */ 7546 /* Now re-enable MSI-X */
7134 if (instance->msix_vectors) { 7547 if (instance->msix_vectors) {
7135 irq_flags = PCI_IRQ_MSIX; 7548 irq_flags = PCI_IRQ_MSIX;
7136 if (smp_affinity_enable) 7549 if (instance->smp_affinity_enable)
7137 irq_flags |= PCI_IRQ_AFFINITY; 7550 irq_flags |= PCI_IRQ_AFFINITY;
7138 } 7551 }
7139 rval = pci_alloc_irq_vectors(instance->pdev, 1, 7552 rval = pci_alloc_irq_vectors(instance->pdev, 1,
@@ -7171,6 +7584,9 @@ megasas_resume(struct pci_dev *pdev)
7171 megasas_setup_irqs_ioapic(instance)) 7584 megasas_setup_irqs_ioapic(instance))
7172 goto fail_init_mfi; 7585 goto fail_init_mfi;
7173 7586
7587 if (instance->adapter_type != MFI_SERIES)
7588 megasas_setup_irq_poll(instance);
7589
7174 /* Re-launch SR-IOV heartbeat timer */ 7590 /* Re-launch SR-IOV heartbeat timer */
7175 if (instance->requestorId) { 7591 if (instance->requestorId) {
7176 if (!megasas_sriov_start_heartbeat(instance, 0)) 7592 if (!megasas_sriov_start_heartbeat(instance, 0))
@@ -7261,6 +7677,10 @@ static void megasas_detach_one(struct pci_dev *pdev)
7261 u32 pd_seq_map_sz; 7677 u32 pd_seq_map_sz;
7262 7678
7263 instance = pci_get_drvdata(pdev); 7679 instance = pci_get_drvdata(pdev);
7680
7681 if (!instance)
7682 return;
7683
7264 host = instance->host; 7684 host = instance->host;
7265 fusion = instance->ctrl_context; 7685 fusion = instance->ctrl_context;
7266 7686
@@ -7374,6 +7794,8 @@ skip_firing_dcmds:
7374 7794
7375 megasas_free_ctrl_mem(instance); 7795 megasas_free_ctrl_mem(instance);
7376 7796
7797 megasas_destroy_debugfs(instance);
7798
7377 scsi_host_put(host); 7799 scsi_host_put(host);
7378 7800
7379 pci_disable_device(pdev); 7801 pci_disable_device(pdev);
@@ -7387,6 +7809,9 @@ static void megasas_shutdown(struct pci_dev *pdev)
7387{ 7809{
7388 struct megasas_instance *instance = pci_get_drvdata(pdev); 7810 struct megasas_instance *instance = pci_get_drvdata(pdev);
7389 7811
7812 if (!instance)
7813 return;
7814
7390 instance->unload = 1; 7815 instance->unload = 1;
7391 7816
7392 if (megasas_wait_for_adapter_operational(instance)) 7817 if (megasas_wait_for_adapter_operational(instance))
@@ -7532,7 +7957,9 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
7532 7957
7533 if ((ioc->frame.hdr.cmd >= MFI_CMD_OP_COUNT) || 7958 if ((ioc->frame.hdr.cmd >= MFI_CMD_OP_COUNT) ||
7534 ((ioc->frame.hdr.cmd == MFI_CMD_NVME) && 7959 ((ioc->frame.hdr.cmd == MFI_CMD_NVME) &&
7535 !instance->support_nvme_passthru)) { 7960 !instance->support_nvme_passthru) ||
7961 ((ioc->frame.hdr.cmd == MFI_CMD_TOOLBOX) &&
7962 !instance->support_pci_lane_margining)) {
7536 dev_err(&instance->pdev->dev, 7963 dev_err(&instance->pdev->dev,
7537 "Received invalid ioctl command 0x%x\n", 7964 "Received invalid ioctl command 0x%x\n",
7538 ioc->frame.hdr.cmd); 7965 ioc->frame.hdr.cmd);
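The extended gate above now also rejects MFI_CMD_TOOLBOX frames when the controller does not advertise PCI lane margining, alongside the existing NVMe-passthrough check. A userspace model of the check (command values here are illustrative, not the driver's real opcodes):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative command numbers; real values come from megaraid_sas.h. */
enum { MFI_CMD_NVME = 0x9, MFI_CMD_TOOLBOX = 0xa, MFI_CMD_OP_COUNT = 0xb };

/* Model of the ioctl gate: reject out-of-range commands, and reject
 * NVMe/toolbox passthrough unless the controller advertised support. */
static bool ioctl_cmd_allowed(unsigned int cmd, bool nvme_passthru,
			      bool pci_lane_margining)
{
	if (cmd >= MFI_CMD_OP_COUNT)
		return false;
	if (cmd == MFI_CMD_NVME && !nvme_passthru)
		return false;
	if (cmd == MFI_CMD_TOOLBOX && !pci_lane_margining)
		return false;
	return true;
}
```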
@@ -7568,10 +7995,13 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
7568 opcode = le32_to_cpu(cmd->frame->dcmd.opcode); 7995 opcode = le32_to_cpu(cmd->frame->dcmd.opcode);
7569 7996
7570 if (opcode == MR_DCMD_CTRL_SHUTDOWN) { 7997 if (opcode == MR_DCMD_CTRL_SHUTDOWN) {
7998 mutex_lock(&instance->reset_mutex);
7571 if (megasas_get_ctrl_info(instance) != DCMD_SUCCESS) { 7999 if (megasas_get_ctrl_info(instance) != DCMD_SUCCESS) {
7572 megasas_return_cmd(instance, cmd); 8000 megasas_return_cmd(instance, cmd);
8001 mutex_unlock(&instance->reset_mutex);
7573 return -1; 8002 return -1;
7574 } 8003 }
8004 mutex_unlock(&instance->reset_mutex);
7575 } 8005 }
7576 8006
7577 if (opcode == MR_DRIVER_SET_APP_CRASHDUMP_MODE) { 8007 if (opcode == MR_DRIVER_SET_APP_CRASHDUMP_MODE) {
@@ -8013,6 +8443,14 @@ support_nvme_encapsulation_show(struct device_driver *dd, char *buf)
8013 8443
8014static DRIVER_ATTR_RO(support_nvme_encapsulation); 8444static DRIVER_ATTR_RO(support_nvme_encapsulation);
8015 8445
8446static ssize_t
8447support_pci_lane_margining_show(struct device_driver *dd, char *buf)
8448{
8449 return sprintf(buf, "%u\n", support_pci_lane_margining);
8450}
8451
8452static DRIVER_ATTR_RO(support_pci_lane_margining);
8453
8016static inline void megasas_remove_scsi_device(struct scsi_device *sdev) 8454static inline void megasas_remove_scsi_device(struct scsi_device *sdev)
8017{ 8455{
8018 sdev_printk(KERN_INFO, sdev, "SCSI device is removed\n"); 8456 sdev_printk(KERN_INFO, sdev, "SCSI device is removed\n");
@@ -8161,7 +8599,7 @@ megasas_aen_polling(struct work_struct *work)
8161 struct megasas_instance *instance = ev->instance; 8599 struct megasas_instance *instance = ev->instance;
8162 union megasas_evt_class_locale class_locale; 8600 union megasas_evt_class_locale class_locale;
8163 int event_type = 0; 8601 int event_type = 0;
8164 u32 seq_num, wait_time = MEGASAS_RESET_WAIT_TIME; 8602 u32 seq_num;
8165 int error; 8603 int error;
8166 u8 dcmd_ret = DCMD_SUCCESS; 8604 u8 dcmd_ret = DCMD_SUCCESS;
8167 8605
@@ -8171,10 +8609,6 @@ megasas_aen_polling(struct work_struct *work)
8171 return; 8609 return;
8172 } 8610 }
8173 8611
8174 /* Adjust event workqueue thread wait time for VF mode */
8175 if (instance->requestorId)
8176 wait_time = MEGASAS_ROUTINE_WAIT_TIME_VF;
8177
8178 /* Don't run the event workqueue thread if OCR is running */ 8612 /* Don't run the event workqueue thread if OCR is running */
8179 mutex_lock(&instance->reset_mutex); 8613 mutex_lock(&instance->reset_mutex);
8180 8614
@@ -8286,6 +8720,7 @@ static int __init megasas_init(void)
8286 support_poll_for_event = 2; 8720 support_poll_for_event = 2;
8287 support_device_change = 1; 8721 support_device_change = 1;
8288 support_nvme_encapsulation = true; 8722 support_nvme_encapsulation = true;
8723 support_pci_lane_margining = true;
8289 8724
8290 memset(&megasas_mgmt_info, 0, sizeof(megasas_mgmt_info)); 8725 memset(&megasas_mgmt_info, 0, sizeof(megasas_mgmt_info));
8291 8726
@@ -8301,6 +8736,8 @@ static int __init megasas_init(void)
8301 8736
8302 megasas_mgmt_majorno = rval; 8737 megasas_mgmt_majorno = rval;
8303 8738
8739 megasas_init_debugfs();
8740
8304 /* 8741 /*
8305 * Register ourselves as PCI hotplug module 8742 * Register ourselves as PCI hotplug module
8306 */ 8743 */
@@ -8340,8 +8777,17 @@ static int __init megasas_init(void)
8340 if (rval) 8777 if (rval)
8341 goto err_dcf_support_nvme_encapsulation; 8778 goto err_dcf_support_nvme_encapsulation;
8342 8779
8780 rval = driver_create_file(&megasas_pci_driver.driver,
8781 &driver_attr_support_pci_lane_margining);
8782 if (rval)
8783 goto err_dcf_support_pci_lane_margining;
8784
8343 return rval; 8785 return rval;
8344 8786
8787err_dcf_support_pci_lane_margining:
8788 driver_remove_file(&megasas_pci_driver.driver,
8789 &driver_attr_support_nvme_encapsulation);
8790
8345err_dcf_support_nvme_encapsulation: 8791err_dcf_support_nvme_encapsulation:
8346 driver_remove_file(&megasas_pci_driver.driver, 8792 driver_remove_file(&megasas_pci_driver.driver,
8347 &driver_attr_support_device_change); 8793 &driver_attr_support_device_change);
@@ -8360,6 +8806,7 @@ err_dcf_rel_date:
8360err_dcf_attr_ver: 8806err_dcf_attr_ver:
8361 pci_unregister_driver(&megasas_pci_driver); 8807 pci_unregister_driver(&megasas_pci_driver);
8362err_pcidrv: 8808err_pcidrv:
8809 megasas_exit_debugfs();
8363 unregister_chrdev(megasas_mgmt_majorno, "megaraid_sas_ioctl"); 8810 unregister_chrdev(megasas_mgmt_majorno, "megaraid_sas_ioctl");
8364 return rval; 8811 return rval;
8365} 8812}
@@ -8380,8 +8827,11 @@ static void __exit megasas_exit(void)
8380 driver_remove_file(&megasas_pci_driver.driver, &driver_attr_version); 8827 driver_remove_file(&megasas_pci_driver.driver, &driver_attr_version);
8381 driver_remove_file(&megasas_pci_driver.driver, 8828 driver_remove_file(&megasas_pci_driver.driver,
8382 &driver_attr_support_nvme_encapsulation); 8829 &driver_attr_support_nvme_encapsulation);
8830 driver_remove_file(&megasas_pci_driver.driver,
8831 &driver_attr_support_pci_lane_margining);
8383 8832
8384 pci_unregister_driver(&megasas_pci_driver); 8833 pci_unregister_driver(&megasas_pci_driver);
8834 megasas_exit_debugfs();
8385 unregister_chrdev(megasas_mgmt_majorno, "megaraid_sas_ioctl"); 8835 unregister_chrdev(megasas_mgmt_majorno, "megaraid_sas_ioctl");
8386} 8836}
8387 8837
diff --git a/drivers/scsi/megaraid/megaraid_sas_debugfs.c b/drivers/scsi/megaraid/megaraid_sas_debugfs.c
new file mode 100644
index 000000000000..c69760775efa
--- /dev/null
+++ b/drivers/scsi/megaraid/megaraid_sas_debugfs.c
@@ -0,0 +1,179 @@
1/*
2 * Linux MegaRAID driver for SAS based RAID controllers
3 *
4 * Copyright (c) 2003-2018 LSI Corporation.
5 * Copyright (c) 2003-2018 Avago Technologies.
6 * Copyright (c) 2003-2018 Broadcom Inc.
7 *
8 * This program is free software; you can redistribute it and/or
9 * modify it under the terms of the GNU General Public License
10 * as published by the Free Software Foundation; either version 2
11 * of the License, or (at your option) any later version.
12 *
13 * This program is distributed in the hope that it will be useful,
14 * but WITHOUT ANY WARRANTY; without even the implied warranty of
15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 * GNU General Public License for more details.
17 *
18 * You should have received a copy of the GNU General Public License
19 * along with this program. If not, see <http://www.gnu.org/licenses/>.
20 *
21 * Authors: Broadcom Inc.
22 * Kashyap Desai <kashyap.desai@broadcom.com>
23 * Sumit Saxena <sumit.saxena@broadcom.com>
24 * Shivasharan S <shivasharan.srikanteshwara@broadcom.com>
25 *
26 * Send feedback to: megaraidlinux.pdl@broadcom.com
27 */
28#include <linux/kernel.h>
29#include <linux/types.h>
30#include <linux/pci.h>
31#include <linux/interrupt.h>
32#include <linux/compat.h>
33#include <linux/irq_poll.h>
34
35#include <scsi/scsi.h>
36#include <scsi/scsi_device.h>
37#include <scsi/scsi_host.h>
38
39#include "megaraid_sas_fusion.h"
40#include "megaraid_sas.h"
41
42#ifdef CONFIG_DEBUG_FS
43#include <linux/debugfs.h>
44
45struct dentry *megasas_debugfs_root;
46
47static ssize_t
48megasas_debugfs_read(struct file *filp, char __user *ubuf, size_t cnt,
49 loff_t *ppos)
50{
51 struct megasas_debugfs_buffer *debug = filp->private_data;
52
53 if (!debug || !debug->buf)
54 return 0;
55
56 return simple_read_from_buffer(ubuf, cnt, ppos, debug->buf, debug->len);
57}
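megasas_debugfs_read delegates to simple_read_from_buffer(), which copies at most cnt bytes from the raid-map buffer starting at *ppos and advances the offset. A userspace model of those semantics (memcpy stands in for copy_to_user; the helper name is ours):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>
#include <sys/types.h>

typedef long long loff_t_model;	/* stand-in for the kernel's loff_t */

/* Model of simple_read_from_buffer(): copy at most cnt bytes from a
 * len-byte buffer starting at *ppos, advance *ppos, return the number
 * of bytes copied (0 at or past EOF). */
static ssize_t model_read(char *ubuf, size_t cnt, loff_t_model *ppos,
			  const char *buf, size_t len)
{
	size_t avail;

	if (*ppos < 0 || (size_t)*ppos >= len)
		return 0;
	avail = len - (size_t)*ppos;
	if (cnt > avail)
		cnt = avail;
	memcpy(ubuf, buf + *ppos, cnt);
	*ppos += (loff_t_model)cnt;
	return (ssize_t)cnt;
}
```

Repeated reads therefore walk the raid map until EOF, which is exactly why the open handler only needs to stash the buffer pointer and length in file->private_data.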
58
59static int
60megasas_debugfs_raidmap_open(struct inode *inode, struct file *file)
61{
62 struct megasas_instance *instance = inode->i_private;
63 struct megasas_debugfs_buffer *debug;
64 struct fusion_context *fusion;
65
66 fusion = instance->ctrl_context;
67
68 debug = kzalloc(sizeof(struct megasas_debugfs_buffer), GFP_KERNEL);
69 if (!debug)
70 return -ENOMEM;
71
72 debug->buf = (void *)fusion->ld_drv_map[(instance->map_id & 1)];
73 debug->len = fusion->drv_map_sz;
74 file->private_data = debug;
75
76 return 0;
77}
78
79static int
80megasas_debugfs_release(struct inode *inode, struct file *file)
81{
82 struct megasas_debugfs_buffer *debug = file->private_data;
83
84 if (!debug)
85 return 0;
86
87 file->private_data = NULL;
88 kfree(debug);
89 return 0;
90}
91
92static const struct file_operations megasas_debugfs_raidmap_fops = {
93 .owner = THIS_MODULE,
94 .open = megasas_debugfs_raidmap_open,
95 .read = megasas_debugfs_read,
96 .release = megasas_debugfs_release,
97};
98
99/*
100 * megasas_init_debugfs : Create debugfs root for megaraid_sas driver
101 */
102void megasas_init_debugfs(void)
103{
104 megasas_debugfs_root = debugfs_create_dir("megaraid_sas", NULL);
105 if (!megasas_debugfs_root)
106 pr_info("Cannot create debugfs root\n");
107}
108
109/*
110 * megasas_exit_debugfs : Remove debugfs root for megaraid_sas driver
111 */
112void megasas_exit_debugfs(void)
113{
114 debugfs_remove_recursive(megasas_debugfs_root);
115}
116
117/*
118 * megasas_setup_debugfs : Setup debugfs per Fusion adapter
119 * instance: Soft instance of adapter
120 */
121void
122megasas_setup_debugfs(struct megasas_instance *instance)
123{
124 char name[64];
125 struct fusion_context *fusion;
126
127 fusion = instance->ctrl_context;
128
129 if (fusion) {
130 snprintf(name, sizeof(name),
131 "scsi_host%d", instance->host->host_no);
132 if (!instance->debugfs_root) {
133 instance->debugfs_root =
134 debugfs_create_dir(name, megasas_debugfs_root);
135 if (!instance->debugfs_root) {
136 dev_err(&instance->pdev->dev,
137 "Cannot create per adapter debugfs directory\n");
138 return;
139 }
140 }
141
142 snprintf(name, sizeof(name), "raidmap_dump");
143 instance->raidmap_dump =
144 debugfs_create_file(name, S_IRUGO,
145 instance->debugfs_root, instance,
146 &megasas_debugfs_raidmap_fops);
147 if (!instance->raidmap_dump) {
148 dev_err(&instance->pdev->dev,
149 "Cannot create raidmap debugfs file\n");
150 debugfs_remove(instance->debugfs_root);
151 return;
152 }
153 }
154
155}
156
157/*
158 * megasas_destroy_debugfs : Destroy debugfs per Fusion adapter
159 * instance: Soft instance of adapter
160 */
161void megasas_destroy_debugfs(struct megasas_instance *instance)
162{
163 debugfs_remove_recursive(instance->debugfs_root);
164}
165
166#else
167void megasas_init_debugfs(void)
168{
169}
170void megasas_exit_debugfs(void)
171{
172}
173void megasas_setup_debugfs(struct megasas_instance *instance)
174{
175}
176void megasas_destroy_debugfs(struct megasas_instance *instance)
177{
178}
179#endif /*CONFIG_DEBUG_FS*/
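megasas_debugfs_read() above hands the whole job to simple_read_from_buffer(), which copies out of an in-memory buffer while honoring the file offset. A rough userspace model of that offset arithmetic (the helper name and signature here are illustrative, not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of simple_read_from_buffer()'s arithmetic:
 * copy at most cnt bytes from buf[*ppos .. len), advance *ppos,
 * and return the number of bytes copied (0 at or past EOF). */
static size_t model_read_from_buffer(char *ubuf, size_t cnt, long *ppos,
				     const char *buf, size_t len)
{
	size_t avail;

	if (*ppos < 0 || (size_t)*ppos >= len)
		return 0;		/* offset at/past end of buffer: EOF */
	avail = len - (size_t)*ppos;
	if (cnt > avail)
		cnt = avail;		/* clamp to what remains */
	memcpy(ubuf, buf + *ppos, cnt);
	*ppos += (long)cnt;
	return cnt;
}
```

This is why a reader can simply `cat` the raidmap_dump file: each read() resumes where the previous one stopped until the buffer is exhausted.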
diff --git a/drivers/scsi/megaraid/megaraid_sas_fp.c b/drivers/scsi/megaraid/megaraid_sas_fp.c
index 12637606c46d..50b8c1b12767 100644
--- a/drivers/scsi/megaraid/megaraid_sas_fp.c
+++ b/drivers/scsi/megaraid/megaraid_sas_fp.c
@@ -33,6 +33,7 @@
33#include <linux/compat.h> 33#include <linux/compat.h>
34#include <linux/blkdev.h> 34#include <linux/blkdev.h>
35#include <linux/poll.h> 35#include <linux/poll.h>
36#include <linux/irq_poll.h>
36 37
37#include <scsi/scsi.h> 38#include <scsi/scsi.h>
38#include <scsi/scsi_cmnd.h> 39#include <scsi/scsi_cmnd.h>
@@ -45,7 +46,7 @@
45 46
46#define LB_PENDING_CMDS_DEFAULT 4 47#define LB_PENDING_CMDS_DEFAULT 4
47static unsigned int lb_pending_cmds = LB_PENDING_CMDS_DEFAULT; 48static unsigned int lb_pending_cmds = LB_PENDING_CMDS_DEFAULT;
48module_param(lb_pending_cmds, int, S_IRUGO); 49module_param(lb_pending_cmds, int, 0444);
49MODULE_PARM_DESC(lb_pending_cmds, "Change raid-1 load balancing outstanding " 50MODULE_PARM_DESC(lb_pending_cmds, "Change raid-1 load balancing outstanding "
50 "threshold. Valid Values are 1-128. Default: 4"); 51 "threshold. Valid Values are 1-128. Default: 4");
51 52
@@ -889,6 +890,77 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
889} 890}
890 891
891/* 892/*
893 * mr_get_phy_params_r56_rmw - Calculate parameters for R56 CTIO write operation
894 * @instance: Adapter soft state
895 * @ld: LD index
896 * @stripNo: Strip Number
897 * @io_info: IO info structure pointer
898 * @pRAID_Context: RAID context pointer
899 * @map: RAID map pointer
900 *
901 * This routine calculates the logical arm, data arm, row number and parity arm
902 * for an R56 CTIO write operation.
903 */
904static void mr_get_phy_params_r56_rmw(struct megasas_instance *instance,
905 u32 ld, u64 stripNo,
906 struct IO_REQUEST_INFO *io_info,
907 struct RAID_CONTEXT_G35 *pRAID_Context,
908 struct MR_DRV_RAID_MAP_ALL *map)
909{
910 struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
911 u8 span, dataArms, arms, dataArm, logArm;
912 s8 rightmostParityArm, PParityArm;
913 u64 rowNum;
914 u64 *pdBlock = &io_info->pdBlock;
915
916 dataArms = raid->rowDataSize;
917 arms = raid->rowSize;
918
919 rowNum = mega_div64_32(stripNo, dataArms);
920 /* parity disk arm, first arm is 0 */
921 rightmostParityArm = (arms - 1) - mega_mod64(rowNum, arms);
922
923 /* logical arm within row */
924 logArm = mega_mod64(stripNo, dataArms);
925 /* physical arm for data */
926 dataArm = mega_mod64((rightmostParityArm + 1 + logArm), arms);
927
928 if (raid->spanDepth == 1) {
929 span = 0;
930 } else {
931 span = (u8)MR_GetSpanBlock(ld, rowNum, pdBlock, map);
932 if (span == SPAN_INVALID)
933 return;
934 }
935
936 if (raid->level == 6) {
937		/* P parity arm; this can go negative, so adjust if it does */
938 PParityArm = (arms - 2) - mega_mod64(rowNum, arms);
939
940 if (PParityArm < 0)
941 PParityArm += arms;
942
943	/* rightmostParityArm is P-Parity for RAID 5 and Q-Parity for RAID 6 */
944 pRAID_Context->flow_specific.r56_arm_map = rightmostParityArm;
945 pRAID_Context->flow_specific.r56_arm_map |=
946 (u16)(PParityArm << RAID_CTX_R56_P_ARM_SHIFT);
947 } else {
948 pRAID_Context->flow_specific.r56_arm_map |=
949 (u16)(rightmostParityArm << RAID_CTX_R56_P_ARM_SHIFT);
950 }
951
952 pRAID_Context->reg_lock_row_lba = cpu_to_le64(rowNum);
953 pRAID_Context->flow_specific.r56_arm_map |=
954 (u16)(logArm << RAID_CTX_R56_LOG_ARM_SHIFT);
955 cpu_to_le16s(&pRAID_Context->flow_specific.r56_arm_map);
956 pRAID_Context->span_arm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | dataArm;
957 pRAID_Context->raid_flags = (MR_RAID_FLAGS_IO_SUB_TYPE_R56_DIV_OFFLOAD <<
958 MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT);
959
960 return;
961}
962
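The arm math in mr_get_phy_params_r56_rmw() is plain modular rotation: the rightmost parity arm (Q on RAID 6) steps backwards one slot per row, P sits one slot to its left with wrap-around, and the data arm is offset past the rightmost parity arm. A standalone sketch of that arithmetic, with hypothetical names and none of the span or little-endian handling:

```c
#include <assert.h>
#include <stdint.h>

/* Rotating-parity layout used for the R5/6 write offload: given the
 * row number, total arm count, and a logical arm within the row,
 * compute the rightmost parity arm (Q for RAID 6), the P arm
 * (wrapped if negative, as the driver does), and the physical
 * data arm. */
static void r56_arm_layout(uint64_t row, unsigned int arms,
			   unsigned int log_arm,
			   int *rightmost_parity, int *p_parity,
			   unsigned int *data_arm)
{
	*rightmost_parity = (int)(arms - 1) - (int)(row % arms);
	*p_parity = (int)(arms - 2) - (int)(row % arms);
	if (*p_parity < 0)
		*p_parity += (int)arms;	/* wrap around the stripe */
	*data_arm = ((unsigned int)*rightmost_parity + 1 + log_arm) % arms;
}
```

For a 4-arm RAID 6 stripe, row 0 puts Q on arm 3 and P on arm 2; each subsequent row rotates both one arm to the left.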
963/*
892****************************************************************************** 964******************************************************************************
893* 965*
894* MR_BuildRaidContext function 966* MR_BuildRaidContext function
@@ -954,6 +1026,7 @@ MR_BuildRaidContext(struct megasas_instance *instance,
954 stripSize = 1 << raid->stripeShift; 1026 stripSize = 1 << raid->stripeShift;
955 stripe_mask = stripSize-1; 1027 stripe_mask = stripSize-1;
956 1028
1029 io_info->data_arms = raid->rowDataSize;
957 1030
958 /* 1031 /*
959 * calculate starting row and stripe, and number of strips and rows 1032 * calculate starting row and stripe, and number of strips and rows
@@ -1095,6 +1168,13 @@ MR_BuildRaidContext(struct megasas_instance *instance,
1095 /* save pointer to raid->LUN array */ 1168 /* save pointer to raid->LUN array */
1096 *raidLUN = raid->LUN; 1169 *raidLUN = raid->LUN;
1097 1170
1171 /* Aero R5/6 Division Offload for WRITE */
1172 if (fusion->r56_div_offload && (raid->level >= 5) && !isRead) {
1173 mr_get_phy_params_r56_rmw(instance, ld, start_strip, io_info,
1174 (struct RAID_CONTEXT_G35 *)pRAID_Context,
1175 map);
1176 return true;
1177 }
1098 1178
1099 /*Get Phy Params only if FP capable, or else leave it to MR firmware 1179 /*Get Phy Params only if FP capable, or else leave it to MR firmware
1100 to do the calculation.*/ 1180 to do the calculation.*/
diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
index 4dfa0685a86c..a32b3f0fcd15 100644
--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
+++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
@@ -35,6 +35,7 @@
35#include <linux/poll.h> 35#include <linux/poll.h>
36#include <linux/vmalloc.h> 36#include <linux/vmalloc.h>
37#include <linux/workqueue.h> 37#include <linux/workqueue.h>
38#include <linux/irq_poll.h>
38 39
39#include <scsi/scsi.h> 40#include <scsi/scsi.h>
40#include <scsi/scsi_cmnd.h> 41#include <scsi/scsi_cmnd.h>
@@ -87,6 +88,62 @@ extern u32 megasas_readl(struct megasas_instance *instance,
87 const volatile void __iomem *addr); 88 const volatile void __iomem *addr);
88 89
89/** 90/**
91 * megasas_adp_reset_wait_for_ready - initiate chip reset and wait for
92 * controller to come to ready state
93 * @instance: adapter's soft state
94 * @do_adp_reset: if true, do a chip reset
95 * @ocr_context: if called from OCR context this will
96 * be set to 1, else 0
97 *
98 * This function initiates a chip reset followed by a wait for the controller
99 * to transition to the ready state.
100 * While this is in progress, the driver blocks all userspace access to PCI config space.
101 */
102int
103megasas_adp_reset_wait_for_ready(struct megasas_instance *instance,
104 bool do_adp_reset,
105 int ocr_context)
106{
107 int ret = FAILED;
108
109 /*
110 * Block access to PCI config space from userspace
111 * when diag reset is initiated from driver
112 */
113 if (megasas_dbg_lvl & OCR_DEBUG)
114 dev_info(&instance->pdev->dev,
115 "Block access to PCI config space %s %d\n",
116 __func__, __LINE__);
117
118 pci_cfg_access_lock(instance->pdev);
119
120 if (do_adp_reset) {
121 if (instance->instancet->adp_reset
122 (instance, instance->reg_set))
123 goto out;
124 }
125
126 /* Wait for FW to become ready */
127 if (megasas_transition_to_ready(instance, ocr_context)) {
128 dev_warn(&instance->pdev->dev,
129 "Failed to transition controller to ready for scsi%d.\n",
130 instance->host->host_no);
131 goto out;
132 }
133
134 ret = SUCCESS;
135out:
136 if (megasas_dbg_lvl & OCR_DEBUG)
137 dev_info(&instance->pdev->dev,
138 "Unlock access to PCI config space %s %d\n",
139 __func__, __LINE__);
140
141 pci_cfg_access_unlock(instance->pdev);
142
143 return ret;
144}
145
146/**
90 * megasas_check_same_4gb_region - check if allocation 147 * megasas_check_same_4gb_region - check if allocation
91 * crosses same 4GB boundary or not 148 * crosses same 4GB boundary or not
92 * @instance - adapter's soft instance 149 * @instance - adapter's soft instance
@@ -133,7 +190,8 @@ megasas_enable_intr_fusion(struct megasas_instance *instance)
133 writel(~MFI_FUSION_ENABLE_INTERRUPT_MASK, &(regs)->outbound_intr_mask); 190 writel(~MFI_FUSION_ENABLE_INTERRUPT_MASK, &(regs)->outbound_intr_mask);
134 191
135 /* Dummy readl to force pci flush */ 192 /* Dummy readl to force pci flush */
136 readl(&regs->outbound_intr_mask); 193 dev_info(&instance->pdev->dev, "%s is called outbound_intr_mask:0x%08x\n",
194 __func__, readl(&regs->outbound_intr_mask));
137} 195}
138 196
139/** 197/**
@@ -144,14 +202,14 @@ void
144megasas_disable_intr_fusion(struct megasas_instance *instance) 202megasas_disable_intr_fusion(struct megasas_instance *instance)
145{ 203{
146 u32 mask = 0xFFFFFFFF; 204 u32 mask = 0xFFFFFFFF;
147 u32 status;
148 struct megasas_register_set __iomem *regs; 205 struct megasas_register_set __iomem *regs;
149 regs = instance->reg_set; 206 regs = instance->reg_set;
150 instance->mask_interrupts = 1; 207 instance->mask_interrupts = 1;
151 208
152 writel(mask, &regs->outbound_intr_mask); 209 writel(mask, &regs->outbound_intr_mask);
153 /* Dummy readl to force pci flush */ 210 /* Dummy readl to force pci flush */
154 status = readl(&regs->outbound_intr_mask); 211 dev_info(&instance->pdev->dev, "%s is called outbound_intr_mask:0x%08x\n",
212 __func__, readl(&regs->outbound_intr_mask));
155} 213}
156 214
157int 215int
@@ -207,21 +265,17 @@ inline void megasas_return_cmd_fusion(struct megasas_instance *instance,
207} 265}
208 266
209/** 267/**
210 * megasas_fire_cmd_fusion - Sends command to the FW 268 * megasas_write_64bit_req_desc - PCI writes 64bit request descriptor
211 * @instance: Adapter soft state 269 * @instance: Adapter soft state
212 * @req_desc: 64bit Request descriptor 270 * @req_desc: 64bit Request descriptor
213 *
214 * Perform PCI Write.
215 */ 271 */
216
217static void 272static void
218megasas_fire_cmd_fusion(struct megasas_instance *instance, 273megasas_write_64bit_req_desc(struct megasas_instance *instance,
219 union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc) 274 union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc)
220{ 275{
221#if defined(writeq) && defined(CONFIG_64BIT) 276#if defined(writeq) && defined(CONFIG_64BIT)
222 u64 req_data = (((u64)le32_to_cpu(req_desc->u.high) << 32) | 277 u64 req_data = (((u64)le32_to_cpu(req_desc->u.high) << 32) |
223 le32_to_cpu(req_desc->u.low)); 278 le32_to_cpu(req_desc->u.low));
224
225 writeq(req_data, &instance->reg_set->inbound_low_queue_port); 279 writeq(req_data, &instance->reg_set->inbound_low_queue_port);
226#else 280#else
227 unsigned long flags; 281 unsigned long flags;
@@ -235,6 +289,25 @@ megasas_fire_cmd_fusion(struct megasas_instance *instance,
235} 289}
236 290
237/** 291/**
292 * megasas_fire_cmd_fusion - Sends command to the FW
293 * @instance: Adapter soft state
294 * @req_desc: 32bit or 64bit Request descriptor
295 *
296 * Perform PCI Write. AERO SERIES supports 32-bit descriptors;
297 * controllers prior to AERO_SERIES use 64-bit descriptors.
298 */
299static void
300megasas_fire_cmd_fusion(struct megasas_instance *instance,
301 union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc)
302{
303 if (instance->atomic_desc_support)
304 writel(le32_to_cpu(req_desc->u.low),
305 &instance->reg_set->inbound_single_queue_port);
306 else
307 megasas_write_64bit_req_desc(instance, req_desc);
308}
309
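megasas_write_64bit_req_desc() packs the descriptor's two 32-bit words into one u64 before the writeq(). A minimal sketch of that packing (the le32_to_cpu byte-swapping and MMIO write are omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Combine the two halves of a 64-bit request descriptor exactly as
 * the driver does before writeq(): high word into bits 63:32, low
 * word into bits 31:0. */
static uint64_t combine_req_desc(uint32_t high, uint32_t low)
{
	return ((uint64_t)high << 32) | low;
}
```

The atomic-descriptor path in megasas_fire_cmd_fusion() skips this entirely and writes only the low 32 bits to the single queue port.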
310/**
238 * megasas_fusion_update_can_queue - Do all Adapter Queue depth related calculations here 311 * megasas_fusion_update_can_queue - Do all Adapter Queue depth related calculations here
239 * @instance: Adapter soft state 312 * @instance: Adapter soft state
240 * fw_boot_context: Whether this function called during probe or after OCR 313 * fw_boot_context: Whether this function called during probe or after OCR
@@ -924,6 +997,7 @@ wait_and_poll(struct megasas_instance *instance, struct megasas_cmd *cmd,
924{ 997{
925 int i; 998 int i;
926 struct megasas_header *frame_hdr = &cmd->frame->hdr; 999 struct megasas_header *frame_hdr = &cmd->frame->hdr;
1000 u32 status_reg;
927 1001
928 u32 msecs = seconds * 1000; 1002 u32 msecs = seconds * 1000;
929 1003
@@ -933,6 +1007,12 @@ wait_and_poll(struct megasas_instance *instance, struct megasas_cmd *cmd,
933 for (i = 0; (i < msecs) && (frame_hdr->cmd_status == 0xff); i += 20) { 1007 for (i = 0; (i < msecs) && (frame_hdr->cmd_status == 0xff); i += 20) {
934 rmb(); 1008 rmb();
935 msleep(20); 1009 msleep(20);
1010 if (!(i % 5000)) {
1011 status_reg = instance->instancet->read_fw_status_reg(instance)
1012 & MFI_STATE_MASK;
1013 if (status_reg == MFI_STATE_FAULT)
1014 break;
1015 }
936 } 1016 }
937 1017
938 if (frame_hdr->cmd_status == MFI_STAT_INVALID_STATUS) 1018 if (frame_hdr->cmd_status == MFI_STAT_INVALID_STATUS)
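The new fault check in wait_and_poll() fires only when the millisecond counter i, which advances in 20 ms steps, hits a multiple of 5000 — roughly once every 5 seconds rather than on every iteration. A small model of that cadence (names are illustrative):

```c
#include <assert.h>

/* Count how many times a poll loop that advances its millisecond
 * counter by step_ms would hit the "check fault register" branch,
 * which runs only when the counter is a multiple of check_every. */
static int fault_checks_in(int total_ms, int step_ms, int check_every)
{
	int i, checks = 0;

	for (i = 0; i < total_ms; i += step_ms)
		if (!(i % check_every))	/* same test as the driver's !(i % 5000) */
			checks++;
	return checks;
}
```

So a 10-second poll with the driver's constants performs the fault-register read only twice (at 0 s and 5 s), keeping the MMIO traffic in the hot loop low.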
@@ -966,6 +1046,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
966 u32 scratch_pad_1; 1046 u32 scratch_pad_1;
967 ktime_t time; 1047 ktime_t time;
968 bool cur_fw_64bit_dma_capable; 1048 bool cur_fw_64bit_dma_capable;
1049 bool cur_intr_coalescing;
969 1050
970 fusion = instance->ctrl_context; 1051 fusion = instance->ctrl_context;
971 1052
@@ -999,6 +1080,16 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
999 goto fail_fw_init; 1080 goto fail_fw_init;
1000 } 1081 }
1001 1082
1083 cur_intr_coalescing = (scratch_pad_1 & MR_INTR_COALESCING_SUPPORT_OFFSET) ?
1084 true : false;
1085
1086 if ((instance->low_latency_index_start ==
1087 MR_HIGH_IOPS_QUEUE_COUNT) && cur_intr_coalescing)
1088 instance->perf_mode = MR_BALANCED_PERF_MODE;
1089
1090 dev_info(&instance->pdev->dev, "Performance mode :%s\n",
1091 MEGASAS_PERF_MODE_2STR(instance->perf_mode));
1092
1002 instance->fw_sync_cache_support = (scratch_pad_1 & 1093 instance->fw_sync_cache_support = (scratch_pad_1 &
1003 MR_CAN_HANDLE_SYNC_CACHE_OFFSET) ? 1 : 0; 1094 MR_CAN_HANDLE_SYNC_CACHE_OFFSET) ? 1 : 0;
1004 dev_info(&instance->pdev->dev, "FW supports sync cache\t: %s\n", 1095 dev_info(&instance->pdev->dev, "FW supports sync cache\t: %s\n",
@@ -1083,6 +1174,22 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
1083 cpu_to_le32(lower_32_bits(ioc_init_handle)); 1174 cpu_to_le32(lower_32_bits(ioc_init_handle));
1084 init_frame->data_xfer_len = cpu_to_le32(sizeof(struct MPI2_IOC_INIT_REQUEST)); 1175 init_frame->data_xfer_len = cpu_to_le32(sizeof(struct MPI2_IOC_INIT_REQUEST));
1085 1176
1177 /*
1178 * Each bit in replyqueue_mask represents one group of MSI-x vectors
1179 * (each group has 8 vectors)
1180 */
1181 switch (instance->perf_mode) {
1182 case MR_BALANCED_PERF_MODE:
1183 init_frame->replyqueue_mask =
1184 cpu_to_le16(~(~0 << instance->low_latency_index_start/8));
1185 break;
1186 case MR_IOPS_PERF_MODE:
1187 init_frame->replyqueue_mask =
1188 cpu_to_le16(~(~0 << instance->msix_vectors/8));
1189 break;
1190 }
1191
1192
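The replyqueue_mask expression ~(~0 << n/8) sets one bit per full group of 8 MSI-x vectors. A standalone version of the bit math (the function name is illustrative); note that fewer than 8 vectors yields an empty mask, a consequence of the integer division:

```c
#include <assert.h>
#include <stdint.h>

/* One mask bit per group of 8 MSI-x vectors: the low (vectors / 8)
 * bits are set, everything above is clear. */
static uint16_t replyqueue_mask(unsigned int vectors)
{
	return (uint16_t)~(~0u << (vectors / 8));
}
```

In balanced mode the driver masks only the low-latency groups; in IOPS mode every vector group is covered.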
1086 req_desc.u.low = cpu_to_le32(lower_32_bits(cmd->frame_phys_addr)); 1193 req_desc.u.low = cpu_to_le32(lower_32_bits(cmd->frame_phys_addr));
1087 req_desc.u.high = cpu_to_le32(upper_32_bits(cmd->frame_phys_addr)); 1194 req_desc.u.high = cpu_to_le32(upper_32_bits(cmd->frame_phys_addr));
1088 req_desc.MFAIo.RequestFlags = 1195 req_desc.MFAIo.RequestFlags =
@@ -1101,7 +1208,8 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
1101 break; 1208 break;
1102 } 1209 }
1103 1210
1104 megasas_fire_cmd_fusion(instance, &req_desc); 1211 /* For AERO also, IOC_INIT requires 64 bit descriptor write */
1212 megasas_write_64bit_req_desc(instance, &req_desc);
1105 1213
1106 wait_and_poll(instance, cmd, MFI_IO_TIMEOUT_SECS); 1214 wait_and_poll(instance, cmd, MFI_IO_TIMEOUT_SECS);
1107 1215
@@ -1111,6 +1219,17 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
1111 goto fail_fw_init; 1219 goto fail_fw_init;
1112 } 1220 }
1113 1221
1222 if (instance->adapter_type >= AERO_SERIES) {
1223 scratch_pad_1 = megasas_readl
1224 (instance, &instance->reg_set->outbound_scratch_pad_1);
1225
1226 instance->atomic_desc_support =
1227 (scratch_pad_1 & MR_ATOMIC_DESCRIPTOR_SUPPORT_OFFSET) ? 1 : 0;
1228
1229 dev_info(&instance->pdev->dev, "FW supports atomic descriptor\t: %s\n",
1230 instance->atomic_desc_support ? "Yes" : "No");
1231 }
1232
1114 return 0; 1233 return 0;
1115 1234
1116fail_fw_init: 1235fail_fw_init:
@@ -1133,7 +1252,7 @@ fail_fw_init:
1133int 1252int
1134megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend) { 1253megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend) {
1135 int ret = 0; 1254 int ret = 0;
1136 u32 pd_seq_map_sz; 1255 size_t pd_seq_map_sz;
1137 struct megasas_cmd *cmd; 1256 struct megasas_cmd *cmd;
1138 struct megasas_dcmd_frame *dcmd; 1257 struct megasas_dcmd_frame *dcmd;
1139 struct fusion_context *fusion = instance->ctrl_context; 1258 struct fusion_context *fusion = instance->ctrl_context;
@@ -1142,9 +1261,7 @@ megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend) {
1142 1261
1143 pd_sync = (void *)fusion->pd_seq_sync[(instance->pd_seq_map_id & 1)]; 1262 pd_sync = (void *)fusion->pd_seq_sync[(instance->pd_seq_map_id & 1)];
1144 pd_seq_h = fusion->pd_seq_phys[(instance->pd_seq_map_id & 1)]; 1263 pd_seq_h = fusion->pd_seq_phys[(instance->pd_seq_map_id & 1)];
1145 pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) + 1264 pd_seq_map_sz = struct_size(pd_sync, seq, MAX_PHYSICAL_DEVICES - 1);
1146 (sizeof(struct MR_PD_CFG_SEQ) *
1147 (MAX_PHYSICAL_DEVICES - 1));
1148 1265
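The struct_size() conversion above computes the same value as the open-coded sizeof arithmetic it replaces (plus overflow checking, which is not modeled here). A sketch of the equivalence, using hypothetical stand-in types for MR_PD_CFG_SEQ_NUM_SYNC and MR_PD_CFG_SEQ:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for MR_PD_CFG_SEQ: per-device sequence entry. */
struct pd_seq {
	unsigned short seq_num;
	unsigned short dev_handle;
};

/* Stand-in for MR_PD_CFG_SEQ_NUM_SYNC: header plus an old-style
 * 1-element trailing array, sized at runtime for all devices. */
struct pd_seq_sync {
	unsigned int count;
	struct pd_seq seq[1];
};

/* Equivalent of struct_size(pd_sync, seq, max_devices - 1):
 * the struct already carries one array element, so only
 * (max_devices - 1) extra entries are appended. */
static size_t pd_seq_map_sz(size_t max_devices)
{
	return sizeof(struct pd_seq_sync) +
	       sizeof(struct pd_seq) * (max_devices - 1);
}
```

Switching to struct_size() (and size_t) keeps the computation from silently wrapping on a 32-bit multiply.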
1149 cmd = megasas_get_cmd(instance); 1266 cmd = megasas_get_cmd(instance);
1150 if (!cmd) { 1267 if (!cmd) {
@@ -1625,6 +1742,7 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
1625 struct fusion_context *fusion; 1742 struct fusion_context *fusion;
1626 u32 scratch_pad_1; 1743 u32 scratch_pad_1;
1627 int i = 0, count; 1744 int i = 0, count;
1745 u32 status_reg;
1628 1746
1629 fusion = instance->ctrl_context; 1747 fusion = instance->ctrl_context;
1630 1748
@@ -1707,8 +1825,21 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
1707 if (megasas_alloc_cmds_fusion(instance)) 1825 if (megasas_alloc_cmds_fusion(instance))
1708 goto fail_alloc_cmds; 1826 goto fail_alloc_cmds;
1709 1827
1710 if (megasas_ioc_init_fusion(instance)) 1828 if (megasas_ioc_init_fusion(instance)) {
1711 goto fail_ioc_init; 1829 status_reg = instance->instancet->read_fw_status_reg(instance);
1830 if (((status_reg & MFI_STATE_MASK) == MFI_STATE_FAULT) &&
1831 (status_reg & MFI_RESET_ADAPTER)) {
1832 /* Do a chip reset and then retry IOC INIT once */
1833 if (megasas_adp_reset_wait_for_ready
1834 (instance, true, 0) == FAILED)
1835 goto fail_ioc_init;
1836
1837 if (megasas_ioc_init_fusion(instance))
1838 goto fail_ioc_init;
1839 } else {
1840 goto fail_ioc_init;
1841 }
1842 }
1712 1843
1713 megasas_display_intel_branding(instance); 1844 megasas_display_intel_branding(instance);
1714 if (megasas_get_ctrl_info(instance)) { 1845 if (megasas_get_ctrl_info(instance)) {
@@ -1720,6 +1851,7 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
1720 1851
1721 instance->flag_ieee = 1; 1852 instance->flag_ieee = 1;
1722 instance->r1_ldio_hint_default = MR_R1_LDIO_PIGGYBACK_DEFAULT; 1853 instance->r1_ldio_hint_default = MR_R1_LDIO_PIGGYBACK_DEFAULT;
1854 instance->threshold_reply_count = instance->max_fw_cmds / 4;
1723 fusion->fast_path_io = 0; 1855 fusion->fast_path_io = 0;
1724 1856
1725 if (megasas_allocate_raid_maps(instance)) 1857 if (megasas_allocate_raid_maps(instance))
@@ -1970,7 +2102,6 @@ megasas_is_prp_possible(struct megasas_instance *instance,
1970 mega_mod64(sg_dma_address(sg_scmd), 2102 mega_mod64(sg_dma_address(sg_scmd),
1971 mr_nvme_pg_size)) { 2103 mr_nvme_pg_size)) {
1972 build_prp = false; 2104 build_prp = false;
1973 atomic_inc(&instance->sge_holes_type1);
1974 break; 2105 break;
1975 } 2106 }
1976 } 2107 }
@@ -1980,7 +2111,6 @@ megasas_is_prp_possible(struct megasas_instance *instance,
1980 sg_dma_len(sg_scmd)), 2111 sg_dma_len(sg_scmd)),
1981 mr_nvme_pg_size))) { 2112 mr_nvme_pg_size))) {
1982 build_prp = false; 2113 build_prp = false;
1983 atomic_inc(&instance->sge_holes_type2);
1984 break; 2114 break;
1985 } 2115 }
1986 } 2116 }
@@ -1989,7 +2119,6 @@ megasas_is_prp_possible(struct megasas_instance *instance,
1989 if (mega_mod64(sg_dma_address(sg_scmd), 2119 if (mega_mod64(sg_dma_address(sg_scmd),
1990 mr_nvme_pg_size)) { 2120 mr_nvme_pg_size)) {
1991 build_prp = false; 2121 build_prp = false;
1992 atomic_inc(&instance->sge_holes_type3);
1993 break; 2122 break;
1994 } 2123 }
1995 } 2124 }
@@ -2122,7 +2251,6 @@ megasas_make_prp_nvme(struct megasas_instance *instance, struct scsi_cmnd *scmd,
2122 main_chain_element->Length = 2251 main_chain_element->Length =
2123 cpu_to_le32(num_prp_in_chain * sizeof(u64)); 2252 cpu_to_le32(num_prp_in_chain * sizeof(u64));
2124 2253
2125 atomic_inc(&instance->prp_sgl);
2126 return build_prp; 2254 return build_prp;
2127} 2255}
2128 2256
@@ -2197,7 +2325,6 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
2197 memset(sgl_ptr, 0, instance->max_chain_frame_sz); 2325 memset(sgl_ptr, 0, instance->max_chain_frame_sz);
2198 } 2326 }
2199 } 2327 }
2200 atomic_inc(&instance->ieee_sgl);
2201} 2328}
2202 2329
2203/** 2330/**
@@ -2509,9 +2636,10 @@ static void megasas_stream_detect(struct megasas_instance *instance,
2509 * 2636 *
2510 */ 2637 */
2511static void 2638static void
2512megasas_set_raidflag_cpu_affinity(union RAID_CONTEXT_UNION *praid_context, 2639megasas_set_raidflag_cpu_affinity(struct fusion_context *fusion,
2513 struct MR_LD_RAID *raid, bool fp_possible, 2640 union RAID_CONTEXT_UNION *praid_context,
2514 u8 is_read, u32 scsi_buff_len) 2641 struct MR_LD_RAID *raid, bool fp_possible,
2642 u8 is_read, u32 scsi_buff_len)
2515{ 2643{
2516 u8 cpu_sel = MR_RAID_CTX_CPUSEL_0; 2644 u8 cpu_sel = MR_RAID_CTX_CPUSEL_0;
2517 struct RAID_CONTEXT_G35 *rctx_g35; 2645 struct RAID_CONTEXT_G35 *rctx_g35;
@@ -2569,11 +2697,11 @@ megasas_set_raidflag_cpu_affinity(union RAID_CONTEXT_UNION *praid_context,
2569 * vs MR_RAID_FLAGS_IO_SUB_TYPE_CACHE_BYPASS. 2697 * vs MR_RAID_FLAGS_IO_SUB_TYPE_CACHE_BYPASS.
2570 * IO Subtype is not bitmap. 2698 * IO Subtype is not bitmap.
2571 */ 2699 */
2572 if ((raid->level == 1) && (!is_read)) { 2700 if ((fusion->pcie_bw_limitation) && (raid->level == 1) && (!is_read) &&
2573 if (scsi_buff_len > MR_LARGE_IO_MIN_SIZE) 2701 (scsi_buff_len > MR_LARGE_IO_MIN_SIZE)) {
2574 praid_context->raid_context_g35.raid_flags = 2702 praid_context->raid_context_g35.raid_flags =
2575 (MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT 2703 (MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT
2576 << MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT); 2704 << MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT);
2577 } 2705 }
2578} 2706}
2579 2707
@@ -2679,6 +2807,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
2679 io_info.r1_alt_dev_handle = MR_DEVHANDLE_INVALID; 2807 io_info.r1_alt_dev_handle = MR_DEVHANDLE_INVALID;
2680 scsi_buff_len = scsi_bufflen(scp); 2808 scsi_buff_len = scsi_bufflen(scp);
2681 io_request->DataLength = cpu_to_le32(scsi_buff_len); 2809 io_request->DataLength = cpu_to_le32(scsi_buff_len);
2810 io_info.data_arms = 1;
2682 2811
2683 if (scp->sc_data_direction == DMA_FROM_DEVICE) 2812 if (scp->sc_data_direction == DMA_FROM_DEVICE)
2684 io_info.isRead = 1; 2813 io_info.isRead = 1;
@@ -2698,8 +2827,19 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
2698 fp_possible = (io_info.fpOkForIo > 0) ? true : false; 2827 fp_possible = (io_info.fpOkForIo > 0) ? true : false;
2699 } 2828 }
2700 2829
2701 cmd->request_desc->SCSIIO.MSIxIndex = 2830 if ((instance->perf_mode == MR_BALANCED_PERF_MODE) &&
2702 instance->reply_map[raw_smp_processor_id()]; 2831 atomic_read(&scp->device->device_busy) >
2832 (io_info.data_arms * MR_DEVICE_HIGH_IOPS_DEPTH))
2833 cmd->request_desc->SCSIIO.MSIxIndex =
2834 mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) /
2835 MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start);
2836 else if (instance->msix_load_balance)
2837 cmd->request_desc->SCSIIO.MSIxIndex =
2838 (mega_mod64(atomic64_add_return(1, &instance->total_io_count),
2839 instance->msix_vectors));
2840 else
2841 cmd->request_desc->SCSIIO.MSIxIndex =
2842 instance->reply_map[raw_smp_processor_id()];
2703 2843
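In balanced performance mode the MSI-x index comes from an atomically bumped I/O counter: dividing by the batch count keeps a burst of consecutive commands on one reply queue before the pick rotates to the next low-latency queue. A standalone model of that selection (BATCH_COUNT here is a stand-in for the driver's MR_HIGH_IOPS_BATCH_COUNT, and the atomic increment is reduced to a plain counter argument):

```c
#include <assert.h>
#include <stdint.h>

#define BATCH_COUNT 16	/* stand-in for MR_HIGH_IOPS_BATCH_COUNT */

/* Batched round-robin queue pick: commands 0..15 land on queue 0,
 * 16..31 on queue 1, and so on, wrapping over nr_queues. */
static unsigned int pick_msix_index(uint64_t io_counter,
				    unsigned int nr_queues)
{
	return (unsigned int)((io_counter / BATCH_COUNT) % nr_queues);
}
```

Batching amortizes the cache-line bounce of the shared counter: a whole batch completes on one vector before the next queue is touched.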
2704 if (instance->adapter_type >= VENTURA_SERIES) { 2844 if (instance->adapter_type >= VENTURA_SERIES) {
2705 /* FP for Optimal raid level 1. 2845 /* FP for Optimal raid level 1.
@@ -2717,8 +2857,9 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
2717 (instance->host->can_queue)) { 2857 (instance->host->can_queue)) {
2718 fp_possible = false; 2858 fp_possible = false;
2719 atomic_dec(&instance->fw_outstanding); 2859 atomic_dec(&instance->fw_outstanding);
2720 } else if ((scsi_buff_len > MR_LARGE_IO_MIN_SIZE) || 2860 } else if (fusion->pcie_bw_limitation &&
2721 (atomic_dec_if_positive(&mrdev_priv->r1_ldio_hint) > 0)) { 2861 ((scsi_buff_len > MR_LARGE_IO_MIN_SIZE) ||
2862 (atomic_dec_if_positive(&mrdev_priv->r1_ldio_hint) > 0))) {
2722 fp_possible = false; 2863 fp_possible = false;
2723 atomic_dec(&instance->fw_outstanding); 2864 atomic_dec(&instance->fw_outstanding);
2724 if (scsi_buff_len > MR_LARGE_IO_MIN_SIZE) 2865 if (scsi_buff_len > MR_LARGE_IO_MIN_SIZE)
@@ -2743,7 +2884,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
2743 2884
2744 /* If raid is NULL, set CPU affinity to default CPU0 */ 2885 /* If raid is NULL, set CPU affinity to default CPU0 */
2745 if (raid) 2886 if (raid)
2746 megasas_set_raidflag_cpu_affinity(&io_request->RaidContext, 2887 megasas_set_raidflag_cpu_affinity(fusion, &io_request->RaidContext,
2747 raid, fp_possible, io_info.isRead, 2888 raid, fp_possible, io_info.isRead,
2748 scsi_buff_len); 2889 scsi_buff_len);
2749 else 2890 else
@@ -2759,10 +2900,6 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
2759 (MPI2_REQ_DESCRIPT_FLAGS_FP_IO 2900 (MPI2_REQ_DESCRIPT_FLAGS_FP_IO
2760 << MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); 2901 << MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
2761 if (instance->adapter_type == INVADER_SERIES) { 2902 if (instance->adapter_type == INVADER_SERIES) {
2762 if (rctx->reg_lock_flags == REGION_TYPE_UNUSED)
2763 cmd->request_desc->SCSIIO.RequestFlags =
2764 (MEGASAS_REQ_DESCRIPT_FLAGS_NO_LOCK <<
2765 MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
2766 rctx->type = MPI2_TYPE_CUDA; 2903 rctx->type = MPI2_TYPE_CUDA;
2767 rctx->nseg = 0x1; 2904 rctx->nseg = 0x1;
2768 io_request->IoFlags |= cpu_to_le16(MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH); 2905 io_request->IoFlags |= cpu_to_le16(MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH);
@@ -2970,50 +3107,71 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
2970 << MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT; 3107 << MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT;
2971 3108
2972 /* If FW supports PD sequence number */ 3109 /* If FW supports PD sequence number */
2973 if (instance->use_seqnum_jbod_fp && 3110 if (instance->support_seqnum_jbod_fp) {
2974 instance->pd_list[pd_index].driveType == TYPE_DISK) { 3111 if (instance->use_seqnum_jbod_fp &&
2975 /* TgtId must be incremented by 255 as jbod seq number is index 3112 instance->pd_list[pd_index].driveType == TYPE_DISK) {
2976 * below raid map 3113
2977 */ 3114 /* More than 256 PD/JBOD support for Ventura */
2978 /* More than 256 PD/JBOD support for Ventura */ 3115 if (instance->support_morethan256jbod)
2979 if (instance->support_morethan256jbod) 3116 pRAID_Context->virtual_disk_tgt_id =
2980 pRAID_Context->virtual_disk_tgt_id = 3117 pd_sync->seq[pd_index].pd_target_id;
2981 pd_sync->seq[pd_index].pd_target_id; 3118 else
2982 else 3119 pRAID_Context->virtual_disk_tgt_id =
2983 pRAID_Context->virtual_disk_tgt_id = 3120 cpu_to_le16(device_id +
2984 cpu_to_le16(device_id + (MAX_PHYSICAL_DEVICES - 1)); 3121 (MAX_PHYSICAL_DEVICES - 1));
2985 pRAID_Context->config_seq_num = pd_sync->seq[pd_index].seqNum; 3122 pRAID_Context->config_seq_num =
2986 io_request->DevHandle = pd_sync->seq[pd_index].devHandle; 3123 pd_sync->seq[pd_index].seqNum;
-			if (instance->adapter_type >= VENTURA_SERIES) {
-				io_request->RaidContext.raid_context_g35.routing_flags |=
-					(1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT);
-				io_request->RaidContext.raid_context_g35.nseg_type |=
-					(1 << RAID_CONTEXT_NSEG_SHIFT);
-				io_request->RaidContext.raid_context_g35.nseg_type |=
-					(MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT);
-			} else {
-				pRAID_Context->type = MPI2_TYPE_CUDA;
-				pRAID_Context->nseg = 0x1;
-				pRAID_Context->reg_lock_flags |=
-					(MR_RL_FLAGS_SEQ_NUM_ENABLE|MR_RL_FLAGS_GRANT_DESTINATION_CUDA);
-			}
-	} else if (fusion->fast_path_io) {
-		pRAID_Context->virtual_disk_tgt_id = cpu_to_le16(device_id);
-		pRAID_Context->config_seq_num = 0;
-		local_map_ptr = fusion->ld_drv_map[(instance->map_id & 1)];
-		io_request->DevHandle =
-			local_map_ptr->raidMap.devHndlInfo[device_id].curDevHdl;
+			io_request->DevHandle =
+				pd_sync->seq[pd_index].devHandle;
+			if (instance->adapter_type >= VENTURA_SERIES) {
+				io_request->RaidContext.raid_context_g35.routing_flags |=
+					(1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT);
+				io_request->RaidContext.raid_context_g35.nseg_type |=
+					(1 << RAID_CONTEXT_NSEG_SHIFT);
+				io_request->RaidContext.raid_context_g35.nseg_type |=
+					(MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT);
+			} else {
+				pRAID_Context->type = MPI2_TYPE_CUDA;
+				pRAID_Context->nseg = 0x1;
+				pRAID_Context->reg_lock_flags |=
+					(MR_RL_FLAGS_SEQ_NUM_ENABLE |
+					 MR_RL_FLAGS_GRANT_DESTINATION_CUDA);
+			}
+		} else {
+			pRAID_Context->virtual_disk_tgt_id =
+				cpu_to_le16(device_id +
+					    (MAX_PHYSICAL_DEVICES - 1));
+			pRAID_Context->config_seq_num = 0;
+			io_request->DevHandle = cpu_to_le16(0xFFFF);
+		}
 	} else {
-		/* Want to send all IO via FW path */
 		pRAID_Context->virtual_disk_tgt_id = cpu_to_le16(device_id);
 		pRAID_Context->config_seq_num = 0;
-		io_request->DevHandle = cpu_to_le16(0xFFFF);
+
+		if (fusion->fast_path_io) {
+			local_map_ptr =
+				fusion->ld_drv_map[(instance->map_id & 1)];
+			io_request->DevHandle =
+				local_map_ptr->raidMap.devHndlInfo[device_id].curDevHdl;
+		} else {
+			io_request->DevHandle = cpu_to_le16(0xFFFF);
+		}
 	}
 
 	cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;
 
-	cmd->request_desc->SCSIIO.MSIxIndex =
-		instance->reply_map[raw_smp_processor_id()];
+	if ((instance->perf_mode == MR_BALANCED_PERF_MODE) &&
+	    atomic_read(&scmd->device->device_busy) > MR_DEVICE_HIGH_IOPS_DEPTH)
+		cmd->request_desc->SCSIIO.MSIxIndex =
+			mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) /
+				    MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start);
+	else if (instance->msix_load_balance)
+		cmd->request_desc->SCSIIO.MSIxIndex =
+			(mega_mod64(atomic64_add_return(1, &instance->total_io_count),
+				    instance->msix_vectors));
+	else
+		cmd->request_desc->SCSIIO.MSIxIndex =
+			instance->reply_map[raw_smp_processor_id()];
 
 	if (!fp_possible) {
 		/* system pd firmware path */
@@ -3193,9 +3351,9 @@ void megasas_prepare_secondRaid1_IO(struct megasas_instance *instance,
 	r1_cmd->request_desc->SCSIIO.DevHandle = cmd->r1_alt_dev_handle;
 	r1_cmd->io_request->DevHandle = cmd->r1_alt_dev_handle;
 	r1_cmd->r1_alt_dev_handle = cmd->io_request->DevHandle;
-	cmd->io_request->RaidContext.raid_context_g35.smid.peer_smid =
+	cmd->io_request->RaidContext.raid_context_g35.flow_specific.peer_smid =
 		cpu_to_le16(r1_cmd->index);
-	r1_cmd->io_request->RaidContext.raid_context_g35.smid.peer_smid =
+	r1_cmd->io_request->RaidContext.raid_context_g35.flow_specific.peer_smid =
 		cpu_to_le16(cmd->index);
 	/*MSIxIndex of both commands request descriptors should be same*/
 	r1_cmd->request_desc->SCSIIO.MSIxIndex =
@@ -3313,7 +3471,7 @@ megasas_complete_r1_command(struct megasas_instance *instance,
 
 	rctx_g35 = &cmd->io_request->RaidContext.raid_context_g35;
 	fusion = instance->ctrl_context;
-	peer_smid = le16_to_cpu(rctx_g35->smid.peer_smid);
+	peer_smid = le16_to_cpu(rctx_g35->flow_specific.peer_smid);
 
 	r1_cmd = fusion->cmd_list[peer_smid - 1];
 	scmd_local = cmd->scmd;
@@ -3353,7 +3511,8 @@ megasas_complete_r1_command(struct megasas_instance *instance,
  * Completes all commands that is in reply descriptor queue
  */
 int
-complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
+complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex,
+		    struct megasas_irq_context *irq_context)
 {
 	union MPI2_REPLY_DESCRIPTORS_UNION *desc;
 	struct MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR *reply_desc;
@@ -3486,7 +3645,7 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
 	 * number of reply counts and still there are more replies in reply queue
 	 * pending to be completed
 	 */
-	if (threshold_reply_count >= THRESHOLD_REPLY_COUNT) {
+	if (threshold_reply_count >= instance->threshold_reply_count) {
 		if (instance->msix_combined)
 			writel(((MSIxIndex & 0x7) << 24) |
 				fusion->last_reply_idx[MSIxIndex],
@@ -3496,23 +3655,46 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
 				fusion->last_reply_idx[MSIxIndex],
 				instance->reply_post_host_index_addr[0]);
 		threshold_reply_count = 0;
+		if (irq_context) {
+			if (!irq_context->irq_poll_scheduled) {
+				irq_context->irq_poll_scheduled = true;
+				irq_context->irq_line_enable = true;
+				irq_poll_sched(&irq_context->irqpoll);
+			}
+			return num_completed;
+		}
 	}
 }
 
-	if (!num_completed)
-		return IRQ_NONE;
+	if (num_completed) {
+		wmb();
+		if (instance->msix_combined)
+			writel(((MSIxIndex & 0x7) << 24) |
+				fusion->last_reply_idx[MSIxIndex],
+				instance->reply_post_host_index_addr[MSIxIndex/8]);
+		else
+			writel((MSIxIndex << 24) |
+				fusion->last_reply_idx[MSIxIndex],
+				instance->reply_post_host_index_addr[0]);
+		megasas_check_and_restore_queue_depth(instance);
+	}
+	return num_completed;
+}
 
-	wmb();
-	if (instance->msix_combined)
-		writel(((MSIxIndex & 0x7) << 24) |
-			fusion->last_reply_idx[MSIxIndex],
-			instance->reply_post_host_index_addr[MSIxIndex/8]);
-	else
-		writel((MSIxIndex << 24) |
-			fusion->last_reply_idx[MSIxIndex],
-			instance->reply_post_host_index_addr[0]);
-	megasas_check_and_restore_queue_depth(instance);
-	return IRQ_HANDLED;
+/**
+ * megasas_enable_irq_poll() - enable irqpoll
+ */
+static void megasas_enable_irq_poll(struct megasas_instance *instance)
+{
+	u32 count, i;
+	struct megasas_irq_context *irq_ctx;
+
+	count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
+
+	for (i = 0; i < count; i++) {
+		irq_ctx = &instance->irq_context[i];
+		irq_poll_enable(&irq_ctx->irqpoll);
+	}
 }
 
 /**
@@ -3524,11 +3706,51 @@ void megasas_sync_irqs(unsigned long instance_addr)
 	u32 count, i;
 	struct megasas_instance *instance =
 		(struct megasas_instance *)instance_addr;
+	struct megasas_irq_context *irq_ctx;
 
 	count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
 
-	for (i = 0; i < count; i++)
+	for (i = 0; i < count; i++) {
 		synchronize_irq(pci_irq_vector(instance->pdev, i));
+		irq_ctx = &instance->irq_context[i];
+		irq_poll_disable(&irq_ctx->irqpoll);
+		if (irq_ctx->irq_poll_scheduled) {
+			irq_ctx->irq_poll_scheduled = false;
+			enable_irq(irq_ctx->os_irq);
+		}
+	}
+}
+
+/**
+ * megasas_irqpoll() - process a queue for completed reply descriptors
+ * @irqpoll:	IRQ poll structure associated with queue to poll.
+ * @budget:	Threshold of reply descriptors to process per poll.
+ *
+ * Return: The number of entries processed.
+ */
+
+int megasas_irqpoll(struct irq_poll *irqpoll, int budget)
+{
+	struct megasas_irq_context *irq_ctx;
+	struct megasas_instance *instance;
+	int num_entries;
+
+	irq_ctx = container_of(irqpoll, struct megasas_irq_context, irqpoll);
+	instance = irq_ctx->instance;
+
+	if (irq_ctx->irq_line_enable) {
+		disable_irq(irq_ctx->os_irq);
+		irq_ctx->irq_line_enable = false;
+	}
+
+	num_entries = complete_cmd_fusion(instance, irq_ctx->MSIxIndex, irq_ctx);
+	if (num_entries < budget) {
+		irq_poll_complete(irqpoll);
+		irq_ctx->irq_poll_scheduled = false;
+		enable_irq(irq_ctx->os_irq);
+	}
+
+	return num_entries;
 }
 
 /**
@@ -3551,7 +3773,7 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
 		return;
 
 	for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++)
-		complete_cmd_fusion(instance, MSIxIndex);
+		complete_cmd_fusion(instance, MSIxIndex, NULL);
 }
 
 /**
@@ -3566,6 +3788,11 @@ irqreturn_t megasas_isr_fusion(int irq, void *devp)
 	if (instance->mask_interrupts)
 		return IRQ_NONE;
 
+#if defined(ENABLE_IRQ_POLL)
+	if (irq_context->irq_poll_scheduled)
+		return IRQ_HANDLED;
+#endif
+
 	if (!instance->msix_vectors) {
 		mfiStatus = instance->instancet->clear_intr(instance);
 		if (!mfiStatus)
@@ -3578,7 +3805,8 @@ irqreturn_t megasas_isr_fusion(int irq, void *devp)
 		return IRQ_HANDLED;
 	}
 
-	return complete_cmd_fusion(instance, irq_context->MSIxIndex);
+	return complete_cmd_fusion(instance, irq_context->MSIxIndex, irq_context)
+			? IRQ_HANDLED : IRQ_NONE;
 }
 
 /**
@@ -3843,7 +4071,7 @@ megasas_check_reset_fusion(struct megasas_instance *instance,
 static inline void megasas_trigger_snap_dump(struct megasas_instance *instance)
 {
 	int j;
-	u32 fw_state;
+	u32 fw_state, abs_state;
 
 	if (!instance->disableOnlineCtrlReset) {
 		dev_info(&instance->pdev->dev, "Trigger snap dump\n");
@@ -3853,11 +4081,13 @@ static inline void megasas_trigger_snap_dump(struct megasas_instance *instance)
 	}
 
 	for (j = 0; j < instance->snapdump_wait_time; j++) {
-		fw_state = instance->instancet->read_fw_status_reg(instance) &
-				MFI_STATE_MASK;
+		abs_state = instance->instancet->read_fw_status_reg(instance);
+		fw_state = abs_state & MFI_STATE_MASK;
 		if (fw_state == MFI_STATE_FAULT) {
-			dev_err(&instance->pdev->dev,
-				"Found FW in FAULT state, after snap dump trigger\n");
+			dev_printk(KERN_ERR, &instance->pdev->dev,
+				   "FW in FAULT state Fault code:0x%x subcode:0x%x func:%s\n",
+				   abs_state & MFI_STATE_FAULT_CODE,
+				   abs_state & MFI_STATE_FAULT_SUBCODE, __func__);
 			return;
 		}
 		msleep(1000);
@@ -3869,7 +4099,7 @@ int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
 				int reason, int *convert)
 {
 	int i, outstanding, retval = 0, hb_seconds_missed = 0;
-	u32 fw_state;
+	u32 fw_state, abs_state;
 	u32 waittime_for_io_completion;
 
 	waittime_for_io_completion =
@@ -3888,12 +4118,13 @@ int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
 
 	for (i = 0; i < waittime_for_io_completion; i++) {
 		/* Check if firmware is in fault state */
-		fw_state = instance->instancet->read_fw_status_reg(instance) &
-				MFI_STATE_MASK;
+		abs_state = instance->instancet->read_fw_status_reg(instance);
+		fw_state = abs_state & MFI_STATE_MASK;
 		if (fw_state == MFI_STATE_FAULT) {
-			dev_warn(&instance->pdev->dev, "Found FW in FAULT state,"
-				 " will reset adapter scsi%d.\n",
-				 instance->host->host_no);
+			dev_printk(KERN_ERR, &instance->pdev->dev,
+				   "FW in FAULT state Fault code:0x%x subcode:0x%x func:%s\n",
+				   abs_state & MFI_STATE_FAULT_CODE,
+				   abs_state & MFI_STATE_FAULT_SUBCODE, __func__);
 			megasas_complete_cmd_dpc_fusion((unsigned long)instance);
 			if (instance->requestorId && reason) {
 				dev_warn(&instance->pdev->dev, "SR-IOV Found FW in FAULT"
@@ -4042,6 +4273,13 @@ void megasas_refire_mgmt_cmd(struct megasas_instance *instance)
 		}
 
 		break;
+	case MFI_CMD_TOOLBOX:
+		if (!instance->support_pci_lane_margining) {
+			cmd_mfi->frame->hdr.cmd_status = MFI_STAT_INVALID_CMD;
+			result = COMPLETE_CMD;
+		}
+
+		break;
 	default:
 		break;
 	}
@@ -4265,6 +4503,7 @@ megasas_issue_tm(struct megasas_instance *instance, u16 device_handle,
 			instance->instancet->disable_intr(instance);
 			megasas_sync_irqs((unsigned long)instance);
 			instance->instancet->enable_intr(instance);
+			megasas_enable_irq_poll(instance);
 			if (scsi_lookup->scmd == NULL)
 				break;
 		}
@@ -4278,6 +4517,7 @@ megasas_issue_tm(struct megasas_instance *instance, u16 device_handle,
 		megasas_sync_irqs((unsigned long)instance);
 		rc = megasas_track_scsiio(instance, id, channel);
 		instance->instancet->enable_intr(instance);
+		megasas_enable_irq_poll(instance);
 
 		break;
 	case MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET:
@@ -4376,9 +4616,6 @@ int megasas_task_abort_fusion(struct scsi_cmnd *scmd)
 
 	instance = (struct megasas_instance *)scmd->device->host->hostdata;
 
-	scmd_printk(KERN_INFO, scmd, "task abort called for scmd(%p)\n", scmd);
-	scsi_print_command(scmd);
-
 	if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) {
 		dev_err(&instance->pdev->dev, "Controller is not OPERATIONAL,"
 			"SCSI host:%d\n", instance->host->host_no);
@@ -4421,7 +4658,7 @@ int megasas_task_abort_fusion(struct scsi_cmnd *scmd)
 		goto out;
 	}
 	sdev_printk(KERN_INFO, scmd->device,
-		"attempting task abort! scmd(%p) tm_dev_handle 0x%x\n",
+		"attempting task abort! scmd(0x%p) tm_dev_handle 0x%x\n",
 		scmd, devhandle);
 
 	mr_device_priv_data->tm_busy = 1;
@@ -4432,9 +4669,12 @@ int megasas_task_abort_fusion(struct scsi_cmnd *scmd)
 	mr_device_priv_data->tm_busy = 0;
 
 	mutex_unlock(&instance->reset_mutex);
-out:
-	sdev_printk(KERN_INFO, scmd->device, "task abort: %s scmd(%p)\n",
+	scmd_printk(KERN_INFO, scmd, "task abort %s!! scmd(0x%p)\n",
 		((ret == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
+out:
+	scsi_print_command(scmd);
+	if (megasas_dbg_lvl & TM_DEBUG)
+		megasas_dump_fusion_io(scmd);
 
 	return ret;
 }
@@ -4457,9 +4697,6 @@ int megasas_reset_target_fusion(struct scsi_cmnd *scmd)
 
 	instance = (struct megasas_instance *)scmd->device->host->hostdata;
 
-	sdev_printk(KERN_INFO, scmd->device,
-		"target reset called for scmd(%p)\n", scmd);
-
 	if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) {
 		dev_err(&instance->pdev->dev, "Controller is not OPERATIONAL,"
 			"SCSI host:%d\n", instance->host->host_no);
@@ -4468,8 +4705,8 @@ int megasas_reset_target_fusion(struct scsi_cmnd *scmd)
 	}
 
 	if (!mr_device_priv_data) {
-		sdev_printk(KERN_INFO, scmd->device, "device been deleted! "
-			"scmd(%p)\n", scmd);
+		sdev_printk(KERN_INFO, scmd->device,
+			    "device been deleted! scmd: (0x%p)\n", scmd);
 		scmd->result = DID_NO_CONNECT << 16;
 		ret = SUCCESS;
 		goto out;
@@ -4492,7 +4729,7 @@ int megasas_reset_target_fusion(struct scsi_cmnd *scmd)
 	}
 
 	sdev_printk(KERN_INFO, scmd->device,
-		"attempting target reset! scmd(%p) tm_dev_handle 0x%x\n",
+		"attempting target reset! scmd(0x%p) tm_dev_handle: 0x%x\n",
 		scmd, devhandle);
 	mr_device_priv_data->tm_busy = 1;
 	ret = megasas_issue_tm(instance, devhandle,
@@ -4501,10 +4738,10 @@ int megasas_reset_target_fusion(struct scsi_cmnd *scmd)
 			mr_device_priv_data);
 	mr_device_priv_data->tm_busy = 0;
 	mutex_unlock(&instance->reset_mutex);
-out:
-	scmd_printk(KERN_NOTICE, scmd, "megasas: target reset %s!!\n",
+	scmd_printk(KERN_NOTICE, scmd, "target reset %s!!\n",
 		(ret == SUCCESS) ? "SUCCESS" : "FAILED");
 
+out:
 	return ret;
 }
 
@@ -4549,12 +4786,14 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
 	struct megasas_instance *instance;
 	struct megasas_cmd_fusion *cmd_fusion, *r1_cmd;
 	struct fusion_context *fusion;
-	u32 abs_state, status_reg, reset_adapter;
+	u32 abs_state, status_reg, reset_adapter, fpio_count = 0;
 	u32 io_timeout_in_crash_mode = 0;
 	struct scsi_cmnd *scmd_local = NULL;
 	struct scsi_device *sdev;
 	int ret_target_prop = DCMD_FAILED;
 	bool is_target_prop = false;
+	bool do_adp_reset = true;
+	int max_reset_tries = MEGASAS_FUSION_MAX_RESET_TRIES;
 
 	instance = (struct megasas_instance *)shost->hostdata;
 	fusion = instance->ctrl_context;
@@ -4621,7 +4860,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
 	if (convert)
 		reason = 0;
 
-	if (megasas_dbg_lvl & OCR_LOGS)
+	if (megasas_dbg_lvl & OCR_DEBUG)
 		dev_info(&instance->pdev->dev, "\nPending SCSI commands:\n");
 
 	/* Now return commands back to the OS */
@@ -4634,13 +4873,17 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
 			}
 			scmd_local = cmd_fusion->scmd;
 			if (cmd_fusion->scmd) {
-				if (megasas_dbg_lvl & OCR_LOGS) {
+				if (megasas_dbg_lvl & OCR_DEBUG) {
 					sdev_printk(KERN_INFO,
 						cmd_fusion->scmd->device, "SMID: 0x%x\n",
 						cmd_fusion->index);
-					scsi_print_command(cmd_fusion->scmd);
+					megasas_dump_fusion_io(cmd_fusion->scmd);
 				}
 
+				if (cmd_fusion->io_request->Function ==
+					MPI2_FUNCTION_SCSI_IO_REQUEST)
+					fpio_count++;
+
 				scmd_local->result =
 					megasas_check_mpio_paths(instance,
 							scmd_local);
@@ -4653,6 +4896,9 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
 		}
 	}
 
+	dev_info(&instance->pdev->dev, "Outstanding fastpath IOs: %d\n",
+		 fpio_count);
+
 	atomic_set(&instance->fw_outstanding, 0);
 
 	status_reg = instance->instancet->read_fw_status_reg(instance);
@@ -4664,52 +4910,45 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
 			dev_warn(&instance->pdev->dev, "Reset not supported"
 			       ", killing adapter scsi%d.\n",
 				instance->host->host_no);
-			megaraid_sas_kill_hba(instance);
-			instance->skip_heartbeat_timer_del = 1;
-			retval = FAILED;
-			goto out;
+			goto kill_hba;
 		}
 
 		/* Let SR-IOV VF & PF sync up if there was a HB failure */
 		if (instance->requestorId && !reason) {
 			msleep(MEGASAS_OCR_SETTLE_TIME_VF);
-			goto transition_to_ready;
+			do_adp_reset = false;
+			max_reset_tries = MEGASAS_SRIOV_MAX_RESET_TRIES_VF;
 		}
 
 		/* Now try to reset the chip */
-		for (i = 0; i < MEGASAS_FUSION_MAX_RESET_TRIES; i++) {
-
-			if (instance->instancet->adp_reset
-				(instance, instance->reg_set))
+		for (i = 0; i < max_reset_tries; i++) {
+			/*
+			 * Do adp reset and wait for
+			 * controller to transition to ready
+			 */
+			if (megasas_adp_reset_wait_for_ready(instance,
+				do_adp_reset, 1) == FAILED)
 				continue;
-transition_to_ready:
+
 			/* Wait for FW to become ready */
 			if (megasas_transition_to_ready(instance, 1)) {
 				dev_warn(&instance->pdev->dev,
 					"Failed to transition controller to ready for "
 					"scsi%d.\n", instance->host->host_no);
-				if (instance->requestorId && !reason)
-					goto fail_kill_adapter;
-				else
-					continue;
+				continue;
 			}
 			megasas_reset_reply_desc(instance);
 			megasas_fusion_update_can_queue(instance, OCR_CONTEXT);
 
 			if (megasas_ioc_init_fusion(instance)) {
-				if (instance->requestorId && !reason)
-					goto fail_kill_adapter;
-				else
-					continue;
+				continue;
 			}
 
 			if (megasas_get_ctrl_info(instance)) {
 				dev_info(&instance->pdev->dev,
 					"Failed from %s %d\n",
 					__func__, __LINE__);
-				megaraid_sas_kill_hba(instance);
-				retval = FAILED;
-				goto out;
+				goto kill_hba;
 			}
 
 			megasas_refire_mgmt_cmd(instance);
@@ -4738,7 +4977,7 @@ transition_to_ready:
 			clear_bit(MEGASAS_FUSION_IN_RESET,
 				  &instance->reset_flags);
 			instance->instancet->enable_intr(instance);
-
+			megasas_enable_irq_poll(instance);
 			shost_for_each_device(sdev, shost) {
 				if ((instance->tgt_prop) &&
 				    (instance->nvme_page_size))
@@ -4750,9 +4989,9 @@ transition_to_ready:
 
 			atomic_set(&instance->adprecovery, MEGASAS_HBA_OPERATIONAL);
 
-			dev_info(&instance->pdev->dev, "Interrupts are enabled and"
-				" controller is OPERATIONAL for scsi:%d\n",
+			dev_info(&instance->pdev->dev,
+				 "Adapter is OPERATIONAL for scsi:%d\n",
 				 instance->host->host_no);
 
 			/* Restart SR-IOV heartbeat */
 			if (instance->requestorId) {
@@ -4786,13 +5025,10 @@ transition_to_ready:
 
 			goto out;
 		}
-fail_kill_adapter:
 		/* Reset failed, kill the adapter */
 		dev_warn(&instance->pdev->dev, "Reset failed, killing "
 		       "adapter scsi%d.\n", instance->host->host_no);
-		megaraid_sas_kill_hba(instance);
-		instance->skip_heartbeat_timer_del = 1;
-		retval = FAILED;
+		goto kill_hba;
 	} else {
 		/* For VF: Restart HB timer if we didn't OCR */
 		if (instance->requestorId) {
@@ -4800,8 +5036,15 @@ fail_kill_adapter:
 		}
 		clear_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags);
 		instance->instancet->enable_intr(instance);
+		megasas_enable_irq_poll(instance);
 		atomic_set(&instance->adprecovery, MEGASAS_HBA_OPERATIONAL);
+		goto out;
 	}
+kill_hba:
+	megaraid_sas_kill_hba(instance);
+	megasas_enable_irq_poll(instance);
+	instance->skip_heartbeat_timer_del = 1;
+	retval = FAILED;
 out:
 	clear_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags);
 	mutex_unlock(&instance->reset_mutex);
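[Editor's note] The MSI-X selection change in the megaraid_sas_fusion.c hunks above routes completions of high-IOPS commands in batches onto the first `low_latency_index_start` reply queues, using an ever-incrementing counter divided by `MR_HIGH_IOPS_BATCH_COUNT`. A minimal userspace sketch of that batching arithmetic follows; the batch constant's value and the helper name are illustrative only (the real driver uses `atomic64_add_return()` and `mega_mod64()` on `instance->high_iops_outstanding`):

```c
#include <stdint.h>

/* Illustrative stand-in for MR_HIGH_IOPS_BATCH_COUNT; the real value is
 * defined in the driver headers, not here. */
#define HIGH_IOPS_BATCH_COUNT 8

/*
 * Sketch of the balanced-perf-mode queue selection: consecutive commands
 * are grouped into batches of HIGH_IOPS_BATCH_COUNT, and each batch is
 * assigned round-robin to one of the first `low_latency_index_start`
 * reply queues.  The ++ models atomic64_add_return(1, ...); this sketch
 * is not thread-safe.
 */
static uint32_t pick_high_iops_queue(uint64_t *high_iops_outstanding,
				     uint32_t low_latency_index_start)
{
	uint64_t seq = ++(*high_iops_outstanding);

	return (uint32_t)((seq / HIGH_IOPS_BATCH_COUNT) %
			  low_latency_index_start);
}
```

Batching at this granularity keeps cache-hot bursts of completions on one vector while still spreading sustained load across the low-latency queues.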
diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.h b/drivers/scsi/megaraid/megaraid_sas_fusion.h
index 7fa73eaca1a8..c013c80fe4e6 100644
--- a/drivers/scsi/megaraid/megaraid_sas_fusion.h
+++ b/drivers/scsi/megaraid/megaraid_sas_fusion.h
@@ -75,7 +75,8 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
 	MR_RAID_FLAGS_IO_SUB_TYPE_RMW_P = 3,
 	MR_RAID_FLAGS_IO_SUB_TYPE_RMW_Q = 4,
 	MR_RAID_FLAGS_IO_SUB_TYPE_CACHE_BYPASS = 6,
-	MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT = 7
+	MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT = 7,
+	MR_RAID_FLAGS_IO_SUB_TYPE_R56_DIV_OFFLOAD = 8
 };
 
 /*
@@ -88,7 +89,6 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
 
 #define MEGASAS_FP_CMD_LEN	16
 #define MEGASAS_FUSION_IN_RESET 0
-#define THRESHOLD_REPLY_COUNT 50
 #define RAID_1_PEER_CMDS 2
 #define JBOD_MAPS_COUNT	2
 #define MEGASAS_REDUCE_QD_COUNT 64
@@ -140,12 +140,15 @@ struct RAID_CONTEXT_G35 {
 	u16	timeout_value; /* 0x02 -0x03 */
 	u16	routing_flags;	// 0x04 -0x05 routing flags
 	u16	virtual_disk_tgt_id;   /* 0x06 -0x07 */
-	u64	reg_lock_row_lba;      /* 0x08 - 0x0F */
+	__le64	reg_lock_row_lba;      /* 0x08 - 0x0F */
 	u32	reg_lock_length;      /* 0x10 - 0x13 */
-	union {
-		u16	next_lmid; /* 0x14 - 0x15 */
-		u16	peer_smid; /* used for the raid 1/10 fp writes */
-	} smid;
+	union {                     // flow specific
+		u16	rmw_op_index;   /* 0x14 - 0x15, R5/6 RMW: rmw operation index*/
+		u16	peer_smid;      /* 0x14 - 0x15, R1 Write: peer smid*/
+		u16	r56_arm_map;    /* 0x14 - 0x15, Unused [15], LogArm[14:10], P-Arm[9:5], Q-Arm[4:0] */
+
+	} flow_specific;
+
 	u8	ex_status;       /* 0x16 : OUT */
 	u8	status;          /* 0x17 status */
 	u8	raid_flags;	/* 0x18 resvd[7:6], ioSubType[5:4],
@@ -236,6 +239,13 @@ union RAID_CONTEXT_UNION {
 #define RAID_CTX_SPANARM_SPAN_SHIFT	(5)
 #define RAID_CTX_SPANARM_SPAN_MASK	(0xE0)
 
+/* LogArm[14:10], P-Arm[9:5], Q-Arm[4:0] */
+#define RAID_CTX_R56_Q_ARM_MASK		(0x1F)
+#define RAID_CTX_R56_P_ARM_SHIFT	(5)
+#define RAID_CTX_R56_P_ARM_MASK		(0x3E0)
+#define RAID_CTX_R56_LOG_ARM_SHIFT	(10)
+#define RAID_CTX_R56_LOG_ARM_MASK	(0x7C00)
+
 /* number of bits per index in U32 TrackStream */
 #define BITS_PER_INDEX_STREAM		4
 #define INVALID_STREAM_NUM		16
@@ -940,6 +950,7 @@ struct IO_REQUEST_INFO {
 	u8 pd_after_lb;
 	u16 r1_alt_dev_handle; /* raid 1/10 only */
 	bool ra_capable;
+	u8 data_arms;
 };
 
 struct MR_LD_TARGET_SYNC {
@@ -1324,7 +1335,8 @@ struct fusion_context {
 	dma_addr_t ioc_init_request_phys;
 	struct MPI2_IOC_INIT_REQUEST *ioc_init_request;
 	struct megasas_cmd *ioc_init_cmd;
-
+	bool pcie_bw_limitation;
+	bool r56_div_offload;
 };
 
 union desc_value {
@@ -1349,6 +1361,11 @@ struct MR_SNAPDUMP_PROPERTIES {
 	u8 reserved[12];
 };
 
+struct megasas_debugfs_buffer {
+	void *buf;
+	u32 len;
+};
+
 void megasas_free_cmds_fusion(struct megasas_instance *instance);
 int megasas_ioc_init_fusion(struct megasas_instance *instance);
 u8 megasas_get_map_info(struct megasas_instance *instance);
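[Editor's note] The new `flow_specific.r56_arm_map` field in the fusion.h hunk above packs three arm indices into one 16-bit word, decoded with the RAID_CTX_R56_* masks that the same hunk introduces. A small standalone sketch of the decode; the mask values are copied from the hunk, but the helper functions are illustrative and not part of the driver:

```c
#include <stdint.h>

/* Masks from megaraid_sas_fusion.h:
 * r56_arm_map layout is Unused[15], LogArm[14:10], P-Arm[9:5], Q-Arm[4:0]. */
#define RAID_CTX_R56_Q_ARM_MASK		(0x1F)
#define RAID_CTX_R56_P_ARM_SHIFT	(5)
#define RAID_CTX_R56_P_ARM_MASK		(0x3E0)
#define RAID_CTX_R56_LOG_ARM_SHIFT	(10)
#define RAID_CTX_R56_LOG_ARM_MASK	(0x7C00)

/* Illustrative decoders: extract each arm index from the packed map. */
static inline uint16_t r56_q_arm(uint16_t map)
{
	return map & RAID_CTX_R56_Q_ARM_MASK;
}

static inline uint16_t r56_p_arm(uint16_t map)
{
	return (map & RAID_CTX_R56_P_ARM_MASK) >> RAID_CTX_R56_P_ARM_SHIFT;
}

static inline uint16_t r56_log_arm(uint16_t map)
{
	return (map & RAID_CTX_R56_LOG_ARM_MASK) >> RAID_CTX_R56_LOG_ARM_SHIFT;
}
```

For example, a map of 0x0C41 (LogArm 3, P-Arm 2, Q-Arm 1) splits cleanly back into its three fields, which is what the R5/6 division-offload path needs when handing parity work to firmware.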
diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h b/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h
index a2f4a55c51be..167d79d145ca 100644
--- a/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h
+++ b/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h
@@ -1398,7 +1398,7 @@ typedef struct _MPI2_CONFIG_PAGE_IOC_1 {
 	U8                      PCIBusNum;                  /*0x0E */
 	U8                      PCIDomainSegment;           /*0x0F */
 	U32                     Reserved1;                  /*0x10 */
-	U32                     Reserved2;                  /*0x14 */
+	U32                     ProductSpecific;            /* 0x14 */
 } MPI2_CONFIG_PAGE_IOC_1,
 	*PTR_MPI2_CONFIG_PAGE_IOC_1,
 	Mpi2IOCPage1_t, *pMpi2IOCPage1_t;
diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
index 8aacbd1e7db2..684662888792 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -74,28 +74,28 @@ static MPT_CALLBACK mpt_callbacks[MPT_MAX_CALLBACKS];
 #define MAX_HBA_QUEUE_DEPTH 30000
 #define MAX_CHAIN_DEPTH 100000
 static int max_queue_depth = -1;
-module_param(max_queue_depth, int, 0);
+module_param(max_queue_depth, int, 0444);
 MODULE_PARM_DESC(max_queue_depth, " max controller queue depth ");
 
 static int max_sgl_entries = -1;
-module_param(max_sgl_entries, int, 0);
+module_param(max_sgl_entries, int, 0444);
 MODULE_PARM_DESC(max_sgl_entries, " max sg entries ");
 
 static int msix_disable = -1;
-module_param(msix_disable, int, 0);
+module_param(msix_disable, int, 0444);
 MODULE_PARM_DESC(msix_disable, " disable msix routed interrupts (default=0)");
 
 static int smp_affinity_enable = 1;
-module_param(smp_affinity_enable, int, S_IRUGO);
+module_param(smp_affinity_enable, int, 0444);
 MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Default: enable(1)");
 
 static int max_msix_vectors = -1;
-module_param(max_msix_vectors, int, 0);
+module_param(max_msix_vectors, int, 0444);
 MODULE_PARM_DESC(max_msix_vectors,
 	" max msix vectors");
 
 static int irqpoll_weight = -1;
-module_param(irqpoll_weight, int, 0);
+module_param(irqpoll_weight, int, 0444);
 MODULE_PARM_DESC(irqpoll_weight,
 	"irq poll weight (default= one fourth of HBA queue depth)");
 
@@ -103,6 +103,26 @@ static int mpt3sas_fwfault_debug;
 MODULE_PARM_DESC(mpt3sas_fwfault_debug,
 	" enable detection of firmware fault and halt firmware - (default=0)");
 
+static int perf_mode = -1;
+module_param(perf_mode, int, 0444);
+MODULE_PARM_DESC(perf_mode,
+	"Performance mode (only for Aero/Sea Generation), options:\n\t\t"
+	"0 - balanced: high iops mode is enabled &\n\t\t"
+	"interrupt coalescing is enabled only on high iops queues,\n\t\t"
+	"1 - iops: high iops mode is disabled &\n\t\t"
+	"interrupt coalescing is enabled on all queues,\n\t\t"
+	"2 - latency: high iops mode is disabled &\n\t\t"
+	"interrupt coalescing is enabled on all queues with timeout value 0xA,\n"
+	"\t\tdefault - default perf_mode is 'balanced'"
+	);
+
+enum mpt3sas_perf_mode {
+	MPT_PERF_MODE_DEFAULT = -1,
+	MPT_PERF_MODE_BALANCED = 0,
+	MPT_PERF_MODE_IOPS = 1,
+	MPT_PERF_MODE_LATENCY = 2,
+};
+
 static int
 _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc);
 
@@ -1282,7 +1302,7 @@ _base_async_event(struct MPT3SAS_ADAPTER *ioc, u8 msix_index, u32 reply)
 	ack_request->EventContext = mpi_reply->EventContext;
 	ack_request->VF_ID = 0;  /* TODO */
 	ack_request->VP_ID = 0;
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 
  out:
 
@@ -2793,6 +2813,9 @@ _base_free_irq(struct MPT3SAS_ADAPTER *ioc)
 
 	list_for_each_entry_safe(reply_q, next, &ioc->reply_queue_list, list) {
 		list_del(&reply_q->list);
+		if (ioc->smp_affinity_enable)
+			irq_set_affinity_hint(pci_irq_vector(ioc->pdev,
+			    reply_q->msix_index), NULL);
 		free_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index),
 			 reply_q);
 		kfree(reply_q);
@@ -2857,14 +2880,13 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
 {
 	unsigned int cpu, nr_cpus, nr_msix, index = 0;
 	struct adapter_reply_queue *reply_q;
+	int local_numa_node;
 
 	if (!_base_is_controller_msix_enabled(ioc))
 		return;
-	ioc->msix_load_balance = false;
-	if (ioc->reply_queue_count < num_online_cpus()) {
-		ioc->msix_load_balance = true;
+
+	if (ioc->msix_load_balance)
 		return;
-	}
 
 	memset(ioc->cpu_msix_table, 0, ioc->cpu_msix_table_sz);
 
@@ -2874,14 +2896,33 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
 	if (!nr_msix)
 		return;
 
-	if (smp_affinity_enable) {
+	if (ioc->smp_affinity_enable) {
+
+		/*
+		 * set irq affinity to local numa node for those irqs
+		 * corresponding to high iops queues.
+		 */
+		if (ioc->high_iops_queues) {
+			local_numa_node = dev_to_node(&ioc->pdev->dev);
+			for (index = 0; index < ioc->high_iops_queues;
+			    index++) {
+				irq_set_affinity_hint(pci_irq_vector(ioc->pdev,
+				    index), cpumask_of_node(local_numa_node));
+			}
+		}
+
 		list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
-			const cpumask_t *mask = pci_irq_get_affinity(ioc->pdev,
-			    reply_q->msix_index);
+			const cpumask_t *mask;
+
+			if (reply_q->msix_index < ioc->high_iops_queues)
+				continue;
+
+			mask = pci_irq_get_affinity(ioc->pdev,
+			    reply_q->msix_index);
 			if (!mask) {
 				ioc_warn(ioc, "no affinity for msi %x\n",
 					 reply_q->msix_index);
-				continue;
+				goto fall_back;
 			}
 
 			for_each_cpu_and(cpu, mask, cpu_online_mask) {
@@ -2892,12 +2933,18 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
 		}
 		return;
 	}
+
+fall_back:
 	cpu = cpumask_first(cpu_online_mask);
+	nr_msix -= ioc->high_iops_queues;
+	index = 0;
 
 	list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
-
 		unsigned int i, group = nr_cpus / nr_msix;
 
+		if (reply_q->msix_index < ioc->high_iops_queues)
+			continue;
+
 		if (cpu >= nr_cpus)
 			break;
 
@@ -2913,6 +2960,52 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
 }
 
 /**
+ * _base_check_and_enable_high_iops_queues - enable high iops mode
+ * @ioc: per adapter object
+ * @hba_msix_vector_count: msix vectors supported by HBA
+ *
+ * Enable high iops queues only if
+ *  - HBA is a SEA/AERO controller and
+ *  - MSI-X vectors supported by the HBA is 128 and
+ *  - total CPU count in the system >=16 and
+ *  - driver is loaded with the default max_msix_vectors module parameter and
+ *  - system booted in non kdump mode
+ *
+ * Return: nothing.
+ */
+static void
+_base_check_and_enable_high_iops_queues(struct MPT3SAS_ADAPTER *ioc,
+	int hba_msix_vector_count)
+{
+	u16 lnksta, speed;
+
+	if (perf_mode == MPT_PERF_MODE_IOPS ||
+	    perf_mode == MPT_PERF_MODE_LATENCY) {
+		ioc->high_iops_queues = 0;
+		return;
+	}
+
+	if (perf_mode == MPT_PERF_MODE_DEFAULT) {
+
+		pcie_capability_read_word(ioc->pdev, PCI_EXP_LNKSTA, &lnksta);
+		speed = lnksta & PCI_EXP_LNKSTA_CLS;
+
+		if (speed < 0x4) {
+			ioc->high_iops_queues = 0;
+			return;
+		}
+	}
+
+	if (!reset_devices && ioc->is_aero_ioc &&
+	    hba_msix_vector_count == MPT3SAS_GEN35_MAX_MSIX_QUEUES &&
+	    num_online_cpus() >= MPT3SAS_HIGH_IOPS_REPLY_QUEUES &&
+	    max_msix_vectors == -1)
+		ioc->high_iops_queues = MPT3SAS_HIGH_IOPS_REPLY_QUEUES;
+	else
+		ioc->high_iops_queues = 0;
+}
+
+/**
  * _base_disable_msix - disables msix
  * @ioc: per adapter object
 *
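The gating conditions listed in the kernel-doc above can be sketched as a standalone predicate. This is an illustrative userspace model, not the kernel code: the constant values (128 MSI-X vectors, 8 high-iops queues) and the function name are assumptions standing in for the driver's `MPT3SAS_GEN35_MAX_MSIX_QUEUES` and `MPT3SAS_HIGH_IOPS_REPLY_QUEUES` symbols.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the driver's constants (values assumed). */
#define GEN35_MAX_MSIX_QUEUES   128
#define HIGH_IOPS_REPLY_QUEUES  8

/* Returns the number of reply queues to reserve for high-iops mode,
 * or 0 when the mode stays disabled. Mirrors the checks in the hunk:
 * non-kdump boot, Aero/Sea controller, full MSI-X complement, enough
 * online CPUs, and max_msix_vectors left at its default of -1. */
static int high_iops_queue_count(bool kdump, bool is_aero,
				 int hba_msix_vector_count,
				 int online_cpus, int max_msix_vectors)
{
	if (!kdump && is_aero &&
	    hba_msix_vector_count == GEN35_MAX_MSIX_QUEUES &&
	    online_cpus >= HIGH_IOPS_REPLY_QUEUES &&
	    max_msix_vectors == -1)
		return HIGH_IOPS_REPLY_QUEUES;
	return 0;
}
```

Any single condition failing (kdump boot, reduced MSI-X count, or an explicit `max_msix_vectors` setting) keeps the mode off.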
@@ -2922,11 +3015,38 @@ _base_disable_msix(struct MPT3SAS_ADAPTER *ioc)
 {
 	if (!ioc->msix_enable)
 		return;
-	pci_disable_msix(ioc->pdev);
+	pci_free_irq_vectors(ioc->pdev);
 	ioc->msix_enable = 0;
 }
 
 /**
+ * _base_alloc_irq_vectors - allocate msix vectors
+ * @ioc: per adapter object
+ *
+ */
+static int
+_base_alloc_irq_vectors(struct MPT3SAS_ADAPTER *ioc)
+{
+	int i, irq_flags = PCI_IRQ_MSIX;
+	struct irq_affinity desc = { .pre_vectors = ioc->high_iops_queues };
+	struct irq_affinity *descp = &desc;
+
+	if (ioc->smp_affinity_enable)
+		irq_flags |= PCI_IRQ_AFFINITY;
+	else
+		descp = NULL;
+
+	ioc_info(ioc, " %d %d\n", ioc->high_iops_queues,
+	    ioc->msix_vector_count);
+
+	i = pci_alloc_irq_vectors_affinity(ioc->pdev,
+	    ioc->high_iops_queues,
+	    ioc->msix_vector_count, irq_flags, descp);
+
+	return i;
+}
+
+/**
  * _base_enable_msix - enables msix, failback to io_apic
  * @ioc: per adapter object
 *
@@ -2937,7 +3057,8 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
 	int r;
 	int i, local_max_msix_vectors;
 	u8 try_msix = 0;
-	unsigned int irq_flags = PCI_IRQ_MSIX;
+
+	ioc->msix_load_balance = false;
 
 	if (msix_disable == -1 || msix_disable == 0)
 		try_msix = 1;
@@ -2948,12 +3069,16 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
 	if (_base_check_enable_msix(ioc) != 0)
 		goto try_ioapic;
 
-	ioc->reply_queue_count = min_t(int, ioc->cpu_count,
+	ioc_info(ioc, "MSI-X vectors supported: %d\n", ioc->msix_vector_count);
+	pr_info("\t no of cores: %d, max_msix_vectors: %d\n",
+	    ioc->cpu_count, max_msix_vectors);
+	if (ioc->is_aero_ioc)
+		_base_check_and_enable_high_iops_queues(ioc,
+		    ioc->msix_vector_count);
+	ioc->reply_queue_count =
+	    min_t(int, ioc->cpu_count + ioc->high_iops_queues,
 	    ioc->msix_vector_count);
 
-	ioc_info(ioc, "MSI-X vectors supported: %d, no of cores: %d, max_msix_vectors: %d\n",
-		 ioc->msix_vector_count, ioc->cpu_count, max_msix_vectors);
-
 	if (!ioc->rdpq_array_enable && max_msix_vectors == -1)
 		local_max_msix_vectors = (reset_devices) ? 1 : 8;
 	else
@@ -2965,14 +3090,23 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
 	else if (local_max_msix_vectors == 0)
 		goto try_ioapic;
 
-	if (ioc->msix_vector_count < ioc->cpu_count)
-		smp_affinity_enable = 0;
+	/*
+	 * Enable msix_load_balance only if combined reply queue mode is
+	 * disabled on SAS3 & above generation HBA devices.
+	 */
+	if (!ioc->combined_reply_queue &&
+	    ioc->hba_mpi_version_belonged != MPI2_VERSION) {
+		ioc->msix_load_balance = true;
+	}
 
-	if (smp_affinity_enable)
-		irq_flags |= PCI_IRQ_AFFINITY;
+	/*
+	 * smp affinity setting is not needed when msix load balance
+	 * is enabled.
+	 */
+	if (ioc->msix_load_balance)
+		ioc->smp_affinity_enable = 0;
 
-	r = pci_alloc_irq_vectors(ioc->pdev, 1, ioc->reply_queue_count,
-	    irq_flags);
+	r = _base_alloc_irq_vectors(ioc);
 	if (r < 0) {
 		dfailprintk(ioc,
 			    ioc_info(ioc, "pci_alloc_irq_vectors failed (r=%d) !!!\n",
@@ -2991,11 +3125,15 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
 		}
 	}
 
+	ioc_info(ioc, "High IOPs queues : %s\n",
+	    ioc->high_iops_queues ? "enabled" : "disabled");
+
 	return 0;
 
 /* failback to io_apic interrupt routing */
  try_ioapic:
-
+	ioc->high_iops_queues = 0;
+	ioc_info(ioc, "High IOPs queues : disabled\n");
 	ioc->reply_queue_count = 1;
 	r = pci_alloc_irq_vectors(ioc->pdev, 1, 1, PCI_IRQ_LEGACY);
 	if (r < 0) {
@@ -3265,8 +3403,18 @@ mpt3sas_base_get_reply_virt_addr(struct MPT3SAS_ADAPTER *ioc, u32 phys_addr)
 	return ioc->reply + (phys_addr - (u32)ioc->reply_dma);
 }
 
+/**
+ * _base_get_msix_index - get the msix index
+ * @ioc: per adapter object
+ * @scmd: scsi_cmnd object
+ *
+ * Return: msix index of general reply queues,
+ * i.e. reply queue on which IO request's reply
+ * should be posted by the HBA firmware.
+ */
 static inline u8
-_base_get_msix_index(struct MPT3SAS_ADAPTER *ioc)
+_base_get_msix_index(struct MPT3SAS_ADAPTER *ioc,
+	struct scsi_cmnd *scmd)
 {
 	/* Enables reply_queue load balancing */
 	if (ioc->msix_load_balance)
@@ -3278,6 +3426,35 @@ _base_get_msix_index(struct MPT3SAS_ADAPTER *ioc)
 }
 
 /**
+ * _base_get_high_iops_msix_index - get the msix index of
+ *				high iops queues
+ * @ioc: per adapter object
+ * @scmd: scsi_cmnd object
+ *
+ * Return: msix index of high iops reply queues,
+ * i.e. high iops reply queue on which IO request's
+ * reply should be posted by the HBA firmware.
+ */
+static inline u8
+_base_get_high_iops_msix_index(struct MPT3SAS_ADAPTER *ioc,
+	struct scsi_cmnd *scmd)
+{
+	/*
+	 * Round robin the IO interrupts among the high iops
+	 * reply queues in terms of batch count 16 when outstanding
+	 * IOs on the target device is >=8.
+	 */
+	if (atomic_read(&scmd->device->device_busy) >
+	    MPT3SAS_DEVICE_HIGH_IOPS_DEPTH)
+		return base_mod64((
+		    atomic64_add_return(1, &ioc->high_iops_outstanding) /
+		    MPT3SAS_HIGH_IOPS_BATCH_COUNT),
+		    MPT3SAS_HIGH_IOPS_REPLY_QUEUES);
+
+	return _base_get_msix_index(ioc, scmd);
+}
+
+/**
  * mpt3sas_base_get_smid - obtain a free smid from internal queue
  * @ioc: per adapter object
 * @cb_idx: callback index
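The batch arithmetic in `_base_get_high_iops_msix_index` above can be modelled in userspace. `base_mod64` is the driver's 64-bit modulo helper; the batch size of 16 and queue count of 8 used here are assumed values for illustration (the driver's `MPT3SAS_HIGH_IOPS_BATCH_COUNT` and `MPT3SAS_HIGH_IOPS_REPLY_QUEUES` constants).

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values for illustration. */
#define HIGH_IOPS_BATCH_COUNT   16
#define HIGH_IOPS_REPLY_QUEUES  8

/* Maps the nth submitted IO (a monotonically increasing counter) to a
 * high-iops reply queue: 16 consecutive IOs share one queue, then the
 * index advances and wraps modulo the number of high-iops queues. */
static unsigned int high_iops_msix_index(uint64_t outstanding_counter)
{
	return (unsigned int)((outstanding_counter / HIGH_IOPS_BATCH_COUNT) %
			      HIGH_IOPS_REPLY_QUEUES);
}
```

Batching 16 IOs per queue before rotating amortizes per-interrupt cost while still spreading completions across all high-iops vectors.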
@@ -3325,8 +3502,8 @@ mpt3sas_base_get_smid_scsiio(struct MPT3SAS_ADAPTER *ioc, u8 cb_idx,
 
 	smid = tag + 1;
 	request->cb_idx = cb_idx;
-	request->msix_io = _base_get_msix_index(ioc);
 	request->smid = smid;
+	request->scmd = scmd;
 	INIT_LIST_HEAD(&request->chain_list);
 	return smid;
 }
@@ -3380,6 +3557,7 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc,
 		return;
 	st->cb_idx = 0xFF;
 	st->direct_io = 0;
+	st->scmd = NULL;
 	atomic_set(&ioc->chain_lookup[st->smid - 1].chain_offset, 0);
 	st->smid = 0;
 }
@@ -3479,13 +3657,37 @@ _base_writeq(__u64 b, volatile void __iomem *addr, spinlock_t *writeq_lock)
 #endif
 
 /**
+ * _base_set_and_get_msix_index - get the msix index and assign it to the
+ *				msix_io variable of the scsi tracker
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Return: msix index.
+ */
+static u8
+_base_set_and_get_msix_index(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+{
+	struct scsiio_tracker *st = NULL;
+
+	if (smid < ioc->hi_priority_smid)
+		st = _get_st_from_smid(ioc, smid);
+
+	if (st == NULL)
+		return _base_get_msix_index(ioc, NULL);
+
+	st->msix_io = ioc->get_msix_index_for_smlio(ioc, st->scmd);
+	return st->msix_io;
+}
+
+/**
  * _base_put_smid_mpi_ep_scsi_io - send SCSI_IO request to firmware
  * @ioc: per adapter object
  * @smid: system request message index
  * @handle: device handle
  */
 static void
-_base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
+_base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc,
+	u16 smid, u16 handle)
 {
 	Mpi2RequestDescriptorUnion_t descriptor;
 	u64 *request = (u64 *)&descriptor;
@@ -3498,7 +3700,7 @@ _base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
 	_base_clone_mpi_to_sys_mem(mpi_req_iomem, (void *)mfp,
 					ioc->request_sz);
 	descriptor.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
-	descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc);
+	descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
 	descriptor.SCSIIO.SMID = cpu_to_le16(smid);
 	descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
 	descriptor.SCSIIO.LMID = 0;
@@ -3520,7 +3722,7 @@ _base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
 
 
 	descriptor.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
-	descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc);
+	descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
 	descriptor.SCSIIO.SMID = cpu_to_le16(smid);
 	descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
 	descriptor.SCSIIO.LMID = 0;
@@ -3529,13 +3731,13 @@ _base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
 }
 
 /**
- * mpt3sas_base_put_smid_fast_path - send fast path request to firmware
+ * _base_put_smid_fast_path - send fast path request to firmware
  * @ioc: per adapter object
  * @smid: system request message index
  * @handle: device handle
  */
-void
-mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+static void
+_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 	u16 handle)
 {
 	Mpi2RequestDescriptorUnion_t descriptor;
@@ -3543,7 +3745,7 @@ mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 
 	descriptor.SCSIIO.RequestFlags =
 	    MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
-	descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc);
+	descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
 	descriptor.SCSIIO.SMID = cpu_to_le16(smid);
 	descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
 	descriptor.SCSIIO.LMID = 0;
@@ -3552,13 +3754,13 @@ mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 }
 
 /**
- * mpt3sas_base_put_smid_hi_priority - send Task Management request to firmware
+ * _base_put_smid_hi_priority - send Task Management request to firmware
  * @ioc: per adapter object
  * @smid: system request message index
  * @msix_task: msix_task will be same as msix of IO incase of task abort else 0.
  */
-void
-mpt3sas_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+static void
+_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 	u16 msix_task)
 {
 	Mpi2RequestDescriptorUnion_t descriptor;
@@ -3607,7 +3809,7 @@ mpt3sas_base_put_smid_nvme_encap(struct MPT3SAS_ADAPTER *ioc, u16 smid)
 
 	descriptor.Default.RequestFlags =
 	    MPI26_REQ_DESCRIPT_FLAGS_PCIE_ENCAPSULATED;
-	descriptor.Default.MSIxIndex = _base_get_msix_index(ioc);
+	descriptor.Default.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
 	descriptor.Default.SMID = cpu_to_le16(smid);
 	descriptor.Default.LMID = 0;
 	descriptor.Default.DescriptorTypeDependent = 0;
@@ -3616,12 +3818,12 @@ mpt3sas_base_put_smid_nvme_encap(struct MPT3SAS_ADAPTER *ioc, u16 smid)
 }
 
 /**
- * mpt3sas_base_put_smid_default - Default, primarily used for config pages
+ * _base_put_smid_default - Default, primarily used for config pages
  * @ioc: per adapter object
  * @smid: system request message index
  */
-void
-mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+static void
+_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
 {
 	Mpi2RequestDescriptorUnion_t descriptor;
 	void *mpi_req_iomem;
@@ -3639,7 +3841,7 @@ mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
 	}
 	request = (u64 *)&descriptor;
 	descriptor.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
-	descriptor.Default.MSIxIndex = _base_get_msix_index(ioc);
+	descriptor.Default.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
 	descriptor.Default.SMID = cpu_to_le16(smid);
 	descriptor.Default.LMID = 0;
 	descriptor.Default.DescriptorTypeDependent = 0;
@@ -3653,6 +3855,95 @@ mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
 }
 
 /**
+ * _base_put_smid_scsi_io_atomic - send SCSI_IO request to firmware using
+ *				Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @handle: device handle, unused in this function, for function type match
+ *
+ * Return: nothing.
+ */
+static void
+_base_put_smid_scsi_io_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+	u16 handle)
+{
+	Mpi26AtomicRequestDescriptor_t descriptor;
+	u32 *request = (u32 *)&descriptor;
+
+	descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
+	descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
+	descriptor.SMID = cpu_to_le16(smid);
+
+	writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
+
+/**
+ * _base_put_smid_fast_path_atomic - send fast path request to firmware
+ *				using Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @handle: device handle, unused in this function, for function type match
+ *
+ * Return: nothing.
+ */
+static void
+_base_put_smid_fast_path_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+	u16 handle)
+{
+	Mpi26AtomicRequestDescriptor_t descriptor;
+	u32 *request = (u32 *)&descriptor;
+
+	descriptor.RequestFlags = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
+	descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
+	descriptor.SMID = cpu_to_le16(smid);
+
+	writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
+
+/**
+ * _base_put_smid_hi_priority_atomic - send Task Management request to
+ *				firmware using Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @msix_task: msix_task will be same as msix of IO in case of task abort else 0
+ *
+ * Return: nothing.
+ */
+static void
+_base_put_smid_hi_priority_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+	u16 msix_task)
+{
+	Mpi26AtomicRequestDescriptor_t descriptor;
+	u32 *request = (u32 *)&descriptor;
+
+	descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
+	descriptor.MSIxIndex = msix_task;
+	descriptor.SMID = cpu_to_le16(smid);
+
+	writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
+
+/**
+ * _base_put_smid_default_atomic - Default, primarily used for config pages,
+ *				using Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Return: nothing.
+ */
+static void
+_base_put_smid_default_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+{
+	Mpi26AtomicRequestDescriptor_t descriptor;
+	u32 *request = (u32 *)&descriptor;
+
+	descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
+	descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
+	descriptor.SMID = cpu_to_le16(smid);
+
+	writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
+
+/**
  * _base_display_OEMs_branding - Display branding string
  * @ioc: per adapter object
  */
@@ -3952,7 +4243,7 @@ _base_display_fwpkg_version(struct MPT3SAS_ADAPTER *ioc)
 	ioc->build_sg(ioc, &mpi_request->SGL, 0, 0, fwpkg_data_dma,
 	    data_length);
 	init_completion(&ioc->base_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	/* Wait for 15 seconds */
 	wait_for_completion_timeout(&ioc->base_cmds.done,
 	    FW_IMG_HDR_READ_TIMEOUT*HZ);
@@ -4192,6 +4483,71 @@ out:
 }
 
 /**
+ * _base_update_ioc_page1_inlinewith_perf_mode - Update IOC Page1 fields
+ *				according to performance mode.
+ * @ioc: per adapter object
+ *
+ * Return: nothing.
+ */
+static void
+_base_update_ioc_page1_inlinewith_perf_mode(struct MPT3SAS_ADAPTER *ioc)
+{
+	Mpi2IOCPage1_t ioc_pg1;
+	Mpi2ConfigReply_t mpi_reply;
+
+	mpt3sas_config_get_ioc_pg1(ioc, &mpi_reply, &ioc->ioc_pg1_copy);
+	memcpy(&ioc_pg1, &ioc->ioc_pg1_copy, sizeof(Mpi2IOCPage1_t));
+
+	switch (perf_mode) {
+	case MPT_PERF_MODE_DEFAULT:
+	case MPT_PERF_MODE_BALANCED:
+		if (ioc->high_iops_queues) {
+			ioc_info(ioc,
+			    "Enable interrupt coalescing only for first %d reply queues\n",
+			    MPT3SAS_HIGH_IOPS_REPLY_QUEUES);
+			/*
+			 * If bit 31 is zero then interrupt coalescing is
+			 * enabled for all reply descriptor post queues.
+			 * If bit 31 is set to one then the user can
+			 * enable/disable interrupt coalescing on a per reply
+			 * descriptor post queue group (of 8) basis. So to
+			 * enable interrupt coalescing only on the first reply
+			 * descriptor post queue group, bit 31 and bit 0 are
+			 * set.
+			 */
+			ioc_pg1.ProductSpecific = cpu_to_le32(0x80000000 |
+			    ((1 << MPT3SAS_HIGH_IOPS_REPLY_QUEUES/8) - 1));
+			mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
+			ioc_info(ioc, "performance mode: balanced\n");
+			return;
+		}
+		/* Fall through */
+	case MPT_PERF_MODE_LATENCY:
+		/*
+		 * Enable interrupt coalescing on all reply queues
+		 * with timeout value 0xA
+		 */
+		ioc_pg1.CoalescingTimeout = cpu_to_le32(0xa);
+		ioc_pg1.Flags |= cpu_to_le32(MPI2_IOCPAGE1_REPLY_COALESCING);
+		ioc_pg1.ProductSpecific = 0;
+		mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
+		ioc_info(ioc, "performance mode: latency\n");
+		break;
+	case MPT_PERF_MODE_IOPS:
+		/*
+		 * Enable interrupt coalescing on all reply queues.
+		 */
+		ioc_info(ioc,
+		    "performance mode: iops with coalescing timeout: 0x%x\n",
+		    le32_to_cpu(ioc_pg1.CoalescingTimeout));
+		ioc_pg1.Flags |= cpu_to_le32(MPI2_IOCPAGE1_REPLY_COALESCING);
+		ioc_pg1.ProductSpecific = 0;
+		mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
+		break;
+	}
+}
+
+/**
  * _base_static_config_pages - static start of day config pages
 * @ioc: per adapter object
 */
@@ -4258,6 +4614,8 @@ _base_static_config_pages(struct MPT3SAS_ADAPTER *ioc)
 
 	if (ioc->iounit_pg8.NumSensors)
 		ioc->temp_sensors_count = ioc->iounit_pg8.NumSensors;
+	if (ioc->is_aero_ioc)
+		_base_update_ioc_page1_inlinewith_perf_mode(ioc);
 }
 
 /**
@@ -5431,7 +5789,7 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
 	    mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET)
 		ioc->ioc_link_reset_in_progress = 1;
 	init_completion(&ioc->base_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->base_cmds.done,
 	    msecs_to_jiffies(10000));
 	if ((mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET ||
@@ -5510,7 +5868,7 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
5510 ioc->base_cmds.smid = smid; 5868 ioc->base_cmds.smid = smid;
5511 memcpy(request, mpi_request, sizeof(Mpi2SepRequest_t)); 5869 memcpy(request, mpi_request, sizeof(Mpi2SepRequest_t));
5512 init_completion(&ioc->base_cmds.done); 5870 init_completion(&ioc->base_cmds.done);
5513 mpt3sas_base_put_smid_default(ioc, smid); 5871 ioc->put_smid_default(ioc, smid);
5514 wait_for_completion_timeout(&ioc->base_cmds.done, 5872 wait_for_completion_timeout(&ioc->base_cmds.done,
5515 msecs_to_jiffies(10000)); 5873 msecs_to_jiffies(10000));
5516 if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) { 5874 if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
@@ -5693,6 +6051,9 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc)
5693 if ((facts->IOCCapabilities & 6051 if ((facts->IOCCapabilities &
5694 MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE) && (!reset_devices)) 6052 MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE) && (!reset_devices))
5695 ioc->rdpq_array_capable = 1; 6053 ioc->rdpq_array_capable = 1;
6054 if ((facts->IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ)
6055 && ioc->is_aero_ioc)
6056 ioc->atomic_desc_capable = 1;
5696 facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word); 6057 facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word);
5697 facts->IOCRequestFrameSize = 6058 facts->IOCRequestFrameSize =
5698 le16_to_cpu(mpi_reply.IOCRequestFrameSize); 6059 le16_to_cpu(mpi_reply.IOCRequestFrameSize);
@@ -5914,7 +6275,7 @@ _base_send_port_enable(struct MPT3SAS_ADAPTER *ioc)
5914 mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE; 6275 mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE;
5915 6276
5916 init_completion(&ioc->port_enable_cmds.done); 6277 init_completion(&ioc->port_enable_cmds.done);
5917 mpt3sas_base_put_smid_default(ioc, smid); 6278 ioc->put_smid_default(ioc, smid);
5918 wait_for_completion_timeout(&ioc->port_enable_cmds.done, 300*HZ); 6279 wait_for_completion_timeout(&ioc->port_enable_cmds.done, 300*HZ);
5919 if (!(ioc->port_enable_cmds.status & MPT3_CMD_COMPLETE)) { 6280 if (!(ioc->port_enable_cmds.status & MPT3_CMD_COMPLETE)) {
5920 ioc_err(ioc, "%s: timeout\n", __func__); 6281 ioc_err(ioc, "%s: timeout\n", __func__);
@@ -5973,7 +6334,7 @@ mpt3sas_port_enable(struct MPT3SAS_ADAPTER *ioc)
5973 memset(mpi_request, 0, sizeof(Mpi2PortEnableRequest_t)); 6334 memset(mpi_request, 0, sizeof(Mpi2PortEnableRequest_t));
5974 mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE; 6335 mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE;
5975 6336
5976 mpt3sas_base_put_smid_default(ioc, smid); 6337 ioc->put_smid_default(ioc, smid);
5977 return 0; 6338 return 0;
5978} 6339}
5979 6340
@@ -6089,7 +6450,7 @@ _base_event_notification(struct MPT3SAS_ADAPTER *ioc)
6089 mpi_request->EventMasks[i] = 6450 mpi_request->EventMasks[i] =
6090 cpu_to_le32(ioc->event_masks[i]); 6451 cpu_to_le32(ioc->event_masks[i]);
6091 init_completion(&ioc->base_cmds.done); 6452 init_completion(&ioc->base_cmds.done);
6092 mpt3sas_base_put_smid_default(ioc, smid); 6453 ioc->put_smid_default(ioc, smid);
6093 wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ); 6454 wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ);
6094 if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) { 6455 if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
6095 ioc_err(ioc, "%s: timeout\n", __func__); 6456 ioc_err(ioc, "%s: timeout\n", __func__);
@@ -6549,6 +6910,8 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
6549 } 6910 }
6550 } 6911 }
6551 6912
6913 ioc->smp_affinity_enable = smp_affinity_enable;
6914
6552 ioc->rdpq_array_enable_assigned = 0; 6915 ioc->rdpq_array_enable_assigned = 0;
6553 ioc->dma_mask = 0; 6916 ioc->dma_mask = 0;
6554 if (ioc->is_aero_ioc) 6917 if (ioc->is_aero_ioc)
@@ -6569,6 +6932,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
6569 ioc->build_sg_scmd = &_base_build_sg_scmd; 6932 ioc->build_sg_scmd = &_base_build_sg_scmd;
6570 ioc->build_sg = &_base_build_sg; 6933 ioc->build_sg = &_base_build_sg;
6571 ioc->build_zero_len_sge = &_base_build_zero_len_sge; 6934 ioc->build_zero_len_sge = &_base_build_zero_len_sge;
6935 ioc->get_msix_index_for_smlio = &_base_get_msix_index;
6572 break; 6936 break;
6573 case MPI25_VERSION: 6937 case MPI25_VERSION:
6574 case MPI26_VERSION: 6938 case MPI26_VERSION:
@@ -6583,15 +6947,30 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
6583 ioc->build_nvme_prp = &_base_build_nvme_prp; 6947 ioc->build_nvme_prp = &_base_build_nvme_prp;
6584 ioc->build_zero_len_sge = &_base_build_zero_len_sge_ieee; 6948 ioc->build_zero_len_sge = &_base_build_zero_len_sge_ieee;
6585 ioc->sge_size_ieee = sizeof(Mpi2IeeeSgeSimple64_t); 6949 ioc->sge_size_ieee = sizeof(Mpi2IeeeSgeSimple64_t);
6586 6950 if (ioc->high_iops_queues)
6951 ioc->get_msix_index_for_smlio =
6952 &_base_get_high_iops_msix_index;
6953 else
6954 ioc->get_msix_index_for_smlio = &_base_get_msix_index;
6587 break; 6955 break;
6588 } 6956 }
6589 6957 if (ioc->atomic_desc_capable) {
6590 if (ioc->is_mcpu_endpoint) 6958 ioc->put_smid_default = &_base_put_smid_default_atomic;
6591 ioc->put_smid_scsi_io = &_base_put_smid_mpi_ep_scsi_io; 6959 ioc->put_smid_scsi_io = &_base_put_smid_scsi_io_atomic;
6592 else 6960 ioc->put_smid_fast_path =
6593 ioc->put_smid_scsi_io = &_base_put_smid_scsi_io; 6961 &_base_put_smid_fast_path_atomic;
6594 6962 ioc->put_smid_hi_priority =
6963 &_base_put_smid_hi_priority_atomic;
6964 } else {
6965 ioc->put_smid_default = &_base_put_smid_default;
6966 ioc->put_smid_fast_path = &_base_put_smid_fast_path;
6967 ioc->put_smid_hi_priority = &_base_put_smid_hi_priority;
6968 if (ioc->is_mcpu_endpoint)
6969 ioc->put_smid_scsi_io =
6970 &_base_put_smid_mpi_ep_scsi_io;
6971 else
6972 ioc->put_smid_scsi_io = &_base_put_smid_scsi_io;
6973 }
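The hunks throughout this patch replace direct `mpt3sas_base_put_smid_*` calls with per-adapter function pointers bound once at attach time, so controllers advertising Atomic Request Descriptor support submit through the atomic variants. A hedged sketch of that dispatch pattern, with simplified stand-in names for the driver's callbacks:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for struct MPT3SAS_ADAPTER's submit callback. */
struct adapter {
	int atomic_desc_capable;
	const char *(*put_smid_default)(unsigned short smid);
};

/* Legacy path: a 64-bit request descriptor, two register writes. */
static const char *put_smid_default_legacy(unsigned short smid)
{
	(void)smid;
	return "legacy";
}

/* Atomic path: a single 32-bit atomic request descriptor write. */
static const char *put_smid_default_atomic(unsigned short smid)
{
	(void)smid;
	return "atomic";
}

/* Bound once at attach time; hot-path callers just use the pointer. */
static void bind_submit_ops(struct adapter *ioc)
{
	ioc->put_smid_default = ioc->atomic_desc_capable ?
	    put_smid_default_atomic : put_smid_default_legacy;
}

static const char *submit_mode(int atomic_capable)
{
	struct adapter ioc = { .atomic_desc_capable = atomic_capable };

	bind_submit_ops(&ioc);
	return ioc.put_smid_default(0);
}
```

This keeps capability checks out of the I/O hot path: callers such as `_config_request()` just invoke `ioc->put_smid_default(ioc, smid)` without branching on the descriptor type.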
6595 /* 6974 /*
6596 * These function pointers are for other requests that don't 6975 * These function pointers are for other requests that don't
6597 * require IEEE scatter gather elements. 6976 * require IEEE scatter gather elements.
diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
index 480219f0efc5..6afbdb044310 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
@@ -76,8 +76,8 @@
76#define MPT3SAS_DRIVER_NAME "mpt3sas" 76#define MPT3SAS_DRIVER_NAME "mpt3sas"
77#define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>" 77#define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>"
78#define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver" 78#define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver"
79#define MPT3SAS_DRIVER_VERSION "28.100.00.00" 79#define MPT3SAS_DRIVER_VERSION "29.100.00.00"
80#define MPT3SAS_MAJOR_VERSION 28 80#define MPT3SAS_MAJOR_VERSION 29
81#define MPT3SAS_MINOR_VERSION 100 81#define MPT3SAS_MINOR_VERSION 100
82#define MPT3SAS_BUILD_VERSION 0 82#define MPT3SAS_BUILD_VERSION 0
83#define MPT3SAS_RELEASE_VERSION 00 83#define MPT3SAS_RELEASE_VERSION 00
@@ -355,6 +355,12 @@ struct mpt3sas_nvme_cmd {
355 355
356#define VIRTUAL_IO_FAILED_RETRY (0x32010081) 356#define VIRTUAL_IO_FAILED_RETRY (0x32010081)
357 357
358/* High IOPs definitions */
359#define MPT3SAS_DEVICE_HIGH_IOPS_DEPTH 8
360#define MPT3SAS_HIGH_IOPS_REPLY_QUEUES 8
361#define MPT3SAS_HIGH_IOPS_BATCH_COUNT 16
362#define MPT3SAS_GEN35_MAX_MSIX_QUEUES 128
363
358/* OEM Specific Flags will come from OEM specific header files */ 364/* OEM Specific Flags will come from OEM specific header files */
359struct Mpi2ManufacturingPage10_t { 365struct Mpi2ManufacturingPage10_t {
360 MPI2_CONFIG_PAGE_HEADER Header; /* 00h */ 366 MPI2_CONFIG_PAGE_HEADER Header; /* 00h */
@@ -824,6 +830,7 @@ struct chain_lookup {
824 */ 830 */
825struct scsiio_tracker { 831struct scsiio_tracker {
826 u16 smid; 832 u16 smid;
833 struct scsi_cmnd *scmd;
827 u8 cb_idx; 834 u8 cb_idx;
828 u8 direct_io; 835 u8 direct_io;
829 struct pcie_sg_list pcie_sg_list; 836 struct pcie_sg_list pcie_sg_list;
@@ -924,6 +931,12 @@ typedef void (*PUT_SMID_IO_FP_HIP) (struct MPT3SAS_ADAPTER *ioc, u16 smid,
924 u16 funcdep); 931 u16 funcdep);
925typedef void (*PUT_SMID_DEFAULT) (struct MPT3SAS_ADAPTER *ioc, u16 smid); 932typedef void (*PUT_SMID_DEFAULT) (struct MPT3SAS_ADAPTER *ioc, u16 smid);
926typedef u32 (*BASE_READ_REG) (const volatile void __iomem *addr); 933typedef u32 (*BASE_READ_REG) (const volatile void __iomem *addr);
934/*
935 * Returns a high iops reply queue's msix index when high iops mode is
936 * enabled; otherwise returns the msix index of a general reply queue.
937 */
938typedef u8 (*GET_MSIX_INDEX) (struct MPT3SAS_ADAPTER *ioc,
939 struct scsi_cmnd *scmd);
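The new `GET_MSIX_INDEX` callback lets SCSI mid-layer I/O pick a reply queue. One way such a callback can balance load is a running counter reduced modulo the high-IOPS queue count; the sketch below illustrates the idea only and does not reproduce the driver's exact arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Queue count assumed from MPT3SAS_HIGH_IOPS_REPLY_QUEUES in this patch. */
#define HIGH_IOPS_QUEUES 8

static uint64_t outstanding;	/* stands in for ioc->high_iops_outstanding */

/* Round-robin across the high-IOPS reply queues. */
static uint8_t get_high_iops_msix_index(void)
{
	return (uint8_t)(outstanding++ % HIGH_IOPS_QUEUES);
}
```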
927 940
928/* IOC Facts and Port Facts converted from little endian to cpu */ 941/* IOC Facts and Port Facts converted from little endian to cpu */
929union mpi3_version_union { 942union mpi3_version_union {
@@ -1025,6 +1038,8 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
1025 * @cpu_msix_table: table for mapping cpus to msix index 1038 * @cpu_msix_table: table for mapping cpus to msix index
1026 * @cpu_msix_table_sz: table size 1039 * @cpu_msix_table_sz: table size
1027 * @total_io_cnt: Gives total IO count, used to load balance the interrupts 1040 * @total_io_cnt: Gives total IO count, used to load balance the interrupts
1041 * @high_iops_outstanding: used to load balance the interrupts
1042 * within high iops reply queues
1028 * @msix_load_balance: Enables load balancing of interrupts across 1043 * @msix_load_balance: Enables load balancing of interrupts across
1029 * the multiple MSIXs 1044 * the multiple MSIXs
1030 * @schedule_dead_ioc_flush_running_cmds: callback to flush pending commands 1045 * @schedule_dead_ioc_flush_running_cmds: callback to flush pending commands
@@ -1147,6 +1162,8 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
1147 * path functions, resulting in a NULL pointer dereference followed by a 1162 * path functions, resulting in a NULL pointer dereference followed by a
1148 * kernel crash. To avoid this race condition we use mutex 1163 * kernel crash. To avoid this race condition we use mutex
1149 * synchronization between the cli and sysfs_show paths. 1164 * synchronization between the cli and sysfs_show paths.
1165 * @atomic_desc_capable: Atomic Request Descriptor support.
1166 * @get_msix_index_for_smlio: Get the msix index of high iops queues.
1150 */ 1167 */
1151struct MPT3SAS_ADAPTER { 1168struct MPT3SAS_ADAPTER {
1152 struct list_head list; 1169 struct list_head list;
@@ -1206,8 +1223,10 @@ struct MPT3SAS_ADAPTER {
1206 MPT3SAS_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds; 1223 MPT3SAS_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds;
1207 u32 non_operational_loop; 1224 u32 non_operational_loop;
1208 atomic64_t total_io_cnt; 1225 atomic64_t total_io_cnt;
1226 atomic64_t high_iops_outstanding;
1209 bool msix_load_balance; 1227 bool msix_load_balance;
1210 u16 thresh_hold; 1228 u16 thresh_hold;
1229 u8 high_iops_queues;
1211 1230
1212 /* internal commands, callback index */ 1231 /* internal commands, callback index */
1213 u8 scsi_io_cb_idx; 1232 u8 scsi_io_cb_idx;
@@ -1267,6 +1286,7 @@ struct MPT3SAS_ADAPTER {
1267 Mpi2IOUnitPage0_t iounit_pg0; 1286 Mpi2IOUnitPage0_t iounit_pg0;
1268 Mpi2IOUnitPage1_t iounit_pg1; 1287 Mpi2IOUnitPage1_t iounit_pg1;
1269 Mpi2IOUnitPage8_t iounit_pg8; 1288 Mpi2IOUnitPage8_t iounit_pg8;
1289 Mpi2IOCPage1_t ioc_pg1_copy;
1270 1290
1271 struct _boot_device req_boot_device; 1291 struct _boot_device req_boot_device;
1272 struct _boot_device req_alt_boot_device; 1292 struct _boot_device req_alt_boot_device;
@@ -1385,6 +1405,7 @@ struct MPT3SAS_ADAPTER {
1385 1405
1386 u8 combined_reply_queue; 1406 u8 combined_reply_queue;
1387 u8 combined_reply_index_count; 1407 u8 combined_reply_index_count;
1408 u8 smp_affinity_enable;
1388 /* reply post register index */ 1409 /* reply post register index */
1389 resource_size_t **replyPostRegisterIndex; 1410 resource_size_t **replyPostRegisterIndex;
1390 1411
@@ -1412,6 +1433,7 @@ struct MPT3SAS_ADAPTER {
1412 u8 hide_drives; 1433 u8 hide_drives;
1413 spinlock_t diag_trigger_lock; 1434 spinlock_t diag_trigger_lock;
1414 u8 diag_trigger_active; 1435 u8 diag_trigger_active;
1436 u8 atomic_desc_capable;
1415 BASE_READ_REG base_readl; 1437 BASE_READ_REG base_readl;
1416 struct SL_WH_MASTER_TRIGGER_T diag_trigger_master; 1438 struct SL_WH_MASTER_TRIGGER_T diag_trigger_master;
1417 struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event; 1439 struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event;
@@ -1422,7 +1444,10 @@ struct MPT3SAS_ADAPTER {
1422 u8 is_gen35_ioc; 1444 u8 is_gen35_ioc;
1423 u8 is_aero_ioc; 1445 u8 is_aero_ioc;
1424 PUT_SMID_IO_FP_HIP put_smid_scsi_io; 1446 PUT_SMID_IO_FP_HIP put_smid_scsi_io;
1425 1447 PUT_SMID_IO_FP_HIP put_smid_fast_path;
1448 PUT_SMID_IO_FP_HIP put_smid_hi_priority;
1449 PUT_SMID_DEFAULT put_smid_default;
1450 GET_MSIX_INDEX get_msix_index_for_smlio;
1426}; 1451};
1427 1452
1428typedef u8 (*MPT_CALLBACK)(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, 1453typedef u8 (*MPT_CALLBACK)(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
@@ -1611,6 +1636,10 @@ int mpt3sas_config_get_sas_iounit_pg1(struct MPT3SAS_ADAPTER *ioc,
1611int mpt3sas_config_set_sas_iounit_pg1(struct MPT3SAS_ADAPTER *ioc, 1636int mpt3sas_config_set_sas_iounit_pg1(struct MPT3SAS_ADAPTER *ioc,
1612 Mpi2ConfigReply_t *mpi_reply, Mpi2SasIOUnitPage1_t *config_page, 1637 Mpi2ConfigReply_t *mpi_reply, Mpi2SasIOUnitPage1_t *config_page,
1613 u16 sz); 1638 u16 sz);
1639int mpt3sas_config_get_ioc_pg1(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
1640 *mpi_reply, Mpi2IOCPage1_t *config_page);
1641int mpt3sas_config_set_ioc_pg1(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
1642 *mpi_reply, Mpi2IOCPage1_t *config_page);
1614int mpt3sas_config_get_ioc_pg8(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t 1643int mpt3sas_config_get_ioc_pg8(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
1615 *mpi_reply, Mpi2IOCPage8_t *config_page); 1644 *mpi_reply, Mpi2IOCPage8_t *config_page);
1616int mpt3sas_config_get_expander_pg0(struct MPT3SAS_ADAPTER *ioc, 1645int mpt3sas_config_get_expander_pg0(struct MPT3SAS_ADAPTER *ioc,
diff --git a/drivers/scsi/mpt3sas/mpt3sas_config.c b/drivers/scsi/mpt3sas/mpt3sas_config.c
index fb0a17252f86..14a1a2793dd5 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_config.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_config.c
@@ -380,7 +380,7 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
380 memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t)); 380 memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t));
381 _config_display_some_debug(ioc, smid, "config_request", NULL); 381 _config_display_some_debug(ioc, smid, "config_request", NULL);
382 init_completion(&ioc->config_cmds.done); 382 init_completion(&ioc->config_cmds.done);
383 mpt3sas_base_put_smid_default(ioc, smid); 383 ioc->put_smid_default(ioc, smid);
384 wait_for_completion_timeout(&ioc->config_cmds.done, timeout*HZ); 384 wait_for_completion_timeout(&ioc->config_cmds.done, timeout*HZ);
385 if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) { 385 if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) {
386 mpt3sas_base_check_cmd_timeout(ioc, 386 mpt3sas_base_check_cmd_timeout(ioc,
@@ -949,6 +949,77 @@ mpt3sas_config_get_ioc_pg8(struct MPT3SAS_ADAPTER *ioc,
949 out: 949 out:
950 return r; 950 return r;
951} 951}
952/**
953 * mpt3sas_config_get_ioc_pg1 - obtain ioc page 1
954 * @ioc: per adapter object
955 * @mpi_reply: reply mf payload returned from firmware
956 * @config_page: contents of the config page
957 * Context: sleep.
958 *
959 * Return: 0 for success, non-zero for failure.
960 */
961int
962mpt3sas_config_get_ioc_pg1(struct MPT3SAS_ADAPTER *ioc,
963 Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage1_t *config_page)
964{
965 Mpi2ConfigRequest_t mpi_request;
966 int r;
967
968 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
969 mpi_request.Function = MPI2_FUNCTION_CONFIG;
970 mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
971 mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IOC;
972 mpi_request.Header.PageNumber = 1;
973 mpi_request.Header.PageVersion = MPI2_IOCPAGE1_PAGEVERSION;
974 ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
975 r = _config_request(ioc, &mpi_request, mpi_reply,
976 MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
977 if (r)
978 goto out;
979
980 mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
981 r = _config_request(ioc, &mpi_request, mpi_reply,
982 MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
983 sizeof(*config_page));
984 out:
985 return r;
986}
987
988/**
989 * mpt3sas_config_set_ioc_pg1 - modify ioc page 1
990 * @ioc: per adapter object
991 * @mpi_reply: reply mf payload returned from firmware
992 * @config_page: contents of the config page
993 * Context: sleep.
994 *
995 * Return: 0 for success, non-zero for failure.
996 */
997int
998mpt3sas_config_set_ioc_pg1(struct MPT3SAS_ADAPTER *ioc,
999 Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage1_t *config_page)
1000{
1001 Mpi2ConfigRequest_t mpi_request;
1002 int r;
1003
1004 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
1005 mpi_request.Function = MPI2_FUNCTION_CONFIG;
1006 mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
1007 mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IOC;
1008 mpi_request.Header.PageNumber = 1;
1009 mpi_request.Header.PageVersion = MPI2_IOCPAGE1_PAGEVERSION;
1010 ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
1011 r = _config_request(ioc, &mpi_request, mpi_reply,
1012 MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
1013 if (r)
1014 goto out;
1015
1016 mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT;
1017 r = _config_request(ioc, &mpi_request, mpi_reply,
1018 MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
1019 sizeof(*config_page));
1020 out:
1021 return r;
1022}
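Both helpers above follow the driver's two-step config transaction: a PAGE_HEADER action to validate the page, then READ_CURRENT or WRITE_CURRENT with the caller's buffer. A mock of that sequence; `config_request` and the enum are stand-ins for `_config_request()` and the MPI2 action codes:

```c
#include <assert.h>

/* Stand-ins for the MPI2 config actions used by the helpers above. */
enum action { PAGE_HEADER, READ_CURRENT, WRITE_CURRENT };

static int nsteps;
static enum action steps[2];

/* Mock of _config_request(): record the action, pretend fw succeeded. */
static int config_request(enum action a)
{
	steps[nsteps++] = a;
	return 0;
}

/* Same two-step shape as mpt3sas_config_get_ioc_pg1() above. */
static int get_ioc_pg1(void)
{
	int r = config_request(PAGE_HEADER);	/* step 1: header probe */

	if (r)
		return r;
	return config_request(READ_CURRENT);	/* step 2: fetch contents */
}
```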
952 1023
953/** 1024/**
954 * mpt3sas_config_get_sas_device_pg0 - obtain sas device page 0 1025 * mpt3sas_config_get_sas_device_pg0 - obtain sas device page 0
diff --git a/drivers/scsi/mpt3sas/mpt3sas_ctl.c b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
index b2bb47c14d35..d4ecfbbe738c 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
@@ -822,7 +822,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
822 if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST) 822 if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST)
823 ioc->put_smid_scsi_io(ioc, smid, device_handle); 823 ioc->put_smid_scsi_io(ioc, smid, device_handle);
824 else 824 else
825 mpt3sas_base_put_smid_default(ioc, smid); 825 ioc->put_smid_default(ioc, smid);
826 break; 826 break;
827 } 827 }
828 case MPI2_FUNCTION_SCSI_TASK_MGMT: 828 case MPI2_FUNCTION_SCSI_TASK_MGMT:
@@ -859,7 +859,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
859 tm_request->DevHandle)); 859 tm_request->DevHandle));
860 ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz, 860 ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
861 data_in_dma, data_in_sz); 861 data_in_dma, data_in_sz);
862 mpt3sas_base_put_smid_hi_priority(ioc, smid, 0); 862 ioc->put_smid_hi_priority(ioc, smid, 0);
863 break; 863 break;
864 } 864 }
865 case MPI2_FUNCTION_SMP_PASSTHROUGH: 865 case MPI2_FUNCTION_SMP_PASSTHROUGH:
@@ -890,7 +890,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
890 } 890 }
891 ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma, 891 ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
892 data_in_sz); 892 data_in_sz);
893 mpt3sas_base_put_smid_default(ioc, smid); 893 ioc->put_smid_default(ioc, smid);
894 break; 894 break;
895 } 895 }
896 case MPI2_FUNCTION_SATA_PASSTHROUGH: 896 case MPI2_FUNCTION_SATA_PASSTHROUGH:
@@ -905,7 +905,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
905 } 905 }
906 ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma, 906 ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
907 data_in_sz); 907 data_in_sz);
908 mpt3sas_base_put_smid_default(ioc, smid); 908 ioc->put_smid_default(ioc, smid);
909 break; 909 break;
910 } 910 }
911 case MPI2_FUNCTION_FW_DOWNLOAD: 911 case MPI2_FUNCTION_FW_DOWNLOAD:
@@ -913,7 +913,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
913 { 913 {
914 ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma, 914 ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
915 data_in_sz); 915 data_in_sz);
916 mpt3sas_base_put_smid_default(ioc, smid); 916 ioc->put_smid_default(ioc, smid);
917 break; 917 break;
918 } 918 }
919 case MPI2_FUNCTION_TOOLBOX: 919 case MPI2_FUNCTION_TOOLBOX:
@@ -928,7 +928,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
928 ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz, 928 ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
929 data_in_dma, data_in_sz); 929 data_in_dma, data_in_sz);
930 } 930 }
931 mpt3sas_base_put_smid_default(ioc, smid); 931 ioc->put_smid_default(ioc, smid);
932 break; 932 break;
933 } 933 }
934 case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL: 934 case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL:
@@ -948,7 +948,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
948 default: 948 default:
949 ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz, 949 ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
950 data_in_dma, data_in_sz); 950 data_in_dma, data_in_sz);
951 mpt3sas_base_put_smid_default(ioc, smid); 951 ioc->put_smid_default(ioc, smid);
952 break; 952 break;
953 } 953 }
954 954
@@ -1576,7 +1576,7 @@ _ctl_diag_register_2(struct MPT3SAS_ADAPTER *ioc,
1576 cpu_to_le32(ioc->product_specific[buffer_type][i]); 1576 cpu_to_le32(ioc->product_specific[buffer_type][i]);
1577 1577
1578 init_completion(&ioc->ctl_cmds.done); 1578 init_completion(&ioc->ctl_cmds.done);
1579 mpt3sas_base_put_smid_default(ioc, smid); 1579 ioc->put_smid_default(ioc, smid);
1580 wait_for_completion_timeout(&ioc->ctl_cmds.done, 1580 wait_for_completion_timeout(&ioc->ctl_cmds.done,
1581 MPT3_IOCTL_DEFAULT_TIMEOUT*HZ); 1581 MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
1582 1582
@@ -1903,7 +1903,7 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
1903 mpi_request->VP_ID = 0; 1903 mpi_request->VP_ID = 0;
1904 1904
1905 init_completion(&ioc->ctl_cmds.done); 1905 init_completion(&ioc->ctl_cmds.done);
1906 mpt3sas_base_put_smid_default(ioc, smid); 1906 ioc->put_smid_default(ioc, smid);
1907 wait_for_completion_timeout(&ioc->ctl_cmds.done, 1907 wait_for_completion_timeout(&ioc->ctl_cmds.done,
1908 MPT3_IOCTL_DEFAULT_TIMEOUT*HZ); 1908 MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
1909 1909
@@ -2151,7 +2151,7 @@ _ctl_diag_read_buffer(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
2151 mpi_request->VP_ID = 0; 2151 mpi_request->VP_ID = 0;
2152 2152
2153 init_completion(&ioc->ctl_cmds.done); 2153 init_completion(&ioc->ctl_cmds.done);
2154 mpt3sas_base_put_smid_default(ioc, smid); 2154 ioc->put_smid_default(ioc, smid);
2155 wait_for_completion_timeout(&ioc->ctl_cmds.done, 2155 wait_for_completion_timeout(&ioc->ctl_cmds.done,
2156 MPT3_IOCTL_DEFAULT_TIMEOUT*HZ); 2156 MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
2157 2157
@@ -2319,6 +2319,10 @@ _ctl_ioctl_main(struct file *file, unsigned int cmd, void __user *arg,
2319 break; 2319 break;
2320 } 2320 }
2321 2321
2322 if (karg.hdr.ioc_number != ioctl_header.ioc_number) {
2323 ret = -EINVAL;
2324 break;
2325 }
2322 if (_IOC_SIZE(cmd) == sizeof(struct mpt3_ioctl_command)) { 2326 if (_IOC_SIZE(cmd) == sizeof(struct mpt3_ioctl_command)) {
2323 uarg = arg; 2327 uarg = arg;
2324 ret = _ctl_do_mpt_command(ioc, karg, &uarg->mf); 2328 ret = _ctl_do_mpt_command(ioc, karg, &uarg->mf);
@@ -2453,7 +2457,7 @@ _ctl_mpt2_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
2453 2457
2454/* scsi host attributes */ 2458/* scsi host attributes */
2455/** 2459/**
2456 * _ctl_version_fw_show - firmware version 2460 * version_fw_show - firmware version
2457 * @cdev: pointer to embedded class device 2461 * @cdev: pointer to embedded class device
2458 * @attr: ? 2462 * @attr: ?
2459 * @buf: the buffer returned 2463 * @buf: the buffer returned
@@ -2461,7 +2465,7 @@ _ctl_mpt2_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
2461 * A sysfs 'read-only' shost attribute. 2465 * A sysfs 'read-only' shost attribute.
2462 */ 2466 */
2463static ssize_t 2467static ssize_t
2464_ctl_version_fw_show(struct device *cdev, struct device_attribute *attr, 2468version_fw_show(struct device *cdev, struct device_attribute *attr,
2465 char *buf) 2469 char *buf)
2466{ 2470{
2467 struct Scsi_Host *shost = class_to_shost(cdev); 2471 struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2473,10 +2477,10 @@ _ctl_version_fw_show(struct device *cdev, struct device_attribute *attr,
2473 (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8, 2477 (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8,
2474 ioc->facts.FWVersion.Word & 0x000000FF); 2478 ioc->facts.FWVersion.Word & 0x000000FF);
2475} 2479}
2476static DEVICE_ATTR(version_fw, S_IRUGO, _ctl_version_fw_show, NULL); 2480static DEVICE_ATTR_RO(version_fw);
2477 2481
2478/** 2482/**
2479 * _ctl_version_bios_show - bios version 2483 * version_bios_show - bios version
2480 * @cdev: pointer to embedded class device 2484 * @cdev: pointer to embedded class device
2481 * @attr: ? 2485 * @attr: ?
2482 * @buf: the buffer returned 2486 * @buf: the buffer returned
@@ -2484,7 +2488,7 @@ static DEVICE_ATTR(version_fw, S_IRUGO, _ctl_version_fw_show, NULL);
2484 * A sysfs 'read-only' shost attribute. 2488 * A sysfs 'read-only' shost attribute.
2485 */ 2489 */
2486static ssize_t 2490static ssize_t
2487_ctl_version_bios_show(struct device *cdev, struct device_attribute *attr, 2491version_bios_show(struct device *cdev, struct device_attribute *attr,
2488 char *buf) 2492 char *buf)
2489{ 2493{
2490 struct Scsi_Host *shost = class_to_shost(cdev); 2494 struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2498,10 +2502,10 @@ _ctl_version_bios_show(struct device *cdev, struct device_attribute *attr,
2498 (version & 0x0000FF00) >> 8, 2502 (version & 0x0000FF00) >> 8,
2499 version & 0x000000FF); 2503 version & 0x000000FF);
2500} 2504}
2501static DEVICE_ATTR(version_bios, S_IRUGO, _ctl_version_bios_show, NULL); 2505static DEVICE_ATTR_RO(version_bios);
2502 2506
2503/** 2507/**
2504 * _ctl_version_mpi_show - MPI (message passing interface) version 2508 * version_mpi_show - MPI (message passing interface) version
2505 * @cdev: pointer to embedded class device 2509 * @cdev: pointer to embedded class device
2506 * @attr: ? 2510 * @attr: ?
2507 * @buf: the buffer returned 2511 * @buf: the buffer returned
@@ -2509,7 +2513,7 @@ static DEVICE_ATTR(version_bios, S_IRUGO, _ctl_version_bios_show, NULL);
2509 * A sysfs 'read-only' shost attribute. 2513 * A sysfs 'read-only' shost attribute.
2510 */ 2514 */
2511static ssize_t 2515static ssize_t
2512_ctl_version_mpi_show(struct device *cdev, struct device_attribute *attr, 2516version_mpi_show(struct device *cdev, struct device_attribute *attr,
2513 char *buf) 2517 char *buf)
2514{ 2518{
2515 struct Scsi_Host *shost = class_to_shost(cdev); 2519 struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2518,10 +2522,10 @@ _ctl_version_mpi_show(struct device *cdev, struct device_attribute *attr,
2518 return snprintf(buf, PAGE_SIZE, "%03x.%02x\n", 2522 return snprintf(buf, PAGE_SIZE, "%03x.%02x\n",
2519 ioc->facts.MsgVersion, ioc->facts.HeaderVersion >> 8); 2523 ioc->facts.MsgVersion, ioc->facts.HeaderVersion >> 8);
2520} 2524}
2521static DEVICE_ATTR(version_mpi, S_IRUGO, _ctl_version_mpi_show, NULL); 2525static DEVICE_ATTR_RO(version_mpi);
2522 2526
2523/** 2527/**
2524 * _ctl_version_product_show - product name 2528 * version_product_show - product name
2525 * @cdev: pointer to embedded class device 2529 * @cdev: pointer to embedded class device
2526 * @attr: ? 2530 * @attr: ?
2527 * @buf: the buffer returned 2531 * @buf: the buffer returned
@@ -2529,7 +2533,7 @@ static DEVICE_ATTR(version_mpi, S_IRUGO, _ctl_version_mpi_show, NULL);
2529 * A sysfs 'read-only' shost attribute. 2533 * A sysfs 'read-only' shost attribute.
2530 */ 2534 */
2531static ssize_t 2535static ssize_t
2532_ctl_version_product_show(struct device *cdev, struct device_attribute *attr, 2536version_product_show(struct device *cdev, struct device_attribute *attr,
2533 char *buf) 2537 char *buf)
2534{ 2538{
2535 struct Scsi_Host *shost = class_to_shost(cdev); 2539 struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2537,10 +2541,10 @@ _ctl_version_product_show(struct device *cdev, struct device_attribute *attr,
2537 2541
2538 return snprintf(buf, 16, "%s\n", ioc->manu_pg0.ChipName); 2542 return snprintf(buf, 16, "%s\n", ioc->manu_pg0.ChipName);
2539} 2543}
2540static DEVICE_ATTR(version_product, S_IRUGO, _ctl_version_product_show, NULL); 2544static DEVICE_ATTR_RO(version_product);
2541 2545
2542/** 2546/**
2543 * _ctl_version_nvdata_persistent_show - nvdata persistent version 2547 * version_nvdata_persistent_show - nvdata persistent version
2544 * @cdev: pointer to embedded class device 2548 * @cdev: pointer to embedded class device
2545 * @attr: ? 2549 * @attr: ?
2546 * @buf: the buffer returned 2550 * @buf: the buffer returned
@@ -2548,7 +2552,7 @@ static DEVICE_ATTR(version_product, S_IRUGO, _ctl_version_product_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_version_nvdata_persistent_show(struct device *cdev,
+version_nvdata_persistent_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2557,11 +2561,10 @@ _ctl_version_nvdata_persistent_show(struct device *cdev,
         return snprintf(buf, PAGE_SIZE, "%08xh\n",
             le32_to_cpu(ioc->iounit_pg0.NvdataVersionPersistent.Word));
 }
-static DEVICE_ATTR(version_nvdata_persistent, S_IRUGO,
-        _ctl_version_nvdata_persistent_show, NULL);
+static DEVICE_ATTR_RO(version_nvdata_persistent);
 
 /**
- * _ctl_version_nvdata_default_show - nvdata default version
+ * version_nvdata_default_show - nvdata default version
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2569,7 +2572,7 @@ static DEVICE_ATTR(version_nvdata_persistent, S_IRUGO,
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_version_nvdata_default_show(struct device *cdev, struct device_attribute
+version_nvdata_default_show(struct device *cdev, struct device_attribute
         *attr, char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2578,11 +2581,10 @@ _ctl_version_nvdata_default_show(struct device *cdev, struct device_attribute
         return snprintf(buf, PAGE_SIZE, "%08xh\n",
             le32_to_cpu(ioc->iounit_pg0.NvdataVersionDefault.Word));
 }
-static DEVICE_ATTR(version_nvdata_default, S_IRUGO,
-        _ctl_version_nvdata_default_show, NULL);
+static DEVICE_ATTR_RO(version_nvdata_default);
 
 /**
- * _ctl_board_name_show - board name
+ * board_name_show - board name
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2590,7 +2592,7 @@ static DEVICE_ATTR(version_nvdata_default, S_IRUGO,
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_board_name_show(struct device *cdev, struct device_attribute *attr,
+board_name_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2598,10 +2600,10 @@ _ctl_board_name_show(struct device *cdev, struct device_attribute *attr,
 
         return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardName);
 }
-static DEVICE_ATTR(board_name, S_IRUGO, _ctl_board_name_show, NULL);
+static DEVICE_ATTR_RO(board_name);
 
 /**
- * _ctl_board_assembly_show - board assembly name
+ * board_assembly_show - board assembly name
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2609,7 +2611,7 @@ static DEVICE_ATTR(board_name, S_IRUGO, _ctl_board_name_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_board_assembly_show(struct device *cdev, struct device_attribute *attr,
+board_assembly_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2617,10 +2619,10 @@ _ctl_board_assembly_show(struct device *cdev, struct device_attribute *attr,
 
         return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardAssembly);
 }
-static DEVICE_ATTR(board_assembly, S_IRUGO, _ctl_board_assembly_show, NULL);
+static DEVICE_ATTR_RO(board_assembly);
 
 /**
- * _ctl_board_tracer_show - board tracer number
+ * board_tracer_show - board tracer number
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2628,7 +2630,7 @@ static DEVICE_ATTR(board_assembly, S_IRUGO, _ctl_board_assembly_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_board_tracer_show(struct device *cdev, struct device_attribute *attr,
+board_tracer_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2636,10 +2638,10 @@ _ctl_board_tracer_show(struct device *cdev, struct device_attribute *attr,
 
         return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardTracerNumber);
 }
-static DEVICE_ATTR(board_tracer, S_IRUGO, _ctl_board_tracer_show, NULL);
+static DEVICE_ATTR_RO(board_tracer);
 
 /**
- * _ctl_io_delay_show - io missing delay
+ * io_delay_show - io missing delay
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2650,7 +2652,7 @@ static DEVICE_ATTR(board_tracer, S_IRUGO, _ctl_board_tracer_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_io_delay_show(struct device *cdev, struct device_attribute *attr,
+io_delay_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2658,10 +2660,10 @@ _ctl_io_delay_show(struct device *cdev, struct device_attribute *attr,
 
         return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->io_missing_delay);
 }
-static DEVICE_ATTR(io_delay, S_IRUGO, _ctl_io_delay_show, NULL);
+static DEVICE_ATTR_RO(io_delay);
 
 /**
- * _ctl_device_delay_show - device missing delay
+ * device_delay_show - device missing delay
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2672,7 +2674,7 @@ static DEVICE_ATTR(io_delay, S_IRUGO, _ctl_io_delay_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_device_delay_show(struct device *cdev, struct device_attribute *attr,
+device_delay_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2680,10 +2682,10 @@ _ctl_device_delay_show(struct device *cdev, struct device_attribute *attr,
 
         return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->device_missing_delay);
 }
-static DEVICE_ATTR(device_delay, S_IRUGO, _ctl_device_delay_show, NULL);
+static DEVICE_ATTR_RO(device_delay);
 
 /**
- * _ctl_fw_queue_depth_show - global credits
+ * fw_queue_depth_show - global credits
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2693,7 +2695,7 @@ static DEVICE_ATTR(device_delay, S_IRUGO, _ctl_device_delay_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
+fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2701,10 +2703,10 @@ _ctl_fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
 
         return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->facts.RequestCredit);
 }
-static DEVICE_ATTR(fw_queue_depth, S_IRUGO, _ctl_fw_queue_depth_show, NULL);
+static DEVICE_ATTR_RO(fw_queue_depth);
 
 /**
- * _ctl_sas_address_show - sas address
+ * sas_address_show - sas address
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2714,7 +2716,7 @@ static DEVICE_ATTR(fw_queue_depth, S_IRUGO, _ctl_fw_queue_depth_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_host_sas_address_show(struct device *cdev, struct device_attribute *attr,
+host_sas_address_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 
 {
@@ -2724,11 +2726,10 @@ _ctl_host_sas_address_show(struct device *cdev, struct device_attribute *attr,
         return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
             (unsigned long long)ioc->sas_hba.sas_address);
 }
-static DEVICE_ATTR(host_sas_address, S_IRUGO,
-        _ctl_host_sas_address_show, NULL);
+static DEVICE_ATTR_RO(host_sas_address);
 
 /**
- * _ctl_logging_level_show - logging level
+ * logging_level_show - logging level
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2736,7 +2737,7 @@ static DEVICE_ATTR(host_sas_address, S_IRUGO,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_logging_level_show(struct device *cdev, struct device_attribute *attr,
+logging_level_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2745,7 +2746,7 @@ _ctl_logging_level_show(struct device *cdev, struct device_attribute *attr,
         return snprintf(buf, PAGE_SIZE, "%08xh\n", ioc->logging_level);
 }
 static ssize_t
-_ctl_logging_level_store(struct device *cdev, struct device_attribute *attr,
+logging_level_store(struct device *cdev, struct device_attribute *attr,
         const char *buf, size_t count)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2760,11 +2761,10 @@ _ctl_logging_level_store(struct device *cdev, struct device_attribute *attr,
             ioc->logging_level);
         return strlen(buf);
 }
-static DEVICE_ATTR(logging_level, S_IRUGO | S_IWUSR, _ctl_logging_level_show,
-        _ctl_logging_level_store);
+static DEVICE_ATTR_RW(logging_level);
 
 /**
- * _ctl_fwfault_debug_show - show/store fwfault_debug
+ * fwfault_debug_show - show/store fwfault_debug
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2773,7 +2773,7 @@ static DEVICE_ATTR(logging_level, S_IRUGO | S_IWUSR, _ctl_logging_level_show,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
+fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2782,7 +2782,7 @@ _ctl_fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
         return snprintf(buf, PAGE_SIZE, "%d\n", ioc->fwfault_debug);
 }
 static ssize_t
-_ctl_fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
+fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
         const char *buf, size_t count)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2797,11 +2797,10 @@ _ctl_fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
             ioc->fwfault_debug);
         return strlen(buf);
 }
-static DEVICE_ATTR(fwfault_debug, S_IRUGO | S_IWUSR,
-        _ctl_fwfault_debug_show, _ctl_fwfault_debug_store);
+static DEVICE_ATTR_RW(fwfault_debug);
 
 /**
- * _ctl_ioc_reset_count_show - ioc reset count
+ * ioc_reset_count_show - ioc reset count
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2811,7 +2810,7 @@ static DEVICE_ATTR(fwfault_debug, S_IRUGO | S_IWUSR,
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
+ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2819,10 +2818,10 @@ _ctl_ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
 
         return snprintf(buf, PAGE_SIZE, "%d\n", ioc->ioc_reset_count);
 }
-static DEVICE_ATTR(ioc_reset_count, S_IRUGO, _ctl_ioc_reset_count_show, NULL);
+static DEVICE_ATTR_RO(ioc_reset_count);
 
 /**
- * _ctl_ioc_reply_queue_count_show - number of reply queues
+ * reply_queue_count_show - number of reply queues
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2832,7 +2831,7 @@ static DEVICE_ATTR(ioc_reset_count, S_IRUGO, _ctl_ioc_reset_count_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_ioc_reply_queue_count_show(struct device *cdev,
+reply_queue_count_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 {
         u8 reply_queue_count;
@@ -2847,11 +2846,10 @@ _ctl_ioc_reply_queue_count_show(struct device *cdev,
 
         return snprintf(buf, PAGE_SIZE, "%d\n", reply_queue_count);
 }
-static DEVICE_ATTR(reply_queue_count, S_IRUGO, _ctl_ioc_reply_queue_count_show,
-        NULL);
+static DEVICE_ATTR_RO(reply_queue_count);
 
 /**
- * _ctl_BRM_status_show - Backup Rail Monitor Status
+ * BRM_status_show - Backup Rail Monitor Status
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2861,7 +2859,7 @@ static DEVICE_ATTR(reply_queue_count, S_IRUGO, _ctl_ioc_reply_queue_count_show,
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_BRM_status_show(struct device *cdev, struct device_attribute *attr,
+BRM_status_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2923,7 +2921,7 @@ _ctl_BRM_status_show(struct device *cdev, struct device_attribute *attr,
         mutex_unlock(&ioc->pci_access_mutex);
         return rc;
 }
-static DEVICE_ATTR(BRM_status, S_IRUGO, _ctl_BRM_status_show, NULL);
+static DEVICE_ATTR_RO(BRM_status);
 
 struct DIAG_BUFFER_START {
         __le32 Size;
@@ -2936,7 +2934,7 @@ struct DIAG_BUFFER_START {
 };
 
 /**
- * _ctl_host_trace_buffer_size_show - host buffer size (trace only)
+ * host_trace_buffer_size_show - host buffer size (trace only)
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2944,7 +2942,7 @@ struct DIAG_BUFFER_START {
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_host_trace_buffer_size_show(struct device *cdev,
+host_trace_buffer_size_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -2976,11 +2974,10 @@ _ctl_host_trace_buffer_size_show(struct device *cdev,
         ioc->ring_buffer_sz = size;
         return snprintf(buf, PAGE_SIZE, "%d\n", size);
 }
-static DEVICE_ATTR(host_trace_buffer_size, S_IRUGO,
-        _ctl_host_trace_buffer_size_show, NULL);
+static DEVICE_ATTR_RO(host_trace_buffer_size);
 
 /**
- * _ctl_host_trace_buffer_show - firmware ring buffer (trace only)
+ * host_trace_buffer_show - firmware ring buffer (trace only)
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -2992,7 +2989,7 @@ static DEVICE_ATTR(host_trace_buffer_size, S_IRUGO,
  * offset to the same attribute, it will move the pointer.
  */
 static ssize_t
-_ctl_host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
+host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
         char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3024,7 +3021,7 @@ _ctl_host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
 }
 
 static ssize_t
-_ctl_host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
+host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
         const char *buf, size_t count)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3037,14 +3034,13 @@ _ctl_host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
         ioc->ring_buffer_offset = val;
         return strlen(buf);
 }
-static DEVICE_ATTR(host_trace_buffer, S_IRUGO | S_IWUSR,
-        _ctl_host_trace_buffer_show, _ctl_host_trace_buffer_store);
+static DEVICE_ATTR_RW(host_trace_buffer);
 
 
 /*****************************************/
 
 /**
- * _ctl_host_trace_buffer_enable_show - firmware ring buffer (trace only)
+ * host_trace_buffer_enable_show - firmware ring buffer (trace only)
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3054,7 +3050,7 @@ static DEVICE_ATTR(host_trace_buffer, S_IRUGO | S_IWUSR,
  * This is a mechnism to post/release host_trace_buffers
  */
 static ssize_t
-_ctl_host_trace_buffer_enable_show(struct device *cdev,
+host_trace_buffer_enable_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3072,7 +3068,7 @@ _ctl_host_trace_buffer_enable_show(struct device *cdev,
 }
 
 static ssize_t
-_ctl_host_trace_buffer_enable_store(struct device *cdev,
+host_trace_buffer_enable_store(struct device *cdev,
         struct device_attribute *attr, const char *buf, size_t count)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3122,14 +3118,12 @@ _ctl_host_trace_buffer_enable_store(struct device *cdev,
  out:
         return strlen(buf);
 }
-static DEVICE_ATTR(host_trace_buffer_enable, S_IRUGO | S_IWUSR,
-        _ctl_host_trace_buffer_enable_show,
-        _ctl_host_trace_buffer_enable_store);
+static DEVICE_ATTR_RW(host_trace_buffer_enable);
 
 /*********** diagnostic trigger suppport *********************************/
 
 /**
- * _ctl_diag_trigger_master_show - show the diag_trigger_master attribute
+ * diag_trigger_master_show - show the diag_trigger_master attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3137,7 +3131,7 @@ static DEVICE_ATTR(host_trace_buffer_enable, S_IRUGO | S_IWUSR,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_master_show(struct device *cdev,
+diag_trigger_master_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 
 {
@@ -3154,7 +3148,7 @@ _ctl_diag_trigger_master_show(struct device *cdev,
 }
 
 /**
- * _ctl_diag_trigger_master_store - store the diag_trigger_master attribute
+ * diag_trigger_master_store - store the diag_trigger_master attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3163,7 +3157,7 @@ _ctl_diag_trigger_master_show(struct device *cdev,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_master_store(struct device *cdev,
+diag_trigger_master_store(struct device *cdev,
         struct device_attribute *attr, const char *buf, size_t count)
 
 {
@@ -3182,12 +3176,11 @@ _ctl_diag_trigger_master_store(struct device *cdev,
         spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
         return rc;
 }
-static DEVICE_ATTR(diag_trigger_master, S_IRUGO | S_IWUSR,
-        _ctl_diag_trigger_master_show, _ctl_diag_trigger_master_store);
+static DEVICE_ATTR_RW(diag_trigger_master);
 
 
 /**
- * _ctl_diag_trigger_event_show - show the diag_trigger_event attribute
+ * diag_trigger_event_show - show the diag_trigger_event attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3195,7 +3188,7 @@ static DEVICE_ATTR(diag_trigger_master, S_IRUGO | S_IWUSR,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_event_show(struct device *cdev,
+diag_trigger_event_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3211,7 +3204,7 @@ _ctl_diag_trigger_event_show(struct device *cdev,
 }
 
 /**
- * _ctl_diag_trigger_event_store - store the diag_trigger_event attribute
+ * diag_trigger_event_store - store the diag_trigger_event attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3220,7 +3213,7 @@ _ctl_diag_trigger_event_show(struct device *cdev,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_event_store(struct device *cdev,
+diag_trigger_event_store(struct device *cdev,
         struct device_attribute *attr, const char *buf, size_t count)
 
 {
@@ -3239,12 +3232,11 @@ _ctl_diag_trigger_event_store(struct device *cdev,
         spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
         return sz;
 }
-static DEVICE_ATTR(diag_trigger_event, S_IRUGO | S_IWUSR,
-        _ctl_diag_trigger_event_show, _ctl_diag_trigger_event_store);
+static DEVICE_ATTR_RW(diag_trigger_event);
 
 
 /**
- * _ctl_diag_trigger_scsi_show - show the diag_trigger_scsi attribute
+ * diag_trigger_scsi_show - show the diag_trigger_scsi attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3252,7 +3244,7 @@ static DEVICE_ATTR(diag_trigger_event, S_IRUGO | S_IWUSR,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_scsi_show(struct device *cdev,
+diag_trigger_scsi_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3268,7 +3260,7 @@ _ctl_diag_trigger_scsi_show(struct device *cdev,
 }
 
 /**
- * _ctl_diag_trigger_scsi_store - store the diag_trigger_scsi attribute
+ * diag_trigger_scsi_store - store the diag_trigger_scsi attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3277,7 +3269,7 @@ _ctl_diag_trigger_scsi_show(struct device *cdev,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_scsi_store(struct device *cdev,
+diag_trigger_scsi_store(struct device *cdev,
         struct device_attribute *attr, const char *buf, size_t count)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3295,12 +3287,11 @@ _ctl_diag_trigger_scsi_store(struct device *cdev,
         spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
         return sz;
 }
-static DEVICE_ATTR(diag_trigger_scsi, S_IRUGO | S_IWUSR,
-        _ctl_diag_trigger_scsi_show, _ctl_diag_trigger_scsi_store);
+static DEVICE_ATTR_RW(diag_trigger_scsi);
 
 
 /**
- * _ctl_diag_trigger_scsi_show - show the diag_trigger_mpi attribute
+ * diag_trigger_scsi_show - show the diag_trigger_mpi attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3308,7 +3299,7 @@ static DEVICE_ATTR(diag_trigger_scsi, S_IRUGO | S_IWUSR,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_mpi_show(struct device *cdev,
+diag_trigger_mpi_show(struct device *cdev,
         struct device_attribute *attr, char *buf)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3324,7 +3315,7 @@ _ctl_diag_trigger_mpi_show(struct device *cdev,
 }
 
 /**
- * _ctl_diag_trigger_mpi_store - store the diag_trigger_mpi attribute
+ * diag_trigger_mpi_store - store the diag_trigger_mpi attribute
  * @cdev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3333,7 +3324,7 @@ _ctl_diag_trigger_mpi_show(struct device *cdev,
  * A sysfs 'read/write' shost attribute.
  */
 static ssize_t
-_ctl_diag_trigger_mpi_store(struct device *cdev,
+diag_trigger_mpi_store(struct device *cdev,
         struct device_attribute *attr, const char *buf, size_t count)
 {
         struct Scsi_Host *shost = class_to_shost(cdev);
@@ -3352,8 +3343,7 @@ _ctl_diag_trigger_mpi_store(struct device *cdev,
         return sz;
 }
 
-static DEVICE_ATTR(diag_trigger_mpi, S_IRUGO | S_IWUSR,
-        _ctl_diag_trigger_mpi_show, _ctl_diag_trigger_mpi_store);
+static DEVICE_ATTR_RW(diag_trigger_mpi);
 
 /*********** diagnostic trigger suppport *** END ****************************/
 
@@ -3391,7 +3381,7 @@ struct device_attribute *mpt3sas_host_attrs[] = {
 /* device attributes */
 
 /**
- * _ctl_device_sas_address_show - sas address
+ * sas_address_show - sas address
  * @dev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3401,7 +3391,7 @@ struct device_attribute *mpt3sas_host_attrs[] = {
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_device_sas_address_show(struct device *dev, struct device_attribute *attr,
+sas_address_show(struct device *dev, struct device_attribute *attr,
         char *buf)
 {
         struct scsi_device *sdev = to_scsi_device(dev);
@@ -3410,10 +3400,10 @@ _ctl_device_sas_address_show(struct device *dev, struct device_attribute *attr,
         return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
             (unsigned long long)sas_device_priv_data->sas_target->sas_address);
 }
-static DEVICE_ATTR(sas_address, S_IRUGO, _ctl_device_sas_address_show, NULL);
+static DEVICE_ATTR_RO(sas_address);
 
 /**
- * _ctl_device_handle_show - device handle
+ * sas_device_handle_show - device handle
  * @dev: pointer to embedded class device
  * @attr: ?
  * @buf: the buffer returned
@@ -3423,7 +3413,7 @@ static DEVICE_ATTR(sas_address, S_IRUGO, _ctl_device_sas_address_show, NULL);
  * A sysfs 'read-only' shost attribute.
  */
 static ssize_t
-_ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
+sas_device_handle_show(struct device *dev, struct device_attribute *attr,
         char *buf)
 {
         struct scsi_device *sdev = to_scsi_device(dev);
@@ -3432,10 +3422,10 @@ _ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
         return snprintf(buf, PAGE_SIZE, "0x%04x\n",
             sas_device_priv_data->sas_target->handle);
 }
-static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
+static DEVICE_ATTR_RO(sas_device_handle);
 
 /**
- * _ctl_device_ncq_io_prio_show - send prioritized io commands to device
+ * sas_ncq_io_prio_show - send prioritized io commands to device
  * @dev: pointer to embedded device
  * @attr: ?
  * @buf: the buffer returned
@@ -3443,7 +3433,7 @@ static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
  * A sysfs 'read/write' sdev attribute, only works with SATA
3444 */ 3434 */
3445static ssize_t 3435static ssize_t
3446_ctl_device_ncq_prio_enable_show(struct device *dev, 3436sas_ncq_prio_enable_show(struct device *dev,
3447 struct device_attribute *attr, char *buf) 3437 struct device_attribute *attr, char *buf)
3448{ 3438{
3449 struct scsi_device *sdev = to_scsi_device(dev); 3439 struct scsi_device *sdev = to_scsi_device(dev);
@@ -3454,7 +3444,7 @@ _ctl_device_ncq_prio_enable_show(struct device *dev,
3454} 3444}
3455 3445
3456static ssize_t 3446static ssize_t
3457_ctl_device_ncq_prio_enable_store(struct device *dev, 3447sas_ncq_prio_enable_store(struct device *dev,
3458 struct device_attribute *attr, 3448 struct device_attribute *attr,
3459 const char *buf, size_t count) 3449 const char *buf, size_t count)
3460{ 3450{
@@ -3471,9 +3461,7 @@ _ctl_device_ncq_prio_enable_store(struct device *dev,
3471 sas_device_priv_data->ncq_prio_enable = ncq_prio_enable; 3461 sas_device_priv_data->ncq_prio_enable = ncq_prio_enable;
3472 return strlen(buf); 3462 return strlen(buf);
3473} 3463}
3474static DEVICE_ATTR(sas_ncq_prio_enable, S_IRUGO | S_IWUSR, 3464static DEVICE_ATTR_RW(sas_ncq_prio_enable);
3475 _ctl_device_ncq_prio_enable_show,
3476 _ctl_device_ncq_prio_enable_store);
3477 3465
3478struct device_attribute *mpt3sas_dev_attrs[] = { 3466struct device_attribute *mpt3sas_dev_attrs[] = {
3479 &dev_attr_sas_address, 3467 &dev_attr_sas_address,
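The DEVICE_ATTR_RO/DEVICE_ATTR_RW conversions above rely on token pasting: the macro derives its show/store callback names from the attribute name, which is why the patch also renames `_ctl_device_sas_address_show` to `sas_address_show` and so on. A minimal user-space sketch of that naming contract (simplified stand-ins, not the real kernel definitions from include/linux/device.h):

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for the kernel's sysfs attribute plumbing; the real
 * definitions live in include/linux/device.h and include/linux/sysfs.h. */
struct device;
struct device_attribute {
	const char *name;
	long (*show)(struct device *dev, struct device_attribute *attr, char *buf);
	long (*store)(struct device *dev, struct device_attribute *attr,
		      const char *buf, size_t count);
};

/* Like the kernel's DEVICE_ATTR_RO: the callback name is pasted from the
 * attribute name, so an attribute "sas_address" must be backed by a function
 * literally named sas_address_show (and sas_address_store for the RW form).
 * That naming contract is the reason for the renames in the hunks above. */
#define SKETCH_DEVICE_ATTR_RO(_name) \
	struct device_attribute dev_attr_##_name = { #_name, _name##_show, NULL }

static long sas_address_show(struct device *dev, struct device_attribute *attr,
			     char *buf)
{
	(void)dev; (void)attr;
	strcpy(buf, "0x0000000000000000\n");	/* placeholder address */
	return (long)strlen(buf);
}

static SKETCH_DEVICE_ATTR_RO(sas_address);
```

With the old `_ctl_device_sas_address_show` name, `DEVICE_ATTR_RO(sas_address)` would not compile, so the renames are a prerequisite for the macro change, not a cosmetic cleanup.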
diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
index 1ccfbc7eebe0..27c731a3fb49 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
@@ -113,22 +113,22 @@ MODULE_PARM_DESC(logging_level,
 
 
 static ushort max_sectors = 0xFFFF;
-module_param(max_sectors, ushort, 0);
+module_param(max_sectors, ushort, 0444);
 MODULE_PARM_DESC(max_sectors, "max sectors, range 64 to 32767 default=32767");
 
 
 static int missing_delay[2] = {-1, -1};
-module_param_array(missing_delay, int, NULL, 0);
+module_param_array(missing_delay, int, NULL, 0444);
 MODULE_PARM_DESC(missing_delay, " device missing delay , io missing delay");
 
 /* scsi-mid layer global parmeter is max_report_luns, which is 511 */
 #define MPT3SAS_MAX_LUN (16895)
 static u64 max_lun = MPT3SAS_MAX_LUN;
-module_param(max_lun, ullong, 0);
+module_param(max_lun, ullong, 0444);
 MODULE_PARM_DESC(max_lun, " max lun, default=16895 ");
 
 static ushort hbas_to_enumerate;
-module_param(hbas_to_enumerate, ushort, 0);
+module_param(hbas_to_enumerate, ushort, 0444);
 MODULE_PARM_DESC(hbas_to_enumerate,
 		" 0 - enumerates both SAS 2.0 & SAS 3.0 generation HBAs\n \
 		  1 - enumerates only SAS 2.0 generation HBAs\n \
@@ -142,17 +142,17 @@ MODULE_PARM_DESC(hbas_to_enumerate,
  * Either bit can be set, or both
  */
 static int diag_buffer_enable = -1;
-module_param(diag_buffer_enable, int, 0);
+module_param(diag_buffer_enable, int, 0444);
 MODULE_PARM_DESC(diag_buffer_enable,
 	" post diag buffers (TRACE=1/SNAPSHOT=2/EXTENDED=4/default=0)");
 static int disable_discovery = -1;
-module_param(disable_discovery, int, 0);
+module_param(disable_discovery, int, 0444);
 MODULE_PARM_DESC(disable_discovery, " disable discovery ");
 
 
 /* permit overriding the host protection capabilities mask (EEDP/T10 PI) */
 static int prot_mask = -1;
-module_param(prot_mask, int, 0);
+module_param(prot_mask, int, 0444);
 MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=7 ");
 
 
@@ -2685,7 +2685,7 @@ mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, u64 lun,
 	int_to_scsilun(lun, (struct scsi_lun *)mpi_request->LUN);
 	mpt3sas_scsih_set_tm_flag(ioc, handle);
 	init_completion(&ioc->tm_cmds.done);
-	mpt3sas_base_put_smid_hi_priority(ioc, smid, msix_task);
+	ioc->put_smid_hi_priority(ioc, smid, msix_task);
 	wait_for_completion_timeout(&ioc->tm_cmds.done, timeout*HZ);
 	if (!(ioc->tm_cmds.status & MPT3_CMD_COMPLETE)) {
 		if (mpt3sas_base_check_cmd_timeout(ioc,
@@ -3659,7 +3659,7 @@ _scsih_tm_tr_send(struct MPT3SAS_ADAPTER *ioc, u16 handle)
 	mpi_request->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
 	mpi_request->MsgFlags = tr_method;
 	set_bit(handle, ioc->device_remove_in_progress);
-	mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
+	ioc->put_smid_hi_priority(ioc, smid, 0);
 	mpt3sas_trigger_master(ioc, MASTER_TRIGGER_DEVICE_REMOVAL);
 
 out:
@@ -3755,7 +3755,7 @@ _scsih_tm_tr_complete(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
 	mpi_request->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
 	mpi_request->Operation = MPI2_SAS_OP_REMOVE_DEVICE;
 	mpi_request->DevHandle = mpi_request_tm->DevHandle;
-	mpt3sas_base_put_smid_default(ioc, smid_sas_ctrl);
+	ioc->put_smid_default(ioc, smid_sas_ctrl);
 
 	return _scsih_check_for_pending_tm(ioc, smid);
 }
@@ -3881,7 +3881,7 @@ _scsih_tm_tr_volume_send(struct MPT3SAS_ADAPTER *ioc, u16 handle)
 	mpi_request->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
 	mpi_request->DevHandle = cpu_to_le16(handle);
 	mpi_request->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
-	mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
+	ioc->put_smid_hi_priority(ioc, smid, 0);
 }
 
 /**
@@ -3970,7 +3970,7 @@ _scsih_issue_delayed_event_ack(struct MPT3SAS_ADAPTER *ioc, u16 smid, U16 event,
 	ack_request->EventContext = event_context;
 	ack_request->VF_ID = 0;  /* TODO */
 	ack_request->VP_ID = 0;
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 }
 
 /**
@@ -4026,7 +4026,7 @@ _scsih_issue_delayed_sas_io_unit_ctrl(struct MPT3SAS_ADAPTER *ioc,
 	mpi_request->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
 	mpi_request->Operation = MPI2_SAS_OP_REMOVE_DEVICE;
 	mpi_request->DevHandle = cpu_to_le16(handle);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 }
 
 /**
@@ -4734,12 +4734,12 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
 		if (sas_target_priv_data->flags & MPT_TARGET_FASTPATH_IO) {
 			mpi_request->IoFlags = cpu_to_le16(scmd->cmd_len |
 			    MPI25_SCSIIO_IOFLAGS_FAST_PATH);
-			mpt3sas_base_put_smid_fast_path(ioc, smid, handle);
+			ioc->put_smid_fast_path(ioc, smid, handle);
 		} else
 			ioc->put_smid_scsi_io(ioc, smid,
 			    le16_to_cpu(mpi_request->DevHandle));
 	} else
-		mpt3sas_base_put_smid_default(ioc, smid);
+		ioc->put_smid_default(ioc, smid);
 	return 0;
 
  out:
@@ -5210,6 +5210,7 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
 		    ((ioc_status & MPI2_IOCSTATUS_MASK)
 		    != MPI2_IOCSTATUS_SCSI_TASK_TERMINATED)) {
 			st->direct_io = 0;
+			st->scmd = scmd;
 			memcpy(mpi_request->CDB.CDB32, scmd->cmnd, scmd->cmd_len);
 			mpi_request->DevHandle =
 			    cpu_to_le16(sas_device_priv_data->sas_target->handle);
@@ -7601,7 +7602,7 @@ _scsih_ir_fastpath(struct MPT3SAS_ADAPTER *ioc, u16 handle, u8 phys_disk_num)
 	    handle, phys_disk_num));
 
 	init_completion(&ioc->scsih_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);
 
 	if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
@@ -9633,7 +9634,7 @@ _scsih_ir_shutdown(struct MPT3SAS_ADAPTER *ioc)
 	if (!ioc->hide_ir_msg)
 		ioc_info(ioc, "IR shutdown (sending)\n");
 	init_completion(&ioc->scsih_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);
 
 	if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
@@ -9670,6 +9671,7 @@ static void scsih_remove(struct pci_dev *pdev)
 	struct _pcie_device *pcie_device, *pcienext;
 	struct workqueue_struct *wq;
 	unsigned long flags;
+	Mpi2ConfigReply_t mpi_reply;
 
 	ioc->remove_host = 1;
 
@@ -9684,7 +9686,13 @@ static void scsih_remove(struct pci_dev *pdev)
 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 	if (wq)
 		destroy_workqueue(wq);
-
+	/*
+	 * Copy back the unmodified ioc page1. so that on next driver load,
+	 * current modified changes on ioc page1 won't take effect.
+	 */
+	if (ioc->is_aero_ioc)
+		mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply,
+		    &ioc->ioc_pg1_copy);
 	/* release all the volumes */
 	_scsih_ir_shutdown(ioc);
 	sas_remove_host(shost);
@@ -9747,6 +9755,7 @@ scsih_shutdown(struct pci_dev *pdev)
 	struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
 	struct workqueue_struct *wq;
 	unsigned long flags;
+	Mpi2ConfigReply_t mpi_reply;
 
 	ioc->remove_host = 1;
 
@@ -9761,6 +9770,13 @@ scsih_shutdown(struct pci_dev *pdev)
 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 	if (wq)
 		destroy_workqueue(wq);
+	/*
+	 * Copy back the unmodified ioc page1 so that on next driver load,
+	 * current modified changes on ioc page1 won't take effect.
+	 */
+	if (ioc->is_aero_ioc)
+		mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply,
+		    &ioc->ioc_pg1_copy);
 
 	_scsih_ir_shutdown(ioc);
 	mpt3sas_base_detach(ioc);
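The repeated `mpt3sas_base_put_smid_*` to `ioc->put_smid_*` conversion in the hunks above replaces direct calls with per-adapter function pointers, so the descriptor-posting method can be selected once per adapter instead of being hard-wired at every call site. A hedged sketch of that dispatch pattern (names below are illustrative, not the driver's real internals):

```c
/* Sketch of function-pointer dispatch: the adapter object carries its own
 * put_smid_default callback, chosen once at init time, so call sites like
 * ioc->put_smid_default(ioc, smid) need no knowledge of the posting method.
 * The use_atomic_descriptors flag and last_posted field are hypothetical. */
struct adapter {
	int use_atomic_descriptors;	/* e.g. set for a newer HBA generation */
	void (*put_smid_default)(struct adapter *ioc, unsigned short smid);
	unsigned short last_posted;	/* test hook: records the posted smid */
};

static void put_smid_default_legacy(struct adapter *ioc, unsigned short smid)
{
	ioc->last_posted = smid;	/* would write a full request descriptor */
}

static void put_smid_default_atomic(struct adapter *ioc, unsigned short smid)
{
	ioc->last_posted = smid;	/* would write an atomic request descriptor */
}

static void adapter_init(struct adapter *ioc)
{
	/* pick the posting method once; every later call goes through the pointer */
	ioc->put_smid_default = ioc->use_atomic_descriptors ?
		put_smid_default_atomic : put_smid_default_legacy;
}
```

The payoff is visible in the diff itself: each converted call site is a one-line change, while the method selection lives in a single init path.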
diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
index 60ae2d0feb2b..5324662751bf 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
@@ -367,7 +367,7 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc,
 	    ioc_info(ioc, "report_manufacture - send to sas_addr(0x%016llx)\n",
 		     (u64)sas_address));
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
@@ -1139,7 +1139,7 @@ _transport_get_expander_phy_error_log(struct MPT3SAS_ADAPTER *ioc,
 	    (u64)phy->identify.sas_address,
 	    phy->number));
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
@@ -1434,7 +1434,7 @@ _transport_expander_phy_control(struct MPT3SAS_ADAPTER *ioc,
 	    (u64)phy->identify.sas_address,
 	    phy->number, phy_operation));
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
@@ -1911,7 +1911,7 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
 	    ioc_info(ioc, "%s: sending smp request\n", __func__));
 
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
index 6dcae0e50018..3e0b8ebe257f 100644
--- a/drivers/scsi/mvsas/mv_sas.c
+++ b/drivers/scsi/mvsas/mv_sas.c
@@ -1193,7 +1193,7 @@ static int mvs_dev_found_notify(struct domain_device *dev, int lock)
 	mvi_device->dev_type = dev->dev_type;
 	mvi_device->mvi_info = mvi;
 	mvi_device->sas_device = dev;
-	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
+	if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
 		int phy_id;
 		u8 phy_num = parent_dev->ex_dev.num_phys;
 		struct ex_phy *phy;
diff --git a/drivers/scsi/mvsas/mv_sas.h b/drivers/scsi/mvsas/mv_sas.h
index b7d7ec435487..519edc796691 100644
--- a/drivers/scsi/mvsas/mv_sas.h
+++ b/drivers/scsi/mvsas/mv_sas.h
@@ -50,9 +50,6 @@ extern struct mvs_info *tgt_mvi;
 extern const struct mvs_dispatch mvs_64xx_dispatch;
 extern const struct mvs_dispatch mvs_94xx_dispatch;
 
-#define DEV_IS_EXPANDER(type)	\
-	((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE))
-
 #define bit(n) ((u64)1 << n)
 
 #define for_each_phy(__lseq_mask, __mc, __lseq)	\
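The DEV_IS_EXPANDER macro removed above duplicated, per driver, a check that this series centralizes as a libsas `dev_is_expander()` helper. A simplified restatement of the predicate (the enum below mirrors the SAS device types from include/scsi/sas.h; exact values are illustrative):

```c
/* Simplified SAS device-type enum; the kernel's authoritative list lives
 * in include/scsi/sas.h. */
enum sas_device_type {
	SAS_PHY_UNUSED,
	SAS_END_DEVICE,
	SAS_EDGE_EXPANDER_DEVICE,
	SAS_FANOUT_EXPANDER_DEVICE,
};

/* The check the removed per-driver macro performed, now expressed as a
 * type-checked inline helper: a device is an expander if it is either an
 * edge or a fanout expander. */
static inline int dev_is_expander(enum sas_device_type type)
{
	return type == SAS_EDGE_EXPANDER_DEVICE ||
	       type == SAS_FANOUT_EXPANDER_DEVICE;
}
```

An inline function also avoids the macro's double evaluation of its argument and gives the compiler a real type to check.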
diff --git a/drivers/scsi/osst.c b/drivers/scsi/osst.c
deleted file mode 100644
index 815bb4097c1b..000000000000
--- a/drivers/scsi/osst.c
+++ /dev/null
@@ -1,6108 +0,0 @@
1// SPDX-License-Identifier: GPL-2.0-only
2/*
3 SCSI Tape Driver for Linux version 1.1 and newer. See the accompanying
4 file Documentation/scsi/st.txt for more information.
5
6 History:
7
8 OnStream SCSI Tape support (osst) cloned from st.c by
9 Willem Riede (osst@riede.org) Feb 2000
10 Fixes ... Kurt Garloff <garloff@suse.de> Mar 2000
11
12 Rewritten from Dwayne Forsyth's SCSI tape driver by Kai Makisara.
13 Contribution and ideas from several people including (in alphabetical
14 order) Klaus Ehrenfried, Wolfgang Denk, Steve Hirsch, Andreas Koppenh"ofer,
15 Michael Leodolter, Eyal Lebedinsky, J"org Weule, and Eric Youngdale.
16
17 Copyright 1992 - 2002 Kai Makisara / 2000 - 2006 Willem Riede
18 email osst@riede.org
19
20 $Header: /cvsroot/osst/Driver/osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
21
22 Microscopic alterations - Rik Ling, 2000/12/21
23 Last st.c sync: Tue Oct 15 22:01:04 2002 by makisara
24 Some small formal changes - aeb, 950809
25*/
26
27static const char * cvsid = "$Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $";
28static const char * osst_version = "0.99.4";
29
30/* The "failure to reconnect" firmware bug */
31#define OSST_FW_NEED_POLL_MIN 10601 /*(107A)*/
32#define OSST_FW_NEED_POLL_MAX 10704 /*(108D)*/
33#define OSST_FW_NEED_POLL(x,d) ((x) >= OSST_FW_NEED_POLL_MIN && (x) <= OSST_FW_NEED_POLL_MAX && d->host->this_id != 7)
34
35#include <linux/module.h>
36
37#include <linux/fs.h>
38#include <linux/kernel.h>
39#include <linux/sched/signal.h>
40#include <linux/proc_fs.h>
41#include <linux/mm.h>
42#include <linux/slab.h>
43#include <linux/init.h>
44#include <linux/string.h>
45#include <linux/errno.h>
46#include <linux/mtio.h>
47#include <linux/ioctl.h>
48#include <linux/fcntl.h>
49#include <linux/spinlock.h>
50#include <linux/vmalloc.h>
51#include <linux/blkdev.h>
52#include <linux/moduleparam.h>
53#include <linux/delay.h>
54#include <linux/jiffies.h>
55#include <linux/mutex.h>
56#include <linux/uaccess.h>
57#include <asm/dma.h>
58
59/* The driver prints some debugging information on the console if DEBUG
60 is defined and non-zero. */
61#define DEBUG 0
62
63/* The message level for the debug messages is currently set to KERN_NOTICE
64 so that people can easily see the messages. Later when the debugging messages
65 in the drivers are more widely classified, this may be changed to KERN_DEBUG. */
66#define OSST_DEB_MSG KERN_NOTICE
67
68#include <scsi/scsi.h>
69#include <scsi/scsi_dbg.h>
70#include <scsi/scsi_device.h>
71#include <scsi/scsi_driver.h>
72#include <scsi/scsi_eh.h>
73#include <scsi/scsi_host.h>
74#include <scsi/scsi_ioctl.h>
75
76#define ST_KILOBYTE 1024
77
78#include "st.h"
79#include "osst.h"
80#include "osst_options.h"
81#include "osst_detect.h"
82
83static DEFINE_MUTEX(osst_int_mutex);
84static int max_dev = 0;
85static int write_threshold_kbs = 0;
86static int max_sg_segs = 0;
87
88#ifdef MODULE
89MODULE_AUTHOR("Willem Riede");
90MODULE_DESCRIPTION("OnStream {DI-|FW-|SC-|USB}{30|50} Tape Driver");
91MODULE_LICENSE("GPL");
92MODULE_ALIAS_CHARDEV_MAJOR(OSST_MAJOR);
93MODULE_ALIAS_SCSI_DEVICE(TYPE_TAPE);
94
95module_param(max_dev, int, 0444);
96MODULE_PARM_DESC(max_dev, "Maximum number of OnStream Tape Drives to attach (4)");
97
98module_param(write_threshold_kbs, int, 0644);
99MODULE_PARM_DESC(write_threshold_kbs, "Asynchronous write threshold (KB; 32)");
100
101module_param(max_sg_segs, int, 0644);
102MODULE_PARM_DESC(max_sg_segs, "Maximum number of scatter/gather segments to use (9)");
103#else
104static struct osst_dev_parm {
105 char *name;
106 int *val;
107} parms[] __initdata = {
108 { "max_dev", &max_dev },
109 { "write_threshold_kbs", &write_threshold_kbs },
110 { "max_sg_segs", &max_sg_segs }
111};
112#endif
113
114/* Some default definitions have been moved to osst_options.h */
115#define OSST_BUFFER_SIZE (OSST_BUFFER_BLOCKS * ST_KILOBYTE)
116#define OSST_WRITE_THRESHOLD (OSST_WRITE_THRESHOLD_BLOCKS * ST_KILOBYTE)
117
118/* The buffer size should fit into the 24 bits for length in the
119 6-byte SCSI read and write commands. */
120#if OSST_BUFFER_SIZE >= (2 << 24 - 1)
121#error "Buffer size should not exceed (2 << 24 - 1) bytes!"
122#endif
123
124#if DEBUG
125static int debugging = 1;
126/* uncomment define below to test error recovery */
127// #define OSST_INJECT_ERRORS 1
128#endif
129
130/* Do not retry! The drive firmware already retries when appropriate,
131 and when it tries to tell us something, we had better listen... */
132#define MAX_RETRIES 0
133
134#define NO_TAPE NOT_READY
135
136#define OSST_WAIT_POSITION_COMPLETE (HZ > 200 ? HZ / 200 : 1)
137#define OSST_WAIT_WRITE_COMPLETE (HZ / 12)
138#define OSST_WAIT_LONG_WRITE_COMPLETE (HZ / 2)
139
140#define OSST_TIMEOUT (200 * HZ)
141#define OSST_LONG_TIMEOUT (1800 * HZ)
142
143#define TAPE_NR(x) (iminor(x) & ((1 << ST_MODE_SHIFT)-1))
144#define TAPE_MODE(x) ((iminor(x) & ST_MODE_MASK) >> ST_MODE_SHIFT)
145#define TAPE_REWIND(x) ((iminor(x) & 0x80) == 0)
146#define TAPE_IS_RAW(x) (TAPE_MODE(x) & (ST_NBR_MODES >> 1))
147
148/* Internal ioctl to set both density (uppermost 8 bits) and blocksize (lower
149 24 bits) */
150#define SET_DENS_AND_BLK 0x10001
151
152static int osst_buffer_size = OSST_BUFFER_SIZE;
153static int osst_write_threshold = OSST_WRITE_THRESHOLD;
154static int osst_max_sg_segs = OSST_MAX_SG;
155static int osst_max_dev = OSST_MAX_TAPES;
156static int osst_nr_dev;
157
158static struct osst_tape **os_scsi_tapes = NULL;
159static DEFINE_RWLOCK(os_scsi_tapes_lock);
160
161static int modes_defined = 0;
162
163static struct osst_buffer *new_tape_buffer(int, int, int);
164static int enlarge_buffer(struct osst_buffer *, int);
165static void normalize_buffer(struct osst_buffer *);
166static int append_to_buffer(const char __user *, struct osst_buffer *, int);
167static int from_buffer(struct osst_buffer *, char __user *, int);
168static int osst_zero_buffer_tail(struct osst_buffer *);
169static int osst_copy_to_buffer(struct osst_buffer *, unsigned char *);
170static int osst_copy_from_buffer(struct osst_buffer *, unsigned char *);
171
172static int osst_probe(struct device *);
173static int osst_remove(struct device *);
174
175static struct scsi_driver osst_template = {
176 .gendrv = {
177 .name = "osst",
178 .owner = THIS_MODULE,
179 .probe = osst_probe,
180 .remove = osst_remove,
181 }
182};
183
184static int osst_int_ioctl(struct osst_tape *STp, struct osst_request ** aSRpnt,
185 unsigned int cmd_in, unsigned long arg);
186
187static int osst_set_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt, int frame, int skip);
188
189static int osst_get_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt);
190
191static int osst_flush_write_buffer(struct osst_tape *STp, struct osst_request ** aSRpnt);
192
193static int osst_write_error_recovery(struct osst_tape * STp, struct osst_request ** aSRpnt, int pending);
194
195static inline char *tape_name(struct osst_tape *tape)
196{
197 return tape->drive->disk_name;
198}
199
200/* Routines that handle the interaction with mid-layer SCSI routines */
201
202
203/* Normalize Sense */
204static void osst_analyze_sense(struct osst_request *SRpnt, struct st_cmdstatus *s)
205{
206 const u8 *ucp;
207 const u8 *sense = SRpnt->sense;
208
209 s->have_sense = scsi_normalize_sense(SRpnt->sense,
210 SCSI_SENSE_BUFFERSIZE, &s->sense_hdr);
211 s->flags = 0;
212
213 if (s->have_sense) {
214 s->deferred = 0;
215 s->remainder_valid =
216 scsi_get_sense_info_fld(sense, SCSI_SENSE_BUFFERSIZE, &s->uremainder64);
217 switch (sense[0] & 0x7f) {
218 case 0x71:
219 s->deferred = 1;
220 /* fall through */
221 case 0x70:
222 s->fixed_format = 1;
223 s->flags = sense[2] & 0xe0;
224 break;
225 case 0x73:
226 s->deferred = 1;
227 /* fall through */
228 case 0x72:
229 s->fixed_format = 0;
230 ucp = scsi_sense_desc_find(sense, SCSI_SENSE_BUFFERSIZE, 4);
231 s->flags = ucp ? (ucp[3] & 0xe0) : 0;
232 break;
233 }
234 }
235}
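The response-code switch in osst_analyze_sense() above follows the SCSI (SPC) sense-data formats: 0x70/0x71 are fixed-format sense (current/deferred errors) and 0x72/0x73 are descriptor-format sense (current/deferred). A standalone restatement of that classification, for reference (illustrative helper, not part of the removed driver):

```c
/* Classify a SCSI sense response code per SPC: fixed format (0x70/0x71)
 * vs descriptor format (0x72/0x73), current vs deferred error. This is
 * the same dispatch osst_analyze_sense() performs on sense[0] & 0x7f. */
struct sense_class {
	int fixed_format;	/* 1 = fixed-format sense, 0 = descriptor format */
	int deferred;		/* 1 = deferred error, 0 = current error */
};

static int classify_sense_response(unsigned char code, struct sense_class *out)
{
	switch (code & 0x7f) {
	case 0x70: out->fixed_format = 1; out->deferred = 0; return 0;
	case 0x71: out->fixed_format = 1; out->deferred = 1; return 0;
	case 0x72: out->fixed_format = 0; out->deferred = 0; return 0;
	case 0x73: out->fixed_format = 0; out->deferred = 1; return 0;
	default:   return -1;	/* unrecognized response code */
	}
}
```

This is why the function reads the sense-key flags from byte 2 in the fixed-format cases but has to walk descriptors (via scsi_sense_desc_find) in the 0x72/0x73 cases.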
236
237/* Convert the result to success code */
238static int osst_chk_result(struct osst_tape * STp, struct osst_request * SRpnt)
239{
240 char *name = tape_name(STp);
241 int result = SRpnt->result;
242 u8 * sense = SRpnt->sense, scode;
243#if DEBUG
244 const char *stp;
245#endif
246 struct st_cmdstatus *cmdstatp;
247
248 if (!result)
249 return 0;
250
251 cmdstatp = &STp->buffer->cmdstat;
252 osst_analyze_sense(SRpnt, cmdstatp);
253
254 if (cmdstatp->have_sense)
255 scode = STp->buffer->cmdstat.sense_hdr.sense_key;
256 else
257 scode = 0;
258#if DEBUG
259 if (debugging) {
260 printk(OSST_DEB_MSG "%s:D: Error: %x, cmd: %x %x %x %x %x %x\n",
261 name, result,
262 SRpnt->cmd[0], SRpnt->cmd[1], SRpnt->cmd[2],
263 SRpnt->cmd[3], SRpnt->cmd[4], SRpnt->cmd[5]);
264 if (scode) printk(OSST_DEB_MSG "%s:D: Sense: %02x, ASC: %02x, ASCQ: %02x\n",
265 name, scode, sense[12], sense[13]);
266 if (cmdstatp->have_sense)
267 __scsi_print_sense(STp->device, name,
268 SRpnt->sense, SCSI_SENSE_BUFFERSIZE);
269 }
270 else
271#endif
272 if (cmdstatp->have_sense && (
273 scode != NO_SENSE &&
274 scode != RECOVERED_ERROR &&
275/* scode != UNIT_ATTENTION && */
276 scode != BLANK_CHECK &&
277 scode != VOLUME_OVERFLOW &&
278 SRpnt->cmd[0] != MODE_SENSE &&
279 SRpnt->cmd[0] != TEST_UNIT_READY)) { /* Abnormal conditions for tape */
280 if (cmdstatp->have_sense) {
281 printk(KERN_WARNING "%s:W: Command with sense data:\n", name);
282 __scsi_print_sense(STp->device, name,
283 SRpnt->sense, SCSI_SENSE_BUFFERSIZE);
284 }
285 else {
286 static int notyetprinted = 1;
287
288 printk(KERN_WARNING
289 "%s:W: Warning %x (driver bt 0x%x, host bt 0x%x).\n",
290 name, result, driver_byte(result),
291 host_byte(result));
292 if (notyetprinted) {
293 notyetprinted = 0;
294 printk(KERN_INFO
295 "%s:I: This warning may be caused by your scsi controller,\n", name);
296 printk(KERN_INFO
297 "%s:I: it has been reported with some Buslogic cards.\n", name);
298 }
299 }
300 }
301 STp->pos_unknown |= STp->device->was_reset;
302
303 if (cmdstatp->have_sense && scode == RECOVERED_ERROR) {
304 STp->recover_count++;
305 STp->recover_erreg++;
306#if DEBUG
307 if (debugging) {
308 if (SRpnt->cmd[0] == READ_6)
309 stp = "read";
310 else if (SRpnt->cmd[0] == WRITE_6)
311 stp = "write";
312 else
313 stp = "ioctl";
314 printk(OSST_DEB_MSG "%s:D: Recovered %s error (%d).\n", name, stp,
315 STp->recover_count);
316 }
317#endif
318 if ((sense[2] & 0xe0) == 0)
319 return 0;
320 }
321 return (-EIO);
322}
323
324
325/* Wakeup from interrupt */
326static void osst_end_async(struct request *req, blk_status_t status)
327{
328 struct scsi_request *rq = scsi_req(req);
329 struct osst_request *SRpnt = req->end_io_data;
330 struct osst_tape *STp = SRpnt->stp;
331 struct rq_map_data *mdata = &SRpnt->stp->buffer->map_data;
332
333 STp->buffer->cmdstat.midlevel_result = SRpnt->result = rq->result;
334#if DEBUG
335 STp->write_pending = 0;
336#endif
337 if (rq->sense_len)
338 memcpy(SRpnt->sense, rq->sense, SCSI_SENSE_BUFFERSIZE);
339 if (SRpnt->waiting)
340 complete(SRpnt->waiting);
341
342 if (SRpnt->bio) {
343 kfree(mdata->pages);
344 blk_rq_unmap_user(SRpnt->bio);
345 }
346
347 blk_put_request(req);
348}
349
/* osst_request memory management */
static struct osst_request *osst_allocate_request(void)
{
	return kzalloc(sizeof(struct osst_request), GFP_KERNEL);
}

static void osst_release_request(struct osst_request *streq)
{
	kfree(streq);
}

static int osst_execute(struct osst_request *SRpnt, const unsigned char *cmd,
			int cmd_len, int data_direction, void *buffer, unsigned bufflen,
			int use_sg, int timeout, int retries)
{
	struct request *req;
	struct scsi_request *rq;
	struct page **pages = NULL;
	struct rq_map_data *mdata = &SRpnt->stp->buffer->map_data;

	int err = 0;
	int write = (data_direction == DMA_TO_DEVICE);

	req = blk_get_request(SRpnt->stp->device->request_queue,
			write ? REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, 0);
	if (IS_ERR(req))
		return DRIVER_ERROR << 24;

	rq = scsi_req(req);
	req->rq_flags |= RQF_QUIET;

	SRpnt->bio = NULL;

	if (use_sg) {
		struct scatterlist *sg, *sgl = (struct scatterlist *)buffer;
		int i;

		pages = kcalloc(use_sg, sizeof(struct page *), GFP_KERNEL);
		if (!pages)
			goto free_req;

		for_each_sg(sgl, sg, use_sg, i)
			pages[i] = sg_page(sg);

		mdata->null_mapped = 1;

		mdata->page_order = get_order(sgl[0].length);
		mdata->nr_entries =
			DIV_ROUND_UP(bufflen, PAGE_SIZE << mdata->page_order);
		mdata->offset = 0;

		err = blk_rq_map_user(req->q, req, mdata, NULL, bufflen, GFP_KERNEL);
		if (err) {
			kfree(pages);
			goto free_req;
		}
		SRpnt->bio = req->bio;
		mdata->pages = pages;

	} else if (bufflen) {
		err = blk_rq_map_kern(req->q, req, buffer, bufflen, GFP_KERNEL);
		if (err)
			goto free_req;
	}

	rq->cmd_len = cmd_len;
	memset(rq->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
	memcpy(rq->cmd, cmd, rq->cmd_len);
	req->timeout = timeout;
	rq->retries = retries;
	req->end_io_data = SRpnt;

	blk_execute_rq_nowait(req->q, NULL, req, 1, osst_end_async);
	return 0;
free_req:
	blk_put_request(req);
	return DRIVER_ERROR << 24;
}

/* Do the SCSI command. Waits until the command has completed if do_wait is
   true. Otherwise osst_write_behind_check() is used to check that the command
   has finished. */
static struct osst_request * osst_do_scsi(struct osst_request *SRpnt, struct osst_tape *STp,
	unsigned char *cmd, int bytes, int direction, int timeout, int retries, int do_wait)
{
	unsigned char *bp;
	unsigned short use_sg;
#ifdef OSST_INJECT_ERRORS
	static int inject = 0;
	static int repeat = 0;
#endif
	struct completion *waiting;

	/* if async, make sure there's no command outstanding */
	if (!do_wait && ((STp->buffer)->last_SRpnt)) {
		printk(KERN_ERR "%s: Async command already active.\n",
		       tape_name(STp));
		if (signal_pending(current))
			(STp->buffer)->syscall_result = (-EINTR);
		else
			(STp->buffer)->syscall_result = (-EBUSY);
		return NULL;
	}

	if (SRpnt == NULL) {
		SRpnt = osst_allocate_request();
		if (SRpnt == NULL) {
			printk(KERN_ERR "%s: Can't allocate SCSI request.\n",
			       tape_name(STp));
			if (signal_pending(current))
				(STp->buffer)->syscall_result = (-EINTR);
			else
				(STp->buffer)->syscall_result = (-EBUSY);
			return NULL;
		}
		SRpnt->stp = STp;
	}

	/* If async IO, set last_SRpnt. This ptr tells write_behind_check
	   which IO is outstanding. It's nulled out when the IO completes. */
	if (!do_wait)
		(STp->buffer)->last_SRpnt = SRpnt;

	waiting = &STp->wait;
	init_completion(waiting);
	SRpnt->waiting = waiting;

	use_sg = (bytes > STp->buffer->sg[0].length) ? STp->buffer->use_sg : 0;
	if (use_sg) {
		bp = (char *)&(STp->buffer->sg[0]);
		if (STp->buffer->sg_segs < use_sg)
			use_sg = STp->buffer->sg_segs;
	}
	else
		bp = (STp->buffer)->b_data;

	memcpy(SRpnt->cmd, cmd, sizeof(SRpnt->cmd));
	STp->buffer->cmdstat.have_sense = 0;
	STp->buffer->syscall_result = 0;

	if (osst_execute(SRpnt, cmd, COMMAND_SIZE(cmd[0]), direction, bp, bytes,
			 use_sg, timeout, retries))
		/* could not allocate the buffer or request was too large */
		(STp->buffer)->syscall_result = (-EBUSY);
	else if (do_wait) {
		wait_for_completion(waiting);
		SRpnt->waiting = NULL;
		STp->buffer->syscall_result = osst_chk_result(STp, SRpnt);
#ifdef OSST_INJECT_ERRORS
		if (STp->buffer->syscall_result == 0 &&
		    cmd[0] == READ_6 &&
		    cmd[4] &&
		    ( (++ inject % 83) == 29 ||
		      (STp->first_frame_position == 240
			 /* or STp->read_error_frame to fail again on the block calculated above */ &&
				 ++repeat < 3))) {
			printk(OSST_DEB_MSG "%s:D: Injecting read error\n", tape_name(STp));
			STp->buffer->last_result_fatal = 1;
		}
#endif
	}
	return SRpnt;
}


/* Handle the write-behind checking (downs the semaphore) */
static void osst_write_behind_check(struct osst_tape *STp)
{
	struct osst_buffer * STbuffer;

	STbuffer = STp->buffer;

#if DEBUG
	if (STp->write_pending)
		STp->nbr_waits++;
	else
		STp->nbr_finished++;
#endif
	wait_for_completion(&(STp->wait));
	STp->buffer->last_SRpnt->waiting = NULL;

	STp->buffer->syscall_result = osst_chk_result(STp, STp->buffer->last_SRpnt);

	if (STp->buffer->syscall_result)
		STp->buffer->syscall_result =
			osst_write_error_recovery(STp, &(STp->buffer->last_SRpnt), 1);
	else
		STp->first_frame_position++;

	osst_release_request(STp->buffer->last_SRpnt);

	if (STbuffer->writing < STbuffer->buffer_bytes)
		printk(KERN_WARNING "osst :A: write_behind_check: something left in buffer!\n");

	STbuffer->last_SRpnt = NULL;
	STbuffer->buffer_bytes -= STbuffer->writing;
	STbuffer->writing = 0;

	return;
}



/* Onstream specific Routines */
/*
 * Initialize the OnStream AUX
 */
static void osst_init_aux(struct osst_tape * STp, int frame_type, int frame_seq_number,
					int logical_blk_num, int blk_sz, int blk_cnt)
{
	os_aux_t       *aux = STp->buffer->aux;
	os_partition_t *par = &aux->partition;
	os_dat_t       *dat = &aux->dat;

	if (STp->raw) return;

	memset(aux, 0, sizeof(*aux));
	aux->format_id = htonl(0);
	memcpy(aux->application_sig, "LIN4", 4);
	aux->hdwr = htonl(0);
	aux->frame_type = frame_type;

	switch (frame_type) {
	    case OS_FRAME_TYPE_HEADER:
		aux->update_frame_cntr = htonl(STp->update_frame_cntr);
		par->partition_num     = OS_CONFIG_PARTITION;
		par->par_desc_ver      = OS_PARTITION_VERSION;
		par->wrt_pass_cntr     = htons(0xffff);
		/* 0-4 = reserved, 5-9 = header, 2990-2994 = header, 2995-2999 = reserved */
		par->first_frame_ppos  = htonl(0);
		par->last_frame_ppos   = htonl(0xbb7);
		aux->frame_seq_num     = htonl(0);
		aux->logical_blk_num_high = htonl(0);
		aux->logical_blk_num   = htonl(0);
		aux->next_mark_ppos    = htonl(STp->first_mark_ppos);
		break;
	    case OS_FRAME_TYPE_DATA:
	    case OS_FRAME_TYPE_MARKER:
		dat->dat_sz = 8;
		dat->reserved1 = 0;
		dat->entry_cnt = 1;
		dat->reserved3 = 0;
		dat->dat_list[0].blk_sz   = htonl(blk_sz);
		dat->dat_list[0].blk_cnt  = htons(blk_cnt);
		dat->dat_list[0].flags    = frame_type==OS_FRAME_TYPE_MARKER?
							OS_DAT_FLAGS_MARK:OS_DAT_FLAGS_DATA;
		dat->dat_list[0].reserved = 0;
		/* fall through */
	    case OS_FRAME_TYPE_EOD:
		aux->update_frame_cntr    = htonl(0);
		par->partition_num        = OS_DATA_PARTITION;
		par->par_desc_ver         = OS_PARTITION_VERSION;
		par->wrt_pass_cntr        = htons(STp->wrt_pass_cntr);
		par->first_frame_ppos     = htonl(STp->first_data_ppos);
		par->last_frame_ppos      = htonl(STp->capacity);
		aux->frame_seq_num        = htonl(frame_seq_number);
		aux->logical_blk_num_high = htonl(0);
		aux->logical_blk_num      = htonl(logical_blk_num);
		break;
	    default: ; /* probably FILL */
	}
	aux->filemark_cnt = htonl(STp->filemark_cnt);
	aux->phys_fm = htonl(0xffffffff);
	aux->last_mark_ppos = htonl(STp->last_mark_ppos);
	aux->last_mark_lbn  = htonl(STp->last_mark_lbn);
}

/*
 * Verify that we have the correct tape frame
 */
static int osst_verify_frame(struct osst_tape * STp, int frame_seq_number, int quiet)
{
	char               * name = tape_name(STp);
	os_aux_t           * aux  = STp->buffer->aux;
	os_partition_t     * par  = &(aux->partition);
	struct st_partstat * STps = &(STp->ps[STp->partition]);
	unsigned int	     blk_cnt, blk_sz, i;

	if (STp->raw) {
		if (STp->buffer->syscall_result) {
			for (i=0; i < STp->buffer->sg_segs; i++)
				memset(page_address(sg_page(&STp->buffer->sg[i])),
				       0, STp->buffer->sg[i].length);
			strcpy(STp->buffer->b_data, "READ ERROR ON FRAME");
		} else
			STp->buffer->buffer_bytes = OS_FRAME_SIZE;
		return 1;
	}
	if (STp->buffer->syscall_result) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Skipping frame, read error\n", name);
#endif
		return 0;
	}
	if (ntohl(aux->format_id) != 0) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Skipping frame, format_id %u\n", name, ntohl(aux->format_id));
#endif
		goto err_out;
	}
	if (memcmp(aux->application_sig, STp->application_sig, 4) != 0 &&
	    (memcmp(aux->application_sig, "LIN3", 4) != 0 || STp->linux_media_version != 4)) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Skipping frame, incorrect application signature\n", name);
#endif
		goto err_out;
	}
	if (par->partition_num != OS_DATA_PARTITION) {
		if (!STp->linux_media || STp->linux_media_version != 2) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Skipping frame, partition num %d\n",
					    name, par->partition_num);
#endif
			goto err_out;
		}
	}
	if (par->par_desc_ver != OS_PARTITION_VERSION) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Skipping frame, partition version %d\n", name, par->par_desc_ver);
#endif
		goto err_out;
	}
	if (ntohs(par->wrt_pass_cntr) != STp->wrt_pass_cntr) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Skipping frame, wrt_pass_cntr %d (expected %d)\n",
				    name, ntohs(par->wrt_pass_cntr), STp->wrt_pass_cntr);
#endif
		goto err_out;
	}
	if (aux->frame_type != OS_FRAME_TYPE_DATA &&
	    aux->frame_type != OS_FRAME_TYPE_EOD &&
	    aux->frame_type != OS_FRAME_TYPE_MARKER) {
		if (!quiet) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Skipping frame, frame type %x\n", name, aux->frame_type);
#endif
		}
		goto err_out;
	}
	if (aux->frame_type == OS_FRAME_TYPE_EOD &&
	    STp->first_frame_position < STp->eod_frame_ppos) {
		printk(KERN_INFO "%s:I: Skipping premature EOD frame %d\n", name,
				 STp->first_frame_position);
		goto err_out;
	}
	if (frame_seq_number != -1 && ntohl(aux->frame_seq_num) != frame_seq_number) {
		if (!quiet) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Skipping frame, sequence number %u (expected %d)\n",
					    name, ntohl(aux->frame_seq_num), frame_seq_number);
#endif
		}
		goto err_out;
	}
	if (aux->frame_type == OS_FRAME_TYPE_MARKER) {
		STps->eof = ST_FM_HIT;

		i = ntohl(aux->filemark_cnt);
		if (STp->header_cache != NULL && i < OS_FM_TAB_MAX && (i > STp->filemark_cnt ||
		    STp->first_frame_position - 1 != ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[i]))) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: %s filemark %d at frame pos %d\n", name,
				  STp->header_cache->dat_fm_tab.fm_tab_ent[i] == 0?"Learned":"Corrected",
				  i, STp->first_frame_position - 1);
#endif
			STp->header_cache->dat_fm_tab.fm_tab_ent[i] = htonl(STp->first_frame_position - 1);
			if (i >= STp->filemark_cnt)
				STp->filemark_cnt = i+1;
		}
	}
	if (aux->frame_type == OS_FRAME_TYPE_EOD) {
		STps->eof = ST_EOD_1;
		STp->frame_in_buffer = 1;
	}
	if (aux->frame_type == OS_FRAME_TYPE_DATA) {
		blk_cnt = ntohs(aux->dat.dat_list[0].blk_cnt);
		blk_sz  = ntohl(aux->dat.dat_list[0].blk_sz);
		STp->buffer->buffer_bytes = blk_cnt * blk_sz;
		STp->buffer->read_pointer = 0;
		STp->frame_in_buffer = 1;

		/* See what block size was used to write file */
		if (STp->block_size != blk_sz && blk_sz > 0) {
			printk(KERN_INFO
			"%s:I: File was written with block size %d%c, currently %d%c, adjusted to match.\n",
				name, blk_sz<1024?blk_sz:blk_sz/1024,blk_sz<1024?'b':'k',
				STp->block_size<1024?STp->block_size:STp->block_size/1024,
				STp->block_size<1024?'b':'k');
			STp->block_size            = blk_sz;
			STp->buffer->buffer_blocks = OS_DATA_SIZE / blk_sz;
		}
		STps->eof = ST_NOEOF;
	}
	STp->frame_seq_number = ntohl(aux->frame_seq_num);
	STp->logical_blk_num  = ntohl(aux->logical_blk_num);
	return 1;

err_out:
	if (STp->read_error_frame == 0)
		STp->read_error_frame = STp->first_frame_position - 1;
	return 0;
}

/*
 * Wait for the unit to become Ready
 */
static int osst_wait_ready(struct osst_tape * STp, struct osst_request ** aSRpnt,
				 unsigned timeout, int initial_delay)
{
	unsigned char		cmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt;
	unsigned long		startwait = jiffies;
#if DEBUG
	int			dbg = debugging;
	char		      * name = tape_name(STp);

	printk(OSST_DEB_MSG "%s:D: Reached onstream wait ready\n", name);
#endif

	if (initial_delay > 0)
		msleep(jiffies_to_msecs(initial_delay));

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = TEST_UNIT_READY;

	SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
	*aSRpnt = SRpnt;
	if (!SRpnt) return (-EBUSY);

	while ( STp->buffer->syscall_result && time_before(jiffies, startwait + timeout*HZ) &&
	       (( SRpnt->sense[2] == 2 && SRpnt->sense[12] == 4 &&
		  (SRpnt->sense[13] == 1 || SRpnt->sense[13] == 8) ) ||
		( SRpnt->sense[2] == 6 && SRpnt->sense[12] == 0x28 &&
		  SRpnt->sense[13] == 0 ) )) {
#if DEBUG
	    if (debugging) {
		printk(OSST_DEB_MSG "%s:D: Sleeping in onstream wait ready\n", name);
		printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name);
		debugging = 0;
	    }
#endif
	    msleep(100);

	    memset(cmd, 0, MAX_COMMAND_SIZE);
	    cmd[0] = TEST_UNIT_READY;

	    SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
	}
	*aSRpnt = SRpnt;
#if DEBUG
	debugging = dbg;
#endif
	if ( STp->buffer->syscall_result &&
	     osst_write_error_recovery(STp, aSRpnt, 0) ) {
#if DEBUG
	    printk(OSST_DEB_MSG "%s:D: Abnormal exit from onstream wait ready\n", name);
	    printk(OSST_DEB_MSG "%s:D: Result = %d, Sense: 0=%02x, 2=%02x, 12=%02x, 13=%02x\n", name,
			STp->buffer->syscall_result, SRpnt->sense[0], SRpnt->sense[2],
			SRpnt->sense[12], SRpnt->sense[13]);
#endif
	    return (-EIO);
	}
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Normal exit from onstream wait ready\n", name);
#endif
	return 0;
}

/*
 * Wait for a tape to be inserted in the unit
 */
static int osst_wait_for_medium(struct osst_tape * STp, struct osst_request ** aSRpnt, unsigned timeout)
{
	unsigned char		cmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt;
	unsigned long		startwait = jiffies;
#if DEBUG
	int			dbg = debugging;
	char		      * name = tape_name(STp);

	printk(OSST_DEB_MSG "%s:D: Reached onstream wait for medium\n", name);
#endif

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = TEST_UNIT_READY;

	SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
	*aSRpnt = SRpnt;
	if (!SRpnt) return (-EBUSY);

	while ( STp->buffer->syscall_result && time_before(jiffies, startwait + timeout*HZ) &&
		SRpnt->sense[2] == 2 && SRpnt->sense[12] == 0x3a && SRpnt->sense[13] == 0 ) {
#if DEBUG
	    if (debugging) {
		printk(OSST_DEB_MSG "%s:D: Sleeping in onstream wait medium\n", name);
		printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name);
		debugging = 0;
	    }
#endif
	    msleep(100);

	    memset(cmd, 0, MAX_COMMAND_SIZE);
	    cmd[0] = TEST_UNIT_READY;

	    SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
	}
	*aSRpnt = SRpnt;
#if DEBUG
	debugging = dbg;
#endif
	if ( STp->buffer->syscall_result && SRpnt->sense[2] != 2 &&
	     SRpnt->sense[12] != 4 && SRpnt->sense[13] == 1) {
#if DEBUG
	    printk(OSST_DEB_MSG "%s:D: Abnormal exit from onstream wait medium\n", name);
	    printk(OSST_DEB_MSG "%s:D: Result = %d, Sense: 0=%02x, 2=%02x, 12=%02x, 13=%02x\n", name,
			STp->buffer->syscall_result, SRpnt->sense[0], SRpnt->sense[2],
			SRpnt->sense[12], SRpnt->sense[13]);
#endif
	    return 0;
	}
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Normal exit from onstream wait medium\n", name);
#endif
	return 1;
}

static int osst_position_tape_and_confirm(struct osst_tape * STp, struct osst_request ** aSRpnt, int frame)
{
	int	retval;

	osst_wait_ready(STp, aSRpnt, 15 * 60, 0);	/* TODO - can this catch a write error? */
	retval = osst_set_frame_position(STp, aSRpnt, frame, 0);
	if (retval) return (retval);
	osst_wait_ready(STp, aSRpnt, 15 * 60, OSST_WAIT_POSITION_COMPLETE);
	return (osst_get_frame_position(STp, aSRpnt));
}

/*
 * Wait for write(s) to complete
 */
static int osst_flush_drive_buffer(struct osst_tape * STp, struct osst_request ** aSRpnt)
{
	unsigned char		cmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt;
	int			result = 0;
	int			delay  = OSST_WAIT_WRITE_COMPLETE;
#if DEBUG
	char		      * name = tape_name(STp);

	printk(OSST_DEB_MSG "%s:D: Reached onstream flush drive buffer (write filemark)\n", name);
#endif

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = WRITE_FILEMARKS;
	cmd[1] = 1;

	SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
	*aSRpnt = SRpnt;
	if (!SRpnt) return (-EBUSY);
	if (STp->buffer->syscall_result) {
		if ((SRpnt->sense[2] & 0x0f) == 2 && SRpnt->sense[12] == 4) {
			if (SRpnt->sense[13] == 8) {
				delay = OSST_WAIT_LONG_WRITE_COMPLETE;
			}
		} else
			result = osst_write_error_recovery(STp, aSRpnt, 0);
	}
	result |= osst_wait_ready(STp, aSRpnt, 5 * 60, delay);
	STp->ps[STp->partition].rw = OS_WRITING_COMPLETE;

	return (result);
}

#define OSST_POLL_PER_SEC 10
static int osst_wait_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, int curr, int minlast, int to)
{
	unsigned long	startwait = jiffies;
	char	      * name      = tape_name(STp);
#if DEBUG
	char	   notyetprinted = 1;
#endif
	if (minlast >= 0 && STp->ps[STp->partition].rw != ST_READING)
		printk(KERN_ERR "%s:A: Waiting for frame without having initialized read!\n", name);

	while (time_before (jiffies, startwait + to*HZ))
	{
		int result;
		result = osst_get_frame_position(STp, aSRpnt);
		if (result == -EIO)
			if ((result = osst_write_error_recovery(STp, aSRpnt, 0)) == 0)
				return 0;	/* successful recovery leaves drive ready for frame */
		if (result < 0) break;
		if (STp->first_frame_position == curr &&
		    ((minlast < 0 &&
		      (signed)STp->last_frame_position > (signed)curr + minlast) ||
		     (minlast >= 0 && STp->cur_frames > minlast)
		    ) && result >= 0)
		{
#if DEBUG
			if (debugging || time_after_eq(jiffies, startwait + 2*HZ/OSST_POLL_PER_SEC))
				printk (OSST_DEB_MSG
					"%s:D: Succ wait f fr %i (>%i): %i-%i %i (%i): %3li.%li s\n",
					name, curr, curr+minlast, STp->first_frame_position,
					STp->last_frame_position, STp->cur_frames,
					result, (jiffies-startwait)/HZ,
					(((jiffies-startwait)%HZ)*10)/HZ);
#endif
			return 0;
		}
#if DEBUG
		if (time_after_eq(jiffies, startwait + 2*HZ/OSST_POLL_PER_SEC) && notyetprinted)
		{
			printk (OSST_DEB_MSG "%s:D: Wait for frame %i (>%i): %i-%i %i (%i)\n",
				name, curr, curr+minlast, STp->first_frame_position,
				STp->last_frame_position, STp->cur_frames, result);
			notyetprinted--;
		}
#endif
		msleep(1000 / OSST_POLL_PER_SEC);
	}
#if DEBUG
	printk (OSST_DEB_MSG "%s:D: Fail wait f fr %i (>%i): %i-%i %i: %3li.%li s\n",
		name, curr, curr+minlast, STp->first_frame_position,
		STp->last_frame_position, STp->cur_frames,
		(jiffies-startwait)/HZ, (((jiffies-startwait)%HZ)*10)/HZ);
#endif
	return -EBUSY;
}

static int osst_recover_wait_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, int writing)
{
	struct osst_request   * SRpnt;
	unsigned char		cmd[MAX_COMMAND_SIZE];
	unsigned long		startwait = jiffies;
	int			retval    = 1;
	char		      * name      = tape_name(STp);

	if (writing) {
		char	mybuf[24];
		char  * olddata = STp->buffer->b_data;
		int	oldsize = STp->buffer->buffer_size;

		/* write zero fm then read pos - if shows write error, try to recover - if no progress, wait */

		memset(cmd, 0, MAX_COMMAND_SIZE);
		cmd[0] = WRITE_FILEMARKS;
		cmd[1] = 1;
		SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout,
				     MAX_RETRIES, 1);

		while (retval && time_before (jiffies, startwait + 5*60*HZ)) {

			if (STp->buffer->syscall_result && (SRpnt->sense[2] & 0x0f) != 2) {

				/* some failure - not just not-ready */
				retval = osst_write_error_recovery(STp, aSRpnt, 0);
				break;
			}
			schedule_timeout_interruptible(HZ / OSST_POLL_PER_SEC);

			STp->buffer->b_data = mybuf; STp->buffer->buffer_size = 24;
			memset(cmd, 0, MAX_COMMAND_SIZE);
			cmd[0] = READ_POSITION;

			SRpnt = osst_do_scsi(SRpnt, STp, cmd, 20, DMA_FROM_DEVICE, STp->timeout,
						MAX_RETRIES, 1);

			retval = ( STp->buffer->syscall_result || (STp->buffer)->b_data[15] > 25 );
			STp->buffer->b_data = olddata; STp->buffer->buffer_size = oldsize;
		}
		if (retval)
			printk(KERN_ERR "%s:E: Device did not succeed to write buffered data\n", name);
	} else
		/* TODO - figure out which error conditions can be handled */
		if (STp->buffer->syscall_result)
			printk(KERN_WARNING
				"%s:W: Recover_wait_frame(read) cannot handle %02x:%02x:%02x\n", name,
				(*aSRpnt)->sense[ 2] & 0x0f,
				(*aSRpnt)->sense[12],
				(*aSRpnt)->sense[13]);

	return retval;
}

/*
 * Read the next OnStream tape frame at the current location
 */
static int osst_read_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, int timeout)
{
	unsigned char		cmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt;
	int			retval = 0;
#if DEBUG
	os_aux_t	      * aux    = STp->buffer->aux;
	char		      * name   = tape_name(STp);
#endif

	if (STp->poll)
		if (osst_wait_frame (STp, aSRpnt, STp->first_frame_position, 0, timeout))
			retval = osst_recover_wait_frame(STp, aSRpnt, 0);

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = READ_6;
	cmd[1] = 1;
	cmd[4] = 1;

#if DEBUG
	if (debugging)
		printk(OSST_DEB_MSG "%s:D: Reading frame from OnStream tape\n", name);
#endif
	SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, OS_FRAME_SIZE, DMA_FROM_DEVICE,
				      STp->timeout, MAX_RETRIES, 1);
	*aSRpnt = SRpnt;
	if (!SRpnt)
		return (-EBUSY);

	if ((STp->buffer)->syscall_result) {
	    retval = 1;
	    if (STp->read_error_frame == 0) {
		STp->read_error_frame = STp->first_frame_position;
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Recording read error at %d\n", name, STp->read_error_frame);
#endif
	    }
#if DEBUG
	    if (debugging)
		printk(OSST_DEB_MSG "%s:D: Sense: %2x %2x %2x %2x %2x %2x %2x %2x\n",
		   name,
		   SRpnt->sense[0], SRpnt->sense[1],
		   SRpnt->sense[2], SRpnt->sense[3],
		   SRpnt->sense[4], SRpnt->sense[5],
		   SRpnt->sense[6], SRpnt->sense[7]);
#endif
	}
	else
	    STp->first_frame_position++;
#if DEBUG
	if (debugging) {
	   char sig[8]; int i;
	   for (i=0;i<4;i++)
		   sig[i] = aux->application_sig[i]<32?'^':aux->application_sig[i];
	   sig[4] = '\0';
	   printk(OSST_DEB_MSG
		"%s:D: AUX: %s UpdFrCt#%d Wpass#%d %s FrSeq#%d LogBlk#%d Qty=%d Sz=%d\n", name, sig,
			ntohl(aux->update_frame_cntr), ntohs(aux->partition.wrt_pass_cntr),
			aux->frame_type==1?"EOD":aux->frame_type==2?"MARK":
			aux->frame_type==8?"HEADR":aux->frame_type==0x80?"DATA":"FILL",
			ntohl(aux->frame_seq_num), ntohl(aux->logical_blk_num),
			ntohs(aux->dat.dat_list[0].blk_cnt), ntohl(aux->dat.dat_list[0].blk_sz) );
	   if (aux->frame_type==2)
		printk(OSST_DEB_MSG "%s:D: mark_cnt=%d, last_mark_ppos=%d, last_mark_lbn=%d\n", name,
			ntohl(aux->filemark_cnt), ntohl(aux->last_mark_ppos), ntohl(aux->last_mark_lbn));
	   printk(OSST_DEB_MSG "%s:D: Exit read frame from OnStream tape with code %d\n", name, retval);
	}
#endif
	return (retval);
}

static int osst_initiate_read(struct osst_tape * STp, struct osst_request ** aSRpnt)
{
	struct st_partstat    * STps   = &(STp->ps[STp->partition]);
	struct osst_request   * SRpnt  ;
	unsigned char		cmd[MAX_COMMAND_SIZE];
	int			retval = 0;
	char		      * name   = tape_name(STp);

	if (STps->rw != ST_READING) {         /* Initialize read operation */
		if (STps->rw == ST_WRITING || STp->dirty) {
			STp->write_type = OS_WRITE_DATA;
			osst_flush_write_buffer(STp, aSRpnt);
			osst_flush_drive_buffer(STp, aSRpnt);
		}
		STps->rw = ST_READING;
		STp->frame_in_buffer = 0;

		/*
		 *      Issue a read 0 command to get the OnStream drive
		 *      read frames into its buffer.
		 */
		memset(cmd, 0, MAX_COMMAND_SIZE);
		cmd[0] = READ_6;
		cmd[1] = 1;

#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Start Read Ahead on OnStream tape\n", name);
#endif
		SRpnt   = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
		*aSRpnt = SRpnt;
		if ((retval = STp->buffer->syscall_result))
			printk(KERN_WARNING "%s:W: Error starting read ahead\n", name);
	}

	return retval;
}

static int osst_get_logical_frame(struct osst_tape * STp, struct osst_request ** aSRpnt,
						int frame_seq_number, int quiet)
{
	struct st_partstat * STps  = &(STp->ps[STp->partition]);
	char		   * name  = tape_name(STp);
	int		     cnt   = 0,
			     bad   = 0,
			     past  = 0,
			     x,
			     position;

	/*
	 * If we want just any frame (-1) and there is a frame in the buffer, return it
	 */
	if (frame_seq_number == -1 && STp->frame_in_buffer) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Frame %d still in buffer\n", name, STp->frame_seq_number);
#endif
		return (STps->eof);
	}
	/*
	 * Search and wait for the next logical tape frame
	 */
	while (1) {
		if (cnt++ > 400) {
			printk(KERN_ERR "%s:E: Couldn't find logical frame %d, aborting\n",
					name, frame_seq_number);
			if (STp->read_error_frame) {
				osst_set_frame_position(STp, aSRpnt, STp->read_error_frame, 0);
#if DEBUG
				printk(OSST_DEB_MSG "%s:D: Repositioning tape to bad frame %d\n",
						name, STp->read_error_frame);
#endif
				STp->read_error_frame = 0;
				STp->abort_count++;
			}
			return (-EIO);
		}
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Looking for frame %d, attempt %d\n",
					  name, frame_seq_number, cnt);
#endif
		if ( osst_initiate_read(STp, aSRpnt)
                || ( (!STp->frame_in_buffer) && osst_read_frame(STp, aSRpnt, 30) ) ) {
			if (STp->raw)
				return (-EIO);
			position = osst_get_frame_position(STp, aSRpnt);
			if (position >= 0xbae && position < 0xbb8)
				position = 0xbb8;
			else if (position > STp->eod_frame_ppos || ++bad == 10) {
				position = STp->read_error_frame - 1;
				bad = 0;
			}
			else {
				position += 29;
				cnt      += 19;
			}
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Bad frame detected, positioning tape to block %d\n",
					 name, position);
#endif
			osst_set_frame_position(STp, aSRpnt, position, 0);
			continue;
		}
		if (osst_verify_frame(STp, frame_seq_number, quiet))
			break;
		if (osst_verify_frame(STp, -1, quiet)) {
			x = ntohl(STp->buffer->aux->frame_seq_num);
			if (STp->fast_open) {
				printk(KERN_WARNING
				       "%s:W: Found logical frame %d instead of %d after fast open\n",
				       name, x, frame_seq_number);
				STp->header_ok = 0;
				STp->read_error_frame = 0;
				return (-EIO);
			}
			if (x > frame_seq_number) {
				if (++past > 3) {
					/* positioning backwards did not bring us to the desired frame */
					position = STp->read_error_frame - 1;
				}
				else {
					position = osst_get_frame_position(STp, aSRpnt)
					         + frame_seq_number - x - 1;

					if (STp->first_frame_position >= 3000 && position < 3000)
						position -= 10;
				}
#if DEBUG
				printk(OSST_DEB_MSG
				       "%s:D: Found logical frame %d while looking for %d: back up %d\n",
						name, x, frame_seq_number,
						STp->first_frame_position - position);
#endif
				osst_set_frame_position(STp, aSRpnt, position, 0);
				cnt += 10;
			}
			else
				past = 0;
		}
		if (osst_get_frame_position(STp, aSRpnt) == 0xbaf) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Skipping config partition\n", name);
#endif
			osst_set_frame_position(STp, aSRpnt, 0xbb8, 0);
			cnt--;
		}
		STp->frame_in_buffer = 0;
	}
	if (cnt > 1) {
		STp->recover_count++;
		STp->recover_erreg++;
		printk(KERN_WARNING "%s:I: Don't worry, Read error at position %d recovered\n",
					name, STp->read_error_frame);
	}
	STp->read_count++;

#if DEBUG
	if (debugging || STps->eof)
		printk(OSST_DEB_MSG
			"%s:D: Exit get logical frame (%d=>%d) from OnStream tape with code %d\n",
			name, frame_seq_number, STp->frame_seq_number, STps->eof);
#endif
	STp->fast_open = 0;
	STp->read_error_frame = 0;
	return (STps->eof);
}

1274static int osst_seek_logical_blk(struct osst_tape * STp, struct osst_request ** aSRpnt, int logical_blk_num)
1275{
1276 struct st_partstat * STps = &(STp->ps[STp->partition]);
1277 char * name = tape_name(STp);
1278 int retries = 0;
1279 int frame_seq_estimate, ppos_estimate, move;
1280
1281 if (logical_blk_num < 0) logical_blk_num = 0;
1282#if DEBUG
1283 printk(OSST_DEB_MSG "%s:D: Seeking logical block %d (now at %d, size %d%c)\n",
1284 name, logical_blk_num, STp->logical_blk_num,
1285 STp->block_size<1024?STp->block_size:STp->block_size/1024,
1286 STp->block_size<1024?'b':'k');
1287#endif
1288 /* Do we know where we are? */
1289 if (STps->drv_block >= 0) {
1290 move = logical_blk_num - STp->logical_blk_num;
1291 if (move < 0) move -= (OS_DATA_SIZE / STp->block_size) - 1;
1292 move /= (OS_DATA_SIZE / STp->block_size);
1293 frame_seq_estimate = STp->frame_seq_number + move;
1294 } else
1295 frame_seq_estimate = logical_blk_num * STp->block_size / OS_DATA_SIZE;
1296
1297 if (frame_seq_estimate < 2980) ppos_estimate = frame_seq_estimate + 10;
	else ppos_estimate = frame_seq_estimate + 20;
	while (++retries < 10) {
		if (ppos_estimate > STp->eod_frame_ppos-2) {
			frame_seq_estimate += STp->eod_frame_ppos - 2 - ppos_estimate;
			ppos_estimate = STp->eod_frame_ppos - 2;
		}
		if (frame_seq_estimate < 0) {
			frame_seq_estimate = 0;
			ppos_estimate = 10;
		}
		osst_set_frame_position(STp, aSRpnt, ppos_estimate, 0);
		if (osst_get_logical_frame(STp, aSRpnt, frame_seq_estimate, 1) >= 0) {
			/* we've located the estimated frame, now does it have our block? */
			if (logical_blk_num < STp->logical_blk_num ||
			    logical_blk_num >= STp->logical_blk_num + ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt)) {
				if (STps->eof == ST_FM_HIT)
					move = logical_blk_num < STp->logical_blk_num? -2 : 1;
				else {
					move = logical_blk_num - STp->logical_blk_num;
					if (move < 0) move -= (OS_DATA_SIZE / STp->block_size) - 1;
					move /= (OS_DATA_SIZE / STp->block_size);
				}
				if (!move) move = logical_blk_num > STp->logical_blk_num ? 1 : -1;
#if DEBUG
				printk(OSST_DEB_MSG
					"%s:D: Seek retry %d at ppos %d fsq %d (est %d) lbn %d (need %d) move %d\n",
					name, retries, ppos_estimate, STp->frame_seq_number, frame_seq_estimate,
					STp->logical_blk_num, logical_blk_num, move);
#endif
				frame_seq_estimate += move;
				ppos_estimate += move;
				continue;
			} else {
				STp->buffer->read_pointer = (logical_blk_num - STp->logical_blk_num) * STp->block_size;
				STp->buffer->buffer_bytes -= STp->buffer->read_pointer;
				STp->logical_blk_num = logical_blk_num;
#if DEBUG
				printk(OSST_DEB_MSG
					"%s:D: Seek success at ppos %d fsq %d in_buf %d, bytes %d, ptr %d*%d\n",
					name, ppos_estimate, STp->frame_seq_number, STp->frame_in_buffer,
					STp->buffer->buffer_bytes, STp->buffer->read_pointer / STp->block_size,
					STp->block_size);
#endif
				STps->drv_file = ntohl(STp->buffer->aux->filemark_cnt);
				if (STps->eof == ST_FM_HIT) {
					STps->drv_file++;
					STps->drv_block = 0;
				} else {
					STps->drv_block = ntohl(STp->buffer->aux->last_mark_lbn)?
						STp->logical_blk_num -
						(STps->drv_file ? ntohl(STp->buffer->aux->last_mark_lbn) + 1 : 0):
						-1;
				}
				STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD:ST_NOEOF;
				return 0;
			}
		}
		if (osst_get_logical_frame(STp, aSRpnt, -1, 1) < 0)
			goto error;
		/* we are not yet at the estimated frame, adjust our estimate of its physical position */
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Seek retry %d at ppos %d fsq %d (est %d) lbn %d (need %d)\n",
			name, retries, ppos_estimate, STp->frame_seq_number, frame_seq_estimate,
			STp->logical_blk_num, logical_blk_num);
#endif
		if (frame_seq_estimate != STp->frame_seq_number)
			ppos_estimate += frame_seq_estimate - STp->frame_seq_number;
		else
			break;
	}
error:
	printk(KERN_ERR "%s:E: Couldn't seek to logical block %d (at %d), %d retries\n",
		name, logical_blk_num, STp->logical_blk_num, retries);
	return (-EIO);
}

/* The values below are based on the OnStream frame payload size of 32K == 2**15,
 * that is, OSST_FRAME_SHIFT + OSST_SECTOR_SHIFT must be 15. With a minimum block
 * size of 512 bytes, we need to be able to resolve 32K/512 == 64 == 2**6 positions
 * inside each frame. Finally, OSST_SECTOR_MASK == 2**OSST_FRAME_SHIFT - 1.
 */
#define OSST_FRAME_SHIFT  6
#define OSST_SECTOR_SHIFT 9
#define OSST_SECTOR_MASK  0x03F

static int osst_get_sector(struct osst_tape * STp, struct osst_request ** aSRpnt)
{
	int sector;
#if DEBUG
	char * name = tape_name(STp);

	printk(OSST_DEB_MSG
		"%s:D: Positioned at ppos %d, frame %d, lbn %d, file %d, blk %d, %cptr %d, eof %d\n",
		name, STp->first_frame_position, STp->frame_seq_number, STp->logical_blk_num,
		STp->ps[STp->partition].drv_file, STp->ps[STp->partition].drv_block,
		STp->ps[STp->partition].rw == ST_WRITING?'w':'r',
		STp->ps[STp->partition].rw == ST_WRITING?STp->buffer->buffer_bytes:
		STp->buffer->read_pointer, STp->ps[STp->partition].eof);
#endif
	/* do we know where we are inside a file? */
	if (STp->ps[STp->partition].drv_block >= 0) {
		sector = (STp->frame_in_buffer ? STp->first_frame_position-1 :
				STp->first_frame_position) << OSST_FRAME_SHIFT;
		if (STp->ps[STp->partition].rw == ST_WRITING)
			sector |= (STp->buffer->buffer_bytes >> OSST_SECTOR_SHIFT) & OSST_SECTOR_MASK;
		else
			sector |= (STp->buffer->read_pointer >> OSST_SECTOR_SHIFT) & OSST_SECTOR_MASK;
	} else {
		sector = osst_get_frame_position(STp, aSRpnt);
		if (sector > 0)
			sector <<= OSST_FRAME_SHIFT;
	}
	return sector;
}

static int osst_seek_sector(struct osst_tape * STp, struct osst_request ** aSRpnt, int sector)
{
	struct st_partstat * STps = &(STp->ps[STp->partition]);
	int frame = sector >> OSST_FRAME_SHIFT,
	    offset = (sector & OSST_SECTOR_MASK) << OSST_SECTOR_SHIFT,
	    r;
#if DEBUG
	char * name = tape_name(STp);

	printk(OSST_DEB_MSG "%s:D: Seeking sector %d in frame %d at offset %d\n",
		name, sector, frame, offset);
#endif
	if (frame < 0 || frame >= STp->capacity) return (-ENXIO);

	if (frame <= STp->first_data_ppos) {
		STp->frame_seq_number = STp->logical_blk_num = STps->drv_file = STps->drv_block = 0;
		return (osst_set_frame_position(STp, aSRpnt, frame, 0));
	}
	r = osst_set_frame_position(STp, aSRpnt, offset?frame:frame-1, 0);
	if (r < 0) return r;

	r = osst_get_logical_frame(STp, aSRpnt, -1, 1);
	if (r < 0) return r;

	if (osst_get_frame_position(STp, aSRpnt) != (offset?frame+1:frame)) return (-EIO);

	if (offset) {
		STp->logical_blk_num += offset / STp->block_size;
		STp->buffer->read_pointer = offset;
		STp->buffer->buffer_bytes -= offset;
	} else {
		STp->frame_seq_number++;
		STp->frame_in_buffer = 0;
		STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt);
		STp->buffer->buffer_bytes = STp->buffer->read_pointer = 0;
	}
	STps->drv_file = ntohl(STp->buffer->aux->filemark_cnt);
	if (STps->eof == ST_FM_HIT) {
		STps->drv_file++;
		STps->drv_block = 0;
	} else {
		STps->drv_block = ntohl(STp->buffer->aux->last_mark_lbn)?
			STp->logical_blk_num -
			(STps->drv_file ? ntohl(STp->buffer->aux->last_mark_lbn) + 1 : 0):
			-1;
	}
	STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD:ST_NOEOF;
#if DEBUG
	printk(OSST_DEB_MSG
		"%s:D: Now positioned at ppos %d, frame %d, lbn %d, file %d, blk %d, rptr %d, eof %d\n",
		name, STp->first_frame_position, STp->frame_seq_number, STp->logical_blk_num,
		STps->drv_file, STps->drv_block, STp->buffer->read_pointer, STps->eof);
#endif
	return 0;
}

/*
 * Read back the drive's internal buffer contents, as a part
 * of the write error recovery mechanism for old OnStream
 * firmware revisions.
 * Precondition for this function to work: all frames in the
 * drive's buffer must be of one type (DATA, MARK or EOD)!
 */
static int osst_read_back_buffer_and_rewrite(struct osst_tape * STp, struct osst_request ** aSRpnt,
						unsigned int frame, unsigned int skip, int pending)
{
	struct osst_request * SRpnt = * aSRpnt;
	unsigned char * buffer, * p;
	unsigned char cmd[MAX_COMMAND_SIZE];
	int flag, new_frame, i;
	int nframes = STp->cur_frames;
	int blks_per_frame = ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt);
	int frame_seq_number = ntohl(STp->buffer->aux->frame_seq_num)
				- (nframes + pending - 1);
	int logical_blk_num = ntohl(STp->buffer->aux->logical_blk_num)
				- (nframes + pending - 1) * blks_per_frame;
	char * name = tape_name(STp);
	unsigned long startwait = jiffies;
#if DEBUG
	int dbg = debugging;
#endif

	if ((buffer = vmalloc(array_size((nframes + 1), OS_DATA_SIZE))) == NULL)
		return (-EIO);

	printk(KERN_INFO "%s:I: Reading back %d frames from drive buffer%s\n",
			name, nframes, pending?" and one that was pending":"");

	osst_copy_from_buffer(STp->buffer, (p = &buffer[nframes * OS_DATA_SIZE]));
#if DEBUG
	if (pending && debugging)
		printk(OSST_DEB_MSG "%s:D: Pending frame %d (lblk %d), data %02x %02x %02x %02x\n",
			name, frame_seq_number + nframes,
			logical_blk_num + nframes * blks_per_frame,
			p[0], p[1], p[2], p[3]);
#endif
	for (i = 0, p = buffer; i < nframes; i++, p += OS_DATA_SIZE) {

		memset(cmd, 0, MAX_COMMAND_SIZE);
		cmd[0] = 0x3C;		/* Buffer Read */
		cmd[1] = 6;		/* Retrieve Faulty Block */
		cmd[7] = 32768 >> 8;
		cmd[8] = 32768 & 0xff;

		SRpnt = osst_do_scsi(SRpnt, STp, cmd, OS_FRAME_SIZE, DMA_FROM_DEVICE,
					STp->timeout, MAX_RETRIES, 1);

		if ((STp->buffer)->syscall_result || !SRpnt) {
			printk(KERN_ERR "%s:E: Failed to read frame back from OnStream buffer\n", name);
			vfree(buffer);
			*aSRpnt = SRpnt;
			return (-EIO);
		}
		osst_copy_from_buffer(STp->buffer, p);
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Read back logical frame %d, data %02x %02x %02x %02x\n",
				name, frame_seq_number + i, p[0], p[1], p[2], p[3]);
#endif
	}
	*aSRpnt = SRpnt;
	osst_get_frame_position(STp, aSRpnt);

#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Frames left in buffer: %d\n", name, STp->cur_frames);
#endif
	/* Write synchronously so we can be sure we're OK again and don't have to recover recursively */
	/* In the header we don't actually re-write the frames that fail, just the ones after them */

	for (flag=1, new_frame=frame, p=buffer, i=0; i < nframes + pending; ) {

		if (flag) {
			if (STp->write_type == OS_WRITE_HEADER) {
				i += skip;
				p += skip * OS_DATA_SIZE;
			}
			else if (new_frame < 2990 && new_frame+skip+nframes+pending >= 2990)
				new_frame = 3000-i;
			else
				new_frame += skip;
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Position to frame %d, write fseq %d\n",
				name, new_frame+i, frame_seq_number+i);
#endif
			osst_set_frame_position(STp, aSRpnt, new_frame + i, 0);
			osst_wait_ready(STp, aSRpnt, 60, OSST_WAIT_POSITION_COMPLETE);
			osst_get_frame_position(STp, aSRpnt);
			SRpnt = * aSRpnt;

			if (new_frame > frame + 1000) {
				printk(KERN_ERR "%s:E: Failed to find writable tape media\n", name);
				vfree(buffer);
				return (-EIO);
			}
			if ( i >= nframes + pending ) break;
			flag = 0;
		}
		osst_copy_to_buffer(STp->buffer, p);
		/*
		 * IMPORTANT: for error recovery to work, _never_ queue frames with mixed frame type!
		 */
		osst_init_aux(STp, STp->buffer->aux->frame_type, frame_seq_number+i,
				logical_blk_num + i*blks_per_frame,
				ntohl(STp->buffer->aux->dat.dat_list[0].blk_sz), blks_per_frame);
		memset(cmd, 0, MAX_COMMAND_SIZE);
		cmd[0] = WRITE_6;
		cmd[1] = 1;
		cmd[4] = 1;

#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG
				"%s:D: About to write frame %d, seq %d, lbn %d, data %02x %02x %02x %02x\n",
				name, new_frame+i, frame_seq_number+i, logical_blk_num + i*blks_per_frame,
				p[0], p[1], p[2], p[3]);
#endif
		SRpnt = osst_do_scsi(SRpnt, STp, cmd, OS_FRAME_SIZE, DMA_TO_DEVICE,
					STp->timeout, MAX_RETRIES, 1);

		if (STp->buffer->syscall_result)
			flag = 1;
		else {
			p += OS_DATA_SIZE; i++;

			/* if we just sent the last frame, wait till all successfully written */
			if ( i == nframes + pending ) {
#if DEBUG
				printk(OSST_DEB_MSG "%s:D: Check re-write successful\n", name);
#endif
				memset(cmd, 0, MAX_COMMAND_SIZE);
				cmd[0] = WRITE_FILEMARKS;
				cmd[1] = 1;
				SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE,
							STp->timeout, MAX_RETRIES, 1);
#if DEBUG
				if (debugging) {
					printk(OSST_DEB_MSG "%s:D: Sleeping in re-write wait ready\n", name);
					printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name);
					debugging = 0;
				}
#endif
				flag = STp->buffer->syscall_result;
				while ( !flag && time_before(jiffies, startwait + 60*HZ) ) {

					memset(cmd, 0, MAX_COMMAND_SIZE);
					cmd[0] = TEST_UNIT_READY;

					SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE, STp->timeout,
								MAX_RETRIES, 1);

					if (SRpnt->sense[2] == 2 && SRpnt->sense[12] == 4 &&
					    (SRpnt->sense[13] == 1 || SRpnt->sense[13] == 8)) {
						/* in the process of becoming ready */
						msleep(100);
						continue;
					}
					if (STp->buffer->syscall_result)
						flag = 1;
					break;
				}
#if DEBUG
				debugging = dbg;
				printk(OSST_DEB_MSG "%s:D: Wait re-write finished\n", name);
#endif
			}
		}
		*aSRpnt = SRpnt;
		if (flag) {
			if ((SRpnt->sense[ 2] & 0x0f) == 13 &&
			    SRpnt->sense[12] == 0 &&
			    SRpnt->sense[13] == 2) {
				printk(KERN_ERR "%s:E: Volume overflow in write error recovery\n", name);
				vfree(buffer);
				return (-EIO);			/* hit end of tape = fail */
			}
			i = ((SRpnt->sense[3] << 24) |
			     (SRpnt->sense[4] << 16) |
			     (SRpnt->sense[5] <<  8) |
			      SRpnt->sense[6]        ) - new_frame;
			p = &buffer[i * OS_DATA_SIZE];
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Additional write error at %d\n", name, new_frame+i);
#endif
			osst_get_frame_position(STp, aSRpnt);
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: reported frame positions: host = %d, tape = %d, buffer = %d\n",
				name, STp->first_frame_position, STp->last_frame_position, STp->cur_frames);
#endif
		}
	}
	if (flag) {
		/* error recovery did not successfully complete */
		printk(KERN_ERR "%s:D: Write error recovery failed in %s\n", name,
				STp->write_type == OS_WRITE_HEADER?"header":"body");
	}
	if (!pending)
		osst_copy_to_buffer(STp->buffer, p);	/* so buffer content == at entry in all cases */
	vfree(buffer);
	return 0;
}

static int osst_reposition_and_retry(struct osst_tape * STp, struct osst_request ** aSRpnt,
					unsigned int frame, unsigned int skip, int pending)
{
	unsigned char cmd[MAX_COMMAND_SIZE];
	struct osst_request * SRpnt;
	char * name = tape_name(STp);
	int expected = 0;
	int attempts = 1000 / skip;
	int flag = 1;
	unsigned long startwait = jiffies;
#if DEBUG
	int dbg = debugging;
#endif

	while (attempts && time_before(jiffies, startwait + 60*HZ)) {
		if (flag) {
#if DEBUG
			debugging = dbg;
#endif
			if (frame < 2990 && frame+skip+STp->cur_frames+pending >= 2990)
				frame = 3000-skip;
			expected = frame+skip+STp->cur_frames+pending;
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Position to fppos %d, re-write from fseq %d\n",
				name, frame+skip, STp->frame_seq_number-STp->cur_frames-pending);
#endif
			osst_set_frame_position(STp, aSRpnt, frame + skip, 1);
			flag = 0;
			attempts--;
			schedule_timeout_interruptible(msecs_to_jiffies(100));
		}
		if (osst_get_frame_position(STp, aSRpnt) < 0) {		/* additional write error */
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Addl error, host %d, tape %d, buffer %d\n",
				name, STp->first_frame_position,
				STp->last_frame_position, STp->cur_frames);
#endif
			frame = STp->last_frame_position;
			flag = 1;
			continue;
		}
		if (pending && STp->cur_frames < 50) {

			memset(cmd, 0, MAX_COMMAND_SIZE);
			cmd[0] = WRITE_6;
			cmd[1] = 1;
			cmd[4] = 1;
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: About to write pending fseq %d at fppos %d\n",
				name, STp->frame_seq_number-1, STp->first_frame_position);
#endif
			SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, OS_FRAME_SIZE, DMA_TO_DEVICE,
						STp->timeout, MAX_RETRIES, 1);
			*aSRpnt = SRpnt;

			if (STp->buffer->syscall_result) {		/* additional write error */
				if ((SRpnt->sense[ 2] & 0x0f) == 13 &&
				    SRpnt->sense[12] == 0 &&
				    SRpnt->sense[13] == 2) {
					printk(KERN_ERR
					       "%s:E: Volume overflow in write error recovery\n",
					       name);
					break;				/* hit end of tape = fail */
				}
				flag = 1;
			}
			else
				pending = 0;

			continue;
		}
		if (STp->cur_frames == 0) {
#if DEBUG
			debugging = dbg;
			printk(OSST_DEB_MSG "%s:D: Wait re-write finished\n", name);
#endif
			if (STp->first_frame_position != expected) {
				printk(KERN_ERR "%s:A: Actual position %d - expected %d\n",
					name, STp->first_frame_position, expected);
				return (-EIO);
			}
			return 0;
		}
#if DEBUG
		if (debugging) {
			printk(OSST_DEB_MSG "%s:D: Sleeping in re-write wait ready\n", name);
			printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name);
			debugging = 0;
		}
#endif
		schedule_timeout_interruptible(msecs_to_jiffies(100));
	}
	printk(KERN_ERR "%s:E: Failed to find valid tape media\n", name);
#if DEBUG
	debugging = dbg;
#endif
	return (-EIO);
}

/*
 * Error recovery algorithm for the OnStream tape.
 */

static int osst_write_error_recovery(struct osst_tape * STp, struct osst_request ** aSRpnt, int pending)
{
	struct osst_request * SRpnt = * aSRpnt;
	struct st_partstat * STps = & STp->ps[STp->partition];
	char * name = tape_name(STp);
	int retval = 0;
	int rw_state;
	unsigned int frame, skip;

	rw_state = STps->rw;

	if ((SRpnt->sense[ 2] & 0x0f) != 3
	 || SRpnt->sense[12] != 12
	 || SRpnt->sense[13] != 0) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Write error recovery cannot handle %02x:%02x:%02x\n", name,
			SRpnt->sense[2], SRpnt->sense[12], SRpnt->sense[13]);
#endif
		return (-EIO);
	}
	frame = (SRpnt->sense[3] << 24) |
		(SRpnt->sense[4] << 16) |
		(SRpnt->sense[5] <<  8) |
		 SRpnt->sense[6];
	skip  = SRpnt->sense[9];

#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Detected physical bad frame at %u, advised to skip %d\n", name, frame, skip);
#endif
	osst_get_frame_position(STp, aSRpnt);
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: reported frame positions: host = %d, tape = %d\n",
			name, STp->first_frame_position, STp->last_frame_position);
#endif
	switch (STp->write_type) {
	case OS_WRITE_DATA:
	case OS_WRITE_EOD:
	case OS_WRITE_NEW_MARK:
		printk(KERN_WARNING
			"%s:I: Relocating %d buffered logical frames from position %u to %u\n",
			name, STp->cur_frames, frame, (frame + skip > 3000 && frame < 3000)?3000:frame + skip);
		if (STp->os_fw_rev >= 10600)
			retval = osst_reposition_and_retry(STp, aSRpnt, frame, skip, pending);
		else
			retval = osst_read_back_buffer_and_rewrite(STp, aSRpnt, frame, skip, pending);
		printk(KERN_WARNING "%s:%s: %sWrite error%srecovered\n", name,
			retval?"E"    :"I",
			retval?""     :"Don't worry, ",
			retval?" not ":" ");
		break;
	case OS_WRITE_LAST_MARK:
		printk(KERN_ERR "%s:E: Bad frame in update last marker, fatal\n", name);
		osst_set_frame_position(STp, aSRpnt, frame + STp->cur_frames + pending, 0);
		retval = -EIO;
		break;
	case OS_WRITE_HEADER:
		printk(KERN_WARNING "%s:I: Bad frame in header partition, skipped\n", name);
		retval = osst_read_back_buffer_and_rewrite(STp, aSRpnt, frame, 1, pending);
		break;
	default:
		printk(KERN_INFO "%s:I: Bad frame in filler, ignored\n", name);
		osst_set_frame_position(STp, aSRpnt, frame + STp->cur_frames + pending, 0);
	}
	osst_get_frame_position(STp, aSRpnt);
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Positioning complete, cur_frames %d, pos %d, tape pos %d\n",
			name, STp->cur_frames, STp->first_frame_position, STp->last_frame_position);
	printk(OSST_DEB_MSG "%s:D: next logical frame to write: %d\n", name, STp->logical_blk_num);
#endif
	if (retval == 0) {
		STp->recover_count++;
		STp->recover_erreg++;
	} else
		STp->abort_count++;

	STps->rw = rw_state;
	return retval;
}

static int osst_space_over_filemarks_backward(struct osst_tape * STp, struct osst_request ** aSRpnt,
						int mt_op, int mt_count)
{
	char * name = tape_name(STp);
	int cnt;
	int last_mark_ppos = -1;

#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Reached space_over_filemarks_backwards %d %d\n", name, mt_op, mt_count);
#endif
	if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks_bwd\n", name);
#endif
		return -EIO;
	}
	if (STp->linux_media_version >= 4) {
		/*
		 * direct lookup in header filemark list
		 */
		cnt = ntohl(STp->buffer->aux->filemark_cnt);
		if (STp->header_ok &&
		    STp->header_cache != NULL &&
		    (cnt - mt_count) >= 0 &&
		    (cnt - mt_count) < OS_FM_TAB_MAX &&
		    (cnt - mt_count) < STp->filemark_cnt &&
		    STp->header_cache->dat_fm_tab.fm_tab_ent[cnt-1] == STp->buffer->aux->last_mark_ppos)

			last_mark_ppos = ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[cnt - mt_count]);
#if DEBUG
		if (STp->header_cache == NULL || (cnt - mt_count) < 0 || (cnt - mt_count) >= OS_FM_TAB_MAX)
			printk(OSST_DEB_MSG "%s:D: Filemark lookup fail due to %s\n", name,
			       STp->header_cache == NULL?"lack of header cache":"count out of range");
		else
			printk(OSST_DEB_MSG "%s:D: Filemark lookup: prev mark %d (%s), skip %d to %d\n",
			       name, cnt,
			       ((cnt == -1 && ntohl(STp->buffer->aux->last_mark_ppos) == -1) ||
				(STp->header_cache->dat_fm_tab.fm_tab_ent[cnt-1] ==
					STp->buffer->aux->last_mark_ppos))?"match":"error",
			       mt_count, last_mark_ppos);
#endif
		if (last_mark_ppos > 10 && last_mark_ppos < STp->eod_frame_ppos) {
			osst_position_tape_and_confirm(STp, aSRpnt, last_mark_ppos);
			if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
				printk(OSST_DEB_MSG
					"%s:D: Couldn't get logical blk num in space_filemarks\n", name);
#endif
				return (-EIO);
			}
			if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) {
				printk(KERN_WARNING "%s:W: Expected to find marker at ppos %d, not found\n",
						name, last_mark_ppos);
				return (-EIO);
			}
			goto found;
		}
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Reverting to scan filemark backwards\n", name);
#endif
	}
	cnt = 0;
	while (cnt != mt_count) {
		last_mark_ppos = ntohl(STp->buffer->aux->last_mark_ppos);
		if (last_mark_ppos == -1)
			return (-EIO);
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Positioning to last mark at %d\n", name, last_mark_ppos);
#endif
		osst_position_tape_and_confirm(STp, aSRpnt, last_mark_ppos);
		cnt++;
		if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n", name);
#endif
			return (-EIO);
		}
		if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) {
			printk(KERN_WARNING "%s:W: Expected to find marker at ppos %d, not found\n",
					name, last_mark_ppos);
			return (-EIO);
		}
	}
found:
	if (mt_op == MTBSFM) {
		STp->frame_seq_number++;
		STp->frame_in_buffer = 0;
		STp->buffer->buffer_bytes = 0;
		STp->buffer->read_pointer = 0;
		STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt);
	}
	return 0;
}

/*
 * ADRL 1.1 compatible "slow" space filemarks fwd version
 *
 * Just scans for the filemark sequentially.
 */
static int osst_space_over_filemarks_forward_slow(struct osst_tape * STp, struct osst_request ** aSRpnt,
							int mt_op, int mt_count)
{
	int cnt = 0;
#if DEBUG
	char * name = tape_name(STp);

	printk(OSST_DEB_MSG "%s:D: Reached space_over_filemarks_forward_slow %d %d\n", name, mt_op, mt_count);
#endif
	if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks_fwd\n", name);
#endif
		return (-EIO);
	}
	while (1) {
		if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n", name);
#endif
			return (-EIO);
		}
		if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_MARKER)
			cnt++;
		if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_EOD) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: space_fwd: EOD reached\n", name);
#endif
			if (STp->first_frame_position > STp->eod_frame_ppos+1) {
#if DEBUG
				printk(OSST_DEB_MSG "%s:D: EOD position corrected (%d=>%d)\n",
					name, STp->eod_frame_ppos, STp->first_frame_position-1);
#endif
				STp->eod_frame_ppos = STp->first_frame_position-1;
			}
			return (-EIO);
		}
		if (cnt == mt_count)
			break;
		STp->frame_in_buffer = 0;
	}
	if (mt_op == MTFSF) {
		STp->frame_seq_number++;
		STp->frame_in_buffer = 0;
		STp->buffer->buffer_bytes = 0;
		STp->buffer->read_pointer = 0;
		STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt);
	}
	return 0;
}

/*
 * Fast linux specific version of OnStream FSF
 */
static int osst_space_over_filemarks_forward_fast(struct osst_tape * STp, struct osst_request ** aSRpnt,
							int mt_op, int mt_count)
{
	char * name = tape_name(STp);
	int cnt = 0,
	    next_mark_ppos = -1;

#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Reached space_over_filemarks_forward_fast %d %d\n", name, mt_op, mt_count);
#endif
	if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks_fwd\n", name);
#endif
		return (-EIO);
	}

	if (STp->linux_media_version >= 4) {
		/*
		 * direct lookup in header filemark list
		 */
		cnt = ntohl(STp->buffer->aux->filemark_cnt) - 1;
		if (STp->header_ok &&
		    STp->header_cache != NULL &&
		    (cnt + mt_count) < OS_FM_TAB_MAX &&
		    (cnt + mt_count) < STp->filemark_cnt &&
		    ((cnt == -1 && ntohl(STp->buffer->aux->last_mark_ppos) == -1) ||
		     (STp->header_cache->dat_fm_tab.fm_tab_ent[cnt] == STp->buffer->aux->last_mark_ppos)))

			next_mark_ppos = ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[cnt + mt_count]);
#if DEBUG
		if (STp->header_cache == NULL || (cnt + mt_count) >= OS_FM_TAB_MAX)
			printk(OSST_DEB_MSG "%s:D: Filemark lookup fail due to %s\n", name,
			       STp->header_cache == NULL?"lack of header cache":"count out of range");
		else
			printk(OSST_DEB_MSG "%s:D: Filemark lookup: prev mark %d (%s), skip %d to %d\n",
			       name, cnt,
			       ((cnt == -1 && ntohl(STp->buffer->aux->last_mark_ppos) == -1) ||
				(STp->header_cache->dat_fm_tab.fm_tab_ent[cnt] ==
					STp->buffer->aux->last_mark_ppos))?"match":"error",
			       mt_count, next_mark_ppos);
#endif
		if (next_mark_ppos <= 10 || next_mark_ppos > STp->eod_frame_ppos) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Reverting to slow filemark space\n", name);
#endif
			return osst_space_over_filemarks_forward_slow(STp, aSRpnt, mt_op, mt_count);
		} else {
			osst_position_tape_and_confirm(STp, aSRpnt, next_mark_ppos);
			if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
				printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n",
					name);
#endif
				return (-EIO);
			}
			if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) {
				printk(KERN_WARNING "%s:W: Expected to find marker at ppos %d, not found\n",
					name, next_mark_ppos);
				return (-EIO);
			}
			if (ntohl(STp->buffer->aux->filemark_cnt) != cnt + mt_count) {
				printk(KERN_WARNING "%s:W: Expected to find marker %d at ppos %d, not %d\n",
					name, cnt+mt_count, next_mark_ppos,
					ntohl(STp->buffer->aux->filemark_cnt));
				return (-EIO);
			}
		}
	} else {
		/*
		 * Find nearest (usually previous) marker, then jump from marker to marker
		 */
		while (1) {
			if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_MARKER)
				break;
			if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_EOD) {
#if DEBUG
				printk(OSST_DEB_MSG "%s:D: space_fwd: EOD reached\n", name);
#endif
				return (-EIO);
			}
			if (ntohl(STp->buffer->aux->filemark_cnt) == 0) {
				if (STp->first_mark_ppos == -1) {
#if DEBUG
					printk(OSST_DEB_MSG "%s:D: Reverting to slow filemark space\n", name);
#endif
					return osst_space_over_filemarks_forward_slow(STp, aSRpnt, mt_op, mt_count);
				}
				osst_position_tape_and_confirm(STp, aSRpnt, STp->first_mark_ppos);
				if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
					printk(OSST_DEB_MSG
						"%s:D: Couldn't get logical blk num in space_filemarks_fwd_fast\n",
						name);
#endif
					return (-EIO);
				}
				if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) {
					printk(KERN_WARNING "%s:W: Expected to find filemark at %d\n",
						name, STp->first_mark_ppos);
					return (-EIO);
				}
			} else {
				if (osst_space_over_filemarks_backward(STp, aSRpnt, MTBSF, 1) < 0)
					return (-EIO);
				mt_count++;
			}
		}
		cnt++;
		while (cnt != mt_count) {
			next_mark_ppos = ntohl(STp->buffer->aux->next_mark_ppos);
			if (!next_mark_ppos || next_mark_ppos > STp->eod_frame_ppos) {
#if DEBUG
				printk(OSST_DEB_MSG "%s:D: Reverting to slow filemark space\n", name);
#endif
				return osst_space_over_filemarks_forward_slow(STp, aSRpnt, mt_op, mt_count - cnt);
			}
#if DEBUG
			else printk(OSST_DEB_MSG "%s:D: Positioning to next mark at %d\n", name, next_mark_ppos);
#endif
			osst_position_tape_and_confirm(STp, aSRpnt, next_mark_ppos);
			cnt++;
			if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
				printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n",
					name);
#endif
				return (-EIO);
			}
			if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) {
				printk(KERN_WARNING "%s:W: Expected to find marker at ppos %d, not found\n",
					name, next_mark_ppos);
				return (-EIO);
			}
		}
	}
	if (mt_op == MTFSF) {
		STp->frame_seq_number++;
		STp->frame_in_buffer = 0;
		STp->buffer->buffer_bytes = 0;
		STp->buffer->read_pointer = 0;
		STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt);
	}
	return 0;
}

/*
 * In debug mode, we want to see as many errors as possible
 * to test the error recovery mechanism.
 */
#if DEBUG
static void osst_set_retries(struct osst_tape * STp, struct osst_request ** aSRpnt, int retries)
{
	unsigned char cmd[MAX_COMMAND_SIZE];
	struct osst_request * SRpnt = * aSRpnt;
	char * name = tape_name(STp);

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = MODE_SELECT;
	cmd[1] = 0x10;
	cmd[4] = NUMBER_RETRIES_PAGE_LENGTH + MODE_HEADER_LENGTH;

	(STp->buffer)->b_data[0] = cmd[4] - 1;
	(STp->buffer)->b_data[1] = 0;			/* Medium Type - ignoring */
	(STp->buffer)->b_data[2] = 0;			/* Reserved */
	(STp->buffer)->b_data[3] = 0;			/* Block Descriptor Length */
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 0] = NUMBER_RETRIES_PAGE | (1 << 7);
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 1] = 2;
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 2] = 4;
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 3] = retries;

	if (debugging)
		printk(OSST_DEB_MSG "%s:D: Setting number of retries on OnStream tape to %d\n", name, retries);

	SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1);
	*aSRpnt = SRpnt;

	if ((STp->buffer)->syscall_result)
		printk (KERN_ERR "%s:D: Couldn't set retries to %d\n", name, retries);
}
#endif


static int osst_write_filemark(struct osst_tape * STp, struct osst_request ** aSRpnt)
{
	int result;
	int this_mark_ppos = STp->first_frame_position;
	int this_mark_lbn = STp->logical_blk_num;
#if DEBUG
	char * name = tape_name(STp);
#endif

	if (STp->raw) return 0;

	STp->write_type = OS_WRITE_NEW_MARK;
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Writing Filemark %i at fppos %d (fseq %d, lblk %d)\n",
	       name, STp->filemark_cnt, this_mark_ppos, STp->frame_seq_number, this_mark_lbn);
#endif
	STp->dirty = 1;
	result = osst_flush_write_buffer(STp, aSRpnt);
	result |= osst_flush_drive_buffer(STp, aSRpnt);
	STp->last_mark_ppos = this_mark_ppos;
	STp->last_mark_lbn = this_mark_lbn;
	if (STp->header_cache != NULL && STp->filemark_cnt < OS_FM_TAB_MAX)
		STp->header_cache->dat_fm_tab.fm_tab_ent[STp->filemark_cnt] = htonl(this_mark_ppos);
	if (STp->filemark_cnt++ == 0)
		STp->first_mark_ppos = this_mark_ppos;
	return result;
}

static int osst_write_eod(struct osst_tape * STp, struct osst_request ** aSRpnt)
{
	int result;
#if DEBUG
	char * name = tape_name(STp);
#endif

	if (STp->raw) return 0;

	STp->write_type = OS_WRITE_EOD;
	STp->eod_frame_ppos = STp->first_frame_position;
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Writing EOD at fppos %d (fseq %d, lblk %d)\n", name,
			STp->eod_frame_ppos, STp->frame_seq_number, STp->logical_blk_num);
#endif
	STp->dirty = 1;

	result = osst_flush_write_buffer(STp, aSRpnt);
	result |= osst_flush_drive_buffer(STp, aSRpnt);
	STp->eod_frame_lfa = --(STp->frame_seq_number);
	return result;
}

static int osst_write_filler(struct osst_tape * STp, struct osst_request ** aSRpnt, int where, int count)
{
	char * name = tape_name(STp);

#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Reached onstream write filler group %d\n", name, where);
#endif
	osst_wait_ready(STp, aSRpnt, 60 * 5, 0);
	osst_set_frame_position(STp, aSRpnt, where, 0);
	STp->write_type = OS_WRITE_FILLER;
	while (count--) {
		memcpy(STp->buffer->b_data, "Filler", 6);
		STp->buffer->buffer_bytes = 6;
		STp->dirty = 1;
		if (osst_flush_write_buffer(STp, aSRpnt)) {
			printk(KERN_INFO "%s:I: Couldn't write filler frame\n", name);
			return (-EIO);
		}
	}
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Exiting onstream write filler group\n", name);
#endif
	return osst_flush_drive_buffer(STp, aSRpnt);
}

2268static int __osst_write_header(struct osst_tape * STp, struct osst_request ** aSRpnt, int where, int count)
2269{
2270 char * name = tape_name(STp);
2271 int result;
2272
2273#if DEBUG
2274 printk(OSST_DEB_MSG "%s:D: Reached onstream write header group %d\n", name, where);
2275#endif
2276 osst_wait_ready(STp, aSRpnt, 60 * 5, 0);
2277 osst_set_frame_position(STp, aSRpnt, where, 0);
2278 STp->write_type = OS_WRITE_HEADER;
2279 while (count--) {
2280 osst_copy_to_buffer(STp->buffer, (unsigned char *)STp->header_cache);
2281 STp->buffer->buffer_bytes = sizeof(os_header_t);
2282 STp->dirty = 1;
2283 if (osst_flush_write_buffer(STp, aSRpnt)) {
2284 printk(KERN_INFO "%s:I: Couldn't write header frame\n", name);
2285 return (-EIO);
2286 }
2287 }
2288 result = osst_flush_drive_buffer(STp, aSRpnt);
2289#if DEBUG
2290 printk(OSST_DEB_MSG "%s:D: Write onstream header group %s\n", name, result?"failed":"done");
2291#endif
2292 return result;
2293}
2294
2295static int osst_write_header(struct osst_tape * STp, struct osst_request ** aSRpnt, int locate_eod)
2296{
2297 os_header_t * header;
2298 int result;
2299 char * name = tape_name(STp);
2300
2301#if DEBUG
2302 printk(OSST_DEB_MSG "%s:D: Writing tape header\n", name);
2303#endif
2304 if (STp->raw) return 0;
2305
2306 if (STp->header_cache == NULL) {
2307 if ((STp->header_cache = vmalloc(sizeof(os_header_t))) == NULL) {
2308 printk(KERN_ERR "%s:E: Failed to allocate header cache\n", name);
2309 return (-ENOMEM);
2310 }
2311 memset(STp->header_cache, 0, sizeof(os_header_t));
2312#if DEBUG
2313 printk(OSST_DEB_MSG "%s:D: Allocated and cleared memory for header cache\n", name);
2314#endif
2315 }
2316 if (STp->header_ok) STp->update_frame_cntr++;
2317 else STp->update_frame_cntr = 0;
2318
2319 header = STp->header_cache;
2320 strcpy(header->ident_str, "ADR_SEQ");
2321 header->major_rev = 1;
2322 header->minor_rev = 4;
2323 header->ext_trk_tb_off = htons(17192);
2324 header->pt_par_num = 1;
2325 header->partition[0].partition_num = OS_DATA_PARTITION;
2326 header->partition[0].par_desc_ver = OS_PARTITION_VERSION;
2327 header->partition[0].wrt_pass_cntr = htons(STp->wrt_pass_cntr);
2328 header->partition[0].first_frame_ppos = htonl(STp->first_data_ppos);
2329 header->partition[0].last_frame_ppos = htonl(STp->capacity);
2330 header->partition[0].eod_frame_ppos = htonl(STp->eod_frame_ppos);
2331 header->cfg_col_width = htonl(20);
2332 header->dat_col_width = htonl(1500);
2333 header->qfa_col_width = htonl(0);
2334 header->ext_track_tb.nr_stream_part = 1;
2335 header->ext_track_tb.et_ent_sz = 32;
2336 header->ext_track_tb.dat_ext_trk_ey.et_part_num = 0;
2337 header->ext_track_tb.dat_ext_trk_ey.fmt = 1;
2338 header->ext_track_tb.dat_ext_trk_ey.fm_tab_off = htons(17736);
2339 header->ext_track_tb.dat_ext_trk_ey.last_hlb_hi = 0;
2340 header->ext_track_tb.dat_ext_trk_ey.last_hlb = htonl(STp->eod_frame_lfa);
2341 header->ext_track_tb.dat_ext_trk_ey.last_pp = htonl(STp->eod_frame_ppos);
2342 header->dat_fm_tab.fm_part_num = 0;
2343 header->dat_fm_tab.fm_tab_ent_sz = 4;
2344 header->dat_fm_tab.fm_tab_ent_cnt = htons(STp->filemark_cnt<OS_FM_TAB_MAX?
2345 STp->filemark_cnt:OS_FM_TAB_MAX);
2346
2347 result = __osst_write_header(STp, aSRpnt, 0xbae, 5);
2348 if (STp->update_frame_cntr == 0)
2349 osst_write_filler(STp, aSRpnt, 0xbb3, 5);
2350 result &= __osst_write_header(STp, aSRpnt, 5, 5);
2351
2352 if (locate_eod) {
2353#if DEBUG
2354 printk(OSST_DEB_MSG "%s:D: Locating back to eod frame addr %d\n", name, STp->eod_frame_ppos);
2355#endif
2356 osst_set_frame_position(STp, aSRpnt, STp->eod_frame_ppos, 0);
2357 }
2358 if (result)
2359 printk(KERN_ERR "%s:E: Write header failed\n", name);
2360 else {
2361 memcpy(STp->application_sig, "LIN4", 4);
2362 STp->linux_media = 1;
2363 STp->linux_media_version = 4;
2364 STp->header_ok = 1;
2365 }
2366 return result;
2367}
2368
2369static int osst_reset_header(struct osst_tape * STp, struct osst_request ** aSRpnt)
2370{
2371 if (STp->header_cache != NULL)
2372 memset(STp->header_cache, 0, sizeof(os_header_t));
2373
2374 STp->logical_blk_num = STp->frame_seq_number = 0;
2375 STp->frame_in_buffer = 0;
2376 STp->eod_frame_ppos = STp->first_data_ppos = 0x0000000A;
2377 STp->filemark_cnt = 0;
2378 STp->first_mark_ppos = STp->last_mark_ppos = STp->last_mark_lbn = -1;
2379 return osst_write_header(STp, aSRpnt, 1);
2380}
2381
2382static int __osst_analyze_headers(struct osst_tape * STp, struct osst_request ** aSRpnt, int ppos)
2383{
2384 char * name = tape_name(STp);
2385 os_header_t * header;
2386 os_aux_t * aux;
2387 char id_string[8];
2388 int linux_media_version,
2389 update_frame_cntr;
2390
2391 if (STp->raw)
2392 return 1;
2393
2394 if (ppos == 5 || ppos == 0xbae || STp->buffer->syscall_result) {
2395 if (osst_set_frame_position(STp, aSRpnt, ppos, 0))
2396 printk(KERN_WARNING "%s:W: Couldn't position tape\n", name);
2397 osst_wait_ready(STp, aSRpnt, 60 * 15, 0);
2398 if (osst_initiate_read (STp, aSRpnt)) {
2399 printk(KERN_WARNING "%s:W: Couldn't initiate read\n", name);
2400 return 0;
2401 }
2402 }
2403 if (osst_read_frame(STp, aSRpnt, 180)) {
2404#if DEBUG
2405 printk(OSST_DEB_MSG "%s:D: Couldn't read header frame\n", name);
2406#endif
2407 return 0;
2408 }
2409 header = (os_header_t *) STp->buffer->b_data; /* warning: only first segment addressable */
2410 aux = STp->buffer->aux;
2411 if (aux->frame_type != OS_FRAME_TYPE_HEADER) {
2412#if DEBUG
2413 printk(OSST_DEB_MSG "%s:D: Skipping non-header frame (%d)\n", name, ppos);
2414#endif
2415 return 0;
2416 }
2417 if (ntohl(aux->frame_seq_num) != 0 ||
2418 ntohl(aux->logical_blk_num) != 0 ||
2419 aux->partition.partition_num != OS_CONFIG_PARTITION ||
2420 ntohl(aux->partition.first_frame_ppos) != 0 ||
2421 ntohl(aux->partition.last_frame_ppos) != 0xbb7 ) {
2422#if DEBUG
2423 printk(OSST_DEB_MSG "%s:D: Invalid header frame (%d,%d,%d,%d,%d)\n", name,
2424 ntohl(aux->frame_seq_num), ntohl(aux->logical_blk_num),
2425 aux->partition.partition_num, ntohl(aux->partition.first_frame_ppos),
2426 ntohl(aux->partition.last_frame_ppos));
2427#endif
2428 return 0;
2429 }
2430 if (strncmp(header->ident_str, "ADR_SEQ", 7) != 0 &&
2431 strncmp(header->ident_str, "ADR-SEQ", 7) != 0) {
2432 strlcpy(id_string, header->ident_str, 8);
2433#if DEBUG
2434 printk(OSST_DEB_MSG "%s:D: Invalid header identification string %s\n", name, id_string);
2435#endif
2436 return 0;
2437 }
2438 update_frame_cntr = ntohl(aux->update_frame_cntr);
2439 if (update_frame_cntr < STp->update_frame_cntr) {
2440#if DEBUG
2441 printk(OSST_DEB_MSG "%s:D: Skipping frame %d with update_frame_counter %d<%d\n",
2442 name, ppos, update_frame_cntr, STp->update_frame_cntr);
2443#endif
2444 return 0;
2445 }
2446 if (header->major_rev != 1 || header->minor_rev != 4 ) {
2447#if DEBUG
2448 printk(OSST_DEB_MSG "%s:D: %s revision %d.%d detected (1.4 supported)\n",
2449 name, (header->major_rev != 1 || header->minor_rev < 2 ||
2450 header->minor_rev > 4 )? "Invalid" : "Warning:",
2451 header->major_rev, header->minor_rev);
2452#endif
2453 if (header->major_rev != 1 || header->minor_rev < 2 || header->minor_rev > 4)
2454 return 0;
2455 }
2456#if DEBUG
2457 if (header->pt_par_num != 1)
2458 printk(KERN_INFO "%s:W: %d partitions defined, only one supported\n",
2459 name, header->pt_par_num);
2460#endif
2461 memcpy(id_string, aux->application_sig, 4);
2462 id_string[4] = 0;
2463 if (memcmp(id_string, "LIN", 3) == 0) {
2464 STp->linux_media = 1;
2465 linux_media_version = id_string[3] - '0';
2466 if (linux_media_version != 4)
2467 printk(KERN_INFO "%s:I: Linux media version %d detected (current 4)\n",
2468 name, linux_media_version);
2469 } else {
2470 printk(KERN_WARNING "%s:W: Non Linux media detected (%s)\n", name, id_string);
2471 return 0;
2472 }
2473 if (linux_media_version < STp->linux_media_version) {
2474#if DEBUG
2475 printk(OSST_DEB_MSG "%s:D: Skipping frame %d with linux_media_version %d\n",
2476 name, ppos, linux_media_version);
2477#endif
2478 return 0;
2479 }
2480 if (linux_media_version > STp->linux_media_version) {
2481#if DEBUG
2482 printk(OSST_DEB_MSG "%s:D: Frame %d sets linux_media_version to %d\n",
2483 name, ppos, linux_media_version);
2484#endif
2485 memcpy(STp->application_sig, id_string, 5);
2486 STp->linux_media_version = linux_media_version;
2487 STp->update_frame_cntr = -1;
2488 }
2489 if (update_frame_cntr > STp->update_frame_cntr) {
2490#if DEBUG
2491 printk(OSST_DEB_MSG "%s:D: Frame %d sets update_frame_counter to %d\n",
2492 name, ppos, update_frame_cntr);
2493#endif
2494 if (STp->header_cache == NULL) {
2495 if ((STp->header_cache = vmalloc(sizeof(os_header_t))) == NULL) {
2496 printk(KERN_ERR "%s:E: Failed to allocate header cache\n", name);
2497 return 0;
2498 }
2499#if DEBUG
2500 printk(OSST_DEB_MSG "%s:D: Allocated memory for header cache\n", name);
2501#endif
2502 }
2503 osst_copy_from_buffer(STp->buffer, (unsigned char *)STp->header_cache);
2504 header = STp->header_cache; /* further accesses from cached (full) copy */
2505
2506 STp->wrt_pass_cntr = ntohs(header->partition[0].wrt_pass_cntr);
2507 STp->first_data_ppos = ntohl(header->partition[0].first_frame_ppos);
2508 STp->eod_frame_ppos = ntohl(header->partition[0].eod_frame_ppos);
2509 STp->eod_frame_lfa = ntohl(header->ext_track_tb.dat_ext_trk_ey.last_hlb);
2510 STp->filemark_cnt = ntohl(aux->filemark_cnt);
2511 STp->first_mark_ppos = ntohl(aux->next_mark_ppos);
2512 STp->last_mark_ppos = ntohl(aux->last_mark_ppos);
2513 STp->last_mark_lbn = ntohl(aux->last_mark_lbn);
2514 STp->update_frame_cntr = update_frame_cntr;
2515#if DEBUG
2516 printk(OSST_DEB_MSG "%s:D: Detected write pass %d, update frame counter %d, filemark counter %d\n",
2517 name, STp->wrt_pass_cntr, STp->update_frame_cntr, STp->filemark_cnt);
2518 printk(OSST_DEB_MSG "%s:D: first data frame on tape = %d, last = %d, eod frame = %d\n", name,
2519 STp->first_data_ppos,
2520 ntohl(header->partition[0].last_frame_ppos),
2521 ntohl(header->partition[0].eod_frame_ppos));
2522 printk(OSST_DEB_MSG "%s:D: first mark on tape = %d, last = %d, eod frame = %d\n",
2523 name, STp->first_mark_ppos, STp->last_mark_ppos, STp->eod_frame_ppos);
2524#endif
2525 if (header->minor_rev < 4 && STp->linux_media_version == 4) {
2526#if DEBUG
2527 printk(OSST_DEB_MSG "%s:D: Moving filemark list to ADR 1.4 location\n", name);
2528#endif
2529 memcpy((void *)header->dat_fm_tab.fm_tab_ent,
2530 (void *)header->old_filemark_list, sizeof(header->dat_fm_tab.fm_tab_ent));
2531 memset((void *)header->old_filemark_list, 0, sizeof(header->old_filemark_list));
2532 }
2533 if (header->minor_rev == 4 &&
2534 (header->ext_trk_tb_off != htons(17192) ||
2535 header->partition[0].partition_num != OS_DATA_PARTITION ||
2536 header->partition[0].par_desc_ver != OS_PARTITION_VERSION ||
2537 header->partition[0].last_frame_ppos != htonl(STp->capacity) ||
2538 header->cfg_col_width != htonl(20) ||
2539 header->dat_col_width != htonl(1500) ||
2540 header->qfa_col_width != htonl(0) ||
2541 header->ext_track_tb.nr_stream_part != 1 ||
2542 header->ext_track_tb.et_ent_sz != 32 ||
2543 header->ext_track_tb.dat_ext_trk_ey.et_part_num != OS_DATA_PARTITION ||
2544 header->ext_track_tb.dat_ext_trk_ey.fmt != 1 ||
2545 header->ext_track_tb.dat_ext_trk_ey.fm_tab_off != htons(17736) ||
2546 header->ext_track_tb.dat_ext_trk_ey.last_hlb_hi != 0 ||
2547 header->ext_track_tb.dat_ext_trk_ey.last_pp != htonl(STp->eod_frame_ppos) ||
2548 header->dat_fm_tab.fm_part_num != OS_DATA_PARTITION ||
2549 header->dat_fm_tab.fm_tab_ent_sz != 4 ||
2550 header->dat_fm_tab.fm_tab_ent_cnt !=
2551 htons(STp->filemark_cnt<OS_FM_TAB_MAX?STp->filemark_cnt:OS_FM_TAB_MAX)))
2552 printk(KERN_WARNING "%s:W: Failed consistency check ADR 1.4 format\n", name);
2553
2554 }
2555
2556 return 1;
2557}
2558
2559static int osst_analyze_headers(struct osst_tape * STp, struct osst_request ** aSRpnt)
2560{
2561 int position, ppos;
2562 int first, last;
2563 int valid = 0;
2564 char * name = tape_name(STp);
2565
2566 position = osst_get_frame_position(STp, aSRpnt);
2567
2568 if (STp->raw) {
2569 STp->header_ok = STp->linux_media = 1;
2570 STp->linux_media_version = 0;
2571 return 1;
2572 }
2573 STp->header_ok = STp->linux_media = STp->linux_media_version = 0;
2574 STp->wrt_pass_cntr = STp->update_frame_cntr = -1;
2575 STp->eod_frame_ppos = STp->first_data_ppos = -1;
2576 STp->first_mark_ppos = STp->last_mark_ppos = STp->last_mark_lbn = -1;
2577#if DEBUG
2578 printk(OSST_DEB_MSG "%s:D: Reading header\n", name);
2579#endif
2580
2581 /* optimization for speed - if we are positioned at ppos 10, read second group first */
2582 /* TODO try the ADR 1.1 locations for the second group if we have no valid one yet... */
2583
2584 first = position==10?0xbae: 5;
2585 last = position==10?0xbb3:10;
2586
2587 for (ppos = first; ppos < last; ppos++)
2588 if (__osst_analyze_headers(STp, aSRpnt, ppos))
2589 valid = 1;
2590
2591 first = position==10? 5:0xbae;
2592 last = position==10?10:0xbb3;
2593
2594 for (ppos = first; ppos < last; ppos++)
2595 if (__osst_analyze_headers(STp, aSRpnt, ppos))
2596 valid = 1;
2597
2598 if (!valid) {
2599 printk(KERN_ERR "%s:E: Failed to find valid ADRL header, new media?\n", name);
2600 STp->eod_frame_ppos = STp->first_data_ppos = 0;
2601 osst_set_frame_position(STp, aSRpnt, 10, 0);
2602 return 0;
2603 }
2604 if (position <= STp->first_data_ppos) {
2605 position = STp->first_data_ppos;
2606 STp->ps[0].drv_file = STp->ps[0].drv_block = STp->frame_seq_number = STp->logical_blk_num = 0;
2607 }
2608 osst_set_frame_position(STp, aSRpnt, position, 0);
2609 STp->header_ok = 1;
2610
2611 return 1;
2612}
2613
2614static int osst_verify_position(struct osst_tape * STp, struct osst_request ** aSRpnt)
2615{
2616 int frame_position = STp->first_frame_position;
2617 int frame_seq_numbr = STp->frame_seq_number;
2618 int logical_blk_num = STp->logical_blk_num;
2619 int halfway_frame = STp->frame_in_buffer;
2620 int read_pointer = STp->buffer->read_pointer;
2621 int prev_mark_ppos = -1;
2622 int actual_mark_ppos, i, n;
2623#if DEBUG
2624 char * name = tape_name(STp);
2625
2626 printk(OSST_DEB_MSG "%s:D: Verify that the tape is really the one we think before writing\n", name);
2627#endif
2628 osst_set_frame_position(STp, aSRpnt, frame_position - 1, 0);
2629 if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
2630#if DEBUG
2631 printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in verify_position\n", name);
2632#endif
2633 return (-EIO);
2634 }
2635 if (STp->linux_media_version >= 4) {
2636 for (i=0; i<STp->filemark_cnt; i++)
2637 if ((n=ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[i])) < frame_position)
2638 prev_mark_ppos = n;
2639 } else
2640 prev_mark_ppos = frame_position - 1; /* usually - we don't really know */
2641 actual_mark_ppos = STp->buffer->aux->frame_type == OS_FRAME_TYPE_MARKER ?
2642 frame_position - 1 : ntohl(STp->buffer->aux->last_mark_ppos);
2643 if (frame_position != STp->first_frame_position ||
2644 frame_seq_numbr != STp->frame_seq_number + (halfway_frame?0:1) ||
2645 prev_mark_ppos != actual_mark_ppos ) {
2646#if DEBUG
2647 printk(OSST_DEB_MSG "%s:D: Block mismatch: fppos %d-%d, fseq %d-%d, mark %d-%d\n", name,
2648 STp->first_frame_position, frame_position,
2649 STp->frame_seq_number + (halfway_frame?0:1),
2650 frame_seq_numbr, actual_mark_ppos, prev_mark_ppos);
2651#endif
2652 return (-EIO);
2653 }
2654 if (halfway_frame) {
2655 /* prepare buffer for append and rewrite on top of original */
2656 osst_set_frame_position(STp, aSRpnt, frame_position - 1, 0);
2657 STp->buffer->buffer_bytes = read_pointer;
2658 STp->ps[STp->partition].rw = ST_WRITING;
2659 STp->dirty = 1;
2660 }
2661 STp->frame_in_buffer = halfway_frame;
2662 STp->frame_seq_number = frame_seq_numbr;
2663 STp->logical_blk_num = logical_blk_num;
2664 return 0;
2665}
2666
2667/* According to OnStream, the version numbering is the following:
2668 * X.XX for released versions (X=digit),
2669 * XXXY for unreleased versions (Y=letter)
2670 * Ordering: 1.05 < 106A < 106B < ... < 106a < ... < 1.06
2671 * This function maps that scheme onto monotonically increasing numbers.
2672 */
2673static unsigned int osst_parse_firmware_rev (const char * str)
2674{
2675 if (str[1] == '.') {
2676 return (str[0]-'0')*10000
2677 +(str[2]-'0')*1000
2678 +(str[3]-'0')*100;
2679 } else {
2680 return (str[0]-'0')*10000
2681 +(str[1]-'0')*1000
2682 +(str[2]-'0')*100 - 100
2683 +(str[3]-'@');
2684 }
2685}
2686
2687/*
2688 * Configure the OnStream SCSI tape drive for default operation
2689 */
2690static int osst_configure_onstream(struct osst_tape *STp, struct osst_request ** aSRpnt)
2691{
2692 unsigned char cmd[MAX_COMMAND_SIZE];
2693 char * name = tape_name(STp);
2694 struct osst_request * SRpnt = * aSRpnt;
2695 osst_mode_parameter_header_t * header;
2696 osst_block_size_page_t * bs;
2697 osst_capabilities_page_t * cp;
2698 osst_tape_paramtr_page_t * prm;
2699 int drive_buffer_size;
2700
2701 if (STp->ready != ST_READY) {
2702#if DEBUG
2703 printk(OSST_DEB_MSG "%s:D: Not Ready\n", name);
2704#endif
2705 return (-EIO);
2706 }
2707
2708 if (STp->os_fw_rev < 10600) {
2709 printk(KERN_INFO "%s:I: Old OnStream firmware revision detected (%s),\n", name, STp->device->rev);
2710 printk(KERN_INFO "%s:I: an upgrade to version 1.06 or above is recommended\n", name);
2711 }
2712
2713 /*
2714 * Configure 32.5KB (data+aux) frame size.
2715 * Get the current frame size from the block size mode page
2716 */
2717 memset(cmd, 0, MAX_COMMAND_SIZE);
2718 cmd[0] = MODE_SENSE;
2719 cmd[1] = 8;
2720 cmd[2] = BLOCK_SIZE_PAGE;
2721 cmd[4] = BLOCK_SIZE_PAGE_LENGTH + MODE_HEADER_LENGTH;
2722
2723 SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);
2724 if (SRpnt == NULL) {
2725#if DEBUG
2726 printk(OSST_DEB_MSG "osst :D: Busy\n");
2727#endif
2728 return (-EBUSY);
2729 }
2730 *aSRpnt = SRpnt;
2731 if ((STp->buffer)->syscall_result != 0) {
2732 printk (KERN_ERR "%s:E: Can't get tape block size mode page\n", name);
2733 return (-EIO);
2734 }
2735
2736 header = (osst_mode_parameter_header_t *) (STp->buffer)->b_data;
2737 bs = (osst_block_size_page_t *) ((STp->buffer)->b_data + sizeof(osst_mode_parameter_header_t) + header->bdl);
2738
2739#if DEBUG
2740 printk(OSST_DEB_MSG "%s:D: 32KB play back: %s\n", name, bs->play32 ? "Yes" : "No");
2741 printk(OSST_DEB_MSG "%s:D: 32.5KB play back: %s\n", name, bs->play32_5 ? "Yes" : "No");
2742 printk(OSST_DEB_MSG "%s:D: 32KB record: %s\n", name, bs->record32 ? "Yes" : "No");
2743 printk(OSST_DEB_MSG "%s:D: 32.5KB record: %s\n", name, bs->record32_5 ? "Yes" : "No");
2744#endif
2745
2746 /*
2747 * Configure default auto columns mode, 32.5KB transfer mode
2748 */
2749 bs->one = 1;
2750 bs->play32 = 0;
2751 bs->play32_5 = 1;
2752 bs->record32 = 0;
2753 bs->record32_5 = 1;
2754
2755 memset(cmd, 0, MAX_COMMAND_SIZE);
2756 cmd[0] = MODE_SELECT;
2757 cmd[1] = 0x10;
2758 cmd[4] = BLOCK_SIZE_PAGE_LENGTH + MODE_HEADER_LENGTH;
2759
2760 SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1);
2761 *aSRpnt = SRpnt;
2762 if ((STp->buffer)->syscall_result != 0) {
2763 printk (KERN_ERR "%s:E: Couldn't set tape block size mode page\n", name);
2764 return (-EIO);
2765 }
2766
2767#if DEBUG
2768 printk(KERN_INFO "%s:D: Drive Block Size changed to 32.5K\n", name);
2769 /*
2770 * In debug mode, we want to see as many errors as possible
2771 * to test the error recovery mechanism.
2772 */
2773 osst_set_retries(STp, aSRpnt, 0);
2774 SRpnt = * aSRpnt;
2775#endif
2776
2777 /*
2778 * Set vendor name to 'LIN4' for "Linux support version 4".
2779 */
2780
2781 memset(cmd, 0, MAX_COMMAND_SIZE);
2782 cmd[0] = MODE_SELECT;
2783 cmd[1] = 0x10;
2784 cmd[4] = VENDOR_IDENT_PAGE_LENGTH + MODE_HEADER_LENGTH;
2785
2786 header->mode_data_length = VENDOR_IDENT_PAGE_LENGTH + MODE_HEADER_LENGTH - 1;
2787 header->medium_type = 0; /* Medium Type - ignoring */
2788 header->dsp = 0; /* Reserved */
2789 header->bdl = 0; /* Block Descriptor Length */
2790
2791 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 0] = VENDOR_IDENT_PAGE | (1 << 7);
2792 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 1] = 6;
2793 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 2] = 'L';
2794 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 3] = 'I';
2795 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 4] = 'N';
2796 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 5] = '4';
2797 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 6] = 0;
2798 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 7] = 0;
2799
2800 SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1);
2801 *aSRpnt = SRpnt;
2802
2803 if ((STp->buffer)->syscall_result != 0) {
2804 printk (KERN_ERR "%s:E: Couldn't set vendor name to %s\n", name,
2805 (char *) ((STp->buffer)->b_data + MODE_HEADER_LENGTH + 2));
2806 return (-EIO);
2807 }
2808
2809 memset(cmd, 0, MAX_COMMAND_SIZE);
2810 cmd[0] = MODE_SENSE;
2811 cmd[1] = 8;
2812 cmd[2] = CAPABILITIES_PAGE;
2813 cmd[4] = CAPABILITIES_PAGE_LENGTH + MODE_HEADER_LENGTH;
2814
2815 SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);
2816 *aSRpnt = SRpnt;
2817
2818 if ((STp->buffer)->syscall_result != 0) {
2819 printk (KERN_ERR "%s:E: Can't get capabilities page\n", name);
2820 return (-EIO);
2821 }
2822
2823 header = (osst_mode_parameter_header_t *) (STp->buffer)->b_data;
2824 cp = (osst_capabilities_page_t *) ((STp->buffer)->b_data +
2825 sizeof(osst_mode_parameter_header_t) + header->bdl);
2826
2827 drive_buffer_size = ntohs(cp->buffer_size) / 2;
2828
2829 memset(cmd, 0, MAX_COMMAND_SIZE);
2830 cmd[0] = MODE_SENSE;
2831 cmd[1] = 8;
2832 cmd[2] = TAPE_PARAMTR_PAGE;
2833 cmd[4] = TAPE_PARAMTR_PAGE_LENGTH + MODE_HEADER_LENGTH;
2834
2835 SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);
2836 *aSRpnt = SRpnt;
2837
2838 if ((STp->buffer)->syscall_result != 0) {
2839 printk (KERN_ERR "%s:E: Can't get tape parameter page\n", name);
2840 return (-EIO);
2841 }
2842
2843 header = (osst_mode_parameter_header_t *) (STp->buffer)->b_data;
2844 prm = (osst_tape_paramtr_page_t *) ((STp->buffer)->b_data +
2845 sizeof(osst_mode_parameter_header_t) + header->bdl);
2846
2847 STp->density = prm->density;
2848 STp->capacity = ntohs(prm->segtrk) * ntohs(prm->trks);
2849#if DEBUG
2850 printk(OSST_DEB_MSG "%s:D: Density %d, tape length: %dMB, drive buffer size: %dKB\n",
2851 name, STp->density, STp->capacity / 32, drive_buffer_size);
2852#endif
2853
2854 return 0;
2855
2856}
2857
2858
2859/* Step over EOF if it has been inadvertently crossed (ioctl not used because
2860 it messes up the block number). */
2861static int cross_eof(struct osst_tape *STp, struct osst_request ** aSRpnt, int forward)
2862{
2863 int result;
2864 char * name = tape_name(STp);
2865
2866#if DEBUG
2867 if (debugging)
2868 printk(OSST_DEB_MSG "%s:D: Stepping over filemark %s.\n",
2869 name, forward ? "forward" : "backward");
2870#endif
2871
2872 if (forward) {
2873 /* assumes that the filemark is already read by the drive, so this is low cost */
2874 result = osst_space_over_filemarks_forward_slow(STp, aSRpnt, MTFSF, 1);
2875 }
2876 else
2877 /* assumes this is only called if we just read the filemark! */
2878 result = osst_seek_logical_blk(STp, aSRpnt, STp->logical_blk_num - 1);
2879
2880 if (result < 0)
2881 printk(KERN_WARNING "%s:W: Stepping over filemark %s failed.\n",
2882 name, forward ? "forward" : "backward");
2883
2884 return result;
2885}
2886
2887
2888/* Get the tape position. */
2889
2890static int osst_get_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt)
2891{
2892 unsigned char scmd[MAX_COMMAND_SIZE];
2893 struct osst_request * SRpnt;
2894 int result = 0;
2895 char * name = tape_name(STp);
2896
2897 /* KG: We want to be able to use it for checking Write Buffer availability
2898	 * and thus don't want to risk overwriting anything. Exchange buffers ... */
2899 char mybuf[24];
2900 char * olddata = STp->buffer->b_data;
2901 int oldsize = STp->buffer->buffer_size;
2902
2903 if (STp->ready != ST_READY) return (-EIO);
2904
2905 memset (scmd, 0, MAX_COMMAND_SIZE);
2906 scmd[0] = READ_POSITION;
2907
2908 STp->buffer->b_data = mybuf; STp->buffer->buffer_size = 24;
2909 SRpnt = osst_do_scsi(*aSRpnt, STp, scmd, 20, DMA_FROM_DEVICE,
2910 STp->timeout, MAX_RETRIES, 1);
2911 if (!SRpnt) {
2912 STp->buffer->b_data = olddata; STp->buffer->buffer_size = oldsize;
2913 return (-EBUSY);
2914 }
2915 *aSRpnt = SRpnt;
2916
2917 if (STp->buffer->syscall_result)
2918 result = ((SRpnt->sense[2] & 0x0f) == 3) ? -EIO : -EINVAL; /* 3: Write Error */
2919
2920 if (result == -EINVAL)
2921 printk(KERN_ERR "%s:E: Can't read tape position.\n", name);
2922 else {
2923 if (result == -EIO) { /* re-read position - this needs to preserve media errors */
2924 unsigned char mysense[16];
2925 memcpy (mysense, SRpnt->sense, 16);
2926 memset (scmd, 0, MAX_COMMAND_SIZE);
2927 scmd[0] = READ_POSITION;
2928 STp->buffer->b_data = mybuf; STp->buffer->buffer_size = 24;
2929 SRpnt = osst_do_scsi(SRpnt, STp, scmd, 20, DMA_FROM_DEVICE,
2930 STp->timeout, MAX_RETRIES, 1);
2931#if DEBUG
2932 printk(OSST_DEB_MSG "%s:D: Reread position, reason=[%02x:%02x:%02x], result=[%s%02x:%02x:%02x]\n",
2933 name, mysense[2], mysense[12], mysense[13], STp->buffer->syscall_result?"":"ok:",
2934 SRpnt->sense[2],SRpnt->sense[12],SRpnt->sense[13]);
2935#endif
2936 if (!STp->buffer->syscall_result)
2937 memcpy (SRpnt->sense, mysense, 16);
2938 else
2939 printk(KERN_WARNING "%s:W: Double error in get position\n", name);
2940 }
2941 STp->first_frame_position = ((STp->buffer)->b_data[4] << 24)
2942 + ((STp->buffer)->b_data[5] << 16)
2943 + ((STp->buffer)->b_data[6] << 8)
2944 + (STp->buffer)->b_data[7];
2945 STp->last_frame_position = ((STp->buffer)->b_data[ 8] << 24)
2946 + ((STp->buffer)->b_data[ 9] << 16)
2947 + ((STp->buffer)->b_data[10] << 8)
2948 + (STp->buffer)->b_data[11];
2949 STp->cur_frames = (STp->buffer)->b_data[15];
2950#if DEBUG
2951 if (debugging) {
2952 printk(OSST_DEB_MSG "%s:D: Drive Positions: host %d, tape %d%s, buffer %d\n", name,
2953 STp->first_frame_position, STp->last_frame_position,
2954 ((STp->buffer)->b_data[0]&0x80)?" (BOP)":
2955 ((STp->buffer)->b_data[0]&0x40)?" (EOP)":"",
2956 STp->cur_frames);
2957 }
2958#endif
2959 if (STp->cur_frames == 0 && STp->first_frame_position != STp->last_frame_position) {
2960#if DEBUG
2961 printk(OSST_DEB_MSG "%s:D: Correcting read position %d, %d, %d\n", name,
2962 STp->first_frame_position, STp->last_frame_position, STp->cur_frames);
2963#endif
2964 STp->first_frame_position = STp->last_frame_position;
2965 }
2966 }
2967 STp->buffer->b_data = olddata; STp->buffer->buffer_size = oldsize;
2968
2969 return (result == 0 ? STp->first_frame_position : result);
2970}
2971
2972
2973/* Set the tape block */
2974static int osst_set_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt, int ppos, int skip)
2975{
2976 unsigned char scmd[MAX_COMMAND_SIZE];
2977 struct osst_request * SRpnt;
2978 struct st_partstat * STps;
2979 int result = 0;
2980 int pp = (ppos == 3000 && !skip)? 0 : ppos;
2981 char * name = tape_name(STp);
2982
2983 if (STp->ready != ST_READY) return (-EIO);
2984
2985 STps = &(STp->ps[STp->partition]);
2986
2987 if (ppos < 0 || ppos > STp->capacity) {
2988 printk(KERN_WARNING "%s:W: Reposition request %d out of range\n", name, ppos);
2989 pp = ppos = ppos < 0 ? 0 : (STp->capacity - 1);
2990 result = (-EINVAL);
2991 }
2992
2993 do {
2994#if DEBUG
2995 if (debugging)
2996 printk(OSST_DEB_MSG "%s:D: Setting ppos to %d.\n", name, pp);
2997#endif
2998 memset (scmd, 0, MAX_COMMAND_SIZE);
2999 scmd[0] = SEEK_10;
3000 scmd[1] = 1;
3001 scmd[3] = (pp >> 24);
3002 scmd[4] = (pp >> 16);
3003 scmd[5] = (pp >> 8);
3004 scmd[6] = pp;
3005 if (skip)
3006 scmd[9] = 0x80;
3007
3008 SRpnt = osst_do_scsi(*aSRpnt, STp, scmd, 0, DMA_NONE, STp->long_timeout,
3009 MAX_RETRIES, 1);
3010 if (!SRpnt)
3011 return (-EBUSY);
3012 *aSRpnt = SRpnt;
3013
3014 if ((STp->buffer)->syscall_result != 0) {
3015#if DEBUG
3016 printk(OSST_DEB_MSG "%s:D: SEEK command from %d to %d failed.\n",
3017 name, STp->first_frame_position, pp);
3018#endif
3019 result = (-EIO);
3020 }
3021 if (pp != ppos)
3022 osst_wait_ready(STp, aSRpnt, 5 * 60, OSST_WAIT_POSITION_COMPLETE);
3023 } while ((pp != ppos) && (pp = ppos));
3024 STp->first_frame_position = STp->last_frame_position = ppos;
3025 STps->eof = ST_NOEOF;
3026 STps->at_sm = 0;
3027 STps->rw = ST_IDLE;
3028 STp->frame_in_buffer = 0;
3029 return result;
3030}
3031
3032static int osst_write_trailer(struct osst_tape *STp, struct osst_request ** aSRpnt, int leave_at_EOT)
3033{
3034 struct st_partstat * STps = &(STp->ps[STp->partition]);
3035 int result = 0;
3036
3037 if (STp->write_type != OS_WRITE_NEW_MARK) {
3038 /* true unless the user wrote the filemark for us */
3039 result = osst_flush_drive_buffer(STp, aSRpnt);
3040 if (result < 0) goto out;
3041 result = osst_write_filemark(STp, aSRpnt);
3042 if (result < 0) goto out;
3043
3044 if (STps->drv_file >= 0)
3045 STps->drv_file++ ;
3046 STps->drv_block = 0;
3047 }
3048 result = osst_write_eod(STp, aSRpnt);
3049 osst_write_header(STp, aSRpnt, leave_at_EOT);
3050
3051 STps->eof = ST_FM;
3052out:
3053 return result;
3054}
3055
3056/* osst versions of st functions - augmented and stripped to suit OnStream only */
3057
3058/* Flush the write buffer (never need to write if variable blocksize). */
3059static int osst_flush_write_buffer(struct osst_tape *STp, struct osst_request ** aSRpnt)
3060{
3061 int offset, transfer, blks = 0;
3062 int result = 0;
3063 unsigned char cmd[MAX_COMMAND_SIZE];
3064 struct osst_request * SRpnt = *aSRpnt;
3065 struct st_partstat * STps;
3066 char * name = tape_name(STp);
3067
3068 if ((STp->buffer)->writing) {
3069 if (SRpnt == (STp->buffer)->last_SRpnt)
3070#if DEBUG
3071 { printk(OSST_DEB_MSG
3072 "%s:D: aSRpnt points to osst_request that write_behind_check will release -- cleared\n", name);
3073#endif
3074 *aSRpnt = SRpnt = NULL;
3075#if DEBUG
3076 } else if (SRpnt)
3077 printk(OSST_DEB_MSG
3078 "%s:D: aSRpnt does not point to osst_request that write_behind_check will release -- strange\n", name);
3079#endif
3080 osst_write_behind_check(STp);
3081 if ((STp->buffer)->syscall_result) {
3082#if DEBUG
3083 if (debugging)
3084 printk(OSST_DEB_MSG "%s:D: Async write error (flush) %x.\n",
3085 name, (STp->buffer)->midlevel_result);
3086#endif
3087 if ((STp->buffer)->midlevel_result == INT_MAX)
3088 return (-ENOSPC);
3089 return (-EIO);
3090 }
3091 }
3092
3093 result = 0;
3094 if (STp->dirty == 1) {
3095
3096 STp->write_count++;
3097 STps = &(STp->ps[STp->partition]);
3098 STps->rw = ST_WRITING;
3099 offset = STp->buffer->buffer_bytes;
3100 blks = (offset + STp->block_size - 1) / STp->block_size;
3101 transfer = OS_FRAME_SIZE;
3102
3103 if (offset < OS_DATA_SIZE)
3104 osst_zero_buffer_tail(STp->buffer);
3105
3106 if (STp->poll)
3107 if (osst_wait_frame (STp, aSRpnt, STp->first_frame_position, -50, 120))
3108 result = osst_recover_wait_frame(STp, aSRpnt, 1);
3109
3110 memset(cmd, 0, MAX_COMMAND_SIZE);
3111 cmd[0] = WRITE_6;
3112 cmd[1] = 1;
3113 cmd[4] = 1;
3114
3115 switch (STp->write_type) {
3116 case OS_WRITE_DATA:
3117#if DEBUG
3118 if (debugging)
3119 printk(OSST_DEB_MSG "%s:D: Writing %d blocks to frame %d, lblks %d-%d\n",
3120 name, blks, STp->frame_seq_number,
3121 STp->logical_blk_num - blks, STp->logical_blk_num - 1);
3122#endif
3123 osst_init_aux(STp, OS_FRAME_TYPE_DATA, STp->frame_seq_number++,
3124 STp->logical_blk_num - blks, STp->block_size, blks);
3125 break;
3126 case OS_WRITE_EOD:
3127 osst_init_aux(STp, OS_FRAME_TYPE_EOD, STp->frame_seq_number++,
3128 STp->logical_blk_num, 0, 0);
3129 break;
3130 case OS_WRITE_NEW_MARK:
3131 osst_init_aux(STp, OS_FRAME_TYPE_MARKER, STp->frame_seq_number++,
3132 STp->logical_blk_num++, 0, blks=1);
3133 break;
3134 case OS_WRITE_HEADER:
3135 osst_init_aux(STp, OS_FRAME_TYPE_HEADER, 0, 0, 0, blks=0);
3136 break;
3137 default: /* probably FILLER */
3138 osst_init_aux(STp, OS_FRAME_TYPE_FILL, 0, 0, 0, 0);
3139 }
3140#if DEBUG
3141 if (debugging)
3142 printk(OSST_DEB_MSG "%s:D: Flushing %d bytes, Transferring %d bytes in %d lblocks.\n",
3143 name, offset, transfer, blks);
3144#endif
3145
3146 SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, transfer, DMA_TO_DEVICE,
3147 STp->timeout, MAX_RETRIES, 1);
3148 *aSRpnt = SRpnt;
3149 if (!SRpnt)
3150 return (-EBUSY);
3151
3152 if ((STp->buffer)->syscall_result != 0) {
3153#if DEBUG
3154 printk(OSST_DEB_MSG
3155 "%s:D: write sense [0]=0x%02x [2]=%02x [12]=%02x [13]=%02x\n",
3156 name, SRpnt->sense[0], SRpnt->sense[2],
3157 SRpnt->sense[12], SRpnt->sense[13]);
3158#endif
3159 if ((SRpnt->sense[0] & 0x70) == 0x70 &&
3160 (SRpnt->sense[2] & 0x40) && /* FIXME - SC-30 drive doesn't assert EOM bit */
3161 (SRpnt->sense[2] & 0x0f) == NO_SENSE) {
3162 STp->dirty = 0;
3163 (STp->buffer)->buffer_bytes = 0;
3164 result = (-ENOSPC);
3165 }
3166 else {
3167 if (osst_write_error_recovery(STp, aSRpnt, 1)) {
3168 printk(KERN_ERR "%s:E: Error on flush write.\n", name);
3169 result = (-EIO);
3170 }
3171 }
3172 STps->drv_block = (-1); /* FIXME - even if write recovery succeeds? */
3173 }
3174 else {
3175 STp->first_frame_position++;
3176 STp->dirty = 0;
3177 (STp->buffer)->buffer_bytes = 0;
3178 }
3179 }
3180#if DEBUG
3181 printk(OSST_DEB_MSG "%s:D: Exit flush write buffer with code %d\n", name, result);
3182#endif
3183 return result;
3184}
3185
3186
3187/* Flush the tape buffer. The tape will be positioned correctly unless
3188 seek_next is true. */
3189static int osst_flush_buffer(struct osst_tape * STp, struct osst_request ** aSRpnt, int seek_next)
3190{
3191 struct st_partstat * STps;
3192 int backspace = 0, result = 0;
3193#if DEBUG
3194 char * name = tape_name(STp);
3195#endif
3196
3197 /*
3198 * If there was a bus reset, block further access
3199 * to this device.
3200 */
3201 if (STp->pos_unknown)
3202 return (-EIO);
3203
3204 if (STp->ready != ST_READY)
3205 return 0;
3206
3207 STps = &(STp->ps[STp->partition]);
3208 if (STps->rw == ST_WRITING || STp->dirty) { /* Writing */
3209 STp->write_type = OS_WRITE_DATA;
3210 return osst_flush_write_buffer(STp, aSRpnt);
3211 }
3212 if (STp->block_size == 0)
3213 return 0;
3214
3215#if DEBUG
3216 printk(OSST_DEB_MSG "%s:D: Reached flush (read) buffer\n", name);
3217#endif
3218
3219 if (!STp->can_bsr) {
3220 backspace = ((STp->buffer)->buffer_bytes + (STp->buffer)->read_pointer) / STp->block_size -
3221 ((STp->buffer)->read_pointer + STp->block_size - 1 ) / STp->block_size ;
3222 (STp->buffer)->buffer_bytes = 0;
3223 (STp->buffer)->read_pointer = 0;
3224 STp->frame_in_buffer = 0; /* FIXME: is this relevant with OSST? */
3225 }
3226
3227 if (!seek_next) {
3228 if (STps->eof == ST_FM_HIT) {
3229 result = cross_eof(STp, aSRpnt, 0); /* Back over the EOF hit */
3230 if (!result)
3231 STps->eof = ST_NOEOF;
3232 else {
3233 if (STps->drv_file >= 0)
3234 STps->drv_file++;
3235 STps->drv_block = 0;
3236 }
3237 }
3238 if (!result && backspace > 0) /* TODO -- design and run a test case for this */
3239 result = osst_seek_logical_blk(STp, aSRpnt, STp->logical_blk_num - backspace);
3240 }
3241 else if (STps->eof == ST_FM_HIT) {
3242 if (STps->drv_file >= 0)
3243 STps->drv_file++;
3244 STps->drv_block = 0;
3245 STps->eof = ST_NOEOF;
3246 }
3247
3248 return result;
3249}
3250
3251static int osst_write_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, int synchronous)
3252{
3253 unsigned char cmd[MAX_COMMAND_SIZE];
3254 struct osst_request * SRpnt;
3255 int blks;
3256#if DEBUG
3257 char * name = tape_name(STp);
3258#endif
3259
3260 if ((!STp->raw) && (STp->first_frame_position == 0xbae)) { /* _must_ preserve buffer! */
3261#if DEBUG
3262 printk(OSST_DEB_MSG "%s:D: Reaching config partition.\n", name);
3263#endif
3264 if (osst_flush_drive_buffer(STp, aSRpnt) < 0) {
3265 return (-EIO);
3266 }
3267 /* error recovery may have bumped us past the header partition */
3268 if (osst_get_frame_position(STp, aSRpnt) < 0xbb8) {
3269#if DEBUG
3270 printk(OSST_DEB_MSG "%s:D: Skipping over config partition.\n", name);
3271#endif
3272 osst_position_tape_and_confirm(STp, aSRpnt, 0xbb8);
3273 }
3274 }
3275
3276 if (STp->poll)
3277 if (osst_wait_frame(STp, aSRpnt, STp->first_frame_position, -48, 120))
3278 if (osst_recover_wait_frame(STp, aSRpnt, 1))
3279 return (-EIO);
3280
3281 /* osst_build_stats(STp, &SRpnt); */
3282
3283 STp->ps[STp->partition].rw = ST_WRITING;
3284 STp->write_type = OS_WRITE_DATA;
3285
3286 memset(cmd, 0, MAX_COMMAND_SIZE);
3287 cmd[0] = WRITE_6;
3288 cmd[1] = 1;
3289 cmd[4] = 1; /* one frame at a time... */
3290 blks = STp->buffer->buffer_bytes / STp->block_size;
3291#if DEBUG
3292 if (debugging)
3293 printk(OSST_DEB_MSG "%s:D: Writing %d blocks to frame %d, lblks %d-%d\n", name, blks,
3294 STp->frame_seq_number, STp->logical_blk_num - blks, STp->logical_blk_num - 1);
3295#endif
3296 osst_init_aux(STp, OS_FRAME_TYPE_DATA, STp->frame_seq_number++,
3297 STp->logical_blk_num - blks, STp->block_size, blks);
3298
3299#if DEBUG
3300 if (!synchronous)
3301 STp->write_pending = 1;
3302#endif
3303 SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, OS_FRAME_SIZE, DMA_TO_DEVICE, STp->timeout,
3304 MAX_RETRIES, synchronous);
3305 if (!SRpnt)
3306 return (-EBUSY);
3307 *aSRpnt = SRpnt;
3308
3309 if (synchronous) {
3310 if (STp->buffer->syscall_result != 0) {
3311#if DEBUG
3312 if (debugging)
3313 printk(OSST_DEB_MSG "%s:D: Error on write:\n", name);
3314#endif
3315 if ((SRpnt->sense[0] & 0x70) == 0x70 &&
3316 (SRpnt->sense[2] & 0x40)) {
3317 if ((SRpnt->sense[2] & 0x0f) == VOLUME_OVERFLOW)
3318 return (-ENOSPC);
3319 }
3320 else {
3321 if (osst_write_error_recovery(STp, aSRpnt, 1))
3322 return (-EIO);
3323 }
3324 }
3325 else
3326 STp->first_frame_position++;
3327 }
3328
3329 STp->write_count++;
3330
3331 return 0;
3332}
3333
3334/* Lock or unlock the drive door. Don't use while a struct osst_request is allocated. */
3335static int do_door_lock(struct osst_tape * STp, int do_lock)
3336{
3337 int retval;
3338
3339#if DEBUG
3340 printk(OSST_DEB_MSG "%s:D: %socking drive door.\n", tape_name(STp), do_lock ? "L" : "Unl");
3341#endif
3342
3343 retval = scsi_set_medium_removal(STp->device,
3344 do_lock ? SCSI_REMOVAL_PREVENT : SCSI_REMOVAL_ALLOW);
3345 if (!retval)
3346 STp->door_locked = do_lock ? ST_LOCKED_EXPLICIT : ST_UNLOCKED;
3347 else
3348 STp->door_locked = ST_LOCK_FAILS;
3349 return retval;
3350}
3351
3352/* Set the internal state after reset */
3353static void reset_state(struct osst_tape *STp)
3354{
3355 int i;
3356 struct st_partstat *STps;
3357
3358 STp->pos_unknown = 0;
3359 for (i = 0; i < ST_NBR_PARTITIONS; i++) {
3360 STps = &(STp->ps[i]);
3361 STps->rw = ST_IDLE;
3362 STps->eof = ST_NOEOF;
3363 STps->at_sm = 0;
3364 STps->last_block_valid = 0;
3365 STps->drv_block = -1;
3366 STps->drv_file = -1;
3367 }
3368}
3369
3370
3371/* Entry points to osst */
3372
3373/* Write command */
3374static ssize_t osst_write(struct file * filp, const char __user * buf, size_t count, loff_t *ppos)
3375{
3376 ssize_t total, retval = 0;
3377 ssize_t i, do_count, blks, transfer;
3378 int write_threshold;
3379 int doing_write = 0;
3380 const char __user * b_point;
3381 struct osst_request * SRpnt = NULL;
3382 struct st_modedef * STm;
3383 struct st_partstat * STps;
3384 struct osst_tape * STp = filp->private_data;
3385 char * name = tape_name(STp);
3386
3387
3388 if (mutex_lock_interruptible(&STp->lock))
3389 return (-ERESTARTSYS);
3390
3391 /*
3392 * If we are in the middle of error recovery, don't let anyone
3393 * else try and use this device. Also, if error recovery fails, it
3394 * may try and take the device offline, in which case all further
3395 * access to the device is prohibited.
3396 */
3397 if (!scsi_block_when_processing_errors(STp->device)) {
3398 retval = (-ENXIO);
3399 goto out;
3400 }
3401
3402 if (STp->ready != ST_READY) {
3403 if (STp->ready == ST_NO_TAPE)
3404 retval = (-ENOMEDIUM);
3405 else
3406 retval = (-EIO);
3407 goto out;
3408 }
3409 STm = &(STp->modes[STp->current_mode]);
3410 if (!STm->defined) {
3411 retval = (-ENXIO);
3412 goto out;
3413 }
3414 if (count == 0)
3415 goto out;
3416
3417 /*
3418 * If there was a bus reset, block further access
3419 * to this device.
3420 */
3421 if (STp->pos_unknown) {
3422 retval = (-EIO);
3423 goto out;
3424 }
3425
3426#if DEBUG
3427 if (!STp->in_use) {
3428 printk(OSST_DEB_MSG "%s:D: Incorrect device.\n", name);
3429 retval = (-EIO);
3430 goto out;
3431 }
3432#endif
3433
3434 if (STp->write_prot) {
3435 retval = (-EACCES);
3436 goto out;
3437 }
3438
3439 /* Write must be integral number of blocks */
3440 if (STp->block_size != 0 && (count % STp->block_size) != 0) {
3441 printk(KERN_ERR "%s:E: Write (%zd bytes) not multiple of tape block size (%d%c).\n",
3442 name, count, STp->block_size<1024?
3443 STp->block_size:STp->block_size/1024, STp->block_size<1024?'b':'k');
3444 retval = (-EINVAL);
3445 goto out;
3446 }
3447
3448 if (STp->first_frame_position >= STp->capacity - OSST_EOM_RESERVE) {
3449 printk(KERN_ERR "%s:E: Write truncated at EOM early warning (frame %d).\n",
3450 name, STp->first_frame_position);
3451 retval = (-ENOSPC);
3452 goto out;
3453 }
3454
3455 if (STp->do_auto_lock && STp->door_locked == ST_UNLOCKED && !do_door_lock(STp, 1))
3456 STp->door_locked = ST_LOCKED_AUTO;
3457
3458 STps = &(STp->ps[STp->partition]);
3459
3460 if (STps->rw == ST_READING) {
3461#if DEBUG
3462 printk(OSST_DEB_MSG "%s:D: Switching from read to write at file %d, block %d\n", name,
3463 STps->drv_file, STps->drv_block);
3464#endif
3465 retval = osst_flush_buffer(STp, &SRpnt, 0);
3466 if (retval)
3467 goto out;
3468 STps->rw = ST_IDLE;
3469 }
3470 if (STps->rw != ST_WRITING) {
3471 /* Are we totally rewriting this tape? */
3472 if (!STp->header_ok ||
3473 (STp->first_frame_position == STp->first_data_ppos && STps->drv_block < 0) ||
3474 (STps->drv_file == 0 && STps->drv_block == 0)) {
3475 STp->wrt_pass_cntr++;
3476#if DEBUG
3477 printk(OSST_DEB_MSG "%s:D: Allocating next write pass counter: %d\n",
3478 name, STp->wrt_pass_cntr);
3479#endif
3480 osst_reset_header(STp, &SRpnt);
3481 STps->drv_file = STps->drv_block = 0;
3482 }
3483 /* Do we know where we'll be writing on the tape? */
3484 else {
3485 if ((STp->fast_open && osst_verify_position(STp, &SRpnt)) ||
3486 STps->drv_file < 0 || STps->drv_block < 0) {
3487 if (STp->first_frame_position == STp->eod_frame_ppos) { /* at EOD */
3488 STps->drv_file = STp->filemark_cnt;
3489 STps->drv_block = 0;
3490 }
3491 else {
3492 /* We have no idea where the tape is positioned - give up */
3493#if DEBUG
3494 printk(OSST_DEB_MSG
3495 "%s:D: Cannot write at indeterminate position.\n", name);
3496#endif
3497 retval = (-EIO);
3498 goto out;
3499 }
3500 }
3501 if ((STps->drv_file + STps->drv_block) > 0 && STps->drv_file < STp->filemark_cnt) {
3502 STp->filemark_cnt = STps->drv_file;
3503 STp->last_mark_ppos =
3504 ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[STp->filemark_cnt-1]);
3505 printk(KERN_WARNING
3506 "%s:W: Overwriting file %d with old write pass counter %d\n",
3507 name, STps->drv_file, STp->wrt_pass_cntr);
3508 printk(KERN_WARNING
3509 "%s:W: may lead to stale data being accepted on reading back!\n",
3510 name);
3511#if DEBUG
3512 printk(OSST_DEB_MSG
3513 "%s:D: resetting filemark count to %d and last mark ppos,lbn to %d,%d\n",
3514 name, STp->filemark_cnt, STp->last_mark_ppos, STp->last_mark_lbn);
3515#endif
3516 }
3517 }
3518 STp->fast_open = 0;
3519 }
3520 if (!STp->header_ok) {
3521#if DEBUG
3522 printk(OSST_DEB_MSG "%s:D: Write cannot proceed without valid headers\n", name);
3523#endif
3524 retval = (-EIO);
3525 goto out;
3526 }
3527
3528 if ((STp->buffer)->writing) {
3529 if (SRpnt) printk(KERN_ERR "%s:A: Not supposed to have SRpnt at line %d\n", name, __LINE__);
3530 osst_write_behind_check(STp);
3531 if ((STp->buffer)->syscall_result) {
3532#if DEBUG
3533 if (debugging)
3534 printk(OSST_DEB_MSG "%s:D: Async write error (write) %x.\n", name,
3535 (STp->buffer)->midlevel_result);
3536#endif
3537 if ((STp->buffer)->midlevel_result == INT_MAX)
3538 STps->eof = ST_EOM_OK;
3539 else
3540 STps->eof = ST_EOM_ERROR;
3541 }
3542 }
3543 if (STps->eof == ST_EOM_OK) {
3544 retval = (-ENOSPC);
3545 goto out;
3546 }
3547 else if (STps->eof == ST_EOM_ERROR) {
3548 retval = (-EIO);
3549 goto out;
3550 }
3551
3552 /* Check that the user buffer is readable now, before any tape movement,
3553 so that copy_from_user cannot first fail after the tape has already moved. */
3554 if ((copy_from_user(&i, buf, 1) != 0 ||
3555 copy_from_user(&i, buf + count - 1, 1) != 0)) {
3556 retval = (-EFAULT);
3557 goto out;
3558 }
3559
3560 if (!STm->do_buffer_writes) {
3561 write_threshold = 1;
3562 }
3563 else
3564 write_threshold = (STp->buffer)->buffer_blocks * STp->block_size;
3565 if (!STm->do_async_writes)
3566 write_threshold--;
3567
3568 total = count;
3569#if DEBUG
3570 if (debugging)
3571 printk(OSST_DEB_MSG "%s:D: Writing %d bytes to file %d block %d lblk %d fseq %d fppos %d\n",
3572 name, (int) count, STps->drv_file, STps->drv_block,
3573 STp->logical_blk_num, STp->frame_seq_number, STp->first_frame_position);
3574#endif
3575 b_point = buf;
3576 while ((STp->buffer)->buffer_bytes + count > write_threshold)
3577 {
3578 doing_write = 1;
3579 do_count = (STp->buffer)->buffer_blocks * STp->block_size -
3580 (STp->buffer)->buffer_bytes;
3581 if (do_count > count)
3582 do_count = count;
3583
3584 i = append_to_buffer(b_point, STp->buffer, do_count);
3585 if (i) {
3586 retval = i;
3587 goto out;
3588 }
3589
3590 blks = do_count / STp->block_size;
3591 STp->logical_blk_num += blks; /* logical_blk_num is incremented as data is moved from user */
3592
3593 i = osst_write_frame(STp, &SRpnt, 1);
3594
3595 if (i == (-ENOSPC)) {
3596 transfer = STp->buffer->writing; /* FIXME -- check this logic */
3597 if (transfer <= do_count) {
3598 *ppos += do_count - transfer;
3599 count -= do_count - transfer;
3600 if (STps->drv_block >= 0) {
3601 STps->drv_block += (do_count - transfer) / STp->block_size;
3602 }
3603 STps->eof = ST_EOM_OK;
3604 retval = (-ENOSPC); /* EOM within current request */
3605#if DEBUG
3606 if (debugging)
3607 printk(OSST_DEB_MSG "%s:D: EOM with %d bytes unwritten.\n",
3608 name, (int) transfer);
3609#endif
3610 }
3611 else {
3612 STps->eof = ST_EOM_ERROR;
3613 STps->drv_block = (-1); /* Too cautious? */
3614 retval = (-EIO); /* EOM for old data */
3615#if DEBUG
3616 if (debugging)
3617 printk(OSST_DEB_MSG "%s:D: EOM with lost data.\n", name);
3618#endif
3619 }
3620 }
3621 else
3622 retval = i;
3623
3624 if (retval < 0) {
3625 if (SRpnt != NULL) {
3626 osst_release_request(SRpnt);
3627 SRpnt = NULL;
3628 }
3629 STp->buffer->buffer_bytes = 0;
3630 STp->dirty = 0;
3631 if (count < total)
3632 retval = total - count;
3633 goto out;
3634 }
3635
3636 *ppos += do_count;
3637 b_point += do_count;
3638 count -= do_count;
3639 if (STps->drv_block >= 0) {
3640 STps->drv_block += blks;
3641 }
3642 STp->buffer->buffer_bytes = 0;
3643 STp->dirty = 0;
3644 } /* end while write threshold exceeded */
3645
3646 if (count != 0) {
3647 STp->dirty = 1;
3648 i = append_to_buffer(b_point, STp->buffer, count);
3649 if (i) {
3650 retval = i;
3651 goto out;
3652 }
3653 blks = count / STp->block_size;
3654 STp->logical_blk_num += blks;
3655 if (STps->drv_block >= 0) {
3656 STps->drv_block += blks;
3657 }
3658 *ppos += count;
3659 count = 0;
3660 }
3661
3662 if (doing_write && (STp->buffer)->syscall_result != 0) {
3663 retval = (STp->buffer)->syscall_result;
3664 goto out;
3665 }
3666
3667 if (STm->do_async_writes && ((STp->buffer)->buffer_bytes >= STp->write_threshold)) {
3668 /* Schedule an asynchronous write */
3669 (STp->buffer)->writing = ((STp->buffer)->buffer_bytes /
3670 STp->block_size) * STp->block_size;
3671 STp->dirty = !((STp->buffer)->writing ==
3672 (STp->buffer)->buffer_bytes);
3673
3674 i = osst_write_frame(STp, &SRpnt, 0);
3675 if (i < 0) {
3676 retval = (-EIO);
3677 goto out;
3678 }
3679 SRpnt = NULL; /* Prevent releasing this request! */
3680 }
3681 STps->at_sm &= (total == 0);
3682 if (total > 0)
3683 STps->eof = ST_NOEOF;
3684
3685 retval = total;
3686
3687out:
3688 if (SRpnt != NULL) osst_release_request(SRpnt);
3689
3690 mutex_unlock(&STp->lock);
3691
3692 return retval;
3693}
3694
3695
3696/* Read command */
3697static ssize_t osst_read(struct file * filp, char __user * buf, size_t count, loff_t *ppos)
3698{
3699 ssize_t total, retval = 0;
3700 ssize_t i, transfer;
3701 int special;
3702 struct st_modedef * STm;
3703 struct st_partstat * STps;
3704 struct osst_request * SRpnt = NULL;
3705 struct osst_tape * STp = filp->private_data;
3706 char * name = tape_name(STp);
3707
3708
3709 if (mutex_lock_interruptible(&STp->lock))
3710 return (-ERESTARTSYS);
3711
3712 /*
3713 * If we are in the middle of error recovery, don't let anyone
3714 * else try and use this device. Also, if error recovery fails, it
3715 * may try and take the device offline, in which case all further
3716 * access to the device is prohibited.
3717 */
3718 if (!scsi_block_when_processing_errors(STp->device)) {
3719 retval = (-ENXIO);
3720 goto out;
3721 }
3722
3723 if (STp->ready != ST_READY) {
3724 if (STp->ready == ST_NO_TAPE)
3725 retval = (-ENOMEDIUM);
3726 else
3727 retval = (-EIO);
3728 goto out;
3729 }
3730 STm = &(STp->modes[STp->current_mode]);
3731 if (!STm->defined) {
3732 retval = (-ENXIO);
3733 goto out;
3734 }
3735#if DEBUG
3736 if (!STp->in_use) {
3737 printk(OSST_DEB_MSG "%s:D: Incorrect device.\n", name);
3738 retval = (-EIO);
3739 goto out;
3740 }
3741#endif
3742 /* Must have initialized medium */
3743 if (!STp->header_ok) {
3744 retval = (-EIO);
3745 goto out;
3746 }
3747
3748 if (STp->do_auto_lock && STp->door_locked == ST_UNLOCKED && !do_door_lock(STp, 1))
3749 STp->door_locked = ST_LOCKED_AUTO;
3750
3751 STps = &(STp->ps[STp->partition]);
3752 if (STps->rw == ST_WRITING) {
3753 retval = osst_flush_buffer(STp, &SRpnt, 0);
3754 if (retval)
3755 goto out;
3756 STps->rw = ST_IDLE;
3757 /* FIXME -- this may leave the tape without EOD and up2date headers */
3758 }
3759
3760 if ((count % STp->block_size) != 0) {
3761 printk(KERN_WARNING
3762 "%s:W: Read (%zd bytes) not multiple of tape block size (%d%c).\n", name, count,
3763 STp->block_size<1024?STp->block_size:STp->block_size/1024, STp->block_size<1024?'b':'k');
3764 }
3765
3766#if DEBUG
3767 if (debugging && STps->eof != ST_NOEOF)
3768 printk(OSST_DEB_MSG "%s:D: EOF/EOM flag up (%d). Bytes %d\n", name,
3769 STps->eof, (STp->buffer)->buffer_bytes);
3770#endif
3771 if ((STp->buffer)->buffer_bytes == 0 &&
3772 STps->eof >= ST_EOD_1) {
3773 if (STps->eof < ST_EOD) {
3774 STps->eof += 1;
3775 retval = 0;
3776 goto out;
3777 }
3778 retval = (-EIO); /* EOM or Blank Check */
3779 goto out;
3780 }
3781
3782 /* Check the buffer writability before any tape movement. Don't alter
3783 buffer data. */
3784 if (copy_from_user(&i, buf, 1) != 0 ||
3785 copy_to_user (buf, &i, 1) != 0 ||
3786 copy_from_user(&i, buf + count - 1, 1) != 0 ||
3787 copy_to_user (buf + count - 1, &i, 1) != 0) {
3788 retval = (-EFAULT);
3789 goto out;
3790 }
3791
3792 /* Loop until enough data in buffer or a special condition found */
3793 for (total = 0, special = 0; total < count - STp->block_size + 1 && !special; ) {
3794
3795 /* Get new data if the buffer is empty */
3796 if ((STp->buffer)->buffer_bytes == 0) {
3797 if (STps->eof == ST_FM_HIT)
3798 break;
3799 special = osst_get_logical_frame(STp, &SRpnt, STp->frame_seq_number, 0);
3800 if (special < 0) { /* No need to continue read */
3801 STp->frame_in_buffer = 0;
3802 retval = special;
3803 goto out;
3804 }
3805 }
3806
3807 /* Move the data from driver buffer to user buffer */
3808 if ((STp->buffer)->buffer_bytes > 0) {
3809#if DEBUG
3810 if (debugging && STps->eof != ST_NOEOF)
3811 printk(OSST_DEB_MSG "%s:D: EOF up (%d). Left %d, needed %d.\n", name,
3812 STps->eof, (STp->buffer)->buffer_bytes, (int) (count - total));
3813#endif
3814 /* force multiple of block size, note block_size may have been adjusted */
3815 transfer = (((STp->buffer)->buffer_bytes < count - total ?
3816 (STp->buffer)->buffer_bytes : count - total)/
3817 STp->block_size) * STp->block_size;
3818
3819 if (transfer == 0) {
3820 printk(KERN_WARNING
3821 "%s:W: Nothing can be transferred, requested %zd, tape block size (%d%c).\n",
3822 name, count, STp->block_size < 1024?
3823 STp->block_size:STp->block_size/1024,
3824 STp->block_size<1024?'b':'k');
3825 break;
3826 }
3827 i = from_buffer(STp->buffer, buf, transfer);
3828 if (i) {
3829 retval = i;
3830 goto out;
3831 }
3832 STp->logical_blk_num += transfer / STp->block_size;
3833 STps->drv_block += transfer / STp->block_size;
3834 *ppos += transfer;
3835 buf += transfer;
3836 total += transfer;
3837 }
3838
3839 if ((STp->buffer)->buffer_bytes == 0) {
3840#if DEBUG
3841 if (debugging)
3842 printk(OSST_DEB_MSG "%s:D: Finished with frame %d\n",
3843 name, STp->frame_seq_number);
3844#endif
3845 STp->frame_in_buffer = 0;
3846 STp->frame_seq_number++; /* frame to look for next time */
3847 }
3848 } /* for (total = 0, special = 0; total < count && !special; ) */
3849
3850 /* Change the eof state if no data from tape or buffer */
3851 if (total == 0) {
3852 if (STps->eof == ST_FM_HIT) {
3853 STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD_2:ST_FM;
3854 STps->drv_block = 0;
3855 if (STps->drv_file >= 0)
3856 STps->drv_file++;
3857 }
3858 else if (STps->eof == ST_EOD_1) {
3859 STps->eof = ST_EOD_2;
3860 if (STps->drv_block > 0 && STps->drv_file >= 0)
3861 STps->drv_file++;
3862 STps->drv_block = 0;
3863 }
3864 else if (STps->eof == ST_EOD_2)
3865 STps->eof = ST_EOD;
3866 }
3867 else if (STps->eof == ST_FM)
3868 STps->eof = ST_NOEOF;
3869
3870 retval = total;
3871
3872out:
3873 if (SRpnt != NULL) osst_release_request(SRpnt);
3874
3875 mutex_unlock(&STp->lock);
3876
3877 return retval;
3878}
3879
3880
3881/* Set the driver options */
3882static void osst_log_options(struct osst_tape *STp, struct st_modedef *STm, char *name)
3883{
3884 printk(KERN_INFO
3885"%s:I: Mode %d options: buffer writes: %d, async writes: %d, read ahead: %d\n",
3886 name, STp->current_mode, STm->do_buffer_writes, STm->do_async_writes,
3887 STm->do_read_ahead);
3888 printk(KERN_INFO
3889"%s:I: can bsr: %d, two FMs: %d, fast mteom: %d, auto lock: %d,\n",
3890 name, STp->can_bsr, STp->two_fm, STp->fast_mteom, STp->do_auto_lock);
3891 printk(KERN_INFO
3892"%s:I: defs for wr: %d, no block limits: %d, partitions: %d, s2 log: %d\n",
3893 name, STm->defaults_for_writes, STp->omit_blklims, STp->can_partitions,
3894 STp->scsi2_logical);
3895 printk(KERN_INFO
3896"%s:I: sysv: %d\n", name, STm->sysv);
3897#if DEBUG
3898 printk(KERN_INFO
3899 "%s:D: debugging: %d\n",
3900 name, debugging);
3901#endif
3902}
3903
3904
3905static int osst_set_options(struct osst_tape *STp, long options)
3906{
3907 int value;
3908 long code;
3909 struct st_modedef * STm;
3910 char * name = tape_name(STp);
3911
3912 STm = &(STp->modes[STp->current_mode]);
3913 if (!STm->defined) {
3914 memcpy(STm, &(STp->modes[0]), sizeof(*STm));
3915 modes_defined = 1;
3916#if DEBUG
3917 if (debugging)
3918 printk(OSST_DEB_MSG "%s:D: Initialized mode %d definition from mode 0\n",
3919 name, STp->current_mode);
3920#endif
3921 }
3922
3923 code = options & MT_ST_OPTIONS;
3924 if (code == MT_ST_BOOLEANS) {
3925 STm->do_buffer_writes = (options & MT_ST_BUFFER_WRITES) != 0;
3926 STm->do_async_writes = (options & MT_ST_ASYNC_WRITES) != 0;
3927 STm->defaults_for_writes = (options & MT_ST_DEF_WRITES) != 0;
3928 STm->do_read_ahead = (options & MT_ST_READ_AHEAD) != 0;
3929 STp->two_fm = (options & MT_ST_TWO_FM) != 0;
3930 STp->fast_mteom = (options & MT_ST_FAST_MTEOM) != 0;
3931 STp->do_auto_lock = (options & MT_ST_AUTO_LOCK) != 0;
3932 STp->can_bsr = (options & MT_ST_CAN_BSR) != 0;
3933 STp->omit_blklims = (options & MT_ST_NO_BLKLIMS) != 0;
3934 if ((STp->device)->scsi_level >= SCSI_2)
3935 STp->can_partitions = (options & MT_ST_CAN_PARTITIONS) != 0;
3936 STp->scsi2_logical = (options & MT_ST_SCSI2LOGICAL) != 0;
3937 STm->sysv = (options & MT_ST_SYSV) != 0;
3938#if DEBUG
3939 debugging = (options & MT_ST_DEBUGGING) != 0;
3940#endif
3941 osst_log_options(STp, STm, name);
3942 }
3943 else if (code == MT_ST_SETBOOLEANS || code == MT_ST_CLEARBOOLEANS) {
3944 value = (code == MT_ST_SETBOOLEANS);
3945 if ((options & MT_ST_BUFFER_WRITES) != 0)
3946 STm->do_buffer_writes = value;
3947 if ((options & MT_ST_ASYNC_WRITES) != 0)
3948 STm->do_async_writes = value;
3949 if ((options & MT_ST_DEF_WRITES) != 0)
3950 STm->defaults_for_writes = value;
3951 if ((options & MT_ST_READ_AHEAD) != 0)
3952 STm->do_read_ahead = value;
3953 if ((options & MT_ST_TWO_FM) != 0)
3954 STp->two_fm = value;
3955 if ((options & MT_ST_FAST_MTEOM) != 0)
3956 STp->fast_mteom = value;
3957 if ((options & MT_ST_AUTO_LOCK) != 0)
3958 STp->do_auto_lock = value;
3959 if ((options & MT_ST_CAN_BSR) != 0)
3960 STp->can_bsr = value;
3961 if ((options & MT_ST_NO_BLKLIMS) != 0)
3962 STp->omit_blklims = value;
3963 if ((STp->device)->scsi_level >= SCSI_2 &&
3964 (options & MT_ST_CAN_PARTITIONS) != 0)
3965 STp->can_partitions = value;
3966 if ((options & MT_ST_SCSI2LOGICAL) != 0)
3967 STp->scsi2_logical = value;
3968 if ((options & MT_ST_SYSV) != 0)
3969 STm->sysv = value;
3970#if DEBUG
3971 if ((options & MT_ST_DEBUGGING) != 0)
3972 debugging = value;
3973#endif
3974 osst_log_options(STp, STm, name);
3975 }
3976 else if (code == MT_ST_WRITE_THRESHOLD) {
3977 value = (options & ~MT_ST_OPTIONS) * ST_KILOBYTE;
3978 if (value < 1 || value > osst_buffer_size) {
3979 printk(KERN_WARNING "%s:W: Write threshold %d too small or too large.\n",
3980 name, value);
3981 return (-EIO);
3982 }
3983 STp->write_threshold = value;
3984 printk(KERN_INFO "%s:I: Write threshold set to %d bytes.\n",
3985 name, value);
3986 }
3987 else if (code == MT_ST_DEF_BLKSIZE) {
3988 value = (options & ~MT_ST_OPTIONS);
3989 if (value == ~MT_ST_OPTIONS) {
3990 STm->default_blksize = (-1);
3991 printk(KERN_INFO "%s:I: Default block size disabled.\n", name);
3992 }
3993 else {
3994 if (value < 512 || value > OS_DATA_SIZE || OS_DATA_SIZE % value) {
3995 printk(KERN_WARNING "%s:W: Default block size cannot be set to %d.\n",
3996 name, value);
3997 return (-EINVAL);
3998 }
3999 STm->default_blksize = value;
4000 printk(KERN_INFO "%s:I: Default block size set to %d bytes.\n",
4001 name, STm->default_blksize);
4002 }
4003 }
4004 else if (code == MT_ST_TIMEOUTS) {
4005 value = (options & ~MT_ST_OPTIONS);
4006 if ((value & MT_ST_SET_LONG_TIMEOUT) != 0) {
4007 STp->long_timeout = (value & ~MT_ST_SET_LONG_TIMEOUT) * HZ;
4008 printk(KERN_INFO "%s:I: Long timeout set to %d seconds.\n", name,
4009 (value & ~MT_ST_SET_LONG_TIMEOUT));
4010 }
4011 else {
4012 STp->timeout = value * HZ;
4013 printk(KERN_INFO "%s:I: Normal timeout set to %d seconds.\n", name, value);
4014 }
4015 }
4016 else if (code == MT_ST_DEF_OPTIONS) {
4017 code = (options & ~MT_ST_CLEAR_DEFAULT);
4018 value = (options & MT_ST_CLEAR_DEFAULT);
4019 if (code == MT_ST_DEF_DENSITY) {
4020 if (value == MT_ST_CLEAR_DEFAULT) {
4021 STm->default_density = (-1);
4022 printk(KERN_INFO "%s:I: Density default disabled.\n", name);
4023 }
4024 else {
4025 STm->default_density = value & 0xff;
4026 printk(KERN_INFO "%s:I: Density default set to %x\n",
4027 name, STm->default_density);
4028 }
4029 }
4030 else if (code == MT_ST_DEF_DRVBUFFER) {
4031 if (value == MT_ST_CLEAR_DEFAULT) {
4032 STp->default_drvbuffer = 0xff;
4033 printk(KERN_INFO "%s:I: Drive buffer default disabled.\n", name);
4034 }
4035 else {
4036 STp->default_drvbuffer = value & 7;
4037 printk(KERN_INFO "%s:I: Drive buffer default set to %x\n",
4038 name, STp->default_drvbuffer);
4039 }
4040 }
4041 else if (code == MT_ST_DEF_COMPRESSION) {
4042 if (value == MT_ST_CLEAR_DEFAULT) {
4043 STm->default_compression = ST_DONT_TOUCH;
4044 printk(KERN_INFO "%s:I: Compression default disabled.\n", name);
4045 }
4046 else {
4047 STm->default_compression = (value & 1 ? ST_YES : ST_NO);
4048 printk(KERN_INFO "%s:I: Compression default set to %x\n",
4049 name, (value & 1));
4050 }
4051 }
4052 }
4053 else
4054 return (-EIO);
4055
4056 return 0;
4057}
4058
4059
4060/* Internal ioctl function */
4061static int osst_int_ioctl(struct osst_tape * STp, struct osst_request ** aSRpnt,
4062 unsigned int cmd_in, unsigned long arg)
4063{
4064 int timeout;
4065 long ltmp;
4066 int i, ioctl_result;
4067 int chg_eof = 1;
4068 unsigned char cmd[MAX_COMMAND_SIZE];
4069 struct osst_request * SRpnt = * aSRpnt;
4070 struct st_partstat * STps;
4071 int fileno, blkno, at_sm, frame_seq_numbr, logical_blk_num;
4072 int datalen = 0, direction = DMA_NONE;
4073 char * name = tape_name(STp);
4074
4075 if (STp->ready != ST_READY && cmd_in != MTLOAD) {
4076 if (STp->ready == ST_NO_TAPE)
4077 return (-ENOMEDIUM);
4078 else
4079 return (-EIO);
4080 }
4081 timeout = STp->long_timeout;
4082 STps = &(STp->ps[STp->partition]);
4083 fileno = STps->drv_file;
4084 blkno = STps->drv_block;
4085 at_sm = STps->at_sm;
4086 frame_seq_numbr = STp->frame_seq_number;
4087 logical_blk_num = STp->logical_blk_num;
4088
4089 memset(cmd, 0, MAX_COMMAND_SIZE);
4090 switch (cmd_in) {
4091 case MTFSFM:
4092 chg_eof = 0; /* Changed from the FSF after this */
4093 /* fall through */
4094 case MTFSF:
4095 if (STp->raw)
4096 return (-EIO);
4097 if (STp->linux_media)
4098 ioctl_result = osst_space_over_filemarks_forward_fast(STp, &SRpnt, cmd_in, arg);
4099 else
4100 ioctl_result = osst_space_over_filemarks_forward_slow(STp, &SRpnt, cmd_in, arg);
4101 if (fileno >= 0)
4102 fileno += arg;
4103 blkno = 0;
4104 at_sm &= (arg == 0);
4105 goto os_bypass;
4106
4107 case MTBSF:
4108 chg_eof = 0; /* Changed from the FSF after this */
4109 /* fall through */
4110 case MTBSFM:
4111 if (STp->raw)
4112 return (-EIO);
4113 ioctl_result = osst_space_over_filemarks_backward(STp, &SRpnt, cmd_in, arg);
4114 if (fileno >= 0)
4115 fileno -= arg;
4116 blkno = (-1); /* We can't know the block number */
4117 at_sm &= (arg == 0);
4118 goto os_bypass;
4119
4120 case MTFSR:
4121 case MTBSR:
4122#if DEBUG
4123 if (debugging)
4124 printk(OSST_DEB_MSG "%s:D: Skipping %lu blocks %s from logical block %d\n",
4125 name, arg, cmd_in==MTFSR?"forward":"backward", logical_blk_num);
4126#endif
4127 if (cmd_in == MTFSR) {
4128 logical_blk_num += arg;
4129 if (blkno >= 0) blkno += arg;
4130 }
4131 else {
4132 logical_blk_num -= arg;
4133 if (blkno >= 0) blkno -= arg;
4134 }
4135 ioctl_result = osst_seek_logical_blk(STp, &SRpnt, logical_blk_num);
4136 fileno = STps->drv_file;
4137 blkno = STps->drv_block;
4138 at_sm &= (arg == 0);
4139 goto os_bypass;
4140
4141 case MTFSS:
4142 cmd[0] = SPACE;
4143 cmd[1] = 0x04; /* Space Setmarks */ /* FIXME -- OS can't do this? */
4144 cmd[2] = (arg >> 16);
4145 cmd[3] = (arg >> 8);
4146 cmd[4] = arg;
4147#if DEBUG
4148 if (debugging)
4149 printk(OSST_DEB_MSG "%s:D: Spacing tape forward %d setmarks.\n", name,
4150 cmd[2] * 65536 + cmd[3] * 256 + cmd[4]);
4151#endif
4152 if (arg != 0) {
4153 blkno = fileno = (-1);
4154 at_sm = 1;
4155 }
4156 break;
4157 case MTBSS:
4158 cmd[0] = SPACE;
4159 cmd[1] = 0x04; /* Space Setmarks */ /* FIXME -- OS can't do this? */
4160 ltmp = (-arg);
4161 cmd[2] = (ltmp >> 16);
4162 cmd[3] = (ltmp >> 8);
4163 cmd[4] = ltmp;
4164#if DEBUG
4165 if (debugging) {
4166 if (cmd[2] & 0x80)
4167 ltmp = 0xff000000;
4168 ltmp = ltmp | (cmd[2] << 16) | (cmd[3] << 8) | cmd[4];
4169 printk(OSST_DEB_MSG "%s:D: Spacing tape backward %ld setmarks.\n",
4170 name, (-ltmp));
4171 }
4172#endif
4173 if (arg != 0) {
4174 blkno = fileno = (-1);
4175 at_sm = 1;
4176 }
4177 break;
4178 case MTWEOF:
4179 if ((STps->rw == ST_WRITING || STp->dirty) && !STp->pos_unknown) {
4180 STp->write_type = OS_WRITE_DATA;
4181 ioctl_result = osst_flush_write_buffer(STp, &SRpnt);
4182 } else
4183 ioctl_result = 0;
4184#if DEBUG
4185 if (debugging)
4186 printk(OSST_DEB_MSG "%s:D: Writing %ld filemark(s).\n", name, arg);
4187#endif
4188 for (i=0; i<arg; i++)
4189 ioctl_result |= osst_write_filemark(STp, &SRpnt);
4190 if (fileno >= 0) fileno += arg;
4191 if (blkno >= 0) blkno = 0;
4192 goto os_bypass;
4193
4194 case MTWSM:
4195 if (STp->write_prot)
4196 return (-EACCES);
4197 if (!STp->raw)
4198 return 0;
4199 cmd[0] = WRITE_FILEMARKS; /* FIXME -- need OS version */
4200 if (cmd_in == MTWSM)
4201 cmd[1] = 2;
4202 cmd[2] = (arg >> 16);
4203 cmd[3] = (arg >> 8);
4204 cmd[4] = arg;
4205 timeout = STp->timeout;
4206#if DEBUG
4207 if (debugging)
4208 printk(OSST_DEB_MSG "%s:D: Writing %d setmark(s).\n", name,
4209 cmd[2] * 65536 + cmd[3] * 256 + cmd[4]);
4210#endif
4211 if (fileno >= 0)
4212 fileno += arg;
4213 blkno = 0;
4214 at_sm = (cmd_in == MTWSM);
4215 break;
4216 case MTOFFL:
4217 case MTLOAD:
4218 case MTUNLOAD:
4219 case MTRETEN:
4220 cmd[0] = START_STOP;
4221 cmd[1] = 1; /* Don't wait for completion */
4222 if (cmd_in == MTLOAD) {
4223 if (STp->ready == ST_NO_TAPE)
4224 cmd[4] = 4; /* open tray */
4225 else
4226 cmd[4] = 1; /* load */
4227 }
4228 if (cmd_in == MTRETEN)
4229 cmd[4] = 3; /* retension then mount */
4230 if (cmd_in == MTOFFL)
4231 cmd[4] = 4; /* rewind then eject */
4232 timeout = STp->timeout;
4233#if DEBUG
4234 if (debugging) {
4235 switch (cmd_in) {
4236 case MTUNLOAD:
4237 printk(OSST_DEB_MSG "%s:D: Unloading tape.\n", name);
4238 break;
4239 case MTLOAD:
4240 printk(OSST_DEB_MSG "%s:D: Loading tape.\n", name);
4241 break;
4242 case MTRETEN:
4243 printk(OSST_DEB_MSG "%s:D: Retensioning tape.\n", name);
4244 break;
4245 case MTOFFL:
4246 printk(OSST_DEB_MSG "%s:D: Ejecting tape.\n", name);
4247 break;
4248 }
4249 }
4250#endif
4251		fileno = blkno = at_sm = frame_seq_numbr = logical_blk_num = 0;
4252 break;
4253 case MTNOP:
4254#if DEBUG
4255 if (debugging)
4256 printk(OSST_DEB_MSG "%s:D: No-op on tape.\n", name);
4257#endif
4258		return 0; /* Should do something ? */
4260 case MTEOM:
4261#if DEBUG
4262 if (debugging)
4263 printk(OSST_DEB_MSG "%s:D: Spacing to end of recorded medium.\n", name);
4264#endif
4265 if ((osst_position_tape_and_confirm(STp, &SRpnt, STp->eod_frame_ppos) < 0) ||
4266 (osst_get_logical_frame(STp, &SRpnt, -1, 0) < 0)) {
4267 ioctl_result = -EIO;
4268 goto os_bypass;
4269 }
4270 if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_EOD) {
4271#if DEBUG
4272 printk(OSST_DEB_MSG "%s:D: No EOD frame found where expected.\n", name);
4273#endif
4274 ioctl_result = -EIO;
4275 goto os_bypass;
4276 }
4277 ioctl_result = osst_set_frame_position(STp, &SRpnt, STp->eod_frame_ppos, 0);
4278 fileno = STp->filemark_cnt;
4279 blkno = at_sm = 0;
4280 goto os_bypass;
4281
4282 case MTERASE:
4283 if (STp->write_prot)
4284 return (-EACCES);
4285 ioctl_result = osst_reset_header(STp, &SRpnt);
4286 i = osst_write_eod(STp, &SRpnt);
4287 if (i < ioctl_result) ioctl_result = i;
4288 i = osst_position_tape_and_confirm(STp, &SRpnt, STp->eod_frame_ppos);
4289 if (i < ioctl_result) ioctl_result = i;
4290		fileno = blkno = at_sm = 0;
4291 goto os_bypass;
4292
4293 case MTREW:
4294 cmd[0] = REZERO_UNIT; /* rewind */
4295 cmd[1] = 1;
4296#if DEBUG
4297 if (debugging)
4298 printk(OSST_DEB_MSG "%s:D: Rewinding tape, Immed=%d.\n", name, cmd[1]);
4299#endif
4300		fileno = blkno = at_sm = frame_seq_numbr = logical_blk_num = 0;
4301 break;
4302
4303 case MTSETBLK: /* Set block length */
4304 if ((STps->drv_block == 0 ) &&
4305 !STp->dirty &&
4306 ((STp->buffer)->buffer_bytes == 0) &&
4307 ((arg & MT_ST_BLKSIZE_MASK) >= 512 ) &&
4308 ((arg & MT_ST_BLKSIZE_MASK) <= OS_DATA_SIZE) &&
4309 !(OS_DATA_SIZE % (arg & MT_ST_BLKSIZE_MASK)) ) {
4310 /*
4311 * Only allowed to change the block size if you opened the
4312 * device at the beginning of a file before writing anything.
4313			 * Note that when reading, changing block_size is futile,
4314 * as the size used when writing overrides it.
4315 */
4316 STp->block_size = (arg & MT_ST_BLKSIZE_MASK);
4317 printk(KERN_INFO "%s:I: Block size set to %d bytes.\n",
4318 name, STp->block_size);
4319 return 0;
4320 }
4321 /* fall through */
4322 case MTSETDENSITY: /* Set tape density */
4323 case MTSETDRVBUFFER: /* Set drive buffering */
4324 case SET_DENS_AND_BLK: /* Set density and block size */
4325 chg_eof = 0;
4326 if (STp->dirty || (STp->buffer)->buffer_bytes != 0)
4327 return (-EIO); /* Not allowed if data in buffer */
4328 if ((cmd_in == MTSETBLK || cmd_in == SET_DENS_AND_BLK) &&
4329 (arg & MT_ST_BLKSIZE_MASK) != 0 &&
4330 (arg & MT_ST_BLKSIZE_MASK) != STp->block_size ) {
4331 printk(KERN_WARNING "%s:W: Illegal to set block size to %d%s.\n",
4332 name, (int)(arg & MT_ST_BLKSIZE_MASK),
4333 (OS_DATA_SIZE % (arg & MT_ST_BLKSIZE_MASK))?"":" now");
4334 return (-EINVAL);
4335 }
4336 return 0; /* FIXME silently ignore if block size didn't change */
4337
4338 default:
4339 return (-ENOSYS);
4340 }
4341
4342 SRpnt = osst_do_scsi(SRpnt, STp, cmd, datalen, direction, timeout, MAX_RETRIES, 1);
4343
4344 ioctl_result = (STp->buffer)->syscall_result;
4345
4346 if (!SRpnt) {
4347#if DEBUG
4348 printk(OSST_DEB_MSG "%s:D: Couldn't exec scsi cmd for IOCTL\n", name);
4349#endif
4350 return ioctl_result;
4351 }
4352
4353 if (!ioctl_result) { /* SCSI command successful */
4354 STp->frame_seq_number = frame_seq_numbr;
4355 STp->logical_blk_num = logical_blk_num;
4356 }
4357
4358os_bypass:
4359#if DEBUG
4360 if (debugging)
4361 printk(OSST_DEB_MSG "%s:D: IOCTL (%d) Result=%d\n", name, cmd_in, ioctl_result);
4362#endif
4363
4364 if (!ioctl_result) { /* success */
4365
4366 if (cmd_in == MTFSFM) {
4367 fileno--;
4368 blkno--;
4369 }
4370 if (cmd_in == MTBSFM) {
4371 fileno++;
4372 blkno++;
4373 }
4374 STps->drv_block = blkno;
4375 STps->drv_file = fileno;
4376 STps->at_sm = at_sm;
4377
4378 if (cmd_in == MTEOM)
4379 STps->eof = ST_EOD;
4380 else if ((cmd_in == MTFSFM || cmd_in == MTBSF) && STps->eof == ST_FM_HIT) {
4381 ioctl_result = osst_seek_logical_blk(STp, &SRpnt, STp->logical_blk_num-1);
4382 STps->drv_block++;
4383 STp->logical_blk_num++;
4384 STp->frame_seq_number++;
4385 STp->frame_in_buffer = 0;
4386 STp->buffer->read_pointer = 0;
4387 }
4388 else if (cmd_in == MTFSF)
4389 STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD:ST_FM;
4390 else if (chg_eof)
4391 STps->eof = ST_NOEOF;
4392
4393 if (cmd_in == MTOFFL || cmd_in == MTUNLOAD)
4394 STp->rew_at_close = 0;
4395 else if (cmd_in == MTLOAD) {
4396 for (i=0; i < ST_NBR_PARTITIONS; i++) {
4397 STp->ps[i].rw = ST_IDLE;
4398 STp->ps[i].last_block_valid = 0;/* FIXME - where else is this field maintained? */
4399 }
4400 STp->partition = 0;
4401 }
4402
4403 if (cmd_in == MTREW) {
4404 ioctl_result = osst_position_tape_and_confirm(STp, &SRpnt, STp->first_data_ppos);
4405 if (ioctl_result > 0)
4406 ioctl_result = 0;
4407 }
4408
4409 } else if (cmd_in == MTBSF || cmd_in == MTBSFM ) {
4410 if (osst_position_tape_and_confirm(STp, &SRpnt, STp->first_data_ppos) < 0)
4411 STps->drv_file = STps->drv_block = -1;
4412 else
4413 STps->drv_file = STps->drv_block = 0;
4414 STps->eof = ST_NOEOF;
4415 } else if (cmd_in == MTFSF || cmd_in == MTFSFM) {
4416 if (osst_position_tape_and_confirm(STp, &SRpnt, STp->eod_frame_ppos) < 0)
4417 STps->drv_file = STps->drv_block = -1;
4418 else {
4419 STps->drv_file = STp->filemark_cnt;
4420 STps->drv_block = 0;
4421 }
4422 STps->eof = ST_EOD;
4423 } else if (cmd_in == MTBSR || cmd_in == MTFSR || cmd_in == MTWEOF || cmd_in == MTEOM) {
4424 STps->drv_file = STps->drv_block = (-1);
4425 STps->eof = ST_NOEOF;
4426 STp->header_ok = 0;
4427 } else if (cmd_in == MTERASE) {
4428 STp->header_ok = 0;
4429 } else if (SRpnt) { /* SCSI command was not completely successful. */
4430 if (SRpnt->sense[2] & 0x40) {
4431 STps->eof = ST_EOM_OK;
4432 STps->drv_block = 0;
4433 }
4434 if (chg_eof)
4435 STps->eof = ST_NOEOF;
4436
4437 if ((SRpnt->sense[2] & 0x0f) == BLANK_CHECK)
4438 STps->eof = ST_EOD;
4439
4440 if (cmd_in == MTLOAD && osst_wait_for_medium(STp, &SRpnt, 60))
4441 ioctl_result = osst_wait_ready(STp, &SRpnt, 5 * 60, OSST_WAIT_POSITION_COMPLETE);
4442 }
4443 *aSRpnt = SRpnt;
4444
4445 return ioctl_result;
4446}
4447
4448
4449/* Open the device */
4450static int __os_scsi_tape_open(struct inode * inode, struct file * filp)
4451{
4452 unsigned short flags;
4453 int i, b_size, new_session = 0, retval = 0;
4454 unsigned char cmd[MAX_COMMAND_SIZE];
4455 struct osst_request * SRpnt = NULL;
4456 struct osst_tape * STp;
4457 struct st_modedef * STm;
4458 struct st_partstat * STps;
4459 char * name;
4460 int dev = TAPE_NR(inode);
4461 int mode = TAPE_MODE(inode);
4462
4463 /*
4464 * We really want to do nonseekable_open(inode, filp); here, but some
4465 * versions of tar incorrectly call lseek on tapes and bail out if that
4466 * fails. So we disallow pread() and pwrite(), but permit lseeks.
4467 */
4468 filp->f_mode &= ~(FMODE_PREAD | FMODE_PWRITE);
4469
4470 write_lock(&os_scsi_tapes_lock);
4471 if (dev >= osst_max_dev || os_scsi_tapes == NULL ||
4472 (STp = os_scsi_tapes[dev]) == NULL || !STp->device) {
4473 write_unlock(&os_scsi_tapes_lock);
4474 return (-ENXIO);
4475 }
4476
4477 name = tape_name(STp);
4478
4479 if (STp->in_use) {
4480 write_unlock(&os_scsi_tapes_lock);
4481#if DEBUG
4482 printk(OSST_DEB_MSG "%s:D: Device already in use.\n", name);
4483#endif
4484 return (-EBUSY);
4485 }
4486 if (scsi_device_get(STp->device)) {
4487 write_unlock(&os_scsi_tapes_lock);
4488#if DEBUG
4489 printk(OSST_DEB_MSG "%s:D: Failed scsi_device_get.\n", name);
4490#endif
4491 return (-ENXIO);
4492 }
4493 filp->private_data = STp;
4494 STp->in_use = 1;
4495 write_unlock(&os_scsi_tapes_lock);
4496 STp->rew_at_close = TAPE_REWIND(inode);
4497
4498 if( !scsi_block_when_processing_errors(STp->device) ) {
4499 return -ENXIO;
4500 }
4501
4502 if (mode != STp->current_mode) {
4503#if DEBUG
4504 if (debugging)
4505 printk(OSST_DEB_MSG "%s:D: Mode change from %d to %d.\n",
4506 name, STp->current_mode, mode);
4507#endif
4508 new_session = 1;
4509 STp->current_mode = mode;
4510 }
4511 STm = &(STp->modes[STp->current_mode]);
4512
4513 flags = filp->f_flags;
4514 STp->write_prot = ((flags & O_ACCMODE) == O_RDONLY);
4515
4516 STp->raw = TAPE_IS_RAW(inode);
4517 if (STp->raw)
4518 STp->header_ok = 0;
4519
4520 /* Allocate data segments for this device's tape buffer */
4521 if (!enlarge_buffer(STp->buffer, STp->restr_dma)) {
4522 printk(KERN_ERR "%s:E: Unable to allocate memory segments for tape buffer.\n", name);
4523 retval = (-EOVERFLOW);
4524 goto err_out;
4525 }
4526 if (STp->buffer->buffer_size >= OS_FRAME_SIZE) {
4527 for (i = 0, b_size = 0;
4528 (i < STp->buffer->sg_segs) && ((b_size + STp->buffer->sg[i].length) <= OS_DATA_SIZE);
4529 b_size += STp->buffer->sg[i++].length);
4530 STp->buffer->aux = (os_aux_t *) (page_address(sg_page(&STp->buffer->sg[i])) + OS_DATA_SIZE - b_size);
4531#if DEBUG
4532		printk(OSST_DEB_MSG "%s:D: b_data points to %p in segment 0 at %p\n", name,
4533			STp->buffer->b_data, page_address(sg_page(&STp->buffer->sg[0])));
4534		printk(OSST_DEB_MSG "%s:D: AUX points to %p in segment %d at %p\n", name,
4535			STp->buffer->aux, i, page_address(sg_page(&STp->buffer->sg[i])));
4536#endif
4537 } else {
4538 STp->buffer->aux = NULL; /* this had better never happen! */
4539 printk(KERN_NOTICE "%s:A: Framesize %d too large for buffer.\n", name, OS_FRAME_SIZE);
4540 retval = (-EIO);
4541 goto err_out;
4542 }
4543 STp->buffer->writing = 0;
4544 STp->buffer->syscall_result = 0;
4545 STp->dirty = 0;
4546 for (i=0; i < ST_NBR_PARTITIONS; i++) {
4547 STps = &(STp->ps[i]);
4548 STps->rw = ST_IDLE;
4549 }
4550 STp->ready = ST_READY;
4551#if DEBUG
4552 STp->nbr_waits = STp->nbr_finished = 0;
4553#endif
4554
4555 memset (cmd, 0, MAX_COMMAND_SIZE);
4556 cmd[0] = TEST_UNIT_READY;
4557
4558 SRpnt = osst_do_scsi(NULL, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
4559 if (!SRpnt) {
4560 retval = (STp->buffer)->syscall_result; /* FIXME - valid? */
4561 goto err_out;
4562 }
4563 if ((SRpnt->sense[0] & 0x70) == 0x70 &&
4564 (SRpnt->sense[2] & 0x0f) == NOT_READY &&
4565 SRpnt->sense[12] == 4 ) {
4566#if DEBUG
4567 printk(OSST_DEB_MSG "%s:D: Unit not ready, cause %x\n", name, SRpnt->sense[13]);
4568#endif
4569 if (filp->f_flags & O_NONBLOCK) {
4570 retval = -EAGAIN;
4571 goto err_out;
4572 }
4573 if (SRpnt->sense[13] == 2) { /* initialize command required (LOAD) */
4574 memset (cmd, 0, MAX_COMMAND_SIZE);
4575 cmd[0] = START_STOP;
4576 cmd[1] = 1;
4577 cmd[4] = 1;
4578 SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE,
4579 STp->timeout, MAX_RETRIES, 1);
4580 }
4581 osst_wait_ready(STp, &SRpnt, (SRpnt->sense[13]==1?15:3) * 60, 0);
4582 }
4583 if ((SRpnt->sense[0] & 0x70) == 0x70 &&
4584 (SRpnt->sense[2] & 0x0f) == UNIT_ATTENTION) { /* New media? */
4585#if DEBUG
4586 printk(OSST_DEB_MSG "%s:D: Unit wants attention\n", name);
4587#endif
4588 STp->header_ok = 0;
4589
4590 for (i=0; i < 10; i++) {
4591
4592 memset (cmd, 0, MAX_COMMAND_SIZE);
4593 cmd[0] = TEST_UNIT_READY;
4594
4595 SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE,
4596 STp->timeout, MAX_RETRIES, 1);
4597 if ((SRpnt->sense[0] & 0x70) != 0x70 ||
4598 (SRpnt->sense[2] & 0x0f) != UNIT_ATTENTION)
4599 break;
4600 }
4601
4602 STp->pos_unknown = 0;
4603 STp->partition = STp->new_partition = 0;
4604 if (STp->can_partitions)
4605 STp->nbr_partitions = 1; /* This guess will be updated later if necessary */
4606 for (i=0; i < ST_NBR_PARTITIONS; i++) {
4607 STps = &(STp->ps[i]);
4608 STps->rw = ST_IDLE; /* FIXME - seems to be redundant... */
4609 STps->eof = ST_NOEOF;
4610 STps->at_sm = 0;
4611 STps->last_block_valid = 0;
4612 STps->drv_block = 0;
4613			STps->drv_file = 0;
4614 }
4615 new_session = 1;
4616 STp->recover_count = 0;
4617 STp->abort_count = 0;
4618 }
4619 /*
4620 * if we have valid headers from before, and the drive/tape seem untouched,
4621 * open without reconfiguring and re-reading the headers
4622 */
4623 if (!STp->buffer->syscall_result && STp->header_ok &&
4624 !SRpnt->result && SRpnt->sense[0] == 0) {
4625
4626 memset(cmd, 0, MAX_COMMAND_SIZE);
4627 cmd[0] = MODE_SENSE;
4628 cmd[1] = 8;
4629 cmd[2] = VENDOR_IDENT_PAGE;
4630 cmd[4] = VENDOR_IDENT_PAGE_LENGTH + MODE_HEADER_LENGTH;
4631
4632 SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);
4633
4634 if (STp->buffer->syscall_result ||
4635 STp->buffer->b_data[MODE_HEADER_LENGTH + 2] != 'L' ||
4636 STp->buffer->b_data[MODE_HEADER_LENGTH + 3] != 'I' ||
4637 STp->buffer->b_data[MODE_HEADER_LENGTH + 4] != 'N' ||
4638 STp->buffer->b_data[MODE_HEADER_LENGTH + 5] != '4' ) {
4639#if DEBUG
4640 printk(OSST_DEB_MSG "%s:D: Signature was changed to %c%c%c%c\n", name,
4641 STp->buffer->b_data[MODE_HEADER_LENGTH + 2],
4642 STp->buffer->b_data[MODE_HEADER_LENGTH + 3],
4643 STp->buffer->b_data[MODE_HEADER_LENGTH + 4],
4644 STp->buffer->b_data[MODE_HEADER_LENGTH + 5]);
4645#endif
4646 STp->header_ok = 0;
4647 }
4648 i = STp->first_frame_position;
4649 if (STp->header_ok && i == osst_get_frame_position(STp, &SRpnt)) {
4650 if (STp->door_locked == ST_UNLOCKED) {
4651 if (do_door_lock(STp, 1))
4652 printk(KERN_INFO "%s:I: Can't lock drive door\n", name);
4653 else
4654 STp->door_locked = ST_LOCKED_AUTO;
4655 }
4656 if (!STp->frame_in_buffer) {
4657 STp->block_size = (STm->default_blksize > 0) ?
4658 STm->default_blksize : OS_DATA_SIZE;
4659 STp->buffer->buffer_bytes = STp->buffer->read_pointer = 0;
4660 }
4661 STp->buffer->buffer_blocks = OS_DATA_SIZE / STp->block_size;
4662 STp->fast_open = 1;
4663 osst_release_request(SRpnt);
4664 return 0;
4665 }
4666#if DEBUG
4667 if (i != STp->first_frame_position)
4668 printk(OSST_DEB_MSG "%s:D: Tape position changed from %d to %d\n",
4669 name, i, STp->first_frame_position);
4670#endif
4671 STp->header_ok = 0;
4672 }
4673 STp->fast_open = 0;
4674
4675 if ((STp->buffer)->syscall_result != 0 && /* in all error conditions except no medium */
4676 (SRpnt->sense[2] != 2 || SRpnt->sense[12] != 0x3A) ) {
4677
4678 memset(cmd, 0, MAX_COMMAND_SIZE);
4679 cmd[0] = MODE_SELECT;
4680 cmd[1] = 0x10;
4681 cmd[4] = 4 + MODE_HEADER_LENGTH;
4682
4683 (STp->buffer)->b_data[0] = cmd[4] - 1;
4684 (STp->buffer)->b_data[1] = 0; /* Medium Type - ignoring */
4685 (STp->buffer)->b_data[2] = 0; /* Reserved */
4686 (STp->buffer)->b_data[3] = 0; /* Block Descriptor Length */
4687 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 0] = 0x3f;
4688 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 1] = 1;
4689 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 2] = 2;
4690 (STp->buffer)->b_data[MODE_HEADER_LENGTH + 3] = 3;
4691
4692#if DEBUG
4693 printk(OSST_DEB_MSG "%s:D: Applying soft reset\n", name);
4694#endif
4695 SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1);
4696
4697 STp->header_ok = 0;
4698
4699 for (i=0; i < 10; i++) {
4700
4701 memset (cmd, 0, MAX_COMMAND_SIZE);
4702 cmd[0] = TEST_UNIT_READY;
4703
4704 SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE,
4705 STp->timeout, MAX_RETRIES, 1);
4706 if ((SRpnt->sense[0] & 0x70) != 0x70 ||
4707 (SRpnt->sense[2] & 0x0f) == NOT_READY)
4708 break;
4709
4710 if ((SRpnt->sense[2] & 0x0f) == UNIT_ATTENTION) {
4711 int j;
4712
4713 STp->pos_unknown = 0;
4714 STp->partition = STp->new_partition = 0;
4715 if (STp->can_partitions)
4716 STp->nbr_partitions = 1; /* This guess will be updated later if necessary */
4717 for (j = 0; j < ST_NBR_PARTITIONS; j++) {
4718 STps = &(STp->ps[j]);
4719 STps->rw = ST_IDLE;
4720 STps->eof = ST_NOEOF;
4721 STps->at_sm = 0;
4722 STps->last_block_valid = 0;
4723 STps->drv_block = 0;
4724				STps->drv_file = 0;
4725 }
4726 new_session = 1;
4727 }
4728 }
4729 }
4730
4731 if (osst_wait_ready(STp, &SRpnt, 15 * 60, 0)) /* FIXME - not allowed with NOBLOCK */
4732 printk(KERN_INFO "%s:I: Device did not become Ready in open\n", name);
4733
4734 if ((STp->buffer)->syscall_result != 0) {
4735 if ((STp->device)->scsi_level >= SCSI_2 &&
4736 (SRpnt->sense[0] & 0x70) == 0x70 &&
4737 (SRpnt->sense[2] & 0x0f) == NOT_READY &&
4738 SRpnt->sense[12] == 0x3a) { /* Check ASC */
4739 STp->ready = ST_NO_TAPE;
4740 } else
4741 STp->ready = ST_NOT_READY;
4742 osst_release_request(SRpnt);
4743 SRpnt = NULL;
4744 STp->density = 0; /* Clear the erroneous "residue" */
4745 STp->write_prot = 0;
4746 STp->block_size = 0;
4747 STp->ps[0].drv_file = STp->ps[0].drv_block = (-1);
4748 STp->partition = STp->new_partition = 0;
4749 STp->door_locked = ST_UNLOCKED;
4750 return 0;
4751 }
4752
4753 osst_configure_onstream(STp, &SRpnt);
4754
4755 STp->block_size = STp->raw ? OS_FRAME_SIZE : (
4756 (STm->default_blksize > 0) ? STm->default_blksize : OS_DATA_SIZE);
4757 STp->buffer->buffer_blocks = STp->raw ? 1 : OS_DATA_SIZE / STp->block_size;
4758 STp->buffer->buffer_bytes =
4759 STp->buffer->read_pointer =
4760 STp->frame_in_buffer = 0;
4761
4762#if DEBUG
4763 if (debugging)
4764 printk(OSST_DEB_MSG "%s:D: Block size: %d, frame size: %d, buffer size: %d (%d blocks).\n",
4765 name, STp->block_size, OS_FRAME_SIZE, (STp->buffer)->buffer_size,
4766 (STp->buffer)->buffer_blocks);
4767#endif
4768
4769 if (STp->drv_write_prot) {
4770 STp->write_prot = 1;
4771#if DEBUG
4772 if (debugging)
4773 printk(OSST_DEB_MSG "%s:D: Write protected\n", name);
4774#endif
4775 if ((flags & O_ACCMODE) == O_WRONLY || (flags & O_ACCMODE) == O_RDWR) {
4776 retval = (-EROFS);
4777 goto err_out;
4778 }
4779 }
4780
4781 if (new_session) { /* Change the drive parameters for the new mode */
4782#if DEBUG
4783 if (debugging)
4784 printk(OSST_DEB_MSG "%s:D: New Session\n", name);
4785#endif
4786 STp->density_changed = STp->blksize_changed = 0;
4787 STp->compression_changed = 0;
4788 }
4789
4790 /*
4791 * properly position the tape and check the ADR headers
4792 */
4793 if (STp->door_locked == ST_UNLOCKED) {
4794 if (do_door_lock(STp, 1))
4795 printk(KERN_INFO "%s:I: Can't lock drive door\n", name);
4796 else
4797 STp->door_locked = ST_LOCKED_AUTO;
4798 }
4799
4800 osst_analyze_headers(STp, &SRpnt);
4801
4802 osst_release_request(SRpnt);
4803 SRpnt = NULL;
4804
4805 return 0;
4806
4807err_out:
4808 if (SRpnt != NULL)
4809 osst_release_request(SRpnt);
4810 normalize_buffer(STp->buffer);
4811 STp->header_ok = 0;
4812 STp->in_use = 0;
4813 scsi_device_put(STp->device);
4814
4815 return retval;
4816}
4817
4818/* BKL pushdown: spaghetti avoidance wrapper */
4819static int os_scsi_tape_open(struct inode * inode, struct file * filp)
4820{
4821 int ret;
4822
4823 mutex_lock(&osst_int_mutex);
4824 ret = __os_scsi_tape_open(inode, filp);
4825 mutex_unlock(&osst_int_mutex);
4826 return ret;
4827}
4828
4829
4830
4831/* Flush the tape buffer before close */
4832static int os_scsi_tape_flush(struct file * filp, fl_owner_t id)
4833{
4834 int result = 0, result2;
4835 struct osst_tape * STp = filp->private_data;
4836 struct st_modedef * STm = &(STp->modes[STp->current_mode]);
4837 struct st_partstat * STps = &(STp->ps[STp->partition]);
4838 struct osst_request * SRpnt = NULL;
4839 char * name = tape_name(STp);
4840
4841 if (file_count(filp) > 1)
4842 return 0;
4843
4844 if ((STps->rw == ST_WRITING || STp->dirty) && !STp->pos_unknown) {
4845 STp->write_type = OS_WRITE_DATA;
4846 result = osst_flush_write_buffer(STp, &SRpnt);
4847 if (result != 0 && result != (-ENOSPC))
4848 goto out;
4849 }
4850 if ( STps->rw >= ST_WRITING && !STp->pos_unknown) {
4851
4852#if DEBUG
4853 if (debugging) {
4854 printk(OSST_DEB_MSG "%s:D: File length %ld bytes.\n",
4855 name, (long)(filp->f_pos));
4856 printk(OSST_DEB_MSG "%s:D: Async write waits %d, finished %d.\n",
4857 name, STp->nbr_waits, STp->nbr_finished);
4858 }
4859#endif
4860 result = osst_write_trailer(STp, &SRpnt, !(STp->rew_at_close));
4861#if DEBUG
4862 if (debugging)
4863 printk(OSST_DEB_MSG "%s:D: Buffer flushed, %d EOF(s) written\n",
4864 name, 1+STp->two_fm);
4865#endif
4866 }
4867 else if (!STp->rew_at_close) {
4868 STps = &(STp->ps[STp->partition]);
4869 if (!STm->sysv || STps->rw != ST_READING) {
4870 if (STp->can_bsr)
4871 result = osst_flush_buffer(STp, &SRpnt, 0); /* this is the default path */
4872 else if (STps->eof == ST_FM_HIT) {
4873 result = cross_eof(STp, &SRpnt, 0);
4874 if (result) {
4875 if (STps->drv_file >= 0)
4876 STps->drv_file++;
4877 STps->drv_block = 0;
4878 STps->eof = ST_FM;
4879 }
4880 else
4881 STps->eof = ST_NOEOF;
4882 }
4883 }
4884 else if ((STps->eof == ST_NOEOF &&
4885 !(result = cross_eof(STp, &SRpnt, 1))) ||
4886 STps->eof == ST_FM_HIT) {
4887 if (STps->drv_file >= 0)
4888 STps->drv_file++;
4889 STps->drv_block = 0;
4890 STps->eof = ST_FM;
4891 }
4892 }
4893
4894out:
4895 if (STp->rew_at_close) {
4896 result2 = osst_position_tape_and_confirm(STp, &SRpnt, STp->first_data_ppos);
4897 STps->drv_file = STps->drv_block = STp->frame_seq_number = STp->logical_blk_num = 0;
4898 if (result == 0 && result2 < 0)
4899 result = result2;
4900 }
4901 if (SRpnt) osst_release_request(SRpnt);
4902
4903 if (STp->abort_count || STp->recover_count) {
4904 printk(KERN_INFO "%s:I:", name);
4905 if (STp->abort_count)
4906 printk(" %d unrecovered errors", STp->abort_count);
4907 if (STp->recover_count)
4908 printk(" %d recovered errors", STp->recover_count);
4909 if (STp->write_count)
4910 printk(" in %d frames written", STp->write_count);
4911 if (STp->read_count)
4912 printk(" in %d frames read", STp->read_count);
4913 printk("\n");
4914 STp->recover_count = 0;
4915 STp->abort_count = 0;
4916 }
4917 STp->write_count = 0;
4918 STp->read_count = 0;
4919
4920 return result;
4921}
4922
4923
4924/* Close the device and release it */
4925static int os_scsi_tape_close(struct inode * inode, struct file * filp)
4926{
4927 int result = 0;
4928 struct osst_tape * STp = filp->private_data;
4929
4930 if (STp->door_locked == ST_LOCKED_AUTO)
4931 do_door_lock(STp, 0);
4932
4933 if (STp->raw)
4934 STp->header_ok = 0;
4935
4936 normalize_buffer(STp->buffer);
4937 write_lock(&os_scsi_tapes_lock);
4938 STp->in_use = 0;
4939 write_unlock(&os_scsi_tapes_lock);
4940
4941 scsi_device_put(STp->device);
4942
4943 return result;
4944}
4945
4946
4947/* The ioctl command */
4948static long osst_ioctl(struct file * file,
4949 unsigned int cmd_in, unsigned long arg)
4950{
4951 int i, cmd_nr, cmd_type, blk, retval = 0;
4952 struct st_modedef * STm;
4953 struct st_partstat * STps;
4954 struct osst_request * SRpnt = NULL;
4955 struct osst_tape * STp = file->private_data;
4956 char * name = tape_name(STp);
4957 void __user * p = (void __user *)arg;
4958
4959 mutex_lock(&osst_int_mutex);
4960 if (mutex_lock_interruptible(&STp->lock)) {
4961 mutex_unlock(&osst_int_mutex);
4962 return -ERESTARTSYS;
4963 }
4964
4965#if DEBUG
4966 if (debugging && !STp->in_use) {
4967 printk(OSST_DEB_MSG "%s:D: Incorrect device.\n", name);
4968 retval = (-EIO);
4969 goto out;
4970 }
4971#endif
4972 STm = &(STp->modes[STp->current_mode]);
4973 STps = &(STp->ps[STp->partition]);
4974
4975 /*
4976 * If we are in the middle of error recovery, don't let anyone
4977 * else try and use this device. Also, if error recovery fails, it
4978 * may try and take the device offline, in which case all further
4979 * access to the device is prohibited.
4980 */
4981 retval = scsi_ioctl_block_when_processing_errors(STp->device, cmd_in,
4982 file->f_flags & O_NDELAY);
4983 if (retval)
4984 goto out;
4985
4986 cmd_type = _IOC_TYPE(cmd_in);
4987 cmd_nr = _IOC_NR(cmd_in);
4988#if DEBUG
4989 printk(OSST_DEB_MSG "%s:D: Ioctl %d,%d in %s mode\n", name,
4990 cmd_type, cmd_nr, STp->raw?"raw":"normal");
4991#endif
4992 if (cmd_type == _IOC_TYPE(MTIOCTOP) && cmd_nr == _IOC_NR(MTIOCTOP)) {
4993 struct mtop mtc;
4994 int auto_weof = 0;
4995
4996 if (_IOC_SIZE(cmd_in) != sizeof(mtc)) {
4997 retval = (-EINVAL);
4998 goto out;
4999 }
5000
5001 i = copy_from_user((char *) &mtc, p, sizeof(struct mtop));
5002 if (i) {
5003 retval = (-EFAULT);
5004 goto out;
5005 }
5006
5007 if (mtc.mt_op == MTSETDRVBUFFER && !capable(CAP_SYS_ADMIN)) {
5008 printk(KERN_WARNING "%s:W: MTSETDRVBUFFER only allowed for root.\n", name);
5009 retval = (-EPERM);
5010 goto out;
5011 }
5012
5013 if (!STm->defined && (mtc.mt_op != MTSETDRVBUFFER && (mtc.mt_count & MT_ST_OPTIONS) == 0)) {
5014 retval = (-ENXIO);
5015 goto out;
5016 }
5017
5018 if (!STp->pos_unknown) {
5019
5020 if (STps->eof == ST_FM_HIT) {
5021 if (mtc.mt_op == MTFSF || mtc.mt_op == MTFSFM|| mtc.mt_op == MTEOM) {
5022 mtc.mt_count -= 1;
5023 if (STps->drv_file >= 0)
5024 STps->drv_file += 1;
5025 }
5026 else if (mtc.mt_op == MTBSF || mtc.mt_op == MTBSFM) {
5027 mtc.mt_count += 1;
5028 if (STps->drv_file >= 0)
5029 STps->drv_file += 1;
5030 }
5031 }
5032
5033 if (mtc.mt_op == MTSEEK) {
5034				/* The old position must be restored if the partition is changed */
5035 i = !STp->can_partitions || (STp->new_partition != STp->partition);
5036 }
5037 else {
5038 i = mtc.mt_op == MTREW || mtc.mt_op == MTOFFL ||
5039 mtc.mt_op == MTRETEN || mtc.mt_op == MTEOM ||
5040 mtc.mt_op == MTLOCK || mtc.mt_op == MTLOAD ||
5041 mtc.mt_op == MTFSF || mtc.mt_op == MTFSFM ||
5042 mtc.mt_op == MTBSF || mtc.mt_op == MTBSFM ||
5043 mtc.mt_op == MTCOMPRESSION;
5044 }
5045 i = osst_flush_buffer(STp, &SRpnt, i);
5046 if (i < 0) {
5047 retval = i;
5048 goto out;
5049 }
5050 }
5051 else {
5052 /*
5053 * If there was a bus reset, block further access
5054 * to this device. If the user wants to rewind the tape,
5055 * then reset the flag and allow access again.
5056 */
5057 if(mtc.mt_op != MTREW &&
5058 mtc.mt_op != MTOFFL &&
5059 mtc.mt_op != MTRETEN &&
5060 mtc.mt_op != MTERASE &&
5061 mtc.mt_op != MTSEEK &&
5062 mtc.mt_op != MTEOM) {
5063 retval = (-EIO);
5064 goto out;
5065 }
5066 reset_state(STp);
5067 /* remove this when the midlevel properly clears was_reset */
5068 STp->device->was_reset = 0;
5069 }
5070
5071 if (mtc.mt_op != MTCOMPRESSION && mtc.mt_op != MTLOCK &&
5072 mtc.mt_op != MTNOP && mtc.mt_op != MTSETBLK &&
5073 mtc.mt_op != MTSETDENSITY && mtc.mt_op != MTSETDRVBUFFER &&
5074 mtc.mt_op != MTMKPART && mtc.mt_op != MTSETPART &&
5075 mtc.mt_op != MTWEOF && mtc.mt_op != MTWSM ) {
5076
5077 /*
5078 * The user tells us to move to another position on the tape.
5079			 * If we were appending to the tape content, that would leave
5080			 * the tape without a proper end; in that case, write an EOD
5081			 * frame and update the header to reflect its position.
5082 */
5083#if DEBUG
5084		printk(KERN_WARNING "%s:D: auto_weof %s at ffp=%d,efp=%d,fsn=%d,lbn=%d,fn=%d,bn=%d\n", name,
5085 STps->rw >= ST_WRITING ? "write" : STps->rw == ST_READING ? "read" : "idle",
5086 STp->first_frame_position, STp->eod_frame_ppos, STp->frame_seq_number,
5087 STp->logical_blk_num, STps->drv_file, STps->drv_block );
5088#endif
5089 if (STps->rw >= ST_WRITING && STp->first_frame_position >= STp->eod_frame_ppos) {
5090 auto_weof = ((STp->write_type != OS_WRITE_NEW_MARK) &&
5091 !(mtc.mt_op == MTREW || mtc.mt_op == MTOFFL));
5092 i = osst_write_trailer(STp, &SRpnt,
5093 !(mtc.mt_op == MTREW || mtc.mt_op == MTOFFL));
5094#if DEBUG
5095 printk(KERN_WARNING "%s:D: post trailer xeof=%d,ffp=%d,efp=%d,fsn=%d,lbn=%d,fn=%d,bn=%d\n",
5096 name, auto_weof, STp->first_frame_position, STp->eod_frame_ppos,
5097 STp->frame_seq_number, STp->logical_blk_num, STps->drv_file, STps->drv_block );
5098#endif
5099 if (i < 0) {
5100 retval = i;
5101 goto out;
5102 }
5103 }
5104 STps->rw = ST_IDLE;
5105 }
5106
5107 if (mtc.mt_op == MTOFFL && STp->door_locked != ST_UNLOCKED)
5108 do_door_lock(STp, 0); /* Ignore result! */
5109
5110 if (mtc.mt_op == MTSETDRVBUFFER &&
5111 (mtc.mt_count & MT_ST_OPTIONS) != 0) {
5112 retval = osst_set_options(STp, mtc.mt_count);
5113 goto out;
5114 }
5115
5116 if (mtc.mt_op == MTSETPART) {
5117 if (mtc.mt_count >= STp->nbr_partitions)
5118 retval = -EINVAL;
5119 else {
5120 STp->new_partition = mtc.mt_count;
5121 retval = 0;
5122 }
5123 goto out;
5124 }
5125
5126 if (mtc.mt_op == MTMKPART) {
5127 if (!STp->can_partitions) {
5128 retval = (-EINVAL);
5129 goto out;
5130 }
5131 if ((i = osst_int_ioctl(STp, &SRpnt, MTREW, 0)) < 0 /*||
5132 (i = partition_tape(inode, mtc.mt_count)) < 0*/) {
5133 retval = i;
5134 goto out;
5135 }
5136 for (i=0; i < ST_NBR_PARTITIONS; i++) {
5137 STp->ps[i].rw = ST_IDLE;
5138 STp->ps[i].at_sm = 0;
5139 STp->ps[i].last_block_valid = 0;
5140 }
5141 STp->partition = STp->new_partition = 0;
5142 STp->nbr_partitions = 1; /* Bad guess ?-) */
5143 STps->drv_block = STps->drv_file = 0;
5144 retval = 0;
5145 goto out;
5146 }
5147
5148 if (mtc.mt_op == MTSEEK) {
5149 if (STp->raw)
5150 i = osst_set_frame_position(STp, &SRpnt, mtc.mt_count, 0);
5151 else
5152 i = osst_seek_sector(STp, &SRpnt, mtc.mt_count);
5153 if (!STp->can_partitions)
5154 STp->ps[0].rw = ST_IDLE;
5155 retval = i;
5156 goto out;
5157 }
5158
5159 if (mtc.mt_op == MTLOCK || mtc.mt_op == MTUNLOCK) {
5160 retval = do_door_lock(STp, (mtc.mt_op == MTLOCK));
5161 goto out;
5162 }
5163
5164 if (auto_weof)
5165 cross_eof(STp, &SRpnt, 0);
5166
5167 if (mtc.mt_op == MTCOMPRESSION)
5168 retval = -EINVAL; /* OnStream drives don't have compression hardware */
5169 else
5170 /* MTBSF MTBSFM MTBSR MTBSS MTEOM MTERASE MTFSF MTFSFB MTFSR MTFSS
5171 * MTLOAD MTOFFL MTRESET MTRETEN MTREW MTUNLOAD MTWEOF MTWSM */
5172 retval = osst_int_ioctl(STp, &SRpnt, mtc.mt_op, mtc.mt_count);
5173 goto out;
5174 }
5175
5176 if (!STm->defined) {
5177 retval = (-ENXIO);
5178 goto out;
5179 }
5180
5181 if ((i = osst_flush_buffer(STp, &SRpnt, 0)) < 0) {
5182 retval = i;
5183 goto out;
5184 }
5185
5186 if (cmd_type == _IOC_TYPE(MTIOCGET) && cmd_nr == _IOC_NR(MTIOCGET)) {
5187 struct mtget mt_status;
5188
5189 if (_IOC_SIZE(cmd_in) != sizeof(struct mtget)) {
5190 retval = (-EINVAL);
5191 goto out;
5192 }
5193
5194 mt_status.mt_type = MT_ISONSTREAM_SC;
5195 mt_status.mt_erreg = STp->recover_erreg << MT_ST_SOFTERR_SHIFT;
5196 mt_status.mt_dsreg =
5197 ((STp->block_size << MT_ST_BLKSIZE_SHIFT) & MT_ST_BLKSIZE_MASK) |
5198 ((STp->density << MT_ST_DENSITY_SHIFT) & MT_ST_DENSITY_MASK);
5199 mt_status.mt_blkno = STps->drv_block;
5200 mt_status.mt_fileno = STps->drv_file;
5201 if (STp->block_size != 0) {
5202 if (STps->rw == ST_WRITING)
5203 mt_status.mt_blkno += (STp->buffer)->buffer_bytes / STp->block_size;
5204 else if (STps->rw == ST_READING)
5205 mt_status.mt_blkno -= ((STp->buffer)->buffer_bytes +
5206 STp->block_size - 1) / STp->block_size;
5207 }
5208
5209 mt_status.mt_gstat = 0;
5210 if (STp->drv_write_prot)
5211 mt_status.mt_gstat |= GMT_WR_PROT(0xffffffff);
5212 if (mt_status.mt_blkno == 0) {
5213 if (mt_status.mt_fileno == 0)
5214 mt_status.mt_gstat |= GMT_BOT(0xffffffff);
5215 else
5216 mt_status.mt_gstat |= GMT_EOF(0xffffffff);
5217 }
5218 mt_status.mt_resid = STp->partition;
5219 if (STps->eof == ST_EOM_OK || STps->eof == ST_EOM_ERROR)
5220 mt_status.mt_gstat |= GMT_EOT(0xffffffff);
5221 else if (STps->eof >= ST_EOM_OK)
5222 mt_status.mt_gstat |= GMT_EOD(0xffffffff);
5223 if (STp->density == 1)
5224 mt_status.mt_gstat |= GMT_D_800(0xffffffff);
5225 else if (STp->density == 2)
5226 mt_status.mt_gstat |= GMT_D_1600(0xffffffff);
5227 else if (STp->density == 3)
5228 mt_status.mt_gstat |= GMT_D_6250(0xffffffff);
5229 if (STp->ready == ST_READY)
5230 mt_status.mt_gstat |= GMT_ONLINE(0xffffffff);
5231 if (STp->ready == ST_NO_TAPE)
5232 mt_status.mt_gstat |= GMT_DR_OPEN(0xffffffff);
5233 if (STps->at_sm)
5234 mt_status.mt_gstat |= GMT_SM(0xffffffff);
5235 if (STm->do_async_writes || (STm->do_buffer_writes && STp->block_size != 0) ||
5236 STp->drv_buffer != 0)
5237 mt_status.mt_gstat |= GMT_IM_REP_EN(0xffffffff);
5238
5239 i = copy_to_user(p, &mt_status, sizeof(struct mtget));
5240 if (i) {
5241 retval = (-EFAULT);
5242 goto out;
5243 }
5244
5245 STp->recover_erreg = 0; /* Clear after read */
5246 retval = 0;
5247 goto out;
5248 } /* End of MTIOCGET */
5249
5250 if (cmd_type == _IOC_TYPE(MTIOCPOS) && cmd_nr == _IOC_NR(MTIOCPOS)) {
5251 struct mtpos mt_pos;
5252
5253 if (_IOC_SIZE(cmd_in) != sizeof(struct mtpos)) {
5254 retval = (-EINVAL);
5255 goto out;
5256 }
5257 if (STp->raw)
5258 blk = osst_get_frame_position(STp, &SRpnt);
5259 else
5260 blk = osst_get_sector(STp, &SRpnt);
5261 if (blk < 0) {
5262 retval = blk;
5263 goto out;
5264 }
5265 mt_pos.mt_blkno = blk;
5266 i = copy_to_user(p, &mt_pos, sizeof(struct mtpos));
5267 if (i)
5268 retval = -EFAULT;
5269 goto out;
5270 }
5271 if (SRpnt) osst_release_request(SRpnt);
5272
5273 mutex_unlock(&STp->lock);
5274
5275 retval = scsi_ioctl(STp->device, cmd_in, p);
5276 mutex_unlock(&osst_int_mutex);
5277 return retval;
5278
5279out:
5280 if (SRpnt) osst_release_request(SRpnt);
5281
5282 mutex_unlock(&STp->lock);
5283 mutex_unlock(&osst_int_mutex);
5284
5285 return retval;
5286}
5287
5288#ifdef CONFIG_COMPAT
5289static long osst_compat_ioctl(struct file * file, unsigned int cmd_in, unsigned long arg)
5290{
5291 struct osst_tape *STp = file->private_data;
5292 struct scsi_device *sdev = STp->device;
5293 int ret = -ENOIOCTLCMD;
5294 if (sdev->host->hostt->compat_ioctl) {
5295
5296 ret = sdev->host->hostt->compat_ioctl(sdev, cmd_in, (void __user *)arg);
5297
5298 }
5299 return ret;
5300}
5301#endif
5302
5303
5304
5305/* Memory handling routines */
5306
5307/* Try to allocate a new tape buffer skeleton. Caller must not hold os_scsi_tapes_lock */
5308static struct osst_buffer * new_tape_buffer( int from_initialization, int need_dma, int max_sg )
5309{
5310 int i;
5311 gfp_t priority;
5312 struct osst_buffer *tb;
5313
5314 if (from_initialization)
5315 priority = GFP_ATOMIC;
5316 else
5317 priority = GFP_KERNEL;
5318
5319 i = sizeof(struct osst_buffer) + (osst_max_sg_segs - 1) * sizeof(struct scatterlist);
5320 tb = kzalloc(i, priority);
5321 if (!tb) {
5322 printk(KERN_NOTICE "osst :I: Can't allocate new tape buffer.\n");
5323 return NULL;
5324 }
5325
5326 tb->sg_segs = tb->orig_sg_segs = 0;
5327 tb->use_sg = max_sg;
5328 tb->in_use = 1;
5329 tb->dma = need_dma;
5330 tb->buffer_size = 0;
5331#if DEBUG
5332 if (debugging)
5333 printk(OSST_DEB_MSG
5334 "osst :D: Allocated tape buffer skeleton (%d bytes, %d segments, dma: %d).\n",
5335 i, max_sg, need_dma);
5336#endif
5337 return tb;
5338}
5339
5340/* Try to allocate a temporary (while a user has the device open) enlarged tape buffer */
5341static int enlarge_buffer(struct osst_buffer *STbuffer, int need_dma)
5342{
5343 int segs, nbr, max_segs, b_size, order, got;
5344 gfp_t priority;
5345
5346 if (STbuffer->buffer_size >= OS_FRAME_SIZE)
5347 return 1;
5348
5349 if (STbuffer->sg_segs) {
5350 printk(KERN_WARNING "osst :A: Buffer not previously normalized.\n");
5351 normalize_buffer(STbuffer);
5352 }
5353 /* See how many segments we can use -- need at least two */
5354 nbr = max_segs = STbuffer->use_sg;
5355 if (nbr <= 2)
5356 return 0;
5357
5358 priority = GFP_KERNEL /* | __GFP_NOWARN */;
5359 if (need_dma)
5360 priority |= GFP_DMA;
5361
5362 /* Try to allocate the first segment up to OS_DATA_SIZE and the others
5363 big enough to reach the goal (code assumes no segments in place) */
5364 for (b_size = OS_DATA_SIZE, order = OSST_FIRST_ORDER; b_size >= PAGE_SIZE; order--, b_size /= 2) {
5365 struct page *page = alloc_pages(priority, order);
5366
5367 STbuffer->sg[0].offset = 0;
5368 if (page != NULL) {
5369 sg_set_page(&STbuffer->sg[0], page, b_size, 0);
5370 STbuffer->b_data = page_address(page);
5371 break;
5372 }
5373 }
5374 if (sg_page(&STbuffer->sg[0]) == NULL) {
5375 printk(KERN_NOTICE "osst :I: Can't allocate tape buffer main segment.\n");
5376 return 0;
5377 }
5378 /* Got initial segment of 'b_size, order', continue with same size if possible, except for AUX */
5379 for (segs=STbuffer->sg_segs=1, got=b_size;
5380 segs < max_segs && got < OS_FRAME_SIZE; ) {
5381 struct page *page = alloc_pages(priority, (OS_FRAME_SIZE - got <= PAGE_SIZE) ? 0 : order);
5382 STbuffer->sg[segs].offset = 0;
5383 if (page == NULL) {
5384 printk(KERN_WARNING "osst :W: Failed to enlarge buffer to %d bytes.\n",
5385 OS_FRAME_SIZE);
5386#if DEBUG
5387 STbuffer->buffer_size = got;
5388#endif
5389 normalize_buffer(STbuffer);
5390 return 0;
5391 }
5392 sg_set_page(&STbuffer->sg[segs], page, (OS_FRAME_SIZE - got <= PAGE_SIZE / 2) ? (OS_FRAME_SIZE - got) : b_size, 0);
5393 got += STbuffer->sg[segs].length;
5394 STbuffer->buffer_size = got;
5395 STbuffer->sg_segs = ++segs;
5396 }
5397#if DEBUG
5398 if (debugging) {
5399 printk(OSST_DEB_MSG
5400 "osst :D: Expanded tape buffer (%d bytes, %d->%d segments, dma: %d, at: %p).\n",
5401 got, STbuffer->orig_sg_segs, STbuffer->sg_segs, need_dma, STbuffer->b_data);
5402 printk(OSST_DEB_MSG
5403 "osst :D: segment sizes: first %d at %p, last %d bytes at %p.\n",
5404 STbuffer->sg[0].length, page_address(STbuffer->sg[0].page),
5405 STbuffer->sg[segs-1].length, page_address(STbuffer->sg[segs-1].page));
5406 }
5407#endif
5408
5409 return 1;
5410}
5411
5412
5413/* Release the segments */
5414static void normalize_buffer(struct osst_buffer *STbuffer)
5415{
5416 int i, order, b_size;
5417
5418 for (i=0; i < STbuffer->sg_segs; i++) {
5419
5420 for (b_size = PAGE_SIZE, order = 0;
5421 b_size < STbuffer->sg[i].length;
5422 b_size *= 2, order++);
5423
5424 __free_pages(sg_page(&STbuffer->sg[i]), order);
5425 STbuffer->buffer_size -= STbuffer->sg[i].length;
5426 }
5427#if DEBUG
5428 if (debugging && STbuffer->orig_sg_segs < STbuffer->sg_segs)
5429 printk(OSST_DEB_MSG "osst :D: Buffer at %p normalized to %d bytes (segs %d).\n",
5430 STbuffer->b_data, STbuffer->buffer_size, STbuffer->sg_segs);
5431#endif
5432 STbuffer->sg_segs = STbuffer->orig_sg_segs = 0;
5433}
5434
5435
5436/* Move data from the user buffer to the tape buffer. Returns zero (success) or
5437 negative error code. */
5438static int append_to_buffer(const char __user *ubp, struct osst_buffer *st_bp, int do_count)
5439{
5440 int i, cnt, res, offset;
5441
5442 for (i=0, offset=st_bp->buffer_bytes;
5443 i < st_bp->sg_segs && offset >= st_bp->sg[i].length; i++)
5444 offset -= st_bp->sg[i].length;
5445 if (i == st_bp->sg_segs) { /* Should never happen */
5446 printk(KERN_WARNING "osst :A: Append_to_buffer offset overflow.\n");
5447 return (-EIO);
5448 }
5449 for ( ; i < st_bp->sg_segs && do_count > 0; i++) {
5450 cnt = st_bp->sg[i].length - offset < do_count ?
5451 st_bp->sg[i].length - offset : do_count;
5452 res = copy_from_user(page_address(sg_page(&st_bp->sg[i])) + offset, ubp, cnt);
5453 if (res)
5454 return (-EFAULT);
5455 do_count -= cnt;
5456 st_bp->buffer_bytes += cnt;
5457 ubp += cnt;
5458 offset = 0;
5459 }
5460 if (do_count) { /* Should never happen */
5461 printk(KERN_WARNING "osst :A: Append_to_buffer overflow (left %d).\n",
5462 do_count);
5463 return (-EIO);
5464 }
5465 return 0;
5466}
5467
5468
5469/* Move data from the tape buffer to the user buffer. Returns zero (success) or
5470 negative error code. */
5471static int from_buffer(struct osst_buffer *st_bp, char __user *ubp, int do_count)
5472{
5473 int i, cnt, res, offset;
5474
5475 for (i=0, offset=st_bp->read_pointer;
5476 i < st_bp->sg_segs && offset >= st_bp->sg[i].length; i++)
5477 offset -= st_bp->sg[i].length;
5478 if (i == st_bp->sg_segs) { /* Should never happen */
5479 printk(KERN_WARNING "osst :A: From_buffer offset overflow.\n");
5480 return (-EIO);
5481 }
5482 for ( ; i < st_bp->sg_segs && do_count > 0; i++) {
5483 cnt = st_bp->sg[i].length - offset < do_count ?
5484 st_bp->sg[i].length - offset : do_count;
5485 res = copy_to_user(ubp, page_address(sg_page(&st_bp->sg[i])) + offset, cnt);
5486 if (res)
5487 return (-EFAULT);
5488 do_count -= cnt;
5489 st_bp->buffer_bytes -= cnt;
5490 st_bp->read_pointer += cnt;
5491 ubp += cnt;
5492 offset = 0;
5493 }
5494 if (do_count) { /* Should never happen */
5495 printk(KERN_WARNING "osst :A: From_buffer overflow (left %d).\n", do_count);
5496 return (-EIO);
5497 }
5498 return 0;
5499}
5500
5501/* Sets the tail of the buffer after fill point to zero.
5502 Returns zero (success) or negative error code. */
5503static int osst_zero_buffer_tail(struct osst_buffer *st_bp)
5504{
5505 int i, offset, do_count, cnt;
5506
5507 for (i = 0, offset = st_bp->buffer_bytes;
5508 i < st_bp->sg_segs && offset >= st_bp->sg[i].length; i++)
5509 offset -= st_bp->sg[i].length;
5510 if (i == st_bp->sg_segs) { /* Should never happen */
5511 printk(KERN_WARNING "osst :A: Zero_buffer offset overflow.\n");
5512 return (-EIO);
5513 }
5514 for (do_count = OS_DATA_SIZE - st_bp->buffer_bytes;
5515 i < st_bp->sg_segs && do_count > 0; i++) {
5516 cnt = st_bp->sg[i].length - offset < do_count ?
5517 st_bp->sg[i].length - offset : do_count ;
5518 memset(page_address(sg_page(&st_bp->sg[i])) + offset, 0, cnt);
5519 do_count -= cnt;
5520 offset = 0;
5521 }
5522 if (do_count) { /* Should never happen */
5523 printk(KERN_WARNING "osst :A: Zero_buffer overflow (left %d).\n", do_count);
5524 return (-EIO);
5525 }
5526 return 0;
5527}
5528
5529/* Copy an osst 32K chunk of memory into the buffer.
5530 Returns zero (success) or negative error code. */
5531static int osst_copy_to_buffer(struct osst_buffer *st_bp, unsigned char *ptr)
5532{
5533 int i, cnt, do_count = OS_DATA_SIZE;
5534
5535 for (i = 0; i < st_bp->sg_segs && do_count > 0; i++) {
5536 cnt = st_bp->sg[i].length < do_count ?
5537 st_bp->sg[i].length : do_count ;
5538 memcpy(page_address(sg_page(&st_bp->sg[i])), ptr, cnt);
5539 do_count -= cnt;
5540 ptr += cnt;
5541 }
5542 if (do_count || i != st_bp->sg_segs-1) { /* Should never happen */
5543 printk(KERN_WARNING "osst :A: Copy_to_buffer overflow (left %d at sg %d).\n",
5544 do_count, i);
5545 return (-EIO);
5546 }
5547 return 0;
5548}
5549
5550/* Copy an osst 32K chunk of memory from the buffer.
5551 Returns zero (success) or negative error code. */
5552static int osst_copy_from_buffer(struct osst_buffer *st_bp, unsigned char *ptr)
5553{
5554 int i, cnt, do_count = OS_DATA_SIZE;
5555
5556 for (i = 0; i < st_bp->sg_segs && do_count > 0; i++) {
5557 cnt = st_bp->sg[i].length < do_count ?
5558 st_bp->sg[i].length : do_count ;
5559 memcpy(ptr, page_address(sg_page(&st_bp->sg[i])), cnt);
5560 do_count -= cnt;
5561 ptr += cnt;
5562 }
5563 if (do_count || i != st_bp->sg_segs-1) { /* Should never happen */
5564 printk(KERN_WARNING "osst :A: Copy_from_buffer overflow (left %d at sg %d).\n",
5565 do_count, i);
5566 return (-EIO);
5567 }
5568 return 0;
5569}
5570
5571
5572/* Module housekeeping */
5573
5574static void validate_options (void)
5575{
5576 if (max_dev > 0)
5577 osst_max_dev = max_dev;
5578 if (write_threshold_kbs > 0)
5579 osst_write_threshold = write_threshold_kbs * ST_KILOBYTE;
5580 if (osst_write_threshold > osst_buffer_size)
5581 osst_write_threshold = osst_buffer_size;
5582 if (max_sg_segs >= OSST_FIRST_SG)
5583 osst_max_sg_segs = max_sg_segs;
5584#if DEBUG
5585 printk(OSST_DEB_MSG "osst :D: max tapes %d, write threshold %d, max s/g segs %d.\n",
5586 osst_max_dev, osst_write_threshold, osst_max_sg_segs);
5587#endif
5588}
5589
5590#ifndef MODULE
5591/* Set the boot options. Syntax: osst=xxx,yyy,...
5592 where xxx is write threshold in 1024 byte blocks,
5593 and yyy is number of s/g segments to use. */
5594static int __init osst_setup (char *str)
5595{
5596 int i, ints[5];
5597 char *stp;
5598
5599 stp = get_options(str, ARRAY_SIZE(ints), ints);
5600
5601 if (ints[0] > 0) {
5602 for (i = 0; i < ints[0] && i < ARRAY_SIZE(parms); i++)
5603 *parms[i].val = ints[i + 1];
5604 } else {
5605 while (stp != NULL) {
5606 for (i = 0; i < ARRAY_SIZE(parms); i++) {
5607 int len = strlen(parms[i].name);
5608 if (!strncmp(stp, parms[i].name, len) &&
5609 (*(stp + len) == ':' || *(stp + len) == '=')) {
5610 *parms[i].val =
5611 simple_strtoul(stp + len + 1, NULL, 0);
5612 break;
5613 }
5614 }
5615 if (i >= ARRAY_SIZE(parms))
5616 printk(KERN_INFO "osst :I: Illegal parameter in '%s'\n",
5617 stp);
5618 stp = strchr(stp, ',');
5619 if (stp)
5620 stp++;
5621 }
5622 }
5623
5624 return 1;
5625}
5626
5627__setup("osst=", osst_setup);
5628
5629#endif
5630
5631static const struct file_operations osst_fops = {
5632 .owner = THIS_MODULE,
5633 .read = osst_read,
5634 .write = osst_write,
5635 .unlocked_ioctl = osst_ioctl,
5636#ifdef CONFIG_COMPAT
5637 .compat_ioctl = osst_compat_ioctl,
5638#endif
5639 .open = os_scsi_tape_open,
5640 .flush = os_scsi_tape_flush,
5641 .release = os_scsi_tape_close,
5642 .llseek = noop_llseek,
5643};
5644
5645static int osst_supports(struct scsi_device * SDp)
5646{
5647 struct osst_support_data {
5648 char *vendor;
5649 char *model;
5650 char *rev;
5651 char *driver_hint; /* Name of the correct driver, NULL if unknown */
5652 };
5653
5654static struct osst_support_data support_list[] = {
5655 /* {"XXX", "Yy-", "", NULL}, example */
5656 SIGS_FROM_OSST,
5657 {NULL, }};
5658
5659 struct osst_support_data *rp;
5660
5661 /* We are willing to drive OnStream SC-x0 as well as the
5662 * IDE, ParPort, FireWire, USB variants, if accessible by
5663 * emulation layer (ide-scsi, usb-storage, ...) */
5664
5665 for (rp=&(support_list[0]); rp->vendor != NULL; rp++)
5666 if (!strncmp(rp->vendor, SDp->vendor, strlen(rp->vendor)) &&
5667 !strncmp(rp->model, SDp->model, strlen(rp->model)) &&
5668 !strncmp(rp->rev, SDp->rev, strlen(rp->rev)))
5669 return 1;
5670 return 0;
5671}
5672
5673/*
5674 * sysfs support for osst driver parameter information
5675 */
5676
5677static ssize_t version_show(struct device_driver *ddd, char *buf)
5678{
5679 return snprintf(buf, PAGE_SIZE, "%s\n", osst_version);
5680}
5681
5682static DRIVER_ATTR_RO(version);
5683
5684static int osst_create_sysfs_files(struct device_driver *sysfs)
5685{
5686 return driver_create_file(sysfs, &driver_attr_version);
5687}
5688
5689static void osst_remove_sysfs_files(struct device_driver *sysfs)
5690{
5691 driver_remove_file(sysfs, &driver_attr_version);
5692}
5693
5694/*
5695 * sysfs support for accessing ADR header information
5696 */
5697
5698static ssize_t osst_adr_rev_show(struct device *dev,
5699 struct device_attribute *attr, char *buf)
5700{
5701 struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev);
5702 ssize_t l = 0;
5703
5704 if (STp && STp->header_ok && STp->linux_media)
5705 l = snprintf(buf, PAGE_SIZE, "%d.%d\n", STp->header_cache->major_rev, STp->header_cache->minor_rev);
5706 return l;
5707}
5708
5709DEVICE_ATTR(ADR_rev, S_IRUGO, osst_adr_rev_show, NULL);
5710
5711static ssize_t osst_linux_media_version_show(struct device *dev,
5712 struct device_attribute *attr,
5713 char *buf)
5714{
5715 struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev);
5716 ssize_t l = 0;
5717
5718 if (STp && STp->header_ok && STp->linux_media)
5719 l = snprintf(buf, PAGE_SIZE, "LIN%d\n", STp->linux_media_version);
5720 return l;
5721}
5722
5723DEVICE_ATTR(media_version, S_IRUGO, osst_linux_media_version_show, NULL);
5724
5725static ssize_t osst_capacity_show(struct device *dev,
5726 struct device_attribute *attr, char *buf)
5727{
5728 struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev);
5729 ssize_t l = 0;
5730
5731 if (STp && STp->header_ok && STp->linux_media)
5732 l = snprintf(buf, PAGE_SIZE, "%d\n", STp->capacity);
5733 return l;
5734}
5735
5736DEVICE_ATTR(capacity, S_IRUGO, osst_capacity_show, NULL);
5737
5738static ssize_t osst_first_data_ppos_show(struct device *dev,
5739 struct device_attribute *attr,
5740 char *buf)
5741{
5742 struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev);
5743 ssize_t l = 0;
5744
5745 if (STp && STp->header_ok && STp->linux_media)
5746 l = snprintf(buf, PAGE_SIZE, "%d\n", STp->first_data_ppos);
5747 return l;
5748}
5749
5750DEVICE_ATTR(BOT_frame, S_IRUGO, osst_first_data_ppos_show, NULL);
5751
5752static ssize_t osst_eod_frame_ppos_show(struct device *dev,
5753 struct device_attribute *attr,
5754 char *buf)
5755{
5756 struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev);
5757 ssize_t l = 0;
5758
5759 if (STp && STp->header_ok && STp->linux_media)
5760 l = snprintf(buf, PAGE_SIZE, "%d\n", STp->eod_frame_ppos);
5761 return l;
5762}
5763
5764DEVICE_ATTR(EOD_frame, S_IRUGO, osst_eod_frame_ppos_show, NULL);
5765
5766static ssize_t osst_filemark_cnt_show(struct device *dev,
5767 struct device_attribute *attr, char *buf)
5768{
5769 struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev);
5770 ssize_t l = 0;
5771
5772 if (STp && STp->header_ok && STp->linux_media)
5773 l = snprintf(buf, PAGE_SIZE, "%d\n", STp->filemark_cnt);
5774 return l;
5775}
5776
5777DEVICE_ATTR(file_count, S_IRUGO, osst_filemark_cnt_show, NULL);
5778
5779static struct class *osst_sysfs_class;
5780
5781static int osst_sysfs_init(void)
5782{
5783 osst_sysfs_class = class_create(THIS_MODULE, "onstream_tape");
5784 if (IS_ERR(osst_sysfs_class)) {
5785 printk(KERN_ERR "osst :W: Unable to register sysfs class\n");
5786 return PTR_ERR(osst_sysfs_class);
5787 }
5788
5789 return 0;
5790}
5791
5792static void osst_sysfs_destroy(dev_t dev)
5793{
5794 device_destroy(osst_sysfs_class, dev);
5795}
5796
5797static int osst_sysfs_add(dev_t dev, struct device *device, struct osst_tape * STp, char * name)
5798{
5799 struct device *osst_member;
5800 int err;
5801
5802 osst_member = device_create(osst_sysfs_class, device, dev, STp,
5803 "%s", name);
5804 if (IS_ERR(osst_member)) {
5805 printk(KERN_WARNING "osst :W: Unable to add sysfs class member %s\n", name);
5806 return PTR_ERR(osst_member);
5807 }
5808
5809 err = device_create_file(osst_member, &dev_attr_ADR_rev);
5810 if (err)
5811 goto err_out;
5812 err = device_create_file(osst_member, &dev_attr_media_version);
5813 if (err)
5814 goto err_out;
5815 err = device_create_file(osst_member, &dev_attr_capacity);
5816 if (err)
5817 goto err_out;
5818 err = device_create_file(osst_member, &dev_attr_BOT_frame);
5819 if (err)
5820 goto err_out;
5821 err = device_create_file(osst_member, &dev_attr_EOD_frame);
5822 if (err)
5823 goto err_out;
5824 err = device_create_file(osst_member, &dev_attr_file_count);
5825 if (err)
5826 goto err_out;
5827
5828 return 0;
5829
5830err_out:
5831 osst_sysfs_destroy(dev);
5832 return err;
5833}
5834
5835static void osst_sysfs_cleanup(void)
5836{
5837 class_destroy(osst_sysfs_class);
5838}
5839
5840/*
5841 * osst startup / cleanup code
5842 */
5843
5844static int osst_probe(struct device *dev)
5845{
5846 struct scsi_device * SDp = to_scsi_device(dev);
5847 struct osst_tape * tpnt;
5848 struct st_modedef * STm;
5849 struct st_partstat * STps;
5850 struct osst_buffer * buffer;
5851 struct gendisk * drive;
5852 int i, dev_num, err = -ENODEV;
5853
5854 if (SDp->type != TYPE_TAPE || !osst_supports(SDp))
5855 return -ENODEV;
5856
5857 drive = alloc_disk(1);
5858 if (!drive) {
5859 printk(KERN_ERR "osst :E: Out of memory. Device not attached.\n");
5860 return -ENODEV;
5861 }
5862
5863 /* if this is the first attach, build the infrastructure */
5864 write_lock(&os_scsi_tapes_lock);
5865 if (os_scsi_tapes == NULL) {
5866 os_scsi_tapes = kmalloc_array(osst_max_dev,
5867 sizeof(struct osst_tape *),
5868 GFP_ATOMIC);
5869 if (os_scsi_tapes == NULL) {
5870 write_unlock(&os_scsi_tapes_lock);
5871 printk(KERN_ERR "osst :E: Unable to allocate array for OnStream SCSI tapes.\n");
5872 goto out_put_disk;
5873 }
5874 for (i=0; i < osst_max_dev; ++i) os_scsi_tapes[i] = NULL;
5875 }
5876
5877 if (osst_nr_dev >= osst_max_dev) {
5878 write_unlock(&os_scsi_tapes_lock);
5879 printk(KERN_ERR "osst :E: Too many tape devices (max. %d).\n", osst_max_dev);
5880 goto out_put_disk;
5881 }
5882
5883 /* find a free minor number */
5884 for (i = 0; i < osst_max_dev && os_scsi_tapes[i]; i++)
5885 ;
5886 if(i >= osst_max_dev) panic ("Scsi_devices corrupt (osst)");
5887 dev_num = i;
5888
5889 /* allocate a struct osst_tape for this device */
5890 tpnt = kzalloc(sizeof(struct osst_tape), GFP_ATOMIC);
5891 if (!tpnt) {
5892 write_unlock(&os_scsi_tapes_lock);
5893 printk(KERN_ERR "osst :E: Can't allocate device descriptor, device not attached.\n");
5894 goto out_put_disk;
5895 }
5896
5897 /* allocate a buffer for this device */
5898 i = SDp->host->sg_tablesize;
5899 if (osst_max_sg_segs < i)
5900 i = osst_max_sg_segs;
5901 buffer = new_tape_buffer(1, SDp->host->unchecked_isa_dma, i);
5902 if (buffer == NULL) {
5903 write_unlock(&os_scsi_tapes_lock);
5904 printk(KERN_ERR "osst :E: Unable to allocate a tape buffer, device not attached.\n");
5905 kfree(tpnt);
5906 goto out_put_disk;
5907 }
5908 os_scsi_tapes[dev_num] = tpnt;
5909 tpnt->buffer = buffer;
5910 tpnt->device = SDp;
5911 drive->private_data = &tpnt->driver;
5912 sprintf(drive->disk_name, "osst%d", dev_num);
5913 tpnt->driver = &osst_template;
5914 tpnt->drive = drive;
5915 tpnt->in_use = 0;
5916 tpnt->capacity = 0xfffff;
5917 tpnt->dirty = 0;
5918 tpnt->drv_buffer = 1; /* Try buffering if no mode sense */
5919 tpnt->restr_dma = (SDp->host)->unchecked_isa_dma;
5920 tpnt->density = 0;
5921 tpnt->do_auto_lock = OSST_AUTO_LOCK;
5922 tpnt->can_bsr = OSST_IN_FILE_POS;
5923 tpnt->can_partitions = 0;
5924 tpnt->two_fm = OSST_TWO_FM;
5925 tpnt->fast_mteom = OSST_FAST_MTEOM;
5926 tpnt->scsi2_logical = OSST_SCSI2LOGICAL; /* FIXME */
5927 tpnt->write_threshold = osst_write_threshold;
5928 tpnt->default_drvbuffer = 0xff; /* No forced buffering */
5929 tpnt->partition = 0;
5930 tpnt->new_partition = 0;
5931 tpnt->nbr_partitions = 0;
5932 tpnt->min_block = 512;
5933 tpnt->max_block = OS_DATA_SIZE;
5934 tpnt->timeout = OSST_TIMEOUT;
5935 tpnt->long_timeout = OSST_LONG_TIMEOUT;
5936
5937 /* Recognize OnStream tapes */
5938 /* We don't need to test for OnStream, as this has been done in detect () */
5939 tpnt->os_fw_rev = osst_parse_firmware_rev (SDp->rev);
5940 tpnt->omit_blklims = 1;
5941
5942 tpnt->poll = (strncmp(SDp->model, "DI-", 3) == 0) ||
5943 (strncmp(SDp->model, "FW-", 3) == 0) || OSST_FW_NEED_POLL(tpnt->os_fw_rev,SDp);
5944 tpnt->frame_in_buffer = 0;
5945 tpnt->header_ok = 0;
5946 tpnt->linux_media = 0;
5947 tpnt->header_cache = NULL;
5948
5949 for (i=0; i < ST_NBR_MODES; i++) {
5950 STm = &(tpnt->modes[i]);
5951 STm->defined = 0;
5952 STm->sysv = OSST_SYSV;
5953 STm->defaults_for_writes = 0;
5954 STm->do_async_writes = OSST_ASYNC_WRITES;
5955 STm->do_buffer_writes = OSST_BUFFER_WRITES;
5956 STm->do_read_ahead = OSST_READ_AHEAD;
5957 STm->default_compression = ST_DONT_TOUCH;
5958 STm->default_blksize = 512;
5959 STm->default_density = (-1); /* No forced density */
5960 }
5961
5962 for (i=0; i < ST_NBR_PARTITIONS; i++) {
5963 STps = &(tpnt->ps[i]);
5964 STps->rw = ST_IDLE;
5965 STps->eof = ST_NOEOF;
5966 STps->at_sm = 0;
5967 STps->last_block_valid = 0;
5968 STps->drv_block = (-1);
5969 STps->drv_file = (-1);
5970 }
5971
5972 tpnt->current_mode = 0;
5973 tpnt->modes[0].defined = 1;
5974 tpnt->modes[2].defined = 1;
5975 tpnt->density_changed = tpnt->compression_changed = tpnt->blksize_changed = 0;
5976
5977 mutex_init(&tpnt->lock);
5978 osst_nr_dev++;
5979 write_unlock(&os_scsi_tapes_lock);
5980
5981 {
5982 char name[8];
5983
5984 /* Rewind entry */
5985 err = osst_sysfs_add(MKDEV(OSST_MAJOR, dev_num), dev, tpnt, tape_name(tpnt));
5986 if (err)
5987 goto out_free_buffer;
5988
5989 /* No-rewind entry */
5990 snprintf(name, 8, "%s%s", "n", tape_name(tpnt));
5991 err = osst_sysfs_add(MKDEV(OSST_MAJOR, dev_num + 128), dev, tpnt, name);
5992 if (err)
5993 goto out_free_sysfs1;
5994 }
5995
5996 sdev_printk(KERN_INFO, SDp,
5997 "osst :I: Attached OnStream %.5s tape as %s\n",
5998 SDp->model, tape_name(tpnt));
5999
6000 return 0;
6001
6002out_free_sysfs1:
6003 osst_sysfs_destroy(MKDEV(OSST_MAJOR, dev_num));
6004out_free_buffer:
6005 kfree(buffer);
6006out_put_disk:
6007 put_disk(drive);
6008 return err;
6009};
6010
6011static int osst_remove(struct device *dev)
6012{
6013 struct scsi_device * SDp = to_scsi_device(dev);
6014 struct osst_tape * tpnt;
6015 int i;
6016
6017 if ((SDp->type != TYPE_TAPE) || (osst_nr_dev <= 0))
6018 return 0;
6019
6020 write_lock(&os_scsi_tapes_lock);
6021 for(i=0; i < osst_max_dev; i++) {
6022 if((tpnt = os_scsi_tapes[i]) && (tpnt->device == SDp)) {
6023 osst_sysfs_destroy(MKDEV(OSST_MAJOR, i));
6024 osst_sysfs_destroy(MKDEV(OSST_MAJOR, i+128));
6025 tpnt->device = NULL;
6026 put_disk(tpnt->drive);
6027 os_scsi_tapes[i] = NULL;
6028 osst_nr_dev--;
6029 write_unlock(&os_scsi_tapes_lock);
6030 vfree(tpnt->header_cache);
6031 if (tpnt->buffer) {
6032 normalize_buffer(tpnt->buffer);
6033 kfree(tpnt->buffer);
6034 }
6035 kfree(tpnt);
6036 return 0;
6037 }
6038 }
6039 write_unlock(&os_scsi_tapes_lock);
6040 return 0;
6041}
6042
6043static int __init init_osst(void)
6044{
6045 int err;
6046
6047 printk(KERN_INFO "osst :I: Tape driver with OnStream support version %s\nosst :I: %s\n", osst_version, cvsid);
6048
6049 validate_options();
6050
6051 err = osst_sysfs_init();
6052 if (err)
6053 return err;
6054
6055 err = register_chrdev(OSST_MAJOR, "osst", &osst_fops);
6056 if (err < 0) {
6057 printk(KERN_ERR "osst :E: Unable to register major %d for OnStream tapes\n", OSST_MAJOR);
6058 goto err_out;
6059 }
6060
6061 err = scsi_register_driver(&osst_template.gendrv);
6062 if (err)
6063 goto err_out_chrdev;
6064
6065 err = osst_create_sysfs_files(&osst_template.gendrv);
6066 if (err)
6067 goto err_out_scsidrv;
6068
6069 return 0;
6070
6071err_out_scsidrv:
6072 scsi_unregister_driver(&osst_template.gendrv);
6073err_out_chrdev:
6074 unregister_chrdev(OSST_MAJOR, "osst");
6075err_out:
6076 osst_sysfs_cleanup();
6077 return err;
6078}
6079
6080static void __exit exit_osst (void)
6081{
6082 int i;
6083 struct osst_tape * STp;
6084
6085 osst_remove_sysfs_files(&osst_template.gendrv);
6086 scsi_unregister_driver(&osst_template.gendrv);
6087 unregister_chrdev(OSST_MAJOR, "osst");
6088 osst_sysfs_cleanup();
6089
6090 if (os_scsi_tapes) {
6091 for (i=0; i < osst_max_dev; ++i) {
6092 if (!(STp = os_scsi_tapes[i])) continue;
6093 /* This is defensive, supposed to happen during detach */
6094 vfree(STp->header_cache);
6095 if (STp->buffer) {
6096 normalize_buffer(STp->buffer);
6097 kfree(STp->buffer);
6098 }
6099 put_disk(STp->drive);
6100 kfree(STp);
6101 }
6102 kfree(os_scsi_tapes);
6103 }
6104 printk(KERN_INFO "osst :I: Unloaded.\n");
6105}
6106
6107module_init(init_osst);
6108module_exit(exit_osst);
diff --git a/drivers/scsi/osst.h b/drivers/scsi/osst.h
deleted file mode 100644
index b90ae280853d..000000000000
--- a/drivers/scsi/osst.h
+++ /dev/null
@@ -1,651 +0,0 @@
1/* SPDX-License-Identifier: GPL-2.0 */
2/*
3 * $Header: /cvsroot/osst/Driver/osst.h,v 1.16 2005/01/01 21:13:35 wriede Exp $
4 */
5
6#include <asm/byteorder.h>
7#include <linux/completion.h>
8#include <linux/mutex.h>
9
10/* FIXME - rename and use the following two types or delete them!
11 * and the types really should go to st.h anyway...
12 * INQUIRY packet command - Data Format (From Table 6-8 of QIC-157C)
13 */
14typedef struct {
15 unsigned device_type :5; /* Peripheral Device Type */
16 unsigned reserved0_765 :3; /* Peripheral Qualifier - Reserved */
17 unsigned reserved1_6t0 :7; /* Reserved */
18 unsigned rmb :1; /* Removable Medium Bit */
19 unsigned ansi_version :3; /* ANSI Version */
20 unsigned ecma_version :3; /* ECMA Version */
21 unsigned iso_version :2; /* ISO Version */
22 unsigned response_format :4; /* Response Data Format */
23 unsigned reserved3_45 :2; /* Reserved */
24 unsigned reserved3_6 :1; /* TrmIOP - Reserved */
25 unsigned reserved3_7 :1; /* AENC - Reserved */
26 u8 additional_length; /* Additional Length (total_length-4) */
27 u8 rsv5, rsv6, rsv7; /* Reserved */
28 u8 vendor_id[8]; /* Vendor Identification */
29 u8 product_id[16]; /* Product Identification */
30 u8 revision_level[4]; /* Revision Level */
31 u8 vendor_specific[20]; /* Vendor Specific - Optional */
32 u8 reserved56t95[40]; /* Reserved - Optional */
33 /* Additional information may be returned */
34} idetape_inquiry_result_t;
35
36/*
37 * READ POSITION packet command - Data Format (From Table 6-57)
38 */
39typedef struct {
40 unsigned reserved0_10 :2; /* Reserved */
41 unsigned bpu :1; /* Block Position Unknown */
42 unsigned reserved0_543 :3; /* Reserved */
43 unsigned eop :1; /* End Of Partition */
44 unsigned bop :1; /* Beginning Of Partition */
45 u8 partition; /* Partition Number */
46 u8 reserved2, reserved3; /* Reserved */
47 u32 first_block; /* First Block Location */
48 u32 last_block; /* Last Block Location (Optional) */
49 u8 reserved12; /* Reserved */
50 u8 blocks_in_buffer[3]; /* Blocks In Buffer - (Optional) */
51 u32 bytes_in_buffer; /* Bytes In Buffer (Optional) */
52} idetape_read_position_result_t;
53
54/*
55 * Follows structures which are related to the SELECT SENSE / MODE SENSE
56 * packet commands.
57 */
58#define COMPRESSION_PAGE 0x0f
59#define COMPRESSION_PAGE_LENGTH 16
60
61#define CAPABILITIES_PAGE 0x2a
62#define CAPABILITIES_PAGE_LENGTH 20
63
64#define TAPE_PARAMTR_PAGE 0x2b
65#define TAPE_PARAMTR_PAGE_LENGTH 16
66
67#define NUMBER_RETRIES_PAGE 0x2f
68#define NUMBER_RETRIES_PAGE_LENGTH 4
69
70#define BLOCK_SIZE_PAGE 0x30
71#define BLOCK_SIZE_PAGE_LENGTH 4
72
73#define BUFFER_FILLING_PAGE 0x33
74#define BUFFER_FILLING_PAGE_LENGTH 4
75
76#define VENDOR_IDENT_PAGE 0x36
77#define VENDOR_IDENT_PAGE_LENGTH 8
78
79#define LOCATE_STATUS_PAGE 0x37
80#define LOCATE_STATUS_PAGE_LENGTH 0
81
82#define MODE_HEADER_LENGTH 4
83
84
85/*
86 * REQUEST SENSE packet command result - Data Format.
87 */
88typedef struct {
89 unsigned error_code :7; /* Current or deferred errors */
90 unsigned valid :1; /* The information field conforms to QIC-157C */
91 u8 reserved1 :8; /* Segment Number - Reserved */
92 unsigned sense_key :4; /* Sense Key */
93 unsigned reserved2_4 :1; /* Reserved */
94 unsigned ili :1; /* Incorrect Length Indicator */
95 unsigned eom :1; /* End Of Medium */
96 unsigned filemark :1; /* Filemark */
97 u32 information __attribute__ ((packed));
98 u8 asl; /* Additional sense length (n-7) */
99 u32 command_specific; /* Additional command specific information */
100 u8 asc; /* Additional Sense Code */
101 u8 ascq; /* Additional Sense Code Qualifier */
102 u8 replaceable_unit_code; /* Field Replaceable Unit Code */
103 unsigned sk_specific1 :7; /* Sense Key Specific */
104 unsigned sksv :1; /* Sense Key Specific information is valid */
105 u8 sk_specific2; /* Sense Key Specific */
106 u8 sk_specific3; /* Sense Key Specific */
107 u8 pad[2]; /* Padding to 20 bytes */
108} idetape_request_sense_result_t;
109
110/*
111 * Mode Parameter Header for the MODE SENSE packet command
112 */
113typedef struct {
114 u8 mode_data_length; /* Length of the following data transfer */
115 u8 medium_type; /* Medium Type */
116 u8 dsp; /* Device Specific Parameter */
117 u8 bdl; /* Block Descriptor Length */
118} osst_mode_parameter_header_t;
119
120/*
121 * Mode Parameter Block Descriptor for the MODE SENSE packet command
122 *
123 * Support for block descriptors is optional.
124 */
125typedef struct {
126 u8 density_code; /* Medium density code */
127 u8 blocks[3]; /* Number of blocks */
128 u8 reserved4; /* Reserved */
129 u8 length[3]; /* Block Length */
130} osst_parameter_block_descriptor_t;
131
132/*
133 * The Data Compression Page, as returned by the MODE SENSE packet command.
134 */
135typedef struct {
136#if defined(__BIG_ENDIAN_BITFIELD)
137 unsigned ps :1;
138 unsigned reserved0 :1; /* Reserved */
139 unsigned page_code :6; /* Page Code - Should be 0xf */
140#elif defined(__LITTLE_ENDIAN_BITFIELD)
141 unsigned page_code :6; /* Page Code - Should be 0xf */
142 unsigned reserved0 :1; /* Reserved */
143 unsigned ps :1;
144#else
145#error "Please fix <asm/byteorder.h>"
146#endif
147 u8 page_length; /* Page Length - Should be 14 */
148#if defined(__BIG_ENDIAN_BITFIELD)
149 unsigned dce :1; /* Data Compression Enable */
150 unsigned dcc :1; /* Data Compression Capable */
151 unsigned reserved2 :6; /* Reserved */
152#elif defined(__LITTLE_ENDIAN_BITFIELD)
153 unsigned reserved2 :6; /* Reserved */
154 unsigned dcc :1; /* Data Compression Capable */
155 unsigned dce :1; /* Data Compression Enable */
156#else
157#error "Please fix <asm/byteorder.h>"
158#endif
159#if defined(__BIG_ENDIAN_BITFIELD)
160 unsigned dde :1; /* Data Decompression Enable */
161 unsigned red :2; /* Report Exception on Decompression */
162 unsigned reserved3 :5; /* Reserved */
163#elif defined(__LITTLE_ENDIAN_BITFIELD)
164 unsigned reserved3 :5; /* Reserved */
165 unsigned red :2; /* Report Exception on Decompression */
166 unsigned dde :1; /* Data Decompression Enable */
167#else
168#error "Please fix <asm/byteorder.h>"
169#endif
170 u32 ca; /* Compression Algorithm */
171 u32 da; /* Decompression Algorithm */
172 u8 reserved[4]; /* Reserved */
173} osst_data_compression_page_t;
174
175/*
176 * The Medium Partition Page, as returned by the MODE SENSE packet command.
177 */
178typedef struct {
179#if defined(__BIG_ENDIAN_BITFIELD)
180 unsigned ps :1;
181 unsigned reserved1_6 :1; /* Reserved */
182 unsigned page_code :6; /* Page Code - Should be 0x11 */
183#elif defined(__LITTLE_ENDIAN_BITFIELD)
184 unsigned page_code :6; /* Page Code - Should be 0x11 */
185 unsigned reserved1_6 :1; /* Reserved */
186 unsigned ps :1;
187#else
188#error "Please fix <asm/byteorder.h>"
189#endif
190 u8 page_length; /* Page Length - Should be 6 */
191 u8 map; /* Maximum Additional Partitions - Should be 0 */
192 u8 apd; /* Additional Partitions Defined - Should be 0 */
193#if defined(__BIG_ENDIAN_BITFIELD)
194 unsigned fdp :1; /* Fixed Data Partitions */
195 unsigned sdp :1; /* Should be 0 */
196 unsigned idp :1; /* Should be 0 */
197 unsigned psum :2; /* Should be 0 */
198 unsigned reserved4_012 :3; /* Reserved */
199#elif defined(__LITTLE_ENDIAN_BITFIELD)
200 unsigned reserved4_012 :3; /* Reserved */
201 unsigned psum :2; /* Should be 0 */
202 unsigned idp :1; /* Should be 0 */
203 unsigned sdp :1; /* Should be 0 */
204 unsigned fdp :1; /* Fixed Data Partitions */
205#else
206#error "Please fix <asm/byteorder.h>"
207#endif
208 u8 mfr; /* Medium Format Recognition */
209 u8 reserved[2]; /* Reserved */
210} osst_medium_partition_page_t;
211
212/*
213 * Capabilities and Mechanical Status Page
214 */
215typedef struct {
216#if defined(__BIG_ENDIAN_BITFIELD)
217 unsigned reserved1_67 :2;
218 unsigned page_code :6; /* Page code - Should be 0x2a */
219#elif defined(__LITTLE_ENDIAN_BITFIELD)
220 unsigned page_code :6; /* Page code - Should be 0x2a */
221 unsigned reserved1_67 :2;
222#else
223#error "Please fix <asm/byteorder.h>"
224#endif
225 u8 page_length; /* Page Length - Should be 0x12 */
226 u8 reserved2, reserved3;
227#if defined(__BIG_ENDIAN_BITFIELD)
228 unsigned reserved4_67 :2;
229 unsigned sprev :1; /* Supports SPACE in the reverse direction */
230 unsigned reserved4_1234 :4;
231 unsigned ro :1; /* Read Only Mode */
232#elif defined(__LITTLE_ENDIAN_BITFIELD)
233 unsigned ro :1; /* Read Only Mode */
234 unsigned reserved4_1234 :4;
235 unsigned sprev :1; /* Supports SPACE in the reverse direction */
236 unsigned reserved4_67 :2;
237#else
238#error "Please fix <asm/byteorder.h>"
239#endif
240#if defined(__BIG_ENDIAN_BITFIELD)
241 unsigned reserved5_67 :2;
242 unsigned qfa :1; /* Supports the QFA two partition formats */
243 unsigned reserved5_4 :1;
244 unsigned efmt :1; /* Supports ERASE command initiated formatting */
245 unsigned reserved5_012 :3;
246#elif defined(__LITTLE_ENDIAN_BITFIELD)
247 unsigned reserved5_012 :3;
248 unsigned efmt :1; /* Supports ERASE command initiated formatting */
249 unsigned reserved5_4 :1;
250 unsigned qfa :1; /* Supports the QFA two partition formats */
251 unsigned reserved5_67 :2;
252#else
253#error "Please fix <asm/byteorder.h>"
254#endif
255#if defined(__BIG_ENDIAN_BITFIELD)
256 unsigned cmprs :1; /* Supports data compression */
257 unsigned ecc :1; /* Supports error correction */
258 unsigned reserved6_45 :2; /* Reserved */
259 unsigned eject :1; /* The device can eject the volume */
260 unsigned prevent :1; /* The device defaults to the prevent state after power-up */
261 unsigned locked :1; /* The volume is locked */
262 unsigned lock :1; /* Supports locking the volume */
263#elif defined(__LITTLE_ENDIAN_BITFIELD)
264 unsigned lock :1; /* Supports locking the volume */
265 unsigned locked :1; /* The volume is locked */
266 unsigned prevent :1; /* The device defaults to the prevent state after power-up */
267 unsigned eject :1; /* The device can eject the volume */
268 unsigned reserved6_45 :2; /* Reserved */
269 unsigned ecc :1; /* Supports error correction */
270 unsigned cmprs :1; /* Supports data compression */
271#else
272#error "Please fix <asm/byteorder.h>"
273#endif
274#if defined(__BIG_ENDIAN_BITFIELD)
275 unsigned blk32768 :1; /* slowb - the device restricts the byte count for PIO */
276 /* transfers for slow buffer memory ??? */
277 /* Also 32768 block size in some cases */
278 unsigned reserved7_3_6 :4;
279 unsigned blk1024 :1; /* Supports 1024 bytes block size */
280 unsigned blk512 :1; /* Supports 512 bytes block size */
281 unsigned reserved7_0 :1;
282#elif defined(__LITTLE_ENDIAN_BITFIELD)
283 unsigned reserved7_0 :1;
284 unsigned blk512 :1; /* Supports 512 bytes block size */
285 unsigned blk1024 :1; /* Supports 1024 bytes block size */
286 unsigned reserved7_3_6 :4;
287 unsigned blk32768 :1; /* slowb - the device restricts the byte count for PIO */
288 /* transfers for slow buffer memory ??? */
289 /* Also 32768 block size in some cases */
290#else
291#error "Please fix <asm/byteorder.h>"
292#endif
293 __be16 max_speed; /* Maximum speed supported in KBps */
294 u8 reserved10, reserved11;
295 __be16 ctl; /* Continuous Transfer Limit in blocks */
296 __be16 speed; /* Current Speed, in KBps */
297 __be16 buffer_size; /* Buffer Size, in 512 bytes */
298 u8 reserved18, reserved19;
299} osst_capabilities_page_t;
300
301/*
302 * Block Size Page
303 */
304typedef struct {
305#if defined(__BIG_ENDIAN_BITFIELD)
306 unsigned ps :1;
307 unsigned reserved1_6 :1;
308 unsigned page_code :6; /* Page code - Should be 0x30 */
309#elif defined(__LITTLE_ENDIAN_BITFIELD)
310 unsigned page_code :6; /* Page code - Should be 0x30 */
311 unsigned reserved1_6 :1;
312 unsigned ps :1;
313#else
314#error "Please fix <asm/byteorder.h>"
315#endif
316 u8 page_length; /* Page Length - Should be 2 */
317 u8 reserved2;
318#if defined(__BIG_ENDIAN_BITFIELD)
319 unsigned one :1;
320 unsigned reserved2_6 :1;
321 unsigned record32_5 :1;
322 unsigned record32 :1;
323 unsigned reserved2_23 :2;
324 unsigned play32_5 :1;
325 unsigned play32 :1;
326#elif defined(__LITTLE_ENDIAN_BITFIELD)
327 unsigned play32 :1;
328 unsigned play32_5 :1;
329 unsigned reserved2_23 :2;
330 unsigned record32 :1;
331 unsigned record32_5 :1;
332 unsigned reserved2_6 :1;
333 unsigned one :1;
334#else
335#error "Please fix <asm/byteorder.h>"
336#endif
337} osst_block_size_page_t;
338
339/*
340 * Tape Parameters Page
341 */
342typedef struct {
343#if defined(__BIG_ENDIAN_BITFIELD)
344 unsigned ps :1;
345 unsigned reserved1_6 :1;
346 unsigned page_code :6; /* Page code - Should be 0x2b */
347#elif defined(__LITTLE_ENDIAN_BITFIELD)
348 unsigned page_code :6; /* Page code - Should be 0x2b */
349 unsigned reserved1_6 :1;
350 unsigned ps :1;
351#else
352#error "Please fix <asm/byteorder.h>"
353#endif
354 u8 reserved2;
355 u8 density;
356 u8 reserved3,reserved4;
357 __be16 segtrk;
358 __be16 trks;
359 u8 reserved5,reserved6,reserved7,reserved8,reserved9,reserved10;
360} osst_tape_paramtr_page_t;
361
362/* OnStream definitions */
363
364#define OS_CONFIG_PARTITION (0xff)
365#define OS_DATA_PARTITION (0)
366#define OS_PARTITION_VERSION (1)
367
368/*
369 * partition
370 */
371typedef struct os_partition_s {
372 __u8 partition_num;
373 __u8 par_desc_ver;
374 __be16 wrt_pass_cntr;
375 __be32 first_frame_ppos;
376 __be32 last_frame_ppos;
377 __be32 eod_frame_ppos;
378} os_partition_t;
379
380/*
381 * DAT entry
382 */
383typedef struct os_dat_entry_s {
384 __be32 blk_sz;
385 __be16 blk_cnt;
386 __u8 flags;
387 __u8 reserved;
388} os_dat_entry_t;
389
390/*
391 * DAT
392 */
393#define OS_DAT_FLAGS_DATA (0xc)
394#define OS_DAT_FLAGS_MARK (0x1)
395
396typedef struct os_dat_s {
397 __u8 dat_sz;
398 __u8 reserved1;
399 __u8 entry_cnt;
400 __u8 reserved3;
401 os_dat_entry_t dat_list[16];
402} os_dat_t;
403
404/*
405 * Frame types
406 */
407#define OS_FRAME_TYPE_FILL (0)
408#define OS_FRAME_TYPE_EOD (1 << 0)
409#define OS_FRAME_TYPE_MARKER (1 << 1)
410#define OS_FRAME_TYPE_HEADER (1 << 3)
411#define OS_FRAME_TYPE_DATA (1 << 7)
412
413/*
414 * AUX
415 */
416typedef struct os_aux_s {
417 __be32 format_id; /* hardware compatibility AUX is based on */
418 char application_sig[4]; /* driver used to write this media */
419 __be32 hdwr; /* reserved */
420 __be32 update_frame_cntr; /* for configuration frame */
421 __u8 frame_type;
422 __u8 frame_type_reserved;
423 __u8 reserved_18_19[2];
424 os_partition_t partition;
425 __u8 reserved_36_43[8];
426 __be32 frame_seq_num;
427 __be32 logical_blk_num_high;
428 __be32 logical_blk_num;
429 os_dat_t dat;
430 __u8 reserved188_191[4];
431 __be32 filemark_cnt;
432 __be32 phys_fm;
433 __be32 last_mark_ppos;
434 __u8 reserved204_223[20];
435
436 /*
437 * __u8 app_specific[32];
438 *
439 * Linux specific fields:
440 */
441 __be32 next_mark_ppos; /* when known, points to next marker */
442 __be32 last_mark_lbn; /* storing log_blk_num of last mark extends the ADR spec */
443 __u8 linux_specific[24];
444
445 __u8 reserved_256_511[256];
446} os_aux_t;
447
448#define OS_FM_TAB_MAX 1024
449
450typedef struct os_fm_tab_s {
451 __u8 fm_part_num;
452 __u8 reserved_1;
453 __u8 fm_tab_ent_sz;
454 __u8 reserved_3;
455 __be16 fm_tab_ent_cnt;
456 __u8 reserved6_15[10];
457 __be32 fm_tab_ent[OS_FM_TAB_MAX];
458} os_fm_tab_t;
459
460typedef struct os_ext_trk_ey_s {
461 __u8 et_part_num;
462 __u8 fmt;
463 __be16 fm_tab_off;
464 __u8 reserved4_7[4];
465 __be32 last_hlb_hi;
466 __be32 last_hlb;
467 __be32 last_pp;
468 __u8 reserved20_31[12];
469} os_ext_trk_ey_t;
470
471typedef struct os_ext_trk_tb_s {
472 __u8 nr_stream_part;
473 __u8 reserved_1;
474 __u8 et_ent_sz;
475 __u8 reserved3_15[13];
476 os_ext_trk_ey_t dat_ext_trk_ey;
477 os_ext_trk_ey_t qfa_ext_trk_ey;
478} os_ext_trk_tb_t;
479
480typedef struct os_header_s {
481 char ident_str[8];
482 __u8 major_rev;
483 __u8 minor_rev;
484 __be16 ext_trk_tb_off;
485 __u8 reserved12_15[4];
486 __u8 pt_par_num;
487 __u8 pt_reserved1_3[3];
488 os_partition_t partition[16];
489 __be32 cfg_col_width;
490 __be32 dat_col_width;
491 __be32 qfa_col_width;
492 __u8 cartridge[16];
493 __u8 reserved304_511[208];
494 __be32 old_filemark_list[16680/4]; /* in ADR 1.4 __u8 track_table[16680] */
495 os_ext_trk_tb_t ext_track_tb;
496 __u8 reserved17272_17735[464];
497 os_fm_tab_t dat_fm_tab;
498 os_fm_tab_t qfa_fm_tab;
499 __u8 reserved25960_32767[6808];
500} os_header_t;
501
502
503/*
504 * OnStream ADRL frame
505 */
506#define OS_FRAME_SIZE (32 * 1024 + 512)
507#define OS_DATA_SIZE (32 * 1024)
508#define OS_AUX_SIZE (512)
509//#define OSST_MAX_SG 2
510
511/* The OnStream tape buffer descriptor. */
512struct osst_buffer {
513 unsigned char in_use;
514 unsigned char dma; /* DMA-able buffer */
515 int buffer_size;
516 int buffer_blocks;
517 int buffer_bytes;
518 int read_pointer;
519 int writing;
520 int midlevel_result;
521 int syscall_result;
522 struct osst_request *last_SRpnt;
523 struct st_cmdstatus cmdstat;
524 struct rq_map_data map_data;
525 unsigned char *b_data;
526 os_aux_t *aux; /* onstream AUX structure at end of each block */
527 unsigned short use_sg; /* zero or number of s/g segments for this adapter */
528 unsigned short sg_segs; /* number of segments in s/g list */
529 unsigned short orig_sg_segs; /* number of segments allocated at first try */
530 struct scatterlist sg[1]; /* MUST BE last item */
531} ;
532
533/* The OnStream tape drive descriptor */
534struct osst_tape {
535 struct scsi_driver *driver;
536 unsigned capacity;
537 struct scsi_device *device;
538 struct mutex lock; /* for serialization */
539 struct completion wait; /* for SCSI commands */
540 struct osst_buffer * buffer;
541
542 /* Drive characteristics */
543 unsigned char omit_blklims;
544 unsigned char do_auto_lock;
545 unsigned char can_bsr;
546 unsigned char can_partitions;
547 unsigned char two_fm;
548 unsigned char fast_mteom;
549 unsigned char restr_dma;
550 unsigned char scsi2_logical;
551 unsigned char default_drvbuffer; /* 0xff = don't touch, value 3 bits */
552 unsigned char pos_unknown; /* after reset position unknown */
553 int write_threshold;
554 int timeout; /* timeout for normal commands */
555 int long_timeout; /* timeout for commands known to take long time*/
556
557 /* Mode characteristics */
558 struct st_modedef modes[ST_NBR_MODES];
559 int current_mode;
560
561 /* Status variables */
562 int partition;
563 int new_partition;
564 int nbr_partitions; /* zero until partition support enabled */
565 struct st_partstat ps[ST_NBR_PARTITIONS];
566 unsigned char dirty;
567 unsigned char ready;
568 unsigned char write_prot;
569 unsigned char drv_write_prot;
570 unsigned char in_use;
571 unsigned char blksize_changed;
572 unsigned char density_changed;
573 unsigned char compression_changed;
574 unsigned char drv_buffer;
575 unsigned char density;
576 unsigned char door_locked;
577 unsigned char rew_at_close;
578 unsigned char inited;
579 int block_size;
580 int min_block;
581 int max_block;
582 int recover_count; /* from tape opening */
583 int abort_count;
584 int write_count;
585 int read_count;
586 int recover_erreg; /* from last status call */
587 /*
588 * OnStream specific data
589 */
590 int os_fw_rev; /* the firmware revision * 10000 */
591 unsigned char raw; /* flag OnStream raw access (32.5KB block size) */
592 unsigned char poll; /* flag that this drive needs polling (IDE|firmware) */
593 unsigned char frame_in_buffer; /* flag that the frame as per frame_seq_number
594 * has been read into STp->buffer and is valid */
595 int frame_seq_number; /* logical frame number */
596 int logical_blk_num; /* logical block number */
597 unsigned first_frame_position; /* physical frame to be transferred to/from host */
598 unsigned last_frame_position; /* physical frame to be transferred to/from tape */
599 int cur_frames; /* current number of frames in internal buffer */
600 int max_frames; /* max number of frames in internal buffer */
601 char application_sig[5]; /* application signature */
602 unsigned char fast_open; /* flag that reminds us we didn't check headers at open */
603 unsigned short wrt_pass_cntr; /* write pass counter */
604 int update_frame_cntr; /* update frame counter */
605 int onstream_write_error; /* write error recovery active */
606 int header_ok; /* header frame verified ok */
607 int linux_media; /* reading Linux-specific media */
608 int linux_media_version;
609 os_header_t * header_cache; /* cache is kept for filemark positions */
610 int filemark_cnt;
611 int first_mark_ppos;
612 int last_mark_ppos;
613 int last_mark_lbn; /* storing log_blk_num of last mark extends the ADR spec */
614 int first_data_ppos;
615 int eod_frame_ppos;
616 int eod_frame_lfa;
617 int write_type; /* used in write error recovery */
618 int read_error_frame; /* used in read error recovery */
619 unsigned long cmd_start_time;
620 unsigned long max_cmd_time;
621
622#if DEBUG
623 unsigned char write_pending;
624 int nbr_finished;
625 int nbr_waits;
626 unsigned char last_cmnd[6];
627 unsigned char last_sense[16];
628#endif
629 struct gendisk *drive;
630} ;
631
632/* scsi tape command */
633struct osst_request {
634 unsigned char cmd[MAX_COMMAND_SIZE];
635 unsigned char sense[SCSI_SENSE_BUFFERSIZE];
636 int result;
637 struct osst_tape *stp;
638 struct completion *waiting;
639 struct bio *bio;
640};
641
642/* Values of write_type */
643#define OS_WRITE_DATA 0
644#define OS_WRITE_EOD 1
645#define OS_WRITE_NEW_MARK 2
646#define OS_WRITE_LAST_MARK 3
647#define OS_WRITE_HEADER 4
648#define OS_WRITE_FILLER 5
649
650/* Additional rw state */
651#define OS_WRITING_COMPLETE 3
diff --git a/drivers/scsi/osst_detect.h b/drivers/scsi/osst_detect.h
deleted file mode 100644
index 83c1d4fb11db..000000000000
--- a/drivers/scsi/osst_detect.h
+++ /dev/null
@@ -1,7 +0,0 @@
1/* SPDX-License-Identifier: GPL-2.0 */
2#define SIGS_FROM_OSST \
3 {"OnStream", "SC-", "", "osst"}, \
4 {"OnStream", "DI-", "", "osst"}, \
5 {"OnStream", "DP-", "", "osst"}, \
6 {"OnStream", "FW-", "", "osst"}, \
7 {"OnStream", "USB", "", "osst"}
diff --git a/drivers/scsi/osst_options.h b/drivers/scsi/osst_options.h
deleted file mode 100644
index a6a389b88876..000000000000
--- a/drivers/scsi/osst_options.h
+++ /dev/null
@@ -1,107 +0,0 @@
1/* SPDX-License-Identifier: GPL-2.0 */
2/*
3 The compile-time configurable defaults for the Linux SCSI tape driver.
4
5 Copyright 1995 Kai Makisara.
6
7 Last modified: Wed Sep 2 21:24:07 1998 by root@home
8
9 Changed (and renamed) for OnStream SCSI drives garloff@suse.de
10 2000-06-21
11
12 $Header: /cvsroot/osst/Driver/osst_options.h,v 1.6 2003/12/23 14:22:12 wriede Exp $
13*/
14
15#ifndef _OSST_OPTIONS_H
16#define _OSST_OPTIONS_H
17
18/* The minimum limit for the number of SCSI tape devices is determined by
19 OSST_MAX_TAPES. If the number of tape devices and the "slack" defined by
20 OSST_EXTRA_DEVS exceeds OSST_MAX_TAPES, the larger number is used. */
21#define OSST_MAX_TAPES 4
22
23/* If OSST_IN_FILE_POS is nonzero, the driver positions the tape after the
24 record has been read by the user program even if the tape has moved further
25 because of buffered reads. Should be set to zero to also support drives
26 that can't space backwards over records. NOTE: The tape will be
27 spaced backwards over an "accidentally" crossed filemark in any case. */
28#define OSST_IN_FILE_POS 1
29
30/* The tape driver buffer size in kilobytes. */
31/* Don't change, as this is the HW blocksize */
32#define OSST_BUFFER_BLOCKS 32
33
34/* The number of kilobytes of data in the buffer that triggers an
35 asynchronous write in fixed block mode. See also OSST_ASYNC_WRITES
36 below. */
37#define OSST_WRITE_THRESHOLD_BLOCKS 32
38
39/* OSST_EOM_RESERVE defines the number of frames kept in reserve for
40 * write error recovery when writing near the end of the medium. ENOSPC is returned
41 * when write() is called and the tape write position is within this number
42 * of blocks from the tape capacity. */
43#define OSST_EOM_RESERVE 300
44
45/* The maximum number of tape buffers the driver allocates. The number
46 is also constrained by the number of drives detected. Determines the
47 maximum number of concurrently active tape drives. */
48#define OSST_MAX_BUFFERS OSST_MAX_TAPES
49
50/* Maximum number of scatter/gather segments */
51/* Fit one buffer in pages and add one for the AUX header */
52#define OSST_MAX_SG (((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE) + 1)
53
54/* The number of scatter/gather segments to allocate at first try (must be
55 smaller than or equal to the maximum). */
56#define OSST_FIRST_SG ((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE)
57
58/* The size of the first scatter/gather segments (determines the maximum block
59 size for SCSI adapters not supporting scatter/gather). The default is set
60 to try to allocate the buffer as one chunk. */
61#define OSST_FIRST_ORDER (15-PAGE_SHIFT)
62
63
64/* The following lines define defaults for properties that can be set
65 separately for each drive using the MTSTOPTIONS ioctl. */
66
67/* If OSST_TWO_FM is non-zero, the driver writes two filemarks after a
68 file being written. Some drives can't handle two filemarks at the
69 end of data. */
70#define OSST_TWO_FM 0
71
72/* If OSST_BUFFER_WRITES is non-zero, writes in fixed block mode are
73 buffered until the driver buffer is full or asynchronous write is
74 triggered. */
75#define OSST_BUFFER_WRITES 1
76
77/* If OSST_ASYNC_WRITES is non-zero, the SCSI write command may be started
78 without waiting for it to finish. May cause problems in multiple
79 tape backups. */
80#define OSST_ASYNC_WRITES 1
81
82/* If OSST_READ_AHEAD is non-zero, blocks are read ahead in fixed block
83 mode. */
84#define OSST_READ_AHEAD 1
85
86/* If OSST_AUTO_LOCK is non-zero, the drive door is locked at the first
87 read or write command after the device is opened. The door is opened
88 when the device is closed. */
89#define OSST_AUTO_LOCK 0
90
91/* If OSST_FAST_MTEOM is non-zero, the MTEOM ioctl is done using the
92 direct SCSI command. The file number status is lost but this method
93 is fast with some drives. Otherwise MTEOM is done by spacing over
94 files and the file number status is retained. */
95#define OSST_FAST_MTEOM 0
96
97/* If OSST_SCSI2LOGICAL is nonzero, the logical block addresses are used for
98 MTIOCPOS and MTSEEK by default. Vendor addresses are used if OSST_SCSI2LOGICAL
99 is zero. */
100#define OSST_SCSI2LOGICAL 0
101
102/* If OSST_SYSV is non-zero, the tape behaves according to SYS V semantics.
103 The default is BSD semantics. */
104#define OSST_SYSV 0
105
106
107#endif
diff --git a/drivers/scsi/pcmcia/Kconfig b/drivers/scsi/pcmcia/Kconfig
index c544f48a1d18..2368f34efba3 100644
--- a/drivers/scsi/pcmcia/Kconfig
+++ b/drivers/scsi/pcmcia/Kconfig
@@ -20,6 +20,16 @@ config PCMCIA_AHA152X
20 To compile this driver as a module, choose M here: the 20 To compile this driver as a module, choose M here: the
21 module will be called aha152x_cs. 21 module will be called aha152x_cs.
22 22
23config PCMCIA_FDOMAIN
24 tristate "Future Domain PCMCIA support"
25 select SCSI_FDOMAIN
26 help
27 Say Y here if you intend to attach this type of PCMCIA SCSI host
28 adapter to your computer.
29
30 To compile this driver as a module, choose M here: the
31 module will be called fdomain_cs.
32
23config PCMCIA_NINJA_SCSI 33config PCMCIA_NINJA_SCSI
24 tristate "NinjaSCSI-3 / NinjaSCSI-32Bi (16bit) PCMCIA support" 34 tristate "NinjaSCSI-3 / NinjaSCSI-32Bi (16bit) PCMCIA support"
25 depends on !64BIT 35 depends on !64BIT
diff --git a/drivers/scsi/pcmcia/Makefile b/drivers/scsi/pcmcia/Makefile
index a5a24dd44e7e..02f5b44a2685 100644
--- a/drivers/scsi/pcmcia/Makefile
+++ b/drivers/scsi/pcmcia/Makefile
@@ -4,6 +4,7 @@ ccflags-y := -I $(srctree)/drivers/scsi
4 4
5# 16-bit client drivers 5# 16-bit client drivers
6obj-$(CONFIG_PCMCIA_QLOGIC) += qlogic_cs.o 6obj-$(CONFIG_PCMCIA_QLOGIC) += qlogic_cs.o
7obj-$(CONFIG_PCMCIA_FDOMAIN) += fdomain_cs.o
7obj-$(CONFIG_PCMCIA_AHA152X) += aha152x_cs.o 8obj-$(CONFIG_PCMCIA_AHA152X) += aha152x_cs.o
8obj-$(CONFIG_PCMCIA_NINJA_SCSI) += nsp_cs.o 9obj-$(CONFIG_PCMCIA_NINJA_SCSI) += nsp_cs.o
9obj-$(CONFIG_PCMCIA_SYM53C500) += sym53c500_cs.o 10obj-$(CONFIG_PCMCIA_SYM53C500) += sym53c500_cs.o
diff --git a/drivers/scsi/pcmcia/fdomain_cs.c b/drivers/scsi/pcmcia/fdomain_cs.c
new file mode 100644
index 000000000000..e42acf314d06
--- /dev/null
+++ b/drivers/scsi/pcmcia/fdomain_cs.c
@@ -0,0 +1,95 @@
1// SPDX-License-Identifier: (GPL-2.0 OR MPL-1.1)
2/*
3 * Driver for Future Domain-compatible PCMCIA SCSI cards
4 * Copyright 2019 Ondrej Zary
5 *
6 * The initial developer of the original code is David A. Hinds
7 * <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
8 * are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
9 */
10
11#include <linux/module.h>
12#include <linux/init.h>
13#include <scsi/scsi_host.h>
14#include <pcmcia/cistpl.h>
15#include <pcmcia/ds.h>
16#include "fdomain.h"
17
18MODULE_AUTHOR("Ondrej Zary, David Hinds");
19MODULE_DESCRIPTION("Future Domain PCMCIA SCSI driver");
20MODULE_LICENSE("Dual MPL/GPL");
21
22static int fdomain_config_check(struct pcmcia_device *p_dev, void *priv_data)
23{
24 p_dev->io_lines = 10;
25 p_dev->resource[0]->end = FDOMAIN_REGION_SIZE;
26 p_dev->resource[0]->flags &= ~IO_DATA_PATH_WIDTH;
27 p_dev->resource[0]->flags |= IO_DATA_PATH_WIDTH_AUTO;
28 return pcmcia_request_io(p_dev);
29}
30
31static int fdomain_probe(struct pcmcia_device *link)
32{
33 int ret;
34 struct Scsi_Host *sh;
35
36 link->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IO;
37 link->config_regs = PRESENT_OPTION;
38
39 ret = pcmcia_loop_config(link, fdomain_config_check, NULL);
40 if (ret)
41 return ret;
42
43 ret = pcmcia_enable_device(link);
44 if (ret)
45 goto fail_disable;
46
47 if (!request_region(link->resource[0]->start, FDOMAIN_REGION_SIZE,
48 "fdomain_cs"))
49 goto fail_disable;
50
51 sh = fdomain_create(link->resource[0]->start, link->irq, 7, &link->dev);
52 if (!sh) {
53 dev_err(&link->dev, "Controller initialization failed");
54 ret = -ENODEV;
55 goto fail_release;
56 }
57
58 link->priv = sh;
59
60 return 0;
61
62fail_release:
63 release_region(link->resource[0]->start, FDOMAIN_REGION_SIZE);
64fail_disable:
65 pcmcia_disable_device(link);
66 return ret;
67}
68
69static void fdomain_remove(struct pcmcia_device *link)
70{
71 fdomain_destroy(link->priv);
72 release_region(link->resource[0]->start, FDOMAIN_REGION_SIZE);
73 pcmcia_disable_device(link);
74}
75
76static const struct pcmcia_device_id fdomain_ids[] = {
77 PCMCIA_DEVICE_PROD_ID12("IBM Corp.", "SCSI PCMCIA Card", 0xe3736c88,
78 0x859cad20),
79 PCMCIA_DEVICE_PROD_ID1("SCSI PCMCIA Adapter Card", 0x8dacb57e),
80 PCMCIA_DEVICE_PROD_ID12(" SIMPLE TECHNOLOGY Corporation",
81 "SCSI PCMCIA Credit Card Controller",
82 0x182bdafe, 0xc80d106f),
83 PCMCIA_DEVICE_NULL,
84};
85MODULE_DEVICE_TABLE(pcmcia, fdomain_ids);
86
87static struct pcmcia_driver fdomain_cs_driver = {
88 .owner = THIS_MODULE,
89 .name = "fdomain_cs",
90 .probe = fdomain_probe,
91 .remove = fdomain_remove,
92 .id_table = fdomain_ids,
93};
94
95module_pcmcia_driver(fdomain_cs_driver);
diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c
index d193961ea82f..6b85016b4db3 100644
--- a/drivers/scsi/pm8001/pm8001_ctl.c
+++ b/drivers/scsi/pm8001/pm8001_ctl.c
@@ -462,6 +462,24 @@ static ssize_t pm8001_ctl_bios_version_show(struct device *cdev,
462} 462}
463static DEVICE_ATTR(bios_version, S_IRUGO, pm8001_ctl_bios_version_show, NULL); 463static DEVICE_ATTR(bios_version, S_IRUGO, pm8001_ctl_bios_version_show, NULL);
464/** 464/**
465 * event_log_size_show - event log size
466 * @cdev: pointer to embedded class device
467 * @buf: the buffer returned
468 *
469 * A sysfs read shost attribute.
470 */
471static ssize_t event_log_size_show(struct device *cdev,
472 struct device_attribute *attr, char *buf)
473{
474 struct Scsi_Host *shost = class_to_shost(cdev);
475 struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
476 struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
477
478 return snprintf(buf, PAGE_SIZE, "%d\n",
479 pm8001_ha->main_cfg_tbl.pm80xx_tbl.event_log_size);
480}
481static DEVICE_ATTR_RO(event_log_size);
482/**
465 * pm8001_ctl_aap_log_show - IOP event log 483 * pm8001_ctl_aap_log_show - IOP event log
466 * @cdev: pointer to embedded class device 484 * @cdev: pointer to embedded class device
467 * @buf: the buffer returned 485 * @buf: the buffer returned
@@ -474,25 +492,26 @@ static ssize_t pm8001_ctl_iop_log_show(struct device *cdev,
474 struct Scsi_Host *shost = class_to_shost(cdev); 492 struct Scsi_Host *shost = class_to_shost(cdev);
475 struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); 493 struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
476 struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 494 struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
477#define IOP_MEMMAP(r, c) \
478 (*(u32 *)((u8*)pm8001_ha->memoryMap.region[IOP].virt_ptr + (r) * 32 \
479 + (c)))
480 int i;
481 char *str = buf; 495 char *str = buf;
482 int max = 2; 496 u32 read_size =
483 for (i = 0; i < max; i++) { 497 pm8001_ha->main_cfg_tbl.pm80xx_tbl.event_log_size / 1024;
484 str += sprintf(str, "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" 498 static u32 start, end, count;
485 "0x%08x 0x%08x\n", 499 u32 max_read_times = 32;
486 IOP_MEMMAP(i, 0), 500 u32 max_count = (read_size * 1024) / (max_read_times * 4);
487 IOP_MEMMAP(i, 4), 501 u32 *temp = (u32 *)pm8001_ha->memoryMap.region[IOP].virt_ptr;
488 IOP_MEMMAP(i, 8), 502
489 IOP_MEMMAP(i, 12), 503 if ((count % max_count) == 0) {
490 IOP_MEMMAP(i, 16), 504 start = 0;
491 IOP_MEMMAP(i, 20), 505 end = max_read_times;
492 IOP_MEMMAP(i, 24), 506 count = 0;
493 IOP_MEMMAP(i, 28)); 507 } else {
508 start = end;
509 end = end + max_read_times;
494 } 510 }
495 511
512 for (; start < end; start++)
513 str += sprintf(str, "%08x ", *(temp+start));
514 count++;
496 return str - buf; 515 return str - buf;
497} 516}
498static DEVICE_ATTR(iop_log, S_IRUGO, pm8001_ctl_iop_log_show, NULL); 517static DEVICE_ATTR(iop_log, S_IRUGO, pm8001_ctl_iop_log_show, NULL);
@@ -796,6 +815,7 @@ struct device_attribute *pm8001_host_attrs[] = {
796 &dev_attr_max_sg_list, 815 &dev_attr_max_sg_list,
797 &dev_attr_sas_spec_support, 816 &dev_attr_sas_spec_support,
798 &dev_attr_logging_level, 817 &dev_attr_logging_level,
818 &dev_attr_event_log_size,
799 &dev_attr_host_sas_address, 819 &dev_attr_host_sas_address,
800 &dev_attr_bios_version, 820 &dev_attr_bios_version,
801 &dev_attr_ib_log, 821 &dev_attr_ib_log,
diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
index 109effd3557d..68a8217032d0 100644
--- a/drivers/scsi/pm8001/pm8001_hwi.c
+++ b/drivers/scsi/pm8001/pm8001_hwi.c
@@ -2356,7 +2356,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
 	if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
 		(status != IO_UNDERFLOW)) {
 		if (!((t->dev->parent) &&
-			(DEV_IS_EXPANDER(t->dev->parent->dev_type)))) {
+			(dev_is_expander(t->dev->parent->dev_type)))) {
 			for (i = 0 , j = 4; j <= 7 && i <= 3; i++ , j++)
 				sata_addr_low[i] = pm8001_ha->sas_addr[j];
 			for (i = 0 , j = 0; j <= 3 && i <= 3; i++ , j++)
@@ -4560,7 +4560,7 @@ static int pm8001_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
 		    pm8001_dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
 			stp_sspsmp_sata = 0x01; /*ssp or smp*/
 	}
-	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+	if (parent_dev && dev_is_expander(parent_dev->dev_type))
 		phy_id = parent_dev->ex_dev.ex_phy->phy_id;
 	else
 		phy_id = pm8001_dev->attached_phy;
diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
index 88eef3b18e41..dd38c356a1a4 100644
--- a/drivers/scsi/pm8001/pm8001_sas.c
+++ b/drivers/scsi/pm8001/pm8001_sas.c
@@ -634,7 +634,7 @@ static int pm8001_dev_found_notify(struct domain_device *dev)
 	dev->lldd_dev = pm8001_device;
 	pm8001_device->dev_type = dev->dev_type;
 	pm8001_device->dcompletion = &completion;
-	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
+	if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
 		int phy_id;
 		struct ex_phy *phy;
 		for (phy_id = 0; phy_id < parent_dev->ex_dev.num_phys;
@@ -1181,7 +1181,7 @@ int pm8001_query_task(struct sas_task *task)
 	return rc;
 }
 
-/* mandatory SAM-3, still need free task/ccb info, abord the specified task */
+/* mandatory SAM-3, still need free task/ccb info, abort the specified task */
 int pm8001_abort_task(struct sas_task *task)
 {
 	unsigned long flags;
diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h
index ac6d8e3f22de..ff17c6aff63d 100644
--- a/drivers/scsi/pm8001/pm8001_sas.h
+++ b/drivers/scsi/pm8001/pm8001_sas.h
@@ -103,7 +103,6 @@ do { \
 #define PM8001_READ_VPD
 
 
-#define DEV_IS_EXPANDER(type) ((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE))
 #define IS_SPCV_12G(dev) ((dev->device == 0X8074) \
 	|| (dev->device == 0X8076) \
 	|| (dev->device == 0X8077) \
diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
index 301de40eb708..1128d86d241a 100644
--- a/drivers/scsi/pm8001/pm80xx_hwi.c
+++ b/drivers/scsi/pm8001/pm80xx_hwi.c
@@ -2066,7 +2066,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
 	if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
 		(status != IO_UNDERFLOW)) {
 		if (!((t->dev->parent) &&
-			(DEV_IS_EXPANDER(t->dev->parent->dev_type)))) {
+			(dev_is_expander(t->dev->parent->dev_type)))) {
 			for (i = 0 , j = 4; i <= 3 && j <= 7; i++ , j++)
 				sata_addr_low[i] = pm8001_ha->sas_addr[j];
 			for (i = 0 , j = 0; i <= 3 && j <= 3; i++ , j++)
@@ -4561,7 +4561,7 @@ static int pm80xx_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
 		    pm8001_dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
 			stp_sspsmp_sata = 0x01; /*ssp or smp*/
 	}
-	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+	if (parent_dev && dev_is_expander(parent_dev->dev_type))
 		phy_id = parent_dev->ex_dev.ex_phy->phy_id;
 	else
 		phy_id = pm8001_dev->attached_phy;
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index 1a4095c56eee..bad2b12604f1 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -532,6 +532,8 @@ typedef struct srb {
 	uint8_t cmd_type;
 	uint8_t pad[3];
 	atomic_t ref_count;
+	struct kref cmd_kref;	/* need to migrate ref_count over to this */
+	void *priv;
 	wait_queue_head_t nvme_ls_waitq;
 	struct fc_port *fcport;
 	struct scsi_qla_host *vha;
@@ -554,6 +556,7 @@ typedef struct srb {
 	} u;
 	void (*done)(void *, int);
 	void (*free)(void *);
+	void (*put_fn)(struct kref *kref);
 } srb_t;
 
 #define GET_CMD_SP(sp) (sp->u.scmd.cmd)
@@ -2336,7 +2339,6 @@ typedef struct fc_port {
 	unsigned int id_changed:1;
 	unsigned int scan_needed:1;
 
-	struct work_struct nvme_del_work;
 	struct completion nvme_del_done;
 	uint32_t nvme_prli_service_param;
 #define NVME_PRLI_SP_CONF	BIT_7
@@ -4376,7 +4378,6 @@ typedef struct scsi_qla_host {
 
 	struct nvme_fc_local_port *nvme_local_port;
 	struct completion nvme_del_done;
-	struct list_head nvme_rport_list;
 
 	uint16_t fcoe_vlan_id;
 	uint16_t fcoe_fcf_idx;
diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
index bbe69ab5cf3f..f9669fdf7798 100644
--- a/drivers/scsi/qla2xxx/qla_gbl.h
+++ b/drivers/scsi/qla2xxx/qla_gbl.h
@@ -908,4 +908,6 @@ void qlt_clr_qp_table(struct scsi_qla_host *vha);
 void qlt_set_mode(struct scsi_qla_host *);
 int qla2x00_set_data_rate(scsi_qla_host_t *vha, uint16_t mode);
 
+/* nvme.c */
+void qla_nvme_unregister_remote_port(struct fc_port *fcport);
 #endif /* _QLA_GBL_H */
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 54772d4c377f..4059655639d9 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -5403,7 +5403,6 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
 	fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
 	fcport->deleted = 0;
 	fcport->logout_on_delete = 1;
-	fcport->login_retry = vha->hw->login_retry_count;
 	fcport->n2n_chip_reset = fcport->n2n_link_reset_cnt = 0;
 
 	switch (vha->hw->current_topology) {
diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
index 22e3fba28e51..963094b3c300 100644
--- a/drivers/scsi/qla2xxx/qla_nvme.c
+++ b/drivers/scsi/qla2xxx/qla_nvme.c
@@ -12,8 +12,6 @@
 
 static struct nvme_fc_port_template qla_nvme_fc_transport;
 
-static void qla_nvme_unregister_remote_port(struct work_struct *);
-
 int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
 {
 	struct qla_nvme_rport *rport;
@@ -38,7 +36,6 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
 	    (fcport->nvme_flag & NVME_FLAG_REGISTERED))
 		return 0;
 
-	INIT_WORK(&fcport->nvme_del_work, qla_nvme_unregister_remote_port);
 	fcport->nvme_flag &= ~NVME_FLAG_RESETTING;
 
 	memset(&req, 0, sizeof(struct nvme_fc_port_info));
@@ -74,7 +71,6 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
 
 	rport = fcport->nvme_remote_port->private;
 	rport->fcport = fcport;
-	list_add_tail(&rport->list, &vha->nvme_rport_list);
 
 	fcport->nvme_flag |= NVME_FLAG_REGISTERED;
 	return 0;
@@ -124,53 +120,91 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
 	return 0;
 }
 
+static void qla_nvme_release_fcp_cmd_kref(struct kref *kref)
+{
+	struct srb *sp = container_of(kref, struct srb, cmd_kref);
+	struct nvme_private *priv = (struct nvme_private *)sp->priv;
+	struct nvmefc_fcp_req *fd;
+	struct srb_iocb *nvme;
+	unsigned long flags;
+
+	if (!priv)
+		goto out;
+
+	nvme = &sp->u.iocb_cmd;
+	fd = nvme->u.nvme.desc;
+
+	spin_lock_irqsave(&priv->cmd_lock, flags);
+	priv->sp = NULL;
+	sp->priv = NULL;
+	if (priv->comp_status == QLA_SUCCESS) {
+		fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len;
+	} else {
+		fd->rcv_rsplen = 0;
+		fd->transferred_length = 0;
+	}
+	fd->status = 0;
+	spin_unlock_irqrestore(&priv->cmd_lock, flags);
+
+	fd->done(fd);
+out:
+	qla2xxx_rel_qpair_sp(sp->qpair, sp);
+}
+
+static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+{
+	struct srb *sp = container_of(kref, struct srb, cmd_kref);
+	struct nvme_private *priv = (struct nvme_private *)sp->priv;
+	struct nvmefc_ls_req *fd;
+	unsigned long flags;
+
+	if (!priv)
+		goto out;
+
+	spin_lock_irqsave(&priv->cmd_lock, flags);
+	priv->sp = NULL;
+	sp->priv = NULL;
+	spin_unlock_irqrestore(&priv->cmd_lock, flags);
+
+	fd = priv->fd;
+	fd->done(fd, priv->comp_status);
+out:
+	qla2x00_rel_sp(sp);
+}
+
+static void qla_nvme_ls_complete(struct work_struct *work)
+{
+	struct nvme_private *priv =
+		container_of(work, struct nvme_private, ls_work);
+
+	kref_put(&priv->sp->cmd_kref, qla_nvme_release_ls_cmd_kref);
+}
+
 static void qla_nvme_sp_ls_done(void *ptr, int res)
 {
 	srb_t *sp = ptr;
-	struct srb_iocb *nvme;
-	struct nvmefc_ls_req *fd;
 	struct nvme_private *priv;
 
-	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
+	if (WARN_ON_ONCE(kref_read(&sp->cmd_kref) == 0))
 		return;
 
-	atomic_dec(&sp->ref_count);
-
 	if (res)
 		res = -EINVAL;
 
-	nvme = &sp->u.iocb_cmd;
-	fd = nvme->u.nvme.desc;
-	priv = fd->private;
+	priv = (struct nvme_private *)sp->priv;
 	priv->comp_status = res;
+	INIT_WORK(&priv->ls_work, qla_nvme_ls_complete);
 	schedule_work(&priv->ls_work);
-	/* work schedule doesn't need the sp */
-	qla2x00_rel_sp(sp);
 }
 
+/* it assumed that QPair lock is held. */
 static void qla_nvme_sp_done(void *ptr, int res)
 {
 	srb_t *sp = ptr;
-	struct srb_iocb *nvme;
-	struct nvmefc_fcp_req *fd;
-
-	nvme = &sp->u.iocb_cmd;
-	fd = nvme->u.nvme.desc;
-
-	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
-		return;
-
-	atomic_dec(&sp->ref_count);
+	struct nvme_private *priv = (struct nvme_private *)sp->priv;
 
-	if (res == QLA_SUCCESS) {
-		fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len;
-	} else {
-		fd->rcv_rsplen = 0;
-		fd->transferred_length = 0;
-	}
-	fd->status = 0;
-	fd->done(fd);
-	qla2xxx_rel_qpair_sp(sp->qpair, sp);
+	priv->comp_status = res;
+	kref_put(&sp->cmd_kref, qla_nvme_release_fcp_cmd_kref);
 
 	return;
 }
@@ -189,44 +223,50 @@ static void qla_nvme_abort_work(struct work_struct *work)
 	    __func__, sp, sp->handle, fcport, fcport->deleted);
 
 	if (!ha->flags.fw_started && (fcport && fcport->deleted))
-		return;
+		goto out;
 
 	if (ha->flags.host_shutting_down) {
 		ql_log(ql_log_info, sp->fcport->vha, 0xffff,
 		    "%s Calling done on sp: %p, type: 0x%x, sp->ref_count: 0x%x\n",
 		    __func__, sp, sp->type, atomic_read(&sp->ref_count));
 		sp->done(sp, 0);
-		return;
+		goto out;
 	}
 
-	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
-		return;
-
 	rval = ha->isp_ops->abort_command(sp);
 
 	ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
 	    "%s: %s command for sp=%p, handle=%x on fcport=%p rval=%x\n",
 	    __func__, (rval != QLA_SUCCESS) ? "Failed to abort" : "Aborted",
 	    sp, sp->handle, fcport, rval);
+
+out:
+	/* kref_get was done before work was schedule. */
+	kref_put(&sp->cmd_kref, sp->put_fn);
 }
 
 static void qla_nvme_ls_abort(struct nvme_fc_local_port *lport,
     struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd)
 {
 	struct nvme_private *priv = fd->private;
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->cmd_lock, flags);
+	if (!priv->sp) {
+		spin_unlock_irqrestore(&priv->cmd_lock, flags);
+		return;
+	}
+
+	if (!kref_get_unless_zero(&priv->sp->cmd_kref)) {
+		spin_unlock_irqrestore(&priv->cmd_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&priv->cmd_lock, flags);
 
 	INIT_WORK(&priv->abort_work, qla_nvme_abort_work);
 	schedule_work(&priv->abort_work);
 }
 
-static void qla_nvme_ls_complete(struct work_struct *work)
-{
-	struct nvme_private *priv =
-	    container_of(work, struct nvme_private, ls_work);
-	struct nvmefc_ls_req *fd = priv->fd;
-
-	fd->done(fd, priv->comp_status);
-}
-
 
 static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
     struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd)
@@ -240,8 +280,16 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
 	struct qla_hw_data *ha;
 	srb_t *sp;
 
+
+	if (!fcport || (fcport && fcport->deleted))
+		return rval;
+
 	vha = fcport->vha;
 	ha = vha->hw;
+
+	if (!ha->flags.fw_started)
+		return rval;
+
 	/* Alloc SRB structure */
 	sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
 	if (!sp)
@@ -250,11 +298,13 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
 	sp->type = SRB_NVME_LS;
 	sp->name = "nvme_ls";
 	sp->done = qla_nvme_sp_ls_done;
-	atomic_set(&sp->ref_count, 1);
-	nvme = &sp->u.iocb_cmd;
+	sp->put_fn = qla_nvme_release_ls_cmd_kref;
+	sp->priv = (void *)priv;
 	priv->sp = sp;
+	kref_init(&sp->cmd_kref);
+	spin_lock_init(&priv->cmd_lock);
+	nvme = &sp->u.iocb_cmd;
 	priv->fd = fd;
-	INIT_WORK(&priv->ls_work, qla_nvme_ls_complete);
 	nvme->u.nvme.desc = fd;
 	nvme->u.nvme.dir = 0;
 	nvme->u.nvme.dl = 0;
@@ -271,8 +321,10 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
 	if (rval != QLA_SUCCESS) {
 		ql_log(ql_log_warn, vha, 0x700e,
 		    "qla2x00_start_sp failed = %d\n", rval);
-		atomic_dec(&sp->ref_count);
 		wake_up(&sp->nvme_ls_waitq);
+		sp->priv = NULL;
+		priv->sp = NULL;
+		qla2x00_rel_sp(sp);
 		return rval;
 	}
 
@@ -284,6 +336,18 @@ static void qla_nvme_fcp_abort(struct nvme_fc_local_port *lport,
     struct nvmefc_fcp_req *fd)
 {
 	struct nvme_private *priv = fd->private;
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->cmd_lock, flags);
+	if (!priv->sp) {
+		spin_unlock_irqrestore(&priv->cmd_lock, flags);
+		return;
+	}
+	if (!kref_get_unless_zero(&priv->sp->cmd_kref)) {
+		spin_unlock_irqrestore(&priv->cmd_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&priv->cmd_lock, flags);
 
 	INIT_WORK(&priv->abort_work, qla_nvme_abort_work);
 	schedule_work(&priv->abort_work);
@@ -487,11 +551,11 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
 
 	fcport = qla_rport->fcport;
 
-	vha = fcport->vha;
-
-	if (test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags))
+	if (!qpair || !fcport || (qpair && !qpair->fw_started) ||
+	    (fcport && fcport->deleted))
 		return rval;
 
+	vha = fcport->vha;
 	/*
 	 * If we know the dev is going away while the transport is still sending
 	 * IO's return busy back to stall the IO Q. This happens when the
@@ -507,12 +571,15 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
 	if (!sp)
 		return -EBUSY;
 
-	atomic_set(&sp->ref_count, 1);
 	init_waitqueue_head(&sp->nvme_ls_waitq);
+	kref_init(&sp->cmd_kref);
+	spin_lock_init(&priv->cmd_lock);
+	sp->priv = (void *)priv;
 	priv->sp = sp;
 	sp->type = SRB_NVME_CMD;
 	sp->name = "nvme_cmd";
 	sp->done = qla_nvme_sp_done;
+	sp->put_fn = qla_nvme_release_fcp_cmd_kref;
 	sp->qpair = qpair;
 	sp->vha = vha;
 	nvme = &sp->u.iocb_cmd;
@@ -522,8 +589,10 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
 	if (rval != QLA_SUCCESS) {
 		ql_log(ql_log_warn, vha, 0x212d,
 		    "qla2x00_start_nvme_mq failed = %d\n", rval);
-		atomic_dec(&sp->ref_count);
 		wake_up(&sp->nvme_ls_waitq);
+		sp->priv = NULL;
+		priv->sp = NULL;
+		qla2xxx_rel_qpair_sp(sp->qpair, sp);
 	}
 
 	return rval;
@@ -542,29 +611,16 @@ static void qla_nvme_localport_delete(struct nvme_fc_local_port *lport)
 static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport)
 {
 	fc_port_t *fcport;
-	struct qla_nvme_rport *qla_rport = rport->private, *trport;
+	struct qla_nvme_rport *qla_rport = rport->private;
 
 	fcport = qla_rport->fcport;
 	fcport->nvme_remote_port = NULL;
 	fcport->nvme_flag &= ~NVME_FLAG_REGISTERED;
-
-	list_for_each_entry_safe(qla_rport, trport,
-	    &fcport->vha->nvme_rport_list, list) {
-		if (qla_rport->fcport == fcport) {
-			list_del(&qla_rport->list);
-			break;
-		}
-	}
-	complete(&fcport->nvme_del_done);
-
-	if (!test_bit(UNLOADING, &fcport->vha->dpc_flags)) {
-		INIT_WORK(&fcport->free_work, qlt_free_session_done);
-		schedule_work(&fcport->free_work);
-	}
-
 	fcport->nvme_flag &= ~NVME_FLAG_DELETING;
 	ql_log(ql_log_info, fcport->vha, 0x2110,
-	    "remoteport_delete of %p completed.\n", fcport);
+	    "remoteport_delete of %p %8phN completed.\n",
+	    fcport, fcport->port_name);
+	complete(&fcport->nvme_del_done);
 }
 
 static struct nvme_fc_port_template qla_nvme_fc_transport = {
@@ -586,35 +642,25 @@ static struct nvme_fc_port_template qla_nvme_fc_transport = {
 	.fcprqst_priv_sz = sizeof(struct nvme_private),
 };
 
-static void qla_nvme_unregister_remote_port(struct work_struct *work)
+void qla_nvme_unregister_remote_port(struct fc_port *fcport)
 {
-	struct fc_port *fcport = container_of(work, struct fc_port,
-	    nvme_del_work);
-	struct qla_nvme_rport *qla_rport, *trport;
+	int ret;
 
 	if (!IS_ENABLED(CONFIG_NVME_FC))
 		return;
 
 	ql_log(ql_log_warn, NULL, 0x2112,
-	    "%s: unregister remoteport on %p\n",__func__, fcport);
-
-	list_for_each_entry_safe(qla_rport, trport,
-	    &fcport->vha->nvme_rport_list, list) {
-		if (qla_rport->fcport == fcport) {
-			ql_log(ql_log_info, fcport->vha, 0x2113,
-			    "%s: fcport=%p\n", __func__, fcport);
-			nvme_fc_set_remoteport_devloss
-				(fcport->nvme_remote_port, 0);
-			init_completion(&fcport->nvme_del_done);
-			if (nvme_fc_unregister_remoteport
-				(fcport->nvme_remote_port))
-				ql_log(ql_log_info, fcport->vha, 0x2114,
-				    "%s: Failed to unregister nvme_remote_port\n",
-				    __func__);
-			wait_for_completion(&fcport->nvme_del_done);
-			break;
-		}
-	}
+	    "%s: unregister remoteport on %p %8phN\n",
+	    __func__, fcport, fcport->port_name);
+
+	nvme_fc_set_remoteport_devloss(fcport->nvme_remote_port, 0);
+	init_completion(&fcport->nvme_del_done);
+	ret = nvme_fc_unregister_remoteport(fcport->nvme_remote_port);
+	if (ret)
+		ql_log(ql_log_info, fcport->vha, 0x2114,
+		    "%s: Failed to unregister nvme_remote_port (%d)\n",
+		    __func__, ret);
+	wait_for_completion(&fcport->nvme_del_done);
 }
 
 void qla_nvme_delete(struct scsi_qla_host *vha)
diff --git a/drivers/scsi/qla2xxx/qla_nvme.h b/drivers/scsi/qla2xxx/qla_nvme.h
index d3b8a6440113..67bb4a2a3742 100644
--- a/drivers/scsi/qla2xxx/qla_nvme.h
+++ b/drivers/scsi/qla2xxx/qla_nvme.h
@@ -34,10 +34,10 @@ struct nvme_private {
 	struct work_struct ls_work;
 	struct work_struct abort_work;
 	int comp_status;
+	spinlock_t cmd_lock;
 };
 
 struct qla_nvme_rport {
-	struct list_head list;
 	struct fc_port *fcport;
 };
 
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index d056f5e7cf93..2e58cff9d200 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -4789,7 +4789,6 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
 	INIT_LIST_HEAD(&vha->plogi_ack_list);
 	INIT_LIST_HEAD(&vha->qp_list);
 	INIT_LIST_HEAD(&vha->gnl.fcports);
-	INIT_LIST_HEAD(&vha->nvme_rport_list);
 	INIT_LIST_HEAD(&vha->gpnid_list);
 	INIT_WORK(&vha->iocb_work, qla2x00_iocb_work_fn);
 
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index 2fd5c09b42d4..1c1f63be6eed 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -1004,6 +1004,12 @@ void qlt_free_session_done(struct work_struct *work)
 			else
 				logout_started = true;
 		}
+	} /* if sess->logout_on_delete */
+
+	if (sess->nvme_flag & NVME_FLAG_REGISTERED &&
+	    !(sess->nvme_flag & NVME_FLAG_DELETING)) {
+		sess->nvme_flag |= NVME_FLAG_DELETING;
+		qla_nvme_unregister_remote_port(sess);
 	}
 }
 
@@ -1155,14 +1161,8 @@ void qlt_unreg_sess(struct fc_port *sess)
 	sess->last_rscn_gen = sess->rscn_gen;
 	sess->last_login_gen = sess->login_gen;
 
-	if (sess->nvme_flag & NVME_FLAG_REGISTERED &&
-	    !(sess->nvme_flag & NVME_FLAG_DELETING)) {
-		sess->nvme_flag |= NVME_FLAG_DELETING;
-		schedule_work(&sess->nvme_del_work);
-	} else {
-		INIT_WORK(&sess->free_work, qlt_free_session_done);
-		schedule_work(&sess->free_work);
-	}
+	INIT_WORK(&sess->free_work, qlt_free_session_done);
+	schedule_work(&sess->free_work);
 }
 EXPORT_SYMBOL(qlt_unreg_sess);
 
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index 653d5ea6c5d9..1f5b5c8a7f72 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -86,15 +86,10 @@ unsigned int scsi_logging_level;
 EXPORT_SYMBOL(scsi_logging_level);
 #endif
 
-/* sd, scsi core and power management need to coordinate flushing async actions */
-ASYNC_DOMAIN(scsi_sd_probe_domain);
-EXPORT_SYMBOL(scsi_sd_probe_domain);
-
 /*
- * Separate domain (from scsi_sd_probe_domain) to maximize the benefit of
- * asynchronous system resume operations. It is marked 'exclusive' to avoid
- * being included in the async_synchronize_full() that is invoked by
- * dpm_resume()
+ * Domain for asynchronous system resume operations. It is marked 'exclusive'
+ * to avoid being included in the async_synchronize_full() that is invoked by
+ * dpm_resume().
  */
 ASYNC_DOMAIN_EXCLUSIVE(scsi_sd_pm_domain);
 EXPORT_SYMBOL(scsi_sd_pm_domain);
@@ -821,7 +816,6 @@ static void __exit exit_scsi(void)
 	scsi_exit_devinfo();
 	scsi_exit_procfs();
 	scsi_exit_queue();
-	async_unregister_domain(&scsi_sd_probe_domain);
 }
 
 subsys_initcall(init_scsi);
diff --git a/drivers/scsi/scsi_debugfs.h b/drivers/scsi/scsi_debugfs.h
index 951b043e82d0..d125d1bd4184 100644
--- a/drivers/scsi/scsi_debugfs.h
+++ b/drivers/scsi/scsi_debugfs.h
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 struct request;
 struct seq_file;
 
diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index bfa569facd5b..1c470e31ae81 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1055,7 +1055,7 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
 	struct scsi_device *sdev = scmd->device;
 	struct Scsi_Host *shost = sdev->host;
 	DECLARE_COMPLETION_ONSTACK(done);
-	unsigned long timeleft = timeout;
+	unsigned long timeleft = timeout, delay;
 	struct scsi_eh_save ses;
 	const unsigned long stall_for = msecs_to_jiffies(100);
 	int rtn;
@@ -1066,7 +1066,29 @@ retry:
 
 	scsi_log_send(scmd);
 	scmd->scsi_done = scsi_eh_done;
-	rtn = shost->hostt->queuecommand(shost, scmd);
+
+	/*
+	 * Lock sdev->state_mutex to avoid that scsi_device_quiesce() can
+	 * change the SCSI device state after we have examined it and before
+	 * .queuecommand() is called.
+	 */
+	mutex_lock(&sdev->state_mutex);
+	while (sdev->sdev_state == SDEV_BLOCK && timeleft > 0) {
+		mutex_unlock(&sdev->state_mutex);
+		SCSI_LOG_ERROR_RECOVERY(5, sdev_printk(KERN_DEBUG, sdev,
+			"%s: state %d <> %d\n", __func__, sdev->sdev_state,
+			SDEV_BLOCK));
+		delay = min(timeleft, stall_for);
+		timeleft -= delay;
+		msleep(jiffies_to_msecs(delay));
+		mutex_lock(&sdev->state_mutex);
+	}
+	if (sdev->sdev_state != SDEV_BLOCK)
+		rtn = shost->hostt->queuecommand(shost, scmd);
+	else
+		rtn = SCSI_MLQUEUE_DEVICE_BUSY;
+	mutex_unlock(&sdev->state_mutex);
+
 	if (rtn) {
 		if (timeleft > stall_for) {
 			scsi_eh_restore_cmnd(scmd, &ses);
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 65d0a10c76ad..a2fa31417749 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2616,10 +2616,6 @@ EXPORT_SYMBOL_GPL(scsi_internal_device_block_nowait);
  * a legal transition). When the device is in this state, command processing
  * is paused until the device leaves the SDEV_BLOCK state. See also
  * scsi_internal_device_unblock().
- *
- * To do: avoid that scsi_send_eh_cmnd() calls queuecommand() after
- * scsi_internal_device_block() has blocked a SCSI device and also
- * remove the rport mutex lock and unlock calls from srp_queuecommand().
  */
 static int scsi_internal_device_block(struct scsi_device *sdev)
 {
diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
index 48ee68059fe6..74ded5f3c236 100644
--- a/drivers/scsi/scsi_pm.c
+++ b/drivers/scsi/scsi_pm.c
@@ -176,11 +176,7 @@ static int scsi_bus_resume_common(struct device *dev,
 
 static int scsi_bus_prepare(struct device *dev)
 {
-	if (scsi_is_sdev_device(dev)) {
-		/* sd probing uses async_schedule. Wait until it finishes. */
-		async_synchronize_full_domain(&scsi_sd_probe_domain);
-
-	} else if (scsi_is_host_device(dev)) {
+	if (scsi_is_host_device(dev)) {
 		/* Wait until async scanning is finished */
 		scsi_complete_async_scans();
 	}
diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
index 5f21547b2ad2..cc2859d76d81 100644
--- a/drivers/scsi/scsi_priv.h
+++ b/drivers/scsi/scsi_priv.h
@@ -175,7 +175,6 @@ static inline void scsi_autopm_put_host(struct Scsi_Host *h) {}
 #endif /* CONFIG_PM */
 
 extern struct async_domain scsi_sd_pm_domain;
-extern struct async_domain scsi_sd_probe_domain;
 
 /* scsi_dh.c */
 #ifdef CONFIG_SCSI_DH
diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index dbb206c90ecf..64c96c7828ee 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -767,8 +767,13 @@ store_state_field(struct device *dev, struct device_attribute *attr,
 			break;
 		}
 	}
-	if (!state)
+	switch (state) {
+	case SDEV_RUNNING:
+	case SDEV_OFFLINE:
+		break;
+	default:
 		return -EINVAL;
+	}
 
 	mutex_lock(&sdev->state_mutex);
 	ret = scsi_device_set_state(sdev, state);
diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
index 118a687709ed..2732fa65119c 100644
--- a/drivers/scsi/scsi_transport_fc.c
+++ b/drivers/scsi/scsi_transport_fc.c
@@ -3,9 +3,6 @@
  * FiberChannel transport specific attributes exported to sysfs.
  *
  * Copyright (c) 2003 Silicon Graphics, Inc.  All rights reserved.
- *
- * ========
- *
  * Copyright (C) 2004-2007 James Smart, Emulex Corporation
  *    Rewrite for host, target, device, and remote port attributes,
  *    statistics, and service functions...
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a3406bd62391..149d406aacc9 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -568,6 +568,7 @@ static struct scsi_driver sd_template = {
 	.name		= "sd",
 	.owner		= THIS_MODULE,
 	.probe		= sd_probe,
+	.probe_type	= PROBE_PREFER_ASYNCHRONOUS,
 	.remove		= sd_remove,
 	.shutdown	= sd_shutdown,
 	.pm		= &sd_pm_ops,
@@ -3252,69 +3253,6 @@ static int sd_format_disk_name(char *prefix, int index, char *buf, int buflen)
 	return 0;
 }
 
-/*
- * The asynchronous part of sd_probe
- */
-static void sd_probe_async(void *data, async_cookie_t cookie)
-{
-	struct scsi_disk *sdkp = data;
-	struct scsi_device *sdp;
-	struct gendisk *gd;
-	u32 index;
-	struct device *dev;
-
-	sdp = sdkp->device;
-	gd = sdkp->disk;
-	index = sdkp->index;
-	dev = &sdp->sdev_gendev;
-
-	gd->major = sd_major((index & 0xf0) >> 4);
-	gd->first_minor = ((index & 0xf) << 4) | (index & 0xfff00);
-
-	gd->fops = &sd_fops;
-	gd->private_data = &sdkp->driver;
-	gd->queue = sdkp->device->request_queue;
-
-	/* defaults, until the device tells us otherwise */
-	sdp->sector_size = 512;
-	sdkp->capacity = 0;
-	sdkp->media_present = 1;
-	sdkp->write_prot = 0;
-	sdkp->cache_override = 0;
-	sdkp->WCE = 0;
-	sdkp->RCD = 0;
-	sdkp->ATO = 0;
-	sdkp->first_scan = 1;
-	sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS;
-
-	sd_revalidate_disk(gd);
-
-	gd->flags = GENHD_FL_EXT_DEVT;
-	if (sdp->removable) {
-		gd->flags |= GENHD_FL_REMOVABLE;
-		gd->events |= DISK_EVENT_MEDIA_CHANGE;
-		gd->event_flags = DISK_EVENT_FLAG_POLL | DISK_EVENT_FLAG_UEVENT;
-	}
-
-	blk_pm_runtime_init(sdp->request_queue, dev);
-	device_add_disk(dev, gd, NULL);
-	if (sdkp->capacity)
-		sd_dif_config_host(sdkp);
-
-	sd_revalidate_disk(gd);
-
-	if (sdkp->security) {
-		sdkp->opal_dev = init_opal_dev(sdp, &sd_sec_submit);
-		if (sdkp->opal_dev)
-			sd_printk(KERN_NOTICE, sdkp, "supports TCG Opal\n");
-	}
-
-	sd_printk(KERN_NOTICE, sdkp, "Attached SCSI %sdisk\n",
-		  sdp->removable ? "removable " : "");
-	scsi_autopm_put_device(sdp);
-	put_device(&sdkp->dev);
-}
-
 /**
  *	sd_probe - called during driver initialization and whenever a
  *	new scsi device is attached to the system. It is called once
@@ -3404,8 +3342,50 @@ static int sd_probe(struct device *dev)
 	get_device(dev);
 	dev_set_drvdata(dev, sdkp);
 
-	get_device(&sdkp->dev);	/* prevent release before async_schedule */
-	async_schedule_domain(sd_probe_async, sdkp, &scsi_sd_probe_domain);
+	gd->major = sd_major((index & 0xf0) >> 4);
+	gd->first_minor = ((index & 0xf) << 4) | (index & 0xfff00);
+
+	gd->fops = &sd_fops;
+	gd->private_data = &sdkp->driver;
+	gd->queue = sdkp->device->request_queue;
+
+	/* defaults, until the device tells us otherwise */
+	sdp->sector_size = 512;
+	sdkp->capacity = 0;
+	sdkp->media_present = 1;
+	sdkp->write_prot = 0;
+	sdkp->cache_override = 0;
+	sdkp->WCE = 0;
+	sdkp->RCD = 0;
+	sdkp->ATO = 0;
+	sdkp->first_scan = 1;
+	sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS;
+
+	sd_revalidate_disk(gd);
+
+	gd->flags = GENHD_FL_EXT_DEVT;
+	if (sdp->removable) {
+		gd->flags |= GENHD_FL_REMOVABLE;
+		gd->events |= DISK_EVENT_MEDIA_CHANGE;
+		gd->event_flags = DISK_EVENT_FLAG_POLL | DISK_EVENT_FLAG_UEVENT;
+	}
+
+	blk_pm_runtime_init(sdp->request_queue, dev);
+	device_add_disk(dev, gd, NULL);
+	if (sdkp->capacity)
+		sd_dif_config_host(sdkp);
+
+	sd_revalidate_disk(gd);
+
+	if (sdkp->security) {
+		sdkp->opal_dev = init_opal_dev(sdp, &sd_sec_submit);
+		if (sdkp->opal_dev)
+			sd_printk(KERN_NOTICE, sdkp, "supports TCG Opal\n");
+	}
+
+	sd_printk(KERN_NOTICE, sdkp, "Attached SCSI %sdisk\n",
+		  sdp->removable ? "removable " : "");
+	scsi_autopm_put_device(sdp);
 
 	return 0;
 
@@ -3441,7 +3421,6 @@ static int sd_remove(struct device *dev)
 	scsi_autopm_get_device(sdkp->device);
 
 	async_synchronize_full_domain(&scsi_sd_pm_domain);
-	async_synchronize_full_domain(&scsi_sd_probe_domain);
 	device_del(&sdkp->dev);
 	del_gendisk(sdkp->disk);
 	sd_shutdown(dev);
diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
index 60f01a7b728c..c2afba2a5414 100644
--- a/drivers/scsi/ses.c
+++ b/drivers/scsi/ses.c
@@ -3,12 +3,7 @@
  * SCSI Enclosure Services
  *
  * Copyright (C) 2008 James Bottomley <James.Bottomley@HansenPartnership.com>
- *
-**-----------------------------------------------------------------------------
-**
-**
-**-----------------------------------------------------------------------------
-*/
+ */
 
 #include <linux/slab.h>
 #include <linux/module.h>
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index baada5b50bb1..e3266a64a477 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -228,7 +228,6 @@ static DEFINE_IDR(st_index_idr);
 
 
 
-#include "osst_detect.h"
 #ifndef SIGS_FROM_OSST
 #define SIGS_FROM_OSST \
 	{"OnStream", "SC-", "", "osst"}, \
@@ -4267,9 +4266,10 @@ static int st_probe(struct device *dev)
 	if (SDp->type != TYPE_TAPE)
 		return -ENODEV;
 	if ((stp = st_incompatible(SDp))) {
-		sdev_printk(KERN_INFO, SDp, "Found incompatible tape\n");
 		sdev_printk(KERN_INFO, SDp,
-			    "st: The suggested driver is %s.\n", stp);
+			    "OnStream tapes are no longer supported;\n");
+		sdev_printk(KERN_INFO, SDp,
+			    "please mail to linux-scsi@vger.kernel.org.\n");
 		return -ENODEV;
 	}
 
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b89269120a2d..c2b6a0ca6933 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -375,6 +375,7 @@ enum storvsc_request_type {
 
 static int storvsc_ringbuffer_size = (128 * 1024);
 static u32 max_outstanding_req_per_channel;
+static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth);
 
 static int storvsc_vcpus_per_sub_channel = 4;
 
@@ -1699,6 +1700,7 @@ static struct scsi_host_template scsi_driver = {
 	.dma_boundary =		PAGE_SIZE-1,
 	.no_write_same =	1,
 	.track_queue_depth =	1,
+	.change_queue_depth =	storvsc_change_queue_depth,
};
 
 enum {
@@ -1905,6 +1907,15 @@ err_out0:
 	return ret;
 }
 
+/* Change a scsi target's queue depth */
+static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth)
+{
+	if (queue_depth > scsi_driver.can_queue)
+		queue_depth = scsi_driver.can_queue;
+
+	return scsi_change_queue_depth(sdev, queue_depth);
+}
+
 static int storvsc_remove(struct hv_device *dev)
 {
 	struct storvsc_device *stor_device = hv_get_drvdata(dev);
diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index b4d1b5c22987..ee4b1da1e223 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -3,6 +3,7 @@
  * Copyright (c) 2013-2016, Linux Foundation. All rights reserved.
  */
 
+#include <linux/acpi.h>
 #include <linux/time.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
@@ -161,6 +162,9 @@ static int ufs_qcom_init_lane_clks(struct ufs_qcom_host *host)
 	int err = 0;
 	struct device *dev = host->hba->dev;
 
+	if (has_acpi_companion(dev))
+		return 0;
+
 	err = ufs_qcom_host_clk_get(dev, "rx_lane0_sync_clk",
 					&host->rx_l0_sync_clk, false);
 	if (err)
@@ -1127,9 +1131,13 @@ static int ufs_qcom_init(struct ufs_hba *hba)
 			__func__, err);
 		goto out_variant_clear;
 	} else if (IS_ERR(host->generic_phy)) {
-		err = PTR_ERR(host->generic_phy);
-		dev_err(dev, "%s: PHY get failed %d\n", __func__, err);
-		goto out_variant_clear;
+		if (has_acpi_companion(dev)) {
+			host->generic_phy = NULL;
+		} else {
+			err = PTR_ERR(host->generic_phy);
+			dev_err(dev, "%s: PHY get failed %d\n", __func__, err);
+			goto out_variant_clear;
+		}
 	}
 
 	err = ufs_qcom_bus_register(host);
@@ -1599,6 +1607,14 @@ static const struct of_device_id ufs_qcom_of_match[] = {
};
 MODULE_DEVICE_TABLE(of, ufs_qcom_of_match);
 
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id ufs_qcom_acpi_match[] = {
+	{ "QCOM24A5" },
+	{ },
+};
+MODULE_DEVICE_TABLE(acpi, ufs_qcom_acpi_match);
+#endif
+
 static const struct dev_pm_ops ufs_qcom_pm_ops = {
 	.suspend	= ufshcd_pltfrm_suspend,
 	.resume		= ufshcd_pltfrm_resume,
@@ -1615,6 +1631,7 @@ static struct platform_driver ufs_qcom_pltform = {
 		.name	= "ufshcd-qcom",
 		.pm	= &ufs_qcom_pm_ops,
 		.of_match_table = of_match_ptr(ufs_qcom_of_match),
+		.acpi_match_table = ACPI_PTR(ufs_qcom_acpi_match),
 	},
};
module_platform_driver(ufs_qcom_pltform);
diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
index 8d9332bb7d0c..f478685122ff 100644
--- a/drivers/scsi/ufs/ufs-sysfs.c
+++ b/drivers/scsi/ufs/ufs-sysfs.c
@@ -122,7 +122,7 @@ static void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 {
 	unsigned long flags;
 
-	if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT))
+	if (!ufshcd_is_auto_hibern8_supported(hba))
 		return;
 
 	spin_lock_irqsave(hba->host->host_lock, flags);
@@ -164,7 +164,7 @@ static ssize_t auto_hibern8_show(struct device *dev,
 {
 	struct ufs_hba *hba = dev_get_drvdata(dev);
 
-	if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT))
+	if (!ufshcd_is_auto_hibern8_supported(hba))
 		return -EOPNOTSUPP;
 
 	return snprintf(buf, PAGE_SIZE, "%d\n", ufshcd_ahit_to_us(hba->ahit));
@@ -177,7 +177,7 @@ static ssize_t auto_hibern8_store(struct device *dev,
 	struct ufs_hba *hba = dev_get_drvdata(dev);
 	unsigned int timer;
 
-	if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT))
+	if (!ufshcd_is_auto_hibern8_supported(hba))
 		return -EOPNOTSUPP;
 
 	if (kstrtouint(buf, 0, &timer))
diff --git a/drivers/scsi/ufs/ufs_bsg.c b/drivers/scsi/ufs/ufs_bsg.c
index 869e71f861d6..a9344eb4e047 100644
--- a/drivers/scsi/ufs/ufs_bsg.c
+++ b/drivers/scsi/ufs/ufs_bsg.c
@@ -122,7 +122,7 @@ static int ufs_bsg_request(struct bsg_job *job)
 		memcpy(&uc, &bsg_request->upiu_req.uc, UIC_CMD_SIZE);
 		ret = ufshcd_send_uic_cmd(hba, &uc);
 		if (ret)
-			dev_dbg(hba->dev,
+			dev_err(hba->dev,
 				"send uic cmd: error code %d\n", ret);
 
 		memcpy(&bsg_reply->upiu_rsp.uc, &uc, UIC_CMD_SIZE);
@@ -149,7 +149,9 @@ static int ufs_bsg_request(struct bsg_job *job)
 out:
 	bsg_reply->result = ret;
 	job->reply_len = sizeof(struct ufs_bsg_reply);
-	bsg_job_done(job, ret, bsg_reply->reply_payload_rcv_len);
+	/* complete the job here only if no error */
+	if (ret == 0)
+		bsg_job_done(job, ret, bsg_reply->reply_payload_rcv_len);
 
 	return ret;
 }
diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
index ffe6f82182ba..3b19de3ae9a3 100644
--- a/drivers/scsi/ufs/ufshcd-pci.c
+++ b/drivers/scsi/ufs/ufshcd-pci.c
@@ -200,6 +200,8 @@ static const struct dev_pm_ops ufshcd_pci_pm_ops = {
 static const struct pci_device_id ufshcd_pci_tbl[] = {
 	{ PCI_VENDOR_ID_SAMSUNG, 0xC00C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
 	{ PCI_VDEVICE(INTEL, 0x9DFA), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
+	{ PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
+	{ PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
 	{ }	/* terminate list */
};
 
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 3fe3029617a8..04d3686511c8 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -3908,7 +3908,7 @@ static void ufshcd_auto_hibern8_enable(struct ufs_hba *hba)
 {
 	unsigned long flags;
 
-	if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT) || !hba->ahit)
+	if (!ufshcd_is_auto_hibern8_supported(hba) || !hba->ahit)
 		return;
 
 	spin_lock_irqsave(hba->host->host_lock, flags);
@@ -5255,6 +5255,7 @@ static void ufshcd_err_handler(struct work_struct *work)
 		goto skip_err_handling;
 	}
 	if ((hba->saved_err & INT_FATAL_ERRORS) ||
+	    (hba->saved_err & UFSHCD_UIC_HIBERN8_MASK) ||
 	    ((hba->saved_err & UIC_ERROR) &&
 	    (hba->saved_uic_err & (UFSHCD_UIC_DL_PA_INIT_ERROR |
 				   UFSHCD_UIC_DL_NAC_RECEIVED_ERROR |
@@ -5414,6 +5415,23 @@ static void ufshcd_update_uic_error(struct ufs_hba *hba)
 			__func__, hba->uic_error);
 }
 
+static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba,
+					 u32 intr_mask)
+{
+	if (!ufshcd_is_auto_hibern8_supported(hba))
+		return false;
+
+	if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK))
+		return false;
+
+	if (hba->active_uic_cmd &&
+	    (hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_ENTER ||
+	     hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_EXIT))
+		return false;
+
+	return true;
+}
+
 /**
  * ufshcd_check_errors - Check for errors that need s/w attention
  * @hba: per-adapter instance
@@ -5432,6 +5450,15 @@ static void ufshcd_check_errors(struct ufs_hba *hba)
 		queue_eh_work = true;
 	}
 
+	if (hba->errors & UFSHCD_UIC_HIBERN8_MASK) {
+		dev_err(hba->dev,
+			"%s: Auto Hibern8 %s failed - status: 0x%08x, upmcrs: 0x%08x\n",
+			__func__, (hba->errors & UIC_HIBERNATE_ENTER) ?
+			"Enter" : "Exit",
+			hba->errors, ufshcd_get_upmcrs(hba));
+		queue_eh_work = true;
+	}
+
 	if (queue_eh_work) {
 		/*
 		 * update the transfer error masks to sticky bits, let's do this
@@ -5494,6 +5521,10 @@ static void ufshcd_tmc_handler(struct ufs_hba *hba)
 static void ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
 {
 	hba->errors = UFSHCD_ERROR_MASK & intr_status;
+
+	if (ufshcd_is_auto_hibern8_error(hba, intr_status))
+		hba->errors |= (UFSHCD_UIC_HIBERN8_MASK & intr_status);
+
 	if (hba->errors)
 		ufshcd_check_errors(hba);
 
@@ -8313,7 +8344,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 							UIC_LINK_HIBERN8_STATE);
 
 	/* Set the default auto-hiberate idle timer value to 150 ms */
-	if (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT) {
+	if (ufshcd_is_auto_hibern8_supported(hba) && !hba->ahit) {
 		hba->ahit = FIELD_PREP(UFSHCI_AHIBERN8_TIMER_MASK, 150) |
 			    FIELD_PREP(UFSHCI_AHIBERN8_SCALE_MASK, 3);
 	}
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index ecfa898b9ccc..994d73d03207 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -740,6 +740,11 @@ return true;
 #endif
 }
 
+static inline bool ufshcd_is_auto_hibern8_supported(struct ufs_hba *hba)
+{
+	return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT);
+}
+
 #define ufshcd_writel(hba, val, reg)	\
 	writel((val), (hba)->mmio_base + (reg))
 #define ufshcd_readl(hba, reg)	\
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index 6fa889de5ee5..dbb75cd28dc8 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -144,8 +144,10 @@ enum {
 #define CONTROLLER_FATAL_ERROR			0x10000
 #define SYSTEM_BUS_FATAL_ERROR			0x20000
 
-#define UFSHCD_UIC_PWR_MASK	(UIC_HIBERNATE_ENTER |\
-				UIC_HIBERNATE_EXIT |\
+#define UFSHCD_UIC_HIBERN8_MASK	(UIC_HIBERNATE_ENTER |\
+				UIC_HIBERNATE_EXIT)
+
+#define UFSHCD_UIC_PWR_MASK	(UFSHCD_UIC_HIBERN8_MASK |\
 				UIC_POWER_MODE)
 
 #define UFSHCD_UIC_MASK		(UIC_COMMAND_COMPL | UFSHCD_UIC_PWR_MASK)
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 13f1b3b9923a..1705398b026a 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -74,9 +74,6 @@ struct virtio_scsi {
 
 	u32 num_queues;
 
-	/* If the affinity hint is set for virtqueues */
-	bool affinity_hint_set;
-
 	struct hlist_node node;
 
 	/* Protected by event_vq lock */
diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c
index c2f40068f235..edc8a139a60d 100644
--- a/drivers/scsi/wd719x.c
+++ b/drivers/scsi/wd719x.c
@@ -108,8 +108,15 @@ static inline int wd719x_wait_done(struct wd719x *wd, int timeout)
 	}
 
 	if (status != WD719X_INT_NOERRORS) {
+		u8 sue = wd719x_readb(wd, WD719X_AMR_SCB_ERROR);
+		/* we get this after wd719x_dev_reset, it's not an error */
+		if (sue == WD719X_SUE_TERM)
+			return 0;
+		/* we get this after wd719x_bus_reset, it's not an error */
+		if (sue == WD719X_SUE_RESET)
+			return 0;
 		dev_err(&wd->pdev->dev, "direct command failed, status 0x%02x, SUE 0x%02x\n",
-			status, wd719x_readb(wd, WD719X_AMR_SCB_ERROR));
+			status, sue);
 		return -EIO;
 	}
 
@@ -128,8 +135,10 @@ static int wd719x_direct_cmd(struct wd719x *wd, u8 opcode, u8 dev, u8 lun,
 	if (wd719x_wait_ready(wd))
 		return -ETIMEDOUT;
 
-	/* make sure we get NO interrupts */
-	dev |= WD719X_DISABLE_INT;
+	/* disable interrupts except for RESET/ABORT (it breaks them) */
+	if (opcode != WD719X_CMD_BUSRESET && opcode != WD719X_CMD_ABORT &&
+	    opcode != WD719X_CMD_ABORT_TAG && opcode != WD719X_CMD_RESET)
+		dev |= WD719X_DISABLE_INT;
 	wd719x_writeb(wd, WD719X_AMR_CMD_PARAM, dev);
 	wd719x_writeb(wd, WD719X_AMR_CMD_PARAM_2, lun);
 	wd719x_writeb(wd, WD719X_AMR_CMD_PARAM_3, tag);
@@ -465,6 +474,7 @@ static int wd719x_abort(struct scsi_cmnd *cmd)
 	spin_lock_irqsave(wd->sh->host_lock, flags);
 	result = wd719x_direct_cmd(wd, action, cmd->device->id,
 				   cmd->device->lun, cmd->tag, scb->phys, 0);
+	wd719x_finish_cmd(scb, DID_ABORT);
 	spin_unlock_irqrestore(wd->sh->host_lock, flags);
 	if (result)
 		return FAILED;
@@ -477,6 +487,7 @@ static int wd719x_reset(struct scsi_cmnd *cmd, u8 opcode, u8 device)
 	int result;
 	unsigned long flags;
 	struct wd719x *wd = shost_priv(cmd->device->host);
+	struct wd719x_scb *scb, *tmp;
 
 	dev_info(&wd->pdev->dev, "%s reset requested\n",
 		 (opcode == WD719X_CMD_BUSRESET) ? "bus" : "device");
@@ -484,6 +495,12 @@ static int wd719x_reset(struct scsi_cmnd *cmd, u8 opcode, u8 device)
 	spin_lock_irqsave(wd->sh->host_lock, flags);
 	result = wd719x_direct_cmd(wd, opcode, device, 0, 0, 0,
 				   WD719X_WAIT_FOR_SCSI_RESET);
+	/* flush all SCBs (or all for a device if dev_reset) */
+	list_for_each_entry_safe(scb, tmp, &wd->active_scbs, list) {
+		if (opcode == WD719X_CMD_BUSRESET ||
+		    scb->cmd->device->id == device)
+			wd719x_finish_cmd(scb, DID_RESET);
+	}
 	spin_unlock_irqrestore(wd->sh->host_lock, flags);
 	if (result)
 		return FAILED;
@@ -506,22 +523,23 @@ static int wd719x_host_reset(struct scsi_cmnd *cmd)
 	struct wd719x *wd = shost_priv(cmd->device->host);
 	struct wd719x_scb *scb, *tmp;
 	unsigned long flags;
-	int result;
 
 	dev_info(&wd->pdev->dev, "host reset requested\n");
 	spin_lock_irqsave(wd->sh->host_lock, flags);
-	/* Try to reinit the RISC */
-	if (wd719x_chip_init(wd) == 0)
-		result = SUCCESS;
-	else
-		result = FAILED;
+	/* stop the RISC */
+	if (wd719x_direct_cmd(wd, WD719X_CMD_SLEEP, 0, 0, 0, 0,
+			      WD719X_WAIT_FOR_RISC))
+		dev_warn(&wd->pdev->dev, "RISC sleep command failed\n");
+	/* disable RISC */
+	wd719x_writeb(wd, WD719X_PCI_MODE_SELECT, 0);
 
 	/* flush all SCBs */
 	list_for_each_entry_safe(scb, tmp, &wd->active_scbs, list)
-		wd719x_finish_cmd(scb, result);
+		wd719x_finish_cmd(scb, DID_RESET);
 	spin_unlock_irqrestore(wd->sh->host_lock, flags);
 
-	return result;
+	/* Try to reinit the RISC */
+	return wd719x_chip_init(wd) == 0 ? SUCCESS : FAILED;
 }
 
 static int wd719x_biosparam(struct scsi_device *sdev, struct block_device *bdev,
@@ -673,7 +691,7 @@ static irqreturn_t wd719x_interrupt(int irq, void *dev_id)
 		else
 			dev_err(&wd->pdev->dev, "card returned invalid SCB pointer\n");
 	} else
-		dev_warn(&wd->pdev->dev, "direct command 0x%x completed\n",
+		dev_dbg(&wd->pdev->dev, "direct command 0x%x completed\n",
 			regs.bytes.OPC);
 		break;
 	case WD719X_INT_PIOREADY:
diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
index 181a32a6f391..685d771b51d4 100644
--- a/drivers/target/iscsi/iscsi_target_nego.c
+++ b/drivers/target/iscsi/iscsi_target_nego.c
@@ -152,22 +152,11 @@ static u32 iscsi_handle_authentication(
 
 	if (strstr("None", authtype))
 		return 1;
-#ifdef CANSRP
-	else if (strstr("SRP", authtype))
-		return srp_main_loop(conn, auth, in_buf, out_buf,
-				&in_length, out_length);
-#endif
 	else if (strstr("CHAP", authtype))
 		return chap_main_loop(conn, auth, in_buf, out_buf,
 				&in_length, out_length);
-	else if (strstr("SPKM1", authtype))
-		return 2;
-	else if (strstr("SPKM2", authtype))
-		return 2;
-	else if (strstr("KRB5", authtype))
-		return 2;
-	else
-		return 2;
+	/* SRP, SPKM1, SPKM2 and KRB5 are unsupported */
+	return 2;
 }
 
 static void iscsi_remove_failed_auth_entry(struct iscsi_conn *conn)
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index b43d6385a1a0..04eda111920e 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1824,20 +1824,18 @@ static int tcmu_update_uio_info(struct tcmu_dev *udev)
 {
 	struct tcmu_hba *hba = udev->hba->hba_ptr;
 	struct uio_info *info;
-	size_t size, used;
 	char *str;
 
 	info = &udev->uio_info;
-	size = snprintf(NULL, 0, "tcm-user/%u/%s/%s", hba->host_id, udev->name,
-			udev->dev_config);
-	size += 1; /* for \0 */
-	str = kmalloc(size, GFP_KERNEL);
-	if (!str)
-		return -ENOMEM;
 
-	used = snprintf(str, size, "tcm-user/%u/%s", hba->host_id, udev->name);
 	if (udev->dev_config[0])
-		snprintf(str + used, size - used, "/%s", udev->dev_config);
+		str = kasprintf(GFP_KERNEL, "tcm-user/%u/%s/%s", hba->host_id,
+				udev->name, udev->dev_config);
+	else
+		str = kasprintf(GFP_KERNEL, "tcm-user/%u/%s", hba->host_id,
+				udev->name);
+	if (!str)
+		return -ENOMEM;
 
 	/* If the old string exists, free it */
 	kfree(info->name);
diff --git a/include/scsi/fc/fc_fip.h b/include/scsi/fc/fc_fip.h
index 9710254fd98c..e0a3423ba09e 100644
--- a/include/scsi/fc/fc_fip.h
+++ b/include/scsi/fc/fc_fip.h
@@ -1,18 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright 2008 Cisco Systems, Inc.  All rights reserved.
- *
- * This program is free software; you may redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 of the License.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 #ifndef _FC_FIP_H_
 #define _FC_FIP_H_
diff --git a/include/scsi/fc/fc_ms.h b/include/scsi/fc/fc_ms.h
index b1424dccf426..800d53dc9470 100644
--- a/include/scsi/fc/fc_ms.h
+++ b/include/scsi/fc/fc_ms.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
-/* * Copyright(c) 2011 Intel Corporation. All rights reserved.
+/*
+ * Copyright(c) 2011 Intel Corporation. All rights reserved.
  *
  * Maintained at www.Open-FCoE.org
  */
diff --git a/include/scsi/iscsi_if.h b/include/scsi/iscsi_if.h
index 8b31588460d5..92b11c7e0b4f 100644
--- a/include/scsi/iscsi_if.h
+++ b/include/scsi/iscsi_if.h
@@ -5,8 +5,6 @@
  * Copyright (C) 2005 Dmitry Yusupov
  * Copyright (C) 2005 Alex Aizman
  * maintained by open-iscsi@googlegroups.com
- *
- * See the file COPYING included with this distribution for more details.
  */
 
 #ifndef ISCSI_IF_H
diff --git a/include/scsi/iscsi_proto.h b/include/scsi/iscsi_proto.h
index aeb4980745ca..b71b5c4f418c 100644
--- a/include/scsi/iscsi_proto.h
+++ b/include/scsi/iscsi_proto.h
@@ -5,8 +5,6 @@
  * Copyright (C) 2005 Dmitry Yusupov
  * Copyright (C) 2005 Alex Aizman
  * maintained by open-iscsi@googlegroups.com
- *
- * See the file COPYING included with this distribution for more details.
  */
 
 #ifndef ISCSI_PROTO_H
diff --git a/include/scsi/libiscsi_tcp.h b/include/scsi/libiscsi_tcp.h
index 172f15e3dfd6..7c8ba9d7378b 100644
--- a/include/scsi/libiscsi_tcp.h
+++ b/include/scsi/libiscsi_tcp.h
@@ -5,8 +5,6 @@
  * Copyright (C) 2008 Mike Christie
  * Copyright (C) 2008 Red Hat, Inc. All rights reserved.
  * maintained by open-iscsi@googlegroups.com
- *
- * See the file COPYING included with this distribution for more details.
  */
 
 #ifndef LIBISCSI_TCP_H
diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
index e9664bb7d188..4e2d61e8fb1e 100644
--- a/include/scsi/libsas.h
+++ b/include/scsi/libsas.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * SAS host prototypes and structures header file
  *
@@ -207,8 +207,7 @@ struct sas_work {
 	struct work_struct work;
 };
 
-/* Lots of code duplicates this in the SCSI tree, which can be factored out */
-static inline bool sas_dev_type_is_expander(enum sas_device_type type)
+static inline bool dev_is_expander(enum sas_device_type type)
 {
 	return type == SAS_EDGE_EXPANDER_DEVICE ||
 		type == SAS_FANOUT_EXPANDER_DEVICE;
diff --git a/include/scsi/sas.h b/include/scsi/sas.h
index 97a0f6bd201c..a5d8ae49198c 100644
--- a/include/scsi/sas.h
+++ b/include/scsi/sas.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * SAS structures and definitions header file
  *
diff --git a/include/scsi/scsi_transport.h b/include/scsi/scsi_transport.h
index 0580dce280a1..a0458bda3148 100644
--- a/include/scsi/scsi_transport.h
+++ b/include/scsi/scsi_transport.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * Transport specific attributes.
  *
diff --git a/include/scsi/scsi_transport_fc.h b/include/scsi/scsi_transport_fc.h
index 43f09c7c25a2..7db2dd783834 100644
--- a/include/scsi/scsi_transport_fc.h
+++ b/include/scsi/scsi_transport_fc.h
@@ -3,9 +3,6 @@
  * FiberChannel transport specific attributes exported to sysfs.
  *
  * Copyright (c) 2003 Silicon Graphics, Inc. All rights reserved.
- *
- * ========
- *
  * Copyright (C) 2004-2007 James Smart, Emulex Corporation
  * Rewrite for host, target, device, and remote port attributes,
  * statistics, and service functions...
diff --git a/include/uapi/scsi/fc/fc_els.h b/include/uapi/scsi/fc/fc_els.h
index a81c53508cc6..76f627f0d13b 100644
--- a/include/uapi/scsi/fc/fc_els.h
+++ b/include/uapi/scsi/fc/fc_els.h
@@ -2,19 +2,6 @@
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
diff --git a/include/uapi/scsi/fc/fc_fs.h b/include/uapi/scsi/fc/fc_fs.h
index 8c0a292a61ed..0dab49dbb2f7 100644
--- a/include/uapi/scsi/fc/fc_fs.h
+++ b/include/uapi/scsi/fc/fc_fs.h
@@ -2,19 +2,6 @@
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
diff --git a/include/uapi/scsi/fc/fc_gs.h b/include/uapi/scsi/fc/fc_gs.h
index 2153f3524555..effb4c662fe5 100644
--- a/include/uapi/scsi/fc/fc_gs.h
+++ b/include/uapi/scsi/fc/fc_gs.h
@@ -2,19 +2,6 @@
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
diff --git a/include/uapi/scsi/fc/fc_ns.h b/include/uapi/scsi/fc/fc_ns.h
index 015e5e1ce8f1..4cf0a40a099a 100644
--- a/include/uapi/scsi/fc/fc_ns.h
+++ b/include/uapi/scsi/fc/fc_ns.h
@@ -2,19 +2,6 @@
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
diff --git a/include/uapi/scsi/scsi_bsg_fc.h b/include/uapi/scsi/scsi_bsg_fc.h
index 62597d86beed..52f32a60d056 100644
--- a/include/uapi/scsi/scsi_bsg_fc.h
+++ b/include/uapi/scsi/scsi_bsg_fc.h
@@ -3,21 +3,6 @@
  * FC Transport BSG Interface
  *
  * Copyright (C) 2008 James Smart, Emulex Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  */
 
 #ifndef SCSI_BSG_FC_H
diff --git a/include/uapi/scsi/scsi_netlink.h b/include/uapi/scsi/scsi_netlink.h
index 5ccc2333acab..5dd382054e45 100644
--- a/include/uapi/scsi/scsi_netlink.h
+++ b/include/uapi/scsi/scsi_netlink.h
@@ -4,21 +4,6 @@
  * Used for the posting of outbound SCSI transport events
  *
  * Copyright (C) 2006 James Smart, Emulex Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  */
 #ifndef SCSI_NETLINK_H
 #define SCSI_NETLINK_H
diff --git a/include/uapi/scsi/scsi_netlink_fc.h b/include/uapi/scsi/scsi_netlink_fc.h
index 060f563c38a2..a39023579051 100644
--- a/include/uapi/scsi/scsi_netlink_fc.h
+++ b/include/uapi/scsi/scsi_netlink_fc.h
@@ -3,21 +3,6 @@
  * FC Transport Netlink Interface
  *
  * Copyright (C) 2006 James Smart, Emulex Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  */
 #ifndef SCSI_NETLINK_FC_H
 #define SCSI_NETLINK_FC_H