author    Linus Torvalds <torvalds@woody.osdl.org>  2006-12-04 11:29:45 -0500
committer Linus Torvalds <torvalds@woody.osdl.org>  2006-12-04 11:29:45 -0500
commit    07704eb29a765d2e862000d952fd96271c1464e2
tree      43dcf020188d8eeaeb71fae8c09de1f7aec88c43
parent    f75e3b1de6a72f6eb22f3ab120dd52b902357c03
parent    74f8f557fd0c6f32e17e78c9ef508ca66ef37d3a
Merge branch 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6
* 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6: (34 commits)
[S390] Don't use small stacks when lockdep is used.
[S390] cio: Use device_reprobe() instead of bus_rescan_devices().
[S390] cio: Retry internal operations after vary off.
[S390] cio: Use path verification for last path gone after vary off.
[S390] non-unique constant/macro identifiers.
[S390] Memory detection fixes.
[S390] cio: Make ccw_dev_id_is_equal() more robust.
[S390] Convert extmem spin_lock into a mutex.
[S390] set KBUILD_IMAGE.
[S390] lockdep: show held locks when showing a stackdump
[S390] Add dynamic size check for usercopy functions.
[S390] Use diag260 for memory size detection.
[S390] pfault code cleanup.
[S390] Cleanup memory_chunk array usage.
[S390] Misaligned wait PSW at memory detection.
[S390] cpu shutdown rework
[S390] cpcmd <-> __cpcmd calling issues
[S390] Bad kexec control page allocation.
[S390] Reset infrastructure for re-IPL.
[S390] Some documentation typos.
...
59 files changed, 1090 insertions, 749 deletions
diff --git a/Documentation/s390/CommonIO b/Documentation/s390/CommonIO
index d684a6ac69a8..22f82f21bc60 100644
--- a/Documentation/s390/CommonIO
+++ b/Documentation/s390/CommonIO
@@ -74,7 +74,7 @@ Command line parameters | |||
74 | 74 | ||
75 | Note: While already known devices can be added to the list of devices to be | 75 | Note: While already known devices can be added to the list of devices to be |
76 | ignored, there will be no effect on then. However, if such a device | 76 | ignored, there will be no effect on then. However, if such a device |
77 | disappears and then reappeares, it will then be ignored. | 77 | disappears and then reappears, it will then be ignored. |
78 | 78 | ||
79 | For example, | 79 | For example, |
80 | "echo add 0.0.a000-0.0.accc, 0.0.af00-0.0.afff > /proc/cio_ignore" | 80 | "echo add 0.0.a000-0.0.accc, 0.0.af00-0.0.afff > /proc/cio_ignore" |
@@ -82,7 +82,7 @@ Command line parameters | |||
82 | devices. | 82 | devices. |
83 | 83 | ||
84 | The devices can be specified either by bus id (0.0.abcd) or, for 2.4 backward | 84 | The devices can be specified either by bus id (0.0.abcd) or, for 2.4 backward |
85 | compatibilty, by the device number in hexadecimal (0xabcd or abcd). | 85 | compatibility, by the device number in hexadecimal (0xabcd or abcd). |
86 | 86 | ||
87 | 87 | ||
88 | * /proc/s390dbf/cio_*/ (S/390 debug feature) | 88 | * /proc/s390dbf/cio_*/ (S/390 debug feature) |
diff --git a/Documentation/s390/Debugging390.txt b/Documentation/s390/Debugging390.txt
index 4dd25ee549e9..3f9ddbc23b27 100644
--- a/Documentation/s390/Debugging390.txt
+++ b/Documentation/s390/Debugging390.txt
@@ -7,7 +7,7 @@ | |||
7 | 7 | ||
8 | Overview of Document: | 8 | Overview of Document: |
9 | ===================== | 9 | ===================== |
10 | This document is intended to give an good overview of how to debug | 10 | This document is intended to give a good overview of how to debug |
11 | Linux for s/390 & z/Architecture. It isn't intended as a complete reference & not a | 11 | Linux for s/390 & z/Architecture. It isn't intended as a complete reference & not a |
12 | tutorial on the fundamentals of C & assembly. It doesn't go into | 12 | tutorial on the fundamentals of C & assembly. It doesn't go into |
13 | 390 IO in any detail. It is intended to complement the documents in the | 13 | 390 IO in any detail. It is intended to complement the documents in the |
@@ -300,7 +300,7 @@ On z/Architecture our page indexes are now 2k in size | |||
300 | but only mess with 2 segment indices each time we mess with | 300 | but only mess with 2 segment indices each time we mess with |
301 | a PMD. | 301 | a PMD. |
302 | 302 | ||
303 | 3) As z/Architecture supports upto a massive 5-level page table lookup we | 303 | 3) As z/Architecture supports up to a massive 5-level page table lookup we |
304 | can only use 3 currently on Linux ( as this is all the generic kernel | 304 | can only use 3 currently on Linux ( as this is all the generic kernel |
305 | currently supports ) however this may change in future | 305 | currently supports ) however this may change in future |
306 | this allows us to access ( according to my sums ) | 306 | this allows us to access ( according to my sums ) |
@@ -502,7 +502,7 @@ Notes: | |||
502 | ------ | 502 | ------ |
503 | 1) The only requirement is that registers which are used | 503 | 1) The only requirement is that registers which are used |
504 | by the callee are saved, e.g. the compiler is perfectly | 504 | by the callee are saved, e.g. the compiler is perfectly |
505 | capible of using r11 for purposes other than a frame a | 505 | capable of using r11 for purposes other than a frame a |
506 | frame pointer if a frame pointer is not needed. | 506 | frame pointer if a frame pointer is not needed. |
507 | 2) In functions with variable arguments e.g. printf the calling procedure | 507 | 2) In functions with variable arguments e.g. printf the calling procedure |
508 | is identical to one without variable arguments & the same number of | 508 | is identical to one without variable arguments & the same number of |
@@ -846,7 +846,7 @@ of time searching for debugging info. The following self explanatory line should | |||
846 | instead if the code isn't compiled -g, as it is much faster: | 846 | instead if the code isn't compiled -g, as it is much faster: |
847 | objdump --disassemble-all --syms vmlinux > vmlinux.lst | 847 | objdump --disassemble-all --syms vmlinux > vmlinux.lst |
848 | 848 | ||
849 | As hard drive space is valuble most of us use the following approach. | 849 | As hard drive space is valuable most of us use the following approach. |
850 | 1) Look at the emitted psw on the console to find the crash address in the kernel. | 850 | 1) Look at the emitted psw on the console to find the crash address in the kernel. |
851 | 2) Look at the file System.map ( in the linux directory ) produced when building | 851 | 2) Look at the file System.map ( in the linux directory ) produced when building |
852 | the kernel to find the closest address less than the current PSW to find the | 852 | the kernel to find the closest address less than the current PSW to find the |
@@ -902,7 +902,7 @@ A. It is a tool for intercepting calls to the kernel & logging them | |||
902 | to a file & on the screen. | 902 | to a file & on the screen. |
903 | 903 | ||
904 | Q. What use is it ? | 904 | Q. What use is it ? |
905 | A. You can used it to find out what files a particular program opens. | 905 | A. You can use it to find out what files a particular program opens. |
906 | 906 | ||
907 | 907 | ||
908 | 908 | ||
@@ -911,7 +911,7 @@ Example 1 | |||
911 | If you wanted to know does ping work but didn't have the source | 911 | If you wanted to know does ping work but didn't have the source |
912 | strace ping -c 1 127.0.0.1 | 912 | strace ping -c 1 127.0.0.1 |
913 | & then look at the man pages for each of the syscalls below, | 913 | & then look at the man pages for each of the syscalls below, |
914 | ( In fact this is sometimes easier than looking at some spagetti | 914 | ( In fact this is sometimes easier than looking at some spaghetti |
915 | source which conditionally compiles for several architectures ). | 915 | source which conditionally compiles for several architectures ). |
916 | Not everything that it throws out needs to make sense immediately. | 916 | Not everything that it throws out needs to make sense immediately. |
917 | 917 | ||
@@ -1037,7 +1037,7 @@ e.g. man strace, man alarm, man socket. | |||
1037 | 1037 | ||
1038 | Performance Debugging | 1038 | Performance Debugging |
1039 | ===================== | 1039 | ===================== |
1040 | gcc is capible of compiling in profiling code just add the -p option | 1040 | gcc is capable of compiling in profiling code just add the -p option |
1041 | to the CFLAGS, this obviously affects program size & performance. | 1041 | to the CFLAGS, this obviously affects program size & performance. |
1042 | This can be used by the gprof gnu profiling tool or the | 1042 | This can be used by the gprof gnu profiling tool or the |
1043 | gcov the gnu code coverage tool ( code coverage is a means of testing | 1043 | gcov the gnu code coverage tool ( code coverage is a means of testing |
@@ -1419,7 +1419,7 @@ On a SMP guest issue a command to all CPUs try prefixing the command with cpu al | |||
1419 | To issue a command to a particular cpu try cpu <cpu number> e.g. | 1419 | To issue a command to a particular cpu try cpu <cpu number> e.g. |
1420 | CPU 01 TR I R 2000.3000 | 1420 | CPU 01 TR I R 2000.3000 |
1421 | If you are running on a guest with several cpus & you have a IO related problem | 1421 | If you are running on a guest with several cpus & you have a IO related problem |
1422 | & cannot follow the flow of code but you know it isnt smp related. | 1422 | & cannot follow the flow of code but you know it isn't smp related. |
1423 | from the bash prompt issue | 1423 | from the bash prompt issue |
1424 | shutdown -h now or halt. | 1424 | shutdown -h now or halt. |
1425 | do a Q CPUS to find out how many cpus you have | 1425 | do a Q CPUS to find out how many cpus you have |
@@ -1602,7 +1602,7 @@ V000FFFD0 00010400 80010802 8001085A 000FFFA0 | |||
1602 | our 3rd return address is 8001085A | 1602 | our 3rd return address is 8001085A |
1603 | 1603 | ||
1604 | as the 04B52002 looks suspiciously like rubbish it is fair to assume that the kernel entry routines | 1604 | as the 04B52002 looks suspiciously like rubbish it is fair to assume that the kernel entry routines |
1605 | for the sake of optimisation dont set up a backchain. | 1605 | for the sake of optimisation don't set up a backchain. |
1606 | 1606 | ||
1607 | now look at System.map to see if the addresses make any sense. | 1607 | now look at System.map to see if the addresses make any sense. |
1608 | 1608 | ||
@@ -1638,11 +1638,11 @@ more useful information. | |||
1638 | 1638 | ||
1639 | Unlike other bus architectures modern 390 systems do their IO using mostly | 1639 | Unlike other bus architectures modern 390 systems do their IO using mostly |
1640 | fibre optics & devices such as tapes & disks can be shared between several mainframes, | 1640 | fibre optics & devices such as tapes & disks can be shared between several mainframes, |
1641 | also S390 can support upto 65536 devices while a high end PC based system might be choking | 1641 | also S390 can support up to 65536 devices while a high end PC based system might be choking |
1642 | with around 64. Here is some of the common IO terminology | 1642 | with around 64. Here is some of the common IO terminology |
1643 | 1643 | ||
1644 | Subchannel: | 1644 | Subchannel: |
1645 | This is the logical number most IO commands use to talk to an IO device there can be upto | 1645 | This is the logical number most IO commands use to talk to an IO device there can be up to |
1646 | 0x10000 (65536) of these in a configuration typically there is a few hundred. Under VM | 1646 | 0x10000 (65536) of these in a configuration typically there is a few hundred. Under VM |
1647 | for simplicity they are allocated contiguously, however on the native hardware they are not | 1647 | for simplicity they are allocated contiguously, however on the native hardware they are not |
1648 | they typically stay consistent between boots provided no new hardware is inserted or removed. | 1648 | they typically stay consistent between boots provided no new hardware is inserted or removed. |
@@ -1651,7 +1651,7 @@ HALT SUBCHANNEL,MODIFY SUBCHANNEL,RESUME SUBCHANNEL,START SUBCHANNEL,STORE SUBCH | |||
1651 | TEST SUBCHANNEL ) we use this as the ID of the device we wish to talk to, the most | 1651 | TEST SUBCHANNEL ) we use this as the ID of the device we wish to talk to, the most |
1652 | important of these instructions are START SUBCHANNEL ( to start IO ), TEST SUBCHANNEL ( to check | 1652 | important of these instructions are START SUBCHANNEL ( to start IO ), TEST SUBCHANNEL ( to check |
1653 | whether the IO completed successfully ), & HALT SUBCHANNEL ( to kill IO ), a subchannel | 1653 | whether the IO completed successfully ), & HALT SUBCHANNEL ( to kill IO ), a subchannel |
1654 | can have up to 8 channel paths to a device this offers redunancy if one is not available. | 1654 | can have up to 8 channel paths to a device this offers redundancy if one is not available. |
1655 | 1655 | ||
1656 | 1656 | ||
1657 | Device Number: | 1657 | Device Number: |
@@ -1659,7 +1659,7 @@ This number remains static & Is closely tied to the hardware, there are 65536 of | |||
1659 | also they are made up of a CHPID ( Channel Path ID, the most significant 8 bits ) | 1659 | also they are made up of a CHPID ( Channel Path ID, the most significant 8 bits ) |
1660 | & another lsb 8 bits. These remain static even if more devices are inserted or removed | 1660 | & another lsb 8 bits. These remain static even if more devices are inserted or removed |
1661 | from the hardware, there is a 1 to 1 mapping between Subchannels & Device Numbers provided | 1661 | from the hardware, there is a 1 to 1 mapping between Subchannels & Device Numbers provided |
1662 | devices arent inserted or removed. | 1662 | devices aren't inserted or removed. |
1663 | 1663 | ||
1664 | Channel Control Words: | 1664 | Channel Control Words: |
1665 | CCWS are linked lists of instructions initially pointed to by an operation request block (ORB), | 1665 | CCWS are linked lists of instructions initially pointed to by an operation request block (ORB), |
@@ -1674,7 +1674,7 @@ concurrently, you check how the IO went on by issuing a TEST SUBCHANNEL at each | |||
1674 | from which you receive an Interruption response block (IRB). If you get channel & device end | 1674 | from which you receive an Interruption response block (IRB). If you get channel & device end |
1675 | status in the IRB without channel checks etc. your IO probably went okay. If you didn't you | 1675 | status in the IRB without channel checks etc. your IO probably went okay. If you didn't you |
1676 | probably need a doctor to examine the IRB & extended status word etc. | 1676 | probably need a doctor to examine the IRB & extended status word etc. |
1677 | If an error occurs, more sophistocated control units have a facitity known as | 1677 | If an error occurs, more sophisticated control units have a facility known as |
1678 | concurrent sense this means that if an error occurs Extended sense information will | 1678 | concurrent sense this means that if an error occurs Extended sense information will |
1679 | be presented in the Extended status word in the IRB if not you have to issue a | 1679 | be presented in the Extended status word in the IRB if not you have to issue a |
1680 | subsequent SENSE CCW command after the test subchannel. | 1680 | subsequent SENSE CCW command after the test subchannel. |
@@ -1749,7 +1749,7 @@ Interface (OEMI). | |||
1749 | This byte wide Parallel channel path/bus has parity & data on the "Bus" cable | 1749 | This byte wide Parallel channel path/bus has parity & data on the "Bus" cable |
1750 | & control lines on the "Tag" cable. These can operate in byte multiplex mode for | 1750 | & control lines on the "Tag" cable. These can operate in byte multiplex mode for |
1751 | sharing between several slow devices or burst mode & monopolize the channel for the | 1751 | sharing between several slow devices or burst mode & monopolize the channel for the |
1752 | whole burst. Upto 256 devices can be addressed on one of these cables. These cables are | 1752 | whole burst. Up to 256 devices can be addressed on one of these cables. These cables are |
1753 | about one inch in diameter. The maximum unextended length supported by these cables is | 1753 | about one inch in diameter. The maximum unextended length supported by these cables is |
1754 | 125 Meters but this can be extended up to 2km with a fibre optic channel extended | 1754 | 125 Meters but this can be extended up to 2km with a fibre optic channel extended |
1755 | such as a 3044. The maximum burst speed supported is 4.5 megabytes per second however | 1755 | such as a 3044. The maximum burst speed supported is 4.5 megabytes per second however |
@@ -1759,7 +1759,7 @@ One of these paths can be daisy chained to up to 8 control units. | |||
1759 | 1759 | ||
1760 | ESCON if fibre optic it is also called FICON | 1760 | ESCON if fibre optic it is also called FICON |
1761 | Was introduced by IBM in 1990. Has 2 fibre optic cables & uses either leds or lasers | 1761 | Was introduced by IBM in 1990. Has 2 fibre optic cables & uses either leds or lasers |
1762 | for communication at a signaling rate of upto 200 megabits/sec. As 10bits are transferred | 1762 | for communication at a signaling rate of up to 200 megabits/sec. As 10bits are transferred |
1763 | for every 8 bits info this drops to 160 megabits/sec & to 18.6 Megabytes/sec once | 1763 | for every 8 bits info this drops to 160 megabits/sec & to 18.6 Megabytes/sec once |
1764 | control info & CRC are added. ESCON only operates in burst mode. | 1764 | control info & CRC are added. ESCON only operates in burst mode. |
1765 | 1765 | ||
@@ -1767,7 +1767,7 @@ ESCONs typical max cable length is 3km for the led version & 20km for the laser | |||
1767 | known as XDF ( extended distance facility ). This can be further extended by using an | 1767 | known as XDF ( extended distance facility ). This can be further extended by using an |
1768 | ESCON director which triples the above mentioned ranges. Unlike Bus & Tag as ESCON is | 1768 | ESCON director which triples the above mentioned ranges. Unlike Bus & Tag as ESCON is |
1769 | serial it uses a packet switching architecture the standard Bus & Tag control protocol | 1769 | serial it uses a packet switching architecture the standard Bus & Tag control protocol |
1770 | is however present within the packets. Upto 256 devices can be attached to each control | 1770 | is however present within the packets. Up to 256 devices can be attached to each control |
1771 | unit that uses one of these interfaces. | 1771 | unit that uses one of these interfaces. |
1772 | 1772 | ||
1773 | Common 390 Devices include: | 1773 | Common 390 Devices include: |
@@ -2050,7 +2050,7 @@ list test.c:1,10 | |||
2050 | 2050 | ||
2051 | directory: | 2051 | directory: |
2052 | Adds directories to be searched for source if gdb cannot find the source. | 2052 | Adds directories to be searched for source if gdb cannot find the source. |
2053 | (note it is a bit sensititive about slashes) | 2053 | (note it is a bit sensitive about slashes) |
2054 | e.g. To add the root of the filesystem to the searchpath do | 2054 | e.g. To add the root of the filesystem to the searchpath do |
2055 | directory // | 2055 | directory // |
2056 | 2056 | ||
@@ -2152,7 +2152,7 @@ program as if it just crashed on your system, it is usually called core & create | |||
2152 | current working directory. | 2152 | current working directory. |
2153 | This is very useful in that a customer can mail a core dump to a technical support department | 2153 | This is very useful in that a customer can mail a core dump to a technical support department |
2154 | & the technical support department can reconstruct what happened. | 2154 | & the technical support department can reconstruct what happened. |
2155 | Provided the have an identical copy of this program with debugging symbols compiled in & | 2155 | Provided they have an identical copy of this program with debugging symbols compiled in & |
2156 | the source base of this build is available. | 2156 | the source base of this build is available. |
2157 | In short it is far more useful than something like a crash log could ever hope to be. | 2157 | In short it is far more useful than something like a crash log could ever hope to be. |
2158 | 2158 | ||
diff --git a/Documentation/s390/cds.txt b/Documentation/s390/cds.txt
index 32a96cc39215..05a2b4f7e38f 100644
--- a/Documentation/s390/cds.txt
+++ b/Documentation/s390/cds.txt
@@ -98,7 +98,7 @@ The following chapters describe the I/O related interface routines the | |||
98 | Linux/390 common device support (CDS) provides to allow for device specific | 98 | Linux/390 common device support (CDS) provides to allow for device specific |
99 | driver implementations on the IBM ESA/390 hardware platform. Those interfaces | 99 | driver implementations on the IBM ESA/390 hardware platform. Those interfaces |
100 | intend to provide the functionality required by every device driver | 100 | intend to provide the functionality required by every device driver |
101 | implementaion to allow to drive a specific hardware device on the ESA/390 | 101 | implementation to allow to drive a specific hardware device on the ESA/390 |
102 | platform. Some of the interface routines are specific to Linux/390 and some | 102 | platform. Some of the interface routines are specific to Linux/390 and some |
103 | of them can be found on other Linux platforms implementations too. | 103 | of them can be found on other Linux platforms implementations too. |
104 | Miscellaneous function prototypes, data declarations, and macro definitions | 104 | Miscellaneous function prototypes, data declarations, and macro definitions |
@@ -114,7 +114,7 @@ the ESA/390 architecture has implemented a so called channel subsystem, that | |||
114 | provides a unified view of the devices physically attached to the systems. | 114 | provides a unified view of the devices physically attached to the systems. |
115 | Though the ESA/390 hardware platform knows about a huge variety of different | 115 | Though the ESA/390 hardware platform knows about a huge variety of different |
116 | peripheral attachments like disk devices (aka. DASDs), tapes, communication | 116 | peripheral attachments like disk devices (aka. DASDs), tapes, communication |
117 | controllers, etc. they can all by accessed by a well defined access method and | 117 | controllers, etc. they can all be accessed by a well defined access method and |
118 | they are presenting I/O completion a unified way : I/O interruptions. Every | 118 | they are presenting I/O completion a unified way : I/O interruptions. Every |
119 | single device is uniquely identified to the system by a so called subchannel, | 119 | single device is uniquely identified to the system by a so called subchannel, |
120 | where the ESA/390 architecture allows for 64k devices be attached. | 120 | where the ESA/390 architecture allows for 64k devices be attached. |
@@ -338,7 +338,7 @@ DOIO_REPORT_ALL - report all interrupt conditions | |||
338 | The ccw_device_start() function returns : | 338 | The ccw_device_start() function returns : |
339 | 339 | ||
340 | 0 - successful completion or request successfully initiated | 340 | 0 - successful completion or request successfully initiated |
341 | -EBUSY - The device is currently processing a previous I/O request, or ther is | 341 | -EBUSY - The device is currently processing a previous I/O request, or there is |
342 | a status pending at the device. | 342 | a status pending at the device. |
343 | -ENODEV - cdev is invalid, the device is not operational or the ccw_device is | 343 | -ENODEV - cdev is invalid, the device is not operational or the ccw_device is |
344 | not online. | 344 | not online. |
@@ -361,7 +361,7 @@ first: | |||
361 | -EIO: the common I/O layer terminated the request due to an error state | 361 | -EIO: the common I/O layer terminated the request due to an error state |
362 | 362 | ||
363 | If the concurrent sense flag in the extended status word in the irb is set, the | 363 | If the concurrent sense flag in the extended status word in the irb is set, the |
364 | field irb->scsw.count describes the numer of device specific sense bytes | 364 | field irb->scsw.count describes the number of device specific sense bytes |
365 | available in the extended control word irb->scsw.ecw[0]. No device sensing by | 365 | available in the extended control word irb->scsw.ecw[0]. No device sensing by |
366 | the device driver itself is required. | 366 | the device driver itself is required. |
367 | 367 | ||
@@ -410,7 +410,7 @@ ccw_device_start() must be called disabled and with the ccw device lock held. | |||
410 | 410 | ||
411 | The device driver is allowed to issue the next ccw_device_start() call from | 411 | The device driver is allowed to issue the next ccw_device_start() call from |
412 | within its interrupt handler already. It is not required to schedule a | 412 | within its interrupt handler already. It is not required to schedule a |
413 | bottom-half, unless an non deterministically long running error recovery procedure | 413 | bottom-half, unless a non deterministically long running error recovery procedure |
414 | or similar needs to be scheduled. During I/O processing the Linux/390 generic | 414 | or similar needs to be scheduled. During I/O processing the Linux/390 generic |
415 | I/O device driver support has already obtained the IRQ lock, i.e. the handler | 415 | I/O device driver support has already obtained the IRQ lock, i.e. the handler |
416 | must not try to obtain it again when calling ccw_device_start() or we end in a | 416 | must not try to obtain it again when calling ccw_device_start() or we end in a |
@@ -431,7 +431,7 @@ information prior to device-end the device driver urgently relies on. In this | |||
431 | case all I/O interruptions are presented to the device driver until final | 431 | case all I/O interruptions are presented to the device driver until final |
432 | status is recognized. | 432 | status is recognized. |
433 | 433 | ||
434 | If a device is able to recover from asynchronosly presented I/O errors, it can | 434 | If a device is able to recover from asynchronously presented I/O errors, it can |
435 | perform overlapping I/O using the DOIO_EARLY_NOTIFICATION flag. While some | 435 | perform overlapping I/O using the DOIO_EARLY_NOTIFICATION flag. While some |
436 | devices always report channel-end and device-end together, with a single | 436 | devices always report channel-end and device-end together, with a single |
437 | interrupt, others present primary status (channel-end) when the channel is | 437 | interrupt, others present primary status (channel-end) when the channel is |
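
The cds.txt rules quoted above (call ccw_device_start() with the ccw device lock held and interrupts disabled, then handle the 0 / -EBUSY / -ENODEV results) can be summarized in a short driver-side sketch. This is not code from this patch; the channel program, intparm value and surrounding error handling are placeholders supplied by the (hypothetical) caller.

#include <linux/errno.h>
#include <linux/spinlock.h>
#include <asm/cio.h>
#include <asm/ccwdev.h>

/* Start a prepared channel program; cpa must point to doubleword-aligned
 * CCWs located below 2GB, as the common I/O layer requires. */
static int demo_start_io(struct ccw_device *cdev, struct ccw1 *cpa,
			 unsigned long intparm)
{
	unsigned long flags;
	int ret;

	/* ccw_device_start() has to be called with the ccw device lock
	 * held and interrupts disabled */
	spin_lock_irqsave(get_ccwdev_lock(cdev), flags);
	ret = ccw_device_start(cdev, cpa, intparm, 0 /* lpm: any path */,
			       0 /* flags */);
	spin_unlock_irqrestore(get_ccwdev_lock(cdev), flags);

	/*
	 * 0       - request successfully initiated, wait for the irq handler
	 * -EBUSY  - previous I/O still in progress or status pending
	 * -ENODEV - cdev invalid, device not operational or not online
	 */
	return ret;
}
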
diff --git a/Documentation/s390/crypto/crypto-API.txt b/Documentation/s390/crypto/crypto-API.txt
index 41a8b07da05a..71ae6ca9f2c2 100644
--- a/Documentation/s390/crypto/crypto-API.txt
+++ b/Documentation/s390/crypto/crypto-API.txt
@@ -17,8 +17,8 @@ arch/s390/crypto directory. | |||
17 | 2. Probing for availability of MSA | 17 | 2. Probing for availability of MSA |
18 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | 18 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
19 | It should be possible to use Kernels with the z990 crypto implementations both | 19 | It should be possible to use Kernels with the z990 crypto implementations both |
20 | on machines with MSA available an on those without MSA (pre z990 or z990 | 20 | on machines with MSA available and on those without MSA (pre z990 or z990 |
21 | without MSA). Therefore a simple probing mechanisms has been implemented: | 21 | without MSA). Therefore a simple probing mechanism has been implemented: |
22 | In the init function of each crypto module the availability of MSA and of the | 22 | In the init function of each crypto module the availability of MSA and of the |
23 | respective crypto algorithm in particular will be tested. If the algorithm is | 23 | respective crypto algorithm in particular will be tested. If the algorithm is |
24 | available the module will load and register its algorithm with the crypto API. | 24 | available the module will load and register its algorithm with the crypto API. |
@@ -26,7 +26,7 @@ available the module will load and register its algorithm with the crypto API. | |||
26 | If the respective crypto algorithm is not available, the init function will | 26 | If the respective crypto algorithm is not available, the init function will |
27 | return -ENOSYS. In that case a fallback to the standard software implementation | 27 | return -ENOSYS. In that case a fallback to the standard software implementation |
28 | of the crypto algorithm must be taken ( -> the standard crypto modules are | 28 | of the crypto algorithm must be taken ( -> the standard crypto modules are |
29 | also build when compiling the kernel). | 29 | also built when compiling the kernel). |
30 | 30 | ||
31 | 31 | ||
32 | 3. Ensuring z990 crypto module preference | 32 | 3. Ensuring z990 crypto module preference |
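
The probing scheme described in the crypto-API hunk above boils down to: test for the hardware facility in the module init function and return -ENOSYS when it is absent, so the generic software implementation (which is also built) keeps handling the algorithm. A minimal sketch of that pattern follows; the names are purely illustrative and the real MSA query and crypto API registration are elided.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>

/* Placeholder for the real MSA/facility query used by the z990 modules. */
static int demo_msa_alg_available(void)
{
	return 0;	/* pretend the hardware facility is not present */
}

static int __init demo_crypt_init(void)
{
	if (!demo_msa_alg_available())
		return -ENOSYS;	/* software crypto module keeps the algorithm */
	/* ... register the hardware-backed algorithm with the crypto API ... */
	return 0;
}

static void __exit demo_crypt_exit(void)
{
	/* ... unregister the algorithm ... */
}

module_init(demo_crypt_init);
module_exit(demo_crypt_exit);
MODULE_LICENSE("GPL");
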
diff --git a/Documentation/s390/s390dbf.txt b/Documentation/s390/s390dbf.txt
index 000230cd26db..0eb7c58916de 100644
--- a/Documentation/s390/s390dbf.txt
+++ b/Documentation/s390/s390dbf.txt
@@ -36,7 +36,7 @@ switches to the next debug area. This is done in order to be sure | |||
36 | that the records which describe the origin of the exception are not | 36 | that the records which describe the origin of the exception are not |
37 | overwritten when a wrap around for the current area occurs. | 37 | overwritten when a wrap around for the current area occurs. |
38 | 38 | ||
39 | The debug areas itselve are also ordered in form of a ring buffer. | 39 | The debug areas themselves are also ordered in form of a ring buffer. |
40 | When an exception is thrown in the last debug area, the following debug | 40 | When an exception is thrown in the last debug area, the following debug |
41 | entries are then written again in the very first area. | 41 | entries are then written again in the very first area. |
42 | 42 | ||
@@ -55,7 +55,7 @@ The debug logs can be inspected in a live system through entries in | |||
55 | the debugfs-filesystem. Under the toplevel directory "s390dbf" there is | 55 | the debugfs-filesystem. Under the toplevel directory "s390dbf" there is |
56 | a directory for each registered component, which is named like the | 56 | a directory for each registered component, which is named like the |
57 | corresponding component. The debugfs normally should be mounted to | 57 | corresponding component. The debugfs normally should be mounted to |
58 | /sys/kernel/debug therefore the debug feature can be accessed unter | 58 | /sys/kernel/debug therefore the debug feature can be accessed under |
59 | /sys/kernel/debug/s390dbf. | 59 | /sys/kernel/debug/s390dbf. |
60 | 60 | ||
61 | The content of the directories are files which represent different views | 61 | The content of the directories are files which represent different views |
@@ -87,11 +87,11 @@ There are currently 2 possible triggers, which stop the debug feature | |||
87 | globally. The first possibility is to use the "debug_active" sysctl. If | 87 | globally. The first possibility is to use the "debug_active" sysctl. If |
88 | set to 1 the debug feature is running. If "debug_active" is set to 0 the | 88 | set to 1 the debug feature is running. If "debug_active" is set to 0 the |
89 | debug feature is turned off. | 89 | debug feature is turned off. |
90 | The second trigger which stops the debug feature is an kernel oops. | 90 | The second trigger which stops the debug feature is a kernel oops. |
91 | That prevents the debug feature from overwriting debug information that | 91 | That prevents the debug feature from overwriting debug information that |
92 | happened before the oops. After an oops you can reactivate the debug feature | 92 | happened before the oops. After an oops you can reactivate the debug feature |
93 | by piping 1 to /proc/sys/s390dbf/debug_active. Nevertheless, its not | 93 | by piping 1 to /proc/sys/s390dbf/debug_active. Nevertheless, its not |
94 | suggested to use an oopsed kernel in an production environment. | 94 | suggested to use an oopsed kernel in a production environment. |
95 | If you want to disallow the deactivation of the debug feature, you can use | 95 | If you want to disallow the deactivation of the debug feature, you can use |
96 | the "debug_stoppable" sysctl. If you set "debug_stoppable" to 0 the debug | 96 | the "debug_stoppable" sysctl. If you set "debug_stoppable" to 0 the debug |
97 | feature cannot be stopped. If the debug feature is already stopped, it | 97 | feature cannot be stopped. If the debug feature is already stopped, it |
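
The debug feature whose ring-buffer layout and stop triggers are documented above is driven from kernel code through the s390 debug API. The sketch below assumes the debug_register()/debug_text_event() prototypes described elsewhere in s390dbf.txt (name, pages per area, number of areas, buffer size); it is illustrative and not part of this patch.

#include <linux/init.h>
#include <linux/errno.h>
#include <asm/debug.h>

static debug_info_t *demo_dbf;

static int __init demo_dbf_init(void)
{
	/* 4 debug areas of 1 page each, 16 byte data field per entry */
	demo_dbf = debug_register("demo", 1, 4, 16);
	if (!demo_dbf)
		return -ENOMEM;
	debug_register_view(demo_dbf, &debug_hex_ascii_view);
	debug_text_event(demo_dbf, 3, "init done");	/* level 3 entry */
	return 0;
}

static void __exit demo_dbf_exit(void)
{
	debug_unregister(demo_dbf);
}
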
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 245b81bc7157..583d9ff0a571 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -33,9 +33,6 @@ config GENERIC_CALIBRATE_DELAY | |||
33 | config GENERIC_TIME | 33 | config GENERIC_TIME |
34 | def_bool y | 34 | def_bool y |
35 | 35 | ||
36 | config GENERIC_BUST_SPINLOCK | ||
37 | bool | ||
38 | |||
39 | mainmenu "Linux Kernel Configuration" | 36 | mainmenu "Linux Kernel Configuration" |
40 | 37 | ||
41 | config S390 | 38 | config S390 |
@@ -181,7 +178,7 @@ config PACK_STACK | |||
181 | 178 | ||
182 | config SMALL_STACK | 179 | config SMALL_STACK |
183 | bool "Use 4kb/8kb for kernel stack instead of 8kb/16kb" | 180 | bool "Use 4kb/8kb for kernel stack instead of 8kb/16kb" |
184 | depends on PACK_STACK | 181 | depends on PACK_STACK && !LOCKDEP |
185 | help | 182 | help |
186 | If you say Y here and the compiler supports the -mkernel-backchain | 183 | If you say Y here and the compiler supports the -mkernel-backchain |
187 | option the kernel will use a smaller kernel stack size. For 31 bit | 184 | option the kernel will use a smaller kernel stack size. For 31 bit |
diff --git a/arch/s390/Makefile b/arch/s390/Makefile
index 5deb9f7544a1..6598e5268573 100644
--- a/arch/s390/Makefile
+++ b/arch/s390/Makefile
@@ -35,6 +35,9 @@ cflags-$(CONFIG_MARCH_Z900) += $(call cc-option,-march=z900) | |||
35 | cflags-$(CONFIG_MARCH_Z990) += $(call cc-option,-march=z990) | 35 | cflags-$(CONFIG_MARCH_Z990) += $(call cc-option,-march=z990) |
36 | cflags-$(CONFIG_MARCH_Z9_109) += $(call cc-option,-march=z9-109) | 36 | cflags-$(CONFIG_MARCH_Z9_109) += $(call cc-option,-march=z9-109) |
37 | 37 | ||
38 | #KBUILD_IMAGE is necessary for make rpm | ||
39 | KBUILD_IMAGE :=arch/s390/boot/image | ||
40 | |||
38 | # | 41 | # |
39 | # Prevent tail-call optimizations, to get clearer backtraces: | 42 | # Prevent tail-call optimizations, to get clearer backtraces: |
40 | # | 43 | # |
diff --git a/arch/s390/kernel/Makefile b/arch/s390/kernel/Makefile
index aa978978d3d1..a81881c9b297 100644
--- a/arch/s390/kernel/Makefile
+++ b/arch/s390/kernel/Makefile
@@ -4,7 +4,7 @@ | |||
4 | 4 | ||
5 | EXTRA_AFLAGS := -traditional | 5 | EXTRA_AFLAGS := -traditional |
6 | 6 | ||
7 | obj-y := bitmap.o traps.o time.o process.o \ | 7 | obj-y := bitmap.o traps.o time.o process.o reset.o \ |
8 | setup.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o \ | 8 | setup.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o \ |
9 | semaphore.o s390_ext.o debug.o profile.o irq.o ipl.o | 9 | semaphore.o s390_ext.o debug.o profile.o irq.o ipl.o |
10 | 10 | ||
diff --git a/arch/s390/kernel/cpcmd.c b/arch/s390/kernel/cpcmd.c
index 1eae74e72f95..a5972f1541fe 100644
--- a/arch/s390/kernel/cpcmd.c
+++ b/arch/s390/kernel/cpcmd.c
@@ -21,14 +21,15 @@ static DEFINE_SPINLOCK(cpcmd_lock); | |||
21 | static char cpcmd_buf[241]; | 21 | static char cpcmd_buf[241]; |
22 | 22 | ||
23 | /* | 23 | /* |
24 | * the caller of __cpcmd has to ensure that the response buffer is below 2 GB | 24 | * __cpcmd has some restrictions over cpcmd |
25 | * - the response buffer must reside below 2GB (if any) | ||
26 | * - __cpcmd is unlocked and therefore not SMP-safe | ||
25 | */ | 27 | */ |
26 | int __cpcmd(const char *cmd, char *response, int rlen, int *response_code) | 28 | int __cpcmd(const char *cmd, char *response, int rlen, int *response_code) |
27 | { | 29 | { |
28 | unsigned long flags, cmdlen; | 30 | unsigned cmdlen; |
29 | int return_code, return_len; | 31 | int return_code, return_len; |
30 | 32 | ||
31 | spin_lock_irqsave(&cpcmd_lock, flags); | ||
32 | cmdlen = strlen(cmd); | 33 | cmdlen = strlen(cmd); |
33 | BUG_ON(cmdlen > 240); | 34 | BUG_ON(cmdlen > 240); |
34 | memcpy(cpcmd_buf, cmd, cmdlen); | 35 | memcpy(cpcmd_buf, cmd, cmdlen); |
@@ -74,7 +75,6 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code) | |||
74 | : "+d" (reg3) : "d" (reg2) : "cc"); | 75 | : "+d" (reg3) : "d" (reg2) : "cc"); |
75 | return_code = (int) reg3; | 76 | return_code = (int) reg3; |
76 | } | 77 | } |
77 | spin_unlock_irqrestore(&cpcmd_lock, flags); | ||
78 | if (response_code != NULL) | 78 | if (response_code != NULL) |
79 | *response_code = return_code; | 79 | *response_code = return_code; |
80 | return return_len; | 80 | return return_len; |
@@ -82,15 +82,18 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code) | |||
82 | 82 | ||
83 | EXPORT_SYMBOL(__cpcmd); | 83 | EXPORT_SYMBOL(__cpcmd); |
84 | 84 | ||
85 | #ifdef CONFIG_64BIT | ||
86 | int cpcmd(const char *cmd, char *response, int rlen, int *response_code) | 85 | int cpcmd(const char *cmd, char *response, int rlen, int *response_code) |
87 | { | 86 | { |
88 | char *lowbuf; | 87 | char *lowbuf; |
89 | int len; | 88 | int len; |
89 | unsigned long flags; | ||
90 | 90 | ||
91 | if ((rlen == 0) || (response == NULL) | 91 | if ((rlen == 0) || (response == NULL) |
92 | || !((unsigned long)response >> 31)) | 92 | || !((unsigned long)response >> 31)) { |
93 | spin_lock_irqsave(&cpcmd_lock, flags); | ||
93 | len = __cpcmd(cmd, response, rlen, response_code); | 94 | len = __cpcmd(cmd, response, rlen, response_code); |
95 | spin_unlock_irqrestore(&cpcmd_lock, flags); | ||
96 | } | ||
94 | else { | 97 | else { |
95 | lowbuf = kmalloc(rlen, GFP_KERNEL | GFP_DMA); | 98 | lowbuf = kmalloc(rlen, GFP_KERNEL | GFP_DMA); |
96 | if (!lowbuf) { | 99 | if (!lowbuf) { |
@@ -98,7 +101,9 @@ int cpcmd(const char *cmd, char *response, int rlen, int *response_code) | |||
98 | "cpcmd: could not allocate response buffer\n"); | 101 | "cpcmd: could not allocate response buffer\n"); |
99 | return -ENOMEM; | 102 | return -ENOMEM; |
100 | } | 103 | } |
104 | spin_lock_irqsave(&cpcmd_lock, flags); | ||
101 | len = __cpcmd(cmd, lowbuf, rlen, response_code); | 105 | len = __cpcmd(cmd, lowbuf, rlen, response_code); |
106 | spin_unlock_irqrestore(&cpcmd_lock, flags); | ||
102 | memcpy(response, lowbuf, rlen); | 107 | memcpy(response, lowbuf, rlen); |
103 | kfree(lowbuf); | 108 | kfree(lowbuf); |
104 | } | 109 | } |
@@ -106,4 +111,3 @@ int cpcmd(const char *cmd, char *response, int rlen, int *response_code) | |||
106 | } | 111 | } |
107 | 112 | ||
108 | EXPORT_SYMBOL(cpcmd); | 113 | EXPORT_SYMBOL(cpcmd); |
109 | #endif /* CONFIG_64BIT */ | ||
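
After this change the comment block in cpcmd.c spells out the split: __cpcmd() is unlocked and needs a response buffer below 2GB, while cpcmd() serializes on cpcmd_lock and bounces through a GFP_DMA buffer when necessary, making it the interface for ordinary callers. A small illustrative caller (the CP command shown is just an example, not taken from this patch):

#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/cpcmd.h>

static void demo_query_userid(void)
{
	char response[128];
	int rc;

	memset(response, 0, sizeof(response));
	/* SMP-safe: cpcmd() takes cpcmd_lock internally */
	cpcmd("QUERY USERID", response, sizeof(response), &rc);
	printk(KERN_INFO "QUERY USERID: rc=%d response=%s\n", rc, response);
}
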
diff --git a/arch/s390/kernel/head.S b/arch/s390/kernel/head.S
index 0cf59bb7a857..8f8c802f1bcf 100644
--- a/arch/s390/kernel/head.S
+++ b/arch/s390/kernel/head.S
@@ -418,24 +418,6 @@ start: | |||
418 | .gotr: | 418 | .gotr: |
419 | l %r10,.tbl # EBCDIC to ASCII table | 419 | l %r10,.tbl # EBCDIC to ASCII table |
420 | tr 0(240,%r8),0(%r10) | 420 | tr 0(240,%r8),0(%r10) |
421 | stidp __LC_CPUID # Are we running on VM maybe | ||
422 | cli __LC_CPUID,0xff | ||
423 | bnz .test | ||
424 | .long 0x83300060 # diag 3,0,x'0060' - storage size | ||
425 | b .done | ||
426 | .test: | ||
427 | mvc 0x68(8),.pgmnw # set up pgm check handler | ||
428 | l %r2,.fourmeg | ||
429 | lr %r3,%r2 | ||
430 | bctr %r3,%r0 # 4M-1 | ||
431 | .loop: iske %r0,%r3 | ||
432 | ar %r3,%r2 | ||
433 | .pgmx: | ||
434 | sr %r3,%r2 | ||
435 | la %r3,1(%r3) | ||
436 | .done: | ||
437 | l %r1,.memsize | ||
438 | st %r3,ARCH_OFFSET(%r1) | ||
439 | slr %r0,%r0 | 421 | slr %r0,%r0 |
440 | st %r0,INITRD_SIZE+ARCH_OFFSET-PARMAREA(%r11) | 422 | st %r0,INITRD_SIZE+ARCH_OFFSET-PARMAREA(%r11) |
441 | st %r0,INITRD_START+ARCH_OFFSET-PARMAREA(%r11) | 423 | st %r0,INITRD_START+ARCH_OFFSET-PARMAREA(%r11) |
@@ -443,9 +425,6 @@ start: | |||
443 | .tbl: .long _ebcasc # translate table | 425 | .tbl: .long _ebcasc # translate table |
444 | .cmd: .long COMMAND_LINE # address of command line buffer | 426 | .cmd: .long COMMAND_LINE # address of command line buffer |
445 | .parm: .long PARMAREA | 427 | .parm: .long PARMAREA |
446 | .memsize: .long memory_size | ||
447 | .fourmeg: .long 0x00400000 # 4M | ||
448 | .pgmnw: .long 0x00080000,.pgmx | ||
449 | .lowcase: | 428 | .lowcase: |
450 | .byte 0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07 | 429 | .byte 0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07 |
451 | .byte 0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f | 430 | .byte 0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f |
diff --git a/arch/s390/kernel/head31.S b/arch/s390/kernel/head31.S
index 0a2c929486ab..4388b3309e0c 100644
--- a/arch/s390/kernel/head31.S
+++ b/arch/s390/kernel/head31.S
@@ -131,10 +131,11 @@ startup_continue: | |||
131 | .long init_thread_union | 131 | .long init_thread_union |
132 | .Lpmask: | 132 | .Lpmask: |
133 | .byte 0 | 133 | .byte 0 |
134 | .align 8 | 134 | .align 8 |
135 | .Lpcext:.long 0x00080000,0x80000000 | 135 | .Lpcext:.long 0x00080000,0x80000000 |
136 | .Lcr: | 136 | .Lcr: |
137 | .long 0x00 # place holder for cr0 | 137 | .long 0x00 # place holder for cr0 |
138 | .align 8 | ||
138 | .Lwaitsclp: | 139 | .Lwaitsclp: |
139 | .long 0x010a0000,0x80000000 + .Lsclph | 140 | .long 0x010a0000,0x80000000 + .Lsclph |
140 | .Lrcp: | 141 | .Lrcp: |
@@ -156,7 +157,7 @@ startup_continue: | |||
156 | slr %r4,%r4 # set start of chunk to zero | 157 | slr %r4,%r4 # set start of chunk to zero |
157 | slr %r5,%r5 # set end of chunk to zero | 158 | slr %r5,%r5 # set end of chunk to zero |
158 | slr %r6,%r6 # set access code to zero | 159 | slr %r6,%r6 # set access code to zero |
159 | la %r10, MEMORY_CHUNKS # number of chunks | 160 | la %r10,MEMORY_CHUNKS # number of chunks |
160 | .Lloop: | 161 | .Lloop: |
161 | tprot 0(%r5),0 # test protection of first byte | 162 | tprot 0(%r5),0 # test protection of first byte |
162 | ipm %r7 | 163 | ipm %r7 |
@@ -176,8 +177,6 @@ startup_continue: | |||
176 | st %r0,4(%r3) # store size of chunk | 177 | st %r0,4(%r3) # store size of chunk |
177 | st %r6,8(%r3) # store type of chunk | 178 | st %r6,8(%r3) # store type of chunk |
178 | la %r3,12(%r3) | 179 | la %r3,12(%r3) |
179 | l %r4,.Lmemsize-.LPG1(%r13) # address of variable memory_size | ||
180 | st %r5,0(%r4) # store last end to memory size | ||
181 | ahi %r10,-1 # update chunk number | 180 | ahi %r10,-1 # update chunk number |
182 | .Lchkloop: | 181 | .Lchkloop: |
183 | lr %r6,%r7 # set access code to last cc | 182 | lr %r6,%r7 # set access code to last cc |
@@ -292,7 +291,6 @@ startup_continue: | |||
292 | .Lpcmvpg:.long 0x00080000,0x80000000 + .Lchkmvpg | 291 | .Lpcmvpg:.long 0x00080000,0x80000000 + .Lchkmvpg |
293 | .Lpcidte:.long 0x00080000,0x80000000 + .Lchkidte | 292 | .Lpcidte:.long 0x00080000,0x80000000 + .Lchkidte |
294 | .Lpcdiag9c:.long 0x00080000,0x80000000 + .Lchkdiag9c | 293 | .Lpcdiag9c:.long 0x00080000,0x80000000 + .Lchkdiag9c |
295 | .Lmemsize:.long memory_size | ||
296 | .Lmchunk:.long memory_chunk | 294 | .Lmchunk:.long memory_chunk |
297 | .Lmflags:.long machine_flags | 295 | .Lmflags:.long machine_flags |
298 | .Lbss_bgn: .long __bss_start | 296 | .Lbss_bgn: .long __bss_start |
diff --git a/arch/s390/kernel/head64.S b/arch/s390/kernel/head64.S
index 42f54d482441..c526279e1123 100644
--- a/arch/s390/kernel/head64.S
+++ b/arch/s390/kernel/head64.S
@@ -70,7 +70,20 @@ startup_continue: | |||
70 | sgr %r5,%r5 # set src,length and pad to zero | 70 | sgr %r5,%r5 # set src,length and pad to zero |
71 | mvcle %r2,%r4,0 # clear mem | 71 | mvcle %r2,%r4,0 # clear mem |
72 | jo .-4 # branch back, if not finish | 72 | jo .-4 # branch back, if not finish |
73 | # set program check new psw mask | ||
74 | mvc __LC_PGM_NEW_PSW(8),.Lpcmsk-.LPG1(%r13) | ||
75 | larl %r1,.Lslowmemdetect # set program check address | ||
76 | stg %r1,__LC_PGM_NEW_PSW+8 | ||
77 | lghi %r1,0xc | ||
78 | diag %r0,%r1,0x260 # get memory size of virtual machine | ||
79 | cgr %r0,%r1 # different? -> old detection routine | ||
80 | jne .Lslowmemdetect | ||
81 | aghi %r1,1 # size is one more than end | ||
82 | larl %r2,memory_chunk | ||
83 | stg %r1,8(%r2) # store size of chunk | ||
84 | j .Ldonemem | ||
73 | 85 | ||
86 | .Lslowmemdetect: | ||
74 | l %r2,.Lrcp-.LPG1(%r13) # Read SCP forced command word | 87 | l %r2,.Lrcp-.LPG1(%r13) # Read SCP forced command word |
75 | .Lservicecall: | 88 | .Lservicecall: |
76 | stosm .Lpmask-.LPG1(%r13),0x01 # authorize ext interrupts | 89 | stosm .Lpmask-.LPG1(%r13),0x01 # authorize ext interrupts |
@@ -139,8 +152,6 @@ startup_continue: | |||
139 | .int 0x100000 | 152 | .int 0x100000 |
140 | 153 | ||
141 | .Lfchunk: | 154 | .Lfchunk: |
142 | # set program check new psw mask | ||
143 | mvc __LC_PGM_NEW_PSW(8),.Lpcmsk-.LPG1(%r13) | ||
144 | 155 | ||
145 | # | 156 | # |
146 | # find memory chunks. | 157 | # find memory chunks. |
@@ -175,8 +186,6 @@ startup_continue: | |||
175 | stg %r0,8(%r3) # store size of chunk | 186 | stg %r0,8(%r3) # store size of chunk |
176 | st %r6,20(%r3) # store type of chunk | 187 | st %r6,20(%r3) # store type of chunk |
177 | la %r3,24(%r3) | 188 | la %r3,24(%r3) |
178 | larl %r8,memory_size | ||
179 | stg %r5,0(%r8) # store memory size | ||
180 | ahi %r10,-1 # update chunk number | 189 | ahi %r10,-1 # update chunk number |
181 | .Lchkloop: | 190 | .Lchkloop: |
182 | lr %r6,%r7 # set access code to last cc | 191 | lr %r6,%r7 # set access code to last cc |
diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
index 1f5e782b3d05..a36bea1188d9 100644
--- a/arch/s390/kernel/ipl.c
+++ b/arch/s390/kernel/ipl.c
@@ -13,12 +13,21 @@ | |||
13 | #include <linux/device.h> | 13 | #include <linux/device.h> |
14 | #include <linux/delay.h> | 14 | #include <linux/delay.h> |
15 | #include <linux/reboot.h> | 15 | #include <linux/reboot.h> |
16 | #include <linux/ctype.h> | ||
16 | #include <asm/smp.h> | 17 | #include <asm/smp.h> |
17 | #include <asm/setup.h> | 18 | #include <asm/setup.h> |
18 | #include <asm/cpcmd.h> | 19 | #include <asm/cpcmd.h> |
19 | #include <asm/cio.h> | 20 | #include <asm/cio.h> |
21 | #include <asm/ebcdic.h> | ||
22 | #include <asm/reset.h> | ||
20 | 23 | ||
21 | #define IPL_PARM_BLOCK_VERSION 0 | 24 | #define IPL_PARM_BLOCK_VERSION 0 |
25 | #define LOADPARM_LEN 8 | ||
26 | |||
27 | extern char s390_readinfo_sccb[]; | ||
28 | #define SCCB_VALID (*((__u16*)&s390_readinfo_sccb[6]) == 0x0010) | ||
29 | #define SCCB_LOADPARM (&s390_readinfo_sccb[24]) | ||
30 | #define SCCB_FLAG (s390_readinfo_sccb[91]) | ||
22 | 31 | ||
23 | enum ipl_type { | 32 | enum ipl_type { |
24 | IPL_TYPE_NONE = 1, | 33 | IPL_TYPE_NONE = 1, |
@@ -289,9 +298,25 @@ static struct attribute_group ipl_fcp_attr_group = { | |||
289 | 298 | ||
290 | /* CCW ipl device attributes */ | 299 | /* CCW ipl device attributes */ |
291 | 300 | ||
301 | static ssize_t ipl_ccw_loadparm_show(struct subsystem *subsys, char *page) | ||
302 | { | ||
303 | char loadparm[LOADPARM_LEN + 1] = {}; | ||
304 | |||
305 | if (!SCCB_VALID) | ||
306 | return sprintf(page, "#unknown#\n"); | ||
307 | memcpy(loadparm, SCCB_LOADPARM, LOADPARM_LEN); | ||
308 | EBCASC(loadparm, LOADPARM_LEN); | ||
309 | strstrip(loadparm); | ||
310 | return sprintf(page, "%s\n", loadparm); | ||
311 | } | ||
312 | |||
313 | static struct subsys_attribute sys_ipl_ccw_loadparm_attr = | ||
314 | __ATTR(loadparm, 0444, ipl_ccw_loadparm_show, NULL); | ||
315 | |||
292 | static struct attribute *ipl_ccw_attrs[] = { | 316 | static struct attribute *ipl_ccw_attrs[] = { |
293 | &sys_ipl_type_attr.attr, | 317 | &sys_ipl_type_attr.attr, |
294 | &sys_ipl_device_attr.attr, | 318 | &sys_ipl_device_attr.attr, |
319 | &sys_ipl_ccw_loadparm_attr.attr, | ||
295 | NULL, | 320 | NULL, |
296 | }; | 321 | }; |
297 | 322 | ||
@@ -348,8 +373,57 @@ static struct attribute_group reipl_fcp_attr_group = { | |||
348 | DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n", | 373 | DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n", |
349 | reipl_block_ccw->ipl_info.ccw.devno); | 374 | reipl_block_ccw->ipl_info.ccw.devno); |
350 | 375 | ||
376 | static void reipl_get_ascii_loadparm(char *loadparm) | ||
377 | { | ||
378 | memcpy(loadparm, &reipl_block_ccw->ipl_info.ccw.load_param, | ||
379 | LOADPARM_LEN); | ||
380 | EBCASC(loadparm, LOADPARM_LEN); | ||
381 | loadparm[LOADPARM_LEN] = 0; | ||
382 | strstrip(loadparm); | ||
383 | } | ||
384 | |||
385 | static ssize_t reipl_ccw_loadparm_show(struct subsystem *subsys, char *page) | ||
386 | { | ||
387 | char buf[LOADPARM_LEN + 1]; | ||
388 | |||
389 | reipl_get_ascii_loadparm(buf); | ||
390 | return sprintf(page, "%s\n", buf); | ||
391 | } | ||
392 | |||
393 | static ssize_t reipl_ccw_loadparm_store(struct subsystem *subsys, | ||
394 | const char *buf, size_t len) | ||
395 | { | ||
396 | int i, lp_len; | ||
397 | |||
398 | /* ignore trailing newline */ | ||
399 | lp_len = len; | ||
400 | if ((len > 0) && (buf[len - 1] == '\n')) | ||
401 | lp_len--; | ||
402 | /* loadparm can have max 8 characters and must not start with a blank */ | ||
403 | if ((lp_len > LOADPARM_LEN) || ((lp_len > 0) && (buf[0] == ' '))) | ||
404 | return -EINVAL; | ||
405 | /* loadparm can only contain "a-z,A-Z,0-9,SP,." */ | ||
406 | for (i = 0; i < lp_len; i++) { | ||
407 | if (isalpha(buf[i]) || isdigit(buf[i]) || (buf[i] == ' ') || | ||
408 | (buf[i] == '.')) | ||
409 | continue; | ||
410 | return -EINVAL; | ||
411 | } | ||
412 | /* initialize loadparm with blanks */ | ||
413 | memset(&reipl_block_ccw->ipl_info.ccw.load_param, ' ', LOADPARM_LEN); | ||
414 | /* copy and convert to ebcdic */ | ||
415 | memcpy(&reipl_block_ccw->ipl_info.ccw.load_param, buf, lp_len); | ||
416 | ASCEBC(reipl_block_ccw->ipl_info.ccw.load_param, LOADPARM_LEN); | ||
417 | return len; | ||
418 | } | ||
419 | |||
420 | static struct subsys_attribute sys_reipl_ccw_loadparm_attr = | ||
421 | __ATTR(loadparm, 0644, reipl_ccw_loadparm_show, | ||
422 | reipl_ccw_loadparm_store); | ||
423 | |||
351 | static struct attribute *reipl_ccw_attrs[] = { | 424 | static struct attribute *reipl_ccw_attrs[] = { |
352 | &sys_reipl_ccw_device_attr.attr, | 425 | &sys_reipl_ccw_device_attr.attr, |
426 | &sys_reipl_ccw_loadparm_attr.attr, | ||
353 | NULL, | 427 | NULL, |
354 | }; | 428 | }; |
355 | 429 | ||
@@ -502,23 +576,6 @@ static struct subsys_attribute dump_type_attr = | |||
502 | 576 | ||
503 | static decl_subsys(dump, NULL, NULL); | 577 | static decl_subsys(dump, NULL, NULL); |
504 | 578 | ||
505 | #ifdef CONFIG_SMP | ||
506 | static void dump_smp_stop_all(void) | ||
507 | { | ||
508 | int cpu; | ||
509 | preempt_disable(); | ||
510 | for_each_online_cpu(cpu) { | ||
511 | if (cpu == smp_processor_id()) | ||
512 | continue; | ||
513 | while (signal_processor(cpu, sigp_stop) == sigp_busy) | ||
514 | udelay(10); | ||
515 | } | ||
516 | preempt_enable(); | ||
517 | } | ||
518 | #else | ||
519 | #define dump_smp_stop_all() do { } while (0) | ||
520 | #endif | ||
521 | |||
522 | /* | 579 | /* |
523 | * Shutdown actions section | 580 | * Shutdown actions section |
524 | */ | 581 | */ |
@@ -571,11 +628,14 @@ void do_reipl(void) | |||
571 | { | 628 | { |
572 | struct ccw_dev_id devid; | 629 | struct ccw_dev_id devid; |
573 | static char buf[100]; | 630 | static char buf[100]; |
631 | char loadparm[LOADPARM_LEN + 1]; | ||
574 | 632 | ||
575 | switch (reipl_type) { | 633 | switch (reipl_type) { |
576 | case IPL_TYPE_CCW: | 634 | case IPL_TYPE_CCW: |
635 | reipl_get_ascii_loadparm(loadparm); | ||
577 | printk(KERN_EMERG "reboot on ccw device: 0.0.%04x\n", | 636 | printk(KERN_EMERG "reboot on ccw device: 0.0.%04x\n", |
578 | reipl_block_ccw->ipl_info.ccw.devno); | 637 | reipl_block_ccw->ipl_info.ccw.devno); |
638 | printk(KERN_EMERG "loadparm = '%s'\n", loadparm); | ||
579 | break; | 639 | break; |
580 | case IPL_TYPE_FCP: | 640 | case IPL_TYPE_FCP: |
581 | printk(KERN_EMERG "reboot on fcp device:\n"); | 641 | printk(KERN_EMERG "reboot on fcp device:\n"); |
@@ -588,12 +648,19 @@ void do_reipl(void) | |||
588 | switch (reipl_method) { | 648 | switch (reipl_method) { |
589 | case IPL_METHOD_CCW_CIO: | 649 | case IPL_METHOD_CCW_CIO: |
590 | devid.devno = reipl_block_ccw->ipl_info.ccw.devno; | 650 | devid.devno = reipl_block_ccw->ipl_info.ccw.devno; |
651 | if (ipl_get_type() == IPL_TYPE_CCW && devid.devno == ipl_devno) | ||
652 | diag308(DIAG308_IPL, NULL); | ||
591 | devid.ssid = 0; | 653 | devid.ssid = 0; |
592 | reipl_ccw_dev(&devid); | 654 | reipl_ccw_dev(&devid); |
593 | break; | 655 | break; |
594 | case IPL_METHOD_CCW_VM: | 656 | case IPL_METHOD_CCW_VM: |
595 | sprintf(buf, "IPL %X", reipl_block_ccw->ipl_info.ccw.devno); | 657 | if (strlen(loadparm) == 0) |
596 | cpcmd(buf, NULL, 0, NULL); | 658 | sprintf(buf, "IPL %X", |
659 | reipl_block_ccw->ipl_info.ccw.devno); | ||
660 | else | ||
661 | sprintf(buf, "IPL %X LOADPARM '%s'", | ||
662 | reipl_block_ccw->ipl_info.ccw.devno, loadparm); | ||
663 | __cpcmd(buf, NULL, 0, NULL); | ||
597 | break; | 664 | break; |
598 | case IPL_METHOD_CCW_DIAG: | 665 | case IPL_METHOD_CCW_DIAG: |
599 | diag308(DIAG308_SET, reipl_block_ccw); | 666 | diag308(DIAG308_SET, reipl_block_ccw); |
@@ -607,16 +674,17 @@ void do_reipl(void) | |||
607 | diag308(DIAG308_IPL, NULL); | 674 | diag308(DIAG308_IPL, NULL); |
608 | break; | 675 | break; |
609 | case IPL_METHOD_FCP_RO_VM: | 676 | case IPL_METHOD_FCP_RO_VM: |
610 | cpcmd("IPL", NULL, 0, NULL); | 677 | __cpcmd("IPL", NULL, 0, NULL); |
611 | break; | 678 | break; |
612 | case IPL_METHOD_NONE: | 679 | case IPL_METHOD_NONE: |
613 | default: | 680 | default: |
614 | if (MACHINE_IS_VM) | 681 | if (MACHINE_IS_VM) |
615 | cpcmd("IPL", NULL, 0, NULL); | 682 | __cpcmd("IPL", NULL, 0, NULL); |
616 | diag308(DIAG308_IPL, NULL); | 683 | diag308(DIAG308_IPL, NULL); |
617 | break; | 684 | break; |
618 | } | 685 | } |
619 | panic("reipl failed!\n"); | 686 | printk(KERN_EMERG "reboot failed!\n"); |
687 | signal_processor(smp_processor_id(), sigp_stop_and_store_status); | ||
620 | } | 688 | } |
621 | 689 | ||
622 | static void do_dump(void) | 690 | static void do_dump(void) |
@@ -639,17 +707,17 @@ static void do_dump(void) | |||
639 | 707 | ||
640 | switch (dump_method) { | 708 | switch (dump_method) { |
641 | case IPL_METHOD_CCW_CIO: | 709 | case IPL_METHOD_CCW_CIO: |
642 | dump_smp_stop_all(); | 710 | smp_send_stop(); |
643 | devid.devno = dump_block_ccw->ipl_info.ccw.devno; | 711 | devid.devno = dump_block_ccw->ipl_info.ccw.devno; |
644 | devid.ssid = 0; | 712 | devid.ssid = 0; |
645 | reipl_ccw_dev(&devid); | 713 | reipl_ccw_dev(&devid); |
646 | break; | 714 | break; |
647 | case IPL_METHOD_CCW_VM: | 715 | case IPL_METHOD_CCW_VM: |
648 | dump_smp_stop_all(); | 716 | smp_send_stop(); |
649 | sprintf(buf, "STORE STATUS"); | 717 | sprintf(buf, "STORE STATUS"); |
650 | cpcmd(buf, NULL, 0, NULL); | 718 | __cpcmd(buf, NULL, 0, NULL); |
651 | sprintf(buf, "IPL %X", dump_block_ccw->ipl_info.ccw.devno); | 719 | sprintf(buf, "IPL %X", dump_block_ccw->ipl_info.ccw.devno); |
652 | cpcmd(buf, NULL, 0, NULL); | 720 | __cpcmd(buf, NULL, 0, NULL); |
653 | break; | 721 | break; |
654 | case IPL_METHOD_CCW_DIAG: | 722 | case IPL_METHOD_CCW_DIAG: |
655 | diag308(DIAG308_SET, dump_block_ccw); | 723 | diag308(DIAG308_SET, dump_block_ccw); |
@@ -746,6 +814,17 @@ static int __init reipl_ccw_init(void) | |||
746 | reipl_block_ccw->hdr.version = IPL_PARM_BLOCK_VERSION; | 814 | reipl_block_ccw->hdr.version = IPL_PARM_BLOCK_VERSION; |
747 | reipl_block_ccw->hdr.blk0_len = sizeof(reipl_block_ccw->ipl_info.ccw); | 815 | reipl_block_ccw->hdr.blk0_len = sizeof(reipl_block_ccw->ipl_info.ccw); |
748 | reipl_block_ccw->hdr.pbt = DIAG308_IPL_TYPE_CCW; | 816 | reipl_block_ccw->hdr.pbt = DIAG308_IPL_TYPE_CCW; |
817 | /* check if read scp info worked and set loadparm */ | ||
818 | if (SCCB_VALID) | ||
819 | memcpy(reipl_block_ccw->ipl_info.ccw.load_param, | ||
820 | SCCB_LOADPARM, LOADPARM_LEN); | ||
821 | else | ||
822 | /* read scp info failed: set empty loadparm (EBCDIC blanks) */ | ||
823 | memset(reipl_block_ccw->ipl_info.ccw.load_param, 0x40, | ||
824 | LOADPARM_LEN); | ||
825 | /* FIXME: check for diag308_set_works when enabling diag ccw reipl */ | ||
826 | if (!MACHINE_IS_VM) | ||
827 | sys_reipl_ccw_loadparm_attr.attr.mode = S_IRUGO; | ||
749 | if (ipl_get_type() == IPL_TYPE_CCW) | 828 | if (ipl_get_type() == IPL_TYPE_CCW) |
750 | reipl_block_ccw->ipl_info.ccw.devno = ipl_devno; | 829 | reipl_block_ccw->ipl_info.ccw.devno = ipl_devno; |
751 | reipl_capabilities |= IPL_TYPE_CCW; | 830 | reipl_capabilities |= IPL_TYPE_CCW; |
@@ -827,13 +906,11 @@ static int __init dump_ccw_init(void) | |||
827 | return 0; | 906 | return 0; |
828 | } | 907 | } |
829 | 908 | ||
830 | extern char s390_readinfo_sccb[]; | ||
831 | |||
832 | static int __init dump_fcp_init(void) | 909 | static int __init dump_fcp_init(void) |
833 | { | 910 | { |
834 | int rc; | 911 | int rc; |
835 | 912 | ||
836 | if(!(s390_readinfo_sccb[91] & 0x2)) | 913 | if(!(SCCB_FLAG & 0x2) || !SCCB_VALID) |
837 | return 0; /* LDIPL DUMP is not installed */ | 914 | return 0; /* LDIPL DUMP is not installed */ |
838 | if (!diag308_set_works) | 915 | if (!diag308_set_works) |
839 | return 0; | 916 | return 0; |
@@ -931,3 +1008,53 @@ static int __init s390_ipl_init(void) | |||
931 | } | 1008 | } |
932 | 1009 | ||
933 | __initcall(s390_ipl_init); | 1010 | __initcall(s390_ipl_init); |
1011 | |||
1012 | static LIST_HEAD(rcall); | ||
1013 | static DEFINE_MUTEX(rcall_mutex); | ||
1014 | |||
1015 | void register_reset_call(struct reset_call *reset) | ||
1016 | { | ||
1017 | mutex_lock(&rcall_mutex); | ||
1018 | list_add(&reset->list, &rcall); | ||
1019 | mutex_unlock(&rcall_mutex); | ||
1020 | } | ||
1021 | EXPORT_SYMBOL_GPL(register_reset_call); | ||
1022 | |||
1023 | void unregister_reset_call(struct reset_call *reset) | ||
1024 | { | ||
1025 | mutex_lock(&rcall_mutex); | ||
1026 | list_del(&reset->list); | ||
1027 | mutex_unlock(&rcall_mutex); | ||
1028 | } | ||
1029 | EXPORT_SYMBOL_GPL(unregister_reset_call); | ||
1030 | |||
1031 | static void do_reset_calls(void) | ||
1032 | { | ||
1033 | struct reset_call *reset; | ||
1034 | |||
1035 | list_for_each_entry(reset, &rcall, list) | ||
1036 | reset->fn(); | ||
1037 | } | ||
1038 | |||
1039 | extern void reset_mcck_handler(void); | ||
1040 | |||
1041 | void s390_reset_system(void) | ||
1042 | { | ||
1043 | struct _lowcore *lc; | ||
1044 | |||
1045 | /* Stack for interrupt/machine check handler */ | ||
1046 | lc = (struct _lowcore *)(unsigned long) store_prefix(); | ||
1047 | lc->panic_stack = S390_lowcore.panic_stack; | ||
1048 | |||
1049 | /* Disable prefixing */ | ||
1050 | set_prefix(0); | ||
1051 | |||
1052 | /* Disable lowcore protection */ | ||
1053 | __ctl_clear_bit(0,28); | ||
1054 | |||
1055 | /* Set new machine check handler */ | ||
1056 | S390_lowcore.mcck_new_psw.mask = PSW_KERNEL_BITS & ~PSW_MASK_MCHECK; | ||
1057 | S390_lowcore.mcck_new_psw.addr = | ||
1058 | PSW_ADDR_AMODE | (unsigned long) &reset_mcck_handler; | ||
1059 | do_reset_calls(); | ||
1060 | } | ||
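
The new register_reset_call()/unregister_reset_call() pair keeps a mutex-protected list of driver callbacks that s390_reset_system() runs via do_reset_calls() before handing the machine over to new code. A minimal user-space sketch of that registration pattern follows; the singly linked list, the pthread mutex and cio_reset() are illustrative stand-ins, not the kernel API (the kernel uses a list_head and its own mutex type, and unregister works the same way in reverse).

    #include <pthread.h>
    #include <stdio.h>

    struct reset_call {
            void (*fn)(void);
            struct reset_call *next;        /* the kernel uses a list_head here */
    };

    static struct reset_call *rcall_head;
    static pthread_mutex_t rcall_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void register_reset_call(struct reset_call *reset)
    {
            pthread_mutex_lock(&rcall_mutex);
            reset->next = rcall_head;
            rcall_head = reset;
            pthread_mutex_unlock(&rcall_mutex);
    }

    static void do_reset_calls(void)
    {
            struct reset_call *reset;

            /* reset path: every registered hook runs once */
            for (reset = rcall_head; reset; reset = reset->next)
                    reset->fn();
    }

    static void cio_reset(void) { puts("reset channel subsystem"); }

    int main(void)
    {
            static struct reset_call cio = { .fn = cio_reset };

            register_reset_call(&cio);
            do_reset_calls();
            return 0;
    }
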
diff --git a/arch/s390/kernel/machine_kexec.c b/arch/s390/kernel/machine_kexec.c index 60b1ea9f946b..f6d9bcc0f75b 100644 --- a/arch/s390/kernel/machine_kexec.c +++ b/arch/s390/kernel/machine_kexec.c | |||
@@ -1,15 +1,10 @@ | |||
1 | /* | 1 | /* |
2 | * arch/s390/kernel/machine_kexec.c | 2 | * arch/s390/kernel/machine_kexec.c |
3 | * | 3 | * |
4 | * (C) Copyright IBM Corp. 2005 | 4 | * Copyright IBM Corp. 2005,2006 |
5 | * | 5 | * |
6 | * Author(s): Rolf Adelsberger <adelsberger@de.ibm.com> | 6 | * Author(s): Rolf Adelsberger, |
7 | * | 7 | * Heiko Carstens <heiko.carstens@de.ibm.com> |
8 | */ | ||
9 | |||
10 | /* | ||
11 | * s390_machine_kexec.c - handle the transition of Linux booting another kernel | ||
12 | * on the S390 architecture. | ||
13 | */ | 8 | */ |
14 | 9 | ||
15 | #include <linux/device.h> | 10 | #include <linux/device.h> |
@@ -22,86 +17,49 @@ | |||
22 | #include <asm/pgalloc.h> | 17 | #include <asm/pgalloc.h> |
23 | #include <asm/system.h> | 18 | #include <asm/system.h> |
24 | #include <asm/smp.h> | 19 | #include <asm/smp.h> |
20 | #include <asm/reset.h> | ||
25 | 21 | ||
26 | static void kexec_halt_all_cpus(void *); | 22 | typedef void (*relocate_kernel_t)(kimage_entry_t *, unsigned long); |
27 | |||
28 | typedef void (*relocate_kernel_t) (kimage_entry_t *, unsigned long); | ||
29 | 23 | ||
30 | extern const unsigned char relocate_kernel[]; | 24 | extern const unsigned char relocate_kernel[]; |
31 | extern const unsigned long long relocate_kernel_len; | 25 | extern const unsigned long long relocate_kernel_len; |
32 | 26 | ||
33 | int | 27 | int machine_kexec_prepare(struct kimage *image) |
34 | machine_kexec_prepare(struct kimage *image) | ||
35 | { | 28 | { |
36 | unsigned long reboot_code_buffer; | 29 | void *reboot_code_buffer; |
37 | 30 | ||
38 | /* We don't support anything but the default image type for now. */ | 31 | /* We don't support anything but the default image type for now. */ |
39 | if (image->type != KEXEC_TYPE_DEFAULT) | 32 | if (image->type != KEXEC_TYPE_DEFAULT) |
40 | return -EINVAL; | 33 | return -EINVAL; |
41 | 34 | ||
42 | /* Get the destination where the assembler code should be copied to.*/ | 35 | /* Get the destination where the assembler code should be copied to.*/ |
43 | reboot_code_buffer = page_to_pfn(image->control_code_page)<<PAGE_SHIFT; | 36 | reboot_code_buffer = (void *) page_to_phys(image->control_code_page); |
44 | 37 | ||
45 | /* Then copy it */ | 38 | /* Then copy it */ |
46 | memcpy((void *) reboot_code_buffer, relocate_kernel, | 39 | memcpy(reboot_code_buffer, relocate_kernel, relocate_kernel_len); |
47 | relocate_kernel_len); | ||
48 | return 0; | 40 | return 0; |
49 | } | 41 | } |
50 | 42 | ||
51 | void | 43 | void machine_kexec_cleanup(struct kimage *image) |
52 | machine_kexec_cleanup(struct kimage *image) | ||
53 | { | 44 | { |
54 | } | 45 | } |
55 | 46 | ||
56 | void | 47 | void machine_shutdown(void) |
57 | machine_shutdown(void) | ||
58 | { | 48 | { |
59 | printk(KERN_INFO "kexec: machine_shutdown called\n"); | 49 | printk(KERN_INFO "kexec: machine_shutdown called\n"); |
60 | } | 50 | } |
61 | 51 | ||
62 | NORET_TYPE void | 52 | void machine_kexec(struct kimage *image) |
63 | machine_kexec(struct kimage *image) | ||
64 | { | 53 | { |
65 | clear_all_subchannels(); | ||
66 | cio_reset_channel_paths(); | ||
67 | |||
68 | /* Disable lowcore protection */ | ||
69 | ctl_clear_bit(0,28); | ||
70 | |||
71 | on_each_cpu(kexec_halt_all_cpus, image, 0, 0); | ||
72 | for (;;); | ||
73 | } | ||
74 | |||
75 | extern void pfault_fini(void); | ||
76 | |||
77 | static void | ||
78 | kexec_halt_all_cpus(void *kernel_image) | ||
79 | { | ||
80 | static atomic_t cpuid = ATOMIC_INIT(-1); | ||
81 | int cpu; | ||
82 | struct kimage *image; | ||
83 | relocate_kernel_t data_mover; | 54 | relocate_kernel_t data_mover; |
84 | 55 | ||
85 | #ifdef CONFIG_PFAULT | 56 | smp_send_stop(); |
86 | if (MACHINE_IS_VM) | 57 | pfault_fini(); |
87 | pfault_fini(); | 58 | s390_reset_system(); |
88 | #endif | ||
89 | 59 | ||
90 | if (atomic_cmpxchg(&cpuid, -1, smp_processor_id()) != -1) | 60 | data_mover = (relocate_kernel_t) page_to_phys(image->control_code_page); |
91 | signal_processor(smp_processor_id(), sigp_stop); | ||
92 | |||
93 | /* Wait for all other cpus to enter stopped state */ | ||
94 | for_each_online_cpu(cpu) { | ||
95 | if (cpu == smp_processor_id()) | ||
96 | continue; | ||
97 | while (!smp_cpu_not_running(cpu)) | ||
98 | cpu_relax(); | ||
99 | } | ||
100 | |||
101 | image = (struct kimage *) kernel_image; | ||
102 | data_mover = (relocate_kernel_t) | ||
103 | (page_to_pfn(image->control_code_page) << PAGE_SHIFT); | ||
104 | 61 | ||
105 | /* Call the moving routine */ | 62 | /* Call the moving routine */ |
106 | (*data_mover) (&image->head, image->start); | 63 | (*data_mover)(&image->head, image->start); |
64 | for (;;); | ||
107 | } | 65 | } |
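
machine_kexec() now stops the other CPUs, shuts down pfault, calls s390_reset_system() and then jumps into the copied relocate_kernel code by casting the physical address of the control page to the relocate_kernel_t function-pointer type. A small user-space illustration of that calling idiom, with an ordinary function standing in for the relocated code; only the typedef shape mirrors the patch, the rest is made up, and the integer-to-function-pointer cast relies on a flat address model just as the kernel does.

    #include <stdio.h>

    typedef void (*relocate_kernel_t)(unsigned long *entry, unsigned long start);

    static void fake_data_mover(unsigned long *entry, unsigned long start)
    {
            /* the real routine moves the image pages and branches to 'start' */
            printf("would move image described at %p, then start at 0x%lx\n",
                   (void *) entry, start);
    }

    int main(void)
    {
            /* in the kernel this address comes from page_to_phys() of the
             * control page; here it is simply the address of a local function */
            unsigned long reboot_code_buffer = (unsigned long) fake_data_mover;
            relocate_kernel_t data_mover = (relocate_kernel_t) reboot_code_buffer;
            unsigned long head = 0;

            (*data_mover)(&head, 0x10000UL);
            return 0;
    }
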
diff --git a/arch/s390/kernel/reipl.S b/arch/s390/kernel/reipl.S index 0340477f3b08..f9434d42ce9f 100644 --- a/arch/s390/kernel/reipl.S +++ b/arch/s390/kernel/reipl.S | |||
@@ -11,19 +11,10 @@ | |||
11 | .globl do_reipl_asm | 11 | .globl do_reipl_asm |
12 | do_reipl_asm: basr %r13,0 | 12 | do_reipl_asm: basr %r13,0 |
13 | .Lpg0: lpsw .Lnewpsw-.Lpg0(%r13) | 13 | .Lpg0: lpsw .Lnewpsw-.Lpg0(%r13) |
14 | 14 | .Lpg1: # do store status of all registers | |
15 | # switch off lowcore protection | ||
16 | |||
17 | .Lpg1: stctl %c0,%c0,.Lctlsave1-.Lpg0(%r13) | ||
18 | stctl %c0,%c0,.Lctlsave2-.Lpg0(%r13) | ||
19 | ni .Lctlsave1-.Lpg0(%r13),0xef | ||
20 | lctl %c0,%c0,.Lctlsave1-.Lpg0(%r13) | ||
21 | |||
22 | # do store status of all registers | ||
23 | 15 | ||
24 | stm %r0,%r15,__LC_GPREGS_SAVE_AREA | 16 | stm %r0,%r15,__LC_GPREGS_SAVE_AREA |
25 | stctl %c0,%c15,__LC_CREGS_SAVE_AREA | 17 | stctl %c0,%c15,__LC_CREGS_SAVE_AREA |
26 | mvc __LC_CREGS_SAVE_AREA(4),.Lctlsave2-.Lpg0(%r13) | ||
27 | stam %a0,%a15,__LC_AREGS_SAVE_AREA | 18 | stam %a0,%a15,__LC_AREGS_SAVE_AREA |
28 | stpx __LC_PREFIX_SAVE_AREA | 19 | stpx __LC_PREFIX_SAVE_AREA |
29 | stckc .Lclkcmp-.Lpg0(%r13) | 20 | stckc .Lclkcmp-.Lpg0(%r13) |
@@ -56,8 +47,7 @@ do_reipl_asm: basr %r13,0 | |||
56 | .L002: tm .Liplirb+8-.Lpg0(%r13),0xf3 | 47 | .L002: tm .Liplirb+8-.Lpg0(%r13),0xf3 |
57 | jz .L003 | 48 | jz .L003 |
58 | bas %r14,.Ldisab-.Lpg0(%r13) | 49 | bas %r14,.Ldisab-.Lpg0(%r13) |
59 | .L003: spx .Lnull-.Lpg0(%r13) | 50 | .L003: st %r1,__LC_SUBCHANNEL_ID |
60 | st %r1,__LC_SUBCHANNEL_ID | ||
61 | lpsw 0 | 51 | lpsw 0 |
62 | sigp 0,0,0(6) | 52 | sigp 0,0,0(6) |
63 | .Ldisab: st %r14,.Ldispsw+4-.Lpg0(%r13) | 53 | .Ldisab: st %r14,.Ldispsw+4-.Lpg0(%r13) |
@@ -65,9 +55,6 @@ do_reipl_asm: basr %r13,0 | |||
65 | .align 8 | 55 | .align 8 |
66 | .Lclkcmp: .quad 0x0000000000000000 | 56 | .Lclkcmp: .quad 0x0000000000000000 |
67 | .Lall: .long 0xff000000 | 57 | .Lall: .long 0xff000000 |
68 | .Lnull: .long 0x00000000 | ||
69 | .Lctlsave1: .long 0x00000000 | ||
70 | .Lctlsave2: .long 0x00000000 | ||
71 | .align 8 | 58 | .align 8 |
72 | .Lnewpsw: .long 0x00080000,0x80000000+.Lpg1 | 59 | .Lnewpsw: .long 0x00080000,0x80000000+.Lpg1 |
73 | .Lpcnew: .long 0x00080000,0x80000000+.Lecs | 60 | .Lpcnew: .long 0x00080000,0x80000000+.Lecs |
diff --git a/arch/s390/kernel/reipl64.S b/arch/s390/kernel/reipl64.S index de7435054f7c..f18ef260ca23 100644 --- a/arch/s390/kernel/reipl64.S +++ b/arch/s390/kernel/reipl64.S | |||
@@ -10,10 +10,10 @@ | |||
10 | #include <asm/lowcore.h> | 10 | #include <asm/lowcore.h> |
11 | .globl do_reipl_asm | 11 | .globl do_reipl_asm |
12 | do_reipl_asm: basr %r13,0 | 12 | do_reipl_asm: basr %r13,0 |
13 | .Lpg0: lpswe .Lnewpsw-.Lpg0(%r13) | ||
14 | .Lpg1: # do store status of all registers | ||
13 | 15 | ||
14 | # do store status of all registers | 16 | stg %r1,.Lregsave-.Lpg0(%r13) |
15 | |||
16 | .Lpg0: stg %r1,.Lregsave-.Lpg0(%r13) | ||
17 | lghi %r1,0x1000 | 17 | lghi %r1,0x1000 |
18 | stmg %r0,%r15,__LC_GPREGS_SAVE_AREA-0x1000(%r1) | 18 | stmg %r0,%r15,__LC_GPREGS_SAVE_AREA-0x1000(%r1) |
19 | lg %r0,.Lregsave-.Lpg0(%r13) | 19 | lg %r0,.Lregsave-.Lpg0(%r13) |
@@ -27,11 +27,7 @@ do_reipl_asm: basr %r13,0 | |||
27 | stpt __LC_CPU_TIMER_SAVE_AREA-0x1000(%r1) | 27 | stpt __LC_CPU_TIMER_SAVE_AREA-0x1000(%r1) |
28 | stg %r13, __LC_PSW_SAVE_AREA-0x1000+8(%r1) | 28 | stg %r13, __LC_PSW_SAVE_AREA-0x1000+8(%r1) |
29 | 29 | ||
30 | lpswe .Lnewpsw-.Lpg0(%r13) | 30 | lctlg %c6,%c6,.Lall-.Lpg0(%r13) |
31 | .Lpg1: lctlg %c6,%c6,.Lall-.Lpg0(%r13) | ||
32 | stctg %c0,%c0,.Lregsave-.Lpg0(%r13) | ||
33 | ni .Lregsave+4-.Lpg0(%r13),0xef | ||
34 | lctlg %c0,%c0,.Lregsave-.Lpg0(%r13) | ||
35 | lgr %r1,%r2 | 31 | lgr %r1,%r2 |
36 | mvc __LC_PGM_NEW_PSW(16),.Lpcnew-.Lpg0(%r13) | 32 | mvc __LC_PGM_NEW_PSW(16),.Lpcnew-.Lpg0(%r13) |
37 | stsch .Lschib-.Lpg0(%r13) | 33 | stsch .Lschib-.Lpg0(%r13) |
@@ -56,8 +52,7 @@ do_reipl_asm: basr %r13,0 | |||
56 | .L002: tm .Liplirb+8-.Lpg0(%r13),0xf3 | 52 | .L002: tm .Liplirb+8-.Lpg0(%r13),0xf3 |
57 | jz .L003 | 53 | jz .L003 |
58 | bas %r14,.Ldisab-.Lpg0(%r13) | 54 | bas %r14,.Ldisab-.Lpg0(%r13) |
59 | .L003: spx .Lnull-.Lpg0(%r13) | 55 | .L003: st %r1,__LC_SUBCHANNEL_ID |
60 | st %r1,__LC_SUBCHANNEL_ID | ||
61 | lhi %r1,0 # mode 0 = esa | 56 | lhi %r1,0 # mode 0 = esa |
62 | slr %r0,%r0 # set cpuid to zero | 57 | slr %r0,%r0 # set cpuid to zero |
63 | sigp %r1,%r0,0x12 # switch to esa mode | 58 | sigp %r1,%r0,0x12 # switch to esa mode |
@@ -70,7 +65,6 @@ do_reipl_asm: basr %r13,0 | |||
70 | .Lclkcmp: .quad 0x0000000000000000 | 65 | .Lclkcmp: .quad 0x0000000000000000 |
71 | .Lall: .quad 0x00000000ff000000 | 66 | .Lall: .quad 0x00000000ff000000 |
72 | .Lregsave: .quad 0x0000000000000000 | 67 | .Lregsave: .quad 0x0000000000000000 |
73 | .Lnull: .long 0x0000000000000000 | ||
74 | .align 16 | 68 | .align 16 |
75 | /* | 69 | /* |
76 | * These addresses have to be 31 bit otherwise | 70 | * These addresses have to be 31 bit otherwise |
diff --git a/arch/s390/kernel/relocate_kernel.S b/arch/s390/kernel/relocate_kernel.S index f9899ff2e5b0..3b456b80bcee 100644 --- a/arch/s390/kernel/relocate_kernel.S +++ b/arch/s390/kernel/relocate_kernel.S | |||
@@ -26,8 +26,7 @@ | |||
26 | relocate_kernel: | 26 | relocate_kernel: |
27 | basr %r13,0 # base address | 27 | basr %r13,0 # base address |
28 | .base: | 28 | .base: |
29 | stnsm sys_msk-.base(%r13),0xf8 # disable DAT and IRQ (external) | 29 | stnsm sys_msk-.base(%r13),0xfb # disable DAT |
30 | spx zero64-.base(%r13) # absolute addressing mode | ||
31 | stctl %c0,%c15,ctlregs-.base(%r13) | 30 | stctl %c0,%c15,ctlregs-.base(%r13) |
32 | stm %r0,%r15,gprregs-.base(%r13) | 31 | stm %r0,%r15,gprregs-.base(%r13) |
33 | la %r1,load_psw-.base(%r13) | 32 | la %r1,load_psw-.base(%r13) |
@@ -97,8 +96,6 @@ | |||
97 | lpsw 0 # hopefully start new kernel... | 96 | lpsw 0 # hopefully start new kernel... |
98 | 97 | ||
99 | .align 8 | 98 | .align 8 |
100 | zero64: | ||
101 | .quad 0 | ||
102 | load_psw: | 99 | load_psw: |
103 | .long 0x00080000,0x80000000 | 100 | .long 0x00080000,0x80000000 |
104 | sys_msk: | 101 | sys_msk: |
diff --git a/arch/s390/kernel/relocate_kernel64.S b/arch/s390/kernel/relocate_kernel64.S index 4fb443042d9c..1f9ea2067b59 100644 --- a/arch/s390/kernel/relocate_kernel64.S +++ b/arch/s390/kernel/relocate_kernel64.S | |||
@@ -27,8 +27,7 @@ | |||
27 | relocate_kernel: | 27 | relocate_kernel: |
28 | basr %r13,0 # base address | 28 | basr %r13,0 # base address |
29 | .base: | 29 | .base: |
30 | stnsm sys_msk-.base(%r13),0xf8 # disable DAT and IRQs | 30 | stnsm sys_msk-.base(%r13),0xfb # disable DAT |
31 | spx zero64-.base(%r13) # absolute addressing mode | ||
32 | stctg %c0,%c15,ctlregs-.base(%r13) | 31 | stctg %c0,%c15,ctlregs-.base(%r13) |
33 | stmg %r0,%r15,gprregs-.base(%r13) | 32 | stmg %r0,%r15,gprregs-.base(%r13) |
34 | lghi %r0,3 | 33 | lghi %r0,3 |
@@ -100,8 +99,6 @@ | |||
100 | lpsw 0 # hopefully start new kernel... | 99 | lpsw 0 # hopefully start new kernel... |
101 | 100 | ||
102 | .align 8 | 101 | .align 8 |
103 | zero64: | ||
104 | .quad 0 | ||
105 | load_psw: | 102 | load_psw: |
106 | .long 0x00080000,0x80000000 | 103 | .long 0x00080000,0x80000000 |
107 | sys_msk: | 104 | sys_msk: |
diff --git a/arch/s390/kernel/reset.S b/arch/s390/kernel/reset.S new file mode 100644 index 000000000000..be8688c0665c --- /dev/null +++ b/arch/s390/kernel/reset.S | |||
@@ -0,0 +1,48 @@ | |||
1 | /* | ||
2 | * arch/s390/kernel/reset.S | ||
3 | * | ||
4 | * Copyright (C) IBM Corp. 2006 | ||
5 | * Author(s): Heiko Carstens <heiko.carstens@de.ibm.com> | ||
6 | */ | ||
7 | |||
8 | #include <asm/ptrace.h> | ||
9 | #include <asm/lowcore.h> | ||
10 | |||
11 | #ifdef CONFIG_64BIT | ||
12 | |||
13 | .globl reset_mcck_handler | ||
14 | reset_mcck_handler: | ||
15 | basr %r13,0 | ||
16 | 0: lg %r15,__LC_PANIC_STACK # load panic stack | ||
17 | aghi %r15,-STACK_FRAME_OVERHEAD | ||
18 | lg %r1,s390_reset_mcck_handler-0b(%r13) | ||
19 | ltgr %r1,%r1 | ||
20 | jz 1f | ||
21 | basr %r14,%r1 | ||
22 | 1: la %r1,4095 | ||
23 | lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1) | ||
24 | lpswe __LC_MCK_OLD_PSW | ||
25 | |||
26 | .globl s390_reset_mcck_handler | ||
27 | s390_reset_mcck_handler: | ||
28 | .quad 0 | ||
29 | |||
30 | #else /* CONFIG_64BIT */ | ||
31 | |||
32 | .globl reset_mcck_handler | ||
33 | reset_mcck_handler: | ||
34 | basr %r13,0 | ||
35 | 0: l %r15,__LC_PANIC_STACK # load panic stack | ||
36 | ahi %r15,-STACK_FRAME_OVERHEAD | ||
37 | l %r1,s390_reset_mcck_handler-0b(%r13) | ||
38 | ltr %r1,%r1 | ||
39 | jz 1f | ||
40 | basr %r14,%r1 | ||
41 | 1: lm %r0,%r15,__LC_GPREGS_SAVE_AREA | ||
42 | lpsw __LC_MCK_OLD_PSW | ||
43 | |||
44 | .globl s390_reset_mcck_handler | ||
45 | s390_reset_mcck_handler: | ||
46 | .long 0 | ||
47 | |||
48 | #endif /* CONFIG_64BIT */ | ||
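
reset.S installs a machine check handler that loads the panic stack and, only if the global s390_reset_mcck_handler word is non-zero, branches to the registered handler before restoring registers and returning through the machine check old PSW. The same "optional hook, call only if set" pattern in plain C, as a rough sketch with hypothetical names:

    #include <stdio.h>

    /* zero/NULL means "no handler registered", like the .quad 0 in reset.S */
    static void (*reset_mcck_hook)(void);

    static void handle_mcck(void)
    {
            if (reset_mcck_hook)            /* ltgr %r1,%r1 ; jz 1f */
                    reset_mcck_hook();      /* basr %r14,%r1 */
            /* fall through: restore saved registers and return (lpswe) */
    }

    static void my_hook(void)
    {
            puts("machine check during reset handled");
    }

    int main(void)
    {
            handle_mcck();                  /* no hook yet: nothing happens */
            reset_mcck_hook = my_hook;
            handle_mcck();                  /* hook runs once */
            return 0;
    }
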
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c index 2aa13e8e000a..b928fecdc743 100644 --- a/arch/s390/kernel/setup.c +++ b/arch/s390/kernel/setup.c | |||
@@ -62,13 +62,9 @@ EXPORT_SYMBOL_GPL(uaccess); | |||
62 | unsigned int console_mode = 0; | 62 | unsigned int console_mode = 0; |
63 | unsigned int console_devno = -1; | 63 | unsigned int console_devno = -1; |
64 | unsigned int console_irq = -1; | 64 | unsigned int console_irq = -1; |
65 | unsigned long memory_size = 0; | ||
66 | unsigned long machine_flags = 0; | 65 | unsigned long machine_flags = 0; |
67 | struct { | 66 | |
68 | unsigned long addr, size, type; | 67 | struct mem_chunk memory_chunk[MEMORY_CHUNKS]; |
69 | } memory_chunk[MEMORY_CHUNKS] = { { 0 } }; | ||
70 | #define CHUNK_READ_WRITE 0 | ||
71 | #define CHUNK_READ_ONLY 1 | ||
72 | volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */ | 68 | volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */ |
73 | unsigned long __initdata zholes_size[MAX_NR_ZONES]; | 69 | unsigned long __initdata zholes_size[MAX_NR_ZONES]; |
74 | static unsigned long __initdata memory_end; | 70 | static unsigned long __initdata memory_end; |
@@ -229,11 +225,11 @@ static void __init conmode_default(void) | |||
229 | char *ptr; | 225 | char *ptr; |
230 | 226 | ||
231 | if (MACHINE_IS_VM) { | 227 | if (MACHINE_IS_VM) { |
232 | __cpcmd("QUERY CONSOLE", query_buffer, 1024, NULL); | 228 | cpcmd("QUERY CONSOLE", query_buffer, 1024, NULL); |
233 | console_devno = simple_strtoul(query_buffer + 5, NULL, 16); | 229 | console_devno = simple_strtoul(query_buffer + 5, NULL, 16); |
234 | ptr = strstr(query_buffer, "SUBCHANNEL ="); | 230 | ptr = strstr(query_buffer, "SUBCHANNEL ="); |
235 | console_irq = simple_strtoul(ptr + 13, NULL, 16); | 231 | console_irq = simple_strtoul(ptr + 13, NULL, 16); |
236 | __cpcmd("QUERY TERM", query_buffer, 1024, NULL); | 232 | cpcmd("QUERY TERM", query_buffer, 1024, NULL); |
237 | ptr = strstr(query_buffer, "CONMODE"); | 233 | ptr = strstr(query_buffer, "CONMODE"); |
238 | /* | 234 | /* |
239 | * Set the conmode to 3215 so that the device recognition | 235 | * Set the conmode to 3215 so that the device recognition |
@@ -242,7 +238,7 @@ static void __init conmode_default(void) | |||
242 | * 3215 and the 3270 driver will try to access the console | 238 | * 3215 and the 3270 driver will try to access the console |
243 | * device (3215 as console and 3270 as normal tty). | 239 | * device (3215 as console and 3270 as normal tty). |
244 | */ | 240 | */ |
245 | __cpcmd("TERM CONMODE 3215", NULL, 0, NULL); | 241 | cpcmd("TERM CONMODE 3215", NULL, 0, NULL); |
246 | if (ptr == NULL) { | 242 | if (ptr == NULL) { |
247 | #if defined(CONFIG_SCLP_CONSOLE) | 243 | #if defined(CONFIG_SCLP_CONSOLE) |
248 | SET_CONSOLE_SCLP; | 244 | SET_CONSOLE_SCLP; |
@@ -299,14 +295,14 @@ static void do_machine_restart_nonsmp(char * __unused) | |||
299 | static void do_machine_halt_nonsmp(void) | 295 | static void do_machine_halt_nonsmp(void) |
300 | { | 296 | { |
301 | if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0) | 297 | if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0) |
302 | cpcmd(vmhalt_cmd, NULL, 0, NULL); | 298 | __cpcmd(vmhalt_cmd, NULL, 0, NULL); |
303 | signal_processor(smp_processor_id(), sigp_stop_and_store_status); | 299 | signal_processor(smp_processor_id(), sigp_stop_and_store_status); |
304 | } | 300 | } |
305 | 301 | ||
306 | static void do_machine_power_off_nonsmp(void) | 302 | static void do_machine_power_off_nonsmp(void) |
307 | { | 303 | { |
308 | if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0) | 304 | if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0) |
309 | cpcmd(vmpoff_cmd, NULL, 0, NULL); | 305 | __cpcmd(vmpoff_cmd, NULL, 0, NULL); |
310 | signal_processor(smp_processor_id(), sigp_stop_and_store_status); | 306 | signal_processor(smp_processor_id(), sigp_stop_and_store_status); |
311 | } | 307 | } |
312 | 308 | ||
@@ -489,6 +485,37 @@ setup_resources(void) | |||
489 | } | 485 | } |
490 | } | 486 | } |
491 | 487 | ||
488 | static void __init setup_memory_end(void) | ||
489 | { | ||
490 | unsigned long real_size, memory_size; | ||
491 | unsigned long max_mem, max_phys; | ||
492 | int i; | ||
493 | |||
494 | memory_size = real_size = 0; | ||
495 | max_phys = VMALLOC_END - VMALLOC_MIN_SIZE; | ||
496 | memory_end &= PAGE_MASK; | ||
497 | |||
498 | max_mem = memory_end ? min(max_phys, memory_end) : max_phys; | ||
499 | |||
500 | for (i = 0; i < MEMORY_CHUNKS; i++) { | ||
501 | struct mem_chunk *chunk = &memory_chunk[i]; | ||
502 | |||
503 | real_size = max(real_size, chunk->addr + chunk->size); | ||
504 | if (chunk->addr >= max_mem) { | ||
505 | memset(chunk, 0, sizeof(*chunk)); | ||
506 | continue; | ||
507 | } | ||
508 | if (chunk->addr + chunk->size > max_mem) | ||
509 | chunk->size = max_mem - chunk->addr; | ||
510 | memory_size = max(memory_size, chunk->addr + chunk->size); | ||
511 | } | ||
512 | if (!memory_end) | ||
513 | memory_end = memory_size; | ||
514 | if (real_size > memory_end) | ||
515 | printk("More memory detected than supported. Unused: %luk\n", | ||
516 | (real_size - memory_end) >> 10); | ||
517 | } | ||
518 | |||
492 | static void __init | 519 | static void __init |
493 | setup_memory(void) | 520 | setup_memory(void) |
494 | { | 521 | { |
@@ -645,8 +672,6 @@ setup_arch(char **cmdline_p) | |||
645 | init_mm.end_data = (unsigned long) &_edata; | 672 | init_mm.end_data = (unsigned long) &_edata; |
646 | init_mm.brk = (unsigned long) &_end; | 673 | init_mm.brk = (unsigned long) &_end; |
647 | 674 | ||
648 | memory_end = memory_size; | ||
649 | |||
650 | if (MACHINE_HAS_MVCOS) | 675 | if (MACHINE_HAS_MVCOS) |
651 | memcpy(&uaccess, &uaccess_mvcos, sizeof(uaccess)); | 676 | memcpy(&uaccess, &uaccess_mvcos, sizeof(uaccess)); |
652 | else | 677 | else |
@@ -654,20 +679,7 @@ setup_arch(char **cmdline_p) | |||
654 | 679 | ||
655 | parse_early_param(); | 680 | parse_early_param(); |
656 | 681 | ||
657 | #ifndef CONFIG_64BIT | 682 | setup_memory_end(); |
658 | memory_end &= ~0x400000UL; | ||
659 | |||
660 | /* | ||
661 | * We need some free virtual space to be able to do vmalloc. | ||
662 | * On a machine with 2GB memory we make sure that we have at | ||
663 | * least 128 MB free space for vmalloc. | ||
664 | */ | ||
665 | if (memory_end > 1920*1024*1024) | ||
666 | memory_end = 1920*1024*1024; | ||
667 | #else /* CONFIG_64BIT */ | ||
668 | memory_end &= ~0x200000UL; | ||
669 | #endif /* CONFIG_64BIT */ | ||
670 | |||
671 | setup_memory(); | 683 | setup_memory(); |
672 | setup_resources(); | 684 | setup_resources(); |
673 | setup_lowcore(); | 685 | setup_lowcore(); |
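
setup_memory_end() replaces the old open-coded memory_end handling: it clips every detected memory chunk to the usable maximum (the command-line memory_end if given, otherwise the highest address that still leaves room for vmalloc) and reports how much detected memory stays unused. A user-space sketch of that clamping step with made-up chunk values; MEMORY_CHUNKS, the chunk layout and the 1.5 GB limit below are stand-ins, not the kernel definitions.

    #include <stdio.h>
    #include <string.h>

    #define MEMORY_CHUNKS 4

    struct mem_chunk { unsigned long addr, size, type; };

    static struct mem_chunk memory_chunk[MEMORY_CHUNKS] = {
            { 0x00000000UL, 0x20000000UL, 0 },      /* 512 MB */
            { 0x20000000UL, 0x20000000UL, 0 },      /* 512 MB */
            { 0x40000000UL, 0x40000000UL, 0 },      /* 1 GB, partly above the limit */
    };

    int main(void)
    {
            unsigned long max_mem = 0x60000000UL;   /* pretend usable limit: 1.5 GB */
            unsigned long real_size = 0, memory_size = 0;
            int i;

            for (i = 0; i < MEMORY_CHUNKS; i++) {
                    struct mem_chunk *chunk = &memory_chunk[i];

                    if (real_size < chunk->addr + chunk->size)
                            real_size = chunk->addr + chunk->size;
                    if (chunk->addr >= max_mem) {
                            /* chunk lies entirely above the limit: drop it */
                            memset(chunk, 0, sizeof(*chunk));
                            continue;
                    }
                    if (chunk->addr + chunk->size > max_mem)
                            chunk->size = max_mem - chunk->addr;    /* clip */
                    if (memory_size < chunk->addr + chunk->size)
                            memory_size = chunk->addr + chunk->size;
            }
            printf("usable: %luk, unused: %luk\n",
                   memory_size >> 10, (real_size - memory_size) >> 10);
            return 0;
    }
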
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c index 62822245f9be..19090f7d4f51 100644 --- a/arch/s390/kernel/smp.c +++ b/arch/s390/kernel/smp.c | |||
@@ -230,18 +230,37 @@ static inline void do_store_status(void) | |||
230 | } | 230 | } |
231 | } | 231 | } |
232 | 232 | ||
233 | static inline void do_wait_for_stop(void) | ||
234 | { | ||
235 | int cpu; | ||
236 | |||
237 | /* Wait for all other cpus to enter stopped state */ | ||
238 | for_each_online_cpu(cpu) { | ||
239 | if (cpu == smp_processor_id()) | ||
240 | continue; | ||
241 | while(!smp_cpu_not_running(cpu)) | ||
242 | cpu_relax(); | ||
243 | } | ||
244 | } | ||
245 | |||
233 | /* | 246 | /* |
234 | * this function sends a 'stop' sigp to all other CPUs in the system. | 247 | * this function sends a 'stop' sigp to all other CPUs in the system. |
235 | * it goes straight through. | 248 | * it goes straight through. |
236 | */ | 249 | */ |
237 | void smp_send_stop(void) | 250 | void smp_send_stop(void) |
238 | { | 251 | { |
252 | /* Disable all interrupts/machine checks */ | ||
253 | __load_psw_mask(PSW_KERNEL_BITS & ~PSW_MASK_MCHECK); | ||
254 | |||
239 | /* write magic number to zero page (absolute 0) */ | 255 | /* write magic number to zero page (absolute 0) */ |
240 | lowcore_ptr[smp_processor_id()]->panic_magic = __PANIC_MAGIC; | 256 | lowcore_ptr[smp_processor_id()]->panic_magic = __PANIC_MAGIC; |
241 | 257 | ||
242 | /* stop other processors. */ | 258 | /* stop other processors. */ |
243 | do_send_stop(); | 259 | do_send_stop(); |
244 | 260 | ||
261 | /* wait until other processors are stopped */ | ||
262 | do_wait_for_stop(); | ||
263 | |||
245 | /* store status of other processors. */ | 264 | /* store status of other processors. */ |
246 | do_store_status(); | 265 | do_store_status(); |
247 | } | 266 | } |
@@ -250,88 +269,28 @@ void smp_send_stop(void) | |||
250 | * Reboot, halt and power_off routines for SMP. | 269 | * Reboot, halt and power_off routines for SMP. |
251 | */ | 270 | */ |
252 | 271 | ||
253 | static void do_machine_restart(void * __unused) | ||
254 | { | ||
255 | int cpu; | ||
256 | static atomic_t cpuid = ATOMIC_INIT(-1); | ||
257 | |||
258 | if (atomic_cmpxchg(&cpuid, -1, smp_processor_id()) != -1) | ||
259 | signal_processor(smp_processor_id(), sigp_stop); | ||
260 | |||
261 | /* Wait for all other cpus to enter stopped state */ | ||
262 | for_each_online_cpu(cpu) { | ||
263 | if (cpu == smp_processor_id()) | ||
264 | continue; | ||
265 | while(!smp_cpu_not_running(cpu)) | ||
266 | cpu_relax(); | ||
267 | } | ||
268 | |||
269 | /* Store status of other cpus. */ | ||
270 | do_store_status(); | ||
271 | |||
272 | /* | ||
273 | * Finally call reipl. Because we waited for all other | ||
274 | * cpus to enter this function we know that they do | ||
275 | * not hold any s390irq-locks (the cpus have been | ||
276 | * interrupted by an external interrupt and s390irq | ||
277 | * locks are always held disabled). | ||
278 | */ | ||
279 | do_reipl(); | ||
280 | } | ||
281 | |||
282 | void machine_restart_smp(char * __unused) | 272 | void machine_restart_smp(char * __unused) |
283 | { | 273 | { |
284 | on_each_cpu(do_machine_restart, NULL, 0, 0); | 274 | smp_send_stop(); |
285 | } | 275 | do_reipl(); |
286 | |||
287 | static void do_wait_for_stop(void) | ||
288 | { | ||
289 | unsigned long cr[16]; | ||
290 | |||
291 | __ctl_store(cr, 0, 15); | ||
292 | cr[0] &= ~0xffff; | ||
293 | cr[6] = 0; | ||
294 | __ctl_load(cr, 0, 15); | ||
295 | for (;;) | ||
296 | enabled_wait(); | ||
297 | } | ||
298 | |||
299 | static void do_machine_halt(void * __unused) | ||
300 | { | ||
301 | static atomic_t cpuid = ATOMIC_INIT(-1); | ||
302 | |||
303 | if (atomic_cmpxchg(&cpuid, -1, smp_processor_id()) == -1) { | ||
304 | smp_send_stop(); | ||
305 | if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0) | ||
306 | cpcmd(vmhalt_cmd, NULL, 0, NULL); | ||
307 | signal_processor(smp_processor_id(), | ||
308 | sigp_stop_and_store_status); | ||
309 | } | ||
310 | do_wait_for_stop(); | ||
311 | } | 276 | } |
312 | 277 | ||
313 | void machine_halt_smp(void) | 278 | void machine_halt_smp(void) |
314 | { | 279 | { |
315 | on_each_cpu(do_machine_halt, NULL, 0, 0); | 280 | smp_send_stop(); |
316 | } | 281 | if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0) |
317 | 282 | __cpcmd(vmhalt_cmd, NULL, 0, NULL); | |
318 | static void do_machine_power_off(void * __unused) | 283 | signal_processor(smp_processor_id(), sigp_stop_and_store_status); |
319 | { | 284 | for (;;); |
320 | static atomic_t cpuid = ATOMIC_INIT(-1); | ||
321 | |||
322 | if (atomic_cmpxchg(&cpuid, -1, smp_processor_id()) == -1) { | ||
323 | smp_send_stop(); | ||
324 | if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0) | ||
325 | cpcmd(vmpoff_cmd, NULL, 0, NULL); | ||
326 | signal_processor(smp_processor_id(), | ||
327 | sigp_stop_and_store_status); | ||
328 | } | ||
329 | do_wait_for_stop(); | ||
330 | } | 285 | } |
331 | 286 | ||
332 | void machine_power_off_smp(void) | 287 | void machine_power_off_smp(void) |
333 | { | 288 | { |
334 | on_each_cpu(do_machine_power_off, NULL, 0, 0); | 289 | smp_send_stop(); |
290 | if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0) | ||
291 | __cpcmd(vmpoff_cmd, NULL, 0, NULL); | ||
292 | signal_processor(smp_processor_id(), sigp_stop_and_store_status); | ||
293 | for (;;); | ||
335 | } | 294 | } |
336 | 295 | ||
337 | /* | 296 | /* |
@@ -501,8 +460,6 @@ __init smp_count_cpus(void) | |||
501 | */ | 460 | */ |
502 | extern void init_cpu_timer(void); | 461 | extern void init_cpu_timer(void); |
503 | extern void init_cpu_vtimer(void); | 462 | extern void init_cpu_vtimer(void); |
504 | extern int pfault_init(void); | ||
505 | extern void pfault_fini(void); | ||
506 | 463 | ||
507 | int __devinit start_secondary(void *cpuvoid) | 464 | int __devinit start_secondary(void *cpuvoid) |
508 | { | 465 | { |
@@ -514,11 +471,9 @@ int __devinit start_secondary(void *cpuvoid) | |||
514 | #ifdef CONFIG_VIRT_TIMER | 471 | #ifdef CONFIG_VIRT_TIMER |
515 | init_cpu_vtimer(); | 472 | init_cpu_vtimer(); |
516 | #endif | 473 | #endif |
517 | #ifdef CONFIG_PFAULT | ||
518 | /* Enable pfault pseudo page faults on this cpu. */ | 474 | /* Enable pfault pseudo page faults on this cpu. */ |
519 | if (MACHINE_IS_VM) | 475 | pfault_init(); |
520 | pfault_init(); | 476 | |
521 | #endif | ||
522 | /* Mark this cpu as online */ | 477 | /* Mark this cpu as online */ |
523 | cpu_set(smp_processor_id(), cpu_online_map); | 478 | cpu_set(smp_processor_id(), cpu_online_map); |
524 | /* Switch on interrupts */ | 479 | /* Switch on interrupts */ |
@@ -708,11 +663,8 @@ __cpu_disable(void) | |||
708 | } | 663 | } |
709 | cpu_clear(cpu, cpu_online_map); | 664 | cpu_clear(cpu, cpu_online_map); |
710 | 665 | ||
711 | #ifdef CONFIG_PFAULT | ||
712 | /* Disable pfault pseudo page faults on this cpu. */ | 666 | /* Disable pfault pseudo page faults on this cpu. */ |
713 | if (MACHINE_IS_VM) | 667 | pfault_fini(); |
714 | pfault_fini(); | ||
715 | #endif | ||
716 | 668 | ||
717 | memset(&cr_parms.orvals, 0, sizeof(cr_parms.orvals)); | 669 | memset(&cr_parms.orvals, 0, sizeof(cr_parms.orvals)); |
718 | memset(&cr_parms.andvals, 0xff, sizeof(cr_parms.andvals)); | 670 | memset(&cr_parms.andvals, 0xff, sizeof(cr_parms.andvals)); |
@@ -860,4 +812,3 @@ EXPORT_SYMBOL(smp_ctl_clear_bit); | |||
860 | EXPORT_SYMBOL(smp_call_function); | 812 | EXPORT_SYMBOL(smp_call_function); |
861 | EXPORT_SYMBOL(smp_get_cpu); | 813 | EXPORT_SYMBOL(smp_get_cpu); |
862 | EXPORT_SYMBOL(smp_put_cpu); | 814 | EXPORT_SYMBOL(smp_put_cpu); |
863 | |||
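
smp_send_stop() now disables machine checks, signals every other CPU to stop, spins in do_wait_for_stop() until they are all reported as not running, and only then stores their status; the restart, halt and power-off paths simply call it and finish the job on the calling CPU. A rough pthread analogue of the "signal, then spin until everyone has stopped" step; the thread model, flags and names are only an illustration, not how CPUs are actually stopped.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NR_CPUS 4

    static atomic_int stop_requested;
    static atomic_int cpu_running[NR_CPUS];

    static void *secondary(void *arg)
    {
            int cpu = (int)(long) arg;

            atomic_store(&cpu_running[cpu], 1);
            while (!atomic_load(&stop_requested))
                    ;                               /* do work */
            atomic_store(&cpu_running[cpu], 0);     /* enter "stopped" state */
            return NULL;
    }

    static void smp_send_stop_sketch(void)
    {
            int cpu;

            atomic_store(&stop_requested, 1);       /* do_send_stop() */
            for (cpu = 1; cpu < NR_CPUS; cpu++)     /* do_wait_for_stop() */
                    while (atomic_load(&cpu_running[cpu]))
                            ;                       /* cpu_relax() */
            /* only now is it safe to store the other CPUs' status */
    }

    int main(void)
    {
            pthread_t tid[NR_CPUS];
            int cpu;

            for (cpu = 1; cpu < NR_CPUS; cpu++)
                    pthread_create(&tid[cpu], NULL, secondary, (void *)(long) cpu);
            usleep(10000);
            smp_send_stop_sketch();
            puts("all secondary cpus stopped");
            for (cpu = 1; cpu < NR_CPUS; cpu++)
                    pthread_join(tid[cpu], NULL);
            return 0;
    }
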
diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c index 92ecffbc8d82..3cbb0dcf1f1d 100644 --- a/arch/s390/kernel/traps.c +++ b/arch/s390/kernel/traps.c | |||
@@ -58,12 +58,6 @@ int sysctl_userprocess_debug = 0; | |||
58 | 58 | ||
59 | extern pgm_check_handler_t do_protection_exception; | 59 | extern pgm_check_handler_t do_protection_exception; |
60 | extern pgm_check_handler_t do_dat_exception; | 60 | extern pgm_check_handler_t do_dat_exception; |
61 | #ifdef CONFIG_PFAULT | ||
62 | extern int pfault_init(void); | ||
63 | extern void pfault_fini(void); | ||
64 | extern void pfault_interrupt(__u16 error_code); | ||
65 | static ext_int_info_t ext_int_pfault; | ||
66 | #endif | ||
67 | extern pgm_check_handler_t do_monitor_call; | 61 | extern pgm_check_handler_t do_monitor_call; |
68 | 62 | ||
69 | #define stack_pointer ({ void **sp; asm("la %0,0(15)" : "=&d" (sp)); sp; }) | 63 | #define stack_pointer ({ void **sp; asm("la %0,0(15)" : "=&d" (sp)); sp; }) |
@@ -135,7 +129,7 @@ __show_trace(unsigned long sp, unsigned long low, unsigned long high) | |||
135 | } | 129 | } |
136 | } | 130 | } |
137 | 131 | ||
138 | void show_trace(struct task_struct *task, unsigned long * stack) | 132 | void show_trace(struct task_struct *task, unsigned long *stack) |
139 | { | 133 | { |
140 | register unsigned long __r15 asm ("15"); | 134 | register unsigned long __r15 asm ("15"); |
141 | unsigned long sp; | 135 | unsigned long sp; |
@@ -157,6 +151,9 @@ void show_trace(struct task_struct *task, unsigned long * stack) | |||
157 | __show_trace(sp, S390_lowcore.thread_info, | 151 | __show_trace(sp, S390_lowcore.thread_info, |
158 | S390_lowcore.thread_info + THREAD_SIZE); | 152 | S390_lowcore.thread_info + THREAD_SIZE); |
159 | printk("\n"); | 153 | printk("\n"); |
154 | if (!task) | ||
155 | task = current; | ||
156 | debug_show_held_locks(task); | ||
160 | } | 157 | } |
161 | 158 | ||
162 | void show_stack(struct task_struct *task, unsigned long *sp) | 159 | void show_stack(struct task_struct *task, unsigned long *sp) |
@@ -739,22 +736,5 @@ void __init trap_init(void) | |||
739 | pgm_check_table[0x1C] = &space_switch_exception; | 736 | pgm_check_table[0x1C] = &space_switch_exception; |
740 | pgm_check_table[0x1D] = &hfp_sqrt_exception; | 737 | pgm_check_table[0x1D] = &hfp_sqrt_exception; |
741 | pgm_check_table[0x40] = &do_monitor_call; | 738 | pgm_check_table[0x40] = &do_monitor_call; |
742 | 739 | pfault_irq_init(); | |
743 | if (MACHINE_IS_VM) { | ||
744 | #ifdef CONFIG_PFAULT | ||
745 | /* | ||
746 | * Try to get pfault pseudo page faults going. | ||
747 | */ | ||
748 | if (register_early_external_interrupt(0x2603, pfault_interrupt, | ||
749 | &ext_int_pfault) != 0) | ||
750 | panic("Couldn't request external interrupt 0x2603"); | ||
751 | |||
752 | if (pfault_init() == 0) | ||
753 | return; | ||
754 | |||
755 | /* Tough luck, no pfault. */ | ||
756 | unregister_early_external_interrupt(0x2603, pfault_interrupt, | ||
757 | &ext_int_pfault); | ||
758 | #endif | ||
759 | } | ||
760 | } | 740 | } |
diff --git a/arch/s390/lib/Makefile b/arch/s390/lib/Makefile index b0cfa6c4883d..b5f94cf3bde8 100644 --- a/arch/s390/lib/Makefile +++ b/arch/s390/lib/Makefile | |||
@@ -4,7 +4,7 @@ | |||
4 | 4 | ||
5 | EXTRA_AFLAGS := -traditional | 5 | EXTRA_AFLAGS := -traditional |
6 | 6 | ||
7 | lib-y += delay.o string.o uaccess_std.o | 7 | lib-y += delay.o string.o uaccess_std.o uaccess_pt.o |
8 | lib-$(CONFIG_32BIT) += div64.o | 8 | lib-$(CONFIG_32BIT) += div64.o |
9 | lib-$(CONFIG_64BIT) += uaccess_mvcos.o | 9 | lib-$(CONFIG_64BIT) += uaccess_mvcos.o |
10 | lib-$(CONFIG_SMP) += spinlock.o | 10 | lib-$(CONFIG_SMP) += spinlock.o |
diff --git a/arch/s390/lib/uaccess_mvcos.c b/arch/s390/lib/uaccess_mvcos.c index 121b2935a422..f9a23d57eb79 100644 --- a/arch/s390/lib/uaccess_mvcos.c +++ b/arch/s390/lib/uaccess_mvcos.c | |||
@@ -27,6 +27,9 @@ | |||
27 | #define SLR "slgr" | 27 | #define SLR "slgr" |
28 | #endif | 28 | #endif |
29 | 29 | ||
30 | extern size_t copy_from_user_std(size_t, const void __user *, void *); | ||
31 | extern size_t copy_to_user_std(size_t, void __user *, const void *); | ||
32 | |||
30 | size_t copy_from_user_mvcos(size_t size, const void __user *ptr, void *x) | 33 | size_t copy_from_user_mvcos(size_t size, const void __user *ptr, void *x) |
31 | { | 34 | { |
32 | register unsigned long reg0 asm("0") = 0x81UL; | 35 | register unsigned long reg0 asm("0") = 0x81UL; |
@@ -66,6 +69,13 @@ size_t copy_from_user_mvcos(size_t size, const void __user *ptr, void *x) | |||
66 | return size; | 69 | return size; |
67 | } | 70 | } |
68 | 71 | ||
72 | size_t copy_from_user_mvcos_check(size_t size, const void __user *ptr, void *x) | ||
73 | { | ||
74 | if (size <= 256) | ||
75 | return copy_from_user_std(size, ptr, x); | ||
76 | return copy_from_user_mvcos(size, ptr, x); | ||
77 | } | ||
78 | |||
69 | size_t copy_to_user_mvcos(size_t size, void __user *ptr, const void *x) | 79 | size_t copy_to_user_mvcos(size_t size, void __user *ptr, const void *x) |
70 | { | 80 | { |
71 | register unsigned long reg0 asm("0") = 0x810000UL; | 81 | register unsigned long reg0 asm("0") = 0x810000UL; |
@@ -95,6 +105,13 @@ size_t copy_to_user_mvcos(size_t size, void __user *ptr, const void *x) | |||
95 | return size; | 105 | return size; |
96 | } | 106 | } |
97 | 107 | ||
108 | size_t copy_to_user_mvcos_check(size_t size, void __user *ptr, const void *x) | ||
109 | { | ||
110 | if (size <= 256) | ||
111 | return copy_to_user_std(size, ptr, x); | ||
112 | return copy_to_user_mvcos(size, ptr, x); | ||
113 | } | ||
114 | |||
98 | size_t copy_in_user_mvcos(size_t size, void __user *to, const void __user *from) | 115 | size_t copy_in_user_mvcos(size_t size, void __user *to, const void __user *from) |
99 | { | 116 | { |
100 | register unsigned long reg0 asm("0") = 0x810081UL; | 117 | register unsigned long reg0 asm("0") = 0x810081UL; |
@@ -145,18 +162,16 @@ size_t clear_user_mvcos(size_t size, void __user *to) | |||
145 | return size; | 162 | return size; |
146 | } | 163 | } |
147 | 164 | ||
148 | extern size_t copy_from_user_std_small(size_t, const void __user *, void *); | ||
149 | extern size_t copy_to_user_std_small(size_t, void __user *, const void *); | ||
150 | extern size_t strnlen_user_std(size_t, const char __user *); | 165 | extern size_t strnlen_user_std(size_t, const char __user *); |
151 | extern size_t strncpy_from_user_std(size_t, const char __user *, char *); | 166 | extern size_t strncpy_from_user_std(size_t, const char __user *, char *); |
152 | extern int futex_atomic_op(int, int __user *, int, int *); | 167 | extern int futex_atomic_op(int, int __user *, int, int *); |
153 | extern int futex_atomic_cmpxchg(int __user *, int, int); | 168 | extern int futex_atomic_cmpxchg(int __user *, int, int); |
154 | 169 | ||
155 | struct uaccess_ops uaccess_mvcos = { | 170 | struct uaccess_ops uaccess_mvcos = { |
156 | .copy_from_user = copy_from_user_mvcos, | 171 | .copy_from_user = copy_from_user_mvcos_check, |
157 | .copy_from_user_small = copy_from_user_std_small, | 172 | .copy_from_user_small = copy_from_user_std, |
158 | .copy_to_user = copy_to_user_mvcos, | 173 | .copy_to_user = copy_to_user_mvcos_check, |
159 | .copy_to_user_small = copy_to_user_std_small, | 174 | .copy_to_user_small = copy_to_user_std, |
160 | .copy_in_user = copy_in_user_mvcos, | 175 | .copy_in_user = copy_in_user_mvcos, |
161 | .clear_user = clear_user_mvcos, | 176 | .clear_user = clear_user_mvcos, |
162 | .strnlen_user = strnlen_user_std, | 177 | .strnlen_user = strnlen_user_std, |
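
The new *_check wrappers pick the copy routine by size: for small copies (256 bytes here, 1024 in the std variant) the plain mvcp/mvcs path is cheaper than setting up MVCOS or a page-table walk, so the wrapper falls back to it and only uses the heavier routine for larger transfers. The dispatch itself is just a threshold test; below is a stand-alone sketch with stubbed copy routines, where the threshold and the wrapper shape follow the patch but the stub bodies obviously do not.

    #include <stdio.h>
    #include <string.h>

    /* stand-ins for copy_from_user_std() and copy_from_user_mvcos() */
    static size_t copy_std(size_t size, const void *from, void *to)
    {
            memcpy(to, from, size);         /* pretend mvcp-based copy */
            return 0;                       /* 0 = nothing left uncopied */
    }

    static size_t copy_mvcos(size_t size, const void *from, void *to)
    {
            memcpy(to, from, size);         /* pretend MVCOS-based copy */
            return 0;
    }

    static size_t copy_from_user_check(size_t size, const void *from, void *to)
    {
            if (size <= 256)                /* small copies: std path wins */
                    return copy_std(size, from, to);
            return copy_mvcos(size, from, to);
    }

    int main(void)
    {
            char src[512] = "hello", dst[512];

            copy_from_user_check(sizeof(src), src, dst);
            printf("%s\n", dst);
            return 0;
    }
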
diff --git a/arch/s390/lib/uaccess_pt.c b/arch/s390/lib/uaccess_pt.c new file mode 100644 index 000000000000..8741bdc09299 --- /dev/null +++ b/arch/s390/lib/uaccess_pt.c | |||
@@ -0,0 +1,153 @@ | |||
1 | /* | ||
2 | * arch/s390/lib/uaccess_pt.c | ||
3 | * | ||
4 | * User access functions based on page table walks. | ||
5 | * | ||
6 | * Copyright IBM Corp. 2006 | ||
7 | * Author(s): Gerald Schaefer (gerald.schaefer@de.ibm.com) | ||
8 | */ | ||
9 | |||
10 | #include <linux/errno.h> | ||
11 | #include <asm/uaccess.h> | ||
12 | #include <linux/mm.h> | ||
13 | #include <asm/futex.h> | ||
14 | |||
15 | static inline int __handle_fault(struct mm_struct *mm, unsigned long address, | ||
16 | int write_access) | ||
17 | { | ||
18 | struct vm_area_struct *vma; | ||
19 | int ret = -EFAULT; | ||
20 | |||
21 | down_read(&mm->mmap_sem); | ||
22 | vma = find_vma(mm, address); | ||
23 | if (unlikely(!vma)) | ||
24 | goto out; | ||
25 | if (unlikely(vma->vm_start > address)) { | ||
26 | if (!(vma->vm_flags & VM_GROWSDOWN)) | ||
27 | goto out; | ||
28 | if (expand_stack(vma, address)) | ||
29 | goto out; | ||
30 | } | ||
31 | |||
32 | if (!write_access) { | ||
33 | /* page not present, check vm flags */ | ||
34 | if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))) | ||
35 | goto out; | ||
36 | } else { | ||
37 | if (!(vma->vm_flags & VM_WRITE)) | ||
38 | goto out; | ||
39 | } | ||
40 | |||
41 | survive: | ||
42 | switch (handle_mm_fault(mm, vma, address, write_access)) { | ||
43 | case VM_FAULT_MINOR: | ||
44 | current->min_flt++; | ||
45 | break; | ||
46 | case VM_FAULT_MAJOR: | ||
47 | current->maj_flt++; | ||
48 | break; | ||
49 | case VM_FAULT_SIGBUS: | ||
50 | goto out_sigbus; | ||
51 | case VM_FAULT_OOM: | ||
52 | goto out_of_memory; | ||
53 | default: | ||
54 | BUG(); | ||
55 | } | ||
56 | ret = 0; | ||
57 | out: | ||
58 | up_read(&mm->mmap_sem); | ||
59 | return ret; | ||
60 | |||
61 | out_of_memory: | ||
62 | up_read(&mm->mmap_sem); | ||
63 | if (current->pid == 1) { | ||
64 | yield(); | ||
65 | goto survive; | ||
66 | } | ||
67 | printk("VM: killing process %s\n", current->comm); | ||
68 | return ret; | ||
69 | |||
70 | out_sigbus: | ||
71 | up_read(&mm->mmap_sem); | ||
72 | current->thread.prot_addr = address; | ||
73 | current->thread.trap_no = 0x11; | ||
74 | force_sig(SIGBUS, current); | ||
75 | return ret; | ||
76 | } | ||
77 | |||
78 | static inline size_t __user_copy_pt(unsigned long uaddr, void *kptr, | ||
79 | size_t n, int write_user) | ||
80 | { | ||
81 | struct mm_struct *mm = current->mm; | ||
82 | unsigned long offset, pfn, done, size; | ||
83 | pgd_t *pgd; | ||
84 | pmd_t *pmd; | ||
85 | pte_t *pte; | ||
86 | void *from, *to; | ||
87 | |||
88 | done = 0; | ||
89 | retry: | ||
90 | spin_lock(&mm->page_table_lock); | ||
91 | do { | ||
92 | pgd = pgd_offset(mm, uaddr); | ||
93 | if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) | ||
94 | goto fault; | ||
95 | |||
96 | pmd = pmd_offset(pgd, uaddr); | ||
97 | if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd))) | ||
98 | goto fault; | ||
99 | |||
100 | pte = pte_offset_map(pmd, uaddr); | ||
101 | if (!pte || !pte_present(*pte) || | ||
102 | (write_user && !pte_write(*pte))) | ||
103 | goto fault; | ||
104 | |||
105 | pfn = pte_pfn(*pte); | ||
106 | if (!pfn_valid(pfn)) | ||
107 | goto out; | ||
108 | |||
109 | offset = uaddr & (PAGE_SIZE - 1); | ||
110 | size = min(n - done, PAGE_SIZE - offset); | ||
111 | if (write_user) { | ||
112 | to = (void *)((pfn << PAGE_SHIFT) + offset); | ||
113 | from = kptr + done; | ||
114 | } else { | ||
115 | from = (void *)((pfn << PAGE_SHIFT) + offset); | ||
116 | to = kptr + done; | ||
117 | } | ||
118 | memcpy(to, from, size); | ||
119 | done += size; | ||
120 | uaddr += size; | ||
121 | } while (done < n); | ||
122 | out: | ||
123 | spin_unlock(&mm->page_table_lock); | ||
124 | return n - done; | ||
125 | fault: | ||
126 | spin_unlock(&mm->page_table_lock); | ||
127 | if (__handle_fault(mm, uaddr, write_user)) | ||
128 | return n - done; | ||
129 | goto retry; | ||
130 | } | ||
131 | |||
132 | size_t copy_from_user_pt(size_t n, const void __user *from, void *to) | ||
133 | { | ||
134 | size_t rc; | ||
135 | |||
136 | if (segment_eq(get_fs(), KERNEL_DS)) { | ||
137 | memcpy(to, (void __kernel __force *) from, n); | ||
138 | return 0; | ||
139 | } | ||
140 | rc = __user_copy_pt((unsigned long) from, to, n, 0); | ||
141 | if (unlikely(rc)) | ||
142 | memset(to + n - rc, 0, rc); | ||
143 | return rc; | ||
144 | } | ||
145 | |||
146 | size_t copy_to_user_pt(size_t n, void __user *to, const void *from) | ||
147 | { | ||
148 | if (segment_eq(get_fs(), KERNEL_DS)) { | ||
149 | memcpy((void __kernel __force *) to, from, n); | ||
150 | return 0; | ||
151 | } | ||
152 | return __user_copy_pt((unsigned long) to, (void *) from, n, 1); | ||
153 | } | ||
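
uaccess_pt.c adds copy routines that walk the page tables under mm->page_table_lock and copy page-sized pieces; whenever the walk hits a missing or write-protected page it drops the lock, resolves the fault with __handle_fault() and retries from where it stopped. The skeleton of that walk/fault/retry loop, reduced to user-space stubs (nothing below is the real page-table code; lookup_page() and handle_fault() are invented to exercise the control flow once):

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096UL

    /* pretend lookup: fail on the first call so the retry path runs */
    static int lookup_page(unsigned long uaddr, void **kaddr)
    {
            static int faulted;
            static char page[PAGE_SIZE] = "resident page contents";

            if (!faulted++)
                    return -1;                      /* page "not present" */
            *kaddr = page + (uaddr & (PAGE_SIZE - 1));
            return 0;
    }

    static int handle_fault(unsigned long uaddr)
    {
            printf("faulting in 0x%lx\n", uaddr);
            return 0;                               /* 0 = fault resolved */
    }

    static size_t user_copy_pt(unsigned long uaddr, void *kptr, size_t n)
    {
            size_t done = 0;
            void *from;

    retry:
            /* the kernel takes mm->page_table_lock around this loop */
            while (done < n) {
                    unsigned long offset = uaddr & (PAGE_SIZE - 1);
                    size_t size = n - done;

                    if (size > PAGE_SIZE - offset)
                            size = PAGE_SIZE - offset;
                    if (lookup_page(uaddr, &from))
                            goto fault;
                    memcpy((char *) kptr + done, from, size);
                    done += size;
                    uaddr += size;
            }
            return n - done;                        /* 0 on full success */
    fault:
            /* the kernel drops the lock here before handling the fault */
            if (handle_fault(uaddr))
                    return n - done;                /* give up: bytes left over */
            goto retry;
    }

    int main(void)
    {
            char buf[8];

            if (user_copy_pt(0x1000, buf, sizeof(buf) - 1) == 0) {
                    buf[7] = '\0';
                    printf("copied: %s\n", buf);
            }
            return 0;
    }
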
diff --git a/arch/s390/lib/uaccess_std.c b/arch/s390/lib/uaccess_std.c index f44f0078b354..2d549ed2e113 100644 --- a/arch/s390/lib/uaccess_std.c +++ b/arch/s390/lib/uaccess_std.c | |||
@@ -28,6 +28,9 @@ | |||
28 | #define SLR "slgr" | 28 | #define SLR "slgr" |
29 | #endif | 29 | #endif |
30 | 30 | ||
31 | extern size_t copy_from_user_pt(size_t n, const void __user *from, void *to); | ||
32 | extern size_t copy_to_user_pt(size_t n, void __user *to, const void *from); | ||
33 | |||
31 | size_t copy_from_user_std(size_t size, const void __user *ptr, void *x) | 34 | size_t copy_from_user_std(size_t size, const void __user *ptr, void *x) |
32 | { | 35 | { |
33 | unsigned long tmp1, tmp2; | 36 | unsigned long tmp1, tmp2; |
@@ -69,34 +72,11 @@ size_t copy_from_user_std(size_t size, const void __user *ptr, void *x) | |||
69 | return size; | 72 | return size; |
70 | } | 73 | } |
71 | 74 | ||
72 | size_t copy_from_user_std_small(size_t size, const void __user *ptr, void *x) | 75 | size_t copy_from_user_std_check(size_t size, const void __user *ptr, void *x) |
73 | { | 76 | { |
74 | unsigned long tmp1, tmp2; | 77 | if (size <= 1024) |
75 | 78 | return copy_from_user_std(size, ptr, x); | |
76 | tmp1 = 0UL; | 79 | return copy_from_user_pt(size, ptr, x); |
77 | asm volatile( | ||
78 | "0: mvcp 0(%0,%2),0(%1),%3\n" | ||
79 | " "SLR" %0,%0\n" | ||
80 | " j 5f\n" | ||
81 | "1: la %4,255(%1)\n" /* %4 = ptr + 255 */ | ||
82 | " "LHI" %3,-4096\n" | ||
83 | " nr %4,%3\n" /* %4 = (ptr + 255) & -4096 */ | ||
84 | " "SLR" %4,%1\n" | ||
85 | " "CLR" %0,%4\n" /* copy crosses next page boundary? */ | ||
86 | " jnh 5f\n" | ||
87 | "2: mvcp 0(%4,%2),0(%1),%3\n" | ||
88 | " "SLR" %0,%4\n" | ||
89 | " "ALR" %2,%4\n" | ||
90 | "3:"LHI" %4,-1\n" | ||
91 | " "ALR" %4,%0\n" /* copy remaining size, subtract 1 */ | ||
92 | " bras %3,4f\n" | ||
93 | " xc 0(1,%2),0(%2)\n" | ||
94 | "4: ex %4,0(%3)\n" | ||
95 | "5:\n" | ||
96 | EX_TABLE(0b,1b) EX_TABLE(2b,3b) | ||
97 | : "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2) | ||
98 | : : "cc", "memory"); | ||
99 | return size; | ||
100 | } | 80 | } |
101 | 81 | ||
102 | size_t copy_to_user_std(size_t size, void __user *ptr, const void *x) | 82 | size_t copy_to_user_std(size_t size, void __user *ptr, const void *x) |
@@ -130,28 +110,11 @@ size_t copy_to_user_std(size_t size, void __user *ptr, const void *x) | |||
130 | return size; | 110 | return size; |
131 | } | 111 | } |
132 | 112 | ||
133 | size_t copy_to_user_std_small(size_t size, void __user *ptr, const void *x) | 113 | size_t copy_to_user_std_check(size_t size, void __user *ptr, const void *x) |
134 | { | 114 | { |
135 | unsigned long tmp1, tmp2; | 115 | if (size <= 1024) |
136 | 116 | return copy_to_user_std(size, ptr, x); | |
137 | tmp1 = 0UL; | 117 | return copy_to_user_pt(size, ptr, x); |
138 | asm volatile( | ||
139 | "0: mvcs 0(%0,%1),0(%2),%3\n" | ||
140 | " "SLR" %0,%0\n" | ||
141 | " j 3f\n" | ||
142 | "1: la %4,255(%1)\n" /* ptr + 255 */ | ||
143 | " "LHI" %3,-4096\n" | ||
144 | " nr %4,%3\n" /* (ptr + 255) & -4096UL */ | ||
145 | " "SLR" %4,%1\n" | ||
146 | " "CLR" %0,%4\n" /* copy crosses next page boundary? */ | ||
147 | " jnh 3f\n" | ||
148 | "2: mvcs 0(%4,%1),0(%2),%3\n" | ||
149 | " "SLR" %0,%4\n" | ||
150 | "3:\n" | ||
151 | EX_TABLE(0b,1b) EX_TABLE(2b,3b) | ||
152 | : "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2) | ||
153 | : : "cc", "memory"); | ||
154 | return size; | ||
155 | } | 118 | } |
156 | 119 | ||
157 | size_t copy_in_user_std(size_t size, void __user *to, const void __user *from) | 120 | size_t copy_in_user_std(size_t size, void __user *to, const void __user *from) |
@@ -343,10 +306,10 @@ int futex_atomic_cmpxchg(int __user *uaddr, int oldval, int newval) | |||
343 | } | 306 | } |
344 | 307 | ||
345 | struct uaccess_ops uaccess_std = { | 308 | struct uaccess_ops uaccess_std = { |
346 | .copy_from_user = copy_from_user_std, | 309 | .copy_from_user = copy_from_user_std_check, |
347 | .copy_from_user_small = copy_from_user_std_small, | 310 | .copy_from_user_small = copy_from_user_std, |
348 | .copy_to_user = copy_to_user_std, | 311 | .copy_to_user = copy_to_user_std_check, |
349 | .copy_to_user_small = copy_to_user_std_small, | 312 | .copy_to_user_small = copy_to_user_std, |
350 | .copy_in_user = copy_in_user_std, | 313 | .copy_in_user = copy_in_user_std, |
351 | .clear_user = clear_user_std, | 314 | .clear_user = clear_user_std, |
352 | .strnlen_user = strnlen_user_std, | 315 | .strnlen_user = strnlen_user_std, |
diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c index 226275d5c4f6..9e9bc48463a5 100644 --- a/arch/s390/mm/extmem.c +++ b/arch/s390/mm/extmem.c | |||
@@ -14,12 +14,13 @@ | |||
14 | #include <linux/slab.h> | 14 | #include <linux/slab.h> |
15 | #include <linux/module.h> | 15 | #include <linux/module.h> |
16 | #include <linux/bootmem.h> | 16 | #include <linux/bootmem.h> |
17 | #include <linux/ctype.h> | ||
17 | #include <asm/page.h> | 18 | #include <asm/page.h> |
18 | #include <asm/ebcdic.h> | 19 | #include <asm/ebcdic.h> |
19 | #include <asm/errno.h> | 20 | #include <asm/errno.h> |
20 | #include <asm/extmem.h> | 21 | #include <asm/extmem.h> |
21 | #include <asm/cpcmd.h> | 22 | #include <asm/cpcmd.h> |
22 | #include <linux/ctype.h> | 23 | #include <asm/setup.h> |
23 | 24 | ||
24 | #define DCSS_DEBUG /* Debug messages on/off */ | 25 | #define DCSS_DEBUG /* Debug messages on/off */ |
25 | 26 | ||
@@ -77,15 +78,11 @@ struct dcss_segment { | |||
77 | int segcnt; | 78 | int segcnt; |
78 | }; | 79 | }; |
79 | 80 | ||
80 | static DEFINE_SPINLOCK(dcss_lock); | 81 | static DEFINE_MUTEX(dcss_lock); |
81 | static struct list_head dcss_list = LIST_HEAD_INIT(dcss_list); | 82 | static struct list_head dcss_list = LIST_HEAD_INIT(dcss_list); |
82 | static char *segtype_string[] = { "SW", "EW", "SR", "ER", "SN", "EN", "SC", | 83 | static char *segtype_string[] = { "SW", "EW", "SR", "ER", "SN", "EN", "SC", |
83 | "EW/EN-MIXED" }; | 84 | "EW/EN-MIXED" }; |
84 | 85 | ||
85 | extern struct { | ||
86 | unsigned long addr, size, type; | ||
87 | } memory_chunk[MEMORY_CHUNKS]; | ||
88 | |||
89 | /* | 86 | /* |
90 | * Create the 8 bytes, ebcdic VM segment name from | 87 | * Create the 8 bytes, ebcdic VM segment name from |
91 | * an ascii name. | 88 | * an ascii name. |
@@ -117,7 +114,7 @@ segment_by_name (char *name) | |||
117 | struct list_head *l; | 114 | struct list_head *l; |
118 | struct dcss_segment *tmp, *retval = NULL; | 115 | struct dcss_segment *tmp, *retval = NULL; |
119 | 116 | ||
120 | assert_spin_locked(&dcss_lock); | 117 | BUG_ON(!mutex_is_locked(&dcss_lock)); |
121 | dcss_mkname (name, dcss_name); | 118 | dcss_mkname (name, dcss_name); |
122 | list_for_each (l, &dcss_list) { | 119 | list_for_each (l, &dcss_list) { |
123 | tmp = list_entry (l, struct dcss_segment, list); | 120 | tmp = list_entry (l, struct dcss_segment, list); |
@@ -249,8 +246,8 @@ segment_overlaps_storage(struct dcss_segment *seg) | |||
249 | { | 246 | { |
250 | int i; | 247 | int i; |
251 | 248 | ||
252 | for (i=0; i < MEMORY_CHUNKS && memory_chunk[i].size > 0; i++) { | 249 | for (i = 0; i < MEMORY_CHUNKS && memory_chunk[i].size > 0; i++) { |
253 | if (memory_chunk[i].type != 0) | 250 | if (memory_chunk[i].type != CHUNK_READ_WRITE) |
254 | continue; | 251 | continue; |
255 | if ((memory_chunk[i].addr >> 20) > (seg->end >> 20)) | 252 | if ((memory_chunk[i].addr >> 20) > (seg->end >> 20)) |
256 | continue; | 253 | continue; |
@@ -272,7 +269,7 @@ segment_overlaps_others (struct dcss_segment *seg) | |||
272 | struct list_head *l; | 269 | struct list_head *l; |
273 | struct dcss_segment *tmp; | 270 | struct dcss_segment *tmp; |
274 | 271 | ||
275 | assert_spin_locked(&dcss_lock); | 272 | BUG_ON(!mutex_is_locked(&dcss_lock)); |
276 | list_for_each(l, &dcss_list) { | 273 | list_for_each(l, &dcss_list) { |
277 | tmp = list_entry(l, struct dcss_segment, list); | 274 | tmp = list_entry(l, struct dcss_segment, list); |
278 | if ((tmp->start_addr >> 20) > (seg->end >> 20)) | 275 | if ((tmp->start_addr >> 20) > (seg->end >> 20)) |
@@ -429,7 +426,7 @@ segment_load (char *name, int do_nonshared, unsigned long *addr, | |||
429 | if (!MACHINE_IS_VM) | 426 | if (!MACHINE_IS_VM) |
430 | return -ENOSYS; | 427 | return -ENOSYS; |
431 | 428 | ||
432 | spin_lock (&dcss_lock); | 429 | mutex_lock(&dcss_lock); |
433 | seg = segment_by_name (name); | 430 | seg = segment_by_name (name); |
434 | if (seg == NULL) | 431 | if (seg == NULL) |
435 | rc = __segment_load (name, do_nonshared, addr, end); | 432 | rc = __segment_load (name, do_nonshared, addr, end); |
@@ -444,7 +441,7 @@ segment_load (char *name, int do_nonshared, unsigned long *addr, | |||
444 | rc = -EPERM; | 441 | rc = -EPERM; |
445 | } | 442 | } |
446 | } | 443 | } |
447 | spin_unlock (&dcss_lock); | 444 | mutex_unlock(&dcss_lock); |
448 | return rc; | 445 | return rc; |
449 | } | 446 | } |
450 | 447 | ||
@@ -467,7 +464,7 @@ segment_modify_shared (char *name, int do_nonshared) | |||
467 | unsigned long dummy; | 464 | unsigned long dummy; |
468 | int dcss_command, rc, diag_cc; | 465 | int dcss_command, rc, diag_cc; |
469 | 466 | ||
470 | spin_lock (&dcss_lock); | 467 | mutex_lock(&dcss_lock); |
471 | seg = segment_by_name (name); | 468 | seg = segment_by_name (name); |
472 | if (seg == NULL) { | 469 | if (seg == NULL) { |
473 | rc = -EINVAL; | 470 | rc = -EINVAL; |
@@ -508,7 +505,7 @@ segment_modify_shared (char *name, int do_nonshared) | |||
508 | &dummy, &dummy); | 505 | &dummy, &dummy); |
509 | kfree(seg); | 506 | kfree(seg); |
510 | out_unlock: | 507 | out_unlock: |
511 | spin_unlock(&dcss_lock); | 508 | mutex_unlock(&dcss_lock); |
512 | return rc; | 509 | return rc; |
513 | } | 510 | } |
514 | 511 | ||
@@ -526,7 +523,7 @@ segment_unload(char *name) | |||
526 | if (!MACHINE_IS_VM) | 523 | if (!MACHINE_IS_VM) |
527 | return; | 524 | return; |
528 | 525 | ||
529 | spin_lock(&dcss_lock); | 526 | mutex_lock(&dcss_lock); |
530 | seg = segment_by_name (name); | 527 | seg = segment_by_name (name); |
531 | if (seg == NULL) { | 528 | if (seg == NULL) { |
532 | PRINT_ERR ("could not find segment %s in segment_unload, " | 529 | PRINT_ERR ("could not find segment %s in segment_unload, " |
@@ -540,7 +537,7 @@ segment_unload(char *name) | |||
540 | kfree(seg); | 537 | kfree(seg); |
541 | } | 538 | } |
542 | out_unlock: | 539 | out_unlock: |
543 | spin_unlock(&dcss_lock); | 540 | mutex_unlock(&dcss_lock); |
544 | } | 541 | } |
545 | 542 | ||
546 | /* | 543 | /* |
@@ -559,12 +556,13 @@ segment_save(char *name) | |||
559 | if (!MACHINE_IS_VM) | 556 | if (!MACHINE_IS_VM) |
560 | return; | 557 | return; |
561 | 558 | ||
562 | spin_lock(&dcss_lock); | 559 | mutex_lock(&dcss_lock); |
563 | seg = segment_by_name (name); | 560 | seg = segment_by_name (name); |
564 | 561 | ||
565 | if (seg == NULL) { | 562 | if (seg == NULL) { |
566 | PRINT_ERR ("could not find segment %s in segment_save, please report to linux390@de.ibm.com\n",name); | 563 | PRINT_ERR("could not find segment %s in segment_save, please " |
567 | return; | 564 | "report to linux390@de.ibm.com\n", name); |
565 | goto out; | ||
568 | } | 566 | } |
569 | 567 | ||
570 | startpfn = seg->start_addr >> PAGE_SHIFT; | 568 | startpfn = seg->start_addr >> PAGE_SHIFT; |
@@ -591,7 +589,7 @@ segment_save(char *name) | |||
591 | goto out; | 589 | goto out; |
592 | } | 590 | } |
593 | out: | 591 | out: |
594 | spin_unlock(&dcss_lock); | 592 | mutex_unlock(&dcss_lock); |
595 | } | 593 | } |
596 | 594 | ||
597 | EXPORT_SYMBOL(segment_load); | 595 | EXPORT_SYMBOL(segment_load); |
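
dcss_lock becomes a mutex because the segment load, unload and save paths can sleep (they allocate memory and issue CP commands) while holding it, which a spinlock must never allow; the assertions change accordingly to mutex_is_locked(), and the segment_save() error path now goes through the unlock label instead of returning with the lock held. A minimal user-space parallel of that "unlock on every exit path" fix, with a pthread mutex standing in for the kernel mutex and an invented segment_known() helper:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t dcss_lock = PTHREAD_MUTEX_INITIALIZER;

    /* stand-in for segment_by_name(): nonzero if the segment is known */
    static int segment_known(const char *name)
    {
            return name[0] == 'A';
    }

    static void segment_save_sketch(const char *name)
    {
            pthread_mutex_lock(&dcss_lock);
            if (!segment_known(name)) {
                    fprintf(stderr, "could not find segment %s\n", name);
                    goto out;       /* the old code returned here, lock still held */
            }
            /* ... issue the save commands under the lock ... */
            printf("saving segment %s\n", name);
    out:
            pthread_mutex_unlock(&dcss_lock);
    }

    int main(void)
    {
            segment_save_sketch("BOGUS");   /* error path: lock is still released */
            segment_save_sketch("ASEG");    /* normal path */
            return 0;
    }
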
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c index 1c323bbfda91..cd85e34d8703 100644 --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c | |||
@@ -31,6 +31,7 @@ | |||
31 | #include <asm/uaccess.h> | 31 | #include <asm/uaccess.h> |
32 | #include <asm/pgtable.h> | 32 | #include <asm/pgtable.h> |
33 | #include <asm/kdebug.h> | 33 | #include <asm/kdebug.h> |
34 | #include <asm/s390_ext.h> | ||
34 | 35 | ||
35 | #ifndef CONFIG_64BIT | 36 | #ifndef CONFIG_64BIT |
36 | #define __FAIL_ADDR_MASK 0x7ffff000 | 37 | #define __FAIL_ADDR_MASK 0x7ffff000 |
@@ -394,6 +395,7 @@ void do_dat_exception(struct pt_regs *regs, unsigned long error_code) | |||
394 | /* | 395 | /* |
395 | * 'pfault' pseudo page faults routines. | 396 | * 'pfault' pseudo page faults routines. |
396 | */ | 397 | */ |
398 | static ext_int_info_t ext_int_pfault; | ||
397 | static int pfault_disable = 0; | 399 | static int pfault_disable = 0; |
398 | 400 | ||
399 | static int __init nopfault(char *str) | 401 | static int __init nopfault(char *str) |
@@ -422,7 +424,7 @@ int pfault_init(void) | |||
422 | __PF_RES_FIELD }; | 424 | __PF_RES_FIELD }; |
423 | int rc; | 425 | int rc; |
424 | 426 | ||
425 | if (pfault_disable) | 427 | if (!MACHINE_IS_VM || pfault_disable) |
426 | return -1; | 428 | return -1; |
427 | asm volatile( | 429 | asm volatile( |
428 | " diag %1,%0,0x258\n" | 430 | " diag %1,%0,0x258\n" |
@@ -440,7 +442,7 @@ void pfault_fini(void) | |||
440 | pfault_refbk_t refbk = | 442 | pfault_refbk_t refbk = |
441 | { 0x258, 1, 5, 2, 0ULL, 0ULL, 0ULL, 0ULL }; | 443 | { 0x258, 1, 5, 2, 0ULL, 0ULL, 0ULL, 0ULL }; |
442 | 444 | ||
443 | if (pfault_disable) | 445 | if (!MACHINE_IS_VM || pfault_disable) |
444 | return; | 446 | return; |
445 | __ctl_clear_bit(0,9); | 447 | __ctl_clear_bit(0,9); |
446 | asm volatile( | 448 | asm volatile( |
@@ -500,5 +502,25 @@ pfault_interrupt(__u16 error_code) | |||
500 | set_tsk_need_resched(tsk); | 502 | set_tsk_need_resched(tsk); |
501 | } | 503 | } |
502 | } | 504 | } |
503 | #endif | ||
504 | 505 | ||
506 | void __init pfault_irq_init(void) | ||
507 | { | ||
508 | if (!MACHINE_IS_VM) | ||
509 | return; | ||
510 | |||
511 | /* | ||
512 | * Try to get pfault pseudo page faults going. | ||
513 | */ | ||
514 | if (register_early_external_interrupt(0x2603, pfault_interrupt, | ||
515 | &ext_int_pfault) != 0) | ||
516 | panic("Couldn't request external interrupt 0x2603"); | ||
517 | |||
518 | if (pfault_init() == 0) | ||
519 | return; | ||
520 | |||
521 | /* Tough luck, no pfault. */ | ||
522 | pfault_disable = 1; | ||
523 | unregister_early_external_interrupt(0x2603, pfault_interrupt, | ||
524 | &ext_int_pfault); | ||
525 | } | ||
526 | #endif | ||
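The new pfault_irq_init() above is a register-then-probe sequence: claim external interrupt 0x2603, try to enable the facility with pfault_init(), and undo the registration if that fails. A compilable sketch of the same control flow, using hypothetical stand-ins (setup_handler(), probe_facility()) for the s390-specific calls:

#include <stdio.h>

/* Hypothetical stand-ins; they only model success and failure here. */
static int setup_handler(unsigned int code)   { printf("registered 0x%x\n", code); return 0; }
static void remove_handler(unsigned int code) { printf("removed 0x%x\n", code); }
static int probe_facility(void)               { return -1; /* pretend the facility is absent */ }

static int facility_disabled;

static void facility_irq_init(void)
{
	if (setup_handler(0x2603) != 0)
		return;			/* registration failed, nothing to undo */
	if (probe_facility() == 0)
		return;			/* facility is up, keep the handler */
	/* Probe failed: mark the facility unusable and undo the registration. */
	facility_disabled = 1;
	remove_handler(0x2603);
}

int main(void)
{
	facility_irq_init();
	return facility_disabled;
}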
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index 79ffef6bfaf8..a2cef57d7bcb 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c | |||
@@ -1264,15 +1264,21 @@ __dasd_check_expire(struct dasd_device * device) | |||
1264 | if (list_empty(&device->ccw_queue)) | 1264 | if (list_empty(&device->ccw_queue)) |
1265 | return; | 1265 | return; |
1266 | cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list); | 1266 | cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list); |
1267 | if (cqr->status == DASD_CQR_IN_IO && cqr->expires != 0) { | 1267 | if ((cqr->status == DASD_CQR_IN_IO && cqr->expires != 0) && |
1268 | if (time_after_eq(jiffies, cqr->expires + cqr->starttime)) { | 1268 | (time_after_eq(jiffies, cqr->expires + cqr->starttime))) { |
1269 | if (device->discipline->term_IO(cqr) != 0) { | ||
1270 | /* Hmpf, try again in 5 sec */ | ||
1271 | dasd_set_timer(device, 5*HZ); | ||
1272 | DEV_MESSAGE(KERN_ERR, device, | ||
1273 | "internal error - timeout (%is) expired " | ||
1274 | "for cqr %p, termination failed, " | ||
1275 | "retrying in 5s", | ||
1276 | (cqr->expires/HZ), cqr); | ||
1277 | } else { | ||
1269 | DEV_MESSAGE(KERN_ERR, device, | 1278 | DEV_MESSAGE(KERN_ERR, device, |
1270 | "internal error - timeout (%is) expired " | 1279 | "internal error - timeout (%is) expired " |
1271 | "for cqr %p (%i retries left)", | 1280 | "for cqr %p (%i retries left)", |
1272 | (cqr->expires/HZ), cqr, cqr->retries); | 1281 | (cqr->expires/HZ), cqr, cqr->retries); |
1273 | if (device->discipline->term_IO(cqr) != 0) | ||
1274 | /* Hmpf, try again in 1/10 sec */ | ||
1275 | dasd_set_timer(device, 10); | ||
1276 | } | 1282 | } |
1277 | } | 1283 | } |
1278 | } | 1284 | } |
diff --git a/drivers/s390/block/dasd_devmap.c b/drivers/s390/block/dasd_devmap.c index 91cf971f0652..17fdd8c9f740 100644 --- a/drivers/s390/block/dasd_devmap.c +++ b/drivers/s390/block/dasd_devmap.c | |||
@@ -684,21 +684,26 @@ dasd_ro_store(struct device *dev, struct device_attribute *attr, | |||
684 | const char *buf, size_t count) | 684 | const char *buf, size_t count) |
685 | { | 685 | { |
686 | struct dasd_devmap *devmap; | 686 | struct dasd_devmap *devmap; |
687 | int ro_flag; | 687 | int val; |
688 | char *endp; | ||
688 | 689 | ||
689 | devmap = dasd_devmap_from_cdev(to_ccwdev(dev)); | 690 | devmap = dasd_devmap_from_cdev(to_ccwdev(dev)); |
690 | if (IS_ERR(devmap)) | 691 | if (IS_ERR(devmap)) |
691 | return PTR_ERR(devmap); | 692 | return PTR_ERR(devmap); |
692 | ro_flag = buf[0] == '1'; | 693 | |
694 | val = simple_strtoul(buf, &endp, 0); | ||
695 | if (((endp + 1) < (buf + count)) || (val > 1)) | ||
696 | return -EINVAL; | ||
697 | |||
693 | spin_lock(&dasd_devmap_lock); | 698 | spin_lock(&dasd_devmap_lock); |
694 | if (ro_flag) | 699 | if (val) |
695 | devmap->features |= DASD_FEATURE_READONLY; | 700 | devmap->features |= DASD_FEATURE_READONLY; |
696 | else | 701 | else |
697 | devmap->features &= ~DASD_FEATURE_READONLY; | 702 | devmap->features &= ~DASD_FEATURE_READONLY; |
698 | if (devmap->device) | 703 | if (devmap->device) |
699 | devmap->device->features = devmap->features; | 704 | devmap->device->features = devmap->features; |
700 | if (devmap->device && devmap->device->gdp) | 705 | if (devmap->device && devmap->device->gdp) |
701 | set_disk_ro(devmap->device->gdp, ro_flag); | 706 | set_disk_ro(devmap->device->gdp, val); |
702 | spin_unlock(&dasd_devmap_lock); | 707 | spin_unlock(&dasd_devmap_lock); |
703 | return count; | 708 | return count; |
704 | } | 709 | } |
@@ -729,17 +734,22 @@ dasd_use_diag_store(struct device *dev, struct device_attribute *attr, | |||
729 | { | 734 | { |
730 | struct dasd_devmap *devmap; | 735 | struct dasd_devmap *devmap; |
731 | ssize_t rc; | 736 | ssize_t rc; |
732 | int use_diag; | 737 | int val; |
738 | char *endp; | ||
733 | 739 | ||
734 | devmap = dasd_devmap_from_cdev(to_ccwdev(dev)); | 740 | devmap = dasd_devmap_from_cdev(to_ccwdev(dev)); |
735 | if (IS_ERR(devmap)) | 741 | if (IS_ERR(devmap)) |
736 | return PTR_ERR(devmap); | 742 | return PTR_ERR(devmap); |
737 | use_diag = buf[0] == '1'; | 743 | |
744 | val = simple_strtoul(buf, &endp, 0); | ||
745 | if (((endp + 1) < (buf + count)) || (val > 1)) | ||
746 | return -EINVAL; | ||
747 | |||
738 | spin_lock(&dasd_devmap_lock); | 748 | spin_lock(&dasd_devmap_lock); |
739 | /* Changing diag discipline flag is only allowed in offline state. */ | 749 | /* Changing diag discipline flag is only allowed in offline state. */ |
740 | rc = count; | 750 | rc = count; |
741 | if (!devmap->device) { | 751 | if (!devmap->device) { |
742 | if (use_diag) | 752 | if (val) |
743 | devmap->features |= DASD_FEATURE_USEDIAG; | 753 | devmap->features |= DASD_FEATURE_USEDIAG; |
744 | else | 754 | else |
745 | devmap->features &= ~DASD_FEATURE_USEDIAG; | 755 | devmap->features &= ~DASD_FEATURE_USEDIAG; |
@@ -854,14 +864,20 @@ dasd_eer_store(struct device *dev, struct device_attribute *attr, | |||
854 | const char *buf, size_t count) | 864 | const char *buf, size_t count) |
855 | { | 865 | { |
856 | struct dasd_devmap *devmap; | 866 | struct dasd_devmap *devmap; |
857 | int rc; | 867 | int val, rc; |
868 | char *endp; | ||
858 | 869 | ||
859 | devmap = dasd_devmap_from_cdev(to_ccwdev(dev)); | 870 | devmap = dasd_devmap_from_cdev(to_ccwdev(dev)); |
860 | if (IS_ERR(devmap)) | 871 | if (IS_ERR(devmap)) |
861 | return PTR_ERR(devmap); | 872 | return PTR_ERR(devmap); |
862 | if (!devmap->device) | 873 | if (!devmap->device) |
863 | return count; | 874 | return -ENODEV; |
864 | if (buf[0] == '1') { | 875 | |
876 | val = simple_strtoul(buf, &endp, 0); | ||
877 | if (((endp + 1) < (buf + count)) || (val > 1)) | ||
878 | return -EINVAL; | ||
879 | |||
880 | if (val) { | ||
865 | rc = dasd_eer_enable(devmap->device); | 881 | rc = dasd_eer_enable(devmap->device); |
866 | if (rc) | 882 | if (rc) |
867 | return rc; | 883 | return rc; |
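All three dasd sysfs store routines above now validate their input the same way: simple_strtoul() with base 0 parses the value, the end pointer is checked against the buffer length, and anything other than 0 or 1 yields -EINVAL. A sketch of that check as a hypothetical helper (parse_bool_attr() is not an existing kernel function):

#include <linux/kernel.h>	/* simple_strtoul() */
#include <linux/errno.h>

static int parse_bool_attr(const char *buf, size_t count, int *val)
{
	char *endp;
	unsigned long tmp;

	tmp = simple_strtoul(buf, &endp, 0);
	/* Allow at most one trailing character (the newline from echo). */
	if (((endp + 1) < (buf + count)) || (tmp > 1))
		return -EINVAL;
	*val = tmp;
	return 0;
}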
diff --git a/drivers/s390/char/con3215.c b/drivers/s390/char/con3215.c index d7de175d53f0..c9321b920e90 100644 --- a/drivers/s390/char/con3215.c +++ b/drivers/s390/char/con3215.c | |||
@@ -299,14 +299,14 @@ raw3215_timeout(unsigned long __data) | |||
299 | struct raw3215_info *raw = (struct raw3215_info *) __data; | 299 | struct raw3215_info *raw = (struct raw3215_info *) __data; |
300 | unsigned long flags; | 300 | unsigned long flags; |
301 | 301 | ||
302 | spin_lock_irqsave(raw->lock, flags); | 302 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
303 | if (raw->flags & RAW3215_TIMER_RUNS) { | 303 | if (raw->flags & RAW3215_TIMER_RUNS) { |
304 | del_timer(&raw->timer); | 304 | del_timer(&raw->timer); |
305 | raw->flags &= ~RAW3215_TIMER_RUNS; | 305 | raw->flags &= ~RAW3215_TIMER_RUNS; |
306 | raw3215_mk_write_req(raw); | 306 | raw3215_mk_write_req(raw); |
307 | raw3215_start_io(raw); | 307 | raw3215_start_io(raw); |
308 | } | 308 | } |
309 | spin_unlock_irqrestore(raw->lock, flags); | 309 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
310 | } | 310 | } |
311 | 311 | ||
312 | /* | 312 | /* |
@@ -355,10 +355,10 @@ raw3215_tasklet(void *data) | |||
355 | unsigned long flags; | 355 | unsigned long flags; |
356 | 356 | ||
357 | raw = (struct raw3215_info *) data; | 357 | raw = (struct raw3215_info *) data; |
358 | spin_lock_irqsave(raw->lock, flags); | 358 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
359 | raw3215_mk_write_req(raw); | 359 | raw3215_mk_write_req(raw); |
360 | raw3215_try_io(raw); | 360 | raw3215_try_io(raw); |
361 | spin_unlock_irqrestore(raw->lock, flags); | 361 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
362 | /* Check for pending message from raw3215_irq */ | 362 | /* Check for pending message from raw3215_irq */ |
363 | if (raw->message != NULL) { | 363 | if (raw->message != NULL) { |
364 | printk(raw->message, raw->msg_dstat, raw->msg_cstat); | 364 | printk(raw->message, raw->msg_dstat, raw->msg_cstat); |
@@ -512,9 +512,9 @@ raw3215_make_room(struct raw3215_info *raw, unsigned int length) | |||
512 | if (RAW3215_BUFFER_SIZE - raw->count >= length) | 512 | if (RAW3215_BUFFER_SIZE - raw->count >= length) |
513 | break; | 513 | break; |
514 | /* there might be another cpu waiting for the lock */ | 514 | /* there might be another cpu waiting for the lock */ |
515 | spin_unlock(raw->lock); | 515 | spin_unlock(get_ccwdev_lock(raw->cdev)); |
516 | udelay(100); | 516 | udelay(100); |
517 | spin_lock(raw->lock); | 517 | spin_lock(get_ccwdev_lock(raw->cdev)); |
518 | } | 518 | } |
519 | } | 519 | } |
520 | 520 | ||
@@ -528,7 +528,7 @@ raw3215_write(struct raw3215_info *raw, const char *str, unsigned int length) | |||
528 | int c, count; | 528 | int c, count; |
529 | 529 | ||
530 | while (length > 0) { | 530 | while (length > 0) { |
531 | spin_lock_irqsave(raw->lock, flags); | 531 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
532 | count = (length > RAW3215_BUFFER_SIZE) ? | 532 | count = (length > RAW3215_BUFFER_SIZE) ? |
533 | RAW3215_BUFFER_SIZE : length; | 533 | RAW3215_BUFFER_SIZE : length; |
534 | length -= count; | 534 | length -= count; |
@@ -555,7 +555,7 @@ raw3215_write(struct raw3215_info *raw, const char *str, unsigned int length) | |||
555 | /* start or queue request */ | 555 | /* start or queue request */ |
556 | raw3215_try_io(raw); | 556 | raw3215_try_io(raw); |
557 | } | 557 | } |
558 | spin_unlock_irqrestore(raw->lock, flags); | 558 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
559 | } | 559 | } |
560 | } | 560 | } |
561 | 561 | ||
@@ -568,7 +568,7 @@ raw3215_putchar(struct raw3215_info *raw, unsigned char ch) | |||
568 | unsigned long flags; | 568 | unsigned long flags; |
569 | unsigned int length, i; | 569 | unsigned int length, i; |
570 | 570 | ||
571 | spin_lock_irqsave(raw->lock, flags); | 571 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
572 | if (ch == '\t') { | 572 | if (ch == '\t') { |
573 | length = TAB_STOP_SIZE - (raw->line_pos%TAB_STOP_SIZE); | 573 | length = TAB_STOP_SIZE - (raw->line_pos%TAB_STOP_SIZE); |
574 | raw->line_pos += length; | 574 | raw->line_pos += length; |
@@ -592,7 +592,7 @@ raw3215_putchar(struct raw3215_info *raw, unsigned char ch) | |||
592 | /* start or queue request */ | 592 | /* start or queue request */ |
593 | raw3215_try_io(raw); | 593 | raw3215_try_io(raw); |
594 | } | 594 | } |
595 | spin_unlock_irqrestore(raw->lock, flags); | 595 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
596 | } | 596 | } |
597 | 597 | ||
598 | /* | 598 | /* |
@@ -604,13 +604,13 @@ raw3215_flush_buffer(struct raw3215_info *raw) | |||
604 | { | 604 | { |
605 | unsigned long flags; | 605 | unsigned long flags; |
606 | 606 | ||
607 | spin_lock_irqsave(raw->lock, flags); | 607 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
608 | if (raw->count > 0) { | 608 | if (raw->count > 0) { |
609 | raw->flags |= RAW3215_FLUSHING; | 609 | raw->flags |= RAW3215_FLUSHING; |
610 | raw3215_try_io(raw); | 610 | raw3215_try_io(raw); |
611 | raw->flags &= ~RAW3215_FLUSHING; | 611 | raw->flags &= ~RAW3215_FLUSHING; |
612 | } | 612 | } |
613 | spin_unlock_irqrestore(raw->lock, flags); | 613 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
614 | } | 614 | } |
615 | 615 | ||
616 | /* | 616 | /* |
@@ -625,9 +625,9 @@ raw3215_startup(struct raw3215_info *raw) | |||
625 | return 0; | 625 | return 0; |
626 | raw->line_pos = 0; | 626 | raw->line_pos = 0; |
627 | raw->flags |= RAW3215_ACTIVE; | 627 | raw->flags |= RAW3215_ACTIVE; |
628 | spin_lock_irqsave(raw->lock, flags); | 628 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
629 | raw3215_try_io(raw); | 629 | raw3215_try_io(raw); |
630 | spin_unlock_irqrestore(raw->lock, flags); | 630 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
631 | 631 | ||
632 | return 0; | 632 | return 0; |
633 | } | 633 | } |
@@ -644,21 +644,21 @@ raw3215_shutdown(struct raw3215_info *raw) | |||
644 | if (!(raw->flags & RAW3215_ACTIVE) || (raw->flags & RAW3215_FIXED)) | 644 | if (!(raw->flags & RAW3215_ACTIVE) || (raw->flags & RAW3215_FIXED)) |
645 | return; | 645 | return; |
646 | /* Wait for outstanding requests, then free irq */ | 646 | /* Wait for outstanding requests, then free irq */ |
647 | spin_lock_irqsave(raw->lock, flags); | 647 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
648 | if ((raw->flags & RAW3215_WORKING) || | 648 | if ((raw->flags & RAW3215_WORKING) || |
649 | raw->queued_write != NULL || | 649 | raw->queued_write != NULL || |
650 | raw->queued_read != NULL) { | 650 | raw->queued_read != NULL) { |
651 | raw->flags |= RAW3215_CLOSING; | 651 | raw->flags |= RAW3215_CLOSING; |
652 | add_wait_queue(&raw->empty_wait, &wait); | 652 | add_wait_queue(&raw->empty_wait, &wait); |
653 | set_current_state(TASK_INTERRUPTIBLE); | 653 | set_current_state(TASK_INTERRUPTIBLE); |
654 | spin_unlock_irqrestore(raw->lock, flags); | 654 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
655 | schedule(); | 655 | schedule(); |
656 | spin_lock_irqsave(raw->lock, flags); | 656 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
657 | remove_wait_queue(&raw->empty_wait, &wait); | 657 | remove_wait_queue(&raw->empty_wait, &wait); |
658 | set_current_state(TASK_RUNNING); | 658 | set_current_state(TASK_RUNNING); |
659 | raw->flags &= ~(RAW3215_ACTIVE | RAW3215_CLOSING); | 659 | raw->flags &= ~(RAW3215_ACTIVE | RAW3215_CLOSING); |
660 | } | 660 | } |
661 | spin_unlock_irqrestore(raw->lock, flags); | 661 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
662 | } | 662 | } |
663 | 663 | ||
664 | static int | 664 | static int |
@@ -686,7 +686,6 @@ raw3215_probe (struct ccw_device *cdev) | |||
686 | } | 686 | } |
687 | 687 | ||
688 | raw->cdev = cdev; | 688 | raw->cdev = cdev; |
689 | raw->lock = get_ccwdev_lock(cdev); | ||
690 | raw->inbuf = (char *) raw + sizeof(struct raw3215_info); | 689 | raw->inbuf = (char *) raw + sizeof(struct raw3215_info); |
691 | memset(raw, 0, sizeof(struct raw3215_info)); | 690 | memset(raw, 0, sizeof(struct raw3215_info)); |
692 | raw->buffer = (char *) kmalloc(RAW3215_BUFFER_SIZE, | 691 | raw->buffer = (char *) kmalloc(RAW3215_BUFFER_SIZE, |
@@ -809,9 +808,9 @@ con3215_unblank(void) | |||
809 | unsigned long flags; | 808 | unsigned long flags; |
810 | 809 | ||
811 | raw = raw3215[0]; /* console 3215 is the first one */ | 810 | raw = raw3215[0]; /* console 3215 is the first one */ |
812 | spin_lock_irqsave(raw->lock, flags); | 811 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
813 | raw3215_make_room(raw, RAW3215_BUFFER_SIZE); | 812 | raw3215_make_room(raw, RAW3215_BUFFER_SIZE); |
814 | spin_unlock_irqrestore(raw->lock, flags); | 813 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
815 | } | 814 | } |
816 | 815 | ||
817 | static int __init | 816 | static int __init |
@@ -873,7 +872,6 @@ con3215_init(void) | |||
873 | raw->buffer = (char *) alloc_bootmem_low(RAW3215_BUFFER_SIZE); | 872 | raw->buffer = (char *) alloc_bootmem_low(RAW3215_BUFFER_SIZE); |
874 | raw->inbuf = (char *) alloc_bootmem_low(RAW3215_INBUF_SIZE); | 873 | raw->inbuf = (char *) alloc_bootmem_low(RAW3215_INBUF_SIZE); |
875 | raw->cdev = cdev; | 874 | raw->cdev = cdev; |
876 | raw->lock = get_ccwdev_lock(cdev); | ||
877 | cdev->dev.driver_data = raw; | 875 | cdev->dev.driver_data = raw; |
878 | cdev->handler = raw3215_irq; | 876 | cdev->handler = raw3215_irq; |
879 | 877 | ||
@@ -1066,10 +1064,10 @@ tty3215_unthrottle(struct tty_struct * tty) | |||
1066 | 1064 | ||
1067 | raw = (struct raw3215_info *) tty->driver_data; | 1065 | raw = (struct raw3215_info *) tty->driver_data; |
1068 | if (raw->flags & RAW3215_THROTTLED) { | 1066 | if (raw->flags & RAW3215_THROTTLED) { |
1069 | spin_lock_irqsave(raw->lock, flags); | 1067 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
1070 | raw->flags &= ~RAW3215_THROTTLED; | 1068 | raw->flags &= ~RAW3215_THROTTLED; |
1071 | raw3215_try_io(raw); | 1069 | raw3215_try_io(raw); |
1072 | spin_unlock_irqrestore(raw->lock, flags); | 1070 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
1073 | } | 1071 | } |
1074 | } | 1072 | } |
1075 | 1073 | ||
@@ -1096,10 +1094,10 @@ tty3215_start(struct tty_struct *tty) | |||
1096 | 1094 | ||
1097 | raw = (struct raw3215_info *) tty->driver_data; | 1095 | raw = (struct raw3215_info *) tty->driver_data; |
1098 | if (raw->flags & RAW3215_STOPPED) { | 1096 | if (raw->flags & RAW3215_STOPPED) { |
1099 | spin_lock_irqsave(raw->lock, flags); | 1097 | spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); |
1100 | raw->flags &= ~RAW3215_STOPPED; | 1098 | raw->flags &= ~RAW3215_STOPPED; |
1101 | raw3215_try_io(raw); | 1099 | raw3215_try_io(raw); |
1102 | spin_unlock_irqrestore(raw->lock, flags); | 1100 | spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags); |
1103 | } | 1101 | } |
1104 | } | 1102 | } |
1105 | 1103 | ||
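Throughout the 3215 hunks above, the cached raw->lock pointer is gone; every call site now takes the ccw device lock through the get_ccwdev_lock() accessor. A minimal sketch of the idiom, assuming a hypothetical driver structure (my_info) — only get_ccwdev_lock() itself is the real cio accessor:

#include <linux/spinlock.h>
#include <asm/ccwdev.h>		/* struct ccw_device, get_ccwdev_lock() */

struct my_info {
	struct ccw_device *cdev;
	unsigned int flags;
};

static void my_set_flag(struct my_info *info, unsigned int bit)
{
	unsigned long flags;

	/* Derive the lock from the device on every use instead of caching it. */
	spin_lock_irqsave(get_ccwdev_lock(info->cdev), flags);
	info->flags |= bit;
	spin_unlock_irqrestore(get_ccwdev_lock(info->cdev), flags);
}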
diff --git a/drivers/s390/char/sclp_quiesce.c b/drivers/s390/char/sclp_quiesce.c index 32004aae95c1..ffa9282ce97a 100644 --- a/drivers/s390/char/sclp_quiesce.c +++ b/drivers/s390/char/sclp_quiesce.c | |||
@@ -19,52 +19,17 @@ | |||
19 | 19 | ||
20 | #include "sclp.h" | 20 | #include "sclp.h" |
21 | 21 | ||
22 | |||
23 | #ifdef CONFIG_SMP | ||
24 | /* Signal completion of shutdown process. All CPUs except the first to enter | ||
25 | * this function: go to stopped state. First CPU: wait until all other | ||
26 | * CPUs are in stopped or check stop state. Afterwards, load special PSW | ||
27 | * to indicate completion. */ | ||
28 | static void | ||
29 | do_load_quiesce_psw(void * __unused) | ||
30 | { | ||
31 | static atomic_t cpuid = ATOMIC_INIT(-1); | ||
32 | psw_t quiesce_psw; | ||
33 | int cpu; | ||
34 | |||
35 | if (atomic_cmpxchg(&cpuid, -1, smp_processor_id()) != -1) | ||
36 | signal_processor(smp_processor_id(), sigp_stop); | ||
37 | /* Wait for all other cpus to enter stopped state */ | ||
38 | for_each_online_cpu(cpu) { | ||
39 | if (cpu == smp_processor_id()) | ||
40 | continue; | ||
41 | while(!smp_cpu_not_running(cpu)) | ||
42 | cpu_relax(); | ||
43 | } | ||
44 | /* Quiesce the last cpu with the special psw */ | ||
45 | quiesce_psw.mask = PSW_BASE_BITS | PSW_MASK_WAIT; | ||
46 | quiesce_psw.addr = 0xfff; | ||
47 | __load_psw(quiesce_psw); | ||
48 | } | ||
49 | |||
50 | /* Shutdown handler. Perform shutdown function on all CPUs. */ | ||
51 | static void | ||
52 | do_machine_quiesce(void) | ||
53 | { | ||
54 | on_each_cpu(do_load_quiesce_psw, NULL, 0, 0); | ||
55 | } | ||
56 | #else | ||
57 | /* Shutdown handler. Signal completion of shutdown by loading special PSW. */ | 22 | /* Shutdown handler. Signal completion of shutdown by loading special PSW. */ |
58 | static void | 23 | static void |
59 | do_machine_quiesce(void) | 24 | do_machine_quiesce(void) |
60 | { | 25 | { |
61 | psw_t quiesce_psw; | 26 | psw_t quiesce_psw; |
62 | 27 | ||
28 | smp_send_stop(); | ||
63 | quiesce_psw.mask = PSW_BASE_BITS | PSW_MASK_WAIT; | 29 | quiesce_psw.mask = PSW_BASE_BITS | PSW_MASK_WAIT; |
64 | quiesce_psw.addr = 0xfff; | 30 | quiesce_psw.addr = 0xfff; |
65 | __load_psw(quiesce_psw); | 31 | __load_psw(quiesce_psw); |
66 | } | 32 | } |
67 | #endif | ||
68 | 33 | ||
69 | /* Handler for quiesce event. Start shutdown procedure. */ | 34 | /* Handler for quiesce event. Start shutdown procedure. */ |
70 | static void | 35 | static void |
diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c index 2d78f0f4a40f..dbfb77b03928 100644 --- a/drivers/s390/cio/chsc.c +++ b/drivers/s390/cio/chsc.c | |||
@@ -251,6 +251,8 @@ s390_subchannel_remove_chpid(struct device *dev, void *data) | |||
251 | cc = cio_clear(sch); | 251 | cc = cio_clear(sch); |
252 | if (cc == -ENODEV) | 252 | if (cc == -ENODEV) |
253 | goto out_unreg; | 253 | goto out_unreg; |
254 | /* Request retry of internal operation. */ | ||
255 | device_set_intretry(sch); | ||
254 | /* Call handler. */ | 256 | /* Call handler. */ |
255 | if (sch->driver && sch->driver->termination) | 257 | if (sch->driver && sch->driver->termination) |
256 | sch->driver->termination(&sch->dev); | 258 | sch->driver->termination(&sch->dev); |
@@ -711,9 +713,6 @@ static inline int check_for_io_on_path(struct subchannel *sch, int index) | |||
711 | { | 713 | { |
712 | int cc; | 714 | int cc; |
713 | 715 | ||
714 | if (!device_is_online(sch)) | ||
715 | /* cio could be doing I/O. */ | ||
716 | return 0; | ||
717 | cc = stsch(sch->schid, &sch->schib); | 716 | cc = stsch(sch->schid, &sch->schib); |
718 | if (cc) | 717 | if (cc) |
719 | return 0; | 718 | return 0; |
@@ -722,6 +721,26 @@ static inline int check_for_io_on_path(struct subchannel *sch, int index) | |||
722 | return 0; | 721 | return 0; |
723 | } | 722 | } |
724 | 723 | ||
724 | static void terminate_internal_io(struct subchannel *sch) | ||
725 | { | ||
726 | if (cio_clear(sch)) { | ||
727 | /* Recheck device in case clear failed. */ | ||
728 | sch->lpm = 0; | ||
729 | if (device_trigger_verify(sch) != 0) { | ||
730 | if(css_enqueue_subchannel_slow(sch->schid)) { | ||
731 | css_clear_subchannel_slow_list(); | ||
732 | need_rescan = 1; | ||
733 | } | ||
734 | } | ||
735 | return; | ||
736 | } | ||
737 | /* Request retry of internal operation. */ | ||
738 | device_set_intretry(sch); | ||
739 | /* Call handler. */ | ||
740 | if (sch->driver && sch->driver->termination) | ||
741 | sch->driver->termination(&sch->dev); | ||
742 | } | ||
743 | |||
725 | static inline void | 744 | static inline void |
726 | __s390_subchannel_vary_chpid(struct subchannel *sch, __u8 chpid, int on) | 745 | __s390_subchannel_vary_chpid(struct subchannel *sch, __u8 chpid, int on) |
727 | { | 746 | { |
@@ -744,20 +763,26 @@ __s390_subchannel_vary_chpid(struct subchannel *sch, __u8 chpid, int on) | |||
744 | device_trigger_reprobe(sch); | 763 | device_trigger_reprobe(sch); |
745 | else if (sch->driver && sch->driver->verify) | 764 | else if (sch->driver && sch->driver->verify) |
746 | sch->driver->verify(&sch->dev); | 765 | sch->driver->verify(&sch->dev); |
747 | } else { | 766 | break; |
748 | sch->opm &= ~(0x80 >> chp); | 767 | } |
749 | sch->lpm &= ~(0x80 >> chp); | 768 | sch->opm &= ~(0x80 >> chp); |
750 | if (check_for_io_on_path(sch, chp)) | 769 | sch->lpm &= ~(0x80 >> chp); |
770 | if (check_for_io_on_path(sch, chp)) { | ||
771 | if (device_is_online(sch)) | ||
751 | /* Path verification is done after killing. */ | 772 | /* Path verification is done after killing. */ |
752 | device_kill_io(sch); | 773 | device_kill_io(sch); |
753 | else if (!sch->lpm) { | 774 | else |
775 | /* Kill and retry internal I/O. */ | ||
776 | terminate_internal_io(sch); | ||
777 | } else if (!sch->lpm) { | ||
778 | if (device_trigger_verify(sch) != 0) { | ||
754 | if (css_enqueue_subchannel_slow(sch->schid)) { | 779 | if (css_enqueue_subchannel_slow(sch->schid)) { |
755 | css_clear_subchannel_slow_list(); | 780 | css_clear_subchannel_slow_list(); |
756 | need_rescan = 1; | 781 | need_rescan = 1; |
757 | } | 782 | } |
758 | } else if (sch->driver && sch->driver->verify) | 783 | } |
759 | sch->driver->verify(&sch->dev); | 784 | } else if (sch->driver && sch->driver->verify) |
760 | } | 785 | sch->driver->verify(&sch->dev); |
761 | break; | 786 | break; |
762 | } | 787 | } |
763 | spin_unlock_irqrestore(&sch->lock, flags); | 788 | spin_unlock_irqrestore(&sch->lock, flags); |
@@ -1465,41 +1490,6 @@ chsc_get_chp_desc(struct subchannel *sch, int chp_no) | |||
1465 | return desc; | 1490 | return desc; |
1466 | } | 1491 | } |
1467 | 1492 | ||
1468 | static int reset_channel_path(struct channel_path *chp) | ||
1469 | { | ||
1470 | int cc; | ||
1471 | |||
1472 | cc = rchp(chp->id); | ||
1473 | switch (cc) { | ||
1474 | case 0: | ||
1475 | return 0; | ||
1476 | case 2: | ||
1477 | return -EBUSY; | ||
1478 | default: | ||
1479 | return -ENODEV; | ||
1480 | } | ||
1481 | } | ||
1482 | |||
1483 | static void reset_channel_paths_css(struct channel_subsystem *css) | ||
1484 | { | ||
1485 | int i; | ||
1486 | |||
1487 | for (i = 0; i <= __MAX_CHPID; i++) { | ||
1488 | if (css->chps[i]) | ||
1489 | reset_channel_path(css->chps[i]); | ||
1490 | } | ||
1491 | } | ||
1492 | |||
1493 | void cio_reset_channel_paths(void) | ||
1494 | { | ||
1495 | int i; | ||
1496 | |||
1497 | for (i = 0; i <= __MAX_CSSID; i++) { | ||
1498 | if (css[i] && css[i]->valid) | ||
1499 | reset_channel_paths_css(css[i]); | ||
1500 | } | ||
1501 | } | ||
1502 | |||
1503 | static int __init | 1493 | static int __init |
1504 | chsc_alloc_sei_area(void) | 1494 | chsc_alloc_sei_area(void) |
1505 | { | 1495 | { |
diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c index 8936e460a807..20aee2783847 100644 --- a/drivers/s390/cio/cio.c +++ b/drivers/s390/cio/cio.c | |||
@@ -21,6 +21,7 @@ | |||
21 | #include <asm/irq.h> | 21 | #include <asm/irq.h> |
22 | #include <asm/irq_regs.h> | 22 | #include <asm/irq_regs.h> |
23 | #include <asm/setup.h> | 23 | #include <asm/setup.h> |
24 | #include <asm/reset.h> | ||
24 | #include "airq.h" | 25 | #include "airq.h" |
25 | #include "cio.h" | 26 | #include "cio.h" |
26 | #include "css.h" | 27 | #include "css.h" |
@@ -28,6 +29,7 @@ | |||
28 | #include "ioasm.h" | 29 | #include "ioasm.h" |
29 | #include "blacklist.h" | 30 | #include "blacklist.h" |
30 | #include "cio_debug.h" | 31 | #include "cio_debug.h" |
32 | #include "../s390mach.h" | ||
31 | 33 | ||
32 | debug_info_t *cio_debug_msg_id; | 34 | debug_info_t *cio_debug_msg_id; |
33 | debug_info_t *cio_debug_trace_id; | 35 | debug_info_t *cio_debug_trace_id; |
@@ -841,26 +843,12 @@ __clear_subchannel_easy(struct subchannel_id schid) | |||
841 | return -EBUSY; | 843 | return -EBUSY; |
842 | } | 844 | } |
843 | 845 | ||
844 | struct sch_match_id { | 846 | static int __shutdown_subchannel_easy(struct subchannel_id schid, void *data) |
845 | struct subchannel_id schid; | ||
846 | struct ccw_dev_id devid; | ||
847 | int rc; | ||
848 | }; | ||
849 | |||
850 | static int __shutdown_subchannel_easy_and_match(struct subchannel_id schid, | ||
851 | void *data) | ||
852 | { | 847 | { |
853 | struct schib schib; | 848 | struct schib schib; |
854 | struct sch_match_id *match_id = data; | ||
855 | 849 | ||
856 | if (stsch_err(schid, &schib)) | 850 | if (stsch_err(schid, &schib)) |
857 | return -ENXIO; | 851 | return -ENXIO; |
858 | if (match_id && schib.pmcw.dnv && | ||
859 | (schib.pmcw.dev == match_id->devid.devno) && | ||
860 | (schid.ssid == match_id->devid.ssid)) { | ||
861 | match_id->schid = schid; | ||
862 | match_id->rc = 0; | ||
863 | } | ||
864 | if (!schib.pmcw.ena) | 852 | if (!schib.pmcw.ena) |
865 | return 0; | 853 | return 0; |
866 | switch(__disable_subchannel_easy(schid, &schib)) { | 854 | switch(__disable_subchannel_easy(schid, &schib)) { |
@@ -876,27 +864,111 @@ static int __shutdown_subchannel_easy_and_match(struct subchannel_id schid, | |||
876 | return 0; | 864 | return 0; |
877 | } | 865 | } |
878 | 866 | ||
879 | static int clear_all_subchannels_and_match(struct ccw_dev_id *devid, | 867 | static atomic_t chpid_reset_count; |
880 | struct subchannel_id *schid) | 868 | |
869 | static void s390_reset_chpids_mcck_handler(void) | ||
870 | { | ||
871 | struct crw crw; | ||
872 | struct mci *mci; | ||
873 | |||
874 | /* Check for pending channel report word. */ | ||
875 | mci = (struct mci *)&S390_lowcore.mcck_interruption_code; | ||
876 | if (!mci->cp) | ||
877 | return; | ||
878 | /* Process channel report words. */ | ||
879 | while (stcrw(&crw) == 0) { | ||
880 | /* Check for responses to RCHP. */ | ||
881 | if (crw.slct && crw.rsc == CRW_RSC_CPATH) | ||
882 | atomic_dec(&chpid_reset_count); | ||
883 | } | ||
884 | } | ||
885 | |||
886 | #define RCHP_TIMEOUT (30 * USEC_PER_SEC) | ||
887 | static void css_reset(void) | ||
888 | { | ||
889 | int i, ret; | ||
890 | unsigned long long timeout; | ||
891 | |||
892 | /* Reset subchannels. */ | ||
893 | for_each_subchannel(__shutdown_subchannel_easy, NULL); | ||
894 | /* Reset channel paths. */ | ||
895 | s390_reset_mcck_handler = s390_reset_chpids_mcck_handler; | ||
896 | /* Enable channel report machine checks. */ | ||
897 | __ctl_set_bit(14, 28); | ||
898 | /* Temporarily reenable machine checks. */ | ||
899 | local_mcck_enable(); | ||
900 | for (i = 0; i <= __MAX_CHPID; i++) { | ||
901 | ret = rchp(i); | ||
902 | if ((ret == 0) || (ret == 2)) | ||
903 | /* | ||
904 | * rchp either succeeded, or another rchp is already | ||
905 | * in progress. In either case, we'll get a crw. | ||
906 | */ | ||
907 | atomic_inc(&chpid_reset_count); | ||
908 | } | ||
909 | /* Wait for machine check for all channel paths. */ | ||
910 | timeout = get_clock() + (RCHP_TIMEOUT << 12); | ||
911 | while (atomic_read(&chpid_reset_count) != 0) { | ||
912 | if (get_clock() > timeout) | ||
913 | break; | ||
914 | cpu_relax(); | ||
915 | } | ||
916 | /* Disable machine checks again. */ | ||
917 | local_mcck_disable(); | ||
918 | /* Disable channel report machine checks. */ | ||
919 | __ctl_clear_bit(14, 28); | ||
920 | s390_reset_mcck_handler = NULL; | ||
921 | } | ||
922 | |||
923 | static struct reset_call css_reset_call = { | ||
924 | .fn = css_reset, | ||
925 | }; | ||
926 | |||
927 | static int __init init_css_reset_call(void) | ||
928 | { | ||
929 | atomic_set(&chpid_reset_count, 0); | ||
930 | register_reset_call(&css_reset_call); | ||
931 | return 0; | ||
932 | } | ||
933 | |||
934 | arch_initcall(init_css_reset_call); | ||
935 | |||
936 | struct sch_match_id { | ||
937 | struct subchannel_id schid; | ||
938 | struct ccw_dev_id devid; | ||
939 | int rc; | ||
940 | }; | ||
941 | |||
942 | static int __reipl_subchannel_match(struct subchannel_id schid, void *data) | ||
943 | { | ||
944 | struct schib schib; | ||
945 | struct sch_match_id *match_id = data; | ||
946 | |||
947 | if (stsch_err(schid, &schib)) | ||
948 | return -ENXIO; | ||
949 | if (schib.pmcw.dnv && | ||
950 | (schib.pmcw.dev == match_id->devid.devno) && | ||
951 | (schid.ssid == match_id->devid.ssid)) { | ||
952 | match_id->schid = schid; | ||
953 | match_id->rc = 0; | ||
954 | return 1; | ||
955 | } | ||
956 | return 0; | ||
957 | } | ||
958 | |||
959 | static int reipl_find_schid(struct ccw_dev_id *devid, | ||
960 | struct subchannel_id *schid) | ||
881 | { | 961 | { |
882 | struct sch_match_id match_id; | 962 | struct sch_match_id match_id; |
883 | 963 | ||
884 | match_id.devid = *devid; | 964 | match_id.devid = *devid; |
885 | match_id.rc = -ENODEV; | 965 | match_id.rc = -ENODEV; |
886 | local_irq_disable(); | 966 | for_each_subchannel(__reipl_subchannel_match, &match_id); |
887 | for_each_subchannel(__shutdown_subchannel_easy_and_match, &match_id); | ||
888 | if (match_id.rc == 0) | 967 | if (match_id.rc == 0) |
889 | *schid = match_id.schid; | 968 | *schid = match_id.schid; |
890 | return match_id.rc; | 969 | return match_id.rc; |
891 | } | 970 | } |
892 | 971 | ||
893 | |||
894 | void clear_all_subchannels(void) | ||
895 | { | ||
896 | local_irq_disable(); | ||
897 | for_each_subchannel(__shutdown_subchannel_easy_and_match, NULL); | ||
898 | } | ||
899 | |||
900 | extern void do_reipl_asm(__u32 schid); | 972 | extern void do_reipl_asm(__u32 schid); |
901 | 973 | ||
902 | /* Make sure all subchannels are quiet before we re-ipl an lpar. */ | 974 | /* Make sure all subchannels are quiet before we re-ipl an lpar. */ |
@@ -904,9 +976,9 @@ void reipl_ccw_dev(struct ccw_dev_id *devid) | |||
904 | { | 976 | { |
905 | struct subchannel_id schid; | 977 | struct subchannel_id schid; |
906 | 978 | ||
907 | if (clear_all_subchannels_and_match(devid, &schid)) | 979 | s390_reset_system(); |
980 | if (reipl_find_schid(devid, &schid) != 0) | ||
908 | panic("IPL Device not found\n"); | 981 | panic("IPL Device not found\n"); |
909 | cio_reset_channel_paths(); | ||
910 | do_reipl_asm(*((__u32*)&schid)); | 982 | do_reipl_asm(*((__u32*)&schid)); |
911 | } | 983 | } |
912 | 984 | ||
diff --git a/drivers/s390/cio/css.h b/drivers/s390/cio/css.h index 4c2ff8336288..9ff064e71767 100644 --- a/drivers/s390/cio/css.h +++ b/drivers/s390/cio/css.h | |||
@@ -94,6 +94,7 @@ struct ccw_device_private { | |||
94 | unsigned int donotify:1; /* call notify function */ | 94 | unsigned int donotify:1; /* call notify function */ |
95 | unsigned int recog_done:1; /* dev. recog. complete */ | 95 | unsigned int recog_done:1; /* dev. recog. complete */ |
96 | unsigned int fake_irb:1; /* deliver faked irb */ | 96 | unsigned int fake_irb:1; /* deliver faked irb */ |
97 | unsigned int intretry:1; /* retry internal operation */ | ||
97 | } __attribute__((packed)) flags; | 98 | } __attribute__((packed)) flags; |
98 | unsigned long intparm; /* user interruption parameter */ | 99 | unsigned long intparm; /* user interruption parameter */ |
99 | struct qdio_irq *qdio_data; | 100 | struct qdio_irq *qdio_data; |
@@ -171,6 +172,8 @@ void device_trigger_reprobe(struct subchannel *); | |||
171 | /* Helper functions for vary on/off. */ | 172 | /* Helper functions for vary on/off. */ |
172 | int device_is_online(struct subchannel *); | 173 | int device_is_online(struct subchannel *); |
173 | void device_kill_io(struct subchannel *); | 174 | void device_kill_io(struct subchannel *); |
175 | void device_set_intretry(struct subchannel *sch); | ||
176 | int device_trigger_verify(struct subchannel *sch); | ||
174 | 177 | ||
175 | /* Machine check helper function. */ | 178 | /* Machine check helper function. */ |
176 | void device_kill_pending_timer(struct subchannel *); | 179 | void device_kill_pending_timer(struct subchannel *); |
diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c index 39c98f940507..d3d3716ff84b 100644 --- a/drivers/s390/cio/device.c +++ b/drivers/s390/cio/device.c | |||
@@ -687,8 +687,20 @@ io_subchannel_register(void *data) | |||
687 | cdev = data; | 687 | cdev = data; |
688 | sch = to_subchannel(cdev->dev.parent); | 688 | sch = to_subchannel(cdev->dev.parent); |
689 | 689 | ||
690 | /* | ||
691 | * io_subchannel_register() will also be called after device | ||
692 | * recognition has been done for a boxed device (which will already | ||
693 | * be registered). We need to reprobe since we may now have sense id | ||
694 | * information. | ||
695 | */ | ||
690 | if (klist_node_attached(&cdev->dev.knode_parent)) { | 696 | if (klist_node_attached(&cdev->dev.knode_parent)) { |
691 | bus_rescan_devices(&ccw_bus_type); | 697 | if (!cdev->drv) { |
698 | ret = device_reprobe(&cdev->dev); | ||
699 | if (ret) | ||
700 | /* We can't do much here. */ | ||
701 | dev_info(&cdev->dev, "device_reprobe() returned" | ||
702 | " %d\n", ret); | ||
703 | } | ||
692 | goto out; | 704 | goto out; |
693 | } | 705 | } |
694 | /* make it known to the system */ | 706 | /* make it known to the system */ |
@@ -948,6 +960,9 @@ io_subchannel_ioterm(struct device *dev) | |||
948 | cdev = dev->driver_data; | 960 | cdev = dev->driver_data; |
949 | if (!cdev) | 961 | if (!cdev) |
950 | return; | 962 | return; |
963 | /* Internal I/O will be retried by the interrupt handler. */ | ||
964 | if (cdev->private->flags.intretry) | ||
965 | return; | ||
951 | cdev->private->state = DEV_STATE_CLEAR_VERIFY; | 966 | cdev->private->state = DEV_STATE_CLEAR_VERIFY; |
952 | if (cdev->handler) | 967 | if (cdev->handler) |
953 | cdev->handler(cdev, cdev->private->intparm, | 968 | cdev->handler(cdev, cdev->private->intparm, |
diff --git a/drivers/s390/cio/device_fsm.c b/drivers/s390/cio/device_fsm.c index de3d0857db9f..09c7672eb3f3 100644 --- a/drivers/s390/cio/device_fsm.c +++ b/drivers/s390/cio/device_fsm.c | |||
@@ -59,6 +59,27 @@ device_set_disconnected(struct subchannel *sch) | |||
59 | cdev->private->state = DEV_STATE_DISCONNECTED; | 59 | cdev->private->state = DEV_STATE_DISCONNECTED; |
60 | } | 60 | } |
61 | 61 | ||
62 | void device_set_intretry(struct subchannel *sch) | ||
63 | { | ||
64 | struct ccw_device *cdev; | ||
65 | |||
66 | cdev = sch->dev.driver_data; | ||
67 | if (!cdev) | ||
68 | return; | ||
69 | cdev->private->flags.intretry = 1; | ||
70 | } | ||
71 | |||
72 | int device_trigger_verify(struct subchannel *sch) | ||
73 | { | ||
74 | struct ccw_device *cdev; | ||
75 | |||
76 | cdev = sch->dev.driver_data; | ||
77 | if (!cdev || !cdev->online) | ||
78 | return -EINVAL; | ||
79 | dev_fsm_event(cdev, DEV_EVENT_VERIFY); | ||
80 | return 0; | ||
81 | } | ||
82 | |||
62 | /* | 83 | /* |
63 | * Timeout function. It just triggers a DEV_EVENT_TIMEOUT. | 84 | * Timeout function. It just triggers a DEV_EVENT_TIMEOUT. |
64 | */ | 85 | */ |
@@ -893,6 +914,12 @@ ccw_device_w4sense(struct ccw_device *cdev, enum dev_event dev_event) | |||
893 | * had killed the original request. | 914 | * had killed the original request. |
894 | */ | 915 | */ |
895 | if (irb->scsw.fctl & (SCSW_FCTL_CLEAR_FUNC | SCSW_FCTL_HALT_FUNC)) { | 916 | if (irb->scsw.fctl & (SCSW_FCTL_CLEAR_FUNC | SCSW_FCTL_HALT_FUNC)) { |
917 | /* Retry Basic Sense if requested. */ | ||
918 | if (cdev->private->flags.intretry) { | ||
919 | cdev->private->flags.intretry = 0; | ||
920 | ccw_device_do_sense(cdev, irb); | ||
921 | return; | ||
922 | } | ||
896 | cdev->private->flags.dosense = 0; | 923 | cdev->private->flags.dosense = 0; |
897 | memset(&cdev->private->irb, 0, sizeof(struct irb)); | 924 | memset(&cdev->private->irb, 0, sizeof(struct irb)); |
898 | ccw_device_accumulate_irb(cdev, irb); | 925 | ccw_device_accumulate_irb(cdev, irb); |
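device_set_intretry() and the w4sense hunk above add a one-bit retry protocol for internal I/O: the code that has to clear an internal operation (for example during vary off) sets intretry, and the next interrupt that reports a halted/cleared function consumes the bit and reissues the operation instead of failing with -ETIME. The sense-ID, PGID and NOP checks in the files below apply the same rule. A condensed sketch of the consumer side, with hypothetical names (retry_pending, reissue()) in place of the real ccw_device fields:

#include <linux/errno.h>

static int retry_pending;	/* set by whoever terminated the operation */

static int reissue(void)
{
	/* ... restart the internal I/O here ... */
	return 0;
}

/* Called when an interrupt reports a halted or cleared function. */
static int handle_killed_operation(void)
{
	if (retry_pending) {
		retry_pending = 0;	/* consume the flag ...             */
		return reissue();	/* ... and retry instead of failing */
	}
	return -ETIME;			/* genuine termination/timeout      */
}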
diff --git a/drivers/s390/cio/device_id.c b/drivers/s390/cio/device_id.c index a74785b9e4eb..f17275917fe5 100644 --- a/drivers/s390/cio/device_id.c +++ b/drivers/s390/cio/device_id.c | |||
@@ -191,6 +191,8 @@ __ccw_device_sense_id_start(struct ccw_device *cdev) | |||
191 | if ((sch->opm & cdev->private->imask) != 0 && | 191 | if ((sch->opm & cdev->private->imask) != 0 && |
192 | cdev->private->iretry > 0) { | 192 | cdev->private->iretry > 0) { |
193 | cdev->private->iretry--; | 193 | cdev->private->iretry--; |
194 | /* Reset internal retry indication. */ | ||
195 | cdev->private->flags.intretry = 0; | ||
194 | ret = cio_start (sch, cdev->private->iccws, | 196 | ret = cio_start (sch, cdev->private->iccws, |
195 | cdev->private->imask); | 197 | cdev->private->imask); |
196 | /* ret is 0, -EBUSY, -EACCES or -ENODEV */ | 198 | /* ret is 0, -EBUSY, -EACCES or -ENODEV */ |
@@ -237,8 +239,14 @@ ccw_device_check_sense_id(struct ccw_device *cdev) | |||
237 | return 0; /* Success */ | 239 | return 0; /* Success */ |
238 | } | 240 | } |
239 | /* Check the error cases. */ | 241 | /* Check the error cases. */ |
240 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) | 242 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) { |
243 | /* Retry Sense ID if requested. */ | ||
244 | if (cdev->private->flags.intretry) { | ||
245 | cdev->private->flags.intretry = 0; | ||
246 | return -EAGAIN; | ||
247 | } | ||
241 | return -ETIME; | 248 | return -ETIME; |
249 | } | ||
242 | if (irb->esw.esw0.erw.cons && (irb->ecw[0] & SNS0_CMD_REJECT)) { | 250 | if (irb->esw.esw0.erw.cons && (irb->ecw[0] & SNS0_CMD_REJECT)) { |
243 | /* | 251 | /* |
244 | * if the device doesn't support the SenseID | 252 | * if the device doesn't support the SenseID |
diff --git a/drivers/s390/cio/device_pgid.c b/drivers/s390/cio/device_pgid.c index 2975ce888c19..cb1879a96818 100644 --- a/drivers/s390/cio/device_pgid.c +++ b/drivers/s390/cio/device_pgid.c | |||
@@ -71,6 +71,8 @@ __ccw_device_sense_pgid_start(struct ccw_device *cdev) | |||
71 | ccw->cda = (__u32) __pa (&cdev->private->pgid[i]); | 71 | ccw->cda = (__u32) __pa (&cdev->private->pgid[i]); |
72 | if (cdev->private->iretry > 0) { | 72 | if (cdev->private->iretry > 0) { |
73 | cdev->private->iretry--; | 73 | cdev->private->iretry--; |
74 | /* Reset internal retry indication. */ | ||
75 | cdev->private->flags.intretry = 0; | ||
74 | ret = cio_start (sch, cdev->private->iccws, | 76 | ret = cio_start (sch, cdev->private->iccws, |
75 | cdev->private->imask); | 77 | cdev->private->imask); |
76 | /* ret is 0, -EBUSY, -EACCES or -ENODEV */ | 78 | /* ret is 0, -EBUSY, -EACCES or -ENODEV */ |
@@ -122,8 +124,14 @@ __ccw_device_check_sense_pgid(struct ccw_device *cdev) | |||
122 | 124 | ||
123 | sch = to_subchannel(cdev->dev.parent); | 125 | sch = to_subchannel(cdev->dev.parent); |
124 | irb = &cdev->private->irb; | 126 | irb = &cdev->private->irb; |
125 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) | 127 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) { |
128 | /* Retry Sense PGID if requested. */ | ||
129 | if (cdev->private->flags.intretry) { | ||
130 | cdev->private->flags.intretry = 0; | ||
131 | return -EAGAIN; | ||
132 | } | ||
126 | return -ETIME; | 133 | return -ETIME; |
134 | } | ||
127 | if (irb->esw.esw0.erw.cons && | 135 | if (irb->esw.esw0.erw.cons && |
128 | (irb->ecw[0]&(SNS0_CMD_REJECT|SNS0_INTERVENTION_REQ))) { | 136 | (irb->ecw[0]&(SNS0_CMD_REJECT|SNS0_INTERVENTION_REQ))) { |
129 | /* | 137 | /* |
@@ -253,6 +261,8 @@ __ccw_device_do_pgid(struct ccw_device *cdev, __u8 func) | |||
253 | ret = -EACCES; | 261 | ret = -EACCES; |
254 | if (cdev->private->iretry > 0) { | 262 | if (cdev->private->iretry > 0) { |
255 | cdev->private->iretry--; | 263 | cdev->private->iretry--; |
264 | /* Reset internal retry indication. */ | ||
265 | cdev->private->flags.intretry = 0; | ||
256 | ret = cio_start (sch, cdev->private->iccws, | 266 | ret = cio_start (sch, cdev->private->iccws, |
257 | cdev->private->imask); | 267 | cdev->private->imask); |
258 | /* We expect an interrupt in case of success or busy | 268 | /* We expect an interrupt in case of success or busy |
@@ -293,6 +303,8 @@ static int __ccw_device_do_nop(struct ccw_device *cdev) | |||
293 | ret = -EACCES; | 303 | ret = -EACCES; |
294 | if (cdev->private->iretry > 0) { | 304 | if (cdev->private->iretry > 0) { |
295 | cdev->private->iretry--; | 305 | cdev->private->iretry--; |
306 | /* Reset internal retry indication. */ | ||
307 | cdev->private->flags.intretry = 0; | ||
296 | ret = cio_start (sch, cdev->private->iccws, | 308 | ret = cio_start (sch, cdev->private->iccws, |
297 | cdev->private->imask); | 309 | cdev->private->imask); |
298 | /* We expect an interrupt in case of success or busy | 310 | /* We expect an interrupt in case of success or busy |
@@ -321,8 +333,14 @@ __ccw_device_check_pgid(struct ccw_device *cdev) | |||
321 | 333 | ||
322 | sch = to_subchannel(cdev->dev.parent); | 334 | sch = to_subchannel(cdev->dev.parent); |
323 | irb = &cdev->private->irb; | 335 | irb = &cdev->private->irb; |
324 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) | 336 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) { |
337 | /* Retry Set PGID if requested. */ | ||
338 | if (cdev->private->flags.intretry) { | ||
339 | cdev->private->flags.intretry = 0; | ||
340 | return -EAGAIN; | ||
341 | } | ||
325 | return -ETIME; | 342 | return -ETIME; |
343 | } | ||
326 | if (irb->esw.esw0.erw.cons) { | 344 | if (irb->esw.esw0.erw.cons) { |
327 | if (irb->ecw[0] & SNS0_CMD_REJECT) | 345 | if (irb->ecw[0] & SNS0_CMD_REJECT) |
328 | return -EOPNOTSUPP; | 346 | return -EOPNOTSUPP; |
@@ -360,8 +378,14 @@ static int __ccw_device_check_nop(struct ccw_device *cdev) | |||
360 | 378 | ||
361 | sch = to_subchannel(cdev->dev.parent); | 379 | sch = to_subchannel(cdev->dev.parent); |
362 | irb = &cdev->private->irb; | 380 | irb = &cdev->private->irb; |
363 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) | 381 | if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) { |
382 | /* Retry NOP if requested. */ | ||
383 | if (cdev->private->flags.intretry) { | ||
384 | cdev->private->flags.intretry = 0; | ||
385 | return -EAGAIN; | ||
386 | } | ||
364 | return -ETIME; | 387 | return -ETIME; |
388 | } | ||
365 | if (irb->scsw.cc == 3) { | 389 | if (irb->scsw.cc == 3) { |
366 | CIO_MSG_EVENT(2, "NOP - Device %04x on Subchannel 0.%x.%04x," | 390 | CIO_MSG_EVENT(2, "NOP - Device %04x on Subchannel 0.%x.%04x," |
367 | " lpm %02X, became 'not operational'\n", | 391 | " lpm %02X, became 'not operational'\n", |
diff --git a/drivers/s390/cio/device_status.c b/drivers/s390/cio/device_status.c index 3f7cbce4cd87..bdcf930f7beb 100644 --- a/drivers/s390/cio/device_status.c +++ b/drivers/s390/cio/device_status.c | |||
@@ -319,6 +319,9 @@ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb) | |||
319 | sch->sense_ccw.count = SENSE_MAX_COUNT; | 319 | sch->sense_ccw.count = SENSE_MAX_COUNT; |
320 | sch->sense_ccw.flags = CCW_FLAG_SLI; | 320 | sch->sense_ccw.flags = CCW_FLAG_SLI; |
321 | 321 | ||
322 | /* Reset internal retry indication. */ | ||
323 | cdev->private->flags.intretry = 0; | ||
324 | |||
322 | return cio_start (sch, &sch->sense_ccw, 0xff); | 325 | return cio_start (sch, &sch->sense_ccw, 0xff); |
323 | } | 326 | } |
324 | 327 | ||
diff --git a/drivers/s390/cio/qdio.c b/drivers/s390/cio/qdio.c index 476aa1da5cbc..8d5fa1b4d11f 100644 --- a/drivers/s390/cio/qdio.c +++ b/drivers/s390/cio/qdio.c | |||
@@ -481,7 +481,7 @@ qdio_stop_polling(struct qdio_q *q) | |||
481 | unsigned char state = 0; | 481 | unsigned char state = 0; |
482 | struct qdio_irq *irq = (struct qdio_irq *) q->irq_ptr; | 482 | struct qdio_irq *irq = (struct qdio_irq *) q->irq_ptr; |
483 | 483 | ||
484 | if (!atomic_swap(&q->polling,0)) | 484 | if (!atomic_xchg(&q->polling,0)) |
485 | return 1; | 485 | return 1; |
486 | 486 | ||
487 | QDIO_DBF_TEXT4(0,trace,"stoppoll"); | 487 | QDIO_DBF_TEXT4(0,trace,"stoppoll"); |
@@ -1964,8 +1964,8 @@ qdio_irq_check_sense(struct subchannel_id schid, struct irb *irb) | |||
1964 | QDIO_DBF_HEX0(0,sense,irb,QDIO_DBF_SENSE_LEN); | 1964 | QDIO_DBF_HEX0(0,sense,irb,QDIO_DBF_SENSE_LEN); |
1965 | 1965 | ||
1966 | QDIO_PRINT_WARN("sense data available on qdio channel.\n"); | 1966 | QDIO_PRINT_WARN("sense data available on qdio channel.\n"); |
1967 | HEXDUMP16(WARN,"irb: ",irb); | 1967 | QDIO_HEXDUMP16(WARN,"irb: ",irb); |
1968 | HEXDUMP16(WARN,"sense data: ",irb->ecw); | 1968 | QDIO_HEXDUMP16(WARN,"sense data: ",irb->ecw); |
1969 | } | 1969 | } |
1970 | 1970 | ||
1971 | } | 1971 | } |
@@ -3425,7 +3425,7 @@ do_qdio_handle_inbound(struct qdio_q *q, unsigned int callflags, | |||
3425 | 3425 | ||
3426 | if ((used_elements+count==QDIO_MAX_BUFFERS_PER_Q)&& | 3426 | if ((used_elements+count==QDIO_MAX_BUFFERS_PER_Q)&& |
3427 | (callflags&QDIO_FLAG_UNDER_INTERRUPT)) | 3427 | (callflags&QDIO_FLAG_UNDER_INTERRUPT)) |
3428 | atomic_swap(&q->polling,0); | 3428 | atomic_xchg(&q->polling,0); |
3429 | 3429 | ||
3430 | if (used_elements) | 3430 | if (used_elements) |
3431 | return; | 3431 | return; |
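The two qdio hunks above replace the driver-private atomic_swap() macro (deleted from qdio.h below) with the generic atomic_xchg(), which atomically stores the new value into an atomic_t and returns the previous one. A small usage sketch with a hypothetical polling flag:

#include <asm/atomic.h>		/* <linux/atomic.h> on later kernels */

static atomic_t polling = ATOMIC_INIT(1);

static int stop_polling(void)
{
	/*
	 * atomic_xchg() writes 0 and hands back the old value, so exactly
	 * one caller ever observes the 1 -> 0 transition.
	 */
	if (!atomic_xchg(&polling, 0))
		return 1;	/* polling was already off */
	/* ... tear down polling state here ... */
	return 0;
}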
diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h index 49bb9e371c32..42927c1b7451 100644 --- a/drivers/s390/cio/qdio.h +++ b/drivers/s390/cio/qdio.h | |||
@@ -236,7 +236,7 @@ enum qdio_irq_states { | |||
236 | #define QDIO_PRINT_EMERG(x...) do { } while (0) | 236 | #define QDIO_PRINT_EMERG(x...) do { } while (0) |
237 | #endif | 237 | #endif |
238 | 238 | ||
239 | #define HEXDUMP16(importance,header,ptr) \ | 239 | #define QDIO_HEXDUMP16(importance,header,ptr) \ |
240 | QDIO_PRINT_##importance(header "%02x %02x %02x %02x " \ | 240 | QDIO_PRINT_##importance(header "%02x %02x %02x %02x " \ |
241 | "%02x %02x %02x %02x %02x %02x %02x %02x " \ | 241 | "%02x %02x %02x %02x %02x %02x %02x %02x " \ |
242 | "%02x %02x %02x %02x\n",*(((char*)ptr)), \ | 242 | "%02x %02x %02x %02x\n",*(((char*)ptr)), \ |
@@ -429,8 +429,6 @@ struct qdio_perf_stats { | |||
429 | }; | 429 | }; |
430 | #endif /* QDIO_PERFORMANCE_STATS */ | 430 | #endif /* QDIO_PERFORMANCE_STATS */ |
431 | 431 | ||
432 | #define atomic_swap(a,b) xchg((int*)a.counter,b) | ||
433 | |||
434 | /* unlikely as the later the better */ | 432 | /* unlikely as the later the better */ |
435 | #define SYNC_MEMORY if (unlikely(q->siga_sync)) qdio_siga_sync_q(q) | 433 | #define SYNC_MEMORY if (unlikely(q->siga_sync)) qdio_siga_sync_q(q) |
436 | #define SYNC_MEMORY_ALL if (unlikely(q->siga_sync)) \ | 434 | #define SYNC_MEMORY_ALL if (unlikely(q->siga_sync)) \ |
diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c index 79d89c368919..6a54334ffe09 100644 --- a/drivers/s390/crypto/ap_bus.c +++ b/drivers/s390/crypto/ap_bus.c | |||
@@ -431,7 +431,15 @@ static int ap_uevent (struct device *dev, char **envp, int num_envp, | |||
431 | ap_dev->device_type); | 431 | ap_dev->device_type); |
432 | if (buffer_size - length <= 0) | 432 | if (buffer_size - length <= 0) |
433 | return -ENOMEM; | 433 | return -ENOMEM; |
434 | envp[1] = 0; | 434 | buffer += length; |
435 | buffer_size -= length; | ||
436 | /* Add MODALIAS= */ | ||
437 | envp[1] = buffer; | ||
438 | length = scnprintf(buffer, buffer_size, "MODALIAS=ap:t%02X", | ||
439 | ap_dev->device_type); | ||
440 | if (buffer_size - length <= 0) | ||
441 | return -ENOMEM; | ||
442 | envp[2] = NULL; | ||
435 | return 0; | 443 | return 0; |
436 | } | 444 | } |
437 | 445 | ||
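The ap_bus hunk above adds a second uevent variable, MODALIAS=ap:tXX, alongside the existing one: each string gets its own envp[] slot inside the caller-supplied scratch buffer, the buffer pointer and remaining size are advanced after each scnprintf(), and the array is NULL-terminated. A sketch of that pattern against the old-style uevent callback signature shown above; the helper name and the DEV_TYPE variable are assumptions, and the explicit "+ 1" keeps each string's terminating NUL:

#include <linux/kernel.h>	/* scnprintf() */
#include <linux/errno.h>

static int fill_uevent_env(char **envp, int num_envp,
			   char *buffer, int buffer_size, unsigned int type)
{
	int length;

	if (num_envp < 3)
		return -ENOMEM;

	envp[0] = buffer;
	length = scnprintf(buffer, buffer_size, "DEV_TYPE=%04X", type) + 1;
	if (buffer_size - length <= 0)
		return -ENOMEM;
	buffer += length;
	buffer_size -= length;

	envp[1] = buffer;
	length = scnprintf(buffer, buffer_size, "MODALIAS=ap:t%02X", type) + 1;
	if (buffer_size - length <= 0)
		return -ENOMEM;

	envp[2] = NULL;		/* the environment array must end with NULL */
	return 0;
}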
diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c index 66a8aec6efa6..08d4e47070bd 100644 --- a/drivers/s390/net/lcs.c +++ b/drivers/s390/net/lcs.c | |||
@@ -54,6 +54,8 @@ | |||
54 | #error Cannot compile lcs.c without some net devices switched on. | 54 | #error Cannot compile lcs.c without some net devices switched on. |
55 | #endif | 55 | #endif |
56 | 56 | ||
57 | #define PRINTK_HEADER " lcs: " | ||
58 | |||
57 | /** | 59 | /** |
58 | * initialization string for output | 60 | * initialization string for output |
59 | */ | 61 | */ |
@@ -120,7 +122,7 @@ lcs_alloc_channel(struct lcs_channel *channel) | |||
120 | kzalloc(LCS_IOBUFFERSIZE, GFP_DMA | GFP_KERNEL); | 122 | kzalloc(LCS_IOBUFFERSIZE, GFP_DMA | GFP_KERNEL); |
121 | if (channel->iob[cnt].data == NULL) | 123 | if (channel->iob[cnt].data == NULL) |
122 | break; | 124 | break; |
123 | channel->iob[cnt].state = BUF_STATE_EMPTY; | 125 | channel->iob[cnt].state = LCS_BUF_STATE_EMPTY; |
124 | } | 126 | } |
125 | if (cnt < LCS_NUM_BUFFS) { | 127 | if (cnt < LCS_NUM_BUFFS) { |
126 | /* Not all io buffers could be allocated. */ | 128 | /* Not all io buffers could be allocated. */ |
@@ -236,7 +238,7 @@ lcs_setup_read_ccws(struct lcs_card *card) | |||
236 | ((struct lcs_header *) | 238 | ((struct lcs_header *) |
237 | card->read.iob[cnt].data)->offset = LCS_ILLEGAL_OFFSET; | 239 | card->read.iob[cnt].data)->offset = LCS_ILLEGAL_OFFSET; |
238 | card->read.iob[cnt].callback = lcs_get_frames_cb; | 240 | card->read.iob[cnt].callback = lcs_get_frames_cb; |
239 | card->read.iob[cnt].state = BUF_STATE_READY; | 241 | card->read.iob[cnt].state = LCS_BUF_STATE_READY; |
240 | card->read.iob[cnt].count = LCS_IOBUFFERSIZE; | 242 | card->read.iob[cnt].count = LCS_IOBUFFERSIZE; |
241 | } | 243 | } |
242 | card->read.ccws[0].flags &= ~CCW_FLAG_PCI; | 244 | card->read.ccws[0].flags &= ~CCW_FLAG_PCI; |
@@ -247,7 +249,7 @@ lcs_setup_read_ccws(struct lcs_card *card) | |||
247 | card->read.ccws[LCS_NUM_BUFFS].cda = | 249 | card->read.ccws[LCS_NUM_BUFFS].cda = |
248 | (__u32) __pa(card->read.ccws); | 250 | (__u32) __pa(card->read.ccws); |
249 | /* Set initial state of the read channel. */ | 251 | /* Set initial state of the read channel. */ |
250 | card->read.state = CH_STATE_INIT; | 252 | card->read.state = LCS_CH_STATE_INIT; |
251 | 253 | ||
252 | card->read.io_idx = 0; | 254 | card->read.io_idx = 0; |
253 | card->read.buf_idx = 0; | 255 | card->read.buf_idx = 0; |
@@ -294,7 +296,7 @@ lcs_setup_write_ccws(struct lcs_card *card) | |||
294 | card->write.ccws[LCS_NUM_BUFFS].cda = | 296 | card->write.ccws[LCS_NUM_BUFFS].cda = |
295 | (__u32) __pa(card->write.ccws); | 297 | (__u32) __pa(card->write.ccws); |
296 | /* Set initial state of the write channel. */ | 298 | /* Set initial state of the write channel. */ |
297 | card->read.state = CH_STATE_INIT; | 299 | card->read.state = LCS_CH_STATE_INIT; |
298 | 300 | ||
299 | card->write.io_idx = 0; | 301 | card->write.io_idx = 0; |
300 | card->write.buf_idx = 0; | 302 | card->write.buf_idx = 0; |
@@ -496,7 +498,7 @@ lcs_start_channel(struct lcs_channel *channel) | |||
496 | channel->ccws + channel->io_idx, 0, 0, | 498 | channel->ccws + channel->io_idx, 0, 0, |
497 | DOIO_DENY_PREFETCH | DOIO_ALLOW_SUSPEND); | 499 | DOIO_DENY_PREFETCH | DOIO_ALLOW_SUSPEND); |
498 | if (rc == 0) | 500 | if (rc == 0) |
499 | channel->state = CH_STATE_RUNNING; | 501 | channel->state = LCS_CH_STATE_RUNNING; |
500 | spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); | 502 | spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); |
501 | if (rc) { | 503 | if (rc) { |
502 | LCS_DBF_TEXT_(4,trace,"essh%s", channel->ccwdev->dev.bus_id); | 504 | LCS_DBF_TEXT_(4,trace,"essh%s", channel->ccwdev->dev.bus_id); |
@@ -520,8 +522,8 @@ lcs_clear_channel(struct lcs_channel *channel) | |||
520 | LCS_DBF_TEXT_(4,trace,"ecsc%s", channel->ccwdev->dev.bus_id); | 522 | LCS_DBF_TEXT_(4,trace,"ecsc%s", channel->ccwdev->dev.bus_id); |
521 | return rc; | 523 | return rc; |
522 | } | 524 | } |
523 | wait_event(channel->wait_q, (channel->state == CH_STATE_CLEARED)); | 525 | wait_event(channel->wait_q, (channel->state == LCS_CH_STATE_CLEARED)); |
524 | channel->state = CH_STATE_STOPPED; | 526 | channel->state = LCS_CH_STATE_STOPPED; |
525 | return rc; | 527 | return rc; |
526 | } | 528 | } |
527 | 529 | ||
@@ -535,11 +537,11 @@ lcs_stop_channel(struct lcs_channel *channel) | |||
535 | unsigned long flags; | 537 | unsigned long flags; |
536 | int rc; | 538 | int rc; |
537 | 539 | ||
538 | if (channel->state == CH_STATE_STOPPED) | 540 | if (channel->state == LCS_CH_STATE_STOPPED) |
539 | return 0; | 541 | return 0; |
540 | LCS_DBF_TEXT(4,trace,"haltsch"); | 542 | LCS_DBF_TEXT(4,trace,"haltsch"); |
541 | LCS_DBF_TEXT_(4,trace,"%s", channel->ccwdev->dev.bus_id); | 543 | LCS_DBF_TEXT_(4,trace,"%s", channel->ccwdev->dev.bus_id); |
542 | channel->state = CH_STATE_INIT; | 544 | channel->state = LCS_CH_STATE_INIT; |
543 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); | 545 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); |
544 | rc = ccw_device_halt(channel->ccwdev, (addr_t) channel); | 546 | rc = ccw_device_halt(channel->ccwdev, (addr_t) channel); |
545 | spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); | 547 | spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); |
@@ -548,7 +550,7 @@ lcs_stop_channel(struct lcs_channel *channel) | |||
548 | return rc; | 550 | return rc; |
549 | } | 551 | } |
550 | /* Asynchronous halt initiated. Wait for its completion. */ | 552 | /* Asynchronous halt initiated. Wait for its completion. */ |
551 | wait_event(channel->wait_q, (channel->state == CH_STATE_HALTED)); | 553 | wait_event(channel->wait_q, (channel->state == LCS_CH_STATE_HALTED)); |
552 | lcs_clear_channel(channel); | 554 | lcs_clear_channel(channel); |
553 | return 0; | 555 | return 0; |
554 | } | 556 | } |
@@ -596,8 +598,8 @@ __lcs_get_buffer(struct lcs_channel *channel) | |||
596 | LCS_DBF_TEXT(5, trace, "_getbuff"); | 598 | LCS_DBF_TEXT(5, trace, "_getbuff"); |
597 | index = channel->io_idx; | 599 | index = channel->io_idx; |
598 | do { | 600 | do { |
599 | if (channel->iob[index].state == BUF_STATE_EMPTY) { | 601 | if (channel->iob[index].state == LCS_BUF_STATE_EMPTY) { |
600 | channel->iob[index].state = BUF_STATE_LOCKED; | 602 | channel->iob[index].state = LCS_BUF_STATE_LOCKED; |
601 | return channel->iob + index; | 603 | return channel->iob + index; |
602 | } | 604 | } |
603 | index = (index + 1) & (LCS_NUM_BUFFS - 1); | 605 | index = (index + 1) & (LCS_NUM_BUFFS - 1); |
@@ -626,7 +628,7 @@ __lcs_resume_channel(struct lcs_channel *channel) | |||
626 | { | 628 | { |
627 | int rc; | 629 | int rc; |
628 | 630 | ||
629 | if (channel->state != CH_STATE_SUSPENDED) | 631 | if (channel->state != LCS_CH_STATE_SUSPENDED) |
630 | return 0; | 632 | return 0; |
631 | if (channel->ccws[channel->io_idx].flags & CCW_FLAG_SUSPEND) | 633 | if (channel->ccws[channel->io_idx].flags & CCW_FLAG_SUSPEND) |
632 | return 0; | 634 | return 0; |
@@ -636,7 +638,7 @@ __lcs_resume_channel(struct lcs_channel *channel) | |||
636 | LCS_DBF_TEXT_(4, trace, "ersc%s", channel->ccwdev->dev.bus_id); | 638 | LCS_DBF_TEXT_(4, trace, "ersc%s", channel->ccwdev->dev.bus_id); |
637 | PRINT_ERR("Error in lcs_resume_channel: rc=%d\n",rc); | 639 | PRINT_ERR("Error in lcs_resume_channel: rc=%d\n",rc); |
638 | } else | 640 | } else |
639 | channel->state = CH_STATE_RUNNING; | 641 | channel->state = LCS_CH_STATE_RUNNING; |
640 | return rc; | 642 | return rc; |
641 | 643 | ||
642 | } | 644 | } |
@@ -670,10 +672,10 @@ lcs_ready_buffer(struct lcs_channel *channel, struct lcs_buffer *buffer) | |||
670 | int index, rc; | 672 | int index, rc; |
671 | 673 | ||
672 | LCS_DBF_TEXT(5, trace, "rdybuff"); | 674 | LCS_DBF_TEXT(5, trace, "rdybuff"); |
673 | BUG_ON(buffer->state != BUF_STATE_LOCKED && | 675 | BUG_ON(buffer->state != LCS_BUF_STATE_LOCKED && |
674 | buffer->state != BUF_STATE_PROCESSED); | 676 | buffer->state != LCS_BUF_STATE_PROCESSED); |
675 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); | 677 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); |
676 | buffer->state = BUF_STATE_READY; | 678 | buffer->state = LCS_BUF_STATE_READY; |
677 | index = buffer - channel->iob; | 679 | index = buffer - channel->iob; |
678 | /* Set length. */ | 680 | /* Set length. */ |
679 | channel->ccws[index].count = buffer->count; | 681 | channel->ccws[index].count = buffer->count; |
@@ -695,8 +697,8 @@ __lcs_processed_buffer(struct lcs_channel *channel, struct lcs_buffer *buffer) | |||
695 | int index, prev, next; | 697 | int index, prev, next; |
696 | 698 | ||
697 | LCS_DBF_TEXT(5, trace, "prcsbuff"); | 699 | LCS_DBF_TEXT(5, trace, "prcsbuff"); |
698 | BUG_ON(buffer->state != BUF_STATE_READY); | 700 | BUG_ON(buffer->state != LCS_BUF_STATE_READY); |
699 | buffer->state = BUF_STATE_PROCESSED; | 701 | buffer->state = LCS_BUF_STATE_PROCESSED; |
700 | index = buffer - channel->iob; | 702 | index = buffer - channel->iob; |
701 | prev = (index - 1) & (LCS_NUM_BUFFS - 1); | 703 | prev = (index - 1) & (LCS_NUM_BUFFS - 1); |
702 | next = (index + 1) & (LCS_NUM_BUFFS - 1); | 704 | next = (index + 1) & (LCS_NUM_BUFFS - 1); |
@@ -704,7 +706,7 @@ __lcs_processed_buffer(struct lcs_channel *channel, struct lcs_buffer *buffer) | |||
704 | channel->ccws[index].flags |= CCW_FLAG_SUSPEND; | 706 | channel->ccws[index].flags |= CCW_FLAG_SUSPEND; |
705 | channel->ccws[index].flags &= ~CCW_FLAG_PCI; | 707 | channel->ccws[index].flags &= ~CCW_FLAG_PCI; |
706 | /* Check the suspend bit of the previous buffer. */ | 708 | /* Check the suspend bit of the previous buffer. */ |
707 | if (channel->iob[prev].state == BUF_STATE_READY) { | 709 | if (channel->iob[prev].state == LCS_BUF_STATE_READY) { |
708 | /* | 710 | /* |
709 | * Previous buffer is in state ready. It might have | 711 | * Previous buffer is in state ready. It might have |
710 | * happened in lcs_ready_buffer that the suspend bit | 712 | * happened in lcs_ready_buffer that the suspend bit |
@@ -727,10 +729,10 @@ lcs_release_buffer(struct lcs_channel *channel, struct lcs_buffer *buffer) | |||
727 | unsigned long flags; | 729 | unsigned long flags; |
728 | 730 | ||
729 | LCS_DBF_TEXT(5, trace, "relbuff"); | 731 | LCS_DBF_TEXT(5, trace, "relbuff"); |
730 | BUG_ON(buffer->state != BUF_STATE_LOCKED && | 732 | BUG_ON(buffer->state != LCS_BUF_STATE_LOCKED && |
731 | buffer->state != BUF_STATE_PROCESSED); | 733 | buffer->state != LCS_BUF_STATE_PROCESSED); |
732 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); | 734 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); |
733 | buffer->state = BUF_STATE_EMPTY; | 735 | buffer->state = LCS_BUF_STATE_EMPTY; |
734 | spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); | 736 | spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); |
735 | } | 737 | } |
736 | 738 | ||
@@ -1264,7 +1266,7 @@ lcs_register_mc_addresses(void *data) | |||
1264 | netif_carrier_off(card->dev); | 1266 | netif_carrier_off(card->dev); |
1265 | netif_tx_disable(card->dev); | 1267 | netif_tx_disable(card->dev); |
1266 | wait_event(card->write.wait_q, | 1268 | wait_event(card->write.wait_q, |
1267 | (card->write.state != CH_STATE_RUNNING)); | 1269 | (card->write.state != LCS_CH_STATE_RUNNING)); |
1268 | lcs_fix_multicast_list(card); | 1270 | lcs_fix_multicast_list(card); |
1269 | if (card->state == DEV_STATE_UP) { | 1271 | if (card->state == DEV_STATE_UP) { |
1270 | netif_carrier_on(card->dev); | 1272 | netif_carrier_on(card->dev); |
@@ -1404,7 +1406,7 @@ lcs_irq(struct ccw_device *cdev, unsigned long intparm, struct irb *irb) | |||
1404 | } | 1406 | } |
1405 | } | 1407 | } |
1406 | /* How far in the ccw chain have we processed? */ | 1408 | /* How far in the ccw chain have we processed? */ |
1407 | if ((channel->state != CH_STATE_INIT) && | 1409 | if ((channel->state != LCS_CH_STATE_INIT) && |
1408 | (irb->scsw.fctl & SCSW_FCTL_START_FUNC)) { | 1410 | (irb->scsw.fctl & SCSW_FCTL_START_FUNC)) { |
1409 | index = (struct ccw1 *) __va((addr_t) irb->scsw.cpa) | 1411 | index = (struct ccw1 *) __va((addr_t) irb->scsw.cpa) |
1410 | - channel->ccws; | 1412 | - channel->ccws; |
@@ -1424,20 +1426,20 @@ lcs_irq(struct ccw_device *cdev, unsigned long intparm, struct irb *irb) | |||
1424 | (irb->scsw.dstat & DEV_STAT_CHN_END) || | 1426 | (irb->scsw.dstat & DEV_STAT_CHN_END) || |
1425 | (irb->scsw.dstat & DEV_STAT_UNIT_CHECK)) | 1427 | (irb->scsw.dstat & DEV_STAT_UNIT_CHECK)) |
1426 | /* Mark channel as stopped. */ | 1428 | /* Mark channel as stopped. */ |
1427 | channel->state = CH_STATE_STOPPED; | 1429 | channel->state = LCS_CH_STATE_STOPPED; |
1428 | else if (irb->scsw.actl & SCSW_ACTL_SUSPENDED) | 1430 | else if (irb->scsw.actl & SCSW_ACTL_SUSPENDED) |
1429 | /* CCW execution stopped on a suspend bit. */ | 1431 | /* CCW execution stopped on a suspend bit. */ |
1430 | channel->state = CH_STATE_SUSPENDED; | 1432 | channel->state = LCS_CH_STATE_SUSPENDED; |
1431 | if (irb->scsw.fctl & SCSW_FCTL_HALT_FUNC) { | 1433 | if (irb->scsw.fctl & SCSW_FCTL_HALT_FUNC) { |
1432 | if (irb->scsw.cc != 0) { | 1434 | if (irb->scsw.cc != 0) { |
1433 | ccw_device_halt(channel->ccwdev, (addr_t) channel); | 1435 | ccw_device_halt(channel->ccwdev, (addr_t) channel); |
1434 | return; | 1436 | return; |
1435 | } | 1437 | } |
1436 | /* The channel has been stopped by halt_IO. */ | 1438 | /* The channel has been stopped by halt_IO. */ |
1437 | channel->state = CH_STATE_HALTED; | 1439 | channel->state = LCS_CH_STATE_HALTED; |
1438 | } | 1440 | } |
1439 | if (irb->scsw.fctl & SCSW_FCTL_CLEAR_FUNC) { | 1441 | if (irb->scsw.fctl & SCSW_FCTL_CLEAR_FUNC) { |
1440 | channel->state = CH_STATE_CLEARED; | 1442 | channel->state = LCS_CH_STATE_CLEARED; |
1441 | } | 1443 | } |
1442 | /* Do the rest in the tasklet. */ | 1444 | /* Do the rest in the tasklet. */ |
1443 | tasklet_schedule(&channel->irq_tasklet); | 1445 | tasklet_schedule(&channel->irq_tasklet); |
@@ -1461,7 +1463,7 @@ lcs_tasklet(unsigned long data) | |||
1461 | /* Check for processed buffers. */ | 1463 | /* Check for processed buffers. */ |
1462 | iob = channel->iob; | 1464 | iob = channel->iob; |
1463 | buf_idx = channel->buf_idx; | 1465 | buf_idx = channel->buf_idx; |
1464 | while (iob[buf_idx].state == BUF_STATE_PROCESSED) { | 1466 | while (iob[buf_idx].state == LCS_BUF_STATE_PROCESSED) { |
1465 | /* Do the callback thing. */ | 1467 | /* Do the callback thing. */ |
1466 | if (iob[buf_idx].callback != NULL) | 1468 | if (iob[buf_idx].callback != NULL) |
1467 | iob[buf_idx].callback(channel, iob + buf_idx); | 1469 | iob[buf_idx].callback(channel, iob + buf_idx); |
@@ -1469,12 +1471,12 @@ lcs_tasklet(unsigned long data) | |||
1469 | } | 1471 | } |
1470 | channel->buf_idx = buf_idx; | 1472 | channel->buf_idx = buf_idx; |
1471 | 1473 | ||
1472 | if (channel->state == CH_STATE_STOPPED) | 1474 | if (channel->state == LCS_CH_STATE_STOPPED) |
1473 | // FIXME: what if rc != 0 ?? | 1475 | // FIXME: what if rc != 0 ?? |
1474 | rc = lcs_start_channel(channel); | 1476 | rc = lcs_start_channel(channel); |
1475 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); | 1477 | spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); |
1476 | if (channel->state == CH_STATE_SUSPENDED && | 1478 | if (channel->state == LCS_CH_STATE_SUSPENDED && |
1477 | channel->iob[channel->io_idx].state == BUF_STATE_READY) { | 1479 | channel->iob[channel->io_idx].state == LCS_BUF_STATE_READY) { |
1478 | // FIXME: what if rc != 0 ?? | 1480 | // FIXME: what if rc != 0 ?? |
1479 | rc = __lcs_resume_channel(channel); | 1481 | rc = __lcs_resume_channel(channel); |
1480 | } | 1482 | } |
@@ -1689,8 +1691,8 @@ lcs_detect(struct lcs_card *card) | |||
1689 | card->state = DEV_STATE_UP; | 1691 | card->state = DEV_STATE_UP; |
1690 | } else { | 1692 | } else { |
1691 | card->state = DEV_STATE_DOWN; | 1693 | card->state = DEV_STATE_DOWN; |
1692 | card->write.state = CH_STATE_INIT; | 1694 | card->write.state = LCS_CH_STATE_INIT; |
1693 | card->read.state = CH_STATE_INIT; | 1695 | card->read.state = LCS_CH_STATE_INIT; |
1694 | } | 1696 | } |
1695 | return rc; | 1697 | return rc; |
1696 | } | 1698 | } |
@@ -1705,8 +1707,8 @@ lcs_stopcard(struct lcs_card *card) | |||
1705 | 1707 | ||
1706 | LCS_DBF_TEXT(3, setup, "stopcard"); | 1708 | LCS_DBF_TEXT(3, setup, "stopcard"); |
1707 | 1709 | ||
1708 | if (card->read.state != CH_STATE_STOPPED && | 1710 | if (card->read.state != LCS_CH_STATE_STOPPED && |
1709 | card->write.state != CH_STATE_STOPPED && | 1711 | card->write.state != LCS_CH_STATE_STOPPED && |
1710 | card->state == DEV_STATE_UP) { | 1712 | card->state == DEV_STATE_UP) { |
1711 | lcs_clear_multicast_list(card); | 1713 | lcs_clear_multicast_list(card); |
1712 | rc = lcs_send_stoplan(card,LCS_INITIATOR_TCPIP); | 1714 | rc = lcs_send_stoplan(card,LCS_INITIATOR_TCPIP); |
@@ -1871,7 +1873,7 @@ lcs_stop_device(struct net_device *dev) | |||
1871 | netif_tx_disable(dev); | 1873 | netif_tx_disable(dev); |
1872 | dev->flags &= ~IFF_UP; | 1874 | dev->flags &= ~IFF_UP; |
1873 | wait_event(card->write.wait_q, | 1875 | wait_event(card->write.wait_q, |
1874 | (card->write.state != CH_STATE_RUNNING)); | 1876 | (card->write.state != LCS_CH_STATE_RUNNING)); |
1875 | rc = lcs_stopcard(card); | 1877 | rc = lcs_stopcard(card); |
1876 | if (rc) | 1878 | if (rc) |
1877 | PRINT_ERR("Try it again!\n "); | 1879 | PRINT_ERR("Try it again!\n "); |
diff --git a/drivers/s390/net/lcs.h b/drivers/s390/net/lcs.h index b5247dc08b57..0e1e4a0a88f0 100644 --- a/drivers/s390/net/lcs.h +++ b/drivers/s390/net/lcs.h | |||
@@ -23,11 +23,6 @@ do { \ | |||
23 | } while (0) | 23 | } while (0) |
24 | 24 | ||
25 | /** | 25 | /** |
26 | * some more definitions for debug or output stuff | ||
27 | */ | ||
28 | #define PRINTK_HEADER " lcs: " | ||
29 | |||
30 | /** | ||
31 | * sysfs related stuff | 26 | * sysfs related stuff |
32 | */ | 27 | */ |
33 | #define CARD_FROM_DEV(cdev) \ | 28 | #define CARD_FROM_DEV(cdev) \ |
@@ -127,22 +122,22 @@ do { \ | |||
127 | * LCS Buffer states | 122 | * LCS Buffer states |
128 | */ | 123 | */ |
129 | enum lcs_buffer_states { | 124 | enum lcs_buffer_states { |
130 | BUF_STATE_EMPTY, /* buffer is empty */ | 125 | LCS_BUF_STATE_EMPTY, /* buffer is empty */ |
131 | BUF_STATE_LOCKED, /* buffer is locked, don't touch */ | 126 | LCS_BUF_STATE_LOCKED, /* buffer is locked, don't touch */ |
132 | BUF_STATE_READY, /* buffer is ready for read/write */ | 127 | LCS_BUF_STATE_READY, /* buffer is ready for read/write */ |
133 | BUF_STATE_PROCESSED, | 128 | LCS_BUF_STATE_PROCESSED, |
134 | }; | 129 | }; |
135 | 130 | ||
136 | /** | 131 | /** |
137 | * LCS Channel State Machine declarations | 132 | * LCS Channel State Machine declarations |
138 | */ | 133 | */ |
139 | enum lcs_channel_states { | 134 | enum lcs_channel_states { |
140 | CH_STATE_INIT, | 135 | LCS_CH_STATE_INIT, |
141 | CH_STATE_HALTED, | 136 | LCS_CH_STATE_HALTED, |
142 | CH_STATE_STOPPED, | 137 | LCS_CH_STATE_STOPPED, |
143 | CH_STATE_RUNNING, | 138 | LCS_CH_STATE_RUNNING, |
144 | CH_STATE_SUSPENDED, | 139 | LCS_CH_STATE_SUSPENDED, |
145 | CH_STATE_CLEARED, | 140 | LCS_CH_STATE_CLEARED, |
146 | }; | 141 | }; |
147 | 142 | ||
148 | /** | 143 | /** |
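The LCS_ prefix scopes the buffer and channel state names to this driver; unqualified spellings such as CH_STATE_INIT or BUF_STATE_EMPTY are generic enough to collide with identically named enumerators elsewhere in the tree. A contrived sketch of the problem the rename avoids (plain C, not taken from the kernel):

	/* Two headers that both use the generic spelling cannot be
	 * included together: the second enum redeclares CH_STATE_INIT
	 * and the translation unit fails to compile. */
	enum lcs_channel_states { CH_STATE_INIT, CH_STATE_RUNNING };
	enum ctc_channel_states { CH_STATE_INIT, CH_STATE_STOPPED };	/* error: redeclaration */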
diff --git a/drivers/s390/net/qeth.h b/drivers/s390/net/qeth.h index 821383d8cbe7..53c358c7d368 100644 --- a/drivers/s390/net/qeth.h +++ b/drivers/s390/net/qeth.h | |||
@@ -151,8 +151,6 @@ qeth_hex_dump(unsigned char *buf, size_t len) | |||
151 | #define SENSE_RESETTING_EVENT_BYTE 1 | 151 | #define SENSE_RESETTING_EVENT_BYTE 1 |
152 | #define SENSE_RESETTING_EVENT_FLAG 0x80 | 152 | #define SENSE_RESETTING_EVENT_FLAG 0x80 |
153 | 153 | ||
154 | #define atomic_swap(a,b) xchg((int *)a.counter, b) | ||
155 | |||
156 | /* | 154 | /* |
157 | * Common IO related definitions | 155 | * Common IO related definitions |
158 | */ | 156 | */ |
diff --git a/drivers/s390/net/qeth_main.c b/drivers/s390/net/qeth_main.c index 8364d5475ac7..7fdc5272c446 100644 --- a/drivers/s390/net/qeth_main.c +++ b/drivers/s390/net/qeth_main.c | |||
@@ -2982,7 +2982,7 @@ qeth_check_outbound_queue(struct qeth_qdio_out_q *queue) | |||
2982 | */ | 2982 | */ |
2983 | if ((atomic_read(&queue->used_buffers) <= QETH_LOW_WATERMARK_PACK) || | 2983 | if ((atomic_read(&queue->used_buffers) <= QETH_LOW_WATERMARK_PACK) || |
2984 | !atomic_read(&queue->set_pci_flags_count)){ | 2984 | !atomic_read(&queue->set_pci_flags_count)){ |
2985 | if (atomic_swap(&queue->state, QETH_OUT_Q_LOCKED_FLUSH) == | 2985 | if (atomic_xchg(&queue->state, QETH_OUT_Q_LOCKED_FLUSH) == |
2986 | QETH_OUT_Q_UNLOCKED) { | 2986 | QETH_OUT_Q_UNLOCKED) { |
2987 | /* | 2987 | /* |
2988 | * If we get in here, there was no action in | 2988 | * If we get in here, there was no action in |
@@ -3245,7 +3245,7 @@ qeth_free_qdio_buffers(struct qeth_card *card) | |||
3245 | int i, j; | 3245 | int i, j; |
3246 | 3246 | ||
3247 | QETH_DBF_TEXT(trace, 2, "freeqdbf"); | 3247 | QETH_DBF_TEXT(trace, 2, "freeqdbf"); |
3248 | if (atomic_swap(&card->qdio.state, QETH_QDIO_UNINITIALIZED) == | 3248 | if (atomic_xchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED) == |
3249 | QETH_QDIO_UNINITIALIZED) | 3249 | QETH_QDIO_UNINITIALIZED) |
3250 | return; | 3250 | return; |
3251 | kfree(card->qdio.in_q); | 3251 | kfree(card->qdio.in_q); |
@@ -4366,7 +4366,7 @@ out: | |||
4366 | if (flush_count) | 4366 | if (flush_count) |
4367 | qeth_flush_buffers(queue, 0, start_index, flush_count); | 4367 | qeth_flush_buffers(queue, 0, start_index, flush_count); |
4368 | else if (!atomic_read(&queue->set_pci_flags_count)) | 4368 | else if (!atomic_read(&queue->set_pci_flags_count)) |
4369 | atomic_swap(&queue->state, QETH_OUT_Q_LOCKED_FLUSH); | 4369 | atomic_xchg(&queue->state, QETH_OUT_Q_LOCKED_FLUSH); |
4370 | /* | 4370 | /* |
4371 | * queue->state will go from LOCKED -> UNLOCKED or from | 4371 | * queue->state will go from LOCKED -> UNLOCKED or from |
4372 | * LOCKED_FLUSH -> LOCKED if output_handler wanted to 'notify' us | 4372 | * LOCKED_FLUSH -> LOCKED if output_handler wanted to 'notify' us |
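The driver-private atomic_swap() wrapper around xchg() is gone; the generic atomic_xchg() helper does the same job, atomically storing the new value and returning the previous one, without casting away the atomic_t. A minimal sketch of the locking pattern it supports (the names and states are simplified stand-ins, not the real qeth definitions):

	#include <asm/atomic.h>		/* atomic_t, atomic_xchg() */

	#define Q_UNLOCKED	0
	#define Q_LOCKED_FLUSH	1

	static atomic_t q_state = ATOMIC_INIT(Q_UNLOCKED);

	/* Claim the flush: atomic_xchg() installs the new state and returns
	 * the old one, so exactly one caller sees Q_UNLOCKED and wins. */
	static int try_claim_flush(void)
	{
		return atomic_xchg(&q_state, Q_LOCKED_FLUSH) == Q_UNLOCKED;
	}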
diff --git a/include/asm-s390/cio.h b/include/asm-s390/cio.h index 81287d86329d..d92785030980 100644 --- a/include/asm-s390/cio.h +++ b/include/asm-s390/cio.h | |||
@@ -278,17 +278,16 @@ struct ccw_dev_id { | |||
278 | static inline int ccw_dev_id_is_equal(struct ccw_dev_id *dev_id1, | 278 | static inline int ccw_dev_id_is_equal(struct ccw_dev_id *dev_id1, |
279 | struct ccw_dev_id *dev_id2) | 279 | struct ccw_dev_id *dev_id2) |
280 | { | 280 | { |
281 | return !memcmp(dev_id1, dev_id2, sizeof(struct ccw_dev_id)); | 281 | if ((dev_id1->ssid == dev_id2->ssid) && |
282 | (dev_id1->devno == dev_id2->devno)) | ||
283 | return 1; | ||
284 | return 0; | ||
282 | } | 285 | } |
283 | 286 | ||
284 | extern int diag210(struct diag210 *addr); | 287 | extern int diag210(struct diag210 *addr); |
285 | 288 | ||
286 | extern void wait_cons_dev(void); | 289 | extern void wait_cons_dev(void); |
287 | 290 | ||
288 | extern void clear_all_subchannels(void); | ||
289 | |||
290 | extern void cio_reset_channel_paths(void); | ||
291 | |||
292 | extern void css_schedule_reprobe(void); | 291 | extern void css_schedule_reprobe(void); |
293 | 292 | ||
294 | extern void reipl_ccw_dev(struct ccw_dev_id *id); | 293 | extern void reipl_ccw_dev(struct ccw_dev_id *id); |
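Comparing the two significant fields directly makes ccw_dev_id_is_equal() independent of whatever happens to sit in padding or unused bytes of struct ccw_dev_id, which a whole-struct memcmp() would also compare. A small stand-alone illustration of the general pitfall (ordinary C, not kernel code):

	#include <stdio.h>
	#include <string.h>

	struct id {
		unsigned char  ssid;	/* followed by compiler-inserted padding */
		unsigned short devno;
	};

	int main(void)
	{
		struct id a, b;

		memset(&a, 0xaa, sizeof(a));	/* different garbage in the */
		memset(&b, 0x55, sizeof(b));	/* padding bytes of a and b */
		a.ssid = b.ssid = 0;
		a.devno = b.devno = 0x4711;

		printf("memcmp says equal: %d, fields say equal: %d\n",
		       memcmp(&a, &b, sizeof(a)) == 0,
		       a.ssid == b.ssid && a.devno == b.devno);
		return 0;
	}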
diff --git a/include/asm-s390/cpcmd.h b/include/asm-s390/cpcmd.h index 1fcf65be7a23..48a9eab16429 100644 --- a/include/asm-s390/cpcmd.h +++ b/include/asm-s390/cpcmd.h | |||
@@ -7,8 +7,8 @@ | |||
7 | * Christian Borntraeger (cborntra@de.ibm.com), | 7 | * Christian Borntraeger (cborntra@de.ibm.com), |
8 | */ | 8 | */ |
9 | 9 | ||
10 | #ifndef __CPCMD__ | 10 | #ifndef _ASM_S390_CPCMD_H |
11 | #define __CPCMD__ | 11 | #define _ASM_S390_CPCMD_H |
12 | 12 | ||
13 | /* | 13 | /* |
14 | * the lowlevel function for cpcmd | 14 | * the lowlevel function for cpcmd |
@@ -16,9 +16,6 @@ | |||
16 | */ | 16 | */ |
17 | extern int __cpcmd(const char *cmd, char *response, int rlen, int *response_code); | 17 | extern int __cpcmd(const char *cmd, char *response, int rlen, int *response_code); |
18 | 18 | ||
19 | #ifndef __s390x__ | ||
20 | #define cpcmd __cpcmd | ||
21 | #else | ||
22 | /* | 19 | /* |
23 | * cpcmd is the in-kernel interface for issuing CP commands | 20 | * cpcmd is the in-kernel interface for issuing CP commands |
24 | * | 21 | * |
@@ -33,6 +30,5 @@ extern int __cpcmd(const char *cmd, char *response, int rlen, int *response_code | |||
33 | * NOTE: If the response buffer is not below 2 GB, cpcmd can sleep | 30 | * NOTE: If the response buffer is not below 2 GB, cpcmd can sleep |
34 | */ | 31 | */ |
35 | extern int cpcmd(const char *cmd, char *response, int rlen, int *response_code); | 32 | extern int cpcmd(const char *cmd, char *response, int rlen, int *response_code); |
36 | #endif /*__s390x__*/ | ||
37 | 33 | ||
38 | #endif | 34 | #endif /* _ASM_S390_CPCMD_H */ |
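With the 31-bit alias removed, cpcmd() is declared as a proper function for both 31- and 64-bit builds instead of mapping to __cpcmd(). For reference, a minimal sketch of a call site under z/VM (the command and buffer size are illustrative only):

	#include <asm/cpcmd.h>

	static void query_userid_example(void)
	{
		char response[128];
		int rc;

		/* The reply is copied into response; per the note above,
		 * cpcmd() may sleep if the buffer is not below 2 GB. */
		cpcmd("QUERY USERID", response, sizeof(response), &rc);
	}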
diff --git a/include/asm-s390/kexec.h b/include/asm-s390/kexec.h index ce28ddda0f50..9c35c8ad1afd 100644 --- a/include/asm-s390/kexec.h +++ b/include/asm-s390/kexec.h | |||
@@ -26,7 +26,7 @@ | |||
26 | 26 | ||
27 | /* Maximum address we can use for the control pages */ | 27 | /* Maximum address we can use for the control pages */ |
28 | /* Not more than 2GB */ | 28 | /* Not more than 2GB */ |
29 | #define KEXEC_CONTROL_MEMORY_LIMIT (1<<31) | 29 | #define KEXEC_CONTROL_MEMORY_LIMIT (1UL<<31) |
30 | 30 | ||
31 | /* Allocate one page for the pdp and the second for the code */ | 31 | /* Allocate one page for the pdp and the second for the code */ |
32 | #define KEXEC_CONTROL_CODE_SIZE 4096 | 32 | #define KEXEC_CONTROL_CODE_SIZE 4096 |
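The UL suffix matters: 1<<31 is evaluated in (signed) int, overflows (formally undefined; in practice INT_MIN), and is then sign-extended when the macro meets an unsigned long address on a 64-bit build, so the limit becomes 0xffffffff80000000 rather than 2 GB. A stand-alone illustration:

	#include <stdio.h>

	int main(void)
	{
		unsigned long bad  = 1 << 31;	/* int overflow, then sign-extended */
		unsigned long good = 1UL << 31;	/* 0x80000000 = 2 GB */

		printf("bad:  %#lx\ngood: %#lx\n", bad, good);
		return 0;
	}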
diff --git a/include/asm-s390/lowcore.h b/include/asm-s390/lowcore.h index 06583ed0bde7..74f7389bd3ee 100644 --- a/include/asm-s390/lowcore.h +++ b/include/asm-s390/lowcore.h | |||
@@ -362,6 +362,14 @@ static inline void set_prefix(__u32 address) | |||
362 | asm volatile("spx %0" : : "m" (address) : "memory"); | 362 | asm volatile("spx %0" : : "m" (address) : "memory"); |
363 | } | 363 | } |
364 | 364 | ||
365 | static inline __u32 store_prefix(void) | ||
366 | { | ||
367 | __u32 address; | ||
368 | |||
369 | asm volatile("stpx %0" : "=m" (address)); | ||
370 | return address; | ||
371 | } | ||
372 | |||
365 | #define __PANIC_MAGIC 0xDEADC0DE | 373 | #define __PANIC_MAGIC 0xDEADC0DE |
366 | 374 | ||
367 | #endif | 375 | #endif |
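store_prefix() is the read counterpart to set_prefix(): STPX stores the current prefix register, so callers can save it before switching lowcores. A hedged usage sketch (the surrounding function is made up):

	#include <asm/lowcore.h>

	/* Illustrative only: run with a different prefix page, then restore. */
	static void with_prefix(__u32 new_prefix)
	{
		__u32 old = store_prefix();	/* STPX: read the current prefix */

		set_prefix(new_prefix);		/* SPX: install the new one */
		/* ... code that relies on the new lowcore ... */
		set_prefix(old);		/* put the original back */
	}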
diff --git a/include/asm-s390/pgtable.h b/include/asm-s390/pgtable.h index 36bb6dacf008..2d968a69ed1f 100644 --- a/include/asm-s390/pgtable.h +++ b/include/asm-s390/pgtable.h | |||
@@ -110,13 +110,22 @@ extern char empty_zero_page[PAGE_SIZE]; | |||
110 | #define VMALLOC_OFFSET (8*1024*1024) | 110 | #define VMALLOC_OFFSET (8*1024*1024) |
111 | #define VMALLOC_START (((unsigned long) high_memory + VMALLOC_OFFSET) \ | 111 | #define VMALLOC_START (((unsigned long) high_memory + VMALLOC_OFFSET) \ |
112 | & ~(VMALLOC_OFFSET-1)) | 112 | & ~(VMALLOC_OFFSET-1)) |
113 | |||
114 | /* | ||
115 | * We need some free virtual space to be able to do vmalloc. | ||
116 | * VMALLOC_MIN_SIZE defines the minimum size of the vmalloc | ||
117 | * area. On a machine with 2GB memory we make sure that we | ||
118 | * have at least 128MB free space for vmalloc. On a machine | ||
119 | * with 4TB we make sure we have at least 1GB. | ||
120 | */ | ||
113 | #ifndef __s390x__ | 121 | #ifndef __s390x__ |
114 | # define VMALLOC_END (0x7fffffffL) | 122 | #define VMALLOC_MIN_SIZE 0x8000000UL |
123 | #define VMALLOC_END 0x80000000UL | ||
115 | #else /* __s390x__ */ | 124 | #else /* __s390x__ */ |
116 | # define VMALLOC_END (0x40000000000L) | 125 | #define VMALLOC_MIN_SIZE 0x40000000UL |
126 | #define VMALLOC_END 0x40000000000UL | ||
117 | #endif /* __s390x__ */ | 127 | #endif /* __s390x__ */ |
118 | 128 | ||
119 | |||
120 | /* | 129 | /* |
121 | * A 31 bit pagetable entry of S390 has following format: | 130 | * A 31 bit pagetable entry of S390 has following format: |
122 | * | PFRA | | OS | | 131 | * | PFRA | | OS | |
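VMALLOC_MIN_SIZE gives memory setup a lower bound on the virtual space that must stay free below VMALLOC_END. A sketch of how such a cap could be applied (an assumption for illustration; the actual memory-detection code is not part of this hunk):

	/* Keep at least VMALLOC_MIN_SIZE of virtual space for vmalloc. */
	static unsigned long cap_memory_end(unsigned long detected_end)
	{
		unsigned long max_mem = VMALLOC_END - VMALLOC_MIN_SIZE;

		return detected_end < max_mem ? detected_end : max_mem;
	}

With the values above this caps usable memory just below 2 GB (leaving 128 MB) on 31-bit and at 4 TB minus 1 GB on 64-bit, matching the comment.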
diff --git a/include/asm-s390/reset.h b/include/asm-s390/reset.h new file mode 100644 index 000000000000..9b439cf67800 --- /dev/null +++ b/include/asm-s390/reset.h | |||
@@ -0,0 +1,23 @@ | |||
1 | /* | ||
2 | * include/asm-s390/reset.h | ||
3 | * | ||
4 | * Copyright IBM Corp. 2006 | ||
5 | * Author(s): Heiko Carstens <heiko.carstens@de.ibm.com> | ||
6 | */ | ||
7 | |||
8 | #ifndef _ASM_S390_RESET_H | ||
9 | #define _ASM_S390_RESET_H | ||
10 | |||
11 | #include <linux/list.h> | ||
12 | |||
13 | struct reset_call { | ||
14 | struct list_head list; | ||
15 | void (*fn)(void); | ||
16 | }; | ||
17 | |||
18 | extern void register_reset_call(struct reset_call *reset); | ||
19 | extern void unregister_reset_call(struct reset_call *reset); | ||
20 | extern void s390_reset_system(void); | ||
21 | extern void (*s390_reset_mcck_handler)(void); | ||
22 | |||
23 | #endif /* _ASM_S390_RESET_H */ | ||
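The new reset infrastructure lets code register callbacks that are run when the machine is reset for re-IPL. A hedged sketch of a user (driver name and callback body are hypothetical):

	#include <linux/init.h>
	#include <asm/reset.h>

	static void mydrv_reset(void)
	{
		/* Quiesce the hardware so the re-IPL starts from a clean state. */
	}

	static struct reset_call mydrv_reset_call = {
		.fn = mydrv_reset,
	};

	static int __init mydrv_init(void)
	{
		register_reset_call(&mydrv_reset_call);
		return 0;
	}

	static void __exit mydrv_exit(void)
	{
		unregister_reset_call(&mydrv_reset_call);
	}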
diff --git a/include/asm-s390/setup.h b/include/asm-s390/setup.h index 5d72eda8a11b..7664bacdd832 100644 --- a/include/asm-s390/setup.h +++ b/include/asm-s390/setup.h | |||
@@ -2,7 +2,7 @@ | |||
2 | * include/asm-s390/setup.h | 2 | * include/asm-s390/setup.h |
3 | * | 3 | * |
4 | * S390 version | 4 | * S390 version |
5 | * Copyright (C) 1999 IBM Deutschland Entwicklung GmbH, IBM Corporation | 5 | * Copyright IBM Corp. 1999,2006 |
6 | */ | 6 | */ |
7 | 7 | ||
8 | #ifndef _ASM_S390_SETUP_H | 8 | #ifndef _ASM_S390_SETUP_H |
@@ -30,6 +30,17 @@ | |||
30 | #endif /* __s390x__ */ | 30 | #endif /* __s390x__ */ |
31 | #define COMMAND_LINE ((char *) (0x10480)) | 31 | #define COMMAND_LINE ((char *) (0x10480)) |
32 | 32 | ||
33 | #define CHUNK_READ_WRITE 0 | ||
34 | #define CHUNK_READ_ONLY 1 | ||
35 | |||
36 | struct mem_chunk { | ||
37 | unsigned long addr; | ||
38 | unsigned long size; | ||
39 | unsigned long type; | ||
40 | }; | ||
41 | |||
42 | extern struct mem_chunk memory_chunk[]; | ||
43 | |||
33 | /* | 44 | /* |
34 | * Machine features detected in head.S | 45 | * Machine features detected in head.S |
35 | */ | 46 | */ |
@@ -53,7 +64,6 @@ extern unsigned long machine_flags; | |||
53 | #define MACHINE_HAS_MVCOS (machine_flags & 512) | 64 | #define MACHINE_HAS_MVCOS (machine_flags & 512) |
54 | #endif /* __s390x__ */ | 65 | #endif /* __s390x__ */ |
55 | 66 | ||
56 | |||
57 | #define MACHINE_HAS_SCLP (!MACHINE_IS_P390) | 67 | #define MACHINE_HAS_SCLP (!MACHINE_IS_P390) |
58 | 68 | ||
59 | /* | 69 | /* |
@@ -71,7 +81,6 @@ extern unsigned int console_irq; | |||
71 | #define SET_CONSOLE_3215 do { console_mode = 2; } while (0) | 81 | #define SET_CONSOLE_3215 do { console_mode = 2; } while (0) |
72 | #define SET_CONSOLE_3270 do { console_mode = 3; } while (0) | 82 | #define SET_CONSOLE_3270 do { console_mode = 3; } while (0) |
73 | 83 | ||
74 | |||
75 | struct ipl_list_hdr { | 84 | struct ipl_list_hdr { |
76 | u32 len; | 85 | u32 len; |
77 | u8 reserved1[3]; | 86 | u8 reserved1[3]; |
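struct mem_chunk and the memory_chunk[] array describing the detected memory layout are now visible to other code via setup.h. A minimal consumer sketch (the MEMORY_CHUNKS bound and the size==0 end marker are assumptions about the real array, not taken from this hunk):

	#include <asm/setup.h>

	/* Sketch: add up all read/write memory found at boot. */
	static unsigned long total_rw_memory(void)
	{
		unsigned long total = 0;
		int i;

		for (i = 0; i < MEMORY_CHUNKS && memory_chunk[i].size; i++)
			if (memory_chunk[i].type == CHUNK_READ_WRITE)
				total += memory_chunk[i].size;
		return total;
	}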
diff --git a/include/asm-s390/smp.h b/include/asm-s390/smp.h index c3cf030ada4d..7097c96ed026 100644 --- a/include/asm-s390/smp.h +++ b/include/asm-s390/smp.h | |||
@@ -18,6 +18,7 @@ | |||
18 | 18 | ||
19 | #include <asm/lowcore.h> | 19 | #include <asm/lowcore.h> |
20 | #include <asm/sigp.h> | 20 | #include <asm/sigp.h> |
21 | #include <asm/ptrace.h> | ||
21 | 22 | ||
22 | /* | 23 | /* |
23 | s390 specific smp.c headers | 24 | s390 specific smp.c headers |
@@ -101,6 +102,13 @@ smp_call_function_on(void (*func) (void *info), void *info, | |||
101 | func(info); | 102 | func(info); |
102 | return 0; | 103 | return 0; |
103 | } | 104 | } |
105 | |||
106 | static inline void smp_send_stop(void) | ||
107 | { | ||
108 | /* Disable all interrupts/machine checks */ | ||
109 | __load_psw_mask(PSW_KERNEL_BITS & ~PSW_MASK_MCHECK); | ||
110 | } | ||
111 | |||
104 | #define smp_cpu_not_running(cpu) 1 | 112 | #define smp_cpu_not_running(cpu) 1 |
105 | #define smp_get_cpu(cpu) ({ 0; }) | 113 | #define smp_get_cpu(cpu) ({ 0; }) |
106 | #define smp_put_cpu(cpu) ({ 0; }) | 114 | #define smp_put_cpu(cpu) ({ 0; }) |
diff --git a/include/asm-s390/system.h b/include/asm-s390/system.h index ccbafe4bf2cb..bd0b05ae87d2 100644 --- a/include/asm-s390/system.h +++ b/include/asm-s390/system.h | |||
@@ -115,6 +115,16 @@ extern void account_system_vtime(struct task_struct *); | |||
115 | #define account_vtime(x) do { /* empty */ } while (0) | 115 | #define account_vtime(x) do { /* empty */ } while (0) |
116 | #endif | 116 | #endif |
117 | 117 | ||
118 | #ifdef CONFIG_PFAULT | ||
119 | extern void pfault_irq_init(void); | ||
120 | extern int pfault_init(void); | ||
121 | extern void pfault_fini(void); | ||
122 | #else /* CONFIG_PFAULT */ | ||
123 | #define pfault_irq_init() do { } while (0) | ||
124 | #define pfault_init() ({-1;}) | ||
125 | #define pfault_fini() do { } while (0) | ||
126 | #endif /* CONFIG_PFAULT */ | ||
127 | |||
118 | #define finish_arch_switch(prev) do { \ | 128 | #define finish_arch_switch(prev) do { \ |
119 | set_fs(current->thread.mm_segment); \ | 129 | set_fs(current->thread.mm_segment); \ |
120 | account_vtime(prev); \ | 130 | account_vtime(prev); \ |
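The stubs let callers use the pfault interface without sprinkling #ifdef CONFIG_PFAULT around: with the option disabled, pfault_init() evaluates to the constant -1 and the other two calls compile to nothing. The calling pattern, sketched (function name is illustrative):

	#include <asm/system.h>

	static void cpu_setup_example(void)
	{
		/* With CONFIG_PFAULT=n this branch is always taken and no
		 * pseudo-page-fault handshake is attempted. */
		if (pfault_init())
			return;
		/* pfault available: nothing further to do here */
	}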
diff --git a/include/asm-s390/termios.h b/include/asm-s390/termios.h index d1e29cca54c9..62b23caf370e 100644 --- a/include/asm-s390/termios.h +++ b/include/asm-s390/termios.h | |||
@@ -75,39 +75,7 @@ struct termio { | |||
75 | */ | 75 | */ |
76 | #define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0" | 76 | #define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0" |
77 | 77 | ||
78 | /* | 78 | #include <asm-generic/termios.h> |
79 | * Translate a "termio" structure into a "termios". Ugh. | ||
80 | */ | ||
81 | #define SET_LOW_TERMIOS_BITS(termios, termio, x) { \ | ||
82 | unsigned short __tmp; \ | ||
83 | get_user(__tmp,&(termio)->x); \ | ||
84 | (termios)->x = (0xffff0000 & ((termios)->x)) | __tmp; \ | ||
85 | } | ||
86 | |||
87 | #define user_termio_to_kernel_termios(termios, termio) \ | ||
88 | ({ \ | ||
89 | SET_LOW_TERMIOS_BITS(termios, termio, c_iflag); \ | ||
90 | SET_LOW_TERMIOS_BITS(termios, termio, c_oflag); \ | ||
91 | SET_LOW_TERMIOS_BITS(termios, termio, c_cflag); \ | ||
92 | SET_LOW_TERMIOS_BITS(termios, termio, c_lflag); \ | ||
93 | copy_from_user((termios)->c_cc, (termio)->c_cc, NCC); \ | ||
94 | }) | ||
95 | |||
96 | /* | ||
97 | * Translate a "termios" structure into a "termio". Ugh. | ||
98 | */ | ||
99 | #define kernel_termios_to_user_termio(termio, termios) \ | ||
100 | ({ \ | ||
101 | put_user((termios)->c_iflag, &(termio)->c_iflag); \ | ||
102 | put_user((termios)->c_oflag, &(termio)->c_oflag); \ | ||
103 | put_user((termios)->c_cflag, &(termio)->c_cflag); \ | ||
104 | put_user((termios)->c_lflag, &(termio)->c_lflag); \ | ||
105 | put_user((termios)->c_line, &(termio)->c_line); \ | ||
106 | copy_to_user((termio)->c_cc, (termios)->c_cc, NCC); \ | ||
107 | }) | ||
108 | |||
109 | #define user_termios_to_kernel_termios(k, u) copy_from_user(k, u, sizeof(struct termios)) | ||
110 | #define kernel_termios_to_user_termios(u, k) copy_to_user(u, k, sizeof(struct termios)) | ||
111 | 79 | ||
112 | #endif /* __KERNEL__ */ | 80 | #endif /* __KERNEL__ */ |
113 | 81 | ||
diff --git a/include/asm-s390/uaccess.h b/include/asm-s390/uaccess.h index 72ae4efddb49..73ac4e82217b 100644 --- a/include/asm-s390/uaccess.h +++ b/include/asm-s390/uaccess.h | |||
@@ -201,7 +201,7 @@ extern int __get_user_bad(void) __attribute__((noreturn)); | |||
201 | * Returns number of bytes that could not be copied. | 201 | * Returns number of bytes that could not be copied. |
202 | * On success, this will be zero. | 202 | * On success, this will be zero. |
203 | */ | 203 | */ |
204 | static inline unsigned long | 204 | static inline unsigned long __must_check |
205 | __copy_to_user(void __user *to, const void *from, unsigned long n) | 205 | __copy_to_user(void __user *to, const void *from, unsigned long n) |
206 | { | 206 | { |
207 | if (__builtin_constant_p(n) && (n <= 256)) | 207 | if (__builtin_constant_p(n) && (n <= 256)) |
@@ -226,7 +226,7 @@ __copy_to_user(void __user *to, const void *from, unsigned long n) | |||
226 | * Returns number of bytes that could not be copied. | 226 | * Returns number of bytes that could not be copied. |
227 | * On success, this will be zero. | 227 | * On success, this will be zero. |
228 | */ | 228 | */ |
229 | static inline unsigned long | 229 | static inline unsigned long __must_check |
230 | copy_to_user(void __user *to, const void *from, unsigned long n) | 230 | copy_to_user(void __user *to, const void *from, unsigned long n) |
231 | { | 231 | { |
232 | might_sleep(); | 232 | might_sleep(); |
@@ -252,7 +252,7 @@ copy_to_user(void __user *to, const void *from, unsigned long n) | |||
252 | * If some data could not be copied, this function will pad the copied | 252 | * If some data could not be copied, this function will pad the copied |
253 | * data to the requested size using zero bytes. | 253 | * data to the requested size using zero bytes. |
254 | */ | 254 | */ |
255 | static inline unsigned long | 255 | static inline unsigned long __must_check |
256 | __copy_from_user(void *to, const void __user *from, unsigned long n) | 256 | __copy_from_user(void *to, const void __user *from, unsigned long n) |
257 | { | 257 | { |
258 | if (__builtin_constant_p(n) && (n <= 256)) | 258 | if (__builtin_constant_p(n) && (n <= 256)) |
@@ -277,7 +277,7 @@ __copy_from_user(void *to, const void __user *from, unsigned long n) | |||
277 | * If some data could not be copied, this function will pad the copied | 277 | * If some data could not be copied, this function will pad the copied |
278 | * data to the requested size using zero bytes. | 278 | * data to the requested size using zero bytes. |
279 | */ | 279 | */ |
280 | static inline unsigned long | 280 | static inline unsigned long __must_check |
281 | copy_from_user(void *to, const void __user *from, unsigned long n) | 281 | copy_from_user(void *to, const void __user *from, unsigned long n) |
282 | { | 282 | { |
283 | might_sleep(); | 283 | might_sleep(); |
@@ -288,13 +288,13 @@ copy_from_user(void *to, const void __user *from, unsigned long n) | |||
288 | return n; | 288 | return n; |
289 | } | 289 | } |
290 | 290 | ||
291 | static inline unsigned long | 291 | static inline unsigned long __must_check |
292 | __copy_in_user(void __user *to, const void __user *from, unsigned long n) | 292 | __copy_in_user(void __user *to, const void __user *from, unsigned long n) |
293 | { | 293 | { |
294 | return uaccess.copy_in_user(n, to, from); | 294 | return uaccess.copy_in_user(n, to, from); |
295 | } | 295 | } |
296 | 296 | ||
297 | static inline unsigned long | 297 | static inline unsigned long __must_check |
298 | copy_in_user(void __user *to, const void __user *from, unsigned long n) | 298 | copy_in_user(void __user *to, const void __user *from, unsigned long n) |
299 | { | 299 | { |
300 | might_sleep(); | 300 | might_sleep(); |
@@ -306,7 +306,7 @@ copy_in_user(void __user *to, const void __user *from, unsigned long n) | |||
306 | /* | 306 | /* |
307 | * Copy a null terminated string from userspace. | 307 | * Copy a null terminated string from userspace. |
308 | */ | 308 | */ |
309 | static inline long | 309 | static inline long __must_check |
310 | strncpy_from_user(char *dst, const char __user *src, long count) | 310 | strncpy_from_user(char *dst, const char __user *src, long count) |
311 | { | 311 | { |
312 | long res = -EFAULT; | 312 | long res = -EFAULT; |
@@ -343,13 +343,13 @@ strnlen_user(const char __user * src, unsigned long n) | |||
343 | * Zero Userspace | 343 | * Zero Userspace |
344 | */ | 344 | */ |
345 | 345 | ||
346 | static inline unsigned long | 346 | static inline unsigned long __must_check |
347 | __clear_user(void __user *to, unsigned long n) | 347 | __clear_user(void __user *to, unsigned long n) |
348 | { | 348 | { |
349 | return uaccess.clear_user(n, to); | 349 | return uaccess.clear_user(n, to); |
350 | } | 350 | } |
351 | 351 | ||
352 | static inline unsigned long | 352 | static inline unsigned long __must_check |
353 | clear_user(void __user *to, unsigned long n) | 353 | clear_user(void __user *to, unsigned long n) |
354 | { | 354 | { |
355 | might_sleep(); | 355 | might_sleep(); |
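__must_check makes gcc warn whenever the returned count of uncopied bytes is silently dropped. The intended calling pattern, for reference (illustrative wrapper, not from this patch):

	#include <linux/errno.h>
	#include <asm/uaccess.h>

	static int fetch_from_user(void *dst, const void __user *src, unsigned long len)
	{
		/* A non-zero return means part of the copy faulted. */
		if (copy_from_user(dst, src, len))
			return -EFAULT;
		return 0;
	}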
diff --git a/include/asm-s390/zcrypt.h b/include/asm-s390/zcrypt.h index 7244c68464f2..b90e55888a55 100644 --- a/include/asm-s390/zcrypt.h +++ b/include/asm-s390/zcrypt.h | |||
@@ -180,40 +180,8 @@ struct ica_xcRB { | |||
180 | * for the implementation details for the contents of the | 180 | * for the implementation details for the contents of the |
181 | * block | 181 | * block |
182 | * | 182 | * |
183 | * Z90STAT_TOTALCOUNT | 183 | * ZSECSENDCPRB |
184 | * Return an integer count of all device types together. | 184 | * Send an arbitrary CPRB to a crypto card. |
185 | * | ||
186 | * Z90STAT_PCICACOUNT | ||
187 | * Return an integer count of all PCICAs. | ||
188 | * | ||
189 | * Z90STAT_PCICCCOUNT | ||
190 | * Return an integer count of all PCICCs. | ||
191 | * | ||
192 | * Z90STAT_PCIXCCMCL2COUNT | ||
193 | * Return an integer count of all MCL2 PCIXCCs. | ||
194 | * | ||
195 | * Z90STAT_PCIXCCMCL3COUNT | ||
196 | * Return an integer count of all MCL3 PCIXCCs. | ||
197 | * | ||
198 | * Z90STAT_CEX2CCOUNT | ||
199 | * Return an integer count of all CEX2Cs. | ||
200 | * | ||
201 | * Z90STAT_CEX2ACOUNT | ||
202 | * Return an integer count of all CEX2As. | ||
203 | * | ||
204 | * Z90STAT_REQUESTQ_COUNT | ||
205 | * Return an integer count of the number of entries waiting to be | ||
206 | * sent to a device. | ||
207 | * | ||
208 | * Z90STAT_PENDINGQ_COUNT | ||
209 | * Return an integer count of the number of entries sent to a | ||
210 | * device awaiting the reply. | ||
211 | * | ||
212 | * Z90STAT_TOTALOPEN_COUNT | ||
213 | * Return an integer count of the number of open file handles. | ||
214 | * | ||
215 | * Z90STAT_DOMAIN_INDEX | ||
216 | * Return the integer value of the Cryptographic Domain. | ||
217 | * | 185 | * |
218 | * Z90STAT_STATUS_MASK | 186 | * Z90STAT_STATUS_MASK |
219 | * Return a 64 element array of unsigned chars for the status of | 187 |
@@ -235,28 +203,51 @@ struct ica_xcRB { | |||
235 | * of successfully completed requests per device since the device | 203 | * of successfully completed requests per device since the device |
236 | * was detected and made available. | 204 | * was detected and made available. |
237 | * | 205 | * |
238 | * ICAZ90STATUS (deprecated) | 206 | * Z90STAT_REQUESTQ_COUNT |
207 | * Return an integer count of the number of entries waiting to be | ||
208 | * sent to a device. | ||
209 | * | ||
210 | * Z90STAT_PENDINGQ_COUNT | ||
211 | * Return an integer count of the number of entries sent to all | ||
212 | * devices awaiting the reply. | ||
213 | * | ||
214 | * Z90STAT_TOTALOPEN_COUNT | ||
215 | * Return an integer count of the number of open file handles. | ||
216 | * | ||
217 | * Z90STAT_DOMAIN_INDEX | ||
218 | * Return the integer value of the Cryptographic Domain. | ||
219 | * | ||
220 | * The following ioctls are deprecated and should no longer be used: | ||
221 | * | ||
222 | * Z90STAT_TOTALCOUNT | ||
223 | * Return an integer count of all device types together. | ||
224 | * | ||
225 | * Z90STAT_PCICACOUNT | ||
226 | * Return an integer count of all PCICAs. | ||
227 | * | ||
228 | * Z90STAT_PCICCCOUNT | ||
229 | * Return an integer count of all PCICCs. | ||
230 | * | ||
231 | * Z90STAT_PCIXCCMCL2COUNT | ||
232 | * Return an integer count of all MCL2 PCIXCCs. | ||
233 | * | ||
234 | * Z90STAT_PCIXCCMCL3COUNT | ||
235 | * Return an integer count of all MCL3 PCIXCCs. | ||
236 | * | ||
237 | * Z90STAT_CEX2CCOUNT | ||
238 | * Return an integer count of all CEX2Cs. | ||
239 | * | ||
240 | * Z90STAT_CEX2ACOUNT | ||
241 | * Return an integer count of all CEX2As. | ||
242 | * | ||
243 | * ICAZ90STATUS | ||
239 | * Return some device driver status in an ica_z90_status struct | 244 |
240 | * This takes an ica_z90_status struct as its arg. | 245 | * This takes an ica_z90_status struct as its arg. |
241 | * | 246 | * |
242 | * NOTE: this ioctl() is deprecated, and has been replaced with | 247 | * Z90STAT_PCIXCCCOUNT |
243 | * single ioctl()s for each type of status being requested | ||
244 | * | ||
245 | * Z90STAT_PCIXCCCOUNT (deprecated) | ||
246 | * Return an integer count of all PCIXCCs (MCL2 + MCL3). | 248 | * Return an integer count of all PCIXCCs (MCL2 + MCL3). |
247 | * This is DEPRECATED now that MCL3 PCIXCCs are treated differently from | 249 | * This is DEPRECATED now that MCL3 PCIXCCs are treated differently from |
248 | * MCL2 PCIXCCs. | 250 | * MCL2 PCIXCCs. |
249 | * | ||
250 | * Z90QUIESCE (not recommended) | ||
251 | * Quiesce the driver. This is intended to stop all new | ||
252 | * requests from being processed. Its use is NOT recommended, | ||
253 | * except in circumstances where there is no other way to stop | ||
254 | * callers from accessing the driver. Its original use was to | ||
255 | * allow the driver to be "drained" of work in preparation for | ||
256 | * a system shutdown. | ||
257 | * | ||
258 | * NOTE: once issued, this ban on new work cannot be undone | ||
259 | * except by unloading and reloading the driver. | ||
260 | */ | 251 | */ |
261 | 252 | ||
262 | /** | 253 | /** |