authorLinus Torvalds <torvalds@linux-foundation.org>2014-12-12 17:49:56 -0500
committerLinus Torvalds <torvalds@linux-foundation.org>2014-12-12 17:49:56 -0500
commiteea0cf3fcd3243666e0bb792c560ec128b1f06a1 (patch)
tree2e3cebe332ff4a6898286c6e93f798655368286b
parent823e334ecd247dd49ca2c5c90414435d77135340 (diff)
parent95c97b5941542a4dedb22649adea98e25a88923e (diff)
Merge tag 'for-linus' of git://git.code.sf.net/p/openipmi/linux-ipmi
Pull IPMI driver updates from Corey Minyard:
 - Quite a few bug fixes
 - A new driver for the powernv
 - A new driver for the SMBus interface from the IPMI 2.0 specification

* tag 'for-linus' of git://git.code.sf.net/p/openipmi/linux-ipmi:
  ipmi: Check the BT interrupt enable periodically
  ipmi: Fix attention handling for system interfaces
  ipmi: Periodically check to see if irqs and messages are set right
  drivers/char/ipmi: Add powernv IPMI driver
  ipmi: Add SMBus interface driver (SSIF)
  ipmi: Remove the now unused priority from SMI sender
  ipmi: Remove the now unnecessary message queue
  ipmi: Make the message handler easier to use for SMI interfaces
  ipmi: Move message sending into its own function
  ipmi: rename waiting_msgs to waiting_rcv_msgs
  ipmi: Fix handling of BMC flags
  ipmi: Initialize BMC device attributes
  ipmi: Unregister previously registered driver in error case
  ipmi: Use the proper type for acpi_handle
  ipmi: Fix a bug in hot add/remove
  ipmi: Remove useless sysfs_name parameters
  ipmi: clean up the device handling for the bmc device
  ipmi: Move the address source to string to ipmi-generic code
  ipmi: Ignore SSIF in the PNP handling
 Documentation/IPMI.txt              |   74 +-
 drivers/char/ipmi/Kconfig           |   14 +
 drivers/char/ipmi/Makefile          |    2 +
 drivers/char/ipmi/ipmi_msghandler.c |  560 ++++++----
 drivers/char/ipmi/ipmi_powernv.c    |  310 ++++++
 drivers/char/ipmi/ipmi_si_intf.c    |  497 ++++-----
 drivers/char/ipmi/ipmi_ssif.c       | 1870 +++++++++++++++++++++++++++++++++++
 include/linux/ipmi.h                |    6 +-
 include/linux/ipmi_smi.h            |   10 +-
 9 files changed, 2828 insertions(+), 515 deletions(-)
diff --git a/Documentation/IPMI.txt b/Documentation/IPMI.txt
index f13c9132e9f2..653d5d739d7f 100644
--- a/Documentation/IPMI.txt
+++ b/Documentation/IPMI.txt
@@ -42,7 +42,13 @@ The driver interface depends on your hardware.  If your system
 properly provides the SMBIOS info for IPMI, the driver will detect it
 and just work.  If you have a board with a standard interface (These
 will generally be either "KCS", "SMIC", or "BT", consult your hardware
-manual), choose the 'IPMI SI handler' option.
+manual), choose the 'IPMI SI handler' option.  A driver also exists
+for direct I2C access to the IPMI management controller.  Some boards
+support this, but it is unknown if it will work on every board.  For
+this, choose 'IPMI SMBus handler', but be ready to try to do some
+figuring to see if it will work on your system if the SMBIOS/ACPI
+information is wrong or not present.  It is fairly safe to have both
+these enabled and let the drivers auto-detect what is present.
 
 You should generally enable ACPI on your system, as systems with IPMI
 can have ACPI tables describing them.
@@ -52,7 +58,8 @@ their job correctly, the IPMI controller should be automatically
 detected (via ACPI or SMBIOS tables) and should just work.  Sadly,
 many boards do not have this information.  The driver attempts
 standard defaults, but they may not work.  If you fall into this
-situation, you need to read the section below named 'The SI Driver'.
+situation, you need to read the section below named 'The SI Driver' or
+"The SMBus Driver" on how to hand-configure your system.
 
 IPMI defines a standard watchdog timer.  You can enable this with the
 'IPMI Watchdog Timer' config option.  If you compile the driver into
@@ -97,7 +104,12 @@ driver, each open file for this device ties in to the message handler
 as an IPMI user.
 
 ipmi_si - A driver for various system interfaces.  This supports KCS,
-SMIC, and BT interfaces.
+SMIC, and BT interfaces.  Unless you have an SMBus interface or your
+own custom interface, you probably need to use this.
+
+ipmi_ssif - A driver for accessing BMCs on the SMBus.  It uses the
+I2C kernel driver's SMBus interfaces to send and receive IPMI messages
+over the SMBus.
 
 ipmi_watchdog - IPMI requires systems to have a very capable watchdog
 timer.  This driver implements the standard Linux watchdog timer
@@ -476,6 +488,62 @@ for specifying an interface.  Note that when removing an interface,
 only the first three parameters (si type, address type, and address)
 are used for the comparison.  Any options are ignored for removing.
 
+The SMBus Driver (SSIF)
+-----------------------
+
+The SMBus driver allows up to 4 SMBus devices to be configured in the
+system.  By default, the driver will only register with something it
+finds in DMI or ACPI tables.  You can change this at module load time
+(for a module) with:
+
+  modprobe ipmi_ssif addr=<i2caddr1>[,<i2caddr2>[,...]]
+	adapter=<adapter1>[,<adapter2>[...]]
+	dbg=<flags1>,<flags2>...
+	slave_addrs=<addr1>,<addr2>,...
+	[dbg_probe=1]
+
+The addresses are normal I2C addresses.  The adapter is the string
+name of the adapter, as shown in /sys/class/i2c-adapter/i2c-<n>/name.
+It is *NOT* i2c-<n> itself.
+
+The debug flags are bit flags for each BMC found, they are:
+IPMI messages: 1, driver state: 2, timing: 4, I2C probe: 8
+
+Setting dbg_probe to 1 will enable debugging of the probing and
+detection process for BMCs on the SMBusses.
+
+The slave_addrs parameter specifies the IPMI address of the local BMC.
+This is usually 0x20 and the driver defaults to that, but in case it's
+not, it can be specified when the driver starts up.
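A concrete instance of the options above; the I2C address here is a made-up example, not a default:

```shell
# Hypothetical example: one BMC at I2C address 0x42, responding at
# IPMI slave address 0x20, with IPMI message debugging (dbg bit 1) on.
modprobe ipmi_ssif addr=0x42 slave_addrs=0x20 dbg=1
```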
+
+Discovering the IPMI compliant BMC on the SMBus can cause devices on
+the I2C bus to fail.  The SMBus driver writes a "Get Device ID" IPMI
+message as a block write to the I2C bus and waits for a response.
+This action can be detrimental to some I2C devices.  It is highly
+recommended that the known I2C address be given to the SMBus driver in
+the addr parameter unless you have DMI or ACPI data to tell the
+driver what to use.
+
+When compiled into the kernel, the addresses can be specified on the
+kernel command line as:
+
+  ipmi_ssif.addr=<i2caddr1>[,<i2caddr2>[...]]
+	ipmi_ssif.adapter=<adapter1>[,<adapter2>[...]]
+	ipmi_ssif.dbg=<flags1>[,<flags2>[...]]
+	ipmi_ssif.dbg_probe=1
+	ipmi_ssif.slave_addrs=<addr1>[,<addr2>[...]]
+
+These are the same options as on the module command line.
+
+The I2C driver does not support non-blocking access or polling, so
+this driver cannot send IPMI panic events, extend the watchdog at
+panic time, or perform other panic-related IPMI functions without
+special kernel patches and driver modifications.  You can get those
+from the openipmi web page.
+
+The driver supports hot add and removal of interfaces through the I2C
+sysfs interface.
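The I2C sysfs hot add/remove referred to above is the kernel's generic new_device/delete_device mechanism; a sketch with made-up bus and address numbers (requires root and real hardware):

```shell
# Hypothetical: instantiate an SSIF BMC at address 0x10 on bus i2c-1,
# then tear it down again through the same I2C sysfs interface.
echo ipmi_ssif 0x10 > /sys/bus/i2c/devices/i2c-1/new_device
echo 0x10 > /sys/bus/i2c/devices/i2c-1/delete_device
```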
 
 Other Pieces
 ------------
diff --git a/drivers/char/ipmi/Kconfig b/drivers/char/ipmi/Kconfig
index db1c9b7adaa6..6ed9e9fe5233 100644
--- a/drivers/char/ipmi/Kconfig
+++ b/drivers/char/ipmi/Kconfig
@@ -62,6 +62,20 @@ config IPMI_SI_PROBE_DEFAULTS
 	  only be available on older systems if the "ipmi_si_intf.trydefaults=1"
 	  boot argument is passed.
 
+config IPMI_SSIF
+	tristate 'IPMI SMBus handler (SSIF)'
+	select I2C
+	help
+	  Provides a driver for an SMBus interface to a BMC, meaning that you
+	  have a driver that must be accessed over an I2C bus instead of a
+	  standard interface.  This module requires I2C support.
+
+config IPMI_POWERNV
+	depends on PPC_POWERNV
+	tristate 'POWERNV (OPAL firmware) IPMI interface'
+	help
+	  Provides a driver for OPAL firmware-based IPMI interfaces.
+
 config IPMI_WATCHDOG
 	tristate 'IPMI Watchdog Timer'
 	help
diff --git a/drivers/char/ipmi/Makefile b/drivers/char/ipmi/Makefile
index 16a93648d54e..f3ffde1f5f1f 100644
--- a/drivers/char/ipmi/Makefile
+++ b/drivers/char/ipmi/Makefile
@@ -7,5 +7,7 @@ ipmi_si-y := ipmi_si_intf.o ipmi_kcs_sm.o ipmi_smic_sm.o ipmi_bt_sm.o
 obj-$(CONFIG_IPMI_HANDLER) += ipmi_msghandler.o
 obj-$(CONFIG_IPMI_DEVICE_INTERFACE) += ipmi_devintf.o
 obj-$(CONFIG_IPMI_SI) += ipmi_si.o
+obj-$(CONFIG_IPMI_SSIF) += ipmi_ssif.o
+obj-$(CONFIG_IPMI_POWERNV) += ipmi_powernv.o
 obj-$(CONFIG_IPMI_WATCHDOG) += ipmi_watchdog.o
 obj-$(CONFIG_IPMI_POWEROFF) += ipmi_poweroff.o
diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index f816211f062f..5fa83f751378 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -56,6 +56,8 @@ static int ipmi_init_msghandler(void);
 static void smi_recv_tasklet(unsigned long);
 static void handle_new_recv_msgs(ipmi_smi_t intf);
 static void need_waiter(ipmi_smi_t intf);
+static int handle_one_recv_msg(ipmi_smi_t intf,
+			       struct ipmi_smi_msg *msg);
 
 static int initialized;
@@ -191,12 +193,12 @@ struct ipmi_proc_entry {
 #endif
 
 struct bmc_device {
-	struct platform_device *dev;
+	struct platform_device pdev;
 	struct ipmi_device_id  id;
 	unsigned char          guid[16];
 	int                    guid_set;
-
-	struct kref            refcount;
+	char                   name[16];
+	struct kref            usecount;
 
 	/* bmc device attributes */
 	struct device_attribute device_id_attr;
@@ -210,6 +212,7 @@ struct bmc_device {
 	struct device_attribute guid_attr;
 	struct device_attribute aux_firmware_rev_attr;
 };
+#define to_bmc_device(x) container_of((x), struct bmc_device, pdev.dev)
 
 /*
  * Various statistics for IPMI, these index stats[] in the ipmi_smi
@@ -323,6 +326,9 @@ struct ipmi_smi {
 
 	struct kref refcount;
 
+	/* Set when the interface is being unregistered. */
+	bool in_shutdown;
+
 	/* Used for a list of interfaces. */
 	struct list_head link;
 
@@ -341,7 +347,6 @@ struct ipmi_smi {
 
 	struct bmc_device *bmc;
 	char *my_dev_name;
-	char *sysfs_name;
 
 	/*
	 * This is the lower-layer's sender routine.  Note that you
@@ -377,11 +382,16 @@ struct ipmi_smi {
	 * periodic timer interrupt.  The tasklet is for handling received
	 * messages directly from the handler.
	 */
-	spinlock_t       waiting_msgs_lock;
-	struct list_head waiting_msgs;
+	spinlock_t       waiting_rcv_msgs_lock;
+	struct list_head waiting_rcv_msgs;
 	atomic_t         watchdog_pretimeouts_to_deliver;
 	struct tasklet_struct recv_tasklet;
 
+	spinlock_t           xmit_msgs_lock;
+	struct list_head     xmit_msgs;
+	struct ipmi_smi_msg  *curr_msg;
+	struct list_head     hp_xmit_msgs;
+
 	/*
	 * The list of command receivers that are registered for commands
	 * on this interface.
@@ -474,6 +484,18 @@ static DEFINE_MUTEX(smi_watchers_mutex);
 #define ipmi_get_stat(intf, stat) \
 	((unsigned int) atomic_read(&(intf)->stats[IPMI_STAT_ ## stat]))
 
+static char *addr_src_to_str[] = { "invalid", "hotmod", "hardcoded", "SPMI",
+				   "ACPI", "SMBIOS", "PCI",
+				   "device-tree", "default" };
+
+const char *ipmi_addr_src_to_str(enum ipmi_addr_src src)
+{
+	if (src > SI_DEFAULT)
+		src = 0; /* Invalid */
+	return addr_src_to_str[src];
+}
+EXPORT_SYMBOL(ipmi_addr_src_to_str);
+
 static int is_lan_addr(struct ipmi_addr *addr)
 {
 	return addr->addr_type == IPMI_LAN_ADDR_TYPE;
@@ -517,7 +539,7 @@ static void clean_up_interface_data(ipmi_smi_t intf)
 
 	tasklet_kill(&intf->recv_tasklet);
 
-	free_smi_msg_list(&intf->waiting_msgs);
+	free_smi_msg_list(&intf->waiting_rcv_msgs);
 	free_recv_msg_list(&intf->waiting_events);
 
 	/*
@@ -1473,6 +1495,30 @@ static inline void format_lan_msg(struct ipmi_smi_msg *smi_msg,
 	smi_msg->msgid = msgid;
 }
 
+static void smi_send(ipmi_smi_t intf, struct ipmi_smi_handlers *handlers,
+		     struct ipmi_smi_msg *smi_msg, int priority)
+{
+	int run_to_completion = intf->run_to_completion;
+	unsigned long flags;
+
+	if (!run_to_completion)
+		spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
+	if (intf->curr_msg) {
+		if (priority > 0)
+			list_add_tail(&smi_msg->link, &intf->hp_xmit_msgs);
+		else
+			list_add_tail(&smi_msg->link, &intf->xmit_msgs);
+		smi_msg = NULL;
+	} else {
+		intf->curr_msg = smi_msg;
+	}
+	if (!run_to_completion)
+		spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
+
+	if (smi_msg)
+		handlers->sender(intf->send_info, smi_msg);
+}
+
 /*
  * Separate from ipmi_request so that the user does not have to be
  * supplied in certain circumstances (mainly at panic time).  If
@@ -1497,7 +1543,6 @@ static int i_ipmi_request(ipmi_user_t user,
 	struct ipmi_smi_msg *smi_msg;
 	struct ipmi_recv_msg *recv_msg;
 	unsigned long flags;
-	struct ipmi_smi_handlers *handlers;
 
 
 	if (supplied_recv)
@@ -1520,8 +1565,7 @@ static int i_ipmi_request(ipmi_user_t user,
 	}
 
 	rcu_read_lock();
-	handlers = intf->handlers;
-	if (!handlers) {
+	if (intf->in_shutdown) {
 		rv = -ENODEV;
 		goto out_err;
 	}
@@ -1856,7 +1900,7 @@ static int i_ipmi_request(ipmi_user_t user,
 	}
 #endif
 
-	handlers->sender(intf->send_info, smi_msg, priority);
+	smi_send(intf, intf->handlers, smi_msg, priority);
 	rcu_read_unlock();
 
 	return 0;
@@ -2153,7 +2197,7 @@ static void remove_proc_entries(ipmi_smi_t smi)
 static int __find_bmc_guid(struct device *dev, void *data)
 {
 	unsigned char *id = data;
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 	return memcmp(bmc->guid, id, 16) == 0;
 }
 
@@ -2164,7 +2208,7 @@ static struct bmc_device *ipmi_find_bmc_guid(struct device_driver *drv,
 
 	dev = driver_find_device(drv, NULL, guid, __find_bmc_guid);
 	if (dev)
-		return dev_get_drvdata(dev);
+		return to_bmc_device(dev);
 	else
 		return NULL;
 }
@@ -2177,7 +2221,7 @@ struct prod_dev_id {
 static int __find_bmc_prod_dev_id(struct device *dev, void *data)
 {
 	struct prod_dev_id *id = data;
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return (bmc->id.product_id == id->product_id
 		&& bmc->id.device_id == id->device_id);
@@ -2195,7 +2239,7 @@ static struct bmc_device *ipmi_find_bmc_prod_dev_id(
 
 	dev = driver_find_device(drv, NULL, &id, __find_bmc_prod_dev_id);
 	if (dev)
-		return dev_get_drvdata(dev);
+		return to_bmc_device(dev);
 	else
 		return NULL;
 }
@@ -2204,84 +2248,92 @@ static ssize_t device_id_show(struct device *dev,
 			      struct device_attribute *attr,
 			      char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 10, "%u\n", bmc->id.device_id);
 }
+DEVICE_ATTR(device_id, S_IRUGO, device_id_show, NULL);
 
-static ssize_t provides_dev_sdrs_show(struct device *dev,
-				      struct device_attribute *attr,
-				      char *buf)
+static ssize_t provides_device_sdrs_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 10, "%u\n",
 			(bmc->id.device_revision & 0x80) >> 7);
 }
+DEVICE_ATTR(provides_device_sdrs, S_IRUGO, provides_device_sdrs_show, NULL);
 
 static ssize_t revision_show(struct device *dev, struct device_attribute *attr,
 			     char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 20, "%u\n",
 			bmc->id.device_revision & 0x0F);
 }
+DEVICE_ATTR(revision, S_IRUGO, revision_show, NULL);
 
-static ssize_t firmware_rev_show(struct device *dev,
-				 struct device_attribute *attr,
-				 char *buf)
+static ssize_t firmware_revision_show(struct device *dev,
+				      struct device_attribute *attr,
+				      char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 20, "%u.%x\n", bmc->id.firmware_revision_1,
 			bmc->id.firmware_revision_2);
 }
+DEVICE_ATTR(firmware_revision, S_IRUGO, firmware_revision_show, NULL);
 
 static ssize_t ipmi_version_show(struct device *dev,
 				 struct device_attribute *attr,
 				 char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 20, "%u.%u\n",
 			ipmi_version_major(&bmc->id),
 			ipmi_version_minor(&bmc->id));
 }
+DEVICE_ATTR(ipmi_version, S_IRUGO, ipmi_version_show, NULL);
 
 static ssize_t add_dev_support_show(struct device *dev,
 				    struct device_attribute *attr,
 				    char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 10, "0x%02x\n",
 			bmc->id.additional_device_support);
 }
+DEVICE_ATTR(additional_device_support, S_IRUGO, add_dev_support_show, NULL);
 
 static ssize_t manufacturer_id_show(struct device *dev,
 				    struct device_attribute *attr,
 				    char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 20, "0x%6.6x\n", bmc->id.manufacturer_id);
 }
+DEVICE_ATTR(manufacturer_id, S_IRUGO, manufacturer_id_show, NULL);
 
 static ssize_t product_id_show(struct device *dev,
 			       struct device_attribute *attr,
 			       char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 10, "0x%4.4x\n", bmc->id.product_id);
 }
+DEVICE_ATTR(product_id, S_IRUGO, product_id_show, NULL);
 
 static ssize_t aux_firmware_rev_show(struct device *dev,
 				     struct device_attribute *attr,
 				     char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 21, "0x%02x 0x%02x 0x%02x 0x%02x\n",
 			bmc->id.aux_firmware_revision[3],
@@ -2289,174 +2341,96 @@ static ssize_t aux_firmware_rev_show(struct device *dev,
 			bmc->id.aux_firmware_revision[1],
 			bmc->id.aux_firmware_revision[0]);
 }
+DEVICE_ATTR(aux_firmware_revision, S_IRUGO, aux_firmware_rev_show, NULL);
 
 static ssize_t guid_show(struct device *dev, struct device_attribute *attr,
 			 char *buf)
 {
-	struct bmc_device *bmc = dev_get_drvdata(dev);
+	struct bmc_device *bmc = to_bmc_device(dev);
 
 	return snprintf(buf, 100, "%Lx%Lx\n",
 			(long long) bmc->guid[0],
 			(long long) bmc->guid[8]);
 }
+DEVICE_ATTR(guid, S_IRUGO, guid_show, NULL);
+
+static struct attribute *bmc_dev_attrs[] = {
+	&dev_attr_device_id.attr,
+	&dev_attr_provides_device_sdrs.attr,
+	&dev_attr_revision.attr,
+	&dev_attr_firmware_revision.attr,
+	&dev_attr_ipmi_version.attr,
+	&dev_attr_additional_device_support.attr,
+	&dev_attr_manufacturer_id.attr,
+	&dev_attr_product_id.attr,
+	NULL
+};
 
-static void remove_files(struct bmc_device *bmc)
-{
-	if (!bmc->dev)
-		return;
-
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->device_id_attr);
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->provides_dev_sdrs_attr);
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->revision_attr);
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->firmware_rev_attr);
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->version_attr);
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->add_dev_support_attr);
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->manufacturer_id_attr);
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->product_id_attr);
-
-	if (bmc->id.aux_firmware_revision_set)
-		device_remove_file(&bmc->dev->dev,
-				   &bmc->aux_firmware_rev_attr);
-	if (bmc->guid_set)
-		device_remove_file(&bmc->dev->dev,
-				   &bmc->guid_attr);
-}
+static struct attribute_group bmc_dev_attr_group = {
+	.attrs = bmc_dev_attrs,
+};
+
+static const struct attribute_group *bmc_dev_attr_groups[] = {
+	&bmc_dev_attr_group,
+	NULL
+};
+
+static struct device_type bmc_device_type = {
+	.groups = bmc_dev_attr_groups,
+};
+
+static void
+release_bmc_device(struct device *dev)
+{
+	kfree(to_bmc_device(dev));
+}
 
 static void
 cleanup_bmc_device(struct kref *ref)
 {
-	struct bmc_device *bmc;
+	struct bmc_device *bmc = container_of(ref, struct bmc_device, usecount);
 
-	bmc = container_of(ref, struct bmc_device, refcount);
+	if (bmc->id.aux_firmware_revision_set)
+		device_remove_file(&bmc->pdev.dev,
+				   &bmc->aux_firmware_rev_attr);
+	if (bmc->guid_set)
+		device_remove_file(&bmc->pdev.dev,
+				   &bmc->guid_attr);
 
-	remove_files(bmc);
-	platform_device_unregister(bmc->dev);
-	kfree(bmc);
+	platform_device_unregister(&bmc->pdev);
 }
 
 static void ipmi_bmc_unregister(ipmi_smi_t intf)
 {
 	struct bmc_device *bmc = intf->bmc;
 
-	if (intf->sysfs_name) {
-		sysfs_remove_link(&intf->si_dev->kobj, intf->sysfs_name);
-		kfree(intf->sysfs_name);
-		intf->sysfs_name = NULL;
-	}
+	sysfs_remove_link(&intf->si_dev->kobj, "bmc");
 	if (intf->my_dev_name) {
-		sysfs_remove_link(&bmc->dev->dev.kobj, intf->my_dev_name);
+		sysfs_remove_link(&bmc->pdev.dev.kobj, intf->my_dev_name);
 		kfree(intf->my_dev_name);
 		intf->my_dev_name = NULL;
 	}
 
 	mutex_lock(&ipmidriver_mutex);
-	kref_put(&bmc->refcount, cleanup_bmc_device);
+	kref_put(&bmc->usecount, cleanup_bmc_device);
 	intf->bmc = NULL;
 	mutex_unlock(&ipmidriver_mutex);
 }
 
-static int create_files(struct bmc_device *bmc)
+static int create_bmc_files(struct bmc_device *bmc)
 {
 	int err;
 
-	bmc->device_id_attr.attr.name = "device_id";
-	bmc->device_id_attr.attr.mode = S_IRUGO;
-	bmc->device_id_attr.show = device_id_show;
-	sysfs_attr_init(&bmc->device_id_attr.attr);
-
-	bmc->provides_dev_sdrs_attr.attr.name = "provides_device_sdrs";
-	bmc->provides_dev_sdrs_attr.attr.mode = S_IRUGO;
-	bmc->provides_dev_sdrs_attr.show = provides_dev_sdrs_show;
-	sysfs_attr_init(&bmc->provides_dev_sdrs_attr.attr);
-
-	bmc->revision_attr.attr.name = "revision";
-	bmc->revision_attr.attr.mode = S_IRUGO;
-	bmc->revision_attr.show = revision_show;
-	sysfs_attr_init(&bmc->revision_attr.attr);
-
-	bmc->firmware_rev_attr.attr.name = "firmware_revision";
-	bmc->firmware_rev_attr.attr.mode = S_IRUGO;
-	bmc->firmware_rev_attr.show = firmware_rev_show;
-	sysfs_attr_init(&bmc->firmware_rev_attr.attr);
-
-	bmc->version_attr.attr.name = "ipmi_version";
-	bmc->version_attr.attr.mode = S_IRUGO;
-	bmc->version_attr.show = ipmi_version_show;
-	sysfs_attr_init(&bmc->version_attr.attr);
-
-	bmc->add_dev_support_attr.attr.name = "additional_device_support";
-	bmc->add_dev_support_attr.attr.mode = S_IRUGO;
-	bmc->add_dev_support_attr.show = add_dev_support_show;
-	sysfs_attr_init(&bmc->add_dev_support_attr.attr);
-
-	bmc->manufacturer_id_attr.attr.name = "manufacturer_id";
-	bmc->manufacturer_id_attr.attr.mode = S_IRUGO;
-	bmc->manufacturer_id_attr.show = manufacturer_id_show;
-	sysfs_attr_init(&bmc->manufacturer_id_attr.attr);
-
-	bmc->product_id_attr.attr.name = "product_id";
-	bmc->product_id_attr.attr.mode = S_IRUGO;
-	bmc->product_id_attr.show = product_id_show;
-	sysfs_attr_init(&bmc->product_id_attr.attr);
-
-	bmc->guid_attr.attr.name = "guid";
-	bmc->guid_attr.attr.mode = S_IRUGO;
-	bmc->guid_attr.show = guid_show;
-	sysfs_attr_init(&bmc->guid_attr.attr);
-
-	bmc->aux_firmware_rev_attr.attr.name = "aux_firmware_revision";
-	bmc->aux_firmware_rev_attr.attr.mode = S_IRUGO;
-	bmc->aux_firmware_rev_attr.show = aux_firmware_rev_show;
-	sysfs_attr_init(&bmc->aux_firmware_rev_attr.attr);
-
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->device_id_attr);
-	if (err)
-		goto out;
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->provides_dev_sdrs_attr);
-	if (err)
-		goto out_devid;
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->revision_attr);
-	if (err)
-		goto out_sdrs;
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->firmware_rev_attr);
-	if (err)
-		goto out_rev;
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->version_attr);
-	if (err)
-		goto out_firm;
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->add_dev_support_attr);
-	if (err)
-		goto out_version;
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->manufacturer_id_attr);
-	if (err)
-		goto out_add_dev;
-	err = device_create_file(&bmc->dev->dev,
-				 &bmc->product_id_attr);
-	if (err)
-		goto out_manu;
 	if (bmc->id.aux_firmware_revision_set) {
-		err = device_create_file(&bmc->dev->dev,
+		bmc->aux_firmware_rev_attr.attr.name = "aux_firmware_revision";
+		err = device_create_file(&bmc->pdev.dev,
					 &bmc->aux_firmware_rev_attr);
 		if (err)
-			goto out_prod_id;
+			goto out;
 	}
 	if (bmc->guid_set) {
-		err = device_create_file(&bmc->dev->dev,
+		bmc->guid_attr.attr.name = "guid";
+		err = device_create_file(&bmc->pdev.dev,
					 &bmc->guid_attr);
 		if (err)
			goto out_aux_firm;
@@ -2466,44 +2440,17 @@ static int create_files(struct bmc_device *bmc)
 
 out_aux_firm:
 	if (bmc->id.aux_firmware_revision_set)
-		device_remove_file(&bmc->dev->dev,
+		device_remove_file(&bmc->pdev.dev,
 				   &bmc->aux_firmware_rev_attr);
-out_prod_id:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->product_id_attr);
-out_manu:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->manufacturer_id_attr);
-out_add_dev:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->add_dev_support_attr);
-out_version:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->version_attr);
-out_firm:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->firmware_rev_attr);
-out_rev:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->revision_attr);
-out_sdrs:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->provides_dev_sdrs_attr);
-out_devid:
-	device_remove_file(&bmc->dev->dev,
-			   &bmc->device_id_attr);
 out:
 	return err;
 }
 
-static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
-			     const char *sysfs_name)
+static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum)
 {
 	int rv;
 	struct bmc_device *bmc = intf->bmc;
 	struct bmc_device *old_bmc;
-	int size;
-	char dummy[1];
 
 	mutex_lock(&ipmidriver_mutex);
 
@@ -2527,7 +2474,7 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
 		intf->bmc = old_bmc;
 		bmc = old_bmc;
 
-		kref_get(&bmc->refcount);
+		kref_get(&bmc->usecount);
 		mutex_unlock(&ipmidriver_mutex);
 
 		printk(KERN_INFO
@@ -2537,12 +2484,12 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
 		       bmc->id.product_id,
 		       bmc->id.device_id);
 	} else {
-		char name[14];
 		unsigned char orig_dev_id = bmc->id.device_id;
 		int warn_printed = 0;
 
-		snprintf(name, sizeof(name),
+		snprintf(bmc->name, sizeof(bmc->name),
 			 "ipmi_bmc.%4.4x", bmc->id.product_id);
+		bmc->pdev.name = bmc->name;
 
 		while (ipmi_find_bmc_prod_dev_id(&ipmidriver.driver,
 						 bmc->id.product_id,
@@ -2566,23 +2513,16 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
 			}
 		}
 
-		bmc->dev = platform_device_alloc(name, bmc->id.device_id);
-		if (!bmc->dev) {
-			mutex_unlock(&ipmidriver_mutex);
-			printk(KERN_ERR
-			       "ipmi_msghandler:"
-			       " Unable to allocate platform device\n");
-			return -ENOMEM;
-		}
-		bmc->dev->dev.driver = &ipmidriver.driver;
-		dev_set_drvdata(&bmc->dev->dev, bmc);
-		kref_init(&bmc->refcount);
+		bmc->pdev.dev.driver = &ipmidriver.driver;
+		bmc->pdev.id = bmc->id.device_id;
+		bmc->pdev.dev.release = release_bmc_device;
+		bmc->pdev.dev.type = &bmc_device_type;
+		kref_init(&bmc->usecount);
 
-		rv = platform_device_add(bmc->dev);
+		rv = platform_device_register(&bmc->pdev);
 		mutex_unlock(&ipmidriver_mutex);
 		if (rv) {
-			platform_device_put(bmc->dev);
-			bmc->dev = NULL;
+			put_device(&bmc->pdev.dev);
 			printk(KERN_ERR
 			       "ipmi_msghandler:"
 			       " Unable to register bmc device: %d\n",
@@ -2594,10 +2534,10 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
 			return rv;
 		}
 
-		rv = create_files(bmc);
+		rv = create_bmc_files(bmc);
 		if (rv) {
 			mutex_lock(&ipmidriver_mutex);
-			platform_device_unregister(bmc->dev);
+			platform_device_unregister(&bmc->pdev);
 			mutex_unlock(&ipmidriver_mutex);
 
 			return rv;
@@ -2614,44 +2554,26 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
 	 * create symlink from system interface device to bmc device
 	 * and back.
 	 */
-	intf->sysfs_name = kstrdup(sysfs_name, GFP_KERNEL);
-	if (!intf->sysfs_name) {
-		rv = -ENOMEM;
-		printk(KERN_ERR
-		       "ipmi_msghandler: allocate link to BMC: %d\n",
-		       rv);
-		goto out_err;
-	}
-
-	rv = sysfs_create_link(&intf->si_dev->kobj,
-			       &bmc->dev->dev.kobj, intf->sysfs_name);
+	rv = sysfs_create_link(&intf->si_dev->kobj, &bmc->pdev.dev.kobj, "bmc");
 	if (rv) {
-		kfree(intf->sysfs_name);
-		intf->sysfs_name = NULL;
 		printk(KERN_ERR
 		       "ipmi_msghandler: Unable to create bmc symlink: %d\n",
 		       rv);
 		goto out_err;
 	}
 
-	size = snprintf(dummy, 0, "ipmi%d", ifnum);
-	intf->my_dev_name = kmalloc(size+1, GFP_KERNEL);
+	intf->my_dev_name = kasprintf(GFP_KERNEL, "ipmi%d", ifnum);
 	if (!intf->my_dev_name) {
-		kfree(intf->sysfs_name);
-		intf->sysfs_name = NULL;
 		rv = -ENOMEM;
 		printk(KERN_ERR
 		       "ipmi_msghandler: allocate link from BMC: %d\n",
 		       rv);
 		goto out_err;
 	}
-	snprintf(intf->my_dev_name, size+1, "ipmi%d", ifnum);
 
-	rv = sysfs_create_link(&bmc->dev->dev.kobj, &intf->si_dev->kobj,
+	rv = sysfs_create_link(&bmc->pdev.dev.kobj, &intf->si_dev->kobj,
 			       intf->my_dev_name);
 	if (rv) {
-		kfree(intf->sysfs_name);
-		intf->sysfs_name = NULL;
 		kfree(intf->my_dev_name);
 		intf->my_dev_name = NULL;
 		printk(KERN_ERR
@@ -2850,7 +2772,6 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
 		      void *send_info,
 		      struct ipmi_device_id *device_id,
 		      struct device *si_dev,
-		      const char *sysfs_name,
 		      unsigned char slave_addr)
 {
 	int i, j;
@@ -2909,12 +2830,15 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
 #ifdef CONFIG_PROC_FS
 	mutex_init(&intf->proc_entry_lock);
 #endif
-	spin_lock_init(&intf->waiting_msgs_lock);
-	INIT_LIST_HEAD(&intf->waiting_msgs);
+	spin_lock_init(&intf->waiting_rcv_msgs_lock);
+	INIT_LIST_HEAD(&intf->waiting_rcv_msgs);
 	tasklet_init(&intf->recv_tasklet,
 		     smi_recv_tasklet,
 		     (unsigned long) intf);
 	atomic_set(&intf->watchdog_pretimeouts_to_deliver, 0);
+	spin_lock_init(&intf->xmit_msgs_lock);
+	INIT_LIST_HEAD(&intf->xmit_msgs);
+	INIT_LIST_HEAD(&intf->hp_xmit_msgs);
 	spin_lock_init(&intf->events_lock);
 	atomic_set(&intf->event_waiters, 0);
 	intf->ticks_to_req_ev = IPMI_REQUEST_EV_TIME;
@@ -2984,7 +2908,7 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
 	if (rv == 0)
 		rv = add_proc_entries(intf, i);
 
-	rv = ipmi_bmc_register(intf, i, sysfs_name);
+	rv = ipmi_bmc_register(intf, i);
 
  out:
 	if (rv) {
@@ -3014,12 +2938,50 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
 }
 EXPORT_SYMBOL(ipmi_register_smi);
 
+static void deliver_smi_err_response(ipmi_smi_t intf,
+				     struct ipmi_smi_msg *msg,
+				     unsigned char err)
+{
+	msg->rsp[0] = msg->data[0] | 4;
+	msg->rsp[1] = msg->data[1];
+	msg->rsp[2] = err;
+	msg->rsp_size = 3;
+	/* It's an error, so it will never requeue, no need to check return. */
+	handle_one_recv_msg(intf, msg);
+}
+
 static void cleanup_smi_msgs(ipmi_smi_t intf)
 {
 	int i;
 	struct seq_table *ent;
+	struct ipmi_smi_msg *msg;
+	struct list_head *entry;
+	struct list_head tmplist;
+
+	/* Clear out our transmit queues and hold the messages. */
+	INIT_LIST_HEAD(&tmplist);
+	list_splice_tail(&intf->hp_xmit_msgs, &tmplist);
+	list_splice_tail(&intf->xmit_msgs, &tmplist);
+
+	/* Current message first, to preserve order */
+	while (intf->curr_msg && !list_empty(&intf->waiting_rcv_msgs)) {
+		/* Wait for the message to clear out. */
+		schedule_timeout(1);
+	}
 
 	/* No need for locks, the interface is down. */
+
+	/*
+	 * Return errors for all pending messages in queue and in the
+	 * tables waiting for remote responses.
+	 */
+	while (!list_empty(&tmplist)) {
+		entry = tmplist.next;
+		list_del(entry);
+		msg = list_entry(entry, struct ipmi_smi_msg, link);
+		deliver_smi_err_response(intf, msg, IPMI_ERR_UNSPECIFIED);
+	}
+
 	for (i = 0; i < IPMI_IPMB_NUM_SEQ; i++) {
 		ent = &(intf->seq_table[i]);
 		if (!ent->inuse)
@@ -3031,20 +2993,33 @@ static void cleanup_smi_msgs(ipmi_smi_t intf)
 int ipmi_unregister_smi(ipmi_smi_t intf)
 {
 	struct ipmi_smi_watcher *w;
 	int intf_num = intf->intf_num;
+	ipmi_user_t user;
 
 	ipmi_bmc_unregister(intf);
 
 	mutex_lock(&smi_watchers_mutex);
 	mutex_lock(&ipmi_interfaces_mutex);
 	intf->intf_num = -1;
-	intf->handlers = NULL;
+	intf->in_shutdown = true;
 	list_del_rcu(&intf->link);
 	mutex_unlock(&ipmi_interfaces_mutex);
 	synchronize_rcu();
 
 	cleanup_smi_msgs(intf);
 
+	/* Clean up the effects of users on the lower-level software. */
+	mutex_lock(&ipmi_interfaces_mutex);
+	rcu_read_lock();
+	list_for_each_entry_rcu(user, &intf->users, link) {
+		module_put(intf->handlers->owner);
+		if (intf->handlers->dec_usecount)
+			intf->handlers->dec_usecount(intf->send_info);
+	}
+	rcu_read_unlock();
+	intf->handlers = NULL;
+	mutex_unlock(&ipmi_interfaces_mutex);
+
 	remove_proc_entries(intf);
 
 	/*
@@ -3134,7 +3109,6 @@ static int handle_ipmb_get_msg_cmd(ipmi_smi_t intf,
 	ipmi_user_t user = NULL;
 	struct ipmi_ipmb_addr *ipmb_addr;
 	struct ipmi_recv_msg *recv_msg;
-	struct ipmi_smi_handlers *handlers;
 
 	if (msg->rsp_size < 10) {
 		/* Message not big enough, just ignore it. */
@@ -3188,9 +3162,8 @@ static int handle_ipmb_get_msg_cmd(ipmi_smi_t intf,
 	}
 #endif
 	rcu_read_lock();
-	handlers = intf->handlers;
-	if (handlers) {
-		handlers->sender(intf->send_info, msg, 0);
+	if (!intf->in_shutdown) {
+		smi_send(intf, intf->handlers, msg, 0);
 		/*
 		 * We used the message, so return the value
 		 * that causes it to not be freed or
@@ -3857,32 +3830,32 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 
 	/* See if any waiting messages need to be processed. */
 	if (!run_to_completion)
-		spin_lock_irqsave(&intf->waiting_msgs_lock, flags);
-	while (!list_empty(&intf->waiting_msgs)) {
-		smi_msg = list_entry(intf->waiting_msgs.next,
+		spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
+	while (!list_empty(&intf->waiting_rcv_msgs)) {
+		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
 				     struct ipmi_smi_msg, link);
-		list_del(&smi_msg->link);
 		if (!run_to_completion)
-			spin_unlock_irqrestore(&intf->waiting_msgs_lock, flags);
+			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
+					       flags);
 		rv = handle_one_recv_msg(intf, smi_msg);
 		if (!run_to_completion)
-			spin_lock_irqsave(&intf->waiting_msgs_lock, flags);
-		if (rv == 0) {
-			/* Message handled */
-			ipmi_free_smi_msg(smi_msg);
-		} else if (rv < 0) {
-			/* Fatal error on the message, del but don't free. */
-		} else {
+			spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
+		if (rv > 0) {
 			/*
 			 * To preserve message order, quit if we
 			 * can't handle a message.
 			 */
-			list_add(&smi_msg->link, &intf->waiting_msgs);
 			break;
+		} else {
+			list_del(&smi_msg->link);
+			if (rv == 0)
+				/* Message handled */
+				ipmi_free_smi_msg(smi_msg);
+			/* If rv < 0, fatal error, del but don't free. */
 		}
 	}
 	if (!run_to_completion)
-		spin_unlock_irqrestore(&intf->waiting_msgs_lock, flags);
+		spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock, flags);
 
 	/*
 	 * If the pretimout count is non-zero, decrement one from it and
@@ -3903,7 +3876,41 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 
 static void smi_recv_tasklet(unsigned long val)
 {
-	handle_new_recv_msgs((ipmi_smi_t) val);
+	unsigned long flags = 0; /* keep us warning-free. */
+	ipmi_smi_t intf = (ipmi_smi_t) val;
+	int run_to_completion = intf->run_to_completion;
+	struct ipmi_smi_msg *newmsg = NULL;
+
+	/*
+	 * Start the next message if available.
+	 *
+	 * Do this here, not in the actual receiver, because we may deadlock
+	 * because the lower layer is allowed to hold locks while calling
+	 * message delivery.
+	 */
+	if (!run_to_completion)
+		spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
+	if (intf->curr_msg == NULL && !intf->in_shutdown) {
+		struct list_head *entry = NULL;
+
+		/* Pick the high priority queue first. */
+		if (!list_empty(&intf->hp_xmit_msgs))
+			entry = intf->hp_xmit_msgs.next;
+		else if (!list_empty(&intf->xmit_msgs))
+			entry = intf->xmit_msgs.next;
+
+		if (entry) {
+			list_del(entry);
+			newmsg = list_entry(entry, struct ipmi_smi_msg, link);
+			intf->curr_msg = newmsg;
+		}
+	}
+	if (!run_to_completion)
+		spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
+	if (newmsg)
+		intf->handlers->sender(intf->send_info, newmsg);
+
+	handle_new_recv_msgs(intf);
 }
 
 /* Handle a new message from the lower layer. */
@@ -3911,13 +3918,16 @@ void ipmi_smi_msg_received(ipmi_smi_t intf,
 			   struct ipmi_smi_msg *msg)
 {
 	unsigned long flags = 0; /* keep us warning-free. */
-	int run_to_completion;
-
+	int run_to_completion = intf->run_to_completion;
 
 	if ((msg->data_size >= 2)
 	    && (msg->data[0] == (IPMI_NETFN_APP_REQUEST << 2))
 	    && (msg->data[1] == IPMI_SEND_MSG_CMD)
 	    && (msg->user_data == NULL)) {
+
+		if (intf->in_shutdown)
+			goto free_msg;
+
 		/*
 		 * This is the local response to a command send, start
 		 * the timer for these. The user_data will not be
@@ -3953,29 +3963,40 @@ void ipmi_smi_msg_received(ipmi_smi_t intf,
 		/* The message was sent, start the timer. */
 		intf_start_seq_timer(intf, msg->msgid);
 
+free_msg:
 		ipmi_free_smi_msg(msg);
-		goto out;
+	} else {
+		/*
+		 * To preserve message order, we keep a queue and deliver from
+		 * a tasklet.
+		 */
+		if (!run_to_completion)
+			spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
+		list_add_tail(&msg->link, &intf->waiting_rcv_msgs);
+		if (!run_to_completion)
+			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
+					       flags);
 	}
 
-	/*
-	 * To preserve message order, if the list is not empty, we
-	 * tack this message onto the end of the list.
-	 */
-	run_to_completion = intf->run_to_completion;
 	if (!run_to_completion)
-		spin_lock_irqsave(&intf->waiting_msgs_lock, flags);
-	list_add_tail(&msg->link, &intf->waiting_msgs);
+		spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
+	if (msg == intf->curr_msg)
+		intf->curr_msg = NULL;
 	if (!run_to_completion)
-		spin_unlock_irqrestore(&intf->waiting_msgs_lock, flags);
+		spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
 
-	tasklet_schedule(&intf->recv_tasklet);
- out:
-	return;
+	if (run_to_completion)
+		smi_recv_tasklet((unsigned long) intf);
+	else
+		tasklet_schedule(&intf->recv_tasklet);
 }
 EXPORT_SYMBOL(ipmi_smi_msg_received);
 
 void ipmi_smi_watchdog_pretimeout(ipmi_smi_t intf)
 {
+	if (intf->in_shutdown)
+		return;
+
 	atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
 	tasklet_schedule(&intf->recv_tasklet);
 }
@@ -4017,7 +4038,7 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
 	struct ipmi_recv_msg *msg;
 	struct ipmi_smi_handlers *handlers;
 
-	if (intf->intf_num == -1)
+	if (intf->in_shutdown)
 		return;
 
 	if (!ent->inuse)
@@ -4082,8 +4103,7 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
 			ipmi_inc_stat(intf,
 				      retransmitted_ipmb_commands);
 
-			intf->handlers->sender(intf->send_info,
-					       smi_msg, 0);
+			smi_send(intf, intf->handlers, smi_msg, 0);
 		} else
 			ipmi_free_smi_msg(smi_msg);
 
@@ -4145,15 +4165,12 @@ static unsigned int ipmi_timeout_handler(ipmi_smi_t intf, long timeout_period)
 
 static void ipmi_request_event(ipmi_smi_t intf)
 {
-	struct ipmi_smi_handlers *handlers;
-
 	/* No event requests when in maintenance mode. */
 	if (intf->maintenance_mode_enable)
 		return;
 
-	handlers = intf->handlers;
-	if (handlers)
-		handlers->request_events(intf->send_info);
+	if (!intf->in_shutdown)
+		intf->handlers->request_events(intf->send_info);
 }
 
 static struct timer_list ipmi_timer;
@@ -4548,6 +4565,7 @@ static int ipmi_init_msghandler(void)
 	proc_ipmi_root = proc_mkdir("ipmi", NULL);
 	if (!proc_ipmi_root) {
 		printk(KERN_ERR PFX "Unable to create IPMI proc dir");
+		driver_unregister(&ipmidriver.driver);
 		return -ENOMEM;
 	}
 
diff --git a/drivers/char/ipmi/ipmi_powernv.c b/drivers/char/ipmi/ipmi_powernv.c
new file mode 100644
index 000000000000..79524ed2a3cb
--- /dev/null
+++ b/drivers/char/ipmi/ipmi_powernv.c
@@ -0,0 +1,310 @@
+/*
+ * PowerNV OPAL IPMI driver
+ *
+ * Copyright 2014 IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#define pr_fmt(fmt) "ipmi-powernv: " fmt
+
+#include <linux/ipmi_smi.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of.h>
+
+#include <asm/opal.h>
+
+
+struct ipmi_smi_powernv {
+	u64			interface_id;
+	struct ipmi_device_id	ipmi_id;
+	ipmi_smi_t		intf;
+	u64			event;
+	struct notifier_block	event_nb;
+
+	/**
+	 * We assume that there can only be one outstanding request, so
+	 * keep the pending message in cur_msg. We protect this from concurrent
+	 * updates through send & recv calls, (and consequently opal_msg, which
+	 * is in-use when cur_msg is set) with msg_lock
+	 */
+	spinlock_t		msg_lock;
+	struct ipmi_smi_msg	*cur_msg;
+	struct opal_ipmi_msg	*opal_msg;
+};
+
+static int ipmi_powernv_start_processing(void *send_info, ipmi_smi_t intf)
+{
+	struct ipmi_smi_powernv *smi = send_info;
+
+	smi->intf = intf;
+	return 0;
+}
+
+static void send_error_reply(struct ipmi_smi_powernv *smi,
+		struct ipmi_smi_msg *msg, u8 completion_code)
+{
+	msg->rsp[0] = msg->data[0] | 0x4;
+	msg->rsp[1] = msg->data[1];
+	msg->rsp[2] = completion_code;
+	msg->rsp_size = 3;
+	ipmi_smi_msg_received(smi->intf, msg);
+}
+
+static void ipmi_powernv_send(void *send_info, struct ipmi_smi_msg *msg)
+{
+	struct ipmi_smi_powernv *smi = send_info;
+	struct opal_ipmi_msg *opal_msg;
+	unsigned long flags;
+	int comp, rc;
+	size_t size;
+
+	/* ensure data_len will fit in the opal_ipmi_msg buffer... */
+	if (msg->data_size > IPMI_MAX_MSG_LENGTH) {
+		comp = IPMI_REQ_LEN_EXCEEDED_ERR;
+		goto err;
+	}
+
+	/* ... and that we at least have netfn and cmd bytes */
+	if (msg->data_size < 2) {
+		comp = IPMI_REQ_LEN_INVALID_ERR;
+		goto err;
+	}
+
+	spin_lock_irqsave(&smi->msg_lock, flags);
+
+	if (smi->cur_msg) {
+		comp = IPMI_NODE_BUSY_ERR;
+		goto err_unlock;
+	}
+
+	/* format our data for the OPAL API */
+	opal_msg = smi->opal_msg;
+	opal_msg->version = OPAL_IPMI_MSG_FORMAT_VERSION_1;
+	opal_msg->netfn = msg->data[0];
+	opal_msg->cmd = msg->data[1];
+	if (msg->data_size > 2)
+		memcpy(opal_msg->data, msg->data + 2, msg->data_size - 2);
+
+	/* data_size already includes the netfn and cmd bytes */
+	size = sizeof(*opal_msg) + msg->data_size - 2;
+
+	pr_devel("%s: opal_ipmi_send(0x%llx, %p, %ld)\n", __func__,
+			smi->interface_id, opal_msg, size);
+	rc = opal_ipmi_send(smi->interface_id, opal_msg, size);
+	pr_devel("%s: -> %d\n", __func__, rc);
+
+	if (!rc) {
+		smi->cur_msg = msg;
+		spin_unlock_irqrestore(&smi->msg_lock, flags);
+		return;
+	}
+
+	comp = IPMI_ERR_UNSPECIFIED;
+err_unlock:
+	spin_unlock_irqrestore(&smi->msg_lock, flags);
+err:
+	send_error_reply(smi, msg, comp);
+}
+
+static int ipmi_powernv_recv(struct ipmi_smi_powernv *smi)
+{
+	struct opal_ipmi_msg *opal_msg;
+	struct ipmi_smi_msg *msg;
+	unsigned long flags;
+	uint64_t size;
+	int rc;
+
+	pr_devel("%s: opal_ipmi_recv(%llx, msg, sz)\n", __func__,
+			smi->interface_id);
+
+	spin_lock_irqsave(&smi->msg_lock, flags);
+
+	if (!smi->cur_msg) {
+		pr_warn("no current message?\n");
+		return 0;
+	}
+
+	msg = smi->cur_msg;
+	opal_msg = smi->opal_msg;
+
+	size = cpu_to_be64(sizeof(*opal_msg) + IPMI_MAX_MSG_LENGTH);
+
+	rc = opal_ipmi_recv(smi->interface_id,
+			opal_msg,
+			&size);
+	size = be64_to_cpu(size);
+	pr_devel("%s: -> %d (size %lld)\n", __func__,
+			rc, rc == 0 ? size : 0);
+	if (rc) {
+		spin_unlock_irqrestore(&smi->msg_lock, flags);
+		ipmi_free_smi_msg(msg);
+		return 0;
+	}
+
+	if (size < sizeof(*opal_msg)) {
+		spin_unlock_irqrestore(&smi->msg_lock, flags);
+		pr_warn("unexpected IPMI message size %lld\n", size);
+		return 0;
+	}
+
+	if (opal_msg->version != OPAL_IPMI_MSG_FORMAT_VERSION_1) {
+		spin_unlock_irqrestore(&smi->msg_lock, flags);
+		pr_warn("unexpected IPMI message format (version %d)\n",
+				opal_msg->version);
+		return 0;
+	}
+
+	msg->rsp[0] = opal_msg->netfn;
+	msg->rsp[1] = opal_msg->cmd;
+	if (size > sizeof(*opal_msg))
+		memcpy(&msg->rsp[2], opal_msg->data, size - sizeof(*opal_msg));
+	msg->rsp_size = 2 + size - sizeof(*opal_msg);
+
+	smi->cur_msg = NULL;
+	spin_unlock_irqrestore(&smi->msg_lock, flags);
+	ipmi_smi_msg_received(smi->intf, msg);
+	return 0;
+}
+
+static void ipmi_powernv_request_events(void *send_info)
+{
+}
+
+static void ipmi_powernv_set_run_to_completion(void *send_info,
+		bool run_to_completion)
+{
+}
+
+static void ipmi_powernv_poll(void *send_info)
+{
+	struct ipmi_smi_powernv *smi = send_info;
+
+	ipmi_powernv_recv(smi);
+}
+
+static struct ipmi_smi_handlers ipmi_powernv_smi_handlers = {
+	.owner			= THIS_MODULE,
+	.start_processing	= ipmi_powernv_start_processing,
+	.sender			= ipmi_powernv_send,
+	.request_events		= ipmi_powernv_request_events,
+	.set_run_to_completion	= ipmi_powernv_set_run_to_completion,
+	.poll			= ipmi_powernv_poll,
+};
+
+static int ipmi_opal_event(struct notifier_block *nb,
+			  unsigned long events, void *change)
+{
+	struct ipmi_smi_powernv *smi = container_of(nb,
+					struct ipmi_smi_powernv, event_nb);
+
+	if (events & smi->event)
+		ipmi_powernv_recv(smi);
+	return 0;
+}
+
+static int ipmi_powernv_probe(struct platform_device *pdev)
+{
+	struct ipmi_smi_powernv *ipmi;
+	struct device *dev;
+	u32 prop;
+	int rc;
+
+	if (!pdev || !pdev->dev.of_node)
+		return -ENODEV;
+
+	dev = &pdev->dev;
+
+	ipmi = devm_kzalloc(dev, sizeof(*ipmi), GFP_KERNEL);
+	if (!ipmi)
+		return -ENOMEM;
+
+	spin_lock_init(&ipmi->msg_lock);
+
+	rc = of_property_read_u32(dev->of_node, "ibm,ipmi-interface-id",
+			&prop);
+	if (rc) {
+		dev_warn(dev, "No interface ID property\n");
+		goto err_free;
+	}
+	ipmi->interface_id = prop;
+
+	rc = of_property_read_u32(dev->of_node, "interrupts", &prop);
+	if (rc) {
+		dev_warn(dev, "No interrupts property\n");
+		goto err_free;
+	}
+
+	ipmi->event = 1ull << prop;
+	ipmi->event_nb.notifier_call = ipmi_opal_event;
+
+	rc = opal_notifier_register(&ipmi->event_nb);
+	if (rc) {
+		dev_warn(dev, "OPAL notifier registration failed (%d)\n", rc);
+		goto err_free;
+	}
+
+	ipmi->opal_msg = devm_kmalloc(dev,
+			sizeof(*ipmi->opal_msg) + IPMI_MAX_MSG_LENGTH,
+			GFP_KERNEL);
+	if (!ipmi->opal_msg) {
+		rc = -ENOMEM;
+		goto err_unregister;
+	}
+
+	/* todo: query actual ipmi_device_id */
+	rc = ipmi_register_smi(&ipmi_powernv_smi_handlers, ipmi,
+			&ipmi->ipmi_id, dev, 0);
+	if (rc) {
+		dev_warn(dev, "IPMI SMI registration failed (%d)\n", rc);
+		goto err_free_msg;
+	}
+
+	dev_set_drvdata(dev, ipmi);
+	return 0;
+
+err_free_msg:
+	devm_kfree(dev, ipmi->opal_msg);
+err_unregister:
+	opal_notifier_unregister(&ipmi->event_nb);
+err_free:
+	devm_kfree(dev, ipmi);
+	return rc;
+}
+
+static int ipmi_powernv_remove(struct platform_device *pdev)
+{
+	struct ipmi_smi_powernv *smi = dev_get_drvdata(&pdev->dev);
+
+	ipmi_unregister_smi(smi->intf);
+	opal_notifier_unregister(&smi->event_nb);
+	return 0;
+}
+
+static const struct of_device_id ipmi_powernv_match[] = {
+	{ .compatible = "ibm,opal-ipmi" },
+	{ },
+};
+
+
+static struct platform_driver powernv_ipmi_driver = {
+	.driver = {
+		.name		= "ipmi-powernv",
+		.owner		= THIS_MODULE,
+		.of_match_table	= ipmi_powernv_match,
+	},
+	.probe	= ipmi_powernv_probe,
+	.remove	= ipmi_powernv_remove,
+};
+
+
+module_platform_driver(powernv_ipmi_driver);
+
+MODULE_DEVICE_TABLE(of, ipmi_powernv_match);
+MODULE_DESCRIPTION("powernv IPMI driver");
+MODULE_AUTHOR("Jeremy Kerr <jk@ozlabs.org>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
index 5c4e1f625bbb..90c7fdf95419 100644
--- a/drivers/char/ipmi/ipmi_si_intf.c
+++ b/drivers/char/ipmi/ipmi_si_intf.c
@@ -92,12 +92,9 @@ enum si_intf_state {
 	SI_GETTING_FLAGS,
 	SI_GETTING_EVENTS,
 	SI_CLEARING_FLAGS,
-	SI_CLEARING_FLAGS_THEN_SET_IRQ,
 	SI_GETTING_MESSAGES,
-	SI_ENABLE_INTERRUPTS1,
-	SI_ENABLE_INTERRUPTS2,
-	SI_DISABLE_INTERRUPTS1,
-	SI_DISABLE_INTERRUPTS2
+	SI_CHECKING_ENABLES,
+	SI_SETTING_ENABLES
 	/* FIXME - add watchdog stuff. */
 };
 
@@ -111,10 +108,6 @@ enum si_type {
 };
 static char *si_to_str[] = { "kcs", "smic", "bt" };
 
-static char *ipmi_addr_src_to_str[] = { NULL, "hotmod", "hardcoded", "SPMI",
-					"ACPI", "SMBIOS", "PCI",
-					"device-tree", "default" };
-
 #define DEVICE_NAME "ipmi_si"
 
 static struct platform_driver ipmi_driver;
@@ -174,8 +167,7 @@ struct smi_info {
 	struct si_sm_handlers *handlers;
 	enum si_type si_type;
 	spinlock_t si_lock;
-	struct list_head xmit_msgs;
-	struct list_head hp_xmit_msgs;
+	struct ipmi_smi_msg *waiting_msg;
 	struct ipmi_smi_msg *curr_msg;
 	enum si_intf_state si_state;
 
@@ -254,9 +246,6 @@ struct smi_info {
 	/* The time (in jiffies) the last timeout occurred at. */
 	unsigned long last_timeout_jiffies;
 
-	/* Used to gracefully stop the timer without race conditions. */
-	atomic_t stop_operation;
-
 	/* Are we waiting for the events, pretimeouts, received msgs? */
 	atomic_t need_watch;
 
@@ -268,6 +257,16 @@ struct smi_info {
268 */ 257 */
269 bool interrupt_disabled; 258 bool interrupt_disabled;
270 259
260 /*
261 * Does the BMC support events?
262 */
263 bool supports_event_msg_buff;
264
265 /*
266 * Did we get an attention that we did not handle?
267 */
268 bool got_attn;
269
271 /* From the get device id response... */ 270 /* From the get device id response... */
272 struct ipmi_device_id device_id; 271 struct ipmi_device_id device_id;
273 272
@@ -332,7 +331,10 @@ static void deliver_recv_msg(struct smi_info *smi_info,
332 struct ipmi_smi_msg *msg) 331 struct ipmi_smi_msg *msg)
333{ 332{
334 /* Deliver the message to the upper layer. */ 333 /* Deliver the message to the upper layer. */
335 ipmi_smi_msg_received(smi_info->intf, msg); 334 if (smi_info->intf)
335 ipmi_smi_msg_received(smi_info->intf, msg);
336 else
337 ipmi_free_smi_msg(msg);
336} 338}
337 339
338static void return_hosed_msg(struct smi_info *smi_info, int cCode) 340static void return_hosed_msg(struct smi_info *smi_info, int cCode)
@@ -356,28 +358,18 @@ static void return_hosed_msg(struct smi_info *smi_info, int cCode)
356static enum si_sm_result start_next_msg(struct smi_info *smi_info) 358static enum si_sm_result start_next_msg(struct smi_info *smi_info)
357{ 359{
358 int rv; 360 int rv;
359 struct list_head *entry = NULL;
360#ifdef DEBUG_TIMING 361#ifdef DEBUG_TIMING
361 struct timeval t; 362 struct timeval t;
362#endif 363#endif
363 364
364 /* Pick the high priority queue first. */ 365 if (!smi_info->waiting_msg) {
365 if (!list_empty(&(smi_info->hp_xmit_msgs))) {
366 entry = smi_info->hp_xmit_msgs.next;
367 } else if (!list_empty(&(smi_info->xmit_msgs))) {
368 entry = smi_info->xmit_msgs.next;
369 }
370
371 if (!entry) {
372 smi_info->curr_msg = NULL; 366 smi_info->curr_msg = NULL;
373 rv = SI_SM_IDLE; 367 rv = SI_SM_IDLE;
374 } else { 368 } else {
375 int err; 369 int err;
376 370
377 list_del(entry); 371 smi_info->curr_msg = smi_info->waiting_msg;
378 smi_info->curr_msg = list_entry(entry, 372 smi_info->waiting_msg = NULL;
379 struct ipmi_smi_msg,
380 link);
381#ifdef DEBUG_TIMING 373#ifdef DEBUG_TIMING
382 do_gettimeofday(&t); 374 do_gettimeofday(&t);
383 printk(KERN_DEBUG "**Start2: %d.%9.9d\n", t.tv_sec, t.tv_usec); 375 printk(KERN_DEBUG "**Start2: %d.%9.9d\n", t.tv_sec, t.tv_usec);
@@ -401,30 +393,15 @@ static enum si_sm_result start_next_msg(struct smi_info *smi_info)
401 return rv; 393 return rv;
402} 394}
403 395
404static void start_enable_irq(struct smi_info *smi_info) 396static void start_check_enables(struct smi_info *smi_info)
405{ 397{
406 unsigned char msg[2]; 398 unsigned char msg[2];
407 399
408 /*
409 * If we are enabling interrupts, we have to tell the
410 * BMC to use them.
411 */
412 msg[0] = (IPMI_NETFN_APP_REQUEST << 2); 400 msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
413 msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD; 401 msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
414 402
415 smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2); 403 smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
416 smi_info->si_state = SI_ENABLE_INTERRUPTS1; 404 smi_info->si_state = SI_CHECKING_ENABLES;
417}
418
419static void start_disable_irq(struct smi_info *smi_info)
420{
421 unsigned char msg[2];
422
423 msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
424 msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
425
426 smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
427 smi_info->si_state = SI_DISABLE_INTERRUPTS1;
428} 405}
429 406
430static void start_clear_flags(struct smi_info *smi_info) 407static void start_clear_flags(struct smi_info *smi_info)
@@ -440,6 +417,32 @@ static void start_clear_flags(struct smi_info *smi_info)
440 smi_info->si_state = SI_CLEARING_FLAGS; 417 smi_info->si_state = SI_CLEARING_FLAGS;
441} 418}
442 419
420static void start_getting_msg_queue(struct smi_info *smi_info)
421{
422 smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
423 smi_info->curr_msg->data[1] = IPMI_GET_MSG_CMD;
424 smi_info->curr_msg->data_size = 2;
425
426 smi_info->handlers->start_transaction(
427 smi_info->si_sm,
428 smi_info->curr_msg->data,
429 smi_info->curr_msg->data_size);
430 smi_info->si_state = SI_GETTING_MESSAGES;
431}
432
433static void start_getting_events(struct smi_info *smi_info)
434{
435 smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
436 smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
437 smi_info->curr_msg->data_size = 2;
438
439 smi_info->handlers->start_transaction(
440 smi_info->si_sm,
441 smi_info->curr_msg->data,
442 smi_info->curr_msg->data_size);
443 smi_info->si_state = SI_GETTING_EVENTS;
444}
445
443static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val) 446static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
444{ 447{
445 smi_info->last_timeout_jiffies = jiffies; 448 smi_info->last_timeout_jiffies = jiffies;
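The two helpers factored out in the hunk above build the same two-byte request shape: NetFn in the top six bits of byte 0 (LUN 0 in the low bits), command code in byte 1. A minimal user-space sketch of that framing, with the command values taken from the IPMI spec (names mirror the kernel headers, but this is not the kernel code itself):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Constants per the IPMI spec; names follow the kernel's headers. */
#define IPMI_NETFN_APP_REQUEST          0x06
#define IPMI_GET_MSG_CMD                0x33
#define IPMI_READ_EVENT_MSG_BUFFER_CMD  0x35

/* Frame an App-request: NetFn in the upper six bits of byte 0
 * (LUN 0 below), command in byte 1; returns the request length. */
static size_t frame_app_request(uint8_t cmd, uint8_t *buf)
{
    buf[0] = IPMI_NETFN_APP_REQUEST << 2;
    buf[1] = cmd;
    return 2;
}
```

Both start_getting_msg_queue() and start_getting_events() are this pattern with a different command byte, followed by start_transaction() and a state change.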
@@ -453,22 +456,45 @@ static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
453 * polled until we can allocate some memory. Once we have some 456 * polled until we can allocate some memory. Once we have some
454 * memory, we will re-enable the interrupt. 457 * memory, we will re-enable the interrupt.
455 */ 458 */
456static inline void disable_si_irq(struct smi_info *smi_info) 459static inline bool disable_si_irq(struct smi_info *smi_info)
457{ 460{
458 if ((smi_info->irq) && (!smi_info->interrupt_disabled)) { 461 if ((smi_info->irq) && (!smi_info->interrupt_disabled)) {
459 start_disable_irq(smi_info);
460 smi_info->interrupt_disabled = true; 462 smi_info->interrupt_disabled = true;
461 if (!atomic_read(&smi_info->stop_operation)) 463 start_check_enables(smi_info);
462 smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); 464 return true;
463 } 465 }
466 return false;
464} 467}
465 468
466static inline void enable_si_irq(struct smi_info *smi_info) 469static inline bool enable_si_irq(struct smi_info *smi_info)
467{ 470{
468 if ((smi_info->irq) && (smi_info->interrupt_disabled)) { 471 if ((smi_info->irq) && (smi_info->interrupt_disabled)) {
469 start_enable_irq(smi_info);
470 smi_info->interrupt_disabled = false; 472 smi_info->interrupt_disabled = false;
473 start_check_enables(smi_info);
474 return true;
471 } 475 }
476 return false;
477}
478
479/*
480 * Allocate a message. If unable to allocate, start the interrupt
481 * disable process and return NULL. If able to allocate but
482 * interrupts are disabled, free the message and return NULL after
483 * starting the interrupt enable process.
484 */
485static struct ipmi_smi_msg *alloc_msg_handle_irq(struct smi_info *smi_info)
486{
487 struct ipmi_smi_msg *msg;
488
489 msg = ipmi_alloc_smi_msg();
490 if (!msg) {
491 if (!disable_si_irq(smi_info))
492 smi_info->si_state = SI_NORMAL;
493 } else if (enable_si_irq(smi_info)) {
494 ipmi_free_smi_msg(msg);
495 msg = NULL;
496 }
497 return msg;
472} 498}
473 499
474static void handle_flags(struct smi_info *smi_info) 500static void handle_flags(struct smi_info *smi_info)
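The allocate-or-toggle-IRQ policy introduced above (alloc_msg_handle_irq) can be exercised on its own. The following user-space sketch uses toy stand-ins for struct smi_info and the kernel allocators, so the structs, fields, and test hook are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-ins; the real driver uses ipmi_alloc_smi_msg() /
 * ipmi_free_smi_msg() and a much larger struct smi_info. */
struct msg { int unused; };

struct si {
    int  irq;
    bool interrupt_disabled;
    bool alloc_fails;            /* test hook simulating -ENOMEM */
    int  check_enables_started;
};

static struct msg *alloc_msg(struct si *si)
{
    return si->alloc_fails ? NULL : malloc(sizeof(struct msg));
}

static void start_check_enables(struct si *si) { si->check_enables_started++; }

static bool disable_si_irq(struct si *si)
{
    if (si->irq && !si->interrupt_disabled) {
        si->interrupt_disabled = true;
        start_check_enables(si);
        return true;
    }
    return false;
}

static bool enable_si_irq(struct si *si)
{
    if (si->irq && si->interrupt_disabled) {
        si->interrupt_disabled = false;
        start_check_enables(si);
        return true;
    }
    return false;
}

/* Mirror of alloc_msg_handle_irq(): on allocation failure start the
 * interrupt-disable path (fall back to polling); on success, if the
 * irq had been disabled for lack of memory, start the re-enable
 * transaction and drop this message rather than race with it. */
static struct msg *alloc_msg_handle_irq(struct si *si)
{
    struct msg *m = alloc_msg(si);

    if (!m) {
        disable_si_irq(si);
    } else if (enable_si_irq(si)) {
        free(m);                 /* enable transaction is in flight */
        m = NULL;
    }
    return m;
}
```

Note that both toggles now funnel through a single start_check_enables() transaction, which is what lets the old SI_ENABLE/DISABLE_INTERRUPTS1/2 states collapse into SI_CHECKING_ENABLES and SI_SETTING_ENABLES.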
@@ -480,45 +506,22 @@ static void handle_flags(struct smi_info *smi_info)
480 506
481 start_clear_flags(smi_info); 507 start_clear_flags(smi_info);
482 smi_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT; 508 smi_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT;
483 ipmi_smi_watchdog_pretimeout(smi_info->intf); 509 if (smi_info->intf)
510 ipmi_smi_watchdog_pretimeout(smi_info->intf);
484 } else if (smi_info->msg_flags & RECEIVE_MSG_AVAIL) { 511 } else if (smi_info->msg_flags & RECEIVE_MSG_AVAIL) {
485 /* Messages available. */ 512 /* Messages available. */
486 smi_info->curr_msg = ipmi_alloc_smi_msg(); 513 smi_info->curr_msg = alloc_msg_handle_irq(smi_info);
487 if (!smi_info->curr_msg) { 514 if (!smi_info->curr_msg)
488 disable_si_irq(smi_info);
489 smi_info->si_state = SI_NORMAL;
490 return; 515 return;
491 }
492 enable_si_irq(smi_info);
493
494 smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
495 smi_info->curr_msg->data[1] = IPMI_GET_MSG_CMD;
496 smi_info->curr_msg->data_size = 2;
497 516
498 smi_info->handlers->start_transaction( 517 start_getting_msg_queue(smi_info);
499 smi_info->si_sm,
500 smi_info->curr_msg->data,
501 smi_info->curr_msg->data_size);
502 smi_info->si_state = SI_GETTING_MESSAGES;
503 } else if (smi_info->msg_flags & EVENT_MSG_BUFFER_FULL) { 518 } else if (smi_info->msg_flags & EVENT_MSG_BUFFER_FULL) {
504 /* Events available. */ 519 /* Events available. */
505 smi_info->curr_msg = ipmi_alloc_smi_msg(); 520 smi_info->curr_msg = alloc_msg_handle_irq(smi_info);
506 if (!smi_info->curr_msg) { 521 if (!smi_info->curr_msg)
507 disable_si_irq(smi_info);
508 smi_info->si_state = SI_NORMAL;
509 return; 522 return;
510 }
511 enable_si_irq(smi_info);
512
513 smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
514 smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
515 smi_info->curr_msg->data_size = 2;
516 523
517 smi_info->handlers->start_transaction( 524 start_getting_events(smi_info);
518 smi_info->si_sm,
519 smi_info->curr_msg->data,
520 smi_info->curr_msg->data_size);
521 smi_info->si_state = SI_GETTING_EVENTS;
522 } else if (smi_info->msg_flags & OEM_DATA_AVAIL && 525 } else if (smi_info->msg_flags & OEM_DATA_AVAIL &&
523 smi_info->oem_data_avail_handler) { 526 smi_info->oem_data_avail_handler) {
524 if (smi_info->oem_data_avail_handler(smi_info)) 527 if (smi_info->oem_data_avail_handler(smi_info))
@@ -527,6 +530,55 @@ static void handle_flags(struct smi_info *smi_info)
527 smi_info->si_state = SI_NORMAL; 530 smi_info->si_state = SI_NORMAL;
528} 531}
529 532
533/*
534 * Global enables we care about.
535 */
536#define GLOBAL_ENABLES_MASK (IPMI_BMC_EVT_MSG_BUFF | IPMI_BMC_RCV_MSG_INTR | \
537 IPMI_BMC_EVT_MSG_INTR)
538
539static u8 current_global_enables(struct smi_info *smi_info, u8 base,
540 bool *irq_on)
541{
542 u8 enables = 0;
543
544 if (smi_info->supports_event_msg_buff)
545 enables |= IPMI_BMC_EVT_MSG_BUFF;
546 else
547 enables &= ~IPMI_BMC_EVT_MSG_BUFF;
548
549 if (smi_info->irq && !smi_info->interrupt_disabled)
550 enables |= IPMI_BMC_RCV_MSG_INTR;
551 else
552 enables &= ~IPMI_BMC_RCV_MSG_INTR;
553
554 if (smi_info->supports_event_msg_buff &&
555 smi_info->irq && !smi_info->interrupt_disabled)
556
557 enables |= IPMI_BMC_EVT_MSG_INTR;
558 else
559 enables &= ~IPMI_BMC_EVT_MSG_INTR;
560
561 *irq_on = enables & (IPMI_BMC_EVT_MSG_INTR | IPMI_BMC_RCV_MSG_INTR);
562
563 return enables;
564}
565
566static void check_bt_irq(struct smi_info *smi_info, bool irq_on)
567{
568 u8 irqstate = smi_info->io.inputb(&smi_info->io, IPMI_BT_INTMASK_REG);
569
570 irqstate &= IPMI_BT_INTMASK_ENABLE_IRQ_BIT;
571
572 if ((bool)irqstate == irq_on)
573 return;
574
575 if (irq_on)
576 smi_info->io.outputb(&smi_info->io, IPMI_BT_INTMASK_REG,
577 IPMI_BT_INTMASK_ENABLE_IRQ_BIT);
578 else
579 smi_info->io.outputb(&smi_info->io, IPMI_BT_INTMASK_REG, 0);
580}
581
530static void handle_transaction_done(struct smi_info *smi_info) 582static void handle_transaction_done(struct smi_info *smi_info)
531{ 583{
532 struct ipmi_smi_msg *msg; 584 struct ipmi_smi_msg *msg;
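The enables computation added above can be checked in isolation. A minimal user-space sketch, with a reduced stand-in for struct smi_info; the bit values are the IPMI spec's and the names mirror the kernel's, but this is an illustration, not the driver code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit values from the IPMI Get/Set BMC Global Enables command. */
#define IPMI_BMC_RCV_MSG_INTR  0x01
#define IPMI_BMC_EVT_MSG_INTR  0x02
#define IPMI_BMC_EVT_MSG_BUFF  0x08

#define GLOBAL_ENABLES_MASK (IPMI_BMC_EVT_MSG_BUFF | IPMI_BMC_RCV_MSG_INTR | \
                             IPMI_BMC_EVT_MSG_INTR)

struct si_state {                /* reduced stand-in for struct smi_info */
    bool supports_event_msg_buff;
    int  irq;                    /* non-zero when an IRQ is assigned */
    bool interrupt_disabled;
};

/* As in the new current_global_enables(): event buffer when supported,
 * receive-message and event interrupts only when an IRQ is usable. */
static uint8_t wanted_global_enables(const struct si_state *si, bool *irq_on)
{
    uint8_t enables = 0;

    if (si->supports_event_msg_buff)
        enables |= IPMI_BMC_EVT_MSG_BUFF;
    if (si->irq && !si->interrupt_disabled)
        enables |= IPMI_BMC_RCV_MSG_INTR;
    if (si->supports_event_msg_buff && si->irq && !si->interrupt_disabled)
        enables |= IPMI_BMC_EVT_MSG_INTR;

    *irq_on = enables & (IPMI_BMC_EVT_MSG_INTR | IPMI_BMC_RCV_MSG_INTR);
    return enables;
}

/* Decide whether a Set BMC Global Enables is needed: compare only the
 * bits this driver manages, preserving the BMC's other bits (current
 * is the enables byte from the Get response, msg[3] in the driver). */
static bool needs_set_enables(uint8_t current, uint8_t wanted)
{
    return wanted != (current & GLOBAL_ENABLES_MASK);
}
```

This is the comparison SI_CHECKING_ENABLES performs periodically, so a BMC that is asynchronously reset (losing its interrupt and event-buffer enables) gets them restored.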
@@ -581,7 +633,6 @@ static void handle_transaction_done(struct smi_info *smi_info)
581 } 633 }
582 634
583 case SI_CLEARING_FLAGS: 635 case SI_CLEARING_FLAGS:
584 case SI_CLEARING_FLAGS_THEN_SET_IRQ:
585 { 636 {
586 unsigned char msg[3]; 637 unsigned char msg[3];
587 638
@@ -592,10 +643,7 @@ static void handle_transaction_done(struct smi_info *smi_info)
592 dev_warn(smi_info->dev, 643 dev_warn(smi_info->dev,
593 "Error clearing flags: %2.2x\n", msg[2]); 644 "Error clearing flags: %2.2x\n", msg[2]);
594 } 645 }
595 if (smi_info->si_state == SI_CLEARING_FLAGS_THEN_SET_IRQ) 646 smi_info->si_state = SI_NORMAL;
596 start_enable_irq(smi_info);
597 else
598 smi_info->si_state = SI_NORMAL;
599 break; 647 break;
600 } 648 }
601 649
@@ -675,9 +723,11 @@ static void handle_transaction_done(struct smi_info *smi_info)
675 break; 723 break;
676 } 724 }
677 725
678 case SI_ENABLE_INTERRUPTS1: 726 case SI_CHECKING_ENABLES:
679 { 727 {
680 unsigned char msg[4]; 728 unsigned char msg[4];
729 u8 enables;
730 bool irq_on;
681 731
682 /* We got the flags from the SMI, now handle them. */ 732 /* We got the flags from the SMI, now handle them. */
683 smi_info->handlers->get_result(smi_info->si_sm, msg, 4); 733 smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
@@ -687,70 +737,53 @@ static void handle_transaction_done(struct smi_info *smi_info)
687 dev_warn(smi_info->dev, 737 dev_warn(smi_info->dev,
688 "Maybe ok, but ipmi might run very slowly.\n"); 738 "Maybe ok, but ipmi might run very slowly.\n");
689 smi_info->si_state = SI_NORMAL; 739 smi_info->si_state = SI_NORMAL;
690 } else { 740 break;
741 }
742 enables = current_global_enables(smi_info, 0, &irq_on);
743 if (smi_info->si_type == SI_BT)
744 /* BT has its own interrupt enable bit. */
745 check_bt_irq(smi_info, irq_on);
746 if (enables != (msg[3] & GLOBAL_ENABLES_MASK)) {
747 /* Enables are not correct, fix them. */
691 msg[0] = (IPMI_NETFN_APP_REQUEST << 2); 748 msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
692 msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD; 749 msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
693 msg[2] = (msg[3] | 750 msg[2] = enables | (msg[3] & ~GLOBAL_ENABLES_MASK);
694 IPMI_BMC_RCV_MSG_INTR |
695 IPMI_BMC_EVT_MSG_INTR);
696 smi_info->handlers->start_transaction( 751 smi_info->handlers->start_transaction(
697 smi_info->si_sm, msg, 3); 752 smi_info->si_sm, msg, 3);
698 smi_info->si_state = SI_ENABLE_INTERRUPTS2; 753 smi_info->si_state = SI_SETTING_ENABLES;
754 } else if (smi_info->supports_event_msg_buff) {
755 smi_info->curr_msg = ipmi_alloc_smi_msg();
756 if (!smi_info->curr_msg) {
757 smi_info->si_state = SI_NORMAL;
758 break;
759 }
760 start_getting_msg_queue(smi_info);
761 } else {
762 smi_info->si_state = SI_NORMAL;
699 } 763 }
700 break; 764 break;
701 } 765 }
702 766
703 case SI_ENABLE_INTERRUPTS2: 767 case SI_SETTING_ENABLES:
704 { 768 {
705 unsigned char msg[4]; 769 unsigned char msg[4];
706 770
707 /* We got the flags from the SMI, now handle them. */
708 smi_info->handlers->get_result(smi_info->si_sm, msg, 4); 771 smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
709 if (msg[2] != 0) { 772 if (msg[2] != 0)
710 dev_warn(smi_info->dev, 773 dev_warn(smi_info->dev,
711 "Couldn't set irq info: %x.\n", msg[2]); 774 "Could not set the global enables: 0x%x.\n",
712 dev_warn(smi_info->dev, 775 msg[2]);
713 "Maybe ok, but ipmi might run very slowly.\n");
714 } else
715 smi_info->interrupt_disabled = false;
716 smi_info->si_state = SI_NORMAL;
717 break;
718 }
719
720 case SI_DISABLE_INTERRUPTS1:
721 {
722 unsigned char msg[4];
723 776
724 /* We got the flags from the SMI, now handle them. */ 777 if (smi_info->supports_event_msg_buff) {
725 smi_info->handlers->get_result(smi_info->si_sm, msg, 4); 778 smi_info->curr_msg = ipmi_alloc_smi_msg();
726 if (msg[2] != 0) { 779 if (!smi_info->curr_msg) {
727 dev_warn(smi_info->dev, "Could not disable interrupts" 780 smi_info->si_state = SI_NORMAL;
728 ", failed get.\n"); 781 break;
729 smi_info->si_state = SI_NORMAL; 782 }
783 start_getting_msg_queue(smi_info);
730 } else { 784 } else {
731 msg[0] = (IPMI_NETFN_APP_REQUEST << 2); 785 smi_info->si_state = SI_NORMAL;
732 msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
733 msg[2] = (msg[3] &
734 ~(IPMI_BMC_RCV_MSG_INTR |
735 IPMI_BMC_EVT_MSG_INTR));
736 smi_info->handlers->start_transaction(
737 smi_info->si_sm, msg, 3);
738 smi_info->si_state = SI_DISABLE_INTERRUPTS2;
739 }
740 break;
741 }
742
743 case SI_DISABLE_INTERRUPTS2:
744 {
745 unsigned char msg[4];
746
747 /* We got the flags from the SMI, now handle them. */
748 smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
749 if (msg[2] != 0) {
750 dev_warn(smi_info->dev, "Could not disable interrupts"
751 ", failed set.\n");
752 } 786 }
753 smi_info->si_state = SI_NORMAL;
754 break; 787 break;
755 } 788 }
756 } 789 }
@@ -808,25 +841,35 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
808 * We prefer handling attn over new messages. But don't do 841 * We prefer handling attn over new messages. But don't do
809 * this if there is not yet an upper layer to handle anything. 842 * this if there is not yet an upper layer to handle anything.
810 */ 843 */
811 if (likely(smi_info->intf) && si_sm_result == SI_SM_ATTN) { 844 if (likely(smi_info->intf) &&
845 (si_sm_result == SI_SM_ATTN || smi_info->got_attn)) {
812 unsigned char msg[2]; 846 unsigned char msg[2];
813 847
814 smi_inc_stat(smi_info, attentions); 848 if (smi_info->si_state != SI_NORMAL) {
849 /*
850 * We got an ATTN, but we are doing something else.
851 * Handle the ATTN later.
852 */
853 smi_info->got_attn = true;
854 } else {
855 smi_info->got_attn = false;
856 smi_inc_stat(smi_info, attentions);
815 857
816 /* 858 /*
817 * Got a attn, send down a get message flags to see 859 * Got a attn, send down a get message flags to see
818 * what's causing it. It would be better to handle 860 * what's causing it. It would be better to handle
819 * this in the upper layer, but due to the way 861 * this in the upper layer, but due to the way
820 * interrupts work with the SMI, that's not really 862 * interrupts work with the SMI, that's not really
821 * possible. 863 * possible.
822 */ 864 */
823 msg[0] = (IPMI_NETFN_APP_REQUEST << 2); 865 msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
824 msg[1] = IPMI_GET_MSG_FLAGS_CMD; 866 msg[1] = IPMI_GET_MSG_FLAGS_CMD;
825 867
826 smi_info->handlers->start_transaction( 868 smi_info->handlers->start_transaction(
827 smi_info->si_sm, msg, 2); 869 smi_info->si_sm, msg, 2);
828 smi_info->si_state = SI_GETTING_FLAGS; 870 smi_info->si_state = SI_GETTING_FLAGS;
829 goto restart; 871 goto restart;
872 }
830 } 873 }
831 874
832 /* If we are currently idle, try to start the next message. */ 875 /* If we are currently idle, try to start the next message. */
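The got_attn deferral introduced above latches an attention that arrives mid-transaction and replays it once the state machine is idle. A small user-space sketch of that rule (reduced state enum and struct, illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

enum state { SI_NORMAL, SI_GETTING_FLAGS, SI_CLEARING_FLAGS };  /* reduced */

struct si {
    enum state st;
    bool got_attn;
    int  attentions;             /* stat counter */
};

/* The deferral rule from the hunk above: an ATTN that arrives while a
 * transaction is in progress is remembered in got_attn and handled on
 * a later pass, when the state machine is back in SI_NORMAL. */
static void handle_attn(struct si *si, bool sm_attn)
{
    if (!sm_attn && !si->got_attn)
        return;
    if (si->st != SI_NORMAL) {
        si->got_attn = true;                 /* busy: remember it */
    } else {
        si->got_attn = false;
        si->attentions++;
        si->st = SI_GETTING_FLAGS;           /* send Get Message Flags */
    }
}
```

Before this change an ATTN raised during another transaction restarted Get Message Flags immediately, stepping on the in-flight request.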
@@ -846,19 +889,21 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
846 */ 889 */
847 atomic_set(&smi_info->req_events, 0); 890 atomic_set(&smi_info->req_events, 0);
848 891
849 smi_info->curr_msg = ipmi_alloc_smi_msg(); 892 /*
850 if (!smi_info->curr_msg) 893 * Take this opportunity to check the interrupt and
851 goto out; 894 * message enable state for the BMC. The BMC can be
852 895 * asynchronously reset, and may thus get interrupts
853 smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2); 896 * disabled and messages disabled.
854 smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD; 897 */
855 smi_info->curr_msg->data_size = 2; 898 if (smi_info->supports_event_msg_buff || smi_info->irq) {
899 start_check_enables(smi_info);
900 } else {
901 smi_info->curr_msg = alloc_msg_handle_irq(smi_info);
902 if (!smi_info->curr_msg)
903 goto out;
856 904
857 smi_info->handlers->start_transaction( 905 start_getting_events(smi_info);
858 smi_info->si_sm, 906 }
859 smi_info->curr_msg->data,
860 smi_info->curr_msg->data_size);
861 smi_info->si_state = SI_GETTING_EVENTS;
862 goto restart; 907 goto restart;
863 } 908 }
864 out: 909 out:
@@ -879,8 +924,7 @@ static void check_start_timer_thread(struct smi_info *smi_info)
879} 924}
880 925
881static void sender(void *send_info, 926static void sender(void *send_info,
882 struct ipmi_smi_msg *msg, 927 struct ipmi_smi_msg *msg)
883 int priority)
884{ 928{
885 struct smi_info *smi_info = send_info; 929 struct smi_info *smi_info = send_info;
886 enum si_sm_result result; 930 enum si_sm_result result;
@@ -889,14 +933,8 @@ static void sender(void *send_info,
889 struct timeval t; 933 struct timeval t;
890#endif 934#endif
891 935
892 if (atomic_read(&smi_info->stop_operation)) { 936 BUG_ON(smi_info->waiting_msg);
893 msg->rsp[0] = msg->data[0] | 4; 937 smi_info->waiting_msg = msg;
894 msg->rsp[1] = msg->data[1];
895 msg->rsp[2] = IPMI_ERR_UNSPECIFIED;
896 msg->rsp_size = 3;
897 deliver_recv_msg(smi_info, msg);
898 return;
899 }
900 938
901#ifdef DEBUG_TIMING 939#ifdef DEBUG_TIMING
902 do_gettimeofday(&t); 940 do_gettimeofday(&t);
@@ -905,16 +943,16 @@ static void sender(void *send_info,
905 943
906 if (smi_info->run_to_completion) { 944 if (smi_info->run_to_completion) {
907 /* 945 /*
908 * If we are running to completion, then throw it in 946 * If we are running to completion, start it and run
909 * the list and run transactions until everything is 947 * transactions until everything is clear.
910 * clear. Priority doesn't matter here.
911 */ 948 */
949 smi_info->curr_msg = smi_info->waiting_msg;
950 smi_info->waiting_msg = NULL;
912 951
913 /* 952 /*
914 * Run to completion means we are single-threaded, no 953 * Run to completion means we are single-threaded, no
915 * need for locks. 954 * need for locks.
916 */ 955 */
917 list_add_tail(&(msg->link), &(smi_info->xmit_msgs));
918 956
919 result = smi_event_handler(smi_info, 0); 957 result = smi_event_handler(smi_info, 0);
920 while (result != SI_SM_IDLE) { 958 while (result != SI_SM_IDLE) {
@@ -926,11 +964,6 @@ static void sender(void *send_info,
926 } 964 }
927 965
928 spin_lock_irqsave(&smi_info->si_lock, flags); 966 spin_lock_irqsave(&smi_info->si_lock, flags);
929 if (priority > 0)
930 list_add_tail(&msg->link, &smi_info->hp_xmit_msgs);
931 else
932 list_add_tail(&msg->link, &smi_info->xmit_msgs);
933
934 check_start_timer_thread(smi_info); 967 check_start_timer_thread(smi_info);
935 spin_unlock_irqrestore(&smi_info->si_lock, flags); 968 spin_unlock_irqrestore(&smi_info->si_lock, flags);
936} 969}
@@ -1068,8 +1101,7 @@ static void request_events(void *send_info)
1068{ 1101{
1069 struct smi_info *smi_info = send_info; 1102 struct smi_info *smi_info = send_info;
1070 1103
1071 if (atomic_read(&smi_info->stop_operation) || 1104 if (!smi_info->has_event_buffer)
1072 !smi_info->has_event_buffer)
1073 return; 1105 return;
1074 1106
1075 atomic_set(&smi_info->req_events, 1); 1107 atomic_set(&smi_info->req_events, 1);
@@ -1697,7 +1729,7 @@ static int parse_str(struct hotmod_vals *v, int *val, char *name, char **curr)
1697 } 1729 }
1698 *s = '\0'; 1730 *s = '\0';
1699 s++; 1731 s++;
1700 for (i = 0; hotmod_ops[i].name; i++) { 1732 for (i = 0; v[i].name; i++) {
1701 if (strcmp(*curr, v[i].name) == 0) { 1733 if (strcmp(*curr, v[i].name) == 0) {
1702 *val = v[i].val; 1734 *val = v[i].val;
1703 *curr = s; 1735 *curr = s;
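The parse_str() fix above is a classic table-lookup bug: the loop terminated on the global hotmod_ops table instead of the table passed in as v, so lookups against any other table could stop early or run past the end. A minimal sketch of the corrected lookup (struct and values are illustrative, shaped like hotmod_vals):

```c
#include <assert.h>
#include <string.h>

struct vals { const char *name; int val; };  /* shape of hotmod_vals */

/* Look curr up in the NULL-terminated table v; the fix in the hunk
 * above is exactly this: terminate the scan on v[i].name, not on an
 * unrelated global table that may have a different length. */
static int lookup_val(const struct vals *v, const char *curr, int *val)
{
    int i;

    for (i = 0; v[i].name; i++) {
        if (strcmp(curr, v[i].name) == 0) {
            *val = v[i].val;
            return 0;
        }
    }
    return -1;                   /* the driver returns -EINVAL */
}
```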
@@ -2133,6 +2165,9 @@ static int try_init_spmi(struct SPMITable *spmi)
2133 case 3: /* BT */ 2165 case 3: /* BT */
2134 info->si_type = SI_BT; 2166 info->si_type = SI_BT;
2135 break; 2167 break;
2168 case 4: /* SSIF, just ignore */
2169 kfree(info);
2170 return -EIO;
2136 default: 2171 default:
2137 printk(KERN_INFO PFX "Unknown ACPI/SPMI SI type %d\n", 2172 printk(KERN_INFO PFX "Unknown ACPI/SPMI SI type %d\n",
2138 spmi->InterfaceType); 2173 spmi->InterfaceType);
@@ -2250,6 +2285,8 @@ static int ipmi_pnp_probe(struct pnp_dev *dev,
2250 case 3: 2285 case 3:
2251 info->si_type = SI_BT; 2286 info->si_type = SI_BT;
2252 break; 2287 break;
2288 case 4: /* SSIF, just ignore */
2289 goto err_free;
2253 default: 2290 default:
2254 dev_info(&dev->dev, "unknown IPMI type %lld\n", tmp); 2291 dev_info(&dev->dev, "unknown IPMI type %lld\n", tmp);
2255 goto err_free; 2292 goto err_free;
@@ -2913,9 +2950,11 @@ static int try_enable_event_buffer(struct smi_info *smi_info)
2913 goto out; 2950 goto out;
2914 } 2951 }
2915 2952
2916 if (resp[3] & IPMI_BMC_EVT_MSG_BUFF) 2953 if (resp[3] & IPMI_BMC_EVT_MSG_BUFF) {
2917 /* buffer is already enabled, nothing to do. */ 2954 /* buffer is already enabled, nothing to do. */
2955 smi_info->supports_event_msg_buff = true;
2918 goto out; 2956 goto out;
2957 }
2919 2958
2920 msg[0] = IPMI_NETFN_APP_REQUEST << 2; 2959 msg[0] = IPMI_NETFN_APP_REQUEST << 2;
2921 msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD; 2960 msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
@@ -2948,6 +2987,9 @@ static int try_enable_event_buffer(struct smi_info *smi_info)
2948 * that the event buffer is not supported. 2987 * that the event buffer is not supported.
2949 */ 2988 */
2950 rv = -ENOENT; 2989 rv = -ENOENT;
2990 else
2991 smi_info->supports_event_msg_buff = true;
2992
2951 out: 2993 out:
2952 kfree(resp); 2994 kfree(resp);
2953 return rv; 2995 return rv;
@@ -3188,15 +3230,10 @@ static void setup_xaction_handlers(struct smi_info *smi_info)
3188 3230
3189static inline void wait_for_timer_and_thread(struct smi_info *smi_info) 3231static inline void wait_for_timer_and_thread(struct smi_info *smi_info)
3190{ 3232{
3191 if (smi_info->intf) { 3233 if (smi_info->thread != NULL)
3192 /* 3234 kthread_stop(smi_info->thread);
3193 * The timer and thread are only running if the 3235 if (smi_info->timer_running)
3194 * interface has been started up and registered.
3195 */
3196 if (smi_info->thread != NULL)
3197 kthread_stop(smi_info->thread);
3198 del_timer_sync(&smi_info->si_timer); 3236 del_timer_sync(&smi_info->si_timer);
3199 }
3200} 3237}
3201 3238
3202static struct ipmi_default_vals 3239static struct ipmi_default_vals
@@ -3274,8 +3311,8 @@ static int add_smi(struct smi_info *new_smi)
3274 int rv = 0; 3311 int rv = 0;
3275 3312
3276 printk(KERN_INFO PFX "Adding %s-specified %s state machine", 3313 printk(KERN_INFO PFX "Adding %s-specified %s state machine",
3277 ipmi_addr_src_to_str[new_smi->addr_source], 3314 ipmi_addr_src_to_str(new_smi->addr_source),
3278 si_to_str[new_smi->si_type]); 3315 si_to_str[new_smi->si_type]);
3279 mutex_lock(&smi_infos_lock); 3316 mutex_lock(&smi_infos_lock);
3280 if (!is_new_interface(new_smi)) { 3317 if (!is_new_interface(new_smi)) {
3281 printk(KERN_CONT " duplicate interface\n"); 3318 printk(KERN_CONT " duplicate interface\n");
@@ -3305,7 +3342,7 @@ static int try_smi_init(struct smi_info *new_smi)
3305 printk(KERN_INFO PFX "Trying %s-specified %s state" 3342 printk(KERN_INFO PFX "Trying %s-specified %s state"
3306 " machine at %s address 0x%lx, slave address 0x%x," 3343 " machine at %s address 0x%lx, slave address 0x%x,"
3307 " irq %d\n", 3344 " irq %d\n",
3308 ipmi_addr_src_to_str[new_smi->addr_source], 3345 ipmi_addr_src_to_str(new_smi->addr_source),
3309 si_to_str[new_smi->si_type], 3346 si_to_str[new_smi->si_type],
3310 addr_space_to_str[new_smi->io.addr_type], 3347 addr_space_to_str[new_smi->io.addr_type],
3311 new_smi->io.addr_data, 3348 new_smi->io.addr_data,
@@ -3371,8 +3408,7 @@ static int try_smi_init(struct smi_info *new_smi)
3371 setup_oem_data_handler(new_smi); 3408 setup_oem_data_handler(new_smi);
3372 setup_xaction_handlers(new_smi); 3409 setup_xaction_handlers(new_smi);
3373 3410
3374 INIT_LIST_HEAD(&(new_smi->xmit_msgs)); 3411 new_smi->waiting_msg = NULL;
3375 INIT_LIST_HEAD(&(new_smi->hp_xmit_msgs));
3376 new_smi->curr_msg = NULL; 3412 new_smi->curr_msg = NULL;
3377 atomic_set(&new_smi->req_events, 0); 3413 atomic_set(&new_smi->req_events, 0);
3378 new_smi->run_to_completion = false; 3414 new_smi->run_to_completion = false;
@@ -3380,7 +3416,6 @@ static int try_smi_init(struct smi_info *new_smi)
3380 atomic_set(&new_smi->stats[i], 0); 3416 atomic_set(&new_smi->stats[i], 0);
3381 3417
3382 new_smi->interrupt_disabled = true; 3418 new_smi->interrupt_disabled = true;
3383 atomic_set(&new_smi->stop_operation, 0);
3384 atomic_set(&new_smi->need_watch, 0); 3419 atomic_set(&new_smi->need_watch, 0);
3385 new_smi->intf_num = smi_num; 3420 new_smi->intf_num = smi_num;
3386 smi_num++; 3421 smi_num++;
@@ -3394,9 +3429,15 @@ static int try_smi_init(struct smi_info *new_smi)
3394 * timer to avoid racing with the timer. 3429 * timer to avoid racing with the timer.
3395 */ 3430 */
3396 start_clear_flags(new_smi); 3431 start_clear_flags(new_smi);
3397 /* IRQ is defined to be set when non-zero. */ 3432
3398 if (new_smi->irq) 3433 /*
3399 new_smi->si_state = SI_CLEARING_FLAGS_THEN_SET_IRQ; 3434 * IRQ is defined to be set when non-zero. req_events will
3435 * cause a global flags check that will enable interrupts.
3436 */
3437 if (new_smi->irq) {
3438 new_smi->interrupt_disabled = false;
3439 atomic_set(&new_smi->req_events, 1);
3440 }
3400 3441
3401 if (!new_smi->dev) { 3442 if (!new_smi->dev) {
3402 /* 3443 /*
@@ -3428,7 +3469,6 @@ static int try_smi_init(struct smi_info *new_smi)
3428 new_smi, 3469 new_smi,
3429 &new_smi->device_id, 3470 &new_smi->device_id,
3430 new_smi->dev, 3471 new_smi->dev,
3431 "bmc",
3432 new_smi->slave_addr); 3472 new_smi->slave_addr);
3433 if (rv) { 3473 if (rv) {
3434 dev_err(new_smi->dev, "Unable to register device: error %d\n", 3474 dev_err(new_smi->dev, "Unable to register device: error %d\n",
@@ -3466,15 +3506,15 @@ static int try_smi_init(struct smi_info *new_smi)
3466 return 0; 3506 return 0;
3467 3507
3468 out_err_stop_timer: 3508 out_err_stop_timer:
3469 atomic_inc(&new_smi->stop_operation);
3470 wait_for_timer_and_thread(new_smi); 3509 wait_for_timer_and_thread(new_smi);
3471 3510
3472 out_err: 3511 out_err:
3473 new_smi->interrupt_disabled = true; 3512 new_smi->interrupt_disabled = true;
3474 3513
3475 if (new_smi->intf) { 3514 if (new_smi->intf) {
3476 ipmi_unregister_smi(new_smi->intf); 3515 ipmi_smi_t intf = new_smi->intf;
3477 new_smi->intf = NULL; 3516 new_smi->intf = NULL;
3517 ipmi_unregister_smi(intf);
3478 } 3518 }
3479 3519
3480 if (new_smi->irq_cleanup) { 3520 if (new_smi->irq_cleanup) {
@@ -3653,60 +3693,49 @@ module_init(init_ipmi_si);
3653static void cleanup_one_si(struct smi_info *to_clean) 3693static void cleanup_one_si(struct smi_info *to_clean)
3654{ 3694{
3655 int rv = 0; 3695 int rv = 0;
3656 unsigned long flags;
3657 3696
3658 if (!to_clean) 3697 if (!to_clean)
3659 return; 3698 return;
3660 3699
3700 if (to_clean->intf) {
3701 ipmi_smi_t intf = to_clean->intf;
3702
3703 to_clean->intf = NULL;
3704 rv = ipmi_unregister_smi(intf);
3705 if (rv) {
3706 pr_err(PFX "Unable to unregister device: errno=%d\n",
3707 rv);
3708 }
3709 }
3710
3661 if (to_clean->dev) 3711 if (to_clean->dev)
3662 dev_set_drvdata(to_clean->dev, NULL); 3712 dev_set_drvdata(to_clean->dev, NULL);
3663 3713
3664 list_del(&to_clean->link); 3714 list_del(&to_clean->link);
3665 3715
3666 /* Tell the driver that we are shutting down. */
-	atomic_inc(&to_clean->stop_operation);
-
 	/*
-	 * Make sure the timer and thread are stopped and will not run
-	 * again.
+	 * Make sure that interrupts, the timer and the thread are
+	 * stopped and will not run again.
 	 */
+	if (to_clean->irq_cleanup)
+		to_clean->irq_cleanup(to_clean);
 	wait_for_timer_and_thread(to_clean);
 
 	/*
 	 * Timeouts are stopped, now make sure the interrupts are off
-	 * for the device.  A little tricky with locks to make sure
-	 * there are no races.
+	 * in the BMC.  Note that timers and CPU interrupts are off,
+	 * so no need for locks.
 	 */
-	spin_lock_irqsave(&to_clean->si_lock, flags);
 	while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) {
-		spin_unlock_irqrestore(&to_clean->si_lock, flags);
 		poll(to_clean);
 		schedule_timeout_uninterruptible(1);
-		spin_lock_irqsave(&to_clean->si_lock, flags);
 	}
 	disable_si_irq(to_clean);
-	spin_unlock_irqrestore(&to_clean->si_lock, flags);
 	while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) {
 		poll(to_clean);
 		schedule_timeout_uninterruptible(1);
 	}
 
-	/* Clean up interrupts and make sure that everything is done. */
-	if (to_clean->irq_cleanup)
-		to_clean->irq_cleanup(to_clean);
-	while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) {
-		poll(to_clean);
-		schedule_timeout_uninterruptible(1);
-	}
-
-	if (to_clean->intf)
-		rv = ipmi_unregister_smi(to_clean->intf);
-
-	if (rv) {
-		printk(KERN_ERR PFX "Unable to unregister device: errno=%d\n",
-		       rv);
-	}
-
 	if (to_clean->handlers)
 		to_clean->handlers->cleanup(to_clean->si_sm);
 
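The hunk above moves irq_cleanup() ahead of the polling loops and drops the old lock dance. The resulting shutdown discipline — poll the state machine to idle, cut interrupts, then poll again to drain anything that raced in — can be sketched with a toy state machine (hypothetical types and helpers, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

enum si_state { SI_NORMAL, SI_BUSY };

struct si_sm {
	enum si_state state;
	int pending;		/* work left before the machine goes idle */
	bool irq_enabled;
};

/* One polling step: drain pending work, then return to SI_NORMAL. */
static void poll_sm(struct si_sm *sm)
{
	if (sm->pending > 0 && --sm->pending == 0)
		sm->state = SI_NORMAL;
}

/* Mirror of the cleanup ordering: poll to idle, drop the irq, poll again. */
static void shutdown(struct si_sm *sm)
{
	while (sm->state != SI_NORMAL)
		poll_sm(sm);
	sm->irq_enabled = false;	/* stands in for disable_si_irq() */
	while (sm->state != SI_NORMAL)
		poll_sm(sm);
}
```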
diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
new file mode 100644
index 000000000000..e178ac27e73c
--- /dev/null
+++ b/drivers/char/ipmi/ipmi_ssif.c
@@ -0,0 +1,1870 @@
1/*
2 * ipmi_ssif.c
3 *
  4 * The interface to the IPMI driver for SMBus access to an SMBus
  5 * compliant device.  Called SSIF by the IPMI spec.
6 *
7 * Author: Intel Corporation
8 * Todd Davis <todd.c.davis@intel.com>
9 *
10 * Rewritten by Corey Minyard <minyard@acm.org> to support the
11 * non-blocking I2C interface, add support for multi-part
 12 * non-blocking I2C interface, add support for multi-part
13 *
14 * Copyright 2003 Intel Corporation
15 * Copyright 2005 MontaVista Software
16 *
17 * This program is free software; you can redistribute it and/or modify it
18 * under the terms of the GNU General Public License as published by the
19 * Free Software Foundation; either version 2 of the License, or (at your
20 * option) any later version.
21 */
22
23/*
24 * This file holds the "policy" for the interface to the SSIF state
25 * machine. It does the configuration, handles timers and interrupts,
26 * and drives the real SSIF state machine.
27 */
28
29/*
30 * TODO: Figure out how to use SMB alerts. This will require a new
31 * interface into the I2C driver, I believe.
32 */
33
34#include <linux/version.h>
35#if defined(MODVERSIONS)
36#include <linux/modversions.h>
37#endif
38
39#include <linux/module.h>
40#include <linux/moduleparam.h>
41#include <linux/sched.h>
42#include <linux/seq_file.h>
43#include <linux/timer.h>
44#include <linux/delay.h>
45#include <linux/errno.h>
46#include <linux/spinlock.h>
47#include <linux/slab.h>
48#include <linux/list.h>
49#include <linux/i2c.h>
50#include <linux/ipmi_smi.h>
51#include <linux/init.h>
52#include <linux/dmi.h>
53#include <linux/kthread.h>
54#include <linux/acpi.h>
55
56#define PFX "ipmi_ssif: "
57#define DEVICE_NAME "ipmi_ssif"
58
59#define IPMI_GET_SYSTEM_INTERFACE_CAPABILITIES_CMD 0x57
60
61#define SSIF_IPMI_REQUEST 2
62#define SSIF_IPMI_MULTI_PART_REQUEST_START 6
63#define SSIF_IPMI_MULTI_PART_REQUEST_MIDDLE 7
64#define SSIF_IPMI_RESPONSE 3
65#define SSIF_IPMI_MULTI_PART_RESPONSE_MIDDLE 9
66
67/* ssif_debug is a bit-field
68 * SSIF_DEBUG_MSG - commands and their responses
 69 * SSIF_DEBUG_STATE - message states
70 * SSIF_DEBUG_TIMING - Measure times between events in the driver
71 */
72#define SSIF_DEBUG_TIMING 4
73#define SSIF_DEBUG_STATE 2
74#define SSIF_DEBUG_MSG 1
75#define SSIF_NODEBUG 0
76#define SSIF_DEFAULT_DEBUG (SSIF_NODEBUG)
77
78/*
79 * Timer values
80 */
81#define SSIF_MSG_USEC 20000 /* 20ms between message tries. */
82#define SSIF_MSG_PART_USEC 5000 /* 5ms for a message part */
83
 84/* How many times do we retry sending/receiving the message. */
85#define SSIF_SEND_RETRIES 5
86#define SSIF_RECV_RETRIES 250
87
88#define SSIF_MSG_MSEC (SSIF_MSG_USEC / 1000)
89#define SSIF_MSG_JIFFIES ((SSIF_MSG_USEC * 1000) / TICK_NSEC)
90#define SSIF_MSG_PART_JIFFIES ((SSIF_MSG_PART_USEC * 1000) / TICK_NSEC)
91
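The jiffies macros above divide a microsecond interval by the kernel's tick length. As an arithmetic sketch, assuming HZ = 250 (so TICK_NSEC is 4,000,000 ns per jiffy — an assumption for illustration, the real value is config-dependent), the 20 ms retry interval works out to 5 jiffies and the 5 ms part interval to 1:

```c
#include <assert.h>

/* Assumed tick length for illustration: HZ = 250 -> 4 ms per jiffy. */
#define TICK_NSEC 4000000UL

#define SSIF_MSG_USEC      20000	/* 20ms between message tries. */
#define SSIF_MSG_PART_USEC  5000	/* 5ms for a message part */

/* Same conversion as the driver: usec -> nsec, divided by the tick. */
#define SSIF_MSG_JIFFIES      ((SSIF_MSG_USEC * 1000UL) / TICK_NSEC)
#define SSIF_MSG_PART_JIFFIES ((SSIF_MSG_PART_USEC * 1000UL) / TICK_NSEC)
```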
92enum ssif_intf_state {
93 SSIF_NORMAL,
94 SSIF_GETTING_FLAGS,
95 SSIF_GETTING_EVENTS,
96 SSIF_CLEARING_FLAGS,
97 SSIF_GETTING_MESSAGES,
98 /* FIXME - add watchdog stuff. */
99};
100
101#define SSIF_IDLE(ssif) ((ssif)->ssif_state == SSIF_NORMAL \
102 && (ssif)->curr_msg == NULL)
103
104/*
105 * Indexes into stats[] in ssif_info below.
106 */
107enum ssif_stat_indexes {
108 /* Number of total messages sent. */
109 SSIF_STAT_sent_messages = 0,
110
111 /*
112 * Number of message parts sent. Messages may be broken into
113 * parts if they are long.
114 */
115 SSIF_STAT_sent_messages_parts,
116
117 /*
 118	 * Number of times a message was retried.
119 */
120 SSIF_STAT_send_retries,
121
122 /*
123 * Number of times the send of a message failed.
124 */
125 SSIF_STAT_send_errors,
126
127 /*
128 * Number of message responses received.
129 */
130 SSIF_STAT_received_messages,
131
132 /*
133 * Number of message fragments received.
134 */
135 SSIF_STAT_received_message_parts,
136
137 /*
138 * Number of times the receive of a message was retried.
139 */
140 SSIF_STAT_receive_retries,
141
142 /*
143 * Number of errors receiving messages.
144 */
145 SSIF_STAT_receive_errors,
146
147 /*
148 * Number of times a flag fetch was requested.
149 */
150 SSIF_STAT_flag_fetches,
151
152 /*
153 * Number of times the hardware didn't follow the state machine.
154 */
155 SSIF_STAT_hosed,
156
157 /*
158 * Number of received events.
159 */
160 SSIF_STAT_events,
161
 162	/* Number of asynchronous messages received. */
163 SSIF_STAT_incoming_messages,
164
165 /* Number of watchdog pretimeouts. */
166 SSIF_STAT_watchdog_pretimeouts,
167
168 /* Always add statistics before this value, it must be last. */
169 SSIF_NUM_STATS
170};
171
172struct ssif_addr_info {
173 unsigned short addr;
174 struct i2c_board_info binfo;
175 char *adapter_name;
176 int debug;
177 int slave_addr;
178 enum ipmi_addr_src addr_src;
179 union ipmi_smi_info_union addr_info;
180
181 struct mutex clients_mutex;
182 struct list_head clients;
183
184 struct list_head link;
185};
186
187struct ssif_info;
188
189typedef void (*ssif_i2c_done)(struct ssif_info *ssif_info, int result,
190 unsigned char *data, unsigned int len);
191
192struct ssif_info {
193 ipmi_smi_t intf;
194 int intf_num;
195 spinlock_t lock;
196 struct ipmi_smi_msg *waiting_msg;
197 struct ipmi_smi_msg *curr_msg;
198 enum ssif_intf_state ssif_state;
199 unsigned long ssif_debug;
200
201 struct ipmi_smi_handlers handlers;
202
203 enum ipmi_addr_src addr_source; /* ACPI, PCI, SMBIOS, hardcode, etc. */
204 union ipmi_smi_info_union addr_info;
205
206 /*
207 * Flags from the last GET_MSG_FLAGS command, used when an ATTN
208 * is set to hold the flags until we are done handling everything
209 * from the flags.
210 */
211#define RECEIVE_MSG_AVAIL 0x01
212#define EVENT_MSG_BUFFER_FULL 0x02
213#define WDT_PRE_TIMEOUT_INT 0x08
214 unsigned char msg_flags;
215
216 bool has_event_buffer;
217
218 /*
219 * If set to true, this will request events the next time the
220 * state machine is idle.
221 */
222 bool req_events;
223
224 /*
225 * If set to true, this will request flags the next time the
226 * state machine is idle.
227 */
228 bool req_flags;
229
230 /*
231 * Used to perform timer operations when run-to-completion
232 * mode is on. This is a countdown timer.
233 */
234 int rtc_us_timer;
235
236 /* Used for sending/receiving data. +1 for the length. */
237 unsigned char data[IPMI_MAX_MSG_LENGTH + 1];
238 unsigned int data_len;
239
240 /* Temp receive buffer, gets copied into data. */
241 unsigned char recv[I2C_SMBUS_BLOCK_MAX];
242
243 struct i2c_client *client;
244 ssif_i2c_done done_handler;
245
246 /* Thread interface handling */
247 struct task_struct *thread;
248 struct completion wake_thread;
249 bool stopping;
250 int i2c_read_write;
251 int i2c_command;
252 unsigned char *i2c_data;
253 unsigned int i2c_size;
254
255 /* From the device id response. */
256 struct ipmi_device_id device_id;
257
258 struct timer_list retry_timer;
259 int retries_left;
260
261 /* Info from SSIF cmd */
262 unsigned char max_xmit_msg_size;
263 unsigned char max_recv_msg_size;
264 unsigned int multi_support;
265 int supports_pec;
266
267#define SSIF_NO_MULTI 0
268#define SSIF_MULTI_2_PART 1
269#define SSIF_MULTI_n_PART 2
270 unsigned char *multi_data;
271 unsigned int multi_len;
272 unsigned int multi_pos;
273
274 atomic_t stats[SSIF_NUM_STATS];
275};
276
277#define ssif_inc_stat(ssif, stat) \
278 atomic_inc(&(ssif)->stats[SSIF_STAT_ ## stat])
279#define ssif_get_stat(ssif, stat) \
280 ((unsigned int) atomic_read(&(ssif)->stats[SSIF_STAT_ ## stat]))
281
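The ssif_inc_stat/ssif_get_stat macros rely on `##` token pasting to build the full enum index (`SSIF_STAT_sent_messages` and so on) from a short name. A minimal stand-alone version of the same pattern, with plain ints standing in for atomic_t:

```c
#include <assert.h>

/* Shortened stand-in for the driver's stat enum. */
enum { STAT_sent_messages, STAT_hosed, NUM_STATS };

static int stats[NUM_STATS];

/* Token pasting turns the short name into the full enum constant. */
#define inc_stat(stat) (stats[STAT_ ## stat]++)
#define get_stat(stat) (stats[STAT_ ## stat])
```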
282static bool initialized;
283
284static atomic_t next_intf = ATOMIC_INIT(0);
285
286static void return_hosed_msg(struct ssif_info *ssif_info,
287 struct ipmi_smi_msg *msg);
288static void start_next_msg(struct ssif_info *ssif_info, unsigned long *flags);
289static int start_send(struct ssif_info *ssif_info,
290 unsigned char *data,
291 unsigned int len);
292
293static unsigned long *ipmi_ssif_lock_cond(struct ssif_info *ssif_info,
294 unsigned long *flags)
295{
296 spin_lock_irqsave(&ssif_info->lock, *flags);
297 return flags;
298}
299
300static void ipmi_ssif_unlock_cond(struct ssif_info *ssif_info,
301 unsigned long *flags)
302{
303 spin_unlock_irqrestore(&ssif_info->lock, *flags);
304}
305
306static void deliver_recv_msg(struct ssif_info *ssif_info,
307 struct ipmi_smi_msg *msg)
308{
309 ipmi_smi_t intf = ssif_info->intf;
310
311 if (!intf) {
312 ipmi_free_smi_msg(msg);
313 } else if (msg->rsp_size < 0) {
314 return_hosed_msg(ssif_info, msg);
315 pr_err(PFX
316 "Malformed message in deliver_recv_msg: rsp_size = %d\n",
317 msg->rsp_size);
318 } else {
319 ipmi_smi_msg_received(intf, msg);
320 }
321}
322
323static void return_hosed_msg(struct ssif_info *ssif_info,
324 struct ipmi_smi_msg *msg)
325{
326 ssif_inc_stat(ssif_info, hosed);
327
328 /* Make it a response */
329 msg->rsp[0] = msg->data[0] | 4;
330 msg->rsp[1] = msg->data[1];
331 msg->rsp[2] = 0xFF; /* Unknown error. */
332 msg->rsp_size = 3;
333
334 deliver_recv_msg(ssif_info, msg);
335}
336
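return_hosed_msg() above fabricates a response header by OR-ing 4 into the request's first byte. That works because byte 0 encodes (netfn << 2) | LUN and a response netfn is the request netfn plus one, so OR-ing in (1 << 2) flips request to response. A small illustration (to_response() is an editorial helper, not driver code):

```c
#include <assert.h>

#define IPMI_NETFN_APP_REQUEST  0x06
#define IPMI_NETFN_APP_RESPONSE 0x07

/*
 * Byte 0 of an IPMI message is (netfn << 2) | LUN.  Response netfns
 * are odd (request netfn + 1), so setting bit 2 of byte 0 converts a
 * request header into the matching response header.
 */
static unsigned char to_response(unsigned char req_byte0)
{
	return req_byte0 | 4;
}
```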
337/*
338 * Must be called with the message lock held. This will release the
339 * message lock. Note that the caller will check SSIF_IDLE and start a
340 * new operation, so there is no need to check for new messages to
341 * start in here.
342 */
343static void start_clear_flags(struct ssif_info *ssif_info, unsigned long *flags)
344{
345 unsigned char msg[3];
346
347 ssif_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT;
348 ssif_info->ssif_state = SSIF_CLEARING_FLAGS;
349 ipmi_ssif_unlock_cond(ssif_info, flags);
350
351 /* Make sure the watchdog pre-timeout flag is not set at startup. */
352 msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
353 msg[1] = IPMI_CLEAR_MSG_FLAGS_CMD;
354 msg[2] = WDT_PRE_TIMEOUT_INT;
355
356 if (start_send(ssif_info, msg, 3) != 0) {
357 /* Error, just go to normal state. */
358 ssif_info->ssif_state = SSIF_NORMAL;
359 }
360}
361
362static void start_flag_fetch(struct ssif_info *ssif_info, unsigned long *flags)
363{
364 unsigned char mb[2];
365
366 ssif_info->req_flags = false;
367 ssif_info->ssif_state = SSIF_GETTING_FLAGS;
368 ipmi_ssif_unlock_cond(ssif_info, flags);
369
370 mb[0] = (IPMI_NETFN_APP_REQUEST << 2);
371 mb[1] = IPMI_GET_MSG_FLAGS_CMD;
372 if (start_send(ssif_info, mb, 2) != 0)
373 ssif_info->ssif_state = SSIF_NORMAL;
374}
375
376static void check_start_send(struct ssif_info *ssif_info, unsigned long *flags,
377 struct ipmi_smi_msg *msg)
378{
379 if (start_send(ssif_info, msg->data, msg->data_size) != 0) {
380 unsigned long oflags;
381
382 flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
383 ssif_info->curr_msg = NULL;
384 ssif_info->ssif_state = SSIF_NORMAL;
385 ipmi_ssif_unlock_cond(ssif_info, flags);
386 ipmi_free_smi_msg(msg);
387 }
388}
389
390static void start_event_fetch(struct ssif_info *ssif_info, unsigned long *flags)
391{
392 struct ipmi_smi_msg *msg;
393
394 ssif_info->req_events = false;
395
396 msg = ipmi_alloc_smi_msg();
397 if (!msg) {
398 ssif_info->ssif_state = SSIF_NORMAL;
399 return;
400 }
401
402 ssif_info->curr_msg = msg;
403 ssif_info->ssif_state = SSIF_GETTING_EVENTS;
404 ipmi_ssif_unlock_cond(ssif_info, flags);
405
406 msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
407 msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
408 msg->data_size = 2;
409
410 check_start_send(ssif_info, flags, msg);
411}
412
413static void start_recv_msg_fetch(struct ssif_info *ssif_info,
414 unsigned long *flags)
415{
416 struct ipmi_smi_msg *msg;
417
418 msg = ipmi_alloc_smi_msg();
419 if (!msg) {
420 ssif_info->ssif_state = SSIF_NORMAL;
421 return;
422 }
423
424 ssif_info->curr_msg = msg;
425 ssif_info->ssif_state = SSIF_GETTING_MESSAGES;
426 ipmi_ssif_unlock_cond(ssif_info, flags);
427
428 msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
429 msg->data[1] = IPMI_GET_MSG_CMD;
430 msg->data_size = 2;
431
432 check_start_send(ssif_info, flags, msg);
433}
434
435/*
436 * Must be called with the message lock held. This will release the
437 * message lock. Note that the caller will check SSIF_IDLE and start a
438 * new operation, so there is no need to check for new messages to
439 * start in here.
440 */
441static void handle_flags(struct ssif_info *ssif_info, unsigned long *flags)
442{
443 if (ssif_info->msg_flags & WDT_PRE_TIMEOUT_INT) {
444 ipmi_smi_t intf = ssif_info->intf;
445 /* Watchdog pre-timeout */
446 ssif_inc_stat(ssif_info, watchdog_pretimeouts);
447 start_clear_flags(ssif_info, flags);
448 if (intf)
449 ipmi_smi_watchdog_pretimeout(intf);
450 } else if (ssif_info->msg_flags & RECEIVE_MSG_AVAIL)
451 /* Messages available. */
452 start_recv_msg_fetch(ssif_info, flags);
453 else if (ssif_info->msg_flags & EVENT_MSG_BUFFER_FULL)
454 /* Events available. */
455 start_event_fetch(ssif_info, flags);
456 else {
457 ssif_info->ssif_state = SSIF_NORMAL;
458 ipmi_ssif_unlock_cond(ssif_info, flags);
459 }
460}
461
462static int ipmi_ssif_thread(void *data)
463{
464 struct ssif_info *ssif_info = data;
465
466 while (!kthread_should_stop()) {
467 int result;
468
469 /* Wait for something to do */
470 wait_for_completion(&ssif_info->wake_thread);
471 init_completion(&ssif_info->wake_thread);
472
473 if (ssif_info->stopping)
474 break;
475
476 if (ssif_info->i2c_read_write == I2C_SMBUS_WRITE) {
477 result = i2c_smbus_write_block_data(
478 ssif_info->client, SSIF_IPMI_REQUEST,
479 ssif_info->i2c_data[0],
480 ssif_info->i2c_data + 1);
481 ssif_info->done_handler(ssif_info, result, NULL, 0);
482 } else {
483 result = i2c_smbus_read_block_data(
484 ssif_info->client, SSIF_IPMI_RESPONSE,
485 ssif_info->i2c_data);
486 if (result < 0)
487 ssif_info->done_handler(ssif_info, result,
488 NULL, 0);
489 else
490 ssif_info->done_handler(ssif_info, 0,
491 ssif_info->i2c_data,
492 result);
493 }
494 }
495
496 return 0;
497}
498
499static int ssif_i2c_send(struct ssif_info *ssif_info,
500 ssif_i2c_done handler,
501 int read_write, int command,
502 unsigned char *data, unsigned int size)
503{
504 ssif_info->done_handler = handler;
505
506 ssif_info->i2c_read_write = read_write;
507 ssif_info->i2c_command = command;
508 ssif_info->i2c_data = data;
509 ssif_info->i2c_size = size;
510 complete(&ssif_info->wake_thread);
511 return 0;
512}
513
514
515static void msg_done_handler(struct ssif_info *ssif_info, int result,
516 unsigned char *data, unsigned int len);
517
518static void retry_timeout(unsigned long data)
519{
520 struct ssif_info *ssif_info = (void *) data;
521 int rv;
522
523 if (ssif_info->stopping)
524 return;
525
526 ssif_info->rtc_us_timer = 0;
527
528 rv = ssif_i2c_send(ssif_info, msg_done_handler, I2C_SMBUS_READ,
529 SSIF_IPMI_RESPONSE,
530 ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
531 if (rv < 0) {
532 /* request failed, just return the error. */
533 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
534 pr_info("Error from i2c_non_blocking_op(5)\n");
535
536 msg_done_handler(ssif_info, -EIO, NULL, 0);
537 }
538}
539
540static int start_resend(struct ssif_info *ssif_info);
541
542static void msg_done_handler(struct ssif_info *ssif_info, int result,
543 unsigned char *data, unsigned int len)
544{
545 struct ipmi_smi_msg *msg;
546 unsigned long oflags, *flags;
547 int rv;
548
549 /*
550 * We are single-threaded here, so no need for a lock until we
551 * start messing with driver states or the queues.
552 */
553
554 if (result < 0) {
555 ssif_info->retries_left--;
556 if (ssif_info->retries_left > 0) {
557 ssif_inc_stat(ssif_info, receive_retries);
558
559 mod_timer(&ssif_info->retry_timer,
560 jiffies + SSIF_MSG_JIFFIES);
561 ssif_info->rtc_us_timer = SSIF_MSG_USEC;
562 return;
563 }
564
565 ssif_inc_stat(ssif_info, receive_errors);
566
567 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
568 pr_info("Error in msg_done_handler: %d\n", result);
569 len = 0;
570 goto continue_op;
571 }
572
573 if ((len > 1) && (ssif_info->multi_pos == 0)
574 && (data[0] == 0x00) && (data[1] == 0x01)) {
575 /* Start of multi-part read. Start the next transaction. */
576 int i;
577
578 ssif_inc_stat(ssif_info, received_message_parts);
579
580 /* Remove the multi-part read marker. */
581 for (i = 0; i < (len-2); i++)
582 ssif_info->data[i] = data[i+2];
583 len -= 2;
584 ssif_info->multi_len = len;
585 ssif_info->multi_pos = 1;
586
587 rv = ssif_i2c_send(ssif_info, msg_done_handler, I2C_SMBUS_READ,
588 SSIF_IPMI_MULTI_PART_RESPONSE_MIDDLE,
589 ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
590 if (rv < 0) {
591 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
592 pr_info("Error from i2c_non_blocking_op(1)\n");
593
594 result = -EIO;
595 } else
596 return;
597 } else if (ssif_info->multi_pos) {
598 /* Middle of multi-part read. Start the next transaction. */
599 int i;
600 unsigned char blocknum;
601
602 if (len == 0) {
603 result = -EIO;
604 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
605 pr_info(PFX "Middle message with no data\n");
606
607 goto continue_op;
608 }
609
610 blocknum = data[ssif_info->multi_len];
611
612 if (ssif_info->multi_len+len-1 > IPMI_MAX_MSG_LENGTH) {
613 /* Received message too big, abort the operation. */
614 result = -E2BIG;
615 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
616 pr_info("Received message too big\n");
617
618 goto continue_op;
619 }
620
621 /* Remove the blocknum from the data. */
622 for (i = 0; i < (len-1); i++)
623 ssif_info->data[i+ssif_info->multi_len] = data[i+1];
624 len--;
625 ssif_info->multi_len += len;
626 if (blocknum == 0xff) {
627 /* End of read */
628 len = ssif_info->multi_len;
629 data = ssif_info->data;
630 } else if ((blocknum+1) != ssif_info->multi_pos) {
631 /*
632 * Out of sequence block, just abort. Block
633 * numbers start at zero for the second block,
634 * but multi_pos starts at one, so the +1.
635 */
636 result = -EIO;
637 } else {
638 ssif_inc_stat(ssif_info, received_message_parts);
639
640 ssif_info->multi_pos++;
641
642 rv = ssif_i2c_send(ssif_info, msg_done_handler,
643 I2C_SMBUS_READ,
644 SSIF_IPMI_MULTI_PART_RESPONSE_MIDDLE,
645 ssif_info->recv,
646 I2C_SMBUS_BLOCK_DATA);
647 if (rv < 0) {
648 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
649 pr_info(PFX
650 "Error from i2c_non_blocking_op(2)\n");
651
652 result = -EIO;
653 } else
654 return;
655 }
656 }
657
658 if (result < 0) {
659 ssif_inc_stat(ssif_info, receive_errors);
660 } else {
661 ssif_inc_stat(ssif_info, received_messages);
662 ssif_inc_stat(ssif_info, received_message_parts);
663 }
664
665
666 continue_op:
667 if (ssif_info->ssif_debug & SSIF_DEBUG_STATE)
668 pr_info(PFX "DONE 1: state = %d, result=%d.\n",
669 ssif_info->ssif_state, result);
670
671 flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
672 msg = ssif_info->curr_msg;
673 if (msg) {
674 msg->rsp_size = len;
675 if (msg->rsp_size > IPMI_MAX_MSG_LENGTH)
676 msg->rsp_size = IPMI_MAX_MSG_LENGTH;
677 memcpy(msg->rsp, data, msg->rsp_size);
678 ssif_info->curr_msg = NULL;
679 }
680
681 switch (ssif_info->ssif_state) {
682 case SSIF_NORMAL:
683 ipmi_ssif_unlock_cond(ssif_info, flags);
684 if (!msg)
685 break;
686
687 if (result < 0)
688 return_hosed_msg(ssif_info, msg);
689 else
690 deliver_recv_msg(ssif_info, msg);
691 break;
692
693 case SSIF_GETTING_FLAGS:
694 /* We got the flags from the SSIF, now handle them. */
695 if ((result < 0) || (len < 4) || (data[2] != 0)) {
696 /*
697 * Error fetching flags, or invalid length,
698 * just give up for now.
699 */
700 ssif_info->ssif_state = SSIF_NORMAL;
701 ipmi_ssif_unlock_cond(ssif_info, flags);
702 pr_warn(PFX "Error getting flags: %d %d, %x\n",
703 result, len, data[2]);
704 } else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2
705 || data[1] != IPMI_GET_MSG_FLAGS_CMD) {
706 pr_warn(PFX "Invalid response getting flags: %x %x\n",
707 data[0], data[1]);
708 } else {
709 ssif_inc_stat(ssif_info, flag_fetches);
710 ssif_info->msg_flags = data[3];
711 handle_flags(ssif_info, flags);
712 }
713 break;
714
715 case SSIF_CLEARING_FLAGS:
716 /* We cleared the flags. */
717 if ((result < 0) || (len < 3) || (data[2] != 0)) {
718 /* Error clearing flags */
719 pr_warn(PFX "Error clearing flags: %d %d, %x\n",
720 result, len, data[2]);
721 } else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2
722 || data[1] != IPMI_CLEAR_MSG_FLAGS_CMD) {
723 pr_warn(PFX "Invalid response clearing flags: %x %x\n",
724 data[0], data[1]);
725 }
726 ssif_info->ssif_state = SSIF_NORMAL;
727 ipmi_ssif_unlock_cond(ssif_info, flags);
728 break;
729
730 case SSIF_GETTING_EVENTS:
731 if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
732 /* Error getting event, probably done. */
733 msg->done(msg);
734
735 /* Take off the event flag. */
736 ssif_info->msg_flags &= ~EVENT_MSG_BUFFER_FULL;
737 handle_flags(ssif_info, flags);
738 } else if (msg->rsp[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2
739 || msg->rsp[1] != IPMI_READ_EVENT_MSG_BUFFER_CMD) {
740 pr_warn(PFX "Invalid response getting events: %x %x\n",
741 msg->rsp[0], msg->rsp[1]);
742 msg->done(msg);
743 /* Take off the event flag. */
744 ssif_info->msg_flags &= ~EVENT_MSG_BUFFER_FULL;
745 handle_flags(ssif_info, flags);
746 } else {
747 handle_flags(ssif_info, flags);
748 ssif_inc_stat(ssif_info, events);
749 deliver_recv_msg(ssif_info, msg);
750 }
751 break;
752
753 case SSIF_GETTING_MESSAGES:
754 if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
 755			/* Error getting message, probably done. */
756 msg->done(msg);
757
758 /* Take off the msg flag. */
759 ssif_info->msg_flags &= ~RECEIVE_MSG_AVAIL;
760 handle_flags(ssif_info, flags);
761 } else if (msg->rsp[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2
762 || msg->rsp[1] != IPMI_GET_MSG_CMD) {
 763			pr_warn(PFX "Invalid response getting messages: %x %x\n",
764 msg->rsp[0], msg->rsp[1]);
765 msg->done(msg);
766
767 /* Take off the msg flag. */
768 ssif_info->msg_flags &= ~RECEIVE_MSG_AVAIL;
769 handle_flags(ssif_info, flags);
770 } else {
771 ssif_inc_stat(ssif_info, incoming_messages);
772 handle_flags(ssif_info, flags);
773 deliver_recv_msg(ssif_info, msg);
774 }
775 break;
776 }
777
778 flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
779 if (SSIF_IDLE(ssif_info) && !ssif_info->stopping) {
780 if (ssif_info->req_events)
781 start_event_fetch(ssif_info, flags);
782 else if (ssif_info->req_flags)
783 start_flag_fetch(ssif_info, flags);
784 else
785 start_next_msg(ssif_info, flags);
786 } else
787 ipmi_ssif_unlock_cond(ssif_info, flags);
788
789 if (ssif_info->ssif_debug & SSIF_DEBUG_STATE)
790 pr_info(PFX "DONE 2: state = %d.\n", ssif_info->ssif_state);
791}
792
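msg_done_handler() above reassembles multi-part reads: the first chunk carries a 0x00 0x01 marker that is stripped, later chunks carry a leading block number (0xff marks the final block), and block numbers restart at zero for the second chunk. A simplified user-space model of that reassembly (reassemble() is an illustrative helper, not driver code, and omits the driver's length-limit checks):

```c
#include <assert.h>
#include <string.h>

/*
 * Reassemble SSIF multi-part response chunks into one buffer.
 * Returns the total payload length, or -1 on a bad marker or an
 * out-of-sequence block number.
 */
static int reassemble(unsigned char chunks[][8], int nchunks,
		      const int lens[], unsigned char *out)
{
	int pos = 0, expect = 0;

	for (int i = 0; i < nchunks; i++) {
		unsigned char *c = chunks[i];
		int len = lens[i];

		if (i == 0) {
			/* First chunk: strip the 0x00 0x01 marker. */
			if (c[0] != 0x00 || c[1] != 0x01)
				return -1;
			memcpy(out + pos, c + 2, len - 2);
			pos += len - 2;
		} else {
			/* Later chunks: strip the block number. */
			unsigned char blocknum = c[0];

			if (blocknum != 0xff && blocknum != expect)
				return -1;	/* out of sequence */
			memcpy(out + pos, c + 1, len - 1);
			pos += len - 1;
			expect++;
		}
	}
	return pos;
}
```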
793static void msg_written_handler(struct ssif_info *ssif_info, int result,
794 unsigned char *data, unsigned int len)
795{
796 int rv;
797
798 /* We are single-threaded here, so no need for a lock. */
799 if (result < 0) {
800 ssif_info->retries_left--;
801 if (ssif_info->retries_left > 0) {
802 if (!start_resend(ssif_info)) {
803 ssif_inc_stat(ssif_info, send_retries);
804 return;
805 }
806 /* request failed, just return the error. */
807 ssif_inc_stat(ssif_info, send_errors);
808
809 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
810 pr_info(PFX
811 "Out of retries in msg_written_handler\n");
812 msg_done_handler(ssif_info, -EIO, NULL, 0);
813 return;
814 }
815
816 ssif_inc_stat(ssif_info, send_errors);
817
818 /*
819 * Got an error on transmit, let the done routine
820 * handle it.
821 */
822 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
823 pr_info("Error in msg_written_handler: %d\n", result);
824
825 msg_done_handler(ssif_info, result, NULL, 0);
826 return;
827 }
828
829 if (ssif_info->multi_data) {
830 /* In the middle of a multi-data write. */
831 int left;
832
833 ssif_inc_stat(ssif_info, sent_messages_parts);
834
835 left = ssif_info->multi_len - ssif_info->multi_pos;
836 if (left > 32)
837 left = 32;
838 /* Length byte. */
839 ssif_info->multi_data[ssif_info->multi_pos] = left;
840 ssif_info->multi_pos += left;
841 if (left < 32)
842 /*
843 * Write is finished. Note that we must end
844 * with a write of less than 32 bytes to
845 * complete the transaction, even if it is
846 * zero bytes.
847 */
848 ssif_info->multi_data = NULL;
849
850 rv = ssif_i2c_send(ssif_info, msg_written_handler,
851 I2C_SMBUS_WRITE,
852 SSIF_IPMI_MULTI_PART_REQUEST_MIDDLE,
853 ssif_info->multi_data + ssif_info->multi_pos,
854 I2C_SMBUS_BLOCK_DATA);
855 if (rv < 0) {
856 /* request failed, just return the error. */
857 ssif_inc_stat(ssif_info, send_errors);
858
859 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
860 pr_info("Error from i2c_non_blocking_op(3)\n");
861 msg_done_handler(ssif_info, -EIO, NULL, 0);
862 }
863 } else {
864 ssif_inc_stat(ssif_info, sent_messages);
865 ssif_inc_stat(ssif_info, sent_messages_parts);
866
 867		/* Wait a jiffy, then request the next message. */
868 ssif_info->retries_left = SSIF_RECV_RETRIES;
869 ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
870 mod_timer(&ssif_info->retry_timer,
871 jiffies + SSIF_MSG_PART_JIFFIES);
872 return;
873 }
874}
875
876static int start_resend(struct ssif_info *ssif_info)
877{
878 int rv;
879 int command;
880
881 if (ssif_info->data_len > 32) {
882 command = SSIF_IPMI_MULTI_PART_REQUEST_START;
883 ssif_info->multi_data = ssif_info->data;
884 ssif_info->multi_len = ssif_info->data_len;
885 /*
886 * Subtle thing, this is 32, not 33, because we will
887 * overwrite the thing at position 32 (which was just
888 * transmitted) with the new length.
889 */
890 ssif_info->multi_pos = 32;
891 ssif_info->data[0] = 32;
892 } else {
893 ssif_info->multi_data = NULL;
894 command = SSIF_IPMI_REQUEST;
895 ssif_info->data[0] = ssif_info->data_len;
896 }
897
898 rv = ssif_i2c_send(ssif_info, msg_written_handler, I2C_SMBUS_WRITE,
899 command, ssif_info->data, I2C_SMBUS_BLOCK_DATA);
900 if (rv && (ssif_info->ssif_debug & SSIF_DEBUG_MSG))
901 pr_info("Error from i2c_non_blocking_op(4)\n");
902 return rv;
903}
904
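start_resend() and msg_written_handler() split long requests into 32-byte SMBus blocks, and the comment in msg_written_handler() notes that the transaction must end with a block shorter than 32 bytes, even an empty one. A small sketch counting the writes a given payload length requires under that rule (count_writes() is a hypothetical helper):

```c
#include <assert.h>

/*
 * Count the SMBus block writes needed for an SSIF request:
 * one write if it fits in 32 bytes; otherwise a 32-byte start
 * block, 32-byte middle blocks, and a terminating block of fewer
 * than 32 bytes (possibly empty).
 */
static int count_writes(int data_len)
{
	int writes, left;

	if (data_len <= 32)
		return 1;		/* single-part request */

	writes = 1;			/* the 32-byte start block */
	left = data_len - 32;
	for (;;) {
		int chunk = left > 32 ? 32 : left;

		writes++;
		left -= chunk;
		if (chunk < 32)
			break;		/* short (or empty) block ends it */
	}
	return writes;
}
```

A 64-byte message therefore takes three writes: the start block, one full middle block, and an empty terminator.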
905static int start_send(struct ssif_info *ssif_info,
906 unsigned char *data,
907 unsigned int len)
908{
909 if (len > IPMI_MAX_MSG_LENGTH)
910 return -E2BIG;
911 if (len > ssif_info->max_xmit_msg_size)
912 return -E2BIG;
913
914 ssif_info->retries_left = SSIF_SEND_RETRIES;
915 memcpy(ssif_info->data+1, data, len);
916 ssif_info->data_len = len;
917 return start_resend(ssif_info);
918}
919
920/* Must be called with the message lock held. */
921static void start_next_msg(struct ssif_info *ssif_info, unsigned long *flags)
922{
923 struct ipmi_smi_msg *msg;
924 unsigned long oflags;
925
926 restart:
927 if (!SSIF_IDLE(ssif_info)) {
928 ipmi_ssif_unlock_cond(ssif_info, flags);
929 return;
930 }
931
932 if (!ssif_info->waiting_msg) {
933 ssif_info->curr_msg = NULL;
934 ipmi_ssif_unlock_cond(ssif_info, flags);
935 } else {
936 int rv;
937
938 ssif_info->curr_msg = ssif_info->waiting_msg;
939 ssif_info->waiting_msg = NULL;
940 ipmi_ssif_unlock_cond(ssif_info, flags);
941 rv = start_send(ssif_info,
942 ssif_info->curr_msg->data,
943 ssif_info->curr_msg->data_size);
944 if (rv) {
945 msg = ssif_info->curr_msg;
946 ssif_info->curr_msg = NULL;
947 return_hosed_msg(ssif_info, msg);
948 flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
949 goto restart;
950 }
951 }
952}
953
954static void sender(void *send_info,
955 struct ipmi_smi_msg *msg)
956{
957 struct ssif_info *ssif_info = (struct ssif_info *) send_info;
958 unsigned long oflags, *flags;
959
960 BUG_ON(ssif_info->waiting_msg);
961 ssif_info->waiting_msg = msg;
962
963 flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
964 start_next_msg(ssif_info, flags);
965
966 if (ssif_info->ssif_debug & SSIF_DEBUG_TIMING) {
967 struct timeval t;
968
969 do_gettimeofday(&t);
970 pr_info("**Enqueue %02x %02x: %ld.%6.6ld\n",
971 msg->data[0], msg->data[1], t.tv_sec, t.tv_usec);
972 }
973}
974
975static int get_smi_info(void *send_info, struct ipmi_smi_info *data)
976{
977 struct ssif_info *ssif_info = send_info;
978
979 data->addr_src = ssif_info->addr_source;
980 data->dev = &ssif_info->client->dev;
981 data->addr_info = ssif_info->addr_info;
982 get_device(data->dev);
983
984 return 0;
985}
986
987/*
988 * Instead of having our own timer to periodically check the message
989 * flags, we let the message handler drive us.
990 */
991static void request_events(void *send_info)
992{
993 struct ssif_info *ssif_info = (struct ssif_info *) send_info;
994 unsigned long oflags, *flags;
995
996 if (!ssif_info->has_event_buffer)
997 return;
998
999 flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
1000 /*
1001 * Request flags first, not events, because the lower layer
1002 * doesn't have a way to send an attention. But make sure
1003 * event checking still happens.
1004 */
1005 ssif_info->req_events = true;
1006 if (SSIF_IDLE(ssif_info))
1007 start_flag_fetch(ssif_info, flags);
1008 else {
1009 ssif_info->req_flags = true;
1010 ipmi_ssif_unlock_cond(ssif_info, flags);
1011 }
1012}
1013
1014static int inc_usecount(void *send_info)
1015{
1016 struct ssif_info *ssif_info = send_info;
1017
1018 if (!i2c_get_adapter(ssif_info->client->adapter->nr))
1019 return -ENODEV;
1020
1021 i2c_use_client(ssif_info->client);
1022 return 0;
1023}
1024
1025static void dec_usecount(void *send_info)
1026{
1027 struct ssif_info *ssif_info = send_info;
1028
1029 i2c_release_client(ssif_info->client);
1030 i2c_put_adapter(ssif_info->client->adapter);
1031}
1032
1033static int ssif_start_processing(void *send_info,
1034 ipmi_smi_t intf)
1035{
1036 struct ssif_info *ssif_info = send_info;
1037
1038 ssif_info->intf = intf;
1039
1040 return 0;
1041}
1042
1043#define MAX_SSIF_BMCS 4
1044
1045static unsigned short addr[MAX_SSIF_BMCS];
1046static int num_addrs;
1047module_param_array(addr, ushort, &num_addrs, 0);
1048MODULE_PARM_DESC(addr, "The addresses to scan for IPMI BMCs on the SSIFs.");
1049
1050static char *adapter_name[MAX_SSIF_BMCS];
1051static int num_adapter_names;
1052module_param_array(adapter_name, charp, &num_adapter_names, 0);
1053MODULE_PARM_DESC(adapter_name, "The string name of the I2C device that has the BMC. By default all devices are scanned.");
1054
1055static int slave_addrs[MAX_SSIF_BMCS];
1056static int num_slave_addrs;
1057module_param_array(slave_addrs, int, &num_slave_addrs, 0);
1058MODULE_PARM_DESC(slave_addrs,
1059 "The default IPMB slave address for the controller.");
1060
1061/*
1062 * Bit 0 enables message debugging, bit 1 enables state debugging, and
1063 * bit 2 enables timing debugging. This is an array indexed by
1064 * interface number"
1065 */
1066static int dbg[MAX_SSIF_BMCS];
1067static int num_dbg;
1068module_param_array(dbg, int, &num_dbg, 0);
1069MODULE_PARM_DESC(dbg, "Turn on debugging.");
1070
1071static bool ssif_dbg_probe;
1072module_param_named(dbg_probe, ssif_dbg_probe, bool, 0);
1073MODULE_PARM_DESC(dbg_probe, "Enable debugging of probing of adapters.");
1074
1075static int use_thread;
1076module_param(use_thread, int, 0);
1077MODULE_PARM_DESC(use_thread, "Use the thread interface.");
1078
1079static bool ssif_tryacpi = 1;
1080module_param_named(tryacpi, ssif_tryacpi, bool, 0);
1081MODULE_PARM_DESC(tryacpi, "Setting this to zero will disable the default scan of the interfaces identified via ACPI");
1082
1083static bool ssif_trydmi = 1;
1084module_param_named(trydmi, ssif_trydmi, bool, 0);
1085MODULE_PARM_DESC(trydmi, "Setting this to zero will disable the default scan of the interfaces identified via DMI (SMBIOS)");
1086
1087static DEFINE_MUTEX(ssif_infos_mutex);
1088static LIST_HEAD(ssif_infos);
1089
1090static int ssif_remove(struct i2c_client *client)
1091{
1092 struct ssif_info *ssif_info = i2c_get_clientdata(client);
1093 int rv;
1094
1095 if (!ssif_info)
1096 return 0;
1097
1098 i2c_set_clientdata(client, NULL);
1099
1100 /*
1101	 * After this point, we won't deliver anything asynchronously
1102	 * to the message handler.  We can unregister ourselves.
1103 */
1104 rv = ipmi_unregister_smi(ssif_info->intf);
1105 if (rv) {
1106 pr_err(PFX "Unable to unregister device: errno=%d\n", rv);
1107 return rv;
1108 }
1109 ssif_info->intf = NULL;
1110
1111 /* make sure the driver is not looking for flags any more. */
1112 while (ssif_info->ssif_state != SSIF_NORMAL)
1113 schedule_timeout(1);
1114
1115 ssif_info->stopping = true;
1116 del_timer_sync(&ssif_info->retry_timer);
1117 if (ssif_info->thread) {
1118 complete(&ssif_info->wake_thread);
1119 kthread_stop(ssif_info->thread);
1120 }
1121
1122 /*
1123 * No message can be outstanding now; we have removed the
1124 * upper layer and it permitted us to do so.
1125 */
1126 kfree(ssif_info);
1127 return 0;
1128}
1129
1130static int do_cmd(struct i2c_client *client, int len, unsigned char *msg,
1131 int *resp_len, unsigned char *resp)
1132{
1133 int retry_cnt;
1134 int ret;
1135
1136 retry_cnt = SSIF_SEND_RETRIES;
1137 retry1:
1138 ret = i2c_smbus_write_block_data(client, SSIF_IPMI_REQUEST, len, msg);
1139 if (ret) {
1140 retry_cnt--;
1141 if (retry_cnt > 0)
1142 goto retry1;
1143 return -ENODEV;
1144 }
1145
1146 ret = -ENODEV;
1147 retry_cnt = SSIF_RECV_RETRIES;
1148 while (retry_cnt > 0) {
1149 ret = i2c_smbus_read_block_data(client, SSIF_IPMI_RESPONSE,
1150 resp);
1151 if (ret > 0)
1152 break;
1153 msleep(SSIF_MSG_MSEC);
1154 retry_cnt--;
1155 if (retry_cnt <= 0)
1156 break;
1157 }
1158
1159 if (ret > 0) {
1160 /* Validate that the response is correct. */
1161 if (ret < 3 ||
1162 (resp[0] != (msg[0] | (1 << 2))) ||
1163 (resp[1] != msg[1]))
1164 ret = -EINVAL;
1165 else {
1166 *resp_len = ret;
1167 ret = 0;
1168 }
1169 }
1170
1171 return ret;
1172}
1173
1174static int ssif_detect(struct i2c_client *client, struct i2c_board_info *info)
1175{
1176 unsigned char *resp;
1177 unsigned char msg[3];
1178 int rv;
1179 int len;
1180
1181 resp = kmalloc(IPMI_MAX_MSG_LENGTH, GFP_KERNEL);
1182 if (!resp)
1183 return -ENOMEM;
1184
1185 /* Do a Get Device ID command, since it is required. */
1186 msg[0] = IPMI_NETFN_APP_REQUEST << 2;
1187 msg[1] = IPMI_GET_DEVICE_ID_CMD;
1188 rv = do_cmd(client, 2, msg, &len, resp);
1189 if (rv)
1190 rv = -ENODEV;
1191 else
1192 strlcpy(info->type, DEVICE_NAME, I2C_NAME_SIZE);
1193 kfree(resp);
1194 return rv;
1195}
1196
1197static int smi_type_proc_show(struct seq_file *m, void *v)
1198{
1199 return seq_puts(m, "ssif\n");
1200}
1201
1202static int smi_type_proc_open(struct inode *inode, struct file *file)
1203{
1204 return single_open(file, smi_type_proc_show, inode->i_private);
1205}
1206
1207static const struct file_operations smi_type_proc_ops = {
1208 .open = smi_type_proc_open,
1209 .read = seq_read,
1210 .llseek = seq_lseek,
1211 .release = single_release,
1212};
1213
1214static int smi_stats_proc_show(struct seq_file *m, void *v)
1215{
1216 struct ssif_info *ssif_info = m->private;
1217
1218 seq_printf(m, "sent_messages: %u\n",
1219 ssif_get_stat(ssif_info, sent_messages));
1220 seq_printf(m, "sent_messages_parts: %u\n",
1221 ssif_get_stat(ssif_info, sent_messages_parts));
1222 seq_printf(m, "send_retries: %u\n",
1223 ssif_get_stat(ssif_info, send_retries));
1224 seq_printf(m, "send_errors: %u\n",
1225 ssif_get_stat(ssif_info, send_errors));
1226 seq_printf(m, "received_messages: %u\n",
1227 ssif_get_stat(ssif_info, received_messages));
1228 seq_printf(m, "received_message_parts: %u\n",
1229 ssif_get_stat(ssif_info, received_message_parts));
1230 seq_printf(m, "receive_retries: %u\n",
1231 ssif_get_stat(ssif_info, receive_retries));
1232 seq_printf(m, "receive_errors: %u\n",
1233 ssif_get_stat(ssif_info, receive_errors));
1234 seq_printf(m, "flag_fetches: %u\n",
1235 ssif_get_stat(ssif_info, flag_fetches));
1236 seq_printf(m, "hosed: %u\n",
1237 ssif_get_stat(ssif_info, hosed));
1238 seq_printf(m, "events: %u\n",
1239 ssif_get_stat(ssif_info, events));
1240 seq_printf(m, "watchdog_pretimeouts: %u\n",
1241 ssif_get_stat(ssif_info, watchdog_pretimeouts));
1242 return 0;
1243}
1244
1245static int smi_stats_proc_open(struct inode *inode, struct file *file)
1246{
1247 return single_open(file, smi_stats_proc_show, PDE_DATA(inode));
1248}
1249
1250static const struct file_operations smi_stats_proc_ops = {
1251 .open = smi_stats_proc_open,
1252 .read = seq_read,
1253 .llseek = seq_lseek,
1254 .release = single_release,
1255};
1256
1257static struct ssif_addr_info *ssif_info_find(unsigned short addr,
1258 char *adapter_name,
1259 bool match_null_name)
1260{
1261 struct ssif_addr_info *info, *found = NULL;
1262
1263restart:
1264 list_for_each_entry(info, &ssif_infos, link) {
1265 if (info->binfo.addr == addr) {
1266 if (info->adapter_name || adapter_name) {
1267 if (!info->adapter_name != !adapter_name) {
1268 /* One is NULL and one is not */
1269 continue;
1270 }
1271 if (strcmp(info->adapter_name, adapter_name))
1272 /* Names do not match */
1273 continue;
1274 }
1275 found = info;
1276 break;
1277 }
1278 }
1279
1280 if (!found && match_null_name) {
1281 /* No exact match; retry, accepting entries with a NULL name. */
1282 adapter_name = NULL;
1283 match_null_name = false;
1284 goto restart;
1285 }
1286
1287 return found;
1288}
1289
1290static bool check_acpi(struct ssif_info *ssif_info, struct device *dev)
1291{
1292#ifdef CONFIG_ACPI
1293 acpi_handle acpi_handle;
1294
1295 acpi_handle = ACPI_HANDLE(dev);
1296 if (acpi_handle) {
1297 ssif_info->addr_source = SI_ACPI;
1298 ssif_info->addr_info.acpi_info.acpi_handle = acpi_handle;
1299 return true;
1300 }
1301#endif
1302 return false;
1303}
1304
1305static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
1306{
1307 unsigned char msg[3];
1308 unsigned char *resp;
1309 struct ssif_info *ssif_info;
1310 int rv = 0;
1311 int len;
1312 int i;
1313 u8 slave_addr = 0;
1314 struct ssif_addr_info *addr_info = NULL;
1315
1316
1317 resp = kmalloc(IPMI_MAX_MSG_LENGTH, GFP_KERNEL);
1318 if (!resp)
1319 return -ENOMEM;
1320
1321 ssif_info = kzalloc(sizeof(*ssif_info), GFP_KERNEL);
1322 if (!ssif_info) {
1323 kfree(resp);
1324 return -ENOMEM;
1325 }
1326
1327 if (!check_acpi(ssif_info, &client->dev)) {
1328 addr_info = ssif_info_find(client->addr, client->adapter->name,
1329 true);
1330 if (!addr_info) {
1331 /* Must have come in through sysfs. */
1332 ssif_info->addr_source = SI_HOTMOD;
1333 } else {
1334 ssif_info->addr_source = addr_info->addr_src;
1335 ssif_info->ssif_debug = addr_info->debug;
1336 ssif_info->addr_info = addr_info->addr_info;
1337 slave_addr = addr_info->slave_addr;
1338 }
1339 }
1340
1341 pr_info(PFX "Trying %s-specified SSIF interface at i2c address 0x%x, adapter %s, slave address 0x%x\n",
1342 ipmi_addr_src_to_str(ssif_info->addr_source),
1343 client->addr, client->adapter->name, slave_addr);
1344
1345 /*
1346 * Do a Get Device ID command, since it comes back with some
1347 * useful info.
1348 */
1349 msg[0] = IPMI_NETFN_APP_REQUEST << 2;
1350 msg[1] = IPMI_GET_DEVICE_ID_CMD;
1351 rv = do_cmd(client, 2, msg, &len, resp);
1352 if (rv)
1353 goto out;
1354
1355 rv = ipmi_demangle_device_id(resp, len, &ssif_info->device_id);
1356 if (rv)
1357 goto out;
1358
1359 ssif_info->client = client;
1360 i2c_set_clientdata(client, ssif_info);
1361
1362 /* Now check for system interface capabilities */
1363 msg[0] = IPMI_NETFN_APP_REQUEST << 2;
1364 msg[1] = IPMI_GET_SYSTEM_INTERFACE_CAPABILITIES_CMD;
1365 msg[2] = 0; /* SSIF */
1366 rv = do_cmd(client, 3, msg, &len, resp);
1367 if (!rv && (len >= 3) && (resp[2] == 0)) {
1368 if (len < 7) {
1369 if (ssif_dbg_probe)
1370 pr_info(PFX "SSIF info too short: %d\n", len);
1371 goto no_support;
1372 }
1373
1374 /* Got a good SSIF response, handle it. */
1375 ssif_info->max_xmit_msg_size = resp[5];
1376 ssif_info->max_recv_msg_size = resp[6];
1377 ssif_info->multi_support = (resp[4] >> 6) & 0x3;
1378 ssif_info->supports_pec = (resp[4] >> 3) & 0x1;
1379
1380 /* Sanitize the data */
1381 switch (ssif_info->multi_support) {
1382 case SSIF_NO_MULTI:
1383 if (ssif_info->max_xmit_msg_size > 32)
1384 ssif_info->max_xmit_msg_size = 32;
1385 if (ssif_info->max_recv_msg_size > 32)
1386 ssif_info->max_recv_msg_size = 32;
1387 break;
1388
1389 case SSIF_MULTI_2_PART:
1390 if (ssif_info->max_xmit_msg_size > 64)
1391 ssif_info->max_xmit_msg_size = 64;
1392 if (ssif_info->max_recv_msg_size > 62)
1393 ssif_info->max_recv_msg_size = 62;
1394 break;
1395
1396 case SSIF_MULTI_n_PART:
1397 break;
1398
1399 default:
1400 /* Data is not sane, just give up. */
1401 goto no_support;
1402 }
1403 } else {
1404 no_support:
1405 /* Assume no multi-part or PEC support */
1406 pr_info(PFX "Error fetching SSIF: %d %d %2.2x, your system probably doesn't support this command so using defaults\n",
1407 rv, len, resp[2]);
1408
1409 ssif_info->max_xmit_msg_size = 32;
1410 ssif_info->max_recv_msg_size = 32;
1411 ssif_info->multi_support = SSIF_NO_MULTI;
1412 ssif_info->supports_pec = 0;
1413 }
1414
1415 /* Make sure the NMI timeout is cleared. */
1416 msg[0] = IPMI_NETFN_APP_REQUEST << 2;
1417 msg[1] = IPMI_CLEAR_MSG_FLAGS_CMD;
1418 msg[2] = WDT_PRE_TIMEOUT_INT;
1419 rv = do_cmd(client, 3, msg, &len, resp);
1420 if (rv || (len < 3) || (resp[2] != 0))
1421 pr_warn(PFX "Unable to clear message flags: %d %d %2.2x\n",
1422 rv, len, resp[2]);
1423
1424 /* Attempt to enable the event buffer. */
1425 msg[0] = IPMI_NETFN_APP_REQUEST << 2;
1426 msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
1427 rv = do_cmd(client, 2, msg, &len, resp);
1428 if (rv || (len < 4) || (resp[2] != 0)) {
1429 pr_warn(PFX "Error getting global enables: %d %d %2.2x\n",
1430 rv, len, resp[2]);
1431 rv = 0; /* Not fatal */
1432 goto found;
1433 }
1434
1435 if (resp[3] & IPMI_BMC_EVT_MSG_BUFF) {
1436 ssif_info->has_event_buffer = true;
1437 /* buffer is already enabled, nothing to do. */
1438 goto found;
1439 }
1440
1441 msg[0] = IPMI_NETFN_APP_REQUEST << 2;
1442 msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
1443 msg[2] = resp[3] | IPMI_BMC_EVT_MSG_BUFF;
1444 rv = do_cmd(client, 3, msg, &len, resp);
1445 if (rv || (len < 2)) {
1446 pr_warn(PFX "Error setting global enables: %d %d %2.2x\n",
1447 rv, len, resp[2]);
1448 rv = 0; /* Not fatal */
1449 goto found;
1450 }
1451
1452 if (resp[2] == 0)
1453 /* A successful return means the event buffer is supported. */
1454 ssif_info->has_event_buffer = true;
1455
1456 found:
1457 ssif_info->intf_num = atomic_inc_return(&next_intf);
1458
1459 if (ssif_dbg_probe) {
1460 pr_info("ssif_probe: i2c_probe found device at i2c address %x\n",
1461 client->addr);
1462 }
1463
1464 spin_lock_init(&ssif_info->lock);
1465 ssif_info->ssif_state = SSIF_NORMAL;
1466 init_timer(&ssif_info->retry_timer);
1467 ssif_info->retry_timer.data = (unsigned long) ssif_info;
1468 ssif_info->retry_timer.function = retry_timeout;
1469
1470 for (i = 0; i < SSIF_NUM_STATS; i++)
1471 atomic_set(&ssif_info->stats[i], 0);
1472
1473 if (ssif_info->supports_pec)
1474 ssif_info->client->flags |= I2C_CLIENT_PEC;
1475
1476 ssif_info->handlers.owner = THIS_MODULE;
1477 ssif_info->handlers.start_processing = ssif_start_processing;
1478 ssif_info->handlers.get_smi_info = get_smi_info;
1479 ssif_info->handlers.sender = sender;
1480 ssif_info->handlers.request_events = request_events;
1481 ssif_info->handlers.inc_usecount = inc_usecount;
1482 ssif_info->handlers.dec_usecount = dec_usecount;
1483
1484 {
1485 unsigned int thread_num;
1486
1487 thread_num = ((ssif_info->client->adapter->nr << 8) |
1488 ssif_info->client->addr);
1489 init_completion(&ssif_info->wake_thread);
1490 ssif_info->thread = kthread_run(ipmi_ssif_thread, ssif_info,
1491 "kssif%4.4x", thread_num);
1492 if (IS_ERR(ssif_info->thread)) {
1493 rv = PTR_ERR(ssif_info->thread);
1494 dev_notice(&ssif_info->client->dev,
1495 "Could not start kernel thread: error %d\n",
1496 rv);
1497 goto out;
1498 }
1499 }
1500
1501 rv = ipmi_register_smi(&ssif_info->handlers,
1502 ssif_info,
1503 &ssif_info->device_id,
1504 &ssif_info->client->dev,
1505 slave_addr);
1506 if (rv) {
1507 pr_err(PFX "Unable to register device: error %d\n", rv);
1508 goto out;
1509 }
1510
1511 rv = ipmi_smi_add_proc_entry(ssif_info->intf, "type",
1512 &smi_type_proc_ops,
1513 ssif_info);
1514 if (rv) {
1515 pr_err(PFX "Unable to create proc entry: %d\n", rv);
1516 goto out_err_unreg;
1517 }
1518
1519 rv = ipmi_smi_add_proc_entry(ssif_info->intf, "ssif_stats",
1520 &smi_stats_proc_ops,
1521 ssif_info);
1522 if (rv) {
1523 pr_err(PFX "Unable to create proc entry: %d\n", rv);
1524 goto out_err_unreg;
1525 }
1526
1527 out:
1528 if (rv)
1529 kfree(ssif_info);
1530 kfree(resp);
1531 return rv;
1532
1533 out_err_unreg:
1534 ipmi_unregister_smi(ssif_info->intf);
1535 goto out;
1536}
1537
1538static int ssif_adapter_handler(struct device *adev, void *opaque)
1539{
1540 struct ssif_addr_info *addr_info = opaque;
1541
1542 if (adev->type != &i2c_adapter_type)
1543 return 0;
1544
1545 i2c_new_device(to_i2c_adapter(adev), &addr_info->binfo);
1546
1547 if (!addr_info->adapter_name)
1548 return 1; /* Only try the first I2C adapter by default. */
1549 return 0;
1550}
1551
1552static int new_ssif_client(int addr, char *adapter_name,
1553 int debug, int slave_addr,
1554 enum ipmi_addr_src addr_src)
1555{
1556 struct ssif_addr_info *addr_info;
1557 int rv = 0;
1558
1559 mutex_lock(&ssif_infos_mutex);
1560 if (ssif_info_find(addr, adapter_name, false)) {
1561 rv = -EEXIST;
1562 goto out_unlock;
1563 }
1564
1565 addr_info = kzalloc(sizeof(*addr_info), GFP_KERNEL);
1566 if (!addr_info) {
1567 rv = -ENOMEM;
1568 goto out_unlock;
1569 }
1570
1571 if (adapter_name) {
1572 addr_info->adapter_name = kstrdup(adapter_name, GFP_KERNEL);
1573 if (!addr_info->adapter_name) {
1574 kfree(addr_info);
1575 rv = -ENOMEM;
1576 goto out_unlock;
1577 }
1578 }
1579
1580 strncpy(addr_info->binfo.type, DEVICE_NAME,
1581 sizeof(addr_info->binfo.type));
1582 addr_info->binfo.addr = addr;
1583 addr_info->binfo.platform_data = addr_info;
1584 addr_info->debug = debug;
1585 addr_info->slave_addr = slave_addr;
1586 addr_info->addr_src = addr_src;
1587
1588 list_add_tail(&addr_info->link, &ssif_infos);
1589
1590 if (initialized)
1591 i2c_for_each_dev(addr_info, ssif_adapter_handler);
1592 /* Otherwise address list will get it */
1593
1594out_unlock:
1595 mutex_unlock(&ssif_infos_mutex);
1596 return rv;
1597}
1598
1599static void free_ssif_clients(void)
1600{
1601 struct ssif_addr_info *info, *tmp;
1602
1603 mutex_lock(&ssif_infos_mutex);
1604 list_for_each_entry_safe(info, tmp, &ssif_infos, link) {
1605 list_del(&info->link);
1606 kfree(info->adapter_name);
1607 kfree(info);
1608 }
1609 mutex_unlock(&ssif_infos_mutex);
1610}
1611
1612static unsigned short *ssif_address_list(void)
1613{
1614 struct ssif_addr_info *info;
1615 unsigned int count = 0, i;
1616 unsigned short *address_list;
1617
1618 list_for_each_entry(info, &ssif_infos, link)
1619 count++;
1620
1621 address_list = kzalloc(sizeof(*address_list) * (count + 1), GFP_KERNEL);
1622 if (!address_list)
1623 return NULL;
1624
1625 i = 0;
1626 list_for_each_entry(info, &ssif_infos, link) {
1627 unsigned short addr = info->binfo.addr;
1628 int j;
1629
1630 for (j = 0; j < i; j++) {
1631 if (address_list[j] == addr)
1632 goto skip_addr;
1633 }
1634 address_list[i++] = addr;
1635skip_addr:
1636 ;
1637 }
1638 address_list[i] = I2C_CLIENT_END;
1639
1640 return address_list;
1641}
1642
1643#ifdef CONFIG_ACPI
1644static struct acpi_device_id ssif_acpi_match[] = {
1645 { "IPI0001", 0 },
1646 { },
1647};
1648MODULE_DEVICE_TABLE(acpi, ssif_acpi_match);
1649
1650/*
1651 * Once we get an ACPI failure, we don't try any more, because we go
1652 * through the tables sequentially. Once we don't find a table, there
1653 * are no more.
1654 */
1655static int acpi_failure;
1656
1657/*
1658 * Defined in the IPMI 2.0 spec.
1659 */
1660struct SPMITable {
1661 s8 Signature[4];
1662 u32 Length;
1663 u8 Revision;
1664 u8 Checksum;
1665 s8 OEMID[6];
1666 s8 OEMTableID[8];
1667 s8 OEMRevision[4];
1668 s8 CreatorID[4];
1669 s8 CreatorRevision[4];
1670 u8 InterfaceType;
1671 u8 IPMIlegacy;
1672 s16 SpecificationRevision;
1673
1674 /*
1675 * Bit 0 - SCI interrupt supported
1676 * Bit 1 - I/O APIC/SAPIC
1677 */
1678 u8 InterruptType;
1679
1680 /*
1681 * If bit 0 of InterruptType is set, then this is the SCI
1682 * interrupt in the GPEx_STS register.
1683 */
1684 u8 GPE;
1685
1686 s16 Reserved;
1687
1688 /*
1689 * If bit 1 of InterruptType is set, then this is the I/O
1690 * APIC/SAPIC interrupt.
1691 */
1692 u32 GlobalSystemInterrupt;
1693
1694 /* The actual register address. */
1695 struct acpi_generic_address addr;
1696
1697 u8 UID[4];
1698
1699 s8 spmi_id[1]; /* A '\0' terminated array starts here. */
1700};
1701
1702static int try_init_spmi(struct SPMITable *spmi)
1703{
1704 unsigned short myaddr;
1705
1706 if (num_addrs >= MAX_SSIF_BMCS)
1707 return -1;
1708
1709 if (spmi->IPMIlegacy != 1) {
1710 pr_warn("IPMI: Bad SPMI legacy: %d\n", spmi->IPMIlegacy);
1711 return -ENODEV;
1712 }
1713
1714 if (spmi->InterfaceType != 4)
1715 return -ENODEV;
1716
1717 if (spmi->addr.space_id != ACPI_ADR_SPACE_SMBUS) {
1718 pr_warn(PFX "Invalid ACPI SSIF I/O Address type: %d\n",
1719 spmi->addr.space_id);
1720 return -EIO;
1721 }
1722
1723 myaddr = spmi->addr.address >> 1;
1724
1725 return new_ssif_client(myaddr, NULL, 0, 0, SI_SPMI);
1726}
1727
1728static void spmi_find_bmc(void)
1729{
1730 acpi_status status;
1731 struct SPMITable *spmi;
1732 int i;
1733
1734 if (acpi_disabled)
1735 return;
1736
1737 if (acpi_failure)
1738 return;
1739
1740 for (i = 0; ; i++) {
1741 status = acpi_get_table(ACPI_SIG_SPMI, i+1,
1742 (struct acpi_table_header **)&spmi);
1743 if (status != AE_OK)
1744 return;
1745
1746 try_init_spmi(spmi);
1747 }
1748}
1749#else
1750static void spmi_find_bmc(void) { }
1751#endif
1752
1753#ifdef CONFIG_DMI
1754static int decode_dmi(const struct dmi_device *dmi_dev)
1755{
1756 struct dmi_header *dm = dmi_dev->device_data;
1757 u8 *data = (u8 *) dm;
1758 u8 len = dm->length;
1759 unsigned short myaddr;
1760 int slave_addr;
1761
1762 if (num_addrs >= MAX_SSIF_BMCS)
1763 return -1;
1764
1765 if (len < 9)
1766 return -1;
1767
1768 if (data[0x04] != 4) /* Not SSIF */
1769 return -1;
1770
1771 if ((data[8] >> 1) == 0) {
1772 /*
1773 * Some broken systems put the I2C address in
1774 * the slave address field. We try to
1775 * accommodate them here.
1776 */
1777 myaddr = data[6] >> 1;
1778 slave_addr = 0;
1779 } else {
1780 myaddr = data[8] >> 1;
1781 slave_addr = data[6];
1782 }
1783
1784 return new_ssif_client(myaddr, NULL, 0, slave_addr, SI_SMBIOS);
1785}
1786
1787static void dmi_iterator(void)
1788{
1789 const struct dmi_device *dev = NULL;
1790
1791 while ((dev = dmi_find_device(DMI_DEV_TYPE_IPMI, NULL, dev)))
1792 decode_dmi(dev);
1793}
1794#else
1795static void dmi_iterator(void) { }
1796#endif
1797
1798static const struct i2c_device_id ssif_id[] = {
1799 { DEVICE_NAME, 0 },
1800 { }
1801};
1802MODULE_DEVICE_TABLE(i2c, ssif_id);
1803
1804static struct i2c_driver ssif_i2c_driver = {
1805 .class = I2C_CLASS_HWMON,
1806 .driver = {
1807 .owner = THIS_MODULE,
1808 .name = DEVICE_NAME
1809 },
1810 .probe = ssif_probe,
1811 .remove = ssif_remove,
1812 .id_table = ssif_id,
1813 .detect = ssif_detect
1814};
1815
1816static int init_ipmi_ssif(void)
1817{
1818 int i;
1819 int rv;
1820
1821 if (initialized)
1822 return 0;
1823
1824 pr_info("IPMI SSIF Interface driver\n");
1825
1826 /* build list for i2c from addr list */
1827 for (i = 0; i < num_addrs; i++) {
1828 rv = new_ssif_client(addr[i], adapter_name[i],
1829 dbg[i], slave_addrs[i],
1830 SI_HARDCODED);
1831 if (rv)
1832 pr_err(PFX
1833 "Couldn't add hardcoded device at addr 0x%x\n",
1834 addr[i]);
1835 }
1836
1837 if (ssif_tryacpi)
1838 ssif_i2c_driver.driver.acpi_match_table =
1839 ACPI_PTR(ssif_acpi_match);
1840 if (ssif_trydmi)
1841 dmi_iterator();
1842 if (ssif_tryacpi)
1843 spmi_find_bmc();
1844
1845 ssif_i2c_driver.address_list = ssif_address_list();
1846
1847 rv = i2c_add_driver(&ssif_i2c_driver);
1848 if (!rv)
1849 initialized = true;
1850
1851 return rv;
1852}
1853module_init(init_ipmi_ssif);
1854
1855static void cleanup_ipmi_ssif(void)
1856{
1857 if (!initialized)
1858 return;
1859
1860 initialized = false;
1861
1862 i2c_del_driver(&ssif_i2c_driver);
1863
1864 free_ssif_clients();
1865}
1866module_exit(cleanup_ipmi_ssif);
1867
1868MODULE_AUTHOR("Todd C Davis <todd.c.davis@intel.com>, Corey Minyard <minyard@acm.org>");
1869MODULE_DESCRIPTION("IPMI driver for management controllers on a SMBus");
1870MODULE_LICENSE("GPL");
diff --git a/include/linux/ipmi.h b/include/linux/ipmi.h
index 76d2acbfa7c6..838dbfa3c331 100644
--- a/include/linux/ipmi.h
+++ b/include/linux/ipmi.h
@@ -37,6 +37,7 @@
 
 #include <linux/list.h>
 #include <linux/proc_fs.h>
+#include <linux/acpi.h> /* For acpi_handle */
 
 struct module;
 struct device;
@@ -278,15 +279,18 @@ enum ipmi_addr_src {
 	SI_INVALID = 0, SI_HOTMOD, SI_HARDCODED, SI_SPMI, SI_ACPI, SI_SMBIOS,
 	SI_PCI, SI_DEVICETREE, SI_DEFAULT
 };
+const char *ipmi_addr_src_to_str(enum ipmi_addr_src src);
 
 union ipmi_smi_info_union {
+#ifdef CONFIG_ACPI
 	/*
 	 * the acpi_info element is defined for the SI_ACPI
 	 * address type
 	 */
 	struct {
-		void *acpi_handle;
+		acpi_handle acpi_handle;
 	} acpi_info;
+#endif
 };
 
 struct ipmi_smi_info {
diff --git a/include/linux/ipmi_smi.h b/include/linux/ipmi_smi.h
index bd349240d50e..0b1e569f5ff5 100644
--- a/include/linux/ipmi_smi.h
+++ b/include/linux/ipmi_smi.h
@@ -98,12 +98,11 @@ struct ipmi_smi_handlers {
       operation is not allowed to fail. If an error occurs, it
       should report back the error in a received message. It may
       do this in the current call context, since no write locks
-      are held when this is run. If the priority is > 0, the
-      message will go into a high-priority queue and be sent
-      first. Otherwise, it goes into a normal-priority queue. */
+      are held when this is run. Message are delivered one at
+      a time by the message handler, a new message will not be
+      delivered until the previous message is returned. */
 	void (*sender)(void *send_info,
-		       struct ipmi_smi_msg *msg,
-		       int priority);
+		       struct ipmi_smi_msg *msg);
 
 	/* Called by the upper layer to request that we try to get
 	   events from the BMC we are attached to. */
@@ -212,7 +211,6 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
 		      void *send_info,
 		      struct ipmi_device_id *device_id,
 		      struct device *dev,
-		      const char *sysfs_name,
 		      unsigned char slave_addr);
 
 /*