path: root/net/9p
Commit message / Author / Age
...
* 9p: eliminate callback complexity (Eric Van Hensbergen, 2008-10-17)
  The current trans_fd rpc mechanisms use a dynamic callback mechanism which introduces a lot of complexity that accommodates only a single special case. This patch removes much of that complexity in favor of a simple exception mechanism to deal with flushes.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: consolidate mux_rpc and request structure (Eric Van Hensbergen, 2008-10-17)
  Currently, trans_fd has two structures (p9_req and p9_mux_rpc) which contain mostly duplicate data. This patch consolidates these two structures and removes p9_mux_rpc.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: remove unnecessary prototypes (Eric Van Hensbergen, 2008-10-17)
  Clean up files by reordering functions to remove the need for unnecessary function prototypes. There are no code changes here, just functions being moved around and prototypes being eliminated.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: remove duplicate client state (Eric Van Hensbergen, 2008-10-17)
  Now that we are passing client state into the transport modules, remove duplicate state which is present in transport private structures.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: consolidate transport structure (Eric Van Hensbergen, 2008-10-17)
  Right now there are two structures: a transport module structure, which provides per-transport-type functions and data, and a transport structure, which contains per-instance public data as well as function pointers to instance-specific functions. This patch moves the publicly visible per-instance transport data to the client structure (which in some cases held duplicate data) and consolidates the functions into the transport module structure.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p-trans_fd: use single poller (Tejun Heo, 2008-10-17)
  trans_fd used a pool of up to 100 pollers to monitor the r/w fds. The approach made sense in userspace, back when the only available interfaces were poll(2) and select(2): since each event monitor/trigger/handling iteration took O(n), where `n' is the number of watched fds, it made sense to spread the fds across many pollers so that `n' could be divided by the number of pollers. However, this doesn't make any sense in the kernel, because persistent edge-triggered event monitoring is how the whole thing is implemented in the kernel in the first place.
  This patch converts trans_fd to use a single poller which watches all the fds, instead of the pool-of-pollers approach. All the fds are registered for monitoring on creation, and only the fds with pending events are scanned when something happens, much like how epoll is implemented. This change makes trans_fd fd monitoring more efficient and simpler.
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
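The single-poller design above has a direct userspace analogue in epoll: one instance watches every fd, and only the fds with pending events come back from the wait. A minimal Linux-only sketch (the pipe and function name are illustrative, not part of the patch):

```c
#include <sys/epoll.h>
#include <unistd.h>

/* One epoll instance monitors all fds; epoll_wait() returns only the
 * ready ones, so handling is O(ready) rather than O(watched). */
int poll_once_and_read(void)
{
    int pipefd[2];
    if (pipe(pipefd) < 0)
        return -1;

    int epfd = epoll_create1(0);
    if (epfd < 0)
        return -1;

    /* Register the read end once, at "creation" time. */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = pipefd[0] };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev) < 0)
        return -1;

    /* Make an event pending, then wait: only the ready fd is reported. */
    if (write(pipefd[1], "x", 1) != 1)
        return -1;

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, 1000);

    char c = 0;
    if (n == 1 && out.data.fd == pipefd[0])
        (void)read(pipefd[0], &c, 1);

    close(pipefd[0]);
    close(pipefd[1]);
    close(epfd);
    return (n == 1 && c == 'x') ? 0 : -1;
}
```

In the kernel patch the same shape is achieved with a single poll task and per-fd wait-queue callbacks rather than an epoll fd.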
* vfs: Use const for kernel parser table (Steven Whitehouse, 2008-10-13)
  This is a much better version of a previous patch to make the parser tables constant. Rather than changing the typedef, we put the "const" in all the various places where it's required, allowing the __initconst exception for nfsroot, which was the cause of the previous trouble. This was posted for review some time ago and I believe it's been in -mm since then.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Alexander Viro <aviro@redhat.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 9p: fix put_data error handling (Eric Van Hensbergen, 2008-09-24)
  Abhishek Kulkarni pointed out an inconsistency in the way errors are returned from p9_put_data. On deeper exploration it seems the error handling for this path was completely wrong. This patch adds checks for allocation problems and propagates errors correctly.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: introduce missing kfree (Julia Lawall, 2008-09-24)
  Error handling code following a kmalloc should free the allocated data. The semantic match that finds the problem is as follows: (http://www.emn.fr/x-info/coccinelle/)

  // <smpl>
  @r exists@
  local idexpression x;
  statement S;
  expression E;
  identifier f,l;
  position p1,p2;
  expression *ptr != NULL;
  @@
  (
  if ((x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...)) == NULL) S
  |
  x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...);
  ...
  if (x == NULL) S
  )
  <... when != x
       when != if (...) { <+...x...+> }
  x->f = E
  ...>
  (
  return \(0\|<+...x...+>\|ptr\);
  |
  return@p2 ...;
  )

  @script:python@
  p1 << r.p1;
  p2 << r.p2;
  @@
  print "* file: %s kmalloc %s return %s" % (p1[0].file,p1[0].line,p2[0].line)
  // </smpl>

  Signed-off-by: Julia Lawall <julia@diku.dk>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
* 9p-trans_fd: fix and clean up module init/exit paths (Tejun Heo, 2008-09-24)
  trans_fd leaked p9_mux_wq on module unload. Fix it. While at it, collapse p9_mux_global_init() into p9_trans_fd_init(). It's easier to follow this way, and the global poll_tasks array is about to be removed anyway.
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p-trans_fd: don't do fs segment mangling in p9_fd_poll() (Tejun Heo, 2008-09-24)
  p9_fd_poll() is never called with user pointers, and f_op->poll() doesn't expect its arguments to be from userland. There's no need to set kernel ds before calling f_op->poll() from p9_fd_poll(). Remove it.
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p-trans_fd: clean up p9_conn_create() (Tejun Heo, 2008-09-24)
  * Use kzalloc() to allocate p9_conn and remove 0/NULL initializations.
  * Clean up error return paths.
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p-trans_fd: fix trans_fd::p9_conn_destroy() (Tejun Heo, 2008-09-24)
  p9_conn_destroy() first kills all current requests by calling p9_conn_cancel(), then waits for the request list to be cleared by waiting on p9_conn->equeue. After that, polling is stopped and the trans is destroyed. This sequence has a few problems.
  * The read and write works were never cancelled, so the p9_conn can be destroyed while they are running, as the r/w works remove requests from the list and dereference the p9_conn from them.
  * The list-emptiness wait using p9_conn->equeue would never trigger, because p9_conn_cancel() always clears all the lists; the only way the wait could be triggered is for another task to issue a request in the slim window between p9_conn_cancel() and the wait, which isn't safe under the current implementation with or without the wait.
  This patch fixes the problem by first stopping polling (which can schedule the r/w works), then cancelling the r/w works, which guarantees that the r/w works are not running and will not run from that point on, and then calling p9_conn_cancel() and doing the rest of the destruction.
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: implement proper trans module refcounting and unregistration (Tejun Heo, 2008-09-24)
  9p trans modules aren't refcounted, nor were they unregistered properly. Fix it.
  * Add p9_trans_module->owner; reference the module on each trans instance creation and put it on destruction.
  * Protect v9fs_trans_list with a spinlock. This isn't strictly necessary, as the list is manipulated only during module loading/unloading, but it's a good idea to make the API safe.
  * Unregister trans modules when the corresponding module is being unloaded.
  * While at it, kill the unnecessary EXPORT_SYMBOL on p9_trans_fd_init().
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
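The register/unregister-with-pinning pattern described above can be sketched in userspace with a lock-protected list whose lookup takes a reference. All names here are illustrative stand-ins (a pthread mutex plays the role of the spinlock, and a plain counter plays the role of try_module_get() on the owner); this is not the 9p API:

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>

struct trans_mod {
    const char *name;
    int refcount;               /* stand-in for module refcounting */
    struct trans_mod *next;
};

static struct trans_mod *trans_list;
static pthread_mutex_t trans_lock = PTHREAD_MUTEX_INITIALIZER;

void trans_register(struct trans_mod *m)
{
    pthread_mutex_lock(&trans_lock);
    m->next = trans_list;
    trans_list = m;
    pthread_mutex_unlock(&trans_lock);
}

/* Called when the owning "module" goes away. */
void trans_unregister(struct trans_mod *m)
{
    pthread_mutex_lock(&trans_lock);
    for (struct trans_mod **p = &trans_list; *p; p = &(*p)->next) {
        if (*p == m) {
            *p = m->next;
            break;
        }
    }
    pthread_mutex_unlock(&trans_lock);
}

/* Lookup pins the entry by bumping its refcount under the lock. */
struct trans_mod *trans_get(const char *name)
{
    struct trans_mod *found = NULL;

    pthread_mutex_lock(&trans_lock);
    for (struct trans_mod *m = trans_list; m; m = m->next) {
        if (strcmp(m->name, name) == 0) {
            m->refcount++;
            found = m;
            break;
        }
    }
    pthread_mutex_unlock(&trans_lock);
    return found;
}

void trans_put(struct trans_mod *m)
{
    pthread_mutex_lock(&trans_lock);
    m->refcount--;
    pthread_mutex_unlock(&trans_lock);
}
```

The key property, as in the patch, is that a transport found by lookup cannot disappear while a client instance still holds it.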
* flag parameters: socket and socketpair (Ulrich Drepper, 2008-07-24)
  This patch adds support for flag values which are ORed to the type passed to socket and socketpair. The additional code is minimal. The flag values in this implementation can and must match the O_* flags. This avoids overhead in the conversion. The internal functions sock_alloc_fd and sock_map_fd get a new parameter and all callers are changed.
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  #define PORT 57392

  /* For Linux these must be the same.  */
  #define SOCK_CLOEXEC O_CLOEXEC

  int
  main (void)
  {
    int fd;
    fd = socket (PF_INET, SOCK_STREAM, 0);
    if (fd == -1)
      {
        puts ("socket(0) failed");
        return 1;
      }
    int coe = fcntl (fd, F_GETFD);
    if (coe == -1)
      {
        puts ("fcntl failed");
        return 1;
      }
    if (coe & FD_CLOEXEC)
      {
        puts ("socket(0) set close-on-exec flag");
        return 1;
      }
    close (fd);

    fd = socket (PF_INET, SOCK_STREAM|SOCK_CLOEXEC, 0);
    if (fd == -1)
      {
        puts ("socket(SOCK_CLOEXEC) failed");
        return 1;
      }
    coe = fcntl (fd, F_GETFD);
    if (coe == -1)
      {
        puts ("fcntl failed");
        return 1;
      }
    if ((coe & FD_CLOEXEC) == 0)
      {
        puts ("socket(SOCK_CLOEXEC) does not set close-on-exec flag");
        return 1;
      }
    close (fd);

    int fds[2];
    if (socketpair (PF_UNIX, SOCK_STREAM, 0, fds) == -1)
      {
        puts ("socketpair(0) failed");
        return 1;
      }
    for (int i = 0; i < 2; ++i)
      {
        coe = fcntl (fds[i], F_GETFD);
        if (coe == -1)
          {
            puts ("fcntl failed");
            return 1;
          }
        if (coe & FD_CLOEXEC)
          {
            printf ("socketpair(0) set close-on-exec flag for fds[%d]\n", i);
            return 1;
          }
        close (fds[i]);
      }

    if (socketpair (PF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, fds) == -1)
      {
        puts ("socketpair(SOCK_CLOEXEC) failed");
        return 1;
      }
    for (int i = 0; i < 2; ++i)
      {
        coe = fcntl (fds[i], F_GETFD);
        if (coe == -1)
          {
            puts ("fcntl failed");
            return 1;
          }
        if ((coe & FD_CLOEXEC) == 0)
          {
            printf ("socketpair(SOCK_CLOEXEC) does not set close-on-exec flag for fds[%d]\n", i);
            return 1;
          }
        close (fds[i]);
      }

    puts ("OK");
    return 0;
  }
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Signed-off-by: Ulrich Drepper <drepper@redhat.com>
  Acked-by: Davide Libenzi <davidel@xmailserver.org>
  Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 9p: fix error path during early mount (Eric Van Hensbergen, 2008-05-14)
  There were some cleanup issues during early mount which would trigger a kernel bug for certain types of failure. This patch reorganizes the cleanup to get rid of the bad behavior.
  This also merges the 9pnet and 9pnet_fd modules for the purpose of configuration and initialization. Keeping the fd transport separate from the core 9pnet code seemed like a good idea at the time, but in practice it has caused more harm and confusion than good.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: make cryptic unknown error from server less scary (Eric Van Hensbergen, 2008-05-14)
  Right now, when we get an error string from the server that we can't map, we report a cryptic error that actually makes it look like we are reporting a problem with the client. This changes the text of the log message to clarify where the error is coming from.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: fix flags length in net (Steven Rostedt, 2008-05-14)
  Some files in the net/9p directory use "int" for flags. This can cause hard-to-find bugs on some architectures. This patch converts the flags to use "long" instead.
  This bug was discovered by doing an allyesconfig make on the -rt kernel, where checks are done to ensure all flags are of size sizeof(long).
  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: Correct fidpool creation failure in p9_client_create (Josef 'Jeff' Sipek, 2008-05-14)
  On error, p9_idpool_create returns an ERR_PTR-encoded errno.
  Signed-off-by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
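The fix hinges on the kernel's ERR_PTR convention: a failing allocator returns a negative errno encoded in the pointer itself, so callers must test with IS_ERR() rather than comparing against NULL. A minimal userspace reimplementation of the convention, with a hypothetical allocator standing in for p9_idpool_create():

```c
#include <errno.h>

/* Userspace rendition of the kernel's ERR_PTR helpers (include/linux/err.h).
 * Pointers in the top 4095 bytes of the address space encode -errno. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
    return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical allocator that fails the way p9_idpool_create() can. */
void *idpool_create(int should_fail)
{
    static int pool = 42;          /* stand-in for a real pool object */

    if (should_fail)
        return ERR_PTR(-ENOMEM);   /* NOT NULL: an encoded errno */
    return &pool;
}
```

A caller that only checks `if (!p)` would treat the ERR_PTR return as success, which is exactly the bug class the commit corrects.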
* 9p: use struct mutex instead of struct semaphore (Josef 'Jeff' Sipek, 2008-05-14)
  Replace semaphores protecting use flags with a mutex.
  Signed-off-by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: propagate parse_option changes to client and transports (Eric Van Hensbergen, 2008-05-14)
  Propagate the changes that were made to the parse_options code to the other parse-options pieces present in the other modules. It looks like the client parse-options code was probably corrupting the parse string and causing problems for the others.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: Documentation updates (Eric Van Hensbergen, 2008-05-14)
  The kernel-doc comments of much of the 9p system have been in disarray since reorganization. This patch fixes those problems, adds additional documentation, and adds a template book which collects the 9p information.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2008-04-03)
* net/9p/trans_fd.c: p9_trans_fd_init(): module_init functions should return 0 on success (Andrew Morton, 2008-03-28)
  Mar 23 09:06:31 opensuse103 kernel: Installing 9P2000 support
  Mar 23 09:06:31 opensuse103 kernel: sys_init_module: '9pnet_fd'->init suspiciously returned 1, it should follow 0/-E convention
  Mar 23 09:06:31 opensuse103 kernel: sys_init_module: loading module anyway...
  Mar 23 09:06:31 opensuse103 kernel: Pid: 5323, comm: modprobe Not tainted 2.6.25-rc6-git7-default #1
  Mar 23 09:06:31 opensuse103 kernel: [<c013c253>] sys_init_module+0x172b/0x17c9
  Mar 23 09:06:31 opensuse103 kernel: [<c0108a6a>] sys_mmap2+0x62/0x77
  Mar 23 09:06:31 opensuse103 kernel: [<c01059c4>] sysenter_past_esp+0x6d/0xa9
  Mar 23 09:06:31 opensuse103 kernel: =======================
  Cc: Latchesar Ionkov <lucho@ionkov.net>
  Cc: Eric Van Hensbergen <ericvh@opteron.(none)>
  Cc: David S. Miller <davem@davemloft.net>
  Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
  Cc: <devzero@web.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'master' of ../net-2.6/ (David S. Miller, 2008-03-24)
  Conflicts:
      net/ipv6/ndisc.c
* [9P] net/9p/trans_fd.c: remove unused variable (Julia Lawall, 2008-03-22)
  The variable cb is initialized but never used otherwise. The semantic patch that makes this change is as follows: (http://www.emn.fr/x-info/coccinelle/)

  // <smpl>
  @@
  type T;
  identifier i;
  constant C;
  @@
  (
  extern T i;
  |
  - T i;
    <+... when != i
  - i = C;
    ...+>
  )
  // </smpl>

  Signed-off-by: Julia Lawall <julia@diku.dk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: replace remaining __FUNCTION__ occurrences (Harvey Harrison, 2008-03-05)
  __FUNCTION__ is gcc-specific; use __func__.
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/9p/trans_virtio.c: kmalloc() enough memory (Adrian Bunk, 2008-02-19)
  The Coverity checker spotted that less memory than required was allocated.
  Signed-off-by: Adrian Bunk <bunk@kernel.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/9p/trans_virtio.c: Use BUG_ON (Julia Lawall, 2008-02-17)
  if (...) BUG(); should be replaced with BUG_ON(...) when the test has no side effects, to allow a definition of BUG_ON that drops the code completely. The semantic patch that makes this change is as follows: (http://www.emn.fr/x-info/coccinelle/)

  // <smpl>
  @ disable unlikely @
  expression E,f;
  @@
  (
  if (<... f(...) ...>) { BUG(); }
  |
  - if (unlikely(E)) { BUG(); }
  + BUG_ON(E);
  )

  @@
  expression E,f;
  @@
  (
  if (<... f(...) ...>) { BUG(); }
  |
  - if (E) { BUG(); }
  + BUG_ON(E);
  )
  // </smpl>

  Signed-off-by: Julia Lawall <julia@diku.dk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
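The reason the conversion matters is that a single BUG_ON(cond) macro gives one central place where the check can be kept or compiled out. A userspace sketch of the idea (these macros are illustrative, not the kernel's definitions):

```c
#include <stdio.h>
#include <stdlib.h>

/* With BUG_ON as a macro, one #ifdef controls every check site.
 * An open-coded `if (cond) BUG();` cannot be dropped this way. */
#ifdef DROP_BUG_CHECKS
#define BUG_ON(cond) do { (void)(cond); } while (0)
#else
#define BUG_ON(cond)                                                \
    do {                                                            \
        if (cond) {                                                 \
            fprintf(stderr, "BUG at %s:%d\n", __FILE__, __LINE__);  \
            abort();                                                \
        }                                                           \
    } while (0)
#endif

/* Example call site: the precondition reads as one expression. */
int checked_div(int a, int b)
{
    BUG_ON(b == 0);
    return a / b;
}
```

Note the semantic patch requires the condition to be side-effect free: if the condition were `f(...)`, compiling the check out would also drop the call.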
* 9p: fix p9_printfcall export (Andrew Morton, 2008-02-06)
  ERROR: "p9_printfcall" [net/9p/9pnet_virtio.ko] undefined!
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: transport API reorganization (Eric Van Hensbergen, 2008-02-06)
  This merges mux.c (including the connection interface) with trans_fd in preparation for transport API changes. Ultimately, trans_fd will need to be rewritten to clean it up and simplify the implementation, but this reorganization is viewed as the first step.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: add remove function to trans_virtio (Eric Van Hensbergen, 2008-02-06)
  Request from Rusty: Just cleaning up patches for the 2.6.25 merge, and noticed that net/9p/trans_virtio.c doesn't have a remove function. This will crash when removing the module (console doesn't have one because it can't really be removed).
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: Convert semaphore to spinlock for p9_idpool (Anthony Liguori, 2008-02-06)
  When booting from v9fs, down_interruptible in p9_idpool_get() triggered a BUG, as it was being called with IRQs disabled. A spinlock seems like the right thing to be using, since the idr functions go out of their way not to sleep. This patch eliminates the BUG by converting the semaphore to a spinlock.
  Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: Fix soft lockup in virtio transport (Eric Van Hensbergen, 2008-02-06)
  This fixes a poorly placed spinlock which could result in a soft lockup condition.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: block-based virtio client (Eric Van Hensbergen, 2008-02-06)
  This replaces the console-based virtio client with a block-based client using a single request queue.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: create transport rpc cut-thru (Eric Van Hensbergen, 2008-02-06)
  Add a new transport function which allows a cut-thru directly to the transport, instead of processing the request through the mux, if the cut-thru exists.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: fix bug in p9_clone_stat (Martin Stava, 2008-02-06)
  This patch fixes a bug in the copying of 9P stat information where string references weren't being updated properly.
  Signed-off-by: Martin Stava <martin.stava@gmail.com>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* virtio: explicit enable_cb/disable_cb rather than callback return. (Rusty Russell, 2008-02-04)
  It seems that virtio_net wants to disable callbacks (interrupts) before calling netif_rx_schedule(), so we can't use the return value to do so. Rename "restart" to "cb_enable" and introduce a "cb_disable" hook: the callback now returns void, rather than a boolean.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
* virtio: simplify config mechanism. (Rusty Russell, 2008-02-04)
  Previously we used a type/len pair within the config space, but this seems overkill. We now simply define a structure which represents the layout in the config space: the config space can now only be extended at the end.
  The main driver-visible changes:
  1) We indicate what fields are present with an explicit feature bit.
  2) Virtqueues are explicitly numbered, and not in the config space.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
* [NET] 9p: kill dead static inline buf_put_string (Ilpo Järvinen, 2008-01-31)
  Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* 9p: add missing end-of-options record for trans_fd (Latchesar Ionkov, 2007-11-06)
  The list of options that the fd transport accepts is missing an end-of-options marker. This patch adds it.
  Signed-off-by: Latchesar Ionkov <lucho@ionkov.net>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: return NULL when trans not found (Latchesar Ionkov, 2007-11-06)
  The v9fs_match_trans function returns an arbitrary transport module instead of NULL when the requested transport is not registered. This patch modifies the function to return NULL in that case.
  Signed-off-by: Latchesar Ionkov <lucho@ionkov.net>
  Acked-by: Eric Van Hensbergen <ericvh@gmail.com>
* [9P]: Fix missing unlock before return in p9_mux_poll_start (Roel Kluin, 2007-10-24)
  Signed-off-by: Roel Kluin <12o3l@tiscali.nl>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* 9p: add virtio transport (Eric Van Hensbergen, 2007-10-23)
  This adds a transport to 9p for communicating between guests and a host using a virtio-based transport.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* typo fixes (Matt LaPlante, 2007-10-19)
  Most of these fixes were already submitted for old kernel versions, and were approved, but for some reason they never made it into the releases. Because this is a consolidation of a couple of old missed patches, it touches both Kconfigs and documentation texts.
  Signed-off-by: Matt LaPlante <kernel1@cyberdogtech.com>
  Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
  Signed-off-by: Adrian Bunk <bunk@kernel.org>
* 9p: remove sysctl (Eric Van Hensbergen, 2007-10-17)
  A sysctl method was added to enable and disable debugging levels. After further review, it was decided that there are better approaches to doing this and the sysctl methodology isn't really desirable. This patch removes the sysctl code from 9p.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: fix bad kconfig cross-dependency (Eric Van Hensbergen, 2007-10-17)
  This patch moves transport dynamic registration and matching to the net module to prevent a bad Kconfig dependency between the net and fs 9p modules.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: attach-per-user (Latchesar Ionkov, 2007-10-17)
  The 9P2000 protocol requires the authentication and permission checks to be done in the file server. For that reason, every user that accesses the file server tree has to authenticate and attach to the server separately. Multiple users can share the same connection to the server. Currently v9fs does a single attach and executes all I/O operations as a single user. This makes using v9fs in a multiuser environment unsafe, as it depends on the client doing the permission checking.
  This patch improves the 9P2000 support by allowing every user to attach separately. The patch defines three modes of access (new mount option 'access'):
  - attach-per-user (access=user) (default mode for 9P2000.u): If a user tries to access a file served by v9fs for the first time, v9fs sends an attach command to the server (Tattach) specifying the user. If the attach succeeds, the user can access the v9fs tree. As there is no uname->uid (string->integer) mapping yet, this mode works only with the 9P2000.u dialect.
  - allow only one user to access the tree (access=<uid>): Only the user with uid can access the v9fs tree. Other users that attempt to access it will get an EPERM error.
  - do all operations as a single user (access=any) (default for 9P2000): v9fs does a single attach and all operations are done as a single user. If this mode is selected, the v9fs behavior is identical to the current one.
  Signed-off-by: Latchesar Ionkov <lucho@ionkov.net>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: Make transports dynamic (Eric Van Hensbergen, 2007-10-17)
  This patch abstracts out the interfaces to the underlying transports so that new transports can be added as modules. This should also allow kernel configuration of transports without ifdef-hell.
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* 9p: fix bad error path in conversion routines (Mariusz Kozlowski, 2007-08-23)
  When buf_check_overflow() returns != 0 we will hit kfree(ERR_PTR(err)), and it will not be happy about it.
  Signed-off-by: Mariusz Kozlowski <m.kozlowski@tuxland.pl>
  Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
s="hl num">0x1; } /* Save active MC mask if hasn't set already */ if (!ctx->edac->mc_active_mask) ctx->edac->mc_active_mask = mcu_mask; return (mcu_mask & (1 << mc_idx)) ? 1 : 0; } static int xgene_edac_mc_add(struct xgene_edac *edac, struct device_node *np) { struct mem_ctl_info *mci; struct edac_mc_layer layers[2]; struct xgene_edac_mc_ctx tmp_ctx; struct xgene_edac_mc_ctx *ctx; struct resource res; int rc; memset(&tmp_ctx, 0, sizeof(tmp_ctx)); tmp_ctx.edac = edac; if (!devres_open_group(edac->dev, xgene_edac_mc_add, GFP_KERNEL)) return -ENOMEM; rc = of_address_to_resource(np, 0, &res); if (rc < 0) { dev_err(edac->dev, "no MCU resource address\n"); goto err_group; } tmp_ctx.mcu_csr = devm_ioremap_resource(edac->dev, &res); if (IS_ERR(tmp_ctx.mcu_csr)) { dev_err(edac->dev, "unable to map MCU resource\n"); rc = PTR_ERR(tmp_ctx.mcu_csr); goto err_group; } /* Ignore non-active MCU */ if (of_property_read_u32(np, "memory-controller", &tmp_ctx.mcu_id)) { dev_err(edac->dev, "no memory-controller property\n"); rc = -ENODEV; goto err_group; } if (!xgene_edac_mc_is_active(&tmp_ctx, tmp_ctx.mcu_id)) { rc = -ENODEV; goto err_group; } layers[0].type = EDAC_MC_LAYER_CHIP_SELECT; layers[0].size = 4; layers[0].is_virt_csrow = true; layers[1].type = EDAC_MC_LAYER_CHANNEL; layers[1].size = 2; layers[1].is_virt_csrow = false; mci = edac_mc_alloc(tmp_ctx.mcu_id, ARRAY_SIZE(layers), layers, sizeof(*ctx)); if (!mci) { rc = -ENOMEM; goto err_group; } ctx = mci->pvt_info; *ctx = tmp_ctx; /* Copy over resource value */ ctx->name = "xgene_edac_mc_err"; ctx->mci = mci; mci->pdev = &mci->dev; mci->ctl_name = ctx->name; mci->dev_name = ctx->name; mci->mtype_cap = MEM_FLAG_RDDR | MEM_FLAG_RDDR2 | MEM_FLAG_RDDR3 | MEM_FLAG_DDR | MEM_FLAG_DDR2 | MEM_FLAG_DDR3; mci->edac_ctl_cap = EDAC_FLAG_SECDED; mci->edac_cap = EDAC_FLAG_SECDED; mci->mod_name = EDAC_MOD_STR; mci->mod_ver = "0.1"; mci->ctl_page_to_phys = NULL; mci->scrub_cap = SCRUB_FLAG_HW_SRC; mci->scrub_mode = SCRUB_HW_SRC; if 
(edac_op_state == EDAC_OPSTATE_POLL) mci->edac_check = xgene_edac_mc_check; if (edac_mc_add_mc(mci)) { dev_err(edac->dev, "edac_mc_add_mc failed\n"); rc = -EINVAL; goto err_free; } xgene_edac_mc_create_debugfs_node(mci); list_add(&ctx->next, &edac->mcus); xgene_edac_mc_irq_ctl(mci, true); devres_remove_group(edac->dev, xgene_edac_mc_add); dev_info(edac->dev, "X-Gene EDAC MC registered\n"); return 0; err_free: edac_mc_free(mci); err_group: devres_release_group(edac->dev, xgene_edac_mc_add); return rc; } static int xgene_edac_mc_remove(struct xgene_edac_mc_ctx *mcu) { xgene_edac_mc_irq_ctl(mcu->mci, false); edac_mc_del_mc(&mcu->mci->dev); edac_mc_free(mcu->mci); return 0; } /* CPU L1/L2 error CSR */ #define MAX_CPU_PER_PMD 2 #define CPU_CSR_STRIDE 0x00100000 #define CPU_L2C_PAGE 0x000D0000 #define CPU_MEMERR_L2C_PAGE 0x000E0000 #define CPU_MEMERR_CPU_PAGE 0x000F0000 #define MEMERR_CPU_ICFECR_PAGE_OFFSET 0x0000 #define MEMERR_CPU_ICFESR_PAGE_OFFSET 0x0004 #define MEMERR_CPU_ICFESR_ERRWAY_RD(src) (((src) & 0xFF000000) >> 24) #define MEMERR_CPU_ICFESR_ERRINDEX_RD(src) (((src) & 0x003F0000) >> 16) #define MEMERR_CPU_ICFESR_ERRINFO_RD(src) (((src) & 0x0000FF00) >> 8) #define MEMERR_CPU_ICFESR_ERRTYPE_RD(src) (((src) & 0x00000070) >> 4) #define MEMERR_CPU_ICFESR_MULTCERR_MASK BIT(2) #define MEMERR_CPU_ICFESR_CERR_MASK BIT(0) #define MEMERR_CPU_LSUESR_PAGE_OFFSET 0x000c #define MEMERR_CPU_LSUESR_ERRWAY_RD(src) (((src) & 0xFF000000) >> 24) #define MEMERR_CPU_LSUESR_ERRINDEX_RD(src) (((src) & 0x003F0000) >> 16) #define MEMERR_CPU_LSUESR_ERRINFO_RD(src) (((src) & 0x0000FF00) >> 8) #define MEMERR_CPU_LSUESR_ERRTYPE_RD(src) (((src) & 0x00000070) >> 4) #define MEMERR_CPU_LSUESR_MULTCERR_MASK BIT(2) #define MEMERR_CPU_LSUESR_CERR_MASK BIT(0) #define MEMERR_CPU_LSUECR_PAGE_OFFSET 0x0008 #define MEMERR_CPU_MMUECR_PAGE_OFFSET 0x0010 #define MEMERR_CPU_MMUESR_PAGE_OFFSET 0x0014 #define MEMERR_CPU_MMUESR_ERRWAY_RD(src) (((src) & 0xFF000000) >> 24) #define 
MEMERR_CPU_MMUESR_ERRINDEX_RD(src) (((src) & 0x007F0000) >> 16) #define MEMERR_CPU_MMUESR_ERRINFO_RD(src) (((src) & 0x0000FF00) >> 8) #define MEMERR_CPU_MMUESR_ERRREQSTR_LSU_MASK BIT(7) #define MEMERR_CPU_MMUESR_ERRTYPE_RD(src) (((src) & 0x00000070) >> 4) #define MEMERR_CPU_MMUESR_MULTCERR_MASK BIT(2) #define MEMERR_CPU_MMUESR_CERR_MASK BIT(0) #define MEMERR_CPU_ICFESRA_PAGE_OFFSET 0x0804 #define MEMERR_CPU_LSUESRA_PAGE_OFFSET 0x080c #define MEMERR_CPU_MMUESRA_PAGE_OFFSET 0x0814 #define MEMERR_L2C_L2ECR_PAGE_OFFSET 0x0000 #define MEMERR_L2C_L2ESR_PAGE_OFFSET 0x0004 #define MEMERR_L2C_L2ESR_ERRSYN_RD(src) (((src) & 0xFF000000) >> 24) #define MEMERR_L2C_L2ESR_ERRWAY_RD(src) (((src) & 0x00FC0000) >> 18) #define MEMERR_L2C_L2ESR_ERRCPU_RD(src) (((src) & 0x00020000) >> 17) #define MEMERR_L2C_L2ESR_ERRGROUP_RD(src) (((src) & 0x0000E000) >> 13) #define MEMERR_L2C_L2ESR_ERRACTION_RD(src) (((src) & 0x00001C00) >> 10) #define MEMERR_L2C_L2ESR_ERRTYPE_RD(src) (((src) & 0x00000300) >> 8) #define MEMERR_L2C_L2ESR_MULTUCERR_MASK BIT(3) #define MEMERR_L2C_L2ESR_MULTICERR_MASK BIT(2) #define MEMERR_L2C_L2ESR_UCERR_MASK BIT(1) #define MEMERR_L2C_L2ESR_ERR_MASK BIT(0) #define MEMERR_L2C_L2EALR_PAGE_OFFSET 0x0008 #define CPUX_L2C_L2RTOCR_PAGE_OFFSET 0x0010 #define MEMERR_L2C_L2EAHR_PAGE_OFFSET 0x000c #define CPUX_L2C_L2RTOSR_PAGE_OFFSET 0x0014 #define MEMERR_L2C_L2RTOSR_MULTERR_MASK BIT(1) #define MEMERR_L2C_L2RTOSR_ERR_MASK BIT(0) #define CPUX_L2C_L2RTOALR_PAGE_OFFSET 0x0018 #define CPUX_L2C_L2RTOAHR_PAGE_OFFSET 0x001c #define MEMERR_L2C_L2ESRA_PAGE_OFFSET 0x0804 /* * Processor Module Domain (PMD) context - Context for a pair of processsors. * Each PMD consists of 2 CPUs and a shared L2 cache. Each CPU consists of * its own L1 cache. 
*/ struct xgene_edac_pmd_ctx { struct list_head next; struct device ddev; char *name; struct xgene_edac *edac; struct edac_device_ctl_info *edac_dev; void __iomem *pmd_csr; u32 pmd; int version; }; static void xgene_edac_pmd_l1_check(struct edac_device_ctl_info *edac_dev, int cpu_idx) { struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info; void __iomem *pg_f; u32 val; pg_f = ctx->pmd_csr + cpu_idx * CPU_CSR_STRIDE + CPU_MEMERR_CPU_PAGE; val = readl(pg_f + MEMERR_CPU_ICFESR_PAGE_OFFSET); if (val) { dev_err(edac_dev->dev, "CPU%d L1 memory error ICF 0x%08X Way 0x%02X Index 0x%02X Info 0x%02X\n", ctx->pmd * MAX_CPU_PER_PMD + cpu_idx, val, MEMERR_CPU_ICFESR_ERRWAY_RD(val), MEMERR_CPU_ICFESR_ERRINDEX_RD(val), MEMERR_CPU_ICFESR_ERRINFO_RD(val)); if (val & MEMERR_CPU_ICFESR_CERR_MASK) dev_err(edac_dev->dev, "One or more correctable error\n"); if (val & MEMERR_CPU_ICFESR_MULTCERR_MASK) dev_err(edac_dev->dev, "Multiple correctable error\n"); switch (MEMERR_CPU_ICFESR_ERRTYPE_RD(val)) { case 1: dev_err(edac_dev->dev, "L1 TLB multiple hit\n"); break; case 2: dev_err(edac_dev->dev, "Way select multiple hit\n"); break; case 3: dev_err(edac_dev->dev, "Physical tag parity error\n"); break; case 4: case 5: dev_err(edac_dev->dev, "L1 data parity error\n"); break; case 6: dev_err(edac_dev->dev, "L1 pre-decode parity error\n"); break; } /* Clear any HW errors */ writel(val, pg_f + MEMERR_CPU_ICFESR_PAGE_OFFSET); if (val & (MEMERR_CPU_ICFESR_CERR_MASK | MEMERR_CPU_ICFESR_MULTCERR_MASK)) edac_device_handle_ce(edac_dev, 0, 0, edac_dev->ctl_name); } val = readl(pg_f + MEMERR_CPU_LSUESR_PAGE_OFFSET); if (val) { dev_err(edac_dev->dev, "CPU%d memory error LSU 0x%08X Way 0x%02X Index 0x%02X Info 0x%02X\n", ctx->pmd * MAX_CPU_PER_PMD + cpu_idx, val, MEMERR_CPU_LSUESR_ERRWAY_RD(val), MEMERR_CPU_LSUESR_ERRINDEX_RD(val), MEMERR_CPU_LSUESR_ERRINFO_RD(val)); if (val & MEMERR_CPU_LSUESR_CERR_MASK) dev_err(edac_dev->dev, "One or more correctable error\n"); if (val & MEMERR_CPU_LSUESR_MULTCERR_MASK) 
			dev_err(edac_dev->dev, "Multiple correctable error\n");
		switch (MEMERR_CPU_LSUESR_ERRTYPE_RD(val)) {
		case 0:
			dev_err(edac_dev->dev, "Load tag error\n");
			break;
		case 1:
			dev_err(edac_dev->dev, "Load data error\n");
			break;
		case 2:
			dev_err(edac_dev->dev, "WSL multihit error\n");
			break;
		case 3:
			dev_err(edac_dev->dev, "Store tag error\n");
			break;
		case 4:
			dev_err(edac_dev->dev,
				"DTB multihit from load pipeline error\n");
			break;
		case 5:
			dev_err(edac_dev->dev,
				"DTB multihit from store pipeline error\n");
			break;
		}

		/* Clear any HW errors */
		writel(val, pg_f + MEMERR_CPU_LSUESR_PAGE_OFFSET);

		if (val & (MEMERR_CPU_LSUESR_CERR_MASK |
			   MEMERR_CPU_LSUESR_MULTCERR_MASK))
			edac_device_handle_ce(edac_dev, 0, 0,
					      edac_dev->ctl_name);
	}

	val = readl(pg_f + MEMERR_CPU_MMUESR_PAGE_OFFSET);
	if (val) {
		dev_err(edac_dev->dev,
			"CPU%d memory error MMU 0x%08X Way 0x%02X Index 0x%02X Info 0x%02X %s\n",
			ctx->pmd * MAX_CPU_PER_PMD + cpu_idx, val,
			MEMERR_CPU_MMUESR_ERRWAY_RD(val),
			MEMERR_CPU_MMUESR_ERRINDEX_RD(val),
			MEMERR_CPU_MMUESR_ERRINFO_RD(val),
			val & MEMERR_CPU_MMUESR_ERRREQSTR_LSU_MASK ?
"LSU" : "ICF"); if (val & MEMERR_CPU_MMUESR_CERR_MASK) dev_err(edac_dev->dev, "One or more correctable error\n"); if (val & MEMERR_CPU_MMUESR_MULTCERR_MASK) dev_err(edac_dev->dev, "Multiple correctable error\n"); switch (MEMERR_CPU_MMUESR_ERRTYPE_RD(val)) { case 0: dev_err(edac_dev->dev, "Stage 1 UTB hit error\n"); break; case 1: dev_err(edac_dev->dev, "Stage 1 UTB miss error\n"); break; case 2: dev_err(edac_dev->dev, "Stage 1 UTB allocate error\n"); break; case 3: dev_err(edac_dev->dev, "TMO operation single bank error\n"); break; case 4: dev_err(edac_dev->dev, "Stage 2 UTB error\n"); break; case 5: dev_err(edac_dev->dev, "Stage 2 UTB miss error\n"); break; case 6: dev_err(edac_dev->dev, "Stage 2 UTB allocate error\n"); break; case 7: dev_err(edac_dev->dev, "TMO operation multiple bank error\n"); break; } /* Clear any HW errors */ writel(val, pg_f + MEMERR_CPU_MMUESR_PAGE_OFFSET); edac_device_handle_ce(edac_dev, 0, 0, edac_dev->ctl_name); } } static void xgene_edac_pmd_l2_check(struct edac_device_ctl_info *edac_dev) { struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info; void __iomem *pg_d; void __iomem *pg_e; u32 val_hi; u32 val_lo; u32 val; /* Check L2 */ pg_e = ctx->pmd_csr + CPU_MEMERR_L2C_PAGE; val = readl(pg_e + MEMERR_L2C_L2ESR_PAGE_OFFSET); if (val) { val_lo = readl(pg_e + MEMERR_L2C_L2EALR_PAGE_OFFSET); val_hi = readl(pg_e + MEMERR_L2C_L2EAHR_PAGE_OFFSET); dev_err(edac_dev->dev, "PMD%d memory error L2C L2ESR 0x%08X @ 0x%08X.%08X\n", ctx->pmd, val, val_hi, val_lo); dev_err(edac_dev->dev, "ErrSyndrome 0x%02X ErrWay 0x%02X ErrCpu %d ErrGroup 0x%02X ErrAction 0x%02X\n", MEMERR_L2C_L2ESR_ERRSYN_RD(val), MEMERR_L2C_L2ESR_ERRWAY_RD(val), MEMERR_L2C_L2ESR_ERRCPU_RD(val), MEMERR_L2C_L2ESR_ERRGROUP_RD(val), MEMERR_L2C_L2ESR_ERRACTION_RD(val)); if (val & MEMERR_L2C_L2ESR_ERR_MASK) dev_err(edac_dev->dev, "One or more correctable error\n"); if (val & MEMERR_L2C_L2ESR_MULTICERR_MASK) dev_err(edac_dev->dev, "Multiple correctable error\n"); if (val & 
		    MEMERR_L2C_L2ESR_UCERR_MASK)
			dev_err(edac_dev->dev,
				"One or more uncorrectable error\n");
		if (val & MEMERR_L2C_L2ESR_MULTUCERR_MASK)
			dev_err(edac_dev->dev,
				"Multiple uncorrectable error\n");
		switch (MEMERR_L2C_L2ESR_ERRTYPE_RD(val)) {
		case 0:
			dev_err(edac_dev->dev, "Outbound SDB parity error\n");
			break;
		case 1:
			dev_err(edac_dev->dev, "Inbound SDB parity error\n");
			break;
		case 2:
			dev_err(edac_dev->dev, "Tag ECC error\n");
			break;
		case 3:
			dev_err(edac_dev->dev, "Data ECC error\n");
			break;
		}

		/* Clear any HW errors */
		writel(val, pg_e + MEMERR_L2C_L2ESR_PAGE_OFFSET);

		if (val & (MEMERR_L2C_L2ESR_ERR_MASK |
			   MEMERR_L2C_L2ESR_MULTICERR_MASK))
			edac_device_handle_ce(edac_dev, 0, 0,
					      edac_dev->ctl_name);
		if (val & (MEMERR_L2C_L2ESR_UCERR_MASK |
			   MEMERR_L2C_L2ESR_MULTUCERR_MASK))
			edac_device_handle_ue(edac_dev, 0, 0,
					      edac_dev->ctl_name);
	}

	/* Check if any memory request timed out on L2 cache */
	pg_d = ctx->pmd_csr + CPU_L2C_PAGE;
	val = readl(pg_d + CPUX_L2C_L2RTOSR_PAGE_OFFSET);
	if (val) {
		val_lo = readl(pg_d + CPUX_L2C_L2RTOALR_PAGE_OFFSET);
		val_hi = readl(pg_d + CPUX_L2C_L2RTOAHR_PAGE_OFFSET);
		dev_err(edac_dev->dev,
			"PMD%d L2C error L2C RTOSR 0x%08X @ 0x%08X.%08X\n",
			ctx->pmd, val, val_hi, val_lo);
		writel(val, pg_d + CPUX_L2C_L2RTOSR_PAGE_OFFSET);
	}
}

static void xgene_edac_pmd_check(struct edac_device_ctl_info *edac_dev)
{
	struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info;
	unsigned int pcp_hp_stat;
	int i;

	xgene_edac_pcp_rd(ctx->edac, PCPHPERRINTSTS, &pcp_hp_stat);
	if (!((PMD0_MERR_MASK << ctx->pmd) & pcp_hp_stat))
		return;

	/* Check CPU L1 error */
	for (i = 0; i < MAX_CPU_PER_PMD; i++)
		xgene_edac_pmd_l1_check(edac_dev, i);

	/* Check CPU L2 error */
	xgene_edac_pmd_l2_check(edac_dev);
}

static void xgene_edac_pmd_cpu_hw_cfg(struct edac_device_ctl_info *edac_dev,
				      int cpu)
{
	struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info;
	void __iomem *pg_f = ctx->pmd_csr + cpu * CPU_CSR_STRIDE +
			     CPU_MEMERR_CPU_PAGE;

	/*
	 * Enable CPU memory error:
	 *  MEMERR_CPU_ICFESRA, MEMERR_CPU_LSUESRA, and
	 *  MEMERR_CPU_MMUESRA
	 */
	writel(0x00000301, pg_f + MEMERR_CPU_ICFECR_PAGE_OFFSET);
	writel(0x00000301, pg_f + MEMERR_CPU_LSUECR_PAGE_OFFSET);
	writel(0x00000101, pg_f + MEMERR_CPU_MMUECR_PAGE_OFFSET);
}

static void xgene_edac_pmd_hw_cfg(struct edac_device_ctl_info *edac_dev)
{
	struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info;
	void __iomem *pg_d = ctx->pmd_csr + CPU_L2C_PAGE;
	void __iomem *pg_e = ctx->pmd_csr + CPU_MEMERR_L2C_PAGE;

	/* Enable PMD memory error - MEMERR_L2C_L2ECR and L2C_L2RTOCR */
	writel(0x00000703, pg_e + MEMERR_L2C_L2ECR_PAGE_OFFSET);
	/* Configure L2C HW request time out feature if supported */
	if (ctx->version > 1)
		writel(0x00000119, pg_d + CPUX_L2C_L2RTOCR_PAGE_OFFSET);
}

static void xgene_edac_pmd_hw_ctl(struct edac_device_ctl_info *edac_dev,
				  bool enable)
{
	struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info;
	int i;

	/* Enable PMD error interrupt */
	if (edac_dev->op_state == OP_RUNNING_INTERRUPT) {
		if (enable)
			xgene_edac_pcp_clrbits(ctx->edac, PCPHPERRINTMSK,
					       PMD0_MERR_MASK << ctx->pmd);
		else
			xgene_edac_pcp_setbits(ctx->edac, PCPHPERRINTMSK,
					       PMD0_MERR_MASK << ctx->pmd);
	}

	if (enable) {
		xgene_edac_pmd_hw_cfg(edac_dev);

		/* Two CPUs per PMD */
		for (i = 0; i < MAX_CPU_PER_PMD; i++)
			xgene_edac_pmd_cpu_hw_cfg(edac_dev, i);
	}
}

static ssize_t xgene_edac_pmd_l1_inject_ctrl_write(struct file *file,
						   const char __user *data,
						   size_t count, loff_t *ppos)
{
	struct edac_device_ctl_info *edac_dev = file->private_data;
	struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info;
	void __iomem *cpux_pg_f;
	int i;

	for (i = 0; i < MAX_CPU_PER_PMD; i++) {
		cpux_pg_f = ctx->pmd_csr + i * CPU_CSR_STRIDE +
			    CPU_MEMERR_CPU_PAGE;

		writel(MEMERR_CPU_ICFESR_MULTCERR_MASK |
		       MEMERR_CPU_ICFESR_CERR_MASK,
		       cpux_pg_f + MEMERR_CPU_ICFESRA_PAGE_OFFSET);
		writel(MEMERR_CPU_LSUESR_MULTCERR_MASK |
		       MEMERR_CPU_LSUESR_CERR_MASK,
		       cpux_pg_f + MEMERR_CPU_LSUESRA_PAGE_OFFSET);
		writel(MEMERR_CPU_MMUESR_MULTCERR_MASK |
		       MEMERR_CPU_MMUESR_CERR_MASK,
		       cpux_pg_f + MEMERR_CPU_MMUESRA_PAGE_OFFSET);
	}
	return count;
}

static ssize_t xgene_edac_pmd_l2_inject_ctrl_write(struct file *file,
						   const char __user *data,
						   size_t count, loff_t *ppos)
{
	struct edac_device_ctl_info *edac_dev = file->private_data;
	struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info;
	void __iomem *pg_e = ctx->pmd_csr + CPU_MEMERR_L2C_PAGE;

	writel(MEMERR_L2C_L2ESR_MULTUCERR_MASK |
	       MEMERR_L2C_L2ESR_MULTICERR_MASK |
	       MEMERR_L2C_L2ESR_UCERR_MASK |
	       MEMERR_L2C_L2ESR_ERR_MASK,
	       pg_e + MEMERR_L2C_L2ESRA_PAGE_OFFSET);
	return count;
}

static const struct file_operations xgene_edac_pmd_debug_inject_fops[] = {
	{
		.open = simple_open,
		.write = xgene_edac_pmd_l1_inject_ctrl_write,
		.llseek = generic_file_llseek,
	},
	{
		.open = simple_open,
		.write = xgene_edac_pmd_l2_inject_ctrl_write,
		.llseek = generic_file_llseek,
	},
	{ }
};

static void xgene_edac_pmd_create_debugfs_nodes(
	struct edac_device_ctl_info *edac_dev)
{
	struct xgene_edac_pmd_ctx *ctx = edac_dev->pvt_info;
	struct dentry *edac_debugfs;
	char name[30];

	if (!IS_ENABLED(CONFIG_EDAC_DEBUG))
		return;

	/*
	 * Todo: Switch to common EDAC debug file system for edac device
	 *       when available.
	 */
	if (!ctx->edac->dfs) {
		ctx->edac->dfs = debugfs_create_dir(edac_dev->dev->kobj.name,
						    NULL);
		if (!ctx->edac->dfs)
			return;
	}
	sprintf(name, "PMD%d", ctx->pmd);
	edac_debugfs = debugfs_create_dir(name, ctx->edac->dfs);
	if (!edac_debugfs)
		return;

	debugfs_create_file("l1_inject_ctrl", S_IWUSR, edac_debugfs, edac_dev,
			    &xgene_edac_pmd_debug_inject_fops[0]);
	debugfs_create_file("l2_inject_ctrl", S_IWUSR, edac_debugfs, edac_dev,
			    &xgene_edac_pmd_debug_inject_fops[1]);
}

static int xgene_edac_pmd_available(u32 efuse, int pmd)
{
	return (efuse & (1 << pmd)) ?
	       0 : 1;
}

static int xgene_edac_pmd_add(struct xgene_edac *edac, struct device_node *np,
			      int version)
{
	struct edac_device_ctl_info *edac_dev;
	struct xgene_edac_pmd_ctx *ctx;
	struct resource res;
	char edac_name[10];
	u32 pmd;
	int rc;
	u32 val;

	if (!devres_open_group(edac->dev, xgene_edac_pmd_add, GFP_KERNEL))
		return -ENOMEM;

	/* Determine if this PMD is disabled */
	if (of_property_read_u32(np, "pmd-controller", &pmd)) {
		dev_err(edac->dev, "no pmd-controller property\n");
		rc = -ENODEV;
		goto err_group;
	}
	rc = regmap_read(edac->efuse_map, 0, &val);
	if (rc)
		goto err_group;
	if (!xgene_edac_pmd_available(val, pmd)) {
		rc = -ENODEV;
		goto err_group;
	}

	sprintf(edac_name, "l2c%d", pmd);
	edac_dev = edac_device_alloc_ctl_info(sizeof(*ctx),
					      edac_name, 1, "l2c", 1, 2, NULL,
					      0, edac_device_alloc_index());
	if (!edac_dev) {
		rc = -ENOMEM;
		goto err_group;
	}

	ctx = edac_dev->pvt_info;
	ctx->name = "xgene_pmd_err";
	ctx->pmd = pmd;
	ctx->edac = edac;
	ctx->edac_dev = edac_dev;
	ctx->ddev = *edac->dev;
	ctx->version = version;
	edac_dev->dev = &ctx->ddev;
	edac_dev->ctl_name = ctx->name;
	edac_dev->dev_name = ctx->name;
	edac_dev->mod_name = EDAC_MOD_STR;

	rc = of_address_to_resource(np, 0, &res);
	if (rc < 0) {
		dev_err(edac->dev, "no PMD resource address\n");
		goto err_free;
	}
	ctx->pmd_csr = devm_ioremap_resource(edac->dev, &res);
	if (IS_ERR(ctx->pmd_csr)) {
		dev_err(edac->dev,
			"devm_ioremap_resource failed for PMD resource address\n");
		rc = PTR_ERR(ctx->pmd_csr);
		goto err_free;
	}

	if (edac_op_state == EDAC_OPSTATE_POLL)
		edac_dev->edac_check = xgene_edac_pmd_check;

	xgene_edac_pmd_create_debugfs_nodes(edac_dev);

	rc = edac_device_add_device(edac_dev);
	if (rc > 0) {
		dev_err(edac->dev, "edac_device_add_device failed\n");
		rc = -ENOMEM;
		goto err_free;
	}

	if (edac_op_state == EDAC_OPSTATE_INT)
		edac_dev->op_state = OP_RUNNING_INTERRUPT;

	list_add(&ctx->next, &edac->pmds);

	xgene_edac_pmd_hw_ctl(edac_dev, 1);

	devres_remove_group(edac->dev, xgene_edac_pmd_add);

	dev_info(edac->dev, "X-Gene EDAC PMD%d registered\n",
		 ctx->pmd);
	return 0;

err_free:
	edac_device_free_ctl_info(edac_dev);
err_group:
	devres_release_group(edac->dev, xgene_edac_pmd_add);
	return rc;
}

static int xgene_edac_pmd_remove(struct xgene_edac_pmd_ctx *pmd)
{
	struct edac_device_ctl_info *edac_dev = pmd->edac_dev;

	xgene_edac_pmd_hw_ctl(edac_dev, 0);
	edac_device_del_device(edac_dev->dev);
	edac_device_free_ctl_info(edac_dev);
	return 0;
}

static irqreturn_t xgene_edac_isr(int irq, void *dev_id)
{
	struct xgene_edac *ctx = dev_id;
	struct xgene_edac_pmd_ctx *pmd;
	unsigned int pcp_hp_stat;
	unsigned int pcp_lp_stat;

	xgene_edac_pcp_rd(ctx, PCPHPERRINTSTS, &pcp_hp_stat);
	xgene_edac_pcp_rd(ctx, PCPLPERRINTSTS, &pcp_lp_stat);
	if ((MCU_UNCORR_ERR_MASK & pcp_hp_stat) ||
	    (MCU_CTL_ERR_MASK & pcp_hp_stat) ||
	    (MCU_CORR_ERR_MASK & pcp_lp_stat)) {
		struct xgene_edac_mc_ctx *mcu;

		list_for_each_entry(mcu, &ctx->mcus, next) {
			xgene_edac_mc_check(mcu->mci);
		}
	}

	list_for_each_entry(pmd, &ctx->pmds, next) {
		if ((PMD0_MERR_MASK << pmd->pmd) & pcp_hp_stat)
			xgene_edac_pmd_check(pmd->edac_dev);
	}

	return IRQ_HANDLED;
}

static int xgene_edac_probe(struct platform_device *pdev)
{
	struct xgene_edac *edac;
	struct device_node *child;
	struct resource *res;
	int rc;

	edac = devm_kzalloc(&pdev->dev, sizeof(*edac), GFP_KERNEL);
	if (!edac)
		return -ENOMEM;

	edac->dev = &pdev->dev;
	platform_set_drvdata(pdev, edac);
	INIT_LIST_HEAD(&edac->mcus);
	INIT_LIST_HEAD(&edac->pmds);
	spin_lock_init(&edac->lock);
	mutex_init(&edac->mc_lock);

	edac->csw_map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
							"regmap-csw");
	if (IS_ERR(edac->csw_map)) {
		dev_err(edac->dev, "unable to get syscon regmap csw\n");
		rc = PTR_ERR(edac->csw_map);
		goto out_err;
	}

	edac->mcba_map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
							 "regmap-mcba");
	if (IS_ERR(edac->mcba_map)) {
		dev_err(edac->dev, "unable to get syscon regmap mcba\n");
		rc = PTR_ERR(edac->mcba_map);
		goto out_err;
	}

	edac->mcbb_map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
							 "regmap-mcbb");
	if
	    (IS_ERR(edac->mcbb_map)) {
		dev_err(edac->dev, "unable to get syscon regmap mcbb\n");
		rc = PTR_ERR(edac->mcbb_map);
		goto out_err;
	}
	edac->efuse_map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
							  "regmap-efuse");
	if (IS_ERR(edac->efuse_map)) {
		dev_err(edac->dev, "unable to get syscon regmap efuse\n");
		rc = PTR_ERR(edac->efuse_map);
		goto out_err;
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	edac->pcp_csr = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(edac->pcp_csr)) {
		dev_err(&pdev->dev, "no PCP resource address\n");
		rc = PTR_ERR(edac->pcp_csr);
		goto out_err;
	}

	if (edac_op_state == EDAC_OPSTATE_INT) {
		int irq;
		int i;

		for (i = 0; i < 3; i++) {
			irq = platform_get_irq(pdev, i);
			if (irq < 0) {
				dev_err(&pdev->dev, "No IRQ resource\n");
				rc = -EINVAL;
				goto out_err;
			}
			rc = devm_request_irq(&pdev->dev, irq,
					      xgene_edac_isr, IRQF_SHARED,
					      dev_name(&pdev->dev), edac);
			if (rc) {
				dev_err(&pdev->dev,
					"Could not request IRQ %d\n", irq);
				goto out_err;
			}
		}
	}

	for_each_child_of_node(pdev->dev.of_node, child) {
		if (!of_device_is_available(child))
			continue;
		if (of_device_is_compatible(child, "apm,xgene-edac-mc"))
			xgene_edac_mc_add(edac, child);
		if (of_device_is_compatible(child, "apm,xgene-edac-pmd"))
			xgene_edac_pmd_add(edac, child, 1);
		if (of_device_is_compatible(child, "apm,xgene-edac-pmd-v2"))
			xgene_edac_pmd_add(edac, child, 2);
	}

	return 0;

out_err:
	return rc;
}

static int xgene_edac_remove(struct platform_device *pdev)
{
	struct xgene_edac *edac = dev_get_drvdata(&pdev->dev);
	struct xgene_edac_mc_ctx *mcu;
	struct xgene_edac_mc_ctx *temp_mcu;
	struct xgene_edac_pmd_ctx *pmd;
	struct xgene_edac_pmd_ctx *temp_pmd;

	list_for_each_entry_safe(mcu, temp_mcu, &edac->mcus, next) {
		xgene_edac_mc_remove(mcu);
	}

	list_for_each_entry_safe(pmd, temp_pmd, &edac->pmds, next) {
		xgene_edac_pmd_remove(pmd);
	}
	return 0;
}

static const struct of_device_id xgene_edac_of_match[] = {
	{ .compatible = "apm,xgene-edac" },
	{},
};
MODULE_DEVICE_TABLE(of, xgene_edac_of_match);

static struct
platform_driver xgene_edac_driver = {
	.probe = xgene_edac_probe,
	.remove = xgene_edac_remove,
	.driver = {
		.name = "xgene-edac",
		.of_match_table = xgene_edac_of_match,
	},
};

static int __init xgene_edac_init(void)
{
	int rc;

	/* Make sure error reporting method is sane */
	switch (edac_op_state) {
	case EDAC_OPSTATE_POLL:
	case EDAC_OPSTATE_INT:
		break;
	default:
		edac_op_state = EDAC_OPSTATE_INT;
		break;
	}

	rc = platform_driver_register(&xgene_edac_driver);
	if (rc) {
		edac_printk(KERN_ERR, EDAC_MOD_STR,
			    "EDAC fails to register\n");
		goto reg_failed;
	}

	return 0;

reg_failed:
	return rc;
}
module_init(xgene_edac_init);

static void __exit xgene_edac_exit(void)
{
	platform_driver_unregister(&xgene_edac_driver);
}
module_exit(xgene_edac_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Feng Kan <fkan@apm.com>");
MODULE_DESCRIPTION("APM X-Gene EDAC driver");

module_param(edac_op_state, int, 0444);
MODULE_PARM_DESC(edac_op_state,
		 "EDAC error reporting state: 0=Poll, 2=Interrupt");