path: root/tools/perf/scripts/python
* perf scripts: Add event_analyzing_sample-record/report (Feng Tang, 2012-09-17)
| | | | | | | | | | | | | So that event_analyzing_sample.py can be shown by "perf script -l" Signed-off-by: Feng Tang <feng.tang@intel.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: David Ahern <dsahern@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1347007349-3102-4-git-send-email-feng.tang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf script python: Correct handler check and spelling errors (Feng Tang, 2012-08-09)
| | | | | | | | | | | | | | | | | | | | Correct the checking for handler returned by PyDict_GetItemString(), also fix some spelling error and remove some data code in event_analyzing_sample.py, as suggested by Namhyung Kim. v2: restore back the wrongly removed trace_unhandled() func Signed-off-by: Feng Tang <feng.tang@intel.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Andi Kleen <andi@firstfloor.org> Cc: David Ahern <dsahern@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Robert Richter <robert.richter@amd.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20120809134613.067104c4@feng-i7 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf scripts python: Add event_analyzing_sample.py as a sample for general event handling (Feng Tang, 2012-08-08)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | event handling Currently only trace point events are supported in perf/python script, the first 3 patches of this serie add the support for all types of events. This script is just a simple sample to show how to gather the basic information of the events and analyze them. This script will create one object for each event sample and insert them into a table in a database, then leverage the simple SQL commands to sort/group them. User can modify or write their brand new functions according to their specific requirment. Here is the sample of how to use the script: $ perf record -a tree $ perf script -s process_event.py There is 100 records in gen_events table Statistics about the general events grouped by thread/symbol/dso: comm number histgram ========================================== swapper 56 ###### tree 20 ##### perf 10 #### sshd 8 #### kworker/7:2 4 ### ksoftirqd/7 1 # plugin-containe 1 # symbol number histgram ========================================================== native_write_msr_safe 40 ###### __lock_acquire 8 #### ftrace_graph_caller 4 ### prepare_ftrace_return 4 ### intel_idle 3 ## native_sched_clock 3 ## Unknown_symbol 2 ## do_softirq 2 ## lock_release 2 ## lock_release_holdtime 2 ## trace_graph_entry 2 ## _IO_putc 1 # __d_lookup_rcu 1 # __do_fault 1 # __schedule 1 # _raw_spin_lock 1 # delay_tsc 1 # generic_exec_single 1 # generic_fillattr 1 # dso number histgram ================================================================== [kernel.kallsyms] 95 ####### /lib/libc-2.12.1.so 5 ### Signed-off-by: Feng Tang <feng.tang@intel.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: David Ahern <dsahern@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Robert Richter <robert.richter@amd.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/1344419875-21665-6-git-send-email-feng.tang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf scripts python: Add a python library EventClass.py (Feng Tang, 2012-08-08)
| | | | | | | | | | | | | | | | | | This library defines several class types for perf events which could help to better analyze the event samples. Currently there are just a few classes, PerfEvent is the base class for all perf events, PebsEvent is a HW base Intel x86 PEBS event, and user could add more SW/HW event classes based on requriements. Signed-off-by: Feng Tang <feng.tang@intel.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: David Ahern <dsahern@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Robert Richter <robert.richter@amd.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/1344419875-21665-5-git-send-email-feng.tang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf script: Add drop monitor script (Neil Horman, 2011-09-29)
| | | | | | | | | | | | | | | | | | | A while back I created the dropmonitor protocol, which allowed users to get reports of dropped frames communicated to them via a netlink socket. While useful, several people have now asked that I integrate the ability to do drop monitoring with perf, so they don't have to run additional tools. This patch adds a drop monitor script to the perf suite, and provides the same output that the netlink socket does. Cc: Ingo Molnar <mingo@elte.hu> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1309801217-22450-1-git-send-email-nhorman@tuxdriver.com Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf script: Finish the rename from trace to script (Arnaldo Carvalho de Melo, 2010-12-25)
| | | | | | | | | | | | | | | | | | The scripts have calls to 'perf trace' that need to be converted to 'perf script', do it. This problem was introduced in 133dc4c. Reported-by: Torok Edwin <edwintorok@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> Cc: Torok Edwin <edwintorok@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf: Rename 'perf trace' to 'perf script' (Ingo Molnar, 2010-11-16)
| | | | | | | | Free the perf trace name space and rename the trace to 'script' which is a better match for the scripting engine. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* perf trace scripting: remove system-wide param from shell scripts (Tom Zanussi, 2010-11-10)
| | | | | | | | | Including -a unconditionally when recording doesn't allow for the option of running scripts without it. Future patches will add add it back if needed at run-time. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
* perf python scripting: Add futex-contention script (Arnaldo Carvalho de Melo, 2010-10-26)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The equivalent to this SystemTAP script: http://sourceware.org/systemtap/wiki/WSFutexContention [root@doppio ~]# perf trace futex-contention Press control+C to stop and show the summary ^Cnpviewer.bin[15242] lock 7f0a8be19104 contended 29 times, 72806 avg ns npviewer.bin[15242] lock 7f0a8be19130 contended 2 times, 1355 avg ns synergyc[17245] lock f127f4 contended 1 times, 1830569 avg ns firefox[15116] lock 7f2b7238af0c contended 168 times, 1230390 avg ns synergyc[17245] lock f2fc20 contended 1 times, 33149 avg ns npviewer.bin[15255] lock 7f0a8be19074 contended 155 times, 73047 avg ns npviewer.bin[15255] lock 7f0a8be190a0 contended 127 times, 7088 avg ns synergyc[17247] lock f12854 contended 1 times, 46741 avg ns synergyc[17245] lock f12610 contended 1 times, 7358 avg ns [root@doppio ~]# Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf python scripting: Fixup cut'n'paste error in sctop script (Arnaldo Carvalho de Melo, 2010-10-26)
| | | | | | | | | | | | Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf python scripting: Support fedora 11 (audit 1.7.17) (Arnaldo Carvalho de Melo, 2010-10-25)
| | | | | | | | | | | | | | | Where we don't have the audit.MACH_ARMEB constant. Cc: David S. Miller <davem@davemloft.net> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf python scripting: Improve the syscalls-by-pid script (Arnaldo Carvalho de Melo, 2010-10-25)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | . Print message at script start telling how to get te summary . Print the syscall names . Accept both pid (if numeric) or COMM name Now it looks like this: [root@emilia tmp]# perf trace syscall-counts-by-pid Press control+C to stop and show the summary ^C syscall events by comm/pid: comm [pid]/syscalls count ---------------------------------------- ---------- automount [1670] futex 2 sshd [2322] rt_sigprocmask 4 select 2 write 1 read 1 perf [15178] read 2506 open 794 close 769 write 240 getdents 112 lseek 16 stat 9 perf_counter_open 5 fcntl 5 mmap 5 statfs 2 perf [15179] read 56701 open 499 stat 176 fstat 149 close 109 mmap 98 brk 75 rt_sigaction 66 munmap 42 mprotect 24 lstat 7 lseek 5 getdents 4 ioctl 3 readlink 2 futex 1 statfs 1 getegid 1 geteuid 1 getgid 1 getuid 1 getrlimit 1 fcntl 1 uname 1 write 1 [root@emilia tmp]# fg -bash: fg: current: no such job [root@emilia tmp]# perf trace syscall-counts-by-pid 2322 Press control+C to stop and show the summary ^C syscall events by comm/pid: comm [pid]/syscalls count ---------------------------------------- ---------- sshd [2322] rt_sigprocmask 4 select 2 write 1 read 1 [root@emilia tmp]# perf trace syscall-counts-by-pid sshd Press control+C to stop and show the summary ^C syscall events for sshd: comm [pid]/syscalls count ---------------------------------------- ---------- sshd [2322] rt_sigprocmask 4 select 2 write 1 read 1 [root@emilia tmp]# Cc: David S. Miller <davem@davemloft.net> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf python scripting: print the syscall name on sctop (Arnaldo Carvalho de Melo, 2010-10-25)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | [root@emilia tmp]# perf trace sctop 1 syscall events: event count ---------------------------------------- ---------- read 215400 futex 4029 write 376 brk 33 rt_sigprocmask 24 select 17 lseek 2 fsync 1 ^C[root@emilia tmp]# Cc: David S. Miller <davem@davemloft.net> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf python scripting: Improve the syscalls-counts script (Arnaldo Carvalho de Melo, 2010-10-25)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | . Print message at script start telling how to get te summary . Print the syscall name Now it looks like this: [root@emilia ~]# perf trace syscall-counts Press control+C to stop and show the summary ^C syscall events: event count ---------------------------------------- ----------- read 102752 open 1293 close 878 write 319 stat 185 fstat 149 getdents 116 mmap 98 brk 80 rt_sigaction 66 munmap 42 mprotect 24 lseek 21 lstat 7 rt_sigprocmask 4 futex 3 statfs 3 ioctl 3 readlink 2 select 2 getegid 1 geteuid 1 getgid 1 getuid 1 getrlimit 1 fcntl 1 uname 1 [root@emilia ~]# Cc: David S. Miller <davem@davemloft.net> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf python scripting: Improve the failed-syscalls-by-pid script (Arnaldo Carvalho de Melo, 2010-10-25)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | . Print message at script start telling how to get te summary . Print the syscall name using the audit-lib-python package, if installed . Print the errno string . Accept both pid (if numeric) or COMM name Now it looks like this: [root@emilia ~]# perf trace failed-syscalls-by-pid Press control+C to stop and show the summary ^C syscall errors: comm [pid] count ------------------------------ ---------- automount [1670] syscall: futex err = ETIMEDOUT 39 irqbalance [1462] syscall: openat err = ENOENT 4 perf [7888] syscall: lseek err = ESPIPE 1 syscall: open err = ENOENT 24 perf [7889] syscall: ioctl err = EINVAL 1 syscall: readlink err = EINVAL 2 syscall: open err = ENOENT 389 syscall: stat err = ENOENT 141 syscall: lseek err = ESPIPE 3 [root@emilia ~]# [root@emilia ~]# perf trace failed-syscalls-by-pid 1670 Press control+C to stop and show the summary ^C syscall errors: comm [pid] count ------------------------------ ---------- automount [1670] syscall: futex err = ETIMEDOUT 2 [root@emilia ~]# [root@emilia ~]# [root@emilia ~]# [root@emilia ~]# perf trace failed-syscalls-by-pid automount Press control+C to stop and show the summary ^C syscall errors for automount: comm [pid] count ------------------------------ ---------- automount [1669] syscall: futex err = ETIMEDOUT 1 automount [1670] syscall: futex err = ETIMEDOUT 5 [root@emilia ~]# Cc: David S. Miller <davem@davemloft.net> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf trace: Use $PERF_EXEC_PATH in canned report scripts (Ben Hutchings, 2010-10-23)
| | | | | | | | | | | | Set $PERF_EXEC_PATH before starting the record and report scripts, and make them use it where necessary. Cc: Ingo Molnar <mingo@elte.hu> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1286723403.2955.205.camel@localhost> Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf: Add a script to show packets processing (Koki Sanagi, 2010-09-07)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add a perf script which shows packets processing and processed time. It helps us to investigate networking or network devices. If you want to use it, install perf and record perf.data like following. If you set script, perf gathers records until it ends. If not, you must Ctrl-C to stop recording. And if you want a report from record, If you use some options, you can limit the output. Option is below. tx: show only tx packets processing rx: show only rx packets processing dev=: show processing on this device debug: work with debug mode. It shows buffer status. For example, if you want to show received packets processing associated with eth4, 106133.171439sec cpu=0 irq_entry(+0.000msec irq=24:eth4) | softirq_entry(+0.006msec) | |---netif_receive_skb(+0.010msec skb=f2d15900 len=100) | | | skb_copy_datagram_iovec(+0.039msec 10291::10291) | napi_poll_exit(+0.022msec eth4) This perf script helps us to analyze the processing time of a transmit/receive sequence. Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com> Acked-by: David S. Miller <davem@davemloft.net> Cc: Neil Horman <nhorman@tuxdriver.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Kaneshige Kenji <kaneshige.kenji@jp.fujitsu.com> Cc: Izumo Taku <izumi.taku@jp.fujitsu.com> Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Cc: Scott Mcmillan <scott.a.mcmillan@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <4C72439D.3040001@jp.fujitsu.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* perf, sched migration: Librarize task states and event headers helpers (Frederic Weisbecker, 2010-08-01)
| | | | | | | | | | | | | Librarize the task state and event headers helpers as they can be generally useful. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Nikhil Rao <ncrao@google.com> Cc: Tom Zanussi <tzanussi@gmail.com>
* perf, sched migration: Librarize the GUI class (Frederic Weisbecker, 2010-08-01)
| | | | | | | | | | | | | Export the GUI facility in the common library path. It is going to be useful for other scheduler views. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Nikhil Rao <ncrao@google.com> Cc: Tom Zanussi <tzanussi@gmail.com>
* perf, sched migration: Make the GUI class client agnostic (Frederic Weisbecker, 2010-08-01)
| | | | | | | | | | | | | | Make the perf migration GUI generic so that it can be reused for other kinds of trace painting. No more notion of CPUs or runqueue from the GUI class, it's now used as a library by the trace parser. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Nikhil Rao <ncrao@google.com> Cc: Tom Zanussi <tzanussi@gmail.com>
* perf, sched migration: Make it vertically scrollable (Frederic Weisbecker, 2010-08-01)
| | | | | | | | | | | | | | | | | | | With scheduler traces covering more than two cpus, rectangles of the CPUs 3 and more are not visibles. This makes the vertical navigation scrollable so that all of the CPUs rectangles are available. We also want to be able to zoom vertically, so that we can fit at best the screen with CPU rectangles, but that's for later. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Nikhil Rao <ncrao@google.com> Cc: Tom Zanussi <tzanussi@gmail.com>
* perf, sched migration: Parameterize cpu height and spacing (Nikhil Rao, 2010-08-01)
| | | | | | | | | | | | | | | | | Without vertical zoom, it is not possible to see all CPUs in a trace taken on a larger machine. This patch parameterizes the height and spacing of CPUs so that you can fit more cpus into the screen. Ideally we should dynamically size/space the CPU rectangles with some minimum threshold. Until then, this patch is a stop-gap. Signed-off-by: Nikhil Rao <ncrao@google.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Tom Zanussi <tzanussi@gmail.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* perf, sched migration: Fix key bindings (Nikhil Rao, 2010-08-01)
| | | | | | | | | | | | | | EVT_KEY_DOWN and EVT_LEFT_DOWN events are not bound to the RootFrame event handler. As a result, zoom/scroll via keyboard events do not work. This patch adds the missing bindings. Signed-off-by: Nikhil Rao <ncrao@google.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Tom Zanussi <tzanussi@gmail.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* perf, sched migration: Ignore unhandled task states (Frederic Weisbecker, 2010-08-01)
| | | | | | | | | | | | | | Stop printing an error message when we don't have the letter for a given task state. All we need to know is if the task is in the TASK_RUNNING state. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Nikhil Rao <ncrao@google.com> Cc: Tom Zanussi <tzanussi@gmail.com>
* perf, sched migration: Handle ignored migrate out events (Frederic Weisbecker, 2010-08-01)
| | | | | | | | | | | | | | | | | | | Migrate out events may happen on tasks that are not in the runqueue, for example this is the case for tasks that are sleeping. In this case, we don't want to log the migrate out event in the source runqueue because the task is not eventually in the runqueue and we have already logged its sleep event. This fixes timeslices that spuriously propagate a sleep event from the previous timeslice. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Nikhil Rao <ncrao@google.com> Cc: Tom Zanussi <tzanussi@gmail.com>
* perf: New migration tool overview (Frederic Weisbecker, 2010-08-01)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This brings a GUI tool that displays an overview of the load of tasks proportion in each CPUs. The CPUs forward progress is cut in timeslices. A new timeslice is created for every runqueue event: a task gets pushed out or pulled in the runqueue. For each timeslice, every CPUs rectangle is colored with a red power that describes the local load against the total load. This more red is the rectangle, the higher is the given CPU load. This load is the number of tasks running on the CPU, without any distinction against the scheduler policy of the tasks, for now. Also for each timeslice, the event origin is depicted on the CPUs that triggered it using a thin colored line on top of the rectangle timeslice. These events are: * sleep: a task went to sleep and has then been pulled out the runqueue. The origin color in the thin line is dark blue. * wake up: a task woke up and has then been pushed in the runqueue. The origin color is yellow. * wake up new: a new task woke up and has then been pushed in the runqueue. The origin color is green. * migrate in: a task migrated in the runqueue due to a load balancing operation. The origin color is violet. * migrate out: reverse of the previous one. Migrate in events usually have paired migrate out events in another runqueue. The origin color is light blue. Clicking on a timeslice provides the runqueue event details and the runqueue state. The CPU rectangles can be navigated using the usual arrow controls. Horizontal zooming in/out is possible with the "+" and "-" buttons. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Li Zefan <lizf@cn.fujitsu.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tom Zanussi <tzanussi@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Venkatesh Pallipadi <venki@google.com> Cc: Pierre Tardy <tardyp@gmail.com> Cc: Nikhil Rao <ncrao@google.com> Cc: Li Zefan <lizf@cn.fujitsu.com>
* perf scripts python: Give field dict to unhandled callback (Pierre Tardy, 2010-06-01)
| | | | | | | | | | | | | | | | | | | trace_unhandled() callback does not allow to access event fields, this patch resolves the problem. It can also been used as a more pythonic and flexible way for script writters to demux event types This will for example greatly simplify pytimechart event demux. Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Tom Zanussi <tzanussi@gmail.com> Cc: Ingo Molnar <mingo@elte.hu>, Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <1275340329-2397-1-git-send-email-tardyp@gmail.com> Signed-off-by: Pierre Tardy <tardyp@gmail.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
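A minimal sketch of what this enables (the event and field names below are illustrative, not taken from the patch): a script can demux events it has no explicit handler for by inspecting the dict now passed to trace_unhandled():

    def trace_unhandled(event_name, context, event_fields_dict):
            # Demux on the event name and pick fields out of the dict
            if "irq" in event_fields_dict:
                    print "%-40s irq=%s" % (event_name, event_fields_dict["irq"])
            else:
                    print "%-40s %d fields" % (event_name, len(event_fields_dict))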
* perf/trace/scripting: syscall-counts script cleanup (Tom Zanussi, 2010-05-10)
| | | | | | | | | | | | A small fix for the syscall counts script: - silence the match output in the shell script Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> LKML-Reference: <1273466820-9330-10-git-send-email-tzanussi@gmail.com> Signed-off-by: Tom Zanussi <tzanussi@gmail.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf/trace/scripting: syscall-counts-by-pid script cleanup (Tom Zanussi, 2010-05-10)
| | | | | | | | | | | | A small fix for the syscall counts by pid script: - silence the match output in the shell script Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> LKML-Reference: <1273466820-9330-9-git-send-email-tzanussi@gmail.com> Signed-off-by: Tom Zanussi <tzanussi@gmail.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf/trace/scripting: failed-syscalls-by-pid script cleanup (Tom Zanussi, 2010-05-10)
| | | | | | | | | | | | A small fixe for the failed syscalls by pid script: - silence the match output in the shell script Cc: Frédéric Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> LKML-Reference: <1273466820-9330-8-git-send-email-tzanussi@gmail.com> Signed-off-by: Tom Zanussi <tzanussi@gmail.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf: Remove leftover useless options to record trace events from scripts (Frederic Weisbecker, 2010-04-30)
| | | | | | | | | | | | | | -f, -c 1, -R are now useless for trace events recording, moreover -M is useless and event hurts. Remove them from the documentation examples and from record scripts. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Tom Zanussi <tzanussi@gmail.com>
* perf trace/scripting: Enable scripting shell scripts for live mode (Tom Zanussi, 2010-04-14)
| | | | | | | | | | | | | | | | | | It should be possible to run any perf trace script in 'live mode'. This requires being able to pass in e.g. '-i -' or other args, which the current shell scripts aren't equipped to handle. In a few cases, there are required or optional args that also need special handling. This patch makes changes the current set of shell scripts as necessary. Signed-off-by: Tom Zanussi <tzanussi@gmail.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: fweisbec@gmail.com Cc: rostedt@goodmis.org Cc: k-keiichi@bx.jp.nec.com Cc: acme@ghostprotocols.net LKML-Reference: <1270184365-8281-11-git-send-email-tzanussi@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perf trace/scripting: Add rwtop and sctop scripts (Tom Zanussi, 2010-04-14)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | A couple of scripts, one in Python and the other in Perl, that demonstrate 'live mode' tracing. For each, the output of the perf event stream is fed continuously to the script, which continuously aggregates the data and reports the current results every 3 seconds, or at the optionally specified interval. After the current results are displayed, the aggregations are cleared and the cycle begins anew. To run the scripts, simply pipe the output of the 'perf trace record' step as input to the corresponding 'perf trace report' step, using '-' as the filename to -o and -i: $ perf trace record sctop -o - | perf trace report sctop -i - Also adds clear_term() utility functions to the Util.pm and Util.py utility modules, for use by any script to clear the screen. Signed-off-by: Tom Zanussi <tzanussi@gmail.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: fweisbec@gmail.com Cc: rostedt@goodmis.org Cc: k-keiichi@bx.jp.nec.com Cc: acme@ghostprotocols.net LKML-Reference: <1270184365-8281-10-git-send-email-tzanussi@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perf/scripts: Add syscall tracing scripts (Tom Zanussi, 2010-02-24)
| | | | | | | | | | | | | | | | | | Adds a set of scripts that aggregate system call totals and system call errors. Most are Python scripts that also test basic functionality of the new Python engine, but there's also one Perl script added for comparison and for reference in some new Documentation contained in a later patch. Signed-off-by: Tom Zanussi <tzanussi@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Keiichi KII <k-keiichi@bx.jp.nec.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> LKML-Reference: <1264580883-15324-8-git-send-email-tzanussi@gmail.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* perf/scripts: Add Python scripting engine (Tom Zanussi, 2010-02-24)
Add base support for Python scripting to perf trace. Signed-off-by: Tom Zanussi <tzanussi@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Keiichi KII <k-keiichi@bx.jp.nec.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> LKML-Reference: <1264580883-15324-6-git-send-email-tzanussi@gmail.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
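As a rough sketch of the handler conventions this engine introduced (the tracepoint and script names are illustrative, assuming the documented perf-script-python calling convention), a report script defines optional trace_begin()/trace_end() hooks plus one handler per subsystem__event:

    # syscall-entry-count.py -- count raw_syscalls:sys_enter per syscall id (sketch)
    counts = {}

    def trace_begin():
            print "collecting sys_enter events..."

    def raw_syscalls__sys_enter(event_name, context, common_cpu,
                    common_secs, common_nsecs, common_pid, common_comm,
                    id, args):
            counts[id] = counts.get(id, 0) + 1

    def trace_end():
            for id, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
                    print "%-10d %10d" % (id, n)

Such a script would be run against an existing perf.data with something like 'perf script -s syscall-entry-count.py' ('perf trace -s' before the rename noted above).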
/*
 * ipg.c: Device Driver for the IP1000 Gigabit Ethernet Adapter
 *
 * Copyright (C) 2003, 2007  IC Plus Corp
 *
 * Original Author:
 *
 *   Craig Rich
 *   Sundance Technology, Inc.
 *   www.sundanceti.com
 *   craig_rich@sundanceti.com
 *
 * Current Maintainer:
 *
 *   Sorbica Shieh.
 *   http://www.icplus.com.tw
 *   sorbica@icplus.com.tw
 *
 *   Jesse Huang
 *   http://www.icplus.com.tw
 *   jesse@icplus.com.tw
 */
#include <linux/crc32.h>
#include <linux/ethtool.h>
#include <linux/mii.h>
#include <linux/mutex.h>

#include <asm/div64.h>

#define IPG_RX_RING_BYTES	(sizeof(struct ipg_rx) * IPG_RFDLIST_LENGTH)
#define IPG_TX_RING_BYTES	(sizeof(struct ipg_tx) * IPG_TFDLIST_LENGTH)
#define IPG_RESET_MASK \
	(IPG_AC_GLOBAL_RESET | IPG_AC_RX_RESET | IPG_AC_TX_RESET | \
	 IPG_AC_DMA | IPG_AC_FIFO | IPG_AC_NETWORK | IPG_AC_HOST | \
	 IPG_AC_AUTO_INIT)

#define ipg_w32(val32, reg)	iowrite32((val32), ioaddr + (reg))
#define ipg_w16(val16, reg)	iowrite16((val16), ioaddr + (reg))
#define ipg_w8(val8, reg)	iowrite8((val8), ioaddr + (reg))

#define ipg_r32(reg)		ioread32(ioaddr + (reg))
#define ipg_r16(reg)		ioread16(ioaddr + (reg))
#define ipg_r8(reg)		ioread8(ioaddr + (reg))

#define JUMBO_FRAME_4k_ONLY
enum {
	netdev_io_size = 128
};

#include "ipg.h"
#define DRV_NAME	"ipg"

MODULE_AUTHOR("IC Plus Corp. 2003");
MODULE_DESCRIPTION("IC Plus IP1000 Gigabit Ethernet Adapter Linux Driver");
MODULE_LICENSE("GPL");

/*
 * Variable record -- index by leading revision/length
 * Revision/Length(=N*4), Address1, Data1, Address2, Data2,...,AddressN,DataN
 */
static unsigned short DefaultPhyParam[] = {
	/* 11/12/03 IP1000A v1-3 rev=0x40 */
	/*--------------------------------------------------------------------------
	(0x4000|(15*4)), 31, 0x0001, 27, 0x01e0, 31, 0x0002, 22, 0x85bd, 24, 0xfff2,
				 27, 0x0c10, 28, 0x0c10, 29, 0x2c10, 31, 0x0003, 23, 0x92f6,
				 31, 0x0000, 23, 0x003d, 30, 0x00de, 20, 0x20e7,  9, 0x0700,
	  --------------------------------------------------------------------------*/
	/* 12/17/03 IP1000A v1-4 rev=0x40 */
	(0x4000 | (07 * 4)), 31, 0x0001, 27, 0x01e0, 31, 0x0002, 27, 0xeb8e, 31,
	    0x0000,
	30, 0x005e, 9, 0x0700,
	/* 01/09/04 IP1000A v1-5 rev=0x41 */
	(0x4100 | (07 * 4)), 31, 0x0001, 27, 0x01e0, 31, 0x0002, 27, 0xeb8e, 31,
	    0x0000,
	30, 0x005e, 9, 0x0700,
	0x0000
};

static const char *ipg_brand_name[] = {
	"IC PLUS IP1000 1000/100/10 based NIC",
	"Sundance Technology ST2021 based NIC",
	"Tamarack Microelectronics TC9020/9021 based NIC",
	"Tamarack Microelectronics TC9020/9021 based NIC",
	"D-Link NIC",
	"D-Link NIC IP1000A"
};

static struct pci_device_id ipg_pci_tbl[] __devinitdata = {
	{ PCI_VDEVICE(SUNDANCE,	0x1023), 0 },
	{ PCI_VDEVICE(SUNDANCE,	0x2021), 1 },
	{ PCI_VDEVICE(SUNDANCE,	0x1021), 2 },
	{ PCI_VDEVICE(DLINK,	0x9021), 3 },
	{ PCI_VDEVICE(DLINK,	0x4000), 4 },
	{ PCI_VDEVICE(DLINK,	0x4020), 5 },
	{ 0, }
};

MODULE_DEVICE_TABLE(pci, ipg_pci_tbl);

static inline void __iomem *ipg_ioaddr(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	return sp->ioaddr;
}

#ifdef IPG_DEBUG
static void ipg_dump_rfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;
	u32 offset;

	IPG_DEBUG_MSG("_dump_rfdlist\n");

	printk(KERN_INFO "rx_current = %2.2x\n", sp->rx_current);
	printk(KERN_INFO "rx_dirty   = %2.2x\n", sp->rx_dirty);
	printk(KERN_INFO "RFDList start address = %16.16lx\n",
	       (unsigned long) sp->rxd_map);
	printk(KERN_INFO "RFDListPtr register   = %8.8x%8.8x\n",
	       ipg_r32(IPG_RFDLISTPTR1), ipg_r32(IPG_RFDLISTPTR0));

	for (i = 0; i < IPG_RFDLIST_LENGTH; i++) {
		offset = (u32) &sp->rxd[i].next_desc - (u32) sp->rxd;
		printk(KERN_INFO "%2.2x %4.4x RFDNextPtr = %16.16lx\n", i,
		       offset, (unsigned long) sp->rxd[i].next_desc);
		offset = (u32) &sp->rxd[i].rfs - (u32) sp->rxd;
		printk(KERN_INFO "%2.2x %4.4x RFS        = %16.16lx\n", i,
		       offset, (unsigned long) sp->rxd[i].rfs);
		offset = (u32) &sp->rxd[i].frag_info - (u32) sp->rxd;
		printk(KERN_INFO "%2.2x %4.4x frag_info   = %16.16lx\n", i,
		       offset, (unsigned long) sp->rxd[i].frag_info);
	}
}

static void ipg_dump_tfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;
	u32 offset;

	IPG_DEBUG_MSG("_dump_tfdlist\n");

	printk(KERN_INFO "tx_current         = %2.2x\n", sp->tx_current);
	printk(KERN_INFO "tx_dirty = %2.2x\n", sp->tx_dirty);
	printk(KERN_INFO "TFDList start address = %16.16lx\n",
	       (unsigned long) sp->txd_map);
	printk(KERN_INFO "TFDListPtr register   = %8.8x%8.8x\n",
	       ipg_r32(IPG_TFDLISTPTR1), ipg_r32(IPG_TFDLISTPTR0));

	for (i = 0; i < IPG_TFDLIST_LENGTH; i++) {
		offset = (u32) &sp->txd[i].next_desc - (u32) sp->txd;
		printk(KERN_INFO "%2.2x %4.4x TFDNextPtr = %16.16lx\n", i,
		       offset, (unsigned long) sp->txd[i].next_desc);

		offset = (u32) &sp->txd[i].tfc - (u32) sp->txd;
		printk(KERN_INFO "%2.2x %4.4x TFC        = %16.16lx\n", i,
		       offset, (unsigned long) sp->txd[i].tfc);
		offset = (u32) &sp->txd[i].frag_info - (u32) sp->txd;
		printk(KERN_INFO "%2.2x %4.4x frag_info   = %16.16lx\n", i,
		       offset, (unsigned long) sp->txd[i].frag_info);
	}
}
#endif

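/* Write a PHY control value (reserved bits masked off) and wait the
 * IPG_PC_PHYCTRLWAIT_NS hold time before the next management operation.
 */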
static void ipg_write_phy_ctl(void __iomem *ioaddr, u8 data)
{
	ipg_w8(IPG_PC_RSVD_MASK & data, PHY_CTRL);
	ndelay(IPG_PC_PHYCTRLWAIT_NS);
}

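/* Present 'data' on the PHY control lines while toggling the management
 * clock low then high, i.e. one full management-clock cycle.
 */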
static void ipg_drive_phy_ctl_low_high(void __iomem *ioaddr, u8 data)
{
	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_LO | data);
	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_HI | data);
}

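/* Clock one bit time with the management data bit cleared; used for the
 * turnaround and idle portions of a management frame.
 */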
static void send_three_state(void __iomem *ioaddr, u8 phyctrlpolarity)
{
	phyctrlpolarity |= (IPG_PC_MGMTDATA & 0) | IPG_PC_MGMTDIR;

	ipg_drive_phy_ctl_low_high(ioaddr, phyctrlpolarity);
}

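/* Drive the management clock low with the data bit cleared to terminate
 * a management frame.
 */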
static void send_end(void __iomem *ioaddr, u8 phyctrlpolarity)
{
	ipg_w8((IPG_PC_MGMTCLK_LO | (IPG_PC_MGMTDATA & 0) | IPG_PC_MGMTDIR |
		phyctrlpolarity) & IPG_PC_RSVD_MASK, PHY_CTRL);
}

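/* Sample one bit from the PHY: lower the management clock, latch the
 * MgmtData bit from PHY_CTRL, then raise the clock again.
 */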
static u16 read_phy_bit(void __iomem *ioaddr, u8 phyctrlpolarity)
{
	u16 bit_data;

	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_LO | phyctrlpolarity);

	bit_data = ((ipg_r8(PHY_CTRL) & IPG_PC_MGMTDATA) >> 1) & 1;

	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_HI | phyctrlpolarity);

	return bit_data;
}

/*
 * Read a register from the Physical Layer device located
 * on the IPG NIC, using the IPG PHYCTRL register.
 */
static int mdio_read(struct net_device *dev, int phy_id, int phy_reg)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	/*
	 * The GMII management frame structure for a read is as follows:
	 *
	 * |Preamble|st|op|phyad|regad|ta|      data      |idle|
	 * |< 32 1s>|01|10|AAAAA|RRRRR|z0|DDDDDDDDDDDDDDDD|z   |
	 *
	 * <32 1s> = 32 consecutive logic 1 values
	 * A = bit of Physical Layer device address (MSB first)
	 * R = bit of register address (MSB first)
	 * z = High impedance state
	 * D = bit of read data (MSB first)
	 *
	 * Transmission order is 'Preamble' field first, bits transmitted
	 * left to right (first to last).
	 */
	struct {
		u32 field;
		unsigned int len;
	} p[] = {
		{ GMII_PREAMBLE,	32 },	/* Preamble */
		{ GMII_ST,		2  },	/* ST */
		{ GMII_READ,		2  },	/* OP */
		{ phy_id,		5  },	/* PHYAD */
		{ phy_reg,		5  },	/* REGAD */
		{ 0x0000,		2  },	/* TA */
		{ 0x0000,		16 },	/* DATA */
		{ 0x0000,		1  }	/* IDLE */
	};
	unsigned int i, j;
	u8 polarity, data;

	polarity  = ipg_r8(PHY_CTRL);
	polarity &= (IPG_PC_DUPLEX_POLARITY | IPG_PC_LINK_POLARITY);

	/* Create the Preamble, ST, OP, PHYAD, and REGAD field. */
	for (j = 0; j < 5; j++) {
		for (i = 0; i < p[j].len; i++) {
			/* For each variable length field, the MSB must be
			 * transmitted first. Rotate through the field bits,
			 * starting with the MSB, and move each bit into the
			 * 1st (2^1) bit position (this is the bit position
			 * corresponding to the MgmtData bit of the PhyCtrl
			 * register for the IPG).
			 *
			 * Example: ST = 01;
			 *
			 *          First write a '0' to bit 1 of the PhyCtrl
			 *          register, then write a '1' to bit 1 of the
			 *          PhyCtrl register.
			 *
			 * To do this, right shift the MSB of ST by the value:
			 * [field length - 1 - #ST bits already written]
			 * then left shift this result by 1.
			 */
			data  = (p[j].field >> (p[j].len - 1 - i)) << 1;
			data &= IPG_PC_MGMTDATA;
			data |= polarity | IPG_PC_MGMTDIR;

			ipg_drive_phy_ctl_low_high(ioaddr, data);
		}
	}

	send_three_state(ioaddr, polarity);

	read_phy_bit(ioaddr, polarity);

	/*
	 * For a read cycle, the bits for the next two fields (TA and
	 * DATA) are driven by the PHY (the IPG reads these bits).
	 */
	for (i = 0; i < p[6].len; i++) {
		p[6].field |=
		    (read_phy_bit(ioaddr, polarity) << (p[6].len - 1 - i));
	}

	send_three_state(ioaddr, polarity);
	send_three_state(ioaddr, polarity);
	send_three_state(ioaddr, polarity);
	send_end(ioaddr, polarity);

	/* Return the value of the DATA field. */
	return p[6].field;
}

/*
 * Write to a register from the Physical Layer device located
 * on the IPG NIC, using the IPG PHYCTRL register.
 */
static void mdio_write(struct net_device *dev, int phy_id, int phy_reg, int val)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	/*
	 * The GMII management frame structure for a write is as follows:
	 *
	 * |Preamble|st|op|phyad|regad|ta|      data      |idle|
	 * |< 32 1s>|01|10|AAAAA|RRRRR|z0|DDDDDDDDDDDDDDDD|z   |
	 *
	 * <32 1s> = 32 consecutive logic 1 values
	 * A = bit of Physical Layer device address (MSB first)
	 * R = bit of register address (MSB first)
	 * z = High impedance state
	 * D = bit of write data (MSB first)
	 *
	 * Transmission order is 'Preamble' field first, bits transmitted
	 * left to right (first to last).
	 */
	struct {
		u32 field;
		unsigned int len;
	} p[] = {
		{ GMII_PREAMBLE,	32 },	/* Preamble */
		{ GMII_ST,		2  },	/* ST */
		{ GMII_WRITE,		2  },	/* OP */
		{ phy_id,		5  },	/* PHYAD */
		{ phy_reg,		5  },	/* REGAD */
		{ 0x0002,		2  },	/* TA */
		{ val & 0xffff,		16 },	/* DATA */
		{ 0x0000,		1  }	/* IDLE */
	};
	unsigned int i, j;
	u8 polarity, data;

	polarity  = ipg_r8(PHY_CTRL);
	polarity &= (IPG_PC_DUPLEX_POLARITY | IPG_PC_LINK_POLARITY);

	/* Create the Preamble, ST, OP, PHYAD, and REGAD field. */
	for (j = 0; j < 7; j++) {
		for (i = 0; i < p[j].len; i++) {
			/* For each variable length field, the MSB must be
			 * transmitted first. Rotate through the field bits,
			 * starting with the MSB, and move each bit into the
			 * 1st (2^1) bit position (this is the bit position
			 * corresponding to the MgmtData bit of the PhyCtrl
			 * register for the IPG).
			 *
			 * Example: ST = 01;
			 *
			 *          First write a '0' to bit 1 of the PhyCtrl
			 *          register, then write a '1' to bit 1 of the
			 *          PhyCtrl register.
			 *
			 * To do this, right shift the MSB of ST by the value:
			 * [field length - 1 - #ST bits already written]
			 * then left shift this result by 1.
			 */
			data  = (p[j].field >> (p[j].len - 1 - i)) << 1;
			data &= IPG_PC_MGMTDATA;
			data |= polarity | IPG_PC_MGMTDIR;

			ipg_drive_phy_ctl_low_high(ioaddr, data);
		}
	}

	/* The last cycle is a tri-state, so read from the PHY. */
	for (j = 7; j < 8; j++) {
		for (i = 0; i < p[j].len; i++) {
			ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_LO | polarity);

			p[j].field |= ((ipg_r8(PHY_CTRL) &
				IPG_PC_MGMTDATA) >> 1) << (p[j].len - 1 - i);

			ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_HI | polarity);
		}
	}
}

static void ipg_set_led_mode(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	u32 mode;

	mode = ipg_r32(ASIC_CTRL);
	mode &= ~(IPG_AC_LED_MODE_BIT_1 | IPG_AC_LED_MODE | IPG_AC_LED_SPEED);

	if ((sp->led_mode & 0x03) > 1)
		mode |= IPG_AC_LED_MODE_BIT_1;	/* Write Asic Control Bit 29 */

	if ((sp->led_mode & 0x01) == 1)
		mode |= IPG_AC_LED_MODE;	/* Write Asic Control Bit 14 */

	if ((sp->led_mode & 0x08) == 8)
		mode |= IPG_AC_LED_SPEED;	/* Write Asic Control Bit 27 */

	ipg_w32(mode, ASIC_CTRL);
}

static void ipg_set_phy_set(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	int physet;

	physet = ipg_r8(PHY_SET);
	physet &= ~(IPG_PS_MEM_LENB9B | IPG_PS_MEM_LEN9 | IPG_PS_NON_COMPDET);
	physet |= ((sp->led_mode & 0x70) >> 4);
	ipg_w8(physet, PHY_SET);
}

static int ipg_reset(struct net_device *dev, u32 resetflags)
{
	/* Assert functional resets via the IPG AsicCtrl
	 * register as specified by the 'resetflags' input
	 * parameter.
	 */
	void __iomem *ioaddr = ipg_ioaddr(dev);
	unsigned int timeout_count = 0;

	IPG_DEBUG_MSG("_reset\n");

	ipg_w32(ipg_r32(ASIC_CTRL) | resetflags, ASIC_CTRL);

	/* Delay added to account for problem with 10Mbps reset. */
	mdelay(IPG_AC_RESETWAIT);

	while (IPG_AC_RESET_BUSY & ipg_r32(ASIC_CTRL)) {
		mdelay(IPG_AC_RESETWAIT);
		if (++timeout_count > IPG_AC_RESET_TIMEOUT)
			return -ETIME;
	}
	/* Set LED Mode in Asic Control */
	ipg_set_led_mode(dev);

	/* Set PHYSet Register Value */
	ipg_set_phy_set(dev);
	return 0;
}

/* Find the GMII PHY address. */
static int ipg_find_phyaddr(struct net_device *dev)
{
	unsigned int phyaddr, i;

	for (i = 0; i < 32; i++) {
		u32 status;

		/* Search for the correct PHY address among 32 possible. */
		phyaddr = (IPG_NIC_PHY_ADDRESS + i) % 32;

		/* 10/22/03 Grace change verify from GMII_PHY_STATUS to
		   GMII_PHY_ID1
		 */

		status = mdio_read(dev, phyaddr, MII_BMSR);

		if ((status != 0xFFFF) && (status != 0))
			return phyaddr;
	}

	return 0x1f;
}

/*
 * Configure IPG based on result of IEEE 802.3 PHY
 * auto-negotiation.
 */
static int ipg_config_autoneg(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int txflowcontrol;
	unsigned int rxflowcontrol;
	unsigned int fullduplex;
	unsigned int gig;
	u32 mac_ctrl_val;
	u32 asicctrl;
	u8 phyctrl;

	IPG_DEBUG_MSG("_config_autoneg\n");

	asicctrl = ipg_r32(ASIC_CTRL);
	phyctrl = ipg_r8(PHY_CTRL);
	mac_ctrl_val = ipg_r32(MAC_CTRL);

	/* Set flags for use in resolving auto-negotiation, assuming
	 * non-1000Mbps, half duplex, no flow control.
	 */
	fullduplex = 0;
	txflowcontrol = 0;
	rxflowcontrol = 0;
	gig = 0;

	/* To accommodate a problem in 10Mbps operation,
	 * set a global flag if PHY running in 10Mbps mode.
	 */
	sp->tenmbpsmode = 0;

	printk(KERN_INFO "%s: Link speed = ", dev->name);

	/* Determine actual speed of operation. */
	switch (phyctrl & IPG_PC_LINK_SPEED) {
	case IPG_PC_LINK_SPEED_10MBPS:
		printk("10Mbps.\n");
		printk(KERN_INFO "%s: 10Mbps operational mode enabled.\n",
		       dev->name);
		sp->tenmbpsmode = 1;
		break;
	case IPG_PC_LINK_SPEED_100MBPS:
		printk("100Mbps.\n");
		break;
	case IPG_PC_LINK_SPEED_1000MBPS:
		printk("1000Mbps.\n");
		gig = 1;
		break;
	default:
		printk("undefined!\n");
		return 0;
	}

	if (phyctrl & IPG_PC_DUPLEX_STATUS) {
		fullduplex = 1;
		txflowcontrol = 1;
		rxflowcontrol = 1;
	}

	/* Configure full duplex, and flow control. */
	if (fullduplex == 1) {
		/* Configure IPG for full duplex operation. */
		printk(KERN_INFO "%s: setting full duplex, ", dev->name);

		mac_ctrl_val |= IPG_MC_DUPLEX_SELECT_FD;

		if (txflowcontrol == 1) {
			printk("TX flow control");
			mac_ctrl_val |= IPG_MC_TX_FLOW_CONTROL_ENABLE;
		} else {
			printk("no TX flow control");
			mac_ctrl_val &= ~IPG_MC_TX_FLOW_CONTROL_ENABLE;
		}

		if (rxflowcontrol == 1) {
			printk(", RX flow control.");
			mac_ctrl_val |= IPG_MC_RX_FLOW_CONTROL_ENABLE;
		} else {
			printk(", no RX flow control.");
			mac_ctrl_val &= ~IPG_MC_RX_FLOW_CONTROL_ENABLE;
		}

		printk("\n");
	} else {
		/* Configure IPG for half duplex operation. */
		printk(KERN_INFO "%s: setting half duplex, "
		       "no TX flow control, no RX flow control.\n", dev->name);

		mac_ctrl_val &= ~IPG_MC_DUPLEX_SELECT_FD &
			~IPG_MC_TX_FLOW_CONTROL_ENABLE &
			~IPG_MC_RX_FLOW_CONTROL_ENABLE;
	}
	ipg_w32(mac_ctrl_val, MAC_CTRL);
	return 0;
}

/* Determine and configure multicast operation and set
 * receive mode for IPG.
 */
static void ipg_nic_set_multicast_list(struct net_device *dev)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	struct dev_mc_list *mc_list_ptr;
	unsigned int hashindex;
	u32 hashtable[2];
	u8 receivemode;

	IPG_DEBUG_MSG("_nic_set_multicast_list\n");

	receivemode = IPG_RM_RECEIVEUNICAST | IPG_RM_RECEIVEBROADCAST;

	if (dev->flags & IFF_PROMISC) {
		/* NIC to be configured in promiscuous mode. */
		receivemode = IPG_RM_RECEIVEALLFRAMES;
	} else if ((dev->flags & IFF_ALLMULTI) ||
		   ((dev->flags & IFF_MULTICAST) &&
		    (dev->mc_count > IPG_MULTICAST_HASHTABLE_SIZE))) {
		/* NIC to be configured to receive all multicast
		 * frames. */
		receivemode |= IPG_RM_RECEIVEMULTICAST;
	} else if ((dev->flags & IFF_MULTICAST) && (dev->mc_count > 0)) {
		/* NIC to be configured to receive selected
		 * multicast addresses. */
		receivemode |= IPG_RM_RECEIVEMULTICASTHASH;
	}

	/* Calculate the bits to set for the 64 bit, IPG HASHTABLE.
	 * The IPG applies a cyclic-redundancy-check (the same CRC
	 * used to calculate the frame data FCS) to the destination
	 * address of all incoming multicast frames whose destination
	 * address has the multicast bit set. The least significant
	 * 6 bits of the CRC result are used as an addressing index
	 * into the hash table. If the value of the bit addressed by
	 * this index is a 1, the frame is passed to the host system.
	 */

	/* Clear hashtable. */
	hashtable[0] = 0x00000000;
	hashtable[1] = 0x00000000;

	/* Cycle through all multicast addresses to filter. */
	for (mc_list_ptr = dev->mc_list;
	     mc_list_ptr != NULL; mc_list_ptr = mc_list_ptr->next) {
		/* Calculate CRC result for each multicast address. */
		hashindex = crc32_le(0xffffffff, mc_list_ptr->dmi_addr,
				     ETH_ALEN);

		/* Use only the least significant 6 bits. */
		hashindex = hashindex & 0x3F;

		/* Within "hashtable", set bit number "hashindex"
		 * to a logic 1.
		 */
		set_bit(hashindex, (void *)hashtable);
	}

	/* Write the value of the hashtable to the four 16-bit
	 * HASHTABLE IPG registers.
	 */
	ipg_w32(hashtable[0], HASHTABLE_0);
	ipg_w32(hashtable[1], HASHTABLE_1);

	ipg_w8(IPG_RM_RSVD_MASK & receivemode, RECEIVE_MODE);

	IPG_DEBUG_MSG("ReceiveMode = %x\n", ipg_r8(RECEIVE_MODE));
}

static int ipg_io_config(struct net_device *dev)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	u32 origmacctrl;
	u32 restoremacctrl;

	IPG_DEBUG_MSG("_io_config\n");

	origmacctrl = ipg_r32(MAC_CTRL);

	restoremacctrl = origmacctrl | IPG_MC_STATISTICS_ENABLE;

	/* Based on compilation option, determine if FCS is to be
	 * stripped on receive frames by IPG.
	 */
	if (!IPG_STRIP_FCS_ON_RX)
		restoremacctrl |= IPG_MC_RCV_FCS;

	/* Determine if transmitter and/or receiver are
	 * enabled so we may restore MACCTRL correctly.
	 */
	if (origmacctrl & IPG_MC_TX_ENABLED)
		restoremacctrl |= IPG_MC_TX_ENABLE;

	if (origmacctrl & IPG_MC_RX_ENABLED)
		restoremacctrl |= IPG_MC_RX_ENABLE;

	/* Transmitter and receiver must be disabled before setting
	 * IFSSelect.
	 */
	ipg_w32((origmacctrl & (IPG_MC_RX_DISABLE | IPG_MC_TX_DISABLE)) &
		IPG_MC_RSVD_MASK, MAC_CTRL);

	/* Now that transmitter and receiver are disabled, write
	 * to IFSSelect.
	 */
	ipg_w32((origmacctrl & IPG_MC_IFS_96BIT) & IPG_MC_RSVD_MASK, MAC_CTRL);

	/* Set RECEIVEMODE register. */
	ipg_nic_set_multicast_list(dev);

	ipg_w16(IPG_MAX_RXFRAME_SIZE, MAX_FRAME_SIZE);

	ipg_w8(IPG_RXDMAPOLLPERIOD_VALUE,   RX_DMA_POLL_PERIOD);
	ipg_w8(IPG_RXDMAURGENTTHRESH_VALUE, RX_DMA_URGENT_THRESH);
	ipg_w8(IPG_RXDMABURSTTHRESH_VALUE,  RX_DMA_BURST_THRESH);
	ipg_w8(IPG_TXDMAPOLLPERIOD_VALUE,   TX_DMA_POLL_PERIOD);
	ipg_w8(IPG_TXDMAURGENTTHRESH_VALUE, TX_DMA_URGENT_THRESH);
	ipg_w8(IPG_TXDMABURSTTHRESH_VALUE,  TX_DMA_BURST_THRESH);
	ipg_w16((IPG_IE_HOST_ERROR | IPG_IE_TX_DMA_COMPLETE |
		 IPG_IE_TX_COMPLETE | IPG_IE_INT_REQUESTED |
		 IPG_IE_UPDATE_STATS | IPG_IE_LINK_EVENT |
		 IPG_IE_RX_DMA_COMPLETE | IPG_IE_RX_DMA_PRIORITY), INT_ENABLE);
	ipg_w16(IPG_FLOWONTHRESH_VALUE,  FLOW_ON_THRESH);
	ipg_w16(IPG_FLOWOFFTHRESH_VALUE, FLOW_OFF_THRESH);

	/* IPG multi-frag frame bug workaround.
	 * Per silicon revision B3 errata.
	 */
	ipg_w16(ipg_r16(DEBUG_CTRL) | 0x0200, DEBUG_CTRL);

	/* IPG TX poll now bug workaround.
	 * Per silicon revision B3 errata.
	 */
	ipg_w16(ipg_r16(DEBUG_CTRL) | 0x0010, DEBUG_CTRL);

	/* IPG RX poll now bug workaround.
	 * Per silicon revision B3 errata.
	 */
	ipg_w16(ipg_r16(DEBUG_CTRL) | 0x0020, DEBUG_CTRL);

	/* Now restore MACCTRL to original setting. */
	ipg_w32(IPG_MC_RSVD_MASK & restoremacctrl, MAC_CTRL);

	/* Disable unused RMON statistics. */
	ipg_w32(IPG_RZ_ALL, RMON_STATISTICS_MASK);

	/* Disable unused MIB statistics. */
	ipg_w32(IPG_SM_MACCONTROLFRAMESXMTD | IPG_SM_MACCONTROLFRAMESRCVD |
		IPG_SM_BCSTOCTETXMTOK_BCSTFRAMESXMTDOK | IPG_SM_TXJUMBOFRAMES |
		IPG_SM_MCSTOCTETXMTOK_MCSTFRAMESXMTDOK | IPG_SM_RXJUMBOFRAMES |
		IPG_SM_BCSTOCTETRCVDOK_BCSTFRAMESRCVDOK |
		IPG_SM_UDPCHECKSUMERRORS | IPG_SM_TCPCHECKSUMERRORS |
		IPG_SM_IPCHECKSUMERRORS, STATISTICS_MASK);

	return 0;
}

/*
 * Create a receive buffer within system memory and update
 * NIC private structure appropriately.
 */
static int ipg_get_rxbuff(struct net_device *dev, int entry)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	struct ipg_rx *rxfd = sp->rxd + entry;
	struct sk_buff *skb;
	u64 rxfragsize;

	IPG_DEBUG_MSG("_get_rxbuff\n");

	skb = netdev_alloc_skb(dev, IPG_RXSUPPORT_SIZE + NET_IP_ALIGN);
	if (!skb) {
		sp->rx_buff[entry] = NULL;
		return -ENOMEM;
	}

	/* Adjust the data start location within the buffer to
	 * align IP address field to a 16 byte boundary.
	 */
	skb_reserve(skb, NET_IP_ALIGN);

	/* Associate the receive buffer with the IPG NIC. */
	skb->dev = dev;

	/* Save the address of the sk_buff structure. */
	sp->rx_buff[entry] = skb;

	rxfd->frag_info = cpu_to_le64(pci_map_single(sp->pdev, skb->data,
		sp->rx_buf_sz, PCI_DMA_FROMDEVICE));

	/* Set the RFD fragment length. */
	rxfragsize = IPG_RXFRAG_SIZE;
	rxfd->frag_info |= cpu_to_le64((rxfragsize << 48) & IPG_RFI_FRAGLEN);

	return 0;
}

static int init_rfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;

	IPG_DEBUG_MSG("_init_rfdlist\n");

	for (i = 0; i < IPG_RFDLIST_LENGTH; i++) {
		struct ipg_rx *rxfd = sp->rxd + i;

		if (sp->rx_buff[i]) {
			pci_unmap_single(sp->pdev,
				le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
				sp->rx_buf_sz, PCI_DMA_FROMDEVICE);
			dev_kfree_skb_irq(sp->rx_buff[i]);
			sp->rx_buff[i] = NULL;
		}

		/* Clear out the RFS field. */
		rxfd->rfs = 0x0000000000000000;

		if (ipg_get_rxbuff(dev, i) < 0) {
			/*
			 * A receive buffer was not ready, break the
			 * RFD list here.
			 */
			IPG_DEBUG_MSG("Cannot allocate Rx buffer.\n");

			/* Just in case we cannot allocate a single RFD.
			 * Should not occur.
			 */
			if (i == 0) {
				printk(KERN_ERR "%s: No memory available"
					" for RFD list.\n", dev->name);
				return -ENOMEM;
			}
		}

		rxfd->next_desc = cpu_to_le64(sp->rxd_map +
			sizeof(struct ipg_rx)*(i + 1));
	}
	sp->rxd[i - 1].next_desc = cpu_to_le64(sp->rxd_map);

	sp->rx_current = 0;
	sp->rx_dirty = 0;

	/* Write the location of the RFDList to the IPG. */
	ipg_w32((u32) sp->rxd_map, RFD_LIST_PTR_0);
	ipg_w32(0x00000000, RFD_LIST_PTR_1);

	return 0;
}

static void init_tfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;

	IPG_DEBUG_MSG("_init_tfdlist\n");

	for (i = 0; i < IPG_TFDLIST_LENGTH; i++) {
		struct ipg_tx *txfd = sp->txd + i;

		txfd->tfc = cpu_to_le64(IPG_TFC_TFDDONE);

		if (sp->tx_buff[i]) {
			dev_kfree_skb_irq(sp->tx_buff[i]);
			sp->tx_buff[i] = NULL;
		}

		txfd->next_desc = cpu_to_le64(sp->txd_map +
			sizeof(struct ipg_tx)*(i + 1));
	}
	sp->txd[i - 1].next_desc = cpu_to_le64(sp->txd_map);

	sp->tx_current = 0;
	sp->tx_dirty = 0;

	/* Write the location of the TFDList to the IPG. */
	IPG_DDEBUG_MSG("Starting TFDListPtr = %8.8x\n",
		       (u32) sp->txd_map);
	ipg_w32((u32) sp->txd_map, TFD_LIST_PTR_0);
	ipg_w32(0x00000000, TFD_LIST_PTR_1);

	sp->reset_current_tfd = 1;
}

/*
 * Free all transmit buffers which have already been transferred
 * via DMA to the IPG.
 */
static void ipg_nic_txfree(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	unsigned int released, pending, dirty;

	IPG_DEBUG_MSG("_nic_txfree\n");

	pending = sp->tx_current - sp->tx_dirty;
	dirty = sp->tx_dirty % IPG_TFDLIST_LENGTH;

	for (released = 0; released < pending; released++) {
		struct sk_buff *skb = sp->tx_buff[dirty];
		struct ipg_tx *txfd = sp->txd + dirty;

		IPG_DEBUG_MSG("TFC = %16.16lx\n", (unsigned long) txfd->tfc);

		/* Look at each TFD's TFC field beginning
		 * at the last freed TFD up to the current TFD.
		 * If the TFDDone bit is set, free the associated
		 * buffer.
		 */
		if (!(txfd->tfc & cpu_to_le64(IPG_TFC_TFDDONE)))
			break;

		/* Free the transmit buffer. */
		if (skb) {
			pci_unmap_single(sp->pdev,
				le64_to_cpu(txfd->frag_info) & ~IPG_TFI_FRAGLEN,
				skb->len, PCI_DMA_TODEVICE);

			dev_kfree_skb_irq(skb);

			sp->tx_buff[dirty] = NULL;
		}
		dirty = (dirty + 1) % IPG_TFDLIST_LENGTH;
	}

	sp->tx_dirty += released;

	if (netif_queue_stopped(dev) &&
	    (sp->tx_current != (sp->tx_dirty + IPG_TFDLIST_LENGTH))) {
		netif_wake_queue(dev);
	}
}

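/*
 * Transmit watchdog: reset the transmit logic, reapply the I/O
 * configuration, rebuild the TFD list and re-enable the transmitter.
 */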
static void ipg_tx_timeout(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;

	ipg_reset(dev, IPG_AC_TX_RESET | IPG_AC_DMA | IPG_AC_NETWORK |
		  IPG_AC_FIFO);

	spin_lock_irq(&sp->lock);

	/* Re-configure after DMA reset. */
	if (ipg_io_config(dev) < 0) {
		printk(KERN_INFO "%s: Error during re-configuration.\n",
		       dev->name);
	}

	init_tfdlist(dev);

	spin_unlock_irq(&sp->lock);

	ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) & IPG_MC_RSVD_MASK,
		MAC_CTRL);
}

/*
 * For TxComplete interrupts, free all transmit
 * buffers which have already been transferred via DMA
 * to the IPG.
 */
static void ipg_nic_txcleanup(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;

	IPG_DEBUG_MSG("_nic_txcleanup\n");

	for (i = 0; i < IPG_TFDLIST_LENGTH; i++) {
		/* Reading the TXSTATUS register clears the
		 * TX_COMPLETE interrupt.
		 */
		u32 txstatusdword = ipg_r32(TX_STATUS);

		IPG_DEBUG_MSG("TxStatus = %8.8x\n", txstatusdword);

		/* Check for Transmit errors. Error bits only valid if
		 * TX_COMPLETE bit in the TXSTATUS register is a 1.
		 */
		if (!(txstatusdword & IPG_TS_TX_COMPLETE))
			break;

		/* If in 10Mbps mode, indicate transmit is ready. */
		if (sp->tenmbpsmode) {
			netif_wake_queue(dev);
		}

		/* Transmit error, increment stat counters. */
		if (txstatusdword & IPG_TS_TX_ERROR) {
			IPG_DEBUG_MSG("Transmit error.\n");
			sp->stats.tx_errors++;
		}

		/* Late collision, re-enable transmitter. */
		if (txstatusdword & IPG_TS_LATE_COLLISION) {
			IPG_DEBUG_MSG("Late collision on transmit.\n");
			ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) &
				IPG_MC_RSVD_MASK, MAC_CTRL);
		}

		/* Maximum collisions, re-enable transmitter. */
		if (txstatusdword & IPG_TS_TX_MAX_COLL) {
			IPG_DEBUG_MSG("Maximum collisions on transmit.\n");
			ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) &
				IPG_MC_RSVD_MASK, MAC_CTRL);
		}

		/* Transmit underrun, reset and re-enable
		 * transmitter.
		 */
		if (txstatusdword & IPG_TS_TX_UNDERRUN) {
			IPG_DEBUG_MSG("Transmitter underrun.\n");
			sp->stats.tx_fifo_errors++;
			ipg_reset(dev, IPG_AC_TX_RESET | IPG_AC_DMA |
				  IPG_AC_NETWORK | IPG_AC_FIFO);

			/* Re-configure after DMA reset. */
			if (ipg_io_config(dev) < 0) {
				printk(KERN_INFO
				       "%s: Error during re-configuration.\n",
				       dev->name);
			}
			init_tfdlist(dev);

			ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) &
				IPG_MC_RSVD_MASK, MAC_CTRL);
		}
	}

	ipg_nic_txfree(dev);
}

/* Provides statistical information about the IPG NIC. */
static struct net_device_stats *ipg_nic_get_stats(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	u16 temp1;
	u16 temp2;

	IPG_DEBUG_MSG("_nic_get_stats\n");

	/* Check to see if the NIC has been initialized via nic_open,
	 * before trying to read statistic registers.
	 */
	if (!netif_running(dev))
		return &sp->stats;

	sp->stats.rx_packets += ipg_r32(IPG_FRAMESRCVDOK);
	sp->stats.tx_packets += ipg_r32(IPG_FRAMESXMTDOK);
	sp->stats.rx_bytes += ipg_r32(IPG_OCTETRCVOK);
	sp->stats.tx_bytes += ipg_r32(IPG_OCTETXMTOK);
	temp1 = ipg_r16(IPG_FRAMESLOSTRXERRORS);
	sp->stats.rx_errors += temp1;
	sp->stats.rx_missed_errors += temp1;
	temp1 = ipg_r32(IPG_SINGLECOLFRAMES) + ipg_r32(IPG_MULTICOLFRAMES) +
		ipg_r32(IPG_LATECOLLISIONS);
	temp2 = ipg_r16(IPG_CARRIERSENSEERRORS);
	sp->stats.collisions += temp1;
	sp->stats.tx_dropped += ipg_r16(IPG_FRAMESABORTXSCOLLS);
	sp->stats.tx_errors += ipg_r16(IPG_FRAMESWEXDEFERRAL) +
		ipg_r32(IPG_FRAMESWDEFERREDXMT) + temp1 + temp2;
	sp->stats.multicast += ipg_r32(IPG_MCSTFRAMESRCVDOK);

	/* detailed tx_errors */
	sp->stats.tx_carrier_errors += temp2;

	/* detailed rx_errors */
	sp->stats.rx_length_errors += ipg_r16(IPG_INRANGELENGTHERRORS) +
		ipg_r16(IPG_FRAMETOOLONGERRRORS);
	sp->stats.rx_crc_errors += ipg_r16(IPG_FRAMECHECKSEQERRORS);

	/* Unutilized IPG statistic registers. */
	ipg_r32(IPG_MCSTOCTETRCVDOK);

	return &sp->stats;
}

/* Restore used receive buffers. */
static int ipg_nic_rxrestore(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	const unsigned int curr = sp->rx_current;
	unsigned int dirty = sp->rx_dirty;

	IPG_DEBUG_MSG("_nic_rxrestore\n");

	for (dirty = sp->rx_dirty; curr - dirty > 0; dirty++) {
		unsigned int entry = dirty % IPG_RFDLIST_LENGTH;

		/* rx_copybreak may poke hole here and there. */
		if (sp->rx_buff[entry])
			continue;

		/* Generate a new receive buffer to replace the
		 * current buffer (which will be released by the
		 * Linux system).
		 */
		if (ipg_get_rxbuff(dev, entry) < 0) {
			IPG_DEBUG_MSG("Cannot allocate new Rx buffer.\n");

			break;
		}

		/* Reset the RFS field. */
		sp->rxd[entry].rfs = 0x0000000000000000;
	}
	sp->rx_dirty = dirty;

	return 0;
}

#ifdef JUMBO_FRAME

/* Jumbo frame reassembly state is tracked in struct ipg_jumbo via
 * found_start, current_size and skb:
 * 1. found_start = 0 and current_size = 0 : no jumbo frame is being
 *    reassembled (the previous one has completed).
 * 2. found_start = 1 and current_size != 0 : a jumbo frame is being
 *    reassembled and has not yet exceeded the supported size.
 * 3. A frame that grows beyond IPG_RXSUPPORT_SIZE is dropped and the
 *    remaining fragments of it are discarded.
 */
enum {
	NORMAL_PACKET,
	ERROR_PACKET
};

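/* Frame type codes returned by ipg_nic_rx_check_frame_type(). The values
 * are additive: FRAME_WITH_START (1) plus FRAME_WITH_END (10) gives
 * FRAME_WITH_START_WITH_END (11) when both RFS bits are set.
 */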
enum {
	FRAME_NO_START_NO_END	= 0,
	FRAME_WITH_START		= 1,
	FRAME_WITH_END		= 10,
	FRAME_WITH_START_WITH_END = 11
};

static inline void ipg_nic_rx_free_skb(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	unsigned int entry = sp->rx_current % IPG_RFDLIST_LENGTH;

	if (sp->rx_buff[entry]) {
		struct ipg_rx *rxfd = sp->rxd + entry;

		pci_unmap_single(sp->pdev,
			le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
			sp->rx_buf_sz, PCI_DMA_FROMDEVICE);
		dev_kfree_skb_irq(sp->rx_buff[entry]);
		sp->rx_buff[entry] = NULL;
	}
}

static inline int ipg_nic_rx_check_frame_type(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	struct ipg_rx *rxfd = sp->rxd + (sp->rx_current % IPG_RFDLIST_LENGTH);
	int type = FRAME_NO_START_NO_END;

	if (le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMESTART)
		type += FRAME_WITH_START;
	if (le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMEEND)
		type += FRAME_WITH_END;
	return type;
}

static inline int ipg_nic_rx_check_error(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	unsigned int entry = sp->rx_current % IPG_RFDLIST_LENGTH;
	struct ipg_rx *rxfd = sp->rxd + entry;

	if (IPG_DROP_ON_RX_ETH_ERRORS && (le64_to_cpu(rxfd->rfs) &
	     (IPG_RFS_RXFIFOOVERRUN | IPG_RFS_RXRUNTFRAME |
	      IPG_RFS_RXALIGNMENTERROR | IPG_RFS_RXFCSERROR |
	      IPG_RFS_RXOVERSIZEDFRAME | IPG_RFS_RXLENGTHERROR))) {
		IPG_DEBUG_MSG("Rx error, RFS = %16.16lx\n",
			      (unsigned long) rxfd->rfs);

		/* Increment general receive error statistic. */
		sp->stats.rx_errors++;

		/* Increment detailed receive error statistics. */
		if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFIFOOVERRUN) {
			IPG_DEBUG_MSG("RX FIFO overrun occurred.\n");

			sp->stats.rx_fifo_errors++;
		}

		if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXRUNTFRAME) {
			IPG_DEBUG_MSG("RX runt occurred.\n");
			sp->stats.rx_length_errors++;
		}

		/* Do nothing for IPG_RFS_RXOVERSIZEDFRAME,
		 * error count handled by an IPG statistic register.
		 */

		if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXALIGNMENTERROR) {
			IPG_DEBUG_MSG("RX alignment error occurred.\n");
			sp->stats.rx_frame_errors++;
		}

		/* Do nothing for IPG_RFS_RXFCSERROR, error count
		 * handled by an IPG statistic register.
		 */

		/* Free the memory associated with the RX
		 * buffer since it is erroneous and we will
		 * not pass it to higher layer processes.
		 */
		if (sp->rx_buff[entry]) {
			pci_unmap_single(sp->pdev,
				le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
				sp->rx_buf_sz, PCI_DMA_FROMDEVICE);

			dev_kfree_skb_irq(sp->rx_buff[entry]);
			sp->rx_buff[entry] = NULL;
		}
		return ERROR_PACKET;
	}
	return NORMAL_PACKET;
}

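/* Handle a fragment carrying both FrameStart and FrameEnd: discard any
 * partially reassembled jumbo frame and pass this fragment to the stack
 * as a complete frame.
 */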
static void ipg_nic_rx_with_start_and_end(struct net_device *dev,
					  struct ipg_nic_private *sp,
					  struct ipg_rx *rxfd, unsigned entry)
{
	struct ipg_jumbo *jumbo = &sp->jumbo;
	struct sk_buff *skb;
	int framelen;

	if (jumbo->found_start) {
		dev_kfree_skb_irq(jumbo->skb);
		jumbo->found_start = 0;
		jumbo->current_size = 0;
		jumbo->skb = NULL;
	}

	/* Bail out if a receive error was reported for this descriptor. */
	if (ipg_nic_rx_check_error(dev) != NORMAL_PACKET)
		return;

	skb = sp->rx_buff[entry];
	if (!skb)
		return;

	/* accept this frame and send to upper layer */
	framelen = le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFRAMELEN;
	if (framelen > IPG_RXFRAG_SIZE)
		framelen = IPG_RXFRAG_SIZE;

	skb_put(skb, framelen);
	skb->protocol = eth_type_trans(skb, dev);
	skb->ip_summed = CHECKSUM_NONE;
	netif_rx(skb);
	dev->last_rx = jiffies;
	sp->rx_buff[entry] = NULL;
}

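/* Handle a fragment carrying FrameStart only: it becomes the head of a
 * new jumbo frame, replacing any reassembly already in progress.
 */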
static void ipg_nic_rx_with_start(struct net_device *dev,
				  struct ipg_nic_private *sp,
				  struct ipg_rx *rxfd, unsigned entry)
{
	struct ipg_jumbo *jumbo = &sp->jumbo;
	struct pci_dev *pdev = sp->pdev;
	struct sk_buff *skb;

	/* Bail out if a receive error was reported for this descriptor. */
	if (ipg_nic_rx_check_error(dev) != NORMAL_PACKET)
		return;

	/* accept this frame and send to upper layer */
	skb = sp->rx_buff[entry];
	if (!skb)
		return;

	if (jumbo->found_start)
		dev_kfree_skb_irq(jumbo->skb);

	pci_unmap_single(pdev, le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
			 sp->rx_buf_sz, PCI_DMA_FROMDEVICE);

	skb_put(skb, IPG_RXFRAG_SIZE);

	jumbo->found_start = 1;
	jumbo->current_size = IPG_RXFRAG_SIZE;
	jumbo->skb = skb;

	sp->rx_buff[entry] = NULL;
	dev->last_rx = jiffies;
}

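/* Handle a fragment carrying FrameEnd only: append it to the jumbo frame
 * under reassembly and hand the completed frame to the stack, unless the
 * total length exceeds IPG_RXSUPPORT_SIZE.
 */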
static void ipg_nic_rx_with_end(struct net_device *dev,
				struct ipg_nic_private *sp,
				struct ipg_rx *rxfd, unsigned entry)
{
	struct ipg_jumbo *jumbo = &sp->jumbo;

	/* Only process the fragment if no receive error was reported. */
	if (ipg_nic_rx_check_error(dev) == NORMAL_PACKET) {
		struct sk_buff *skb = sp->rx_buff[entry];

		if (!skb)
			return;

		if (jumbo->found_start) {
			int framelen, endframelen;

			framelen = le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFRAMELEN;

			endframelen = framelen - jumbo->current_size;
			/*
			if (framelen > IPG_RXFRAG_SIZE)
				framelen=IPG_RXFRAG_SIZE;
			 */
			if (framelen > IPG_RXSUPPORT_SIZE)
				dev_kfree_skb_irq(jumbo->skb);
			else {
				memcpy(skb_put(jumbo->skb, endframelen),
				       skb->data, endframelen);

				jumbo->skb->protocol =
				    eth_type_trans(jumbo->skb, dev);

				jumbo->skb->ip_summed = CHECKSUM_NONE;
				netif_rx(jumbo->skb);
			}
		}

		dev->last_rx = jiffies;
		jumbo->found_start = 0;
		jumbo->current_size = 0;
		jumbo->skb = NULL;

		ipg_nic_rx_free_skb(dev);
	} else {
		dev_kfree_skb_irq(jumbo->skb);
		jumbo->found_start = 0;
		jumbo->current_size = 0;
		jumbo->skb = NULL;
	}
}

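/* Handle a middle fragment (neither FrameStart nor FrameEnd): append its
 * payload to the jumbo frame under reassembly while the accumulated size
 * stays within IPG_RXSUPPORT_SIZE.
 */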
static void ipg_nic_rx_no_start_no_end(struct net_device *dev,
				       struct ipg_nic_private *sp,
				       struct ipg_rx *rxfd, unsigned entry)
{
	struct ipg_jumbo *jumbo = &sp->jumbo;

	/* Only process the fragment if no receive error was reported. */
	if (ipg_nic_rx_check_error(dev) == NORMAL_PACKET) {
		struct sk_buff *skb = sp->rx_buff[entry];

		if (skb) {
			if (jumbo->found_start) {
				jumbo->current_size += IPG_RXFRAG_SIZE;
				if (jumbo->current_size <= IPG_RXSUPPORT_SIZE) {
					memcpy(skb_put(jumbo->skb,
						       IPG_RXFRAG_SIZE),
					       skb->data, IPG_RXFRAG_SIZE);
				}
			}
			dev->last_rx = jiffies;
			ipg_nic_rx_free_skb(dev);
		}
	} else {
		dev_kfree_skb_irq(jumbo->skb);
		jumbo->found_start = 0;
		jumbo->current_size = 0;
		jumbo->skb = NULL;
	}
}

static int ipg_nic_rx(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	unsigned int curr = sp->rx_current;
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;

	IPG_DEBUG_MSG("_nic_rx\n");

	for (i = 0; i < IPG_MAXRFDPROCESS_COUNT; i++, curr++) {
		unsigned int entry = curr % IPG_RFDLIST_LENGTH;
		struct ipg_rx *rxfd = sp->rxd + entry;

		if (!(rxfd->rfs & le64_to_cpu(IPG_RFS_RFDDONE)))
			break;

		switch (ipg_nic_rx_check_frame_type(dev)) {
		case FRAME_WITH_START_WITH_END:
			ipg_nic_rx_with_start_and_end(dev, sp, rxfd, entry);
			break;
		case FRAME_WITH_START:
			ipg_nic_rx_with_start(dev, sp, rxfd, entry);
			break;
		case FRAME_WITH_END:
			ipg_nic_rx_with_end(dev, sp, rxfd, entry);
			break;
		case FRAME_NO_START_NO_END:
			ipg_nic_rx_no_start_no_end(dev, sp, rxfd, entry);
			break;
		}
	}

	sp->rx_current = curr;

	if (i == IPG_MAXRFDPROCESS_COUNT) {
		/* There are more RFDs to process, however the
		 * allocated amount of RFD processing time has
		 * expired. Assert Interrupt Requested to make
		 * sure we come back to process the remaining RFDs.
		 */
		ipg_w32(ipg_r32(ASIC_CTRL) | IPG_AC_INT_REQUEST, ASIC_CTRL);
	}

	ipg_nic_rxrestore(dev);

	return 0;
}

#else
static int ipg_nic_rx(struct net_device *dev)
{
	/* Transfer received Ethernet frames to higher network layers. */
	struct ipg_nic_private *sp = netdev_priv(dev);
	unsigned int curr = sp->rx_current;
	void __iomem *ioaddr = sp->ioaddr;
	struct ipg_rx *rxfd;
	unsigned int i;

	IPG_DEBUG_MSG("_nic_rx\n");

#define __RFS_MASK \
	cpu_to_le64(IPG_RFS_RFDDONE | IPG_RFS_FRAMESTART | IPG_RFS_FRAMEEND)

	for (i = 0; i < IPG_MAXRFDPROCESS_COUNT; i++, curr++) {
		unsigned int entry = curr % IPG_RFDLIST_LENGTH;
		struct sk_buff *skb = sp->rx_buff[entry];
		unsigned int framelen;

		rxfd = sp->rxd + entry;

		if (((rxfd->rfs & __RFS_MASK) != __RFS_MASK) || !skb)
			break;

		/* Get received frame length. */
		framelen = le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFRAMELEN;

		/* Check for jumbo frame arrival with too small
		 * RXFRAG_SIZE.
		 */
		if (framelen > IPG_RXFRAG_SIZE) {
			IPG_DEBUG_MSG
			    ("RFS FrameLen > allocated fragment size.\n");

			framelen = IPG_RXFRAG_SIZE;
		}

		if ((IPG_DROP_ON_RX_ETH_ERRORS && (le64_to_cpu(rxfd->rfs) &
		       (IPG_RFS_RXFIFOOVERRUN | IPG_RFS_RXRUNTFRAME |
			IPG_RFS_RXALIGNMENTERROR | IPG_RFS_RXFCSERROR |
			IPG_RFS_RXOVERSIZEDFRAME | IPG_RFS_RXLENGTHERROR)))) {

			IPG_DEBUG_MSG("Rx error, RFS = %16.16lx\n",
				      (unsigned long int) rxfd->rfs);

			/* Increment general receive error statistic. */
			sp->stats.rx_errors++;

			/* Increment detailed receive error statistics. */
			if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFIFOOVERRUN) {
				IPG_DEBUG_MSG("RX FIFO overrun occurred.\n");
				sp->stats.rx_fifo_errors++;
			}

			if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXRUNTFRAME) {
				IPG_DEBUG_MSG("RX runt occurred.\n");
				sp->stats.rx_length_errors++;
			}

			/* Do nothing for IPG_RFS_RXOVERSIZEDFRAME, error
			 * count handled by an IPG statistic register.
			 */

			if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXALIGNMENTERROR) {
				IPG_DEBUG_MSG("RX alignment error occurred.\n");
				sp->stats.rx_frame_errors++;
			}

			/* Do nothing for IPG_RFS_RXFCSERROR, error count
			 * handled by an IPG statistic register.
			 */

			/* Free the memory associated with the RX
			 * buffer since it is erroneous and we will
			 * not pass it to higher layer processes.
			 */
			if (skb) {
				__le64 info = rxfd->frag_info;

				pci_unmap_single(sp->pdev,
					le64_to_cpu(info) & ~IPG_RFI_FRAGLEN,
					sp->rx_buf_sz, PCI_DMA_FROMDEVICE);

				dev_kfree_skb_irq(skb);
			}
		} else {

			/* Adjust the new buffer length to accommodate the size
			 * of the received frame.
			 */
			skb_put(skb, framelen);

			/* Set the buffer's protocol field to Ethernet. */
			skb->protocol = eth_type_trans(skb, dev);

			/* The IPG encountered an error with (or
			 * there were no) IP/TCP/UDP checksums.
			 * This may or may not indicate an invalid
			 * IP/TCP/UDP frame was received. Let the
			 * upper layer decide.
			 */
			skb->ip_summed = CHECKSUM_NONE;

			/* Hand off frame for higher layer processing.
			 * The function netif_rx() releases the sk_buff
			 * when processing completes.
			 */
			netif_rx(skb);

			/* Record frame receive time (jiffies = Linux
			 * kernel current time stamp).
			 */
			dev->last_rx = jiffies;
		}

		/* Assure RX buffer is not reused by IPG. */
		sp->rx_buff[entry] = NULL;
	}

	/*
	 * If there are more RFDs to process and the allocated amount of RFD
	 * processing time has expired, assert Interrupt Requested to make
	 * sure we come back to process the remaining RFDs.
	 */
	if (i == IPG_MAXRFDPROCESS_COUNT)
		ipg_w32(ipg_r32(ASIC_CTRL) | IPG_AC_INT_REQUEST, ASIC_CTRL);

#ifdef IPG_DEBUG
	/* Check if the RFD list contained no receive frame data. */
	if (!i)
		sp->EmptyRFDListCount++;
#endif
	while ((le64_to_cpu(rxfd->rfs) & IPG_RFS_RFDDONE) &&
	       !((le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMESTART) &&
		 (le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMEEND))) {
		unsigned int entry = curr++ % IPG_RFDLIST_LENGTH;

		rxfd = sp->rxd + entry;

		IPG_DEBUG_MSG("Frame requires multiple RFDs.\n");

		/* An unexpected event, additional code needed to handle
		 * properly. So for the time being, just disregard the
		 * frame.
		 */

		/* Free the memory associated with the RX
		 * buffer since it is erroneous and we will
		 * not pass it to higher layer processes.
		 */
		if (sp->rx_buff[entry]) {
			pci_unmap_single(sp->pdev,
				le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
				sp->rx_buf_sz, PCI_DMA_FROMDEVICE);
			dev_kfree_skb_irq(sp->rx_buff[entry]);
		}

		/* Assure RX buffer is not reused by IPG. */
		sp->rx_buff[entry] = NULL;
	}

	sp->rx_current = curr;

	/* Check to see if there are a minimum number of used
	 * RFDs before restoring any (should improve performance.)
	 */
	if ((curr - sp->rx_dirty) >= IPG_MINUSEDRFDSTOFREE)
		ipg_nic_rxrestore(dev);

	return 0;
}
#endif

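/* Delayed work scheduled on a HostError interrupt: reset the IPG DMA and
 * host logic, rebuild both descriptor lists and reapply the I/O
 * configuration, rescheduling itself if the reconfiguration fails.
 */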
static void ipg_reset_after_host_error(struct work_struct *work)
{
	struct ipg_nic_private *sp =
		container_of(work, struct ipg_nic_private, task.work);
	struct net_device *dev = sp->dev;

	IPG_DDEBUG_MSG("DMACtrl = %8.8x\n", ioread32(sp->ioaddr + IPG_DMACTRL));

	/*
	 * Acknowledge HostError interrupt by resetting
	 * IPG DMA and HOST.
	 */
	ipg_reset(dev, IPG_AC_GLOBAL_RESET | IPG_AC_HOST | IPG_AC_DMA);

	init_rfdlist(dev);
	init_tfdlist(dev);

	if (ipg_io_config(dev) < 0) {
		printk(KERN_INFO "%s: Cannot recover from PCI error.\n",
		       dev->name);
		schedule_delayed_work(&sp->task, HZ);
	}
}

static irqreturn_t ipg_interrupt_handler(int irq, void *dev_inst)
{
	struct net_device *dev = dev_inst;
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int handled = 0;
	u16 status;

	IPG_DEBUG_MSG("_interrupt_handler\n");

#ifdef JUMBO_FRAME
	ipg_nic_rxrestore(dev);
#endif
	spin_lock(&sp->lock);

	/* Get interrupt source information, and acknowledge
	 * some (i.e. TxDMAComplete, RxDMAComplete, RxEarly,
	 * IntRequested, MacControlFrame, LinkEvent) interrupts
	 * if issued. Also, all IPG interrupts are disabled by
	 * reading IntStatusAck.
	 */
	status = ipg_r16(INT_STATUS_ACK);

	IPG_DEBUG_MSG("IntStatusAck = %4.4x\n", status);

	/* Shared IRQ or remove event: no IPG interrupt bits are set. */
	if (!(status & IPG_IS_RSVD_MASK))
		goto out_enable;

	handled = 1;

	if (unlikely(!netif_running(dev)))
		goto out_unlock;

	/* If RFDListEnd interrupt, restore all used RFDs. */
	if (status & IPG_IS_RFD_LIST_END) {
		IPG_DEBUG_MSG("RFDListEnd Interrupt.\n");

		/* The RFD list end indicates an RFD was encountered
		 * with a 0 NextPtr, or with an RFDDone bit set to 1
		 * (indicating the RFD is not ready for use by the
		 * IPG.) Try to restore all RFDs.
		 */
		ipg_nic_rxrestore(dev);

#ifdef IPG_DEBUG
		/* Increment the RFDlistendCount counter. */
		sp->RFDlistendCount++;
#endif
	}

	/* If RFDListEnd, RxDMAPriority, RxDMAComplete, or
	 * IntRequested interrupt, process received frames. */
	if ((status & IPG_IS_RX_DMA_PRIORITY) ||
	    (status & IPG_IS_RFD_LIST_END) ||
	    (status & IPG_IS_RX_DMA_COMPLETE) ||
	    (status & IPG_IS_INT_REQUESTED)) {
#ifdef IPG_DEBUG
		/* Increment the RFD list checked counter if interrupted
		 * only to check the RFD list. */
		if (status & (~(IPG_IS_RX_DMA_PRIORITY | IPG_IS_RFD_LIST_END |
				IPG_IS_RX_DMA_COMPLETE | IPG_IS_INT_REQUESTED) &
			       (IPG_IS_HOST_ERROR | IPG_IS_TX_DMA_COMPLETE |
				IPG_IS_LINK_EVENT | IPG_IS_TX_COMPLETE |
				IPG_IS_UPDATE_STATS)))
			sp->RFDListCheckedCount++;
#endif

		ipg_nic_rx(dev);
	}

	/* If TxDMAComplete interrupt, free used TFDs. */
	if (status & IPG_IS_TX_DMA_COMPLETE)
		ipg_nic_txfree(dev);

	/* TxComplete interrupts indicate one of numerous actions.
	 * Determine what action to take based on TXSTATUS register.
	 */
	if (status & IPG_IS_TX_COMPLETE)
		ipg_nic_txcleanup(dev);

	/* If UpdateStats interrupt, update Linux Ethernet statistics */
	if (status & IPG_IS_UPDATE_STATS)
		ipg_nic_get_stats(dev);

	/* If HostError interrupt, reset IPG. */
	if (status & IPG_IS_HOST_ERROR) {
		IPG_DDEBUG_MSG("HostError Interrupt\n");

		schedule_delayed_work(&sp->task, 0);
	}

	/* If LinkEvent interrupt, resolve autonegotiation. */
	if (status & IPG_IS_LINK_EVENT) {
		if (ipg_config_autoneg(dev) < 0)
			printk(KERN_INFO "%s: Auto-negotiation error.\n",
			       dev->name);
	}

	/* If MACCtrlFrame interrupt, do nothing. */
	if (status & IPG_IS_MAC_CTRL_FRAME)
		IPG_DEBUG_MSG("MACCtrlFrame interrupt.\n");

	/* If RxComplete interrupt, do nothing. */
	if (status & IPG_IS_RX_COMPLETE)
		IPG_DEBUG_MSG("RxComplete interrupt.\n");

	/* If RxEarly interrupt, do nothing. */
	if (status & IPG_IS_RX_EARLY)
		IPG_DEBUG_MSG("RxEarly interrupt.\n");

out_enable:
	/* Re-enable IPG interrupts. */
	ipg_w16(IPG_IE_TX_DMA_COMPLETE | IPG_IE_RX_DMA_COMPLETE |
		IPG_IE_HOST_ERROR | IPG_IE_INT_REQUESTED | IPG_IE_TX_COMPLETE |
		IPG_IE_LINK_EVENT | IPG_IE_UPDATE_STATS, INT_ENABLE);
out_unlock:
	spin_unlock(&sp->lock);

	return IRQ_RETVAL(handled);
}

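/* Release every receive buffer still attached to the RFD list, freeing
 * the sk_buff and tearing down its DMA mapping.
 */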
static void ipg_rx_clear(struct ipg_nic_private *sp)
{
	unsigned int i;

	for (i = 0; i < IPG_RFDLIST_LENGTH; i++) {
		if (sp->rx_buff[i]) {
			struct ipg_rx *rxfd = sp->rxd + i;

			dev_kfree_skb_irq(sp->rx_buff[i]);
			sp->rx_buff[i] = NULL;
			pci_unmap_single(sp->pdev,
				le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
				sp->rx_buf_sz, PCI_DMA_FROMDEVICE);
		}
	}
}

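/* Release every transmit buffer still attached to the TFD list, tearing
 * down its DMA mapping and freeing the sk_buff.
 */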
static void ipg_tx_clear(struct ipg_nic_private *sp)
{
	unsigned int i;

	for (i = 0; i < IPG_TFDLIST_LENGTH; i++) {
		if (sp->tx_buff[i]) {
			struct ipg_tx *txfd = sp->txd + i;

			pci_unmap_single(sp->pdev,
				le64_to_cpu(txfd->frag_info) & ~IPG_TFI_FRAGLEN,
				sp->tx_buff[i]->len, PCI_DMA_TODEVICE);

			dev_kfree_skb_irq(sp->tx_buff[i]);

			sp->tx_buff[i] = NULL;
		}
	}
}

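/* net_device open: request the IRQ, allocate the RFD/TFD rings, configure
 * the hardware, resolve auto-negotiation and enable receive and transmit.
 */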
static int ipg_nic_open(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	struct pci_dev *pdev = sp->pdev;
	int rc;

	IPG_DEBUG_MSG("_nic_open\n");

	sp->rx_buf_sz = IPG_RXSUPPORT_SIZE;

	/* Check for interrupt line conflicts, and request interrupt
	 * line for IPG.
	 *
	 * IMPORTANT: Disable IPG interrupts prior to registering
	 *            IRQ.
	 */
	ipg_w16(0x0000, INT_ENABLE);

	/* Register the interrupt line to be used by the IPG within
	 * the Linux system.
	 */
	rc = request_irq(pdev->irq, &ipg_interrupt_handler, IRQF_SHARED,
			 dev->name, dev);
	if (rc < 0) {
		printk(KERN_INFO "%s: Error when requesting interrupt.\n",
		       dev->name);
		goto out;
	}

	dev->irq = pdev->irq;

	rc = -ENOMEM;

	sp->rxd = dma_alloc_coherent(&pdev->dev, IPG_RX_RING_BYTES,
				     &sp->rxd_map, GFP_KERNEL);
	if (!sp->rxd)
		goto err_free_irq_0;

	sp->txd = dma_alloc_coherent(&pdev->dev, IPG_TX_RING_BYTES,
				     &sp->txd_map, GFP_KERNEL);
	if (!sp->txd)
		goto err_free_rx_1;

	rc = init_rfdlist(dev);
	if (rc < 0) {
		printk(KERN_INFO "%s: Error during configuration.\n",
		       dev->name);
		goto err_free_tx_2;
	}

	init_tfdlist(dev);

	rc = ipg_io_config(dev);
	if (rc < 0) {
		printk(KERN_INFO "%s: Error during configuration.\n",
		       dev->name);
		goto err_release_tfdlist_3;
	}

	/* Resolve autonegotiation. */
	if (ipg_config_autoneg(dev) < 0)
		printk(KERN_INFO "%s: Auto-negotiation error.\n", dev->name);

#ifdef JUMBO_FRAME
	/* initialize JUMBO Frame control variable */
	sp->jumbo.found_start = 0;
	sp->jumbo.current_size = 0;
	sp->jumbo.skb = NULL;
	dev->mtu = IPG_TXFRAG_SIZE;
#endif

	/* Enable transmit and receive operation of the IPG. */
	ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_RX_ENABLE | IPG_MC_TX_ENABLE) &
		 IPG_MC_RSVD_MASK, MAC_CTRL);

	netif_start_queue(dev);
out:
	return rc;

err_release_tfdlist_3:
	ipg_tx_clear(sp);
	ipg_rx_clear(sp);
err_free_tx_2:
	dma_free_coherent(&pdev->dev, IPG_TX_RING_BYTES, sp->txd, sp->txd_map);
err_free_rx_1:
	dma_free_coherent(&pdev->dev, IPG_RX_RING_BYTES, sp->rxd, sp->rxd_map);
err_free_irq_0:
	free_irq(pdev->irq, dev);
	goto out;
}

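/* net_device stop: quiesce and reset the hardware, then release the
 * receive/transmit buffers, the descriptor rings and the IRQ.
 */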
static int ipg_nic_stop(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	struct pci_dev *pdev = sp->pdev;

	IPG_DEBUG_MSG("_nic_stop\n");

	netif_stop_queue(dev);

	IPG_DDEBUG_MSG("RFDlistendCount = %i\n", sp->RFDlistendCount);
	IPG_DDEBUG_MSG("RFDListCheckedCount = %i\n", sp->RFDListCheckedCount);
	IPG_DDEBUG_MSG("EmptyRFDListCount = %i\n", sp->EmptyRFDListCount);
	IPG_DUMPTFDLIST(dev);

	do {
		(void) ipg_r16(INT_STATUS_ACK);

		ipg_reset(dev, IPG_AC_GLOBAL_RESET | IPG_AC_HOST | IPG_AC_DMA);

		synchronize_irq(pdev->irq);
	} while (ipg_r16(INT_ENABLE) & IPG_IE_RSVD_MASK);

	ipg_rx_clear(sp);

	ipg_tx_clear(sp);

	dma_free_coherent(&pdev->dev, IPG_RX_RING_BYTES, sp->rxd, sp->rxd_map);
	dma_free_coherent(&pdev->dev, IPG_TX_RING_BYTES, sp->txd, sp->txd_map);

	free_irq(pdev->irq, dev);

	return 0;
}

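/* Queue a frame for transmission: fill in the next TFD, map the sk_buff
 * for DMA, clear TFDDone last to hand the descriptor to the IPG and kick
 * the transmit DMA poll.
 */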
static int ipg_nic_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int entry = sp->tx_current % IPG_TFDLIST_LENGTH;
	unsigned long flags;
	struct ipg_tx *txfd;

	IPG_DDEBUG_MSG("_nic_hard_start_xmit\n");

	/* If in 10Mbps mode, stop the transmit queue so
	 * no more transmit frames are accepted.
	 */
	if (sp->tenmbpsmode)
		netif_stop_queue(dev);

	if (sp->reset_current_tfd) {
		sp->reset_current_tfd = 0;
		entry = 0;
	}

	txfd = sp->txd + entry;

	sp->tx_buff[entry] = skb;

	/* Clear all TFC fields, except TFDDONE. */
	txfd->tfc = cpu_to_le64(IPG_TFC_TFDDONE);

	/* Specify the TFC field within the TFD. */
	txfd->tfc |= cpu_to_le64(IPG_TFC_WORDALIGNDISABLED |
		(IPG_TFC_FRAMEID & sp->tx_current) |
		(IPG_TFC_FRAGCOUNT & (1 << 24)));

	/* Request TxComplete interrupts at an interval defined
	 * by the constant IPG_FRAMESBETWEENTXCOMPLETES.
	 * Request TxComplete interrupt for every frame
	 * if in 10Mbps mode to work around a problem with 10Mbps
	 * processing.
	 */
	if (sp->tenmbpsmode)
		txfd->tfc |= cpu_to_le64(IPG_TFC_TXINDICATE);
	txfd->tfc |= cpu_to_le64(IPG_TFC_TXDMAINDICATE);
	/* Based on compilation option, determine if FCS is to be
	 * appended to transmit frame by IPG.
	 */
	if (!(IPG_APPEND_FCS_ON_TX))
		txfd->tfc |= cpu_to_le64(IPG_TFC_FCSAPPENDDISABLE);

	/* Based on compilation option, determine if IP, TCP and/or
	 * UDP checksums are to be added to transmit frame by IPG.
	 */
	if (IPG_ADD_IPCHECKSUM_ON_TX)
		txfd->tfc |= cpu_to_le64(IPG_TFC_IPCHECKSUMENABLE);

	if (IPG_ADD_TCPCHECKSUM_ON_TX)
		txfd->tfc |= cpu_to_le64(IPG_TFC_TCPCHECKSUMENABLE);

	if (IPG_ADD_UDPCHECKSUM_ON_TX)
		txfd->tfc |= cpu_to_le64(IPG_TFC_UDPCHECKSUMENABLE);

	/* Based on compilation option, determine if VLAN tag info is to be
	 * inserted into transmit frame by IPG.
	 */
	if (IPG_INSERT_MANUAL_VLAN_TAG) {
		txfd->tfc |= cpu_to_le64(IPG_TFC_VLANTAGINSERT |
			((u64) IPG_MANUAL_VLAN_VID << 32) |
			((u64) IPG_MANUAL_VLAN_CFI << 44) |
			((u64) IPG_MANUAL_VLAN_USERPRIORITY << 45));
	}

	/* The fragment start location within system memory is defined
	 * by the sk_buff structure's data field. The corresponding
	 * DMA (bus) address is obtained via pci_map_single().
	 */
	txfd->frag_info = cpu_to_le64(pci_map_single(sp->pdev, skb->data,
		skb->len, PCI_DMA_TODEVICE));

	/* The length of the fragment within system memory is defined by
	 * the sk_buff structure's len field.
	 */
	txfd->frag_info |= cpu_to_le64(IPG_TFI_FRAGLEN &
		((u64) (skb->len & 0xffff) << 48));

	/* Clear the TFDDone bit last to indicate the TFD is ready
	 * for transfer to the IPG.
	 */
	txfd->tfc &= cpu_to_le64(~IPG_TFC_TFDDONE);

	spin_lock_irqsave(&sp->lock, flags);

	sp->tx_current++;

	mmiowb();

	ipg_w32(IPG_DC_TX_DMA_POLL_NOW, DMA_CTRL);

	if (sp->tx_current == (sp->tx_dirty + IPG_TFDLIST_LENGTH))
		netif_stop_queue(dev);

	spin_unlock_irqrestore(&sp->lock, flags);

	return NETDEV_TX_OK;
}

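/* Apply the default PHY parameters for the given chip revision. Each block
 * in DefaultPhyParam[] starts with a header word (chip revision in the high
 * byte, block length in the low byte) followed by register address/value
 * pairs written to the PHY via MDIO.
 */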
static void ipg_set_phy_default_param(unsigned char rev,
				      struct net_device *dev, int phy_address)
{
	unsigned short length;
	unsigned char revision;
	unsigned short *phy_param;
	unsigned short address, value;

	phy_param = &DefaultPhyParam[0];
	length = *phy_param & 0x00FF;
	revision = (unsigned char)((*phy_param) >> 8);
	phy_param++;
	while (length != 0) {
		if (rev == revision) {
			while (length > 1) {
				address = *phy_param;
				value = *(phy_param + 1);
				phy_param += 2;
				mdio_write(dev, phy_address, address, value);
				length -= 4;
			}
			break;
		} else {
			phy_param += length / 2;
			length = *phy_param & 0x00FF;
			revision = (unsigned char)((*phy_param) >> 8);
			phy_param++;
		}
	}
}

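/* Read one 16-bit word from the EEPROM: issue the read opcode for the
 * given address and poll until the controller clears the busy flag,
 * returning 0 on timeout.
 */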
static int read_eeprom(struct net_device *dev, int eep_addr)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	unsigned int i;
	int ret = 0;
	u16 value;

	value = IPG_EC_EEPROM_READOPCODE | (eep_addr & 0xff);
	ipg_w16(value, EEPROM_CTRL);

	for (i = 0; i < 1000; i++) {
		u16 data;

		mdelay(10);
		data = ipg_r16(EEPROM_CTRL);
		if (!(data & IPG_EC_EEPROM_BUSY)) {
			ret = ipg_r16(EEPROM_DATA);
			break;
		}
	}
	return ret;
}

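/* Set up the MII interface: locate the PHY, advertise 1000BASE-T full and
 * half duplex, load the revision-specific default PHY parameters and reset
 * the PHY to restart auto-negotiation.
 */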
static void ipg_init_mii(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	struct mii_if_info *mii_if = &sp->mii_if;
	int phyaddr;

	mii_if->dev          = dev;
	mii_if->mdio_read    = mdio_read;
	mii_if->mdio_write   = mdio_write;
	mii_if->phy_id_mask  = 0x1f;
	mii_if->reg_num_mask = 0x1f;

	mii_if->phy_id = phyaddr = ipg_find_phyaddr(dev);

	if (phyaddr != 0x1f) {
		u16 mii_phyctrl, mii_1000cr;
		u8 revisionid = 0;

		mii_1000cr  = mdio_read(dev, phyaddr, MII_CTRL1000);
		mii_1000cr |= ADVERTISE_1000FULL | ADVERTISE_1000HALF |
			GMII_PHY_1000BASETCONTROL_PreferMaster;
		mdio_write(dev, phyaddr, MII_CTRL1000, mii_1000cr);

		mii_phyctrl = mdio_read(dev, phyaddr, MII_BMCR);

		/* Set default phyparam */
		pci_read_config_byte(sp->pdev, PCI_REVISION_ID, &revisionid);
		ipg_set_phy_default_param(revisionid, dev, phyaddr);

		/* Reset PHY */
		mii_phyctrl |= BMCR_RESET | BMCR_ANRESTART;
		mdio_write(dev, phyaddr, MII_BMCR, mii_phyctrl);

	}
}

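/* One-time hardware initialization: read the LED mode from the EEPROM,
 * reset the IPG, set up the MII interface and program the station address
 * read from the EEPROM into the device registers and net_device.
 */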
static int ipg_hw_init(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;
	int rc;

	/* Read/Write and Reset EEPROM Value */
	/* Read LED Mode Configuration from EEPROM */
	sp->led_mode = read_eeprom(dev, 6);

	/* Reset all functions within the IPG. Do not assert
	 * RST_OUT as not compatible with some PHYs.
	 */
	rc = ipg_reset(dev, IPG_RESET_MASK);
	if (rc < 0)
		goto out;

	ipg_init_mii(dev);

	/* Read MAC Address from EEPROM */
	for (i = 0; i < 3; i++)
		sp->station_addr[i] = read_eeprom(dev, 16 + i);

	for (i = 0; i < 3; i++)
		ipg_w16(sp->station_addr[i], STATION_ADDRESS_0 + 2*i);

	/* Set station address in ethernet_device structure. */
	dev->dev_addr[0] =  ipg_r16(STATION_ADDRESS_0) & 0x00ff;
	dev->dev_addr[1] = (ipg_r16(STATION_ADDRESS_0) & 0xff00) >> 8;
	dev->dev_addr[2] =  ipg_r16(STATION_ADDRESS_1) & 0x00ff;
	dev->dev_addr[3] = (ipg_r16(STATION_ADDRESS_1) & 0xff00) >> 8;
	dev->dev_addr[4] =  ipg_r16(STATION_ADDRESS_2) & 0x00ff;
	dev->dev_addr[5] = (ipg_r16(STATION_ADDRESS_2) & 0xff00) >> 8;
out:
	return rc;
}

static int ipg_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	int rc;

	mutex_lock(&sp->mii_mutex);
	rc = generic_mii_ioctl(&sp->mii_if, if_mii(ifr), cmd, NULL);
	mutex_unlock(&sp->mii_mutex);

	return rc;
}

static int ipg_nic_change_mtu(struct net_device *dev, int new_mtu)
{
	/* Function to accommodate changes to Maximum Transmission Unit
	 * (or MTU) of IPG NIC. Cannot use default function since
	 * the default will not allow for MTU > 1500 bytes.
	 */

	IPG_DEBUG_MSG("_nic_change_mtu\n");

	/* Check that the new MTU value is between 68 (14 byte header, 46
	 * byte payload, 4 byte FCS) and IPG_MAX_RXFRAME_SIZE, which
	 * corresponds to the MAXFRAMESIZE register in the IPG.
	 */
	if ((new_mtu < 68) || (new_mtu > IPG_MAX_RXFRAME_SIZE))
		return -EINVAL;

	dev->mtu = new_mtu;

	return 0;
}

static int ipg_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	int rc;

	mutex_lock(&sp->mii_mutex);
	rc = mii_ethtool_gset(&sp->mii_if, cmd);
	mutex_unlock(&sp->mii_mutex);

	return rc;
}

static int ipg_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	int rc;

	mutex_lock(&sp->mii_mutex);
	rc = mii_ethtool_sset(&sp->mii_if, cmd);
	mutex_unlock(&sp->mii_mutex);

	return rc;
}

static int ipg_nway_reset(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	int rc;

	mutex_lock(&sp->mii_mutex);
	rc = mii_nway_restart(&sp->mii_if);
	mutex_unlock(&sp->mii_mutex);

	return rc;
}

static const struct ethtool_ops ipg_ethtool_ops = {
	.get_settings = ipg_get_settings,
	.set_settings = ipg_set_settings,
	.nway_reset   = ipg_nway_reset,
};

static void __devexit ipg_remove(struct pci_dev *pdev)
{
	struct net_device *dev = pci_get_drvdata(pdev);
	struct ipg_nic_private *sp = netdev_priv(dev);

	IPG_DEBUG_MSG("_remove\n");

	/* Un-register Ethernet device. */
	unregister_netdev(dev);

	pci_iounmap(pdev, sp->ioaddr);

	pci_release_regions(pdev);

	free_netdev(dev);
	pci_disable_device(pdev);
	pci_set_drvdata(pdev, NULL);
}

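/* PCI probe: enable the device, set the DMA mask (40-bit with a 32-bit
 * fallback), allocate and wire up the net_device, map the MMIO registers,
 * initialize the hardware and register the Ethernet device.
 */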
static int __devinit ipg_probe(struct pci_dev *pdev,
			       const struct pci_device_id *id)
{
	unsigned int i = id->driver_data;
	struct ipg_nic_private *sp;
	struct net_device *dev;
	void __iomem *ioaddr;
	int rc;

	rc = pci_enable_device(pdev);
	if (rc < 0)
		goto out;

	printk(KERN_INFO "%s: %s\n", pci_name(pdev), ipg_brand_name[i]);

	pci_set_master(pdev);

	rc = pci_set_dma_mask(pdev, DMA_40BIT_MASK);
	if (rc < 0) {
		rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
		if (rc < 0) {
			printk(KERN_ERR "%s: DMA config failed.\n",
			       pci_name(pdev));
			goto err_disable_0;
		}
	}

	/*
	 * Initialize net device.
	 */
	dev = alloc_etherdev(sizeof(struct ipg_nic_private));
	if (!dev) {
		printk(KERN_ERR "%s: alloc_etherdev failed\n", pci_name(pdev));
		rc = -ENOMEM;
		goto err_disable_0;
	}

	sp = netdev_priv(dev);
	spin_lock_init(&sp->lock);
	mutex_init(&sp->mii_mutex);

	/* Declare IPG NIC functions for Ethernet device methods.
	 */
	dev->open = &ipg_nic_open;
	dev->stop = &ipg_nic_stop;
	dev->hard_start_xmit = &ipg_nic_hard_start_xmit;
	dev->get_stats = &ipg_nic_get_stats;
	dev->set_multicast_list = &ipg_nic_set_multicast_list;
	dev->do_ioctl = ipg_ioctl;
	dev->tx_timeout = ipg_tx_timeout;
	dev->change_mtu = &ipg_nic_change_mtu;

	SET_NETDEV_DEV(dev, &pdev->dev);
	SET_ETHTOOL_OPS(dev, &ipg_ethtool_ops);

	rc = pci_request_regions(pdev, DRV_NAME);
	if (rc)
		goto err_free_dev_1;

	ioaddr = pci_iomap(pdev, 1, pci_resource_len(pdev, 1));
	if (!ioaddr) {
		printk(KERN_ERR "%s cannot map MMIO\n", pci_name(pdev));
		rc = -EIO;
		goto err_release_regions_2;
	}

	/* Save the pointer to the PCI device information. */
	sp->ioaddr = ioaddr;
	sp->pdev = pdev;
	sp->dev = dev;

	INIT_DELAYED_WORK(&sp->task, ipg_reset_after_host_error);

	pci_set_drvdata(pdev, dev);

	rc = ipg_hw_init(dev);
	if (rc < 0)
		goto err_unmap_3;

	rc = register_netdev(dev);
	if (rc < 0)
		goto err_unmap_3;

	printk(KERN_INFO "Ethernet device registered as: %s\n", dev->name);
out:
	return rc;

err_unmap_3:
	pci_iounmap(pdev, ioaddr);
err_release_regions_2:
	pci_release_regions(pdev);
err_free_dev_1:
	free_netdev(dev);
err_disable_0:
	pci_disable_device(pdev);
	goto out;
}

static struct pci_driver ipg_pci_driver = {
	.name		= IPG_DRIVER_NAME,
	.id_table	= ipg_pci_tbl,
	.probe		= ipg_probe,
	.remove		= __devexit_p(ipg_remove),
};

static int __init ipg_init_module(void)
{
	return pci_register_driver(&ipg_pci_driver);
}

static void __exit ipg_exit_module(void)
{
	pci_unregister_driver(&ipg_pci_driver);
}

module_init(ipg_init_module);
module_exit(ipg_exit_module);