Diffstat (limited to 'Documentation/filesystems/inotify.txt')
-rw-r--r--	Documentation/filesystems/inotify.txt	151
1 file changed, 151 insertions, 0 deletions

diff --git a/Documentation/filesystems/inotify.txt b/Documentation/filesystems/inotify.txt
new file mode 100644
index 000000000000..6d501903f68e
--- /dev/null
+++ b/Documentation/filesystems/inotify.txt
@@ -0,0 +1,151 @@
inotify
a powerful yet simple file change notification system



Document started 15 Mar 2005 by Robert Love <rml@novell.com>


(i) User Interface

Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.

The first step in using inotify is to initialize an inotify instance:

	int fd = inotify_init ();

Each instance is associated with a unique, ordered queue.

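Like other system calls, inotify_init() returns -1 and sets errno on failure,
so callers may want to check the result (a minimal sketch; the error handling
shown is an assumption, not part of the interface):

	int fd = inotify_init ();
	if (fd < 0)
		perror ("inotify_init");	/* needs <stdio.h> */
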
Change events are managed by "watches".  A watch is an (object,mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive.  See <linux/inotify.h>
for valid events.  A watch is referenced by a watch descriptor, or wd.

Watches are added via a path to the file.

Watches on a directory will return events on any files inside of the directory.

Adding a watch is simple:

	int wd = inotify_add_watch (fd, path, mask);

Where "fd" is the return value from inotify_init(), path is the path to the
object to watch, and mask is the watch mask (see <linux/inotify.h>).

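For example, to watch a directory for file creation, modification, and
deletion (a sketch; the path and the particular event mask here are
illustrative assumptions):

	int wd = inotify_add_watch (fd, "/etc",
				    IN_MODIFY | IN_CREATE | IN_DELETE);
	if (wd < 0)
		perror ("inotify_add_watch");	/* illustrative error handling */
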
You can update an existing watch in the same manner, by passing in a new mask.

An existing watch is removed via

	int ret = inotify_rm_watch (fd, wd);

Events are provided in the form of an inotify_event structure that is read(2)
from a given inotify instance.  The filename is of dynamic length and follows
the struct; its size is given by len.  The filename is padded with null bytes
to ensure proper alignment, and this padding is reflected in len.

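For reference, the event structure is laid out as below; <linux/inotify.h>
remains the authoritative definition:

	struct inotify_event {
		__s32		wd;		/* watch descriptor */
		__u32		mask;		/* watch mask */
		__u32		cookie;		/* cookie to synchronize two events */
		__u32		len;		/* length (including nulls) of name */
		char		name[0];	/* stub for possible name */
	};
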
You can slurp multiple events by passing a large buffer, for example

	ssize_t len = read (fd, buf, BUF_LEN);

Where "buf" is a buffer of at least BUF_LEN bytes.  Because each event has a
variable-length filename appended, the buffer holds a packed sequence of
events rather than a fixed-stride C array.  The above call will return as
many events as are available and fit in BUF_LEN.

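A minimal sketch of walking such a buffer, stepping over each event's
variable-length name (BUF_LEN, the includes, and the printf output are
assumptions for illustration):

	#include <stdio.h>
	#include <unistd.h>
	#include <linux/inotify.h>

	#define BUF_LEN 4096

	char buf[BUF_LEN];
	ssize_t len, i = 0;

	len = read (fd, buf, BUF_LEN);
	while (i < len) {
		struct inotify_event *event = (struct inotify_event *) &buf[i];

		printf ("wd=%d mask=%x cookie=%x len=%u\n",
			event->wd, event->mask, event->cookie, event->len);
		if (event->len)
			printf ("name=%s\n", event->name);

		/* advance past this event and its trailing name */
		i += sizeof (struct inotify_event) + event->len;
	}
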
Each inotify instance fd is also select()- and poll()-able.

You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().

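For example, one might block with poll() and then query the pending byte
count (a sketch; the headers and the omitted error handling are assumptions):

	#include <poll.h>
	#include <sys/ioctl.h>

	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	int queue_len;

	if (poll (&pfd, 1, -1) > 0)			/* wait for an event */
		ioctl (fd, FIONREAD, &queue_len);	/* bytes pending in queue */
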
All watches are destroyed and cleaned up on close.


(ii) Prototypes:

	int inotify_init (void);
	int inotify_add_watch (int fd, const char *path, __u32 mask);
	int inotify_rm_watch (int fd, __u32 wd);


(iii) Internal Kernel Implementation

Each inotify instance is associated with an inotify_device structure.

Each watch is associated with an inotify_watch structure.  Watches are chained
off of each associated device and each associated inode.

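A rough sketch of the relationship (the field names here are illustrative
assumptions; fs/inotify.c has the authoritative definitions):

	struct inotify_watch {
		struct list_head	d_list;	/* entry in device's watch list */
		struct list_head	i_list;	/* entry in inode's watch list */
		s32			wd;	/* watch descriptor */
		u32			mask;	/* event mask for this watch */
		struct inode		*inode;	/* associated inode */
		struct inotify_device	*dev;	/* associated device */
	};
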
See fs/inotify.c for the locking and lifetime rules.


(iv) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
   the watched object?

A: Watches are associated with an open inotify device, not an open file.
   This solves the primary problem with dnotify: keeping the file open pins
   the file and thus, worse, pins the mount.  Dnotify is therefore infeasible
   for use on a desktop system with removable media as the media cannot be
   unmounted.  Watching a file should not require that it be open.

Q: What is the design decision behind using an fd-per-instance as opposed to
   an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   more fd's than are feasible to manage, and more fd's than are optimally
   select()-able.  Yes, root can bump the per-process fd limit and yes, users
   can use epoll, but requiring both is a silly and extraneous requirement.
   A watch consumes less memory than an open file; separating the number
   spaces is thus sensible.  The current design is what user-space developers
   want: Users initialize inotify, once, and add n watches, requiring but one
   fd and no twiddling with fd limits.  Initializing an inotify instance two
   thousand times is silly.  If we can implement user-space's preferences
   cleanly--and we can, the idr layer makes stuff like this trivial--then we
   should.

   There are other good arguments.  With a single fd, there is a single
   item to block on, which is mapped to a single queue of events.  The single
   fd returns all watch events and also any potential out-of-band data.  If
   every fd were a separate watch,

   - There would be no way to get event ordering.  Events on file foo and
     file bar would pop poll() on both fd's, but there would be no way to tell
     which happened first.  A single queue trivially gives you ordering.  Such
     ordering is crucial to existing applications such as Beagle.  Imagine
     "mv a b ; mv b a" events without ordering.

   - We'd have to maintain n fd's and n internal queues with state,
     versus just one.  It is a lot messier in the kernel.  A single, linear
     queue is the data structure that makes sense.

   - User-space developers prefer the current API.  The Beagle guys, for
     example, love it.  Trust me, I asked.  It is not a surprise: Who'd want
     to manage and block on 1000 fd's via select?

   - There would be no way to get out-of-band data.

   - 1024 is still too low.  ;-)

   When you talk about designing a file change notification system that
   scales to 1000s of directories, juggling 1000s of fd's just does not seem
   the right interface.  It is too heavy.

   Additionally, it _is_ possible to have more than one instance and
   juggle more than one queue and thus more than one associated fd.  There
   need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
   process can easily want more than one queue.

Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
   Signals are a terrible, terrible interface for file notification.  Or for
   anything, for that matter.  The ideal solution, from all perspectives, is a
   file descriptor-based one that allows basic file I/O and poll/select.
   Obtaining the fd and managing the watches could have been done either via a
   device file or a family of new system calls.  We decided to implement a
   family of system calls because that is the preferred approach for new kernel
   interfaces.  The only real difference was whether we wanted to use open(2)
   and ioctl(2) or a couple of new system calls.  System calls beat ioctls.