path: root/drivers/gpu/drm/drm_gem.c
Commit log (newest first): subject (Author, Date), followed by the commit message.
* drm: kill ->gem_init_object() and friends (David Herrmann, 2013-10-09)

  All drivers embed gem-objects into their own buffer objects. There is no reason to keep drm_gem_object_alloc(), gem->driver_private and ->gem_init_object() anymore. New drivers are highly encouraged to do the same. There is no benefit in allocating gem-objects separately.

  Cc: Dave Airlie <airlied@gmail.com>
  Cc: Alex Deucher <alexdeucher@gmail.com>
  Cc: Daniel Vetter <daniel@ffwll.ch>
  Cc: Jerome Glisse <jglisse@redhat.com>
  Cc: Rob Clark <robdclark@gmail.com>
  Cc: Inki Dae <inki.dae@samsung.com>
  Cc: Ben Skeggs <skeggsb@gmail.com>
  Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
  Acked-by: Alex Deucher <alexander.deucher@amd.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
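  A minimal sketch of the embedding pattern the commit describes; "foo_bo" and foo_bo_create() are hypothetical driver names, and the drm_gem_object_init() call reflects the GEM API of this era.

      /* Hypothetical driver object: the GEM object is embedded, not
       * allocated separately via drm_gem_object_alloc(). */
      struct foo_bo {
              struct drm_gem_object base;
              void *vaddr;
      };

      static struct foo_bo *foo_bo_create(struct drm_device *dev, size_t size)
      {
              struct foo_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);

              if (!bo)
                      return NULL;
              /* Initializes the embedded object and its shmem backing store. */
              if (drm_gem_object_init(dev, &bo->base, size)) {
                      kfree(bo);
                      return NULL;
              }
              return bo;
      }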
* drm/prime: Remove PRIME handles only if supported (Thierry Reding, 2013-08-29)

  Drivers that don't support PRIME will not have initialized the PRIME specific private component of struct drm_file. If called for such drivers, the drm_gem_remove_prime_handles() function will crash. Fix it by checking for PRIME support prior to removing the PRIME handles.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
  Signed-off-by: Dave Airlie <airlied@gmail.com>
* drm/gem: implement vma access management (David Herrmann, 2013-08-26)

  We implement automatic vma mmap() access management for all drivers using gem_mmap. We use the vma manager to add each open-file that creates a gem-handle to the vma-node of the underlying gem object. Once the handle is destroyed, we drop the open-file again.

  This allows us to use drm_vma_node_is_allowed() on _any_ gem object to see whether an open-file is granted access. In drm_gem_mmap() we use this to verify that unprivileged users cannot guess gem offsets and map arbitrary buffers.

  Note that this manages access for _all_ gem users (also TTM+GEM), but the actual access checks are only done for drm_gem_mmap(). TTM drivers use the TTM mmap helpers, which need to do that separately.

  Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
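  A rough sketch of the check drm_gem_mmap() can now make; the surrounding code (offset lookup, locking, object lookup) is elided, and the -EACCES return value is an assumption about the error code used.

      /* In drm_gem_mmap(): after looking up the offset node for vma->vm_pgoff,
       * refuse the mapping unless this open file was granted access when a
       * handle to the object was created. */
      if (!drm_vma_node_is_allowed(node, filp))
              return -EACCES;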
* drm/vma: add access management helpers (David Herrmann, 2013-08-26)

  The VMA offset manager uses a device-global address-space. Hence, any user can currently map any offset-node they want. They only need to guess the right offset. If we wanted per open-file offset spaces, we'd either need VM_NONLINEAR mappings or multiple "struct address_space" trees. As neither really scales, we implement access management in the VMA manager itself.

  We use an rb-tree to store open-files for each VMA node. On each mmap call, GEM, TTM or the drivers must check whether the current user is allowed to map this file.

  We add a separate lock for each node as there is no generic lock available for the caller to protect the node easily.

  As we currently don't know whether an object may be used for mmap(), we have to do access management for all objects. If it turns out to slow down handle creation/deletion significantly, we can optimize it in several ways (see the sketch after this list):

   - Most times only a single filp is added per bo, so we could use a static "struct file *main_filp" which is checked/added/removed first before we fall back to the rbtree+drm_vma_offset_file. This could even be done locklessly with RCU.
   - Let user-space pass a hint whether mmap() should be supported on the bo and avoid access-management if not.
   - .. there are probably more ideas once we have benchmarks ..

  v2: add drm_vma_node_verify_access() helper

  Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
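  A sketch of how GEM (or a driver) pairs the new helpers with handle lifetime; the file_priv->filp back-pointer used here is an assumption about how the open file is reached, and the struct-file-based signatures are the ones of this era.

      /* When a handle is created for obj: grant this open file mmap access. */
      ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);
      if (ret)
              return ret;

      /* When the handle is destroyed: drop that open file again. */
      drm_vma_node_revoke(&obj->vma_node, file_priv->filp);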
* drm/prime: Always add exported buffers to the handle cache (Daniel Vetter, 2013-08-20)

  ... not only when the dma-buf is freshly created. In contrived examples someone else could have exported/imported the dma-buf already and handed us the gem object with a flink name. If such an object gets reexported as a dma_buf we won't have it in the handle cache already, which breaks the guarantee that for dma-buf imports we always hand back an existing handle if there is one.

  This is exercised by igt/prime_self_import/with_one_bo_two_files

  Now if we extend the locked sections just a notch more we can also plug the racy buf/handle cache setup in handle_to_fd: if evil userspace races a concurrent gem close against a prime export operation we can end up tearing down the gem handle before the dma buf handle cache is set up. When handle_to_fd gets around to adding the handle to the cache there will be no one left to clean it up, effectively leaking the bo (and the dma-buf, since the handle cache holds a ref on the dma-buf):

    Thread A                           Thread B

    handle_to_fd:
    lookup gem object from handle
    creates new dma_buf
                                       gem_close on the same handle
                                       obj->dma_buf is set, but file priv buf
                                       handle cache has no entry
                                       obj->handle_count drops to 0
    drm_prime_add_buf_handle sets up
    the handle cache

  -> We have a dma-buf reference in the handle cache, but since the handle_count of the gem object already dropped to 0 no one will clean it up. When closing the drm device fd we'll hit the WARN_ON in drm_prime_destroy_file_private.

  The important change is to extend the critical section of the filp->prime.lock to cover the gem handle lookup. This serializes with a concurrent gem handle close.

  This leak is exercised by igt/prime_self_import/export-vs-gem_close-race

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/prime: Simplify drm_gem_remove_prime_handles (Daniel Vetter, 2013-08-20)

  With the reworked semantics and locking of the obj->dma_buf pointer, this pointer is always set as long as there's still a gem handle around and a dma_buf associated with this gem object.

  Also, the per file-priv lookup-cache for dma-buf importing is also unified between foreign and native objects.

  Hence we don't need to special-case the cleanup any more and can simply drop the clause which only runs for foreign objects, i.e. with obj->import_attach set.

  Note that with this change (actually with the previous one to always set up obj->dma_buf even for foreign objects) it is no longer required to set obj->import_attach when importing a foreign object. So update comments accordingly, too.

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/prime: proper locking+refcounting for obj->dma_buf link (Daniel Vetter, 2013-08-20)

  The export dma-buf cache is semantically similar to an flink name. So semantically it makes sense to treat it the same and remove the name (i.e. the dma_buf pointer) and its references when the last gem handle disappears.

  Again we need to be careful, but doubly so: not only could someone race an export with a gem close ioctl (so we need to recheck obj->handle_count again when assigning the new name), but multiple exports can also race against each other. This is prevented by holding the dev->object_name_lock across the entire section which touches obj->dma_buf.

  With the new scheme we also need to reinstate the obj->dma_buf link at import time (in case the only reference userspace has held in-between was through the dma-buf fd and not through any native gem handle). For simplicity we don't check whether it's a native object but unconditionally set up that link - with the new scheme of removing the obj->dma_buf reference when the last handle disappears we can do that.

  To make it clear that this is not just for exported buffers anymore, also rename it from export_dma_buf to dma_buf.

  To make sure that no one can race an fd_to_handle or handle_to_fd with gem_close we use the same tricks as in flink of extending the dev->object_name_lock critical section. With this change we finally have a guaranteed 1:1 relationship (at least for native objects) between gem objects and dma-bufs, even accounting for races (which can happen since the dma-buf itself holds a reference while in-flight).

  This prevents igt/prime_self_import/export-vs-gem_close-race from Oopsing the kernel. There is still a leak though since the per-file priv dma-buf/handle cache handling is racy. That will be fixed in a later patch.

  v2: Remove the bogus dma_buf_put from the export_and_register_object failure path if we've raced with the handle count dropping to 0.

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: completely close gem_open vs. gem_close races (Daniel Vetter, 2013-08-20)

  The gem flink name holds a reference onto the object itself, and this self-reference would prevent an flink'ed object from ever being freed. To break that loop we remove the flink name when the last userspace handle disappears, i.e. when obj->handle_count reaches 0.

  Now in gem_open we drop the dev->object_name_lock between the flink name lookup and actually adding the handle. This means a concurrent gem_close of the last handle could result in the flink name getting reaped right in between, i.e.

    Thread 1                Thread 2

    gem_open                gem_close
    flink -> obj lookup
                            handle_count drops to 0
                            remove flink name
    create_handle
    handle_count++

  If someone now flinks this object again, we'll get a new flink name.

  We can close this race by removing the lock dropping and making the entire lookup+handle_create sequence atomic. Unfortunately, to still be able to share the handle_create logic this requires a handle_create_tail function which drops the lock - we can't hold the object_name_lock while calling into a driver's ->gem_open callback.

  Note that for flink fixing this race isn't really important, since racing gem_open against gem_close is clearly a userspace bug. And no matter how the race ends, we won't leak any references. But with dma-buf, where the userspace dma-buf fd itself is refcounted, this is a valid sequence and hence we should fix it. Therefore this patch here is just a warm-up exercise (and for consistency between flink buffer sharing and dma-buf buffer sharing with self-imports).

  Also note that this extension of the critical section in gem_open protected by dev->object_name_lock only works because it's now a mutex: a spinlock would conflict with the potential memory allocation in idr_preload().

  This is exercised by igt/gem_flink_race/flink_name.

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
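  A simplified sketch of the gem_open flow after the change, assuming the 2013 drm_gem.c shape: the name lookup and the handle creation now sit in one critical section, with drm_gem_handle_create_tail() entered under dev->object_name_lock and dropping it internally.

      /* drm_gem_open_ioctl(), heavily simplified. */
      mutex_lock(&dev->object_name_lock);
      obj = idr_find(&dev->object_name_idr, (int)args->name);
      if (!obj) {
              mutex_unlock(&dev->object_name_lock);
              return -ENOENT;
      }
      drm_gem_object_reference(obj);
      /* Drops dev->object_name_lock before calling the driver's ->gem_open. */
      ret = drm_gem_handle_create_tail(file_priv, obj, &handle);
      drm_gem_object_unreference_unlocked(obj);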
* drm/gem: switch dev->object_name_lock to a mutex (Daniel Vetter, 2013-08-20)

  I want to wrap the creation of a dma-buf from a gem object in it, so that the obj->export_dma_buf cache can be atomically filled in. Instead of creating a new mutex just for that variable I've figured I can reuse the existing dev->object_name_lock, especially since the new semantics will exactly mirror the flink obj->name already protected by that lock.

  v2: idr_preload/idr_preload_end is now an atomic section, so need to move the mutex locking outside.

  [airlied: fix up conflict with patch to make debugfs use lock]

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: make drm_gem_object_handle_unreference_unlocked static (Daniel Vetter, 2013-08-20)

  No one outside of drm should use this; the official interfaces are drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting is purely an implementation detail of gem.

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: fix up flink name create race (Daniel Vetter, 2013-08-20)

  This is the 2nd attempt, I've always been a bit dissatisfied with the tricky nature of the first one: http://lists.freedesktop.org/archives/dri-devel/2012-July/025451.html

  The issue is that the flink ioctl can race with calling gem_close on the last gem handle. In that case we'll end up with a zero handle count, but an flink name (and its corresponding reference). Which results in a neat space leak.

  In my first attempt I've solved this by rechecking the handle count. But fundamentally the issue is that ->handle_count isn't your usual refcount - it can be resurrected from 0 among other things. For those special beasts atomic_t often suggests way more ordering than it actually guarantees. To prevent being tricked by those hairy semantics, take the easy way out and simply protect the handle with the existing dev->object_name_lock.

  With that change implemented it's dead easy to fix the flink vs. gem close race: when we try to create the name we simply have to check whether there's still officially a gem handle around and, if not, refuse to create the flink name. Since the handle count decrement and flink name destruction is now also protected by that lock, the race is gone and we can't ever leak the flink reference again.

  Outside of the drm core only the exynos driver looks at the handle count, and tbh I have no idea why (it's just for debug dmesg output luckily).

  I've considered inlining the drm_gem_object_handle_free, but I plan to add more name-like things (like the exported dma_buf) to this scheme, so it's clearer to leave the handle freeing in its own function.

  This is exercised by the new gem_flink_race i-g-t testcase, which on my snb leaks gem objects at a rate of roughly 1k objects/s.

  v2: Fix up the error path handling in handle_create and make it more robust by simply calling object_handle_unreference.

  v3: Fix up the handle_unreference logic bug - atomic_dec_and_test returns 1 for 0. Oops.

  v4: Squash in inlining of drm_gem_object_handle_reference as suggested by Dave Airlie and add a note that we now have a testcase.

  Cc: Dave Airlie <airlied@gmail.com>
  Cc: Inki Dae <inki.dae@samsung.com>
  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
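  A sketch of the flink side after the change, assuming this era's drm_gem_flink_ioctl() layout; idr_preload() handling and the exact error codes are simplified away.

      mutex_lock(&dev->object_name_lock);
      /* Refuse to create a name if the last handle is already gone. */
      if (!obj->handle_count) {
              ret = -ENOENT;
              goto out_unlock;
      }
      if (!obj->name) {
              ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_KERNEL);
              if (ret < 0)
                      goto out_unlock;
              obj->name = ret;        /* flink name, holds a reference */
      }
      args->name = (uint64_t)obj->name;
      ret = 0;
  out_unlock:
      mutex_unlock(&dev->object_name_lock);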
* drm/gem: WARN about unbalanced handle refcounts (Daniel Vetter, 2013-08-18)

  Trying to drop a reference we don't have is a pretty serious bug. Trying to paper over it is an even worse offense. So scream into dmesg with a big WARN in case that ever happens.

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: remove bogus NULL check from drm_gem_object_handle_unreference_unlocked (Daniel Vetter, 2013-08-18)

  Calling this function with a NULL object is simply a bug, so papering over a NULL object is not a good idea.

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: move drm_gem_object_handle_unreference_unlocked into drm_gem.c (Daniel Vetter, 2013-08-18)

  We have three callers of this function now and it's neither performance critical nor really small. So an inline function feels like overkill and unnecessarily separates the different parts of the code.

  Since all callers of drm_gem_object_handle_free are now in drm_gem.c we can make that static (and remove the unused EXPORT_SYMBOL). To avoid a forward declaration move it (and drm_gem_object_free_bug) up a bit.

  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: add shmem get/put page helpers (Rob Clark, 2013-08-18)

  Basically just extracting some code duplicated in gma500, omapdrm, udl, and upcoming msm driver.

  Signed-off-by: Rob Clark <robdclark@gmail.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
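  A usage sketch of the new helpers with the signatures they had when introduced (the gfpmask parameter was part of the early interface); this is not taken from any particular driver.

      struct page **pages;

      /* Pin the shmem backing pages of a GEM object. */
      pages = drm_gem_get_pages(obj, GFP_KERNEL);
      if (IS_ERR(pages))
              return PTR_ERR(pages);

      /* ... use pages[i] to build an sg table, fill PTEs, etc. ... */

      /* Release them again, marking them dirty and accessed. */
      drm_gem_put_pages(obj, pages, true, true);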
* drm/gem: add drm_gem_create_mmap_offset_size() (Rob Clark, 2013-08-18)

  Variant of drm_gem_create_mmap_offset() which doesn't make the assumption that virtual size and physical size (obj->size) are the same. This is needed in omapdrm to deal with tiled buffers. And lets us get rid of a duplicated and slightly modified version of drm_gem_create_mmap_offset() in omapdrm.

  Signed-off-by: Rob Clark <robdclark@gmail.com>
  Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
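  A sketch of the omapdrm-style use case described above; the tiled_size computation is purely hypothetical and stands in for whatever the driver derives from its tiling layout.

      /* Virtual (tiled) view may be larger than obj->size. */
      size_t tiled_size = round_up(obj->size, PAGE_SIZE) * 2;  /* hypothetical */
      int ret;

      /* Reserve a fake mmap offset covering the larger virtual size. */
      ret = drm_gem_create_mmap_offset_size(obj, tiled_size);
      if (ret)
              return ret;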
* drm/gem: create drm_gem_dumb_destroy (Daniel Vetter, 2013-08-06)

  All the gem based kms drivers really want the same function to destroy a dumb framebuffer backing storage object. So give it to them and roll it out in all drivers.

  This still leaves the option open for kms drivers which don't use GEM for backing storage, but it does decently simplify matters for gem drivers.

  Acked-by: Inki Dae <inki.dae@samsung.com>
  Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
  Cc: Ben Skeggs <skeggsb@gmail.com>
  Reviewed-by: Rob Clark <robdclark@gmail.com>
  Cc: Alex Deucher <alexdeucher@gmail.com>
  Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: fix mmap vma size calculations (David Herrmann, 2013-07-26)

  The VMA manager is page-size based so drm_vma_node_size() returns the size in pages. However, drm_gem_mmap_obj() requires the size in bytes. Apply PAGE_SHIFT so we no longer get EINVAL during mmaps due to too small buffers.

  This bug was introduced in commit: 0de23977cfeb5b357ec884ba15417ae118ff9e9b "drm/gem: convert to new unified vma manager"

  Fixes i915 gtt mmap failure reported by Sedat Dilek in: Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ]

  Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
  Cc: Chris Wilson <chris@chris-wilson.co.uk>
  Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
  Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
  Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
  Signed-off-by: Dave Airlie <airlied@gmail.com>
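  The fix boils down to one shift. A sketch of the corrected size computation in drm_gem_mmap(), with the surrounding lookup code elided:

      /* drm_vma_node_size() counts pages; drm_gem_mmap_obj() wants bytes. */
      ret = drm_gem_mmap_obj(obj,
                             drm_vma_node_size(node) << PAGE_SHIFT,
                             vma);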
* drm/gem: convert to new unified vma manager (David Herrmann, 2013-07-25)

  Use the new vma manager instead of the old hashtable. Also convert all drivers to use the new convenience helpers. This drops all the (map_list.hash.key << PAGE_SHIFT) nonsense.

  Locking and access-management is exactly the same as before with an additional lock inside of the vma-manager, which strictly wouldn't be needed for gem.

  v2: - rebase on drm-next
      - init nodes via drm_vma_node_reset() in drm_gem.c
  v3: - fix tegra
  v4: - remove duplicate if (drm_vma_node_has_offset()) checks
      - inline now trivial drm_vma_node_offset_addr() calls
  v5: - skip node-reset on gem-init due to kzalloc()
      - do not allow mapping gem-objects with offsets (backwards compat)
      - remove unnecessary casts

  Cc: Inki Dae <inki.dae@samsung.com>
  Cc: Rob Clark <robdclark@gmail.com>
  Cc: Dave Airlie <airlied@redhat.com>
  Cc: Thierry Reding <thierry.reding@gmail.com>
  Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
  Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@gmail.com>
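  A sketch of the driver-facing result of the conversion, e.g. inside a dumb_map_offset implementation; the handle lookup and error handling are elided.

      /* Allocate (or reuse) the fake mmap offset for this object ... */
      ret = drm_gem_create_mmap_offset(obj);
      if (ret)
              return ret;

      /* ... and report it to userspace; no more map_list.hash.key shifting. */
      *offset = drm_vma_node_offset_addr(&obj->vma_node);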
* drm/gem: simplify object initialization (David Herrmann, 2013-07-23)

  drm_gem_object_init() and drm_gem_private_object_init() do exactly the same (except for the shmem allocation), so make the former use the latter to reduce code duplication.

  Also drop the return code from drm_gem_private_object_init(). It seems unlikely that we will extend it any time soon so no reason to keep it around. This simplifies code paths in drivers, too.

  Last but not least, fix gma500 to call drm_gem_object_release() before freeing objects that were allocated via drm_gem_private_object_init(). That isn't actually necessary for now, but might be in the future.

  Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Acked-by: Rob Clark <robdclark@gmail.com>
  Signed-off-by: Dave Airlie <airlied@gmail.com>
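  Roughly what the unified init path looks like after this change (a sketch, not the verbatim code): the shmem-backed variant is just the private variant plus a shmem file.

      int drm_gem_object_init(struct drm_device *dev,
                              struct drm_gem_object *obj, size_t size)
      {
              struct file *filp;

              drm_gem_private_object_init(dev, obj, size);  /* now returns void */

              /* Only difference: attach shmem backing storage. */
              filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
              if (IS_ERR(filp))
                      return PTR_ERR(filp);

              obj->filp = filp;
              return 0;
      }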
* drm: make drm_mm_init() return void (David Herrmann, 2013-07-01)

  There is no reason to return "int" as this function never fails. Furthermore, several drivers (ast, sis) already depend on this.

  Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: fix not to assign error value to gem name (YoungJun Cho, 2013-06-27)

  If idr_alloc() fails, obj->name can be set to an error value. Also it cleans up duplicated flink processing code.

  This regression has been introduced in commit 2e928815c1886fe628ed54623aa98d0889cf5509
    Author: Tejun Heo <tj@kernel.org>
    Date: Wed Feb 27 17:04:08 2013 -0800

        drm: convert to idr_alloc()

  Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
  Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
  Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
  Cc: stable@vger.kernel.org
  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: add mutex lock when using drm_gem_mmap_obj (YoungJun Cho, 2013-06-27)

  The drm_gem_mmap_obj() has to be protected with dev->struct_mutex, but some callers do not hold it. So add the mutex lock to the callers that are missing it, and add an assertion to check whether drm_gem_mmap_obj() is called with the mutex held.

  Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
  Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
  Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
  Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
  Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Reviewed-by: Rob Clark <robdclark@gmail.com>
  Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: Split drm_gem_mmap() into object search and object mapping (Laurent Pinchart, 2013-06-08)

  The drm_gem_mmap() function first finds the GEM object to be mapped based on the fake mmap offset and then maps the object. Split the object mapping code into a standalone drm_gem_mmap_obj() function that can be used to implement dma-buf mmap() operations.

  Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Reviewed-by: Rob Clark <robdclark@gmail.com>
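  A sketch of the intended dma-buf use, assuming a hypothetical driver's dma-buf mmap callback; the dev->struct_mutex locking shown here reflects the requirement asserted by the 2013-06-27 entry above.

      /* Hypothetical driver dma-buf mmap callback built on the new helper. */
      static int foo_gem_dmabuf_mmap(struct dma_buf *dma_buf,
                                     struct vm_area_struct *vma)
      {
              struct drm_gem_object *obj = dma_buf->priv;
              struct drm_device *dev = obj->dev;
              int ret;

              mutex_lock(&dev->struct_mutex);
              ret = drm_gem_mmap_obj(obj, obj->size, vma);
              mutex_unlock(&dev->struct_mutex);

              return ret;
      }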
* drm/prime: keep a reference from the handle to exported dma-buf (v6) (Dave Airlie, 2013-04-30)

  Currently we have a problem with this:

    1. i915: create gem object
    2. i915: export gem object to prime
    3. radeon: import gem object
    4. close prime fd
    5. radeon: unref object
    6. i915: unref object

  i915 has an imported object reference in its file priv, that isn't cleaned up properly until fd close. The reference gets added at step 2, but at step 6 we don't have enough info to clean it up.

  The solution is to take a reference on the dma-buf when we export it, and drop the reference when the gem handle goes away. So when we export a dma_buf from a gem object, we keep track of it with the handle, and we take a reference to the dma_buf. When we close the handle (i.e. userspace is finished with the buffer), we drop the reference to the dma_buf, and it gets collected.

  This patch isn't meant to fix any other problem or bikesheds, and it doesn't fix any races with other scenarios.

  v1.1: move export symbol line back up.
  v2: okay I had to do a bit more, as the first patch showed a leak on one of my tests, that I found using the dma-buf debugfs support. The problem case is exporting a buffer twice with the same handle: we'd add another export handle for it unnecessarily. However we now fail if we try to export the same object with a different gem handle; I'm not sure if that is a case I want to support, and I've gotten the code to WARN_ON if we hit something like that.
  v2.1: rebase this patch, write better commit msg.
  v3: cleanup error handling, track import vs export in linked list; these two patches were separate previously, but seem to work better like this.
  v4: danvet is correct, this code is no longer useful, since the buffer better exist, so remove it.
  v5: always take a reference to the dma buf object, import or export. (Imre Deak contributed this originally)
  v6: square the circle, remove import vs export tracking now that there is no difference

  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Cc: stable@vger.kernel.org
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: convert to idr_alloc() (Tejun Heo, 2013-02-27)

  Convert to the much saner new idr interface.

  * drm_ctxbitmap_next() error handling in drm_addctx() seems broken. drm_ctxbitmap_next() returns -errno on failure, not -1.

  [artem.savkov@gmail.com: missing idr_preload_end in drm_gem_flink_ioctl]
  [jslaby@suse.cz: fix drm_gem_flink_ioctl() return value]

  Signed-off-by: Tejun Heo <tj@kernel.org>
  Acked-by: David Airlie <airlied@linux.ie>
  Signed-off-by: Artem Savkov <artem.savkov@gmail.com>
  Signed-off-by: Jiri Slaby <jslaby@suse.cz>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
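  The new allocation pattern, sketched for the GEM handle idr as an example (drm_gem_handle_create-style, simplified; the exact error paths in drm_gem.c differ slightly):

      idr_preload(GFP_KERNEL);                 /* may sleep, preallocates layers */
      spin_lock(&file_priv->table_lock);

      /* Allocate the lowest free handle >= 1; returns the id or -errno. */
      ret = idr_alloc(&file_priv->object_idr, obj, 1, 0, GFP_NOWAIT);

      spin_unlock(&file_priv->table_lock);
      idr_preload_end();

      if (ret < 0)
              return ret;
      handle = ret;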
* drm: don't use idr_remove_all() (Tejun Heo, 2013-02-27)

  idr_destroy() can destroy idr by itself and idr_remove_all() is being deprecated. Drop its usage.

  * drm_ctxbitmap_cleanup() was calling idr_remove_all() but forgetting idr_destroy(), thus leaking all buffered free idr_layers. Replace it with idr_destroy().

  Signed-off-by: Tejun Heo <tj@kernel.org>
  Acked-by: David Airlie <airlied@linux.ie>
  Cc: Inki Dae <inki.dae@samsung.com>
  Cc: Joonyoung Shim <jy0922.shim@samsung.com>
  Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
  Cc: Kyungmin Park <kyungmin.park@samsung.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm: kill vma flag VM_RESERVED and mm->reserved_vm counter (Konstantin Khlebnikov, 2012-10-09)

  A long time ago, in v2.4, VM_RESERVED kept the swapout process off the VMA; currently it has lost its original meaning but still has some effects:

     | effect                 | alternative flags
    -+------------------------+---------------------------------------------
    1| account as reserved_vm | VM_IO
    2| skip in core dump      | VM_IO, VM_DONTDUMP
    3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
    4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP

  This patch removes the reserved_vm counter from mm_struct. Seems like nobody cares about it; it is not exported into userspace directly, it only reduces total_vm showed in proc. Thus VM_RESERVED can be replaced with VM_IO or the pair VM_DONTEXPAND | VM_DONTDUMP.

  remap_pfn_range() and io_remap_pfn_range() set VM_IO|VM_DONTEXPAND|VM_DONTDUMP. remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.

  [akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]

  Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
  Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  Cc: Carsten Otte <cotte@de.ibm.com>
  Cc: Chris Metcalf <cmetcalf@tilera.com>
  Cc: Cyrill Gorcunov <gorcunov@openvz.org>
  Cc: Eric Paris <eparis@redhat.com>
  Cc: H. Peter Anvin <hpa@zytor.com>
  Cc: Hugh Dickins <hughd@google.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: James Morris <james.l.morris@oracle.com>
  Cc: Jason Baron <jbaron@redhat.com>
  Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
  Cc: Matt Helsley <matthltc@us.ibm.com>
  Cc: Nick Piggin <npiggin@kernel.dk>
  Cc: Oleg Nesterov <oleg@redhat.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Robert Richter <robert.richter@amd.com>
  Cc: Suresh Siddha <suresh.b.siddha@intel.com>
  Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Cc: Venkatesh Pallipadi <venki@google.com>
  Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
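  For drm_gem.c the practical upshot is the flag combination set on GEM mappings; a sketch of the substitution, where the "before" line is an approximation of the old drm_gem.c mmap flags:

      /* Before: */
      vma->vm_flags |= VM_RESERVED | VM_IO | VM_PFNMAP | VM_DONTEXPAND;

      /* After: VM_RESERVED is gone; VM_DONTDUMP carries the "skip in core
       * dump" behaviour, the rest was already covered by VM_IO/VM_DONTEXPAND. */
      vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;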
* UAPI: (Scripted) Convert #include "..." to #include <path/...> in drivers/gpu/ (David Howells, 2012-10-02)

  Convert #include "..." to #include <path/...> in drivers/gpu/.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: Dave Airlie <airlied@redhat.com>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Acked-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Acked-by: Dave Jones <davej@redhat.com>
* drm: Add colouring to the range allocator (Chris Wilson, 2012-07-15)

  In order to support snoopable memory on non-LLC architectures (so that we can bind vgem objects into the i915 GATT for example), we have to avoid the prefetcher on the GPU from crossing memory domains and so prevent allocation of a snoopable PTE immediately following an uncached PTE. To do that, we need to extend the range allocator with support for tracking and segregating different node colours.

  This will be used by i915 to segregate memory domains within the GTT.

  v2: Now with more drm_mm helpers and less driver interference.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Dave Airlie <airlied@redhat.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Ben Skeggs <bskeggs@redhat.com>
  Cc: Jerome Glisse <jglisse@redhat.com>
  Cc: Alex Deucher <alexander.deucher@amd.com>
  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@gmail.com>
* drm/prime: add exported buffers to current fprivs imported buffer list (v2) (Dave Airlie, 2012-05-23)

  If userspace attempts to import a buffer it exported on the same device, we need to return the same GEM handle for it, not a new handle pointing at the same GEM object.

  v2: move removals into a single fn, no need to set to NULL. (Chris Wilson)

  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: Don't initialize local ret variable when not needed (Laurent Pinchart, 2012-05-22)

  Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: pass dev to drm_vm_{open,close}_locked() (Rob Clark, 2012-05-11)

  Previously these functions would assume that vma->vm_file was the drm_file. Although in some cases, if the drm driver needs to use something else for the backing file (such as the tmpfs filp), then this assumption is no longer true. But vma->vm_private_data is still the GEM object. With this change, the drm_device now comes from the GEM object rather than the drm_file, so the driver is more free to play with vma->vm_file.

  The scenario where this comes up is for mmap'ing of cached dmabuf's for non-coherent systems, where the driver needs to use fault handling and PTE shootdown to simulate coherency. We can't use the vma->vm_file of the dmabuf, which is using anon_inode's address_space. The most straightforward thing to do is to use the GEM object's obj->filp for vma->vm_file in all cases, for which we need this patch.

  Signed-off-by: Rob Clark <rob@ti.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: Unify and fix idr error handling (Ville Syrjälä, 2012-04-24)

  The error handling code w.r.t. idr usage looks inconsistent. In the case of drm_mode_object_get() and drm_ctxbitmap_next() the error handling is also incomplete. Unify the code to follow the same pattern always.

  Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: base prime/dma-buf support (v5) (Dave Airlie, 2012-03-30)

  This adds the basic drm dma-buf interface layer, called PRIME. This commit doesn't add any driver support; it is simply an agreed-upon starting point so we can work towards merging driver support for the next merge window. Current drivers with work done are nouveau, i915, udl, exynos and omap.

  The main APIs exposed to userspace allow translating a 32-bit object handle to a file descriptor, and a file descriptor to a 32-bit object handle. The flags value is currently limited to O_CLOEXEC.

  Acknowledgements:
  Daniel Vetter: lots of review
  Rob Clark: cleaned up lots of the internals and did lifetime review.

  v2: rename some functions after Chris preferred a green shed; fix IS_ERR_OR_NULL -> IS_ERR
  v3: Fix Ville pointed out using buffer + kmalloc
  v4: add locking as per ickle review
  v5: allow re-exporting the original dma-buf (Daniel)

  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Reviewed-by: Rob Clark <rob.clark@linaro.org>
  Reviewed-by: Sumit Semwal <sumit.semwal@linaro.org>
  Reviewed-by: Inki Dae <inki.dae@samsung.com>
  Acked-by: Ben Widawsky <benjamin.widawsky@intel.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
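  The userspace-facing API this adds, sketched as a round trip with error checking omitted; drm_fd and other_drm_fd are assumed to be already-open device nodes, and DRM_CLOEXEC is the drm.h spelling of the O_CLOEXEC flag mentioned above.

      #include <fcntl.h>
      #include <stdint.h>
      #include <sys/ioctl.h>
      #include <drm/drm.h>

      /* GEM handle -> dma-buf fd on the exporting device. */
      struct drm_prime_handle exp = { .handle = handle, .flags = DRM_CLOEXEC };
      ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &exp);
      int dmabuf_fd = exp.fd;

      /* dma-buf fd -> GEM handle on the importing device. */
      struct drm_prime_handle imp = { .fd = dmabuf_fd };
      ioctl(other_drm_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &imp);
      uint32_t imported_handle = imp.handle;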
* drm: add core support for unplugging a device (v2) (Dave Airlie, 2012-03-15)

  Two parts to this: one is a simple unplug from sysfs for the device node. The second adds an unplugged state: if we have device opens, we just set the unplugged state and return; if we have no device opens we drop the drm device. If after a lastclose we discover we are unplugged we then drop the drm device.

  v2: use an atomic for unplugged and wrap it for users, add checks on open + mmap + ioctl entry points.

  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: drop setting vm_file to filp (Dave Airlie, 2012-03-05)

  Talking to Al Viro on irc, we can see no possible reason for doing this; the upper mmap code does it. The code has been there since the first import into the drm tree that I can find. Al tracked this down as a requirement pre-2.3.51; it hasn't been needed since.

  Acked-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: Pass the real error code back during GEM bo initialisation (Chris Wilson, 2012-01-30)

  In particular, I found I was hitting the max-file limit in the VFS, and the EFILE was being magically transformed into ENOMEM. Confusion reigns.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: add functions for mmap offset creation (Rob Clark, 2011-08-30)

  Signed-off-by: Rob Clark <rob@ti.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm/gem: add support for private objects (Alan Cox, 2011-07-25)

  These small changes should allow GEM to be used with non-shmem objects as well as shmem objects. In the GMA500 case it allows the base framebuffer to appear as a GEM object and thus acquire a handle and work with KMS. For i915 it ought to be trivial to get back the wasted memory by putting the system fb back into stolen RAM, and in general I can imagine it allowing the use of GEM and thus KMS with all the older cards that have their framebuffer firmly placed in video RAM.

  Signed-off-by: Alan Cox <alan@linux.intel.com>
  Tested-by: Rob Clark <rob@ti.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
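  A sketch of the pattern this enables, e.g. wrapping a firmware framebuffer that lives in stolen or video RAM; all names are hypothetical, error handling is elided, and the init helper's return value (an int at this point in history) is not shown being checked.

      /* Wrap a buffer with no shmem backing (e.g. the stolen-memory
       * framebuffer) so it can get a GEM handle and be used with KMS. */
      struct drm_gem_object *gobj = kzalloc(sizeof(*gobj), GFP_KERNEL);

      if (!gobj)
              return -ENOMEM;
      drm_gem_private_object_init(dev, gobj, size);   /* no obj->filp set up */
      drm_gem_handle_create(file_priv, gobj, &handle);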
* Merge 3.0-rc7 into drm-core-next (Dave Airlie, 2011-07-13)

  This pulls in all the drm fixes up to this point which are needed for some -next patches to work.
| * drm/i915: use shmem_read_mapping_page (Hugh Dickins, 2011-06-27)

  Soon tmpfs will stop supporting ->readpage and read_cache_page_gfp(): once "tmpfs: add shmem_read_mapping_page_gfp" has been applied, this patch can be applied to ease the transition.

  Make i915_gem_object_get_pages_gtt() use shmem_read_mapping_page_gfp() in the one place it's needed; elsewhere use shmem_read_mapping_page(), with the mapping's gfp_mask properly initialized.

  Forget about __GFP_COLD: since tmpfs initializes its pages with memset, asking for a cold page is counter-productive.

  Include linux/shmem_fs.h also in drm_gem.c: with shmem_file_setup() now declared there too, we shall remove the prototype from linux/mm.h later.

  Signed-off-by: Hugh Dickins <hughd@google.com>
  Cc: Christoph Hellwig <hch@infradead.org>
  Cc: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Keith Packard <keithp@keithp.com>
  Cc: Dave Airlie <airlied@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | drm/gem: add hooks to notify driver when object handle is created/destroyed (Ben Skeggs, 2011-06-20)

  Nouveau is going to use these hooks to map/unmap objects from a client's private GPU address space.

  Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: Fix use-after-free in drm_gem_vm_close() (Chris Wilson, 2011-03-20)

  As we may release the last reference, we need to store the device in a local variable in order to unlock afterwards.

  [   60.140768] BUG: unable to handle kernel paging request at 6b6b6b9f
  [   60.140973] IP: [<c1536d11>] __mutex_unlock_slowpath+0x5a/0x111
  [   60.141014] *pdpt = 0000000024a54001 *pde = 0000000000000000
  [   60.141014] Oops: 0002 [#1] PREEMPT SMP
  [   60.141014] last sysfs file: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/PNP0C0A:00/power_supply/BAT0/voltage_now
  [   60.141014] Modules linked in: uvcvideo ath9k pegasus ath9k_common ath9k_hw hid_egalax ath3k joydev asus_laptop sparse_keymap battery input_polldev
  [   60.141014]
  [   60.141014] Pid: 771, comm: meego-ux-daemon Not tainted 2.6.37.2-7.1 #1 EXOPC EXOPG06411/EXOPG06411
  [   60.141014] EIP: 0060:[<c1536d11>] EFLAGS: 00010046 CPU: 0
  [   60.141014] EIP is at __mutex_unlock_slowpath+0x5a/0x111
  [   60.141014] EAX: 00000100 EBX: 6b6b6b9b ECX: e9b4a1b0 EDX: e4a4e580
  [   60.141014] ESI: db162558 EDI: 00000246 EBP: e480be50 ESP: e480be44
  [   60.141014]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
  [   60.141014] Process meego-ux-daemon (pid: 771, ti=e480a000 task=e9b4a1b0 task.ti=e480a000)
  [   60.141014] Stack:
  [   60.141014]  e4a4e580 db162558 f5a2f838 e480be58 c1536dd0 e480be68 c125ab1b db162558
  [   60.141014]  db1624e0 e480be78 c10ba071 db162558 f760241c e480be94 c10bb0bc 000155fe
  [   60.141014]  f760241c f5a2f838 f5a2f8c8 00000000 e480bea4 c1037c24 00000000 f5a2f838
  [   60.141014] Call Trace:
  [   60.141014]  [<c1536dd0>] ? mutex_unlock+0x8/0xa
  [   60.141014]  [<c125ab1b>] ? drm_gem_vm_close+0x39/0x3d
  [   60.141014]  [<c10ba071>] ? remove_vma+0x2d/0x58
  [   60.141014]  [<c10bb0bc>] ? exit_mmap+0x126/0x13f
  [   60.141014]  [<c1037c24>] ? mmput+0x37/0x9a
  [   60.141014]  [<c10d450d>] ? exec_mmap+0x178/0x19c
  [   60.141014]  [<c1537f85>] ? _raw_spin_unlock+0x1d/0x36
  [   60.141014]  [<c10d4eb0>] ? flush_old_exec+0x42/0x75
  [   60.141014]  [<c1104442>] ? load_elf_binary+0x32a/0x922
  [   60.141014]  [<c10d3f76>] ? search_binary_handler+0x200/0x2ea
  [   60.141014]  [<c10d3ecf>] ? search_binary_handler+0x159/0x2ea
  [   60.141014]  [<c1104118>] ? load_elf_binary+0x0/0x922
  [   60.141014]  [<c10d56b2>] ? do_execve+0x1ff/0x2e6
  [   60.141014]  [<c100970e>] ? sys_execve+0x2d/0x55
  [   60.141014]  [<c1002a5a>] ? ptregs_execve+0x12/0x18
  [   60.141014]  [<c10029dc>] ? sysenter_do_call+0x12/0x3c
  [   60.141014]  [<c1530000>] ? init_centaur+0x9c/0x1ba
  [   60.141014] Code: c1 00 75 0f ba 38 01 00 00 b8 8c 3a 6c c1 e8 cc 2e b0 ff 9c 58 8d 74 26 00 89 c7 fa 90 8d 74 26 00 e8 d2 b4 b2 ff b8 00 01 00 00 <f0> 66 0f c1 43 04 38 e0 74 07 f3 90 8a 43 04 eb f5 83 3d 64 ef
  [   60.141014] EIP: [<c1536d11>] __mutex_unlock_slowpath+0x5a/0x111 SS:ESP 0068:e480be44
  [   60.141014] CR2: 000000006b6b6b9f

  Reported-by: Rusty Lynch <rusty.lynch@intel.com>
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: stable@kernel.org
  Signed-off-by: Dave Airlie <airlied@redhat.com>
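  The shape of the fix, sketched (the real function also updates the drm vm bookkeeping via the locked close helper, elided here): dev is cached in a local before the reference, and possibly the object, goes away.

      static void drm_gem_vm_close(struct vm_area_struct *vma)
      {
              struct drm_gem_object *obj = vma->vm_private_data;
              struct drm_device *dev = obj->dev;  /* cache before dropping the ref */

              mutex_lock(&dev->struct_mutex);
              drm_gem_object_unreference(obj);    /* may free obj */
              mutex_unlock(&dev->struct_mutex);   /* safe: uses the local dev */
      }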
* drm: Trim the GEM mmap offset hashtab (Chris Wilson, 2011-02-22)

  Using an order 19 drm_ht for the mmap offsets is a little obscene. That means that, with a fully populated GTT where every single object is mmaped at least once in its lifetime, there will be exactly one object in each bucket. Typically systems only have at most a few thousand objects, though you may see a KDE desktop hit 50000. And most of those should never be mapped...

  On my systems, just using an order 10 ht would still have an average occupancy less than 1, so apply a small safety factor and use an order 12 ht, like the other mmap offset ht.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm: dumb scanout create/mmap for intel/radeon (v3) (Dave Airlie, 2011-02-06)

  This is just an idea that might or might not be a good idea: it basically adds two ioctls to create a dumb buffer and map a dumb buffer suitable for scanout. The handle can be passed to the KMS ioctls to create a framebuffer.

  It looks to me like it would be useful in the following cases:

    a) in development drivers - we can always provide a shadowfb fallback.
    b) libkms users - we can clean up libkms a lot and avoid linking to libdrm_*.
    c) plymouth via libkms is a lot easier.

  Userspace bits would be just calls + mmaps. We could probably mark these handles somehow as not being suitable for acceleration so as to stop people who are dumber than dumb.

  Signed-off-by: Dave Airlie <airlied@redhat.com>
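  What the userspace side of the two new ioctls looks like, sketched without error handling; the device is assumed to be already open at fd.

      #include <stdint.h>
      #include <sys/ioctl.h>
      #include <sys/mman.h>
      #include <drm/drm.h>
      #include <drm/drm_mode.h>

      struct drm_mode_create_dumb creq = { .width = 1024, .height = 768, .bpp = 32 };
      ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);   /* -> creq.handle/pitch/size */

      struct drm_mode_map_dumb mreq = { .handle = creq.handle };
      ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);      /* -> fake mmap offset */

      void *fb = mmap(NULL, creq.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                      fd, mreq.offset);
      /* creq.handle can now be handed to the KMS ioctls as the framebuffer bo. */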
* Merge remote branch 'korg/drm-fixes' into drm-vmware-next (Dave Airlie, 2010-10-05)

  Necessary for some of the vmware fixes to be pushed in.

  Conflicts:
    drivers/gpu/drm/drm_gem.c
    drivers/gpu/drm/i915/intel_fb.c
    include/drm/drmP.h
| * drm: Hold the mutex when dropping the last GEM reference (v2) (Chris Wilson, 2010-10-01)

  In order to be fully threadsafe we need to check that the drm_gem_object refcount is still 0 after acquiring the mutex in order to call the free function. Otherwise, we may encounter scenarios like:

    Thread A:                      Thread B:
    drm_gem_close
    unreference_unlocked
    kref_put                       mutex_lock
    ...                            i915_gem_evict
    ...                            kref_get -> BUG
    ...                            i915_gem_unbind
    ...                            kref_put
    ...                            i915_gem_object_free
    ...                            mutex_unlock
    mutex_lock
    i915_gem_object_free -> BUG
    i915_gem_object_unbind
    kfree
    mutex_unlock

  Note that no driver is currently using the free_unlocked vfunc and it is scheduled for removal, hasten that process.

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30454

  Reported-and-Tested-by: Magnus Kessler <Magnus.Kessler@gmx.net>
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: stable@kernel.org
  Signed-off-by: Dave Airlie <airlied@redhat.com>
| * drm/gem: handlecount isn't really a kref so don't make it one. (Dave Airlie, 2010-09-30)

  There were lots of places being inconsistent since handle count looked like a kref but it really wasn't. Fix this by just making handle count an atomic on the object, and have it increase the normal object kref.

  Now i915/radeon/nouveau drivers can drop the normal reference on userspace object creation, and have the handle hold it.

  This patch fixes a memory leak or corruption on unload, because the driver had no way of knowing if a handle had been actually added for this object, and the fbcon object needed to know this to clean itself up properly.

  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  Signed-off-by: Dave Airlie <airlied@redhat.com>
| * drm: Prune GEM vma entries (Chris Wilson, 2010-09-27)

  Hook the GEM vm open/close ops into the generic drm vm open/close so that the private vma entries are created and destroyed appropriately. Fixes the leak of the drm_vma_entries during the lifetime of the filp.

  Reported-by: Matt Mackall <mpm@selenic.com>
  Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
  Cc: stable@kernel.org
  Signed-off-by: Dave Airlie <airlied@redhat.com>