- 03 Dec, 2010 (40 commits)
-
Francisco Jerez authored
nouveau_bo_move_m2mf() needs to lock the kernel channel, and it may be called from the pushbuf IOCTL with a user channel already locked. Use a separate subclass for the kernel channel mutex, because this is legitimate mutex nesting. Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
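As a rough sketch of the pattern (the enum and function names below are illustrative, not nouveau's actual symbols), lockdep's nesting annotation looks like this:

    #include <linux/mutex.h>

    /* Illustrative only: give the kernel channel's mutex its own lockdep
     * subclass so taking it while a user channel's mutex (subclass 0) is
     * already held is treated as legitimate nesting, not a deadlock. */
    enum {
            EXAMPLE_CHAN_MUTEX_USER = 0,    /* default subclass, user channels */
            EXAMPLE_CHAN_MUTEX_KERNEL,      /* separate subclass, kernel channel */
    };

    static void example_lock_kernel_channel(struct mutex *kchan_mutex)
    {
            /* mutex_lock_nested() records the subclass with lockdep. */
            mutex_lock_nested(kchan_mutex, EXAMPLE_CHAN_MUTEX_KERNEL);
    }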
-
Francisco Jerez authored
In a multihead setup, vblank interrupts may end up enabled on both heads. In that case we want to ignore the vblank interrupts coming from the wrong CRTC to avoid tearing and unbalanced calls to drm_vblank_get/put (fdo bug 31074). Reported-by: Felix Leimbach <felix.leimbach@gmx.net> Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
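For illustration, a hedged sketch of the filtering idea; example_head_that_fired() is a hypothetical stand-in for reading the hardware interrupt status, not the real handler:

    #include <drm/drmP.h>

    /* Hypothetical helper: in real code this would read the hardware
     * interrupt status to find out which head actually fired. */
    static int example_head_that_fired(struct drm_device *dev)
    {
            return 0;       /* placeholder */
    }

    /* Illustrative only: signal the vblank event for the head that fired,
     * instead of for every CRTC that has vblank interrupts enabled. */
    static void example_vblank_irq(struct drm_device *dev)
    {
            int head = example_head_that_fired(dev);

            /* drm_handle_vblank() bumps the counter and wakes waiters for
             * one CRTC only; calling it for the wrong head is what produces
             * the tearing and unbalanced drm_vblank_get/put described above. */
            drm_handle_vblank(dev, head);
    }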
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
nv0x-nv4x should be mostly fine, nv50 doesn't work yet. Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Marcin Slusarz authored
nouveau_fence_* functions are not type safe, which could lead to bugs. Additionally, every use of nouveau_fence_unref had to cast its struct nouveau_fence ** argument to void **. Fix it by renaming the old functions and creating static inline functions with the new prototypes. We still need the old functions because we pass function pointers to ttm. As we are wrapping functions, drop the unused "void *arg" parameter where possible. Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com> Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
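The wrapping pattern reads roughly like the simplified sketch below (prototypes abbreviated): the void ** variant keeps the signature that ttm's function pointers expect, while a static inline wrapper gives callers a type-safe interface and hides the cast in one place.

    struct nouveau_fence;

    /* Old-style entry point, kept because ttm stores it as a function
     * pointer operating on void *. */
    void __nouveau_fence_unref(void **sync_obj);

    /* Type-safe wrapper: callers pass struct nouveau_fence ** directly
     * and the single (void **) cast lives here. */
    static inline void nouveau_fence_unref(struct nouveau_fence **fence)
    {
            __nouveau_fence_unref((void **)fence);
    }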
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Only supported on NV50+ so far, and disabled by default currently. The module parameter "msi=1" will enable it. There's a kernel bug which will cause this to fail if the module (or the NVIDIA binary driver) has ever been loaded before loading nouveau with MSI enabled. As such, this is only safe to enable if you have nouveau loaded at boot and don't wish to ever reload it. The workaround is to "echo 0 > /sys/bus/pci/devices/<device>/enable" until the enable count reads 0; then you should be able to load nouveau with MSI enabled. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
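For context, this is the general shape an optional MSI path takes in a PCI driver; the sketch below uses assumed names and is not nouveau's actual code.

    #include <linux/module.h>
    #include <linux/pci.h>

    static int example_msi;                          /* disabled by default */
    module_param_named(msi, example_msi, int, 0400);
    MODULE_PARM_DESC(msi, "Enable MSI (default: 0)");

    static void example_enable_msi(struct pci_dev *pdev)
    {
            /* Only attempt MSI when explicitly requested; if
             * pci_enable_msi() fails, keep the legacy line interrupt. */
            if (example_msi && pci_enable_msi(pdev) == 0)
                    dev_info(&pdev->dev, "MSI enabled\n");
    }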
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Not an issue right now; we're forced to 64k size/alignment by the BO allocator anyway. This won't be the case for much longer. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This doesn't work yet for unknown reasons. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This really needs cleaning up somehow, and we should probably investigate what's needed to do this on earlier generations; NVIDIA does something similar there too. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
We previously added all the available classes for the entire generation, even though the objects wouldn't work on the hardware. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
The structs themselves, as well as the non-sw object creation function, are probably very misnamed now. That's a problem for later :) Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
We will eventually want to address hw engines other than PGRAPH. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Without it there's a potential race with nouveau_fence_update(). Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
It needs a "strong" channel reference because it actually writes to the channel pushbuf; otherwise the corresponding FIFO context could get kicked off in the middle of nouveau_fence_sync(). Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Fences didn't increment the channel reference count, and the fenced channel could go away at any time. Fixes a potential race in nouveau_fence_update(). Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
nouveau_channel_ref() takes a "weak" channel reference that doesn't prevent the hardware channel resources from being released; it just keeps the channel data structure alive. Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
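A minimal sketch of the weak/strong distinction, with made-up field and function names rather than the actual struct layout:

    #include <linux/kref.h>
    #include <linux/types.h>

    struct example_channel {
            atomic_t users;         /* strong: keeps the hw FIFO context alive */
            struct kref ref;        /* weak: keeps only the data structure alive */
    };

    /* Weak reference, analogous to the nouveau_channel_ref() described
     * above: the hardware channel resources may still be torn down while
     * this reference is held. */
    static void example_channel_ref(struct example_channel *chan)
    {
            kref_get(&chan->ref);
    }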
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
nouveau_channel_put() can be executed after the 'refcount == 0' check in nouveau_channel_get() and before the channel reference count is incremented. In that case CPU0 will take the context down while CPU1 thinks it owns the channel and sees 'refcount == 1'. Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
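One standard way to close a get/put race of this shape, shown purely as an illustration and not as the actual fix, is to refuse to hand out a reference once the count has already hit zero:

    #include <linux/atomic.h>

    struct example_channel {
            atomic_t refcount;
    };

    /* atomic_inc_not_zero() only succeeds while at least one reference is
     * still held, so a concurrent final put cannot race with this get and
     * hand out a channel that is already being torn down. */
    static struct example_channel *
    example_channel_get(struct example_channel *chan)
    {
            if (!atomic_inc_not_zero(&chan->refcount))
                    return NULL;
            return chan;
    }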
-
Francisco Jerez authored
The destroy_context() engine hooks call gpuobj management functions to release the channel resources; these functions use HARDIRQ-unsafe locks, whereas destroy_context() is called with the HARDIRQ-safe context_switch_lock held. That's a lock ordering violation. Push the engine-specific channel destruction logic into destroy_context() and let the hardware-specific code lock and unlock when it's actually needed. Change the engine destruction order to avoid a race in the small gap between pgraph and pfifo context uninitialization. Reported-by: Marcin Slusarz <marcin.slusarz@gmail.com> Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
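The ordering constraint looks roughly like the sketch below, with illustrative names: work that takes HARDIRQ-unsafe locks is done before, and outside of, the HARDIRQ-safe spinlock.

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    struct example_dev {
            spinlock_t context_switch_lock;  /* HARDIRQ-safe, taken with IRQs off */
            struct mutex gpuobj_lock;        /* HARDIRQ-unsafe, may sleep */
    };

    /* Illustrative only: gpuobj-style teardown, which can sleep and takes
     * HARDIRQ-unsafe locks, happens outside the IRQ-safe spinlock; only
     * the part that truly needs interrupts fenced off runs under
     * context_switch_lock. */
    static void example_destroy_context(struct example_dev *dev)
    {
            unsigned long flags;

            mutex_lock(&dev->gpuobj_lock);
            /* release the channel's objects here */
            mutex_unlock(&dev->gpuobj_lock);

            spin_lock_irqsave(&dev->context_switch_lock, flags);
            /* detach the channel from the hardware here */
            spin_unlock_irqrestore(&dev->context_switch_lock, flags);
    }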
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
The pushbuf ioctl syncs after validation, so this is no longer needed. Signed-off-by: Francisco Jerez <currojerez@riseup.net> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Not used at all yet, but let's hook it up now anyway. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-