03 Dec, 2010 (40 commits)
-
Marcin Slusarz authored
nouveau_fence_* functions are not type safe, which could lead to bugs. Additionally, every use of nouveau_fence_unref had to cast struct nouveau_fence to void **. Fix it by renaming the old functions and creating static inline functions with new prototypes. We still need the old functions, because we pass function pointers to ttm. As we are wrapping the functions anyway, drop the unused "void *arg" parameter where possible.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
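A minimal sketch of the wrapping pattern, assuming __nouveau_fence_ref/__nouveau_fence_unref as the names of the renamed untyped functions; the exact signatures here are illustrative, not the driver's actual code:

    struct nouveau_fence;

    /* old, untyped entry points -- kept because TTM stores them as
     * generic function pointers */
    void *__nouveau_fence_ref(void *ref);
    void __nouveau_fence_unref(void **ref);

    /* new, type-safe wrappers used everywhere else; callers no longer
     * need to cast to void ** themselves */
    static inline struct nouveau_fence *
    nouveau_fence_ref(struct nouveau_fence *fence)
    {
            return __nouveau_fence_ref(fence);
    }

    static inline void
    nouveau_fence_unref(struct nouveau_fence **fence)
    {
            __nouveau_fence_unref((void **)fence);
    }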
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Only supported on NV50+ so far, and disabled by default currently. The module parameter "msi=1" will enable it.

There's a kernel bug which will cause this to fail if the module (or the NVIDIA binary driver) has ever been loaded before loading nouveau with MSI enabled. As such, this is only safe to enable if you have nouveau load on boot, and don't wish to ever reload it.

The workaround is to "echo 0 > /sys/bus/pci/devices/<device>/enable" until the enable count reads 0. Then you should be able to load nouveau with MSI enabled.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Not an issue right now, as we're forced to 64k size/alignment by the BO allocator anyway. This won't be the case for much longer.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This doesn't work yet for unknown reasons.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This really needs cleaning up somehow, and we should probably investigate what's needed to do this on earlier generations. NVIDIA do something similar there too.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
We previously added all the available classes for the entire generation, even though the objects wouldn't work on the hardware.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
The structs themselves, as well as the non-sw object creation function, are probably very misnamed now. That's a problem for later :)

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
We will eventually want to address hw engines other than PGRAPH.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Without it there's a potential race with nouveau_fence_update().

Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
It needs a "strong" channel reference because it actually writes to the channel pushbuf; otherwise the corresponding FIFO context could get kicked off in the middle of nouveau_fence_sync().

Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Fences didn't increment the channel reference count, and the fenced channel could go away at any time. Fixes a potential race in nouveau_fence_update().

Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
nouveau_channel_ref() takes a "weak" channel reference that doesn't prevent the hardware channel resources from being released; it just keeps the channel data structure alive.

Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
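Conceptually there are now two counts with different guarantees. A sketch of the idea; the field names and exact signature here are assumptions, not the actual struct layout:

    struct nouveau_channel {
            atomic_t users;     /* strong: hw context must stay alive    */
            atomic_t refcount;  /* weak: only the struct stays allocated */
            /* ... */
    };

    /* weak reference: safe to hold across hardware channel teardown */
    static inline void
    nouveau_channel_ref(struct nouveau_channel *chan,
                        struct nouveau_channel **pchan)
    {
            if (chan)
                    atomic_inc(&chan->refcount);
            if (*pchan)
                    atomic_dec(&(*pchan)->refcount); /* final free elided */
            *pchan = chan;
    }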
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
nouveau_channel_put() can be executed after the 'refcount == 0' check in nouveau_channel_get() and before the channel reference count is incremented. In that case CPU0 will take the context down while CPU1 thinks it owns the channel and 'refcount == 1'.

Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
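The window is the classic check-then-increment race. One way to close it, shown purely as an illustration of the problem and not necessarily the exact fix applied here (the 'users' field name is an assumption), is to make the zero check and the increment a single atomic operation:

    /* Racy: the count can reach zero between the check and the inc,
     * so CPU1 can "acquire" a channel CPU0 is already tearing down. */
    if (atomic_read(&chan->users) == 0)
            return ERR_PTR(-EINVAL);
    atomic_inc(&chan->users);

    /* Safe: a channel whose count already hit zero can never be
     * re-acquired, because the increment itself refuses to fire. */
    if (!atomic_inc_not_zero(&chan->users))
            return ERR_PTR(-EINVAL);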
-
Francisco Jerez authored
The destroy_context() engine hooks call gpuobj management functions to release the channel resources. These functions use HARDIRQ-unsafe locks, whereas destroy_context() is called with the HARDIRQ-safe context_switch_lock held; that's a lock ordering violation.

Push the engine-specific channel destruction logic into destroy_context() and let the hardware-specific code lock and unlock when it's actually needed.

Change the engine destruction order to avoid a race in the small gap between pgraph and pfifo context uninitialization.

Reported-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
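In reduced form the problem looks roughly like this (names such as dev_priv and the engine hook layout are assumptions; this is not the driver's literal before/after code): the HARDIRQ-safe spinlock must not be held across the gpuobj teardown, which can sleep.

    /* Before: HARDIRQ-safe spinlock held across code that takes
     * HARDIRQ-unsafe (sleeping) locks -> lock ordering violation. */
    spin_lock_irqsave(&dev_priv->context_switch_lock, flags);
    engine->graph.destroy_context(chan);  /* ends up in gpuobj code,
                                             which takes a mutex */
    spin_unlock_irqrestore(&dev_priv->context_switch_lock, flags);

    /* After: the hook is called with no spinlock held; inside it, the
     * hardware-specific code takes context_switch_lock only around the
     * small section that really needs interrupts blocked. */
    engine->graph.destroy_context(chan);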
-
Francisco Jerez authored
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Francisco Jerez authored
The pushbuf ioctl syncs after validation; there's no need for this anymore.

Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Not used at all yet, but let's hook it up now anyway.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
No other driver uses this, and userspace should be responsible for handling locking between them if they share BOs.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This fixes a race condition between fbcon acceleration and TTM buffer moves. To reproduce:
- start X
- switch to a vt and run "while (true); do dmesg; done"
- switch to another vt and run "sleep 2 && cat /path/to/debugfs/dri/0/evict_vram"
- switch back to the vt running dmesg

We don't make use of this on any other channel yet; they're currently protected by drm_global_mutex. This will change in the near future.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
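The serialization itself is just a per-channel mutex taken by every path that is about to emit commands on the channel. A sketch of the idea; the chan->mutex field name is an assumption:

    /* fbcon acceleration path */
    mutex_lock(&chan->mutex);
    /* ... emit fillrect/copyarea/imageblit methods into the pushbuf ... */
    mutex_unlock(&chan->mutex);

    /* TTM buffer-move path (e.g. VRAM eviction), same rule */
    mutex_lock(&chan->mutex);
    /* ... emit the copy used to migrate the BO ... */
    mutex_unlock(&chan->mutex);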
-
Ben Skeggs authored
A future commit will add locking to the DRM's channel, and there are numerous problems that come up if we allow printk from an interrupt context to be accelerated. It seems saner to just disallow it completely.

As a nice side-effect, all the "to accel or not to accel" logic gets moved out of the chipset-specific code.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
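The common shape of that logic is a check in the generic fbcon entry points that punts to the unaccelerated cfb_* helpers whenever we're in interrupt context. A sketch only; the nvxx_fbcon_fillrect hardware path is a hypothetical name, and the exact condition is an assumption:

    #include <linux/fb.h>
    #include <linux/hardirq.h>

    int nvxx_fbcon_fillrect(struct fb_info *, const struct fb_fillrect *); /* hypothetical hw path */

    static void
    nouveau_fbcon_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
    {
            int ret = -ENODEV;

            /* never try to accelerate from interrupt context or while
             * an oops is being printed; fall back to software instead */
            if (info->state == FBINFO_STATE_RUNNING &&
                !in_interrupt() && !oops_in_progress)
                    ret = nvxx_fbcon_fillrect(info, rect);

            if (ret)
                    cfb_fillrect(info, rect);
    }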
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-