- 02 Nov, 2017 40 commits
-
Ben Skeggs authored
Another transition step to allow finer-grained patches while transitioning to the new MMU backends. Old backends will continue to operate as before (accessing nvkm_mem::tag), and new backends will get a reference to the tags allocated here. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This is a transition step, to enable finer-grained commits while transitioning to new MMU interfaces. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Upcoming MMU changes use nvkm_memory as their basic representation of memory, so we need to be able to allocate VRAM like this. The code is basically identical to the current chipset-specific allocators, minus support for compression tags (which will be handled elsewhere anyway). Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Adds support for 64-bit writes, and optimised filling of buffers with fixed 32/64-bit values. These will all be used by the upcoming MMU changes. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
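A minimal sketch of the idea, with hypothetical names rather than the actual nvkm_memory accessor interface: the per-object vtable grows a 64-bit write and a fill fast path, so filling a buffer with a repeated 32/64-bit value becomes one loop instead of one indirect call per word.

```c
/* Illustrative sketch only -- hypothetical names, not the real nvkm types. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct mem_ops {
	void (*wr32)(void *base, size_t off, uint32_t data);
	void (*wr64)(void *base, size_t off, uint64_t data);
	/* fill 'count' consecutive 32-bit words with the same value */
	void (*fill32)(void *base, size_t off, uint32_t data, size_t count);
};

static void sysmem_wr32(void *base, size_t off, uint32_t data)
{
	memcpy((char *)base + off, &data, sizeof(data));
}

static void sysmem_wr64(void *base, size_t off, uint64_t data)
{
	memcpy((char *)base + off, &data, sizeof(data));
}

static void sysmem_fill32(void *base, size_t off, uint32_t data, size_t count)
{
	uint32_t *p = (uint32_t *)((char *)base + off);

	for (size_t i = 0; i < count; i++)
		p[i] = data;	/* one loop instead of one call per word */
}

static const struct mem_ops sysmem_ops = {
	.wr32   = sysmem_wr32,
	.wr64   = sysmem_wr64,
	.fill32 = sysmem_fill32,
};
```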
-
Ben Skeggs authored
We need to be able to prevent memory from being freed while it's still mapped in a GPU's address-space. Will be used by upcoming MMU changes. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
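As a rough illustration (hypothetical names, not the actual nvkm API), the usual way to express this is a reference count where every active GPU mapping holds a reference, so the backing store can only be released after the last unmap.

```c
/* Rough illustration with hypothetical names: each mapping holds a
 * reference on the memory object, so it cannot be freed while mapped. */
#include <stdlib.h>

struct mem_obj {
	unsigned refs;		/* owner + one per active mapping */
	void *backing;
};

static struct mem_obj *mem_ref(struct mem_obj *mem)
{
	mem->refs++;
	return mem;
}

static void mem_unref(struct mem_obj **pmem)
{
	struct mem_obj *mem = *pmem;

	*pmem = NULL;
	if (mem && --mem->refs == 0) {
		free(mem->backing);	/* safe: no mappings remain */
		free(mem);
	}
}

struct mapping {
	struct mem_obj *mem;	/* reference taken at map time */
};

static void do_map(struct mapping *map, struct mem_obj *mem)
{
	map->mem = mem_ref(mem);
	/* ... program page tables ... */
}

static void do_unmap(struct mapping *map)
{
	/* ... clear page tables ... */
	mem_unref(&map->mem);	/* may free the memory now */
}
```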
-
Ben Skeggs authored
Needed by VMM code to determine whether an allocation is compatible with a given page size (ie. you can't map 4KiB system memory pages into 64KiB GPU pages). Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
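A small sketch of the kind of check this enables, using hypothetical names and layout: every physical segment backing the allocation must be aligned to, and sized in multiples of, the requested GPU page size.

```c
/* Hypothetical sketch: does this allocation fit a given GPU page size? */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct mem_segment { uint64_t addr, size; };

/* page_size is assumed to be a power of two. */
static bool mem_fits_page_size(const struct mem_segment *seg, size_t nseg,
			       uint64_t page_size)
{
	for (size_t i = 0; i < nseg; i++) {
		if (seg[i].addr & (page_size - 1))
			return false;	/* segment not aligned to page size */
		if (seg[i].size & (page_size - 1))
			return false;	/* segment not a whole number of pages */
	}
	return true;
}
```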
-
Ben Skeggs authored
Map flags (access, kind, etc) are currently defined in either the VMA, or the memory object, which turns out to not be ideal for things like suballocated buffers, etc. These will become per-map flags instead, so we need to support passing these arguments in nvkm_memory_map(). Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
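Sketched with hypothetical names (not the actual nvkm_memory_map() signature), the shift is from flags stored on the VMA or the memory object to arguments supplied at map time, so two mappings of the same suballocated buffer can differ.

```c
/* Hypothetical sketch: map-time arguments instead of per-object flags. */
#include <stdint.h>

struct vma { uint64_t addr; };
struct mem_obj { uint64_t size; };

struct map_args {
	unsigned ro;		/* this mapping is read-only */
	uint8_t  kind;		/* storage kind for this mapping only */
};

static int mem_map(struct mem_obj *mem, uint64_t offset, struct vma *vma,
		   const struct map_args *args)
{
	if (offset >= mem->size)
		return -1;
	/* program PTEs for 'vma' using args->ro / args->kind, rather than
	 * anything remembered on 'mem' or 'vma' from creation time */
	(void)vma; (void)args;
	return 0;
}
```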
-
Ben Skeggs authored
nvkm_memory is going to be used by the upcoming MMU rework as the basic representation of a memory allocation, so this commit adds support for comptag allocation to nvkm_memory. This is very simple for now, in that it requires comptags for the entire memory allocation even if only certain ranges are compressed. Support for tracking ranges will be added at a later date. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
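A very rough sketch of the "whole allocation" policy described above, with hypothetical names: the number of tags is derived from the full size of the memory object, and the request simply falls back to uncompressed if the tag space is exhausted.

```c
/* Hypothetical sketch: comptags cover the entire allocation, no ranges. */
#include <stdint.h>

struct comptags {
	uint32_t first;		/* first tag index in the global tag space */
	uint32_t count;		/* tags covering the whole allocation */
};

static int comptags_alloc(uint64_t mem_size, uint32_t bytes_per_tag,
			  uint32_t *next_free, uint32_t total,
			  struct comptags *tags)
{
	uint64_t need = (mem_size + bytes_per_tag - 1) / bytes_per_tag;

	if (*next_free + need > total)
		return -1;	/* out of tags: caller maps uncompressed */

	tags->first = *next_free;
	tags->count = (uint32_t)need;
	*next_free += (uint32_t)need;
	return 0;
}
```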
-
Ben Skeggs authored
A single location for the MM allows us to share allocation logic. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
We probably don't want to destroy compression data when doing multiple mappings of a memory object. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
We're moving towards having a central place to handle comptag allocation, and as some GPUs don't have a ram submodule (ie. Tegra), we need to move the MM somewhere else. It probably never belonged in ram anyway. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
These will be used in upcoming patches. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Different sections of VRAM may have different properties (ie. can't be used for compression/display, can't be mapped, etc). We already support this, but it's a bit magic. This change makes it more obvious where we're allocating from. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
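To make the "which heap am I allocating from" point concrete, here is a hypothetical sketch (not the real nvkm ram code): each VRAM region advertises what it can be used for, and callers name the capability they need instead of relying on allocator ordering.

```c
/* Hypothetical sketch: explicit VRAM regions with capability flags. */
#include <stdint.h>
#include <stddef.h>

#define VRAM_COMP	0x1	/* usable for compressed buffers */
#define VRAM_DISP	0x2	/* scanout-capable */
#define VRAM_MAPPABLE	0x4	/* CPU-mappable through the BAR */

struct vram_region {
	uint64_t base, size, used;
	unsigned caps;
};

/* Allocate from the first region that has all requested capabilities. */
static int vram_alloc(struct vram_region *regions, size_t nregion,
		      unsigned need, uint64_t size, uint64_t *addr)
{
	for (size_t i = 0; i < nregion; i++) {
		struct vram_region *r = &regions[i];

		if ((r->caps & need) != need)
			continue;	/* region can't do what was asked */
		if (r->size - r->used < size)
			continue;	/* region is full */
		*addr = r->base + r->used;
		r->used += size;
		return 0;
	}
	return -1;	/* no region satisfies the request */
}
```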
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
TTM memory allocations will be hanging off the DRM's client, but the locking needed to do so gets really tricky with all the other uses of the DRM's object tree. To solve this, we make the normal DRM client a child of a new master client, from which the memory allocations will be done instead. This also solves a potential race with client creation. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
The conditional is the same for every mapping. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
We don't really care about where the memory is, just that it's compatible with a VMA allocated for a given page size. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
It's far more convenient to deal with like this. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
MMU will need to know this during its constructor, so we can't delay deciding this until init-time. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
In a future commit, this will be constructed by common code. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Before: "imem: init completed in 299277us" After: "imem: init completed in 11574us" Suspend from Fedora 26 gnome desktop on GP102. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Before: "imem: suspend completed in 5540487us" After: "imem: suspend completed in 1871526us" Suspend from Fedora 26 gnome desktop on GP102. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
These will require slow-path access during suspend/resume. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
A good deal of the structures we map into here aren't accessed very often at all, and Fedora 26 has exposed an issue where after creating a heap of channels, BAR2 space would run out, and we'd need to make use of the slow path while accessing important structures like page tables. This implements an LRU on BAR2 space, which allows eviction of mappings that aren't currently needed, to make space for other objects. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
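A simplified sketch of the eviction idea (hypothetical structures, not the nouveau implementation): mappings that are not currently in use remain candidates on an LRU, and when BAR2 space runs out the least recently used idle one is unmapped to make room.

```c
/* Simplified sketch of LRU eviction for a fixed set of BAR2 slots. */
#include <stdint.h>
#include <stddef.h>

struct bar2_slot {
	void    *object;	/* NULL when the slot is free */
	uint64_t last_use;	/* global counter value at last access */
	int      busy;		/* non-zero while actively being accessed */
};

static uint64_t use_counter;

static void bar2_touch(struct bar2_slot *slot)
{
	slot->last_use = ++use_counter;	/* mark as most recently used */
}

/* When no slot is free, evict the least recently used idle mapping. */
static struct bar2_slot *bar2_evict_candidate(struct bar2_slot *slots, size_t n)
{
	struct bar2_slot *victim = NULL;

	for (size_t i = 0; i < n; i++) {
		if (!slots[i].object || slots[i].busy)
			continue;	/* free or still needed: skip */
		if (!victim || slots[i].last_use < victim->last_use)
			victim = &slots[i];
	}
	return victim;	/* NULL means everything is still in use */
}
```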
-
Ben Skeggs authored
Another piece of solving the "GP100 BAR2 VMM bootstrap" puzzle. Without doing this, we'd attempt to write PDEs for the lower page table levels through BAR2 before BAR2 access has been fully initialised. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-