Commit c4e85412 authored by Dave Airlie

Merge tag 'drm-intel-next-2014-05-23' of git://anongit.freedesktop.org/drm-intel into drm-next

- prep refactoring for execlists (Oscar Mateo)
- corner-case fixes for runtime pm (Imre)
- tons of vblank improvements from Ville
- prep work for atomic plane/sprite updates (Ville)
- more chv code, now almost complete (tons of different people)
- refactoring and improvements for drm_irq.c merged through drm-intel-next
- g4x/ilk reset improvements (Ville)
- removal of encoder->mode_set
- moved audio state tracking into pipe_config
- shuffled fb pinning out of the platform crtc modeset callbacks into core code
- userptr support (Chris)
- OOM handling improvements from Chris; we now have a neat oom notifier which
  dumps additional debug information.
- topdown allocation of ppgtt PDEs (Ben)
- fixes and small improvements all over

* tag 'drm-intel-next-2014-05-23' of git://anongit.freedesktop.org/drm-intel: (187 commits)
  drm/i915: Kill private_default_ctx off
  drm/i915: s/i915_hw_context/intel_context
  drm/i915: Split the ringbuffers from the rings (3/3)
  drm/i915: Split the ringbuffers from the rings (2/3)
  drm/i915: Split the ringbuffers from the rings (1/3)
  drm/i915: s/intel_ring_buffer/intel_engine_cs
  drm/i915: disable GT power saving early during system suspend
  drm/i915: fix possible RPM ref leaking during RPS disabling
  drm/i915: remove user GTT mappings early during runtime suspend
  drm/i915: Implement WaVcpClkGateDisableForMediaReset:ctg, elk
  drm/i915: Fix gen2 and hsw+ scanline counter
  drm/i915: Draw a picture about video timings
  drm/i915: Improve gen3/4 frame counter
  drm/i915: Add a small adjustment to the pixel counter on interlaced modes
  drm/i915: Hold CRTC lock whilst freezing the planes
  drm/i915: Only discard backing storage on releasing the last ref
  drm/i915: Wait for pending page flips before enabling/disabling the primary plane
  drm/i915: grab the audio power domain when enabling audio on HSW+
  drm/i915: don't read HSW_AUD_PIN_ELD_CP_VLD when the power well is off
  drm/i915: move bsd dispatch index somewhere better
  ...
parents 182407a6 f83d6518
......@@ -3368,6 +3368,10 @@ void (*disable_vblank) (struct drm_device *dev, int crtc);</synopsis>
with a call to <function>drm_vblank_cleanup</function> in the driver
<methodname>unload</methodname> operation handler.
</para>
<sect2>
<title>Vertical Blanking and Interrupt Handling Functions Reference</title>
!Edrivers/gpu/drm/drm_irq.c
</sect2>
</sect1>
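To make the pairing above concrete, here is a minimal sketch of how a driver of this era brackets vblank support, assuming one counter per CRTC (the foo_* function names are illustrative, not from this patch):

    /* minimal sketch: pairing drm_vblank_init() with drm_vblank_cleanup() */
    static int foo_load(struct drm_device *dev, unsigned long flags)
    {
        /* allocate vblank counters, one per CRTC */
        int ret = drm_vblank_init(dev, 2);
        if (ret)
            return ret;
        /* ... rest of driver init ... */
        return 0;
    }

    static int foo_unload(struct drm_device *dev)
    {
        /* ... rest of driver teardown ... */
        drm_vblank_cleanup(dev);
        return 0;
    }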
<!-- Internals: open/close, file operations and ioctls -->
......@@ -3710,17 +3714,16 @@ int num_ioctls;</synopsis>
<term>DRM_IOCTL_MODESET_CTL</term>
<listitem>
<para>
This should be called by application level drivers before and
after mode setting, since on many devices the vertical blank
counter is reset at that time. Internally, the DRM snapshots
the last vblank count when the ioctl is called with the
_DRM_PRE_MODESET command, so that the counter won't go backwards
(which is dealt with when _DRM_POST_MODESET is used).
This was only used for user-mode-setting drivers around
modesetting changes to allow the kernel to update the vblank
interrupt after mode setting, since on many devices the vertical
blank counter is reset to 0 at some point during modeset. Modern
drivers should not call this any more since with kernel mode
setting it is a no-op.
</para>
</listitem>
</varlistentry>
</variablelist>
<!--!Edrivers/char/drm/drm_irq.c-->
</para>
</sect1>
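For reference, a hedged sketch of how a legacy user-mode-setting driver would have issued this ioctl from user space (drm_fd and the CRTC index are assumptions; kernel-mode-setting drivers never make this call):

    #include <sys/ioctl.h>
    #include <drm/drm.h>   /* struct drm_modeset_ctl, _DRM_PRE/POST_MODESET */

    static void legacy_ums_modeset(int drm_fd, unsigned int crtc)
    {
        struct drm_modeset_ctl ctl = { .crtc = crtc };

        ctl.cmd = _DRM_PRE_MODESET;            /* snapshot the vblank count */
        ioctl(drm_fd, DRM_IOCTL_MODESET_CTL, &ctl);

        /* ... program the new mode directly from user space ... */

        ctl.cmd = _DRM_POST_MODESET;           /* resume counting without a jump */
        ioctl(drm_fd, DRM_IOCTL_MODESET_CTL, &ctl);
    }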
......@@ -3783,6 +3786,96 @@ int num_ioctls;</synopsis>
probing, so those sections fully apply.
</para>
</sect2>
<sect2>
<title>DPIO</title>
!Pdrivers/gpu/drm/i915/i915_reg.h DPIO
<table id="dpiox2">
<title>Dual channel PHY (VLV/CHV)</title>
<tgroup cols="8">
<colspec colname="c0" />
<colspec colname="c1" />
<colspec colname="c2" />
<colspec colname="c3" />
<colspec colname="c4" />
<colspec colname="c5" />
<colspec colname="c6" />
<colspec colname="c7" />
<spanspec spanname="ch0" namest="c0" nameend="c3" />
<spanspec spanname="ch1" namest="c4" nameend="c7" />
<spanspec spanname="ch0pcs01" namest="c0" nameend="c1" />
<spanspec spanname="ch0pcs23" namest="c2" nameend="c3" />
<spanspec spanname="ch1pcs01" namest="c4" nameend="c5" />
<spanspec spanname="ch1pcs23" namest="c6" nameend="c7" />
<thead>
<row>
<entry spanname="ch0">CH0</entry>
<entry spanname="ch1">CH1</entry>
</row>
</thead>
<tbody valign="top" align="center">
<row>
<entry spanname="ch0">CMN/PLL/REF</entry>
<entry spanname="ch1">CMN/PLL/REF</entry>
</row>
<row>
<entry spanname="ch0pcs01">PCS01</entry>
<entry spanname="ch0pcs23">PCS23</entry>
<entry spanname="ch1pcs01">PCS01</entry>
<entry spanname="ch1pcs23">PCS23</entry>
</row>
<row>
<entry>TX0</entry>
<entry>TX1</entry>
<entry>TX2</entry>
<entry>TX3</entry>
<entry>TX0</entry>
<entry>TX1</entry>
<entry>TX2</entry>
<entry>TX3</entry>
</row>
<row>
<entry spanname="ch0">DDI0</entry>
<entry spanname="ch1">DDI1</entry>
</row>
</tbody>
</tgroup>
</table>
<table id="dpiox1">
<title>Single channel PHY (CHV)</title>
<tgroup cols="4">
<colspec colname="c0" />
<colspec colname="c1" />
<colspec colname="c2" />
<colspec colname="c3" />
<spanspec spanname="ch0" namest="c0" nameend="c3" />
<spanspec spanname="ch0pcs01" namest="c0" nameend="c1" />
<spanspec spanname="ch0pcs23" namest="c2" nameend="c3" />
<thead>
<row>
<entry spanname="ch0">CH0</entry>
</row>
</thead>
<tbody valign="top" align="center">
<row>
<entry spanname="ch0">CMN/PLL/REF</entry>
</row>
<row>
<entry spanname="ch0pcs01">PCS01</entry>
<entry spanname="ch0pcs23">PCS23</entry>
</row>
<row>
<entry>TX0</entry>
<entry>TX1</entry>
<entry>TX2</entry>
<entry>TX3</entry>
</row>
<row>
<entry spanname="ch0">DDI2</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
</sect1>
<sect1>
......
......@@ -418,7 +418,7 @@ static size_t __init gen6_stolen_size(int num, int slot, int func)
return gmch_ctrl << 25; /* 32 MB units */
}
static size_t gen8_stolen_size(int num, int slot, int func)
static size_t __init gen8_stolen_size(int num, int slot, int func)
{
u16 gmch_ctrl;
......@@ -428,48 +428,73 @@ static size_t gen8_stolen_size(int num, int slot, int func)
return gmch_ctrl << 25; /* 32 MB units */
}
static size_t __init chv_stolen_size(int num, int slot, int func)
{
u16 gmch_ctrl;
gmch_ctrl = read_pci_config_16(num, slot, func, SNB_GMCH_CTRL);
gmch_ctrl >>= SNB_GMCH_GMS_SHIFT;
gmch_ctrl &= SNB_GMCH_GMS_MASK;
/*
* 0x0 to 0x10: 32MB increments starting at 0MB
* 0x11 to 0x16: 4MB increments starting at 8MB
* 0x17 to 0x1d: 4MB increments starting at 36MB
*/
if (gmch_ctrl < 0x11)
return gmch_ctrl << 25;
else if (gmch_ctrl < 0x17)
return (gmch_ctrl - 0x11 + 2) << 22;
else
return (gmch_ctrl - 0x17 + 9) << 22;
}
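A few decoded values, worked from the comment above, make the three encoding ranges concrete:

    gmch_ctrl = 0x10:  0x10 << 25             = 512 MB   (32MB steps)
    gmch_ctrl = 0x11: (0x11 - 0x11 + 2) << 22 =   8 MB   (4MB steps)
    gmch_ctrl = 0x16: (0x16 - 0x11 + 2) << 22 =  28 MB
    gmch_ctrl = 0x17: (0x17 - 0x17 + 9) << 22 =  36 MB
    gmch_ctrl = 0x1d: (0x1d - 0x17 + 9) << 22 =  60 MB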
struct intel_stolen_funcs {
size_t (*size)(int num, int slot, int func);
u32 (*base)(int num, int slot, int func, size_t size);
};
static const struct intel_stolen_funcs i830_stolen_funcs = {
static const struct intel_stolen_funcs i830_stolen_funcs __initconst = {
.base = i830_stolen_base,
.size = i830_stolen_size,
};
static const struct intel_stolen_funcs i845_stolen_funcs = {
static const struct intel_stolen_funcs i845_stolen_funcs __initconst = {
.base = i845_stolen_base,
.size = i830_stolen_size,
};
static const struct intel_stolen_funcs i85x_stolen_funcs = {
static const struct intel_stolen_funcs i85x_stolen_funcs __initconst = {
.base = i85x_stolen_base,
.size = gen3_stolen_size,
};
static const struct intel_stolen_funcs i865_stolen_funcs = {
static const struct intel_stolen_funcs i865_stolen_funcs __initconst = {
.base = i865_stolen_base,
.size = gen3_stolen_size,
};
static const struct intel_stolen_funcs gen3_stolen_funcs = {
static const struct intel_stolen_funcs gen3_stolen_funcs __initconst = {
.base = intel_stolen_base,
.size = gen3_stolen_size,
};
static const struct intel_stolen_funcs gen6_stolen_funcs = {
static const struct intel_stolen_funcs gen6_stolen_funcs __initconst = {
.base = intel_stolen_base,
.size = gen6_stolen_size,
};
static const struct intel_stolen_funcs gen8_stolen_funcs = {
static const struct intel_stolen_funcs gen8_stolen_funcs __initconst = {
.base = intel_stolen_base,
.size = gen8_stolen_size,
};
static struct pci_device_id intel_stolen_ids[] __initdata = {
static const struct intel_stolen_funcs chv_stolen_funcs __initconst = {
.base = intel_stolen_base,
.size = chv_stolen_size,
};
static const struct pci_device_id intel_stolen_ids[] __initconst = {
INTEL_I830_IDS(&i830_stolen_funcs),
INTEL_I845G_IDS(&i845_stolen_funcs),
INTEL_I85X_IDS(&i85x_stolen_funcs),
......@@ -495,7 +520,8 @@ static struct pci_device_id intel_stolen_ids[] __initdata = {
INTEL_HSW_D_IDS(&gen6_stolen_funcs),
INTEL_HSW_M_IDS(&gen6_stolen_funcs),
INTEL_BDW_M_IDS(&gen8_stolen_funcs),
INTEL_BDW_D_IDS(&gen8_stolen_funcs)
INTEL_BDW_D_IDS(&gen8_stolen_funcs),
INTEL_CHV_IDS(&chv_stolen_funcs),
};
static void __init intel_graphics_stolen(int num, int slot, int func)
......
......@@ -5,6 +5,7 @@ config DRM_I915
depends on (AGP || AGP=n)
select INTEL_GTT
select AGP_INTEL if AGP
select INTERVAL_TREE
# we need shmfs for the swappable backing store, and in particular
# the shmem_readpage() which depends upon tmpfs
select SHMEM
......
......@@ -18,6 +18,7 @@ i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o
# GEM code
i915-y += i915_cmd_parser.o \
i915_gem_context.o \
i915_gem_render_state.o \
i915_gem_debug.o \
i915_gem_dmabuf.o \
i915_gem_evict.o \
......@@ -26,12 +27,18 @@ i915-y += i915_cmd_parser.o \
i915_gem.o \
i915_gem_stolen.o \
i915_gem_tiling.o \
i915_gem_userptr.o \
i915_gpu_error.o \
i915_irq.o \
i915_trace_points.o \
intel_ringbuffer.o \
intel_uncore.o
# autogenerated null render state
i915-y += intel_renderstate_gen6.o \
intel_renderstate_gen7.o \
intel_renderstate_gen8.o
# modesetting core code
i915-y += intel_bios.o \
intel_display.o \
......
......@@ -498,16 +498,18 @@ static u32 gen7_blt_get_cmd_length_mask(u32 cmd_header)
return 0;
}
static bool validate_cmds_sorted(struct intel_ring_buffer *ring)
static bool validate_cmds_sorted(struct intel_engine_cs *ring,
const struct drm_i915_cmd_table *cmd_tables,
int cmd_table_count)
{
int i;
bool ret = true;
if (!ring->cmd_tables || ring->cmd_table_count == 0)
if (!cmd_tables || cmd_table_count == 0)
return true;
for (i = 0; i < ring->cmd_table_count; i++) {
const struct drm_i915_cmd_table *table = &ring->cmd_tables[i];
for (i = 0; i < cmd_table_count; i++) {
const struct drm_i915_cmd_table *table = &cmd_tables[i];
u32 previous = 0;
int j;
......@@ -550,35 +552,103 @@ static bool check_sorted(int ring_id, const u32 *reg_table, int reg_count)
return ret;
}
static bool validate_regs_sorted(struct intel_ring_buffer *ring)
static bool validate_regs_sorted(struct intel_engine_cs *ring)
{
return check_sorted(ring->id, ring->reg_table, ring->reg_count) &&
check_sorted(ring->id, ring->master_reg_table,
ring->master_reg_count);
}
struct cmd_node {
const struct drm_i915_cmd_descriptor *desc;
struct hlist_node node;
};
/*
* Different command ranges have different numbers of bits for the opcode. For
* example, MI commands use bits 31:23 while 3D commands use bits 31:16. The
* problem is that, for example, MI commands use bits 22:16 for other fields
* such as GGTT vs PPGTT bits. If we include those bits in the mask then when
* we mask a command from a batch it could hash to the wrong bucket due to
* non-opcode bits being set. But if we don't include those bits, some 3D
* commands may hash to the same bucket due to not including opcode bits that
* make the command unique. For now, we will risk hashing to the same bucket.
*
* If we attempt to generate a perfect hash, we should be able to look at bits
* 31:29 of a command from a batch buffer and use the full mask for that
* client. The existing INSTR_CLIENT_MASK/SHIFT defines can be used for this.
*/
#define CMD_HASH_MASK STD_MI_OPCODE_MASK
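A small illustration of the compromise described above, assuming STD_MI_OPCODE_MASK covers bits 31:23 as the comment implies (the two command headers are hypothetical):

    u32 cmd_a = 0x7b000005;   /* hypothetical 3D-client header      */
    u32 cmd_b = 0x7b400005;   /* same bits 31:23, differs in bit 22 */

    /* both mask to the same hash key, so they share a bucket ... */
    (cmd_a & CMD_HASH_MASK) == (cmd_b & CMD_HASH_MASK);
    /* ... and are only told apart by the full desc->cmd.mask/value
     * compare in find_cmd_in_table() below */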
static int init_hash_table(struct intel_engine_cs *ring,
const struct drm_i915_cmd_table *cmd_tables,
int cmd_table_count)
{
int i, j;
hash_init(ring->cmd_hash);
for (i = 0; i < cmd_table_count; i++) {
const struct drm_i915_cmd_table *table = &cmd_tables[i];
for (j = 0; j < table->count; j++) {
const struct drm_i915_cmd_descriptor *desc =
&table->table[j];
struct cmd_node *desc_node =
kmalloc(sizeof(*desc_node), GFP_KERNEL);
if (!desc_node)
return -ENOMEM;
desc_node->desc = desc;
hash_add(ring->cmd_hash, &desc_node->node,
desc->cmd.value & CMD_HASH_MASK);
}
}
return 0;
}
static void fini_hash_table(struct intel_engine_cs *ring)
{
struct hlist_node *tmp;
struct cmd_node *desc_node;
int i;
hash_for_each_safe(ring->cmd_hash, i, tmp, desc_node, node) {
hash_del(&desc_node->node);
kfree(desc_node);
}
}
/**
* i915_cmd_parser_init_ring() - set cmd parser related fields for a ringbuffer
* @ring: the ringbuffer to initialize
*
* Optionally initializes fields related to batch buffer command parsing in the
* struct intel_ring_buffer based on whether the platform requires software
* struct intel_engine_cs based on whether the platform requires software
* command parsing.
*
* Return: non-zero if initialization fails
*/
void i915_cmd_parser_init_ring(struct intel_ring_buffer *ring)
int i915_cmd_parser_init_ring(struct intel_engine_cs *ring)
{
const struct drm_i915_cmd_table *cmd_tables;
int cmd_table_count;
int ret;
if (!IS_GEN7(ring->dev))
return;
return 0;
switch (ring->id) {
case RCS:
if (IS_HASWELL(ring->dev)) {
ring->cmd_tables = hsw_render_ring_cmds;
ring->cmd_table_count =
cmd_tables = hsw_render_ring_cmds;
cmd_table_count =
ARRAY_SIZE(hsw_render_ring_cmds);
} else {
ring->cmd_tables = gen7_render_cmds;
ring->cmd_table_count = ARRAY_SIZE(gen7_render_cmds);
cmd_tables = gen7_render_cmds;
cmd_table_count = ARRAY_SIZE(gen7_render_cmds);
}
ring->reg_table = gen7_render_regs;
......@@ -595,17 +665,17 @@ void i915_cmd_parser_init_ring(struct intel_ring_buffer *ring)
ring->get_cmd_length_mask = gen7_render_get_cmd_length_mask;
break;
case VCS:
ring->cmd_tables = gen7_video_cmds;
ring->cmd_table_count = ARRAY_SIZE(gen7_video_cmds);
cmd_tables = gen7_video_cmds;
cmd_table_count = ARRAY_SIZE(gen7_video_cmds);
ring->get_cmd_length_mask = gen7_bsd_get_cmd_length_mask;
break;
case BCS:
if (IS_HASWELL(ring->dev)) {
ring->cmd_tables = hsw_blt_ring_cmds;
ring->cmd_table_count = ARRAY_SIZE(hsw_blt_ring_cmds);
cmd_tables = hsw_blt_ring_cmds;
cmd_table_count = ARRAY_SIZE(hsw_blt_ring_cmds);
} else {
ring->cmd_tables = gen7_blt_cmds;
ring->cmd_table_count = ARRAY_SIZE(gen7_blt_cmds);
cmd_tables = gen7_blt_cmds;
cmd_table_count = ARRAY_SIZE(gen7_blt_cmds);
}
ring->reg_table = gen7_blt_regs;
......@@ -622,8 +692,8 @@ void i915_cmd_parser_init_ring(struct intel_ring_buffer *ring)
ring->get_cmd_length_mask = gen7_blt_get_cmd_length_mask;
break;
case VECS:
ring->cmd_tables = hsw_vebox_cmds;
ring->cmd_table_count = ARRAY_SIZE(hsw_vebox_cmds);
cmd_tables = hsw_vebox_cmds;
cmd_table_count = ARRAY_SIZE(hsw_vebox_cmds);
/* VECS can use the same length_mask function as VCS */
ring->get_cmd_length_mask = gen7_bsd_get_cmd_length_mask;
break;
......@@ -633,18 +703,45 @@ void i915_cmd_parser_init_ring(struct intel_ring_buffer *ring)
BUG();
}
BUG_ON(!validate_cmds_sorted(ring));
BUG_ON(!validate_cmds_sorted(ring, cmd_tables, cmd_table_count));
BUG_ON(!validate_regs_sorted(ring));
ret = init_hash_table(ring, cmd_tables, cmd_table_count);
if (ret) {
DRM_ERROR("CMD: cmd_parser_init failed!\n");
fini_hash_table(ring);
return ret;
}
ring->needs_cmd_parser = true;
return 0;
}
/**
* i915_cmd_parser_fini_ring() - clean up cmd parser related fields
* @ring: the ringbuffer to clean up
*
* Releases any resources related to command parsing that may have been
* initialized for the specified ring.
*/
void i915_cmd_parser_fini_ring(struct intel_engine_cs *ring)
{
if (!ring->needs_cmd_parser)
return;
fini_hash_table(ring);
}
static const struct drm_i915_cmd_descriptor*
find_cmd_in_table(const struct drm_i915_cmd_table *table,
find_cmd_in_table(struct intel_engine_cs *ring,
u32 cmd_header)
{
int i;
struct cmd_node *desc_node;
for (i = 0; i < table->count; i++) {
const struct drm_i915_cmd_descriptor *desc = &table->table[i];
hash_for_each_possible(ring->cmd_hash, desc_node, node,
cmd_header & CMD_HASH_MASK) {
const struct drm_i915_cmd_descriptor *desc = desc_node->desc;
u32 masked_cmd = desc->cmd.mask & cmd_header;
u32 masked_value = desc->cmd.value & desc->cmd.mask;
......@@ -664,20 +761,16 @@ find_cmd_in_table(const struct drm_i915_cmd_table *table,
* ring's default length encoding and returns default_desc.
*/
static const struct drm_i915_cmd_descriptor*
find_cmd(struct intel_ring_buffer *ring,
find_cmd(struct intel_engine_cs *ring,
u32 cmd_header,
struct drm_i915_cmd_descriptor *default_desc)
{
const struct drm_i915_cmd_descriptor *desc;
u32 mask;
int i;
for (i = 0; i < ring->cmd_table_count; i++) {
const struct drm_i915_cmd_descriptor *desc;
desc = find_cmd_in_table(&ring->cmd_tables[i], cmd_header);
if (desc)
return desc;
}
desc = find_cmd_in_table(ring, cmd_header);
if (desc)
return desc;
mask = ring->get_cmd_length_mask(cmd_header);
if (!mask)
......@@ -744,12 +837,11 @@ static u32 *vmap_batch(struct drm_i915_gem_object *obj)
*
* Return: true if the ring requires software command parsing
*/
bool i915_needs_cmd_parser(struct intel_ring_buffer *ring)
bool i915_needs_cmd_parser(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
/* No command tables indicates a platform without parsing */
if (!ring->cmd_tables)
if (!ring->needs_cmd_parser)
return false;
/*
......@@ -763,7 +855,7 @@ bool i915_needs_cmd_parser(struct intel_ring_buffer *ring)
return (i915.enable_cmd_parser == 1);
}
static bool check_cmd(const struct intel_ring_buffer *ring,
static bool check_cmd(const struct intel_engine_cs *ring,
const struct drm_i915_cmd_descriptor *desc,
const u32 *cmd,
const bool is_master,
......@@ -865,7 +957,7 @@ static bool check_cmd(const struct intel_ring_buffer *ring,
*
* Return: non-zero if the parser finds violations or otherwise fails
*/
int i915_parse_cmds(struct intel_ring_buffer *ring,
int i915_parse_cmds(struct intel_engine_cs *ring,
struct drm_i915_gem_object *batch_obj,
u32 batch_start_offset,
bool is_master)
......
......@@ -44,6 +44,7 @@
#include <acpi/video.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/oom.h>
#define LP_RING(d) (&((struct drm_i915_private *)(d))->ring[RCS])
......@@ -63,7 +64,7 @@
* has access to the ring.
*/
#define RING_LOCK_TEST_WITH_RETURN(dev, file) do { \
if (LP_RING(dev->dev_private)->obj == NULL) \
if (LP_RING(dev->dev_private)->buffer->obj == NULL) \
LOCK_TEST_WITH_RETURN(dev, file); \
} while (0)
......@@ -119,7 +120,7 @@ static void i915_write_hws_pga(struct drm_device *dev)
static void i915_free_hws(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ring_buffer *ring = LP_RING(dev_priv);
struct intel_engine_cs *ring = LP_RING(dev_priv);
if (dev_priv->status_page_dmah) {
drm_pci_free(dev, dev_priv->status_page_dmah);
......@@ -139,7 +140,8 @@ void i915_kernel_lost_context(struct drm_device * dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_master_private *master_priv;
struct intel_ring_buffer *ring = LP_RING(dev_priv);
struct intel_engine_cs *ring = LP_RING(dev_priv);
struct intel_ringbuffer *ringbuf = ring->buffer;
/*
* We should never lose context on the ring with modesetting
......@@ -148,17 +150,17 @@ void i915_kernel_lost_context(struct drm_device * dev)
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
ring->head = I915_READ_HEAD(ring) & HEAD_ADDR;
ring->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
ring->space = ring->head - (ring->tail + I915_RING_FREE_SPACE);
if (ring->space < 0)
ring->space += ring->size;
ringbuf->head = I915_READ_HEAD(ring) & HEAD_ADDR;
ringbuf->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
ringbuf->space = ringbuf->head - (ringbuf->tail + I915_RING_FREE_SPACE);
if (ringbuf->space < 0)
ringbuf->space += ringbuf->size;
if (!dev->primary->master)
return;
master_priv = dev->primary->master->driver_priv;
if (ring->head == ring->tail && master_priv->sarea_priv)
if (ringbuf->head == ringbuf->tail && master_priv->sarea_priv)
master_priv->sarea_priv->perf_boxes |= I915_BOX_RING_EMPTY;
}
......@@ -201,7 +203,7 @@ static int i915_initialize(struct drm_device * dev, drm_i915_init_t * init)
}
if (init->ring_size != 0) {
if (LP_RING(dev_priv)->obj != NULL) {
if (LP_RING(dev_priv)->buffer->obj != NULL) {
i915_dma_cleanup(dev);
DRM_ERROR("Client tried to initialize ringbuffer in "
"GEM mode\n");
......@@ -234,11 +236,11 @@ static int i915_initialize(struct drm_device * dev, drm_i915_init_t * init)
static int i915_dma_resume(struct drm_device * dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ring_buffer *ring = LP_RING(dev_priv);
struct intel_engine_cs *ring = LP_RING(dev_priv);
DRM_DEBUG_DRIVER("%s\n", __func__);
if (ring->virtual_start == NULL) {
if (ring->buffer->virtual_start == NULL) {
DRM_ERROR("can not ioremap virtual address for"
" ring buffer\n");
return -ENOMEM;
......@@ -360,7 +362,7 @@ static int i915_emit_cmds(struct drm_device * dev, int *buffer, int dwords)
struct drm_i915_private *dev_priv = dev->dev_private;
int i, ret;
if ((dwords+1) * sizeof(int) >= LP_RING(dev_priv)->size - 8)
if ((dwords+1) * sizeof(int) >= LP_RING(dev_priv)->buffer->size - 8)
return -EINVAL;
for (i = 0; i < dwords;) {
......@@ -782,7 +784,7 @@ static int i915_wait_irq(struct drm_device * dev, int irq_nr)
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
int ret = 0;
struct intel_ring_buffer *ring = LP_RING(dev_priv);
struct intel_engine_cs *ring = LP_RING(dev_priv);
DRM_DEBUG_DRIVER("irq_nr=%d breadcrumb=%d\n", irq_nr,
READ_BREADCRUMB(dev_priv));
......@@ -823,7 +825,7 @@ static int i915_irq_emit(struct drm_device *dev, void *data,
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -ENODEV;
if (!dev_priv || !LP_RING(dev_priv)->virtual_start) {
if (!dev_priv || !LP_RING(dev_priv)->buffer->virtual_start) {
DRM_ERROR("called with no initialization\n");
return -EINVAL;
}
......@@ -1073,7 +1075,7 @@ static int i915_set_status_page(struct drm_device *dev, void *data,
{
struct drm_i915_private *dev_priv = dev->dev_private;
drm_i915_hws_addr_t *hws = data;
struct intel_ring_buffer *ring;
struct intel_engine_cs *ring;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -ENODEV;
......@@ -1570,7 +1572,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
spin_lock_init(&dev_priv->backlight_lock);
spin_lock_init(&dev_priv->uncore.lock);
spin_lock_init(&dev_priv->mm.object_stat_lock);
dev_priv->ring_index = 0;
mutex_init(&dev_priv->dpio_lock);
mutex_init(&dev_priv->modeset_restore_lock);
......@@ -1741,8 +1742,8 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
intel_power_domains_remove(dev_priv);
drm_vblank_cleanup(dev);
out_gem_unload:
if (dev_priv->mm.inactive_shrinker.scan_objects)
unregister_shrinker(&dev_priv->mm.inactive_shrinker);
WARN_ON(unregister_oom_notifier(&dev_priv->mm.oom_notifier));
unregister_shrinker(&dev_priv->mm.shrinker);
if (dev->pdev->msi_enabled)
pci_disable_msi(dev->pdev);
......@@ -1793,8 +1794,8 @@ int i915_driver_unload(struct drm_device *dev)
i915_teardown_sysfs(dev);
if (dev_priv->mm.inactive_shrinker.scan_objects)
unregister_shrinker(&dev_priv->mm.inactive_shrinker);
WARN_ON(unregister_oom_notifier(&dev_priv->mm.oom_notifier));
unregister_shrinker(&dev_priv->mm.shrinker);
io_mapping_free(dev_priv->gtt.mappable);
arch_phys_wc_del(dev_priv->gtt.mtrr);
......@@ -1867,7 +1868,7 @@ int i915_driver_unload(struct drm_device *dev)
kmem_cache_destroy(dev_priv->slab);
pci_dev_put(dev_priv->bridge_dev);
kfree(dev->dev_private);
kfree(dev_priv);
return 0;
}
......@@ -1983,6 +1984,7 @@ const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_get_reset_stats_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
};
int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls);
......
......@@ -178,7 +178,7 @@ static int get_context_size(struct drm_device *dev)
void i915_gem_context_free(struct kref *ctx_ref)
{
struct i915_hw_context *ctx = container_of(ctx_ref,
struct intel_context *ctx = container_of(ctx_ref,
typeof(*ctx), ref);
struct i915_hw_ppgtt *ppgtt = NULL;
......@@ -199,7 +199,7 @@ void i915_gem_context_free(struct kref *ctx_ref)
}
static struct i915_hw_ppgtt *
create_vm_for_ctx(struct drm_device *dev, struct i915_hw_context *ctx)
create_vm_for_ctx(struct drm_device *dev, struct intel_context *ctx)
{
struct i915_hw_ppgtt *ppgtt;
int ret;
......@@ -218,12 +218,12 @@ create_vm_for_ctx(struct drm_device *dev, struct i915_hw_context *ctx)
return ppgtt;
}
static struct i915_hw_context *
static struct intel_context *
__create_hw_context(struct drm_device *dev,
struct drm_i915_file_private *file_priv)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_hw_context *ctx;
struct intel_context *ctx;
int ret;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
......@@ -285,14 +285,14 @@ __create_hw_context(struct drm_device *dev,
* context state of the GPU for applications that don't utilize HW contexts, as
* well as an idle case.
*/
static struct i915_hw_context *
static struct intel_context *
i915_gem_create_context(struct drm_device *dev,
struct drm_i915_file_private *file_priv,
bool create_vm)
{
const bool is_global_default_ctx = file_priv == NULL;
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_hw_context *ctx;
struct intel_context *ctx;
int ret = 0;
BUG_ON(!mutex_is_locked(&dev->struct_mutex));
......@@ -364,8 +364,8 @@ void i915_gem_context_reset(struct drm_device *dev)
/* Prevent the hardware from restoring the last context (which hung) on
* the next switch */
for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_ring_buffer *ring = &dev_priv->ring[i];
struct i915_hw_context *dctx = ring->default_context;
struct intel_engine_cs *ring = &dev_priv->ring[i];
struct intel_context *dctx = ring->default_context;
/* Do a fake switch to the default context */
if (ring->last_context == dctx)
......@@ -391,7 +391,7 @@ void i915_gem_context_reset(struct drm_device *dev)
int i915_gem_context_init(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_hw_context *ctx;
struct intel_context *ctx;
int i;
/* Init should only be called once per module load. Eventually the
......@@ -426,7 +426,7 @@ int i915_gem_context_init(struct drm_device *dev)
void i915_gem_context_fini(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_hw_context *dctx = dev_priv->ring[RCS].default_context;
struct intel_context *dctx = dev_priv->ring[RCS].default_context;
int i;
if (dctx->obj) {
......@@ -449,10 +449,12 @@ void i915_gem_context_fini(struct drm_device *dev)
i915_gem_context_unreference(dctx);
dev_priv->ring[RCS].last_context = NULL;
}
i915_gem_object_ggtt_unpin(dctx->obj);
}
for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_ring_buffer *ring = &dev_priv->ring[i];
struct intel_engine_cs *ring = &dev_priv->ring[i];
if (ring->last_context)
i915_gem_context_unreference(ring->last_context);
......@@ -461,13 +463,12 @@ void i915_gem_context_fini(struct drm_device *dev)
ring->last_context = NULL;
}
i915_gem_object_ggtt_unpin(dctx->obj);
i915_gem_context_unreference(dctx);
}
int i915_gem_context_enable(struct drm_i915_private *dev_priv)
{
struct intel_ring_buffer *ring;
struct intel_engine_cs *ring;
int ret, i;
/* This is the only place the aliasing PPGTT gets enabled, which means
......@@ -494,11 +495,7 @@ int i915_gem_context_enable(struct drm_i915_private *dev_priv)
static int context_idr_cleanup(int id, void *p, void *data)
{
struct i915_hw_context *ctx = p;
/* Ignore the default context because close will handle it */
if (i915_gem_context_is_default(ctx))
return 0;
struct intel_context *ctx = p;
i915_gem_context_unreference(ctx);
return 0;
......@@ -507,17 +504,17 @@ static int context_idr_cleanup(int id, void *p, void *data)
int i915_gem_context_open(struct drm_device *dev, struct drm_file *file)
{
struct drm_i915_file_private *file_priv = file->driver_priv;
struct intel_context *ctx;
idr_init(&file_priv->context_idr);
mutex_lock(&dev->struct_mutex);
file_priv->private_default_ctx =
i915_gem_create_context(dev, file_priv, USES_FULL_PPGTT(dev));
ctx = i915_gem_create_context(dev, file_priv, USES_FULL_PPGTT(dev));
mutex_unlock(&dev->struct_mutex);
if (IS_ERR(file_priv->private_default_ctx)) {
if (IS_ERR(ctx)) {
idr_destroy(&file_priv->context_idr);
return PTR_ERR(file_priv->private_default_ctx);
return PTR_ERR(ctx);
}
return 0;
......@@ -529,16 +526,14 @@ void i915_gem_context_close(struct drm_device *dev, struct drm_file *file)
idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL);
idr_destroy(&file_priv->context_idr);
i915_gem_context_unreference(file_priv->private_default_ctx);
}
struct i915_hw_context *
struct intel_context *
i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
{
struct i915_hw_context *ctx;
struct intel_context *ctx;
ctx = (struct i915_hw_context *)idr_find(&file_priv->context_idr, id);
ctx = (struct intel_context *)idr_find(&file_priv->context_idr, id);
if (!ctx)
return ERR_PTR(-ENOENT);
......@@ -546,8 +541,8 @@ i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
}
static inline int
mi_set_context(struct intel_ring_buffer *ring,
struct i915_hw_context *new_context,
mi_set_context(struct intel_engine_cs *ring,
struct intel_context *new_context,
u32 hw_flags)
{
int ret;
......@@ -567,7 +562,7 @@ mi_set_context(struct intel_ring_buffer *ring,
if (ret)
return ret;
/* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw */
/* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw,chv */
if (INTEL_INFO(ring->dev)->gen >= 7)
intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_DISABLE);
else
......@@ -596,11 +591,11 @@ mi_set_context(struct intel_ring_buffer *ring,
return ret;
}
static int do_switch(struct intel_ring_buffer *ring,
struct i915_hw_context *to)
static int do_switch(struct intel_engine_cs *ring,
struct intel_context *to)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
struct i915_hw_context *from = ring->last_context;
struct intel_context *from = ring->last_context;
struct i915_hw_ppgtt *ppgtt = ctx_to_ppgtt(to);
u32 hw_flags = 0;
int ret, i;
......@@ -701,13 +696,19 @@ static int do_switch(struct intel_ring_buffer *ring,
i915_gem_context_unreference(from);
}
to->is_initialized = true;
done:
i915_gem_context_reference(to);
ring->last_context = to;
to->last_ring = ring;
if (ring->id == RCS && !to->is_initialized && from == NULL) {
ret = i915_gem_render_state_init(ring);
if (ret)
DRM_ERROR("init render state: %d\n", ret);
}
to->is_initialized = true;
return 0;
unpin_out:
......@@ -726,8 +727,8 @@ static int do_switch(struct intel_ring_buffer *ring,
* it will have a refcount > 1. This allows us to destroy the context abstract
* object while letting the normal object tracking destroy the backing BO.
*/
int i915_switch_context(struct intel_ring_buffer *ring,
struct i915_hw_context *to)
int i915_switch_context(struct intel_engine_cs *ring,
struct intel_context *to)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
......@@ -756,7 +757,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
{
struct drm_i915_gem_context_create *args = data;
struct drm_i915_file_private *file_priv = file->driver_priv;
struct i915_hw_context *ctx;
struct intel_context *ctx;
int ret;
if (!hw_context_enabled(dev))
......@@ -782,7 +783,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
{
struct drm_i915_gem_context_destroy *args = data;
struct drm_i915_file_private *file_priv = file->driver_priv;
struct i915_hw_context *ctx;
struct intel_context *ctx;
int ret;
if (args->ctx_id == DEFAULT_CONTEXT_ID)
......
......@@ -229,6 +229,14 @@ static const struct dma_buf_ops i915_dmabuf_ops = {
struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
struct drm_gem_object *gem_obj, int flags)
{
struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
if (obj->ops->dmabuf_export) {
int ret = obj->ops->dmabuf_export(obj);
if (ret)
return ERR_PTR(ret);
}
return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags);
}
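A hedged sketch of what an implementation of the new dmabuf_export hook could look like, with the signature inferred from the call site above (the policy and the foo_ name are purely illustrative):

    /* hypothetical object-ops hook: returning non-zero vetoes the export */
    static int foo_gem_dmabuf_export(struct drm_i915_gem_object *obj)
    {
        if (obj->madv != I915_MADV_WILLNEED)
            return -EINVAL;   /* illustrative policy, not from this patch */
        return 0;
    }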
......
......@@ -541,7 +541,7 @@ need_reloc_mappable(struct i915_vma *vma)
static int
i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
bool *need_reloc)
{
struct drm_i915_gem_object *obj = vma->obj;
......@@ -596,7 +596,7 @@ i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
}
static int
i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
i915_gem_execbuffer_reserve(struct intel_engine_cs *ring,
struct list_head *vmas,
bool *need_relocs)
{
......@@ -610,6 +610,8 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
if (list_empty(vmas))
return 0;
i915_gem_retire_requests_ring(ring);
vm = list_first_entry(vmas, struct i915_vma, exec_list)->vm;
INIT_LIST_HEAD(&ordered_vmas);
......@@ -711,7 +713,7 @@ static int
i915_gem_execbuffer_relocate_slow(struct drm_device *dev,
struct drm_i915_gem_execbuffer2 *args,
struct drm_file *file,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
struct eb_vmas *eb,
struct drm_i915_gem_exec_object2 *exec)
{
......@@ -827,7 +829,7 @@ i915_gem_execbuffer_relocate_slow(struct drm_device *dev,
}
static int
i915_gem_execbuffer_move_to_gpu(struct intel_ring_buffer *ring,
i915_gem_execbuffer_move_to_gpu(struct intel_engine_cs *ring,
struct list_head *vmas)
{
struct i915_vma *vma;
......@@ -910,11 +912,11 @@ validate_exec_list(struct drm_i915_gem_exec_object2 *exec,
return 0;
}
static struct i915_hw_context *
static struct intel_context *
i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
struct intel_ring_buffer *ring, const u32 ctx_id)
struct intel_engine_cs *ring, const u32 ctx_id)
{
struct i915_hw_context *ctx = NULL;
struct intel_context *ctx = NULL;
struct i915_ctx_hang_stats *hs;
if (ring->id != RCS && ctx_id != DEFAULT_CONTEXT_ID)
......@@ -935,7 +937,7 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
static void
i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct intel_ring_buffer *ring)
struct intel_engine_cs *ring)
{
struct i915_vma *vma;
......@@ -970,7 +972,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
static void
i915_gem_execbuffer_retire_commands(struct drm_device *dev,
struct drm_file *file,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
struct drm_i915_gem_object *obj)
{
/* Unconditionally force add_request to emit a full flush. */
......@@ -982,7 +984,7 @@ i915_gem_execbuffer_retire_commands(struct drm_device *dev,
static int
i915_reset_gen7_sol_offsets(struct drm_device *dev,
struct intel_ring_buffer *ring)
struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int ret, i;
......@@ -1025,12 +1027,12 @@ static int gen8_dispatch_bsd_ring(struct drm_device *dev,
int ring_id;
mutex_lock(&dev->struct_mutex);
if (dev_priv->ring_index == 0) {
if (dev_priv->mm.bsd_ring_dispatch_index == 0) {
ring_id = VCS;
dev_priv->ring_index = 1;
dev_priv->mm.bsd_ring_dispatch_index = 1;
} else {
ring_id = VCS2;
dev_priv->ring_index = 0;
dev_priv->mm.bsd_ring_dispatch_index = 0;
}
file_priv->bsd_ring = &dev_priv->ring[ring_id];
mutex_unlock(&dev->struct_mutex);
......@@ -1048,8 +1050,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
struct eb_vmas *eb;
struct drm_i915_gem_object *batch_obj;
struct drm_clip_rect *cliprects = NULL;
struct intel_ring_buffer *ring;
struct i915_hw_context *ctx;
struct intel_engine_cs *ring;
struct intel_context *ctx;
struct i915_address_space *vm;
const u32 ctx_id = i915_execbuffer2_get_context_id(*args);
u64 exec_start = args->batch_start_offset, exec_len;
......@@ -1168,6 +1170,11 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
goto pre_mutex_err;
}
} else {
if (args->DR4 == 0xffffffff) {
DRM_DEBUG("UXA submitting garbage DR4, fixing up\n");
args->DR4 = 0;
}
if (args->DR1 || args->DR4 || args->cliprects_ptr) {
DRM_DEBUG("0 cliprects but dirt in cliprects fields\n");
return -EINVAL;
......
......@@ -30,7 +30,8 @@
#include "i915_trace.h"
#include "intel_drv.h"
static void gen8_setup_private_ppat(struct drm_i915_private *dev_priv);
static void bdw_setup_private_ppat(struct drm_i915_private *dev_priv);
static void chv_setup_private_ppat(struct drm_i915_private *dev_priv);
bool intel_enable_ppgtt(struct drm_device *dev, bool full)
{
......@@ -196,7 +197,7 @@ static gen6_gtt_pte_t iris_pte_encode(dma_addr_t addr,
}
/* Broadwell Page Directory Pointer Descriptors */
static int gen8_write_pdp(struct intel_ring_buffer *ring, unsigned entry,
static int gen8_write_pdp(struct intel_engine_cs *ring, unsigned entry,
uint64_t val, bool synchronous)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
......@@ -226,7 +227,7 @@ static int gen8_write_pdp(struct intel_ring_buffer *ring, unsigned entry,
}
static int gen8_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
bool synchronous)
{
int i, ret;
......@@ -275,6 +276,8 @@ static void gen8_ppgtt_clear_range(struct i915_address_space *vm,
num_entries--;
}
if (!HAS_LLC(ppgtt->base.dev))
drm_clflush_virt_range(pt_vaddr, PAGE_SIZE);
kunmap_atomic(pt_vaddr);
pte = 0;
......@@ -311,6 +314,8 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm,
gen8_pte_encode(sg_page_iter_dma_address(&sg_iter),
cache_level, true);
if (++pte == GEN8_PTES_PER_PAGE) {
if (!HAS_LLC(ppgtt->base.dev))
drm_clflush_virt_range(pt_vaddr, PAGE_SIZE);
kunmap_atomic(pt_vaddr);
pt_vaddr = NULL;
if (++pde == GEN8_PDES_PER_PAGE) {
......@@ -320,8 +325,11 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm,
pte = 0;
}
}
if (pt_vaddr)
if (pt_vaddr) {
if (!HAS_LLC(ppgtt->base.dev))
drm_clflush_virt_range(pt_vaddr, PAGE_SIZE);
kunmap_atomic(pt_vaddr);
}
}
static void gen8_free_page_tables(struct page **pt_pages)
......@@ -584,6 +592,8 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt, uint64_t size)
pd_vaddr[j] = gen8_pde_encode(ppgtt->base.dev, addr,
I915_CACHE_LLC);
}
if (!HAS_LLC(ppgtt->base.dev))
drm_clflush_virt_range(pd_vaddr, PAGE_SIZE);
kunmap_atomic(pd_vaddr);
}
......@@ -696,7 +706,7 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
}
static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
bool synchronous)
{
struct drm_device *dev = ppgtt->base.dev;
......@@ -740,7 +750,7 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
}
static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
bool synchronous)
{
struct drm_device *dev = ppgtt->base.dev;
......@@ -791,7 +801,7 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
}
static int gen6_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
bool synchronous)
{
struct drm_device *dev = ppgtt->base.dev;
......@@ -812,7 +822,7 @@ static int gen8_ppgtt_enable(struct i915_hw_ppgtt *ppgtt)
{
struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ring_buffer *ring;
struct intel_engine_cs *ring;
int j, ret;
for_each_ring(ring, dev_priv, j) {
......@@ -842,7 +852,7 @@ static int gen7_ppgtt_enable(struct i915_hw_ppgtt *ppgtt)
{
struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ring_buffer *ring;
struct intel_engine_cs *ring;
uint32_t ecochk, ecobits;
int i;
......@@ -881,7 +891,7 @@ static int gen6_ppgtt_enable(struct i915_hw_ppgtt *ppgtt)
{
struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ring_buffer *ring;
struct intel_engine_cs *ring;
uint32_t ecochk, gab_ctl, ecobits;
int i;
......@@ -1025,8 +1035,7 @@ static int gen6_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt)
&ppgtt->node, GEN6_PD_SIZE,
GEN6_PD_ALIGN, 0,
0, dev_priv->gtt.base.total,
DRM_MM_SEARCH_DEFAULT,
DRM_MM_CREATE_DEFAULT);
DRM_MM_TOPDOWN);
if (ret == -ENOSPC && !retried) {
ret = i915_gem_evict_something(dev, &dev_priv->gtt.base,
GEN6_PD_SIZE, GEN6_PD_ALIGN,
......@@ -1250,7 +1259,7 @@ static void undo_idling(struct drm_i915_private *dev_priv, bool interruptible)
void i915_check_and_clear_faults(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ring_buffer *ring;
struct intel_engine_cs *ring;
int i;
if (INTEL_INFO(dev)->gen < 6)
......@@ -1325,7 +1334,11 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
if (INTEL_INFO(dev)->gen >= 8) {
gen8_setup_private_ppat(dev_priv);
if (IS_CHERRYVIEW(dev))
chv_setup_private_ppat(dev_priv);
else
bdw_setup_private_ppat(dev_priv);
return;
}
......@@ -1753,6 +1766,17 @@ static inline unsigned int gen8_get_total_gtt_size(u16 bdw_gmch_ctl)
return bdw_gmch_ctl << 20;
}
static inline unsigned int chv_get_total_gtt_size(u16 gmch_ctrl)
{
gmch_ctrl >>= SNB_GMCH_GGMS_SHIFT;
gmch_ctrl &= SNB_GMCH_GGMS_MASK;
if (gmch_ctrl)
return 1 << (20 + gmch_ctrl);
return 0;
}
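Worked through with the gen8 figures used later in gen8_gmch_probe() (8-byte PTEs, 4KiB pages), a GGMS field of 2 decodes as:

    gtt_size  = 1 << (20 + 2)        = 4 MiB of GTT
    num_ptes  = 4 MiB / 8 bytes      = 524288 entries
    gtt_total = 524288 << PAGE_SHIFT = 2 GiB of GGTT address space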
static inline size_t gen6_get_stolen_size(u16 snb_gmch_ctl)
{
snb_gmch_ctl >>= SNB_GMCH_GMS_SHIFT;
......@@ -1767,6 +1791,24 @@ static inline size_t gen8_get_stolen_size(u16 bdw_gmch_ctl)
return bdw_gmch_ctl << 25; /* 32 MB units */
}
static size_t chv_get_stolen_size(u16 gmch_ctrl)
{
gmch_ctrl >>= SNB_GMCH_GMS_SHIFT;
gmch_ctrl &= SNB_GMCH_GMS_MASK;
/*
* 0x0 to 0x10: 32MB increments starting at 0MB
* 0x11 to 0x16: 4MB increments starting at 8MB
* 0x17 to 0x1d: 4MB increments starting at 36MB
*/
if (gmch_ctrl < 0x11)
return gmch_ctrl << 25;
else if (gmch_ctrl < 0x17)
return (gmch_ctrl - 0x11 + 2) << 22;
else
return (gmch_ctrl - 0x17 + 9) << 22;
}
static int ggtt_probe_common(struct drm_device *dev,
size_t gtt_size)
{
......@@ -1797,7 +1839,7 @@ static int ggtt_probe_common(struct drm_device *dev,
/* The GGTT and PPGTT need a private PPAT setup in order to handle cacheability
* bits. When using advanced contexts each context stores its own PAT, but
* writing this data shouldn't be harmful even in those cases. */
static void gen8_setup_private_ppat(struct drm_i915_private *dev_priv)
static void bdw_setup_private_ppat(struct drm_i915_private *dev_priv)
{
uint64_t pat;
......@@ -1816,6 +1858,33 @@ static void gen8_setup_private_ppat(struct drm_i915_private *dev_priv)
I915_WRITE(GEN8_PRIVATE_PAT + 4, pat >> 32);
}
static void chv_setup_private_ppat(struct drm_i915_private *dev_priv)
{
uint64_t pat;
/*
* Map WB on BDW to snooped on CHV.
*
* Only the snoop bit has meaning for CHV, the rest is
* ignored.
*
* Note that the hardware enforces snooping for all page
* table accesses. The snoop bit is actually ignored for
* PDEs.
*/
pat = GEN8_PPAT(0, CHV_PPAT_SNOOP) |
GEN8_PPAT(1, 0) |
GEN8_PPAT(2, 0) |
GEN8_PPAT(3, 0) |
GEN8_PPAT(4, CHV_PPAT_SNOOP) |
GEN8_PPAT(5, CHV_PPAT_SNOOP) |
GEN8_PPAT(6, CHV_PPAT_SNOOP) |
GEN8_PPAT(7, CHV_PPAT_SNOOP);
I915_WRITE(GEN8_PRIVATE_PAT, pat);
I915_WRITE(GEN8_PRIVATE_PAT + 4, pat >> 32);
}
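For illustration, assuming GEN8_PPAT(i, x) places x in byte i of the 64-bit PAT value (consistent with its use here), the CHV setting above works out to:

    pat = 0x4040404000000040ULL;   /* CHV_PPAT_SNOOP (0x40) at indices 0, 4-7 */

    I915_WRITE(GEN8_PRIVATE_PAT,     0x00000040);   /* low dword  */
    I915_WRITE(GEN8_PRIVATE_PAT + 4, 0x40404040);   /* high dword */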
static int gen8_gmch_probe(struct drm_device *dev,
size_t *gtt_total,
size_t *stolen,
......@@ -1836,12 +1905,20 @@ static int gen8_gmch_probe(struct drm_device *dev,
pci_read_config_word(dev->pdev, SNB_GMCH_CTRL, &snb_gmch_ctl);
*stolen = gen8_get_stolen_size(snb_gmch_ctl);
if (IS_CHERRYVIEW(dev)) {
*stolen = chv_get_stolen_size(snb_gmch_ctl);
gtt_size = chv_get_total_gtt_size(snb_gmch_ctl);
} else {
*stolen = gen8_get_stolen_size(snb_gmch_ctl);
gtt_size = gen8_get_total_gtt_size(snb_gmch_ctl);
}
gtt_size = gen8_get_total_gtt_size(snb_gmch_ctl);
*gtt_total = (gtt_size / sizeof(gen8_gtt_pte_t)) << PAGE_SHIFT;
gen8_setup_private_ppat(dev_priv);
if (IS_CHERRYVIEW(dev))
chv_setup_private_ppat(dev_priv);
else
bdw_setup_private_ppat(dev_priv);
ret = ggtt_probe_common(dev, gtt_size);
......
......@@ -95,6 +95,7 @@ typedef gen8_gtt_pte_t gen8_ppgtt_pde_t;
#define PPAT_CACHED_INDEX _PAGE_PAT /* WB LLCeLLC */
#define PPAT_DISPLAY_ELLC_INDEX _PAGE_PCD /* WT eLLC */
#define CHV_PPAT_SNOOP (1<<6)
#define GEN8_PPAT_AGE(x) (x<<4)
#define GEN8_PPAT_LLCeLLC (3<<2)
#define GEN8_PPAT_LLCELLC (2<<2)
......@@ -256,11 +257,11 @@ struct i915_hw_ppgtt {
dma_addr_t *gen8_pt_dma_addr[4];
};
struct i915_hw_context *ctx;
struct intel_context *ctx;
int (*enable)(struct i915_hw_ppgtt *ppgtt);
int (*switch_mm)(struct i915_hw_ppgtt *ppgtt,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
bool synchronous);
void (*debug_dump)(struct i915_hw_ppgtt *ppgtt, struct seq_file *m);
};
......
/*
* Copyright © 2014 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
* Authors:
* Mika Kuoppala <mika.kuoppala@intel.com>
*
*/
#include "i915_drv.h"
#include "intel_renderstate.h"
struct i915_render_state {
struct drm_i915_gem_object *obj;
unsigned long ggtt_offset;
void *batch;
u32 size;
u32 len;
};
static struct i915_render_state *render_state_alloc(struct drm_device *dev)
{
struct i915_render_state *so;
struct page *page;
int ret;
so = kzalloc(sizeof(*so), GFP_KERNEL);
if (!so)
return ERR_PTR(-ENOMEM);
so->obj = i915_gem_alloc_object(dev, 4096);
if (so->obj == NULL) {
ret = -ENOMEM;
goto free;
}
so->size = 4096;
ret = i915_gem_obj_ggtt_pin(so->obj, 4096, 0);
if (ret)
goto free_gem;
BUG_ON(so->obj->pages->nents != 1);
page = sg_page(so->obj->pages->sgl);
so->batch = kmap(page);
if (!so->batch) {
ret = -ENOMEM;
goto unpin;
}
so->ggtt_offset = i915_gem_obj_ggtt_offset(so->obj);
return so;
unpin:
i915_gem_object_ggtt_unpin(so->obj);
free_gem:
drm_gem_object_unreference(&so->obj->base);
free:
kfree(so);
return ERR_PTR(ret);
}
static void render_state_free(struct i915_render_state *so)
{
kunmap(so->batch);
i915_gem_object_ggtt_unpin(so->obj);
drm_gem_object_unreference(&so->obj->base);
kfree(so);
}
static const struct intel_renderstate_rodata *
render_state_get_rodata(struct drm_device *dev, const int gen)
{
switch (gen) {
case 6:
return &gen6_null_state;
case 7:
return &gen7_null_state;
case 8:
return &gen8_null_state;
}
return NULL;
}
static int render_state_setup(const int gen,
const struct intel_renderstate_rodata *rodata,
struct i915_render_state *so)
{
const u64 goffset = i915_gem_obj_ggtt_offset(so->obj);
u32 reloc_index = 0;
u32 * const d = so->batch;
unsigned int i = 0;
int ret;
if (!rodata || rodata->batch_items * 4 > so->size)
return -EINVAL;
ret = i915_gem_object_set_to_cpu_domain(so->obj, true);
if (ret)
return ret;
while (i < rodata->batch_items) {
u32 s = rodata->batch[i];
if (reloc_index < rodata->reloc_items &&
i * 4 == rodata->reloc[reloc_index]) {
s += goffset & 0xffffffff;
/* We keep batch offsets max 32bit */
if (gen >= 8) {
if (i + 1 >= rodata->batch_items ||
rodata->batch[i + 1] != 0)
return -EINVAL;
d[i] = s;
i++;
s = (goffset & 0xffffffff00000000ull) >> 32;
}
reloc_index++;
}
d[i] = s;
i++;
}
ret = i915_gem_object_set_to_gtt_domain(so->obj, false);
if (ret)
return ret;
if (rodata->reloc_items != reloc_index) {
DRM_ERROR("not all relocs resolved, %d out of %d\n",
reloc_index, rodata->reloc_items);
return -EINVAL;
}
so->len = rodata->batch_items * 4;
return 0;
}
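To illustrate the gen8 branch above with a hypothetical relocation: for a batch dword rodata->batch[i] == 0 at a reloc offset, followed by the mandatory zero dword, and goffset == 0x123456000, the loop emits:

    d[i]     = 0x23456000;   /* batch[i] + low 32 bits of goffset  */
    d[i + 1] = 0x00000001;   /* high 32 bits, overwriting the zero */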
int i915_gem_render_state_init(struct intel_engine_cs *ring)
{
const int gen = INTEL_INFO(ring->dev)->gen;
struct i915_render_state *so;
const struct intel_renderstate_rodata *rodata;
int ret;
if (WARN_ON(ring->id != RCS))
return -ENOENT;
rodata = render_state_get_rodata(ring->dev, gen);
if (rodata == NULL)
return 0;
so = render_state_alloc(ring->dev);
if (IS_ERR(so))
return PTR_ERR(so);
ret = render_state_setup(gen, rodata, so);
if (ret)
goto out;
ret = ring->dispatch_execbuffer(ring,
i915_gem_obj_ggtt_offset(so->obj),
so->len,
I915_DISPATCH_SECURE);
if (ret)
goto out;
i915_vma_move_to_active(i915_gem_obj_to_ggtt(so->obj), ring);
ret = __i915_add_request(ring, NULL, so->obj, NULL);
/* __i915_add_request moves object to inactive if it fails */
out:
render_state_free(so);
return ret;
}
......@@ -205,6 +205,7 @@ static void print_error_buffers(struct drm_i915_error_state_buf *m,
err_puts(m, tiling_flag(err->tiling));
err_puts(m, dirty_flag(err->dirty));
err_puts(m, purgeable_flag(err->purgeable));
err_puts(m, err->userptr ? " userptr" : "");
err_puts(m, err->ring != -1 ? " " : "");
err_puts(m, ring_str(err->ring));
err_puts(m, i915_cache_level_str(err->cache_level));
......@@ -641,6 +642,7 @@ static void capture_bo(struct drm_i915_error_buffer *err,
err->tiling = obj->tiling_mode;
err->dirty = obj->dirty;
err->purgeable = obj->madv != I915_MADV_WILLNEED;
err->userptr = obj->userptr.mm != NULL;
err->ring = obj->ring ? obj->ring->id : -1;
err->cache_level = obj->cache_level;
}
......@@ -745,7 +747,7 @@ static void i915_gem_record_fences(struct drm_device *dev,
}
static void i915_record_ring_state(struct drm_device *dev,
struct intel_ring_buffer *ring,
struct intel_engine_cs *ring,
struct drm_i915_error_ring *ering)
{
struct drm_i915_private *dev_priv = dev->dev_private;
......@@ -823,8 +825,8 @@ static void i915_record_ring_state(struct drm_device *dev,
ering->hws = I915_READ(mmio);
}
ering->cpu_ring_head = ring->head;
ering->cpu_ring_tail = ring->tail;
ering->cpu_ring_head = ring->buffer->head;
ering->cpu_ring_tail = ring->buffer->tail;
ering->hangcheck_score = ring->hangcheck.score;
ering->hangcheck_action = ring->hangcheck.action;
......@@ -857,7 +859,7 @@ static void i915_record_ring_state(struct drm_device *dev,
}
static void i915_gem_record_active_context(struct intel_ring_buffer *ring,
static void i915_gem_record_active_context(struct intel_engine_cs *ring,
struct drm_i915_error_state *error,
struct drm_i915_error_ring *ering)
{
......@@ -884,7 +886,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
int i, count;
for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_ring_buffer *ring = &dev_priv->ring[i];
struct intel_engine_cs *ring = &dev_priv->ring[i];
if (ring->dev == NULL)
continue;
......@@ -928,7 +930,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
}
error->ring[i].ringbuffer =
i915_error_ggtt_object_create(dev_priv, ring->obj);
i915_error_ggtt_object_create(dev_priv, ring->buffer->obj);
if (ring->status_page.obj)
error->ring[i].hws_page =
......
......@@ -328,8 +328,6 @@ int i915_save_state(struct drm_device *dev)
}
}
intel_disable_gt_powersave(dev);
/* Cache mode state */
if (INTEL_INFO(dev)->gen < 7)
dev_priv->regfile.saveCACHE_MODE_0 = I915_READ(CACHE_MODE_0);
......
......@@ -186,7 +186,7 @@ i915_l3_write(struct file *filp, struct kobject *kobj,
struct drm_minor *dminor = dev_to_drm_minor(dev);
struct drm_device *drm_dev = dminor->dev;
struct drm_i915_private *dev_priv = drm_dev->dev_private;
struct i915_hw_context *ctx;
struct intel_context *ctx;
u32 *temp = NULL; /* Just here to make handling failures easy */
int slice = (int)(uintptr_t)attr->private;
int ret;
......
......@@ -326,8 +326,8 @@ TRACE_EVENT(i915_gem_evict_vm,
);
TRACE_EVENT(i915_gem_ring_sync_to,
TP_PROTO(struct intel_ring_buffer *from,
struct intel_ring_buffer *to,
TP_PROTO(struct intel_engine_cs *from,
struct intel_engine_cs *to,
u32 seqno),
TP_ARGS(from, to, seqno),
......@@ -352,7 +352,7 @@ TRACE_EVENT(i915_gem_ring_sync_to,
);
TRACE_EVENT(i915_gem_ring_dispatch,
TP_PROTO(struct intel_ring_buffer *ring, u32 seqno, u32 flags),
TP_PROTO(struct intel_engine_cs *ring, u32 seqno, u32 flags),
TP_ARGS(ring, seqno, flags),
TP_STRUCT__entry(
......@@ -375,7 +375,7 @@ TRACE_EVENT(i915_gem_ring_dispatch,
);
TRACE_EVENT(i915_gem_ring_flush,
TP_PROTO(struct intel_ring_buffer *ring, u32 invalidate, u32 flush),
TP_PROTO(struct intel_engine_cs *ring, u32 invalidate, u32 flush),
TP_ARGS(ring, invalidate, flush),
TP_STRUCT__entry(
......@@ -398,7 +398,7 @@ TRACE_EVENT(i915_gem_ring_flush,
);
DECLARE_EVENT_CLASS(i915_gem_request,
TP_PROTO(struct intel_ring_buffer *ring, u32 seqno),
TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
TP_ARGS(ring, seqno),
TP_STRUCT__entry(
......@@ -418,12 +418,12 @@ DECLARE_EVENT_CLASS(i915_gem_request,
);
DEFINE_EVENT(i915_gem_request, i915_gem_request_add,
TP_PROTO(struct intel_ring_buffer *ring, u32 seqno),
TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
TP_ARGS(ring, seqno)
);
TRACE_EVENT(i915_gem_request_complete,
TP_PROTO(struct intel_ring_buffer *ring),
TP_PROTO(struct intel_engine_cs *ring),
TP_ARGS(ring),
TP_STRUCT__entry(
......@@ -443,12 +443,12 @@ TRACE_EVENT(i915_gem_request_complete,
);
DEFINE_EVENT(i915_gem_request, i915_gem_request_retire,
TP_PROTO(struct intel_ring_buffer *ring, u32 seqno),
TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
TP_ARGS(ring, seqno)
);
TRACE_EVENT(i915_gem_request_wait_begin,
TP_PROTO(struct intel_ring_buffer *ring, u32 seqno),
TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
TP_ARGS(ring, seqno),
TP_STRUCT__entry(
......@@ -477,12 +477,12 @@ TRACE_EVENT(i915_gem_request_wait_begin,
);
DEFINE_EVENT(i915_gem_request, i915_gem_request_wait_end,
TP_PROTO(struct intel_ring_buffer *ring, u32 seqno),
TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
TP_ARGS(ring, seqno)
);
DECLARE_EVENT_CLASS(i915_ring,
TP_PROTO(struct intel_ring_buffer *ring),
TP_PROTO(struct intel_engine_cs *ring),
TP_ARGS(ring),
TP_STRUCT__entry(
......@@ -499,12 +499,12 @@ DECLARE_EVENT_CLASS(i915_ring,
);
DEFINE_EVENT(i915_ring, i915_ring_wait_begin,
TP_PROTO(struct intel_ring_buffer *ring),
TP_PROTO(struct intel_engine_cs *ring),
TP_ARGS(ring)
);
DEFINE_EVENT(i915_ring, i915_ring_wait_end,
TP_PROTO(struct intel_ring_buffer *ring),
TP_PROTO(struct intel_engine_cs *ring),
TP_ARGS(ring)
);
......
......@@ -364,55 +364,6 @@ void hsw_fdi_link_train(struct drm_crtc *crtc)
DRM_ERROR("FDI link training failed!\n");
}
static void intel_ddi_mode_set(struct intel_encoder *encoder)
{
struct intel_crtc *crtc = to_intel_crtc(encoder->base.crtc);
int port = intel_ddi_get_encoder_port(encoder);
int pipe = crtc->pipe;
int type = encoder->type;
struct drm_display_mode *adjusted_mode = &crtc->config.adjusted_mode;
DRM_DEBUG_KMS("Preparing DDI mode on port %c, pipe %c\n",
port_name(port), pipe_name(pipe));
crtc->eld_vld = false;
if (type == INTEL_OUTPUT_DISPLAYPORT || type == INTEL_OUTPUT_EDP) {
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_digital_port *intel_dig_port =
enc_to_dig_port(&encoder->base);
intel_dp->DP = intel_dig_port->saved_port_bits |
DDI_BUF_CTL_ENABLE | DDI_BUF_EMP_400MV_0DB_HSW;
intel_dp->DP |= DDI_PORT_WIDTH(intel_dp->lane_count);
if (intel_dp->has_audio) {
DRM_DEBUG_DRIVER("DP audio on pipe %c on DDI\n",
pipe_name(crtc->pipe));
/* write eld */
DRM_DEBUG_DRIVER("DP audio: write eld information\n");
intel_write_eld(&encoder->base, adjusted_mode);
}
} else if (type == INTEL_OUTPUT_HDMI) {
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
if (intel_hdmi->has_audio) {
/* Proper support for digital audio needs a new logic
* and a new set of registers, so we leave it for future
* patch bombing.
*/
DRM_DEBUG_DRIVER("HDMI audio on pipe %c on DDI\n",
pipe_name(crtc->pipe));
/* write eld */
DRM_DEBUG_DRIVER("HDMI audio: write eld information\n");
intel_write_eld(&encoder->base, adjusted_mode);
}
intel_hdmi->set_infoframes(&encoder->base, adjusted_mode);
}
}
static struct intel_encoder *
intel_ddi_get_crtc_encoder(struct drm_crtc *crtc)
{
@@ -1062,9 +1013,7 @@ void intel_ddi_enable_transcoder_func(struct drm_crtc *crtc)
 	}
 
 	if (type == INTEL_OUTPUT_HDMI) {
-		struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
-
-		if (intel_hdmi->has_hdmi_sink)
+		if (intel_crtc->config.has_hdmi_sink)
 			temp |= TRANS_DDI_MODE_SELECT_HDMI;
 		else
 			temp |= TRANS_DDI_MODE_SELECT_DVI;
@@ -1293,28 +1242,48 @@ void intel_ddi_disable_pipe_clock(struct intel_crtc *intel_crtc)
 static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder)
 {
 	struct drm_encoder *encoder = &intel_encoder->base;
-	struct drm_crtc *crtc = encoder->crtc;
 	struct drm_i915_private *dev_priv = encoder->dev->dev_private;
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
 	enum port port = intel_ddi_get_encoder_port(intel_encoder);
 	int type = intel_encoder->type;
 
+	if (crtc->config.has_audio) {
+		DRM_DEBUG_DRIVER("Audio on pipe %c on DDI\n",
+				 pipe_name(crtc->pipe));
+
+		/* write eld */
+		DRM_DEBUG_DRIVER("DDI audio: write eld information\n");
+		intel_write_eld(encoder, &crtc->config.adjusted_mode);
+	}
+
 	if (type == INTEL_OUTPUT_EDP) {
 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
 		intel_edp_panel_on(intel_dp);
 	}
 
-	WARN_ON(intel_crtc->ddi_pll_sel == PORT_CLK_SEL_NONE);
-	I915_WRITE(PORT_CLK_SEL(port), intel_crtc->ddi_pll_sel);
+	WARN_ON(crtc->ddi_pll_sel == PORT_CLK_SEL_NONE);
+	I915_WRITE(PORT_CLK_SEL(port), crtc->ddi_pll_sel);
 
 	if (type == INTEL_OUTPUT_DISPLAYPORT || type == INTEL_OUTPUT_EDP) {
 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+		struct intel_digital_port *intel_dig_port =
+			enc_to_dig_port(encoder);
+
+		intel_dp->DP = intel_dig_port->saved_port_bits |
+			       DDI_BUF_CTL_ENABLE | DDI_BUF_EMP_400MV_0DB_HSW;
+		intel_dp->DP |= DDI_PORT_WIDTH(intel_dp->lane_count);
 
 		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
 		intel_dp_start_link_train(intel_dp);
 		intel_dp_complete_link_train(intel_dp);
 		if (port != PORT_A)
 			intel_dp_stop_link_train(intel_dp);
+	} else if (type == INTEL_OUTPUT_HDMI) {
+		struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
+
+		intel_hdmi->set_infoframes(encoder,
+					   crtc->config.has_hdmi_sink,
+					   &crtc->config.adjusted_mode);
 	}
 }
@@ -1385,7 +1354,8 @@ static void intel_enable_ddi(struct intel_encoder *intel_encoder)
 		intel_edp_psr_enable(intel_dp);
 	}
 
-	if (intel_crtc->eld_vld && type != INTEL_OUTPUT_EDP) {
+	if (intel_crtc->config.has_audio) {
+		intel_display_power_get(dev_priv, POWER_DOMAIN_AUDIO);
 		tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
 		tmp |= ((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) << (pipe * 4));
 		I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
@@ -1403,11 +1373,14 @@ static void intel_disable_ddi(struct intel_encoder *intel_encoder)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	uint32_t tmp;
 
-	if (intel_crtc->eld_vld && type != INTEL_OUTPUT_EDP) {
+	/* We can't touch HSW_AUD_PIN_ELD_CP_VLD unconditionally because this
+	 * register is part of the power well on Haswell. */
+	if (intel_crtc->config.has_audio) {
 		tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
 		tmp &= ~((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) <<
 			 (pipe * 4));
 		I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
+		intel_display_power_put(dev_priv, POWER_DOMAIN_AUDIO);
 	}
 
 	if (type == INTEL_OUTPUT_EDP) {
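The enable/disable pair above packs two audio bits per pipe into HSW_AUD_PIN_ELD_CP_VLD, four bits apart, which is where the `(pipe * 4)` shift comes from. The small C sketch below works that arithmetic through; the two bit positions are illustrative assumptions for the example (the real definitions live in i915_reg.h), only the shift pattern is taken from the diff.

/* Editorial sketch: per-pipe packing of the audio enable/ELD-valid bits,
 * mirroring the enable/disable paths shown above. Bit positions assumed. */
#include <stdint.h>
#include <stdio.h>

#define AUDIO_ELD_VALID_A      (1u << 0)  /* assumption, for illustration */
#define AUDIO_OUTPUT_ENABLE_A  (1u << 1)  /* assumption, for illustration */

static uint32_t audio_enable(uint32_t reg, int pipe)
{
        return reg | ((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) << (pipe * 4));
}

static uint32_t audio_disable(uint32_t reg, int pipe)
{
        return reg & ~((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) << (pipe * 4));
}

int main(void)
{
        uint32_t reg = 0;
        reg = audio_enable(reg, 1);            /* pipe B: sets bits 4 and 5 */
        printf("after enable:  %#x\n", reg);   /* 0x30 */
        reg = audio_disable(reg, 1);
        printf("after disable: %#x\n", reg);   /* 0x0 */
        return 0;
}

The same shift explains the readout added to intel_ddi_get_config() further down, which tests AUDIO_OUTPUT_ENABLE_A shifted by the pipe, but only after checking that the audio power domain is up, since the register sits in the Haswell power well.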
@@ -1580,6 +1553,7 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
 	switch (temp & TRANS_DDI_MODE_SELECT_MASK) {
 	case TRANS_DDI_MODE_SELECT_HDMI:
+		pipe_config->has_hdmi_sink = true;
 	case TRANS_DDI_MODE_SELECT_DVI:
 	case TRANS_DDI_MODE_SELECT_FDI:
 		break;
@@ -1592,6 +1566,12 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
 		break;
 	}
 
+	if (intel_display_power_enabled(dev_priv, POWER_DOMAIN_AUDIO)) {
+		temp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
+		if (temp & (AUDIO_OUTPUT_ENABLE_A << (intel_crtc->pipe * 4)))
+			pipe_config->has_audio = true;
+	}
+
 	if (encoder->type == INTEL_OUTPUT_EDP && dev_priv->vbt.edp_bpp &&
 	    pipe_config->pipe_bpp > dev_priv->vbt.edp_bpp) {
 		/*
@@ -1708,7 +1688,6 @@ void intel_ddi_init(struct drm_device *dev, enum port port)
 			 DRM_MODE_ENCODER_TMDS);
 
 	intel_encoder->compute_config = intel_ddi_compute_config;
-	intel_encoder->mode_set = intel_ddi_mode_set;
 	intel_encoder->enable = intel_enable_ddi;
 	intel_encoder->pre_enable = intel_ddi_pre_enable;
 	intel_encoder->disable = intel_disable_ddi;
@@ -106,8 +106,8 @@
 #define INTEL_DVO_CHIP_TMDS 2
 #define INTEL_DVO_CHIP_TVOUT 4
 
-#define INTEL_DSI_COMMAND_MODE	0
-#define INTEL_DSI_VIDEO_MODE	1
+#define INTEL_DSI_VIDEO_MODE	0
+#define INTEL_DSI_COMMAND_MODE	1
 
 struct intel_framebuffer {
 	struct drm_framebuffer base;
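A plausible reading of this swap, though the commit text is not shown here, is that video mode becomes the zero value, so a zero-initialized DSI encoder defaults to the common case and mode checks stay trivial. The sketch below illustrates that property with helpers modelled loosely on the is_vid_mode()/is_cmd_mode() style of the DSI code; treat the struct and helper names as assumptions made for this example.

/* Editorial sketch: with video mode == 0, a zeroed struct defaults to
 * video mode and the predicates below need no special-casing. */
#include <stdbool.h>
#include <stdint.h>

#define INTEL_DSI_VIDEO_MODE   0
#define INTEL_DSI_COMMAND_MODE 1

struct dsi_sketch {
        uint16_t operation_mode; /* video mode or command mode */
};

static bool is_vid_mode(const struct dsi_sketch *dsi)
{
        return dsi->operation_mode == INTEL_DSI_VIDEO_MODE;
}

static bool is_cmd_mode(const struct dsi_sketch *dsi)
{
        return dsi->operation_mode == INTEL_DSI_COMMAND_MODE;
}

int main(void)
{
        struct dsi_sketch dsi = { 0 }; /* zero default == video mode */
        return is_vid_mode(&dsi) && !is_cmd_mode(&dsi) ? 0 : 1;
}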
@@ -273,6 +273,13 @@ struct intel_crtc_config {
 	 * accordingly. */
 	bool has_dp_encoder;
 
+	/* Whether we should send NULL infoframes. Required for audio. */
+	bool has_hdmi_sink;
+
+	/* Audio enabled on this pipe. Only valid if either has_hdmi_sink or
+	 * has_dp_encoder is set. */
+	bool has_audio;
+
 	/*
 	 * Enable dithering, used when the selected pipe bpp doesn't match the
 	 * plane bpp.
@@ -363,7 +370,6 @@ struct intel_crtc {
 	 */
 	bool active;
 	unsigned long enabled_power_domains;
-	bool eld_vld;
 	bool primary_enabled; /* is the primary plane (partially) visible? */
 	bool lowfreq_avail;
 	struct intel_overlay *overlay;
@@ -403,6 +409,8 @@ struct intel_crtc {
 	} wm;
 
 	wait_queue_head_t vbl_wait;
+
+	int scanline_offset;
 };
 
 struct intel_plane_wm_parameters {
@@ -486,6 +494,7 @@ struct intel_hdmi {
 			       enum hdmi_infoframe_type type,
 			       const void *frame, ssize_t len);
 	void (*set_infoframes)(struct drm_encoder *encoder,
+			       bool enable,
 			       struct drm_display_mode *adjusted_mode);
 };
@@ -561,6 +570,7 @@ vlv_dport_to_channel(struct intel_digital_port *dport)
 {
 	switch (dport->port) {
 	case PORT_B:
+	case PORT_D:
 		return DPIO_CH0;
 	case PORT_C:
 		return DPIO_CH1;
@@ -569,6 +579,20 @@ vlv_dport_to_channel(struct intel_digital_port *dport)
 	}
 }
 
+static inline int
+vlv_pipe_to_channel(enum pipe pipe)
+{
+	switch (pipe) {
+	case PIPE_A:
+	case PIPE_C:
+		return DPIO_CH0;
+	case PIPE_B:
+		return DPIO_CH1;
+	default:
+		BUG();
+	}
+}
+
 static inline struct drm_crtc *
 intel_get_crtc_for_pipe(struct drm_device *dev, int pipe)
 {
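Together with the PORT_D case added to vlv_dport_to_channel() above, this new helper encodes the CHV layout where pipes A and C sit on DPIO channel 0 of their respective PHYs while pipe B uses channel 1. A minimal user-space check of that mapping, with stand-in enums rather than the kernel's definitions:

/* Editorial sketch: exercising the pipe -> DPIO channel mapping. */
#include <assert.h>

enum pipe { PIPE_A, PIPE_B, PIPE_C };
enum { DPIO_CH0, DPIO_CH1 };

static int pipe_to_channel(enum pipe pipe)
{
        switch (pipe) {
        case PIPE_A:
        case PIPE_C:
                return DPIO_CH0;
        case PIPE_B:
                return DPIO_CH1;
        }
        return -1; /* unreachable for valid pipes; kernel BUG()s here */
}

int main(void)
{
        assert(pipe_to_channel(PIPE_A) == DPIO_CH0);
        assert(pipe_to_channel(PIPE_B) == DPIO_CH1);
        assert(pipe_to_channel(PIPE_C) == DPIO_CH0);
        return 0;
}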
@@ -593,6 +617,8 @@ struct intel_unpin_work {
 #define INTEL_FLIP_INACTIVE	0
 #define INTEL_FLIP_PENDING	1
 #define INTEL_FLIP_COMPLETE	2
+	u32 flip_count;
+	u32 gtt_offset;
 	bool enable_stall_check;
 };
@@ -644,8 +670,6 @@ hdmi_to_dig_port(struct intel_hdmi *intel_hdmi)
 /* i915_irq.c */
 bool intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,
 					   enum pipe pipe, bool enable);
-bool __intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,
-					     enum pipe pipe, bool enable);
 bool intel_set_pch_fifo_underrun_reporting(struct drm_device *dev,
 					   enum transcoder pch_transcoder,
 					   bool enable);
@@ -653,9 +677,12 @@ void ilk_enable_gt_irq(struct drm_i915_private *dev_priv, uint32_t mask);
 void ilk_disable_gt_irq(struct drm_i915_private *dev_priv, uint32_t mask);
 void snb_enable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask);
 void snb_disable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask);
+void bdw_enable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask);
+void bdw_disable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask);
 void intel_runtime_pm_disable_interrupts(struct drm_device *dev);
 void intel_runtime_pm_restore_interrupts(struct drm_device *dev);
+int intel_get_crtc_scanline(struct intel_crtc *crtc);
 void i9xx_check_fifo_underruns(struct drm_device *dev);
 
 /* intel_crt.c */
@@ -694,7 +721,7 @@ int intel_pch_rawclk(struct drm_device *dev);
 int valleyview_cur_cdclk(struct drm_i915_private *dev_priv);
 void intel_mark_busy(struct drm_device *dev);
 void intel_mark_fb_busy(struct drm_i915_gem_object *obj,
-			struct intel_ring_buffer *ring);
+			struct intel_engine_cs *ring);
 void intel_mark_idle(struct drm_device *dev);
 void intel_crtc_restore_mode(struct drm_crtc *crtc);
 void intel_crtc_update_dpms(struct drm_crtc *crtc);
@@ -726,7 +753,7 @@ void intel_release_load_detect_pipe(struct drm_connector *connector,
 				    struct intel_load_detect_pipe *old);
 int intel_pin_and_fence_fb_obj(struct drm_device *dev,
 			       struct drm_i915_gem_object *obj,
-			       struct intel_ring_buffer *pipelined);
+			       struct intel_engine_cs *pipelined);
 void intel_unpin_fb_obj(struct drm_i915_gem_object *obj);
 struct drm_framebuffer *
 __intel_framebuffer_create(struct drm_device *dev,
@@ -777,6 +804,8 @@ int valleyview_get_vco(struct drm_i915_private *dev_priv);
 void intel_mode_from_pipe_config(struct drm_display_mode *mode,
 				 struct intel_crtc_config *pipe_config);
 int intel_format_to_fourcc(int format);
+void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc);
+
 /* intel_dp.c */
 void intel_dp_init(struct drm_device *dev, int output_reg, enum port port);
@@ -902,6 +931,7 @@ extern struct drm_display_mode *intel_find_panel_downclock(
 /* intel_pm.c */
 void intel_init_clock_gating(struct drm_device *dev);
 void intel_suspend_hw(struct drm_device *dev);
+int ilk_wm_max_level(const struct drm_device *dev);
 void intel_update_watermarks(struct drm_crtc *crtc);
 void intel_update_sprite_watermarks(struct drm_plane *plane,
 				    struct drm_crtc *crtc,
@@ -31,7 +31,6 @@
 struct intel_dsi_device {
 	unsigned int panel_id;
 	const char *name;
-	int type;
 	const struct intel_dsi_dev_ops *dev_ops;
 	void *dev_priv;
 };
@@ -85,6 +84,9 @@ struct intel_dsi {
 	/* virtual channel */
 	int channel;
 
+	/* Video mode or command mode */
+	u16 operation_mode;
+
 	/* number of DSI lanes */
 	unsigned int lane_count;
@@ -112,6 +114,15 @@ struct intel_dsi {
 	u16 hs_to_lp_count;
 	u16 clk_lp_to_hs_count;
 	u16 clk_hs_to_lp_count;
+
+	u16 init_count;
+
+	/* all delays in ms */
+	u16 backlight_off_delay;
+	u16 backlight_on_delay;
+	u16 panel_on_delay;
+	u16 panel_off_delay;
+	u16 panel_pwr_cycle_delay;
 };
 
 static inline struct intel_dsi *enc_to_intel_dsi(struct drm_encoder *encoder)
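Per-panel delay fields like these are typically consumed in the panel power sequencing path: the driver powers the rails, waits panel_on_delay before driving the link, and waits backlight_on_delay before lighting the backlight. The sketch below shows that consumption pattern only; the sequence and the mocked msleep() are illustrative, not the driver's actual code path.

/* Editorial sketch: how per-panel delay fields feed a power-on sequence. */
#include <stdint.h>
#include <stdio.h>

struct panel_delays {
        uint16_t panel_on_delay;     /* ms until panel accepts video */
        uint16_t backlight_on_delay; /* ms until backlight may turn on */
};

static void msleep_mock(uint16_t ms) { printf("sleep %u ms\n", ms); }

static void panel_power_on(const struct panel_delays *d)
{
        printf("panel power on\n");
        msleep_mock(d->panel_on_delay);      /* let the panel come up */
        msleep_mock(d->backlight_on_delay);  /* then it is safe to light it */
        printf("backlight on\n");
}

int main(void)
{
        struct panel_delays d = { .panel_on_delay = 100,
                                  .backlight_on_delay = 50 };
        panel_power_on(&d);
        return 0;
}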
@@ -119,10 +119,6 @@ static void intel_lvds_get_config(struct intel_encoder *encoder,
 	pipe_config->adjusted_mode.crtc_clock = dotclock;
 }
 
-/* The LVDS pin pair needs to be on before the DPLLs are enabled.
- * This is an exception to the general rule that mode_set doesn't turn
- * things on.
- */
 static void intel_pre_enable_lvds(struct intel_encoder *encoder)
 {
 	struct intel_lvds_encoder *lvds_encoder = to_lvds_encoder(&encoder->base);
@@ -324,15 +320,6 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
 	return true;
 }
 
-static void intel_lvds_mode_set(struct intel_encoder *encoder)
-{
-	/*
-	 * We don't do anything here, the LVDS port is fully set up in the pre
-	 * enable hook - the ordering constraints for enabling the lvds port vs.
-	 * enabling the display pll are too strict.
-	 */
-}
-
 /**
  * Detect the LVDS connection.
  *
@@ -946,7 +933,6 @@ void intel_lvds_init(struct drm_device *dev)
 	intel_encoder->enable = intel_enable_lvds;
 	intel_encoder->pre_enable = intel_pre_enable_lvds;
 	intel_encoder->compute_config = intel_lvds_compute_config;
-	intel_encoder->mode_set = intel_lvds_mode_set;
 	intel_encoder->disable = intel_disable_lvds;
 	intel_encoder->get_hw_state = intel_lvds_get_hw_state;
 	intel_encoder->get_config = intel_lvds_get_config;
@@ -696,10 +696,7 @@ intel_post_enable_primary(struct drm_crtc *crtc)
 	 * when going from primary only to sprite only and vice
 	 * versa.
 	 */
-	if (intel_crtc->config.ips_enabled) {
-		intel_wait_for_vblank(dev, intel_crtc->pipe);
-		hsw_enable_ips(intel_crtc);
-	}
+	hsw_enable_ips(intel_crtc);
 
 	mutex_lock(&dev->struct_mutex);
 	intel_update_fbc(dev);
@@ -1021,6 +1018,9 @@ intel_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
 	intel_crtc->primary_enabled = primary_enabled;
 
+	if (primary_was_enabled != primary_enabled)
+		intel_crtc_wait_for_pending_flips(crtc);
+
 	if (primary_was_enabled && !primary_enabled)
 		intel_pre_disable_primary(crtc);
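The transition rule added here is easy to misread in diff form: pending page flips are flushed whenever the primary plane's visibility changes in either direction, while the pre-disable hook runs only on the enabled-to-disabled edge. A minimal C sketch of just that control flow, with mock functions standing in for the driver calls:

/* Editorial sketch: the visibility-transition logic from the hunk above. */
#include <stdbool.h>
#include <stdio.h>

static void wait_for_pending_flips(void) { puts("wait for pending flips"); }
static void pre_disable_primary(void)    { puts("pre-disable primary"); }

static void update_primary(bool was_enabled, bool enabled)
{
        if (was_enabled != enabled)    /* any visibility change */
                wait_for_pending_flips();
        if (was_enabled && !enabled)   /* only on the disable edge */
                pre_disable_primary();
}

int main(void)
{
        update_primary(true, false);   /* flush flips, then pre-disable */
        update_primary(false, true);   /* flush flips only */
        update_primary(true, true);    /* no-op */
        return 0;
}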