Commit 7dbc515f authored by Thomas Zimmermann

fbdev: Improve performance of sys_fillrect()

Improve the performance of sys_fillrect() by using word-aligned
32/64-bit mov instructions. While the code tried to implement this,
the compiler failed to generate fast instructions. The resulting
code was even slower than cfb_fillrect(), which uses the same
algorithm but operates on I/O memory.
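
For context, the patch below replaces the open-coded fill loop with the
kernel's memset_l() helper. Roughly paraphrased from include/linux/string.h
(the exact form may differ between kernel versions), memset_l() dispatches to
memset32() or memset64(), which is where the word-aligned 32/64-bit stores
come from:

	/* Paraphrased sketch, not a verbatim copy of the header. */
	static inline void *memset_l(unsigned long *p, unsigned long v,
				     __kernel_size_t n)
	{
		if (BITS_PER_LONG == 32)
			return memset32((uint32_t *)p, v, n);	/* 32-bit word stores */
		else
			return memset64((uint64_t *)p, v, n);	/* 64-bit word stores */
	}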

A microbenchmark measures the average number of CPU cycles
for sys_fillrect() after a stabilizing period of a few minutes
(i7-4790, FullHD, simpledrm, kernel with debugging). The value
for CFB is given as a reference.

  sys_fillrect(), new:  26586 cycles
  sys_fillrect(), old: 166603 cycles
  cfb_fillrect():       41012 cycles

In the optimized case, sys_fillrect() is now ~6x faster than before
and ~1.5x faster than the CFB implementation.
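
The benchmark source is not part of this patch. As a purely illustrative
sketch, an in-kernel measurement of the average cycle count could be
structured roughly as follows; fillrect_avg_cycles() is a hypothetical
helper, not the code used to produce the numbers above:

	#include <linux/fb.h>		/* struct fb_info, struct fb_fillrect, sys_fillrect() */
	#include <linux/timex.h>	/* get_cycles() */
	#include <linux/math64.h>	/* div_u64() */

	/* Hypothetical: average CPU cycles per sys_fillrect() call over 'loops' runs. */
	static u64 fillrect_avg_cycles(struct fb_info *info,
				       const struct fb_fillrect *rect,
				       unsigned int loops)
	{
		cycles_t start, end;
		unsigned int i;

		start = get_cycles();
		for (i = 0; i < loops; i++)
			sys_fillrect(info, rect);
		end = get_cycles();

		return div_u64(end - start, loops);
	}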
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20220223193804.18636-2-tzimmermann@suse.de
parent 81d9d7f8
@@ -50,19 +50,9 @@ bitfill_aligned(struct fb_info *p, unsigned long *dst, int dst_idx,
 		/* Main chunk */
 		n /= bits;
-		while (n >= 8) {
-			*dst++ = pat;
-			*dst++ = pat;
-			*dst++ = pat;
-			*dst++ = pat;
-			*dst++ = pat;
-			*dst++ = pat;
-			*dst++ = pat;
-			*dst++ = pat;
-			n -= 8;
-		}
-		while (n--)
-			*dst++ = pat;
+		memset_l(dst, pat, n);
+		dst += n;
 		/* Trailing bits */
 		if (last)
 			*dst = comp(pat, *dst, last);
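
In the new code, memset_l(dst, pat, n) fills the n full words of the main
chunk with the pattern, and dst += n advances the destination pointer so that
the trailing-bits write still lands on the correct word, just as the removed
loop did via *dst++.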