RADEON_GEM_WAIT_IDLE is declared DRM_IOW, but mesa
uses it with drmCommandWriteRead instead of drmCommandWrite,
which leads to the ioctl being unmatched and returning an
error on at least OpenBSD.
Problem originally found and fixed in libdrm by kettenis@.
Dave Airlie pointed out that mesa has the same issue.
This change has already been merged in upstream mesa.
ok matthieu@ kettenis@
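A minimal sketch of the fixed call, assuming libdrm's command wrappers
and the structs from radeon_drm.h (not the exact mesa hunk): since the
ioctl is declared write-only, it has to be issued with drmCommandWrite();
drmCommandWriteRead() encodes a different ioctl number and will not
match on a kernel that checks the direction bits, as OpenBSD's does.

    #include <string.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <radeon_drm.h>

    static int
    radeon_gem_wait_idle(int fd, uint32_t handle)
    {
            struct drm_radeon_gem_wait_idle args;

            memset(&args, 0, sizeof(args));
            args.handle = handle;

            /* write-only command, matching the DRM_IOW declaration */
            return drmCommandWrite(fd, DRM_RADEON_GEM_WAIT_IDLE,
                &args, sizeof(args));
    }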
event that's not coming.
This unbreaks clutter/cogl and probably other toolkits.
Upstream commit 25620eb1d277c6b80edb136eaeca12532fcfd3ce by Adam Jackson
ok ajacoutot@, jasper@, robert@
(but never made it into the 7.8 branch).
first:
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Thu Apr 22 12:47:41 2010 -0700
DRI2: add config query extension
Add a new DRI2 configuration query extension. Allows for DRI2
client code to query for common DRI2 configuration options.
second:
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Thu Apr 22 12:49:03 2010 -0700
DRI2/GLX: check for vblank_mode in DRI2 GLX code
Re-add support for the vblank_mode environment and configuration
variable. Useful for benchmarking and app control.
The net effect is that config and environment variables for
controlling swap mode now work with dri2, which helps me a lot with
debugging.
ok matthieu@.
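Roughly how the option maps to a default swap interval; the numeric
values follow mesa's convention (0 = never sync, 1 = app preference,
default off, 2 = app preference, default on, 3 = always sync), but the
getenv() below only stands in for the real driconf/config-query
plumbing:

    #include <stdlib.h>

    enum {
            VBLANK_NEVER,
            VBLANK_DEF_INTERVAL_0,
            VBLANK_DEF_INTERVAL_1,
            VBLANK_ALWAYS_SYNC
    };

    static int
    default_swap_interval(void)
    {
            int mode = VBLANK_DEF_INTERVAL_1;       /* sync by default */
            const char *env = getenv("vblank_mode");

            if (env != NULL)
                    mode = atoi(env);

            switch (mode) {
            case VBLANK_NEVER:
            case VBLANK_DEF_INTERVAL_0:
                    return 0;       /* don't wait for vblank on swap */
            default:
                    return 1;       /* one vblank per SwapBuffers */
            }
    }

So something like 'vblank_mode=0 glxgears' should give an unsynced
swap rate for benchmarking.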
case libGL itself was dlopen()ed), it was using "libGL.so.1" (linux
convention, doesn't work on OpenBSD). Change it to "libGL.so" so it has
a hope in hell of working.
I originally wrote this patch when trying to port perl's OpenGL
modules ages ago, and I finally decided that hacking each instance of
dlopening libGL to use RTLD_GLOBAL was dumb.
ok matthieu@
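For reference, this is roughly the dlopen() the loader ends up doing:
the unversioned "libGL.so" name (the .so.1 suffix is a linux
convention that doesn't exist on OpenBSD) plus RTLD_GLOBAL, so the GL
symbols stay visible to the DRI driver even when the application
itself pulled in libGL via dlopen():

    #include <dlfcn.h>
    #include <stdio.h>

    static void *
    open_libgl(void)
    {
            void *handle = dlopen("libGL.so", RTLD_NOW | RTLD_GLOBAL);

            if (handle == NULL)
                    fprintf(stderr, "dlopen libGL.so: %s\n", dlerror());
            return handle;
    }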
Since mesa changed some code, GL applications have been rather nasty
to the xserver: if they are unconstrained rendering-wise, they spam
too many requests at the xserver and make it slow as hell (even if
the cpu is fairly idle).
There is a throttling mechanism in the xserver (1.8 at least), but that
only really works if you are doing vblank syncing (which is turned off
in our intel driver right now for unrelated reasons), and even then an unsynced
client can cause the same problem.
While a proper fix is being worked on (I am in discussion with X
developers), comment out two conditionals in the intel mesa driver so
that even when using dri2 swapbuffers we wait on the swapbuffers
before last before rendering more; this stops us from almost DoSing
the server.
Tested on ironlake, 855 and 965 by me (and by matthieu as well). ok
matthieu@
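The shape of the workaround, very roughly (all names here are made
up, this is not the actual intel driver code): remember the swap
before last and block on it before queueing more rendering, so an
unthrottled client can only ever run a frame or so ahead of the
server.

    #include <stddef.h>

    struct batch;                            /* hypothetical batch handle */
    extern void batch_wait(struct batch *);  /* hypothetical blocking wait */

    struct throttle {
            struct batch *swap_before_last;
            struct batch *last_swap;
    };

    static void
    throttle_on_swap(struct throttle *t, struct batch *new_swap)
    {
            if (t->swap_before_last != NULL)
                    batch_wait(t->swap_before_last);

            t->swap_before_last = t->last_swap;
            t->last_swap = new_swap;
    }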
update mesa.
Specifically, radeondrm disallows register mapping for dri clients,
so don't try to map them (and thus fail, as we currently were). For
r300+ this was only used for falling back on old drm versions
(doesn't matter). For r100, the new BO abstraction used the SWI
number (in hardware scratch reg 3) for the buffer age, so use the
newly added getparam member to grab that info instead of trying to
read the mapped registers.
Update to the latest kernel headers before you even think about
building this or trying to use a snapshot on r100/r200.
So now radeon works with mesa again, hoorah!
Tested on rv250 by Josh Elsasser, and on R420 (and x800) by myself.
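A sketch of pulling the SWI value through getparam instead of a
mapped scratch register; the drm_radeon_getparam_t interface is real,
but RADEON_PARAM_LAST_SWI is only my guess at the name of the newly
added parameter:

    #include <string.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <radeon_drm.h>

    static uint32_t
    radeon_get_buffer_age(int fd)
    {
            drm_radeon_getparam_t gp;
            uint32_t swi = 0;

            memset(&gp, 0, sizeof(gp));
            gp.param = RADEON_PARAM_LAST_SWI;       /* hypothetical name */
            gp.value = &swi;

            if (drmCommandWriteRead(fd, DRM_RADEON_GETPARAM,
                &gp, sizeof(gp)) != 0)
                    return 0;
            return swi;
    }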
is correctly handled. Without fixes to mesa and the ddx, the
so-called backwards compat goop that was added just plain does not
work and ends up rendering bullshit.
light of day and has already been removed in mesa master (ages ago).
As a bonus, removes the annoying "falling back to classic" message on
launching a gl application.
ok matthieu@.
from this was removed from the kernel and is very much deprecated.
Pageflipping is also probably broken and should not be used. Similar
change happened in mesa master a while back.
ok matthieu@
really don't need it. There's one case where it's used, and that is
on ``older'' drms; newer ones provide that one value via a parameter.
This is the first stage in my project to stop all cards mapping
registers. This does mean that drivers that depend on this may
eventually die (tdfx, I'm looking at you!).
ok matthieu@
but this one (from mesa, prompted by my diff) should run a little faster.
Now mplayer -vo gl or gl2 works with dri enabled.
Detected by otto malloc. Some debugging help from todd.
ok matthieu@, todd@.
that the memory was not set to executable. This caused some horrible
segfaults that, due to lack of hardware, I've been unable to track
down for months.
Conveniently, there was already a memory allocator that uses mmap to
create executable memory, #ifdef linux. Make it usable for us too.
Problem solved!
Thanks to todd@ for helping me debug, and deraadt@ for noticing the
allocator.
Makes SiS work with dri, and probably fixes things for a bunch of
other people too.
ok matthieu@ (who has sent this upstream).
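The idea of that allocator, in miniature: back the generated-code
heap with an anonymous mmap() that is marked executable, instead of
malloc()ed pages that the system refuses to execute.

    #include <sys/mman.h>
    #include <stddef.h>

    static void *
    alloc_exec_mem(size_t size)
    {
            void *mem = mmap(NULL, size,
                PROT_READ | PROT_WRITE | PROT_EXEC,
                MAP_PRIVATE | MAP_ANON, -1, 0);

            return (mem == MAP_FAILED) ? NULL : mem;
    }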
[965] Fix potential segfaults from bad realloc.
C has no order of evaluation restrictions on function arguments, so we
attempted to realloc from new-size to new-size.
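The pattern, with illustrative names (not the actual i965 code):
grow() stands in for a realloc-style helper that takes the old and
new sizes; with the size update buried in one argument, nothing fixes
whether the old-size expression sees the value before or after the
doubling, so both arguments can end up equal.

    #include <stdlib.h>
    #include <string.h>

    static void *
    grow(void *ptr, size_t old_size, size_t new_size)
    {
            void *n = calloc(1, new_size);

            if (n != NULL && ptr != NULL)
                    memcpy(n, ptr, old_size);
            free(ptr);
            return n;
    }

    void
    grow_items(int **buf, size_t *count)
    {
            /*
             * buggy: side effect inside an argument, evaluation order
             * unspecified:
             *   *buf = grow(*buf, *count * sizeof(**buf),
             *       (*count *= 2) * sizeof(**buf));
             */

            /* fixed: compute both sizes before the call */
            size_t old_size = *count * sizeof(**buf);
            size_t new_size = old_size * 2;

            *buf = grow(*buf, old_size, new_size);
            if (*buf != NULL)
                    *count *= 2;
    }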