This has a couple of additional implications for the internal weston API:
1) weston_view_configure no longer exists. Use weston_view_set_position
instead.
2) The weston_surface.configure callback no longer takes a width and
height. If you need them, surface.width/height are already set by the
time configure is called (see the sketch below). If you need to know
when the width/height changes, you must track that yourself.
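For example, a configure handler might now look roughly like this (the
get_example() lookup and its fields are made up for illustration):

    static void
    example_configure(struct weston_surface *es, int32_t sx, int32_t sy)
    {
            struct example *ex = get_example(es);   /* hypothetical lookup */

            /* es->width and es->height are already up to date here */
            if (es->width == 0)
                    return;

            weston_view_set_position(ex->view, ex->x + sx, ex->y + sy);
    }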
Gather the variables affecting the coordinate transformations between
buffer and local coordinates into a new struct weston_buffer_viewport.
This will be more useful later, when the crop & scale extension is
implemented.
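A rough sketch of the struct (field names are illustrative and may not
match the final code exactly):

    struct weston_buffer_viewport {
            /* wl_surface.set_buffer_transform */
            uint32_t transform;

            /* wl_surface.set_buffer_scale */
            int32_t scale;

            /* the crop & scale source rectangle and destination size
             * would slot in here later */
    };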
This wraps all accesses to an SHM buffer between wl_shm_buffer_begin_access()
and wl_shm_buffer_end_access() so that wayland-shm can install a handler for
SIGBUS and catch attempts to pass the compositor a buffer that is too small.
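In the GL renderer the SHM upload path then looks roughly like this
(sketch):

    wl_shm_buffer_begin_access(buffer->shm_buffer);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    buffer->width, buffer->height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                    wl_shm_buffer_get_data(buffer->shm_buffer));
    wl_shm_buffer_end_access(buffer->shm_buffer);

A SIGBUS raised inside the begin/end section, e.g. because the client
passed an undersized buffer, is caught by wayland-shm instead of
crashing the compositor.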
Both the Pixman renderer and the X11 backend contained effectively the same
region transformation code. This commit adds a weston_transformed_region
function and changes pixman-renderer and compositor-x11 to use it.
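Usage then becomes something like the following; the exact parameter
order here is from memory and may not match the real function:

    pixman_region32_t transformed;

    pixman_region32_init(&transformed);
    /* width/height/transform/scale describe the output */
    weston_transformed_region(width, height, transform, scale,
                              &src_region, &transformed);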
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Previously the renderers' destroy functions assumed they were only called
when the compositor was shutting down and that the compositor had
already destroyed all the surfaces. However, if the renderer were
switched at runtime, the surface state would be leaked.
This patch adds a destroy_signal to the pixman and gl renderers. The
surface state objects will listen for that signal and destroy
themselves if needed.
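The per-surface renderer state then carries a listener along these
lines (names are illustrative):

    static void
    surface_state_handle_renderer_destroy(struct wl_listener *listener,
                                           void *data)
    {
            struct gl_surface_state *gs =
                    container_of(listener, struct gl_surface_state,
                                 renderer_destroy_listener);

            surface_state_destroy(gs);  /* hypothetical: frees textures etc. */
    }

    /* at surface-state creation time: */
    gs->renderer_destroy_listener.notify =
            surface_state_handle_renderer_destroy;
    wl_signal_add(&gr->destroy_signal, &gs->renderer_destroy_listener);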
This is a step towards runtime switchable renderers.
Remove create_surface() and destroy_surface() from the renderer
interface and change the renderers to create surface state on demand
and destroy it using the weston_surface's destroy signal.
Also make sure the surfaces' renderer state is reset to NULL on
destruction.
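In practice the renderers now look up their per-surface state lazily,
roughly like this (function names are illustrative):

    static struct gl_surface_state *
    get_surface_state(struct weston_surface *surface)
    {
            if (!surface->renderer_state)
                    gl_renderer_create_surface_state(surface);

            return (struct gl_surface_state *) surface->renderer_state;
    }

and the handler hooked to the surface's destroy signal frees the state
and sets surface->renderer_state back to NULL.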
This is a step towards runtime switchable renderers.
(rpi-renderer changes are only compile-tested)
Also make sure backends destroy the renderers before shutting down the
compositor to avoid a double call to weston_binding_destroy().
This is a step towards making renderers switchable during runtime.
The weston_surface structure is split into two structures:
* The weston_surface structure stores everything required for a
client-side or server-side surface. This includes buffers; callbacks;
backend private data; input, damage, and opaque regions; and a few other
bookkeeping bits.
* The weston_view structure represents an entity in the scenegraph and
stores all of the geometry information. This includes the clip region,
alpha, position, and the transformation list, as well as all of the
temporary information derived from the geometry state. Because a view,
and not a surface, is a scenegraph element, the view is what is placed
in layers and planes (see the usage sketch after the notes below).
There are a few things worth noting about the surface/view split:
1. This is *not* a modification to the protocol. It is, instead, a
modification to Weston's internal scenegraph to allow a single surface
to exist in multiple places at a time. Clients are completely unaware
of how many views of a particular surface exist.
2. A view is considered a direct child of a surface and is destroyed when
the surface is destroyed. Because of this, the view.surface pointer is
always valid and non-null.
3. The compositor's surface_list is replaced with a view_list. Due to
subsurfaces, building the view list is a little more complicated than
it used to be and involves building a tree of views on the fly whenever
subsurfaces are used. However, this means that backends can remain
completely subsurface-agnostic.
4. Surfaces and views both keep track of which outputs they are on.
5. The weston_surface structure now has width and height fields. These
are populated when a new buffer is attached, before surface.configure
is called. This is because many surface-based operations genuinely
need the width and height, and digging them out of the views did not
work well.
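A rough usage sketch (the layer insertion is shown as a plain list
insert and may not match the final helpers exactly):

    struct weston_view *view;

    view = weston_view_create(surface);
    weston_view_set_position(view, x, y);

    /* the view, not the surface, is what goes into a layer */
    wl_list_insert(&layer->view_list, &view->layer_link);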
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
This commit adds a weston_buffer structure to replace wl_buffer. This way
we can hold onto buffers by just their resource. In order to do this,
every renderer.attach function has to fill in the weston_buffer.width and
weston_buffer.height fields.
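For SHM buffers this amounts to something like (sketch):

    buffer->shm_buffer = wl_shm_buffer_get(buffer->resource);
    if (buffer->shm_buffer) {
            buffer->width = wl_shm_buffer_get_width(buffer->shm_buffer);
            buffer->height = wl_shm_buffer_get_height(buffer->shm_buffer);
    }

EGL buffers query their size from the EGL implementation instead.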
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
AC_USE_SYSTEM_EXTENSIONS enables _XOPEN_SOURCE, _GNU_SOURCE and similar
macros to expose the largest extent of functionality supported by the
underlying system. This is required since these macros are often
limiting rather than merely additive; e.g. on some systems _XOPEN_SOURCE
will actually hide declarations that are not part of the X/Open spec.
Since this goes into config.h rather than the command line, ensure all
source is consistently including config.h before anything else,
including system libraries. This doesn't need to be guarded by a
HAVE_CONFIG_H ifdef, which was only ever a hangover from the X.Org
modular transition.
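In other words, every source file now starts with:

    #include "config.h"

    #include <stdio.h>
    #include <stdlib.h>
    /* ...system and project headers follow... */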
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
[pq: rebased and converted more files]
The old code had an off-by-one error on the y coordinate
in the height - (cur_y - y) expression. It also v-flipped the
*destination* buffer, whereas what is really needed is to
v-flip the whole source buffer. This only matters when reading
out part of the image, such as when using the screen recorder.
Also, instead of doing the flipping manually we just let pixman
handle it.
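Roughly, instead of shuffling rows by hand, the source image gets a
vertical-flip transform and pixman does the rest (sketch; variable
names are illustrative):

    pixman_transform_t flip;

    /* y' = height - y: scale by (1, -1), then translate by height */
    pixman_transform_init_scale(&flip, pixman_fixed_1,
                                pixman_int_to_fixed(-1));
    pixman_transform_translate(&flip, NULL,
                               0, pixman_int_to_fixed(height));
    pixman_image_set_transform(src_image, &flip);

    pixman_image_composite32(PIXMAN_OP_SRC, src_image, NULL, dst_image,
                             x, y, 0, 0, 0, 0, width, height);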
We changed the protocol to always list modes in physical pixel
units (not scaled) and removed the scaled mode flag. This
just updates the DRM and X11 backends and the GL and pixman renderers
to handle this.
Both the GL and pixman renderers (pixman probably only because GL did?)
return the screen capture image y-flipped, so Weston y-flips
it again. However, the future rpi-renderer can only produce right-way-up
(non-flipped) screen captures, and does not need a y-flip.
Add a capability flag for y-flip, which the rpi-renderer will not set,
to get screen captures the right way up.
The wcap recording code needs yet another temporary buffer for the
non-flipped case, since the WCAP format is flipped, and the code
normally overwrites the input image as it compresses it. This becomes
difficult if the compressor is supposed to flip while processing.
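Consumers of the capture then check the flag, assuming it ends up being
called WESTON_CAP_CAPTURE_YFLIP (sketch):

    if (compositor->capabilities & WESTON_CAP_CAPTURE_YFLIP)
            copy_rows_flipped(dst, src, stride, height);  /* hypothetical */
    else
            memcpy(dst, src, stride * height);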
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
The upcoming rpi-renderer cannot handle arbitrary rotations. Introduce
Weston capability bits, and add a bit for arbitrary rotation. GL and
Pixman renderers support it.
The shell, or any other module, must not produce surface transformations
with rotation if the capability bit is not set. Do not register the
surface rotation binding in desktop shell if arbitrary rotation is not
supported.
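In desktop shell that check looks roughly like this (assuming the bit
is called WESTON_CAP_ROTATION_ANY; binding details abbreviated):

    if (ec->capabilities & WESTON_CAP_ROTATION_ANY)
            weston_compositor_add_button_binding(ec, BTN_LEFT,
                                                 MODIFIER_SUPER |
                                                 MODIFIER_ALT,
                                                 rotate_binding, shell);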
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
If you specify e.g. scale=2 in an output section of weston.ini for the
X11 backend, we automatically upscale all normal surfaces by this
amount (see the example below). Additionally, we respect a buffer_scale
set on the buffer, meaning that the buffer is already in scaled form.
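For example, in weston.ini (the X11 backend's outputs are typically
named X1, X2, ...):

    [output]
    name=X1
    scale=2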
This works with both the gl and the pixman renderer. The non-X
backends compile and work, but don't support changing the output
scale (they do downscale as needed due to buffer_scale though).
This also sends the new "scale" and "done" events on wl_output,
making clients aware of the scale.
Rather than storing the shadow_image in the untransformed space
and rotating on copy to the hw_buffer, we store both in the transformed
space. This means a copy between them is a straight copy, and that
apps supplying correctly transformed surface buffers need not
change them.
We also correctly handle all output transform including the previously
unhandled flipped ones, as well as client supplied buffer_transforms (which
were previously ignored).
We also simplify the actual rendering by just converting any damage
region to output coordinates, setting it as the clip, and compositing
the whole buffer, letting pixman do the rectangle handling (sketched
below). This means we always do all the transforms, including the
surface positioning, as a pixman_image transform. This simplifies the
code and sets us up for handling scaling at a later stage.
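The per-view composite then boils down to roughly (variable names are
illustrative):

    /* "repaint" is the damage already converted to output coordinates */
    pixman_image_set_clip_region32(target_image, &repaint);

    /* output transform, buffer transform and surface position are all
     * folded into one pixman transform on the source */
    pixman_image_set_transform(surface_image, &transform);

    pixman_image_composite32(PIXMAN_OP_OVER,
                             surface_image, NULL, target_image,
                             0, 0, 0, 0, 0, 0,
                             output_width, output_height);

    pixman_image_set_clip_region32(target_image, NULL);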
The transform looks complicated, but in practice it ends up being
an integer translation almost always, so it will hit the pixman
fastpaths.
The X11 backend uses a shadow buffer to be able to support transformed
outputs. However, this belongs in the renderer, since otherwise this
code would have to be copied into every backend that uses the pixman
renderer and supports transformed outputs.
This renderer could be used when there is no graphics accelerator available,
for example in (future) KMS and fbdev backends.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>