This test ensures that client-submitted damage reaches the screen
correctly, regardless of output scale or transform.
The added quirk is explained in the test that uses it.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
I am working on adding a test to ensure Weston repaints damage
correctly, where I rely on Weston repainting exactly and only the damage
submitted by a client. That means I have to stop screenshooting from
damaging everything automatically. Doing that, I noticed that
screenshots on the DRM-backend could theoretically get stuck. So test
for it.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Use terminology consistent with the code: the index starts from zero,
the numbering starts from one. Fixture 0 runs all fixtures.
Suggested-by: Marius Vlad <marius.vlad@collabora.com>
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
When there is a fixture setup array, list all fixture setups with their
numbers and names. This should help people pick a single fixture to run
and makes the --list output more interesting.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Instead of "fixture %d", use the proper fixture name if it exists or
nothing. Some places still show the fixture index because it is used on
the command line.
This makes the reports more readable.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Make it more explicit that the return value is NULL when there is no
array.
This patch makes the following patch smaller.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This allows tests to give meaningful names to their fixture setups when
they use more than one of them.
If a test uses DECLARE_FIXTURE_SETUP_WITH_ARG(), it must now pass a
third argument naming the field whose type is struct fixture_metadata.
This also means that the fixture setup data must now be a struct; it
cannot be a plain type anymore. A compiler error is generated if the
field type is not the expected one.
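For illustration, a converted fixture setup array could look roughly
like this; the argument struct, its values and fixture_setup() are made
up, and it assumes the metadata struct carries a name field:

    /* Illustrative sketch only: my_setup_args and its contents are
     * invented, fixture_setup() is assumed to be defined as usual;
     * the macro and the metadata field are the point here. */
    struct my_setup_args {
        struct fixture_metadata meta;
        int gl; /* arbitrary example payload */
    };

    static const struct my_setup_args my_args[] = {
        { .meta.name = "GL-renderer", .gl = 1 },
        { .meta.name = "Pixman-renderer", .gl = 0 },
    };

    DECLARE_FIXTURE_SETUP_WITH_ARG(fixture_setup, my_args, meta);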
All tests using DECLARE_FIXTURE_SETUP_WITH_ARG() are converted to the
new form and given names for their fixture setups.
The fixture setup names are not actually used yet; that will be another
patch.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This makes sub-tests visible in the JUnit output, making GitLab test
reports more detailed.
This does not apply to zuc tests, which look like they could produce
JUnit XML directly. And maybe TAP? Left for another time.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This adds a test to ensure that the wl_shm formats YUV420, NV12 and YUYV
are decoded and converted to RGB correctly in GL-renderer.
The test deliberately uses a 256 x 256 test image so that effects from
width vs. pitch vs. stride cannot be observed, and row padding is zero.
Also padding between planes is zero. Attempting to use a test image with
less "round" dimensions led to a stride mismatch in GL-renderer, likely
due to GL_UNPACK_ALIGNMENT being left at value 4. It is unclear if YUV
wl_shm buffers' row stride needs to be aligned to 4 bytes or not, so I
did not pursue fixing it. GL-renderer seems to be confusing width, pitch
and stride even further, and does not, e.g., allow padding with ARGB
buffers.
See also: https://gitlab.freedesktop.org/wayland/weston/-/issues/354
Furthermore, the test arranges things so that each 2x2 pixel block has a
single color. This avoids having to consider chroma siting when
sub-sampling.
This way all the test cases can use the same reference image.
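For reference, the per-pixel conversion can be sketched as below,
assuming full-range ITU-R BT.601 coefficients; the coefficients actually
used by the test may differ. Because every 2x2 block is a single color,
the sub-sampled Cb/Cr of a block is simply that of any pixel in it.

    #include <stdint.h>

    /* Sketch only: full-range BT.601 RGB -> Y'CbCr; not the test code. */
    static uint8_t
    clamp_u8(double v)
    {
        if (v < 0.0)
            return 0;
        if (v > 255.0)
            return 255;
        return (uint8_t)(v + 0.5);
    }

    static void
    rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                 uint8_t *y, uint8_t *cb, uint8_t *cr)
    {
        *y  = clamp_u8( 0.299 * r + 0.587 * g + 0.114 * b);
        *cb = clamp_u8(-0.169 * r - 0.331 * g + 0.500 * b + 128.0);
        *cr = clamp_u8( 0.500 * r - 0.419 * g - 0.081 * b + 128.0);
    }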
The source image chocolate-cake.png was taken by and is copyrighted by
Pekka Paalanen, hereby licensed under
http://creativecommons.org/licenses/by-sa/4.0/ .
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
In anticipation of invasive future work on color management, add an
alpha blending test to make sure we don't break alpha blending.
The idea for doing a monotonicity test came from glennk on #dri-devel in
Freenode IRC.
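The idea: composite a gradient of source alpha over a constant
background and require each channel of the result to change
monotonically. A rough sketch of such a check follows; the names and
pixel layout are illustrative, not the actual test code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: verify one 8-bit channel (selected by 'shift') changes
     * monotonically along a row of 32-bit pixels. */
    static bool
    channel_is_monotonic(const uint32_t *row, int width, int shift)
    {
        int dir = 0;
        int x;

        for (x = 1; x < width; x++) {
            int prev = (row[x - 1] >> shift) & 0xff;
            int cur = (row[x] >> shift) & 0xff;

            if (cur == prev)
                continue;
            if (dir == 0)
                dir = cur > prev ? 1 : -1;
            else if ((cur - prev) * dir < 0)
                return false;
        }

        return true;
    }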
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This factors a new function, verify_image(), out of
verify_screen_content().
verify_image() will be useful with a test that verifies a screenshot
against a reference image but also wants to do additional testing on the
screenshot.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Const has documentary value: it says the code will not modify the
parameter contents. Everything that can be const should be const.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Do not write out PNG files for successful screenshot tests. It clutters
the build directory, but the biggest reason is to keep the CI artifacts
smaller.
The internal-screenshot test still writes a PNG; it is good to keep one
PNG written so that we always exercise the PNG writing code.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This is a follow-up to the commit "tests: start to use Weston's
default screenshooter in the test suite".
As we've started to use Weston's default screenshooter
implementation and protocol extension in the test suite,
we don't need what we've created specifically for the test
suite anymore.
Drop the test suite screenshooter implementation and protocol
extension.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Until now we had two different protocol extensions: one for the
test suite screenshooter and another for the screenshooter client.
As they are identical and this won't change, make the test suite
use the same protocol that the screenshooter client uses.
Besides the cleanup, this change will also allow us to use the
DRM writeback screenshooter in the test suite, as the test suite
implementation is hardcoded to use a renderer-based screenshooter.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
The 'struct test' has a field 'int buffer_copy_done', but it is in
fact a boolean. Change it to 'bool buffer_copy_done'.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
The wl_drm protocol is not being used by the test client. So
remove 'bool has_wl_drm' from 'struct client' and also the
branch that initializes this variable in handle_global().
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
When destroying the shell we need to remove the listeners
as well. The test-desktop-shell was forgetting to do this.
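In libwayland terms this amounts to unlinking each listener in the
destroy handler, e.g. (the field name is illustrative):

    wl_list_remove(&dts->destroy_listener.link);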
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
There are some specific cases in which we need Weston to
behave differently when running in the test suite. This
adds a new API to allow the tests to select these behaviors.
For instance, in the DRM backend we plan to add a writeback
connector screenshooter. In case it fails for some
reason, it should fall back to the renderer screenshooter
that all other backends use. But if we add a test to
ensure the correctness of the writeback screenshooter,
we don't want it to fall back to the renderer one; we
want it to fail. With this new API we can choose to
disable the fallback behavior specifically for this test.
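A minimal sketch of what such an API could look like follows; every
name here is hypothetical and only illustrates the idea of
test-suite-selected behavior:

    #include <stdbool.h>

    struct weston_compositor;

    /* Hypothetical flags a test could set before the compositor starts. */
    struct test_suite_quirks {
        /* Make a failed writeback screenshot fail instead of silently
         * falling back to the renderer screenshooter. */
        bool forbid_screenshot_fallback;
    };

    void
    test_suite_set_quirks(struct weston_compositor *compositor,
                          const struct test_suite_quirks *quirks);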
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Convert ivi-shell-app-test.c to use `weston_ini_setup`. This also
removes the pre-made weston.ini and all the related code in the meson
files.
Signed-off-by: Igor Matheus Andrade Torrente <igormtorrente@gmail.com>
Currently there is no standard way to write a weston.ini inside a test.
Here, two new functions `weston_ini_setup` and `cfgln` are introduced to
help the test writer compose a weston.ini file and load it into the
test. `internal-screenshot-test` is converted to use the new way of
writing a weston.ini. This conversion serves as an example and an
initial API test.
The tester needs to call `weston_test_harness_execute_as_client` or
`weston_test_harness_execute_as_plugin` in the same way as before.
`weston_ini_setup` will fill setup->config_file with the correct path to
the weston.ini file.
The main design goal is to avoid pre-made or build-made weston.ini(s)
and keep the test as self-contained as possible.
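A fixture setup using the new helpers might then look roughly like this;
the configuration keys are just an example, and the surrounding harness
calls are assumed from the existing test framework:

    static enum test_result_code
    fixture_setup(struct weston_test_harness *harness)
    {
        struct compositor_setup setup;

        /* compositor_setup_defaults() and the return type are assumed
         * from the existing harness. */
        compositor_setup_defaults(&setup);

        /* Example content only; any valid weston.ini lines work. */
        weston_ini_setup(&setup,
                         cfgln("[shell]"),
                         cfgln("startup-animation=%s", "none"));

        return weston_test_harness_execute_as_client(harness, &setup);
    }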
Closes: #410
Signed-off-by: Igor Matheus Andrade Torrente <igormtorrente@gmail.com>
This adds the first DRM-backend test. It is very simple
and was made in order to make it easier to add more complex
DRM-backend tests in the future.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
With this patch we add support for running DRM-backend tests locally
in the test suite. For now this won't work in the CI, as there
are no cards available. But the plan is to achieve this by using
VKMS (virtual KMS) in the future.
To run DRM-backend tests locally, the user first has to set the
environment variable WESTON_TEST_SUITE_DRM_DEVICE to 'card0', 'card1'
or whichever device they want to run the tests on. Also, for now this
only works when run as root, but this problem will be solved in the
future.
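For illustration, the harness or a DRM fixture could pick the device up
along these lines; only the environment variable name comes from this
patch, the helper itself is a sketch:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: return the configured DRM device name, or NULL if the
     * user has not opted in to DRM-backend tests. */
    static const char *
    get_drm_test_device(void)
    {
        const char *device = getenv("WESTON_TEST_SUITE_DRM_DEVICE");

        if (!device)
            fprintf(stderr, "Set WESTON_TEST_SUITE_DRM_DEVICE to e.g. "
                    "card0 to run DRM-backend tests.\n");

        return device;
    }

A fixture could then skip the DRM-backend tests when this returns NULL.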
The tests will run on a non-default seat. The reason for that
is that we want to avoid opening input devices unnecessarily.
Also, since DRM-backend usage requires gaining DRM master status
on a DRM KMS device, nothing else must be using the device at
the same time. To achieve this we use a lock to run the
DRM-backend tests sequentially.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
So far, the test suite only deals with headless-backend tests.
In order to make it able to run DRM-backend tests, we have
to properly select the renderer that it will use.
This patch adds the command line option --use-pixman if the test
defines the DRM-backend renderer as RENDERER_PIXMAN, and adds nothing
to the command line if it defines RENDERER_GL (GL is already the
DRM-backend default renderer). Also, if the user defines the
DRM-backend renderer as RENDERER_NOOP, the test will fail (as it
should, since the DRM-backend does not implement it).
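The selection logic boils down to something like the following sketch;
append_arg(), struct prog_args and the enum name are stand-ins for
whatever the harness actually uses:

    #include <assert.h>

    /* Sketch only: the argument-appending helper and the types are
     * stand-ins; the RENDERER_* values and --use-pixman are the point. */
    static void
    add_drm_renderer_arg(struct prog_args *args, enum renderer_type renderer)
    {
        switch (renderer) {
        case RENDERER_PIXMAN:
            append_arg(args, "--use-pixman");
            break;
        case RENDERER_GL:
            /* GL is already the DRM-backend default; add nothing. */
            break;
        case RENDERER_NOOP:
        default:
            /* The DRM-backend does not implement these. */
            assert(0);
        }
    }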
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
In the test suite we have some default options which
are command line arguments used by most of the tests.
Two of these are width==320 and height==240. But
with the DRM and fbdev backends, width and height
are not valid command line arguments. This makes it
impossible to run tests that use one of these
backends, as the compositor won't start if the
command line string is wrong.
Fix this by not passing the width and height command
line arguments if the backend is DRM or fbdev.
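In the harness this amounts to a guard along these lines; the backend
enum values and the append helper are illustrative:

    /* Sketch: skip --width/--height for backends that reject them. */
    if (setup->backend != BACKEND_DRM && setup->backend != BACKEND_FBDEV) {
        append_arg_printf(args, "--width=%d", setup->width);
        append_arg_printf(args, "--height=%d", setup->height);
    }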
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
This test ensures that
"pixman-renderer: half-fix bilinear sampling on edges"
keeps on working.
Unlike in the original report
https://gitlab.freedesktop.org/wayland/weston/issues/373, here we use buffer
scale 2 instead of output scale 2 to trigger the bilinear filter. The
effect is the same; the actual resulting image in the failing case is
just a little different. This is so that it will be easy to add more
viewport screenshooting tests to this program in the future.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
There will be a new test program using viewports that would like to
share this bit of code.
There are two behavioral changes:
- Compositor wp_viewporter interface version is no longer checked.
- client_create_viewport() does not leak the viewporter object.
test_viewporter_double_create needs to call bind_to_singleton_global() itself
so that the viewporter object still exists when the error event arrives.
Otherwise error verification fails.
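The shared helper can be pictured roughly like this; it is a sketch,
not the actual code, and the struct members and the bind helper's
signature are assumptions:

    /* Sketch of the shared helper: bind wp_viewporter, create the
     * viewport for the client's surface, and destroy the wp_viewporter
     * proxy so it is not leaked. Names are illustrative. */
    static struct wp_viewport *
    client_create_viewport(struct client *client)
    {
        struct wp_viewporter *viewporter;
        struct wp_viewport *viewport;

        viewporter = bind_to_singleton_global(client,
                                              &wp_viewporter_interface, 1);
        viewport = wp_viewporter_get_viewport(viewporter,
                                              client->surface->wl_surface);
        wp_viewporter_destroy(viewporter);

        return viewport;
    }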
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
When a test fails and it produces a difference image, also compute the min/max
per-channel signed difference statistics. These numbers can be used to adjust
the fuzz needed for fuzzy_match_pixels() to pass. Otherwise one would have to
manually inspect the reference and result images and figure out the values.
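Per channel, the statistics are just a running signed min/max over all
pixels, for example:

    #include <stdint.h>

    /* Sketch: fold one channel's signed difference (result - reference)
     * into the running range. */
    static void
    update_diff_range(int *min, int *max, uint8_t ref, uint8_t result)
    {
        int d = (int)result - (int)ref;

        if (d < *min)
            *min = d;
        if (d > *max)
            *max = d;
    }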
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This patch continues the buffer and output transforms testing by iterating
through a representative selection of buffer transforms and scales.
For more details, see the previous patch "tests: add output transform tests".
https://gitlab.freedesktop.org/wayland/weston/issues/52
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This goes through all output transforms with two different buffer transforms
and verifies the visual output against reference images.
This commit introduces a new test input image 'basic-test-card.png'. It is a
small image with deliberately odd and indivisible dimensions to provoke bad
assumptions about image sizes. It contains red, green and blue areas which are
actually text that makes it very obvious if you have e.g. color channels
swapped. It has a thick white circle to highlight aspect ratio issues,
and an orange cross to show a mixed color. The white border provides
contrast and a 1px wide detail. The whole design makes it clear if the
image happens to be rotated
or flipped in any way.
The image has a one pixel wide transparent border so that the bilinear
sampling filter near the edges of the image produces the same colors
with both the Pixman- and GL-renderers, which handle out-of-image
samples fundamentally differently: Pixman assumes (0, 0, 0, 0) samples
outside of the image, while GL-renderer clamps sample coordinates to the
edge, essentially repeating the edge pixels.
It would have been "easy" to create a full matrix of
every output scale & transform x every buffer scale & transform, but that
would have resulted in 2 renderers * 8 output transforms * 3 output scales *
8 buffer transforms * 3 buffer scales = 1152 test cases that would have all
ran strictly serially because our test harness has no parallelism inside one
test program. That would have been slow to run, and need a lot more reference
images too.
Instead, I chose to iterate separately through all output scales & transforms
(this patch) and all buffer scales & transforms (next patch). This limits the
number of test cases in this patch to 56, and allows the two test programs to
run in parallel.
I did not even pick all possible scale & transform combinations here, but just
what I think is a representative sub-set to hopefully exercise all the code
paths.
https://gitlab.freedesktop.org/wayland/weston/issues/52
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Running with Mesa 20.1.0-devel (git-c7617d8908) GL renderer:
Radeon RX 550 Series (POLARIS11, DRM 3.27.0, 4.19.0-2-amd64, LLVM 8.0.1)
I found the output-transform test (a future patch) to produce exactly
this much more difference between the Pixman and GL renderers.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
It turns out that if the client is not explicitly destroyed, it will
remain connected until the compositor shuts down, because there is no
longer a client process that would terminate.
Usually this is not a problem, but if a test file has multiple screenshooting
tests, the windows from earlier tests in the file will remain on screen. That
is not wanted, hence implement client destruction.
To properly destroy a client, we also need a list of outputs. They used to be
simply leaked. This does not fix wl_registry.global_remove for
wl_outputs; that is left for a time when a test actually needs it.
This patch makes only the ivi-shell-app test use the new
client_destroy() to show that it actually works. The added log scopes
prove it: destroy requests get sent. Sprinkling client_destroy() around
all the other tests is left for a time
when it is actually necessary.
ivi-shell-app is a nicely simple test doing little else, hence I picked it.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The string from get_test_name() can be used for naming screenshot files
and other output. Starting the name with the fixture number makes an
alphabetized listing of the output files look unorganized.
Let's change the test name to begin with the test (source) name, with
the fixture and element numbers as suffixes. That makes a file listing
easier to look through when you have multiple tests, each saving
multiple screenshot files.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
A future test wants to access the fixture data array for the currently running
fixture index to log the test description. This patch provides access to the
array index.
Rather than adding more global variables, I changed the type of the
existing one, which feels slightly cleaner.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
With these, a test can initialize the headless-backend with a
non-default scale and transform, which allows testing output scales and
transforms.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Allow the reference image to be NULL or missing so that it does not even
attempt to load a reference image or compare it. You cannot just point the
reference image to an arbitrary image because the comparison functions can
abort due to size mismatch. This makes bootstrapping new tests easier when you
do not yet have a reference image.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The old name felt too... short.
The return type is changed to bool; it fits better for a
success/failure value.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This will be useful in more tests.
No changes to the code, aside from dropping one 'static'.
Copyright 2017 is taken from git-blame of the moved code.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This adds the necessary fuzz to image matching to let GL-renderer pass.
The difference is due to rounding. weston-test-desktop-shell.c uses
weston_surface_set_color(dts->background_surface, 0.16, 0.32, 0.48, 1.);
to set the background color. Pixman-renderer will truncate those to uint8, but
GL-renderer seems to round instead, which causes the +1 in background color
channel values.
0.16 * 255 = 40.8 (truncates to 40, rounds to 41)
0.32 * 255 = 81.6 (truncates to 81, rounds to 82)
0.48 * 255 = 122.4 (truncates to 122, rounds to 122)
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This shall be used by CI due to
https://gitlab.freedesktop.org/mesa/mesa/issues/2219 .
It defaults to true, meaning that by default people will be running the
GL-renderer tests. It works fine on hardware drivers, just not on
llvmpipe.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The fuzzy range will be used with GL-renderer testing, as it may produce
slightly different images than Pixman-renderer yet still correct results.
Such allowed differences are due to different rounding.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Release the touch devices and the seat if they were allocated, clean up
the layers, and free the weston_test structure.
Signed-off-by: Guillaume Champagne <champagne.guillaume.c@gmail.com>
Move xwayland test to the new harness.
This is the only test that can actually skip. It does so by calling
exit(77) and that is fine, because there is only one test case in the
file so far. To get rid of the exit() calls we would need to return a
value from the TEST() function, but that is big surgery for another
time.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This migrates all the client tests that have nothing special in them to the new
test harness.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>