There are some specific cases in which we need Weston to
behave differently when running in the test suite. This
adds a new API that allows tests to select these behaviors.

For instance, in the DRM backend we plan to add a writeback-connector
screenshooter. If it fails for some reason, it should fall back to the
renderer screenshooter that all other backends use. But in a test that
verifies the correctness of the writeback screenshooter, we do not want
it to fall back to the renderer one; we want it to fail. With this new
API we can disable the fallback behavior specifically for that test.
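Hypothetically, a test could opt out of the fallback through its fixture
setup; the names below (test_quirks, disable_screenshot_fallback) are
invented for this sketch and are not the actual API:

    struct compositor_setup setup;

    compositor_setup_defaults(&setup);
    setup.backend = WESTON_BACKEND_DRM;
    /* Invented quirk name: fail instead of falling back to the
     * renderer screenshooter. */
    setup.test_quirks.disable_screenshot_fallback = true;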
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Convert ivi-shell-app-test.c to use `weston_ini_setup`. This also removes
the pre-made weston.ini and all the related code in the meson files.
Signed-off-by: Igor Matheus Andrade Torrente <igormtorrente@gmail.com>
Currently there is no standard way to write a weston.ini inside a test.
Here, two new functions, `weston_ini_setup` and `cfgln`, are introduced
to help the test writer compose a weston.ini file and load it into the
test. `internal-screenshot-test` is converted to use the new way of
writing a weston.ini; this conversion serves as an example and as an
initial API test.

The tester needs to call `weston_test_harness_execute_as_client` or
`weston_test_harness_execute_as_plugin` in the same way as before.
`weston_ini_setup` fills setup->config_file with the correct path to
the weston.ini file.

The main design goal is to avoid pre-made or build-generated weston.ini
files and keep each test as self-contained as possible.
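A minimal fixture sketch of the new helpers, loosely based on the
converted internal-screenshot-test (the config keys shown are
illustrative):

    static enum test_result_code
    fixture_setup(struct weston_test_harness *harness)
    {
            struct compositor_setup setup;

            compositor_setup_defaults(&setup);

            weston_ini_setup(&setup,
                             cfgln("[shell]"),
                             cfgln("startup-animation=%s", "none"),
                             cfgln("background-color=0x%08x", 0xCC336699));

            return weston_test_harness_execute_as_client(harness, &setup);
    }
    DECLARE_FIXTURE_SETUP(fixture_setup);

`cfgln` formats one line of the file printf-style, and `weston_ini_setup`
writes the lines out and points setup->config_file at the result.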
Closes: #410
Signed-off-by: Igor Matheus Andrade Torrente <igormtorrente@gmail.com>
This adds the first DRM-backend test. It is very simple
and was made in order to make it easier to add more complex
DRM-backend tests in the future.
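A rough sketch of what such a minimal fixture can look like, assuming
the compositor_setup fields from the test harness headers:

    static enum test_result_code
    fixture_setup(struct weston_test_harness *harness)
    {
            struct compositor_setup setup;

            compositor_setup_defaults(&setup);
            setup.backend = WESTON_BACKEND_DRM;
            setup.renderer = RENDERER_PIXMAN;

            return weston_test_harness_execute_as_client(harness, &setup);
    }
    DECLARE_FIXTURE_SETUP(fixture_setup);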
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
With this patch we add support for running DRM-backend tests locally
in the test suite. For now this won't work in CI, as there are no
cards available there, but the plan is to achieve this by using
VKMS (virtual KMS) in the future.

To run DRM-backend tests locally, the user first has to set the
environment variable WESTON_TEST_SUITE_DRM_DEVICE to 'card0', 'card1'
or whichever device the tests should run on. Also, for now this only
works when run as root, but in the future this restriction should be
lifted.

The tests run on a non-default seat, because we want to avoid opening
input devices unnecessarily. Also, since DRM-backend usage requires
gaining DRM master status on a DRM KMS device, nothing else must be
using the device at the same time. To achieve this we use a lock so
that the DRM-backend tests run sequentially.
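The serialization idea, sketched with an advisory file lock; the lock
path here is made up for illustration:

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    static int
    wait_for_drm_lock(void)
    {
            /* Hypothetical lock path, shared by all DRM-backend tests. */
            int fd = open("/tmp/weston-drm-test.lock",
                          O_CREAT | O_RDWR, 0666);

            if (fd < 0)
                    return -1;

            /* Blocks until the previous test drops the lock; released
             * automatically when the test process exits. */
            if (flock(fd, LOCK_EX) < 0) {
                    close(fd);
                    return -1;
            }

            return fd;
    }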
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
The test suite so far only deals with headless-backend tests.
In order to make it able to run DRM-backend tests, we have
to properly select the renderer it will use.

This patch adds the command line option --use-pixman if the test
defines the DRM-backend renderer as RENDERER_PIXMAN, and adds nothing
to the command line if it defines RENDERER_GL (GL is already the
DRM-backend default renderer). If the user defines the DRM-backend
renderer as RENDERER_NOOP, the test will fail (as it should, since
the DRM-backend does not implement a no-op renderer).
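The mapping described above, as a sketch; the helper name
get_drm_renderer_arg() is invented, while the renderer_type values
come from the test harness:

    #include <assert.h>
    #include <stddef.h>

    static const char *
    get_drm_renderer_arg(enum renderer_type renderer)
    {
            switch (renderer) {
            case RENDERER_PIXMAN:
                    return "--use-pixman";
            case RENDERER_GL:
                    return NULL; /* GL is already the DRM-backend default */
            case RENDERER_NOOP:
            default:
                    assert(0); /* DRM-backend has no no-op renderer */
                    return NULL;
            }
    }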
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
In the test suite we have some default options which are command line
arguments used by most of the tests. Two of these are width=320 and
height=240. But the DRM and fbdev backends do not accept width and
height as command line arguments, which makes it impossible to run
tests that use one of these backends: the compositor won't start if
the command line string is wrong.

Fix this by not passing the width and height command line arguments
if the backend is DRM or fbdev.
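The fix boils down to making the size arguments conditional on the
backend; a sketch assuming the harness helper prog_args_take() and
the compositor_setup fields:

    /* Sketch only; asprintf() needs _GNU_SOURCE and <stdio.h>. */
    static void
    maybe_add_size_args(struct prog_args *args,
                        const struct compositor_setup *setup)
    {
            char *tmp;

            /* The DRM and fbdev backends reject --width/--height. */
            if (setup->backend == WESTON_BACKEND_DRM ||
                setup->backend == WESTON_BACKEND_FBDEV)
                    return;

            if (asprintf(&tmp, "--width=%d", setup->width) >= 0)
                    prog_args_take(args, tmp);
            if (asprintf(&tmp, "--height=%d", setup->height) >= 0)
                    prog_args_take(args, tmp);
    }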
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
This test ensures that
"pixman-renderer: half-fix bilinear sampling on edges"
keeps on working.

Unlike in the original report
https://gitlab.freedesktop.org/wayland/weston/issues/373, here we use buffer
scale 2 instead of output scale 2 to trigger the bilinear filter. The effect
is the same; the actual resulting image in the failing case is just a little
different. This is so that it will be easy to add more viewport screenshooting
tests to this program in the future.
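For reference, a client triggers this path by declaring a buffer scale
on its surface; these are core wl_surface requests (buffer scale
requires wl_surface version 3):

    /* The attached buffer has twice the surface's width and height;
     * sampling it on a scale 1 output exercises the bilinear filter. */
    wl_surface_set_buffer_scale(surface, 2);
    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_damage(surface, 0, 0, width, height);
    wl_surface_commit(surface);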
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
There will be a new test program using viewports that would like to share
this bit of code.

There are two behavioral changes:
- The compositor wp_viewporter interface version is no longer checked.
- client_create_viewport() no longer leaks the viewporter object.

test_viewporter_double_create needs to call bind_to_singleton_global() itself
so that the viewporter object still exists when the error event arrives;
otherwise error verification fails.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
When a test fails and produces a difference image, also compute the min/max
per-channel signed difference statistics. These numbers can be used to adjust
the fuzz needed for fuzzy_match_pixels() to pass; otherwise one would have to
manually inspect the reference and result images to figure out the values.
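The statistics amount to tracking, per channel, the smallest and largest
signed difference over all pixels; a sketch for 8-bit RGBA images of
equal size:

    #include <stddef.h>
    #include <stdint.h>

    static void
    image_diff_stats(const uint8_t *ref, const uint8_t *res,
                     size_t npixels, int min[4], int max[4])
    {
            for (int c = 0; c < 4; c++) {
                    min[c] = 255;
                    max[c] = -255;
            }

            for (size_t i = 0; i < npixels * 4; i++) {
                    int d = (int)res[i] - (int)ref[i];
                    int c = i % 4;

                    if (d < min[c])
                            min[c] = d;
                    if (d > max[c])
                            max[c] = d;
            }
    }

The resulting ranges suggest how much fuzz fuzzy_match_pixels() needs.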
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This patch continues the buffer and output transform testing by iterating
through a representative selection of buffer transforms and scales.
For more details, see the previous patch "tests: add output transform tests".
https://gitlab.freedesktop.org/wayland/weston/issues/52
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This goes through all output transforms with two different buffer transforms
and verifies the visual output against reference images.
This commit introduces a new test input image 'basic-test-card.png'. It is a
small image with deliberately odd and indivisible dimensions to provoke bad
assumptions about image sizes. It contains red, green and blue areas which are
actually text that makes it very obvious if you have e.g. color channels
swapped. It has a white thick circle to highlight aspect ratio issues, and an
orange cross to show a mixed color. The white border is for contrast and a 1px
wide detail. The whole design makes it clear if the image happens to be rotated
or flipped in any way.
The image has one pixel wide transparent border so that bilinear sampling
filter near the edges of the image would produce the same colors with both
Pixman- and GL-renderers which handle the out-of-image samples fundamentally
differently: Pixman assumes (0, 0, 0, 0) samples outside of the image, while
GL-renderer clamps sample coordinates to the edge essentially repeating the
edge pixels.
It would have been "easy" to create a full matrix of
every output scale & transform x every buffer scale & transform, but that
would have resulted in 2 renderers * 8 output transforms * 3 output scales *
8 buffer transforms * 3 buffer scales = 1152 test cases that would all have
run strictly serially, because our test harness has no parallelism inside one
test program. That would have been slow to run, and would have needed a lot
more reference images too.
Instead, I chose to iterate separately through all output scales & transforms
(this patch) and all buffer scales & transforms (next patch). This limits the
number of test cases in this patch to 56, and allows the two test programs to
run in parallel.
I did not even pick all possible scale & transform combinations here, but just
what I think is a representative sub-set to hopefully exercise all the code
paths.
https://gitlab.freedesktop.org/wayland/weston/issues/52
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Running with the Mesa 20.1.0-devel (git-c7617d8908) GL renderer on a
Radeon RX 550 Series (POLARIS11, DRM 3.27.0, 4.19.0-2-amd64, LLVM 8.0.1),
I found the output-transform test (a future patch) to produce exactly this
much more difference between the Pixman and GL renderers.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
It turns out that if the client is not explicitly destroyed, it will remain
connected until the compositor shuts down, because there is no longer a
client process that would terminate.
Usually this is not a problem, but if a test file has multiple screenshooting
tests, the windows from earlier tests in the file will remain on screen. That
is not wanted, hence implement client destruction.
To properly destroy a client, we also need a list of outputs; they used to be
simply leaked. This does not fix wl_registry.global_remove for wl_outputs,
which is left for a time when a test actually needs it.
This patch makes only the ivi-shell-app test use the new client_destroy() to
show that it actually works. The added log scopes prove it: destroy requests
get
sent. Sprinkling client_destroy() around in all other tests is left for a time
when it is actually necessary.
ivi-shell-app is a nicely simple test doing little else, hence I picked it.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The string from get_test_name() can be used for writing screenshot files and
others. Starting the name with the fixture number makes an alphabetized listing
of output files look unorganized.
Let's change the test name to begin with the test (source) name with fixture
and element numbers as suffixes. That makes a file listing easier to look
through, when you have multiple tests each saving multiple screenshot files.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
A future test wants to access the fixture data array for the currently running
fixture index to log the test description. This patch provides access to the
array index.
Rather than adding more global variables, I changed the type of the existing
one, which feels slightly cleaner.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
With these, a test can initialize the headless-backend with a non-default
scale and transform, which allows testing output scales and transforms.
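Usage is then a couple of extra assignments in the fixture setup,
assuming these are the new compositor_setup fields:

    compositor_setup_defaults(&setup);
    setup.scale = 2;
    setup.transform = WL_OUTPUT_TRANSFORM_90;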
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Allow the reference image to be NULL or missing so that it does not even
attempt to load a reference image or compare it. You cannot just point the
reference image to an arbitrary image because the comparison functions can
abort due to size mismatch. This makes bootstrapping new tests easier when you
do not yet have a reference image.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The old name felt too... short.
The return type is changed to bool, which fits a success/failure result
better.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This will be useful in more tests.
No changes to the code, aside from dropping one 'static'.
Copyright 2017 is taken from git-blame of the moved code.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This adds the necessary fuzz to image matching to let GL-renderer pass.
The difference is due to rounding. weston-test-desktop-shell.c uses
weston_surface_set_color(dts->background_surface, 0.16, 0.32, 0.48, 1.);
to set the background color. Pixman-renderer will truncate those to uint8, but
GL-renderer seems to round instead, which causes the +1 in background color
channel values.
0.16 * 255 = 40.8
0.32 * 255 = 81.6
0.48 * 255 = 122.4
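
In C terms the off-by-one looks roughly like this:

    /* 0.16 * 255 = 40.8 */
    uint8_t truncated = (uint8_t)(0.16 * 255.0);      /* Pixman-like: 40 */
    uint8_t rounded = (uint8_t)(0.16 * 255.0 + 0.5);  /* GL-like: 41 */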
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This shall be used by CI due to https://gitlab.freedesktop.org/mesa/mesa/issues/2219
It defaults to true, meaning that people by default will be running the
GL-renderer tests. It works fine on hardware drivers, just not llvmpipe.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The fuzzy range will be used with GL-renderer testing, as it may produce
slightly different images than Pixman-renderer yet still correct results.
Such allowed differences are due to different rounding.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Releases touch devices and the seat if they were allocated, cleans up the
layers, and frees the weston_test structure.
Signed-off-by: Guillaume Champagne <champagne.guillaume.c@gmail.com>
Move the xwayland test to the new harness.

This is the only test that can actually skip. It does so by calling exit(77),
and that is fine because there is only one test case in the file so far. To
get rid of the exit() calls we would need to return a value from the TEST()
function, but that is big surgery for another time.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This migrates all the client tests that have nothing special in them to the new
test harness.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The devices test was actually using the defaults instead of
weston-test-desktop-shell in meson.build, so this patch keeps it that way.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
All plugin tests have been converted to the new harness, so the old definition
can be removed.
The one remaining test surface-screenshot is a manual test, the plugin only
installs a debug key binding. Hence it is open-coded as a normal plugin, not as
a test.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Moving to the new test harness.

The test ini file is carried along just to keep the test the same, even
though I noticed in passing that the test also succeeds with --no-config.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Moving to the new harness.
It would be possible to convert every case here into a separate PLUGIN_TEST,
but I did not see the value in that at this time.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The ivi-layout-test consists of two halves: the client and the plugin. This
migrates the test to the new test harness.

In the old harness, the plugin was built as the test in meson.build and it
fork & exec'd the client part. In the new harness, client tests start from
the client program, which sets up the compositor in-process, so now the
client is built as the test in meson.build and the plugin is just an
additional file.

Therefore there is no need for the plugin to fork & exec anything anymore,
so all that code is removed.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
These are the only remaining standalone non-ZUC tests. They do not need any
changes to be built with the new harness - in fact they have already been
running through the new harness.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Instead of relying on Meson setting up the environment so that Weston and the
tests find all their files, build those values into the tests. This way one
can execute a test program successfully without Meson, simply by running it.
The old environment variables are still honoured if set. This might change in
the future.
Baking the source or build directory paths into the tests should not regress
reproducible builds, because the binaries where test-config.h values are used
will not be installed.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This replaces the old test harness with a new one.
The old harness relied on fork()'ing each test, which makes tests independent
but also makes debugging them harder. The new harness runs client code in a
thread instead of a new process. A side-effect of no longer fork()'ing is
that any failure cuts the test series short. Fortunately we do not have any
tests that are expected to crash or fail.
The old harness executed 'weston' from Meson, with lots of setup as both
command line options and environment variables. The new harness executes
wet_main() instead: the test program itself calls the compositor main function
to execute the compositor in-process. Command line arguments are configured in
the test program itself, not in meson.build. Environment variables aside, you
are able to run a test by simply executing the test program, even if it is a
plugin test.
The new harness adds a new type of iteration: fixtures. For now, fixtures are
used to set up the compositor for tests that need a compositor. If necessary, a
fixture setup may include a data array of arbitrary type for executing the test
series for each element in the array. This will be most useful for running
screenshooting tests with both Pixman- and GL-renderers.
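A sketch of the fixture-array idea, modeled on how a screenshooting test
can iterate both renderers (details illustrative):

    static const int renderers[] = { RENDERER_PIXMAN, RENDERER_GL };

    static enum test_result_code
    fixture_setup(struct weston_test_harness *harness, const int *renderer)
    {
            struct compositor_setup setup;

            compositor_setup_defaults(&setup);
            setup.renderer = *renderer;
            setup.shell = SHELL_TEST_DESKTOP;

            return weston_test_harness_execute_as_client(harness, &setup);
    }
    DECLARE_FIXTURE_SETUP_WITH_ARG(fixture_setup, renderers);

The harness then runs every TEST() in the file once per array element.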
The new harness outputs TAP formatted results into stdout. Meson is not
switched to consume TAP yet though, because it would require a Meson version
requirement bump and would not have any benefits at this time. OTOH outputting
TAP is trivial and sets up a clear precedent of random test chatter belonging
to stderr.
This commit migrates only a few tests to actually make use of the new
features: roles is a basic client test, subsurface-shot is a client test that
demonstrates the fixture array, and plugin-registry is a plugin test. The
rest of the tests will be migrated later.
Once all tests are migrated, we can remove the test-specific setup from
meson.build, leaving only the actual build instructions in there.
The tests that have not been migrated and the stand-alone tests suffer only a
minor change: they no longer fork() for each TEST(); otherwise they keep
running as before.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
weston-test-runner.h includes wayland-util.h, therefore it needs
wayland-client. A partial dependency with just compile_args might have been
enough, as it does not seem to use functions from wayland-util.c, but it is
safer this way and does no harm.
Fixes: https://lists.freedesktop.org/archives/wayland-devel/2020-January/041149.html
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
test012 and test013 were exact duplicates of each other: asserting that
they could successfully look up a single boolean value.
Signed-off-by: Daniel Stone <daniels@collabora.com>
Wayland innovated a lot of cool things, but non-binary boolean values are not
one of the great advances of our time.

Make config_parser_get_bool() work on boolean values, and switch all its
users.
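After the change a caller gets a real bool; a hedged sketch, assuming
weston_config_section_get_bool() is the libweston entry point that now
takes bool (the key name and enable_feature() are illustrative):

    #include <stdbool.h>

    bool enabled;

    weston_config_section_get_bool(section, "enable-feature",
                                   &enabled, false);
    if (enabled)
            enable_feature(); /* hypothetical caller */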
Signed-off-by: Daniel Stone <daniels@collabora.com>
Instead of getting the weston_output from the frame_signal argument
'void *data', store the weston_output in the private data struct of the users
that listen to frame_signal. With this change we are able to pass
previous_damage as the data argument instead.
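The pattern, sketched with invented names (shooter, frame_notify,
read_pixels); container_of is the usual helper macro:

    struct shooter {
            struct weston_output *output;      /* stored when subscribing */
            struct wl_listener frame_listener;
    };

    static void
    frame_notify(struct wl_listener *listener, void *data)
    {
            struct shooter *shooter =
                    container_of(listener, struct shooter, frame_listener);
            pixman_region32_t *previous_damage = data;

            /* The output now comes from the private struct, freeing the
             * signal's data argument for previous_damage. */
            read_pixels(shooter->output, previous_damage);
    }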
Signed-off-by: Leandro Ribeiro <leandrohr@riseup.net>
Nothing is using FAIL_TEST or FAIL_TEST_P, and that is good. Remove them so
as not to encourage their use.
If we need a test that should fail, it always needs to fail in a very specific
way which needs to be checked. For this we have e.g. expect_protocol_error().
We never want a fail-test to pass because it failed in a way we did not expect.
Therefore these macros are useless.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Use a different section name to make sure that if this plugin is loaded into
the same process as where weston-test-runner.h is used, the two different
sections cannot get mixed up. This is just a precaution, but it removes a bit
of reader confusion as well.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This avoids confusing it with the opaque struct weston_test from
protocol/weston-test.xml.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Successful tests should just return, not call exit() which breaks the new test
harness when it uses TAP.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
When we move on to TAP, stdout will be reserved for TAP and stderr will be
for free-form chatter. Set an example: tests should use testlog() instead of
fprintf or printf to chat in the right place.

Most statements were already printing to stderr, so this just makes them a
little shorter. There are also some statements that printed to stdout; those
are now corrected.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>