As the project develops, some tests remain valuable while others become less so.
When we first started the reproducible builds work, testing reproducibility heavily
across multiple distros highlighted some unusual bugs and let us improve things.
We therefore currently run a-full with the targets:
I've noticed we pretty much always see the same set of failures with these
targets now, i.e. if one fails, they all fail in the same way.
Those targets are heavy builds which don't reuse sstate for one of the build
streams, so they put significant load on the autobuilder.
I'm thinking they've served their purpose and that a-full should go back to just
the randomly selected reproducible target.
Does anyone feel we shouldn't do that?