[Buildroot] testing/infra: generating tests for python packages

Ricardo Martincoski ricardo.martincoski at gmail.com
Thu Sep 14 02:05:19 UTC 2017


On Wed, Sep 13, 2017 at 06:31 PM, Yann E. MORIN wrote:

> On 2017-09-13 23:04 +0200, Arnout Vandecappelle spake thusly:
>> On 13-09-17 18:41, Yann E. MORIN wrote:
>> > 
>> > On 2017-09-13 00:29 -0300, Ricardo Martincoski spake thusly:
>> >> Thomas mentioned [1] you guys talked about tests for python packages being
>> >> generated by the test infra.
>> >>
>> >> I performed an experiment [2] ignoring where the data is coming from (whether
>> >> it comes from the package recipe or from inside the infra) and I start this
>> >> thread to follow up the discussion.
>>  I didn't look at your solution yet, Ricardo, but here are my high-level
>> observations.

Not really a solution. It was just an experiment/hack so don't bother to look.

The solution discussed in this thread looks much better. It gives the developer
of the test all the freedom regarding all the variables I mentioned (timeout,
regex, python2/python3/both, gcc version, ...).

>> > I also performed an experiment, which failed because I am no python
>> > expert, but basically it was something like (in pseudo code because I
>> > lost it and it anyway did not work):
>> > 
>> >     support/testing/tests/package/test_python-modules.py
>> > 
>> >         for dir, _, files in os.walk(os.path.join(buildroot_top_dir, 'package')):
>> >             pkg = os.path.basename(dir)
>> >             for f in files:
>> >                 if re.match('{}-test.py'.format(pkg), f):
>> >                     import_module(os.path.join(dir, f))
>> > 
>> > which basically scans the Buildroot package/ subdir to search for files

So you were not only trying to create a solution for python packages but one for
any package that needs runtime tests. Nice! So eventually test_dropbear could
be moved to package/dropbear.
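For reference, here is a runnable sketch of that scan. The function name and the import-by-path mechanics are mine, not the actual infra; it only illustrates how the per-package test files could be found and imported:

```python
import importlib.util
import os
import re

def discover_package_tests(buildroot_top_dir):
    """Scan package/ for per-package test files named <pkg>-test.py
    and import each one directly by path."""
    modules = []
    for dirpath, _, files in os.walk(os.path.join(buildroot_top_dir, 'package')):
        pkg = os.path.basename(dirpath)
        for f in files:
            if re.match(r'{}-test\.py$'.format(re.escape(pkg)), f):
                path = os.path.join(dirpath, f)
                # Import the file directly; no __init__.py needed anywhere
                spec = importlib.util.spec_from_file_location(
                    'test_' + pkg.replace('-', '_'), path)
                mod = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(mod)
                modules.append(mod)
    return modules
```

Importing by file path like this sidesteps the sys.path and __init__.py problems discussed further down, at the cost of not going through the normal package machinery.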

And as a consequence, a patch adding a new package for which having a runtime
test makes sense would look like this:

package/<package>/Config.in ++++++
package/<package>/<package>.mk ++++++
package/<package>/<package>.hash +++++++
package/<package>/<package>.test or test_<package>.py or <package>.py ++++
package/Config.in +1
.gitlab-ci.yml +1

I think this beats the downside that the test data is spread over the tree.

We can even call flake8 from check-package for .py/.test files, as Arnout once
suggested.

>> > matching the glob 'PKG_NAME-test.py' (with PKG_NAME replaced by the
>> > package name, obviously), and imports them one by one.
>>  Looks like the right thing to do to me. However, to be consistent with the
>> existing tests, the pattern should be test_pkg.py, and the package name should
>> go through - to _ conversion.
> And to be consistent with the existing .mk and .hash files, the test
> file should start with the package name. So, we have two incompatible
> consistency requirements. One will have to win and the other to lose.
> And I think the best would be to keep the .mk and .hash scheme, because
> this is in the package directory.

nose2 defaults to looking for test_*.py, but this can be customized by adding
'test-file-pattern = *.py' to unittest.cfg.
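Sketching what such a unittest.cfg fragment could look like ([unittest] is nose2's standard config section; the pattern value is the one mentioned above):

```ini
[unittest]
# Match any .py file during discovery, not just test_*.py
test-file-pattern = *.py
```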

>> > Of course, one may be more inventive with the tests.
>> > 
>> > And in the end, it would have been all integrated in the current infra.
>> > 
>> > But alas, that import stuff I could not make to work... :-( Maybe there
>> > is a node function to tell it where to load extra tests from?

There is a 'code-directories' option that can be added to unittest.cfg, but I
couldn't get it working easily. Needs further investigation.
I guess it needs to be relative to the path passed to -t, which defaults to the
path passed to -s, but then all current imports inside the test infra would
have to be changed to relative imports or to contain the new full path, i.e.
'import support.testing.infra.basetest', and probably some __init__.py files
would need to be created.

>>  We can just add support/testing to sys.path.
> Except that did not work when I tried... :-/
> But I also tried setting package/ in sys.path and it did not work
> either... Meh, someone will have to teach me some python magic... ;-)
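For what it's worth, a minimal sketch of the sys.path approach (the helper name and the directory layout are assumptions; the real infra may differ):

```python
import importlib
import os
import sys

def load_test_module(buildroot_top_dir, name):
    """Make modules under support/testing importable, then import one.

    'name' is a dotted module path relative to support/testing,
    e.g. 'infra.basetest' (assumed layout).
    """
    testing_dir = os.path.join(buildroot_top_dir, 'support', 'testing')
    if testing_dir not in sys.path:
        # Prepend so it wins over any same-named module elsewhere
        sys.path.insert(0, testing_dir)
    return importlib.import_module(name)
```

Note that this only works for modules living directly under the added directory (or in subdirectories that are proper packages with __init__.py), which may be why the earlier attempts failed.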
>> > An alternative would be to go with per-package test files, like
>> > explained above, but have the support/testing/run-test script do the scan
>> > before calling nose, and for each file it finds, do a symlink (or a
>> > copy) in the testing infra, in a special directory that is git-ignored?
>>  Bwerk, hack...

Agree, but beware of the possible overpopulation of __init__.py files, see below.
Perhaps there is a way to avoid it, but I don't know of one.

> Maybe, but very, very easy and trivial!
> We can even keep the PKG_NAME-test.py (or PKG_NAME.test) and symlink
> that as test_PKG_NAME.py to respect the nose naming scheme.

That should work if we symlink each .py file to a git-ignored path.

The other way around would be to have a symlink to package/, but AFAIK that
would require an __init__.py file in the package/ dir and also in each path
leading to the test files. So a lot of empty files, at least one per package
that has a runtime test.
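A hedged sketch of what run-test could do before invoking nose2, following the symlink-and-rename idea above (the function name and destination directory are mine, not an existing part of the infra):

```python
import os

def symlink_tests(buildroot_top_dir, dest_dir):
    """Symlink each package/<pkg>/<pkg>-test.py into dest_dir as
    test_<pkg>.py, so nose2's default test_*.py discovery finds it.
    dest_dir is expected to be git-ignored."""
    os.makedirs(dest_dir, exist_ok=True)
    pkg_root = os.path.join(buildroot_top_dir, 'package')
    for dirpath, _, _files in os.walk(pkg_root):
        pkg = os.path.basename(dirpath)
        src = os.path.join(dirpath, '{}-test.py'.format(pkg))
        if os.path.exists(src):
            # Apply the - to _ conversion for the nose naming scheme
            dst = os.path.join(dest_dir,
                               'test_{}.py'.format(pkg.replace('-', '_')))
            if os.path.lexists(dst):
                os.remove(dst)  # refresh a stale link from a previous run
            os.symlink(src, dst)
```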

>> > Now, completely unrelated to the above, when we want to test each module
>> > to ensure they are correct and have the required dependencies, we need
>> > to have a configuration for each of the modules, which means doing one
>> > build per module we want to test, each build including building the
>> > python interpreter. This will make for a very long test campaign,
>> > indeed... :-/
>>  We don't care that it takes a long time. It's not intended to be run
>> sequentially anyway. And in gitlab, all tests run in parallel (but only on 4
>> builders IIRC). As long as the tests on gitlab finish within a day or so, we're
>> golden.
> Yep, but we currently have about 224 python packages, so if each needs
> 30 minutes to build and run, that's about 5 days to run the full
> suite... Just sayin'...

I think Arnout has a point here. With the 4 free runners in parallel it should
finish in about 28 hours.
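The estimate checks out, using the numbers from this thread:

```python
builds = 224        # approximate number of python packages
minutes_each = 30   # rough build+run time per configuration
runners = 4         # free parallel gitlab runners

total_hours = builds * minutes_each / 60 / runners
print(total_hours)  # 28.0 hours of wall-clock time
```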

