[ptest-runner] Added output processing to pytest


zangrc
 

Quite a few open-source projects use pytest for their tests, and pytest
output does not match the output format that automake-style ptests
produce, so the pytest output needs to be post-processed. The following
is the solution suggested by Richard Purdie:

teach ptest-runner how to handle the alternate output
format (triggered by run-ptest-XXX instead of run-ptest)

Signed-off-by: Zang Ruochen <zangrc.fnst@...>
---
utils.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 56 insertions(+), 3 deletions(-)

diff --git a/utils.c b/utils.c
index a8ba190..fa4a483 100644
--- a/utils.c
+++ b/utils.c
@@ -74,6 +74,33 @@ check_allocation1(void *p, size_t size, char *file, int line, int exit_on_null)
}
}

+char *
+get_ptest_path(const char *dir, const char *d_name)
+{
+	struct stat st_buf;
+	char *run_ptest_default;
+	char *run_ptest_pytest;
+
+	if (asprintf(&run_ptest_default, "%s/%s/ptest/run-ptest",
+		     dir, d_name) == -1) {
+		return NULL;
+	}
+
+	if (asprintf(&run_ptest_pytest, "%s/%s/ptest/run-ptest-pytest",
+		     dir, d_name) == -1) {
+		free(run_ptest_default);
+		return NULL;
+	}
+
+	/* Prefer the pytest wrapper when it exists. */
+	if (stat(run_ptest_pytest, &st_buf) != -1) {
+		free(run_ptest_default);
+		return run_ptest_pytest;
+	}
+
+	free(run_ptest_pytest);
+	return run_ptest_default;
+}

struct ptest_list *
get_available_ptests(const char *dir)
@@ -129,8 +156,8 @@ get_available_ptests(const char *dir)
continue;
}

- if (asprintf(&run_ptest, "%s/%s/ptest/run-ptest",
- dir, d_name) == -1) {
+ run_ptest = get_ptest_path(dir, d_name);
+ if (run_ptest == NULL) {
fail = 1;
saved_errno = errno;
free(d_name);
@@ -282,7 +309,22 @@ run_child(char *run_ptest, int fd_stdout, int fd_stderr)
close(fd_stderr); /* try using to see if this fixes bash run-read. rwm todo */
close_fds();

-	execv(run_ptest, argv);
+	if (is_end_with(run_ptest, "run-ptest-pytest") == 1) {
+		char *cmd;
+		char pytest_append[] = "| sed -e 's/\\[...%\\]//g'| sed -e 's/PASSED/PASS/g'| sed -e 's/FAILED/FAIL/g'|sed -e 's/SKIPPED/SKIP/g'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\"){printf \"%s: %s\\n\", $NF, $0}else{print}}'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\") {$NF=\"\";print $0}else{print}}'";
+		if (asprintf(&cmd, "sh %s %s", run_ptest, pytest_append) == -1) {
+			exit(-1);
+		}
+		if (system(cmd) == -1) {
+			free(cmd);
+			exit(-1);
+		}
+		free(cmd);
+	} else {
+		if (execv(run_ptest, argv) == -1) {
+			exit(-1);
+		}
+	}

/* exit(1); not needed? */
}
@@ -400,6 +442,17 @@ setup_slave_pty(FILE *fp) {
return (slave);
}

+int
+is_end_with(const char *str, const char *suffix)
+{
+	size_t str_len = strlen(str);
+	size_t suffix_len = strlen(suffix);
+
+	if (suffix_len > str_len) {
+		return 0;
+	}
+	return strcmp(str + str_len - suffix_len, suffix) == 0;
+}

int
run_ptests(struct ptest_list *head, const struct ptest_options opts,
--
2.17.1


Alexander Kanavin
 

On Fri, 22 May 2020 at 05:54, zangrc <zangrc.fnst@...> wrote:
> +               char pytest_append[] = "| sed -e 's/\\[...%\\]//g'| sed -e 's/PASSED/PASS/g'| sed -e 's/FAILED/FAIL/g'|sed -e 's/SKIPPED/SKIP/g'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\"){printf \"%s: %s\\n\", $NF, $0}else{print}}'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\") {$NF=\"\";print $0}else{print}}'";

Is it possible to process the output directly, rather than tweak it via sed/awk shell pipelines that are very difficult to read?

Alex


 

Paul Barker <pbarker@...>
 

On Fri, 22 May 2020 at 10:26, Alexander Kanavin <alex.kanavin@...> wrote:

> On Fri, 22 May 2020 at 05:54, zangrc <zangrc.fnst@...> wrote:
>
>> + char pytest_append[] = "| sed -e 's/\\[...%\\]//g'| sed -e 's/PASSED/PASS/g'| sed -e 's/FAILED/FAIL/g'|sed -e 's/SKIPPED/SKIP/g'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\"){printf \"%s: %s\\n\", $NF, $0}else{print}}'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\") {$NF=\"\";print $0}else{print}}'";
>
> Is it possible to process the output directly, rather than tweak it via sed/awk shell pipelines that are very difficult to read?

Another option could be to generate the output in the correct format
directly from Python using something like this module which I wrote a
few years back:
https://gitlab.com/b5/BetaTest/betatest/-/blob/master/betatest/amtest.py

Thanks,

--
Paul Barker
Konsulko Group


Ross Burton <ross@...>
 

On Fri, 22 May 2020 at 10:29, Paul Barker <pbarker@...> wrote:

> On Fri, 22 May 2020 at 10:26, Alexander Kanavin <alex.kanavin@...> wrote:
>
>> On Fri, 22 May 2020 at 05:54, zangrc <zangrc.fnst@...> wrote:
>>
>>> + char pytest_append[] = "| sed -e 's/\\[...%\\]//g'| sed -e 's/PASSED/PASS/g'| sed -e 's/FAILED/FAIL/g'|sed -e 's/SKIPPED/SKIP/g'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\"){printf \"%s: %s\\n\", $NF, $0}else{print}}'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\") {$NF=\"\";print $0}else{print}}'";
>>
>> Is it possible to process the output directly, rather than tweak it via sed/awk shell pipelines that are very difficult to read?
>
> Another option could be to generate the output in the correct format
> directly from Python using something like this module which I wrote a
> few years back:
> https://gitlab.com/b5/BetaTest/betatest/-/blob/master/betatest/amtest.py

Yes, this, please.

I endorsed this approach on the oe-devel list when this first came up,
and I'm really pleased you already implemented it.

We could have a recipe in oe-core with this in, or just drop it into
the python recipe directly.

Ross


 

Paul Barker <pbarker@...>
 

On Fri, 22 May 2020 at 14:25, Ross Burton <ross@...> wrote:

> On Fri, 22 May 2020 at 10:29, Paul Barker <pbarker@...> wrote:
>
>> On Fri, 22 May 2020 at 10:26, Alexander Kanavin <alex.kanavin@...> wrote:
>>
>>> On Fri, 22 May 2020 at 05:54, zangrc <zangrc.fnst@...> wrote:
>>>
>>>> + char pytest_append[] = "| sed -e 's/\\[...%\\]//g'| sed -e 's/PASSED/PASS/g'| sed -e 's/FAILED/FAIL/g'|sed -e 's/SKIPPED/SKIP/g'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\"){printf \"%s: %s\\n\", $NF, $0}else{print}}'| awk '{if ($NF==\"PASS\" || $NF==\"FAIL\" || $NF==\"SKIP\" || $NF==\"XFAIL\" || $NF==\"XPASS\") {$NF=\"\";print $0}else{print}}'";
>>>
>>> Is it possible to process the output directly, rather than tweak it via sed/awk shell pipelines that are very difficult to read?
>>
>> Another option could be to generate the output in the correct format
>> directly from Python using something like this module which I wrote a
>> few years back:
>> https://gitlab.com/b5/BetaTest/betatest/-/blob/master/betatest/amtest.py
>
> Yes, this, please.
>
> I endorsed this approach on the oe-devel list when this first came up,
> and I'm really pleased you already implemented it.
>
> We could have a recipe in oe-core with this in, or just drop it into
> the python recipe directly.

It's packaged on pypi: https://pypi.org/project/betatest/

I need to do a new release as I tidied a few things up and added a
subtest wrapper after I published v0.1.0. This gives me a kick to get
that done :)

Thanks,

--
Paul Barker
Konsulko Group


Ross Burton <ross@...>
 

On Fri, 22 May 2020 at 14:31, Paul Barker <pbarker@...> wrote:
>> We could have a recipe in oe-core with this in, or just drop it into
>> the python recipe directly.
>
> It's packaged on pypi: https://pypi.org/project/betatest/
>
> I need to do a new release as I tidied a few things up and added a
> subtest wrapper after I published v0.1.0. This gives me a kick to get
> that done :)

Awesome. I endorse a pypi recipe shipping that, which recipes can then re-use.

The next question is how to integrate that runner with pytest.

Ross


Anibal Limon
 



On Fri, 22 May 2020 at 08:37, Ross Burton <ross@...> wrote:
On Fri, 22 May 2020 at 14:31, Paul Barker <pbarker@...> wrote:
> > We could have a recipe in oe-core with this in, or just drop it into
> > the python recipe directly.
>
> It's packaged on pypi: https://pypi.org/project/betatest/
>
> I need to do a new release as I tidied a few things up and added a
> subtest wrapper after I published v0.1.0. This gives me a kick to get
> that done :)

Awesome.  I endorse a pypi recipe shipping that, which recipes can then re-use.

The next question is how to integrate that runner with pytest.

I like the idea of formatting the output at the pytest level. In OEQA there is the OETestResult class; that is where the changes would need to be made to add an option to use the AM format.


Regards,
Anibal
 



zangrc
 

Is it possible, for now, to keep adding ptest support for the python-XX recipes in the old way, wait for the new recipe to be integrated (its timing is uncertain), and then convert the output processing uniformly? Or should the work of adding ptests be suspended until that solution is ready?

--
Zang Ruochen

-----Original Message-----
From: yocto@... <yocto@...> On Behalf Of Ross Burton
Sent: Friday, May 22, 2020 9:37 PM
To: Paul Barker <pbarker@...>
Cc: Alexander Kanavin <alex.kanavin@...>; Zang, Ruochen/臧 若尘 <zangrc.fnst@...>; Yocto discussion list <yocto@...>; Anibal Limon <anibal.limon@...>
Subject: Re: [yocto] [ptest-runner] Added output processing to pytest

On Fri, 22 May 2020 at 14:31, Paul Barker <pbarker@...> wrote:
We could have a recipe in oe-core with this in, or just drop it into
the python recipe directly.
It's packaged on pypi: https://pypi.org/project/betatest/

I need to do a new release as I tidied a few things up and added a
subtest wrapper after I published v0.1.0. This gives me a kick to get
that done :)
Awesome. I endorse a pypi recipe shipping that, which recipes can then re-use.

The next question is how to integrate that runner with pytest.

Ross