[Automated-testing] Test Stack Survey for tbot

Tim.Bird at sony.com
Tue Oct 2 21:07:15 PDT 2018


> -----Original Message-----
> From: Heiko Schocher 
> 
> == Survey Questions ==
> * What is the name of your test framework?
> 
> '''tbot'''

Thanks for the response.  I have a question or two to help me understand
your system.  Please forgive any misunderstandings.

(Also, I've created the page https://elinux.org/Tbot_survey_response
for this response).

> Which of the aspects below of the CI loop does your test framework
> perform?
> 
> '''
> Not an easy question, as tbot does commandline automation, so in
> principle you can automate all commandline tasks ... tbot does not
> differentiate between the DUT and a lab PC. For example, compiling a Linux
> kernel (which is a testcase) can be run on the DUT (a very good RAM test
> if you have mounted the rootfs over NFS), or the testcase "build linux"
> can be executed on a build PC ...
> '''

It kind of strikes me as the 'Forth' of test systems, where larger functionality
is built out of user-defined smaller units.  Let me know if I've misunderstood this.

...
> 
> ** if so, what data format or language is used (e.g. yaml, json, shell script)
> 
> '''The testcases are written in python.'''
> 
> Does your test definition include:
> * source code (or source code location)?
> 
...
> * cleanup instructions?
> 
> '''not in tbot itself, but possible in testcases'''
> 
> ** if anything else, please describe:
> 
> '''
> Maybe this fits into "execution instructions", I am unsure.
> 
> Tbot contains testcases for dumping register content from
> "start addr" to "end address" (under Linux with devmem2, in
> U-Boot with the md command, or for the BDI debugger with the "md"
> command) into a file. For example, a resulting pinmux file
> for the BBB:
> 
> https://github.com/hsdenx/tbot/blob/master/src/files/bbb/am335x_pinmux.reg

> 
> This file can then be used in another testcase, which
> reads the register addresses from the file, reads the current
> value at each address on the DUT and compares it with the value
> in the file. So, if you boot a new Linux kernel or U-Boot, you can
> check all register values you are interested in, and be sure
> your new kernel has the same values as your reference.
> '''

Where would this file be stored?  Does tbot provide the
rendezvous location for data to be transferred between tests
or from one test instance to another?  Or is the file location
also determined by the testcase authors?
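For readers unfamiliar with the idea, a register-comparison check along these lines might look roughly as follows. The per-line "address value" file format and the `read_register` callback are assumptions for illustration, not tbot's actual API or the real .reg file format:

```python
# Sketch of a register-comparison check. The file format and helper
# names here are illustrative assumptions, not tbot's real API.

def parse_reg_file(text):
    """Parse lines like '0x44e10800 0x00000027' into (addr, value) pairs."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        addr, value = line.split()[:2]
        pairs.append((int(addr, 16), int(value, 16)))
    return pairs

def compare_registers(reference, read_register):
    """Return (addr, expected, actual) for every mismatching register.

    In a real testcase, read_register(addr) would wrap devmem2 on the
    DUT (or md in U-Boot) and return the current value at addr.
    """
    mismatches = []
    for addr, expected in reference:
        actual = read_register(addr)
        if actual != expected:
            mismatches.append((addr, expected, actual))
    return mismatches
```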

> 
> Does your test system:
> * provide a set of existing tests?
> 
> '''Yes'''
> 
> ** if so, how many?
> 
> '''
> hs at xmglap:tbot  [master] $ find src/tc/ -type f | wc -l
> 302
> hs at xmglap:tbot  [master] $
> 
> in mainline. There are more from customers, easy to add. Just create
> new subdirectories in [1]. Name of testcase = filename, filename
> must be unique.
> '''
> 
> ==== build management ====
> Does your test system:
> * build the software under test (e.g. the kernel)?
> 
> '''yes'''
> 
> * build the test software?
> 
> '''Hmm... tbot itself does not need to build the testcases. But you can
> write testcases which build, for example, a C file ... put it onto
> the target in some way, execute the binary ... and so on ...'''

Would this c file source for a test program to be executed on
the DUT reside in the tbot repository, or be loaded
from somewhere else?
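As a sketch of what such a "compile, deploy, run" testcase might drive on the commandline (the paths, host name, and the use of plain gcc/scp/ssh are illustrative assumptions, not tbot's API):

```python
import subprocess

def deploy_steps(src, dut, workdir="/tmp"):
    """Build the command sequence: compile src, copy it to the DUT,
    execute it there. All names here are illustrative."""
    binary = src.rsplit(".", 1)[0]
    return [
        ["gcc", "-o", binary, src],                       # or a cross-gcc
        ["scp", binary, "{0}:{1}/".format(dut, workdir)],
        ["ssh", dut, "{0}/{1}".format(workdir, binary)],
    ]

def run_steps(steps):
    """Run each step in order; raises CalledProcessError on failure."""
    for cmd in steps:
        subprocess.check_call(cmd)
```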

> * build other software (such as the distro, libraries, firmware)?
> 
> '''There are testcases which check out, for example, yocto, bitbake it,
> and install the new SD card image on the DUT, check
> if the new image boots, and execute commandline tests on the newly
> booted image (incl. checking that the new image is really the just
> created image, through the /etc/timestamp yocto generates).'''
> 
> * support cross-compilation?
> 
> '''not task of tbot, but yes. I use tbot for weekly U-Boot mainline
> tests (incl. automating my maintainers work
...
> 
> * require a toolchain or build system for the SUT?
> 
> '''
> not required. If you have a way of building the images you
> need for your DUT, then use tbot only for installing the new images
> on the DUT and testing the new image.
> '''
> 
> * require a toolchain or build system for the test software?
> 
> '''for tbot you need python 2.7, and the python module paramiko must
> be installed on the device where you start tbot. This need not
> be a big workstation; a raspberry pi, for example, is enough.
> '''
Is tbot running on a host, or on the 'lab pc' (mentioned below)?
Or both?

> 
> * come with pre-built toolchains?
> 
> '''no'''
> 
> * store the build artifacts for generated software?
> 
> '''possible, if you write a testcases for it.'''
> 
> ** in what format is the build metadata stored (e.g. json)?
> 
> '''currently, for U-Boot, it simply copies raw files into a subdirectory if
> all testcases finished successfully.

Is this location standardized across many tests, or just defined
by testcase authors?  How would a new testcase, that used the
build artifacts from a previous testcase, know where to find them?
Would it just be hardcoded into the new testcase?

> 
> Linux / Yocto not yet (at least in mainline testcases)'''
> 
> ** are the build artifacts stored as raw files or in a database?
> 
> '''In principle all is possible, if you have a command for this task
> on the commandline ... write a testcase for it.'''
> 
> *** if a database, what database?
> 
> ==== Run artifact handling ====
> Does your test system:
> * store run artifacts
> 
> '''
> no, tbot itself does not do this.
> 
> But ... There are testcases, which generate with the help of gnuplot
> images (for example analyse top or latency output).
> 
> All files which tbot generates are at the end in a results directory.
> '''

I don't understand this response.  Is there a standard place that
tests in tbot place their results (like log output)?  For example, 
does the python API for testcases provide support for a test
writing to some directory, without the testcase itself identifying it?

> ** in what format?
> 
> '''txt, jpg, pdf, html'''
> 
> * put the run meta-data in a database?
> 
> '''Not easy to answer... tbot itself no. But ...
> 
> You can write Event-Backends (the name is not perfect). tbot calls these
> backends at the end of execution. In a backend, you can convert the
> information tbot collected while running (so-called events ... not a really
> good name) into whatever format you want. An event can be, for example:
> - start/end of a testcase
> - all characters received/sent on the connections which tbot opens
> - Testcases can generate events (for example, if a U-Boot version is
>    detected, the testcase generates a U-Boot version event).
> 
> For example, there is a backend for a MySQL database, which stores some
> information the user is interested in into a MySQL database. Another
> backend
> creates a statistic image with gnuplot, another a dependency graph with
> the dot tool, or a jenkins backend, which creates a junit.xml for jenkins ...
> 
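To make the Event-Backend idea concrete, a backend that turns end-of-testcase events into junit-style XML might look roughly like this. The event dict shape is an assumption for illustration; tbot's real event and backend API may differ:

```python
def junit_backend(events):
    """Render end-of-testcase events as minimal junit-style XML.

    Assumes each event is a dict with "type", "name" and a boolean
    "result" key; this shape is illustrative, not tbot's real format.
    """
    cases = []
    for ev in events:
        if ev.get("type") != "end_testcase":
            continue
        body = "" if ev["result"] else "<failure/>"
        cases.append('<testcase name="{0}">{1}</testcase>'.format(
            ev["name"], body))
    return '<testsuite tests="{0}">{1}</testsuite>'.format(
        len(cases), "".join(cases))
```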
...
> 
> ** if so, which database?
> * parse the test logs for results?
> 
> '''I do not know what you mean here. In tbot you start a command on
> a connection, and your testcase parses the output of the command and
> decides if it is good or bad, so yes.'''
> 
> * convert data from test logs into a unified format?
> 
> '''yes, see above.'''
> 
> ** if so, what is the format?
> 
> 
> Others are (hopefully easy) possible.'''
> 
> * evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)?
> 
> '''no. A testcase always comes back with True or False.'''
> 
> * do you have a common set of result names: (e.g. pass, fail, skip, etc.)
> 
> '''No names. True/False only'''
> 
> ** if so, what are they?
> 
> * How is run data collected from the DUT?
> 
> '''
> only "console" output is logged.
> 
> But again ... if you need more, write a testcase for it ...
> You can, for example, start tbot instances in parallel. For example:
> 
> - start a testcase over a serial console on the DUT (some time-intensive
>    benchmark).
> - start a second tbot instance, which logs into the DUT over ssh and
>    starts on this connection a testcase which monitors the output of
>    the top command and generates at the end a graph of the top values you
>    are interested in with gnuplot.
> '''
> 
> ** e.g. by pushing from the DUT, or pulling from a server?
> * How is run data collected from external equipment?
> 
> '''not used yet, but possible'''
> 
> * Is external equipment data parsed?
> 
> '''
> no
> 
> Seems I have for each topic the same answer in my mind ...
> (write a testcase for it ...)
> '''
> 
> ==== User interface ====
> Does your test system:
> * have a visualization system?
> 
> '''no'''
> 
> * show build artifacts to users?
> 
> '''no'''
> 
> * show run artifacts to users?
> 
> '''no'''
> 
> * do you have a common set of result colors?
> 
> '''no'''
> 
> ** if so, what are they?
> * generate reports for test runs?
> 
> '''yes'''
> 
> * notify users of test results by e-mail?
> 
> '''
> no .. heh, but I thought about sending automated emails if
> tbot finds that a new patch in my patchwork ToDo list does
> not apply to mainline U-Boot or is not checkpatch clean ...
> 
> But I think this should be a human task ;-)
> '''
> 
> * can you query (aggregate and filter) the build meta-data?
> 
> '''no'''
> 
> * can you query (aggregate and filter) the run meta-data?
> 
> '''no'''
> 
> * what language or data format is used for online results presentation? (e.g.
> HTML, Javascript, xml,
> etc.)
> 
> '''
> tbot itself does not do this; it is a commandline tool, but it
> can fill a MySQL database with results ... or deliver results to
> jenkins or buildbot.'''
> 
> * what language or data format is used for reports? (e.g. PDF, excel, etc.)
> 
> '''simple txt log file, if no backend is enabled.'''
> 
> * does your test system have a CLI control tool?
> 
> '''yes'''
> 
> ** what is it called?
> 
> '''tbot :-D '''
> 
> ==== Languages: ====
> Examples: json, python, yaml, C, javascript, etc.
> * what is the base language of your test framework core?
> 
> '''python'''
> 
> What languages or data formats is the user required to learn?
> (as opposed to those used internally)
> 
> '''python and, of course, as tbot does commandline automation, the
> user must know the commands on the commandline.'''
> 
> ==== Can a user do the following with your test framework: ====
> * manually request that a test be executed (independent of a CI trigger)?
> 
> '''yes'''
> 
> * see the results of recent tests?
> 
> '''while running, yes with verbose option.'''
> 
> * set the pass criteria for a test?
> 
> '''
> Heh... never thought about that, but yes!
> 
> You can pass values of variables used in testcases through
> commandline parameters ... so yes, that could be done.
> '''
> 
> ** set the threshold value for a benchmark test?
> 
> '''see above, yes.'''
> 
> ** set the list of testcase results to ignore?
> 
> '''no'''
> 
> * provide a rating for a test? (e.g. give it 4 stars out of 5)
> 
> '''no'''
> 
> * customize a test?
> ** alter the command line for the test program?
> 
> '''possible if the testcase uses variables'''
> 
> ** alter the environment of the test program?
> 
> '''no'''
> 
> ** specify to skip a testcase?
> 
> '''no'''
> 
> ** set a new expected value for a test?
> 
> '''through variables passing from command line ... yes, possible.'''
> 
> ** edit the test program source?
> 
> '''Hmm.. the testcase is the testprogram ...'''
> 
> * customize the notification criteria?
> 
> '''no'''
> 
> ** customize the notification mechanism (eg. e-mail, text)
> * generate a custom report for a set of runs?
> 
> '''no'''
> 
> * save the report parameters to generate the same report in the future?
> 
> '''no...
> 
> Hmm... the hacker in me says yes ... the events tbot collects contain
> all the data ... and they are stored in the background in a file ... and you
> can save this file and feed it into tbot with the option "-e" ... so it
> should generate the same info again ...'''
> 
> 
> ==== Requirements ====
> Does your test framework:
> * require minimum software on the DUT?
> 
> '''depends on the testcases you want to execute!
> 
> If you have no software on the DUT, you can call testcases (only BDI yet)
> which install software on the DUT with a debugger. Or, for example,
> on imx based devices call:
> 
...
> 
> for using imx_usb_loader to breathe life into a device ...
> 
> But as tbot does commandline automation, you need at least some sort
> of a commandline.'''
> 
> * require minimum hardware on the DUT (e.g. memory)
> 
> '''depends on the testcases you want to execute'''
> 
> ** If so, what? (e.g. POSIX shell or some other interpreter, specific libraries,
> command line tools,
> etc.)
> 
> '''depends on the testcases you want to execute!'''
> 
> * require agent software on the DUT? (e.g. extra software besides
> production software)
> 
> '''no'''
> 
> ** If so, what agent?
> * is there optional agent software or libraries for the DUT?
> 
> '''no'''
> 
> * require external hardware in your labs?
> 
> '''Yes. You need at least a device to which the DUT is connected. This
> device is called the "lab PC". This can be, for example, a raspberry pi.
> 
> You can of course attach more than one DUT to one lab PC.
> 
> tbot connects to this lab PC over ssh (but tbot can of course also
> run on the lab PC itself). The lab PC must have a connection to the DUT
> for the console, and it must be possible to power the DUT on/off
> from the lab PC. Therefore tbot always opens 2 connections over
> ssh to the lab PC (this is the reason why tbot uses the paramiko
> module: with paramiko you can open as many channels as you like
> over one ssh connection).

Is this required for every test?

What is the name of the system where tbot is running?  (I'll call that
the 'host').  Does tbot always require a test host (where tbot is running)
and a lab PC (that controls the DUT)?

For example, in the following test:
https://github.com/hsdenx/tbot/blob/master/src/tc/linux/tc_lx_get_version.py

what machine has the python interpreter that is executing this script?
* the host (where tbot is?)
* the lab pc?
* the DUT?

If different statements are run on different machines, can you explain which are
running where?

Is the connection between the host and the labpc always ssh?
Does the host connect to the DUT directly, or only through the labpc?

> 
> Both tasks are from tbots view just calling a testcase!
> 
> So, you need to know for your DUT, on the lab PC's commandline, how to:
> 
> power on/off the DUT:
>    -> write a tbot testcase for it
>    -> use this testcase for powering on/off the DUT. You can config
>       in the lab config file, which testcase is needed for the DUT.
> 
> Connect to console:
>   -> write a testcase for it and use this testcase for connecting
>      to the DUT. Configure this also in the lab config file.
> 
>      Remark:
>      This need not necessarily be a serial line.
>      Of course, if your bootloader needs a serial line and you want to
>      do bootloader tests ... you need a serial line ... but you can
>      use, for example, an ssh testcase for connecting to the DUT.
> 
>      A more complicated setup (which I have, for hardware reasons) is
>      also possible. There, I first have to ssh from the lab PC to another
>      PC and then start kermit there ... yes, write a testcase for this
>      scenario and use it as the "console testcase".
> '''

I think understanding the relationship between the host, the labPC and the DUT
in a tbot-based system will help me understand the system architecture better.

Thanks,
 -- Tim


