[Automated-testing] Test Stack Survey - TCF

Perez-Gonzalez, Inaky inaky.perez-gonzalez at intel.com
Wed Oct 17 15:54:21 PDT 2018


Hi Tim

    TB> Thanks.  I posted this as:
    TB> https://elinux.org/TCF_survey_response

    TB> Please review it to see if I mangled anything.

    TB> Some questions inline below:

    >> -----Original Message----- From: Tim Orling
    >> 
    >> = TCF Testing survey response - TCF survey response provided by
    >> Iñaky Perez-Gonzalez and Tim Orling
    >> 
    >> TCF - Test Case Framework
    >> 
    >> TCF provides support to automate any kind of steps that a human
    >> would do otherwise, typing in the console or issuing command
    >> line commands.
    >> 
    >> TCF will:
    TB> This and other use of future tense has me a bit confused.  Are
    TB> all the features listed in the response currently supported?
    TB> Is this framework in actual use now?

This is my bad -- I filled in the information in two sessions and I
might have mangled it.

Yes, all the features listed are currently supported; we use it
internally at Intel for testing Zephyr (http://zephyrproject.org) and
are expanding to other areas.

    >> * (server side) provide remote access to DUTs (metadata, power
    >> switching, debug bridges, serial console i/o, networking
    >> tunnels, flashing support)
    >> 
    >> * discover testcases
    >> 
    >> * associate testcases with DUT/groups-of-DUTs where it can run.
    >> 
    >> * run each testcase/group-of-DUTs combination in parallel
    >> and evaluate the testcase steps to determine success; report.
    >> 
    >> Which of the aspects below of the CI loop does your test
    >> framework perform?
    >> 
    >> Jenkins is
    TB> ???

No matter how much you review, you miss the obvious:

Jenkins is used to trigger, check out code and launch the testcases
using this framework's command line, which will (as needed) use
remote hardware exposed by the daemon.
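
For example, the CI job step boils down to invoking the command line
tool on the checked out tree, something like this (path and flags
illustrative):

    $ tcf run -v tests/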

[I asked for an account on elinux.org so I can edit]

    >> Does your test framework: ==== source code access ==== * access
    >> source code repositories for the software under test?  No, must
    >> be checked out by triggering agent (eg: Jenkins)
    >> 
    >> * access source code repositories for the test software?  No,
    >> must be checked out by triggering agent (eg: Jenkins)
    >> 
    >> * include the source for the test software?  (if meaning the
    >> software that implements the system) YES,
    >> http://github.com/intel/tcf
    >> 
    >> (if meaning the software needed to test features) NO, that is
    >> to be provided by the user depending on what they want to test;
    >> convenience libraries provided.
    >> 
    >> * provide interfaces for developers to perform code reviews?
    >> NO, left for gerrit, github, etc -- can provide reports to such
    >> systems
    >> 
    >> * detect that the software under test has a new version?  NO,
    >> left to triggering agent (eg: Jenkins)
    >> 
    >> ** if so, how? (e.g. polling a repository, a git hook, scanning
    >> a mail list, etc.)  * detect that the test software has a new
    >> version?  NO, left to triggering agent (eg: Jenkins) or others
    >> 
    >> ==== test definitions ==== Does your test system:
    >> 
    >> * have a test definition repository?  NO, out of scope.  Left to
    >> the user to define their tests in any way or form (native
    >> Python; drivers can be created for other formats). TCF will
    >> scan the provided repository for test definitions
    TB> Is there no meta-data associated with a test at all?  I
    TB> presume from this answer that all tests are Linux executables,
    TB> that run on the DUT?

I might not have understood the question, but this definitely requires
rewording on my part.

The framework is not tied to any OS, or any DUT type for that matter
(it can be used to test a toaster).

Test definitions can have associated metadata (the declarative part);
it is optional, but in most cases some form of it is needed for the
test to be useful. There is also an imperative part that actually
implements the test steps.

Natively, the framework takes testcases written as Python classes.
Loaders can be written to read testcases in whatever format is
needed, providing an API for the framework to access the metadata and
execute the imperative steps (in whatever format they are written).

Test definitions are thus contained, for example:

- in files in the file system (most common, in a source-controlled
  repository that is checked out by a CI agent such as Jenkins)
  
- in some other container that is somehow accessible via a driver
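
To make the native format concrete, a minimal testcase would look
roughly like this (tcfl.tc.tc_c is the actual base class; the
decorator spec and console call are illustrative):

    import tcfl.tc

    @tcfl.tc.target("zephyr_board")        # declarative: what DUT this needs
    class _test(tcfl.tc.tc_c):             # native format: a Python class
        def eval(self, target):            # imperative: an evaluation step
            target.expect("Hello World!")  # wait for it on the DUT's console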

    TB> How do you match a test to a DUT if there's no meta-data to
    TB> support this?  Where would this meta-data reside in the
    TB> system?


...
    >> The steps to implement on each phase are implemented as Python
    >> methods of a subclass of class tcfl.tc.tc_c (if using the
    >> native format) or might be implemented by a driver for a
    >> specific format (eg: Zephyr's testcase.yaml loader).

    TB> From this description, I'm starting to get the impression that
    TB> TCF has only imperative (code) and no declarative (data) items
    TB> in your test definitions.  Is that right?

Oh, I get what you mean now -- I updated my paragraph above.
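
So the testcase carries both: declarative metadata plus imperative
phase methods, schematically (method and argument names approximate):

    import tcfl.tc

    @tcfl.tc.target()                 # declarative metadata
    class _test(tcfl.tc.tc_c):
        def build(self, target):      # imperative: build phase
            pass
        def deploy(self, target):     # imperative: deploy phase
            pass
        def eval(self, target):       # imperative: evaluation phase
            pass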

    ...
    >> 
    >> * deploy the test program to the DUT?  Depends on the testcase;
    >> provides facilities to do so
    >> 
    >> * prepare the test environment on the DUT?  DUT driver specific
    >> and testcase specific.  Provides facilities to reset DUTs to a
    >> well-known state
    >> 
    >> * start a monitor (another process to collect data) on the DUT?
    >> YES, serial console by default; any other available by driver

    TB> ??  Does a process on the DUT monitor the DUT's serial
    TB> console?  I'm not following this.  The idea of a Monitor from
    TB> the glossary is that a monitor on the DUT collects local
    TB> information, and a monitor for 3rd party equipment collects
    TB> off-DUT information.  Finally, the test runner or the DUT
    TB> controller may grab the serial console from the DUT.

    TB> Can you elaborate this?

The DUT is not modified or altered at all (trying to keep Heisenberg
at bay); there is an off-DUT monitor that collects DUT outputs
(eg: serial console output, netconsole output) and the test runner
can access it to make decisions.
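
From the testcase, that captured output is what gets evaluated; e.g.
(console API names approximate):

    # read what the off-DUT monitor captured from the serial console
    output = target.console.read()
    if 'PANIC' in output:
        raise tcfl.tc.failed_e("DUT panicked during the test")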

I will fix that wording too.

    >> 
    >> * start a monitor on external equipment?  Testcase specific.
    >> Other equipment is treated as other DUT hardware that can be
    >> used and manipulated to accomplish whatever the test needs
    >> 
    >> * initiate the test on the DUT?  YES
    >> 
    >> * clean up the test environment on the DUT?  NO.  Allows
    >> post-analysis; left to tests to create a clean
    >> well-known environment before starting
    >> 
    >> ==== DUT control ==== Does your test system: * store board
    >> configuration data?  YES ** in what format?  Defined internally
    >> in Python dictionaries, REST exported as JSON

    TB> Is DUT configuration stored in a filesystem somewhere?  Or is
    TB> it just enclosed in some other code (maybe a DUT driver?)

    >> 
    >> * store external equipment configuration data?  YES, all
    >> equipment is treated as a DUT
    >> 
    >> ** in what format?  same
    >> 
    >> * power cycle the DUT?  YES
    >> 
    >> * monitor the power usage during a run?  CAN do if proper
    >> equipment is attached and test configures and communicates with
    >> it
    >> 
    >> * gather a kernel trace during a run?  CAN do if test monitors
    >> proper outputs
    >> 
    >> * claim other hardware resources or machines (other than the
    >> DUT) for use during a test?  YES.  Testcase declares the
    >> resources (DUTs) needed and the runner will claim them all
    >> before manipulating them
    >> 
    >> * reserve a board for interactive use (ie remove it from
    >> automated testing)?  NO.  Single reservation system --
    >> automation can claim another one
    >> 
    >> * provide a web-based control interface for the lab?  NO,
    >> cmdline interfaces; a web interface is doable as a layer on top
    >> 
    >> * provide a CLI control interface for the lab?  YES
    >> 
    >> ==== Run artifact handling ==== Does your test system: * store
    >> run artifacts?  NO, left to trigger layer (eg Jenkins) ** in
    >> what format?  * put the run meta-data in a database?  YES,
    >> plugin-based reporting mechanism; current plugins available for
    >> text files, MongoDB, JUnit
    >> 
    >> ** if so, which database?  MongoDB (plugin)
    >> 
    >> * parse the test logs for results?  test specific
    >> 
    >> * convert data from test logs into a unified format?  test
    >> specific -- test can choose to parse internal logs and produce
    >> reporting using the TCF report API for results and KPIs
    >> 
    >> ** if so, what is the format?  internal format that gets passed
    >> in real time to each loaded reporting plugin; such plugins
    >> will store it in whatever native format they support
    TB> Interesting.  Is this done at test execution time?  It sounds
    TB> like there is no "run artifact" store in TCF.

Correct; each test runs on its own thread, the calls to the reporting
API get passed to all the configured report drivers, and it is up to
each of them to decide how (or whether) to store the information.

One driver might save the console output from the multiple DUTs used
in the testcase in a unified format (so they can all be seen on the
same timeline), while another might capture the data to postprocess
and decide, in a separate process, that this testcase run has to be
reported to A or B or C for other operations.
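
A report driver is just a Python object whose hooks the runner calls
as testcases report; a hypothetical sketch (hook name and signature
approximate, not the exact TCF API):

    import tcfl.tc

    class report_to_log_c(tcfl.tc.report_driver_c):
        # hook the runner calls for each report; signature approximate
        def report(self, testcase, tag, message, attachments):
            with open('/tmp/tcf-results.log', 'a') as f:
                f.write('%s %s: %s\n' % (tag, testcase, message))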


    TB> It sounds like in either case, there is a Jenkins job activated
    TB> by a Jenkins CI trigger.  And that this job calls TCF to do
    TB> the processing.

Yes, correct -- Jenkins or any other CI agent -- or a human going to
reproduce what the CI agent did to triage, diagnose and fix a
problem.

I commonly use it to validate my patches, making sure they run on all
the HW where they have to work.
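
For example, running by hand the same thing CI would (flags and test
file name illustrative):

    $ tcf run -vv tests/test_my_feature.py

and the runner claims whichever matching DUTs are free.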

    TB> Is the TCF test code inside the same repository at the
    TB> software under test?  -- Tim

Nope, it is an external tool.


