Initial commit.
- qa-tools public release which includes:
- trace-based coverage tool
- quality metrics measurement and tracking setup
- associated in-source documentation.
Signed-off-by: Basil Eljuse <basil.eljuse@arm.com>
diff --git a/coverage-tool/docs/code_cov_diag.jpg b/coverage-tool/docs/code_cov_diag.jpg
new file mode 100644
index 0000000..d2f3f63
--- /dev/null
+++ b/coverage-tool/docs/code_cov_diag.jpg
Binary files differ
diff --git a/coverage-tool/docs/design_overview.md b/coverage-tool/docs/design_overview.md
new file mode 100644
index 0000000..89c4b66
--- /dev/null
+++ b/coverage-tool/docs/design_overview.md
@@ -0,0 +1,78 @@
+# Design overview
+
+This document explains the overall design approach to the trace-based code coverage tool.
+
+## Motivation
+
+The primary motivation for this code coverage tool is that no commercial off-the-shelf (COTS) tools can readily be used to measure code coverage for firmware components, especially those meant for memory-constrained platforms. Most tools rely on the traditional approach where the code is instrumented to enable the coverage measurement. For firmware components designed for memory-constrained platforms, code size is a key consideration, and the need to change memory maps to accommodate the instrumented code is seen as a pain point. A possible alternative is to perform the coverage measurement on emulation platforms, which could remove the memory limitations. However, this requires more platform-specific code to be supported in the firmware for the emulation platform.
+
+The above factors led to a design approach to measure the code coverage based on execution trace, without the need for any code instrumentation. This approach provides the following benefits:
+- allows the user to test the real software stack without worrying about memory constraints: no code is instrumented, so the real software is used during the coverage run.
+- allows the user to test on real platforms rather than partial system emulations: coverage information can be obtained without expensive modelling or porting effort.
+
+
+## Known Limitations
+
+The following limitations are understood to exist with the trace-based coverage tool:
+
+- This works only with non-relocatable code, where the execution address of an instruction can easily be mapped to the addresses determined from the generated binaries. Position-independent code can still be handled if the location binding happens at build time, since the post-processing stage can then perform the same mapping.
+- The accuracy of coverage information mapped to the source code is limited by the completeness of the DWARF signatures embedded in the binaries: with higher levels of code optimisation the DWARF signatures become sparse, especially when the generated code is optimised for size. This solution therefore works best with compiler optimisation turned off.
+- This is currently proven to work on FVPs (Fixed Virtual Platforms). Early prototyping shows the approach can also work on silicon platforms, but this needs further development.
+
+
+## Design Details
+The following diagram outlines the individual components involved in the trace-based coverage tool.
+
+![Code coverage diagram](./code_cov_diag.jpg)
+
+The following changes are needed at each of the stages to enable this code coverage measurement tool to work.
+
+### Compilation stage
+
+The coverage tool relies on the DWARF signatures embedded within the binaries generated for the firmware that runs as part of the coverage run. In the case of the GCC toolchain, this is enabled by adding the `-g` flag during compilation.
+
+The `-g` flag embeds DWARF signatures within the binaries, as seen in the example below:
+```
+100005b0 <tfm_plat_get_rotpk_hash>:
+tfm_plat_get_rotpk_hash():
+/workspace/workspace/tf-m-build-config/trusted-firmware-m/platform/ext/common/template/crypto_keys.c:173
+100005b0: b510 push {r4, lr}
+/workspace/workspace/tf-m-build-config/trusted-firmware-m/platform/ext/common/template/crypto_keys.c:174
+100005b2: 6814 ldr r4, [r2, #0]
+```
+
+### Trace generation stage
+
+The coverage tool relies on the generation of the execution trace from the target platform (in our case an FVP). It uses the coverage trace plugin, an MTI-based custom plugin that registers for the trace source type `INST` and dumps a filtered set of the instruction data executed during the coverage run. For silicon platforms, trace capture is expected to use tools such as DSTREAM-ST.
+
+See the [Coverage Plugin](./plugin_design.md) documentation to learn more about this custom plugin.
+
+The following diagram shows an example trace capture output from the coverage trace plugin:
+```
+[PC address, times executed, opcode size]
+0010065c 1 4
+00100660 1 4
+00100664 1 2
+00100666 1 2
+...
+```
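Each line of this capture is therefore a (PC address, execution count, opcode size) triple. A minimal parsing sketch in Python (not the actual tool code; the field layout is assumed from the example above):

```python
# Sketch: parse coverage-trace lines of the form
# "<hex PC> <times executed> <opcode size in bytes>".
def parse_trace(text):
    records = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip header and blank lines
        pc, count, size = parts
        records.append((int(pc, 16), int(count), int(size)))
    return records

capture = """0010065c 1 4
00100660 1 4
00100664 1 2
00100666 1 2"""

records = parse_trace(capture)
# Set of executed instruction addresses, used later when mapping to source lines.
executed = {pc for pc, count, _ in records if count > 0}
print(len(executed))  # -> 4
```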
+
+### Post-processing stage
+
+In this stage coverage information is generated by:
+1. Determining the instructions executed from the trace output captured.
+2. Mapping those instructions to source code by utilising the DWARF signatures embedded within the binaries.
+3. Generating LCOV *.info* files, allowing the coverage information to be reported with the LCOV tool and reports from multiple runs to be merged.
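A simplified sketch of steps 1 and 2, with a hand-written address-to-line mapping standing in for the data extracted from the DWARF signatures (all addresses and file names here are invented):

```python
# Hypothetical DWARF-derived mapping: instruction address -> (file, line).
addr_to_line = {
    0x10065C: ("crypto_keys.c", 173),
    0x100660: ("crypto_keys.c", 173),
    0x100664: ("crypto_keys.c", 174),
    0x100668: ("crypto_keys.c", 175),
}

# Step 1: addresses seen in the captured trace.
executed = {0x10065C, 0x100660, 0x100664}

# Step 2: a source line is covered if any instruction belonging to it executed.
covered = {}
for addr, (path, line) in addr_to_line.items():
    key = (path, line)
    covered[key] = covered.get(key, False) or addr in executed

for (path, line), hit in sorted(covered.items()):
    print(path, line, hit)
```

Step 3 then serialises this per-line coverage into the LCOV info format.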
+
+### Typical steps to integrate trace-based coverage tool to CI setup
+
+- Generate the DWARF binary (elf or axf) files at the build stage using the `-g` flag or equivalent compiler switches.
+- Build the coverage plugin using the corresponding PVLIB_HOME library for the 64-bit compiler and deploy it in your CI to be used during execution.
+- Use the coverage plugin during FVP execution by providing the additional parameters. See [here](./plugin_user_guide.md#capturing-a-trace).
+- Clone the sources in your local workspace if not already there.
+- The generated trace logs, the DWARF binary files, the bin utilities (objdump and readelf from the same toolchain that produced the DWARF binary files) and the source code are used as input to *intermediate_layer.py* to generate the intermediate JSON layer.
+- *generate_info_file.py* parses the intermediate JSON layer file into an info file that can be read by the genhtml binary from LCOV.
+- Optionally use *merge.py* to merge multiple coverage info files into a combined report.
+
+## License
+[BSD-3-Clause](../../license.md)
+
diff --git a/coverage-tool/docs/plugin_user_guide.md b/coverage-tool/docs/plugin_user_guide.md
new file mode 100644
index 0000000..5c600dd
--- /dev/null
+++ b/coverage-tool/docs/plugin_user_guide.md
@@ -0,0 +1,30 @@
+# coverage-plugin User Guide
+
+The *coverage-plugin* is a C++ project that uses the Model Trace Interface Plugin Development Kit (MTIPDK) to create a trace plugin, which is a special shared library. Trace plugins can be loaded into Arm Fast Models to produce execution trace data for code coverage measurement.
+
+## Dependencies
+- GCC 7.5.0 or later
+
+## Building the coverage-plugin
+```bash
+$ cd coverage-plugin
+$ make PVLIB_HOME=</path/to/model_library>
+```
+
+## Capturing a trace
+
+You need to add two options to your model command-line:
+
+```bash
+ --plugin /path/to/coverage_trace.so
+ -C TRACE.coverage_trace.trace-file-prefix="/path/to/TRACE-PREFIX"
+```
+
+You can then run your FVP model. The traces will be created at the end of the simulation (see the note below).
+
+BEWARE: Traces aren't numbered and will be overwritten by successive runs. Aggregating results requires moving the traces to a separate place or changing the prefix between runs. This is the responsibility of the plugin user.
+
+NOTE: The plugin captures the traces in memory and writes the data to file on termination of the simulation. If the user terminates the simulation forcefully with Ctrl+C, the trace files are not generated.
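The overwrite caveat can be handled by a small run wrapper; a sketch (the FVP invocations are elided, and the trace file name is an invented stand-in for what the plugin writes under the configured prefix):

```shell
# Keep traces from successive runs apart so they can be aggregated later.
mkdir -p traces/run1 traces/run2
# ...first FVP run would go here, producing TRACE-PREFIX-covtrace-*.log...
touch TRACE-PREFIX-covtrace-0.log   # stand-in for a generated trace file
mv TRACE-PREFIX-covtrace-*.log traces/run1/
# ...second FVP run would go here...
touch TRACE-PREFIX-covtrace-0.log
mv TRACE-PREFIX-covtrace-*.log traces/run2/
ls traces/run1 traces/run2
```

Changing `trace-file-prefix` between runs achieves the same separation without the moves.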
+
+## License
+[BSD-3-Clause](../../license.md)
diff --git a/coverage-tool/docs/reporting_user_guide.md b/coverage-tool/docs/reporting_user_guide.md
new file mode 100644
index 0000000..7c0deea
--- /dev/null
+++ b/coverage-tool/docs/reporting_user_guide.md
@@ -0,0 +1,357 @@
+# coverage-reporting User Guide
+
+The *coverage-reporting* component is a collection of Python and bash scripts that generate LCOV HTML-based code coverage reports for C source code. There are two stages to this process:
+
+1. Converting the information from the execution traces of the FVP (captured using the coverage-plugin) and the DWARF signatures from the elf/axf files into an intermediate JSON file.
+
+2. Converting the intermediate JSON file into an info file that can be read by the LCOV utilities to produce a code coverage HTML report. Merge utility scripts are provided to combine multiple info files into a single report covering multiple runs.
+
+## Intermediate JSON file
+This is a JSON file that pairs the information embedded in the elf files by virtue of the DWARF signatures, including the source code line numbers, with the execution trace log files from the coverage-plugin. Only the lines that were compiled and linked into the final binaries are referenced by the DWARF signatures, so the coverage information is always measured against the compiled code that made it into the binary. The tool needs a configuration JSON file as input with the metadata required to perform the coverage computation. The layout of this configuration file is shown below:
+```json
+{
+ "configuration":
+ {
+        "remove_workspace": "<true> if the workspace must be removed from the path of the source files",
+ "include_assembly": "<true> to include assembly source code in the intermediate layer"
+ },
+ "parameters":
+ {
+ "objdump": "<Path> to the objdump binary to handle DWARF signatures",
+ "readelf": "<Path> to the readelf binary to handle DWARF signatures",
+ "sources": [
+ {
+ "type": "git",
+ "URL": "<URL> git repo",
+ "COMMIT": "<Commit id>",
+ "REFSPEC": "<Refspec>",
+ "LOCATION": "<Folder> within 'workspace' where this source is located"
+ },
+ {
+ "type": "http",
+ "URL": "<URL> link to file",
+ "COMPRESSION": "xz",
+ "LOCATION": "<Folder within 'workspace' where this source is located>"
+ }
+ ],
+ "workspace": "<Workspace folder> where the source code was located to produce(build) the elf/axf files",
+ "output_file": "<Intermediate json layer output file name and location>",
+ "metadata": {"metadata_1": "metadata value"}
+ },
+ "elfs": [
+ {
+ "name": "<Full path name to elf/axf file>",
+ "traces": [
+                "<Full path name to the trace file>"
+ ]
+ }
+ ]
+}
+```
+
+Here is an example of an actual configuration JSON file:
+
+```json
+{
+ "configuration":
+ {
+ "remove_workspace": true,
+ "include_assembly": true
+ },
+ "parameters":
+ {
+ "objdump": "gcc-arm-none-eabi-7-2018-q2-update/bin/arm-none-eabi-objdump",
+ "readelf": "gcc-arm-none-eabi-7-2018-q2-update/bin/arm-none-eabi-readelf",
+ "sources": [
+ {
+ "type": "git",
+ "URL": "https://git.trustedfirmware.org/TF-M/trusted-firmware-m.git/",
+ "COMMIT": "2ffadc12fb34baf0717908336698f8f612904",
+ "REFSPEC": "",
+ "LOCATION": "trusted-firmware-m"
+ },
+ {
+ "type": "git",
+ "URL": "https://mucboot.com/mcuboot.git",
+ "COMMIT": "507689a57516f558dac72bef634723b60c5cfb46b",
+ "REFSPEC": "",
+ "LOCATION": "mcuboot"
+ },
+ {
+ "type": "git",
+ "URL": "https://tf.org/mbed/mbed-crypto.git",
+ "COMMIT": "1146b4589011b69a6437e6b728f2af043a06ec19",
+ "REFSPEC": "",
+ "LOCATION": "mbed-crypto"
+ }
+ ],
+ "workspace": "/workspace/workspace/tf-m",
+ "output_file": "output_file.json"
+ },
+ "elfs": [
+ {
+ "name": "mcuboot.axf",
+ "traces": [
+ "reg-covtrace*.log"
+ ]
+ },
+ {
+ "name": "tfms.axf",
+ "traces": [
+ "reg-covtrace*.log"
+ ]
+ },
+ {
+ "name": "tfmns.axf",
+ "traces": [
+ "reg-covtrace*.log"
+ ]
+ }
+ ]
+}
+```
+
+
+The script depends on the objdump and readelf binaries from the *same* toolchain that was used to build the elf binaries being tested.
+It can then be invoked as:
+
+```bash
+$ python3 intermediate_layer.py --config-json <config json file> [--local-workspace <path to local folder/workspace where the source files are located>]
+```
+The *local-workspace* option must be given if the current path to the source files differs from the workspace where the build (compiling and linking) happened. The latter is embedded in the DWARF signatures, while the former is used to produce the coverage report. Recreating the local workspace is not a strict requirement, but without it the program cannot find the line numbers of the functions within the source files. In addition, **ctags** must be installed (e.g. **sudo apt install exuberant-ctags**).
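The workspace translation can be pictured as a simple prefix rewrite; a sketch (the actual logic in *intermediate_layer.py* may differ, and the paths below are invented):

```python
def to_local_path(dwarf_path, build_workspace, local_workspace):
    """Re-anchor a build-time source path (as recorded in the DWARF
    signatures) under the local checkout so the report can find it."""
    if dwarf_path.startswith(build_workspace):
        return local_workspace + dwarf_path[len(build_workspace):]
    return dwarf_path  # path outside the workspace: leave untouched

print(to_local_path(
    "/workspace/workspace/tf-m/mcuboot/boot1.c",
    "/workspace/workspace/tf-m",
    "/home/user/tf-m"))
# -> /home/user/tf-m/mcuboot/boot1.c
```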
+
+The output is an intermediate json file with the following format:
+
+```json
+{
+ "configuration": {
+ "elf_map": {
+ "binary name 1": 0,
+ "binary name 2": 1
+ },
+ "metadata": {
+ "property 1": "metadata value 1",
+ "property 2": "metadata value 2"
+ },
+ "sources": [{
+ "type": "<git or http>",
+ "URL": "<url for the source>",
+ "COMMIT": "<commit id for git source>",
+            "REFSPEC": "<refspec for the git source>",
+ "LOCATION": "<folder to put the source>"
+ }]
+ },
+ "source_files": {
+ "<Source file name>": {
+ "functions": {
+ "line": "<Function line number>",
+ "covered": "<true or false>"
+ },
+ "lines": {
+ "<line number>": {
+ "covered": "<true or false>",
+ "elf_index": {
+ "<Index from elf map>": {
+ "<Address in decimal>": [
+ "<Assembly opcode>",
+ "<Number of times executed>"
+ ]
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
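A consumer of this format can derive simple line-coverage figures; a sketch against a hand-written minimal example (the `functions` and `elf_index` details are omitted here):

```python
# Minimal intermediate-layer-shaped data, hand-written for illustration.
intermediate = {
    "source_files": {
        "mcuboot/boot1.c": {
            "lines": {
                "12": {"covered": True},
                "19": {"covered": True},
                "25": {"covered": False},
            }
        }
    }
}

# Count covered lines per source file.
for path, info in intermediate["source_files"].items():
    lines = info["lines"]
    hit = sum(1 for entry in lines.values() if entry["covered"])
    print(f"{path}: {hit}/{len(lines)} lines covered")
```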
+
+An example snippet of an intermediate JSON file is here:
+
+```json
+{
+ "configuration": {
+ "elf_map": {
+ "bl1": 0,
+ "bl2": 1,
+ "bl31": 2
+ },
+ "metadata": {
+ "BUILD_CONFIG": "tf1",
+ "RUN_CONFIG": "tf2"
+ },
+ "sources": [
+ {
+ "type": "git",
+ "URL": "https://git.trustedfirmware.org/TF-M/trusted-firmware-m.git/",
+ "COMMIT": "2ffadc12fb34baf0717908336698f8f612904",
+ "REFSPEC": "",
+ "LOCATION": "trusted-firmware-m"
+ },
+ {
+ "type": "git",
+ "URL": "https://mucboot.com/mcuboot.git",
+ "COMMIT": "507689a57516f558dac72bef634723b60c5cfb46b",
+ "REFSPEC": "",
+ "LOCATION": "mcuboot"
+ },
+ {
+ "type": "git",
+ "URL": "https://tf.org/mbed/mbed-crypto.git",
+ "COMMIT": "1146b4589011b69a6437e6b728f2af043a06ec19",
+ "REFSPEC": "",
+ "LOCATION": "mbed-crypto"
+ }
+ ]
+ },
+ "source_files": {
+ "mcuboot/boot1.c": {
+ "functions": {
+ "arch_setup": true
+ },
+ "lines": {
+ "12": {
+ "covered": true,
+ "elf_index": {
+ "0": {
+ "6948": [
+ "b2760000 \torr\tx0, x0, #0x400",
+ 1
+ ]
+ }
+ }
+ },
+ "19": {
+ "covered": true,
+ "elf_index": {
+ "0": {
+ "6956": [
+ "d65f03c0 \tret",
+ 1
+ ]
+ }
+ }
+ }
+ }
+ },
+... more lines
+```
+
+
+
+## Report
+LCOV uses **info** files to produce an HTML report; hence the intermediate JSON file must be converted to an **info** file:
+```bash
+$ python3 generate_info_file.py --workspace <Workspace where the C source folder structure resides> --json <Intermediate json file> [--info <path and filename for the info file>]
+```
+As mentioned above, the *workspace* option tells the program where to look for the source files; it is thus a requirement that the local workspace is populated.
+
+This will generate an info file, *coverage.info*, that can be fed to LCOV to generate the final coverage report as below:
+
+```bash
+$ genhtml --branch-coverage coverage.info --output-directory <HTML report folder>
+```
+
+Here is an example snippet of an info file:
+
+```bash
+TN:
+SF:/home/projects/initial_attestation/attestation_key.c
+FN:213,attest_get_instance_id
+FN:171,attest_calc_instance_id
+FN:61,attest_register_initial_attestation_key
+FN:137,attest_get_signing_key_handle
+FN:149,attest_get_initial_attestation_public_key
+FN:118,attest_unregister_initial_attestation_key
+FNDA:1,attest_get_instance_id
+FNDA:1,attest_calc_instance_id
+FNDA:1,attest_register_initial_attestation_key
+FNDA:1,attest_get_signing_key_handle
+FNDA:1,attest_get_initial_attestation_public_key
+FNDA:1,attest_unregister_initial_attestation_key
+FNF:6
+FNH:6
+BRDA:71,0,0,0
+BRDA:71,0,1,1
+...<more lines>
+```
+
+Refer to the [geninfo manual page](http://ltp.sourceforge.net/coverage/lcov/geninfo.1.php) for the meaning of these record types.
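In outline, FN records give a function's start line and name, FNDA its execution count, and FNF/FNH the totals of functions found and hit. A parsing sketch (the snippet below is hand-written, not real tool output):

```python
# Sketch: extract function records from LCOV info-format text.
info_text = """TN:
SF:/home/projects/initial_attestation/attestation_key.c
FN:213,attest_get_instance_id
FN:171,attest_calc_instance_id
FNDA:1,attest_get_instance_id
FNDA:0,attest_calc_instance_id
FNF:2
FNH:1
end_of_record"""

functions, hits = {}, 0
for line in info_text.splitlines():
    if line.startswith("FN:"):
        start, name = line[3:].split(",", 1)
        functions[name] = int(start)       # function -> start line
    elif line.startswith("FNDA:"):
        count, name = line[5:].split(",", 1)
        if int(count) > 0:
            hits += 1                      # function was executed

print(len(functions), hits)  # -> 2 1  (matches FNF:2 / FNH:1)
```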
+
+## Wrapper
+There is a wrapper bash script that generates the intermediate JSON file, creates the info file and produces the LCOV report in one step:
+```bash
+$ ./branch_coverage.sh --config config_file.json --workspace <local workspace> --outdir html_report
+```
+
+## Merge files
+There is a utility wrapper that can merge json and info files to produce merged code coverage:
+```bash
+$ ./merge.sh -j <input json file> [-l <filename for report>] [-w <local workspace>] [-c to indicate to recreate workspace from sources]
+```
+This utility needs an input JSON file listing the json/info files to be merged:
+```json
+{ "files" : [
+ {
+        "id": "<unique project id (string) that the json and info files belong to>",
+ "config":
+ {
+ "type": "<'http' or 'file'>",
+ "origin": "<URL or folder where the json files reside>"
+ },
+ "info":
+ {
+ "type": "<'http' or 'file'>",
+ "origin": "<URL or folder where the info files reside>"
+ }
+ },
+....More of these json objects
+ ]
+}
+```
+This utility merges the files, recreates the C source folder structure and produces the LCOV reports for the merged files. It can translate the workspace paths of each info file to the local workspace in case the info files come from different workspaces. The only requirement is that all the info files come from the **same** sources, i.e. the same repositories.
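Conceptually, merging coverage data amounts to summing execution counts per source line across runs; a sketch with invented (file, line) to count maps (the real *merge.py* operates on whole json/info files):

```python
from collections import Counter

def merge_runs(*runs):
    """Sum per-line execution counts across coverage runs.
    Each run maps (source file, line number) -> times executed."""
    total = Counter()
    for run in runs:
        total.update(run)  # Counter.update adds counts, not replaces
    return total

release = {("boot1.c", 12): 1, ("boot1.c", 19): 0}
regression = {("boot1.c", 12): 3, ("boot1.c", 25): 1}

merged = merge_runs(release, regression)
print(merged[("boot1.c", 12)])  # -> 4
print(merged[("boot1.c", 19)])  # -> 0 (never executed in either run)
```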
+
+Example snippet of an input json file:
+
+```json
+{ "files" : [
+ {
+ "id": "Tests_Release_BL2",
+ "config":
+ {
+ "type": "file",
+ "origin": "/home/workspace/150133/output_file.json"
+ },
+ "info":
+ {
+ "type": "file",
+ "origin": "/home/workspace/150133/coverage.info"
+ }
+ },
+ {
+ "id": "Tests_Regression_BL2",
+ "config":
+ {
+ "type": "file",
+            "origin": "/home/workspace/150143/output_file.json"
+ },
+        "info":
+        {
+ "type": "file",
+ "origin": "/home/workspace/150143/coverage.info"
+ }
+ }
+ ]
+}
+```
+
+## License
+[BSD-3-Clause](../../license.md)
diff --git a/coverage-tool/docs/user_guide.md b/coverage-tool/docs/user_guide.md
new file mode 100644
index 0000000..70505a1
--- /dev/null
+++ b/coverage-tool/docs/user_guide.md
@@ -0,0 +1,15 @@
+# Trace-based Coverage Tool User Guide
+
+The *coverage-tool* provides code coverage measurement based on execution trace, without the need for code instrumentation. It is specifically meant for firmware components that run on memory-constrained platforms. Not relying on code instrumentation circumvents the frequent issue of instrumented code disturbing the target memory map in which the firmware is expected to run; the firmware is thus tested with the very memory map with which it will eventually be released. The coverage tool comprises two main components: a *trace plugin component* and a set of *post-processing scripts* to generate the coverage report.
+
+## Design Overview
+Refer to [design overview](./design_overview.md) for an outline of the design of this trace-based coverage tool.
+
+## Plugin user guide
+Refer to the [plugin user guide](./plugin_user_guide.md) to learn how the plugin component is used as part of the trace-based coverage tool.
+
+## Reporting user guide
+Refer to the [reporting user guide](./reporting_user_guide.md) to learn how to use the post-processing scripts that are part of the trace-based coverage tool to generate the coverage report for analysis.
+
+## License
+[BSD-3-Clause](../../license.md)