chipsalliance / sv-tests

Test suite designed to check compliance with the SystemVerilog standard.

Home Page: https://chipsalliance.github.io/sv-tests-results/

License: ISC License

Makefile 0.95% SystemVerilog 72.20% HTML 3.46% CSS 2.18% Python 17.30% JavaScript 3.84% C++ 0.08%
systemverilog symbiflow verilog hdl rtl compliance-testing

sv-tests's Issues

Code highlighting improvements

This is a comment from #166 by @hzeller extracted to a new issue so it doesn't get lost:

Can we modify the output a little via the pygments configuration?

  • At the top there is some whitespace; the generated code seems to create an empty <h2></h2>, for instance.
  • It would be good if every line number contained an anchor such as <a name="L42">42</a>, so that we can link directly from a logfile reference like path/to/test/file.sv:42 (see the sketch below).
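
A minimal sketch of how pygments could be configured for this, assuming the generated pages use pygments' HtmlFormatter; note that with lineanchors the anchor names come out as L-42 rather than L42:

from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name

with open("path/to/test/file.sv") as f:
    source = f.read()

formatter = HtmlFormatter(
    linenos="table",     # emit a line-number column
    lineanchors="L",     # wrap every source line in an anchor named "L-<lineno>"
    anchorlinenos=True,  # turn the line numbers themselves into links to those anchors
)
html = highlight(source, get_lexer_by_name("systemverilog"), formatter)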

Test quality improvement: Double-check tests that fail for all tools

Some tests fail for all tools. This could mean they use SystemVerilog features that are too sophisticated, but more often than not they are simply missing something, such as number_test_23.sv, which uses macros without defining them (that particular test came out of another test suite that only did syntax checking, but here all the tools are confused because they cannot resolve the symbols; so in this case we should define the macros).

Other tests contained other issues, discovered by @alainmarcel and fixed in pull requests #119 #116 #114 #113 #112 #111 (thanks @alainmarcel!).

So while still cranking out new tests, we should have a second look at the existing tests that fail.
I added some CSV output in the WIP pull request SymbiFlow/sv-tests@ce82b79; with that, it is easy to look at these.

First, here is a fun overview of how many of the 496 tests run successfully (formulated as an SQL query for readability, which is what I use with my personal tool, but awk or similar would work as well; a Python sketch follows the tables).

from csv:out/results.csv "select sum(Icarus), sum(Slang), sum(Sv2v_zachjs), sum(Verilator), sum(Yosys)"
+-------------+------------+------------------+----------------+------------+
| sum(Icarus) | sum(Slang) | sum(Sv2v_zachjs) | sum(Verilator) | sum(Yosys) |
+-------------+------------+------------------+----------------+------------+
|         122 |        198 |              202 |            183 |        117 |
+-------------+------------+------------------+----------------+------------+

from csv:out/results.csv "select 100.0*sum(Icarus)/count(*) as Ikaraus_percent, 100.0*sum(Slang)/count(*) as Slang_percent, 100.0*sum(Sv2v_zachjs)/count(*) as sv2v_percent, 100.0*sum(Verilator)/count(*) as verilator_percent, 100.0*sum(Yosys)/count(*) as yosys_percent"
+-----------------+---------------+--------------+-------------------+---------------+
| Ikaraus_percent | Slang_percent | sv2v_percent | verilator_percent | yosys_percent |
+-----------------+---------------+--------------+-------------------+---------------+
|         24.5968 |       39.9194 |      40.7258 |           36.8952 |       23.5887 |
+-----------------+---------------+--------------+-------------------+---------------+
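
A rough plain-Python equivalent of the two queries above, assuming out/results.csv has one 0/1 (or true/false) pass column per tool as in the WIP pull request:

import csv

tools = ["Icarus", "Slang", "Sv2v_zachjs", "Verilator", "Yosys"]

with open("out/results.csv") as f:
    rows = list(csv.DictReader(f))

for tool in tools:
    passed = sum(1 for r in rows if r[tool].strip().lower() in ("1", "true"))
    print(f"{tool}: {passed}/{len(rows)} ({100.0 * passed / len(rows):.2f}%)")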

Anyway, here is a list of all tests that fail for all the tested tools. It might be worthwhile to take a second look at these:

from csv:out/results.csv select test where not Icarus and not Slang and not Sv2v_zachjs and not Verilator and not Yosys format "%s\n"
chapter-11/11.12--let_construct.sv
chapter-11/11.4.13--set_member.sv
chapter-11/11.4.14.4--dynamic_array_stream.sv
chapter-11/11.9--tagged_union.sv
chapter-11/11.9--tagged_union_member_access.sv
chapter-12/12.6.1--case_pattern.sv
chapter-12/12.6.1--casex_pattern.sv
chapter-12/12.6.1--casez_pattern.sv
chapter-12/12.6.2--if_pattern.sv
chapter-12/12.6.3--conditional_pattern.sv
chapter-12/12.7.5--dowhile.sv
chapter-5/5.10-structure-replication.sv
chapter-5/5.10-structures.sv
chapter-5/5.11-arrays-replication.sv
chapter-5/5.13-builtin-methods-arrays.sv
chapter-5/5.8-time-literals.sv
chapter-5/5.9-string-word-assignment.sv
chapter-8/8.13--inheritance.sv
chapter-8/8.14--override_member.sv
chapter-8/8.15--super.sv
chapter-8/8.20--virtual_method.sv
chapter-8/8.21--abstract_class.sv
chapter-8/8.22--dynamic_method_lookup.sv
chapter-8/8.23--scope_resolution.sv
chapter-8/8.24--out_of_block_methods.sv
chapter-8/8.25--parametrized_class_extend.sv
chapter-8/8.25.1--parametrized_class_scope_resolution.sv
chapter-8/8.26.2--implements.sv
chapter-8/8.26.2--implements_extends.sv
chapter-8/8.26.2--implements_multiple.sv
chapter-8/8.26.3--type_access_extends.sv
chapter-8/8.26.3--type_access_implements.sv
chapter-8/8.26.5--cast_between_interface_classes.sv
chapter-8/8.26.5--implemented_class_handle.sv
chapter-8/8.26.6.1--name_conflict_resolved.sv
chapter-8/8.26.6.2--parameter_type_conflict.sv
chapter-8/8.26.6.3--diamond_relationship.sv
chapter-8/8.26.7--partial_implementation.sv
chapter-8/8.27--forward_declaration.sv
chapter-8/8.5--parameters.sv
chapter-8/8.5--properties_enum.sv
chapter-8/8.7--constructor.sv
chapter-8/8.7--constructor_param.sv
chapter-8/8.8--typed_constructor.sv
chapter-8/8.8--typed_constructor_param.sv
chapter-9/9.3.5--statement_labels_par.sv
chapter-9/9.4.2.3--event_conditional.sv
chapter-9/9.4.2.4--event_sequence.sv
chapter-9/9.7--process_cls_await.sv
chapter-9/9.7--process_cls_kill.sv
chapter-9/9.7--process_cls_self.sv
chapter-9/9.7--process_cls_suspend_resume.sv
generic/class/class_test_0.sv
generic/class/class_test_1.sv
generic/class/class_test_10.sv
generic/class/class_test_11.sv
generic/class/class_test_12.sv
generic/class/class_test_13.sv
generic/class/class_test_14.sv
generic/class/class_test_15.sv
generic/class/class_test_16.sv
generic/class/class_test_17.sv
generic/class/class_test_18.sv
generic/class/class_test_19.sv
generic/class/class_test_2.sv
generic/class/class_test_20.sv
generic/class/class_test_21.sv
generic/class/class_test_22.sv
generic/class/class_test_23.sv
generic/class/class_test_24.sv
generic/class/class_test_25.sv
generic/class/class_test_26.sv
generic/class/class_test_27.sv
generic/class/class_test_28.sv
generic/class/class_test_29.sv
generic/class/class_test_3.sv
generic/class/class_test_30.sv
generic/class/class_test_32.sv
generic/class/class_test_33.sv
generic/class/class_test_34.sv
generic/class/class_test_35.sv
generic/class/class_test_36.sv
generic/class/class_test_37.sv
generic/class/class_test_38.sv
generic/class/class_test_39.sv
generic/class/class_test_4.sv
generic/class/class_test_40.sv
generic/class/class_test_41.sv
generic/class/class_test_42.sv
generic/class/class_test_43.sv
generic/class/class_test_44.sv
generic/class/class_test_45.sv
generic/class/class_test_46.sv
generic/class/class_test_47.sv
generic/class/class_test_48.sv
generic/class/class_test_49.sv
generic/class/class_test_5.sv
generic/class/class_test_50.sv
generic/class/class_test_51.sv
generic/class/class_test_52.sv
generic/class/class_test_53.sv
generic/class/class_test_54.sv
generic/class/class_test_55.sv
generic/class/class_test_56.sv
generic/class/class_test_57.sv
generic/class/class_test_58.sv
generic/class/class_test_59.sv
generic/class/class_test_6.sv
generic/class/class_test_60.sv
generic/class/class_test_61.sv
generic/class/class_test_62.sv
generic/class/class_test_63.sv
generic/class/class_test_64.sv
generic/class/class_test_65.sv
generic/class/class_test_66.sv
generic/class/class_test_67.sv
generic/class/class_test_68.sv
generic/class/class_test_69.sv
generic/class/class_test_7.sv
generic/class/class_test_8.sv
generic/class/class_test_9.sv
generic/iface/iface_class_test_0.sv
generic/iface/iface_class_test_1.sv
generic/iface/iface_class_test_10.sv
generic/iface/iface_class_test_11.sv
generic/iface/iface_class_test_2.sv
generic/iface/iface_class_test_3.sv
generic/iface/iface_class_test_4.sv
generic/iface/iface_class_test_5.sv
generic/iface/iface_class_test_6.sv
generic/iface/iface_class_test_7.sv
generic/iface/iface_class_test_8.sv
generic/iface/iface_class_test_9.sv
generic/member/class_member_test_0.sv
generic/member/class_member_test_1.sv
generic/member/class_member_test_10.sv
generic/member/class_member_test_11.sv
generic/member/class_member_test_12.sv
generic/member/class_member_test_13.sv
generic/member/class_member_test_14.sv
generic/member/class_member_test_15.sv
generic/member/class_member_test_16.sv
generic/member/class_member_test_17.sv
generic/member/class_member_test_18.sv
generic/member/class_member_test_19.sv
generic/member/class_member_test_2.sv
generic/member/class_member_test_20.sv
generic/member/class_member_test_21.sv
generic/member/class_member_test_22.sv
generic/member/class_member_test_23.sv
generic/member/class_member_test_24.sv
generic/member/class_member_test_25.sv
generic/member/class_member_test_26.sv
generic/member/class_member_test_27.sv
generic/member/class_member_test_28.sv
generic/member/class_member_test_29.sv
generic/member/class_member_test_3.sv
generic/member/class_member_test_30.sv
generic/member/class_member_test_31.sv
generic/member/class_member_test_32.sv
generic/member/class_member_test_33.sv
generic/member/class_member_test_34.sv
generic/member/class_member_test_35.sv
generic/member/class_member_test_36.sv
generic/member/class_member_test_37.sv
generic/member/class_member_test_38.sv
generic/member/class_member_test_39.sv
generic/member/class_member_test_4.sv
generic/member/class_member_test_40.sv
generic/member/class_member_test_41.sv
generic/member/class_member_test_42.sv
generic/member/class_member_test_43.sv
generic/member/class_member_test_44.sv
generic/member/class_member_test_45.sv
generic/member/class_member_test_46.sv
generic/member/class_member_test_47.sv
generic/member/class_member_test_48.sv
generic/member/class_member_test_49.sv
generic/member/class_member_test_5.sv
generic/member/class_member_test_50.sv
generic/member/class_member_test_51.sv
generic/member/class_member_test_52.sv
generic/member/class_member_test_53.sv
generic/member/class_member_test_54.sv
generic/member/class_member_test_55.sv
generic/member/class_member_test_56.sv
generic/member/class_member_test_57.sv
generic/member/class_member_test_58.sv
generic/member/class_member_test_6.sv
generic/member/class_member_test_7.sv
generic/member/class_member_test_8.sv
generic/member/class_member_test_9.sv
generic/number/number_test_17.sv
generic/number/number_test_18.sv
generic/number/number_test_19.sv
generic/number/number_test_20.sv
generic/number/number_test_21.sv
generic/number/number_test_22.sv
generic/number/number_test_23.sv
generic/number/number_test_37.sv
generic/number/number_test_38.sv
generic/number/number_test_39.sv
generic/number/number_test_40.sv
generic/number/number_test_41.sv
generic/number/number_test_54.sv
generic/number/number_test_55.sv
generic/number/number_test_56.sv
generic/number/number_test_57.sv
generic/number/number_test_58.sv
generic/number/number_test_73.sv
generic/number/number_test_74.sv
generic/number/number_test_75.sv
generic/number/number_test_76.sv
generic/number/number_test_77.sv
generic/preproc/preproc_test_2.sv
generic/preproc/preproc_test_3.sv
generic/typedef/typedef_test_10.sv
generic/typedef/typedef_test_11.sv
generic/typedef/typedef_test_12.sv
generic/typedef/typedef_test_13.sv
generic/typedef/typedef_test_15.sv
generic/typedef/typedef_test_17.sv
generic/typedef/typedef_test_9.sv

report: The 'close' button is currently broken

[context: current head https://symbiflow.github.io/sv-tests/ report, Chrome 75.0.3770.100]

The gray 'close' button in the corner of the "hover-screen" currently does nothing (which means it is hard to see the lower part of the main table as it scrolls 'behind' the overlay).

(Maybe we could place the main table in a split frame above instead of behind the hover-screen; then the hover-screen would not need to be closeable at all.)

Runners should be less verbose

Probably left over from the early implementation, there are still some debugging messages in the runner code that report when a test was successful.

These are merely FYI outputs of expected results, so they should not clutter the output here: they end up in the log files anyway, to be reported by sv-report. So I'd either remove them, make them logger.debug(), or set the default log level to ERROR (probably the best choice).

The problem is that these messages drown out actual problems, such as the exception reported here:
https://github.com/SymbiFlow/sv-tests/blob/master/tools/runner#L140
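
A minimal sketch of the last option, assuming the runners use Python's standard logging module (the messages and names here are illustrative, not the actual runner code):

import logging

logging.basicConfig(level=logging.ERROR)  # by default, only real problems reach the console
logger = logging.getLogger("runner")

test = "chapter-8/8.21--abstract_class.sv"  # illustrative value

# routine "expected result" messages become debug-level and stay out of the default output
logger.debug("test %s finished with the expected result", test)
# real problems, such as the exception linked above, remain visible
logger.error("test %s: runner raised an exception", test)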

Investigate if outputting xUnit format makes sense

If sv-tests could output xUnit XML, the test results would be compatible with a wide range of tools. It could even mean we can ditch creating our own HTML frontend.

XML files in xUnit format can be consumed by a wide range of tools, such as build systems, IDEs and continuous integration servers.
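
A minimal sketch of what emitting such a file might look like with the standard library (the element and attribute names follow the common JUnit layout; the authoritative schema is the junit-10.xsd mentioned under Schema below):

import xml.etree.ElementTree as ET

# Illustrative results; in sv-tests these would come from the per-runner logs.
results = [
    ("Icarus", "chapter-12/12.7.5--dowhile.sv", False),
    ("Slang", "chapter-12/12.7.5--dowhile.sv", True),
]

testsuites = ET.Element("testsuites")
suite = ET.SubElement(testsuites, "testsuite", name="sv-tests", tests=str(len(results)))
for tool, test, passed in results:
    case = ET.SubElement(suite, "testcase", classname=tool, name=test)
    if not passed:
        ET.SubElement(case, "failure", message="tool rejected the test")

ET.ElementTree(testsuites).write("out/xunit.xml", encoding="utf-8", xml_declaration=True)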

Schema

There are many schemas with minor differences.
We use one that is compatible with the Jenkins xUnit plugin; a copy is
available under tests/vendor/jenkins/xunit-plugin/junit-10.xsd (see the attached license).
You may also find these resources useful:

Naming of tests should contain short synopsis of what is tested

There are a bunch of tests that are just numbered, e.g. number_test_6.sv

Just looking at these files (or at their names in the test outputs), it is not really possible to tell what they are about. If they contained a few words describing what they test, it would be quicker to understand what is going on, e.g.:

number_test_6.sv -> number_test_one_bit_binary_literal.sv
number_test_8.sv -> number_test_one_bit_binary_literal_with_spaces.sv

Given that we are already in the number/ subdirectory here, maybe we don't even need the number_test prefix:
number/number_test_6.sv -> number/one_bit_binary_literal.sv
number/number_test_8.sv -> number/one_bit_binary_literal_with_spaces.sv

Parser tests?

Does this test suite target SystemVerilog Parsers or Full Compilers?

Some of the tests fail on tree-sitter-verilog because it is a bare parser, not a preprocessor and not a compiler.

For example:

Preprocessor

Multiple number tests (/tests/generic/number/number_test_*.sv) use preprocessor macros such as DIGITS and WIDTH.

Other examples:

sv-tests/tests/chapter-5/5.6.4--compiler-directives-begin-keywords.sv
sv-tests/tests/chapter-5/5.6.4--compiler-directives-pragma.sv
sv-tests/tests/chapter-5/5.6.4--compiler-directives-unconnected-drive.sv
...

Reserved words

All /tests/generated/keywords/5.6.2--keyword-*.sv tests are supposed to fail because a reserved keyword is used as an identifier. I would not expect it to be the parser's job to fail in this case. Is it?

Scope failures

There are multiple tests for scope-related issues that are expected to be rejected by the compiler.

Example:

sv-tests/tests/generated/encapsulation/8.18--inherited_local_from_inside.sv
sv-tests/tests/generated/encapsulation/8.18--inherited_local_from_outside.sv
sv-tests/tests/generated/encapsulation/8.18--inherited_prot_from_outside.sv
sv-tests/tests/generated/encapsulation/8.18--local_from_outside.sv
sv-tests/tests/generated/encapsulation/8.18--prot_from_outside.sv

I don't think it is the job of a parser to fail here.

conf/lrm.conf should be used for descriptive annotations in report.html

This config file maps LRM sections to descriptions, but right now it is not used much.

sv-report uses it to create a list of potential test-case classifications, but we should probably also use that information to annotate the output in the reports. For instance, having 11.4.7 annotated as "11.4.7 Logical operators" would be very helpful.
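
A minimal sketch of that annotation step, assuming lrm.conf has already been parsed into a tag-to-description mapping (the mapping and names here are hypothetical):

# hypothetical mapping parsed from conf/lrm.conf
lrm_descriptions = {"11.4.7": "Logical operators"}

def annotate(tag):
    """Return e.g. '11.4.7 Logical operators' when a description is known."""
    description = lrm_descriptions.get(tag)
    return f"{tag} {description}" if description else tag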

Find a way to detect regressions introduced in a PR

We should be able to detect what impact a PR has on the status of the tests.

For example the issue described in #210 introduced a regression for Icarus and it took some time to notice it. An automatic way to detect that would be nice.

Not all tests in different files have unique names

There are a few tests in different files that use the same name. Given that we might want to use these names as map keys, it would be good if they were unique (maybe even with a small assertion in sv-report that this is the case; see the sketch after the table).

Here are the test names and the file names they occur in:

+-------------------------------------+-----------------------------------------------------------------+
|              Test Name              |                           File Names                            |
+-------------------------------------+-----------------------------------------------------------------+
| array_addressing                    | chapter-11/11.5.2--array_addressing.sv                          |
:                                     : chapter-11/11.5.2--multi_dim_array_addressing.sv                :
| cast_task                           | chapter-6/6.24.2--cast_task.sv                                  |
:                                     : chapter-8/8.16--cast_task.sv                                    :
| event_nonblocking_assignment_repeat | chapter-9/9.4.5--event_nonblocking_assignment_repeat.sv         |
:                                     : chapter-9/9.4.5--event_nonblocking_assignment_repeat_int.sv     :
:                                     : chapter-9/9.4.5--event_nonblocking_assignment_repeat_int_neg.sv :
:                                     : chapter-9/9.4.5--event_nonblocking_assignment_repeat_neg.sv     :
| event_sequence                      | chapter-9/9.4.2.4--event_sequence.sv                            |
:                                     : chapter-9/9.4.3--event_sequence_controls.sv                     :
| idx_pos_part_select                 | chapter-11/11.5.1--idx_pos_part_select.sv                       |
:                                     : chapter-11/11.5.1--idx_select.sv                                :
| if                                  | chapter-12/12.4--if.sv                                          |
:                                     : chapter-12/12.4--if_else.sv                                     :
| string_compare                      | chapter-11/11.10.1--string_compare.sv                           |
:                                     : chapter-6/6.16.6--string_compare.sv                             :
| unique_if                           | chapter-12/12.4.2--priority_if.sv                               |
:                                     : chapter-12/12.4.2--unique0_if.sv                                :
:                                     : chapter-12/12.4.2--unique_if.sv                                 :
+-------------------------------------+-----------------------------------------------------------------+
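
A minimal sketch of such an assertion, assuming the CSV report proposed in #122 with "test" and "file" columns (column names assumed):

import collections
import csv

names = collections.defaultdict(list)
with open("out/results.csv") as f:
    for row in csv.DictReader(f):
        names[row["test"]].append(row["file"])

duplicates = {name: files for name, files in names.items() if len(files) > 1}
assert not duplicates, f"non-unique test names: {duplicates}"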

Execution of tools should depend on availability of these tools.

Right now, the RUNNERS are filled by looking at the corresponding tools/runners/*.py scripts.

However, this requires that all the tools mentioned there have been checked out and compiled into out/runners beforehand. If any of them has not been checked out or compiled, the whole test-suite run will fail.

The typical use case in the future will be that users of this test suite only check out the tools they are interested in and want a report created from those.

So we should make the set of tools that are run in the test suite depend on the existence of the necessary binaries.

This could be done either with Makefile means (but then the Makefile would need to know which tools exist), or by having something like a probing --can_run flag on the scripts in the tools/runners/ directory: only the ones that exit with 0 are included in the report generation.
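
A minimal sketch of such a probing flag for one runner script; the flag name and the binary checked here are illustrative, not the current tools/runner API:

import argparse
import shutil
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--can_run", action="store_true",
                    help="exit 0 if the wrapped tool is available, 1 otherwise")
args = parser.parse_args()

if args.can_run:
    # hypothetical check: this runner wraps iverilog
    sys.exit(0 if shutil.which("iverilog") else 1)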

Exclude Fake runner by default

The Fake runner was a good placeholder in the early days, but now that we have a good set of 'real' tools running it is not really useful anymore. Now it just uses up a column in the table without adding information.

So I suggest removing it, or at least disabling it by default if there is a conceivable reason we might still need it (it is always possible to add such a runner in a local checkout without it being checked in).

github.io report hosting: clicking on a *.sv file in the report downloads it instead of loading it in the frame

The report hosted on https://symbiflow.github.io/sv-tests/ serves the *.sv files as binary, with the MIME type application/octet-stream.

curl -I https://symbiflow.github.io/sv-tests/tests/generic/number/number_test_68.sv
HTTP/2 200 
server: GitHub.com
content-type: application/octet-stream
...

Thus, clicking on it will make the browser download the file instead of showing it in the frame.

It might be hard to tell github.io what MIME type we'd like the files to be served with, as we don't have access to that webserver's configuration.

We might be able to work around this by adding ".txt" to the filename, i.e. serving *.sv.txt.

Or we go all the way and generate *.sv.html files, which we can then also pretty-print with line numbers (and SystemVerilog keywords highlighted?).

Create a CSV report file

In a local version, I was hacking up some CSV file that helped a lot in getting some insights (e.g. which tests are failing for all tools (#122) or if names are actually unique (#148)).

It would be good if sv-report generated something like that alongside report.html, and possibly linked to it from the report.html page. That way we can do evaluations with other tools.

Maybe something like this, though it should also contain the tags:

test:string,file:string,Icarus:bool,Odin:bool,Slang:bool,Sv2v_zachjs:bool,Verilator:bool,Yosys:bool
abstract_class,chapter-8/8.21--abstract_class.sv,false,false,false,false,false,false,
abstract_class_inst,chapter-8/8.21--abstract_class_inst.sv,true,true,true,true,true,true,
always,chapter-9/9.2.2.1--always.sv,false,false,true,true,true,true,
always_comb,chapter-9/9.2.2.2--always_comb.sv,true,false,true,true,true,true,
always_ff,chapter-9/9.2.2.4--always_ff.sv,true,false,true,true,true,true,
...
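
A minimal sketch of how sv-report might emit such a file (the in-memory results structure here is hypothetical, and the column list would still need the tags added):

import csv

# hypothetical in-memory results: test name -> (file, {tool: passed})
results = {
    "abstract_class": ("chapter-8/8.21--abstract_class.sv",
                       {"Icarus": False, "Slang": False}),
}
tools = ["Icarus", "Slang"]

with open("out/results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["test", "file"] + tools)
    for name, (path, per_tool) in sorted(results.items()):
        writer.writerow([name, path] + [str(per_tool[t]).lower() for t in tools])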

tools/runner: should take a parameter for where the log goes

Right now, tools/runner is creating the output logfile based on the runner and test.

This requires that the corresponding makefile rule shares the knowledge of how this filename
is constructed; it 'knows' where the runner will place the logfile:

$(OUT_DIR)/logs/$(1)/$(2).log: $(TESTS_DIR)/$(2)
	mkdir -p $(OUT_DIR)/logs/$(1)/$(dir $(2))
	./tools/runner --runner $(1) --test $(2)

... but also the runner knows how to do this.

Ideally, we only have this knowledge in one place. Since we orchestrate where logfiles should be in the Makefile, I suggest adding a parameter to the runner that contains the name of the logfile to be written:

$(OUT_DIR)/logs/$(1)/$(2).log: $(TESTS_DIR)/$(2)
	./tools/runner --runner $(1) --test $(2) --out $@

(Depending on how Make can deal with it, maybe $@ might need to be $(OUT_DIR)/logs/$(1)/$(2).log).

The runner would just have one more parameter:

parser.add_argument("-o", "--out",  required=True)
#...
out = args.out

(Then, also, we can move the optional logic to create the logfile directory into tools/runner; see the sketch below.)
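
A minimal sketch of that mkdir logic inside the runner, continuing the argparse snippet above (so args.out is the path passed via --out):

import os

out = args.out  # from the --out parameter added above

log_dir = os.path.dirname(out)
if log_dir:
    os.makedirs(log_dir, exist_ok=True)  # replaces the mkdir -p in the Makefile rule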

Besides following the 'only one place needs to know where results go' principle, it also improves readability for the user, who then sees calls like this in the Makefile output (once the silent call of the runner mentioned in #80 is unsilenced):

./tools/runner --runner Sv2v_zachjs --test chapter-8/8.18--var_protected.sv --out ./out//logs/Sv2v_zachjs/chapter-8/8.18--var_protected.sv.log

They can then immediately see where the output goes, copy-paste the logfile name to look at it, and so on.

Add a sample browser-screenshot of reports to README

Once we have settled on the new layout of the sv-report, we should add a screenshot to the top-level README so that it is easy to show what to expect.

(and possibly a link to some actual output if it is generated by some continuous integration).

Makefiles: Avoid silent actions if possible

The makefiles currently have a lot of silent actions, which makes it harder to understand what is happening and to debug what is going wrong when things fail. Often there is an @echo that describes what the following command does; we should let the commands speak for themselves.

Let's avoid these to get a simpler and easier-to-use build system. Here is an example on a subset of Makefile rules; instead of this:

clean:
	@echo -e "Removing $(OUT_DIR)"
	@rm -rf $(OUT_DIR)
	@echo -e "Removing $(TESTS_DIR)/generated/"
	@rm -rf $(TESTS_DIR)/generated/

report: init info tests
	@echo -e "\nGenerating report"
	@./tools/sv-report
	@echo -e "\nDONE!"

$(OUT_DIR)/logs/$(1)/$(2).log: $(TESTS_DIR)/$(2)
	@mkdir -p $(OUT_DIR)/logs/$(1)/$(dir $(2))
	@./tools/runner --runner $(1) --test $(2)

The following will be easier to read, debug and maintain:

clean:
	rm -rf $(OUT_DIR)
	rm -rf $(TESTS_DIR)/generated/

report: init info tests
	./tools/sv-report

$(OUT_DIR)/logs/$(1)/$(2).log: $(TESTS_DIR)/$(2)
	mkdir -p $(OUT_DIR)/logs/$(1)/$(dir $(2))
	./tools/runner --runner $(1) --test $(2)

(Possibly the only place where a silent action is useful is for @echo itself.)

Don't build tools on Travis

Currently travis is building all the tools as part of the CI. This means CI takes ~25 minutes rather than the ~5min for the run of the tests.

It would be good if the tools being used were prebuilt, to make CI much faster.

index.thml ?

Why is there an index.thml? Shouldn't it be index.html?

cd /tmp/d20190902-11660-1xtvaq5/work
commit c8963705834e3f19c0dc0c0789df55a60e58e988
Author: Deployment Bot (from Travis CI) <[email protected]>
Date:   Mon Sep 2 13:12:55 2019 +0000
    Deploy SymbiFlow/sv-tests to github.com/SymbiFlow/sv-tests.git:gh-pages
 index.thml | 96238 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 96238 insertions(+)

report: Have 'total' count row of successful tests vs. overall tests.

It would be good to have a row that shows the 'totals' of all tests for a particular tool, e.g. passing 117/573. This is good for the TL;DR situation :)

This could be easily achieved by counting every test also towards a 'total' tag (though the row with the total count then needs to be printed a bit differently as we don't want the cells in that row to be clickable).

Set up travis

The CI should:

  • Run all the tests
  • Check PRs (code quality etc.)
  • Push the result HTML to GH pages

Create tests for the selected LRM chapters

  • 5 Lexical conventions: PR: #21
  • 6 Data types #79
  • 7 Aggregate data types PR: #49
  • 8 Classes: PR: #52
  • 9 Processes: PR: #26
  • 11 Operators and expressions: PR: #73
  • 12 Procedural programming statements: PR: #44

Please update this issue with the relevant issues and PR to keep track of everything.

Use externally stored logs in report page

As the test suite gets bigger, storing all the logs in the report.html file starts to create issues with the resulting file size (currently around 12MB) and performance.
Maybe loading the logs from external storage in a dynamic way could help with that.

Test failures when using Icarus Verilog

I had a quick look at the test failures when using Icarus Verilog. I spotted two recurring problems:

  1. Many tests do not include a top-level module. iverilog returns an error in this case, as there is nothing to elaborate. I have just enhanced the iverilog '-i' option (ignore missing modules) to also suppress this error, so using that option will fix many failures. Of course the test code will then only be checked for syntax errors.

  2. A number of tests contain an 'always' construct with no waits or delays. iverilog returns an error in this case, because it will cause the simulator to go into an infinite loop. It would be best not to code tests like that.

I did spot quite a few broken tests, e.g. number_test_63.v uses invalid syntax.

Import Yosys test suites which make sense

Available tags in tests should drive the report, not tags available in lrm.conf

What happens now
Currently, all tags found in the test cases are matched against lrm.conf; any tag not present there triggers a "Tag not present in the database" warning and the test is removed.
The keys in lrm.conf then drive the lines that are reported.

This means that someone who adds a test case with a new tag also needs to be aware of lrm.conf before a report can be generated. Given that lrm.conf is mostly useful for providing descriptive annotations (once #84 is implemented) and potentially for defining an order, this is an unnecessary hurdle for adding new test cases that don't fit the tags we have available now.

What would be desirable instead
To simplify adding new tests, this should be the other way around: the tests declare tags, and sv-report just collects all tags it finds in a map of sorts. We then output these and use the lrm.conf entries to define the order declared there (so that we keep the nice chapter order, which would otherwise be lexicographically mangled).

Tags that are in the database, but don't show up in any tests, should not be printed (this will avoid the gray lines we have now for all the tags that are not available yet).

Tags that are mentioned in the tests, but not in the database, are appended lexicographically at the end of the report.
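
A minimal sketch of that ordering, with test_tags as the set of tags collected from the tests and lrm_order as the ordered list of keys from lrm.conf (both names hypothetical):

test_tags = {"11.4.7", "5.6.1", "uvm"}    # collected from the test metadata
lrm_order = ["5.6.1", "5.6.2", "11.4.7"]  # key order as declared in lrm.conf

known = [tag for tag in lrm_order if tag in test_tags]  # keep the lrm.conf order, drop unused tags
unknown = sorted(test_tags - set(lrm_order))            # tags without a database entry, appended lexicographically
report_rows = known + unknown                           # -> ["5.6.1", "11.4.7", "uvm"]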

(sv-report can still have some diagnostic output about keys not found in the database with -v).

Our goal is that the ultimate source of truth is the metadata in the *.sv tests themselves: they should be self-contained and drive what is reported. Adding a new test should be as simple as adding a single file with its metadata.

The lrm.conf can help present additional information by expanding tags to descriptive annotations (#84) but its content should not prevent adding new tests.

Icarus fails with the '-i' switch

After the '-i' switch was added to the Icarus invocation in #196, following a suggestion from #138, the tests in CI started failing.

The issue is that the CI installs icarus from the apt repositories instead of building it from the submodule.

This should be solved by #183 but we should consider temporarily reverting the commit that adds the -i switch.

Left align the test name

This would make the following line up nicely:

5
5.1
5.1.1
5.1.2
5.1.3
5.2
5.2.1
5.2.2

Could maybe expand to a tree view?

signing in data_type

It looks like most of the /tests/generated/integers/6.11--integer_{signed|unsigned}_{type}.sv tests have the signing before the type.

According to the SV 2017 grammar, the signing should come after the type:

data_type ::=
    integer_vector_type [ signing ] { packed_dimension }
  | integer_atom_type [ signing ]
  ...

tree-sitter-verilog - false positive results

Here are tests that are reported in the table for tree-sitter-verilog as failed but actually pass.

sv-tests/tests/chapter-5/5.6.1--escaped-identifiers.sv

sv-tests/tests/generic/typedef/typedef_test_15.sv

sv-tests/tests/generic/class/class_test_10.sv
sv-tests/tests/generic/class/class_test_15.sv
sv-tests/tests/generic/class/class_test_38.sv

sv-tests/tests/chapter-12/12.7.5--dowhile.sv

Could you check why?
