ispras / cv
Klever Continuous Verification Framework
License: Apache License 2.0
I tried to create a new tag on an error trace page (not on the dedicated tag page). I described a new tag and pressed "Create". Then the message "Unknown error" was shown and the dialog window did not close, so I had to click "Close". I supposed that the tag was not created, but on the tag page I surprisingly found my tag.
I have Ubuntu 20.04 with Python 3.8, and some CV components do not work with it. In some cases I can choose the version explicitly, for example run python3.5 ./scripts/launcher ..., and it works. But in some cases I cannot. For example, the web interface starts only via the start.sh command with the preset python3, which in my case is python3.8. Right now I have to change the script.
There are two benefits:
To convert existing JSON configurations, we may use free converters.
Not a high priority.
For benchmark visualization, witnesses must be placed in strict hardcoded paths that assume a particular BenchExec version. For other versions the structure of the output paths is different, so right now the witnesses can be found only for a cloud launch.
The idea is to have a soft mode in which the witnesses are searched for in all subdirectories, for example via find. Then we may miss some relations between a benchmark, a run definition and files, but at least we will be able to upload results for marking.
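Such a soft mode could be sketched as follows (the .graphml suffix and the directory layout are assumptions, not the actual CV code):

```python
from pathlib import Path


def find_witnesses(results_dir: str) -> list:
    """Soft mode: recursively collect witness files from all
    subdirectories instead of relying on hardcoded benchexec paths."""
    # Assumption: witnesses are graphml files; adjust the pattern if needed.
    return sorted(str(p) for p in Path(results_dir).rglob("*.graphml"))
```

This loses the mapping between a witness and its benchmark/run definition, but makes uploading for marking possible for any BenchExec output layout.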
Now, the environment model looks like this:
main(..) {
  pthread_t thread1;
  ldv_thread_create(&thread1, ...);
  pthread_t thread2;
  ldv_thread_create(&thread2, ...);
  ...
This is the original code. CIL normalizes it to the older C90 style, in which all definitions must appear at the start of a block:
# line 1
main(..) {
  pthread_t thread1;
  pthread_t thread2;
  ...
# line 3
  ldv_thread_create(&thread1, ...);
# line 6
  ldv_thread_create(&thread2, ...);
  ...
But the problem is that the line directives refer to the original code. Thus we have one block of definitions that corresponds to a single line. This breaks CPAchecker's calculation of the mapping to the original files, and all references here and (importantly!) further on will differ from the real ones.
Try to do something about it in CPAchecker, but, if possible, maybe generate a correct main function right in the generator.
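If the fix lands in the generator, it could hoist the declarations to the function start (as CIL requires) but emit a line directive before each call, so that every statement still points at its original line. A rough sketch with hypothetical names, not the actual CV generator:

```python
def generate_main(threads, source_file="env.c"):
    """Emit a main() with all pthread_t declarations hoisted to the top
    while each ldv_thread_create call keeps a line directive pointing
    at its original source line."""
    decls = [f"  pthread_t {name};" for name, _ in threads]
    calls = []
    for name, line in threads:
        calls.append(f'#line {line} "{source_file}"')
        calls.append(f"  ldv_thread_create(&{name}, ...);")
    return "\n".join(["int main(void) {"] + decls + calls + ["}"])
```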
Currently the CV witness parser supports the following format of model comments:
/* AUX_FUNC|AUX_FUNC_CALLBACK|MODEL_FUNC|NOTE|ASSERT|ENVIRONMENT_MODEL <function name> <comment> */
whereas new Klever EMG comments have the following format:
/* EMG_ACTION {"comment": "...", "name": "...", "relevant": true|false, "type": "..."} */
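A unified parser could accept both formats. A minimal sketch (function and variable names are assumptions):

```python
import json
import re

OLD_RE = re.compile(
    r"/\*\s*(AUX_FUNC_CALLBACK|AUX_FUNC|MODEL_FUNC|NOTE|ASSERT|ENVIRONMENT_MODEL)"
    r"\s+(\S+)\s+(.*?)\s*\*/"
)
EMG_RE = re.compile(r"/\*\s*EMG_ACTION\s+(\{.*\})\s*\*/")


def parse_model_comment(comment: str):
    """Return a dict {'type', 'name', 'comment'} for both the old CV
    format and the new Klever EMG_ACTION JSON format, or None."""
    m = EMG_RE.search(comment)
    if m:
        data = json.loads(m.group(1))
        return {"type": data.get("type"), "name": data.get("name"),
                "comment": data.get("comment")}
    m = OLD_RE.search(comment)
    if m:
        return {"type": m.group(1), "name": m.group(2), "comment": m.group(3)}
    return None
```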
How to reproduce the issue without Klever:
1. Create the directories /ssd/jobs, /ssd/build_base and /ssd/work.
2. Download and unpack the build base:
   wget https://forge.ispras.ru/attachments/download/10126/build-base-linux-5.10.120-x86_64-allmodconfig.tar.xz
   tar -xf build-base-linux-5.10.120-x86_64-allmodconfig.tar.xz
   mv build-base-linux-5.10.120-x86_64-allmodconfig /ssd/build_base/
3. Put the jobs into /ssd/jobs and use /ssd/work as the working directory.
4. From the CV repository: make install-witness-visualizer DEPLOY_DIR=/ssd/work
5. cd /ssd/wv
6. Run the visualizer:
   ./scripts/visualize_witnesses.py -r results/ -d task_1 -s task_1 --debug -u
   ./scripts/visualize_witnesses.py -r results/ -d task_2 -s task_2 --debug -u
Resulting error traces will be in the results directory.
It would be good to have a small set of examples to test the visualizer right after installation. Right now a new user needs to prepare a graphml witness themselves.
It would be convenient to be able to filter unsafes on the list page, for example, to show only unreported ones. Now it is possible to reorder them, but that is not very useful.
The original Klever tool has extended facilities for view adjustment; maybe borrow some ideas from it.
I created the following error trace pattern:
func1
__ERROR__
func2
__ERROR__
I set similarity to 100 and chose partial include ordered. I expected the pattern to apply to traces that contain two threads, where the first thread has func1 in the trace and the second thread has func2. However, the pattern was applied to a trace that contains func1 and func2 in one thread, while the other thread is completely different.
Likely, the patterns are searched for independently. We need to somehow take into account that the patterns should occur in different threads.
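The thread-aware check could require that the patterns are matched by pairwise distinct threads. A simplified sketch, assuming a trace is represented as a mapping from thread id to its ordered list of called functions (this representation is an assumption, not the actual CV data model):

```python
from itertools import permutations


def match_patterns_in_distinct_threads(threads, patterns):
    """Return True only if the patterns can be assigned to pairwise
    distinct threads, each thread containing its pattern's functions
    as an ordered subsequence."""
    def thread_matches(calls, pattern):
        it = iter(calls)
        # membership test on an iterator consumes it, which enforces order
        return all(func in it for func in pattern)

    thread_ids = list(threads)
    if len(thread_ids) < len(patterns):
        return False
    for chosen in permutations(thread_ids, len(patterns)):
        if all(thread_matches(threads[t], p) for t, p in zip(chosen, patterns)):
            return True
    return False
```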
If CIF is used as a preprocessor, it requires an aspect. If it is not set, None is used, and then the preparator fails because a NoneType object ends up in the cif_args list. So, the warning message is confusing. It would be better to fail earlier if CIF is used without an aspect.
A more practical solution is to generate an empty aspect file with the warning "no aspect is set, use empty one":
before: file ("$this")
{
}
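The fallback could look like this sketch (the helper name and logging setup are assumptions):

```python
import logging

EMPTY_ASPECT = 'before: file ("$this")\n{\n}\n'


def ensure_aspect(aspect_path, fallback_path="empty.aspect"):
    """Return a usable aspect file path; if none is set, emit a warning
    and generate an empty aspect instead of passing None to CIF."""
    if aspect_path:
        return aspect_path
    logging.warning("no aspect is set, use empty one")
    with open(fallback_path, "w") as fp:
        fp.write(EMPTY_ASPECT)
    return fallback_path
```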
As the project develops, it will be useful to have some CI. The main stages may be:
The package should be easier to support. Some requirements may be defined and installed/updated automatically. A package version may be helpful to identify CV launches. It seems the deployment procedure should also become easier. Needs some thought.
See setuptools.
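A minimal packaging sketch along these lines (all names, versions and dependencies are placeholders, not the actual CV metadata):

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "cv"        # placeholder package name
version = "0.1.0"  # a version here can identify CV launches
dependencies = [
    # requirements installed/updated automatically, e.g.:
    # "requests>=2.28",
]
```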
If the uploader cannot establish a connection to the server (for example, openvpn is not running), it hangs:
Launcher: DEBUG: Processing results
Launcher: INFO: Preparing report on launches into file: '/home/alpha/git/cv/results/report_launches_sync_develop_2023_04_26_09_54_41.csv'
Launcher: INFO: Preparing report on components into file: '/home/alpha/git/cv/results/report_components_sync_develop_2023_04_26_09_54_41.csv'
Launcher: INFO: Preparing short report into file: '/home/alpha/git/cv/results/short_report_sync_develop_2023_04_26_09_54_41.csv'
Launcher: INFO: Exporting results into archive: '/home/alpha/git/cv/results/results_sync_develop_2023_04_26_09_54_41.zip'
Exporter: DEBUG: Wall time: 1.03s
Exporter: DEBUG: CPU time: 1008.6s
Exporter: DEBUG: Memory usage: 7322Mb
Exporter: INFO: Exporting results has been completed
Launcher: INFO: Uploading results into server 10.10.2.179:8989 with identifier 2
Launcher: DEBUG: Using name 'sync:races 2023_04_26_09_54_43' for uploaded report
And nothing changes after 30 minutes, even after I connect to openvpn.
After interrupting the process, I get the following waiting stack:
Traceback (most recent call last):
File "/home/alpha/git/cv/scripts/components/launcher.py", line 268, in _upload_results
subprocess.check_call(command, shell=True)
File "/usr/lib/python3.10/subprocess.py", line 364, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python3.10/subprocess.py", line 347, in call
return p.wait(timeout=timeout)
File "/usr/lib/python3.10/subprocess.py", line 1207, in wait
return self._wait(timeout=timeout)
File "/usr/lib/python3.10/subprocess.py", line 1941, in _wait
(pid, sts) = self._try_wait(0)
File "/usr/lib/python3.10/subprocess.py", line 1899, in _try_wait
(pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt
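One mitigation is to bound the upload subprocess with a timeout instead of waiting forever. A sketch, with an arbitrary limit:

```python
import subprocess

UPLOAD_TIMEOUT = 30 * 60  # hypothetical limit in seconds


def upload_with_timeout(command: str, timeout: float = UPLOAD_TIMEOUT) -> bool:
    """Run the upload command, but give up after `timeout` seconds
    instead of blocking forever when the server is unreachable.
    subprocess kills the child when the timeout expires."""
    try:
        subprocess.check_call(command, shell=True, timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        return False
```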
The benchmark processing script searches for 'benchmark.xml' only in the current directory, even if I set it in the configuration via the "benchmark file" option. So, for now, I have to create a link to it. It seems the option is simply ignored.
Currently there are two old prebuilt versions of CIL. We need to try newer versions, which can also be built from source.
It would be useful to have an option to provide a Docker container with a ready CV. A directory with sources may be mounted externally. The main question is whether there are any problems with resource limits and tracking.
We need to try to prepare a Docker image and run a test launch.
Now, deployment is complicated and duplicated. Scripts and configurations should be used from the repository, at least with some debug option. Currently we have to modify scripts in the deploy directory.
Plugins also should not be duplicated in the deployment directory; they should be submodules or symlinks.
The task is large, so it is better to discuss it first.
The idea is to be able to check several projects in one run, for example, the interaction of applications in an OS. So we need to build several projects independently, then merge the build commands, then construct a single CIL file. After that we should add an entry point for the applications and run the verifier.
The key challenge is the separate configurations of the projects: we likely have to store different patches or entry points from the different plugins and then combine them together.
Currently we have a broken link to the CIF archive; it needs to be updated.
The comment
/* CIF Original function "kzalloc". Instrumenting function "kzalloc". */
is replaced by "Instrumented function 'kzalloc'" in the visualization.
And where does "LDV model 'kzalloc'" come from?
Currently there are several formats of rules: sync rules, smg, unreach. We need to unify their descriptions in order to get:
If I set an incorrect path to sources in the source dir option, the launcher prints
Launcher: ERROR: Source directory '/home/alpha/work/ose/~/work/ose/lg_bku_pr_v1/DISP/' does not exist
But it does not finish its execution and continues waiting in
File "/home/alpha/git/cv/scripts/components/full_launcher.py", line 661, in launch
sleep(BUSY_WAITING_INTERVAL * 10)
It would be better to finish the launch in this case.
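A sketch of the proposed behaviour: validate the source directory up front and terminate instead of entering the busy-waiting loop (the exit message mirrors the log above; the function name is an assumption):

```python
import os
import sys


def check_source_dir(source_dir: str) -> None:
    """Fail fast if the configured source directory does not exist,
    instead of logging an error and continuing to busy-wait."""
    if not os.path.isdir(source_dir):
        sys.exit(f"Launcher: ERROR: Source directory '{source_dir}' does not exist")
```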
Now entry points are generated for races only if the corresponding race option is false. It should be the other way around. All configurations need to be updated accordingly.
When we report bugs to developers, it is useful to set the reported status for a mark. However, the statuses are shown neither on the main page with all unsafes nor on the page with traces. So, to find an unreported bug, one needs to open every bug to see its status. It would be better to show the statuses, as is done for tags.
Currently there are some workarounds to launch BenchExec on Ubuntu 22, but memory still cannot be tracked and/or limited.