ispras / cv

Klever Continuous Verification Framework

License: Apache License 2.0

Languages: Python 96.06%, Makefile 2.58%, Shell 0.77%, C 0.42%, PL/SQL 0.17%

cv's People

Contributors: alekseevk1, druidos, mutilin, pavelandrianov, vmordan

cv's Issues

Bug in tag creation

I tried to create a new tag from an error trace page (not from the dedicated tag page). I described a new tag and pressed "Create". Then the message "Unknown error" was shown and the dialog window did not close, so I had to click "Close". I assumed the tag had not been created, but on the tag page I surprisingly found my tag.

Problem with newer Python in the web interface

I have Ubuntu 20.04 with Python 3.8, and some CV components do not work with it. In some cases I can choose the interpreter version, for example run python3.5 ./scripts/launcher ..., and it works. But in some cases I cannot: for example, the web interface starts only via start.sh with the preset python3, which in my case is Python 3.8. Right now I have to change the script.

Investigate a switch to YAML configurations

There are two benefits:

  1. YAML is more compact and easier to read.
  2. YAML allows comments to be preserved (JSON does not support them).

To convert the existing JSON configurations we may use free converters, or a few lines of Python, as sketched below.
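A minimal conversion sketch, assuming PyYAML is available as a dependency:

import json
import sys

import yaml  # PyYAML, an assumed dependency

def json_to_yaml(json_path, yaml_path):
    # Load the existing JSON configuration and dump it as YAML,
    # keeping the original key order.
    with open(json_path, encoding="utf-8") as src:
        config = json.load(src)
    with open(yaml_path, "w", encoding="utf-8") as dst:
        yaml.safe_dump(config, dst, sort_keys=False, allow_unicode=True)

if __name__ == "__main__":
    json_to_yaml(sys.argv[1], sys.argv[2])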

Not a high priority.

Improve witness search

For benchmark visualization, witnesses must be placed at strict hardcoded paths that assume a particular benchexec version. For other versions the structure of the output paths is different, so right now witnesses can be found only for cloud launches.
The idea is to add a soft mode in which witnesses are searched for in all subdirectories, for example via find. We may then lose some relations between benchmark, rundefinition and files, but at least we will be able to upload results for marking.
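A minimal sketch of such a soft mode (assuming witnesses keep the standard .graphml extension):

from pathlib import Path

def find_witnesses(output_dir):
    # Soft mode: recursively collect every GraphML witness under the
    # output directory instead of relying on a benchexec-version-specific
    # layout. The benchmark/rundefinition relation is lost, but all
    # witnesses are found.
    return sorted(Path(output_dir).rglob("*.graphml"))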

Problem with environment model and line directives

Currently the environment model looks like this:

main(..) {
    pthread_t thread1;
    ldv_thread_create(&thread1, ...);

    pthread_t thread2;
    ldv_thread_create(&thread2, ...);
    ...

This is the original code. CIL normalizes it by hoisting all declarations to the start of the function (the style required before C99):

# line 1
main(..) {
    pthread_t thread1;
    pthread_t thread2;
    ...
# line 3
    ldv_thread_create(&thread1, ...);
# line 6
    ldv_thread_create(&thread2, ...);
    ...

The problem is that the line directives refer to the original code, so the single block of declarations corresponds to a single original line. This breaks CPAchecker's computation of the mapping to the original files, and all line references from this point on (importantly!) will differ from the real ones.
We could try to fix this in CPAchecker, but if possible it may be better to generate a correct main function directly in the generator, as sketched below.
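A hypothetical sketch of the second option: the generator emits main() with the declaration block already hoisted, so CIL has nothing to reorder and the line directives stay consistent (the ldv_thread_create arguments are elided as in the listings above):

def generate_main(thread_names):
    # Emit all declarations first, then the creation calls, so the
    # generated code already satisfies CIL's normalized form.
    lines = ["main(..) {"]
    for name in thread_names:
        lines.append("    pthread_t %s;" % name)
    for name in thread_names:
        lines.append("    ldv_thread_create(&%s, ...);" % name)
    lines.append("}")
    return "\n".join(lines)

print(generate_main(["thread1", "thread2"]))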

Add support for Klever EMG comments

Currently the CV witness parser supports the following format of model comments:

/* AUX_FUNC|AUX_FUNC_CALLBACK|MODEL_FUNC|NOTE|ASSERT|ENVIRONMENT_MODEL <function name> <comment>  */

whereas the new Klever EMG comments have the following format:

/* EMG_ACTION {"comment": "...", "name": "...", "relevant": true|false, "type": "..."} */
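A minimal parsing sketch for the new format (the regular expression and helper name are assumptions, not the actual CV parser code):

import json
import re

EMG_ACTION_RE = re.compile(r"/\*\s*EMG_ACTION\s*(\{.*\})\s*\*/", re.DOTALL)

def parse_emg_comment(comment):
    # Extract and decode the JSON payload of an EMG_ACTION comment;
    # fields per the format above: "comment", "name", "relevant", "type".
    match = EMG_ACTION_RE.search(comment)
    if match:
        return json.loads(match.group(1))
    return None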

How to reproduce the issue without Klever:

  1. Create the following working directories with write access: /ssd/jobs, /ssd/build_base and /ssd/work.
  2. Download and unpack the build base:
     wget https://forge.ispras.ru/attachments/download/10126/build-base-linux-5.10.120-x86_64-allmodconfig.tar.xz
     tar -xf build-base-linux-5.10.120-x86_64-allmodconfig.tar.xz
     mv build-base-linux-5.10.120-x86_64-allmodconfig /ssd/build_base/
  3. Unzip the Klever job files into /ssd/jobs.
  4. Install Witness Visualizer into /ssd/work from the CV repository:
     make install-witness-visualizer DEPLOY_DIR=/ssd/work
  5. Unzip the task files into /ssd/work.
  6. Visualize the results with CV:
     cd /ssd/work
     ./scripts/visualize_witnesses.py -r results/ -d task_1 -s task_1 --debug -u
     ./scripts/visualize_witnesses.py -r results/ -d task_2 -s task_2 --debug -u

The resulting error traces will appear in the results directory.

Add a small set of examples

It would be good to have a small set of examples to test the visualizer right after installation. Right now a new user needs to prepare a GraphML witness themselves.

Add a filter for unsafes

It would be convenient to be able to filter unsafes on the list page, for example to show only unreported ones. Now it is possible to reorder them, but that is not as useful.

The original Klever tool has extended facilities for view adjustment; maybe we can borrow ideas from it.

Bug in error trace comparison

I created the following error trace pattern:

func1
  __ERROR__
func2
  __ERROR__

I set the similarity to 100 and "partial include ordered". I expected the pattern to apply to traces that contain two threads, where the first thread has func1 in its trace and the second thread has func2. However, the pattern was also applied to a trace that contains func1 and func2 in one thread and a completely different second thread.

Likely, the patterns are searched for independently. We need to somehow take into account that the patterns should match in different threads, as sketched below.
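A sketch of the intended semantics (function and structure names are hypothetical): every pattern must match in its own, distinct thread.

def match_patterns_to_threads(patterns, threads):
    # Each pattern and each thread is a list of function names.
    def contains(thread, pattern):
        # Ordered (possibly non-contiguous) inclusion of pattern in thread.
        it = iter(thread)
        return all(func in it for func in pattern)

    used = set()
    for pattern in patterns:
        for i, thread in enumerate(threads):
            if i not in used and contains(thread, pattern):
                used.add(i)
                break
        else:
            return False  # no unused thread matches this pattern
    return True

With this check, the pattern [["func1", "__ERROR__"], ["func2", "__ERROR__"]] would not match a trace whose second thread contains neither function.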

Failure when the aspect file is not set

If CIF is used as a preprocessor, it requires an aspect file. If none is set, None is used, and the preparator later fails because a NoneType object ends up in the cif_args list, with a confusing warning message. It would be better to fail earlier when CIF is used without an aspect.

A more practical solution is to generate an empty aspect file with a warning such as "no aspect is set, using an empty one". The empty aspect would look like this:

before: file ("$this")
{
}
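A hypothetical helper implementing this: if no aspect is configured, write the empty aspect shown above to a temporary file and warn.

import logging
import tempfile

EMPTY_ASPECT = 'before: file ("$this")\n{\n}\n'

def ensure_aspect(aspect_path):
    # Fall back to an empty aspect instead of passing None to CIF.
    if aspect_path:
        return aspect_path
    logging.warning("no aspect is set, using an empty one")
    aspect_file = tempfile.NamedTemporaryFile(
        mode="w", suffix=".aspect", delete=False)
    aspect_file.write(EMPTY_ASPECT)
    aspect_file.close()
    return aspect_file.name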

Implement CI infrastructure

As the project develops, it will be useful to have some CI. The main stages may be:

  • Build stage: check that all external tools can be downloaded and built; also check the CV deployment script.
  • Checks stage: run code style and static analysis tools for Python.
  • Unit tests stage: run simple tests to check configurations and the basic functionality of different components.
  • Integration tests stage: run complicated tests.

Implement CV as a Python package

A package should be easier to support. Requirements can be declared in it and installed/updated automatically. The package version may help to identify CV launches. The deployment procedure should also become easier. This needs some thought; see setuptools and the sketch below.
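A minimal setup.py sketch (the package name, version and dependency list are assumptions, not actual CV metadata):

from setuptools import find_packages, setup

setup(
    name="cv",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        # dependencies to be installed/updated automatically, e.g.:
        # "requests", "PyYAML",
    ],
)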

Hanging in case of uploader failure

If the uploader cannot establish a connection to the server (for example, openvpn is down), it hangs:

Launcher: DEBUG: Processing results
Launcher: INFO: Preparing report on launches into file: '/home/alpha/git/cv/results/report_launches_sync_develop_2023_04_26_09_54_41.csv'
Launcher: INFO: Preparing report on components into file: '/home/alpha/git/cv/results/report_components_sync_develop_2023_04_26_09_54_41.csv'
Launcher: INFO: Preparing short report into file: '/home/alpha/git/cv/results/short_report_sync_develop_2023_04_26_09_54_41.csv'
Launcher: INFO: Exporting results into archive: '/home/alpha/git/cv/results/results_sync_develop_2023_04_26_09_54_41.zip'
Exporter: DEBUG: Wall time: 1.03s
Exporter: DEBUG: CPU time: 1008.6s
Exporter: DEBUG: Memory usage: 7322Mb
Exporter: INFO: Exporting results has been completed
Launcher: INFO: Uploading results into server 10.10.2.179:8989 with identifier 2
Launcher: DEBUG: Using name 'sync:races 2023_04_26_09_54_43' for uploaded report

And nothing changes after 30 minutes, even if I then connect to openvpn.

After interrupting it I get the following waiting stack:

Traceback (most recent call last):
  File "/home/alpha/git/cv/scripts/components/launcher.py", line 268, in _upload_results
    subprocess.check_call(command, shell=True)
  File "/usr/lib/python3.10/subprocess.py", line 364, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/usr/lib/python3.10/subprocess.py", line 347, in call
    return p.wait(timeout=timeout)
  File "/usr/lib/python3.10/subprocess.py", line 1207, in wait
    return self._wait(timeout=timeout)
  File "/usr/lib/python3.10/subprocess.py", line 1941, in _wait
    (pid, sts) = self._try_wait(0)
  File "/usr/lib/python3.10/subprocess.py", line 1899, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt
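A possible fix is to bound the blocking call with a timeout instead of waiting forever (a sketch; the timeout value and wrapper name are assumptions):

import logging
import subprocess

logger = logging.getLogger("Launcher")
UPLOAD_TIMEOUT = 600  # seconds, assumed value

def upload_results(command):
    # Run the upload command, but give up with a clear error instead
    # of hanging when the server is unreachable.
    try:
        subprocess.check_call(command, shell=True, timeout=UPLOAD_TIMEOUT)
        return True
    except subprocess.TimeoutExpired:
        logger.error("Uploading timed out; is the server reachable?")
    except subprocess.CalledProcessError as error:
        logger.error("Uploader failed with exit code %d", error.returncode)
    return False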

Support for notes

  1. Show the CPAchecker comment in addition to the code, since it is additional information about the code rather than a replacement for it; the exception is model LDV and EMG comments (and which others?).
  2. CPAchecker emits hide=false for all comments (this is most likely wrong).
  3. The last line cannot be clicked (add that).
  4. Show hide=false notes in a slightly different color.
  5. Support several notes on one line (of the same or different level).

Problem with benchmark XML definition

The benchmark processing script searches for 'benchmark.xml' only in the current directory, even if I set it in the configuration via the "benchmark file" option. So for now I have to create a link to it. It seems the option is simply ignored; the sketch below shows the expected behavior.
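A sketch of the expected behavior (the option and helper names follow the issue text but are assumptions about the configuration layout): honor the configured path and fall back to the current directory only when the option is absent.

import os

def resolve_benchmark_file(config):
    # Use the configured "benchmark file" if present, otherwise fall
    # back to benchmark.xml in the current directory.
    path = config.get("benchmark file", "benchmark.xml")
    if not os.path.isfile(path):
        raise FileNotFoundError("benchmark definition not found: %s" % path)
    return path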

Update CIL

Currently there are two old prebuilt versions of CIL. We need to try newer versions, which can also be built from source.

Investigate an option to run CV in Docker

It would be useful to provide a Docker container with a ready-to-use CV. A directory with sources may be mounted externally. The main question is whether there are any problems with resource limits.

We need to prepare a Docker image and run a test launch.

Deployment procedure should be refactored

Deployment is currently complicated and duplicated. Scripts and configurations should be used directly from the repository, at least under some debug option; now we have to modify scripts in the deploy directory.

Plugins also should not be duplicated in the deployment directory: they should be submodules or symlinks.

Support verification of several projects

The task is large, so it is better to discuss it first.

The idea is to be able to check several projects in one run, for example the operation of several applications in an OS. So we need to build the projects independently, then merge the build commands and construct a single CIL file. After that we add an entry point for the applications and run the verifier.

The key challenge is the separate configurations of the projects: we will likely have to store different patches or entry points from the different plugins and then combine them.

Update CIF

Currently the link to the CIF archive is broken; it needs to be updated.

Support CIF comments

  1. /* CIF Original function "kzalloc". Instrumenting function "kzalloc". */
     is replaced by "Instrumented function 'kzalloc'" in visualization.

  2. "LDV model 'kzalloc'": where does it come from?

Missing line in table

A line is expected between the "Дата начала решения" (solution start date) and "Реальные ошибки" (real errors) columns, but somehow it is missing.


Unify rule specifications in CV

Currently there are several formats of rules: sync rules, SMG, unreach. We need to unify their descriptions in order to derive:

  1. the specification automaton;
  2. the verifier configuration;
  3. the main generation strategy;
  4. the coverage strategy;
  5. the model files.

Hanging in case of an incorrect path to sources

If I set an incorrect path to the sources in the source dir option, the launcher prints

Launcher: ERROR: Source directory '/home/alpha/work/ose/~/work/ose/lg_bku_pr_v1/DISP/' does not exist

But it does not finish execution and continues waiting in

File "/home/alpha/git/cv/scripts/components/full_launcher.py", line 661, in launch
    sleep(BUSY_WAITING_INTERVAL * 10)

It would be better to terminate the launch in this case, as sketched below.
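A sketch of the suggested fail-fast behavior (the helper name is hypothetical; the message follows the log above): check the directory up front and terminate instead of entering the busy-waiting loop.

import os
import sys

def check_source_dir(source_dir):
    # Fail fast: terminate the launch instead of busy-waiting when the
    # source directory does not exist.
    if not os.path.isdir(source_dir):
        sys.exit("Source directory '%s' does not exist" % source_dir)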

Entrypoint config for races is inverted

Currently, entry points are generated for races only if the corresponding race option is false. It should be the other way around. All configurations need to be updated; see the sketch below.
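The fix amounts to dropping the negation (a sketch; the option name follows the issue text, the config access is an assumption):

def should_generate_race_entrypoints(config):
    # Buggy behavior: return not config.get("race", False)
    # Intended behavior: generate race entry points when the option is set.
    return config.get("race", False)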

Show mark statuses on a page with traces

When we report bugs to developers, it is useful to set a "reported" status on a mark. However, the statuses are shown neither on the main page with all unsafes nor on the page with traces, so to find an unreported bug one has to open every bug and check its status. It would be better to show the statuses, as is done for tags.
