vas-group-imperial / verinet

The VeriNet toolkit for verification of neural networks

License: Other

Shell 0.45% Python 99.55%
neural-network deep-learning verification robustness safeai adversarial-attacks

verinet's Introduction

VeriNet

The VeriNet toolkit is a state-of-the-art sound and complete symbolic-interval-propagation-based toolkit for the verification of neural networks. VeriNet won second place overall and was the best-performing toolkit not using GPUs in the 2nd International Verification of Neural Networks Competition (VNN-COMP). VeriNet is developed at the Verification of Autonomous Systems (VAS) group, Imperial College London.

Relevant Publications:

VeriNet is developed as part of the following publications:

Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search

DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis

This version of VeriNet subsumes the VeriNet toolkit published in the first paper and the DeepSplit toolkit published in the second paper.

Installation:

All dependencies can be installed via Pipenv.

VeriNet depends on the Xpress solver, which can solve smaller problems without a license; however, larger problems (networks with more than approximately 5000 nodes) require a license. Free academic licenses can be obtained at: https://content.fico.com/l/517101/2018-06-10/3fpbf

We recommend installing the developer dependencies for some extra optimisations during the loading of onnx models. These can be installed by running pipenv with the --dev option: $ pipenv install --dev.

Usage:

Models:

VeriNet supports loading models in onnx format or custom models created with the VeriNetNN class, a subclass of torch.nn.Module.

Loading onnx models:

Onnx models can be loaded as follows:

from verinet.parsers.onnx_parser import ONNXParser

onnx_parser = ONNXParser(onnx_model_path, input_names=("x",), transpose_fc_weights=False, use_64bit=False)
model = onnx_parser.to_pytorch()
model.eval()

The first argument is the path of the onnx file; input_names should be a tuple containing the input-variable name as stored in the onnx model; if transpose_fc_weights is true, the weight matrices of fully-connected layers are transposed; if use_64bit is true, the parameters of the model are stored as torch.DoubleTensors instead of torch.FloatTensors.
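
As a quick sanity check, the parsed model behaves like any other torch.nn.Module and can be run on a dummy input. The following is a minimal sketch; the file path and the (1, 1, 28, 28) input shape are placeholder assumptions for an MNIST-style network, and the exact return format of the forward pass may depend on the model:

import torch
from verinet.parsers.onnx_parser import ONNXParser

# Parse the onnx file into a VeriNetNN model (a torch.nn.Module subclass).
onnx_parser = ONNXParser("model.onnx", input_names=("x",), transpose_fc_weights=False, use_64bit=False)
model = onnx_parser.to_pytorch()
model.eval()

# Forward pass on a dummy input to check that parsing succeeded.
# The shape (1, 1, 28, 28) is a placeholder; use your model's actual input shape.
dummy_input = torch.zeros((1, 1, 28, 28))
with torch.no_grad():
    print(model(dummy_input))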

Custom models:

The following is a simple example of a VeriNetNN model with two inputs, one fully-connected layer, one ReLU layer and two outputs:

import torch.nn as nn
from verinet.neural_networks.verinet_nn import VeriNetNN, VeriNetNNNode

nodes = [VeriNetNNNode(idx=0, op=nn.Identity(), connections_from=None, connections_to=[1]),
         VeriNetNNNode(idx=1, op=nn.Linear(2, 2), connections_from=[0], connections_to=[2]),
         VeriNetNNNode(idx=2, op=nn.ReLU(), connections_from=[1], connections_to=[3]),
         VeriNetNNNode(idx=3, op=nn.Identity(), connections_from=[2], connections_to=None)]

model = VeriNetNN(nodes)

VeriNetNN takes as input a list of nodes (note that 'nodes' here do not correspond to neurons; each node may contain multiple neurons), where each node has the following parameters:

  • idx: A unique node index, sorted topologically with respect to the connections.
  • op: The operation performed by the node. All operations defined in verinet/neural_networks/custom_layers.py, as well as nn.ReLU, nn.Sigmoid, nn.Tanh, nn.Linear, nn.Conv2d, nn.AvgPool2d, nn.Identity, nn.Reshape, nn.Transpose and nn.Flatten, are supported.
  • connections_from: A list of the nodes whose outputs are used as input to this node. Note that having more than one incoming connection (residual connections) is only supported for nodes with the AddDynamic op defined in custom_layers.py.
  • connections_to: A list of the nodes whose inputs depend on this node's output (the counterpart of connections_from).

The first and last layer should be nn.Identity nodes. BatchNorm2d and MaxPool2d operations can be implemented by saving the model to onnx and reloading it, as the onnx parser automatically attempts to convert these to equivalent Conv2d and ReLU layers.
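
Since VeriNetNN is a subclass of torch.nn.Module, the example model above can be sanity-checked with an ordinary forward pass. This is a minimal sketch; the input values are arbitrary and the exact return format of VeriNetNN's forward method may differ:

import torch
import torch.nn as nn
from verinet.neural_networks.verinet_nn import VeriNetNN, VeriNetNNNode

# The same two-input, two-output network as in the example above.
nodes = [VeriNetNNNode(idx=0, op=nn.Identity(), connections_from=None, connections_to=[1]),
         VeriNetNNNode(idx=1, op=nn.Linear(2, 2), connections_from=[0], connections_to=[2]),
         VeriNetNNNode(idx=2, op=nn.ReLU(), connections_from=[1], connections_to=[3]),
         VeriNetNNNode(idx=3, op=nn.Identity(), connections_from=[2], connections_to=None)]
model = VeriNetNN(nodes)
model.eval()

# Forward pass on a single two-dimensional input point (batch size 1).
x = torch.tensor([[0.5, -0.5]])
with torch.no_grad():
    print(model(x))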

Verification Objectives:

VeriNet supports verification objectives in the VNN-COMP'21 vnnlib format and custom objectives.

Vnnlib:

VeriNet supports vnnlib files formatted as described in the following discussion: stanleybak/vnncomp2021#2. The files can be loaded as follows:

from verinet.parsers.vnnlib_parser import VNNLIBParser

vnnlib_parser = VNNLIBParser(vnnlib_path)
objectives = vnnlib_parser.get_objectives_from_vnnlib(model, input_shape)

The vnnlib_path parameter should be the path of the vnnlib file, model is the VeriNetNN model as discussed above, and input_shape is a tuple describing the shape of the model's input without the batch dimension (e.g. (784, ) for flattened MNIST, (1, 28, 28) for MNIST images and (3, 32, 32) for CIFAR-10 images).

Custom objectives:

The following is an example of how a custom verification objective for classification problems can be encoded (the correct output is at least as large as all other outputs):

from verinet.verification.objective import Objective

objective = Objective(input_bounds, output_size=10, model=model)
out_vars = objective.output_vars
for j in range(objective.output_size):
    if j != correct_output:
        # noinspection PyTypeChecker
        objective.add_constraints(out_vars[j] <= out_vars[correct_output])

Here input_bounds is an array of shape (*network_input_shape, 2), where network_input_shape is the input shape of the network (without the batch dimension) and the last dimension contains the lower bounds at position 0 and the upper bounds at position 1.
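
For example, an l-infinity ball of radius epsilon around a flattened MNIST-style input can be encoded as follows. This is a minimal sketch; the input point, epsilon and the clipping to [0, 1] (valid pixel range) are placeholder assumptions:

import numpy as np

# Placeholder input point and perturbation radius for a flattened 784-dimensional input.
image = np.zeros(784, dtype=np.float32)  # replace with the actual input point
epsilon = 0.05

# Shape (*network_input_shape, 2): lower bounds at index 0, upper bounds at index 1.
input_bounds = np.zeros((784, 2), dtype=np.float32)
input_bounds[..., 0] = np.clip(image - epsilon, 0, 1)  # lower bounds
input_bounds[..., 1] = np.clip(image + epsilon, 0, 1)  # upper bounds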

Note that the verification objective encodes what it means for the network to be Safe/Robust. And-type constraints can be encoded by calling objective.add_constraints for each and-clause, while or-type constraints can be encoded with '|' (e.g. (out_vars[0] < 1) | (out_vars[0] < out_vars[1])).
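
For instance, an objective combining one and-type clause with one or-type clause might be encoded as in the following sketch (input_bounds as constructed above; output_size and model are placeholders):

from verinet.verification.objective import Objective

objective = Objective(input_bounds, output_size=2, model=model)
out_vars = objective.output_vars

# And-type: each call to add_constraints adds another clause that must hold.
objective.add_constraints(out_vars[0] <= 1)

# Or-type: the property holds if at least one of the disjuncts holds.
objective.add_constraints((out_vars[0] < 1) | (out_vars[0] < out_vars[1]))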

Verification:

After defining the model and objective as described above, verification is performed by using the VeriNet class as follows:

from verinet.verification.verinet import VeriNet

solver = VeriNet(use_gpu=True, max_procs=None)
status = solver.verify(objective=objective, timeout=3600)

The parameters of VeriNet, use_gpu and max_procs, determine whether to use the GPU and the maximum number of processes, respectively (max_procs = None automatically determines the number of processes based on the available cores).

The parameters of solver.verify are the objective, as discussed above, and the timeout in seconds. Note that it is recommended to keep the solver alive instead of creating a new object for every call, to reduce overhead.

After each verification run, the number of branches explored and the maximum depth reached are stored in solver.branches_explored and solver.max_depth, respectively. If the objective is determined to be unsafe/not robust, a counter-example is stored in solver.counter_example.

At the end of each run, status will be either Status.Safe, Status.Unsafe, Status.Undecided or Status.Underflow. Safe means that the property is robust, Unsafe means that a counter-example was found, Undecided means that the solver timed out, and Underflow means that an error was encountered, most likely due to floating-point precision.
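
Putting the pieces together, a typical end-to-end run (load an onnx model, parse a vnnlib property and verify each objective) may look like the following sketch. The file paths and input shape are placeholders, and checking solver.counter_example against None is an assumption about how an absent counter-example is represented:

from verinet.parsers.onnx_parser import ONNXParser
from verinet.parsers.vnnlib_parser import VNNLIBParser
from verinet.verification.verinet import VeriNet

# Placeholder paths and input shape (here: flattened MNIST).
onnx_model_path = "model.onnx"
vnnlib_path = "property.vnnlib"
input_shape = (784,)

# Load the network.
onnx_parser = ONNXParser(onnx_model_path, input_names=("x",), transpose_fc_weights=False, use_64bit=False)
model = onnx_parser.to_pytorch()
model.eval()

# Load the verification objectives.
vnnlib_parser = VNNLIBParser(vnnlib_path)
objectives = vnnlib_parser.get_objectives_from_vnnlib(model, input_shape)

# Reuse a single solver object for all objectives to reduce overhead.
solver = VeriNet(use_gpu=True, max_procs=None)
for objective in objectives:
    status = solver.verify(objective=objective, timeout=3600)
    print(f"Status: {status}, branches explored: {solver.branches_explored}, max depth: {solver.max_depth}")
    if solver.counter_example is not None:
        print(f"Counter-example: {solver.counter_example}")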

Advanced usage:

Environment variables:

The .env file contains some environment variables that are automatically set if pipenv is used; if pipenv is not used, make sure to export these variables manually.

Config file:

The config file in verinet/util/config.py contains several advanced settings. Of particular interest are the following:

  • PRECISION: (32 or 64) The floating point precision used in SIP. Note that this does not affect the precision of the model itself, which can be adjusted in the ONNXParser as discussed above.
  • MAX_ESTIMATED_MEM_USAGE: The maximum estimated memory usage allowed in SIP. Can be lowered to reduce memory requirements at the cost of computational performance.
  • USE_SSIP and STORE_SSIP_BOUNDS: Perform pre-processing with a lower-cost SIP variant. Should be enabled if the input dimensionality is significantly smaller than the size of the network (e.g. fewer than 50 input nodes with more than 10k ReLU nodes); see the sketch below for how these settings might be adjusted.
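
The following is a hypothetical sketch of adjusting these settings programmatically, assuming verinet/util/config.py exposes them as attributes of a CONFIG class; the exact layout (and the unit of MAX_ESTIMATED_MEM_USAGE) may differ, in which case editing the file directly achieves the same effect:

# Hypothetical sketch: the exact structure of verinet/util/config.py may differ,
# in which case the values can simply be edited in the file itself.
from verinet.util.config import CONFIG

CONFIG.PRECISION = 64                       # floating-point precision used in SIP (32 or 64)
CONFIG.MAX_ESTIMATED_MEM_USAGE = 8 * 10**9  # cap on estimated SIP memory usage (unit assumed to be bytes)
CONFIG.USE_SSIP = True                      # enable the lower-cost SSIP pre-processing
CONFIG.STORE_SSIP_BOUNDS = True             # store the bounds computed by SSIP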

Contact:

[email protected]

Authors:

Patrick Henriksen: [email protected]
Alessio Lomuscio.

verinet's People

Contributors

marcelbulpr, pat676


verinet's Issues

Onnx Gather format supported?

Hi,

We have recently tried to use VeriNet to run some certifications on a neural network, but encountered a problem: it fails to identify nodes with op_type Gather (the nodes are not recognised, which raises a KeyError).

Are the following functions supported by VeriNet: torch.cat, torch.chunk and index slicing? If they are not supported but feasible, could you point me to some guidance on implementing them?

Best,
Calvin

Parsing dubinsrejoin.onnx (VNN-Comp 2022)

When running VeriNet on the dubins_rejoin instances of VNN-Comp 2022 using
benchmark(result_path, benchmark_path, instances_csv_path, input_shape=(8,), transpose_fc_weights=False), I get the following error:

ValueError: Expected at least one connection to non-input node: VeriNetNNNode(idx: 1, op: Linear(in_features=256, out_features=8, bias=True), to: [2], from: [])

The error gets thrown at VeriNet/verinet/sip_torch/sip.py, line 337, in _process_node.

Parsing for other networks works fine.

Thanks in advance for your help.

Counter example ignoring input bounds

I am currently trying to make dnnv use this version of Verinet.
In dnnv, the constraints are set up in a way that the resulting output that Verinet sees is just a bool, so I am currently doing something like this:

solver = VeriNet(use_gpu=True, max_procs=None)
objective = Objective(input_bounds, output_size=1, model=model)
objective.add_constraints(objective.output_vars[0] <= 1)
objective.add_constraints(objective.output_vars[0] >= 1)

However, VeriNet always produces a counterexample in the first branch explored (maximum depth reached: 0) which lies outside the specified bounds.
I checked that input_bounds has the correct shape and the expected values (-e, +e) for every entry.

I am sure that I am missing something here, any help would be greatly appreciated.

Outdated Pipfile.lock

The Pipfile.lock is outdated and not all packages are still available in the public repositories, so the installation fails.
Ignoring the file does not work either, since the most up-to-date version of xpress causes problems at runtime.
Additionally, dnnv requires onnx<1.11 for both version 0.5.1 and the most up-to-date one.
Lastly, new versions of protobuf throw the following error.

TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

It would be important to have an up-to-date lock file available for ease of use.

Running VeriNet on cluster

Currently I am trying to run VeriNet on my cluster, with this simple code block following the readme tutorial:

from verinet.parsers.onnx_parser import ONNXParser
from verinet.parsers.vnnlib_parser import VNNLIBParser
from verinet.verification.verinet import VeriNet


# we can load our onnx networks
onnx_parser = ONNXParser(model_path)
model = onnx_parser.to_pytorch()
model.eval()

# now we need to import vnnlib
vnnlib_parser = VNNLIBParser(vnnlib_path)
objectives = vnnlib_parser.get_objectives_from_vnnlib(model, (784,))

However, this error occurs:

  File "/software/python3.7.15/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/software/python3.7.15/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/software/python3.7.15/lib/python3.7/multiprocessing/managers.py", line 597, in _run_server
    server.serve_forever()
  File "/software/python3.7.15/lib/python3.7/multiprocessing/managers.py", line 173, in serve_forever
    sys.exit(0)
SystemExit: 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/software/python3.7.15/lib/python3.7/multiprocessing/util.py", line 300, in _run_finalizers
    finalizer()
  File "/software/python3.7.15/lib/python3.7/multiprocessing/util.py", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/software/python3.7.15/lib/python3.7/multiprocessing/util.py", line 133, in _remove_temp_dir
    rmtree(tempdir)
  File "/software/python3.7.15/lib/python3.7/shutil.py", line 494, in rmtree
    _rmtree_safe_fd(fd, path, onerror)
  File "/software/python3.7.15/lib/python3.7/shutil.py", line 452, in _rmtree_safe_fd
    onerror(os.unlink, fullname, sys.exc_info())
  File "/software/python3.7.15/lib/python3.7/shutil.py", line 450, in _rmtree_safe_fd
    os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs00000000005bce20000055be'

I was wondering if anyone has ever seen this issue and knows how to handle it.

Parsing of VNN-Comp 2022 instances

I tried to run VeriNet on the dubins_rejoin instances of this year's VNN-Comp using
benchmark(result_path, benchmark_path, instances_csv_path, input_shape=(8,), transpose_fc_weights=False), but the parsing of the vnnlib-properties failed.

For dubinsrejoin_case_safe_0.vnnlib the following error is thrown:

ValueError: And-expression: ['(<=', 'Y_1', 'Y_0)(<=', 'Y_2', 'Y_0)(<=', 'Y_3', 'Y_0)(<=', 'Y_4', 'Y_6)(<=', 'Y_5', 'Y_6)(<=', 'Y_7', 'Y_6'] not recognised

How can I fix this?
Thank you.

Issues with VeriNet when handling ONNX model outputs

Dear VeriNet Team,

I hope this message finds you well. I am currently using VeriNet to analyze an ONNX model that I have converted from a PyTorch model. The model is a simple neural network that outputs a tuple of two tensors, action and value, from its forward method.

When I run the model through VeriNet, it seems to be interpreting the output of the model as a list of None values, rather than a tuple of tensors. This is causing issues when VeriNet tries to create an Objective instance and perform robustness analysis, as it is expecting the output size of the model to be non-zero.

By the way, I have trained the model using Proximal Policy Optimization (PPO). My model expects two separate inputs (lidar and state); the input dimension is 340.

Here is the relevant code from my model's forward method:

def forward(self, x):
    policy = self.policy_net(x)
    value = self.value_net(x)

    action = self.action_layer(policy)
    value = self.value_layer(value)

    return action, value

And here is the output I'm seeing when I run the model through VeriNet:

Outputs: [None, None, None, None, None, None, None, None, None, None]
Outputs are None or not tensors
The output size of the model is: 0
Cannot create Objective instance because output_size is 0
Cannot perform robustness analysis because output_size is 0
Cannot check robustness because output_size is 0

I have verified that the model's forward method is working correctly and returning a tuple of tensors when run outside of VeriNet. I have also checked the model's state and the inputs it's receiving, and everything seems to be in order.

I would appreciate any guidance you could provide on this issue. Is there a specific way I should be structuring the output of my model to ensure compatibility with VeriNet? Or could this be an issue with how VeriNet is handling the output of ONNX models?

Thank you for your time and assistance.

Best regards,
