vas-group-imperial / verinet
The VeriNet toolkit for verification of neural networks
License: Other
The Pipfile.lock is outdated and some of its pinned packages are no longer available in the public repositories, so the installation fails.
Ignoring the lock file does not work either, since the most up-to-date version of xpress causes problems at runtime.
Additionally, dnnv requires onnx<1.11, both in version 0.5.1 and in the most up-to-date one.
Lastly, new versions of protobuf throw the following error.
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
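Until the lock file is regenerated, a workaround matching point 1 above is to pin protobuf in the Pipfile; a minimal fragment (the exact upper bound is an assumption based on the error message, adjust to your environment):

```toml
[packages]
# Pin below 4.x so *_pb2.py files generated with protoc < 3.19 keep loading.
protobuf = "<=3.20.3"
```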
An up-to-date lock file would make installation much easier.
Currently I am trying to run VeriNet on my cluster, with this simple code block following the README tutorial:
from verinet.parsers.onnx_parser import ONNXParser
from verinet.parsers.vnnlib_parser import VNNLIBParser
from verinet.verification.verinet import VeriNet

# Load the ONNX network and convert it to PyTorch.
onnx_parser = ONNXParser(model_path)
model = onnx_parser.to_pytorch()
model.eval()

# Parse the VNNLIB property into verification objectives.
vnnlib_parser = VNNLIBParser(vnnlib_path)
objectives = vnnlib_parser.get_objectives_from_vnnlib(model, (784,))
However, this error occurs:
File "/software/python3.7.15/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/software/python3.7.15/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/software/python3.7.15/lib/python3.7/multiprocessing/managers.py", line 597, in _run_server
server.serve_forever()
File "/software/python3.7.15/lib/python3.7/multiprocessing/managers.py", line 173, in serve_forever
sys.exit(0)
SystemExit: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/software/python3.7.15/lib/python3.7/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/software/python3.7.15/lib/python3.7/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/software/python3.7.15/lib/python3.7/multiprocessing/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/software/python3.7.15/lib/python3.7/shutil.py", line 494, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/software/python3.7.15/lib/python3.7/shutil.py", line 452, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/software/python3.7.15/lib/python3.7/shutil.py", line 450, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs00000000005bce20000055be'
I was wondering if anyone has ever seen this issue and knows how to handle it.
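The `.nfs...` file is an NFS artifact: some process still holds an open handle inside the temporary directory when multiprocessing tries to delete it at shutdown. One common mitigation on clusters is to point Python's temporary directory at node-local storage instead of the NFS mount before starting VeriNet; a minimal sketch (the `/tmp` path is an assumption, substitute your cluster's local scratch directory):

```python
import os
import tempfile

# Redirect temporary files to node-local storage so multiprocessing's
# cleanup does not race against NFS's silly-rename (.nfsXXXX) behaviour.
os.environ["TMPDIR"] = "/tmp"   # assumption: /tmp is local on the compute node
tempfile.tempdir = None         # force tempfile to re-read TMPDIR

print(tempfile.gettempdir())
```

On batch systems this can also be done in the job script by exporting TMPDIR before invoking Python.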
I tried to run VeriNet on the dubins_rejoin instances of this year's VNN-Comp using
benchmark(result_path, benchmark_path, instances_csv_path, input_shape=(8,), transpose_fc_weights=False)
but the parsing of the vnnlib properties failed.
For dubinsrejoin_case_safe_0.vnnlib the following error is thrown:
ValueError: And-expression: ['(<=', 'Y_1', 'Y_0)(<=', 'Y_2', 'Y_0)(<=', 'Y_3', 'Y_0)(<=', 'Y_4', 'Y_6)(<=', 'Y_5', 'Y_6)(<=', 'Y_7', 'Y_6'] not recognised
How can I fix this?
Thank you.
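The token list in the error suggests the parser splits on whitespace, so adjacent clauses written as `(<= Y_1 Y_0)(<= Y_2 Y_0)` with no space between `)` and `(` get fused into single tokens. A minimal preprocessing sketch that inserts the missing space before parsing (this is a guess at the cause, not a confirmed fix):

```python
import re

def add_clause_spacing(vnnlib_text: str) -> str:
    """Insert a space between back-to-back parentheses so a
    whitespace-based tokenizer sees ')' and '(' as separate tokens."""
    return re.sub(r"\)\(", ") (", vnnlib_text)

print(add_clause_spacing("(and (<= Y_1 Y_0)(<= Y_2 Y_0))"))
# -> (and (<= Y_1 Y_0) (<= Y_2 Y_0))
```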
When running VeriNet on the dubins_rejoin instances of VNN-Comp 2022 using
benchmark(result_path, benchmark_path, instances_csv_path, input_shape=(8,), transpose_fc_weights=False)
I get the following error:
ValueError: Expected at least one connection to non-input node: VeriNetNNNode(idx: 1, op: Linear(in_features=256, out_features=8, bias=True), to: [2], from: [])
The error gets thrown at VeriNet/verinet/sip_torch/sip.py, line 337, in _process_node.
Parsing for other networks works fine.
Thanks in advance for your help.
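The message says node 1 has an empty `from` list, i.e. nothing feeds the first Linear layer after parsing. A minimal, self-contained sketch of the kind of connectivity check that fails here (the node structure is a simplified stand-in, not VeriNet's actual class):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    idx: int
    connections_to: list = field(default_factory=list)
    connections_from: list = field(default_factory=list)

def orphaned_nodes(nodes):
    """Return non-input nodes that no other node feeds into."""
    return [n for n in nodes if n.idx != 0 and not n.connections_from]

# Node 1 mirrors the failing case: to=[2], from=[] (no incoming edge).
graph = [Node(0, [1]), Node(1, [2]), Node(2, [], [1])]
print([n.idx for n in orphaned_nodes(graph)])  # -> [1]
```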
Dear VeriNet Team,
I hope this message finds you well. I am currently using VeriNet to analyze an ONNX model that I converted from a PyTorch model. The model is a simple neural network whose forward method returns a tuple of two tensors, action and value.
When I run the model through VeriNet, it seems to interpret the output of the model as a list of None values rather than a tuple of tensors. This causes issues when VeriNet tries to create an Objective instance and perform robustness analysis, as it expects the output size of the model to be non-zero.
By the way, I trained the model using Proximal Policy Optimization (PPO). It expects two separate inputs (lidar and state); the input dimension is 340.
Here is the relevant code from my model's forward method:
def forward(self, x):
    policy = self.policy_net(x)
    value = self.value_net(x)
    action = self.action_layer(policy)
    value = self.value_layer(value)
    return action, value
And here is the output I'm seeing when I run the model through VeriNet:
Outputs: [None, None, None, None, None, None, None, None, None, None]
Outputs are None or not tensors
The output size of the model is: 0
Cannot create Objective instance because output_size is 0
Cannot perform robustness analysis because output_size is 0
Cannot check robustness because output_size is 0
I have verified that the model's forward method is working correctly and returning a tuple of tensors when run outside of VeriNet. I have also checked the model's state and the inputs it's receiving, and everything seems to be in order.
I would appreciate any guidance you could provide on this issue. Is there a specific way I should be structuring the output of my model to ensure compatibility with VeriNet? Or could this be an issue with how VeriNet is handling the output of ONNX models?
Thank you for your time and assistance.
Best regards,
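A common workaround when a verifier expects a single output tensor is to wrap the network so forward returns one concatenated tensor instead of a tuple. A minimal sketch (whether VeriNet's ONNX path then accepts the model is untested; the layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class SingleOutputWrapper(nn.Module):
    """Wrap a model whose forward returns (action, value) so it
    returns a single tensor, which single-output tools can trace."""
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x):
        action, value = self.model(x)
        return torch.cat([action, value], dim=-1)

# Placeholder model with the same (action, value) output structure.
class TwoHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.action_layer = nn.Linear(4, 3)
        self.value_layer = nn.Linear(4, 1)
    def forward(self, x):
        return self.action_layer(x), self.value_layer(x)

wrapped = SingleOutputWrapper(TwoHead())
out = wrapped(torch.zeros(1, 4))
print(out.shape)  # torch.Size([1, 4])
```

The wrapped module can then be exported to ONNX in the usual way before handing it to the verifier.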
I am currently trying to make dnnv use this version of VeriNet.
In dnnv, the constraints are set up in a way that the resulting output VeriNet sees is just a bool, so I am currently doing something like this:
solver = VeriNet(use_gpu=True, max_procs=None)
objective = Objective(input_bounds, output_size=1, model=model)
objective.add_constraints(objective.output_vars[0] <= 1)
objective.add_constraints(objective.output_vars[0] >= 1)
However, VeriNet always produces a counterexample in the first branch explored (Maximum depth reached: 0), and that counterexample lies outside the specified bounds.
I checked that input_bounds has the correct shape and the expected values (-e, +e) for every entry.
I am sure that I am missing something here, any help would be greatly appreciated.
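Independent of the cause, it helps to check the returned counterexample against the bounds programmatically; a minimal sketch using NumPy (the variable names are assumptions about how the bounds and counterexample are stored):

```python
import numpy as np

def inside_bounds(cex, bounds):
    """bounds: array of shape (n, 2) with [lower, upper] per input."""
    return bool(np.all(cex >= bounds[:, 0]) and np.all(cex <= bounds[:, 1]))

eps = 0.1
bounds = np.tile([-eps, eps], (3, 1))        # [-eps, +eps] for each input
print(inside_bounds(np.array([0.0, 0.05, -0.09]), bounds))  # True
print(inside_bounds(np.array([0.0, 0.2, 0.0]), bounds))     # False
```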
Hi,
We have recently tried to use VeriNet to run some certifications on neural networks, but encountered a problem when it failed to identify nodes with op_type Gather (the nodes are not recognised, which raises a KeyError).
Are the following operations supported by VeriNet (torch.cat, torch.chunk, index slicing)? If they are not supported but feasible, can you point me to some help for implementing them?
Best,
Calvin
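As a quick diagnostic before handing a model to the tool, the graph can be scanned for op types the parser does not know; a minimal sketch over a generic (name, op_type) node list (the supported set here is illustrative, not VeriNet's actual list):

```python
SUPPORTED_OPS = {"Gemm", "Relu", "Conv", "Flatten", "Add"}  # illustrative only

def unsupported_ops(nodes):
    """Return the op types present in the graph but not in the supported set."""
    return sorted({op for _, op in nodes} - SUPPORTED_OPS)

graph = [("fc1", "Gemm"), ("relu1", "Relu"), ("idx", "Gather"), ("cat", "Concat")]
print(unsupported_ops(graph))  # -> ['Concat', 'Gather']
```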